By Jack Ganssle
One of my passions is sailing across long stretches of ocean on a small boat. In this hyperspeed age it's a silly obsession. I'm thrilled to average 5 knots (just under 6 MPH) over the course of a trip. A jet flies from here in Baltimore to Bermuda in two hours; by sailboat it's at least a week, sailing 24 hours per day.
Being so far from land and help, we carry an extensive set of spare parts so a simple failure doesn't turn deadly. Extra sails, diesel parts, plumbing, rigging, electrical supplies, and more: hundreds of items are stashed all over the boat. Before leaving on a long voyage I create a list of spares so I know exactly what's onboard and where it's all stowed.
I do the same thing after building an embedded product.
How much memory is unused? That's a critical thing to know. If it's 10 bytes, count on extreme maintenance costs. The smallest patch may consume weeks as one recodes, retests, and requalifies other areas of the program just to free up some flash. In most cases it's a simple matter to look at the link map to find the program's size. RAM may be harder to measure in an environment where memory is dynamically allocated and freed, but some tools, like the mem utility (free from http://c.snippets.org/browser.php), will help. And stacks and heaps don't grow forever; the developer must bound them, which means measuring their worst-case use.
What about real-time issues? Is your system 9% loaded, or 99%? Do you even profile your code to see how much time is consumed? Time is a resource just as important as RAM, ROM, or money. Run out, or run short, and maintenance costs will soar just as surely as they do when you're short on memory.
It's not hard to figure idle time. See http://www.embedded.com/showArticle.jhtml?articleID=187203692 for a couple of approaches.
I know one company that monitors CPU utilization with great precision, because their volumes are so high that the engineers are required to keep the processor loaded at 99% or better. Anything less and they figure a couple of transistors can come out, saving a microcent or two. When one considers products that optimize hardware to improve execution speed (like the tools for Altera's NIOS-II), you can't help but wonder if there's a market for a reverse optimizer: something that examines the code to minimize hardware needs. On an FPGA the only direct benefit (since the transistors are still there) would be to free up resources to add features.
At the end of a project do you know how much memory, and how many CPU cycles, are still free?