
Start Collecting Metrics Now

Summary: Start collecting metrics so you establish a baseline against which change can be measured.

How buggy is your code?

That's rather a bogus question, as it's completely context-free. What is being measured? At what phase of the project? What constitutes a bug?

But this is also a profoundly important question. Without this information it's impossible to know how your company compares to industry norms. Is your code in the top 1%... or is the team engaged in professional malpractice? Is your code getting worse or better?

How fast does your code run?

This question is less bogus than the one about bugs, though there are still uncertainties in how it's phrased. Execution times may span a range, and they may vary with external events. But this is engineering, not product design by divine intervention, and an engineer measures the behavior of the system quantitatively. Otherwise it's impossible to know if there is enough timing margin for reliable operation.
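By way of illustration, here's a minimal sketch of one way to get those numbers on a Cortex-M part, using the DWT cycle counter that CMSIS exposes. The header name, clock rate, and function names below are assumptions, not taken from any particular project:

    #include <stdint.h>
    #include "stm32f4xx.h"   /* your part's CMSIS device header (assumed) */

    #define CPU_HZ 100000000u   /* assumed 100 MHz core clock */

    /* Enable the Cortex-M DWT cycle counter. */
    static void dwt_init(void)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* turn on trace block */
        DWT->CYCCNT = 0;
        DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;            /* start counting */
    }

    /* Time one invocation of a routine, in CPU cycles. */
    static uint32_t measure_cycles(void (*routine)(void))
    {
        uint32_t start = DWT->CYCCNT;
        routine();
        return DWT->CYCCNT - start;  /* unsigned math handles counter wrap */
    }

    /* Microseconds = cycles / (CPU_HZ / 1000000). */

The old standby works just as well: raise a spare GPIO on entry to the routine, drop it on exit, and read the pulse width on a scope.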

Go to a manufacturing facility and you'll see the walls covered with charts and figures. Every controllable aspect of production is measured and optimized. Ask most engineering teams for any sort of metrics and you'll be met with blank stares.

Various pundits demand we collect data that's very hard to gather, tough to interpret, and that has little obvious correlation to implementing positive change. Too often that's used as an excuse to measure nothing.

One company told me they were able to cut 40% from their schedules when designing similar-sized products by adopting a couple of changes to their process. There are two important facts in that sentence: first, they're getting to market faster. Second, they know just how much things changed because they were fanatical about measurements.

Other teams could adopt a Harry Potter School of Magic process, but would have no real sense of the impact on, well, anything. There may be some vague sense that things are better, but a "vague sense" is not engineering.

Some popular process models have been adopted simply because the developers feel they get better results. Or management will knock down all of the walls because they heard - somewhere - that things improve. (I participated in one such experiment, with hard numbers, and that rumor, at least in this circumstance, was completely off-base.) None of this is scientific, business-like, or engineering; it's operating by gut instinct, usually unsupported by any sort of meaningful data.

In engineering, when someone says things are getting better, the logical rejoinder is "how much better, and what are the error bands?" There's much we cannot understand in any useful way unless it's expressed numerically. We need to be as scientific and methodical about firmware development as civil engineers are when they compute loads, or EEs are when they figure power dissipation. Admittedly, there's a lot of art to building software, art which in many cases defies measurement. But that doesn't mean nothing can or should be measured.
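To make "error bands" concrete, here's a small sketch that turns a handful of timing samples into a mean plus or minus a sample standard deviation; the numbers are made up for illustration:

    #include <math.h>
    #include <stdio.h>

    /* Mean and sample standard deviation of n measurements. */
    static void error_bands(const double *x, int n, double *mean, double *sd)
    {
        double sum = 0.0, sumsq = 0.0;
        for (int i = 0; i < n; i++) {
            sum += x[i];
            sumsq += x[i] * x[i];
        }
        *mean = sum / n;
        *sd = sqrt((sumsq - n * (*mean) * (*mean)) / (n - 1));
    }

    int main(void)
    {
        /* Hypothetical execution times, in microseconds. */
        const double usec[] = { 212.0, 208.5, 215.3, 210.1, 209.7 };
        double mean, sd;
        error_bands(usec, 5, &mean, &sd);
        printf("execution time: %.1f +/- %.1f usec\n", mean, sd);
        return 0;
    }

"211.1 +/- 2.7 usec" is a claim an engineer can check; "it seems faster" is not.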

When teams ask how they should go about improving their processes, I always tell them to start taking metrics - today. Create a benchmark against which changes can be evaluated.
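A benchmark doesn't have to be elaborate. As a sketch, something like this hypothetical accumulator is enough, updated wherever a quantity of interest (ISR latency, queue depth, stack headroom) is sampled:

    #include <stdint.h>

    /* A tiny metric accumulator: zero-initialize, then call
     * metric_update() at each sample point. All names are hypothetical. */
    typedef struct {
        uint32_t min, max, count;
        uint64_t sum;   /* wide enough that a mean can be computed later */
    } metric_t;

    static void metric_update(metric_t *m, uint32_t sample)
    {
        if (m->count == 0 || sample < m->min) m->min = sample;
        if (sample > m->max) m->max = sample;
        m->sum += sample;
        m->count++;
    }

Dump min, max, and sum/count over a debug port once in a while and file the numbers away; that record is the baseline later process changes get judged against.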

In a future article I'll list some metrics that have proven to be critical.

What do you think? Do you measure anything, and if so, what?

Published June 21, 2013