
Filters

Summary: A tiny impurity in the fuel can damage a diesel. The answer: filters.

Diesel engines are incredibly reliable, but they are extremely vulnerable to fuel contamination. So the little 27 HP diesel on my sailboat has 4 fuel filters: the first extracts water. Another uses centrifugal force to expel big particles. The next takes out matter bigger than 10 microns. A final 2 micron filter removes the tiny particles. The idea is to use multiple stages to remove all of the ugly stuff.

The same idea applies to writing firmware.

No matter how good you are, your code will have bugs. A defect-removal strategy that relies on a single stage of filtration is doomed to fail.

The average bit of firmware runs a 5 to 10% error rate post-compile. That is, after you clean up the syntax errors identified by the compiler, a 1000 line program will have 50 to 100 bugs. (I have seen organizations that do much better and, unfortunately, much worse.) These are typical numbers; companies with highly disciplined processes, like effective code inspections, fare much better.

Reviews of the requirements and design pull out lots of defects before they ever manifest as bugs in the code.

The compiler will identify syntax errors. That's another filtering mechanism. Static analyzers take the idea further, flagging entire classes of problems the compiler happily accepts.

(People complain about false positives from these tools. It takes time to learn to tame them, to make them well-behaved. Static analyzers are expensive. But not compared with the cost of developers' time.)
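
To make the filter concrete, here's a hypothetical sketch of the kind of defect that sails past the compiler but gets snagged by a static analyzer or an attentive reviewer. The names and values are invented for illustration; the point is that both defects are perfectly legal C.

#include <stdint.h>

#define N_SENSORS 8

static uint16_t readings[N_SENSORS];

/* Compiles without complaint on many toolchains, yet static analysis
   will flag both defects below. */
void capture_all(uint16_t (*read_sensor)(int channel))
{
    for (int i = 0; i <= N_SENSORS; i++)    /* off-by-one: writes past the array */
        readings[i] = read_sensor(i);
}

uint16_t max_reading(void)
{
    uint16_t max;                           /* never initialized */
    for (int i = 0; i < N_SENSORS; i++)
        if (readings[i] > max)              /* first comparison uses an indeterminate value */
            max = readings[i];
    return max;
}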

Code inspections can be among the most effective of all defect-removal filters. The reason? For one, there's the mantra of the open source movement: with enough eyeballs all bugs are shallow. That is, we're not good at catching our own errors. Secondly, inspected code is better than uninspected code, even before the review begins! No one wants to look like an idiot when his code is being reviewed, so disciplined developers work harder to get it right from the beginning.

Tests filter defects, too. But no test regimen is 100% effective; the data shows that most exercise only about half the code. This is one reason I'm not keen on Test-Driven Development. TDD has some great ideas, but when testing is the only filter, expect problems.
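
A toy example shows why coverage matters. The function and its test below are hypothetical, but the pattern is common: the happy-path test passes, the saturation branch never runs, and any defect hiding in that branch slides straight through the test filter.

#include <stdint.h>
#include <assert.h>

#define ADC_MAX 4095u

/* Convert a raw ADC reading to percent of full scale. */
uint16_t adc_to_percent(uint16_t raw)
{
    if (raw > ADC_MAX)                      /* saturation branch */
        raw = ADC_MAX;
    return (uint16_t)((raw * 100u) / ADC_MAX);
}

int main(void)
{
    assert(adc_to_percent(2048) == 50);     /* happy path only: half the branches never execute */
    return 0;
}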

The filters in my diesel engine get dirty and have to be replaced. I monitor them for strange behavior - why is there so much water in the fuel recently? This tank has been really dirty - maybe I should buy fuel elsewhere. Similarly, developers should monitor the effectiveness of their defect filters to ensure each is working well. Is one suddenly much less effective than it was? If we collect metrics we can understand what is going on and take corrective action. Without numbers the process is open-loop, out of control, and probably not terribly efficient.
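
As a rough illustration of closing the loop, here's one way to turn raw bug counts into per-filter efficiency numbers: treat a stage's efficiency as the fraction of the bugs still present that it actually removed. The stage names and counts here are hypothetical.

#include <stdio.h>

struct stage {
    const char *name;
    int caught;                             /* defects this filter removed */
};

int main(void)
{
    struct stage stages[] = {
        { "Code inspection",  42 },
        { "Unit test",        11 },
        { "Integration test",  9 },
        { "System test",       6 },
        { "Field (escapes)",   3 },         /* bugs no filter caught */
    };
    const int n = sizeof stages / sizeof stages[0];

    /* Bugs still present when a stage ran = what it caught plus
       everything caught later (including field escapes). */
    int remaining = 0;
    for (int i = 0; i < n; i++)
        remaining += stages[i].caught;

    for (int i = 0; i < n - 1; i++) {
        printf("%-18s caught %2d of %2d: %.0f%% efficient\n",
               stages[i].name, stages[i].caught, remaining,
               100.0 * stages[i].caught / remaining);
        remaining -= stages[i].caught;
    }
    return 0;
}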

It's also important to understand how our numbers compare with other companies. Are we world class or sub-par? Thanks to the brilliant Capers Jones, who surveyed 13,500 projects, we know where the industry stands. Here's the efficiency of different filters for three classes of companies:

                                        Lowest   Median   Highest
Requirements review (informal)            20%      30%       50%
Top-level design reviews (informal)       30%      40%       60%
Detailed functional design inspection     30%      65%       85%
Detailed logic design inspection          35%      65%       75%
Code inspection or static analysis        35%      60%       90%
Unit tests                                10%      25%       50%
New function tests                        20%      35%       65%
Integration tests                         25%      45%       60%
System test                               25%      50%       65%
External beta tests                       15%      40%       75%
CUMULATIVE EFFICIENCY                     75%      98%    99.99%
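
The idea behind the cumulative row is compounding: when each filter removes some fraction of the bugs that survived the previous stage, the survivors shrink multiplicatively. Here's a minimal sketch of that arithmetic; the starting bug count and the per-stage efficiencies are placeholders, not Jones' measured figures.

#include <stdio.h>

int main(void)
{
    double bugs = 75.0;                     /* ~7.5% of a 1000-line program, per the text */
    const double efficiency[] = {           /* placeholder per-stage efficiencies */
        0.30,   /* requirements review */
        0.40,   /* design review       */
        0.60,   /* code inspection     */
        0.25,   /* unit test           */
        0.45,   /* integration test    */
        0.50,   /* system test         */
    };
    const int stages = sizeof efficiency / sizeof efficiency[0];

    for (int i = 0; i < stages; i++)
        bugs *= 1.0 - efficiency[i];        /* survivors of this stage */

    printf("Escaped bugs: %.1f (cumulative efficiency %.1f%%)\n",
           bugs, 100.0 * (1.0 - bugs / 75.0));
    return 0;
}

With these placeholder numbers roughly 2 to 3 of the original 75 bugs escape to the field, a cumulative efficiency of about 96%: respectable, but shy of Jones' 98% median.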

So, we do have metrics for how effectively different classes of organizations remove bugs. Do you know your numbers?

Published November 12, 2014