By Jack Ganssle
Know Your Numbers
Do you know how many defects will be injected into the next product during development? Or how many of those your team will fix before releasing it to the customer?
If not… why not?
In this month’s issue of Crosstalk the always interesting Capers Jones writes: “All software managers and quality assurance personnel should be familiar with these measurements because they have the largest impact on software quality, cost, and schedule of any known measures.”
Jones is one of the world’s authorities on software engineering and software metrics. In the current article he makes it clear that teams that don’t track defect metrics are working inefficiently at best.
He suggests we track “defect potentials” and “defect removal efficiency.” Though the article defines the former term, a better description comes from his article “Software Benchmarking” in the October 1995 issue of IEEE Computer: “The defect potential of a software application is the total quantity of errors found in requirements, design, source code, user manuals, and bad fixes or secondary defects inserted as an accidental byproduct of repairing other defects.”
In other words, defect potential is the total of all mistakes injected into the product during development. Jones says the defect potential typically ranges between two and ten per function point, and for most organizations is about the number of function points raised to the 1.25 power.
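Jones’s rule of thumb can be sketched in a few lines of Python. The 500-function-point project size below is an invented example, not a figure from the article:

```python
# Sketch of Jones's rule of thumb: defect potential is roughly the
# function point count raised to the 1.25 power.

def defect_potential(function_points: float) -> float:
    """Estimated total defects injected during development."""
    return function_points ** 1.25

# Hypothetical 500-FP project:
fp = 500
print(f"{fp} function points -> ~{defect_potential(fp):.0f} potential defects")
```

For this example the result works out to roughly 4.7 defects per function point, comfortably inside the two-to-ten range Jones cites.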
Function points are only rarely used in the embedded industry; we think in terms of lines of code (LOC). Though the LOC metric is hardly ideal, we do our analysis with the metrics we have. Referring once again to Jones’ work, his November 1995 article “Backfiring: Converting Lines of Code to Function Points” in IEEE Computer claims one function point equals, on average, about 128 lines of C, plus or minus about 60.
“Defect removal efficiency” tells us what percentage of those flaws will be removed prior to shipping. In the US the average is an appalling 85%. But in private correspondence Jones provided me with figures suggesting embedded projects are by far the best of the lot, with an average defect removal efficiency of 95%, once truly huge (one-million-function-point) projects are excluded.
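Chaining the quoted figures together gives a back-of-the-envelope estimate of shipped defects for a LOC-counting embedded shop. The 100,000-line project size here is an invented example; the 128 LOC-per-function-point and 95% removal figures are the averages quoted above, not guarantees for any particular project:

```python
# Back-of-the-envelope shipped-defect estimate, chaining three averages:
#   LOC -> function points (about 128 lines of C per FP),
#   FP -> defect potential (FP ** 1.25),
#   potential -> shipped defects (1 - removal efficiency).

def shipped_defects(loc: int, loc_per_fp: float = 128.0,
                    removal_efficiency: float = 0.95) -> float:
    """Estimated defects remaining at release."""
    fp = loc / loc_per_fp
    potential = fp ** 1.25
    return potential * (1.0 - removal_efficiency)

# Hypothetical 100,000-line C project:
loc = 100_000
print(f"{loc} LOC -> ~{shipped_defects(loc):.0f} defects shipped to customers")
```

Even at the embedded industry’s best-in-class 95% efficiency, a project of this size ships a couple of hundred defects, which is exactly why knowing your numbers matters.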
Jones claims only 5% of software organizations know their numbers. Of the many hundreds of embedded companies I’ve worked with, only two or three track these metrics. Since defects are a huge cause of late projects, it seems reasonable to track them. And companies that don’t track defects can never know if they are best in class… or the very worst, which, in a glass-half-full way, suggests lots of opportunities to improve.
What’s your take? What metrics does your company track?