Summary: Testing is important. But it's just one quality gate.
The real value of tests is not that they detect bugs in the code but that they detect inadequacies in the methods, concentration, and skills of those who design and produce the code. - C.A.R. Hoare
100% test coverage is insufficient. 35% of the faults are missing logic paths. - Robert Glass, Facts and Fallacies of Software Engineering

A third of all software faults take more than 5000 execution-years to fail. Thus, testing is a lousy approach. - N.E. Adams, "Optimizing preventive service of software products," IBM Journal of Research and Development
Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don't improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software development techniques you use determine how many errors testing will find. If you want to lose weight, don't buy a new scale; change your diet. If you want to improve your software, don't test more; develop better. - Steve McConnell, Code Complete
Most teams rely on testing as a quality gate to the exclusion of all else. And that's a darn shame.
For one thing, testing alone does not work. Studies show that the usual test regime exercises only about half the code. It's tough to check deeply-nested ifs, exception handlers, and the like. Alas, all too often unit tests don't even check boundary conditions, perhaps out of fear that the code might break.
Typical embedded projects devote half the schedule to test and debugging. So does that mean the other half is, well, bugging? Shrinking the bugging part will both accelerate the schedule and produce a higher quality product.
We need to realize that bugs get injected in every phase of development, and that a decent process employs numerous quality gates, each of which finds defects. That starts early with a careful analysis of the requirements. It continues with developers thinking deeply about their design and code. Tools like Lint and static analyzers expose other classes of problems - before testing commences. Code inspections, done properly, reveal design errors early and cheaply. Test will surely turn up more problems, but most should have been found long before that time.
And a suitable metrics effort must be used to understand where the defects are coming from, and what processes are effective in eliminating the bugs. This is engineering, not art; engineers use numbers, graphs, and real data gathered empirically, to do their work. Sans metrics, we never have any idea how we stack up compared to industry standards. Sans metrics, we have no idea if we're getting better, or worse. Metrics are the cold and non-negotiable truth that reveals the strengths and weaknesses of our development efforts.
(The quality movement - which unfortunately seems to have bypassed software engineering - showed us the importance of taking measurements to understand, first, how we're doing, and second, the trend: the first derivative of the former. Metrics are a form of feedback that gives us insight into a process. Until software engineers embrace measurements, quality will be an ad hoc notion achieved sporadically.)
While most of us use testing almost exclusively to manage software quality, some teams use testing simply to confirm that their code is correct. They expect, and generally find, that at test time everything works.
And these teams tend to deliver faster than the rest of us.
Published February 20, 2012