By Jack Ganssle
I gave a talk at the recent Embedded Systems Conference which covered, among other subjects, code inspections.
Inspections are sure to get developers riled up. We all hate them. They go against our very notion of programming, in which we hunch over the keyboard while running the debugger through its paces, hoping to beat our code into submission. Then, the stuff is "ours;" we take an ownership interest in the code, and tolerate no criticism of it.
I hate inspections.
But they're efficient, and the point of building firmware is to achieve a business objective: to create working code as quickly and as accurately as possible. The stark facts show inspections to be the fastest way we know to find bugs.
A developer who attended the talk called this week and we had an interesting chat about the subject. Then two emails drifted in, both of which mirrored the phone conversation. The numbers varied, but here's the gist of the exchanges:
The short summary: These folks tried inspections. They didn't work.
I asked: How much code did you inspect?
Answer: Uh, about 500 lines. Maybe 700.
How did you do the inspection?
Answer: We tested the code and found around 20-30 or so bugs. Perhaps more. Then we inspected and found another, well, maybe like 10 to 15.
How long did debugging and inspecting take?
Answer: We're not really sure but not too long.
My response was this: The numbers are clearly meaningless. They're guesses. Unless you collect real metrics, this data, such as it is, tells us nothing.
How many bugs were left in the system? What makes you think you found them all? Do you know if you shipped with no bugs, or 100?
Were the ones the inspection found mission-critical bugs?
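Those questions aren't rhetorical; they can be answered with data. One standard approach, when two independent bug-finding passes (say, testing and inspection) have each produced a defect list, is the capture-recapture estimate borrowed from wildlife biology. The sketch below uses hypothetical counts, not numbers from the conversations above:

```python
# Lincoln-Petersen capture-recapture estimate of total defects.
# Requires two independent bug-finding passes and a count of the
# defects both passes found. All numbers here are hypothetical.

def estimate_total_defects(found_by_a, found_by_b, found_by_both):
    """Estimate total defects: N ~= (A * B) / overlap."""
    if found_by_both == 0:
        raise ValueError("need at least one defect found by both passes")
    return (found_by_a * found_by_b) / found_by_both

# Hypothetical: testing found 25 bugs, inspection found 12, with 6 in common.
total = estimate_total_defects(25, 12, 6)   # ~50 defects estimated in all
remaining = total - (25 + 12 - 6)           # ~19 estimated still lurking
print(round(total), round(remaining))
```

The estimate is crude, but even a crude number beats "we're not really sure": a small overlap between the two lists is a warning that many defects remain unfound.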
None of the folks had tried inspecting before debugging, making a murky situation even dimmer.
We know that programming is different from building widgets on an assembly line. Yet too often people use that difference as an excuse to avoid measuring and understanding, quantitatively, the process of building firmware.
A big part of engineering is measuring things, whether it's amps, volts, miles/hour, furlongs per fortnight, or bug rates. Software engineering is engineering only when we collect real metrics, numbers like inspection efficiency, bug rates, productivity, and quality.
There's a large body of knowledge about all of these subjects. If you don't collect your own metrics, there's no way to draw useful comparisons to the state of the art.
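The metrics in question need nothing fancier than a few divisions. A minimal sketch, using hypothetical numbers, of three figures worth tracking on every project:

```python
# Three simple inspection/quality metrics the article argues for.
# All input numbers are hypothetical examples.

def defects_per_kloc(defects, lines_of_code):
    """Defect density, normalized per thousand lines of code."""
    return defects / (lines_of_code / 1000.0)

def inspection_rate(lines_of_code, hours):
    """Lines inspected per hour; roughly 150 LOC/hour is a
    commonly cited pace above which inspections miss bugs."""
    return lines_of_code / hours

def defect_removal_efficiency(found_before_release, found_after_release):
    """Fraction of all known defects caught before shipping."""
    total = found_before_release + found_after_release
    return found_before_release / total

print(defects_per_kloc(12, 600))          # 20.0 defects/KLOC
print(inspection_rate(600, 4))            # 150.0 LOC/hour
print(defect_removal_efficiency(37, 3))   # 0.925
```

With numbers like these in hand, a team can actually compare itself to published industry figures instead of guessing.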
That's like driving with your eyes shut.