By Jack Ganssle
Static Analyzers at the ESC
The show floor at the Embedded Systems Conference is crowded with exhibitors showing all sorts of wares. Booths with glittering gadgets employing billions of transistors sit next to consultancies from third world countries peddling their services. An army of attendees prowls the floor looking for solutions to their problems.
I'm a great fan of tools - anything that can automate or ease a developer's work. Especially interesting are static analysis tools. Give me a program with two buttons: "find bug" and "fix bug." Though such alchemy is not possible today, we've long had syntax checkers, Lint, and other contraptions that do find some problems.
A new and evolving class of static analyzers is being shown by at least two companies at the ESC. Products by Polyspace Technologies and Coverity do deep analysis of source code to find those hard-to-isolate problems. Though I've yet to take either of these offerings for a test drive, the claims made by the vendors are interesting.
Coverity's products find null pointer dereferences, memory leaks, buffer overflows and much more, all without running the code, all just by analyzing the source tree. The salesman told me that, on average, the tools uncover about 1 bug per thousand lines of code analyzed. That might not seem like much, but since some defects might take days or weeks to find, the benefit is clear. That's 1000 bugs per megaline; think of the time required to find and fix a thousand bugs! And when a single bug can result in a huge product recall, this sort of insight has vast benefits.
Polyspace's tools do similar checks, and also find array overflows and other sorts of runtime errors. A new feature lets the user specify min and max ranges of global variables; the tool then ensures none exceeds these specifications. All without actually running the program.
These tools perform insanely complex calculations requiring lots of compute time. I'm told Coverity's products can dissect 40 million lines of code in 3 or 4 hours. Polyspace's are apparently targeted at smaller projects, and burn something like an hour for a 20 KLOC project. Clearly you wouldn't invoke either tool from the make file, but it's reasonable to run a nightly build that includes the tests.
The cost? Coverity prices their product based on the number of lines of code you'll be examining. Polyspace has a flat $5k per seat price tag. It seems to me that the tools could pay for themselves after finding just a couple of tough bugs.
Yet developers have nearly no budget for software quality tools. I ran a survey in which 80% of the respondents complained they're unable to get more than $500 for a static analyzer or other quality-enhancing product. Meanwhile our EE brethren routinely snag $50k logic analyzers and similar equipment. But accounting can put a property tag on that gear.
What do you think? Is static analysis a valuable idea? Is it worth the money?