Published 3/2/2002

There was a time when a "database" was a drawer of index cards. Word processing meant creating text on a typewriter, using carbon paper instead of a "print 3 copies" command. The only intelligence in a car resided in the brain of the driver. Factory controllers used banks of relays singing their clicking songs.

Mainframe computers were generally inaccessible to ordinary people. Most folks experienced computing disasters only in the form of an insane credit card bill or the IRS's repeated demands for immediate payment of a zero-dollar tax due.

Today we're not far from computerized car brakes sans backup hydraulic system. All modern aircraft can and do fly themselves, and many can even land without human intervention. A 100,000-ton tanker heavily laden with a potential environmental catastrophe relies entirely on autopilot, GPS, and radar for navigation and collision avoidance. Factories producing the most noxious and toxic of chemicals would grind to a standstill - or perhaps fail spectacularly - without an array of microprocessors that sequence every activity.

The firmware in these systems has grown spectacularly in the quarter century since the advent of the microprocessor, from programs of but a few thousand lines of assembly to today's hundreds of thousands of lines of C or C++. We've learned many things over that time about building better code; one is that "perfect" software is a practical impossibility. The best code ever written is probably that in the Space Shuttle, but even there, even after spending $1000 per line, defects still turn up, though at the amazingly low rate of about one bug per 400,000 lines.

One ABS brake vendor told me their software is junk. "It's not a problem," he went on, "because the hydraulic system takes over when the code crashes." The rate of change of customer demands far exceeds our ability to learn and implement better design processes; one wonders how this organization will cope with demands for safe braking without mechanical backups.

All large systems have hidden problems lurking within them. Most of us work hard to ensure we've addressed real safety issues. None of us knows how to prove we've built something that is correct and that will never fail.

I was struck by a letter in the RISKS digest (http://catless.ncl.ac.uk/Risks/21.84.html#subj10.1) about the most common security problem found in Unix and Windows systems. Henry Baker wrote suggesting that programmers who produce software that does not check for buffer overflows are criminally negligent, and should perhaps be liable for resulting damages. He makes the very interesting point that it's truly trivial to check incoming data streams for length, that we've known how to do this for a generation or more, and yet an endless succession of bug reports out of Redmond and CERT testifies to programmers totally neglecting this obvious problem.
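
To see just how trivial, here's a minimal sketch in C. The buffer size and function names are invented for illustration, not drawn from any shipping product; the unchecked copy is the classic overflow, and the checked one is the generation-old fix Baker describes.

    #include <stdio.h>
    #include <string.h>

    #define CMD_BUF_SIZE 64   /* illustrative fixed-size buffer */

    /* The classic negligent pattern: copy with no length check. Any
     * input of CMD_BUF_SIZE characters or more writes past the end
     * of buf and corrupts whatever lies beyond it. */
    static void copy_command_unsafe(char buf[CMD_BUF_SIZE], const char *src)
    {
        strcpy(buf, src);
    }

    /* The trivial fix: measure the incoming data against the buffer
     * before copying, and reject anything that won't fit. */
    static int copy_command_checked(char buf[CMD_BUF_SIZE], const char *src)
    {
        if (strlen(src) >= CMD_BUF_SIZE)
            return -1;        /* too long: refuse it rather than overflow */
        strcpy(buf, src);     /* now known to fit, terminator included */
        return 0;
    }

    int main(void)
    {
        char buf[CMD_BUF_SIZE];
        char attack[101];

        memset(attack, 'A', 100);   /* a 100-character hostile "input" */
        attack[100] = '\0';

        printf("checked(\"STATUS\") = %d\n", copy_command_checked(buf, "STATUS")); /* 0 */
        printf("checked(attack)   = %d\n", copy_command_checked(buf, attack));     /* -1 */
        (void)copy_command_unsafe;  /* shown for contrast, deliberately never called */
        return 0;
    }

That's the entire defense: measure before you copy. Safer library calls like snprintf are refinements on the same one-line idea.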

One or two buffer overflow problems in IE that cause crashes or security vulnerabilities are just bugs, easily corrected, and perhaps do not reflect on the developers' abilities or ethics. Repeated buffer overflow bugs, though, are a different animal indeed. I would think that at some point, after years of identical problems in IE and far too many other similar programs, the team leader might suggest auditing the code: look at every input and make sure this common problem cannot occur.

That Redmond and most other providers of PC programs do not take such action suggests they either have no clue what they're doing, or simply don't care. That is, negligent. If such a simple and easily-avoidable bug causes a corporation to lose data, do they have a case against the developers?

Extending this line of reasoning a bit, the software community has learned that a disciplined development process yields better code, fewer bugs. Yet very few of us employ a rigid process. For instance, we know from countless studies that keeping functions to a single page reduces defects. How many of us enforce a short-functions requirement? Code testing is a notoriously ineffective way to uncover bugs. How many of us couple effective tests with code coverage checks or inspections?
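
Even enforcement of the one-page rule is cheap to mechanize. Here's a crude sketch, again in C, that flags any top-level brace-delimited block - in C source, almost always a function body - longer than an assumed 60-line page. The threshold and the brace-counting heuristic are mine for illustration; braces inside strings and comments will fool it.

    #include <stdio.h>

    #define MAX_FUNC_LINES 60   /* assumed "one page"; adjust to taste */

    /* Crude one-page-rule checker: tracks brace depth and reports
     * any top-level { ... } block longer than MAX_FUNC_LINES. A
     * sketch of the idea, not a production lint. */
    int main(int argc, char **argv)
    {
        FILE *f;
        int c, depth = 0, line = 1, start = 0;

        if (argc != 2 || (f = fopen(argv[1], "r")) == NULL) {
            fprintf(stderr, "usage: pagecheck file.c\n");
            return 1;
        }

        while ((c = getc(f)) != EOF) {
            if (c == '\n')
                line++;
            else if (c == '{' && depth++ == 0)
                start = line;          /* a new top-level block begins */
            else if (c == '}' && --depth == 0 && line - start > MAX_FUNC_LINES)
                printf("block ending at line %d runs %d lines\n",
                       line, line - start);
        }

        fclose(f);
        return 0;
    }

Wire something like that into the build and the rule enforces itself. The point isn't this particular tool; it's that the practices we know work are trivial to mechanize, and we mostly don't bother.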

If bugs result from cavalier engineering, who is responsible?

In America, most change today seems driven by litigation or the fear of it. An engineer appearing in court is usually there as an expert witness. Will we soon be the defendants?

I have unbounded faith in our ingenuity to produce really fine products that are remarkably safe. But since bugs cannot be completely eradicated, sooner or later the law of averages - or ambulance-chasing lawyers - will catch up with us. What then?

What do you think?