

By Jack Ganssle

Trusted ICs, Trusted Software

Published 5/31/2006

If you don't have enough to worry about, Eric Shufro sent a link (http://www.sciam.com/article.cfm?chanID=sa006&articleID=00003B8B-9E76-13F3-9E7683414B7F0000) to a Scientific American article about security threats embedded in DoD software written by overseas contractors. When a major defense system comprises millions of lines of code, how can one ensure that a bad guy hasn't slipped in a little bit of nastiness or a back door? I suspect a system like the missile defense shield is especially problematic as it's so hard to test.

The article doesn't mention the use of code inspections to look for vulnerabilities. But inspections after delivery, designed just to look for security issues, are terribly expensive and not 100% effective.

Others (http://www.dailytech.com/article.aspx?newsid=1497) worry that PCs produced overseas may carry bugs. Not software defects, but hardware and/or software that monitors data streams to look for sensitive information. That may be a bit hysterical since government computers for classified use aren't connected to the Internet. A bug might ferret out some interesting nuggets, but has no way to send them back to the other side.

Eric asked an interesting related question: virus writers love injecting their bits of slime onto our machines to, mostly, hijack the computers to spew spam. But what if an evil person or government decided to infect our development tools? It wouldn't be too hard to replace stdlib or other library files. Now a runtime routine is sick, perhaps helping its evil overlords send spam, or maybe something much more sinister. If the code is written to switch modes at some later time, the problem might not be found in testing.

To do this, a virus would have to execute some code on the PC to start changing libraries. Presumably the antivirus forces would quickly identify the new worm and issue updates to their software to find and quarantine these things. An attack on Visual Studio or other standard PC development tool would be quickly found and removed, as so many people use these products.

But things might be less sanguine in the embedded space. An attack on some 68HC12 compiler, used by a relatively small number of developers, could lurk for a very long time. And be very, very hard to find if it's purposely intermittent.

Some safety-critical standards require verified tools. Update the compiler and someone must re-verify it. However, if a virus surreptitiously tampers with a verified bit of software, will that attack slip through unnoticed?

Perhaps it's time we CRC the tools in the build cycle.
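Such a check could be a small script run at the start of every build. The sketch below is illustrative only: the tool names and manifest are invented, and it uses a cryptographic hash rather than a literal CRC, since an attacker who knows a CRC check exists can pad a tampered file to match any CRC:

```python
import hashlib
import sys
from pathlib import Path

def digest(path: Path) -> str:
    """Hash a file in 64 KB chunks. SHA-256 stands in for a CRC because
    a deliberate attacker can trivially forge a matching CRC."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_toolchain(tool_dir: Path, manifest: dict) -> bool:
    """Compare each tool in the manifest (file name -> known-good digest,
    recorded when the toolchain was last verified) against what is now on
    disk. Returns False if anything has changed."""
    clean = True
    for name, expected in manifest.items():
        if digest(tool_dir / name) != expected:
            print(f"tool changed since verification: {name}", file=sys.stderr)
            clean = False
    return clean
```

A build script would call verify_toolchain() before invoking the compiler and abort on a mismatch. Note the manifest itself must live somewhere a virus can't easily reach (read-only media, a signed file, a separate machine), or the attacker simply updates it along with the tampered library.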

What do you think? Should we layer some level of defense around our development tools?