
By Jack Ganssle

Middleware Madness

Published 4/02/2006

True stories:

A company, let's call it Acme Products, had a very popular product that many of us use on a daily basis. Powered by a single 16-bit processor, all of the code fit into 256 KB of ROM. The code had evolved over decades; patch after patch was added to the convoluted mess. Maintenance costs escalated yearly.

A rewrite was launched. Five years of development and $40 million in engineering crept slowly by. System ROM went from 256 KB to a megabyte, then to two and more. By the time they called me the application consumed 32 MB, yet included only half the functionality of the original 256 KB, 16-bit version. Button presses, which previously offered instantaneous response, now took seconds. The safety aspects of the system were in question.

Grand dreams of vendor-neutral code mandated so many layers of middleware that the system burned too much CPU time, consumed huge amounts of code space, and ate so much engineering time that they ultimately canned the project and started from scratch.

Then there was a company, let's call them ABC Security, who built a secure communications device on a Pentium Pro. A quarter gig of RAM holding a version of UNIX, middleware that again aimed to provide a vendor-neutral API, more API-neutral access to a flash file system, and a chain of device drivers so deep and so convoluted that it was sure to baffle Linus himself: all of it led to a product that took a full ten minutes to produce a dial tone when the unit went off-hook.

Also cancelled, after burning $2 million in engineering dollars.

Then there was the data-acquisition product that migrated from an 8051 to a PowerPC. Engineers replaced the old brain-dead idle-loop design with an RTOS from a well-known vendor, and the simple data-storage structure with a real file system. TCP/IP replaced the previous incarnation's proprietary synchronous communications mechanism. And this system, too, never saw the light of day as a real product, as it could no longer keep up with the input data stream.

Magazine ads scream the benefits of higher levels of abstraction, of complex middleware, of languages guaranteed to provide immediate and painless reuse. If you're not working with a UNIX variant or some flavor of Windows, your products are dinosaurs sure to fail in the marketplace.

In the olden days assembly programmers could count T-states to predict execution time. RAM and ROM needs were absolutely clear. The move to C did obscure this somewhat, but with a little experience developers could reasonably estimate real-time performance and memory requirements. That's not possible anymore when we bundle huge chunks of middleware into our products.
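To make that concrete, here's a hypothetical back-of-the-envelope estimate of the sort that was once routine. The routine, the per-iteration cycle counts, and the 16 MHz clock below are all invented for illustration; the point is that the arithmetic used to be possible at all.

    #include <stdint.h>

    #define N_SAMPLES 64u

    /* Hypothetical smoothing routine on a small 16-bit MCU. Suppose
     * (the numbers are invented for illustration) the compiled loop body
     * costs 22 cycles per sample plus 15 cycles of setup and return.
     * At a 16 MHz clock the worst case is:
     *     15 + 64 * 22 = 1423 cycles, or about 89 microseconds.
     * Memory needs are just as knowable: the routine is a fixed handful
     * of bytes of ROM, and its only RAM is the caller's sample buffer. */
    uint16_t average(const uint16_t *samples)
    {
        uint32_t sum = 0;

        for (uint16_t i = 0; i < N_SAMPLES; i++) {
            sum += samples[i];
        }

        return (uint16_t)(sum / N_SAMPLES);
    }

Pile an RTOS, a file system, and a few abstraction layers on top of that loop and no such arithmetic survives; the only numbers left are the ones you measure.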

Management finds it hard to resist the promises of vendors, who earnestly show how just one more layer will ensure the product outlives capricious decisions made by other vendors, or how converting to a popular API will let us replace expensive embedded experts with dollar-an-hour Visual C++ programmers. Yet the costs, in terms of memory and CPU cycles, never seem to enter these discussions. It's not until the system has grown into a bloated morass of code running at a crawl that the realities become apparent. At that point there's usually no recovery other than a complete recode. Expensive? You bet.

Middleware, high-end languages, and complex OSes are awesome tools and capabilities that many embedded systems absolutely need. They're rarely the panaceas promised by anxious vendors. We must demand quantitative overhead specifications in place of the all-too-often vague promises used to sell this stuff today.
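Absent vendor numbers, measure the overhead yourself. This is a minimal sketch, assuming a memory-mapped free-running cycle counter; the register address and middleware_call() are placeholders for whatever part and API are under evaluation. Code-space cost comes from comparing the linker map with and without the library.

    #include <stdint.h>

    /* Hypothetical free-running cycle counter; substitute your part's
     * timer or performance-counter register. The address is made up. */
    #define CYCLE_COUNTER (*(volatile uint32_t *)0x40001000u)

    /* Placeholder for whatever middleware API is being evaluated. */
    extern void middleware_call(void);

    /* Cycles consumed by one invocation, including the small cost of
     * reading the counter twice. Unsigned subtraction tolerates a
     * single counter wraparound. */
    uint32_t measure_overhead(void)
    {
        uint32_t start = CYCLE_COUNTER;

        middleware_call();

        return CYCLE_COUNTER - start;
    }

Run it a few thousand times and log the worst case, not the average; it's the outliers that turn an instantaneous button press into a multi-second wait.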

Real-time, resource-constrained embedded systems are still the norm, and they still differ fundamentally in philosophy from PCs. Yet too many vendors seem to view the embedded space as an extension of the desktop market.