
By Jack Ganssle

What Makes Embedded Different?

Pediatricians treat children. Gerontologists work with the aging. Both practice medicine, but their skills are very different.

The mechanic at the Toyota dealership probably knows little about GM's cars, while a motorcycle repairman knows every nuance of BMW's bikes but no doubt can't grok a 745i.

The Verizon FiOS dude can splice fiber, but knows nothing of Comcast's cable distribution system.

It's easy to draw pretty clear lines between the skills and duties of many seemingly close occupations. But it's increasingly hard to distinguish embedded work from other forms of programming.

Why do you read Embedded Systems Design? Presumably you build, or are interested in, embedded systems. But what characteristic defines this field? How are our skills different from those used by, say, a developer at Microsoft creating the next generation of spreadsheets?

In reading "Building Embedded Linux Systems" (2008, O'Reilly, Sebastopol, CA, by Karim Yaghmour, Jon Masters, Gilad Ben-Yossef and Phillipe Gerum - what a wonderful-sounding mash of international names!) I was struck by how much of the book has nothing to do with embedded systems. It's a nice work, to be sure, but most of the "embedded" content is in the final three chapters on real-time aspects of Linux. And there's little about actual timing one can expect when tossing Linux into a real-time system.

Many engineers scoff at the use of a desktop OS in embedded applications. Yet this sort of operating system makes sense in a lot of circumstances. Managers tell me they like the rich APIs these provide, and really appreciate the deep pool of programmers who are familiar with them. The Windows development environment has a much wider following than, say, that of VxWorks. In flush times bosses can hire plenty of Windows people, at rates more attractive than those deeply embedded folks command. Processors are cheap; people aren't, so it's common to see systems structured around a hefty 32 bitter running a desktop OS, with one or more smaller CPUs doing the fast stuff and the deep bit twiddling.

The first microprocessors were almost unusable. Though ads promoted the notion, thrilling at the time, of a "computer on a chip," the facts were quite different. The 4004 and 8008 required enormous amounts of support circuitry. Memory was external, and usually consisted of a bank of 1702 EPROMs (each holding 256 bytes - yes, bytes of code) and often an entire board crammed with static RAMs. Multi-phase clocks, address latches, and much more were needed. For an example of the logic required just to build a digital clock with the 8008 see http://www.8008chron.com/, which is a bit of a "cheat" since it uses a 22V10 PAL. As I recall these came along much later. Some pictures of boards of the time (http://www.dev-monkey.com/blogs/jon_titus.php?mid=574) show just how many parts were required to build the simplest system. The reality of a computer on a chip didn't come till much later.

In those days no one used the word "embedded" but the idea was clear. An embedded system was one that used a microprocessor to control something. Embedded systems were physically large, due to all of the support circuitry, but logically tiny, cramped by small address spaces and insanely limiting development tools. When I started in this field our primary product used an 8008 with 4k of program, all written in assembly language, of course. The tools used paper tape for mass storage, which was fed through an ASR-33 teletypewriter at a blistering 10 characters per second. It took three days to assemble and link that program. Obviously, a bigger chunk of code would have taken prohibitive amounts of time to convert to a binary image. Yet in the real computer world of mainframes and even minicomputers developers used high level languages and crafted programs that were enormous by comparison. COBOL/Fortran programmers were like the gerontologists compared to us pediatricians building embedded products.

The two parallel but disparate fields had very different sorts of workers. CS people dominated big iron. Pretty much every embedded programmer was an EE, and it was common that a single EE both designed the hardware and wrote the code. A COBOL programmer might be clueless about how a computer worked, but every embedded engineer could draw a block diagram of a CPU in his (they were mostly male) sleep.

Though we were all in the computer world, we spoke very different languages and went about our work in very different ways. The split between the two was absolutely clear.

That's not so true today. Many systems are big, sometimes cramming millions of lines of code into a single application. A division of labor and skills is required to manage projects of this scale. CS folks with little exposure to the underlying hardware are commonly found building embedded applications, which is in my opinion a Good Thing. Computer scientists bring (or at least can bring; too often they don't) more discipline to the process than the average EE, who probably has had no exposure to software processes in his or her formal education. While heroics can work on small-scale projects, they just don't scale to larger systems.

In the olden days an engineer would as naturally pick up a scope as an emulator to debug the code. Increasingly, especially on 32 bitters, we're rather divorced from the hardware. I often ask engineers what clock rate their CPU runs at to get a sense of how much they know about the often-mystical underpinnings of a system; surprisingly few can answer that question. Do they need to know this? Of course not. But it illustrates how embedded work is increasingly looking like conventional programming.

This is all a very general trend. The problem, and the delight, of embedded systems is that they encompass a vast range of applications, tools, and techniques. A lot of us still work on small microcontrollers and are knee-deep in the electronics. Surprisingly many continue to write all of the code in assembly language due to space or speed issues. I remain convinced this dichotomy will never change. Sure, 32 bit parts are cheap, and are even the backbone of many microcontrollers today. But cheaper transistors will create new opportunities for small controllers, many of which will be greatly price-sensitive. Byte-wide parts will continue to fill that niche.

So what makes embedded development different from conventional programming? Is our field slowly morphing into generic "programming"?

The answer to the second question is both yes and no.

First, I continue to think that the colleges are under-serving the peculiar needs of this industry. There are exceptions; some universities are doing an excellent job of preparing the next generation of firmware experts. But recent articles in ACM and IEEE journals confirm my impression that too many schools crank out CS majors who live and breathe Java with too little exposure to development processes and even less to the notions of real-time and I/O that are so essential to embedded systems. Though there's lots to like about Java, it has pretty close to zero penetration into firmware. For better or for worse, this is a C/C++ world, and I expect that to continue for at least another generation.

Though I fondly remember the olden days of twiddling each bit by hand, that is a very inefficient way to develop code. The suits write our paychecks only because our work is, in the end, a business endeavor. We're being paid to help the company be profitable. Profits come from getting to market quickly and from developing products that customers want (read: plenty of features). Increasing functionality coupled with furious time-to-market demands means we will increasingly need to work at very high levels of abstraction. That includes both reuse and abstraction drawn from executable models. Modeling still has only a tenuous market share, but that will necessarily change unless it is supplanted by another approach that offers faster development of bigger systems.

Abstraction and reuse will continue to distance us from the nuts and bolts of the system. Creeping featurism often manifests itself as a perceived need for better user interfaces, communications, and data management. All of these are essential parts of desktop OSes, so the use of those will continue to grow, and grow rapidly. Glom in a standard operating system, and a great deal of the hardware disappears from view. Are we building a desktop app or classical firmware? Who can tell? The tools might be the same: GNU or Visual Studio, for instance. All of our debugging and compilation takes place on a PC. Today we often use simulation or virtualization environments to test code when the hardware isn't yet available. As the commercial asked years ago: Is it real, or is it Memorex?
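One common way to pull that off is a thin hardware-abstraction layer with a simulated back end, along these lines. This is only a sketch: the register address, port bit, and TARGET_BUILD macro are invented for illustration, not taken from any real part.

```c
#include <stdbool.h>

/* Hypothetical abstraction: the application calls gpio_set_led() and
   never touches registers directly. */
void gpio_set_led(bool on);

#ifdef TARGET_BUILD
/* Target build: a memory-mapped port register (address and bit invented). */
#include <stdint.h>
#define PORTA (*(volatile uint32_t *)0x40020014u)

void gpio_set_led(bool on)
{
    if (on)
        PORTA |= (1u << 5);
    else
        PORTA &= ~(1u << 5);
}
#else
/* PC simulation build: print the state so the logic can be exercised
   in a desktop test harness before the board exists. */
#include <stdio.h>

void gpio_set_led(bool on)
{
    printf("LED -> %s\n", on ? "ON" : "OFF");
}
#endif
```

The application links against whichever implementation the build selects, so most of the logic can be debugged on a PC long before the board shows up.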

There's already a specialization of skills. Networking gurus slam packets around the world with great aplomb. Do they even need to know what platform is running their software?

Some of the products we build are hard to tell apart from a desktop system. Is the iPhone an embedded application or a general-purpose computer that happens to have some telephone capability? If we define an embedded system as a computer-based device that performs one type of action, does that mean a blade server is embedded? I think not, but have no real basis for that opinion.

So in many ways lots of embedded apps are converging with traditional general-purpose computers.

But important differences remain. Not too many desktop folks have written an interrupt service routine, yet those are essential parts of most firmware.
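To make that concrete, here's a minimal sketch of a UART receive ISR in C. The register addresses, bit masks, and the name uart_rx_isr are invented for illustration; on a real part the register map and vector table entry come from the vendor's headers.

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART registers; addresses and the
   RX_READY bit are invented for illustration, not a real part. */
#define UART_STATUS  (*(volatile uint8_t *)0x4000u)
#define UART_DATA    (*(volatile uint8_t *)0x4001u)
#define RX_READY     0x01u

#define BUF_SIZE 64u
static volatile uint8_t rx_buf[BUF_SIZE];
static volatile uint8_t rx_head;  /* written only by the ISR        */
static volatile uint8_t rx_tail;  /* written only by main-line code */

/* Runs in interrupt context: grab the byte, stash it, get out fast.
   No printf, no malloc, no blocking calls. */
void uart_rx_isr(void)
{
    while (UART_STATUS & RX_READY) {
        uint8_t byte = UART_DATA;              /* reading clears the flag */
        uint8_t next = (uint8_t)((rx_head + 1u) % BUF_SIZE);
        if (next != rx_tail) {                 /* drop the byte when full */
            rx_buf[rx_head] = byte;
            rx_head = next;
        }
    }
}
```

The main-line code drains rx_buf at its leisure; the point is that the ISR itself does almost nothing, a discipline with no real analog in desktop programming.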

Real-time services are uniquely the province of embedded work. We often count microseconds; miss a deadline by even the slimmest margin and the system fails. For that reason I think the traditional embedded RTOS will never go away. But it will change. The logical next step from "transistors are free" is "CPUs are free." RTOSes will evolve to handle activities spread among hundreds or even thousands of processors interconnected in a multitude of ways.
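To illustrate the kind of timing involved, here is a bare-metal sketch of a hard periodic loop. The functions read_timer_us(), motor_step(), and log_overrun() are hypothetical stand-ins, and a real design would hang this off a timer interrupt or an RTOS rather than busy-waiting.

```c
#include <stdint.h>

/* Hypothetical stand-ins: a free-running microsecond counter, the
   real-time work, and an error hook for missed deadlines. */
extern uint32_t read_timer_us(void);
extern void     motor_step(void);
extern void     log_overrun(uint32_t late_by_us);

#define PERIOD_US 500u   /* the loop must run every 500 microseconds */

void control_loop(void)
{
    uint32_t next_release = read_timer_us() + PERIOD_US;

    for (;;) {
        motor_step();                          /* the hard real-time work */

        uint32_t now = read_timer_us();
        if ((int32_t)(now - next_release) > 0)
            log_overrun(now - next_release);   /* a miss is a failure     */

        while ((int32_t)(read_timer_us() - next_release) < 0)
            ;                                  /* spin until the release  */
        next_release += PERIOD_US;
    }
}
```

The signed subtraction keeps the comparison correct even when the free-running counter wraps around; it's details like this, not the loop itself, that consume an embedded developer's attention.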

The need for RTOSes means a continued demand for engineers who understand and can manage the difficulties of tasking. Embedded people, that is.

Where will the next generation of writers of deeply-embedded firmware come from? I fear it will continue to be largely the EE community, simply because so many schools are poisoning the CS well with glorified web developers.

I think we have seen the future already, and I expect no major shifts in the experience required of embedded developers. More products will use a desktop OS, which will keep demand strong for developers who lack a deep understanding of firmware's unique requirements. But there will always be hardware to control and microseconds to manage; there will always be a wealth of low-end apps coded in assembly or restricted versions of C. Those arenas will sustain a healthy demand for the traditional embedded developer.