
By Jack Ganssle

Faster!

Published in Embedded Systems Design, June 2008

Not long ago Kirk, a close friend who spent his career managing real estate, read "The Soul of a New Machine", Tracy Kidder's wonderful account of how engineers at Data General produced the Eclipse minicomputer in record time. Kirk found the book interesting and well written, but was dismayed by the high-pressure schedule and the burnout it inflicted. Then he made a comment that stopped me in my tracks: "I just couldn't believe that the picture Kidder paints of the high-pressure schedule is real, though - no one can work like that for long."

How could I explain to someone with no connection to the high-tech world how schedules are always the bane of our existence? That in my career, and in those of almost every engineer I know, every project is built to capricious and impossible deadlines? That in recent years timelines have shrunk even more, so Kidder's depiction seems almost benign by today's standards?

So I'm left wondering if perhaps no one outside the technology business has a clue about how we're driven to madness by impossible schedules. Is our business unique? How many other businesses have such long-term and relentless pressure to get things done faster? Is constant unpaid overtime a theme of any other segment of the economy?

Decent project management software appeared in the 80s. Anyone can enter complex PERT and Gantt charts outlining every nuance of piecing together big endeavors. But who uses this stuff successfully? I've watched uncounted developers attempt to build a schedule around an arbitrary deadline set by marketing: they move triangles around like crazy, all except the final one, the one that has meaning, in an attempt to create a schedule that sounds believable, all the while knowing it's utter nonsense. When I was in high school the Jesuits mailed us our report cards, and they always seemed to turn up on a Friday afternoon. We would pluck our reports from the mailbox and reinsert them Monday, so the weekend wouldn't be ruined. It was just a childish way to postpone the inevitable, which is exactly what engineering groups do when they jiggle triangles like this.

Project planning software is, of course, touted as an advance over the primitive, manual tools we used to create meaningless schedules in the old days. Now we can fabricate incorrect data even faster. That's one of the beauties of computers: once it took seconds, even minutes, to make a mistake. With computers you can make thousands of mistakes a second.

People have been writing software for over 50 years, and building embedded systems for 30. The one constant over all of that time is that features increase while schedules shrink.

We're trying to manage three conflicting things: an impossible schedule, an excess of desired features, and quality. Remove just one leg of the three and the project becomes trivial. Can we ship with lots and lots of bugs? If so, getting the product out on time is pretty easy. Can we ignore the ship date? With infinite time we can get every feature working right.

This twisted triad dooms projects from the start when developers and management just don't recognize the truth buried in the conflict. The boss invariably wants all three legs: on-time delivery, perfect quality, and infinite features. He can't - and won't - get them.

It seems logical that we must manage features aggressively, since schedule and quality issues will always be non-negotiable. Use requirements scrubbing to identify and remove those features that really are not needed. Build the system in a logical manner so that even if you're late, you can still deliver a product that does the most important things well.

There is, of course, one other ingredient that forms the backdrop of the twisted triad, one that is more the fabric of the development environment. Resources. Decent tools, an adequate supply of smart people, an enlightened management team, all form the infrastructure of what we need to get the project done.

In the 20th century we learned to build embedded systems, but it seems management never figured out the appropriate role of resources in development projects. Somehow engineering projects are viewed much like building widgets on a production line. Need more widgets? Add more people, more machines. That just does not work in software engineering.

IBM found that as a project's scope increases software productivity goes down - dramatically - for much the same reason: bigger projects demand bigger teams, and bigger teams spend ever more of their time communicating rather than coding. Their surveys showed code production (in lines per day) fell by an order of magnitude as projects grew.

Barry Boehm's Constructive Cost Model (COCOMO) is probably the most famous predictor of software schedules. It, too, shows that effort grows much faster than firmware size. Double the number of lines of code and the required effort - and with it the delivery date, for a fixed team - goes up by more than a factor of two. Sometimes much more.
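To see the effect, plug numbers into the Basic COCOMO formula for embedded-mode projects, which estimates effort in person-months as 3.6 x KLOC^1.20. This is a back-of-the-envelope sketch only; real COCOMO estimates weigh in many cost drivers:

    #include <math.h>
    #include <stdio.h>

    /* Basic COCOMO, "embedded" project mode:
     * effort (person-months) = 3.6 * KLOC^1.20.
     * The exponent above 1.0 is the whole story: effort
     * grows faster than code size. */
    static double cocomo_effort_pm(double kloc)
    {
        return 3.6 * pow(kloc, 1.20);
    }

    int main(void)
    {
        double e50  = cocomo_effort_pm(50.0);   /* ~394 person-months */
        double e100 = cocomo_effort_pm(100.0);  /* ~904 person-months */

        printf("50 KLOC:  %.0f person-months\n", e50);
        printf("100 KLOC: %.0f person-months (%.1fx)\n", e100, e100 / e50);
        return 0;
    }

Doubling the code base from 50 to 100 KLOC raises the estimated effort by a factor of about 2.3, and the gap keeps widening as programs grow.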

Yet "go hire some more people" seems the universal management mantra when a project plunges into trouble. It simply doesn't work.

Is there no hope? Will projects always be doomed to failure? Is the pressure so aptly illustrated in The Soul of a New Machine our destiny?

With project complexities exploding it's clear that unless we dedicate ourselves to a new paradigm of development, using so much that has been learned about software engineering in the last half-century, we'll stagnate, wither, and fail. Those companies that accept new modes - and old proven modes - of thinking will prosper. Two areas in particular are critical for new understanding, two areas that this volume deals with: reuse and tools.

Tools

In the 1940s all software was crafted in machine code. The 50s saw the introduction of the first compiled language, Fortran, which boosted coding efficiency almost overnight. Fortran came at the cost of bigger, slower code, a tradeoff that was then deemed unacceptable by too many engineers. But those who embraced Fortran proved to be the vanguard of the future.

Today we hear the same arguments about modeling, C++ and Java. Too slow, too big. Yet clearly continuing to crank out millions of lines of C is not going to solve any problems; hand coding is no longer giving the productivity boosts absolutely required to keep up with increasing product demands.

Advanced languages offer us the power of abstraction, of working with higher level views of our project. Abstraction is fundamental to our future. We can no longer afford to be concerned with bits and bytes. Whether you love it or hate it, the Windows API gives desktop developers an ineffably rich set of resources only a masochist would care to recreate.

Tools of various flavors are the enabling ingredient in abstracting us from lower level details. The first Fortran compiler, by today's standards laughably simple, gave engineers of the 50s a formidable weapon. Today we have even more choices.

We largely accept the extra overhead associated with compilers. Other tools, all of which abstract us further from the code, cost more in overhead, yet promise faster and better delivery. Modeling tools like UML have been successful in some domains. Too few developers grok LabVIEW and MATLAB, yet they are both important parts of the embedded landscape.

Tools that automatically search out bugs promise much for programmer productivity. Coverity, Klocwork, Polyspace, Green Hills, and GrammaTech all push static analyzers that look for run-time problems. The tools certainly can't find all bugs, but they offer a schedule-cheating weapon that as yet has little market penetration.
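As a contrived illustration (not the output of any particular tool), here's the kind of latent run-time defect a static analyzer flags at compile time, even though a test suite might never trigger it:

    #include <stdlib.h>
    #include <string.h>

    char *copy_name(const char *src)
    {
        char *dst = malloc(16);
        /* Defect 1: malloc() may return NULL, and the strcpy()
         * below would then dereference a null pointer.
         * Defect 2: strcpy() overruns the 16-byte buffer
         * whenever src holds 16 or more characters. */
        strcpy(dst, src);
        return dst;
    }

Both bugs sit quietly until a customer in the field finds them; an analyzer finds them before the first test ever runs.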

Reuse

Tools that help us write more code faster are but a part of the solution, though. Clearly a new model of reuse is desperately needed. A product composed of a million lines of code is just too big to be built a line at a time, especially when the boss wants it shipped today.

Barring some fantastic new development in software engineering, the future simply must belong to reuse. Until we manage to beg, borrow, steal or buy huge chunks of the code base, we'll be forever doomed to writing every last bloody line ourselves. That's intolerable.

Reuse means much more than hacking up some code we salvaged from a previous project. It's more than recycling 20% of the firmware. Million-line-plus systems require extreme levels of reuse.

But let's define a few terms to show what reuse is, and what it's not. Software Salvaging is reusing code that was never designed for reuse. That is, hacking away at a base of crummy old source to somehow shoehorn it into a new application.

Carrying-over Code is porting firmware from an old project to a new one. Like salvaging, it's largely a matter of heroic source hacking.

True reuse is building systems a component at a time, not a line at a time. It's working with big blocks that function in well-defined ways, without digging into the guts to tweak, debug, or improve them. Richard Selby found that, when porting old code to a new project, if more than about 25% gets modified there's not much of a schedule boost. Reuse works best when done in the large.
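As a sketch of what those well-defined ways look like, imagine a reusable ring-buffer component whose entire public face is one small header. The names here are hypothetical, not from any particular library:

    /* ringbuf.h - the complete interface to a hypothetical
     * reusable component. Clients get an opaque type and four
     * calls; the implementation's guts stay out of reach. */
    #ifndef RINGBUF_H
    #define RINGBUF_H

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct ringbuf ringbuf_t;   /* opaque handle */

    ringbuf_t *ringbuf_create(size_t capacity);
    void       ringbuf_destroy(ringbuf_t *rb);
    bool       ringbuf_put(ringbuf_t *rb, unsigned char byte);
    bool       ringbuf_get(ringbuf_t *rb, unsigned char *byte);

    #endif /* RINGBUF_H */

A new project links the already-tested component and codes against those four calls. Since no one edits the internals, no one breaks them.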

The gritty truth is, though, that before a package is truly reusable, it must have been reused at least three times. In other words, domain analysis is hard. We're not smart enough to anticipate the full range of applications where a chunk of software may be used. Every domain requires its own unique features and tweaks; till we've actually used the code several times, over a wide enough range of apps, we won't have generalized it enough to make it truly reusable.

This suggests that reuse is terribly expensive. We spend money like mad making decent code but don't reap the benefits till we've reused it three times. How many of us have the patience and discipline - and time - to create code for a future need? Reuse is like a savings account: there's no value till you've put a lot into the account. The more you invest, the more benefit accrues.

When will we be able to buy big chunks of our applications, rather than write them all from scratch? Is the Software IC even possible?

The future belongs to people brave and clever enough to discard old modes of thinking and adopt and create new ideas. We will find ways to design products using previously-written code. The benefits are too compelling to continue building systems a line at a time. It may mean tossing resources, memory, and high end CPUs at low end apps. It may mean new tools. Surely we'll design systems differently. Though some of the implementation details are murky today, the outcome is pretty clear.

The biggest change will be in our attitudes, our approach to building products. Someday each of us, supported by management, will recognize two critical factors: firmware is the most expensive thing in the universe, and any idiot can write code. The future belongs to developers who find better ways to build products, not to super-coders.

So, to my friend Kirk and all the rest of the non-engineering world: we do work under enormous scheduling pressures. You, the public, demand it. Your digitally stabilized binoculars, that $100 GPS, the digital camera, and all of the other electronic products that make up your world all come from engineers struggling under impossible deadlines to produce amazingly inexpensive and reliable systems.

Once in a while, when you use one of these systems, think of us! We're back in the lab, working on version 2.0.