
By Jack Ganssle

Cheap Changes

Published 7/01/2004

On page 23 of eXtreme Programming Explained (http://www.amazon.com/exec/obidos/tg/detail/-/0201616416/102-1490925-0698512?v=glance) Kent Beck makes the startling claim that it is - or at least can be - very cheap to change software, even in big systems that have been deployed for years. According to him this is the "central technical premise of eXtreme Programming." Yet this flies in the face of decades of research showing that changes made late in the life cycle cost orders of magnitude more than those made early on.

Beck and other agilers believe that XP's practices intrinsically lead to cheap changes.

What claptrap!

Sure, some changes are indeed very cheap. Need to invert the sense of an input bit? That's a ten-minute job if a driver insulates the peripheral from the rest of the code. Many other changes, such as altering the text of an error message, are equally trivial.
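That insulation can be as simple as keeping the bit's polarity in one small function. A minimal sketch in C (the switch name, bit mask, and polarity are hypothetical, invented for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit assignment for illustration. */
#define DOOR_SWITCH_MASK 0x01u

/* The one place that knows the switch's polarity. If the hardware
 * changes from active-high to active-low, only this line changes;
 * the rest of the code calls door_is_open() and never sees the bit. */
bool door_is_open(uint8_t raw_port_value)
{
    return (raw_port_value & DOOR_SWITCH_MASK) != 0;  /* active-high today */
}
```

In a real driver the raw value would come from a volatile register read; the point is that inverting the sense is a one-line edit here, not a hunt through the application.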

But the embedded world is the realm of limited resources. When we're down to the last few bytes of ROM even modifying that error message might take weeks as we fiddle with, well, everything to free up a bit of memory.

Performance-bound applications lead to similar headaches. Need just a few more CPU cycles to slightly enhance a feature? Development times can soar. The rule of thumb is a 90% loaded system doubles development time (over one at 70% or less). Figure on tripling the schedule in a system that burns 95% of the processor's time.
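The rule of thumb is easy to put into numbers. A back-of-the-envelope sketch (the linear interpolation between the stated points is my assumption, not part of the rule):

```c
/* Rough schedule multiplier from the rule of thumb: a system loaded
 * to 70% or less is the 1x baseline, ~90% doubles development time,
 * ~95% triples it. Interpolation between those points is a guess. */
double schedule_multiplier(double cpu_load)
{
    if (cpu_load <= 0.70)
        return 1.0;
    if (cpu_load <= 0.90)                       /* 70%..90%: 1x -> 2x */
        return 1.0 + (cpu_load - 0.70) / 0.20;
    return 2.0 + (cpu_load - 0.90) / 0.05;      /* 90%..95%: 2x -> 3x */
}
```

The shape matters more than the exact numbers: the curve goes nearly vertical as the last few percent of headroom disappear.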

Sometimes the tiniest change can have huge repercussions. Suppose an input is running into Nyquist limits. A quick edit of a timing parameter doubles the sampling rate. Except that may bog down other performance-bound parts of the program. If the A/D can't handle the increased speed, a hardware respin might be in the works. And where will you store the extra data? How much more time will the analysis code take to process the supersized buffer?
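The ripple effect is easy to see in code. In this hypothetical configuration sketch (every number is invented for illustration), one edited constant silently doubles RAM demand and halves the per-sample cycle budget:

```c
#include <stdint.h>

/* All numbers are illustrative, not from any real system. */
#define SAMPLE_RATE_HZ  8000u          /* the "quick edit": was 4000u */
#define WINDOW_MS       250u           /* analysis window             */
#define CPU_HZ          20000000u      /* 20 MHz processor            */

/* The buffer grows with the rate... */
#define BUF_SAMPLES     (SAMPLE_RATE_HZ * WINDOW_MS / 1000u)
static int16_t sample_buf[BUF_SAMPLES];  /* now 4000 bytes, was 2000 */

/* ...and the cycle budget per sample shrinks with it. */
#define CYCLES_PER_SAMPLE (CPU_HZ / SAMPLE_RATE_HZ)  /* now 2500, was 5000 */
```

One edited constant, three downstream costs - RAM, CPU headroom, and possibly the A/D itself - and no compiler will flag any of them.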

Twenty years ago an outsider recommended using an RTOS on a system I was building, but of course I knew better. As the project grew, interrupts, timers, and a plethora of OS-like functions sprouted, till it became painfully clear that only an RTOS would unsnarl the convoluted mess. The cost to rip out my mistakes and shoehorn in an operating system ate all of our profits on that job. A cheap change? Hardly.

Sure, it's usually easy to edit a function. That's like telling the architect to add a window to a room. Ask him for a mansion on a 10 x 10 lot, though, and expect soaring costs and schedules.

A great design is one that's malleable. Reasonable changes drop in without massive restructuring. But when the code grows organically, without a design, (or, as Beck puts it: "the larger the scale, the more you must rely on emergence") modifications become ever more dangerous.

XPers mitigate risks with a laser focus on constant, automated tests that validate each change. I wish most of us adopted their philosophy of checking everything, all the time, and of writing tests as we build the system's code. Most traditional test techniques exercise only half the code, a rather horrifying statistic when one considers the size of today's programs. A pretty good team will have around a 5% error rate. In a 100,000 line program that's 5,000 bugs. Normal testing strategies ensure half are shipped with the product. I do believe XP's approach can ameliorate that significantly.
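A flavor of that philosophy in C, with a hypothetical ADC scaling routine (the function and its constants are invented for illustration): the test fails loudly the moment a "cheap" edit breaks the conversion, so it runs on every build.

```c
#include <assert.h>

/* Unit under test: hypothetical 10-bit ADC count to millivolts,
 * 3.3 V reference, fixed-point to avoid floating point. */
int adc_to_millivolts(int raw_count)
{
    return (raw_count * 3300) / 1023;
}

/* The XP-style safety net: run automatically with every change,
 * not once at the end of the project. */
void test_adc_to_millivolts(void)
{
    assert(adc_to_millivolts(0) == 0);       /* floor of the range   */
    assert(adc_to_millivolts(1023) == 3300); /* full-scale reading   */
    assert(adc_to_millivolts(512) == 1651);  /* mid-scale, truncated */
}
```

The test takes minutes to write and microseconds to run; the bug it catches at the next build might otherwise surface in the field.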

But testing alone doesn't lead to great products. Neither does constant refactoring and unending tinkering with the code. It's fun to edit, recompile and test, which is one reason XP is so seductive. It appeals to the puerile programmer in all of us.

There is a lot to like about XP. But I shudder whenever someone chants "just change it, run some tests, and see what happens."

(For an interesting take on agile methods, see the recent paper by Barry Boehm and Richard Turner at http://www.stsc.hill.af.mil/crosstalk/2003/12/0312Turner.pdf).