By Jack Ganssle
Big Balls of Mud
Is software architecture critical, accidental, or disposable? A fascinating and amusing article at http://www.laputan.org/mud/mud.html suggests that despite the lip service we all give to careful design, perhaps the most common structure is A Big Ball Of Mud.
Quoting from the piece: "A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and baling wire, spaghetti code jungle. We've all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems."
The authors suggest that most systems tend towards A Big Ball Of Mud. Even the most carefully crafted applications suffer from hectic maintenance by programmers not entirely familiar with the structure. One patch becomes two; two become four; over time all facets of the original design disappear in a welter of changes.
How many of us have worked on systems where, at the very least, the comments no longer mirror the code? Comment drift is the first sign of an application on its way to becoming A Big Ball Of Mud.
More likely, though, is a product which starts with only the outline of careful architecture, which in turn disappears as developers get deeper into the project. One problem reveals more; soon what existed of the original structure is gone. We manage to get the thing out the door, but it's a mess; maintenance is guaranteed to only make things much worse.
If indeed entropy is the most powerful force of all, then most code will drift into A Big Ball Of Mud. What are our options?
As the authors note, our best choice is to keep the system healthy. We can consciously alternate periods of fixes and enhancements with episodes of consolidation. Stop and fix architecture drift. Get the comments back in order, clean up the sloppy patches, repair and even enhance the structure. Rewrite the ugly stuff. This is a fantastic idea, one we should all start right now. But is your boss that enlightened? Most companies I know view firmware as a necessary evil. Reengineering? Hey, if the damn thing works at all, ship it. Do we need a new feature? Cram it in; after all, it's only software.
Another option: toss the system out once in a while and start over. Yeah, right. I bet you do this all of the time.
The enduring popularity of the third option suggests this is the most viable: surrender to the chaos and wallow in the mire. Accept the limits of maintenance and plan on patching an increasingly impossible-to-maintain morass of code. Great job security, perhaps, but aesthetically unpleasing. We tell our boss that eventually the thing will be impossible to change, but that rarely comes true. Somehow we always manage to squeeze in the latest wishes and repairs.
Embedded products last forever. They're not like PC applications with microsecond life cycles. I've worked on 20-year-old firmware. It's always a disaster, the product of panicked bug fixes and enhancements made to marketing's insane schedule requirements. Perhaps it's inevitable that our beautiful design will devolve into an embarrassing mess.
It's sort of like seeing your perfect little child, now grown up, behind bars. How did this happen? What went wrong? We had such idealistic dreams!
What do you do? Re-engineer, rewrite, or surrender to entropy?