
By Jack Ganssle

Informal Observations

Published in December 2012.

For this, my last column of 2012, I want to share some observations of the state of the industry from my peripatetic wandering this year over four continents.

I've presented my "Better Firmware Faster" seminar on-site to many hundreds of companies and thousands of developers. It was inspired by a couple of decades of work in the embedded tool business, which took me into developers' labs all over the world. What I found then, and what is still true today, is that so many of us are facing the same problems: projects are late and systems get shipped seeded with bugs. In the 80s and 90s engineers tried to deal with these realities by working harder and by superhuman debugging efforts. Today the workload has become unrealistic; most projects are late the day they start and never catch up. The seminar is designed to show ways to avoid these problems, and I get a kick out of talking to the attendees to learn about their unique challenges.

A central observation of the quality movement that revolutionized manufacturing is that products are both cheaper and delivered faster when quality is designed in from the beginning. In the software world this has morphed into a critical aphorism: a focus on fixing bugs will lead to neither on-time delivery nor quality code.

That last clause should cause most of us to rethink our approach to building embedded systems (which is the focus of my seminar). But it hasn't. Most teams embrace the same heroics used for decades.

Some things have changed.

Code is getting better. At least new code is; a lot of people are maintaining old systems that are invariably built on awful software.

Though there are a ton of exceptions, I'm seeing more care going into structural issues: architecture and design. And, again with plenty of counterexamples, code is generally better crafted than it was a few years ago. Names make more sense, and engineers are careful to follow at least some stylistic conventions.

The data shows that this care translates into better products. The entire IT world fixes about 85% of the bugs in their code pre-shipment. Embedded products average a 95% fix rate. That's still too low, but at least our industry is ahead of the pack.

But I'm not convinced it's improving fast enough to keep up with the increasing demands being made on firmware.

I encounter teams that have wholeheartedly embraced one or another of the agile methods. There's a lot of talk about agile; it's a subject that always comes up, and XP and Scrum both get a lot of mind share. The truth, though, is that it's still pretty unusual to find groups that practice these approaches with any sort of discipline or rigor.

Test-Driven Development has an increasing number of adherents, but I find very few whole teams using it; instead, there's the occasional lone wolf practicing TDD.

EEs are less common. Increasingly, teams are forged from CS/CE people or EEs who have never strayed outside the domain of firmware. Again, exceptions abound; the last company I visited this month had a large group that was nearly all EEs. That is so unusual I was quite taken aback.

An increasing number of companies use at least some metrics, the most common being cyclomatic complexity. Essentially none, though, use it to qualify their tests.
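The connection between the metric and test qualification is worth spelling out: McCabe's cyclomatic complexity V(G) equals the number of decision points plus one, and it is also the minimum number of test cases needed to exercise every independent path through a function. Here is a minimal sketch (not a production metrics tool; the function and snippet are illustrative, not from any real product) that approximates V(G) for a single C function by counting branch points:

```python
import re

def cyclomatic_complexity(c_source: str) -> int:
    """Approximate McCabe's V(G) for one C function.

    V(G) = number of decision points + 1. Each if/for/while/case and
    each short-circuit operator (&& or ||) adds a branch. V(G) is also
    the minimum number of tests needed for full path coverage, which
    is how complexity can be used to qualify a test suite.
    """
    decisions = len(re.findall(r"\b(?:if|for|while|case)\b|&&|\|\|", c_source))
    return decisions + 1

# A toy C function with two `if`s and one `&&`: V(G) = 4, so a test
# suite with fewer than four cases cannot cover every independent path.
snippet = """
int clamp(int v, int lo, int hi) {
    if (v < lo) return lo;
    if (v > hi && hi >= lo) return hi;
    return v;
}
"""
print(cyclomatic_complexity(snippet))  # -> 4
```

A real tool would parse rather than pattern-match (this sketch would miscount keywords inside strings or comments), but the arithmetic is the point: a function scoring 15 that ships with three unit tests has, at best, a fifth of its paths exercised.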

Testing is getting worse, driven by crazy schedules and the increasing size of code bases.

Teams continue to get bigger and individual developers complain more about being compartmentalized into narrow portions of a product. In many groups no one has any overall sense of how things work. It's not unusual to find hundreds of engineers - often in many different time zones - working on a single product.

Legacy code is both the best and worst problem faced by many. It's good simply because legacy code is the stuff that typically generates most of a company's revenue. It's a disaster because so much is so awful. Because code bases keep getting bigger, and since the embedded world (at 41 years since the first microprocessor) is well into middle-age, the landfill of old code is huge. Many teams today work only on the fringes of the old software, making minor enhancements or bolting on new features. Few of these people are happy with what they are doing and most feel a weight of impending doom, expecting the old stuff to eventually implode.

Big companies that buy lots of microprocessors rely more than ever on the semiconductor vendors to supply substantial portions of the code base. They have plenty of leverage and as a result it's sometimes hard to draw a line between the vendor's and the customer's engineers. Some of this is also driven by poor documentation of the parts, as well as insanely complex devices, like the SoCs in mobile phones, that are all but undocumentable.

What hasn't changed is that engineers at small- to mid-sized companies are almost uniformly happy in their work. Some big outfits, though, do an amazing job of disenfranchising their people. And that's a darn shame.

Happy holidays, all, and let's hope for a great 2013.