
The Engineer's Lament

Summary: Engineering is a data-driven endeavor. Yet some neglect the known facts.

Does your spouse understand the way you think?

Engineering is an analytical profession that, when done correctly, holds ideas to the cold scrutiny of numbers. Data. It doesn't matter how you feel about something; what counts is the result.

A recent New Yorker piece (http://www.newyorker.com/magazine/2015/05/04/the-engineers-lament) by Malcolm Gladwell makes this point. It discusses the role of engineers in automotive recall offices. Remember the Pinto disaster? Rear-end collisions could result in the cars becoming fireballs. What was wrong with those engineers?

It turns out the numbers just didn't support the passionate cries for change. Ford won the big case that was litigated, in which three teenaged girls died when their Pinto burned.

The article gives non-engineers excellent insight into how we think, how we make decisions and, even more importantly, how we view the world. It paints a stark contrast between us numbers-driven folks and the many others who make decisions based on how they feel.

Of course, our analytical sides never really soften. When we were raising kids my wife would ask why I was always thinking about what could go wrong with them. My answer: "I'm trained in worst-case analysis."

I'm an EE, like many embedded people, one who has spent the last four decades split between designing circuits and writing code to support those devices. The hardware side is unforgiving of emotion or any idea that doesn't push electrons in exactly the way needed. The software side holds results to the "but does it work?" standard.

In hardware we have a body of knowledge. Ohm's Law. Maxwell's equations. The physics of transistors. We can analyze hFE and other parameters to predict a transfer function, and can calculate Q and a resonant frequency to build a tuned circuit that meets certain requirements.
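To make that concrete (an illustrative calculation of my own, not from the article): for a series RLC circuit the resonant frequency is f0 = 1/(2*pi*sqrt(L*C)) and the quality factor is Q = (1/R)*sqrt(L/C). With L = 10 mH, C = 100 nF and R = 100 ohms, f0 works out to about 5.03 kHz and Q to about 3.2, known before a single part is soldered.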

The software world is squishier. Predictions are difficult. Just how long will that ISR written in C take to execute? Most of us can't predict that. Thankfully, it is possible to measure it. Sadly, few do. How do we translate requirements into the size of flash that will be needed? How can one predict stack or heap size?
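It's worth showing how little effort the measurement takes. Here's a minimal sketch for an ARM Cortex-M part using the cycle counter in the CMSIS DWT debug unit; the ISR name and its contents are hypothetical stand-ins for your own code:

    #include <stdint.h>
    #include "stm32f4xx.h"  /* any CMSIS device header; this one is just an example */

    volatile uint32_t isr_cycles_worst;              /* worst case seen so far */

    void timing_init(void)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable trace/DWT */
        DWT->CYCCNT = 0;
        DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start the cycle counter */
    }

    void UART_IRQHandler(void)                       /* hypothetical ISR */
    {
        uint32_t start = DWT->CYCCNT;

        /* ... the interrupt's real work goes here ... */

        uint32_t elapsed = DWT->CYCCNT - start;      /* unsigned math handles wrap */
        if (elapsed > isr_cycles_worst)
            isr_cycles_worst = elapsed;
    }

Divide isr_cycles_worst by the CPU clock rate to get time; leave the instrumentation in through system test and read the variable with a debugger. It won't prove the true worst case, but it replaces a guess with a measurement.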

In hardware we can analyze worst-case situations. Extremes of temperature, component tolerances all ganging up the wrong way: all of these yield to mathematical analysis.
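A trivial example of my own: a 10k/10k voltage divider built with 1% resistors and fed from exactly 5 V nominally yields 2.500 V. Let the upper resistor run 1% high and the lower 1% low, and the output drops to 5 * 9900/(10100 + 9900) = 2.475 V; flip the tolerances and it rises to 2.525 V. The design is bounded, in two lines of arithmetic, before any hardware exists.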

Software is not so clear. Flaws hide, lurking in unexpected places. A storm of interrupts is hard to analyze. Tasking isn't deterministic.

Then there are the fads. These sweep through software engineering like flames in a California conflagration. Rarely are they held to an engineering standard, that of cold, hard analysis. Unfortunately, software processes are very difficult to study. The academic literature is full of papers about this idea or that, but the vast majority of these describe an experiment on a tiny code base created by a handful of developers, and those developers are invariably students with little experience, and certainly none in the real world. Engineers are rightfully skeptical of these toy experiments.

However, we do have a lot of data, of which most in the software engineering community are unaware.

What is the best way to develop code? That question probably isn't even meaningful, given the wide range of applications created every year. A program that will be used by one person twice has very different needs than one that controls the engines on an A380.

Consider the agile community. There are dozens of agile methods. Which ones work? Which is best? No one knows. In fact, there is little hard data (outside of the toy experiments) comparing the efficacy of agile methods against other approaches. This is most definitely not to knock agile; I find some of the agile ideas simply brilliant, though, unhappily, that judgment is not an analytical one, for lack of data.

Some people have said that, sure, we're lacking data on agile, but that's true of other methods too. There's a lot of truth in that. In EE we have had centuries to develop the theory, to benefit from Georg Ohm and the work of others. Software is a very young discipline. We're still seeking the Software Theory of Everything (SToE).

But suppose one were to reframe these questions. For instance: "What are the two most effective ways to reduce bugs and accelerate schedules?"

I wonder what your answer would be.

Mine is simple: formal inspections and static analysis. Why? We have the data. Actually, we have tens of thousands of data points. One source is The Economics of Software Quality by Capers Jones and Olivier Bonsignour, but there are many others.

We know, for a fact, that cyclomatic complexity, the number of independent paths through a function, tells us the minimum number of tests needed to exercise that function completely, making it a good measure of test effectiveness.
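To make that concrete (my example, not one from the data): complexity is the number of decision points plus one, so this little function needs at least three tests:

    int clamp(int v, int lo, int hi)
    {
        if (v < lo)      /* decision point 1 */
            return lo;
        if (v > hi)      /* decision point 2 */
            return hi;
        return v;        /* complexity = 2 + 1 = 3 */
    }

Run fewer than three tests and some path through clamp() ships unexercised. A complexity in the dozens warns you, before testing even starts, that a function will be very hard to verify.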

We know, for a fact, that the average IT software team fails to remove about 15% of the bugs it injects before shipping. We know that embedded teams do better, shipping only about a third of that number, roughly 5% of the bugs injected.

We have data, the beginnings of an Ohm's Law for software engineering. Yet in the embedded world only about 2% of teams use that data to drive their development methods.

My point is that there is a lot of data, and I encourage developers to seek it out and to fold those results into their daily work.

W. Edwards Deming said it best: "In God we trust; all others bring data."

Published April 28, 2015