Go here to sign up for The Embedded Muse.
The Embedded Muse
Issue Number 437, January 3, 2022
Copyright 2022 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com

   Jack Ganssle, Editor of The Embedded Muse

You may redistribute this newsletter for non-commercial purposes. For commercial use contact info@ganssle.com. To subscribe or unsubscribe go here or drop Jack an email.

Editor's Notes


Tip for sending me email: My email filters are super aggressive and I no longer look at the spam mailbox. If you include the phrase "embedded muse" in the subject line your email will wend its weighty way to me.

Quotes and Thoughts

"Unforeseen issues are often not unforeseeable." (Jack's Law of Requirements Elicitation)

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

Damian Bonicatto sent this link to a DIY printable oscilloscope probe for hard-to-probe nodes. Pretty cool!

Freebies and Discounts

January's giveaway is a copy of Chris Hallinan's book Embedded Linux Primer. It's an excellent volume for folks working with Linux. I reviewed it here.

Enter via this link.

Defects Cluster

I had a lot of off-the-record email discussions with readers about bugs after the last Muse (see the next article for some on-record conversations). A common theme was that "we don't know much about bug rates."

Actually, that isn't true. David Akin wrote “Engineering is done with numbers. Analysis without numbers is only an opinion.”

While the data is fuzzy and not as pristine as a formula from classical physics (e.g., F = ma), it does paint a picture. Here's one set of numbers:

  • Barry Boehm (Software Engineering Economics) showed that 80% of the defects are in 20% of the modules.
  • IBM (see Facts and Fallacies of Software Engineering, Robert Glass) found 57% of them are in just 7% of the modules.
  • Gerald Weinberg had sharper numbers, pegging 80% in just 2% of the modules.
  • NSA (see Measuring Software Quality: A Case Study, Thomas Drake) (yes, NSA, those fellows who are listening right now) looked at 30 million lines of their code and found 2.5% of it was responsible for 95% of the problems.

In other words, defects cluster. A little bit of the code is responsible for most of the problems.

Worse, Boehm showed that these problematic modules cost, on average, four times more to develop than their better-behaved brothers. The reason? Debugging. The average team spends about half the schedule in debugging.

Makes you wonder if the other half is "bugging."

I admit to having written some crummy code, software that tormented me for weeks. Every time a bug was fixed another appeared. Often you can tell when the code is so poor; the structure is creaky and embarrassing. Changing a comment seems to break things.

Keep a tally of bugs in each function. It can be pretty informal, but that data will quickly and quantitatively show which functions fall into that small batch of bad code. When a function histograms out of proportion to everything else, you know it needs attention. Most often that means a total rewrite.

Yes, sometimes we blow it and create a monster that's impossible to tame. Likely the design is no good. But now that we understand the problem, rewriting the function will almost always result in a better solution. If we identify these early we'll save money. Remember Boehm's observation that "fixing" a module by trying to beat it into submission costs 4x good code.
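A minimal sketch of such a tally (the function names, the log format, and the threshold are all made up for illustration): record one entry per bug fixed, keyed by function name, then flag any function whose count is far out of proportion to the average.

```python
from collections import Counter

def flag_hotspots(bug_log, threshold=2.0):
    """Return function names whose bug count exceeds `threshold` times
    the average count across all functions in the log."""
    tally = Counter(bug_log)                       # bugs fixed, per function
    avg = sum(tally.values()) / len(tally)
    return sorted(f for f, n in tally.items() if n > threshold * avg)

# Informal log: one entry per bug fixed, recorded by function name.
log = ["parse_cmd", "init_adc", "parse_cmd", "parse_cmd", "uart_isr",
       "parse_cmd", "parse_cmd", "init_adc", "parse_cmd", "parse_cmd"]

print(flag_hotspots(log))   # prints ['parse_cmd']
```

Here `parse_cmd` accounts for 70% of the fixes, mirroring the clustering numbers above; it's the rewrite candidate.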

More on Requirements and Bugs

Readers had comments about the articles in the last Muse about bugs and requirements.

Stuart Jobbins wrote:

In response to your article on KNCSL and TDD.

If we consider that MOST of the errors during a development stem from errors in HUMAN input (yes some are down to tools, but a surprisingly small number) then we should admit:

  1. That technology and tools (and even well-honed process) can help us eradicate (human) error insertion
  2. That ‘testing’ – at whatever stage – is largely a mechanism to detect those errors caused by our human failings during some of the ‘creation’ process
  3. That a ‘test mechanism’ should be a foil to the ‘insertion mechanism’
  4. That review is a ‘test’ process (so needs ‘different’ human input to be valid – including a need to understand what that ‘difference’ means to be effective).

A lot of the truly high-integrity processes (and process standards) understand this and try to document it, with multiple feedback/error-correction paths; a clear differentiation between verification (errors in implementation of some stage) and validation (errors of misinterpretation of customer needs); recommendations on the technologies that support these. They still require the intellectual understanding (and measurement) of the goals.

In my experience, “Requirements tools” and associated processes rarely recognise these facts. Both the verification (did we transcribe it correctly?) and the validation (is it what the customer intended?) of requirements are the biggest sources of error insertion, and the poorest defect-removal processes, with the longest ‘tail’ (defect escapes) and typically the highest costs.

I have lived with some VERY high-integrity processes whose ability to capture errors was demonstrated over many projects, with enviable requirement-to-fielded-solution defect rates (and near-zero fielded errors observed by the customer – the real-time mitigations were operated*, but rarely). This was achieved by judicious use of tools, technology and test scrutiny, as well as by ensuring a wealth of competent people who recognise not only the expertise the individual task demands, but also the weakness of common human failings.

*(In these cases the systematic use of real-time mitigations was a design decision because of the impact to humans of the consequences of both residual, and environmentally-induced transient failures.)

Assuming a ‘good’ (defect-free and correct) software requirement… the ‘signature’ of a good software development process is clearly visible if one marries the ‘defect detection’ time, type and number of detections to the phase of development, as these give credible evidence of ‘escapes’: defects that SHOULD have been detected earlier. I have used this technique to ‘red team review’ failing software projects within hours, by matching randomly chosen component lifecycles (as shown by version control history) against the programme plan (planned activities of test/verify/validate) to give a measure of ‘process effectiveness’.
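Stuart's escape analysis can be sketched informally. Assuming a simple linear lifecycle and hypothetical defect records of the form (phase that should have caught it, phase that actually did), count how many defects leaked past each phase:

```python
# Phases in lifecycle order; a defect "escapes" every phase between the
# one that should have caught it and the one that actually did.
PHASES = ["requirements", "design", "code", "unit_test", "integration", "field"]

def escape_counts(defects):
    """defects: list of (should_have_caught, actually_caught) phase names.
    Returns (total escapes, per-phase count of defects that leaked past it)."""
    leaked_past = {p: 0 for p in PHASES}
    escapes = 0
    for should, actual in defects:
        i, j = PHASES.index(should), PHASES.index(actual)
        if j > i:                       # caught later than it should have been
            escapes += 1
            for p in PHASES[i:j]:       # every phase that let it through
                leaked_past[p] += 1
    return escapes, leaked_past

# Hypothetical records from a version-control / defect-log review.
defects = [("requirements", "field"), ("code", "unit_test"),
           ("design", "integration"), ("code", "code")]
n, per_phase = escape_counts(defects)   # n == 3; "code" leaked 3 defects
```

A phase with a high leak count is exactly the weak link in ‘process effectiveness’ that a red-team review would flag.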

Although not a slavish fan of the higher CMMI levels, this is absolutely the goal of the ‘learning and improvement’ about software development processes that it tries to portray.

If TDD is seen through these ‘understanding the efficacy of the process set’ eyes then, like you, I have a lot more empathy for its champions and users.

High-integrity software development processes, when targeted appropriately, save money (lots of it – the waste is now routinely called ‘Technical Debt’), so they shouldn't really be the preserve of safe/secure development… but I’m preaching to the choir!

Tony Ozrelic:


'There is no glory in getting the requirements right at the outset, but it's the essence of great engineering'

How true!

Yet most engineers dread the thought of interacting with Marketing. 

What follows are some insights I have gained from developing products since the 1970s. Perhaps they will help the reader.

Marketing has three essential tasks. Listed in order of importance, they are:

#1 - Demand Management (how many widgets will we need to build next week/month/quarter?) They work with Sales to come up with this guesstimate, and guessing wrong is very, very expensive (do we need to make 1.0 or 1.1 million widgets next month?), so they spend a lot of time on making a good guess. It is a never-ending, high-stress part of Marketing's job.

#2 - Demand Creation (how can we increase demand for widgets?) Doing things like advertisements, training videos, going to trade shows etc. This is why they will suddenly disappear from time to time (usually attending shows or visiting with major customers) and come back with stories about how the competition has got this new feature, and if we had it in our widget, we could sell more widgets. Make note of this information; likely it will pop up again when it is time for a product refresh.

#3 - New Product Development (what should our next widget be?) This gets the least attention from Marketing compared to the two above, and they are so busy with #1 that they usually will not launch a new product development initiative (or process or program, insert corporate-speak here) unless Management sees that the flow of money is weakening or competitive pressures need to be addressed. It is the least important activity they engage in, until it becomes the most important due to pressure from Management or the market.

Why am I telling you this? To show you that Marketing has to work with forces beyond their control, just as Engineering has to.

Don't take it personally if they seem reluctant to work on new product development; they have a lot on their plate already.

If you want to extract the maximum information/utility out of Marketing, here are some tips:

- Don't expect them to define a product in engineering terms; they are most used to marketing-speak and customer-speak, not engineering-speak. Your Engineering team needs to have at least one person that can converse fluently in these kinds of -speak.

- Most people don't know what they want until they see it; to reduce (NOT eliminate, just reduce) risk, budget time for creating prototypes they can play with, unless they are utterly convinced they do not need them. Even then, Engineering may need them in order to clarify requirements and reduce risk. They may even show them to customers; if so, have someone observant and articulate from Engineering at the demo to watch and possibly run the demo so as to extract more information from the activity.

- Many times folks from Marketing will say things like 'just make it like what we've got now but with features X, Y and Z', or 'make it like what competitor X has, but BETTER, FASTER, and CHEAPER'. Usually this means they do not have the time or the inclination to do new product development. Being a pest will not help you extract new requirements from Marketing, but being helpful and organizing what little you have (even if it is just a list of requirements and features from the current product) and reviewing this with them will show that you are trying to make it easy for them to do their job, which helps you do your job.

- Once you do have a list of things, expect it to change; this will cause schedules to (unusually) contract or (usually) expand.

 To manage this, treat requirements like bugs; rank them in order of importance:

 - A Level 1 requirement HAS to be there at product launch, no excuses (think of delivering a car with no engine)

 - A Level 2 requirement WOULD BE NICE to have at product launch, but can be introduced after launch if it blows out schedule or budget

 - A Level 3 requirement is IMPORTANT to have at product launch, but can be introduced at the next product refresh/software release

 So now your requirements can be placed in a buglist management system (not necessarily the same one that you use to manage bugs) and when things change (usually items move up the list, e.g. a #2 requirement becomes #1 after talking to N customers), the schedule/budget hit can be calculated and the stakeholders (usually Management+Marketing+Engineering+Manufacturing) can decide if the change is warranted.
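Tony's scheme might be sketched like this (the requirement names, levels, and week estimates are invented for illustration): keep the ranked list in a simple structure, and when Marketing promotes a requirement, the schedule hit falls out immediately for the stakeholders to weigh.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    level: int        # 1 = must have at launch, 2 = nice at launch,
                      # 3 = can wait for the next refresh/release
    est_weeks: float  # rough engineering estimate

def launch_scope_weeks(reqs):
    """Schedule impact of everything that must ship at launch (Level 1)."""
    return sum(r.est_weeks for r in reqs if r.level == 1)

reqs = [Requirement("battery gauge", 1, 3.0),
        Requirement("OTA update",    2, 5.0),
        Requirement("warp drive",    3, 40.0)]

before = launch_scope_weeks(reqs)   # 3.0 weeks of Level 1 work
reqs[1].level = 1                   # Marketing promotes OTA after N customer visits
after = launch_scope_weeks(reqs)    # 8.0 weeks: the 5-week hit is now explicit
```

Making the hit this visible is what lets Management+Marketing+Engineering+Manufacturing decide whether the change is warranted, rather than everything silently becoming #1.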

Marketing will be inclined to make EVERYTHING #1, and so you will need to explain the consequences (schedule/budget blowouts, delayed products, un-manufacturable products, angry customers, angry MANAGEMENT) of doing this and that it is in their best interests to rank things properly.

The great thing about this is that it gives you a chance to come up with an architecture that allows for a limited amount of expansion based on what is needed now as well as in the future; it's not bulletproof against major changes, but at least you know what is coming after product launch based on Level 2 and 3 requirements.

- If none of the above works, or you have gaps in requirements, your Basic Policy should be:

 What is the simplest thing that could possibly work?

Once Marketing understands that lack of a spec or requirement causes the Basic Policy to be engaged, you can move forward with preliminary risk reduction ('Oh, the widget has to have warp drive now? We'll need to create and budget warp drive risk reduction activities') and scheduling activities ('No spec for battery life, so per the Basic Policy we assume it is the same as the current product') so at the next review they can see the impact of not engaging fully with the development process.

Please understand, these are tips, not immutable laws of the universe; apply them wisely and you might just make it to the end of the project with your job and sanity intact and a shiny new widget for Marketing to show off at the next trade show.

Three Rules of Requirements

Some years ago, at a conference in Mexico, I attended a talk Steve Tockey gave about requirements. He very ably and succinctly summed up the rules for requirements, which I’ll paraphrase here:

  • A requirement is a statement about the system that is unambiguous. There’s only one way it can be interpreted, and the idea is expressed clearly so all of the stakeholders understand it.
  • A requirement is binding. The customer is willing to pay for it, and will not accept the system without it.
  • A requirement is testable. It’s possible to show that the system either does or does not comply with the statement.

The last constraint shows how many so-called requirements are merely vague and useless statements that should be pruned from the requirements document. For instance, “the machine shall be fast” is not testable, and is therefore not a requirement. Neither is any unmeasurable statement about user-friendliness or maintainability.

An interesting corollary is that reliability is, at the very least, a difficult concept to wrap into the requirements, since “bug-free” or any other proof-of-a-negative is hard or impossible to measure. In high-reliability applications it’s common to measure the software engineering process itself, and to buttress the odds of correctness by using safe languages, avoiding unqualified SOUP (software of unknown provenance), or even using formal methods (actually, the latter is not a common practice, but it has gained some traction).

Failure of the Week

Peter House sent this speedometer reading:

This is from Carl Palmgren. It seems Paypal has been around since the 30 Years War:

Have you submitted a Failure of the Week? I'm getting a ton of these and yours was added to the queue.


Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words. There is no charge for a job ad.

Blue Ocean Gear is looking to hire a firmware developer with 3 - 5 years experience in the areas of battery/energy systems, wireless communications, and navigation. The opening would be for a position in the San Francisco bay area.


NRG Systems is seeking Firmware Engineers to help us develop next-generation intelligent products that will shape the future of global resource sustainability. In this role, you will focus on designing firmware platforms, developing device drivers, bringing up boards, implementing communications protocols, optimizing power consumption, and building embedded applications for microcontroller-based measurement and communications devices. You must be responsive, flexible, and desire to succeed within an open, collaborative peer environment. You also will be part of our global firmware engineering team and play a critical role in ensuring product quality and driving continuous improvement.  Link on our HR system:

Joke For The Week

These jokes are archived here.

What do you get if you cross a mosquito with a mountain climber?
Nothing. You can't cross a vector with a scaler.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster. We offer seminars at your site offering hard-hitting ideas - and action - you can take now to improve firmware quality and decrease development time. Contact us at info@ganssle.com for more information.