
The Embedded Muse
Issue Number 490, May 20, 2024
Copyright 2024 The Ganssle Group

Editor: Jack Ganssle,


You may redistribute this newsletter for non-commercial purposes; for commercial use, contact the editor. To subscribe or unsubscribe go here, or drop Jack an email.

Editor's Notes


Tip for sending me email: My email filters are super aggressive and I no longer look at the spam mailbox. If you include the phrase "embedded" in the subject line your email will wend its weighty way to me.

The Embedded Online Conference has concluded, but the folks there have generously opened up my keynote for free access. You do have to register, but feel free to see the session here.

Bob Paddock sent this:

I expect others have mentioned this; if not, Zilog has discontinued the Z80.

A ~47-year run for a single micro. Have any others run longer?

I'm not counting the 8051, as you can't buy an original Intel 8048 or
8051 as a new part from Intel today.  I know lots of people make 8051 descendants.

The Z80 was ALMOST a fully TTL-compatible part. The fact that the clock input needed a higher voltage drive than the rest of the chip tripped up many early designers.

Bob found that an open-source version of the Z80 is coming out:

Quotes and Thoughts

“The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague.” -- Dijkstra

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

Also from Bob Paddock:

Regarding the government saying C/C++ is unsafe, and the discussion of moving from C to C++ in the last issue, I wanted to point you to CPPFRONT from Herb Sutter.

CPPFRONT is a pre-compiler meant to convert a safe subset of C++ into C++ that any C++20-or-later compiler can handle.

Failures in the Field

Daniel McBrearty has an interesting story:

A long while ago I came across a fault (on a Linux server, not an embedded system, but it could just as easily have been one) which needed two infrequent events to occur simultaneously to manifest. When it did happen, it was completely catastrophic: it caused the kill command to be executed with an argument of -1, which, if you look it up, takes out EVERY process on the host except the caller.

The result, on a headless server, looks for all the world like an intermittent hardware issue. If you have a terminal login, it just dies. Everything dies except one internal process - but you don't know that, you are now looking at a brick.

It was really only because of the diligence of one technician - who wrote and installed some logging scripts on the machine and then reproduced the fault by stress testing - that we ever even got a fault report. Many on the team (including my boss) didn't believe it was a software issue at all. The logs told me otherwise and, eventually, we found the cause: a missing return statement in a C module. (For the record, my impression was that the coders involved were a highly competent team. It was an oversight, and they were as shocked as we were that it got into production code.)
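To illustrate the hazard (my sketch, not Daniel's actual code): in C, a function that falls off the end without a return yields an indeterminate value, and if that value reaches kill() as -1, the call signals every process the caller is permitted to. One defense is a wrapper that refuses suspicious PIDs before they ever reach kill(2):

```c
#include <assert.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical guard, not from the article: pid values <= 0 have
 * special meanings to kill(2) -- 0 signals the caller's process
 * group, and -1 signals every process the caller may signal.
 * Refuse them rather than trust an upstream return value. */
static int safe_kill(pid_t pid, int sig)
{
    if (pid <= 1)        /* reject -1, 0, and PID 1 (init) */
        return -1;       /* caller must treat this as an error */
    return kill(pid, sig);
}
```

Compiling with -Wall (which enables -Wreturn-type on GCC and Clang) would also have flagged the original missing return at build time.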

Here is the very interesting part. We patched the fault and issued a tech bulletin - then we started hearing from customers and dealers: "oh, we saw that once!"

So here is my take on this. The maths of probability is such that this type of fault is very unlikely to be caught in development and verification. Depending on the nature of the events, you might need to run the system for 50 years or so to have a reasonable chance of seeing it just once. And if you do see it only once, on a prototype (not during formal testing) - will you for sure be as diligent as our tech was? Maybe, maybe not. Many people will blink, reboot, shrug, and think about it for a few days - and if it doesn't happen again, forget about it.

Of course, once you make hundreds or thousands of units and ship them, leave them running 24/7, it's another story.

(To give an idea, two processes which run and fire an event about once a second, which lasts 25us, will fire at the same time about every 50 years, by my calculations. For those interested, I asked on the math forum of stack overflow - link at the bottom.)
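As a general back-of-envelope (my formula, not Daniel's actual calculation - the numbers depend heavily on assumptions about the events, which is presumably why he took the question to the math forum): treating the two events as independent streams with rates $r_1$, $r_2$ and durations $w_1$, $w_2$, overlaps occur at roughly

```latex
r_{\text{coinc}} \approx r_1 \, r_2 \, (w_1 + w_2),
\qquad
\text{mean time between coincidences} \approx \frac{1}{r_1 \, r_2 \, (w_1 + w_2)}
```

Even microsecond-scale windows give a nonzero rate, so with enough fielded units running 24/7, the "impossible" coincidence becomes routine.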

Now in this case, there was no human catastrophe - the worst outcome would be some loss of data. But imagine this in the braking system of your car, or in an aircraft.

Hence the subject of this mail: "requirements testing" doesn't go far enough, because it cannot possibly ensure proper operation under ALL circumstances. That is just impossible.

So what do we do? Well, I think most of the things you have advocated: coding standards, code reviews, module testing. Error handlers that leave a persistent record, and - ASSERT ASSERT ASSERT! On top of this I would add even MORE stringent validation for these kinds of intermittent events that operate at the system level (e.g. ISRs). These should be subject to module tests which check each and every path through the code.
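As a sketch of the "persistent record" idea (the log destination and the halt action here are my assumptions, not Daniel's): an assert variant that appends the failing condition and its location to a log before halting, so that a once-in-50-years event at least leaves evidence behind:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical logging assert -- on an embedded target the record
 * might instead go to non-volatile memory, and the halt might be a
 * watchdog-forced reset into a safe state. */
#define ASSERT_LOG(cond)                                             \
    do {                                                             \
        if (!(cond)) {                                               \
            FILE *f = fopen("assert.log", "a");                      \
            if (f) {                                                 \
                fprintf(f, "ASSERT failed: %s at %s:%d\n",           \
                        #cond, __FILE__, __LINE__);                  \
                fclose(f);                                           \
            }                                                        \
            abort();                                                 \
        }                                                            \
    } while (0)
```

The do/while(0) wrapper lets the macro behave like a single statement inside if/else bodies.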

On top of all this, Keep It Simple, Stupid works well for me - but it's hard to pull off when you are working on an old design that has had many "extras" added.

We can also talk about redundancy - but that is really more of the same. Two redundant systems which are occasionally unreliable are just another example. If there is a manual fallback, relays and switches, which is regularly checked, this helps. But how do we address this class of problem at source?

But what is really bugging me is the number of people, even those in responsible positions, who just don't seem to get this. Code of sometimes dubious origin and quality is allowed because:

"We have requirements testing, they will find out if there are problems."

"It's been working fine for years" (even when there are unresolved bugs).

I'm curious about your take on this and that of your readership.

The link:

Slowing Down to Speed Up

"Faster development tools may paradoxically increase the time to complete programming tasks." - Phillip Armour

This has been my observation. In ancient times programmers keyed their programs, one statement per card, on punched cards, which were then submitted to the computer center. A typical run had a 24 hour turnaround. No one could afford dumb mistakes with that cycle time, so the notion of "playing computer" surfaced. The developer would get a listing and execute the code in his head, very carefully, looking for mistakes.

Fast forward to today and the situation is very different.

You're sitting there, feet up on the desk, IR keyboard at hand and lots of windows open on the 32" monitor. Run into a bug, and maybe changing that ">=" to ">" will fix it. Make the change and in a few seconds you're debugging again. If you're lucky, the patch won't work. Did you isolate the real problem with those three seconds of thought? How do you know?

We need debug logs. When a problem surfaces, write it down. Write down what you learn from trace data, single-stepping, and other debugging strategies. When the fix is clear, don't implement it immediately. Write it down first, and then key it into the IDE. Now, with 30 seconds' worth of thought, the odds of getting a correct fix increase.

Have you ever watched a developer spend days chasing a bug? Inevitably he'll forget what tests he's run, what variables he's looked at, and repeat the tests. A debug log makes this impossible.

Finally, go over your log every 6-12 months. Patterns will appear. Identify those, so you don't make the same mistakes again.

Failure of the Week

Fabrizio Bernardini sent this:

This is from Ian McCutcheon:


Have you submitted a Failure of the Week? I'm getting a ton of these and yours was added to the queue.


Jobs!

Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words. There is no charge for a job ad.

Aveox, Inc. is seeking an expert embedded developer in the field of Power Conversion and Motor Control

Experience with Model-based Design, MATLAB, and TI Code Composer is required. An in-depth understanding of Field Oriented Control is necessary as well.

Check us out at

Joke For The Week

These jokes are archived here.

From Bob Paddock:

This is from the Intel 8048 manual in 1977:

"A well placed underscore can make the difference between a s_exchange and a sex_change."

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster.

Click here to unsubscribe from the Embedded Muse, or drop Jack an email.