The Embedded Muse
Issue Number 376, June 17, 2019
Copyright 2019 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com


You may redistribute this newsletter for non-commercial purposes. For commercial use contact info@ganssle.com. To subscribe or unsubscribe go here or drop Jack an email.

Contents
Editor's Notes


Over 400 companies and more than 7000 engineers have benefited from my Better Firmware Faster seminar held on-site, at their companies. Want to crank up your productivity and decrease shipped bugs? Spend a day with me learning how to debug your development processes.

Attendees have blogged about the seminar, for example, here, here and here.

Jack's latest blog: Crummy Tech Journalism.

Quotes and Thoughts

"Errors are not in the art, but in the artificers." Isaac Newton, Principia Mathematica

Tools and Tips


Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

Niall Cooling has a pair of must-read articles about the use of mutexes here and here.

Embedded Artistry (a great name!) has an excellent article about using callbacks in embedded systems.

Is strncpy() safer than strcpy()? You may be a little surprised.

Freebies and Discounts

The fine folks at Joulescope are making one of their Joulescope energy meters available for the June giveaway. I reviewed it here. It samples an astonishing range from 1.5 nA to 3 A, with short bursts to 10 A, at 2 MSa/s.

Joulescope for giveaway

Enter via this link.

MAGA: Making Assert() Great Again

A prediction: your next firmware project will have errors. Hardly rocket science, but the implication, to me, is that wise developers will seed their code with constructs to automatically find at least some classes of these bugs. I call that process "proactive debugging."

One example is liberal use of the assert() macro, one of the most under-utilized assets that C provides; it's rare to find much use of it in real firmware.

For those who have forgotten, assert() does nothing if its argument evaluates to true. If false, it prints the file name and line number and calls abort(). Now, that's pretty awful for many embedded systems where printf() may not exist. But it's easy enough to recode the macro to take whatever action is appropriate in your system. Generally, assert() should somehow indicate an error and then initiate some debugging action, like stopping at a breakpoint or starting a trace.

Languages like Ada and Eiffel have a similar but much more powerful resource called Design by Contract. DbC, like assert(), does runtime error checking, but is also an essential part of the code's documentation. The statements make a powerful, uh, statement about what the code should be doing. We should do no less in C using assert(). For instance:

void function(int arg){
  assert((arg>10) && (arg<20));
  ...
}

This both throws an error if a bad argument is passed to the function, and tells the reader what the function expects.

Consider this function:

float32_t solve_pos(float32_t a, float32_t b, float32_t c){
  float32_t result;
  result = (-b + sqrt(b*b - 4*a*c))/(2*a);
  return result;
}

What could possibly go wrong? Well, if variable "a" were zero, we'd have a divide-by-zero, which could be catastrophic. Except that C often ignores this error. I ran the code through Visual Studio with a=0 and got a result:

Divide by zero

Of course, the result is bogus. What is scary is that the bogus result will propagate up the call chain, with more math being done on it at each level, until the machine decides to pump 100 kg of morphine into the patient's arm.

Better, use assert():

float32_t solve_pos(float32_t a, float32_t b, float32_t c){
  float32_t result;
  assert(a != 0);
  assert((b*b - 4*a*c) >= 0);
  result = (-b + sqrt(b*b - 4*a*c))/(2*a);
  assert(isfinite(result));
  return result;
}

(A great argument can be made to use error-handling code instead of assert(), though it can be very hard to know what to do with a detected problem, like a divide-by-zero. And only use assert() to detect honest-to-God bugs, situations where the code is just wrong.)

The first assert() catches a == 0; the second, an attempt to take the square root of a negative number; and the third, a result that is not a finite IEEE floating-point number. These are all likely programming errors.

assert() is disabled when NDEBUG is defined, which means the code's behavior can change between debug and release builds if the asserted expression has side effects (e.g., assert(a=0), which assigns rather than compares). In Muse 317 I describe a small change that makes this macro much safer.

It has been shown that using lots of assert()s leads to better code, as the macro will find errors that testing may not. A divide-by-zero might not cause a visible symptom, but is surely something we'd like to catch. At least one study shows that more assert()s correlate with fewer shipped defects:

Bugs vs assertions

From Gunnar Kudrjavets, Nachiappan Nagappan, Thomas Ball "Assessing the Relationship between Software Assertions and Code Quality: An Empirical Investigation"

And, since assert() tends to find errors near where the problem first occurs, instead of millions of cycles later when a symptom appears, its promiscuous use greatly reduces the time needed to find a bug:

Cost to fix vs assertions

Blue bugs were found via conventional debugging; those in green were detected using assert(). Derived from L. Briand, Y. Labiche, and H. Sun, "Investigating the Use of Analysis Contracts to Support Fault Isolation in Object Oriented Code"

Assume your code will have errors. Seed it with proactive debugging constructs. Among the most powerful of these is assert().

More on Adding Ground Test Points

Several readers had more ideas about adding test points for a scope ground connection to a PCB, after thoughts on this ran in the last Muse:

Carl Van Wormer wrote:

Installed ground pins are too expensive for my cheapness and are not needed for most of my design workflow. I got tired of soldering a piece of wire onto a ground via (for a scope ground connection) for every troubleshooting effort, so I've added "scope ground" components to almost every board I've designed for the last 10 years. I determined the opening range of typical scope probes and chose the hole size and its spacing from the board edge for an easy connection. The component has an outline that indicates board edge placement for best probe connection results.

Ground test point

Luca Matteini (and a couple of other people) suggested:

Harwin has a series of products for SMT test points with part number S1751, or the smaller S2751/S2761. They're essentially square rings that can be soldered onto a standard 1206 or 0805 footprint, providing a cool automated pick-and-place solution.

Ground test point on a PCB

Maybe a bit tiny for a ground alligator clip, but I think they can work; for sure they're perfect for a scope or logic analyzer probe -- assuming no demanding frequency/impedance requirements, of course, but in that case you wouldn't use test points at all.

Chris Brown had a warning:

Responding to Paul Carpenter's comment in The Embedded Muse 375, I would like to suggest that whilst adding a test point for each supply rail is a good idea, it is important to ensure that there is some space between them.

A board I work with has a small forest of test pins, each carrying a different supply voltage, ranging from -15V to +24V. The PCB designer put them all together in the middle of the board "to make testing easier". However, being so close to each other, it is all too easy to bridge adjacent pins whilst probing, which results in sparks and blown up power supply components.

More on Autonomous Cars and Ethics

Scott Winder responded to a link I posted to Phil Koopman's treatise on ethics in self-driving cars:

I wanted to comment on the question of safety and ethics in self-driving vehicles. As a point of interest, the new buzzword in automated driving safety circles - especially among OEMs - is SOTIF, or Safety of the Intended Functionality, which will be codified as ISO 21448. It was spawned as an offshoot of ISO 26262 when the committee determined (by a narrow vote) that it was a valid target for standardization, and a complex enough topic that it warranted separate treatment. You can find various explanations online, but a quick summary is that it's an attempt to account for the risks that may exist when the system is functioning as designed (ISO 26262 focuses on the risk and mitigation of system or component failure).

When you begin to realize that application of the standard requires you to identify and characterize the unknowns related to a given function, it appears insurmountably ambitious (and indeed, a quote in an EE Times article on the subject mentions "known unknowns" and "unknown unknowns" in a candid look at the discussions that led to the birth of the standardization effort). But a friend put it more succinctly: "Do we really want the system to do exactly what we told it to do?" In other words, it's more of an examination of the requirements than of the implementation.

A basic example would be to question whether we should drive at the speed limit (as specified) when the road is in poor repair or there is slowing traffic ahead. A less basic treatment would add elements such as weather, performance parameters (grip, braking distance, acceleration, etc.) and sensor limitations - you quickly end up with a multidimensional matrix of factors that, considered together, provide a much better input to a system that has to operate in real-world conditions. How this can be practically accomplished remains to be seen.

Though I'm enthusiastic about autonomous cars, I think that it will be a long time before we see affordable and reliable level 5 autonomy. This is a very hard problem. It's probably 90% solved, but the next 10% will require 90% of the development time. As someone said: trains still crash. And they run on tracks!

This Week's Cool Product

The technical press is going gaga this week over TI's new ADC12DJ5200RF ADC. This is an astonishing part that samples at 10.4 GSa/s with 12 bits of resolution. In one mode there's a timestamp feature to mark a specific acquisition. It also includes a digital down converter (DDC) which, if enabled, provides decimation. As I understand the data sheet, the DDC will also mix the signal with a numerically-controlled oscillator, rather like a superhet mixer, to isolate frequencies of interest. There are four of these oscillators, making fast frequency hopping a possibility.

Want to build a wide-bandwidth receiver with this part? Here's the circuit:

TI's ADC12DJ5200RF

At $2786 each, this is a part with limited applicability. Suggestions are for oscilloscope front ends and the like. It appears to be a pre-production part at this time.

Note: This section is about something I personally find cool, interesting or important and want to pass along to readers. It is not influenced by vendors.

Jobs!

Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words. There is no charge for a job ad.

Joke For The Week

Note: These jokes are archived here.

Dejan Durdenic sent this riff on Dallas's one-wire interface: a zero-wire interface.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster. We offer seminars at your site offering hard-hitting ideas - and action - you can take now to improve firmware quality and decrease development time. Contact us at info@ganssle.com for more information.