The Embedded Muse
Issue Number 301, March 21, 2016
Copyright 2016 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com


You may redistribute this newsletter for noncommercial purposes. For commercial use contact info@ganssle.com.

Contents
Editor's Notes

The average firmware team ships about 10 bugs per thousand lines of code (KLOC). That's unacceptable, especially as program sizes skyrocket. We can - and must - do better. This graph shows data from one of my clients, who improved their defect rate by an order of magnitude (the vertical axis is bugs/KLOC) over seven quarters using techniques from my Better Firmware Faster seminar.

The seminar is all about giving your team the tools they need to operate at a measurably world-class level, producing code with far fewer bugs in less time. It's fast-paced, fun, and uniquely covers the issues faced by embedded developers. Information here shows how your team can benefit by having this seminar presented at your facility.

A Williams tube in the Manchester Baby

The Manchester Baby was the world's first stored-program computer, yet it wasn't designed to be a computer; it was a testbed for the Williams tube, which was an early (1948) form of random-access memory that predated even core. I stumbled across a video which describes the machine and includes what appears to be footage of the original unit plus the 1998 replica. It's well worth the 8 minutes if you're a fan of the history of this industry.

I'm now on Twitter.

Quotes and Thoughts

"Software is too important to be left to programmers." - Meilir Page-Jones.

(See here for my comments about this quote).

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

Are you using a commercial or open-source graphics package to add a GUI to your embedded system? If so, which one, and how do you like it? I'll post replies in the next Muse.

Some developers are embracing functional programming. Alas, most references on that subject drown the reader in complex ideas that mask the real issues. Alex Mor sent along this link for a practical article about the subject.


Ultra-low power microprocessor systems are all the rage today, and a number of semiconductor vendors offer MCUs that only have to wave at a passing electron to get enough power to function. As designers we need to understand -- and measure -- the current required by our creations. I've reviewed a number of tools for the job and recently had a chance to play with a new unit from Cmicrotek.

The uCP100 places either an internal sense resistor or your own in series with the power supply and drives an oscilloscope to show how much current a system uses as the drain changes over time -- for instance, in going in and out of sleep modes.

The uCP100 measures currents from 5 nA to 100 mA; its brother, the uPD120, covers 50 nA to 800 mA. Some readers have complained that these sorts of units are little more than sense resistors with an amplifier, but the reality is that it's hard to get this right. Measuring low currents means using either a big resistor or a lot of gain, and op amps struggle to maintain decent bandwidth as the gain increases. When driving a scope you need even more gain, since millivolt-level signals are hard to see.

Unlike most other devices targeting this market, the uCP100 can measure over a 20 volt range; if your system runs from solar power you could see voltages approaching this. What's really unusual is that the output to the scope can swing from 0 to 40 volts. This means at low currents you can zoom in easily yet still catch high-current events.

The scope connects via a standard BNC connection. On the input side, one wire goes to the supply and one of three wires is used to connect to the target system's power input. One is used if you're providing your own sense resistor. Another is for "precision" mode which goes from 5 nA to 100 uA; the other is for "wide-range" mode for 5 uA to 100 mA.

DIP switches select zoom-in, normal, or zoom-out gains. That means you can measure current with high precision over narrower dynamic ranges, or with somewhat less precision over up to a 20,000:1 range.

A critical parameter (which is often not specified) is the sense resistor voltage drop, which in the uCP100 is 7 µV/nA in precision mode and 7 µV/µA in wide-range mode.

An optional break-out board eases connections. It goes between the supply and the uCP100. Here's the breakout board in the foreground and the unit itself behind it:

See the transistor and resistors? To run a realistic test I drove the base of the transistor from an arbitrary waveform generator (a Siglent SDG 2042X). One channel (yellow in the following picture) of a scope is attached to the collector of the transistor; the voltage there is inversely proportional to the current used by the transistor. The other channel goes to the output of the uCP100:

A couple of things to notice. First, the unit is responsive; the scope is set to 500 µs/div. Second, the peak-to-peak swing is a whopping 18.9 volts! That allows for a lot of resolution.

It's a nicely-made piece of equipment and comes with a very complete kit, including all the connectors and grabbers needed to hook up to the power pins. I like it, and feel it's a worthwhile addition to any lab that needs to measure wide dynamic ranges of current accurately. At $495 some may balk at the price, but I feel it fulfills its value proposition... it's about a tenth the cost of Keysight's nifty N2820A current probe.

Freebies and Discounts


This month I'm giving away a Raspberry Pi 2 Model B. The contest will close at the end of March, 2016. It's just a matter of filling out your email address. As always, that will be used only for the giveaway, and nothing else. Enter via this link.

Software is Too Important to be Left to Programmers

I'm not sure what the context for the quote (above) from Meilir Page-Jones was, but it touches on one of my sore points: we're not programmers.

We're software engineers. We build software-intensive systems by reusing code, writing code, purchasing code, modeling, using auto-code generation tools, and a whole host of engineering processes. We use metrics to guide our work and engineering standards to ensure quality. Programming is a hugely-important part of what we do, but it is a subset of the field of software engineering. Calling a software engineer a programmer is like referring to a brain surgeon as a scalpel jockey.

Software engineering is a very young profession. There's much we still don't know, and some of it we're inventing even now. We do have a very important body of knowledge to draw upon (e.g., this). There are known practices that yield good results; practices, alas, too few of us employ. I truly believe that as time goes on most of us will adopt a more formal engineering approach to building software, and we'll use metrics to guide our work.

There is plenty of poorly-crafted code out there. But let's do remember that the world does run on software today, and that code is by and large doing a pretty darn good job of it. When I was a young engineer the average person had never seen a computer in person and had little to no first-hand experience with software. Now software (and especially firmware) mediates many aspects of everyone's lives.

Can you imagine building systems out of transistors or ICs without software? What would a device that implements, say, Excel, built without software, look like and cost? The complexity would be staggering, yet the magic of computer software means one can run a spreadsheet on a ten ounce tablet, almost for free.

Despite the problems, software has been an incredible success story.

On Craftsmanship

In Muse 300 I wrote about craftsmanship in software. Readers had much to say.

Tim Wescott wrote:

I liked the article, however there are two points that I think are missing:

One, I think you're being disrespectful of the Ada community in not pointing out that there's no reason that an all-Ada effort can't be a monumental wad of bugs and dysfunctional code.  Any programmer is perfectly capable of generating a cluster f -- um -- a big mistake.  If most aren't, it probably has more to do with Ada being known as a good tool for top-quality code, leading the people who were going to do a good job _anyway_ to use it, rather than causing people who use it to do well.

Two, I've had some contact with one of the local medical equipment companies, and to my knowledge they (a) program in C and assembly, and (b) do top-quality work (a large portion of the work out of that division is pacemakers, the code for which is a wee bit more important than Bluetooth earbuds). It would be interesting to see the statistics on which industries use what language, and how well the code fares.

On another note, this event came to my attention recently. It sounds like an interesting case study in reliability design, even if it doesn't have anything to do with software per se:

https://en.wikipedia.org/wiki/St._Francis_Dam

Tim's point is valid, and one that's debated pretty fiercely. Why, for instance, has avionics been such a software success story? Is it because the code is done to a rigorous DO-178 standard? Or is it because of an ingrained safety culture in that industry? There's little data to go on.

From Harlan Rosenthal:

I blame my tools for being dull. And then I sharpen them.

I blame my tools for being hard to hold. So I used to modify the handles, before manufacturers started making more ergonomic ones.

All of the problems with C were widely discussed way back in the 1970s when it was released. This isn't news. Fortran used = for assignment and .LT. letter codes for comparison, because there weren't enough symbols on the 026 keypunch; ALGOL introduced := for assignment so that a plain = could be the equality comparison, and lots of follow-ons used it. C went back to the Fortran style, despite everybody already knowing better. It also allowed assignments in the middle of other operations, like intermediate results of calculations, ostensibly as an "economy" of notation, which lent more weight to the ambiguity of = and ==.

Jim Brooks added:

First, you can take off the flame proof underwear. I'm with you!

My disappointment is with the tool developers and the phrase "good enough." Perhaps I'm wrong; what's new? Many moons ago, I was playing with OS/2 Warp. After getting over the install hurdles, the system gave me a 16-digit (or longer) hex error message.

When I attended the OS/2 user's group, a member said "not a problem": if you run program x and enter the x-digit error number, it will tell you what's wrong. The error was "no floppy in drive." That was his solution.

Not once did he think the computer could have displayed the English error in the first place, avoiding the error prone and annoying process of entering the error number into the translator program.

This is what I see with C compilers. Everyone is happy with the current state; error reporting like that in your example just doesn't happen.

How many times have you seen some unintelligible artifact error produced by a missing bracket or semicolon? Couldn't the compiler check for and report the true cause of the error, not just the result?

We are all error prone, so any help we can get from these Gigahertz multicore processors is gratefully accepted. After all, they sit around doing nothing for most of their lives.

Vlad Z wrote:

Your example with the table saw struck a chord: I work with wood often enough to appreciate this. OTOH, the expression you picked can cut both ways: some tools deserve to be blamed, and some craftsmen deserve to be blamed.

You need tools that fit the task. If you have a Phillips screw and use a flat screwdriver, the craftsman is at fault. If you have a screwdriver made in China and sold at Harbor Freight Tools, which stripped on the first try, the screwdriver is to blame.

The C language, in my opinion, is misused. It was designed as a glorified assembler, and it should have stayed with low-level developers: OS, device drivers, and such. If application developers used a different language, not C or C++, then things like blowing the stack would be impossible. It's a long conversation: which characteristics a systems language should have, and which an application developer needs. I don't want to start that one, but let me just say that a number of problems detected by modern compilers or lint-like tools should not be possible in the first place. Like if (x = 2) {}. There are tricks to deal with that, like putting the constant on the left (if (2 == x)), so that the mistyped if (2 = x) {} generates a compiler error, but I should not have to do that. Higher-level language syntax should take care of it.

And yes, a demanding craftsman does blame poor tools. I fondly remember the Tornado debugger from Wind River, which allowed access to every register and would name the bits when you looked at them. Today, I work mostly with Linux, and it is a sad state of affairs that for the last 6 years every company I've worked with has used printk for debugging. A kernel debugger is nowhere in sight. I feel like I'm back in the early 80s. Productivity suffers considerably.

Jobs!

Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intents of this newsletter. Please keep it to 100 words. There is no charge for a job ad.

Joke For The Week

Note: These jokes are archived at www.ganssle.com/jokes.htm.

In Muse 300 I made a comment about huge levels of indirection using C. Tim Wescott sent this in response. Is it a joke or a terror alert?

/* We don't need no stinkin' comments */
#include <stdio.h>
int main(void)
  {
  int ralph = 1;
  int * bob = &ralph;
  int ** sue = &bob;
  int *** mary = &sue;
  int **** tom = &mary;
  int ***** gary = &tom;
  int ****** chris = &gary;
  int ******* frank = &chris;
  int ******** alex = &frank;
  int ********* burnie = &alex;
  int ********** hillary = &burnie;
  int *********** miriam = &hillary;
  int ************ neil = &miriam;
  int ************* barney = &neil;
  int ************** foo = &barney;
  int *************** fighter = &foo;
  int **************** brittany = &fighter;
  int ***************** trevor = &brittany;
  int ****************** newt = &trevor;
  int ******************* robin = &newt;
  int ******************** i = &robin;
  ******************** i = 0;
   
   printf("Well, I %s know what it means\n", ralph ? "guess I don't" : "do too");
   }
Advertise With Us

Advertise in The Embedded Muse! Over 25,000 embedded developers get this twice-monthly publication. For more information email us at info@ganssle.com.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster. We offer seminars at your site offering hard-hitting ideas - and action - you can take now to improve firmware quality and decrease development time. Contact us at info@ganssle.com for more information.