The Embedded Muse
Issue Number 353, July 2, 2018
Copyright 2018 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com


You may redistribute this newsletter for non-commercial purposes. For commercial use contact info@ganssle.com. To subscribe or unsubscribe go here or drop Jack an email.

Editor's Notes

The average firmware team ships about 10 bugs per thousand lines of code (KLOC). That's unacceptable, especially as program sizes skyrocket. We can - and must - do better. This graph shows data from one of my clients, who improved their shipped defect rate by an order of magnitude (the vertical axis is shipped bugs/KLOC) over seven quarters using techniques from my Better Firmware Faster seminar. See how you can bring this seminar into your company.

I'm on Twitter.

Quotes and Thoughts

"I believe that economists put decimal points in their forecasts to show they have a sense of humor." - William Gilmore Simms

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

In the last issue I asked about the fastest interrupt response you've had to deal with. Will Cooke wrote:

In my case, the actual speed wasn't as important as consistency.  The project was a contest entry for the Atmel / Circuit Cellar 2004 AVR contest.  The project (here http://wrcooke.net/projects/vidovly/avrcog.html) was a device that synced with and overlaid text on a standard NTSC video signal.  An incoming horizontal sync pulse caused the interrupt, which then pulsed a row of pixels of the text on top of the video.  Although the response needed to be fast (less than a few microseconds) it was much more important that it happen at "exactly" the same time after each interrupt.  Otherwise the rows of pixels would jitter unacceptably. Since there was no way to ensure equal interrupt response while running code, I used a method similar to the one you related: I halted the CPU when I was awaiting the incoming interrupt.  I don't remember where I got the idea, but I didn't originate it.

It worked pretty well, but there was still some jitter.  I attribute that to running the CPU at 10 MHz rather than an exact multiple of the horizontal sync rate.  I was able to receive characters from the serial port and display 120 (6 rows of 20 characters) overlaid on a video signal using an AT90S2313 with 128 bytes of RAM.  I don't think that code would have been possible with anything other than carefully tuned assembly language.

Kosma Moczek had an explosive situation:

I once built a 230V dimmer that featured output short-circuit protection. I used a basic current shunt resistor and a window comparator to detect overcurrent condition. The crux was that it had to react fast - before the current killed the output MOSFETs. I initially wrote the interrupt handler in assembly; on a 24MHz 8051 derivative the worst-case latency was about 20 cycles, or 0.84us, to disable the MOSFET driver. It worked well - except one condition: the interrupt would be masked if an internal flash write operation was in progress - and we did use the internal flash as non-volatile memory. The result was, well... a bit explosive. I ended up using a hardware solution to quickly crowbar the MOSFET gates to ground as soon as the window comparator triggered - without waiting for the MCU to notice. I also learned to cover the prototype with a tupperware box so the flying bits of TO-220 cases wouldn't shred my face.

Luca Matteini wrote:

I think advances in peripherals and devices have greatly changed interrupt response time requirements. First came peripherals such as DMA controllers to help; now, on any recent device, you'd employ some smart magic in an FPGA to automate even more complex tasks. For one system I had, which sampled 8 channels of 12-bit analog data every 2 microseconds, I simply resorted to an FPGA buffering trick as well.

I have often had to sample data every 20-30 microseconds, even on 8-bit CPUs in the '80s and '90s, but that was still manageable, even in C.

Funnily enough, the most stringent interrupt requirement I remember - along with a neat trick - wasn't that "fast".

At the end of the '80s and beginning of the '90s we used some Z80 controllers from Rabbit Semiconductor at an industrial site. Their Dynamic C language/programming environment was cool enough for us, so I trained a colleague to use it, handling the critical parts myself.

We had an industrial rotary encoder connected, with the classic A-B phases, which had to be monitored first not for speed but for bi-directional step counting.

The Z80 CPU on those systems was connected to a Z80 PIO; combine that with the "moderate optimization level" of Dynamic C and the timing became highly critical at our "high speeds".

I proposed an implementation that my boss accepted with a half-hearted "if you're confident that it works..."

The system ran with two interrupts, one on each of the A and B phases, to capture both directions; after all, it was impossible for the system to run at maximal speed and reverse direction in zero time.

So I developed a "variable interrupt" handler which, at higher speeds, counted steps from only one phase, ignoring the other until the system slowed down again. And it worked very well!
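For readers unfamiliar with quadrature decoding, the standard table-driven approach looks something like this. This is a host-testable sketch of the general technique, not Luca's actual Dynamic C code; the state encoding and sign convention are my own choices:

```c
#include <stdint.h>

/* Classic table-driven quadrature decoder: index with the previous and
   current 2-bit A/B states; the entry is the signed step (+1, -1, or 0).
   Invalid transitions (both phases changing at once) count as 0. */
static const int8_t qdec_table[16] = {
     0, -1, +1,  0,
    +1,  0,  0, -1,
    -1,  0,  0, +1,
     0, +1, -1,  0
};

static uint8_t qdec_prev;   /* previous 2-bit A/B state */
static int32_t qdec_count;  /* accumulated position     */

/* Call on every edge (or every sample) with the current phase bits.
   One full Gray-code cycle adds or subtracts 4 counts, depending on
   the direction of rotation. */
void qdec_update(uint8_t a, uint8_t b)
{
    uint8_t cur = (uint8_t)((a << 1) | b);
    qdec_count += qdec_table[(qdec_prev << 2) | cur];
    qdec_prev = cur;
}
```

Because the table absorbs direction detection, the per-interrupt work is a couple of loads and an add - about as cheap as an ISR body gets.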

Bruce Casne sent this:

The Philips 87C751 had a PWM with NO buffer, which meant the only way to change the comparison value was before the comparison triggered; otherwise unpredictable results ensued. I solved this by putting a one-instruction load at the interrupt vector, followed by a RETI. The total time was less than the minimum usable value in our system, so no problem. Yes, I did check the value for min and max before putting it into the static fetch location for the interrupt.

Steve Strobel wrote:

I once worked on a 68HC16 processor running a foreground loop with relatively non-critical timing and an ISR that generated five channels of tones using sine-table lookups.  If a channel was turned off, it would branch past its tone generation code so it didn't use much processor time.  That worked great when only one or two channels were active, but it couldn't keep up when all five were active, even after we applied every assembly language coding trick we could think of.  The eventual solution was to eliminate all of the branches and turn off the unneeded tone generators by setting their sine table pointer increments to zero so they generated the same value each time.  That made the ISR use the same number of processor cycles no matter how many channels were active.  It was hard to directly measure what percentage of the processor's time that ISR used because just setting an I/O line high at the beginning of the ISR and clearing it at the end pushed it over the edge.  But we could measure the percentage slowdown of the foreground control loop and found that the small number of remaining processor cycles were enough to keep it running acceptably fast.
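Steve's zero-increment trick is essentially direct digital synthesis with branch-free channel muting. A minimal host-testable sketch of the idea (the channel count, table size, and approximate sine values are my own illustration, not his 68HC16 code):

```c
#include <stdint.h>

#define NCHAN     5
#define TABLE_LEN 16

/* One cycle of an approximate sine wave, offset-binary. */
static const uint8_t sine[TABLE_LEN] = {
    128, 176, 218, 245, 255, 245, 218, 176,
    128,  80,  38,  11,   1,  11,  38,  80
};

static uint8_t phase[NCHAN];  /* per-channel table index    */
static uint8_t incr[NCHAN];   /* 0 = muted, >0 = tone pitch */

/* Branch-free mixing step: every channel is looked up every time,
   so execution time is the same whether 0 or 5 channels are active.
   A muted channel (incr == 0) just re-reads the same sample forever. */
uint16_t tone_isr_step(void)
{
    uint16_t mix = 0;
    for (int ch = 0; ch < NCHAN; ch++) {
        mix += sine[phase[ch]];
        phase[ch] = (uint8_t)((phase[ch] + incr[ch]) % TABLE_LEN);
    }
    return mix;
}
```

The design choice is the interesting part: trading a little wasted work on idle channels for a worst case that equals the average case, which is exactly what a hard-real-time ISR needs.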

Daniel McBrearty had a different take:

I can't claim any special experience designing super-fast interrupt hacks - but I'd like to present, if I may, an alternative take.

I have to admit that in general I try to avoid having to do unusual hacks to get components to do what I need them to. If I would need to respond to an event in a fraction of a microsecond, I might prefer to design some hardware to capture the most time-critical parts of the event, and allow a processor to respond in a more conventional fashion - if that were possible, of course.

I feel that as engineers, we are all too easily seduced by the "heroic" model, if I may call it that. There is sometimes a good moment to stand back and ask whether I chose the right tool for the job. I encountered this myself very recently - having struggled to get some I2C hardware working reliably as part of a more complex system - when someone (the person paying for the job) pointed out that I could just as easily use a straightforward serial link. He was right, and I'm embarrassed that I didn't think of it. It cost about a week of my time. (Luckily for me he's an understanding guy.)

(Of course I realise that "back in the day" unconventional approaches could achieve things which are more commonplace now. It's just another point of view, as I say.)

Daniel Wisehart responded to my comments on MISRA-C:

You say: "I believe MISRA is incomplete as it doesn't address stylistic issues. These include the placement of braces, indentation, and the like. Every organization has their own take on these; sometimes they constitute the catechism of a religious war between developers. Fight a war over design issues, not style. Clarity is our goal; when inconsistent styles are used one's eyes get tripped up by odd structures."

I have seen the same thing. My way of solving this problem is to use the "hooks" that most source control systems support to run a Python script that reformats every checkin and checkout to the current company standards. Then the religious wars can be fought over what goes into that script - with one person making the final decision based on the best arguments - but whatever goes into or comes out of source control has consistent formatting. Once a developer doesn't have to do any work to format code the accepted way, I find there are a lot fewer wars. Code as you like, with the knowledge that what everyone else sees will be formatted in the accepted manner, as will the code, written by others, that you check out.

And Keir Stitt had a story with a useful lesson:

I had an interesting issue today debugging a system which was going AWOL despite a reasonably well designed watchdog.

It seems the watchdog was checking that everything inside the processor was going okay, but wasn't testing those special function registers which are set at startup and then forgotten - such as the TRIS (data direction) registers.

It turned out that the TRIS registers for a bunch of signals driving relay outputs would occasionally flip to the input direction. But the watchdog just carried on as normal, and the output latch registers were being asserted okay.

Just adding a few lines of code to test the TRIS registers before clearing the watchdog made a whole bunch of problems go away. The lesson is to consider the scope of what you want the watchdog to supervise.

I have seen this problem many times in the past. Some engineers make a practice of updating all of the pertinent I/O registers frequently; say, on each iteration of a polled loop, or via a periodic interrupt or task.
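The check-before-kick idea is easy to sketch. Here the "hardware" register is modeled as a plain variable so the logic can run on a host; on a real part it would be the TRIS/DDR register itself, and the names and values below are illustrative only:

```c
#include <stdint.h>

/* Hypothetical data-direction register, modeled as a variable for
   host testing. On real hardware this would be memory-mapped. */
static uint8_t TRISB;
static const uint8_t TRISB_GOOD = 0x00;  /* all pins outputs */

static int wdt_kicks;  /* stand-in for the actual watchdog kick */

/* Service the watchdog only after verifying that the set-and-forget
   registers still hold their startup values; repair them if not. */
void wdt_service(void)
{
    if (TRISB != TRISB_GOOD) {
        TRISB = TRISB_GOOD;  /* rewrite the corrupted register */
        /* a production system might also log or count the event */
    }
    wdt_kicks++;             /* kick only after the check passes */
}
```

The same pattern extends to any configuration register that, if corrupted, leaves the system "running" but useless.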

Freebies and Discounts

This month we're giving away the Siglent SDS1102CML two-channel, 100 MHz bench scope that I re-review later in this issue.

Enter via this link.

Rules of Thumb

Capers Jones has more empirical data on software projects than anybody, and he shares it with the community in his various books and articles. The books are tomes only a confirmed software-geek could stomach as they are dense with data. I find them quite useful.

Over the years he has developed a number of rules of thumb. These are approximations only, but are reasonable first-order ways of getting a grip on a project.

Jones doesn't like using lines of code for a metric, and prefers function points. What is a function point? It's a measure of, well, functionality of a part of the software. There are a number of ways to define these, and I won't wade into the pool of competing thoughts that sometimes sound like circular reasoning.

One complaint lodged against function points is they tend to be highly correlated with lines of code. I'm not sure if that is a bug or a feature. Regardless, in C one function point is about 130 lines of code, on average.

Here are the rules of thumb, where "FP" means function points:

Approximate number of bugs injected in a project: FP^1.25

Manual code inspections will find about 65% of the bugs. The number is much higher for very disciplined teams.

Number of people on the project is about: FP/150

Approximate page count for paper documents associated with a project: FP^1.15

Each test strategy will find about 30% of the bugs that exist.

The schedule in months is about: FP^0.4

Full time number of people required to maintain a project after release: FP/750

Requirements grow about 2%/month from the design through coding phases.

Rough number of test cases that will be created: FP^1.2 (way too few in my opinion)

Re-Review of Siglent's SDS1102CML Oscilloscope

A few years ago, I wrote a review of Siglent's SDS1102CML 100 MHz, two channel scope. My impression: I liked it. It packs a ton of value for the money.

Since then I've used it from time to time, even though it's the least capable of the bench scopes here. I've been curious about its durability.

Some early users reported trouble with the power switch; it appears that has been fixed, as I've had no issues.

Two 100 MHz channels for under $300 is hard to beat. Unlike some of the competing entry-level units, each channel has a full set of knobs. Well, "full set" may be overreaching a bit since, like any scope, they comprise gain, position, and enable controls only. But I greatly dislike a single set of shared controls, so the Siglent excels here.

The only problem I've run into is that one of the legs no longer snaps into position.

The display is adequate but doesn't have the resolution of a higher-end model, so letters, while entirely readable, are a little crudely-formed. My Agilent (now Keysight) mixed-signal scope has beautifully sculpted letters and symbols... but it costs $16,000. I also have another Siglent scope, an SDS2304X, which has a great high-resolution display. For $2k it should.

The screen resolution does mean that sine waves and other non-square signals appear a little pixelated.

Less-than-perfect resolution is a tradeoff to get a decent instrument at a low price. As Mick Jagger said, "you can't always git whatcha want."

Occasionally I need a scope on the road. Any of the USB instruments are a logical choice since I already am carrying a laptop. But I find myself bringing the Siglent. None of the USB versions I have matches its 100 MHz bandwidth (though there are plenty of models out there that do). I just find operating a scope with real knobs easier and more enjoyable than manipulating virtual controls via a mouse - or, worse, a laptop's trackpad.

So, after four years with the Siglent, I still think it's a terrific deal, and a decent instrument for home labs or even for professional settings where high-performance isn't needed.

At this writing it's available on Amazon Prime for just $299. That's an incredible price for a decent bench scope. You can have mine - I'm giving it away at the end of July, 2018. It needs a home where it will get more use.

Amazon has a number of reviews of the scope. All but one are positive. My favorite is this: "Looks really cool sitting on my desk. Docked one star because the instructions aren't clear enough on how to make the screen show the squiggly lines like in the photo."

This Week's Cool Product

Switches bounce. Sometimes a lot. I have a Chinese FM transmitter that is almost impossible to turn off: pressing the on/off button while the unit is running almost always results in a quick off-then-on cycle.

When I ask engineers about their debouncing strategy the answer is usually a variation on "I delay x ms", where "x" is a time informed by habit, rumor, or Internet flamewar. Yet in profiling switches I have found some that bounce not at all; others for over 100 ms. Debouncing is an important and interesting subject that I analyzed extensively in this report. The bottom line is that one should dig deeply into the nature of the switch being used.
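Once you've characterized the switch, the firmware side can be as simple as a counter-based debouncer run from a periodic tick. A minimal host-testable sketch - the sample period and count are illustrative and should come from your own measurements of the actual switch:

```c
#include <stdbool.h>
#include <stdint.h>

#define DEBOUNCE_COUNT 5   /* e.g. 5 consecutive samples at 10 ms = 50 ms */

static uint8_t db_count;
static bool    db_state;   /* current debounced state */

/* Call periodically (say, from a 10 ms timer tick) with the raw
   switch reading. The debounced state changes only after the raw
   input has disagreed with it for DEBOUNCE_COUNT consecutive samples. */
bool debounce(bool raw)
{
    if (raw == db_state) {
        db_count = 0;                       /* input agrees; reset     */
    } else if (++db_count >= DEBOUNCE_COUNT) {
        db_state = raw;                     /* stable enough: accept   */
        db_count = 0;
    }
    return db_state;
}
```

Any bounce shorter than the window is simply absorbed, and the worst-case response time is known and bounded.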

Alternatively, one can now buy bounce-free momentary-contact switches. Logiswitch sells switches with bounceless outputs. They come in a variety of styles. The company sent me samples, and all seem extremely well made and suitable for use in a NASA blockhouse; these are not cheap controls for your $29 toaster.

Typical switches offered by Logiswitch

Examining the switches, it appears they are all SPDT types going to an IC that presumably acts as an SR flip-flop, which is guaranteed bounceless. But they're a bit more sophisticated than that. There's a hardware acknowledge cycle: when your system detects the switch closure, it responds with an ACK cycle that clears the asserted "switch closed" output from the Logiswitch. That greatly simplifies the code, as there's no convoluted logic to wait for the switch to be released (plus all of the debouncing code). There's also an intriguing "toggle" output which changes state every time the switch is pressed.

The company also makes ICs that debounce arbitrary SPST switches, and these support the ACK handshake protocol.

Logiswitch also has a nice tutorial about debouncing techniques. Recommended.

Note: This section is about something I personally find cool, interesting or important and want to pass along to readers. It is not influenced by vendors.

Jobs!

Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words. There is no charge for a job ad.


Joke For The Week

Note: These jokes are archived at www.ganssle.com/jokes.htm.

Have you heard of that new band "1023 Megabytes"? They're pretty good, but they don't have a gig just yet.

Advertise With Us

Advertise in The Embedded Muse! Over 27,000 embedded developers get this twice-monthly publication. For more information email us at info@ganssle.com.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster. We offer seminars at your site offering hard-hitting ideas - and action - you can take now to improve firmware quality and decrease development time. Contact us at info@ganssle.com for more information.