The Embedded Muse
Issue Number 261, May 19, 2014
Copyright 2014 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com

You may redistribute this newsletter for noncommercial purposes. For commercial use contact jack@ganssle.com. To subscribe or unsubscribe go to https://www.ganssle.com/tem-subunsub.html or drop Jack an email.

Editor's Notes


Court testimony about the recent Toyota ruling makes for interesting - and depressing - reading. A lot of code was involved, and some of it was safety critical. Yet it seems the firmware was poorly engineered. No doubt the typical mad rush to get to market meant shortcuts that, at the time, probably looked like money-savers. A billion dollars later (with many cases still pending) that judgment looks more foolish than sage.

After over 40 years in this field I've learned that "shortcuts make for long delays" (an aphorism attributed to J.R.R. Tolkien). The data is stark: doing software right means fewer bugs and earlier deliveries. Adopt best practices and your code will be better and cheaper. This is the entire thesis of the quality movement, which revolutionized manufacturing but has somehow largely missed software engineering. Studies have even shown that safety-critical code need be no more expensive than the usual stuff if the right processes are followed.

This is what my one-day Better Firmware Faster seminar is all about: giving your team the tools they need to operate at a measurably world-class level, producing code with far fewer bugs in less time. It's fast-paced, fun, and uniquely covers the issues faced by embedded developers. Information here shows how your team can benefit by having this seminar presented at your facility.

Quotes and Thoughts

"Passwords - Use them like a toothbrush. Change them often and don't share them with friends.'' -Clifford Stoll

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

Ray Keefe also likes the Rigol DS1102E oscilloscope:

We also have a Rigol DS1102E, which we use as a workhorse DSO, and for most of our work the 100 MHz bandwidth is adequate. It even has FFT! For $399 it is excellent value. My only complaint is the single set of knobs shared by the A and B channels, and having to select which channel they apply to. It would be worth paying the extra $50 for two sets of knobs, even if that made the scope 25 mm wider.

So I concur with Robert Burke's review.

We have trialled several DSOs in this same price bracket and this is by far the best in our experience.

Fabien Le Mentec has a nice piece about using the Beaglebone PRU:

I recently published a short intro to using the Programmable Realtime Units of the Beaglebone platform. It has helped us simplify our design, so I guess it may help other people as well. The article is here:
http://www.embeddedrelated.com/showarticle/586.php

Portuguese speakers may find this site, out of Brazil, interesting. Sistemas Embarcados means "embedded systems," which is what the site is all about.

Spectrum Analyzers

In the last Muse I provided a link to a (recommended) Agilent application note about spectrum analyzers, and noted that you'd need some background in radio theory to understand it. Many people had questions about that, and some asked for an explanation.

First, when describing the picture I ran of a scope's FFT I mislabeled the signal levels. It's WBJC at -59 dBV, and the station at 100.7 MHz is stronger, at -47 dBV:

FFT of FM radio band

The purpose of a spectrum analyzer (SA) is to show the amplitude of a signal in the frequency domain. The upper trace in the picture is that of the signal from an antenna in the time domain - that is, time is the horizontal axis. That's what oscilloscopes do: they show amplitude in volts vs. time. The bottom trace is the same signal in the frequency domain: the horizontal axis is frequency. It spans 90 to 110 MHz, covering most of the commercial FM broadcast band (88 to 108 MHz). The two labeled peaks are FM radio stations. I used a scope doing a fast Fourier transform (FFT) to get the lower trace; SAs get the same sort of data in a completely different way.
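
For the curious, here's a minimal sketch in C of the math behind a scope's FFT mode. It's a naive DFT rather than a real FFT (and real instruments add windowing and much more), and the test signal of a single sine wave is made up, but it shows how a column of time-domain samples becomes amplitude vs. frequency:

    #include <math.h>
    #include <stdio.h>

    #define PI 3.14159265358979323846
    #define N  64   /* number of time-domain samples */

    int main(void)
    {
        double x[N];                        /* time-domain record         */
        for (int n = 0; n < N; n++)         /* test signal: sine in bin 5 */
            x[n] = sin(2.0 * PI * 5.0 * n / N);

        for (int k = 0; k < N / 2; k++) {   /* one output point per bin   */
            double re = 0.0, im = 0.0;
            for (int n = 0; n < N; n++) {
                re += x[n] * cos(2.0 * PI * k * n / N);
                im -= x[n] * sin(2.0 * PI * k * n / N);
            }
            double mag = sqrt(re * re + im * im) / N;
            if (mag > 0.01)
                printf("bin %d: %.2f\n", k, mag);  /* peaks at bin 5 */
        }
        return 0;
    }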

SAs are expensive; why not just use a scope and FFT? The instruments have very different specifications. For instance, a decent spectrum analyzer will have a displayed average noise level (DANL) many orders of magnitude lower than any scope can achieve. This means the device can sample extremely weak signals. Where it's tough to see a 1 mV signal on a scope, a decent SA will give meaningful results on sub-microvolt signals, though SAs normally use dBm rather than volts.

To understand how an SA achieves its magic one must understand how superhet radios work. Don't worry - even for non-hardware people, this should be pretty easy to understand! (The following is simplified, but there are many excellent works about radio theory).

Edwin Armstrong was one of the most prolific and interesting early electronic engineers, and in 1918 he invented the superheterodyne receiver, also called "superhet" for short. To this day this is the most common type of receiver. It gets around the limitations of the crystal set and those of other 1918-era radios.


Block diagram of a superhet radio, from Wikipedia.

The superhet has an amplifier that boosts the antenna's signal. Not a lot; at this point the signal is a jumble of noise from all sorts of transmitters. But then the magic of heterodyning takes place. A mixer takes both the RF signal and a sine wave from a local oscillator and "mixes" the two. The result is both the sum and the difference of the two inputs.
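
That "sum and difference" behavior isn't magic; it falls straight out of a trig identity. Multiply two sinusoids and you get components at the sum and the difference of their frequencies:

$$ \cos(2\pi f_{RF}t)\,\cos(2\pi f_{LO}t) = \tfrac{1}{2}\cos\!\big(2\pi(f_{RF}-f_{LO})t\big) + \tfrac{1}{2}\cos\!\big(2\pi(f_{RF}+f_{LO})t\big) $$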

Suppose you'd like to listen to a station broadcasting on 100 MHz. If the local oscillator is producing an 80 MHz sine wave, then the mixer would output peaks at 180 MHz and 20 MHz (plus other signals we're not interested in). That is, your station, inside the radio, is now at those two frequencies.

A filter selects one of those images. Since they are so far apart, it's relatively easy to separate them. Generally the radio uses the mixer's lower frequency; the one passing through the filter is called the intermediate frequency (IF). Most radios today repeat the process, mixing the first IF down to a lower second IF, and sometimes even more stages are used. Each stage allows sharper filtering, and thus better station selectivity. And, since the bandwidth of the signal being passed through the filters is so narrow, enormous amounts of gain are possible, increasing the radio's sensitivity.

A demodulator (AKA detector) extracts audio, which is amplified and fed to speakers.

One way to implement a radio is to use filters at fixed frequencies. To tune to a different station you change the local oscillator's frequency. This approach can greatly simplify the electronics since the filters don't need to be tunable.

A spectrum analyzer is, to a first approximation, just a superhet radio with a local oscillator that sweeps from a low to a high frequency. Generally it does this very quickly. As the oscillator sweeps the display shows amplitude at each frequency.
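
In code terms the idea looks something like the following sketch. The set_lo_hz() and read_if_power_dbm() hooks are entirely hypothetical (stubbed here with a fake station at 100.7 MHz so the sketch runs stand-alone); a real instrument does all this in hardware, far faster, with calibrated filters and detectors:

    #include <stdio.h>

    static double lo_hz;

    /* hypothetical hardware hooks, stubbed for illustration */
    static void set_lo_hz(double hz) { lo_hz = hz; }

    static double read_if_power_dbm(void)
    {
        /* pretend one strong carrier sits at 100.7 MHz */
        return (lo_hz > 100.6e6 && lo_hz < 100.8e6) ? -50.0 : -90.0;
    }

    int main(void)
    {
        /* sweep 90 to 110 MHz in 100 kHz steps, recording the
           amplitude at each frequency - that's the SA's display.
           (In a real SA the LO is offset from the displayed
           frequency by the IF.) */
        for (double f = 90e6; f <= 110e6; f += 100e3) {
            set_lo_hz(f);
            double dbm = read_if_power_dbm();
            if (dbm > -80.0)
                printf("%.1f MHz: %.1f dBm\n", f / 1e6, dbm);
        }
        return 0;
    }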

Consider the vast difference from an oscilloscope! The scope's front end is basically just an amplifier feeding an A/D converter. It's sucking in everything it can with no selectivity. Internally a scope looks pretty much like any bit of electronics. Take apart a spectrum analyzer and it looks like plumbers gone wild, with waveguides and heavily-shielded boards.

Today, SAs are fabulously complex and often combine heterodyning and FFTs. The front end, though, remains radio-like with a swept local oscillator, though that may be divided into bands.

I mentioned that SAs generally display amplitude in dBm instead of volts. A dB is ten times the log of a power ratio:

$$ \mathrm{dB} = 10 \log_{10}\!\left(\frac{p_1}{p_2}\right) $$

A 3 dB drop means the signal power is half of what it was. Actually, a drop by half is 3.01 dB down, since 10 log10(0.5) = -3.01, but everyone rounds it off to 3. Bandwidth, like the bandwidth of a scope, is often specified at the 3 dB point, which is really important to understand. At its rated bandwidth a 100 MHz scope may pass only half the power - about 71% of the amplitude - of the 100 MHz signal that's really being probed.

When using dBs with voltages the formula is 20, not 10, times the log of the ratio. That's because power is V²/R, and the log of V² is 2 log(V).
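
In symbols, the voltage form is:

$$ \mathrm{dB} = 20 \log_{10}\!\left(\frac{v_1}{v_2}\right) $$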

dBs are unitless, but sometimes we want units. A dBm is a dB referenced to 1 mW; p2 in the equation above is 0.001 watt. 1 milliwatt is 0 dBm; 1000 mW (1 W) is 30 dBm; and one million watts is 90 dBm. With SAs we're often working with very low-level signals, so levels are negative: -30 dBm is one uW.
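
If you'd rather let the compiler do the log arithmetic, here's a minimal sketch of the conversions (the function names are mine):

    #include <math.h>
    #include <stdio.h>

    /* convert between watts and dBm (dB referenced to 1 mW) */
    double watts_to_dbm(double w)   { return 10.0 * log10(w / 0.001); }
    double dbm_to_watts(double dbm) { return 0.001 * pow(10.0, dbm / 10.0); }

    int main(void)
    {
        printf("%.0f dBm\n", watts_to_dbm(0.001)); /* 1 mW ->   0 dBm */
        printf("%.0f dBm\n", watts_to_dbm(1.0));   /* 1 W  ->  30 dBm */
        printf("%.0f dBm\n", watts_to_dbm(1e6));   /* 1 MW ->  90 dBm */
        printf("%.0f dBm\n", watts_to_dbm(1e-6));  /* 1 uW -> -30 dBm */
        return 0;
    }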

dBs and dBms may seem a bit hard to grok at first, but with a bit of practice they become a natural way to think about amplitudes. It's a lot easier to write "-30 dBm" than to count the zeroes in 0.000001 watts.

So there you have it - almost 100 years ago Armstrong revolutionized radios with the superheterodyne receiver, which is also the basis of the spectrum analyzer. Software defined radios aim to connect the antenna directly to the A/D, perhaps with a bit of amplification first, but at this point that approach can't reach the performance of the century-old superhet.

Reflections on a Career

Harley Burton, a long-time Muse reader, retired recently. He sent the story of his career, which I found illuminating:

I didn't take the straight route to get here by any means. In fact, I started my career majoring in Music the first time I went to college. Long story there, but after two attempts at college, I joined the Navy for 6 years. This was the last 6 years of the Vietnam War. Fortunately, I missed combat duty and, more importantly, I was introduced to professional grade communications systems. I had had electronics as a hobby since I was 9 years old, but in the Navy I was trained as a Communications Technician, in both operation and maintenance. When I got out, I knew that was my career path. So I entered West Virginia University in January 1975. This was a blessing and a curse. The low in-state tuition was a very good thing, and being a vet, I was older, more settled and more dedicated to my career selection. However, it had been 8 years since I had taken any math, so my skills were very bad and pretty much have remained that way till today. I should have taken refresher algebra at least, but since I was going on the GI Bill, I didn't get paid unless I was on a normal degree path. But I digress...

I was recruited out of WVU by Rockwell International's military products organization, here in Richardson, Texas. Those were fun times. The military budgets were flowing to replace all of the hardware we had lost in Vietnam and to upgrade many old systems. I was racking-n-stacking equipment and was in heaven. As time went on and projects finished, I felt it was time to leave and try out another military contractor just down the street. Fortunately, I had learned the Intel i8080, i8085, i8048, i8051 and i8086/88 microprocessors and their toolsets, so I was able to locate a new job in one afternoon. Times were good then.

I went to my new company full of myself and secure in my microprocessor knowledge, and quickly buried myself up to my neck. I had been asked to design a communications switch, but in reality what I was designing was a real-time operating system. Neither my manager nor I realized the scope of the work till I got into it. I failed miserably, so after 6 months I left. This was the first job at which I truly failed and, frankly, it seriously undermined my confidence.

But as fate would have it, I was picked up by another company after searching for one day. There, I was made a Group Leader on a large seismic project for the Air Force. I stayed at this company for 4 years and thoroughly enjoyed the work. It was an interesting project and the personalities were very interesting indeed. There were some real deep-seated emotions which were allowed to fester and grow. I inserted myself in the middle as a liaison and that seemed to work. We were able to get the project done and it worked very well. At the completion of that program, the company scaled back from 750 to 85, the normal size of the company. Again, I beat the layoff list by 3 days, and found a new job.

The next company hired me as a Group Head and Project Leader with as many as 32 direct reports. I wanted to stay there till I retired, which would have been 20 years later. Obviously, I didn't make it. It was very interesting work, though: more communications systems design. I had a lot of good projects there and worked with some of the best young engineers that I have had the pleasure of meeting. My 4¾ years there were very enjoyable. I was promoted to Department Head, demoted back to Group Head and restructured a few times while I was there, but the people were absolutely the best. Then tragedy struck again, and this time I didn't beat the layoff list. I was out for 10 weeks, but I found a job with a 50 mile each-way commute through the center of Dallas.

The next company built Postal mail sorters. This was my first foray into the commercial sector. Things were very different. Schedules were tighter, cost was very critical and again, I was working for a company with some very strong emotions among the employees. Everything was decided by the President. I was told when hired that I would have "total responsibility and zero authority" for the outcome of this project. Fortunately, I had good people working for me and through powers of persuasion, I was able to get things done. Because of the long drive and the fact that it wasn't really in my area of interest, I left there after 1½ years and was hired by a startup to develop a multi-function modem.

ITC was run by a couple of very good businessmen who also had very creative ideas. Right in the middle of my development, we were suckered in by the original Nigerian scam. When it finally shook out, the owners had lost $250,000+, which took all the development money, so we couldn't get our product to market, and after another 1½ years I was faced with unemployment again. An opening happened at Rockwell for a Field Applications Engineer, so I stepped into that opportunity.

I spent 10½ years at Rockwell, all through the dotcom boom and bust. It was the best job I ever had. The way I describe it is that every day was a new day: you never knew what you'd be doing till you checked your voice-mail and email. It was always interesting. Most of my customers were really creative, trying to put our devices to work in new and exciting applications. After the telecom bust of 2000 I hung on for a few more years, but then it all crashed and I was laid off. My office was closed down after decreasing from 15 people down to just one: me.

For the next 10 years, I worked for 4 companies: one for a year, one for 6 months, then 2 years and finally 3 years. These were all applications engineer positions with their usual customer interface. After the Rockwell layoff, each stretch of unemployment lasted about 6 months; the last one, however, was 18 months. That was the toughest of all.

Now that I've bored you with essentially my resume, I have some thoughts. I started with some real high hopes. As a new electrical engineer, I thought Rockwell would be my employer forever. I come from a mill town where some employees have worked 40-50 years, and I hoped I could do that at Rockwell. Then an engineer two desks over was laid off after 25 years with the company. This was a terrible shock to me; I had never heard of such a thing. It was at that time that I began looking around in case I needed to find a new job.

My career was 12 years as a military contractor and the rest in the civilian sector. Both were great. The challenges of the military business and the (sometimes) advanced technology were very rewarding. The constrictions were frustrating. This was equally true of the civilian sector. Most of the projects were very interesting and something I look back on with pride.

I met some truly amazing people along the way, as co-workers, employees, and managers. I have had both good and bad managers. Some were micro-managers, while others were my favorite kind: they would give you a task and the resources you might need, and then let you go work on it as long as you kept them informed. I never directly supervised hourly workers, but I have always felt that, by and large, the professionals all wanted to do a good job. I frequently had to prod some employees to finish on time. We developed a saying that has applied more than I ever thought: "the only way to get the job done is to shoot the engineer." In a couple of cases I would have to draw a line in the sand (chalkboard) and tell my engineer, "Now just go build this. It's good enough." At one company there were some real prima donnas; keeping them busy and separated so they could get their work done was a job in itself.

Over the years, I have had to terminate many people. Only two were terminated for cause and they just didn't give me any choice. In every other case, this was the hardest part of my job. I much preferred to recommend my people for awards, raises and kudos than punishments. I still do.

I've seen a LOT of changes in 35 years. Hardware just keeps getting smaller and smaller: we've gone from the i8080 to ARM microprocessors, and from slide rules to smart phones. Software development tools were nonexistent when I started writing code; now we have languages as powerful as C++, Python, etc., and we've moved from assembler to structured programming to object-oriented languages. Analog voltmeters have given way to spectrum analyzers you can carry in your pocket and plug into your laptop. It has been an amazing change.

Okay, so what has been the point? Did I make an impact on humanity? No, I don't think so. Did I make some useful products? Absolutely. Would I do it again? Yes, of course.

So I asked Harley what advice he'd give to a young engineer:

My only advice is how I handled my career. Never be afraid to try new things; in fact, look for new opportunities and go for them. Don't be afraid to ask for help when you need it. Don't be afraid to fail. You will, so give it your all, but keep your resume up to date and always remember your family comes first.

I think another bit of advice lies in his stories of disruptions and layoffs: save money. With money in the bank one has options.

Rate-Monotonic Scheduling

Normal preemptive multitasking is not deterministic. At any point a perfect storm of interrupts and task demands can cause a system to miss a deadline. In a hard-real-time system, this may cause catastrophic failure.

One well-known solution is rate-monotonic scheduling (RMS; it also goes by the name rate-monotonic analysis). With RMS one gives the highest priority to the tasks that run most often. Given the execution time and period of each task, a bit of simple math shows whether the task set is schedulable. That is, one can prove that the system will always meet its deadlines.
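
The classic version of that math is the Liu and Layland utilization bound: a set of n periodic tasks is guaranteed schedulable under RMS if the total utilization (the sum of each task's WCET divided by its period) doesn't exceed n(2^(1/n) - 1). Here's a minimal sketch in C with a made-up task set; note the test is sufficient but not necessary, so a set that fails it may still be schedulable:

    #include <math.h>
    #include <stdio.h>

    /* Liu & Layland test: schedulable if sum(Ci/Ti) <= n(2^(1/n) - 1) */
    static int rms_schedulable(const double *wcet, const double *period, int n)
    {
        double u = 0.0;
        for (int i = 0; i < n; i++)
            u += wcet[i] / period[i];               /* total utilization */

        double bound = n * (pow(2.0, 1.0 / n) - 1.0);
        printf("utilization %.3f, bound %.3f\n", u, bound);
        return u <= bound;      /* sufficient, not necessary, condition */
    }

    int main(void)
    {
        /* hypothetical task set: WCET and period in milliseconds */
        double wcet[]   = {  1.0,  2.0,  5.0 };
        double period[] = { 10.0, 20.0, 50.0 };
        printf("schedulable: %s\n",
               rms_schedulable(wcet, period, 3) ? "yes" : "no");
        return 0;
    }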

Phil Koopman's May 12 blog posting is a must-read about RMS.

Though RMS is often pushed as "the" solution to determinism woes, I am somewhat troubled by it. It fails without quantitative knowledge of both the worst-case execution time (WCET) and the period of each task. Period is problematic when tasks are initiated by external events (which is an argument for a design where that does not happen). WCET can only be determined ex post facto, after the code is written. One does a careful design and a tedious implementation, and only then, late in the game, do we know if our system will be reliable. WCET is especially tough because it may vary hugely depending on inputs. Floating point execution times can be all over the map depending on the arguments. How can one be sure that the tests elicited the worst of the worst-case timings?

Then there are the inevitable changes. Marketing asks for some feature change. To ensure that the slightly-modified system is still schedulable, one has to re-measure all of the WCETs. Not impossible, of course, but a huge time sink that will, in the shipping frenzy, be neglected more often than not.

Phil agreed that these measurements are required, and stressed how important this is for safety-critical systems. It's just the cost of doing business in that space. True, but where does that leave non-safety-critical devices, which make up the bulk of the market?

Phil is a big proponent of adding run-time code that monitors system performance. Perhaps an API could be defined where each task checks in on entry and exit; the OS or other component then monitors timing. That argues for even more slack in the CPU's loading. And I'd want the ability to get a report about observed timings, so one could see how close the system is to failing.
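
Such an API might look something like the following sketch. The names (task_enter(), task_exit(), now_us()) are hypothetical, with now_us() standing in for whatever microsecond timebase your platform provides:

    #include <stdint.h>

    extern uint32_t now_us(void);          /* assumed platform timebase */

    #define MAX_TASKS 16

    static uint32_t start_us[MAX_TASKS];
    static uint32_t worst_us[MAX_TASKS];   /* observed WCET per task */

    void task_enter(int id) { start_us[id] = now_us(); }

    void task_exit(int id)
    {
        uint32_t elapsed = now_us() - start_us[id];
        if (elapsed > worst_us[id])
            worst_us[id] = elapsed;        /* new worst case observed */
    }

    /* report hook: compare observed worst cases against the budgets
       used in the schedulability analysis */
    uint32_t task_worst_us(int id) { return worst_us[id]; }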

What's your take? How many of you are using RMS, and if not, what do you do?

Jobs!

Let me know if you're hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words.


Joke For The Week

Note: These jokes are archived at www.ganssle.com/jokes.htm.

Steve Leibson sent a link to a programmer's t-shirt that I sure want: http://dashburst.com/humor/programmer-definition/

Advertise With Us

Advertise in The Embedded Muse! Over 23,000 embedded developers get this twice-monthly publication.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster.