The Embedded Muse
Issue Number 227, September 3, 2012
Copyright 2012 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com

You may redistribute this newsletter for noncommercial purposes. For commercial use contact jack@ganssle.com. To subscribe or unsubscribe go to https://www.ganssle.com/tem-subunsub.html or drop Jack an email.

Contents

  • Editor's Notes
  • Quotes and Thoughts
  • Tools and Tips
  • What I'm Reading
  • Continuing Education
  • Multitasking
  • Don't Be Cute
  • A Look Back - The Op Amp
  • Jobs!
  • Joke For the Week
  • About The Embedded Muse

Editor's Notes

After nearly a quarter century of monthly publication, in May the print edition of Embedded Systems Design (ESD) magazine folded. The magazine succumbed to a dearth of advertising dollars and the high price of printing and mailing a physical publication. It’s tough today to compete with the Internet. Happily, ESD will continue at embedded.com.

But on-line publishing suffers from its own flaws. The high cost of print meant editors carefully culled content. Poor articles were rejected; a big part of editors’ work was to find articles and authors with ideas valuable to the reader. The Internet model is exactly the opposite: ad dollars stem from eyeballs, so many (maybe most) sites are just Google magnets. The goal is massive amounts of so-called “content”, regardless of quality, meant to drive hit counters to stratospheric levels. The result is plenty of blogs and blather (and an increasing number of highly biased vendor-written articles) that the reader, instead of an editor, is forced to sift through.

Most of these web sites have corresponding email newsletters. These, too, are designed simply to draw eyeballs to the site and are therefore generally little more than mostly content-free lists of links.

Just before hearing about ESD’s demise I wrote in the Muse about print's importance in the digital age. An avalanche of readers agreed. Most (though not all) wrote that they don’t care for reading articles on-line. They find print more user-friendly.

Engineering changes constantly. New ideas, products, algorithms and strategies appear daily. We need magazines, whether in print or on-line, to keep up with the state of the art. Practically nothing today in this field resembles electronics as it was when I was a young engineer! Even the pedestrian soldering iron is often replaced by an SMT station. We have always relied on the publications to keep abreast of the field.

Does anyone remember Computer magazine (not the IEEE version)? Or Electronics? Those were beefy, packed with fabulous content. Like ESD they are gone. EDN and EETimes are just shadows of their former selves.

Since ESD died we here at The Ganssle Group have been struggling to find a way to fill the void, and gave a lot of thought to starting a print magazine for the embedded community. Alas, the numbers just don’t add up. Instead, we’ve decided to beef up The Embedded Muse, and this issue is the first installment of the new version.

Yes, it’s now in HTML, which will annoy some. But readers have been after me for years to use a more modern format than the text-only version. HTML will allow me to use graphics: charts, graphs, pictures and the other sort of material you would expect in an old-fashioned print publication.

This isn’t, and won’t be, a print magazine. But my goal is to make it print-worthy: great content that you’ll want to print and perhaps even save in a binder. Content that aligns with so many letters from readers who said, in effect, “if the material is good I prefer to read it at my leisure, not when I’m stuck behind a computer screen.”

What won’t be in here is the kind of stuff that ticks me off. Like reader tracking, which is a component of most email newsletters today. Hover over a link and it surprisingly goes to constantcontact.com or some other link-tracking service. Sure, that helps the company target its marketing. But it does not serve the engineer.

The very first Muse went out in June of 1997. In the ensuing fifteen years it has grown to over 26,000 subscribers. Not an issue goes out that doesn’t get at least dozens – sometimes hundreds – of replies. I appreciate them all and publish the most interesting. Please keep them coming! And I hope you enjoy the new format of The Embedded Muse.

Quotes and Thoughts

There are two ways to write error-free programs; only the third works. – Alan Perlis

Tools and Tips

Rodrigo Flores wrote: I just noticed you don't have any distributed VCS systems in your tools page (well, there's darcs but I don't know that one). So here's another contribution to your page...

Although someone already pointed to an article summing up the differences between centralized and distributed, there's no mention of the specific systems. The main FOSS competitors (in my opinion) are: Bazaar, Git, and Mercurial (hg). You can find them at:
http://bazaar.canonical.com/en/
http://git-scm.com/
http://mercurial.selenic.com/


For those using Windows, I recommend the Tortoise-flavoured GUIs:
http://wiki.bazaar.canonical.com/TortoiseBzr
http://code.google.com/p/tortoisegit/
http://tortoisehg.bitbucket.org/

For those still stuck with centralized systems, a quick reminder: you can still have a central repository with most distributed VCS systems.

As a side note: I'm using all three of them (kind of), and while I favor Mercurial a bit, I'm happy to use any of them.

What I'm Reading

Designing Analog Chips - a wonderful book (the PDF is at the link) by Hans Camenzind, inventor of the 555 timer, who passed away in August.

Software Lifecycle Metrics; Enabling Quality, Security and Productivity - Shan Bhattacharya's paper, which will be presented at the upcoming Boston Embedded Systems Conference. And if you're attending the ESC, be sure to say "hi!" I'm presenting three talks.

When Is a Cyberattack a Use of Force or an Armed Attack? - From the August 2012 print issue of IEEE Computer. It's not particularly deep or insightful, though the subject is fascinating.

Skilled Work, Without the Worker - How robots are replacing all sorts of workers. Foxconn is installing a million robots on their production lines!

Continuing Education

Amr Bekhit wrote: I was listening to the Amp Hour episode #93 recently, in which they interviewed Tom LeMense to get his views on electronics in the automotive industry. One of the things that caught my interest about LeMense's history was how he changed jobs in order to work with more experienced engineers. 20 minutes into the interview, LeMense talks about how he was working for a car alarm company after he graduated and, at the age of 25, had become the company's "go-to guy" on radio design. Instead of feeling pleased with himself, he realised something was wrong, so he started searching for other places where he could work with more experienced engineers and started work at Ford Electronics.

The interview reinforced in me the importance of continuing my professional education during employment. I currently work for a small company that I joined straight out of university a few years ago. When I joined, the only engineer in the company was the owner, and he's a mechanical engineer. I was the first EE in the company (we've since hired another fresh graduate) and so unfortunately don't have the luxury of being exposed to more experienced engineers to learn from, although it has been nice to run things "my own way". In my case, I feel I have an even greater responsibility to keep up to date and continuously learn, as the company is placing the responsibility for all embedded systems development at our feet.

The primary way I keep abreast of the embedded world is through the web, by subscribing to various electronics engineering sites and reading the articles there. I did attend ESC UK a few years ago and learnt a huge amount there. But I still don't feel that that's enough, and there's a huge world out there that I'm not exposing myself to. I would love to hear your and other Embedded Muse readers' advice and experiences on continuing professional education and on making sure you don't fall behind.

Multitasking

Richard Wall sent a link to an article about multitasking (that is, in terms of trying to do more than one thing at a time, not in reference to operating systems): http://www.thenewatlantis.com/publications/the-myth-of-multitasking.

Best quote: “Workers distracted by e-mail and phone calls suffer a fall in IQ more than twice that found in marijuana smokers.” But it is a long article, which may be hard to understand if email and texts are coming in at the same time...

Don't Be Cute

What do you think of the following construct?

while(42)
  {
    ptr=0;
  }

It’s part of a putchar() function – provided by the compiler vendor – that I found while playing with a cool little 16 bit microcontroller. There were no comments, of course. There are a couple of bits of nastiness in this, but let's look at one.

The initial while statement threw me for a minute. Why did it take an argument of 42? Obviously 42 is non-zero, so the code is nothing more than an infinite loop. But why 42? Was it a magic number? Was there some meaning to this that eluded me? Perhaps a bifocal-eluding small “i” prefixed the number, turning this into a variable.

Eventually the significance sank in. The mice in Douglas Adams’ Hitchhiker’s Guide to the Galaxy built a grand computer that pondered the question “what is the answer to life, the universe, and everything” for ten million years before answering: “42.” That answer meant as little to the mice as it does to us, but kept Zaphod Beeblebrox, Arthur Dent and Ford Prefect tumbling into one misadventure after another, while Marvin the depressed robot whined about everything.

So, apparently the person who coded this while loop selected 42 to be cute. It’s non-zero and will keep the loop running forever. But it tripped me up for a bit, so it was a bit of cuteness that effectively reduced the code’s readability.

The deluded programmer could have used a symbolic notation; it is, after all, generally a Bad Idea to embed numeric constants in the code. Maybe something like:

while  (meaning_of_life_etc)

with a

#define  meaning_of_life_etc 42 

buried in a header file somewhere. That’s a lot more disciplined. If the meaning of life, the universe, and everything were to change, a quick edit of the header file would fix the program everywhere.

But this is even foggier than the original version. Now we don’t know anything about meaning_of_life_etc without searching through dozens of .h files. Is it a constant? Global? Does an ISR change it invisibly in the background?

This reminds me of the time my college assembly-language instructor just about blew a gasket when I submitted a card deck (this was a long time ago) that looked like:


Read  input data1
Read input data2
Compute data1 times data2 to result
Output result

The assignment was to read two inputs, multiply them and print the result, in Univac 1108 assembly language, which of course this code doesn’t resemble at all. But that machine had a very powerful macro preprocessor, so, just to be ornery, I defined the above constructs as macros. Was it valid assembly? Maybe in some technical sense, but by getting very cute I submitted code that wasn’t standard, wasn’t maintainable, and that no other programmer could work on.
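
Coming back to the vendor’s while(42): the loop needed no cleverness at all. Here’s a minimal sketch of how that fragment might read instead (ptr is just the placeholder from the snippet above; everything else in the vendor’s putchar() is omitted):

for (;;)        /* idiomatic "loop forever" - no magic number to decode */
  {
    ptr = 0;    /* whatever the loop body actually needs to do */
  }

Or use while (1); either form says “infinite loop” without sending the next developer off to the header files, or to Douglas Adams.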

Some years ago a friend who owned a compiler company told me about an odd experience he had had. One day three burly guys draped in suitcoats showed up. In a Kafka-esque scene they demanded that the company remove a word used as a variable in a demo program supplied with the compiler… but they refused to identify the word. It was classified, it seems, a code word denoting something really important to them but meaningless to my pal. In an odd dance the suits pointed vaguely at listings while looking away… because he wasn’t allowed to know this secret. Finally, he figured it out and made the change.

It seems the secret had been revealed in a book about the NSA. “For fun”, a young programmer used the word as a variable name. The agency didn’t share in the developer’s guffaws.

Software has to do two things: it has to work, of course. But just as importantly, the code must clearly convey its intentions to the next developer. Those are basic coding rules. An extra bit of wisdom might be to avoid dispensing national secrets in the source.

And don’t be cute.

A Look Back - The Op Amp

When the microprocessor first appeared it was just another component; a very cool little IC that, like any IC, could add some functionality to a product. But early parts, like the 8008, needed an enormous amount of companion logic. Electrical engineers - EEs in US parlance - designed complex boards that made the CPUs into useful computers. Then, generally while the PCB was being designed (by manually placing black tape which represented tracks on sheets of mylar), the EE would realize that, gee, a bit of software would be needed as well. He - almost always a "he" - would crank out some assembly code to burn into EPROM. Electronic products were produced by people with a deep understanding of electronics, and, too often, not even a vague notion of software engineering.

The reverse is true today. Some estimates peg software at 80% of the engineering cost of a product. Fewer EEs populate development teams.

Yet our work is still electronics; a firmware person pushes electrons through circuits in complex ways. There's no such thing as a one or a zero; these are abstractions that mask the gritty truth zipping between components. Today managers bombard my email in-box with complaints that their people don't have the grounding in electronics needed to really understand the implications of their work. This disconnect will get worse, since EE degrees have been in decline for a long time.

So I've been considering starting an electronics section in the Embedded Muse; a biweekly series that takes the novice from Ohm's Law through transistor theory and into logic and analog circuits. The aim wouldn't be to create an EE, but to give software people a grounding in the essence of what is going on in that board that hosts their code. Comments on this idea are greatly appreciated.

But there's another aspect to this: the path we've taken in just a century from a world that didn't know how to manage an electron to that 100 billion transistor, nearly free, mobile phone. Most of those transistors are logic and memory, what we think of as the essence of embedded systems. But some of the most important are for the analog sections. For analog is the reality of our products: the bridge between idealized ones and zeroes and the sensors and actuators that touch the real world. Analog is crucial to embedded systems.

In Muse 225 I mentioned TI's free "Handbook of Operational Amplifier Applications." It's a great introduction to one of the most important components in the analog world. The op amp is the shape shifter of electronics; by connecting it in different ways it can be an amplifier, oscillator, integrator, differentiator, and do all sorts of math. Its functionality is limited only by the imagination.
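
To put a little meat on that claim, here are the textbook ideal-op-amp relations for two of those connections (standard results, not quoted from TI's handbook):

   Inverting amplifier:   Vout = -(Rf / Rin) * Vin
   Integrator:            Vout = -(1 / (R * C)) * (integral of Vin dt)

Swap the feedback resistor for a capacitor, a diode, or something stranger and the transfer function follows along; that's the shape shifting.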

But where did it come from? Did it spring forth unheralded from Bob Widlar when he designed the game-changing uA702 in 1964? The uA702 was the first monolithic op amp - that is, the first one on a single piece of silicon. That part reportedly cost an astonishing $300 ($2200 in today's diminished dollars).

Op amps have been around for a long time. While sorting out some pictures from a recent visit to The Computer History Museum (a must-visit site in Mountain View, CA) I came across a photo of Philbrick's K2-W. This, the first commercially-available op amp, made its debut in 1953.

K2-W op amp

The K2-W

The K2-W had a lot of interesting features. Though transistors existed at the time they were unbelievably expensive. (Bell Labs got a royalty on every one, with the exception of those used in hearing aids. The reason? Alexander Graham Bell had been a teacher of the deaf and the company wanted to honor his memory). So the amplifier used a pair of 12AX7 vacuum tubes, high gain dual triodes (a triode gives the same sort of functionality as a transistor, though it is more akin to a FET than a bipolar part). The 12AX7, first introduced in 1946, is still made today - in Russia.

The "12" in 12AX7 refers to the filament voltage. Filaments were heated to emit (directly or indirectly) a stream of electrons between the cathode and plate in the vacuum enclosed by the tube's glass envelope. A little bit of negative voltage on the "grid", a wire mesh between those two elements, would repel the negatively-charged electrons and modulate the flowing stream. Hence, it acts as an amplifier.

I see these on the Internet for prices from $10 to the better part of a thousand bucks. At $10 each, a smart phone would cost half a trillion dollars using 12AX7 technology (remember, there are two triodes in each 12AX7).

The tube used a nine-pin "miniature" pinout. That is, nine small pins sprouted directly from the glass at the bottom of the tube, so the part was much smaller than older octal devices. "Octal" stems from the eight much bigger pins used on earlier tubes. Yet the K2-W op amp's pinout was itself that of an octal tube.

12AX7 dual triode

A 12AX7 dual triode

The K2-W's datasheet has more interesting details. It needed plus and minus 300 volt power supplies, due to the use of the twin tubes. Working on electronics in the tube era was shocking at best and could be lethal. Like all of today's op amps there were positive and negative inputs. The open-loop gain, a critical parameter which is infinite in a perfect op amp, was 15,000. That's not far off of today's LM324, though the latter part costs about a dime in quantity and contains four amplifiers. The K2-W's input impedance was a decent 100 MΩ or better (again, infinite in a perfect device) and the output impedance could be under an ohm (zero is ideal).

K2-W op amp schematic

The K2-W's schematic. Check out the two NE-2 neon light bulbs used in series as a voltage reference for the triode on the right. Cool! Neon bulbs have a voltage/current curve somewhat like zener diodes, though the breakdown is on the order of 60-120 volts. Also interesting is the uuF designation on the capacitors. In the olden days that meant micro-microfarad. Today we call the unit a picofarad. And, although the 12AX7 had a 12 volt filament, that was center-tapped so it could be powered with 6.3 volts.

The bandwidth was "over 100 KC." That parameter is kilocycles, which all US engineers used instead of kHz until the late 60s or even into the 70s.

At $22 each ($187 today) these were not cheap, but were affordable for a lot of applications.

A 1956 description of the device is delightfully written and offers a good introduction to the use of op amps. It includes a lot of practical circuits, even for digital applications (like a monostable multivibrator - AKA one-shot). There's even a simple design that takes roots (e.g., square root) of an input; it uses a vacuum tube in the op amp's feedback loop. That threw me for a loop till I remembered how we use transistors in exactly the same way for the same reason today.
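
The trick generalizes, and it's worth a moment to see why (a back-of-the-envelope sketch assuming an ideal inverting configuration, not taken from the 1956 publication): the op amp drives its summing junction to a virtual ground, so the input current Vin/R must be absorbed by whatever sits in the feedback path. If that element is a square-law device whose current goes as k times the square of the voltage across it, then

   Vin / R = k * Vout^2,   so   Vout = sqrt(Vin / (k * R))

with the sign conventions glossed over. Put a diode or a transistor junction there instead and you get a log amp, which is the same reason transistors show up in feedback loops today.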

Here's a snippet of the publication's very non-geeky prose:

prose

But where did the concept of the op amp come from? It seems that many different people contributed, and depending on the source, there's quite a bit of disagreement about the inventor(s). Here's what seems to be truth, or at least truthy.

Wikipedia and other sources claim the device was invented by Karl D. Swartzel Jr. of Bell Labs, who filed patent 2,401,779 in 1941. It is for "electrical calculating devices and particularly to a device for obtaining the sum of a plurality of electrical voltages."

Swartzel patent

The amplifier in Swartzel's patent

Note that there's no non-inverting input. But the invention does have a feedback loop, a key characteristic of the op amp. Input resistors "1," "2," "3" and feedback resistor "16" are configured exactly as we would build an adder today.
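
For reference, here is the relationship those resistors implement, assuming an ideal amplifier (this is the textbook inverting adder, not a formula quoted from the patent):

   Vout = -R16 * (V1/R1 + V2/R2 + V3/R3)

Make the resistors equal and the output is simply the inverted sum of the inputs.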

Reputedly this circuit was used in the M9 artillery director built by Bell Labs and used in WWII. I've only been able to find cursory references to the M9, but its ideas were enshrined in post-war patent 2,493,183. That shows the use of op amps. Lots of op amps. One wonders how they managed to make this reliable considering the percussive effect of each salvo.

One of the M9's co-inventors is Sidney Darlington. Could he be the same Sidney Darlington who filed a patent for the Darlington pair in 1952? (That is, the use of two or more transistors in a unique way to get really high gains.) Both the M9 and Darlington pair patents were granted to a Darlington who worked at Bell Labs, so presumably they are the same person.

But other patents predate Swartzel's, and more importantly, other problems existed that were getting engineering attention.

The telephone dates back to about 1875 and its origin is clouded in legal disputes. Gray and Bell are the main protagonists, though plenty of others vied for credit. But while the lawyers filed mountains of briefs, in just two years engineers laid the first long-distance phone line. It covered 60 miles in California. And in 1880 the progenitor of American Telephone and Telegraph Company started work on a nation-wide telephone system.

Electronics did not yet exist. Getting a signal through thousands of miles of noisy and lossy wire was a problem in those pre-amplifier days. Engineers found that inductance could alleviate some of the problems, so long wires had "loading coils" at regular intervals.

They weren't enough. By 1911 a 2100 mile line between New York and Denver, complete with loading coils, sort of worked but the telephonic messages were all but indecipherable. A number of people at Bell Labs set to work on the problem. The vacuum tube had become available, and it was clear this device was the amplifier the phone system needed.

1915 saw the opening of the first transcontinental phone line with loading coils at 8 mile intervals and eight vacuum tube repeaters. Alexander Graham Bell spoke his famous command to Watson, now not in the other room, but across the country.

Unfortunately, amplifiers weren't stable; they tended to oscillate, especially as the gain increased. Indeed, Edwin Armstrong invented the regenerative radio in 1914, which worked by using positive feedback to create enormous gain. But these radios tended to break into oscillation.

Other problems were being addressed as well. A Bell engineer named Harold Black was tasked with making amplifiers linear. A perfect amplifier has a linear transfer function, but tubes exhibit quite a bit of variation which leads to distortion. One morning in 1927 while taking the Lackawanna ferry to work he realized that by using a bit of negative feedback the gain of the amplifier would be reduced, but its linearity greatly improved. A year later he filed patent 2,102,671 for a "Wave Translation System" in which he explained the use of negative feedback in amplifiers.

Black patent

Black's circuit. Note feedback loop 21.
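
The math behind Black's insight is compact (standard feedback theory, not lifted from the patent): if the raw amplifier has gain A and a fraction B of the output is subtracted from the input, the closed-loop gain becomes

   Acl = A / (1 + A*B)

For large A*B that's roughly 1/B, set by stable passive parts rather than by drifting, nonlinear tubes. A variation in A is suppressed by the same (1 + A*B) factor, which is exactly the trade Black proposed: give up gain, buy linearity.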

His colleagues were not interested, due in part to the intellectual divide at the Labs. Black's bachelor's degree meant those with doctorates held him in disdain. Mere engineers were thought not to have the math skills to deal with transfer functions and the like, though the patent is almost unreadable due to the density of math. And no one liked the idea of reducing the gain of an amplifier.

The history becomes muddied at this point as Black's half-century later recollections are at odds with some of the papers of the time. But his invention seems a reasonable candidate for first op amp. He wasn't alone, of course; just as every important invention seems to have many parents it's clear that others in the US, Britain and the Netherlands were working on the same idea at about the same time.

Black continued work on the concept and partnered with Harry Nyquist and Hendrik Bode, two other giants of electronics whose ideas are familiar to every EE, to refine the ideas behind feedback that later became the basis of all control theory.

His single-ended input design doesn't resemble the modern differential configuration we're used to. It's possible that the first differential amplifier was invented by B. H. C. Matthews in 1934 but I haven't been able to track down the paper. Regardless, during the latter part of the 1930s many individuals on both sides of the Atlantic were publishing improved op amp designs.

Karl Swartzel's 1941 op amp, referenced earlier, was important, and a lot of later work benefited from the design. Its gain of 60,000 was quite remarkable for the day.

Decades ago Heathkit sold the EC1 analog computer, which was basically nine vacuum tube op amps with their inputs and outputs brought to binding posts. One could wire these together in various ways to solve equations and simulate physics. Interestingly, these were single-input op amps, very simple single-tube affairs. Check out the user's manual to see the astonishing array of problems one could solve. It would be fun to wire up a modern version of the EC1 using just a couple of LM324s and repeat the experiments in the manual.
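
For a flavor of how such a machine "simulates physics," here's the classic analog-computer recipe for simple harmonic motion (a sketch assuming ideal integrators, not taken from the EC1 manual). Each integrator computes

   Vout = -(1 / (R * C)) * (integral of Vin dt)

so two integrators wired in a loop, with an inverter to fix the sign, force the first integrator's input to satisfy

   d2x/dt2 = -w^2 * x,   where w = 1 / (R * C)

and a scope on any node shows a sine wave whose frequency is set by the resistors and capacitors you plug in.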

Jobs!

Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intents of this newsletter. Please keep it to 100 words.


Joke For the Week

Note: These jokes are archived at www.ganssle.com/jokes.htm.

Harold Kraus sent the following in response to the "my favorite tools" item in the last issue of the Muse:

A late addition to the favorite tool contest: the wet finger. Some of my uses of wet fingers:

  • (archaic) Turning the pages of ESD
  • Application of an organic, mildly saponifying, liquid cleaning agent and subsequent abrading
  • Distribution of anti-fogging agent on a sight glass
  • Application of thin, organic, gel lubricant, especially to interior surfaces; for example, lubricating the inside of the end of a hose before inserting a barbed fitting.
  • Crystal verification test (also, play a verrilion)
  • Juvenile oral stereotypy (< not a typo!), especially with the thumb.
  • Medium to High-voltage detection.
  • Adult expletive suppression; for example, after high-voltage detection.
  • Sampling of dry evaporate residues for subsequent organic testing for presence (or absence) of characteristic gustatory signatures (for example, saline, acidic, saccharide, glycerol, detergent, or cuprous compounds) generally in early stages of root cause failure analysis.
  • Forming nylon cord immediately after thermal seizing (wetting prevents digital adhesion and provides ablative thermal protection)
  • Applying an organic, water-wash solder mask when your bottle of Liquid Paper[TM] has dried out.

Estimating temperature of a metal item, resistor, IC, etc. (with references to 17th-18th century alchemical degrees of temperature, when wet fingers >were< used to measure temperature):

  • Not cold to a little warm 75-95F (2nd degree of Heat / Heat of Animals)
  • Comfortably warm indicates 100-120F (3rd degree of Heat / Heat of the Bath)
  • Painfully hot indicates 130-212F (3rd degree of Heat / Heat of Scalding) (Speed of evaporation indicates proximity to 212F)
  • Sizzle indicates over 212F and definitely beyond most commercial or industrial temperature ratings (4th degree of Heat / Heat of Boiling)
  • Cooking or burning smell indicates somewhere over roughly 350F (4th degree of Heat / Heat of the Retort)
  • Casting a shadow on the wall indicates well over 1000F (5th degree of Heat)
  • If the wet finger sticks (without burning smell), it is somewhere under 32F
  • Comfortably cool indicates 55 - 75F (1st degree of Heat or Cold)
  • Chilling cold indicates 32 - 50F (depending) (2nd degree of Cold)
  • Stinging cold indicates under 32F (3rd degree of Cold / ice and salt)

Chambers, Ephraim (ca. 1680-1740), et al., A Supplement to Mr. Chambers's Cyclopædia: or, Universal Dictionary of Arts and Sciences. In two volumes (1753).

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster.