By Jack Ganssle
In 1966 my Dad gave me a wonderful present - The Radio Amateur's Handbook, then as now the standard reference work for ham radio enthusiasts. Having just entered the turbulent teenage years, its $4.00 price was beyond my modest means; 30+ years ago four bucks was a lot of money. Though my ham license was several years away, the book was such a fantastic compendium of things electronic that I pored over it much as kids today are mesmerized by Harry Potter.
I was a geek then, as now.
The Handbook covered all things electronic, though it was perhaps just a bit behind the times. As I recall transistors weren't mentioned, let alone ICs, though both of these technologies were reasonably mature even then. Hams didn't use such exotic components; the vacuum tube was king.
Older readers surely remember those days. Today even the simplest electronic device sports a hundred thousand transistors; tubes, though, were so bulky that their multi-cubic-inch volume contained only two active components. The 12AX7, one of the more popular devices, contained two "triodes" - each the equivalent of a transistor. Another popular tube, the 6146, was a "tetrode" that used a pair of grids, giving only one active element in a large, fragile glass package.
Where transistors can only be understood in terms of quantum mechanical effects, tubes were simple. A heated filament emitted an electron stream through a wire mesh (the grid) towards a metal target (the plate). Bias the grid with a small voltage and the electron beam was repelled or attracted by the grid charge. In some higher power applications you could actually see the electron beam's ghostly echo as a blue glow.
A few years went by; with a high school part-time job I managed to purchase the 1970 edition for $4.50. That book has followed me through 30 years of jobs, moves, and boats. It's on my desk now as I write; in fact for three decades it's been one of the most consistently-used books in my library.
By 1970 the ham radio community was using semiconductors in some applications, as reflected in the Handbook's coverage of these devices. Yet the majority of the circuits were still tube designs.
Yet the book remained, and remains, a treasure trove of information for anyone working with electronics. It bridged the gap between engineers and technicians. Need to know how to compute reactance? Use the formulas provided, or refer to a graph that's more immediately useful in practical work. Forget the resistor color codes? Want to know the right way to bundle wires? How much current can 12 gauge wire carry? The Handbook contains it all.
I refer to the book for practical information as well as to fill in the gaps in aging memory. I always forget the "hole and electron" transistor explanation, but it's in the book. We engineers calculate a lot less than the public supposes; when indeed it's time to do a real computation, I've usually forgotten the formula. A quick look in the Handbook yields the needed equation.
My 1970 edition is getting awfully ragged, so I bought the 2001 release. It's now titled The ARRL Handbook for Radio Amateurs, presumably to highlight the American Radio Relay League's 78 years of publication of the book. Since Nixon's time the price has gone up to $32, the volume itself is much larger, and the best place to order it is on the web at www.arrl.org. The tome is more "now," with plenty of pictures of happy hams instead of the raw equipment photos of older editions. And in keeping with the realities of the new century there's a number of pages devoted to what I'd call remedial math - basic algebra and trig needed to handle the electronics equations that follow in later chapters.
In a flip due to modern times, transistor reference data abounds, yet there are still three pages devoted to vacuum tube specifications. These are the high-power devices used in big transmitters, but it's somehow reassuring to see some history preserved. The older volume's tube pages are mostly replaced with reference material on digital filter coefficients, the ASCII table, computer connector pinouts and the like.
Though the electronics field of three decades ago is almost unrecognizable today, the Handbook's primary focus of giving basic electronics information hasn't changed. Better, its emphasis on building stuff remains, even in this high integration world.
And to me, building stuff was the compelling reason to learn electronics and eventually become an engineer.
But times have changed. Years ago a TV was perhaps the most complex bit of electronics in a typical household, yet none contained more than two dozen tubes. Today a wristwatch is orders of magnitude more complex. A home might have billions of transistors in various appliances, games and the inevitable PC. Billions. Not long ago the only things anyone owned in such volume were atoms or molecules.
In the past it was possible to build quite a state of the art device in your basement. Today that's much harder.
Things have changed over the last couple of decades. Ham radio, while still a popular hobby, is a victim of the electronics revolution. Most hams buy their equipment now, as it's simply too hard to build your own. Years ago we worked almost exclusively on trivially simple AM gear. Today, SSB and FM dominate; both require much more sophisticated receivers and transmitters, equipment far beyond the construction abilities of the average teenage wannabe.
The golden age of ham radio lasted from the early part of this century till the 70s. During this time the only barrier to entry was one's lack of desire, as most hams found ways to make gear with little money.
At one point I thought computers would replace ham radio as the home electronics hobbyist's domain. At the dawn of the microprocessor age people did indeed create computers more or less from scratch. No more. Today, "building a computer" means buying a motherboard, disks, and other modules, and then bolting them together. There's neither opportunity nor need to learn about the electronics. SMT, ultra high speeds, and increasing miniaturization put that sort of electronics beyond most hobbyists' reach.
Something important is lost. We learn more by doing than by studying. Doing is hard today without a fully-equipped lab. The result is as inevitable as it is obvious: tinkering with software has replaced playing with electronics. These computer wizards will become skilled and productive CS people, but possibly not EEs. Where will we get the future crop of electronics designers?
Even now it's hard to find good designers. Analog folk are in notoriously short supply. Far too many embedded people have little knowledge of basic electronics. Now we manipulate parts with a million transistors as easily as we once worked with a single FET or tube. Designing a big gate array or PLD is more a software exercise than an experience in electronics. Too few, while adept with the latest high-density devices, remember Ohm's Law.
We use mountains of decoupling capacitors, whose behavior is critical to our circuits yet which act not at all like TTL gates. As speeds increase we're dealing with Maxwell's dreaded Laws, not simple wires. Electronics. Not discrete ones and zeroes.
Is basic electronics obsolete? Has Boolean algebra replaced Kirchhoff's Law? Is digital engineering immune from pedantic electronics concerns? Perhaps the benign clock rates of embedded designs in the 70s and 80s insulated us from the underlying yet critical physics of circuits. I do think that the colleges pandered to this, in many ways creating a generation of "computer engineers" who are adept at software and high level design, but who are adrift when confronted with a component's transfer function. Though perhaps such specialization is indeed necessary, as none of us has the time to become a master of everything.
I'm concerned when my students, CS and EE college seniors, can't read a resistor's color bands, or have no concept about power dissipation in components. This past semester they built computer-controlled cars that patrolled hallways using Polaroid sonar sensors. What fun! But even when the motor drive transistors smoked they weren't able to translate years of theory into making a simple wattage calculation. Theory is important, but must be tempered with experience.
Some of the best engineers I've worked with were hams in their youth. They couple a love of electronics with a tremendous amount of practical experience. They have a visceral feel for how things work. Power dissipation isn't theory - scarred fingers attest to long-ago burns that leaven their calculations of today.
How has the ARRL Handbook balanced increasing complexity against this hands-on approach? While plenty of the projects are for very advanced amateurs only, some are but a half dozen components. How's this for cool: a half-watt transmitter for 10, 15 and 20 meters that uses one IC. A digital IC. A quad inverter, in fact, set up in a feedback loop with a crystal to control the frequency. Add three resistors, a couple of coils and capacitors and you're done. Awesome in its simplicity, incredibly elegant in design; I tip my hat to the cleverness of its designer.
So, if you're involved in any sort of radio frequency work, this book is a must-have. As a general electronics reference it's one of the very best (the only other book that comes close is The Art of Electronics by Horowitz and Hill, Cambridge University Press, Cambridge, England, 1980, ISBN 0-521-37095-7). As a guide to building things, to fiddling with circuits, this book can't be beat.
Recently, while I was giving a talk to a group of developers, one spoke up and claimed quite firmly that there's no use for floating point in embedded work. Astonished, I then realized that his statement represents the essential dilemma of embedded systems - each and every product is so different it's impossible to generalize. 4-bit greeting cards have little in common with an ARM-powered telecom system.
An awful lot of systems do indeed use floating point. In fact the very first embedded system I built long ago measured infra-red light reflected from samples of wheat to figure protein content. Though cost issues confined us to only 4k of code space on the (by today's standards) absurdly underpowered 8008, the program performed very elaborate curve fits. Intel's user group made a bare bones floating point library available, but all of the more complex functions (trig, exp, log, etc) were homebrew.
Even today many developers face the same problem. Working on a low-end CPU, maybe in assembly language, you're forced to beg, borrow, or steal trig and other functions. Perhaps in C you can't afford the overhead of a complete library when doing just one complicated function. Or maybe the compiler's library is just sloooow. One I use for a 16-bit CPU consumes many tens of milliseconds for a lousy cosine operation. Coding your own can speed things up tremendously.
And so I've always been fascinated with approximations of all sorts, both integer and floating point. Jack Crenshaw, regular columnist for this magazine, recently released his Math Toolkit for Real-Time Programming (CMP Books, Lawrence, KS 2000, ISBN 1-929629-09-5). Like his columns, this is a crystal clear description of numerical calculus, square root, trig and log approximations, and simulation. A lot of books delve deep into creating approximations; none explains how they work more clearly than Crenshaw. He covers approximations to common functions in both floating point and integer.
That said, my all-time favorite book on the subject is Computer Approximations by John Hart (Robert E. Krieger Publishing Company, Malabar, FL, 1968, ISBN 0-88275-642-7). Where Crenshaw shines a welcome light into the process of creating the approximations, Hart's explanations are totally baffling to anyone with less than a math PhD. Hart covers floating point only.
The great virtue of Hart's book, though, is that it contains thousands of approximation algorithms for just about every function you can imagine. Each is rated by complexity (more or less the number of floating point operations required) and accuracy, so you can trade off speed for precision. Since we can't generalize about embedded systems, you may need to compute a tangent to 14 digits of precision while I'm looking for something speedy but not terribly accurate. Hart's book gives us solutions for a wide range of these varying needs.
Unfortunately the work has been out of print for years. You might be able to find a copy at a decent university library, though most are notoriously stingy about letting anyone who isn't a student or professor check books out. So I've extracted some of the more useful algorithms from the Hart book and put them on-line at www.ganssle.com. You won't find the hows and whys that Crenshaw explains so well; it's more a collection of code with minimal explanations.
Help yourself and let me know how they work for you.