The Microprocessor at 40, by Jack Ganssle

Some 2011 reflections on the 40th anniversary of the 4004, the first commercially successful microprocessor.

We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Bill Gates.

If one generation represents twenty years, as many sources suggest, then two entire generations have been born into a world that has always had microprocessors. Two generations never knew a world where computers were rare and so expensive only large corporations or governments owned them. These same billions of people have no experience of a world where the fabric of electronics was terribly expensive and bulky, where a hand-held device could do little more than tune in AM radio stations.

In November, 1971, 40 years ago, Intel placed an ad in Electronic News introducing the 4004, the first microprocessor. "A micro-programmable computer on a chip!" the headline shouted. Though only in my first year of college, I had been fortunate to snag a job as an electronics technician, and none of the engineers I worked with believed the hype. At the time Intel's best effort had resulted in the 1103 DRAM, which stored just 1k bits of data. The leap to a computer on a chip seemed impossible. And so it turned out, as the 4004 needed a variety of extra components before it could actually do anything. But the 4004 heralded a new day in both computers and electronics.

The 4004's legacy wasn't that of a single-chip computer, though that came within a few years. Rather, it spawned the age of ubiquitous and cheap computing. Yes, the era of the personal computer came a decade later and entirely as a result of the microprocessor, but the 4004 immediately ushered in the age of embedded systems. In the decade between the micro's invention and the first IBM PC, thousands, perhaps millions, of products hit the market with embedded intelligence.

Forty years ago few people had actually seen a computer; today, no one can see one, to a first approximation, as the devices have become so small.

A typical embedded microprocessor in 2011.

The history of the micro is really the story of electronics, which is the use of active elements (transistors, tubes, diodes, etc.) to transform signals. And the microprocessor is all about using massive quantities of active elements. But electrical devices - even radios and TV - existed long before electronics.

Mother Nature was the original progenitor of electrical systems. Lightning is merely a return path in a circuit formed by clouds and the atmosphere. Some think that bit of natural wiring may have created life on this planet. Miller and Urey created amino acids in 1952 using simulated high-energy discharges. But it took four billion years after Earth formed before Homo sapiens arrived, and then a little longer until Ben Franklin, and others in France, found in 1752 that lightning and sparks are the same stuff. Hundreds of years later kids repeat this fundamental experiment when they shuffle across a carpet and zap their unsuspecting friends and parents (the latter are usually holding something expensive and fragile at the time).

Other natural circuits include the electrocytes found in electric eels. Somewhat battery-like, they are composed of thousands of individual "cells," each of which produces 0.15 volts. It's striking how the word "cell" is shared by biology and electronics, unified with particular emphasis in the electrocyte.

Alessandro Volta was probably the first to understand that these organic circuits used electricity. Others, notably Luigi Galvani (after whom the galvanic cell is named) mistakenly thought there was some sort of biological fluid involved. Volta produced the first artificial battery, though some scholars think that the Persians may have invented one thousands of years earlier.

Decades earlier others had built Leyden jars, which were early capacitors. A Leyden jar is a glass bottle with foil on the surface and an inner rod. I suspect it wasn't long before natural philosophers (proto-scientists) learned to charge the jar and zap their kids. Polymath Ben Franklin, before he got busy with forming a new country and all that, wired jars in series and called the result a "battery," from the military term, which is the first use of that word in the electrical arena.

Many others contributed to the understanding of the strange effects of electricity. Joseph Henry showed that wire coiled tightly around an iron core greatly improved the electromagnet. That required insulated wire long before Digikey existed, so he reputedly wrapped silk ripped from his long-suffering wife's wedding dress around the bare copper. This led directly to the invention of the telegraph.

Wives weren't the only ones to suffer in the long quest to understand electricity. In 1746 Jean-Antoine Nollet wired 200 monks in a mile-long circle and zapped them with a battery of Leyden jars. One can only imagine the reaction of the circuit of clerics, but their simultaneous jerking and no doubt not-terribly pious exclamations demonstrated that electricity moved very quickly indeed.

It's hard to pin down the history of the resistor, but Georg Ohm published his findings that we now understand as Ohm's Law in 1827. So the three basic passive elements - resistor, capacitor and inductor - were understood at least in general form in the early 19th century. Amazingly, it wasn't until 1971 that Leon Chua realized a fourth device, the memristor, was needed to have a complete set of components, and another four decades elapsed before one was realized.

Michael Faraday built the first motors in 1821, but another four decades elapsed before James Clerk Maxwell figured out the details of the relationship between electricity and magnetism; 150 years later his formulas still torment electrical engineering students. Faraday's investigations into induction also resulted in his creation of the dynamo. It's somehow satisfying that this genius completed the loop, building both power consumers and power producers.

None of these inventions and discoveries affected the common person until the commercialization of the telegraph. Many people contributed to that device, but Samuel Morse is the most well-known. He and Alfred Vail made another critical contribution: a coding scheme - Morse code - that allowed long messages to be transmitted over a single circuit, rather like modern serial data transmission. Today's Morse code resembles the original version, but there are substantial differences. SOS was dit-dit-dit dit-dit dit-dit-dit instead of today's dit-dit-dit dah-dah-dah dit-dit-dit.

The telegraph may have been the first killer app. Within a decade of its commercialization over 20k miles of telegraph wire had been strung in the US, and the cost to send messages fell along a Moore's Law-like curve.

The oceans were great barriers in these pre-radio days, but through truly heroic efforts Cyrus Field and his associates laid the first transatlantic cable in 1858, after a failed attempt the year before. Consider the problems faced: with neither active elements nor amplifiers, a wire 2000 miles long, submerged thousands of feet below the surface, had to faithfully transmit a signal. Two ships set out and met mid-ocean to splice their respective ends together. Sans GPS they relied on celestial sights to find each other. Without radio-supplied time ticks those sights were suspect (four seconds of error in time can introduce a mile of error in the position).
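
The arithmetic behind that parenthetical is simple enough to sketch (my illustration, assuming the sight is taken at the equator, where one arcminute of longitude is one nautical mile):

#include <stdio.h>

int main(void)
{
    /* The Earth turns 360 degrees in 24 hours, so a clock error maps
       directly into a longitude error in a celestial sight. */
    double deg_per_sec   = 360.0 / (24.0 * 3600.0);   /* ~0.00417 deg/s   */
    double clock_error_s = 4.0;                       /* four seconds off */
    double error_arcmin  = deg_per_sec * clock_error_s * 60.0;

    /* At the equator one arcminute of longitude is one nautical mile. */
    printf("%.0f s clock error -> %.1f arcminute -> about %.1f nautical mile\n",
           clock_error_s, error_arcmin, error_arcmin);
    return 0;
}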

William Thomson, later Lord Kelvin, was the technical brains behind the cable. He invented a mirror galvanometer to sense the minuscule signals originating so far away. Thomson was no ivory-tower intellectual. He was an engineer who got his hands dirty, sailing on the cable-laying expeditions and innovating solutions to problems as they were encountered.

While at a party celebrating the success, Field was notified that the cable had failed. He didn't spoil the fun with that bit of bad news. It seems a zealous engineer thought that if a little voltage was good, 2,000 volts would be better. The cable fried. This was not the first nor the last time an engineer destroyed a perfectly functional piece of equipment in an effort to "improve" it.

Amazingly, radio existed in those pre-electronic days. The Titanic's radio operators sent their SOS with a spark gap transmitter, a very simple design that used arcing contacts to stimulate a resonant circuit. The analogy to a modern AM transmitter isn't too strained: today, we'd use a transistor switching rapidly to excite an LC network. The Titanic's LC components resonated as the spark rapidly formed, creating a low-impedance conductive path, and extinguished. The resulting emission is not much different from the EMI caused by lightning. The result was a very ugly wide-bandwidth signal, and the legacy of calling shipboard radio operators "sparks."
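
The resonant frequency of such an LC network is the same one radio designers still calculate: f = 1/(2*pi*sqrt(LC)). Here's a minimal sketch; the component values are illustrative guesses, not the Titanic's actual parts, though they happen to land near 500 kHz, in the band Titanic-era maritime transmitters used.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;

    /* Illustrative values only - not the Titanic's actual components. */
    double L = 200e-6;    /* 200 microhenries */
    double C = 500e-12;   /* 500 picofarads   */

    /* Resonant frequency of an LC tank: f = 1 / (2 * pi * sqrt(L * C)) */
    double f = 1.0 / (2.0 * PI * sqrt(L * C));

    printf("Resonant frequency: about %.0f kHz\n", f / 1000.0);  /* ~503 kHz */
    return 0;
}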

TV of a sort was possible, though it's not clear if it was actually implemented. Around 1884 Paul Nipkow conceived of a spinning disk with a series of holes arranged in a spiral to scan a scene. In high school I built a Nipkow Disk, though I used a photomultiplier to sense the image and send it to TTL logic that reconstructed the picture on an oscilloscope. The images were crude, but recognizable.

The next killer app was the telephone, another invention with a complex and checkered history. But wait - there's a common theme here, or even two. What moved these proto-electronic products from curiosity to wide acceptance was the notion of communications. Today it's texting and social networking; in the 19th century it was the telegraph and telephone. It seems that as soon as any sort of communications tech was invented, from smoke signals to the Internet, people were immediately as enamored with it as any of today's cell-phone obsessed teenagers.

The other theme is that each of these technologies suffered from signal losses and noise. They all cried out for some new discovery that could amplify, shape and improve the flow of electrons. Happily, in the last couple of decades of the 1800s inventors were scrambling to perfect such a device. They just didn't know it.

From Light Bulbs to Computers

Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps weigh 1-1/2 tons. Popular Mechanics, 1949

Thomas Edison raced other inventors to develop the first practical electric light bulb, a rather bold undertaking considering there were neither power plants nor electrical wiring to support lighting. In the early 1880s his bulbs glowed, but the glass quickly blackened. Trying to understand the effect, he inserted a third element and found that current flowed in the space between the filament and the electrode. It stopped when he reversed the polarity. Though he was clueless about what was going on - it wasn't until 1897 that J. J. Thomson discovered the electron - he filed for a patent and set the idea aside. Patent 307,031 was for the first electronic device in the United States. Edison had invented the diode.

From Edison's patent - the first electronic device.

The diode lay dormant for decades. True, Ambrose Fleming did revive the idea and found applications for it, but no market appeared.

In the first decade of the new century Lee de Forest inserted a grid between the anode and cathode, creating what he called an Audion. With this new control element a circuit could amplify, oscillate and switch, the basic operations of electronics. Now engineers could create radios of fantastic sensitivity, send voices over tens of thousands of miles of cable, and switch ones and zeroes in microseconds.

The vacuum tube was the first active element, and its invention was the beginning of electronics. Active elements are the core technology of every electronic product. The tube, the transistor, and, I believe, now the microprocessor are the active elements that transformed the world over the last century.

Though the tube was a stunning achievement, it was useless in isolation. De Forest did create amplifiers and other circuits using tubes. But the brilliant Edwin Armstrong was probably the most seminal early inventor of electronic circuits. Though many of his patents were challenged and credit was often given to others, Armstrong was the most prolific of the early radio designers. His inventions included the regenerative and super-regenerative receivers, the superheterodyne (a truly innovative approach used to this day), and FM.

As radio was yet another communications technology, not unlike text messaging today, demand soared as it always does for these killer apps. Western Electric made the VT-1, one of the first commercial tubes. In 2011 dollars they were a hundred bucks a pop. But war is good for technology; in the four years of World War I Western Electric alone produced a half million tubes for the US Army. By 1918 over a million a year were being made in the US, more than fifty times the pre-conflict numbers, and prices quickly fell. Just as cheaper semiconductors always open new markets, falling tube prices meant radios became practical consumer devices.

The VT-1 vacuum tube.

Start an Internet publication and no one will read it until there's "content." This is hardly a new concept; radio had little appeal to consumers unless there were radio shows. The first regularly-scheduled broadcasts started in 1919. There were few listeners, but with the growth of broadcasters demand soared. RCA sold the earliest consumer superheterodyne radio in 1924; 148,000 flew off the shelves in the very first year. By the crash in 1929 radios were common fixtures in American households, and were often the center of evening life for the family, rather like TV is today.

Nearly until the start of World War II radios were about the most complex pieces of electronics available. An example is RCA's superb RBC-1 single-conversion receiver, which had all of 19 tubes. But tubes wore out, they could break when subjected to a little physical stress, and they ran hot. It was felt that a system with more than a few dozen tubes would be impractically unreliable.

In the 1930s it became apparent that conflict was inevitable. Governments drove research into war needs, resulting in what I believe is one of the most important contributions to electronic digital computers, and a natural extension of radio technology: RADAR. The US Army fielded its first RADAR apparatus in 1940. The SCR-268 had 110 tubes... and it worked. At the time tubes had a lifetime of a year or so, so one would fail every few days in each RADAR set. (Set is perhaps the wrong word for a system that weighed 40,000 kg and required 6 operators.) Over 3000 SCR-268s were produced.

The SCR-268 at work.

Ironically, Sir Henry Tizard arrived in the US from Britain with the first useful cavity magnetron the same year the SCR-268 went into production. That tube revolutionized RADAR. By war's end the 10 cm wavelength SCR-584 was in production (1,700 were manufactured) using 400 vacuum tubes. The engineers at MIT's Rad Lab had shown that large electronic circuits were not only practical, they could be manufactured in quantity and survive combat conditions.

Like all major inventions computers had many fathers - and some mothers. Rooms packed with men manually performed calculations in lockstep to produce ballistics tables and the like; these gentlemen were known as "computers." But WWI pushed most of the men into uniform, so women were recruited to perform the calculations. Many mechanical calculators were created by all sorts of inventive people like Babbage and Konrad Zuse to speed the computers' work. But about the same time the Rad Lab was doing its magnetron magic, what was probably the first electronic digital computer was built. The Atanasoff-Berry computer was fired up in 1942, used about 300 tubes, was not programmable, and though it did function, was quickly discarded.

Meanwhile the Germans were sinking ships faster than the allies could build replacements, in the single month of June, 1942 sending 800,000 tons to the sea floor. Britain was starving and looked doomed. The allies were intercepting much of the Wehrmacht's signal traffic, but it was encrypted using a variety of cryptography machines, the Enigma being the most famous. The story of the breaking of these codes is very long and fascinating, and owes much to pre-war work done by Polish mathematicians, as well as captured secret material from two U-boats. The British set up a code-breaking operation at Bletchley Park where they built a variety of machines to aid their efforts. An electro-mechanical machine called the Heath Robinson (named after a cartoonist who drew very complex devices meant to accomplish simple tasks, a la Rube Goldberg) helped break the "Tunny" code produced by the German Lorenz ciphering machine. But the Heath Robinson was slow and cranky.

Tommy Flowers realized that a fully electronic machine would be both faster and more reliable. He figured the machine would have between 1,000 and 2,000 tubes, and despite the advances being made in large electronic RADAR systems few thought such a massive machine could work. But Flowers realized that a big cause of failures was the thermal shock tubes encountered on power cycles, so he planned to leave his machine on all of the time. The result was Colossus, a 1,600 tube behemoth that immediately doubled the code breakers' speed. It was delivered in January of 1944. Those who were formerly hostile to Flowers' idea were so astonished they ordered four more in March. A month later they were demanding a dozen.

Colossus didn't break the code; instead it compared the encrypted message with another data stream to find likely settings of the encoding machine. It was probably the first programmable electronic computer. Programmable by patch cables and switches, it didn't bear much resemblance to today's stored program machines. Unlike the Atanasoff-Berry machine the Colossi were useful, and were essential to the winning of the war.

The Bletchley Park Colossus reproduction.

Churchill strove to keep the Colossus secret and ordered that all be broken into pieces no bigger than a man's hand, so nearly 30 years slipped by before its story came out. Despite a dearth of drawings, though, a working replica has been constructed and is on display at the National Museum of Computing at Bletchley Park, a site on the "must visit" list for any engineer. (But it's almost impossible for Americans to find; you'll wind up dizzy from the succession of roundabouts you must navigate.) A rope barrier isolates visitors from the machine's internals, but it's not hard to chat up the staff and get invited to walk inside the machine. That's right - inside. These old systems were huge.

Meanwhile, here in the colonies John Mauchly and J. Presper Eckert were building the ENIAC, a general-purpose monster of a machine containing nearly 18,000 vacuum tubes. It weighed 30 tons, consumed 150 kW of electricity, and had five million hand-soldered joints. ENIAC's purpose was to compute artillery firing tables, which it accelerated by three orders of magnitude over other contemporary approaches. ENIAC didn't come on line until the year after the war, but due to the secrecy surrounding Colossus ENIAC long held the title of the first programmable electronic computer. It, too, used patch panels rather than a stored program, though later improvements gave it a ROM-like store. One source complained it could take "as long as three weeks to reprogram and debug a program." Those were the good old days. Despite the vast number of tubes, according to Eckert the machine suffered a failure only once every two days.

During construction of the ENIAC Mauchly and Eckert proposed a more advanced machine, the EDVAC. It had a von Neumann architecture (stored program), called that because John von Neumann, a consultant to the Moore School where the ENIAC was built, had written a report about EDVAC summarizing its design, and hadn't bothered to credit Mauchly or Eckert for the idea. Whether this was deliberate or a mistake (the report was never completed, and may have been circulated without von Neumann's knowledge) remains unknown, though much bitterness resulted.

(In an eerily parallel case the ENIAC was the source of a patent battle. Mauchly and Eckert had filed for a patent for the machine in 1947, but in the late 1960s Honeywell sued over its validity. John Atanasoff testified that Mauchly had appropriated ideas from the Atanasoff-Berry machine. Ultimately the court ruled that the ENIAC patent was invalid. Computer historians still debate the verdict.)

Meanwhile British engineers Freddie Williams and Tom Kilburn developed a CRT that could store data. The Williams tube they built was the first random access digital memory device. But how does one test such a product? The answer: build a computer around it. In 1948 the Small Scale Experimental Machine, nicknamed "The Baby," went into operation. It used three Williams tubes, one being the main store (32 words of 32 bits each) and two for registers. Though not meant as a production machine, the Baby was the first stored program electronic digital computer. It is sometimes called the Mark 1 Prototype, as the ideas were quickly folded into the Manchester Mark 1, the first practical stored-program machine. That morphed into the Ferranti Mark 1, which was the first commercial digital computer.

I'd argue that the circa 1951 Whirlwind computer was the next critical development. Whirlwind was a parallel machine in a day when most computers operated in bit-serial mode to reduce the number of active elements. Though it originally used Williams tubes, these were slow, so the Whirlwind was converted to use core memory, the first time core was incorporated into a computer. Core dominated the memory industry until large semiconductor devices became available in the 70s.

The Whirlwind computer.

A core plane from the Whirlwind computer.

Whirlwind's other important legacy is that it was a real-time machine, and it demonstrated that a computer could handle RADAR data. Whirlwind's tests convinced the Air Force that computers could be used to track and intercept cold-war enemy bombers. The government, never loath to start huge projects, contracted with IBM and MIT to build the Semi-Automatic Ground Environment (SAGE), based on the 32 bit AN/FSQ-7 computer.

SAGE was the largest computer ever constructed, each installation using over 100,000 vacuum tubes and a half acre of floor space. Twenty-six such systems were built, and unlike so many huge programs, SAGE was delivered and used until 1983. The irony is that by the time SAGE came on-line in 1963 the Soviets' new ICBM fleet made the system mostly useless.

For billions of years Mother Nature plied her electrical wiles. A couple of thousand years ago the Greeks developed theories about electricity, most of which were wrong. With the Enlightenment natural philosophers generated solid reasoning, backed up by experimental results, that exposed the true nature of electrons and their behavior. In only the last flicker of human existence has that knowledge been translated into the electronics revolution, possibly the defining characteristic of the 20th century. But electronics was about to undergo another fundamental transformation.

 

The Semiconductor Revolution

We're on track, by 2010, for 30-gigahertz devices, 10 nanometers or less, delivering a tera-instruction of performance. Pat Gelsinger, Intel, 2002

We all know how in 1947 Shockley, Bardeen and Brattain invented the transistor, ushering in the age of semiconductors. But that common knowledge is wrong. Julius Lilienfeld patented devices that resembled field effect transistors (though they were based on metals rather than modern semiconductors) in the 1920s and 30s (he also patented the electrolytic capacitor). Indeed, the USPTO rejected early patent applications from the Bell Labs boys, citing Lilienfeld's work as prior art.

Semiconductors predated Shockley et al. by nearly a century. Karl Ferdinand Braun found in 1874 that some crystals conducted current in only one direction. Indian scientist Jagadish Chandra Bose used crystals to detect radio waves as early as 1894, and Greenleaf Whittier Pickard developed the cat's whisker diode. Pickard examined 30,000 different materials in his quest to find the best detector, rusty scissors included. Like thousands of others, I built an AM radio using a galena cat's whisker and a coil wound on a Quaker Oats box as a kid, though by then everyone was using modern diodes.

A crystal radio - there's not much to it.

An early adjustable diode.

Early tube computers used crystal diodes. Lots of diodes: the ENIAC had 7,200, Whirlwind twice that number. I have not been able to find out anything about what types of diodes were used or the nature of the circuits, but imagine something analogous to 60s-era diode-transistor logic.

While engineers were building tube-based computers, a team led by William Shockley at Bell Labs researched semiconductors. John Bardeen and Walter Brattain created the point contact transistor in 1947, but did not include Shockley's name on the patent application. Shockley, who was as irascible as he was brilliant, went off in a huff and invented the junction transistor. One wonders what wonders he would have invented had he been really slighted.

Point contact versions did go into production. Some early parts had a hole in the case; one would insert a tool to adjust the pressure of the wire on the germanium. So it wasn't long before the much more robust junction transistor became the dominant force in electronics. By 1953 over a million were made; four years later production increased to 29 million a year. That's about the same number as on a single Pentium III integrated circuit in 2000.

The first commercial transistor was probably the CK703, which became available in 1950 for $20 each, or $188 in today's dollars.

Meanwhile tube-based computers were getting bigger, hotter and sucked ever more juice. The same University of Manchester which built the Baby and Mark 1 in 1948 and 1949 got a prototype transistorized machine going in 1953, and the full-blown model running two years later. With a 48 (some sources say 44) bit word, the prototype used only 92 transistors and 550 diodes! Even the registers were stored on drum memory, but it's still hard to imagine building a machine with so few active elements. The follow-on version used just 200 transistors and 1300 diodes, still no mean feat. (Both machines did employ tubes in the clock circuit.) But tube machines were more reliable: this computer ran only about an hour and a half between failures. Though deadly slow, it demonstrated a market-changing feature: just 150 watts of power were needed. Compare that to the 25 kW consumed by the Mark 1. IBM built an experimental transistorized version of their 604 tube computer in 1954; the semiconductor version ate just 5% of the power needed by its thermionic brother.

The first completely-transistorized commercial computer was the... well, a lot of machines vie for credit and the history is a bit murky. Certainly by the mid-50s many became available. Earlier I claimed the Whirlwind was important at least because it spawned the SAGE machines. Whirlwind also inspired MIT's first transistorized computer, the 1956 TX-0, which used an 18 bit word. Ken Olsen, one of DEC's founders, was responsible for the TX-0's circuit design. DEC's first computer, the PDP-1, was largely a TX-0 in a prettier box. Throughout the 60s DEC built a number of different machines with the same 18 bit word.

The TX-0 was a fully parallel machine in an era when serial was common. (A serial computer works on a single bit at a time; modern parallel machines work on an entire word at once. Serial computing is slow but uses far fewer components.) Its 3600 transistors, at $200 a pop, cost about a megabuck. And all were enclosed in plug-in bottles, just like tubes, as the developers feared a high failure rate. But by 1974, after 49,000 hours of operation, fewer than a dozen had failed.
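
To make the serial/parallel trade-off concrete, here's a sketch of my own (not DEC's logic) of bit-serial addition: one full adder and a carry flip-flop clocked once per bit, versus a parallel machine's add-everything-at-once approach.

#include <stdio.h>
#include <stdint.h>

#define WORD_BITS 18                         /* TX-0 style 18 bit word */
#define WORD_MASK ((1u << WORD_BITS) - 1u)

/* Bit-serial addition: a single full adder reused WORD_BITS times with
   one carry flip-flop - very few components, but many clock ticks. */
static uint32_t serial_add(uint32_t a, uint32_t b)
{
    uint32_t sum = 0, carry = 0;
    for (int i = 0; i < WORD_BITS; i++) {
        uint32_t abit = (a >> i) & 1u;
        uint32_t bbit = (b >> i) & 1u;
        sum |= (abit ^ bbit ^ carry) << i;                  /* sum bit   */
        carry = (abit & bbit) | (carry & (abit | bbit));    /* carry out */
    }
    return sum & WORD_MASK;      /* carry out of the top bit is dropped */
}

int main(void)
{
    uint32_t a = 0123456 & WORD_MASK;   /* octal constants, period style */
    uint32_t b = 0054321 & WORD_MASK;

    /* A parallel machine gets the same answer in one step, at the cost
       of an adder stage for every bit of the word. */
    printf("serial: %06o  parallel: %06o\n",
           (unsigned)serial_add(a, b), (unsigned)((a + b) & WORD_MASK));
    return 0;
}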

The official biography of the machine (RLE Technical Report No. 627) contains tantalizing hints that the TX-0 may have had 100 vacuum tubes, and the 150 volt power supplies it describes certainly align with vacuum tube technology.

IBM's first transistorized computer was the 1958 7070. This was the beginning of the company's important 7000 series, which dominated mainframes for a time. A variety of models were sold, with the 7094 at one point holding the "fastest computer in the world" title. The 7094 used over 50,000 transistors. Operators would use another, smaller, computer to load a magnetic tape with many programs from punched cards, and then mount the tape on the 7094. We had one of these machines my first year in college. Operating systems didn't offer much in the way of security, and some of us figured out how to read the input tape and search for files with grades.

The largest 7000-series machine was the 7030 "Stretch," a $100 million (in today's dollars) supercomputer that wasn't super enough. It missed its performance goals by a factor of three, and was soon withdrawn from production. Only 9 were built. The machine had a staggering 169,000 transistors on 22,000 individual printed circuit boards. Interestingly, in a paper named The Engineering Design of the Stretch Computer, the word "millimicroseconds" is used in place of "nanoseconds."

While IBM cranked out their computing behemoths, small machines gained in popularity. Librascope's $16k ($118k today) LGP-21 had just 460 transistors and 300 diodes, and came out in 1963, the same year as DEC's $27k PDP-5. Two years later DEC produced the first minicomputer, the PDP-8, which was wildly successful, eventually selling some 300,000 units in many different models. Early units were assembled from hundreds of DEC's "flip chips," small PCBs that used diode-transistor logic with discrete transistors. A typical flip chip implemented three two-input NAND gates. Later PDP-8s used ICs; the entire CPU was eventually implemented on a single integrated circuit.

A PDP-8. The cabinet on top holds hundreds of flip chips.

One of DEC's flip-chips. The board has just 9 transistors and some discrete components.

But whoa! Time to go back a little. Just think of the cost and complexity of the Stretch. Can you imagine wiring up 169,000 transistors? Thankfully Jack Kilby and Robert Noyce independently invented the IC in 1958/9. The IC was so superior to individual transistors that it soon formed the basis of most commercial computers.

Actually, that last clause is not correct. ICs were hard to get. The nation was going to the moon, and by 1963 the Apollo Guidance Computer used 60% of all of the ICs produced in the US, with per-unit costs ranging from $12 to $77 ($88 to $570 today) depending on the quantity ordered. One source claims that the Apollo and Minuteman programs together consumed 95% of domestic IC production.

Jack Kilby's first IC.

Every source I've found claims that all of the ICs in the Apollo computer were identical: 2800 dual three-input NOR gates, using three transistors per gate. But the schematics show two kinds of NOR gates, "regular" versions and "expander" gates.

The market for computers remained relatively small till the PDP-8 brought prices to a more reasonable level, but the match of minis and ICs caused costs to plummet. By the late 60s everyone was building computers. Xerox. Raytheon (their 704 was possibly the ugliest computer ever built). Interdata. Multidata. Computer Automation. General Automation. Varian. SDS. A complete list would fill a page. Minis created a new niche: the embedded system, though that name didn't surface for many years. Labs found that a small machine was perfect for controlling instrumentation, and you'd often find a rack with a built-in mini that was part of an experimenter's equipment.

The PDP-8/E was typical. Introduced in 1970, this 12 bit machine cost $6,500 ($38k today). Instead of hundreds of flip chips the machine used a few large PCBs with gobs of ICs to cut down on interconnects. Circuit density was just awful compared to today. The technology of the time was small scale ICs, which contained a couple of flip-flops or a few gates, and medium scale integration. An example of the latter is the 74181 ALU, which performed simple math and logic on a pair of four bit operands. Amazingly, TI still sells the military version of this part. It was used in many minicomputers, such as Data General's Nova line and DEC's seminal PDP-11.
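
For a feel of what a medium-scale part like that did, here's a simplified 4 bit ALU slice in C. The operation set is invented for illustration - it is not the 74181's actual function-select table - but the idea of cascading narrow slices through a carry chain is the same.

#include <stdio.h>
#include <stdint.h>

/* A toy 4 bit ALU slice: two 4 bit operands, carry in, carry out.
   The opcodes are invented; the real 74181 used a 4 bit function
   select plus a mode pin to choose among its operations. */
typedef enum { OP_ADD, OP_SUB, OP_AND, OP_OR, OP_XOR } op_t;

static uint8_t alu4(op_t op, uint8_t a, uint8_t b, int cin, int *cout)
{
    unsigned r;
    a &= 0xF; b &= 0xF;
    switch (op) {
    case OP_ADD: r = a + b + (cin ? 1 : 0);          break;
    case OP_SUB: r = a + (~b & 0xF) + (cin ? 1 : 0); break; /* two's complement */
    case OP_AND: r = a & b;                          break;
    case OP_OR:  r = a | b;                          break;
    default:     r = a ^ b;                          break;
    }
    *cout = (r >> 4) & 1;              /* carry out of the nibble */
    return (uint8_t)(r & 0xF);
}

int main(void)
{
    /* Cascade four slices through the carry chain to add 16 bit words
       a nibble at a time - the way designers widened their datapaths. */
    uint16_t a = 0x1234, b = 0x0FED, sum = 0;
    int carry = 0;

    for (int n = 0; n < 4; n++) {
        uint8_t s = alu4(OP_ADD, (a >> (4 * n)) & 0xF,
                         (b >> (4 * n)) & 0xF, carry, &carry);
        sum |= (uint16_t)s << (4 * n);
    }
    printf("0x%04X + 0x%04X = 0x%04X\n",
           (unsigned)a, (unsigned)b, (unsigned)sum);   /* prints 0x2221 */
    return 0;
}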

The PDP-11 debuted in 1970 for about $11k with 4k words of core memory. Those who wanted a hard disk shelled out more: a 512KB disk with controller ran an extra $14k ($82k today). Today a terabyte disk drive runs about $100. If it had been possible to build such a drive in 1970 at that price per byte, the cost would have been on the order of $30 billion.

The PDP-11.

Experienced programmers were immediately smitten with the PDP-11's rich set of addressing modes and completely orthogonal instruction set. Most prior, and too many subsequent, ISAs were constrained by the costs and complexity of the hardware, and were awkward and full of special cases. A decade later IBM incensed many by selecting the 8088, whose instruction set was a mess, over the orthogonal 68000 which in many ways imitated the PDP-11.

Around 1990 I traded a case of beer for a PDP-11/70, but eventually was unable to even give it away.

Minicomputers were used in embedded systems even into the 80s. We put a PDP-11 in a steel mill in 1983. It was sealed in an explosion-proof cabinet and interacted with Z80 microprocessors. The installers had for reasons unknown left a hole in the top of the cabinet. A window in the steel door let operators see the machine's controls and displays. I got a panicked 3 AM call one morning - someone had cut a water line in the ceiling. Not only were the computer's lights showing through the window - so was the water level. All of the electronics was submerged.

Data General was probably the second most successful mini vendor. Their Nova was a 16 bit design introduced a year before the PDP-11, and it was a pretty typical machine in that the instruction set was designed to keep the hardware costs down. A bare-bones unit with no memory ran about $4k - lots less than DEC's offerings. In fact, early versions used a single 74181 ALU with data fed through it a nibble at a time. The circuit boards were 15" x 15", just enormous, populated with a sea of mostly 14 and 16 pin DIP packages. The boards were typically two layers, and often had hand-strung wires where the layout people couldn't get a track across the board. The Nova was peculiar as it could only address 32K words. Bit 15, if set, meant the data was an indirect address (in modern parlance, a pointer). It was possible to cause the thing to indirect forever.
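
In modern terms the indirection worked something like the sketch below - my illustration of the idea, not actual Nova logic: a fetched word with bit 15 set is a pointer to chase, and a chain of words that point at each other never terminates.

#include <stdio.h>
#include <stdint.h>

#define MEM_WORDS 32768u      /* 15 bit address space of 16 bit words   */
#define INDIRECT  0x8000u     /* bit 15: "this word is another address" */

static uint16_t mem[MEM_WORDS];

/* Chase indirection until a word with bit 15 clear is found.  max_hops
   is a safety net this sketch adds; the hardware had none, so a
   circular chain simply indirected forever. */
static int fetch(uint16_t addr, uint16_t *out, int max_hops)
{
    uint16_t word = mem[addr & 0x7FFF];
    while (word & INDIRECT) {
        if (max_hops-- <= 0)
            return -1;                /* circular chain detected          */
        addr = word & 0x7FFF;         /* low 15 bits are the next address */
        word = mem[addr];
    }
    *out = word;
    return 0;
}

int main(void)
{
    uint16_t data;

    mem[0100] = INDIRECT | 0200;      /* 0100 points at 0200...           */
    mem[0200] = 01234;                /* ...which holds the actual data   */
    if (fetch(0100, &data, 100) == 0)
        printf("data: %o\n", (unsigned)data);    /* prints 1234 */

    mem[0400] = INDIRECT | 0500;      /* two words pointing at each other */
    mem[0500] = INDIRECT | 0400;
    if (fetch(0400, &data, 100) != 0)
        printf("indirection never terminates\n");
    return 0;
}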

A board from a Data General Nova computer.

Before minis, few computers had a production run of even 100 (IBM's 360 was a notable exception). Some minicomputers, though, were manufactured in the tens of thousands. Those quantities would look laughable when the microprocessor started the modern era of electronics.

 

Microprocessors Change the World

I have always wished that my computer would be as easy to use as my telephone. My wish has come true. I no longer know how to use my telephone. - Bjarne Stroustrup

Everyone knows how Intel invented the computer on a chip in 1971, introducing the 4004 in an ad in a November issue of Electronic News. But everyone might be wrong.

TI filed for a patent for a "computing systems CPU" on August 31 of that same year. It was awarded in 1973 and eventually Intel had to pay licensing fees. It's not clear when they had a functioning version of the TMS1000, but at the time TI engineers thought little of the 4004, dismissing it as "just a calculator chip" since it had been targeted to Busicom's calculators. Ironically the HP-35 calculator later used a version of the TMS1000.

But the history is even murkier. Earlier I explained that the existence of the Colossus machine was secret for almost three decades after the war, so ENIAC was incorrectly credited with being the first useful electronic digital computer. A similar fog surrounds the first microprocessor.

Grumman had contracted with Garrett AiResearch to build a chipset for the F-14A's Central Air Data Computer. Parts were delivered in 1970, and not a few historians credit the six chips comprising the MP944 as the first microprocessor. But the chips were secret until they were declassified in 1998. Others argue that the multi-chip MP944 shouldn't get priority over the 4004, as the latter's entire CPU fit onto a single piece of silicon.

In 1969 Four-Phase Systems built the 24 bit AL1, which used multiple chips segmented into 8 bit hunks, not unlike a bit-slice processor. In a patent dispute a quarter-century later proof was presented that one could implement a complete 8 bit microprocessor using just one of these chips. The battle was settled out of court, which did not settle the issue of the first micro.

Then there's Pico Electronics in Glenrothes, Scotland, which partnered with General Instrument (whose processor products were later spun off into Microchip) to build a calculator chip called the PICO1. That part reputedly debuted in 1970, and had the CPU as well as ROM and RAM on a single chip.

Clearly the microprocessor was an idea whose time had come.

Japanese company Busicom wanted Intel to produce a dozen chips that would power a new printing calculator, but Intel was a memory company. Ted Hoff realized that a design built around a general-purpose processor would consume gobs of RAM and ROM - exactly the sort of parts a memory company wanted to sell. Thus the 4004 was born.

The 4004 microprocessor.

It was a four-bit machine packing 2300 transistors into a 16 pin package. Why 16 pins? Because that was the only package Intel could produce at the time. Today (2011) fabrication folk are wrestling with the 22 nanometer process node. The 4004 used 10,000 nm geometry. The chip itself cost about $1100 in today's dollars, or about half a buck per transistor. CompUSA currently lists some netbooks for about $200, or around 10 microcents per transistor. And that's ignoring the keyboard, display, 250 GB hard disk and all of the rest of the components and software that go with the netbook.

Though Busicom did sell some 100,000 4004-powered calculators, the part's real legacy was the birth of the age of embedded systems and the dawn of a new era of electronic design. Before the microprocessor it was absurd to consider adding a computer to a product; now, in general, only the quirky build anything electronic without embedded intelligence.

At first even Intel didn't understand the new age they had created. In 1952 Howard Aiken figured a half-dozen mainframes would be all the country needed, and in 1971 Intel's marketing people estimated total demand for embedded micros at 2000 chips per year. Federico Faggin used one in the 4004's production tester, which was perhaps the first microcomputer-based embedded system. About the same time the company built the first EPROM and it wasn't long before they slapped a microprocessor into the EPROM burners. It quickly became clear that these chips might have some use after all. Indeed, Ted Hoff had one of his engineers build a video game - Space War - using the 4004, though management felt games were goofy applications with no market.

In parallel with the 4004's development Intel had been working with Datapoint on a computer, and in early 1970 Ted Hoff and Stanley Mazor started work on what would become the 8008 processor.

1970 was not a good year for technology; as the Apollo program wound down many engineers lost their jobs, some pumping gas to keep their families fed. (Before microprocessors automated the pumps, gas stations had legions of attendants who filled the tank and checked the oil. They even washed windows.) Datapoint was struggling, and eventually dropped Intel's design.

In April, 1972, just months after releasing the 4004, Intel announced the 8008, the first 8 bit microprocessor. It had 3500 transistors and cost $2200 in 2011 dollars. This 18 pin part was also constrained by the packages the company knew how to build, so it multiplexed data and addresses over the same connections.

A typical development platform was an Intellec 8 (a general-purpose 8008-based computer) connected to a TTY. One would laboriously put the binary instructions for a tiny bootloader into memory by toggling front-panel switches. That is, just to get the thing to boot meant hundreds of switch flips. The bootloader would suck in a better loader from the TTY's 10 character-per-second paper tape reader. Then, the second loader read the text editor's tape. After entering the source code the user punched a source tape, and read in the assembler. The assembler read the source tape - three times - and punched an object tape. Load the linker, again through the tape reader. Load the object tapes, and finally the linker punched a binary. It took us three days to assemble and link a 4KB binary program. Needless to say, debugging meant patching in binary instructions with only a very occasional re-build.

An Intellec 8.

The world had changed. Where I worked we had been building a huge instrument that had an embedded minicomputer. The 8008 version was a tenth the price, a tenth the size, and had a market hundreds of times bigger.

It wasn't long before the personal computer came out. In 1973 at least four 8008-based computers targeted to hobbyists appeared: The MCM-70, the R2E Micral, the Scelbi-8H, and the Mark-8. The latter was designed by Jon Titus, who tells me the prototype worked the first time he turned it on. The next year Radio Electronics published an article about the Mark-8, and several hundred circuit boards were sold. People were hungry for computers.

"Hundreds of boards" means most of the planet's billions were still computer-free. I was struck by how much things have changed when the PC in my woodworking shop died recently. I bought a used Pentium PC for $60. The seller had a garage with pallets stacked high with Dells, maybe more than all of the personal computers in the world in 1973. And why have a PC in a woodworking shop? Because we live in the country where radio stations are very weak. Instead I get their web broadcasts. So this story, which started with the invention of radio, circles back on itself. I use many billions of transistors to emulate a four-tube radio.

By the mid-70s the history of the microprocessor becomes a mad jumble of product introductions by dozens of companies. A couple are especially notable.

Intel's 8080 was a greatly improved version of the 8008. The part was immediately popular, but so were many similar processors from other vendors. The 8080, though, spawned the first really successful personal computer, the Altair 8800. This 1975 machine used a motherboard into which various cards were inserted. One was the processor and associated circuits. Others could hold memory boards, communications boards, etc. It was offered in kit form for $1800 (in today's dollars); memory was optional, and 1KB of RAM ran another $700. MITS expected to sell 800 a year but was flooded with orders for 1000 in the first month.

Computers are useless without software, and not much existed for that machine. A couple of kids in New England got a copy of the 8080's datasheet and wrote a simulator that ran on a PDP-10. Using that, they wrote and tested a Basic interpreter. One flew to MITS to demonstrate the code, which worked the very first time it was tried on real hardware. Bill Gates and Paul Allen later managed to sell a bit of software for other brands of PCs.

The 8080 required three different power supplies (+5, -5 and +12) as well as a two-phase clock. A startup named Zilog improved the 8080's instruction set considerably and went to a single-supply, single-clock design. Their Z80 hugely simplified the circuits needed to support a microprocessor, and was used in a stunning number of embedded systems as well as personal machines, like Radio Shack's TRS-80. CP/M ran most of the Z80 machines, and was the inspiration for the x86's DOS.

But processors were expensive. The 8080 debuted at $400 ($1700 today) just for the chip.

Then MOS Technology introduced the 6501 at a strategic price of $20 (some sources say $25, but I remember buying one for twenty bucks) in 1975. The response? Motorola sued, since the pinout was identical to their 6800. A new version with scrambled pins quickly followed, and the 6502 helped launch a little startup named Apple.

Other vendors were forced to lower their prices. The result was that cheap computers meant lots of computers. Cut costs more and volumes explode.

 

Active Elements

In this article I've portrayed the history of the electronics industry as a story of the growth in use of active elements. For decades no product had more than a few tubes. Because of RADAR, between 1935 and 1944 some electronic devices employed hundreds. Computers drove the numbers to the thousands. In the 50s SAGE had 100,000 per machine. Just 6 years later the Stretch squeezed in 170,000 of the new active element, the transistor.

We embedded folk whose families are fed by Moore's Law know what has happened: some micros today contain 3 billion transistors on a square centimeter of silicon; memory parts are even denser. A very capable 32 bit microcontroller (that is, the CPU, memory and all of the needed I/O) costs under $0.50, not bad compared to the millions of dollars needed for a machine just a few decades ago. That half-buck microcontroller is probably a million times faster than ENIAC. But how often do we stand back and think about the implications of this change?

Active elements have shrunk in length by about a factor of a million, but an IC is a two-dimensional structure so the effective shrink is more like a trillion.

The cost per GFLOP has fallen by a factor of about 10 billion since 1945.

Growth of active elements in electronics. Note the log scale.

Size of active elements. Also a log scale.

It's claimed the iPad 2 has about the compute capability of the Cray 2, 1985's leading supercomputer. The Cray cost $35 million; the iPad goes for $500. Apple's product runs 10 hours on a charge; the Cray needed 150 kW and liquid Fluorinert cooling.

My best estimate pegs an iPhone at 100 billion transistors. If we built one using the ENIAC's active element technology the phone would be about the size of 170 Vertical Assembly Buildings (the largest single-story building in the world). That would certainly discourage texting while driving. Weight? 2500 Nimitz-class aircraft carriers. And what a power hog! Figure over a terawatt, requiring all of the output of 500 Olkiluoto power plants (the largest nuclear plant in the world). An ENIAC-technology iPhone would run a cool $50 trillion, roughly the GDP of the entire world. And that's before AT&T's monthly data plan charges.
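
The scaling behind those comparisons is straightforward to sketch. The per-ENIAC figures and the 100-billion-transistor estimate come from the text above; the inflation-adjusted ENIAC cost is my own rough assumption, so take the output as order-of-magnitude only:

#include <stdio.h>

int main(void)
{
    /* Figures from the text above. */
    double iphone_transistors = 100e9;    /* author's estimate         */
    double eniac_tubes        = 18000;    /* active elements per ENIAC */
    double eniac_weight_tons  = 30;
    double eniac_power_w      = 150e3;

    /* Rough assumption, not from the article: ENIAC's construction cost,
       inflation-adjusted, on the order of $6 million today. */
    double eniac_cost_today   = 6e6;

    double n = iphone_transistors / eniac_tubes;   /* ENIAC-equivalents */

    printf("ENIAC-equivalents needed: %.1f million\n", n / 1e6);
    printf("Power:  roughly %.1f TW\n",           n * eniac_power_w / 1e12);
    printf("Weight: roughly %.0f million tons\n", n * eniac_weight_tons / 1e6);
    printf("Cost:   roughly $%.0f trillion\n",    n * eniac_cost_today / 1e12);
    return 0;
}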

Without the microprocessor there would be no Google. No Amazon. No Wikipedia, no web, even. To fly somewhere you'd call a travel agent, on a dumb phone. The TSA would hand-search you... uh, they still do. Cars would get 20 MPG. No smart thermostats, no remote controls, no HDTV. Slide rules instead of calculators. Vinyl disks, not MP3s. Instead of an iPad you'd have a pad of paper. CAT scans, MRIs, PET scanners and most of modern medicine wouldn't exist.

Software engineering would be a minor profession practiced by a relative few.

 

Accelerating Tech

Genus Homo appeared around 2 million years ago. Perhaps our first invention was the control of fire; barbequing started around 400,000 BCE. For almost all of those millennia Homo was a hunter-gatherer, until the appearance of agriculture 10,000 years ago. After another 4k laps around the sun some genius created the wheel, and early writing came around 3,000 BCE.

Though Gutenberg invented the printing press in the 15th century, most people were illiterate until the industrial revolution. That was about the time when natural philosophers started investigating electricity.

In 1866 it cost $100 to send a ten word telegram through Cyrus Field's transatlantic cable. A nice middle-class house ran $1000. That was before the invention of the phonograph. The only entertainment in the average home was the music the family made themselves. One of my great-grandfathers died in the 1930s, just a year after electricity came to his farm.

My grandparents were born towards the close of the 19th century. They lived much of their lives in the pre-electronic era. When probing for some family history my late grandmother told me that, yes, growing up in Manhattan she actually knew someone, across town, who had a telephone. That phone was surely a crude device, connected through a manual patch panel at the exchange, using no amplifiers or other active components. It probably used the same carbon transmitter Edison invented in 1877.

My parents grew up with tube radios but no other electronics.

I was born before a transistorized computer had been built. In college all of the engineering students used slide rules exclusively, just as my dad had a generation earlier at MIT. The 40,000 students on campus all shared access to a single mainframe. But my kids were required to own a laptop as they entered college, and they have grown up never knowing a life without cell phones or any of the other marvels enabled by microprocessors that we take for granted.

The history of electronics spans just a flicker of the human experience. In a century we've gone from products with a single tube to those with hundreds of billions of transistors. The future is inconceivable to us, but surely the astounding will be commonplace.

As it is today.

 

Thanks to Stan Mazor and Jon Titus for their correspondence and background information.