
By Jack Ganssle

Remembering the Memories

Published July 30, 2009 in Embedded Systems Design

September's column was a walkabout through the history of memory devices. I highlighted a few favorite examples and promised more in October.

I forgot. Dropped a bit somewhere. Maybe a cosmic ray induced a neuronal parity error with no ECC. There's a lesson there in the importance of building reliable memory, or in the fallibility of middle-aged brains.

In "Guns, Germs and Steel" (W.W. Norton & Co, 1999) Jared Diamond explains how the invention of agriculture created wealth in primitive society; wealth that meant not everyone had to scramble for a living. Petty rulers emerged, kings that could live off the labor of their subjects. Other bureaucrats emerged, too, and probably some of these folks, free of the endless demands of the farm, learned to inscribe symbols on various surfaces. The invention of writing changed, well, maybe not a whole lot for the average Joe. It wasn't till the 17th century that large percentages of the Western world were able to read and write. Here in Carroll County Maryland the local historical society found that in a "sizeable percentage" of marriage licenses issued between 1910 and 1915 the betrothed signed with an X, as they were unable to form their own signature.

Memory is what makes a computer different from digital logic circuits. Because of it, we can build one relatively simple circuit - the CPU - that can implement virtually any sort of functionality. The notion of a stored program computer emerged from Alan Turing's work in the 30s, and was independently invented by Konrad Zuse in Germany at about the same time. The great computer engineers J. Presper Eckert and John Mauchly (who developed ENIAC and so much more) also came up with the idea, but John von Neumann was credited with the invention when he wrote a paper about the EDVAC that was circulated in an early form sans acknowledgement of the machine's two designers.

Zuse, a largely unheralded genius till recent years, went on to build the first programmable digital computer in 1941. The Z3 didn't use FPGAs, transistors or tubes; it employed 2400 relays instead. According to http://ed-thelen.org/comp-hist/Zuse_Z1_and_Z3.pdf about 1800 of those implemented the memory bank of 64 twenty-two bit words. That's puzzling, as 64 x 22 is only 1408, but imagine building memory out of relays! Needless to say the machine, running at around a 5 Hz clock rate, wouldn't give even a digital watch a run for the money.

There's a relay-based computer running today in Oregon. Harry Porter built one that used relays for register storage, though he employed SRAM for main memory. See http://web.cecs.pdx.edu/~harry/Relay/. A cheat, perhaps, but it illustrates the maxim that the memory is always much bigger than the processor.

The development of RADAR during World War II was, in my opinion, one of the most important progenitors of the electronics industry. These efforts had two important consequences: large systems using many active components (all tubes at the time) became viable and reliable, and the mass production of these systems both taught legions of technicians to work on electronics and removed perceived barriers to producing very complex equipment for the masses. TV, alas, followed soon after.

Advanced RADAR needed memory to remove images of still objects to highlight moving aircraft. Various schemes were invented that used delay lines to cancel static scenes. Enter J. Presper Eckert, again, who used piezo crystals to induce waves in columns of mercury. Later, his idea found life in the EDSAC computer of 1949, which used 32 mercury delay lines to store 512 (later 1024) words of memory. Just consider how tough, both circuit-wise and mechanically, it must have been to make a delay line memory work!
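To get a feel for the access pattern, here's a toy software model of a recirculating delay line - my own sketch for illustration, with an invented line length, not EDSAC's actual organization. Data marches through the line one word per "word-time," and a given word can only be read when it happens to emerge at the output, so on average a read costs half a trip around the loop:

#include <stdio.h>
#include <stdint.h>

#define LINE_WORDS 32           /* words circulating in one line (made up) */

typedef struct {
    uint32_t word[LINE_WORDS];
    int head;                   /* slot currently at the output end */
} delay_line;

/* One word-time: the word at the output emerges and is regenerated
   back into the line, much as the real hardware re-launched each
   pulse train into the mercury column. */
static uint32_t tick(delay_line *dl)
{
    uint32_t out = dl->word[dl->head];
    dl->head = (dl->head + 1) % LINE_WORDS;
    return out;
}

/* Read slot 'addr': nothing to do but wait for it to come around.
   On average that's half the line length - the price of sequential,
   rather than random, access. */
static uint32_t dl_read(delay_line *dl, int addr, int *word_times)
{
    *word_times = 0;
    while (dl->head != addr) {
        tick(dl);
        (*word_times)++;
    }
    return tick(dl);
}

int main(void)
{
    delay_line dl = { .head = 0 };
    for (int i = 0; i < LINE_WORDS; i++)
        dl.word[i] = (uint32_t)(i * 100);

    int waited;
    uint32_t v = dl_read(&dl, 20, &waited);
    printf("got %u after %d word-times\n", v, waited);
    return 0;
}

That spin-until-it-comes-around wait is the defining pain of every sequential memory in this story, from mercury columns to drums to shift registers.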

But delay line memory held on for quite a while. The 1951 UNIVAC 1 had about 1000 words worth, and the fiendish complexity of the memory system is awe-inspiring. More here: http://univac1.0catch.com/index.htm .

Inevitably, computers used more electronics and fewer mechanical components. Perhaps the simplest memory circuit in the early days was the dual-triode flip-flop, which really wasn't much different from the structure of modern SRAM. Except the latter is a billionth of the size of the tube version and uses nanowatts. Modern SRAMs don't glow in the dark, either.

So many flip-flops were used that enterprising engineers created the first IC-like product around them. The plug-in module (pictures here: http://homepages.nildram.co.uk/~wylie/ICs/package.htm) used a tube that contained two triodes (each functionally similar to a transistor) to store a single bit of information. The idea persisted into the 70s: Digital Equipment Corporation's Flip-Chip was a small circuit board, used in a number of their computers, that implemented a simple function like a flip-flop.

Flip-flops are fast but expensive on a per-bit basis (even to this day, SRAM remains much more expensive than DRAM). Early computers used all sorts of alternatives, some of which I've mentioned in these two articles. I have a certain fondness for now-extinct drum memory, as the Univac 1108 at my college had a large collection of these. It seems that Andrew Booth invented drum memory in 1948 when he created a 2 inch long version that stored 10 bits per inch. By the early 50s drums were used in a number of machines, notably IBM's 704, the predecessor of their hugely important 7000-series of machines.

Drum memory is like disk, except the recording medium is shaped like the eponymous drum. Heads are fixed - there's no seek - and there's at least one head per track.

Drums were exclusively for mass storage, except when they weren't. In at least one case, that of Librascope's LGP-30 from 1956, the drum stored everything, including the CPU's registers, in 4096 32-bit words to keep costs down. And costs did stay down: the computer was a comparative steal at $47k. That's a third of a million in today's dollars, the price of a house - a house which today, counting its embedded systems, probably holds dozens to hundreds of CPUs and memory systems.

Then there was the 1957 Burroughs 205 computer, which did have CPU registers implemented in tubes. A drum memory formed the machine's main storage. Twenty tracks each stored 200 words. But it also provided four quirky "quick-access bands," each of which held 20 words. Those twenty words were stored ten times around each band, reducing access time by an order of magnitude.
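The quick-access trick is pure geometry, and a few lines of arithmetic show why it bought an order of magnitude. A back-of-the-envelope model in C - the drum speed below is my own assumed figure for illustration; the 200-word tracks and ten-fold replication come from the machine as described above:

#include <stdio.h>

int main(void)
{
    double rpm    = 3600.0;             /* assumed drum speed, for illustration */
    double rev_ms = 60000.0 / rpm;      /* one revolution in milliseconds */

    /* Normal track: one copy of each of 200 words, so the average
       rotational wait for a given word is half a revolution. */
    double normal_ms = rev_ms / 2.0;

    /* Quick-access band: the same 20 words written ten times around
       the circumference, so a copy is never more than a tenth of a
       revolution away - an average wait of a twentieth. */
    double quick_ms = (rev_ms / 10.0) / 2.0;

    printf("average wait, normal track: %5.2f ms\n", normal_ms);
    printf("average wait, quick band:   %5.2f ms\n", quick_ms);
    printf("speedup: %.0fx\n", normal_ms / quick_ms);
    return 0;
}

Whatever the actual RPM, the ratio is fixed by the replication factor: ten copies, one tenth the rotational latency.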

Every drum memory I have read about uses magnetic media, just like today's disk drives. Except for the Atanasoff-Berry Computer, which first ran in 1942. It used two drum memories, each of which had 1600 capacitors somehow planted on the drums, organized as 32 words of 50 bits each. There's nothing new under the sun; just as the capacitors in today's DRAM need to be periodically refreshed, so, too, did the drum's capacitors. Every revolution - at 60 RPM, that's once a second - the data was rewritten.

Probably the most iconic image of computer memory is the spinning tape drive of the 50s and 60s. Tape drives peppered every sci-fi movie of the era, appropriately, as tape was extremely inexpensive and therefore widely used.

The first use of tape for computer memory dates to 1951. Remington Rand's UNISERVO tape drive for the first UNIVAC computer (which was designed by - who else? - the Eckert-Mauchly Company) employed 1200 foot rolls of ½ inch metal tape - not the later ferrous-coated Mylar. Each reel held 1.44 million characters - the same as a 3.5" floppy disk. Though normally operated at 128 bits per inch, a special diagnostics mode reduced that to 22 BPI - about a twentieth of an inch, over a millimeter, per bit. Then, the technician could immerse the tape in a mix of iron filings, which would stick to the magnetized spots so he could read the bits!

Tape was cheap; disks and drums were not. Computer centers invariably had a tape library: racks and racks of reels owned by various users. A user would send a request to the operator to load a tape onto a drive. When that happened - minutes or hours later, depending on workload - you'd have access to your files, at least until the operator pre-empted the drive to load another user's tape.

With the invention of the microprocessor, tape saw a brief resurgence. Cheap CPUs demanded inexpensive mass storage, so various innovators adapted off-the-shelf technologies to solve the problem. In the case of tape, they used off-the-retail-shelf approaches: it wasn't long before people were connecting audio cassette decks to their computers to store programs. MITS, the folks who gave us the Altair computer, offered a cassette interface as early as 1975. When the IBM PC came to market, floppies were an expensive option; the PCs all came equipped with a cassette tape interface (the user was expected to connect his or her own audio deck to the machine).

Another of my favorite memory devices is the shift register, a device that clocks bits from one flip-flop to another, rather like a row of cheerleaders passing a baton. They're analogous to the delay lines already mentioned in that both are sequential and not random access. That's similar to tape, too, although a tape drive can stop and rewind, albeit at great cost in performance.
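A shift register is almost embarrassingly easy to model in software. A minimal sketch - the eight-stage length is arbitrary, chosen only for illustration:

#include <stdio.h>
#include <stdint.h>

#define STAGES 8                /* arbitrary register length */

/* One clock: every bit moves one stage toward the output; 'in'
   enters at the top and the bit at the bottom falls out. */
static int shift(uint8_t *reg, int in)
{
    int out = *reg & 1;
    *reg = (uint8_t)((*reg >> 1) | ((unsigned)in << (STAGES - 1)));
    return out;
}

int main(void)
{
    uint8_t reg = 0;
    int pattern[] = { 1, 0, 1, 1 };

    /* Clock the pattern in... */
    for (int i = 0; i < 4; i++)
        shift(&reg, pattern[i]);

    /* ...then clock zeros and watch the stored bits emerge - first
       in, first out, after working their way down the line. */
    for (int i = 0; i < STAGES; i++)
        printf("%d", shift(&reg, 0));
    printf("\n");               /* prints 00001011 */
    return 0;
}

Serial in, serial out: to get a bit back you must clock it all the way through, which is exactly the delay-line access pattern in solid-state form.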

It seems the earliest shift registers were found in the seminal Colossus code-breaking machine at Bletchley Park during the Second World War. (That site is worth a visit if you're in the UK; if you're not, it's worth a trip to England.) That machine used a five-stage shift register implemented with hydrogen thyratron tubes, the 40s-era equivalent of the silicon controlled rectifier.

What makes - to me - the shift register so interesting is that it was the lifeblood of a tiny company in what was not yet known as Silicon Valley. Intel's 1402 comprised four 256-bit dynamic shift registers. "Dynamic" meant they had to keep shifting to preserve the data. These parts were used in applications where something had to be continuously refreshed, like a video screen. I haven't been able to find a datasheet, but the similar 1403 device could be obtained in an 8 pin metal can; four of those pins were devoted to +5, -5, -9 and ground. Busicom, the Japanese calculator maker, wanted Intel to use their shift register technology to make a better product. That notion was replaced by the novel idea of a four bit microprocessor, a concept that didn't help Busicom much (they failed a few years later), but did boost Intel's sales a bit.
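Back to that "dynamic" behavior for a moment: it's easiest to picture as a feedback loop. Route the output bit straight back to the input and the contents survive indefinitely - but only as long as the clock keeps running. A minimal model (the 16-bit size is invented; the real 1402 held four 256-bit registers):

#include <stdio.h>
#include <stdint.h>

#define BITS 16

int main(void)
{
    uint16_t reg = 0xB00C;      /* arbitrary stored pattern */

    /* One full recirculation: shift BITS times, feeding the bit that
       falls out of the bottom back in at the top. Each complete pass
       leaves the contents unchanged. */
    for (int i = 0; i < BITS; i++) {
        unsigned out = reg & 1;
        reg = (uint16_t)((reg >> 1) | (out << (BITS - 1)));
    }

    printf("after one pass: 0x%04X\n", reg);    /* prints 0xB00C */
    return 0;
}

For a video display that's no hardship: the display circuitry reads the bits out continuously anyway, so the recirculation comes for free.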

And so we come full circle. Memory and computing are intrinsically intertwined, and the use of memory radically changed the nature of digital electronics. That it spawned the microprocessor revolution is, in retrospect, hardly surprising.

Memory comes in many varieties today. Though it's PC-centric, I highly recommend Ulrich Drepper's very long but fascinating paper "What Every Programmer Should Know About Memory" (http://people.redhat.com/drepper/cpumemory.pdf).