Episode 11: Getting Busted in New Jersey With Core Memory.
|September 1, 2014|
Core memory was once the staple of all computers. Thankfully those days are gone. But the history is interesting and involves a run-in with the police and some gin.
Hi, I'm Jack Ganssle, and welcome to the Embedded Muse video blog, which is a companion to my free online Embedded Muse newsletter. Today we're going to talk about a kind of memory which really shaped the computer industry for a long time and which is now thankfully obsolete.
This is a core memory. I got this when I was 17. I was planning to build a computer and was looking for some sort of memory device to put in it. There was no such thing as semiconductor memory at the time; all the computers used these core memories. As a matter of fact, I keep this around all these years later because I got it by hitchhiking from Maryland to Boston to a fabulous old surplus store they used to have up there, where I bought this device as well as some other bits of antique electronics. On the way back, hitchhiking on the New Jersey Turnpike, my friend and I got picked up three times for hitchhiking on the turnpike before they dragged us in to the police department, threw us in a cell, and searched us. When they came across this memory, the cop wondered what it was. I told him it was a memory out of a computer, and he thought I was insane. Nobody had computers back then. Even the smallest computers cost hundreds of thousands of dollars.
Before core memory was invented, computers used things like delay line memory, which was basically a set of springs in a mercury bath. A transducer would send signals, zeros and ones, into the springs, which would take time to propagate acoustically through them; they would then come back out and be reinserted into the delay line. Mercury was used because its impedance matched that of the drive transducer really well. Alan Turing, the famous Alan Turing, actually suggested using gin at one point, and I don't know if anyone ever took him up on that.
Another kind of device was called a Williams tube, which was basically a CRT where the bits were painted on the screen and a metal plate on the outside could then sense those bits, so that it would know which ones were zeros and which ones were ones. A Williams tube could typically hold 512 to 1k bits. That was it, so it was not very efficient.
This is the very first core memory. It was actually from the Whirlwind Computer, which was a machine built at MIT in the early 50's. It originally had Williams tube memory on it but eventually they did put core on it. I took this picture at the Computer History Museum, which is a must-stop visit for any techie. It's in Mountain View, California. The core memory in this case is 32 by 32, or 1k bits. 16 of these were used to get 1k words. And Whirlwind, of course, was the first computer to use core memory.
My core is somewhat more advanced than the Whirlwind memory. You can see that it has these little toroids. They're almost like Cheerios, through which wires are threaded. In this case there's a 32 by 64 array of bits, giving us 2k bits per plane, and there are 26 planes (you can see there are cores on each side of the planes), giving a total of about 50k bits of memory in this device. Each core holds one bit of memory. The planes are accessed in parallel to read and write words in memory. The data is stored as a magnetic field in each core. To read, a particular core is selected by sending half the current needed to flip the field down one X select line and half down one Y select line, in the direction that writes a zero. Only the core at the intersection sees enough current to respond: it does nothing if it already holds a zero, but changes state if it was a one. There's a little delay before this happens. When the core changes state, it induces a small signal in the sense wire, which is detected. Obviously this is a destructive read, so the data must then be written back. It only takes a couple hundred milliamps to drive the core, which induces a few tens of millivolts on the sense line.
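The select/sense/write-back cycle described above can be sketched as a toy simulation. This is only the digital logic of coincident-current selection, destructive read, and write-back; the class and names (`CorePlane` and so on) are invented for illustration, and nothing here models the analog half-currents or sense amplifiers:

```python
# Toy model of one core plane (a hypothetical sketch, not real hardware).
# A read drives the addressed core toward zero, senses whether it flipped
# (the destructive read), then restores the bit with a write-back.

class CorePlane:
    def __init__(self, rows, cols):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, x, y, bit):
        # Half-current on one X line plus half-current on one Y line:
        # only the core at the intersection sees enough field to flip.
        self.bits[x][y] = bit

    def read(self, x, y):
        sensed = self.bits[x][y]   # sense wire pulses only if it held a 1
        self.bits[x][y] = 0        # the read just destroyed the data...
        self.write(x, y, sensed)   # ...so write it back immediately
        return sensed

plane = CorePlane(32, 64)          # 2k bits, like one plane in the video
plane.write(5, 40, 1)
assert plane.read(5, 40) == 1      # the flip is sensed as a 1
assert plane.read(5, 40) == 1      # data survives thanks to the write-back
```

A real memory stacks many such planes and reads them in parallel, one bit of the word from each plane, just as the video describes.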
You might say that a destructive read is a really weird way to handle memory, but consider the DRAMs that we take for granted. The charge is stored in a tiny capacitor inside the DRAM cell, which has to be refreshed every few milliseconds, otherwise it bleeds away. The capacitor is on the order of 10 or 20 femtofarads; a femtofarad is 10 to the minus 15th farads. It's so small that if you could put a 1 megohm resistor across it, the charge would bleed off in tens of nanoseconds. Now that is crazy.
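The arithmetic behind that claim is just an RC time constant, using the values from the text (the 1 megohm leak path is the narration's hypothetical, not a real DRAM parameter):

```python
# Back-of-the-envelope RC time constant for the DRAM-cell example.
R = 1e6        # ohms: the hypothetical 1 megohm resistor
C = 10e-15     # farads: a ~10 femtofarad cell capacitor
tau = R * C    # RC time constant in seconds -- about 10 nanoseconds
```

With the 20 fF figure the time constant doubles to roughly 20 ns, so either way the charge is gone in tens of nanoseconds, just as stated.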
Core was pretty much the only kind of memory used in computers from the mid-'50s well into the '70s. It eventually got down to the point where it cost about a penny per bit, which sounds pretty cheap, but 64k bytes of memory would have cost about $5,000. That's about $30,000 in today's money. So there you have it: a quick look back at memory in computers. Thank god we're not in those bad old days anymore.
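As a quick sanity check of that price, assuming "64k" means 64K bytes (my reading, since the penny-per-bit figure only lands near $5,000 when counted in bytes):

```python
# Penny-per-bit cost check for 64K bytes of core memory.
bits = 64 * 1024 * 8          # 524,288 bits in 64K bytes
cost_dollars = bits * 0.01    # one cent per bit -- about $5,243
```

That's within rounding distance of the $5,000 quoted in the video.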
Thanks for watching. And don't forget to go over to ganssle.com for about 1,000 articles on embedded systems and plenty more videos.