A Look Foreback
Summary: What can we expect in technology in 2015?
An editor at another publication asked me for a prediction for this industry in 2015. What crazy, unexpected thing will happen?
I have no idea.
We do know there will be more. For less. More transistors, more functionality, more complex software and more design headaches. Engineering will continue to get more expensive while product costs go down.
In the 44 years since Intel launched the embedded industry we've seen geometries go from 10,000 nm to 14 nm. 7 nm isn't far away. To put this in perspective, a silicon atom is about 0.25 nm in diameter. When I started working with ICs the most complex parts had a few hundred transistors; most used just a dozen or two. Today it's hard to even know what the numbers are. A complex CPU or FPGA might have five billion. Some memory parts appear to be twenty times that; Samsung's V-NAND process is bringing true 3D parts to market today, though it's really hard to get hard-hitting technical details.
The path forward is, as always, fuzzy and hard to imagine. Instead, let's take a glimpse backwards to give some perspective to this relentless lunge into tomorrow.
The first commercially successful 8 bit microprocessor was Intel's 8008. It was a terrible part, requiring insane amounts of external electronics to build a useful system. But compared to the expensive minicomputers of the day it was inexpensive and small.
With 3500 transistors the part took tens of microseconds to execute a single instruction. An 18 pin DIP package meant the address and data buses had to be multiplexed over the same pins; in fact, the 14 bit address itself was multiplexed over the 8 pins shared with the data bus.
Why a 14 bit address bus? I have no idea, but that was generally plenty since memory was expensive. The 1702 EPROM, the most common non-volatile memory in those days, could store a whopping 256 bytes. Alas, I can neither remember nor find the price of that part. About the same time my college had a Univac 1108 with 1 mega-word of core; the core box cost a cool million dollars, back when a megabuck was a lot of money. And when the 8008 came out core was still mainstream technology; till the mid-70s we were buying a lot of Nova minicomputers, all of which used core.
The 8008 user's manual (http://www.classiccmp.org/8008/8008UM.pdf) has schematics for a computer based on that CPU. It sports 1 KB of RAM using 32 (!) 1101 SRAM chips. According to http://www.crescentmeadow.com/document_imaging/pdf/22045p.pdf those parts were about $256 in today's dollars. That's $8k for a single KB of RAM.
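The arithmetic behind that figure is worth a quick sanity check. A trivial sketch, taking the 1101's 256 x 1 bit organization and the quoted per-chip price at face value:

```python
# 1 KB of RAM built from 1101 SRAMs (256 x 1 bit each).
BITS_PER_CHIP = 256            # the 1101 stored 256 one-bit words
RAM_BITS = 1024 * 8            # 1 KB
PRICE_PER_CHIP = 256           # rough inflation-adjusted figure quoted above

chips = RAM_BITS // BITS_PER_CHIP
total = chips * PRICE_PER_CHIP
print(chips, total)            # 32 chips, $8192 -- about $8k per KB
```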
A typical board using the 8008 looked like the following:
It took a lot of external parts to make a working computer using the 8008!
Note that this part was a CPU chip, not an MCU. No memory or I/O was included. Its $120 price - about $700 in today's inflation-adjusted dollars - was about what Intel still charges for their latest and greatest processors. But imagine the cost to build any sort of embedded system!
Despite the paucity of pins, three had to be dedicated to power (+5, -9 and ground) and two to a 2-phase clock. That leaves only 13 for all other functionality. Compare that to the nearly thousand-pin (well, ball) packages used by a modern Pentium!
One of those pins was an interrupt input whose behavior was nothing short of odd. When the CPU recognized the interrupt it did not follow a vector. It merely issued another fetch cycle, with status bits indicating this was an interrupt fetch. External logic had to jam an instruction onto the bus: it was up to the hardware designer to disable memory during this fetch and gate the jammed instruction onto the bus instead.
Most developers jammed an RST. Eight of these were one-byte calls to specific locations in low memory (RST0 went to 0x0000, RST1 to 0x0008, etc.). Being calls, the stack preserved the system's pre-interrupt PC; being one-byte, the electronics required weren't too complex.
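The vector arithmetic behind those RSTs is simple enough to sketch. The one-byte 00AAA101 opcode pattern below is my recollection of the 8008 encoding, so treat it as illustrative:

```python
# RST n is a one-byte call to the fixed address n * 8.
def rst_target(n: int) -> int:
    assert 0 <= n <= 7
    return n * 8

def rst_opcode(n: int) -> int:
    # 8008-style encoding: 00 AAA 101, AAA being the RST number
    return (n << 3) | 0b101

for n in range(8):
    print(f"RST{n}: opcode {rst_opcode(n):#04x} -> call {rst_target(n):#06x}")
```

Jamming any one of those eight opcodes onto the bus during the interrupt fetch gets the CPU into a known handler with the return address saved, which is what made the scheme workable with so little external logic.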
Interrupt response was measured in many tens of microseconds, as an RST had to complete its call and then fetch instructions at the destination address. We were building a system that had to take data from a spinning filter wheel. An encoder generated 1000 pulses per revolution of the wheel; I forget how fast it rotated, but it was pretty quick, and low latency was critical to get the data at the right time. Things happened too fast for polling, and the 8008 couldn't even keep up with jammed RSTs. We hit on the idea of jumping to a halt instruction located just before the data-taking code. Then the interrupt jammed a NOP onto the bus, the fastest instruction available, and the system proceeded to take and process the A/D data before halting again.
Intel's Intellec 8 was a development system filled with all of the boards needed to make a working computer. EPROMs contained the tool's code, but to boot it one had to initiate an interrupt with the clocks frozen, and then manually jam a JMP 0x0100 instruction in through the front panel switches. 40+ years later, after having done this hundreds or thousands of times, I still remember that a JMP was 0x44.
The Intellec 8
The Intellec offered nothing like JTAG, BDM or even an ICE for debugging. The motherboard had around a dozen connectors that carried all of the CPU bus signals. We designed a board that plugged into one of these slots and connected to a similar board plugged into the bus of our system. To develop code we'd pull the CPU board out of our system and use the Intellec as a replacement. A Model 33 TTY was the console, and a crude debugger let us set breakpoints and the like. It took three days to reassemble the program, so we'd patch machine instructions into the code to fix bugs.
The 8008 had a 7-level deep hardware stack. Push or call too much and the next pop or return would give unexpected results. Stack overflows that caused the code to return to random places were common and frustratingly hard to troubleshoot.
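The failure mode is easy to model. Here's a toy sketch of a fixed-depth return stack that silently wraps on overflow; it's not a cycle-accurate model of the 8008, just the wrap-around idea:

```python
# Toy model: a 7-level hardware return stack that wraps silently.
class HardwareStack:
    DEPTH = 7

    def __init__(self):
        self.slots = [0] * self.DEPTH
        self.ptr = 0

    def push(self, addr):
        self.slots[self.ptr] = addr
        self.ptr = (self.ptr + 1) % self.DEPTH   # overflow clobbers the oldest entry

    def pop(self):
        self.ptr = (self.ptr - 1) % self.DEPTH
        return self.slots[self.ptr]

s = HardwareStack()
for return_addr in range(1, 9):      # eight nested calls: one too many
    s.push(return_addr)

print(s.pop())                       # 8 -- the innermost return looks fine
print([s.pop() for _ in range(7)])   # [7, 6, 5, 4, 3, 2, 8] -- address 1 is gone
```

The outermost return address has vanished, so the final return lands somewhere unexpected: exactly the symptom that made these bugs so hard to troubleshoot.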
Today we marvel that a 32 bit MCU with plenty of memory and I/O can be had for a half-buck or less, so the 8008 seems like laughable technology at astronomical prices. But it, and its predecessor the 4004, ushered in the world of embedded systems.
What will the next 44 years look like? The answer is unknowable. Moore's Law looks likely to survive for quite some time. The aforementioned V-NAND technology, Xilinx's 2.5D approach, and other ideas may help Moore scale to more than two dimensions.
I suspect the future will be a world of technology that would leave us breathless. It's likely that the technology will exceed our ability to grapple with associated cultural and moral issues; the ensuing decades could be full of fascinating debates about balancing the human and mechanical world. Or not; we may careen into the future with blinders on.
Like H.G. Wells' Time Traveller, given the choice to catch a glimpse of the future vs. the past, I'd happily choose the former.
Published December 30, 2014