Speed Kills
Data comm over cheap serial links might be more cost effective than other, faster, options.
Published in Embedded Systems Programming, August 1990
By Jack Ganssle
I warned Tyler Sperry, this magazine's Editor, that my column would occasionally address issues that cross the border between software and hardware. Ours is truly an interdisciplinary business. Often the line between software and hardware is a bit blurred, since in many cases code replaces functions traditionally assigned to hunks of metal and silicon.
Data communications is one of the stickiest thorns of this industry. Even in simple embedded systems a lot of data moves between the processor and each I/O device. Factory automation equipment might resemble a complex nervous system, with dozens or hundreds of remote sensors communicating with a central controller over synapses of cable or optical fiber. These communications links absolutely must be optimized to reduce costs (copper wire is expensive!) and improve reliability. Simple, brute force approaches to moving data around are no longer adequate.
Every office now relies on a LAN. Serial networks of one form or another are starting to permeate the embedded data communications world. While for decades serial has been used as a point-to-point link (like from one terminal to a mainframe), now smart links can handle commands and data to and from lots of peripherals, all on one set of wires.
Consider modern automobiles: long before the end of this decade most cars will be networks of dozens of CPUs. Every simple function will be assigned its own processor. The driving force behind this is not so much to improve performance or to add features, but simply to save the weight of the wiring! A serial communication link uses one pair of wires to replace the ratsnest of cabling that pervades most cars.
I worked as a consultant for several years. What a miserable job! Customers could just never bring themselves to accept the expert advice they paid us dearly for. Data communications was always one of the biggest issues. My partner and I designed a new security system for the White House during Reagan's first term. Dozens of identical 8085-based chassis were to be connected to a remote mainframe computer. The low data rates made serial the natural solution. Too high tech - the Secret Service demanded a massively parallel bus consisting of several hundred wires per chassis! Over a thousand large connectors were involved, creating a reliability nightmare. Your tax dollars at work.
In another case, we installed a large thickness measuring system in a Baltimore steel plant. Quite a few 12 bit encoders were located up to 1000 feet from the central computer (a pair of PDP-11s). We tried, oh how we tried, to convince the customer to connect each encoder with a single optical fiber pumping serial data. No luck. They were simply afraid of the technology, and refused to even consider putting an 8051 at each encoder site. We were forced to bring the data back in parallel, installing a dozen differential line driver chips at each encoder, transmitting the data over fantastically expensive inch-thick cables. Fear of the unknown is a horrible thing.
Microprocessor technology has changed the economics of design. Silicon is a lot cheaper than copper. Transmitting data over long distances on bundles of wire is almost always much more expensive than multiplexing the same data onto a single wire or fiber. Multiconductor copper wire (especially armored for factory environments) can cost over a dollar a foot. The savings in copper will justify a one-chip micro that converts parallel signals to one serial channel in most applications. A lot more money is saved by connecting all of the remote devices on the same serial cable, saving the cost of running a separate wire from each sensor all the way back to the central computer.
Conceptually, a multidrop serial link is quite simple. Every node (peripheral or computer) is assigned a unique address. The remote device ignores all transfers unless specifically addressed by a central controller, in which case it gets sole access to the link. There are hundreds of variations on this theme, all targeted at reducing cabling costs.
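To make the idea concrete, here's a minimal sketch in C of the address filtering a remote might perform. The frame layout (address, length, payload, checksum) and the routine uart_read_byte() are assumptions invented for illustration, not any particular standard; every node reads the whole frame so it stays synchronized with the link, but it keeps the data only when the address matches its own.

#include <stdint.h>

#define MY_ADDRESS  0x07        /* unique address assigned to this node */
#define MAX_PAYLOAD 32

/* Assumed to block until the next byte arrives on the shared link. */
extern uint8_t uart_read_byte(void);

/* Returns payload length if the frame was addressed to us and the
   checksum passes; returns -1 for frames meant for some other node. */
int receive_frame(uint8_t *payload)
{
    uint8_t i, b;
    uint8_t addr = uart_read_byte();
    uint8_t len  = uart_read_byte();
    uint8_t sum  = addr + len;

    if (len > MAX_PAYLOAD)
        len = MAX_PAYLOAD;

    for (i = 0; i < len; i++) {
        b = uart_read_byte();
        sum += b;
        if (addr == MY_ADDRESS)
            payload[i] = b;     /* keep the data only if it's ours */
    }

    b = uart_read_byte();       /* checksum byte */

    if (addr != MY_ADDRESS || (uint8_t)(sum + b) != 0)
        return -1;              /* not for us, or mangled in transit */

    return len;
}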
Signetics recognized the problem some time ago. Even in situations where data is to be transferred only a few inches (say from a CPU to a peripheral chip on the same board), the conventional parallel computer bus is inefficient. A lot of CPU and peripheral chip pins must be dedicated to the unglamorous task of communication. Pins are valuable - it's much better to allocate them to useful functions like extra I/O. If speed is not a problem, why not transfer data via a simple serial protocol?
I²C is a serial bus that communicates to any number of peripherals using only two wires. One carries a clock signal that sequences the timing of bits on the other line - the data wire. The clock indicates when each data bit is stable, and is used to indicate special conditions like data start and stop. This contrasts with RS-232 and other common data transmission standards, where the timing is implicit in the data stream: in RS-232, given a known baud rate, data bits follow the start bit in an exact time sequence. By adding a distinct clock line, all communications become speed independent. The current bus master supplies the clock that all other devices synchronize to.
I²C also differs from RS-232 in its signal levels. It was really designed for very short range (on a PC board) communication, so typically uses MOS levels. However, it's not hard to buffer the signals and increase the communications range substantially.
I²C doesn't force one controller to be the perpetual bus master. Any device can assume control of the bus, so multiple processors can be attached. An arbitration scheme resolves conflicts between simultaneous requests for control of the bus.
Another advantage of I²C over RS-232 is its well defined protocol. RS-232 just shoots data between two fixed devices - each had better know what proprietary protocol, if any, is being used. I²C supports an addressing scheme, so when many different devices are connected to the same two wires a particular unit can be selected. Once a connection is established between a device and the current bus master, data is transferred in precisely measured blocks; after the last block another master can gain control of the bus. Thus, like IEEE-488 or SCSI, any device that understands the communication protocol can be connected to the I²C interface. Unlike SCSI, the communications protocol is fairly simple, so real systems are pretty easy to implement.
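For the flavor of a transaction, here's a sketch written against a hypothetical bus driver - i2c_start(), i2c_send(), i2c_read(), and i2c_stop() are assumed names, not anything from Signetics - that reads two bytes from a device at an example address of 0x48:

#include <stdint.h>

extern int     i2c_start(void);          /* claim the bus, issue START      */
extern int     i2c_send(uint8_t byte);   /* send a byte, 0 if acknowledged  */
extern uint8_t i2c_read(int last);       /* read a byte, NAK it if last     */
extern void    i2c_stop(void);           /* issue STOP, release the bus     */

int read_remote_sensor(uint8_t *hi, uint8_t *lo)
{
    if (i2c_start() != 0)
        return -1;                       /* lost arbitration to another master */

    /* 7 bit address in the upper bits, read/write flag in bit 0 */
    if (i2c_send((0x48 << 1) | 1) != 0) {
        i2c_stop();
        return -1;                       /* nobody answered to that address */
    }

    *hi = i2c_read(0);                   /* more data coming: acknowledge   */
    *lo = i2c_read(1);                   /* last byte: don't acknowledge    */

    i2c_stop();                          /* another master may now take over */
    return 0;
}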
Refer to Signetics' Microcontroller handbook for a good description of the I²C bus.
I²C supports data rates up to 100 kbps. Need more speed? Last year Gazelle Microcircuits (Santa Clara, CA) introduced a chip set called (of course) the "Hot Rod" that communicates serially at up to 1 billion bits per second. To get this sort of speed they employ a gallium arsenide (instead of silicon) substrate, the same exotic technology that will soon bring us the Cray 3.
The Hot Rod differs from I²C in that it is for point-to-point communication only. In other words, there is no bus - the Hot Rod is a dedicated chip set that connects two devices, and only two devices. The Hot Rod is also a bit simpler to use, in that no programming is needed. A transmitter chip takes a 40 bit parallel bus and converts it to serial, transmitting it over coax or fiber optics to the receiver chip where it is reconstituted into the original 40 bit parallel bus. Sounds vaguely like Star Trek's transporter...
Why not carry this a step further? After all, the Hot Rod can move data at tremendous rates - suppose you could replace the massive (and expensive) PC motherboard connectors with a simple two wire serial link? In a way, this is the idea behind Apple's serial interface on the Macintosh. Tie external devices to the main computer with a simple serial cable that clocks fairly high speed data.
Gazelle's approach is targeted at very high speed applications; the I²C bus is currently limited to products from Signetics. The rest of us can still take advantage of these de facto standards. I²C is particularly well suited to smaller embedded systems, since it could be implemented in an ASIC or even a PLD with little trouble. Modern high integration processors (like the 64180) have lots of serial on board (including fast clocked channels) - perhaps it would not be too difficult to kludge up an I²C interface using these resources.
Since I²C is speed-independent, ultra-cheap systems can even use a processor's parallel port bits and lots of code to simulate a slow version of I²C, although it will eat lots of CPU time. Look at Intel's "Serial Interfacing on the 8085" application note for more information about simulating serial this way.
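Here's a rough sketch of what the bit-banged approach looks like for a single byte, assuming set_scl(), set_sda(), read_sda(), and short_delay() are wrappers you've written around two spare port bits; a real I2C bus uses open-drain drivers, which this simplification ignores.

extern void set_scl(int level);
extern void set_sda(int level);
extern int  read_sda(void);
extern void short_delay(void);      /* sets the (leisurely) bit rate */

static void i2c_bit_out(int bit)
{
    set_sda(bit);                   /* data may change only while SCL is low */
    short_delay();
    set_scl(1);                     /* clock high: receiver samples SDA      */
    short_delay();
    set_scl(0);
}

/* Shift one byte out MSB first, then clock in the acknowledge bit. */
int i2c_write_byte(unsigned char byte)
{
    int i, ack;

    for (i = 7; i >= 0; i--)
        i2c_bit_out((byte >> i) & 1);

    set_sda(1);                     /* release SDA so the slave can answer   */
    short_delay();
    set_scl(1);
    ack = (read_sda() == 0);        /* slave pulls SDA low to acknowledge    */
    set_scl(0);
    return ack;
}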
The Code
Serial does pose some interesting software design problems. With traditional parallel I/O, to get the setting of a switch you just read the bit. This isn't so easy if the switch is located remotely and its value communicated over a serial link.
One common approach is to send a request for data as needed. This keeps the software simple, but does mean that the controller will often be kept waiting. Another method is to let the external peripherals transmit current data at regular intervals. Perhaps remote sensor A sends its data every 100 milliseconds. Then the controller can consult its memory image of sensor A's last transmitted state.
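A sketch of the memory image, with the table layout and sensor numbering invented for illustration: the receive code stamps each report into a table, and the rest of the program just reads the table without ever touching the link.

#include <stdint.h>

#define NUM_REMOTES 16

struct remote_image {
    uint16_t value;                 /* last value the remote reported       */
    uint16_t age;                   /* ticks since the report arrived       */
};

static volatile struct remote_image image[NUM_REMOTES];

/* Called by the serial receive code whenever a report comes in. */
void image_update(uint8_t id, uint16_t value)
{
    if (id < NUM_REMOTES) {
        image[id].value = value;
        image[id].age   = 0;
    }
}

/* Called from a periodic timer tick; a large age flags a dead remote. */
void image_tick(void)
{
    uint8_t i;
    for (i = 0; i < NUM_REMOTES; i++)
        if (image[i].age < 0xFFFF)
            image[i].age++;
}

/* The application simply consults the image (slot 0 is "sensor A" here). */
uint16_t sensor_a_reading(void)
{
    return image[0].value;
}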
An even better approach is to program the remotes to transmit only when their data changes. In the case of switches this will result in a dramatic reduction in transmissions. If the data changes quickly (say from a fast encoder) consider using the same technique found in a mouse or trackball - send frequent bursts of data, with each transmission containing a measure of the sensor's change from the last burst.
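On the remote side the code is nearly trivial. In this sketch send_report() and the report codes are made-up placeholders; the switch handler reports only on change, and the encoder handler sends deltas, mouse-style.

#include <stdint.h>

extern void send_report(uint8_t code, int16_t value);   /* queues a transmission */

/* Switches: transmit only when the (debounced) state actually changes. */
void poll_switch(uint8_t current_state)
{
    static uint8_t last_state = 0xFF;       /* force a report on the first pass */

    if (current_state != last_state) {
        last_state = current_state;
        send_report(0x01, current_state);   /* 0x01: "switch changed"           */
    }
}

/* Fast encoder: send the change since the last burst, not the whole count. */
void encoder_burst(int16_t count_now)
{
    static int16_t last_count;

    int16_t delta = count_now - last_count;
    last_count = count_now;

    if (delta != 0)
        send_report(0x02, delta);           /* 0x02: "encoder moved by delta"   */
}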
By all means code the serial receive routine to be interrupt driven. Polled UART I/O will bring the system to a crawl. This does create yet another problem - when is a block completely transferred? You can't let the code work with incomplete transmissions.
If speed is not an issue a low level interrupt service routine can copy entire blocks, when received, to a stable buffer that (implicitly) always contains correct data. Be sure to do the copy with interrupts disabled! A somewhat better approach is to employ double buffers with semaphores indicating which one contains valid data; the interrupt service routine fills one while the code uses the other. Be sure to isolate the data structures from your code with a driver routine, since every data access will involve a semaphore test. Smart programmers were using these object oriented techniques long before the fancy moniker was invented.
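Here's one way the double buffer might look, assuming a fixed block size and that disable_interrupts()/enable_interrupts() wrap whatever your processor provides; the interrupt routine fills one buffer and flips a flag when the block is complete, and get_block() is the only routine the rest of the code ever calls.

#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 16

extern void    disable_interrupts(void);
extern void    enable_interrupts(void);
extern uint8_t uart_read_data(void);     /* reading clears the receive interrupt */

static uint8_t buffer[2][BLOCK_SIZE];
static volatile uint8_t valid_buffer;    /* index of the last completed block    */
static uint8_t fill_index;               /* ISR's position within the new block  */

/* Receive interrupt: accumulate bytes; swap buffers when a block is done. */
void uart_rx_isr(void)
{
    uint8_t fill = valid_buffer ^ 1;     /* the buffer currently being filled    */

    buffer[fill][fill_index++] = uart_read_data();

    if (fill_index >= BLOCK_SIZE) {
        fill_index = 0;
        valid_buffer = fill;             /* publish the finished block           */
    }
}

/* Driver routine: the only place the rest of the code touches the buffers. */
void get_block(uint8_t *dest)
{
    disable_interrupts();                /* keep the ISR out while we copy       */
    memcpy(dest, (const uint8_t *)buffer[valid_buffer], BLOCK_SIZE);
    enable_interrupts();
}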
Novices always get burned by a common trait of UARTs. Receiving is easy: read the data when the "receive data ready" interrupt occurs - reading the data clears the interrupt. Transmitting isn't quite as straightforward. After the UART sends a byte it asserts a "transmit data buffer empty" interrupt, which stays asserted until another byte is transmitted. Unless you mask off the interrupt until you're ready to transmit again, you'll get an infinite stream of these interrupts. Be warned.
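A sketch of one common fix, assuming a generic UART whose "transmit buffer empty" interrupt can be masked; the access routines here (uart_write(), uart_tx_irq_enable()) are placeholders, not any real part's names. When the queue runs dry the interrupt is masked; queuing a new byte unmasks it again.

#include <stdint.h>

#define TX_QUEUE_SIZE 32

extern void uart_write(uint8_t byte);       /* load the transmit holding register */
extern void uart_tx_irq_enable(int on);     /* mask/unmask "buffer empty"         */
extern void disable_interrupts(void);
extern void enable_interrupts(void);

static uint8_t tx_queue[TX_QUEUE_SIZE];
static volatile uint8_t head, tail;

/* Transmit buffer empty interrupt: feed the next byte, or mask the
   interrupt off so it can't nag forever when there's nothing to send. */
void uart_tx_isr(void)
{
    if (head == tail) {
        uart_tx_irq_enable(0);              /* queue empty: shut it up            */
        return;
    }
    uart_write(tx_queue[tail]);
    tail = (uint8_t)((tail + 1) % TX_QUEUE_SIZE);
}

/* Queue a byte (no overflow check, for brevity) and kick the transmitter. */
void send_byte(uint8_t byte)
{
    disable_interrupts();
    tx_queue[head] = byte;
    head = (uint8_t)((head + 1) % TX_QUEUE_SIZE);
    uart_tx_irq_enable(1);                  /* unmask so the ISR drains the queue */
    enable_interrupts();
}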
Summary
When designing a system look carefully at all remote data collection devices. Can you save money by using serial communications? Sure, the software will be a bit more complicated, but cabling costs go down, reliability goes up, and, if optical fiber is used, noise immunity skyrockets. Engineers often eschew serial because of the time it takes to multiplex data onto the link. By all means, if speed is truly an issue then use whatever means are needed to get it - just know you'll pay a heavy penalty for it. Remember the old adage - speed kills.
A lot of embedded work is undertaken by old smokestack industries trying to use modern controllers to improve productivity and cut costs. These companies, the employers of many of this magazine's readers, really don't understand software. When a new project is contemplated, they huddle with hardware engineers, leaving software people out of the system's initial specification phase. What a disaster!
All too often, we software people abdicate our roles as expert advisors during a system's initial design. Hardware guys frequently design their little gems with little or no input from us, even though we could help them with the software tradeoffs that will yield a better design. It's important to get involved with all aspects of a project, to produce a product that is more saleable, more reliable, and more elegant.