
Coding ISRs

Handling interrupts is tricky at best. Here are a few suggestions.

Published in Embedded Systems Programming, July 1994


By Jack Ganssle

Are you an interrupt-lover or hater? As a newly hatched engineer I remember going to great lengths to avoid using interrupts in my embedded designs, covertly fearful of my ability to debug the code. With growing experience and confidence that changed to an undisciplined use of interrupts in every conceivable area, figuring that they'd speed system response and show everyone just how clever I was.

A horrible bit of reality intervened, in the form of a high-speed tape system that had only microseconds to respond to each incoming data byte. In my confidence (and arrogance) I blithely wired the data-ready signal to the system's interrupt pin, figuring it would be easy to write a fast ISR (Interrupt Service Routine) to handle the river of data. Thus I learned about latency - it can take a long time for a system to even see the interrupt, let alone stack a return address, fetch a vector and start the ISR. The system was finally delivered with a five instruction interrupt-free polling routine taking in data... just barely fast enough to keep the FIFOs from overflowing.

Interrupts are indeed wonderful. They let you write code that is basically unaware of external events, each of which gets serviced by an ISR. Interrupts do, however, impose execution penalties that you simply must account for. They also complicate debugging, since these asynchronous events can confuse your tools. (Fortunately, modern embedded tools generally act well in an interrupt-intensive environment, so debugging is not the act of heroics it once was).

A recent article in Circuit Cellar Ink by Do-While Jones panned interrupt use in almost all cases, promoting the use of polled code wherever possible. Though it is important to recognize their limitations, interrupt-Luddism is an awful mistake. Well-written ISRs encapsulate complex hardware behavior - surely a good thing. They remove handling asynchronous I/O from main line code, greatly simplifying system design. Interrupts play naturally into the use of a Real Time Operating System, which lets you buy, rather than foolishly rewrite, a significant chunk of the real-time sequencing part of your code.

Interrupts can, however, introduce an element of chaotic behavior into your design. In some cases it may not be possible to prove that, if every interrupt comes at the same time, sufficient stack space and CPU time will be available to process each one. One solution is to write ugly main-line code that ponderously samples each input sequentially. Another is to add external hardware -- FIFOs or even a distributed processor -- to reduce the I/O burden on the CPU.

Vector Overview

One common complaint against interrupts is that they are difficult to understand. There is an element of truth to this, especially for first-time users. However, just as we all somehow shattered our fathers' nerves and learned to drive a stick-shift, we can overcome inexperience to be competent at interrupt-based design.

Fortunately there are only a few ways that interrupts are commonly handled. By far the most prevalent is the Vectored scheme. A hardware device, either external to the chip or an internal I/O port (as on a high integration CPU like the 188 or 68332) asserts the CPU's interrupt input.

If interrupts are enabled (via an instruction like STI or EI), and if that particular interrupt is not masked off (high integration processors almost always have some provision to selectively enable interrupts from each device), then the processor responds to the interrupt request with some sort of acknowledge cycle.

The requesting device then supplies a Vector, typically a single byte pointer to a table maintained in memory. The table contains at the very least a pointer to the ISR.
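In C terms, the vectoring mechanism can be sketched like this. The table size, the 0x40 vector number, and the handler names below are illustrative, not any particular CPU's; a real part does the lookup and branch in hardware.

```c
#include <stdio.h>

/* Sketch of vectored dispatch: the device supplies a one-byte vector,
   which indexes a table of ISR pointers kept in memory. */
#define NUM_VECTORS 256

typedef void (*isr_t)(void);

isr_t vector_table[NUM_VECTORS];   /* the table maintained in memory */
int uart_rx_count;                 /* proof the right ISR actually ran */

void uart_rx_isr(void) { uart_rx_count++; }
void default_isr(void) { /* catch unexpected interrupts harmlessly */ }

/* What the CPU does after the acknowledge cycle, expressed in C. */
void dispatch(unsigned char vector)
{
    isr_t isr = vector_table[vector];
    if (isr)
        isr();
}

void vectors_init(void)
{
    for (int i = 0; i < NUM_VECTORS; i++)
        vector_table[i] = default_isr; /* never leave a slot empty */
    vector_table[0x40] = uart_rx_isr;  /* device programmed to supply 0x40 */
}
```

Note the default handler in every unused slot: a spurious interrupt through an uninitialized vector is one of the classic embedded crashes.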

As an aside, the proper term should probably be Scalared Interrupts. Vectors imply a one-dimensional list of data, not a single value. It always makes me wonder if someday true vectors or even matrices could be used. Perhaps a wide-bandwidth interrupt channel could accept a vector of result codes from a co-processor, or even an instruction stream partially preprocessed by the interrupt controller...

The CPU pushes the program counter so at the conclusion of the interrupt the ISR can return to where the program was running. Some CPUs push other data as well, like the flag register. It then uses the vector to look up the ISR address and branches to the routine.

At first glance the vectoring seems unnecessarily complicated. Its great advantage is support for many varied interrupt sources. Each device inserts a different vector; each vector invokes a different ISR. Your UART Data_Ready ISR is called independently of the UART Transmit_Buffer_Full interrupt.

Simple CPUs sometimes avoid vectoring and directly invoke the ISR. This greatly simplifies the code but, unless you add a lot of manual processing, limits the number of interrupt sources a program can conveniently handle.

Controllers

The greatest complication arises from the use of an external interrupt controller. For example, PCs use an 8259-type device that funnels multiple interrupt sources onto the CPU's single interrupt line. The 8259 provides the vector address, and even prioritizes each input. Events considered more critical than others (like a power-fail condition) override lower priority actions.

External interrupt controllers are a wonderful addition to complex systems since they take care of the details of properly generating the timing information needed by the processor. From a software standpoint, though, they are invariably a pain in the neck to use. Few embedded systems are adequately documented, so the poor programmer must somehow figure out which interrupt is on which line, and then set up the controller with the proper priorities and masks.

Many controllers also demand special handling in each and every ISR. The 8259, for example, requires you to send an end-of-interrupt sequence to it before it will allow any other interrupt to come along. It's easy to do once you know the rules. Unfortunately, the procedures are always buried in a cryptic hardware reference manual. Using the device in the simplest possible manner means digging through hundreds of complex options to get to the real meat of the procedure.

The ISR

The ISR starts running almost magically when the interrupt comes. Of course, its goal is to service the interrupting device properly. Even more important, though, the ISR must completely preserve the context of the system.

ISRs are called with the main line code in any sort of state, running any sort of routine. A null ISR must at the very least execute a return with all program parameters restored intact so the main line code is not corrupted. A well written ISR will be invoked transparently. Destroy a register, and the code will eventually crash.

ISRs generally take the following form:

entry:  Push registers	
        Service hardware	
        Re-enable controller	
        Pop registers	
        Enable interrupts	
        Return

The initial register push must include every register that will be used by the ISR - including the flag bits. Some processors do push the flags automatically.

The only reason the ISR was called was to service interrupting hardware. This action takes an infinite variety of forms. The biggest mistake made in ISRs, though, is to put too much hardware service in the ISR itself.

Simple interrupts, like UART handlers, basically queue up or dequeue data in just a few lines of code. More complicated hardware, like an IEEE-488 controller, may need quite a bit of support in the ISR. Beware of long ISRs! A valid interrupt-phobia is that they are a bit more difficult to debug than conventional code. Keep the ISR short so there's little to debug.
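A UART receive handler of the sort described above can be sketched in a few lines of C. Here the hardware data register is mocked as a plain variable so the sketch can run on a host; on a real part you would read the UART's actual register, and the queue indices would need protection when touched from main-line code with interrupts enabled.

```c
#include <stdint.h>

/* Sketch of a short UART receive ISR that does nothing but queue data.
   uart_data_reg stands in for the UART's receive data register. */
#define QSIZE 64                 /* power of two, so indices wrap with a mask */

volatile uint8_t uart_data_reg;  /* mock hardware register */
uint8_t rx_queue[QSIZE];
volatile unsigned rx_head, rx_tail;

void uart_rx_isr(void)           /* keep this path as short as possible */
{
    uint8_t c = uart_data_reg;   /* reading the register clears the request */
    unsigned next = (rx_head + 1) & (QSIZE - 1);
    if (next != rx_tail) {       /* drop the byte if the queue is full */
        rx_queue[rx_head] = c;
        rx_head = next;
    }
}

/* Called from main-line code to drain the queue; returns 0 when empty. */
int rx_dequeue(uint8_t *c)
{
    if (rx_head == rx_tail)
        return 0;
    *c = rx_queue[rx_tail];
    rx_tail = (rx_tail + 1) & (QSIZE - 1);
    return 1;
}
```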

Long ISRs may keep interrupts disabled for so long that the system misses other critical events. An alternative is to reenable interrupts before the device is completely serviced, but this mandates writing reentrant code. If your device really and truly needs a lot of servicing code, consider using the ISR to spawn off a different task that handles the bulk of the work. Make sure the spawned task is reentrant to guarantee it cannot corrupt other program activity, and so multiple copies can be active at one time.
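Absent an RTOS, the simplest version of this split is a flag the ISR sets and the main loop checks. The names here are illustrative; the point is just that the slow work runs outside the ISR, with interrupts enabled.

```c
#include <stdbool.h>

/* Minimal sketch of deferring heavy work out of the ISR: the ISR only
   records the event; the main loop does the slow processing. */
volatile bool sample_ready;   /* set by ISR, cleared by main line */
volatile int  sample_value;   /* stands in for data captured from hardware */
int processed_total;          /* result of the slow work */

void device_isr(void)         /* this path stays short */
{
    sample_value = 42;        /* hypothetical: grab data from the device */
    sample_ready = true;
}

void main_loop_poll(void)     /* called repeatedly from the main loop */
{
    if (sample_ready) {
        sample_ready = false;
        processed_total += sample_value;  /* pretend this takes a while */
    }
}
```

Under an RTOS the same idea becomes "post a message or semaphore from the ISR, block a task on it" -- the flag is just the hand-rolled equivalent.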

Exiting the ISR

Reset your interrupt controller before restoring the registers. On an 8259 send it the EOI signal via a simple OUT instruction.
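The EOI really is that simple -- one OUT of command 0x20 to the 8259's command port at 0x20. The sketch below mocks the port write so it can run on a host; on a real x86 target outb() would be the actual OUT instruction (or your compiler's port-I/O intrinsic).

```c
#include <stdint.h>

/* Sketch of the 8259 end-of-interrupt, with port I/O mocked. */
#define PIC1_CMD 0x20   /* master 8259 command port on a PC */
#define PIC_EOI  0x20   /* non-specific end-of-interrupt command */

uint16_t last_port;     /* mock: records the OUT for inspection */
uint8_t  last_value;

void outb(uint16_t port, uint8_t value)  /* stand-in for real port I/O */
{
    last_port = port;
    last_value = value;
}

void isr_epilogue(void)
{
    outb(PIC1_CMD, PIC_EOI);  /* reset the controller before popping */
    /* ...then pop registers, reenable interrupts, and return */
}
```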

Even systems without explicit interrupt controllers may need to take special action to reset the interrupt hardware. The Z180's internal timer, for example, requires reads from two registers before proceeding. This little fact is buried in the documentation and is often missed.

Pop everything from the stack that you pushed. Count the number of pushes and pops to ensure they are the same... and then count again. I blush to think of the number of ISRs I've written that return to data because of unmatched pushes and pops. (I believe that good assembly code minimizes the use of PUSHes and POPs. How many times have you seen code that uses these instructions ad nauseam, even around conditional branches, making matching them up an exercise that earns you the painful distinction of "guru"?)

Reenable interrupts if your RET instruction does not do that for you. It's bad practice to put anything between the EI and the RET. Every processor I know of defers the actual interrupt reenable until the instruction after the EI executes, which gives the return a chance to complete. If your interrupts come at a hysterical rate, this enable-defer ensures you'll avoid a stack overflow problem.

Execute the return and the ISR will go back to your main code. Be sure to execute the right return! Many processors have a special interrupt version that restores things automatically pushed by the vectoring. Some, like the Z80, have a return from interrupts that alerts external devices the ISR is complete.

C or Assembly

C compilers are now so efficient that I recommend writing all but the most critical ISRs in this language. Assembly is simply too tedious, and linking assembly to C is sometimes an art - particularly if you have to pass data between languages.

Be sure your compiler has built-in support for writing ISRs. We've found that even Microsoft and Borland's DOS compilers do a great job in interrupt intensive embedded systems. True embedded compilers, like those from Software Development Systems, Microtec Research, and a host of others all include ISR-specific keywords.

The compilers will take care of pushing, popping, reenabling interrupts, and the return. Frequently the linker will even set up the vector table for you as well. A few low-level OUTs, issued from your C code, will program the external controller and other devices with the proper vectors.

Debugging ISRs

The wonder of the interrupt is that the CPU automatically handles vectoring. When the interrupt comes along the appropriate ISR just starts running. The nuisance is a complete system crash if the controller, interrupting source, or vector table is set up wrong.

Before turning on your code, double and triple check the interrupt structure. Troubleshooting even an obvious problem in a running system is inherently much slower than catching it via a quick code read-through.

Since interrupts are always a source of trouble use a proactive debugging strategy. Enable only one interrupt at a time, and even if it seems to work spend a few minutes checking its operation. This is a lot faster than searching for the cause of some weird problem later when a dozen interrupts are flying around.

I like to center trigger my trace or logic analyzer on each ISR, and watch the flow just to insure it works properly. Does the right vector get picked up? Are all of the registers preserved properly? Does the stack balance (that is, the stack just after the ISR return must be identical to the stack just before the interrupt)?

To Interrupt or Not?

Should you use interrupts or polled I/O? This decision depends on several factors.

First, can the product's sell price support the extra hardware needed by interrupts? You can't just pulse the interrupt line and expect the interrupt to "take". If the CPU is busy it may miss a short pulse. Use hardware that latches the signal until acknowledged. Supplying a vector adds yet more hardware complexity.

Is the hardware properly debounced? A bouncy interrupt (like, from an unfiltered mechanical switch) might cause thousands of spurious interrupts. Polled I/O probably works better in this case. Or, use a timer interrupt to invoke a polling routine every few milliseconds.
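That timer-driven polling approach amounts to requiring the input to read the same for several consecutive ticks before believing it. A sketch, with the raw input mocked as a variable (the tick count and names are illustrative):

```c
#include <stdbool.h>

/* Sketch of debouncing from a periodic timer interrupt instead of
   wiring a bouncy switch straight to the interrupt pin. */
#define STABLE_TICKS 4   /* ticks that must agree before we believe a change */

bool switch_raw;         /* mock for the raw, possibly bouncing input */
bool switch_state;       /* debounced value used by the application */
static int agree_count;

void debounce_tick(void) /* call every few milliseconds from a timer ISR */
{
    if (switch_raw == switch_state) {
        agree_count = 0;                 /* no change pending */
    } else if (++agree_count >= STABLE_TICKS) {
        switch_state = switch_raw;       /* input held steady long enough */
        agree_count = 0;
    }
}
```

A bounce shorter than STABLE_TICKS worth of samples never reaches the application; tune the constant to the switch's bounce time and your tick rate.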

Next, will the signal come so fast that the interrupt overhead associated with vectoring and pushing/popping makes the code run too slow? Though an answer to this is polled I/O, this does imply you are running right at the hairy edge of processor time. A much better solution is to change CPUs, add another processor, or crank up the clock rate. Things are always worse than you figure. Make sure there is plenty of margin for unexpected problems.

Are your tools up to snuff? Don't build a 50 interrupt system with no more than a ROM monitor for debugging. Your time is worth too much! Will your C compiler generate reentrant code?

When the system finally works take pride in your accomplishment. Not everyone can build a system with all of those asynchronous events running full out!