Self-Calibrating Systems

Here's an algorithm to make an instrument read correct values without ever manually setting gain and offset coefficients.

Published in Embedded Systems Programming, October 1990

By Jack Ganssle

As a programmer, how do you view the microprocessor-based widget you're currently designing? All too many of us think in terms of three distinct black boxes: the analog input and output circuitry, the microprocessor hardware, and the code. We should draw a lesson from the science of ecology, which teaches us to view the Earth as a whole system, a Gestalt, rather than a collection of independent life forms. Our designs can benefit from the same sort of worldview: the analog, digital, and software components must be designed together to act as a unified whole.

One of the weakest points of any system is the analog electronics. In the digital world, a one is a one is a one - we rarely worry about voltage levels and the like. Not so in an analog front end. Drift, aging, and even humidity and cleanliness can dramatically affect the response of an analog circuit. I remember one case where we had very low-leakage capacitors isolated from the PC board by Teflon standoffs, which had to be cleaned using exotic solvents that left no trace of contamination behind. One day a customer called to say his unit wasn't working and he had sprayed WD-40 all over it to see if that would help!

Since the entire system will be only as good as any of its parts, removing front-end inaccuracies is just as important as having the proper software algorithms.

Traditional analog circuits use potentiometers (variable resistors, colloquially known as "pots") to remove most errors. A technician carefully sets each pot to calibrate the unit. With one or two pots this is usually not a big deal, but some systems have dozens. Worse, one pot can affect the setting of all of the others. For example, setting the gain pot usually requires re-adjustment of the offset. This becomes an iterative process, putting the technician into a sort of pot-twiddling frenzy.

It's nice to think that a product consists of a bare minimum of analog electronics, all of which is a front end to the processor. Run your inputs to an A/D (analog-to-digital converter) and let the computer do the rest! More often than not, however, the raw signals must be amplified, smoothed, and sometimes translated. Usually one or more operational amplifiers are wired in ahead of the A/D - unadjusted, each of these will contribute some amount of analog error to the digitized data, as will the A/D itself.

High-accuracy systems (those with a 10-bit or better A/D) will almost certainly require adjustment to remove these errors. Ten bits of conversion resolution over a 5 volt range makes each bit worth a paltry 5 millivolts! Even small errors will use up several bits of resolution. While in a purely analog world there may be no good alternative to a multitude of pots, embedded systems can exploit the power of the computer to mathematically remove many of the error sources the pots are called on to correct.

Consider: the pot is really an analog memory device. We can replace all of these expensive, failure-prone, board-space-robbing components with clever code and perhaps some trivial electronics. The benefit? Each removed pot reduces system size, increases reliability, and eliminates the labor associated with its adjustment. Even better, overall system accuracy goes up, since the computer can recalibrate itself continuously; in the real world a pot gets adjusted infrequently, if at all.

Linear Error Sources

Even the simplest of analog circuits are subject to offset and gain errors. "Offset" refers to an undesired DC bias - for example, when you expect zero volts out, the circuit gives 0.25. Gain is the amplification factor. A system that relies on a gain of 5.0 may not work properly if the gain is really 5.1.

If you consider "gain" to be the algebraic quantity "slope", and "offset" to be "intercept", it's pretty clear that a circuit that can be described simply by its gain and offset really defines the equation of a line. In other words, its transfer function is of the form y=mx+b, where m is the slope and b is the intercept. Entire textbooks have been written about these "linear" circuits.

Most analog circuits are well described by the equation of a line. Even an A/D converter can be pretty well characterized this way. Although we wish an A/D would have the ideal characteristic y=x (output digital representation is exactly the same as the input), most will exhibit some small gain and offset errors. After all, an A/D consists of op amps, an integrator, and other analog components just as susceptible to drift and aging as more discrete electronics. Indeed, most precision converters have a provision for trimming both gain and offset parameters via external pots.

The op amps have errors, the A/D is suspect, and even the sensors themselves rarely measure precisely what we're looking for. Thermistors and thermocouples change sensitivity with age. Photocells drift with age and temperature. We designed a system years ago with lead sulfide photodetectors - the sensors were an order of magnitude more sensitive to temperature than to the infrared light we were trying to measure. It seems that any sensor measuring a physical parameter to a reasonable degree of accuracy will, uncorrected, lead to significant errors.

This is why every instrument in your lab, be it a scope, voltmeter, frequency counter, or whatever, should be calibrated at regular intervals. You should also change your car's oil every 3000 miles and call your mother once in a while. The frenetic pace of 1990s life seems to ensure that equipment calibration will never be a high-priority concern for most of us. Hell - I can barely get these columns in on (or near) time, let alone worry about the accuracy of the temperature sensor in my electronic bread maker.

Eliminate pots! Try to design your system to be self-calibrating. Sensors and analog circuits will probably always need some sort of alignment, but try to come up with a purely digital approach. Whether the self-calibration routine is invoked every second, day, or week, the end product will ultimately give much more accurate results than a painstakingly hand-tuned front end whose last calibration was half a decade ago. Even if only limited self-calibration is feasible, do what you can to remove all manual pot adjustments.

Linear Calibrations

Given that most systems use linear sensors (or at least sensors that are mostly linear in the operating range) and linear circuits, it follows that we can use a linear correction to remove essentially all of the error. Even some non-linear circuits can use a linear correction if the operating range is sufficiently restricted.

A self-calibrating system needs two ingredients: known values that can be injected into the front end during calibration, and an algorithm that corrects the overall transfer function to match them. Note the difference from a pot-ridden front end, in which the transfer function of the analog portion alone is adjusted. A digital calibration affects the transfer function of the entire system. Digital designs are deterministic, so perhaps this is more a philosophical distinction than a real one, but it underscores my belief in a holistic approach to embedded systems design.

Before considering the problem of introducing known values into the system, let's look at how we might do the math involved in the calibration.

Starting with the general case, suppose we're measuring temperature. The input to the computer will be a number of bits representing voltage, which we'll call "x". This quantity, almost by definition, includes both a gain and an offset error which must be removed by the calibration. In other words, when computing temperature we'll replace "x" as the input parameter with the formula:

x' = m * x + b

where:

    x' is the corrected (linearized) value of x
    m is the correction to the system's gain
    b is the correction to the system's offset

m and b are computed every time we recalibrate.

Elementary algebra tells us that to solve for two unknowns (i.e., m and b), we need at least two equations. A technician would adjust the pots (if our clever engineering hadn't removed them) by inserting zero volts for the offset correction and a full-scale reading for the gain correction. We should do the same.

If we take two measurements at very different values, say one at each extreme of the sensor's range, we get two equations:

y1 = m * x1 + b	(first reading)
y2 = m * x2 + b	(second reading)

Solving for m:

m = (y1-y2) / (x1-x2)

And for b:

b = y1 - m * x1

Remember, x1 and x2 are the values read from the A/D, and y1 and y2 are their corresponding known correct values.
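
As a quick sanity check with made-up numbers: suppose grounding the input (known value y1 = 0) produces a raw reading of x1 = 12, and a precision reference (known value y2 = 1000) produces x2 = 1012. Then m = (0 - 1000)/(12 - 1012) = 1.0 and b = 0 - 1.0 * 12 = -12. The gain happens to be dead on; the correction simply strips a 12-count offset from every reading.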

Thus, every time we take a reading from the A/D converter we should correct the reading (x) by applying the formula:

output = ((y1 - y2)/(x1 - x2)) * x + y1 - x1 * (y1 - y2)/(x1 - x2)

While this might seem a bit cumbersome, all of the coefficients can be computed at calibration time, so the computational burden is small.
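
In C, the whole scheme might boil down to something like this sketch (the variable and function names here are my own illustrations, not from any particular library):

/* m and b, recomputed at each calibration */
static float cal_m = 1.0f;
static float cal_b = 0.0f;

/* x1, x2 are raw A/D readings of the two standards;
 * y1, y2 are the known correct values of those standards.
 */
void compute_calibration(float x1, float y1, float x2, float y2)
{
    cal_m = (y1 - y2) / (x1 - x2);
    cal_b = y1 - cal_m * x1;
}

/* Apply x' = m * x + b to every raw reading thereafter. */
float correct_reading(float x)
{
    return cal_m * x + cal_b;
}

The division happens once, at calibration time; every reading afterwards costs only a multiply and an add.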

Be warned though - a noisy system can produce crummy calibrations. If the two readings taken to compute m and b are not accurate due to noise, average a number of readings first, and then perform the computations.
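
A routine along these lines does the job; adc_read() is just a placeholder for whatever conversion routine your hardware actually provides:

/* Average n conversions to beat the noise down before
 * the calibration math runs.
 */
extern int adc_read(void);   /* placeholder: returns one raw conversion */

float averaged_reading(int n)
{
    long sum = 0;
    int i;

    for (i = 0; i < n; i++)
        sum += adc_read();

    return (float)sum / (float)n;
}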

Standards

Just how does one go about injecting known values for the sake of calibration?

Two different approaches are possible. Do you wish to compute a gain and offset to correct drift in the analog electronics? Or, do you need to include sensor changes in the calibration?

If your system measures a "pure" electronic quantity like voltage or current then there is no sensor, so a purely electronic solution is practical. Put a computer-actuated analog switch in the front end. Position 1 might be the regular input, position 2 a good zero level (ground), and position 3 a precision near-full-scale value. The computer can then use these two calibration points to establish the gain and offset corrections.
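
A calibration pass might then look like the sketch below. The switch positions, select_input(), and the reference value are hypothetical stand-ins for your actual hardware; the sketch reuses compute_calibration() and averaged_reading() from above.

#define INPUT_SIGNAL  1      /* position 1: the regular input          */
#define INPUT_GROUND  2      /* position 2: a good zero level          */
#define INPUT_VREF    3      /* position 3: precision near-full-scale  */

#define VREF_VALUE  4.900f   /* known value of the reference, in volts */

extern void select_input(int position);   /* drives the analog switch */

void self_calibrate(void)
{
    float x1, x2;

    select_input(INPUT_GROUND);      /* inject the zero standard    */
    x1 = averaged_reading(64);

    select_input(INPUT_VREF);        /* inject the reference        */
    x2 = averaged_reading(64);

    compute_calibration(x1, 0.0f, x2, VREF_VALUE);

    select_input(INPUT_SIGNAL);      /* back to normal measurements */
}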

If the system already has an analog multiplexor, so it can sample readings from several sources, just dedicate two of the multiplexor's channels to these calibration parameters.

Some systems use a digital to analog converter (D/A) feeding the multiplexor. The computer can generate calibration values at any voltage level. This is a nice approach, but you must be sure the D/A is accurate enough to not skew the calibration results. In the interest of keeping costs down, I'm not a big proponent of this unless the system needs a high accuracy D/A for some other reason.

Drifty sensors will make the calibration procedure more complicated. It's hard to create a standard physical parameter (like temperature or a light level) with precision. This makes sensor calibration difficult at best, but remember that such calibration is needed regardless of the instrument's technology - smart code still saves pots.

For example, how do you calibrate a thermistor? The user inserts it in a controlled temperature bath and then enters the bath temperature into the computer. Then, to get another calibration point (two equations in two unknowns), the process is repeated in a bath of another temperature. Tedious, yes. Avoidable? Not really, given that the thermistor requires some sort of calibration.

Every sensor has its own unique calibration requirements. In cases where the computer can automatically insert known readings, by all means do so. The frequent recalibration will yield a better product. If some manual intervention is needed, say by making the user subject the sensor to known values, the self-calibration techniques are still worthwhile. A little code saves several pots, and completely eliminates the tedious pot twiddling that can only be done in a lab.

Single-Point Calibrations

Sometimes, due to the expense or complexity of inserting accurate standards, a two-point calibration is not possible. We can't get around the fact that two equations are needed to solve for two unknowns, but you can at least correct either the gain or the offset using one standard.

Before the microprocessor age I worked on a colorimeter that was chock-full of old-technology analog circuits. The electronics simply could not maintain the accuracies the designers strove for, so they included a rotating white standard shaped like a bow tie. Many times a second the standard swung into view, blocking light from the user's samples and returning a precision white value to the photodetectors. The electronics used this information to continuously recompute the amplifiers' gains, on the presumption that offset contributed only a small part of the error.

In a steel thickness gauge we had a similar problem. This system included a number of self-calibration features, including a multiple-point regression to figure the relationship between voltage and steel thickness, which is a complicated curve, not a line. In addition, every minute or so the system automatically blocked all x-rays from its sensor by inserting a heavy hunk of lead in the beam. During this brief dark time the computer made a number of A/D readings, averaged them, and recomputed the system's offset.

Drifty sensors and electronics demand these awkward mechanical calibration methods, but other options exist. Suppose the input data changes slowly with time. Then you can use a simple analog switch (or even a relay) to occasionally inject a known voltage into a summing node of your first stage of amplification. For example, if the system has an input range of 0 to 10 millivolts, the code can wait for a mid-range reading to occur (5 millivolts or so), then momentarily actuate the switch to add another few millivolts to the signal. Since this additional value is known (precision voltage references are cheap), it's easy for the code to compute a gain correction factor.
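
The arithmetic falls right out of the line equation. If r1 is the reading just before the switch closes, r2 the reading just after, and vcal the known injected voltage, then:

r1 = gain * v + offset
r2 = gain * (v + vcal) + offset

gain = (r2 - r1) / vcal

Both the offset and the unknown input v drop out in the subtraction, so the single injected standard recovers the gain alone, provided the signal really does hold still between the two readings.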

Sometimes you'll really need a gain correction but just can't add all of the mechanical complexity of solenoid-actuated standards. Supply the user with a single standard that he can manually insert. If the process is made painless, a single-point calibration takes only a minute or so. In an updated version of the colorimeter I mentioned earlier, we supplied each unit with a white tile with known reflectances. The user put this on the instrument daily for a quick gain-only recalibration.

Another option is to add an analog switch in the feedback loop of an op amp. If the switch reduces the amplifier's gain by half, then it is easy to compute an offset correction.
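
To see why, call the slowly changing input v. With the switch open the system reads r1 = gain * v + offset; with it closed, r2 = (gain/2) * v + offset. Subtracting gives gain * v = 2 * (r1 - r2), so:

offset = r1 - 2 * (r1 - r2) = 2 * r2 - r1

The unknown input cancels completely; no external standard is needed at all, as long as the signal holds still between the two readings.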

Conclusion

A self-calibrating embedded system saves the obvious costs of installing, setting, and maintaining a gaggle of potentiometers. There can be other, subtler benefits. If the calibration is performed often, you can sometimes replace high-precision resistors with cheap, drifty substitutes. So what if the analog electronics drift? As long as the automatic calibration removes those errors, the integrity of the entire product is maintained.