Your Noise is My Music

Minimizing noise in analog systems is tough. Here are a few ideas.

Published in ESP June 1997

By Jack Ganssle

They come to my virtual confession booth, almost daily, embedded developers all, each confessing a design sin, looking for absolution in the form of technology solutions. Usually there's little I can offer other than a reference to a book or an article, or a poor suggestion gleaned from past experience.

Yesterday's penitent was in an unusual state of distress. His system had been delivered and installed on a factory floor. The computers were functioning, all datacomm was up to snuff, and the user interface flawless - except for the slowly drifting numbers on the display, indicating a fluctuating weight measured by a load cell sensor. The slab of meat on the weighing pan was clearly not changing mass at an appreciable rate; the reading variations were all due to noise and drift in the system. He was frantically looking for a simple noise filter, a silver bullet easily applied to get the customer to pay.

To a parent, noise is the shrieking and whining that seemingly goes on forever. To a teenager it's his folks' oldies station. In the electronics world, though, noise is pretty much anything but the signal you're looking for.

 Analog sensors work in a real world that we digital folks are mostly immune from. Our ones and zeroes are pristine things of a Platonic ideal. A one is, by definition, perfect, always representing exactly one asserted bit, no more and no less. The same is true for a zero; together they form the entire universe of the binary idea.

 Analog is just the opposite. It's a continuum, a spectrum of values all of which have an element of truth and all of which contain some error. Where digital is pristine, analog is the grimy back alleys of the electronics world, with all the truth and ugliness that implies.

The filth and grime in an analog signal is noise - a signal added to the one you're measuring that corrupts the real values. Noise is to an analog signal what the kid banging on drums is to your Bach concerto. Distortion. You can close the basement door to remove some of the child's interference, or slap on headphones to eliminate even more.

An awful lot of embedded systems measure and process analog data, and must contend with noise to a greater or lesser degree. There are two main sources of evil in the analog world: noise and drift.

  

Drift

Drift is generally a very slow change in the system over time. It comes from your analog front-end, or sometimes from the sensor. Drift biases all of the readings, all in the same manner. Drift comes from a change in "offset", or the zero reading, or a change in "gain". It helps to think of drift in terms of the equation of a line:

y = m * F(x) + b

 Where b is the offset and m is the slope of the line, or the gain. F(x) is the mathematical function you apply to x, the input reading, to get whatever output you're looking for. In a perfect linear system m and b are constants.

The real world is not so kind. When you adjust the bathroom scale to read zero when no one is on it (or to read the number you'd really like to see when you are on it) you're adjusting its offset, something you do to account for mechanical deformations in the scale's springs. We assume the scale's gain is constant; if it's not, if m is not exactly 1, then the more you weigh the greater the error will be.

Like the mechanical scale, sensors in general are imperfect and exhibit varying levels of drift over time. That's why we "tare" a scale - we read the value with the pan empty, and set the offset so it reads zero. Your embedded system also includes some amount of analog electronics, if only an A/D converter. All analog drifts with time. The vendors of converters and op amps are good at documenting these errors, but rest assured they are there.

 The only way to deal with drift is to measure and correct for it. In low-precision systems drift may be small enough that you can safely ignore it. Once your A/D starts processing lots of bits of resolution, drift is a more important problem.

One of the cleverest ways of dealing with drift that I have seen was in a colorimeter. A bow-tie shaped precision white standard rotated quickly in and out of the optical path. When in the path, the electronics knew exactly what value the standard represented, and automatically corrected the slope (m) to give the desired reading. This self-calibrating design ensured high accuracy at all times.

In an X-ray thickness gauge we used solenoids to toss a shutter (something that closed the beam off entirely) into the path of the X-rays every few minutes to get a new offset value. With no beam F(x) should be zero, so it was an easy matter for the computer to compute a new b from the non-zero result from the sensor. Similarly, we tossed other, non-opaque standards in from time to time to correct for gain errors.
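In code, the correction is just a matter of re-solving y = m * F(x) + b for the two constants. A minimal sketch in C, with hypothetical names standing in for the real shutter and standard readings:

    /* Recompute offset (b) and slope (m) from two reference readings.
       A sketch of the arithmetic only - the names are hypothetical.   */
    static double b = 0.0;      /* offset */
    static double m = 1.0;      /* slope, or gain */

    /* raw_dark:  reading with the shutter closed, where F(x) is zero
       raw_std:   reading with a standard of known value in the beam
       std_value: what that standard should measure                    */
    void recalibrate(double raw_dark, double raw_std, double std_value)
    {
        b = raw_dark;                    /* no beam: the reading is pure offset */
        m = (raw_std - b) / std_value;   /* solve y = m*F(x) + b for m */
    }

    /* Invert the line equation to recover the true value from a raw reading. */
    double corrected(double raw)
    {
        return (raw - b) / m;
    }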

 Both of these examples relied on mechanically inserting samples into the measurement system to correct slope and offset. Sometimes this is not an option.

A scale we did long ago was designed for use in a grocery store's meat department. Its typical range was a pound or so up to a couple of pounds. Never, ever, would a butcher weigh something less than a few ounces. We used this fact to compute a new offset: when the scale's value went below some threshold and stabilized, we assumed nothing was on it and computed a new tare value. No external standards or mechanical actions were needed. This method of computing a new offset worked only because we knew and exploited the environment the scale lived in.
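A sketch of that auto-tare logic; the threshold, stability band, and sample count here are invented for illustration:

    #include <math.h>

    #define EMPTY_THRESHOLD 0.10   /* pounds - below this the pan is presumably empty */
    #define STABLE_BAND     0.02   /* consecutive readings must agree this closely */
    #define STABLE_COUNT    16     /* this many stable readings triggers a re-tare */

    static double tare = 0.0;

    /* Call once per raw reading; quietly recomputes the tare
       whenever the pan looks empty and the signal has settled. */
    void auto_tare(double raw)
    {
        static double last   = 0.0;
        static int    stable = 0;

        if (raw < EMPTY_THRESHOLD && fabs(raw - last) < STABLE_BAND)
            stable++;
        else
            stable = 0;

        if (stable >= STABLE_COUNT)
        {
            tare   = raw;          /* the new zero point */
            stable = 0;
        }
        last = raw;
    }

    double net_weight(double raw)
    {
        return raw - tare;
    }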

In situations where the measured signal is quite tiny, very high gain amplifiers are used before the A/D. These tend to be subject to all sorts of unpleasant offset and gain changes over time. Sometimes, when it's impossible to mechanically inject a standard, people will electronically insert a "virtual" standard. This might involve using an analog switch to disable the sensor and inject small, precisely known, voltages or currents. The computer can zero out the errors of the amplifiers, though the sensor's drift, if any, still remains uncorrected.

  

Noise

Electronic noise is a signal distortion that comes at non-DC frequencies - from Hertz to megahertz. All electronic components sputter and gork a bit. When your input is extremely small, these minute fluctuations can become a significant percentage of the input.

Noise also comes from all sorts of places outside of the components themselves. A prime problem is the 60Hz field radiated by the power lines in the office and the world. Sometimes a nearby 50,000 watt radio station might create interference in high gain circuits (of course, can you really call it "noise" if it's your kind of music?). Current switching in nearby circuits radiates high frequency noise like mad. Even rotating bearings can radiate enough to create problems.

 I installed gear in a factory where a house-sized motor, switching back and forth, generated enough "noise", or EMF, to physically destroy unprotected components.

The effect of noise on your system is corrupted readings, perhaps affecting their accuracy or making them dither with time. Customers just hate to see a supposedly static number - perhaps a pressure reading, a thickness value, or whatever - change when the sample isn't changing at all. Though there are a lot of partial algorithmic solutions, it's best to eliminate as much noise as possible before applying computer solutions.

 Noise can be difficult to cure and frustrating to isolate. One of the biggest mistakes I see made is applying fixes before the source of the problem is clearly known. Just as too many programmers optimize the wrong set of code when there's a speed problem - because they haven't clearly identified which functions need optimizing - engineers often focus on the wrong section of their hardware.

Is the problem the sensor or the electronics? Remove the sensor and inject known, steady values into the amplifiers to determine this. In cases where it's hard to inject a voltage (perhaps you're sensing a complex waveform), build or buy a known-stable waveform generator. Yes, it's a time-consuming pain to construct such a diagnostic tool. You'll surely use it more than you initially planned, though, and surely the production and repair departments will need it in the future to maintain the product. But rest assured that without known stable inputs you'll spend far too much time troubleshooting any noise problem.

 Solving noise problems in circuits is simply too broad to cover in a column. I suggest reading Bob Pease's book, Troubleshooting Analog Circuits (Butterworth-Heinemann, 1993, ISBN 0-7506-9499-8).

 Some basic design guidelines, though, can help. Keep gains as low as possible. Limit the length of high impedance wires, and those with low signal levels. Bring all analog grounds to one common point, and don't mix analog and digital ground. Keep sensitive electronics and sensors away from devices switching lots of current.

 The trickiest systems to quiet down are those processing DC signals, like those of my friend with the weighing systems. If it's not too late - if the system is not yet designed - consider changing paradigms and developing something that is inherently more noise immune. The common radio is a wonderful example.

Nothing can match a radio receiver for pulling extremely weak signals out of a noise-populated environment. The signal might be a microvolt or two, swamped by millivolts of junk its circuits must reject. Few embedded designers work in such an unforgiving analog realm.

The signal of interest, though, sits at one particular frequency. All of the others are in other bands. A radio's magic comes from its ability to extract one narrow item (in the frequency domain) from a babble of broad-banded signals.

Even better, noise is typically spread over an extremely wide bandwidth. When the radio extracts one narrow frequency band it simultaneously eliminates most of the noise.

So, why not use a similar approach to measuring low frequency signals? Instead of feeding a DC excitation to the load cell, excite it with an RF sine wave. Then create a narrow-band filter (easy with today's cheap, high-quality analog ICs) that extracts just that frequency.

 This won't correct for drift problems, but will eliminate a lot of the noise.

 Thermistors, load cells, oxygen sensors, lead sulfide photocells, and lots more sensors all require some sort of bias, while detecting a very slowly-changing signal. This method is a natural fit for all of these applications and more.
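If the narrow-band extraction lands in firmware rather than in analog parts, the classic Goertzel algorithm measures the energy at one frequency and ignores everything else. A minimal sketch, assuming the samples are already digitized and that freq_bin expresses the excitation frequency in cycles per buffer:

    #include <math.h>

    #define PI 3.14159265358979

    /* Return the squared magnitude of the signal at one frequency.
       samples:  buffer of n A/D readings
       freq_bin: target frequency in cycles per buffer of n samples
       Everything outside the bin - including most broadband noise -
       is rejected, just like the radio's narrow filter.             */
    double goertzel_power(const double *samples, int n, double freq_bin)
    {
        double coeff = 2.0 * cos(2.0 * PI * freq_bin / n);
        double s0, s1 = 0.0, s2 = 0.0;
        int i;

        for (i = 0; i < n; i++)
        {
            s0 = samples[i] + coeff * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        return s1 * s1 + s2 * s2 - coeff * s1 * s2;
    }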

  

Software Solutions

Of course, management likes to tell us to "fix it in the software", something that is not always realistic. There are indeed a number of software strategies for noise reduction, though their effectiveness varies.

 The simplest is averaging. Read the input a number of times and compute an average. Simple, effective, and easy to implement. In low-noise situations this may be the best approach. However, averaging leads to several problems, notably response time (the system is returning no data while it reads the n samples), and diminishing returns.

 Response time means if the software needs a data point NOW, it will have to wait for some number of samples to be read and averaged before proceeding. This is often intolerable.
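The naive version makes the cost obvious; read_adc() here is just a stand-in for whatever your A/D driver provides:

    /* Blocking average: the caller waits through all n conversions. */
    extern unsigned read_adc(void);     /* stand-in for the real A/D driver */

    unsigned averaged_reading(int n)
    {
        unsigned long sum = 0;
        int i;

        for (i = 0; i < n; i++)
            sum += read_adc();          /* nothing else happens meanwhile */

        return (unsigned)(sum / n);
    }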

Thankfully, in an embedded system the firmware has full control over the hardware's interrupts. We can immediately improve the apparent response time of an averaging algorithm by programming an ISR to constantly read the A/D in the background. When the code needs a value, the data has already accumulated in a buffer in memory.

 The ISR can't read and average the input data, because it runs (presumably) asynchronously with respect to the code that needs the results. It's best to just have the ISR read raw data into a memory buffer, and let the main line code or some other task take care of applying the averaging algorithm to the data.

 To avoid accumulating old data, the ISR should gather N samples in a FIFO buffer. These represent the most recent N readings from the A/D. Whenever the ISR takes a reading it drops the oldest sample from the buffer and adds the newest.

 Whoever queries the buffer to get a reading then simply averages the N sample points. Once the buffer is initially filled, then the response time to a request for data is just the time taken to do the averaging.

 It's important to clear the buffer when significant events occur. If the sensor assembly is active only when a lamp is on, for example, then be sure to reset the FIFO's pointers at that time. Use a simple semaphore to make a routine requesting data wait until the N samples are taken. Or, return the average of the number of samples accumulated until that number hits N. The first few readings will be noisy, but they will be more or less immediate. As the FIFO fills the signal will settle down.
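Pulling those pieces together, here's a minimal sketch of the ISR-fed FIFO. The depth, the A/D access, and the interrupt control are all placeholders for the real hardware, and a real system must guard the shared buffer against the ISR - crudely done here by disabling interrupts:

    #define N 32                        /* FIFO depth: the most recent N readings */

    static volatile unsigned fifo[N];
    static volatile int head  = 0;      /* next slot to overwrite */
    static volatile int count = 0;      /* valid samples so far   */

    extern unsigned read_adc(void);     /* stand-ins for the real */
    extern void disable_irq(void);      /* A/D driver and         */
    extern void enable_irq(void);       /* interrupt control      */

    /* A/D interrupt: drop the oldest sample, add the newest. */
    void adc_isr(void)
    {
        fifo[head] = read_adc();
        head = (head + 1) % N;
        if (count < N)
            count++;
    }

    /* Call on significant events - when the lamp comes on, say. */
    void fifo_reset(void)
    {
        disable_irq();
        head = count = 0;
        enable_irq();
    }

    /* Average whatever has accumulated: noisy at first, settling
       as the FIFO fills, but always more or less immediate.      */
    unsigned current_reading(void)
    {
        unsigned long sum = 0;
        int i, n;

        disable_irq();                  /* keep the ISR out while we read */
        n = count;
        for (i = 0; i < n; i++)
            sum += fifo[i];
        enable_irq();

        return n ? (unsigned)(sum / n) : 0;
    }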

The problem of diminishing returns is much more difficult to deal with. Random noise falls only as the square root of the number of samples averaged: quadruple the sample count and the noise drops by half; to halve it again takes another factor of four (we're up to 16x samples now). Averaging works, but to remove a lot of noise may take far too many samples. Acquisition time goes up much, much faster than smoothing results.

You can pre-filter the data stream. If the data is more or less DC, then noise may show up as point-to-point dithering. Sometimes you can reject those points that are not at least near the current baseline. The problem lies in establishing what the baseline should be.

If you have some a priori knowledge that no input should vary more than some percentage from the baseline (a not unreasonable assumption when working with a slowly changing signal), then you can compute an average and reject the outliers. I recommend the use of a sum-square computation, since most electronic noise is more or less Gaussian.
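A sketch of the idea: average once, measure the spread, then re-average with the outliers thrown out. The two-sigma cutoff is an arbitrary choice for illustration:

    #include <math.h>

    /* Average n samples, rejecting any more than two standard
       deviations from the first-pass mean. Since electronic noise
       is roughly Gaussian, honest samples rarely fall that far out. */
    double filtered_average(const double *x, int n)
    {
        double mean = 0.0, var = 0.0, sum = 0.0, limit;
        int i, kept = 0;

        for (i = 0; i < n; i++)
            mean += x[i];
        mean /= n;

        for (i = 0; i < n; i++)
            var += (x[i] - mean) * (x[i] - mean);
        limit = 2.0 * sqrt(var / n);

        for (i = 0; i < n; i++)
            if (fabs(x[i] - mean) <= limit)
            {
                sum += x[i];
                kept++;
            }

        return kept ? sum / kept : mean;
    }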

 It takes a lot of compute time to perform such a rejection. Worse, sometimes the wrong data gets rejected! If you are forced to average over only a few points due to time or smearing problems, then any one really bad sample will throw off the whole average. 

Conclusion

The best approach is always to eliminate as much noise and drift as possible from the signal before it makes it to the computer. Use a clever design instead of applying last-minute Band-Aids.