
Embedded Muse 39 Copyright 1999 TGG October 21, 1999

You may redistribute this newsletter for noncommercial purposes. For commercial use contact

EDITOR: Jack Ganssle,

- Embedded Seminars in Chicago and Boston
- Floating Point Approximations
- Thought for the Week
- About The Embedded Muse

Embedded Seminars in Chicago and Boston

I'll present the seminar "The Best Ideas for Developing Better Firmware Faster" in Chicago and Boston on November 8 and 9, 1999.

The focus is uniquely on embedded systems. I'll talk about ways to link the hardware and software, to identify and stamp out bugs, to manage risk, and to meet impossible deadlines. If you're interested, reserve early, as these seminars fill completely.

For more information check out or email

A lot of folks have asked me to bring this seminar to their company. Email me at if you're interested.

Floating Point Approximations

I was surprised to find that one of my favorite programming books is no longer in print. "Computer Approximations" by J.F. Hart (John Wiley & Sons, 1968, ISBN 0-88275-642-7) is the bible of floating-point approximations. Used copies turn up from time to time, and any decent university library will have one.

C libraries include all of the standard math functions for trig, exponentiation, and the like. That's not much help to assembly-language programmers, or to C coders who had to drop the math library to save space. And the C libraries are typically aimed at the mass of developers, offering high-precision answers at the cost of long execution times. Need a particularly fast trig function, and willing to sacrifice some precision? Hart's book is the place to find an appropriate algorithm.

Hart's book gives polynomial solutions for all sorts of functions, including logs, trig, roots, and more. He also presents a number of variants of each, so you can select a longer polynomial of high accuracy (and slower execution) or a shorter one that computes quickly but less accurately. For instance, the COSINE function can be calculated to about 5 decimal digits of accuracy by:

Cos(x)=.9999932946 -.4999124376*x**2 + .0414877472*x**4 - .00127120948*x**6

Need more speed? Try the following, which gives 3.2 digits of accuracy:

Cos(x)=.99940307 -.49558072*x**2 + .03679168*x**4

These approximations are valid over the range of 0 to 90 degrees. The argument "x" is in radians.
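As a sketch, the two polynomials drop straight into C using Horner's rule to minimize multiplies. The function names cos_32 and cos_52 are mine, hinting at the digits of accuracy; both assume the argument has already been reduced to 0..pi/2. Note that the x**2 coefficient used here for the 5-digit version is -0.4999124376, the value that actually delivers the stated accuracy.

```c
#include <math.h>

/* ~3.2 decimal digits of accuracy, three coefficients */
double cos_32(double x)
{
    double x2 = x * x;
    return 0.99940307 + x2 * (-0.49558072 + x2 * 0.03679168);
}

/* ~5.2 decimal digits of accuracy, four coefficients */
double cos_52(double x)
{
    double x2 = x * x;
    return 0.9999932946 + x2 * (-0.4999124376
           + x2 * (0.0414877472 + x2 * -0.0012712095));
}
```

Horner's rule keeps this to three (or four) multiply-adds after computing x squared once; a full-range cosine would wrap these with quadrant-based range reduction.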

He gives 46 different approximations for the cosine alone, with accuracies ranging from 2 to 23 decimal digits. For low-resolution, integer-only applications, scale his coefficients to integers and save the space of lookup tables.
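For instance, here's one way (my own scaling exercise, not an example from Hart) to move the 3.2-digit cosine coefficients into Q16 fixed point for integer-only targets:

```c
#include <stdint.h>

/* The 3.2-digit cosine coefficients scaled to Q16 fixed point:
   65536 represents 1.0. Values are rounded products of the published
   coefficients and 2^16. */
#define COS_C0  65497   /*  0.99940307 * 65536 */
#define COS_C1 -32478   /* -0.49558072 * 65536 */
#define COS_C2   2411   /*  0.03679168 * 65536 */

/* x: angle in radians as Q16 (0 .. 102944, i.e. 0 .. pi/2).
   Returns cos(x) in Q16. 64-bit intermediates avoid overflow
   when squaring the Q16 argument. */
int32_t cos_q16(int32_t x)
{
    int64_t x2  = ((int64_t)x * x) >> 16;         /* x^2, back to Q16   */
    int64_t acc = COS_C1 + ((x2 * COS_C2) >> 16); /* c1 + c2*x^2, Q16   */
    return (int32_t)(COS_C0 + ((x2 * acc) >> 16));/* c0 + x^2*(...)     */
}
```

No floating-point hardware or library needed, and the rounding noise from the 16-bit scaling stays comfortably below the polynomial's own 3.2-digit error.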

Consider square roots: most of us write these as iterative algorithms that eat up tons of execution time. All of Hart's square root algorithms (88 variants are presented) use polynomial solutions that execute in more or less fixed time.
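A sketch of the idea in C: seed with a short polynomial on the mantissa, then refine with a fixed count of Heron (Newton) steps. The seed pair 0.41731 + 0.59016*m is widely attributed to Hart; the exponent handling and refinement scaffolding here are my own, not a specific algorithm from the book.

```c
#include <math.h>

/* Polynomial-seeded square root in the spirit of Hart's fixed-time
   algorithms. Valid for x > 0. */
double sqrt_poly(double x)
{
    int e;
    double m = frexp(x, &e);            /* x = m * 2^e, 0.5 <= m < 1  */
    double y = 0.41731 + 0.59016 * m;   /* ~8-bit estimate of sqrt(m) */
    y = 0.5 * (y + m / y);              /* each Heron step roughly    */
    y = 0.5 * (y + m / y);              /* doubles the number of      */
    y = 0.5 * (y + m / y);              /* correct bits               */
    if (e % 2 != 0) {                   /* odd exponent: fold in      */
        y *= 1.41421356237309504880;    /* sqrt(2)                    */
        e -= 1;
    }
    return ldexp(y, e / 2);             /* sqrt(x) = sqrt(m)*2^(e/2)  */
}
```

No loop and no data-dependent iteration count: execution time is essentially fixed, which is exactly the property these polynomial square roots are after.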

I've found references to Hart's book in runtime libraries going all the way back to DEC's PDP-11 Fortran products 30 years ago. It's the best source of algorithms for these sorts of problems.

Hart will frustrate some readers, as his derivations are deeply mathematical. I find the tables of polynomial coefficients tremendously useful, though, and rarely bother with the mathematical details.

Thought for the Week

Thad Badowski sent this gem along. I thought it was particularly useful considering the recent loss of the Mars Climate Orbiter spacecraft due to a metric conversion problem.

"Useful Metric Conversions"
Americans (defined as residents of the USA) frequently have problems with metric conversions. In an attempt to clarify the conversion process I now submit some "Useful Metric Conversions."

1 million microphones = 1 megaphone
2000 mockingbirds = two kilomockingbirds
10 cards = 1 decacard
1 millionth of a fish = 1 microfiche
453.6 graham crackers = 1 pound cake
1 trillion pins = 1 terrapin
10 rations = 1 decoration
100 rations = 1 C-ration
10 millipedes = 1 centipede
3 1/3 tridents = 1 decadent
2 monograms = 1 diagram
8 nickels = 2 paradigms
2 wharves = 1 paradox