By Jack Ganssle

Tool Upgrades

Published 1/16/2006

Last week's rant about tools that don't conform to an OS's standard elicited a number of interesting replies. Ken Wada wondered how a company should deal with new versions of tools, especially compilers.

I haven't a clue.

You're halfway through a project and things are going well when, with great fanfare, the vendor releases version 5.0, loaded with bells and whistles you probably don't need. But the risk of working with an obsolete or unsupported product tempts you to upgrade.

What's the right course of action?

Suppose the new release is version 4.01, issued to correct a couple of bugs reported by other users. The problems haven't affected your project as yet. To upgrade or not to upgrade: that is the question.

Or the project has been released and is in maintenance. Defect rates are low; the customers are thrilled with the product. But the compiler has become obsolete and unsupported. Is it time to take on the attendant risks of an upgrade?

We know several things for sure. A change to a compiler's code generator will generally cause it to emit a different binary. The compiled code might be tighter, faster and all-around better. But it's different. And that's a problem.

When the production line uses checksums to label ROMs or control software versions, a change in the binary at the very least means some sort of documentation change to support the production line. Often the ramifications are much more serious.
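As a concrete illustration (a sketch, not any particular production line's scheme), here's the kind of trivial additive checksum a line might stamp on each ROM. Recompile with a new code generator and bytes in the image change, the checksum changes with them, and the labels and version records have to follow.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical sketch: a 16-bit additive checksum over a ROM image.
       Real lines often use a CRC or a vendor-specific scheme; the point is
       that changing even one byte changes the stamped value. */
    uint16_t rom_checksum(const uint8_t *image, size_t len)
    {
        uint16_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += image[i];
        return sum;
    }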

Different generated code may affect the system's real-time performance. A lot of applications use preemptive multitasking, which has both benefits and risks. One downside is that there's no way to guarantee a deterministic response to inputs or interactions between tasks. Different code may alter the system's timing, perhaps to the extent of breaking it.

A new compiler may generate slightly slower or more bloated code. The runtime package will likely change in some unpredictable manner. I'm just back from visiting a company whose product consumes more than 99% of all CPU cycles, and I know of several others where 99% utilization is required since any excess capacity means hardware costs are too high. A compiler upgrade may push the ROM usage too high or performance too low.
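How would you know an upgrade pushed you over the edge? One common trick (a sketch under assumed numbers, not anything from the companies above) is an idle-loop counter: calibrate how far the idle task can count in one tick on an unloaded system, then watch how far it gets in the real product. A compiler that slows the hot paths shows up as a drop in the idle count.

    #include <stdint.h>

    /* Hypothetical sketch: CPU utilization via an idle-loop counter. The
       calibration value is whatever count the idle loop reaches in one tick
       period on an otherwise unloaded system, measured at the bench. */
    static volatile uint32_t idle_count;
    static const uint32_t idle_calibration = 100000UL;  /* assumed bench value */

    void idle_task(void)            /* lowest-priority task, or the main loop */
    {
        for (;;)
            idle_count++;
    }

    /* Call once per tick, e.g. from the timer ISR. Returns the percentage of
       CPU time spent doing real work during the last period. */
    uint32_t cpu_utilization_percent(void)
    {
        uint32_t idle = idle_count;
        idle_count = 0;
        if (idle > idle_calibration)
            idle = idle_calibration;
        return 100UL - (idle * 100UL) / idle_calibration;
    }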

Some safety-critical systems must be requalified when the tools change, a hideously expensive process.

Most development teams I know blindly upgrade, offering a prayer or three that nothing substantive will change. And they usually have no problem.

Others refuse to change midstream, using the older product unless trapped by a problem that lacks a workaround. They often archive the old tools for future maintenance.
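A simple guard for that approach, sketched here assuming a GCC toolchain (other compilers have equivalent predefined version macros, and the numbers below are placeholders): make the build fail if anyone reaches for a compiler other than the archived one.

    /* Hypothetical sketch: pin the qualified compiler version at build time.
       The version numbers are placeholders; use the ones the project was
       actually built and tested with. GCC predefines __GNUC__ and
       __GNUC_MINOR__; other toolchains offer similar macros. */
    #if !defined(__GNUC__) || (__GNUC__ != 4) || (__GNUC_MINOR__ != 1)
    #error "This project must be built with the archived gcc 4.1.x toolchain"
    #endif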

Though I'm addressing compilers directly, the same peril exists with any package that's bundled with your code: protocol stacks, math libraries and the like.

What's your take? How do you manage changes in tools and included packages?