
Embedded Muse 188 - January 18, 2010


You may redistribute this newsletter for noncommercial purposes. For commercial use contact jack@ganssle.com. To subscribe or unsubscribe go to https://www.ganssle.com/tem-subunsub.html or drop Jack an email at jack@ganssle.com.

EDITOR: Jack Ganssle, jack@ganssle.com

 

Contents

- Editor's Notes
- Quotes and Thoughts
- Process Improvement
- The Perils of Volatile
- Are Debuggers Evil?
- Tools and Tips
- Joke for the Week
- About The Embedded Muse


Editor's Notes

Did you know it IS possible to create accurate schedules? Or that most projects consume 50% of the development time in debug and test, and that it's not hard to slash that number drastically? Or that we know how to manage the quantitative relationship between complexity and bugs? Learn this and far more at my Better Firmware Faster class, presented at your facility. See https://www.ganssle.com/onsite.htm .


Quotes and Thoughts

Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don't improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software development techniques you use determine how many errors testing will find. If you want to lose weight, don't buy a new scale; change your diet. If you want to improve your software, don't test more; develop better. - Steve McConnell, Code Complete


Process Improvement

According to a survey article in IEEE Software (Measuring the ROI of Software Process Improvement, Rini van Solingen, May/June 2004), companies working to improve their software processes get, on average, a 700% return on investment. Capers Jones, one of the most recognized thinkers in the software world, claims in IEEE Computer (Volume 29, Number 1) that those pursuing process improvement net a 3x to 30x ROI over 48 months. For teams working on projects of around 100 KLOC of C, defects will drop about 10x, productivity will increase 3.5x, schedules will shrink by 70%, and the amount of reusable material (code, specs, test cases, etc.) will rise from less than 15% to better than 65%.

Shorter schedules. Fewer bugs. Isn't it astonishing that so few avail themselves of these sorts of benefits? Actually, it's not surprising at all. We're being asked to do more with less, and there just isn't time to stand back and rethink our approaches. The recession has fed this; VDC reports spending on embedded R&D is down 4% this year. But I suspect few projects have been cut, which means we're deeply into panic mode.

When I conduct public seminars, always - always! - some portion of the registered attendees drops out the week beforehand, citing schedule panic on their current project. Further inquiry generally shows that the group is deep in panic mode. There's no time to learn how better development techniques can bring projects to fruition on time.

Like sailors on a sinking ship, they are too busy bailing to fix the leak. The water slowly rises, so they bail ever more frantically. Sooner or later they're going down, but working faster and harder staves off the inevitable for just a while longer.

An old cartoon shows a fierce battle, the soldiers wielding swords and spears. The general turns away a machine-gun salesman, complaining: "I don't have time to talk to you - can't you see we're at war?"

Why are so many firmware projects so late and so bug-ridden? Theories abound about software's complexity and other contributing factors, but I believe the proximate cause is that coding is not something suited to Homo sapiens. It requires a level of accuracy that is truly superhuman. And most of us aren't Superman.

Cavemen did not have to bag every gazelle they hunted - just enough to keep from starving. Farmers never expect the entire bag of seeds to sprout; a certain wastage is implicit and accepted. Any merchant providing a service expects to delight most, but not all, customers.

The kid who brings home straight As thrills his parents. Yet we get an A for being 90% correct. Perfection isn't required. Most endeavors in life succeed if we score an A, if we miss our mark by 10% or less.

Except in software. 90% correct is an utter disaster, resulting in an unusable product. 99.9% correct means we're shipping junk. 100K lines of code with 99.9% accuracy suggests some 100 lurking errors. That's not good enough. Software requires near-perfection, which defies the nature of intrinsically error-prone people.

Software is also highly entropic. Anyone can write a perfect 100-line-of-code system, but as the size soars, perfection, or near-perfection, requires ever-increasing investments of energy. It's as if the bits are wandering cattle trying to bust free from the corral; coding cowboys work harder and harder to avoid strays as the size of the herd grows.

So what's the solution? Is there an answer?

In my opinion, software will always be a problem, and there will never be a silver bullet that satisfies all of the stakeholders. But there are some strategies that offer hope. Some involve specific technologies and tools. Then there's reuse, hyped often but hard in practice. But the most important is to use a disciplined development process.

XP, CMM, Inspections, Standards, and maybe even Drunken Orgies are all valuable ways to improve the code - when used with discipline. None offer much benefit when casually or intermittently employed.

We software artistes have missed the "process" boat. Most other industries use various sorts of defined processes to work efficiently. One way to identify an amateur organization of any sort, be they accountants, lawyers, craft shops or software developers, is by a lack of process. By contrast, an efficient company like McDonald's defines a rigorous way to do just about everything. Even a teenager, using Ray Kroc's process, can crank out Big Macs that taste exactly as bad as in any McDonald's in the world.

We firmware folks must look beyond the world of software engineering for insight into better ways to build our products. I highly recommend Michael Gerber's book "The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It". Skip the book's irrelevant last half. Gerber says that poor businesspeople work "in" the business - they are technicians busy every day making the product or service. The business cannot succeed without that individual, who may be a genius at providing some product or service but who spends their days firefighting. Gerber feels the brilliant company owners work "on" the business. They build systems, processes, and techniques so the business runs smoothly. These awesome managers don't just solve problems; they invent solutions that eliminate the problem forever, or that automatically deal with the issue when it comes up again.

They stop bailing and plug the leak.


The Perils of Volatile

I wrote about this a year ago in Embedded Systems Design, but it was buried at the end of an article about market statistics. It's an issue potentially so important to this industry that I'll report on it here.

Read the paper "Volatiles Are Miscompiled, and What to Do About It" (Eric Eide and John Regehr, Proceedings of the Eighth ACM and IEEE International Conference on Embedded Software, Atlanta, Georgia, Oct 2008, currently at http://www.cs.utah.edu/~regehr/papers/emsoft08-preprint.pdf). Reader Bob Paddock sent me the link, and thereby wrecked my day. But thanks, Bob, this is important.

Simply put, compilers sometimes miscompile code that uses the volatile keyword. And that keyword is hugely important in the embedded space.
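
As a quick reminder of why that matters, here's a typical embedded use (a minimal sketch; the flag name and the ISR are illustrative, not from the paper):

#include <stdint.h>

/* Set by an interrupt service routine when a conversion completes.
   Without "volatile" the optimizer is free to hoist the read out of
   the loop below and spin forever on a stale copy. */
static volatile uint8_t adc_done;

void adc_isr(void)
{
    adc_done = 1;
}

void wait_for_adc(void)
{
    while (!adc_done)
        ;    /* each pass must re-read adc_done from memory */
}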

In automated tests the researchers ran on thirteen compilers, every compiler generated incorrect code for accesses to volatile variables. The best of the lot miscompiled only 0.037% of the time; the worst was 18.72%. The first doesn't sound so bad, but even a single such error could cost weeks of debugging the generated assembly code. The error rate of the worst compiler is downright terrifying.

It's unclear how these errors apply to your code; it seems some code constructs are fine while others aren't. And there's no data to suggest that the compiler you're using isn't perfectly fine - but then no one suspected the thirteen either. After reading this article I'm left with zero confidence that any other compiler is any better.

The authors speculate that the problems stem from an essential conflict between optimization and the nature of volatiles.

They make several recommendations. The first is the use of helper functions that encapsulate accesses to volatiles, thereby outfoxing the optimizer. It's important, of course, that the functions aren't inlined and that macros aren't used in their place. In the study, the helper functions eliminated 96% of the volatile miscompiles.
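
In rough terms the idea looks like this (a sketch only - the register address and names are made up, and the helpers should live in a separately compiled module so the optimizer can't inline them):

#include <stdint.h>

/* Read a volatile 32-bit location through a non-inlined helper, so the
   optimizer sees an ordinary function call instead of a raw access. */
uint32_t vol_read_u32(volatile uint32_t *addr)
{
    return *addr;
}

/* Write a volatile 32-bit location through a helper. */
void vol_write_u32(volatile uint32_t *addr, uint32_t value)
{
    *addr = value;
}

/* Hypothetical memory-mapped status register. */
#define STATUS_REG ((volatile uint32_t *)0x40001000u)

void wait_for_ready(void)
{
    /* Poll bit 0 of the status register until the hardware sets it. */
    while ((vol_read_u32(STATUS_REG) & 0x01u) == 0)
        ;
}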

Finally, leave optimizations turned off on modules that rely on volatiles.
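
Depending on your toolchain you may be able to do that per file rather than across the whole project. With GCC, for instance, something like the following should work (GCC-specific as I understand it - check your compiler's documentation):

/* GCC-specific sketch: disable optimization for the functions defined
   after this pragma, leaving the rest of the build fully optimized. */
#pragma GCC optimize ("O0")

#include <stdint.h>

volatile uint32_t tick_count;    /* updated from a timer ISR */

uint32_t get_ticks(void)
{
    return tick_count;
}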


Are Debuggers Evil?

This statement in the last Muse generated a lot of email dialog:
> "Debuggers: (This is going to go against the grain of some folks!)
> Debuggers are evil -- learn how to figure out where your code's
> problems are without using a debugger."

Two camps of thought emerged, with Paul Bennett best summarizing the anti-debugger school: "I am with you all the way on this statement. The best debugging tool in the world is not to introduce the pesky things in the first place. Here the "MkI eyeball" cast over the initial specifications, the full technical specifications and the framework for the code will resolve up to 70% of those problems that turn so easily into bugs in the code. Please note that up to this point no code should have been written.

"The second bit is a thorough "MkI eyeball" cast over the source code that is written to support the full technical specification, making sure that all requirements of the spec are met in the code (this should be by clear attribution in the documentation and the source code). Also a walk-through of the code functions, following every logical path, before you compile it will probably bid farewell to another 20% of the potential bugs.

"Finally, test running the code (preferably in small steps) with clearly defined break-points, input conditions and fully checking out the resultant output for sanity. Correcting any problems as you discover them and re-testing will help eliminate the bugs that do get through to this point.

"After this point any bugs that are left will be interesting, and really worthy of skilled effort to track down the root causes.

"I have never had a debugging programme of any description. My most used debugging tools are keen observation, inspection and simple problem solving techniques (see George Polya "How to Solve it" for a treatise on the last item)."

On the other side, John Hartman neatly summarized why he, and others (including myself) are pro-debugger: "Your last newsletter contained someone's opinion that debuggers are evil, apparently because they `fail sometimes'.

"This is silly. Every car I have ever owned has `failed sometimes'. But I continue to drive a car because a) when the car works, it is much faster than walking, and b) it doesn't fail very often.

"Same thing with debuggers: in most cases you can use a debugger to help you isolate and identify the sources of problems far more quickly than by any other means - and usually without needing to modify the program.

"Certainly there are times when you need to use other debugging techniques. My car isn't the best tool for distances under 10 feet or over 1000 miles. As with any tool, you have to know when it is and is not appropriate. Oscilloscopes, LEDs, printf, "flight recorders" in system RAM and the like are all important techniques, and every embedded software developer should know how to use them and when to use them.

"But I have burned afternoons using printf to track down problems that a debugger could have let me find in minutes.

"These days, most of us use high-level languages for just about everything, and assembler only when performance or some other requirement requires it. We do it because of the huge productivity gain of using the high-level language. I consider a decent debugger to be my high-level tool, and printf etc. to be the special-purpose tools, brought out only when needed."

 

Tools and Tips

Louis Russ wrote: "I was reading https://www.ganssle.com/tools.htm which looks like a very useful page and was surprised to not find Perforce in the list of version control tools. So I thought I'd offer it for submission to the list.

"I've used this on a couple of projects and like it. I particularly like their support which IME has been outstanding. I wish other vendors were so easy to deal with and so helpful.

"For small projects they have a free non-expiring two user license. Lots of integration with other tools. See http://www.perforce.com/index.html ."

By the way - there are now over 150 different tools reviewed on https://www.ganssle.com/tools.htm by Muse readers. Thanks! And keep them coming.


Joke for the Week

Vicky Hansen contributed this to the recent Christmas jokes:

Silent night, quiet night
All is calm, no light is bright
Nothing runs without the code
Bare hardware has no use or mode
Data cannot flow
There simply is no show

Silent night, another night
The bus is dead, no prompt in sight
No connections, address or node
Hardware waits for code to load
Not even fans can blow
Firmware makes it all go

Silent night, busy night
The update made it work right
Fans are on, connections are coded
The OS and apps are being loaded
Firmware made it so
Firmware made it go


About The Embedded Muse

The Embedded Muse is a newsletter sent via email by Jack Ganssle. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster. We offer seminars at your site covering steps you can take now to improve firmware quality and decrease development time.