You may redistribute this newsletter for noncommercial purposes. For commercial use contact email@example.com.
Did you know it IS possible to create accurate schedules? Or that most projects consume 50% of the development time in debug and test, and that it’s not hard to slash that number drastically? Or that we know how to manage the quantitative relationship between complexity and bugs? Learn this and far more at my Better Firmware Faster class, presented at your facility. See https://www.ganssle.com/onsite.htm.
I've posted two more videos.
While long names will never by themselves yield self-documenting code, correctly naming things is hugely important. This 8-minute video explains why using names like read_timer_1 is poor programming practice.
Oscilloscopes weren't always the wonderful digital instruments that offer so much functionality. This 11-minute video is an in-depth look at the one from 1946 I discussed in Muse 252. It was a lot of fun "extracting" the electron gun from a CRT!
|Quotes and Thoughts
"Furious activity is no substitute for understanding." H. H. Williams
|Tools and Tips
Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.
|Some MISRA Rules I Don't Care For
The MISRA standard for C gives about 140 rules for the safer use of the language. It's available as a .PDF file here for £15, and is probably the most-often used firmware standard in the embedded industry. A lot of tools will automatically check your code against the rules. I am an enthusiastic supporter of the standard.
In the last issue I showed a bug that, had the MISRA rules been followed, probably would not have occurred. I did mention that it seems to me some of the rules need some tuning. A lot of people wrote asking for more details. Arguing about any sort of standard is akin to debating religion or politics, but here's my thinking:
Section 5.2.1 says one must develop both a compliance matrix and a deviation process. Good ideas, but this means almost no code will be compliant, since the vast majority of MISRA users will simply use Lint or other tools to check for compliance. And it says "In order to use MISRA C" the matrix and deviation process must be developed. What does that mean? If every rule is followed (5.2.1 is not a rule but a required process activity) is the code compliant absent these steps?
Section 5.5 says all of the code in a project must comply in order to claim MISRA compliance. That means almost any project with legacy code cannot be claimed to be compliant. Linux users can't make the claim. I have observed that many teams don't bother with MISRA at all, as they feel it's hopeless since they have so much non-compliant code. Some sort of happy middle ground would make sense.
Section 6.8 says automatically generated code must be compliant. That may be reasonable, but only for cynical reasons (e.g., the tool is not trusted). MISRA is all about the code, not the design. A trusted tool that translates a design to code would, one would hope, do its job perfectly. If this sort of rule were applied to a C compiler, then shuffling generated machine code for optimization reasons, which may produce horrible-looking stuff, would be illegal. (More on this below).
Directive 3.1 "All code shall be traceable to documented requirements." That is a great rule, but the rationale gives little real guidance. I think it's far too easy for a team to claim compliance while doing nothing of the sort. And what does this mean for legacy code?
Directive 4.1 says "Run-time failures shall be minimized." What does that mean? It can't be measured unless there's a metric. There are a lot of words associated with this directive, but they really don't say what makes the code acceptable or not. "Minimized" is not an engineering term.
Directive 4.6, which tells us to use typedefs to define better types (e.g., uint16_t for an unsigned 16 bit integer), is advisory. It should be mandatory. One of C's biggest problems is the type system.
Directive 4.12 (which is mandatory) prohibits the use of dynamic memory allocation. But then it says "if a decision is made to use" it. What is the rule? And how does one ensure third party software complies since often that comes without the source code? In many resource-constrained systems it's impossible to avoid dynamic memory allocation. And, some systems might implement dynamic-like allocation (pools, etc). I think the rule is rather unclear on where these systems fall. Rule 21.3, which outlaws the use of calloc, malloc, realloc and free, seems redundant. Rule 22.2 (only free memory if it was allocated) appears to allow dynamic memory. The rules seem contradictory and don't offer good guidance to the developer.
Some rules (e.g., 17.8) use as part of their justification that developers who are "unfamiliar with C" may make certain mistakes. I'd argue that those unfamiliar with C shouldn't be using it in production systems.
What's your take on MISRA?
|On Trusted Tools
MISRA's section 6.8 that requires automatically-generated code be compliant is part of a larger problem. Do we trust our tools?
We generally don't examine compiler-generated object code to ensure the tool worked correctly. Yes, the test process ensures that no bugs crept in during compilation, but in 2014 compilers are so good that such problems are the exception. We rightly trust the tool to perform as intended, and even, most of the time, enable optimization, which can completely scramble the execution flow. To my knowledge no one mandates any sort of MISRA-like standard for compiler-generated code; it can be a mess as long as it is correct.
The modeling community has a broad range of tools to translate UML, statecharts, and the like directly to a programming language. At that point modeling often falls apart. Yes, in some cases it is possible to debug at the model level, so the generated code is as uninteresting as the object code a compiler creates. All too often, though, developers have to tune and debug the generated C. That seems like a throwback to the early days of microprocessors, when we wrote assembly but debugged binary. All too often the model and the code then drift apart, much as external documentation stays accurate only with constant care.
(To be clear, some modeling tools are excellent and do insulate the user from language details).
Sometimes, though, we're not allowed to trust the tools. The highest level of the DO-178B/C avionics standard requires one validate the object code, going beyond testing, in some circumstances. One example is if the compiler adds code that is not traceable to the source to, say, check that a subscript doesn't go out of range.
One of the many justifications for Ada is that the compiler is a proven entity. Compilers are validated in a standardized manner. Yet, in a spirit of pessimism we frequent travelers admire, DO-178B/C does not make an exception to the object code verification rule for Ada.
It would be fascinating to get data on how often this extreme level of validation finds a compiler-generated error, but the literature seems very quiet on that subject.
When was the last time you encountered a compiler bug? What was the problem?
|Software Engineering for Embedded Systems
Software Engineering for Embedded Systems, by Rob Oshana and Mark Kraeling, is a monumental work. At 1150 pages it's probably the most comprehensive book about firmware to date. A lot of people contributed to the book, though Rob and Mark wrote a great deal of it.
Firmware is getting big. Some estimates peg firmware at 80% of the development cost of modern products. Whereas a few years ago a system with a few hundred thousand lines of code was considered huge, today it's common to find multi-megaline code bases. Smart phones use tens of millions of lines and are spectacularly complex. Consumers figure 70% of the value of an automobile comes from the electronics - they're buying code more than engines and wheels.
Since the introduction of the first microprocessor forty years ago the standard way to build firmware was heroics. Smart people, way too much overtime, and a lot of yelling from drill sergeant bosses got products out the door. But that approach just does not scale and simply can't cope with today's huge systems. A more disciplined approach is needed. And this book does an awesome job of conveying that information.
Mark Pitchford's chapter on integration and testing, for instance, covers those subjects in exactly the right way. It all starts from the requirements, and ends with tests mapped back to those requirements. Mark shows how one can use control-flow graphs to ensure the tests are complete. Today most developers have no idea if their test suite covers 10% of possible flows, or 90% or 100%. As engineers we need to do the analysis he outlines to prove our product is correct. And - mirabile dictu! - there are plenty of tools from a variety of vendors that will construct test cases, manage requirements, and ensure complete testing coverage.
Bruce Douglass complements his excellent book on the subject with a chapter on design patterns for embedded systems. Patterns are a disciplined form of reuse, common in IT projects but only now finding a place in the embedded world. Jim Trudeau's chapter about reuse gives insight into more traditional (and all-too-rarely practiced) aspects of recycling software components.
Lest one think that this means big-up-front design, the authors also address agile programming and the special challenges embedded systems present.
But it also deals with the ugly realities of engineering. Frank Schirrmeister's chapter on hardware/software co-design shows the reader how to bring both hardware and software up in parallel, despite the obvious problem that with neither complete, neither can be tested using conventional strategies.
What makes embedded so unique? A big reason is scarce resources. Limited memory and CPU cycles sorely test the developer; today power consumption is a big concern since so many systems are expected to run from batteries. A number of chapters address all of these issues, even showing software techniques to squeeze every microwatt from a battery.
Another aspect unique to firmware is its close integration with the hardware. That, too, gets treated in-depth in several chapters.
Other topics include the special needs of automotive code. Linux and Android, of course. Building safety-critical systems (and more of us do this than we might expect; change "safety-critical" to "mission-critical" and you may be surprised at how your product should be embracing the concerns of the safety community).
It's hard to think of any aspect of embedded software that this book doesn't cover.
The only constant in this field is change. A November 2012 article in India Times claims software developers are obsolete by age 40. That's a tough age to start a new career. Read this book and learn more effective ways to get your projects out the door. It'll help you stay relevant and avoid becoming one of those doomed over-40 engineers.
My one beef is the cost. It's $90 on Amazon for the hardcopy edition, and only ten bucks less for the zero-cost-to-reproduce Kindle edition.
Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intents of this newsletter. Please keep it to 100 words.
|Joke For The Week
Note: These jokes are archived at www.ganssle.com/jokes.htm.
Charlie Moher sent this. It's the type of thing too many of us engineers have had to put up with, and is a nice parody of so many inane TED talks: https://www.youtube.com/watch?v=DkGMY63FF3Q
|Advertise With Us
Advertise in The Embedded Muse! Over 23,000 embedded developers get this twice-monthly publication.
|About The Embedded Muse
The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at firstname.lastname@example.org.
The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster.