The Embedded Muse
Issue Number 266, August 4, 2014
Copyright 2014 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com

You may redistribute this newsletter for noncommercial purposes. For commercial use contact jack@ganssle.com.

Contents

Editor's Notes
Embedded Video
Quotes and Thoughts
Tools and Tips
Results of Firmware Standard Survey
Thoughts on Software Engineering
Jobs!
Joke For The Week

Editor's Notes

Best in class teams deliver embedded products with 0.1 bugs per thousand lines of code (which is two orders of magnitude lower than most firmware groups). They consistently beat the schedule, without grueling overtime.

Does that sound like your team? If not, what action are you taking to improve your team's results? Hoping for things to get better won't change anything. "Trying harder" never works (as Harry Roberts documented in Quality Is Personal).

Did you know it IS possible to create accurate schedules? Or that most projects consume 50% of the development time in debug and test, and that it’s not hard to slash that number drastically? Or that we know how to manage the quantitative relationship between complexity and bugs? Learn this and far more at my Better Firmware Faster class, presented at your facility. See https://www.ganssle.com/onsite.htm.

Embedded Video

Two more videos are available. Episode 8 is about using two GPIO bits to simulate an I2C interface, and then using a simple protocol analyzer to decode the serial stream. It's a very cheap way to create debug and logging info. All of the code is included.
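
The episode's code is on the video page; as a taste of the technique, here's my minimal sketch of the general idea (not the episode's code) for clocking a debug byte out of two GPIO bits in I2C style. The port address and pin assignments are hypothetical; map them to your part:

/* Hypothetical memory-mapped GPIO output register and pin masks. */
#define GPIO_OUT   (*(volatile unsigned int *)0x40000000u)
#define SDA_HIGH() (GPIO_OUT |= (1u << 0))
#define SDA_LOW()  (GPIO_OUT &= ~(1u << 0))
#define SCL_HIGH() (GPIO_OUT |= (1u << 1))
#define SCL_LOW()  (GPIO_OUT &= ~(1u << 1))

static void i2c_dbg_start(void)
{
    SDA_HIGH(); SCL_HIGH();  /* bus idle */
    SDA_LOW();               /* SDA falls while SCL is high: start condition */
    SCL_LOW();
}

static void i2c_dbg_write(unsigned char byte)
{
    int i;
    for (i = 7; i >= 0; i--) {           /* MSB first */
        if (byte & (1u << i)) SDA_HIGH(); else SDA_LOW();
        SCL_HIGH(); SCL_LOW();           /* clock the bit out */
    }
    SDA_HIGH();                          /* release SDA for the ACK slot */
    SCL_HIGH(); SCL_LOW();               /* ninth clock; no slave, so no ACK */
}

static void i2c_dbg_stop(void)
{
    SDA_LOW(); SCL_HIGH();
    SDA_HIGH();              /* SDA rises while SCL is high: stop condition */
}

A real implementation would add a little settling delay between edges, but a protocol analyzer is forgiving about clock rate as long as the edge ordering is right.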

Episode 9 is a short video about using Ada on a microcontroller. I talk to a lot of people who complain Ada is only viable on big processors, but it actually works really well even in smaller systems. All of the tools are free.

Quotes and Thoughts

"When test is the principal defect removal method during development, corrective maintenance will account for the majority of the maintenance spend." Girish Seshagiri

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

One respondent to the survey indicated in the comments that he uses, among other tools, Uncrustify to clean up his source. This is one of the most powerful and configurable pretty printers around. There are about 450 configuration options and, unfortunately, almost no documentation. Several configuration files are included; defaults.cfg lists all of the options, and that file seems to be about all there is for a user's manual.

Uncrustify will adjust spacing, tabs, brace placement, spaces around operators and much more. It can change a multi-line function declaration so the parameters are all vertically aligned. One of the included configuration files will change source to resemble K&R style.

It runs in a Windows/Linux command shell and is breathtakingly fast. I ran it against an 8000 line C source file and the execution time was perhaps a quarter second on a Windows 8.1 six-core machine.
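
If you want to experiment, a typical invocation and a couple of config entries look something like this (the option names come from defaults.cfg; verify them against your version's file):

uncrustify -c mystyle.cfg -f messy.c -o tidy.c

where mystyle.cfg might contain, among its hundreds of options:

indent_columns   = 4   # spaces per indentation level
indent_with_tabs = 0   # 0 = use spaces only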

Uncrustify is free and available here. There's a GUI front end for Macs available here which makes setting the configuration options a lot easier.

My philosophy about pretty printers is that one should type in correctly-formatted code, rather than entering a mess and counting on a tool to clean things up. But no one is perfect and there's no reason not to use the computer to fix our occasional mistakes. And, we're often confronted with legacy or inherited code; it's a waste of expensive engineering time to manually reformat that.

Results of Firmware Standard Survey

In Muse 265 I asked people to complete a very short survey about which firmware standards, if any, they use. About 500 responded, and here are the results.

It was common for people to indicate they used MISRA with some modifications and/or exceptions. A few use MISRA combined with other standards. In both of these cases I called the standard used "MISRA-modified."

In some cases just a single respondent used a standard. These were: EN50128, Google Coding Standard, SLI-4, K&R, and DO-178B (which isn't a coding standard). In other cases six or fewer people were using one; these were CERT, Netrino, and mine. None of these are reflected in the results below, as they are in the noise.

So here's the data:

[Chart: Use of firmware standards]

Almost 70% do use a firmware standard, which does not mirror what I see in my travels. It's possible there is a selection bias at work.

[Chart: MISRA standards use]

[Chart: MISRA-modified standards use]

[Chart: Proprietary standards use]

There were a lot of interesting comments; alas, far too many to include. Here are some highlights:

All new code is written to the standard but, depending on the project, even some old code is updated. If a project is large and requires an in-depth study of the old code for its reuse anyway, it makes sense to update formatting, naming, and comments. If the project is just a small feature or update to an old project, then we don't update the format. It is much less frustrating to support code that was created under a good standard, too!

Standards, for us, are more about producing consistent code amongst developers, code that avoids easy mistakes and code that can be used across different platforms - this makes MISRA very useful indeed. Lots of rules seem excessively prohibitive but with a bit of digging and research, their place can be understood. C is not a language for show-offs. You may know a quirky, succinct C snippet that achieves the same job in half the number of lines but if no other developer can follow it, or it is prone to common C pitfalls, then you've not saved any time. MISRA has helped us churn out reams of, honestly, dull code but this is code that *works*.

We use the tool QAC. There's no deviation/approval process, as it is a small team. Deviations required by low-level hardware, processor access, or conflicts between tool and compiler are 'tagged' using the tool's syntax tags so they are partially hidden for convenience. In this way non-compliant lines are annotated in the code, with the tool summary showing them. Code changes can be checked to '100%' compliance without wading through known non-compliances.
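
For readers who haven't seen this approach: QAC and similar analyzers accept inline comment directives that suppress a specific message on a specific line. I don't have this reader's configuration, and the exact directive syntax varies with the tool version, but the idea is roughly like this (the register address and message number are purely illustrative):

static unsigned int read_timer_status(void)
{
    /* Deviation, reviewed: memory-mapped register access requires an
       integer-to-pointer cast. The tag suppresses that one message here. */
    return *(volatile unsigned int *)0x40021000u; /* PRQA S 0306 */
}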

I use MISRA as a "best practices" guide. Much of the code I work with is legacy, and the effort to bring it into MISRA compliance cannot be justified. However, as I work with the code--modifying it, extending it--I use MISRA as a guide to write the best code I can.

I use MISRA as a guideline - it helps what I call "responsible engineering" in that MISRA makes you think "am I doing this in a responsible way?". And I care more about that than blindly following obtuse rules. Any monkey can blindly follow rules - but an engineer THINKS about what he/she is doing. And, an engineer can COMMUNICATE what he/she thought and did. Some of these rules also make the basis of some interesting interview questions to separate the can-coders from the engineers with real talent.

With multiple people working on a single project, it becomes difficult to train each of them to use the standard. Some are so engrossed in fixing bugs that they forget to follow the standard, and instead become so defensive in their coding that they are not even ready to add comments, for fear of introducing more bugs into working code. :)

We use a variety of standards over a variety of programmes, for coding and general development. Many are specific to the company I work for but can be traced to many other 'standards'. Most standards add an overhead initially, but finding a bug/issue during code review is just so much cheaper than doing so when the product is in service. I could go on for ages arguing about many of the points made, but to pick up on just a few:

  • For safety-critical devices a lack of trust in tools is a must, IMO. Validating 'tools' can be mitigated by the use of an independent source of review (code walk-throughs and testing, to name but a few). Requirements-based testing in conjunction with black/white-box testing gives some level of independent testing of a compiler, for example.

  • Standards should not define how to do something but what objectives you are trying to meet. As an example, you mention "Section 5.5 says all of the code in a project must comply in order to claim MISRA compliance. That means almost any project with legacy code cannot be claimed to be compliant." Correct - would you want it any other way? Why would you want to allow some library with potential failure modes to control the device that has your name on it? For this reason we mandate that no libraries can be used; all code is treated as new and must have requirements and testing to cover it.

My engineering department has a sort of "coding standard" written years ago, but no one follows it. The department belief is that standards destroy a software engineer's creativity.

I design small embedded products that use primarily PIC controllers with firmware written in assembler. Although I try to use the best practices that I read about, I am not aware of a firmware standard that is applicable to code written in assembler. That said - I'd like to be shown that I am wrong about this and that there is indeed a standard that I can learn from and use.

We are currently developing all safety related code in assembly to avoid the "certified compiler" unavailability on our microcontrollers. All the code is reviewed by other department colleagues and submitted to third party assessors.

I've been doing embedded design for 20 years and the team has always been at most 2 people. The need for a formal standard has not shown itself. I have looked at them and have always found them to be too subjective, full of exceptions and lacking in realistic reasoning.

This is largely a DO-178 shop, so we're obligated to have some kind of coding standard. The one commonly referenced is an internal document of some 30 pages, incorporating some prohibitions from DO-178 (recursion comes to mind) and some style points (indenting), but it's nowhere near as involved as MISRA. So: throw in a pile of college hires, and the code ranges from wretched to perverse... They used to bring in veteran contractors like myself for "verification" once the early hw/sw integration was done, but that's a little late for any process effect. And I'm seeing bug rates in the 1% range for this "integrated" code; granted, lots of silent bugs, but frightening nonetheless.

Picked at random from the survey respondents, Scott Becker won a copy of Software Engineering for Embedded Systems.

Thoughts on Software Engineering

Some readers had great comments about software engineering which they kindly allowed me to share.

Jerry Trantow wrote:

I think my copy of PC-Lint cost me $185 close to twenty years ago. When I left that employer, I had the license transferred. I still get free updates for it. Best development money I have ever spent.

I strongly believe in using the best tools and that the tools help you become a better developer. I have joined several development teams in trouble and my first step is often to guide the other developers to adopt tools. There is usually some resistance. However, in my experience, six months into a project most developers will buy in. (If they haven't, either the tool isn't much good or the developer needs to be replaced.)

I consult for a medical startup that was releasing a navigation application for an FDA 510(k) submission. The first submission, a few months after I started, was a nightmare. They weren't using version control, and working code could only be built on a certain computer. My first step was convincing everyone to use SVN, which helped immensely. The project managers were impressed that they could access the released code (and always get the correct version). The software developers were relieved when the same code could be built by everyone and we could track changes. We also started using a defect/issue tracker. (TeamForge/Redmine/Trac/Bugzilla/etc.) We managed to get the submission in, but realized we needed process, code, test, and documentation improvements.

The next step to improve code quality was to use Lint and stamp out the Clang build warnings. Cleaning up the build caught a number of program crashes that had been haunting the code. Having fewer warnings in the output helped the new warnings stand out. Gradually, we bumped up the Clang compiler warning level, and developers improved their coding as they were prodded by the warnings and realized the benefits. We were able to use the defect/issue tracker to prioritize efforts. Developers started to enjoy working smarter rather than harder, and project management gained confidence in the risks and timelines for the next 510(k) submission. We started unit testing our code to find the remaining issues and using Doxygen to document the design.
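
For anyone wanting to follow the same path, Clang's standard warning progression looks like this (these are stock compiler flags, not necessarily the exact levels Jerry's team used):

clang -Wall -c module.c          # the sensible baseline
clang -Wall -Wextra -c module.c  # stricter; catches more suspicious code
clang -Weverything -c module.c   # every warning Clang has; noisy but educational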

With these basics under control, we started improving the software development process. For the first submission, some of the process and design documents had been created at the last moment rather than by following an SDP. Now that developers and managers could see some of the benefits of the tools, they realized that many difficulties were caused by poor planning and incomplete requirements. I think this is always a work in progress, but at least people were realizing the benefits of following an SDP. The results we had from using VC, tracking, unit testing, and Doxygen helped immensely with the next submission. The next project will go a lot smoother now that project managers and developers understand why and how to use the tools to follow a development process.

I realize I am rambling, so I will try to summarize the tools I have found essential over 30 years of software development:

Version Control - I've used many: PVCS, Perforce, SourceSafe, and now primarily Subversion and some Git. The repository should be hosted on an external server, and there is no excuse for software development that isn't in version control. For Windows, TortoiseSVN is wonderful.

Defect/Issue Tracking - This will simplify life as a developer and help project managers understand the project's state. It's also useful for creating required documentation. There are some nice tie-ins between VC and tracking software to link code changes to issues. (Redmine, Bugzilla, TeamForge, etc.)
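
As one concrete example of such a tie-in: Redmine, by default, scans commit messages for issue references, so a commit like the one below shows up automatically on the issue's page (the issue number is invented):

svn commit -m "Clamp ring buffer index on wrap (refs #482)"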

Coding - I use PC-Lint for almost all my coding. I can use it with Visual Studio, TI Code Composer Studio, Atmel Studio, etc. Using a single tool across multiple platforms is nice, as I know the options and how to interpret the warnings. I put Lint in the IDE prebuild step so it runs before the compiler. I also do a global lint in the post-build step prior to the link. On the Mac, I use Clang. One client I work with uses LDRA for some aviation and military projects. I'm one of the few developers who asks them to use LDRA for projects I am involved in. It's a little extra work, as they only have a few seats of LDRA, and it's a bit of a hassle since I need someone else to run the analysis. But there is a benefit: I get a code review by some of their best developers when they run the analysis.

Build Configuration - I maintain a test build, debug build (with lots of ASSERT() and VERIFY()), and possibly a dedicated release build if my debug build executes too slowly.
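
Jerry didn't give his macro definitions, but the usual pattern looks like the sketch below (assert_failed() is a hypothetical project-supplied handler). The difference between the two macros: in a release build ASSERT() disappears entirely, while VERIFY() still evaluates its argument, so expressions with side effects remain safe inside it.

/* Debug builds: both macros call the failure handler on a false
   expression. Release builds: ASSERT() compiles away; VERIFY() keeps
   evaluating its expression but discards the result. */
#ifdef DEBUG
void assert_failed(const char *file, int line);   /* hypothetical handler */
#define ASSERT(expr) ((expr) ? (void)0 : assert_failed(__FILE__, __LINE__))
#define VERIFY(expr) ASSERT(expr)
#else
#define ASSERT(expr) ((void)0)
#define VERIFY(expr) ((void)(expr))
#endif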

Documentation - I use Doxygen to comment my source code. I have a few clients that have bought into the idea of the Doxygen output serving as the detailed design document. This warns me if I miss documenting a function or variable, and I have found some of the Doxygen warnings actually prompt me to improve the code structure. This saves me a ton of documentation work and ensures the documentation is complete and consistent. I find this much easier than any other software documentation method I have tried. Many years ago we used the similar, LaTeX-based CWEB on a Ford automotive project. We had the output documentation bound with a hard cover, and the Ford managers were amazed.
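
For anyone who hasn't tried it, Doxygen builds documentation from structured comments in the source. A typical function header looks like this (the function itself is invented for illustration):

#include <stddef.h>
#include <stdint.h>

/**
 * @brief   Return the arithmetic mean of a buffer of samples.
 * @param   samples  Pointer to the samples to average.
 * @param   count    Number of samples; must be greater than zero.
 * @return  The mean, truncated toward zero.
 */
int32_t mean(const int32_t *samples, size_t count);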

Unit Testing - I haven't settled on one framework that works on all platforms. On new projects, I try to unit test most of my functions. I can run automated testing on my Mac projects, but I still have to manually select and run test builds on my embedded projects. This is an area for improvement when I settle on a testing tool. Unit testing is a great place to run timing tests on critical code. It's also a good way to verify a code change does what it is supposed to do. I often compare old functions with the new, improved version in my unit tests. I have worked on several projects where the specification is given as a Simulink or Matlab example. The unit test code runs the same data through the embedded code and Matlab and compares the results.
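
A sketch of that last idea, with filter_step() and the vectors invented for illustration; the reference outputs are exported from the Matlab model, and the test simply marches the embedded implementation through them:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

extern int16_t filter_step(int16_t input);  /* code under test */

/* Input and expected-output vectors exported from the Matlab model. */
static const int16_t test_in[]  = { 0, 100, 200, 300, 400 };
static const int16_t test_ref[] = { 0,  50, 150, 250, 350 };

void test_filter_against_reference(void)
{
    size_t i;
    for (i = 0; i < sizeof test_in / sizeof test_in[0]; i++)
        assert(filter_step(test_in[i]) == test_ref[i]);
}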

Ray Keefe wrote:

We have done MISRA-C:2004 compliant development. This was for an automotive company. They also used QAC by PRQA for static analysis of code. That tool is very comprehensive.

As you rightly point out, the process compliance rules are harder to be confident about than the coding compliance rules.

There were some things we had to work through. One of the modules we designed was the non-volatile memory manager. This was a table-driven module where the data was predefined, so it was not an arbitrary chunk manager, but it still handled the data and wrapped it for storage on a variety of NVM options. Both the MISRA checker and QAC threw complaints about the use of pointer math. So a design review and design walkthrough were done to look at the code and the design, and it was signed off as OK.

Our client's understanding of the value of MISRA is that you need a compliant process, which is not hard if you start a project with that in mind, and you need to review and sign off on exceptions. They used a requirements allocation matrix, which made that part of the process easy. You can go to market with exceptions as long as they are reviewed and signed off. What MISRA intends to do is prevent the most frequent types of serious errors from getting into a design without being looked at carefully.

As an example, unions are very useful for communications stream handling, but may not be used under MISRA (required rule 18.4). This is because alignment and other issues cause a large number of system errors, often at the interface rather than within an individual system. While it is possible to write code that uses unions safely in these circumstances, it is actually safer not to, and to be explicit. So we used typed pack and unpack routines instead. In this case, I think it is a reasonable rule. And you get to deal with endian issues automatically.
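
Ray didn't include his routines, but a typed pack/unpack pair generally looks like the sketch below (little-endian wire order chosen for illustration). Because each byte is placed explicitly, nothing depends on the compiler's union layout, padding, or the host's endianness:

#include <stdint.h>

static void pack_u16_le(uint8_t *buf, uint16_t value)
{
    buf[0] = (uint8_t)(value & 0xFFu);         /* low byte first on the wire */
    buf[1] = (uint8_t)((value >> 8) & 0xFFu);
}

static uint16_t unpack_u16_le(const uint8_t *buf)
{
    return (uint16_t)((uint16_t)buf[0] | ((uint16_t)buf[1] << 8));
}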

Another example is the minimisation of run-time errors by the use of at least one of:

  • Static analysis tool/technique

  • Dynamic analysis tool/technique

  • Explicit coding checks to handle run time faults

This is rule 21.1. I think you should always do the third of these, so we never write anything that isn't compliant with this rule. Plus we use static analysis tools (RSM and PC-Lint) and we monitor run-time behaviour, including logging asserts.
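
An explicit coding check, in the sense of that third bullet, is nothing exotic; it's just validating inputs before using them. A sketch, with log_fault() as a hypothetical error hook:

#include <stddef.h>
#include <stdint.h>

extern void log_fault(int code);   /* hypothetical error hook */

uint32_t table_lookup(const uint32_t *table, size_t len, size_t idx)
{
    if (table == NULL || idx >= len) {   /* explicit run-time check */
        log_fault(-1);
        return 0u;                       /* defined, safe fallback */
    }
    return table[idx];
}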

An example of a rule we had to adjust our coding standard for is rule 2.2, which insists comments must be of the /* comment */ variety and not // comment. A minor issue, but I had got used to the C++ commenting style.

A great tip I picked up from writing the NVM manager: for debugging, our client used asserts that throw to a handler you can fall through, but released code throws to an NVM log that records the incident for later review. If you are writing mission-critical code and have an NVM management module, this becomes just another ring buffer and isn't hard to add; you only have to consider the impact of the extra logging on run-time performance and storage. Ideally, nothing gets logged. In practice, design requires assumptions, and sometimes they are wrong!
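
A minimal sketch of such an assert log (all names hypothetical; a production version would also worry about NVM write endurance and power-fail atomicity):

#include <stdint.h>

#define ASSERT_LOG_DEPTH 16u

typedef struct {
    uint16_t file_id;  /* compact file identifier, not a path string */
    uint16_t line;
    uint32_t tick;     /* timestamp at the moment of failure */
} assert_rec_t;

static assert_rec_t assert_log[ASSERT_LOG_DEPTH];  /* in NVM-backed memory */
static uint8_t assert_head;

extern uint32_t get_tick(void);   /* hypothetical system tick source */

void assert_record(uint16_t file_id, uint16_t line)
{
    assert_log[assert_head].file_id = file_id;
    assert_log[assert_head].line    = line;
    assert_log[assert_head].tick    = get_tick();
    assert_head = (uint8_t)((assert_head + 1u) % ASSERT_LOG_DEPTH);
    /* Debug builds would halt in a handler here; released code falls
       through and keeps running, with the incident preserved for later. */
}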

So overall, the idea of having a coding standard, using design principles, avoiding or being careful in areas of common error and doing some checks that your code is running as expected are all good things.

After we emailed back and forth a bit, Ray added this:

There are lots of things that you would expect to be standard practice in software development, let alone embedded development, but which seem to be unknown, untaught, and unused. Our experience with hiring is that almost no one has heard of, let alone done, any of the following:

  • Unit testing

  • Test harnesses

  • Any test methodology

  • Test-driven development and what it is about

  • The difference between a debug version and a release version, and what will differ in the compiled code

  • Integration testing as something other than “it builds”

  • Design patterns

  • Design review (or even what documenting a design means)

  • Peer code review

  • Mocks

  • Agile software development (also, unhelpfully, called extreme programming)

  • Static analysis

  • Code quality metrics (cyclomatic complexity, for instance)

Jobs!

Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intents of this newsletter. Please keep it to 100 words.

Joke For The Week

Note: These jokes are archived at www.ganssle.com/jokes.htm.

From Jordin Kare:

If you connected every electrical device in the world end to end...

  ...you'd have a World Series.

Advertise With Us

Advertise in The Embedded Muse! Over 23,000 embedded developers get this twice-monthly publication.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster.