
A Conversation With HCC Embedded, Part 2

Summary: Jack talks to HCC Embedded about very high reliability firmware design.

At the Embedded Systems Conference I had a chance to sit down with Dave Hughes and David Brook from HCC Embedded (http://www.hcc-embedded.com/). They have an unusual devotion to getting firmware right. Later, we had a telephone call and explored their approaches more deeply. I ran part 1 of this discussion last week (http://www.embedded.com/electronics-blogs/break-points/4438786/A-Conversation-With-HCC-Embedded--Part-1), and here's the second half. As noted last week, it has been edited for clarity and length.

Jack: I keep thinking about the Toyota debacle, where they were slapped with a $1.2 billion fine. The code base is something under a million lines of code, which means they have won the coveted Most Expensive Software Ever Written award. The OpenSSL situation is interesting, too, because the group that was actually maintaining it was able to solicit only on the order of a few thousand dollars a year from industry to support it.

Dave: It's absurd that there is no better method. I mean, if I were running the world, which, unfortunately, I haven't been granted yet, one of my goals would be to restructure how software is developed. Set up committees and say, "Look, we're now going to write a proper SSL, a proper TCP/IP, and distribute them in a form that can be reused." We could do that at relatively small cost. We could get extremely high quality and make it very reusable, so that these problems would be dealt with. We could probably even create a competitive environment to do it in, to make it even cheaper, where companies compete on "We've got fewer bugs than you."

Jack: How do you convince your customers that this software is, indeed, of extremely high quality? Because it's a claim that is easy to make and in fact, a lot of people routinely do. But, with you guys I know it's much more serious. How do you convince folks that this really is quality code?

Dave: You can trawl all the websites in the world and you won't find a software company saying that its software is slow, huge, unmanageable, or badly documented. They all say exactly the same thing, and the web is a huge leveler in this respect: large companies, small companies, and two-men-and-a-dog companies are all writing exactly the same thing on their sites. At HCC we feel we can only make our quality argument by creating verifiability. One of the ways we do that is by providing the test reports that show the code actually achieved full MC/DC coverage, so if you run the tests on your platform, you will get exactly the same results. We also publish documents like quality reports and checks for complete MISRA compliance; we enforce all MISRA rules. We provide a large amount of verification documentation to make people realize that there is provable quality in the products.

David: Just as a slightly tangential comment on the current state of the embedded industry, there are many mature software organizations out there who have their own experience and their own objectives regarding quality, and to a certain extent that can be a fairly normal, standard sales process. When we get to see the QA guy, he normally gets quite excited by this quality message. In medical and industrial control companies in particular, so long as you can find the right person, the sales process is fairly mundane.

There is a huge amount of background noise when it comes to promoting software online. It's very difficult to get any message out to the broad community of developers just now. The vendors at the low end of the market, the M0s and the M3s, make a huge amount of noise and distribute a lot of free and open-source software. It's not that high-quality software can't compete; the major problem is that these guys monopolize the sales and communications channels, and it's difficult to get access to them. There are good companies with good software out there and an excellent value proposition, but often they don't get to talk to a software engineer, because that engineer goes straight into the sales process the semiconductor company wishes him to go through. I think the semiconductor companies are going to have to look at how they can create a healthier ecosystem for their own products.
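[A brief aside on Dave's MC/DC point above: modified condition/decision coverage requires tests showing that each condition in a decision independently changes the decision's outcome. The C sketch below is my own contrived illustration of the idea, not HCC's code or test vectors; the function and its names are made up. A full MC/DC report lists such a vector set for every decision in the code, which is what lets a customer re-run the tests on his own platform and check for identical results.]

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical two-condition decision, invented purely for illustration. */
    static bool output_enabled(bool power_ok, bool not_faulted)
    {
        return power_ok && not_faulted;              /* decision under test */
    }

    int main(void)
    {
        /* Minimal MC/DC set for (A && B): three vectors, not all four.  */
        assert(output_enabled(true,  true)  == true);   /* baseline          */
        assert(output_enabled(false, true)  == false);  /* A alone flips it  */
        assert(output_enabled(true,  false) == false);  /* B alone flips it  */
        puts("MC/DC vectors for (A && B) pass");
        return 0;
    }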

Jack: Interesting point. The semi-vendors do make demo software available, and we know that most of that stuff is junk or toy code that works only under a narrow set of circumstances.

Dave: And most of it is actually documented by the silicon vendor as "not suitable for product use." It's for demonstration purposes only, but that's not how it ends up being used.

Jack: I know you guys use a V model process. Do you have any comments about agile approaches?

Dave: Well, from our point of view, the agile processes seem to be something developed for people to take shortcuts with development and still claim quality. Because our aim here is to develop software that is scalable and reusable forever, we're not on any short timescale. We're on a mission to make this as rigorous as we can, so we don't really have much interest in the agile methods. Functional safety standards like IEC 61508 have no trouble with something like agile.

Jack: Sure, that's fair. I understand that and certainly there's no one process that's perfect for every situation so I see that the process you've chosen makes a ton of sense for what you folks are doing.

Dave: Yeah, and a lot of this is about touch and feel and establishing something that works for you. There's no right or wrong on these things; it's like having an argument about how you do your brackets in C. What you come up with is completely and utterly arbitrary. The important thing is that you have a result that is well defined and you go through a sensible set of methods for your particular problem. It's all about creating a framework that ensures the chance of error is very small.

Jack: What about tools? I know you folks are using some of the LDRA tools and they certainly have some very, very interesting offerings.

Dave: We use different tools for different pieces of work. LDRA gives us very good static analysis, and it's probably even stronger on the dynamic analysis side, where we can really look in detail at any block of code. The tools give us reports that a quality manager, who may not know the code in detail, can use to make assessments of things like complexity, excessive knots, and the like. He then asks, "Is this really necessary? Can we make this bit nicer?" The tools help analyze where the code could be made cleaner and more understandable, and they keep us on track on things like comment ratios. You can cheat on comment ratios very easily; a mature engineering team can look at the code and say, well, actually, this doesn't really need commenting. There's no need to, say, write a comment that i++ increments i! We use DOORS for requirements and model in UML with IBM Rational.

LDRA is our main code analysis tool for unit testing, coverage testing and static analysis.
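[Another aside, on comment ratios: the contrived fragment below is mine, not HCC's. Both functions score identically on a raw comment-ratio metric; only human review can tell the comment that restates the code from the one that carries real intent. The requirement it mentions is hypothetical.]

    #include <stdint.h>

    static uint8_t retries;

    /* A comment that merely restates the code adds nothing. */
    void on_send_failure_padded(void)
    {
        retries++;   /* increment retries */
    }

    /* Same code and same comment ratio, but the comment carries intent. */
    void on_send_failure_reviewed(void)
    {
        retries++;   /* after the third consecutive failure the caller drops
                        the link and re-initializes the stack (hypothetical
                        requirement, for illustration only) */
    }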

Jack: I see a lot of older companies that have traditionally sold mechanical or electro-mechanical devices and have been forced into the microprocessor age. Management typically has no concept of what software is about or what software engineering requires, and when engineers ask for something that can't carry a property tag, like a software tool, management finds it very difficult to understand why the expense is necessary.

Dave: There are countless examples where just a small, sensible attitude to spending on software would have saved huge sums of money. One of the classics was that Mars rover, where they had only run the Flash simulation on the ground for seven days before they launched it. When you're launching a thing to Mars, you would think that more lifelike testing would be at the top of the list of priorities.

Jack: That was the Mars Exploration Rover Spirit. It started grinding a rock and suddenly went dead because, like you say, the Flash file system, actually the directory structure, was full. The good news is that they were able to fix it, and the rover had a very successful mission.

Dave: Yes, absolutely. It was just very weird that you wouldn't run that test case.

Jack: It is sort of mind-boggling. I sometimes have to remind myself that software is a very new branch of engineering and people are still trying to figure it all out. But sometimes there's the smack-yourself-in-the-forehead obvious stuff you'd think people would get.

Dave: And it's changing very rapidly as well. Look at how microcontrollers have changed. We're now running hundreds of megahertz microcontrollers with vast flash resources. It's a very dynamic industry.

Jack: Well, it certainly is and that's what keeps it interesting, that's for sure. Thanks so much for your time, and best wishes for the business.

Thanks to Dave and David. You can learn more about their company and products at hcc-embedded.com.

Published February 16, 2015