Time to Market
Summary: Here are some tips to get to market faster.
What's the fastest way to get a firmware project out the door?
Ship junk. Unprogrammed flash. One company I know grades developers on the size of the program; a cynic there could do very well by dumping Moby Dick (1,172,046 bytes by my count) into memory. The system wouldn't work too well, or at all, but the project would beat all development records.
The second fastest way is to ship insanely-high quality code. Deming, Juran and the subsequent quality movement taught the manufacturing world that quality and speed go together. Quality stems from fabulous design that requires no rework; no rework means projects go out the door faster.
Alas, in the firmware world that message never resonated. Most projects devote half the schedule to debugging (which implies the other half should be named "bugging"). Typical projects start with a minimum of design followed by a furious onslaught of coding, and then long days and nights wrestling with the debugger.
Capers Jones1 studied 4000 late software projects and found that bugs are the biggest cause of schedule slippages. Benediktsson2 showed that, by using the right processes, one can build software to the very highest safety-critical levels for no additional cost compared to the usual crap. Bottom line: quality leads to shorter schedules.
Here are some tips for accelerating schedules:
1) Focus relentlessly on quality. Never accept anything other than the best. Don't maintain a bug list; rather, fix the bugs as soon as they are found. Bug lists always imply a bug deferral meeting, that time when everyone agrees on just how awful the shipped product will be. This is the only industry on the planet where we can (for now) ship known defective products. Sooner or later the lawyers will figure this out. A bug list produced in court will imply shoddy engineering... or even malpractice.
2) Requirements are hard. So spend time, often lots of time, eliciting them. Making changes late in the game will drastically curtail progress. Prototype when they aren't clear or when a GUI is involved. Similarly, invest in design and architecture up front. How much time? That depends on the size of the system, but NASA3 showed the optimum amount (i.e., the minimum on the curve) can be as much as 40% of the schedule on huge projects.
3) Religiously use firmware standards and code inspections. I have observed that it's about impossible to consistently build world-class firmware unless all of the code is written to a reasonable coding standard. Happily, plenty of tools are available that will check the code against a standard. And couple this practice to the use of code inspections on all new code. Plenty4 of research shows that inspections are far cheaper - and faster - than the usual debugging and test. In fact, testing generally doesn't work5 - it typically only exercises half the code. There are, however, mature tools that will greatly increase test coverage, and that will even automatically create test code.
4) The hardware is going to be late. Plan for it. When it finally shows up it will be full of problems. Our usual response is to be horrified that, well, it's late! But we know on the very first day of the project that will happen. Invent a solution. One of the most interesting technologies for this is virtualization: you, or a vendor, build a complete software model of the system. It is so complete that every bit of your embedded code will run on the model. Virtualization products exist, and vendors have vast libraries of peripheral models. I was running one of these on my PC, but it was a Linux-based tool, so I ran VMware to simulate the Linux environment. The embedded system was based on Linux, so the system was simulating Linux simulating Linux - and it ran breathtakingly well.
5) Run your code through static analyzers every night. That includes Lint, which is a syntax checker on steroids. Lint is a tough tool; it's one that takes some learning and configuration to reduce false positives. But it does find huge classes of very real, and very hard-to-find, bugs. Also use a static analyzer, one of those tools that does horrendous mathematical analysis of the code to infer run-time errors. In one case one of these tools found 28 null pointer dereferences in the 200 KLOC code of a single infusion pump that was already on the market.
6) Buy everything you can. Whether it's an RTOS, a filesystem or a protocol stack, it's always cheaper to buy rather than build. And buy the absolute highest quality code possible. Be sure it has been qualified by a long service life, or even better by being used in a system certified to a safety-critical standard. Even if your product is as nonhazardous as a TV remote control, why not use components that have been shown to be correct?
7) That last bit of advice applies to tools. Buy the best. A few $k, or even tens of $k, for tools is nothing. If a tool and the support given by the vendor can eke out even a 10% improvement in productivity, at a loaded salary of $150k or so that's $15k a year - the tool quickly pays for itself.
8) Use proactive debugging. OK - those last two words are my own invention, but it means assuming bugs will occur, and therefore seeding the code with constructs that automatically detect the defects. For example, the assert() macro can find bugs for as little as one thirtieth6 of the cost of conventional debugging.
9) Include appropriate levels of security. Pretty much everything is getting hacked. Even a smart fork7 today has Bluetooth and USB on board; that fork could be an attack vector into a network. Poor security means returns and recalls, or even lawsuits, so engineering effort will be squandered rather than invested in building new products. At the least, many products should have secure boot capabilities.
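The secure boot idea reduces to one gate in the boot flow: never jump to application code that fails verification. A real bootloader verifies a cryptographic signature (for example, ECDSA over a SHA-256 digest) against a key locked in ROM or fuses; the sketch below substitutes a toy checksum purely to show the control flow, and all names are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy integrity check standing in for a real cryptographic signature.
   A rotate-and-xor checksum is NOT crypto - illustration only. */
static uint32_t image_checksum(const uint8_t *image, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = ((sum << 1) | (sum >> 31)) ^ image[i];
    return sum;
}

bool boot_image_is_valid(const uint8_t *image, size_t len, uint32_t expected)
{
    return image_checksum(image, len) == expected;
}

/* Boot flow, sketched:
   if (!boot_image_is_valid(app, app_len, stored_reference))
       halt_and_blink();      // refuse to run unverified code
   jump_to(app);
*/
```

The engineering content is in the refusal path: a device that halts on a bad image is annoying; one that happily runs tampered firmware is an attack vector.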
Never have embedded systems been so complex as they are today. But we've never had such a wide body of knowledge about developing the code, and have access to tools of unprecedented power. It's important we exploit both resources.
1) Jones, Capers. Assessment and Control of Software Risks. Englewood Cliffs, N.J.: Yourdon Press, 1994.
2) Benediktsson, O. Safety Critical Software and Development Productivity. The Second World Congress on Software Quality, Yokohama, September 25-29, 2000.
3) Dvorak, Dan and a cast of thousands. Flight Software Complexity, 2008.
4) Wiegers, Karl. Peer Reviews in Software, 2001, and about a zillion other sources.
5) Glass, Robert. Facts and Fallacies of Software Engineering, 2002.
6) Briand, L., Labiche, Y., Sun, H. Investigating the Use of Analysis Contracts to Support Fault Isolation in Object Oriented Code. Proceedings of the International Symposium on Software Testing and Analysis, pp. 70-80, 2002.
Published September 16, 2013