Go here to sign up for The Embedded Muse.
The Embedded Muse
Issue Number 293, October 19, 2015
Copyright 2015 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com


You may redistribute this newsletter for noncommercial purposes. For commercial use contact jack@ganssle.com.

Editor's Notes

I'm now on Twitter (for better or worse) - follow me as @jack_ganssle.

Normally the Muse goes out the first and third Monday of the month (with occasional issues skipped due to the hardworking Muse's need for a holiday from time to time). I'll be traveling for the entire month of November, so the next issue will be on December 7.

A number of people wrote in about the debounce code in the last Muse to suggest profiling switches first to see how long they bounce. Vendors don't characterize bounce times, so all we can go on is empirical evidence. I've taken a lot of data for various switches; it's summarized here.
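For those who missed it, the idea is simple: sample the switch periodically and accept a new state only after enough consecutive identical reads to span the measured bounce time. Here's a minimal sketch of that counting approach - the 10 ms poll rate, the pin-read function, and all names are illustrative assumptions, not the code from Muse 292:

    #include <stdint.h>
    #include <stdbool.h>

    /* Poll every 10 ms; with a measured worst-case bounce of ~50 ms,
       five consecutive identical reads means the switch has settled.
       Size these from YOUR switch's measured bounce, not a guess. */
    #define STABLE_READS 5

    extern bool raw_switch_read(void);  /* hypothetical: reads the input pin */

    /* Call from a 10 ms timer tick; returns the debounced state. */
    bool debounced_switch(void)
    {
        static bool debounced = false;
        static uint8_t count = 0;

        if (raw_switch_read() == debounced) {
            count = 0;                  /* no change pending */
        } else if (++count >= STABLE_READS) {
            debounced = !debounced;     /* stable long enough; accept it */
            count = 0;
        }
        return debounced;
    }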

Quotes and Thoughts

The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair. - Anonymous

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

In the last issue I asked readers how they deal with geographically dispersed teams. Here are your thoughts!

Chris Mitchell wrote:

With regard to managing embedded teams across long distances, I've found that two free tools greatly improve our team's ability to work remotely without drastic increases in communication overhead. Slack is an incredible chat client that lets team members have focused discussions in project-specific channels. The second is Trello, which we use as a project management tool for establishing tasks, goals, and milestones that can be claimed by team members or assigned by management. We use Trello to make "scrum-like" boards where a physical board doesn't make sense for teams in multiple locations.

Brian O'Connell wouldn't want to work with a Baltimorean like me:

Have done this, regretted it, and will not allow such a team for any future projects.

The last project used a software dev in Phoenix (we are in San Diego County), and the DVT was done ahead of schedule. Even so, my limit for team members is under 500 km: single-day equipment exchange is simple, and there's at most a single time-zone difference.

And do not like to use them east-coasters, they have really weird accents.

Peter Heath contributed:

In response to your Muse 292 query, my $0.50, for what it's worth:

I found three things critical to effective team management on dispersed projects with limited hardware resources and limited brain resources (key engineers available at only one location):

  1. Travel. Don't be afraid to send a person or two every month to the other location(s). Whether they are key engineers, PMs, contracts people, or technicians, be sure they visit the other team so they can meet the people, see their working conditions, and understand the obstacles they face. Depending on the length and size of the project, every team member should travel at least once a year. It goes both ways: it's good team building, it helps avoid miscommunication, and I've never made a visit without coming away feeling my time was well spent.
  2. One set of requirements, one change tracking system and one configuration management repository. Invest in tools that can be accessed by everyone on the team.
  3. Make sure everyone is using the EXACT same tools, be it project scheduler, compilers, computers (including OS), or test equipment (hardware and software). I call it project configuration management. When something works in only one place, your chances of figuring out why are far better if you've done this.

I'm certain there are more, but if you look at these, the key underlying theme is simplifying communication.

Lou Calkins sent this:

Obviously, with team members literally all over the globe, live interaction can become practically impossible. But the same problem can occur when team members have widely varying schedules of availability. In our development process, reviewing is a critical part. With team members scattered geographically, or with staggered available time slots, reviews must sometimes be conducted offline (i.e., not live in real time).

One tool we have found very useful for reviews is Code Collaborator (CC) from SmartBear. Although our team members are geographically only one time zone apart, we still have different things going on at each location that can cause scheduling difficulties. Since CC facilitates non-live reviews, it lets us review firmware (and software, circuit schematics, mechanical designs, even written documentation) whenever each team can get time to log in and use it. CC records comments (which cannot be redacted or removed once entered) and provides a system of accountability to ensure revisions and corrections are done properly. I highly recommend it.

Ideally, all team members should be continuously busy on a project, without time spent waiting for something or someone. That means each member needs to divide their tasks so there is always something to work on while waiting on other tasks. It might be tempting to think such problems would be eliminated if all team members were simply in one location, but a problem with that can be frequent interruptions, which cause both discontinuity of thought and delays. So whether teams are in one location or not, it probably goes without saying that better task planning is a good thing. The need for it is just more obvious when teams are geographically dispersed.

Freebies and Discounts

In the last Muse I reviewed The Art of Electronics, third edition. This month I'll give away my copy of the second edition. It's a little tattered, but is packed full of wisdom about designing analog and digital circuits. Plus, a little mystery insert I've always stored between its pages is included.

The contest will close at the end of October, 2015. It's just a matter of filling out your email address. As always, that will be used only for the giveaway, and nothing else. Enter via this link.

Survey Results

Thanks to everyone who filled out the survey of firmware development practices. The answers were certainly interesting, and depict an industry that is maturing.

First, teams are small! The average size is 5.9 firmware developers, but the median - 3 - is more representative.

Number of hardware and firmware developers

What approaches do we use? They range from "divine intervention" to CMM5.

Methodology used to develop firmware

2.5% claimed they used an "other" methodology. Comments included:

  • Internally-developed method.
  • Anarchy
  • Mixture of agile (many agile ideas/policies used, but not SCRUM) & plan-driven (V-model) approaches
  • We fall somewhere between no defined process and an unholy Agile/Waterfall combination of techniques, but attempt to have a requirements document (that we rarely look at after it's written).  We then move forward and pivot as needed to get things done, while incorporating features not thought of (but needed) as we go.

Lint is a powerful bug-swatting tool. It's not particularly popular:

Use of lint with firmware

I adjusted the data on static analyzer use based on which tool was reported, since in some cases respondents named a tool that isn't a static analyzer (e.g., PC-Lint). Static analyzers, which are confusingly named, examine how the code would behave if it were run, to find things like buffer overflows. Lint and similar tools check syntax and conformance with standards.

Use of static analyzers with firmware
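A contrived sketch of the lint-versus-analyzer difference: every line below is legal C, so a syntax-and-standards checker has little to complain about, while an analyzer that tracks values across calls can flag the potential overflow (the function and names are hypothetical):

    #include <string.h>

    #define BUF_LEN 8

    void greet(const char *name)
    {
        char buf[BUF_LEN];
        /* Legal C, but overflows buf whenever strlen(name) >= BUF_LEN.
           Spotting this requires reasoning about run-time values - the
           static analyzer's job, not the syntax checker's. */
        strcpy(buf, name);
        /* ... */
    }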

Interestingly, of those who use static analysis on most modules, 44% reported also using Lint on most modules, and an equal number rarely use it.

Are developers happy with these very expensive static analysis tools?

Satisfaction with static analyzers with firmware

Which tools are people using?

Static analyzers used with firmware

So, how does satisfaction correlate with the tool used?

Cppcheck:

  • 29% report it provides real value
  • 71% get decent but not stellar results
  • 0% think it's a waste of money (of course, Cppcheck is free)

Klocwork:

  • 31% report it provides real value
  • 62% get decent but not stellar results
  • 8% feel it was a waste of money

Coverity:

  • 50% report it provides real value
  • 31% get decent but not stellar results
  • 19% think it was a waste of money

LDRA:

  • All respondents felt it provided decent but not stellar results

42% use some sort of requirements tool. A huge range of tools was cited, but with the exception of those in the following graph, none scored more than a couple of percent.

Requirements tools used with firmware

Complexity metrics give two important results: they are one quantitative measure of a function's maintainability, and they indicate the minimum number of tests a function must undergo. Yet few of us measure complexity:

Complexity of firmware
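For the arithmetic: cyclomatic complexity is the number of decision points in a function plus one, which is also the number of linearly independent paths through it, and hence the minimum number of test cases. A hypothetical example:

    /* Three decision points (the loop condition and two ifs), so the
       cyclomatic complexity is 3 + 1 = 4: at least four test cases
       are needed to exercise the independent paths. */
    int clamp_sum(const int *vals, int n, int limit)
    {
        int sum = 0;
        for (int i = 0; i < n; i++) {   /* decision 1 */
            if (vals[i] > 0)            /* decision 2 */
                sum += vals[i];
            if (sum > limit)            /* decision 3 */
                return limit;
        }
        return sum;
    }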

Only 20% of respondents report they measure the cost to develop the code.

Metrics collected with firmware

It's one thing to collect metrics. Are they used?

Are the firmware metrics used

Firmware standards are increasingly important, and in many regulated industries are required. Here's the state of the industry:

Use of firmware standards

7% of respondents reported doing pair programming.

24% reported having dedicated testers on their team (exclusive of the QA group).

Which tests are being run?

Use of firmware tests

Agile processes mandate the use of automatic tests, which are a good idea in general, though they can be difficult to implement.

Use of automated tests

Modeling gets a lot of press. Who uses it?

Use of models with firmware

Here are some of the comments from respondents:

We use the Mantis bug tracker for ALL defect logging, commentary, fixes, and reporting, with well-defined versions and release notes. In the early stages Mantis is used to load requirements, and work packages are done according to the Mantis feature requests (closed out when complete; from that point on defects are tracked instead of features). This allows feature requests to be inserted at any time, as requirements may change during development.



I feel like I should apologize for our code.


Processes are defined but not enforced. Those who follow them have good, consistent (boring) results, on time. Those who don't follow them rely on heroics.


Our firmware team was one of the most talented groups I have worked with. We completely lacked support and direction in terms of management as our sole investor pushed out the VP of engineering early in the program. Yet we did a great job of addressing the features and defect issues with good prioritization and shipped the product. Time will tell if the market provides success.

All of the code we generated had a cyclomatic complexity of less than 20, and typically far less. I think the most interesting part is that the two more junior engineers on the team had never heard of cyclomatic complexity, and so didn't write code with that measure in mind. They just wrote maintainable code as a matter of practice.

While I tried to bring in static analysis tools, there was no way I was going to justify the large price tag to our investor.

I'm not sure the tools would have helped in this case, as most of the defects we encountered were not code-based but stemmed from hardware interaction and network traffic/congestion. These were system issues that code analyzers would not catch. Nevertheless, in prior projects (a medical device) they proved effective in finding bugs that would otherwise have been overlooked.


Despite our trivial process, we're very disciplined, professional and ultimately successful, in the sense that we aren't too late and what we field is largely bug-free, even though it's a "programmable" product (in the sense that Visio is highly "programmable": one can drop any combination of shapes and connect/modify them innumerably).

Which isn't to say we couldn't improve our process and be more efficient & effective...


In previous jobs, some of these questions would be answered differently. I recently spent a bit of time trying to find a well-designed solution for requirements, software development, and testing. Basecamp is a great, intuitive online project tool that's terrific for supporting agile development and team discussion. JIRA is powerful, but complex and not easy to use. We've tried tools like Redmine too, but requirements and change tracking are difficult in them. We have DOORS for requirements, but it's way too expensive and not simple to use either. If there were an all-inclusive web tool with Basecamp-like interfaces, that would be awesome.


DO-178B (nominal) customer.

No, these guys don't unit-test; considering the wretched JTAG debugger ("emulator") they're stuck with, that's no surprise. Years ago (in another company) we did some semi-formal unit testing by capturing debug sessions, but it was a lot of work to clean that up into a repeatable form - and even more work as the unit-test script changed every time the code did. Without unit tests, I see a lot of "silent bugs," where a faulty computation is overwritten by something subsequent, or the data is ultimately unused. DO-178 doesn't like "dead" code that is never executed, but "undead" code, where the results are voided, seems equally questionable.

I'd like to see more asserts and other robustness mechanisms, but project leads complain that they're too expensive for the "formal verification" we do. Thank God the equipment is installed as redundant sets, where off-in-the-weeds firmware is overruled by a normally running buddy.

The "runaway Toyota" issue might have been mitigated by such redundancy. I had a thought that something stronger than simple redundancy could have a place: namely, the computerized analogue of a throttle-return spring. A second computer constantly tries to defeat the control output; only when the control computer is properly refreshing its output does the control succeed.
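That "undead code" observation is worth a picture. A hypothetical sketch of a result that is computed and then silently voided - without a unit test on scale_raw(), a bug in it never shows (both helper functions are made up):

    extern int adc_read(void);        /* hypothetical helpers */
    extern int scale_raw(int raw);

    int read_sensor(void)
    {
        int value = scale_raw(adc_read());  /* possibly faulty... */
        value = adc_read();                 /* ...but voided here */
        return value;
    }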


Based on the extent to which we use processes, it is hard to claim that we do "software engineering". Hacking would be a more accurate description.


I'm doing Maker stuff after 30+ years as a software engineer, specializing lately in DevOps. I discovered after leaving my last company that I could replicate most of their productivity and dev stack for under $200. The business productivity and continuous integration stack is from Atlassian ($60 for JIRA/Agile, Confluence/Gliffy, Bamboo and Stash). Requirements are captured in the wiki (text and UML), from which they flow to tickets and then to scrums. Integration of all components (including CI and version control) means that Git debug branches are created directly from bug tickets, along with a private test pipeline in Bamboo.

Metrics are analyzed through a combination of Bamboo reports and Splunk dashboards. IDEs are Xcode and Eclipse targeting Raspberry Pi and Arduino.


I stand on the edge of a valley. My side is called V-Model. The other side is Agile. I lead the effort to build the bridge that crosses this valley.


When you ask the right questions, things look pretty dismal. However, a firmware development standard is under development, supporting code reviews, use of Lint, RSM (metrics), and AStyle (formatting). Code review will initially be manual, but eventually we'll employ some SmartBear product like Code Collaborator. I've looked into TDD; I even met Jim Grenning at ESC Boston in 2011 (met you, too, for that matter). Integrating TDD is a big paradigm shift for us, and I'm not sure it scales completely to the complexity of our application. Maybe that's just an excuse to avoid it, but it seems a steep uphill climb.

I recognize the need to improve our process and am committed to it. It just takes time. Fortunately, and perhaps in no small part due to the talent of our team, we haven't suffered customer discontent due to any major bugs in the firmware. The post-shipment bugs we've found have been minor and mostly inconsequential to our customers' applications.


Management decreed that all code checked in must be paired or reviewed (we use SmartBear Collaborator). We are theoretically allowed to choose which route to go for each story, but in practice the project manager pushes everyone towards pairing. The younger ones seem fine with that, the older ones less so.


Unfortunately, in my experience, management rarely has any firmware experience and treats firmware as the black box in the development cycle. With so little insight, they fall back on assuming that schedules that don't agree with their own estimates are inaccurate. Ultimately, testing and best practices are abandoned to meet the breakneck schedule handed down from on high.


We could be much more disciplined. It's very hard to find tools that work for our needs - most seem to have a real or perceived overhead that turns the team off, even though we have actively searched for a good tool or set of tools to cover our design, development, and testing needs.


We work on safety critical devices (commercial drones), and while our processes are still maturing we pride ourselves on our test automation. We require 100% unit test LOC coverage and use a continuous integration server to run unit and functional/integration tests with every commit. Our functional/integration tests include L1 (basic single board smoke tests) through L3 (full system testing) as well as a dedicated integration team which goes out and regularly flies and tests released kits.

In terms of development, we use Scrum with sprints, but have tended toward a more Kanban style of pulling in tasks as needed. We're definitely still trying to find our feet in terms of development methodologies, but we've got a great team who are all motivated to put out a great product and to do so with a pragmatic approach to our processes. We are willing to use whatever works best for us, but it just takes time to find out what that is.

On a more specific note, we've started using only static asserts in our code. Any run-time error checking must now be done so that it remains in production code and can be handled gracefully (as much as possible, anyway).
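For readers who haven't used them, a compile-time assert costs nothing in the shipped image - the check runs in the compiler, not in the product. A minimal C11 sketch (the struct and the 8-byte expectation are made up for illustration):

    #include <assert.h>   /* C11 defines static_assert here */
    #include <stdint.h>

    typedef struct {
        uint32_t id;
        uint8_t  flags;
    } record_t;

    /* Fails the build, not the product in the field, if the
       layout assumption is ever violated. */
    static_assert(sizeof(record_t) == 8, "record_t layout changed");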


God, this was embarrassing.

Jobs!

Let me know if you're hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words. There is no charge for a job ad.

Joke For The Week

Note: These jokes are archived at www.ganssle.com/jokes.htm.

Lorenzo Meneguz sent the following:

Joke from Italy! I found this online job ad:

(Image: the Italian job ad)

A rough translation is: "For a company in the industry we are looking for a boiler/refrigerator maintenance technician with required electromechanical experience in the job of 374 years and willingness to move around the area".

In Italy right now it's very hard to find a job if you have no experience in the field, but I guess they won't find anyone that old, let alone with that much experience.

On Italian keyboards, if you don't use the numeric keypad, the slash requires [Shift] + [7]. The intended meaning was 3/4 (that is, 3 or 4, not 0.75) years; miss the Shift key and "3/4" comes out as "374".

Advertise With Us

Advertise in The Embedded Muse! Over 25,000 embedded developers get this twice-monthly publication.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster.