Go here to sign up for The Embedded Muse.
The Embedded Muse
Issue Number 445, May 2, 2022
Copyright 2022 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com

   Jack Ganssle, Editor of The Embedded Muse

You may redistribute this newsletter for non-commercial purposes. For commercial use contact jack@ganssle.com. To subscribe or unsubscribe go here or drop Jack an email.

Contents

Editor's Notes
Quotes and Thoughts
Tools and Tips
Freebies and Discounts
Department of Redundancy Department
How Teams Shape Up
Failure of the Week
Jobs!
Joke For The Week

Editor's Notes


Tip for sending me email: My email filters are super aggressive and I no longer look at the spam mailbox. If you include the phrase "embedded muse" in the subject line your email will wend its weighty way to me.

Jon Waisnor poses a question about how we're coping with supply chain issues:

I'm hearing from former co-workers that there is, or has been, a rush to migrate embedded devices to more available microcontrollers and microprocessors. Over the past two years some parts either can't be obtained or have impossibly long lead times. Has there been any feedback from your readers and contributors on best practices to handle these issues? What are the trade-offs to get a legacy product flowing again when you've been told there are no more chips available? Treat it like a new design? Like an end-of-life issue? Buy anything you can get your hands on, even from dubious suppliers?

Quotes and Thoughts

Brooks' law: Adding manpower to a late software project makes it later.

Highly recommended is his book The Mythical Man-Month. Though dated, it is still full of wisdom.

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.
Freebies and Discounts

This month's giveaway is an STM32F411RE Nucleo-64 development board, an mbed-enabled evaluation board for the 32-bit Arm Cortex-M4 MCU. As it has a debug port, it's perfect for learning to work with Arm Cortex-M4 MCUs.

Enter via this link.

Department of Redundancy Department

I believe it was the humorist group Firesign Theatre who, in their fun 1970 record album Don't Crush That Dwarf, Hand Me the Pliers, came up with the notion of a Department of Redundancy Department.

The DoRD is alive and well in our business. Though too much code has too few comments, it's not uncommon to find developers duplicating comments in code. From some real code:

/*
 This section returns TRUE if A==B
*/ 
ret=FALSE;
if ( A == B) ret=TRUE; // if A == B return TRUE
return ret;

The comments are correct and clear. The problem is that if the code changes, the comments have to change in more than one place. Odds are, they won't. At least one of the comments will be wrong, and the poor maintainer will be forced to examine the code to decide which is accurate. A more complex example could induce Excedrin Headache number 19 (i.e., what the heck was this guy thinking?).
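For contrast, here is one way the fragment might read instead. This is only a sketch with hypothetical names; the single comment records the intent, and the code is left to speak for the mechanics:

#include <stdbool.h>

/* Tell the caller whether the two readings agree. Only the yes/no
   answer matters to callers; how agreement is decided stays here. */
static bool readings_match(int a, int b)
{
    return a == b;
}

Change the comparison and there is exactly one comment to revisit, and it probably still reads correctly.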

In a real-world example I ran across some years ago, the source file comprised about 2200 lines, 2000 of which were a header block that described every aspect of the very complex code's operation. Included was a link to a paper describing the algorithm in detail. One could make an argument that the link alone would suffice. Or perhaps an abbreviated outline of the algorithm, with the link. You can be sure that a maintainer would have placed a heavy weight on the page-down key to skip the details and get to the code itself.
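As a sketch of what that abbreviated header might look like (every detail below is invented for illustration, including the file name and algorithm outline):

/*
 * filter.c - band-limiting filter for the sampled input channel.
 *
 * Outline: window each block of samples, transform it, zero the
 * out-of-band bins, inverse-transform, and overlap-add the result.
 * Full derivation and error analysis: <link to the paper>.
 */

A dozen lines of orientation plus the pointer to the paper tell a maintainer where to look without burying the code.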

Regular readers know I'm a comment zealot. I prefer to write all of a function's comments before entering any C statements. Given clear comments that show the designer's intent, there's not a lot of work left in filling in the code.
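A sketch of what that looks like in practice; the function is hypothetical, and the comments were written before any of the statements beneath them:

#include <stddef.h>

/* Compute the 8-bit two's-complement checksum of a buffer, chosen so
   that summing the buffer plus the checksum yields zero. */
static unsigned char checksum8(const unsigned char *buf, size_t len)
{
    /* Accumulate the byte sum, letting it wrap at 8 bits. */
    unsigned char sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];

    /* Negate so the receiver's check (data + checksum == 0) holds. */
    return (unsigned char)(0u - sum);
}

The comments carry the intent (what property the checksum guarantees); the statements just carry it out.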

Most of our code is a documentation desert. We know that great code will have a comment density (i.e., the ratio of lines containing comments to non-blank source lines) exceeding 60% [1], yet that's rarely achieved. But density alone is a poor metric; those comments must be meaningful, concise and useful, providing information beyond the bare ideas embodied in the C statements.
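If you're curious about your own numbers, a rough counter is a ten-minute job. This is a naive sketch, assuming C99 and a single file named on the command line; it treats any line containing a comment marker, or falling inside a block comment, as commented, and ignores awkward cases like markers inside string literals:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.c\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r");
    if (!f) {
        perror(argv[1]);
        return 1;
    }

    char line[1024];
    int in_block = 0, nonblank = 0, commented = 0;

    while (fgets(line, sizeof line, f)) {
        int blank = (strspn(line, " \t\r\n") == strlen(line));
        /* A line counts as commented if we are inside a block comment
           or it contains a comment marker; good enough for a ballpark. */
        int has_comment = in_block ||
                          strstr(line, "//") != NULL ||
                          strstr(line, "/*") != NULL;
        if (!blank) {
            nonblank++;
            if (has_comment)
                commented++;
        }
        if (strstr(line, "/*") != NULL)
            in_block = 1;
        if (strstr(line, "*/") != NULL)
            in_block = 0;
    }
    fclose(f);

    if (nonblank > 0)
        printf("comment density: %.1f%% (%d of %d non-blank lines)\n",
               100.0 * commented / nonblank, commented, nonblank);
    return 0;
}

Run it over a few of your own files and compare the result against that 60% figure.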

There's an old joke that is funny, in the sense that a train wreck is awful yet eyeball-grabbing: Never include a comment that will help someone understand your code. If they can understand it, they don't need you.

Consign that meme to the joke file!

[1] Thomas Drake, "Measuring Software Quality: A Case Study," IEEE Computer, November 1996.

How Teams Shape Up

How is your team doing? Are there problems? What could you improve?

Over the decades I've been called to evaluate many firmware engineering teams around the world. Generally management is unhappy; projects are late and buggy, or engineering is not meeting some corporate goals. Management wants to effect some sort of change, though they're not sure what. They typically don't understand that I can't help unless I have quite a bit of information about the team and its work. Over the years I've developed an outline of questions designed to garner the insight needed to make recommendations for change.

I don't do this sort of thing anymore, but you may find these questions useful to get some insight into your own team's behaviors and processes.

You'll note there's a lot of overlap between questions asked of team leads and of the team members. I have found that, too often, the team lead/VP/manager does not know what is really going on in the trenches. Analyzing this dichotomy gives quite a bit of insight into the efficacy of an engineering department.

For team leads:

  1. Meta-questions
    1. Why am I here?
    2. What would you like to get out of this?
    3. Describe your role and job
      1. What sort of people do you manage?
      2. What work do they do?
    4. What do you think we can really accomplish?
    5. In one sentence, what is the biggest problem you're concerned about?
  2. Products
    1. Describe
  3. State the problems as you perceive them. For each:
    1. How bad is it? (metrics)
    2. What are the trends?
    3. How is it affecting the profits, the product, customers, engineers, management?
    4. How does the team perceive these – are they considered problems?
    5. Are you more or less alone in these concerns?
    6. How does management perceive these?
    7. Is there any interest in any sort of change?
      1. If yes what has happened or not and why?
      2. If no, why not?
  4. Is there a current project you are especially worried about?
    1. What is the project?
    2. What is your frank assessment of the present status? How does that compare to the promises made?
    3. What is the desired best-case outcome?
    4. Worst case?
    5. What are you predicting for the outcome and why?
    6. Using any metric, how big is it now?
    7. How big do you expect it to be?
    8. Tell me about the team
      1. Size
      2. Strengths/weaknesses
      3. OT?
      4. Quiet working conditions?
      5. What sort of training do they get?
  5. Organization
    1. Describe how the team or group is organized. Who are the players?
    2. What is management's take on the problems you've described?
    3. What management support does, and could, exist for change?
    4. Will management spend money on tools and the like?
    5. How do you manage schedules?
      1. How managed?
      2. Considered a joke?
      3. Are they always kept up-to-date?
      4. How are things going?
  6. Development strategy currently used
    1. Describe it
      1. How much of it is really used in practice?
      2. Are you happy with it? Why/why not?
      3. Is management happy with it?
      4. How does the team feel about it?
    2. If chaos, use guided discussion to understand it.
    3. Have you used or flirted with other development strategies?
      1. How devotedly?
      2. What was the outcome?
    4. What sort of documentation is produced?
      1. Maintained?
      2. How used?
      3. Doxygen? If so, how used?
      4. What is delivered to the customer?
      5. Get some example docs
    5. Tell me about requirements here.
      1. Formal or consistent way used to define/describe them? How documented?
        1. Show me the requirements document
          1. Complete
          2. Correct
          3. Feasible
          4. Necessary
          5. Prioritized
          6. Unambiguous
          7. Verifiable
      2. Tools used
      3. Are requirements tracked to code? To end-user (manuals, etc)?
      4. Change:
        1. Have you had a lot?
        2. Do you expect much from:
          1. Marketing input
          2. Engineering discoveries
          3. Customer
        3. Change control
        4. Fed back to requirements
        5. Are requirement changes measured?
    6. Modelling?
      1. Prototyping? Any unknowns that need to be figured out?
    7. Metrics
      1. Are any being used?
      2. Tell me about them. What are they, when used, consistency of use
      3. Learning anything from them?
    8. How do you deal with bugs?
      1. Measured? How?
      2. Bug list? How big? How big of a problem is it?
      3. Ship with known bugs?
    9. Is security at all a consideration?
      1. If so, what actions are being taken
      2. What tools being used
    10. Do you use the following? Religiously? If not, why not, and is there any interest in change? General questions for each:
        1. Which one?
        2. Outcome? – metrics?
        3. Why not using it?
        4. Why not investigating it?
      1. VCS
      2. Bug database
      3. Standards
        1. Really used?
          1. How do you ensure they are used? Tools?
        2. Show me the FSM
      4. Max function sizes
      5. Cyclomatic complexity
      6. Inspections
        1. How done
        2. When done
        3. Efficacy
      7. Lint
      8. Static analysis
      9. Testing: How do you do it? Systematized?
        1. Formal test procedure?
          1. Get a copy
        2. Test generation tools
        3. Regression
        4. Unit test
        5. Automated tests
        6. Coverage used?
        7. Simulation?
        8. If h/w is a problem, virtualization?
        9. Is any sort of test library used? How maintained?
        10. Do you track error-prone modules?
      10. Are there real-time issues? Do you need to guarantee performance?
        1. What tools or approaches used to measure this?

For team members:

  1. General
    1. Tell me about your background.
    2. What's it like to work here? Get heard? Respect? Fun?
  2. State the problems as you perceive them. For each:
    1. How bad is it? (metrics)
    2. What are the trends?
    3. How is it affecting the product, customers, engineers, management?
    4. How does the team perceive these – are they considered problems?
    5. Are you more or less alone in these concerns?
    6. How does management perceive these?
    7. Is there any interest in any sort of change?
      1. If yes what has happened or not and why?
      2. If no, why not?
  3. Current project (list name):
    1. What is your frank assessment of the project's present status? How does that compare to the promises made?
    2. What are you predicting for the outcome and why?
    3. What if anything would you like to see changed and why?
  4. Development strategy currently used
    1. Describe it
      1. How much of it is really used in practice?
      2. Are you happy with it? Why/why not?
      3. Is management happy with it?
      4. How does the team feel about it?
    2. If chaos, use guided discussion to understand it.
    3. Have you used or flirted with other development strategies?
      1. How devotedly?
      2. What was the outcome?
    4. What sort of documentation is produced?
      1. Maintained?
      2. How used?
      3. Doxygen? If so, how used?
      4. What is delivered to the customer?
      5. Get some example docs
    5. Tell me about requirements here.
      1. Formal or consistent way used to define/describe them? How documented?
        1. Show me the requirements document
          1. Complete
          2. Correct
          3. Feasible
          4. Necessary
          5. Prioritized
          6. Unambiguous
          7. Verifiable
      2. Tools used
      3. Are requirements tracked to code? To end-user (manuals, etc)?
      4. Change:
        1. Have you had a lot?
        2. Do you expect much from:
          1. Marketing input
          2. Engineering discoveries
          3. Customer
        3. Change control
        4. Fed back to requirements
        5. Are requirement changes measured?
    6. Modelling?
      1. Prototyping? Any unknowns that need to be figured out?
    7. Metrics
      1. Are any being used?
      2. Tell me about them. What are they, when used, consistency of use
      3. Learning anything from them?
    8. How do you deal with bugs?
      1. Measured? How?
      2. Bug list? How big? How big of a problem is it?
      3. Ship with known bugs?
    9. Is security at all a consideration?
      1. If so, what actions are being taken
      2. What tools being used
    10. Do you use the following? Religiously? If not, why not, and is there any interest in change? General questions for each:
        1. Which one?
        2. Outcome? – metrics?
        3. Why not using it?
        4. Why not investigating it?
      1. VCS
      2. Bug database
      3. Standards
        1. Really used?
          1. How do you ensure they are used? Tools?
        2. Show me the FSM
      4. Max function sizes
      5. Cyclomatic complexity
      6. Inspections
        1. How done
        2. When done
        3. Efficacy
      7. Lint
      8. Static analysis
      9. Testing: How do you do it? Systematized?
        1. Formal test procedure?
          1. Get a copy
        2. Test generation tools
        3. Regression
        4. Unit test
        5. Automated tests
        6. Coverage used?
        7. Simulation?
        8. If h/w is a problem, virtualization?
        9. Is any sort of test library used? How maintained?
        10. Do you track error-prone modules?
      10. Are there real-time issues? Do you need to guarantee performance?
        1. What tools or approaches used to measure this?
  5. General
    1. What's the schedule pressure like? OT?
    2. Are you familiar with and/or use:
      1. Agile – which one
      2. PSP
      3. TSP
      4. CMM
    3. If no metrics:
      1. Estimate the current size of the code base in LOC (or other metric)
      2. Estimate your productivity in LOC/anything
      3. Estimate your post-compile bug rate
      4. Estimate your defect potentials
      5. Estimate the cost of an LOC
  6. Organization
    1. If I were your boss and you could talk very candidly, what would you say? Ups? Downs?
    2. If I were your boss's boss and you could talk very candidly, what would you say?

 

Failure of the Week

Sergio Caprile found a deal on a battery. "Economy International Shipping" was only $100 million!

From Craig Ogawa:

Ian Freislich sent this baffling status update:

Have you submitted a Failure of the Week? I'm getting a ton of these and yours was added to the queue.

Jobs!

Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words. There is no charge for a job ad.

Joke For The Week

These jokes are archived here.

From Dave Fleck:

Caught my son chewing on electrical wires.
So, I grounded him.
He didn't put up any resistance, which is unusual given his short fuse.
He's doing better currently and conducting himself properly.
I think it was a positive learning experience for him.
But perhaps he might need a couple of new outlets for all his energy.

I got a charge out of this. I hope others were just as positive.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster.