Go here to sign up for The Embedded Muse.
The Embedded Muse
Issue Number 320, January 1, 2017
Copyright 2017 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com


You may redistribute this newsletter for non-commercial purposes. For commercial use contact jack@ganssle.com.

Editor's Notes

Happy New Year to Embedded Muse readers!

Quotes and Thoughts

"To focus on the visible at the expense of the essential is irresponsible." - Bertrand Meyer

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

Paul Carpenter sent a link to the LogicCell, a really nifty way to learn about logic using real gates.

Karim Yaghmour sent some links:

Hope you're doing well. I just received your latest Embedded Muse newsletter, and as I was reading through it I thought of a YouTube channel my friend Dave Anders (in CC) started a few weeks back called Dave's Dev Lab: https://www.youtube.com/channel/UCzdegKr4LhK4ypZqvA0R8mw

Dave aims to provide no-nonsense advice and tricks for professionals looking for information about development tools and such.

Star Simpson sent a link to a story about a mysterious circuit that appeared to keep RAM alive without power for long periods of time.

Freebies and Discounts

This month's giveaway is a Silicon Labs Thunderboard Sense. It's billed as a "feature packed development platform for battery operated IoT applications. The mobile app enables a quick proof of concept of cloud connected sensors. The multi-protocol radio combined with a broad selection of on-board sensors, make the Thunderboard Sense an excellent platform to develop and prototype a wide range of battery powered IoT applications." I bought it hoping to play with the thing but just don't have the time.

Enter via this link.

Comments on IoT Security

Last issue's thoughts about securing the IoT elicited some interesting responses from readers.

Don Herres had some useful thoughts and links:

Regarding Security in the IoT World, I think it will fall back on the one group that wants it least: the ISPs.  In the early days of viruses, they were the ones forced to offer free anti-virus software (the carrot) and then to shut down those who ignored the problem and whose computers became zombie spam servers (the stick).

In today's world, they will end up shutting down the ports left open by careless, or mal-intentioned, component manufacturers.  They already typically block ports 20 & 80.  The next step will be to block unencrypted FTP and Telnet access (just in case you felt compelled to give your passwords away).

There is a good article at https://securityintelligence.com/mirai-evolving-new-attack-reveals-use-of-port-7547/ and, from https://www.us-cert.gov/ncas/alerts/TA16-288A, this advice:
• Monitor Internet Protocol (IP) port 2323/TCP and port 23/TCP for attempts to gain unauthorized control over IoT devices using the network terminal (Telnet) protocol.

• Look for suspicious traffic on port 48101. Infected devices often attempt to spread malware by using port 48101 to send results to the threat actor.

Blocking all of these for the average residential connection may be necessary.  If your device does not work, then the complaints will start popping up on Amazon and eBay.

Doug Gibbs had an idea for making a business from this:

A colleague and I had what may be an interesting slant on IoT security. This is one of our get-rich ideas, but I doubt either of us will be in a position to take advantage of the opportunity.

We worked in the industrial instrument industry. You cannot sell factory process equipment without the Factory Mutual (FM) mark (http://www.fmglobal.com/). FM provides a standard, and your product must pass. The standards relate mostly to fire safety and the physical device; some relate to intrinsic safety, which touches the electronics. FM is an insurance company. No one will buy a non-FM device, because they could not get insurance on their factory.

So, if there are large penalties for building a device that can be used in an attack, the companies building the devices will need insurance. This creates a need for standards and testing for device security. Without getting the Secure Home Internet Thing seal (the acronym fits well), selling the device leaves the manufacturer open to attack lawyers.

Our idea was to provide the testing, for large fees. The market will provide the motivation, and the insurance companies will give a financial incentive to build devices that are correct and secure.

Stephen Bernhoeft had an interesting idea:

To avoid the issue of "well-known passwords" (esp. on routers), why not require (by legislation) that manufacturers provide a unique default username/password (on a printed label) for consumer devices having an internet connection?

It would cost a little more initially, as each device needs factory programming with the given password.  Once it becomes the norm, however, it would involve no extra cost.
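One way firmware can back that up is to refuse to fall back to a shared default at all. Here's a minimal sketch of the idea - the record layout, flash address, and names are purely hypothetical:

#include <stdint.h>
#include <string.h>

/* Hypothetical per-unit provisioning record, written into a spare flash/OTP
 * page at factory test. The same password is printed on the unit's label. */
typedef struct {
    uint32_t magic;            /* set only when the page has been programmed */
    char     username[16];
    char     password[24];
} unit_credentials_t;

#define PROVISION_MAGIC  0x50524F56u    /* "PROV" */
#define PROVISION_ADDR   0x0803F800u    /* assumed address of the spare page */

/* Copies the factory-programmed credentials, or returns -1 if the page was
 * never programmed - in which case the device should refuse remote logins
 * rather than fall back to a default shared by every unit shipped. */
static int load_default_credentials(unit_credentials_t *out)
{
    const unit_credentials_t *prov = (const unit_credentials_t *)PROVISION_ADDR;

    if (prov->magic != PROVISION_MAGIC)
        return -1;

    memcpy(out, prov, sizeof *out);
    return 0;
}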

Comments on Comments on Comments

A lot of people had comments about my comments on comments in the last issue. I thought there would be some responses advocating for no, or limited, commenting but strangely there were none.

Here are some of them. I shamelessly stole the subject line for the title of this section from Rod Main's email:

In "The Embedded muse 319" you wrote "I've talked with several engineers recently who told me they don't comment. Ever. Their code is completely self-documenting so comments aren't needed. One argument is that the code inevitably drifts away from the comments, and wrong comments are worse than none at all." This, as you said, is completely unprofessional. The argument that "the code inevitably drifts away from the comments" is not, IMHO, a valid one. The comments should have been changed to reflect what the code is being changed to do at the same time. However, if someone has a mind-set that they don't comment then clearly changing comments is also going to be beyond their ability too. They are the reason that the code drifts from the comments.

Self-documenting code?

a= out_pr * scale;  // scale the input using the calibrated coefficient
if(Mark2)           // mark2 board components mean there is a
                    // -2.6mVolt offset which must be corrected for. 
   a= a+offset;

Take out the comments and it looks as if it's self-documented. But in six months' time, when you are updating for the Mark3 board, you won't remember why the offset was being added or whether it's necessary for what you are doing now. Been there, done that.

Unless you are writing code with psychic abilities which adds commented explanation as you code, I think self-documenting code is an illusion. There's only code you remember understanding when you wrote it, and the surprise of finding that you can no longer see what it does or why it was done that way. If only you'd put some comments in to document it properly at the time…

J.G. Harston contributed his thoughts. The link is to a compendium of CRC-16 implementations, including in PDP-11 assembly language!

> Do you use in-line comments? ... or a block comment instead

I use both. I tend to use three sets of comments:
* a block summarising code entry/exit
* end-of-line comments through the code where needed
* a single line introducing a distinct subsection of a block of code

Plus, I use an assembler that allows multiple instructions per line, so I group whole atomic actions together, for example:

LDA crc+0:EOR #&21:STA crc+0   :\ CRC=CRC EOR &1021, XMODEM polynomic 
TYA:EOR #&10                   :\ Get CRC high byte back  from Y  

This makes for easy readability, as the comment describes the action of the code instead of the action of the individual instructions, and it supports the goal of keeping each distinct code block no bigger than a display screen, so you can see the whole block in one go without scrolling anything off the screen. I'm showing my age when I say that's an 80x32 screen.

I am paranoid about not understanding my code when I come back to it, plus I try to write code with the intention of other people reading it and using it as a learning resource - a habit picked up at university and from writing for magazines. I'm regularly surprised that youngsters don't have this habit - how do they expect their assignments to be assessed?

While not claiming it is the best, I try to stick to this style of commenting: http://mdfs.net/Info/Comp/Comms/CRC16.htm
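For those of us more comfortable in C than in assembler, here's roughly how that three-level style might look applied to the same sort of CRC update. This is a hypothetical sketch of mine, not taken from J.G.'s page:

#include <stdint.h>

/* crc16_xmodem: fold one byte into a CCITT/XMODEM CRC-16.
 * Entry: crc  = running CRC so far
 *        data = next byte of the message
 * Exit:  returns the updated CRC (polynomial 0x1021, MSB first)  */
uint16_t crc16_xmodem(uint16_t crc, uint8_t data)
{
    /* Merge the new byte into the high half of the CRC */
    crc ^= (uint16_t)data << 8;

    /* Shift out eight bits, folding in the polynomial on each carry */
    for (int i = 0; i < 8; i++) {
        if (crc & 0x8000u)
            crc = (uint16_t)((crc << 1) ^ 0x1021u);  /* carry set: apply 0x1021 */
        else
            crc <<= 1;                               /* carry clear: plain shift */
    }
    return crc;
}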

Jon Daley wrote about a style I have seen too much of:

When I was teaching an undergraduate engineering course, my students would often complain about my requirement for commenting, and would say things like "it's self documenting", but when I would force them to write comments anyway, they would write things like:

// function that returns an integer
int func(void){
  int a;      // we need an integer to store stuff
  a = 5;      // set a to 5
  a += 4;     // increment a by 4
  return a;   // send a back to caller
}

And they would be quite irritated when I didn't think those comments were useful, though it does explain why they thought their code was self-documenting...

I think comments like that are actually worse than not commenting, as they don't give any new information (such as why one would want to increment the variable by a seemingly random number) and quickly get out of date if the code is ever modified.

I think storytelling is a great way to get ideas across, and Dave Telling had one to share:

Re comments - I agree with you 100%. I have found that I often cannot remember why I wrote a function a particular way when reviewing code months or years later, and that is why I'm a big advocate for comments, even when some might say that the code is obvious.

I had a situation a few years ago, where we had a division that had done a fairly complex EFI application in assembler (68HC11) and there was a desire to refactor in C. I took a look at the original source, and most of the code had no comments, and those lines that did often had comments that NEEDED comments to be able to know what was going on. The worst was that there were a number of code blocks that used complex stack manipulation, and were (to me, at least, as an engineer, not a programmer) almost incomprehensible. We asked a well-known firmware company to give us an estimate of what it would take for them to do the conversion, but the cost was so high that the idea was scrapped. I later found out that that same division had already done a new EFI project (but using a different uC) that DID use C for the code, but the division was shut down soon after.

I have to really force myself to slow down and add comments when I write, and I'm also trying to make sure that I incorporate the idea of writing a comment block that explains what the function actually does, what it expects to receive when called, and what it returns (if anything). This is good counsel for anyone!
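That kind of header block doesn't have to be elaborate to earn its keep. A quick sketch - the function, names, and numbers here are made up purely for illustration:

#include <stdint.h>

static int32_t cal_gain_uV = 806;   /* microvolts per ADC count, set at calibration */
#define BOARD_OFFSET_MV  3          /* fixed front-end offset, from the schematic */

/*
 * apply_calibration()
 *
 * Purpose:  Convert a raw ADC reading to millivolts using the gain stored
 *           at calibration time, then remove the board's fixed input offset.
 * Receives: raw_counts - unsigned 12-bit ADC result, 0..4095.
 * Returns:  The corrected input voltage in millivolts.
 */
int32_t apply_calibration(uint16_t raw_counts)
{
    int32_t mv = ((int32_t)raw_counts * cal_gain_uV) / 1000;  /* counts -> mV */
    return mv - BOARD_OFFSET_MV;                              /* remove fixed offset */
}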

Paul Carpenter wrote:

- positive range limited
- zero disabled
- negative a series of error codes

One thing I like about Ada is that one can explicitly specify ranges for variables, which are checked at runtime.
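One reading of Paul's shorthand is the common convention of encoding status in a signed value - positive for a valid, range-limited reading, zero for disabled, negative for one of a set of error codes. A hypothetical C sketch of that convention, with every name and limit invented:

#include <stdint.h>

#define SENSOR_MAX           10000   /* upper limit of the legal positive range */
#define SENSOR_ERR_TIMEOUT     (-1)
#define SENSOR_ERR_HARDWARE    (-2)

static int channel_enabled = 1;                  /* stand-ins for the real driver */
static int32_t pressure_raw(void) { return 4200; }

/* > 0 : valid reading, limited to the positive range
 *   0 : channel disabled
 * < 0 : one of the error codes above              */
int32_t read_pressure(void)
{
    if (!channel_enabled)
        return 0;                                /* zero means disabled */

    int32_t raw = pressure_raw();
    if (raw < 0)
        return SENSOR_ERR_HARDWARE;              /* negative: error code */

    return (raw > SENSOR_MAX) ? SENSOR_MAX : raw;  /* positive, range limited */
}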

Mark Globerson shared some experiences:

I liked your suggestion about focusing on block comments. I have worked on many embedded systems, coded in both C and assembler, that had a combination of algorithmic or protocol-like code spread out in several places in the design, and that often took a pipelined approach to solving problems, so work was done in stages and set up for the next stage. Events occurred that required updates to certain variables, which would signal other areas to be updated. This work was mainly done on memory- and speed-constrained DSP chips that did combinations of algorithms and bit banging.

I found that I needed a large area of block documentation to describe the rather complex interactions of the various pieces in time, and the particulars of how variables/data structures were used to coordinate the work. I also tended to keep hand-drawn timing diagrams of the pipelined processing to track how the stages worked. In addition to having an overall description of how everything worked in one area, I put pretty detailed comments in the various places where subtle things were being done in the code, as a way to remind me of how they tied back to the overall algorithm. The problems the code solved were complicated, and the design plus implementation took a long time to figure out. No, this wasn't because it was spaghetti code, and no, we couldn't treat this like high-level language code and modularize everything down to trivial routines nested in a tight object hierarchy. Limited memory and extremely tight real-time deadlines cause you to make compromises in a way that someone writing a web application would not be familiar with.

I understand how people could feel that comments can get out of date. My feeling is that when you have something this complicated it is your job to update the code and comments together, taking the time to make sure they are consistent. When I was asked to go back and modify this code at some later point, the comments (and my hand-drawn timing diagrams) were invaluable in figuring out exactly how things worked, so changes could safely be made with a thorough understanding of the details of how the system worked. The comments saved me hours of work and gave me the confidence to believe that the changes were correct. I'm sure there are some people out there who feel they are code gods who are above this documentation. I consider myself pretty mortal and fallible. I still make mistakes, and it's nice to have some way to recreate the thinking processes that went into the design in order to find those mistakes or make changes.

On the flip side of the coin, I have had the task of reverse engineering some fairly complicated systems where the code was written in assembly language. The documentation was sparse, at best, and after weeks of poring through the code I found many conditions that didn't work, though it was clear the code might or might not end up hitting those conditions, depending on inputs and timing. Since I had inherited this code, I made sure I documented the overall way it worked and the subtle interactions, knowing that things would change. Several releases of this code were already in the field and required support, so somebody had to know how to fix it. Even if the code had been well written and well documented, it would have taken time to figure it out. Maybe this could be considered a rite of passage, but I found this to be a completely undisciplined way to design a system.

It was clear that the initial design process didn't include the use of pencil and paper to write out the overall algorithms used and the system timing, so the code was started without a clear picture of how it should work. Then it looked like a series of patches to handle odd cases that occurred due to a failure to generalize certain parts of the problem. At a minimum, having any documentation that attempted to explain the overall flow would have been invaluable. After putting in my time documenting things, I kept that documentation up to date as future changes were added, so there was some way to understand the code flow without spending weeks reviewing it each time you needed to make changes to the complicated parts. That is when I learned that I document because I'm lazy and don't want to repeat work over and over.

So yes, I prefer block comments that explain the high-level view of what is really happening and how things piece together. It's useful to have comments at the more local level when subtle things are being done within a function/procedure/code block too, but please don't bother with:

   foo = a + b ;   /* add a to b and store in foo */

David Wyland wrote:

General Systems Theory states that structure and function are not related, except by the designer/user.
Neither can be derived from the other. A structure can have many possible functions. A function can be implemented by many possible structures.

If comments exist, they are usually about structure: a description of what a line or block of code does in terms of how it does it. This lets you verify the line(s) of code: does it do what it says it will do? Is the design of the structure accurate?

What is usually missing is what the routine is expected to do for the program that called it - its function. Does it do what the calling program expected? This validates the function.

For example, are you taking the square root of a sum of squares, or the square root of your telephone number?
It does not matter how well you do the wrong thing.

Make sure you describe both what it is supposed to do and how it does what it does.

Dave Harper contributed:

In the latest EM I found your "Comments on Comments" section very interesting. I generate a lot of code for my own use which means I am also the one who needs to maintain it. Thus I have a vested interest in learning anything that will help me comment more effectively. I agree that well written code is somewhat self-documenting, but it only shows "what" is being done. Over time I've found that a far bigger issue in maintaining code is "why" something is done the way it is. Often there may be subtle corner cases that result in code being written one way over a more obvious way and the code itself doesn't convey this to the maintainer. Thus I've developed the habit of a comment block preceding the function that describes why it is needed, the purpose of any non-obvious variables, etc. Further, if sections of the function are not obvious as to why they are doing what they're doing, another small multi-line comment block is likely in order.

Brian Rosenthal improved my example:

In general, I find that comments that explain "Why?" are better than comments that explain "What?". Using your (admittedly simple) example:

a= out_pr * scale;  // scale the input
          

is useless - I can see by the code that we are scaling something. The improved comment:

/* 
         Adjust the input by "scale", which
         is the coefficient calculated at
         calibration time.
*/
          

is better, but I would suggest that the same information can be conveyed by changing the variable from "scale" to something like "calibratedScaleCoef". To me a more useful comment would explain "why are we doing this here or now?"

/*
         Scale from bits to real-world units now;
         as all following corrections will be in
         real units.
*/
          

Likewise, anytime you spend a day changing something, only to find out why it was a bad idea, note it in the code.

/*
         Don't scale to real units yet, this all
         needs to be streamed in binary first.
*/
          

When writing code, always assume that 10 years from now someone (probably you) will need to change something, and a bit of context makes all the difference.

Our small group of two or three has been maintaining the same code base as it has moved to updated hardware for 30 years - my part has been "only" 17 years. As it isn't possible to keep track of hundreds of thousands of lines of code in your head, comments that explain "why" really help.

Harold Kraus had a different take:

I was taught the importance of commenting in my first programming courses (early 80s), kind of in the vein of "show your work".  But today, in a safety-critical work environment where I have to trace every line of code to requirements and test cases, I find that my code is consequently (1) very simple and (2) well explained in the high- and low-level specification/design items.  It is not rare that code comments I write to keep track of what I am doing as I try to solve problems end up as low-level design data.  I am loath to maintain design data in two places. What I think some coders might put in comments, I put in design and requirements, which kind of leads me to the thinking that seeing lots of comments reflects a need for more detail in requirements and design.  Now, I know there are ways to set up comments and other tags in code that assist in automating synchronization of code and design (e.g., Doxygen), but I haven't put in enough effort to set that up in a way that works for us.

P.S. More to the point, while I am specifying and designing, I am visualizing how I will code and test; and while I am coding, I am visualizing how I will express specifications, design, and test.

P.S. Years back, I was taken by Scott Ambler's writings on Agile Modeling; this was after my first Level B project wherein I felt constrained to use a single notation model and after my second Level B project wherein I codified how I used multiple notation models.
I like the statement in "Agile Architecture: Strategies for Scaling Agile Development - 6. Requirements-Driven Architecture": "Your architecture must be based on requirements otherwise you are hacking, it's as simple as that."

http://subs.emis.de/LNI/Proceedings/Proceedings07/AgilModel_aBrief_1.pdf
http://www.agilemodeling.com/essays/agileArchitecture.htm
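For anyone who hasn't tried it, the Doxygen markup Harold mentions is just specially tagged comments that a tool turns into design documentation. A small hypothetical example in C:

#include <stdint.h>

/**
 * @brief     Convert a raw thermistor reading to tenths of a degree Celsius.
 *
 * @param[in] counts  Raw 12-bit ADC result from the thermistor divider.
 * @return    Temperature in 0.1 degC, or INT16_MIN if the reading falls
 *            outside the lookup table (open or shorted sensor).
 */
int16_t thermistor_to_ddegc(uint16_t counts);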

George Farmer sent this:

I was shocked to read in EM 319 that there are still engineers out there who mistakenly think their code is self-documenting.  I find this extreme hubris on their part almost criminal – certainly a firing offense in some cases, to say the least.  They have no idea how much they actually cost their employers in the long run, not to mention the extreme risks they are taking.

Having dealt with someone else's so-called self-documenting code from my days in aerospace and later in the off-road, heavy construction equipment industries, I can share numerous anecdotal horror stories, but I'm sure you have heard them all before.  For what it's worth, I have gone back through some of my old code from many years ago and was glad I documented and commented to the level I did – even then I regretted not having commented more when I could have.

I like to joke that I have a very good memory, just that it's very short.  The plain, simple truth is that the latter part really is true and is no joke.  In this faster-better-cheaper-pick-any-two world of Embedded Development, very, very few people have the gift of photographic memory (I know I'm certainly not one of the Gifted).  Even thinking that code can be self-documenting can be extremely dangerous. 

Paul Bennett wrote:

As one who is a stickler for having a decent commenting strategy, I find that things improve in a "Component Oriented Development Environment". This environment encourages a more literate style of programming, where the comments are written first.

I use the block comment approach to capture the specific portion of a requirement for the component under design. Writing the comment as though it were a requirement specification has a number of benefits.

1. You can review proposals for components without having written or coded the component.

2. Reviews of the comments, provided they are written as a component requirement, can be gauged for validity against the higher-level requirements.

3. When the code is written, it can be reviewed and tested against the already-validated comments for correctness of implementation.

4. If component limitations are made clear within the comments, the comment acts as a data-sheet for the component when it is made available in a library, thus allowing sensible selection of components that are re-usable.

The above benefits are already enjoyed in the hardware world, whether mechanical, electrical or electronic, and I see no reason why software should be any different.
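To make that concrete, here's a sketch of what such a comment-first "datasheet" might look like before any code exists. The component, requirement reference, and numbers are all hypothetical:

#include <stdbool.h>

/* Component:   debounce_switch
 * Requirement: derived from SRS 4.3.2 - the keypad shall ignore contact
 *              bounce shorter than 20 ms.
 * Provides:    a debounced logical state for one switch input.
 * Limitations: assumes the input is sampled every 5 ms; not suitable for
 *              inputs that toggle faster than 10 Hz.
 * Review:      the text above is checked against the higher-level
 *              requirements before any implementation is written.
 */
bool debounce_switch(bool raw_sample);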

Mat Bennion had a good point about maintaining comments:

I'm sure you'll have plenty of responses to your comments on comments.  My point is that there should rarely be a maintenance overhead in ensuring that the comments match the code, because the comments shouldn't simply be a human-readable description of the code - we can assume that the reader is fluent in the language.  They should convey information that is not presented directly in the code - e.g. its intent, anything surprising about the algorithm, rationale, tips to help the maintainer... this is likely to be much more stable than the code itself.

Claude Galinsky had this story:

Ten years ago I worked for a company engineering high-end music synthesizers. Over the course of the company's life it had been sold multiple times, the last sale involving massive layoffs of the developers. Those of us hired afterward to re-establish the company needed to migrate the technology from 68000-family code to a more modern platform. We were confronted with a large real-time C code base, controlling our custom VLSI chip, that was almost completely uncommented. Not only that, it made such extensive use of (also uncommented) macros that it might as well have been written in Klingon.

The mastermind behind this code, who had quit the company angrily after they had laid off all his friends, was a brilliant but prickly ex-Bell Labs guy who, unfortunately, thought of himself as a machine. His take on the need for comments was simply this:

"Comments don't compile."

It was necessary to sweet-talk him for weeks before he agreed to come back as a consultant. Then it took him and the brightest of our software engineers most of a year before we were able to get it working.

If there hadn't been one person at the company who was still friends with this prima donna, the company would likely have had to shut its doors.

Ron Aaron wrote:

Regarding code-comments: I've been writing software for over 30 years, and have worked in a very wide variety of places.  Everywhere I've worked had a policy (whether enforced or not) that code must be commented.  In any kind of team environment, uncommented code is a huge liability.

But even when working solely on my own projects, I have always commented liberally enough so that when I return to the code in six months I don't have to spend time figuring out what the code is supposed to do.

Which brings up the only valid point against commenting, which is that comments must be maintained just like code.  When the code changes, the comments must be updated to match (and indeed, they must match to begin with!), or the comments can actually be harmful because they will mislead and waste the developer's time and his company's money.

Jobs!

Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words. There is no charge for a job ad.

Joke For The Week

Note: These jokes are archived at www.ganssle.com/jokes.htm.

A true story from Mike Coren that is unfortunately not a joke:

The Burt Rutan quote in The Embedded Muse 319 ("Testing leads to failure, and failure leads to understanding") reminded me of an incident from early in my career, and a lead engineer who understood the first part of that quote all too well.  In 1992, I was working with a company that was developing a new product under contract.  Doing contracted product development for somebody else wasn't this company's usual business model, but the company needed the cash and had the necessary technology expertise.  As we were preparing for the formal handover to the customer, somebody on our development team encountered a minor firmware bug.  A new version was quickly built with a fix, after which the lead engineer announced unequivocally that the last bug had been found.  He subsequently prohibited any further testing until after the acceptance documents were signed.

Advertise With Us

Advertise in The Embedded Muse! Over 27,000 embedded developers get this twice-monthly publication.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster.