You may redistribute this newsletter for non-commercial purposes. For commercial use contact firstname.lastname@example.org.
This issue marks 20 years of The Embedded Muse. TEM #1 came out June 16, 1997. A lot has changed in this industry in the intervening decades! I want to thank all of the readers for your support, your contributions, and your thoughtful emails over the years.
Normally, the Muse goes out twice a month, but that will drop to a single issue each in July and August, as it kicks back for some summer fun.
|Quotes and Thoughts|
"Heuristic is an algorithm in a clown suit. It's less predictable, it's more fun, and it comes without a 30-day, money-back guarantee." -
|Tools and Tips|
Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in past issues.
|Freebies and Discounts|
Enter the contest via this link.
|XML or Binary for Config Data?|
In the last issue I posed a question asked by a reader: does it make sense to store configuration information in binary or in XML? That elicited a flood of replies, with most readers suggesting the use of JSON. Here are some of the correspondents' thoughts:
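One advantage readers cited for text formats like JSON is that a config file can carry an explicit version field, so firmware can detect and reject (or migrate) configurations it doesn't understand. A minimal sketch of that idea (the field names and values here are hypothetical, not from any correspondent's system):

```python
import json

# Hypothetical device configuration carrying an explicit schema
# version, so firmware can reject configs it doesn't understand.
raw = '{"version": 1, "baud_rate": 115200, "log_level": "warn"}'

config = json.loads(raw)
if config.get("version") != 1:
    raise ValueError("unsupported config version")

print(config["baud_rate"])  # 115200
```

The same version-tagging trick works for binary formats too, of course; the difference is that the JSON file remains human-readable and diffable in the field.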
Thor Thau wrote:
Stuart Donnan is also a fan of protocol buffers:
Frank Hunleth contributed:
Mat Bennion wrote:
Nathan Menhorn votes for XML:
Stjepan Henc is also a fan of JSON:
Scott Nowell has a down-to-Earth example:
Jim Donelson wrote:
And Charles Manning sent this:
|Dealing With Complex I/O|
In Muse 329 we discussed complex I/O on modern MCUs. Mat Bennion had some thoughts on this:
|Point/Counterpoint on Embedded Ransomware|
Martin Thompson had some comments about an article on security that ran last issue. His comments are in italics, prefixed by the initials MJT.
What are possible counter measures?
The most basic prerequisite for an attack as described here is knowledge of the specific microcontroller and bootloader mechanism used. This information can be obtained either by monitoring/tracing the CAN/CANopen communication during the firmware update process or by access to a computer that has this information stored. Protecting these in the first place has the highest priority.
The designer has to make sure that the firmware update process is not easy to reengineer just by monitoring the CAN/CANopen communication of a firmware update procedure. Things that we can often learn just by monitoring a firmware reprogramming cycle:
How is the bootloader activated? Often the activation happens through a specific read/write sequence.
[MJT] It is a common mistake to conflate encryption with authentication. What is important when protecting against this attack is to make sure that only authentic code is allowed to execute. It should not matter (for security) if the attacker can see the code (which is what encryption protects against); it is vital that the code is protected against tampering. Cryptographic signatures are the usual way to achieve this, not encryption.
A non-repetitive challenge response to "open up" the communications with the bootloader is an excellent idea, but the calculation of the correct response must involve some cryptographic primitive and some secret material (i.e. a key) that the attacker cannot get hold of. Using something like a CRC because it "looks hard" to "predict" is a classic mistake.
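A minimal sketch of such a challenge-response, using an HMAC over a fresh random nonce (the key, nonce size, and function names are illustrative, not taken from any particular bootloader protocol):

```python
import hashlib
import hmac
import os

SECRET_KEY = b"device-unique-secret"  # provisioned per device, never sent on the bus

def make_challenge() -> bytes:
    # A fresh random nonce for every unlock attempt, so captured
    # bus traffic cannot simply be replayed later.
    return os.urandom(16)

def compute_response(key: bytes, challenge: bytes) -> bytes:
    # HMAC-SHA256 over the nonce: without the key, the correct
    # response cannot be predicted from observed traffic.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def bootloader_check(challenge: bytes, response: bytes) -> bool:
    expected = compute_response(SECRET_KEY, challenge)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = compute_response(SECRET_KEY, challenge)
print(bootloader_check(challenge, response))  # True
```

The essential properties are exactly the ones MJT names: a real cryptographic primitive (HMAC, not a CRC), a secret that never crosses the wire, and a nonce that makes every exchange unique.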
What file format is used? ".hex" or binary versions of it can easily be recognized.
[MJT] Again: encryption does not help. Authentication is what is required. And then it doesn't matter how easily recognisable the format that you use is.
What CRC is used? Often a standard CRC is stored at the end of the file or of loadable memory.
[MJT] Even an encrypted CRC provides little protection against tampering. A hash on its own will also not help, as the attacker can just manipulate the hash to match their tampered firmware. Some kind of secret (the key) must be employed. Examples are using a cryptographic signature or a *keyed* hash (also known as a Message Authentication Code).
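The difference matters in practice: an attacker can recompute a plain hash over tampered firmware so it "verifies" again, but cannot forge a keyed hash without the key. A small sketch (the key and firmware images are placeholders):

```python
import hashlib
import hmac

KEY = b"signing-key-held-off-device"  # secret, never stored with the image
firmware = b"\x01\x02original-firmware-image"
tampered = b"\x01\x02ransomware-payload!!!!"

# Plain hash: the attacker simply recomputes it over the tampered
# image, and the check passes.
stored_hash = hashlib.sha256(tampered).digest()  # attacker-supplied
print(stored_hash == hashlib.sha256(tampered).digest())  # True: no protection

# Keyed hash (HMAC/MAC): recomputing requires the secret key, so the
# attacker's forgery fails verification.
genuine_mac = hmac.new(KEY, firmware, hashlib.sha256).digest()
attacker_mac = hmac.new(b"wrong-key-guess", tampered, hashlib.sha256).digest()
print(hmac.compare_digest(genuine_mac, attacker_mac))  # False: tampering caught
```

A cryptographic signature (e.g. ECDSA or RSA) achieves the same goal asymmetrically: the device only needs the public key, so there is no shared secret to extract from it.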
Protecting the secrets is the "key", if you'll excuse the pun, to security. This is known as Kerckhoffs' principle: even if the attacker knows everything about how your system operates, if they do not have the keys they still cannot influence it.
And it's also the hard bit that is glossed over in many security treatments (e.g. the immobiliser chip that puts the same key in every keyfob, meaning that once one keyfob is broken you can start any car with that style of immobiliser!)
Does it make sense to grind markings off the CPU chip? With so many devices using Cortex-M parts, a smart attacker could still make some good assumptions about the processor type.
[MJT] It does make life harder for the first attacker, but once someone has figured it out, the internet means everyone knows. Kerckhoffs' principle still applies: even if your attacker knows what chips you have used, they should still not be able to successfully attack you without the secret key material.
If there's a debug port, should that be closed off?
[MJT] IMHO: yes. It should at least be password protected, and (also IMHO) with a different password for every instance of your controller. Yes, this is a pain! (There is an inevitable trade-off between security and convenience; we are moving to a world where security has to take higher precedence.) This is particularly important if secret keys are stored in the processor that could simply be read out through the debugger (unless they are protected by some additional measure, like debugger censoring or a Hardware Security Module (HSM)), but it provides another layer of protection against attackers regardless.
Given that so many devices now sport nice GUIs and connectivity, Linux is a logical choice of operating system. But it is big and vulnerable, so how does one manage Linux patches and upgrades? I can't help but wonder if it makes sense to use an RTOS coupled with GUI/networking packages from the RTOS vendor. These typically have a smaller attack surface than a big OS.
[MJT] That could be of value for many systems. As always, there is a trade-off; there is no perfect security, so you "just" have to decide what is sufficient for your product. But don't underestimate the ingenuity of attackers, and don't make the newbie mistake of rolling your own security; take advantage of the well-documented ways of doing things.
I found this series of challenges instructive – demonstrating simple flaws that have been seen widely in real systems, and how easy they can be to attack…
Thor Johnson weighed in on this as well:
Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words. There is no charge for a job ad.
|Joke For The Week|
Note: These jokes are archived at www.ganssle.com/jokes.htm.
Paul Carpenter sent a link to the ideal keyboard for a certain kind of programmer: http://devhumor.com/media/the-only-keyboard-most-quot-programmers-quot-need
|Advertise With Us|
Advertise in The Embedded Muse! Over 27,000 embedded developers get this twice-monthly publication. For more information email us at email@example.com.
|About The Embedded Muse|
The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at firstname.lastname@example.org.
The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster. We offer seminars at your site offering hard-hitting ideas - and action - you can take now to improve firmware quality and decrease development time. Contact us at email@example.com for more information.