Go here to sign up for The Embedded Muse.
The Embedded Muse
Issue Number 469, May 1, 2023
Copyright 2023 The Ganssle Group

Editor: Jack Ganssle, jack@ganssle.com


You may redistribute this newsletter for non-commercial purposes. For commercial use contact jack@ganssle.com. To subscribe or unsubscribe go here or drop Jack an email.

Contents
Editor's Notes
Quotes and Thoughts
Tools and Tips
Hobbies to Vocations
More on AI and Software Engineering
Python Ponderings
Failure of the Week
This Week's Cool Product
Jobs!
Joke For The Week
About The Embedded Muse
Editor's Notes


Tip for sending me email: My email filters are super aggressive and I no longer look at the spam mailbox. If you include the phrase "embedded muse" in the subject line your email will wend its weighty way to me.

Quotes and Thoughts

"For every seven faults corrected, one of at least equal severity is injected." - N.E. Adams, "Optimizing preventive service of software products," IBM Journal of Research and Development, 1984.

Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.
Hobbies to Vocations

My wife needs an intervention.

She's the sort who never slows down. She is always creating some form of art. The house is overflowing with her stained glass creations. Everything is decorated with mosaics or her paintings. Even rocks in the yard are mosaic victims.

But some years ago she discovered beading, and, well, let’s just say those beaders are nuts. Her bead society is full of similarly-minded women (to a first approximation, all women). They can't go for more than a few microseconds without stringing bits of shiny glass or other materials together. Some make their own beads, an effort that seems deliriously obsessive to me. I'm afraid my welding torch will get co-opted for that soon. The bead shows are packed. One of her friends spent a kilobuck on beads at a single show recently.

Actually, I admire her dedication, but frequently josh her about it.

In Hobbies: Leisure and the Culture of Work in America, author Steven Gelber suggests that the word "hobby" derives from "hobby-horse," and the notion of having pet projects was considered silly and even a dangerous obsession. Around 1880 American views started to shift as necessary crafts were reborn into avocations practiced primarily for fun. But it's clear that in Europe science was practiced primarily as a hobby in much earlier times, though probably only by the wealthy.

It's truly impressive how many people in this country so passionately pursue their avocations. While the media complains about couch potatoes, the truth is many of us are consumed with our passions and fill the spare hours doing something whose profit is mainly happiness.

While in India once I was struck by a comment made by a very smart guy who grew up there but moved to the States many years ago. He said that Indians don't have hobbies, and lamented that a lot of tech talent is missed as a result. I have no data on the accuracy of that statement - Googling for information about hobbies in India, I got 119 million hits, and there's even an indiahobbies.com web site. But one would think hobbies are the province of those who have the luxury of time and some disposable income, which are probably in short supply in poorer countries.

If he is right and hobbies are uncommon in India or other locales, those nations are missing a great source of talent. For many of us our hobbies led to our vocations.

My interest in embedded systems stems entirely from my hobbies, as I was consumed from an early age with electronics. I know a lot of other engineers who can make the same claim. Some of the very best analog designers I know mastered their craft building stuff long before college. Remember legendary analog engineer Jim Williams? He worked in a TV shop in high school to learn more about electronics, and throughout his career maintained a lab at home to pursue that passion.

[Note to younger readers: We used to repair TVs. Even non-techies would pull the tubes and take them to a drug store, all of which had tube testers, to isolate bad components.]

Then there's ham radio, a gateway drug for an awful lot of EEs. Youngsters play with radios and, for many, become consumed with learning how those devices work. Today there are a ton of areas where one can still build radios. That's no longer an analog domain, either, with software-defined radios and many digital transmission modes.

What about you? Are you consumed by some avocation? And did that get you into the embedded world?

More on AI and Software Engineering

The article about the implications of AI on software engineering in the last issue garnered a lot of replies. My son works in AI and thinks technology like ChatGPT will profoundly change the world, rather as the Internet did. Demonstrating its capabilities, he asked it to write a "love sonnet for Jack and Marybeth on a sailboat." The result was pretty impressive, if the poetry was a bit pedestrian. I asked it to write a C program to compute a CRC. The result looked pretty good... but it was fundamentally wrong. It should have asked me what kind of CRC I wanted; instead it produced code for the (pedestrian!) most common sort.
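To see how much was left unspecified, here's a minimal sketch - my code, written in Python for brevity, not ChatGPT's output. A "CRC" isn't one thing: the polynomial, initial value, bit order, and final XOR all vary by standard, and each combination yields a different checksum over the same data:

def crc16(data, poly, init, reflected):
    # Generic 16-bit CRC; 'reflected' selects LSB-first vs MSB-first bit order
    crc = init
    for byte in data:
        if reflected:                 # LSB-first: shift right
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        else:                         # MSB-first: shift left
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

msg = b"123456789"
# CRC-16/CCITT-FALSE: poly 0x1021, init 0xFFFF, MSB-first
print(hex(crc16(msg, 0x1021, 0xFFFF, False)))   # 0x29b1
# CRC-16/ARC: poly 0xA001 (0x8005 bit-reversed), init 0, LSB-first
print(hex(crc16(msg, 0xA001, 0x0000, True)))    # 0xbb3d

Ask for "a CRC" without pinning those parameters down and you get one of dozens of incompatible answers.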

That's exactly what I'd expect from an inexperienced developer. A pro would snort, realize he'd asked the wrong question, and reframe it with more detail. The result would (presumably) be useful code, with the AI acting as a great adjunct, but only when carefully instructed. It's rather like a search engine: ask the wrong question and you get poor results. As I wrote in the last issue, framing the problem is a big part of solving the problem.

But is the AI-generated code any good? One must evaluate it. Perhaps write tests. Or, you could ask the AI to write the tests... but are those correct? It's turtles all the way down.

Jakob Engblom wonders about copyrights. If an AI is trained by scraping copyrighted information from the web, how can it pass off its output as anything but piracy? He writes eloquently: "In a way, the position of ChatGPT is akin to money laundering - by passing the code through a system like this you clean off all the licenses and get something you can use by agreement with ChatGPT." But is that ethical? Legal? There's currently a lawsuit against GitHub over Copilot about this very issue.

Peter House wrote:

In the days of Beethoven and the other great composers, becoming a great composer required a lucky genealogy (the ability to own or have access to an instrument), considerable skill on at least one instrument, the ability to read and write, the ability to write music, and the means to gather an audience.

While I have been a musician most of my life, I have never been among the most talented. In the 1980s I was able to work with Bob Moog and get access to MIDI and computer software for writing and performing music. Suddenly, instrumental skill was not very necessary, and the cost was no more than a few weeks of a regular working man's wages.

Today, you don't even have to know how to read, perform on an instrument, or write music. If you can whistle a tune, there is software for your cell phone that will write the score and let you refine it and add multiple parts. You can even publish your music from your cell phone, although repeat listeners are not guaranteed.

Music, for the most part, has been commoditized! This is not to say that everyone is able to create popular music, but everyone can create and share their music with very little additional cost other than the individual's time.

I believe software is headed down this same path. Spreadsheets and word processors have allowed great numbers of people to create and share business applications (some good and some bad) with not much more than a good understanding of their business!

I believe this process for software/AI is in the initial stages, whereas music is in the final stages of growth and acceptance. Where this will end up is anybody's guess.

Chris Lee had some questions:

I wonder how much of the complexity is in the solution to the problem itself vs created by what I would consider subpar or inadequate planning and "real" engineering.  There has been a lot of chatter about AI's capabilities lately, but you have to wonder if people would REALLY use it.

Yes, the AI can pass a law school test.  Now, if your life depended on the case, would you allow the AI to represent you in court?

Yes, the AI can identify various diseases and conditions.  Would you follow the treatment that the AI pushes on you based on its diagnosis of your condition?

Yes, the AI can probably write software.  Do you feel comfortable flying in an airplane knowing that the AI generated the control system code for autopilot?

I certainly wouldn't - and it's not because I don't trust the AI. The AI was created by people, and I fundamentally don't trust people's ability to generate really solid code! If the AI generated itself, then we're talking about an entirely different scenario that only exists (today, anyway) in dystopian sci-fi plot lines.

Love reading your columns. Always insightful - and on extremely rare occasions, inciteful! Could an AI tell the difference between the two statements here?

Charles Manning perceptively writes:

The AI stuff is interesting and there are really two different things to talk about:
* The AI itself.
* The outputs it generates.

The AI engines themselves are largely impenetrable "blobs" of binary that are impossible to debug. This makes it very hard to see a path to using AI in something like DO-178 or other safety applications, where you're supposed to be able to show the relationship between the requirements and the actual code itself (i.e., explain exactly why each line of code exists and what it does). Good luck trying to do that with the mess that is a trained neural network.
Sure, there are some debug tools that allow you to inspect layers and determine that the network has learned to extract features or colour or brightness or whatever, but that is a sort of reverse engineering that does not fit within the process of things like DO-178.
I have recently done some courses on TinyML (running AI inference on smaller Cortex-M-level micros). In my opinion, this stuff is way overhyped. There are some problems where it is a good fit (e.g., anomaly detection). But where it is now reminds me of those children's line-following robots that "work" so long as someone is there to pick them up and put them back on track when they go wrong.

I intend to use tinyML in a few applications, but these are pretty restricted.

The outputs themselves (e.g., a program generated by ChatGPT) are a bit different. Those can be verified if they are small enough.

One huge downside I see is that if these tools replace people, they will first replace lower-level people. The lower rungs of the career ladder are already missing, and they're getting harder and harder to grasp. When I started in programming and, more importantly, embedded programming, in the 1980s, I was immediately productive even while I was still at university. Now I think it probably takes people a good 5 years after university to be really productive.

As the old grey heads retire, there are just not enough skilled people coming up to replace us. Taking out the lower-end jobs with AI will make that barrier to entry bigger. Ten years from now I think this industry will have some huge problems.

As you say, the skill is not in being able to wrangle the language itself. It is all the thinking that goes around it. That takes experience. Being able to make something "work" in a highly constrained lab environment is completely different to making a product that can survive in the real world.

Mat Bennion likes Copilot:

Interesting comments on AI there… I've just bought GitHub Copilot - the first time in 35 years of coding I've ever bought a tool with my own money! It's far, far from being able to write a program, but for set pieces like a binary search, or doing the masks and shifts in a complex data field, or idioms in an unfamiliar language, it's amazing. You've got to know what you're doing because it will make mistakes and give silly responses, but it's so much fun to use. You start writing a comment, then in the short pause while you think how to end the sentence, it guesses it for you, often correctly, and then goes on to write the code. It feels like it's reading your mind.

Often I'm so amazed by the subtlety of what it's produced (right down to mimicking my variable naming style as well as knowing arcane aerospace protocols) I have to grab the nearest person and show them.  The “wow” moments more than make up (in fun, saved effort and better code) for the times I have to delete what it suggests because it’s completely misunderstood.

This is from Tyler Herring:

I have to agree with your take on the whole AI situation.

I think I will start getting concerned that AI is going to take over large programming projects just as soon as humans actually create insanely detailed specifications for projects that don’t require them for certification purposes today.

I'd be equally impressed if the AI could take a completed programming project and spit out a specification document describing how the overall thing actually works, honestly.

I think it'd be fun to see how well the AI does making a project when the early-days spec is some napkin scribbles, a few product renderings in Photoshop, and a blank pack of 200 sheets of printer paper!

Daniel Way wrote:

I just read the latest Muse and really appreciated your rational take on AI in software. I hope AI won't be a new target for outsourcing, but a force multiplier we can use to alleviate some of the tedious tasks we encounter.

As a motivating example, I recently had to compare some filter topologies and produce Bode plots using Python. Writing all the plotting code can be monotonous, and I don't do it often enough to remember all the function calls and formatting syntax. I asked ChatGPT to produce the plots I wanted. In a matter of minutes and with a few iterations it wrote a useful prototype, which I then refined.
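[Editor's note: the boilerplate Daniel describes looks something like the sketch below - my code, not his or ChatGPT's - a Bode plot of a first-order low-pass using scipy and matplotlib:]

import matplotlib.pyplot as plt
from scipy import signal

# H(s) = 1/(s + 1): first-order low-pass, corner at 1 rad/s
lpf = signal.TransferFunction([1], [1, 1])
w, mag, phase = signal.bode(lpf)

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(w, mag)          # magnitude in dB
ax_mag.set_ylabel("Magnitude (dB)")
ax_ph.semilogx(w, phase)         # phase in degrees
ax_ph.set_ylabel("Phase (deg)")
ax_ph.set_xlabel("Frequency (rad/s)")
plt.show()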

On the other hand, I tasked ChatGPT with creating a GraphViz state machine for a traffic light and it fell over. It added extra states, couldn't loop back from red to green, etc.

While I hope it becomes a useful tool soon, I worry that it has non-deterministic failures. If I can't know its limitations then I still need to validate everything it outputs.

Kobus Steyn sent a link:

A colleague I bumped into told me about a YouTube video on AI called "The AI Dilemma". I watched it yesterday and it is really thought-provoking and a bit concerning.
https://www.youtube.com/watch?v=xoVJKj8lcNQ

Paul Carpenter quips:

> Old timers will remember the claims made about COBOL. It was so easy
> that pundits predicted programmers would be obsolete. Any money manager
> could whip up a bit of Cobol to run an entire department.

Old-timers will remember the adage from the advent of the IBM PC:

Advantage           "Anybody can write their own program"

Years later, after many support issues with mission-critical applications, or scripts inside applications, whose author had left leaving no documentation:

Disadvantage      "ANYBODY can write their own program"

As is evidenced by the amount of awful software and "apps" that people can sell these days.

Python Ponderings

As I wrote in the last issue, I like Python! Here are some reader takes on it.

David Smith is a fan:

I wanted to email about your comments concerning Python in the last Embedded Muse. I agree with the gist of your sentiments and find it a complete joy to work with for the appropriate tasks. I spend my day writing embedded C code and designing digital hardware, so I spend most of my time weeding those gardens. But when I have a need to create a tool that runs on the desktop, I gladly reach for Python. Java used to be my go-to for this, but after starting to learn Python several years ago for some personal projects, I found it much easier to use to get the job done quickly. Python covers a spectrum of use cases that allows it to be employed for everything from a quick imperative algorithm prototype to a full-fledged OOP solution to complex problems. The great thing about it is that you don't pay for what you don't use. You don't HAVE to use classes or OOP - you can write a strictly imperative script. Or you can design and code in complete OOP paradigms, depending on what you need for the task at hand. In comparison, Java always forces a strict OOP hierarchy, which may be well suited to complex OOP software, but makes it totally unwieldy to create quick "one-offs", algorithm tests, or just simple scripts.

As with many of the comments you made in the Muse, it is just so much easier (and faster) to do things that would require much more time and effort in C. The built-in data structures like dicts, sets, and lists (and of course, classes and dataclasses) cover so many of the typical needs for day-to-day work that I can focus directly on the problem I'm trying to solve. The interpreted nature of Python makes it great for prototyping algorithms right in the console! It is almost an immediate feedback cycle.

The dynamic typing initially took some philosophical getting used to for me, coming from the perspective of someone who prefers strongly-typed code. But in recent years, Python has added the concept of "type hints" that you can apply to your parameters and variables. These have no (or at least minimal) effect at runtime, but allow running an off-line type checker (like 'mypy') to catch potential typing mistakes - like type linting for Python. A lot of Python IDEs will also use the type hints to provide insight and autocomplete via the editor. One of the things I like most about type hinting, though, is that it provides a better sense of programmer intent when you are reviewing Python code that uses it.
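[Editor's note: a minimal sketch of what David describes - my example, not his. The hints change nothing when the code runs; an off-line checker like mypy flags the misuse:]

def scale(values: list[float], factor: float) -> list[float]:
    # Type hints document intent; the interpreter ignores them at runtime
    return [v * factor for v in values]

scale(["a", "b"], 2)   # runs "fine" at runtime, yielding ["aa", "bb"]...
# ...but mypy reports something like:
#   error: Argument 1 to "scale" has incompatible type "list[str]";
#          expected "list[float]"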

A lot of what I've written for the desktop in Python has been command-line test software for various devices that we develop. Allow me to introduce you to the 'click' package (https://click.palletsprojects.com/en/8.1.x/). It is a 3rd-party package, installed from PyPI (the Python Package Index) using 'pip', that makes it easy to create command-line-driven Python programs (using Linux-y syntax such as: python mycommand.py --options -o variable_user_parameters). The types of desktop file-processing programs that you describe writing in the Muse would probably be well suited to something like this.
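[Editor's note: here's a minimal 'click' sketch - my example with made-up option names, not David's code:]

import click

@click.command()
@click.argument("filename")
@click.option("-o", "--output", default="-", help="Output file ('-' = stdout).")
@click.option("--repeat", default=1, help="Number of passes over the file.")
def process(filename, output, repeat):
    """Process FILENAME and write the result to OUTPUT."""
    click.echo(f"Processing {filename} x{repeat} -> {output}")

if __name__ == "__main__":
    process()   # e.g.: python mycommand.py --repeat 2 -o out.txt input.txt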

Another really great Python resource is the RealPython website.  It has a lot of great articles written about various aspects of the language – from built-in constructs to 3rd party libraries and tools.  It is at https://realpython.com.

The last thing I'd like to mention is CircuitPython/MicroPython. They are the "embedded" maker flavors of Python that you've likely seen mentioned. CircuitPython is derived from MicroPython, and runs on many of Adafruit's single-board microcontroller products. It is very usable for wide classes of problems (albeit mostly hobby/maker stuff), and brings with it the ability to rapidly create something functional. It is very well suited to prototyping devices or hobby products. I've also used it in several test fixtures we've developed that I've built around some Adafruit boards. I'm not sure I'd feel comfortable designing it into a "professional" customer-facing product, but that is more due to the rapidity with which it is developed and can change, and the fact that it targets a maker market rather than being designed for OEM use in commercial products. That's a great trait for hobby products, but maybe not so great for a long-lived commercial product line that needs to have all of the rough edges taken off? The jury is still out for me on this question. Anyway, if you are looking for something fun to experiment with that will give you a feel for the possibilities, I'd highly recommend getting an Adafruit CircuitPython-based board and spending a few hours playing with it. They have lots of good project tutorials on their website (https://learn.adafruit.com/welcome-to-circuitpython/overview). Stay away from the "M0" flavors of boards, as these were some of the earliest ones and barely have enough RAM to support CircuitPython. The Raspberry Pi Pico (https://www.adafruit.com/product/4864) for $4(!) and the Feather RP2040 (https://www.adafruit.com/product/4884) for $12 are some of my favorites. For people new to electronics the Circuit Playground Express board is great because it includes so many RGB LEDs, several sensors, and alligator-clip-friendly IO connections that require no soldering (https://www.adafruit.com/product/3333).
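[Editor's note: if you do try one of those boards, the traditional first CircuitPython program is just a few lines. A sketch assuming a board that defines board.LED, as the Pico does:]

import time
import board
import digitalio

led = digitalio.DigitalInOut(board.LED)
led.direction = digitalio.Direction.OUTPUT

while True:               # blink forever
    led.value = not led.value
    time.sleep(0.5)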

I know I'm going a bit long on this email, but I find this stuff exciting 😉. It feels so "freeing" to get to work in Python for a bit after working for months on projects in C - and in some cases, maintaining legacy products in assembly ☹. When I read your segment on Python in the Muse, it reminded me of some of the initial feelings I had when first dipping my toe in the Python pool. Obviously, like any tool, Python isn't great for everything, but for certain classes of problems it can be phenomenal.

Charles Manning disagrees with me about indenting:

I use Python here and there for test and development stuff, for which it is very useful. Python now replaces a lot of what used to happen in the Matlab space. Many research tools, in AI and software-defined radio for example, are based on Python.

There's a really interesting Python framework called Pynq, which Xilinx (now AMD) put together, that runs Python on the CPU of a Zynq SoC. This can interact with designs in the FPGA fabric, providing an easy and quick way to debug and develop your FPGA designs. Of course, you likely then need to rewrite everything in C to actually make a product, but that's a different matter.

One thing that really annoys me about Python is the indenting. Many people will say they like this because it takes away the ugly/annoying/whatever {...} or begin/end or whatever other grouping delimiters other languages use. Well, not really. The delimiter syntax is still there, in the form of indentation with spaces and tabs, and is now invisible. Imagine making a critical part of a language invisible! This can make it very challenging to do things like email snippets, which get messed up by reformatting.

I quite like Python, but would like Python with {...} or begin...end far more.
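He has a point. These two fragments differ only in the indentation of the final line, which silently moves it into or out of the loop:

total = 0
for n in [1, 2, 3]:
    total += n
    print(total)    # inside the loop: prints 1, 3, 6

total = 0
for n in [1, 2, 3]:
    total += n
print(total)        # outside the loop: prints only 6

Let an email client eat the leading whitespace and the two become indistinguishable.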

Kevin Banks has some useful embedded links:

>>Though there is an implementation targeting MCUs

and flagged MicroPython.

I just wanted to call your attention to a few things:

https://circuitpython.org/ is a fork of MicroPython, and is available on some different platforms. For example, CircuitPython was the first to support the Espressif ESP32-S2.

So if a chip you need to use is not supported by one embedded Python, you might be able to use the other.

(We have used both implementations over the years with good results).

Finally, in the "shameless plug" department, our (Firia Labs) variants of both MicroPython and CircuitPython include support for source-level debugging.

See firialabs.com (you once gave away one of our "CodeBot" products in the Embedded Muse).

Brian Cuthie also likes the lingo:

Python is a great language. I use it frequently. I think it's the Tutorial page of the Python online documentation that describes it best (and I paraphrase here): a memory-managed language, with dictionaries and regular expressions. And that, I believe, is its strength. One has to think about programming problems a little differently to really take advantage of Python's power. Never parse; use regular expressions. Use dictionaries frequently as small, efficient databases.
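[Editor's note: a minimal sketch of the style Brian describes - my example, not his: a regular expression pulls the fields out of a log line, and a dict serves as a tiny database:]

import re

lines = ["2023-05-01 ERROR boot failed", "2023-05-01 INFO boot ok"]
counts = {}                                  # level -> occurrences
for line in lines:
    m = re.match(r"(\S+)\s+(\w+)\s+(.*)", line)
    if m:
        date, level, msg = m.groups()
        counts[level] = counts.get(level, 0) + 1

print(counts)                                # {'ERROR': 1, 'INFO': 1}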

Python’s other power comes from the immense collection of pre-written libraries that provide most of the solution to many problems. If you find yourself writing more than a few hundred lines of Python, you should check to make sure you’re not re-implementing a library. At least this has been the case for me. 

I recently came across a good book worth taking a look at. It's called Fluent Python (from O'Reilly).

Failure of the Week

From Borislav Gachev:

An anonymous reader sent this and noted that multiplying each number by 24 gives more sensible hours:

Have you submitted a Failure of the Week? I'm getting a ton of these and yours was added to the queue.

This Week's Cool Product

When building very low-power systems we need tools to measure the power consumption. A new unit is now out. From the web site: "MicroAmp-Meter is a current measurement tool featuring auto-ranging, Wi-Fi current profiling, SD card logging options, and more. It is a portable current meter that can record and profile the current consumption of devices in real-time. It has an automatic shunt-switching mechanism that enables it to measure the dynamic range of current levels ranging from 1uA to 1Amp. MicroAmp-Meter has Wi-Fi and an inbuilt Websocket server which provides an application in the web browser to display and record the current consumption in graphical form. It can also log current and other parameters on a micro SD Card for more than 12 hours."

It measures from 0 to 1 amp, with the lowest scale being 0 to 1000 microamps. The resolution appears to be 12 bits; on the lowest scale that's about 1000 µA/4096, or roughly 0.24 µA per count, so it can indeed monitor to sub-microamp levels.

They tell me it is about $100, which is a remarkable price. I don't see how the company can make a profit on the unit. But it sure looks useful. More info here.

Note: This section is about something I personally find cool, interesting or important and want to pass along to readers. It is not influenced by vendors.

Jobs!

Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intent of this newsletter. Please keep it to 100 words. There is no charge for a job ad.

 

Joke For The Week

These jokes are archived here.

A seminar on Time Travel will be held two weeks ago.

About The Embedded Muse

The Embedded Muse is Jack Ganssle's newsletter. Send complaints, comments, and contributions to me at jack@ganssle.com.

The Embedded Muse is supported by The Ganssle Group, whose mission is to help embedded folks get better products to market faster.