Jack Ganssle's Blog
This is Jack's outlet for thoughts about designing and programming embedded systems. It's a complement to my bi-weekly newsletter The Embedded Muse. Contact me at email@example.com. I'm an old-timer engineer who still finds the field endlessly fascinating (bio).
For novel ideas about building embedded systems (both hardware and firmware), join the 35,000 engineers who subscribe to The Embedded Muse, a free biweekly newsletter. The Muse has no hype and no vendor PR. Click here to subscribe.
March 5, 2020
What happens when AI becomes smarter than a smart human being? This is the subject of Nick Bostrom's 2014 book Superintelligence. I found it an interesting and at times frustrating read.
Nick postulates that strong AI - artificial intelligence that surpasses human capabilities in most endeavors - is coming, perhaps in the next few decades. He outlines a number of ways this may happen, from the conventional notion of really smart software, to slicing a brain into microscopically thin segments and "reading" the structure of each to create an emulation of the human brain.
His supposition that such an emulation can be derived from merely mapping the cortex's physical structure strikes me as naive. Do memory and intelligence lie solely within gross anatomy, or is it possible some portion resides in electrical loops spinning between neurons and synapses that won't appear in such a map? Regardless, his other approaches to superintelligence are compelling, and for me it's hard to imagine that sufficiently complex hardware and software won't someday exhibit a level of intelligence we can't imagine.
But with a human's intelligence derived from 100 billion neurons, each sporting maybe 50 or more synapses, the amount of computational resources needed would seem vast. I wonder (Bostrom doesn't go into this) how many of those neurons are needed for thought. Some proportion surely exist only for basal responses incidental to thinking, so perhaps that culls the needed complexity by a big number.
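For a sense of scale, here's a back-of-envelope sketch of the raw synaptic event rate those figures imply. The neuron and synapse counts are the ones used above, and the ~100 Hz average firing rate is my own illustrative assumption, not a number from the book:

```python
# Rough estimate of the brain's raw synaptic event rate, using the
# figures from the text plus an ASSUMED average firing rate.
neurons = 100e9               # 100 billion neurons (from the text)
synapses_per_neuron = 50      # ~50 synapses each (from the text; estimates vary widely)
firing_rate_hz = 100          # assumed average spike rate -- illustrative only

synapses = neurons * synapses_per_neuron
events_per_second = synapses * firing_rate_hz
print(f"{synapses:.0e} synapses, ~{events_per_second:.0e} synaptic events/s")
```

Even with these conservative numbers the result is on the order of 10^14 events per second, which gives a feel for why emulating a whole brain, rather than just the fraction devoted to thought, demands such vast hardware.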
Of course, there may be two dimensions to this: gross complexity in terms of numbers of connections, versus a computer's much faster processing speeds compared to organic matter.
Copious notes are provided; too many, as they become distracting, yet some offer fascinating insights and I recommend exploring them.
Bostrom speculates on the impact of superintelligence on the human condition; specifically, on the nature of work, a problem I've wondered about for years. If the machines are self-replicating and capable of most any sort of work we humans do, what is the value of labor? It seems to me, and to the author, that labor, human labor, becomes worthless. So how do people survive when they can't perform valuable and compensated work?
Bostrom's answer isn't particularly satisfying to me. He believes that while incomes will disappear, capital will explode. I don't entirely follow his logic. One of the cited reasons is colonization of new land in space. That strikes me as beyond speculation and entering the realm of magic, as we have no inkling today of how space travel will ever be cheap, despite Elon Musk's unproven fantasies. It takes a lot of energy to escape the Earth's gravitational well. Chemical rockets will never afford inexpensive access to deep space.
Bostrom feels that many factors will result in redistribution of the capital; trillionaires will feel compelled to share their wealth. Perhaps. Some of his notions sound like Bernie Sanders pounding the podium.
Yet in the notes he writes compellingly: "Providing 7 billion people an annual pension of $90,000 would cost $630 trillion a year, which is ten times the current world GDP. Over the last 100 years, world GDP has increased 19-fold." (I assume these are inflation-adjusted dollars). "So if the growth rates we have seen over the last hundred years continue for the next two hundred years, while population remained constant, then providing everybody with an annual $90,000 pension would cost 3% of world GDP. An intelligence explosion might make this amount of growth happen in a much shorter time."
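The arithmetic in that passage checks out; here's a quick sanity check, using only the figures quoted above (the current-GDP value is derived from his own "ten times" statement):

```python
# Sanity check of Bostrom's pension arithmetic from the notes.
world_pop = 7e9                       # 7 billion people
pension = 90_000                      # $90,000/year each
annual_cost = world_pop * pension     # -> $630 trillion

gdp_now = annual_cost / 10            # "ten times the current world GDP"
growth_per_century = 19               # GDP grew 19-fold over the last 100 years
gdp_in_200_years = gdp_now * growth_per_century ** 2

share = annual_cost / gdp_in_200_years
print(f"cost ${annual_cost/1e12:.0f}T/yr, future share of GDP: {share:.1%}")
# ~2.8%, i.e. the "3% of world GDP" he cites
```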
He does lay out a number of alternative scenarios, though most of those are quite bleak.
The general thrust of the book is that AI will arrive in the form of machine intelligence enabled by computers and software, and that while there are many possible outcomes, most are undesirable if we value human life. I am reminded of Mark Twain's last story, The Mysterious Stranger, in which a young boy befriends a stranger who grants his every wish, but no matter how carefully framed those wishes might be, they all result in disastrous outcomes. (The stranger turns out to be Satan). Bostrom examines hundreds of possibilities, but they mostly end in the superintelligence dominating the world and even the galaxy.
I found two areas missing from the book. First, he only discusses superintelligence arriving via machines: digital hardware and software. Might there be an organic version engineered by humans? Of course, software has the benefit of zero cost of reproduction and the chance of very rapid evolution, so this failing is understandable.
Second, if the dire predictions given were indeed to come to pass, would people revolt? We recall the Luddites who (ineffectually) destroyed looms to preserve jobs. But what about an A Canticle for Leibowitz scenario, where after the horrors of nuclear war the populace kills all the engineers and scientists and the world reverts to a medieval state of ignorance and poverty?
Bostrom never addresses consciousness. That is probably wise, as it's a hard concept to even define. The superintelligences Bostrom envisions are rather like smart viruses that use directed goals to achieve some objective. Cold and heartless, they just execute their ever-improving programming. They might reproduce like mad to get to a goal or use inventive strategies. But are they introspective? Thoughtful?
He does delve into the fascinating idea of causing harm to humans, and a corresponding evil of doing harm to AIs. Is it wrong to use a superintelligent agent and then discard it? Does that intelligence merit a compassion we'd normally direct to people? And then the stunner: he suggests that we checkpoint these failed AIs and later, when better superintelligences figure out how, re-instantiate them in a happier fashion. And yes, he suggests, perhaps playfully, that this mirrors the notion of a heaven for those AIs.
The book is not always an easy read. Bostrom's complex use of language and unusual words is a delight, but he often tortures English to a point of near incomprehensibility. Take-aways are few; the use of "if," "suppose," "perhaps," and a hundred other qualifying words turns the volume into a compendium of possibilities rather than predictions. But it is thought-provoking and I think a valuable read for anyone with a technical bent who wonders about the future of AI. My takeaway is that superintelligent AI is coming, he thinks in the not too distant future, and we're basically screwed.
Some of the ideas presented are truly wild. Suppose there's an AI that programmers created whose goal is to efficiently make paperclips. It gets smarter and smarter, and eventually launches self-replicating AIs that convert all of the material in the universe into paperclips!
I found myself taking copious notes. The profusion of ideas Bostrom presents makes you think. And that is the real value of this work.
What about AI writing firmware? I explored that here.
Recommended, with caveats. It will make you think, but may require more work to devour than most popular science books.
Feel free to email me with comments.
Back to Jack's blog index page.
Recent blog postings:
- My GP-8E Computer - About my first (working!) computer
- Humility - On The Death of Expertise and what this means for engineering
- On Checklists - Relying on memory is a fool's errand. Effective people use checklists.
- Why Does Software Cost So Much? - An exploration of this nagging question.
- Is the Future All Linux and Raspberry Pi? - Will we stop slinging bits and diddling registers?
- Will Coronavirus Spell the End of Open Offices? - How can we continue to work in these sorts of conditions?
- Problems in Ramping Up Ventilator Production - It's not as easy as some think.
- Lessons from a Failure - what we can learn when a car wash goes wrong.
- Life in the Time of Coronavirus - how are you faring?
- Superintelligence - A review of Nick Bostrom's book on AI.
- A Lack of Forethought - Y2K redux
- How Projects Get Out of Control - Think requirements churn is only for software?
- 2019's Most Important Lesson - The 737 Max disasters should teach us one lesson.
- On Retiring - It's not quite that time, but slowing down makes sense. For me.
- On Discipline - The one thing I think many teams need...
- Data Seems to Have No Value - At least, that's the way people treat it.
- Apollo 11 and Navigation - In 1969 the astronauts used a sextant. Some of us still do.
- Definitions Part 2 - More fun definitions of embedded systems terms.
- Definitions - A list of (funny) definitions of embedded systems terms.
- On Meta-Politics - Where has thoughtful discourse gone?
- Millennials and Tools - It seems that many millennials are unable to fix anything.
- Crappy Tech Journalism - The trade press is suffering from so much cost-cutting that it does a poor job of educating engineers.
- Tech and Us - I worry that our technology is more than our human nature can manage.
- On Cataracts - Cataract surgery isn't as awful as it sounds.
- Can AI Replace Firmware? - A thought: instead of writing code, is the future training AIs?
- Customer non-Support - How to tick off your customers in one easy lesson.
- Learn to Code in 3 Weeks! - Firmware is not simply about coding.
- We Shoot For The Moon - a new and interesting book about the Apollo moon program.
- On Expert Witness Work - Expert work is fascinating but can be quite the hassle.
- Married To The Team - Working in a team is a lot like marriage.
- Will We Ever Get Quantum Computers? - Despite the hype, some feel quantum computing may never be practical.
- Apollo 11, The Movie - A review of a great new movie.
- Goto Considered Necessary - Edsger Dijkstra recants on his seminal paper
- GPS Will Fail - In April GPS will have its own Y2K problem. Unbelievable.
- LIDAR in Cars - Really? - Maybe there are better ideas.
- Why Did You Become an Engineer? - This is the best career ever.
- Software Process Improvement for Firmware - What goes on in an SPI audit?
- 50 Years of Ham Radio - 2019 marks 50 years of ham radio for me.
- Medical Device Lawsuits - They're on the rise, and firmware is part of the problem.
- A retrospective on 2018 - My marketing data for 2018, including web traffic and TEM information.
- Remembering Circuit Theory - Electronics is fun, and reviewing a textbook is pretty interesting.
- R vs D - Too many of us conflate research and development
- Engineer or Scientist? - Which are you? John Q. Public has a hard time telling the difference.
- A New, Low-Tech, Use for Computers - I never would have imagined this use for computers.
- NASA's Lost Software Engineering Lessons - Lessons learned, lessons lost.
- The Cost of Firmware - A Scary Story! - A Halloween story to terrify.
- A Review of First Man, the Movie - The book was great. The movie? Nope.
- A Review of The Overstory - One of the most remarkable novels I've read in a long time.
- What I Learned About Successful Consulting - Lessons learned about successful consulting.
- Low Power Mischief - Ultra-low power systems are trickier to design than most realize.
- Thoughts on Firmware Seminars - Better Firmware Faster resonates with a lot of people.
- On Evil - The Internet has brought the worst out in many.
- My Toothbrush has Modes - What! A lousy toothbrush has a UI?
- Review of SUNBURST and LUMINARY: An Apollo Memoir - A good book about the LM's code.
- Fun With Transmission Lines - Generating a step with no electronics.
- On N-Version Programming - Can we improve reliability through redundancy? Maybe not.
- On USB v. Bench Scopes - USB scopes are nice, but I'll stick with bench models.