Jack Ganssle's Blog
This is Jack's outlet for thoughts about designing and programming embedded systems. It's a complement to my biweekly newsletter The Embedded Muse. Contact me at jack@ganssle.com. I'm an old-timer engineer who still finds the field endlessly fascinating (bio).

For novel ideas about building embedded systems (both hardware and firmware), join the 40,000+ engineers who subscribe to The Embedded Muse, a free biweekly newsletter. The Muse has no hype and no vendor PR. Click here to subscribe.

Superintelligence

March 5, 2020

What happens when AI becomes smarter than a smart human being? This is the subject of Nick Bostrom's 2014 book Superintelligence. I found it an interesting and at times frustrating read.

Nick postulates that strong AI - artificial intelligence that surpasses human capabilities in most endeavors - is coming, perhaps in the next few decades. He outlines a number of ways this may happen, from the conventional notion of really smart software, to slicing a brain into microscopically thin segments and "reading" the structure of each to create an emulation of the human brain.

His supposition that such an emulation can be derived from merely mapping the cortex's physical structure strikes me as naive. Do memory and intelligence lie solely within gross anatomy, or is it possible some portion resides in electrical loops spinning between neurons and synapses that won't appear in such a map? Regardless, his other approaches to superintelligence are compelling, and for me it's hard to imagine that sufficiently complex hardware and software won't someday exhibit a level of intelligence beyond our comprehension.

But with a human's intelligence derived from 100 billion neurons, each sporting perhaps thousands of synapses, the amount of computational resources needed would seem vast. I wonder (Bostrom doesn't go into this) how many of those neurons are needed for thought. Some proportion surely handle only basal responses unrelated to thinking, so perhaps that cuts the required complexity by a big factor.
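A quick back-of-the-envelope, using round numbers of my own rather than Bostrom's: 10^11 neurons with perhaps 10^3 synapses each gives on the order of 10^14 connections. At a byte of state per synapse that's roughly 100 terabytes, and if each connection updates even once per millisecond, an emulation must sustain something like 10^17 synaptic events per second, which is exascale-supercomputer territory. The estimate is crude, but it hints at why whole-brain emulation waits on future hardware.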

Of course, there may be two dimensions to this: gross complexity, in terms of the number of connections, versus speed, as computers process far faster than organic matter does.

Copious notes are provided; too many, really, as they become distracting. Yet some offer fascinating insights, and I recommend exploring them.

Bostrom speculates on the impact of superintelligence on the human condition; specifically, on the nature of work, a problem I've wondered about for years. If the machines are self-replicating and capable of most any sort of work we humans do, what is the value of labor? It seems to me, and to the author, that labor, human labor, becomes worthless. So how do people survive when they can't perform valuable and compensated work?

Bostrom's answer isn't particularly satisfying to me. He believes that while incomes will disappear, capital will explode. I don't entirely follow his logic. One of the cited reasons is colonization of new land in space. That strikes me as beyond speculation and into the realm of magic, as we have no inkling today of how space travel will ever be cheap, despite Elon Musk's unproven fantasies. It takes a lot of energy to escape Earth's gravitational well, and chemical rockets will never afford inexpensive access to deep space.
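A back-of-the-envelope with the rocket equation (my numbers, not Bostrom's) shows why: the mass ratio at liftoff is m0/m1 = e^(delta-v/ve). Escape velocity is about 11.2 km/s before gravity and drag losses, and the best chemical exhaust velocity (hydrogen/oxygen) is about 4.4 km/s, so the ratio is e^2.5, or roughly 13. More than 90% of the vehicle must be propellant before structure, staging, and payload get their share; the chemistry itself caps how cheap chemical launch can ever be.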

Bostrom feels that many factors will result in redistribution of that capital; trillionaires will feel compelled to share their wealth. Perhaps. Some of his notions sound like Bernie Sanders pounding the podium.

Yet in the notes he writes compellingly: "Providing 7 billion people an annual pension of $90,000 would cost $630 trillion a year, which is ten times the current world GDP. Over the last 100 years, world GDP has increased 19-fold." (I assume these are inflation-adjusted dollars.) "So if the growth rates we have seen over the last hundred years continue for the next two hundred years, while population remained constant, then providing everybody with an annual $90,000 pension would cost 3% of world GDP. An intelligence explosion might make this amount of growth happen in a much shorter time."
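His arithmetic holds up: 19-fold growth per century compounds to 19^2, or about 361-fold, over two centuries. Current world GDP is about $63 trillion by his "ten times" figure, so future GDP would be near $23,000 trillion, of which $630 trillion is about 2.8%, close to his 3%.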

He does lay out a number of alternative scenarios, though most of those are quite bleak.

The general thrust of the book is that AI will arrive in the form of machine intelligence enabled by computers and software, and that while there are many possible outcomes, most are undesirable if we value human life. I am reminded of Mark Twain's last story, The Mysterious Stranger, in which a young boy befriends a stranger who grants his every wish, but no matter how carefully framed those wishes might be, they all result in disastrous outcomes. (The stranger turns out to be Satan.) Bostrom examines hundreds of possibilities, but they mostly end in the superintelligence dominating the world and even the galaxy.

I found two areas missing from the book. First, he discusses superintelligence arriving only via machines: digital hardware and software. Might there be an organic version engineered by humans? Of course, software has the benefit of zero reproduction cost and the chance of very rapid evolution, so the omission is understandable.

Second, if the dire predictions given were indeed to come to pass, would people revolt? We recall the Luddites, who (ineffectually) destroyed looms to preserve jobs. But what about a Canticle for Leibowitz scenario, where, after the horrors of nuclear war, the populace kills all the engineers and scientists and the world reverts to a medieval age of ignorance and poverty?

Bostrom never addresses consciousness. That is probably wise, as it's a hard concept to even define. The superintelligences Bostrom envisions are rather like smart viruses that use directed goals to achieve some objective. Cold and heartless, they just execute their ever-improving programming. They might reproduce like mad to reach a goal, or use inventive strategies. But are they introspective? Thoughtful?

He does delve into the fascinating question of AIs causing harm to humans, and the corresponding evil of humans doing harm to AIs. Is it wrong to use a superintelligent agent and then discard it? Does that intelligence merit the compassion we'd normally direct to people? And then the stunner: he suggests that we checkpoint these failed AIs and later, when better superintelligences figure out how, re-instantiate them in a happier fashion. And yes, he suggests, perhaps playfully, that this mirrors the notion of a heaven for those AIs.

The book is not always an easy read. Bostrom's complex use of language and unusual words is a delight, but he often tortures English to the point of near incomprehensibility. Takeaways are few; the use of "if," "suppose," "perhaps," and a hundred other qualifying words turns the volume into a compendium of possibilities rather than predictions. But it is thought-provoking, and I think a valuable read for anyone with a technical bent who wonders about the future of AI. My takeaway is that superintelligent AI is coming, he thinks in the not-too-distant future, and we're basically screwed.

Some of the ideas presented are truly wild. Suppose programmers create an AI whose goal is to make paperclips efficiently. It gets smarter and smarter, and eventually launches self-replicating AIs that convert all of the material in the universe into paperclips!

I found myself taking copious notes. The profusion of ideas Bostrom presents makes you think. And that is the real value of this work.

What about AI writing firmware? I explored that here.

Recommended, with caveats. It will make you think, but may require more work to devour than most popular science books.

Feel free to email me with comments.

Back to Jack's blog index page.
