
A Washington Post article (https://www.washingtonpost.com/classic-apps/the-ai-anxiety/2015/12/23/1b687008-6175-11e5-8e9e-dce8a2a2a679_story.html) by the usually-interesting Joel Achenbach discusses fears many thinkers have about the future of AI. Will intelligent machines rule the world? Kill off humanity?

I remember visiting Marvin Minsky's AI lab at MIT around 1970 while scouting colleges. They had a PDP-12 computer, a joke of a machine by today's standards, yet big and expensive. Today, a $500 iPad has more computing power than the fastest computer in the world in 1985, which cost $35 million ($77m in today's dollars). The people interviewed in the article indirectly address the exponential growth of compute power, wondering what it portends for the future.

Everyone talks about Moore's Law, but few outside the industry understand it. Tom Friedman of the New York Times is completely off-base in his interpretation of it; he equates the "law" with pretty much all progress. Moore's Law is not a law but an observation. There is no reason to think it will continue. It has continued because it's also an aspiration - the semiconductor people work hard to double transistor density every two years. That pace has slowed recently, and the future looks like more scaling, but at increasingly unattractive prices. There's another "law," Dennard scaling, which held that each doubling also brings benefits like lower power and faster clocks. Few outsiders understand how much this scaling contributed to faster machines. Alas, Dennard scaling completely fell apart at the 90 nm node. And there are other roadblocks that may slow progress in computation, like the mismatch between memory and CPU speeds. Time will tell.
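To see how fast that aspiration compounds, here's a trivial back-of-the-envelope sketch in C. The starting figure of a billion transistors in 2015 is a round number I picked purely for illustration, not data from anywhere:

```c
#include <stdio.h>

/* Back-of-the-envelope Moore's Law: one doubling of transistor
   density every two years. The starting count (1e9 in 2015) is an
   assumed round number, chosen only to show the compounding. */
int main(void)
{
    double transistors = 1.0e9;
    for (int year = 2015; year <= 2035; year += 2) {
        printf("%d: %.1e transistors\n", year, transistors);
        transistors *= 2.0;   /* one doubling per two-year node */
    }
    return 0;
}
```

Ten doublings over twenty years works out to roughly a thousandfold increase, which is why even a modest slowdown in the cadence matters so much for the long run.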

The article talks about increasingly-ubiquitous robots. It claims that people are better than robots at fine, agile motions, but that isn't true. Today's electronics cannot be assembled by humans - we can't even place the components on a circuit board with the precision needed; a pick-and-place machine does that. To make an IC, a stepper has to position masks to superhuman precision. So robots can be quite agile.

But today's robots are amazingly dumb. They can only do simple, repetitive actions. There's a lot of work going on to make them smarter, and I have no doubt that we will see robots that can put away the dishes, do the laundry, etc., probably in my lifetime. Certainly in my kids' lifetimes.

We will see driverless cars quite soon, and they will probably be better drivers than we are. But these technologies bring up the central idea of the article (one it only barely alludes to): that of machines making ethical choices. Today there's a lot of thought going into problems that go beyond the technical. For instance, should a car break the law to increase safety? It might be impossible to merge onto a high-speed road with human drivers going bumper to bumper at 85 MPH. A robot car that sticks to the speed limit will never make the merge.

Or suppose there's a situation where the only choices are to run through a crowd of people or plow over a baby carriage. In the real world a human driver never really makes this choice; he panics and takes some random action. But the people designing a driverless car's software have to program these decisions into the code. These are philosophical and legal issues, yet philosophers and legislators are abdicating the thinking to the techies. I'm not sure that is healthy.
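To make that concrete, here's a deliberately oversimplified sketch in C of what "programming the decision into the code" means. Everything in it - the paths, the harm scores, the choose_path() function - is invented for illustration; no real autonomous-vehicle stack looks like this:

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical illustration only: the paths, the harm metric, and
   choose_path() are all invented. The point is that a driverless
   car's "ethics" end up as explicit code that an engineer wrote. */
enum path { PATH_THROUGH_CROWD, PATH_OVER_CARRIAGE };

struct outcome {
    enum path path;
    double expected_harm;   /* someone must define what "harm" means */
};

/* Pick the option with the lowest expected harm. The philosophy
   hides in how expected_harm was scored, not in this loop. */
static enum path choose_path(const struct outcome *options, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (options[i].expected_harm < options[best].expected_harm)
            best = i;
    return options[best].path;
}

int main(void)
{
    /* Made-up harm scores; a real system would have to justify them. */
    struct outcome options[] = {
        { PATH_THROUGH_CROWD, 0.9 },
        { PATH_OVER_CARRIAGE, 0.7 },
    };
    printf("chosen path: %d\n", (int)choose_path(options, 2));
    return 0;
}
```

The loop is trivial. The hard question is how expected_harm gets scored - and right now it's engineers, not philosophers or legislators, who answer it.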

Ray Kurzweil is mentioned in the article. He has done some interesting work and predicts that computers will have the power of the human brain by about 2030-2040. He goes on to make some interesting, but I think intellectually-suspect, arguments about the implications. It's not at all clear to me that lots of compute logic is the same as thinking. It might be, but no one really knows.

While the article goes on to blue-sky about super-intelligent computers wreaking havoc, I think a more likely, and much more imminent, threat is the destruction of work by smart machines. Driverless cars will mean the end of cabbies, truck drivers, garbage collectors, UPS delivery people, and far more. Automation is changing manufacturing. Even Foxconn in China is installing 1 million robots to replace workers there. When Chinese workers are too expensive, what happens to those trying to compete in the West? Even legal research is in some cases done by computers today. Maybe receptionists will be replaced, too (http://qz.com/584727/does-this-humanlike-robot-receptionist-make-you-feel-welcome-or-creeped-out/). Grocery stores now have automated checkouts. It's hard to imagine many jobs, other than those in science, engineering, etc., that can't be automated.

So what happens when the machines replace, say, 20% of the workforce? I expect to see that in a decade or two. 40%? 60%?

When robots can make everything, including robots, and mine raw materials, labor will have little value. Will everything then be free?

Or will we see a Marxian situation: the mass of workers no longer able to make a living, while the owners of the mines and robots somehow exploit the former working class? There's a lot of vapid thinking today about the "1% vs the 99%," but more than a little of that income inequality is a symptom of automation.

How about a Luddite-like revolution, but one of national or even global scope? Yet one can't really stop the march of technology. In "A Canticle for Leibowitz" nuclear war destroys civilization. The survivors hold the techies accountable and kill them; Leibowitz, an engineer, founds a monastic order to preserve scientific knowledge. A thousand years later the tech is back, but people haven't changed, so the same problem repeats itself.

Perhaps there will be a Star Trek-like economy where everything is free and there are no wages, so people pursue activities to better themselves. I don't have enough faith in human nature to think that's likely. It seems to me that people will still be people, and idle hands are not a good situation.

I worry about the destruction of work and the effect it will have on society. Ironically, it's we, the readers of embedded.com, who build the machines that replace labor. I feel it's foolish to try to halt technological progress, as technology also holds the only hope we have of dealing with other huge and looming problems. What's the answer? Do you have any thoughts?

Published December 29, 2015