By Jack Ganssle

The Fourth Law

Published 10/10/2007

Sixty-five years ago Isaac Asimov defined the now-famous Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

When he wrote the I, Robot stories, the word "robot" viscerally meant, to John Q. Public, a mechanical being that invariably looked human. Indeed, that perception lived on even into Star Trek: The Next Generation, where Data not only looked entirely human but grappled with emotions.

But with the advent of inexpensive computing "robot" no longer means a human-like machine. The car's cruise control is a form of robot; the agents that scour the net are called "bots," and mail-delivering robots roam the halls of the Pentagon and other buildings. They look more like filing cabinets on wheels than like Data. So I think it's reasonable to substitute "computer" for "robot" in the Three Laws.

And it's time to add a Fourth Law:

4. A computer may not piss off a human, as long as such behavior doesn't conflict with the First, Second, or Third Laws.

For example, I was on a long flight recently and the seatback TV screens were, as usual, behaving badly. They reset constantly. They'd freeze. The flight crew cycled power a couple of times and eventually the units started running reasonably well.

But I wanted to sleep, so I turned my unit off. Abid, seated to my right in 31H, also flicked his unit off. We each reclined.

And, minutes later, both screens came back on. We turned them off, and they came on again. And again and again.

Abid and I shared only the universal language of annoyed computer users, so we could merely point at these poorly-implemented devices and laugh. The screens won, we lost, and on they stayed while the Ambien finally kicked in.

Embedded computers are unlike any other devices on this planet. Other engineered products respond to a user's actions: the car follows inputs to the steering wheel and pedals; lights go on when we throw the switch. But a computer does the commanding; it uses both inputs and a set of rules defined by a programmer to control something. Though I have no idea how to actually build Laws into firmware, I can't help but think these products could use some sort of analogous meta-guidance. In Asimov's stories the Laws are somehow distinct from, and above, the programming details, just as the Ten Commandments supersede the hundreds of thousands of pages of extant law and precedent that guide judges.

Redefining the Four Laws using "computer" instead of "robot" moves the responsibility for conforming to the Laws to the designers rather than to the robots. It's up to us to build systems that are safe, and that don't piss off our users.