
Asimov’s three laws don’t work for AI

02 October 2019


Philosopher says they are too ambiguous

Hopes that the Three Laws of Robotics, developed by science fiction writer Isaac Asimov (1920-1992), would guard against potentially dangerous artificial intelligence have been dashed by a philosopher in China.

The laws, as they first appeared in his 1942 short story Runaround, were:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence if such protection does not conflict with the First or Second Law.

A 'Zeroth' Law was added in Robots and Empire (1985): "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

However, Chris Stokes, a philosopher at Wuhan University in China, warned: "Many computer engineers use the three laws as a tool for how they think about programming." The trouble is, he argues, they don't work.

He said that the First Law fails because of ambiguity in language, and because of complicated ethical problems that cannot be answered with a simple yes or no.

The Second Law fails because of the unethical nature of having a law that requires sentient beings to remain as slaves.

The Third Law fails because it results in a permanent social stratification, with a vast amount of potential exploitation built into this system of laws.

The 'Zeroth' Law, like the First, fails because of ambiguous ideology. All of the Laws also fail because of how easy it is to circumvent the spirit of the law while still remaining bound by its letter.
