Official Robot Ethics Released: Do No Harm, Obey Orders

A professor of robotics in England says, “The problem with AI systems right now, especially these deep learning systems, is that it’s impossible to know why they make the decisions they do.” If this is true, then robots are already out of control. – TN Editor

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institution has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS 8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented “the first step towards embedding ethical values into robotics and AI”.

“As far as I know this is the first published standard for the ethical design of robots,” Winfield said after the event. “It’s a bit more sophisticated than Asimov’s laws – it basically sets out how to do an ethical risk assessment of a robot.”

The BSI document begins with some broad ethical principles: “Robots should not be designed solely or primarily to kill or harm humans; humans, not robots, are the responsible agents; it should be possible to find out who is responsible for any robot and its behaviour.”

It goes on to highlight a range of more contentious issues, such as whether an emotional bond with a robot is desirable, particularly when the robot is designed to interact with children or the elderly.

Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield, said this was an example of where robots could unintentionally deceive us. “There was a recent study where little robots were embedded in a nursery school,” he said. “The children loved it and actually bonded with the robots. But when asked afterwards, the children clearly thought the robots were more cognitive than their family pet.”



1 Comment

  1. Robots have to know what a human considers “harm” in order to do that. That could be anything from eliminating our jobs to micro-aggressive comments. Police have already used robots to blow a “suspect” to bits, so just like everything else, government and their robots get to ignore the rules while we and our robots must obey them.
