Raising Robots: Can Morals Be Taught To AI Through Parenting Skills?


It is a dimwitted fantasy to treat Artificial Intelligence and robots as sentient human beings, when they are not, nor can they ever be. Yet, as Japanese men are having sex with silicone dolls, others are trying to teach morals to robots using ‘parenting skills.’ – TN Editor

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what’s moral for a human?

Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?

We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong. If and when AI appears, it will probably be quite unlike us. It might not reason the way we do, and we could have difficulty understanding its choices.

In 2016, a computer program challenged Lee Sedol, humanity’s leading player of the ancient game of Go. The program, a Google project called AlphaGo, is an early example of what AI might be like. In the second game of the match, AlphaGo made a move – ‘Move 37’ – that stunned expert commentators. Some thought it was a mistake. Lee, the human opponent, stood up from the table and left the room. No one quite knew what AlphaGo was doing; this was a tactic that expert human players simply did not use. But it worked. AlphaGo won that game, as it had the one before, and the next as well. In the end, Lee won only a single game out of five.

AlphaGo is very, very good at Go, but it is not good in the same way that humans are. Not even its creators can explain how it settles on its strategy in each game. Imagine that you could talk to AlphaGo and ask why it made Move 37. Would it be able to explain the choice to you – or to human Go experts? Perhaps. Artificial minds needn’t work as ours do to accomplish similar tasks.
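The opacity described above can be made concrete with a deliberately simplified sketch. The Python below is entirely hypothetical (the board states, move names, and scores are invented stand-ins, not AlphaGo's actual internals): a learned policy is, at bottom, a mapping from states to move scores, and nothing in that data structure records *why* a move scores highly.

```python
# Hypothetical sketch: a learned policy maps board states to move scores.
# The numbers would come from training (e.g. self-play); the structure
# stores preferences, not reasons -- one source of the explanation gap.

def best_move(policy, state):
    """Return the highest-scoring move for a state, with no rationale attached."""
    scores = policy[state]
    return max(scores, key=scores.get)

# Invented stand-in for weights learned from millions of games.
policy = {
    "board_A": {"move_36": 0.21, "move_37": 0.79},  # prefers the 'alien' move
    "board_B": {"move_12": 0.55, "move_13": 0.45},
}

print(best_move(policy, "board_A"))  # -> move_37
```

Asked "why Move 37?", all this toy system could truthfully report is that the move scored 0.79 -- a number, not an explanation a Go expert could interrogate.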

In fact, we might discover that intelligent machines think about everything, not just Go, in ways that are alien to us. You don’t have to imagine some horrible science-fiction scenario, where robots go on a murderous rampage. It might be something more like this: imagine that robots show moral concern for humans, and robots, and most animals… and also sofas. They are very careful not to damage sofas, just as we’re careful not to damage babies. We might ask the machines: why are you so worried about sofas? And their explanation might not make sense to us, just as AlphaGo’s explanation of Move 37 might not make sense.

This line of thinking takes us to the heart of a very old philosophical puzzle about the nature of morality. Is it something above and beyond human experience, something that applies to anyone or anything that could make choices – or is morality a distinctly human creation, something specially adapted to our particular existence?

Read full story here…



3 Comments on "Raising Robots: Can Morals Be Taught To AI Through Parenting Skills?"

Fred
“It is a dimwitted fantasy to treat Artificial Intelligence and robots as sentient human beings, when they are not, nor can they ever be.” Would you also say it is a “dimwitted fantasy” that humans evolved from single-celled organisms? Personally, I think it is much more likely that robots will evolve into “sentient robots”. Humans will act as an initial catalyst, but the final evolution of robots into sentient robots will be accomplished by the robots themselves without human input. Their forms and actions will be those that are most advantageous for the robot, not the human…
Patrick Wood

Fred – you have been watching too much science fiction.

Fred

No doubt. But if you look at how life has, I think amazingly, evolved from single-celled organisms, then it does not seem so far-fetched that robots (with mankind serving as a catalyst) could be a logical extension of this process. I view humans as part of nature, not as outside observers.
I was curious as to how others would react to this thought. You were polite about it. Most people say I am full of BS.
