Expert: AI Soldiers Will Develop ‘Moral Compass’ And Defy Orders

Image credit: Wikimedia Commons
It has already been demonstrated that AI algorithms exhibit the biases of their creators, so why not murderous intent as well? Technocrats are so absorbed with their Pollyanna inventions that they cannot see the logical end of their creations. ⁃ TN Editor

Murderous robot soldiers will become so advanced they could develop their own moral code to violently defy orders, an AI expert claims.

Ex-cybernetics engineer Dr Ian Pearson predicts we are veering towards a future of conscious machines.

But if robots are thrust into action by military powers, the futurologist warns they will be capable of conjuring up their own “moral viewpoint”.

And if they do, the ex-rocket scientist claims they may turn against the very people sending them out to battle.

Dr Pearson, who blogs for Futurizon, told Daily Star Online: “As AI continues to develop and as we head down the road towards consciousness – and it isn’t going to be an overnight thing, but we’re gradually making computers more and more sophisticated – at some point you’re giving them access to moral education so they can learn morals themselves.

“You can give them reasoning capabilities and they might come up with a different moral code, which puts them on a higher pedestal than the humans they are supposed to be serving.

Asked if this could prove fatal, he responded: “Yes, of course.

“If they are in control of weapons and they decide that they are a superior moral being than the humans they are supposed to be guarding, they might make decisions that certain people ought to be killed in order to protect the larger population.

“Who knows what decisions they might take?

“If you have a guy on a battlefield, telling soldiers to shoot this bunch of people, for whatever reason, but the computer thinks otherwise, the computer is not convinced by it, it might conclude that soldier giving the orders is the worst offender rather than the people he’s trying to kill, so it might turn around and kill him instead.

“It’s entirely possible, it depends on how the systems are written.”

Dr Pearson’s warning comes amid growing concern over the use of fully autonomous robots in war.

Read full story here…

Reader comment from Pyra:

Idiotic. Do any of these computer dorks ever think about what they are saying? I mean, think it out past the tips of their noses.

The saying, “professing themselves to be wise, they became fools…” has a lot of relevance here.