Scientist Warns: Intelligent AI Robots Capable Of Destroying Mankind


TN Note: Dr Eden may have been watching too many science fiction movies, but his bio is impressive. He asks below, “Would the age of the superhuman usher in an era where the notion of being human has passed?” TN posts such stories to let the reader understand that many prominent scientists and engineers really believe this stuff; or, they have just jumped the rails.

AI robots could end up destroying mankind, or even completely change what it means to be human, if we let them think for themselves, a scientist has warned.

Dr Amnon Eden said more needs to be done to look at the risks of continuing towards an AI world.

He warned that we are getting close to the point of no return with AI, without a proper understanding of the consequences.

Dr Eden said: “The New Year needs to see this ill-informed controversy replaced by a better informed analysis of the potential impact of AI and of its applications.

“In 2016 expert risk analysis must gain a far greater role in the thinking of policy and decision makers, of governments and corporations.”

Dr Eden is principal of the Sapience Project, a think-tank formed to look at the potentially disruptive impact of artificial intelligence (AI).

Science fiction has regularly explored whether robots could destroy mankind, most famously the Terminator films starring Arnold Schwarzenegger.

Dr Eden’s stance comes after Oxford Professor Nick Bostrom said that superintelligent AI may “advance to a point where its goals are not compatible with that of humans”.

He claimed that, unlike climate change and genetic engineering, where governments across the globe are putting mechanisms in place to minimise the risks, “nothing is being done to control the advance of AI”.

Professor Bostrom said: “There is a policy vacuum which must be filled if this inevitable advance is to be used wisely.”

Discussing the potential risks, he said: “A computer is basically a box and what goes into it is all that it can use, so unless we tell it that people in cold countries may die if they have no heating, and that’s bad, how will it know? How can we define what ‘good’ or ‘friendly’ actually is?

“The result of the singularity may not be that AI is being malicious but it may well be that it is not able to ‘think outside its box’ and has no conception of human morals.

“The other AI concern is one that Hollywood has been using for years as a theme and that is the rise of the ‘Terminator’ type being and the struggle to survive against an army of hostile shape-shifting robots being run by a self-aware AI called Skynet.”

Read full story here…
