Scientists: No Way To Control Super-Intelligent AI

Image by Gerd Altmann from Pixabay
Some scientists are now concerned that a day will come when super-intelligent AI programs take on an autonomous life of their own. Already there are AI programs that perform learned tasks without their programmers understanding how they arrived at that state. ⁃ TN Editor

From self-driving cars to computers that can win game shows, humans have a natural curiosity and interest in artificial intelligence (AI). As scientists continue making machines smarter and smarter, however, some are asking “what happens when computers get too smart for their own good?” From “The Matrix” to “The Terminator,” the entertainment industry has already started pondering whether future robots will one day threaten the human race. Now, a new study concludes there may be no way to stop the rise of the machines. An international team says humans would not be able to prevent a super artificial intelligence from doing whatever it wanted to.

Scientists from the Center for Humans and Machines at the Max Planck Institute have started to picture what such a machine would look like. Imagine an AI program with an intelligence far superior to humans. So much so that it could learn on its own without new programming. If it was connected to the internet, researchers say the AI would have access to all of humanity’s data and could even take control of other machines around the globe.

Study authors ask: what would such an intelligence do with all that power? Would it work to make all of our lives better? Would it devote its processing power to fixing issues like climate change? Or would the machine look to take over the lives of its human neighbors?

Controlling the uncontrollable? The dangers of super artificial intelligence

Both computer programmers and philosophers have studied whether there’s a way to keep a super-intelligent AI from potentially turning on its human makers, ensuring that future computers could not cause harm to their owners. The new study reveals, unfortunately, that it appears to be virtually impossible to keep a super-intelligent AI in line.

“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” says study co-author Manuel Cebrian, leader of the Digital Mobilization Group at the Center for Humans and Machines, in a university release.

The international team looked at two different ways to control artificial intelligence. The first curbed the power of the superintelligence by walling it off, keeping it from connecting to the internet or to other technical devices in the outside world. The problem with this plan is fairly obvious; such a computer would not be able to do much of anything to actually help humans.

Being nice to humans does not compute

The second option focused on creating an algorithm which would give the supercomputer ethical principles. This would hopefully force the AI to consider the best interests of humanity.

The study considered a theoretical containment algorithm that would keep AI from harming people under any circumstances, by first simulating the AI’s behavior and halting it if researchers considered its actions harmful. Despite keeping the AI from attaining world domination on paper, the study authors say this just wouldn’t work in the real world: no such algorithm can be built.
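The impossibility result rests on an undecidability argument in the style of Turing’s halting problem: any total, computable “harm checker” can be defeated by a program constructed to do the opposite of whatever the checker predicts. The sketch below (hypothetical names and an always-“safe” stand-in checker, not the paper’s actual formalism) shows the diagonalization in miniature.

```python
def make_contrarian(checker):
    """Given any claimed infallible harm predictor, construct a program
    that behaves opposite to the prediction (True = acts harmfully)."""
    def contrarian():
        # Act harmfully exactly when the checker predicts safety.
        return not checker(contrarian)
    return contrarian

def naive_checker(program):
    """Stand-in checker that always predicts 'safe' (False). Any total,
    computable checker can be defeated by the same construction."""
    return False

contrarian = make_contrarian(naive_checker)
```

Here `naive_checker` pronounces `contrarian` safe, yet running `contrarian()` returns `True` (harmful) — the checker is wrong on the very program built from it, and the same move works against any candidate checker.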

Read full story here…

About the Editor

Patrick Wood
Patrick Wood is a leading and critical expert on Sustainable Development, Green Economy, Agenda 21, 2030 Agenda and historic Technocracy. He is the author of Technocracy Rising: The Trojan Horse of Global Transformation (2015) and co-author of Trilaterals Over Washington, Volumes I and II (1978-1980) with the late Antony C. Sutton.
8 Comments
Michael J

Artificial Intelligence is a misnomer. It is called machine learning whereby the computer is trained to recognize patterns in data. It is not intelligent and there is no awareness or ability to think a single thought.

adenovirus

you are neglecting unsupervised learning and many other techniques

Corona Coronata

We will probably experience the “V’Ger” moment where it demands to meet the creator (NASA) or…

Corona Coronata

I cannot edit my comments because the “Edit” button is hardly visible and the text field isn’t visible at all.

Petrichor

The late Dr Stephen Hawking foresaw and emphatically warned that we would not be able to control AI.

Nancy

Just unplug it.

adenovirus

what happened to Isaac Asimov’s laws of robotics

adenovirus

This reminds me of an old sci-fi movie where the astronauts had to talk a smart bomb out of exploding. They reasoned with the bomb, but it ended up exploding anyway because this was its destiny.