Oren Etzioni, CEO of the Allen Institute for AI, has outlined a series of potential warning signs that would alert us that "super-intelligence" is around the corner.
Humans must be ready for signs of robotic super-intelligence but should have enough time to address them, a top computer scientist has warned.
Prof Etzioni recently penned an article titled: "How to know if artificial intelligence is about to destroy civilisation."
He wrote: “Could we wake up one morning dumbstruck that a super-powerful AI has emerged, with disastrous consequences?
“Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark, as well as more recent articles, argue that malevolent super-intelligence is an existential risk for humanity.
“But one can speculate endlessly. It’s better to ask a more concrete, empirical question: What would alert us that super-intelligence is indeed around the corner?”
He likened these warning signs to canaries in coal mines: the birds, being more sensitive to carbon monoxide than humans, would collapse and give miners early warning of the danger.
Prof Etzioni argued these warning signs come when AI programmes develop a new capability.
Writing in MIT Technology Review, he continued: "Could the famous Turing test serve as a canary? The test, invented by Alan Turing in 1950, posits that human-level AI will be achieved when a person can't distinguish conversing with a human from conversing with a computer.
“It’s an important test, but it’s not a canary; it is, rather, the sign that human-level AI has already arrived.
“Many computer scientists believe that if that moment does arrive, superintelligence will quickly follow. We need more intermediate milestones.”
He suggested that the "automatic formulation of learning problems" would be the first canary, followed by self-driving cars.