The pace of AI development is accelerating along with the speed at which these systems learn, so new versions reach the market far faster. The overall effect is a technology approaching escape velocity from regulation and control. If regulators cannot understand what needs to be regulated, or how, they are useless. Worse, AI programmers themselves admit that they do not understand how their AI reaches conclusions.
AGI is not the biggest threat; rather, the willingness of humans to obey a lifeless, soulless algorithm is the problem. ⁃ TN Editor
The U.K. prime minister’s AI task force adviser said large AI models would need regulation and control in the next two years to curb major existential risks.
The artificial intelligence (AI) task force adviser to the prime minister of the United Kingdom said humans have roughly two years to control and regulate AI before it becomes too powerful.
In an interview with a local U.K. media outlet, Matt Clifford, who also serves as the chair of the government’s Advanced Research and Invention Agency (ARIA), stressed that current systems are getting “more and more capable at an ever-increasing rate.”
He went on to say that if officials do not address safety and regulation now, the systems will become "very powerful" within two years.
“We’ve got two years to get in place a framework that makes both controlling and regulating these very large models much more possible than it is today.”
Clifford warned that there are “a lot of different types of risks” when it comes to AI, both in the near term and long term, which he called “pretty scary.”
The interview followed a recent open letter published by the Center for AI Safety, signed by 350 AI experts, including OpenAI CEO Sam Altman, which said AI should be treated as an existential threat on par with nuclear weapons and pandemics.
“They’re talking about what happens once we effectively create a new species, sort of an intelligence that’s greater than humans.”
The AI task force adviser said that these threats posed by AI could be “very dangerous” and could “kill many humans, not all humans, simply from where we’d expect models to be in two years’ time.”
According to Clifford, the primary focus for regulators and developers should be understanding how to control the models, and then implementing regulations on a global scale.
For now, he said his greatest fear is the lack of understanding of why AI models behave the way they do.
“The people who are building the most capable systems freely admit that they don’t understand exactly how [AI systems] exhibit the behaviors that they do.”
Clifford highlighted that many of the leaders of organizations building AI also agree that powerful AI models must undergo some kind of audit and evaluation process before deployment.
Currently, regulators worldwide are scrambling to understand the technology and its ramifications, while trying to create regulations that protect users and still allow for innovation.
On June 5, officials in the European Union went so far as to suggest mandating that all AI-generated content be labeled as such to prevent disinformation.
In the U.K., a front-bench member of the opposition Labour Party echoed the sentiments mentioned in the Center for AI Safety’s letter, saying technology should be regulated like medicine and nuclear power.
Perhaps the real motive for “regulating” AI is that the EU is terrified that AI may turn out to be brilliant at supplying accurate information and refractory at being trained to lie, unlike the human officials in the EU? It is conceivable that AI (or whatever software is currently thus marketed) may turn out to be a friend of The People and an enemy of Authoritarian Bureaucratic Dictatorships?
I am observing human behaviour at large, inequality, working poor, the treatment of the global south, the treatment of refugees, slave trade, exploitation, exploitative mining, deep sea mining, land theft, water theft, corporate imperialism, monocultures, oil and gas theft, mineral theft, wars, disinformation, climate change, biodiversity loss…
Gee, you say A.I. brings change, I say, bring it on, it can’t get any worse.