In May, hundreds of leading figures in artificial intelligence issued a joint statement describing the existential threat the technology they helped to create poses to humanity.
“Mitigating the risk of extinction from AI should be a global priority,” it said, “alongside other societal-scale risks such as pandemics and nuclear war.”
That single sentence invoking the threat of human eradication, signed by hundreds of chief executives and scientists from companies including OpenAI, Google’s DeepMind, Anthropic and Microsoft, made global headlines.
Driving all of these experts to speak up was the promise, but also the risk, of generative AI, a form of the technology that can process vast amounts of data and produce new content from it.
The release of ChatGPT by OpenAI in November set off feverish excitement by demonstrating the ability of large language models, the underlying technology behind the chatbot, to conjure up convincing passages of text, whether writing an essay or polishing an email.
It created a race between companies in the sector to launch their own generative AI tools for consumers that could generate text and realistic imagery.
The hype around the technology has also brought increased awareness of its dangers: the potential to create and spread misinformation as democratic elections approach; its ability to replace or transform jobs, especially in the creative industries; and the less immediate risk of it becoming more intelligent than humans and superseding them.
Regulators and tech companies have been loud in voicing the need for AI to be controlled, but ideas on how to regulate the models and their creators diverge widely by region.
The EU has drafted tough measures on the use of AI that would put the onus on tech companies to ensure their models do not break the rules. Its groundbreaking AI Act is expected to be fully approved by the end of the year, though it includes a grace period of about two years after becoming law for companies to comply. Brussels has moved far more swiftly than the US, where lawmakers are preparing a broad review of AI to first determine which elements of the technology might need to be subject to new regulation and which can be covered by existing laws.
The UK, meanwhile, is attempting to use its new position outside the EU to fashion a more flexible regime of its own, regulating applications of AI by sector rather than the underlying software. Both the American and British approaches are expected to be more pro-industry than the Brussels law, which has been fiercely criticised by the tech industry.
The most stringent restrictions on AI creators, however, might come from China, as it seeks to balance the competing goals of controlling the information put out by generative models and keeping pace with the US in the technology race.