UK to ‘do its own thing’ on AI regulation – what could that mean?
LONDON – The UK has said it wants to “do its own thing” when it comes to regulating artificial intelligence, hinting at a possible divergence from approaches taken by mainstream Western peers.
“It’s really important that we as the UK do our part in terms of regulation,” Feryal Clark, the UK’s minister for artificial intelligence and digital government, told CNBC in an interview broadcast on Tuesday.
She added that the government already has “good relationships” with AI companies such as OpenAI and Google DeepMind, which voluntarily open up their models to the government for safety testing purposes.
“It’s really important that we bake in those safety considerations right from the start when the models are being developed … and so we will work with the sector on any safety measures that come up,” Clark said.
Her comments echoed Prime Minister Keir Starmer’s remarks on Monday that Britain “has the freedom to do what we think is best for the UK in relation to this arrangement” after Brexit.
“You have different models around the world, there’s the EU approach and there’s the US approach – but we have the opportunity to choose the model that we think is in our best interests and we intend to do that,” Starmer said in response to a reporter’s question, after announcing a 50-point plan to make the UK a global leader in AI.
Divergence from the US and EU
So far, Britain has avoided introducing formal laws to regulate artificial intelligence, instead deferring to individual regulators to apply existing rules to businesses when it comes to the development and use of AI.
This is in contrast to the EU, which introduced comprehensive, pan-European legislation aimed at harmonizing rules for technology within the bloc with a risk-based approach to regulation.
Meanwhile, the US lacks any AI regulation at the federal level and has instead adopted a patchwork of regulatory frameworks at the state and local level.
During Starmer’s election campaign last year, Labour pledged in its manifesto to introduce regulation focusing on so-called “frontier” AI models, citing large language models such as OpenAI’s GPT.
However, the UK has yet to confirm details of the proposed AI safety legislation, instead saying it will consult with industry before proposing formal regulations.
“We will work with the sector to develop this and align it with what we said in our manifesto,” Clark told CNBC.
Chris Mooney, partner and commercial head of London-based law firm Marriott Harrison, told CNBC that the UK is taking a “wait and see” approach to AI regulation, even as the EU moves forward with its own AI Act.
“While the UK government says it is taking a ‘pro-innovation’ approach to AI regulation, our experience with clients is that they find the current situation uncertain and therefore unsatisfactory,” Mooney told CNBC via email.
One area where the Starmer government has talked about reforming AI regulations is copyright.
At the end of last year, Britain opened a consultation reviewing the country’s copyright framework, evaluating possible exceptions to the existing rules for AI developers who use the works of artists and media publishers to train their models.
Businesses remain uncertain
Sachin Dev Duggal, CEO of London-headquartered AI startup Builder.ai, told CNBC that while the government’s AI action plan “shows ambition,” acting without clear guidelines is “borderline reckless.”
“We’ve already missed important regulatory windows twice — first with cloud computing, then with social media,” Duggal said. “We can’t make the same mistake with AI, where the stakes are exponentially higher.”
“UK data is our greatest asset; it should be used to build sovereign AI capabilities and create British success stories, not to power foreign algorithms that we cannot effectively regulate or control,” he said.
Labour was first expected to set out details of its plans for AI legislation in King Charles III’s speech at the opening of the UK Parliament last year.
However, the government committed only to creating “appropriate legislation” for the most powerful AI models.
“The UK government needs to clarify here,” said John Buyers, international head of AI at law firm Osborne Clarke, adding that he had learned from sources that the consultation on formal AI safety laws was “waiting to be released.”
“The UK has missed an opportunity to provide a holistic view of where the AI economy is headed by delivering piecemeal consultations and plans,” he said, adding that the lack of detail on the new AI safety laws would cause uncertainty for investors.
Still, some figures in the UK tech scene think a more relaxed, flexible approach to AI regulation is in order.
Russ Shaw, founder of advocacy group Tech London Advocates, told CNBC: “It’s clear from recent discussions with the government that there is a lot of protectionism going on around AI.”
He added that the UK is well-positioned to adopt a “third way” of AI safety and regulation – “sector-specific” rules governing different industries such as financial services and healthcare.