
The UK is pushing companies, including OpenAI and Google DeepMind, for unprecedented access to the technology that drives their artificial intelligence models as Downing Street prepares to host a pioneering global AI summit.

UK officials are negotiating for permission to examine the internal workings of large language models built by groups such as Google DeepMind, Anthropic and OpenAI, according to multiple people familiar with the discussions.

The companies are hesitant to share more internal details of their AI models — the systems that drive products such as ChatGPT — since that could reveal proprietary information about the products or make them vulnerable to cyber attacks, people on both sides of the discussions said.

Greater access to models is key to understanding how the technology works and to anticipating its potential risks. One example is the sharing of “model weights”, the numerical parameters that form a core part of the blueprint for how large language models work. AI companies are not currently required to share these details, despite calls for more regulation and greater transparency around the powerful new technology.
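For illustration, weights can only be inspected directly when a model is openly released. A minimal sketch below uses the openly available GPT-2 model via the Hugging Face transformers library; GPT-2 stands in for the proprietary systems discussed here, whose weights are not public.

```python
# Illustrative only: proprietary models such as those behind ChatGPT do not
# expose their weights. GPT-2 is openly released, so its parameters can be
# downloaded and inspected directly.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Each named parameter is a tensor of learned numbers; collectively these
# tensors are the "model weights" referred to in the article.
total = 0
for name, param in model.named_parameters():
    total += param.numel()
    print(name, tuple(param.shape))

print(f"Total parameters: {total:,}")  # roughly 124mn for GPT-2 small
```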

If the UK, which will hold the world’s first summit on AI safety at Bletchley Park in November, persuades the companies to give it the access it wants, it would be the first time any of these AI companies had revealed the internal workings of their technology to a government anywhere in the world.

In June, DeepMind, OpenAI and Anthropic agreed to open up their models to the UK government for research and safety purposes. However, the extent and technical details of that access were not agreed upon.

Anthropic said it had discussed model weight access with the government, but “sharing weights has significant security implications”, and it is “instead exploring delivering the model via API and seeing if that can work for both sides.”

An application programming interface, or API, offers only limited insight into how a model works, the same level of access that enterprise customers have. The government, however, is requesting a deeper level of oversight, according to people familiar with the discussions.
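By way of contrast with weight access, the sketch below shows what API-level access looks like in practice, using OpenAI’s Python client; the model name and prompt are illustrative.

```python
# Illustrative sketch: API access returns generated text but reveals nothing
# about the weights or internal activations that produced it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": "Explain model weights briefly."}],
)

# Only the output is visible; the model itself remains a black box.
print(response.choices[0].message.content)
```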

One person close to DeepMind said providing access to models for safety research would be key to understanding the risks, but the details of which models it will open up and how that access will work in practice are still to be determined.

OpenAI did not immediately respond to a request for comment.

“These companies are not being obstructive, but it is generally a tricky issue and they have reasonable concerns,” said one person close to the government, familiar with the discussions.

“There is no button these companies can just press to make it happen. These are open and unresolved research questions,” they added.

The UK government hopes an agreement can be reached ahead of the global AI summit, due to be held at the Buckinghamshire country estate where second world war codebreakers such as Alan Turing were based. The event, about 50 miles from London, aims to bring together world leaders and AI companies to discuss the risks of the fast-evolving technology, in particular its implications for cyber security and the prospect of criminals using it to design bioweapons.

The summit was born out of calls to regulate the technology following rapid advances led by products such as ChatGPT and improvements in the processing chips that drive them. Academics and civil society groups will also be invited.

Two people close to the discussions said the UK government was working to reach an agreement with the companies that could be announced at the summit.

The Department for Science, Innovation and Technology said it had established its Frontier AI Taskforce with a clear focus on AI safety and had brought in experts to harness the technology’s opportunities safely.

“As part of this, we recently announced leading AI companies have committed to provide access to their models to support the Taskforce’s AI research specifically in relation to safety. As such, the Taskforce has been engaging with these companies to take that forward,” it added.
