Rishi Sunak warned in a speech on Thursday that ‘humanity could lose control of AI completely’ if the technology was not given proper oversight © Peter Nicholls/Getty Images

Ever since Rishi Sunak announced in June that the UK would host the “first major global summit on artificial intelligence safety”, officials in Westminster have been racing to assemble a guest list of tech bosses, policymakers and researchers within a punishing deadline.

Sunak’s pledge to organise such a high-profile event inside just six months was not only an attempt to position the UK as a leader in a hot new field. The organisers were eager to move ahead before the next generation of AI systems is released by companies such as Google and OpenAI, giving global leaders a shot at establishing principles to govern the powerful new technology before it outpaces efforts to control it.

“Ideally we would have had a year to prepare,” said one person involved in organising the summit. “We have been rushing to make this happen before the next [AI] models come.”

Emphasising the high stakes ahead of next week’s summit at Bletchley Park, Sunak warned in a speech on Thursday that “humanity could lose control of AI completely” if the technology was not given proper oversight, even as it created new opportunities.

After ChatGPT brought generative AI — technology capable of rapidly creating humanlike text, images or computer code — into the public eye late last year, there have been increasing concerns over how the software could be abused. Critics say AI will be used to create and spread misinformation, increase bias within society or be weaponised in cyber attacks and warfare.

Rishi Sunak at Bletchley Park
Rishi Sunak said at Bletchley Park on Thursday the UK would not ‘rush to regulate’ AI © Tolga Akmen/Pool/EPA-EFE/Shutterstock

Expected to join the effort to establish ground rules for the development of “frontier AI” next week are political leaders from around 28 countries and blocs, including the US, the EU, Singapore, the Gulf states and China, alongside top executives from Big Tech companies and leading AI developers.

The Financial Times has obtained a list of companies, governments and organisations expected to attend the summit, which is published in full at the end of this article. A UK government spokesperson said: “As is routine, we will not speculate on potential invitees.”

A guest list of around 100 people is expected to include Microsoft president Brad Smith, OpenAI chief executive Sam Altman, Google DeepMind chief Demis Hassabis and, from Meta, chief AI scientist Yann LeCun and president of global affairs Nick Clegg. Elon Musk, the tech billionaire who earlier this year formed a new AI start-up called x.ai, has been invited but has not committed to attend, according to people familiar with the matter.

Chinese tech groups Alibaba and Tencent are due to attend, as is the Chinese Academy of Sciences, the country’s top state-funded science think-tank, according to the list obtained by the FT. A Chinese government delegation is attending from the Ministry of Science and Technology, according to people familiar with its plans.

However, the summit’s select roster of attendees has led to criticism from some organisations and executives outside the tech industry, who feel excluded from the meeting.

The prime minister’s representatives on artificial intelligence — tech investor Matt Clifford and former diplomat Jonathan Black — have spent the best part of a month on planes visiting countries to get to grips with their positions on AI and to find common ground.

People involved with the summit said its remit had expanded considerably in the months since Sunak first announced it. Initially, it had been focused almost exclusively on national security risks, such as cyber attacks and the ability to use AI to design bioweapons; it is now expected to cover everything from deepfakes to healthcare.

Within government, there has been disagreement over the event’s scope, these people said. The Department for Science, Innovation and Technology wanted a wider list of invites and broader discussions on the social impacts of AI, while Number 10 preferred to keep it to a small group of nations and tech bosses to focus on the narrower brief of national security.

“It has been absolute chaos and nobody has been clear who is holding the pen on any of it,” said one person involved in the summit.

The final agenda will, on the first day, involve roundtable discussions on practical ways of addressing safety and what policymakers, the international community, tech companies and scientists can do. It will end with a case study on using AI for the public good in education.

On the second day, led by Sunak, around 30 political leaders and tech executives will meet in a more private setting. Themes covered will include steps on making AI safe, as well as bilateral talks and closing remarks from the host prime minister.

One product of the summit will be a communiqué that is intended to establish attendees’ shared position on the exact nature of the threat posed by AI.

An earlier draft suggested that it would state that so-called “frontier AI”, the most advanced form of the technology, which underpins products such as OpenAI’s ChatGPT and Google’s Bard chatbot, could cause “serious, even catastrophic harm”.

The communiqué is one of four key results organisers are planning from the summit, according to a government insider briefed on the plans. The others are the creation of an AI Safety Institute; an international panel that will research AI’s evolving risks; and the announcement of the event’s next host country.

In Thursday’s speech, Sunak said the UK would not “rush to regulate” AI. Instead, the summit is likely to focus on “best practice” standards for companies, officials involved in the event said.

However, the government is still keen to independently evaluate the models that power AI products. Officials have been negotiating with tech companies over deeper access to their systems. The government has also been trying to buy chips from companies including Nvidia, to build sophisticated computer systems to run independent safety tests on AI models.

Bletchley Park
Bletchley Park, venue for the AI summit and historic home of Britain’s wartime codebreakers and computer pioneers © Jack Taylor/Getty Images

A government paper, set to be published on Friday, will set out recommendations for scaling AI responsibly. Companies should have policies in place to turn off their products if harm cannot otherwise be prevented, employ security consultants to try to “hack” into their systems to identify vulnerabilities, and create labels for content created or modified by AI, the paper says.

Michelle Donelan, the UK’s technology minister who is chairing the first day of the summit, is advocating that AI firms subscribe to these processes at the event.

“You shouldn’t really dream of having a company in this space without this safety process in place,” Donelan told the Financial Times. “The companies are all in agreement that things have got to change. They are uneasy with the current situation, which is basically they’re marking their own homework, and that’s why they’ve agreed to work with us.”

Additional reporting by Hannah Murphy, George Parker and Qianer Liu

UK’s AI Safety Summit
Attendees
Ada Lovelace Institute
Adept
Advanced Research and Invention Agency
African Commission on Human and Peoples’ Rights
AI Now Institute
Alan Turing Institute
Aleph Alpha
Algorithmic Justice League
Alibaba
Alignment Research Center
Amazon Web Services
Anthropic
Apollo Research
ARM
Australia (government)
Berkman Center for Internet & Society, Harvard University
Blavatnik School of Government
British Academy
Brookings Institution
Canada (government)
Carnegie Endowment
Centre for AI Safety
Centre for Democracy and Technology
Centre for Long-Term Resilience
Centre for the Governance of AI
Chinese Academy of Sciences
Cohere
Cohere for AI
Columbia University
Concordia AI
Conjecture
Council of Europe
Cybersecurity and Infrastructure Security Agency
Darktrace
Databricks
EleutherAI
ETH AI Center
European Commission
Faculty AI
France (government)
Frontier Model Forum
Future of Life Institute
Germany (government)
Global Partnership on Artificial Intelligence (GPAI)
Google
Google DeepMind
Graphcore
Helsing
Hugging Face
IBM
Imbue
Inflection AI
India (government)
Indonesia (government)
Institute for Advanced Study
International Telecommunication Union (ITU)
Ireland (government)
Italy (government)
Japan (government)
Kenya (government)
Kingdom of Saudi Arabia (government)
Liverpool John Moores University
Luminate Group
Meta
Microsoft
Mistral
Montreal Institute for Learning Algorithms
Mozilla Foundation
National University of Córdoba
National University of Singapore
Naver
Netherlands (government)
Nigeria (government)
Nvidia
Organisation for Economic Co-operation and Development (OECD)
Open Philanthropy
OpenAI
Oxford Internet Institute
Palantir
Partnership on AI
RAND Corporation
Real ML
Republic of Korea (government)
Republic of the Philippines (government)
Responsible AI UK
Rise Networks
Royal Society
Rwanda (government)
Salesforce
Samsung
Scale AI
Singapore (government)
Sony
Spain (government)
Stability AI
Stanford Cyber Policy Institute
Stanford University
Switzerland (government)
Technology Innovation Institute
TechUK
Tencent
Trail of Bits
United Nations
United States of America (government)
Université de Montréal
University College Cork
University of Birmingham
University of California, Berkeley
University of Oxford
University of Southern California
University of Virginia
x.ai
Source: a list of countries and organisations expected to attend the AI Safety Summit that was circulated among attendees and corroborated by the FT from multiple sources. List was dated October 26 and may change before the event begins on November 1.
Copyright The Financial Times Limited 2024. All rights reserved.