The UK’s AI Safety Summit shows there is work to do

Laetitia Nappert-Rosales and Jane Finlayson-Brown of our London tech practice provide an overview of the first UK AI Safety Summit and discuss what should come next.

Eight decades ago, the World War II codebreakers gathered at Bletchley Park to break the Enigma code. On 1 and 2 November 2023, a host of governments, academics, tech companies and multilateral organisations gathered in the same place for the first AI Safety Summit to, if not solve, at least have ‘robust discussions’ on the risks posed by frontier AI.

What is frontier AI?

The summit focused on ‘frontier AI’, which the Government defines as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models”. This is intended to capture advanced large language models (LLMs), such as ChatGPT and Google's Bard. The Government's report on the Capabilities and Risks from Frontier AI (the Report), published a couple of days before the summit, excludes ‘narrow AI systems’ such as DALL·E 3 from the summit scope, since they cannot perform as wide a variety of tasks.

What are the concerns with frontier AI?

The Government's two overarching concerns with frontier AI are the existential risks of misuse and loss of control. In its Report it sets out the risks of frontier AI ending up in the wrong hands, for example that ChatGPT could produce detailed instructions for the development of biological or chemical weapons or create computer viruses that can avoid detection. In a similar vein, if we hand over too much control to AI systems, or they manipulate us into handing over too much control, the ‘black box’ risk comes to the fore: the AI system may pursue goals that are at odds with human interests.

The Report outlines a number of risk factors that may aggravate these risks, including the current lack of technical AI standards and the lack of incentives for tech developers to invest in risk mitigation, given the competition to get AI products on the market. The Report also addresses societal harms, in particular the risks of bias, hallucinations and disruption to the labour market. Indeed, the Prime Minister’s speech on 26 October 2023 made a point of noting that while a focus on the existential is vital, the immediate AI risks and harms should be addressed in the same discussions.

The Government has continued to advocate its strategy of establishing AI principles and standards that provide a framework around the use of AI as a way to mitigate those risks, including the safety measures referred to at the AI Safety Summit, alongside driving innovation. The Prime Minister decried the “rush to regulate” as a “point of principle” in his speech of 26 October. This balance underpins the Government’s National AI Strategy (published in September 2021) and its White Paper (published in March 2023). The Government has made clear that it will not seek to implement a dedicated AI Bill, further reinforced by the fact that no AI Bill was mentioned in the King’s Speech on 7 November 2023 (despite this being pushed for by the House of Commons Science, Innovation and Technology Committee in its interim report on the governance of AI). Rather, it will permit industry regulators to oversee AI developments in their sector, applying the principles-based approach. We discuss the UK’s approach to AI regulation in one of our previous blogposts here.

What happened at the AI Safety Summit?

There has been a lot of noise surrounding the first AI summit. In the lead-up to it, a variety of AI Fringe events took place across London, and the Government published a number of updates on its dedicated AI Safety Summit site, including: (i) the Report; (ii) an overview of emerging safety processes for developing frontier AI systems; and (iii) the AI Safety Policies published by leading AI companies such as Amazon, Microsoft, Google DeepMind and Meta.

The guest list

The attendee list boasted an impressive line-up of governments from across the world, including the US and China, as well as representatives from academia and research institutes and top executives from leading AI companies. International participants were actively involved, chairing a number of the roundtables.

The list did attract criticism though, in particular from a number of non-profit organisations and trade unions, who signed an open letter to Rishi Sunak on the Monday before the summit (30 October 2023), coordinated by Open Rights Group, Connected by Data and the TUC. The open letter stated that the summit was not bringing together sufficient diversity of expertise and perspectives, by virtue of not featuring the trade unions, activists and campaigners who represent the communities and workers currently most affected by AI. Interestingly, no regulators were present either, although the Digital Regulation Cooperation Forum, an initiative comprised of the UK’s ICO, CMA, Ofcom and FCA, did participate in an AI Fringe event discussing digital regulation.

The robust discussions

Day 1 of the summit comprised a series of roundtables discussing, for example, how to scale AI responsibly and how national and international policymakers and the scientific community should address the risks and opportunities of AI. While there were points of debate, for example whether AI models should be “open” or “closed”, the consensus from the Roundtable Chairs’ Summaries is that it is crucial to build a deeper understanding of the potential risks of AI systems, and that governments and stakeholders globally should develop a co-ordinated approach, sharing resources and standards to develop effective safety policies and measures to mitigate those risks. The summaries have been kept high-level and set out principles rather than specific safety measures.

At the end of Day 1 all the countries attending the summit signed up to the ‘Bletchley Declaration’, a statement affirming that they are committed to deepening their understanding of the potential risks of AI and that they will work together to ensure responsible AI that is safely deployed. The declaration also sets out that “all actors have a role to play in ensuring the safety of AI”. Those who are developing AI that is unusually powerful and potentially harmful have a “particularly strong responsibility” to ensure the safety of those systems. 

Despite this, and in keeping with the UK’s drive on AI innovation, the Prime Minister made clear during his press conference on Day 2 of the summit that AI companies are not being required to “mark their own homework”. He rejected the idea that the onus should be on the companies to prove that their systems are safe and to maintain a dedicated security budget. Rather, he stated that although these companies have a moral responsibility to ensure that the development of their systems is happening in a safe and secure way, it is primarily the responsibility of the Government to monitor and test AI systems and provide independent assurance that they are safe.

For this reason, the new AI Safety Institute (previously the Frontier AI Taskforce) will be testing new types of frontier AI before and after they are released. A press release published by the Government on Day 2 of the summit states that several of the attending AI companies, such as OpenAI, Amazon, Microsoft, DeepMind and Meta, have agreed, as part of a plan for safety testing of frontier AI models, to their models being tested by governments for potentially harmful capabilities, including national security, safety and societal harms, before they are released to businesses and consumers. This is an encouraging first step towards meaningful action following the summit.

At the moment, the responsibilities of businesses who use and deploy AI have not been addressed in the same detail as those of AI developers. It is nevertheless interesting to note the direction of travel from the Government and its focus on rigorous safety testing, not only pre- and post-deployment of models but perhaps also earlier in the lifecycle and during training runs (which was discussed at the summit).

Action points

Practice what you preach

The AI Safety Policies published by Amazon, Anthropic, Google DeepMind, Inflection, Meta, Microsoft and OpenAI are a starting point for understanding how safety frameworks could operate in practice. These seven tech companies published policies on nine areas of AI safety, as requested by the UK Government ahead of the summit. The areas addressed in these policies include responsible capability scaling, model evaluations and red teaming, and data input controls and audits.

Whilst high-level, the safety policies may offer insight and inspiration to other businesses looking to put in place AI policies for their development or use of AI. For example, Google DeepMind’s responsible capability scaling policy sets out how it divides internal governance: it has a Responsible AI Council, a Responsible Development and Innovation team, a Responsibility and Safety Council and a standardised Ethics and Safety Assessment, which is reviewed and updated at various stages of its AI models’ development.

Harmonising approaches

The US National Institute of Standards and Technology announced the creation of its own AI Safety Institute on the first day of the UK summit, following President Joe Biden’s Executive Order (which we report on separately here). Canada’s Industry Minister François-Philippe Champagne has announced that Canada is also considering setting up an AI Safety Institute. Other countries are likely to follow suit.

The work of the AI Standards Hub, the OECD, the Partnership on AI and other international organisations may be helpful in establishing overarching principles and a knowledge-sharing base for cross-border co-operation relevant to the AI Safety Institutes. In practice though, if and when these safety institutes start developing their own working processes and standards, it may prove difficult for global companies to reconcile differences: which safety institute will companies report to, which standards will they adhere to, and how will interoperability work?

Co-operation between governments, akin to the US-Singapore joint mapping exercise completed last month, which aims to promote collaboration and information-sharing on international AI security, safety, trust, and standards development, will need to be rolled out on a wider, international scale. Similarly, the safety institutes will need to be developed cognisant of existing initiatives, such as the DRCF’s new AI and Digital Hub, a multi-regulator service (expected to pilot next year) advising innovators on their cross-regulatory questions.

What is next?

In a sense, the summit was a notable achievement in drawing together a diverse group of nation states and securing agreement on certain high-level principles, but the hard work will, or should, ramp up from here. Participants have committed to meeting again: there will be a virtual summit co-hosted with the Republic of Korea in six months’ time and a further in-person summit in France next year, both of which will be even more revealing in terms of the work done, and still to be done, to carry forward the commitments made last week.

The AI Safety Summit closed with no concrete agenda of next steps for the participants, but with commitments made in the King’s Speech on 7 November 2023 to ensure AI is developed safely, it is clear that the lessons from the summit will remain a priority for the Government. This gives each stakeholder, and not just those present at the summit, the opportunity to set their own AI agenda in keeping with the summit goals and, of course, the evolving legislative and regulatory backdrop internationally. It does, however, also set a challenge, and an expectation, for future summits to convert the agreed principles of the Bletchley Declaration into specific action.