UK continues on the road to AI regulation

On 18 July 2022, the UK Government published an interim paper “Establishing a pro-innovation approach to regulating AI”, outlining proposals for AI regulation in the UK (the AI Paper). In the UK’s National AI Strategy (see our previous blog for more information), the Government committed to develop a national position on the regulation and governance of AI, culminating in a White Paper, currently due in the autumn. This AI Paper is a stepping-stone towards that White Paper and, in calling for views, the Government hopes to account for a wider pool of opinion before publication.

Fundamental aims

At its core, the approach outlined in the AI Paper is intended to support business in adopting AI and investing in its development, looking to what the AI Paper describes as the UK’s attractive transparency and certainty when it comes to regulatory regimes. Currently, multiple laws, regulators and bodies address AI risks and requirements, leading to a patchwork approach that is challenging to navigate, with risks of inconsistency, overlap, gaps and lack of clarity. By addressing regulation of the use of AI (rather than the technology itself), taking a proportionate, risk- and outcomes-based approach (ie considering how the use of AI impacts certain individuals/groups in different contexts) and removing inconsistencies, the Government hopes that businesses will gain clarity and the public will maintain trust, so aiding confidence in AI usage.


Six principles

Given the context-based approach to AI regulation proposed, the Government also puts forward six core and overarching principles built on the OECD Principles on AI. These values-based principles are intended to describe a well-governed use of AI rather than create a new framework of individual rights. These principles are expected to apply to any actor within the AI lifecycle whose activities create a risk that a regulator considers should be managed through “operationalisation” of those principles (eg considering what fairness means in context and whether an entity needs to demonstrate that fairness in a particular way). The current principles proposed are:

  • Ensure safe AI use
  • Ensure technical security of AI and that it functions as designed
  • Ensure AI is transparent and explainable (acknowledging the technical challenges here and suggesting requirements for information that could be provided when considering transparency)
  • Consider fairness
  • Identify a legal person responsible for AI
  • Clarify routes to redress or enable the use of AI to be contested.


Light touch by multiple regulators

In line with the UK mood music post-Brexit, the AI Paper anticipates light touch regulation. Indeed, at this stage, the Government does not anticipate the introduction of AI legislation, preferring a more agile response “to respond to the rapid pace of change in the way that AI impacts society”. Currently, the Government expects to pursue its regulatory aims by putting the principles on a non-statutory footing, through regulator-led risk assessment, guidance, voluntary measures and access to sandboxes. That said, it does not rule out legislation in the longer term, perhaps to enable effective regulatory coordination or increase relevant powers.

Perhaps unusually, the approach anticipates a patchwork of regulators taking responsibility. Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency are all expected to play a part in interpreting and implementing the principles in the context of their sector (although how the ICO’s remit is to be defined is currently unclear, given the ICO does not regulate a particular sector).

It remains to be seen whether the involvement of multiple regulators, and the potential for inconsistent approaches and outcomes, will hinder any aim to limit the regulatory burden on organisations. Whilst offering flexibility and allowing for sector-related nuances, the reliance on multiple regulators may bring another layer of complexity, not least when viewed alongside the EU AI regime (amongst others). The AI Paper recognises that there are differences between the types of rules regulators can make when translating the principles and the enforcement action they can take where rules are broken. However, whilst the remit of certain regulators may need updating, the Government does not see a need for equal powers or uniformity of approach across all regulators. That said, the Government is alive to the need for coordination to ensure coherence, and to avoid contradiction and multiple pieces of guidance on the same topic. As such, it points to institutional architecture such as the Digital Regulation Cooperation Forum and potentially further mechanisms to ensure a coherent, if decentralised, framework. The issue of regulatory cooperation was also identified in an Alan Turing Institute report published the same day, “Common regulatory capacity for AI”, which highlights the need to share knowledge, expertise and resources, particularly where there are common challenges and opportunities across sectors.

What is AI?

For AI regulation in the UK, the Government rejects the EU approach – ie the proposals avoid a sector-neutral, broadly applicable definition of AI itself. The definition of an “AI system” continues to be a stumbling block in the debate around the EU AI Act, but the Government’s preference in the AI Paper is to call out core characteristics of AI that raise regulatory issues and look to specific regulators to determine more detailed definitions appropriate to their sector and context. The two AI characteristics identified are:

  • the adaptiveness of AI (ie its ability to “learn” or be “trained”), with consequential challenges in explaining the logic behind outcomes; and
  • the autonomy of AI (ie its ability to operate, often at speed in complex contexts, with little human control).


International coherence

The Government recognises the inherent international nature of the digital ecosystem in general, including the use of AI. It therefore flags its commitments to international cooperation and promotion of a pro-innovation approach that does not support repression. Many organisations engaging with AI will welcome the acknowledgement that an interoperable, joined-up approach in a global marketplace is essential.

Interplay with wider UK reforms

The timing of this paper is no accident, as the UK Government has also introduced the new Data Protection and Digital Information Bill (the Bill) to Parliament. The Bill itself aims to take a pro-innovation approach, loosening restrictions on automated decision-making activities and looking to clarify expectations to aid business certainty. See our Digital Hub for more on the Data Protection and Digital Information Bill.

The AI Action Plan

Separately, the UK Government has also taken the opportunity to publish its first annual AI Action Plan, describing how the UK is delivering against the pillars of the National AI Strategy (ie investing in the long-term needs of the AI ecosystem; ensuring AI benefits all sectors and regions; governing AI effectively) and setting forthcoming priorities (for example, the Action Plan notes that taking a joined-up approach across government and working closely with the AI Council and others will be critical to success).

What’s next

The call for views ends on 26 September 2022. The responses will be factored into the White Paper which itself will look to take a more practical approach and consider how to operationalise the principles. The White Paper is intended to reflect consideration of the proposed framework and whether it adequately addresses AI specific risks; the role and powers of the various regulators and need for coordination; and methods for monitoring and evaluating the framework over time.

The AI Paper is available here and the AI Action Plan is available here.