
The UK’s approach to AI: principles to “turbocharge growth”

As generative AI products hit the headlines and organisations look to make the most of AI across all aspects of business, regulators continue to try to keep pace with the technology.

Whilst the EU is leading the way with its EU AI Act (looking to regulate the technology in a horizontal manner and with extra-territorial effect), the UK Government has now published its own AI framework and approach to regulating AI, “A pro-innovation approach to AI regulation”.

Has the UK found the silver bullet?

In short, no. AI is complex, and so is regulation. Trying to regulate such a quickly developing, technical area that has the potential to impact every person and business across all sections of society and the economy is increasingly considered essential – but it is no mean feat.

And the UK’s proposal perhaps raises more questions than it answers. We will turn to the specifics of the framework in more detail below, but there is a lack of clarity as to whether the multi-regulator model will allow a proportionate, relevant approach whilst avoiding fragmentation, contradiction and duplication, particularly once factoring in both the Government’s central function and the engagement of the Digital Regulation Cooperation Forum (DRCF). The approach to foundation models, large language models (LLMs) and general purpose AI remains subject to further consideration, monitoring and evaluation. And the scope of the white paper specifically excludes some of the key issues that concern organisations day-to-day, not least IP ownership (and infringement), control of data (and unauthorised use), access to resources and compute, and perhaps most importantly, the allocation of liability and responsibility across the AI ecosystem.

Much remains to be fleshed out in an environment that is not pausing for breath and where business certainty and consumer protection would seem particularly crucial.

How and why did we get here?

Back in July 2022, the Government trailed its proposals for AI with an interim paper, putting forward a principles-based approach to regulating AI use, with regulators taking a light-touch approach. We discuss that stepping-stone paper in our article “UK continues on the road to an AI regulation”.

Following a period of consultation and consideration of 130 responses, the UK Government now presents its white paper. In detailing an AI framework (and so fulfilling its National AI Strategy commitment to develop a position on the regulation and governance of AI) the Government has finessed rather than altered the fundamentals of its thinking – there will be little in the white paper that surprises.

In reviewing the interim consultation, the Government has settled on three aims it wants the AI framework to address:

  • drive growth and prosperity;
  • increase public trust; and
  • strengthen the UK’s position as a global leader in AI.

The Secretary of State for the Department for Science, Innovation and Technology (DSIT), Michelle Donelan, is also keen to highlight the UK’s profile as an international heavyweight when it comes to AI, and in order to take that step towards becoming an “AI superpower”, the Government wants “to harness the benefits of AI and remain at the forefront of technological developments.” The Government considers that “includes getting regulation right so that innovators can thrive and the risks posed by AI can be addressed”.

The AI-specific risks identified include risks to human rights, safety (physical and mental health), fairness (bias), privacy and agency, societal wellbeing (disinformation), security (cyber attacks), and prosperity. It is commonly accepted that consumer trust is of utmost importance to ease the adoption of new technologies, and the intention to mitigate AI risks, including the well-known potential for bias and discrimination, therefore serves the growth and prosperity aims as well as the social and ethical ones.

The Government’s refrain, threading through all of its recent data and technology related bills, papers, policies and reviews, is the desire for a pro-innovation, proportionate regulatory regime, often framed as taking advantage of “Brexit freedoms”. As expected, alongside an intention to be adaptable, clear, collaborative and trustworthy, we see the same themes in its approach to AI regulation.

A flexible framework? More detail

Unsurprisingly, the Government maintains its view that a rigid and intensive approach to regulation will stifle innovation and AI adoption. As such, and in stark contrast to the EU, there will be no new overarching legislation. Rather, the Government sets out a framework underpinned by principles to guide development and use of AI, implemented by multiple existing regulators.

The Government acknowledges the need for the regime to evolve. With the best will in the world, fully future-proofing regulation relating to developments in technology such as AI is likely to be impossible. The Government therefore recognises the need to monitor and evaluate its proposed AI framework, iterating and making changes if necessary. The hope is that this adaptable approach will enable the UK to keep pace with technological advances and make the most of emerging opportunities. It will however need to avoid the risk that a continually shifting regulatory landscape leads to uncertainty and a lack of confidence.

Multiple regulators, no leader of the pack

As indicated in the interim paper, the Government does not propose to give responsibility for AI governance to a new single regulator, nor establish a lead regulator. Rather, the Government looks to multiple regulators to use their sector expertise on a non-statutory basis to support business in their adoption of AI and intervene when required.

Whilst there is a clear rationale behind the decision to avoid a centralised regulator, care will be needed to avoid regulatory uncertainty and complex practicalities, particularly for those organisations, or in relation to AI risks, that fall within scope of multiple regulators. Indeed, the Government expects that regulators will collaborate to support organisations in such a position.

Likewise, the Government acknowledges that not all AI risks will fall within the remit of any existing regulator, so there remains the potential for more regulatory collaboration, expansion of regulatory remits or additional legislation to fill the gaps. Regulators will no doubt hope that appropriate resourcing will follow. One would assume that regulators with cross-sector remits, such as the Information Commissioner’s Office, will be particularly engaged in collaboration with other regulators. We await further clarity on the implementation of the framework in the form of an AI regulation roadmap, to be published at the same time as the Government’s response to the consultation on the white paper.

In any event, it is clear that there are differing capabilities across the different regulators, with some (such as the ICO) having done a lot of thinking around AI, and others that may have far less in the way of technical expertise or indeed capacity. The Government acknowledges this and intends to explore options to plug those gaps.

So what is AI?

The EU has struggled to define an AI System, as required for the EU AI Act. It is all too easy to sweep up technology that is not strictly AI or indeed narrow the concept so far that many AI systems are not effectively governed by the relevant legislation. In the UK, the Government has stuck with its plan to avoid a singular, precise definition, set in stone. Instead, it calls out two characteristics of AI that particularly justify the need for a bespoke approach to regulation:

  • its adaptivity (AI is “trained” and “learns”); and
  • its autonomy (AI can operate without ongoing human control).

A combination of these characteristics, the Government notes, can for example make it difficult to explain, predict or control outputs or to allocate responsibility for an AI system’s operation. Hence the need for the proposed framework.

It is hoped that regulators will take this “common understanding” of what is meant by AI, interpret the characteristics in context and define AI as appropriate for their sector. How this works in practice will be key to the success of the framework and the Government is mindful of the need to support coordination and alignment between interpretations where possible. Certainly, organisations looking to develop or use AI that poses risks of interest to more than one regulator will be hoping for consistency to avoid complexity.

Regulate use, not technology

In looking to be flexible, proportionate and outcomes-focused, and to take a risk-based approach, the Government envisages targeting the use of AI rather than the specific technology itself. Per the Government’s example, the output of a chatbot summarising an article poses very different risks to the output of an AI-driven disease diagnostic tool.

This context-specific approach has been broadly welcomed by industry. Regulators can balance the risk of a particular use against the loss of opportunity if an AI system is not implemented. The Government considers that individual regulators will be best placed to carry out a risk assessment in their field of expertise and does not propose specifying particular risk thresholds. 

However, as we have seen during the EU’s AI legislative process, it is not easy to address the regulation of general-purpose AI models (which may have multiple uses with very different risk profiles).

Given the prevalence of commentary and discussion around the likes of LLMs for instance, the question of how to regulate general-purpose AI models will no doubt remain front of mind for many.  The Government envisages that some regulators may issue specific guidance (eg regarding transparency) for developers or users of LLMs, given their potential for general application. How this will translate into practice remains challenging – LLMs are likely to fall within the remit of more than one regulator and their uses are likely to demonstrate very different risk profiles.

The Government is reluctant to take specific regulatory action at this stage (fearing impact on innovation) and so alongside the recently announced “Foundation Model Taskforce” (intended to support UK foundation model capability) the Government will focus on rigorous monitoring of risks and horizon scanning (see further under “Consistency is key” below).

It is interesting to consider this approach to LLMs particularly in light of the number of regulatory challenges that are starting to emerge in various other jurisdictions.

5 principles to guide them all

The white paper details five very familiar principles that should apply to the development and use of AI. This “national blueprint” is expected to sit across the existing environment of different laws, standards and guidance that apply to AI, aiming for greater certainty and consistency. The principles mirror the OECD AI principles with a view to supporting interoperability. Though tweaked a little since the interim proposals in 2022 (which, for example, placed more emphasis on accountability and governance), the principles will not come as a surprise:

  • Safety, security and robustness: AI should function in a secure, safe and robust way and risks should be carefully managed;
  • Transparency and explainability: organisations should be able to communicate when and how AI is used and (take a proportionate approach to) explain a system’s decision-making process;
  • Fairness: use of AI should be compliant with UK laws (such as the Equality Act 2010 or UK GDPR). AI must not discriminate against individuals or create unfair commercial outcomes;
  • Accountability and governance: organisations should put in place measures to ensure appropriate oversight of the use of AI with clear accountability for the outcomes; and
  • Contestability and redress: there must be clear routes to dispute harmful outcomes or AI generated decisions.

Apart from reflecting the OECD AI principles, these concepts will be recognisable to organisations as they look to comply with data protection regulations. Data is often described as the life-blood of AI, so it is not surprising that the need to consider security of data and systems, the requirement to clearly explain what an organisation is doing, the oversight and active accountability requirements, and the need for adequate recourse and redress for individuals all sit in common with data protection regimes, including those in the UK and EU. Indeed, the ICO’s recently updated Guidance on AI and data protection clearly calls out those same expectations in its contents. Many organisations will be minded to consider how they can leverage existing policies and procedures established in the data protection space to take account of the AI principles, to the extent they are not already doing so.

Regulators are expected to proportionately apply the principles to address AI risks within their remit, across the full AI lifecycle (design, development, use).

Provision of guidance (joint or otherwise) on AI best practice is at the forefront of implementation of the principles, though the Government anticipates that the use of other tools and resources would be similarly relevant. By way of illustration, the Government suggests that regulators may want to require regular testing of AI (eg under the security principle), set technical standards (eg under the security and transparency principles), and require impact assessments (eg under the accountability principle).

The Government will provide guidance to help regulators implement the principles, amongst other things, encouraging the regulators to design, implement and enforce regulatory requirements and to integrate delivery of the principles into existing monitoring, investigation and enforcement processes.

Although regulator responsibilities are non-statutory in the first instance, we are promised that, when parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently – imposing a statutory duty on regulators to have regard to the principles. However, such legislation would not be brought forward if the framework is considered effective without it. This is in contrast to the desire of certain academics, industry and regulators, who felt that further measures may be needed to support enforcement.

Consistency is key

The Government does acknowledge concerns raised in the interim stage consultation that a fully decentralised approach – the patchwork of regulators – will lead to a lack of regulatory clarity or coordination. There was support for a degree of central coordination to aid regulators. Certainty and consistency of regulation are often flagged as fundamental to support business confidence and investment.

With a view to coherence, the Government bulks up its plans for centralised support including:

  • monitoring and evaluation of the framework’s effectiveness (including in supporting innovation) and the implementation of the principles (such as conflict or inconsistencies);
  • assessing and monitoring risks arising from AI;
  • horizon scanning and gap analysis to inform a coherent response to emerging AI technology trends;
  • testbeds and sandbox initiatives;
  • providing education and awareness of the framework; and
  • promoting interoperability with international regulatory frameworks.

Whilst some had suggested that the DRCF should provide this support, the Government plans to be responsible for delivering these functions itself, albeit working with regulators and other “key actors” in the AI ecosystem.

In her letter to the DRCF regulators dated 17 March 2023, the Secretary of State for DSIT does anticipate that the DRCF will have a “significant role to play in supporting the development and implementation of the new proportionate and pro-innovation AI regulatory framework”. More specifically, she notes that the Government is keen to understand if there is scope to expand the DRCF’s role to support the wider regulatory community in enhancing cooperation, research and knowledge transfer. Likewise, given the DRCF’s existing work in horizon scanning and considering cross-cutting issues such as algorithmic processing, the Government is keen to leverage the DRCF horizon scanning programme to inform the central assessment of emerging risks and opportunities. There are many compliments in the letter but no mention of specific further funding at this stage – maybe that will come in the detailed implementation plan later in the year.

Assurance, standards and sandboxes

To support the implementation of AI, the Government calls out the need for assurance techniques and use of technical standards.

As far as technical standards are concerned, the Government commits to continue contribution to international standards work. It also contemplates a layered approach to applying available AI technical standards – encouraging sector-agnostic standards in the first instance to support the implementation of cross-sectoral principles, with further standards to address specific risks raised by AI (such as bias) in a particular context, and the potential for sector-specific technical standards.

Specific action is anticipated in relation to assurance, to measure, evaluate and communicate the trustworthiness of AI systems. The Government considers that there is likely to be an emerging market in AI assurance services that needs support. The Government’s Portfolio of AI assurance techniques (due shortly) is intended to demonstrate how such techniques (including impact assessments, audits, performance testing and formal verification) can support wider AI governance.

Sandboxes are a familiar tool to support industry in rapidly developing fields, and DRCF regulators are well used to taking advantage of them. As referenced in the Pro-innovation Regulation of Technologies Review, the Government intends to establish an AI-focused sandbox. Consideration of different model options, engagement with the DRCF on multi-sector versions, pilot phases and roll-out mean it may be 12 months or more before we see a regulatory sandbox up and running.

Liability

Whilst the liability position for AI does not form part of the principles, it is interesting that the Government anticipates that regulators are best positioned to adopt a context-based approach to allocate liability for AI within their sectors.

Some respondents to the interim consultation considered that more could be said on liability allocation across the AI life-cycle, but the Government has declined to specify an overarching approach.

Drawing a distinction with the allocation of responsibilities under certain existing laws (such as the data protection regime and product safety laws), the Government considers that it is “not yet clear how responsibility and liability for demonstrating compliance with the AI regulatory principles will be or should ideally be, allocated to existing supply chain actors within the AI life cycle”. Indeed, the Government states that it is “too soon to make decisions about liability as it is a complex, rapidly evolving issue which must be handled properly”, promising only to engage with experts and lawyers to improve its understanding of the issues. This wait-and-see approach may lead to intervention if the Government considers that responsibility and liability issues are undermining its pro-innovation approach, but those developing and using AI should not expect a Government declaration on this topic in the short term.

Other gaps and issues out of scope

For many within the AI ecosystem, the application of regulatory principles will not address some of the more fundamental challenges posed by AI. For example, IP ownership and infringement, control of data, access to necessary compute can all present legal, contractual and practical issues. The Government is clear that such important aspects of AI development and implementation are out of scope of this particular paper but will form part of separate workflows such as those flagged in the Pro-innovation Regulation of Technologies Review. Inevitably, organisations looking to roll out their AI systems will need to have an eye to rather more than one white paper to ensure effective, compliant implementation of the new technology.

Looking beyond the UK

The Government recognises the cross-border nature of AI supply chains and therefore the need for international cooperation and promotion of interoperability. Although the UK AI framework will have no direct application beyond the UK and despite the EU AI Act’s extra-territorial effect, the Government nonetheless hopes to contribute to the global conversation on AI, influencing international partners as well as learning from them. This influence will involve a continuing role in international forums such as the OECD and G7 but may also take the form of support for countries to implement regulation and technical standards.

As expressed by some in the interim consultation responses, it remains to be seen whether, despite the UK’s attempt at a flexible framework, the EU’s more prescriptive legislative approach will become the model to follow in practice if not in law.

What comes next?

A consultation on the white paper itself is open until 21 June for comment. The Government notes that implementation of the framework has already commenced and will continue in parallel with the consultation phase. However, given the pace of technological development and progress of AI regulation in other jurisdictions, the timelines proposed by the Government are not particularly ambitious.

In the first 6 months, expect to see the Government’s consultation response plus further detail on the centralised support functions in the form of an AI roadmap. There will be more stakeholder engagement and analysis of research projects (not least around accountability for AI regulatory compliance) and initial guidance for regulators.

In the first year, we can expect regulators to issue their own guidance and more detail on the central function design, before longer-term plans include a cross-sectoral AI risk register, monitoring and evaluation reports and inevitable roadmap updates.

Undoubtedly, organisations will want to track the progress of the framework from proposal to implementation. However, most will not be solely focused on the UK position. The complex array of regulatory developments, guidance, standards and principles across the globe will necessitate careful consideration as AI systems are implemented in practice.  

It is interesting to consider how these different AI regimes evolve in light of the wider conversation around AI and the balance of risk and reward. The Future of Life Institute open letter calls for a halt to advanced AI development whilst jointly developed safety protocols are implemented, research is re-focused and effective AI governance systems are established. The governance requested includes the likes of AI-specific regulators, oversight of highly capable AI systems’ computational capability, provenance and watermarking systems, an auditing and certification ecosystem, a liability regime for AI-caused harm, public funding for technical AI safety research and well-resourced institutions to cope with what the signatories expect will be “dramatic economic and political disruptions”. It is hard to envisage AI development actually ceasing in response to the letter, but it will be interesting to see how many of the AI regulatory approaches that come to fruition address some of the items on the checklist put forward.