The UK’s National AI Strategy: setting a 10-year agenda to make the UK a “global AI superpower”

On 22 September 2021 the UK Government published its promised National Artificial Intelligence (AI) Strategy, coming on the back of a raft of related plans, strategies and roadmaps, such as the 2020 National Data Strategy, the 2021 Plan for Digital Regulation, the recent Innovation Strategy and the AI Council’s 2021 AI Roadmap.

Why do we need a National AI Strategy?

The AI Council recognised that its Roadmap of sixteen recommendations (covering R&D, skills and diversity, data, infrastructure, public trust, investment and adoption) would need to be rolled out over time, and it therefore encouraged the UK Government to produce a National AI Strategy.

In its published form, the National AI Strategy (the Strategy) sets out a 10-year plan to make the UK “a global AI superpower”, building on the country’s research and development success in the field, as well as previous AI Sector Deal investment and the establishment of AI bodies and structures (not least the AI Council and the Centre for Data Ethics and Innovation (CDEI)).

The Strategy sets out specific goals for the UK: significant growth in the number of AI discoveries made, commercialised and exploited in the UK; associated economic and productivity growth; and the establishment of a trusted, pro-innovation AI governance system. More generally, the Strategy mirrors other recent publications, highlighting the UK Government’s desire to provide a pro-innovation environment with a business-friendly regulatory framework, whilst protecting the public and fundamental values.

The Strategy differentiates AI (defined as “machines that perform tasks normally requiring human intelligence, especially when the machines learn from data how to do those tasks”) from other technology or digital policy, calling out features that the UK Government considers require a unique policy response. These include, for example: questions of liability, fairness, transparency, bias, risk and safety arising from the autonomy of AI systems and the complexity of their algorithms; the greater infrastructure requirements needed to run AI; the multiple skill sets involved; and lengthy commercialisation journeys.

 

The three pillars

The National AI Strategy points to three core pillars:

• Investing in the long-term needs of the AI ecosystem, to ensure competitiveness
• Supporting the transition to an AI-enabled economy, considering all sectors and regions
• Ensuring the right national and international governance of AI technologies, working with global partners to promote responsible AI development

It identifies “people”, “data”, “computing power” and “finance” as the key drivers of progress, discovery and strategic advantage in AI. Assuming that AI will become mainstream in the economy, and that governance and regulation will need to keep pace, the Strategy specifies particular actions to pursue.

 

Some of the key actions

In order to meet these pillars, the Strategy flags many existing initiatives and highlights, albeit at a high level, the potential for new actions covering skills development, talent recruitment (and visa regimes), regional investment, international collaboration, infrastructure (such as reviews of compute capacity), governance, security and more.

By way of example, some of the actions identified include:

 

Potential for new AI Regulation 

In contrast to views expressed in the House of Lords 2018 Select Committee report “AI in the UK: ready, willing and able?” and the House of Lords 2020 Liaison Committee report “AI and the UK: No Room for Complacency” (read further here), the National AI Strategy opens the way for specific AI regulation. Whilst the Strategy itself doesn’t go into great detail, the Office for AI will develop a national position on the regulation and governance of AI, with a White Paper setting out its views.

Whilst recognising that AI does not currently go unregulated (given the likes of data protection, competition and human rights law, and sector-specific legislation, e.g. in the fields of financial services and health), the Strategy notes AI’s power to disrupt business and its potential risks, such as concerns around bias, accountability and fairness. It is acknowledged that, on the international stage, others are taking strides to develop methods and approaches to AI governance, and it is clear that the UK does not want to be left behind. Alternatives to broad AI regulation are also identified, for example:

• removing regulatory burdens where they create unnecessary barriers to innovation
• retaining the sector-led approach, whilst ensuring that the various regulators have the flexibility to ensure AI delivers the right outcomes
• introducing cross-sector principles or rules to supplement existing regimes

Indeed, as the EU’s draft AI Regulation (read further here) begins its potentially lengthy legislative route to law, it will be interesting to see how the UK’s approach differs from the EU’s on what appear to be common issues, particularly as the UK looks to be favouring a light-touch approach to regulation generally. Timothy Clement-Jones, a former chair of the House of Lords’ artificial intelligence liaison committee, is quoted in the press as saying: “If this is tending in a direction which is diverging substantially from EU proposals on AI, and indeed the GDPR [the EU's data protection rules] itself, which is so closely linked to AI, then we would have a problem”.

As we have been seeing with data privacy laws, as different jurisdictions and regions look to develop legislation on common topics, many organisations are faced with the increasingly challenging task of finding a globally consistent approach to compliance.

 

AI standards and assurance

Technical standards are recognised in the Strategy as good practice for safety and efficiency. In that context, the Strategy notes the desire to integrate the use of AI standards into the AI governance model. Proposals include piloting an AI standards hub to expand international engagement and thought leadership, as well as developing a toolkit to guide engagement in standardisation.

Similarly, assurance is viewed as a way of understanding the safety and trustworthiness of AI systems, but it is acknowledged that the current assurance ecosystem is fragmented. The upcoming CDEI AI assurance roadmap is therefore promoted as an effective step in the right direction.

 

AI and IP

The interplay between AI and intellectual property law has long been discussed, not least as part of the IP5 (the five offices that handle about 85% of the world’s patent applications) joint Task Force on New Emerging Technologies and Artificial Intelligence (read further here). Indeed, the UK Intellectual Property Office published an AI and IP call for views earlier this year. Historically, it has commonly been considered that an AI system cannot, itself, be a patent inventor. However, this status quo is under debate, as indicated by the recent Australian Federal Court decision finding that an AI system could be a patent inventor (judgment subject to appeal; see further here). The National AI Strategy promises a consultation on the patentability of AI-derived inventions, as well as consideration of copyright in the context of computer-generated works and the use of copyright materials in AI systems.

 

A New National Strategy for AI in Health and Social Care

This more specific strategy is promised in 2022, aligning with the broader National AI Strategy and, one would hope, interacting with the recent “Data Saves Lives: Reshaping Health and Social Care with Data” draft strategy. It is intended to build on the existing NHS AI Lab in NHSX to accelerate the safe, ethical and effective development of AI technology in health and social care.

 

A new National AI Research and Innovation Programme

To be launched by UKRI, this Programme will be designed to align funding programmes, encourage investment in AI research, enable cross-discipline collaboration to support research and innovation, and support the continuing development of new capabilities around the trustworthiness, acceptability, adoptability and transparency of AI technologies.

 

So what next?

The National AI Strategy promises further detailed, measurable plans for the execution of its first stage later this year. These will be necessary to see what many of the proposals mean in practice. We can expect to see a White Paper on AI regulation within six months, but don’t expect to see the National AI Research and Innovation Programme launched until next year. Helpfully, we can also anticipate the CDEI’s AI assurance roadmap within the quarter, and the IPO’s consultation on copyright and patents for AI should be forthcoming in the next three months.

Organisations are promised a series of further papers and strategies touching on related areas. Look out for the upcoming National Cyber Strategy, a broader Digital Strategy, a new Defence AI Centre, a National Resilience Strategy and the outcomes of the UK’s Data: A New Direction consultation, amongst others. It is clear that the interplay between the likes of R&D, IP, technologies, data, infrastructure, cyber, security and ethics means that organisations will need to keep a holistic eye on the spectrum of related developments (both here and abroad), particularly as the UK looks to navigate its new post-Brexit position in the world.

The National AI Strategy can be found here.