Generative AI and the EU AI Act - A Closer Look
When the European Commission first released its proposal for an Artificial Intelligence Act in April 2021, generative AI was far from being an immediate concern of regulators. That changed with the recent surge in AI solutions that generate text, images, or videos and that are being tested for various purposes across industries, from entertainment to healthcare.
Because of this, the European Parliament substantially amended the European Commission’s initial proposal, notably introducing specific rules that apply to generative AI systems (the Parliament Proposal). Below, we provide an overview of the generative AI rules in the Parliament Proposal.
How does the EU AI Act define generative AI?
In the Parliament Proposal, the European Parliament defines “generative AI” as a type of foundation model. Foundation models are defined as AI system models trained on large and diverse datasets, designed for generality of output that can be used for many different tasks. Generative AI systems are a specific subset of foundation models “specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio or video”.
Which rules apply to generative AI?
In the Parliament Proposal, generative AI systems are subject to three overlapping sets of obligations. Each set of obligations is discussed in more detail below.
Specific obligations for generative AI
The Parliament Proposal imposes specific obligations on providers of generative AI systems. In particular, providers will have to:
- train, design and develop the generative AI system in such a way that there are state-of-the-art safeguards against the generation of content in breach of EU laws;
- document and provide a publicly available detailed summary of the use of copyrighted training data; and
- comply with stronger transparency obligations.
The first two obligations mainly aim to protect against the infringement of intellectual property rights (and in particular against copyright infringement).
The third obligation aims to prevent, through transparency, the use of a generative AI system to create manipulative content. Where a generative AI system has been used to create "deep fakes" (i.e. text, video or audio that appears to be authentic or truthful but is not), the users who created such content must disclose that the content is AI generated or manipulated and, where possible, indicate the name of the legal or natural person that generated or manipulated it.
Specific obligations for foundation models
As mentioned, generative AI systems are defined as a subset of foundation models. Accordingly, generative AI systems must also comply with the obligations imposed by the AI Act on providers of foundation models.
Under the Parliament Proposal, a provider of a foundation model must, prior to placing its model on the market or into service:
- be able to demonstrate how it has mitigated the reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy and the rule of law;
- only use datasets that are subject to appropriate data governance measures ensuring that the datasets are suitable and unbiased;
- design, develop and test the foundation model so as to ensure performance, predictability, interpretability, corrigibility, safety and cybersecurity throughout its lifecycle;
- design and develop the foundation model using applicable standards to reduce the use of energy and resources and to minimise waste;
- develop technical documentation and intelligible instructions for the foundation model. The provider must keep this technical documentation available for the competent authorities for a period of ten years from the date of market introduction;
- establish a quality management system to ensure and document compliance with the AI Act; and
- register the foundation model in an EU database.
General obligations applicable to all AI systems
In addition to the specific obligations for generative AI systems and foundation models set out above, generative AI systems must also comply with the obligations that apply to all AI systems according to their risk categorisation.
The Recitals of the AI Act clarify that the development of a generative AI system or foundation model as such does not lead to a high-risk classification. Rather, for each specific generative AI system, one must assess what the risk classification of such AI system is – and comply with the corresponding obligations. For more on the risk categories and the related obligations, see our previous articles on the AI Act available here and here.
What does the Parliament Proposal mean for supervision and enforcement in the EU?
The Parliament Proposal introduces specific rules regarding supervision and enforcement that may apply to foundation models.
National authorities designated by Member States will monitor and supervise compliance with the obligations under the AI Act. In addition, the European AI Office will be in charge of specific tasks concerning the monitoring of foundation models.
Substantial fines apply in case of non-compliance with the AI Act. The Parliament Proposal has further increased the possible fines, which now range from up to 2% of the total worldwide annual turnover (or EUR 10 million, if higher) to up to 7% of the total worldwide annual turnover (or EUR 40 million, if higher).
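For illustration only (this is not legal advice), the "percentage of turnover, or a fixed amount if higher" logic behind these maximum fines can be sketched as follows; the function name and parameters are our own, not terms from the Act:

```python
def max_fine(worldwide_annual_turnover_eur: float,
             pct: float, floor_eur: float) -> float:
    """Maximum fine under a "percentage of worldwide annual turnover,
    or a fixed amount if higher" rule, as in the Parliament Proposal.
    """
    return max(pct * worldwide_annual_turnover_eur, floor_eur)

# Top tier in the Parliament Proposal: up to 7% or EUR 40 million, whichever is higher.
print(max_fine(1_000_000_000, 0.07, 40_000_000))  # EUR 70 million for EUR 1bn turnover
print(max_fine(100_000_000, 0.07, 40_000_000))    # EUR 40 million floor applies
```

For a company with EUR 1 billion in turnover, the 7% figure (EUR 70 million) exceeds the EUR 40 million floor; for smaller companies, the fixed amount becomes the ceiling.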
What are the next steps?
Trilogues between the European Parliament and the Council of the European Union are currently ongoing. Accordingly, the provisions in the adopted text may differ from the Parliament Proposal. We will follow developments and keep you informed.
Even when the AI Act is finally adopted, that will likely not be the end of the legislative story for AI. The AI Act specifically mentions that, given that generative AI systems (and other foundation models) are a new and fast evolving development in AI, the European Commission and the European AI Office will regularly monitor and assess the legislative and governance framework that applies to such models.