European Parliament agrees on AI Act approach: one step closer to EU AI rules

On 14 June 2023, the European Parliament voted on its approach to the draft Artificial Intelligence Act (the AI Act).

The AI Act is a proposal for a Regulation issued by the European Commission (the Commission) in April 2021. It reflects a key aspect of the EU’s policy to nurture a safe and lawful AI environment across the EU. Since the Commission's original proposal, the Council of the European Union (the Council) and the European Parliament (the Parliament) have been working on modifications.

Most recently, these modifications have included consideration of the implications of foundation models. A foundation model is a pre-trained AI system that can be used as a basis for developing other AI systems; the category includes generative AI.

We reported previously on the Commission’s original proposal and the general approach to the draft AI Act adopted by the Council on 6 December 2022.

Here, we look at some of the key changes proposed by the Parliament and how businesses can prepare. You can find more details in our blog post: European Parliament committees adopt their vision on the AI Act proposal.

New definitions

The Parliament proposes various new definitions to clarify the text of the Act.

Unlike the Commission's proposal, the definition of “AI system” does not refer to software. It is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”.

The Parliament also proposes replacing the term “user” (which counter-intuitively means the entity or person under whose authority the AI system is operated) with “deployer”. This should make the provisions of the Act easier to interpret.

AI foundation models and generative AI

The Parliament places new obligations on the providers of AI foundation models. The new Article 28b requires providers of foundation models to register these models in an EU database, with the aim of ensuring that the models comply with comprehensive requirements for their design and development. Providers must produce and keep certain documentation for ten years, draw up extensive technical documentation and intelligible instructions for downstream providers, and provide information on the characteristics, limitations, assumptions and risks of the model or its use.
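By way of illustration only, the sketch below shows how a provider might internally track these documentation duties in Python. The record structure, field names and ten-year computation are our own assumptions for the example; the AI Act prescribes the obligations, not any particular data structure.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class FoundationModelRecord:
    """Hypothetical internal record tracking Article 28b-style duties.

    All field names are illustrative assumptions, not terms from the Act.
    """
    model_name: str
    registered_in_eu_database: bool  # registration obligation
    technical_documentation: str     # extensive technical documentation
    downstream_instructions: str     # intelligible instructions for downstream providers
    characteristics: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)
    created_on: date = field(default_factory=date.today)

    def retention_deadline(self) -> date:
        # Documentation must be kept for ten years; approximated here
        # as 3,652 days purely for the purposes of this sketch.
        return self.created_on + timedelta(days=3652)
```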

Generative AI models will be subject to the foundation model obligations plus additional requirements: transparency obligations, training the model with adequate safeguards against generating content that breaches EU law, and publishing a summary of the copyrighted data used in training.

Prohibited systems

The Parliament also expands the list of prohibited AI practices, restricting which AI systems may be placed on the market. The prohibitions cover specific use cases relating to “real-time” remote biometric identification systems, crime prediction, the creation of facial recognition databases and the inference of emotions.

High-risk systems

The Commission’s original proposal takes a risk-based approach by imposing stricter requirements on high-risk AI systems. Such systems include those used in employment, biometric identification, the management or operation of critical infrastructure, education, and access to essential public or private services.

The original proposal listed AI systems that would automatically be considered high-risk. The Parliament expands that list to include AI systems used in public elections and AI-based content recommendation systems of very large online platforms (as defined in the Digital Services Act), but it no longer treats listed systems as automatically high-risk.

In order to be considered high-risk, the AI system must also pose a ‘significant risk’ to people’s safety, health or fundamental rights, or a significant risk of harm to the environment.

To determine whether or not an AI system poses a significant risk, an assessment should be made of several factors:

  1. the effect of the risk,
  2. the severity of the risk,
  3. its probability of occurrence,
  4. its duration, and
  5. whether it affects individuals or a group.
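Purely as an illustration of how these factors might be combined in an internal triage process, here is a minimal sketch in Python. The scoring scale and threshold are our own assumptions: the Act sets out the factors but no numerical test, and the formal high-risk determination remains a legal judgement, not an engineering one.

```python
from dataclasses import dataclass


@dataclass
class SignificantRiskScreening:
    """Records the five screening factors; the scales are assumptions."""
    effect: int          # effect of the risk, scored 1 (negligible) to 5 (severe)
    severity: int        # severity of the risk, same scale
    probability: int     # probability of occurrence, same scale
    duration: int        # duration of the risk, same scale
    affects_group: bool  # True if a group, not only individuals, is affected

    def needs_full_assessment(self) -> bool:
        # Escalate to a full legal assessment when the combined score is
        # high or a whole group is affected. The threshold of 12 is an
        # arbitrary internal triage choice, not a legal test.
        score = self.effect + self.severity + self.probability + self.duration
        return score >= 12 or self.affects_group


# Example: a moderately risky system affecting a group is escalated.
screening = SignificantRiskScreening(effect=3, severity=2, probability=2,
                                     duration=2, affects_group=True)
print(screening.needs_full_assessment())  # True -> refer to legal review
```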

The Parliament also requires a fundamental rights impact assessment before a high-risk AI system is put into use for the first time. Unless it is an SME, the deployer must notify the national supervisory authorities and “relevant stakeholders” of a system launch. These stakeholders would include equality bodies, consumer protection agencies, social partners and data protection agencies. To the extent possible, the deployer would need to obtain their input. Certain entities deploying AI will also be required to publish a summary of the results of this impact assessment.

The Parliament also provides that, if a data protection impact assessment is required under the General Data Protection Regulation (GDPR), it should be conducted in parallel with the fundamental rights impact assessment and attached to it as an addendum.

Exemptions

The Parliament clarifies that there will be exemptions from the rules for AI systems. These cover research, testing and development activities relating to an AI system before it is placed on the market or put into service, provided those activities respect fundamental rights and EU law.

Key elements to look out for in the near future

To conclude, even though this is an ever-evolving framework, businesses need to start shaping their internal strategies and policies now, so that their future development or use of AI complies with both the upcoming and the existing EU legislative framework. For this reason, the following factors should be taken into consideration:

The functioning of the AI system

Businesses should understand how their algorithms and data work, to ensure that no detriment to individuals arises from biased or inaccurate algorithms and data sets. As AI may be a “black-box” technology, properly understanding its functioning and predicting its outcomes is no easy task. Still, in the case of high-risk AI systems, AI providers are entrusted with a series of delicate assessments, while such obligations are far more limited for AI deployers. Indeed, AI deployers must only follow the instructions for use provided by the AI providers, including any human oversight measures these indicate.

The purpose and scope of the AI system

It is fundamental to define the functions, goals and expected use cases for which the AI system is intended to be deployed, as well as its limitations. Such an assessment would allow AI providers to place an AI system into one of the risk bands set out by the AI Act.

This would also help AI deployers to clearly define the purpose of the AI system and take the necessary decisions on the input data to feed the system. Additionally, this analysis would support the alignment of the AI system’s expected use with the ethical principles and reasonable expectations of individuals and wider society, as well as with relevant GDPR principles.
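As a hypothetical example of what such a definition might look like in practice, a provider could maintain a short intended-purpose declaration alongside the system. The keys and values below are invented for illustration; the Act does not mandate any particular format.

```python
# Hypothetical intended-purpose declaration; keys are illustrative only.
INTENDED_PURPOSE = {
    "function": "rank job applications for interview shortlisting",
    "goals": ["reduce screening time", "apply consistent first-pass criteria"],
    "expected_use_cases": ["internal HR recruitment"],
    "out_of_scope": ["final hiring decisions without human review"],
    "limitations": ["trained on historical data that may reflect past bias"],
    "input_data": ["CV text", "structured application-form fields"],
}

# Employment is among the high-risk scenarios listed above, so a system
# with this declared purpose would likely attract the high-risk analysis.
```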

The human oversight requirement

High-risk AI systems must be equipped with appropriate mechanisms to ensure human oversight throughout their lifecycle. This oversight must include the ability to disregard, override or interrupt the AI system. Oversight must also continue at the deployment stage, allowing proper monitoring of the AI system’s functions and the ability to act on any malfunction. Meeting the human oversight requirement will mean redefining internal roles and responsibilities and assigning competent, well-resourced personnel.
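To make the disregard, override and interrupt abilities concrete, here is a minimal sketch, assuming a simple synchronous decision pipeline. The class and method names are our own; the Act requires the capabilities, not any particular design.

```python
from typing import Callable, Optional


class OverseenSystem:
    """Wraps a model so a human has the final say over each output."""

    def __init__(self, model: Callable[[str], str]):
        self._model = model
        self._interrupted = False

    def interrupt(self) -> None:
        # The "interrupt" ability: stop the system from acting further.
        self._interrupted = True

    def decide(self, case: str,
               reviewer: Callable[[str, str], Optional[str]]) -> Optional[str]:
        # The reviewer sees the case and the model's proposal, and may
        # accept it, override it with a different value, or return None
        # to disregard the output entirely.
        if self._interrupted:
            raise RuntimeError("system interrupted by its human overseer")
        proposal = self._model(case)
        return reviewer(case, proposal)


# Example: the human overrides a proposal they consider wrong.
system = OverseenSystem(model=lambda case: "reject")
decision = system.decide("application-42",
                         reviewer=lambda case, prop: "refer to manager")
print(decision)  # "refer to manager"
```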

The accountability factor

Businesses need to implement strong technical safeguards and monitoring mechanisms that can detect, prevent and correct any error, risk or harm from the AI system. As with the GDPR, this factor would require businesses to reshape their internal systems and internalise the costs of compliance.

These kinds of concerns are already familiar from the data protection discourse, but they become even more complicated with a technology that is often opaque and yet to be fully deployed. Businesses should also keep a careful eye on the various pieces of legislation applicable to the development and deployment of AI systems, including any overlap with the GDPR, upcoming digital legislation and applicable consumer legislation.

Next steps

The proposed AI Act will now enter the final stage of the legislative process: the trilogues, informal three-party negotiations between the Parliament, the Council and the Commission, which may take several months.

Further information from Allen & Overy on AI can be found here.