
Ethical and social challenges posed by AI generative models: Exploring fears and the state of AI regulation in the EU

Artificial Intelligence (AI) generative models (such as GPT-3 and GPT-4) are powerful tools that can create realistic and diverse content, such as images, text, audio, and video, from data or latent variables. However, as these models become more advanced and accessible, they also pose significant ethical and social challenges. It is worth exploring some of the growing fears relating to the emergence of these models and looking at where we are in terms of AI regulation.

Big fear

Geoffrey Hinton (labelled 'The Godfather of A.I.') has voiced his concerns that AI is advancing more quickly than he and other experts in the field expected. In his view, the recent progress in the development of AI generative models should create a sense of urgency to ensure that humanity can contain and manage these advanced models. Mr Hinton's concerns echo the sentiment of many researchers and other AI specialists who signed an open letter calling for the training of AI systems more powerful than GPT-4 to be paused for at least six months. Max Tegmark, the founder of the Future of Life Institute and one of the first signatories of the open letter, stated that the unprecedented race towards AI dominance is like 'rushing towards the cliff': 'the closer we get, the more scenic the views are'. These voices are undoubtedly a reason to be concerned and a clear indication that legislators should take action.

Apart from the fear of 'losing control over AI systems', the other main fears are:

  • The fear that generative models will be used to create and spread fake or misleading information, such as deepfakes, synthetic text, or manipulated audio and video, which could harm individuals, groups, or institutions, or undermine trust, democracy, and security. Such content could also have adverse consequences for human behaviour, cognition, and emotion, as the models could influence or manipulate people's beliefs, opinions, preferences, or actions, or affect their sense of reality, identity, or agency;
  • The fear of job loss. Researchers at OpenAI estimate in their report of March this year that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted;
  • The fear that generative models will pose a threat to intellectual property, creativity, and authenticity, as they could generate or copy content that infringes on the rights or interests of original creators, or that deceives or confuses audiences or consumers.

Where are we with AI regulation in the EU?

In the EU, the first comprehensive proposal to regulate AI was adopted more than two years ago. The underlying objective of the proposal was to ensure that AI systems are overseen by humans and are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The proposal follows a risk-based approach, classifying AI systems into four risk categories: unacceptable, high, limited, and minimal. AI systems with an 'unacceptable' risk level would be strictly prohibited. Among them are systems that deploy subliminal or purposefully manipulative techniques, exploit people's vulnerabilities, or are used for social scoring.

Last Thursday, two EU committees (the Internal Market Committee and the Civil Liberties Committee) adopted a draft report setting out amendments to the proposal. They include a ban on predictive policing (ie AI systems predicting the occurrence of offences based on the profiling of individuals), an expanded list of stand-alone high-risk AI systems, and a strong role for the new AI Office. The draft report also covers AI generative models (such as GPT-3 and GPT-4) and refers to them as 'foundation models'. These are defined as AI models that are trained on broad data at scale, are designed for generality of output, and can be adapted to a wide range of distinctive tasks. The draft report imposes a number of obligations on providers of these models, including:

  • to ensure that the models comply with regulatory requirements;
  • to demonstrate through appropriate design, testing and analysis that risks of the models are mitigated;
  • to process and incorporate into the foundation models only datasets that are subject to appropriate data governance measures;
  • to design and develop the foundation models to achieve, throughout their lifecycle, appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity, assessed through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development;
  • to design and develop the foundation models in line with specific environmental requirements (including on reduced energy use);
  • to draw up extensive technical documentation and intelligible instructions for use to enable downstream providers to comply with their requirements;
  • to register the foundation models in the EU database.

Providers of generative models should also ensure transparency about the fact that content is generated by an AI system, not by humans. They would also have to train and design their models in such a way as to ensure adequate safeguards against the generation of content in breach of EU law, in line with the generally acknowledged state of the art and without prejudice to fundamental rights, including the freedom of expression.

Transparency issue

The EU proposal strongly emphasises transparency as a general principle for foundation models (as well as AI systems). Transparency means that a model should be developed and used in a way that allows for its appropriate traceability and explainability. The issue, however, is that the models themselves are, to a large extent, difficult (if not impossible) to explain. AI researchers often refer to them as 'black boxes' because of the difficulty of explaining how they work (ie how they produce their outputs). This creates significant uncertainty about the auditing methods that should be applied to these models to properly address the risks they may pose.

Timing issue

The EU AI Regulation itself is definitely a step in the right direction. However, most of its provisions are designed to start applying 24 months after the regulation enters into force. Given the urgency with which AI risks should be addressed at the regulatory (and other) levels, such a delay before the Regulation starts to apply is not very comforting. It remains to be seen whether EU legislators will truly take AI safety seriously.