UN calls for moratorium on AI systems that pose serious risks to right to privacy and other human rights
27 September 2021
The OHCHR calls for a moratorium on the sale and use of AI systems that pose a serious risk to human rights, including remote biometric recognition systems in public spaces, until adequate safeguards are in place. It also recommends banning AI applications that cannot be operated in compliance with international human rights law.
While the report recognises that AI can be instrumental in developing innovative solutions, it stresses the effects of the ubiquity of AI on people’s fundamental rights. The report looks in detail at the use of AI in key public and private sectors, for example in national security, criminal justice, employment and the management of information online.
In this respect, the OHCHR highlighted a number of risks of AI that need to be addressed by states and businesses, for example:
- the ways in which the large amounts of data fed into AI systems are collected, merged and analysed are opaque, which has created an “immense” accountability gap;
- the data used to inform and guide AI systems can be faulty, discriminatory or out of date, leading to discriminatory decisions that carry heightened risks for marginalised groups; and
- a lack of transparency with respect to how companies develop and use AI systems.
The report recommends addressing these risks through a comprehensive human rights-based approach and outlines possible ways to tackle the fundamental problems associated with AI, including the implementation of a robust legislative and regulatory framework that prevents and mitigates adverse effects of AI on human rights. States should ensure that any permitted interference with the right to privacy and other human rights through the use of AI does not impair the essence of those rights, is provided for by law, pursues a legitimate purpose, and is necessary and proportionate; AI-supported decisions should also be adequately justified. The OHCHR further recommends that public and private entities systematically conduct human rights due diligence throughout the entire life cycle of AI systems (including human rights impact assessments), increase transparency about the use of AI and actively combat discrimination.