US – White House announces voluntary commitments from key AI companies to manage safety, security and trust risks posed by AI
Allen & Overy’s Daren Orzechowski, Will Wray and Jasmine Shao of our US Technology Team summarised this development in a blog, available here.
In brief, the commitments include:
- undertaking internal and external security testing (e.g. by independent experts) of their AI systems before releasing the product, as well as sharing information on managing AI risks, including best practices for safety, attempts to circumvent safeguards and technical collaboration;
- investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights (as the most essential parts of AI systems). These companies will also facilitate third-party discovery and reporting of vulnerabilities in their AI systems; and
- taking steps to earn public trust. This includes measures such as:
- developing technical mechanisms (e.g. watermarking) to ensure that users know when content is AI-generated;
- publicly reporting their AI systems' capabilities, limitations and areas of appropriate and inappropriate use, as well as security and societal risks, such as effects on fairness and bias;
- researching societal risks of AI systems, to protect privacy and avoid harmful bias and discrimination; and
- developing and deploying advanced AI systems to help address society’s greatest challenges, such as climate change and cancer prevention.
The White House also clarified that the Biden-Harris Administration is currently developing an executive order and will pursue bipartisan legislation to address AI risks. The Administration will also continue its efforts to establish an international framework governing the development and use of AI.
The announcement is available here.