A collection of monitors displaying images.

Artificial Intelligence

As the world's leading AI advisory firm, we help responsible pioneers harness the potential and manage the risks of this transformational technology.

In February 2023 we became the first firm in the world to deploy generative AI at enterprise level. More than 3,000 of our lawyers in 43 jurisdictions now use GPT-4-based tools in their day-to-day work.

Our client advice is grounded in the extensive and rigorous programme we undertook to safely and responsibly integrate this technology. We understand all forms of AI and the specific issues each raises from a risk management and contracting perspective. Our experience spans everything from helping nation states shape their AI policies to advising businesses across sectors on how to develop effective and responsible AI solutions, handle AI-focused transactions, and manage AI-related disputes.

AI at A&O

Harvey

Our deployment of Harvey, an OpenAI-backed tool based on GPT-4, began with a sandbox. In other words, we gave access to a limited number of lawyers in a ring-fenced environment. Sandboxes are crucial for any business looking to deploy generative AI because it’s hard to predict what the technology will do until you use it. We tested, adapted, and moved ahead – all in a safe and secure environment. We only rolled out Harvey to a wider group once we could mitigate its risks, and we continue to gather and act on feedback we receive.

We also established an AI steering committee and an AI brains trust to help our experts understand AI’s current and future capabilities and how it can be harnessed across every area of our business. Alongside this, all our existing governance structures, including our risk committee, now consider generative AI in their day-to-day decisions.

Clear governance and guardrails are critical to successfully deploying AI. We have specific rules of use in place and train our people how to use AI tools effectively and safely.

People are the common thread that runs through all our work with AI. We know that generative AI is an augmentative tool. Everything Harvey produces is rigorously checked, edited and finessed by our team. It enhances the work our lawyers do and helps us produce better results for our clients. In turn, it is governed and augmented by the gold-standard critical thinking and creativity for which A&O lawyers are known.

Our AI Group

Our multidisciplinary AI Group advises clients on the responsible development, deployment and use of AI.

We combine a sophisticated understanding and experience of technology with deep expertise in intellectual property, data privacy, regulation, technology transactions, litigation and change management.

We help clients to manage the risks associated with this powerful technology, which fall into two broad categories.

First, AI models make errors. Crucially, even those who build and train the models cannot fully explain or account for those errors. This so-called “black box” problem creates significant risks.

  • Hallucinations: These are incorrect outputs that could lead to, for example, tort liabilities, consumer harm or regulatory breaches. Hallucinations can be the result of incorrect or out-of-date data, inaccurate mathematical predictions based on weighting of sources or randomisation, or historical bias in the datasets used to train the models.
  • Unpredictability: A lack of explainability also creates a lack of predictability: you can’t be certain exactly what the model will say in response to a question. This can make it extremely difficult to check that it meets standards of quality and accountability.
  • Response divergence: By their very nature, AI models will give multiple answers to the same question. This could be evidentially relevant if, for example, an AI chatbot built to give financial advice delivers different responses to two individuals leading to divergent outcomes.
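The response divergence described above stems largely from how generative models produce text: the next token is sampled from a probability distribution, so identical prompts can yield different answers. The following is a generic, self-contained sketch of that mechanism, not any real model's API; the toy distribution, token names and "temperature" values are invented for illustration.

```python
import math
import random

def sample_next_token(probs: dict[str, float], temperature: float,
                      rng: random.Random) -> str:
    """Sample one token from a next-token distribution.

    Temperature rescales the distribution in log space: higher values
    flatten it (more divergence between runs), lower values sharpen it
    towards the single most likely token.
    """
    scaled = {tok: math.exp(math.log(p) / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    tokens = list(scaled)
    weights = [scaled[t] / total for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# A toy next-token distribution for a prompt like "The safest investment is ...".
probs = {"bonds": 0.5, "equities": 0.3, "property": 0.2}

rng = random.Random(0)
answers = [sample_next_token(probs, temperature=1.5, rng=rng) for _ in range(5)]
print(answers)  # five samples for the same prompt; they need not all agree
```

At high temperature the same "question" produces varying answers across runs, which is exactly the evidential problem flagged above; at a temperature near zero the model collapses to its single most likely answer.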

Second, generative AI models ingest human-created content and draw on it, mathematically transformed, in their outputs. A user may therefore be working with someone else’s information without permission, credit or even awareness. This raises significant IP infringement questions: for example, can the user assert ownership over the model’s output? And is their own IP safe if they are using the model?

There are also consequential questions about data privacy and data protection, for example, where an AI model has been trained using personal data or a user inputs personal data as a prompt.

Our AI Group provides answers to these substantive legal questions on a syndicated basis. You can sign up to join a series of one-hour calls with other businesses, each held in a controlled environment with an antitrust lawyer present. The calls deal with specific issues and are supplemented by minutes and additional written materials such as formal memos, policy guidelines and comparative analyses.

So far, we have covered topics including a primer on AI, ChatGPT policy, IP infringement and data risks, licensing an LLM and change management, and have welcomed attendees from industries including financial services, pharma, technology and telecoms.

For more information, please get in touch with your usual A&O contact.

ContractMatrix

At A&O, we don’t just use AI-based tools: we build them.

Our deployment of Harvey was led by our Markets Innovation Group (MIG), the team that also created ContractMatrix. MIG brings together lawyers, developers and data scientists to build innovative solutions to our clients’ most complex challenges.

ContractMatrix is a cutting-edge contract management platform that uses generative AI to review, draft and manage contracts faster and more consistently than manual processes allow.

Contractual data is the foundation of any large business but can act as either a source of opportunity or a barrier to progress. The ability to generate insights from a contractual portfolio is what makes the difference: seeing quickly and clearly what’s in your contracts allows you to run your business efficiently and respond effectively to dislocation events.

ContractMatrix can handle any type of contract, from proprietary templates to third-party drafts, and can analyse data using bespoke playbooks or policy parameters.

Using AI to create new value from contracts

AI can be used to automate and enhance every stage of the contract lifecycle from drafting to execution – saving time, reducing costs, improving quality and reducing risks.

Our client, a leading global asset manager, was operating an unwieldy manual document storage system. It had no clause databases and disparate BAU processes, and did not apply consistent governance and compliance procedures across its operations.

ContractMatrix was the perfect solution. Its combination of cutting-edge technologies including generative AI, natural language processing and machine learning enabled us to create a bespoke, unified model to help our client better store and manage its contractual data.

System applies playbook in real time across contractual portfolio

In the process we were able to digitalise our client’s playbook parameters so they could be applied in real time across its contractual portfolio, with the system automatically identifying non-compliant documents that required remediation.
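The kind of real-time playbook check described above can be sketched in a few lines. This is an illustrative toy, not ContractMatrix itself: the rule names, patterns and documents are invented, and a real platform would use far richer analysis than keyword matching.

```python
import re
from dataclasses import dataclass

@dataclass
class PlaybookRule:
    """One digitalised playbook parameter: a pattern the contract must contain."""
    name: str
    pattern: str
    required: bool = True

# Invented example rules; a real playbook would encode many more parameters.
PLAYBOOK = [
    PlaybookRule("governing_law", r"governed by the laws of"),
    PlaybookRule("liability_cap", r"liability .{0,40}shall not exceed"),
]

def review(contract_text: str, rules=PLAYBOOK) -> list[str]:
    """Return the names of required rules this contract fails to satisfy."""
    text = contract_text.lower()
    return [r.name for r in rules
            if r.required and not re.search(r.pattern, text)]

# A toy "portfolio" of two documents.
portfolio = {
    "msa_acme.txt": "This Agreement is governed by the laws of England. "
                    "Aggregate liability of either party shall not exceed the fees paid.",
    "nda_old.txt":  "The parties agree to keep information confidential.",
}

# Flag non-compliant documents that require remediation.
non_compliant = {doc: gaps for doc, gaps in
                 ((d, review(t)) for d, t in portfolio.items()) if gaps}
print(non_compliant)
```

The same shape scales naturally: run `review` across the whole portfolio on each contract change, and feed the flagged gaps into a remediation workflow and reporting dashboard.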

We used the technology to create custom reports and visualisations of key data points, helping to impose discipline and governance on counterparty engagements and ensuring risk parameters and compliance requirements were better managed globally.

ContractMatrix can analyse contracts, extract key information, identify risks and opportunities, suggest optimal clauses and terms, and ease negotiation. It can also help businesses comply with their regulatory and contractual obligations, monitor their performance and obligations, and generate insights and reports. ContractMatrix shows how we are leveraging our legal and technology expertise to support our clients’ business and legal functions.

To understand more about how ContractMatrix helps our clients, contact David Wakeling, Francesca Bennetts, Tom Roberts or Karen Buzard.

Our Risk Management Pillars

Use Case +: The sweeping abilities of large language models mean there is a high risk of mission creep. When deploying these tools it’s vital that the use case is tightly defined. We call this the ‘+’: the strict governance controls required to keep the use of the AI within its original boundaries. This should be reinforced with playbooks, training, system settings and working practices.
Operational: Our experience deploying generative AI means we know how important it is for legal departments to work in lockstep with information security teams, as well as with those aligning the AI tools with existing technology infrastructure. In AI projects, the interdependencies between legal, operational and security stakeholders are greater than in non-AI rollouts. To take just one example, it’s not enough to put in place a contractual restriction designed to protect trade secrets if no practical steps are taken to implement encryption measures or configure systems to support the contractual terms.
Contractual: Contract terms are vital in mitigating legal risk. This is true both in the contract between the AI user and the developer and, where generative AI is used in customer-facing products, in the contract between the business and its customers. We have negotiated many of these contracts and are working on similar agreements with clients across sectors.

AI deployment risk is further complicated by the fact that there are often trade-offs between these three pillars, with some more important than others depending on the situation. A&O’s AI Group is helping clients to manage this careful calibration.              
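Among the system settings that reinforce a tightly defined use case is a simple pre-model guardrail. The sketch below is deliberately minimal and entirely invented (topic lists, function names and the refusal message are illustrative only); production deployments would rely on proper classifiers and layered controls rather than keyword lists.

```python
# Hypothetical "Use Case +" guard: the approved use case here is contract
# review, so anything outside it is refused before reaching the model.
ALLOWED_TOPICS = {"contract", "clause", "indemnity", "termination", "warranty"}
BLOCKED_TOPICS = {"medical", "tax advice", "investment"}

def within_use_case(prompt: str) -> bool:
    """Crude check that a prompt stays inside the defined use case."""
    p = prompt.lower()
    if any(term in p for term in BLOCKED_TOPICS):
        return False
    return any(term in p for term in ALLOWED_TOPICS)

def guarded_call(prompt: str) -> str:
    """Gate every request; only in-scope prompts reach the (placeholder) model."""
    if not within_use_case(prompt):
        return "Refused: outside the approved use case."
    return f"[model response to: {prompt!r}]"  # placeholder for the real model call

print(guarded_call("Summarise the termination clause in this contract."))
print(guarded_call("What investment should I make this year?"))
```

The point is architectural rather than the specific checks: the boundary is enforced in the system itself, not left to training and working practices alone.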

AI Insights

Keep informed of the latest developments in AI regulation, and the associated risks and opportunities, on our dedicated insights hub.