Responsible AI: navigating the risks and embracing the possibilities

The launch of GPT-4 has led to the widespread use of generative artificial intelligence (AI). As the first law firm – and one of the few firms of any sort – with experience in deploying GPT-4 across our business, we’re helping clients embed and use AI responsibly. We speak to some members of A&O’s AI working group.

The terms ‘AI ethics’, ‘trustworthy AI’ and ‘responsible AI’ are often used interchangeably, but there’s a technical distinction between the three despite the overlaps, as A&O counsel Karishma Brahmbhatt explains.

“AI ethics,” she says, “is defined as the study and evaluation of moral problems relating to data, algorithms and corresponding practices to formulate and support morally good solutions. In practice, it embodies the difference between what you can do and what you should do with AI and the output it creates.

“In the corporate context, this discourse has been subsumed into the broader notion of ‘responsible AI’, which asks who is answerable for the ethical and acceptable uses or outcomes of AI systems. There’s an inherent accountability element to it.”

Responsible AI goes further, though: it’s also about future-proofing.

“Part of responsible AI is making sure that your system remains fit for purpose, and that the purpose you’re using it for remains acceptable in the context of the social, economic and cultural environment in which it’s used,” Karishma adds.

But because notions of what is ‘ethical’ and ‘acceptable’ change over time, responsible AI means keeping your finger on the pulse of societal and cultural expectations – what ‘good’ or ‘right’ mean at a particular moment – and adjusting your commercial strategy accordingly.

Responsible AI is values-driven, embodying oft-cited principles such as fairness, lawfulness, ethics, safety and security, and it matters because these issues are uppermost in consumers’ minds and – as the slew of white papers, guidance, commentary and draft legislation indicates – those of policy-makers.

Why should businesses care?

There are many reasons why businesses should care about responsible AI. For Daren Orzechowski, partner and global co-head of Technology, it comes down to practicalities. “Your people are using it, no matter what they say, so you need to get ahead of it to drive safe use and safe conduct. Embrace the efficiencies and the possibilities of the technology; it’s happening, so you need to figure it out.”

And while the technological singularity (the point at which AI surpasses human intelligence and machines can learn and innovate on their own) might not currently be keeping you up at night, the fact that AI is a hot topic for regulators should be. Says Karishma: “We’ve already seen data protection regulators at the forefront of enforcement actions concerning AI systems. To avoid fines or regulatory sanctions, organisations need to keep on top of compliance.”

There are also reputational risks – and potential gains. AI, if used irresponsibly, could have far-reaching, detrimental impacts and undermine progress on environmental, social and governance (ESG) and diversity, equity and inclusion (DE&I) goals.

“AI is a technology of the future, but it consumes data, and most data is a reflection of our past,” says Karishma. “We need to be careful how we’re using data to train the AI systems that could shape our future.

“Inaccurate, incomplete and inappropriate datasets can result in the use of AI systems, and their outputs, being unfair, discriminatory and exclusionary in harmful ways, and therefore problematic for creating a smarter, more inclusive, more sustainable society.

“Social consciousness is creeping up the corporate agenda. Responsible AI practices can help companies protect rights and freedoms of individuals, while also enabling innovation and creativity and giving companies a commercial edge.”

Alongside the ethical questions sit legal risks, starting with intellectual property (IP).

“The initial question is, if you use third-party data or images to train the AI, is that an IP infringement? A second question is whether there’s an infringement at the point of use, when we rely on the AI, and the things it was trained on, to produce a result.”

There’s also the risk of ‘hallucinations’: wrong answers that look like right ones. Francesca Bennetts, ICM partner and a member of our Markets Innovation Group (MIG), says: “We liken it to an articulate, knowledgeable 13-year-old who is capable of giving a convincing and well-constructed answer, but they don’t know what they don’t know.

“That’s probably the biggest risk from a legal perspective, because if people rely on the outputs of these systems without rigorous checking, they could give materially incorrect answers to clients, with potentially serious repercussions.”

A bigger question is who is responsible if something goes wrong. Karishma says AI liability may not be top of the legislative agenda right now, but soon will be. “We’re already looking at the question of who should be liable for the output created by the AI system – is it the person who created it, the person who procured it or the person who used it? Where does (and should) the buck stop?”

How to manage those risks

Before you can begin to build a responsible AI framework, you need to define your use case. This enables you to take a by-design approach. Daren explains: “It starts with understanding your organisation – its needs and its goals – and then understanding the various use cases that would make work easier or more enjoyable. Technology should be used to create efficiency.”

Knowledge of the technical architecture is critical too.

“Before you let your people use the technology, you need to know where the data they input into a tool is going and who’s seeing the input and the output,” he adds. This will determine whether you design or license AI systems – and whether you limit access.

You need to establish the principles that will govern your use of AI and tailor them to the organisation’s culture. Develop a risk management framework, but make sure your policies are practical and realistic. This means engaging with employees early so they understand the strategy, the risks, and the rules of use.

Buy-in from senior management and other relevant stakeholders is also essential if your AI governance measures are to have teeth, as is representation.

“We’re a diverse bunch of people,” says Karishma, “which means AI, and AI governance frameworks, should be created with that diversity in mind. Making sure that the right people are involved and understand their responsibilities will help make your adoption of AI a responsible one.”

Deploying Harvey: how we did it and what we learned

Our MIG team was responsible for rolling out Harvey, a generative AI system based on OpenAI’s large language model. Today, more than 3,500 employees across 43 jurisdictions have access to it from their desktops, with around 800 people using it daily. IP partner Peter Van Dyck says there are myriad examples of how Harvey has already changed the way he and others in the team now work.

“For example,” says Peter, “I used it to research international case law as part of patent litigation work. Harvey came up with several relevant and promising cases, which I was then able to send to our colleagues in the relevant jurisdictions for further analysis.”

Referring to deployment, Francesca adds: “The biggest hurdle was making sure we understood the key legal and regulatory risks. We actively managed those before we rolled it out.”

We also set up layers of governance, including an AI steering group to set the strategy, and a group for early adopters.

“This AI Brains Trust are not just champions,” says Francesca. “They identify use cases for their practice group, best practices and what doesn’t work well. We share those learnings with the wider firm so that everyone has the benefit of up-to-date thinking.”

The rules of use are also updated regularly to reflect any changes to regulations or our internal position on risk, but there’s one rule that remains constant.

“You have to validate the output,” says Francesca. “The outputs are meant to be used as inspiration, not verbatim, and we’ve made that crystal clear. It’s your responsibility to make sure that what you’re producing for your clients is accurate and fit for purpose.”

Impact on junior lawyers

Francesca is also focused on how AI impacts our people and making sure that the technology doesn’t disrupt their career plans and lives. She has been working with HR and training teams to understand how AI will affect our junior lawyers.

“There’s no doubt that AI makes some of the processes that our juniors do more efficient. We have to identify the skills we want people to learn, and if we think they are not going to get that experience organically, then we have to proactively teach them.”

In this respect, AI is allowing us to become more purposeful about our training for junior lawyers.

“I actually think that’s a good thing for our lawyers because it’s more systematic,” she adds. “It will mean we have that certainty that we’re teaching our people what they need to be an effective lawyer.”

Building solutions and sharing our experience

The MIG team has used our AI experience to develop our own proprietary tool, ContractMatrix.

“ContractMatrix leverages generative AI to speed up contract drafting and review. It allows you to compare a clause that has been amended or suggested by generative AI to a selection of your golden-source precedents and data,” says Francesca. “It aids efficiency because, rather than having to trawl through subfolders to find previous documents, it’s all in a single place.

“The system surfaces the best precedent every time, which means our lawyers have access to much better know-how, much quicker.”

Additional functionality is regularly added in response to internal A&O user feedback – for example, the ability to find an example clause, or the most similar entire document, in the precedent bench.

ContractMatrix has the potential to help others too. We’re developing a client-facing version of the platform that uses clients’ own data, which we’ll provide on a software-as-a-service (SaaS) basis.

We’re also building out our advisory practice, helping clients to manage risk across the full lifecycle of an AI system. Says Daren: “A lot of clients are looking to build or license in technology and that’s also leading to transactional work to acquire those technologies. This type of work has been a core part of our technology transactions practice for years.”

Daren, Francesca, Peter and Karishma are part of our AI working group, which is sharing our learnings from deploying AI systems with our clients on a syndicated basis.

“The AI working group was born out of the processes we’ve had to go through,” says Francesca. “We realised these were the same issues our clients were grappling with too.”

Looking ahead

As for the future, it’s a ‘known unknown’, says Daren. “We don’t know how all this is going to play out, so we need to make sure we’re using AI responsibly. It’s a balancing act, but it will sort itself out: the markets will have to adapt to it, as they did with online books and music streaming.”

Karishma says, once the rules have been established and the scramble to implement compliance frameworks is over, “we’ll find ourselves dealing with the really meaty, knotty, interesting questions.

“The increasing use of AI is challenging the concept of what it means to be human and whether a technology can be afforded ‘rights’ in the same way as a person can.

“The pace of development is so fast that I think those types of questions will become more relevant. In ten years, we’re not going to be scratching our heads over today’s AI issues in the same way we are now.”

Read more about A&O’s Advanced Delivery & Solutions offering.
