New due diligence challenges facing investors in AI
Partner, Global Co-head of Technology
12 January 2022
Organisations looking to acquire or collaborate with Artificial Intelligence (AI) companies, or to acquire AI technologies, are having to address a host of AI-specific risks in their due diligence procedures.
AI, in various forms, has long been pervasive in certain industries.
However, it is currently advancing at breakneck pace thanks to the growing sophistication of mathematical models and algorithms, the massive abundance of readily available data and exponential growth in computational power. It remains a frontier technology – an area of huge potential, but also complexity.
Acquirers have different priorities
In its early form, the notion of AI was the quest to make computers do what humans can do. This involved setting a series of specific ‘rules’ that a computer program would follow in order to solve a particular problem. This was a laborious process and only worked for certain types of problems.
Today, AI is associated with machine learning – the quest to make computers emulate how humans think. In simple terms, a machine-learning system is a mathematical model trained on input data to anticipate risks and predict solutions to a particular problem. As more data is fed into the model, its predictions become more accurate relative to the model's objective.
The potential uses for technology of this nature are almost endless, and AI is having a transformative effect on many industries. It is a cornerstone of many organisations' digitalisation initiatives.
In recent years, there has been a spate of high-profile M&A transactions involving the acquisition of AI businesses. Standout transactions in this space have often been shaped by what the acquirer is seeking from a particular AI system. Crudely, an AI system comprises three main components – the model, the data used to train and test the model, and the software on which the model has been programmed. We have seen purchasers of AI businesses place differing importance on these elements of the AI system or, indeed, on the individuals involved in creating it.
- In many cases (such as GSK’s investment in 23andMe, the DNA-mapping company, or Amazon’s acquisition of Ring), a key driver for the deal was access to the data held by the target and the potential synergies with the purchaser’s existing datasets.
- In other cases (such as Roche’s acquisition of Flatiron Health), it was to gain access to the target’s proprietary models.
- Finally, other acquisitions (such as Google’s acquisition of DeepMind) have been driven by a desire to bring the target’s machine-learning scientists into the purchaser’s business.
Most of the deals we are seeing in this area are currently collaborations, particularly in the Life Sciences sector, where pharmaceutical companies are turning to partnerships with AI drug discovery platforms to help slash the costs and time involved in developing new drugs.
Up to now, much of the focus has been on minority investments in AI companies at the incubation/start-up phase, done in the hope that investors have bet on a winning technology ahead of expected consolidation further down the road.
AI due diligence – new questions with uncertain answers
Due diligence in AI transactions requires heightened focus in key areas. As with any other risks identified by a purchaser, these may be sufficiently material for the purchaser to decide not to proceed with the transaction or, alternatively, be capable of being addressed through appropriate warranties, indemnities, and pre/post-closing conditions in the sale agreement.
Some of the specific risks arising from AI include the following.
1. IP ownership
In many jurisdictions there will be no single type of IP right that protects the AI system as a whole. Rather, different forms of IP right will be relevant to different components of the system, e.g. copyright may subsist in the source code for the program but not in the model or the functionality of the program. Database rights may subsist in any databases of training and testing data used to train the model. Patents are unlikely to be available in the EU or the UK, where software cannot generally be patented unless it can be shown to have an overall technical effect.
For this reason, trade secrets are often the most effective form of protection for AI systems. The test for trade secret protection varies by jurisdiction but, in essence, trade secrets protect confidential information and can therefore cover many elements of an AI system. Accordingly, due diligence of an AI system or target requires heightened focus on the steps the target has taken to protect the confidentiality of the AI system, eg the terms of employment contracts and consultancy agreements, the use of NDAs to regulate any disclosure, cybersecurity measures, internal policies and staff training. For a business spun out of a university or other research institution, this can be challenging, particularly where academics (who often seek to publish their research) have been involved in developing the AI system.
From a data perspective, the nature of the AI system will determine the degree of risk (and accordingly the focus of the due diligence exercise). There is a difference between an algorithm that processes patient data for health diagnostics and an algorithm that processes molecular data for drug discovery. AI systems processing large volumes of high-risk data in sensitive areas like health diagnostics or personal credit ratings can raise procedural and ethical questions.
Key areas of focus for data protection due diligence include:
- Where is the data from? Has the data been collected lawfully?
- Have steps been taken in collecting the data to eliminate biases and comply with equality legislation? Have steps been taken to ensure that the data is up to date and accurate?
- How is the data stored? And where?
- What internal governance procedures are in place within the organisation around the data? Are machine learning scientists trained in the ethical use of the data and subject to codified principles for data ethics?
- What arrangements are in place to cover any cross-border transfers of data?
- What steps have been taken to embed the ethical use of data? Have steps been taken to ensure the outputs of the AI system are robust and justified, eg testing and formal audits, management-level oversight, internal KPIs?
- Are data ethics embedded in the organisation’s compliance strategy? Is the organisation aware of relevant regulatory authorities and guidance issued on AI systems?
Although regulators are struggling to keep pace with developments in AI, there are many existing laws and regulations that apply to AI systems (including laws relating to data protection and various sector-specific regulatory requirements on the use of AI). There is also a global effort by regulators to manage risks arising from AI systems. In the EU, a draft AI regulation seeks to impose requirements on AI systems, with more stringent requirements for AI systems carrying out higher-risk functions. These requirements seek to set a minimum standard for risk management, transparency, robustness, data governance and human oversight, among others. Enforcement could include fines of up to EUR 30 million or 6% of global revenue, making the maximum penalties even heftier than those under the GDPR.
As such, from a due diligence perspective, a purchaser should expect the regulatory burden on AI businesses to expand, and should give due focus to the steps taken by the target to enshrine existing and anticipated standards in its business.
Rules on foreign investment and national security may also be relevant, depending on the nature of the AI system. In many jurisdictions, including China and the UK, new national security and foreign investment rules now specifically apply to AI, alongside other technologies.
If the transaction involves the acquisition of large amounts of data, antitrust issues may be relevant where the combination of that data with the purchaser’s own data (and the use of the combined dataset to build better AI models) results in a level of market dominance.
Cybersecurity will be a focus of most due diligence exercises. In an AI context, certain specific risks arise that may need to be addressed. This is because the outcomes that AI systems produce are dependent on the input data: if the input data can be hacked and altered, the outcomes will change. AI models can also be vulnerable to adversarial attacks, in which deliberately crafted or tampered inputs cause the model to produce incorrect outputs (‘spoofing’). These risks are particularly acute where AI systems are used in connection with automated systems running critical infrastructure, health and drug development programmes or even autonomous weapons systems.
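Why tampering with input data changes an AI system's outcomes can be shown with a toy sketch. This is an illustrative example of data poisoning using a deliberately simple nearest-centroid classifier (all names and numbers are hypothetical, not drawn from any real system): injecting a handful of fabricated training examples shifts the model's decision boundary, so the same applicant receives a different outcome.

```python
# Toy nearest-centroid classifier: each class is represented by the
# average of its training examples, and a new point is assigned to
# the class with the nearest centroid.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # return the label whose centroid is closest to x
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Clean (hypothetical) training data for a credit decision
clean = {
    "approve": [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)],
    "reject":  [(3.0, 3.0), (3.1, 2.9), (2.8, 3.2)],
}
centroids = {label: centroid(pts) for label, pts in clean.items()}
applicant = (1.5, 1.5)
print("clean model:", classify(applicant, centroids))

# An attacker injects fabricated "approve" examples far from genuine
# approvals, dragging that class's centroid towards the other class.
poisoned = dict(clean)
poisoned["approve"] = clean["approve"] + [(5.0, 5.0)] * 6
centroids_p = {label: centroid(pts) for label, pts in poisoned.items()}
print("poisoned model:", classify(applicant, centroids_p))
```

Real attacks on production systems are far subtler, but the mechanism is the same: because the model is derived from its data, compromising the data compromises the outputs – which is why data integrity controls belong in the cybersecurity workstream of AI due diligence.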
Since the technology underpinning AI systems is constantly evolving and breaking new ground in various industries, it can be difficult to apportion liability (and make financial provision for it) should an AI system cause loss or damage. AI systems generate a prediction (not a guarantee) and could, therefore, even when functioning perfectly within their specifications, cause an outcome that results in loss or damage.
From a due diligence perspective, a purchaser should consider the potential legal grounds on which the target may be exposed to liability (eg product liability, negligence, breach of contract) and focus accordingly on the possible mitigations (eg insurance) and on the target’s plans for mitigating and responding to incidents of this nature.
Looking under the hood
The potential for AI to be turned into incredibly useful tools across multiple applications is already clear. Equally, it can be difficult for a purchaser (especially a more traditional market player in a particular sector) to cut through the huge amount of sales and marketing hype around the sector as it goes through a period of accelerating development.
For this reason, many purchasers will need help to ask informed questions (on commercial and technological topics as much as legal) and to adjust the terms of the sale documentation accordingly.