A (mostly) fictional account of how a foreign bank in China might adopt AI in its credit assessment process
On November 15th, President Xi Jinping and U.S. President Joe Biden held a summit meeting at the Filoli Estate in San Francisco, California. The White House readout states, “The leaders affirmed the need to address the risks of advanced AI systems and improve AI safety through U.S.-China government talks.” A Chinese Government report confirms, “The two presidents agreed to promote and strengthen dialogue and cooperation between the two countries in various areas including China-U.S. government talks on AI.”
This is not surprising given the attendance of both countries at the recent AI Safety Summit hosted by the UK government (Click here for A&O’s commentary). At the same time, it is eye-catching because AI is among the very few topics that China and the US have agreed upon during their leaders’ candid discussions. It is also intriguing considering AI is an area in which China and the US intend to compete vigorously in the future. So the question is what it will look like when the two countries “address the risks of advanced AI systems and improve AI safety”.
It is a complex question for another blogpost. Our “fiction” below demonstrates some of the issues that international businesses, especially financial institutions, will likely face when they consider adopting AI in their China businesses. Many of these issues will likely fall outside the parameters of “AI safety” and call for a wider common ground between the two countries.
Rules (mostly non-fiction)
First, you need to understand the framework of rules within which the fiction below takes place (just as one has to understand the rules that drive Deckard to retire six Nexus-6 androids in order to understand Blade Runner, whose source novel coincidentally also takes place in San Francisco).
China has been building up a legal ecosystem around AI, including laws on cybersecurity, data security, personal information, internet content, AI governance, AI ethics, algorithm-generated recommendations, deepfakes, and generative AI (Click here for A&O’s commentary).
A comprehensive AI Law is now contemplated in China’s 2023 legislative efforts. Most notably, a team from the influential Chinese Academy of Social Sciences (CASS) released a scholars’ draft of a model AI law, recently updated from version 1.0 to 1.1 (the Model AI Law). The team is headed by Hui Zhou, who also leads the national research on the Status of the Construction of China’s Artificial Intelligence Ethics Review and Regulatory System. Given the credentials of the team, it is not unreasonable to assume that the Model AI Law may be used as a reference point, or even a blueprint, when China proceeds with the legislation of a comprehensive AI Law. For the fiction below, we treat the Model AI Law as if enacted in its current form – this is therefore the only (reasonable) fictional aspect of the rules. We will dissect the Model AI Law in a future blogpost. For now, it is sufficient to highlight the following features:
- The Model AI Law sets out the general principles and obligations for AI research, development, provision, and use, such as people-centeredness, safety/security, openness, transparency, explainability, accountability, fairness, equality, greenness, and innovation. Those principles are substantially aligned with the general principles promoted under the EU AI Act (Click here for A&O’s commentary). By capturing the research and development of AI, it also has a much broader remit than the recent measures on generative AI, which cover only the provision and use of AI.
- The Model AI Law proposes to establish a National AI Office as the governing body responsible for AI development and management (much as the CAC is for data), and a collaborative governance mechanism that involves government supervision, corporate responsibility, industry self-governance, social supervision, and user self-discipline. The idea is similar to the EU AI Office and departs from the UK’s White Paper approach. This is not surprising given China’s highly centralized “whole nation” approach to AI.
- The Model AI Law proposes to adopt a Negative List system for AI, which requires prior administrative approval and enhanced obligations for products and services on the list, and post-event administrative filing for products and services outside the list. The Negative List will be formulated and updated by the National AI Office, considering the significance and potential harm of AI to national security, public interests, economic order and the rights and interests of individuals and organizations. The design of the Negative List system is in essence a risk-based approach, but according to its drafters it purposely differs at the execution level from the classification model adopted by the EU AI Act, with the aim of reducing the compliance burden for those outside the list.
- The Model AI Law differentiates obligations for AI developers, service providers and users. In particular, the Model AI Law proposes to introduce enhanced obligations for AI developers and service providers for products and services on the Negative List, such as maintaining technical documentation, operating a quality management system, conducting security assessment (and in the case of developers, providing support for service providers in this regard), ensuring human supervision and control, and cooperating with regulators. In addition, it proposes to impose special obligations for foundation model developers, such as establishing security risk management, model management and data management systems, formulating rules of use and refraining from abusing market dominant position, providing necessary assistance to AI developers and service providers in fulfilling their obligations under the model law, and releasing social responsibility reports.
Let’s also remember that financial institutions in China, like regulated entities elsewhere in the world, are subject to sector-specific regulations. Those include, for example, rules on credit assessment, IT outsourcing, and financial consumer protection.
AI use in the financial sector (non-fiction)
In August, the Alan Turing Institute published a report, “The AI Revolution: Opportunities and Challenges for the Financial Sector” (Click here for the full report). It lists several benefits of AI in the financial sector: automation of key business processes in customer service and insurance, improved algorithmic trading, better financial forecasting, and stronger compliance and fraud detection, to name a few.
The report states that one of the biggest benefits of using AI in the financial sector is improved decision-making related to credit assessment, lending, and investment. This is due to a financial institution’s ability to harness the vast amount of data it generates, as well as diverse and non-traditional data sets, such as data gathered from consumer behaviour and social networks. We have chosen this area as the basis of our fiction not only because of its potential but also because it is likely to be among the most heavily regulated.
Now for the fiction – finally!
A global bank, DoGood, has a presence in many countries around the world. It benefits from a global risk management system and the ability to bring best practices from one country to another. DoGood has a banking subsidiary in China that lends to both individuals and companies in China. Having had good experience in a number of LatAm and ASEAN countries using Benddo, an alternative credit scoring fintech company driven by data analytics and AI, and striving for further financial inclusion, DoGood is now considering using a customized version of Benddo in China to assist in its lending, below a certain amount (say, RMB10,000, equivalent to roughly USD1,500), to individuals (especially in rural areas of China) and small and medium-sized enterprises (SMEs).
Benddo collects both traditional and non-traditional data and uses AI algorithms to predict an individual’s likelihood to repay. In particular, it helps people who have no prior credit record but a steady income to gain access to credit, for example by drawing on their online profiles and information from their mobile devices.
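To make the mechanics concrete, alternative credit scoring of this kind is often described as a logistic model over a mix of traditional and non-traditional signals. The sketch below is purely illustrative: the feature names, weights, and bias are our own assumptions for this fiction, not anything disclosed about Benddo’s actual model.

```python
import math

# Illustrative only: features and weights are invented for this sketch,
# not Benddo's actual (proprietary) model.
WEIGHTS = {
    "months_of_steady_income": 0.08,    # traditional signal
    "utility_bills_paid_on_time": 0.6,  # traditional signal
    "mobile_topup_regularity": 0.9,     # non-traditional: mobile device data
    "social_profile_age_years": 0.15,   # non-traditional: online profile
}
BIAS = -3.0

def repayment_probability(applicant: dict) -> float:
    """Logistic score: weighted sum of mixed data sources through a sigmoid."""
    z = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# An applicant with no credit history but a steady income and regular
# mobile top-ups can still receive a usable score.
applicant = {
    "months_of_steady_income": 18,
    "utility_bills_paid_on_time": 1.0,
    "mobile_topup_regularity": 0.8,
    "social_profile_age_years": 4,
}
print(round(repayment_probability(applicant), 3))  # -> 0.589
```

The regulatory significance is that the non-traditional inputs (device data, online profiles) are exactly the data types that raise the consent, licensing, and export questions examined below.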
DoGood is now conducting a high-level regulatory feasibility assessment of Benddo for China and has identified the following key questions to be addressed:
1. Which are the main regulatory agencies that will have jurisdiction over DoGood’s use of Benddo?
- PRC AI Bureau, especially if credit assessment services fall within the Negative List;
- NAFR, primarily for the credit business generally and for bank outsourcing, as well as for financial consumer protection, for example;
- PBOC, depending on Benddo’s service model and whether Benddo itself is regarded as conducting a credit assessment / scoring business subject to a licensing regime of the PBOC; and
- CAC, for data matters generally and in particular if the service involves potential data export.
2. Is the adoption of Benddo by DoGood China subject to the Model AI Law and the Generative AI Service Measures?
It is highly likely that the extraterritorial application of the Model AI Law will be triggered, since the services (provided by Benddo to DoGood China on a cross-border basis) potentially impact China’s national security, public interest, or the legitimate rights and interests of PRC individuals or entities. Benddo will have to establish designated platforms or appoint designated personnel in China to handle AI-related matters and file their names and contact information with the PRC AI Bureau.
If Benddo’s services fall within the Negative List (which is highly likely), both Benddo and DoGood China will be subject to enhanced regulations under the Model AI Law.
The Generative AI Service Measures would probably not be triggered if it can be argued that the relevant AI services are provided not to the general public but only to DoGood China.
3. Will Benddo be subject to any financial licensing requirement due to its adoption by DoGood China?
The answer depends on the business model (and therefore the role) of Benddo and the exact arrangement between Benddo and DoGood China.
Assuming that the services to be provided by Benddo for DoGood broadly include the collection, ordering, storage, and processing of credit data of PRC persons and the provision of the relevant output to DoGood China, such activities would constitute a credit assessment business subject to PBOC’s licensing regime. Credit data is a broad concept that very likely includes the data processed by Benddo.
The question is whether the parties may find a solution that reduces Benddo’s service scope to pure IT solution provision without engaging in the licensed activities above, though one questions whether this would significantly dilute the expertise and value that Benddo can provide for DoGood China.
4. Will the adoption of Benddo by DoGood China be considered a type of outsourcing that requires regulatory approval?
Cross-border outsourcing services provided by an overseas IT service provider to a PRC onshore bank like DoGood China would be subject to a requirement of post-event filing with the NAFR.
5. Does the fact that Benddo is developed outside China affect its adoption by DoGood China?
The cross-border nature of the engagement between DoGood China and Benddo will trigger the series of issues flagged above, including the potential extraterritorial effect of the Model AI Law, the licensing requirement for credit assessment business, the NAFR reporting requirement for cross-border outsourcing, and data export. The engagement needs to be carefully structured in order to mitigate those issues.
6. What types of data may DoGood China or Benddo have access to?
The Model AI Law spells out China’s ambition to share data (in addition to sharing infrastructure and computing power) to promote AI development and innovation. In reality, there is still a long way to go.
PBOC’s credit data system is China’s basic financial credit database, which includes traditional credit information such as the repayment of loans or payment of utility bills. The system is in general accessible only by data subjects, PRC onshore banks (including DoGood China) and licensed lending businesses, but not by a foreign person such as Benddo.
There are also commercial credit assessment data providers licensed by the PBOC, but access by a foreign credit assessment service provider is subject to post-event filing with the PBOC. This is in addition to the data export-related issues.
It is not clear to what extent there are providers of the non-conventional data sources used by Benddo, such as data residing on a person’s mobile device; access would likely have to be obtained through individual consent and the proactive provision of data by the individual.
This raises the question as to how Benddo’s model could be trained with relevant data in China (if such training is necessary).
7. Can data collected by DoGood China or Benddo be used outside China? Can the model trained with the China data be used outside China?
It may be possible that any improvement of the model resulting from training on China data can be utilized outside China, but the actual export of the data (especially given its sensitive nature when sourced from mobile devices) will likely be challenging.
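The distinction between exporting raw records and exporting what a model has learned from them can be sketched as follows. This is a toy illustration under our own assumptions (a trivial “model” of per-group averages standing in for real model weights), not a statement of what would actually be permissible under PRC rules.

```python
import json

# Toy illustration (our own assumptions, not Benddo's actual architecture):
# raw applicant records stay onshore; only aggregate parameters derived
# from them are serialized as a candidate for cross-border use.

ONSHORE_RECORDS = [  # personal data: never leaves China in this sketch
    {"income": 4200.0, "repaid": 1},
    {"income": 1800.0, "repaid": 0},
    {"income": 3900.0, "repaid": 1},
]

def fit_onshore(records):
    """Reduce raw records to aggregate parameters (a trivial 'model')."""
    repaid = [r["income"] for r in records if r["repaid"]]
    defaulted = [r["income"] for r in records if not r["repaid"]]
    return {
        "mean_income_repaid": sum(repaid) / len(repaid),
        "mean_income_defaulted": sum(defaulted) / len(defaulted),
    }

# Only this small parameter payload would be a candidate for export.
export_payload = json.dumps(fit_onshore(ONSHORE_RECORDS), sort_keys=True)
print(export_payload)
```

In practice the “parameters” would be model weights rather than averages, and even aggregated outputs may be restricted if they are classified as important data; the sketch only illustrates where the line between data export and model export could be drawn.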
8. If the US further expands its chip ban against China, will DoGood China have to cease to use Benddo?
Possibly, if Benddo is a US-based company. Art 60 of the Model AI Law provides that China may take reciprocal measures if any country imposes discriminatory, restrictive or prohibitive measures against China related to AI R&D, investment and trading. If a chip ban is regarded as “related to AI R&D, investment and trading” and China decides to take reciprocal measures, such as excluding US AI service providers from the China market altogether or limiting their ability to use China data or to improve their models with China data, DoGood China may have to cease using Benddo.
Having considered all of the above issues, DoGood China is of the view that some regulatory hurdles (typical when adopting new technology in a new jurisdiction) can be overcome through appropriate regulatory process and engagement. However, the uncertainty created by the countermeasures under article 60 of the Model AI Law is entirely out of its control and difficult to quantify. The potentially severe impact of ceasing to use Benddo would be very difficult to mitigate. DoGood learns that there is a Chinese fintech company that provides technology similar to Benddo but trained on China data. Since DoGood has no experience with this technology and would need significant time and resources to test it, it decides to delay the project indefinitely.
The End (of the fiction)
Not surprisingly, introducing geopolitical considerations into a general AI law, though understandable, does not inspire business confidence in adopting AI in an already highly regulated industry. Let’s be happy that, for now, this is only fiction.
The chances are that we are already looking at a fragmented world for future AI deployment in terms of model development, data, and computing power, as well as the associated regulatory frameworks. Potential cooperation on AI safety concerning frontier AI, as agreed during the UK AI Safety Summit and the San Francisco meeting between the US and China leaders, is a great first step. On the level of principles such as security, transparency, fairness, and accountability, we are clearly on a path of convergence. Yet it is not clear whether this is sufficient to fully harness the potential of AI in areas that require global efforts, such as combating poverty and disease. Because of this, opportunities are lost for those who need them most.
Content in this post has been contributed by Tiantian Wang, counsel at Shanghai Lang Yue Law Firm, Allen & Overy LLP’s joint operation firm in China.
Allen & Overy Lang Yue (FTZ) Joint Operation Office is a joint operation in the China (Shanghai) Pilot Free Trade Zone between Allen & Overy LLP and Shanghai Lang Yue Law Firm established after approval by the Shanghai Bureau of Justice.