
Project ExplAIn: an interim view and the challenges of explanations


The UK’s Information Commissioner’s Office (the ICO) and the Alan Turing Institute (the national institute for data science and AI, the Turing) published their interim report (the Report) into Project ExplAIn in the first week of June.

What is Project ExplAIn?

As national and international institutions and organisations endeavour to publish AI guidelines and recommendations (see our blog regarding the OECD AI Recommendations), the ICO and the Turing have teamed up to investigate a specific issue particularly affecting AI: its “explainability”. This is Project ExplAIn.

In 2018, as part of its AI Sector Deal, the UK Government requested practical guidance to assist organisations in explaining AI decisions to affected individuals. The Project ExplAIn guidance, when finalised, is intended to “promote best practice, helping organisations to foster individuals’ trust, understanding, and confidence in AI decisions” and to address the need to comply with data protection principles such as transparency, even when data is used in innovative technologies.

Project ExplAIn proposals may prove to be a helpful stepping stone to broader consensus and guidance, if only to highlight that there is no one-size-fits-all approach and that the nature of any explanation will depend on the context. Despite existing requirements in relation to AI and the processing of personal data, broader governance and culture changes may be required within organisations to ensure that AI is adequately explained more generally.

Why focus on explainability?

The awkward term “explainability” describes how well an AI system can be understood. Explainability is broadly considered necessary to ensure the ethical, unbiased and safe use of the technology, but it is also fundamental to compliance with data protection legislation such as the General Data Protection Regulation (the GDPR). The general principle of transparency; the right to have automated decision making explained and to be informed of it (and not to be subject to it); and the need to adopt safeguards and carry out impact assessments when using automated decision making are all GDPR requirements. As such, the need for guidance in this area is clear, irrespective of any desire to operate at an ethical level.

Understanding and interpreting AI systems may offer individuals the opportunity to challenge decisions but, from a purely commercial perspective, it should also inform developers about their products and enable them to iterate and improve.

Key themes

Whilst the Report acknowledges some potential deficiencies in its methodology (in the framing of the citizens' juries and industry roundtables), it highlights three key themes arising from the initial research:

1. the importance of context in explaining AI decisions;
2. the need for education and awareness around AI; and
3. the various challenges to providing explanations.

Context

Consensus and a standardised approach to the use of AI are certainly beneficial for business, as expressed in existing guidance such as the OECD Recommendation and the Ethics Guidelines for Trustworthy AI (produced by the High-Level Expert Group on AI). However, at the point of AI implementation and explanation, as we already see in the provision of privacy information under the GDPR, there is no one-size-fits-all approach.

An explanation appropriate for a data scientist about a decision made on the basis of “standard” personal data will differ from that necessary to explain a decision made on the basis of sensitive personal data about a child.

As the Report details, the emphasis placed on the provision of an explanation as opposed to the accuracy of the AI decision itself (acknowledged as something of a false dichotomy in practice) will also differ depending on circumstance. For example, the need for an explanation of a criminal sanction decision may be greater (so as to enable challenge) than the need for an explanation of a medical treatment decision (where the Report indicates that individuals preferred improved AI accuracy even at the cost of an explanation).

Whether or not an explanation would be provided following an equivalent human decision is also flagged by the Report as a guiding factor. Some research participants considered that AI should mirror life, with explanations provided only where they would be expected from a human; others took a more progressive view, suggesting that AI decisions should always be justifiable and truthful, taking the opportunity to remove the human desire for social niceties and fudged rationales.

As such, any explainability guidance finally produced by Project ExplAIn will need to provide adequate information to be of practical use but retain sufficient flexibility to account for different scenarios.

Education

As with any complex subject, individuals cannot be expected to understand an explanation unless they have already received general education about key concepts and the pros and cons of AI decision making. Information relating to a particular product, system or decision, as you might see in a privacy notice under the GDPR, may also be required.

Research participants considered the level of public AI comprehension to be low, and a general programme of awareness-raising is therefore deemed necessary. No specific body is proposed to lead it and, although the advantages of a multi-voice approach were put forward, without a clear allocation of a mandate there is a risk of mixed messaging, overlap or contradiction. Indeed, the Report itself highlights the need to avoid saturation and confusion due to excess messaging (easy to envisage given the current volume of AI commentary!). In any event, the mechanism for delivering information (eg social media, broadcast, the national curriculum) is considered crucial.

Challenges

The Report also highlights one challenge to the explainability of AI as being the current lack of a standardised approach to an organisation’s internal accountability for explainable AI (across departments, territories, product types and third-party interactions). However, the presence of C-suite executives in the research programme suggests increasing interest in the issue, especially as it ties in with boardroom topics such as GDPR compliance and ESG (Environmental, Social and Governance). AI strategy and policy may therefore flow from the top, with improved procedures to achieve consistency based on guidance regarding effective governance in the area.

Inevitably, the cost and resources required to explain AI were called out as an issue. Perhaps more interestingly, even certain individuals on the citizens' juries (rather than the industry roundtables) acknowledged the cost concern. Industry representatives admitted that cost, rather than technical challenge, was likely to be the stumbling block in the provision of explanations and that, with adequate financing, there was no technical reason why explanations could not be provided. This gave reassurance to the ICO and the Turing but may not change reality for organisations, particularly when it was felt that the pace of innovation was such that legal and compliance teams simply did not have the bandwidth to address explainability requirements early enough.

As practitioners come to draft explanations of AI decisions, it is likely that they will face the same challenges as those writing complex privacy notices today. With innovative technology and competitive edge at stake, certain participants also feared disclosure of commercially sensitive materials, intellectual property infringement and revelations that enable “gaming” or “exploitation” of the system. The balance between adequate information, easily digestible text and business protection is hard to achieve, and the “no one-size-fits-all” view means that commoditising and standardising responses for efficiency will be problematic, even with guidance.

Next steps

The Report presents interim findings that will inform the Project ExplAIn guidance for organisations. The guidance is expected to be submitted for public consultation over the summer before publication in the autumn. All materials and reports generated from the citizens’ juries are available here. The Project ExplAIn guidance will also inform the ICO’s AI Auditing Framework, which is currently being consulted on and is due to be published in 2020, and which links to the ICO’s Technology Strategy – the second goal of which is “providing effective guidance to organisations about how to address data protection arising from technology”. It remains to be seen whether the final Project ExplAIn guidance will be consistent with AI guidance initiatives around the world and whether a common approach can be achieved.
