Reflection #1

Ilesha Garg
2 min read · Apr 16, 2021

Paper - Questioning the AI: Informing Design Practices for Explainable AI User Experiences

In this paper, the authors discuss explainable AI (XAI) and identify gaps between users’ needs for effective transparency and current algorithmic XAI approaches. They advocate for more user-centric explainable AI applications and work with design and UX practitioners, who sit between the technical AI team and end users, to identify current challenges and opportunities for XAI practice.

According to Liao et al., explainability of AI/ML models means making them more ‘transparent and understandable’. There is no definitive quantitative measure of explainability, and what counts as a good explanation varies by recipient; hence, the algorithmic work of XAI must take a human-centered approach to the explainable AI problem.

To connect user needs with available XAI techniques, the authors emphasize the importance of selecting the right technique and note the lack of criteria for making such a selection. They develop an ‘XAI question bank’ of questions users may ask to understand AI models; it could be expanded with questions gathered directly from end users of AI applications and should be treated as a starting point for XAI work.
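
To make this concrete, the sketch below pairs the kinds of questions such a bank might contain with candidate XAI technique families. The question wording and the pairings are my own illustrative assumptions, not the paper’s exact question bank or mapping.

```python
# Illustrative sketch only: pairing user-question types with candidate XAI
# technique families. The wording and pairings are assumptions made for
# illustration, not the paper's own mapping.
QUESTION_TO_TECHNIQUES = {
    "Why was this prediction made?": ["local feature attribution (e.g., LIME, SHAP)"],
    "Why not a different outcome?": ["contrastive / counterfactual explanations"],
    "What if this input changed?": ["what-if and sensitivity analysis"],
    "How does the model work overall?": ["global surrogate models", "feature importance"],
    "What data was it trained on?": ["datasheets / data documentation"],
    "How well does it perform?": ["disaggregated performance reports"],
}

def suggest_techniques(question: str) -> list[str]:
    """Return candidate technique families for a user question, if mapped."""
    return QUESTION_TO_TECHNIQUES.get(question, ["no mapping yet; consult the XAI team"])

print(suggest_techniques("Why was this prediction made?"))
```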

One aspect of the paper that I found deeply intriguing is the transparency of a dataset and how it impacts a model’s output and its reception by end users. Extensive studies have thrown light on the bias embedded in training data (e.g., internet data, which is heavily biased) as well as the lack of transparency around how a model is trained (http://ai.stanford.edu/blog/bias-nlp/). The end user of an AI application cannot inspect the data, trace where it comes from, or run any analysis on it, which makes it impossible for anyone outside the data science team to replicate the AI system and leaves the system opaque.

As a next step, the questions from the authors’ question bank, especially those about the data, could be integrated into machine learning toolkits so that answering them becomes part of an AI model’s development process, serving as a sign-off criterion.
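
A minimal sketch of what such a check could look like follows; the question wording and the data_signoff helper are hypothetical, not part of the paper or of any existing toolkit.

```python
# Hypothetical sketch of a data-transparency sign-off check. The questions
# are paraphrased for illustration and the helper is not from any toolkit.
DATA_QUESTIONS = [
    "What data was the model trained on?",
    "Where does the training data come from?",
    "How representative is the data of the deployment population?",
    "What known limitations or biases does the data have?",
]

def data_signoff(answers: dict[str, str]) -> bool:
    """Approve only if every data question has a documented answer."""
    missing = [q for q in DATA_QUESTIONS if not answers.get(q, "").strip()]
    for q in missing:
        print(f"UNANSWERED: {q}")
    return not missing

# Example: block a model release until the data questions are documented.
ok = data_signoff({"What data was the model trained on?": "described in the project datasheet"})
print("Release approved" if ok else "Release blocked")
```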

Existing explanation tools like LIME, SHAP, and Google’s Language Interpretability Tool (https://pair-code.github.io/lit/) are developer-centric; they answer some of the questions from the authors’ question bank, but their interpretability is limited to data scientists.
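
As an example of that gap, here is a minimal sketch of a LIME explanation for a single prediction, assuming the standard lime and scikit-learn APIs and a generic public dataset (not any model from the paper). The output is a list of weighted feature thresholds, legible to a data scientist but not a direct answer to an end user’s ‘Why?’ question.

```python
# Minimal sketch of a developer-centric explanation with LIME, using the
# standard lime + scikit-learn APIs on a generic public tabular dataset.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple classifier on a public dataset.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Explain one prediction locally.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)

# Prints (feature-threshold, weight) pairs: meaningful to a data scientist,
# opaque to most end users.
print(explanation.as_list())
```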

For an ML model used by law-enforcement agencies, developing user trust and user-facing explainability go hand in hand. If the model is treated as a black box, with no effort to explain its predictions or the data it is trained on, that element of trust could be lost entirely. In an article published by ProPublica (link), a risk assessment tool that predicted recidivism was found to have racial bias in its predictions. ProPublica used its own statistical analysis for that assessment; had the model’s developers employed one of the algorithmic XAI techniques or answered questions from the question bank, such arguments could be strengthened and predictive law-enforcement models made more transparent and robust.
