
Time:

Note that this course will be taught twice at CHI 2021:

Monday May 10: CEST 17-19 / PDT 08-10 / EDT 11-13 / JST 00-02 (next day)

Wednesday May 12: CEST 17-19 / PDT 08-10 / EDT 11-13 / JST 00-02 (next day)

Location: Zoom link through conference platform

Instructors: Q. Vera Liao, Moninder Singh, Yunfeng Zhang, Rachel Bellamy

CHI Program: Link

Slides: Link

Goals of the Course

We will address the following questions:

Overview

Artificial Intelligence (AI) technologies are increasingly used to make decisions and perform autonomous tasks in critical domains such as healthcare, finance, and employment. The need to understand AI, whether to improve it, contest its decisions, develop appropriate trust in it, or interact with it more effectively, has spurred great academic and public interest in Explainable AI (XAI). On the one hand, the rapidly growing collection of XAI techniques allows diverse styles of explanation to be incorporated into AI systems. On the other hand, delivering satisfying user experiences with AI explanations requires user-centered approaches and interdisciplinary research to connect user needs with technical affordances. In short, XAI is an area with growing needs and exciting opportunities for HCI research.

This course is intended for HCI researchers and practitioners who are interested in developing and designing explanation features in AI systems, and for those who want to understand the trends and core topics in the XAI literature. The course will introduce available toolkits that make it easy to create explanations for ML models, including AI Explainability 360 (AIX360) [1], a comprehensive toolkit providing technical and educational resources on the topic, such as an introduction to XAI concepts, Python code libraries, and tutorials.
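To give a flavor of the toolkit's API, below is a minimal sketch, assuming AIX360 is installed (`pip install aix360`), of using its ProtodashExplainer to select a handful of prototypical examples that summarize a dataset. The toy data and the choice of m=5 prototypes are illustrative assumptions, not part of the course materials.

```python
# Illustrative sketch of AIX360's ProtodashExplainer; the toy data and
# m=5 are assumptions for demonstration purposes only.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

# Toy dataset: 200 samples with 4 features each.
rng = np.random.default_rng(0)
X = rng.random((200, 4))

explainer = ProtodashExplainer()
# Select m=5 prototypes from X that best summarize X itself.
# W holds the prototypes' importance weights, S their row indices in X.
(W, S, _) = explainer.explain(X, X, m=5)

print("Prototype indices:", S)
print("Prototype weights:", W)
```

AIX360 offers a range of other explainers, including contrastive, rule-based, and post-hoc feature-importance methods, behind broadly similar interfaces.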

We will also draw on our own experience designing and studying XAI systems [3-8], as well as lessons learned from industry design practitioners [2], to discuss the opportunities and challenges of incorporating state-of-the-art XAI techniques into AI systems to create good XAI UX, including a "question-driven XAI design process" [9] that we developed through our research.

Outline

Intended audience and prerequisites

The intended audience for this course is any CHI attendee who is engaged in, or intends to engage in, developing, designing, or researching on the topic of XAI. The course does not require advanced knowledge of AI, data science, or programming, though a basic understanding of machine learning concepts such as classification, training data, and features will be helpful.

The course will include 15-20 minutes of hands-on practice with provided Python code samples. Interested attendees can explore the code further, but programming is not required. The course instructors will provide instructions for using the code samples, as well as introductory materials on machine learning and Python programming, beforehand for interested attendees.

Instructions for Attendees

Check out the course notes.

References

[1] Arya, V., Bellamy, R. K., Chen, P. Y., Dhurandhar, A., Hind, M., Hoffman, S. C., … & Mourad, S. (2019). One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques.

[2] Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing Design Practices for Explainable AI User Experiences. CHI 2020

[3] Dodge, J., Liao, Q. V., Zhang, Y., Bellamy, R. K., & Dugan, C. (2019). Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment. IUI 2019

[4] Zhang, Y., Liao, Q. V., & Bellamy, R. K. (2020). Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making. FAT* 2020

[5] Ghai, B., Liao, Q. V., Zhang, Y., Bellamy, R., & Mueller, K. (2021). Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers. CSCW 2021

[7] Narkar, S., Zhang, Y., Liao, Q. V., Wang, D., & Weisz, J. D. (2021). Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML. IUI 2021

[8] Ehsan, U., Liao, Q. V., Muller, M., Riedl, M. O., & Weisz, J. D. (2021). Expanding Explainability: Towards Social Transparency in AI Systems. CHI 2021

[9] Liao, Q. V., Pribić, M., Han, J., Miller, S., & Sow, D. (2021). Question-Driven Design Process for Explainable AI User Experiences. Working Paper