Mediating Community-AI Interaction through Situated Explanation: The Case of AI-Led Moderation
Yubo Kou (Pennsylvania State University); Xinning Gui (Pennsylvania State University)
CSCW '20: ACM Conference on Computer-Supported Cooperative Work and Social Computing
Session: Interpreting and Explaining AI
Abstract
Artificial intelligence (AI) is becoming prevalent in our everyday interactions with technologies and attracts much attention from HCI and CSCW researchers. Explainable AI (XAI) scholarship has explored the philosophical nature of explanation, yet technical explanations are typically evaluated by experts in lab settings and remain challenging for laypeople to understand. Less is known about how people understand and explain AI-led decisions in a community context. Drawing from XAI research and activity theory, a foundational HCI theory, we theorize how explanation is situated in a community’s shared values, norms, knowledge, and practices, and how situated explanation mediates community-AI interaction. We then present a case study of AI-led moderation, in which people who received automated punishments sought socially-oriented, system-oriented, and action-oriented explanations to develop an understanding of AI decisions. Lastly, we discuss the implications of this framework at the intersection of CSCW, HCI, and XAI.
DOI: [ Link ]
WEB: [ Link ]
Pre-recorded for the ACM Conference on Computer-Supported Cooperative Work and Social Computing, October 17-21, 2020.