Trust in AI-assisted Decision-Making

This is a study on trust in AI-assisted decision-making (ADM) that takes the perspectives of the practitioners (those behind the system) and the decision subjects (those about whom the decision is made). The study presents interviews with people from these two groups in order to better understand the concept of trust in this context and the factors that shape it.

The paper will be presented at CHI 2024 in April.

This work is led by Oleksandra Vereschak, in collaboration with Gilles Bailly, Mahla Alizadeh and Baptiste Caramiaux.

Abstract: Trust between humans and AI in the context of decision-making has acquired an important role in public policy, research and industry. In this context, Human-AI Trust has often been tackled through the lens of cognitive science and psychology, but these accounts lack insights from the stakeholders involved. In this paper, we conducted semi-structured interviews with 7 AI practitioners and 7 decision subjects from various decision domains. We found that 1) interviewees identified the prerequisites for the existence of trust and distinguished trust from trustworthiness, reliance, and compliance; 2) trust in AI-integrated systems is strongly influenced by other human actors, more than by the system’s features; 3) the role of Human-AI trust factors is stakeholder-dependent. These results provide clues for the design of Human-AI interactions in which trust plays a major role, as well as outline new research directions in Human-AI Trust.