Mar 04, 2026
                       

Calibrating Trust Through Explainable AI

The Research Problem
The rise of Generative AI (GenAI) tools such as ChatGPT has made them part of daily life for millions of people. Users rely on them to work faster, to find answers about health and mental health, and to support their learning and decision-making. A key problem, however, is that most users do not understand how these systems generate their answers. This lack of understanding produces two opposite failure modes: some people over-trust GenAI and accept its answers without question, while others distrust it entirely and avoid using it. Both outcomes can widen the digital gap, separating people who know how to use AI critically from those who either over-trust or under-trust it.

One way to address these issues is human-centered AI design, which aims to make AI systems more transparent and understandable. A key approach within it is Explainable AI (XAI), which conveys, clearly and intuitively, how GenAI tools produce their answers. Specifically, XAI uses visuals, interactive elements, or personalized feedback to help users judge when to trust an AI's responses and when to question them.

This project explores whether XAI, by helping people understand how and why AI produces certain answers, can help ordinary users develop a more critical and informed approach to using GenAI. We want to see whether clear, intuitive, and simple explanations designed for everyday users can help people decide when to trust AI and when to question it, especially in high-stakes scenarios. Most research so far has focused on technical users, such as programmers or engineers, so this project brings a new focus to ordinary people who use, or avoid, AI in daily life.

Research Design
Our key research question is whether user-centered Explainable AI promotes calibrated trust in generative AI outputs among ordinary users across diverse cultural contexts. To address this question, we will design and experimentally test a user-centered XAI intervention. Specifically, we will use prompt engineering to develop a customized version of ChatGPT that includes explainable features. This version will show users how high-stakes answers are generated, using intuitive visuals or short messages that explain how confident the system is and why it gave a particular answer.
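As a concrete illustration, the sketch below shows one way such a prompt-engineered variant could be built. It is a minimal example only: the OpenAI Python client, the model name, and the exact system-prompt wording are our assumptions for illustration, not the project's actual implementation.

    # Minimal sketch of the prompt-engineering approach described above.
    # Assumptions (ours, not the project's): the OpenAI Python client,
    # the "gpt-4o" model name, and the system-prompt wording are all
    # illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # System prompt injecting the explainable (XAI) behaviour: every answer
    # must carry a plain-language rationale and a calibrated confidence cue.
    XAI_SYSTEM_PROMPT = (
        "You are a GenAI assistant for non-technical users. After every "
        "answer, add two short sections: (1) 'Why this answer': one or two "
        "plain-language sentences on how you arrived at it; (2) 'How "
        "confident I am': low, medium, or high, plus one sentence on what "
        "could make the answer wrong and when to consult a human expert."
    )

    def ask_with_explanations(user_question: str) -> str:
        """Send a high-stakes question to the XAI-prompted model."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": XAI_SYSTEM_PROMPT},
                {"role": "user", "content": user_question},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(ask_with_explanations(
            "My blood test shows an elevated TSH value. What does that mean?"
        ))

The design choice here is deliberately lightweight: the explainable behaviour lives entirely in the system prompt, so the same task questions can be sent to an unmodified model for the control condition.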

We will then run an online experiment in Asia, Europe, Oceania, and North America. Regional representatives involved in the project, from the University of Auckland (New Zealand), the University of Alberta (Canada), Padjadjaran University (Indonesia), and Maastricht University (Netherlands), will oversee data collection. Participants will be recruited via the Prolific platform and, where needed, through local sources. After providing consent, participants will complete a short baseline survey about their trust in AI, how well they think they understand it, and whether they use it regularly. They will then be divided into two groups.

In the first group (experimental condition), participants will use a version of ChatGPT that includes the new explainable (XAI) features. They will complete short but high-stakes tasks in three key areas: health (for example, asking the AI to explain medical test results), education (for example, seeking guidance on choosing a study program or career path), and mental health (for example, asking for advice about emotion regulation). In the second group (control condition), participants will use a standard version of ChatGPT without explainable features but will complete the same tasks. Afterwards, both groups will again complete questionnaires about their levels of trust, understanding, confidence, and intention to use AI in the future. The same procedure will be repeated in each region so that we can compare results across countries and cultures. A sketch of this flow follows below.
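The sketch below outlines the two-group, pre/post flow in code form. The identifiers, survey measures, and the simple 1:1 randomization are illustrative assumptions on our part, not the registered study protocol.

    # Hypothetical sketch of the between-subjects experimental flow.
    # Condition names, survey measures, and the 1:1 randomization are
    # illustrative assumptions, not the registered protocol.
    import random
    from dataclasses import dataclass, field

    CONDITIONS = ("xai", "control")          # explainable vs. standard ChatGPT
    TASK_DOMAINS = ("health", "education", "mental_health")

    @dataclass
    class Participant:
        participant_id: str
        region: str                          # e.g. "Oceania", "North America"
        condition: str = ""
        baseline: dict = field(default_factory=dict)
        post: dict = field(default_factory=dict)

    def collect_survey(measures: list[str]) -> dict:
        # Placeholder: in the real study, responses come from questionnaires.
        return {m: None for m in measures}

    def run_task(participant: Participant, domain: str) -> None:
        # Placeholder: participant completes the domain task with the
        # XAI-enabled or standard ChatGPT, depending on their condition.
        pass

    def run_session(participant: Participant, rng: random.Random) -> None:
        """Baseline survey -> random assignment -> tasks -> post survey."""
        participant.baseline = collect_survey(
            ["trust", "perceived_understanding", "regular_use"]
        )
        participant.condition = rng.choice(CONDITIONS)   # between-subjects
        for domain in TASK_DOMAINS:
            run_task(participant, domain)    # same tasks in both groups
        participant.post = collect_survey(
            ["trust", "understanding", "confidence", "intention_to_use"]
        )

    if __name__ == "__main__":
        rng = random.Random(42)              # reproducible assignment
        p = Participant(participant_id="P001", region="Oceania")
        run_session(p, rng)
        print(p.condition, p.baseline, p.post)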

Project Objectives
The project will deliver scientific evidence (a manuscript) and open-source materials on whether XAI helps users develop more informed attitudes and behaviours toward GenAI, and whether these effects hold across different regions. Importantly, it will deliver a first proof of concept for using XAI to improve AI literacy, laying the foundation for future work on adapting XAI to diverse cultural contexts and settings while addressing potential risks and misuses. Notably, this collaboration is made possible by WUN's global structure, which provides trusted institutional partnerships, established channels for ethical and administrative coordination, and existing research connections across continents.