Feb 17, 2026

Responsible AI for Medical Decisions

The Research Problem
Artificial intelligence (AI) is increasingly used in healthcare, supporting tasks ranging from medical image analysis to clinical decision-making. Despite these advances, many existing AI systems operate as “black boxes”, producing outputs without clear explanations of how or why they were generated. This lack of transparency limits clinicians’ trust in AI, raises concerns about safety and accountability, and slows responsible adoption in real-world medical settings.

Addressing these challenges is particularly important in clinical environments, where decisions can have significant consequences for patients and healthcare systems. There is a growing need for AI approaches that not only provide accurate outputs, but also offer interpretable reasoning and allow meaningful human oversight.

This project addresses this gap by exploring a human-in-the-loop explainable AI approach for medical decision support. The focus is on developing a proof-of-concept toolkit that enables clearer interpretation of AI-generated analyses and supports collaborative interaction between clinicians and AI systems. By prioritising transparency, interpretability, and human involvement, the project aims to contribute to the more trustworthy and responsible use of AI in healthcare contexts.

Research Design
Using a human-in-the-loop, mixed-methods approach, we will:

1) Curate and prepare appropriately governed medical imaging cases, together with associated labels and report text, and define the clinical context and evaluation criteria to ensure the task reflects real decision support needs.
2) Develop a multimodal explainable AI prototype that integrates imaging and text to generate structured outputs with explicit, step-by-step reasoning and interpretable reports, rather than opaque predictions.
3) Implement a human-in-the-loop feedback workflow in which experts review and compare alternative AI-generated reports, providing preference signals that are used to iteratively refine the model through reinforcement-learning-style alignment (a minimal sketch of this step follows the list).
4) Validate the toolkit through technical testing and expert evaluation.
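
To make step 3 concrete, the sketch below shows one common way clinician preference signals can be turned into a training objective: a pairwise, Bradley-Terry-style loss of the kind used in RLHF-style alignment. This is a minimal illustration under stated assumptions, not the project's actual implementation; the ReportScorer class, the embedding dimension, and the random stand-in data are hypothetical, and in the prototype the scores would come from the multimodal report model itself.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical sketch: a pairwise (Bradley-Terry style) preference loss of
    # the kind used in RLHF-style alignment. A tiny linear scorer stands in for
    # the multimodal report model described in step 2.

    class ReportScorer(nn.Module):
        """Assigns a scalar score to a (stand-in) report embedding."""
        def __init__(self, embed_dim: int = 32):
            super().__init__()
            self.head = nn.Linear(embed_dim, 1)

        def forward(self, report_embedding: torch.Tensor) -> torch.Tensor:
            return self.head(report_embedding).squeeze(-1)

    def preference_loss(score_preferred, score_rejected):
        # -log sigmoid(s_preferred - s_rejected): minimised when the
        # clinician-preferred report is scored higher than the rejected one.
        return -F.logsigmoid(score_preferred - score_rejected).mean()

    if __name__ == "__main__":
        torch.manual_seed(0)
        scorer = ReportScorer()
        optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)

        # Stand-in batch of 8 preference pairs: embeddings of the report the
        # expert preferred and of the alternative it was compared against.
        preferred = torch.randn(8, 32)
        rejected = torch.randn(8, 32)

        loss = preference_loss(scorer(preferred), scorer(rejected))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"pairwise preference loss: {loss.item():.4f}")

In practice, each pair would be collected during expert review sessions, and repeated updates of this kind gradually shift the model towards the reports clinicians prefer.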

This project will be coordinated through cross-university collaboration, with regular online working sessions and workshops to share methods, consolidate results, and define next-stage research.

Project Objectives
This project aims to develop a proof-of-concept framework for human-in-the-loop explainable AI in medical decision-making. Leveraging WUN’s global research network, the project enables collaboration across institutions and regions, ensuring that the resulting methods and insights are informed by diverse contexts. The project will generate transferable principles and prototype tools that support transparent, trustworthy, and responsible use of AI in healthcare, while allowing adaptation to different settings and future applications.