Concept-Driven Explainability Methods for AI in Medical Diagnostics
Jakub Šimko
Supervisor(s): Ing. Martin Dubovský
Slovak Technical University
Abstract: Medical diagnostics, particularly in fields such as histopathology, relies on the expert interpretation of complex visual data. As artificial intelligence (AI) systems become increasingly integrated into these processes, their lack of transparency presents a significant barrier to adoption. To bridge this gap, explainability methods are essential to ensure that AI-generated insights align with the mental models of medical professionals. This study extends the End-User-Centered Explainable AI (EUCA) framework by introducing novel concept-based explainability methods and the Concept-Driven Design (CDD) methodology. The concept-based extension comprises concept alignment and concept importance, which enhance the interpretability of AI predictions by ensuring their alignment with the domain-specific concepts used by medical professionals. Furthermore, the CDD methodology integrates human-centered design principles with explainability techniques to develop AI systems that better align with expert reasoning. An initial round of expert testing demonstrated the usefulness of the proposed concept-based explainability methods. The preliminary results suggest a positive impact, highlighting the potential of these methods to improve AI-assisted medical diagnostics.

Keywords: Design, Human-Computer Interaction

Year: 2025
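The abstract does not specify how concept importance is computed. One common way such a score is instantiated in concept-based explainability is the TCAV-style approach: learn a linear concept activation vector (CAV) in a model's hidden-layer activation space, then measure how often the class prediction is sensitive to that concept direction. The sketch below is illustrative only, not the authors' implementation; the synthetic activations, gradients, and dimensions are assumed stand-ins for values that would come from a real histopathology model.

```python
# Minimal sketch of a TCAV-style "concept importance" score,
# assuming layer activations and per-sample class gradients are already available.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for hidden-layer activations (dim = 64).
concept_acts = rng.normal(loc=1.0, size=(100, 64))   # patches showing the concept (e.g. a tissue pattern)
random_acts  = rng.normal(loc=0.0, size=(100, 64))   # unrelated patches
class_grads  = rng.normal(loc=0.5, size=(200, 64))   # d(class logit)/d(activation) for the diagnosed class

# 1. Learn a concept activation vector: the normal of a linear boundary
#    separating concept activations from random activations.
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 2. Concept importance: fraction of class examples whose prediction would
#    increase when moving activations toward the concept, i.e. whose gradient
#    has a positive component along the CAV.
directional_derivs = class_grads @ cav
concept_importance = float((directional_derivs > 0).mean())

print(f"Concept importance (TCAV-style score): {concept_importance:.2f}")
```

In a real system, the activations and gradients would be extracted from the diagnostic model on expert-annotated concept examples, so the resulting score reflects how strongly a clinically meaningful concept drives the model's prediction.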