A Procedure for Trust Analysis in Human-AI Interaction

Darina Dvorecká

Supervisor(s): Ing. Miroslav Laco, PhD.

Slovak University of Technology in Bratislava


Abstract: User interface (UI) evaluation focused on user experience (UX) is crucial for designing effective digital products. As artificial intelligence (AI) becomes more deeply integrated into interfaces, traditional user testing methods fall short in capturing the complexity of human-AI interaction (HAII). In this paper, we propose a procedure for objectively capturing and evaluating user trust and behavior in AI-enhanced interfaces. The procedure introduces novel behavior-based metrics that extend traditional UX evaluation methods to address trust dynamics and behavioral patterns, supported by an automated logging plug-in and an evaluation dashboard for interactive visualization and analysis. We validate the method through two case studies: a pilot study and a domain-specific study with an AI-assisted annotation interface, using a controlled Wizard of Oz (WOZ) simulation with varying levels of accuracy. Results show that combining objective behavioral measurements with subjective post-task assessment yields deeper insights into trust calibration, error tolerance, and interaction patterns, revealing trust as a dynamic, time-evolving process rather than a static state. The plug-in integrates easily into web-based prototypes, allowing straightforward adoption in standard user testing studies. This approach can help future studies design AI-enhanced systems that foster appropriately calibrated user trust.
Keywords: Human-Computer Interaction
Year: 2026