
A Taxonomy for Uncertainty-Aware Explainable AI

Förster, Maximilian, Hagn, Michael, Hambauer, Nico, Jaki, Paula, Obermeier, Andreas, Pinski, Marc, Schauer, Andreas, Schiller, Alexander, Benlian, Alexander, Heinrich, Bernd, Jussupow, Ekaterina, Klier, Mathias, Kraus, Mathias and Schnurr, Daniel (2025) A Taxonomy for Uncertainty-Aware Explainable AI. In: European Conference on Information Systems (ECIS), 15 June 2025 to 18 June 2025, Amman, Jordan.

Full text published: 02 Jul 2025 09:02
Conference or workshop contribution
DOI for citing this document: 10.5283/epub.77019


Abstract

Artificial Intelligence (AI) is increasingly used to augment human decision-making. However, especially in high-stakes domains, the integration of AI requires human oversight to ensure trustworthy use. To address this challenge, emerging research on Explainable AI (XAI) focuses on developing and investigating methods to generate explanations for AI outcomes. Yet, current approaches often yield limited explanations, neglecting the various sources of uncertainty that strongly influence AI-augmented decision-making. This paper presents a first step toward establishing a foundation for future research in uncertainty-aware XAI. By applying the Extended Taxonomy Design Process, we aim to develop an integrated, hierarchical taxonomy to structure the key characteristics of uncertainty-aware XAI. Through this approach, we identify four primary sources of uncertainty: data uncertainty, AI model uncertainty, XAI method uncertainty, and human uncertainty. Furthermore, we propose a preliminary taxonomy as an initial foundational framework for the future design and evaluation of uncertainty-aware XAI.
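To make the first two sources of uncertainty named in the abstract more concrete, the following is a minimal sketch (not from the paper) of a common decomposition technique: given an ensemble of classifiers, total predictive uncertainty splits into an aleatoric part (data uncertainty, inherent noise the members agree on) and an epistemic part (model uncertainty, disagreement between members). The ensemble probabilities below are invented for illustration.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy (in nats) of categorical distributions along `axis`."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)), axis=axis)

def decompose_uncertainty(member_probs):
    """Split total predictive uncertainty into aleatoric (data) and
    epistemic (model) components for one input.

    member_probs: array of shape (n_members, n_classes), each row the
    softmax output of one ensemble member for the same input.
    """
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)               # entropy of the averaged prediction
    aleatoric = entropy(member_probs).mean()  # average per-member entropy
    epistemic = total - aleatoric             # disagreement between members
    return total, aleatoric, epistemic

# Three hypothetical ensemble members that disagree on a 3-class input.
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.7, 0.1],
    [0.4, 0.4, 0.2],
])
total, aleatoric, epistemic = decompose_uncertainty(probs)
```

Note that this covers only two of the four sources the taxonomy identifies; XAI method uncertainty and human uncertainty are not captured by such model-level decompositions.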


