- URN to cite this document:
- urn:nbn:de:bvb:355-epub-770196
- DOI to cite this document:
- 10.5283/epub.77019
Alternative links to fulltext: Publisher
Abstract
Artificial Intelligence (AI) is increasingly used to augment human decision-making. However, especially in high-stakes domains, the integration of AI requires human oversight to ensure trustworthy use. To address this challenge, emerging research on Explainable AI (XAI) focuses on developing and investigating methods to generate explanations for AI outcomes. Yet, current approaches often yield ...
