Hartl, Philip; Fischer, Thomas; Hilzenthaler, Andreas; Kocur, Martin; Schmidt, Thomas

AudienceAR - Utilising Augmented Reality and Emotion Tracking to Address Fear of Speech

Hartl, Philip, Fischer, Thomas, Hilzenthaler, Andreas, Kocur, Martin and Schmidt, Thomas (2019) AudienceAR - Utilising Augmented Reality and Emotion Tracking to Address Fear of Speech. In: Alt, Florian and Bulling, Andreas and Döring, Tanja (eds.) MuC'19: Proceedings of Mensch und Computer 2019. Association for Computing Machinery, New York, NY, USA, pp. 913-916. ISBN 9781450371988.

Date this full text was published: 07 Aug 2020 05:02
Book chapter


Abstract

With Augmented Reality (AR) we can enhance reality with computer-generated information about real entities projected into the user's field of view. Hence, the user's perception of a real environment is altered by adding (or subtracting) information by means of digital augmentations. In this demo paper we present an application in which we utilise AR technology to show visual information about the audience's mood in a scenario where the user is giving a presentation. In everyday life we have to talk to and in front of people as a fundamental aspect of human communication. However, this situation poses a major challenge for many people and may even lead to fear and avoidance behaviour. According to findings in previous work on fear of speech, a major cause of anxiety is that we do not know how the audience judges us. To eliminate this feeling of uncertainty, we created an AR solution that supports the speaker while giving a speech by tracking the audience's current mood and displaying this information in real time in the speaker's view: AudienceAR. By doing so we hypothesise that the speaker's tension before and during the presentation is reduced. Furthermore, we implemented a small web interface to analyse the presentation based on the audience's mood after the speech is given. Effects will be tested in future work.
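The core aggregation idea in the abstract (tracking each audience member's emotional state and summarising it into one mood shown to the speaker) could be sketched as follows. This is an illustration only: the emotion labels, scores, and the `audience_mood` function are hypothetical and not taken from the paper, whose actual recognition pipeline is not described here.

```python
from collections import Counter

# Hypothetical per-face emotion probabilities, e.g. as produced by some
# facial expression classifier (labels and scores are illustrative only).
faces = [
    {"happy": 0.7, "neutral": 0.2, "sad": 0.1},
    {"happy": 0.1, "neutral": 0.6, "sad": 0.3},
    {"happy": 0.5, "neutral": 0.4, "sad": 0.1},
]

def audience_mood(faces):
    """Return the emotion with the highest summed score across all faces."""
    totals = Counter()
    for scores in faces:
        totals.update(scores)  # accumulate each face's scores per emotion
    return totals.most_common(1)[0][0]

print(audience_mood(faces))  # dominant mood across the audience: "happy"
```

In a live setting such a summary value would be recomputed per frame and rendered into the speaker's AR view.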



Participating institutions


Details

Document type: Book chapter
ISBN: 9781450371988
Book title: MuC'19: Proceedings of Mensch und Computer 2019
Publisher: Association for Computing Machinery
Place of publication: New York, NY, USA
Page range: pp. 913-916
Date: 2019
Institutions: Language, Literature and Cultural Studies > Institute for Information and Media, Language and Culture (I:IMSK) > Chair of Media Informatics (Prof. Dr. Christian Wolff)
Informatics and Data Science > Department of Human-Centred Informatics > Chair of Media Informatics (Prof. Dr. Christian Wolff)
Identifier: 10.1145/3340764.3345380 (DOI)
Keywords: affective computing, augmented reality, emotion, facial recognition, HoloLens
Dewey Decimal Classification: 000 Computer science, information & general works > 004 Computer science
100 Philosophy & psychology > 150 Psychology
Status: Published
Refereed: Yes, this version has been refereed
Created at the University of Regensburg: Yes
URN of the UB Regensburg: urn:nbn:de:bvb:355-epub-435816
Document ID: 43581
