| Published version: Download (PDF, 237 kB) | License: Creative Commons Attribution 4.0 International |
Are GRU Cells More Specific and LSTM Cells More Sensitive in Motive Classification of Text?
Gruber, Nicole and Jockisch, Alfred (2020): Are GRU Cells More Specific and LSTM Cells More Sensitive in Motive Classification of Text? Frontiers in Artificial Intelligence 3 (40), pp. 1-6.
Date this full text was published: 02 Nov 2020 11:37
Article
DOI for citing this document: 10.5283/epub.43883
Abstract
In the Thematic Apperception Test, a picture story exercise (TAT/PSE; Heckhausen, 1963), it is assumed that unconscious motives can be detected in the stories people tell about the pictures shown in the test. To this end, the text is coded by trained experts according to evaluation rules. We tried to automate this coding and, because the input data are sequential, used a recurrent neural network (RNN). Two different cell types improve how recurrent neural networks handle long-term dependencies in sequential input data: long short-term memory cells (LSTMs) and gated recurrent units (GRUs). Some results indicate that GRUs can outperform LSTMs; others show the opposite, so the question of when to use GRU or LSTM cells remains open. Our results (N = 18,000 data points, 10-fold cross-validated) show that GRUs outperform LSTMs (accuracy = .85 vs. .82) for overall motive coding. Further analysis showed that GRUs have higher specificity (true negative rate) and learn less prevalent content better, whereas LSTMs have higher sensitivity (true positive rate) and learn highly prevalent content better. A closer look at a picture × category matrix reveals that LSTMs outperform GRUs only where deep context understanding is important. As neither technique presents a clear advantage over the other in the domain investigated here, an interesting topic for future work is to develop a method that combines their strengths.
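The abstract contrasts the two gating mechanisms without spelling them out. As a rough illustration (not the authors' implementation), the sketch below implements scalar versions of both cells with all weights fixed at 1.0 and no biases; real networks use learned weight matrices over vectors, but the structural difference — a GRU interpolates its single hidden state, while an LSTM maintains a separate memory cell gated by forget, input, and output gates — is the same:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h: float, x: float) -> float:
    """One step of a scalar GRU cell (illustrative weights of 1.0, no biases)."""
    z = sigmoid(h + x)             # update gate: how much of the candidate to take
    r = sigmoid(h + x)             # reset gate: how much past state enters the candidate
    h_cand = math.tanh(r * h + x)  # candidate hidden state
    return (1.0 - z) * h + z * h_cand  # interpolate between old state and candidate

def lstm_step(h: float, c: float, x: float) -> tuple:
    """One step of a scalar LSTM cell (illustrative weights of 1.0, no biases)."""
    f = sigmoid(h + x)             # forget gate on the separate memory cell c
    i = sigmoid(h + x)             # input gate
    o = sigmoid(h + x)             # output gate
    c_cand = math.tanh(h + x)      # candidate cell content
    c_new = f * c + i * c_cand     # LSTM keeps a dedicated memory cell
    return o * math.tanh(c_new), c_new

# Run both cells over the same toy input sequence.
xs = [0.5, -1.0, 0.25, 1.0]
h_gru = 0.0
h_lstm, c_lstm = 0.0, 0.0
for x in xs:
    h_gru = gru_step(h_gru, x)
    h_lstm, c_lstm = lstm_step(h_lstm, c_lstm, x)

print(h_gru, h_lstm, c_lstm)
```

Because the GRU's hidden state is a convex combination of two values in (-1, 1), it stays bounded in that interval; the LSTM's hidden output is likewise bounded by the output gate and the final tanh, while its cell state c can grow beyond ±1 and carry information across longer spans.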
Alternative links to the full text
Participating institutions
Details
| Document type | Article |
|---|---|
| Journal or publication title | Frontiers in Artificial Intelligence |
| Publisher | Frontiers |
| Volume | 3 |
| Issue or chapter number | 40 |
| Page range | pp. 1-6 |
| Date | 30 June 2020 |
| Institutions | Human Sciences > Institute of Psychology; Central Facilities > Computing Centre |
| Identification number | |
| Keywords | GRU, LSTM, RNN, text classification, implicit motive, thematic apperception test |
| Dewey Decimal Classification | 100 Philosophy and psychology > 150 Psychology |
| Status | Published |
| Refereed | Yes, this version has been refereed |
| Produced at the University of Regensburg | Yes |
| URN of the Regensburg University Library | urn:nbn:de:bvb:355-epub-438834 |
| Document ID | 43883 |
Download statistics