| Published version: Download (PDF, 6 MB) | License: Creative Commons Attribution 4.0 International |
Leveraging fine-tuning of large language models for aspect-based sentiment analysis in resource-scarce environments
Fehle, Jakob, Kruschwitz, Udo, Hellwig, Nils Constantin and Wolff, Christian (2026) Leveraging fine-tuning of large language models for aspect-based sentiment analysis in resource-scarce environments. Knowledge-Based Systems 336, p. 115277.
Date of publication of this full text: 20 Jan 2026 13:08
Article
DOI for citing this document: 10.5283/epub.78479
Abstract
This study explores the use of fine-tuned open-source large language models (LLMs) for Aspect-based Sentiment Analysis (ABSA), comparing their performance with state-of-the-art (SOTA) methods on English and German datasets, with a focus on low-resource scenarios. Results on the four ABSA subtasks, Aspect Category Detection (ACD), Aspect Category Sentiment Analysis (ACSA), End-to-End ABSA (E2E), and Target Aspect Sentiment Detection (TASD), show that fine-tuned LLMs handle limited training data better than current SOTA approaches, achieving consistent performance across various dataset sizes. Prompt formulation and hyperparameter tuning influence performance, though concise prompts often suffice when combined with effective fine-tuning. To assess generalizability, we conduct an ablation study across multiple languages, domains, and LLM architectures. The findings confirm that the performance gains extend beyond the initial setting, supporting the robustness of fine-tuned LLMs across multiple languages and domains. We establish new SOTA results on the Rest-16 and GERestaurant datasets and highlight the practical viability of fine-tuning LLMs for ABSA applications with limited training material.
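The fine-tuning approach summarized in the abstract can be illustrated with a minimal sketch, assuming a Hugging Face causal LM with LoRA adapters via the peft library. The model name, prompt wording, and triplet label format below are illustrative assumptions for the TASD subtask, not the paper's exact configuration.

```python
# A minimal sketch of instruction-style LLM fine-tuning for the TASD subtask,
# assuming a Hugging Face causal LM with LoRA adapters (peft). Model name,
# prompt wording, and triplet label format are illustrative, not the paper's
# exact setup.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_NAME = "meta-llama/Llama-3.1-8B"  # placeholder open-source LLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

# LoRA trains small adapter matrices instead of all weights, which keeps
# fine-tuning feasible in the low-resource settings the study targets.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

def format_example(ex):
    # A concise instruction prompt: the study finds short prompts often
    # suffice once the model is fine-tuned.
    text = ("Extract (aspect term, aspect category, sentiment) triplets.\n"
            f"Review: {ex['review']}\nTriplets: {ex['triplets']}"
            + tokenizer.eos_token)
    return tokenizer(text, truncation=True, max_length=512)

# A tiny illustrative training set; a low-resource scenario might hold only
# a few hundred such annotated reviews.
train = Dataset.from_list([
    {"review": "The pizza was great but the service was slow.",
     "triplets": "(pizza, FOOD#QUALITY, positive); "
                 "(service, SERVICE#GENERAL, negative)"},
]).map(format_example, remove_columns=["review", "triplets"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="absa-lora", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

At inference time, the same prompt would be issued without the gold triplets and the generated continuation parsed back into (aspect term, aspect category, sentiment) tuples; the ACD, ACSA, and E2E subtasks differ only in which tuple elements the prompt asks for.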