Document type: | Article
---|---
Journal title: | Applied Intelligence
Publisher: | Springer
Place of publication: | Dordrecht
Volume: | 52
Issue: | 5
Page range: | pp. 5617-5632
Date: | 2022
Institutions: | Sprach- und Literatur- und Kulturwissenschaften > Institut für Information und Medien, Sprache und Kultur (I:IMSK) > Lehrstuhl für Informationswissenschaft (Prof. Dr. Udo Kruschwitz); Informatik und Data Science > Fachbereich Menschzentrierte Informatik > Lehrstuhl für Informationswissenschaft (Prof. Dr. Udo Kruschwitz)
Identification number: |
Keywords: | BANDIT; Best arm identification; User studies; Racing algorithms
Dewey Decimal Classification: | 000 Computer science, information & general works > 004 Data processing & computer science; 300 Social sciences > 330 Economics
Status: | Published
Peer-reviewed: | Yes, this version has been peer-reviewed
Produced at the University of Regensburg: | Yes
Document ID: | 56860
Abstract
Two major barriers to conducting user studies are the costs involved in recruiting participants and the researcher time spent performing studies. Typical solutions are to study convenience samples or to design studies that can be deployed on crowd-sourcing platforms. Both solutions have benefits but also drawbacks. Even in cases where these approaches make sense, it is still reasonable to ask whether we are using our resources - participants' and our time - efficiently, and whether we can do better. Typically, user studies compare randomly assigned experimental conditions, such that a uniform number of opportunities is assigned to each condition. This sampling approach, as has been demonstrated in clinical trials, is sub-optimal. The goal of many Information Retrieval (IR) user studies is to determine which strategy (e.g., behaviour or system) performs best. In such a setup, it is not wise to waste participant and researcher time and money on conditions that are obviously inferior. In this work, we explore whether Best Arm Identification (BAI) algorithms provide a natural solution to this problem. BAI methods are a class of Multi-armed Bandits (MABs) where the only goal is to output a recommended arm, and the algorithms are evaluated by the average payoff of the recommended arm. Using three datasets associated with previously published IR-related user studies and a series of simulations, we test the extent to which the cost required to run user studies can be reduced by employing BAI methods. Our results suggest that some BAI instances (racing algorithms) are promising devices for reducing the cost of user studies. One of the racing algorithms studied, Hoeffding, holds particular promise: it offered consistent savings across both the real and simulated datasets and only extremely rarely returned a result inconsistent with the result of the full trial. We believe the results can have an important impact on the way research is performed in this field. They show that the conditions assigned to participants could be changed dynamically and automatically to make efficient use of participant and experimenter time.
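To make the idea concrete, below is a minimal sketch of a Hoeffding racing procedure of the kind the abstract refers to: all surviving conditions are sampled in rounds, and a condition is eliminated once its Hoeffding upper confidence bound falls below the best lower bound. This is an illustration only, not the authors' implementation; the function names (`hoeffding_race`, `pull`), the confidence parameter `delta`, the specific bound with its crude union-bound correction, and the assumption that payoffs lie in [0, 1] are all assumptions made for the sketch.

```python
import math
import random

def hoeffding_race(pull, n_arms, delta=0.05, max_pulls=10_000):
    """Illustrative Hoeffding race (assumed form, not the paper's exact
    schedule). Samples every surviving arm once per round and drops any
    arm whose upper confidence bound is below the best lower bound.
    Payoffs are assumed to lie in [0, 1]."""
    alive = set(range(n_arms))
    sums = [0.0] * n_arms      # running payoff sum per arm
    counts = [0] * n_arms      # number of pulls per arm
    pulls = 0
    while len(alive) > 1 and pulls < max_pulls:
        for i in list(alive):
            sums[i] += pull(i)  # one participant/trial on condition i
            counts[i] += 1
            pulls += 1
        # Hoeffding radius with a rough union-bound correction (assumed).
        def radius(i):
            return math.sqrt(
                math.log(2 * n_arms * counts[i] / delta) / (2 * counts[i])
            )
        best_lower = max(sums[i] / counts[i] - radius(i) for i in alive)
        alive = {i for i in alive
                 if sums[i] / counts[i] + radius(i) >= best_lower}
    # Recommend the surviving arm with the highest empirical mean.
    return max(alive, key=lambda i: sums[i] / counts[i])

# Hypothetical usage: three experimental conditions with Bernoulli payoffs.
if __name__ == "__main__":
    means = [0.4, 0.5, 0.7]
    winner = hoeffding_race(lambda i: float(random.random() < means[i]), 3)
    print("recommended condition:", winner)
```

In a user-study setting, `pull(i)` would correspond to running one participant under condition `i`; the savings the abstract reports come from eliminated conditions no longer consuming participants once the race has ruled them out.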
Metadata last modified: 29 Feb 2024 12:41