Abstract
Privacy and misinformation, from their real-world impacts to their potential threats, are recurring top news stories. Social media platforms (e.g. Facebook) and information retrieval (IR) systems (e.g. Google) are now in the public spotlight to address these issues. Our research investigates an approach known as Nudging, applied to the domain of IR, as a potential means of minimizing the impacts and threats surrounding both matters. We perform our study in the space of health search for two reasons. First, encounters with misinformation in this space can have grave outcomes. Second, the data collected during a search task pose many potential threats to personal privacy. Adopting methods and a corpus from previous work as the foundation, our study asked users to determine the effectiveness of a treatment for 10 medical conditions. Users performed the tasks on 4 variants of a search engine results page (SERP) and a control, with 3 of the SERPs being a Nudge (re-ranking, filtering, and a visual cue) intended to reduce impacts to privacy with minimal impact on search result quality. The aim of our work is to determine which Nudge is least harmful to good decision making while simultaneously increasing privacy protection. We find that privacy impacts are significantly reduced for the re-ranking and filtering strategies, with no significant impact on the quality of decision making.