Trust in AI
In two working papers, written jointly with researchers from Information Systems and Operations Research, we investigate how humans follow advice from AI and what role transparency plays. Contact me for the working papers.
In the first project, we connect the recent literature on algorithm aversion with the earlier literature on advice discounting, highlighting why advice matters in the era of algorithms. As digitalization advances, algorithms increasingly replace or assist human decisions and often outperform humans. Yet users frequently avoid these tools, a behavior known as algorithm aversion. While prior research attributes this to characteristics of the human, the machine, or the task, the role of initial human-computer interaction remains unclear. Since attitudes toward algorithms evolve and can shift during interactions, studying these dynamics may reveal strategies to reduce aversion and foster acceptance. To explore this, we conduct a large (>3,000 participants), pre-registered, incentivized online experiment, accompanied by a survey, focusing on how the sequence in which information about algorithm quality is presented affects user attitudes.
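A common way the advice-discounting literature quantifies how far a person moves toward a recommendation is the weight on advice (WOA). The sketch below illustrates this standard measure in general; it is not the specific operationalization used in the working papers.

```python
# Weight on Advice (WOA): a standard measure from the judge-advisor-system
# literature. Generic illustration only, not the papers' own measure.

def weight_on_advice(initial: float, advice: float, final: float) -> float:
    """Return how far a judge shifted from their initial estimate toward the advice.

    0.0 = advice ignored, 0.5 = equal averaging, 1.0 = advice fully adopted.
    """
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Example: initial estimate 100, algorithmic advice 140, revised estimate 110
# -> the judge moved a quarter of the way toward the advice.
print(weight_on_advice(100, 140, 110))  # 0.25
```

Averaged across trials, values well below 0.5 indicate discounting of the advisor; tracking WOA over repeated interactions is one way to study how attitudes toward an algorithmic advisor shift.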
In the second article, we turn to the role of transparency in algorithmic advice. We explore how AI-based advice influences human decision-making in repetitive tasks such as anomaly detection in quality assurance. A key concern is that users may blindly rely on AI recommendations to save time. In a laboratory experiment using eye-tracking, we compare two forms of AI advice: simple quality predictions, and predictions accompanied by heatmaps that highlight the detected anomalies.