Systems based on artificial intelligence (AI) increasingly support radiologists in making diagnostic decisions. However, the technological characteristics of these systems make them less transparent and their errors less predictable than those of rule-based systems. In a within-subject experiment with 47 novice physicians, we show how AI influences the process of medical diagnostic decision-making. We use qualitative data from think-aloud protocols to examine the effects of correct and incorrect AI advice. The findings are triangulated with a second sample of 21 novice physicians, who received the AI advice only after completing their own assessment, and a subsample of 12 trained radiologists. The qualitative data analysis yields a process model with five distinct decision-making patterns, which differ in how decision-makers use metacognitions to monitor and control the decision process. When the advice confirms physicians' own opinion, their confidence increases, as does their accuracy if the advice is correct. When the AI disconfirms their opinion, however, metacognitions about their own reasoning process and about the system's advice determine whether physicians follow or reject it. In particular, how physicians use these metacognitions determines whether they succeed in rejecting incorrect advice while accepting correct advice. We identified three failure patterns in this process. First, AI systems can distort physicians' independent assessment, so that physicians experience incorrect advice as confirming their own view. Second, a conflict between beliefs about system accuracy and about their own capabilities can cause physicians to unduly follow or reject AI advice. Third, physicians may selectively evaluate the data in support of the AI. Decision augmentation succeeds only if decision-makers draw on system-related and self-related metacognitions equally, weighing their own assessment as well as the system's advice.
On the basis of our results, we discuss the challenges of augmenting medical decisions with AI advice.