Radiology and AI: overcoming the cognitive challenges of integration in clinical practice


A recent opinion article in The Lancet Digital Health explores the cognitive challenges of integrating artificial intelligence (AI) into radiology, highlighting a fundamental difference between how radiologists and AI systems make decisions. The authors aim to bridge the gap between the development of AI tools and their practical value for radiologists by examining these differences in decision-making processes. They argue that while radiologists rely on contextual cues and bounded rationality to reach diagnostic decisions, AI models take a dataset-driven, correlational approach that strips away clinical context. This mismatch, they contend, poses significant challenges for clinician–AI collaboration, increasing the risk of both over-reliance on and under-use of AI tools. The authors call for future research to address the cognitive aspects of decision making in medical AI, so that its integration into clinical practice becomes safer and more effective. Such a perspective is essential for developing AI systems that genuinely augment, rather than complicate, radiologists' decision making.



Medical artificial intelligence for clinicians: the lost cognitive perspective

The Lancet Digital Health, 2024

Abstract

The development and commercialisation of medical decision systems based on artificial intelligence (AI) far outpaces our understanding of their value for clinicians. Although applicable across many forms of medicine, we focus on characterising the diagnostic decisions of radiologists through the concept of ecologically bounded reasoning, review the differences between clinician decision making and medical AI model decision making, and reveal how these differences pose fundamental challenges for integrating AI into radiology. We argue that clinicians are contextually motivated, mentally resourceful decision makers, whereas AI models are contextually stripped, correlational decision makers, and discuss misconceptions about clinician–AI interaction stemming from this misalignment of capabilities. We outline how future research on clinician–AI interaction could better address the cognitive considerations of decision making and be used to enhance the safety and usability of AI models in high-risk medical decision-making contexts.