AI that decodes radiologists’ behavior from eye movements
Hien Van Nguyen, an associate professor in the Department of Electrical and Computer Engineering, led an NIH-funded research project. His paper, “Interpreting Radiologist’s Intention from Eye Movements in Chest X-ray Diagnosis,” was recognized as an Outstanding Paper in the Content theme at the 33rd Association for Computing Machinery (ACM) International Conference on Multimedia, a major conference.
ACM Multimedia is dedicated to advancing research and applications across diverse multimedia fields. The conference brings together leading minds from academia and industry to explore cutting-edge developments in areas such as virtual and augmented reality, AI, video, and social data.
Nguyen said he was surprised but honored when he was notified that the team had won.
“This conference is a major venue for multimedia and AI research, so it was very meaningful to see our work recognized there,” he said. “My first thoughts were gratitude for the entire team and our clinical collaborators, because this project sits at the intersection of radiology and AI. It really only works when everyone contributes their expertise. I also felt excited that the community sees value in moving beyond ‘AI that mimics experts’ and toward AI that helps us understand expert decision-making.”
The goal of the team’s research is to teach AI to understand what a radiologist is looking for moment by moment, based on eye movements while reading a chest X-ray.
“Radiologists don’t just ‘look around’ randomly,” Nguyen said. “Their eyes move in patterns that reflect expertise. Sometimes they follow a systematic checklist, sometimes they explore when something looks uncertain, and sometimes they do a quick overall scan and then zoom in on a suspected problem.”
Their method, RadGazeIntent, takes a sequence of gaze fixations together with the X-ray image and predicts which clinical finding the radiologist is likely focusing on at each moment, along with confidence scores.
“As an analogy, imagine watching someone searching for issues in a complex scan,” Nguyen said. “Their eyes reveal whether they’re doing a routine sweep, double-checking something suspicious, or focusing hard on one spot. We’re building tools that can interpret that ‘search strategy’ rather than only copying the final answer.”
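To make the input-and-output idea concrete, here is a minimal, hypothetical sketch in Python (PyTorch). It is not the authors’ RadGazeIntent implementation; the module name, label set, network sizes, and fixation format are illustrative assumptions. It only mirrors the interface described above: a chest X-ray plus a sequence of gaze fixations goes in, and a per-fixation probability distribution over candidate findings comes out.

```python
# Illustrative sketch only -- not the published RadGazeIntent architecture.
# Interface: image + fixation sequence in, per-fixation finding probabilities out.
import torch
import torch.nn as nn

FINDINGS = ["cardiomegaly", "effusion", "pneumothorax", "no_finding"]  # hypothetical label set

class GazeIntentSketch(nn.Module):
    def __init__(self, d_model=128, n_findings=len(FINDINGS)):
        super().__init__()
        # Tiny CNN stands in for a real image backbone.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, d_model),
        )
        # Each fixation is (x, y, duration), normalized to [0, 1].
        self.fixation_encoder = nn.Linear(3, d_model)
        self.temporal = nn.GRU(d_model, d_model, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, n_findings)

    def forward(self, image, fixations):
        # image: (B, 1, H, W) chest X-ray; fixations: (B, T, 3)
        img_feat = self.image_encoder(image)                            # (B, d_model)
        fix_feat, _ = self.temporal(self.fixation_encoder(fixations))   # (B, T, d_model)
        img_feat = img_feat.unsqueeze(1).expand(-1, fixations.size(1), -1)
        logits = self.classifier(torch.cat([fix_feat, img_feat], dim=-1))
        return logits.softmax(dim=-1)                                    # (B, T, n_findings)

# Usage: probabilities over findings for each of 12 fixations on one image.
model = GazeIntentSketch()
probs = model(torch.randn(1, 1, 224, 224), torch.rand(1, 12, 3))
print(probs.shape)  # torch.Size([1, 12, 4])
```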
Other team members include Ngan Le, an assistant professor in the Department of Computer Science and Computer Engineering at the University of Arkansas, and her Ph.D. student Trong-Thang Pham; Anh Nguyen, an associate professor at the University of Liverpool in England; Carol C. Wu, a professor in the Department of Thoracic Imaging in the Division of Diagnostic Imaging at the University of Texas MD Anderson Cancer Center; and Zhigang Deng, Moores Professor of Computer Science in the College of Natural Sciences and Mathematics.
Nguyen identified three additional research directions the team is now exploring:
- Broader validation and generalization. Their current benchmarks are built from public chest X-ray eye-tracking datasets, but they want to test and refine the approach across more readers, institutions, and clinical settings.
- Expanding beyond chest X-rays. Eye-movement behavior can differ across modalities such as CT or MRI, so extending the approach could reveal what generalizes well and what requires new modeling.
- Intention-aware assistance and training tools. The long-term goal is interactive systems that can collaborate with radiologists—such as educational feedback for trainees or tools that adapt in real time to what the reader is focusing on. This would support clinicians rather than trying to replace them.
Nguyen has published related research in Nature Scientific Reports, and he is also involved in AI projects on more effective organ-targeted drug delivery systems and lung cancer diagnostics. For more information about his lab and his research, visit his group’s website.