The pursuit of decision safety in clinical applications underscores the need for transparent methods in medical imaging. Although concept-based models provide local, instance-level explanations, they often fail to capture the global decision logic at the dataset level. Our work, "Learning Concept-Driven Logical Rules for Interpretable and Generalizable Medical Image Classification," aims to mitigate this limitation by introducing Boolean logical rules derived from binary visual concepts. We propose the Concept Rule Learner (CRL), which integrates logical layers to model concept correlations and extract clinically meaningful rules, offering both local and global interpretability. Experiments on two medical imaging tasks show that CRL delivers competitive performance compared with existing interpretable methods while improving generalization to out-of-distribution data.
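To give a rough sense of how a logical layer can operate over binary concepts, here is a minimal PyTorch sketch of a differentiable soft-AND layer. This is not CRL's actual architecture from the paper; the class name `SoftConjunctionLayer` and all of its parameters are hypothetical, illustrating only the general idea of learning conjunctive rules over concept activations.

```python
import torch
import torch.nn as nn

class SoftConjunctionLayer(nn.Module):
    """Soft AND over binary concepts: each output unit learns which
    concepts to include in a conjunctive rule.

    Hypothetical sketch -- not the paper's exact CRL architecture.
    """
    def __init__(self, n_concepts: int, n_rules: int):
        super().__init__()
        # Relaxed membership mask: sigmoid(w) near 1 means the concept
        # participates in the rule; near 0 means it is ignored.
        self.w = nn.Parameter(torch.randn(n_rules, n_concepts))

    def forward(self, c: torch.Tensor) -> torch.Tensor:
        # c: (batch, n_concepts), concept scores in [0, 1]
        m = torch.sigmoid(self.w)  # (n_rules, n_concepts)
        # Soft AND: a selected concept contributes its score c,
        # an unselected one contributes 1 (identity for AND):
        # 1 - m * (1 - c) == m * c + (1 - m) * 1
        literals = 1.0 - m.unsqueeze(0) * (1.0 - c.unsqueeze(1))
        # Product over concepts gives each rule's activation in [0, 1].
        return literals.prod(dim=-1)  # (batch, n_rules)

# Usage: two candidate rules over three binary concepts.
layer = SoftConjunctionLayer(n_concepts=3, n_rules=2)
concepts = torch.tensor([[1.0, 0.0, 1.0]])
print(layer(concepts))  # soft rule activations
```

In schemes like this, thresholding `sigmoid(w)` after training yields a readable Boolean rule per unit (e.g., "concept 1 AND concept 3"), which is one common route to the kind of global, dataset-level interpretability the abstract describes.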
It is a privilege to present my paper at MICCAI 2025, especially as this marks my first conference experience. I deeply appreciate the chance to showcase my research on such a renowned platform and to connect with experts in the field.
I am particularly interested in advances in foundation models, multimodal applications, and interpretable methods for medical image analysis. These areas connect closely with my research, and I look forward to learning how the community is pushing the boundaries of trustworthy AI in healthcare.
I am excited to interact with distinguished researchers who share a passion for medical imaging. I look forward to gaining insights from their work, exploring the breadth of ongoing research, and broadening my perspective through meaningful exchanges.