
Special track on Explainable Machine Learning models in Medical Imaging

34th IEEE CBMS International Symposium on Computer-Based Medical Systems

June 7, 8 & 9 – Aveiro, Portugal

Call for papers

Computational medical imaging techniques aim to enhance the diagnostic performance of visual assessments in medical imaging, improve the early diagnosis of various diseases and provide a deeper understanding of physiology and pathology, thereby advancing the field of Quantitative Radiology. To reach these goals, medical image computing and signal processing are commonly combined with biophysical models, which explicitly describe the organ or tissue under investigation.

Machine Learning (ML) models have revolutionised multiple tasks in medical image computing, such as image segmentation, registration and synthesis, through the extensive analysis of big imaging data. Although ML models outperform classic approaches on these tasks, they remain to a large extent implicit in describing the data under investigation. This limits ML model interpretability, which in turn is one of the main barriers to ML-based pathology detection assessments and to generalised single- or multi-modal ML analysis in medical imaging. Moreover, in modern clinical practice and settings, detailed explanations of model behaviour are increasingly required to support reliability and improve clinical decision making. Model explainability also becomes critical when data integration techniques are required to cross-assess learning performance from imaging data against mutual, complementary or “clinical reference standard” information from additional modalities (either imaging, or other types of biomedical/clinical data such as invasive methods or ex vivo analysis). To support the further development of ML models for clinical applications, model explainability is key to enhancing generalisability, trustworthiness, causality, transferability, informativeness, confidence, accessibility and interactivity. Last but not least, as one of the most promising topics in ML and medical imaging research, the main challenge in developing explainable models is to improve explainability whilst maintaining high learning performance.

The main objective of this special track is to attract original, high-quality research and survey articles that reflect the most recent advances in explainable ML models in medical imaging (MRI, CT, PET, SPECT, Ultrasound and others), investigating novel methodologies through interpreting algorithm components and/or exploring algorithm-data relationships.

We welcome researchers from both academia and industry to present their state-of-the-art scientific developments, technologies, and ideas covering all possible aspects of explainable ML models in medical imaging.

Topics of interest include (but are not limited to):

  • Develop and interpret ML models in single- or multi-modal (MRI, CT, Ultrasound, PET, SPECT) imaging, using either single or multiple inputs and, thus, biophysical information
  • To improve explainability, combine ML with biophysical modelling and/or visual assessments from additional/complementary imaging modalities (e.g. multiple sequences in MRI, or combining MRI with Ultrasound, CT, PET or SPECT)
  • To improve explainability, combine ML with other types of “reference standard” input data (e.g. clinical data, electrophysiology signals, molecular analysis, invasive methods) that can enhance ML interpretability in medical imaging
  • Enhance explainability through combining multiple tasks (e.g. segmentation and/or image synthesis); multi-task learning on multi-modality medical images
  • Enhance explainability by incorporating graphical deep learning models for single- or multi-modality image analysis
  • Strengthen explainability in cross-domain image synthesis between different imaging modalities or sequences (e.g. between different MRI sequences, or between MRI and CT)
  • Transfer learning and transferability for single- or multi-modality medical images
  • ML model explainability in semi-supervised, weakly-supervised and unsupervised learning in medical imaging
  • Post-hoc explainability techniques for ML models in single- or multi-modality images (see the illustrative sketch after this list)
  • Enhance explainability through developing ML models to detect or predict pathological versus healthy status
  • Explain strengths and weaknesses of ML models through quantitative evaluation and interpretation of algorithm performance
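
As a purely illustrative aside on the post-hoc explainability topic above, the short sketch below computes a gradient-based saliency map for a trained image classifier. It is a minimal sketch under assumed names (`model`, a trained PyTorch classifier; `image`, a preprocessed input tensor; `target_class`, the class index to explain) and is not prescribed by this call.

```python
# Minimal sketch of a post-hoc, gradient-based saliency map (illustrative only).
# Assumptions: `model` is a trained torch.nn.Module classifier and `image` is a
# single preprocessed image tensor of shape (channels, height, width).
import torch


def saliency_map(model: torch.nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return |d score(target_class) / d input| as a per-pixel relevance map."""
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)   # add batch dimension, track input gradients
    score = model(x)[0, target_class]             # logit of the class being explained
    score.backward()                              # gradients of the score w.r.t. the input pixels
    return x.grad.detach().abs().squeeze(0).amax(dim=0)  # collapse channels into one relevance map
```

Gradient saliency is only one family of post-hoc explanations; attribution methods such as Grad-CAM or SHAP, counterfactual explanations and concept-based analyses fall equally within the scope of this topic.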

Paper submission guidelines

Authors are invited to submit their original contributions before the deadline, following the conference submission guidelines. Each contribution must be prepared in the IEEE two-column format and should not exceed 6 (six) Letter-sized pages. For detailed instructions, please visit: https://cbms2021.web.ua.pt/

All submissions will be peer-reviewed by three reviewers from the Program Committee. All accepted papers will be included in the conference proceedings and published by the IEEE. For each accepted paper, at least one author must register for the conference before the Author Registration Deadline. Publication in the proceedings is conditional on registration and presentation of the paper at the conference by one of its authors. If the paper is not presented at the conference, it will not be included in the proceedings.

Authors of the best papers will be invited to submit an extended contribution to a journal special issue.

Important dates

  • Paper submission deadline: February 5, 2021
  • Notification of acceptance: March 26, 2021
  • Camera-ready due: April 16, 2021
  • Registration
    • Early registration deadline: April 16, 2021
  • Conference: June 7–9, 2021

Special Track Chairs

Program committee members

Sotirios A. Tsaftaris, University of Edinburgh, UK

Gabriele Valvano, IMT Lucca, Italy

Victor Gonzalez-Castro, University of Leon, Spain

Lin Gu, RIKEN AIP, University of Tokyo, Japan

Hao Dong, Peking University, China

Zhangming Niu, Aladdin Healthcare Technologies, Germany

Chunliang Wang, KTH Royal Institute of Technology, Sweden

Sivaramakrishnan Rajaraman, NIH/NLM, USA

Emanuele Trucco, University of Dundee, UK

George Matsopoulos, National Technical University of Athens, Greece

David Rodriguez Gonzalez, University of Cantabria, Spain

Sammy Danso, University of Edinburgh, UK

Xurui Jin, Duke Kunshan University, China-USA

Adrian Clark, University of Essex, UK

Adrian Martín Fernández, Pompeu Fabra University, Spain

Eirini Christinaki, KU Leuven, Belgium

Oscar Jiménez del Toro, University of Applied Sciences Western Switzerland, Switzerland

Rakkrit Duangsoithong, Prince of Songkla University, Thailand