Interpretable Deep Learning (IDL) in Co-Clinical Non-invasive Radiological Imaging


Over the past few years, deep learning (DL) techniques have been applied in several fields of medical science. DL provides computational models composed of multiple processing layers that learn to represent data at multiple levels of abstraction. It can implicitly capture the intricate structure of large-scale data and is ideally suited to some of the hardware architectures currently available. DL has recently achieved outstanding performance in both academic and industrial settings and has become a vital tool in a wide range of medical image computing tasks, including cancer detection, tumor segmentation, tumor classification, vessel segmentation, and cancer prediction. While DL models achieve impressively high predictive accuracy, they are regarded as black boxes with deep and complicated layers, which has limited their acceptance among doctors and radiologists. Moreover, DL models have recently been reported to be vulnerable to spoofing with carefully hand-designed input samples. This is especially concerning in the medical image computing field, where a single incorrect prediction can be very detrimental, and trust in the trained DL model and its capacity to deliver both efficient and robust data processing must be guaranteed. Therefore, understanding how DL models work, and thus creating explainable DL models, has become a fundamental problem.

Currently, it is still not clear what information must be delivered to DL models, or how DL models work, to warrant rapid, safe, and robust predictions. Hence, experts and users need to know the latest research advances in interpretable deep learning (IDL). This critical research topic brings new challenges and opportunities to the new-age AI community. This special issue aims to provide a diverse but complementary set of contributions that demonstrate new developments and applications of explainable deep learning to solve problems in medical image computing. The main goal is to encourage research and development of explainable deep learning for multimodal biomedical images by publishing high-quality research articles and reviews in this rapidly growing interdisciplinary field. The medical data can be obtained from multimodal imaging, including Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound, Single Photon Emission Computed Tomography (SPECT), Positron Emission Tomography (PET), Optical Microscopy and Tomography, and many other noninvasive radiological imaging modalities.

In this special section, our aim is to provide researchers and practitioners with a platform to present innovative solutions based on interpretable deep learning in radiological imaging for precision medicine. The focus of this special section is to address the current research challenges of IDL by encouraging submissions on advanced data analytics using deep learning in noninvasive medical imaging for precision medicine.

Topics of interest include, but are not limited to, the following:

  • Interpretable analysis of medical images for treatment response
  • Lesion interpretation and visualization using IDL
  • Novel theoretical understanding of IDL in application to medical imaging
  • Interpretable transfer learning and multi-task learning
  • Analyzing bottlenecks in the efficient learning of deep neural networks
  • Inferring and regularizing network structure for robust prediction
  • IDL in co-clinical applications such as detection, quantitative measurements, and image guidance
  • Multi-dimensional deep learning for multi-dimensional data
  • Adversarial attacks and defenses in medical image computing applications
  • Patient follow-up and treatment delivery using deep learning-based big data analysis

Guest Editors

Sudipta Roy, Jio Institute
Tai hoon Kim, Beijing Jiaotong University

Key Dates

Deadline for Submission: 31 October 2022
First Reviews Due: 5 December 2022
Revised Manuscript Due: 1 January 2023
Final Decision: 1 February 2023