Open and Interpretable AI in Computational Pathology


Existing deep learning models are largely uninterpretable: they neither provide explanations for their predictions nor offer guarantees of trustworthiness. In addition, current AI raises ethical, legal, social, and technological challenges. Trustworthiness and explainability of AI tools based on Deep Learning (DL) form an emerging field of research with great promise for higher-quality healthcare. In particular, it refers to AI/DL tools and techniques that produce human-comprehensible solutions, i.e., that provide explanations and interpretations for disease diagnoses and predictions, as well as for recommended actions. Explainable solutions enable improved prediction accuracy together with an understanding of the decisions made and traceability of the actions taken. Interpretable AI aims to improve human understanding, assess the justifiability of machine-made decisions, build trust, and reduce bias. This special issue invites recent studies and research focusing on interpretable AI methods that generate human-readable explanations, particularly those that aim to (i) improve trust and reduce analysis bias, (ii) stimulate discussion on system design, and (iii) apply and evaluate novel explainable AI to improve the accuracy of pathology workflows for disease diagnosis and prognosis. We invite researchers working on practical use cases of trustworthy AI models to discuss adding a layer of interpretability and trust to powerful algorithms, such as neural networks and ensemble methods, for delivering near real-time intelligence.

Only high-quality and original research contributions will be considered. The special issue will highlight, but not be limited to, the following topics:

  • Emerging AI for the analysis of digitized pathology images
  • Trustworthy AI in computational pathology
  • Explainable AI for computational pathology
  • Explainable AI for whole slide image (WSI) analysis
  • Advanced AI for WSI registration
  • AI-based systems using human-interpretable image features (HIFs) for improved clinical outcomes
  • Human-level explainable AI
  • Detection and discovery of predictive and prognostic tissue biomarkers
  • Histopathologic biomarker assessment using advanced AI systems for accurate personalized medicine
  • AI-assisted computational pathology for cancer diagnosis
  • Immunohistochemistry scoring
  • Interpretable deep learning and human-understandable machine learning
  • Trust and interpretability
  • Theoretical aspects of explanation and interpretability in AI

Guest Editors

Key Dates

  • Submission deadline: February 20, 2022
  • First reviews due: March 1, 2022
  • Revised manuscript due: April 1, 2022
  • Final decision: June 30, 2022
  • Camera-ready version due: July 1, 2022
