Authors

Ze Jin, Maolin Pang, Yuqiao Yang, Fahad Parvez Mahdi, Tianyi Qu, Ren Sasage, Kenji Suzuki

Abstract

In this study, we proposed a novel explainable artificial intelligence (XAI) technique to explain massive-training artificial neural networks (MTANNs). Firstly, we optimized the structure of an MTANN to find a compact model that performs equivalently well to the original one. This enables us to “condense” functions into a smaller number of hidden units in the network by removing “redundant” units. Then, we applied an unsupervised hierarchical clustering algorithm with the single-linkage method to the function maps in the hidden layers. From the clustering and visualization results, we were able to group hidden units with similar functions together and reveal the behaviors and functions of the trained MTANN models. We applied this XAI technique to explain the MTANN model trained to segment liver tumors in CT. The original MTANN model with 80 hidden units (F1=0.6894, Dice=0.7142) was optimized to one with nine hidden units (F1=0.6918, Dice=0.7005) with almost equivalent performance. The nine hidden units were clustered into three groups, and we found the following three functions: 1) enhancing the liver area, 2) suppressing the non-tumor area, and 3) suppressing the liver boundary and false enhancement. The results shed light on the “black-box” problem with deep learning (DL) models; and we demonstrated that our proposed XAI technique was able to make MTANN models “transparent”.
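The clustering step described in the abstract can be illustrated concretely. The following is a minimal sketch, not the authors' implementation: it assumes one 2-D feature map per hidden unit, a correlation distance, and a cut into three clusters (matching the reported grouping of nine units into three functions); the shapes and the random data are placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical feature maps: one 2-D map per hidden unit.
# Nine units and a 64x64 map size are assumptions for illustration.
n_units, h, w = 9, 64, 64
rng = np.random.default_rng(0)
feature_maps = rng.standard_normal((n_units, h, w))

# Flatten each unit's map so each unit becomes one point in a
# high-dimensional space.
X = feature_maps.reshape(n_units, -1)

# Correlation distance groups units whose maps co-vary, regardless of
# scale (the metric is an assumption; the paper specifies only the
# single-linkage method).
dist = pdist(X, metric="correlation")

# Single-linkage agglomerative clustering, as named in the abstract.
Z = linkage(dist, method="single")

# Cut the dendrogram into three groups, matching the three functional
# clusters reported for the optimized nine-unit model.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # one cluster id (1..3) per hidden unit
```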

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43895-0_67

SharedIt: https://rdcu.be/dnwzz

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The study proposed a novel explainable artificial intelligence technique to explain massive-training artificial neural networks for semantic segmentation of liver tumors, and used a hierarchical clustering algorithm to group hidden units with similar functions together.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    It provides an explanation of the hidden units' functions and removes the redundancy across them.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    No comparison with other/previous approaches.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The proposed approach can be applied to other medical imaging tasks to explain the decisions of the deep learning models.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Please add a comparison to previous/other works.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The study proposed a novel explainable technique to explain deep learning models for semantic segmentation, which can be applied to other tasks.

  • Reviewer confidence

    Somewhat confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    Explainable AI is an emerging technique to interpret a DL model's final output. The authors have attempted to extend this concept to explaining intermediate results, which is a good approach. The paper is well written from introduction to conclusion.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper is well written, with a clear introduction to the domain, a state-of-the-art literature survey, the methodology, and the results. The work can be reproduced. The results are promising, and the approach looks novel.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    There are hardly any weaknesses in the paper. Even though the work and the workflow look interesting, the main drawback is in the explanation of the liver CT cases. As there are four phases of imaging in liver CT, the paper does not discuss which phase is considered. Also, only a few example images are shown.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The work can be re-executed and the results can be reproduced.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. Section 2.1: The authors directly mention quantum noise. This noise arises because fewer photons are collected at the detector when a lower kVp is used in image acquisition. The authors should also have stated the source of the noise in the paper they refer to; merely mentioning noise reduction gives no insight into reference [6].

    2. As DCE liver imaging results in a fourth dimension, i.e., phases such as plain CT, arterial, portal venous, and delayed phase images, there is no discussion of which phase images the proposed method is applied to. It is insufficient to show just one image with no explanation of which phase it belongs to.

    3. Page 5, first paragraph: What type of noise did you deal with? In the previous section, quantum noise is mentioned; if the same noise is reduced here, state quantum noise explicitly. Did you find any cases of computational noise or electronic noise? How did you deal with other noise sources?

    4. If you had visualized the segmented liver and liver tumor using a direct volume rendering technique, the results would have been more impressive.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper content is well written from a technical point of view. With a few major comments addressed, the paper can be accepted after a second round of review.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    4

  • [Post rebuttal] Please justify your decision

    It seems only a few review comments were addressed, and the authors were not serious about all of the comments.



Review #3

  • Please describe the contribution of the paper

    The paper presents modifications aimed at improving the explainability of MTANN, a widely used neural network for medical image analysis. The authors propose a more compact structure for the neural network and introduce an algorithm for visualizing the importance of feature maps. The article includes CT visualizations to demonstrate the effectiveness of their method. The proposed modifications have the potential to enhance the interpretability and performance of neural networks for medical image analysis. Further research is needed to evaluate the practicality and generalizability of the proposed approach.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The authors of this paper focus on improving the explainability of the widely used neural network MTANN for medical image analysis. They aim to make the model more transparent and interpretable.

    2. The proposed modifications enhance the network’s performance and make it easier to understand how it makes decisions.

    3. The article presents clear visual representations of the network’s hidden units, which help understand how it processes the data. The authors also demonstrate the effectiveness of their proposed modifications in improving the model’s interpretability.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    (1) The paper has some limitations that need to be addressed. Firstly, the topic of improving MTANN is relatively dated, which may limit its impact on the field. Further research is needed to determine the practicality and generalizability of the proposed approach.

    (2) The paper lacks comparisons with recent XAI models, making it difficult to evaluate the effectiveness of their method. Future studies could benefit from comparing their approach with recent models that enhance the interpretability of neural networks.

    (3) Certain sections of the paper are challenging to follow (i.e., the algorithm), which could hinder the understanding and replication of their approach. The authors could improve the paper’s clarity by revising the explanations and providing more details on their algorithm and how the clustered feature maps contribute to the final result.

    [1] Van der Velden B H M, Kuijf H J, Gilhuijs K G A, et al. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, 2022: 102470.

    [2] Salahuddin Z, Woodruff H C, Chatterjee A, et al. Transparency of deep neural networks for medical image analysis: A review of interpretability methods. Computers in Biology and Medicine, 2022, 140: 105111.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper is likely reproducible since the authors provide a detailed algorithm description and clear visualizations of their results. Reproducibility is essential in scientific research, and it helps other researchers validate and build upon their work.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. The authors should consider broadening the applicability of their methodology beyond MTANN. Their algorithm design appears versatile enough for use with other recent models. However, its current scope limits the potential contribution of the method.

    2. Improvements can be made in the presentation of the paper. For instance, the authors could simplify the algorithm presentation for greater concision and use formal formulas instead of descriptions.

    3. Enhancing the quality of figure illustrations is also recommended. As they stand, the images are too blurry for consideration in top-tier conferences.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    3

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The scope of this method, the performance comparison, and the content presentation

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The paper tries to improve the explainability of neural networks for medical image analysis through a more compact structure and an algorithm for visualizing the importance of feature maps. It includes visualizations to validate the method, which could enhance the interpretability and performance of neural networks for medical image analysis, although further research is needed on practicality and generalizability. The authors attempt to explain the intermediate results of the DL model, which is seen as a beneficial approach. The paper received mixed reviews and is invited for rebuttal. The authors should address the major comments on the applicability of the proposed method to other networks, as well as the comparison with other explainability methods, in addition to the other detailed comments.




Author Feedback

We would like to thank the meta-reviewer and the three reviewers for providing constructive suggestions. Following their suggestions, we respond to them below to improve the clarity and quality of our submission.

  1. Our response to the comments from the meta-reviewer and Reviewer (Rev) 3 on the applicability of the proposed method to other networks: Our proposed XAI method is applicable to other neural-network-based deep-learning (DL) models such as CNNs, ResNet, U-Net, LSTMs, and transformers. Our method consists of 1) sensitivity-based network optimization, 2) weighted feature map calculation, and 3) unsupervised hierarchical clustering of the feature maps for grouping hidden neurons in a neural network (a minimal sketch of the sensitivity-based optimization step appears after this list). All three components of our method are applicable to neural network models. Thus, as long as a DL model is based on a neural network, our method can be used to explain the learned functions of neurons in the network. We applied our XAI method to an MTANN model, which is also a neural-network-based DL model, because we could demonstrate our method's mechanism and effects more clearly on the relatively simple MTANN model. Indeed, we were able to reveal the learned functions of the MTANN model. We plan to expand our experiments to other DL models to demonstrate the applicability and generalizability of our method.
  2. Our response to the comments from the meta-reviewer and Rev 1 and 3 on the comparison with other explainability methods: Many XAI methods have been proposed to explain a trained DL model (i.e., post-hoc methods). Representative XAI methods include class activation mapping (CAM), Grad-CAM, layer-wise relevance propagation (LRP), deep learning important features (DeepLIFT), LIME, and SHapley Additive exPlanations (SHAP). These XAI methods offer post-hoc explanations that indicate which areas in a given input image the trained model focuses on and identify which areas in the image have a positive or negative impact on the model's decision. In other words, those XAI methods are “instance-based” (also known as “local”) and limited to the visual explanation of the model's attention in a given input image (i.e., an instance). They do not offer explanations of the learned functions of the network. In comparison, our XAI method can reveal the learned functions of groups of neurons in a neural network, which we call “functional explanations” and define as explanations of the model behavior by a combination of functions, as opposed to the visualization of neurons. To our knowledge, there is no other XAI method that offers functional explanations. Thus, our method is a post-hoc method that offers both instance-based and model-based functional explanations. In addition, as described in our paper, we showed experimentally that removing one group of hidden neurons considered to contribute nothing to the segmentation task resulted in a better agreement with the ground truth. Thus, our method can be used to control the behavior of the trained model, which is unique among XAI methods. As long as the model is based on a neural network, our method is applicable and can thus be considered “model-flexible”. Our XAI method and other XAI methods are complementary; namely, both can be used at the same time to provide both visual and functional explanations.
  3. Our response to Rev 2’s comment on the liver CT cases: The database used in this study was obtained from an open database (LiTS database). It contains hepatic CT images with liver tumors in a portal venous phase only, because liver tumors were delineated best in this particular phase. Radiologists chose this phase to make “gold-standard” manual segmentation for the same reason. Using a single phase in our experiment contributed to the clear interpretation of the results and demonstration of the effectiveness of our method.
  4. Our response to Rev 2's comment “What type of noise did you deal with?”: Quantum noise was dominant in the CT images due to low radiation exposure.
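For readers unfamiliar with sensitivity-based network optimization (component 1 in the response above), here is a minimal sketch under stated assumptions: a toy one-hidden-layer regressor stands in for the MTANN, and a unit's sensitivity is measured as the loss increase when that unit is silenced. The architecture, loss, and pruning criterion are illustrative assumptions, not the authors' actual procedure.

```python
import torch
import torch.nn as nn

# A toy stand-in for an MTANN-like regressor: one hidden layer whose
# units we probe (the 9x9-patch input and sizes are assumptions).
class TinyMTANN(nn.Module):
    def __init__(self, n_in=81, n_hidden=80):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x, unit_mask=None):
        h = torch.sigmoid(self.hidden(x))
        if unit_mask is not None:  # zero out selected hidden units
            h = h * unit_mask
        return torch.sigmoid(self.out(h))

def unit_sensitivities(model, x, y, loss_fn=nn.MSELoss()):
    """Loss increase when each hidden unit is silenced in turn."""
    n_hidden = model.hidden.out_features
    with torch.no_grad():
        base = loss_fn(model(x), y).item()
        sens = []
        for i in range(n_hidden):
            mask = torch.ones(n_hidden)
            mask[i] = 0.0  # silence unit i only
            sens.append(loss_fn(model(x, mask), y).item() - base)
    return sens

# Usage: rank units by sensitivity and keep only the most important
# ones, e.g., 9 of 80 as in the paper's optimized model.
model = TinyMTANN()
x, y = torch.randn(256, 81), torch.rand(256, 1)
sens = unit_sensitivities(model, x, y)
keep = sorted(range(len(sens)), key=lambda i: sens[i], reverse=True)[:9]
print("units to keep:", keep)
```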




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors' responses to the reviewers' critiques are not satisfactory; they remain superficial and do not reveal the specifics of how the concerns would be addressed. For instance, when responding to the comment on applicability, the authors only mentioned in general terms that their method can be applied to any neural network, without explaining how this can be done.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    There are major concerns with the writing of the current version of the paper. Reviewers also have concerns about the limited discussion of other relevant studies, which makes it difficult to clarify the merit of the paper. The rebuttal was not convincing. I would suggest ‘reject’.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors tried to improve the model explainability for medical image segmentation by introducing a more compact structure and visualizing the importance of feature maps. The concerns from the reviewers and the AC include the applicability of the proposed model and the comparison experiments. In general, the concerns have been addressed by the authors, but Reviewer 2 emphasised that his/her concerns were only partially addressed, without pointing out which concerns remained. I think it is reasonable to do an initial test on a simple model, i.e., the MTANN model. Therefore, I suggest accepting this work.


