
Authors

Hejie Cui, Wei Dai, Yanqiao Zhu, Xiaoxiao Li, Lifang He, Carl Yang

Abstract

Human brains lie at the core of complex neurobiological systems, where the neurons, circuits, and subsystems interact in enigmatic ways. Understanding the structural and functional mechanisms of the brain has long been an intriguing pursuit for neuroscience research and clinical disorder therapy. Mapping the connections of the human brain as a network is one of the most pervasive paradigms in neuroscience. Graph Neural Networks (GNNs) have recently emerged as a potential method for modeling complex network data. Deep models, on the other hand, have low interpretability, which prevents their usage in decision-critical contexts like healthcare. To bridge this gap, we propose an interpretable framework to analyze disorder-specific Regions of Interest (ROIs) and prominent connections. The proposed framework consists of two modules: a brain-network-oriented backbone model for disease prediction and a globally shared explanation generator that highlights disorder-specific biomarkers including salient ROIs and important connections. We conduct experiments on three real-world datasets of brain disorders. The results verify that our framework can obtain outstanding performance and also identify meaningful biomarkers. All code for this work is available at https://github.com/HennyJie/IBGNN.git.

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16452-1_36

SharedIt: https://rdcu.be/cVVpR

Link to the code repository

https://github.com/HennyJie/IBGNN

Link to the dataset(s)

https://www.ppmi-info.org/access-data-specimens/download-data


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes an interpretable GNN for finding group-specific connectome-level features. The authors provide a clear description of the algorithm, which could benefit future work.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper proposes a novel interpretable GNN framework for connectome-based brain disorder analysis. It is interesting work that combines an explanation generator with the GNN so that brain biomarkers can be interpreted at the group level.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    For the brain, there is a common hypothesis that every brain is unique, even within the same group. This paper, however, aims to find group-wise biomarkers. Although good results were obtained, two questions remain: why are the subject-specific explanations not as good, and why not combine subject-wise and group-wise features together?

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors provided clear code to make it reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    1. Abbreviations should be expanded to their full names at first occurrence.
    2. The weights for the losses are not clearly specified.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This work proposes a sound method and identifies interpretable biomarkers for different diseases.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    The article proposes an interpretable deep learning framework to predict disease and identify the disorder-specific salient regions of interest and important connections driving predictions. The method is applied to three data sets: one consisting of HIV-positive subjects, one containing subjects diagnosed with bipolar disorder, and the publicly available data set of the Parkinson’s Progression Markers Initiative. The method achieves much higher accuracy scores than several baseline methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • focus on interpretation of predictions
    • model is carefully derived
    • cross-validation on several datasets
    • superior accuracy compared to 9 other methods!
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • the analysis fails to account for confounders (such as motion, sex, …), so interpretation of the findings is difficult
    • fails to test for significant differences against the baselines
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Partly performed on public data, and source code is provided, so reproducibility of some of the findings should be excellent.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    please see above

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    strong paper with some weaknesses that lower enthusiasm

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #4

  • Please describe the contribution of the paper

    The authors proposed an interpretable brain network-oriented framework, in which a GNN is used to extract embeddings of ROIs from the brain MRI images and an explanation generator is used to learn a disease-specific masking matrix. Their experiments showed that the proposed method achieved superior prediction performance and interpretations derived from the learned masking matrix aligned with existing clinical understandings.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Learning a sparse masking matrix is an interesting approach to impose implicit regularization on brain networks and produces more robust results under the limitation of data size.
    2. Visualizations for both salient ROIs and important connections are helpful to understand the clinical relevance of the proposed model.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Main issue: the method’s main shortcoming is that it cannot provide subject-specific interpretation and hence cannot be used in a more realistic clinical practice.

    Minor issues:

    1. The definition of node feature x_i is not clear (under Eq. 2).
    2. “C” is not defined in Eq. 5.
    3. In the section “Prediction Performance”, “IBGNN+ can further increase the backbone by about 9.7%…”. However, this observation is not supported by Table 1.
    4. Define “HC” before using it.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The submission meets all criteria on the reproducibility checklist.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    1. Extending this method to learn subject-dependent masking matrices would be interesting.

    2. It is unclear how the initial ROI embeddings were obtained. Given the limited size of labeled datasets, using self-supervised methods for pre-training may help to improve the tasks evaluated in this paper.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The proposed brain network-oriented GNN is novel, and the use of a masking matrix to improve the model’s robustness and interpretability is an interesting idea. Evaluations on three different brain imaging datasets are solid.

  • Number of papers in your stack

    1

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
    • The paper presents an interpretable GNN framework for connectome-based brain disorder analysis.
    • The authors provided clear code to make the work reproducible.
    • The authors MUST address the issue of confounders raised by one of the reviewers.
  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    2




Author Feedback

Meta-Reviews: Please see Q2 for our response to the issue of confounders.

Q1: Subject-specific vs. group-wise interpretation (Reviewer #1, #4) (1) We agree that subject-specific interpretations are also useful. In fact, we have directly applied GNNExplainer [1] to each subject, which learns individual-level masks to provide subject-specific interpretations. However, these individual-level masks vary greatly and yield inferior performance compared with our proposed shared masks, possibly due to the inherent noise in neuroimaging and the limited number of subjects in the datasets. (2) As we highlight in this paper, we focus on investigating disease-specific patterns that are common across the group and robust to individual image quality. Meanwhile, by applying the learned group-level masks to individuals, we combine group-level with subject-specific brain networks for further visualization and interpretation. [1] GNNExplainer: Generating explanations for graph neural networks, NeurIPS 2019
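As a rough illustration of point (2), a globally shared edge mask can be combined with each subject's own connectivity matrix via an element-wise product. This is a minimal sketch with a hypothetical helper name and GNNExplainer-style sigmoid masking; the actual mask learning is described in the paper:

```python
import numpy as np

def apply_shared_mask(subject_adj, shared_mask):
    """Combine a subject's weighted connectivity matrix with a globally
    shared edge mask (passed through a sigmoid so entries lie in (0, 1)),
    highlighting the disorder-specific connections for that subject."""
    return subject_adj * (1.0 / (1.0 + np.exp(-shared_mask)))
```

With a zero (uninformative) mask, every edge weight is simply halved, since sigmoid(0) = 0.5; learned masks would instead suppress unimportant edges toward zero.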

Q2: Confounders (e.g., motion, sex, …) for interpretation (Reviewer #3) Thanks for bringing up the issue of potential confounders for rigorous interpretation; we think this is a very important research question. We would like to clarify that (1) This work does not focus on deriving strict causal relations. Instead, we aim to discover insights from the brain structure perspective that align with existing neuroscience understanding and are potentially valuable for clinical or scientific studies. (2) We have paid attention to regressing out the potential influence of some important confounders such as motion, age, and gender during dataset collection and preprocessing. Specifically, the groups in each dataset have balanced age and gender proportions and were collected with the same image acquisition procedure. The motion distortion in each group has been handled by aligning all images to one reference in the preprocessing step. (3) With the available datasets, we can consider applying rigorous methods such as [2] to explicitly regress out measured confounders (e.g., age and gender), and methods such as [3] to find causal relations involving hidden unmeasured confounders (e.g., smoking and depression). [2] Training confounder-free deep learning models for medical applications, Nat. Commun. 2020 [3] Generalized independent noise condition for estimating latent variable causal graphs, NeurIPS 2020

Q3: Abbreviations (Reviewer #1, #4) Thank you for bringing this up. We noticed the missing explanation of HC in Fig. 2, which stands for Healthy Control. We will make sure all abbreviations are explained in the camera-ready version.

Q4: The weights for losses (Reviewer #1) We scale the numerical value of each loss item to the same order of magnitude and sum them up as the final training objective, so as to balance the influence of each loss term.
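This balancing scheme could be sketched as follows. It is a simplified illustration with plain floats and a hypothetical helper name; in actual training the terms would be tensors and the scale factors held constant (detached) so they do not affect gradients:

```python
def balanced_sum(losses):
    """Rescale each loss value to the order of magnitude of the first
    term before summing, so that no single term dominates the
    training objective."""
    ref = abs(losses[0]) + 1e-12  # reference magnitude
    return sum((ref / (abs(term) + 1e-12)) * term for term in losses)

# Terms of very different magnitudes contribute comparably:
total = balanced_sum([0.5, 500.0, 0.005])  # each term scaled to ~0.5
```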

Q5: Significance test (Reviewer #3) We perform a paired t-test at a significance level of 0.05 against the baselines and use * to denote significant improvements. Please see the caption of Table 1 for more details.
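Such a per-fold comparison can be reproduced along these lines (the accuracy numbers below are illustrative placeholders, not results from the paper):

```python
from scipy import stats

# Per-fold accuracies of the proposed model and one baseline
# (hypothetical values for illustration only).
ours     = [0.81, 0.78, 0.84, 0.80, 0.79]
baseline = [0.74, 0.72, 0.79, 0.75, 0.71]

# Paired t-test: folds are matched, so we test the per-fold differences.
t_stat, p_value = stats.ttest_rel(ours, baseline)
if p_value < 0.05:
    print("significant improvement (*)")
```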

Q6: Initial embeddings of node feature x_i (Reviewer #4) We treat the choice of node features as a hyper-parameter and try five different types of node features. The edge profile is taken as the final choice, which means using the corresponding row in the edge weight matrix as the node’s initial feature. Please refer to Appendix A for more details.
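For clarity, the edge-profile choice could be illustrated like this (toy connectivity values, not taken from any dataset):

```python
import numpy as np

# Toy weighted connectivity matrix for 4 ROIs (illustrative values).
A = np.array([[0.0, 0.8, 0.1, 0.0],
              [0.8, 0.0, 0.5, 0.2],
              [0.1, 0.5, 0.0, 0.9],
              [0.0, 0.2, 0.9, 0.0]])

# Edge-profile node features: ROI i's initial feature vector x_i is
# simply row i of the edge weight matrix, i.e. X = A.
X = A.copy()
```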

Q7: Definition of “C” in Eq. 5 (Reviewer #4) C is the number of possible prediction labels. Specifically, in our experiments C=2, where the brain disorder prediction label c is either 0 (healthy control) or 1 (patient). We will make this clear in the revision.

Q8: The performance gain of 9.7% (Reviewer #4) This difference is reflected in the F1 metric on the PPMI dataset.

Q9: Use self-supervised methods to alleviate insufficient sample size (Reviewer #4) We agree this is a promising direction for neuroimaging analysis. Thank you for your suggestions.


