
Authors

Dongdong Chen, Mengjun Liu, Zhenrong Shen, Xiangyu Zhao, Qian Wang, Lichi Zhang

Abstract

Different functional configurations of the brain, also known as “brain states”, reflect a continuous stream of cognitive activities. These distinct brain states confer heterogeneous functions to brain networks. Recent studies have revealed that extracting information from functional brain networks is beneficial for neuroscience analysis and brain disorder diagnosis. Graph neural networks (GNNs) have been demonstrated to be superior in learning network representations. However, these GNN-based methods pay little attention to the heterogeneity of brain networks, especially the heterogeneous information of brain network functions induced by intrinsic brain states. To address this issue, we propose a learnable subdivision graph neural network (LSGNN) for brain network analysis. The core idea of LSGNN is to implement a learnable subdivision method that encodes brain networks into multiple latent feature subspaces corresponding to functional configurations, and to extract brain network features in each subspace separately. Furthermore, considering the complex interactions among brain states, we also employ the self-attention mechanism to acquire a comprehensive brain network representation in a joint latent space. We conduct experiments on a publicly available dataset of cognitive disorders. The results affirm that our approach achieves outstanding performance and also enhances the interpretability of the brain network functions in the latent space.
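To make the pipeline described in the abstract concrete, here is a minimal NumPy sketch of the idea: soft assignment of GNN node embeddings into K latent subspaces, per-subspace readout, self-attention across subspaces, and mean pooling. All weights are random stand-ins for learned parameters, and the dimensions (116 ROIs, 7 states, 16 features) are illustrative assumptions, not the paper's confirmed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions: N brain regions, d-dim node features, K latent subspaces ("brain states").
N, d, K = 116, 16, 7

H = rng.normal(size=(N, d))          # node embeddings from a GNN (random stand-in)
W_assign = rng.normal(size=(d, K))   # learnable assignment weights (random stand-in)

# 1) Soft subdivision: each node gets a distribution over the K subspaces.
S = softmax(H @ W_assign, axis=1)    # (N, K), rows sum to 1

# 2) Per-subspace readout: one pooled embedding per "brain state".
Z = S.T @ H                          # (K, d)

# 3) Self-attention over the K subspace embeddings to model their interactions.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, Km, V = Z @ Wq, Z @ Wk, Z @ Wv
A = softmax(Q @ Km.T / np.sqrt(d), axis=1)   # (K, K) attention among states
Z_fused = A @ V                               # (K, d)

# 4) Mean pooling into a single joint representation for classification.
z = Z_fused.mean(axis=0)             # (d,)
```

The soft assignment matrix `S` plays the role of the learnable subdivision: each row is a distribution over brain states, so a node can contribute to several functional configurations at once.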

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43993-3_6

SharedIt: https://rdcu.be/dnwM7

Link to the code repository

https://github.com/haijunkenan/LSGNN

Link to the dataset(s)

https://adni.loni.usc.edu/


Reviews

Review #1

  • Please describe the contribution of the paper

    This work proposes a learnable subdivision graph neural network (LSGNN) for brain network analysis from functional MRI data to address the heterogeneity in expressing intrinsic brain states across the population. LSGNN aims to implement a learnable subdivision method that projects connectivity data into multiple latent feature subspaces (that correspond to the different ‘functional organizational modules’). Interactions across subspaces are then encoded using an attentional transformer model. In turn, this aims to improve the interpretability in the extraction of functional representations that are useful for classification of cognitive disorders. Experiments are performed on three different clinical datasets against different GNN frameworks and an SVM model.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The idea of using a transformer-based end-to-end model to encourage clustering of the data into distinct brain connectivity states is an interesting formulation and could potentially aid interpretability in GNN-based models. The improved results against existing GNN models on three separate datasets are encouraging.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The comparisons against IB-GNN (N-E) in Table 1 and the third column of the ablation study in Table 2 are rather close for several metrics. Have the authors verified whether these differences reach statistical significance?

    2. The comparisons in Table 2 are restricted to GNN-type approaches and SVM models. Given that a large part of the methodology relies on a transformer-based attentional backbone, how does the framework compare against vanilla transformer networks or graph transformer models?

    3. The presentation of Section 3.4, i.e. the discussion on interpretability, is rather weak. It is unclear from the plots in Fig. 2 whether any distinctive patterns are uncovered by the LSGNN. Have the authors examined whether these patterns are consistently recovered for this task across initializations/subsets of the population? Given that interpretability is a major focus of this work (from the title), the analysis as presented here is rather inadequate.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors have indicated that training/inference code and models will be made available for reproducibility.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. The paper refers to “functional heterogeneity of the whole brain network” in the introduction and contrasts this with existing notions of heterogeneity. However, the authors do not provide a formal definition of this term or the context in which their approach addresses this issue.

    2. Typo on page 6: handcraft features –> handcrafted features

    3. It would be great if the authors could provide a few sentences on how this work extends standard transformer models in Section 2.2 and 2.3

    4. How sensitive is the performance to the choices of \alpha and \beta? Do the authors use a validation set to infer these hyperparameters?

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    3

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The main reason behind the lower score is that the evaluation of the work is not thorough (please see weaknesses) and does not adequately support the claims made in the contributions section.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    This paper proposed a learnable subdivision graph neural network (LSGNN) to extract the heterogeneous features of brain networks under various functional configurations, along with a novel assignment method that encodes brain networks into multiple latent feature subspaces in a learnable way, which are interpretable as brain states. In the experiments, the proposed method was applied to the ADNI fMRI dataset, and classification was performed with both existing and proposed methods. The proposed method achieved outstanding performance and enhanced the interpretability of brain network functions.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Existing graph neural networks rarely considered the functional heterogeneity of the whole brain network and were poorly interpretable in brain network analysis. The proposed method, LSGNN, could address these problems. The experimental results also showed the superiority of the proposed LSGNN in classification accuracy and in the interpretability of brain states.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Although the paper states that the brain states are interpretable through Fig. 2, the figure shows only connectivity matrices without node information, so it is not possible to know how to interpret them.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper was clearly written, but the proposed method is complex and seems difficult for readers to re-implement.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    The proposed method was novel, and its performance was adequately compared to the existing methods. However, the results of the interpretability part were a little unsatisfactory.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The proposed method was novel, clearly written, and proven superior with appropriate experiments.

  • Reviewer confidence

    Somewhat confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    In this work, the authors proposed an innovative learnable subdivision graph neural network (LSGNN) to analyze functional brain networks.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The major strengths are as follows: 1) It implements a learnable subdivision method to encode brain networks into multiple latent feature subspaces corresponding to functional configurations, and extracts brain network features in each subspace separately. 2) It employs the prevalent self-attention technique. 3) It presents a well-designed experimental study to demonstrate the efficacy of the proposed LSGNN.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) The authors need to clarify whether they included any prior knowledge to train LSGNN, for example, specific ROIs or an atlas to determine the dominant nodes in the functional connectivity graph.

    2) The reviewers are very curious about the architecture of the proposed LSGNN. In detail, the authors need to describe the hyperparameters, e.g., the number of layers, number of nodes, kernel size (if applicable), and the optimization algorithms.

    3) The reviewers are also concerned about reproducibility. Do the authors employ any metric, such as identifiability, to investigate reproducibility in the empirical experiments?

    4) In Table 1, the authors only provide the results of classification and robustness. The proposed LSGNN seems much more complicated than canonical GNNs, so the authors are suggested to also compare LSGNN with canonical GNNs in terms of time consumption. Furthermore, the results in Table 1 show that the standard deviation of LSGNN is larger than that of other canonical learning methods, e.g., GNN, SVM. Can the authors explain why LSGNN is less robust than peer algorithms?

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors do not discuss reproducibility in this work.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html


  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    1) The authors need to clarify the details of the architecture of the proposed LSGNN.

    2) The authors are strongly suggested to add further validations comparing LSGNN with peer methods, e.g., SVM and GNN.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper proposed a learnable subdivision method to extract heterogeneous features of brain networks and to encode brain networks into multiple latent feature subspaces corresponding to functional configurations. The key strength of this paper is that it models the heterogeneity of different brain subnetworks rather than treating the brain as a unified network, which is more reasonable. Although there are some interesting findings and merits in this paper, the meta-reviewer as well as a majority of the reviewers have some concerns, and invite the authors to provide a rebuttal clarifying the following major points: 1. How to ensure statistical significance when comparing with other methods; 2. Insufficient comparisons with other SOTA models; 3. Unclearness of model interpretation, parameter settings, and validations. Please also refer to the detailed comments from each reviewer.




Author Feedback

We thank all reviewers and the AC for the insightful comments. Our responses to the major concerns are itemized as follows. We will also make further revisions based on our responses in the final version.

1) How to ensure statistical significance when comparing with other methods (AC, R1): We performed 10 repetitions of cross-validation for all tasks, and the result of LSGNN was significantly better than the other comparison methods (p<0.05) based on pairwise t-tests (fourth paragraph, Page 6).

2) Insufficient comparisons with other SOTA models (AC); how about vanilla transformer networks or graph transformer models (R1): Our LSGNN leverages the GNN's capability to represent network information and incorporates a self-attention mechanism to fuse heterogeneous information. Therefore, the entire framework follows the GNN message-passing architecture rather than the classical transformer architecture, which is why we compared against GNN models (GCN, GAT, BrainGNN, IBGNN) in the paper. Note that the vanilla transformer network is unable to deal with graph-structured data directly. Besides, we have further compared against a standard graph transformer model (GTN, Graph Transformer Network, NeurIPS 2019), which achieved 72.15±1.49%, 77.82±1.74%, and 66.94±2.08% ACC on the three tasks, significantly lower than the proposed method (78.88±1.58%, 87.47±1.71%, 74.24±1.64%).

3) Unclearness of model interpretation (AC): Are any distinctive patterns uncovered, and are these patterns consistent in the population (R1)? Why does Fig. 2 show only connectivity matrices without node information (R2)? For R1, we obtain seven distinct brain connectivity patterns as presented in Fig. 2, each corresponding to a functional brain state. All patterns are computed as the average over testing samples, so they are consistent in the population. For R2, we draw functional connectivity matrices with the aim of investigating brain states and revealing the impact of different brain states on diseases. The node information is only used to search for salient nodes among brain regions, which is irrelevant to the brain state information and therefore not discussed in our paper.

4) Unclearness of parameter settings and validations (AC); how sensitive are \alpha and \beta (R1)? Clarify the details of the architecture (R3): We conducted 10 repetitions of cross-validation for all experiments and performed a grid search for the hyper-parameters. Detailed experimental settings are on Page 6, third paragraph, and detailed parameter configurations are listed in Appendix Table 1, including the layers of the architecture (e.g., GNN_assign layer=1, MLP_1 layer=1, etc.) and hyper-parameters (e.g., \alpha=1e-3, \beta=1e-2, etc.). We also conducted sensitivity experiments with \alpha and \beta ranging from 1e-1 to 1e-5, and plotted the results of all tasks as 3D bar charts in Appendix B.
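The pairwise t-test described in the rebuttal's first point can be sketched as follows. The per-run accuracies below are made-up placeholders, not the paper's results, and 2.262 is the two-sided Student's t critical value for 9 degrees of freedom at alpha = 0.05.

```python
import numpy as np

# Hypothetical per-run accuracies for two methods over 10 cross-validation runs
# (illustrative numbers only, not taken from the paper).
acc_lsgnn    = np.array([0.79, 0.77, 0.80, 0.78, 0.81, 0.76, 0.79, 0.80, 0.78, 0.77])
acc_baseline = np.array([0.74, 0.73, 0.76, 0.72, 0.75, 0.71, 0.74, 0.75, 0.73, 0.72])

diff = acc_lsgnn - acc_baseline
n = len(diff)

# Paired t statistic: mean of the per-run differences scaled by its standard error.
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))

# Two-sided critical value of Student's t for df = n - 1 = 9 at alpha = 0.05.
T_CRIT = 2.262

significant = abs(t_stat) > T_CRIT
```

Pairing the runs (rather than comparing pooled means) controls for the fold-to-fold variability that both methods share, which is why a paired test is the standard choice for cross-validated comparisons.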

Our responses to the reviewers' specific comments are itemized as follows.

For Reviewer #1: 1) Unclear definition of "functional heterogeneity of the whole brain network": Brain states represent various functional configurations of the brain network (e.g., Visual, Attention, etc.), capturing distinct functional signatures. These brain functions exhibit heterogeneity in their characteristics. 2) How this work extends standard transformer models: In the FSB module, we treat the embedding of the brain network in each subspace as a word vector, use the self-attention mechanism to fuse their information, and finally obtain the brain network representation in a joint latent space through a mean pooling layer.

For Reviewer #3: 1) Whether pre-knowledge such as ROIs and an atlas is used: We follow the general process of GNN-based brain network analysis and use the AAL116 atlas to define the ROIs. 2) Concerns about reproducibility: We have listed all important parameters and architecture settings in Appendix A, and will publish the source code and models for reproducibility as promised in the submission.
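For context on the "general process of the GNN-based method" mentioned in the response to Reviewer #3, a typical graph construction from atlas-defined ROI time series is sketched below. The random data and the top-10% thresholding are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for preprocessed fMRI: one mean time series per ROI
# (the AAL116 atlas defines 116 regions; T time points).
N_ROI, T = 116, 200
ts = rng.normal(size=(N_ROI, T))

# Functional connectivity: Pearson correlation between ROI time series.
fc = np.corrcoef(ts)                      # (116, 116) symmetric matrix

# A common (assumed, not paper-confirmed) sparsification: zero the diagonal
# and keep only the strongest 10% of connections by absolute correlation.
np.fill_diagonal(fc, 0.0)
thr = np.quantile(np.abs(fc), 0.9)
adj = np.where(np.abs(fc) >= thr, fc, 0.0)   # sparse weighted adjacency
```

The resulting `adj` is the weighted adjacency matrix a GNN would consume, with node features typically taken from the connectivity profile or ROI statistics.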




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper proposed a learnable subdivision method to extract heterogeneous features of brain networks and to encode brain networks into multiple latent feature subspaces corresponding to functional configurations. The key strength of this paper is to consider the heterogeneity of different brain subnetworks rather than a unified network into modeling, which is more reasonable.

    The authors have provided a detailed rebuttal to address the reviewers' and AC's concerns. A majority of the reviewers retain positive opinions of this paper, and I also recommend acceptance.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors proposed a transformer-based method for functional network analysis. The responses to the reviewers and the primary AC seem reasonable and adequate. Overall the paper is well-written and would be a good fit for publication at MICCAI.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The paper is quite novel, and the authors provided a convincing rebuttal, especially with the extra comparison against the state of the art and explanations addressing the reviewers' concerns. I recommend acceptance.


