
Authors

Qianqian Wang, Mengqi Wu, Yuqi Fang, Wei Wang, Lishan Qiao, Mingxia Liu

Abstract

Resting-state functional MRI (rs-fMRI) is increasingly used to detect altered functional connectivity patterns caused by brain disorders, thereby facilitating objective quantification of brain pathology. Existing studies typically extract fMRI features using various machine/deep learning methods, but the generated imaging biomarkers are often challenging to interpret. Besides, the brain operates as a modular system with many cognitive/topological modules, where each module contains subsets of densely inter-connected regions-of-interest (ROIs) that are sparsely connected to ROIs in other modules. However, current methods cannot effectively characterize brain modularity. This paper proposes a modularity-constrained dynamic representation learning (MDRL) framework for interpretable brain disorder analysis with rs-fMRI. The MDRL consists of 3 parts: (1) dynamic graph construction, (2) modularity-constrained spatiotemporal graph neural network (MSGNN) for dynamic feature learning, and (3) prediction and biomarker detection. In particular, the MSGNN is designed to learn spatiotemporal dynamic representations of fMRI, constrained by 3 functional modules (i.e., central executive network, salience network, and default mode network). To enhance the discriminative ability of learned features, we encourage the MSGNN to reconstruct the network topology of input graphs. Experimental results on two public datasets and one private dataset with a total of 1,155 subjects validate that our MDRL outperforms several state-of-the-art methods in fMRI-based brain disorder analysis. The detected fMRI biomarkers have good explainability and can potentially be used to improve clinical diagnosis.

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43907-0_5

SharedIt: https://rdcu.be/dnwb2

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes a modularity-constrained dynamic representation learning (named MDRL) framework for classification from rs-fMRI data. Their network has three parts: a dynamic graph construction, a modularity-constrained spatiotemporal graph neural network (MSGNN) for dynamic graph representation learning, and a network for prediction and biomarker detection.

    The MSGNN learns spatiotemporal features via graph isomorphism network and transformer layers, and is constrained by three pre-specified brain networks (i.e., central executive network, salience network, and default mode network). The authors compare their framework against several graph neural network and traditional machine learning baselines on two public datasets and one private dataset for three different classification tasks with rs-fMRI input data.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The idea of incorporating modularity based constraints into the training of graph neural networks for rs-fMRI analysis is an interesting methodological contribution for this application. This constraint encourages the GNN to learn embedding signatures that are consistent with common neurocognitive subsystems in the brain that are associated with the disorders being studied.

    2. The evaluation and experimental section is quite thorough. Several different baseline models have been run for comparison. Results on three different classification tasks on separate datasets show improvements over these baselines. Moreover, appropriate ablation studies have been performed to determine the effectiveness of the design choice (dynamic graph construction) and introduction of modularity constraints and graph topology reconstruction. Experiments have also been performed to study the effect of parameter sensitivity.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The modularity constraint is introduced as an inner product between the pre-selected neurocognitive subsystems and the inferred node embeddings. To me, it is not immediately obvious why this results in clustering behavior among nodes, since there are no cross-module terms in this loss. Perhaps I am missing something here; it would be great to provide more intuition on how this loss was designed. Why did the authors decide to pre-select K=3 modules as opposed to other choices?

    2. The following implementation details need further clarification:

    a. The authors mention this in passing under dynamic graph construction (Section 2.2)

    “Considering all connections in an FC network may include some noisy or redundant information, we retain the first 30 strongest edges …”

    How was this signal-to-noise threshold determined? Was this binary adjacency matrix constructed by considering only positive correlations? The sparsity of the dynamic graphs, and therefore the performance, would be sensitive to this choice, which is why this is an important point to clarify.

    b. Under implementation details, the authors mention that “we randomly select m = 50% of all Nk(Nk − 1)/2 paired ROIs in the k-th module (with Nk ROIs) to constrain the MDRL.”

    From a design perspective, it is not clear why this subselection was introduced, given that the modularity constraint (the inner product computation) can be enforced as-is on the node embeddings.

    The discussion section has an empirical study on the “Influence of Modularity Ratio”, where the authors study the effect of changing this ratio. The explanation offered is that a higher ratio potentially causes oversmoothing. For this set of experiments, was the value of \lambda_2 kept fixed? It is not clear whether the oversmoothing is a result of the ratio or of the penalty value \lambda_2.
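    To make the edge-thresholding question in point 2a concrete, below is a minimal numpy sketch of one plausible reading of the construction: keeping only the strongest correlations of an FC matrix as a binary adjacency matrix. The threshold value and whether negative correlations are excluded are exactly the unstated choices this review asks about, so both appear as explicitly assumed parameters rather than details taken from the paper.

    ```python
    import numpy as np

    def build_binary_adjacency(fc, top_k=30, positive_only=True):
        """Keep only the top_k strongest edges of a functional connectivity
        matrix `fc` (an (N, N) symmetric correlation matrix), returning a
        binary adjacency matrix. `top_k` and `positive_only` are assumed
        choices, not details confirmed by the paper."""
        n = fc.shape[0]
        w = fc.copy()
        np.fill_diagonal(w, 0.0)              # ignore self-connections
        if positive_only:
            w = np.where(w > 0, w, 0.0)       # drop negative correlations
        iu = np.triu_indices(n, k=1)          # each undirected edge once
        strengths = w[iu]
        if top_k >= strengths.size:
            keep = strengths > 0
        else:
            thresh = np.sort(strengths)[-top_k]
            keep = (strengths >= thresh) & (strengths > 0)
        adj = np.zeros_like(w)
        adj[iu[0][keep], iu[1][keep]] = 1.0
        return adj + adj.T                    # symmetrize
    ```

    As the review notes, performance would likely be sensitive to both assumed parameters, which is why reporting them matters.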

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors have indicated that implementation code will be made available for reproducibility

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Please refer to the points under weaknesses

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    While I am inclined to recommend acceptance based on the results and methodology, there are a few points under implementation details and design choices that need further clarification. Please see the detailed comments under weaknesses.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    The paper proposed a new method to conduct interpretable brain disorder analysis with rs-fMRI. Comprehensive experimental results verified the effectiveness of the proposed method.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper proposed an interpretable GCN method for rs-fMRI analysis. The proposed MSGNN considers the topology structure and the spatiotemporal information to output robust representations.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    It would be better to use multi-modality data to replace the dynamic graph construction, because the dynamic construction is easily confused with dynamic graph representation learning. In the modularity constraint, it is unreasonable to constrain only 3 networks, because we have no way of knowing how many networks influence the disease. In the graph topology reconstruction constraint, H_t is dense, so the reconstructed \hat A_t can only be dense, which makes it difficult to obtain an effective dynamic GNN. A number of dynamic GNN models have been proposed; more comparison methods (e.g., newly published dynamic GNN models) are suggested.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    code unavailable

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Please see weakness

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    See weaknesses.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    This paper proposes a GNN architecture that incorporates modular information for dynamic FC analyses. It does so by introducing 2 additional terms to the loss function: “modularity constraint” encourages the embeddings of nodes belonging to the same module to be similar, while “graph topology reconstruction constraint” tries to improve the discriminative ability of the embeddings by reconstructing the connectivity matrices from the node level embeddings.
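    The two loss terms described above can be sketched in a few lines of numpy. This is one plausible reading of the constraints (an inner-product similarity over same-module ROI pairs, and a sigmoid(H Hᵀ) reconstruction of the adjacency matrix); the paper's exact formulations may differ.

    ```python
    import numpy as np

    def modularity_loss(H, module_pairs):
        """Encourage node embeddings within the same module to be similar.
        `H` is an (N, d) node-embedding matrix; `module_pairs` lists (i, j)
        ROI pairs drawn from the same pre-specified module. Penalizing the
        negative inner product of each pair is an assumed form of the
        constraint, not necessarily the paper's exact loss."""
        loss = 0.0
        for i, j in module_pairs:
            loss -= H[i] @ H[j]          # large inner product -> low loss
        return loss / max(len(module_pairs), 1)

    def reconstruction_loss(H, A):
        """Graph-topology reconstruction: rebuild the adjacency matrix from
        node embeddings via sigmoid(H H^T) and compare with the input A."""
        A_hat = 1.0 / (1.0 + np.exp(-(H @ H.T)))
        return float(np.mean((A_hat - A) ** 2))
    ```

    Note that in this reading, Review #2's observation holds: sigmoid(H Hᵀ) is dense, so a sparse input adjacency can only be approximated, not reproduced exactly.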

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Modularity of brain functional networks is well-established but not well explored in disorder classification; this paper fills this research gap.
    2. Evaluation is quite robustly conducted over multiple different datasets.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Although multiple datasets were used, the data selection process raises several questions (see details below).
    2. The value of the reconstruction constraint is not very clear - it might have been better to focus on the modularity constraint instead.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    No major concerns. No link to code repository at the time of review.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Major points

    1. Data selection process:

    1.1. On page 3, first line, 1,155 subjects were mentioned - does that refer to the private dataset only?

    1.2. In Section 2.1, it was mentioned that only the 2 largest sites of ABIDE and the MDD dataset were used. Why was that so? Why wasn't the whole dataset used? If inter-site variability is a concern, there are techniques to reduce it, such as ComBat.

    1.3. The choice of only using the largest sites leaves very few data points, especially for inner-loop cross-validation (e.g., with 150 data points, 30 will be in the test split and even fewer are left for validation in the inner-loop CV). While this is a common issue in the field, it might have been better/possible to either use more data (from the other sites) or tune the model on the data from the other sites (if there is a particular reason why only the chosen sites were used).

    2. In the dynamic graph construction, 30 strongest edges were used - how was this number arrived at? If it was selected arbitrarily, it would warrant some experiments just like the ones done for \lambda and modularity ratio.

    3. In the ablation study, it was mentioned that the “modularity constraint may contribute more to MDRL than the graph reconstruction constraint”. From Fig. 3a and 3c, it seems that MDRL w/o R performed very similarly to MDRL, while MDRL w/o M was similar to MDRL w/o MR.

    3.1. What was the motivation behind adding the reconstruction constraint if it does not seem to have much impact?

    3.2. What could explain the lack of improvement from the modularity constraint, as seen in MDRL w/o R in Fig. 3b?

    Minor points

    1. Comparisons with shallow methods do not seem very fair since they do not incorporate temporal information - it might be better to do away with them.
    2. There are more recent versions of the AAL atlas to use, e.g., AALv3.
    3. Why are the results for the HAND dataset presented differently (ABIDE and MDD were in tables but HAND was in a bar chart)? What about the statistical differences for the HAND dataset in Fig. 2?
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    While the modularity constraint seems useful, the choice of including the reconstruction constraint does not seem well-motivated in view of its limited impact (as seen in the ablation studies). It would have been better to keep the focus on modularity constraint and try to understand why it did not seem to help for the MDD dataset. Besides this, the rationale behind only experimenting on selected sites of the ABIDE and MDD datasets isn’t clear.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    The point about SR is reasonable now with these additional details (which should be added in the final version).

    Although the authors tried to argue that MDRL (with M and R) is better than MDRL (with M only, no R), the point was that the contribution of R pales in comparison to M for 2 out of 3 datasets (ABIDE and HAND). Rather than focusing on how MDRL is best in all scenarios, it would be more interesting to consider what the ablation results are telling us.

    On second look at the results,

    • These initial results seem to suggest that for ABIDE and HAND, incorporating modularity information is useful (the performance of MDRL with M only is very close to the full MDRL)
    • On the other hand, modularity information does not seem useful for MDD. Rather, incorporating reconstruction information seems more useful.
    • Different constraints might be useful for different datasets / diseases -> that is a potentially more valuable insight than just showing that MDRL does best in all sites (especially when some of the ablations do not show that MDRL w/ MR does much better than MDRL w/ M and MDRL w/ R)
    • These are just what we observe from these sites and should be further verified on other sites & datasets, but these points could have been mentioned in the paper, rather than being limited to a generic summary of the results in the current manuscript.

    Not completely convinced about not being able to use ComBat in this framework, but I will avoid being pedantic about sample size, especially since there are already 3 datasets used in this work. ‘1,155 subjects’ in the abstract should be made clearer by mentioning that it is a total count / sum.

    Overall, although I still have some reservations about data-related issues, that isn’t a major point. Point about SR is addressed + on second look, the ablation results are actually rather interesting and this paper should go through now. Hope that the updated manuscript will have a more thorough discussion of the results.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    Strengths:

    • Incorporation of modularity into learning of GNNs for rsfMRI analysis is an interesting methodology contribution.
    • Experimental evaluation is thorough, including several datasets, several baseline comparisons, and ablation and parameter sensitivity studies.
    • Proposed method is interpretable for rsfMRI analysis.

    Weaknesses:

    • Concerns regarding modularity constraint - justification of loss design, why set number of network modules to 3.
    • Concerns regarding reconstruction constraint - unclear motivation, and experimental ablation results showed limited impact
    • Concerns regarding dynamic graph construction - why 30 edges, effect of this choice, why not use multimodal approach
    • Concerns regarding data selection - why use only few sites when more data available

    There seems to be strong interest in the introduced modularity constraint for GNN analysis of rsfMRI; however there are model design and implementation details that require clarification. Thus, I’d like to invite this paper for rebuttal.

    In rebuttal, please:

    • Explain how loss for modular constraint results in clustering, and why 3 modules was chosen.
    • Clarify motivation for reconstruction constraint and describe improvement in results.
    • Clarify the dynamic graph construction method, explaining why 30 edges and discussing how the model may be sensitive to this choice.
    • Justify the data selection process - why not use more of the available open data




Author Feedback

We appreciate the AC and Reviewers for the constructive comments. We are encouraged by the many positive comments about our “interpretable and interesting” method (AC&R1&R2), “filling up the gap of modularity for GNN-based fMRI analysis” (R3), “thorough evaluation on multiple datasets” (AC&R1&R2&R3), and “learning robust and discriminative representation” (R2&R3). We address the main concerns as follows.

Q1: Reasons for selecting 3 modules (AC&R1&R2)

  • Previous studies suggest that these 3 modules are fundamental functional modules in the brain to support effective cognitive functions, and have been consistently observed across different individuals and experimental paradigms.
  • We had extensive discussions with our clinical and neuroscience collaborators specializing in ASD, MDD and HIV-related neurocognitive impairment. They unanimously recommend using these 3 modules as constraints for fMRI analysis, encouraging the model to focus on common neurocognitive subsystems and enhancing the interpretation of identified biomarkers.
  • As a promising future work, we will design disease-specific modularity constraints based on neurocognitive research and clinical experience.
  • We’ll add the related discussion in the final version.

Q2: Motivation for graph reconstruction constraint (AC&R2&R3)

  • The graph reconstruction (GR) constraint is used to preserve the underlying topology of brain FC networks and relationships between ROIs during node-level representation learning in our MDRL, thereby helping to extract more discriminative features.
  • As shown in Fig.3, MDRLw/oR (i.e., without the GR constraint) is generally inferior to our MDRL. It suggests that this constraint helps improve the discriminative power of learned representations.

Q3: Details about dynamic graph construction (AC&R1&R3)

  • Sorry for omitting the ‘%’ after 30. We will correct it in the final version.
  • Following a previous study [1], we empirically retain the top 30% (sparsity rate, SR) strongest edges in each FC network to remove redundant information without discarding too much important connection information.
  • We do have experiments studying the effect of SR on the performance of our MDRL, by varying SR values and recording the results of MDRL in ASD vs. HC classification on NYU. We find that MDRL achieves very stable results when the SR is in the range [20%, 60%], e.g., AUC = 71.4% with SR = 40% and AUC = 72.9% with SR = 50%. But with SR = 100%, the AUC is only 68.2%. These results suggest that our chosen SR value is reasonable.

Q4: Data selection (AC&R3)

  • As mentioned by R3, we only used the 2 largest sites from ABIDE and MDD instead of the whole datasets due to significant inter-site variability. ComBat can be used to harmonize fMRI features, but it is not suitable for our framework, which jointly conducts feature learning and classifier training in an end-to-end manner.
  • We will design advanced harmonization methods to reduce inter-site variance so that data from all sites can be utilized for model training.
  • Such discussion will be added in the final version.
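For context on the harmonization discussion above, the core location/scale idea behind ComBat can be sketched as a per-site standardization. The real ComBat additionally applies empirical-Bayes shrinkage of site effects and covariate adjustment; this simplified version is only illustrative.

```python
import numpy as np

def harmonize_location_scale(X, sites):
    """Simplified site harmonization: remove per-site mean/variance
    (location/scale) effects from features, then restore the pooled
    statistics. Only the core idea behind ComBat; the real method
    also uses empirical-Bayes shrinkage of the estimated site effects."""
    X = np.asarray(X, dtype=float)
    sites = np.asarray(sites)
    grand_mean = X.mean(axis=0)
    grand_std = X.std(axis=0) + 1e-8
    out = np.empty_like(X)
    for s in np.unique(sites):
        mask = sites == s
        mu = X[mask].mean(axis=0)
        sd = X[mask].std(axis=0) + 1e-8
        # standardize within site, then map back to pooled statistics
        out[mask] = (X[mask] - mu) / sd * grand_std + grand_mean
    return out
```

A preprocessing step of this kind operates on fixed features, which is why the rebuttal argues it does not fit an end-to-end pipeline where features are learned jointly with the classifier.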

Q5: Reasons for minor improvement on MDD (R3)

  • Our method yields statistically significantly better results than the competing methods, even though the performance improvement on MDD detection is relatively small compared to ABIDE and HAND.
  • MDD detection on the REST-MDD dataset is quite challenging compared with tasks on ABIDE and HAND, since depression is a complex mental disorder and REST-MDD exhibits particularly significant inter-site heterogeneity. For example, [2] only obtained AUC and ACC results of ~51% in MDD detection on the same REST-MDD dataset. We will develop end-to-end harmonization methods to mitigate such heterogeneity, thus facilitating the use of data from all imaging sites in REST-MDD to improve learning performance.

[1] Learning Dynamic Graph Representation of Brain Connectome with Spatio-Temporal Attention, NeurIPS 2021.

[2] DOI: 10.1016/j.bpsc.2020.12.007




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The rebuttal has addressed a number of the original concerns, including the selection of modules and dynamic graph construction approach. Also, I maintain that the proposed incorporation of modularity into GNN learning is an interesting contribution. Following rebuttal, all reviewers now tend toward accept, and I follow their recommendations.

    One other note - while the noted inter-site variability in the rebuttal is certainly an important and difficult challenge, promising to incorporate such an approach is not so constructive, and I suggest using any available space to please incorporate the motivations/methods details that were described in the rebuttal.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    R3 was the reviewer who had the most reservations, and the authors’ reply has addressed their comments. In addition, most comments of the original AC also seem to be addressed. I suggest we ask the authors to fully implement all the comments in the final version of the paper.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors’ responses satisfactorily address the reviewers’ concerns. The explanation of the dataset selection and the contribution of the modules is acceptable. Please include the content mentioned in the response in the final manuscript.


