
Authors

Yuxiao Liu, Mianxin Liu, Yuanwang Zhang, Dinggang Shen

Abstract

The accurate and automatic diagnosis of new brain disorders (BDs) is crucial in the clinical stage. However, previous deep learning based methods require training new models with large data from new BDs, which is often not practical. Recent neuroscience studies suggested that BDs could share commonness from the perspective of functional connectivity derived from fMRI. This potentially enables developing a connectivity-based general model that can be transferred to new BDs to address the difficulty of training new models under data limitations. In this work, we demonstrate this possibility by employing the meta-learning algorithm to develop a general adaptive graph meta-learner and transfer it to the new BDs. Specifically, we use an adaptive multi-view graph classifier to select the appropriate view for specific disease classification and a reinforcement learning based meta controller to alleviate the over-fitting when adapting to new datasets with small sizes. Experiments on 4,114 fMRI data from multiple datasets covering a broad range of BDs demonstrate the effectiveness of modules in our framework and the advantages over other comparison methods. This work may pave the way for fMRI-based deep learning models being widely used in clinical applications.

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43993-3_10

SharedIt: https://rdcu.be/dnwNb

Link to the code repository

https://github.com/dasklarleo/fMRI_meta/tree/main

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes a meta-learning framework for brain disorder diagnosis. The main proposed techniques include a multi-view classifier and an RL-based meta controller.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    S1: The proposed methods are well motivated, with the challenges clearly explained, i.e., modeling higher-order functional connectivity and combating overfitting during model adaptation. S2: The methods are relatively novel and clearly described. The proposed multi-view classifier and meta-controller are reasonable. S3: The experimental results on multiple brain disorder datasets show promising results over some baselines and model ablations.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    W1: The design of high-order graphs is a bit arbitrary, while the direct use of GCN is not novel. The design of Average Node Information is interesting, but the necessity of a complicated reinforcement learning algorithm is not so clear. W2: Since the goal is to show the advantage of the proposed meta-learning framework, it would be more convincing if more prediction models (e.g., GNNs) were tested. W3: Many closely related works on GNN models for brain-connectivity-based disorder analysis, such as [1-3], and meta-learning in the medical domain, such as [4-5], are not discussed at all.
    [1] Cui, Hejie, et al. “BrainGB: a benchmark for brain network analysis with graph neural networks.” IEEE Transactions on Medical Imaging (2022).
    [2] Zhu, Yanqiao, et al. “Joint embedding of structural and functional brain networks with graph neural networks for mental illness diagnosis.” 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2022.
    [3] Cui, Hejie, et al. “Interpretable graph neural networks for connectome-based brain disorder analysis.” Medical Image Computing and Computer Assisted Intervention–MICCAI 2022: 25th International Conference, Singapore, September 18–22, 2022, Proceedings, Part VIII. Cham: Springer Nature Switzerland, 2022.
    [4] Tan, Yanchao, et al. “Metacare++: Meta-learning with hierarchical subtyping for cold-start diagnosis prediction in healthcare data.” Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2022.
    [5] Yang, Yi, et al. “Data-efficient brain connectome analysis via multi-task meta-learning.” Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Most datasets are publicly available. Code is promised to be released after acceptance.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    C1: Different designs of modeling high-order connectivity and combating over-fitting should be discussed (and tested). C2: More GNN models (or non-GNN-based models) should be tested to fully establish the benefit of the proposed meta-learning framework. C3: More closely related works should at least be discussed (if not empirically compared).

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper is overall novel and interesting, while some limitations exist.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    The paper applies meta-learning to brain disorder diagnosis in order to achieve fast adaptation to new brain disorders not present in the training stage. To enable better adaptation, a multi-view graph classifier is proposed to incorporate dynamic information in both high-order and low-order FCNs. Additionally, a meta-controller is applied to prevent overfitting and decide the optimal adaptation step. The authors experimented on a variety of fMRI datasets and demonstrated the effectiveness of the proposed method in few-shot classification of brain disorders.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The paper is well-organized, and the proposed method is well-explained and well-reasoned. Combining a multi-view graph classifier with meta-learning is novel and effective (as shown in the results).
    2. The paper presents an investigation into the potential of developing a general-purpose model that can quickly adapt and generalize to a new brain disorder diagnosis task, especially where data is limited. As far as I know, most existing graph-based models are only effective for specific datasets and struggle to generalize to different datasets. This lack of versatility necessitates re-training and parameter tuning for every new dataset, which can be a cumbersome and time-consuming process.
    3. The authors conducted an ablation study, revealing the contribution of each component in the proposed method.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The meta-controller part is difficult to understand, especially the policy gradients from reinforcement learning; more explanation would help.
    2. A higher ANI value indicates that “all nodes have similar features and thus degrading the performance.” This argument is easy to understand for node classification, but this work deals with graph classification; it is unclear to me why similar node features would harm the classification of graph-level features.
    3. The authors only experimented with a single meta-testing dataset; it would provide stronger evidence if multiple datasets were randomly selected from all 6 datasets as meta-testing datasets.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Very good reproducibility. Code will be released, and the dataset description as well as the experiment settings are clear.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Run more experiments with more datasets, randomly shuffling the meta-training and meta-testing datasets.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    A novel attempt to develop graph meta-learner for fMRI-based task; Contribution is clear and meaningful; Well-reasoned and well-developed methodology; minor flaw in experimental design.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    This paper demonstrates the feasibility and effectiveness of employing the meta-learning algorithm to develop a general adaptive graph meta-learner and transfer it to new BDs.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. This paper assembles a large amount of data from both public datasets and in-house datasets to develop a general BD diagnosis model with meta-learning.
    2. Experiments on 4,114 fMRI data from multiple datasets covering a broad range of BDs demonstrate the effectiveness of their framework and the advantages over other comparison methods.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. How sensitive is the proposed framework to changes in the hyper-parameters?
    2. The explanation of the results and findings is too short. How do you know the importance of different resting-state networks? Is it based on the importance of the ROIs in each network?
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Their code will be released after acceptance.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    see above

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The explanation of result and finding is too short and unclear.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    The authors have answered my concern and I decided to increase my rating to weak accept, based on all reviewer’s comments and the responses of the authors.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper presented an interesting method for model transfer to address the issues of potentially insufficient samples and model reusability. Though the reviews highlight some strengths, most of them focus only on the technical aspects. I highly doubt this method is useful in practical clinical settings. First of all, classification/prediction of brain disorders based on neuroimages is complicated. For AD prediction, we could achieve 90+% accuracy many years ago; however, the fact is that the imaging-based biomarker can only assist other clinical measures. The accuracy reported in this paper is not promising at all. Second, the BD groups are problematic. These BDs have different pathologies, and most of them are unclear now. MCI and AD are different; they are not the same brain disease. The method did not consider the heterogeneity of the data, given the different image sources, age groups, and data-processing influences. In general, I am not convinced that this method can be used, or has potential, for diagnosis on clinical data.




Author Feedback

Meta-reviewer

Meta-reviewer

  1. Imaging-based biomarkers for brain disorders (BDs) can only assist other clinical measures, and for AD prediction, 90+% accuracy can already be achieved; the accuracy in this paper is not promising. We agree that BD diagnosis is achieved based on multiple types of information, including imaging and non-imaging information (i.e., MMSE or other scores), which complement each other for diagnosis. On the other hand, it is important to note that imaging can provide objective information that can be used for early screening of BDs in annual check-ups, where patients do not yet show significant symptoms. Also, technically, imaging-based AD prediction is relatively easy, but the identification of early AD (or ADHD, autism) is more important yet more difficult, so the respective accuracy is relatively low (around 65%-75%). To solve the above issue, we need to develop an effective imaging-based AI model with the limited training samples available in practice. In particular, compared to classical BD classification studies, we add more constraints by requiring the model to adapt quickly to unseen BDs using only a few samples. This constraint influences the accuracy but mimics the conditions of clinical practice, which differ significantly from research-oriented studies with access to large numbers of samples. In this sense, we are dealing with a more complicated problem, and our performance is still promising.
  2. BDs have different pathologies, and most of them are unclear; the method did not consider the heterogeneity of the data; I disagree that the method has potential in clinics. We agree that BDs could have different underlying pathologies (in terms of biochemical processes) but have similar manifestations on neuroimages, as reported in the literature [1]. For example, common features among BDs have been detected from the fMRI perspective [2]. The existence of such common features motivates us to build a general model for the diagnosis of different BDs. Meanwhile, our method is based on meta-learning, which considers and utilizes the data heterogeneity to learn essential information from heterogeneous data for fast adaptation and generalization. The data heterogeneity benefits rather than degrades the model.
     References: [1] DOI: 10.1002/aur.2590; [2] DOI: 10.1016/j.tics.2011.08.003

Reviewer 1 & 2

  3. The use of high-order FCN is arbitrary. High-order FCN is a complementary view to low-order FCN and yields improvements in a large body of literature. We showed its effectiveness in Table 2, 6th row.
  4. The use of reinforcement learning and policy gradient is not so clear. We use reinforcement learning to determine the adaptation step t, in order to achieve optimal domain adaptation on the meta-testing dataset. The policy gradient is used to update the reinforcement learning parameters during meta-training. We will clarify that part in the final paper.
  5. Try different GNN models and testing datasets. In the limited time, we have included GAT and GIN, achieving accuracies of 70.8% and 71.7% when k-shot=50. We also used ADHD and ABIDE as testing datasets, achieving 68.4% and 66.1% accuracy.
  6. Comparison with the suggested related work. We will include it in the final paper.

Reviewer 3

  7. How sensitive is the framework to changes in the hyper-parameters? There are 4 hyper-parameters (i.e., lr, epoch, min and max adaptation step, random seeds) in our method. We tested random seeds for data splitting and parameter initialization in the paper, where our method exhibits reliability. Our method also shows reliability for the other parameters, which will be included in the final paper.
  8. The explanation of the results and findings is short. Is the importance of different resting-state networks based on the importance of ROIs? We will expand the discussion of the importance of different networks, as well as the connections among ROIs, in the final paper. To evaluate the importance of networks, we first calculate the Grad-CAM values of the ROIs, and then average the values of the ROIs within each network as that network's importance.
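
The network-importance computation described in the last response (per-ROI Grad-CAM values averaged within each resting-state network) can be sketched as follows; this is a minimal illustration under assumed inputs, and the function and network names are hypothetical, not taken from the paper or its code.

```python
import numpy as np

def network_importance(roi_gradcam, roi_to_network):
    """Average per-ROI Grad-CAM values within each network.

    roi_gradcam    : 1-D array of Grad-CAM importance values, one per ROI.
    roi_to_network : network label for each ROI (same length/order).
    Returns a dict mapping network name -> mean importance of its ROIs.
    """
    importance = {}
    for net in set(roi_to_network):
        # Boolean mask selecting the ROIs that belong to this network.
        mask = np.array([label == net for label in roi_to_network])
        importance[net] = float(roi_gradcam[mask].mean())
    return importance

# Toy example: 4 ROIs split across two hypothetical networks.
scores = np.array([0.2, 0.4, 0.6, 0.8])
labels = ["DMN", "DMN", "Visual", "Visual"]
print(network_importance(scores, labels))  # per-network means (up to float rounding)
```

In practice, the Grad-CAM values would come from the trained classifier, and the ROI-to-network mapping from the brain atlas used for parcellation.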




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    My major concern is that, given the low prediction accuracy, the authors should convince me of the value of this work and why it deserves to be published compared to the numerous existing AD-related works/methods. In the rebuttal, the authors did not directly address my question.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper combined a multi-view graph classifier with meta-learning to develop a general adaptive graph meta-learner and transfer it to new brain disorder diagnosis tasks. In general, it is an interesting study with a clear presentation. The authors have satisfactorily addressed the reviewers’ concerns in the rebuttal, and all reviewers and the AC recommend acceptance of this paper.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Based on the reviews and the rebuttal, the authors need to work on the manuscript a bit more. I recommend rejection.


