
Authors

Xuesong Wang, Lina Yao, Islem Rekik, Yu Zhang

Abstract

Contrastive self-supervised learning has recently benefited fMRI classification with inductive biases. Its weak reliance on labels prevents overfitting on small medical datasets and tackles high intra-class variance. Nonetheless, existing contrastive methods generate resemblant pairs only on pixel-level features of 3D medical images, while the functional connectivity that reveals critical cognitive information remains under-explored. Additionally, existing methods predict labels on individual contrastive representations without recognizing neighbouring information in the patient group, whereas inter-patient contrast can act as a similarity measure suitable for population-based classification. We hereby propose contrastive functional connectivity graph learning for population-based fMRI classification. Representations on the functional connectivity graphs are “repelled” for heterogeneous patient pairs, while homogeneous pairs “attract” each other. Then a dynamic population graph that strengthens the connections between similar patients is updated for classification. Experiments on the multi-site ADHD200 dataset validate the superiority of the proposed method on various metrics. We provide an initial visualization of the population relationships and explore potential subtypes.
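The “attract/repel” objective the abstract describes can be illustrated with a minimal NT-Xent-style contrastive loss. This is a sketch under assumptions, not the authors' implementation: the function name, the temperature value, and the exact loss form are all hypothetical.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss sketch: the two view embeddings of the same subject
    (matching rows of z1 and z2) attract; views of different subjects repel.
    z1, z2: (n_subjects, dim) arrays of view embeddings."""
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                 # cross-view similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: the two views of the same subject
    return -np.mean(np.diag(log_prob))
```

Under this objective, correctly aligned view pairs yield a lower loss than mismatched pairs, which is the attract/repel behaviour described above.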

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16431-6_21

SharedIt: https://rdcu.be/cVD43

Link to the code repository

https://github.com/xuesongwang/Contrastive-Functional-Connectivity-Graph-Learning

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper proposes a method to encode functional connectome features by using contrastive learning to obtain embeddings that are then used as inputs to graph convolutional networks for disease classification

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper proposes a method to encode functional connectome features by using contrastive learning to obtain embeddings that are then used as inputs to graph convolutional networks for disease classification. As brain imaging datasets are scarce, attempts to overcome overfitting problems are needed.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    There are several issues in the paper.

    1. Notation is difficult to follow, and I found several errors in the mathematical formulations. For example, in the first paragraph of Section 2.1: the errors start with the notation P, defined as the number of patients. What does cal_P refer to? G_i^j is defined in line 1; in line 7, what does G_i refer to? Is P missing? And so on…

    2. The effects of randomly partitioning the time series into two sets need to be studied.
    3. Though the authors generalize their claim to small datasets, the method is tested on only one dataset.
    4. The details of dynamic graph classification (DGC) are sketchy, and from Table 1 I suspect it is ineffective.
    5. References to some existing work on unsupervised graph embedding methods for connectome analysis are missing. For example:

    Ming Jin, Yizhen Zheng, […], Shirui Pan (2021). Multi-Scale Contrastive Siamese Networks for Self-Supervised Graph Representation Learning. DOI: 10.24963/ijcai.2021/204

    6. Though comparison results are provided, how the performance metrics were derived (from your own fair simulations or from the literature) has to be mentioned.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Details of the methods are sketchy and would be difficult to reproduce.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    The paper proposes a method to encode functional connectome features by using contrastive learning to obtain embeddings that are then used as inputs to graph convolutional networks for disease classification. As brain imaging datasets are scarce, attempts to overcome overfitting problems are needed.

    There are several issues in the paper.

    1. Notation is difficult to follow, and I found several errors in the mathematical formulations. For example, in the first paragraph of Section 2.1: the errors start with the notation P, defined as the number of patients. What does cal_P refer to? G_i^j is defined in line 1; in line 7, what does G_i refer to? Is P missing? And so on…

    2. The effects of randomly partitioning the time series into two sets need to be studied.
    3. Though the authors generalize their claim to small datasets, the method is tested on only one dataset.
    4. The details of dynamic graph classification (DGC) are sketchy, and from Table 1 I suspect it is ineffective.
    5. References to some existing work on unsupervised graph embedding methods for connectome analysis are missing. For example:

    Ming Jin, Yizhen Zheng, […], Shirui Pan (2021). Multi-Scale Contrastive Siamese Networks for Self-Supervised Graph Representation Learning. DOI: 10.24963/ijcai.2021/204

    6. Though comparison results are provided, how the performance metrics were derived (from your own fair simulations or from the literature) has to be mentioned.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Clarity of the methods and missing details of comparisons with state-of-the art methods

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #2

  • Please describe the contribution of the paper

    The authors propose a contrastive learning framework for population-based fMRI classification. The proposed framework contains two parts: 1) contrastive graph learning and 2) dynamic graph classification. By employing both parts, the proposed framework is able to outperform all the other benchmarks.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This is a novel application of contrastive learning on the FC graph data. The AUC improvement looks pretty good.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The proposed contrastive learning framework is mainly a straightforward application.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    If the codes and data are available, the results should be reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    In this paper, the authors propose a contrastive learning framework for FC graph classification and show improved performance in patient classification. Overall I found this paper to be clearly written, with good evidence of improvement. I only have a few comments:

    1. Why weren’t the PCD features included in the KNN model? If you remove PCD features, will the current method still outperform KNN?

    2. I am curious: if you train the network in a supervised way, e.g., instead of considering two views from the same subject as “attracted”, you consider all the views from the same patient group as “attracted”, will you get a worse or better result?

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The improvement is good and the method is rather straightforward.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Somewhat Confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #3

  • Please describe the contribution of the paper

    In this paper, the authors propose contrastive learning of functional connectivity graphs for population-based fMRI classification. In addition, a population graph is constructed for dynamic graph classification. They perform experiments on the ADHD200 dataset. The results show that the proposed method outperforms baseline methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Overall I think the idea of using contrastive learning on the functional connectivity graph and Dynamic Graph Classification on the population is novel.
    2. The authors compare against a wide range of baseline methods and conduct statistical tests.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The authors only perform experiments on one dataset; they could extend their experiments to more datasets.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors haven’t released their code, but they provide the network architecture in the supplementary material. The ADHD dataset is also publicly available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    1. They could have an encoder to extract imaging features from fMRI.
    2. Some typos can be fixed: e.g. “The framework overflow” -> “The framework overview”.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I think overall the idea is novel and their experimental evaluation is strong in terms of setting and results.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper proposes a contrastive learning method for graph data and applies it to fMRI connectivity graph learning. The contribution of interest is the application of contrastive learning to graphs rather than pixel data.

    Two of the reviewers were positive about this paper and commented on the novelty of the method, the comparisons against multiple methods, and the good improvements demonstrated. One of the reviewers found that the paper lacked clarity, had difficult notation, and was missing important details. Furthermore, the reviewers in general criticized that a single dataset was used in the experiments and that the paper's generalizability claims are therefore overstated. The authors should address these criticisms.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    7




Author Feedback

We sincerely appreciate the constructive feedback received from all reviewers. The primary concerns are threefold:

  • Clarification of notations and methods (Reviewer 1: Q5.1; Reviewers 2,3: Q8)
  • Lack of generalizability on a single dataset (Reviewer 1: Q10; Reviewer 3: Q5)
  • Reproducibility (Reviewers 1,2,3: Q7)

Response: Notation and Method Clarification:

R1 Q5.1: Two types of graphs are constructed in the framework: the individual FC graph G_i, extracted from the i-th patient, where each node represents a brain ROI, and the population graph G^\cal{P}, where each node stores the embedding of one subject. Hence \cal{P} is the population set {1, 2, …, P}, whereas P indicates the maximum patient index. In line 8 (not 7), G_i concatenates the multiview data {G_i^1, …, G_i^j} as defined in line 1. It refers to the individual FC graph; therefore \cal{P} is not missing. We will further elaborate on these notations in Section 2.1 and add them to Fig. 1.

R2 Q8.1: PCD features are indeed used in all baselines. In Section 3.1, “KNN is trained with raw vectorized FC features” highlights the vectorization of FCs as opposed to the graph inputs adopted in our method. Q8.2: Brain imaging datasets are generally scarce; consequently, training a supervised model will inevitably suffer from overfitting. Your suggestion of treating all views from the same patient group as “attracted” can be considered simple data augmentation, which previous research has shown to be inadequate for mitigating overfitting.

R3 Q8.1: Using raw fMRI features would neglect functional connectivity, which has been shown to be associated with cognitive behaviors in perception and vigilance tasks. Hence, we believe a contrastive framework handling FC inputs is crucial. Besides, encoding FC inputs is more cost-effective: the input dimension is only the number of regions of interest (ROIs), saving the extra layers of convolution and pooling needed for typical 4D brain image inputs.

Dataset Generalizability:

The mean and variance values of the metrics in Table 1 are obtained by splitting the ADHD dataset with 5 different sampling seeds. Hence the small variances in the classification metrics already demonstrate the generalizability of our model against perturbations of the training and testing data distributions. Additionally, we have compared our method with the baselines on ABIDE, a multi-site autism dataset. Our model achieves the highest AUC of 68.23% (second best: 66.46%), the second-highest accuracy of 64.66% (best: 65.61%), and the highest sensitivity of 55.68% (second best: 53.64%). Considering that our method places more emphasis on sensitivity (the autism-class recall rate) without sacrificing much accuracy, it is still the recommended choice.

Reproducibility:

We believe the necessary training details are covered in the paper and the supplementary material. Codes will be shared in an open-access platform upon acceptance of this manuscript.

Minor issues:

R1 Q5.2: Time series are randomly split into multiviews during batch training, which guarantees randomness and stability.
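The random multiview split described in this response can be sketched as follows. This is a hypothetical illustration of the general approach, not the authors' code: the function name, the two-way split, and the use of Pearson correlation as the FC measure are all assumptions.

```python
import numpy as np

def two_view_fc(ts, rng):
    """Randomly partition time points into two disjoint halves and compute
    one functional-connectivity (Pearson correlation) matrix per half.
    ts: (n_timepoints, n_rois) ROI time-series array."""
    t = ts.shape[0]
    idx = rng.permutation(t)                 # a fresh random split each call
    half1, half2 = idx[: t // 2], idx[t // 2:]
    fc1 = np.corrcoef(ts[half1].T)           # (n_rois, n_rois) FC for view 1
    fc2 = np.corrcoef(ts[half2].T)           # (n_rois, n_rois) FC for view 2
    return fc1, fc2
```

Re-drawing the partition in every batch produces fresh, disjoint view pairs, consistent with the randomness and stability point made here.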

R1 Q5.4: As indicated in Fig. 3, Dynamic Graph Classification massively improves subtyping and explainability in the population graph compared with contrastive learning alone (CGL Var-2 in Table 1; middle result in Fig. 3), which is medically significant and necessary.

R1 Q5.6: The baselines have not published their codes for ADHD200. Therefore, we tested their publicly available codes on ADHD200 with the same settings as ours, including batch size, epochs, random seeds, etc.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The reviewers have not updated their reviews post-rebuttal, but I believe the criticisms about clarity and generalizability have been addressed. There were two positive votes and one negative vote from the reviewers; however, even the reviewer who voted to reject the paper put it at the top of their stack (1/4). My vote is to accept this paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    6



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper studies the problem of population-based fMRI classification using contrastive learning, which appears to be interesting. The experimental results are good. So I support the acceptance of this paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    4



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Contrastive learning on functional connectivity graphs is considered novel. The application is also interesting, and the experiments and comparisons to SOTA methods showcase the efficacy of the method. The rebuttal has addressed concerns about clarity and generalizability. The authors are encouraged to add/highlight these discussions and clarifications in the camera-ready version.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    2


