Authors
Wei Liang, Kai Zhang, Peng Cao, Pengfei Zhao, Xiaoli Liu, Jinzhu Yang, Osmar R. Zaiane
Abstract
Alzheimer’s disease (AD) is a common irreversible neurodegenerative disease among the elderly. Establishing relationships between brain networks and cognitive scores plays a vital role in identifying the progression of AD. However, most previous works focus on a single time point rather than modeling disease progression with longitudinal brain network data. In addition, the available longitudinal data are insufficient for training predictive models. To address these issues, we propose a Self-supervised Multi-Task learning Progression model, SMP-Net, for modeling the relationship between longitudinal brain networks and cognitive scores. Specifically, the proposed model is trained in a self-supervised way by designing a masked graph auto-encoder and a temporal contrastive learning scheme that simultaneously learn the structural and evolutional features of the longitudinal brain networks. Furthermore, we propose a temporal multi-task learning paradigm to model the relationships among multiple cognitive score prediction tasks. Experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset show the effectiveness of our method, with consistent improvements over state-of-the-art methods in terms of Mean Absolute Error (MAE), Pearson Correlation Coefficient (PCC) and Concordance Correlation Coefficient (CCC). Our code is available at https://github.com/IntelliDAL/Graph/tree/main/SMP-Net.
Link to paper
DOI: https://doi.org/10.1007/978-3-031-43907-0_30
SharedIt: https://rdcu.be/dnwcH
Link to the code repository
https://github.com/IntelliDAL/Graph/tree/main/SMP-Net
Link to the dataset(s)
Reviews
Review #2
- Please describe the contribution of the paper
The paper proposes a self-supervised multi-task learning framework for AD progression modeling with longitudinal brain networks. The method consists of self-supervised components (masked graph reconstruction and a temporal contrastive loss) and a temporal multi-task loss. The method was evaluated on ADNI and demonstrated superior performance compared to existing graph learning methods.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
Strength:
- The paper is well written and organized.
- The evaluation of the method was thorough and suggested good performance.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Weakness:
- The novelty is relatively limited and some method-related references are missing. The authors claim three novelties: (1) a masked graph auto-encoder for graph reconstruction, but this was proposed in [1], which is not cited by the paper; (2) temporal contrastive learning using the similarity between subjects and between consecutive time points, an idea widely used in video-based contrastive learning, e.g., [2]; (3) temporal multi-task learning, yet using an LSTM for sequential AD prediction appears in [3,4], and combining several losses for related tasks is also widely used.
[1] Hou, Zhenyu, et al. “GraphMAE: Self-supervised masked graph autoencoders.” Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022.
[2] Dave, Ishan, et al. “TCLR: Temporal contrastive learning for video representation.” Computer Vision and Image Understanding 219 (2022): 103406.
[3] Cui, Ruoxuan, Manhua Liu, and Alzheimer’s Disease Neuroimaging Initiative. “RNN-based longitudinal analysis for diagnosis of Alzheimer’s disease.” Computerized Medical Imaging and Graphics 73 (2019): 1-10.
[4] Ouyang, Jiahong, et al. “Longitudinal pooling & consistency regularization to model disease progression from MRIs.” IEEE Journal of Biomedical and Health Informatics 25.6 (2020): 2082-2092.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The description is clear.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
- The idea of building positive contrastive learning pairs by consecutive visits from the same subject is questionable in the context of disease progression modeling, for which the differences between the sequential visits of the same subjects should be enlarged rather than minimized to be able to capture the longitudinal changes.
- The metrics in Table 1 are not introduced, especially since their reported precision is related to their scale.
- Statistical testing results should also be included for Table 2.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
4
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Main concern is the novelty.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
The authors proposed a novel model, SMP-Net, which uses self-supervised multi-task learning to establish relationships between longitudinal brain networks and cognitive scores for Alzheimer’s disease:
- The model is trained with a masked graph auto-encoder and temporal contrastive learning to learn structural and evolutionary features.
- A temporal multi-task learning paradigm is used to model relationships among multiple cognitive scores.
Results on the ADNI dataset show consistent improvements over state-of-the-art methods in terms of MAE, PCC, and CCC.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The authors describe the current challenges in predicting the relationship between longitudinal brain networks and cognitive scores, and motivate the use of a self-supervised multi-task method;
- The method combines a masked graph auto-encoder and temporal contrastive learning for richer representations of the longitudinal brain networks;
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The authors could provide more details about the masking method: how many nodes/edges are masked, and how a reasonable amount of masking is determined. Depending on the graph structure, can randomly removing a critical edge or node disconnect sub-graphs and make the graph harder to reconstruct?
- In the contrastive loss (eq. 1), the relation of H^t_g(i), H^t_g(j) is not considered;
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The method is evaluated on a public dataset, and the work will be reproducible if the authors provide an open-source code repository.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
- Provide more details about the masking procedure: are the random masks generated during training or fixed, and is the number of masked nodes/edges also random within a range?
- The purpose of randomly masking and reconstructing the graph is to deal with the issue of a small dataset; it is somewhat equivalent to augmenting the graph. Did the authors consider augmenting the graph in other ways to obtain more training graphs, or is masking already sufficient?
- Could the authors provide details on the contrastive loss: 1) the relation of H^t_g(i) and H^t_g(j), and 2) is it necessary to tune the temperature hyper-parameter, or is the method robust in that sense?
- In Table 1, the bold results are not always the highest values; is this an oversight, or how should these be interpreted?
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The proposed method is overall well-organised and addresses the issues raised in the introduction. The contribution of the paper is that it introduces self-supervised multi-task learning to learn the relationship between longitudinal brain networks and cognitive scores, deals with the common challenge of a small dataset by using a graph auto-encoder to reconstruct masked graphs, and handles the varying relationship between the brain network and cognitive scores at different time points with multi-task learning. The paper has sufficient novelty, although some details about the masking and the definition of the contrastive loss are needed.
- Reviewer confidence
Somewhat confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #4
- Please describe the contribution of the paper
This paper proposes a self-supervised multi-task learning paradigm for predicting cognitive scores in Alzheimer’s disease progression using longitudinal brain networks. The proposed method includes a spatio-temporal representation learning module for capturing the structural and evolutionary features of longitudinal brain networks, and a temporal multi-task module for modeling the relationships among cognitive scores prediction tasks at multiple time points.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper is well-written and well-organized
- This work applies a masked graph auto-encoder and temporal contrastive learning to perform self-supervised representation learning, which takes the spatio-temporal nature of longitudinal brain networks into account and enables more effective feature learning.
- The proposed method is compared with multiple SOTA methods and shows significant improvements (as indicated by the p-values reported in the paper).
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- It is unclear why there are two parallel task-specific layers and which task each performs. The definition of the different cognitive score prediction tasks is also ambiguous.
- The authors repeated the 5-fold cross-validation 10 times with different random seeds but did not report the standard deviation, which is important for performance comparison.
- The authors also performed experiments to evaluate robustness, but the results are questionable since it is unclear how much these fine-tuning tasks vary across different runs.
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Very good reproducibility. The authors pledge to make code public and details of experimental settings are provided.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
The authors should give a clear definition of the different cognitive score prediction tasks, and should provide the standard deviation along with the average metrics.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
novel and reasonable methodology; questionable details in experiments and multi-task learning parts.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
This paper received mixed ratings. The authors are expected to address the reviewers’ comments on the following weaknesses: (1) the novelty of the presented method, in particular how it differs from the papers listed by R#2; (2) the details of the masking method, as raised by R#3; (3) the concerns about the experiments listed by R#4.
Author Feedback
Thanks to the reviewers for their time and insightful comments. They found our work novel (R3, R4), well-organized (R2, R3, R4) and effective (R2, R4), but also pointed out some issues. We clarify the main points below.
Q1: Novelty of our work (R2). Our novelties lie mainly in: 1) The masked graph auto-encoder. Unlike other works that focus on node reconstruction, such as [1], we aim to reconstruct the brain network (BN) structure, since it better reflects the mechanism of disease progression. Moreover, instead of simply leveraging graph convolution to learn a node’s representation by aggregating its neighbors’ features, our proposed topology-aware encoder obtains the node representation by modeling its associated connections. 2) Temporal contrastive loss. The definitions of positive and negative pairs differ from other works such as [2]. Longitudinal BNs at multiple visits characterize the disease progression of the brain. Because AD progresses slowly, the distance between the BN structures of the same subject at two successive time points should be minimized, so they are taken as a positive pair; this is equivalent to applying temporal smoothing to the temporal graph structure. In contrast, the BN structures of different subjects are distinct, so their distance should be enlarged and they form a negative pair. 3) Temporal multi-task learning. We formulate progression prediction for multiple time points as a multi-task learning (MTL) problem so that the tasks benefit from each other; the works in [3,4] ignore the inherent correlations among the multiple tasks. 4) To our knowledge, we are the first work to model the relation between longitudinal BNs and cognitive scores, a problem previously unexplored due to scarce data. We therefore believe this will open new avenues for deploying our framework in the clinic to aid diagnosis and treatment.
Q2: Details of the masking method (R3). We randomly mask 20% of the nodes following a probability function in which the mask probability of each node is inversely proportional to the average of its PCC with the other nodes; this prevents important nodes/edges from being masked. The edges associated with the masked nodes are masked simultaneously. The masking ratio (MR) is a hyperparameter selected by nested cross-validation from 0.05 to 0.5, and the optimal MR is 0.2. A higher MR leads to the removal of too many connections.
Q3: Other graph augmentations (R3). We tried augmenting the graph by deleting nodes/edges, but the performance decreased. The reason is that common graph augmentations destroy the BN structure, which again validates that the BN structure is critical for BN analysis.
Q4: Details of the contrastive loss (R3). 1) The relation of H^t_g(i) and H^t_g(j)? We aim to develop a progression model, so we consider samples at two consecutive time points rather than at the same time point. We also tried taking H^t_g(i) and H^t_g(j) as a negative pair and found that it does not help the final result. 2) We empirically varied τ from 0.01 to 0.1, and the results show that our model is robust to τ; we finally set τ to 0.01.
Q5: Definition of the different tasks (R4). The definition is given in Sec. 2.1 and Fig. 1. The tasks are to individually predict multiple cognitive scores at time points M24, M36 and M48 from the BNs at [M0, M6, M12]. The overall model consists of a shared layer and multiple task-specific layers (one per task); for brevity, only two task-specific layers are shown in Fig. 2.
Q6: CCC, t-test (R2) and Std (R4). CCC ranges from 0 to 1 and reflects both the correlation and the absolute error between the true and predicted values. All differences in the ablation study are significant (p < 0.01) under a t-test.
We will add this information to Table 2. We will also add the standard deviations in 1) and 2) below to Table 1 and Fig. 3 in the final version.
1)
Task | GCN | eGCN | stGCN | DySat | Ours
24 | 0.045 | 0.078 | 0.078 | 0.051 | 0.038
36 | 0.076 | 0.124 | 0.131 | 0.048 | 0.060
48 | 0.091 | 0.084 | 0.096 | 0.085 | 0.083
2)
Task | 0_6 | 6_12 | 0_6_12
MAE
24 | 0.368 | 0.333 | 0.307
36 | 0.345 | 0.292 | 0.273
48 | 0.270 | 0.273 | 0.373
CC
24 | 0.055 | 0.026 | 0.033
36 | 0.062 | 0.008 | 0.015
48 | 0.033 | 0.055 | 0.054
References: [1] Hou, SIGKDD; [2] Dave, CVIU; [3] Cui, CMIG; [4] Ouyang, JBHI.
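Editor’s note: the PCC-based node masking (Q2) and the temporal contrastive loss with consecutive-visit positive pairs (Q1/Q4) can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration under stated assumptions, not the authors’ implementation: the function names, tensor shapes, the exact normalisation of the masking probabilities, and the InfoNCE-style formulation are assumptions; the released code at https://github.com/IntelliDAL/Graph/tree/main/SMP-Net is authoritative.

```python
import torch
import torch.nn.functional as F


def pcc_inverse_node_mask(corr: torch.Tensor, mask_ratio: float = 0.2) -> torch.Tensor:
    """Sample a node mask in which the probability of masking a node is inversely
    proportional to its average absolute PCC with the other nodes, so that highly
    correlated (important) nodes are rarely masked. `corr` is an (N, N) brain-network
    correlation matrix; returns a boolean vector of length N (True = masked)."""
    n = corr.shape[0]
    avg_pcc = (corr.abs().sum(dim=1) - 1.0) / (n - 1)   # exclude the diagonal (self-correlation = 1)
    weights = 1.0 / (avg_pcc + 1e-6)                    # inverse importance
    probs = weights / weights.sum()
    num_mask = max(1, int(mask_ratio * n))
    idx = torch.multinomial(probs, num_mask, replacement=False)
    mask = torch.zeros(n, dtype=torch.bool)
    mask[idx] = True
    return mask  # edges incident to masked nodes would be dropped as well


def temporal_contrastive_loss(h_t: torch.Tensor, h_t1: torch.Tensor, tau: float = 0.01) -> torch.Tensor:
    """InfoNCE-style loss over graph embeddings of B subjects at two consecutive
    visits (both (B, D)). The positive pair is the same subject at t and t+1;
    embeddings of different subjects act as negatives, as described in the rebuttal."""
    z_t = F.normalize(h_t, dim=1)
    z_t1 = F.normalize(h_t1, dim=1)
    logits = z_t @ z_t1.T / tau                               # (B, B) cosine-similarity matrix
    targets = torch.arange(z_t.shape[0], device=z_t.device)   # diagonal entries are positives
    return F.cross_entropy(logits, targets)


# Toy usage with random data: 90 ROIs, a batch of 8 subjects, 64-d graph embeddings.
corr = torch.rand(90, 90)
corr = (corr + corr.T) / 2
corr.fill_diagonal_(1.0)
node_mask = pcc_inverse_node_mask(corr, mask_ratio=0.2)
loss = temporal_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64), tau=0.01)
```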
Post-rebuttal Meta-Reviews
Meta-review # 1 (Primary)
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The major concern was the novelty of the proposed method, which was largely explained in the rebuttal. Thus, accept.
Meta-review #2
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The paper presents a self-supervised multi-task learning paradigm for AD progression modeling with longitudinal brain networks. Overall, the reviewers appreciated the clarity and the experimental results of the method; however, they raised concerns about the limited novelty and about various details of the experiments and other parts of the manuscript. The authors submitted a rebuttal to address these points. The meta-reviewer finds the authors’ answers convincing and thinks that the paper will be a nice contribution to MICCAI.
Meta-review #3
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
After carefully reviewing the authors’ feedback and the final decisions of the reviewers, it is apparent that there are mixed scores for this paper. Two reviewers have suggested acceptance, emphasizing the impact of the technique presented. On the other hand, one reviewer has maintained a rejection score, primarily citing novelty as the main factor. However, it is worth noting that this reviewer also recognizes the value of the paper, particularly the experimental insights.
After a thorough evaluation of the authors’ feedback and the reviewers’ final decisions, the Meta Reviewer agrees with the majority of reviewers who lean towards acceptance. The careful consideration of the authors’ feedback and the reviewers’ opinions supports this decision.