
Authors

Yingjie Feng, Wei Chen, Xianfeng Gu, Xiaoyin Xu, Min Zhang

Abstract

Alzheimer’s disease (AD) is an irreversible neurodegenerative disease, so early identification of AD and its early-stage disorder, mild cognitive impairment (MCI), is of great significance. However, currently available labeled datasets are still small, so the development of semi-supervised classification algorithms will be beneficial for clinical applications. We propose a novel uncertainty-aware semi-supervised learning framework based on improved evidential regression (ER). Our framework uses the aleatoric uncertainty (AU) from the data itself and the epistemic uncertainty (EU) from the model to optimize the evidential classifier and the feature extractor step by step, achieving performance close to that of supervised learning with only a small amount of labeled data. We conducted various experiments on the ADNI-2 dataset, demonstrating the effectiveness and advancement of our method.
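The AU/EU decomposition the abstract refers to can be illustrated with the standard deep evidential regression formulas for a Normal-Inverse-Gamma (NIG) head. This is a minimal numpy sketch, not the authors’ code: the paper’s modified loss and recycle-training scheme are not reproduced here, and the `decompose_uncertainty` helper and the OOD threshold are illustrative assumptions.

```python
import numpy as np

# Hypothetical NIG parameters (nu, alpha, beta) produced by an evidential
# regression head, one entry per sample. Under the standard formulation:
#   aleatoric (data)  uncertainty AU = beta / (alpha - 1)
#   epistemic (model) uncertainty EU = beta / (nu * (alpha - 1))
def decompose_uncertainty(nu, alpha, beta):
    au = beta / (alpha - 1.0)           # irreducible noise in the data
    eu = beta / (nu * (alpha - 1.0))    # model uncertainty; shrinks as nu grows
    return au, eu

nu = np.array([1.0, 5.0, 0.5])
alpha = np.array([2.0, 3.0, 1.5])
beta = np.array([1.0, 2.0, 1.0])
au, eu = decompose_uncertainty(nu, alpha, beta)

# High-EU samples could be flagged as out-of-distribution candidates
# (illustrative threshold, not from the paper).
ood_mask = eu > 1.0
```

Because EU carries the extra `1/nu` factor (the "virtual observation count"), it can fall with more evidence while AU stays fixed, which is what makes the two signals separable for step-by-step optimization.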

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43907-0_13

SharedIt: https://rdcu.be/dnwcb

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #3

  • Please describe the contribution of the paper

    The paper presents several noteworthy contributions. First, it improves the accuracy of evidential regression by adjusting its loss function, resulting in better separation of AU and EU. Second, it develops a multi-layer, multi-step network to implement evidential regression and proposes a novel semi-supervised learning approach using step-by-step training. Finally, the experimental results on the ADNI dataset show that the proposed approach achieves new state-of-the-art performance in semi-supervised learning, even when only a small amount of labeled data is available, with results comparable to those obtained through supervised learning.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    - Very detailed and high-quality figures
    - The theory is described in detail

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    It would be helpful if you could provide more information about the data used in your study, including the data collection procedures, data cleaning and preprocessing methods, and any relevant information about the variables used in the analysis. By including this information, you can increase the transparency and credibility of your research and allow other researchers to build upon your work.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    It would be helpful if you could provide more information about the data used in your study, including the data collection procedures, data cleaning and preprocessing methods, and any relevant information about the variables used in the analysis. By including this information, you can increase the transparency and credibility of your research and allow other researchers to build upon your work.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    The paper lacks a fair discussion of the shortcomings of typically used approaches. The discussion in the introduction offers much room for improvement; defining these issues and limitations would establish the worth of the proposed methodology.

    It would be helpful if you could provide more information about the data used in your study, including the data collection procedures, data cleaning and preprocessing methods, and any relevant information about the variables used in the analysis. By including this information, you can increase the transparency and credibility of your research and allow other researchers to build upon your work.

    While conclusion is concise, it would benefit from further development and elaboration.

    I suggest expanding on the information provided in the captions to ensure that readers can fully comprehend the content of the figures. This may include additional details on the data being presented, the variables being analyzed, or any key findings or trends that are visible in the figure. Providing this information will enhance the reader’s understanding of the figure and its relevance to the research.

    While tables can be an effective way to present data, they should be designed to be easily readable and understandable for the reader. I suggest revising the tables to make them less crowded and heavy. This could include simplifying the layout, using appropriate headings and subheadings to organize the information, and breaking up large blocks of text into more manageable sections. Additionally, consider using formatting techniques such as bolding to highlight key information or trends.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    -

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    The authors propose an uncertainty-aware semi-supervised learning framework using improved evidential regression, utilizing both aleatoric and epistemic uncertainties to optimize the evidential classifier and feature extractor, achieving comparable performance to supervised learning with a small number of labeled data.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Strengths: 1) Well-written paper. 2) Good progress over the baselines; presenting both supervised and unsupervised baselines for comparison makes it easier to see how the unsupervised version performs relative to its supervised counterparts. 3) The work could be useful for dealing with unlabeled data; their unsupervised model’s results are close to those of the best supervised learning methods with only 20% of the labeled dataset. 4) The evaluation and ablation experiments are detailed, explaining the validity and rationality of the different parts of the proposed model.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The authors mention in the conclusion that their model demonstrates strong potential for clinical deployment, but it still has inferior performance to supervised models. The authors could have explained what makes the unsupervised version the preferred solution (beyond needing less labeled training data), for example: 1) supervised models could be more prone to overfitting given small labeled data and thereby underperform on unseen data; 2) the unsupervised version may be better suited for domain adaptation or fine-tuning on downstream tasks; 3) whether the unsupervised model is light in terms of number of parameters, so that it can be easily deployed for clinical purposes. (Note: these suggestions are not prescriptive, but similar arguments along these lines would make the case for why this approach is better suited for clinical deployment.)

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    No code provided, so hard to comment on reproducibility of the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    • Same comments as mentioned in the weaknesses section.
    • Does the method also perform well on other tasks?
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This is good work on an unsupervised method, achieving results almost comparable to those of supervised models with just 20% of the labeled data. This could be useful for medical-domain tasks where data acquisition is costly.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #4

  • Please describe the contribution of the paper

    The paper proposes a novel semi-supervised learning method that effectively utilizes the estimated uncertainty from the data and the model to improve the classification of Alzheimer’s disease.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The study proposes a sound, theory-based method that achieves SOTA performance on the classification of Alzheimer’s disease
    • Experiments, including comparison with SOTA and ablation studies, are thorough
    • The approach can be easily adapted to other medical problems
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • My main concern is that although the paper trains and evaluates the proposed method on the ADNI-2 dataset, there is no mention of how the validation or test sets were split. Without information on the validation set, it is unclear how the thresholds were selected for training.
    • A minor concern is that the estimated uncertainty is only utilized to enhance classifier performance. The paper does not explore whether the predicted uncertainty can be used to filter out-of-distribution (OOD) images or images the model is highly uncertain about. Since the paper claims that the approach can decouple AU and EU, and subsequently improve uncertainty estimation, demonstrating an enhancement when eliminating uncertain samples would further bolster this assertion.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
    • Reproducibility on the public dataset used in this study should be excellent if the code is made available
  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    • The term “recycle training” confused me, at least until the workflow section. I think it would be helpful if the authors briefly explained, in an intuitive way, the motivation behind this training mechanism in an earlier section.
    • As mentioned above, some description of the data splits and of how the final model was selected for evaluation would make the results more convincing
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper is methodologically interesting with satisfying experimental settings and results

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The proposed method is interesting and has novel components. The experiments are well designed, and the results support the proposed model well.




Author Feedback

Thank you very much to the reviewers and meta-reviewers for your professional, insightful, and encouraging comments and suggestions. We respond to the discussion points as follows and will add these clarifications and discussions in the camera-ready version.

[R2&R3-Conclusion] Your valuable comments are pertinent. R2’s insights into the potential for clinical applications are very accurate. Indeed, using the SSL method can avoid overfitting, while uncertainty preserves the model’s predictive freedom, resulting in better performance on OOD data. Secondly, our method has advantages in fine-tuning and deployment on real data in hospitals. Our framework makes it easy to add new unlabeled data and to adaptively train on data of different qualities based on uncertainty, thereby enhancing the model’s capability with the information provided by these data.

[R2-Further research] After submission, our further experiments showed that our method also performs excellently on different tasks such as brain tumor classification, which demonstrates the generalization ability of our proposed framework; we will continue to explore the capability of our framework on other datasets and tasks.

[R3&R4-Data] For MRI, we used 3T T1-weighted and FLAIR MR images, preprocessed with the CAT12 and SPM tools. All MRI data were processed using a standard pipeline, including anterior commissure (AC)-posterior commissure (PC) correction, intensity correction, and skull stripping. Affine registration was performed to linearly align each MRI to the Colin27 template, and images were resampled to 224x224x91 for subsequent processing. For PET, we used the officially preprocessed AV-45 PET images, resampled in the same way as the MRI. On the issue of dataset division: unlike SL, the unlabeled data in SSL also contributes substantially to the model training process, so there is no completely isolated train/validation/test split as in SL. But as R4 speculated, we did use about 10% of the data to validate independently, without data leakage during the training process, to determine the specific values of the thresholds.

[R3-Caption & R4-Workflow] The confusion regarding the workflow raised by R4 also troubled us during the writing process. In conjunction with R3’s suggestions regarding figure captions, we will add a brief description of the workflow to the caption of Fig. 2, which is located earlier in the text. Additionally, we will include a brief analysis in the caption of Fig. 3 and a description of the transductive effect in the caption of Fig. 4.

[R3-Table] Your comment is valuable, and we will continue to adjust the tables in the camera-ready version to improve readability and aesthetics.

[R4-OOD] Indeed, our framework has the ability to identify OOD and high-uncertainty data. In EDL, such data provides less evidence and therefore contributes less to model training and prediction. We will explore the application of this data in future research.


