
Authors

Zhe Xu, Donghuan Lu, Jiangpeng Yan, Jinghan Sun, Jie Luo, Dong Wei, Sarah Frisken, Quanzheng Li, Yefeng Zheng, Raymond Kai-yu Tong

Abstract

Segmenting the prostate from MRI is crucial for the diagnosis and treatment planning of prostate cancer. Given the scarcity of labeled data in medical imaging, semi-supervised learning (SSL) presents an attractive option, as it can utilize both limited labeled data and abundant unlabeled data. However, if the local center has limited image collection capability, there may also not be enough unlabeled data for semi-supervised learning to be effective. To overcome this issue, other partner centers can be consulted to help enrich the pool of unlabeled images, but this can result in data heterogeneity, which could hinder SSL methods that function under the assumption of a consistent data distribution. Tailored to this important yet under-explored scenario, this work presents a novel Category-level regularized Unlabeled-to-Labeled (CU2L) learning framework for semi-supervised prostate segmentation with multi-site unlabeled MRI data. Specifically, CU2L is built upon the teacher-student architecture with the following tailored learning processes: (i) local pseudo-label learning for reinforcing confirmation of the data distribution of the local center; (ii) category-level regularized non-parametric unlabeled-to-labeled learning for robustly mining shared information by using the limited expert labels to regularize the intra-class features across centers to be discriminative and generalized; (iii) stability learning under perturbations to further enhance robustness to heterogeneity. Our method is evaluated on prostate MRI data from six different clinical centers and shows superior performance compared to other semi-supervised methods.
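For readers unfamiliar with the teacher-student backbone the abstract refers to, the following minimal PyTorch sketch illustrates the general mean-teacher pattern it builds upon: a supervised loss on the few labeled local images plus pseudo-label supervision from an exponential-moving-average (EMA) teacher on local unlabeled images. This is our own illustrative sketch, not the authors' code; the placeholder network `SegNet`, the hard pseudo-labeling choice, and all hyperparameters are assumptions, and the paper's category-level unlabeled-to-labeled and stability losses are only indicated in comments.

```python
# Minimal mean-teacher sketch (not the authors' implementation) of the
# teacher-student backbone that CU2L builds upon.
import copy
import torch
import torch.nn.functional as F

class SegNet(torch.nn.Module):
    """Placeholder 2-class segmentation network (stand-in for the paper's backbone)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)

student = SegNet()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is updated by EMA, not by gradients

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def ema_update(teacher, student, alpha=0.99):
    # Teacher weights track an exponential moving average of the student weights.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def train_step(labeled_img, label, local_unlabeled_img):
    optimizer.zero_grad()

    # Supervised loss on the few labeled local-center images.
    loss_sup = F.cross_entropy(student(labeled_img), label)

    # Local pseudo-label learning: teacher predictions on local unlabeled data
    # supervise the student (hard pseudo-labels here; the paper's scheme may differ).
    with torch.no_grad():
        pseudo = teacher(local_unlabeled_img).argmax(dim=1)
    loss_pseudo = F.cross_entropy(student(local_unlabeled_img), pseudo)

    # The paper additionally applies category-level unlabeled-to-labeled and
    # stability losses on multi-site unlabeled data (omitted in this sketch).
    loss = loss_sup + loss_pseudo
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()

# Toy usage with random tensors standing in for MRI slices.
x_l = torch.randn(2, 1, 64, 64)
y_l = torch.randint(0, 2, (2, 64, 64))
x_u = torch.randn(2, 1, 64, 64)
print(train_step(x_l, y_l, x_u))
```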

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43901-8_1

SharedIt: https://rdcu.be/dnwCB

Link to the code repository

N/A

Link to the dataset(s)

https://liuquande.github.io/SAML/


Reviews

Review #3

  • Please describe the contribution of the paper

    This paper proposes a framework for semi-supervised learning of prostate segmentation models. It focuses on a multi-site scenario where a large part of the unlabeled data comes from a different data distribution. To this end, the authors propose a teacher-student architecture that uses the labeled data to regularize these out-of-distribution features. In their experiments, their model outperforms existing approaches by a relevant margin. They also present ablation studies of the main components introduced in their model.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1) I think the problem is real. We can get a large number of images nowadays, but they may not be useful due to distribution shift. The proposed framework tries to tackle this problem.

    2) The proposed model outperforms existing approaches by a large margin.

    3) The formulation is long, with many components, but they are well designed and insightful. The ablation study also validates the performance improvement contributed by most of these components.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) More segmentation metrics would be useful. For instance, metrics that account for shape discrepancy of the segmentation, such as the Hausdorff distance, should always be considered in segmentation benchmarks.

    2) The formulations of L_u2l and L_cr seem to take similar model outputs and construct metrics that measure similar things. Are they not redundant? I expected to see the impact of each term in the ablation study, for instance: the performance of the full model, of the model without L_u2l, of the model without L_cr, and of the model without both terms.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    In the reproducibility checklist, the authors promised to release the code and trained weights for their proposed method. This is essential for others to reproduce their results. For the sake of completeness, I would also suggest the authors report the average runtime and the memory footprint of each approach evaluated in their experiments.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Please have a look at question 6.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper tackles a relevant problem and proposes an interesting model. It also provides a thorough evaluation showing that their model can outperform state-of-the-art methods. Therefore, I recommend its acceptance.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #1

  • Please describe the contribution of the paper

    The authors propose a framework called CU2L (Category-level regularized Unlabeled-to-Labeled) for semi-supervised learning on multi-site prostate MRI. Their method has three main components: local pseudo-label learning to confirm the data distribution of the local center; category-level regularization across centers using the limited expert labels; and stability learning to enhance robustness to heterogeneity.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • Challenging setting: instead of standard semi-supervised learning, the authors address a multi-site setting with a more heterogeneous distribution, in which additional pools of unlabeled images from other centers are consulted.
    • Clinical feasibility: the quantitative results of CU2L are superior by a considerable margin compared to other methods in the multi-site setting.
    • Novelty: unlike previous general domain-regularization approaches, CU2L can deal with multiple sources instead of a single specific source.
    • Simplicity: although all of the components of CU2L are simple to understand, satisfying results are achieved.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Limited novelty: the method seems to me like a combination of existing methods with some tweaks.

    • More explanation of the pseudo-label usage is also needed.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    No code provided

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Comment: Well done for tackling a difficult challenge in multi-source domain generalization.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I like the way the authors tackle the multi-site problem with an uncommon multi-source domain-generalization approach.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    This paper focuses on a practical scenario in which an SSL model would be limited by the scarce unlabeled data introduced locally, and it tries to leverage additional multi-site data to improve performance. Overall, the paper is clear and the claims are supported by corresponding experiments.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. A new scenario for semi-supervised segmentation in practical usage.
    2. Claims are well supported by extensive experiments.
    3. Good performance.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The authors propose a new semi-supervised setting and discuss its clinical value. However, the clinical value needs to be elaborated further to be more convincing.
    2. The contribution claims could be improved by separating the statements from the discussion/explanation.
    3. Regarding data heterogeneity, it would be better to discuss some label-shift-related papers and then state the differences from them.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Good reproducibility

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. Add more evidence or examples to support the clinical value of the proposed setting.
    2. Improve the contribution claims to make them clearer.
    3. How does the performance change with the quantity of unlabeled data? Please show more results on this.
    4. It may be more suitable to address other targets, such as tumors or lesions, in this setting. Future work should include experiments on other objects.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Considering its clear organization and good performance, I think this paper is above the acceptance bar of MICCAI.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This work has received positive scores, with reviewers acknowledging the practical usefulness of the proposed scenario, the superior performance of the proposed model, and the extensive experiments that support the claims. Reviewers have also raised several comments, some of which could be considered by the authors in the camera-ready version. Among others, these concerns include a stronger clinical motivation/value, additional segmentation metrics, and reporting of computational cost. Following the comments from the reviewers, I strongly suggest that the authors release their code, as well as their trained model, for the sake of reproducibility.




Author Feedback

Thanks for the support and the constructive comments. We will carefully consider them and improve our camera-ready version.


