Authors
Sukesh Adiga Vasudeva, Jose Dolz, Herve Lombaert
Abstract
Semi-supervised segmentation tackles the scarcity of annotations by leveraging unlabeled data alongside a small amount of labeled data. A prominent way to utilize the unlabeled data is consistency training, which commonly uses a teacher-student network in which a teacher guides the student's segmentation. Predictions on unlabeled data are not reliable; therefore, uncertainty-aware methods have been proposed to gradually learn from meaningful and reliable predictions. Uncertainty estimation, however, relies on multiple inferences from model predictions that need to be computed at each training step, which is computationally expensive. This work proposes a novel method to estimate pixel-level uncertainty by leveraging a labeling representation of the segmentation masks. A labeling representation is first learnt to represent the available segmentation masks. The learnt labeling representation is then used to map the prediction of the segmentation network onto a set of plausible masks. Such a reconstructed segmentation mask aids in estimating the pixel-level uncertainty that guides the segmentation network. The proposed method estimates the uncertainty with a single inference from the labeling representation, thereby reducing the total computation. We evaluate our method on the 3D segmentation of the left atrium in MRI and show that the uncertainty estimates from our labeling representation improve the segmentation accuracy over state-of-the-art methods. Code is released at https://github.com/adigasu/Labeling_Representations.
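To make the mechanism concrete, below is a minimal, hedged sketch of the core idea (not the authors' released implementation; names such as `dae` and `teacher_logits` are illustrative): a denoising autoencoder pre-trained on the available masks reconstructs a plausible mask from the current prediction, and the pixel-wise disagreement between the two serves as the uncertainty map, obtained with a single extra forward pass.

```python
import torch

@torch.no_grad()
def labeling_representation_uncertainty(teacher_logits, dae):
    """Pixel-level uncertainty from a labeling representation (single inference)."""
    prob = torch.softmax(teacher_logits, dim=1)          # teacher's soft prediction
    recon_prob = torch.softmax(dae(prob), dim=1)         # projection onto the learnt mask manifold
    # Deviation of the prediction from its plausible reconstruction, per pixel.
    uncertainty = (prob - recon_prob).abs().sum(dim=1, keepdim=True)
    return uncertainty, recon_prob
```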
Link to paper
DOI: https://link.springer.com/chapter/10.1007/978-3-031-16452-1_26
SharedIt: https://rdcu.be/cVRY8
Link to the code repository
https://github.com/adigasu/Labeling_Representations
Link to the dataset(s)
http://atriaseg2018.cardiacatlas.org/
Reviews
Review #1
- Please describe the contribution of the paper
This paper proposes an uncertainty-based method for semi-supervised segmentation. Based on a common teacher-student framework, it introduces a labeling representation module, a pre-trained denoising autoencoder (DAE), in order to estimate a “perfect” segmentation map from the current prediction. The uncertainty map is estimated from the pixel-wise difference between the teacher and DAE predictions and used to guide the training of the student model. Experiments show SOTA segmentation accuracy on the Left Atrium dataset.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- I like the idea of the DAE, which encodes a global shape prior of the segmentation masks. It helps predict a plausible segmentation mask from the teacher model's output and thus provides a more reliable reference for uncertainty estimation.
- The proposed method reduces the total computation since it needs only a single inference for the uncertainty estimate.
- The experimental comparisons with existing methods and ablation studies demonstrate the SOTA performance of the proposed method on the MRI dataset.
- This paper is very well-written and easy to follow.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Lack of comparison with existing methods on other datasets. Encoding priors of the segmentation masks with the DAE provides more guidance for uncertainty estimation. However, it might have drawbacks in terms of generalization ability. Since all the experiments were conducted on the Atrial Segmentation Challenge dataset, how does the method perform on other datasets, for example, the ISBI LiTS 2017 Challenge and NPC MRI datasets?
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The paper is easy to follow. Technical details are clearly described for reproducibility.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
- About the parameter selection: in the ablation study, the authors state that “we report on β=0.1 in all experiments for a fair comparison.” However, according to Table 4, the method achieves its best performance when β=1. Why use β=0.1 for all experiments instead of β=1? If β continues to increase, how does the performance change?
- Regarding the statement “the proposed uncertainty estimates are more robust than those derived from the entropy variance, requiring multiple inferences strategy”, it would be better to also compare the uncertainty maps.
- The low computational cost is claimed as one advantage of the proposed method. It would be better to also compare the training time required for different methods (not only the number K).
- In the ablation study, the paper compares against a threshold-strategy variant and an entropy-scheme variant. More explanation is desired: why is the proposed method better than the others?
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The idea of using a DAE to reconstruct a segmentation mask from the current, non-ideal prediction and estimating uncertainty directly from the pixel-wise difference between the reconstructed mask and the mask estimated by the student model is novel. This uncertainty estimation needs only one inference stage, thus reducing the computational cost. The experiments on the LA dataset demonstrate the superiority of the proposed method. However, more experiments and deeper analysis would make the paper more convincing.
- Number of papers in your stack
4
- What is the ranking of this paper in your review stack?
1
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Not Answered
- [Post rebuttal] Please justify your decision
Not Answered
Review #2
- Please describe the contribution of the paper
This paper proposes a labeling representation-based uncertainty estimation algorithm for semi-supervised segmentation. It obtains better performance than SOTAs on left atrium segmentation from 3D MR volumes in two different settings.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
This paper proposes a labeling representation-based uncertainty estimation algorithm for semi-supervised segmentation. It obtains better performance than SOTAs on left atrium segmentation from 3D MR volumes in two different settings.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Quite simple: the method is mainly an improvement of mean teacher. Why was it designed this way, and what are the main differences from mean teacher? Can you state the motivation more clearly?
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
yes
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
This paper proposes a labeling representation-based uncertainty estimation algorithm for semi-supervised segmentation. It obtains better performance than SOTAs on left atrium segmentation from 3D MR volumes in two different settings. The topic is appropriate for MICCAI, and the technical contribution is somewhat novel. Its contribution is moderately significant, and the coverage of the problem is sufficiently comprehensive and balanced. However, I have some minor questions below that the authors should address to improve this work:
- I strongly recommend that the authors release the source code along with the submission, since learning-based projects are typically open-sourced to facilitate a fair assessment of the performance of the proposed methods by the community.
- The method is mainly an improvement of mean teacher. Why was it designed this way, and what are the main differences from mean teacher? Can you state the motivation more clearly?
- A suggestion: a similar uncertainty-aware method, mutual teaching for GCNs, may be worth learning from.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
This paper proposes a labeling representation-based uncertainty estimation algorithm for semi-supervised segmentation. It obtains better performance than SOTAs on left atrium segmentation from 3D MR volumes in two different settings.
- Number of papers in your stack
2
- What is the ranking of this paper in your review stack?
4
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Not Answered
- [Post rebuttal] Please justify your decision
Not Answered
Review #3
- Please describe the contribution of the paper
The paper proposes a novel labeling representation-based uncertainty estimation method for semi-supervised segmentation, which requires only a single inference. Specifically, a pre-trained denoising autoencoder is used to map the predictions of the segmentation network into a set of plausible masks. The uncertainty is then calculated from the difference between the predicted mask and the reconstructed mask. The experiments show that the proposed network achieves state-of-the-art results.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper proposes a novel way to estimate pixel-wise uncertainty, which requires only a single model inference.
- The ablation studies demonstrate the effectiveness and robustness of the proposed uncertainty estimation method.
- The writing of the paper is clear.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The motivation is not strong. Several uncertainty estimation methods that require only a single inference have already been proposed. Besides, requiring an additional task to constrain the shape prior is not an obvious limitation if it does not require any additional images or annotations for training.
- The comparison with previous methods is not sufficient. Comparison results with several SOTA methods are missing; thus, the conclusion that this paper's method achieves the best results is not well supported.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
It can be implemented easily.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
Please see the answers to Q5.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
4
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Even though this is a well-presented work, its current motivation is not strong. I cannot see the necessity of “single inference”. Another major concern is the lack of experimental comparisons.
- Number of papers in your stack
3
- What is the ranking of this paper in your review stack?
1
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Not Answered
- [Post rebuttal] Please justify your decision
Not Answered
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
This paper proposes a method for uncertainty estimation in semi-supervised segmentation. The reviewers acknowledged the novelty of the proposed uncertainty estimation method; however, they also pointed out weaknesses in 1) poor motivation/justification of the proposed method and 2) a lack of comparison with several state-of-the-art methods. Please provide your feedback on these issues in the rebuttal.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
5
Author Feedback
We thank the AC and reviewers for their valuable feedback. They highlighted the novelty [R1, R2, R3, AC], effectiveness [R1, R3], and SOTA results [R1, R2] of our new approach to estimating uncertainty by leveraging labeling representations for semi-supervised segmentation. There are, however, a few concerns, addressed below, along with a few possible misinterpretations by the reviewers. The necessary changes will be incorporated into the manuscript.
Clarification of motivation [R3] and why it works [R1]. Recent baseline methods [30,26,21,24,27] resort to the entropy variance strategy [7] for uncertainty estimation, which requires multiple inferences. The literature reveals that single-inference uncertainty methods for semi-supervised segmentation are underexplored. Furthermore, we observed that conventional entropy-based methods produce evenly distributed regions of uncertainty along target boundaries, which do not always represent truly uncertain regions. In contrast, introducing our label-awareness of target regions produces a focused distribution of uncertainty in relevant salient areas of images (such as valves or uncertain boundaries). This is due to the deviation of the provided segmentation from the learnt labeling distributions.
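For context, a hedged sketch of the multi-inference entropy baseline the rebuttal contrasts against (an illustrative reconstruction of the UAMT-style strategy, not code from any cited paper): K stochastic forward passes with dropout active are averaged, and their predictive entropy is used as uncertainty, so every training step pays for K extra inferences.

```python
import torch

def mc_dropout_entropy(model, x, k=8):
    """Predictive entropy over k stochastic forward passes (multi-inference baseline)."""
    model.train()                                        # keep dropout layers active
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(k)])
    mean_prob = probs.mean(dim=0)
    # High entropy typically concentrates along ambiguous target boundaries.
    return -(mean_prob * torch.log(mean_prob + 1e-6)).sum(dim=1, keepdim=True)
```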
Lack of comparison [R3] and other datasets [R1]. To our knowledge, URPC (MICCAI'21) and DCT (AAAI'21) are among the most recent state-of-the-art methods, hence their inclusion in our empirical validation. As we resort to uncertainty, we also include the established UAMT approach. SASSnet enforces a shape constraint on the predictions and is therefore also included in our comparisons. Furthermore, our contribution focuses on a specific methodological aspect, i.e., labeling representations for uncertainty, so we compare with relevant SOTA methods rather than across multiple datasets. However, as a preview, our experiments on an abdominal dataset (FLARE 2021) have shown positive results (Dice, %): MT 70.60, DCT 68.07, UAMT 73.67, URPC 73.31, Ours 74.05. We see a trend similar to the LA dataset, which demonstrates the generalization of our method.
Clarification of the limitations of prior-based methods [R3]. We did not intend to claim that an additional task to constrain a shape prior is a limitation of previous methods. Indeed, the limitations that motivate our approach include multiple inferences (via MC-Dropout or ensembles), the requirement of aligned images, the use of multiple networks, and convergence difficulties. We apologize for this misunderstanding and will modify the text accordingly.
Differences from Mean Teacher (MT) [R2]. Many baselines [30,26,24] are built on top of MT and address the issue of unreliable predictions in the consistency loss using uncertainty estimation. Our approach proposes a novel way of leveraging the labeling representation to generate reliable target regions. Leveraging such reliable targets in the consistency loss yields improvements of up to 8% in Dice score and 5 mm in HD compared to MT in the 10% labeled-data experiments.
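A possible way the estimated uncertainty could enter the MT consistency loss is sketched below; the exact masking or weighting scheme in the paper may differ, so treat this only as an illustration of where the single-inference uncertainty replaces the multi-inference estimate in MT-based baselines.

```python
import torch

def uncertainty_guided_consistency(student_logits, teacher_logits, uncertainty, tau=0.5):
    """Masked MSE consistency: only pixels deemed reliable contribute to the loss."""
    student_prob = torch.softmax(student_logits, dim=1)
    teacher_prob = torch.softmax(teacher_logits, dim=1).detach()  # teacher is not updated by backprop
    mask = (uncertainty < tau).float()                            # keep low-uncertainty pixels
    per_pixel = ((student_prob - teacher_prob) ** 2).sum(dim=1, keepdim=True)
    return (mask * per_pixel).sum() / (mask.sum() + 1e-6)
```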
Robustness and comparison of the uncertainty maps [R1]. We want to clarify that the proposed method is more robust than entropy-based methods in terms of segmentation performance rather than uncertainty estimation. The uncertainty maps are mainly used to guide the student model and vary at each training step.
Choice of β [R1]. Following UAMT, we set β=0.1 for a fair comparison. On varying β, we found that the segmentation accuracy improved up to β=1. As we focus on the methodological benefit rather than seeking the best accuracy, we have not included the β=1 result in Tables 1-2.
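Assuming β plays the same role as in UAMT, i.e., it weights the unsupervised consistency term against the supervised loss, typically with a ramp-up over iterations, the overall objective could look like the following sketch; the schedule, step count, and names are illustrative, not taken from the repository.

```python
import math

def consistency_weight(beta, step, ramp_up_steps=40000):
    """Sigmoid-style ramp-up of the consistency weight, as in mean-teacher variants."""
    t = min(step, ramp_up_steps) / ramp_up_steps
    return beta * math.exp(-5.0 * (1.0 - t) ** 2)

# total_loss = supervised_loss + consistency_weight(0.1, step) * consistency_loss
```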
Training time [R1]. The training time (sec per iteration) for each model is as follows: SASSnet 2.05s, DCT 1.13s, UAMT 1.92s, URPC 1.33s and ours 1.004s. Our method is faster than these baselines while producing superior segmentation accuracy.
Code [R2]. We indeed plan to release the code upon completion of the review process.
Post-rebuttal Meta-Reviews
Meta-review # 1 (Primary)
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The rebuttal addressed well the concerns about motivation and the comparison with state-of-the-art methods.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
6
Meta-review #2
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The most important issues regarding motivation and comparison with SOTA methods have been well clarified in the rebuttal.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
2
Meta-review #3
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
In the rebuttal, the authors clarified the motivation and justification of the proposed method; they also explained their choice of the SOTA methods against which their method is compared. These were the most critical issues of the pre-rebuttal paper. Hence, I recommend acceptance of this paper.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
7