Authors
Xiangyu Zhao, Zhenrong Shen, Dongdong Chen, Sheng Wang, Zixu Zhuang, Qian Wang, Lichi Zhang
Abstract
Brain segmentation of patients with severe traumatic brain injuries (sTBI) is essential for clinical treatment, but fully-supervised segmentation is limited by the lack of annotated data. One-shot segmentation based on learned transformations (OSSLT) has emerged as a powerful tool to overcome the limitation of insufficient training samples: spatial and appearance transformations are learned to perform data augmentation, and segmentation is learned from the augmented images. However, current practices face challenges from the limited diversity of augmented samples and the potential label error introduced by the learned transformations. In this paper, we propose a novel one-shot traumatic brain segmentation method that overcomes these limitations through adversarial training and uncertainty rectification. The proposed method challenges the segmentation with adversarial disturbance of the augmented samples, improving both the diversity of the augmented data and the robustness of the segmentation. Furthermore, the potential label error introduced by the learned transformations is rectified according to the uncertainty of the segmentation. We validate the proposed method on the one-shot segmentation of consciousness-related brain regions in traumatic brain MR scans. Experimental results demonstrate that the proposed method outperforms state-of-the-art alternatives.
Link to paper
DOI: https://doi.org/10.1007/978-3-031-43901-8_12
SharedIt: https://rdcu.be/dnwCW
Link to the code repository
https://github.com/hsiangyuzhao/TBIOneShot
Link to the dataset(s)
N/A
Reviews
Review #3
- Please describe the contribution of the paper
The paper proposes a novel one-shot traumatic brain segmentation method. The method introduces adversarial training and uncertainty rectification to solve two issues of existing methods on one-shot segmentation based on learned transformations: limited diversity of augmented samples and potential label error introduced by learned transformations. Promising results on an in-house dataset for brain segmentation have been achieved.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- A novel one-shot brain segmentation method is proposed, which is useful for clinical applications.
- Promising results have been achieved.
- The manuscript is well written.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- It is not clear how the two losses work together (the adversarial loss L_adv and the segmentation loss L_sup).
- More ablation studies are needed on the design of L_rseg.
- More discussion of the varying segmentation performance across brain regions is expected. For example, the proposed method achieves much worse results on the TP region than BrainStorm does. What could be the reasons?
- Visualizations of label errors caused by appearance transformations and of the uncertainty in segmentation are necessary.
- The three comparison methods are a little bit outdated.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
In-house data are utilized. It is suggested to make the code publicly available if the paper is accepted for publication, to ensure reproducibility.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
- Illustrate how the networks are trained under the adversarial loss L_adv and the segmentation loss L_sup.
- Add more ablation studies.
- Provide visualizations of label errors caused by appearance transformations and of the uncertainty in segmentation.
- Discuss why the proposed method achieves better results in some regions and moderate performance in the others.
- Add some recently published works for comparison.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
- The topic is important. The manuscript is well written. The results are promising.
- Some details about the method and the results are lacking.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
The paper presents a novel one-shot learning method that is based on combined image registration and segmentation.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The method is novel to a certain extent.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
No statistical test is provided to support the claim that one method is better than another.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The network structure is quite complex, which makes it difficult to reimplement and to reproduce the results from scratch.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
- Please justify why KL divergence can be treated as the uncertainty in prediction (provide a reference).
- The author stated only 42 scans were labelled as the test set. How was the fully-supervised U-Net trained (upper bound) without the labels for the training set?
- Please perform a statistical test when claiming one method is better than another.
- VoxelMorph may not be the best choice as the registration backbone; there are newer and better methods recently developed.
- It would be desirable to see when the method can reach the fully-supervised performance by including more labelled images in training, not necessarily in the one-shot setting (semi-supervised learning in general).
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The method is novel and the results are convincing.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #1
- Please describe the contribution of the paper
This paper proposes a method for one-shot traumatic brain segmentation with adversarial training and uncertainty rectification to deal with label scarcity. It is an improvement upon existing one-shot medical image segmentation methods based on learned transformations (OSSLT) [3,16]. A spatial transformation is learned from an atlas image and a spatial reference image. Similarly, an appearance transformation is learned from the atlas and an appearance reference image. The learned spatial and appearance transformations are then applied to the atlas image to synthesize a new image. To increase the diversity of augmented samples, each pixel is assigned two learnable parameters, controlling the amount of spatial and appearance transformation, respectively. These learnable parameters are trained end-to-end with the main network in an adversarial procedure. Furthermore, uncertainty rectification is applied to reduce the impact of the potential label error introduced by learned transformations.
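The augmentation pipeline described in this summary can be sketched compactly. The following is a minimal sketch based on the reviewer's description, not the authors' implementation: the helpers `reg_net`, `app_net`, the parameters `alpha_sp` / `alpha_app`, and the exact composition order of the transformations are assumptions, and `warp` is a generic grid_sample-based resampler.

```python
import torch
import torch.nn.functional as F


def warp(vol, flow, mode="bilinear"):
    """Warp a volume (B, C, D, H, W) with a dense displacement field (B, 3, D, H, W)."""
    _, _, D, H, W = vol.shape
    # Identity sampling grid in voxel coordinates, shape (3, D, H, W).
    grid = torch.stack(torch.meshgrid(torch.arange(D), torch.arange(H),
                                      torch.arange(W), indexing="ij"), dim=0)
    coords = grid.unsqueeze(0).to(vol) + flow                 # displaced voxel coordinates
    # Normalize each axis to [-1, 1] as required by grid_sample.
    norm = [2.0 * coords[:, i] / max(s - 1, 1) - 1.0 for i, s in enumerate((D, H, W))]
    # grid_sample expects the last dimension ordered as (x, y, z), i.e. (W, H, D).
    sample_grid = torch.stack(norm[::-1], dim=-1)
    return F.grid_sample(vol, sample_grid, mode=mode, align_corners=True)


def augment_atlas(atlas_img, atlas_lbl, spatial_ref, appearance_ref,
                  reg_net, app_net, alpha_sp, alpha_app):
    """One OSSLT-style augmentation step (sketch, not the authors' code).

    reg_net / app_net: learned spatial and appearance transformation networks.
    alpha_sp / alpha_app: per-voxel adversarial parameters controlling how much
    of each learned transformation is applied.
    """
    flow = reg_net(atlas_img, spatial_ref)       # deformation field, (B, 3, D, H, W)
    delta = app_net(atlas_img, appearance_ref)   # additive intensity change, (B, 1, D, H, W)

    # Scale both transformations voxel-wise; sigmoid keeps the scale in (0, 1).
    scaled_flow = torch.sigmoid(alpha_sp) * flow
    scaled_delta = torch.sigmoid(alpha_app) * delta

    # Apply the appearance change, then warp image and label with the same field.
    aug_img = warp(atlas_img + scaled_delta, scaled_flow)
    aug_lbl = warp(atlas_lbl.float(), scaled_flow, mode="nearest")  # label assumed numeric
    return aug_img, aug_lbl
```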
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Since it is difficult to collect enough training samples from patients with severe traumatic brain injuries, one-shot (or few-shot) learning makes sense in this application scenario.
- The proposed method with adversarial training to increase the sample diversity is novel.
- The proposed method is compared with several state-of-the-art one-shot learning methods and shows superior performance.
- The manuscript is organized and presented clearly.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The proposed method is incremental work upon OSSLT. It has some novelty, but it is not a breakthrough.
- The novelty in uncertainty rectification is weak. First, it is inspired by [17]; second, there are many ways to measure uncertainty, e.g., the widely used dropout. A comparison experiment with alternative solutions would be appreciated.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The authors have not released the data and code yet. Even though the manuscript is presented well, there may be some difficulty in reproducing the work since the method itself is complicated.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
A comparison experiment with alternative solutions for uncertainty rectification is appreciated.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Please refer to the strengths and weaknesses outlined above. Overall, I think its strengths outweigh its weaknesses.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
The proposed method is novel in that it uses adversarial training to increase the sample diversity and uncertainty to correct potential label errors. Although some clarifications may be needed and some details of the methodology can be improved, this is an interesting and solid work.
Author Feedback
On behalf of all of the coauthors, I appreciate your effort in reviewing our manuscript and the valuable comments on our research.
Q1: There are many ways to measure the uncertainty, e.g., the widely used dropout. A comparison experiment with alternative solutions is appreciated (R1). A1: Thanks for the valuable advice. We adopt the KL divergence as the uncertainty measure to simplify uncertainty estimation, as MC-dropout requires multiple forward passes.
Q2: Please justify why KL divergence can be treated as the uncertainty in prediction (provide a reference) (R2). A2: This usage is inspired by the work of Zheng et al., "Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation."
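For context, in Zheng et al. the per-pixel uncertainty is the KL divergence between the outputs of a main classifier and an auxiliary classifier, obtained in a single forward pass. A minimal sketch of that computation (the two-head naming is an assumption about how the paper applies it):

```python
import torch
import torch.nn.functional as F


def kl_uncertainty(main_logits, aux_logits, eps=1e-8):
    """Per-voxel prediction uncertainty as KL(p_aux || p_main).

    Both inputs have shape (B, C, ...); the result has shape (B, ...).
    The two heads see the same input, so a large divergence between their
    predictions marks an unreliable (uncertain) voxel, and only a single
    forward pass is needed (unlike MC-dropout).
    """
    log_p_main = F.log_softmax(main_logits, dim=1)
    p_aux = F.softmax(aux_logits, dim=1)
    return (p_aux * (torch.log(p_aux + eps) - log_p_main)).sum(dim=1)
```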
Q3: The author stated only 42 scans were labeled as the test set. How was the fully-supervised U-Net trained (upper bound) without the labels for the training set (R2)? A3: The fully-supervised U-Net is trained with 5-fold cross-validation on the labeled scans to provide a reference for the potential performance upper bound.
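A minimal sketch of this evaluation protocol, where `train_unet` and `evaluate_dice` are hypothetical stand-ins for the actual training and evaluation routines:

```python
import numpy as np
from sklearn.model_selection import KFold


def cross_validated_upper_bound(scan_ids, train_unet, evaluate_dice, n_splits=5, seed=0):
    """5-fold cross-validation over the labeled scans (sketch)."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scan_ids = np.asarray(scan_ids)
    fold_scores = []
    for train_idx, test_idx in kf.split(scan_ids):
        model = train_unet(scan_ids[train_idx])                        # train on 4 folds
        fold_scores.append(evaluate_dice(model, scan_ids[test_idx]))   # test on the held-out fold
    return float(np.mean(fold_scores))  # averaged performance, used as the "upper bound"
```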
Q4: Please perform a statistical test when claiming one method is better than another (R2). A4: Thanks for the constructive comment. We will add a t-test on the average segmentation performance of each subject as an addition to the current performance report.
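For reference, such a test could be run as a paired t-test on per-subject scores, e.g. with SciPy; the score arrays below are placeholders, not reported results:

```python
import numpy as np
from scipy import stats

# Per-subject average Dice scores of two methods on the same test subjects
# (placeholder values for illustration only).
dice_ours = np.array([0.82, 0.79, 0.85, 0.81, 0.78])
dice_baseline = np.array([0.80, 0.76, 0.83, 0.80, 0.77])

t_stat, p_value = stats.ttest_rel(dice_ours, dice_baseline)
print(f"paired t-test: t={t_stat:.3f}, p={p_value:.4f}")
```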
Q5: VoxelMorph may not be the best choice as the registration backbone; there are newer and better methods recently developed (R2). A5: Thanks for the valuable advice. We will improve this in future work.
Q6: Illustrate how the networks are trained under the adversarial loss L_adv and the segmentation loss L_sup (R3). A6: The adversarial network and the segmentation network are trained alternately. A linear combination of both losses (at a 1:1 ratio) is minimized during segmentation training.
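A minimal sketch of this alternating scheme, with placeholder names (`augment`, `seg_loss`, `adv_params`, ...) and the 1:1 weighting taken from the answer; the exact definition of each loss in the paper may differ:

```python
def alternating_step(batch, segmenter, augment, seg_loss, adv_params,
                     adv_opt, seg_opt):
    """One alternating training step (sketch; all names are placeholders).

    augment(batch, params) is assumed to return an (image, label) pair built
    from the atlas with the learned transformations, optionally perturbed by
    the adversarial parameters `params` (None = no adversarial perturbation).
    """
    # (1) Adversarial step: update the perturbation parameters so the perturbed
    #     augmentation becomes harder for the current segmenter, i.e. ascend
    #     the segmentation loss.
    adv_opt.zero_grad()
    img_adv, lbl_adv = augment(batch, adv_params)
    (-seg_loss(segmenter(img_adv), lbl_adv)).backward()
    adv_opt.step()

    # (2) Segmentation step: minimize the 1:1 combination of the supervised
    #     loss L_sup (plain augmentation) and L_adv (perturbed augmentation).
    seg_opt.zero_grad()
    img_sup, lbl_sup = augment(batch, None)
    img_adv, lbl_adv = augment(batch, adv_params)
    loss = seg_loss(segmenter(img_sup), lbl_sup) + \
           1.0 * seg_loss(segmenter(img_adv), lbl_adv)
    loss.backward()
    seg_opt.step()
    return loss.detach()
```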
Q7: More ablation studies are needed on the design of L_rseg. Add more ablation studies (R3). A7: L_rseg is the combination of the uncertainty-rectified segmentation loss and a KL-divergence regularization term. It can be shown that without the KL-divergence regularization, the model learns a degenerate solution in which every voxel is predicted as uncertain.
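A minimal sketch of such a rectified loss in the style of Zheng et al. (the exact form of L_rseg in the paper may differ): the per-voxel loss is down-weighted where the uncertainty is high, and the uncertainty is added back as a regularizer so the trivial "everything is uncertain" solution is penalized.

```python
import torch
import torch.nn.functional as F


def rectified_seg_loss(main_logits, target, uncertainty):
    """Uncertainty-rectified segmentation loss (sketch, not the paper's exact L_rseg).

    `uncertainty` is the per-voxel KL-divergence map, e.g. produced by the
    kl_uncertainty helper sketched after Q2 above.
    """
    ce = F.cross_entropy(main_logits, target, reduction="none")  # per-voxel cross-entropy
    # Down-weight the loss where the prediction is uncertain, and add the
    # uncertainty back as a regularizer: without the "+ uncertainty" term the
    # model could zero the loss by declaring every voxel maximally uncertain.
    return (torch.exp(-uncertainty) * ce + uncertainty).mean()
```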
Q8: Provide visualizations of label errors caused by appearance transformations and of the uncertainty in segmentation (R3). A8: Thanks for the valuable comment. We will improve this in future works.
Q9: Discuss why the proposed method achieves better results in some regions and moderate performance in the others (R3). A9: Since some regions are relatively easier to segment, the proposed method will not bring about significant improvement in these regions.
Q10: Add some recently published works for comparison (R3). A10: Thanks for your inspiring advice. Comparisons with more recent alternatives will be included in future work.