
Authors

Udaranga Wickramasinghe, Patrick Jensen, Mian Shah, Jiancheng Yang, Pascal Fua

Abstract

There are many approaches to weakly-supervised training of networks to segment 2D images. By contrast, existing approaches to segmenting volumetric images rely on full-supervision of a subset of 2D slices of the 3D volume. We propose an approach to volume segmentation that is truly weakly-supervised in the sense that we only need to provide a sparse set of 3D points on the surface of target objects instead of detailed 2D masks. We use the 3D points to deform a 3D template so that it roughly matches the target object outlines and we introduce an architecture that exploits the supervision it provides to train a network to find accurate boundaries. We evaluate our approach on Computed Tomography (CT), Magnetic Resonance Imagery (MRI) and Electron Microscopy (EM) image datasets and show that it substantially reduces the required amount of effort.

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16443-9_41

SharedIt: https://rdcu.be/cVRyV

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The authors propose a weakly supervised segmentation method that combines user inputs, a deformable template governed by an active surface model (ASM), and a 3D neural network to segment structures in multiple imaging modalities. The user obtains an initial estimate of the segmentation by initialising the ASM via clicks, and obtains refined predictions by supplying more clicks. The neural network completes these updated shapes in response to the clicks. The method is interesting, and the experiments highlight its advantages.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Interactive approaches are undoubtedly the correct tool to get large amounts of data annotated. The paper experiments on multiple images from multiple modalities. Results seem good, even though they are limited in number. The method is relatively quick considering it works in 3D.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    I personally do not understand the purpose of the image reconstruction component. Certainly, forcing the network to reconstruct the original image from the segmentation mask may help learn some of the network parameters. But I was expecting this feature to be used to create some sort of feedback loop. Using the MSE loss for reconstruction of image content is also very risky in this context: a perfect reconstruction is only possible as a result of overfitting, because the model won’t really learn the distribution of plausible images but will just learn to associate segmentations with images. This can at best generate some contours with plausible greyscale levels near the boundaries of the object but blurry details elsewhere. Since the authors do not seem to use this in a feedback loop, I am still puzzled about why it is part of the training.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper is reproducible as authors include the code (which I haven’t personally tested).

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    The idea is valuable, as the use of an ASM is probably a good idea in this context. I would suggest developing an algorithm that can take both positive (“this region should be part of the foreground”) and negative (“this region should not be part of the foreground”) feedback from the user, for example using left and right clicks.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper is good but I am not convinced that the image reconstruction step is the right thing to do here.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #2

  • Please describe the contribution of the paper

    The paper proposed a novel approach for weakly-supervised volumetric image segmentation, which requires only a few 3D points as training input. The main supervisions are obtained by comparing the segmentation results with a deformable template, and comparing a reconstructed image with the original image.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper is well written. The idea of using deformable templates and self-supervision together to address the weakly supervised segmentation is quite novel.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Although effective in terms of labor cost, the performance seems not to be very outstanding compared to the baseline approach.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Codes are promised to be released.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    Weakly supervised volumetric image segmentation is very useful in practice. Although the performance is not much better than the baseline, the novelty of the proposed method makes it a good attempt to address this challenging problem.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper proposes a complete and feasible pipeline, with considerable technical novelty, for the important problem of weakly supervised volumetric image segmentation.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #3

  • Please describe the contribution of the paper

    A method of training networks to segment volumetric images using only minimal annotations – a sparse set of points.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    An interesting idea, proposing a method requiring only sparse point annotations to supervise the training.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    • Doesn’t mention how many images are in each dataset, nor how the ground truth was obtained.
    • Experimental results are not explained properly.
    • Section 4.3 “Full supervision” does not seem to be clearly defined. Do you mean you train a U-Net with fully annotated volumes? If so, how many were used for each set?
    • How exactly are the numbers in Table 1 achieved? Was a pair of networks trained on weakly supervised data, then tested on a different set of test images? If so, how many were in each set?
    • Is it the case that the U-Nets are only trained once? The user performs point annotations until the template is satisfactory, then the U-Nets are trained? Or is it that the U-Net is trained as the user goes along, so that they can assess the performance and add points to poorly annotated regions? If the former, how will the user know that they have placed enough points?
    • Is the approach to train a pair of networks for each image to be segmented, or does one train a pair of networks for a set of images (each with its own sparse annotations)?

  • Please rate the clarity and organization of this paper

    Poor

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The core approach is described fairly clearly and wouldn’t be hard to reproduce, though the exact setup is not described (see notes above) and the experiments are not explained clearly. It is claimed that code will be made available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    The paper should be revised to make it clear exactly what the setup is and how it is evaluated. In particular, is this a method applied to a single image, or is one training a model from a set of images? I suspect this is considered so obvious to the authors that they forgot to mention it in the text (unless I’ve missed it).

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This is an interesting idea, definitely worth exploring. Unfortunately the paper doesn’t explain the actual setup clearly – is this something done one image at a time, or is a model trained on a set of images? This is something that could easily be fixed by the authors.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    3

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    My main concerns were over the lack of a clear explanation of the details in the paper. The response has helped clarify some issues – assuming the paper is tweaked slightly to make those things clear, I’m happy for this to be accepted.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    All reviewers found the idea of using an ASM to initialize a mask as a way to gain quick annotations interesting and useful. It seems like the authors are on the right track with the general approach. Nonetheless, reviewers highlighted several issues that are worth addressing in a rebuttal:

    1. Can authors better explain the rationale behind using the MSE loss and assuage concerns of using it?
    2. How would this approach be used in practice, given that there is no feedback for the user on when to stop clicking? R3 brought up other experimental concerns along these lines.
    3. Can authors address concerns as to the significance of results compared to the baseline?

    Finally, unless I missed it, I could personally find few details on the datasets used. These may be found in the supplementary, but if so the authors should make a clear reference to the supplementary in the main text. R3 also pointed out some important details that seemed to be missing.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    5




Author Feedback

Dear (meta-)reviewers,

We thank the (meta-)reviewers (R1, R2, R3, MR) for their high-quality reviews. The primary concerns can be summarized as:

(R1/MR) Rationale behind the image reconstruction branch and use of the MSE loss. The deformed template used to compute the cross entropy (CE) loss can only be expected to provide a coarse depiction of the object, since the annotator only provides a sparse set of points. Therefore, it is not sufficient for accurate boundary detection (see the 2nd image from the left in Fig. 3).

To remedy this, we introduce an image reconstruction branch and evaluate an MSE reconstruction loss. In our experiments, we demonstrate that using a weighted sum of the CE loss and the reconstruction loss (see λ = 0, 0.0001 and 10000 in Fig. 3) helps us obtain improved segmentation at the boundary. The intuition behind this is: when the MSE loss is given too large a weight, the resulting segmentation features spurious regions that correspond to image boundaries but not necessarily those of the target object. When it is given too low a weight, the boundaries are those of the atlas. And when the two loss terms are properly balanced, which is the case for a large range of weights, the boundaries end up being the image boundaries that are close to those of the actual object, that is, those of the target object. This is depicted by Fig. 1 in the supplementary material.
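The weighted objective described above can be sketched as follows. This is only an illustrative sketch of the balancing idea, not the authors' implementation: the function names, the use of NumPy instead of a deep-learning framework, and the `lam` parameter (standing in for the λ swept in the paper) are all assumptions for exposition.

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-7):
    # Voxel-wise binary cross entropy between predicted probabilities
    # and the coarse mask derived from the deformed template.
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def mse(recon, image):
    # Mean squared error between the reconstructed and the original image.
    return float(np.mean((recon - image) ** 2))

def combined_loss(pred_mask, template_mask, recon, image, lam):
    # Weighted sum of the segmentation (CE) and reconstruction (MSE) terms.
    # lam = 0 reduces training to the coarse template alone; a very large
    # lam lets image reconstruction dominate and produces spurious regions.
    return cross_entropy(pred_mask, template_mask) + lam * mse(recon, image)
```

With `lam = 0` the loss equals the CE term alone, so the boundaries follow the atlas; intermediate values of `lam` trade template fidelity against image evidence, which is the regime the rebuttal argues is robust over a wide range of weights.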

(R2/MR) Significance of results compared to the baseline? If the contour length is small and/or the structure only extends across a few slices, Weak-Net does indeed provide only modest savings, as in the case of the Hippocampus in Table 2. However, in a more complex case such as that of the liver, the difference becomes much more significant and Weak-Net divides the effort by a factor of 6 or more compared to the baselines.

(MR) When to stop clicking? The method is fast enough to be interactive. Hence, as shown by the demo video in the supplementary material, the annotator can use our MITK interface to see how accurate the reconstructions are and decide when to stop.

(R3) Information about datasets? All three datasets are publicly available, but we unfortunately forgot to include the references in the paper. We will add them, along with a summary, in the supplementary material.

(R3) Full Supervision Experiments. We used all the annotations to obtain full-supervision results. The horizontal dashed line in Fig. 6 (right) denotes the resulting IoU.

(R3) Results in Table 1. The results in Table 1 summarize human annotation results obtained using our MITK plugin. We then performed slice-by-slice annotation on the training dataset such that the annotation effort was roughly similar to the annotation effort with our plugin. We then used the annotated datasets to train Weak-Net and U-Net models. We also include the results with full supervision.

(R3) Are networks trained once? The networks are trained only once. The user first annotates image volumes until satisfied and without exceeding the annotation budget (i.e. time). Then we use this data to train the networks.

(R3) Separate networks for each image or single network for the whole dataset? We train a single network for a given dataset. Once the annotator has used our MITK plugin, the annotation task is complete and the dataset is used to train Weak-Net. At inference time, we simply use the trained network.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    I found that the authors sufficiently addressed most concerns, particularly as to missing details. Given that the idea is interesting and experimental validation good, the biggest issues here are with clarity. I strongly urge authors to fully address these concerns in their final version of the main body and supplementary.

    Because this is an interesting idea that stands out from the pack, I am very happy to recommend accept here.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    3



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This work proposes a strategy for weakly supervised image segmentation that brings in concepts from active shape models. The reviewers considered the idea as novel and well validated. The rebuttal addresses the points raised during the first round of revisions in a satisfactory way.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    5



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This work proposes a valuable methodological contribution, for weakly Supervised Volumetric Image Segmentation, based on sparse point annotation. In the 1st round of reviews, the reviewers have highlighted the fact that the idea was novel and the results were reasonable.

    In their remarks, clarifications were required on some aspects of the methodology and the experiments. The authors seem to have provided sufficient and convincing explanations on their methodology in the rebuttal; Therefore I recommend acceptance for this paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    2


