
Authors

Rebeca Vétil, Clément Abi-Nader, Alexandre Bône, Marie-Pierre Vullierme, Marc-Michel Rohé, Pietro Gori, Isabelle Bloch

Abstract

We propose a scalable and data-driven approach to learn shape distributions from large databases of healthy organs. To do so, volumetric segmentation masks are embedded into a common probabilistic shape space that is learned with a variational auto-encoding network. The resulting latent shape representations are leveraged to derive zero-shot and few-shot methods for abnormal shape detection. The proposed distribution learning approach is illustrated on a large database of 1200 healthy pancreas shapes. Downstream qualitative and quantitative experiments are conducted on a separate test set of 224 pancreases from patients with mixed conditions. The abnormal pancreas detection AUC reached up to 65.41% in the zero-shot configuration, and 78.97% in the few-shot configuration with as few as 15 abnormal examples, outperforming a baseline approach based on the sole volume.
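
As a rough illustration of how the two detection regimes in the abstract could be implemented, here is a minimal PyTorch-style sketch (not the authors' released code): the zero-shot score is taken as the negative ELBO of a mask under the healthy-shape VAE, and the few-shot variant fits a simple linear classifier on latent means. The `vae.encode`/`vae.decode` interface, the scoring rule and the classifier choice are assumptions made for illustration.

```python
# Sketch only: zero-shot and few-shot anomaly scoring from a VAE trained on
# healthy 3D segmentation masks. `vae` is assumed to expose encode()/decode().
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def zero_shot_scores(vae, masks):
    """Score each binary mask (float tensor in {0,1}) by its negative ELBO:
    shapes poorly explained by the healthy model get high scores."""
    mu, logvar = vae.encode(masks)                      # latent Gaussian parameters
    recon = vae.decode(mu)                              # reconstructed mask probabilities
    rec = torch.nn.functional.binary_cross_entropy(
        recon, masks, reduction="none").flatten(1).sum(1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).flatten(1).sum(1)
    return (rec + kl).cpu().numpy()

@torch.no_grad()
def few_shot_classifier(vae, healthy_masks, abnormal_masks):
    """Fit a linear classifier on latent means using a handful of labelled
    abnormal shapes (e.g. the 15 examples of the paper's few-shot setting)."""
    z_h, _ = vae.encode(healthy_masks)
    z_a, _ = vae.encode(abnormal_masks)
    X = torch.cat([z_h, z_a]).cpu().numpy()
    y = [0] * len(z_h) + [1] * len(z_a)
    return LogisticRegression(max_iter=1000).fit(X, y)
```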

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16434-7_45

SharedIt: https://rdcu.be/cVRsd

Link to the code repository

https://github.com/rebeca-vetil/HealthyShapeVAE

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

This paper proposes a scalable, data-driven approach to learning shape distributions from large databases of healthy organs, yielding normative shape models that are used for anomaly detection.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

Unlike previous methods, this paper trains a variational autoencoder on binary segmentation masks with data augmentation, which reduces the risk of overfitting and makes the model focus on the anatomy of the organ.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

The experimental comparison includes only a single baseline, which is somewhat limited.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

The specific details of the model are explained in the supplementary material, so I do not think reproducing the paper would be difficult. If the authors can provide code, it will further help readers understand the details of the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

I suggest the authors explain the contributions of this paper more clearly.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

The paper does not have significant novelty, but it makes some contribution to the field. Furthermore, the clarity and organization of the paper are reasonable.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

The authors' response does not address my concerns about the novelty of this paper, and the most recent comparison method added in the rebuttal was published in 2018. However, there is no denying that the paper is well written and organized. Therefore, I decided not to change my score.



Review #3

  • Please describe the contribution of the paper

They propose a scalable and data-driven approach to learn shape distributions from large databases of healthy organs.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

The problem addressed in this paper is quite interesting.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

The experimental section lacks comparison methods: the authors only discuss the parameters and ablation studies of their own method. If possible, the authors should add more discussion of related work.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

It is good.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

Add more discussion of related work.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

The topic is quite interesting; however, the results are not very solid.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Somewhat Confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #4

  • Please describe the contribution of the paper

The authors aim to learn a shape atlas using a U-Net architecture, with the ultimate goal of obtaining a reference atlas of possible normal shapes. The approach is evaluated on a dataset of approx. 2600 images of healthy pancreases. The obtained shape model is then evaluated on images of pathological and healthy pancreases, which are classified using a zero-shot/few-shot learning approach.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

The overall strength of the paper is, from my perspective, that it focuses more on the idea than on the technical details. This makes the paper really interesting and might boost the discussion it raises. Its benefits are therefore:

    1. The intensive and detailed evaluation, which considers different aspects of the techniques and reports numerous experiments.
    2. An interesting technical approach that clearly stands out from the typical “Segmentation or image classification” tasks.
    3. An (for MICCAI papers) extensive discussion of the work.
    4. The clear and good organization and writing style of the paper.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The technical novelty is limited.
    2. The technical descriptions are limited.
  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The technical details are comparatively sparse in the paper. However, given that the underlying technique is well known, that the authors rely on public software (nnU-Net), and that the main steps are documented, I think the paper is still reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    1. I would have wished for a comparison with other shape encoding approaches, especially with shape models.
    2. Releasing the code would further improve the importance of the paper.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    8

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper clearly stands out from other papers that I usually see during MICCAI review or at MICCAI. Given the number of experiments and details reported, it rather feels like reading a journal paper. Another strong asset is originality: rather than proposing yet another deep-learning tweak, the authors suggest a relatively novel approach that might boost additional research. The main weakness is of course the limited reproducibility due to in-house data and code.

  • Number of papers in your stack

    6

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    8

  • [Post rebuttal] Please justify your decision

    After reading the comments of the other reviewers, the meta-reviewer, and the response of the authors, I still stay true to my original opinion. The presented paper is of high quality and offers a unique technical approach. In their response, the authors addressed the major points of criticism raised during the first review round: they compared against "traditional" techniques such as shape models, clarified some missing points, and promised to release their code. While these are just promises, as always, I see no reason to doubt the authors. Given the response, which addresses the major weaknesses of the original paper, I am more than convinced that the paper should be presented at MICCAI.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The paper proposes a VAE-based method to learn the distribution of shapes represented as segmentation masks, capturing the normative distribution, and applies this model to anomaly detection. The paper is well written and organized, and several experiments showcase the method's performance. Authors are required to address the following weaknesses in their rebuttal. The novelty of the work is not clear given that the method is based on the well-established VAE formulation, and VAEs for anomaly detection are not new. The use of the U-Net architecture with skip connections between the encoder and decoder is not typical for VAEs and results in a non-generative model; the justification of such a choice is not articulated, and the impact of such skip connections compared to regular encoder-decoder architectures is not discussed. The experiments lack comparative results with other state-of-the-art methods for anomaly detection; as authors cannot submit new results during the rebuttal, they are required to provide a discussion of related work on anomaly detection, including few- and zero-shot methods, emphasizing what is lacking in the state of the art that is addressed by the paper.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    4




Author Feedback

We thank the reviewers for their positive and valuable feedback. We address their concerns below.

The reviewers raised the need to clarify the contributions of our paper. The main novelty of our work lies in leveraging a VAE to learn a normative model of organ shape. To do so, we apply a VAE to a large database of 3D segmentation masks of healthy organs, coupled with a shape-preserving data augmentation strategy consisting of translations, rotations and scalings. We illustrate our method through an extensive set of experiments aiming to detect abnormal pancreas shapes, thus bridging the gap between Statistical Shape Modeling (SSM) and Anomaly Detection (AD). Shape atlases are learned with SSM approaches that fall into two families: correspondence-based and deformation-based methods. The former requires substantial preprocessing to identify landmarks, while the latter makes topological assumptions that are computationally heavy to implement. Our framework overcomes these limitations: it requires neither an initial registration step nor any prior hypothesis, while remaining computationally efficient. Regarding AD in the medical imaging literature, it has mainly been applied to detect brain lesions in MRI data, and most methods rely on variations of the AE or its variational counterparts (Baur et al., MedIA 2021). These approaches compress and reconstruct images of healthy subjects to capture a normative model of the brain. Thus, they entail the risk of extracting features related to the intensity distribution of a dataset that are not necessarily specific to the organ anatomy. This is shown by the need to introduce regularization constraints in the presented models to improve detection performance compared to the vanilla AE and VAE. To further reduce the overfitting risk, these methods artificially increase the dataset size by working on 2D slices extracted from 3D images. In our work, this overfitting risk is intrinsically limited as we apply a VAE to a large set of segmentation masks to build a latent space characterizing healthy organ shapes. Moreover, the data augmentation drives the extraction of shape features, which are subsequently used to perform AD (see the sketch below). Finally, compared to the majority of current AD methods, we fully leverage the available spatial information by working on 3D masks. We will clarify this point in the Introduction of the camera-ready version.
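
The shape-preserving augmentation described above (random translations, rotations and scalings of the 3D masks) could look like the following sketch; the parameter ranges and the use of `scipy.ndimage.affine_transform` are illustrative assumptions, not the paper's implementation.

```python
# Sketch only: random rigid + scaling augmentation of a 3D binary mask.
import numpy as np
from scipy.ndimage import affine_transform

def augment_mask(mask, max_shift=5, max_angle=10.0, scale_range=(0.9, 1.1), rng=None):
    rng = rng or np.random.default_rng()
    # Random rotation about the z-axis (could be extended to all three axes).
    theta = np.deg2rad(rng.uniform(-max_angle, max_angle))
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    scale = rng.uniform(*scale_range)
    matrix = rot * scale                       # affine_transform uses the inverse map
    # Keep the transform centred on the volume, then add a random translation.
    centre = (np.array(mask.shape) - 1) / 2.0
    offset = centre - matrix @ centre + rng.uniform(-max_shift, max_shift, size=3)
    out = affine_transform(mask.astype(np.float32), matrix, offset=offset, order=1)
    return (out > 0.5).astype(np.uint8)        # re-binarise the interpolated mask
```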

Another point raised by the reviewers is the lack of comparison with other SSM methods. We therefore compared our framework with two state-of-the-art methods: active shape models (ASM) (Cootes et al., CVIU 1995) and LDDMM using the Deformetrica software (Bône et al., MICCAI 2018). For ASM, we compute the signed distance map of each subject's 3D pancreas contour. For LDDMM, we estimate a Bayesian atlas parametrized by 576 control points. In both cases, we perform a PCA on the shape-encoding parameters to obtain a latent vector of dimension 1024 for each subject, and we compare them with our method using the same latent dimension of 1024. All methods are trained on 1200 samples. Zero-shot experiments result in AUC scores of 65.41 ±0.36, 54.60 ±0.36 and 58.42 ±0.36 for the VAE, LDDMM and ASM, respectively. Few-shot experiments with 12 training samples result in AUC scores of 66.02 ±0.02, 58.68 ±0.02 and 61.10 ±0.02 for the VAE, LDDMM and ASM, respectively. These results show that our model compares favorably with standard SSM methods in both configurations, and they will be added to the Supplementary Material.
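
For context, the PCA step shared by the two baselines could be sketched as below: per-subject shape descriptors (e.g., flattened signed distance maps for ASM, atlas parameters for LDDMM) are reduced to 1024 components, and a zero-shot AUC is computed from a reconstruction-error score. The descriptor choice and the scoring rule are assumptions made for illustration, not the rebuttal's exact protocol.

```python
# Sketch only: PCA baseline fitted on healthy shape descriptors, with a
# zero-shot anomaly score defined as the PCA reconstruction error.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score

def fit_pca_baseline(train_descriptors, n_components=1024):
    """train_descriptors: (n_healthy, d) array of per-subject shape features."""
    return PCA(n_components=n_components).fit(train_descriptors)

def zero_shot_auc(pca, test_descriptors, labels):
    """labels: 1 for abnormal, 0 for healthy; score = PCA reconstruction error."""
    recon = pca.inverse_transform(pca.transform(test_descriptors))
    scores = np.linalg.norm(test_descriptors - recon, axis=1)
    return roc_auc_score(labels, scores)
```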

Furthermore, the meta-reviewer mentions the presence of skip connections in our U-Net and the issue this raises within the VAE framework. We confirm that skip connections were removed from our U-Net to obtain a generative model (cf. Supplementary Material, Fig. A.1). We apologize for the lack of clarity on this point and commit to amending the manuscript accordingly.
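
To make the architectural point concrete, the sketch below shows a plain 3D encoder-decoder VAE in which the decoder sees only the latent code (no skip connections reusing encoder features), which is what keeps the model generative. The layer sizes are illustrative assumptions and do not reproduce the authors' network.

```python
# Sketch only: 3D encoder-decoder VAE without skip connections.
import torch
import torch.nn as nn

class ShapeVAE(nn.Module):
    def __init__(self, latent_dim=1024):
        super().__init__()
        self.encoder = nn.Sequential(                       # 64^3 mask -> 8^3 feature map
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(), nn.Flatten())
        self.fc_mu = nn.Linear(64 * 8 ** 3, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 ** 3, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 8 ** 3)
        self.decoder = nn.Sequential(                       # no encoder features are reused here
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def decode(self, z):
        return self.decoder(self.fc_dec(z).view(-1, 64, 8, 8, 8))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
        return self.decode(z), mu, logvar
```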

Finally, we will publicly release our code for the camera-ready version.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors have addressed some concerns raised by the reviewers. However, their response on the novelty aspect of the work is not convincing. The work's novelty is considered limited to merely applying VAEs to 3D masks and using the learned representations for anomaly detection. The use of a large-scale dataset has merit, and learning VAEs directly on binary masks mitigates the computational bottlenecks of deformation-based and correspondence-based approaches to shape modeling. However, this is not considered novel.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Reject

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    9



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors' rebuttal response was satisfactory. I recommend the paper be accepted.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    8



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    All reviewers recommended accepting this work. They all found the work very interesting and relevant for MICCAI.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    1


