
Authors

Zailiang Chen, Jiang Zhu, Hailan Shen, Hui Liu, Yajing Li, Rongchang Zhao, Feiyang Yu

Abstract

Placenta accreta spectrum (PAS) is a high-risk obstetric disorder associated with significant morbidity and mortality. Since the abnormal invasion usually occurs near the uteroplacental interface, there is a large geometry variation in the lesion bounding boxes, which considerably degrades the detection performance. In addition, due to the confounding visual representations of PAS, the diagnosis highly depends on the clinical experience of radiologists, which easily results in inaccurate bounding box annotations. In this paper, we propose a geometry adaptive network for robust PAS detection. Specifically, to deal with the geometric prior missing problem, we design a Geometry-adaptive Label Assignment (GA-LA) strategy and a Geometry-adaptive RoI Fusion (GA-RF) module. The GA-LA strategy dynamically selects positive PAS candidates (RoIs) for each lesion according to its shape information. The GA-RF module aggregates the multi-scale RoI features based on the geometry distribution of proposals. Moreover, we develop a Lesion-aware Detection Head (LA-Head) to leverage high-quality predictions to iteratively refine inaccurate annotations with a novel multiple instance learning paradigm. Experimental results under both clean and noisy labels indicate that our method achieves state-of-the-art performance and demonstrate promising assistance for PAS diagnosis in clinical applications.

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43990-2_5

SharedIt: https://rdcu.be/dnwLf

Link to the code repository

https://github.com/Meteorary/GALA

Link to the dataset(s)

N/A


Reviews

Review #5

  • Please describe the contribution of the paper

    In this paper, a geometry-adaptive (GA) object-detection-based method for placenta accreta spectrum (PAS) detection from MR images is presented. The method uses a GA Label Assignment (GA-LA) strategy to adaptively choose the IoU threshold for every object proposal during network training. GA RoI Fusion (GA-RF) aggregates multi-scale representations of proposals, and finally a Lesion-aware Detection Head (LA-Head) improves annotations via multiple instance learning.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The new approach of using MIL is robust to noisy labels. The introduction of geometry-adaptive components improves the prediction performance of the network. The geometry priors are introduced in a non-complex way and provide a high performance gain. It is interesting how additional knowledge about the geometry of labels can enhance the network's capabilities.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    If I understood correctly, all images in the training set contain PAS lesions. In this case, the network expects to always have at least one bounding box to detect. This contrasts with the clinical situation, where most slices would not contain lesions. It is unclear how the method would behave on images without lesions. There are many spelling mistakes in the equations, which make the method harder to understand. The train-test regime is unclear: how was the dataset divided into training, validation, and testing sets?

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The Authors use an in-house dataset and plan to publish the code upon acceptance.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    There are many spelling mistakes in the equations (e.g., Eq. 3 is missing p, Eq. 7 is missing a right parenthesis, and 1(.) below Eq. 10 should contain ‘x’ instead of ‘.’). Please proofread all of the equations. The \varphi (φ) symbol in Eq. 6 is used only in this place. What is its meaning? Is it the instance selector \phi (ϕ)? Please explain in more detail the reasoning behind Eq. 5 (the weighting factor). Due to inter- and intra-observer variability, the original annotations also contain some noise. Why was LA-Head not used in the experiments and ablation in the “clean data” case? Would using LA-Head on clean data degrade the performance of the network? It is stated that the noisy labels were created by shifting and scaling ground-truth boxes. It is unclear whether this was done only on the training samples or also on the testing samples. How was the dataset divided into train-validation-test sets? Was some form of cross-validation used?

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper presents an interesting method for PAS lesion detection based on geometry-adaptive components and multi-instance learning. It achieves competitive performance in comparison to SOTA models. Unfortunately, there are many mistakes in the equations, which make the paper hard to read and understand.

  • Reviewer confidence

    Somewhat confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    The paper presents a method that is valuable for the MICCAI community, and the authors have responded to my concerns; therefore, I have decided to increase my initial score. However, the authors are encouraged to carefully proofread all equations to avoid errors in the camera-ready version.



Review #7

  • Please describe the contribution of the paper

    In this work, the authors developed a robust detection method for placenta accreta in fetal MRI. The method is an extension of Faster R-CNN with three modules: Geometry-adaptive Label Assignment, Geometry-adaptive RoI Fusion, and multiple instance learning-based refinement. Results on a private dataset showed the effectiveness of the method.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1) The method design with geometry awareness is based on the property of the PAS, so that it is well motivated. 2) Three novel modules are introduced for the detection task: Label assignment based on adaptive threshold, geometry prior information-based ROI fusion and MIL for refinement. The modules helped to improve the detection performance. 3) The method was compared with several detection methods and showed better performance. The ablation study is also informative.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) For dealing with noisy annotations, the proposed method was not compared with existing methods, and the authors did not review relevant work on learning detection models from noisy annotations. 2) The method was validated only on a private dataset. 3) It is not clear whether the LA-Head was trained end-to-end with the other parts of the pipeline or trained independently. What is the detector in Fig. 1(c)? It seems to be a module different from the detectors used in the FPN and RPN.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The method is quite complex, with several modules, which may hinder reproducibility. However, the authors promised to release the code.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    1) It would be good to provide some references for learning from noisy labels, especially for object detection. The proposed method could be compared with some existing noisy-label learning methods. 2) The aspect ratio in Section 2.2 seems to be a hyper-parameter. It would be better to introduce a symbol for it and discuss how it affects the performance of the method. 3) Fetal MRI scans are 3D, but this work only considers slice-level detection. Why not use 3D detection? Inconsistent results may be obtained between neighboring slices; some discussion could be added.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper has a novel method with sufficient validation, with some weaknesses on literature review and discussion.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #8

  • Please describe the contribution of the paper

    The paper presents a geometry-adaptive network for robust Placenta accreta spectrum (PAS) detection. The contributions are three-fold: (1) A Lesion-aware Detection Head (LA-Head) is designed, which employs a new multiple instance learning approach to improve the robustness to inaccurate annotations. (2) A flexible Geometry-adaptive Label Assignment (GA-LA) strategy is proposed to select positive PAS candidates according to the shape of lesions. (3) A statistic-based Geometry-adaptive RoI Fusion (GA-RF) module is developed for aggregating multi-scale features based on the geometry distribution of proposals.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • novelty in method
    • SOTA performance
    • extensive ablation studies to support design choices
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • could not be verified on a public dataset (due to a lack of open-source datasets); doing so, or making the used dataset available, would have made replication of the results easier
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The dataset is not available but the authors have mentioned that they will make the source code public.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    • typo in Figure 1c, “detecor” -> “detector”
    • 110 MR volumes gave 312 2D slices - does that mean that PAS was present on only ~3 slices per volume on average?
    • Please include the experience level of radiologists who did the manual annotations
    • I would suggest the authors to change the terms “clean data” and “noisy data” to “data with clean labels” and “data with noisy labels” respectively, just to make it clear that the data itself is never noisy, the label is
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    As stated before.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    In the paper the authors present a geometry-adaptive network to detect PAS in MRI. The paper received two accepts and a reject. I invite the authors to address the concerns from all reviewers in the rebuttal phase.




Author Feedback

We thank the reviewers for their valuable comments and address the raised issues below.

R1

1. Performance on images without lesions. 1) Our model filters candidate boxes by an IoU threshold and NMS. Given an image without lesions, the confidence scores of all candidates will fall below the threshold, resulting in no predicted boxes. We have tested our model on such images and obtained robust results. 2) The proposed method focuses on identifying and localizing PAS lesions, so images without lesions are not included. In future work, we will consider incorporating these slices into training to investigate whether they can further enhance the robustness and accuracy of the model.

2. Spelling mistakes. We will proofread all equations and make changes as suggested. In Eq. 6, φ should be corrected to ϕ, denoting the instance selector.

3. Experimental setting. The dataset is randomly divided into training (60%), validation (20%), and test (20%) sets at the patient level. 5-fold cross-validation is performed.

4. Detailed explanation of Eq. 5. δs is a weight function that assigns coefficients to the predicted box and the GT box. We expect δs to satisfy three conditions: 1) a higher weight should be assigned to the predicted box when its confidence score is larger; 2) the impact of low-quality instances during the early stage of training should be mitigated; 3) to ensure that the prior knowledge from the GT is utilized rather than relying entirely on the predictions, we set an upper bound on the weight of the predicted box.

5. LA-Head under clean data. LA-Head is designed to utilize high-quality predictions to refine inaccurate annotations, so it is not included in the experiments under clean data. Thanks for reminding us that the original annotations also contain noise due to inter- and intra-observer variability. Theoretically, LA-Head can exploit the information between lesion instances when trained on clean data, thus strengthening the discriminative capability of the model.

6. Noisy labels. Only the labels of the training samples are perturbed, aiming to improve the robustness of the model. The labels of the testing samples are used for evaluation, without shifting and scaling.

R2

1. Detectors dealing with noisy annotations. Existing detection methods that learn from noisy labels are designed for natural images, employing loss functions, label smoothing, and so on. In contrast to previous works, we emphasize learning with inaccurate bounding boxes on MR images, and the model can be trained without clean data.

2. (R2, R3) Public datasets. To the best of our knowledge, this is the first work to detect PAS on MR images. Our method is specifically designed for the characteristics of PAS. Currently, there are no public PAS datasets, so our experiments are conducted only on a private dataset. We will consider making the dataset public in the future.

3. End-to-end training. The detector in Fig. 1(c) is the baseline incorporating GA-LA and GA-RF. The initial predictions generated by this detector are then used to refine the noisy annotations via MIL. The whole network is trained end-to-end based on Eq. 6.

4. Hyper-parameter. The aspect ratio in Section 2.2 is a hyper-parameter. Its value is set based on the data distribution and experiments.

5. 3D images. We use a 2D approach because PAS exists in only a few slices. 2D images are easier to interpret, which aids annotation. This slice-level study serves as a foundation; in a subsequent study, we plan to extend the method to a 3D implementation.

R3

1. Public datasets. Please refer to R2#2.

2. Number of slices. The reviewer is correct. The radiologists selected the slices with PAS from 110 scans, resulting in a total of 312 slices.

3. Radiologists' experience. The manual annotations were performed by two experts specializing in medical imaging and PAS diagnosis, with 20 and 14 years of clinical experience, respectively. They adhered to annotation guidelines to ensure accuracy.

4. Terms. Thanks for your valuable comment. We will change the terms as suggested.
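The rebuttal clarifies that noisy labels were simulated by shifting and scaling ground-truth boxes, on training samples only. A minimal sketch of one way such a perturbation could be implemented follows; the `(x1, y1, x2, y2)` box format and the `shift_ratio`/`scale_ratio` noise magnitudes are assumptions, since the paper does not specify them.

```python
import random

def perturb_box(box, shift_ratio=0.1, scale_ratio=0.1, rng=None):
    """Simulate an inaccurate annotation by randomly shifting and scaling
    a ground-truth box given as (x1, y1, x2, y2).

    shift_ratio bounds the center shift relative to box width/height;
    scale_ratio bounds the relative change in width/height. Both are
    hypothetical parameters for illustration.
    """
    rng = rng or random.Random()
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    # Random shift of the box center, proportional to box size.
    dx = rng.uniform(-shift_ratio, shift_ratio) * w
    dy = rng.uniform(-shift_ratio, shift_ratio) * h
    # Random rescaling of width and height.
    sw = 1.0 + rng.uniform(-scale_ratio, scale_ratio)
    sh = 1.0 + rng.uniform(-scale_ratio, scale_ratio)
    cx, cy = x1 + w / 2 + dx, y1 + h / 2 + dy
    nw, nh = w * sw, h * sh
    return (cx - nw / 2, cy - nh / 2, cx + nw / 2, cy + nh / 2)
```

Applying this to every training annotation (with a fixed seed for reproducibility) while leaving the test annotations untouched matches the evaluation protocol described in the rebuttal.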




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors provided a nice rebuttal that successfully cleared all concerns from R1, who gave the only reject for this paper. I am now happy to see that all reviewers consistently recommended accepting the paper in the post-rebuttal phase. I looked at the paper and the rebuttal, and I recommend the paper for acceptance as well.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    I think the authors have addressed the major concerns well, and I recommend acceptance.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    In this work, the authors developed a robust detection method for placenta accreta in fetal MRI. The method is an extension of Faster R-CNN with three modules: Geometry-adaptive Label Assignment, Geometry-adaptive RoI Fusion, and multiple instance learning-based refinement. The results seem reasonable. The main strengths of the paper are that the geometry-aware design is motivated by the properties of PAS, that novel modules are introduced for the detection task, and that SOTA performance is achieved. Although some weaknesses exist, most of them were addressed by the authors; it is an interesting paper whose merits slightly outweigh its weaknesses.


