
Authors

Ahmed H. Shahin, Yan Zhuang, Noha El-Zehiry

Abstract

Accurate and safe catheter ablation procedures for atrial fibrillation require precise segmentation of cardiac structures in Intracardiac Echocardiography (ICE) imaging. Prior studies have proposed methods that employ 3D geometry information from the ICE transducer to create a sparse ICE volume by placing 2D frames in a 3D grid, enabling the training of 3D segmentation models. However, the resulting 3D masks from these models can be inaccurate and may lead to serious clinical complications due to the sparse sampling in ICE data, frame misalignment, and cardiac motion. To address this issue, we propose an interactive editing framework that allows users to edit segmentation output by drawing scribbles on a 2D frame. The user interaction is mapped to the 3D grid and utilized to execute an editing step that modifies the segmentation in the vicinity of the interaction while preserving the previous segmentation away from the interaction. Furthermore, our framework accommodates multiple edits to the segmentation output in a sequential manner without compromising previous edits. This paper presents a novel loss function and a novel evaluation metric specifically designed for editing. Cross-validation and testing results indicate that, in terms of segmentation quality and adherence to user input, our proposed loss function outperforms standard losses and training strategies. We demonstrate quantitatively and qualitatively that subsequent edits do not compromise previous edits when using our method, as opposed to standard segmentation losses. Our approach improves segmentation accuracy while avoiding undesired changes away from user interactions and without compromising the quality of previously edited regions, leading to better patient outcomes.

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43901-8_73

SharedIt: https://rdcu.be/dnwEq

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    An interactive segmentation editing method is proposed for 2d intracardiac echocardiography of 3d structures. A series of acquired 2d frames are assembled into a 3d grid, and a corresponding “initial” 3d segmentation is obtained using another method. The presented method aims to improve this initial segmentation using user interaction. To do so, the initial segmentation is mapped back to the 2d frames and the user can correct the segmentation in one of the 2d frames. The user scribbles are then mapped back to the 3d grid where the 3d image and the 3d initial segmentation are. Now, an “editing” neural network is trained to take as input (1) the 3d image, (2) the initial 3d segmentation and (3) the 3d scribbles, and outputs the edited 3d segmentation. The editing neural network is trained to (1) edit the segmentation in the vicinity of the marked scribbles, while (2) preserving the initial segmentation away from the scribbles. An evaluation metric is proposed which evaluates the edited segmentation.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The paper’s setup assembles non-uniformly acquired 2d images into a 3d grid, and analyzes the data in 3d. This is definitely better than approaches that treat each 2d image independently.

    2. I like the overall writing and organization of the paper.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. I am not fully convinced by the idea of not changing the segmentation away from user scribbles. I think a more desirable goal is to make the predicted segmentation similar to the ground truth segmentation near as well as farther away from the user scribbles.

    2. Limited technical novelty: the basic idea is to limit the influence of user edits to areas near the edits, while preserving the initial predictions away from the edits. Further, the evaluation metric is defined specifically to favor segmentations that are like initial segmentations away from the user edits, while not incorporating the accuracy with respect to true segmentations away from the user edits.
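    For concreteness, the region-split behavior described in point 2 could be sketched as follows. This is a minimal, hypothetical voxel-wise simplification (the paper itself uses distance-based contour metrics); `editing_metric`, its arguments, and the binary `edit_mask` are assumed names, not the paper's actual implementation.

```python
import numpy as np

def editing_metric(pred, y_true, y_init, edit_mask):
    """Hypothetical sketch of a region-split editing metric.

    Voxel-wise disagreement is measured against the ground truth inside
    the edited region (edit_mask == 1) and against the initial
    segmentation elsewhere, so unsolicited changes far from the edit are
    penalized even if they move toward the ground truth.
    All inputs are binary arrays of the same shape.
    """
    err_near = np.abs(pred - y_true) * edit_mask       # follow the user edit
    err_far = np.abs(pred - y_init) * (1 - edit_mask)  # preserve the rest
    return (err_near.sum() + err_far.sum()) / pred.size
```

    Under such a metric, a prediction that matches the ground truth everywhere can score worse than one that matches it only near the edit — which is precisely the property this weakness questions.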

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors have indicated that they may not make their code publicly available. The dataset used for validating the proposed ideas is also not publicly available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. Please clarify how the ground truth segmentation ‘y’ used in equation 2 is obtained. How is it related to the user interaction u?

    2. Please justify the necessity of the proposed evaluation metric. Currently, it is said that “In our scenario, where the ground truth is CAS contours, we use distance-based metrics. However, the standard utilization of these metrics computes the distance between the predicted and ground truth contours, which misleadingly incentivizes alignment with the ground truth contours in all regions. This approach incentivizes changes in the unedited regions, which is undesirable from a user perspective, as users want to see changes only in the vicinity of their edit.” In my opinion, it would be desirable if a model can use scribbles in one part of the 3d image to make the predicted segmentation similar to the ground truth segmentation even farther away from the edits. Please explain in which settings this would not be desirable.

    3. In addition to evaluating with the proposed metric, please also provide evaluations with more standard metrics (e.g. Dice, Surface distance) with respect to the ground truth segmentations over the entire volume.

    4. It is mentioned at the start of section 3.2 that “To obtain the initial imperfect segmentation y_init, a U-Net model is trained on the 3D meshes y using a Cross-Entropy (CE) loss.“ I think it would be helpful to clarify this in section 2.1. While reading the methods section, it was unclear to me if y_init is (i) the output of a 3d segmentation neural network trained according to [8] or (ii) a 3d segmentation obtained by deforming a template to match the CAS contours.

    5. I think it would also be useful to clarify that CAS contours in all 2d frames are not available for the subject for which the editing is being done. Please clarify and emphasize that the contours for all frames are used in the experiment setup only for simulating the user interactions in one 2d frame. This was another cause of confusion for me while reading the paper.

    6. It is said that “The region of maximum error is selected and a scribble is drawn on the boundary of the ground truth in that region to simulate the user interaction.” (i) How is the “region of maximum error” defined? (ii) Does the method require that the user’s scribble should be exactly on the boundary of the ground truth region? This would require more detailed human interaction than interactive segmentation methods that allow users to make scribbles anywhere within the region. Please clearly state this limitation.

    7. In section 3.3, for baselines (2) and (3), what are the CE and Dice losses computed with respect to?

    8. It is said that “Most of the editing literature treats editing as an interactive segmentation problem and does not provide a clear distinction between interactive segmentation and interactive editing. In this paper, we provide a novel method that is specifically designed for editing.“ Please clearly specify the differences between interactive segmentation and interactive segmentation editing.

    9. Please describe related work on interactive medical image segmentation. A rich body of literature exists on this topic.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    My main concern is that I do not agree with the evaluation criterion that favors preservation of the initial segmentation away from the user interaction over segmentation accuracy with respect to ground truth segmentation in the entire 3d image. Another concern is the limited technical novelty of the paper.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    I thank the authors for clarifying the distinction between interactive segmentation and interactive editing.

    However, I still remain unconvinced that interactive editing is a desirable goal in practice. If the “ping-pong” effect stated by the authors is a concern in interactive segmentation, I would say that this simply means that we need to develop better IS methods. I don’t think IE is the solution to this problem. Rather than IE, one might as well give the user the option to locally modify the segmentation completely manually.

    Having stated this point of view though, I am willing to improve my rating from 4 to 5. If the paper does get accepted, I will be curious to see how it is received in the community.

    If accepted, I urge the authors to at least include performance numbers with respect to ground truth in the entire 3D volume (as also pointed out by reviewer #2).



Review #2

  • Please describe the contribution of the paper

    This paper proposes a practical editing method for intracardiac echocardiography segmentation, which allows users to edit segmentation maps by drawing 2D scribbles. This approach yields more accurate segmentation results and avoids undesired changes outside user-edited regions.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The writing in this paper is of high quality and easy to understand.
    2. This paper addresses the issue of intracardiac echocardiography segmentation and surpasses current techniques, ultimately enhancing clinical workflow and improving patient outcomes.
    3. This paper introduces a unique loss function and evaluation metrics specifically designed for editing, with a focus on preserving initial segmentation while accommodating user input - an approach distinct from existing methodologies.
    4. The authors successfully demonstrate that their proposed method can effectively integrate user edits without compromising previous edits or altering unedited regions, ultimately better catering to user needs in practical applications.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The author attempted to make a distinction between interactive segmentation and interactive editing, but the difference between the two is not clearly stated in the paper. It seems interactive editing is a special case of interactive segmentation, where the algorithm should not modify the regions that are not edited by the user.
    2. In the evaluation, although I agree with using the proposed editing-specific distance as an evaluation metric, it would be good to also show the overall performance wrt ground-truth segmentation masks since obtaining accurate segmentation masks is the ultimate goal instead of accurate editing wrt CAS contours.
    3. The methodology proposed in this paper seems to be a simple modification from existing ones, which limits the technical novelty. For example, the proposed editing-specific loss is a weighted version of cross-entropy loss.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The code and data used in this paper are not public, which may limit reproducibility. However, the proposed method can be re-implemented without too much difficulty.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. A clear definition about interactive segmentation and interactive editing can be helpful for readers to understand the proposed method.
    2. It would be good to also show the overall performance wrt ground-truth segmentation masks.
    3. For the future work, the restriction on “preserve the initial segmentation in unedited regions” may be unnecessary, since an intelligent algorithm should be able to apply necessary modification to unedited regions based on existing human edits so that the amount of human efforts can be minimized.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper tackles a clinically important problem and proposes a novel method that suits the problem. I think it would be of interest to researchers in the community.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    4

  • [Post rebuttal] Please justify your decision

    After the rebuttal, I’m convinced about their motivation and settings. In general, I think this paper is clinically relevant and can be useful in real-world applications. Still, I agree with R1 that using standard evaluation metrics to demonstrate segmentation accuracy wrt the ground truth in the whole 3D images is necessary, which is not addressed in the rebuttal. In addition, the technical novelty of the paper is somewhat limited. Overall, I think the weaknesses slightly outweigh the merits, and I therefore recommend weak reject.



Review #3

  • Please describe the contribution of the paper

    The paper presents a novel interactive editing framework for Intracardiac Echocardiography (ICE) data segmentation, which aims to improve the accuracy and safety of catheter ablation procedures for patients with atrial fibrillation. The framework enables users to edit segmentation output by drawing scribbles on a 2D frame, incorporating user edits while preserving the initial segmentation in unedited areas. The paper introduces a new editing-specific loss function and a novel evaluation metric specifically designed for editing. The proposed method demonstrates superior performance compared to traditional interactive segmentation losses and training strategies, leading to better patient outcomes.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The study focuses on the segmentation of Intracardiac Echocardiography, which possesses significant clinical practical value.
    2. The authors propose a novel editing-specific loss function and an editing-specific evaluation metric specifically designed for interactive editing.
    3. The experiments are thorough and comprehensive, but the data on editing response time is absent.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. From my understanding, the goal of the proposed interactive editing framework is to enhance the segmentation of cardiac structures during surgery. This would necessitate a high level of responsiveness for real-time editing. However, the authors have not provided any data regarding the editing response time.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors have chosen not to release their code. I think it could be quite challenging to reproduce the results of this work, since the data is not available either.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Editing response time: It would be beneficial to provide data on the editing response time, as real-time applicability is crucial for clinical use during surgery. Providing information on the computational efficiency of your framework and its real-time performance would strengthen your results.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Clinical relevance: The paper addresses an important problem in the field of medical imaging - Intracardiac Echocardiography Segmentation. This topic has significant clinical value as it can potentially improve surgical outcomes and patient safety.

    Novelty of the proposed method: The authors have developed an innovative interactive editing framework, introducing an editing-specific loss function and an editing-specific evaluation metric. These additions make the work a valuable contribution to the field.

    Experimental design and results: The experiments conducted in the paper are fairly detailed and comprehensive, providing support for the efficacy of the proposed method. However, the lack of data on editing response time is a limitation that should be addressed.

    Paper organization and clarity: The paper is well-structured and clear, which makes it easy to follow and understand.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The reviewers are positive about the presentation and application of this work. However, they also raised key concerns about novelty. It is also a concern that the work cannot be reproduced. Please address these comments in your rebuttal.




Author Feedback

We appreciate the constructive feedback and are pleased with the unanimous appreciation of the clinical significance of the paper and clarity of presentation, e.g. R3: “…significant clinical practical value”. We address the reviewers’ comments below.

R1.6.2, R2.6.3: novelty

  1. We agree with R2 that our loss function is a derivative of the CE principle. However, the introduction of a differentiated computation of CE wrt y_init in particular regions and wrt y in others is a novel concept. This novelty is impactful from the clinical standpoint, as it ensures that any corrections made by the clinical expert are maintained and remain unaffected by consecutive edits, a guarantee not provided by standard CE. We provide evidence for this in Fig. 4. It also prevents any unpredicted or undesirable behavior away from the user input.
  2. As acknowledged by R2 “an approach distinct from existing methodologies”, the loss and metric are specifically designed to serve the intuitive editing objectives, outlined above and in sec 2.1.
  3. This is the first application of IE to ICE data, with its unique challenges (such as 2D acquisition, non-uniformity, and sparsity). We appreciate R1’s recognition of the value in our approach “This is definitely superior to approaches that treat each 2D image independently”.
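A minimal sketch of the differentiated CE computation described in point 1 above — hypothetical: `editing_loss`, the binary `edit_mask`, and the per-voxel blending scheme are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def editing_loss(pred, y, y_init, edit_mask, eps=1e-7):
    """Hypothetical sketch of a region-differentiated editing loss.

    pred:      predicted foreground probabilities
    y:         ground-truth labels {0, 1} (target near the user edit)
    y_init:    initial segmentation {0, 1} (target away from the edit)
    edit_mask: 1 inside the edit's region of influence, 0 elsewhere
    """
    pred = np.clip(pred, eps, 1 - eps)
    # Binary cross-entropy toward the ground truth inside the edited
    # region and toward the initial segmentation everywhere else.
    target = edit_mask * y + (1 - edit_mask) * y_init
    ce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return ce.mean()
```

Because the target is y in the edited region and y_init elsewhere, gradients away from the edit pull the prediction back toward the initial segmentation rather than toward the ground truth, which is the preservation guarantee the rebuttal argues for.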

R1.6.1, R1.9.2, R1.9.8: “I am not fully convinced by … from user scribbles” & “Interactive Editing (IE) vs Interactive Segmentation (IS)”

There is a clear distinction between IE and IS. IS is easier, as there are no prior expectations from the user. In IE, there is a visual bias because the user has seen a prior segmentation. If the user draws a scribble near the apex and suddenly the base gets edited, then the behavior of the system is neither predictable nor reliable. Further, consecutive edits can impact prior corrections, which is undesired behavior, particularly when the output is used in critical clinical applications such as ablation guidance. In addition to being faithful (to user instructions) and predictable, we emphasize that the benefit of our approach becomes clearer in challenging medical segmentation tasks, where IS models struggle to implement accurate global alterations. For instance, global corrections might result in a ‘ping-pong’ effect, wherein the user corrects point X, and the model makes an accurate adjustment around X but commits an error around Y (far from X). Subsequently, when the user corrects Y, the previously correct segmentation at X is violated. We argue that, particularly in complex 3D tasks, having a model capable of applying perfect segmentations everywhere from a single user edit is less feasible than employing IE.

R1.8, R3.8: reproducibility

Unfortunately, we are not able to release the code due to proprietary constraints. Nonetheless, we concur with R2.8’s comment: “The proposed method can be reimplemented without too much difficulty”. To facilitate this, we provided thorough implementation details and will provide pseudocode of the loss computation upon acceptance.

R1.9.1: y in eq 2

y is the 3D mesh generated by [8] by deforming a CT template to match the CAS contours. u is one of these contours. CAS contours are the true LA labels, while y may not perfectly align with them due to factors such as frame misalignment and cardiac motion (Fig. 1b). y_init is the output of a 3D network trained on y (R1.9.4). We thank R1 and will clarify this part upon acceptance.

R1.9.6: i. Based on the distance to y_cas. ii. Future work will include experiments on performance under user edit perturbation. We appreciate the reviewer’s feedback and will include this in the limitations.

R1.9.7: wrt y. We will clarify this point.

R1.9.9: We cited the closest to our work [2,5,9] and can add more, if necessary.

R3.9.1: We agree and thank the reviewer for pointing this out. The inference time through our model is ~20 ms per volume. We will add it to the final version.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The major concern after rebuttal remains the novelty and the insufficient evaluation (see R2’s updated comments). Unfortunately, the paper does not meet the standard of acceptance in its current form.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    In the rebuttal, the authors presented a compelling supporting argument regarding the perceived weak novelty. Although R1 remains partially unconvinced about the distinction between IS and IE, the method employed in this study demonstrates notable merits, particularly in terms of its high clinical relevance. Consequently, based on the aforementioned factors, I recommend accepting this paper.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Even after the rebuttal, this paper is receiving mixed reviews (one reviewer changed from WR to WA, and another the other way around). The idea and the potential for application in a clinical setting are very interesting, but the limited technical novelty and the lack of standard evaluation metrics were not completely cleared up in the rebuttal. For this reason, I am slightly more inclined toward rejection of this paper.


