
Authors

Christopher Hahne, Raphael Sznitman

Abstract

Contrast-Enhanced Ultrasound (CEUS) has become a viable method for non-invasive, dynamic visualization in medical diagnostics, yet Ultrasound Localization Microscopy (ULM) has enabled a revolutionary breakthrough by offering ten times higher resolution. To date, Delay-And-Sum (DAS) beamformers are used to render ULM frames, ultimately determining the image resolution capability. To take full advantage of ULM, this study questions whether beamforming is the most effective processing step for ULM, suggesting an alternative approach that relies solely on Time-Difference-of-Arrival (TDoA) information. To this end, a novel geometric framework for microbubble localization via ellipse intersections is proposed to overcome existing beamforming limitations. We present a benchmark comparison based on a public dataset for which our geometric ULM outperforms existing baseline methods in terms of accuracy and robustness while utilizing only a portion of the available transducer data.
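The abstract's central idea, localizing a scatterer at the intersection of time-of-flight ellipses, can be sketched in a few lines. The sketch below is purely illustrative and is not the authors' implementation: the 2D geometry, the element positions, and the brute-force curve sampling are all assumptions made for this example. Each channel's round-trip distance (speed of sound times arrival time) constrains the bubble to an ellipse whose foci are the transmit and receive elements; two such ellipses sharing the transmit focus intersect at the bubble position in the imaging half-plane.

```python
import numpy as np

def tof_ellipse(f1, f2, total_dist, n=100_000):
    """Sample points p satisfying |p - f1| + |p - f2| = total_dist,
    i.e. the ellipse whose foci are the transmit and receive elements."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    center = (f1 + f2) / 2
    a = total_dist / 2                          # semi-major axis
    c = np.linalg.norm(f2 - f1) / 2             # half the focal separation
    b = np.sqrt(a**2 - c**2)                    # semi-minor axis
    dx, dz = f2 - f1
    ang = np.arctan2(dz, dx)                    # orientation of the major axis
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    rot = np.array([[np.cos(ang), -np.sin(ang)],
                    [np.sin(ang),  np.cos(ang)]])
    return center + (rot @ np.vstack([a * np.cos(t), b * np.sin(t)])).T

def intersect_tof_ellipses(tx, rx1, d1, rx2, d2):
    """Intersect two time-of-flight ellipses sharing the transmit focus tx;
    the intersection in the imaging half-plane (depth z > 0) is the bubble."""
    pts = tof_ellipse(tx, rx1, d1)
    pts = pts[pts[:, 1] > 0]                    # keep positive depth only
    resid = np.abs(np.linalg.norm(pts - np.asarray(tx, float), axis=1)
                   + np.linalg.norm(pts - np.asarray(rx2, float), axis=1) - d2)
    return pts[np.argmin(resid)]
```

In practice, more than two channels would be intersected and the resulting candidate positions clustered for robustness, matching the three-step pipeline (echo feature extraction, ellipse intersection, clustering) described in the reviews below.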

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43999-5_21

SharedIt: https://rdcu.be/dnwwz

Link to the code repository

N/A

Link to the dataset(s)

https://zenodo.org/record/4343435


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper is about bubble localization in Ultrasound Localization Microscopy (ULM) in order to render an image. The novelty is the use of a geometric framework using ellipse intersections straight from channel data instead of conventional beamforming as the first step. The authors use the PALA dataset to explore the potential improvements in image quality as a function of the number of channels used and noise added, and then report the tradeoff with computation time. The results show the proposed method outperforms a few chosen standard methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper has several strengths. The topic of image formation in ULM is interesting with this being a promising modality for a range of clinical applications. The results on PALA do show an improvement. The increased computation time is not a big drawback in my opinion – getting a good image is more important – plus keep in mind it skips the beamforming and data averaging steps. I appreciate the exploration of choosing a smaller group of channels and the performance under noise variations.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The main weakness is understanding whether the claimed improvements in image quality (shown in Table 1) are significant. The PALA data is also limited so it is unclear how the algorithm will work in different situations. The contribution to the state of the art is also really limited to ULM so it has somewhat narrow interest at MICCAI.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Reproducibility is excellent. This is where the use of PALA is beneficial. The same applies to the comparison against existing image formation methods, which are also provided on GitHub.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Overall comments: this paper appears to make a big step forward in ULM image formation. ULM continues to be one of the most exciting topics in ultrasound, with strong work from multiple groups. The contribution to microbubble localization is important within ULM. The authors have tried to pack a lot of information into the short length of the paper, but at the expense of explaining why they make key decisions. By answering the “why” questions listed below, the paper will be better understood, especially by those outside the ULM field.

    Although the math is well described and makes sense, other aspects of the paper are less clear. I found myself asking “why” many times when reading and re-reading this paper. The key places that would benefit from justification of the authors’ choices are:
    1. Introduction and Figure 1: why explore using a subset of channels? (Also, please define g in Figure 1.)
    2. Section 2.1: why is “interpretability” important in the feature extraction step?
    3. Section 2.3: why is “atmospheric conditions” mentioned?
    4. Section 3, Metrics: what is the Structural Similarity Index Measure compared against?
    5. Table 1: what do the bolded entries indicate (statistically significantly better)? Also, why discuss 32 channels improving the Jaccard index when 16 channels already gave the first large improvement? The text says “Table 1 lists the most time consuming process for each method…”, but I do not see that in Table 1; I only see “Time (s)”. Likewise, the text claims “Our method significantly improves the overall computation and acquisition time…”: is this statistically significant, and where are the numbers to show it? By acquisition time, do you mean the fewer number of channels? The best results use the full 128 channels, which I would expect is preferred anyway.
    6. Table 2: it is described as “Best-of-3”, but why is best of three chosen?

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper is sound and makes a good contribution to ULM, but the contributions are specific to the microbubble localization step and will therefore have somewhat narrow interest at MICCAI. Moreover, these are only initial tests, and it is unclear whether the improvements are really significant, especially on a wider range of in vivo data.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    I’m sticking with my original opinion of “weak accept” based on the response by the authors. Indeed, a weakness of the paper is lack of real data and the way the current data is presented. I feel that with a substantial revision of the manuscript to address the reviewers’ comments, it will still be of interest to MICCAI attendees. I personally am influenced by my growing interest in ULM.



Review #2

  • Please describe the contribution of the paper

    A geometric framework for microbubble localization through ellipse intersection in super-resolution ultrasound has been proposed. The presented technique’s performance has been validated using publicly available synthetic datasets.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Technical novelty: incorporation of geometric modeling for microbubble localization
    2. Well-written manuscript
    3. Promising results on synthetic datasets
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Weak motivation in the abstract
    2. Robustness analysis for the proposed technique is missing
    3. No validation against real datasets
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The work is quite reproducible. However, further details on the parameter selection process will facilitate better reproducibility.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. Abstract: How the existing beamforming approaches motivate the proposed geometric microbubble localization work is not super clear to me. Please consider adding a couple of sentences to ensure a smooth transition between the problem and the objective.
    2. Introduction: The literature review has been placed after the problem statement and objective, which is quite unconventional. Please consider restructuring.
    3. Methods: How did you select different parameters? How is the ellipse modeling strategy related to the imaging settings? Is the technique robust to microbubble density?
    4. Results: The presented results for synthetic datasets are promising. However, no in vivo validation experiment is conducted.
    5. Fig. 3, third row: Looks like there is a large difference between the ground truth density and the estimated densities. Why? Please provide the color bar.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    3

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    No validation against real datasets was the main factor behind my decision.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    4

  • [Post rebuttal] Please justify your decision

    Some of my concerns are addressed. However, I still think that validation against only synthetic data is not sufficient to prove the proposed technique’s potential.



Review #5

  • Please describe the contribution of the paper

    The work proposes a method for ultrasound localization microscopy (ULM) that replaces beamforming in localization of the microbubbles (MB). It is a geometrical method based on Time-of-Flight. It has three parts: echo feature extraction, ellipse intersection (MB localization) and clustering (for robustness).

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The method is an alternative to beamforming which is a computationally intensive process. It is a novel three-step geometrical formulation.

    The method demonstrates strong results against the chosen baselines in the noise free setting. It can be used with a lower number of channels.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The method is not as robust under noise.

    The method is presented as an alternative to beamforming, yet there is no evaluation comparing against it (or attempting to approximate such a comparison). A justification for why this comparison is missing is also lacking.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors have chosen a public dataset and state that they will publish the code. Thus, reproducibility is ensured.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Is it possible to have some sort of comparison to beamforming in the evaluation? If not, could the authors outline why, and what would be required to do so, even approximately?

    It would be interesting to see more renderings like those in Figure 3, potentially as supplementary material. More concretely: images produced by the proposed method with a higher number of channels, to see how more channels influence the results, and also images created from the noisy data.

    In the text after “and the sum over k accumulates all echo components”, the enumeration goes 1, 2, k, … K. Could this ‘k’ be a typo?

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper reads excellently; it is well structured and well argued. The mathematical formulations appear sound. The authors have used a public dataset and plan on open-sourcing the code. They describe a novel alternative approach to beamforming.

  • Reviewer confidence

    Not confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    After reviewing the work, the three reviewers acknowledge its merit but raise several issues that should be addressed. Specifically, they note several instances where clarity can be improved. One important weakness that is harder to address is the lack of real datasets in the validation. Two reviewers also point out that the robustness of the method may be an issue.




Author Feedback

We thank the reviewers for their time examining our work and providing valuable feedback to help improve its quality. The overall affirmative assessments support the notion that our method is novel and that the results demonstrate considerable potential to enhance ULM image formation. We recognize that the reviewers’ key concerns are clarity, real-data validation, and robustness analysis. We address these below and believe our clarifications should alleviate the raised concerns.

  1. Clarity

[R2] Suggested improvement for the motivation in the abstract: The abstract states our hypothesis that beamforming defines the ULM resolution limit, which we strive to overcome by replacing it with our geometric model. This conveys the rationale of our study in a concise manner.

[R2] Introduction restructuring: The opening paragraph introduces the research topic while establishing context and relevance upfront to set the goals before presenting background information. The remaining parts adhere to conventional standards, providing the literature review, problem, objective, methodology and contributions in that order.

[R1/R2] Several points are raised about clarity, including parameter selection, interpretability, metrics and acquisition time: These are valid points that require minor adjustments (Fig. 1) or brief clarifications (Table 1 & 2 details, interpretability, atmospheric conditions, SSIM). Our method skips coherent compounding to reduce acquisition time, as stated in the results section. This speeds up the capture process by two-thirds, which we will state explicitly.

[R2] Large difference between the ground truth and the estimated densities in Fig. 3: In the results section, we explain the use of gamma correction (including its parameters), which amplifies differences and thus color deviations.

[R5] Request for comparison with beamforming methods: This seems to be a misinterpretation of our benchmark analysis in Table 1 and 2 since all state-of-the-art methods we compare against use beamforming such as [10, 15, 16].

  2. Real data

[R2/R1/Meta] No validation against real datasets: We understand the concern that our proof-of-concept study lacks in vivo validation. As stated in Section 3, our method requires raw Radio-Frequency (RF) data, which is not yet publicly available for in vivo acquisitions. The PALA in vivo set [10] and the ULTRA-SR Challenge sets contain only beamformed images. ULM in vivo RF data is thus closed-source at this stage, and any results based on it could be criticized as not reproducible. It is also important to note that the PALA in silico data [10] used here is based on third-party software from an ultrasound device manufacturer, with typical MB density modeling as described in [10]. It is therefore an independent source that meets clinical standards. While we aim to demonstrate the advantage of our method on in vivo data, validating a novel proof-of-concept study on simulation data first is a reasonable choice for a conference submission. Taking this as the main reason for rejection, as Reviewer 2 suggests, is an immoderate decision that discourages risky new research of high potential. If the reviewers insist on real-data results to meet MICCAI standards, we will provide them as a replacement for the supplementary figures and/or Fig. 3. We hope our findings motivate peers to release in vivo data as raw RF signals to avoid this dilemma.

  3. Robustness

[R2] Robustness analysis for the proposed technique is missing: We examined robustness by means of the PALA noise model [10] providing detailed explanation in the metrics section and results in Table 2.

[R5] Improvement of our method is not evident in the presence of noise: It can be observed that (1) our method performs better than RS [10], (2) the U-Net [15] falls short in localization (see RMSE), and (3) our method is exposed to higher noise levels to account for noise reduction from beamforming as explained in the metrics paragraph.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors have addressed the concerns about clarity and lack of in vivo data in their validations. I suggest we ask the authors to revise the paper according to the reviewers’ comments and their response letter.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Agree that this is an interesting paper and one I’m on the fence for, so I would be happy leaning in either direction. I do wonder, however, given the lack of in vivo testing (which is completely understandable at this point), whether this paper is better suited for a workshop than for MICCAI. R2 also raises this issue: “validation against only synthetic data is not sufficient to prove the proposed technique’s potential.”



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The subpar performance of the model and the lack of real-data experiment makes this work too preliminary to be presented at the MICCAI conference.


