
Authors

Qi Zeng, Shahed Mohammed, Emily H.T. Pang, Caitlin Schneider, Mohammad Honarvar, Julio Lobo, Changhong Hu, James Jago, Gary Ng, Robert Rohling, Septimiu E. Salcudean

Abstract

Registration of multi-modality images is necessary for the assessment of liver disease. In this work, we present an image registration workflow designed to achieve reliable alignment between subject-specific magnetic resonance (MR) and intercostal 3D ultrasound (US) images of the liver. Spatial priors modeled from the right rib segmentation are utilized to generate the initial alignment between the MR and US scans without the need for any additional tracking information. For rigid image alignment, tissue segmentation models are extracted from the MR and US data with a learning-based approach to enable surface point cloud registration. Local alignment accuracy is further improved via an LC2 image-similarity-based non-rigid registration technique. This workflow was validated on in-vivo liver image data from 18 subjects. The best average target registration errors (TREs) for rigid and non-rigid registration obtained with our dataset were 6.27 ± 2.82 mm and 3.63 ± 1.87 mm, respectively.
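The TRE figures quoted above are the mean ± standard deviation of Euclidean distances between corresponding landmarks after registration; a minimal sketch of that computation, with hypothetical landmark coordinates (the paper's actual landmark sets are not given):

```python
import numpy as np

def tre_stats(fixed_pts, moved_pts):
    """Mean and std of target registration error (TRE) in mm.

    fixed_pts, moved_pts: (N, 3) arrays of corresponding landmark
    positions in the fixed (MR) and registered (US) volumes.
    """
    errs = np.linalg.norm(fixed_pts - moved_pts, axis=1)
    return errs.mean(), errs.std()

# Hypothetical landmarks: registered points offset by 3 mm along x.
fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
moved = fixed + np.array([3.0, 0.0, 0.0])
mean_tre, std_tre = tre_stats(fixed, moved)  # mean 3.0 mm, std 0.0 mm
```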

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16446-0_17

SharedIt: https://rdcu.be/cVRSX

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper proposed a registration framework for preoperative alignment of 3D ultrasound and MRI liver images. The initial alignment was achieved by using the right intercostal spaces from MRI as spatial priors for ultrasound positioning.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strength of this work is the whole processing system including segmentation of MR and ultrasound volumes, initial alignment, surface-based rigid registration, and the final deformable registration. The whole workflow is a feasible solution for preoperative registration of liver MRI-ultrasound images.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

The technical novelty of this paper is limited. Almost all of the components used in this system are established methods, such as Dense V-Net-based segmentation, coherent point drift (CPD)-based point cloud registration, and linear correlation of linear combination (LC2) image-similarity-based non-rigid registration.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Not so good.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

The whole system lacks technical novelty, as described in the weaknesses section. Moreover, since the registration workflow consists of several procedures, errors may accumulate and reduce the final performance. The motivation for the proposed initial alignment is not strong enough: considering that the registration is conducted preoperatively and in 3D space, in my opinion the initial rigid alignment is not a challenging problem and can be achieved by conventional registration networks, whereas the proposed initial alignment may be tedious. The authors used coherent point drift (CPD) to register the surface point clouds from the MRI and ultrasound images. The surface point clouds were generated from the segmentation results, which may contain incorrectly segmented boundaries; however, CPD is not robust to noisy point clouds and may therefore produce inaccurate alignment. In the data and materials section, the authors mention “from 26 subjects…”, “Our MR dataset consisted of 67 liver abdominal…”, and “In these datasets, we have 18 subjects…”; I am confused by these descriptions. In the FCN-Based Tissue Feature Extraction section, why do the authors call the segmentation feature extraction? Finally, there is a lack of comparison with baseline and current state-of-the-art registration methods.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Please see the described weaknesses and detailed comments.

  • Number of papers in your stack

    3

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #2

  • Please describe the contribution of the paper

    This paper develops a pipeline for within-subject in-vivo 3D ultrasound/MR liver image registration based on right rib segmentation, probe-orientation estimation, learning-based point-cloud registration, and refinement using image similarity. The method was evaluated on 18 subjects.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This is an interesting and difficult problem. As the authors point out, a good initialization is required, and that is a key contribution of this work.

    The method was well-engineered, assembling a number of techniques into a pipeline that produced good results.

    The algorithm is, by necessity I think, somewhat bespoke. While the specifics may not be directly generalizable to other domains, the strategies employed are valuable and will provide inspiration for further uses beyond this application.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

There is no comparison to an alternative technique, other than quoting the results from Ref 23, which are in the same range, although image resolution, etc., is not given. An explanation of why the proposed algorithm is an improvement over Ref 23 is needed.

    How robust is the initialization technique? Given the small number of cases, it is not clear that the rib and intercostal space modeling will always work. How good does the initialization have to be to avoid failure?

    There is also no ablation study to give an idea of what aspects of this approach are important.

    MVTK is mentioned but not cited or explained.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    No data (existing data used) or code available but other aspects of reproducibility included.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    It would be great to compare this approach to alternatives.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper shows impressive results on a difficult problem.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #3

  • Please describe the contribution of the paper

The paper presents a method to perform liver image registration using patient-specific magnetic resonance and intercostal ultrasound images. Clinically, alignment of pre-interventional magnetic resonance and ultrasound images is frequently required and represents an interesting topic. An initial alignment between the MR and US images is estimated based on spatial priors. A learning-based approach is used for the rigid image alignment, and accuracy is improved using the LC2-based non-rigid method. A detailed description of the workflow is presented, with an assessment of performance and validation on 18 clinical cases.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    -

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    I believe it could be reproduced from the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    -

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

The US-MR liver image registration workflow presents a sufficient amount of novelty. The application has significant clinical value. In-vivo validation and an assessment of performance are provided.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    3

  • Reviewer confidence

    Somewhat Confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    6

  • [Post rebuttal] Please justify your decision

    The authors have made efforts to answer the questions properly.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

Given the reviewers' concerns about the perceived technical novelty of the paper, the authors are encouraged to highlight its contributions, addressing in particular the comments from the first reviewer.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    NR




Author Feedback

Reviewer 1:

The technical novelty of this paper is limited. The core novelty is the automatic initialization based on the rib cage model, which is not reported in any prior work. We integrate this with known rigid and deformable registration methods in a novel, fully automatic pipeline, which we evaluate.

… the errors may be accumulated to reduce the final performance. As demonstrated in our results in Fig. 3, errors do not accumulate but rather decrease at every step. Note that every step of our method has a clear physical interpretation: initialization based on spatial priors, rigid surface-model registration, followed by deformable registration.

.. the initial rigid alignment is not a challenging problem and can be achieved by conventional registration networks. The proposed initial alignment may be tedious. Either manual alignment or a learning-based method trained on a large dataset is required for the initialization. Our approach is not tedious since it is fully automated; other published methods are not.

The surface point clouds … may contain incorrectly segmented boundaries… the CPD is not robust to noisy point clouds. We use an ensemble of multiple segmentation models to mitigate errors from each individual surface. These models cover different regions of the US field of view (FOV). In our experience, CPD performs as well as other methods; an extensive comparison to other methods for this application could be the subject of a future study.
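The ensemble described above can damp errors made by any single segmentation model; a minimal sketch of one common fusion rule, per-voxel majority voting, with hypothetical 1-D masks (the rebuttal does not specify which fusion rule the authors use):

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks from an ensemble by majority vote.

    masks: (K, ...) stack of K binary masks from different models.
    A voxel is kept as foreground when more than half of the models
    agree, so an isolated boundary error in one model is voted out.
    """
    masks = np.asarray(masks)
    return masks.sum(axis=0) > masks.shape[0] / 2

# Three hypothetical "masks"; the stray foreground voxel in the
# last model is rejected by the other two.
m1 = np.array([1, 1, 0, 0])
m2 = np.array([1, 1, 1, 0])
m3 = np.array([1, 0, 0, 1])
fused = majority_vote([m1, m2, m3])  # -> [True, True, False, False]
```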

…the authors mentioned “26 subjects…”, ..,“67 liver abdominal…”, “18 subjects”… The 18 cases with matching MRI and US were for registration validation. The additional data are for training and validation of the segmentation method.

why the authors call the segmentation as feature extraction? The learning-based segmentation is extracting features from the images to generate tissue label maps. We will replace this term in the revision if it causes any confusion.

There is a lack of comparison between baseline and current state-of-the-art registration methods. We would have liked to perform a rigorous comparison study on the same dataset, but previous reports on the topic use private datasets. Hence we compared our results to the recent MICCAI work [23], which (i) requires expert segmentation of the vessels in MRI (in contrast, we used automatic segmentation) and (ii) applies non-rigid alignment only to the vessels, leading to an average TRE of 7.1 ± 3.7 mm; our rigid registration step outperforms this, while with deformable registration our error is nearly halved, to 3.63 ± 1.87 mm. We will carry out a comparison with a PercuNav system, as we are now collecting patient data using this system.

Reviewer 2:

There is no comparison to an alternative technique other than to quote the results from Ref 23… An explanation of why the proposed algorithm is an improvement over Ref 23 is needed: Please see above.

How robust is the initialization? All randomly generated transducer poses between ribs 7-10 led to CPD rigid registration convergence in all 18 cases. This approach mimics the search that sonographers perform with the probe to find an acoustic window; therefore, the resulting IVC, gallbladder, and diaphragm poses in the US volume are always in line with the MR volume and within the tracking range of the CPD algorithm, as shown in Fig. 2.
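The randomized-pose robustness check described above can be sketched as follows. This is an illustration only: `random_pose` and `robustness_trial` are hypothetical names, the rotation is restricted to a single axis for brevity (the real pipeline samples poses along the modeled intercostal spaces), and `register` stands in for any rigid registration routine such as CPD:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pose(max_angle_deg=15.0, max_shift_mm=10.0):
    """Sample a random rigid perturbation: rotation about z plus a
    translation, as a stand-in for a random transducer pose."""
    a = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    t = rng.uniform(-max_shift_mm, max_shift_mm, size=3)
    return R, t

def robustness_trial(points, register, n_trials=20, tol_mm=5.0):
    """Perturb `points` with random poses and report the fraction of
    trials in which `register` recovers them to within `tol_mm`
    mean landmark error."""
    ok = 0
    for _ in range(n_trials):
        R, t = random_pose()
        moved = points @ R.T + t
        recovered = register(moved, points)
        if np.linalg.norm(recovered - points, axis=1).mean() < tol_mm:
            ok += 1
    return ok / n_trials
```

A success fraction of 1.0 over many sampled poses corresponds to the "converged in all 18 cases" claim above.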

There is also no ablation study to give an idea of which aspects of this approach are important. All steps in the proposed sequence are required and are interpretable. However, we agree that comparing the proposed initialization with a fully randomized search and with a tracked US system would be valuable. We will pursue this in the future, as suggested by the reviewer.

MVTK is not cited… We will correct this in the revision.

Reviewer 3:

It would be great to compare this approach to alternatives. We will update our discussion to better address the comparison to alternative methods. Please see previous answers.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

This is an important learning-based registration application addressing clinical and practical challenges. The authors responded well to all reviewer comments, which were specific and objective.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    NR



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

Very solid paper on learning-based ultrasound-MRI abdominal registration that incorporates semantic segmentation features in combination with LC2 (local correlation) and yields very competitive results. The experimental evaluation is very good, and the rebuttal addressed most remaining issues. In future work, it would be interesting to see how the method compares to the multimodal abdominal registration task in Learn2Reg and/or the methods that performed best in that challenge.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    5



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The submission introduces a pipeline for within-subject in-vivo 3D ultrasound/MR liver image registration based on right rib segmentation, probe-orientation estimation, learning-based point-cloud registration, and refinement using image similarity. The method was evaluated on 18 subjects.

    The submission had a borderline score and quite a few questions from the reviewers. The authors participated in the rebuttal. They answered most of the questions, even though some will only be cleared up by future work and experiments.

    The outcome is weak accept. The presented problem is interesting and difficult. The newly engineered system produces promising results, but full validation / comparison to SOTA is still lacking.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    5


