
Authors

Mingyuan Meng, Lei Bi, Dagan Feng, Jinman Kim

Abstract

Deformable image registration is a crucial step in medical image analysis for finding a non-linear spatial transformation between a pair of fixed and moving images. Deep registration methods based on Convolutional Neural Networks (CNNs) have been widely used as they can perform image registration in a fast and end-to-end manner. However, these methods usually have limited performance for image pairs with large deformations. Recently, iterative deep registration methods have been used to alleviate this limitation, where the transformations are iteratively learned in a coarse-to-fine manner. However, iterative methods inevitably prolong the registration runtime, and tend to learn separate image features for each iteration, which hinders the features from being leveraged to facilitate the registration at later iterations. In this study, we propose a Non-Iterative Coarse-to-finE registration Network (NICE-Net) for deformable image registration. In the NICE-Net, we propose: (i) a Single-pass Deep Cumulative Learning (SDCL) decoder that can cumulatively learn coarse-to-fine transformations within a single pass (iteration) of the network, and (ii) a Selectively-propagated Feature Learning (SFL) encoder that can learn common image features for the whole coarse-to-fine registration process and selectively propagate the features as needed. Extensive experiments on six public datasets of 3D brain Magnetic Resonance Imaging (MRI) show that our proposed NICE-Net can outperform state-of-the-art iterative deep registration methods while only requiring similar runtime to non-iterative methods.
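As a rough illustration of the single-pass coarse-to-fine idea described in the abstract, the sketch below composes residual displacement fields from coarsest to finest by upsampling the accumulated field and adding the next residual. This is a deliberate simplification (1-D fields, nearest-neighbour upsampling, addition instead of true warp composition, no learned features); `upsample2x` and `coarse_to_fine_compose` are hypothetical names, not code from the paper.

```python
import numpy as np

def upsample2x(flow):
    """Nearest-neighbour 2x upsampling of a 1-D displacement field.
    Displacement magnitudes are doubled because the grid spacing halves."""
    return np.repeat(flow, 2) * 2.0

def coarse_to_fine_compose(residual_flows):
    """Cumulatively combine coarse-to-fine residual displacement fields
    in a single pass, coarsest first (approximated by upsample-and-add)."""
    total = residual_flows[0]
    for res in residual_flows[1:]:
        total = upsample2x(total) + res
    return total

# Toy example with 3 scales (lengths 2, 4, 8):
flows = [np.ones(2), np.zeros(4), 0.5 * np.ones(8)]
total = coarse_to_fine_compose(flows)
# ones(2) upsampled twice becomes 4.0 everywhere; adding the finest
# residual of 0.5 yields 4.5 at every position.
```

The point of the sketch is only that each finer step refines the accumulated transformation from the coarser steps, rather than restarting feature learning per iteration as multi-network cascades do.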

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16446-0_9

SharedIt: https://rdcu.be/cVRSP

Link to the code repository

https://github.com/MungoMeng/Registration-NICE-Net

Link to the dataset(s)

https://adni.loni.usc.edu/

https://fcon_1000.projects.nitrc.org/indi/abide/

http://fcon_1000.projects.nitrc.org/indi/adhd200/

https://brain-development.org/ixi-dataset/

https://mindboggle.info/index.html

https://surfer.nmr.mgh.harvard.edu/fswiki


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposed a one-shot DL-based registration method to address image registration tasks with large displacements. In the decoder, the displacement vector field is predicted at several resolutions, and at each scale, the warped moving image is injected. The evaluation is performed on public brain MR datasets, in which the displacements are relatively small. The proposed method is compared with two conventional registration methods as well as six DL-based methods, which are retrained in this study.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1) Injecting the deformed moving image into the decoder is a novel idea. 2) The paper is well written.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) The evaluation is performed on brain MR datasets, in which the displacements are relatively small.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    It is very nice that the code will be available and the dataset is public.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    1) The definition of large displacement is not clear. It would be helpful to mention the predicted displacement distribution in the testing datasets explicitly.

    2) It would be helpful to report the total number of voxels in each dataset in Table 1 in addition to the NJD.

    3) The ablation study in Table 2 is inconclusive. At what value of L would the improvement in Dice saturate or degrade?

    4) For the extension, I would recommend applying the proposed method to a dataset with large deformations, such as chest CT scans (e.g., DIR-Lab 4DCT). I would recommend comparing your results to the study of [Hering2021CNN].

    Hering, A., Häger, S., Moltz, J., Lessmann, N., Heldmann, S. and van Ginneken, B., 2021. CNN-based lung CT registration with multiple anatomical constraints. Medical Image Analysis, 72, p.102139.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper proposed injecting the deformed moving image into the decoder, which is a novel idea. Although the evaluation is only on brain MR datasets, this can be sufficient for a conference paper.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    This paper proposed a Non-Iterative Coarse-to-finE registration Network (NICE-Net) for deformable image registration. Unlike the existing iterative deep registration methods, the proposed NICE-Net can perform coarse-to-fine registration with a single network in a single iteration.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strengths of the paper are summarized below: 1) Proposed a coarse-to-fine unsupervised learning-based registration framework using a single network. 2) Relatively large experimental datasets.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) The innovations of the proposed method are not sufficient for MICCAI: the main innovation is the different architecture of the registration framework, which uses different scales of features to obtain displacement fields at different resolutions for coarse-to-fine registration. 2) A single UNet-like architecture may not have the capacity to capture the large number of features produced at different resolutions; this is the main reason that much previous work adopts multiple networks to conduct coarse-to-fine registration. 3) In Table 1, the highest registration accuracies are obtained with \lambda=0; it seems that the Jacobian determinant regularization did not work well in the loss function, which suggests that the proposed method may lead to image folding. 4) In Table 1, the registration accuracies are not higher than those of ULAE-net when \lambda=10^{-4}, which suggests that the performance of the proposed method is not better than that of ULAE-net. 5) The authors mention that "the \lambda is set as 10^{-4} to ensure that .... is less than 0.05%" — how was this conclusion reached? \lambda is a weight that balances the loss terms. What about other values of \lambda? Comparison results using different values of \lambda should be shown in the manuscript. 6) In the ablation study, what parameters (e.g., the value of \lambda) were used?

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors will release the code on GitHub.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    1) The parameters, such as \lambda, should be detailed in the ablation study. 2) Other datasets, such as OASIS, should be included in the experiments. 3) Registration experiments with a multi-iteration scheme using the same proposed architecture should be conducted in the ablation study.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The main innovation of this paper is the proposed registration architecture; other aspects, such as the loss function, are the same as those proposed in previous work. In addition, the comparison experiments are not conducted on common datasets such as OASIS.

  • Number of papers in your stack

    6

  • What is the ranking of this paper in your review stack?

    3

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    3

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #3

  • Please describe the contribution of the paper

    The authors propose an unsupervised non-iterative coarse-to-fine registration network (NICE-Net) for deformable registration using cumulative learning. This includes a single pass deep cumulative learning (SDCL) decoder, a selectively propagated feature learning (SFL) encoder, and an enhanced loss function.

    Compared to other iterative deep registration methods, NICE-Net can perform more accurate registration with a single network in a single iteration.

    Validation on two public datasets shows that NICE-Net outperforms the existing deep iterative registration methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1- This paper presents a non-iterative coarse-to-fine deformable registration for medical applications based on cumulative learning.

    2- Better performance: Compared to other iterative deep registration methods, NICE-Net can perform more accurate registration with a single network in a single iteration with the advantage of being fast.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1- Limited discussion of the qualitative results and comparing with the state-of-the-art.

    2- Lack of training results: The authors utilized a total of four public datasets of 3D brain MRI for training their proposed network; however, the training results are missing.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper meets the standard requirement in terms of reproducibility.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    1- The authors should compare their proposed network with the state-of-the-art qualitatively. This would help to highlight NICE-Net’s registration performance.

    2- Limited clarity: A description of the training results would enhance the paper's clarity. It would also be preferable to include a brief description of the training implementation details (e.g., how many image pairs were used?).

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, the paper is very interesting, and the method shows great potential.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    Overall, the reviewers appreciate the scope of the paper, but had several concerns with the details. For example, all reviewers had concerns with the clarity of definitions (e.g., large displacement, the contribution) and the experiments/results. There is a substantial spread in final scores, and I would say the paper is currently quite borderline, as the overall contribution is unclear. The quality of the rebuttal will be really important here in determining the value of this paper for the community at MICCAI.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    4




Author Feedback

We thank the Meta-Reviewer (MR) and Reviewers (R1-R3) for the comments. Below we provide our responses (Re). 1 (MR, R2): The novelty and overall contributions are unclear. Re: We would like to highlight our novelty/contributions: i) A non-iterative coarse-to-fine registration network (NICE-Net) was proposed, which enables more accurate and faster registration than existing iterative coarse-to-fine registration methods, and ii) Coarse-to-fine registration was performed as a cumulative learning process in a single iteration with a single network (further clarified in Re-2). We appreciate that both R1 and R3 recognized our novelty/contributions and accepted our work.

2 (R2): The main innovation is the different architecture, and one UNet-like architecture may not have the ability to capture complex features in coarse-to-fine registration. Re: The differences in architecture reflect our novelty that coarse-to-fine registration was performed as a cumulative learning process, where the knowledge learned at coarser steps was accumulated and leveraged by the finer steps. This enables our NICE-Net with a UNet-like architecture to sufficiently capture complex features. In contrast, previous methods, using multiple networks, rely on inefficient repeated feature learning. This is evidenced by our experiments, in which the NICE-Net using a single network outperformed the iterative registration methods using multiple networks (Table 1).

3 (MR, R1, R2): The definition of large deformations is unclear. The evaluation is performed on brain MRI with small deformations and is not conducted on common datasets, e.g., OASIS. Re: We followed the existing registration literature ([6, 9-11] in the manuscript) to define large deformations. Brain MRIs were used for evaluation as they were commonly used by the comparison methods (Dual-PRNet, RCN, LapIRN, ULAE-net). These studies showed that the basic VM and DifVM cannot handle the large deformations in brain MRI and introduced the need for coarse-to-fine registration. We used Mindboggle and Buckner for testing, both of which are common datasets used by the comparison methods (VM, Dual-PRNet, ULAE-net) and have large deformations; as exemplified in Fig. 2, the region in red boxes has large deformations and the predicted displacements can be > 15 voxels.

4 (R2): The highest DSC is obtained when lambda=0, suggesting the Jacobian regularization did not work well. When lambda=1e-4, the DSC of NICE-Net is not higher than that of ULAE-net, indicating NICE-Net is not better than ULAE-net. Re: Apart from DSC, we consider NJD as another important evaluation metric. The best (highest) DSC was obtained when lambda=0, but the best (lowest) NJD was obtained when lambda=1e-4. There is a trade-off between DSC and NJD, which also occurred in the LapIRN study ([10] in the manuscript). Compared to ULAE-net, our NICE-Net with lambda=1e-4 achieved similar DSC but much lower NJD; the NICE-Net with lambda=0 achieved similar NJD but significantly higher DSC (Table 1). When L=5, our NICE-Net outperformed ULAE-net on DSC by a larger margin (Table 2).
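For readers unfamiliar with the NJD metric debated in this exchange: it is the percentage of voxels with a negative Jacobian determinant of the transformation, i.e., voxels where the deformation folds. It can be approximated with finite differences; the sketch below assumes a displacement field in voxel units with shape (3, D, H, W), and `njd_percentage` is a hypothetical helper for illustration, not code from the paper.

```python
import numpy as np

def njd_percentage(disp):
    """Percentage of voxels with a negative Jacobian determinant for a
    3-D displacement field `disp` of shape (3, D, H, W) in voxel units.
    The transformation is phi(x) = x + disp(x), so its Jacobian is
    I + grad(disp), approximated here with np.gradient."""
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            # J[..., i, j] = d disp_i / d x_j, plus the identity term
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    det = np.linalg.det(J)
    return 100.0 * np.mean(det < 0)

# Zero displacement is the identity transform: no folding anywhere.
pct = njd_percentage(np.zeros((3, 8, 8, 8)))
```

A lower NJD percentage indicates a smoother, more diffeomorphic transformation, which is why the authors report it alongside DSC.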

5 (R2): How was the conclusion "the lambda is set as 1e-4 to ensure that the percentage of NJD is less than 0.05%" reached? A comparison using different lambda values should be shown. Re: We varied the lambda values on the validation set and found that lambda=1e-4 results in the percentage of NJD being < 0.05%. This experiment was not included due to limited space but will be added to the supplementary materials in the final submission.
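The selection procedure the authors describe (sweep lambda on the validation set and keep the NJD percentage below 0.05%) might look like the following sketch. `pick_lambda` and `eval_njd` are hypothetical names, and the toy numbers are illustrative only; since smaller lambda tends to give higher DSC but more folding, the sketch picks the smallest lambda that satisfies the NJD constraint.

```python
def pick_lambda(candidates, eval_njd, max_njd=0.05):
    """Return the smallest lambda whose NJD percentage on the validation
    set stays below `max_njd` (in percent). `eval_njd` is a hypothetical
    callable mapping a lambda value to its validation NJD percentage."""
    for lam in sorted(candidates):
        if eval_njd(lam) < max_njd:
            return lam
    return max(candidates)  # fall back to the strongest regularization

# Toy validation results: NJD% decreases as lambda grows.
toy = {0.0: 1.2, 1e-5: 0.4, 1e-4: 0.03, 1e-3: 0.01}
chosen = pick_lambda(toy.keys(), toy.get)
```

With these toy numbers, 1e-4 is the smallest candidate meeting the 0.05% constraint, mirroring the value the authors settled on.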

6 (R3): Lack of training results and qualitative comparisons. Re: We trained NICE-Net using 4 unlabeled datasets in an unsupervised manner, such that the training accuracy cannot be measured quantitatively, which is common in unsupervised registration studies. As expected, we did observe a smoothly declining training loss during training. Our qualitative comparison was not included due to limited space but will be added to the supplementary materials in the final submission.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The paper is borderline, but the discussion among the reviewers makes me think it should be accepted. I strongly encourage the authors to improve the paper at camera-ready time with clarifications of the concerns raised in the original reviews.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    -



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Most reviewers found merit in the proposed modification of multi-warp registration frameworks that have more complex repeated feature extraction steps. However, the differences to LapIRN in architecture are moderate in my opinion. Do 0.2 sec faster inference times really matter in clinical practice? It would have been nice to cite related work from computer vision / optical flow, where PWC-Net proposed the same scheme for 2D as early as 2017. The experimental results on inter-subject brain registration (in an unsupervised setting) are convincing and source code will be provided, yet future work should include actual "large deformation" data, e.g., DIRLAB-COPD or similar.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    8



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This is a difficult one to call. 2/3 reviewers were happy with the work, so I will go along with the majority.

    There are several MICCAI submissions where the contribution is a new deep learning architecture for doing optimisation of image registration. None of them actually assess their effectiveness as optimisers by seeing whether they find parameters with smaller cost functions than the current SOTA (with the same cost function). For this proposed optimiser, it would have been especially interesting to separately assess the image similarity and the regularisation terms. My guess is that it improves the similarity term, but not the regularisation term.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    6


