
Authors

Huaqi Qiu, Kerstin Hammernik, Chen Qin, Chen Chen, Daniel Rueckert

Abstract

Deep learning (DL) image registration methods amortize the costly pair-wise iterative optimization by training deep neural networks to predict the optimal transformation in one fast forward-pass. In this work, we bridge the gap between traditional iterative energy optimization-based registration and network-based registration, and propose Gradient Descent Network for Image Registration (GraDIRN). Our proposed approach trains a DL network that embeds unrolled multi-resolution gradient-based energy optimization in its forward pass, which explicitly enforces image dissimilarity minimization in its update steps. Extensive evaluations were performed on registration tasks using 2D cardiac MR and 3D brain MR images. We demonstrate that our approach achieved state-of-the-art registration performance while using fewer learned parameters, with good data efficiency and domain robustness.

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16446-0_6

SharedIt: https://rdcu.be/cVRSM

Link to the code repository

https://github.com/gradirn/gradirn

Link to the dataset(s)

http://imaging.ukbiobank.ac.uk

https://www.ub.edu/mnms/

https://www.cam-can.org/index.php?content=dataset


Reviews

Review #1

  • Please describe the contribution of the paper

    In this paper, the authors deal with the problem of medical image registration. They introduce a two-step scheme to connect traditional approaches and network-based approaches. To do so, they use two levels of optimisation: a deep-learning-based optimisation (searching for the best network parameters) and an iterative optimisation using gradient descent (searching for the best transformations).

    The proposed scheme is independent of the network architecture and the registration formulation. The authors compare it with deep learning and non-deep-learning approaches using two datasets with cardiac and brain MRI.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The main strength of the paper is the original way it combines iterative registration and deep-learning-based registration.

    • The proposed formulation performs better than competing methods on out-of-domain data and when the training dataset is reduced.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The reviewer does not understand exactly how the registration problem is solved, nor why the lower optimisation problem is called “the forward pass” by the authors. A pseudo-algorithm of the proposed methodology would have clarified this.

    • According to the reviewer, it is necessary to compare the proposed formulation with networks other than VoxelMorph.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    According to the reviewer, the paper is not clear enough to be perfectly reproducible. However, the authors provided a GitHub repository with the code.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    The methodology, and particularly the training process with its two optimisation levels, requires more explanation and clarity.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The methodology and the advantage of having two optimisation processes are not explained and discussed clearly enough.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    3

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #2

  • Please describe the contribution of the paper
    • The authors present a combination of deep-learning-based and conventional registration.
  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The authors propose a method to combine deep-learning and conventional image registration to further improve the results obtained by the DL-Reg
    • The experiments show promising results.
    • The paper is well written.
    • A reasonable selection of metrics was made to evaluate the quality of registration (Dice, HD, detJ, std(detJ), runtime).
    • The authors perform further evaluations on data efficiency and domain robustness.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    • The description of the method is not sufficiently clear. Is this a deep-learning registration that is used to generate an initial solution for a conventional registration (meaning DL-Reg + instance optimisation, as e.g. Mok et al. did for task 2 of the Learn2Reg challenge)? It seems to me that this is the case, because the network parameters are not changed in the second optimisation (1b). This would mean that the optimisation from (1b) does not have to be carried out during the training of the network, as it has no influence on the network parameters. However, the title as well as other wording in the text suggests that this is an interrelated optimisation/network. Why else would the image dissimilarity in the section “Image Dissimilarity Gradient” need to be twice differentiable?

    • For the evaluation, automatic segmentations are used on both datasets. It is not clear how good these are, and accordingly how great the influence of errors in the automatic segmentation is on the evaluation of the registration. For the 3D dataset, GraDIRN, with a Dice of 0.799, is reported as significantly better than RC-VM, with a Dice of 0.794. Is that really significantly better, taking the segmentation error into account? Even without this, the difference seems very small for significantly better results.

    • It is not clear to me why the authors have chosen to evaluate their method on these datasets. They clearly propose a new method, and it would therefore be appropriate to evaluate it on a publicly accessible dataset with manual annotations (e.g. the Learn2Reg datasets).

    • The discussion and contribution section does not contain any discussion. Please discuss your results!

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The given answers seem to be correct. I appreciate that the authors honestly answered these questions (no one else did in my paper stack).

    An analysis of situations in which the method failed: [No]
    Discussion of clinical significance: [No]

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    • In Table 1, it is not clear whether the reported values are means, medians, or something else.
    • In Tables 1 and 2, what is meant by “J < 0%”? The number of voxels with a Jacobian determinant smaller than 0? Or the percentage of such voxels, i.e. “J < 0 in %”?
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I see in this paper an interesting idea that has potential and should be presented to the scientific community. I hope the authors will revise their manuscript a bit more so that it is a bit clearer.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    6

  • [Post rebuttal] Please justify your decision

    I would like to thank the authors for the effort they have put into the rebuttal.

    With the changes the authors describe in their response, the method seems much clearer to me and my concerns have been addressed. Therefore, I accept this paper.



Review #3

  • Please describe the contribution of the paper

    In this paper, the authors present a deep learning based method for image registration. It is based on the interleaving of a learned gradient based forward step during a multiresolution based learned optimization. The authors present the individual building blocks of the algorithm and show the applicability of the method on 2d and 3d medical data, combined with a comparison to other published methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The method presented by the authors is motivated by classical image registration techniques. In this work, they extend conventional deep-learning-based image registration methods with an update step motivated by image dissimilarity. This approach appears promising. The methodological foundations of image registration are explained in detail, thus motivating the newly added step, and the methods section devotes enough space to motivating the individual building blocks. Another point worth highlighting is the use of clinical data to validate the new method: 2D and 3D data representative of current clinical questions (cardiac and neuro) are used. These data are not only used to show the method’s own strengths; a comparison with current methods (classical and AI) is also performed. Another strength is the experiments on data efficiency and domain robustness.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The formulation of the approach from a bi-level optimization view is not conclusive and probably rather misleading, since the optimization problem formulated in this way is not actually solved. The role and the design of the regularizer in the lower-level step (\nabla R_{\theta_t}) are not clear; which form of regularization is used here should be made clearer. The motivation for why the structure of the procedure described in Figure 1 fits the formulation of the optimization problem is not obvious. In the experiments, the newly presented method is compared with other already published methods, but it remains unclear why these particular methods were chosen; a short justification of this choice would be nice. The experiments motivate, via Tab. 2, that the explicit gradient is necessary for the success of the method; a more detailed consideration of this point would be useful.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The procedure itself is not made available for download etc. The data used are freely available. The procedures used for comparisons have already been published and some of them are available as freely accessible code. It is unclear how the presented procedure has to be parameterized in detail. This could be discussed in more detail in the experiments section.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    The methods section should be revised to clarify the reasons that led to the formulation of the procedure as presented.

    The role of the regularizer in the lower-level optimization should be better emphasized. What form does it take after learning, what role does it play in the overall registration, and what is its contribution to the results?

    It is unclear which segmentations are used for computing the Dice scores; no segmentations are mentioned in the 3D case.

    The role of the lower-level optimization, which Tab. 2 specifically focuses on, should be presented and at least motivated in the methods section.

    Minor changes:

    The abbreviations for the heart segmentations (i.e. LV, Myo, …) are not used again later and can be removed.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Minor rearrangements and extensions in the methods section, and additional explanations in the experiments section, can quickly raise the paper to a good level. The weaknesses presented can easily be eliminated.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    4

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The paper received three well-written reviews, but all reviewers had concerns about the clarity of the paper, which led to uncertainty over its contributions. For example, several methodological issues are brought up by all reviewers, mostly, I think, stemming from a lack of clarity in the writing. The choice of experiments (baselines and datasets) is also highlighted (e.g. by R2). There are a few things to clarify before the paper can be appreciated properly.

    I recommend a thorough and careful review to clarify the issues that the reviewers bring up, and also justify why the experimental choices help clarify the contributions of the paper.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    6




Author Feedback

We thank all reviewers for the constructive reviews, and for appreciating the proposed idea of combining conventional registration with deep learning, overall clarity and organization, the strengths in the evaluations with reasonable metrics (R2), clinically representative data (R3), and domain robustness and data efficiency (R1, R2, R3). In the following, we address the main concerns raised by reviewers and highlighted by the AC regarding the clarity of the method (1, 2), and the choice of baselines and datasets in the experiments (3, 4):

1) R1 and R2 raised concerns about the clarity in our description of the method. To clarify, the proposed method solves registration by running a forward pass of the deep learning-based model with a pair of images as input to get the transformation from the output, similar to other DL registration methods. The key difference is that our model forward pass itself is an unrolled iterative optimization process, with multiple update steps each combining explicit image dissimilarity gradient and CNN output to optimize the transformation \phi. The training finds the model parameters to best perform this unrolled iterative optimization. Hence, the DL network is not used to provide an initial solution for conventional registration as R2 suggested. Instead, our framework embeds gradient-based optimization into a learning framework. We will add clearer descriptions and make rearrangements in the revised version to incorporate these clarifications.
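The unrolled forward pass described above can be sketched in a toy 1D form. This is only an illustration under assumed names, not the authors' implementation: `reg_grad` is a hypothetical placeholder for the gradient contribution that GraDIRN's trainable CNN would produce, and SSD stands in for whichever dissimilarity the paper uses.

```python
import numpy as np

def unrolled_register(fixed, moving, steps=200, lr=1.0, reg_grad=None):
    """Toy 1D unrolled gradient-descent registration.

    Each update step combines the explicit image-dissimilarity (SSD)
    gradient with an optional regularization-gradient term; in GraDIRN
    the latter would come from a CNN whose parameters are trained so
    that the unrolled iteration registers well.
    """
    x = np.arange(len(fixed), dtype=float)
    dm = np.gradient(moving)           # spatial derivative of the moving image
    phi = np.zeros_like(x)             # displacement field, initialised to zero
    for _ in range(steps):
        warped = np.interp(x + phi, x, moving)           # moving(x + phi)
        # pointwise d/dphi of sum((warped - fixed)^2)
        g_sim = 2.0 * (warped - fixed) * np.interp(x + phi, x, dm)
        g_reg = reg_grad(phi) if reg_grad is not None else 0.0
        phi = phi - lr * (g_sim + g_reg)
    return phi

# Example: recover a 3-pixel shift between two Gaussian bumps.
x = np.arange(64, dtype=float)
fixed = np.exp(-((x - 30.0) ** 2) / 50.0)
moving = np.exp(-((x - 33.0) ** 2) / 50.0)
phi = unrolled_register(fixed, moving)
warped = np.interp(x + phi, x, moving)
```

Training would then backpropagate through all `steps` iterations, which is why the dissimilarity must be twice differentiable, as R2 noted.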

2) R3 comments that the proposed framework does not necessarily fit the bi-level optimization view, and asks to clarify the role and design of the lower-level optimization (in relation to Tab 2) and the learned regularizer. We argue that the bi-level optimization view explains the proposed framework well holistically: the lower-level optimization, which optimizes the transformation for each image pair, is unrolled into the forward pass of the model; the higher-level optimization trains the model parameters to solve the lower-level optimization. The use of image dissimilarity gradient enforces image matching explicitly in each update step of the lower-level optimization, which we hypothesize is important for achieving parameter-efficient and accurate registration. The results in Tab 2 empirically validate this. We will motivate this in the method section more clearly as R3 suggested. As for the learned regularizer, the idea is to use CNNs to learn the optimal regularization from training data instead of using a fixed form. We will explain more directly as such in the revised version. We agree that studying the form of the learned regularization is of interest and will investigate in the future.
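The bi-level view described above can be written compactly as follows; the notation here (\(\theta\) for network parameters, \(\phi\) for the transformation, \(\mathcal{D}\) for dissimilarity, \(\mathcal{R}_\theta\) for the learned regularization) is assumed for illustration, not quoted from the paper.

```latex
\begin{align}
  \min_{\theta}\; & \sum_{(I_f,\, I_m)} \mathcal{L}\bigl(\phi^{*}_{\theta}(I_f, I_m)\bigr)
  && \text{(upper level: train the network parameters)} \\
  \text{s.t.}\; & \phi^{*}_{\theta} = \arg\min_{\phi}\; \mathcal{D}(I_f,\, I_m \circ \phi) + \mathcal{R}_{\theta}(\phi)
  && \text{(lower level: unrolled into the forward pass)}
\end{align}
```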

3) R1 and R3 raise questions about the choice of baseline methods. We have compared our method not only to VoxelMorph (re: R1) but also to a cascaded method (RC-VM) which is a SOTA and iterative DL registration method related to our idea. We motivated these choices in the “Baselines and Implementation” section.

4) R2 raises concerns about the choice of dataset and suggests that evaluating on a publicly accessible dataset (e.g. learn2reg) might be more suitable. We point out that the two datasets used in our paper and all processing tools are publicly accessible. For 3D brain MRI data, the segmentation in the OASIS dataset in learn2reg is also done automatically by FreeSurfer, not manually. While evaluation on this dataset would be nice, we argue that the datasets we used are equally valid. We acknowledge that errors in automatic segmentation can affect the registration evaluation, as R2 pointed out, and will investigate this. Despite being comparable to RC-VM in some settings, our method performs significantly better than VoxelMorph in all settings and shows better data- and parameter-efficiency than RC-VM.

We will revise our paper to address all major concerns accordingly, and we hope the reviewers can re-assess the paper and adjust the ratings where possible.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    I appreciate the thorough answers and involvement of the reviewers. Several of the aspects/concerns were alleviated.

    Although I believe several concerns still remain, the paper should be accepted as it will lead to an interesting discussion. However, I also strongly encourage the authors to work to improve the paper (in camera ready) given all the current suggestions. Importantly, the authors promise to include several clarifications, please make sure to do this.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    -



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Multi-Resolution Gradient Descent Networks for Image Registration

    This submission joins a gap between traditional gradient-based optimized and network-based image registration. The originality resides in coupling both, employing a gradient-based forward pass and enabling a multi-resolution scheme.

    Clarity was the main concern, as two reviewers had issues understanding the intricacies of the paper. The rebuttal appears to have clarified those with a direct, straightforward explanation of the methodological motivations. It is expected that these clarifications will improve the paper. Minor technical and validation questions were also satisfactorily addressed.

    For these reasons, recommendation is towards Acceptance.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    2



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The clarity of the proposed idea (i.e., integration of gradient-based optimization into a deep learning framework) and effectiveness of the bi-level optimization mechanism (especially lower-level) was the primary concern raised by the reviewers, and they were quite clearly addressed in the rebuttal. All suggested/responded changes should be made to the final manuscript.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    2


