
Authors

Fenja Falta, Lasse Hansen, Mattias P. Heinrich

Abstract

Deep learning-based methods for deformable image registration have continually been increasing in accuracy. However, conventional methods using optimisation remain ubiquitous, as they often outperform deep learning-based methods regarding accuracy on test data. Recent learning-based methods for lung registration tasks prevalently employ instance optimisation on test data to achieve state-of-the-art performance. We propose a fully deep learning-based approach that aims to emulate the structure of gradient-based optimisation as used in conventional registration and thus learns how to optimise. Our architecture consists of recurrent updates on a convolutional network with deep supervision. It uses a dynamic sampling of the cost function, hidden states to imitate information flow during optimisation, and incremental displacements for multiple iterations. Our code is publicly available at https://github.com/multimodallearning/Learn2Optimise/.

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16446-0_29

SharedIt: https://rdcu.be/cVRTb

Link to the code repository

https://github.com/multimodallearning/Learn2Optimise/

Link to the dataset(s)

https://med.emory.edu/departments/radiation-oncology/research-laboratories/deformable-image-registration/downloads-and-reference-data/index.html

https://empire10.grand-challenge.org


Reviews

Review #1

  • Please describe the contribution of the paper

    The authors propose a deep learning-based approach called learn to optimize (L2O) that aims to emulate the structure of gradient-based optimization used in conventional registration. The proposed architecture consists of recurrent updates on a convolutional network with deep supervision. It uses a dynamic sampling of the cost function, hidden states to imitate information flow during optimization, and incremental displacements for multiple iterations.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The authors propose a recurrent framework using iterative dynamic cost sampling and a trainable optimizer that mimics Adam optimization while substantially reducing the required number of iterations.

    The method uses a total of 45 features for each image voxel/control point: the 3 components of the predicted displacement, a fixed grid of 8 subpixel offsets (3 coordinates each, i.e. 24 features), 8 dissimilarity costs sampled at those offsets, and 10 hidden states that are propagated through all iterations. Through these, the model can retain information about previous update steps and incorporate it, similar to how Adam uses momenta. All features are updated with each recurrent application of the network, so the coordinates and dissimilarity costs change dynamically across recurrent states and mimic the iterative fashion of conventional registration (see the feature-assembly sketch after this answer).

    The authors show that a good initial condition generated by the VM++ algorithm is important for fast and accurate convergence of both the Adam-optimized approach and their L2O approach.
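    To make the feature layout concrete, the following minimal sketch (PyTorch, with hypothetical names and an assumed ±0.5-voxel offset cube; not the authors' implementation) assembles the 45-dimensional per-control-point input: 3 displacement components + 8 offsets × 3 coordinates + 8 dissimilarity costs + 10 hidden states.

```python
import torch

# Minimal sketch (hypothetical names, not the authors' code): assemble the
# 45 per-control-point features described above.
N = 2048                                   # number of control points (illustrative)

disp = torch.zeros(N, 3)                   # 3: current displacement estimate
# 8 fixed subpixel offsets; a +/-0.5-voxel cube is an assumption for illustration
offsets = torch.tensor([[dx, dy, dz] for dx in (-0.5, 0.5)
                                     for dy in (-0.5, 0.5)
                                     for dz in (-0.5, 0.5)])
coords = offsets.expand(N, 8, 3).reshape(N, 24)   # 24: sample coordinates per point
costs  = torch.rand(N, 8)                  # 8: dissimilarity sampled at the offsets
hidden = torch.zeros(N, 10)                # 10: hidden state carried across iterations

features = torch.cat([disp, coords, costs, hidden], dim=1)
assert features.shape == (N, 45)
```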

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The authors do not give a theoretical basis for why their network mimics the Adam optimizer, i.e., gradient descent on a loss function. This is important because the results presented in Figure 3 show that the Adam optimizer continues to improve with iterations, while the proposed method appears to level out and possibly get worse as the number of iterations increases.

    The authors do not show before and after registration results. It would be good to show difference images and Jacobian images for good, average, and failed registration cases.

    The method was trained and evaluated on a small number of data sets. The paper states that 28 pairs of image volumes were selected from the EMPIRE10 (selected cases), DIR-Lab COPD, and DIR-Lab 4DCT data for 5-fold cross-validation. There is no indication of how many registration cases (if any) failed in this study.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper is reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    I like Figure 2. However, the authors do not explain what ground truth reference they are using and how it was obtained. Without this information, it is not clear whether this visualization is biased.

    Pg 6. The authors mention that their method produces smoother transformations and less folding compared to Adam optimization. The authors should report how much folding occurs in both the Adam and the L2O approaches.

    Fig 3. The authors should state what the shaded regions of the graphs represent and how they are computed.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    See above.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #2

  • Please describe the contribution of the paper

    This paper proposes a novel recurrent framework to emulate instance optimization for deformable image registration, using an iterative dynamic cost sampling step.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    (1) This paper employs a recurrent network to emulate instance optimization, which is meaningful for intra-patient lung registration. (2) It uses a dynamic sampling of the cost function and hidden states to mimic gradient-based optimization, requiring fewer iterations than traditional techniques.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    (1) Section 2.1 introduces a lot about Adam optimization. However, it would be better to focus on the techniques/methods of this paper itself (learning iterative optimisation for deformable image registration) rather than on related work. The title of Section 2.1 is also misleading and confusing. (2) Sections 2.3 and 2.4 are difficult to follow. I suggest the authors use an algorithm block to clearly state the entire optimization procedure. (3) As shown in Table 1, Adam, as the main baseline method, outperforms the proposed L2O both with and without Pre-Reg. Though this work reduces the required number of iterations, the contribution is relatively limited. It would be better for the authors to add running-time comparisons to show the method's advantages in terms of efficiency.

  • Please rate the clarity and organization of this paper

    Poor

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Good reproducibility.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    Major points: (1) It would be better to discuss more clearly the relationship between the hidden states in this paper and the first and second moments in Adam optimization. (2) It would be better for the authors to add running-time comparisons.

    Minor points: (1) Eq. (1) should be improved. (2) There is an extra comma at the end of the first sentence of the caption of Fig. 1. (3) ‘For each sample coordinate a dissimilarity cost…’ should be improved. (4) ‘…are used to asses the registration accuracy…’ should be improved.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    3

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The two major issues are the clarity of the presentation and concerns about the experimental results.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    4

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    4

  • [Post rebuttal] Please justify your decision

    The concern about the clarity of the presentation has still not been resolved.



Review #3

  • Please describe the contribution of the paper

    The authors present a deep learning-based approach to emulate the structure of gradient-based optimization (the Adam optimizer).

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strength of the paper lies in its use of a recurrent network framework to emulate instance optimization. The paper is well written and easy to follow.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The main weakness of the paper is that the motivation for developing an instance-optimization emulator is not clear. The results indicate that the Adam optimizer works better than L2O on both datasets used.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper is reproducible and source code is available online.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    The authors should improve the motivation behind the development of the L2O method. Perhaps it would be interesting to look at the actual registration outputs and the diffeomorphic properties of the displacement fields.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper presents an interesting approach, although it is hard to see its impact and application in its current form.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    3

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    6

  • [Post rebuttal] Please justify your decision

    The authors answered my queries, and based on their answers to the other reviewers, I will change my original rating.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    Contributions

    Proposes a deep learning-based approach (“learn to optimize”, L2O) that aims to emulate the structure of gradient-based optimization used in conventional registration.

    • Architecture consists of recurrent updates on a convolutional network with deep supervision.
    • Uses dynamic sampling of the cost function and hidden states to mimic gradient-based optimization, requiring fewer iterations than traditional techniques.
    • Shows that a good initial condition is important for fast and accurate convergence of both the Adam-optimized approach and the L2O approach.
    • Good reproducibility.

    Weaknesses to address

    • No motivation for the work is given, nor is a theoretical basis presented for why the network mimics the Adam optimizer.
    • The methods section should focus more on the proposed method than on the Adam optimiser.
    • The results indicate that the Adam optimizer works better than the proposed algorithm for both datasets (and the Adam optimizer continues to improve with iterations, while the proposed method appears to level out and possibly get worse with more iterations).
    • Does not show any before and after registration results (or difference images or Jacobian determinant maps) for good, average and failed registration cases.
    • Trained and evaluated using a small number of data sets, with no indication of how many registration cases failed (if any).
  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    8




Author Feedback

We thank all the reviewers for their valuable and constructive comments to improve our work.

R2 and R3 criticised that Adam outperforms our method. R2 stated that “Adam, as the main baseline method, outperforms the proposed L2O both with and without Pre-Reg”, which is inaccurate. With pre-registration, Adam does slightly outperform our method (Adam/L2O for 4DCT: 1.33/1.69, COPD: 2.18/2.24), but without pre-registration, L2O performs significantly better (Adam/L2O for 4DCT: 2.38/2.01, COPD: 7.52/4.13). In addition to a lower TRE, L2O generates more plausible results with less folding. We consider the robustness of our method with respect to the quality of the pre-registration to be important, because related recent work shows that accurate alignment of full-inspiration to expiration lung CT is challenging to achieve, in particular for DL-based methods. Both for applications outside of intra-patient lung registration and for improving DL-based methods that do not yet achieve results accurate enough to employ Adam optimisation, L2O offers a promising methodological solution: since Adam is strongly dependent on the accuracy of the pre-registration, L2O can improve fast but less accurate deep learning-based methods even in scenarios where Adam-based instance optimisation currently fails.

R3 criticised that the motivation behind our method remains unclear. Using a DL-based method has some general advantages over conventional optimisation: DL-based methods can make use of population-wide information through training data (as shown in our experiments), are less dependent on pre-alignment and, in combination with other learning-based methods, can be trained in an end-to-end fashion, providing the potential for a joint network to improve even further (a natural next step). Typical end-to-end methods have the disadvantage of not being easily interpretable and of being susceptible to subtle domain shifts; in these respects, conventional optimisation is usually advantageous. Since our architecture aims to learn optimisation, it combines the advantages of both.

R1 and R2 suggested further discussion of the relationship between our network and Adam optimisation, namely 1) why L2O gets worse with more iterations and 2) how the hidden states relate to Adam's momentum. While most of this information was present in our manuscript, we agree that it could be presented more clearly. 1) Since we always used eight iterations during training, our network can use information about the displacements occurring in the training data and adjust itself so that similar displacements are predicted after eight iterations. This effect can be reduced by using a variable number of iterations during training. When comparing with Adam, it should not be disregarded that using more Adam iterations to improve accuracy also entails a longer runtime. 2) The hidden states can be used by the network to memorise information about previous iterations and approximated gradients (calculated from displacements and coordinates); this information is mapped to a compact hidden-state vector. In Adam optimisation, this is done through the calculation of momentum, a rather simple linear relation. Our architecture gives the network the option to compute momenta of the gradients, but it is not limited to using only momentum if there are better options. In summary, the recurrent architecture emulates the iterative nature of Adam, coordinates and dissimilarity costs as inputs enable the network to perform gradient descent, and the hidden states enable learning-rate adaptation.
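As a minimal sketch of this relationship (hypothetical names and a simplified Adam without bias correction; not the code of our actual network), one recurrent update step carrying a hidden state can be contrasted with an Adam-style step, where the fixed pair of moment estimates plays the role of a hand-crafted hidden state:

```python
import torch
import torch.nn as nn

class RecurrentUpdate(nn.Module):
    """Sketch of one learned optimisation step (hypothetical, simplified).
    The network sees the current displacement, sampled dissimilarity costs and
    the hidden state, and returns an incremental displacement plus a new hidden state."""
    def __init__(self, n_costs=8, n_hidden=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + n_costs + n_hidden, 64), nn.ReLU(),
            nn.Linear(64, 3 + n_hidden))

    def forward(self, disp, costs, hidden):
        out = self.net(torch.cat([disp, costs, hidden], dim=-1))
        delta, new_hidden = out[..., :3], out[..., 3:]
        return disp + delta, new_hidden        # incremental displacement, updated memory


def adam_step(disp, grad, state, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """Hand-crafted counterpart (simplified Adam, no bias correction): the first and
    second moments (m, v) are one fixed choice of 'hidden state' that a learned
    update is free to reproduce or replace."""
    m, v = state
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    return disp - lr * m / (v.sqrt() + eps), (m, v)
```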

To address the point raised by R1: there were no registration cases that failed; all cases improved in accuracy.

As R1 and R3 pointed out, we did not include any before-and-after registration results. We will amend Fig. 2 and add visual examples showing qualitative results of our method in a revision.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors presented a good case for their work during the rebuttal, leading to two reviewers raising their opinions of the work. Reviewer 1 already considered that the work was of sufficient quality to publish. Overall scores are now 6, 4, 6. On balance, given the extensive knowledge and experience of Reviewer 1, I would suggest accepting the manuscript.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    7



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Learning iterative optimisation for deformable image registration of lung CT with recurrent convolutional networks

    This submission joins the learning-based and gradient-based approaches to image registration, where the optimization iterations are learned so that fewer iterations are required at test time. The rebuttal has addressed the main clarifications, and it is believed that the promised changes to the manuscript are feasible.

    For all these reasons, recommendation is towards Acceptance.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    4



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    It seems that there were some original concerns, but these were just sufficiently addressed to make it into MICCAI. Congratulations to the authors. I strongly encourage them to take into account the various concerns of the reviewers, though, since there were a few original concerns that still linger.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    18/30


