
Authors

Ekaterina Kuzmina, Artem Razumov, Oleg Y. Rogov, Elfar Adalsteinsson, Jacob White, Dmitry V. Dylov

Abstract

Image corruption by motion artifacts is an ingrained problem in Magnetic Resonance Imaging (MRI). In this work, we propose a neural network-based regularization term to enhance Autofocusing, a classic optimization-based method for removing motion artifacts. The method takes the best of both worlds: the optimization-based routine iteratively performs blind demotion, while the deep learning-based prior penalizes unrealistic restorations and speeds up convergence. We validate the method on three models of motion trajectories, using synthetic and real noisy data. The method proves resilient to noise and anatomic structure variation, outperforming state-of-the-art demotion methods.
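To make the hybrid formulation concrete, below is a minimal, hypothetical PyTorch sketch of such an optimization loop. The function names (translate_kspace, autofocus_plus_sketch), the per-line translation-only motion model, the L1 focus objective, and the stand-in convolutional prior are all illustrative assumptions, not the paper's actual Eq. (5) or its trained U-Net.

```python
import torch
import torch.nn as nn

def translate_kspace(k: torch.Tensor, shifts: torch.Tensor) -> torch.Tensor:
    """Apply a per-line horizontal translation to k-space via the Fourier
    shift theorem (translation in image space = linear phase in k-space)."""
    _, W = k.shape
    freqs = torch.fft.fftfreq(W)                       # cycles per pixel
    phase = torch.exp(-2j * torch.pi * shifts[:, None] * freqs[None, :])
    return k * phase

def autofocus_plus_sketch(k_corrupted: torch.Tensor, prior: nn.Module,
                          n_iters: int = 80, lam: float = 0.1,
                          lr: float = 1e-2) -> torch.Tensor:
    """Toy blind demotion: optimize per-line shifts so the restored image
    minimizes an L1 focus metric plus a learned regularization term."""
    n_lines = k_corrupted.shape[0]
    shifts = torch.zeros(n_lines, requires_grad=True)  # motion parameters
    opt = torch.optim.Adam([shifts], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        k_fixed = translate_kspace(k_corrupted, -shifts)  # undo the motion
        img = torch.fft.ifft2(k_fixed).abs()
        loss = img.sum() + lam * prior(img[None, None]).abs().sum()
        loss.backward()
        opt.step()
    return img.detach()

# Toy usage with a random "scan" and a stand-in (untrained) prior:
k = torch.fft.fft2(torch.randn(64, 64, dtype=torch.complex64))
restored = autofocus_plus_sketch(k, nn.Conv2d(1, 1, 3, padding=1))
```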



Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16446-0_35

SharedIt: https://rdcu.be/cVRTu

Link to the code repository

https://github.com/cviaai/AF-PLUS

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    In this work, the authors proposed an Autofocusing algorithm that uses CNN-extracted prior knowledge of specific k-space motion corruption models. The authors evaluated the method using the fastMRI dataset.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strength of this paper lies in the formulation of the optimization, which includes k-space trajectory estimates.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The main weakness is that the proposed method seems a trivial step beyond existing autofocusing methods. The newly introduced U-Net loss mechanism is similar to a critic in a GAN setup, so the authors should compare their method with a GAN-based motion correction model.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors included scripts, but this reviewer encourages the authors to make them publicly accessible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    The authors should clarify the novelty of the method beyond existing autofocusing methods. The authors should also compare their method with existing GAN-based methods, since their construction of the loss mechanism is similar to a GAN-based critic. Therefore, it is important to validate both the novelty and the benefits of the proposed model.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The overall method is interesting, but this reviewer finds it limited in novelty and in the comprehensiveness of the experimental comparison.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    4

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    6

  • [Post rebuttal] Please justify your decision

    The authors have done a comprehensive revision, including a comparison with a GAN, which was one of my main comments. I can recommend acceptance of this work.



Review #2

  • Please describe the contribution of the paper

    The authors proposed an autofocus method based on a neural network to remove motion artifacts. Similar to autofocus and GradMC (an L2-regularized autofocus method), the method performs “blind” motion correction; however, it is constrained by a deep learning-based regularization term. The fastMRI database was used to compare the performance of the proposed method with other motion correction methods, and the motion-corrected images were compared using four quality metrics.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The proposed method does not require motion detection or estimation to perform motion correction;
    • The proposed method was compared with several other motion correction methods using four image quality metrics;
    • The images obtained with the proposed method look good even in the presence of severe motion;
    • The source code is provided.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    • It is not clear how the method works. In particular, it is not clear how Eq. (5) is related to the autofocus method proposed in [2], which is based on an entropy metric.
    • The proposed method takes 7-9 minutes to generate motion-corrected images.
    • A comparison with other state-of-the-art deep learning motion correction methods is missing. For example, [13] was discarded because of the “associated training routine” and [29, 30] because they do not take “the physical nature of the MRI artifacts into account”, but these methods provide results within seconds, which is an advantage over the proposed method.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Source code is provided. Data from fastMRI database.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    Other comments:

    1. “Compensation for the motion artifacts is referred to as demotion”. I cannot say I have heard the term “demotion” before in the field of motion correction for medical imaging.
    2. The autofocus method is vaguely described. Please provide more detail.
    3. The authors mention that the disadvantages of MedGAN are the “extra adversarial loss functions and the associated long training routine”. How does the training time compare with the proposed method? It would be interesting to see how the outputs of both methods compare as well, since MedGAN provides images in seconds while the proposed method takes 7-9 minutes.
    4. Is autofocus+ conceptually similar to medGAN (except for the deep regularization)? Is the performance of the proposed method superior to medGAN with an L1-norm regularization?
    5. It is not clear how blind motion correction can be performed by solving Eq. (5). How is this related to autofocus, which uses entropy as a quality metric?
    6. It is not clear what Sp, the output of the U-Net, is. Is it an image or the motion parameters?
    7. Section 3, “10374 scans for training and 1992 scans for validation”. How many for testing?
    8. The fastMRI database provides about 1500 knee k-space datasets. However, 10374 were used for training. Please clarify. If DICOMs (magnitude images) were used, please clarify how Eq. (4) was applied.
    9. Please provide the network architecture details.
    10. Please provide the training time for all deep learning methods and computation time of motion correction for all methods.
    11. Please provide more details about the other methods. How was the regularization parameter for gradMC selected? Are the SOTA and autofocus+ U-Nets the same?
    12. GradMC seems to perform poorly for all cases and sometimes is worse than the corrupted image. Would results improve if the regularization parameter was optimized?
    13. Fig 3. Visually gradMC seems better than the corrupted image, but the PSNR and SSIM indicate otherwise. Similarly, gradMC looks better than autofocusing but the latter has better PSNR and SSIM. Please comment.
    14. Fig 3. Please add a zoomed section to the “clean image”.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The authors proposed a neural network-based regularization term to improve the autofocusing method. More detail about the method is necessary to appreciate the contribution of this work. It is also important to understand how gradMC, “a powerful optimization-based autofocusing method”, performs so poorly even with minimal motion.

  • Number of papers in your stack

    2

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Somewhat Confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #3

  • Please describe the contribution of the paper

    The work proposes a deep learning prior for MR motion artifact reduction that is used inside an autofocusing strategy. Rigid (translation and rotation) motion parameters are estimated together with scaling variances modelled by a UNet prior. Rigid motion simulations were conducted on the fastMRI database.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Iterative motion correction using the concept of autofocusing with an automatic update of translation, rotation and scaling parameters. Combination of optimization-based refinement of motion parameters (translation, rotation) with an image-derived scaling prior extracted by a trainable U-Net to prevent artifact inpainting or unrealistic restorations.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The proposed method was investigated only on simulated motion, which may not reflect the true underlying motion behaviour and its impact on the image. Rigid motion correction was performed for a single imaging sequence and imaging contrast; hence, the generalizability of the autofocusing concept (task-independent metrics/prior) is not known.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The study was conducted on a publicly available dataset. The method is described and could be reimplemented. The authors do not report if source code will be shared.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    1. Motion artifact reduction is more commonly known as motion correction/compensation/reduction than “demotion”.
    2. Please clearly highlight in Introduction and Abstract that this work only addressed rigid motion artifacts.
    3. Please clarify if motion vectors were selected randomly from the reported ranges. Please specify how many different motion trajectories were generated per image and on what grounds the motion trajectories were selected and taken.
    4. Why were the rotation, translation and scaling parameters not jointly estimated inside one network or at least inside the same optimization steps (instead of shifting between rotation/translation and scaling estimation)?
    5. Was the UNet operated on the magnitude-only or on the complex-valued image?
    6. Were separate networks trained for each motion type or was a joint network trained? If the former, were cross-testings performed (train on motion type A, test on motion type B)?
    7. Was the number of inner autofocusing iterations empirically optimized or selected?
    8. Did the authors investigate the proposed method’s performance on out-of-distribution data, i.e. stronger motion amplitudes, different imaging contrasts etc.? At least for increasing noise levels, the network seems to be only affected mildly.
    9. Was a GPU-based NUFFT implementation used? What is the main computational bottleneck for the proposed approach?
    10. Please report if source code is shared.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The work is nicely and comprehensively depicted. Some methodological choices require further justification or investigation. Overall, the work describes an interesting new concept for addressing motion correction via autofocusing and deep learning.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    6

  • [Post rebuttal] Please justify your decision

    Further methodological details would require more space in the paper to address them properly. The paper has sufficient scientific merit to justify acceptance at MICCAI, also giving the authors the possibility to clarify the remaining points at the conference.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The authors propose a prior for MR motion artifact reduction that is used inside an autofocusing strategy and validate their idea on fastMRI data. All three reviewers gave a positive assessment of the paper, but also raised some important concerns for rebuttal. The authors should address the following points raised by R1, R2 and R3 in the rebuttal, including: 1) the questions on the generalizability of the autofocusing concept (for real motion); 2) the details on the entropy metric; 3) a comparison with state-of-the-art methods (GAN-based methods).

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    5




Author Feedback

We are very grateful to our reviewers and the meta-reviewer who helped to concisely systematize the main concerns from the three supportive reviews. According to the verdict, there are three main recommendations, addressed below:

1) COMPARISON with GANs. We performed the requested comparison and found medGAN [1] to perform slightly better than the baseline U-Net, but noticeably worse than our model (mild corruption: PSNR 25.90 ± 2.5, SSIM 0.613 ± 0.1; severe corruption: PSNR 23.14 ± 2.7, SSIM 0.610 ± 0.1). Visually, medGAN images look better than U-Net ones; however, there is some fine blur in the texture, as well as some new details typically attributed to such generative models. On the contrary, Autofocusing+ takes the physical nature of the motion into account and cannot ‘draw’ new features. We will add these images to Figure 3 for a side-by-side comparison and discussion.

2) METRICS CHOICE. We refer to work [2], where the authors compare 24 different autofocusing metrics for demotion algorithms. Equation (5) contains the L1-norm because Autofocusing is shown to work better with it than with entropy (in contrast to GradMC, where entropy is better). This observation agrees well with the baseline Autofocusing and stems from the properties of the inverse rotation and translation operations. With regard to evaluation metrics, we report the standard PSNR and SSIM, and one of the top-performing metrics for MRI data, VIF, which correlates well with the perception of radiologists. We report superior performance of Autofocusing+ in all metrics.
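To illustrate the distinction between the two focus criteria, here is a small sketch of both metrics. The entropy variant below is one common formulation and may differ from the exact definitions in [2] and in Eq. (5) of the paper:

```python
import torch

def entropy_metric(img: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Image-entropy focus metric (one common variant): normalize pixel
    magnitudes into a distribution and take Shannon entropy. Motion blur
    spreads energy across pixels, raising entropy, so demotion minimizes it."""
    p = img.abs() / (img.abs().sum() + eps)
    return -(p * torch.log(p + eps)).sum()

def l1_metric(img: torch.Tensor) -> torch.Tensor:
    """L1-norm focus metric: at fixed energy, sharper images are sparser
    and hence have a lower L1 norm; the rebuttal reports this variant
    works better inside Autofocusing+."""
    return img.abs().sum()
```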

3) GENERALIZABILITY and REAL MOTION DATA. Unfortunately, there are no open datasets containing pairs of motion-free and motion-corrupted k-space data; admittedly, they are also rather hard to collect. We could locate only one public raw k-space scan of a motion-corrupted human brain (in the repository of GradMC). Even though there is no ground-truth image, nor known motion trajectories, our pre-trained Autofocusing+ model successfully de-motioned that scan, resembling the outcome of our simulated models. We will showcase this real-data result (n=1) and discuss possible fine-tuning strategies should more data become available. Whilst real data are scarce, we believe our study of three types of physically plausible motion trajectories of different strengths and noise levels (including severe and random ones in the presence of high noise) meets the generalizability requirement reasonably well.


MINOR:

R2: “…How many for testing?” For all models, we used 100 images for validation and 53 images for the final test. We will clarify this next to the architecture description.

R2: “…gradMC looks better than autofocusing but the latter has better PSNR and SSIM. Please comment.” Indeed, this reflects the open debate about the best MRI image quality metric outlined above. Works [3, 4] also point to such observations w.r.t. PSNR and SSIM, motivating our selection of the VIF metric to reflect perception. We will expand on this comment to further stimulate the search for an optimal MRI image quality metric.
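For reference, the standard metrics in this debate can be computed with scikit-image as sketched below; this is a generic illustration, not the paper's evaluation code. VIF is not part of scikit-image; third-party implementations exist (e.g., in the sewar package), though they vary in detail:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def report_metrics(clean: np.ndarray, restored: np.ndarray) -> dict:
    """PSNR and SSIM against a motion-free reference. Note that both can
    disagree with visual impressions, which is the discrepancy R2 observed."""
    rng = float(clean.max() - clean.min())
    return {
        "PSNR": peak_signal_noise_ratio(clean, restored, data_range=rng),
        "SSIM": structural_similarity(clean, restored, data_range=rng),
    }
```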

R3: 8. “…performance on out-of-distribution data, i.e. stronger motion amplitudes” As outliers, we studied images with strong noise (>50 dB) and severe motion. The performance of Autofocusing+ decreased linearly with the severity of the artifacts up to a point (too-severe OoD artifacts become unrealistic, and there is information loss after the rotation operation).

R3: We will add the requested clarifications:

  1. (Q3) Motion vectors were selected randomly from the reported ranges at each training step (a toy sampler is sketched after this list).
  2. (Q5) The U-Net operates on the magnitude-only image.
  3. (Q6) We tried all options; separately trained models worked best.
  4. (Q7) Empirically, 80 inner iterations.
  5. (Q9) A GPU-based NUFFT was used (from tomer196/PILOT).
  6. (Q10) Yes, we will make the code public.
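A minimal sketch of such per-step random sampling, with placeholder ranges and a simplified rigid parameterization (not the values or trajectory models reported in the paper):

```python
import torch

def sample_trajectory(n_lines: int, max_shift_px: float = 2.0,
                      max_rot_deg: float = 3.0) -> dict:
    """Draw one random rigid trajectory: per-phase-encode-line shifts and
    rotation angles sampled uniformly from symmetric ranges."""
    def uniform(n: int, amplitude: float) -> torch.Tensor:
        return (torch.rand(n) * 2 - 1) * amplitude

    return {"dx": uniform(n_lines, max_shift_px),
            "dy": uniform(n_lines, max_shift_px),
            "theta": uniform(n_lines, max_rot_deg)}

# Resampled at every training step, so the network never sees the same
# corruption twice:
trajectory = sample_trajectory(n_lines=320)
```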

[1] Upadhyay et al., DOI: 10.1007/978-3-030-87199-4_58
[2] McGee et al., DOI: 10.1002/1522-2586(200002)
[3] Quan, Ghanbari, DOI: 10.1049/el:20080522
[4] Quan, Ghanbari, DOI: 10.1007/s11235-010-9351-x




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors have addressed the concerns on generalizability, the details of the entropy metric, and the comparison with GANs in the rebuttal. The reviewers’ suggestion to accept the paper is reinforced after the rebuttal. Therefore, I recommend acceptance of the paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    5



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Original reviewer scores were all 5 and above and, after a comprehensive revision, one reviewer has since upgraded their score from 5 to 6. One borderline accept and two accepts should be sufficient to warrant acceptance.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    8



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Overall, the authors have carefully addressed the main concerns of the reviewers, including a comparison with GANs and an evaluation of the model performance with more metrics. The authors also discussed the model's generalizability and made an effort to find data with real motion. With all reviewers in agreement and giving positive scores, this paper shall be accepted for publication at MICCAI.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    6


