Authors
Cheng Peng, Pengfei Guo, S. Kevin Zhou, Vishal M Patel, Rama Chellappa
Abstract
Magnetic Resonance (MR) image reconstruction from under-sampled acquisition promises faster scanning time. To this end, current State-of-The-Art (SoTA) approaches leverage deep neural networks and supervised training to learn a recovery model. While these approaches achieve impressive performances, the learned model can be fragile on unseen degradation, e.g. when given a different acceleration factor. These methods are also generally deterministic and provide a single solution to an ill-posed problem; as such, it can be difficult for practitioners to understand the reliability of the reconstruction. We introduce DiffuseRecon, a novel diffusion model-based MR reconstruction method. DiffuseRecon guides the generation process based on the observed signals and a pre-trained diffusion model, and does not require additional training on specific acceleration factors. DiffuseRecon is stochastic in nature and generates results from a distribution of fully-sampled MR images; as such, it allows us to explicitly visualize different potential reconstruction solutions. Lastly, DiffuseRecon proposes an accelerated, coarse-to-fine Monte-Carlo sampling scheme to approximate the most likely reconstruction candidate. The proposed DiffuseRecon achieves SoTA performances reconstructing from raw acquisition signals in fastMRI and SKM-TEA.
Link to paper
DOI: https://link.springer.com/chapter/10.1007/978-3-031-16446-0_59
SharedIt: https://rdcu.be/cVRT2
Link to the code repository
www.github.com/cpeng93/DiffuseRecon
Link to the dataset(s)
N/A
Reviews
Review #1
- Please describe the contribution of the paper
In this paper, the authors proposed a novel diffusion model-based MR reconstruction method (DiffuseRecon), which is robust to the sampling pattern and acceleration factors. The proposed DiffuseRecon achieves SoTA performances in two large public datasets: fastMRI and SKM-TEA.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The authors leverage the recent diffusion model-based generative methods to learn the distribution of fully-sampled images. Therefore, the proposed method doesn’t rely on a certain sampling pattern or acceleration factor.
- The authors conducted thorough ablation studies and experiments across different sampling factors, and provide sufficient detail on training and inference. Fig. 3 shows the robustness of the proposed method. Solid work.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- One work that the authors should have compared against, or at least cited, is: Robust Compressed Sensing MRI with Deep Generative Priors, which uses a GAN to learn the distribution p(y_full) referenced in the paper and uses Langevin dynamics to perform the reconstruction. Please comment on the advantages of using a diffusion model compared to a GAN.
- If possible, please add the std statistics to Table 1.
- As mentioned in the paper, the inference time of DiffuseRecon is quite long; can you suggest some potential ways to accelerate it?
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The authors provide code in the supplementary material.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
- Please add the std statistics to Table 1.
- If possible, it would be good to compare with, or at least cite, Compressed Sensing MRI with Deep Generative Priors; their code is published.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
7
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
This paper proposes a novel diffusion-model-based MR reconstruction method that is robust to sampling patterns and acceleration factors. The authors give a thorough analysis. Great work!
- Number of papers in your stack
8
- What is the ranking of this paper in your review stack?
1
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
This paper proposes a novel MR reconstruction framework based on the diffusion model.
Experiments show that the model can outperform other baselines consistently.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
This paper provides the first data point, to my knowledge, where the diffusion model is introduced to MR reconstruction framework.
Experiments show that the model can outperform other baselines consistently, especially the proposed method can adapt different under-sample rates.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
This paper is unclear about some implementation details. The authors claim, “We follow [1] with some modifications to train a diffusion model that generates MR images.” What are these modifications? Since this step is the foundation of the work, I cannot understand the training process from the current version.
The authors also set all methods to produce 2 consecutive slices, which differs from the original MR reconstruction baselines; why? All figures in the current paper, along with the formulations, are drawn for generating a single slice. Are the other models (D5C5 and KIKI) also modified to produce 2 consecutive slices?
The diffusion model is computationally expensive [1, 2]. A comparison of running time and parameter counts should be included in Table 1. Although the authors discuss the problem on page 8, excessive computational cost can also limit the applicability of the proposed method.
[1] Nichol A Q, Dhariwal P. Improved denoising diffusion probabilistic models[C]//International Conference on Machine Learning. PMLR, 2021: 8162-8171.
[2] Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models[J]. Advances in Neural Information Processing Systems, 2020, 33: 6840-6851.
- Please rate the clarity and organization of this paper
Poor
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
This paper is unclear about some implementation details about model training and settings.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
The pretraining details and the motivation for generating 2 consecutive slices should be provided.
Since the current method is built on a U-Net, the authors could discuss whether designing a more advanced backbone via NAS [1][2] can help improve inference efficiency and performance.
[1] Huang Q, Xian Y, Wu P, et al. Enhanced MRI reconstruction network using neural architecture search[C]//International Workshop on Machine Learning in Medical Imaging. Springer, Cham, 2020: 634-643.
[2] Yan J, Chen S, Zhang Y, et al. Neural Architecture Search for compressed sensing Magnetic Resonance image reconstruction[J]. Computerized Medical Imaging and Graphics, 2020, 85: 101784.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
4
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Though the combination of the diffusion model and an MR reconstruction framework may interest the MICCAI community, the lack of important details makes this paper difficult to follow.
- Number of papers in your stack
5
- What is the ranking of this paper in your review stack?
2
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
The paper addresses how to achieve a robust and reliable model for reconstructing MR images from under-sampled acquisitions. The proposed DiffuseRecon model consists of two components: 1) a k-space guidance module that incorporates the observation into the denoising process and allows for stochastic sampling from the marginal distribution E[q(y_full | x_obs)]; 2) a coarse-to-fine sampling module that accelerates the selection of reconstructed images and gives an approximation of the noise on reconstructed samples. The goal is to generate all candidate reconstruction samples of MR images and accelerate the reconstruction process.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The k-space guidance module uses $x_obs$, perturbed by zero-mean Gaussian noise, as the condition and mixes it with the reconstructed image in k-space, allowing for stochastic sampling.
- The coarse-to-fine sampling module accelerates the selection of reconstructed images by averaging generated samples to estimate E[q(y_full | x_obs)].
- The ablation study supports the benefit of using each module. The evaluation experiments show that the proposed method both improves performance and reduces time cost.
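The averaging step described above can be sketched as follows; this is an illustrative reading only, where `sample_coarse` and `refine` are hypothetical stand-ins for the paper's actual coarse sampler and refinement module:

```python
import numpy as np

def monte_carlo_reconstruction(sample_coarse, refine, n_samples=8):
    """Approximate the most likely reconstruction by averaging
    stochastic coarse samples, then running a short refinement pass."""
    samples = np.stack([sample_coarse() for _ in range(n_samples)])
    mean_estimate = samples.mean(axis=0)  # Monte-Carlo estimate of E[y_full | x_obs]
    pixel_std = samples.std(axis=0)       # per-pixel spread, usable as an uncertainty map
    return refine(mean_estimate), pixel_std

# Toy usage: the coarse sampler returns ground truth plus noise,
# and refinement is the identity.
rng = np.random.default_rng(0)
truth = rng.standard_normal((8, 8))
recon, std = monte_carlo_reconstruction(
    lambda: truth + 0.1 * rng.standard_normal((8, 8)),
    lambda x: x,
    n_samples=64,
)
```

Averaging many stochastic samples both reduces the variance of the estimate and exposes a per-pixel uncertainty map for free, which matches the paper's stated goal of explicitly visualizing different potential solutions.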
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The description of the refinement in the coarse-to-fine module is not very clear.
- The implementation detail of training DiffuseRecon is not elaborated.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The authors were able to make the code and data public, which would benefit the community in recognizing this problem. Also, the evaluation metrics are clearly described, so the results are convincing.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
- The refinement step in the coarse-to-fine module could be elaborated further: Is $epsilon_theta$ the same network as the one used in the k-space guidance module and the coarse sampling step? How is $x_obs$ involved in the conditioning? How is $T_refine$ chosen?
- The author should provide the objective function of training the neural network of $epsilon_theta$.
- The author should elaborate on 1) how many samples are used to train the network $epsilon_theta$ and 2) how much time it takes to train the network.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
This work combines a conditional reconstruction module and an accelerated refining module to improve the reconstruction of MR images with higher efficiency. Sufficient experiments have been done to support the benefit.
- Number of papers in your stack
4
- What is the ranking of this paper in your review stack?
1
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
The authors proposed a novel diffusion model-based MR reconstruction method. All three reviewers recognize the originality of the work and appreciate the extensive evaluations and ablation studies. I believe the minor concerns on clarity and parameters can be addressed in the final version and therefore recommend acceptance of the work.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
3
Author Feedback
We thank all reviewers for recognizing the value of our work. We will amend the paper with more complete references and comparisons. Below are the specific responses:
Reviewer 1: Please comment on the advantages of using a diffusion model compared to a GAN. A key issue with GAN-based reconstruction is the evaluation formulation. Specifically, in MR reconstruction the most clinically important metric is whether the image is reconstructed correctly compared to its fully sampled version, e.g. as measured by PSNR.
Conditional GAN-based reconstruction uses an additional adversarial loss that often points in a different optimization direction than a pixel-wise loss; as such, the results are generally a compromise between the two losses: PSNR is lower, yet the image is not very realistic either and often contains artifacts.
It is theoretically possible to use a GAN-inversion method to perform reconstruction based on the observed signal. This generally involves training an inversion model that maps a down-sampled image to the latent space; such an inversion model is typically implemented as a feed-forward, single-step neural network and is therefore deterministic. It thus loses an important capability compared to diffusion models: uncertainty estimation.
Can you provide some potential ways to accelerate DiffuseRecon?
Many works have focused on accelerating diffusion models and are worth investigating for their utility with respect to DiffuseRecon:
- Denoising Diffusion Implicit Models, ICLR 2021
- Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling, NeurIPS 2021
- Progressive Distillation for Fast Sampling of Diffusion Models, ICLR 2022
- Come-Closer-Diffuse-Faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction, CVPR 2022
An important question to ask for acceleration methods, including for our work, is whether acceleration biases the generation process such that the uncertainty estimation becomes unreliable.
Reviewer 2:
We thank Reviewer 2 for pointing out some potential points of confusion. Unfortunately, due to the MICCAI submission length limit, we could not fit in more implementation details; however, we will release our exact implementation for the community to use. In terms of training modifications, we mainly changed the model in DDPM so that it can accommodate a two-channel input for complex images and two consecutive slices (4 channels in total).
The reason we generate two consecutive slices stems mostly from computational cost. Specifically, we can achieve similar results with single-slice generation; however, as demonstrated in our ablation study, generating multiple samples leads to less biased reconstruction results. In this context, generating two slices provides twice the efficiency in a single inference. There appears to be an optimal number of slices, as reconstructing more slices simultaneously can degrade individual slice quality; this may be due to limits in model capacity and can be further investigated.
We will add these details with the additional space allowed for the camera-ready version. Please refer to the comments to Reviewer 1 on potential ways of accelerating our method.
Reviewer 3:
Yes, $epsilon_theta$ is the same. During the refine stage, x_obs directly replaces the corresponding k-space values of the generated image in the diffusion process. We chose T_refine based on the trade-off between output quality and compute time, and found that results improve only marginally, if at all, with larger T_refine.
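A minimal sketch of this k-space replacement step, under the simplifying assumptions of a single-coil complex image and a binary Cartesian sampling mask (function and variable names here are hypothetical, not from the paper's code):

```python
import numpy as np

def kspace_guidance(x_gen, k_obs, mask):
    """Replace the generated image's k-space values at the observed
    locations with the acquired samples, keeping the rest.

    x_gen : complex image produced by the current diffusion step
    k_obs : acquired (under-sampled) k-space, zero-filled elsewhere
    mask  : binary array, 1 where k-space was actually sampled
    """
    k_gen = np.fft.fft2(x_gen)                 # generated image -> k-space
    k_mix = mask * k_obs + (1 - mask) * k_gen  # enforce the observed signal
    return np.fft.ifft2(k_mix)                 # back to image space

# Toy usage: a 4x4 complex image with half of k-space observed.
rng = np.random.default_rng(0)
x_gen = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x_true = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
mask = np.zeros((4, 4))
mask[:2] = 1                                   # sample the first two k-space rows
k_obs = mask * np.fft.fft2(x_true)
x_out = kspace_guidance(x_gen, k_obs, mask)
# The observed k-space lines are exactly preserved in the output.
assert np.allclose(mask * np.fft.fft2(x_out), k_obs)
```

The key property is data consistency: wherever the mask is 1, the output's k-space equals the acquisition, while the diffusion model fills in the unobserved frequencies.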
Following DDPM, our diffusion model is trained by mean matching, i.e. $epsilon_theta$ estimates the noise on the image and is minimized pixel-wise against the ground-truth noise we injected into the image.
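This mean-matching objective is the standard DDPM noise-prediction loss; a hedged sketch follows, where `eps_theta` is a placeholder for the actual network and `alpha_bar_t` the cumulative noise schedule at step t:

```python
import numpy as np

def ddpm_loss(eps_theta, y_full, alpha_bar_t, rng):
    """One DDPM training step: corrupt a fully sampled image with known
    Gaussian noise, then penalize the network's noise estimate
    pixel-wise against the noise that was actually injected."""
    eps = rng.standard_normal(y_full.shape)  # ground-truth injected noise
    y_t = np.sqrt(alpha_bar_t) * y_full + np.sqrt(1 - alpha_bar_t) * eps
    return np.mean((eps_theta(y_t) - eps) ** 2)  # pixel-wise MSE

# Toy check with a trivial "network" that always predicts zero noise;
# its loss should sit near the variance of the injected noise (~1).
rng = np.random.default_rng(0)
y_full = rng.standard_normal((2, 32, 32))  # e.g. a 2-slice training sample
loss = ddpm_loss(lambda y_t: np.zeros_like(y_t), y_full, 0.5, rng)
```

During training, t (and hence alpha_bar_t) is drawn uniformly per step so the network learns to denoise at every noise level.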
As described in the dataset section, we use 973 subjects from fastMRI for training and 155 subjects from SKM-TEA. Using four NVIDIA A5000 GPUs, training completes in 12 hours.