Authors
Xiaobin Hu, Ruolin Shen, Donghao Luo, Ying Tai, Chengjie Wang, Bjoern H. Menze
Abstract
Considering the difficulty of obtaining complete multi-modality MRI scans in some real-world data acquisition situations, synthesizing MRI data is a highly relevant and important way to complement diagnostic information in clinical practice. In this study, we present a novel MRI synthesizer, called AutoGAN-Synthesizer, which automatically discovers generative networks for cross-modality MRI synthesis. Our AutoGAN-Synthesizer adopts gradient-based search strategies to explore the generator architecture by determining how to fuse multi-resolution features, and utilizes GAN-based perceptual searching losses to handle the trade-off between model complexity and performance. Our AutoGAN-Synthesizer can search for a remarkable, light-weight architecture with only 6.31 M parameters in just 12 GPU hours. Moreover, to incorporate richer prior knowledge for MRI synthesis, we derive K-space features containing low- and high-spatial-frequency information and incorporate them into our model. To the best of our knowledge, this is the first work to explore AutoML for cross-modality MRI synthesis, and our approach is also capable of tailoring networks to either multiple modalities or just a single modality as input. Extensive experiments show that our AutoGAN-Synthesizer outperforms state-of-the-art MRI synthesis methods both quantitatively and qualitatively. Code will be made publicly available.
Link to paper
DOI: https://link.springer.com/chapter/10.1007/978-3-031-16446-0_38
SharedIt: https://rdcu.be/cVRTx
Link to the code repository
N/A
Link to the dataset(s)
N/A
Reviews
Review #1
- Please describe the contribution of the paper
The authors propose a new MRI synthesizer, called AutoGAN-Synthesizer, which automatically discovers generative networks for cross-modality MRI synthesis.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The proposed model can search for a remarkable, light-weight architecture with only 6.31 M parameters in 12 GPU hours.
- Prior knowledge is introduced into AutoGAN for MRI synthesis.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1: The main ideas of the paper are known or incremental advances over past work. Moreover, incorporating K-space has already been employed in some medical image processing works.
2: The number of images is too small to obtain meaningful training results: there are only 75 samples from the BRATS2018 dataset and 25 samples from the IXI dataset for training.
3: The experimental setup details are incomplete. For example, the values of the loss weights (e.g., λ_adv and λ_complexity) in Equation (3) are unclear. The authors should provide the hyper-parameter values for reproduction.
4: The code and the data are not available to aid reproducibility.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The code and the data are not available to aid reproducibility.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
1: More experiments are needed: comparisons with more recent state-of-the-art cross-modality methods, and an analysis of the convergence of the GAN model. The ablation studies are also insufficient, for example on the learning rate.
2: There is no comparison between InstanceNorm and BatchNorm, although InstanceNorm is often claimed to be superior to BatchNorm in low-level image processing tasks.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Neither GAN-based MRI synthesis nor neural architecture search is new, but their combination seems new.
- Number of papers in your stack
4
- What is the ranking of this paper in your review stack?
1
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Not Answered
- [Post rebuttal] Please justify your decision
Not Answered
Review #3
- Please describe the contribution of the paper
Aiming at recovering realistic texture while constraining model complexity, the authors propose a GAN-based perceptual searching loss that jointly incorporates content loss and model complexity. To incorporate richer priors for MRI synthesis, they exploit MRI K-space knowledge containing low-frequency (e.g., contrast and brightness) and high-frequency (e.g., edges and content details) information to guide the NAS network in extracting and merging features. Considering that the low and high resolutions of multi-scale networks capture global structure and local details respectively, they use a novel multi-scale module-based search space specifically designed for multi-resolution fusion. Their search strategy can produce a light-weight network with 6.31 M parameters from the module-based search space, taking only 12 GPU hours, and achieves state-of-the-art performance.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
1) The authors propose a lightweight but effective model for cross-modality MRI synthesis. 2) They provide comprehensive experiments to verify the advantages of their method. 3) They propose several loss functions and present visualized results, which is convincing.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Table 1 is poorly placed on page 5, which makes the paper hard to read. The authors compare only model weights, not the overall computational complexity; many models contain fewer parameters but take longer at inference.
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
I believe their experiments are reproducible.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
This paper is well organized. The authors tackle a challenging task. They provide comprehensive experiments to verify the advantages of their method, with detailed methods in the supplementary materials. I find Equation (2) hard to read. First, "Upsampled Fr" is not an equation. Second, the authors do not explain how the upsampling and downsampling are implemented. Please explain these methods.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
I focus on the experimental results and methods. Both of them are good.
- Number of papers in your stack
5
- What is the ranking of this paper in your review stack?
1
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
6
- [Post rebuttal] Please justify your decision
My decision is accept. The authors' rebuttal fully addresses my concerns.
Review #4
- Please describe the contribution of the paper
This paper presented a novel AutoGAN-Synthesis framework for cross-modality MRI synthesis, designing a generator that extracts and fuses multi-resolution features. Moreover, the authors proposed a GAN-based perceptual searching loss to balance model complexity and synthesis performance. Experiments on two datasets demonstrate that the proposed method outperforms other cutting-edge methods qualitatively and quantitatively.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The direction of investigating neural architecture search for cross-modality MRI synthesis is interesting.
- The experiments are comprehensive.
- Code will be made publicly available.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The technical novelty is limited and incremental: as claimed in this paper, the two main contributions of this work are the GAN-based perceptual searching loss and K-space learning, yet both are off-the-shelf techniques.
- The authors use 2D axial images for training and testing; they did not consider consistency in the coronal and sagittal views.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Since the authors mention they will release the code publicly, it is highly likely that this paper can be reproduced.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
- It is unclear whether the proposed method requires training different models for different numbers of input MRI modalities, or whether a single uniform model is trained regardless of how many input modalities there are.
- How is the K-space data acquired? I think those two public datasets only provide magnitude images.
- As K-space learning has been proposed for fastMRI in other works, I do not think adopting this strategy for cross-modality MRI synthesis can be counted as a contribution.
- In Figures 3 and 4, it would be better to show the synthesized T2 images produced by the different methods, not just the difference maps.
- In Figure 6, part of the image is missing in the Flair input, yet after adding adversarial and K-space learning, some content is predicted in this region. Why?
- Why do the authors choose to synthesize T2 from Flair, rather than the opposite?
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
4
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Please check the detailed comments
- Number of papers in your stack
6
- What is the ranking of this paper in your review stack?
4
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
5
- [Post rebuttal] Please justify your decision
I appreciate the authors' feedback. They have addressed most of my concerns, except that the explanation of the image completion is still not convincing to me. It would be better to remove those non-corresponding image pairs from the training and test datasets, or the authors could consider cropping the corresponding part of the T2 image to align it with the T1 image. Regarding the four contributions the authors list in the paper, it would be better to remove the first two, the GAN-based perceptual searching loss and K-space learning, as those are off-the-shelf techniques. After all, the proposed method achieves good performance on an important clinical application and is the first work to explore AutoML for cross-modality MRI synthesis. I would like to see this work published and the code released in the future, which will benefit the medical imaging community.
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
This paper proposed a lightweight AutoGAN-Synthesis framework for cross-modality MRI synthesis based on NAS techniques. Two reviewers raised concerns about the technical contribution of the proposed work and pointed out the two key components, i.e., GAN-based perceptual searching loss and K-space learning are established techniques. They also had concerns about the validation of the proposed method, as only a small number of samples from BRATS2018 and IXI datasets were used, and the proposed method was applied on 2D axial images although MR images are 3D. It is also noticed that the proposed method was not compared with any other existing NAS methods. During rebuttal, please carefully address the above concerns and other issues raised by the reviewers.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
3
Author Feedback
We thank the reviewers for their valuable comments and address the raised issues below.
1. Comparison with other NAS methods and a random policy (Meta) As a gold-standard baseline, a random policy is used here, randomly sampling 20 models from our search space. As shown in Fig. 7, our AutoGAN searches for superior networks with smaller model size and better performance. We also compare AutoGAN with a NAS method for natural image synthesis (HiNAS) on the same BRATS dataset (Flair->T2) using the open-source implementation. The quantitative results of HiNAS [PSNR: 24.49 dB; SSIM: 0.88] are worse than AutoGAN's [PSNR: 25.54 dB; SSIM: 0.90]. [HiNAS] Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration, 2021
2. Novelty (R1, R3) The key novelty of our paper is a lightweight AutoGAN pipeline that adaptively constructs the neural architecture based on our novel multi-scale module-based search space. It is capable of learning how to extract and merge features according to different multiple input modalities. To the best of our knowledge, this is the first work to explore AutoML to tackle the drawback of using one fixed network for many MRI synthesis tasks.
3. Dataset (R1) Following [7,34], we use the same number of samples as [7] on the BRATS dataset (low-grade glioma patients with visible lesions) and the IXI dataset (brain tissue, free of artifacts) for a fair comparison. Dar et al. [7] verified that this dataset setting can well reflect statistical significance (p-value) and fairly evaluate model performance.
4. More experiments (R1) We have compared AutoGAN with recent 3D and NAS cross-modality methods (in items 1 and 9). We also conducted further ablation studies, adjusting the initial learning rate from 0.025 to 0.01: the convergence of AutoGAN becomes slower but more stable, and the PSNR stays almost the same at 25.50 dB.
5. InstanceNorm (R1) In our paper, we focus on exploring NAS to find an optimal network, so we use common layers to show the effectiveness of our strategy. InstanceNorm is a promising option for our future low-level work.
6. Equations (R1, R2) In Eq. 2, the upsample block consists of Conv2D, BatchNorm, and nearest-neighbour Upsample operations; the downsample block consists of multiple strided Conv2D layers, BatchNorm, and a ReLU activation. In Eq. 3, we adopted λ_adv = 1e-4 and λ_complexity = 0.01 during the search process, and λ_complexity = 0 during the training process.
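As intuition for the non-learned resampling steps described above, here is a minimal NumPy sketch. This is only an illustrative assumption: the learned Conv2D/BatchNorm layers of the actual blocks are omitted, and the function names are hypothetical.

```python
import numpy as np

def nearest_upsample(x, factor=2):
    """Nearest-neighbour upsampling: repeat each pixel `factor` times
    along both spatial axes (the non-learned part of the upsample block)."""
    return x.repeat(factor, axis=-2).repeat(factor, axis=-1)

def strided_subsample(x, stride=2):
    """Keep every `stride`-th pixel; a strided Conv2D additionally applies
    a learned filter before this subsampling."""
    return x[..., ::stride, ::stride]

feat = np.random.rand(1, 8, 8)      # a toy single-channel feature map
up = nearest_upsample(feat)         # shape (1, 16, 16)
down = strided_subsample(feat)      # shape (1, 4, 4)
```

Stride-2 subsampling of a 2x nearest-neighbour upsampled map picks the first copy of each repeated pixel, so it recovers the original feature map exactly.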
7. Typesetting and reproduction (R1, R3) We will correct all typesetting, including the figures. More implementation details will be added and the code will be publicly released.
8. Inference time (R2) We measure inference time on a Tesla V100. Under the same configuration, the inference time of our AutoGAN is 0.005 s, better than CycleGAN (0.015 s), Pix2pixGAN (0.006 s), cGAN (0.054 s), and HiNet (0.007 s).
9. 3D vs. 2D model (R3) As the reviewer notes, a 3D model can embed spatial contextual information, but we aim to develop a lightweight architecture rather than a heavy 3D model. We nevertheless compared against a variant using a 3D U-Net (size: 16 M) as the generator; its PSNR of 22.21 dB is worse than AutoGAN's.
10. Synthesizing T2 from Flair (R3) We focus on synthesizing T2 contrast, which is complementary to T1 contrast and offers better information for investigating fluid-filled structures within tissue.
11. Uniform search architecture (R3) We designed one search architecture for all situations. Given different numbers of input MRI modalities, our search model automatically finds an optimal network structure for subsequent training.
12. K-space (R3) K-space is the spatial-frequency representation of MRI images. K-space data can be acquired by applying a Fourier transform to the given MRI images.
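As a concrete illustration of this answer (a sketch of standard practice, not the authors' exact implementation; the helper names and the circular mask radius are assumptions), K-space can be computed from a magnitude image with a centered 2-D FFT and split into the low-frequency (contrast/brightness) and high-frequency (edges/details) parts mentioned in the paper:

```python
import numpy as np

def to_kspace(image):
    """Centered 2-D Fourier transform: image space -> K-space."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))

def from_kspace(kspace):
    """Inverse centered transform: K-space -> image space."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

def split_frequencies(kspace, radius=16):
    """Partition K-space with a centered circular mask: the low-frequency part
    carries contrast/brightness, the high-frequency part edges/details."""
    h, w = kspace.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return kspace * mask, kspace * ~mask

image = np.random.rand(64, 64)            # a toy magnitude slice
k = to_kspace(image)
low, high = split_frequencies(k)
recovered = from_kspace(low + high).real  # the split is lossless
```

Because the mask and its complement partition K-space, recombining the two parts and inverting the transform recovers the original image up to floating-point error.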
13. Interpretation of image completion (R3) The completion mainly arises from the joint role of the perceptual and adversarial losses, which detect the incomplete MRI region; AutoGAN then tries to compensate for the missing part guided by the K-space priors.
Post-rebuttal Meta-Reviews
Meta-review # 1 (Primary)
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
We thank the authors for providing an experimental comparison with another NAS method, and the meta-reviewer considers the technical novelty of this paper acceptable. However, the authors' clarification about using a small number of samples from the two public datasets is not convincing. On BRATS2018, only 75 LGG subjects were used for training and 15 LGG subjects for validation, a very small portion of the 285 subjects in the BRATS2018 training set. More importantly, most existing models for cross-modality MRI synthesis were trained and evaluated on the whole BRATS2018 dataset, making it difficult to benchmark the proposed method against existing ones. Moreover, the IXI dataset contains 578 subjects, yet only 40 were used in the experiments, without explanation. In the rebuttal, the proposed method is claimed to show better PSNR than a 3D model; however, it is questionable whether this is a consequence of the small training set used in this paper.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Reject
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
10
Meta-review #2
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
Based on NAS methods, this work reported a lightweight AutoGAN-Synthesis framework for cross-modality MRI synthesis. Two reviewers expressed concerns about the work's technical contribution, pointing out that the two essential components, the GAN-based perceptual searching loss and K-space learning, are well-established approaches. They were also concerned about the validation of the proposed approach, because only a small number of samples from the BRATS2018 and IXI datasets were employed, and the method was applied to 2D axial images even though MR images are 3D. It is also worth noting that the approach was not tested against any other current NAS methods.
Although the authors have done a reasonable rebuttal, I want to point out that the authors submitted an overlength Supplementary File (7 pages with methods descriptions etc.), which is a clear violation of the submission policy (max. 2 pages allowed without method description).
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Reject
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
5
Meta-review #3
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
This paper presents a NAS-based cross-modal MRI generation network. The reviewers raised some questions about the technical contributions, experimental validation, and experimental comparisons of the paper. After the rebuttal, all reviewers unanimously recommended acceptance and stated that the authors' feedback fully addressed the problems raised.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
1
Meta-review #4
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
AC recommendations on this paper were split, with a majority vote for rejection, while the reviewers expressed consensus in supporting acceptance after the rebuttal. The PCs thus assessed the paper reviews, meta-reviews, the rebuttal, and the submission. All of the reviewers highly appreciated the first exploration of AutoML for cross-modality MRI synthesis and the comprehensive experiments. Two of the reviewers also expressed appreciation of the authors' rebuttal. Despite the suggested areas of improvement, the PCs agree with the convincing arguments of the reviewers and AC that the contribution of the paper outweighs its weaknesses. The final decision for the paper is accept.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
NR