
Authors

Tianshu Zheng, Weihao Zheng, Yi Sun, Yi Zhang, Chuyang Ye, Dan Wu

Abstract

Diffusion MRI (dMRI) is a powerful tool for probing tissue microstructural properties. However, advanced dMRI models are commonly nonlinear and complex, which requires densely sampled q-space data and is prone to estimation errors. This problem can be addressed with deep learning techniques, especially optimization-based networks. In previous optimization-based methods, the number of iterative blocks was selected empirically. Furthermore, previous network structures were based on the iterative shrinkage-thresholding algorithm (ISTA), which could result in instability during sparse reconstruction. In this work, we proposed an adaptive network with extragradient for diffusion MRI-based microstructure estimation (AEME) by introducing an additional projection of the extra gradient, such that the convergence of the network can be guaranteed. Meanwhile, with the adaptive iterative selection module, the sparse representation process can be modeled flexibly according to specific dMRI models. The network was evaluated on the neurite orientation dispersion and density imaging (NODDI) model on 3T and 7T datasets. AEME showed superior accuracy and generalizability compared to other state-of-the-art microstructural estimation algorithms.
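For context, the extragradient idea referenced here can be sketched, for the generic sparse-coding problem min_x (1/2)||y - Dx||^2 + lambda||x||_1, as a two-step proximal update. This is a textbook-style sketch with an illustrative step size η and dictionary D, not the exact AEME formulation:

```latex
% ISTA takes one proximal-gradient step per iteration:
%   x_{k+1} = prox_{eta*lambda*||.||_1}( x_k - eta * D^T (D x_k - y) )
% The extragradient variant first predicts an intermediate point, then
% re-evaluates the gradient there before the actual update:
\begin{align}
  \tilde{x}_k &= \operatorname{prox}_{\eta\lambda\|\cdot\|_1}\!\left(x_k - \eta\, D^{\top}(D x_k - y)\right),\\
  x_{k+1}     &= \operatorname{prox}_{\eta\lambda\|\cdot\|_1}\!\left(x_k - \eta\, D^{\top}(D \tilde{x}_k - y)\right).
\end{align}
```

The second projection, evaluated at the intermediate point, corresponds to the "additional projection" mentioned in the abstract and is what gives extragradient methods their convergence guarantees under standard step-size conditions.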

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16431-6_15

SharedIt: https://rdcu.be/cVD4X

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

The authors proposed a deep-learning based method to estimate microstructure maps from undersampled diffusion MRI data. The main contribution of the work is the introduction of an extragradient that guarantees the convergence of the network.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

The introduction of the extragradient is not new per se, but it is innovative in the context of estimating microstructure from dMRI. The paper is very well written and easy to follow. The statistical tests are sufficient.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    There is no main weakness. Please see my comments below for minor concerns.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Results can be reproduced

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
1. It is unclear why the downsampled data lack the b=3000 shell. Since the HCP data have 3 shells, it would be better to downsample all 3 shells equally.

2. Does the number of shells or the choice of shells affect the prediction results? For example, would the error be higher if one used only 60 directions in the b=1000 shell and no other shell for testing?

3. While the numerical results show significance, there is little visible improvement in Fig. 3. Is there a better way to make the results in Fig. 3 more convincing? When comparing AEME with MEDN+ or MESC2, it is impossible to notice the differences in Fig. 3.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

Overall a decent paper. The technical difficulty is not high, but the implementation is good and the validation is sufficient. A minor concern is the marginal improvement shown in Fig. 3.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    7

  • [Post rebuttal] Please justify your decision

    The authors addressed all of my concerns.



Review #3

  • Please describe the contribution of the paper

The authors proposed a novel network for microstructure parameter estimation from diffusion MRI. Specifically, they show that their network reduces the prediction error when using few diffusion MRI measurements, compared to other methods. Overall, the proposed method shows the lowest error with respect to the gold-standard parameter maps. Results are shown on two in vivo datasets. The experimental design is sound, and the paper is well written.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

- The experiments and results are well presented and adequately compared to other learning-based methods.
    - The in vivo datasets have different diffusion acquisition protocols and were acquired at 3T and 7T, supporting the generalizability of the findings.
    - The manuscript is well written.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

- The proposed method showed only a slight improvement in fitting error when benchmarked against previous techniques. The manuscript presents incremental work with low novelty.

    - Results are limited to the prediction of gold-standard NODDI maps in an in vivo setting. It would be of interest to measure the performance of the proposed method on parameter maps from other models, although I understand the conference format limitation.
  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

The 3T in vivo dataset is freely available. It is unclear whether the 7T dataset will be made available. The network will be made publicly available on GitHub. The experimental setup, the method description, and the network architecture are sufficiently described.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

Results. The authors used NODDI parameter maps computed on high angular resolution data as the gold-standard reference to test their method and others. This is done and reported with variously subsampled diffusion data. The issue here is that the authors also compared their results with the AMICO method. Although the NODDI toolbox and AMICO both estimate the same parameter maps, they do so in different ways, and as such it is not fair to report the difference between AMICO and the NODDI toolbox as the AMICO error. It is also not clear why the authors selected the NODDI toolbox as the gold standard and used AMICO on the subsampled data. The same method should be used for both to clearly highlight the error due to using fewer diffusion measurements. Please either replace AMICO with the NODDI toolbox results on the subsampled data or use the AMICO maps computed on the fully sampled data as the gold standard.

P7. Figs. 3 and 4 are too small to appreciate the differences in the maps. Please increase their size or focus the figures on selected brain areas.

    P8. Fig 5 is too small. The axis labels and text are too small to read.

P8. Conclusion. The authors report a reduction of the needed diffusion measurements by a factor of 11.25 (270/24). The manuscript suggests the lowest q-space sampling tested was 12 measurements per shell on the 3-shell HCP data, which would correspond to a reduction factor of 270/36. Please correct or otherwise clarify.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The manuscript is well-written and interesting to the MICCAI community. The method and the experimental design are sound.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Somewhat Confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #4

  • Please describe the contribution of the paper

The paper approaches the problem of dMRI microstructure estimation with an adaptive network with extragradient. An integrated algorithm, AEME, is proposed, which can adaptively determine the number of iterative units and incorporates an extragradient unit (EG-Unit).

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The topic is interesting and clinically significant.
2. A series of experiments (an ablation study and performance comparisons against several previous methods) demonstrated the superiority of the proposed method.
    3. The adaptive mechanism for selecting iterative units seems effective and provides an efficient training strategy.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1. The extragradient sparse encoding was originally proposed in Ref. [9], so the technical contribution of AEME is limited. The motivation for using the extragradient in this study is unclear.
    2. Some implementation details are missing (e.g., batch size, maximum number of epochs).
    3. The experimental setting of Fig. 2 needs further clarification. According to the method section, the adaptive mechanism determines the number of iteration blocks. How could the authors run experiments with different fixed numbers of iteration blocks?
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

The authors mentioned that a demo will be provided at https://github.com/Tianshu996/AEME after the work is accepted. The availability of pre-trained models is unclear.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
1. The adaptive mechanism is the main contribution of this study. The authors may provide pseudo-code for the algorithm in the methods section. Fig. 1(b) is not informative. For example, what is the "reuse EG-Unit"? The authors may make this specific (is the network architecture or the trained weights being reused, and how is reuse handled when there are several EG-Units?).
    2. The extragradient-based method was originally proposed for LISTA in [9]. The authors may further clarify the motivation for using it in this study and justify the performance improvement brought by the extragradient.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

The rating is based on the good performance of the proposed approach in dMRI microstructure estimation. As stated in the previous sections, the authors should resolve the concerns raised in Q5 and Q8.

  • Number of papers in your stack

    3

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

This paper proposes a novel method for microstructure estimation. The key novelty is the adoption of the extragradient in a network for improved convergence. The reviewers raised some concerns regarding the comparison with AMICO and the legibility of the figures, which should be addressed in the rebuttal.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    2




Author Feedback

General response for all reviewers and the meta-reviewer: Thank you for your advice. 1) The comparisons between AMICO and the NODDI toolbox were performed in the original AMICO paper, which showed that the two methods are mostly equivalent (Daducci et al., NeuroImage 2015), but there are some slight differences according to the paper and its GitHub issues, especially for AMICO without regularization. Thus, we chose the NODDI toolbox results as our gold standard. We agree that it is not fair to directly compare the NODDI toolbox results on fully sampled data with AMICO results on subsampled data. We have now added the results of the NODDI toolbox on subsampled data. Taking the 270/60 subsampled data as an example, we found that the NRMSE between the fully sampled and subsampled results is about 0.1 for the NODDI toolbox, 0.2 for AMICO, and 0.03 for our method. 2) Due to the space limitation of MICCAI, we over-reduced the image size, but our figures are vector images that can be zoomed without distortion, and the full-size figures can be found at our GitHub link. We agree that the differences between parameter maps from different estimation methods were not very obvious in the current figures. In the camera-ready version, we will replace them with selected slices where the differences are more striking and explicitly point out the differences in Figs. 3 and 4. We will make Fig. 5 more readable by keeping only the 270/60 results and moving the other subsampling factors to the supplementary material.
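The NRMSE values quoted above can be reproduced with a computation of the following kind; the exact normalization the authors used is not stated here, so the range-based normalization, the function name, and the variable names below are illustrative assumptions.

```python
import numpy as np

def nrmse(reference, estimate, mask=None):
    """Root-mean-square error normalized by the dynamic range of the reference.

    reference, estimate: parameter maps (e.g., NODDI ICVF or OD) as ndarrays.
    mask: optional boolean array restricting the comparison to brain voxels.
    """
    ref = reference[mask] if mask is not None else reference.ravel()
    est = estimate[mask] if mask is not None else estimate.ravel()
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    return rmse / (ref.max() - ref.min())

# Hypothetical usage: compare maps estimated from subsampled data against the
# NODDI-toolbox gold standard computed on fully sampled data.
# err = nrmse(gold_standard_map, subsampled_map, mask=brain_mask)
```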

Specific to Reviewer 1: 1) The original NODDI paper demonstrated that a minimum of two b-shells is necessary. To make the protocol clinically friendly (some scanners cannot reach b=3000, or a much longer echo time is needed for a high b-value), we only used the b=1000 and b=2000 shells, following previous papers on deep-learning-based NODDI estimation (Ye et al., MedIA 2017, 2019, and 2020). 2) The number and choice of shells will affect the predicted results, as shown in the original NODDI paper. Following your advice, we added a test using only the b=1000 shell with 60 directions and found that it cannot accurately estimate the microstructural parameters except for OD, which is consistent with Zhang et al. (NeuroImage 2012). 3) Please refer to the general reply.

Specific to Reviewer 2: 1-3) Please refer to the general reply. 4) We obtained the gold standard from 3 shells, but training and testing used only 2 shells. Thus, 12 measurements per shell gives a reduction factor of 270/24 = 11.25.

Specific to Reviewer 3: 1) The motivation for using the extragradient method is to improve convergence and to reduce the computational time of training, because each additional iteration block requires an additional 21B FLOPs. Table 1 and Fig. 2 show that the iterative mechanism with the extragradient achieves better results with fewer iteration blocks. 2) We have added the details that the batch size is 128 and the maximum number of epochs is 800 to the paper. 3) Sorry for the confusion. For a fixed number of iterative blocks, we keep the number of blocks constant throughout training and testing, and we find the optimal number of blocks based on the lowest validation error. 4) The details of Fig. 1(b) are in the supplementary material, and we will add the pseudo-code in a future version or on GitHub. "Reuse EG-Unit" means keeping the optimization-based structure and reusing the trained weights. If there are several EG-Units, we add EG-Units initialized with the trained weights because of the weight-sharing strategy among EG-Units. We have added these details. 5) Our structure is a framework for approximate sparse representation using a network without a predefined dictionary, and our baseline network is an improved variant of LISTA with momentum and a separate dictionary. Therefore, we introduced the extragradient method into our network and showed that the proposed structure with the extragradient achieves a statistically lower estimation error (Table 1).
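To complement points 3) and 4) above, here is a minimal sketch of how an adaptive stack of EG-Units with weight reuse and validation-driven block selection could be organized. The class and function names (AdaptiveEGNetwork, select_num_blocks), the EGUnit interface, and the stopping tolerance are hypothetical illustrations, not the authors' implementation.

```python
import copy
import torch.nn as nn

class AdaptiveEGNetwork(nn.Module):
    """Stack of extragradient units (EG-Units) that is grown adaptively.

    `eg_unit` is a hypothetical module standing in for one iterative
    sparse-coding block with an extragradient (predict/correct) step.
    """

    def __init__(self, eg_unit: nn.Module):
        super().__init__()
        self.units = nn.ModuleList([eg_unit])

    def forward(self, y):
        x = None                      # first block starts from an empty code
        for unit in self.units:
            x = unit(y, x)            # each block refines the sparse code
        return x

    def grow(self):
        # Append a new block initialized from the trained weights of the last
        # one (the "reuse EG-Unit" / shared-weights idea from the rebuttal).
        self.units.append(copy.deepcopy(self.units[-1]))


def select_num_blocks(model, train_fn, val_error_fn, max_blocks=10, tol=1e-4):
    """Add blocks while the validation error improves by more than `tol`."""
    best = float("inf")
    while len(model.units) <= max_blocks:
        train_fn(model)               # (re)train the current stack
        err = val_error_fn(model)
        if best - err <= tol:
            break                     # no meaningful improvement: stop growing
        best = err
        model.grow()
    return model
```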




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This work presents an interesting new method for microstructure estimation, and the authors largely addressed the concerns from the reviewers.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    7



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

The paper presents the use of extragradient sparse encoding, previously proposed in another context, to improve brain microstructure estimation from limited DW-MRI data. As the reviewers and meta-reviewer indicated, the technical contribution is limited, and the specific motivation for using the extragradient for the authors' purpose is unclear. The author rebuttal mentioned that their motivation is to improve results, but this is a very general motivation rather than a specific technical argument. Further, while the authors demonstrated the added value of their approach compared to other alternatives in terms of accuracy, the clinical impact of the method remains questionable.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Reject

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    NR



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

The work adopts an adaptive network with extragradient for microstructure estimation, which tackles an interesting application. Although all reviewers found the technical novelty incremental, the application remains interesting to the community. The comparison against other approaches was explained in the rebuttal to some extent, and some missing details have been clarified. This work could be considered for acceptance.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    8


