
Authors

Xiajun Jiang, Zhiyuan Li, Ryan Missel, Md Shakil Zaman, Brian Zenger, Wilson W. Good, Rob S. MacLeod, John L. Sapp, Linwei Wang

Abstract

Clinical adoption of personalized virtual heart simulations faces challenges in model personalization and expensive computation. While an ideal solution is an efficient neural surrogate that at the same time is personalized to an individual subject, the state-of-the-art is either concerned with personalizing an expensive simulation model, or learning an efficient yet generic surrogate. This paper presents a completely new concept to achieve personalized neural surrogates in a single coherent framework of meta-learning (metaPNS). Instead of learning a single neural surrogate, we learn the process of learning a personalized neural surrogate using a small number of context data from a subject, in a novel formulation of few-shot generative modeling underpinned by: 1) a set-conditioned neural surrogate for cardiac simulation that, conditioned on subject-specific context data, learns to generate query simulations not included in the context set, and 2) a meta-model of amortized variational inference that learns to condition the neural surrogate via simple feed-forward embedding of context data. At test time, metaPNS delivers a personalized neural surrogate by fast feed-forward embedding of a small and flexible number of data available from an individual, achieving – for the first time – personalization and surrogate construction for expensive simulations in one end-to-end learning framework. Synthetic and real-data experiments demonstrated that metaPNS was able to improve personalization and predictive accuracy in comparison to conventionally-optimized cardiac simulation models, at a fraction of computation.

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16452-1_5

SharedIt: https://rdcu.be/cVRYI

Link to the code repository

https://github.com/john-x-jiang/epnn

Link to the dataset(s)

https://drive.google.com/drive/folders/1j3IkXeqSCI0BJGf1FUD2KBQ0e-0LbhkP?usp=sharing


Reviews

Review #1

  • Please describe the contribution of the paper

This paper presents a new concept to achieve a personalized neural surrogate in an end-to-end meta-learning framework; the proposed method shows good performance on cardiac simulation.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The proposed learning framework is new as it learns the process of learning a personalized neural surrogate from a small number of data available from a subject.
    2. The proposed method is evaluated in both synthetic and real-data experiments.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The compared methods are not explained clearly or cited.
    2. The conclusion does not reflect the contribution clearly.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    It would be helpful if the code and the data were released; otherwise it might be impossible to reproduce the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    1. The compared methods should be discussed and analysed to emphasize the contribution of this work.
    2. The conclusion should summarize the contribution of the work effectively.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, this paper contributes a novel idea and shows impressive results; it would be great if it were presented better.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #2

  • Please describe the contribution of the paper

    The paper uses Bayesian meta-learning for neural cardiac simulation. Specifically, it uses a graph CNN to model the 3D geometry of the heart and Gated Recurrent Units for temporal modeling, which are optimized through amortized variational inference.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper targets cardiac simulation, an important problem in electrophysiology, and sophisticatedly designs advanced machine learning algorithms, which perform better than the baselines.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    This paper is not self-contained and extremely hard to follow. For example, on page 3, what are the dimensions of s, theta, and x? And how do they correspond to the notations in Fig. 1? On page 4, what is the meaning of the context observations D_c? In Table 1, 1.1 is less than 4.4, so why is the latter marked bold?

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    In the Reproducibility Response, it seems the authors will release the corresponding code. However, the reviewer did not find the description of code in the main text.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    Fig. 1 is confusing: are the four cardiac surfaces D_c or D_x? On the right, \hat{D_c} and \hat{D_x} are not explained anywhere in the paper. The KL notations in the middle are not correct either. On page 4, what is the graph CNN model used here? The description of this GNN is very vague and more detail is needed. In Table 1, comparing metaPNS and PNS, why does metaPNS typically have much better CC/DC but much worse MSE? There is a typo in the title.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The proposed meta-model shows significantly worse MSE, according to Table 1, so the proposed meta-learning strategy has not been sufficiently corroborated.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    3

  • Reviewer confidence

    Somewhat Confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #4

  • Please describe the contribution of the paper

    In this paper, a Bayesian meta-learning based method is proposed for few-shot cardiac simulation. The generative GCNN model is conditioned on subject-specific input data, and the output is personalized cardiac simulation results. A GRU model is used for temporal modeling.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The meta-learning framework demonstrates better adaptation performance when the support set is small.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) Some naming is a little confusing. The common terminology for few-shot learning and meta-learning is “meta-train”, “meta-validate”, “meta-test”, query and support set, etc. It is better to review the methodology and make it clearer.
    2) It is not clear why Bayesian meta-learning is used here. There are other popular meta-learning frameworks, like model-agnostic meta-learning (MAML), which is used in few-shot learning tasks like classification, segmentation, and also cardiac modeling (motion estimation). Reference: Finn, Chelsea, Pieter Abbeel, and Sergey Levine. “Model-agnostic meta-learning for fast adaptation of deep networks.” International Conference on Machine Learning. PMLR, 2017.
    3) The table results are not very clear. The meaning of the numbers in Table 1 is not clear to me; it seems PNS has a lower MSE on the target set compared with metaPNS.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    It looks like the dataset is not publicly available and there is no plan to release the code.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    Refer to the weaknesses. 1) It is better to review the terminology and make it more consistent with that widely used in meta-learning. 2) Add some discussion on the motivation for using Bayesian meta-learning over other methods like MAML.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The major factors are the unclear motivations and the confusing results.

  • Number of papers in your stack

    8

  • What is the ranking of this paper in your review stack?

    6

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    Most of the concerns have been resolved and I increased my rating; however, a complete proofreading to correct typos is strongly suggested, if accepted.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The proposed few-shot method in this paper seems to achieve good results. However, reviewers have identified significant weaknesses related to the clarity and the main motivation behind the proposed approach. Specifically, the first reviewer thinks the authors did not clearly explain the contribution and the details of the experimental comparisons. Reviewer 2 raised the concern that the paper is extremely hard to follow, with confusing notations and tables. Reviewer 4 echoed this concern about confusing terminology and unclear motivation for using Bayesian meta-learning here. After considering all input, the area chair thinks that the paper did not meet the high MICCAI standard. We regret that the paper cannot be accepted in its current form.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    NR




Author Feedback

We thank the reviewers and AC for constructive feedback. We clarify major confusions below.

Critical typo leading to higher MSE in metaPNS (R2/R4/MR): A major concern came from Table 1 where a comparison method (PNS) had a lower MSE (1.2±0.29e-4) than the presented metaPNS (4.4±1.1e-4). This was due to a critical typo: we had mistakenly written e-4 instead of e-3 for the MSE of PNS. The actual results are 4.4±1.1e-4 (metaPNS) vs. 1.2±0.29e-3 (PNS). We apologize for this critical typo and hope its correction will clarify the major concern: that metaPNS did significantly outperform all comparison methods in meta-testing target sets.

Clarity in comparison methods and contribution (R1/MR): This work presents the VERY FIRST framework for learning how to obtain a personalized neural surrogate (PNS) of cardiac simulations, using few-shot data from a subject. The resulting PNS can then efficiently predict how a subject’s heart responds to different pacing. The closest related works are those utilizing expensive optimizations to personalize the parameters of a cardiac simulation model, and then using the expensive personalized model in predictions. Two such published works (FS-BO/VAE-BO) were selected as baselines in this paper. Our metaPNS replaces the optimization process with a simple feed-forward meta-model, and the resulting PNS is efficient to run. These benefits come with a further improvement in predictive performance (e.g., Table 1). Another line of related research is alternative neural surrogates of cardiac simulations – to date, published neural surrogates in this space are limited [4,15,16,12]. Most of them are 2D on image grids (metaPNS is on 3D ventricles), except [12] on atria. Once learned, they all have to be separately optimized to a subject’s data for personalized predictions – the latter we have not seen in published works. Thus, as a neural network baseline, we included the same PNS architecture as in metaPNS but without meta-learning, to illustrate the importance of meta-learning. We will clarify these contributions.

Motivation of Bayesian meta-learning (R4/MR): In optimization-based meta-learning, such as the model-agnostic meta-learning (MAML) raised by R4, given context data at meta-testing, the model has to be optimized further using the context data. In contrast, feed-forward methods can obtain the model for target data via simple feed-forward embedding. This allows us to obtain a personalized PNS without retraining, which motivated our choice of methods. We will clarify the motivation in the final manuscript.
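The contrast between the two adaptation styles can be illustrated with a toy sketch (ours, not the paper's model): a feed-forward approach embeds the context set with a permutation-invariant encoder and conditions the predictor on the result, with no per-subject gradient steps at meta-test time. All weights and dimensions below are arbitrary placeholders.

```python
import numpy as np

# Toy sketch (not the paper's architecture) of feed-forward personalization:
# embed the context set once, then condition the predictor on that embedding.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 4))       # per-instance context encoder (toy weights)
W_dec = rng.normal(size=(4 + 8, 8))   # predictor conditioned on the embedding

def personalize(context):
    """Feed-forward embedding: mean-pool the encoded context instances.
    Mean pooling makes the embedding permutation-invariant over the set."""
    return (context @ W_enc).mean(axis=0)          # z_c, shape (4,)

def predict(z_c, query):
    """Predict query outputs conditioned on the subject embedding z_c."""
    z = np.broadcast_to(z_c, (query.shape[0], 4))
    return np.concatenate([z, query], axis=1) @ W_dec

context = rng.normal(size=(5, 8))   # 5 few-shot context instances
query = rng.normal(size=(3, 8))     # 3 query inputs
z_c = personalize(context)          # single forward pass, no retraining
out = predict(z_c, query)           # shape (3, 8)
```

An optimization-based method such as MAML would instead run several gradient steps on the context data for every new subject, which is the retraining cost the feed-forward design avoids.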

Confusion in notations (R2/MR): x_i is one instance in the context/target set, e.g., heart signals across space (the number of heart mesh nodes) and time, labeled in the top-right and bottom-right corners of Fig. 1. s is the binary vector of cardiac pacing sites, of the same spatial dimension as x_i and labeled in the upper-left corner of Fig. 1. \theta is the tissue parameter sharing the same dimension as s, and z_c in Fig. 1 is its latent estimation. \hat{D_c} and \hat{D_x} are reconstructions of the context set D_c and target set D_x. On page 4, the context observations refer to the observed context set. We do not see an error in the two KL terms in Fig. 1: they constrain the set-embedded q(z_c | D_c) to be close to 1) an isotropic Gaussian prior, and 2) the set-embedded q(z_c | D_c ∪ D_x).
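Read this way, the two KL terms would enter a neural-process-style variational objective roughly as below. This is our sketch of what the rebuttal describes, not the paper's exact objective; the direction of each KL and any relative weighting are assumptions.

```latex
% Sketch (our reading of the rebuttal, not verbatim from the paper):
\mathcal{L}_{\mathrm{KL}}
  = \mathrm{KL}\big(\, q(z_c \mid D_c) \,\|\, \mathcal{N}(0, I) \,\big)
  + \mathrm{KL}\big(\, q(z_c \mid D_c) \,\|\, q(z_c \mid D_c \cup D_x) \,\big)
```

The first term keeps the context-set embedding close to an isotropic Gaussian prior; the second keeps it consistent with the embedding of the combined context-plus-target set.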

Reproducibility (R1-4): Clinical data used are open at https://edgar.sci.utah.edu/ (citation will be added). Our code and data are made available at this anonymous GitHub link: https://github.com/temporary-repos/MICCAI22.

GCNN model (R2): The GCNN follows a previously published work [11], and the metaPNS framework is agnostic to its detailed architecture. We thus included limited details given space limits. We will include more details about the B-spline-based graph convolutions.

Terminology (R4/MR) and typo: We will make our terminology consistent with current few-shot/meta-learning literature, and correct typos as suggested.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The proposed few-shot method in this paper seems to achieve good results. However, reviewers have identified significant weaknesses related to the clarity and the main motivation behind the proposed approach. Specifically, the first reviewer thinks the authors did not clearly explain the contribution and the details of the experimental comparisons. Reviewer 2 raised the concern that the paper is extremely hard to follow, with confusing notations and tables. Reviewer 4 echoed this concern about confusing terminology and unclear motivation for using Bayesian meta-learning here. The rebuttal promised to expand the paper to address many of these issues. Whether the authors are able to do so is questionable given the space limit. The paper needs more careful revision; even the title has a typo. After considering all input, the AC voted to reject the paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Reject

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    NR



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Paper strengths: 1) New learning framework 2) The paper targets cardiac simulation, an important problem

    Paper weaknesses: 1) Unclear baseline approaches 2) Unclear conclusion 3) This paper is not self-contained 4) Uncommon meta learning notation 5) Why using Bayesian meta-learning 6) Unclear Table 1

    The rebuttal addresses the issues above, so I’m recommending the paper to be accepted.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    4



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper has clear strengths and weaknesses. The strengths include the strong motivation of the problem, sophisticated ML models, and good performance. The weaknesses include the vague writing and confusing notation. After carefully considering the reviews and rebuttal, I vote for the acceptance of the paper. The authors are advised to consider the reviews to improve the clarity of the paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    5


