
Authors

Xiaofeng Liu, Helen A. Shih, Fangxu Xing, Emiliano Santarnecchi, Georges El Fakhri, Jonghye Woo

Abstract

Deep learning (DL) models for segmenting various anatomical structures have achieved great success via a static DL model that is trained in a single source domain. Yet, the static DL model is likely to perform poorly in a continually evolving environment, requiring appropriate model updates. In a continual learning setting, we would expect that well-trained static models are updated, following continually evolving target domain data—e.g., additional lesions or structures of interest—collected from different sites, without catastrophic forgetting. This, however, poses challenges, due to distribution shifts, additional structures not seen during the initial model training, and the absence of training data in a source domain. To address these challenges, in this work, we seek to progressively evolve an "off-the-shelf" trained segmentation model to diverse datasets with additional anatomical categories in a unified manner. Specifically, we first propose a divergence-aware dual-flow module with balanced rigidity and plasticity branches to decouple old and new tasks, which is guided by continuous batch renormalization. Then, a complementary pseudo-label training scheme with self-entropy regularized momentum MixUp decay is developed for adaptive network optimization. We evaluated our framework on a brain tumor segmentation task with continually changing target domains—i.e., new MRI scanners/modalities with incremental structures. Our proposed framework demonstrated superior segmentation performance by efficiently learning new anatomical structures from different types of MRI data. Our framework retained the discriminability of previously learned structures well, hence enabling realistic life-long extension of segmentation models alongside the widespread accumulation of big medical data.
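
As background for the normalization component named in the abstract, the sketch below shows standard batch renormalization (Ioffe, 2017), which the paper's continuous batch renormalization (cBRN) presumably extends by continually updating the shared global statistics across incremental stages. The exact cBRN update rule is not reproduced here; the function name and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch

def batch_renorm(x, gamma, beta, running_mean, running_var,
                 r_max=3.0, d_max=5.0, momentum=0.1, eps=1e-5):
    """Standard batch renormalization for NCHW inputs (per-channel statistics)."""
    mu_b = x.mean(dim=(0, 2, 3))
    var_b = x.var(dim=(0, 2, 3), unbiased=False)
    sigma_b = (var_b + eps).sqrt()
    sigma = (running_var + eps).sqrt()

    # Correction factors r and d are treated as constants (no gradient).
    r = (sigma_b / sigma).clamp(1.0 / r_max, r_max).detach()
    d = ((mu_b - running_mean) / sigma).clamp(-d_max, d_max).detach()

    def per_ch(t):  # reshape per-channel statistics for broadcasting over NCHW
        return t.view(1, -1, 1, 1)

    x_hat = (x - per_ch(mu_b)) / per_ch(sigma_b) * per_ch(r) + per_ch(d)
    y = per_ch(gamma) * x_hat + per_ch(beta)

    # Moving-average update of the shared global statistics.
    with torch.no_grad():
        running_mean.mul_(1 - momentum).add_(momentum * mu_b)
        running_var.mul_(1 - momentum).add_(momentum * var_b)
    return y

# Example usage (hypothetical shapes):
x = torch.randn(2, 8, 16, 16)
gamma, beta = torch.ones(8), torch.zeros(8)
running_mean, running_var = torch.zeros(8), torch.ones(8)
y = batch_renorm(x, gamma, beta, running_mean, running_var)
```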

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43895-0_5

SharedIt: https://rdcu.be/dnwxN

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #5

  • Please describe the contribution of the paper

    This paper proposes an HSI (heterogeneous structure segmentation) framework for a clinically meaningful scenario, in which clinical databases are sequentially constructed from different sites/imaging protocols with new labels. To alleviate catastrophic forgetting alongside continually varying structures and data shifts, the proposed method resorts to a D3F module for learning and integrating old and new knowledge nimbly, achieving divergence awareness through cBRN-guided model adaptation for all the data involved.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The framework-related figures and plots are clear and easy to understand; these figures help greatly in understanding the proposed framework.
    2. The topic raised in this paper is very interesting and will have a wide impact on clinical applications.
    3. The paper is well written and organized.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. It would be better to add more experiments comparing different backbone models.
    2. It would be better to add more baselines to the evaluation.
    3. For the data distribution change/shift and novel class detection scenario, there is more existing related work in this area, for example: “Multistream regression with asynchronous concept drift detection”, “SIM: Open-world multi-task stream classifier with integral similarity metrics”, “Co-representation learning framework for the open-set data classification”, and “Multistream classification with relative density ratio estimation”. These papers should also be discussed in the related work.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors have not shared code yet. If the authors release the code, the paper should be reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    This paper proposes an HSI (heterogeneous structure segmentation) framework for a clinically meaningful scenario, in which clinical databases are sequentially constructed from different sites/imaging protocols with new labels. To alleviate catastrophic forgetting alongside continually varying structures and data shifts, the proposed method resorts to a D3F module for learning and integrating old and new knowledge nimbly, achieving divergence awareness through cBRN-guided model adaptation for all the data involved.

    Pros:

    1. The framework-related figures and plots are clear and easy to understand; these figures help greatly in understanding the proposed framework.
    2. The topic raised in this paper is very interesting and will have a wide impact on clinical applications.
    3. The paper is well written and organized.

    Cons:

    1. It would be better to add more experiments comparing different backbone models.
    2. It would be better to add more baselines to the evaluation.
    3. For the data distribution change/shift and novel class detection scenario, there is more existing related work in this area, for example: “Multistream regression with asynchronous concept drift detection”, “SIM: Open-world multi-task stream classifier with integral similarity metrics”, “Co-representation learning framework for the open-set data classification”, and “Multistream classification with relative density ratio estimation”. These papers should also be discussed in the related work.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    1. The framework-related figures and plots are clear and easy to understand; these figures help greatly in understanding the proposed framework.
    2. The topic raised in this paper is very interesting and will have a wide impact on clinical applications.
    3. The paper is well written and organized.
  • Reviewer confidence

    Somewhat confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    This paper proposes HSI, a trainable heterogeneous structure segmentation model for incremental learning. HSI introduces a D3F module guided by cBRN to alleviate catastrophic forgetting, and an adaptively constructed pseudo-label scheme is then developed for efficient knowledge distillation. HSI innovatively considers both incremental structures of interest and diverse domains.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    HSI is the first to consider simultaneously learning both the structures of interest and the domains in incremental learning for heterogeneous structure segmentation, whereas previous methods only considered structural increments. The dataset configuration and usage in the experiments are more in line with real-world needs, where the data for each incremental stage come from different clinical sites, vendors, or populations.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Some minor issues, listed in the detailed comments below, should be considered.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper provides sufficient details about the model, and thus I think it can be reproduced.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    1) In Section 2.1, the authors state that directly averaging the parameters of the two branches would be sub-optimal, yet in Eq. (2) they directly compute 1/2 (\mu^r + \mu^p). Is this reasonable, given that the two variables also come from different domains?

    2) The authors should clearly explain how the data are used in the cross-subset and cross-modality settings, e.g., whether “CoreT with BraTS2013” uses one modality or multiple modalities in the cross-subset experiment, and, if only one modality is used, which modality it is.

    3) In Fig. 2, both SE and CE use the outputs of the new model for all classes, not just the old ones; the lines drawn in the figure may therefore be inaccurate.

    4) In Eq. (3), “+\mu_c” should be “-\mu_c” (see the note below).

    5) The symbols throughout the manuscript should be consistent, e.g., \alpha in Eq. (5) vs. “alpha” on page 7.
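
    A note on comment 4): standard per-channel (batch re)normalization subtracts the channel mean before scaling; assuming the paper's Eq. (3) follows this convention (the exact equation is not reproduced here), the expected form is:

```latex
% Standard per-channel normalization (assumed form; hence "-\mu_c" rather than "+\mu_c"):
\hat{x}_c = \frac{x_c - \mu_c}{\sqrt{\sigma_c^2 + \epsilon}}, \qquad
y_c = \gamma_c \, \hat{x}_c + \beta_c
```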

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    HSI considers the heterogeneity of both structures and domains in the incremental learning process, which is in line with real-world needs. The proposed method significantly outperforms existing methods, and thus it is recommended for acceptance. However, some minor issues should be addressed to further improve its quality.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    The paper discusses the challenges of using deep learning models for segmenting anatomical structures in continually evolving environments. The authors propose a framework for progressively evolving a pre-trained segmentation model to diverse datasets with additional anatomical categories in a unified manner. The framework includes a divergence-aware dual-flow module with balanced rigidity and plasticity branches, guided by continuous batch renormalization. A complementary pseudo-label training scheme with self-entropy regularized momentum MixUp decay is also developed for adaptive network optimization. The framework was evaluated on a brain tumor segmentation task with continually changing target domains and demonstrated the ability to retain previously learned structures while adapting to new structures, enabling realistic life-long segmentation model extension along with the accumulation of big medical data.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper’s text is clear and easy to understand, and the visualizations effectively support the presented ideas. The paper employs promising techniques for domain adaptation and continual learning, and carefully adapts them to the problem at hand. The results are strong, with valid and convincing scores that demonstrate the effectiveness of the proposed framework.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Overall, I did not identify any major weaknesses in the paper. However, it would be beneficial if the authors included more metrics and scores. For instance, the recent LifeLonger paper (LifeLonger: A Benchmark for Continual Disease Classification, MICCAI 2022) introduced a forgetting metric that measures the amount of knowledge forgotten from previous tasks during the model’s training on a new task. To further enhance the evaluation of their proposed framework, it would be useful for the authors to report the forgetting metric as suggested by the LifeLonger paper.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper is reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    • Adding forgetting measures and scores to the table.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The method presented in the paper appears to be innovative and tailored to the specific downstream problem the model aims to address. The explanation and justification of the method are convincing, and the results demonstrate its effectiveness. As a result, I am inclined to recommend acceptance of the paper. However, I am open to other reviewers raising significant concerns that I may have missed, and I am willing to reassess my evaluation based on their feedback.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #4

  • Please describe the contribution of the paper

    In this paper, the authors propose an incremental learning method for incremental structure segmentation that also considers the domain shift across datasets. They design a dual-flow module to avoid catastrophic forgetting and a new normalization scheme to mitigate domain divergence. The method is evaluated on BraTS2018 under cross-subset and cross-modality settings.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. This paper addresses a relevant clinical problem: medical image segmentation with incrementally increased structures of interest.
    2. The paper is well-written and well-motivated.
    3. This paper makes innovative methodological contributions and achieves good performance.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The description of the method could be improved. The notations are unclear and redundant, which makes the method sometimes hard to follow (see comments below).
    2. There is no discussion of some experimental settings, such as the choice of incremental structures or modalities.
    3. The experimental comparisons are inadequate. To test performance under domain-shifted HSI, various brain tumor datasets should be utilized instead of merely BraTS18. The cross-subset splits within BraTS18 do not exhibit significant domain differences.
    4. More insights are required to explain the progressive incremental learning process.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The paper could be reproduced if the authors release their code.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. Notations:

    - Please explicitly define N, n, and c.

    - Clearly define {\mu, \sigma}, {\mu_B, \sigma_B}, and {\mu_c, \sigma_c}.

    - Do W_T and W_t refer to the same thing? Likewise b_T and b_t.

    - I would suggest the authors use fewer symbols and fewer sub/superscripts; the current version has many complex notations.

    2. “Traditional BN does not hold in HSI” needs further explanation.

    3. Why do you use an average of the two branches? The re-parameterization process also requires clear clarification (see the sketch after this comment list).

    4. Experimental settings:

    - It would be better to include the experimental settings in the main paper instead of the supplementary materials.

    - In the cross-subset setting, how do you choose the different structures from the different domains?

    - Will the imbalanced subject numbers affect the incremental learning process?

    - It is also unclear whether the data splits are separate for each domain and what their corresponding subject numbers are.

    - In the cross-modality setting, did you use different modalities of the same subjects or different subjects across the three stages? If the same subjects are used, it is hard to see the value of this experimental setting. Besides, why did you choose these three modalities instead of T1ce, which can better characterize CoreT?

    - Please define the upper bound “Joint State”.
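
    A note on comment 3 above: if the two branches share an identical, purely linear topology (e.g., parallel 3x3 convolutions), averaging their outputs can be folded exactly into a single layer whose parameters are the element-wise average of the branch parameters, which is presumably what the re-parameterization refers to. This is an illustrative assumption, not the authors' code, and the layer shapes below are hypothetical.

```python
import torch
import torch.nn as nn

# Two parallel branches with identical topology (hypothetical shapes).
conv_r = nn.Conv2d(4, 8, kernel_size=3, padding=1)  # e.g., "rigidity" branch
conv_p = nn.Conv2d(4, 8, kernel_size=3, padding=1)  # e.g., "plasticity" branch

# Fold the dual flow into one conv by averaging weights and biases.
fused = nn.Conv2d(4, 8, kernel_size=3, padding=1)
with torch.no_grad():
    fused.weight.copy_(0.5 * (conv_r.weight + conv_p.weight))
    fused.bias.copy_(0.5 * (conv_r.bias + conv_p.bias))

x = torch.randn(2, 4, 16, 16)
dual_out = 0.5 * (conv_r(x) + conv_p(x))              # averaging the branch outputs
assert torch.allclose(fused(x), dual_out, atol=1e-5)  # identical up to float error
```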

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Despite some instances of missing explanation and unclear expression, the approach addresses a significant clinical issue and shows promising performance. I would be inclined to accept it on the condition that the authors properly address the above issues in the rebuttal and fulfill their promise to publish their codes.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The paper is interesting for the medical imaging community, well written, and its contributions are novel. The reviewers have suggested minor improvements that could further strengthen the work, namely improving the clarity of some aspects of the experimental settings and providing insights that further explain the progressive incremental learning process.




Author Feedback

N/A


