
Authors

Wei Feng, Lin Wang, Lie Ju, Xin Zhao, Xin Wang, Xiaoyu Shi, Zongyuan Ge

Abstract

Existing unsupervised domain adaptation methods based on adversarial learning have achieved good performance in several medical imaging tasks. However, these methods focus only on global distribution adaptation and ignore distribution constraints at the category level, which can lead to sub-optimal adaptation performance. This paper presents an unsupervised domain adaptation framework based on category-level regularization that regularizes the category distribution from three perspectives. Specifically, for inter-domain category regularization, an adaptive prototype alignment module is proposed to align feature prototypes of the same category in the source and target domains. In addition, for intra-domain category regularization, we tailor regularization techniques to the source and target domains, respectively. In the source domain, a prototype-guided discriminative loss is proposed to learn more discriminative feature representations by enforcing intra-class compactness and inter-class separability, complementing the traditional supervised loss. In the target domain, an augmented consistency category regularization loss is proposed to force the model to produce consistent predictions for augmented/unaugmented target images, which encourages semantically similar regions to be given the same label. Extensive experiments on two public fundus datasets show that the proposed approach significantly outperforms other state-of-the-art algorithms.
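A minimal sketch may help ground the inter-domain prototype alignment described above: per-category prototypes obtained by masked average pooling over segmentation features, then aligned across domains with a Euclidean distance. This is an illustrative reconstruction, not the paper's exact module; the pooling scheme, the use of target pseudo-labels, and all tensor shapes are assumptions.

    import torch

    def class_prototypes(features, masks, num_classes):
        # features: [B, C, H, W] feature maps from the segmentation model G
        # masks:    [B, H, W] labels (source) or pseudo-labels (target)
        protos = []
        for k in range(num_classes):
            m = (masks == k).float().unsqueeze(1)        # [B, 1, H, W]
            summed = (features * m).sum(dim=(0, 2, 3))   # [C]
            count = m.sum().clamp(min=1.0)               # avoid division by zero
            protos.append(summed / count)
        return torch.stack(protos)                       # [num_classes, C]

    def inter_domain_alignment_loss(src_feat, src_mask, tgt_feat, tgt_pseudo, num_classes):
        # Align same-category prototypes across domains via Euclidean distance.
        p_src = class_prototypes(src_feat, src_mask, num_classes)
        p_tgt = class_prototypes(tgt_feat, tgt_pseudo, num_classes)
        return (p_src - p_tgt).pow(2).sum(dim=1).sqrt().mean()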

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16434-7_48

SharedIt: https://rdcu.be/cVRsg

Link to the code repository

N/A

Link to the dataset(s)

https://refuge.grand-challenge.org/


Reviews

Review #1

  • Please describe the contribution of the paper
    1. This work proposed a category-level regularization approach covering both intra- and inter-domain regularization in unsupervised domain adaptation for segmentation.
    2. A prototype-guided discriminative loss was proposed to learn discriminative feature representations.
    3. The proposed method yielded decent performance, comparable to the supervised method deemed the upper bound, especially on Drishti-GS.
  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Global distribution matching via adversarial learning cannot align subtypes or subcategories efficiently; thus, the authors propose category-level regularization approaches to align categories.
    2. The authors propose novel formulations alongside extensive experimental results, including an ablation study.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. A concept akin to the prototype-guided discriminative loss has been proposed previously, including transferrable prototypical networks and subtype-aware UDA approaches, e.g., [1] Pan et al., “Transferrable Prototypical Networks for Unsupervised Domain Adaptation,” CVPR 2019; [2] Liu et al., “Subtype-aware Unsupervised Domain Adaptation for Medical Diagnosis,” AAAI 2021.

    2. It is unclear how to achieve edge localization.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors indicated that they will share their code, so the work should be highly reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    1. While the method can be applied to datasets with more than two subcategories, the dataset used in this work has only two categories, object and background (i.e., binary).
    2. The authors presented only good results; it would be helpful to provide the worst cases as well.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    1. Novel formulations for category-level regularization that align subtypes or subcategories efficiently.
    2. The experiments are thorough, and the results are convincing.
  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #2

  • Please describe the contribution of the paper

    This paper focuses on UDA for fundus image segmentation. The model consists mainly of three components: inter-domain category regularization, source-domain category regularization, and target-domain category regularization. Experimental results show that the proposed model outperforms several baseline methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The method section is well explained.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. In Section 2.1, what is the segmentation model G? The segmenter? Eq. (2) is just a simple Euclidean distance. Why not choose a domain adaptation distance function such as MMD or CORAL? (A comparison sketch follows the references at the end of this list.)

    2. In the source domain category regularization, the margin delta is the key parameter. It would be better to show how the best value of 0.01 was chosen. What are the effects of different values?

    3. There are too many loss functions in the training procedure. Without a detailed algorithm, it is difficult to know how to train the whole pipeline.

    4. In terms of the results, the paper does not compare with the SOTA methods [1-2]. Some results are significantly better than those of the source-free model [1], in which no source-domain labels are used. On the RIM dataset it is 0.905 vs. 0.908 for [2]. Also, the results on the Drishti dataset are exactly the same as those in [2]. These performances should be further explained; at the least, the method is not better than [2].

    5. There are many parameters in the model. It would be better to add a parameter-analysis section showing how these optimal values were obtained.

    6. In the ablation study, there are many variants that are not compared.

      • L_{inter}
      • L_{dis}
      • L_{aug}
      • L_{inter} + L_{dis}
      • L_{inter} + L_{aug}
      • L_{dis} + L_{aug}

    Without these results, it is difficult to know the effectiveness of the different loss functions on the model.

    [1] Chen, C., Liu, Q., Jin, Y., Dou, Q., & Heng, P. A. (2021). Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 225-235). Springer, Cham.
    [2] Lei, H., Liu, W., Xie, H., Zhao, B., Yue, G., & Lei, B. (2021). Unsupervised Domain Adaptation Based Image Synthesis and Feature Alignment for Joint Optic Disc and Cup Segmentation. IEEE Journal of Biomedical and Health Informatics.
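    Regarding the distance-function question in item 1 above, a minimal comparison under assumed shapes: the prototype-level Euclidean distance the reviewer describes versus a Gaussian-kernel MMD over raw feature sets. Neither snippet is taken from the paper; both are standard formulations.

        import torch

        def prototype_l2(p_src, p_tgt):
            # Per-category Euclidean distance between prototypes [K, C],
            # the form the reviewer attributes to Eq. (2).
            return (p_src - p_tgt).pow(2).sum(dim=1).sqrt().mean()

        def gaussian_mmd2(x, y, sigma=1.0):
            # Squared MMD between feature sets x [N, C] and y [M, C] with a
            # Gaussian kernel -- the global, category-agnostic alternative
            # the reviewer suggests.
            def kernel(a, b):
                return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
            return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

    As the rebuttal later notes, MMD and CORAL match global moments and are category-agnostic, which is the authors' stated reason for preferring prototype-level distances.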

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Without a detailed algorithm, it is difficult to reproduce the results in the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    See weakness section

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The unclear training process of the model and the inadequate experiments.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #3

  • Please describe the contribution of the paper

    This paper proposes an unsupervised domain adaptation framework based on category-level regularization to accurately segment the optic disc and cup from fundus images.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Unlike previous methods that perform only global distribution alignment, this work also considers category information.
    2. It solves UDA from both intra-domain and inter-domain perspectives.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Only the Dice score is used to evaluate the performance.
    2. Target domain category regularization is similar to the strong/weak augmentation consistency used in semi-supervised learning (see the sketch after this list).
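    A minimal sketch of the weak/strong consistency pattern the reviewer compares this to (referenced in item 2); the intensity-only augmentation, the sigmoid outputs, and the MSE form are assumptions, and the paper's exact L_{aug} may differ.

        import torch
        import torch.nn.functional as F

        def consistency_loss(model, image, augment):
            # model:   segmentation network returning logits [B, K, H, W]
            # image:   unaugmented target-domain image [B, 3, H, W]
            # augment: intensity-only perturbation (noise, color jitter), so
            #          predictions remain spatially aligned -- an assumption here
            with torch.no_grad():
                target = torch.sigmoid(model(image))     # prediction on the clean view
            pred = torch.sigmoid(model(augment(image)))  # prediction on the augmented view
            return F.mse_loss(pred, target)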
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Sufficient details for reproducibility.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    1. What is the dimension of the prototypes, and how do you determine which layer to extract features from when calculating the prototypes? These details need to be specified.
    2. An edge discriminator is utilized in your work, but without a detailed illustration or an ablation study.
    3. Are all the results based on a single run? Since the test set is small, the performance variance may be large. It would be better to apply cross-validation in your experiments.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Interesting idea, but without a detailed description and solid experiments.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper proposes unsupervised domain adaptive fundus image segmentation with category-level regularization. The paper received two positive reviews and one negative review. In the rebuttal, the authors are asked to clarify the details of the proposed model, including the training procedure, the parameter analysis, and further clarification of the prototypes.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    5




Author Feedback

We would like to thank all the reviewers for the insightful comments and constructive suggestions.

Q1: Clarification of model description details:
Q1-1. Clarification of the training procedure. (@R2) A: We use a DeepLabv3+ network with a MobileNetV2 backbone as the segmentation model G [13].
Q1-2. Why choose the Euclidean distance. (@R2) A: We tried other distance metrics, such as the KL divergence, and obtained results similar to those with the Euclidean distance (RIM-ONE-r3: Dice disc 0.905, Dice cup 0.843; Drishti-GS: Dice disc 0.965, Dice cup 0.890); the Euclidean distance is also commonly used in prototype-based UDA methods [9]. MMD and CORAL are commonly used in moment-matching-based DA methods, which only align the global distributions of the source and target domains, whereas we also consider category-level distribution alignment.
Q1-3. The dimension of the prototype. (@R3) A: The feature map of the layer preceding the output layer of the segmentation model has shape [B, 305, H, W], so the dimension of each prototype is [1, 305].
Q1-4. Network structure and training details. (@R3, @R1) A: 1) We follow [13] to locate the edges of the optic cup and optic disc and to construct the edge discriminator (containing five convolutional layers); see [13] for an ablation study of the edge discriminator. 2) For a fair comparison, we use the same random seed for all comparison algorithms, so the results are reproducible; this is the experimental setup of [13]. 3) We follow [13] in using Dice as the evaluation metric.

Q2: Extra evidence for model effectiveness:
Q2-1. Ablation experiments on model variants. (@R2) A: Thanks for pointing this out. Part of the ablation study is given in Fig. 3: +L_{inter} corresponds to +inter_reg, +L_{dis} to +src_reg, and +L_{aug} to +trg_reg. Here we also report additional ablation results:

  • L_{inter} + L_{dis}: RIM-ONE-r3 Dice disc 0.903, Dice cup 0.836; Drishti-GS Dice disc 0.964, Dice cup 0.883
  • L_{inter} + L_{aug}: RIM-ONE-r3 Dice disc 0.902, Dice cup 0.838; Drishti-GS Dice disc 0.963, Dice cup 0.887
  • L_{dis} + L_{aug}: RIM-ONE-r3 Dice disc 0.900, Dice cup 0.839; Drishti-GS Dice disc 0.965, Dice cup 0.880

The above results show that the different combinations of components yield consistent performance gains.

Q2-2. Selection of the margin delta value. (@R2) A: We empirically set the margin delta to 0.01 and found that it works well on different datasets. Varying the margin delta gives the following results (a sketch of such a margin-based loss follows this feedback):

  • 0.008: RIM-ONE-r3 Dice disc 0.905, Dice cup 0.844; Drishti-GS Dice disc 0.965, Dice cup 0.891
  • 0.05: RIM-ONE-r3 Dice disc 0.905, Dice cup 0.843; Drishti-GS Dice disc 0.966, Dice cup 0.890
  • 0.1: RIM-ONE-r3 Dice disc 0.904, Dice cup 0.841; Drishti-GS Dice disc 0.967, Dice cup 0.891
  • 0.3: RIM-ONE-r3 Dice disc 0.903, Dice cup 0.840; Drishti-GS Dice disc 0.965, Dice cup 0.889

Changing the margin within a certain range does not have a significant effect on the results.

Q3: Comparison with other methods:
Q3-1. Comparison with some other DA algorithms. (@R2) A: [1] is designed for source-free DA, so a direct comparison is not necessary. [2] involves a complex training process and is not an end-to-end domain adaptation model: it first generates target-like query images by image synthesis and then performs feature-level adaptation, so the quality of the generated images affects the subsequent feature adaptation. In contrast, our approach can be trained in an end-to-end manner and achieves performance similar to that of [2], and is therefore more elegant.
Q3-2. Comparison with prototype-based UDA methods. (@R1) A: Liu et al. (AAAI 2021) and Pan et al. (CVPR 2019) both utilize prototypes for inter-domain category-level distribution alignment, but we observe that intra-domain category-level regularization is also critical; see Fig. 3.
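To make the margin delta discussed in Q2-2 concrete, here is a minimal sketch of a prototype-guided discriminative loss combining intra-class compactness with margin-based inter-class separability. This is a hedged reconstruction, not the paper's exact L_{dis}: the feature shapes, the squared-distance compactness term, and the hinge form of the separability term are all assumptions.

    import torch

    def discriminative_loss(features, labels, prototypes, delta=0.01):
        # features:   [N, C] source-domain features (e.g., per-pixel embeddings)
        # labels:     [N] category index of each feature
        # prototypes: [K, C] one prototype per category (K >= 2)
        # delta:      margin below which different-category prototypes are pushed apart
        # Intra-class compactness: pull each feature toward its own prototype.
        compact = (features - prototypes[labels]).pow(2).sum(dim=1).mean()
        # Inter-class separability: hinge on pairwise prototype distances.
        K = prototypes.shape[0]
        pairwise = torch.cdist(prototypes, prototypes)          # [K, K]
        off_diag = pairwise[~torch.eye(K, dtype=torch.bool)]    # distinct-class distances
        separate = torch.clamp(delta - off_diag, min=0).mean()
        return compact + separate

Under this form, the reported insensitivity to delta over [0.008, 0.3] is plausible whenever prototype distances already exceed the margin for most class pairs.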




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors carefully address the issues raised by the reviewers, including the training procedure and parameter analysis. Therefore, I suggest accepting this paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    5



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The rebuttal addressed all the reviewers’ concerns well. I recommend acceptance.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    2



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The motivation is clear and the solution is reasonable; in my opinion, the rebuttal addressed the reviewers’ questions.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    3


