
Authors

Xingguang Wang, Zhongyu Li, Xiangde Luo, Jing Wan, Jianwei Zhu, Ziqi Yang, Meng Yang, Cunbao Xu

Abstract

Cell segmentation plays a critical role in diagnosing various cancers. Although deep learning techniques have been widely investigated, the enormous variety and diverse appearances of histopathological cells still pose significant challenges for clinical applications. Moreover, data protection policies in different clinical centers and hospitals limit the training of data-dependent deep models. In this paper, we present a novel framework for cross-tissue domain adaptive cell segmentation that requires access to neither source domain data nor model parameters, namely Multi-source Black-box Domain Adaptation (MBDA). Given the target domain data, our framework achieves cell segmentation through knowledge distillation, using only the outputs of models trained on multiple source domains. Owing to the domain shift across different pathological tissues, predictions from the source models may not be reliable, and the resulting noisy labels can limit the training of the target model. To address this issue, we propose two practical approaches for weighting knowledge from the multi-source model predictions and filtering out noisy predictions. First, we assign pixel-level weights to the outputs of the source models to reduce uncertainty during knowledge distillation. Second, we design a pseudo-label cutout and selection strategy for these predictions to facilitate knowledge distillation from local cells to global pathological images. Experimental results on four types of pathological tissues demonstrate that our proposed black-box domain adaptation approach achieves comparable and even better performance in comparison with state-of-the-art white-box approaches.
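
A minimal illustrative sketch (not the authors' code; the entropy-based weighting, the PyTorch API usage, and all names below are assumptions drawn only from this abstract) of the core idea: fuse the soft outputs of several black-box source models with per-pixel reliability weights, then distill the fused map into the target segmentation network.

    # Sketch of pixel-weighted fusion of multi-source black-box predictions
    # followed by knowledge distillation into a student/target network.
    import torch
    import torch.nn.functional as F

    def entropy_weights(probs, eps=1e-8):
        """Per-pixel weights that shrink as predictive entropy grows.
        probs: (B, C, H, W) softmax output of one black-box source model."""
        ent = -(probs * (probs + eps).log()).sum(dim=1, keepdim=True)    # (B, 1, H, W)
        ent = ent / torch.log(torch.tensor(float(probs.shape[1])))       # normalize to [0, 1]
        return 1.0 - ent                                                 # high entropy -> low weight

    def fuse_black_box_outputs(source_probs):
        """Weighted ensemble of K black-box source predictions (API outputs only)."""
        weights = [entropy_weights(p) for p in source_probs]
        w_sum = torch.stack(weights).sum(dim=0) + 1e-8
        return sum(w * p for w, p in zip(weights, source_probs)) / w_sum

    def distillation_loss(student_logits, fused_probs):
        """KL divergence between the student prediction and the fused teacher signal."""
        log_p = F.log_softmax(student_logits, dim=1)
        return F.kl_div(log_p, fused_probs, reduction="batchmean")

Note that the paper's actual weighting additionally uses boundary impurity (its Eqs. 3-5); the entropy-only weighting above is a simplification.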

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43907-0_71

SharedIt: https://rdcu.be/dnwdS

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper presents a novel framework, called Multi-source Black-box Domain Adaptation (MBDA), for cross-tissue domain adaptive cell segmentation without access to either source domain data or model parameters. The authors propose two practical approaches for weighting knowledge from the multi-source model predictions and filtering out noisy predictions. First, they assign pixel-level weights to the outputs of source models to reduce uncertainty during knowledge distillation. Second, they design a pseudo-label cutout and selection strategy for these predictions to facilitate knowledge distillation from local cells to global pathological images.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The organization of the paper is clear.
    2. The proposed weighted-logits (WL) approach for weighting knowledge from the multi-source model predictions can filter out noisy predictions by assigning lower pixel-level weights to pixels with high uncertainty and boundary ambiguity, reducing uncertainty during knowledge distillation.
    3. This work also designs a pseudo-label cutout and selection strategy for these predictions to facilitate knowledge distillation from local cells to global pathological images, considering not only pixel-level information but also structured information in the distillation process.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. This work should be validated with each of the four datasets serving as the target domain to make it more complete. In Fig. 2 and Table 2, only three datasets are considered as targets; it would be better to also use CRC as the target domain and conduct a fourth experiment.
    2. The comparison is too limited; it would be better to compare this method with other SOTA UDA methods, such as [1,2].
    3. The ablation study should use the same source domains and target domain as Table 1, or cover all four settings with each of the four datasets as the target separately.
    4. The symbol for the semi-supervised loss in Fig. 1 should match that in formula (9), ‘Lcons’; in addition, the mutual-information maximization loss Lmmi is not illustrated in Fig. 1.
    5. What is the initialization of the target model? In BBUDA, the target model could be initialized randomly or from pretrained weights, but this setting is not described in the paper. The authors could also study the effect of different pretrained models.
    6. The word ‘table’ on page 7 line 2 and page 8 line 4 should be capitalized.
    7. Fig.2 should also include segmentation results from the settings of Table 2.
    8. Figure references should follow a consistent format throughout the paper, i.e., the notation ‘Fig.’.
    9. Table 1 exceeds the paper page limit.
    [1] Ahmed, Sk Miraj, et al. “Unsupervised multi-source domain adaptation without access to source data.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
    [2] Abbet, Christian, et al. “Self-rule to multi-adapt: Generalized multi-source feature learning using unsupervised domain adaptation for colorectal cancer tissue detection.” Medical Image Analysis 79 (2022): 102473.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    No code provided.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    See my above detailed comments in weakness part.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The method utilizes an interesting method to filter out the noisy labels.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    6

  • [Post rebuttal] Please justify your decision

    Keep my original rating.



Review #3

  • Please describe the contribution of the paper

    The authors propose a novel framework for semi-supervised training in a target domain by leveraging multiple black-box models trained on different source domains. Pixel-level weighting based on uncertainty and boundary ambiguity and patch-level filtering based on a confidence threshold are proposed for incorporating the different source-domain models. The mean-teacher framework is further adopted for training the target-domain model.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The motivation and research topic are useful, the method is simple and effective, the experiments are sufficiently comprehensive, the writing is quite fluent, and the description is quite clear.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Grammar mistakes: “ predictions from the source models may not reliable”, “a adaptive knowledge voter”, “we will obtain a ensemble logits map”.
    2. The final loss function in Eq. 10 does not align with Fig. 1.
    3. The loss functions L_{pcl} and L_{kd} lack arrows from the predictions.
    4. “For the target domain network, we use unsupervised and semi-supervised as our baselines respectively. In semi-supervised domain adaptation, we only use 10% of the target domain data as labeled data.” These should not be called baselines; they are your model or task settings.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Open-sourcing the code is highly encouraged.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    See above.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Despite some weaknesses and minor mistakes, this article is worthy of acceptance. However, the task of cell segmentation is somewhat narrow; such a framework could be applied to other tasks as well.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #4

  • Please describe the contribution of the paper

    In this work, the authors propose to study multi-source black-box domain adaptation for cell segmentation in pathology images. First, distillation via a weighted logits map is employed. Next, adaptive pseudo-cutout labels are employed for self-supervised learning. Extensive experiments on several pathology datasets indicate the effectiveness of the proposed method, which outperforms various comparison methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1) This work explores an important research question. Compared with traditional source-free DA settings, the proposed black-box multi-source domain adaptation does not require sharing model parameters, which enhances data security.

    2) The proposed method can outperform some traditional source-free DA and UDA methods.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) The major concern is the lack of novelty. The knowledge distillation strategy based on weighted knowledge maps is similar to [20]: the idea of selecting k-nearest-neighbor regions is directly borrowed from [20], and the idea of employing impurity is also taken from [20], with the definition in Eq. 3 identical to Eq. 4 in [20]. In addition, the idea of selecting high-confidence pixels in the pseudo labels was previously proposed in SFDA-DPL [5].

    2) According to the manuscript, there is no clear evidence that the proposed method works only for pathology images. Therefore, it is also necessary to validate the algorithm on other SFDA benchmarks, such as the optic disc and cup segmentation studied with SFDA in [5].

    3) The authors mention that the Adaptive Pseudo-Cutout Label strategy is revised from [7]. However, there are no experiments indicating that the proposed strategy is better than [7].

    4) A computational complexity analysis of the proposed method is lacking.

    5) The experimental results of fully-supervised learning on the target datasets are missing; these are normally treated as the upper bound in domain adaptation studies.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    All datasets used in the work are public, and the source code is not available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Please address my concerns in the ‘weakness’ section.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    3

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Although this paper studies an important research topic, several major limitations remain, including the lack of novelty and limited experimental validation. Overall, my initial recommendation for this paper is ‘3: reject’.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    4

  • [Post rebuttal] Please justify your decision

    Thanks to the authors for the feedback. However, I still think the proposed method lacks novelty, although its implementation differs in some respects from existing works. Overall, my final rating of the paper is ‘4: weak reject’.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The paper presents a novel framework, called Multi-source Black-box Domain Adaptation (MBDA), for cross-tissue domain adaptive cell segmentation without access to either source domain data or model parameters. The framework is technically sound and the paper is well written.

    However, some reviewers raised concerns about the novelty of the k-nearest-neighbor regions and the selection of high-confidence pixels with respect to [20,5]. The authors are expected to clarify these points. In addition, the difference from, or novelty over, DINE [15] for multi-source black-box UDA should be clarified.

    More importantly, the compared methods are not properly selected:

    – SFDA-DPL [5] is source-free UDA but not ‘black-box’, since the model parameters are available for initialization. It is hard to understand why the proposed ‘black-box’ method can outperform [5] without access to the model parameters.

    – There are several ‘black-box’ UDA medical segmentation methods available, e.g., Liu, X., Yoo, C., Xing, F., Kuo, C.C.J., El Fakhri, G., Kang, J.W. and Woo, J., 2022. Unsupervised black-box model domain adaptation for brain tumor segmentation. Frontiers in Neuroscience, p. 341. This could serve as a better baseline for single-source ‘black-box’ UDA.




Author Feedback

We thank the reviewers for their constructive comments. They find our approach “interesting” and “practical” (R#1), “simple and effective” (R#3), and note that it “enhances data security” (R#4). Here we address the main points in their reviews.

Novelty against [20,5,15] (MR#1Q3-2 & R#4Q3-1): 1) KNN regions are a commonly employed tool in pixel-level segmentation and pathology to compute the information richness of a pixel's neighborhood within the ROI. We follow [20] to calculate the impurity of cell boundaries. However, unlike [20], which selects informative regions for manual annotation and active learning, we design a pixel-level weighting strategy (Eqs. 4, 5) that indicates the prediction reliability of each pixel from each source model, for weighted logits-map fusion and KD loss calculation. 2) We propose Adaptive Pseudo-Cutout Labels (Eq. 6) to generate high-confidence pixels by considering predictions from multiple source domains. In contrast, SFDA-DPL [5] generates pseudo labels based on Monte Carlo dropout and prototype estimation, which is a different solution. Moreover, [5] is not suitable for our BBDA task, since dropout layers and intermediate features cannot be accessed in black-box models, which is beyond the scope of our task. 3) DINE [15] proposed three strategies for BBDA in the classification task and did not specifically focus on multi-source BBDA. In contrast, we focus on multi-source BBDA in the segmentation task, designing a novel multi-source distillation framework with pixel-level weighting and adaptive pseudo-cutout labels.

Compared methods (MR#1Q3-3,4 & R#1Q3-1,2,3 & R#4Q3-3,5): 1) Our BBUDA method can outperform [5] mainly because of the two proposed modules, which are specifically developed for cell boundary prediction under significant domain shift. In contrast, SFDA-DPL [5] achieves source-free UDA mainly through dropout-based uncertainty estimation, i.e., K stochastic forward passes to compute the uncertainty map, which cannot focus well on ambiguous cell boundaries from diverse tissues. 2) We re-implemented the BBUDA method mentioned by MR#1 (FRONT NEUROSCI). Its performance (DICE & HD95 & ASSD): single-source (upper), 0.66 & 39.69 & 11.39 (target: BRCA), 0.68 & 42.99 & 7.44 (target: KIRC); source-combined, 0.67 & 41.89 & 11.54 (target: BRCA), 0.69 & 46.73 & 8.75 (target: KIRC). Our method still performs better. We also tested the upper bound mentioned by R#4 Q3-5 (DICE, TNBC: 0.74, CRC: 0.75, BRCA: 0.77, KIRC: 0.75). We will add ablation experiments (R#4Q3-3 & R#1Q3-3) and update Table 1 in the revised version. 3) Results with CRC as the target will be added. We will compare with more SOTA UDA methods (R#1 Q3-2); as these methods were originally designed for classification, we will try to extend them to the segmentation task.

Extension to other tasks (R#3Q8 & R#4Q3-2): Our MBDA method is designed for cross-tissue cell segmentation for two reasons: 1) the two proposed modules are pixel-level operations, which are particularly suitable for segmenting small objects with ambiguous boundaries; 2) cross-tissue cell segmentation is an urgent demand in clinical diagnosis, given the many cell types involved.

Computational complexity (R#4Q3-4): The trainable parameters (1.6 million) of our model come only from the student model in Fig. 1, which is a regular U-Net. A regular U-Net costs 1.76 s per epoch, while our method costs 3.46 s per epoch under the same settings, due to the proposed pixel-level operations. After training, our method uses only the student model for inference, which has the same complexity as a regular U-Net.

Figure mistakes (R#1Q3-4,6,7,8,9 & R#3Q3): The SSL loss in Fig. 1 incorporates the last three terms of Eq. 10; a detailed figure caption will be added. The arrows in Fig. 1 are ambiguous, and we will address this issue and the grammar mistakes in the revised version.

Model setting (R#1Q3-5): The parameters of the target model are initialized randomly. Details will be added.
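
As an illustration of the patch-level filtering discussed in the rebuttal, the sketch below approximates the spirit of the Adaptive Pseudo-Cutout Labels (Eq. 6); it is not the authors' implementation, and the patch size, threshold, and function names are hypothetical. Patches of the fused pseudo-label map whose mean confidence falls below a threshold are cut out so that they do not contribute to the distillation loss.

    # Sketch: keep only high-confidence patches of the fused pseudo labels
    # and mask the rest out of the pixel-wise distillation loss.
    import torch
    import torch.nn.functional as F

    def cutout_low_confidence_patches(fused_probs, patch=32, tau=0.8):
        """fused_probs: (B, C, H, W) ensembled soft pseudo labels.
        Returns a (B, 1, H, W) binary mask that is 1 on patches whose mean
        max-class confidence exceeds tau and 0 elsewhere."""
        conf = fused_probs.max(dim=1, keepdim=True).values                 # (B, 1, H, W)
        patch_conf = F.avg_pool2d(conf, kernel_size=patch, stride=patch)   # patch-level confidence
        keep = (patch_conf > tau).float()
        return F.interpolate(keep, size=conf.shape[-2:], mode="nearest")   # back to pixel grid

    def masked_kd_loss(student_logits, fused_probs, mask, eps=1e-8):
        """Pixel-wise cross-entropy against the pseudo labels, restricted to kept patches."""
        log_p = F.log_softmax(student_logits, dim=1)
        ce = -(fused_probs * log_p).sum(dim=1, keepdim=True)               # (B, 1, H, W)
        return (ce * mask).sum() / (mask.sum() + eps)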




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Though the paper has some weaknesses in the choice of comparison methods, it may bring some novel insight to this problem, and thus this article is worthy of acceptance.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This work is aimed at addressing the challenges of cell segmentation in histopathological images by proposing a novel framework called Multi-source Black-box Domain Adaptation. This paper presents intriguing and relatively novel ideas, and the authors adequately addressed the concerns raised by the reviewers such as details on the novelty and clarification about comparison methods. Thus, this paper is recommended for acceptance.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This submission introduces a multi-source black-box domain adaptation method for semantic segmentation of cells in histopathological images, with no need to access source data or model parameters. This differs from traditional source-free domain adaptation methods, which typically assume model parameters are available. The experiments show that the proposed method produces better segmentation performance than some other domain adaptation approaches. In addition, the rebuttal has addressed most of the reviewers’ concerns, such as including comparisons with more recent state-of-the-art methods and the performance of upper-bound models. These strengths outweigh the weaknesses (e.g., limited technical novelty and a few typos/grammar errors, which should be corrected in the revised version), so acceptance is recommended.


