
Authors

Luyuan Xie, Cong Li, Zirui Wang, Xin Zhang, Boyan Chen, Qingni Shen, Zhonghai Wu

Abstract

The rapid identification and accurate diagnosis of breast cancer, known as the killer of women, have become greatly significant for patients. Numerous breast cancer histopathological image classification methods have been proposed, but they still suffer from two problems. (1) These methods can only handle high-resolution (HR) images. However, low-resolution (LR) images are often collected by digital slide scanners with limited hardware conditions. Compared with HR images, LR images often lose key features such as texture, which deeply affects the accuracy of diagnosis. (2) The existing methods have fixed receptive fields, so they cannot extract and fuse multi-scale features well for images with different magnification factors. To fill these gaps, we present a Single Histopathological Image Super-Resolution Classification network (SHISRCNet), which consists of two modules: a Super-Resolution (SR) module and a Classification (CF) module. The SR module reconstructs LR images into SR ones. The CF module extracts and fuses the multi-scale features of SR images for classification. In the training stage, we introduce HR images into the CF module to enhance SHISRCNet's performance. Finally, through the joint training of these two modules, super-resolution and classification of LR images are integrated into our model. The experimental results demonstrate that the effects of our method are close to those of SOTA methods that take HR images as inputs.

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43904-9_3

SharedIt: https://rdcu.be/dnwGG

Link to the code repository

https://github.com/xiely-123/SHISRCNet

Link to the dataset(s)

https://web.inf.ufpr.br/vri/databases/breast-cancer-histopathological-database-breakhis/


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper presents a new approach to breast cancer histopathological image classification, addressing the challenges of low-resolution images and multi-scale feature extraction.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    SHISRCNet improves upon existing methods by introducing a combination of two multi-scale feature extraction methods and joint training of super-resolution and classification modules. It also uses high-resolution images in the training stage to improve performance and reduce errors caused by reconstructed super-resolution images. The experimental results demonstrate that the effects of this method are close to those of state-of-the-art methods that take high-resolution images as inputs.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The experimental results were obtained using a specific dataset and may not generalize to other datasets. Additionally, while SHISRCNet improves upon existing methods, it still requires high-resolution images in the training stage to achieve optimal performance. Finally, further studies are needed to validate the effectiveness of this method in clinical settings and compare it with other state-of-the-art methods.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The proposed method is built upon existing methods, so it should be highly reproducible. However, the dataset used is unique.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Validate the proposed method on other datasets. Compare it with other state-of-the-art methods in clinical settings to demonstrate its effectiveness.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper introduces a super-resolution-based method to identify breast cancer from low-resolution images.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    This study presents a Single Histopathological Image Super-Resolution Classification network (SHISRCNet) integrating Super-Resolution and Classification modules to tackle the reconstruction and diagnosis of LR breast cancer histopathological images.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This study proposes a Multi-Features Extraction block (MFEblock) as the backbone to extract and fuse multi-scale features. The CF module adopts two different multi-scale feature extraction methods to capture features for breast cancer diagnosis.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The main challenge of this study is not clearly defined. The use of multi-scale features is a commonly used solution in SR problems. This study merely illustrates that existing methods do not adequately address the issue of feature fusion. Therefore, the novelty of this study is limited.

  • Please rate the clarity and organization of this paper

    Poor

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    This work may be reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    1. The main challenge addressed in this study should be clearly stated.
    2. Tables 2 and 3 should include AUC, sensitivity, and specificity.
    3. Other fusion methods should be compared, as the authors consider feature fusion to be an important point of this study.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Considering the poor writing, I think this study is not quite ready for acceptance.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    The authors have addressed my concerns.



Review #3

  • Please describe the contribution of the paper

    The authors proposed joint training of a super-resolution network and a multi-scale classification network for the classification of breast cancer in histopathology images. The proposed design was tested on a public dataset and ablation studies were carried out. The authors claim that the proposed method reaches the state of the art in terms of super-resolution results and classification performance.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The challenges in breast cancer diagnosis from histopathology images that the authors try to address are clinically very relevant
    • The paper is relatively well structured and easy to follow
    • The experimental design & ablation studies shown in this work seem to be thorough
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • Writing and details of the paper could be refined; more proof-reading and correction of some typos are needed.
    • Not sure about the novelty of the proposed network. Multi-scale classification and super-resolution for pathology images are well-studied subjects in the literature. This work closely resembles the work in reference 24.
    • Some design choices, for example on the multi-scale classification module, are missing rationales explaining why the choices were made.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    A public dataset was used in this study. The author did not mention releasing the codes.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    • Abstract: “… known as the killer of women …” this does not seem to be very scientific. The authors could consider replacing this sentence by some statistics showing the importance of breast cancer.
    • Section 2.2: based on the description of the CSF block, it seems that only features from adjacent resolutions are processed by each CSF block. Could the authors comment on this design choice? How does the CSF block design compare to a global selection / attention design?
    • Section implementation details: for reproducibility purpose, could the authors comment on which hardware they used for training and the training time?
    • Result section, Figure 2: in the top 2 rows, we still see that the SR images produced by SHISRCNet seem to be quite blurry. Could the authors comment on this and the impact of the blur?
    • In general: the authors did cross-validation, so in tables showing performance metrics it is better to add error bars (using at least an estimate, e.g., min–max values). This is a way to justify that the reported performances are statistically significant.
    • Section 4.2: table 3 should be table 4
    • What is the classification objective? Is it binary cancer vs non-cancer? This should be stated somewhere. In case of binary classification, how was the classification threshold set? Why only report accuracy, but not the area under the ROC curve? What about false positives of the classifier? I believe the authors should comment on these questions.
    • Section conclusion: the authors should add limitations of this work and directions for future work.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper addresses a clinically relevant challenge. The proposed method was compared to state-of-the-art models and was validated by ablation studies. However, the paper lacks some clarity in explaining the design choices. The reviewer is also not sure about the novelty of the proposed design.

  • Reviewer confidence

    Somewhat confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper proposes multi-scale feature extraction methods and joint training of super-resolution and classification modules. The strengths include a well-structured manuscript and sufficient ablation studies. Reviewers also have some concerns: the novelty could be elaborated, as suggested by R2 and R3. I also have a question. Table 2 provides results of SOTA methods, yet their performance seems similar across different resolutions; 40x was even better than other resolutions for AlexNet and Inception. If 40x is good enough for the classification task, the authors may consider whether super-resolution is really needed.




Author Feedback

Overview: We sincerely appreciate all reviewers for your comments. For the META-REVIEWER, we provide a detailed explanation of 40x magnification; please refer to Q1 for your concerns. For R1, thank you for your approval of our work; please refer to Q5 for your concerns. For R2, thank you for your constructive suggestions; please refer to Q2 to Q4 for your concerns. For R3, thank you for your approval of our work; please refer to Q2, Q3, Q5 and Q6 for your concerns. Besides that, we have polished our paper carefully and will provide the source code in the camera-ready (CR) version.

Q1. If 40x is good enough, is super-resolution needed? We apologize for not explaining this issue clearly. In [20] (Sec. 1), the authors describe the various magnifications (40x, …, 400x). They note that images are captured using different magnification objectives, and all of these images have the same dimensions and resolution (HR). Lower magnification objectives (e.g., 40x) offer a broader view of the tissue section (pathological specimen), while higher ones (e.g., 400x) provide more detailed views of specific regions (multiple cells within tissues). So in Tab. 2, 40x and the others denote different magnifications, not different resolutions. All of these magnification objectives are meaningful for cancer diagnosis from the whole and part perspectives. We will give an elaborate explanation in a footnote of Tab. 2 in the CR version.

Q2. Challenge of this work & Description of novelties (compared with [24]). (1) Our challenge is how to identify LR breast cancer images at multiple scales, which requires better extraction and fusion of multi-scale features. The difficulties we address are described in Sec. 1, specifically in the 2nd and 3rd paragraphs. (2) Different from reference [24], which concerns HR image classification, our work focuses on LR image super-resolution and classification: (a) a new block called MFEblock is designed for super-resolution; (b) a multi-scale feature extraction backbone with a new fusion method (CSF) is utilized; (c) joint training and contrastive learning are introduced in the training stage.
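[Editor's note] The following is a minimal PyTorch sketch of how the joint training described in Q2 could be wired up. The module interfaces (sr_module, cf_module), the loss weights, the L1 reconstruction loss, and the cosine-similarity contrastive term are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the paper's modules:
#   sr_module: LR image -> SR image
#   cf_module: image -> (class logits, feature vector)

def joint_training_step(sr_module, cf_module, lr_img, hr_img, label,
                        w_sr=1.0, w_cls=1.0, w_con=0.1):
    """One joint SR + classification step (illustrative losses, not the paper's exact ones)."""
    sr_img = sr_module(lr_img)                      # reconstruct SR image from LR input
    sr_logits, sr_feat = cf_module(sr_img)          # classify the reconstructed SR image
    hr_logits, hr_feat = cf_module(hr_img)          # HR images guide training (training stage only)

    loss_sr = F.l1_loss(sr_img, hr_img)             # pixel-level reconstruction loss (assumed L1)
    loss_cls = F.cross_entropy(sr_logits, label) + F.cross_entropy(hr_logits, label)
    # Contrastive-style term pulling SR features toward HR features (assumed cosine form).
    loss_con = 1.0 - F.cosine_similarity(sr_feat, hr_feat.detach(), dim=1).mean()

    return w_sr * loss_sr + w_cls * loss_cls + w_con * loss_con
```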

Q3. Classification objective and evaluation metrics. The classification objective is binary cancer vs. non-cancer identification, using a decision threshold of 0.5. We evaluate the performance of our model only with the accuracy metric, following previous works [2,12,21,22,24].
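[Editor's note] A small sketch of the evaluation protocol stated in Q3: binary accuracy at a fixed 0.5 threshold. The function name and the assumption of sigmoid probabilities are ours, not from the paper.

```python
import torch

def binary_accuracy(probs: torch.Tensor, labels: torch.Tensor, threshold: float = 0.5) -> float:
    """Accuracy for cancer vs. non-cancer with a fixed 0.5 decision threshold."""
    preds = (probs >= threshold).long()             # hard decision at the assumed threshold
    return (preds == labels).float().mean().item()

# Usage (assumed sigmoid output of shape (N, 1) and labels in {0, 1}):
# acc = binary_accuracy(torch.sigmoid(logits).squeeze(1), labels)
```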

Q4. Other fusion methods should be compared. In Tab. 4, we compare our fusion method with classic fusion methods. Concretely, “w/o MSF” denotes the comparison with concatenation followed by a 1x1 convolution, and “w/o CSFblock” denotes the comparison with the feature fusion employed in FPN, which involves deconvolution and addition [15]. Compared with these methods, ours has an advantage across various metrics.
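[Editor's note] A rough sketch of the two baseline fusion schemes named in Q4, for readers unfamiliar with them. The class names are ours; channel counts and the use of interpolation (in place of a learned deconvolution) are simplifying assumptions, not the authors' ablation code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcatFusion(nn.Module):
    """Baseline akin to "w/o MSF": concatenate multi-scale maps, then mix with a 1x1 conv."""
    def __init__(self, total_in_channels, out_channels):
        super().__init__()
        self.mix = nn.Conv2d(total_in_channels, out_channels, kernel_size=1)

    def forward(self, feats):
        # feats: list of feature maps at different spatial sizes
        target = feats[0].shape[-2:]
        feats = [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
                 for f in feats]
        return self.mix(torch.cat(feats, dim=1))

class FPNStyleFusion(nn.Module):
    """Baseline akin to "w/o CSFblock": FPN-like top-down pathway with element-wise addition [15]
    (interpolation stands in for the deconvolution mentioned in the rebuttal)."""
    def forward(self, feats):
        # feats ordered from finest to coarsest resolution, same channel count
        out = feats[-1]
        for f in reversed(feats[:-1]):
            out = f + F.interpolate(out, size=f.shape[-2:], mode="nearest")
        return out
```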

Q5. Dataset & Hardware with training time & Blurriness of SR images. (1) Due to the sensitivity of medical data, this dataset is the only one we could collect for breast cancer. (2) Our model is implemented with PyTorch 1.10 and trained on two NVIDIA GeForce RTX 2080 Ti GPUs. Training on x2↓, x4↓ and x8↓ LR images takes about 48h, 31h and 16h, respectively. (3) The displayed SR images are reconstructed from extremely LR (x8↓) images. Even in this situation, our model can still reconstruct the texture of the tissues and capture certain details, achieving the best reconstruction results among existing methods.

Q6. The design choice of CSF & This work's limitations and future work. (1) Considering that medical images have varying scales (e.g., 40x, 400x), CSF employs a hierarchical fusion method to integrate features at different scales. Compared with a global selective attention module, our approach captures more diverse feature scales, which benefits the classification task. Please refer to Sec. 2.2 for more details. (2) Our work mainly focuses on breast cancer. In the future, we will explore applying it to other diseases.
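[Editor's note] To make the hierarchical, adjacent-scale fusion idea in Q6 concrete, here is an illustrative stand-in: pairs of adjacent-scale feature maps are merged with a learned channel-wise selection gate, cascading from coarse to fine. The gate design, class names, and cascade order are assumptions; the actual CSF block is defined in Sec. 2.2 of the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacentScaleFusion(nn.Module):
    """Illustrative CSF-like block: fuses two adjacent-scale feature maps with a
    learned channel-wise selection gate (not the paper's exact design)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, fine, coarse):
        coarse = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear",
                               align_corners=False)
        w = self.gate(fine + coarse)            # channel weights from the merged features
        return w * fine + (1.0 - w) * coarse    # soft selection between the two adjacent scales

def hierarchical_fusion(feats, blocks):
    """Cascade over scales ordered coarse -> fine, one fusion block per adjacent pair."""
    out = feats[0]
    for f, block in zip(feats[1:], blocks):
        out = block(f, out)
    return out
```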




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The concerns raised by reviewers are addressed in rebuttal.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The rebuttal addressed most of the concerns, and in particular clarified the novelty. One reviewer increased the score from 4 to 5. Super-resolution techniques are not exciting in themselves, but dealing with low-resolution histopathological images may be useful for low-resource healthcare.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Unfortunately, the study in its current form is not suitable for presentation at MICCAI, on account of its lack of clarity. I encourage the authors to further develop their study for future publication.


