Authors
Xin Zhang, Maosong Cao, Sheng Wang, Jiayin Sun, Xiangshan Fan, Qian Wang, Lichi Zhang
Abstract
Cervical cancer is one of the primary threats to women’s health, and the Thin-prep cytologic test (TCT) has been widely applied for early screening. Automatic whole slide image (WSI) classification is highly demanded, as it can significantly reduce the workload of pathologists. Current methods are mainly based on suspicious lesion patch extraction and classification; they ignore the intrinsic relationships between suspicious patches and neglect the remaining patches, which limits their robustness and generalizability. Here we propose a novel method to address this problem, based on a graph attention network (GAT) and supervised contrastive learning. First, for each WSI, we extract and rank a large number of representative patches based on suspicious cell detection. Then, we select the top-K and bottom-K suspicious patches to construct two graphs separately. Next, we introduce a GAT to aggregate the features from each node, and use supervised contrastive learning to obtain valuable graph representations. Specifically, we design a novel contrastive loss so that the latent distance between the two graphs is enlarged for positive WSIs and reduced for negative WSIs. Experimental results show that the proposed GAT method outperforms conventional methods, and also demonstrate the effectiveness of supervised contrastive learning.
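The two-graph construction described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors’ released code: the fully connected adjacency, the tensor shapes, and names such as `suspicion_scores` are assumptions.

```python
import torch

def build_topk_bottomk_graphs(patch_feats, suspicion_scores, k=20):
    """Split the ranked patches of one WSI into a top-K and a bottom-K graph.

    patch_feats:      (N, D) patch feature vectors (e.g. from a CNN backbone)
    suspicion_scores: (N,) per-patch suspiciousness from the detector
    Returns two (node_features, adjacency) pairs; the fully connected
    adjacency used here is an assumption, not the paper's stated rule.
    """
    order = torch.argsort(suspicion_scores, descending=True)
    top_idx, bottom_idx = order[:k], order[-k:]

    def as_graph(idx):
        x = patch_feats[idx]                   # (k, D) node features
        adj = torch.ones(k, k) - torch.eye(k)  # fully connected, no self-loops
        return x, adj

    return as_graph(top_idx), as_graph(bottom_idx)
```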
Link to paper
DOI: https://link.springer.com/chapter/10.1007/978-3-031-16434-7_20
SharedIt: https://rdcu.be/cVRrC
Link to the code repository
https://github.com/ZhangXin1997/MICCAI-2022
Link to the dataset(s)
N/A
Reviews
Review #1
- Please describe the contribution of the paper
The authors propose a cervical cancer screening method based on whole slide images. Specifically, they use graph attention network to aggregate the features from representative patches and use supervised contrastive learning to enhance the separability of graph representations.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
A novel formulation that uses graph attention network and supervised contrastive learning in whole slide cervical cancer screening.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Lack of strong evaluation to show the superiority over other whole slide cervical cancer screening methods.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Very good
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
- In section 2.1, the sentence “applying them to the classification model” should be “applying the classification model to the selected patches”.
- A sensitivity analysis on some key parameters should be provided, e.g., the number of the representative patches detected by RetinaNet and the number of patches used to construct graph.
- Would it be better to choose the top-K and bottom-K patches from all patches in a WSI rather than from the top 200 patches detected by RetinaNet? In a positive WSI, all of the top-K and bottom-K patches may contain lesion cells, so it seems inappropriate to force the graph representations of the top-K and bottom-K patches to be far from each other.
- The authors should also directly compare their method with the whole slide screening methods in ref No. 2 and No. 20.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The method is technically sound and intelligible.
- Number of papers in your stack
5
- What is the ranking of this paper in your review stack?
2
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Not Answered
- [Post rebuttal] Please justify your decision
Not Answered
Review #2
- Please describe the contribution of the paper
The authors worked on graph attention network and supervised contrastive learning algorithms for whole slide cervical cancer screening.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The study objective is interesting as the authors worked on cervical cancer screening. The manuscript has been structured properly. Comparative analysis showed that the proposed approach outperforms the existing approaches.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The authors emphasize the importance of whole slide images, but there is no evidence of whole slide images being used in the manuscript. The authors did not include qualitative image-based results, so it is hard to judge whether the analysis is correct. The technical novelty is limited or absent: graph attention networks and contrastive learning may be new to the healthcare domain, but they have been used on other image analysis platforms, so the authors’ technical contribution is limited. In Fig. 2, two different types of images are used; the image labeled “Slide” appears to be H&E-stained, which does not match the tile images. It would be better if the authors introduced their data first and then showed qualitative results on whole slide images. There is no explanation of how the screening will be performed. The t-SNE plot is not sufficient as a performance measure. The manuscript lacks sufficient qualitative and quantitative evidence. The authors should share their source code, trained models, and a small amount of test data for review.
- Please rate the clarity and organization of this paper
Poor
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Not sure, as there is insufficient source-code-related information available.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
The manuscript lacks sufficient qualitative and quantitative evidence. The technical novelty is limited or absent. The results should be validated by pathologists.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
3
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The authors emphasize the importance of whole slide images, but there is no evidence of whole slide images being used in the manuscript. The authors did not include qualitative image-based results, so it is hard to judge whether the analysis is correct. The technical novelty is limited or absent: graph attention networks and contrastive learning may be new to the healthcare domain, but they have been used on other image analysis platforms, so the authors’ technical contribution is limited. In Fig. 2, two different types of images are used; the image labeled “Slide” appears to be H&E-stained, which does not match the tile images. It would be better if the authors introduced their data first and then showed qualitative results on whole slide images. There is no explanation of how the screening will be performed. The t-SNE plot is not sufficient as a performance measure. The manuscript lacks sufficient qualitative and quantitative evidence. The authors should share their source code, trained models, and a small amount of test data for review.
- Number of papers in your stack
5
- What is the ranking of this paper in your review stack?
4
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
5
- [Post rebuttal] Please justify your decision
I am partially satisfied with the authors’ comments. Hence, I am changing the score to “weak accept — interesting paper where merits slightly weigh over weakness”. Reproducibility should be checked before final acceptance.
Review #3
- Please describe the contribution of the paper
This paper proposes to utilize the relationships between suspicious patches in whole slide images for classification. A graph attention network describes and extracts connections among suspicious patches. A loss function is designed to enlarge the latent distance for positive WSIs and reduce it for negative WSIs.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
(1) Two graphs of the top-K and bottom-K patches are built to describe the relationships between suspicious patches. (2) A loss function is designed based on cross-entropy loss and supervised contrastive learning, so that the latent distances for positive WSIs are enlarged while those for negative WSIs are reduced.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Some parameter values are not justified. For example, what are the criteria for determining an appropriate patch size? How is the number of top-K patches determined?
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The workflow is introduced clearly in this paper, including patch extraction and ranking, graph construction, the graph attention network, and the loss design. Overall, the reproducibility is acceptable.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
(1) Add details of the data preprocessing for WSIs, large patches, and cell patches. (2) The final loss consists of the cross-entropy loss and a supervised contrastive term; an experiment with only the cross-entropy loss should be added for comparison.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
7
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
This paper proposes a graph attention network for WSI classification that exploits the relationships between suspicious patches. The loss function combines a cross-entropy loss with a loss based on supervised contrastive learning. The proposed method achieves the best performance compared with other methods. The clarity and organization of the paper are good.
- Number of papers in your stack
5
- What is the ranking of this paper in your review stack?
2
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Not Answered
- [Post rebuttal] Please justify your decision
Not Answered
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
The paper proposes to use GAT and contrastive learning for WSI cervical cancer screening. The reviewers raise various concerns about the technical details, experimental setup and results analysis, and limited novelty. We invite the authors to carefully address these concerns in rebuttal.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
5
Author Feedback
Q1: Limited novelty of the work. (R2) A: The main contribution of this work is a WSI-level classification method specifically designed around a GAT and incorporating a supervised contrastive learning strategy. There are generally sample variations (such as staining color) between individual WSIs, and we aim to eliminate the influence of these variations, which can undermine classification performance. The proposed method is designed to resolve this challenging problem and, to our knowledge, has not been reported elsewhere in this field. Our method consists of two major steps. First, we find two patch groups (top-K and bottom-K) to represent each WSI. Second, we aggregate the patches into graphs and use a GAT to model the intrinsic relationships between patches, with a novel contrastive learning strategy adopted to further exploit the different latent distances between graph representations in different WSIs. Furthermore, our contrastive learning method is fundamentally different from conventional methods in its mechanism: after obtaining the graph representations of the top-K and bottom-K patches, we use a novel optimization mechanism that expands the distance between the two graph representations for positive WSIs while reducing it for negative WSIs. This contrastive learning strategy helps the model learn the difference between the two selected patch groups in negative versus positive WSIs, which is the core novelty of our work.
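A minimal sketch of the WSI-level contrastive term as described in this answer; the margin formulation, the Euclidean distance, and the variable names are assumptions, not the authors’ exact loss.

```python
import torch
import torch.nn.functional as F

def wsi_contrastive_term(g_top, g_bottom, is_positive, margin=1.0):
    """Push the top-K and bottom-K graph embeddings apart for positive WSIs
    and pull them together for negative WSIs (illustrative margin form)."""
    d = torch.norm(g_top - g_bottom)    # distance between the two graph embeddings
    if is_positive:
        return F.relu(margin - d)       # enlarge the distance, up to `margin`
    return d                            # reduce the distance

# In training, this term would be combined with a cross-entropy loss on the
# WSI label, e.g. loss = ce_loss + weight * wsi_contrastive_term(...).
```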
Q2: Confusion about how WSIs are used in the framework and the screening procedure. (R2) A: Our dataset is introduced in Section 3, and all experiments are conducted at the WSI level. As we cannot work directly on a WSI due to its enormous size (approximately 60000 x 80000 pixels), we crop the target WSI into image tiles (1024x1024) and run our suspicious patch detection model on the tiles (introduced in Section 2.1). We then find representative patches (224x224, shown in Fig. 2) in the WSIs and make the final prediction at the WSI level.
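As a rough illustration of the tiling step mentioned above, the sketch below assumes the OpenSlide library and reads 1024x1024 tiles at full resolution; the function name and iteration scheme are illustrative, not the authors’ implementation.

```python
import openslide

def iter_tiles(wsi_path, tile_size=1024):
    """Yield (x, y, RGB tile) crops of a whole slide image at level 0."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.dimensions            # e.g. roughly 60000 x 80000 pixels
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tile = slide.read_region((x, y), 0, (tile_size, tile_size)).convert("RGB")
            yield x, y, tile
```

The suspicious-patch detector would then run on each tile, and 224x224 patches around detected cells would be cropped for the downstream graph construction.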
Q3: Not including qualitative image-based results. (R2) A: Due to the page limit, it is difficult to fully present our image-based results in the paper, such as the representative patch selection results. Instead, we present Table 1 to show the validity of our method and Fig. 3 to show the effectiveness of our contrastive learning strategy. We understand the importance of qualitative image-based results for our study, and we will publish them together with the source code on our project page.
Q4: Experimental setups for the compared methods and the ablation study. (R1/R3) A: For the compared methods in Table 1, RNN denotes Ref. [2] and MLP denotes Ref. [20]; we follow the implementations reported in their original papers. For the ablation study, the experiment with only the cross-entropy loss has also been conducted, corresponding to α=0, β=0 in Table 1. We will make these descriptions clearer in the revision.
Q5: Technical detail of the patch number. (R1/R3) A: First, we choose the top-200 patches detected by RetinaNet as the representative patches of each WSI, which is sufficient to include positive patches for determining positive WSIs. Meanwhile, since there are many false-positive patches among the top-200, we choose the bottom-20 patches to ensure that they are generally negative. Note that there is no need to select all patches in a WSI, which would greatly slow down diagnosis. Second, we choose the top-20 patches for graph construction, and symmetrically the bottom-20 patches so that the two graphs have the same number of nodes. We also conducted two experiments with 10 and 30 patches, and the results are statistically similar to those with 20 patches. We cannot include them here due to MICCAI’s rebuttal policy, but the results will be released with our source code on our project page.
Q6: The reproducibility of the work. (R2) A: We will release our source code and data once the paper is accepted.
Post-rebuttal Meta-Reviews
Meta-review # 1 (Primary)
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The paper proposes to use GAT and contrastive learning for WSI cervical cancer screening. All the reviewers acknowledge the novelty. The rebuttal has addressed most of the reviewers’ concerns. It is suggested that the authors include more details according to the reviewers’ comments, such as experimental setup and technical details, as well as the source code for reproducibility.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
1
Meta-review #2
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The paper presents a WSI analysis method. The rebuttal has clarified some points of confusion, particularly about the experimental setup, the WSI-level analysis, and method details. This is a borderline paper: the method is interesting, but the evaluation is not very convincing, especially since only one private dataset is used and the comparison is against basic approaches only. The final version should be updated to include such information, and the authors should consider releasing the dataset.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
7
Meta-review #3
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
This paper proposes a WSI-based cervical cancer screening method using a graph attention network and supervised contrastive learning. In the first round of review, the reviewers provided generally positive reviews of this paper. I think the rebuttal has addressed the concerns about novelty, clarity of the method, and rigor of the experimental design. In my opinion, this method is interesting for the community to discuss at the MICCAI conference, and its technical merits outweigh its weaknesses. For these reasons, the recommendation is toward acceptance.
- After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
1