
Authors

Emma Sarfati, Alexandre Bône, Marc-Michel Rohé, Pietro Gori, Isabelle Bloch

Abstract

Large medical imaging datasets can be cheaply and quickly annotated with low-confidence, weak labels (e.g., radiological score). Access to high-confidence labels, such as histology-based diagnoses, is rare and costly. Pretraining strategies, like contrastive learning (CL) methods, can leverage unlabeled or weakly-annotated datasets. These methods typically require large batch sizes, which poses a difficulty in the case of large 3D images at full resolution, due to limited GPU memory. Nevertheless, volumetric positional information about the spatial context of each 2D slice can be very important for some medical applications. In this work, we propose an efficient weakly-supervised positional (WSP) contrastive learning strategy where we integrate both the spatial context of each 2D slice and a weak label via a generic kernel-based loss function. We illustrate our method on cirrhosis prediction using a large volume of weakly-labeled images, namely radiological low-confidence annotations, and small strongly-labeled (i.e., high-confidence) datasets. The proposed model improves the classification AUC by 5% with respect to a baseline model on our internal dataset, and by 26% on the public LIHC dataset from the Cancer Genome Atlas. Code will be made publicly available.

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43907-0_22

SharedIt: https://rdcu.be/dnwcz

Link to the code repository

https://github.com/Guerbet-AI/wsp-contrastive

Link to the dataset(s)

https://portal.gdc.cancer.gov/projects/TCGA-LIHC


Reviews

Review #1

  • Please describe the contribution of the paper

    This work introduces a new approach for training machine learning models using both continuous and discrete meta-data. The approach is tested on a clinical application, cirrhosis prediction, and compared to other state-of-the-art contrastive learning methods. Results show that the proposed method achieves higher performance than the other methods. Additionally, the proposed method has an organized representation space, which may explain its superior performance. This approach has potential for adaptation to other medical problems and could be improved further in future work.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper proposes a composite kernel loss function that takes both continuous and discrete labels into account in contrastive pretraining. Leveraging meta-data for contrastive learning in medical imaging tasks is an essential yet not fully explored topic. The proposed method achieves considerable improvements over the presented baselines, and extensive experiments are conducted and well discussed.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The method section should be expanded with a dedicated method figure or an algorithm-flow section.
    2. In Table 1, TinyNet, despite having fewer parameters, outperformed ResNet-18 on D_histo^2 by a decent margin; can you provide more discussion and insight on this result?
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The experimental settings and model architectures (TinyNet) are introduced in the paper. Reproducibility would be improved if the authors further elaborated on the loss function.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. Further discussion on the classification performance of TinyNet on the D_histo^2 dataset alone.
    2. A figure illustrating the design of the composite kernel loss function.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    My initial rating is primarily based on the novelty of this work and my concerns about the method section and the experimental results.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    The author’s rebuttal somewhat addressed my concerns.



Review #2

  • Please describe the contribution of the paper

    The authors proposed a weakly-supervised positional contrastive learning strategy for cirrhosis classification using weak radiological labels to predict histopathological results. Their approach integrates different meta-data during pretraining and was validated on three datasets, including a public LIHC dataset, demonstrating improvements in AUC.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Comparisons to existing state-of-the-art methods: the authors compared their approach to existing CL methods such as SimCLR and BYOL. Simplicity: the clinical task is straightforward, and the proposed method addresses cirrhosis classification from weak radiological labels, without requiring histopathological results, via CL pretraining. Novelty: the authors combined both continuous and discrete labels during pretraining.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1. Lack of clinical justification for weak radiological labels: the paper does not clarify how the weak radiological labels were extracted or whether any NLP tools were used to obtain them.
    2. Different disease prevalence in the datasets: the different prevalence might be a confounder in the classification task.
    3. Lack of clarification: the paper does not provide information about the number of radiologists, their clinical experience and specialty, or whether their results were averaged when showing the bACC score by radiologists in Table 2.
    4. Incremental improvements: the proposed method outperformed the depth-aware method by just 1%, and it is not clear whether the “superiority” of the proposed method is statistically significant.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors validated their approach on three different datasets, including a public LIHC dataset. The authors will make their code available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Include the baseline model (“None” pretraining) with weak labels and depth position. Compare the proposed approach to supervised pretraining, such as ImageNet or RadImageNet, which could serve as another baseline model in all comparisons.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The proposed method demonstrates superiority over existing contrastive learning approaches in cirrhosis classification from weak radiology labels. The authors combined continuous and discrete labels during pretraining, validated their approach on three different datasets, and will make their code available for reproducibility. However, the lack of clinical justification for the weak radiology labels, the different disease prevalence in the datasets, and the lack of clarity in Table 2 are my concerns. It would also be helpful to compare their approach to a supervised learning pretraining and provide a baseline model with weak labels and depth position.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    6

  • [Post rebuttal] Please justify your decision

    The authors addressed my comments accordingly



Review #3

  • Please describe the contribution of the paper

    This work proposes a weakly-supervised contrastive learning framework that considers both the diagnostic class and positional information. The method is particularly designed for analyzing 3D radiological data such as MRI and CT scans. The authors validated the method on fibrosis/cirrhosis classification and showed superior performance compared to the baselines.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. This paper addresses a fairly important limitation of radiological AI: high-quality diagnostic labels are costly to obtain from radiological scans. Unsupervised or weakly supervised learning might solve this problem and pave the path toward strong AI in medicine.
    2. The proposed method is clear and makes sense.
    3. The authors made comprehensive evaluations on comparing the proposed method and other contrastive learning baselines in the context of classification.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The proposed method is a natural extension of supervised contrastive learning (Khosla et al., 2020). In my view, the only innovation of this work is introducing a Gaussian kernel as a weighting factor into the supervised contrastive loss. Therefore, the technical innovation is limited.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    ok

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. Given the limited technical innovation, I would like to see more evaluations such as its performance on segmentation tasks.
    2. Depth information is of great importance in this work. However, in Section 3.1 (Datasets), I didn’t find any information with respect to the field of view (FOV) of the CT scans used. May I ask how the varying FOV affects the proposed method?
    3. In Figure 2, the authors claimed that their method is the only one where the diagnostic label and the depth position are correctly integrated. May I ask why the fibrosis label should correlate well with the depth information?
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    1. Limited technical innovation.
    2. I doubt whether adding such depth information will generally help in fibrosis prediction.
  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper presents a weakly-supervised contrastive learning framework that incorporates diagnostic class and positional information. The reviewers found the idea intriguing, praised the clarity of writing, and recognized the rationale behind the proposed method. However, they expressed concerns regarding the rigor of the evaluation and the incremental nature of both the innovation and the performance gains. For detailed feedback, please refer to the reviewer comments. It is important for the authors to address these major concerns, such as the limited technical innovation and the statistical significance of the incremental improvements, and to provide the additional details mentioned by the reviewers.




Author Feedback

We thank the reviewers for all their constructive comments. We address their concerns below.

Rigor of evaluation (R1, R2, MR): In Table 1, the reported AUC scores are consistently correlated with the dataset size and the simplicity of the backbone. On the small-size D_histo^2 dataset (N=49), TinyNet performs systematically better than ResNet-18. By contrast, on D_histo^{1+2} (N=155) 5/6 methods obtain better results with the ResNet encoder. To answer R1’s concerns, we would simply conclude that the smaller the dataset, the fewer parameters are necessary in the encoder.

Incremental performance (R1, R2, MR): Our WSP method achieves superior performance regardless of the encoder type and the number of patients, in all experiments but one. As underlined by R2, WSP outperforms the 2nd-ranking method (depth-Aware) by only 1% in terms of bACC. However, on average across the considered datasets and encoder architectures, WSP outperforms depth-Aware by a larger 4% in terms of AUC, an integral metric which may capture the performance of a given method more comprehensively than bACC. Moreover, we argue that depth-Aware is actually an original method and forms an additional contribution of our work; this will be clarified in the introduction. To further compare our method against already-existing baselines, as suggested by R2, in the final version of the article we will add a pretraining on ImageNet with ResNet-18, which achieves an AUC of 0.72 on D_histo^1, 0.76 on D_histo^2, and 0.66 on D_histo^{1+2}.

Technical novelty (R1, R3, MR): The literature on contrastive learning (CL) using meta-data reports different methods using continuous (y-Aware [8]) or discrete (SupCon [13]) labels during pretraining. Yet, no paper has proposed combining both continuous and discrete labels for pretraining within the CL framework. Furthermore, we also propose, for the first time, to combine different sources of information: we use prior meta-data (a discrete radiological label) together with spatial positional information (a continuous variable). Finally, we also propose a geometrical approach to CL, which entails a mathematically sound framework where different kinds of meta-data can be easily combined, as shown in Fig. 2. This is formalized in Eq. 2, where we propose a composite-kernel constraint that had never been proposed before. As requested by R1, we will add a figure describing the algorithm flow in the supplementary material.
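The composite-kernel idea described in this rebuttal can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors’ implementation: the function names, the specific kernel choices (an indicator kernel on the discrete weak label multiplied by a Gaussian kernel on the continuous depth), and the hyperparameter values are our assumptions for illustration only, standing in for the constraint formalized in Eq. 2 of the paper.

```python
import numpy as np

def composite_kernel_weights(labels, depths, sigma=0.1):
    """Composite kernel (illustrative): product of a discrete indicator
    kernel on weak labels and a Gaussian kernel on normalized depths."""
    labels = np.asarray(labels)
    depths = np.asarray(depths, dtype=float)
    # Discrete kernel: 1 when two slices share the same weak label, else 0.
    discrete = (labels[:, None] == labels[None, :]).astype(float)
    # Continuous kernel: Gaussian similarity of normalized depth positions.
    gaussian = np.exp(-((depths[:, None] - depths[None, :]) ** 2) / (2 * sigma ** 2))
    return discrete * gaussian

def kernel_weighted_contrastive_loss(z, labels, depths, sigma=0.1, tau=0.1):
    """InfoNCE-style loss where each pair's contribution is weighted by the
    composite kernel (a sketch in the spirit of SupCon / y-Aware losses)."""
    z = np.asarray(z, dtype=float)
    # L2-normalize embeddings so dot products are cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = z.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    # Pairwise kernel weights, self-pairs excluded.
    w = composite_kernel_weights(labels, depths, sigma) * off_diag
    # Log-softmax of similarities over all other samples in the batch.
    sim_others = np.where(off_diag, sim, -np.inf)
    log_prob = sim - np.log(np.exp(sim_others).sum(axis=1, keepdims=True))
    # Kernel-weighted average of log-probabilities per anchor, then batch mean.
    per_anchor = -(w * log_prob).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-8)
    return per_anchor.mean()
```

Under this sketch, two slices attract each other strongly only when they share the same weak label *and* sit at similar anatomical depths, which is one way to read the combination of discrete and continuous meta-data described above.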

Data acquisition (R2, R3): In the D_radio dataset, each CT study was blindly reviewed by one radiologist specialized in abdominal imaging to assess the presence of cirrhosis, which was classified into four stages: no/mild/moderate/severe cirrhosis. These cirrhosis stages are only radiological, without any histological confirmation, hence the “weak” appellation. Regarding the field of view (FOV) of the CT scans used in our paper (R3), our pretraining dataset D_radio presents a slice-spacing standard deviation of 1.29 mm, a mean spacing of 3.23 mm, and min/max spacings of 0.5 mm and 5 mm. During training, the proposed WSP method is not directly influenced by slice thickness, as it uses a normalized depth metric d in the Gaussian kernel, an anatomical coordinate expressed as a “liver percentage” rather than a slice number. However, in future work, it could be interesting to study the influence of the spacing distribution on classification performance. At inference, since each slice is treated independently, slice spacing has no influence on the results. These points will be clarified in the final paper.
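As a concrete illustration of the normalized depth metric d described above, one might compute it from a slice’s axial index relative to the first and last slices containing liver. The function below is a hypothetical sketch (the name and inputs are ours, not the paper’s): it expresses depth as a fraction of the liver span, which is why it is invariant to the scan’s slice spacing.

```python
def normalized_depth(slice_index, first_liver_slice, last_liver_slice):
    """Depth of a slice as a fraction of the liver span, in [0, 1].

    Because the result is a ratio of slice counts within the same scan,
    it is invariant to the scan's slice spacing (e.g., 0.5 mm vs 5 mm),
    matching the "liver percentage" interpretation described above.
    """
    span = last_liver_slice - first_liver_slice
    if span <= 0:
        raise ValueError("last_liver_slice must be greater than first_liver_slice")
    return (slice_index - first_liver_slice) / span
```

For example, a slice halfway through the liver gets d = 0.5 regardless of whether the scan was acquired at 0.5 mm or 5 mm spacing.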

Extension to segmentation (R3): Extending our approach to segmentation tasks raises non-trivial questions, in particular regarding the calibration of a decoder architecture within a CL framework. In addition, we argue that requiring weak semantic segmentation labels would probably limit the applicability of such an extension of our work.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper introduces a novel framework for weakly-supervised contrastive learning that incorporates both diagnostic class and positional information.

    During the first round of review, the reviewers found the idea intriguing and commended the clarity of writing. They also acknowledged the rationale behind the proposed method. However, they expressed concerns regarding the rigor of the evaluation, the incremental nature of the innovation, and the performance. The authors responded with a comprehensive rebuttal, summarizing and addressing these concerns. As a result, the paper received two positive reviews and one negative review.

    I believe the authors have done an excellent job in the rebuttal, effectively addressing the concerns raised in the first round of review. The proposed weakly-supervised positional contrastive learning approach provides a new perspective for radiology research and has the potential to be applied to large-scale radiological datasets.

    Based on these reasons, my recommendation leans towards acceptance.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Leveraging meta-data for contrastive learning in medical image tasks is an interesting yet not fully explored topic. The proposed method achieves considerable improvements over the presented baselines, and extensive experiments are conducted and well discussed. The reviewers pointed to limited technical novelty but, in my opinion, this is not an issue. The novelty lies in the idea of integrating prior meta-data (discrete radiological label) together with spatial positional information (continuous variable).



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This paper presents a weakly-supervised positional contrastive learning strategy that integrates both the spatial context of each 2D slice and a weak label via a generic kernel-based loss function. The reviewers appreciated the presented method and the clarity of its presentation; however, they raised some concerns about the limited novelty, incremental improvements, and missing details. The authors submitted a rebuttal to address these points, and two of the reviewers indicated that their concerns had been properly addressed. The meta-reviewer agrees with the reviewers and thinks that the paper could be an interesting contribution to MICCAI.


