
Authors

Kihyun You, Suho Lee, Kyuhee Jo, Eunkyung Park, Thijs Kooi, Hyeonseob Nam

Abstract

Radiologists consider fine-grained characteristics of mammograms as well as patient-specific information before making the final diagnosis. Recent literature suggests that a similar strategy works for Computer Aided Diagnosis (CAD) models; multi-task learning with radiological and patient features as auxiliary classification tasks improves the model performance in breast cancer detection. Unfortunately, the additional labels that these learning paradigms require, such as patient age, breast density, and lesion type, are often unavailable due to privacy restrictions and annotation costs. In this paper, we introduce a contrastive learning framework comprising a Lesion Contrastive Loss (LCL) and a Normal Contrastive Loss (NCL), which jointly encourage models to learn subtle variations beyond class labels in a self-supervised manner. The proposed loss functions effectively utilize the multi-view property of mammograms to sample contrastive image pairs. Unlike previous multi-task learning approaches, our method improves cancer detection performance without additional annotations. Experimental results further demonstrate that the proposed losses produce discriminative intra-class features and reduce false positive rates in challenging cases.
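
The abstract does not spell out the exact form of LCL and NCL; the sketch below only illustrates the multi-view pair-sampling idea it describes, assuming a triplet-style formulation (as Review #1 characterizes it) and a PyTorch setup. The toy encoder, tensor shapes, and names (triplet_contrastive_loss, x_cc, x_mlo) are placeholders, not the authors' code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def triplet_contrastive_loss(anchor, positive, negative, margin=1.0):
        # Triplet margin loss on L2 distances in the embedding space.
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        return F.relu(d_pos - d_neg + margin).mean()

    # Toy stand-in for the ResNet backbone mentioned in Review #1.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))

    # The CC and MLO views of the same breast give a label-free positive
    # pair; a same-class image from another patient acts as the negative.
    x_cc, x_mlo, x_other = (torch.randn(8, 1, 64, 64) for _ in range(3))
    z_cc, z_mlo, z_neg = encoder(x_cc), encoder(x_mlo), encoder(x_other)

    lcl = triplet_contrastive_loss(z_cc, z_mlo, z_neg)  # LCL-style term

The same pairing trick would apply to the NCL term, with anchors and positives drawn from two views of the same normal breast.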

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16437-8_6

SharedIt: https://rdcu.be/cVRsS

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper describes an extension of contrastive learning for cancer detection from multi-view mammography images. The authors propose a triplet-style contrastive loss applied within the normal and lesion classes to improve the separability of the embedding space. Comparison experiments demonstrate the benefits of the proposed approach.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • Well written detailed paper
    • The extension of the contrastive learning triplet loss to incorporate label-free information is novel
    • Thorough experiments and ablation studies.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • Some details are missing.
    • Question over some of the choices in the approach.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
    • Ok. The dataset is in-house, so I don’t know if it will be released. Otherwise, the implementation is a straightforward contrastive loss setup with a ResNet. That should be reproducible, but the heuristics of the loss weighting are not provided (see my detailed comments below).
  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    • The marked lesion in Fig. 1 is very hard to see. Please darken the contour that marks the region.
    • How do the hard-negative-mining examples vary as learning progresses? For instance, as separability improves, is it correct to assume that the triplet-loss margins improve and the harder-to-distinguish cases (as in the benign-case experiment) become closer in terms of their L2 distances in the embedding space? Can the authors comment on what they observed?
    • The authors use the LCL + NCL + classifier loss to improve accuracy over the trained baseline, but details on how the losses were combined (e.g., whether they were weighted equally) are missing. Further, was there a training policy in place, where one batch had lesion anchors, positives, and negatives, and another had normals? I suggest the authors provide more information on this; I recognize that space is limited, so it could go in the supplementary material. (A hedged sketch of one possible scheme follows this list.)
    • In Table 3 of the supplementary material, are the differences statistically significant?
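
    To make the mining and weighting questions above concrete, here is a hypothetical sketch of batch-hard negative mining and an equally weighted combination of the three losses in PyTorch. The mining rule, the weights, and all names (batch_hard_negatives, total_loss) are assumptions, since the paper does not specify them.

        import torch
        import torch.nn.functional as F

        def batch_hard_negatives(anchors, candidates):
            # For each anchor, select the closest candidate embedding in
            # the batch (the "hardest" negative) by pairwise L2 distance.
            d = torch.cdist(anchors, candidates)  # (B, B) distance matrix
            return candidates[d.argmin(dim=1)]

        def triplet_loss(anchor, positive, negative, margin=1.0):
            d_pos = F.pairwise_distance(anchor, positive)
            d_neg = F.pairwise_distance(anchor, negative)
            return F.relu(d_pos - d_neg + margin).mean()

        def total_loss(lcl, ncl, cls, w_lcl=1.0, w_ncl=1.0, w_cls=1.0):
            # Assumed equal weighting; the actual weights are unreported.
            return w_lcl * lcl + w_ncl * ncl + w_cls * cls

    Tracking how often the mined negatives violate the margin over training would also answer the hard-negative question above empirically.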
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Well-written paper. Good application extension of contrastive learning, with thorough experiments to back up the hypothesis.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    This paper proposes novel loss functions that exploit the contrastive properties among lesion and normal cases. The proposed method works effectively with various recent multi-task learning frameworks.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • This paper is well written and easy to follow.
    • The technical part is sound and novel. Various recent learning strategies are included to validate the effectiveness of the proposed methods.
    • The subgroup analysis is interesting and incorporates domain knowledge.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The authors mention the cancer/benign/normal ratio in the in-house dataset, but the ratio in the validation and test sets is not the same (almost 1:1:1 instead). Why were the validation and test sets designed this way? This might introduce a gap between the distribution the model sees and the actual distribution in a real screening scenario.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The reproducibility of the paper should not be an issue.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    The authors could further clarify how the in-house dataset was used.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Novelty; performance; dataset description; consideration of domain knowledge.

  • Number of papers in your stack

    4

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #4

  • Please describe the contribution of the paper

    In this paper, the authors introduce a contrastive learning framework involving a Lesion Contrastive Loss (LCL) and a Normal Contrastive Loss (NCL) to improve overall accuracy.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The paper compares its results with recent approaches and provides experimental evidence that the improvement is statistically significant.
    2. The formulation of LCL and NCL is intriguing and adapted from domain knowledge.
    3. The results are provided on a test dataset and show the merit of the proposed solution.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Evaluation on an independent test set in a prospective manner is important for testing clinical feasibility. However, this can be future work.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The experimentation is well described, and I believe it should be reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    Overall, I congratulate the authors on their work on breast cancer, one of the leading causes of cancer death in women. The technique is well explained, and the formulation of LCL and NCL is intriguing. The results presented in the paper show the merits of the approach. However, evaluation on an independent test set in a prospective manner is important for testing clinical feasibility.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The claims are substantiated by the provided results, and the experimentation appears sound.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This is a very well-written paper on cancer detection in mammograms. The authors add novel components to the contrastive loss and provide a thorough analysis. Reviewers are strongly supportive.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    3




Author Feedback

N/A


