
Authors

Julia Wolleb, Robin Sandkühler, Florentin Bieder, Muhamed Barakovic, Nouchine Hadjikhani, Athina Papadopoulou, Özgür Yaldizli, Jens Kuhle, Cristina Granziera, Philippe C. Cattin

Abstract

The limited availability of large image datasets, mainly due to data privacy and differences in acquisition protocols or hardware, is a significant issue in the development of accurate and generalizable machine learning methods in medicine. This is especially the case for Magnetic Resonance (MR) images, where different MR scanners introduce a bias that limits the performance of a machine learning model. We present a novel method that learns to ignore the scanner-related features present in MR images, by introducing specific additional constraints on the latent space. We focus on a real-world classification scenario, where only a small dataset provides images of all classes. Our method Learn to Ignore (L2I) outperforms state-of-the-art domain adaptation methods on a multi-site MR dataset for a classification task between multiple sclerosis patients and healthy controls.

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16449-1_69

SharedIt: https://rdcu.be/cVRXH

Link to the code repository

https://gitlab.com/cian.unibas.ch/L2I

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes a supervised domain adaptation approach for the classification of multiple sclerosis patients and healthy controls on MRI scans. The target dataset contains images from one scanner type only, whereas the source data contains several scanner types. The authors propose to learn cluster centers based on the target data and to force latent vectors from both the target and source data to be close to those centers. The approach outperforms several domain adaptation and contrastive learning approaches on the target domain while keeping good performance on the source domain.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • the method is simple but elegant and outperforms baselines and state-of-the-art approaches on the target domain by a large margin
    • the approach is compared to multiple baseline and state-of-the-art approaches
    • the setting is clinically relevant, as data collected in a hospital often comes from the same scanner
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • the method is restricted to applications where the target domain is labeled and source data is available, meaning a trained model can’t easily be fine-tuned on a new domain once the source data is no longer available.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Code is available online; the in-house data can’t be published due to privacy concerns, but the other data used is publicly available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    • the data sampling algorithm is only mentioned in the supplement; it would be nice to explain the data sampling in 1-2 sentences in the paper, so the reader doesn’t have to switch to the supplement.
    • did the authors try to train the center points based on the target and source domains instead of just the target domain? This would be an interesting comparison.
    • in contrastive learning, the projection sub-network (the layers after the ResNet that project the feature vector to dimension 128 in this paper) is usually discarded after contrastive training, as it leads to some information loss (see the SimCLR paper); could the authors explain why they chose to keep these layers?

    Chen, Ting, et al. “A Simple Framework for Contrastive Learning of Visual Representations.” International Conference on Machine Learning. PMLR, 2020.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    • the authors show impressive results on the target domain while keeping performance on the source domain high. The method is simple and could be useful for other tasks, e.g. image segmentation.
  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    6

  • [Post rebuttal] Please justify your decision

    I agree with the authors that domain adaptation doesn’t mean training on target data is not allowed.



Review #2

  • Please describe the contribution of the paper

    The authors formulate a new domain adaptation paradigm in which diseased and healthy participants come from different studies in the source domain. They propose a method that enforces latent codes from different classes to be far from each other, while codes within each class stay close together, by introducing a center point loss and a latent loss. Essentially, the losses are controlled by two parameters: the radius of each class sphere and the distance between the cluster centers. In an experiment on multiple sclerosis classification, the proposed method outperforms the other domain adaptation methods on the target domain when training on both the source and target domains.
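
    As a reading aid, a minimal PyTorch sketch of the two constraints described above could look as follows; the exact formulation is given in the paper, so the squared-hinge form, shapes, and names here are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption: not the authors' exact implementation) of the
# two latent-space constraints described above, written in PyTorch.
import torch

def center_point_loss(centers: torch.Tensor, d: float) -> torch.Tensor:
    # centers: (2, latent_dim) learnable class centers (e.g. healthy vs. MS);
    # the loss is zero once the two centers are at least a distance d apart.
    gap = torch.norm(centers[0] - centers[1])
    return torch.relu(d - gap) ** 2

def latent_loss(z: torch.Tensor, labels: torch.Tensor,
                centers: torch.Tensor, r: float) -> torch.Tensor:
    # z: (B, latent_dim) latent codes, labels: (B,) class indices;
    # each code is pulled inside the sphere of radius r around the center
    # of its class, regardless of which scanner produced the image.
    dist = torch.norm(z - centers[labels], dim=1)
    return (torch.relu(dist - r) ** 2).mean()
```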

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The paper is trying to tackle an interesting problem: how to generalize a classification model when the training data of each class comes from different studies.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The proposed method is complex. How would it generalize to multi-way classification?
    • Many hyper-parameters are introduced in the proposed method, such as d, r, lambda_cls, lambda_latent, and lambda_cen. The authors need to include hyper-parameter search results in the paper.
    • There is a flaw in the experimental design. Why do you train on both the source and target domains? Why do you then call your method a domain adaptation method? Here you have nothing to adapt to. And why do you bother to compare with those domain adaptation methods, which are not designed to train on both source and target domains?
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The github link is provided in the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    As stated above in strengths and weaknesses.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    3

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, this domain adaptation paper cannot validate its claim because it trains on both the source and target domains in its experiments, yet the authors still compare their method against the baseline methods on the target domain.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    3

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    4

  • [Post rebuttal] Please justify your decision

    The authors addressed some of my concerns, such as model complexity. However, since they use both the source and target data for training and their goal is to learn scanner-invariant features, this paper is essentially not about domain adaptation, because there is no domain to adapt to. Instead, this is harmonization work that implicitly removes scanner/site effects from the learned representations. So, I agree with Reviewer 3 that they need to compare with harmonization methods such as ComBat. The authors didn’t address this issue, so I cannot recommend acceptance.



Review #3

  • Please describe the contribution of the paper

    The authors propose a novel domain adaptation/harmonization method for MR images acquired with different scanners. Adaptation is facilitated by adding specific constraints on the latent space to reduce the domain shift caused by scanner differences. Experiments on several MR datasets demonstrate the effectiveness of the proposed method.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    • The research topic is of practical value for MR image adaptation/harmonization, dealing with between-site variation caused by scanner differences.
    • The authors have clearly presented their motivation and insights for the model design. The overall architecture is appropriate.
    • The extensive experiments on diverse MR datasets are appreciated. The comparison with several state-of-the-art and classic adaptation methods makes the results solid.
    • The authors released the source code, which is helpful for reproducing the proposed method and the experimental results.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    • As one of the key parameters, lambda_cls in Eq. (1) is not necessary. The other two parameters are enough to control the contributions of all three losses.
    • Some discussion of the influence of the key parameters is missing. Since there are three losses during training, which one plays the more important role in the final classification result?
    • Some lightweight adaptation/harmonization methods, such as ComBat and TCA, could be used for comparison.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Good. The authors have released the source code, which is helpful for reproducing the proposed method.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    1. Remove the first weighting parameter, the one for the classification loss.
    2. Add a discussion of the influence of the key parameters on the classification results.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Appropriate model design. Extensive experiments on diverse datasets.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    3

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    Is training on both the source and target domains still domain adaptation? Is it a fair and valid experimental setup? Please clarify how the hyper-parameter choices were made and should be made.

    “inhouse data can’t be published due to privacy concerns, but other data used is publicly available.”

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    2




Author Feedback

First, we want to thank all reviewers for their helpful and valuable comments.

Reviewer #2 points out that domain adaptation methods are not trained on the source and target domains together, and the Meta-Reviewer also asked for clarification. We respectfully disagree. The goal of domain adaptation is to adapt a model so that it performs well on a target domain that differs from the source domain. This does not mean that the target domain dataset is never seen during training; it is often required during the adaptation process. In supervised domain adaptation methods like ours, the target domain labels are used during training, as is also the case in “Unlearning” [9]. In adversarial approaches such as “DANN” [12], target domain samples are needed for adversarial training. Even unsupervised domain adaptation methods such as “CAN” [19] use target domain samples for training, just without the class labels. We think the comparison to unsupervised domain adaptation methods such as “CAN” is fair, as it is a state-of-the-art domain adaptation network. As described in Section 1.2, our initial problem was that the target domain alone was too small for training. Therefore, we added more data as the source domain. Within this setup, we think it is justified to use the target domain samples during training.

Reviewer #2 and the Meta-Reviewer asked how the hyper-parameters were chosen; they were chosen manually. The choice of d and r is intuitive, as the hyperspheres around the center points need to be separated in the latent space, as described in Section 2.1 and visualized in Figure 3. The loss weight lambda_cls is set to 1 and can be removed, as pointed out by Reviewer #3. The loss weights lambda_cen and lambda_latent are chosen such that both loss components are about equally important.
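
For concreteness, the resulting training objective could be written roughly as in the sketch below; the weight values are placeholders, not the ones used in the paper.

```python
# Purely illustrative combination of the three training losses once the
# classification-loss weight lambda_cls is fixed to 1 and dropped; the
# default weights below are placeholders, not the values used in the paper.
def total_loss(cls_loss, cen_loss, lat_loss,
               lambda_cen: float = 1.0, lambda_latent: float = 1.0):
    # lambda_cen and lambda_latent are chosen so that the center point loss
    # and the latent loss contribute with roughly equal magnitude.
    return cls_loss + lambda_cen * cen_loss + lambda_latent * lat_loss
```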

Reviewer #2 mentioned that the method is complex. However, we argue that the generalization to multi-way classification is already given in Equations 2 and 3; only the hyperparameter d would need to be adapted, as it would then constrain the pairwise distance between the multiple center points c_i. Furthermore, any existing classification network can be extended with this approach by adding the center point loss and the latent loss.
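
One hypothetical form such a multi-way extension of the center point loss could take, with every pair of centers kept at least a distance d apart (the pairwise hinge and all names are illustrative, not taken from the paper):

```python
# Hypothetical multi-way extension of the center point loss: every pair of
# the K learnable class centers c_i is kept at least a distance d apart.
import torch

def center_point_loss_multiclass(centers: torch.Tensor, d: float) -> torch.Tensor:
    # centers: (K, latent_dim), one learnable center per class
    K = centers.shape[0]
    loss = centers.new_zeros(())
    for i in range(K):
        for j in range(i + 1, K):
            gap = torch.norm(centers[i] - centers[j])
            loss = loss + torch.relu(d - gap) ** 2
    return loss / (K * (K - 1) // 2)   # average over all center pairs
```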

Reviewer #1 points out that it would be a good experiment to train the center points on both the source and target domains. Thanks for this idea; it would indeed be interesting as part of an ablation study. Furthermore, Reviewer #1 mentions that the projection head is often dropped after contrastive learning. However, in our architecture it is needed as input to the final classification network and can therefore not be discarded. This setup is based on the architecture presented in “Unlearning” [9].

Reviewer #3 observed that the weight for the classification loss is not necessary, as this loss is the only one that updates the parameters of the classifier C. Thank you for pointing this out; we will remove this hyperparameter for simplicity. Moreover, Reviewer #3 asks for a comment on the influence of the three loss functions, which we will add in the final version. While the classification loss is separate and only responsible for the final score, it is the center point loss and the latent loss that iteratively adapt the feature space to be scanner-invariant.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Relevant for the community; some points were clarified in the rebuttal.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    mid



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This is a good paper with some minor weaknesses. Specifically, there is some confusion about whether the method is domain adaptation or harmonization. The majority of reviewers favor acceptance. The AC voted to accept this paper.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    NR



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
    • The authors propose a method to address a practical issue. Although there is some disagreement among reviewers about whether this method should be called domain adaptation or harmonization, I think the general idea would be valuable for the MICCAI audience. For that reason, I recommend accepting this paper. However, I would like to ask the authors and potential readers to perform more comparisons, at least against some harmonization methods such as ComBat. As it stands, the experiments are not sufficient for a definitive conclusion, but they are good enough for a conference publication.
  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    na


