Authors
Pengfei Diao, Akshay Pai, Christian Igel, Christian Hedeager Krag
Abstract
Domain shift is a common problem in machine learning and medical imaging. Currently, among the most popular domain adaptation approaches are domain-invariant mapping methods based on generative adversarial networks (GANs). These methods deploy some variation of a GAN to learn the target domain distribution at the pixel level. However, they often produce overly complicated or unnecessary transformations. This paper is based on the hypothesis that most domain shifts in medical images are variations of global intensity changes, which can be captured by transforming histograms along with individual pixel intensities. We propose a histogram-based GAN methodology for domain adaptation that outperforms standard pixel-based GAN methods in classifying chest x-rays from various heterogeneous target domains.
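The global intensity shifts described in the abstract can be illustrated with a simple gamma correction, a monotone remapping of pixel intensities that preserves anatomy while reshaping the image histogram. The sketch below is a plain NumPy illustration of that idea, not the authors' implementation (in the paper, gamma-like parameters are learned adversarially by a GAN whose discriminator operates on histograms):

```python
import numpy as np

def gamma_transform(img, gamma):
    """Global intensity remapping x -> x**gamma for images scaled to [0, 1].

    The mapping is monotone, so spatial structure is preserved while the
    intensity histogram is reshaped -- the kind of global domain shift
    the paper hypothesizes accounts for most domain differences.
    """
    return np.clip(img, 0.0, 1.0) ** gamma

# gamma > 1 darkens mid-tones, gamma < 1 brightens them
img = np.linspace(0.0, 1.0, 5)
darker = gamma_transform(img, 2.0)    # mean intensity decreases
brighter = gamma_transform(img, 0.5)  # mean intensity increases
```

Because the transform is differentiable in `gamma`, such a parameter can be optimized end-to-end against a histogram-level adversarial loss.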
Link to paper
DOI: https://link.springer.com/chapter/10.1007/978-3-031-16449-1_72
SharedIt: https://rdcu.be/cVRXK
Link to the code repository
https://github.com/chronicom/histogram-based-gan-for-lung-disease-classification
Link to the dataset(s)
https://stanfordmlgroup.github.io/competitions/chexpert/
https://nihcc.app.box.com/v/ChestXray-NIHCC
Reviews
Review #2
- Please describe the contribution of the paper
This paper describes a domain adaptation method based on histograms rather than images and applies it to disease detection from chest x-ray images.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The most important aspect is that the results show an improvement over baseline only for the proposed methods. Additionally, the paper is well written; it explains the underlying idea well and explains how it is implemented.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Table 1 is very confusing. That needs major work. For example, the training error is shown in the first few lines for each experiment, but that isn’t explicitly explained. Also, what exactly is the baseline method for each experiment? Finally, please add p-values for each row. Re-do Table 1 and add a better caption that explains what exactly is being classified with the AUC.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The paper appears to be reproducible. The most important component will be PyTorch or TensorFlow code for the histogram layer. The paper does not say which code libraries are used, so it is hard to determine what the implementation will be without the code.
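A differentiable "histogram layer" of the kind the reviewer refers to is commonly built with soft binning: each pixel splits its unit mass between the two nearest bin centers, so the operation is piecewise-linear in the intensities and admits gradients. The following is a minimal NumPy sketch of that general technique, assuming triangular kernels; it is an illustration, not the authors' actual layer:

```python
import numpy as np

def soft_histogram(x, n_bins=64, lo=0.0, hi=1.0):
    """Differentiable histogram via triangular (linear-interpolation) kernels.

    Each value in x contributes weight 1 - |x - c_k| / width to bin center
    c_k (clipped to [0, 1]), so values strictly inside the center range
    split exactly one unit of mass between their two nearest bins.
    """
    centers = np.linspace(lo, hi, n_bins)
    width = centers[1] - centers[0]
    # (n_values, n_bins) triangular weights
    w = np.clip(1.0 - np.abs(x[:, None] - centers[None, :]) / width, 0.0, 1.0)
    return w.sum(axis=0)

h = soft_histogram(np.array([0.25]), n_bins=3, lo=0.0, hi=1.0)
# 0.25 sits halfway between centers 0.0 and 0.5
```

The same construction transfers directly to PyTorch or TensorFlow by swapping `np` for the framework's tensor ops, at which point gradients flow through the histogram to upstream intensity transforms.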
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
This is a good paper, but Table 1 needs substantial improvement, and the grammar could use more editing. Otherwise, good job.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
7
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Great idea and results to back it up.
- Number of papers in your stack
3
- What is the ranking of this paper in your review stack?
1
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
In this paper, the authors work on the hypothesis that most domain shifts in medical images are variations of global intensity changes which can be captured by transforming histograms along with individual pixel intensities.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The method may work.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Experiments are insufficient and the motivation is unclear.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Difficult to reproduce.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
Experiments are insufficient and the motivation is unclear. The convergence of the model should be demonstrated to help analyze performance. Experimental details should also be supplemented.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
4
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Many details are missing.
- Number of papers in your stack
4
- What is the ranking of this paper in your review stack?
4
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #4
- Please describe the contribution of the paper
This paper proposes a simple and straightforward yet effective method for unsupervised domain adaptation of medical images, which operates only at a high level on the input images via a learnable gamma transformation. The proposed method also achieves the best results compared with the most popular image-translation-based methods.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The proposed method is very straightforward and effective, and can be utilized as a general method for other medical image applications.
- The motivation and the method are well explained.
- The experiments are sufficient to show the robustness and the effectiveness of the proposed method.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
No obvious weakness.
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
No obvious reproducibility issues, as the authors claim.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
The motivation and intuition are clear. The paper is well written.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The simple, effective, and general method for medical image domain adaptation is the key factor.
- Number of papers in your stack
2
- What is the ranking of this paper in your review stack?
1
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
- The paper proposes a histogram-based adaptation method for domain adaptation.
- Improve the quality of Table 1 as asked by the reviewer.
- The reproducibility aspect of the paper (e.g., code release) MUST be addressed.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
6
Author Feedback
Meta-review
- Improve the quality of Table 1 as asked by the reviewer
- The reproducibility aspect of the paper (e.g., code release) MUST be addressed
We have improved Table 1 and the overall writing quality. We will make the code [2] for reproducing the experiments on the publicly available datasets available as open source in a public code repository. This will ensure reproducibility.
Reviewer #2
Thank you for your positive feedback.
*Table 1 is very confusing. That needs major work. For example, the training error is shown in the first few lines for each experiment, but that isn’t explicitly explained. Also, what exactly is the baseline method for each experiment? Finally, please add p-values for each row. Re-do Table 1 and add a better caption that explains what exactly is being classified with the AUC.
Thank you for the suggestions; we revised the table and extended the caption. The revised table has the following columns: Test on, Histogram Equalized, Method, Mean AUC.
The caption reads (tentative): AUCs (area under the receiver operating characteristic curve) for different methods evaluated on the test data specified in the leftmost column. The AUC is the macro average over 5 classes. The dataset name refers to the dataset on which the classifier was trained (e.g., NIH-net was trained on NIH). The numbers of test images were 231, 25596, and 25523 for RH, NIH, and Chexpert, respectively, except for Chexpert$^{234}$ and Chexpert$^{6886}$, where 234 (standard Chexpert test set) and 6886 images were used. When evaluated on Chexpert, the baseline was NIH-net + plain input; when evaluated on NIH, the baseline was Chexpert-net + plain input (input histogram equalized); and when evaluated on RH, the baselines were Chexpert-net + plain input (input histogram equalized) and NIH-net + plain input.
In the revised paper, we will add a table of p-values for each class comparison based on the DeLong test [1].
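The p-values mentioned above would come from DeLong's test [1], which compares two correlated AUCs by estimating the variance of their difference from "structural components" (per-positive and per-negative placement fractions). A minimal NumPy sketch of the test, written as an independent illustration rather than the authors' code, is:

```python
import numpy as np
from math import erfc, sqrt

def _components(pos, neg):
    # v10[i]: fraction of negatives ranked below positive i (ties count 1/2)
    v10 = np.array([(np.sum(p > neg) + 0.5 * np.sum(p == neg)) / len(neg)
                    for p in pos])
    # v01[j]: fraction of positives ranked above negative j
    v01 = np.array([(np.sum(pos > n) + 0.5 * np.sum(pos == n)) / len(pos)
                    for n in neg])
    return v10, v01

def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test for the difference of two correlated AUCs."""
    pos, neg = y_true == 1, y_true == 0
    v10a, v01a = _components(scores_a[pos], scores_a[neg])
    v10b, v01b = _components(scores_b[pos], scores_b[neg])
    auc_a, auc_b = v10a.mean(), v10b.mean()
    s10 = np.cov(np.vstack([v10a, v10b]))  # covariance over positives
    s01 = np.cov(np.vstack([v01a, v01b]))  # covariance over negatives
    var = ((s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / len(v10a)
           + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / len(v01a))
    z = (auc_a - auc_b) / sqrt(var)
    return auc_a, auc_b, erfc(abs(z) / sqrt(2))  # two-sided normal p-value
```

For the per-class comparisons in a multi-label setting like this one, the test would be run once per class on the binary labels and the two classifiers' scores for that class.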
*The paper appears to be reproducible. The most important component will be PyTorch or TensorFlow code for the histogram layer. The paper doesn't tell which code libraries are used, so it is hard to determine what the implementation will be without code.
We will make the code [2] for reproducing the experiments on the publicly available datasets available as open source in a code repository. This will ensure reproducibility.
*This paper is a good paper but I would try to improve Table 1 a lot and also the grammar could use some more editing. Otherwise good job.
We have improved Table 1 and the overall writing quality.
Reviewer #3
*Experiments are insufficient and the motivation is unclear.
In medical image analysis, we often face the problem of domain shift: the model is applied to data from different sites than it was trained on. In our experience, this is a very fundamental problem. We propose, based on previous work, a way to address it by learning to correct for site-specific differences in the imaging using unsupervised learning and without changing the original classification model.
*Difficult to reproduce.
We will make the code [2] for reproducing the experiments on the publicly available datasets available as open source in a public code repository. This will ensure reproducibility.
*Experiments are insufficient and the motivation is unclear. The convergence of the model should be demonstrated to help analyze performance. Experimental details should also be supplemented.
We have improved the overall writing quality. We will make the code [2] for reproducing the experiments on the publicly available datasets available as open source in a code repository. This will ensure reproducibility.
Reviewer #4
Thank you for your positive feedback.
[1] DeLong, E. R., et al.: Comparing the Areas under Two or More Correlated Receiver Operating Characteristic Curves: A Nonparametric Approach. Biometrics (1988).
[2] https://github.com/chronicom/histogram-based-gan-for-lung-disease-classification.git