Authors
Jinpeng Li, Hanqun Cao, Jiaze Wang, Furui Liu, Qi Dou, Guangyong Chen, Pheng-Ann Heng
Abstract
In medical image analysis, classification on imbalanced, noisy datasets is a long-standing and critical problem, since large-scale clinical datasets often acquire noisy labels and imbalanced distributions during annotation and collection. Current approaches that address noisy labels and long-tailed distributions separately may perform poorly in real-world practice. Moreover, the way class hardness hinders label-noise removal remains underexplored, creating a pressing need for an approach that improves classification performance on noisy, imbalanced medical datasets with varying class hardness. To address these challenges, we propose a robust classifier trained with a multi-stage noise removal framework that jointly rectifies the adverse effects of label noise, imbalanced distribution, and class hardness. The framework consists of multiple phases: a Multi-Environment Risk Minimization (MER) strategy captures data-to-label causal features for noise identification, and Rescaling Class-aware Gaussian Mixture Modeling (RCGM) learns class-invariant detection mappings for noise removal. Extensive experiments on two imbalanced, noisy clinical datasets demonstrate the capability and potential of our method for boosting the performance of medical image classification.
Link to paper
DOI: https://doi.org/10.1007/978-3-031-43987-2_30
SharedIt: https://rdcu.be/dnwJM
Link to the code repository
N/A
Link to the dataset(s)
https://www.nature.com/articles/sdata2018161
https://bupt-ai-cz.github.io/HSA-NRL/
Reviews
Review #1
- Please describe the contribution of the paper
The paper proposes a solution to the class imbalance problem in medical image datasets with noisy labels.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The mathematical formulation is good. Results are validated across multiple datasets.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The work is an extension of a recent CVPR 2021 paper in which the authors suggested a two-step approach. The first two steps are similar, with the classifier being self-trained; the earlier work was evaluated on the Clothing1M and CIFAR datasets. This work extends it to a three-step process.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The mathematical setting is clear, and the datasets are clearly identified.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
The work could be extended to additional open datasets and across modalities.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
7
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The authors have introduced a new framework, Class-aware Gaussian Mixture Modeling, with an appropriate mathematical basis.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
The main contributions are: (1) The authors proposed a framework that includes two stages: causal feature learning and class-aware aggregation. Causal feature learning leverages a multiple-distribution risk to push the boundary between noisy and clean samples and to enhance the discrimination of tail classes, whereas class-aware aggregation distinguishes clean from noisy samples. (2) The authors tested their algorithm on two public datasets that are imbalanced and contain noisy labels. Their results demonstrate significant improvements, surpassing the performance of current imbalanced and noisy classification approaches.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The organization of the paper is clear, the content is well structured, and the application is at the core of the MICCAI conference.
The authors proposed a two-stage approach of causal feature learning and class-aware aggregation to learn a robust classifier for imbalanced medical image datasets with noisy labels. The method is interesting as it deals with noisy and unbalanced data by reducing the noise and balancing the data, which could be very beneficial for medical domains that already suffer from small datasets.
The authors conducted experiments on two different datasets and showed improvements over other methods, which demonstrates the effectiveness of their framework.
The authors made an effort to compare their method with multiple state-of-the-art methods under the same experimental settings, which added to the quality of the paper.
The ablation study demonstrated the effectiveness of the full framework, which also added to the quality of the paper.
The fact that the authors explained how the components of their framework interact with each other added to the clarity of the paper.
The supplementary material shows heat maps for the authors' method in comparison to their baseline without the additional components; these figures highlight the effectiveness of the method.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The results section could be improved by adding a full discussion of why the algorithm achieved better results than other algorithms and where it failed. The components of the framework lack innovation, as methods such as Gaussian mixture models have been widely used in the past.
The authors did not state their contributions clearly as a bulleted list. For a MICCAI paper, which primarily targets innovation and new methods, stating the contributions clearly is very important.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The authors included supplementary material.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
Both datasets used in the experimental section are medical imaging datasets. It would improve the quality of the paper to add some examples of images where the algorithm fails and to explain why.
The authors need to state their innovations and contributions clearly.
A discussion section on the pros and cons of the method, and on where it succeeded and failed, would greatly improve the quality of the paper.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The idea of the paper is interesting, and the authors combined multiple components to optimize their framework.
All conducted experiments show the superiority of the method over other state-of-the-art methods, and the authors included a decent number of comparison methods.
It would be good to show some examples where the algorithm failed and to add a discussion section on the pros and cons of the method.
The method lacks some innovation, as all components use well-known concepts.
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #4
- Please describe the contribution of the paper
The authors present a noise-removal method to deal with noisy medical image classification tasks.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
Noise removal is important for medical imaging, but is a non-trivial task.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Several terms are used without proper explanation of their meaning, e.g., class hardness and long-tailed distributions.
- The method description is unclear.
- The noise removal method is described for classification. Would it work equally well for segmentation tasks?
- Please rate the clarity and organization of this paper
Poor
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Not reproducible as the method description is unclear.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
Please see weakness.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
4
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The major reason for my recommendation is the lack of clarity in method description that makes it hard to assess the paper.
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
4
- [Post rebuttal] Please justify your decision
Overall opinion remains unchanged.
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
The method is motivated by a central problem in medical image classification. The approach is explained well, the validation includes extensive comparisons to other methods, and ablation studies are performed to justify the model architecture. A minor comment by the reviewers is that the proposed architecture is incremental on recent literature (isn't everything incremental?); perhaps the authors can clarify this further in the introduction. Also, as another reviewer suggested, some failure cases of the method might complement the ablation studies and give intuition on why the method works the way it does.
Author Feedback
We sincerely thank all meta-reviewers and reviewers for their constructive comments and devoted time.
Common question: Novelty and contributions: We apologize for the unclear statement of our novelty and contributions. In the camera-ready version, we will clarify them in the introduction. Our key contributions and novelty are as follows: 1) We present a novel robust medical image classifier to address the negative effects of imbalance, label noise, and class hardness, which are overlooked by existing works trained on manually pruned datasets. Our work is the first to disentangle these factors and to validate on real-world medical data. 2) We propose a novel Rescaling Class-aware Gaussian Mixture Modeling (RCGM) approach to distinguish noisy labels under varying class hardness. Compared to the original GMM, the rescaled scores enlarge the distribution gap between clean and noisy samples, and class-aware clustering unifies the probability distributions across classes. 3) We minimize the invariant risk to tackle noise identification influenced by multiple factors, enabling the classifier to learn causal features and to be distribution-invariant. 4) We achieve SOTA results on two medical datasets and conduct thorough ablation studies to demonstrate our approach's effectiveness.
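A minimal sketch of how the class-aware rescaled mixture idea in contribution 2 could look, assuming per-sample training losses as input; the function name class_aware_clean_probability, the per-class min-max rescaling, and the use of scikit-learn's GaussianMixture are illustrative assumptions and may differ from the paper's actual RCGM.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def class_aware_clean_probability(losses, labels, num_classes):
        # Fit a two-component GMM to min-max rescaled per-sample losses within
        # each class; the posterior of the low-mean (low-loss) component is read
        # as a clean-label confidence. Hypothetical illustration, not the paper's code.
        clean_prob = np.zeros_like(losses, dtype=float)
        for c in range(num_classes):
            idx = np.where(labels == c)[0]
            if len(idx) < 2:  # degenerate class: keep all samples
                clean_prob[idx] = 1.0
                continue
            x = losses[idx].reshape(-1, 1)
            x = (x - x.min()) / (x.max() - x.min() + 1e-8)  # per-class rescaling
            gmm = GaussianMixture(n_components=2, reg_covar=1e-4, random_state=0).fit(x)
            clean_comp = int(np.argmin(gmm.means_.ravel()))  # smaller mean loss -> "clean"
            clean_prob[idx] = gmm.predict_proba(x)[:, clean_comp]
        return clean_prob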
R1: Relation to previous two-step methods: While two-stage training is common in imbalanced classification methods, we improve upon it by introducing a multi-stage protocol motivated by a novel decomposition of the effects of imbalance, noise, and class hardness. The key innovations of our work lie in the proposed framework components and in holistically tackling these widely ignored issues in medical images.
R4: Definitions of class hardness and long-tailed distribution: A long-tailed distribution means that the number of instances varies widely across classes. Class hardness refers to the degree of difficulty in classifying the instances of each class. Figure 1(c) shows the class distribution and class hardness in the HAM10000 dataset.
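One hypothetical way to quantify these two notions, assuming integer label arrays and predictions from a reference classifier; the imbalance ratio and the recall-based hardness measure below are illustrative choices, not necessarily the paper's definitions.

    import numpy as np

    def imbalance_ratio(labels):
        # Ratio of the largest to the smallest class size (long-tailedness proxy).
        counts = np.bincount(labels)
        counts = counts[counts > 0]
        return counts.max() / counts.min()

    def class_hardness(y_true, y_pred, num_classes):
        # One possible proxy: 1 - per-class recall of a reference classifier.
        hardness = np.zeros(num_classes)
        for c in range(num_classes):
            mask = y_true == c
            if mask.any():
                hardness[c] = 1.0 - float((y_pred[mask] == c).mean())
        return hardness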
R4: Clarity of method description: Our framework's overall training protocol is outlined in Section 2.5 and Figure 2. In brief, the pipeline is as follows: 1) We train a classifier with a warm-up process. 2) We employ MER to learn a label-noise identifier that is robust to imbalance. 3) We use RCGM to obtain noise confidence scores. 4) The classifier is fine-tuned using the reweighted samples and rebalanced losses. We will polish the method description in the camera-ready version.
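A minimal skeleton of the four-stage order described above, assuming the stage implementations are supplied as callables; the names warmup, train_noise_identifier_mer, rcgm_clean_probability, and finetune_reweighted are hypothetical placeholders, not the authors' code.

    from typing import Any, Callable, Sequence

    def run_multistage_protocol(
        warmup: Callable[[], Any],
        train_noise_identifier_mer: Callable[[], Any],
        rcgm_clean_probability: Callable[[Any], Sequence[float]],
        finetune_reweighted: Callable[[Sequence[float]], Any],
    ) -> Any:
        # Order of the four stages from the rebuttal; the callables are
        # hypothetical placeholders supplied by the caller.
        warmup()                                         # 1) warm-up training of the classifier
        identifier = train_noise_identifier_mer()        # 2) MER: imbalance-robust noise identifier
        clean_prob = rcgm_clean_probability(identifier)  # 3) RCGM: per-sample clean-label confidence
        return finetune_reweighted(clean_prob)           # 4) fine-tune with reweighted samples, rebalanced loss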
Meta and R3: Discussion of pros and cons: Our advantages stem from the proposed MER and RCGM, as shown in the "Novelty and contributions" answer and the ablation studies. The learned causal features and the increased attention on lesion regions (Fig. 1, supplementary materials) contribute to the high performance. Our work has two main limitations: 1) Multi-stage training lacks the simplicity of end-to-end training, which is a common challenge in noisy imbalanced classification methods and warrants future simplification. 2) A few hard noisy samples whose features are similar to those of clean samples may be wrongly used in training because they violate the big-loss assumption. We will visualize these failure cases in the updated supplementary materials.
R1 and R4: New experiments on open datasets, across modalities, and on segmentation tasks: We appreciate the suggestions for new experiments, but the Rebuttal Guide restricts us from providing new results. The HAM10000 and CHAOYANG datasets used in our work are open and cover the dermatoscopy and histopathology modalities. The segmentation task could be formulated as pixel-wise classification and solved by our method, although the unique challenges of segmentation, such as different noise distributions and the contextual correlation between pixels, remain to be explored. Overall, our method addresses an overlooked challenge in classification and is robust across image modalities. The reviewer-suggested experiments will inform our future work.
Post-rebuttal Meta-Reviews
Meta-review # 1 (Primary)
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The consensus among the reviewers is acceptance, with the only dissent due to a "lack of clarity in method description". I recommend acceptance, but I strongly urge the authors to improve the readability of the method description.
Meta-review #2
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
This paper targets a fundamental research question: imbalanced data and noisy labels. Two reviewers gave quite positive assessments, while the third reviewer has some concerns, but these seem minor and addressable. Overall, this paper may be accepted.
Meta-review #3
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The rebuttal does not fully address the concerns raised by the reviewers. R#1 maintains the negative rating after reading the rebuttal.