Authors
Mingyuan Liu, Lu Xu, Jicong Zhang
Abstract
Fueled by deep learning, computer-aided diagnosis has achieved huge advances. However, outside controlled lab environments, algorithms can face multiple challenges. Open set recognition (OSR), an important one, states that categories unseen in training can appear in testing. In medical fields, this can arise from incompletely collected training datasets and from constantly emerging new or rare diseases. OSR requires an algorithm not only to correctly classify known classes, but also to recognize unknown classes and forward them to experts for further diagnosis. To tackle OSR, we assume that known classes densely occupy small parts of the embedding space and that the remaining sparse regions can be recognized as unknowns. Accordingly, we propose the Open Margin Cosine Loss (OMCL), which unifies two mechanisms. The former, called Margin Loss with Adaptive Scale (MLAS), introduces an angular margin to reinforce intra-class compactness and inter-class separability, together with an adaptive scaling factor to strengthen generalization capacity. The latter, called Open-Space Suppression (OSS), opens the classifier by recognizing sparse embedding space as unknowns using the proposed feature space descriptors. Besides, since medical OSR is still a nascent field, two publicly available benchmark datasets are proposed for comparison. Extensive ablation studies and feature visualizations demonstrate the effectiveness of each proposed design. Compared with recent state-of-the-art methods, OMCL achieves superior performance, measured by ACC, AUROC, and OSCR.
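For intuition, here is a minimal PyTorch sketch of a cosine loss with an angular margin and a learnable scale factor, in the spirit of the MLAS mechanism described above. The margin value, initial scale, and dimensions are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginCosineLoss(nn.Module):
    """Cosine classification loss with an angular margin and a learnable
    scale, in the spirit of MLAS. Margin, initial scale, and dimensions
    are illustrative assumptions, not the authors' implementation."""

    def __init__(self, feat_dim, num_classes, margin=0.35, init_scale=16.0):
        super().__init__()
        # One prototype (weight vector) per known class.
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Adaptive scaling factor s, learned jointly with the network.
        self.scale = nn.Parameter(torch.tensor(init_scale))
        self.margin = margin

    def forward(self, features, labels):
        # Cosine similarity between L2-normalized features and prototypes.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        # Subtract the margin from the target-class similarity only, which
        # enforces intra-class compactness and inter-class separability.
        one_hot = F.one_hot(labels, cos.size(1)).float()
        logits = self.scale * (cos - self.margin * one_hot)
        return F.cross_entropy(logits, labels)

# Toy usage: 8 samples with 128-dim embeddings, 5 known classes.
loss = MarginCosineLoss(128, 5)(torch.randn(8, 128), torch.randint(0, 5, (8,)))
```

Because both features and class weights are normalized, the logits depend only on angles, so the margin directly controls angular separation between classes.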
Link to paper
DOI: https://doi.org/10.1007/978-3-031-43993-3_53
SharedIt: https://rdcu.be/dnwNY
Link to the code repository
N/A
Link to the dataset(s)
N/A
Reviews
Review #2
- Please describe the contribution of the paper
This study focuses on the field of open set recognition within the medical domain, which is an intriguing, evolving, and expanding area of research.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The paper is easy to follow. The area of research in the medical domain is interesting.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1) The novelty of this method in comparison to other machine learning methods is uncertain.
- It is unclear how the proposed loss function differs from the Large Margin Cosine Loss (https://arxiv.org/abs/1801.09414).
- Additionally, it is unclear how the abstention loss function (https://arxiv.org/abs/1905.10964) differs from the open space suppression proposed in this paper.
- It is also unclear which part of the loss function proposed by the authors is original, as it appears to be a mixture of existing loss functions applied to a medical dataset.
2) The methods used for comparison in this study, including baseline [12], GCPL [29], RPL [3], DIAS [19] (the latest generative model), and ARPL+CS [2] (a hybrid method), are not adequately explained or discussed in either the related work or experiments section.
3) The effectiveness of the proposed method is not well established in comparison to other existing methods.
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
If the code is available online, it will be easy to reproduce the results.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
1) More explanation is needed in the comparison with state-of-the-art methods.
- Discuss the pros and cons of each method in more detail.
2) The authors should provide the standard deviation of the results over the K trials in their evaluation.
3) Hyperparameter tuning is not well explained. How are these parameters (m, t, lambda) selected? Based on the validation set, or on the training or test set (Figure 4)? I believe it is very important to know, since tuning on the test set would not be fair.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
More clarification of the novelty and the comparisons is needed.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
In this paper, the authors proposed a method called OMCL under the assumption that known features could be assembled compactly in feature space and the sparse regions could be recognized as unknowns. The proposed approach provides improved performance on two publicly available benchmark datasets.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper provides sufficient information about the background and relevant literature.
- The paper is well organized, and the description of the method is clear.
- The experimental results support the validity of the proposed method.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1) Clinical importance is not well explained.
2) Lack of discussion of the results against the prior art.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Yes.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
See Q6
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
- The motivation of the work is clear and relevant to medical image datasets.
- The proposed method is a simple but reasonable approach to resolving an existing problem.
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #1
- Please describe the contribution of the paper
This paper proposes a novel Open Margin Cosine Loss (OMCL), consisting of Margin Loss with Adaptive Scale (MLAS) and Open-Space Suppression (OSS), to address the medical open set recognition (OSR) problem. The former reinforces intra-class compactness and inter-class separability, while the latter recognizes sparse feature space as unknowns. Experiments on two proposed OSR benchmarks demonstrate its effectiveness.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
1) The OSR problem in medical diagnosis is very critical but underexplored. The paper presents a novel loss function which proves to be effective. 2) The authors present two benchmarks for OSR in medical images.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1) The method description is unclear. For instance, why initialize the descriptors with a uniform distribution over [-s, s]? Since s is a learnable parameter, does this mean that the initialization will be updated frequently during training? Moreover, does the method need a good initialization for s before training starts? 2) In subsection 2.4, what does it mean that the maximum probability over known classes is the index for unknowns? 3) Lastly, the reviewer suggests adding more discussion to the ablation part, for instance, on why a ratio of 1 between M and N leads to optimal results.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
This paper should be reproducible.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
Please see the weakness part.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Please see the weakness part.
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
The paper presents an open-set recognition method and proposes various loss functions. Evaluation has been conducted on two public datasets. Overall, the reviewers like the problem domain, the proposed approach, and the presentation of the paper. It is, however, suggested to improve the description of the method's novelty and to include more comparisons with other state-of-the-art methods. It would also be better to use other, more commonly used datasets.
Author Feedback
We thank all reviewers and the meta-reviewer for their thoughtful feedback. We use R1, R2, R3, and MR1 to denote Reviewers 1, 2, and 3 and the meta-reviewer, respectively. We are encouraged that they find our field of focus underexplored and intriguing (R1, R2, MR1) and our designs reasonable and effective (R1, R3). We are also pleased that they find the paper easy to follow and well organized (R2, R3, MR1) and that it provides sufficient background and relevant literature (R3). We address the reviewers' concerns about our method and experiments below.
R1: Concerns on method descriptions. 1) The descriptors are dynamically generated. The lower and upper bounds of the feature space (-s and s) keep updating during training. As for s itself, it is initialized with a fixed value; existing experiments show that s adapts well to two datasets with different modalities. 2) The maximum probability over known classes is used in much prior art as the unknownness measurement. Specifically, in a trinary classification problem, the output probabilities of a sample belonging to the three known classes could be [a, b, c], where a+b+c=1. For a known sample, the maximum class-wise probability, max(a, b, c), tends to be high, indicating that it is confidently classified as a known class. For an unknown sample, max(a, b, c) tends to be low, indicating a higher likelihood of being unknown. 3) On the M-N ratio: a randomly generated descriptor can lie extremely close to a known feature yet be classified as a novel category, which disturbs training. Indeed, if the number of descriptors far exceeds the number of training samples (e.g., five times, as shown in Figure 4), training can be disturbed and performance degrades. The 1:1 ratio is experimentally validated as a proper choice.
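For illustration only, below is a minimal PyTorch sketch of the two mechanisms described in this response: drawing pseudo-unknown feature-space descriptors uniformly from [-s, s], and scoring samples by their maximum probability over known classes. The function names, dimensions, and threshold are hypothetical, not taken from the paper's code.

```python
import torch

def sample_descriptors(num, feat_dim, s):
    """Draw pseudo-unknown descriptors uniformly from [-s, s]^feat_dim.
    Hypothetical sketch of the OSS sampling; `num` would match the number
    of training samples under the 1:1 M:N ratio discussed above."""
    return (torch.rand(num, feat_dim) * 2.0 - 1.0) * s

def unknown_score(logits):
    """Maximum softmax probability over known classes; low values flag
    likely unknowns, as in the trinary [a, b, c] example above."""
    return logits.softmax(dim=1).max(dim=1).values

# Toy usage: score 4 samples from a trinary classifier and threshold them.
scores = unknown_score(torch.randn(4, 3))
is_unknown = scores < 0.5  # the 0.5 threshold is an illustrative choice
```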
R2: Concerns on differences from LMCL [25] and the abstention loss (AL; it will be cited in the camera-ready version). From the task perspective, those designs were proposed for solving closed-world problems. From the method perspective, unlike LMCL, we not only design an adaptive scaling factor to increase generalization capacity but also add an OSS mechanism that generates pseudo-unknown features to train a discriminator for open set recognition. Unlike AL, which abandons the training of ambiguous images to reduce misclassification, our design generates feature-level descriptors as potential unknown samples and exploits a discriminative loss for better open set recognition capacity.
R2: Concerns on hyperparameter tuning. Strictly speaking, novel samples should not appear in the validation set, which hinders tuning the model according to open-set performance. The lack of a closed-set measurement for estimating open-set performance leads most existing methods to tune hyperparameters on the test set so as to show the maximum capacity for identifying unknowns; this is the commonly used protocol and the one followed by this work. Moreover, we maintain that the comparison is fair, since all compared models are carefully tuned (shown in Supp. Fig. 3) and their best performances are reported. Compared models with their original hyperparameters perform worse when transferred to the medical field.
R3: Concerns on clinical importance. Clinically, an OSR-informed model helps in constructing trustworthy computer-aided systems. Novel diseases can be identified and forwarded to experts, which not only offers an early warning of a possible outbreak of new diseases but also avoids misdiagnosis of diseases unseen during training.
R2, R3: Concerns on inadequate discussion and explanation. Limited by space, some discussions were not fully developed; they will be added in the camera-ready version.
MR1, R2: Suggestions on more existing models and more common datasets. We will validate more methods on more datasets in future work.
Finally, thanks again for these valuable comments. We will polish our paper in the camera-ready version.