
Authors

Haoxuan Che, Haibo Jin, Hao Chen

Abstract

Diabetic retinopathy (DR) and diabetic macular edema (DME) are leading causes of permanent blindness worldwide. Designing an automatic grading system with good generalization ability for DR and DME is vital in clinical practice. However, prior works either grade DR or DME independently, without considering internal correlations between them, or grade them jointly by shared feature representation, yet ignoring potential generalization issues caused by difficult samples and data bias. Aiming to address these problems, we propose a framework for joint grading with the dynamic difficulty-aware weighted loss (DAW) and the dual-stream disentangled learning architecture (DETACH). Inspired by curriculum learning, DAW learns from simple samples to difficult samples dynamically via measuring difficulty adaptively. DETACH separates features of grading tasks to avoid potential emphasis on the bias. With the addition of DAW and DETACH, the model learns robust disentangled feature representations to explore internal correlations between DR and DME and achieve better grading performance. Experiments on three benchmarks show the effectiveness and robustness of our framework under both the intra-dataset and cross-dataset tests.
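
For orientation, a minimal sketch of how the two components named in the abstract could be realized is given below. This is not the authors' released implementation (no code link is provided); the ResNet-18 backbone, the difficulty measure (the predicted probability of the true class), and the annealed exponent gamma are assumptions chosen purely for illustration.

# Illustrative sketch only, not the paper's code: one plausible reading of
# DETACH (dual-stream disentangled features) and DAW (difficulty-aware weights).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class DualStreamDetach(nn.Module):
    """Two encoders; each task head sees its own features plus the *detached*
    features of the other stream, so gradients do not cross tasks."""
    def __init__(self, n_dr=5, n_dme=3, feat_dim=512):
        super().__init__()
        self.enc_dr = torchvision.models.resnet18(weights=None)   # assumed backbone
        self.enc_dme = torchvision.models.resnet18(weights=None)
        self.enc_dr.fc = nn.Identity()
        self.enc_dme.fc = nn.Identity()
        self.head_dr = nn.Linear(2 * feat_dim, n_dr)
        self.head_dme = nn.Linear(2 * feat_dim, n_dme)

    def forward(self, x):
        f_dr, f_dme = self.enc_dr(x), self.enc_dme(x)
        logit_dr = self.head_dr(torch.cat([f_dr, f_dme.detach()], dim=1))
        logit_dme = self.head_dme(torch.cat([f_dme, f_dr.detach()], dim=1))
        return logit_dr, logit_dme

def daw_loss(logits, targets, gamma):
    """Difficulty-aware weighted cross-entropy (assumed form): the per-sample
    weight p_true**gamma down-weights hard samples early (large gamma) and
    approaches uniform weighting as gamma is annealed toward 0."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    with torch.no_grad():
        p_true = F.softmax(logits, dim=1).gather(1, targets[:, None]).squeeze(1)
        weights = p_true ** gamma
    return (weights * ce).mean()

In training, one would anneal gamma from a large value toward zero across epochs (easy-to-hard curriculum) and sum daw_loss over the DR and DME heads; the exact schedule and difficulty measure used in the paper may differ from this sketch.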


Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16437-8_50

SharedIt: https://rdcu.be/cVRuB

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The manuscript describes a method that addresses diabetic retinopathy and its associated diabetic macular edema together in a single framework. The main purpose of the work is to design an automatic grading system with good generalization for DR and DME. To avoid potential generalization issues, the authors propose a dynamic difficulty-aware weighted loss (DAW) and a dual-stream disentangled learning architecture (DETACH): the former learns from easy to difficult samples via curriculum learning, while the latter separates the grading features to avoid potential emphasis on bias. Experiments are conducted on three well-known datasets, in both intra-dataset and cross-dataset settings.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Novelty of the task: it is the first strong attempt to propose a joint grading system for DR and DME.

    Novelty of the proposal: the authors, aiming to offer a generalizable framework, devised approaches to deal with the potential bias in grading the two pathologies.

    Experimental evaluation: well conducted and presented. Strong results in both intra-dataset and cross-dataset experiments.

    Conclusions: supported by the presentation and results.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    I have no major concerns about this manuscript. The only suggestion I have is to improve the introduction in order to better state the unique challenges associated with this task and to provide a deeper overview of the study.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The reproducibility is adequate. Perhaps the authors could give more details regarding the key parameters involved in their method.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    Dear Authors, I read your manuscript with great interest and found it of excellent quality. The results are also quite impressive and open the field for further improvements. I have no major concerns about this manuscript.

    The only suggestion I have is to improve the introduction in order to better state the unique challenges associated with this task and to provide a deeper overview of the study.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Quality of the proposal: the method is well presented and described and offers the right insights to the task at hand.

    Experimental evaluation: well conducted and presented. Strong results in both intra-dataset and cross-dataset experiments.

    Conclusions: supported by the presentation and results.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    The paper proposes a network for joint grading of diabetic retinopathy (DR) and diabetic macular edema (DME). The proposed network uses a dynamic difficulty-adaptive weight to gradually weight samples, and two encoders with detached shared features to model the correlation between the two tasks and learn a disentangled feature representation.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Strengths

    • Proposing dynamic adaptive weighting of samples and disentangled representation of DR and DME.
    • Comparison with other existing methods
    • Performing an ablation study
    • The paper is well written
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • Details of the network need to be clearer
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
    • Details of the network need to be clearer
  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
    • The abbreviations (DAW) and (DETACH) are quite unrelated to the full terms, so consider making them clearer.
    • In the introduction, redefine (DR) and (DME) again.
    • In Methods, Fig. 3 is referenced before Fig. 2. Reorder the figures in the paper.
    • In Fig. 3, please put details of the network architecture, e.g., feature map sizes, and size of fully connected layer, or mention them in text.
    • For all the tables, please put vertical separators between different datasets and diseases to make the tables more readable.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The methods are strong, the results and experiments are strong, and the paper is well written.

  • Number of papers in your stack

    3

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    The authors proposed a framework for automated DR and DME diagnosis and grading. The DAW is used to specifically deal with hard samples, while the DETACH disentangles DR and DME features for more robust diagnosis.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The relationship of DR and DME is considered and modeled.
    2. The DETACH is shown to improve performance and generalizability
    3. The paper is well written and easy to follow
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The dataset is not well introduced, and the data split is not introduced.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The dataset is not well introduced, and the data split is not introduced.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    the dataset should definitely be properly introduced.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The method is quite novel, and the experiments validate the authors' design of the network and loss functions. The paper is well written and easy to follow.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The paper is reviewed by three experts in the field. All the reviewers agree that the paper is generally well written, with the methods being easy to understand and follow. There are some minor issues that need to be addressed in the final version, including a deeper overview of the study and more details of the network.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    2




Author Feedback

We sincerely thank the meta-reviewer and the reviewers for their comments. We have carefully gone through every issue raised by the reviewers and reconsidered the paper accordingly. In the following, we address the concerns and respond to the advice from each reviewer one by one.

Dear Reviewer #1,

Many thanks for your precious time and professional comments on our work. The issue you mentioned about improving the introduction is insightful and valuable. Indeed, we previously removed some discussion from the introduction due to the space limitation. We will follow your advice and present those challenges and a deeper overview in the introduction.

Dear Reviewer #2,

Many thanks for your careful review and advice on our paper. Your concerns and suggestions are valuable, and we have no doubt they will help us improve and polish the paper. We will follow them and carefully revise the paper accordingly.

Dear Reviewer #3,

Many thanks for your professional efforts in reviewing our paper. The data split is indeed essential to discuss. Although we provided a preliminary statement, we agree that a more detailed description is needed, as you suggested. We will discuss the detailed data split in the experiments section.


