
Authors

Ying Zhang, Yaping Huang, Jiansong Qi, Sihui Zhang, Mei Tian, Yi Tian

Abstract

People with autism spectrum disorder (ASD) show distinguishing preferences for specific visual stimuli compared to typically developed (TD) individuals, opening the door for objective and quantitative screening by eye-tracking data analysis. However, existing eye-tracking-based ASD screening approaches often assume that there are no individual differences and that all stimuli contribute equally to the prediction of ASD. Consequently, a fixed number of images are usually selected by a pre-defined strategy for further training and testing, ignoring the distinct characteristics of various subjects viewing the same image. To address the aforementioned difficulties, we propose a novel Uncertainty-inspired ASD Screening Network (UASN) that dynamically modifies the contribution of each stimulus viewed by different subjects. Specifically, we estimate the uncertainty of each stimulus by considering the variation between the subject’s fixation map and those of the two clinical groups (i.e., ASD and TD) and further utilize it to weight the training loss. Besides, to reduce the diagnosis time, instead of showing images in a shuffled order, we propose an uncertainty-based personalized diagnosis method that dynamically ranks the viewing images according to the preferences of different subjects, which can achieve high prediction accuracy with only a small set of images. Experiments demonstrate the superior performance of our proposed UASN.

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43904-9_39

SharedIt: https://rdcu.be/dnwHk

Link to the code repository

N/A

Link to the dataset(s)

https://saliency4asd.ls2n.fr/datasets/


Reviews

Review #1

  • Please describe the contribution of the paper

    In this work, the authors mainly focus on the Autism Spectrum Disorder (ASD) screening problem. They propose a new method, the Uncertainty-inspired ASD Screening Network (UASN), built around the notion of uncertainty. In particular, they estimate an uncertainty score based on the subject’s fixation maps. Finally, they propose a personalized online diagnosis model. The experimental results show the effectiveness of their proposed model.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper is well written and easy to follow. The idea of uncertainty or diversity to estimate the different impact of different images is interesting and makes sense.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The following aspects prevent me from voting for acceptance.

    The first is the method. It may be simple and straightforward: using differences as uncertainty scores and applying a weighted loss are both common techniques. If I understand correctly, the similarity list is specific to a certain subject during training; if so, why not directly use the uncertainty or diversity of the fixation maps to select the images? For the personalized diagnosis, during inference, will the model use the fixation maps of the ASD and TD groups? If yes, the comparison with the SOTA might be unfair, since this model uses more information.

    The second is the insufficiency of the experiments. The main concern is the lack of SOTA comparisons: only [2] (2019) is used. Other methods and reviews exist in this area, but the authors compare only against a method proposed four years ago, in 2019. In other words, since the proposed model achieves very high performance, with many scores of ‘1’, does this mean the problem, or this dataset, has been well addressed? From Tables 1, 2, and 3, the proposed UASN achieves nearly AUC = 1 in many settings. Besides, the authors might provide some intermediate results, such as images with different uncertainty scores, the similarity list, the effect of different initial samples, and the results of randomly selecting images.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Yes, since the authors list the dataset used and the important parameters.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. Experimental results. The authors are encouraged to add comparisons with more recent SOTA methods (near 2023).
    2. The authors might need to show some intermediate results, such as images with different uncertainty scores, the similarity list, the effect of different initial samples, and the results of randomly selecting images.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    3

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    See the weakness.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    In this paper, the authors propose an Uncertainty-inspired ASD Screening Network (UASN) that dynamically modifies the contribution of each stimulus viewed by different subjects. Specifically, the uncertainty is estimated by considering the variation between the subject’s fixation map and those of the two clinical groups. In addition, an uncertainty-based personalized diagnosis method is designed to dynamically rank the viewing images according to the preferences of different subjects.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The method is technically sound. The paper is very well structured and written.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Please make more comparisons with related works.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    I believe that the obtained results can, in principle, be reproduced.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    In this paper, the authors propose an Uncertainty-inspired ASD Screening Network (UASN) that dynamically modifies the contribution of each stimulus viewed by different subjects. Specifically, the uncertainty is estimated by considering the variation between the subject’s fixation map and those of the two clinical groups. In addition, an uncertainty-based personalized diagnosis method is designed to dynamically rank the viewing images according to the preferences of different subjects. Experiments have been performed to evaluate the proposed model on the Saliency4ASD dataset. My detailed comments are listed below:
    1) I am confused about the relationship between Fig. 1 and Fig. 2. Please clarify it.
    2) In the experiment section, the authors compare the proposed model only with ref [2], which makes the results not very solid and convincing.
    3) In Table 3, the model with parameter settings (T=10, K=2) obtains better performance than (T=10, K=1). Does this result mean that the higher the value of K, the better the performance of the model? Did the authors conduct experiments with parameter settings (T=20, K=2) or with higher values of K?

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper has little overlap with my research focus but I am somewhat knowledgeable about some of the topics covered by the paper.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #4

  • Please describe the contribution of the paper

    The authors designed two novel workflows, 1) Uncertainty Guided Training and 2) Uncertainty Guided Personalized Diagnosis, to perform personalized ASD diagnosis. Experiments show that this personalized diagnosis method achieved good performance using fewer visual stimuli.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This article investigates differences in subject-specific visual stimuli in ASD patients compared with typical controls. The authors designed their classification framework from a subject-specific view, which is not considered in previous works using eye tracking for ASD diagnosis.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) Some details of the method and implementation are not well clarified. For example, what is the relationship between the image set and the viewing list? And is the Saliency4ASD dataset used in uncertainty-guided training, or is it used as the image set in Fig. 2? 2) The comparative experiments are insufficient; only one existing method is compared.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    This paper provides sufficient details about the models, datasets, and evaluation.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    1) Please specify what the ASD screening network is in Fig.2. Is it the Gaze Pattern Feature Extraction block in Fig.1? 2) The overall diagnostic process is not clearly described, and the input and output of each stage should be specifically pointed out.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The investigate view of this paper is interesting, but some details of the method are a bit missing and not well described.

  • Reviewer confidence

    Somewhat confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    Strengths:

    • Interesting approach that considers uncertainty/subject-specific differences to estimate impact of different images in eye-tracking screening of ASD.
    • Method makes sense technically.
    • Well-structured and well-written paper.

    Weaknesses:

    • Insufficient comparisons to other approaches. Experiments include a comparison to only one prior method, and the comparison may be unfair if the proposed method uses additional information as inputs.
    • Missing some details regarding the method and implementation (e.g., does the model use the fixation maps of the ASD/TD groups in diagnosis, which dataset is used where, unclear overall diagnostic process, inputs/outputs of each stage).
    • Lack of novelty in the method: using differences as uncertainty scores and applying a weighted loss is a common approach.

    I have an additional concern regarding scope: this work analyzes only eye-tracking data. Thus, while images are involved (subjects view the images and their gaze is tracked), there is no “medical imaging” modality in this work, and it may therefore be out of scope for MICCAI.

    In rebuttal, please:

    • clarify whether the inputs for the proposed and compared approach are the same, or whether proposed method includes additional information (fixation maps)
    • clarify methods/figure details questioned by reviewers
    • explain the novel aspects of the proposed approach
    • discuss intermediary results/analysis from already presented experiments




Author Feedback

We sincerely thank the Meta-Reviewer (MR) and all three reviewers (R1, R2, R4) for their valuable comments.

MR, R1: Unfair additional inputs. Our UASN model takes gaze data as input and outputs a prediction result for each gaze pattern, the same as the SOTA. We need to clarify that the fixation maps are all derived from the obtained gaze data, and no additional information is involved. Specifically, the SOTA model selects the most discriminative images based on Fisher-score statistics of the gaze data, and these selected images are fixed for training and testing for every subject. In contrast, our model dynamically selects the top discriminative images via the uncertainty-driven strategy, and these images differ for each subject. We therefore believe that the comparison between our model and the SOTA is fair and convincing.

MR, R1: UASN’s novelty. Since ASD screening datasets are hard to collect, mainly due to the poor cooperation capability of ASD individuals, especially children, it is urgent to develop clinical diagnosis approaches that make the fullest use of each visual stimulus, achieve the best performance with the fewest images, and at the same time reduce the diagnosis time. We believe that taking individual differences into account addresses these issues, because the intrinsic ambiguity implied in each subject’s eye-movement behavior has not been adequately explored before. To this end, our UASN treats each subject’s gaze on each image unequally and dynamically launches a personalized diagnosis procedure for each subject. We believe that our work can bring new insight to the community and potentially help with the early intervention of ASD individuals.

MR, R1: Intermediary results. Due to limited space, we cannot list every intermediary result in the main paper, but we have presented some images with different uncertainty scores in Fig. II of the supplementary material. We will also take your suggestions and provide more results for clarity.

MR: “Out of scope” concern. Since MICCAI is concerned with both Medical Image Computing and Computer-Assisted Intervention, we believe that our paper suits the CAI field, because we propose a novel ASD screening approach that can potentially help with the early intervention of ASD people.

R1, R2, R4: Outdated baseline model. Our work is derived from an ASD early-intervention project in cooperation with a hospital, and out of their urgent need for a better clinical screening approach, we searched for the most related methods in the community but found that most works analyze gaze data only from simple statistical standpoints. For diagnosis, only Chen’s work is suitable as a reference and comparison. Since open-source data and code are few and far between, we sincerely hope that our work can move the field forward a little. Besides, we certainly see that the issue of ASD early screening is far from fully addressed, so we will also evaluate our method on our larger-scale clinically collected dataset and will make it publicly available once the work is completed.

R2: Fig. 1 and Fig. 2. Fig. 1 shows the training process, and Fig. 2 shows the diagnosis process (i.e., the inference process).

R2: The results of higher K values. As suggested, we conducted more experiments with higher T and K; they did not increase the accuracy but prolonged the diagnosis time. We will add more ablations to the paper.

R4: Fig. 2. (1) The ASD Screening Network in Fig. 2 consists of both the Uncertainty Estimation and Gaze Pattern Feature Extraction modules (shown in Fig. 1). (2) In the diagnosis process, our model takes one image and its corresponding gaze pattern as input, and outputs both the prediction score for belonging to the ASD group and the uncertainty value, which is used to decide the next image shown to the subject.
Thank you for pointing out these issues; we will carefully revise the paper.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors address some points well in the rebuttal, such as the clarification that the input data are the same for the compared methods. However, I find the validation weak, with a comparison to only one other approach, and promises to include additional ablations will be hard to keep given the restricted length. Moreover, I am still concerned about the scope: again, this approach analyzes gaze data, so there is no medical image analysis. Likewise, the computer-assisted intervention part of MICCAI generally focuses on surgical intervention with imaging, e.g., interventional radiology. Thus, I recommend reject.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Most of the major concerns (such as the clarification of insufficient competing methods and description of method details) have been well addressed.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors proposed a method to analyze eye gazing data for ASD screening. It seems most concerns were answered in the rebuttal.


