Authors
Prashant Pandey, Aleti Vardhan, Mustafa Chasmai, Tanuj Sur, Brejesh Lall
Abstract
Few-shot Learning (FSL) methods are being adopted in settings where data is not abundantly available. This is especially seen in medical domains where the annotations are expensive to obtain. Deep Neural Networks have been shown to be vulnerable to adversarial attacks. This is even more severe in the case of FSL due to the lack of a large number of training examples. In this paper, we provide a framework to make few-shot segmentation models adversarially robust in the medical domain where such attacks can severely impact the decisions made by clinicians who use them.
We propose a novel robust few-shot segmentation framework, Prototypical Neural Ordinary Differential Equation (PNODE), that provides defense against gradient-based adversarial attacks.
We show that our framework is more robust than traditional adversarial defense mechanisms such as adversarial training, which increases training time and is robust only to the limited types of attacks seen as adversarial examples during training. Our proposed framework generalizes well to common adversarial attacks like FGSM, PGD and SMIA while keeping model parameters comparable to existing few-shot segmentation models. We show the effectiveness of our proposed approach on three publicly available multi-organ segmentation datasets in both in-domain and cross-domain settings by attacking the support and query sets, without the need for ad-hoc adversarial training.
Link to paper
DOI: https://link.springer.com/chapter/10.1007/978-3-031-16452-1_8
SharedIt: https://rdcu.be/cVRYL
Link to the code repository
https://github.com/prinshul/Prototype_NeuralODE_Adv_Attack
Link to the dataset(s)
N/A
Reviews
Review #3
- Please describe the contribution of the paper
The paper proposes a novel framework/architecture PNODE that can be employed to train adversarially robust few-shot segmentation models.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The topic of adversarial robustness is an interesting and relevant topic to the computer vision as well as the medical community, thus the motivation for the work is solid.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The introduction starts with a number of bold statements that come with no references, and, to me, without those references, the entire premise of the paper appears incorrect. The authors state that the lack of available data makes models vulnerable to adversarial attacks and proceed to cite three references (references 1, 2, 3 in the paper) that do not support this statement. In fact, the availability of data has nothing to do with the adversarial robustness of a model. If more data made a model more robust to adversarial examples, models trained on ImageNet (more than a million training samples) would be more robust to attacks than models trained on CIFAR, which is not the case.
What confused me the most in the paper is the usage of the terminology “framework”: the authors claim that they propose a framework to make few-shot segmentation models more adversarially robust. However, in the experiments section, the proposed framework PNODE is detailed as having a CNN backbone followed by a Neural-ODE block, where this backbone is different from all other compared architectures. So, does it mean that the authors propose an architecture rather than a framework? Because if it is a framework that is applicable to any feature extractor, the expectation is to see the performance (clean accuracy, robust accuracy) of (a) the model, (b) the model trained with SAT, and (c) the model with PNODE, where the robust accuracy obtained with (c) is hopefully better than both (a) and (b) while the clean accuracy is comparable. The authors also claim that PNODE’s clean accuracy is even higher than that of the other architectures, but is this improvement attributed to the Neural-ODE or to a superior backbone architecture? I do not understand why the authors did not provide the same results for the model employed in the backbone so that we can make a fair comparison between models/frameworks. As it is, the experiments section leaves much to be desired in terms of comparative results.
The authors also do a poor job of reviewing the literature on robust few-shot models. The paper could use a discussion on how PNODE differs (advantages vs. disadvantages) from other work in the field [1,2,3].
Finally, the references are not up to the standards of MICCAI. Sometimes first names are shortened (Paschali, M.); other times the full name is written (Cihang Xie). The venues of publications are also not consistent (only acronym, full name, full name + acronym). Please refer to the publication guidelines for the correct and consistent reference format.
[1] Tan et al., Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples
[2] Goldblum et al., Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
[3] Liu et al., Long-term Cross Adversarial Training: A Robust Meta-learning Method for Few-shot Classification Tasks
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
No issues on reproducibility that I can think of.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
1. Support the claims made in the introduction with correct citations.
2. Improve the experimental section with proper comparisons.
3. Compare the work to existing methods for adversarially robust few-shot learning.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
4
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
See the weaknesses section.
- Number of papers in your stack
5
- What is the ranking of this paper in your review stack?
2
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #4
- Please describe the contribution of the paper
This submission proposes a novel adversarial defense framework against attacks on few-shot segmentation. The authors demonstrate the superior performance of their method over traditional adversarial training. They also show that the proposed framework generalizes well to common adversarial attacks such as FGSM, PGD and SMIA, as well as to multiple datasets in both in-domain and cross-domain settings.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
Novelty: adversarial attacks on few-shot segmentation (FSS) with deep neural networks, and their defense mechanisms, have not yet been explored. Clearly there is a need for such robust models in clinical settings.
Literature review: a systematic and complete review of Neural ODEs, FSL, and adversarial robustness.
Experiments: extensive experiments using three benchmark datasets across multiple segmentation models and attack methods.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
No major weakness as far as I can see.
Authors may consider citing more related papers, for example,
Kang, X., et al.: Adversarial Attacks for Image Segmentation on Multiple Lightweight Models. IEEE Access, Vol. 8, 2169-3536
Daza, L., et al.: Towards Robust General Medical Image Segmentation. MICCAI 2021
The authors may consider testing their method using an ensemble of attacks such as AutoAttack.
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The authors provide a link to a GitHub repository, so reproducibility is acceptable.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
As above: cite the relevant references and test the method with AutoAttack. Minor: “Defence” should be “defense”.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
7
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Novelty, presentation quality and experimental results.
- Number of papers in your stack
5
- What is the ranking of this paper in your review stack?
1
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #5
- Please describe the contribution of the paper
In this work, the authors propose to use Neural-ODEs for few-shot segmentation in a way that is robust to adversarial attacks. The proposed PNODE method, based on Neural ODEs, doesn’t require prior knowledge of the type of adversarial attack. The authors perform experiments comparing with multiple baseline methods, and the results show that their method is more robust to adversarial attacks.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The authors perform extensive experiments comparing with different baseline methods, and the evaluation results show that the proposed method outperforms the baselines.
- The idea of using neural ODE for prototypical few-shot segmentation is interesting.
- The writing quality is good.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The authors claim that the adversarial robustness of the proposed method comes from the fact that the integral curves of Neural-ODEs are non-intersecting. The authors should expand on this and give more details to explain it.
- There are a few previous works that already propose to use Neural ODEs for adversarial robustness, e.g. [1] [2]; the authors may discuss them in the related work.
[1] Shin, Yu-Hyun, and Seung Jun Baek. “Hopfield-type neural ordinary differential equation for robust machine learning.” Pattern Recognition Letters 152 (2021): 180-187.
[2] Yan, Hanshu, et al. “On robustness of neural ordinary differential equations.” arXiv preprint arXiv:1910.05513 (2019).
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The authors agree to release the code when the paper is accepted, and the datasets they used are publicly available.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
- The authors could perform an ablation study to demonstrate the effectiveness of the proposed components.
- From Fig. 2 (right), the proposed PNODE method’s performance seems to drop the most as the attack intensity increases; can the authors provide some explanation for that?
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
I think that, overall, extending Neural ODEs to enhance adversarial robustness for FSS is interesting. The authors also conducted extensive experiments and compared against a range of baselines to demonstrate the effectiveness.
- Number of papers in your stack
5
- What is the ranking of this paper in your review stack?
3
- Reviewer confidence
Somewhat Confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
This paper proposes a novel adversarial defense framework against attacks on few-shot segmentation. The authors demonstrated performance superior to traditional adversarial training, and they also showed that the proposed framework generalizes well to common adversarial attacks such as FGSM, PGD and SMIA, as well as to multiple datasets in both in-domain and cross-domain settings. It seems that some relevant references are missing. More ablation studies may be required to demonstrate the effectiveness of the proposed components.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
1
Author Feedback
We thank all the reviewers for their valuable feedback and for encouraging us to clarify and improve different sections of the paper. We really appreciate that the reviewers acknowledged our writing style, extensive experiments, and literature review.
We thank reviewer R3 for pointing out that our statements on the relationship between lack of data and model vulnerability were made without adequate references. We will add the reference [1] that supports the message we intended to convey and builds the premise for the paper.
We call the method a framework because of the several roles it serves. The framework provides a means to handle different types of attacks over various intensities on both the support and the query sets without requiring adversarial examples during training. These advantages make the framework unique compared to existing methods. Another advantage is the flexibility of choosing any model architecture as the feature extractor. To make fair comparisons and keep parameter counts comparable, we chose the same VGG feature extractor used in the baseline experiments as the backbone in PNODE. As the Neural-ODE in PNODE also consists of three convolutional layers, we removed the three corresponding layers from the VGG feature extractor to adhere to a strict sense of fair comparison. To further show the Neural-ODE’s role in robustness, we conducted a set of ablation studies as requested by reviewers R3 and R5. Upon removing the Neural-ODE block from PNODE, while maintaining the remaining architecture and training procedure, we observed drops of 0.41, 0.36, 0.36 and 0.31 units for clean, FGSM, PGD and SMIA, respectively. Training this model with SAT increased its performance, but PNODE still outperformed it by 0.28, 0.19, 0.20 and 0.31 units, respectively. We will add these comparative results (a: model, b: model + SAT, c: model with PNODE) and discuss how they demonstrate the effectiveness of PNODE in the camera-ready version of the paper.
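For illustration, here is a minimal sketch of what such a Neural-ODE feature block could look like. This is an assumption-laden reconstruction, not the authors' released code (see the repository linked above for that): it assumes a PyTorch implementation using the torchdiffeq solver, and the channel width is a hypothetical placeholder.

```python
# Sketch of a Neural-ODE feature block: three conv layers define the ODE
# dynamics f(t, z), mirroring the three VGG layers the feedback says the
# block replaces. Assumes `pip install torchdiffeq`; channel width is
# hypothetical.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    """Dynamics f(t, z) parameterised by three convolutional layers."""

    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, t, z):
        # torchdiffeq passes (t, state); dynamics here are time-invariant.
        return self.net(z)


class NeuralODEBlock(nn.Module):
    """Maps features z(0) to z(1) by integrating dz/dt = f(t, z)."""

    def __init__(self, channels: int):
        super().__init__()
        self.func = ODEFunc(channels)
        self.t = torch.tensor([0.0, 1.0])

    def forward(self, z0):
        zt = odeint(self.func, z0, self.t.to(z0.device))
        return zt[-1]  # state at t = 1


# Usage: plug the block after a truncated VGG feature extractor.
features = torch.randn(2, 64, 32, 32)  # hypothetical feature map
block = NeuralODEBlock(channels=64)
out = block(features)                  # same shape as the input
```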
As asked by reviewer R5, the robustness of PNODE is attributed to the property of non-intersecting integral curves in Neural-ODEs. Under small perturbations, the integral curve of the perturbed sample is sandwiched between the curves of the neighbouring clean samples, ensuring that the output for the perturbed sample does not change drastically. This is not the case with traditional CNNs, as they have no such intrinsic constraints. The proof and a more detailed explanation of the non-intersecting-curves property of Neural-ODEs can be found in [2]. As requested by reviewer R5, regarding Fig. 2: as the attack intensity increases, the perturbed samples move farther away from the clean ones. This results in a large deviation between the initial points, i.e., the clean samples and the perturbed ones, making the integral curves of the perturbed samples less likely to be sandwiched by the integral curves of the clean samples. Thus, robustness tends to reduce as the intensity of the attack increases. In addition to the natural decrease in accuracy caused by harder-to-segment samples (which is common to all methods), PNODE also suffers from this additional reduction in robustness (not present in the others). Thus, PNODE's performance decreases faster.
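To make the sandwiching argument concrete, here is a brief formal sketch in our own notation, following the property proved in [2]; it is an illustration of the standard ODE uniqueness argument, not the paper's own derivation:

```latex
% ODE feature flow with the input embedding x as the initial state:
\dot{z}(t) = f_{\theta}\big(z(t), t\big), \qquad z(0) = x .

% If f_{\theta} is Lipschitz in z, Picard--Lindel\"of gives a unique
% solution through every initial point, so integral curves never cross.
% For a scalar state this means ordering is preserved along the flow:
z_{1}(0) \le z_{\delta}(0) \le z_{2}(0)
\;\Longrightarrow\;
z_{1}(t) \le z_{\delta}(t) \le z_{2}(t), \quad \forall\, t \in [0, T].

% Hence the curve of a slightly perturbed input x + \delta stays
% sandwiched between the curves of its clean neighbours, bounding how far
% the block's output z(T) can drift from the clean output.
```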
We thank reviewer R4 for the idea of testing our method under AutoAttack; however, we found that it mainly targets classification models. We have ensured coverage of a spectrum of attacks by choosing the most relevant recent attacks for segmentation, like SMIA, as well as traditional attacks such as FGSM and PGD.
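For reference, the single-step FGSM attack discussed throughout is the standard formulation (Goodfellow et al.), here sketched against a segmentation loss rather than the paper's exact attack code; `model`, `mask`, and `epsilon` are hypothetical placeholders:

```python
# Minimal FGSM sketch for segmentation: one signed-gradient step on the
# input, maximising the per-pixel cross-entropy loss.
import torch
import torch.nn.functional as F


def fgsm_attack(model, image, mask, epsilon=0.02):
    # `mask` is a (N, H, W) long tensor of ground-truth class indices.
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                 # (N, num_classes, H, W)
    loss = F.cross_entropy(logits, mask)  # segmentation loss
    loss.backward()
    # Step in the direction that increases the loss, clamp to valid range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```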
[1] Goldblum et al., Adversarially Robust Few-Shot Learning: A Meta-Learning Approach
[2] Yan, H., Du, J., Tan, V.Y., Feng, J.: On robustness of neural ordinary differential equations. arXiv preprint arXiv:1910.05513 (2019)