Authors
Yixiong Chen, Li Liu, Jingxian Li, Hua Jiang, Chris Ding, Zongwei Zhou
Abstract
In medical image analysis, transfer learning is a powerful method for deep neural networks (DNNs) to generalize on limited medical data. Prior efforts have focused on developing pre-training algorithms on domains such as lung ultrasound, chest X-ray, and liver CT to bridge domain gaps. However, we find that model fine-tuning also plays a crucial role in adapting medical knowledge to target tasks. The common fine-tuning method is to manually pick transferable layers (e.g., the last few layers) to update, which is labor-intensive. In this work, we propose a meta-learning-based learning rate (LR) tuner, named MetaLR, to make different layers automatically co-adapt to downstream tasks based on their transferabilities across domains. MetaLR learns LRs for different layers in an online fashion, preventing highly transferable layers from forgetting their medical representation abilities and driving less transferable layers to adapt actively to new domains. Extensive experiments on various medical applications show that MetaLR outperforms previous state-of-the-art (SOTA) fine-tuning strategies. Code is released.
Link to paper
DOI: https://doi.org/10.1007/978-3-031-43907-0_67
SharedIt: https://rdcu.be/dnwdO
Link to the code repository
https://github.com/Schuture/MetaLR
Link to the dataset(s)
https://github.com/jannisborn/covid19_ultrasound
https://scholar.cu.edu.eg/?q=afahmy/pages/dataset
https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia
Reviews
Review #3
- Please describe the contribution of the paper
The authors propose a fine-tuning algorithm for medical deep networks. The proposal consists of a learning rate (LR) tuner that makes fine-tuning more accurate than training only a few layers. Compared to other fine-tuning methods, this work differentiates layer adaptation using layer-wise LRs. The authors validate the method on different datasets for classification and segmentation, including an ablation study.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Simplicity: the main strength is that the algorithm is quite simple. The LR and model parameter optimization is done by gradient descent, a common and effective optimization technique.
- Extensibility: the method can be applied to any deep architecture and to other segmentation or detection tasks (not only classification).
- Validation: the proposed model has been validated on several medical datasets and with two different networks (one for classification and one for segmentation), and an ablation study is performed to determine which model settings perform best. In addition, the authors compare with other fine-tuning methods.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Topic: the contribution of the paper is more suitable to be published in a neural network focused conference instead of MICCAI.
- Justification: the model was applied to medical images, but there is no reason it could not be used with other types of datasets. The authors should better justify the applicability of the method to medical images compared to other domains.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
- Models and algorithms: the models are mathematically correct, and an algorithm is provided in the paper, which helps reproducibility.
- Datasets: they used well-known public datasets and also provided links in the supplementary material.
- Code: the authors provide a GitHub repository with very clear instructions on how to run the model, with examples.
- Reported experimental results: tables are very descriptive and include the essential metrics for evaluation and comparison.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
- The proposal is very well detailed mathematically and validated with many experiments. I would improve the introduction by explaining why your method is appropriate only for medical images and not others.
- For future work, I would try to include detection networks and test them on lesion detection problems.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The contribution of the paper is adequate, and the experiments validate the findings. In addition, the methodology is correct. It is simple, but it has not been applied to fine-tuning before. Regarding only the medical part, more justification is needed for why the proposal is focused solely on medical images.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
In this work the authors propose a meta-tuning approach for learning rates in transfer learning settings. Several enhancement steps are included, such as a hyper-LR and a dedicated batched validation scheme. Experiments are conducted on multiple public medical datasets covering detection and segmentation tasks. The results are compared to state-of-the-art fine-tuning strategies.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
• Proposes a novel approach for model fine-tuning, focusing on meta-tuning of learning rates to tackle transfer learning challenges in medical imaging.
• The approach is very interesting, opening up further idea generation for novel fine-tuning strategies.
• According to the authors, the approach is easy to implement and not dependent on particular model architectures.
• Results are reported on public datasets covering different tasks (detection, segmentation) using public models. In combination with the to-be-released code, reproducing the results will be possible. Furthermore, an ablation study was conducted.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
• My main critique relates to the model-centric nature of the proposed work. Although competitive or, in the case of detection, even slightly better results can be achieved using MetaLR, no time savings over fine-tuning all layers are achieved (with accuracy increases of 1-2.3%). I would argue that with a data-centric approach, although more time- and labor-intensive, a higher increase in performance could be achieved. In the end, and especially in medical applications, where the focus should lie on the highest possible accuracy leading to accurate diagnoses and thereby better patient health, prioritizing small improvements with the argument of cost and time will not suffice. I would appreciate it if the authors added their thoughts on this topic to the discussion.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The work was evaluated on a publicly available dataset. Evaluation was done using publicly available pre-trained models. Code will be available after acceptance.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
• I am missing a reference regarding the privacy concerns related to pre-training data in the introduction section. Pseudonymization and/or anonymization as well as privacy-preserving approaches already exist.
• If I see it correctly, the MetaLR optimization is strongly dependent on the validation dataset used. An ablation study on differently structured and distributed validation sets would be very interesting to see whether there are any interdependencies.
• In 2.4 it is somewhat unclear which different datasets are used when. Are you referring to different data splits from one dataset, or to actually differing datasets from, e.g., different domains, organs, tasks, …?
• Which statistical tests were used to compute p-values?
• What is the reason for not including AutoLR in the comparative segmentation experiments?
• It would be interesting to see the LRs resulting from AutoLR in comparison, to strengthen the discussion.
• Limitations of the proposed MetaLR are missing.
Minor:
• The abbreviation LR should be defined before usage in the abstract.
• Typo in 2.1: “Let (x,y) denotes …” should be “denote”.
• Typo in 2.2: “Onilne” -> Online.
• Reference [2] is wrongly referenced. It is not Mina et al. but should be Amiri et al. If the reference is correct…
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
A very interesting and easy-to-implement (generalizable) idea for fine-tuning models in transfer learning settings, allowing one to tackle the issue of low amounts of data and/or time- and cost-intensive annotation work.
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #1
- Please describe the contribution of the paper
They propose a meta-learning-based LR tuner, named MetaLR, to make different layers automatically co-adapt to downstream tasks based on their transferabilities across domains. MetaLR learns appropriate LRs for different layers in an online manner, preventing highly transferable layers from forgetting their medical representation abilities and driving less transferable layers to adapt actively to new domains.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
1. This paper addresses the generalization problem of deep neural networks on limited medical data by introducing a meta-learning approach.
2. The experiments in this paper are relatively sufficient; they extensively evaluate MetaLR on four transfer learning tasks.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1. This paper learns a network by adding a constantly updated weight to each network parameter. Is there already a relevant approach in meta-learning? Can the authors explain in detail the differences and advantages of this paper’s approach compared with related meta-learning approaches?
2. Training a weight for each network parameter to learn its transferability will cause the number of parameters in the network to surge.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The authors provide relatively detailed descriptions of the experiments, and anonymized code for reproducing MetaLR is released.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
I suggest that the authors elaborate on the advantages and innovations of this approach compared with the meta-learning approach.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
They propose a new fine-tuning scheme, MetaLR, for medical transfer learning. The authors address the medical transfer learning problem using meta-learning, and the application perspective is novel. However, do relevant methods already exist in meta-learning? The paper does not show the advantages and innovations compared with related meta-learning methods.
- Reviewer confidence
Somewhat confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
Three reviewers hold positive views on this paper, acknowledging its technical contribution and application value. I have decided to provisionally accept this paper. However, I suggest that the authors carefully consider the shortcomings pointed out by the reviewers and address these issues in the final version.
Author Feedback
We would like to thank the chairs and reviewers for their efforts in reviewing this paper. Key comments and responses are summarized as follows. Due to limited space, we address only the major points here; minor issues (i.e., typos and presentation problems) will be corrected in the revised version.
Reviewer1
Q1: This paper learns a network by adding a constantly updated weight to each network parameter. Is there already a relevant approach in meta-learning? R1: In this work, we assign adaptive learning rates to each layer to control its adaptation strength to the downstream task. Common meta-learning paradigms (e.g., MAML, Reptile) learn how to initialize model parameters so that they can adapt to different few-shot tasks more easily. To the best of our knowledge, MetaLR differs substantially from traditional meta-learning methods and is the first to learn learning rates instead of parameters.
Q2: Training a weight for each network parameter to learn its transferability will cause the number of parameters in the network to surge. R2: MetaLR only learns layer-wise LRs rather than an LR for each parameter, so the number of extra parameters to learn equals only the number of layers, ensuring the efficiency of the method.
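For readers curious how such a layer-wise LR meta-update can be realized in practice, a minimal sketch is given below. This is not the authors' released implementation; the function name metalr_step, the hyper_lr value, and the use of torch.func.functional_call (PyTorch >= 2.0) are illustrative assumptions. The idea: form a virtual parameter update with per-layer LRs, evaluate it on a validation batch, differentiate the validation loss with respect to the LRs (the hyper-LR step), and only then apply the actual update.

# Minimal sketch of a MetaLR-style layer-wise LR meta-update (illustrative only,
# not the authors' released code). Assumes PyTorch >= 2.0 for torch.func.functional_call.
import torch
import torch.nn.functional as F

def metalr_step(model, train_batch, val_batch, lrs, hyper_lr=1e-2):
    """One step: lrs holds one scalar LR tensor per parameter tensor (roughly per layer)."""
    x, y = train_batch
    names = [n for n, _ in model.named_parameters()]
    params = list(model.parameters())

    # Training loss and gradients, kept in the graph so the LRs receive meta-gradients.
    train_loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(train_loss, params, create_graph=True)

    # "Virtual" update with per-layer LRs; the validation loss stays differentiable w.r.t. lrs.
    virtual = {n: p - lr * g for n, p, g, lr in zip(names, params, grads, lrs)}
    xv, yv = val_batch
    val_loss = F.cross_entropy(torch.func.functional_call(model, virtual, (xv,)), yv)

    # Meta-gradient of the validation loss w.r.t. the per-layer LRs (hyper-LR step).
    lr_grads = torch.autograd.grad(val_loss, lrs)
    with torch.no_grad():
        for lr, g_lr in zip(lrs, lr_grads):
            lr -= hyper_lr * g_lr
            lr.clamp_(min=0.0)  # keep learning rates non-negative

        # Actual parameter update using the freshly adapted per-layer LRs.
        for p, g, lr in zip(params, grads, lrs):
            p -= lr * g

    return train_loss.item(), val_loss.item()

# Usage (hypothetical): lrs = [torch.tensor(1e-3, requires_grad=True) for _ in model.parameters()]

In practice, parameters would be grouped by actual layer (so that a layer's weight and bias share one LR), and the LR initialization and clamping would follow the paper's settings; the sketch above only makes the mechanics of the online LR update explicit.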
Q3: I suggest that the authors elaborate on the advantages and innovations of this approach compared with the meta-learning approach. R3: Thank you for your constructive suggestion. We will highlight the difference between MetaLR and the traditional meta-learning approach in the camera-ready version.
Reviewer2
Q1: I would appreciate it if the authors added their thoughts on the data-centric approach to the discussion. R1: Thank you for your good suggestion. Data scarcity is one of the core problems in medical scenarios; often we can only find ways to make better use of the existing data (e.g., rare diseases may have only a few recorded cases), in which case it is valuable to design better methods like MetaLR to exploit these precious data. In fact, the model-centric approach in this paper is orthogonal to the data-centric approach: both better training techniques and more high-quality data can have a positive effect on the generalization ability of the model. We will add relevant discussions to the revised paper.
Q2: Unclear descriptions, typos, and the suggestion for discussing limitations. R2: We appreciate your detailed comments, and these issues will be well-addressed in the camera-ready version.
Reviewer3
Q1: The proposal is very well detailed mathematically and validated with many experiments. I would improve the introduction by saying why your method is appropriate only for medical images and not others. R1: Thank you for your positive feedback and valuable comments. Medical image analysis is one of the most prominent research areas that needs transfer learning to mitigate data limitations. We were motivated by the observation that practitioners usually need to transfer models pre-trained on other domains (natural images) to medical tasks. The coexisting domain discrepancy and task discrepancy make layer-wise transferability hard to predict. Unlike common transfer learning tasks from natural images to natural images, where the lower layers that capture basic features are usually more transferable, scenarios for medical images are much more complicated. This is why MetaLR is specially designed for medical images. We will revise our introduction according to your suggestion.
Q2: For future work, I would include detection networks and test them with lesion detection problems. R2: Thank you for your sincere suggestion. Besides detection, we will also try MetaLR on more medical tasks and models, such as the prevalent medical LLMs and visual foundation models.