
Authors

Xinrui Zhou, Yuhao Huang, Wufeng Xue, Xin Yang, Yuxin Zou, Qilong Ying, Yuanji Zhang, Jia Liu, Jie Ren, Dong Ni

Abstract

Localization of the narrowest position of the vessel and the corresponding vessel and remnant-vessel delineation in carotid ultrasound (US) are essential for carotid stenosis grading (CSG) in clinical practice. However, the pipeline is time-consuming and challenging due to the ambiguous boundaries of plaque and temporal variation. To automate this procedure, a large number of manual delineations are usually required, which is not only laborious but also unreliable given the annotation difficulty. In this study, we present the first video classification framework for automatic CSG. Our contribution is three-fold. First, to avoid the requirement of laborious and unreliable annotation, we propose a novel and effective video classification network for weakly-supervised CSG. Second, to ease model training, we adopt an inflation strategy for the network, where pre-trained 2D convolution weights can be adapted into their 3D counterparts in our network for an effective warm start. Third, to enhance the feature discrimination of the video, we propose a novel attention-guided multi-dimension fusion (AMDF) transformer encoder to model and integrate global dependencies within and across spatial and temporal dimensions, where two lightweight cross-dimensional attention mechanisms are designed. Our approach is extensively validated on a large clinically collected carotid US video dataset, demonstrating state-of-the-art performance compared with strong competitors.
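The inflation strategy mentioned in the abstract is, as Review #2 notes, the technique introduced in I3D: a pre-trained 2D convolution kernel is repeated along a new temporal axis and rescaled by the temporal extent, so that a temporally constant input yields the same activations as the original 2D network. A minimal sketch of the idea (NumPy; the kernel shapes here are illustrative, not taken from the paper):

```python
import numpy as np

def inflate_2d_to_3d(w2d: np.ndarray, t: int) -> np.ndarray:
    """Inflate a 2D conv kernel of shape (out, in, kh, kw) into a 3D
    kernel of shape (out, in, t, kh, kw) by repeating it t times along
    the temporal axis and dividing by t, so a temporally constant video
    produces the same response as the original 2D convolution."""
    return np.repeat(w2d[:, :, None, :, :], t, axis=2) / t

# Example: inflate a (out=4, in=3) kernel with 3x3 spatial support to 5 frames.
w2d = np.random.rand(4, 3, 3, 3)
w3d = inflate_2d_to_3d(w2d, 5)
```

Summing the inflated kernel over its temporal axis recovers the original 2D weights, which is exactly the property that makes the warm start effective.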

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43895-0_48

SharedIt: https://rdcu.be/dnwy1

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #3

  • Please describe the contribution of the paper

First, the authors propose a weakly-supervised CSG method, which alleviates the need for laborious mask annotation. Second, the authors adopt an inflation strategy to adapt pre-trained 2D convolution weights into their 3D counterparts. Third, the authors propose an attention-guided multi-dimension fusion (AMDF) transformer encoder to integrate global dependencies within and across spatial and temporal dimensions. Two lightweight cross-dimensional attention mechanisms are devised.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

(1) This paper is the first to tackle the task of weakly-supervised CSG and achieves good performance. (2) An AMDF Transformer encoder is devised that reduces computational complexity. (3) Good qualitative results in Fig. 4 and substantial improvements in Table 1.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

CSG is defined as a classification task in this paper (mild or severe), and the classification annotation is provided for training the model. How, then, is this a weakly-supervised CS grading method? If the authors claim that CSG-3DCT can perform localization without segmentation annotation, there should be quantitative experiments rather than a single selected sample for visualization. Moreover, the method is evaluated on a private dataset collected by the authors, so the reproducibility of its ability to process ultrasound video is questionable.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

The dataset is not publicly available, so the reproducibility of the paper is unclear.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

(1) The SWA and CA in Fig. 3 should be introduced in more detail. (2) The authors should discuss the speed and hardware requirements when testing on a US video. (3) The authors should discuss the limitations of their method.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

This paper is the first to tackle the CSG task, with a well-designed method. The performance is good, and the method can be considered for clinical use. I still have two concerns. My first concern is that the dataset is not publicly available and the improvement is hard to reproduce. My second concern is whether the attention map is good enough for segmentation.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #1

  • Please describe the contribution of the paper

This paper proposes a video classification network for CSG with weak supervision. New feature encoders are proposed to improve model performance while reducing computational complexity.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • Combination of CNNs and Transformers
    • Inflation strategy for better initialization of model parameters and training
    • Improved Transformer encoder for enhanced feature discrimination
    • Clear explanation on rationale behind the encoder designs
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • Generalization capability needs to be discussed in an extension of the paper
    • Inference speed and computational cost need to be discussed
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

Reproducibility may be a challenge due to the many details in the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Please see above.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Please see the strength section.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

This paper proposes a convolution-transformer hybrid model for 3D ultrasound video classification. It combines multiple elements from previous methods, including spatial-temporal feature fusion from TimeSformer, the idea of weight inflation from I3D, and the two-stream convolution-transformer architecture from Conformer. It achieves better performance than a few baselines on a carotid transverse US video dataset consisting of 200 videos.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The proposed model is the first convolution-transformer hybrid model customized for carotid stenosis grading.
    2. The architecture is a combination of existing ideas. It seems that the only novel component is the Switched Attention, but it performs no better than conventional cross-attention (Table 1).
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The literature review and the compared baselines are a bit old.
    2. The authors claim the second major contribution of this paper is an initialization technique of inflating pretrained 2D CNN weights into 3D CNN kernels. However, it’s a technique proposed in I3D, and it’s arguable whether this should be listed as a major contribution.
    3. The modification that leads to the Switched Attention is not well motivated. The intuition why it works better is not explained.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Seems reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. As the first step, each slice is cropped with a detector to remove useless background. Then the cropped regions are concatenated to form a volumetric vessel and input to the 3D CNN and Transformer. What if the detected bounding boxes are far apart across adjacent slices? As far as I know, transformers are less sensitive to unaligned slices, as their locations are marked with positional encodings. However, 3D CNNs can hardly handle unaligned input.
    2. I’m still unsure whether 200 videos are sufficient to train a 3D CNN/transformer model for this task. The dataset seems too small to me, since the transformers are not initialized with the inflation technique.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

It seems that the authors did a lot of work and the overall experiments are OK, but I have a few major concerns that I hope the authors can address properly.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The proposed work integrates convolution and transformer mechanism into a hybrid model.




Author Feedback

N/A


