
Authors

Xiongchao Chen, Bo Zhou, Huidong Xie, Xueqi Guo, Jiazhen Zhang, Albert J. Sinusas, John A. Onofrey, Chi Liu

Abstract

Single-photon emission computed tomography (SPECT) is a widely applied imaging approach for the diagnosis of coronary artery diseases. Attenuation maps (μ-maps) derived from computed tomography (CT) are utilized for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. However, SPECT and CT are obtained sequentially in clinical practice, which potentially induces misregistration between the two scans. Convolutional neural networks (CNNs) are powerful tools for medical image registration. Previous CNN-based methods for cross-modality registration either directly concatenated the two input modalities as an early feature fusion or extracted image features using two separate CNN modules for a late fusion. These methods do not fully extract or fuse the cross-modality information. In addition, deep-learning-based rigid registration of cardiac SPECT and CT-derived μ-maps has not been investigated before. In this paper, we propose a Dual-Branch Squeeze-Fusion-Excitation (DuSFE) module for the registration of cardiac SPECT and CT-derived μ-maps. DuSFE fuses the knowledge from multiple modalities to recalibrate both channel-wise and spatial features for each modality. DuSFE can be embedded at multiple convolutional layers to enable feature fusion at different spatial dimensions. Our studies using clinical data demonstrated that a network embedded with DuSFE generated substantially lower registration errors and therefore more accurate AC SPECT images than previous methods.
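
As an aside for readers of this page, the sketch below illustrates the kind of dual-branch channel-and-spatial squeeze-fusion-excitation block the abstract describes. It is a minimal PyTorch reconstruction for intuition only, not the authors' released DuSFE implementation (see the code repository linked below); the layer sizes, reduction ratio, and exact fusion operations are assumptions.

```python
# Illustrative sketch only: a dual-branch channel + spatial squeeze-fusion-excitation
# block in PyTorch. It is NOT the authors' released DuSFE code; the reduction ratio,
# gating layout, and layer sizes are assumptions made for illustration.
import torch
import torch.nn as nn


class DualBranchSFE(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel-wise squeeze-fusion-excitation (cSFE): squeeze both branches,
        # fuse the pooled descriptors, and emit one channel gate per branch.
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fuse = nn.Sequential(
            nn.Linear(2 * channels, (2 * channels) // reduction),
            nn.ReLU(inplace=True),
        )
        self.gate_a = nn.Sequential(nn.Linear((2 * channels) // reduction, channels), nn.Sigmoid())
        self.gate_b = nn.Sequential(nn.Linear((2 * channels) // reduction, channels), nn.Sigmoid())
        # Spatial squeeze-fusion-excitation (sSFE): fuse both branches along the
        # channel axis and emit one spatial attention map per branch.
        self.spatial_a = nn.Sequential(nn.Conv3d(2 * channels, 1, kernel_size=1), nn.Sigmoid())
        self.spatial_b = nn.Sequential(nn.Conv3d(2 * channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        b, c = feat_a.shape[:2]
        # cSFE: cross-connected channel recalibration of each branch.
        squeezed = torch.cat([self.pool(feat_a), self.pool(feat_b)], dim=1).view(b, 2 * c)
        shared = self.fuse(squeezed)
        feat_a = feat_a * self.gate_a(shared).view(b, c, 1, 1, 1)
        feat_b = feat_b * self.gate_b(shared).view(b, c, 1, 1, 1)
        # sSFE: cross-connected spatial recalibration of each branch.
        stacked = torch.cat([feat_a, feat_b], dim=1)
        return feat_a * self.spatial_a(stacked), feat_b * self.spatial_b(stacked)


# Example: fuse 3D SPECT and mu-map features with 16 channels each.
sfe = DualBranchSFE(channels=16)
spect_feat = torch.randn(1, 16, 8, 8, 8)
mumap_feat = torch.randn(1, 16, 8, 8, 8)
spect_out, mumap_out = sfe(spect_feat, mumap_feat)
```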

Link to paper

DOI: https://link.springer.com/chapter/10.1007/978-3-031-16446-0_5

SharedIt: https://rdcu.be/cVRSL

Link to the code repository

https://github.com/XiongchaoChen/DuSFE_CrossRegistration

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper proposes a rigid multi-modality DL-based registration algorithm with application to SPECT and CT images. An SE (Squeeze-and-Excitation) block is utilized between the feature extraction pipelines for SPECT and CT. The dataset includes 450 in-house aligned pairs, which have been artificially deformed with random rigid transformations for the evaluation. The proposed method is compared with a conventional method and several multi-modality DL-based registration methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1) Comprehensive evaluation, but limited to artificial deformations.
    2) The proposed method improves the performance for this rigid SPECT/CT registration task.
    3) The paper is well written.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) Novelty is limited.
    2) Lack of evaluation on real data.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    All looks good. It would be nice to make several cases in the dataset publicly available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    1) It is not mentioned how the ground-truth transformations are generated. Are the SPECT and μ-maps aligned manually? How are the ground-truth transformations validated?

    2) It would be helpful to add the original size and voxel spacing of CT images.

    3) It would be nice to clarify (in a few sentences) how the proposed method can be applied to a real-case scenario, especially when the CT and SPECT images are acquired with different sizes and voxel spacings.

    4) Is there any intensity augmentation used during the training? If not, it would be nice to mention that explicitly.

    5) The squeeze excitation module has been used in several DL-based registration publications but not in the context of multi-modality registration. It would be relevant to cite a few of those. (not necessary for this conference paper)

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The proposed method achieved the best result for the rigid SPECT/CT registration task over an in-house dataset compared to current multi-modal approaches. Although no evaluation has been performed on real data, the artificial transformations look sufficient and promising for this conference paper.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #3

  • Please describe the contribution of the paper

    The paper proposes a potentially novel, dual-branch squeeze-fusion-excitation (DuSFE) attention module for feature fusion between cardiac SPECT and CT. The proposed attention module aims to explicitly model the feature fusion process between the two modalities. The proposed framework aims to directly utilize the spatial and channel re-weighting property of the squeeze-and-excitation networks (SENets) to better fuse multimodal information for image registration. The authors motivate the need for such a task specific feature fusion module due to a lack of cross-modal feature integration models.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper claims a completely novel attention module for early feature fusion between two different medical imaging modalities before registering them using a deep multi-layered module. The DuSFE module takes two inputs from parallel convolutional backbones, one catering to each modality. This layer then fuses and recalibrates the features in two stages: channel-wise squeeze-fusion-excitation (cSFE) followed by spatial squeeze-fusion-excitation (sSFE). This construction offers a potentially novel way of fusing multimodal features across the spatial as well as channel dimensions. Model performance was validated in an appropriately sized cohort of 450 individuals, and the proposed scheme was compared with four different iterative and deep-learning-based image registration methods.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The proposed approach simulates mis-registrations by manually introducing translations and rotations. It is unclear what mis-registrations were inherent to the dataset. How bad was the alignment between the two images before adding translations and rotations? This leads to two cases:
    • Case I: If there were mis-registrations before adding rotations or translations, why did the authors introduce them?
    • Case II: On the contrary, if the images were in reasonable spatial agreement, why did the authors simulate mis-registrations? If the images were in good agreement, then mis-registering them could reduce to the simpler task of learning rotations and translations.
    Moreover, if the analysis cohort does not exhibit naturally occurring mis-registrations, would it be useful to evaluate the model over a cohort that has some natural mis-registrations?

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    This could be assessed by looking at the source code repository which is currently anonymous.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    It would be great to clearly state that the paper focuses on rigid image registration and not deformable image registration. Could there be potential deformations as well in the case of SPECT vs. CT that might need to be modeled? Also, the authors should compare their method with at least one approach that specifically focuses on feature fusion.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    It would be great to clearly state that the paper focuses on rigid image registration and not deformable image registration. Could there be potential deformations as well in the case of SPECT vs. CT that might need to be modeled? Also, the authors should compare their method with at least one approach that specifically focuses on feature fusion.

  • Number of papers in your stack

    2

  • What is the ranking of this paper in your review stack?

    2

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Not Answered

  • [Post rebuttal] Please justify your decision

    Not Answered



Review #2

  • Please describe the contribution of the paper

    This paper is well-organized, clearly written, and easy to follow. The novelty of increasing cross-modal registration accuracy by squeeze-fusion-excitation along both the channel and spatial dimensions is intuitive and interesting. The main motivation is well-supported by strong experimental evaluations.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper is well-organized, clearly written, and easy to follow. The novelty of increasing cross-modal registration accuracy by squeezing, fusing, and exciting both the channel and spatial dimensions is intuitive and interesting. The main motivation is well-supported by strong experimental evaluations.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The detailed illustration of the figures and tables is severely insufficient; i.e., the captions of the figures and tables are too simple and need to be enriched. The differences in the visualization results are not easy to see. Please update the figures with zoomed-in views to highlight them.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    It is easy to reproduce since the authors will provide the code upon acceptance and the description is clear in the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    This is a very good submission with enough technical contributions and strong experimental validations. However, there are some minor problems that need to be solved:
    • It would be better to move subsection 2.1 (Dataset and Preprocessing) and subsection 2.4 (Implementation Details) to Section 3 (Results). It would be more concise to describe only the method design in Section 2 (Methods).
    • Please enrich the captions of Fig. 1 and Fig. 2.
    • The differences in the visualization results in Fig. 3 are not straightforward to see. Please use markers or a zoomed-in window to highlight the differing regions. Enriching the caption would also help in understanding this figure.
    • There is a third kind of registration encoder, which uses a weight-shared encoder (i.e., one encoder) to extract CNN features from both the moving and fixed images. I would suggest discussing this third kind in the literature review as well. [1] Liu, L., et al.: Contrastive registration for unsupervised medical image segmentation.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This is a very good submission with clear organization, interesting novelty, and strong experimental validation. I would recommend acceptance of this paper.

  • Number of papers in your stack

    5

  • What is the ranking of this paper in your review stack?

    1

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    7

  • [Post rebuttal] Please justify your decision

    Overall, this is a good submission with clear organization and enough technical novelty. I only spot minor problems in this paper, which are solved in the response files. Hence, I would recommend accepting.



Review #4

  • Please describe the contribution of the paper

    This paper proposes a Dual-Branch Squeeze-Fusion-Excitation (DuSFE) module for cross-modality registration of cardiac SPECT and CT. Experiments on simulated unaligned data show the validity of the proposed method.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper employs a dual-branch squeeze-fusion-excitation module, which combines features/knowledge from multiple modalities and recalibrates both channel-wise and spatial features for each modality.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    (1) Unclear motivation and limited novelty. The authors do not explain the mechanisms/benefits of, or their intuitions about, why DuSFE could help in cross-modality registration. Please explain how the proposed method helps in cross-modality registration settings, where the same structures in the two modalities present totally different intensity distributions. On the other hand, the authors focus too much on existing techniques [1], such as the recalibration of channel-wise and spatial features, which should not be claimed as their contribution. Although this paper additionally employs a dual-branch technique, the innovation is incremental and limited.

    (2) Insufficient experiments. The improvements in registration accuracy are marginal compared to the baseline DenseNet [2] (the backbone network without DuSFE). Also, it would be much better for the authors to validate the proposed method on real, challenging cross-modality registration data rather than synthetically misaligned data.

    [1] Squeeze-and-excitation networks. [2] Densely connected convolutional networks.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Good reproducibility.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html

    In section 2.3, please give more illustrations about late feature fusion.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The major issues are unclear motivation and limited novelty.

  • Number of papers in your stack

    8

  • What is the ranking of this paper in your review stack?

    5

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    5

  • [Post rebuttal] Please justify your decision

    The author’s rebuttal somewhat addressed my concerns about unclear motivation. This paper uses insights from existing DL works and designs the gradual cross-connected feature fusion with tailored Squeeze-Fusion-Excitation for Cross-Modality registration.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    Dual-Branch Squeeze-Fusion-Excitation Module for Cross-Modality Registration of Cardiac SPECT and CT

    This submission tackles the problem of rigid cardiac SPECT/CT registration. The difference from conventional learning-based registration approaches resides in proposing a dual-branch architecture with cSFE and sSFE blocks, one branch per modality, fused in a fusion module (DuSFE). Evaluation uses a private dataset of 450 images and is tested using data augmented with random affine transformations. The compared methods are the original mutual information approach (from 1997), a CT-ultrasound method (DVNet), an MR-ultrasound method (MSReg), and a classification network (DenseNet).

    The reviewers gave a wide range of scores, but also indicate limited novelty or motivation (R1, R4) and issues with the evaluation (R1, R4). One review unfortunately remains superficial (R3) and needs to be weighted accordingly. I have to agree with the reviewers that the motivation and experiments remain limited and need to be addressed in a rebuttal:

    • novelty - the originality of the method needs to be strengthened. For instance, the dual-branch concept has already been proposed in several multi-modal tasks, including registration, such as [19]. If the contribution is focused on proposing the dual-branch and fusing blocks, the experiments should evaluate this specific component. As is, the reported improvement over the variants appears to be from 3.88 or 3.81 mm to 3.33 mm (an improvement of 0.48 mm), or from 1.01 or 0.70 degrees to 0.61 degrees (an improvement of 0.09 degrees). This may be considered low.

    • motivation - the community has evolved since the mutual information paper of 1997. For instance, several recent multi-modal registration approaches are referenced, such as [23]. The heart is also a highly deformable organ, the motivation for developing a rigid registration may, therefore, be considered inadequate. The literature, for instance, does not cover the recent widely used registration approaches such as VoxelMorph, which can also use mutual information as a loss function. Furthermore, DenseNet is a classification network, and needs further explanation on how this was used as a comparative method in registration. The choices of the baseline methods are, therefore, currently questionable.

    • experiments - the contribution is indicated as the fused dual-branch, and experiments on this specific aspect may currently be insufficient. Table 1 indicates an improvement of 0.48 mm (±0.45) or 0.09 degrees (±0.45) in rigid alignments. This may be considered marginal. The choices of the baseline methods should also be better motivated, as SPECT/CT registration may not be considered new today and a vast literature exists in SPECT/CT dosimetry dating prior to the learning-based approaches, for instance, Sjögreen-Gleisner, Rueckert, Ljungberg, 2009; or Jackson, Hicks, 2013. As is, it is hard to assess the contribution with respect to the state of the art.

  • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    12




Author Feedback

R1&R4&AC asked about the novelty and motivation of our work. We want to respectfully clarify that DuSFE is not a simple replication of previous methods. The original SENet and its variants are all based on the self-attention mechanism of a single input data stream. In cross-modality registration, effective feature fusion across the two inputs is required. Thus, DuSFE was designed based on a co-attention mechanism over two cross-connected input data streams, for the gradual cross-modality fusion of channel (cSFE) and spatial (sSFE) features. Although the dual-branch concept was proposed in some multi-modal registration works, the two branches were independent, without the gradual cross-connected feature fusion. DuSFE is also the first DL study for SPECT-CT registration.

AC pointed out that our experiments evaluating DuSFE might be insufficient. We conducted several ablation experiments of DuSFE, which were presented in the Supplementary Materials. A single-branch variant and different feature fusion strategies were tested and compared.

R1&R3 asked how ground-truth registrations were generated. The μ-maps were all manually checked and registered with SPECT by technologists using vendor software. It will be mentioned in the final version.

R3&AC asked why we didn’t consider non-rigid registration. In clinical cardiac SPECT, the non-rigid motions are currently not considered. The manual corrections were done only for rigid motions in current clinical practice. However, we will investigate the non-rigid registration, including the comparison to DL non-rigid methods, such as VoxelMorph, in our future studies.

R1&R3&R4 pointed out the lack of evaluation on real clinical motion data. We believe our rigid motion simulation represents realistic scenarios in clinical cases. The network trained using simulated data could be adapted to real cases. Results on real motion studies will be added to the Supplementary Materials or our future journal submission. Cropping and interpolation can be used if the real CT and SPECT have different sizes or voxel spacings.
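
To make the cropping and interpolation remark concrete, below is a minimal sketch of resampling a CT-derived μ-map to a coarser SPECT-like voxel grid and center-cropping it. The spacings, crop size, and use of SciPy are illustrative assumptions, not the paper's actual preprocessing.

```python
# Minimal sketch of resampling a CT-derived mu-map to the SPECT voxel grid before
# registration. Spacings, crop size, and interpolation order are assumptions; the
# paper's preprocessing pipeline is not reproduced here.
import numpy as np
from scipy.ndimage import zoom


def resample_to_spacing(volume: np.ndarray, spacing, target_spacing, order: int = 1):
    """Interpolate `volume` from `spacing` (mm) to `target_spacing` (mm)."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, factors, order=order)


def center_crop(volume: np.ndarray, shape):
    """Center-crop `volume` to `shape` (assumes the volume is at least that large)."""
    starts = [(v - s) // 2 for v, s in zip(volume.shape, shape)]
    slices = tuple(slice(st, st + s) for st, s in zip(starts, shape))
    return volume[slices]


# Hypothetical example: a CT mu-map at 2.0 mm isotropic spacing resampled to
# 6.8 mm SPECT-like voxels, then cropped to a fixed field of view.
ct_mu_map = np.random.rand(192, 192, 192).astype(np.float32)
mu_map_resampled = resample_to_spacing(ct_mu_map, (2.0, 2.0, 2.0), (6.8, 6.8, 6.8))
mu_map_cropped = center_crop(mu_map_resampled, (48, 48, 48))
```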

AC asked how DenseNet was used as a baseline in registration. The "DenseNet" in Tables 1&2 refers to the entire backbone registration framework shown in Fig. 1 (without DuSFE), not the original DenseNet in [10]. In this framework, DenseBlocks were embedded as components at each CNN layer for image feature extraction.
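
For clarity on what such a backbone component looks like, the sketch below shows a small densely connected 3D convolutional block of the kind described here as a per-layer feature extractor. The growth rate, depth, and normalization choice are assumptions, not the paper's exact configuration.

```python
# Illustrative sketch: a small 3D DenseBlock used as a per-layer feature extractor
# in a registration backbone. Growth rate, depth, and normalization are assumptions.
import torch
import torch.nn as nn


class DenseBlock3D(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 8, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(channels, growth_rate, kernel_size=3, padding=1),
                nn.InstanceNorm3d(growth_rate),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate  # each layer sees all previous feature maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


# Example: a 16-channel input grows to 16 + 3 * 8 = 40 output channels.
block = DenseBlock3D(in_channels=16)
out = block(torch.randn(1, 16, 8, 8, 8))
```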

AC suggested two baseline methods used in SPECT/CT registration. However, those are uni-modality CT-CT registration methods, rather than the cross-modality SPECT-CT registration addressed in our work. We will include them in our introduction.

R4&AC pointed out that the accuracy improvement using DuSFE was relatively low. We want to respectfully clarify that the change from 3.88 to 3.33 mm is nearly a 15% error decrease. Paired t-tests on the indexes, μ-maps, and SPECT images all confirmed the significance (p<0.05). The images and error maps in Figs. 3&4 also showed the improvement.

R1 asked about CT size and voxel spacing. We will add this information in our final version. We will also clarify that we didn’t use data intensity augmentation.

R2&R4 pointed out that the illustrations of figures/tables are insufficient. We will expand the figure and table captions with more details. We will also add arrows to highlight differences. The weight-shared encoder will be cited as suggested.

R3 asked why we simulated mis-registrations if images were well registered. We simulated mis-registrations to enable the training of the SPECT-CT registration network.
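
As an illustration of that training strategy, the sketch below generates a random rigid mis-registration (rotation plus translation) from an already registered μ-map, yielding a misaligned training pair and its ground-truth transform. The sampling ranges and interpolation settings are assumptions; the paper's actual simulation parameters are not reproduced here.

```python
# Illustrative sketch: simulating a rigid mis-registration (random rotation +
# translation) of an already registered mu-map to create training pairs.
# The sampling ranges and interpolation settings are assumptions.
import numpy as np
from scipy.ndimage import rotate, shift


def simulate_misregistration(mu_map: np.ndarray,
                             max_shift_vox: float = 5.0,
                             max_rot_deg: float = 10.0,
                             rng=None):
    """Return a misaligned mu-map and the ground-truth (translation, angle)."""
    rng = rng or np.random.default_rng()
    translation = rng.uniform(-max_shift_vox, max_shift_vox, size=3)
    angle = float(rng.uniform(-max_rot_deg, max_rot_deg))
    # Rotate in the axial plane, then translate; order=1 gives trilinear interpolation.
    moved = rotate(mu_map, angle, axes=(0, 1), reshape=False, order=1)
    moved = shift(moved, translation, order=1)
    return moved, translation, angle


# Hypothetical usage: the network would be trained to predict (translation, angle)
# from the (SPECT, moved mu_map) pair.
mu_map = np.random.rand(64, 64, 64).astype(np.float32)
moved_mu_map, t_gt, r_gt = simulate_misregistration(mu_map)
```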

R4 asked why DuSFE can help cross-modality registration. As we mentioned in the introduction, the co-attention mechanism of DuSFE enables gradual feature fusion between two cross-connected input data streams for cross-modality registration. Although SPECT and CT might present different intensity patterns, they might share underlying anatomical structures that can be extracted for registration.

The data (upon permission) and code will be published online upon paper acceptance.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Dual-Branch Squeeze-Fusion-Excitation Module for Cross-Modality Registration of Cardiac SPECT and CT

    The review scores lean towards Acceptance but the reviews also indicate major evaluation issues (old methods from 1997), limited improvements (0.48mm improvement from 3.81mm while the resolution is 6.8mm), and limited novelty (a dual-branch for rigid registration).

    The evaluation is currently limited to baselines that use the original mutual information approach (from 1997), a CT-ultrasound method (DVNet), an MR-ultrasound method (MSReg), and a classification network (DenseNet). These choices were not adequately addressed in the rebuttal.

    The reported improvement of 0.48 mm (from 3.81 mm to 3.33 mm) or of 0.09 degrees also needs to be put in perspective with the imaging resolution: the indicated resolution of the SPECT and CT attenuation maps is 6.8 mm. In relation, 0.48 mm seems negligible despite p-values on a private dataset with synthetic affine transformations. The lack of a real-case experiment creates a weakness in the submission.

    The rebuttal promises additional results on real motion studies, possibly in supplementary material, and also points to important ablation studies in the appendix. This deferral to the appendix may be unfair with respect to the other submissions.

    The decision also needs to take into account the presence of a few reviews that were unfortunately too superficial, of a cosmetic nature, or last-minute entries. Scores need to be weighted accordingly.

    The work has scientific merit, but so do several other works in the evaluated pool of submissions.

    For all these reasons, and situating this work with respect to the other submissions, the recommendation is towards Rejection.

    The final decision will be a consensus with the other co-meta-reviews.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Reject

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    14



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    After the rebuttal, all four reviewers have now scored the work as acceptable, with one strong accept. I have chosen to go along with the majority view.

    I note that they use a classical mutual information registration as one of the baselines, but do not say which implementation. I am aware that subtle implementation differences can greatly change accuracies (e.g., Markiewicz et al., 2021).

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    4



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Overall, the reviewers found value in the proposed multi-modal dual-branch feature fusion, which is a solid contribution to learning-based registration. The evaluation was adequately sized but limited to synthetically misaligned scans, which may over-estimate the performance. Despite these problems, all reviewers recommend acceptance, and the authors will provide source code that might enable further comparisons on public benchmark data.

  • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.

    Accept

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    4


