
Authors

Tengfei Xue, Yuqian Chen, Chaoyi Zhang, Alexandra J. Golby, Nikos Makris, Yogesh Rathi, Weidong Cai, Fan Zhang, Lauren J. O’Donnell

Abstract

Diffusion MRI tractography parcellation classifies streamlines into anatomical fiber tracts to enable quantification and visualization for clinical and scientific applications. Current tractography parcellation methods rely heavily on registration, but registration inaccuracies can affect parcellation and the computational cost of registration is high for large-scale datasets. Recently, deep-learning-based methods have been proposed for tractography parcellation using various types of representations for streamlines. However, these methods only focus on the information from a single streamline, ignoring geometric relationships between the streamlines in the brain. We propose TractCloud, a registration-free framework that performs whole-brain tractography parcellation directly in individual subject space. We propose a novel, learnable, local-global streamline representation that leverages information from neighboring and whole-brain streamlines to describe the local anatomy and global pose of the brain. We train our framework on a large-scale labeled tractography dataset, which we augment by applying synthetic transforms including rotation, scaling, and translations. We test our framework on five independently acquired datasets across populations and health conditions. TractCloud significantly outperforms several state-of-the-art methods on all testing datasets. TractCloud achieves efficient and consistent whole-brain white matter parcellation across the lifespan (from neonates to elderly subjects, including brain tumor patients) without the need for registration. The robustness and high inference speed of TractCloud make it suitable for large-scale tractography data analysis. Our project page is available at https://tractcloud.github.io.
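For context, the synthetic transform augmentation described in the abstract (rotation, scaling, translation) can be sketched as follows. This is a minimal illustration under assumed parameter ranges (the rotation limit, scale range, and translation bound are not taken from the paper), not the authors' released code:

```python
import numpy as np

def random_transform(streamlines, max_rot_deg=45.0, scale_range=(0.9, 1.1),
                     max_trans_mm=20.0, rng=None):
    """Apply one random rotation + scaling + translation to all streamlines
    of a tractogram (each streamline is an (n_points, 3) array)."""
    rng = np.random.default_rng(rng)
    # Random rotation about the z-axis; full 3D rotations work analogously.
    theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    scale = rng.uniform(*scale_range)
    shift = rng.uniform(-max_trans_mm, max_trans_mm, size=3)
    # The same transform is applied to every streamline, so the whole brain
    # moves as one (rigid motion plus isotropic scaling).
    return [(pts @ rot.T) * scale + shift for pts in streamlines]
```

Because one transform is shared by all streamlines, each augmented tractogram simulates an unregistered brain in a different pose while preserving within-brain anatomy.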

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43993-3_40

SharedIt: https://rdcu.be/dnwNL

Link to the code repository

https://tractcloud.github.io/

Link to the dataset(s)

https://tractcloud.github.io/


Reviews

Review #1

  • Please describe the contribution of the paper

    The authors implemented a geometric deep learning approach for parcellation of whole-brain tractography. They claim that the solution avoids the use of registration to perform the parcellation efficiently. They tested the approach on 5 different datasets (4 public and 1 private) and compared the results with state-of-the-art methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The problem approached by this manuscript is relevant and of interest to the community
    • New embedding for tractography representation and learning
    • Extended empirical analyses
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The interpretation of results is not straightforward
    • Some aspects of the experimental design are not clear
    • Insufficient detail in the description of the data
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The clinical dataset is not publicly available, which prevents assessing the reproducibility of the impact in a clinical setting.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    • The learning approach is defined as an end-to-end process. In this perspective, a premise is the annotation of all datasets with respect to the 43 categories of bundles. The description of data preparation doesn’t provide details on how the annotation was carried out. It looks like the labeling of streamlines was obtained by projecting the bundles from an atlas after registration. This detail is relevant because it implies that the definition of the ground truth relies on a registration step.

    • The results reported in Table 1 combine the investigation of the local-global embedding with that of the data augmentation strategy. The interpretation of these results is not straightforward. The comparison of the local versus local-global representation for TractCloud shows a drop when using data augmentation. The local embedding does not look robust to variations in rotation, translation, and scaling. Such variability would also be expected to characterize the streamlines when the tractograms are not registered to a common space. Unexpectedly, the results of TractCloud in this case do not show a drop in performance, as if the variance of unregistered tractograms were marginal. This hypothesis also appears to be confirmed by the results of TractCloud with the local-global embedding: on original data, PointNet behaves like DGCNN does with data augmentation. This is consistent with the working assumption that, when the data variance is relevant, it becomes useful to capture the relations among streamlines: local-global vs. local, DGCNN vs. PointNet. The interpretation of the results reported in Table 1 suggests that the original data are not encoded in subject space but have been normalized (e.g., ACPC alignment).

    • The discussion argues that the registration-free method can outperform methods based on registration. Since the ground truth appears to be derived from a registration, it is not clear or motivated how this may happen.

    • The design of the experimental setup divided the streamlines of each whole-brain tractogram into train, validation, and test by subject. Therefore the learning process always has a partial sampling of the streamlines for each subject. A proper cross-fold validation design would have partitioned the data with respect to the population. It would be important to disambiguate this potentially misleading description of the experimental setup.

    • Table 1 reports the results of the cross-validation analysis. Some measures for TractCloud are marked in bold, but the meaning of this highlighting is not explained. Does the bold refer to a statistical significance test?

    • The 43 categories are not homogeneous: 42 categories refer to anatomically plausible bundles and 1 category to all the streamlines considered not anatomically plausible. Without a statistics

    • The TIR measure uses a threshold of 50 as the minimum number of streamlines for a bundle to be considered identified. The presentation of the datasets doesn’t report statistics for the 43 bundle categories. Considering that a whole-brain tractogram is composed of 1M streamlines and there are 43 bundle categories, the average bundle size is much larger than 50 streamlines. It is not clear how only 50 streamlines can be representative enough to identify a bundle.

    • In Table 2 it would be informative to report the standard deviation of the TDA measures.
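To make the metric discussed above concrete, here is a hedged sketch of a tract identification rate of the kind the reviewer describes: the fraction of anatomical tracts assigned at least a minimum number of predicted streamlines. The exact formulation and the label convention (label 0 for implausible streamlines) are assumptions for illustration, not the paper's implementation:

```python
from collections import Counter

def tract_identification_rate(pred_labels, n_tracts=42, min_streamlines=50):
    """Fraction of anatomical tracts 'identified', i.e. assigned at least
    `min_streamlines` predicted streamlines. Label 0 is assumed to denote
    the 'not anatomically plausible' category and is excluded."""
    counts = Counter(pred_labels)
    identified = sum(1 for t in range(1, n_tracts + 1)
                     if counts.get(t, 0) >= min_streamlines)
    return identified / n_tracts
```

Under this reading, the 50-streamline threshold only decides whether a tract is detected at all; it says nothing about the quality of the detected tract, which is why a complementary measure such as a tract-to-atlas distance is informative.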

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The research goal is relevant. The proposed method is clear. Some details of the design of analyses are not clear. The interpretation of results is not straightforward.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    The paper proposes a novel registration-free framework, TractCloud, for whole-brain tractography parcellation directly in individual subject space. The proposed framework leverages a learnable local-global streamline representation that takes into account geometric relationships between streamlines in the brain. The authors trained their framework on a large-scale labeled tractography dataset and tested it on five independently acquired datasets across populations and wide age ranges. The results showed that TractCloud achieved fast and accurate whole-brain tractography parcellation without the need for registration, making it an attractive solution for large-scale datasets. Overall, the paper presents an innovative approach to tractography parcellation that has the potential to advance the field of dMRI.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. A novel formulation: The paper presents a novel registration-free framework, TractCloud, for whole-brain tractography parcellation directly in individual subject space. This innovative approach leverages a learnable local-global streamline representation that takes into account geometric relationships between streamlines in the brain.

    2. Strong evaluation: The authors trained their framework on a large-scale labeled tractography dataset and tested it on five independently acquired datasets across populations and health conditions. This demonstrates the robustness and generalizability of the proposed framework.

    3. Fast and accurate results: The results showed that TractCloud achieved fast and accurate whole-brain tractography parcellation without the need for registration, making it an efficient solution for large-scale datasets. This is a significant advantage over current methods that require registration, which can be time-consuming and computationally expensive.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    There are no major weaknesses.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Authors detailed the framework clearly, and will release code on GitHub. The formulation and experimental design are easy to follow, with all the details covered in the paper.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    Overall the paper is well-written, comprehensive, and easy to follow. I really enjoyed reading the paper. I would like to congratulate the authors on their efforts. I have one minor suggestion.

    • Lack of interpretability: The paper does not provide a clear interpretation of the learned local-global streamline representation used in TractCloud. While the authors demonstrate that this representation improves the accuracy of tractography parcellation, it is unclear how this representation relates to known anatomical structures in the brain.
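To make the representation under discussion concrete, here is a rough sketch of how a local-global input for one streamline could be assembled: the streamline itself, its nearest neighbors (local anatomy), and a random whole-brain sample (global pose). The flattened Euclidean distance and all parameter values are illustrative assumptions, not the authors' released code:

```python
import numpy as np

def local_global_input(streamlines, idx, k_local=20, k_global=500, rng=None):
    """Build the input for one streamline: itself, its k nearest neighbors
    (local anatomy), and a random whole-brain sample (global pose).
    Streamlines are assumed resampled to a fixed number of points."""
    rng = np.random.default_rng(rng)
    pts = np.stack(streamlines)                   # (n, p, 3)
    flat = pts.reshape(len(streamlines), -1)      # flattened coordinates
    d = np.linalg.norm(flat - flat[idx], axis=1)  # distance to the query
    neighbors = np.argsort(d)[:k_local + 1]       # includes idx itself
    global_ids = rng.choice(len(streamlines), size=k_global, replace=True)
    return pts[neighbors], pts[global_ids]
```

Note that a flattened point-wise distance ignores the orientation ambiguity of streamlines (a curve and its reversal trace the same tract); practical implementations typically use a symmetric streamline distance such as the minimum average direct-flip distance.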
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    8

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The proposed framework, TractCloud, achieved fast and accurate whole-brain tractography parcellation without the need for registration, making it an attractive solution for large-scale datasets. Overall, the paper presents an innovative approach to tractography parcellation that has the potential to advance the field of dMRI.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #4

  • Please describe the contribution of the paper

    The authors present their work on registration-free tractography parcellation, using not only single-streamline information but also taking the local neighborhood into account.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    • The approach is evaluated on diverse data
    • The authors present a nice end-to-end method that does not rely on any intermediate steps such as registration
    • The authors include multiple benchmark methods in their evaluation
    • Visually, the proposed approach seems to yield fewer outliers

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    It is unclear why not requiring registration is important. Simple rigid alignment to a common space is fast and robust.

    The comparison to an approach that requires registration, on data that is not registered, is unfair, and results are of course expected to be worse. The data should be registered to have a fair comparison.

    The authors can show only small improvements using their approach.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Data is openly available and code will be published

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    -

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    While the improvements are not strong, the method is interesting for the community and might be a basis for further developments.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    7

  • [Post rebuttal] Please justify your decision

    The authors convincingly clarified my raised issue regarding an unfair comparison. While I am still not 100% convinced regarding the drawbacks of registration, their clarification regarding the evaluation definitely warrants bumping this to a strong accept.




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    Reviewers thought that the topic of this manuscript should be of interest to the MICCAI community, that the proposed idea is sufficiently novel, and that the extent of the experimental evaluation more than satisfies the expectations for a conference paper. Despite this, there were a few open questions that should be clarified in a rebuttal:

    1. The authors emphasize that their method is registration-free, but it is neither entirely clear why that should be practically relevant, nor is it entirely clear whether that is really the case, since the labels with which the method has been trained still appear to have been generated based on a registration.
    2. Please comment on R1’s concerns regarding the interpretation of the results in Table 1. In particular, have input volumes for the proposed registration-free approach been pre-aligned (e.g., ACPC alignment)?
    3. Please clarify whether, as suspected by R4, the registration step has been omitted when comparing to approaches that require registration, which would amount to an unfair comparison.




Author Feedback

We thank the AC and reviewers. We first give our responses to the three questions asked by the AC, followed by our responses to all other comments/requests.

Registration-free nature and practical relevance/benefits of the method (AC, R1, R4): We will clarify in the paper that the proposed method is registration-free specifically at the inference stage. The benefits of the registration-free method include robustness to inter-subject variability and reduced computational time and cost. These advantages are very practical when processing large-scale tractography datasets. We agree with R1 that registration was previously used in the creation of the openly available multi-subject tractography atlas that we use as training data, and we will clarify this in the paper.

Questions about the interpretation of Table 1 (AC, R1): First, we apologize for the confusion regarding the structure of Table 1, which will be clarified in the paper. R1 is correct that the “original” data are aligned, as these data are in the training atlas space (experiments training and testing on these data are located in the top two rows of Table 1). The “STA” data are unregistered data that were synthetically transformed (experiments training and testing on these data are located in the bottom two rows of Table 1). To specifically address R1’s questions: yes, the “original” data are aligned, while the unregistered “STA” data are not pre-aligned. We will clarify that we only centered each brain as a pre-processing step.

Question if the registration step has been omitted when comparing to approaches that require registration (AC, R4): We clarify that all comparison methods in Table 2 (DeepWMA, DCNN++, and TCregist) were trained and tested on registered data. Only our registration-free framework (TCreg-free) was trained and tested on unregistered data.

We will clarify several points raised by the reviewers:

  • We will add a discussion of the potential anatomical “interpretation of the learned local-global streamline representation” (R3).
  • Due to space limitations we are not able to provide a detailed result for all 43 tracts in Table 2, so we will provide standard deviation information, which shows a highly consistent result across all tracts (R1).
  • We will describe the distribution of the numbers of streamlines in the 43 tract categories of the training atlas (R1).
  • We will clarify that the labeled atlas data was divided into train/validation/test by subject (such that all streamlines from an individual subject were placed into only one split: train, validation, or test), as asked by R1.
  • We will clarify that the bold text in Table 1 indicates the best-performing method for the “original” and “STA” data experiments (R1).
  • The tract identification rate based on a streamline threshold (R1) is a popular metric. We agree with R1 that more information is needed, and we will clarify that we include a complementary tract quality measure in Table 2, the tract distance to atlas.
  • R4 suggests that rigid alignment is fast and sufficient; we will clarify that, in the published literature, affine or even nonrigid registration is needed for tractography parcellation.
  • We will clarify “how the annotation has been carried out” (R1): the annotation of tract labels is performed directly using the inference stage of our proposed deep learning method, which requires no registration of an atlas.
  • We will clarify the magnitude of the improvements of our method (R4): on augmented (unregistered) training data, the improvement of our framework is large (11%-14% improvement of the F1 metric); on the testing datasets, our method is faster than the compared methods and has significantly better quality of identified tracts.

We thank the reviewers again for their constructive comments, which will all be addressed in the final paper should it be accepted.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Reviewers were positive about this work to begin with. The rebuttal convincingly addresses the remaining concerns, and describes modifications to the manuscript that will further strengthen it. In particular, authors will clarify a) that the method is registration-free at the inference stage, b) the practical importance of that fact, c) interpretation of central results, d) that the comparison to previous methods has been done in a fair manner, e) several smaller points brought up in the individual reviews. I am looking forward to seeing the revised paper at MICCAI!



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors have convincingly addressed all the concerns raised by the reviewers; therefore, I recommend accepting this paper.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    All reviewers agreed this is a methodologically strong paper that solves an interesting clinical problem. The main critiques were related to some unclear writing (although reviewers were divided on this topic) and a point that some of the data used are not publicly available. However I think the fact that this manuscript has 6 datasets (1 training and 5 others for testing) and only one that is not publicly available is a real strength of the manuscript.

    Overall I think the weaknesses raises are relatively small, and fixable in the final manuscript, and are not outweighed by the strengths.


