Authors
Mohamed A. Suliman, Logan Z. J. Williams, Abdulah Fawaz, Emma C. Robinson
Abstract
Cortical surface registration is a fundamental tool for neuroimaging analysis that has been shown to improve the alignment of functional regions relative to volumetric approaches. Classically, image registration is performed by optimizing a complex objective similarity function, leading to long run times. This contributes to a convention for aligning all data to a global average reference frame that poorly reflects the underlying cortical heterogeneity. In this paper, we propose a novel unsupervised learning-based framework that converts registration to a multi-label classification problem, where each point in a low-resolution control grid deforms to one of a fixed, finite number of endpoints. This is learned using a spherical geometric deep learning architecture, in an end-to-end unsupervised way, with regularization imposed using a deep Conditional Random Field (CRF). Experiments show that our proposed framework performs competitively, in terms of similarity and areal distortion, relative to the most popular classical surface registration algorithms and generates smoother deformations than other learning-based surface registration methods, even in subjects with atypical cortical morphology.
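To make the "registration as multi-label classification" formulation above concrete, here is a minimal numpy sketch: each control point predicts a score for each of a finite set of candidate endpoints, and a softmax read-out gives its deformed position. The Fibonacci grids, the k-nearest candidate construction and the softmax read-out are illustrative assumptions, not the authors' DDR implementation (which uses a spherical geometric deep network with CRF regularization).

```python
import numpy as np

def fibonacci_sphere(n):
    """Roughly uniform points on the unit sphere (stand-in for an icosphere grid)."""
    i = np.arange(n)
    phi = np.arccos(1 - 2 * (i + 0.5) / n)
    theta = np.pi * (1 + 5 ** 0.5) * i
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def candidate_endpoints(control_pts, all_pts, k=32):
    """For each control point, its k nearest grid points act as its finite label set."""
    d = control_pts @ all_pts.T                  # cosine similarity on the sphere
    idx = np.argsort(-d, axis=1)[:, :k]          # k closest candidates per control point
    return all_pts[idx]                          # (n_control, k, 3)

def soft_deformation(logits, candidates):
    """Softmax over labels -> expected endpoint, re-projected onto the sphere."""
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    end = (w[..., None] * candidates).sum(axis=1)
    return end / np.linalg.norm(end, axis=1, keepdims=True)

control = fibonacci_sphere(642)                  # low-resolution control grid
dense = fibonacci_sphere(2562)                   # grid of candidate endpoints
cands = candidate_endpoints(control, dense, k=32)
logits = np.random.randn(len(control), 32)       # stand-in for the network's per-label scores
warped = soft_deformation(logits, cands)         # one endpoint per control point
print(warped.shape)                              # (642, 3), unit vectors
```

In the actual method the logits come from the registration network and the CRF penalises inconsistent labels at neighbouring control points; the sketch only illustrates how a discrete labelling is turned into a spherical deformation.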
Link to paper
DOI: https://link.springer.com/chapter/10.1007/978-3-031-16446-0_12
SharedIt: https://rdcu.be/cVRSS
Link to the code repository
https://github.com/mohamedasuliman/DDR/
Link to the dataset(s)
https://db.humanconnectome.org/
Reviews
Review #1
- Please describe the contribution of the paper
- Deep neural nets-based surface registration
- Reshaping existing methods into a CNN framework
- Validation on a large dataset with other spherical registration methods
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The paper proposes a neural-network-based spherical registration method. The underlying theory is based on existing spherical registration work [22,23] and the MoNet architecture [9]. Although the theoretical contributions seem weak in this work, the authors integrate existing approaches into a single framework. The proposed method was validated on part of the HCP dataset (N ≈ 1.1k) against other spherical registration methods. The results show comparable performance (especially to Spherical Demons) while reducing computing time.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The paper presents promising results. Yet, some points are not clear; in particular, the motivation for using rotational equivariance seems weak. Please see my comments in the section below.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The proposed method seems reproducible.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
-
The authors point out that rotation equivariance does not hold in S3Reg, in which filter orientation is inconsistently represented at the poles. This might be the motivation for using MoNet in this study. I like a rotation-equivariant approach but was disappointed in how rotational equivariance was used in this work. Although I understand the orientation problem, the MoNet approach (or rotation equivariance) does not seem critical to the proposed method. In particular, MoNet is used here for rigid alignment as preprocessing, where it may be able to capture global orientations because of rotation equivariance. However, rigid alignment is relatively quick and generally works well without learning in conventional methods such as FreeSurfer or Spherical Demons. When it comes to spherical registration, rotation equivariance seems of limited value since the input data are already rigidly aligned. Also, I think “good” rigid alignment is key in the proposed method because of the label ordering, which may not be modeled even with rotation equivariance. In this step, MoNet and Spherical U-Net (or other spherical CNNs) likely make little difference in terms of label inference. From a practical point of view, it is unclear how much the orientation failure at the poles affects overall registration performance.
-
The overall registration framework is heavily based on [22,23]. The proposed registration is clearly faster than conventional surface registration. Since diffeomorphism is, as yet, only implicit, what about the regularity of the deformations (e.g., self-intersections)? Does the proposed method unfold flipped triangles (triangles with negative area)?
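The flipped-triangle question can be checked directly on the mesh. The sketch below counts triangles whose outward orientation flips after warping, a proxy for negative-area (folded) triangles; it assumes the warped sphere keeps the original face list and is not taken from the paper or its code.

```python
import numpy as np

def triangle_orientation(vertices, faces):
    """Sign of the outward-facing normal of each spherical triangle.
    vertices: (V, 3) points on (or near) the unit sphere; faces: (F, 3) vertex indices."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    normal = np.cross(b - a, c - a)
    centroid = (a + b + c) / 3.0
    return np.sign(np.einsum('ij,ij->i', normal, centroid))

def count_flipped(orig_vertices, warped_vertices, faces):
    """Number of triangles whose orientation flips between the original and warped sphere."""
    before = triangle_orientation(orig_vertices, faces)
    after = triangle_orientation(warped_vertices, faces)
    return int(np.sum(before * after < 0))
```

For example, count_flipped(sphere_vertices, warped_vertices, faces) == 0 would indicate that the deformation introduced no folds on that mesh.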
-
In the introduction, the authors claim that “there is a limit to these improvements since cortical topographies vary in ways that break the diffeomorphic assumptions of classical registration algorithms”. I am unsure whether this is overcome by the proposed method. Since the framework follows [22,23], what is the methodological difference from the classical methods? Are there any supporting examples?
-
In the results, Spherical Demons seems better on all benchmarks except computing time. Although the proposed method is the fastest, computing time is often less important than registration accuracy in many neuroscience studies. Can the authors comment on these results? Along the same lines, evaluation against some manual labels (e.g., cortical parcellations) would also help in understanding the performance of the method, though I am unsure whether this is feasible in this work.
-
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
4
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Please see my comments above.
- Number of papers in your stack
5
- What is the ranking of this paper in your review stack?
3
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
This paper proposes a deep learning framework for registration on a sphere. The problem is framed as a discrete matching problem, where each control point on the sphere is assigned a label corresponding to one of a finite set of possible displacements, with regularization provided by a conditional random field network. The approach is evaluated on a large dataset of brain surfaces against both deep learning and conventional algorithms, and has competitive performance.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The proposed method uses different architecture choices compared to the closest existing work, S3Reg, that simplify the approach, eliminating the need to average multiple solutions. The overall design of the approach makes sense for the registration problem at hand. The method is very clearly explained. The evaluation is excellent, using the very large Human Connectome Project dataset, and comparing to several conventional methods and S3Reg under different parameter settings. The comparison considers both quality of matching and distortion, making it more thorough.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
-
It is not clear whether diffeomorphism of the spherical deformation is guaranteed, as it is in Spherical Demons. It may be, if the local displacements are constrained to be small enough and enough fitting iterations are performed, but this is not stated, and from the description of the coarse-to-fine architecture it seems that the coarse- and fine-level models are executed only once per pair of input spheres. So it seems possible for the deformation to cause folds on the surface. I wasn’t sure whether this is captured by the strain variables; they are all plotted on a log scale, so one has to assume there were no negative values.
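For reference, areal distortion in this literature is commonly reported as a per-triangle log2 area ratio; the sketch below is an assumption about that metric rather than the paper's definition. Because it uses unsigned triangle areas, folds are invisible to it, which is why the separate orientation check sketched under Review #1 would still be needed.

```python
import numpy as np

def triangle_areas(vertices, faces):
    """Unsigned Euclidean area of each triangle in the mesh."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def areal_distortion(orig_vertices, warped_vertices, faces):
    """Per-triangle log2 area ratio between the warped and original sphere.
    Unsigned areas are always positive, so folded triangles do not show up here."""
    return np.log2(triangle_areas(warped_vertices, faces) /
                   triangle_areas(orig_vertices, faces))
```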
-
The overall contribution is somewhat incremental.
-
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Ok
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
Overall this is a very nice paper; I would recommend addressing the question of diffeomorphism. Some additional specifics could be provided: for example, it wasn’t clear until seeing Figure 3 that the features being matched were sulcal depths, and the loss function could also be stated more explicitly.
It would also be interesting to see whether the approach works well for non-learning-based registration, i.e., optimizing the loss function directly for a given pair of inputs. After all, one really wants the best registration possible, even if it takes a bit more time (i.e. a few iterations of gradient descent) to compute the solution.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Good evaluation and valuable method for a real-world problem. However, enthusiasm tempered by the two weaknesses noted above.
- Number of papers in your stack
4
- What is the ranking of this paper in your review stack?
2
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
This paper presents a learning-based pipeline to co-register two cortical surfaces using spherical representations of the surfaces. Following a learned rigid alignment of the surfaces, the authors’ approach successively predicts displacement fields that eventually align one spherical surface to the other.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The authors address a weakness of a prior learning-based approach, i.e. the lack of filter equivariance, by incorporating a suitable solution from another paper. The authors’ design is creative and well tailored to the problem at hand. Furthermore, the authors provide sensible explanations of the components of their method and present evaluations that provide insight into different dimensions of their results, e.g. looking at distributions of strain.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
No major weaknesses.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The method’s architecture should be reproducible from the descriptions in the paper. The dataset used for training and evaluation is also public. However, to directly reproduce the method and results, the parameters of the method would need to be published.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2022/en/REVIEWER-GUIDELINES.html
I enjoyed reading this paper (see comments above). A few minor points:
- More details about the loss function would be desirable. While the authors state loss-function components for different parts of their pipeline, it is unclear whether all components (Rotation + DDR1 + DDR2) are trained jointly or independently. It would help to state the full loss function, including the cross-correlation term (perhaps in place of equations (1) and (2) if space is an issue).
- The formulas for J and R are identical (page 6)
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
7
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The authors’ approach is interesting, addresses shortcomings of prior work, and is well explained in this paper.
- Number of papers in your stack
4
- What is the ranking of this paper in your review stack?
2
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
A Deep-Discrete Learning Framework for Spherical Surface Registration
This submission tackles the task of cortical surface registration by reformulating it as a discrete labeling problem. To do so, a graph convolutional network with MoNet-based filters is used on the sphere. The evaluation is on the public HCP brain dataset. The method may not be considered strikingly novel, essentially reusing existing learning-based concepts in spherical surface analysis and discrete optimization as noted by the reviewers; however, the results indicate notable improvements in terms of areal distortion in the registration maps. This work therefore has potential impact on the analysis of brain surfaces. While both R1 and R2 had explicit questions on guarantees of diffeomorphism, the submission does not claim such a guarantee; this could be addressed in a future extended work. For all these reasons, and despite the weak reject of R1, the recommendation is towards acceptance.
- What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
1
Author Feedback
We thank the reviewers for their detailed reviews and helpful comments, particularly around the clarity of the paper. Following the comments of R3, we will improve the description of the loss function and correct the definitions of R and J. We plan to clarify that the components of the network (Rotation + DDR1 + DDR2) are each trained once, independently. We have subsequently tried training the components jointly; the quantitative results are the same.
In response to the points raised by both R1 and R2 regarding diffeomorphism: we completely agree that it is very important to show that the network learns smooth deformations. This is addressed in this paper by evaluating deformation strain. Diffeomorphism is not actually the end goal of this method, since in future we seek to address cases where differences in cortical topography would force topology-breaking mappings. Investigation of this is outside the scope of the current work. We do agree, however, that calculating the distribution of negative Jacobians would be informative.
Regarding R1 pts 2-4 and R2 pt 2 (comments relating to novelty and performance relative to classical methods), we recognise that the paper is strongly influenced by the classical discrete spherical framework of [22,23]; however, the extension to a learning-based framework enables several changes that significantly improve performance. Firstly, implementing the model as a deep network makes a much larger label space computationally feasible (from ~15-30 labels for MSM to up to 1000 for DDR). This both increases the resolution of the final warp and means that the network can search much further for common features in the source and target domains. As a result, DDR alignments achieve higher peak correlation than [22] and [23] whilst achieving smoother deformations. Qualitative results (supplementary figure) show that the accuracy of the final warp is much improved relative to [23], which is the closest classical method since it also implements only pairwise discrete regularisation. DDR also shows significant improvements in terms of smoothness and correspondence relative to S3Reg, despite the fact that S3Reg is a spherical implementation of diffeomorphic VoxelMorph (with scaling-and-squaring layers). Whilst DDR performs comparably to Spherical Demons, it is known that Demons does not extend well to multimodal data due to its use of Gauss-Newton optimisation of a sum-of-squared-differences similarity. Extending this network to multimodal data, and evaluating based on segmentation overlap, will form the basis of future work.
In response to R1’s question regarding the motivation for training a rotationally equivariant surface network (pt 1), we apologise that this was not made clear. The issue is quite separate from whether the network is initialised with a rotation or not. The motivation is purely to overcome the problems observed in S3Reg, where use of a non-rotationally-equivariant surface convolution results in convolutions that are not defined at the poles. This occurs because S3Reg filters are translated only along the sphere’s lines of latitude, meaning that filter orientation is flipped at the poles. To overcome this problem, S3Reg must learn 3 registration networks for 3 different spherical orientations, merging the results across networks while masking discontinuities. We show in this paper that such a solution is not necessary if one instead uses MoNet, which learns convolutional filters from a mixture of Gaussians and thereby has no fixed filter orientation. For the camera-ready paper, we will seek to rephrase away from an abstract discussion of rotational equivariance towards a clearer motivation along these lines. Note that good rigid initialisation is the basis of all the benchmarked methods. We would agree that discrete optimisation may have trouble approximating a rotation, but trying to do so would likely overparameterise the problem.
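To illustrate the point about orientation-free filters, below is a minimal PyTorch sketch of a MoNet-style Gaussian-mixture convolution: each learnable Gaussian kernel weights a vertex's neighbours by their pseudo-coordinates, so no fixed filter orientation is needed. The module name, the diagonal-covariance parameterisation and the neighbour/pseudo-coordinate inputs are illustrative assumptions, not the DDR code.

```python
import torch
import torch.nn as nn

class GaussianMixtureConv(nn.Module):
    """Minimal MoNet-style convolution: K learnable Gaussian kernels over neighbour
    pseudo-coordinates replace a fixed, oriented filter grid (a sketch only)."""

    def __init__(self, in_ch, out_ch, kernels=6, coord_dim=3):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(kernels, coord_dim) * 0.1)    # kernel centres
        self.log_sigma = nn.Parameter(torch.zeros(kernels, coord_dim))   # diagonal widths (log)
        self.lin = nn.Linear(in_ch * kernels, out_ch)

    def forward(self, feats, neigh_idx, pseudo):
        # feats: (V, in_ch) vertex features; neigh_idx: (V, N) neighbour indices;
        # pseudo: (V, N, coord_dim) pseudo-coordinates of each neighbour (e.g. local offsets)
        diff = pseudo.unsqueeze(2) - self.mu                                # (V, N, K, D)
        w = torch.exp(-0.5 * (diff / self.log_sigma.exp()).pow(2).sum(-1))  # (V, N, K)
        neigh_feats = feats[neigh_idx]                                      # (V, N, in_ch)
        agg = torch.einsum('vnk,vnc->vkc', w, neigh_feats)                  # (V, K, in_ch)
        return self.lin(agg.flatten(1))                                     # (V, out_ch)

# Example with placeholder sizes and random inputs (2562 vertices, 7 neighbours each):
V, N = 2562, 7
layer = GaussianMixtureConv(in_ch=1, out_ch=32)
feats = torch.randn(V, 1)                       # e.g. one scalar feature per vertex (sulcal depth)
neigh_idx = torch.randint(0, V, (V, N))         # placeholder neighbourhood indices
pseudo = torch.randn(V, N, 3)                   # placeholder pseudo-coordinates
out = layer(feats, neigh_idx, pseudo)           # (V, 32)
```

Because the Gaussians are defined directly over pseudo-coordinates rather than a latitude-aligned filter grid, the same layer behaves consistently at the poles, which is the property contrasted with S3Reg above.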