Authors
Md Asadullah Turja, Martin Styner, Guorong Wu
Abstract
Functional brain dynamics are supported by parallel and overlapping functional network modes that are associated with specific neural circuits. Decomposing these network modes from fMRI data and finding their temporal characteristics is challenging due to their time-varying nature and the non-linearity of the functional dynamics. Dynamic Mode Decomposition (DMD) algorithms have become quite popular for solving this decomposition problem in recent years. In this work, we apply GraphDMD, an extension of DMD for network data, to extract the dynamic network modes and their temporal characteristics from fMRI time series in an interpretable manner. GraphDMD, however, regards the underlying system as a linear dynamical system, which is sub-optimal for extracting the network modes from non-linear functional data. We therefore develop a generalized version of the GraphDMD algorithm, DeepGraphDMD, applicable to arbitrary non-linear graph dynamical systems. DeepGraphDMD is an autoencoder-based deep learning model that learns Koopman eigenfunctions for graph data and embeds the non-linear graph dynamics into a latent linear space. We show the effectiveness of our method on both simulated data and the HCP resting-state fMRI data. In the HCP data, DeepGraphDMD provides novel insights into cognitive brain functions by discovering two major network modes related to fluid and crystallized intelligence.
Link to paper
DOI: https://doi.org/10.1007/978-3-031-43993-3_35
SharedIt: https://rdcu.be/dnwNA
Link to the code repository
https://github.com/mturja-vf-ic-bd/DeepGraphDMD
Link to the dataset(s)
https://db.humanconnectome.org/data/projects/HCP_1200
Reviews
Review #3
- Please describe the contribution of the paper
The paper proposes DeepGraphDMD, an extension of GraphDMD in which Koopman autoencoders are used to nonlinearly lift the dynamic graph into a space where the dynamics are linear, thereby accounting for nonlinear network dynamics such as rapid transients and synchronization effects. Results on simulations and real MRI data show that DeepGraphDMD better recovers the dynamic modes and achieves better prediction of behavioral measures.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The extension of GraphDMD to DeepGraphDMD is neat and timely and the motivation is well substantiated.
- The write-up is clear and understandable, the summary of GraphDMD helps orient the reader and naturally leads into the proposed contribution.
- The graphical representation in Fig. 2 gives a sense of what can be achieved using this method.
- The paper contains a fair amount of details on motivation, methods, task details, and reproducibility information.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Incomplete literature review: there is a large literature on Koopman autoencoders and their variants. The methodological contribution of the paper is not clearly described. Do the authors claim that the combination of Koopman autoencoders and graph dynamics did not exist before? Or is the application to fMRI data the main novelty? A short contributions section describing what is new and what existed before would be helpful. Relevant papers can be cited either in the contributions section or in the intro.
- Results on real datasets are minimal. There is only one table with behavioral measures included on real data. Some visualizations on real data (specifically videos where the edges of the graph change over time, which I believe is achievable with this method since it provides a spatiotemporal decomposition) would improve the significance of the paper.
- Comparisons are incomplete. There is a large body of work in the neuroscience literature for decomposing data into (time-varying) spatial and temporal modes. Some of these methods are included below, but this is by no means an exhaustive list [1-6]. I would like to see comparisons against these methods and a short description of the benefits of each method.
- The methods description is rather lengthy. My recommendation would be to condense the methods and present more results on real-world datasets, as well as comparisons against some of the papers below.
- Since the Koopman operator can be a generic function applied to the graph adjacency matrix, how restrictive is the assumption that, instead of edge embeddings (which make it challenging to preserve edge identities), one can embed the nodes and create graphs in the embedding space? To me this is very similar to state-space models with linear dynamics and nonlinear observations, so in that sense similar methods existed before (such as [6]).
- There are some hyperparameters in the model; how are they chosen? What kind of cross-validation is performed to help choose the hyperparameters? Examples include alpha, beta, the window length s, the number of modes, etc.
[1] Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems
[2] Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity in C. elegans
[3] Nonparametric Bayesian Learning of Switching Linear Dynamical Systems
[4] LFADS - Latent Factor Analysis via Dynamical Systems
[5] Structured Inference Networks for Nonlinear State Space Models
[6] Linear dynamical neural population models through nonlinear embeddings
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
The information included seems sufficient for reproducing the results.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
Please include more comparisons against the papers I included above. Please clarify what contributions are yours and what existed before. Please include more results and visualizations on real datasets. Please include a discussion on choosing hyperparameters.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The idea is interesting; however, it is unclear which part of the contribution is novel. More results and visualizations on real datasets, compared against other methods, would be very helpful.
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #1
- Please describe the contribution of the paper
This paper extends GraphDMD to non-linear embeddings using an autoencoder-based neural network approach.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Clear presentations of the problems and their method.
- Good demonstration through simulation and real data and comparison with other commonly used methods.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The simulation does not show the DeepGraphDMD result.
- There is no visual comparison of the modes identified using DeepGraphDMD against those identified by other methods.
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Good
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
The paper is well written, with a clear presentation of the problem to be solved, how it is tackled, and solid results showing the advantages of the non-linear embedding. A few suggestions/comments that may further improve the paper are listed below for the authors' consideration.
- It would be great if the authors could show some of the modes plotted on a brain surface for visualization. Do some of the modes correspond to well-known brain networks, e.g. the DMN? How are they similar to/different from networks identified using other commonly used methods, such as PCA/ICA/tensor decomposition/the original DMD, etc.?
- The simulation should also show the DeepGraphDMD result so that the advantage of the non-linear embedding can be seen.
- I am curious why the authors do not work on the voxels/vertices directly instead of the connectivity, i.e., why not model the time-varying patterns in the original time series rather than the patterns in the correlations?
- The spatial dimension is the length of the vectorized upper or lower triangle of the 50x50 ROI correlations, right? So that should be 50*49/2 = 1225? The network structure shown in the supplementary material says 1220.
- The ROIs used in this study were based on group ICA results. Does this parcellation prior bias the follow-up analysis? The purpose of DMD-based methods is, after all, to find time-varying representations of the signal without imposing independence/orthogonality constraints.
- Related to the point above, several works have been proposed recently that try to find a natural representation of the fMRI data (network identification) without imposing strong constraints. It would be interesting to see how they perform when compared with DMD-based methods. A few are listed below for the authors' information:
10.1016/j.neuroimage.2015.01.013
10.1016/j.neuroimage.2020.117226
10.1016/j.neuroimage.2020.117615
10.1016/j.neuroimage.2023.119944
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
8
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
A strong paper all around, with potential improvements that are not critical.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
Authors present a generalized version of GraphDMD (Dynamic Mode Decomposition) that can be applied to non-linear graph dynamical systems. This is achieved using an autoencoder-based architecture as follows: (1) First, ROI BOLD signals are embedded into a latent space independently, and then the Pearson correlation is computed to obtain the Koopman eigenfunctions. A special loss term is designed to force the latent system to be linear. Embedding ROIs separately also ensures that edge identities are preserved through the process. (2) To ensure that the inverse eigensystem is well-posed, a sparse Koopman operator is proposed in which an edge is regressed using only the edges that share a common endpoint with it. Authors compare the proposed method with GraphDMD and other standard dimension reduction techniques using both simulated and real-world data. Regression results are provided for four different behavioral measures related to fluid and crystallized intelligence.
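For readers less familiar with the Koopman-autoencoder idea in step (1), a minimal sketch of that step is given below. It is a simplification with hypothetical shapes and variable names, not the authors' implementation: each ROI's windowed BOLD signal is encoded independently by a shared encoder, Pearson correlations of the latent signals form latent graphs, and a DMD-style linearity loss encourages the latent graph sequence to follow a single linear operator.

```python
import torch
import torch.nn as nn

class NodeKoopmanAE(nn.Module):
    """Sketch of a per-ROI Koopman autoencoder; not the authors' architecture."""
    def __init__(self, win_len: int, latent_dim: int):
        super().__init__()
        # One shared per-ROI encoder/decoder, so node (and hence edge) identities
        # are preserved when correlations are computed in the latent space.
        self.enc = nn.Sequential(nn.Linear(win_len, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, win_len))

    def forward(self, x):                         # x: (num_windows, num_rois, win_len)
        z = self.enc(x)                           # latent signal per ROI and window
        x_hat = self.dec(z)                       # reconstruction of the BOLD windows
        z = z - z.mean(dim=-1, keepdim=True)      # Pearson correlation in latent space
        z = z / (z.norm(dim=-1, keepdim=True) + 1e-8)
        latent_graphs = z @ z.transpose(-1, -2)   # (num_windows, num_rois, num_rois)
        return x_hat, latent_graphs

def linearity_loss(latent_graphs):
    """Encourage A_{t+1} ~ K A_t for a single linear operator K (DMD-style)."""
    g = latent_graphs.flatten(1)                  # vectorize each latent graph
    X, Y = g[:-1].T, g[1:].T                      # snapshot pairs, columns = time steps
    K = Y @ torch.linalg.pinv(X)                  # closed-form least-squares estimate
    return ((Y - K @ X) ** 2).mean()
```

In training, the total objective would plausibly combine the reconstruction error with this term, e.g. `loss = mse(x_hat, x) + alpha * linearity_loss(latent_graphs)`, with `alpha` playing the role of a weighting hyperparameter like the one mentioned in the author feedback.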
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The proposed method is a clever way to generalize GraphDMD to non-linear systems. It solves an important problem in dynamic network analysis. The results show that considering non-linearity does lead to robust, less noisy DMs and significant improvement in regression performance over existing approaches. The presentation is clear and the claims are well-supported with strong experimental results using both simulated and real-world data.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
While the results are strong, a discussion of their significance and a more verbose discussion/interpretation (for someone who is interested in fMRI analysis but not familiar with the math!) is lacking. Visualizations of the identified dynamic modes, particularly those that are strongly correlated with the behavioral measures, are warranted.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
All the details required to reproduce the results are provided. I noticed an anonymised github link and assume that the source code will also be made available once the manuscript is published.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
This is a very well-presented paper and I enjoyed reading it. But it can be improved even further:
- Please include a more verbose / qualitative discussion of the results, why are they significant from the perspective of the end user of the proposed method?
- Please include better visualizations of the identified DMs, at least the DMs that are most correlated with the behavioral measures.
- How sensitive is the proposed method to the window size and step size? If possible, include results with a few different window-size and step-size combinations.
- Please explain the post-processing of DMs in more detail. Why did you choose spherical clustering? How is the alignment of DMs across subjects performed?
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
6
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
- The paper proposes a generalization of GraphDMD to non-linear systems. The approach is novel and the various algorithmic choices are well-justified. The content presentation is very good. Results show significant improvement over the baseline and other relevant methods and are backed by experiments using simulated as well as real-world data.
- The results and discussion only cover the mathematical / quantitative aspects. To be appealing to the broad audience at MICCAI, the authors need to include a more verbose discussion. Why is the proposed method important and the results significant from the end-user’s / domain-expert’s / non-mathematician’s perspective?
- I would consider the discussion part a minor weakness of the paper that can be easily fixed.
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
This paper extends the previous GraphDMD to DeepGraphDMD for non-linear embeddings using an autoencoder-based architecture. In general, the key strength is that this is a well-organized work with sufficient novelty and clarity, as recognized by all three reviewers. Although there are several weaknesses, such as unclear technical details and parameters, a lack of simulation and visualization results, and insufficient comparisons with other methods, the meta-reviewer agrees with all three reviewers that this paper has enough technical novelty as well as promising applications. The authors should consider integrating all reviewer comments into the final paper.
Author Feedback
We sincerely appreciate the valuable feedback provided by all the reviewers on our paper. We firmly believe incorporating these recommendations will significantly strengthen our paper and make it more robust. In the following response, we aim to summarize the major points raised by the reviewers and provide clear and concise answers to address any misunderstandings or confusion that may have arisen.
Comparison with other methods: We thank the reviewers for bringing to our attention the relevant articles (those pointed out by Reviewer #1 and Reviewer #3) modeling spatiotemporal data. While we were aware of most of these methods, we could not include them in the current draft due to space limitations. However, we acknowledge the importance of conducting a comprehensive comparison with these methods, and we intend to address this in a future journal paper. Below, we briefly summarize how our method compares against some of the linear and non-linear state-space models pointed out by Reviewer #3: Articles [1-3] propose switching linear dynamical systems (SLDS) that tackle the nonlinearity of spatiotemporal data by a piecewise linear approximation. While these models offer interpretability, their shallow architecture (linear emission and transition distributions) limits their generalizability to nonlinear systems. Furthermore, the main novelty of these SLDS-based models lies in the "switching" aspect. In a similar vein, one can think of "Switching-GraphDMD" or "Switching-DeepGraphDMD" to include state transitions in our model (a future avenue to explore).
On the other hand, the methods in [4-6] replace either the emission distribution [6] (fLDS) or both the emission and transition distributions [4-5] (LFADS and DMM) with a deep neural network. Although these models have more representational capability than [1-3], their latent states are not interpretable. Our method is, to our knowledge, the first that has both the representation power of deep neural networks and interpretability in the latent space.
- How does our method differ from Article [6]? Our model differs from it in two important ways: (1) interpretability in the latent space, as discussed above; (2) edge dynamics instead of node dynamics. Learning the Koopman operator directly on the learned node eigenfunctions is restrictive, as it does not capture how the interactions (i.e., the edges) between the nodes covary. Hence, we learn the Koopman operator on the edge space.
In our final version, we will summarize these comparisons and include them in the introduction section of our paper.
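To make the edge-space Koopman point above concrete, here is a toy DMD-style sketch, with hypothetical array names; it is a simplification for illustration, not GraphDMD or the authors' implementation. A sequence of windowed correlation matrices is vectorized into edge vectors, and a single linear operator over the edge space is fitted by least squares.

```python
import numpy as np

def edge_space_koopman(corr_windows):
    """corr_windows: (T, N, N) sequence of sliding-window correlation matrices."""
    N = corr_windows.shape[1]
    iu = np.triu_indices(N, k=1)                  # keep each undirected edge once
    E = np.stack([A[iu] for A in corr_windows])   # (T, N*(N-1)/2) edge vectors
    X, Y = E[:-1].T, E[1:].T                      # snapshot pairs: Y ~ K @ X
    K = Y @ np.linalg.pinv(X)                     # least-squares Koopman estimate
    eigvals, modes = np.linalg.eig(K)             # mode frequencies/decay and shapes
    return K, eigvals, modes
```

Estimating the operator on edge vectors rather than on node embeddings is what lets the decomposition describe how pairs of regions covary over time, which is the distinction from [6] highlighted above.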
Choice of Hyperparameters: We thank the reviewers for bringing to our attention the missing details regarding the selection of hyperparameters in our draft. We will include the following details in our final version: For training the DeepGraphDMD architecture, we choose alpha, beta, and the learning rate using grid search on the validation set. We experimented with window sizes s = 16, 32, and 64, all of which converge to similar loss values on the validation set. For cross-validation, we use a 5-fold cross-validation scheme. For the post-processing, we applied five different clustering approaches: Gaussian Mixture Model, KMeans, Spherical KMeans, DBSCAN, and KMedoids. We chose Spherical KMeans because it produces higher silhouette scores (and thus better-quality clusters) across all frequency bins.
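As an illustration of this model-selection step, a minimal sketch of comparing candidate clusterings of the extracted dynamic modes by silhouette score is given below. Function and variable names are hypothetical, the number of clusters is a placeholder, and Spherical KMeans is approximated by L2-normalizing the mode vectors before standard KMeans, so this is not the authors' exact pipeline.

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import normalize

def pick_clustering(mode_vectors, n_clusters=8):
    """mode_vectors: (n_samples, n_features) dynamic-mode vectors pooled over subjects."""
    candidates = {
        "kmeans": KMeans(n_clusters=n_clusters, n_init=10).fit_predict(mode_vectors),
        # Spherical KMeans approximated by L2-normalizing before standard KMeans.
        "spherical_kmeans": KMeans(n_clusters=n_clusters, n_init=10)
                                .fit_predict(normalize(mode_vectors)),
        "gmm": GaussianMixture(n_components=n_clusters).fit_predict(mode_vectors),
        "dbscan": DBSCAN(eps=0.5, min_samples=5).fit_predict(mode_vectors),
        # KMedoids (e.g. from scikit-learn-extra) could be added analogously.
    }
    scores = {name: silhouette_score(mode_vectors, labels)
              for name, labels in candidates.items()
              if len(set(labels)) > 1}            # silhouette needs >= 2 clusters
    best = max(scores, key=scores.get)            # highest silhouette wins
    return best, scores
```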
Visualization: The activation map for each brain region is shown on the brain surface along the perimeter of the circle plots in Figure 2(b-c). The brain regions are grouped by their common resting-state networks, and their interactions are shown using edge bundles. Due to space constraints, we only show the two DMD modes correlated with the behavioral measures.
Discussion of the results: We will enrich the discussion in Section 4.2 to elaborate on the results and explain what they mean from an end-user perspective.