Re: [RESEAUX SOCIAUX] [TVB] Pre- and Post-FMRI processing problems for E-I DMF


WOODMAN Michael

Jan 9, 2024, 4:54:48 AM
to TVB Users

hi,


A few questions come to mind:  what is the correlation between your SC & empirical FC?  Do you have any references in the literature doing simulations with the Schaefer atlas on which you are basing your work?  Most importantly, what kind of effect are you trying to model?


cheers,

Marmaduke 


From: tvb-...@googlegroups.com <tvb-...@googlegroups.com> on behalf of Szymon Tyras <szymon...@gmail.com>
Sent: Wednesday, December 27, 2023 11:52:35 AM
To: TVB Users
Subject: [RESEAUX SOCIAUX] [TVB] Pre- and Post-FMRI processing problems for E-I DMF
 
Hello everyone,  

I'm still working through my first whole-brain modeling project and facing some challenges.

My project involves a version of the Deco et al. (2014) Excitation-Inhibition Dynamic Mean Field (DMF) model. I'm using resting-state fMRI data from healthy subjects from the UCLA dataset (available at https://openfmri.org/s3-browser/?prefix=ds000030), preprocessed with the fMRIPrep pipeline. The model uses average structural connectivity weights derived from a large cohort of healthy subjects, parcellated according to the Schaefer 100 atlas, which is available by default in the software I used.

However, I'm encountering an issue with low correlation values between the simulated and empirical FC matrices. The correlations are around 0.05 for the upper triangles of the matrices, excluding the diagonals, and this varies with the model's stochasticity. I've been adjusting the default parameters for healthy subjects by ±100%, which affects the matrices, but the correlations remain low at almost every point (the maximum I got was about 0.15 at -40% of the default G, though I believe most of that gain was due to stochasticity). I have tested this for several subjects.

As this is my first project, I suspect that the issue might be related to the processing steps for the empirical data or to the post-simulation processing of the simulated data, but I've been unable to pinpoint the exact problem. I would greatly appreciate it if someone experienced could review the following code and FC plots and perhaps suggest a direction for diagnosing and addressing this issue. I want to rule out pre- and post-processing problems before digging into problems with the simulations, which are set up in a very standard way.

Thank you in advance, 
Respectfully,
Szymon Tyras 

#Python Code:
import numpy as np
import nibabel as nib
from nilearn import datasets, input_data
from nilearn.connectome import ConnectivityMeasure
from nilearn import signal

# Empirical part
# Load the fMRI data
fmri_img = nib.load(fmri_file) # fmri_file is the fMRIPrep output "MNI152NLin2009cAsym_preproc.nii.gz" from the dataset

# Load the Schaefer atlas
atlas = datasets.fetch_atlas_schaefer_2018(n_rois=100, yeo_networks=7) # Same as structural weights data
atlas_filename = atlas.maps

# Parcellate the signal
masker = input_data.NiftiLabelsMasker(labels_img=atlas_filename, standardize=False)
BOLD_E = masker.fit_transform(fmri_img) # BOLD_E has shape 152, 100

# Filtering
low_freq, high_freq = 0.01, 0.1
BOLD_E = signal.clean(BOLD_E, low_pass=high_freq, high_pass=low_freq, t_r=2)

# Create a connectivity measure object with correlation metric
connectivity_measure = ConnectivityMeasure(kind='correlation')
# Compute the connectivity matrix
connectivity_matrix_e = connectivity_measure.fit_transform([BOLD_E])[0]

#Simulation part
# Deleting first and last 10 seconds, filtering
trans = 5  # 5 samples at TR = 2 s, i.e. 10 s trimmed from each end
low_freq, high_freq = 0.01, 0.1
BOLD_S = BOLD_S[:, trans:-trans] # BOLD_S is the simulation output, shape (100 regions, 162 timepoints)
BOLD_S = BOLD_S.T  # transpose to (time, regions), as nilearn expects
BOLD_S = signal.clean(BOLD_S, low_pass=high_freq, high_pass=low_freq, t_r=2)
connectivity_matrix_s = connectivity_measure.fit_transform([BOLD_S])[0]  # use the filtered signal, not the raw one
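
For completeness, a minimal sketch (not part of the original script) of the fit measure described above: the Pearson correlation between the upper triangles of the empirical and simulated FC matrices, excluding the diagonal.

iu = np.triu_indices_from(connectivity_matrix_e, k=1)   # upper triangle, diagonal excluded
fit_r = np.corrcoef(connectivity_matrix_e[iu], connectivity_matrix_s[iu])[0, 1]
print("sFC vs eFC fit (upper triangle): r = %.3f" % fit_r)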


Michael Schirner (michael.schirner@bih-charite.de)

Jan 9, 2024, 5:21:16 AM
to TVB Users
Hi Szymon,

from my perspective the empirical FC looks like there is a strong global signal across many regions: very high global synchronisation between many nodes, which may indicate artifacts or fMRI processing problems (for example imaging artifacts like movement, or preprocessing errors like registration errors, etc.).

The simulated FC looks like it can be expected for brain simulations where no heterogeneous regional excitation/inhibition ratios are used. In that case the fit with empirical FC will be strongly dependent on the topology of the used SC. This problem can be mitigated by tuning E/I-balance of the network.

Since you mention that you tested the G parameter in the range +/- 100 % of default G: G is highly dependent on the specific SC, so even outside of that range there may be plausible settings -- depending for example on how large the SC is (how many regions), its node strength distribution, its normalization, etc.
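
For instance, a minimal sketch of checking those SC properties before choosing a range for G (the file name and the (100, 100) shape are assumptions, not from the original post):

import numpy as np

SC = np.loadtxt("sc_weights.txt")          # hypothetical path to the averaged SC
strength = SC.sum(axis=1)                  # node strength distribution
print("node strength: min %.3g, median %.3g, max %.3g"
      % (strength.min(), np.median(strength), strength.max()))

# One common convention is to scale the matrix so the maximum weight is 1;
# the plausible range for G shifts with whichever normalisation is used.
SC_norm = SC / SC.max()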

Hope this helps!

Best,
Michael

Szymon Tyras

Jan 18, 2024, 2:52:15 AM
to tvb-...@googlegroups.com
Hello Michael,

Thank you for your response; it was very helpful. I rechecked, and indeed I now see that my simulations are running correctly. You were right that the issue lies in my empirical signal analysis, as the FC appears to be unusually highly correlated; I am currently working on finding out what is wrong with it.

Thank you for your assistance,
Respectfully,
Szymon Tyras

Hello Marmaduke,

As mentioned earlier, I identified that the issue lies solely in the empirical analysis. I came across a source that used the Schaefer atlas in a similar context to mine, but with 1000 regions. My aim is to compare optimal simulation parameters across groups, so my modeling goal is simply to get as close a fit as possible at the individual level.

Thank you for your assistance,
Respectfully,
Szymon Tyras

Szymon Tyras

Jan 18, 2024, 12:52:49 PM
to TVB Users
Hello everyone,  

I have attempted to address the problems mentioned earlier regarding the empirical analyses. The matrices appear to be improved, but as I'm not very experienced, I'm unsure whether they are as they should be. I would greatly appreciate it if someone could take a look. Please note that the data are from individual, healthy subjects in a resting state, filtered between 0.01-0.1 Hz, with only the six motion parameters regressed out (the global signal was not regressed).
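
For reference, a rough sketch of that confound regression step, assuming standard fMRIPrep output (the confounds file name and column names below are assumptions, and BOLD_E is the parcellated time series from the earlier script):

import pandas as pd
from nilearn import signal

conf = pd.read_csv("sub-XX_task-rest_desc-confounds_timeseries.tsv", sep="\t")
motion = conf[["trans_x", "trans_y", "trans_z",
               "rot_x", "rot_y", "rot_z"]].fillna(0).to_numpy()

# Regress out the six motion parameters and band-pass in a single call.
BOLD_E = signal.clean(BOLD_E, confounds=motion,
                      low_pass=0.1, high_pass=0.01, t_r=2)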

Thank you again for your assistance. I apologize for sending two messages in a row. 

 Respectfully, 
Szymon Tyras
Attachments: 1.png, 2.png, 3.png

Randy McIntosh

Jan 18, 2024, 1:16:59 PM
to tvb-...@googlegroups.com

The matrices look great!


Szymon Tyras

Jan 19, 2024, 1:16:37 AM
to TVB Users
Dear Randy,

That's great news. Thank you so much for your help (again).

Respectfully, 
Szymon Tyras

Szymon Tyras

Jan 27, 2024, 8:47:32 AM
to TVB Users
Hello again,  

I hope I'm not overloading this group with my queries, but I've encountered another challenge in my analysis. Once again, I would greatly appreciate any insights on the following issue:  

As I mentioned earlier, I am using the reduced Wong-Wang model and optimizing G along with two local parameters. My work involves resting-state fMRI of healthy subjects at the single-subject level, utilizing an average healthy SC obtained from a group not included in the test. I am fitting to both FCD and FC. The correlation for SC to eFC is around 0.2, which seems expected. After fitting, I am receiving values ranging from 0.05-0.1 for KS (FCD) and about 0.2-0.25 for r (FC) (using both grid search and the Bayesian Optimization Algorithm), which aligns with my expectations given the setup. However, I've noticed that my results are significantly influenced by the initial conditions. For instance, setting the same seed for all simulations yields highly stable outputs, but allowing random initialization can result in fits varying from 0.02 to 0.3 (r) for the same parameter set. Obviously, some influence from initial conditions is expected due to the relatively poor fit from the lack of personalized SC and homogeneous local parameters, but this degree of variation seems excessive.  
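
(For clarity, a minimal sketch of the FCD/KS fit measure referred to here, under assumed window settings; BOLD_E and BOLD_S stand for the band-passed empirical and simulated (time, regions) arrays:)

import numpy as np
from scipy.stats import ks_2samp

def fcd(ts, win=30, step=5):
    # Sliding-window FCD: ts is (time, regions); each row of `wins` is the
    # vectorised upper triangle of one window-wise FC matrix.
    iu = np.triu_indices(ts.shape[1], k=1)
    wins = [np.corrcoef(ts[s:s + win].T)[iu]
            for s in range(0, ts.shape[0] - win + 1, step)]
    return np.corrcoef(np.array(wins))

fcd_e, fcd_s = fcd(BOLD_E), fcd(BOLD_S)
ks_stat, _ = ks_2samp(fcd_e[np.triu_indices_from(fcd_e, k=1)],
                      fcd_s[np.triu_indices_from(fcd_s, k=1)])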

Can anyone provide insights into what's happening? Is this behavior expected? Moreover, how can I mitigate this to still achieve somewhat stable results? I have tried averaging over 10 runs for each parameter set, but it doesn't seem to help much.  

Thank you all for any assistance.
  
Best regards, 
Szymon

Michael Schirner (michael.schirner@bih-charite.de)

Jan 29, 2024, 3:24:08 AM
to TVB Users
Hi Szymon,

a fit with a Pearson correlation coefficient of 0.2 is very low and not expected. It indicates there is hardly any correspondence between the simulated and the empirical data. Together with the observed high variability of the fit, this may indicate that the simulated activity is driven by noise, rather than SC.

Could G be too low?

A quick check could be to increase G from very low to very high. With very low settings there is no correlation between nodes. Starting from zero synchronisation between nodes, as G increases, correlation between nodes (synchronisation) should start to increase until they become fully synchronised. I would expect that in between these two points the optimal value for G should be located.

Instead of G being too low, it could also be that the noise amplitude is too high. Here one could experiment with a range of noise amplitudes to establish a good relative strength with respect to the SC.
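
For orientation, a minimal sketch of where G and the noise amplitude enter a TVB configuration of the Deco et al. (2014) E-I model; this assumes the simulations are run with TVB's ReducedWongWangExcInh and uses TVB's bundled demo connectome as a placeholder for the actual SC:

import numpy as np
from tvb.simulator.lab import (models, connectivity, coupling,
                               integrators, noise, monitors, simulator)

conn = connectivity.Connectivity.from_file()       # demo connectome (placeholder SC)
sim = simulator.Simulator(
    model=models.ReducedWongWangExcInh(),          # Deco et al. (2014) E-I DMF
    connectivity=conn,
    coupling=coupling.Linear(a=np.array([0.5])),   # G, the global coupling to sweep
    integrator=integrators.HeunStochastic(
        noise=noise.Additive(nsig=np.array([1e-5]))),  # noise amplitude to vary
    monitors=(monitors.Bold(period=2000.0),),      # BOLD sampled every 2 s
    simulation_length=8 * 60e3)                    # 8 minutes, in ms
sim.configure()
(bold_time, bold_data), = sim.run()                # bold_data: (time, svar, nodes, modes)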

Best,
Micha

Szymon Tyras

Jan 29, 2024, 2:43:14 PM
to TVB Users
Dear Michael,  

Thank you once more for your response.  

I had thought that a Pearson correlation coefficient of 0.2 between sFC and eFC was to be expected, for several reasons: the correlation is computed over the upper triangle of the matrix excluding the diagonal, I am optimizing only two homogeneous parameters, and I am not using a personalized SC. In the literature, even in scenarios with a limited number of variables and personalized SC, optimization results seem to hover around 0.3 (see, for example, https://github.com/the-virtual-brain/tvb-educase-braintumor/blob/master/TVB_braintumor.ipynb).

Observationally, complex connectivity patterns do appear to emerge within the anticipated range of G. For lower values the FC is all blue, shifting to all red at higher values. However, the correlations between sFC and eFC are quite weak and seem largely influenced by noise, although they do improve slightly within the expected parameter ranges, though not by much. Notably, I have obtained very good fits for FCD (K-S about 0.15) in these ranges. My interpretation is that static FC is more strongly driven by SC than FCD is, resulting in low r values due to the absence of individualized SC. This is partially corroborated by the lower r values between sFC and eFC in subjects whose SC is less correlated with eFC. Do you think this is a plausible explanation, or might I be inadvertently fitting noise in the FCDs?

Thank you again for all your help.  
Best regards, 
Szymon Tyras

Michael Schirner (michael.schirner@bih-charite.de)

Jan 30, 2024, 7:08:55 AM
to TVB Users
Hi Szymon,

correlations in the literature are usually higher; a value of 0.2 means that the two data sets are almost fully uncorrelated, indicating that the model does not explain the data well. I don't think the problem is that the SC is not subject specific, but without further information it's hard for me to diagnose. Since the expected behavior when tuning G from low to high seems to be present, the only other explanation I can come up with is that there is a problem with the SC and/or FC, as they are the only elements that were exchanged compared to previous brain modelling studies.

Best,
Michael

Randy McIntosh

Jan 30, 2024, 1:51:32 PM
to tvb-...@googlegroups.com

Interesting observations and thoughts. @Szymon, have you tried a parameter space map varying noise amplitude and G, assessing the correlation with eFC for each parameter combination? If the parameter space is flat, then there could indeed be an issue with the input data. If there is a maximum, that would be good news. We have tried parameter space mapping using multiple fit measures for "static" and "dynamic" FC and usually do get a maximum in parameter space.
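
A sketch of such a parameter space map, assuming a hypothetical simulate_bold(G, sigma) wrapper around the existing simulation code that returns a (time, regions) BOLD array, with connectivity_matrix_e from the earlier preprocessing script:

import numpy as np

def fc_fit(sim_bold, emp_fc):
    # Upper-triangle Pearson correlation between simulated and empirical FC.
    sim_fc = np.corrcoef(sim_bold.T)
    iu = np.triu_indices_from(emp_fc, k=1)
    return np.corrcoef(emp_fc[iu], sim_fc[iu])[0, 1]

G_grid = np.linspace(0.0, 3.0, 13)         # assumed range; adapt to the SC
sigma_grid = [1e-6, 1e-5, 1e-4, 1e-3]      # assumed noise amplitudes
fit_map = np.array([[fc_fit(simulate_bold(G, s), connectivity_matrix_e)
                     for s in sigma_grid] for G in G_grid])
best = np.unravel_index(np.nanargmax(fit_map), fit_map.shape)
print("best r = %.3f at G = %.2f, sigma = %g"
      % (fit_map[best], G_grid[best[0]], sigma_grid[best[1]]))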

Szymon Tyras

Feb 5, 2024, 12:07:29 PM
to TVB Users
Hello, 

I want to extend my gratitude to Michael and Randy for their insightful responses. 

Apologies for my delayed reply; I've been exploring various options to provide a more comprehensive analysis in return.  

To address the question regarding the parameter space: yes, there are "hot spots" roughly where I would expect them.

Following Michael's advice about a potential issue with the empirical FC and SC, I used average SC and FC matrices derived from several subjects, taken from a published study based on the 68-region DK atlas. The outcomes were more intriguing than I anticipated, and I would greatly appreciate any insights or comments:

The correlation between SC and eFC is approximately 0.35. Employing FIC and adjusting G, my best result was r = 0.62 for eFC versus sFC. Nonetheless, the simulated matrix exhibits exceedingly high correlation values. I'm attaching histograms of both eFC and the best sFC, along with a plot comparing eFC and sFC on the same scale for direct comparison. I attempted to scale down the SC, but this resulted in poorer fits than the original. Additionally, I'm including a histogram of the correlation coefficients for G=0 with FIC, which also appear notably high.

I'm grateful for all the assistance provided thus far. 
 Best regards, 
Szymon Tyras
Attachments: EFC.png, 0SFC.png, SFC.png, FC:EFC.png

Michael Schirner (michael.schirner@bih-charite.de)

Feb 6, 2024, 2:26:02 AM
to TVB Users
Hi Szymon,

happy to provide input!

Obtaining FC correlations for G=0 is indeed quite surprising; this shouldn't be the case, as the node dynamics should be uncoupled when G is zero.

Two explanations come to mind:
1. Are there initial transients in the time series that were not removed and that drive the correlation? When starting from initial conditions, the nodes quickly converge to the nearest attractor at the beginning of the simulation; the resulting sharp transient in the time series would then yield high correlations. To solve this problem, the first x seconds of the time series need to be removed before computing the FC.
2. Ensure that the noise that drives each node is truly uncorrelated: the random numbers that drive each node should be independently drawn. Are the noise time series that drive each node uncorrelated?

If both explanations can be ruled out there must be another form of coupling or exchange of information between nodes -- probably due to the specific implementation -- which needs to be debugged.
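
A sketch of the first check, with hypothetical variable names (BOLD_S as the (regions, time) output of a G=0 run, TR = 2 s): trim a generous initial transient before computing FC and confirm the off-diagonal correlations are centred near zero.

import numpy as np

tr = 2.0                                   # repetition time in seconds
n_drop = int(80 / tr)                      # drop the first 80 s (40 samples)
bold = BOLD_S[:, n_drop:]                  # keep the post-transient part only
fc0 = np.corrcoef(bold)                    # (regions, regions) FC of the G=0 run
off_diag = fc0[np.triu_indices_from(fc0, k=1)]
print("G=0 FC off-diagonal: mean %.3f, std %.3f" % (off_diag.mean(), off_diag.std()))
# With uncorrelated per-node noise and the transient removed, this
# distribution should be roughly symmetric around zero.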

Best,
Michael


Szymon Tyras

Feb 12, 2024, 1:18:32 PM
to TVB Users
Dear Michael,  

Thank you very much for your response. It appears that the issue was indeed related to initial transients. Initially I removed only the first 5 timepoints, which turned out to be insufficient; after your suggestion I increased this to 40, and the problem appears to be resolved. Following this adjustment, for G=0 I now get a normal distribution of correlations centered around 0.

I truly appreciate your guidance and hope I haven't overwhelmed the group with messages. Thank you once again.
Best regards, 
Szymon