Hello,
I am just starting to learn DTI-TK and would like to use it prior to running TBSS. Before doing this, however, I was hoping to get confirmation about the steps I am to follow. Here is what I have (questions inline):
1. Pre-process data using TORTOISE (motion correction, eddy current correction, etc.)
2. Confirm the diffusivity units are DTI-TK compatible, check for outliers, and make sure all volumes share a common voxel space
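For the check in step 2, my understanding from the DTI-TK documentation is that diffusivity should be in units of 10^-3 mm^2/s, so that brain-tissue tensor norms come out on the order of 1. Here is a sketch of how I plan to check this (the filename tensor.nii.gz is just a placeholder; TVtool is from DTI-TK, fslstats from FSL — please correct me if I have this wrong):

```shell
# compute the tensor norm map with DTI-TK (placeholder filename)
TVtool -in tensor.nii.gz -norm

# inspect the range and mean of the norm image with FSL;
# brain-tissue values far from ~1 would suggest a unit problem
fslstats tensor_norm.nii.gz -R -M
```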
3. Spatial Normalization using DTI-TK
a. Create a text file listing all subject DTI volumes
b. Bootstrap the initial template mean
c. Affine alignment
d. Deformable alignment with the final refined template estimate from affine alignment
**A group-specific template is created from this step, and the data are fitted (normalized) to the template**
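For steps 3a-d, here is the command sequence I pieced together from the DTI-TK tutorial. The filenames (subjs.txt, the existing template, the mask) are placeholders, and I am not certain of the BinaryThresholdImageFilter argument order, so please correct anything that looks off:

```shell
# a) list of subject tensor volumes, one per line (placeholder pattern)
ls subj*_dtitk.nii.gz > subjs.txt

# b) bootstrap an initial mean template from an existing one
dti_template_bootstrap existing_template.nii.gz subjs.txt

# c) rigid, then affine alignment (3 iterations each), refining the template
dti_rigid_population mean_initial.nii.gz subjs.txt 3
dti_affine_population mean_rigid3.nii.gz subjs.txt EDS 3

# d) deformable alignment to the affine-refined template;
#    the tutorial first builds a brain mask from the template trace
TVtool -in mean_affine3.nii.gz -tr
BinaryThresholdImageFilter mean_affine3_tr.nii.gz mask.nii.gz 0.01 100 1 0
dti_diffeomorphic_population mean_affine3.nii.gz subjs_aff.txt mask.nii.gz 0.002
```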
4. Run a custom implementation of tbss_3_postreg, which replaces the general TBSS postreg step.
** During this step, the template and DTI data are brought into standard space; in this case, they will be spatially normalized to 1 mm isotropic resolution, and an FA skeleton and a 4D FA map will be created**
How is this different from what happens in the spatial normalization and template generation step (#3)? Is the difference that things are put into standard space here?
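Concretely, for step 4 I was planning something like the following, adapted from the DTI-TK/TBSS tutorial. The image dimensions, the subjs.txt listing, and the *_diffeo filename pattern are my assumptions, so please correct me:

```shell
# resample the final template into a 1 mm isotropic standard space
# (the size values are placeholders; the tutorial picks a bounding box
#  large enough to cover the whole template)
TVResample -in mean_final.nii.gz -vsize 1 1 1 -size 182 218 182

# warp each subject's tensor volume into the 1 mm template space
# (my understanding is this script composes the affine and
#  diffeomorphic transforms before resampling)
dti_warp_to_template_group subjs.txt mean_final.nii.gz 1 1 1

# compute an FA map per normalized subject and stack them into a 4D volume
for f in *_diffeo.nii.gz; do TVtool -in "$f" -fa; done
fslmerge -t all_FA *_fa.nii.gz
```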
5. Rename data to be compatible with TBSS
6. Create FA skeleton (How do you assign/check the FA threshold?)
7. Generate FA maps of the spatially normalized data
8. Put the TBSS-relevant files in a mytbss folder
9. Run tbss_4_prestats
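For steps 6-9, my plan is essentially the tail of the standard TBSS pipeline, assuming all_FA has already been created from the normalized data and placed under mytbss/stats:

```shell
# mean FA across subjects, then the white-matter skeleton
# (this is the part of tbss_3_postreg being replicated)
fslmaths all_FA -Tmean mean_FA
tbss_skeleton -i mean_FA -o mean_FA_skeleton

# threshold the skeleton (0.2 is the usual default) and project each
# subject's FA onto it; run from the top-level mytbss directory
tbss_4_prestats 0.2
```

Regarding my question in step 6: my understanding is that the threshold is the argument passed to tbss_4_prestats, and that you check it by overlaying mean_FA_skeleton_mask on mean_FA in FSLeyes to see whether the skeleton stays within white matter — is that right?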
10. Run voxelwise stats on the skeletonized FA data.
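For the stats in step 10, I was planning to use FSL's randomise on the skeletonised data (design.mat and design.con are my own design files):

```shell
# voxelwise permutation testing on the skeletonized FA data,
# with TFCE optimized for the 2D skeleton (--T2)
randomise -i all_FA_skeletonised -o tbss_FA \
  -m mean_FA_skeleton_mask -d design.mat -t design.con \
  -n 5000 --T2
```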
11. Repeat with MD, AD, and RD as necessary.
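For step 11, my understanding is that TVtool can derive the non-FA measures from the normalized tensors (with the trace output divided by 3 to get MD), after which FSL's tbss_non_FA script handles the projection. The *_diffeo filename pattern is again my assumption:

```shell
# per-subject diffusivity maps from the normalized tensors
for f in *_diffeo.nii.gz; do
  TVtool -in "$f" -ad            # axial diffusivity
  TVtool -in "$f" -rd            # radial diffusivity
  TVtool -in "$f" -tr            # trace; MD = trace / 3
  fslmaths "${f%.nii.gz}_tr" -div 3 "${f%.nii.gz}_md"
done
# then, e.g., merge the MD maps into mytbss/MD/all_MD and run `tbss_non_FA MD`
```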
Am I on the right path? Any input or clarifications would be GREATLY appreciated.
Thank you,
Cristina