Right Hemisphere File Format


Twyla Plack

Aug 4, 2024, 1:49:18 PM
to profithusher
Hi, I am trying to export my segmentation scene and its segmentations as a NIfTI file so that my PI can download them from Dropbox. However, the entire scene can only be saved as a MRML file and the segmentations can only be saved in NRRD format, with only the original unsegmented template able to be saved in NIfTI format (see attached photo). My PI says they can only access NIfTI files, and since NIfTI is not an option in the save menu for the scene or the segmentations (the first two files), how can I save them both in NIfTI format so that my PI can view them properly? Thank you!

Are you working with brain images? If yes, then using the NIfTI file format may make sense (otherwise I would recommend NRRD). To save a segmentation result as NIfTI, right-click on the segmentation node in the Data module to export it to a labelmap node, which can then be saved as NIfTI.


Thank you! I am segmenting an entire brain, so NIfTI does make sense. I have gone to the Segmentations module, where the export and export-to-labelmap options are located, but I cannot seem to find the nodes that you are talking about. Right-clicking on the segmentation in the Save dialog also does not do anything. Can you show me how to get there?


Thank you for the advice! I was able to get my file into NIfTI format. However, as I am performing a whole-brain segmentation (both hemispheres), the new NIfTI file (after export to a binary labelmap), when opened in a new Slicer instance, has its spatial position slightly altered, and the right hemisphere is completely dark, with only the left-hemisphere segmentations visible. How can I go about correcting this?


Here are my original (with FreeSurfer label colors) and the NIfTI. The NIfTI appears to be spatially disoriented (according to my PI) as well as missing the right hemisphere, even though I created it by simply right-clicking the segmentation node and choosing export to binary labelmap. Hope you can let me know what is wrong, thanks!



We could add functionality to SlicerFreeSurfer to export the segmentations with the correct label values directly to NIfTI.

Exporting the segmentations to labelmap using the color node would also solve this issue.


Thank you for all the feedback; I will indeed try it. In response to Dr. Lasso @lassoan, my processing workflow has simply been to use the Segment Editor and manually color the segments in, using automated tools such as thresholding only occasionally, as my project requires manual segmentation. I also label my segmentations with FreeSurfer colors. How would I export using the color node? The link does not seem to explain how, but I would appreciate any pointers!




The aim of the present survey was to review clinical and experimental data concerning the visual (face), auditory (voice) and verbal (name) channels through which familiar people are recognized, by contrasting these data with assumptions made by modular cognitive models of familiar people recognition. Particular attention was paid to the fact that visual (face), auditory (voice) and verbal (name) recognition modalities have different hemispheric representations and that these asymmetries have important implications for cognitive models which have not considered hemispheric differences as an important variable in familiar people recognition. Several lines of research have, indeed, shown that familiar faces and voices are mainly underpinned by the right hemisphere, whereas names are mostly subsumed by the left hemisphere. Furthermore, anatomo-clinical data have shown that familiarity judgements are not generated at the level of the Person Identity Nodes (PINs), as suggested by influential cognitive models, but at the level of the modality-specific recognition units, with a right hemisphere dominance in the generation of face and voice familiarity feelings. Additionally, clinical and experimental data have shown that PINs should not be considered as a simple gateway to a unitary semantic system, which stores information about people in an abstract and amodal format, but as structures involved in the storage and retrieval of person-specific information, preferentially represented in a sensory-motor format in the right hemisphere and in a language-mediated format in the left hemisphere. Finally, clinical and experimental data have shown that before the level of the person identity nodes (PINs) a cross-communication exists between the perceptual channels concerning faces and voices, but not between the latter and personal names. 
These data show that person-specific representations are mainly based on perceptual (face and voice) information in the right hemisphere and on verbal information in the left hemisphere.


Partial volume effects (PVE) in PET images are increased when volumetric data is transformed to the surface, resulting in maps that are heavily biased by the underlying curvature of the surface. For this reason, it is generally not recommended to transform PET volumes to the surface.


Transforming between surface-based coordinate systems works bidirectionally. That is, data in each surface system can be transformed to and from every other surface system. Here, the transformation functions take the form of neuromaps.transforms.XXX_to_YYY, where XXX is the source system and YYY is the target system.


Note that, by default, all of the transformation functions assume the provided tuple contains data in the format (left hemisphere, right hemisphere) and perform linear interpolation when resampling data to the new coordinate system. However, the surface functions in neuromaps.transforms accept two optional keyword parameters that can modify these defaults.
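The default behaviour described above can be sketched in plain Python: each hemisphere in the (left, right) tuple is resampled independently, with linear interpolation between neighbouring samples. The resample_linear function below is a toy illustration of that step, not the neuromaps implementation.

```python
def resample_linear(values, new_length):
    """Resample a 1-D list of samples to new_length points using
    linear interpolation -- a toy stand-in for the per-hemisphere
    resampling step a surface transform performs."""
    if new_length == 1:
        return [values[0]]
    out = []
    scale = (len(values) - 1) / (new_length - 1)
    for i in range(new_length):
        pos = i * scale
        lo = int(pos)
        hi = min(lo + 1, len(values) - 1)
        frac = pos - lo
        out.append(values[lo] * (1 - frac) + values[hi] * frac)
    return out

# neuromaps-style convention: data passed as a (left, right) tuple,
# each hemisphere resampled independently
left = [0.0, 2.0, 4.0]
right = [1.0, 3.0]
resampled = (resample_linear(left, 5), resample_linear(right, 3))
print(resampled)  # ([0.0, 1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```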


We can use an instance of the neuromaps.parcellate.Parcellater class to parcellate our data. The Parcellater class expects a path to the parcellation files or a tuple of parcellation files (left and right). Note that hemisphere handling can be controlled with the hemi parameter. The Parcellater class also expects these files to contain a unique integer ID for each parcel (brain region), and it will ignore all IDs of 0. This means that if your parcellation is not in the right format, you will need to rework it using helper functions such as neuromaps.images.relabel_gifti() and neuromaps.images.annot_to_gifti().
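The operation the Parcellater performs can be sketched in plain Python (the parcellate function here is a hypothetical stand-in, not the neuromaps API): average the vertex-wise data within each unique nonzero parcel ID, skipping ID 0.

```python
def parcellate(data, labels):
    """Average vertex-wise data within each parcel.
    `labels` holds one integer parcel ID per vertex; ID 0 is
    background and is ignored, mirroring the convention that
    parcellation files carry unique nonzero IDs per region."""
    sums, counts = {}, {}
    for value, parcel in zip(data, labels):
        if parcel == 0:          # 0 = unlabeled vertex, skipped
            continue
        sums[parcel] = sums.get(parcel, 0.0) + value
        counts[parcel] = counts.get(parcel, 0) + 1
    return {parcel: sums[parcel] / counts[parcel] for parcel in sums}

data = [1.0, 2.0, 3.0, 4.0, 5.0]
labels = [1, 1, 2, 0, 2]
print(parcellate(data, labels))  # {1: 1.5, 2: 4.0}
```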


Here is another example in which surface data (fsLR) is parcellated into the Schaefer-400 atlas. Note that in this case the atlas is in dlabel.nii format. neuromaps requires tuples of GIFTI (*.gii) files, but this can be handled using neuromaps.images.dlabel_to_gifti().


In these two examples, parcellations were fetched using netneurotools. But of course you can fetch your parcellation files from wherever you would normally get them. Just make sure they are in neuromaps format and check that your parcellation makes sense (i.e. looks the way it should) afterwards.


NIfTI stands for Neuroimaging Informatics Technology Initiative, which is jointly sponsored by the US National Institute of Mental Health and the National Institute of Neurological Disorders and Stroke. NIfTI defines a file format for neuroimaging data that is meant to meet the needs of the fMRI research community. In particular, NIfTI was developed to support inter-operability of tools and software through a common file format. Prior to NIfTI there were a few major fMRI analysis software packages, and each used a different file format. NIfTI was designed to serve as a common file format for all of these (and future) neuroimaging software packages.


NIfTI was derived from an existing medical image format called ANALYZE. ANALYZE was originally developed by the Mayo Clinic in the US and was adopted by several neuroimaging analysis software packages in the 1990s. The ANALYZE header (where metadata are stored) had extra fields that were not used, and the NIfTI format basically expands on ANALYZE by using some of those empty fields to store information relevant to neuroimaging data. In particular, the header stores information about the position and orientation of the images. This was a huge issue prior to NIfTI, because there were different standards for how to store the order of the image data. For example, some software packages stored the data in an array that started from the most right, posterior, and inferior voxel, with the three spatial dimensions ordered right-to-left, posterior-to-anterior, and then inferior-to-superior. This is referred to as RPI orientation. Other packages that also used ANALYZE data stored the voxels in RAI format (with the second dimension going anterior-to-posterior) or LPI format (reversing left and right). This caused a lot of problems for researchers, especially if they wanted to try different analysis software or use a pipeline that involved tools from different software packages. In some cases, this was just annoying (e.g., having to reverse the anterior-posterior dimension of an image). In other cases, it was confounding and potentially created erroneous results. This was especially true of the right-left (x) dimension. While it is immediately obvious when viewing an image which are the front and back, and top and bottom, of the brain, the left and right hemispheres are typically indistinguishable from each other, so a left-right swap could easily go undetected, potentially leading researchers to draw completely incorrect conclusions about which side of the brain activation occurred on!
The NIfTI format was designed to help prevent this by more explicitly storing orientation information in the header.
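A toy example (pure Python, not any particular package's code) shows how easily such a silent flip happens: reversing the storage order of the x axis swaps which array index holds each hemisphere, while the voxel values themselves look unchanged.

```python
def flip_x(volume):
    """Reverse the first (x) axis of a 3-D voxel array stored as
    nested lists [x][y][z] -- the kind of silent left/right swap
    that mismatched ANALYZE orientations could introduce."""
    return volume[::-1]

# Tiny 2x1x1 "volume": index 0 holds the left hemisphere, index 1
# the right (an LPI-style ordering, chosen purely for illustration).
volume = [[["L"]], [["R"]]]
flipped = flip_x(volume)

# After the flip, reading index 0 as "left" silently gives the
# right hemisphere instead.
print(flipped[0][0][0], flipped[1][0][0])  # R L
```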


Another improvement in the NIfTI format was support for a single file. The ANALYZE format requires two files: a header (with a .hdr extension) and the image data itself (.img). These files have to share the same name before the extension (e.g., brain_image.hdr and brain_image.img), which doubles the number of files in a directory of images and creates clutter. NIfTI defines a single image file ending in a .nii extension. As well, NIfTI images can be compressed using a standard, open-source algorithm known as Gzip, which can significantly reduce file sizes and thus the amount of storage required for imaging data. Since neuroimaging data files tend to be large, this compression is an important feature.
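The gain from compression is easy to demonstrate with Python's standard gzip module, the same algorithm used for .nii.gz files. The repetitive byte string below is only a crude stand-in for image data, which is often similarly redundant:

```python
import gzip

# A crude stand-in for image data: repetitive byte content, which
# (like much neuroimaging data) compresses well with gzip -- the
# same algorithm behind .nii.gz files.
raw = bytes(range(256)) * 1024          # 256 KiB of repeating "voxel" bytes
compressed = gzip.compress(raw)

print(len(raw), len(compressed))        # compressed is far smaller
assert gzip.decompress(compressed) == raw   # compression is lossless
```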
