We at Washington University in St. Louis have created a new schema
for representing Freesurfer data. Freesurfer data can be
cross-sectional or longitudinal. The new representation contains
both the ASEG and the APARC measurements in a single assessor.
I have attached the new schema. It contains
fs:asegRegionAnalysis and fs:aparcRegionAnalysis, which are deprecated
in our instance.
Regards
Mohana
XNATRestClient -host $host $connect_str -local ${freesurfer_out}/${session}_freesurfer5.xml -remote $restPath -m PUT
XNATRestClient -host $host $connect_str -m PUT -remote "/data/archive/experiments/${session}/assessors/${fs_id}/resources/DATA"
XNATRestClient -host $host $connect_str -m PUT -remote "/data/archive/experiments/${session}/assessors/${fs_id}/resources/DATA/files?extract=true&content=DATA" -local ${freesurfer_out}/${sessLabel}.zip
> I've been going through the shell script you sent a while earlier. In the
> script the entire directory is zipped up and sent to the server. Is there a
> way to get that entire directory back? Also, I'm less interested in the
> derived data for the moment, but can I run a query to find all left
> hemisphere white matter surfaces and all left hemisphere sulc files?
> I don't understand XNAT well enough yet, so here are some questions.
The way to get the catalog of files back would be a GET request on:
/data/archive/experiments/EXPT_ACCESSION_ID/assessors/FS_ASSESSOR_ID/resources/DATA/files?format=zip
Depending on how you choose to catalog the files, you could get back a
sub-folder containing the required files, or a specific file. For
example, to get the recon-all.log you would:
GET /data/archive/experiments/EXPT_ACCESSION_ID/assessors/FS_ASSESSOR_ID/resources/DATA/files/EXPT_LABEL/scripts/recon-all.log
(EXPT_LABEL happens to be the root folder when the files are uploaded;
if you create the zip with a different layout, you would construct the
URL accordingly.)
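Putting the two download patterns together, here is a minimal shell sketch. The host and accession IDs are hypothetical placeholders, and the XNATRestClient invocations are only echoed rather than executed, so this shows the URL construction without touching a server:

```shell
#!/bin/sh
# Hypothetical values -- substitute your own server and accession IDs.
host="https://central.xnat.org"
expt="EXPT_ACCESSION_ID"
fs_id="FS_ASSESSOR_ID"

# 1) Fetch the entire DATA catalog back as a single zip.
zip_url="/data/archive/experiments/${expt}/assessors/${fs_id}/resources/DATA/files?format=zip"

# 2) Fetch one specific file (recon-all.log) by its path inside the catalog.
log_url="/data/archive/experiments/${expt}/assessors/${fs_id}/resources/DATA/files/EXPT_LABEL/scripts/recon-all.log"

# Echo the calls rather than running them (sketch only).
echo XNATRestClient -host "$host" -m GET -remote "$zip_url"
echo XNATRestClient -host "$host" -m GET -remote "$log_url"
```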
>
> XNATRestClient -host $host $connect_str -local
> ${freesurfer_out}/${session}_freesurfer5.xml -remote $restPath -m PUT
>
> I presume the XML file comes from a pipeline in XNAT. What if I did my
> Freesurfer processing separately from XNAT pipelines? Is this statement
> necessary?
The XML file is generated by a Perl script after the FS processing is
complete. You could invoke the Perl script independently of the
pipeline.
The XML file is inserted into XNAT to represent a FS assessor. After
the assessor is inserted, files (one or more catalogs of files) can be
associated with the assessor. So you would run the two resource calls
shown above after the assessor exists in XNAT.
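The order of operations can be sketched as follows. The host, session, output path, and $restPath value are hypothetical stand-ins (in the original script $restPath is defined elsewhere), and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Hypothetical values -- substitute your own identifiers and paths.
host="https://central.xnat.org"
session="SESSION_ID"
fs_id="${session}_freesurfer5"
freesurfer_out="/path/to/freesurfer/out"
# Placeholder for the assessor URL ($restPath in the original script).
restPath="/data/archive/experiments/${session}/assessors/${fs_id}"

# 1) PUT the assessor XML to create the assessor in XNAT.
echo XNATRestClient -host "$host" $connect_str -m PUT -remote "$restPath" -local "${freesurfer_out}/${session}_freesurfer5.xml"
# 2) Create the DATA resource (catalog) on the assessor.
echo XNATRestClient -host "$host" $connect_str -m PUT -remote "${restPath}/resources/DATA"
# 3) Upload the zipped output; extract=true unpacks it server-side.
echo XNATRestClient -host "$host" $connect_str -m PUT -remote "${restPath}/resources/DATA/files?extract=true&content=DATA" -local "${freesurfer_out}/${session}.zip"
```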
The Freesurfer datatype for representing cross-sectional data is
fs:fsdata. We create a catalog of the entire Freesurfer output and
download the images using REST calls (ref: my previous post in this thread).
I am attaching the template file.
Regards
Mohana
Mohana,
Don’t we also have tools which will parse a stat file and generate the XNAT freesurfer XML? What is that written in?
If that tool works consistently, then we could bundle it as part of a FreesurferImporter. It could be an additional implementation of the ‘import’ service available via REST. That way, a user could just push up the Freesurfer files, and XNAT could handle generating the xml, catalogs, etc. Post 1.6, this could be easily integrated into XNAT servers using the module structure (or plugins, post-2.0… next year). Is there any other info needed that is only known on the client side? Provenance?
Regarding potential catalog structures, we’ve gone around and around on handling Freesurfer files for years. You certainly could separate the files into separate catalog files. The benefit here is that the catalog properties (label, format, content, etc.) are stored in the database and thus easily queryable. File-level properties, by contrast, are stored in the catalog XML on the file system and are much more complicated to query across sessions. On the other hand, you certainly wouldn’t want a catalog for each file, as this would quickly overwhelm your xnat_resource table. Finding the right solution to this has been a popular debate topic in our group for years, with the ultimate solution still undecided.
If you use the single-catalog strategy, you could use custom variables to store whether or not certain files are expected to be present. Or, you could separate the common files (stat files, brainmask, etc.) into separate catalogs and lump everything else into one big catalog. The benefit of option 1 is that querying custom variables across sessions is already supported in XNAT, whereas option 2 wouldn’t be queryable across sessions out of the box (though it could easily be configured via a custom SQL view in the display docs).
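To illustrate option 2, the resource-creation calls for separate common-file catalogs plus one catch-all catalog might look like this. The host, IDs, and catalog labels (STATS, BRAINMASK) are hypothetical choices, and the commands are only echoed:

```shell
#!/bin/sh
# Hypothetical values -- substitute your own server, IDs, and labels.
host="https://central.xnat.org"
expt="EXPT_ACCESSION_ID"
fs_id="FS_ASSESSOR_ID"
base="/data/archive/experiments/${expt}/assessors/${fs_id}/resources"

# Separate catalogs for the commonly queried files...
for label in STATS BRAINMASK; do
  echo XNATRestClient -host "$host" -m PUT -remote "${base}/${label}"
done
# ...and one catch-all catalog for everything else.
echo XNATRestClient -host "$host" -m PUT -remote "${base}/DATA"
```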
Tim