Hello,
I'm a Nipype newbie with a relatively straightforward problem. I'm hoping for both a specific solution to this situation and, ideally, a more generic recipe for this type of problem, since I anticipate hitting it a lot as I use Nipype in the future.
I have a bunch of images that I want to smooth using FSL's SUSAN, and I stumbled upon a ready-made workflow, create_susan_smooth. I can run the workflow fine by feeding it inputs manually, but to structure the output images nicely I need a DataSink, and to feed it multiple inputs (iterating over subjects) I should also use a DataGrabber. I think I am creating all three pieces correctly in isolation, but I keep getting errors when trying to link them up. Here's what I have so far, copied from the documentation:
import os
import nipype.pipeline.engine as pe  # pypeline engine
import nipype.interfaces.io as nio  # DataGrabber and DataSink
from nipype.workflows.fmri.fsl.preprocess import *  # this is where create_susan_smooth lives
# Specify the location of the data.
data_dir = os.path.abspath('/Users/jason/Desktop/testsmooth')
# Specify the subject directories
subject_list = ['0001167','0000322']
# Map field names to individual subject runs.
info = dict(func=[['subject_id', ['func']]],
            mask=[['subject_id', 'mask']])
datasource = nio.DataGrabber(infields=['subject_id'], outfields=['func', 'mask'])
datasource.inputs.base_directory = data_dir
datasource.inputs.template = '%s/r1.feat/*.nii.gz'
datasource.inputs.field_template = dict(func='%s/r1.feat/filtered_func_data.nii.gz',
                                        mask='%s/r1.feat/mask.nii.gz')
datasource.inputs.template_args = dict(func=[['subject_id']],
                                       mask=[['subject_id']])
datasource.inputs.subject_id = subject_list
datasource.inputs.sort_filelist = True
smoother = create_susan_smooth()
smoother.inputs.inputnode.fwhm = 4
datasink = pe.Node(interface=nio.DataSink(), name="datasink")
datasink.inputs.base_directory = '/Users/jason/Desktop/smoothing_test'
# Initiate the metaflow
metaflow = pe.Workflow(name="metaflow")
# Define where the working directory of the metaflow should be stored
metaflow.base_dir = '/Users/jason/Desktop/smoothing_test'
# Connect up all components
metaflow.connect([(datasource, smoother, [('func', 'inputspec.in_files')]),
                  (smoother, datasink, [('inputspec.smoothed_files',
                                         'smoothed')
                                        ])
                  ])
and I always get the following error when running that last connect call:
AttributeError Traceback (most recent call last)
<ipython-input-6-63200700350b> in <module>()
9 metaflow.connect([(datasource,smoother,[('func','inputspec.in_files')]),
10 (smoother,datasink,[('inputspec.smoothed_files',
---> 11 'smoothed')
12 ])
13 ])
/usr/local/lib/python2.7/site-packages/nipype/pipeline/engine.pyc in connect(self, *args, **kwargs)
316 newnodes.append(destnode)
317 if newnodes:
--> 318 self._check_nodes(newnodes)
319 for node in newnodes:
320 if node._hierarchy is None:
/usr/local/lib/python2.7/site-packages/nipype/pipeline/engine.pyc in _check_nodes(self, nodes)
790 node_lineage = [node._hierarchy for node in self._graph.nodes()]
791 for node in nodes:
--> 792 if node.name in node_names:
793 idx = node_names.index(node.name)
794 if node_lineage[idx] in [node._hierarchy, self.name]:
AttributeError: 'DataGrabber' object has no attribute 'name'
The DataGrabber gives me what I want when I run it independently, and I think the DataSink is working correctly too. I'm just unsure how to connect everything up.
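For reference, this is roughly how I've been testing the DataGrabber on its own (same settings as above; the prints are just to eyeball the results):
# Standalone test of the DataGrabber configured above
res = datasource.run()
print(res.outputs.func)  # one filtered_func_data.nii.gz path per subject
print(res.outputs.mask)  # the matching mask.nii.gz paths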
This brings me to my generic question. Let's assume I have a folder "mydata" with a subfolder for each subject (101, 102, etc.), as sketched below. I have found a workflow that performs some function I want. I want to feed that workflow the inputs from my folder structure, run it, and then save the outputs in an orderly way (either a separate folder with a subfolder for each subject, or in the same location as the original inputs). This seems like a use of Nipype that I, and many other people, would hit very frequently. While the documentation provides many comprehensive examples of large, complex workflows, it does not seem to address this basic "building block" of how to re-use existing workflows (or perhaps I have just missed it). It's a hurdle that has pushed me away from Nipype many times in favor of throwaway shell scripts.
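To be concrete, the layout I have in mind is something like this (subject IDs and file names are made up):
mydata/
    101/
        some_image.nii.gz
    102/
        some_image.nii.gz
    ...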
Generally, I would like to know:
1. What's the most efficient way to figure out which inputs a given workflow needs?
2. What's the most efficient way to see which outputs the workflow creates, and how do I access them? (My best guess for both this and #1 is in the snippet after this list.)
3. What's the most straightforward way to feed my existing data to the workflow (assuming the folder structure above)?
4. What's the most straightforward way to save those outputs in a reasonably organized way (i.e., a subfolder for each subject)?
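For questions 1 and 2, the best I've come up with so far is printing the traits of the wrapper nodes inside the workflow; I have no idea whether this is the intended approach:
# Poke at the input/output nodes a ready-made workflow exposes
smoother = create_susan_smooth()
print(smoother.get_node('inputnode').inputs)    # e.g. fwhm, in_files, mask_file
print(smoother.get_node('outputnode').outputs)  # e.g. smoothed_files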
Thanks for any help you could provide!
Jason
P.S. -- I realize that FEAT can do the smoothing, but for various reasons we have to do it at a later stage.