Anyway, it depends (like most things in fMRI analysis and statistics
more generally). For instance, are you using a linear classifier (e.g.
linear svm)? Are you doing a searchlight analysis? ROI-based? When in
the processing stream would you take out the mean? And would you take it
out voxel-wise (each voxel averages zero over examples) or volume-wise
(the voxels within each example average zero; not a good idea in
searchlight-type analyses)?
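To make the voxel-wise vs. volume-wise distinction concrete, here is a minimal sketch (in Python/NumPy rather than the R I usually use, and with made-up numbers) of the two kinds of mean removal on an examples-by-voxels matrix:

```python
import numpy as np

# Hypothetical data: 6 examples (rows) x 4 voxels (columns).
X = np.arange(24, dtype=float).reshape(6, 4)

# Voxel-wise: each voxel (column) averages zero over the examples.
X_voxelwise = X - X.mean(axis=0, keepdims=True)

# Volume-wise: the voxels within each example (row) average zero.
X_volumewise = X - X.mean(axis=1, keepdims=True)

print(X_voxelwise.mean(axis=0))   # zeros, one per voxel
print(X_volumewise.mean(axis=1))  # zeros, one per example
```

Note that the two operations subtract different numbers: voxel-wise removal changes each voxel's values by the same amount in every example, while volume-wise removal changes every voxel in an example by an amount that depends on that particular example.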
For a linear SVM, yes, it will use univariate differences. In
fact, it *must* have univariate differences (condition A != condition B
in individual voxels) in order to classify. I gave a talk that includes
a description of this (and some other MVPA issues); see
http://nrg.wustl.edu/events/jo-etzel-ph-d-2011-nil-niac-seminar-series/
or google "Multivariate Pattern Analysis of fMRI data: what it can and
can not tell us".
A linear classifier constructs a linear combination of the voxel values;
it can be pictured as pooling the weak bias across multiple voxels. So a
very weak bias (e.g. A > B) in a group of individual voxels might not be
detected by a standard mass univariate GLM analysis (because the
individual voxels' bias is too weak and perhaps not distributed in a
tight cluster) but could be detected by an MVPA. Also, mass-univariate
analysis generally looks for a cluster of voxels with a consistent bias,
whereas a linear classifier can also use voxels with opposite biases
(A > B in some voxels is as informative as B > A in others).
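The pooling idea can be put into back-of-the-envelope numbers. This is purely an illustrative sketch with assumed values (a 0.2 per-voxel bias, unit noise, independent voxels), not a claim about real fMRI data:

```python
import numpy as np

n_voxels = 20
per_voxel_bias = 0.2  # assumed A - B mean difference in each voxel
noise_sd = 1.0        # assumed per-voxel noise sd (independent voxels)

# Mass-univariate view: each voxel's effect size on its own is weak.
d_single = per_voxel_bias / noise_sd

# An equal-weight linear classifier averages the voxels; with independent
# noise the averaged noise sd shrinks by sqrt(n_voxels), so the pooled
# effect grows by the same factor. (Voxels with the opposite bias pool
# just as well: the classifier simply flips the sign of their weights.)
d_pooled = per_voxel_bias / (noise_sd / np.sqrt(n_voxels))

print(d_single, round(d_pooled, 2))  # 0.2 vs. about 0.89
```

So twenty voxels, each too weak to survive a univariate threshold, can jointly carry a quite detectable signal.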
So I don't think it's quite proper to ask whether MVPA is only meaningful
when there are no univariate differences. But it does depend on the
desired interpretation: for example, if you want to argue that the mean
activation in an anatomical region is not the only source of information,
you could subtract the mean from that group of voxels before the analysis.
Jo Etzel
--
You received this message because you are subscribed to the Google Groups "Princeton MVPA Toolbox for Matlab" group.
To view this discussion on the web visit https://groups.google.com/d/msg/mvpa-toolbox/-/avWjihuEsIcJ.
The potential problem is that volume-wise mean-subtraction can
introduce information into truly uninformative voxels.
This is easiest to picture in small numbers of voxels. I have (R) code
and pictures showing the example I describe here; email me directly if
you'd like a copy. (I need to get an MVPA blog started!)
In brief, imagine that I have 25 voxels and two classes. The activation
patterns for the two classes are identical in 15 voxels (no information
about the two classes) but different in the other 10: I added 1 to the
class 'a' activations to get the class 'b' activations for those 10
voxels. We now have 10 informative voxels and 15 uninformative ones. But
if I subtract the mean activation from each example, I'm not just
subtracting from my 10 informative voxels, I'm subtracting from the 15
uninformative ones as well, and the number I subtract will be different
in the two classes (I'll subtract a larger number from the class 'b'
examples, since I added 1 to some of the voxels). After volume-wise
mean-subtraction I therefore now have 25 informative voxels: 10 with
real signal and 15 with a small bias introduced by the mean-subtraction.
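As a sketch of the arithmetic (a Python/NumPy stand-in for the R code mentioned above, noise-free for clarity):

```python
import numpy as np

n_vox, n_inf = 25, 10
a = np.zeros(n_vox)    # a class-'a' example: all voxels identical
b = a.copy()
b[:n_inf] += 1         # class 'b': +1 in the 10 informative voxels

n_diff_before = np.sum(a != b)  # 10 voxels carry information

# Volume-wise mean-subtraction removes each example's own mean,
# which differs between the classes (0 for 'a', 10/25 = 0.4 for 'b').
a_ms = a - a.mean()
b_ms = b - b.mean()

n_diff_after = np.sum(a_ms != b_ms)  # now all 25 voxels differ
print(n_diff_before, n_diff_after)   # 10 25
```

After the subtraction, the 15 originally identical voxels differ between classes by 0.4, purely as an artifact of the preprocessing.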
Realistically, this will *probably* not be a big problem if you run a
small-diameter searchlight after whole-volume mean-subtraction. But it
could be a problem, and I suspect the distortion will be worse when there
are big differences in activation across conditions, or when the
searchlight size is closer to the volume size (i.e. a searchlight
analysis within a ROI, or large-diameter searchlights).
Jo
PS: A short discussion of this issue was on the pymvpa message list last
year:
http://lists.alioth.debian.org/pipermail/pkg-exppsy-pymvpa/2011q4/001929.html
PPS: I fully agree with Jesse's description of the weird boundary
effects that can happen with searchlight analyses, even if
mean-subtraction is done within each searchlight. And I don't have a
good fix; there are a very, very large number of unpleasant surprises
that can happen with searchlight analyses.