Hi John,
If you haven't already got something in place, the approach Nick
posted is a good choice, provided you have enough RAM.
My method is very idiosyncratic to my data pipeline: I pull in the
spike times, then seek to each spike's location in the DAT file and
grab the waveform there. I don't have any analysis that really cares
what the waveforms look like on a spike-by-spike basis after sorting,
so I usually pick a random subset (50-100 spikes) for averaging and
computing waveform shape statistics.
Here's a little snippet that you could repeat for every unit. There
are some things that might need explaining in there, but it gives the
gist if you know what you're looking for:
% Nchannels x Nsamples x Nwaveforms
RawChannelCount = double(h5readatt(FilesKK.KWIK, ...
    '/application_data/spikedetekt', 'n_channels'));
fdata = fopen(FilesKK.DAT);
% Pick a random subset of at most wfsubsample spikes for this unit
subspks = TSECS{unit+1}(randperm(length(TSECS{unit+1}), ...
    min(wfsubsample, length(TSECS{unit+1}))));
WFx = zeros(RawChannelCount, 48, length(subspks));
for spk = 1:length(subspks)
    % Seek to 24 samples before the spike time
    % (30000 = sampling rate in Hz; *2 because int16 = 2 bytes)
    fseek(fdata, round((subspks(spk)*30000) - 24)*RawChannelCount*2, 'bof');
    % Read 48 samples across all (interleaved) channels
    WFdata = fread(fdata, 48*RawChannelCount, '*int16');
    WFdata = reshape(WFdata, RawChannelCount, []);
    % Zero-phase band-pass filter (coefficients B, A designed elsewhere)
    fWF = filtfilt(B, A, double(WFdata'));
    WFx(:, :, spk) = fWF';
end
fclose(fdata);
% To make extracted phy waveforms match the klustakwik waveforms,
% drop the channels spikedetekt ignored (realchannellist is 0-based)
WFx = WFx(realchannellist+1, :, :);
avgwaveform{unit+1} = mean(WFx, 3);
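One thing the snippet doesn't show is where B and A come from, or what I
do with avgwaveform afterwards. Here's a rough sketch of both; the
filter order, the 300-6000 Hz band, and the peak-to-trough statistic are
my assumptions for illustration, not something defined above, so adjust
to taste:

% Assumed: 3rd-order Butterworth band-pass for 30 kHz data.
% The 300-6000 Hz band is a typical spike band, not taken from the snippet.
Fs = 30000;
[B, A] = butter(3, [300 6000]/(Fs/2), 'bandpass');

% Example shape statistic from the averaged waveform: peak-to-trough
% amplitude and width, measured on the channel with the largest deflection.
wf = avgwaveform{unit+1};                  % Nchannels x 48
[~, bestchan] = max(max(abs(wf), [], 2));  % channel with biggest deflection
[troughval, troughidx] = min(wf(bestchan, :));
[peakval, peakidx] = max(wf(bestchan, troughidx:end));
p2t_amp = peakval - troughval;             % in raw (filtered) ADC units
p2t_width_ms = (peakidx - 1) / Fs * 1000;  % trough-to-peak time in ms

filtfilt runs the filter forward and backward, so the waveforms come out
with zero phase shift, which matters if you're comparing trough/peak
timing across channels.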
Hope that helps,
kb