In [1]: import neo
In [2]: r = neo.io.PlexonIO(filename='12-10-m1wb.plx')
In [3]: %timeit seg = r.read_segment()
1 loops, best of 3: 46.5 s per loop
In [4]: seg = r.read_segment()
In [5]: bl = neo.Block(name = 'my block')
In [6]: bl.segments.append(seg)
In [7]: w = neo.io.NeoMatlabIO(filename = 'm1wb.mat')
In [8]: %timeit w.write_block(bl)
1 loops, best of 3: 335 ms per loop
I have not been able to compare the read speed for the MATLAB-format data, as I get the following error when I try to load it back into Python:
In [9]: bl2 = w.read_block()
------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
  File "C:\Python27\lib\site-packages\neo-0.2.0-py2.7.egg\neo\io\neomatlabio.py", line 196, in read_block
    bl = self.create_ob_from_struct(bl_struct, 'Block', cascade = cascade, lazy = lazy)
  File "C:\Python27\lib\site-packages\neo-0.2.0-py2.7.egg\neo\io\neomatlabio.py", line 307, in create_ob_from_struct
    for c in range(len(getattr(struct,attrname))):
TypeError: object of type 'mat_struct' has no len()
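As a sanity check on the file itself (independent of neo's reader), the .mat file can be loaded directly with scipy. This is only a sketch: 'block' below is my guess at the top-level variable name that NeoMatlabIO writes, and the loadmat options mirror the mat_struct objects mentioned in the traceback.

from scipy import io as sio

# Load the file NeoMatlabIO wrote, bypassing neo entirely.
mat = sio.loadmat('m1wb.mat', struct_as_record=False, squeeze_me=True)
bl_struct = mat['block']        # 'block' is an assumed variable name
print(bl_struct._fieldnames)    # list the fields of the saved Block struct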
Questions:
1) When I load .plx files containing spike train data, the spike times are loaded but the waveforms are not. Is it possible to load the waveforms as well?
2) PlexonIO appears not to load any information about which channel the data was recorded on, i.e. it does not create any RecordingChannel objects. This is problematic if any online sorting has been done on the data, in which case the number of spike trains imported by neo differs from the number of electrodes. Is there any way of loading this information? (A rough manual workaround is sketched below.)
3) Loading data takes a very long time compared to loading the same .plx files in Plexon Offline Sorter. For example, a 200 MB file that takes < 2 seconds to load in Offline Sorter takes ~4 minutes to load using r.read_segment(). Is this unavoidable, or can the process be accelerated in any way? I am running neo 0.2.0 under 32-bit Python 2.7.2 on a 64-bit Windows 7 PC with 8 GB RAM and an i7-2600 CPU.
Thanks for your help.
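Regarding question 2, one possible stopgap is to attach channel information by hand after reading. This is only a sketch, continuing from the session above (bl and seg already exist); it assumes one AnalogSignal per electrode, and the RecordingChannel/RecordingChannelGroup attribute names are my reading of the neo 0.2 object model.

import neo

# Group all electrodes together and create one RecordingChannel per
# continuous signal in the segment (an assumption about the recording).
rcg = neo.RecordingChannelGroup(name='all electrodes')
bl.recordingchannelgroups.append(rcg)

for i, anasig in enumerate(seg.analogsignals):
    rc = neo.RecordingChannel(index=i, name='channel %d' % i)
    rc.analogsignals.append(anasig)
    rcg.recordingchannels.append(rc)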
Reading a 1-minute, 16-channel continuous recording (40 kHz sample rate) takes approximately 140 times longer than writing the same data to a .mat file (see the timings above). I'm no expert, but this suggests to me that PlexonIO's read_segment is not operating as efficiently as it might?
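If it helps to narrow down where the time goes, the read call can be profiled with the standard-library profiler. A minimal sketch, nothing neo-specific, and the output file name is arbitrary:

import cProfile
import pstats

# Profile a single read and print the 20 most expensive calls by
# cumulative time.
cProfile.runctx('r.read_segment()', globals(), locals(), 'plexon_read.prof')
pstats.Stats('plexon_read.prof').sort_stats('cumulative').print_stats(20)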
I have spent more than 30 seconds in the code, and there is an option for waveforms that is set to False:
load_spike_waveform = False
so
seg = r.read(load_spike_waveform = True)
should load the waveforms.
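For example, re-reading with waveforms enabled might look like this (assuming read_segment accepts the same load_spike_waveform keyword and that the waveforms end up on the SpikeTrain objects; the expected shape is my guess):

# Re-read the file with waveforms enabled and inspect them.
seg = r.read_segment(load_spike_waveform=True)

st = seg.spiketrains[0]
print(st[:5])                  # first few spike times, as before
print(st.waveforms.shape)      # roughly (n_spikes, n_channels, n_samples)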