Can you provide your code and an example file?
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
hmm -- "d" is already double (64 bit float) -- could the labview data be
128bit? seems unlikely.
> and also
> the value of the data are not right.
>
> For example, the attached is the text file and binary file saved by Labview.
>
> the text file reads:
>
> array([-2332., -2420., -2460., ..., 1660., 1788., 1804.])
>
> while the binary file reads (with dtype='>d')
>
> array([-3.30078125, 0. , -3.30297852, ..., 0. ,
> -2.6953125 , 0. ])
>
> Anyone knows what dtype I should use, or how should I build the correct
> dtype for it?
I see from that web page:
"One final note about arrays: arrays are represented by a 32-bit
dimension, followed by the data."
so you may need to skip (or read) 32 bits (4 bytes) before you read the
data:
import numpy
infile = open('lvdata.bin', 'rb')   # placeholder name for your LabVIEW binary file
header = numpy.fromfile(infile, dtype='>i', count=1)   # the 32-bit dimension field
data = numpy.fromfile(infile, dtype='>d')              # the rest as big-endian doubles
That's guessing that the header is a big-endian 32-bit integer.
You also might try both ">" and "<" -- maybe it's not big-endian?
It's going to take some experimentation.
The good news is that if you read binary data wrong, the result is usually
obviously wrong.
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Those data are big-endian, 80-bit IEEE extended-precision numbers,
flattened to a 128-bit representation in the binary file. Not sure
if/how such data can be read into numpy without bit manipulations.
<http://zone.ni.com/reference/en-XX/help/371361B-01/lvconcepts/how_labview_stores_data_in_memory/>
<http://zone.ni.com/reference/en-XX/help/371361E-01/lvconcepts/flattened_data/>
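If it comes to that, something along these lines might serve as a starting
point -- purely a sketch, and the assumed record layout (the 10 significant
bytes first, then 6 bytes of padding) would need to be checked against the
NI docs above:

import numpy as np

# Decode big-endian 80-bit extended values by hand, assuming each 16-byte
# record holds the sign bit and 15-bit exponent in the first two bytes,
# the 64-bit significand (explicit integer bit) in the next eight, and
# padding in the rest.  Any leading dimension header would have to be
# stripped first; only normal numbers are handled, and a few low-order
# significand bits are lost in the float accumulation.
raw = np.fromfile('lvdata.bin', dtype=np.uint8).reshape(-1, 16)   # placeholder filename
sign_exp = (raw[:, 0].astype(np.int64) << 8) | raw[:, 1]
sign = np.where(sign_exp & 0x8000, -1.0, 1.0)
exponent = (sign_exp & 0x7FFF).astype(np.float64) - 16383
mantissa = np.zeros(len(raw))
for i in range(8):
    mantissa = mantissa * 256.0 + raw[:, 2 + i]
values = sign * (mantissa / 2.0 ** 63) * 2.0 ** exponent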
Christoph
It should be possible at least on 64-bit machines (where sizeof(long
double) == 16 bytes), and you may be able to get away with it on 32-bit
machines if you use a composite dtype with the second field used for
padding, i.e. you assume you have an array of N rows with two columns,
the first column being a 12-byte type and the second a 4-byte type (say
int on 32-bit archs), or the other way around.
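Something like this, for instance (only a sketch -- the filename is a
placeholder, and whether the padding really comes after the value is an
assumption):

import numpy as np

# On a machine where np.longdouble is already 16 bytes, the 128-bit
# records can be read directly (byte order permitting):
data = np.fromfile('lvdata.bin', dtype=np.longdouble)

# On a 32-bit platform where np.longdouble is 12 bytes, pad each record
# out to 16 bytes with a composite dtype instead; the padding might also
# need to go before the value.
record = np.dtype([('value', np.longdouble), ('pad', 'V4')])
data = np.fromfile('lvdata.bin', dtype=record)['value']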
cheers,
David
> http://www.shocksolution.com/2008/06/25/reading-labview-binary-files-with-python/
> I followed Travis' suggestion on that page to convert one of my Labview
> binary file using
> data=numpy.fromfile('name',dtype='>d')
> but this gives a array doubled the shape of my recorded data and also the
> value of the data are not right.
> For example, the attached is the text file and binary file saved by Labview.
> the text file reads:
No matter what your text file looks like, what data type exactly did you
specify in LabVIEW? Can you build G-code to read the correct values back
from your binaries? Did you specify a header in the G-code that writes the
binaries? I know G-code has a mechanism to write a header or a file-size
field in front of the binary data, but I think by default this is off.
Please be sure you understand what your G-code generated.
> array([-2332., -2420., -2460., ..., 1660., 1788., 1804.])
> while the binary file reads (with dtype='>d')
> array([-3.30078125, 0. , -3.30297852, ..., 0. , -2.6953125 ,
> 0. ])
> Anyone knows what dtype I should use, or how should I build the correct
> dtype for it?
> Thanks a lot!
> Xunchen Liu