
How to get from byte to hex in LabView?


Christopher M. Balz

Feb 11, 2004, 1:46:21 PM
Our measuring instrument returns its data in three 8-bit bytes. We
are trying to log this data in a nice format to a log file. Currently
we have the logging, but translating the data to the format used on
the display of the actual measuring instrument is turning out to be
amazingly difficult in LabView.

Here is the short description of the problem: We need to:
* Translate hex values of 8-bit bytes into string literals.
> (For example, 52 -> "52", and F7 -> "F7")
* Translate hex values of 8-bit bytes into their decimal
equivalents.
> (For example, 16 -> 22)
* Translate decimal values into their string literal representation
> (For example, 22 -> "22")
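For reference, a minimal C-family sketch of these three translations (illustrative only, not LabView code; the byte value is just an example):

    /* Illustrative C sketch (not LabView) of the three translations above. */
    #include <stdio.h>

    int main(void)
    {
        unsigned char b = 0x16;            /* one 8-bit byte from the instrument */
        char hex_str[3], dec_str[4];

        sprintf(hex_str, "%02X", b);       /* hex value -> string literal: 0x16 -> "16" */
        unsigned int dec = b;              /* hex value -> decimal equivalent: 0x16 -> 22 */
        sprintf(dec_str, "%u", dec);       /* decimal value -> string literal: 22 -> "22" */

        printf("%s %u %s\n", hex_str, dec, dec_str);
        return 0;
    }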

We are using an older version of LabView (v4), but it should be able
to do the logging we need to do.

We are amazed at how hard this is to do in LabView, given how much
easier it is in C-family languages. We really hope there is some
knowledge out there. If you need more info, please see below. Thank
you in advance!

-> Here is the long description of the problem:

The literal string representation of the hex value of the first byte
represents the first two numeric characters of the display. So, for
example, if we get a hex value of 52 for the first byte, this means
that the measuring instrument is displaying 5.2 on its display. Yes,
this is its scheme (and it's a Varian instrument!)

The remaining precision in the measurement is sent in the second byte.
The literal string representation of the decimal value of this second
byte contains the digits for the remaining precision. So, for
example, a second byte with a hex value of 16 means that the digits
for the remaining precision in the measurement are "22".

The exponent is sent in the third byte. The translation (via two's
complement translation) to a signed integer from the hex value of this
third byte forms the value of the exponent. So, for example, F7 means
that the exponent is -9.

To sum up, if the hex values for a three-byte set (8-bit bytes) were
these:
52 16 F7
then the display of the measuring instrument would read '5.222 E-9'.
It is the literal representation of this value that we would like to
log to the file (disk file). In other words, we'd like a log file
with lines in it that contained the reading in final form: "5.222 E-9,
4.995 E-9, 4.994 E-9", and so on.
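As a cross-check, here is a minimal C-family sketch (illustrative only, not LabView) that decodes the three bytes exactly as described above, 52 16 F7 -> "5.222 E-9":

    /* Illustrative C sketch (not LabView) following the per-byte rules above. */
    #include <stdio.h>

    int main(void)
    {
        unsigned char bytes[3] = {0x52, 0x16, 0xF7};

        char hi[3], lo[3];
        sprintf(hi, "%02X", bytes[0]);    /* first byte: hex digits are the display digits, "52" */
        sprintf(lo, "%02u", bytes[1]);    /* second byte: decimal value gives the rest, 0x16 -> "22" */
        signed char exponent = (signed char)bytes[2];  /* third byte: two's complement, 0xF7 -> -9 */

        printf("%c.%c%s E%d\n", hi[0], hi[1], lo, exponent);  /* prints "5.222 E-9" */
        return 0;
    }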

Here is a related newsgroup posting:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=5065000000080000003A...@exchange.ni.com&rnum=2&prev=/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=labview+%2522.vi%2522+ascii+decimal

We can handle the last byte by a simple, short lookup table. But we
need to know how to handle the first two bytes.

Sincerely,
- Chris Balz.

- - -

Martin Riddle

Feb 12, 2004, 9:27:56 AM
Use the data conversion functions:

Flatten - can flatten a string into a U8 array (or whatever the type descriptor is).
String conversion functions - equivalent to 'C' sprintf.

It really isn't that hard; it just seems like more to do (and sometimes it is) in the GUI.
Flatten the data, index each byte, and then format into your displayable string.
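In C terms, that flatten / index / format flow is roughly the following (an illustrative analogue, not LabView code):

    /* Rough C analogue of: flatten to a byte array, index each byte, format. */
    #include <stdio.h>

    int main(void)
    {
        unsigned char raw[3] = {0x52, 0x16, 0xF7};   /* bytes indexed out of the flattened data */
        char line[16];

        sprintf(line, "%02X %02X %02X", raw[0], raw[1], raw[2]);
        printf("%s\n", line);                        /* "52 16 F7" */
        return 0;
    }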

Cheers

"Christopher M. Balz" <Christop...@StanfordAlumni.org> wrote in message news:343aeada.04021...@posting.google.com...

Brian Powell

Feb 12, 2004, 9:55:05 AM
If I understand correctly, you have a string with those three bytes in
it.

You might consider using String to Byte Array. This will help with
the second and third bytes--it'll convert them to unsigned int8's.

The third byte is signed, but you can just convert that byte to an
int8 and it'll do the right thing.

Once you have it in an array of bytes, it should be pretty easy to use
Format Into String to create what you want. If you can't figure it
out, let us know.
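The signed-byte point, in C terms (a sketch; the cast plays the role of converting the byte to an int8):

    /* C sketch of reading the third byte as a signed value. */
    #include <stdio.h>

    int main(void)
    {
        unsigned char third = 0xF7;                /* third byte as received */
        signed char exponent = (signed char)third; /* reinterpret as signed: 0xF7 -> -9 */
        printf("%d\n", exponent);                  /* prints -9 */
        return 0;
    }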

Brian

Christopher M. Balz

Feb 13, 2004, 1:13:29 PM
Thanks guys. We already had the bytes picked out of the array and
ready for processing. The solution was to use the 'number to
hexadecimal' function under the Strings heading. Then we had a string
literal representing the direct, actual characters of the hexadecimal
value of the byte.

This was quite counterintuitive, because we had the array of bytes
ready for processing, and generally one must cast datatypes to go from
byte to number. The LabView 4.x manuals weren't much help on this
issue, and consequently this common problem absorbed a lot of time.
But their tech support was rather astounding (they sent a developer
out to help us!)

Minor note: both of the first two bytes were in binary-coded decimal (BCD) format.
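For comparison, the C-family analogue of that 'number to hexadecimal' step is a "%02X" format (illustrative only; the LabView string function was the actual fix):

    /* C analogue of 'number to hexadecimal': the BCD byte's hex digits
       are exactly the characters shown on the instrument display. */
    #include <stdio.h>

    int main(void)
    {
        unsigned char first = 0x52;        /* BCD-coded: its hex digits are the display digits */
        char digits[3];
        sprintf(digits, "%02X", first);    /* "52" -> instrument display reads 5.2... */
        printf("%s\n", digits);
        return 0;
    }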

- CB

Brian Powell <x...@no.email> wrote in message news:<50650000000500000050...@exchange.ni.com>...
