#include <formatio.h>   /* CVI Formatting and I/O Library, declares Scan */

void main (void)
{
    /* 17 bytes: 16 hex characters plus the terminating NUL */
    char Input[] = "C04874C1A0000000";
    double Result;

    Scan (Input, "%s>%f", &Result);
}
Following an example I tried the line below. This gave a result of 2.1738120934444362E-71, when the actual decimal conversion is -48.9121589.
#include <formatio.h>

void main (void)
{
    /* 17 bytes: 16 hex characters plus the terminating NUL */
    char Input[] = "C04874C1A0000000";
    double Result;

    Scan (Input, "%1f[z]>%f", &Result);
}
I then tried single precision, which is actually my desired format. This gives a result of 1.8364521E-39 when the actual decimal conversion is -48.9121589. (See the conversion page http://babbage.cs.qc.edu/IEEE-754/32bit.html.)
#include <formatio.h>

void main (void)
{
    /* 9 bytes: 8 hex characters plus the terminating NUL */
    char Input[] = "C243A60D";
    float Result;

    Scan (Input, "%s>%f[b4]", &Result);
}

If the error of my ways can be explained, I would be grateful.
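For reference, the expected value can be reproduced by decoding the IEEE 754 fields of C243A60D by hand. This is only a check in plain standard C (no CVI calls), assuming the usual single-precision layout of 1 sign bit, 8 exponent bits with a bias of 127, and 23 mantissa bits:

#include <stdio.h>
#include <math.h>

int main (void)
{
    unsigned long bits = 0xC243A60DUL;            /* raw single-precision bit pattern */

    int sign               = (bits >> 31) & 0x1;  /* 1 -> negative                    */
    int exponent           = (bits >> 23) & 0xFF; /* 0x84 = 132; 132 - 127 = 5        */
    unsigned long mantissa = bits & 0x7FFFFFUL;   /* 0x43A60D = 4433421               */

    double value = (sign ? -1.0 : 1.0)
                 * (1.0 + (double) mantissa / 8388608.0)   /* 8388608 = 2^23 */
                 * pow (2.0, exponent - 127);

    printf ("%f\n", value);                       /* prints -48.912159 */
    return 0;
}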
Scan (Input, "%s>%f", &Result); is the form to use for a string that holds a plain decimal number, but your ASCII representation of a binary code corresponding to a floating-point number will need more manipulation. JR
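For what it is worth, one way to do that manipulation is to parse the hex text into an integer and then copy those bytes into a float of the same size. The sketch below is only an illustration in standard C (sscanf and memcpy rather than the CVI Scan function); it assumes the 8 hex characters are the big-endian IEEE 754 bit pattern and that unsigned int is 32 bits wide on the target:

#include <stdio.h>
#include <string.h>

int main (void)
{
    char Input[] = "C243A60D";   /* 8 hex characters = 32-bit single-precision pattern */
    unsigned int bits;           /* assumed to be 32 bits wide                          */
    float Result;

    /* Convert the hex text to an integer, then reinterpret the same */
    /* 32 bits as an IEEE 754 float.                                  */
    sscanf (Input, "%x", &bits);
    memcpy (&Result, &bits, sizeof Result);

    printf ("%f\n", Result);     /* prints -48.912159 on an IEEE 754 machine */
    return 0;
}

The 16-character double case works the same way: read the text with "%llx" into an unsigned long long and memcpy it into a double.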