int('11',2) # returns 3
But a binary number with a radix point raises a ValueError:
int('1.1',2) # should return 1.5, raises ValueError instead.
Is this by design? It seems to me that this is not the correct
behavior.
- Aditya
So, why should int('1.1', 2) throw an error when int('1.1') doesn't?
Regards,
Pat
Well technically that would be a 'radix point', not a decimal point.
But I think the problem is that computers don't store fractional values
that way internally; they use either floating-point or fixed-point math.
You would never look at raw binary data on a computer and see something
like '1010.1010', no one would write it that way, and no language (that
I know of) would accept it as a valid literal if you wrote something
like "x = 0b1010.1010".
So in that sense it might be an oversight rather than an intentional
omission, but either way it's not a very practical or useful feature.
Because int stands for integer, and 1.1 is not an integer. You get the
same error if you try int('1.1').
The int() constructor returns integers.
So, look to float() for non-integral values.
Binary representation isn't supported yet,
but we do have hex:
>>> float.fromhex('1.8')
1.5
Raymond
Well, the problem is that ints are integers. You can't even do that
with an ordinary decimal value: int('2.1') will also throw an error.
And floats don't support radix conversion, because no one really writes
numbers that way. (At least not computer programmers...)
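There is no built-in for this, but for illustration, here is a minimal hand-rolled sketch of a radix-point parser (the function name is my own, not any stdlib API):

```python
def parse_radix(s, base=2):
    """Convert a string with an optional radix point to a float."""
    if '.' in s:
        whole, frac = s.split('.')
    else:
        whole, frac = s, ''
    value = float(int(whole, base)) if whole else 0.0
    # Fractional digit i (counting from 1) contributes digit / base**i.
    for i, digit in enumerate(frac, start=1):
        value += int(digit, base) / base ** i
    return value

print(parse_radix('1.1', 2))        # 1.5
print(parse_radix('1010.1010', 2))  # 10.625
```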
On 3/30/2010 11:43 AM, Shashwat Anand wrote:
> The conversion is not supported for non-integral values AFAIK, though a
> string like '0b123.456' could always be parsed by hand. I guess you can
> always put such a converter up on the Python recipes site.
What you want is for float() to accept a base, but that is rarely
needed.
That makes sense. The closest thing I've found is this question on
StackOverflow: http://stackoverflow.com/questions/1592158/python-convert-hex-to-float
It seems to me that adding a conversion feature to floats would be a
lot more intuitive.
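The approach in that StackOverflow question can be sketched with the stdlib struct module: reinterpret a raw 32-bit hex pattern as the IEEE 754 float it encodes (the example bit pattern below is my own, chosen to land near 1.23):

```python
import struct

# 0x3f9d70a4 is an IEEE 754 single-precision bit pattern close to 1.23.
bits = int('3f9d70a4', 16)
# Pack the integer as 4 big-endian bytes, then unpack those bytes as a float32.
value = struct.unpack('!f', struct.pack('!I', bits))[0]
print(value)  # ~1.23 (the nearest float32)
```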
That looks very elegant, thanks!
>>> int('1.1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '1.1'
int('1.1', 2) shouldn't return 1.5 because 1.5 isn't an integer.
The obvious question is, why doesn't float('1.1', 2) work? The answer is
that Python doesn't support floats in any base except 10. It's not
something needed very often, and it's harder to get right than it might
seem.
--
Steven
Hex floats are useful because you can get a string representation
of the exact value of a binary floating point number. It should
always be the case that
float.fromhex(float.hex(x)) == x
That's not always true of decimal representations, due to rounding problems.
Long discussion of this here: "http://bugs.python.org/issue1580"
John Nagle
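The round-trip property is easy to check interactively; this also shows why short decimal forms can lose it (0.1 has no exact binary representation):

```python
x = 0.1
h = x.hex()
print(h)                      # '0x1.999999999999ap-4'
assert float.fromhex(h) == x  # hex form round-trips exactly
# A decimal string needs up to 17 significant digits to guarantee the
# same; repr() produces such a string in Python >= 3.1.
assert float(repr(x)) == x
```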
It is supported in the gmpy module.
>>> import gmpy
>>> help(gmpy.mpf)
Help on built-in function mpf in module gmpy:
mpf(...)
    mpf(n): builds an mpf object with a numeric value n (n may be any
        Python number, or an mpz, mpq, or mpf object) and a default
        precision (in bits) depending on the nature of n
    mpf(n,bits=0): as above, but with the specified number of bits (0
        means to use default precision, as above)
    mpf(s,bits=0,base=10): builds an mpf object from a string s made up
        of digits in the given base, possibly with fraction-part (with
        period as a separator) and/or exponent-part (with exponent
        marker 'e' for base<=10, else '@'). If base=256, s must be a
        gmpy.mpf portable binary representation as built by the function
        gmpy.fbinary (and the .binary method of mpf objects).
        The resulting mpf object is built with a default precision (in
        bits) if bits is 0 or absent, else with the specified number of
        bits.
>>> gmpy.mpf('1.1',0,2)
mpf('1.5e0')
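If pulling in gmpy is overkill, a stdlib-only sketch with fractions.Fraction gives the same exact conversion (the helper name is my own):

```python
from fractions import Fraction

def frac_from_base(s, base=2):
    """Exactly convert a string with a radix point in the given base."""
    whole, _, frac = s.partition('.')
    result = Fraction(int(whole, base) if whole else 0)
    # Digit i after the point (counting from 1) contributes digit / base**i.
    for i, digit in enumerate(frac, start=1):
        result += Fraction(int(digit, base), base ** i)
    return result

print(frac_from_base('1.1', 2))         # 3/2
print(float(frac_from_base('1.1', 2)))  # 1.5
```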
> Hex floats are useful because you can get a string representation of
> the exact value of a binary floating point number. It should always
> be the case that
>
> float.fromhex(float.hex(x)) == x
Until you try running your program on a machine that represents floats
using a radix other than 2, 4, or 16.
;)
And it works for NaN and Inf too!
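Concretely, the IEEE 754 specials survive the hex round trip as well (stdlib only):

```python
import math

# float.hex()/float.fromhex() pass the specials through as the strings
# 'inf' and 'nan', so they serialize portably across platforms.
print(float('inf').hex())    # 'inf'
print(float.fromhex('inf'))  # inf
assert math.isnan(float.fromhex('nan'))
```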
It would have been nice to have had that 5-6 years ago when I had to
write my own pickle/unpickle methods for floating point values so that
inf and nan were portable between Windows and Linux.
--
Grant Edwards <grant.b.edwards at gmail.com>
Yow! But they went to MARS around 1953!!
gmpy gives you arbitrary precision floats, also.