Guido van Rossum, 29.09.2012 23:06:
> On Sat, Sep 29, 2012 at 1:34 PM, Calvin Spealman wrote:
>> I like the idea a lot, but I recognize it will get a lot of pushback. I
>> think learning Integer -> Decimal -> Float is a lot more natural than
>> learning Integer -> Float -> Decimal. The Float type represents a
>> specific hardware acceleration with data-loss tradeoffs, and the use
>> should be explicit. I think that as someone learns, the limitations of
>> Decimals will make a lot more sense than those of Floats.
> Hm. Remember decimals have data loss too: they can't represent 1/3 any
> better than floats can.

That would be "fractions" territory. Given that all three have their own
area where they shine and a large enough area where they really don't, I
can't see a strong enough argument for making any of the three "the
default". Also note that current float is a very C friendly thing,
whereas the other two are not.

Anecdotal PS: I recently got caught in a discussion about the impressive
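The data-loss point above is easy to demonstrate at the interpreter; a minimal sketch using only the standard library (not part of the original mail):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal is exact for decimal literals, but 1/3 gets rounded to the
# current context precision (28 significant digits by default).
third_dec = Decimal(1) / Decimal(3)
print(third_dec * 3)          # 0.9999999999999999999999999999 -- data loss

# float rounds 1/3 to the nearest binary64 value; multiplying back by 3
# happens to round to exactly 1.0, but that is a rounding accident.
third_flt = 1 / 3
print(third_flt * 3 == 1.0)   # True

# Fraction represents 1/3 exactly -- the "fractions territory" above.
third_frac = Fraction(1, 3)
print(third_frac * 3 == 1)    # True, exactly
```

None of the three is lossless for every use case, which is the point: each type shines in a different territory.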