Hoping the bool type will be fixed,
Alan Isaac
I'm not sure how the bools we have now are not "real."
> Say something along the line of the numpy implementation for arrays
> of type 'bool'?
What aspect of this do you want? A bool typecode for the stdlib array
module?
I can guess a number of things that you might mean, but it would be best
if you explained with an example of what current behavior is and what
you would like it to be.
--
Michael Hoffman
> Is there any discussion of having real booleans
> in Python 3000?
The last I have seen is
http://mail.python.org/pipermail/python-3000/2007-January/005284.html
> Hoping the bool type will be fixed,
Do you care to explain what is broken?
Peter
OK. Thanks.
> Do you care to explain what is broken?
I suppose one either finds coercion of arithmetic operations to int
to be odd/broken or does not. But that's all I meant.
My preference would be for the arithmetic operations *,+,-
to be given the standard interpretation for a two element
boolean algebra:
http://en.wikipedia.org/wiki/Two-element_Boolean_algebra
In contrast with the link above,
it does not bother me that arithmetic with ints and bools
produces ints.
Cheers,
Alan Isaac
If I understand this right, the biggest difference from the current
implementation would be that::
True + True == True
instead of:
True + True == 2
What's the advantage of that? Could you give some use cases where that
would be more useful than the current behavior?
It's much easier to explain to newcomers that *, + and - work on True
and False as if they were 1 and 0 than it is to introduce them to a two
element boolean algebra. So making this kind of change needs a pretty
strong motivation from real-world code.
Steve
I prefer the use of 'and' and 'or', and they feel more pythonic than & and +.
--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
> Hoping the bool type will be fixed,
Is there any type named "bool" in standard Python?
Regards,
Björn
--
BOFH excuse #207:
We are currently trying a new concept of using a live mouse.
Unfortunately, one has yet to survive being hooked up to the
computer.....please bear with us.
> My preference would be for the arithmetic operations *,+,-
> to be given the standard interpretation for a two element
> boolean algebra:
> http://en.wikipedia.org/wiki/Two-element_Boolean_algebra
>>> [bool(True+True), bool(True+False)]
[True, True]
Works for me, or did I misunderstand you?
If you don't want explicitly to write bool, you could define your
own True and False classes.
Regards,
Björn
--
BOFH excuse #184:
loop found in loop in redundant loopback
As far as I know, there is no such discussion among the developers.
c.l.py is always a different matter ;-)
> Alan Isaac wrote:
> > Is there any discussion of having real booleans
> > in Python 3000?
>
> I'm not sure how the bools we have now are not "real."
I'm guessing that Alan is referring (at least in part) to this behaviour:
Python 2.4.4 (#2, Apr 5 2007, 20:11:18)
[...]
>>> True == 1
True
>>> False == 0
True
Whereas a real bool type would have discrete values for True and False
that would not be equal to any other.
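A value that is "not equal to any other" would behave the way None does. A minimal sketch of the idea (the _Bool class and the MyTrue/MyFalse names are invented for illustration, not a proposal):

```python
# Hypothetical sketch: "discrete" truth values that, like None, are
# singletons whose default identity-based equality makes them unequal
# to every other object, including 0 and 1.
class _Bool(object):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

MyTrue = _Bool("MyTrue")
MyFalse = _Bool("MyFalse")

print(MyTrue == 1)    # False -- unlike the real True, which == 1
print(MyFalse == 0)   # False
```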
--
\ "I guess we were all guilty, in a way. We all shot him, we all |
`\ skinned him, and we all got a complimentary bumper sticker that |
_o__) said, 'I helped skin Bob.'" -- Jack Handey |
Ben Finney
> Alan G Isaac wrote:
>
>> My preference would be for the arithmetic operations *,+,-
>> to be given the standard interpretation for a two element
>> boolean algebra:
>> http://en.wikipedia.org/wiki/Two-element_Boolean_algebra
>
>>>> [bool(True+True), bool(True+False)]
> [True, True]
>
> Works for me, or did I misunderstand you?
It seems to me that you deliberately misunderstood him. Why else would you
type-cast the integers 2 and 1 to bools to supposedly demonstrate that
there's nothing wrong with operations between bools returning ints?
I mean, by that logic, it should be okay if we had
False = []
True = [None]
because:
bool(False + True), bool(True + True)
also gives (True, True). But that doesn't justify the choice of bools
being lists any more than it justifies the choice of bools being ints.
--
Steven.
> It's much easier to explain to newcomers that *, + and - work on True
> and False as if they were 1 and 0 than it is to introduce them to a two
> element boolean algebra. So making this kind of change needs a pretty
> strong motivation from real-world code.
Pretending that False and True are just "magic names" for 0 and 1 might be
"easier" than real boolean algebra, but that puts the cart before the
horse. Functionality comes first: Python has lists and dicts and sets
despite them not being ints, and somehow newcomers cope. I'm sure they
will cope with False and True not being integers either.
I mean, really, does anyone *expect* True+True to give 2, or that 2**True
even works, without having learnt that Python bools are ints? I doubt it.
And the old Python idiom for an if...then...else expression:
["something", "or other"][True]
tends to come as a great surprise to most newbies. So I would argue that
bools being ints is more surprising than the opposite would be.
--
Steven.
Are they are aren't they?
print 1 in [True]
print 1 == True
print len(set(map(type, [1, 1])))
print len(set(map(type, [1, True])))
I disagree. I think you'd get just as many odd stares if:
True + True == True
But I think all you're really saying is that newbies don't expect things
like +, -, *, etc. to work with bools at all. Which I agree is probably
true.
So it seems like you're really arguing for raising exceptions in all
these situations. That would actually be fine with me since I never use
bools as ints, but I suspect getting it past python-dev will be an
uphill battle since it will break large chunks of code.
STeVe
We had a huge discussion of this stuff when bools were introduced
in Python 2.3 or thereabouts. The current system is about the
best way that doesn't break everything in sight. The weirdness
is basically a consequence of bools being an afterthought in Python.
Python has a long tradition of implicitly casting other values to
bool, e.g. strings, lists, sets etc. are all false if empty, etc.
> It's much easier to explain to newcomers that *, + and - work on
> True and False as if they were 1 and 0 than it is to introduce them
> to a two element boolean algebra.
I've found exactly the opposite. When explaining that None is a value
that is not equal to any other, and that is a useful property, I've
received little confusion. Whereas when someone discovers that
arithmetic works on True and False as if they were numbers, or that
they are in fact *equal to* numbers, their function as boolean values
is much harder to explain.
So, it's for the purposes of explaining True and False to newcomers
(let alone keeping things clear when observing a program) that I would
welcome True and False as discrete values, so that when those values
are produced by an expression or function the result is clearly a
boolean value and not a faked one that is "really" an integer.
--
\ "Pinky, are you pondering what I'm pondering?" "Wuh, I think |
`\ so, Brain, but wouldn't anything lose its flavor on the bedpost |
_o__) overnight?" -- _Pinky and The Brain_ |
Ben Finney
> I mean, really, does anyone *expect* True+True to give 2, or that 2**True
> even works, without having learnt that Python bools are ints? I doubt it.
Sure, why not? It's pretty damn useful. Ever heard of things like "indicator
functions", "Iverson brackets" etc.? Mathematicians have long been using
broken and cumbersome ad hoc notations to be able to do stuff like
``(x<b)*f(x)`` or ``(-1)**(i==j)`` (e.g. ``(-1)^{\delta_{ij}}``).
And python is not alone in this either; take matlab:
>> true+true
ans =
2
so certainly people coming from matlab to scipy *will* often expect True+True
== 2.
I'd claim that even if it weren't for backwards compatibility, python bools
should behave exactly as they are -- for a language that assigns a truth value
to instances of any type, this is the right behavior.
'as
> Steven D'Aprano <st...@REMOVE.THIS.cybersource.com.au> writes:
>> Pretending that False and True are just "magic names" for 0 and 1 might
>> be "easier" than real boolean algebra, but that puts the cart before
>> the horse. Functionality comes first: Python has lists and dicts and
>> sets despite them not being ints, and somehow newcomers cope. I'm sure
>> they will cope with False and True not being integers either.
>
> Are they are aren't they?
I'm sorry, I can't parse that sentence.
> print 1 in [True]
> print 1 == True
> print len(set(map(type, [1, 1])))
> print len(set(map(type, [1, True])))
But I guess that you are probably trying to make the point that True and
False are instances of a _subtype_ of int rather than ints, under the
mistaken idea that this pedantry would matter. (If this is not the case,
then I apologize for casting aspersions.) However, you may notice that I
said _integers_, which is not the same thing as ints: the Python types
int and bool are both implementations of the mathematical "integer" or
"whole number".
--
Steven.
>> I mean, really, does anyone *expect* True+True to give 2, or that
>> 2**True even works, without having learnt that Python bools are ints? I
>> doubt it.
>>
>> And the old Python idiom for an if...then...else expression:
>>
>> ["something", "or other"][True]
>>
>> tends to come as a great surprise to most newbies. So I would argue
>> that bools being ints is more surprising than the opposite would be.
>
> I disagree. I think you'd get just as many odd stares if:
>
> True + True == True
Well, sure, if you're talking about people with no programming experience
whatsoever, or at least those who aren't at all familiar with the concept
of operator overloading. But we don't prohibit "foo" + "bar" because of
the existence of non-programmers.
> But I think all you're really saying is that newbies don't expect things
> like +, -, *, etc. to work with bools at all. Which I agree is probably
> true.
No, what I am saying is that True and False being integers under the hood
is a surprising implementation detail. It has no _inherent_ benefit: it
is merely a practical way to bring bools into the language while
remaining backward compatible. For Python 2.x, that was the least bad
solution to the issue "oops, we should have included a bool type".
Python 3 is allowed to break backwards compatibility, and there is no
reason I can see to keep the current hack.
--
Steven.
> We had a huge discussion of this stuff when bools were introduced in
> Python 2.3 or thereabouts. The current system is about the best way
> that doesn't break everything in sight.
But Python 3 is allowed to break backwards compatibility, so that's no
longer a reason for keeping the current behaviour.
> The weirdness is basically a
> consequence of bools being an afterthought in Python. Python has a long
> tradition of implicitly casting other values to bool, e.g. strings,
> lists, sets etc. are all false if empty, etc.
No, that's not true. How could Python cast objects to bool before bool
existed?
What Python has is much more powerful: the concept of Something versus
Nothing. "x" and 4 and [23, "foo"] are all Something. "" and 0 and [] and
None are all Nothing. No cast, whether implicit or explicit, is needed.
What Python does is call the object's __nonzero__ method, if it has one,
otherwise it checks to see if the object has a non-zero length (if it has
a length), and otherwise the object is considered true.
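That lookup order is easy to observe with a couple of toy classes (the class names are invented; Python 3 later renamed the hook from __nonzero__ to __bool__):

```python
# Toy classes showing the truth-testing order described above.
# (Python 2 calls __nonzero__; Python 3 renamed the hook to __bool__.)
class Explicit(object):
    def __bool__(self):            # the explicit hook wins
        return False
    __nonzero__ = __bool__         # same behaviour under Python 2

class Sized(object):
    def __len__(self):             # consulted when there is no hook
        return 0

class Plain(object):
    pass                           # no hook, no length: considered true

results = (bool(Explicit()), bool(Sized()), bool(Plain()))
print(results)    # (False, False, True)
```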
From a purely functional perspective, bools are unnecessary in Python. I
think of True and False as syntactic sugar. But they shouldn't be
syntactic sugar for 1 and 0 any more than they should be syntactic sugar
for {"x": "foo"} and {}.
--
Steven.
Cons (this comes from the pioneering work of Dijkstra and his coworkers):
The distributive law (one of them) in boolean algebra looks like this:
a /\ (b \/ c) = (a/\b) \/ (a/\c)
which becomes simpler to read and type and more familiar as
a(b+c) = ab + ac
So far so good. However its dual is
a\/(b/\c) = (a\/b) /\ (a\/c)
which in arithmetic notation becomes
a + bc = (a+b)(a+c)
This is sufficiently unintuitive and unnatural that even people
familiar with boolean algebra don't get it (or so Dijkstra, Gries etc.
claim).
Boolean algebra is perfectly dual; arithmetic is not. That is why we
need the logical connectives 'and' and 'or' and don't somehow fudge along
with + and *. Therefore True and False should belong with 'and' and 'or',
and 0, 1 should belong with + and *.
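For what it's worth, both distributive laws are easy to check by brute force over the two truth values using the keyword connectives (a quick sanity check, not part of the Dijkstra/Gries argument):

```python
# Brute-force check of both distributive laws over {False, True},
# reading /\ as 'and' and \/ as 'or'.
values = (False, True)
for a in values:
    for b in values:
        for c in values:
            assert (a and (b or c)) == ((a and b) or (a and c))  # a /\ (b \/ c)
            assert (a or (b and c)) == ((a or b) and (a or c))   # a \/ (b /\ c)
print("both distributive laws hold on all 8 assignments")
```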
No, I think Bjoern just wanted to point out that all those binary
boolean operators already work *perfectly*. You just have to emphasize
that you're doing boolean algebra there, using `bool()`.
"Explicit is better than implicit."
>>> bool(False-True)
True
But reread Steven.
Cheers,
Alan Isaac
>>> type(True)
<type 'bool'>
Cheers,
Alan Isaac
> From a purely functional perspective, bools are unnecessary in Python. I
> think of True and False as syntactic sugar. But they shouldn't be
> syntactic sugar for 1 and 0 any more than they should be syntactic sugar
> for {"x": "foo"} and {}.
But `bools` are useful in some contexts. Consider this:
>>> 1 == 1
True
>>> cmp(1, 1)
0
>>> 1 == 2
False
>>> cmp(1, 2)
-1
At a first look you can see that `cmp` does not return a boolean value,
which is not so obvious to all newbies.
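Incidentally, the replacement idiom commonly suggested for `cmp` relies on exactly the bool-as-int arithmetic being debated here (a sketch, with a hypothetical name cmp_like):

```python
# A common replacement sketch for cmp(): (a > b) - (a < b).
# Ironically, it relies on the very bool-as-int arithmetic debated here.
def cmp_like(a, b):
    return (a > b) - (a < b)

print(cmp_like(1, 1))   # 0
print(cmp_like(1, 2))   # -1
print(cmp_like(2, 1))   # 1
```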
Rob
What boolean operation does '-' represent?
What would you expect this operation to return then?
The * and + operations described in the previously mentioned document
(http://en.wikipedia.org/wiki/Two-element_Boolean_algebra) work as you'd
expect them to do, if you explicitly state "hey, this should be a
boolean algebra operation". And that's okay in my opinion. If you could
describe what's wrong about that result, we could probably help better.
Steven just suggests another (IMO totally unrelated) implementation of
bool types, a point that Bjoern does not even touch here. Besides, his
implementation fails at the * operation.
Bjoern does not (sorry if I say anything wrong, Bjoern) say the current
behaviour is right or wrong. He just says that you can make Python aware
of boolean algebra utilizing `bool()` easily.
bool-ly,
Stargaming
regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
--------------- Asciimercial ------------------
Get on the web: Blog, lens and tag the Internet
Many services currently offer free registration
----------- Thank You for Reading -------------
Complementation.
And as usual, a-b is to be interpreted as a+(-b).
In which case the desired behavior is
False-True = False+(-True)=False+False = False
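Put together, the algebra Alan describes could be sketched as a standalone class, purely for illustration (the class B and the names T/F are hypothetical, not a proposal for bool itself):

```python
# Hypothetical two-element boolean algebra: + is 'or', * is 'and',
# unary - is complement, and a - b is defined as a + (-b).
class B(object):
    def __init__(self, v):
        self.v = bool(v)
    def __add__(self, other):
        return B(self.v or other.v)        # boolean +
    def __mul__(self, other):
        return B(self.v and other.v)       # boolean *
    def __neg__(self):
        return B(not self.v)               # complement
    def __sub__(self, other):
        return self + (-other)             # a - b == a + (-b)
    def __eq__(self, other):
        return isinstance(other, B) and self.v == other.v
    def __repr__(self):
        return "T" if self.v else "F"

T, F = B(True), B(False)
print(T + T)   # T (not 2)
print(F - T)   # F, since False + (-True) == False + False
```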
In response to Stargaming, Steve is making
a point about the incoherence of certain arguments,
not proposing an implementation.
Cheers,
Alan Isaac
In an ideal world, bool might be better implemented as a
separate integer type that knows how to perform mixed-mode
arithmetic.
I mentioned Python 3000 since that is an opportunity for an ideal world.
Cheers,
Alan Isaac
Remember that while Python 3 is allowed to break backwards
compatibility, it's only supposed to do it when there are concrete
benefits. Clearly there are existing use cases for treating bools like
ints, e.g. from Alexander Schmolck's email:
(x < b) * f(x)
(-1) ** (i == j)
If you want to remove this functionality, you're going to need to
provide some new use cases that it satisfies that are clearly more
important than these existing ones.
STeVe
Excellent point! And as long as we have them I
agree with Alan that the boolean data type should
implement real boolean algebra with respect to +, *, and ~,
for example supporting operator precedence appropriately
(versus using and, or, not) and also correctly
implementing DeMorgan's laws and other property's of
boolean algebra
~(a*b) == ~a + ~b
etcetera.
1+True is bad practice and should be an error.
Anything else is false advertising
(and the java community has the patent on that
methodology :c) ).
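For what it's worth, the keyword operators already satisfy De Morgan's laws on today's bools; it is the ~ spelling that leaks the int implementation:

```python
# De Morgan's laws already hold for bools under and/or/not:
for a in (False, True):
    for b in (False, True):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))

# ... whereas ~ is inherited from int and is not a boolean complement:
print(~True)    # -2
```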
-- Aaron Watters
===
Why does a giraffe have such a long neck?
Because its head is so far from its body!
Sorry, I fail to see your point!? What has ``==`` to do with `cmp()` here?
The return of `cmp()` is an integer that cannot and should not be seen as
a boolean value.
Ciao,
Marc 'BlackJack' Rintsch
I always thought, at least in a Python context, A-B would trigger
A.__sub__(B), while A+(-B) triggers A.__add__(B.__neg__()). A better
choice could be A+~B (A.__add__(B.__invert__())) because it's always
unary (and IMO slightly more visible).
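The dispatch is easy to confirm with a small recording class (Rec is an invented name, purely illustrative):

```python
# Record which special method Python invokes for each spelling.
calls = []

class Rec(object):
    def __sub__(self, other):
        calls.append("__sub__"); return self
    def __add__(self, other):
        calls.append("__add__"); return self
    def __neg__(self):
        calls.append("__neg__"); return self
    def __invert__(self):
        calls.append("__invert__"); return self

A, B = Rec(), Rec()
A - B       # __sub__
A + (-B)    # __neg__, then __add__
A + ~B      # __invert__, then __add__
print(calls)   # ['__sub__', '__neg__', '__add__', '__invert__', '__add__']
```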
> In response to Stargaming, Steve is making
> a point about the incoherence of certain arguments,
> not proposing an implementation.
Why should it be incoherent? Bjoern is pointing out an important aspect
of how Python handles binary algebra (correctly). In contrast, Steven
tries to invert his argument. Following, I showed why Steven's proof is
wrong because his implementation fails at some aspects where the current
one works. So I cannot see how Bjoern's argument is either wrong or not
relevant.
Both can be cleanly handled using int():
int(x < b) * f(x)
(-1) ** int(i == j)
Not that I have any clear opinion on the topic FWIW.
Before `bool` appeared it looked like this:
>>> 1 == 1
1
>>> cmp(2, 1)
1
Which result is a boolean value?
Rob
> How could Python cast objects to bool before bool
> existed?
Time machine?
Sorry, I couldn't resist.
Peter
I may be biased since I learned C before Python and learned Python
before it had a Boolean type, but I'd think that having False==0 and
True==1 is not that surprising for most programmers.
> without having learnt that Python bools are ints? I doubt it.
>
> And the old Python idiom for an if...then...else expression:
>
> ["something", "or other"][True]
>
> tends to come as a great surprise to most newbies.
This idiom should slowly disappear now that we have a clean syntax for such
expressions.
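Namely the conditional expression added in Python 2.5, which also evaluates only the chosen branch (the list-index trick builds the list first, so both branches are evaluated):

```python
cond = True
# Old index trick: the list is built first, so both branches are evaluated:
old = ["no", "yes"][cond]
# Python 2.5 conditional expression: only the chosen branch is evaluated:
new = "yes" if cond else "no"
print(old)   # yes
print(new)   # yes
```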
> So I would argue that
> bools being ints is more surprising than the opposite would be.
I suppose this mostly has to do with one's background.
> No, I think Bjoern just wanted to point out that all those binary
> boolean operators already work *perfectly*. You just have to emphasize
> that you're doing boolean algebra there, using `bool()`.
> "Explicit is better than implicit."
I think that the assignability to the names 'True' and 'False' is
incorrect, or at the very least subject to all sorts of odd results.
Look at this:
>>> True, False
(True, False)
>>> True = False
>>> True, False
(False, False)
>>> True == False
True
>>> (True == False) == True
False
Yeah, I know: "Doctor, it hurts when I do this". Doc: "So don't do
that!". I haven't kept up with all the Python 3000 docs, so does
anyone know if True and False will become true keywords, and whether
oddball stuff like the above will no longer be possible?
-- Ed Leafe
-- http://leafe.com
-- http://dabodev.com
If you want to do algebra with bools in python then use the logical
operators (and or not) and not the arithmetical operators.
Eg
>>> False or not True
False
--
Nick Craig-Wood <ni...@craig-wood.com> -- http://www.craig-wood.com/nick
It is necessary for 2.x to not break older code. I believe they will
somehow be reserved, like None, in 3.0.
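That is what happened: in Python 3, True, False, and None are keywords, so rebinding is rejected at compile time:

```python
# In Python 3, True and False are keywords: assigning to them fails
# at compile time rather than silently rebinding a name.
try:
    compile("True = False", "<demo>", "exec")
    outcome = "compiled"
except SyntaxError:
    outcome = "SyntaxError"
print(outcome)   # SyntaxError (under Python 3)
```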
tjr
> No, I think Bjoern just wanted to point out that all those binary
> boolean operators already work *perfectly*. You just have to emphasize
> that you're doing boolean algebra there, using `bool()`.
> "Explicit is better than implicit."
So we should always write things explicitly like:
if bool(bool(some_condition) is True) is True:
first_line = str(some_string).split(str("\n"))[int(0)]
n = int(int(len(list(some_list))) + int(1))
elif bool(bool(some_condition) is False) is True:
f = float(math.sin(float(6.0)/float(math.pi)))
instead of the less explicit code. I'll try to remember that, thank you
for the advice.
--
Steven
http://www.python.org/dev/peps/pep-0285/
tjr
None was present in the language for a long time before 2.3 though,
and any code that actually assigned to it was asking for trouble.
True and False didn't exist til recently and it was common for
programs to define them.
>
> Steven D'Aprano wrote:
>
>> From a purely functional perspective, bools are unnecessary in Python. I
>> think of True and False as syntactic sugar. But they shouldn't be
>> syntactic sugar for 1 and 0 any more than they should be syntactic sugar
>> for {"x": "foo"} and {}.
>
> But `bools` are useful in some contexts.
Agreed. Syntactic sugar is useful, even though it is unnecessary. I'm not
against having bools. I just want them to be Booleans, not dicts, or
lists, or sets, or even integers.
--
Steven.
> Steven D'Aprano wrote:
>> On Tue, 10 Jul 2007 17:47:47 -0600, Steven Bethard wrote:
>>> But I think all you're really saying is that newbies don't expect things
>>> like +, -, *, etc. to work with bools at all. Which I agree is probably
>>> true.
>>
>> No, what I am saying is that True and False being integers under the hood
>> is a surprising implementation detail. It has no _inherent_ benefit: it
>> is merely a practical way to bring bools into the language while
>> remaining backward compatible. For Python 2.x, that was the least bad
>> solution to the issue "oops, we should have included a bool type".
>>
>> Python 3 is allowed to break backwards compatibility, and there is no
>> reason I can see to keep the current hack.
>
> Remember that while Python 3 is allowed to break backwards
> compatibility, it's only supposed to do it when there are concrete
> benefits. Clearly there are existing use cases for treating bools like
> ints, e.g. from Alexander Schmolck's email:
>
> (x < b) * f(x)
> (-1) ** (i == j)
You have cause and effect confused here. Expressions like (i == j) used
to return 0 and 1, and it was to avoid breaking hacks like the above
that bools were implemented as a subclass of int, not because being
able to write the above was a specific feature requested.
In the hypothetical bright new world of Python with bools that are
actually bools, the above are good cases for explicit being better than
implicit:
int(x < b) * f(x)
(-1) ** int(i == j)
It makes more sense to explicitly cast bools to ints when you want to
do integer arithmetic on them, rather than explicitly casting bools to
bools to do boolean arithmetic! I feel strongly enough about this that I
believe that being able to write (x < b) * f(x) is a DISADVANTAGE -- it
gives me a real WTF moment to look at the code.
In some hypothetical world where backwards compatibility was not an
issue, where bools had already existed, if somebody had specifically asked
for bools to become ints so they could write (x < b) * f(x), I have every
confidence that their request would have been denied, and they would have
been told to explicitly cast the bool to an int. As they should.
--
Steven.
Let me comment on what I suspect is Alan's Hidden Agenda (tm). Since
this question surfaced earlier on the numpy list, I suspect that part
of the motivation here has to do with trying to come up with a natural
way to work with arrays of booleans. The operators and,or,not don't
work for this purpose since they can't be overloaded to return an
arbitrary value. You can almost make this work with &,|,^:
>>> a = np.array([True, False, False])
>>> b = np.array([True, True, False])
>>> a & b
array([ True, False, False], dtype=bool)
>>> a | b
array([ True, True, False], dtype=bool)
>>> a ^ b
array([False, True, False], dtype=bool)
This meshes well with the behavior of True and False:
>>> True & True
True
>>> True | True
True
>>> True ^ True
False
This doesn't leave you with anything equivalent to 'not', however, or at
least nothing consistent. Currently '~a' will complement a boolean array:
>>> ~a
array([False, True, True], dtype=bool)
However that's less than ideal since it doesn't mesh up with the
behavior of booleans on their own:
>>> ~True
-2
That's potentially confusing. It's no skin off my nose since I
use, and will likely continue to use, boolean arrays in only the most
rudimentary ways. However, I thought I'd offer some additional
context.
-tim
Can you use operator.not_(a) ?
You're missing like 400 bool(...) is True constructs there! Fatal error,
recursion depth reached. Aww!
You forgot to quote this bit:
4) Should we strive to eliminate non-Boolean operations on bools
in the future, through suitable warnings, so that for example
True+1 would eventually (in Python 3000) be illegal?
=> No.
There's a small but vocal minority that would prefer to see
"textbook" bools that don't support arithmetic operations at
all, but most reviewers agree with me that bools should always
allow arithmetic operations.
Nis
> In Guido's opinion (and mine, but his counts 100x), the positive
> benefits of the current implementation are greater than the net
> positive benefits of a 'pure' type. See
>
> http://www.python.org/dev/peps/pep-0285/
I assume you're referring to:
6) Should bool inherit from int?
=> Yes.
In an ideal world, bool might be better implemented as a
separate integer type that knows how to perform mixed-mode
arithmetic. However, inheriting bool from int eases the
implementation enormously [...further explanation...]
I accept Guido's explanation in the PEP, that the implementation is
made much easier, as an explanation of why bool inherits from int. I
haven't seen people here expressing that they want the opposite.
To my mind the more fundamental issue is this one:
4) Should we strive to eliminate non-Boolean operations on bools
in the future, through suitable warnings, so that for example
True+1 would eventually (in Python 3000) be illegal?
=> No.
There's a small but vocal minority that would prefer to see
"textbook" bools that don't support arithmetic operations at
all, but most reviewers agree with me that bools should always
allow arithmetic operations.
Frustratingly, unlike the above point about inheritance, the PEP gives
no explanation of why the answer to this is "No". All we get is "most
reviewers agree", with no explanation of *why*.
So, I'm left with the points already made in this thread as to why the
answer should be "yes", and no source online for an official
explanation of the "no".
--
\ "I don't like country music, but I don't mean to denigrate |
`\ those who do. And for the people who like country music, |
_o__) denigrate means 'put down'." -- Bob Newhart |
Ben Finney
>> Is there any type named "bool" in standard Python?
>>>> type(True)
> <type 'bool'>
Thanks anyway, but I remembered it shortly after sending. Thus the
cancel (seems to have failed a bit).
Regards,
Björn
--
BOFH excuse #384:
it's an ID-10-T error
> It seems to me that you deliberately misunderstood him.
I know for sure I didn't.
> Why else would you type-cast the integers 2 and 1 to bools to
> supposedly demonstrate that there's nothing wrong with operations
> between bools returning ints?
Kindly excuse me bothering You.
Björn
--
BOFH excuse #419:
Repeated reboots of the system failed to solve problem
> Expressions like (i == j) used to return 0 and 1, and it was to avoid
> breaking hacks like the above that bools were implemented as a subclass of
> int, not because being able to write the above was a specific feature
> requested. In the hypothetical bright new world of Python with bools that
> are actually bools, the above are good cases for explicit being better than
> implicit:
>
> int(x < b) * f(x)
> (-1) ** int(i == j)
>
> It makes more sense to explicitly cast bools to ints when you want to
> do integer arithmetic on them, rather than explicitly casting bools to
> bools to do boolean arithmetic! I feel strongly enough about this that I
> believe that being able to write (x < b) * f(x) is a DISADVANTAGE -- it
> gives me a real WTF moment to look at the code.
Just because it looks funny to you doesn't mean it is a hack. It turns out
that many mathematical formulas can be written more clearly using this
notation (see e.g. Knuth et al.'s "Concrete Mathematics"), and in mathematics
the limited and often ambiguous ad hoc notation that is neatly subsumed by
this scheme is widely established.
And I don't think adding int is an improvement: ``(x < b) * f(x)`` is plenty
clear (not easily misread as something else and, I believe, not particularly
difficult to figure out even for mediocre python programmers), but adding
padding just obscures the formula structure. Of course it doesn't in the
above examples with less than half a dozen terms, but IMO for anything longer
the int-litter, even if it may soothe the psyches of the anally retentive, is
counterproductive.
'as
Nis Jørgensen wrote:
> You forgot to quote this bit: [4)]
Actually not. That is a different point.
Ben seems bothered by this, but not me.
I do not mind that True+1 is 2.
I won't do it, but I do not object to it
being possible.
I do not like that True+True is 2.
I do not like that bool(False-True) is True.
I do not like that True and False are assignable,
which clearly begs for bugs to pass unseen.
>>> True, False = False, True
>>> print True, False
False True
Who can like that????
I also generally agree with Steve, whose points
keep being twisted beyond recognition.
Also, tah is right about my underlying interest
in arrays of bools (and more specifically,
boolean matrices).
I think Python 3000 is the right time to reconsider
the "ideal world" that Guido mentions in PEP 285.
Cheers,
Alan Isaac
I've never seen the "A-B" used to represent "A and not B", nor have I
seen any other operator used for that purpose in boolean algebra,
though my experience is limited. Where have you seen it used?
What's wrong with 'and', 'or', and 'not'? I think that redefining *,
+, and - to return booleans would only encourage programmers to use
them as shortcuts for standard boolean operations--I'd hate to see
code like this:
>>> if user.registered * (user.age > 13) - user.banned: ...
I don't mind that arithmetic operations are _possible_ with bools, but
I would strongly prefer to see the boolean keywords used for
operations on booleans.
-Miles
s/cast bools to ints/build ints from bools/
AFAICT, there's no such thing as a typecast in Python.
I've personally seen the usual arithmetic operators used for boolean
algebra in quite a lot of papers covering the topic - but I've always
had to translate them to more common boolean ops to understand these
papers.
> What's wrong with 'and', 'or', and 'not'? I think that redefining *,
> +, and - to return booleans would only encourage programmers to use
> them as shortcuts for standard boolean operations--I'd hate to see
> code like this:
>>>> if user.registered * (user.age > 13) - user.banned: ...
>
OMG ! Lord have mercy ! St Guido, save us !
> I don't mind that arithmetic operations are _possible_ with bools, but
> I would strongly prefer to see the boolean keywords used for
> operations on booleans.
+10
> Is there any type named "bool" in standard Python?
check this out.
>>> doespythonrock = True
>>> print type(doespythonrock)
<type 'bool'>
>>>
--
ahlongxp
Software College, Northeastern University, China
ahlo...@gmail.com
http://www.herofit.cn