
does lack of type declarations make Python unsafe?


beli...@aol.com

Jun 15, 2003, 4:28:06 PM
In Python, you don't declare the type of a variable, so AFAIK there is
no way for the interpreter to check that you are calling functions
with variables of the correct type.

Thus, if I define a function correl(x,y) to compute the correlation of
two vectors, which makes sense to me only if x and y are 1-D arrays of
real numbers, Python will not stop me from trying to compute the
correlation of two arrays of integers or strings. C++ and Fortran 95
compilers can spot such errors.

Calling functions with invalid arguments is one of the commonest
programming errors, and I think it is worthwhile to declare variables,
especially the dummy arguments of functions, explicitly to avoid such
errors, even if it takes a few more lines of code. I worry that
Python's convenience in writing small programs comes at the cost of
making bug-free large programs more difficult to write.

I have only been programming in Python for a few weeks -- am I right
to worry?
Are there techniques to ensure that functions are called with
appropriate arguments?

When I actually try to call correl(x,y) in the program

from stats import correl
from Numeric import array
a = array(["a","b"])
b = array(["c","d"])
print correl(a,b) # should be arrays of Floats, not strings

I get the result

Traceback (most recent call last):
  File "xcorr.py", line 5, in ?
    print correl(a,b)
  File "stats.py", line 88, in correl
    ax = mean(x)
  File "stats.py", line 57, in mean
    return sum(x)/n
TypeError: unsupported operand type(s) for /: 'str' and 'int'

It's good that the program crashes rather than returning a bad result,
but a C++ or Fortran 95 compiler would flag the bad arguments during
compilation, which I think is still better. Also, in a large script,
the bad call to correl(x,y) could come after much CPU time had
elapsed, potentially wasting a lot of the programmer's time.

Irmen de Jong

Jun 15, 2003, 5:30:26 PM
beli...@aol.com wrote:
> In Python, you don't declare the type of a variable, so AFAIK there is
> no way for the interpreter to check that you are calling functions
> with variables of the correct type.

Indeed, but then again, it usually doesn't have to :-)

Python is a dynamically typed language.
Python is also a strongly typed language.

Dynamic typing makes for very flexible and future-proof code.
To learn more about this, please read
http://www.razorvine.net/python/PythonLanguageConcepts
and
http://home.att.net/~stephen_ferg/projects/python_java_side-by-side.html#typing


> I have only been programming in Python for a few weeks -- am I right
> to worry?

No, there are thousands of Python programs that run just fine!
Python programmers feel perfectly at ease with Python's dynamic
type system. You just have to learn to understand and
appreciate it.

> Are there techniques to ensure that functions are called with
> appropriate arguments?

You *can* test the types of the arguments yourself, inside
the function. But this is regarded as bad practice amongst Python
programmers, for the above-mentioned reasons.
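
For illustration, a minimal sketch of such an in-function check -- the
correl() signature and the "numbers only" expectation are just the
hypothetical example from the original post:

def correl(x, y):
    # Explicit, in-function type checking (the style described above).
    for name, vec in (("x", x), ("y", y)):
        for item in vec:
            if not isinstance(item, (int, float)):
                raise TypeError("%s must contain only numbers, got %r"
                                % (name, item))
    # ... go on to compute the correlation ...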


--Irmen

John Roth

Jun 15, 2003, 5:35:49 PM

<beli...@aol.com> wrote in message
news:3064b51d.03061...@posting.google.com...

> In Python, you don't declare the type of a variable, so AFAIK there is
> no way for the interpreter to check that you are calling functions
> with variables of the correct type.

A nit. Actually, the interpreter does find this out. It's the compiler
that's clueless.

> Thus, if I define a function correl(x,y) to compute the correlation of
> two vectors, which makes sense to me only if x and y are 1-D arrays of
> real numbers, Python will not stop me from trying to compute the
> correlation of two arrays of integers or strings. C++ and Fortran 95
> compilers can spot such errors.
>
> Calling functions with invalid arguments is one of the commonest
> programming errors,

I'm not sure I'd go that far. What's a common programming
error depends on the programmer, after all.

> and I think it is worthwhile to declare variables,
> especially the dummy arguments of functions, explicitly to avoid such
> errors, even if it takes a few more lines of code. I worry that
> Python's convenience in writing small programs comes at the cost of
> making bug-free large programs more difficult to write.

It depends on your programming style. If you're in the habit of
writing several hundred lines of code at a time, and then debugging it,
you're probably right - a compile error would be a lot more productive
than finding it out several hours into a debugging session.

On the other hand, if you're using an incremental program construction
process like XP's Test Driven Development, then it doesn't make any
difference at all. The interpreter will give you the stack trace a
couple of seconds later than the compiler would be able to give you
an error message.

> I have only been programming in Python for a few weeks -- am I right
> to worry?
> Are there techniques to ensure that functions are called with
> appropriate arguments?

A lot of programmers like to use the "assert" statement for
additional checks on the arguments. Assert will be compiled out
if you specify optimization, so it doesn't have production performance
issues, and many people think that it helps in documenting what the
proper inputs are to a method.
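
For example, a minimal sketch of that assert style (this mean() is only
an illustration, not the stats.py from the original post; under
"python -O" the asserts are compiled away):

def mean(x):
    assert len(x) > 0, "mean() needs a non-empty sequence"
    total = 0.0
    for item in x:
        # Document and enforce the expected input right where it is used.
        assert isinstance(item, (int, float)), \
            "mean() expects numbers, got %r" % (item,)
        total = total + item
    return total / len(x)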

Also, there's a lot of work going into a facility that will be
similar to Eiffel's "Design by Contract."

> When I actually try to call correl(x,y) in the program
>
> from stats import correl
> from Numeric import array
> a = array(["a","b"])
> b = array(["c","d"])
> print correl(a,b) # should be arrays of Floats, not strings
>
> I get the result
>
> Traceback (most recent call last):
> File "xcorr.py", line 5, in ?
> print correl(a,b)
> File "stats.py", line 88, in correl
> ax = mean(x)
> File "stats.py", line 57, in mean
> return sum(x)/n
> TypeError: unsupported operand type(s) for /: 'str' and 'int'
>
> It's good that the program crashes rather than returning a bad result,
> but a C++ or Fortran 95 compiler would flag the bad arguments during
> compilation, which I think is still better. Also, in a large script,
> the bad call to correl(x,y) could come after much CPU time had
> elapsed, potentially wasting a lot of programmer's time.

Again, see my comment about incremental development. It's
much more important in a language like Python, although it will
undoubtedly help any language.

John Roth


Marek "Baczek" Baczyński

Jun 15, 2003, 5:45:55 PM
On 15 Jun 2003 13:28:06 -0700, beli...@aol.com wrote:

> It's good that the program crashes rather than returning a bad result,
> but a C++ or Fortran 95 compiler would flag the bad arguments during
> compilation, which I think is still better. Also, in a large script,
> the bad call to correl(x,y) could come after much CPU time had
> elapsed, potentially wasting a lot of programmer's time.

Your argument is good in theory, but IME such things are extremely rare
in real life. Searching for a lost colon also takes a lot of time
*and* tends to happen more often.

martin z

Jun 15, 2003, 5:55:32 PM

> You *can* test the types of the arguments yourself, inside
> the function. But this is regarded bad practice amongst Python
> programmers, for the above mentioned reasons.

On this subject, is there a way to test not the specific type, but simply the
protocol an object supports? String, int, etc. I want to make a function
do one thing with a numeric-type object and a different thing with a
string-type object.


Cliff Wells

Jun 15, 2003, 5:57:22 PM
On Sun, 2003-06-15 at 14:45, Marek "Baczek" Baczyński wrote:
> Searching for a lost colon also takes a lot of time
> *and* tends to happen more often.

<snicker/>

--
Cliff Wells <cliffor...@attbi.com>


Zac Jensen

Jun 15, 2003, 5:32:31 PM
I doubt you should worry. It is not difficult at all to test for types when
you feel it is necessary.

E.g., in the attached python script I give you an example of how to do explicit
type checking when you need to do it. This was tested to work with Python
2.2.3.

This is just one way to do it; I may not have come close to the fanciest
or simplest way of checking, but this is how I do it if it becomes an issue.

Another way I suppose would be through assert calls, but they would be less
informative than individual TypeError exceptions for each variable you
check... (imho)

However, in many cases you can coerce or convert types, or in the case of
complex classes, you can use properties that do the type checking and raise
exceptions.

Usually it's not a problem for python programmers, or this would have been
addressed with a different solution a long long time ago.

hope that helps.

-Zac

testd.py
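
The testd.py attachment is not reproduced in the archive; a rough sketch
of the kind of per-argument check described above, with one informative
TypeError per variable, might look like this (function and argument
names invented for illustration):

def scale_vector(vec, factor):
    if not isinstance(vec, (list, tuple)):
        raise TypeError("vec must be a list or tuple, not %s"
                        % type(vec).__name__)
    if not isinstance(factor, (int, float)):
        raise TypeError("factor must be a number, not %s"
                        % type(factor).__name__)
    return [item * factor for item in vec]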

Peter Hansen

Jun 15, 2003, 6:04:21 PM
beli...@aol.com wrote:
>
> In Python, you don't declare the type of a variable, so AFAIK there is
> no way for the interpreter to check that you are calling functions
> with variables of the correct type.

This whole discussion has taken place, repeatedly, in the past,
so if you would like to check the archives you'll find a few
dozen versions of the question and the lengthy threads that ensued.

> Calling functions with invalid arguments is one of the commonest
> programming errors

Debatable. Not true in my experience.

> I worry that
> Python's convenience in writing small programs comes at the cost of
> making bug-free large programs more difficult to write.

Actually, Python seems to make writing both small and large programs
easier, and especially (and more importantly) makes *reading* them
much easier, which eases maintenance.

> I have only been programming in Python for a few weeks -- am I right
> to worry?
> Are there techniques to ensure that functions are called with
> appropriate arguments?

I'm on the side that says unit testing is a better way of catching
such problems, and Python makes unit testing much easier than many
other languages, so it's a match made in heaven (or is that Holland? ;-).

-Peter

Tim Rowe

Jun 15, 2003, 6:49:46 PM
On 15 Jun 2003 13:28:06 -0700, beli...@aol.com wrote:

>In Python, you don't declare the type of a variable, so AFAIK there is
>no way for the interpreter to check that you are calling functions
>with variables of the correct type.

That's one of the reasons I would not advocate Python for mission
critical systems. But then, neither would I advocate C++ or FORTRAN,
for a whole host of reasons. The mission critical systems I work with
have the possibility of killing a few hundred people whenever they go
wrong, and the whole system development process, including choice of
language for software elements is quite specialised.

Using a rather looser meaning of "unsafe" than the one I usually work
with, yes, Python is type unsafe. Does it matter? Well, it depends.
Sooner or later all code comes down to machine code, which is type
unsafe on any system I can think of. You need to manage the types in
your program somehow; the more help your language gives you the less
likely you are to get bugs in this area but, /mutatis/ /mutandis/, the
harder it is to use the language and the longer it will take you to
roll out your code. These are conflicting concerns. Python is
deliberately placed towards the "ease of use" end of the scale, where
I believe it excels. For some work that's simply not where you need
to be and you should be using a different language. I don't believe
in general purpose programming languages; to misquote Michael Jackson
(the JSM one, not the "Thriller" one nor the Beer Hunter one) "If a
language claims to help you equally well with every problem, it won't
help you much with any problem". Or to misquote him again, "If a
language is claimed to be just as good for all problems, it probably
is, but not in the way the person making the claim intended".

Zac Jensen

Jun 15, 2003, 6:06:14 PM
Hm, perhaps hasattr() for the special __whatever__ attributes?

or if you know it has to be a set of types, you could do
isinstance(whatevervar, (type1, type2, type3)) etc....
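
A small sketch combining both suggestions, for the numeric-versus-string
dispatch asked about above (the type groupings are only illustrative):

def describe(obj):
    if isinstance(obj, (int, float, complex)):
        return "numeric"
    elif isinstance(obj, str):
        return "string"
    elif hasattr(obj, "__iter__"):
        # Protocol sniffing via a special attribute, as suggested above.
        return "iterable"
    else:
        return "something else"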

-Zac


Irmen de Jong

Jun 15, 2003, 7:02:01 PM
Tim Rowe wrote:
> That's one of the reasons I would not advocate Python for mission
> critical systems. But then, neither would I advocate C++ or FORTRAN,
> for a whole host of reasons. The mission critical systems I work with
> have the possibility of killing a few hundred people whenever they go
> wrong, and the whole system development process, including choice of
> language for software elements is quite specialised.

offtopic: what language *do* you use for those?


--Irmen

Dan Bishop

Jun 15, 2003, 11:04:10 PM
> In Python, you don't declare the type of a variable, so AFAIK there is
> no way for the interpreter to check that you are calling functions
> with variables of the correct type.
>
> Thus, if I define a function correl(x,y) to compute the correlation of
> two vectors, which makes sense to me only if x and y are 1-D arrays of
> real numbers,

But what kind of real numbers? IEEE double-precision? Or might you
someday need a correl function that works with ints (e.g., to compute
Spearman's correlation coefficient), or arbitrary-precision floats, or
BCD numbers, or rational numbers, or dimensioned measurements?

As long as your number classes have +, -, *, /, and __float__ (so
math.sqrt works) defined correctly, you don't have to rewrite your
correl code to support them. THAT is the beauty of dynamic typing.
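
A sketch of that point: a correlation routine written purely in terms of
+, -, *, / and math.sqrt runs unchanged for any element type supporting
those operations (the name correl follows the original post; the 0.0
seeds merely sidestep Python 2 integer division):

import math

def correl(x, y):
    n = len(x)
    mx = my = 0.0
    for i in range(n):
        mx = mx + x[i]
        my = my + y[i]
    mx, my = mx / n, my / n
    sxy = sxx = syy = 0.0
    for i in range(n):
        dx, dy = x[i] - mx, y[i] - my
        sxy = sxy + dx * dy
        sxx = sxx + dx * dx
        syy = syy + dy * dy
    return sxy / math.sqrt(sxx * syy)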

Manuel M Garcia

Jun 15, 2003, 11:52:10 PM
Bruce Eckel wrote about shifting your programming mindset from 'strong
typing' to 'strong testing'

http://mindview.net/WebLog/log-0025

Manuel

Donn Cave

Jun 16, 2003, 12:15:15 AM
Quoth dan...@yahoo.com (Dan Bishop):...

|> Thus, if I define a function correl(x,y) to compute the correlation of
|> two vectors, which makes sense to me only if x and y are 1-D arrays of
|> real numbers,
|
| But what kind of real numbers? IEEE double-precision? Or might you
| someday need a correl function that works with ints (e.g., to compute
| Spearman's correlation coefficient), or arbitrary-precision floats, or
| BCD numbers, or rational numbers, or dimensioned measurements?
|
| As long as your number classes have +, -, *, /, and __float__ (so
| math.sqrt works) defined correctly, you don't have to rewrite your
| correl code to support them. THAT is the beauty of dynamic typing.

Or the beauty of static typing. In a rigorously statically typed
language like Haskell, you'd write your function more or less the
same as you would in Python, but the compiler would infer from the
use of +, -, etc. that its parameters are of type Num, and you would
be expected to apply the function to instances of Num - any numeric
type. Anything else is obviously an error, and your program won't
compile until it makes sense in that respect.

One would think from reading this thread that this would be good
for safety but hard to program for, but it's actually the opposite.
I'm told that type checking is practically irrelevant to safety
critical standards, because the testing needed to meet standards
like that makes type correctness redundant. But the compiler cleans
up lots of simple errors when you're writing for more casual purposes,
and that saves time and possibly embarrassment.

Donn Cave, do...@drizzle.com

Alex Martelli

Jun 16, 2003, 4:54:23 AM
<posted & mailed>

martin z wrote:

It depends, of course, on what you want "numeric-type" and "string-type"
to mean. For my purposes, I've found the following very simple "protocol
testing" functions work just fine:

def isStringLike(s):
    try: s+''
    except: return False
    else: return True

def isNumberLike(s):
    try: s+0
    except: return False
    else: return True

I haven't actually used the second one very often; perhaps it might
give problems with e.g. "number-vector" types that let you perform
SOME number-like operations (such as "add a scalar to each item") but
not others. The first one I use regularly and still find preferable
to e.g. "isinstance(s, basestring)" which doesn't classify as "string
like" such obvious cases as instances of UserString.UserString.

Of course, this overall approach does depend on an underlying
assumption that types are defined "sensibly" -- e.g. that if a type
is string-like enough to let you concatenate its instances to a
string, then it will also be string-like enough from other points
of view. I have not found that to be a problem in practice.
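
One possible use of the two helpers above for the question that started
this sub-thread (do one thing for strings, another for numbers); the
function name is made up for the example:

def double(obj):
    if isStringLike(obj):
        return obj + obj          # concatenate string-like objects
    elif isNumberLike(obj):
        return obj * 2            # double number-like objects
    else:
        raise TypeError("expected a string-like or number-like object")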


Alex

Alex Martelli

Jun 16, 2003, 5:15:13 AM
Peter Hansen wrote:

> beli...@aol.com wrote:
>>
>> In Python, you don't declare the type of a variable, so AFAIK there is
>> no way for the interpreter to check that you are calling functions
>> with variables of the correct type.
>
> This whole discussion has taken place, repeatedly, in the past,
> so if you would like to check the archives you'll find a few
> dozen versions of the question and the lengthy threads that ensued.

Absolutely true. Moreover, the same discussion also takes place very
often outside of this group, as dynamic-typing languages grow in
scope and importance. For example, here's the discussion on this
issue started by Robert Martin (well-known OO guru, former editor of
C++ Report, etc, etc): "Are Dynamic Languages Going to Replace Static
Languages?", http://www.artima.com/weblogs/viewpost.jsp?thread=4639 ...:

"I've been a statically typed bigot for quite a few years ... type issues
simply never arose. My unit tests kept my code on the straight and narrow.
I simply didn't need the static type checking that I had depended upon for
so many years ... the flexibility of dynamically typed langauges [sic] makes
writing code significantly easier."

Be sure to follow the heated 65-comments ongoing discussion, of course.

In a very similar vein, Bruce Eckel (best-selling author of books about
C++ and Java, and a long-time Python fan) at "Strong Typing vs. Strong
Testing", http://mindview.net/WebLog/log-0025 ...:

"To claim that the strong, static type checking constraints in C++, Java, or
C# will prevent you from writing broken programs is clearly an illusion
(you know this from personal experience). In fact, what we need is

Strong testing, not strong typing.

So this, I assert, is an aspect of why Python works. C++ tests happen at
compile time (with a few minor special cases). Some Java tests happen at
compile time (syntax checking), and some happen at run time (array- bounds
checking, for example). Most Python tests happen at runtime rather than at
compile time, but they do happen, and that's the important thing (not
when)."


Note the commonality: "unit tests", "strong testing". Halfways decent
unit-tests will catch all the type-errors that a typical statically-typed
language would catch *and then some*. Without unit-tests your code is
untrustworthy anyway (both Robert and Bruce make the point forcefully!),
while, WITH good unit-tests, type issues "simply never arise" as Robert
Martin puts it.

BTW, I strongly recommend Kent Beck's "Test-Driven Development by Example"
(Addison-Wesley) -- short and to the point.


>> Calling functions with invalid arguments is one of the commonest
>> programming errors
>
> Debatable. Not true in my experience.

Pretty true in my own experience, for a suitably generalized definition
of "functions" that includes operators and other callables. However,
the reasons the arguments are "invalid" are more often connected to the
*values* that those arguments take at runtime, rather than to the types
of those arguments (which might be checkable at compile time if one
used a statically-typed language), therefore this is not an argument
in favour of static type-checking. E.g., indexing blah[bloh] with an
invalid index value computed in variable bloh (invalid with respect
to the set of indices that container blah can accept) is, alas, far
from rare; but it's not a type-checking issue, and rarely will a
compiler be able to deduce reliably that the value of bloh as run-time
computed is going to be invalid. Thus, runtime checks are needed here
(Java does them, Python does them, C++ may or may not do them) -- AND
unit-tests that you can trust to tickle any bugs that might be there...
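
As a minimal sketch of the kind of test being described (stats and correl
are the module and function from the original post; the expected values
here are only illustrative):

import unittest
from stats import correl   # the OP's stats module

class CorrelTest(unittest.TestCase):
    def test_known_value(self):
        # A value-level check that no static type system would perform.
        self.assertAlmostEqual(correl([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 1.0)
    def test_rejects_strings(self):
        # The bad call from the original post surfaces here, in seconds.
        self.assertRaises(TypeError, correl, ["a", "b"], ["c", "d"])

if __name__ == "__main__":
    unittest.main()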

Alex

Gary Duncan

Jun 16, 2003, 10:53:56 AM
Alex Martelli wrote:

[ lots deleted ]

>
>>>Calling functions with invalid arguments is one of the commonest
>>>programming errors
>>
>>Debatable. Not true in my experience.
>

I suspect this assertion relates to transposing args, not so
much the values thereof. Obviously passing bad values to a function
is a crime we have all committed - at least it's one I admit to ;)

In the pre-ANSI-C days, I did this occasionally - and even thereafter,
because if multiple args had the same type (int?), the function definition
wouldn't help if one put the args out of order.

>
> Pretty true in my own experience, for a suitably generalized definition

See above.

- Gary (definitely a proponent of dynamism )

> of "functions" that includes operators and other callables. However,
> the reasons the arguments are "invalid" are more often connected to the
> *values* that those arguments take at runtime, rather than to the types

Hmm, I'll have to think about that ;)

Terry Reedy

Jun 16, 2003, 8:53:34 AM

beli...@aol.com wrote:
> >> Calling functions with invalid arguments is one of the commonest
> >> programming errors

> Peter Hansen wrote:
> > Debatable. Not true in my experience.

"Alex Martelli" <al...@aleax.it> wrote in message > > > Pretty true in


my own experience, for a suitably generalized definition
> of "functions" that includes operators and other callables.
However,
> the reasons the arguments are "invalid" are more often connected to
the
> *values* that those arguments take at runtime, rather than to the
types
> of those arguments (which might be checkable at compile time if one
> used a statically-typed language), therefore this is not an argument
> in favour of static type-checking.

If one considers the set of values as part of the type, this amounts
to saying that the type system of most languages is too coarse for
real needs. Many operations need counts [0, 1, 2, ...]. Python does
not have this, C only backhandedly (unsigned int, like calling an int
an unfractionated rational). A few need a count subset: circular
queues need residue classes [0, 1, ..., n-1] with all operation
results taken %n. These are well-defined mathematical sets that
programmers usually have to simulate rather than simply declare.

Similarly, absolute measurements (as opposed to differences of
absolute measurements, like deviation from freezing temp and sea
level) and the input to real/float sqrt() and other functions are
'non-negative floats'. I don't know of any language that lets a
programmer directly declare this type. C lets you typedef structures
but not restricted subsets.
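
A sketch of what "simulate rather than simply declare" looks like in
practice -- a hand-rolled residue-class type along the lines of the
circular-queue example (the class name and interface are invented):

class Residue:
    def __init__(self, value, n):
        self.n = n
        self.value = value % n      # keep every result reduced mod n
    def __add__(self, other):
        return Residue(self.value + int(other), self.n)
    def __mul__(self, other):
        return Residue(self.value * int(other), self.n)
    def __int__(self):
        return self.value
    def __repr__(self):
        return "Residue(%d mod %d)" % (self.value, self.n)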

> E.g., indexing blah[bloh] with an
> invalid index value computed in variable bloh (invalid with respect
> to the set of indices that container blah can accept) is, alas, far
> from rare; but it's not a type-checking issue,

Depends on what you call a type. A 'count' or a 'residue_class' is as
much a mathematical 'type' as an 'int' (or a 'rat').

Terry J. Reedy


Peter Hansen

Jun 16, 2003, 10:19:47 AM
Gary Duncan wrote:

>
> Alex Martelli wrote:
>
> >>>Calling functions with invalid arguments is one of the commonest
> >>>programming errors
> >>
> >>Debatable. Not true in my experience.

(To clarify, in the face of trimmed attributions: I, not Alex, wrote
the last sentence above.)

> I suspect this assertion relates to juxtaposing args, not so
> much the values thereof. Obviously passing bad values to a function
> is a crime we have all committed - at least it's one I admit to ;)

If we consider that bugs come from either bad data or bad logic, and
that bad logic will very likely or inevitably lead to bad data, and
that all data eventually is used as an "argument" in some fashion,
I'll agree that calling functions with invalid arguments is quite
common. :-)

To take up Alex' point: passing in the wrong type is probably much
less common than, say, using the wrong value, or mixing up the order
of arguments (and many functions take a bunch of arguments of the
same type, so type-checking doesn't help there!).

-Peter

Peter Hansen

Jun 16, 2003, 10:22:58 AM
Donn Cave wrote:
>
> One would think from reading this thread that this would be good
> for safety but hard to program for, but it's actually the opposite.
> I'm told that type checking is practically irrelevant to safety
> critical standards, because the testing needed to meet standards
> like that makes type correctness redundant.

I haven't directly worked in aviation software myself, but a friend
(who is beginning to use Python more) who works in that area has
described much of his employer's development environment to me.

One interesting thing is that C is the mainstay language, and we
all know how type-safe it is.

Definitely the near-onerous testing requirements, as well as the
traceability permeating every aspect of the development from
requirements to delivery, are what give the regulatory agencies
the necessary level of confidence to allow the software to be
released, not the use of type-safe languages. (So far... but I
wouldn't be surprised to see bureaucratic/ivory-tower thinking
lead to a change in this policy. Probably already happening... :-( )

-Peter

Anton Vredegoor

Jun 16, 2003, 10:15:14 AM
Alex Martelli <al...@aleax.it> wrote:

<snip links with conditional appreciation of strong dynamic typing>

>Note the commonality: "unit tests", "strong testing". Halfways decent
>unit-tests will catch all the type-errors that a typical statically-typed
>language would catch *and then some*. Without unit-tests your code is
>untrustworthy anyway (both Robert and Bruce make the point forcefully!),
>while, WITH good unit-tests, type issues "simply never arise" as Robert
>Martin puts it.
>
>BTW, I strongly recommend Kent Beck's "Test-Driven Development by Example"
>(Addison-Wesley) -- short and to the point.

Seems everyone is trying to kidnap the baby, smothering it in True
love and imprinting one's personal worldview on it, justified by
rightful concerns about its security :-)

Let's have my view on it then, too. In my opinion the obsession with
testing is a left over from the time one was concerned with type
checking. It is as if one is not really ready to enter the world of
signature based polymorphism, so one seeks the aid of different kinds
of crutches, not realizing that these too, in the end, will prove to
be just as much of a hindrance.

Furthermore, this seems to be intimately related to the excessive
insisting on use cases before one is ready to accept new features. For
example, what was the use case of the first car? There were no roads
then, nor gas stations, and the things were really not very useful
compared to proven, age-old methods of transportation. However, given
that the necessary infrastructure is present, things can be done with
them that one never could do with horse and wagon, things that one
never even thought about doing. (By the way, I do not own such a
monstrosity.)

What's the use of replacing type checking with batteries of tests
while the real advantage of coding in Python is its "executable pseudo
code" character: "Look ma, no testing necessary!" How can we achieve
that? As usual, by providing the infrastructure for this general idea
to work. PyChecker is a great tool, but also usenet and peer review.
Having one's code checked by hundreds of people reviewing it, for
example by posting it on a public platform, makes the most of the
"eyeball ready" advantages of languages like Python.

I think a lot of programming strategies claim advantages that are
really not caused by them but are caused by the inviting and enticing
properties of the language itself. Python lends itself to playing with
it and to discussing the merits of code snippets with other people.
It's one of the "language" languages, in that it's suitable for people
to communicate using it, and it can even be used as a tool to check one's
thoughts.

Now to go back to test driven development, is it necessary to
formulate a plan first before springing into action? As logical as it
seems this is again just a relic of the old static typing fears. Any
linguistic expert - and I'm now using the term in a more psychological
way - will tell you that speech production is "on the fly". This means
we do *not* know what we are going to say before we speak - not even
Dijkstra - , the specific regions in the brain just produce the
linguistic output that corresponds to the things we want to say in a
way that we do not directly control. How could we, given that it's a
real time process? The same should go for a computer language, type in
your thoughts (or speak!) and let the computer figure out how to best
express it! Use a user platform to test the effect of your utterances
or correct them using "live" feedback from the interpreter.

Python is not there yet, but a failure to notice the transformation of
computer languages into languages of thought, and relying on excessive
backtracking of all utterances, instead of concentrating on the
content of what is to be expressed, is a sin against that what is yet
to come.

Anton


Peter Hansen

Jun 16, 2003, 10:55:51 AM
Anton Vredegoor wrote:
>
> Let's have my view on it then, too. In my opinion the obsession with
> testing is a left over from the time one was concerned with type
> checking. It is as if one is not really ready to enter the world of
> signature based polymorphism, so one seeks the aid of different kinds
> of crutches, not realizing that these too, in the end, will prove to
> be just as much of a hindrance.

Wow! I really thought you were joking, but I carefully read the whole
post to make sure and didn't see any hint of humour, except for the
fundamental message you are trying to convey, which is in my opinion
quite subtly humorous, however you intended it.

> Further more, this seems to be intimately related to the excessive
> insisting on use cases before one is ready to accept new features. For
> example, what was the use case of the first car? There where no roads
> then, nor gas stations, and the things where really not very useful
> compared to proven, age-old methods of transportation.

No roads? What planet do you live on? Where did the horses trot, and
the wagons roll, and people walk? On the sidewalks, when they were just
lone parallel strips of pavement running between the cities? Of
*course* there were roads! In fact, it was the previous generation
of transportation which *made* the roads, deliberately or not, and
cars were just an innovative way to take advantage of them.

Let's see: use cases for early automobiles: they didn't get tired,
you didn't have to smell their dirty asses as you trotted along,
they could go marginally faster (even at the early stage), on
average if not in "burst mode", probably cheaper at that point too.
Any idea what it cost to maintain a horse and buggy in the city
when cars came out?

If there's no use case for a piece of software, it is pure research,
or purely speculative development. That has a place in the world,
but it's not how most people should be spending their time, any more
than most people should speculate in the stock market or most
companies should pour their shareholders' money into pure research.

> What's the use of replacing type checking with batteries of tests
> while the real advantage of coding in Python is its "executable pseudo
> code" character: "Look ma, no testing necessary!"

While I agree that Python, more than most or perhaps any other language,
lends itself to a form of development which *doesn't* need testing
(because of its uniquely "executable pseudo-code" nature and that
well-known phenomenon of writing something that you aren't sure is
even supported by the language and "it just runs"), I do not agree
in the least that most software should be written without tests.

Most software has bugs. Lots of them. Most open source projects
have huge bug lists, and are starting to spend entire weeks on
"sprints" which are sometimes focused solely on bug-fixing! How
effective is Python going to be at preventing that? Not very:
as evidenced by the fact that the whole sprint concept, as near
as I can tell, is coming *from* the open source Python community
and its attempt to do mass bug fixes (as well as mass development,
but that's not the issue here).

Without tests, you don't even *know* how reliable your software is.
Without tests, you don't even know you have fixed a bug properly,
sometimes. Without tests, source code is a liability, not an asset.

> Now to go back to test driven development, is it necessary to
> formulate a plan first before springing into action?

Uh, yes? Was that a trick question?

> As logical as it seems this is again just a relic of the old static
> typing fears.

Probably not, but I suppose this is just a rhetorical comment and not
something that really needs debating. If you want to debate it, you
should start by disproving the logic behind it instead of just stating
that it seems logical but is actually (you imply) false.

> Any linguistic expert - and I'm now using the term in a more psychological
> way - will tell you that speech production is "on the fly".

All speech production? Or casual speech in everyday conversation?
Ever heard of speech writers? What about the (at least in English)
phrase "think before you speak"? That advice came about for a good
reason. Thoughtless speech is about as useful, most of the time,
as thoughtless programming - I mean programming without thinking
at least somewhat about what you are trying to do in advance of doing it.
Heck, it's why Usenet is so bad for flames and off-the-cuff responses
(like this one :-), because people just don't think before posting.

> The same should go for a computer language, type in
> your thoughts (or speak!) and let the computer figure out how to best
> express it! Use a user platform to test the effect of your utterances
> or correct them using "live" feedback from the interpreter.

Well, that does start getting interesting, in that it's a neat
discussion of future possibilities in programming. If this whole
post of yours was really an introduction to this idea, then I take
back the part about thinking it was a bad, though subtle, joke
and I think there's some merit in discussing science fiction
like this. Just don't go "dissing" my tests! :-)

> Python is not there yet, but a failure to notice the transformation of
> computer languages into languages of thought, and relying on excessive
> backtracking of all utterances, instead of concentrating on the
> content of what is to be expressed, is a sin against that what is yet
> to come.

Hmm.... "sin" is an evil word, but I'll leave that alone. My question
is whether you think testing, in the worthy-of-a-better-label form it
has in "Test-Driven Development", is something which is really bad
to be doing, or whether you are simply saying that in the very long
run, perhaps beyond our lifetimes, it will simply not be necessary.

From a bit of what you've said, I'm under the impression you are really
just talking about "testing" in the traditional sense of writing tests
to make sure your code isn't broken, as opposed to what TDD means by
writing tests. The two are only peripherally related, as best shown
by Robert Martin's quote from a recent thread. Are you really just
against traditional "tests", or do you have a problem with the whole
concept of making sure software works right? (Intentionally polarized
statement to elicit a response... :-)

-Peter

Gerhard Häring

Jun 16, 2003, 10:15:46 AM
Peter Hansen wrote:
> To take up Alex' point: passing in the wrong type is probably much
> less common than, say, using the wrong value, or mixing up the order
> of arguments (and many functions take a bunch of arguments of the
> same type, so type-checking doesn't help there!).

There is something that helps with confusing the order of arguments:
*Only* use named parameters.

I've heard of Ada 95 projects that mandate this style for
procedure/function calls.

-- Gerhard

Moshe Zadka

Jun 16, 2003, 10:55:11 AM
On Mon, 16 Jun 2003, Peter Hansen <pe...@engcorp.com> wrote:

> To take up Alex' point: passing in the wrong type is probably much
> less common than, say, using the wrong value, or mixing up the order
> of arguments (and many functions take a bunch of arguments of the
> same type, so type-checking doesn't help there!).

And of course, every bug can be said to come from passing the wrong program
to the Python interpreter :)
[Remember, programs are data too!]
--
Moshe Zadka -- http://moshez.org/
Buffy: I don't like you hanging out with someone that... short.
Riley: Yeah, a lot of young people nowadays are experimenting with shortness.
Agile Programming Language -- http://www.python.org/

Tim Rowe

Jun 16, 2003, 11:57:28 AM
On Mon, 16 Jun 2003 01:02:01 +0200, Irmen de Jong
<irmen@-NOSPAM-REMOVETHIS-xs4all.nl> wrote:


>offtopic: what language *do* you use for those?

Typically something like a rather small subset of Ada, or at least
something with a strong B&D flavour and formally defined semantics.
And safety-related code is the most boring code you would ever read
(in this field, excitement is to be avoided!)

David Abrahams

Jun 16, 2003, 11:59:47 AM
"Donn Cave" <do...@drizzle.com> writes:

I find that static typing makes a big difference for two things:

1. Readability. It really helps to have names introduced with a
type or a type constraint which expresses what kind of thing they
are. This is especially true when I am coming back to code after
a long time or reading someone else's work. Attaching that
information to the name directly is odious, though, and leads to
abominations like hungarian notation.

2. Refactoring. Having a compiler which does some static checking
allows me to make changes and use the compiler as a kind of
"anchor" to pivot against. It's easy to infer that certain
changes will cause compiler errors in all the places where some
corresponding change needs to be made. When I do the same thing
with Python I have to crawl through all the code to find the
changes, and a really complete set of tests often take long
enough to run that using the tests as a pivot point is
impractical.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Moshe Zadka

Jun 16, 2003, 11:00:22 AM
On Mon, 16 Jun 2003, "Terry Reedy" <tjr...@udel.edu> wrote:

> If one considers the set of values as part of the type, this amounts
> to saying that the type system of most languages is too coarse for
> real needs. Many operations need counts [0, 1, 2, ...]. Python does
> not have this, C only backhandedly (unsigned int, like calling an int
> an unfractionated rational). A few need a count subset: circular
> queues need residue classes [0, 1, ..., n-1] with all operation
> results taken %n. These are well-defined mathematical sets that
> programmers usually have to simulate rather than simply declare.

Sure...
Some operations need primes (e.g., RSA encryption). Some operations
need strings which specify programs which halt (think "exec" :).
Of course, if you push too much into the typing system, then the type
system becomes undecidable. A language where the compiler only warns
about a subset of the typing violations is not much better than a language
whose type system is a subset of the real system.

David Abrahams

Jun 16, 2003, 12:02:59 PM
Alex Martelli <al...@aleax.it> writes:

> So this, I assert, is an aspect of why Python works. C++ tests happen at
> compile time (with a few minor special cases).

I don't think of it that way. When you write C++, you need runtime
tests, but the language can catch many of the most-trivial kinds of
errors at compile time. It can also help understandability by
bounding the sorts of weird things that can happen without very
malicious intent. That limits the number of things one has to
consider when reading code.

Tim Rowe

Jun 16, 2003, 12:06:32 PM
On Mon, 16 Jun 2003 09:15:13 GMT, Alex Martelli <al...@aleax.it>
wrote:

>"To claim that the strong, static type checking constraints in C++, Java, or
>C# will prevent you from writing broken programs is clearly an illusion
>(you know this from personal experience). In fact, what we need is
>
>Strong testing, not strong typing.

Sorry, but that looks just like meaningless polemic. Try:

>"To claim that fibre in food will prevent malnutrition is clearly an illusion


>(you know this from personal experience). In fact, what we need is
>

>vitamins, not fibre

and it's just as meaningful. So if we're going to have meaningless
polemic, let me try mine :-)

"The choice between compile time testing and run-time testing is the
choice between the programmer finding the bugs and the customer
finding the bugs".

Peter Hansen

Jun 16, 2003, 12:11:05 PM
Tim Rowe wrote:
>
> "The choice between compile time testing and run-time testing is the
> choice between the programmer finding the bugs and the customer
> finding the bugs".

Only if you don't bother to run the software before you give it
to the customer.

That's what unit and acceptance tests are for... they substitute
for the customer, and catch more bugs than you would in either
case if you relied only on compile-time testing, or only on
customer-time testing.

-Peter

Peter Hansen

Jun 16, 2003, 12:18:06 PM
David Abrahams wrote:
>
> I find that static typing makes a big difference for two things:
>
> 2. Refactoring. Having a compiler which does some static checking
> allows me to make changes and use the compiler as a kind of
> "anchor" to pivot against. It's easy to infer that certain
> changes will cause compiler errors in all the places where some
> corresponding change needs to be made. When I do the same thing
> with Python I have to crawl through all the code to find the
> changes, and a really complete set of tests often take long
> enough to run that using the tests as a pivot point is
> impractical.

I just went through a particularly straining session of refactoring
yesterday.

I couldn't get my head around how to do what I wanted to do, but I
kept bashing away at it, trying one thing, then another. (One of
my problems is that I'm still a little new at using generators, and
I was trying to use a generator at the heart of the solution.)

At no time was I actually changing the interface to any of my
functions, changing order of arguments or their types.

At no time did I actually encounter a syntax error or any other
kind of error which stopped the code from running.

At no time did I make a single mistake which would have been
caught by the compiler for a statically typed language. I
was on a roll I guess, typing accurately and not doing anything
sloppy.

Nevertheless, I repeatedly injected logic errors and other such
problems into the code as I tried to change its structure.

My tests helped me though. I'd guess I had to back out of at
least five different false directions, as one after another
obscure test popped up, tapped me on the shoulder, and said,
"Uh, excuse me? What am I, chopped liver? I need to pass too!"

Not proof of anything, or disproof, either, I suppose. Just a
little reflection on refactoring and the value of tests and
static type checking.

-Peter

Justin Johnson

Jun 16, 2003, 12:06:14 PM
Hi,

I need a way to run an external command, grab its output (stdout,stderr)
and get a return status for the command. It seems like popen2.Popen3
(Note the upper case "P") is the way to do this, but it only works on
unix. Is there a way to get this info on Windows?

Thanks.
-Justin

Peter Hansen

Jun 16, 2003, 1:29:17 PM

You might get more or better replies if you create your posting as
a new one, rather than replying to an existing one and changing the
subject. Many news and mail readers will do "threading" which means
your article will show up as part of the discussion to which you
replied, and many people will have killed that thread (and thus
never see your question) by now.

Sorry I can't answer the specific question.

-Peter

Moshe Zadka

Jun 16, 2003, 1:22:00 PM
I have always been slightly at odds with the XP community, and I
think your post shows why acutely:

On Mon, 16 Jun 2003, Peter Hansen <pe...@engcorp.com> wrote:

> Without tests, source code is a liability, not an asset.

That's, of course, correct but terribly misleading. Source code,
with or without tests, is always a liability. Sometimes a liability
you compromise for, because you need functionality which, with
or without tests, is an asset. Co-workers from the first company
I worked for still remember me gleefully deleting reams of code
I made unnecessary, muttering "less code, less bugs".

Sure, tests sometimes reduce bugs. And sometimes not. And sometimes
they reduce bugs, but less than what you would have accomplished
if you did something else. Sometimes, of course, tests are the
best way to fix bugs. The important thing to realize is "not always".
Sometimes, your time is better spent *not* writing tests. The best
example I have here is the mail program I use, PMS. It has no tests.
None. Nada. It's still reliable. python-slides, a cool package
by itamar which makes slides from Python source, has no tests.
I'd trust it with my life sooner than many other things.

I tend to write code, and then write tests if and when I get to them.
Sometimes, I only write tests before a big refactor. Sometimes,
I do the big refactor and just check several command-line options
manually to check it hasn't broken. Following a religion, like "always
write tests first", has always looked silly to me. Sorry :)

Justin Johnson

Jun 16, 2003, 1:51:42 PM
Will do... thanks for the advice.

On Mon, 16 Jun 2003 13:29:17 -0400, "Peter Hansen" <pe...@engcorp.com>
said:


Peter Hansen

Jun 16, 2003, 2:53:04 PM
Moshe Zadka wrote:
>
> I have always been slightly at odds with the XP community, and I
> think your post shows why acutely:

Don't mistake me for the XP community. I'm just me.

> On Mon, 16 Jun 2003, Peter Hansen <pe...@engcorp.com> wrote:
>
> > Without tests, source code is a liability, not an asset.
>
> That's, of course, correct but terribly misleading. Source code,
> with or without tests, is always a liability.

I shortened it from a previous post, in which I continued
"Source code with good unit and acceptance tests is an asset."

Sounds like you wouldn't agree.

For my company, it's very clear that it's true, however, so I guess
once again (surprise!) I speak only for myself. :-)

> Sure, tests sometimes reduce bugs. And sometimes not. And sometimes
> they reduce bugs, but less than what you would have accomplished
> if you did something else. Sometimes, of course, tests are the
> best way to fix bugs. The important thing to realize is "not always".

[snip]


> I tend to write code, and then write tests if and when I get to them.
> Sometimes, I only write tests before a big refactor. Sometimes,
> I do the big refactor and just check several command-line options
> manually to check it hasn't broken. Following a religion, like "always
> write tests first", has always looked silly to me. Sorry :)

I agree. Good thing we don't actually *always* write tests, nor
always write them first. ;-)

Don't mistake strong claims for religion either... I make them
to spur the truly test-less on to consider and investigate and
practice writing tests, because _they_ are the ones who are
writing crappy code, not you, who might understand when to test
and when not to test.

-Peter

Bruno Desthuilliers

Jun 16, 2003, 5:01:42 PM
Tim Rowe wrote:
(snip)

> So if we're going to have meaningless
> polemic, let me try mine :-)
>
> "The choice between compile time testing and run-time testing is the
> choice between the programmer finding the bugs and the customer
> finding the bugs".

Oh, really? So I guess that all software from a well-known publisher from
Redmond is written in a language that does not have 'compile-time testing'?

Please be serious. If 'compile-time testing' (well, the few tests a
compiler can make) were enough to produce bug-free programs, we would all
be programming in [please insert your favorite statically-typed language
name here].

Bruno Desthuilliers

Nick Vargish

Jun 16, 2003, 3:31:17 PM
Tim Rowe <tim@remove_if_not_spam.digitig.cix.co.uk> writes:

> "The choice between compile time testing and run-time testing is the
> choice between the programmer finding the bugs and the customer
> finding the bugs".

Yeah... Right.

I worked for a software house that coded almost exclusively in C and
C++, and our customers were at least as good as the developers and QA
department at finding bugs.

Maybe we just had a more "clever" class of customer than you?

Nick

--
# sigmask.py || version 0.2 || 2003-01-07 || Feed this to your Python.
print reduce(lambda x,y:x+chr(ord(y)-1),'Ojdl!Wbshjti!=obwAqbusjpu/ofu?','')

d.w. harks

Jun 16, 2003, 3:50:33 PM
With the win32 extensions, use the win32pipe module. It provides 'safe'
win32 versions of popen, et al.

dave

In our last episode, Justin Johnson expounded thusly:
> Hi,


>
> I need a way to run an external command, grab its output (stdout,stderr)
> and get a return status for the command. It seems like popen2.Popen3
> (Note the upper case "P") is the way to do this, but it only works on
> unix. Is there a way to get this info on Windows?
>

> Thanks.
> -Justin
>

--
David W. Harks <da...@psys.org> http://dwblog.psys.org

Bengt Richter

Jun 16, 2003, 4:12:17 PM
On Mon, 16 Jun 2003 08:54:23 GMT, Alex Martelli <al...@aleax.it> wrote:

><posted & mailed>
>
>martin z wrote:
>
>>> You *can* test the types of the arguments yourself, inside
>>> the function. But this is regarded bad practice amongst Python
>>> programmers, for the above mentioned reasons.
>>
>> On this subject, is there a way to test not the specific type, or simply
>> the
>> protocol an object supports? String, int, etc. I want to make a function
>> do one thing with a numeric-type object and a different thing with a
>> string-type object.
>
>It depends, of course, on what you want "numeric-type" and "string-type"
>to mean. For my purposes, I've found the following very simple "protocol
>testing" functions work just fine:
>
>def isStringLike(s):
> try: s+''

Nit: succeeding with this test could be expensive if s is e.g., a 100MB file image
(I think this has come up before in a discussion of such testing, but I forgot if
it was me ;-)


> except: return False
> else: return True
>
>def isNumberLike(s):
> try: s+0
> except: return False
> else: return True

I like the concept of testing for compatible behavior, but I would want to
feel sure that there could be no unexpected side effects from testing any
candidate args.

I know you understand all these issues, but all readers of this might not
without a mention ;-)

>
>I haven't actually used the second one very often; perhaps it might
>give problems with e.g. "number-vector" types that let you perform
>SOME number-like operations (such as "add a scalar to each item") but
>not others. The first one I use regularly and still find preferable
>to e.g. "isinstance(s, basestring)" which doesn't classify as "string
>like" such obvious cases as instances of UserString.UserString.
>

I wonder if
try: s and s[0]+'' or s+''
would serve as well, and protect against the big-s hit. Or is
there a sensible string-like thing that doesn't support logical
tests and indexing but does support adding '' ?
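
Wrapping that idea up, a variant of the earlier helper along these lines
might read as follows (only a sketch of the suggestion above; note that a
non-empty sequence of strings would now also pass the test):

def isStringLike(s):
    try:
        s and s[0] + '' or s + ''   # touch one character when possible
    except:
        return False
    else:
        return True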

>Of course, this overall approach does depend on an underlying
>assumption that types are defined "sensibly" -- e.g. that if a type
>is string-like enough to let you concatenate its instances to a
>string, then it will also be string-like enough from other points
>of view. I have not found that to be a problem in practice.
>

Regards,
Bengt Richter

Terry Reedy

Jun 16, 2003, 4:31:06 PM

"Moshe Zadka" <m...@moshez.org> wrote in message
news:mailman.1055775776...@python.org...

> On Mon, 16 Jun 2003, "Terry Reedy" <tjr...@udel.edu> wrote:
>
> > If one considers the set of values as part of the type, this amounts
> > to saying that the type system of most languages is too coarse for
> > real needs. Many operations need counts [0, 1, 2, ...]. Python does
> > not have this, C only backhandedly (unsigned int, like calling an int
> > an unfractionated rational). A few need a count subset: circular
> > queues need residue classes [0, 1, ..., n-1] with all operation
> > results taken %n. These are well-defined mathematical sets that
> > programmers usually have to simulate rather than simply declare.
>
> Sure...
> Some operations need primes (e.g., RSA encryption).

My use of 'set' above was meant to be stronger than you took it.
Types are about values combined with operations thereupon, not just
values. Residues mod n form an algebra (ring?) closed under +,-, and
*, just like integers, whereas primes are not closed under any known
operation. In fact, fixed-sized computer ints are either *not* closed
(and therefore less valid as a type than residue classes) or else are
actually residues mod 2**(bits-per-int). So it is hardly silly to
imagine a language that allowed a variable rather than fixed mod
value.

> Some operations need strings which specify programs which halt
> (think "exec" :).

Since membership in this class cannot be tested, it is not even an
operationally defined set. Note that Python does effectively define
and test for a subtype of (unquoted) strings, namely names, which are
closed under catenation but not slicing.

Terry J. Reedy


Terry Reedy

Jun 16, 2003, 5:13:40 PM

"David Abrahams" <da...@boost-consulting.com> wrote in message
news:uwufmk...@boost-consulting.com...

> I find that static typing makes a big difference for two things:
>
> 1. Readability. It really helps to have names introduced with a
>    type or a type constraint which expresses what kind of thing they
>    are.

Perhaps true, especially for C/C++, but given polymorphic operations
and the absence of an extensible hierarchical type system, the
inevitable cost is excluding valid argument values. By fixing both
external interface/behavior and internal structure/implementation,
declaring types fixes too much. Preprocessor macros, with *untyped*
parameters (like #define max(num1,num2) ...), are an homage to the
virtue of generic programming.

The other problem, which I discussed in another post to this thread,
is that current static typing systems, besides being too narrow, in
excluding valid inputs (given Python's object and operation system),
are also too broad in including invalid values. They often do not
adequately express 'what kind of thing' something is because the only
choice is *too* inclusive.

Besides the above, much Python code is, for me, pretty
self-explanatory as to types. Is 'int i', etc, really clearer than
'i=0', etc.? Arithmetic expressions usually define the involved
variables as numbers. 'For item in seq' defines seq as a sequence of
some type, specifics usually not too important. And so on.

And of course, one can give each Python variable a one-line
definition, with a type name even, of the same sort I often did with C
variables. Just move the comment start symbol leftward to the margin
;-)

Terry J. Reedy


Tim Rowe

Jun 16, 2003, 5:58:48 PM
On Mon, 16 Jun 2003 21:01:42 +0000, Bruno Desthuilliers
<bdesth...@removeme.free.fr> wrote:

>Tim Rowe wrote:
>(snip)
>
>> So if we're going to have meaningless
>> polemic, let me try mine :-)

>Please be serious.

I consider it at least as serious as the post I was replying to,
though in my case the exaggeration to the point of comedy was
intentional (I don't know about successful). No, the compiler and other
static testing won't catch all the bugs. But it has been pretty
thoroughly shown that dynamic testing won't catch them all either.
And the research I've seen indicates that the two techniques catch
pretty much non-overlapping sets of bugs.

Go back to my parody of Anton's position: which do you think we can do
without in our diet? Fibre or vitamins? Ok, now which do you think we
can do without in our software assurance? Static testing or dynamic
testing?

(As an aside, when the project manager comes along and says we have to
cut something to meet timescales, do you think it will be compilation
or test that's more likely to be cut?)

Ben Finney

unread,
Jun 16, 2003, 8:15:14 PM6/16/03
to
On Mon, 16 Jun 2003 12:11:05 -0400, Peter Hansen wrote:
> Tim Rowe wrote:
>> "The choice between compile time testing and run-time testing is the
>> choice between the programmer finding the bugs and the customer
>> finding the bugs".
>
> Only if you don't bother to run the software before you give it
> to the customer.

You, the programmer (or team of programmers) will never run the program
the same way as the customer will, except by blind accident. You can't
know what the customer will do with it ahead of time, and neither can
the customer.

Moreover, unless your program is used by a number of people as small as
the development team, you will never run the program in as many ways, or
over as much time, as the customer will.

Testing cannot try more than a miniscule fraction of the combination of
inputs and usage that the customers will subject it to. Automated
testing, carefully thought out, can increase this fraction
significantly; but "run the software before you give it to the customer"
is a laughably inferior way of finding bugs.

> That's what unit and acceptance tests are for... they substitute for
> the customer, and catch more bugs than you would in either case if you
> relied only on compile-time testing, or only on customer-time testing.

Exactly; use of *only* compile-time testing, or *only* run-time testing,
are both inadequate, and are not mutually exclusive. So use both.

That's what the security guys call "defense in depth".

--
\ "If you're a horse, and someone gets on you, and falls off, and |
`\ then gets right back on you, I think you should buck him off |
_o__) right away." -- Jack Handey |
http://bignose.squidly.org/ 9CFE12B0 791A4267 887F520C B7AC2E51 BD41714B

Moshe Zadka

unread,
Jun 16, 2003, 9:37:16 PM6/16/03
to
On Mon, 16 Jun 2003, Peter Hansen <pe...@engcorp.com> wrote:

> Don't mistake me for the XP community. I'm just me.

Your attitude is, however, typical :)

> I shortened it from a previous post, in which I continued
> "Source code with good unit and acceptance tests is an asset."
>
> Sounds like you wouldn't agree.

Right, I wouldn't. Source code is *always* a liability. Unless you
think the unit tests and acceptance tests make it bug-free, I don't
see *why* you wouldn't happily delete it the moment it proves to be
unnecessary.

> For my company, it's very clear that it's true, however, so I guess
> once again (surprise!) I speak only for myself. :-)

I doubt it is true, even for your company. Source code is *always*
a liability, unit-tests or no unit-tests. It is an *acceptable*
liability, sometimes, if you get enough functionality out of it.
Insofar as tests measure functionality, they are useful to make sure
you have no code which does not give functionality.

> I agree. Good thing we don't actually *always* write tests, nor
> always write them first. ;-)

That sort of makes your methodology from "TDD" to "Sometimes TDD,
if it looks like the right thing". So preaching TDD while not
practicing it is...well...

> Don't mistake strong claims for religion either... I make them
> to spur the truly test-less on to consider and investigate and
> practice writing tests, because _they_ are the ones who are
> writing crappy code, not you, who might understand when to test
> and when not to test.

It's better to teach people the truth, no? Instead of teaching them
about some rosy place which doesn't exist. I feel you are doing
unit-testing a disservice: if someone buys into it, and finds that
it isn't a win always, he'll just assume that it is worth nothing.
In fact, I assumed tests are worth nothing after finding some glaring
holes in XP's logic, and it took me a long time to be convinced that
they are ever helpful. Assuming there are others like me, it is better
to be honest about the limitations of technology first :)

Peter Hansen

unread,
Jun 16, 2003, 10:10:34 PM6/16/03
to
Ben Finney wrote:
>
> You, the programmer (or team of programmers) will never run the program
> the same way as the customer will, except by blind accident. You can't
> know what the customer will do with it ahead of time, and neither can
> the customer.

Bonus: XP specifies an "on-site customer" who pretty much does just what
you suggest.

> Testing cannot try more than a miniscule fraction of the combination of
> inputs and usage that the customers will subject it to.

Quite debatable. Automated acceptance tests, by definition, cover the
bulk of the functionality that has been designed in, since the only
functionality that's supposed to be designed in (under XP) is that which
is *required* by the acceptance tests. Anything not covered by tests
was therefore added inappropriately by an over-zealous developer.
(Again, an obvious simplification, but not much of one.)

> Automated testing, carefully thought out, can increase this fraction
> significantly; but "run the software before you give it to the customer"
> is a laughably inferior way of finding bugs.

I agree, that would indeed be a laughable, not to mention ineffective, approach.

-Peter

Greg Ewing (using news.cis.dfn.de)

unread,
Jun 16, 2003, 10:38:36 PM6/16/03
to
> On Sun, 2003-06-15 at 14:45, Marek "Baczek" Baczyński wrote:
>
>> Searching for a lost colon also takes a lot of time
>>*and* tends to happen more often.

Don't they all end up down the back of the sofa?

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Peter Hansen

unread,
Jun 16, 2003, 10:30:36 PM6/16/03
to
Moshe Zadka wrote:
>
> On Mon, 16 Jun 2003, Peter Hansen <pe...@engcorp.com> wrote:
>
> > Don't mistake me for the XP community. I'm just me.
>
> Your attitude is, however, typical :)

I could get into how yours, too, is typical of ... well, why don't
we just stay away from ad hominem attacks, shall we?

> Source code is *always* a liability.

Okay, now it's just a semantic argument, until we define what each
of us means by "liability". I'd explore that, but I'm not sure whether
you'll just get into my "attitude" again, smiley or not.

> > For my company, it's very clear that it's true, however, so I guess
> > once again (surprise!) I speak only for myself. :-)
>
> I doubt it is true, even for your company. Source code is *always*
> a liability, unit-tests or no unit-tests. It is an *acceptable*
> liability, sometimes, if you get enough functionality out of it.

Okay, using your terminology here: source code with unit tests is
an acceptable liability, source code without good tests is an
unacceptable liability. I'm not sure whether you work on your own,
or for a large company, but in *my* company this is a useful
perspective for us to have most of the time.

> > I agree. Good thing we don't actually *always* write tests, nor
> > always write them first. ;-)
>
> That sort of makes your methodology from "TDD" to "Sometimes TDD,
> if it looks like the right thing". So preaching TDD while not
> practicing it is...well...

Nothing is absolute. Why do you think I have to be absolute in
my position? That would be silly, and impractical. Do you really
know *anyone* sensible who will insist that their viewpoint is
correct, under all imaginable conditions?

> > Don't mistake strong claims for religion either... I make them
> > to spur the truly test-less on to consider and investigate and
> > practice writing tests, because _they_ are the ones who are
> > writing crappy code, not you, who might understand when to test
> > and when not to test.

> It's better to teach people the truth, no? Instead of teaching them
> about some rosy place which doesn't exist.

The truth? You want the truth? You can't handle the truth! :-)
No, really, movie references aside, the truth is that *for me*, TDD
has proven to be an *extremely* valuable advancement over all
previous methods attempted, over a 23 or so year history of writing
code. I don't particularly care whether *you*, personally, like it
or not, but I'd appreciate it if you wouldn't attempt to claim that
it is not beneficial for me, if that's what you're trying to do.

More likely you are simply trying to say that it is not *always* an
appropriate or useful method, and I don't believe I've ever said anything
that should lead any intelligent person to conclude otherwise.

> I feel you are doing unit-testing a disservice: if someone buys into it,
> and finds that it isn't a win always, he'll just assume that it is worth
> nothing.

Pah! We're not talking about a bunch of morons here, we're talking about
intelligent people who might not have encountered a particular approach
which could allow them to improve the quality of their output. Why
would you think people aren't intelligent enough to understand that
nothing is an absolute, that no process or language or generalization
of any kind will ever be adequate for all possible situations? If I've
said something that claims XP or TDD will work for everyone, always,
then I hereby take it back, but I'm sure I haven't said such a thing
unless it was in one of my 3:30 in the morning after a lousy few hours
of sleep kind of postings...

> In fact, I assumed tests are worth nothing after finding some glaring
> holes in XP's logic, and it took me a long time to be convinced that
> they are ever helpful. Assuming there are others like me, it is better
> to be honest about the limitations of technology first :)

-Peter

Kevin Reid

unread,
Jun 16, 2003, 10:31:28 PM6/16/03
to
Gerhard Häring <g...@ghaering.de> wrote:

> There is something that helps with confusing the order of arguments:
> *Only* use named parameters.
>
> I've heard of Ada 95 projects that mandate this style for
> procedure/function calls.

I'll put my gripe here just because you mentioned named parameters:

I would like to use named parameters more often in Python, but I don't
like them being required to be optional.

--
Kevin Reid

Steven Taschuk

unread,
Jun 16, 2003, 11:06:55 PM6/16/03
to
Quoth Marek "Baczek" Baczyński:
[...]
> Your argument is good in theory, but IME such things are extremly rare to
> happen in real life. Searching for a lost colon also takes a lot of time
> *and* tends to happen more often.

Searching for a lost colon does take a lot of time, and a lot of
people don't appreciate that. A friend of a friend of mine lost
his colon a couple years ago -- usual story, went out for a night
on the town, woke up in a bathtub full of ice, colon missing --
and he's *still* looking for it.

--
Steven Taschuk stas...@telusplanet.net
"Its force is immeasurable. Even Computer cannot determine it."
-- _Space: 1999_ episode "Black Sun"

Steven Taschuk

unread,
Jun 16, 2003, 11:04:08 PM6/16/03
to
Quoth Kevin Reid:
[...]

> I'll put my gripe here just because you mentioned named parameters:
>
> I would like to use named parameters more often in Python, but I don't
> like them being required to be optional.

They're not, in 2.2.2 at least:

>>> def foo(a):
...     print a
...
>>> foo(a=3)
3
>>> foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: foo() takes exactly 1 argument (0 given)

Do I misunderstand your gripe?

--
Steven Taschuk 7\ 7'Z {&~ .
stas...@telusplanet.net Y r --/hG-
(__/ )_ 1^1`

JanC

unread,
Jun 17, 2003, 12:10:19 AM6/17/03
to
Peter Hansen <pe...@engcorp.com> schreef:

> Let's see: use cases for early automobiles: they didn't get tired,
> you didn't have to smell their dirty asses as you trotted along,
> they could go marginally faster (even at the early stage), on
> average if not in "burst mode", probably cheaper at that point too.

IIRC the first cars had a maximum speed of about 3.5 km/h and could drive
about 5 km far. Preparing the car for that took about an hour, and
afterwards you had to repair it for another 2 hours. Even a pedestrian
could go faster, further, cheaper, and carry more weight with him... ;-)

--
JanC

"Be strict when sending and tolerant when receiving."
RFC 1958 - Architectural Principles of the Internet - section 3.9

marshall

unread,
Jun 17, 2003, 12:16:21 AM6/17/03
to
Alex Martelli <al...@aleax.it> wrote in message news:<BUfHa.181518$g92.3...@news2.tin.it>...
[Quoting another thread]

> "To claim that the strong, static type checking constraints in C++, Java, or
> C# will prevent you from writing broken programs is clearly an illusion
> (you know this from personal experience). In fact, what we need is
>
> Strong testing, not strong typing."

Most of the code I come across in strong typed languages makes heavy
use of type coercion. So how does typing help if you are just going
to coerce everything?

I was recently stung by this [apologies for VB code]:

Dim lDelaySec As Integer
Dim lDelayTicks As Double
lDelaySec = Fix(Val(Text1.Text))
lDelayTicks = lDelaySec * 1000
Text3.Text = Str(lDelayTicks)

Works fine for values of lDelaySec less than 32 but you get an
overflow error for larger values. I guess type checking would have caught
it for me, but how stupid is it that I can't multiply an integer, even
though I prepared for the fact that the result might be a double
(lDelaySec as an Integer was a given). So I have to coerce the result
- type checking does me no good anyway.
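
For comparison, a rough Python sketch of the same computation (the
variable names are carried over only for illustration); there is no
16-bit Integer ceiling to trip over, since Python promotes to a long
integer as needed:

delay_sec = int(float("33"))     # e.g. the contents of the text box
delay_ticks = delay_sec * 1000   # 33000 -- no silent overflow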

Marshall

Christopher Koppler

unread,
Jun 17, 2003, 12:35:07 AM6/17/03
to
JanC wrote:

> Peter Hansen <pe...@engcorp.com> schreef:
>
> > Let's see: use cases for early automobiles: they didn't get tired,
> > you didn't have to smell their dirty asses as you trotted along,
> > they could go marginally faster (even at the early stage), on
> > average if not in "burst mode", probably cheaper at that point too.
>
> IIRC the first cars had a maximum speed of about 3.5 km/h and could drive
> about 5 km far. Preparing the car for that took about an hour, and
> afterwards you had to repair it for another 2 hours. Even a pedestrian
> could go faster, further, cheaper, and carry more weight with him... ;-)
>

Yes, but those were the *very* first cars - alpha releases, which were
quite soon replaced by much more stable beta versions, and 1908 already saw
version 1.0: the Model T Ford.

--,
Christopher Koppler

Thomas Stauffer

unread,
Jun 17, 2003, 2:29:27 AM6/17/03
to
In a very strict language, type-safety can really help you avoid
problems. It's one stage of a test. Which language am I talking about? I
don't know. Ada comes closest to this approach, perhaps.

C++ isn't a strict language at all. 1. You have these nasty pointers to
cast from one edge to the other. 2. You can't overload a function/method
which differs only in the return type. Why? Because there are a lot of
implicit, ambiguous conversions.

Java isn't better. For example the Transparency interface: it's a kind
of an enumeration with values 0, 1 and 2. In Java, a wonderful Integer
with 2^32 possibilities. Do we call this a compile-time check?
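
Neither language checks such a three-valued constant at compile time; a
minimal run-time sketch of the kind of range restriction being asked for
(all names here are hypothetical) might be:

VALUE_A, VALUE_B, VALUE_C = 0, 1, 2          # the three legal constants
VALID_VALUES = (VALUE_A, VALUE_B, VALUE_C)

def use_transparency(value):
    # Reject anything outside the enumerated set, instead of accepting
    # any of the 2**32 ints an 'int'-typed parameter would allow.
    if value not in VALID_VALUES:
        raise ValueError("not a valid transparency constant: %r" % (value,))
    # ... proceed, knowing the value is one of the three ...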

I saw a transparency-like enumeration with another range (don't
remember), starting with value 0 like all other enumerations. This
other constant (value 1) worked with my Image function, but not as I
expected.

What do I want to say?

There is an alternative view, the API user's view. In a strongly typed
language I don't have to look at the documentation: based on the type
it's difficult to choose a wrong value. Reading thousands of pages
(JavaDoc) takes a lot of time, too.

By the way, Dylan has an interesting idea. Dylan combines typed and
untyped variables (IMO the only real new idea).

Today, there isn't a great difference between so-called typed and untyped
languages. I'm sure there's a language out there being developed which
solves our problems. Perhaps there will be a time when we have a compiler
which can easily prove static correctness. (Come on Dijkstra,
we need more fancy ideas :)

Python gets closer and closer to Lisp/Scheme without the lack of
readability. Add a rational type (there's a PEP, I know), add lazy evaluation -
and we have one of the most powerful languages, without types.

Thomas

Alex Martelli

unread,
Jun 17, 2003, 3:25:46 AM6/17/03
to
Steven Taschuk wrote:

> Quoth Kevin Reid:
> [...]
>> I'll put my gripe here just because you mentioned named parameters:
>>
>> I would like to use named parameters more often in Python, but I don't
>> like them being required to be optional.
>
> They're not, in 2.2.2 at least:
>
> >>> def foo(a):
> ...     print a
> ...
> >>> foo(a=3)
> 3
> >>> foo()
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: foo() takes exactly 1 argument (0 given)
>
> Do I misunderstand your gripe?

Not sure about the OP's gripe, but one mildly legitimate gripe might
be that this simple approach doesn't allow the definer of foo to require
it to be called ONLY as "foo(a=value)" -- the call "foo(value)" (with
an anonymous actual parameter) would also be accepted. If you want to
force the caller to use a named actual-parameter you'd have to code e.g.:

def foo(**k):
    if k.keys() != ['a']:
        raise TypeError, "foo must be called as foo(a=<somevalue>), only"
    a = k['a']
    """ proceed normally from here onwards """

However, I think the correct answer to this hypothetical gripe is that
Python isn't about "forcing"; it does let you (definer of foo) "force"
your caller to use named-form for the actual parameter, but it's quite
all right that you'll be required to use a couple lines of boilerplate
for the purpose. Of course, if you did that often, you'd encapsulate the
whole "forcing preamble" as a reusable function such as:

def foo(**k):
    a, b = forcenames(k, ['a', 'b'])
    """ proceed normally from here onwards """

with e.g.

def forcenames(k, names):
    if len(k) != len(names):
        raise TypeError, "named parameters %s are required" % (names,)
    parms = []
    for name in names:
        parms.append(k[name])
    return parms

or some slightly more elaborate version of forcenames to raise better
exceptions if some required name is not among those passed.


Alex

Alex Martelli

unread,
Jun 17, 2003, 3:36:54 AM6/17/03
to
Moshe Zadka wrote:
...

> Right, I wouldn't. Source code is *always* a liability. Unless you

Good thing your job isn't accounting -- if you tried putting, on a
company's balance sheet, the source code it owns under "liabilities",
rather than under "assets", you'd be in for malpractice lawsuits to
dwarf Enron's (tax authorities might be particularly likely to take
a dim view in the matter).


Alex

Alex Martelli

unread,
Jun 17, 2003, 3:56:14 AM6/17/03
to
Tim Rowe wrote:
...

> And the research I've seen indicates that the two techniques catch
> pretty much non-overlapping sets of bugs.

I guess that's the crux of the matter -- and this "research you've
seen" appears to give results that directly contradict the everyday
experience reported by the growing band of users of dynamically
typed languages. Refer, for example, to Robert Martin's article I
already gave the URL to, the one that starts with the confession
that he used to be a static-typing bigot: typing issues *just do not
arise in practice* when he develops in dynamically typed languages
(he quotes Python and Ruby) in a test-driven way (which, he says,
he's now addicted to, and uses all the time even when he has to
program in statically typed languages on the job). My own personal
experience is quite similar (I also started out as a static-typing
bigot, although in my case the epiphany about the usability of
dynamically typed languages came earlier, with APL, REXX, Scheme).

If TDD was indeed prone to leave in the code the kind of bugs
that statically-typed languages can catch, then I just don't see
how to explain "Uncle Bob"'s empirical observations, which match
so well with mine (and many other such reports such as Eckel's).

Thus, I tend to be skeptical about the "research you've seen" --
what _were_ they comparing? Was it actually about the copious
mass of simple but thorough unit-tests that emerges from TDD, or
about the more traditional practice of slapping on a few sample
runs as an afterthought and _calling_ those "tests"...?-)


Alex

Alex Martelli

unread,
Jun 17, 2003, 4:15:27 AM6/17/03
to
Bengt Richter wrote:
...

>>def isStringLike(s):
>>    try: s+''
> Nit: succeeding with this test could be expensive if s is e.g., a 100MB
> file image (I think this has come up before in a discussion of such

Yes. The solution I suggested then (and keep in my sleeve for possible
future needs, but have never needed yet -- see below) was

try: s[:0]+''

> I like the concept of testing for compatible behavior, but I would want to
> feel sure that there could be no unexpected side effects from testing any
> candidate args.

You cannot "feel sure" in any language that allows "virtual methods":
whenever you call a potentially-virtual method you _might_ be hitting
a weird override that has crazy side effects. To achieve polymorphism,
you give up on the yearning to "feel sure" and trust the client to NOT
do such crazy things. Testing candidate args is not any different from
using any (potentially virtual) operation whatsoever on the args, i.e.,
no different from any other application whatsoever of polymorphism.

> I wonder if
> try: s and s[0]+'' or s+''
> would serve as well, and protect against the big-s hit. Or is
> there a sensible string-like thing that doesn't support logical
> tests and indexing but does support adding '' ?

I prefer the more concise suggestion above, which requires slicing
instead of indexing and logical tests. However, the issues are
similar. In particular, both of these tests classify mmap.mmap
instances as "string-like", while my preferred s+'' doesn't -- i.e.,
an mmap instance isn't "directly" string-like, but its slices and
items are (indeed, said slices and items ARE strings!-). So, in
my sleeve next to the slicing possibility is a small reminder to
specialcase mmap if I ever do end up needing that... and, the very
need to specialcase is in turn a reminder that the workarounds are
not entirely pleasant.
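
Putting the variants discussed above together, one possible sketch of
such a test (the bare except keeps it short; the mmap caveat is only
noted in a comment):

def isStringLike(s):
    # Duck-typing check: slicing first keeps the concatenation tiny even
    # if s is a huge string-like object.  As noted above, unlike a plain
    # s+'' test this also accepts mmap.mmap instances, whose slices and
    # items are real strings.
    try:
        s[:0] + ''
    except:
        return False
    return True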


Alex

Ian McConnell

unread,
Jun 17, 2003, 3:39:23 AM6/17/03
to
Peter Hansen <pe...@engcorp.com> writes:

> Justin Johnson wrote:
>>
>> I need a way to run an external command, grab its output (stdout,stderr)
>> and get a return status for the command. It seems like popen2.Popen3
>> (Note the upper case "P") is the way to do this, but it only works on
>> unix. Is there a way to get this info on Windows?
>

> You might get more or better replies if you create your posting as
> a new one, rather than replying to an existing one and changing the
> subject. Many news and mail readers will do "threading" which means
> your article will show up as part of the discussion to which you
> replied, and many people will have killed that thread (and thus
> never see your question) by now.

Have a look at the process package on
http://starship.python.net/~tmick/

process.py is a (rather large) Python module to make process control easier
and more consistent on Windows and Linux. The current mechanisms (os.popen*,
os.system, os.exec*, os.spawn*) all have limitations.

A quick list of some reasons to use process.py:

* You don't have to handle quoting the arguments of your command line.
You can pass in a command string or an argv.

* You can specify the current working directory (cwd) and the
environment (env) for the started process.

* On Windows you can spawn a process without a console window opening.

* You can wait for process termination or kill the running process
without having to worry about weird platform issues. (Killing on
Windows should first give the process a chance to shutdown cleanly.
Killing on Linux will not work from a different thread than the
process that created it.)

* ProcessProxy allows you to interact in a pseudo-event-based way with
the spawned process. I.e., you can pass in file-like object for any of
stdin, stdout, or stderr to handle interaction with the process.

--
"Thinks: I can't think of a thinks. End of thinks routine": Blue Bottle

** Aunty Spam says: Remove the trailing x from the To: field to reply **

Moshe Zadka

unread,
Jun 17, 2003, 5:09:18 AM6/17/03
to
On Tue, 17 Jun 2003, Alex Martelli <al...@aleax.it> wrote:

> Good thing your job isn't accounting

Right

> -- if you tried putting, on a
> company's balance sheet, the source code it owns under "liabilities",
> rather than under "assets", you'd be in for malpractice lawsuits to
> dwarf Enron's (tax authorities might be particularly likely to take
> a dim view in the matter).

And *that's* the reason it's a good thing my job isn't accounting?
[What do you think those "salaries for people who maintain the source" and
"various developing environments" expenses *are*, if not the translation
to accounting-lingo? How come we weren't sued for malpractice when my
bosses authorized me to delete heaping gobs of code? Just because
translating from sane-speak to accountant-lingo is non-trivial does
not mean source code is not a liability]

Bruno Desthuilliers

unread,
Jun 17, 2003, 10:00:37 AM6/17/03
to
Tim Rowe wrote:
> On Mon, 16 Jun 2003 21:01:42 +0000, Bruno Desthuilliers
> <bdesth...@removeme.free.fr> wrote:
>
>
>>Tim Rowe wrote:
>>(snip)
>>
>>
>>>So if we're going to have meaningless
>>>polemic, let me try mine :-)
>>
>
>>Please be serious.
>
>
> I consider it at least as serious as the post I was replying to,
> though in my case the exaggeration to the point of comedy was
> intention (I don't know about successful). No, the compiler and other
> static testing won't catch all the bugs. But it has been pretty
> thoroughly shown that dynamic testing won't catch them all either.

I don't pretend that any known methodology can catch *all* bugs.

> And the research I've seen

Links, please...

> indicates that the two techniques catch
> pretty much non-overlapping sets of bugs.

Well... With static typing and compilation, the compiler can catch type
bugs. They are not the most common bugs, nor the most difficult to catch.

> Go back to my parody of Anton's position: which do you think we can do
> without in our diet? Fibre or vitamins? Ok, now which do you think we
> can do without in our software assurance? Static testing or dynamic
> testing?

That comparison does not stand. We need both fiber and vitamins to live,
but the fact is that programs written in dynamic languages work, and
are no more and no less buggy than programs written in static languages.

> (As an aside, when the project manager comes along and says we have to
> cut something to meet timescales, do you think it will be compilation
> or test that's more likely to be cut?)

Which compilation? Remember this is c.l.python here... No overnight
compilation phase needed.

Now, if the project manager is stupid enough (yes, don't tell me, many
of them are...) to cut the tests, you're in the same trouble, 'static
testing' or not. The fact that a C++ source file compiles doesn't mean it's
bug-free, AFAIK...

Bruno Desthuilliers

Dang Griffith

unread,
Jun 17, 2003, 8:27:42 AM6/17/03
to
Peter Hansen wrote:

> Gary Duncan wrote:
> >
> > Alex Martelli wrote:
> >
> > >>>Calling functions with invalid arguments is one of the commonest
> > >>>programming errors
> > >>
> > >>Debatable. Not true in my experience.
>
> (To clarify, in the face of trimmed attributions: I, not Alex, wrote
> the last sentence above.)
>
> > I suspect this assertion relates to juxtaposing args, not so
> > much the values thereof. Obviously passing bad values to a function
> > is a crime we have all committed - at least it's one I admit to ;)
>
> If we consider that bugs come from either bad data or bad logic, and
> that bad logic will very likely or inevitably lead to bad data, and
> that all data eventually is used as an "argument" in some fashion,
> I'll agree that calling functions with invalid arguments is quite
> common. :-)
>
> To take up Alex' point: passing in the wrong type is probably much
> less common than, say, using the wrong value, or mixing up the order
> of arguments (and many functions take a bunch of arguments of the
> same type, so type-checking doesn't help there!).
>
> -Peter
A common example--most OO GUI libraries include a type of "Point".
I've yet to see a Point class that is composed of a pair of ordinate and
abscissa type objects. Talk about an easy way to get turned around
without the compiler even hinting at a problem.
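
A tiny illustration of that trap, with a hypothetical Point class: since
both coordinates are plain numbers, nothing in the signature (or in a
static type checker) stops a caller from transposing them.

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

intended = Point(640, 480)    # x=640, y=480
oops = Point(480, 640)        # arguments swapped -- no checker complains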
--dang

beli...@aol.com

unread,
Jun 17, 2003, 8:36:55 AM6/17/03
to
David Abrahams <da...@boost-consulting.com> wrote in message news:<uwufmk...@boost-consulting.com>...

SNIP

> I find that static typing makes a big difference for two things:
>
> 1. Readability. It really helps to have names introduced with a
> type or a type constraint which expresses what kind of thing they
> are. This is especially true when I am coming back to code after
> a long time or reading someone else's work. Attaching that
> information to the name directly is odious, though, and leads to
> abominations like hungarian notation.

I strongly agree with this. For numerical work, especially numerical
linear algebra, having the compiler check types and the # of
dimensions can detect many errors.

The beginning of the following Fortran 95 subroutine, for example,
provides useful information both to the compiler and to a programmer
trying to understand what it does:

subroutine solve_lsq_svd_rmse(aa,bb,xx,rmse,ierr)
! solve a set of least-squares linear equations and compute the rmse
real , intent(in) :: aa(:,:) ! matrix of independent variables
real , intent(in) :: bb(:) ! vector of dependent variable
real , intent(out) :: xx(:) ! solution of least-squares problem
real , intent(out) :: rmse ! rmse of regression
integer , intent(out) :: ierr ! error flag

In Python, to ensure that aa is a 2-D array and that bb and xx are 1-D
arrays, you need to write some checking code, which will not be as
clear as the above declarations IMO. Even if you do this, the errors
will be caught at run time, not compile time.

(Regardless of the language, you need to check that the # of rows in
aa equals the # of elements in bb. This usually must be done at run
time.)

Richard Brodie

unread,
Jun 17, 2003, 8:38:55 AM6/17/03
to

"Tim Rowe" <tim@remove_if_not_spam.digitig.cix.co.uk> wrote in message
news:bresev47pva1a27od...@4ax.com...

> (As an aside, when the project manager comes along and says we have to
> cut something to meet timescales, do you think it will be compilation
> or test that's more likely to be cut?)

If you're doing test driven design and your project manager wants to cut the
tests, get a new project manager. This line of argument often seems to come up
when XP-style development is discussed, and, frankly, I think it's rather weak.
Of course, test driven design isn't going to work well if you skip the tests.

You never hear the argument: "what if the project manager cuts code reviews to
save time", and that's something that's hard to automate. Likewise, in an strong
architectural design, nobody would suggest cutting the upfront design time would
save time overall.


Duncan Booth

unread,
Jun 17, 2003, 9:14:11 AM6/17/03
to
beli...@aol.com wrote in
news:3064b51d.0306...@posting.google.com:

> subroutine solve_lsq_svd_rmse(aa,bb,xx,rmse,ierr)
> ! solve a set of least-squares linear equations and compute the rmse
> real , intent(in) :: aa(:,:) ! matrix of independent variables
> real , intent(in) :: bb(:) ! vector of dependent variable
> real , intent(out) :: xx(:) ! solution of least-squares problem
> real , intent(out) :: rmse ! rmse of regression
> integer , intent(out) :: ierr ! error flag
>
> In Python, to ensure that aa is a 2-D array and that bb and xx are 1-D
> arrays, you need to write some checking code, which will not be as
> clear as the above declarations IMO. Even if you do this, the errors
> will be caught at run time, not compile time.
>
> (Regardless of the language, you need to check that the # of rows in
> aa equals the # of elements in bb. This usually must be done at run
> time.)

You don't need to check that xx is a 1-D array since it is a result, so
your function just generates a 1-D array and returns it. Likewise you don't
need to check the type of rmse, and I really hope that in Python ierr would
disappear entirely in favour of an exception.

So you are down to the question of whether aa is a 2-D array, and whether
bb is a 1-D array of the correct size. Rather than checking the type of aa,
you could just try using it as a 2-D array. You will be pretty hard pushed
to accidentally pass in something of the wrong type that produces a result
rather than an exception.

I think you also need to distinguish not just between compile time and
runtime, but between compile time, test time and runtime. Compile and test
times happen (I hope) every few minutes while you are developing the
program. If you are passing an incompatible type into a function, this
should be caught at test time, i.e. within about 2 minutes maximum of you
writing the call to the function.

--
Duncan Booth dun...@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?

Brandon Corfman

unread,
Jun 17, 2003, 9:16:18 AM6/17/03
to
1) Why can't this issue be solved by making Python like Lisp/Dylan in
this regard? Make type declarations/range limitations on variables an
optional thing. Flexibility if you need it, safety later if you're
concerned about it.

2) It also seems that the Python community wastes too much time on these
explanations. A better answer (in my mind) would be to say that the
Python environment is designed to make the development process far
different than in a statically-typed language like C++.

Python is different because a) an interactive command prompt allowing
instant evaluation/testing of return values encourages immediate
testing, b) a unit testing framework encourages more long-term testing,
and c) simple tuple packing/unpacking allows multiple return values. All
of these combine to make Python a development environment that
encourages functional programming _and all development should be done
this way_. If your functions at the lowest levels produce the results
you are expecting, then you have confidence that the other higher-level
functions built on top of them will produce the correct results as well.
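
As a small illustration of that workflow (the function here is
hypothetical), a lowest-level routine can be exercised at the prompt the
moment it is written:

>>> def dot(xs, ys):
...     total = 0
...     for x, y in zip(xs, ys):
...         total = total + x * y
...     return total
...
>>> dot([1, 2, 3], [4, 5, 6])    # instant check of the lowest-level piece
32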

This development process is vastly different from what a C++/Java
programmer is used to, because even if they program functionally (and
MOST do not), they rely on a debugger to test values. Although the
Python environment has a debugger to help C++/Java programmers make the
transition, I think new Python programmers should be steered toward
using Python properly as soon as possible.

I think the problem is that a C++/Java programmer regards the Python
command prompt as little more than a calculator or a place to type
"prog.main()". I know I did at first.

Brandon

Peter Hansen wrote:


> David Abrahams wrote:
>
>>I find that static typing makes a big difference for two things:
>>

>> 2. Refactoring. Having a compiler which does some static checking
>> allows me to make changes and use the compiler as a kind of
>> "anchor" to pivot against. It's easy to infer that certain
>> changes will cause compiler errors in all the places where some
>> corresponding change needs to be made. When I do the same thing
>> with Python I have to crawl through all the code to find the
>> changes, and a really complete set of tests often take long
>> enough to run that using the tests as a pivot point is
>> impractical.
>
>
> I just went through a particularly straining session of refactoring
> yesterday.
>
> I couldn't get my head around how to do what I wanted to do, but I
> kept bashing away at it, trying one thing, then another. (One of
> my problems is that I'm still a little new at using generators, and
> I was trying to use a generator at the heart of the solution.)
>
> At no time was I actually changing the interface to any of my
> functions, changing order of arguments or their types.
>
> At no time did I actually encounter a syntax error or any other
> kind of error which stopped the code from running.
>
> At no time did I make a single mistake which would have been
> caught by the compiler for a statically typed language. I
> was on a roll I guess, typing accurately and not doing anything
> sloppy.
>
> Nevertheless, I repeatedly injected logic errors and other such
> problems into the code as I tried to change its structure.
>
> My tests helped me though. I'd guess I had to back out of at
> least five different false directions, as one after another
> obscure test popped up, tapped me on the shoulder, and said,
> "Uh, excuse me? What am I, chopped liver? I need to pass too!"
>
> Not proof of anything, or disproof, either, I suppose. Just a
> little reflection on refactoring and the value of tests and
> static type checking.
>
> -Peter

Justin Johnson

unread,
Jun 17, 2003, 9:08:45 AM6/17/03
to
Thank you so much. This looks like what I needed.


Dave Brueck

unread,
Jun 17, 2003, 11:36:24 AM6/17/03
to
On Tue, 17 Jun 2003, Moshe Zadka wrote:

> Just because translating from sane-speak to accountant-lingo is
> non-trivial does not mean source code is not a liability]

Hi Moshe,

Can you clarify what you mean by "source code is always a liability"? I've
read that comment from you many times in this thread and either I'm not
"getting it" or it just doesn't make much sense. If I understand you
correctly, then with that line of thinking you could just as easily argue

"employees are always a liability"
"office space is always a liability"
"computers are always a liability"

(and in that sense it is so non-revelatory that it's not worth repeating,
so hopefully I'm simply misunderstanding you)

Note, for the record, that I _don't_ consider source code to be some
priceless asset, and I think everyone realizes that having less code to
maintain is a Good Thing.

Thanks,
Dave

JanC

unread,
Jun 17, 2003, 10:53:37 AM6/17/03
to
Christopher Koppler <klap...@nusurf.at> schreef:

>> IIRC the first cars had a maximum speed of about 3.5 km/h and could
>> drive about 5 km far. Preparing the car for that took about an hour,
>> and afterwards you had to repair it for another 2 hours. Even a
>> pedestrian could go faster, further, cheaper, and carry more weight
>> with him... ;-)
>>
> Yes, but those were the *very* first cars - alpha releases, which were
> quite soon replaced by much more stable beta versions, and 1908
> already saw version 1.0: the Model T Ford.

Well, it took them rather long to reach a final version 1.0 then: the
Belgian inventor Jean Lenoir drove his first car in 1863...

Moshe Zadka

unread,
Jun 17, 2003, 10:52:33 AM6/17/03
to
On Tue, 17 Jun, Dave Brueck <da...@pythonapocrypha.com> wrote:

> Can you clarify what you mean by "source code is always a liability"?

Well, I can *try*. I don't have any research, merely anecdotal evidence.
The anecdotal evidence suggests that I *enjoy* deleting code. A public example
is how I helped the deletion of the "poly" module from Python (technically,
it's in lib-old for backward compatibility reasons, however) -- and I
consider it to be one of the best contributions I made to core Python.

Now, in what ways was the "poly" module a liability? Well, the core Python
team had to maintain it. They had to make sure its performance was decent
and that it had no bugs and that the interface is decent. They didn't.
In 1.5.2, this module carried ugly 1.2-work-aroundy baggage and did polynomial
multiplication in a silly way. It obscured from people that there are
*good* polynomial modules for Python (in Numeric, IIRC).

Now, having the *functionality* of polynomials is an asset. Having *source
code* for supporting polynomials was a liability -- it sucked the time
and energies of Python developers.

This is why "YAGNI" is such a powerful principle -- because source code
without functionality is sheer liability. This is why "do the simplest
thing" is powerful -- because less source code for the same functionality
is better. Neither of those would be true if source code was not a liability.
I'm not sure if you think this is the trivial sense -- if you think this
*is* trivial, you have some programming maturity, so good! :)

Alexander Schmolck

unread,
Jun 17, 2003, 11:36:36 AM6/17/03
to
beli...@aol.com writes:

> David Abrahams <da...@boost-consulting.com> wrote in message news:<uwufmk...@boost-consulting.com>...
>
> SNIP
>
> > I find that static typing makes a big difference for two things:
> >
> > 1. Readability. It really helps to have names introduced with a
> > type or a type constraint which expresses what kind of thing they
> > are. This is especially true when I am coming back to code after
> > a long time or reading someone else's work. Attaching that
> > information to the name directly is odious, though, and leads to
> > abominations like hungarian notation.
>
> I strongly agree with this. For numerical work, especially numerical
> linear algebra, having the compiler check types and the # of
> dimensions can detect many errors.

Have you compared developing with (i)python/Numeric to Fortran (I am not
saying you haven't, but I'd be curious to know)? If so, did you develop mainly
in a single interactive session or did you use edit/run cycles?

>
> The beginning of the following Fortran 95 subroutine, for example,
> provides useful information both to the compiler and to a programmer
> trying to understand what it does:
>
> subroutine solve_lsq_svd_rmse(aa,bb,xx,rmse,ierr)
> ! solve a set of least-squares linear equations and compute the rmse
> real , intent(in) :: aa(:,:) ! matrix of independent variables
> real , intent(in) :: bb(:) ! vector of dependent variable
> real , intent(out) :: xx(:) ! solution of least-squares problem
> real , intent(out) :: rmse ! rmse of regression
> integer , intent(out) :: ierr ! error flag


def solve_lsq_svd_rmse(aa, bb):
    """Solve a set of least-squares linear equations and compute the rmse.

    Parameters:
    - `aa` matrix of independent variables
    - `bb` vector of dependent variable

    Returns: A tuple consisting of the solution of the least squares problem
    and the rmse of regression.

    """
    aa, bb = asarray(aa), asarray(bb)
    assert rank(aa) == 2 and rank(bb) == 1  # likely to be superfluous
    [...]


>
> In Python, to ensure that aa is a 2-D array and that bb and xx are 1-D
> arrays, you need to write some checking code, which will not be as
> clear as the above declarations IMO.

Well, tastes might differ but I'd rather read the python version (chances are
I wouldn't even have to read it -- ``help(solve_lsq_svd_rmse)`` in the
interpreter would tell me what I need to know about it, provided it comes as
part of a respectable library and is likely to work).

> Even if you do this, the errors will be caught at run time, not compile
> time.

Typing the following into the interactive shell:

>>> 9 + "f"

will also produce a run-time rather than a compile-time error (as it would in,
say, Pascal -- where you'd have to write boilerplate code and compile it
first). Which form of feedback do you think is quicker and more
effective in this case?

If you write and test your code interactively (of course also writing proper
unit tests) runtime errors provide in many cases better and more immediate
feedback than compiler errors in lesser languages such as C, especially since
C catches only very few errors at compile time and gives about *zero*
error feedback at runtime (thus, the 9 + "f" example above (plus the
appropriate boilerplate code) compiles fine and without warnings under at
least one C compiler I tried).


>
> (Regardless of the language, you need to check that the # of rows in
> aa equals the # of elements in bb. This usually must be done at run
> time.)

No, you don't need to check everything regardless of language. Obsessive
compile-time checks are so important in primitive languages like C(++) or Fortran
because they fail catastrophically but often silently if something goes wrong.
A dynamically typed high level language like python typically won't. In most
cases if the number of rows doesn't match the number of elements, chances are
you'll get an appropriate error message, and the same if you pass an argument of
the wrong type. Plus, chances are the statically typed version will be
artificially constrained in the input types it can deal with.

'as

Alex Martelli

unread,
Jun 17, 2003, 12:00:58 PM6/17/03
to
Moshe Zadka wrote:

> On Tue, 17 Jun 2003, Alex Martelli <al...@aleax.it> wrote:
>
>> Good thing your job isn't accounting
>
> Right
>
>> -- if you tried putting, on a
>> company's balance sheet, the source code it owns under "liabilities",
>> rather than under "assets", you'd be in for malpractice lawsuits to
>> dwarf Enron's (tax authorities might be particularly likely to take
>> a dim view in the matter).
>
> And *that's* the reason it's a good thing my job isn't accounting?
> [What do you think those "salaries for people who maintain the source" and
> "various developing environments" expenses *are*, if not the translation

<sigh> the salaries for gardeners and the expenses for fertilizer do
*not* mean that a garden owned by a corporation is a liability of that
corporation -- it's still an asset. It's quite normal that there may
be expenses connected to maintenance of an asset (and different fiscal
regimes define ordinary and extraordinary maintenance in slightly
different ways, as well as, in some cases, 'required' versus 'optional'
maintenance), but those, of course, do not change its asset-nature.

> to accounting-lingo? How come we weren't sued for malpractice when my
> bosses authorized me to delete heaping gobs of code? Just because

If you didn't keep an archival trace of the previous versions (e.g.
with CVS, subversion, or the like), then I guess you're lucky you have
friendly and accomodating stockholders and/or auditors. Disposal of
some assets is a normal procedure, of course, but you don't just take
them to the middle of the sea in the night and dump them there -- you
need an audit trail for that (partly depending on your local
jurisdiction's inventory regulations, of course). Consider for
example a restaurant which at the start of a given business day
has in its inventory 3 Kg of truffles: these are assets ("stock in
trade" is the common term). At the end of the day, it's possible
that, say, 1 Kg of the truffles has been used in preparing various
delicious dishes for the restaurant's customers. On closing for the
night, the restaurant's stocks are subject to routine inspection and
it turns out that 0.5 Kg of the remaining truffles have spoiled and
should be disposed of. No problem: quite an ordinary operation. But
still, an entry must be made in the books recording the disposal of
such and such an amount of goods due to spoilage.

And if you think that the possibility of spoilage (and thus of needing
disposal), or the need for maintenance (e.g. cleaning superficial
warts that may develop on the truffles' surface, to avoid some more
of them spoiling and needing disposal) means the truffles owned by
the restaurant are "a liability", rather than an asset of the firm,
then we're not dealing with a lack of understanding of "accounting
lingo": we're dealing with something way deeper! Stock-in-trade has
cost money and/or effort to acquire; unless it spoils and needs to
be disposed of, it may be used (possibly with further expenditure of
work &c) to generate income for the firm (e.g. to prepare goods or
services for sale to customers); thus, it's an asset of the firm.
The possible need for maintenance and risk of spoilage do not change
this situation one bit!

> translating from sane-speak to accountant-lingo is non-trivial does
> not mean source code is not a liability]

It's not an issue of "lingo" vs alleged "sane-speak". It's a matter
of common sense, ordinary microeconomics, and somebody trying to be
funny without, it appears, the slightest grasp of the subject.

A liability of a firm is a financial obligation, responsibility, or
debt. If the firm owns an asset on which it is legally mandated to
provide ordinary maintenance, then that legal mandate (an obligation
which imposes costs) is a liability (it can generally be shown as
such in your balance sheet, depending on the exact accounting rules
that may apply in your jurisdiction), but that does not change the
nature of the asset as such (it may offset its _value_, but that's
another issue). When the maintenance is not mandatory by law, then
it's quite iffy to consider it a liability -- "ordinary operating
expense" is the only classification that makes _sense_, although it
may be possible to play (perhaps-legal) tricks to doctor the balance
sheet anyway (e.g., hive off the asset into a legally separate firm
and undertake to provide the maintenance by entering a binding
contract with that firm -- that _might_ let you hide some or all of
the asset's value and perhaps even show a net notional liability,
depending on how you may be required to consolidate balance sheets
and how fast-and-loose you're willing to play with the spirit vs the
letter of accounting principles and rules). But, tricks apart, an
asset which requires maintenance is NOT a liability!


Alex

Alex Martelli

unread,
Jun 17, 2003, 12:15:26 PM6/17/03
to
Terry Reedy wrote:
...
>> E.g., indexing blah[bloh] with an
>> invalid index value computed in variable bloh (invalid with respect
>> to the set of indices that container blah can accept) is, alas, far
>> from rare; but it's not a type-checking issue,
>
> Depends on what you call a type. A 'count' or a 'residue_class' is as
> much a mathematical 'type' as an 'int' (or a 'rat').

A 'type' that is checkable at compile-time is (at the very least)
a set whose membership needs to be fixed and known at compile-time.
(There may be other requirements, but this one is crucial;-).

"The set of acceptable keys/indices into 'blah' at this moment
(which depends on how big 'blah' is right now, etc etc)" is not
``a type'' in this sense. So, if 'blah' is a std::vector<...>
in C++, a Vector in Java, a Python list, or basically any sort of
usable array/vector whose capacity can be determined at runtime,
then compile-time type checking just cannot substitute for runtime
checking of indices.
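
A two-line illustration (the names are made up): the container's size,
and hence the set of valid indices, is known only at run time, so no
declaration of bloh's type can rule out the bad access.

blah = [10, 20, 30]    # size fixed only at run time, in general
bloh = 7               # a perfectly well-typed int...
value = blah[bloh]     # ...but still an IndexError at run time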


Alex

Peter Hansen

unread,
Jun 17, 2003, 12:17:47 PM6/17/03
to
JanC wrote:
>
> Christopher Koppler <klap...@nusurf.at> schreef:
>
> >> IIRC the first cars had a maximum speed of about 3.5 km/h and could
> >> drive about 5 km far. Preparing the car for that took about an hour,
> >> and afterwards you had to repair it for another 2 hours. Even a
> >> pedestrian could go faster, further, cheaper, and carry more weight
> >> with him... ;-)
> >>
> > Yes, but those were the *very* first cars - alpha releases, which were
> > quite soon replaced by much more stable beta versions, and 1908
> > already saw version 1.0: the Model T Ford.
>
> Well, it took them rather long to reach a final version 1.0 then: the
> Belgian inventor Jean Lenoir drove his first car in 1863...

See http://inventors.about.com/library/weekly/aacarssteama.htm
for a picture of a the first steam-powered car, by Cugnot, from 1771,
plus comments that the first theoretical plans for a motor vehicle
were drawn up by da Vinci and Newton (in the first occurrence of
pair programming? ;-).

It also explains the use case for Cugnot's car: hauling artillery
for the military.

Other interesting points:

- "In 1789, the first U.S. patent for a steam-powered land vehicle
was granted to Oliver Evans.", proving that the U.S. patent office
was filled with morons who would grant patents for just about any
old thing that someone else had already invented, even 214 years ago. :)

- "In Britain, from 1820 to 1840, steam-powered stagecoaches were in
regular service. These were later banned from public roads and Britain's
railroad system developed as a result.", showing that even before the
alpha releases mentioned above, there was a valid use case for
non-horse-driven carriages, and customers using them, even if an
early spate of road-rage put an end to that practice. :-)

- "In 1871, Dr. J. W. Carhart, professor of physics at Wisconsin State
University, and the J. I. Case Company built a working steam car that won
a 200-mile race", without specifying whether the poor horse(*) that it was
racing against later made good eatin'. One would assume it had to go
faster than 3.5km/h to win such a race.

-Peter

(*) There's actually no indication it was a horse that was involved.

Tim Rowe

unread,
Jun 17, 2003, 12:21:54 PM6/17/03
to
On Tue, 17 Jun 2003 13:38:55 +0100, "Richard Brodie"
<R.Br...@rl.ac.uk> wrote:

>save time", and that's something that's hard to automate. Likewise, in an strong
>architectural design, nobody would suggest cutting the upfront design time would
>save time overall.

Oh, I've /often/ heard that; it's so common that it's a recognised
design antipattern. And the paycheck can be a strong disincentive to
changing one's project manager!

Donn Cave

unread,
Jun 17, 2003, 12:33:50 PM6/17/03
to
In article <3EEE7D4C...@engcorp.com>,
Peter Hansen <pe...@engcorp.com> wrote:
...
> Pah! We're not talking about a bunch of morons here, we're talking about
> intelligent people who might not have encountered a particular approach
> which could allow them to improve the quality of their output. Why
> would you think people aren't intelligent enough to understand that
> nothing is an absolute, that no process or language or generalization
> of any kind will ever be adequate for all possible situations? If I've
> said something that claims XP or TDD will work for everyone, always,
> then I hereby take it back, but I'm sure I haven't said such a thing
> unless it was in one of my 3:30 in the morning after a lousy few hours
> of sleep kind of postings...

He's not the only one who gets a message something like that
from the unit testing advocates on c.l.p. Without the hyperbolic
``all possible situations'', sure, but more to the point, does
the discussion usually acknowledge significant exceptions, or
rather tend to dismiss them?

Donn Cave, do...@u.washington.edu

Steven Taschuk

unread,
Jun 17, 2003, 12:11:16 PM6/17/03
to
Quoth Brandon Corfman:

> 1) Why can't this issue be solved by making Python like Lisp/Dylan in
> this regard? Make type declarations/range limitations on variables an
> optional thing. Flexibility if you need it, safety later if you're
> concerned about it.

This was discussed in the now-defunct types-sig, if memory serves.
<http://www.python.org/sigs/types-sig/>
I've never looked into it, but I imagine the archived discussions
there might shed some light on why this hasn't been done.

> 2) It also seems that the Python community wastes too much time on these
> explanations. A better answer (in my mind) would be to say that the
> Python environment is designed to make the development process far
> different than in a statically-typed language like C++.

That seems like a good answer (though I don't think it replaces
the other explanations in the thread so much as it augments or
summarizes them).

As for us wasting our time with these explanations, well, the
usual solution (such as it is) would be to write up a FAQ entry
and direct future querents to it. Would you like to write one?

[...]


> of these combine to make Python a development environment that
> encourages functional programming _and all development should be done
> this way_. [...]

(I don't think "functional programming" means what you think it
means. Functional programming is programming without
side-effects.)

[...]


> I think the problem is that a C++/Java programmer regards the Python
> command prompt as little more than a calculator or a place to type
> "prog.main()". I know I did at first.

Good observation! I made that mistake too.

--
Steven Taschuk stas...@telusplanet.net
Every public frenzy produces legislation purporting to address it.
(Kinsley's Law)

Dave Brueck

unread,
Jun 17, 2003, 2:18:38 PM6/17/03
to
On Tue, 17 Jun 2003, Moshe Zadka wrote:

> On Tue, 17 Jun, Dave Brueck <da...@pythonapocrypha.com> wrote:
>
> > Can you clarify what you mean by "source code is always a liability"?
>

> This is why "YAGNI" is such a powerful principle -- because source code
> without functionality is sheer liability.

Yes, yes, I understand all that. It's just that nobody is disagreeing with
the idea that "less code for the same functionality is good". The
statement "source code is always a liability" is only true in the same
watered down sense that "employees are always a liability" - IOW, you have
to make the definition so broad that it's no longer useful to mention it
and no different than, say, cost minimization ("don't buy more compiler
licences than we need", "if we can code just as well in smaller offices,
go for it", etc.).

Have fun,
-Dave

Tim Rowe

unread,
Jun 17, 2003, 1:06:55 PM6/17/03
to
On Tue, 17 Jun 2003 07:56:14 GMT, Alex Martelli <al...@aleax.it>
wrote:

>I guess that's the crux of the matter -- and this "research you've
>seen" appears to give results that directly contradict the everyday
>experience reported by the growing band of users of dynamically
>typed languages.

[fx: looks at unstable four-foot pile of unsorted technical papers]
Er, there will be a short delay at this point! (Though if it prompts
me to sort that pile out at last then something good will have come of
this!)

Don't forget that I'm here because I /like/ Python, by the way -- I
just don't think it's right for /everything/. I don't think /any/
language /can/ be!

Donn Cave

unread,
Jun 17, 2003, 12:52:24 PM6/17/03
to
In article <4017400e.0306...@posting.google.com>,
mars...@spamhole.com (marshall) wrote:
...
> Most of the code I come across in strong typed languages makes heavy
> use of type coercion. So how does typing help if you are just going
> to coerce everything?

It's unfortunate, but then most of the code in existence today
was written for languages that are pretty old now and aren't
necessarily the best example of anything. You're right about
coercion, in my opinion but also in the opinion of people who
have created some more rigorous static typing languages.

Of the ones I know about, Haskell and Objective CAML are the
most interesting, Haskell being the more interesting language
and ocaml the more interesting compiler (cf. F# if you're a .NET
fan). Haskell is maybe too interesting: it even types the
execution model, if that makes sense (and it won't right away).

Donn Cave, do...@u.washington.edu

Peter Hansen

unread,
Jun 17, 2003, 1:36:12 PM6/17/03
to

I'm not sure, not recalling specific cases where there was any
mention of significant exceptions which were then dismissed.

I'm also not sure that the point of my comments is so much to debate
the effectiveness of agile processes as it is to encourage
experimentation with them amongst other Python users. The proper
place for debate about those processes is probably the relevant
mailing lists, such as the testdrivendevelopment and extremeprogramming lists
on Yahoo Groups (groups.yahoo.com).

Without that necessary step, the discussion is between those
who have tried it and (apparently, so far, almost universally)
found it highly effective, and those who have not tried it and
therefore have no particular facts on which to base their
dismissal of it. Then there are those who claim to have already
tried it years ago, by writing lots of tests before they wrote any
code, thereby demonstrating their complete failure to understand
the qualitative difference between that and the way TDD actually works.

-Peter

Terry Reedy

unread,
Jun 17, 2003, 1:44:50 PM6/17/03
to

"Alex Martelli" <al...@aleax.it> wrote in message
news:y8HHa.103114$pR3.2...@news1.tin.it...

> Terry Reedy wrote:
> ...
> >> E.g., indexing blah[bloh] with an
> >> invalid index value computed in variable bloh (invalid with respect
> >> to the set of indices that container blah can accept) is, alas, far
> >> from rare; but it's not a type-checking issue,
> >
> > Depends on what you call a type. A 'count' or a 'residue_class' is as
> > much a mathematical 'type' as an 'int' (or a 'rat').
>
> A 'type' that is checkable at compile-time is (at the very least)
> a set whose membership needs to be fixed and known at compile-time.
> (There may be other requirements, but this one is crucial;-).

I agree completely.

> "The set of acceptable keys/indices into 'blah' at this moment
> (which depends on how big 'blah' is right now, etc etc)" is not
> ``a type'' in this sense. So, if 'blah' is a std::vector<...>
> in C++, a Vector in Java, a Python list, or basically any sort of
> usable array/vector whose capacity can be determined at runtime,
> then compile-time type checking just cannot substitute for runtime
> checking of indices.

Right. While my statement about counts and residue classes is
correct, it is not as applicable to indexes of variable-length
whatevers as the juxtaposition implied.

My point about counts is that if we are going to type things (as the
OP advocated, perhaps like C/C++ do), then len(whatever) is better
typed as returning a count rather than an int. At least I would claim
so if the language has exceptions and doesn't 'count' on being able to
return impossible negative lengths as error indicators. If, as in
C/C++, negative indexes are not allowed, then indexes are also better
typed as counts rather than ints. But checking upper bounds is more
complicated, even for fixed length arrays.
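
A minimal runtime sketch of such a 'count'-style index check (the
helper name and error messages are invented for illustration):

def check_index(i, seq):
    # a 'count'-like constraint: a non-negative integer that indexes seq
    if not isinstance(i, int):
        raise TypeError("index must be an int, got %s" % type(i))
    if not 0 <= i < len(seq):
        raise IndexError("index %d not in range 0..%d" % (i, len(seq) - 1))
    return i

The lower bound is part of the 'type' in the mathematical sense; the
upper bound depends on the particular sequence, so it can only ever be
a runtime check.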

Terry J. Reedy


David Bolen

unread,
Jun 17, 2003, 6:31:25 PM6/17/03
to
Moshe Zadka <m...@moshez.org> writes:

> It's better to teach people the truth, no? Instead of teaching them
> about some rosy place which doesn't exist. I feel you are doing
> unit-testing a disservice: if someone buys into it, and finds that
> it isn't a win always, he'll just assume that it is worth nothing.
> In fact, I assumed tests are worth nothing after finding some glaring
> holes in XP's logic, and it took me a long time to be convinced that
> they are ever helpful. Assuming there are others like me, it is better
> to be honest about the limitations of technology first :)

One thing that made a big difference for me when I was doing my own
investigation of XP practices was the point when I realized that
equating TDD with unit testing was a disservice (probably to both
concepts), at least to my own conceptual model of things.

There have been several threads on the XP lists (at Yahoo I think)
with respect to TDD being test-driven-development, or even more
properly, test-driven-design. That it is also close to traditional
unit tests, and that it often gives you reasonable coverage testing,
is a secondary effect. Its primary effect is that you are using the
tests as an act of "design," and through the growing base of tests
letting your design evolve and emerge. It doesn't preclude other
manners of testing (or not doing them at all) and of course XP even
has an extra level in the assurance tests.

At least for me, seeing TDD really as a "design" methodology and not
just unit testing (which I can easily see being done both before and
after the fact) was a significant change. YMMV.

-- David

David Bolen

unread,
Jun 17, 2003, 7:43:51 PM6/17/03
to
"d.w. harks" <da...@psys.org> writes:

> With the win32 extensions, use the win32pipe module. It provides 'safe'
> win32 versions of popen, et al.

Note that for the popen* calls, their implementation in the win32pipe
module was incorporated into the Python core (as the os.popen*
methods) back in Python 2.0, so using the os.popen* calls is just as
safe as using win32pipe.
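
A quick sketch of the plain os.popen route on Windows ('dir' is just a
stand-in for whatever command you actually need to run):

import os

# since Python 2.0, os.popen on Windows uses the win32pipe-derived code
pipe = os.popen('dir')          # any console command will do
output = pipe.read()
status = pipe.close()           # None means the command exited with status 0
print(output[:200])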

-- David

David Abrahams

unread,
Jun 17, 2003, 9:10:02 PM6/17/03
to
Duncan Booth <dun...@NOSPAMrcp.co.uk> writes:

Hi Duncan,

Well, first of all, the kinds of things you need to test in numerical
linear algebra often take a lot longer than 2 minutes. Systems meant
to solve large problems often need large tests in order to exercise
all the emergent behaviors.

Regardless, I fear you have conveniently missed the point here. What
prefixed the text you quoted was:

> I find that static typing makes a big difference for two things:
>
> 1. Readability. It really helps to have names introduced with a
> type or a type constraint which expresses what kind of thing they
> are. This is especially true when I am coming back to code after
> a long time or reading someone else's work. Attaching that
> information to the name directly is odious, though, and leads to
> abominations like hungarian notation.

This was not about code correctness but readability/maintainability.
Yes, you can do the same kind of thing with comments, but:

a. Comments go out-of-date, while static checks don't.

b. Documenting generic type constraints is very difficult to do
concisely in English, so people generally don't write them. Go
through any module from a large body of Python code and tell
me what percentage of functions have rigorous comments
describing their type requirements.

I have this experience all the time. Just today I was trying to fix
some problems with ReStructuredText and PDF writing via ReportLab. I
am not deeply familiar with either codebase. I was having problems
with a function and had a devil of a time trying to figure out what
kinds of things were passed as its parameter named 'node'. I added
some printing code (you shouldn't have to modify 3rd party source just
to analyze it) but polymorphism defeated that - it was all kinds of
types. Eventually I had to guess.
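
The sort of printing code I mean is roughly this -- a sketch only, with
'visit' and its body standing in for the real docutils/ReportLab
function:

seen = {}

def visit(node):
    # temporary instrumentation: record every class that turns up as 'node'
    name = node.__class__.__name__
    seen[name] = seen.get(name, 0) + 1
    print('visit() got %s: %s' % (name, repr(node)[:60]))
    # ... the real body of the function continues here ...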

I'm not saying that Python isn't a wonderful language -- it is. It's
great to have the flexibility of fully dynamic typing available. All
the same, I'm not going to pretend that static typing doesn't have
real advantages. I seriously believe that it's better most of the
time, because it helps catch mistakes earlier, makes even simple code
easier to maintain, and makes complex code easier to write. I would
trade some convenience for these other strengths. Although it's very
popular around here to say that research has shown static typing
doesn't help, I've never seen that research, and my personal
experience contradicts the claim anyway.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams

unread,
Jun 17, 2003, 9:12:52 PM6/17/03
to
Brandon Corfman <bcor...@amsaa.army.mil> writes:

> I think the problem is that a C++/Java programmer regards the Python
> command prompt as little more than a calculator or a place to type
> "prog.main()". I know I did at first.

Ever try to write a class with more than 2 methods at the
command-line? After screwing it up several times I always revert to
emacs <wink>.

John J. Lee

unread,
Jun 17, 2003, 9:56:48 PM6/17/03
to
Donn Cave <do...@u.washington.edu> writes:

> In article <3EEE7D4C...@engcorp.com>,
[...]


> He's not the only one who gets a message something like that
> from the unit testing advocates on c.l.p. Without the hyperbolic
> ``all possible situations'', sure, but more to the point, does
> the discussion usually acknowledge significant exceptions, or
> rather tend to dismiss them?

Nobody can rationally disagree with you that there might exist
situations where writing tests is inefficient (saves less time than it
costs). But, of course, we'd like to know *what those cases are*,
because, AFAIK, they haven't been clearly pointed out.

I assume you have no problem accepting that many people have found
that writing lots of automated unit tests, and writing them early,
works better than not doing so.

So, what criteria do you have for the test/no-test decision that work
better than my own rule of "always write an automated unit test
(probably before writing the code), unless I'm too lazy or
incompetent" <0.2 wink>? (lazy in the bad sense, that is)

I suppose people might argue that if the answers to the following
questions, for example (any others to suggest?), are mostly 'no', then
the ratio of tests to code should be lower:

1. Code likely to be reused in future?
2. Code has been reused in past?
3. Code will not grow big (say, > 1000 lines)?
4. Lots of people working on it?
5. For a long time?
6. Correctness of code particularly important?
7. Emphasis on refactoring?


But you're probably already expecting the answers:

1. Often hard to tell if it will be.
2. But adding tests later is more expensive (as is fixing bugs later).
3. See 1, 2.
4. See 1, 2. But even if it's only one person, it's usually worth it (IME).
5. See 1, 2.
6. I have to admit code correctness is *not* always crucial (whatever
Bertrand Meyer may try to tell you), but I suspect that, usually, if
it's important enough to fix bugs, it's worth writing tests.
7. My mind isn't entirely made up, but it might be suggested that if
you're not refactoring, you've made another mistake.


Looking at the thread from which I'm trying to start this one, a
lesson to be learned from the static vs. dynamic typing thing is that
it doesn't always work to consider one thing at a time. Static-typing
wonks like to point out that static analysis finds bugs. But us
dynamic typing, um, types, note that static analysis isn't the only
variable: dynamic typing brings other pros, static typing other cons,
and (dynamic typing + tests) is better than the other combinations.
So maybe unit-testing enthusiasts should be thinking more about the
costs and opportunity costs of testing? Maybe if we all switched to
constraint-based / functional languages with static type inference and
god-knows-what-else, and spent some of our testing time doing more
code review, we'd be better off still?


John

JanC

unread,
Jun 17, 2003, 10:31:30 PM6/17/03
to
Peter Hansen <pe...@engcorp.com> schreef:

> One would assume it had to go faster than 3.5km/h to win such a race.

I was talking about cars with "explosion engines" (or how do you call that
in correct English ;)

The first "automobile" ("self moving vehicle") that I know about was steam-
powered and designed by the Flemish scientist Father Ferdinand Verbiest
somewhere in the 1670s.
It was a rather small "toy" made for the Chinese Emperor (his employer).
He made steam ships too.

Erik Max Francis

unread,
Jun 17, 2003, 10:45:00 PM6/17/03
to
JanC wrote:

> I was talking about cars with "explosion engines" (or how do you call
> that
> in correct English ;)

Internal combustion engines. Or, at least, I hope so :-).

--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
__ San Jose, CA, USA && 37 20 N 121 53 W && &tSftDotIotE
/ \ Forgive your enemies, but never forget their names.
\__/ John F. Kennedy

Alex Martelli

unread,
Jun 18, 2003, 5:15:50 AM6/18/03
to
Tim Rowe wrote:

> On Tue, 17 Jun 2003 07:56:14 GMT, Alex Martelli <al...@aleax.it>
> wrote:
>
>>I guess that's the crux of the matter -- and this "research you've
>>seen" appears to give results that directly contradict the everyday
>>experience reported by the growing band of users of dynamically
>>typed languages.
>
> [fx: looks at unstable four-foot pile of unsorted technical papers]
> Er, there will be a short delay at this point! (Though if it prompts
> me to sort that pile out at last then something good will have come of
> this!)

I will (slightly, I hope) breach netiquette by asking for a mail Cc of
those pointers you may eventually unearth -- I'm leaving day after
tomorrow for a long and convoluted trip and following news while at
conventions, sprints &c is quite a problem...


> Don't forget that I'm here because I /like/ Python, by the way -- I
> just don't think it's right for /everything/. I don't think /any/
> language /can/ be!

I agree, and indeed believe that the attempts to make languages that
ARE good at everything produce languages that are just too large to
fit comfortably in most humans' heads.

But this has little to do with the need of 'type declarations'. I
suspect that a statically typed language would also be better off
without them, relying on type inferencing instead, a la Haskell (and
Haskell's typeclasses to keep the inferencing as wide as feasible),
for example. But I have no research to back this up;-).


Alex

Mirko Zeibig

unread,
Jun 18, 2003, 5:45:49 AM6/18/03
to
beli...@aol.com wrote:
> The beginning of the following Fortran 95 subroutine, for example,
> provides useful information both to the compiler and to a programmer
> trying to understand what it does:
>
> subroutine solve_lsq_svd_rmse(aa,bb,xx,rmse,ierr)
> ! solve a set of least-squares linear equations and compute the rmse
> real , intent(in) :: aa(:,:) ! matrix of independent variables
> real , intent(in) :: bb(:) ! vector of dependent variable
> real , intent(out) :: xx(:) ! solution of least-squares problem
> real , intent(out) :: rmse ! rmse of regression
> integer , intent(out) :: ierr ! error flag
>
> In Python, to ensure that aa is a 2-D array and that bb and xx are 1-D
> arrays, you need to write some checking code, which will not be as
> clear as the above declarations IMO. Even if you do this, the errors
> will be caught at run time, not compile time.

What about using the array-module for this?

from array import array
aa = [array('d', aa[0]), array('d', aa[1])] # matrix of independent variables
bb = array('d', bb) # vector of dependent variable
xx = array('d') # solution of least-squares problem
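
Or, if the point is really the dimension checks from the quoted Fortran,
a few explicit assertions would do it -- a sketch assuming Numeric-style
arrays (a .shape attribute and a typecode() method):

def solve_lsq_svd_rmse(aa, bb):
    # aa: 2-D matrix of independent variables; bb: 1-D vector of the dependent one
    assert len(aa.shape) == 2, "aa must be a 2-D array"
    assert len(bb.shape) == 1, "bb must be a 1-D array"
    assert aa.shape[0] == bb.shape[0], "aa and bb need the same number of rows"
    assert aa.typecode() in 'fd', "aa must hold real (floating-point) numbers"
    # ... least-squares solution and rmse computation would follow here ...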

Regards
Mirko

John J. Lee

unread,
Jun 18, 2003, 9:07:33 AM6/18/03
to al...@aleax.it
Alex Martelli <al...@aleax.it> writes:
[...]

> But this has little to do with the need of 'type declarations'. I
> suspect that a statically typed language would also be better off
> without them, relying on type inferencing instead, a la Haskell (and
> Haskell's typeclasses to keep the inferencing as wide as feasible),
> for example. But I have no research to back this up;-).

Let me magically conjure into existence a really nice language based
on static type inference (let's have it beautifully designed, learning
as many lessons from Python as possible, with oodles of good library
code, including easy connection to C, C++, Java, COM, .NET, CORBA
etc...).

Are you going to switch your most frequent first-choice from Python to
this statically-typed language?

That's *not* a rhetorical question: I don't know the answer.


John

John J. Lee

unread,
Jun 18, 2003, 9:15:11 AM6/18/03
to
David Abrahams <da...@boost-consulting.com> writes:

> Brandon Corfman <bcor...@amsaa.army.mil> writes:
>
> > I think the problem is that a C++/Java programmer regards the Python
> > command prompt as little more than a calculator or a place to type
> > "prog.main()". I know I did at first.
>
> Ever try to write a class with more than 2 methods at the
> command-line? After screwing it up several times I always revert to
> emacs <wink>.

I agree with this. I don't know what all the fuss is about with the
interactive prompt. I certainly do use it, quite frequently, but
don't see a huge difference here between Python and statically-typed
compiled languages (especially given a good IDE -- like unix + emacs
etc.).

Maybe only me and David make lots of mistakes at the interactive
prompt ;-)


John

David Abrahams

unread,
Jun 18, 2003, 10:25:54 AM6/18/03
to al...@aleax.it
Alex Martelli <al...@aleax.it> writes:

> But this has little to do with the need of 'type declarations'. I
> suspect that a statically typed language would also be better off
> without them, relying on type inferencing instead, a la Haskell (and
> Haskell's typeclasses to keep the inferencing as wide as feasible),
> for example. But I have no research to back this up;-).

I don't have any first-hand experience, but the experience of friends
of mine who have used Haskell is that it can be exceedingly difficult
to locate the source of a type error when it does occur, since the
inference engine may propagate the "wrong" type back much further than
the source of the error.

Furthermore, if you do everything by inference you lose the
explanatory power of type declarations.

Alex Martelli

unread,
Jun 18, 2003, 11:01:28 AM6/18/03
to
David Abrahams wrote:

> Alex Martelli <al...@aleax.it> writes:
>
>> But this has little to do with the need of 'type declarations'. I
>> suspect that a statically typed language would also be better off
>> without them, relying on type inferencing instead, a la Haskell (and
>> Haskell's typeclasses to keep the inferencing as wide as feasible),
>> for example. But I have no research to back this up;-).
>
> I don't have any first-hand experience, but the experience of friends
> of mine who have used Haskell is that it can be exceedingly difficult
> to locate the source of a type error when it does occur, since the
> inference engine may propagate the "wrong" type back much further than
> the source of the error.

Surely the compiler should easily be able to annotate the sources with
the information it has inferred, including, in particular, type information.
Thus it cannot possibly be any harder to identify the error point than
if the same type declarations had been laboriously, redundantly written
out by hand -- except, at worst, for a slight omission in the tool of a
feature which would be easily provided.


> Furthermore, if you do everything by inference you lose the
> explanatory power of type declarations.

I think I know where you're coming from, having quite a past as a
static-type-checking enthusiast myself, but I think you overrate the
"explanatory power of type declarations".

What I want to be able to do in my sources is assert a set of facts
of the form "at this point I know X holds". Sometimes X might perhaps
be of the form "a is of type B", but that's really a very rare and
specific case. Much more often it will be "container c is non-empty",
"sequence d is sorted", "either x<y or pred(a[z]) for some x>=z>=y",
and so on, and so forth. Type declarations would have extraordinary
"explanatory power" if and only if "a is of type B" was extraordinarily
more important than the other kinds of assertions, and it just isn't --
even though, by squinting just right, you may end up seeing it that
way by a sort of "Stockholm Syndrome" applied to the constraints your
compiler forces upon you.
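
In today's Python most of those richer facts are already expressible,
if a bit clumsily, as plain assert statements -- a quick sketch, with
dummy data only so the asserts actually run:

# dummy data just to make the assertions concrete:
c = [1, 2, 3]
d = (1, 2, 5)

class B:
    pass

a = B()

assert c, "container c is non-empty"
assert list(d) == sorted(d), "sequence d is sorted"   # costly check: debug runs only
assert isinstance(a, B)                               # "a is of type B": the rare case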

Types are about implementation, and one should "program to an interface,
not to an implementation" -- therefore, "a is of type B" is rarely what
one SHOULD be focusing on. Of course, some languages blur the important
distinction between a type and a typeclass (or, a class and an interface,
in Java terms -- C++ just doesn't distinguish them by different concepts,
so, if you think in C++, _seeing_ the crucial distinction may be hard;-).

"e provides such-and-such an interface" IS more often interesting, but,
except in Eiffel, the language-supplied concept of "interface" is too
weak for the interest to be sustained -- it's little more than the sheer
"signature" that you can generally infer easily. E.g.:

my procedure receiving argument x

assert "x satisfies an interface that provides a method Foo which
is callable without arguments"

x.Foo()

the ``assert'' (which might just as well be spelled "x satisfies
interface Fooable", or, in languages unable to distinguish "being
of a type" from "satisfying an interface", "x points to a Fooable")
is ridiculously redundant, the worse sort of boilerplate. Many,
_many_ type declarations are just like that, particularly if one
follows a nice programming style of many short functions/methods.
At least in C++ you may often express the equivalent of

"just call x.Foo()!"

as

template <typename T>
void myprocedure(T& x)
{
    x.Foo();    // all that is really required of T: a Foo() callable with no arguments
}

where you're basically having to spend a substantial amount of
"semantics-free boilerplate" to tell the compiler and the reader
"x is of some type T" (surprise, surprise; I'm sure this has huge
explanatory power, doesn't it -- otherwise the assumption would
have been that x was of no type at all...?) while letting them
both shrewdly infer that type T, whatever it might be, had better
provide a method Foo that is callable without arguments (e.g. just
the same as the Python case, natch).
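
The Python spelling of the same thing, for comparison -- the assert is
strictly optional, since it states nothing the call itself won't check
anyway:

def myprocedure(x):
    # the only requirement on x: a Foo method callable with no arguments
    assert hasattr(x, 'Foo') and callable(x.Foo), "x must provide a callable Foo()"
    x.Foo()

class Whatever:
    def Foo(self):
        print('Foo called')

myprocedure(Whatever())   # fine; anything without a suitable Foo fails at the call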

You do get the "error diagnostics 2 seconds earlier" (while compiling
in preparation to running unit-tests, rather than while actually
running the unit-tests) if and when you somewhere erroneously call
myprocedure with an argument that *doesn't* provide the method Foo
with the required signature. But, how can it surprise you if Robert
Martin claims (and you've quoted me quoting him as if I was the
original source of the assertion, in earlier posts) that this just
isn't an issue...? If the compilation takes 3 seconds, then getting
the error diagnostics 2 seconds earlier is still a loss of time, not
a gain, compared to just running the tests w/o any compilation;-)...


I do, at some level, want a language where I CAN (*not* MUST) make
assertions about what I know to be true at certain points:
1. to help the reader in a way that won't go out of date (the assert
statement does that pretty well in most cases)
2. to get the compiler to do extra checks & debugging for me (ditto)
3. to let the compiler in optimizing mode deduce/infer whatever it
wants from the assertions and optimize accordingly (and assert is
no use here, at least as currently present in C, C++, Python)
But even if and when I get such a language I strongly doubt most of
my assertions will be similar to "type declarations" anyway...


Alex

Alex Martelli

unread,
Jun 18, 2003, 11:14:12 AM6/18/03
to
John J. Lee wrote:

I don't know the answer either. It would depend on empirical measurements
of how productive I am for my typical tasks in Python vs Magicklang, as
well as on subjective measurements of how much I enjoy using either, etc,
etc, AND market demand (effective demand, as in $$$-offered;-) for my
services in either capacity. E.g. if Magicklang had perfect compliance
with .NET, Python yet hadn't, and oodles of customers were beating at
my door, bags of gold coins in their hands, desperately pleading for
.NET applications and consultancy, then my switching to Magicklang for
this purpose would be far from impossible -- I doubt the type inference
could be as destructive of my productivity and enjoyment, compared to the
dynamic typing alternative, as to make me refuse the $$$ in question, if,
as you say, Magicklang was so beautifully designed and Python-inspired...


Alex

John Roth

unread,
Jun 18, 2003, 11:34:30 AM6/18/03
to

"John J. Lee" <j...@pobox.com> wrote in message
news:878ys06t...@pobox.com...

You make an excellent point: it's the combination of benefits and
costs that make or break a programming language, as well as the
environment in which it's typically used.

Some day, someone will build an inference engine for formal
verification that works as fast as the tests in TDD, and then I
expect we'll see another shift.

John Roth


>
>
> John


Steven Taschuk

unread,
Jun 18, 2003, 11:31:30 AM6/18/03
to
Quoth John J. Lee:
[...]

> I agree with this. I don't know what all the fuss is about with the
> interactive prompt. I certainly do use it, quite frequently, but
> don't see a huge difference here between Python and statically-typed
> compiled languages (especially given a good IDE -- like unix + emacs
> etc.).
>
> Maybe only me and David make lots of mistakes at the interactive
> prompt ;-)

I'm a big fan of the interactive prompt, but certainly not for
coding -- I too make too many mistakes for that to be practical,
and I prefer my text editor anyway. But I find it greatly eases
debugging and one-off data munging tasks.

John J. Lee

unread,
Jun 18, 2003, 12:00:20 PM6/18/03
to
Steven Taschuk <stas...@telusplanet.net> writes:

> Quoth John J. Lee:
> [...]
> > I agree with this. I don't know what all the fuss is about with the
> > interactive prompt. I certainly do use it, quite frequently, but

[...]


> > Maybe only me and David make lots of mistakes at the interactive
> > prompt ;-)
>
> I'm a big fan of the interactive prompt, but certainly not for
> coding -- I too make too many mistakes for that to be practical,
> and I prefer my text editor anyway.

Of course!!


> But I find it greatly eases
> debugging and one-off data munging tasks.

That's what I was referring to, and I assume David too. If any
function or class is more than say 5 lines, I use emacs. It's still
convenient, but can be replaced quite easily with a good IDE, I think.


John
