x = 3
x = x + 2
print "x =", x
x = "now I'm a string"
print x
x = [ 5, x ]
print "and now a list:", x
If you don't believe it, run it. Should any language allow
x to be such a chameleon, to where it can even be a
different type on two sides of an assignment?
---------------------------------
Dennis Roark
Dept. of Computer Science
University of Sioux Falls
Starting Points: http://home.earthlink.net/~denro
---------------------------------
What you have posted has nothing to do with type safety in the slightest.
Not once did you actually mix types. What you have done is just change the
assignment. At first you had x equal to the integer 5; then, when you
assigned it a string value, it dropped the old object and made a new
one. Then it did the same with the list.
What you are really complaining about is that it is easy to reassign a var
to point to a new object. If you do id(x) after each assignment you will
see that x is not bound to the same object each time.
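For example (a minimal interactive sketch; the actual id values vary from
run to run):

>>> x = 3
>>> before = id(x)
>>> x = "now I'm a string"
>>> id(x) == before       # the name x is now bound to a different object
False
>>> x = [5, x]
>>> id(x) == before       # rebound again; the name changed, not the objects
False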
I'm sure that most people who read this group/list are well aware of
this behavior :-)
I'm also pretty sure that your conceptual model of what is going on
needs a bit of rejigging. When you write:
a = 5
a = "Test"
You are not saying that you want to somehow mangle a string so it fits
inside an integer; you are saying that the name "a" should be rebound to
the string "test". And names don't have any notions of what they can and
cannot be bound to. So Python typing is dynamic. But it is also
reasonably strong because most operations require a narrow range of
types to not raise exceptions, as you noted.
Cheers,
Brian
>For amusement, run this little script which demonstrates a
>near ultimate in the lack of type safety in the language.
>(Perhaps there is a bit of type safety in that you can't do
>this: 4 + "one") But look at what you can do in the
>following script:
>
>x = 3
>x = x + 2
>print "x =", x
>x = "now I'm a string"
>print x
>x = [ 5, x ]
>print "and now a list:", x
>
>If you don't believe it, run it. Should any language allow
>x to be such a chameleon, to where it can even be a
>different type on two sides of an assignment?
You're laboring under a very fundamental misapprehension: x
doesn't have a type. x is a name. It is bound to an object.
That object has a type. You can bind the name x to any type of
object you wish.
If you want x to be bound to an integer, bind it to an integer.
If you don't want x to be a list, don't bind it to a list.
Did you realize that FORTRAN allows you to assign the _wrong_
value to a variable?
x = 1.234
WRONG! x is supposed to be 0.49487!
No language can prevent you from writing programs that yield
incorrect results.
--
Grant Edwards          grante at visi.com          Yow! I think my career is ruined!
I believe it, but it doesn't demonstrate a thing about type safety.
You've just discovered dynamic typing. You see, your implied view of
typing seems to come from a statically typed language, where every
*variable* has a type of its own.
In Python, types are associated with *objects*, not with *variables*.
IOW, all variables are of the same type: reference-to-object.
But type safety isn't the same as static typing. Type safety is
(IIRC) the property that keeps you from doing stuff like this:
// C code: no type safety.
int x = 3;
int *xp = &x;
puts( (char*)xp ); // This is dangerous, but C won't stop you.
// Java code: type safety.
Object x = new Integer(3);
System.out.println( (String) x ); // Java throws a ClassCastException here.
So by the standard definition, Python is of course type-safe.
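For comparison, here is a rough Python version of the same experiment (just
a sketch): the bad operation is rejected at run-time instead of reading
memory as something it isn't.

# Python: type safety, enforced at run-time.
x = 3
try:
    print x + "one"                    # no cast, no reinterpretation of bits...
except TypeError:
    print "refused with a TypeError"   # ...Python just raises an exception.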
> Should any language allow
>x to be such a chameleon, to where it can even be a
>different type on two sides of an assignment?
Once again, this is called dynamic typing, and it's more common than
you seem to think. It's at least as old as Lisp:
(let ; (ok, is actually scheme.)
((x 3))
(set! x (+ 2 x))
(set! x "now I'm a string")
(set! x (list 5 x))
)
And really, you can write the same code in Java:
Object x = new Integer(3);
  x = new Integer(2 + ((Integer) x).intValue());
x = "now I'm a string";
x = new Object[]{new Integer(5), x};
Or Perl, of course:
$x = 3;
$x += 2;
$x = "now I'm a string";
$x = [ 5, $x ];
Or even C++:
 // Watch out! My C++ is rusty.
 #include <string>
 #include <vector>
 using namespace std;
 class Integer { int val;
   public: Integer(int v) : val(v) {}
   operator int() { return val; } };
 int main() {
     void* x = new Integer(3);
     x = new Integer(2 + *(Integer*)x);
     x = new string("now I'm a string");
     x = new vector<void*>(2, x);
     // Ouch! My brain hurts. And I just leaked a bunch of memory. :)
 }
In Perl and Lisp, you can see that variables themselves are
untyped[*], as in Python. In the Java and C++ examples, variables are
typed, but there is also a catch-all type in each (Object and void*) that
allows the same kind of dynamic behavior.
This is not a Pythonic innovation. But it is a feature.
Stop-me-before-I-start-looking-for-a-way-to-do-it-in-CLU-ly Yrs,
[*] It's more complicated than that in Perl, naturally. It always is.
--
Nick Mathewson <9 nick 9 m at alum dot mit dot edu>
Remove 9's to respond. No spam.
Python is an attractive language. But for a large program I
would still rather be forced to declare the name of a
variable (and its type) rather than risk misspelling or
misuse 500 lines later in the program. My original note was
not intended to be an indictment of Python, but only to
bring up some reasons that for me make more strongly typed
languages like C++ or Object Pascal better at coding very
large projects.
9ni...@alum.mit.edu (Nick Mathewson) wrote:
>On Fri, 13 Jul 2001 07:32:28 GMT, Dennis Roark <de...@earthlink.net> wrote:
>>For amusement, run this little script which demonstrates a
>>near ultimate in the lack of type safety in the language.
>>(Perhaps there is a bit of type safety in that you can't do
>>this: 4 + "one") But look at what you can do in the
>>following script:
>>
>>x = 3
>>x = x + 2
>>print "x =", x
>>x = "now I'm a string"
>>print x
>>x = [ 5, x ]
>>print "and now a list:", x
>>
>>If you don't believe it, run it.
>
>I believe it, but it doesn't demonstrate a thing about type safety.
>You've just discovered dynamic typing. You see, your implied view of
>typing seems to come from a statically typed language, where every
>*variable* has a type of its own.
>
>In Python, types are associated with *objects*, not with *variables*.
>IOW, all variables are of the same type: reference-to-object.
>...
>
>Or even C++:
> // Watch out! My C++ is rusty.
> class Integer { int val;
> public: Integer(int v) : val(v) {}
> operator int() { return val; } }
> void* x = new Integer(3);
> x = new Integer(2 + *(Integer*)x);
>...
You seem to have just discovered Python's dynamicity. You may not yet be
in a position to understand the *benefits* of this dynamicity in large
programs which go along with its costs. Sometimes Python's dynamic
features will allow thousands of lines of Java or C++ code to just melt
away into a few lines of Python. That increases the overall reliability
of the system. Python also makes testing much easier. Therefore you can
do more testing in less time and thus improve the stability of your
program.
--
Take a recipe. Leave a recipe.
Python Cookbook! http://www.ActiveState.com/pythoncookbook
That's always a popular argument, naturally from people who have never written
a large program in a dynamic language. The argument is "which method produces
'more correct' programs?".
The static people talk about rigorously enforced interfaces, correctness
proofs, contracts, etc. The dynamic people talk about rigorously enforced
testing and say that types only catch a small portion of possible errors. The
static people retort that they don't trust tests to cover everything or not
have bugs and why write tests for stuff the compiler should test for you, so
you shouldn't rely on *only* tests, and besides static types don't catch a
small portion, but a large portion of errors. The dynamic people say no
program or test is perfect and static typing is not worth the cost in language
complexity and design difficulty for the gain in eliminating a few tests that
would have been easy to write anyway, since static types catch a small portion
of errors, not a large portion. The static people say static types don't add
that much language complexity, and it's not design "difficulty" but an
essential part of the process, and they catch a large portion, not a small
portion. The dynamic people say they add enormous complexity, and they catch
a small portion, and point out that the static people have bad breath. The
static people assert that the dynamic people must be too stupid to cope with a
real language and rigorous requirements, and are ugly besides.
This is when both sides start throwing rocks.
>typing is a weaker and less safe sort of typing than the
>static typing of C++. Nick brings up the C++ example of
But if all you know is the C idea of "static typing" and "type safety" you
don't even know your own camp. To view this debate in earnest, go to wherever
the Eiffel and Ada people do battle against Smalltalkers, Lispers, & co.
There's good arguments on both sides. I've never seen a formal study, but
even if there is one it probably wouldn't be difficult for either side to poke
holes in it.
This might be better called dynamic naming or dynamic binding. 'Dynamic
typing' is confusing, as evidenced here, because of its two possible
interpretations.
1. Types are assigned at runtime (because they are created at runtime).
True.
2. Types are changeable at runtime. Untrue, unless you count changing an
instance's base class (a recent addition to the language). In this sense,
Python object types are fixed or static. They are not like C unions.
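(For the curious, one reading of the feature Terry mentions is per-instance
reassignment of __class__; a small sketch with made-up classes:)

class A:
    def who(self): return "an A"
class B:
    def who(self): return "a B"

obj = A()
print obj.who()          # prints: an A
obj.__class__ = B        # change the class of this one existing instance
print obj.who()          # prints: a B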
> You see, your implied view of
> typing seems to come from a statically typed language, where every
> *variable* has a type of its own.
>
> In Python, types are associated with *objects*, not with *variables*.
> IOW, all variables are of the same type: reference-to-object.
One of the keys to understanding Python.
Terry J. Reedy
Indeed. I'll try to remember to use "dynamic binding" in the future:
while 'dynamic typing' was what I used in school, it does indeed
suffer from the ambiguity you note.
Relieved-to-know-at-least-one-interpretation-of-my-post-was-accurate'ly yrs,
Absolutely!!!
Each value is a perfectly good object, so why not? (The variable "x"
references an "object" :-).
Side note: see what a Java compiler allows you to put into a collection and
then consider the same issue.
Jim
>
> Python is an attractive language. But for a large program I
> would still rather be forced to declare the name of a
> variable (and its type) rather than risk misspelling or
> misuse 500 lines later in the program. My original note was
> not intended to be an indictment of Python, but only to
> bring up some reasons that for me make more strongly typed
> languages like C++ or Object Pascal better at coding very
> large projects.
>
Umm, the world's largest systems are not written in C++ or even C.
I spent almost 10 years at UAL's Apollo Reservation system which was
the largest commercial transaction system in the world (probably still
is number one or two). All the core system was written in *assembler*
running an OS that had no memory protection. Yup, a bad application
segment could take down the whole business. Why? For SPEEEEEEED.
I have programmed extensively in C, assembler, BASIC, PL/M and some
in C++, Forth, Pascal, AWK, Perl, Java... For my money, I have *never*
seen a language more suited for rapid applications development - even
pretty good sized applications - or one that more strongly encourages the
programmer to write clean, easy to read code, than *Python* - this after
only a month or two of fiddling with it.
The academic debates about early- vs. late binding, static- vs. dynamic
typing, procedural- vs. object, and so on mean very little in practice,
in my experience. The thing that is never taught in language design
and compiler classes is that what matters to the programmer is
*safe and readable ***SYNTAX*** with predictable semantics*. The rest
of it is of interest only to language implementors and designers.
Furthermore, bear in mind that code "portability" is nowhere near as
important in the commercial sector as it is in academics. Commercial
applications are typically the largest around. They are purpose built
to do things which distinguish their company from the pack and create
unique value-add. It is rarely important to be able to move code
around at will. What *is* important is code *reliability*, *cost of
ownership*, *readability*, and portability of *programmer's skills*.
In this light, C++ is an abomination. It was a really good research
exercise which demonstrated just how flexible the whole Unix paradigm
really is for making things extensible, but as a production language
it is clumsy, ugly, hard to read, and full of subtle semantic traps.
In the past few years I have been a technology executive in a variety
of companies. In every case, unless there is no other way out, I have
stopped the use of C++. I'd much rather see people coding in perl, awk, or
now, python. The cost of maintaining and owning C++ is demonstrably
higher than any of the alternatives. It is a systems programming language
in OO drag and has the worst of all possible worlds - the ease of shooting
your foot off that a systems language has, the complexity of OO, and
syntactic obscurity that does little more than preserve the Programming
High Priesthood. It is thankfully dying a slow death precisely because
there are much better choices.
Applications programmers need to do different things in different ways
than Systems programmers. No language can serve both well, in my experience.
IMHO, that's why C++ failed - it tried to be all things to all programmers
and ended up being a convoluted mess.
------------------------------------------------------------------------------
Tim Daneliuk
tun...@tundraware.com
> My original note was not intended to be an indictment of
> Python, but only to bring up some reasons that for me make
> more strongly typed languages like C++ or Object Pascal
> better at coding very large projects.
if you haven't used Python (or any other dynamically typed language)
for large projects, why would anyone care what you think?
> Dept. of Computer Science
computer science is no longer what it was :-(
</F>
> For amusement, run this little script which demonstrates a
> near ultimate in the lack of type safety in the language.
> (Perhaps there is a bit of type safety in that you can't do
> this: 4 + "one") But look at what you can do in the
> following script:
>
> x = 3
> x = x + 2
> print "x =", x
> x = "now I'm a string"
> print x
> x = [ 5, x ]
> print "and now a list:", x
>
> If you don't believe it, run it. Should any language allow
> x to be such a chameleon, to where it can even be a
> different type on two sides of an assignment?
You're out of your depth here.
Let's run your code, and then some, and see what's unsafe in there:
>>> x = 3
>>> x = x + 2
>>> print "x =", x
x = 5
>>> x.sort()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: 'int' object has no attribute 'sort'
>>> x.upper()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: 'int' object has no attribute 'upper'
Currently, x is bound to an integer object and thus doesn't accept
calls to string or list methods. It complains loudly if you try to
force them on it.
>>> x = "now I'm a string"
>>> print x
now I'm a string
>>> x.sort()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: sort
>>> x.upper()
"NOW I'M A STRING"
>>> x = x + 2
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: cannot add type "int" to string
After the rebinding, x refers to a string object and accepts calls to
string methods, but complains just as loudly as the former int binding
did when called with int or list methods.
>>> x = [ 5, x ]
>>> print "and now a list:", x
and now a list: [5, "now I'm a string"]
>>> x.sort()
>>> x.upper()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: upper
>>> x = x + 2
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: can only concatenate list (not "int") to list
And for a binding to a list object, just the same applies -- it will
gladly perform list methods but refuses to do any string or int stuff.
In Python, objects are typed, not references. If you are used to
languages where variables are (more or less) strongly typed but
objects aren't, you'll have to shed some old habits and acquire a new
point of view to appreciate Python's object model.
I'd recommend you go for it. Learning Python's object model will open
a new world for you and probably make you a better programmer in your
current language(s) of choice, too. A good start for this might be
http://effbot.org/guides/python-objects.htm
Cheers,
Christian
--
Christian Tanzer tan...@swing.co.at
Glasauergasse 32 Tel: +43 1 876 62 36
A-1130 Vienna, Austria Fax: +43 1 877 66 92
People who are new to Python sometimes worry about dynamic typing and
assume that it will make for unreliable code. I remember wondering
about the same thing at one point.
On the other hand, I have never heard a Python veteran complain about
dynamic typing based on actual experience. Nobody seems to ever step
forward and say, "after years of using Python, I eventually gave up on
it and went back to C because of all the bugs I kept having related to
dynamic typing." :-)
If you like Python, I would encourage you to try it out and see for
yourself what happens. Though it is easy to contrive examples where
such bugs might occur, in my experience with Python I very rarely
encounter them. When I do, they are very easy to find and fix because
of Python's excellent traceback feature.
Tom
As others have noted, your lack of experience attempting
to implement a large application in Python reduces the value of
this statement. Nevertheless, your sentiment is echoed
repeatedly by many (which doesn't make it any truer) and so we
seem to be compelled to respond. :) In the following, wherever
"you" appears, please consider it the impersonal "you" applied to
anyone who might make statements like the above.
> I have programmed extensively in C, assembler, BASIC, PL/M and a some
> in C++, Forth, Pascal, AWK, Perl, Java... For my money, I have *never*
> seen a language more suited for rapid applications development - even
> pretty good sized applications - that strongly encourages the programmer
> to write clean, easy to read code, than *Python* - this after only a
> month or two of fiddling with it.
[...]
> Furthermore, bear in mind that code "portability" is nowhere near as
> important in the commercial sector as it is in academics. [...]
> What *is* important is code *reliability*, *cost of
> ownership*, *readability*, and portability of *programmer's skills*.
All very true.
I can attest from personal experience with Python over the last year
and a half (to strengthen Tim's claim from only a month or two of use)
that Tim's statements above are *absolutely* on the mark for
commercial developments.
I've programmed seriously for about 23 years. I too have used a
gazillion languages. I've personally written, architected, or
managed development of large applications in C, C++, Java, Object
Pascal, LabVIEW (>shudder<) and now Python. I'm an engineer, not
a computer scientist, which might mean I'm more interested in
pragmatism than theory (not trying to be denigrating there).
(Don't flame me for saying this... some people like a little
background material to help evaluate someone's claims...)
My experience convinces me Tim is exactly right:
The most important thing about the product is *reliability*.
The most important thing about the source is *maintainability*.
Python ranks higher than any of the other languages I mentioned,
based on direct observation and lengthy consideration, in both
those areas.
It is also more productive in general ("cost of ownership")
by a factor of at least two (I'm being conservative).
My programmers (team of 16) uniformly like Python, find it
more "fun" than any previous language they've used, and produce
better results with it (as measured by any of the above
measures, and by the fact that I can understand their code
easily).
Only one of them had even heard of it prior to the first
interview. None of them took more than about a week to begin
contributing to our development, and I believe each was,
after less than a month of active use, more effective with
Python than with his or her previous "best" language.
I'll note also that I make these claims even before we've
fully implemented extensive automated testing along the lines
of what people in this group generally claim is "essential".
(I don't disagree with that claim, but even as we implement
such processes -- prior to our code releases -- I am seeing
very significant improvements over what I have seen in the
past with other languages.)
In other words, *even without testing Python leads to fewer bugs*.
You can say that's not possible. You can disbelieve it.
You can insist you need stronger typing. You can claim
that, because of your own special, unique situation or
for personal reasons it just somehow isn't true.
I say you are probably wrong. Or at least, until you give
Python a fair try on a large application, you won't convince
me otherwise based on theoretical arguments.
--
----------------------
Peter Hansen, P.Eng.
pe...@engcorp.com
And now that you're completely overwhelmed by the response,
here's yet-another-something-to-consider:
Static/strong typing and type safety are of great use - to a *compiler*.
C/C++/Pascal, etc. could not do what they do, or do it as well,
without knowing this information. They create a functioning program
out of raw hardware: memory & cpu, and they have to keep track
of every bit, or the program won't run. A smart compiler can optimize
your code as well, but only because it knows what's what.
The Python compiler doesn't have this restriction, because Python
runs on a virtual machine made out of much smarter components.
The components 'know' what they are, the compiler doesn't have to.
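You can see this with the dis module (a small sketch): the byte-code
compiler emits one generic BINARY_ADD for '+', and the objects themselves
decide at run time what that means.

import dis

def add(a, b):
    return a + b

dis.dis(add)            # one generic BINARY_ADD -- no types in the byte code
print add(2, 3)         # 5        : the same byte code adds integers...
print add("ab", "cd")   # abcd     : ...concatenates strings...
print add([1], [2])     # [1, 2]   : ...and concatenates lists.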
There is a school of thought that strong typing is extremely important
for program 'correctness'. What everybody here is saying, basically,
is that there are a lot more important considerations when it
comes to making a program correct, and they all have enough real
experience in this area for you to take their argument seriously.
A non-trivial program cannot ever be 'proven' correct anyway;
computers are inherently chaotic systems, and the 'state space'
of a complex running system is practically infinite.
Make sure all the pieces are as good as you can make them,
(or perhaps just as good as they need to be), and use that as your
base of confidence that the program will work as well as necessary.
There are very few problem domains where perfection is a requirement,
notably anything involving potential loss of human life. These domains
require unusual and exceptional techniques to ensure high quality.
As an experienced programmer, I know I would never want to work
on anything where a coding mistake could kill someone. It does happen.
Just to belabor the point a bit more.
d
Although "programming by contracting" was historically developed in
the static-typing community, no significant contract in PbC is enforced
statically in (e.g.) Eiffel, the summit (according to its adherents) of
both static typing and PbC. Preconditions, postconditions, and class
invariants, are typically rich and complex sets of expressions and no
real-world compiler is able to check anything but very trivial subsets
of them at compile-time.
So, IMNVHO, contracts should NOT be placed in the "static typing" camp,
although it IS true that static typers do tend to talk often about them.
What PbC's practice proves is that DYNAMIC enforcement is just fine
for the really important issues -- sure, sure, it would be an even better
world if we COULD check a few more things even earlier, but wtf -- we
can't, and PbC still helps. Dynamic strong typing -- THAT is what PbC
turns out to be in practice. And a durned fine thing it is, too!
Actually, I wouldn't mind an 'assertion language' that's not even
truly checkable (fast enough to be useful) at runtime -- one where
I could use existential and universal quantifiers, because those are
so often the things I *WANT* to express in a formal, unambiguous
way. Consider the predicate parameter P I can pass to many of
the containers and algorithms in the standard C++ library. What
I often want to assert about the type of P is: for all X, not P(X,X);
for all X and Y, P(X,Y) implies not P(Y,X); for all X, Y and Z,
P(X,Y) and P(Y,Z) imply P(X,Z). I.e., the crucial facts about P's
type are that it's antireflexive, antisymmetric, and transitive. What I
_can_ assert that's static-type-checkable is that P accepts two
"foo" arguments and returns a boolean. Pish and tush -- that's
close to saying NOTHING about the known type constraints on P!!!
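In a dynamic language you can at least spot-check such quantified
constraints at run time over a finite sample (a rough sketch in Python; the
helper name and the sample are invented for illustration):

def check_strict_order(P, sample):
    # Spot-check the three constraints above over a finite sample of values:
    # not P(x, x); P(x, y) implies not P(y, x); P(x, y) and P(y, z) imply P(x, z).
    for x in sample:
        assert not P(x, x), "P is not antireflexive"
        for y in sample:
            if P(x, y):
                assert not P(y, x), "P is not antisymmetric"
                for z in sample:
                    if P(y, z):
                        assert P(x, z), "P is not transitive"

def less(a, b):
    return a < b

check_strict_order(less, range(5))   # passes silently; a bad P trips an assert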
I'd like a language that just lets me state as little or as much as
I know and want to express unambiguously -- then the compiler
in turn can generate as little or as much static or dynamic typing
as it knows how to generate and/or it's directed to generate by
options or whatever, but meanwhile I HAVE expressed my design
intent in my sources -- *NOT* in possibly-ambiguous comments,
but in formal, unambiguous language that MAY, depending on the
state of compiler technology, turn out to help direct the compiler
to generate appropriate code. I know of no language, except maybe
a few experimental ones (which I have not tried), which would let
me use quantifiers this way.
So, no language is even halfway good at letting me *express*
types, much less check such type-constraints "statically". Might
as well use a language that doesn't *PRETEND* to support such
typing notions, no?-)
Alex
I consider this to be an advantage of Python. If you are not
skilled enough to use this feature, then I'd suggest you stick with
Pascal.
Chris.
On Fri, 13 Jul 2001 07:32:28 GMT, Dennis Roark <de...@earthlink.net>
wrote:
>For amusement, run this little script which demonstrates a
Note that I will probably read a hundred corrections from
people who replied to my original stupidity rather than reading
through the thread first...
Chris.
PS: Despite my own sleep-deprived stupidity, I still maintain that if
you are afraid of dynamic typing, you should stick to Pascal.
"Tim Daneliuk" <tun...@tundraware.com> wrote in message
news:3B4F743C...@tundraware.com...
> Furthermore, bear in mind that code "portability" is nowhere near as
> important in the commercial sector as it is in academics. Commercial
That depends on the state of the market. Just a few years ago, it
would have KILLED us if our CAD applications didn't run portably
between VMS, Apollo Domain, and many versions of Unix -- each
OS accounted for an important segment of our customer base.
Today, running on a small number of somewhat-different Microsoft
operating systems suffices -- but any day now, it would not at all
surprise me if an important market segment suddenly demanded,
say, Linux -- in which case, lack of portability would again become
a commercial killer in this niche. (Before you scoff -- markets as
important as the French public administration and China appear
to be oriented to demanding Linux pretty soon, while others keep
demanding MS systems -- better keep those portability skills not
too badly honed, is my opinion).
Besides, portability is NOT only an issue between operating systems
and hardware platforms (...and on the latter, someday soon the
ability to exploit Itanium well may be a key market differentiator...).
Anything that qualifies as a "platform" for applications needs to be
seen in this light. Portability between IIS and Apache can double
the potential market for a commercial web framework. And it serves
our PDM product well that it's portable between SQL dialects and
RDBMS's -- some customers are totally wed to Oracle and would
never buy a product that can only run on SQL Server, and vice
versa -- this DOES give us an important marketing differentiator
wrt some of our competition.
I'm not getting into the C++ flamewar you're trying to unchain --
just noticing that one of our competitors in the PDM field recently
released their newest product version, and one of the fundamental
differences wrt the previous one is that they rewrote it from Java
(as it originally was) to C++ (as it is now) -- they claim speed-ups
of about 60% overall, more solidity, etc, as a consequence. If that
is an example of C++ "dying a slow death", I think Mark Twain's
well-known dictum may apply.
I _do_ agree that C++ is too complex for the human brain and
thus should only be used sparingly (for system interaction and
top-speed components -- but do notice that those components
ARE part of large commercial applications... and many people
still do not realize that multi-language applications are the
way to go, thus, needing maybe 10% of the app to be in C++,
they go ahead and make or rewrite it ALL in C++, rather than
make it 10% C++ and 90% Python...:-).
Alex (Brainbench MVP for C++ -- declaring interest, as it were:-)
> I appreciate those who replied, particularly Nick's
> informative reply. I would still argue that "dynamic"
> typing is a weaker and less safe sort of typing than the
> static typing of C++. Nick brings up the C++ example of
> using pointers to void and then casting them as needed or
> recasting them to allow the variable to point to different
> objects not involved in an inheritance relationship. But
> C++ is also often criticized for this because void pointers
> do allow for unsafe coding, relaxation of type safety and
> should be avoided if possible.
I think you are still missing some important facts about typing
systems. There are two dimensions: static vs. dynamic and weak vs.
strong.
Now, C++'s typing is moderately strong (some call it weak) and static.
Void pointers are not necessary to get into trouble with C++'s type
system. Type casts and implicit type conversions are two likely
gotchas. And when typing fails in a language like C++, all bets are
off.
Python's typing is strong and dynamic. Wrong interpretation of bits
just doesn't happen. And if you try to use an object incorrectly you
will get an exception. Exceptions are safe unless you choose to ignore
them in your exception handlers.
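For instance (a sketch): the only way to look at an object's bits as
something else in Python is an explicit, opt-in step such as the struct
module -- nothing is reinterpreted behind your back, and an accidental mix
of types just raises an exception.

import struct

x = 1.5
# To see the float's bits you must ask for them explicitly:
bits = struct.unpack("<I", struct.pack("<f", x))[0]
print hex(bits)            # 0x3fc00000 -- the IEEE-754 single-precision form of 1.5

try:
    x + "oops"             # no union-style aliasing, no silent coercion...
except TypeError:
    print "TypeError, as expected"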
> Python is an attractive language. But for a large program I
> would still rather be forced to declare the name of a
> variable (and its type) rather than risk misspelling or
> misuse 500 lines later in the program. My original note was
> not intended to be an indictment of Python, but only to
> bring up some reasons that for me make more strongly typed
> languages like C++ or Object Pascal better at coding very
> large projects.
Strong typing is not a panacea for bad designs. Refactoring will do
better.
If you really need strong static typing, look at languages like Eiffel
or Ada. Both C++ and Object Pascal are broken in comparison.
Sure, there are places where portability is important. And portability
is not a "Yes" or "No" things - there are degrees. But the most
important thing that needs to be portable are *the skills of the
people* so that they can be used in many different roles.
DEC once estimated that there was something like an 8 or 10 to 1
ratio of the cost of owning code vs. the initial cost to produce
the code. The ongoing reliability and maintenance of software and
systems tend to be way more important than portability of the code
itself, because producing the code is a relatively lower cost than
maintaining it.
I once was Chief Technologist for a company that did communications
messaging middleware and oversaw its transition to 20+ operating
platforms that were all over the map in OS and comms protocols.
(All IBM Mainframe, All the Unixes, MS-DOS, OS/2, Win16, Win32,
TCP/IP, SNA, NetBIOS, Novell and many others were supported in
a wide variety of combinations and configurations.)
Anyway, our code, *by intent* was *not* portable. It couldn't
be portable in any practical sense because the underlying
infrastructure paradigms are really different between, say TPF,
Unix, Stratus VOS, and Tandem Nonstop as they run comms over
VTAM/SNA, Sockets/TCP/IP, Novell, and so forth. We were far
more interested in creating competitive and distinctive market
advantage than serving the abstract idea of portability.
I think code portability really only becomes an issue if it
is a first-order component of the economics of a company. For example,
if you sell compilers, portability is probably pretty important.
OTOH, if you write drivers, it is not much of an issue.
>
>
> I'm not getting into the C++ flamewar you're trying to unchain --
Sorry, that was not my intention. Apologies to all if it seemed
that way...
> -- they claim speed-ups
> of about 60% overall, more solidity, etc, as a consequence. If that
Speed-up, I believe. But "more solidity" by switching to C++, I doubt.
That probably has more to do with 2nd generation software being
better than the 1st.
> is an example of C++ "dying a slow death", I think Mark Twain's
> well-known dictum may apply.
I did say it would be "slow" ;)
>
> I _do_ agree that C++ is too complex for the human brain and
> thus should only be used sparingly (for system interaction and
> top-speed components -- but do notice that those components
> ARE part of large commercial applications... and many people
> still do not realize that multi-language applications are the
> way to go, thus, needing maybe 10% of the app to be in C++,
> they go ahead and make or rewrite it ALL in C++, rather than
> make it 10% C++ and 90% Python...:-).
>
Absolutely right.
--
------------------------------------------------------------------------------
Tim Daneliuk
tun...@tundraware.com
I think we should try to keep personal insults out of what
should be a discussion of ideas. I am offended by
Christopher's assertion that I lack the skill or am not smart
enough to use dynamic typing and binding; and yes, I did know
what that meant before my original note. Had I not been
smart enough to see the dynamic typing, then I don't think I
could have made up my original example that started all
this.
I do know C++ (and Delphi, and VB -- ugh) much better than I
know Python. I have some familiarity with LISP and Smalltalk.
I like the generics of other OO languages and the
templates of C++, which is part of the reason I am not a big
fan of Java. But why insult me now that I am expanding my
base into Python?
Dynamic typing, or binding -- reference variables that can
change the type they point to -- is to me still part of a weaker
sort of type safety than if it were not there. That is a personal
preference, that is all. So let's hold back the ad hominem
attacks and help each other progress. There is no perfect
language, and all of the "good" languages have appropriate
use. That includes both Python and C++. Not all
programmers will ever agree what the appropriate use of a
particular language should be.
I understand Christian Tanzer's distinction between strong
and weak typing and static versus dynamic type binding. And
one of the things I and probably most C++ programmers like
about C++ is that you can relax the typing by explicit
casting. (The compiler should at least give warnings for
implicit casts. And a cast of an instance of a user-defined
class is only permitted if the class author wrote the
corresponding conversion constructor or operator.) But that you must
explicitly cast means that you know and can limit where you
relax type safety.
As far as forcing variable declarations are concerned, as to
their names and types, yes it may be regarded as a "feature,
not a bug" that Python does not have this. But I recall the
evolution of a language that for other reasons I really do
not like, Visual Basic. Originally, there was not a way to
force declaration of variables. It was programmer pressure
that forced MS to introduce the "option explicit" code
directive which requires declaration. You can declare, and
not type a variable (in which case it is a "variant") but to
do so is often thought to be sloppy coding. I am not
standing up for VB which has too many annoying features,
including inconsistent syntax, and a very broken object
model.
Along with the group, "Long live Python" but if someone
mentions something they find weaker in Python, long live
them too.
--- Dennis
Christopher L Spencer <clsp...@one.net> wrote:
>I often have to do this:
>x=getsomestringfromparameterfile()
>x=string(x)
>
> I consider this to be an advantage of Python. If you are not
>skilled enough to use this feature, then I'd suggest you stick with
>Pascal.
>
>Chris.
>
>On Fri, 13 Jul 2001 07:32:28 GMT, Dennis Roark <de...@earthlink.net>
>wrote:
>
>>For amusement, run this little script which demonstrates a
>>near ultimate in the lack of type safety in the language.
>>(Perhaps there is a bit of type safety in that you can't do
>>this: 4 + "one") But look at what you can do in the
>>following script:
>>
>>x = 3
>>x = x + 2
>>print "x =", x
>>x = "now I'm a string"
>>print x
>>x = [ 5, x ]
>>print "and now a list:", x
>>
---------------------------------
LOL -- "pish and tush" is exactly on-target.
ABC (Python's predecessor) had quantified boolean expressions, of the forms
1. each x in s has expression_involving_x
2. no x in s has expression_involving_x
3. some x in s has expression_involving_x
The keywords there are "each", "no", "some", "in" and "has". They had a
very nice twist: if an expression of form #1 was false, it left x bound to
"the first" counterexample (i.e., "the first" x in s that did not satisfy
the predicate expression); similarly if #2 was false, it left x bound to the
first witness (the first x in s that did satisfy the predicate); and if #3
was true, to the first witness. For example,
if some i in xrange(2, int(sqrt(n))+1) has n % i == 0:
print n, "isn't prime -- it's divisible by", i
or
assert no i in xrange(2, int(sqrt(n))+1) has n % i == 0, "n not prime"
According to Guido, Lambert Meertens (ABC's chief designer) spent a
sabbatical year in New York City working with some of the SETL people, and I
speculate that's where this cool little ABC subsystem came from. I would
like it in Python too, but it's a lot of new keywords to support one
conceptual gimmick. It's still a pleasant pseudo-code notation to use in
comments, though.
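For what it's worth, you can fake a little of the witness-binding in plain
Python with a helper (a sketch; the function name is invented here, and it
conflates "no witness" with a falsy witness):

def some(seq, pred):
    # Return the first item satisfying pred, or None if there is none,
    # roughly like ABC's "some x in s has ..." with its witness binding.
    for item in seq:
        if pred(item):
            return item
    return None

from math import sqrt
n = 91
i = some(xrange(2, int(sqrt(n)) + 1), lambda i: n % i == 0)
if i:
    print n, "isn't prime -- it's divisible by", i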
uncommonly-fond-of-constructs-that-capture-common-loops-ly y'rs - tim
As in, taking a world-renowned specialist in surface-modeling, and
ensuring he wast^H^H^H^H spends enough time to be able to
program a GUI, or knows enough SQL to write some queries, or...?
Thanks, but NO, THANKS. One advantage for a software development
firm to grow to middling-size (to be offset against several minuses)
is being able to AFFORD a few all-out specialists in those areas that
are key competitive differentiators for the firm's products. For us,
that means specialists in such things as 3-D kernels, surfacing,
surface/solid
integration, and PDM needs of middle-sized mechanical engineering
firms. It's *NOT* a problem if the PDM specialist doesn't know what's
a NURBS, and the surfacing specialist doesn't know what's a BOM --
it DOES matter that the firm as a whole has, say, a dozen such key
people in the right areas out of a couple hundred developers -- people
able and eager to concentrate on very specific, narrow, focused
domains, with the total unremitting concentration and enthusiasm
that such specialists tend to bring to their work. Their skills need
not be "portable" to other areas or roles -- they need to be allowed
to concentrate on what they're exceptionally good at, in order to
develop the innovations that spearhead new products and versions.
(If and when such guys *WANT* to branch out in other fields, it's
of course clever to support their explorations -- that may well be a
spring of innovations too, among other issues -- but it's exceedingly
rare that a specialist WANTS to leave his or her specific field, where
he or she always seems able to find new fascinating challenges and
new worlds to conquer).
> DEC once estimated that there was something like an 8 or 10 to 1
> ratio of the cost of owning code vs. the initial cost to produce
May make sense, depending on the code's longevity (related to
the speed of advances in the specific field).
> the code. The ongoing reliability and maintenance of software and
> systems tends to be way more important than portability of the code
> itself because that is a relatively lower cost to produce the code
> than to maintain it.
But porting, when it's needed, is part of the "cost of owning", not
of the "cost of producing". Raising the producing-cost a little bit,
by caring about portability right from the start, may significantly
lower the owning-cost, depending on porting-needs -- which,
again, will in part correlate to longevity. Code that you'll pension
off in 6 months may easily never need to be ported -- code you'll
carry around for 10 years is VERY unlikely to not need porting, in
a platform market subject to reasonably fast innovation.
So, the dominance of ongoing costs vs initial ones doesn't indicate
a LOW importance for portability -- indeed, the very contrary!
> Anyway, our code, *by intent* was *not* portable. It couldn't
> be portable in any practical sense because the underlying
> infrastructure paradigms are really different between, say TPF,
Underlying infrastructure paradigms do tend to differ deeply
when portability is a problem (otherwise, portability is basically
free and it would be very silly to ignore it:-). But it's a serious
error to claim that a program cannot be portable if the underlying
infrastructure paradigms differ widely! Such underlying differences
just raise (a bit) the COST of portability, because you have to
write most of your code to a higher-level abstraction and also
implement that abstraction on different underlying platforms.
But the costs only grow by _a bit_, because good software IS
'layered' anyway -- levels of different abstraction built on top
of each other. You just have to ensure your lowest levels are
designed in a way that is decently decoupled from specific
bits of infrastructure -- often not MUCH harder than coupling
them heavily and strongly, and often worthwhile anyway (if
the lower-level quirks aren't ALLOWED to percolate upwards
and pollute the higher-level architecture, that may often be
all to the good of the higher-level cleanness and resulting
productivity in development and maintenance). A long time
ago, in a galaxy far, far away, we had different GUI's running
on top of WIDELY different infrastructure paradigms -- the
bare paint-pixels-yourself approach of a now-obsolete HP
proprietary minicomputer, Apollo's Domain, Dec VMS (I
forget what their GUI subsystem was called), an early release
of Windows, some X or other (I think it already was X11, but
I'm not sure)... and we had not taken the trouble to design
a _portable_ GUI architecture. Fortunately, at one point, we
did, even though it did mean more than refactoring -- it was
a big-bang junk-and-rewrite of a lot of stuff, given how badly
we had allowed non-portability to 'percolate upwards' -- the
cost of porting and maintaining on (at that time) half a dozen
platforms was killing us (we were much smaller then, so the
cost was a really large portion of our total investments).
The resulting portable architecture, highlighting *decoupling*
between portable parts (the higher-levels, amounting to a
VAST majority of the applications) and non-portable ones (the
lower-levels, handled by *specialists* of both the platforms
AND the subject-area -- this early specialization freed most
developers from having to know or care how you paint a pixel
or respond to a button click on any given platform, gave us
higher-quality GUI's AND better application-logic, and paved
the way for our growth) served us well for years. When, much
later, we jumped on the Win95 bandwagon (having sniffed out
that this cranky unreliable OS bid fair to conquer engineers'
desktops, whether we liked it or not), we gained "native Win32
look and feel" faster and more smoothly than our competitors,
and with the involvement of less than 10% of our developers --
all the others could still (lucky them) develop on Unix, respecting
our in-house portability rules, and "it all just worked".
GUI's are one example, but I have others, from the same bunch
of real-life experiences, in such things as RDBMS's and network
protocols. Attention to portability, mostly by designing intermediate
layers and letting most everybody program to those layers while
a few specialists slave away at implementing the layers on top
of various platform-quirks, has always stood us in VERY good stead...
> more interested in creating competitive and distinctive market
> advantage than serving the abstract idea of portability.
What gave you the idea that anybody around here is interested
in "serving abstract ideas"? I'm interested in serving *our
customers*, all 10,000+ of them at last count -- ensuring
the applications they buy from us (or actually, these days,
_rent_ -- we've successfully transitioned to a subscription
business model) have very good architecture (including the
careful layer separation that also ensures portability) is a
means to that end (which in turn is a means to the end of
making a whole lot of money of course:-).
> I think code portability really only becomes an issue if it
> is a first-order component of the economics of a company. For example,
> if you sell compilers, portability is probably pretty important.
> OTOH, if you write drivers, it is not much of an issue.
And if you write mechanical CAD and PDM in the current
market, it may not SEEM much of an issue -- if you're too
young to remember all your former-competitors killed by
having put all their eggs into one non-portable basket, to
exploit some nifty-looking doodad in a platform that then
became defunct too fast. Fortunately, enough of our guys
used to be the best developers from just such quondam
competitors, so I don't think they're likely to forget:-).
> > -- they claim speed-ups
> > of about 60% overall, more solidity, etc, as a consequence. If that
>
> Speed-up, I believe. But "more solidity" by switching to C++, I doubt.
> That probably has more to do with 2nd generation software being
> better than the 1st.
Or else the claim is unwarranted -- who knows, but it wouldn't
be the first time that marketing overclaims something:-).
Alex
No. I meant that in every commercial software/systems environment I've
ever seen, the portability of the *programmer's* skills was more important
than the portability of their final work product.
<SNIP>
> > Anyway, our code, *by intent* was *not* portable. It couldn't
> > be portable in any practical sense because the underlying
> > infrastructure paradigms are really different between, say TPF,
>
> Underlying infrastructure paradigms do tend to differ deeply
> when portability is a problem (otherwise, portability is basically
> free and it would be very silly to ignore it:-). But it's a serious
> error to claim that a program cannot be portable if the underlying
> infrastructure paradigms differ widely! Such underlying differences
> just raise (a bit) the COST of portability, because you have to
> write most of your code to a higher-level abstraction and also
> implement that abstraction on different underlying platforms.
>
You are dead wrong about this. Not all engineering problems
can be fixed with money - some are just plain impossible at
ANY cost. Portability can fall into this category.
It is (or was, and possibly still is) *impossible* to write
portable code, for instance, across IBM TPF (for which no
high-level language compiler existed), Tandem Non-Stop (which has
a VERY different programming model than most other OSs),
and Unix AND still meet the performance criteria we were expected to
hit.
Actually, in the case of TPF at the time, it was *literally*
impossible to write portable code - everything was written in
IBM/3090 BAL (assembler) cuz that's all there was. Short of
writing a VM to run BAL on other machines or a 'C' compiler for
TPF (which, at the time, would have been a joke considering the
performance requirements we had to hit - *everything* on the mainframe
had to be hand tuned) there was no way to do it.
Portability is not simply an artifact of willingness to spend money,
though that certainly is a factor (as is time). It is a
consequence of good requirements, good design, and similar-enough
target environments to make it feasible in the first place. It's
hard enough to do when you have these "similar enough" systems, like,
say, Unix and Win2K. But how are you going to port your nice, clean
C program to a machine that speaks EBCDIC, has no stack, has no
shell as you understand it, does not implement fork/exec, does not have
sockets, and where each CPU is running at 100% utilization every
second of the day? Oh, and those are the fastest CPUs for that
sort of computation that money can buy. (These conditions have
all probably changed by now, but that was the case then.)
> But the costs only grow by _a bit_, because good software IS
> 'layered' anyway -- levels of different abstraction built on top
> of each other. You just have to ensure your lowest levels are
> designed in a way that is decently decoupled from specific
> bits of infrastructure -- often not MUCH harder than coupling
> them heavily and strongly, and often worthwhile anyway (if
> the lower-level quirks aren't ALLOWED to percolate upwards
> and pollute the higher-level architecture, that may often be
> all to the good of the higher-level cleanness and resulting
This is a lovely precept that happens to work enough
of the time that people start to assume it is always true.
Abstraction layers ARE a convenient and helpful design tool
and they DO promote better portability. However, you can
forget all that if you've ever had to make performance your
first consideration. Again, in the original example I gave,
performance and uptime were all that mattered, because we
were already running a 7-way CPU cluster of the biggest, baddest
mainframes available. When guaranteed response time and/or
human safety and/or huge sums of money are at stake, no one
gives a Tinker's Damn about portability or any of the other
sacred cows that get flogged in the "Software Engineering" community.
Get yourself invited to the development center at one of the
airline reservations systems, or at VISA or MasterCard, and
you will discover an entire world where everything you were
taught turns out to be wrong. In other words, software that
is not neatly layered is not necessarily badly designed,
it may well be purpose-built that way.
<SNIP>
>
> > more interested in creating competitive and distinctive market
> > advantage than serving the abstract idea of portability.
>
> What gave you the idea that anybody around here is interested
> in "serving abstract ideas"? I'm interested in serving *our
> customers*, all 10,000+ of them at last count -- ensuring
None of whom give a damn if your code is portable (unless you're
selling programming tools like compilers) or how elegantly
architected it might be - they just want your application to run
on their system. How it does so, is your problem, not theirs.
> Dynamic typing, or binding, reference variables that can
> change the type pointed to, are to me still part of a weaker
> type safety than if they were not there.
Typing and binding are not the same thing. In C, C++, Pascal, etc. the name
and type of a variable are attributes of the variable name. The variable's
content (or what it points to) doesn't have any name or type--it's just a
chunk of memory that is interpreted and manipulated according to the type of
the variable name used to refer to it. The most obvious example of this is
the C/C++ union:
union {
int ival;
float fval;
} u;
u.ival and u.fval are different variables with different types referring to
the same (untyped) chunk of memory. In statically-typed languages like C,
variables can't change their types. A cast only tells the compiler to use
different rules to manipulate the memory the cast expression refers to; it
doesn't change the type of any variables.
The contents of a typed variable can be changed, but the variable can't be
rebound to another chunk of memory. C and C++ simulate binding through the
indirection of pointers. But because the compiler can't detect rebinding a
pointer, the programmer has to deal with memory management (C, C++), and all
the problems that ensue, or the language can simply disallow pointers
(Java).
Python doesn't have variables. Python has typed objects, references to
objects, and names that are bound to objects. The binding is dynamic: a name
may be bound to different objects of different types at run-time (as your
example code demonstrates). The type is a property of the object, not of the
name. Note that Python doesn't have or need pointers.
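A tiny sketch of that name/object distinction:

a = [1, 2, 3]
b = a                    # a second name bound to the *same* list object
b.append(4)
print a                  # [1, 2, 3, 4] -- one object, two names
b = "something else"     # rebinding b doesn't touch the list, or the name a
print a                  # still [1, 2, 3, 4]
print type(a), type(b)   # the types belong to the objects, not to the names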
Dangling pointers and undisposed objects and chunks of memory are the
biggest problems for C/C++ programmers. The language type enforcement does
nothing to deal with these subtle and hard to find problems. An industry of
debugging and testing tools developed alongside C++ to help programmers find
and fix memory management problems.
Java is one solution: a typed language with name binding rather than
pointers. Java's C-derived syntax and complexity are a lot for new
programmers to master, and buggy and incompatible JVMs plague more
experienced programmers. Python is another solution. Python's simpler
syntax, rich built-in objects and standard library, and vastly simpler
runtime environment make it a good choice where Java is too bulky or
complicated.
I've been programming professionally for 25 years and I've mastered a lot of
languages. I still usually "think" in C, but Python is now my language of
choice for almost everything. My current day job involves building a huge
logistics system for an auto manufacturer. We're using Oracle PL/SQL and
Java. I can say without exaggeration that if we were using Python we'd be
done by now, and our bug list would be one-tenth of what it is now. The
biggest problem we face is the multitude of programmers at widely varying
skill levels, trying to master complicated and subtle languages and tools,
and get their code to work with all the rest. If we were using Python the
skill gaps would be narrower, and easier to see and fix.
> So let's hold back the ad hominem
> attacks and help each other progress.
I agree--some of the postings are out of line.
> Originally [in Visual Basic], there was not a way to
> force declaration of variables. It was programmer pressure
> that forced MS to introduce the "option explicit" code
> directive which requires declaration.
The OPTION EXPLICIT feature of VB does the same thing as Fortran's IMPLICIT
NONE. VB has typed variables and the VARIANT type, which is just a UNION
with a "current type" attribute. I don't know when OPTION EXPLICIT was
introduced, but I'm pretty sure stronger typing and avoidance of the VARIANT
type was pushed so VB could play nice in the COM/DCOM world, and not in
response to programmer pressure.
Greg Jorgensen
PDXperts LLC
Portland, Oregon, USA
gr...@pdxperts.com
Go tell that to a (hopefully hypothetical) customer who's just
done some system upgrade, preferably right after the program
with which he spends the majority of his working hours has
just stopped working. "Oh but you don't give a damn if my
code is portable, so why are you giving all of these damns
right now?". Even if all it takes to get him back to work is
a few downloads of upgraded versions -- a few hours' time to
locate, grab and install them -- you WILL hear damns aplenty.
> architected it might be - they just want your application to run
> on their system. How it does so, is your problem, not theirs.
There is a grammatical error here. "their system" is singular.
It would be a singular customer indeed (for our products) who
owned one single system (without significant upgrades) and
only ran our programs on that single (singular) system.
Change the sentence to "they just want your applications to
run on a wide variety of systems, offering several trade-off
points between cost, performance, stability, familiarity of
other system tools to their employees, &c, so that choosing
your applications does not constrain their abilities to choose
the platforms they equip their employees with, upgrade them
with the newest processors or graphic cards when they think
the price/performance is right for that, and so on". In other
words, they just want our applications to be portable -- you're
right that HOW we achieve that desideratum (just like, how
do we achieve high performance, rich functionality, low price,
backwards compatibility, solidity, and a zillion other typical
customer desiderata) doesn't (well, _shouldn't_ -- reality is
often different) concern them. But portability is a plus in
itself for the customer, just like performance &c, because a
typical customer in our field doesn't own ONE system -- much
less does he or she kid him or herself that 'ONE' unchanged
system is what he or she will own for a long, long time.
It so happens that _right now_ we can get away (for most
customers) with only offering portability among a relatively
narrow range of platforms -- yes, we ARE losing business
where (for example) the customer specifies that design
applications must run on Macintosh whatever (because
that's the system used in certain key departments of the
customer's firm/suppliers/customers/whatever, and the
customer doesn't trust translation utilities to be perfect,
and so wants to make SURE the same application runs on
all systems in a certain set) as well as Windows 2000, &c;
i.e., where a potential customer specifies among the
requisites a wider portability than we can offer right now.
But, we estimate that the loss due to non-universal
portability is lower than what it would cost us to achieve
universal portability (including quality assurance on each
supported platform, of course). It's not an abstract, far
away issue: it's just as concrete and important as any
other customer desideratum, such as top performance,
low footprint, low price, total solidity, and so on. Of course,
what the customer really wants is: everything, for free,
yesterday, etc, so one must trade-off between various
such desiderata -- but portability IS typically one of them.
It may be different in fields (if there are any) where every
customer believes (rightly or wrongly) that the application
he or she buys will run for all of its useful lifetime on a
single unchanging system configuration. I suspect that may
be true of applications with relatively short useful lifes (such
as games), or when the customer is too naive to understand
how constraining it will soon prove to be unable to upgrade
his or her machines because of portability constraints of the
application being purchased. Neither of these issues applies
to CAD and PDM software, and, I'm sure, to many other
fields in the area of software development.
Alex