Pros and cons compared to LISP?
Pros: Haskell is cool, loaded with cool features
Cons: Haskell is not Lisp
Caveat: It's been a while since I've used Haskell (2-3 yrs), and I've
only recently been working in Common Lisp (2 yrs).
I remember Haskell being very succinct, and having a very powerful
typing system. Lisp has more of the former, and much less of the
latter. Although very flexible, and good at catching errors at compile
time, I found the typing system in Haskell sometimes got in the way.
Lisp lets you declare types, but if you don't want to you don't have
to, and this tends to make it more flexible than Haskell. Furthermore,
with good unit testing, the sorts of errors one would catch in
Haskell also get caught early on in Lisp.
Likewise, although Haskell has some capacity to define new language
constructs, Lisp's macros are more powerful, and this provides a
further degree of flexibility.
Haskell's built-in syntax has more succinct ways of describing things
like list comprehensions, return of multiple variables from a
function, and function currying. The assumed lazy evaluation also
means that you can do some very pretty stuff with 'infinite' lists,
that gets a little uglier in lisp. However, I also find Haskell to be
harder to read than lisp, largely because of these very features.
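To illustrate (a quick sketch from memory, so treat the details as
approximate; the second helper is made up):

-- An 'infinite' list of Fibonaccis, safe to define thanks to laziness.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- A list comprehension over it; (< n) is a curried section.
smallFibSquares :: Integer -> [Integer]
smallFibSquares n = [x * x | x <- takeWhile (< n) fibs]

Very pretty, but that same density is part of what makes it hard to
read back later.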
I also seem to recall Haskell being pretty slow, but that might just
have been the compiler I was using, and what I was using it for. Lisp
on the other hand can be quite fast. (I believe some claim it is as
fast as or faster than C.)
So in summary:
Haskell: a more powerful type-checking system that can catch more
errors; more succinct for some operations. Not quite as readable as
Lisp.
Lisp: More flexible. Faster. Possibly less succinct for particular
applications, but often the flexibility means you can define
constructs that lead to more succinct code than one would write in
Haskell. Possibly less error catching at compile time, but unit
testing can largely make up for this.
Perhaps I am biased in this regard, but I generally prefer Lisp.
Though I might consider using Haskell for a well-defined problem where
I knew the sort of lazy evaluation and pattern matching that is easily
expressed in Haskell would be useful.
Actually, what the lisp type system means is that in lisp we program
in generic mode by default.
When you write:
int fact(int x){ return (x<=0)?1:x*fact(x-1); }
it's statically typed, but it's also SPECIFICALLY typed.
When you write in lisp:
(defun fact (x) (if (<= x 0) 1 (* x (fact (1- x)))))
it's already a generic function (not a lisp generic function,
technically, but what other programming languages call a generic
function):
(mapcar 'fact '(4 4.0 4.0l0 5/2))
--> (24 24.0 24.0L0 15/8)
to get the same in C++ you would have to write:
#include <rational.hxx>
template <typename T> T fact(T x){ return (x<=0)?1:x*fact(x-1); }
... fact(4), fact(4.0), fact(4.0L), fact(Rational(5,2)) ...
I hope Haskell is able to derive these instances automatically.
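(It is, via type classes. A minimal sketch, from memory:

-- GHC infers fact :: (Ord a, Num a) => a -> a by itself:
fact x = if x <= 0 then 1 else x * fact (x - 1)

-- fact (4 :: Int), fact 4.0 and fact (5/2 :: Rational)
-- give 24, 24.0 and 15 % 8 respectively.

so the one definition covers Int, Double, Rational and anything else
in Ord and Num.)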
But you can easily get even more genericity, either using non CL
operators that you can easily redefine (eg. as lisp generic
functions), by shadowing them to the same effect, or by using
functions passed in argument. CL:SORT is generic because it takes a
LESSP function, so it can be applied on sequences of any kind of
object.
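(The same trick exists in Haskell, for what it's worth; a sketch using
the standard Data.List and Data.Ord, with a made-up helper name:

import Data.List (sortBy)
import Data.Ord (comparing)

-- The ordering is a function argument, so this sorts lists of
-- anything, regardless of element type.
byLength :: [[a]] -> [[a]]
byLength = sortBy (comparing length)

Genericity through functions-as-arguments does not depend on the type
system at all.)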
So what statically checked type system proponents deride as duck
typing is actually lisp running in its generic-mode gear by default.
--
__Pascal Bourguignon__ http://www.informatimago.com/
I need a new toy.
Tail of black dog keeps good time.
Pounce! Good dog! Good dog!
Haskell's success rate at generating widely-used open source code is far
lower than that of most other languages:
http://flyingfrogblog.blogspot.com/2008/08/haskells-virginity.html
--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com/products/?u
In summary, Jon's blog says the Haskell compiler GHC is dropping
the Haskell-based darcs as its source control system.
Good to know.
See also:
Distributed RCS, Darcs, and Math Sacrilege
http://xahlee.org/UnixResource_dir/writ/darcs.html
plain text version below.
-------------------------
Distributed RCS, Darcs, and Math Sacrilege
Xah Lee, 2007-10-19
When i first heard about distributed revision control system about 2
years ago, i heard of Darcs, which is written in Haskell↗. I was
hugely excited, thinking about the functional programing i love, and
the no-side effect pure system i idolize, and the technology of human
animal i rapture in daily.
I have no serious actual need to use a revision control system (RCS)
in recent years, so i never really tried Darcs (nor am i actively
using any RCS). I just thought the new-fangled distributed tech in
combination with Haskell was great.
About 2 months ago, i was updating a 5-year old page i wrote on unix
tools (The Unix Pestilence: Tools and Software) and i was trying to
update myself on the current state of the art of revision systems. I
read in Wikipedia (Darcs↗) this passage:
Darcs currently has a number of significant bugs (see e.g. [1]).
The most severe of them is "the Conflict bug" - an exponential blowup
in time needed to perform conflict resolution during merges, reaching
into the hours and days for "large" repositories. A redesign of the
repository format and wide-ranging changes in the codebase are planned
in order to fix this bug, and work on this is planned to start in
Spring 2007 [2].
This somewhat burst my bubble, as there always was some doubt in the
back of my mind about whether Darcs was just a fantasy-ware
trumpeted by a bunch of functional tech geekers. (i heard of Darcs in
the irc emacs channel, whose regulars are often students and hobbyist
programers)
Also, in my light research, it was to my surprise that Darcs is not
the only distributed system, and perhaps not the first one either,
contrary to my impressions. In fact, today there are quite a LOT of
distributed revision systems; being distributed is actually the norm.
When one looks into these, such as Git↗, one finds that some of them
are already used in the industry for large projects, as opposed to
Darcs's academic/hobbyist kind of community.
In addition to these findings, one exacerbation that greatly pissed me
off about Darcs is the intro of the author (David Roundy)'s
essay about his (questionable-sounding) “theory of patches” used in
Darcs. ( http://darcs.net/manual/node8.html#Patch )
Here's a quote:
I think a little background on the author is in order. I am a
physicist, and think like a physicist. The proofs and theorems given
here are what I would call ``physicist'' proofs and theorems, which is
to say that while the proofs may not be rigorous, they are practical,
and the theorems are intended to give physical insight. It would be
great to have a mathematician work on this, but I am not a
mathematician, and don't care for math.
From the beginning of this theory, which originated as the result
of a series of email discussions with Tom Lord, I have looked at
patches as being analogous to the operators of quantum mechanics. I
include in this appendix footnotes explaining the theory of patches in
terms of the theory of quantum mechanics. I know that for most people
this won't help at all, but many of my friends (and as I write this
all three of darcs' users) are physicists, and this will be helpful to
them. To non-physicists, perhaps it will provide some insight into how
at least this physicist thinks.
I love math. I respect Math. I'm nothing but a menial servant to
Mathematics. Who the fuck is this David guy, who proclaims that he's
no mathematician, then proceeds to tell us he doesn't fucking care
about math? Then, he went on about HIS personal fucking zeal for
physics, in particular injecting the highly quacky “quantum mechanics”
with impunity.
-------------------------
Btw, Jon, you often attack Lisp and Haskell. However, i don't think i've
ever seen you criticize OCaml.
Of course, since you sell OCaml, you are probably very biased.
Is there some criticism of OCaml you could offer, at least to show
you are not that biased?
If you do reply to give some criticism of OCaml, please don't just
give some cursory lip service that appears to be negative criticism
but isn't. It might be true that you really think OCaml is
comparatively great in all or almost all aspects... but surely there
are some critically negative aspects?
Xah
∑ http://xahlee.org/
☄
All I know is that some google yobbo listened to sean parent try to
explain for 76 minutes why the dataflow paradigm offered two orders of
magnitude code reduction and all the yobbo could ask was, "Why didn't
you use Haskell?" Far be it from a Lisper to accuse someone of being
obsessed with an obscure niche language, but I just did.
I think only F# is better at turning otherwise intelligent engineers
into frothing at the mouth idiots. I would have added OCaml but I saw
some investment bank in NYC was Actually Using It, and The Kenny has a
soft spot for Programmers Actually Programming.
hth, kt
Also I think FFTW3, the very well used open source Fourier transform
library, uses OCaml to generate optimized C code...
Right. Thanks for the clarification. I didn't mean to imply that there
was no typing by default in lisp: just that it wasn't something you
had to *declare*.
As of 2007-10, the “theory of patches” introduction section of darcs's
documentation that contains the “don't care for math” phrase is at
“http://darcs.net/manual/node8.html#Patch”. As of 2008-08-19, that
section seems to have moved to a different page, at the url
“http://darcs.net/manual/node9.html”, and the “don't care about math”
bit has been removed.
Thanks to Xah Lee, whose harsh criticism on society sometimes takes
effect silently.
Xah
∑ http://xahlee.org/
☄
Yes.
> When you write in lisp:
>
> (defun fact (x) (if (<= x 0) 1 (* x (fact (1- x)))))
>
> it's already a generic function
Not quite. More on that in a moment.
> to get the same in C++ you would have to write:
>
> #include <rational.hxx>
> template <typename T> fact (T x){ return (x<=0)?1:x*fact(x-1);}
>
> ... fact(4),fact(4.0),fact(4.0d0),fact(Rational(5,2)) ...
The C++ example with template is far more generic than the Lisp one:
it's defined for all types T capable of handling the operations <= and
*, whereas the Lisp one just takes them for granted over numbers.
Of course, in the case of factorial, it doesn't matter, but it means
you can have complex datatypes redefining useful operators (or
functions) *at compile time*. Same for Haskell. Functions are
defined and dispatched per type.
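A rough Haskell sketch of that per-type dispatch (the Plus class is
hypothetical, and GHC wants the FlexibleInstances pragma for the
String instance):

{-# LANGUAGE FlexibleInstances #-}

-- A class method is resolved per type at compile time.
class Plus a where
  plus :: a -> a -> a

instance Plus Int where
  plus = (+)

instance Plus String where
  plus = (++)

-- plus (1 :: Int) 2 => 3
-- plus "foo" "bar" => "foobar"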
You could of course do something like this, say, in Scheme:
(define +
  (let ((o+ +))
    (lambda (a b . more)
      (apply (if (string? a)
                 string-append
                 o+)
             (cons a (cons b more))))))
and use it more "generically" like this:
(+ 1 2 3 4 5) => 15
(+ "foo" "bar") => "foobar"
In implementations which actually permit redefining such fundamental
functions.
But then, you gotta do run-time checks on the arguments that are not
present in the Haskell and C++ compile-time versions. Note that such
checks pile up as you nest deeper into possible previous redefinitions
of the operator done along the same lines as the above.
> But you can easily get even more genericity, either using non CL
> operators that you can easily redefine (eg. as lisp generic
> functions), by shadowing them to the same effect, or by using
> functions passed in argument. CL:SORT is generic because it takes a
> LESSP function, so it can be applied on sequences of any kind of
> object.
These are all fine workarounds, even though perhaps more verbose than
simply being able to do function "overloading" per type. Sorry if I
sound froggy.
Physicists and engineers don't care much about math; it's just a useful
tool for their more pragmatic goals.
> Btw, Jon, you often attack Lisp, Haskell. However, i don't think i've
> ever seen you criticize OCaml.
Oh, he does criticize OCaml at every opportunity there is to sell
F#. :)
I recently did some Haskell (and Erlang) hacking, too. Although I like
both languages and they both have their niches, the biggest feature
that takes me back to Common Lisp every time is macros. Whenever I use
another language (does not matter what it is), deep down I wish the
language had macros like Common Lisp.
--
Ralph
Despite my foray into other FPLs, OCaml has remained my favourite language
for getting work done (F# is arguably better for making money) but OCaml
does have some problems:
. OCaml lacks a concurrent garbage collector and, consequently, is unable to
leverage shared-memory parallelism on a wide range of tasks. F# fixed this.
. OCaml has unsafe built-in polymorphic functions such as equality,
comparison and hashing. These can silently break if they are accidentally
applied to abstract types and the subsequent bugs can be almost impossible
to fix (see the sketch after this list).
. OCaml has a cumbersome foreign function interface.
. OCaml lacks run-time type information and, consequently, type-safe
serialization and generic printing. F# solved this problem.
. OCaml lacks operator overloading so mathematical expressions involving
many different numeric types are unnecessarily verbose. F# solved this
problem.
. OCaml lacks many basic types such as int8, int16, float32, complex32. F#
solved this problem.
. OCaml lacks value types so complex numbers cannot be unboxed. F# solved
this problem and, consequently, is 5.5x faster at complex FFTs than
naively-written OCaml.
. OCaml has an old fashioned one-stage compiler design. F# improves upon
this with the CLR's ahead-of-time compilation. For example, the CLR
generates type specialized native code whereas OCaml uses a uniform generic
representation that can incur substantial performance hits.
. OCaml's uniform generic representation includes tagged 31- and 63-bit
integers. F#'s unboxed representation is ~3x faster.
. The OCaml implementation cannot be contributed to and we cannot pay INRIA
to fix or improve it. For example, I filed a bug with a verbatim source
code fix that was only actually integrated a year later.
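To illustrate the unsafe-equality point above: in Haskell, equality is
a class method rather than a built-in polymorphic function, so a type
that deliberately omits the instance cannot be compared at all, and
the mistake surfaces at compile time. A minimal sketch (the Token type
is hypothetical):

-- Abstract type with no Eq instance, on purpose.
newtype Token = Token Int

same :: Eq a => a -> a -> Bool
same = (==)

-- same (Token 1) (Token 2) is rejected at compile time, not at run time.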
I believe these problems can be solved in a future open source FPL without
sacrificing the key features that make OCaml so productive. However, many
of these problems do not constitute research and, consequently, they are
unlikely to be fixed by academic programming language researchers.
I am not sure how to proceed with such a project (e.g. build upon LLVM or
build upon Mono?) but there appears to be growing interest in this idea.
Ideally, FPL communities would collaborate to build a common language
run-time for OCaml, Lisp, Scheme, Haskell and so on but I cannot see that
happening for social reasons. Without such collaboration, I doubt any of
these languages will ever get a concurrent GC approaching the efficiency
of .NET's.
And FFTW is part of MATLAB. Intel, Microsoft and Citrix all have huge OCaml
code bases as well...
You realize we sell OCaml products as well:
http://www.ffconsultancy.com/products/ocaml_for_scientists/?u
http://www.ffconsultancy.com/products/ocaml_journal/?u
In fact, there is currently more money in OCaml than F#...
Whoa! Looks mighty fine to me! :D
http://liskell.org/fifteen-minute-tour
The syntax and semantics look like a nice mix of Scheme and Haskell.
I like the ease of defining ADTs:
(defdata Point3DType
  (Point3D Rational Rational Rational))
and pattern matching on function definition without destructuring-bind
or other verbose macro workaround:
(define (distance3D (Point3D x y z))
  (sqrt (+ (^ x 2)
           (+ (^ y 2) (^ z 2)))))
or function definitions in let with the same syntax sugar as in define
(no lambda):
(define (scalar-multiply s point)
  (let (((Point3D x y z) point)
        ((p x) (* s x)))
    (Point3D (p x) (p y) (p z))))
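For comparison, the same thing in plain Haskell (a sketch; I used
Double instead of Rational so that sqrt typechecks):

data Point3D = Point3D Double Double Double

distance3D :: Point3D -> Double
distance3D (Point3D x y z) = sqrt (x^2 + y^2 + z^2)

scalarMultiply :: Double -> Point3D -> Point3D
scalarMultiply s (Point3D x y z) = Point3D (s*x) (s*y) (s*z)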
Perhaps it also comes complete with lazy semantics? A project to watch
closely.
Who knows, perhaps it attracts even more interest than Haskell or Lisp
alone... now Arc has some tough competition! :P
or Common Lisp (with semi-Prolog naming convention).
(defmethod +//2 ((x string) (y string))
  (concatenate 'string x y))

(defmethod +//2 ((x number) (y number))
  (cl:+ x y))

(defun extmath:+ (x y &rest more-summable-things)
  (reduce '+//2 more-summable-things
          :initial-value (+//2 x y)))
You can expand on this code as much as you want.
Cheers
--
Marco
You mean CLAZY? (Shameless plug: common-lisp.net/project/clazy). Not
that the hack is particularly clever, but it shows you how you can do
these things *in* portable Common Lisp.
Cheers
--
Marco
Arc is one of the most despicable worthless shits there is.
Arc is the type of shit produced when some person becomes famous for
his achievement in some area, then begins to sell all sorts of shit
and visions just because he can, and people will buy it.
So, please don't mention arc. Mention Qi, NewLisp, Liskell instead.
These represent far more effort, dedication, and earnestness, as well
as much more technical merit and social benefit, than a dignitary's
fame-selling shit.
Qi is great because it adds advanced functional features to lisp yet
remains compatible with Common Lisp.
Liskell is worth looking into because it is an experiment exploiting
the nested syntax while bringing in the semantics of a well-developed
functional lang.
Qi and Liskell are very welcome, and non-trivial, additions to the lisp
communities that can help lispers get new insights.
NewLisp is great because it is an earnest effort at modernizing lisp,
breaking away from the ~3 decades of traditions and thought patterns
that are rooted in Common Lisp and Scheme. NewLisp's community is
entirely distinct from CommonLisp/SchemeLisp, attracting a much newer
generation of hobbyist and non-professional programers. NewLisp isn't
one of the fashionable new langs created for the sake of a new lang.
It has been around more than a decade.
http://en.wikipedia.org/wiki/NewLisp
http://en.wikipedia.org/wiki/Qi_(programming_language)
I support Qi, NewLisp, Liskell. Please mention these in your
socializing chats.
Paul Graham, the author of Arc, of course has every right to create
his lang just like anyone else, and even for the sole purpose of
selling his name just because he can. And perhaps due to his past
contribution in the lisp community, Arc might be successful and
getting a lot of users purely due to Paul's fame. However, please
consider the real value of a new lang, from technical to social, and
not merely who's more famous. (for example, technical considerations
can be such as what new functionality it brings, such as Qi, or what
insight it might bring, such as Liskell. Social considerations can be
such as whether it brings the ideas of lisp to entirely new segments
of society, such as NewLisp.)
See also:
What Languages to Hate
http://xahlee.org/UnixResource_dir/writ/language_to_hate.html
plain text version follows:
----------------------------------
What Languages to Hate
Xah Lee, 2002-07-18
Dear lisp comrades and other concerned parties,
First, all languages have equal rights. Do not belittle other
languages just because YOUR favorite language is a bit better in this
aspect or that. Different people have different ideas and manners of
perception. Ideas compete and thrive in all unexpected fashions.
Societies improve, inventions progress. Lisp may be a first in this or
that, or faster or more flexible, or higher level than other languages
old and new, but then there are other languages, the likes of
Mathematica & Haskell & Dylan et al, which ridicule lisps in the same
way lisp ridicules other languages.
Just because YOU are used to more functional programing or love lots
of parentheses doesn't mean they are the only and best concepts. The
so-called Object Oriented programing of Java fame, or the visual
programing of Visual Basic fame, or the logic programing of Prolog
fame, or the format-stable syntax of Python fame, or the “one line of
Mathematica equals ten to one thousand lines of lisp” of _A New Kind
Of Science_ fame... all are parts of healthy competing concepts,
paradigms, or directions of growth.
The way some of you deride other languages is like sneering at
heterogeneity. If unchecked, soon you'll only have your sister to
marry. Cute, but you do not want incest to become the only sex.
Next time your superiority complex makes you sneer at non-lisp or
newfangled languages, remember this. It is diversity of ideas, that
drives the welfare of progress.
Now, there is one judgmental criterion, that if a language or computer
technology fits it, then we not only should castigate its
missionaries, but persecute and harass the language to the harshest
death. That is: utter sloppiness, irresponsibility, and lies. These
things are often borne out of some student's homework or moron's dirty-
work, harbored by “free” and wanton lies and personal fame, amassed
thru ignorance.
From my short but diligent industrial unix computing experience since
1998, i have identified the following targets:
* C (and consequences like csh, C++)
* vi
* Perl
* MySQL
* unix, unixism, and things grown out of unix. (languages,
protocols, philosophies, expectations, movements)
In our software industry, i like to define criminals as those who
cause inordinate harm to society, not necessarily directly. Of the
above things, some of their authors are not such criminals or are
forgivable. While others are hypocritical fantastic liars selfish to
the core. When dealing with these self-promoting jolly lying humble
humorous priests and their insidious superficially-harmless speeches,
there should be no benefit of doubt. Tell them directly to stop their
vicious lies. Do a face-off.
As to their brain-washed followers, for example the not-yet-hard-core
unix, C, or Perl coders rampant in industry, try to snap them out of
it. This you do by loudly snapping fingers in front of their face,
making it sound like an ear-piercing bang. Explain to them the utter
stupidity of the things they are using, and the harm to their brain.
IMPORTANT: _teach_, not _debate_ or _discuss_ or falling back into
your philosophical deliberating indecisiveness. I've seen enough
criticisms among learned programers or academicians on these, so i
know you know what i'm talking about. When you see a unixer
brainwashed beyond saving, kick him out of the door. He has become a
zombie who cannot be helped.
There are other languages or technologies that by themselves are
perhaps technically mediocre but at least are not egregious
irresponsible hacks, and therefore do not deserve scorn, but sometimes
they come with overwhelming popular outrageous lies (euphemized as
hype). Java is an example. For this reason, it is equally deserving of
the harshest
treatment. Any loud proponents of such should be immediately slapped
in the mouth and kicked in the ass in no ambiguous ways.
Summary: all languages have equal rights. However, those utterly
SLOPPY and IRRESPONSIBLE HACKS with promoter's LIES should be severely
punished. It is these that cause the computing industry inordinate harm.
Meanwhile, it is wrong to haughtily criticize other languages just
because they are not your cup of tea. Now, please remember this and go
do society good.
Xah
∑ http://xahlee.org/
☄
> What are you LISPers opinion of Haskell?
>
> Pros and cons compared to LISP?
I tried Haskell before CL. Most of the programming I do is the
application/implementation of numerical methods for solving economic
models.
Even though I recognize the `elegance' of certain Haskell constructs, the
language was a straitjacket for me because of two things: the type system
and the functional purity.
The type system required a lot of scaffolding (Either, Maybe, ...) when I
wanted to do something non-trivial. Indeed, Haskell makes the
construction of this scaffolding really easy, but in CL, I just find that
I don't have to do it and I can spend time writing more relevant code
instead.
Also, sometimes I had difficulty rewriting my algorithms in purely
functional ways. I agree that it can always be done, but I had to spend
a lot of time fighting Haskell.
What attracted me to Haskell initially was the elegance found in toy
examples (eg the Fibonacci series). It took me a lot of time to realize
that toy examples are, well, toy examples, and whether a language handles
them well is not relevant to problems of a larger scale. For example,
pattern matching looked fascinating, until I realized that I am not using
it that often.
Also, when I tried Haskell I didn't know about macros, which give Lisp a
lot of extra power. Now of course I would not ever use a language
without CL-like macros.
Note that I am not claiming that CL is `better', just that it suits me
better. Haskell is a fine language, and I don't intend to disparage it.
You will have to make up your own mind. I would suggest that you try
both: even if you end up using only one of them, working in the other one
for at least a month will broaden your horizons.
Best,
Tamas
Good that my little sarcasm prompted such an inspired response from
you. ;)
> Even though I recognize the `elegance' of certain Haskell constructs, the
> language was a straitjacket for me because of two things: the type system
> and the functional purity.
That's weird. Functional languages' abstractions and syntax should go
hand in hand with numerical problems such as those you describe.
> The type system required a lot of scaffolding (Either, Maybe, ...) when I
> wanted to do something non-trivial.
You mean IO?
> Also, sometimes I had difficulty rewriting my algorithms in purely
> functional ways. I agree that it can always be done, but I had to spend
> a lot of time fighting Haskell.
That doesn't make any sense: you don't fight Haskell when writing in
purely functional ways, only when allowing mutation and side-effects
in. And since you're mostly dealing with numerical computations and
algorithms, I believe IO would be very limited to just a few sections,
like reading lots of data from external sources and generating output.
> For example,
> pattern matching looked fascinating, until I realized that I am not using
> it that often.
Perhaps because you're programming in Haskell as if it was Lisp?
> On Aug 20, 4:35 pm, Tamas K Papp <tkp...@gmail.com> wrote:
>> I tried Haskell before CL. Most of the programming I do is the
>> application/implementation of numerical methods for solving economic
>> models.
>
>> Even though I recognize the `elegance' of certain Haskell constructs,
>> the language was a straitjacket for me because of two things: the type
>> system and the functional purity.
>
> That's weird. Functional languages' abstractions and syntax should go
> hand in hand with numerical problems such as those you describe.
Sometimes they do, sometimes they don't. You can always write the
problem in purely functional ways, but sometimes that required quite a
bit of effort. For example, something like using a simple hash table
(which is still "provisional") requires IO. Things like that were a pain.
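To illustrate (a rough sketch; the tally function is made up): the
pure alternative is something like Data.Map, where every "update"
returns a new map that has to be threaded through the rest of the code
by hand, while a genuinely mutable hash table lives in IO:

import qualified Data.Map as M

-- Pure tallying: each insert yields a brand-new map.
tally :: [String] -> M.Map String Int
tally = foldr (\k m -> M.insertWith (+) k 1 m) M.empty

Neither option is hard, but compared to bumping a value in a Lisp hash
table it felt like ceremony.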
>> The type system required a lot of scaffolding (Either, Maybe, ...) when
>> I wanted to do something non-trivial.
>
> You mean IO?
Nope, I mean handling cases when a non-numerical result needs to be
returned or handled. Think of using nil to indicate a missing number in
Lisp: in Haskell you would need a Maybe. Or think of a function that
could take a real number or a list; for that I had to use Either. Things
like this got tiresome after a while. But this was not why I stopped
using the language.
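A sketch of the kind of thing I mean (a made-up example, not from my
actual code):

-- A missing observation forces Maybe...
mean :: [Double] -> Maybe Double
mean [] = Nothing
mean xs = Just (sum xs / fromIntegral (length xs))

-- ...and a scalar-or-list argument forces Either.
scale :: Double -> Either Double [Double] -> Either Double [Double]
scale c (Left x)   = Left (c * x)
scale c (Right xs) = Right (map (c *) xs)

Each wrapper is trivial on its own; it is the accumulation that got
tiresome.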
>> Also, sometimes I had difficulty rewriting my algorithms in purely
>> functional ways. I agree that it can always be done, but I had to
>> spend a lot of time fighting Haskell.
>
> That doesn't make any sense: you don't fight Haskell when writing in
> purely functional ways, only when allowing mutation and side-effects in.
> And since you're mostly dealing with numerical computations and
> algorithms, I believe IO would be very limited to just a few sections,
> like reading lots of data from external sources and generating output.
Your beliefs do not coincide with my experience. If you have already
specified the algorithm that you are trying to implement, perhaps you can
figure out the uber-functional way to do it on the first try. But in my
line of work, you experiment with different algorithms because you
don't know which one of them will work a priori. I find Lisp ideal for
that kind of tinkering, whereas in Haskell, I found this hard.
>> For example,
>> pattern matching looked fascinating, until I realized that I am not
>> using it that often.
>
> Perhaps because you're programming in Haskell as if it was Lisp?
If you reread my post, you will find that I started Haskell before CL.
Anyhow, please don't feel that you need to defend Haskell from me. I
emphasized that I consider Haskell a fine language, just not suitable for
my purposes. I gave Haskell a try; I used it for about 6 weeks. Of
course you can always claim that if I had used it for 6 years, I would be
more experienced and would see how to do things the Haskell way, but
honestly, I don't care. I was more productive in Lisp after a week than
in Haskell after 6 weeks.
Best,
Tamas
I just checked out the site but it doesn't look like much has been
done there recently. Does anyone know any more about this?
Robert
Sounds like you were not familiar with modern static typing though.
Thanks.
--
Daniel Pitts' Tech Blog: <http://virtualinfinity.net/wordpress/>
PG bet the ranch on terser code at all costs, including all sorts of
whacky syntax that is so whacky that bits of it conflict and cannot be
used
in combination. It was starting to look like Python when I left town.
And of course the single namespace is a disaster. Oh, and the absence of
an OO package.
kt
> xah...@gmail.com wrote:
>> Arc is one of the despicable worthless shit there is.
>> Arc is the type of shit that when some person becomes famous for his
>> achivement in some area, who began to sell all sort of shit and
>> visions just because he can, and people will buy it.
> That might be true, but I'd love it if you expounded on why you
> believe that? I don't have a lot of lisp experience, so I need
> someone to highlight facts about why Arc is bad comparatively.
As little lisp experience as you may have, you have more than Xah.
Asking Xah for advice in lisp matters (and probably in any other
matter) is like asking the first tramp you meet for financial
advice.
--
__Pascal Bourguignon__ http://www.informatimago.com/
"This statement is false." In Lisp: (defun Q () (eq nil (Q)))
i haven't actually used arc. My opinion of it is based on reading Paul
Graham's essays about his idea of some “100 year language” and
subsequent lispers' discussions on it this year here (most are highly
critical). Also, Wikipedia info gives some insight too:
http://en.wikipedia.org/wiki/Arc_(programming_language)
Paul's essay:
“The Hundred-Year Language”
http://www.paulgraham.com/hundred.html
I consider it one of the most worthless essays. Worthless in the same
sense Larry Wall's opinions on computer languages would be...
to fully detail would take several hours of writing, perhaps a 2
thousand word essay....
ok, perhaps a quick typing as summary of my impressions... but i know
all sort of flame will come.
Paul has some sort of infatuation with the concept of “hackers” (and
his language is such that these “hackers” will like it). I think this
is a fundamental problem in his vision.
If we look closely at the concept of “hacker”, from sociology's point
of view, there are all sorts of problems rendering the term or notion
almost useless. In short, i don't think there is a meaningful class of
computer programer he calls “hackers” such that one could say a
computer language is for them.
for example, hacker can be defined as knowledgeable programers, or
those exceptionally good at programing among professional programers,
or those who delight in technical details and their exploitation, or
those programers who tend to be extremely practical and produce lots
of useful code, or those whose coding method heavily relies on trial
and error and less on formal methods or systems, or perhaps some
element of all of the above. I don't mean to pick bones on
definitions, but i just don't see any sensible meaning in his frequent
use of the term in the context of designing a language.
put in another way... let's think about what langs his “hackers”
like. would it be lisp?? perl?? or something such as Haskell? As you
know, these langs are quite different in their nature. However, it's
quite difficult to say which one hackers prefer. In some sense, it's
probably perl. But in another sense, it's lisp, since lisp is unusual
and has many qualities, such as its “symbols” and “macros” concepts,
that allow it to be much more flexible at computing tasks. But then,
it could also be haskell. Certainly, lots of real computer scientists
with higher mathematical knowledge prefer it and consider themselves
real hackers in the classic sense of MIT's lambda knights and ultimate
lambda etc.
looking in detail at some of his ideas about Arc's tech details ... i
find them mostly worthless.
so, yeah, in summary, i see arc as almost completely meritless, and as
a celebrity's intentional or unintentional effort to peddle himself.
Note, i find some of Paul's essays very insightful or agreeable, such
as the one that talks about the psychology of nerds and highschool
bullies, and his opinion on OOP ...
----------------------------
here's some of my personal opinion on language design.
I think the criteria for a “best” or “100-year” language are rather
simple.
• it is easy to use. In the sense that the masses can use it, not
requiring some advanced knowledge of math or computer science or
practical esoterica like the unix bag of tricks.
• it is high level. This is related to ease of use. An exact, precise
definition of “high level” is hard to give, but typically it's the
so-called scripting langs, e.g. perl, python, php, tcl, javascript.
Typical features are typelessness or dynamic typing, and support for
many constructs that do a lot of things, such as list processing,
regex, etc.
• it has a huge number of libraries. This can be part of the lang, as
with java's “API” or perl's regex or mathematica's math functions, or
can be bundled, as with many perl and python libs, or be an external
archive such as perl's cpan. The bottom line is, there is a large
number of libraries that can be used right away, without having to
search for them, check reliability, etc.
the above are really down-to-earth, basic ideas, almost applicable to
anything else.
the tech geekers would have you believe other things matter, like
garbage collection, model of variables, paradigms like functional vs
oop, number systems, eval model like lazy or not lazy, closures or no,
tail recursion or no, currying or no currying, does it have a
Lindenmayer system or kzmolif type system, polymorphism yes?, and
other moronic things.
If you look at what langs became popular in the past 2 decades (thru
various websites), basically those that bubble up to the top are those
with good quality on the above criteria. (e.g. perl, php, javascript,
visual basic.) C and Java are still at top, of course, because there
are other factors such as massive marketing done by Java, and C being
old and has been the standard low level system lang for decades.
(excuse me for typos and other errors... took me already an hour to
type, plus lots of other newsgroup responses... will perhaps edit and put
on my website in the future...)
Xah
∑ http://xahlee.org/
☄
>> there are all sorts of problems rendering the term or notion almost useless.
This page may be of use, http://www.catb.org/~esr/faqs/hacker-howto.html,
although I'm guessing you've already read it and maybe even have an
essay for it.
In fact, I wouldn't go looking for a dictionary style definition,
since he used the word "hacker" to convey what he thought a hacker
was, so his article on "great hackers" would be very relevant:
http://www.paulgraham.com/gh.html
Those language qualities you listed do seem to be a good measure for a
good language though.
It seems to me that Arc has the first 2 qualities but not the third.
i read the jargon file, perhaps some 70% of it, in the late 1990s
online. At the time, i appreciated it very much.
But today, i have come to realize that the maintainer Eric Raymond is
a selfish asshole.
For example, Wikipedia has this to say
http://en.wikipedia.org/wiki/Jargon_file
quote:
«Eric S. Raymond maintains the new File with assistance from Guy
Steele, and is the credited editor of the print version, The New
Hacker's Dictionary. Some of the changes made under his watch have
been controversial; early critics accused Raymond of unfairly changing
the file's focus to the Unix hacker culture instead of the older
hacker cultures where the Jargon File originated. Raymond has
responded by saying that the nature of hacking had changed and the
Jargon File should report on hacker culture, and not attempt to
enshrine it.[2] More recently, Raymond has been accused of adding
terms to the Jargon File that appear to have been used primarily by
himself, and of altering the file to reflect his own political views.
[3]»
Eric is one of the guys largely responsible for the Open Source
movement and plays a role antagonistic to the FSF. He also created a
hacker logo to sell himself. His essay The Cathedral and the Bazaar,
which i read in the late 1990s, i consider stupid. You can often read
his posts or arguments online in various places, and from those posts
you can see he's often just a male ass.
Wikipedia has some illuminating summary of him:
http://en.wikipedia.org/wiki/Eric_S._Raymond
basically, a money hungry and selfish ass.
It used to also talk about how he supports the iraq war with some
racist undertone or some remarks he made about blacks ... i forgot the
details but they can be easily found in the article's history log.
He used to have a page on his website, not sure if it's still around,
about what he demands if people want him to speak. Quite rude.
> In fact, I wouldn't go looking for a dictionary style definition,
> since he used the word "hacker" to convey what he thought a hacker
> was, so his article on "great hackers" would be very relevant:
> http://www.paulgraham.com/gh.html
Paul is an interesting guy. I mean, his lisp achievements and credits
are quite sufficient for me to find him intelligent and interesting.
However, his essays related to hackers i just find quite worthless...
not even sufficient enough for me to scan ... as a comparison, i'd
rather read textbooks related to sociology or psychology.
... many famous people write essays. Philosophers, renowned scientists
of all areas, successful businessmen, famous award laureates ...
often, perhaps the majority, of such essays are rather riding on fame and
worthless in quality when judged in the long term like decades or
centuries ...
> Those language qualities you listed do seem to be a good measure for a
> good language though.
> It seems to me that Arc has the first 2 qualities but not the third.
i wouldn't say arc is good at all with respect to ease of use or
power.
for ease of use... first of all it uses lisp syntax, and still has
cons business. These 2 immediately disqualify it for the masses as
easy to use. Even if we take a step back, there's Scheme lisp, so in
no way is arc easier to use than, say, Scheme 5.
for power... i doubt if it is in any sense more powerful than say
Scheme lisp or common lisp. In comparison to the proliferation of
functional langs like Haskell, Ocaml/f# and perhaps also erlang, Q,
Oz, Mercury, Alice, Mathematica ... the power of arc would be a
laughing stock.
... adding to that such controversies as using ascii as the char set
(as opposed to unicode), using html tables for web design, no
namespace mechanism (all these i gathered as hearsay or on Wikipedia)
... arc to
me is totally without merit. (NewLisp and Qi, yay!)
see also...
Proliferation of Computing Languages
http://xahlee.org/UnixResource_dir/writ/new_langs.html
Xah
∑ http://xahlee.org/
☄
> .... It used to also talk about how he supports the iraq war with some
> racist undertone or some remarks he made about blacks ... i forgot the
> detail but can be easily found in the article's history log.
>
You can find Raymond's remarks about blacks here:
http://en.wikiquote.org/wiki/Eric_S._Raymond
| In the U.S., blacks are 12% of the population
| but commit 50% of violent crimes; can
| anyone honestly think this is unconnected
| to the fact that they average 15 points of IQ
| lower than the general population? That
| stupid people are more violent is a fact
| independent of skin color.
--agt
> Haskell's success rate at generating widely-used open source code is far
> lower than most other languages:
Your Debian install figures say nothing about "success rates", and do
nothing to compare this rate against "most other languages". Did you
compare against any other language than OCaml? No.
For example, what is the success rate of, say, F# at generating widely-
used open source code?
I see nothing to justify your conclusions about the productivity of
*languages in general* based solely on the install figures of a handful
of tools on Debian.
All we can conclude here is that FFTW, a C library, is popular. At a
stretch you might make some conclusions about the effectiveness of the
ocaml-debian packaging team. Any other conclusion is lost in the
noise.
(BTW. this thread is worth it just to see Harrop and Xah Lee
discussing Haskell on a Lisp thread. I do believe this is how baby
trolls are born :)
The Debian and Ubuntu popcon results cover billions of software
installations, hundreds of thousands of users from two of the most popular
Linux distributions and tens of thousands of packages. They are undoubtedly
among the most reliable quantitative sources of information for how widely
adopted open source projects have become.
The article I cited contains a thorough analysis based upon these objective
quantifications and it estimates the order of magnitude ratio of the
success rates of OCaml and Haskell for creating widely-used open source
software. The conclusion is that Haskell is an order of magnitude less
successful than OCaml.
> For example, what is the success rate of, say, F# at generating widely-
> used open source code?
Unknown: no comparable data are available for F# because it is a proprietary
Microsoft language.
> I see nothing to justify your conclusions about the productivity of
> *languages in general* based soley on the install figures of a handful
> of tools on Debian.
You are misrepresenting both the conclusion and the evidence. I am
disappointed: even Ganesh Sittampalam managed to create a plausible
sounding objection, although his quantifications turned out to be nothing
more than wildly inaccurate guesses.
> All we can conclude here is that FFTW, a C library, is popular.
You can get a lot more out of these data if you analyse them properly.
Moreover, the analysis is quite easy: even a web programmer could do it.
> At a stretch you might make some conclusions about the effectiveness of
> the ocaml-debian packaging team.
On the contrary, the success of projects like FFTW, Unison and MLDonkey had
nothing to do with the OCaml-Debian packaging team.
> Any other conclusion is lost in the noise.
Here are the facts again:
. 221,293 installs of popular OCaml software compared to only 7,830 of
Haskell.
. 235,312 lines of well-tested OCaml code compared to only 27,162 lines of
well-tested Haskell code.
Those are huge differences. How else do you explain them?
> (BTW. this thread is worth it just to see Harrop and Xah Lee
> discussing Haskell on a Lisp thread. I do believe this is how baby
> trolls are born :)
I find it far more enlightening that you would ban me from the Haskell Cafe
mailing list and IRC channel in order to avoid discussion of your own
findings. You must be much less confident in your work than I am in mine.
This year's results of the Debian and Ubuntu popularity contests are
based on 74011 submissions. There have never been more than 80,000
submissions in a single year.
This is hardly a comprehensive metric.
>
> The article I cited contains a thorough analysis based upon these objective
> quantifications and it estimates the order of magnitude ratio of the
> success rates of OCaml and Haskell for creating widely-used open source
> software. The conclusion is that Haskell is an order of magnitude less
> successful than OCaml.
The "article" you cite is your same bogus analysis of the debian
package popularity contest, basically it is a thinly veiled plug for
Flying Frog Consultancy.
That is obviously wrong given that the article cited 184,574 installs of
FFTW alone.
> This is hardly a comprehensive metric.
You appear to have neglected the Ubuntu popularity contest, which
accounts for an order of magnitude more people again.
>> The article I cited contains a thorough analysis based upon these
>> objective quantifications and it estimates the order of magnitude ratio
>> of the success rates of OCaml and Haskell for creating widely-used open
>> source software. The conclusion is that Haskell is an order of magnitude
>> less successful than OCaml.
>
> The "article" you cite is your same bogus analysis of the debian
> package popularity contest, basically it is a thinly veiled plug for
> Flying Frog Consultancy.
In other words, you also have no testable objections to the analysis
presented in the article so you are resorting to ad-hominem attacks. I am
not surprised: you clearly do not put your money where your mouth is, as I
do.
[ASCII art of a dog with a speech bubble: “Please don't feed the troll”]
If you have to post something about Dr. Jon Harrop (or his
sock puppets), then post it here:
True, Ubuntu has 683367 submissions. The data gets murkier once you
pick through it; for instance, the FFTW data that you cite:
                 vote  old   recent  no-files
Package: fftw3    139  9884      27    165142
Where
vote - number of people who use this package regularly;
old - number of people who installed, but don't use this package
regularly;
recent - number of people who upgraded this package recently;
no-files - number of people whose entry didn't contain enough
information
(atime and ctime were 0).
So although we have ~185,000 installs of FFTW, only 139 of those
installs are "used".
Surely some of the "no files" set must be used, so if we apply the
ratio of vote to old (139/9884, about 1.4%) to the 165,142 "no files"
entries, we come up with about 2,322 more that most likely should be
in the "vote" column.
Your analysis of the data was not in any way thorough; your portrayal
of the data and the conclusions you draw about Haskell on this list
and in your article are misleading.
Last I looked FFTW was a C library. Has that changed?
It is an excellent C library generated by OCaml code.
Here is an excerpt from their FAQ:
Question 2.7. Which language is FFTW written in?
FFTW is written in ANSI C. Most of the code, however, was
automatically generated by a program called genfft, written in the
Objective Caml dialect of ML. You do not need to know ML or to have an
Objective Caml compiler in order to use FFTW.
The fftw3 package transitioned to libfftw3-3 a year ago:
http://people.debian.org/~igloo/popcon-graphs/index.php?packages=fftw3%2Clibfftw3-3&show_installed=on&want_ticks=on&from_date=&to_date=&hlght_date=&date_fmt=%25Y-%25m&beenhere=1
So you have analysed data about the wrong package.
> Surely some of the "no files" set must be used so if we use the ratio
> of vote to old we come up with 2,322 more that most likely should be
> in the "vote" column.
The old package is largely unused now that the transition is complete,
yes.
> You analysis of the data was not in any way thorough, your portrayal of
> the data and the conclusions you draw about haskell on this list and in
> your article are misleading.
Let's just review the correct data from Ubuntu:
rank  name        inst    vote  old    recent  no-files
1735  libfftw3-3  104150  9084  77412  8288    9366
7652  darcs       2998    271   2634   92      1
As you can see, the "recent" column shows an even larger discrepancy: FFTW
has two orders of magnitude more recent installs than Darcs (8,288 vs 92).
Please do continue trying to disprove my findings or try to build a
similarly objective and statistically meaningful study that contradicts
these
results. I have actually tried to do this myself but I am only finding more
and more data that substantiate my original conclusion that Haskell is not
yet a success in this context.
For example, MLDonkey alone is still seeing tens of thousands of downloads
every month from SourceForge:
http://sourceforge.net/project/stats/detail.php?group_id=156414&ugn=mldonkey&type=prdownload&mode=12months&package_id=0
I cannot find any software written in Haskell that gets within an order of
magnitude of that.
[ “Haskell's virginity” by Jon Harrop
http://flyingfrogblog.blogspot.com/2008/08/haskells-virginity.html
(a report on OCaml and Haskell use in linux)
]
First, i like to thank you for the informative study on a particular
aspect of OCaml and Haskell popularity. I think it is informative, and
the effort behind your report is to some degree non-trivial.
With that said, i do believe your opinions are often biased and tend
to peddle your products and website about OCaml.
In particular, i don't think the conclusion you made, that OCaml sees
an order of magnitude more use in the industry, is valid.
For example, many commercial uses of languages are not public. As an
example, Wolfram Research, the maker of Mathematica, sells more
Mathematica than all lisp companies combined.
This can be gathered from company size and financial records. However,
if you go by your methods, such as polling stats from linux distros,
or other means of checking stats among open source communities, you
won't find any indication of this.
Granted, your study is specifically narrowed to OpenSource project
queries. But there is still the fact of commercial, non-public use,
which is often far more serious and important. Open Source projects
are typically just happenstances of some joe hacker's enthusiasm about
a lang, and open source products' audiences again often come without
any serious considerations.
For example, your report contains FFTW, Unison, Darcs. Unison is a
two-way file syncing tool, which i personally use daily. Darcs is a
revision system for source code. FFTW, as i learned, is a lib for
fourier transforms. These tools are basically the results of some joe
hacker who happens to love or use a particular lang. Their users
basically use them because there aren't others around (or, in the case
of Darcs, because it uses a lang they like). The other tools listed in
your report: MLDonkey, Free Tennis, Planets, HPodder, LEdit, Hevea,
Polygen, i haven't checked what they are, but i think you'd agree they
basically fit into my description of FFTW, Unison, Darcs above.
Namely, some hobby programer happened to create a piece of software
that does X well above others, thus other hobby programers who happen
to need it, use it.
(the above is quickly written, i'm sure there are many flaws to pick
as i phrased it, but i think you get the idea)
In your report, you made a sort of conclusion like this:
«This led us to revisit the subject of Haskell's popularity and track
record. We had reviewed Haskell last year in order to ascertain its
commercial viability when we were looking to diversify into other
functional languages. Our preliminary results suggested that Haskell
was one of the most suitable functional languages but this recent news
has brought that into question.»
That remark is slightly off. You phrased it “Haskell's popularity and
track record”, but note that your stat is just a measure of some
popular tools on linux.
It is an exaggeration to present that as some sort of “track record”
of haskell, like a damnation. Haskell, for example, has a strong
academic background. So, a fair “track record” would also measure its
academic use.
You also used the word “popularity”. Again, popularity is a fuzzy
word, but in general it also connotes mindshare. Between Haskell and
OCaml, i doubt more programers have heard of or know about OCaml than
Haskell, and as far as mindshare goes, both are dwarfed by Lisp.
Then, you mentioned “commercial viability”. Again, what tools happened
to be cooked up by idle tech geekers in a particular lang does not have
much to do with “commercial viability”.
So, although i do find your report meaningful, with some force in
indicating how OCaml is more used in terms of the number of solid
tools among idle programers, i don't agree with your seemingly
overboard conclusion that OCaml is actually some order of magnitude
more used or popular than Haskell for serious projects.
This is a pure guess: i think whatever validity there is to OCaml
being more popular than Haskell in open source industrial use is
probably because OCaml has more industrial background than Haskell,
given the langs' histories.
------------------
On a tangent, many here accuse you of being a troll. As i mentioned, i
do find your posts tend to be divisive and to sell your website, but
considered against the whole of the newsgroup's posts, in particular
by many regulars, i don't think your posting behavior overall is in
any sense particularly bad.
Recently, i answered a post that used your name and your photo in its
groups.google.com profile. When i answered that post, i thought it
was from you. Thanks for letting me know otherwise.
(See:
Fake Jon Harrop post
http://groups.google.com/group/comp.lang.lisp/msg/43f971ff443e2ce5
Fake Jon Harrop profile
http://groups.google.com/groups/profile?enc_user=Cv3pMh0AAACHpIZ29S1AglWPUrDEZmMqdL-C5pPggpE8SFMrQg3Ptg
)
I think many tech geekers on newsgroups are just ignorant fuckfaces.
The guy who faked your identity, to the degree of using your photo in
his fake profile, perhaps thinks he's being humorous.
Xah
∑ http://xahlee.org/
☄
On Aug 27, 9:23 am, Jon Harrop <j...@ffconsultancy.com> wrote:
> parnell wrote:
> >>You appear to have neglected the Ubuntu popularity contest that account
> >>for an order of magnitude more people again.
>
> > True, Ubuntu has 683367 submissions. The data gets murkier once you
> > pick through it for instance the FFTW data that you cite
> > vote old recent no files
> > Package: fftw3 139 9884 27 165142
>
> > Where
> > vote - number of people who use this package regularly;
> > old - number of people who installed, but don't use this package
> > regularly;
> > recent - number of people who upgraded this package recently;
> > no-files - number of people whose entry didn't contain enough
> > information
> > (atime and ctime were 0).
>
> > So although we have ~185,000 installs of FFTW only 139 of those
> > installs are "used".
>
> The fftw3 package transitioned to libfftw3-3 a year ago:
>
> http://people.debian.org/~igloo/popcon-graphs/index.php?packages=fftw...
I don't need to; you just proved my point.
Your claim of "221,293 installs of popular OCaml software" is
misleading at best given that only 10,911 of the 184,574 installs of
FFTW (the 9,084 Ubuntu votes plus the 1,827 Debian votes) are actually
used.
Without FFTW it is not looking good for either Haskell or OCaml on the
Ubuntu side:
                          vote  old   recent  no-files
Package: mldonkey            7    56       2         1
Package: mldonkey-gui      262  4498     124         0
Package: mldonkey-server   830  5130     118         0
Package: unison            930  9214     277         7
Package: darcs             271  2634      92         1
Package: hpodder           157  2913      54         0
Sure, OCaml is slightly ahead, but your claim that "OCaml is 30×
more successful at creating widely-used open source software" does not
hold up.
I agree with the claim:
FFTW is 10x more successful than any other open source OCaml or
Haskell software package in the Debian/Ubuntu popcon.
I made the same mistake of trying to analyse the "recent" column but any
analysis of it is fraught with problems because it is strongly dependent
upon so many irrelevant variables. That data refers to the number of users
who upgraded a package within the past month. If you trend the data it
oscillates wildly and clearly cannot be proportional to the current number
of users (which would be a much smoother function) as one might have
expected:
http://people.debian.org/~igloo/popcon-graphs/index.php?packages=unison-gtk%2Cdarcs&show_recent=on&want_legend=on&from_date=&to_date=&hlght_date=&date_fmt=%25Y-%25m&beenhere=1
These wild variations are due to new package releases so this does not
reflect the number of current users at all.
One could argue that the maxima in this trend may be a more accurate
reflection of the current number of users but I believe even that is very
erroneous because of the 1 month window. For example, we only upgrade every
6-12 months.
So we cannot reasonably draw any relevant conclusions from this data, at
least not for Ubuntu where such trend data is not even available (AFAIK).
> Without FFTW it is not looking good for either Haskell or OCaml on the
> Ubuntu side:
> vote old recent no-files
> Package: mldonkey 7 56 2 1
> Package: mldonkey-gui 262 4498 124 0
> Package: mldonkey-server 830 5130 118 0
> Package: unison 930 9214 277 7
Ubuntu (columns: rank, package, inst, vote, old, recent, no-files):
1735 libfftw3-3 104150 9084 77412 8288 9366
4355 unison-gtk 10458 1380 8662 412 4
6242 mldonkey-gui 4817 264 4423 130 0
7652 darcs 2998 271 2634 92 1
7045 freetennis 3629 226 3293 110 0
6928 planets 3786 184 3511 90 1
7549 hpodder 3080 148 2870 62 0
9733 ledit 1686 106 1505 75 0
8616 hevea 2320 127 2149 44 0
8661 polygen 2289 111 2122 55 1
Debian (same columns):
1702 libfftw3-3 9449 1827 3550 2032 2040
3206 unison 2408 651 1609 148 0
4274 mldonkey-server 1305 705 527 72 1
4186 darcs 1367 240 701 426 0
10206 freetennis 250 31 189 30 0
9794 planets 271 31 187 53 0
8084 hpodder 385 144 214 27 0
4791 ledit 1037 142 758 137 0
7046 hevea 502 84 386 32 0
8292 polygen 368 68 273 27 0
> Package: darcs 271 2634 92 1
> Package: hpodder 157 2913 54 0
>
> Sure, OCaml is slightly ahead, but your claim that "OCaml is 30× more
> successful at creating widely-used open source software" does not
> hold up.
Due to the aforementioned problems those data are not an accurate reflection
of anything interesting and the conclusion stands.
> I agree with the claim:
> FFTW is 10x more successful than any other open source OCaml or
> Haskell software package in the Debian/Ubuntu popcon.
You are talking about binary packages. The core of the FFTW source code is,
of course, written in OCaml. Moreover, we can quantify the number of source
code installs, which certainly does explicitly include the OCaml code. For
Debian alone, we find:
847 fftw3 15976 2305 5802 2180 5689
So the OCaml source code to FFTW is being downloaded 2,180 times per month
by Debian users alone.
I certainly do peddle our products whenever possible. However, the data I
presented are verifiable and I would encourage anyone interested to repeat
the analysis themselves. There are many caveats in doing so but I think any
reasonable study will come to the same conclusion because the data are so
clear in this case.
> In particular, i don't think the conclusion you made, about how OCaml
> is one order of magnitude more used in the industry, is valid.
That was not my conclusion! I was careful to say that this refers only to
open source software (having examined two major Linux distros). I added
that this undermines my confidence in using Haskell commercially but that
is a personal opinion and not a justifiable conclusion.
> For example, many commercial uses of languages are not public. As an
> example, Wolfram Research, the maker of Mathematica, sells more
> Mathematica than any lisp companies combined.
> This can be gathered from company size and financial records. However,
> if you go by your methods, such as polling stats from linux distros,
> or other means of checking stats among open source communities, you
> won't find any indication of this.
Interestingly, Wolfram Research have used OCaml commercially as well.
> Granted, your study is specifically narrowed to OpenSource project
> queries. But there is still a fact about commercial, non-public use,
> which is often far more serious and important. Open Source projects
> are typically just happenstances of some joe hacker's enthusiasm about
> a lang, and an open source product's audience is again often not one
> with any serious considerations.
>
> For example, your report contains FFTW, Unison, Darcs. Unison is a
> two-way file syncing tool, which i personally use daily. Darcs is a
> revision system for source code. FFTW, as i learned, is a lib for
> fourier transform. These tools are basically the results of some joe
> hacker who happens to love or use a particular lang. Their users
> basically use them because there aren't others around (in the case of
> Darcs, because it uses a lang they like). The other tools listed in
> your report: MLDonkey, Free Tennis, Planets, HPodder, LEdit, Hevea,
> Polygen, i haven't checked what they are, but i think you'd agree they
> basically fit into my description of FFTW, Unison, Darcs above.
Some do but FFTW certainly does not. FFTW is the result of decades of work
by the world's foremost experts on the subject who received prestigious
awards for their groundbreaking work. FFTW is such a tremendous
achievement that many commercial users, including The MathWorks for MATLAB,
have paid to incorporate FFTW into their commercial products under license
from MIT.
MLDonkey has been one of the most prolific file sharing utilities ever, and
is believed to have had ~250,000 users at its peak. LEdit is a widely used
tool for command line editing. Hevea is a widely used LaTeX to HTML
translator.
So some of these programs are fun toys (like Planets) but many are really
serious tools.
> «This led us to revisit the subject of Haskell's popularity and track
> record. We had reviewed Haskell last year in order to ascertain its
> commercial viability when we were looking to diversify into other
> functional languages. Our preliminary results suggested that Haskell
> was one of the most suitable functional languages but this recent news
> has brought that into question.»
>
> That remark is slightly off. You phrased it "Haskell's popularity and
> track record", but note that your stat is just a measure of some pop
> tools among linux.
> It is an exaggeration to say that it's some sort of "track record" of
> haskell, like a damnation. Haskell, for example, has a strong academic
> background.
While the vast majority of Haskell's use appears to be academic, I am not
even sure that it is fair to say that Haskell has a "strong academic
background".
> So, a fair "track record" would also measure its academic
> use.
That is a question of perspective and I am interested in commercial
applications of these languages, of course.
> You also used the word "popularity". Again, popularity is a fuzzy
> word, but in general it also connotes mindshare. Between Haskell and
> OCaml, i doubt more programmers have heard of or know about OCaml than
> Haskell,
I agree.
> and as far as mindshare goes, both are dwarfed by Lisp.
I do not believe that.
> Then, you mentioned "commercial viability". Again, what tools happened
> to be cooked up by idle tech geekers in a particular lang does not have
> much to do with "commercial viability".
I disagree. The direct commercialization of FFTW is the most obvious counter
example but many of the other tools have direct commercial counterparts.
> So, although i do find your report meaningful, with some force in
> indicating how OCaml is more used in terms of the number of solid tools
> among idle programmers, i don't agree with your seemingly overboard
> conclusion that OCaml is actually some order of magnitude more used or
> popular than Haskell for serious projects.
The data speak for themselves, IMHO.
> This is a pure guess: i think any edge OCaml has over Haskell in
> open source industrial use is probably because OCaml has more
> industrial background than Haskell, given the langs' histories.
I certainly find the OCaml community to be very grounded in practicality and
the Haskell community to be intolerably theoretical.
Yes we finally agree.
> Due to the aforementioned problems those data are not an accurate reflection
> of anything interesting and the conclusion stands.
Conclusion stands? I don't think so. Remember what you just said: "we
cannot reasonably draw any relevant conclusions from this data".
If you examine the graph of the old vs the recent you see that they
are nearly the inverse of each other and as recent jumps old falls.
It would seem that these are caused by people who do not
otherwise use the packages updating the package and then as the
package remains unused it moves from the recent into the old category.
The voted installs seem to trend right along with the installs, so if
they are not "an accurate reflection of anything interesting" then
neither are the total installs in which case we are in complete
agreement.
> > I agree with the claim:
> > FFTW is 10x more successful than any other open source OCaml or
> > Haskell software package in the Debian/Ubuntu popcon.
>
> You are talking about binary packages. The core of the FFTW source code is,
> of course, written in OCaml. Moreover, we can quantify the number of source
> code installs, which certainly does explicitly include the OCaml code. For
> Debian alone, we find:
>
> 847 fftw3 15976 2305 5802 2180 5689
>
> So the OCaml source code to FFTW is being downloaded 2,180 times per month
> by Debian users alone.
Jon, I agree 100% that FFTW is an "OCaml" package; I did not mean to
imply otherwise. My point is that it is 10 times more popular than
any of the other packages that you included in your data. The other
OCaml and the Haskell packages are in the same ball park as far as
total installs or "votes" are concerned. So what one can conclude
from this is that the language that one chooses to create an open
source project in does not seem to matter nearly as much as what that
package will do (or really to be more accurate some unknown factor
that is not captured in the data).
Note that I was talking about the "recent" data that you analyzed and
not the "installed" data that was analyzed in the article that I cited.
> If you examine the graph of the old vs the recent you see that they
> are nearly the inverse of each other and as recent jumps old falls.
Yes.
> It would seem that these are caused by people who do not
> otherwise use the packages updating the package and then as the
> package remains unused it moves from the recent into the old category.
>
> http://people.debian.org/~igloo/popcon-graphs/index.php?packages=unis..
%2Cdarcs&show_installed=on&show_vote=on&show_old=on&show_recent=on&want_legend=on&from_date=&to_date=&hlght_date=&date_fmt=%25Y-%25m&beenhere=1
>
> The voted installs seem to trend right along with the installs,
I disagree. Indeed, over the past year the "voted" has fallen whereas
the "installs" have risen for Darcs according to the data you just cited.
From the data you cite, large fluctuations correlate between the "old"
and "recent" data and the "vote" column to a lesser degree but
the "installed" column (that I analyzed) does not exhibit these
fluctuations at all.
> so if
> they are not "an accurate reflection of anything interesting" then
> neither are the total installs in which case we are in complete
> agreement.
Had your previous assertion been true then I would agree but I cannot see
that this is the case. The "installed" data that I analyzed does not trend
with the "vote" data and does not correlate with the fluctuations in
the "recent" and "old" data. So this source of error did not affect my
analysis.
>> > I agree with the claim:
>> > FFTW is 10x more successful than any other open source OCaml or
>> > Haskell software package in the Debian/Ubuntu popcon.
>>
>> You are talking about binary packages. The core of the FFTW source code
>> is, of course, written in OCaml. Moreover, we can quantify the number of
>> source code installs, which certainly does explicitly include the OCaml
>> code. For Debian alone, we find:
>>
>> 847 fftw3 15976 2305 5802 2180 5689
>>
>> So the OCaml source code to FFTW is being downloaded 2,180 times per
>> month by Debian users alone.
>
> Jon, I agree 100% that FFTW is an "OCaml" package; I did not mean to
> imply otherwise. My point is that it is 10 times more popular than
> any of the other packages that you included in your data.
Yes.
> The other
> OCaml and the Haskell packages are in the same ball park as far as
> total installs or "votes" are concerned.
Unison also has 3x more installs than anything written in Haskell but my
original point was also that there are many more popular projects written
in OCaml than Haskell. Indeed, even excluding FFTW, open source OCaml
software on Debian+Ubuntu still has 7x more users than Haskell and that (to
me) is a very significant discrepancy. Moreover, it is an underestimate
because we have not counted several other popular OCaml projects like
MTASC, ADVI, HaXe and so on.
> So what one can conclude from this is that the language that one chooses
> to create an open source project in does not seem to matter nearly as much
> as what that package will do (or really to be more accurate some unknown
> factor that is not captured in the data).
That is only one of many possible explanations. For example, another
equally-plausible explanation is that Haskell is inherently incapable of
solving these problems in a usable way. I suspect the truth is somewhere in
between.
This (popularity of Haskell and OCaml) is completely off-topic in
comp.lang.lisp. Move to comp.lang.functional. You may advertise there.
> Some do but FFTW certainly does not. FFTW is the result of decades of
> work by the world's foremost experts on the subject who received
> prestigious awards for their ground breaking work. FFTW is such a
> tremendous achievement that many commercial users, including The
> MathWorks for MATLAB, have paid to incorporate FFTW into their
> commercial products under license from MIT.
FFTW is a superb piece of work, but I think that it's important to note
that its use of OCaml is strictly off-line and not performance critical.
The OCaml is used to generate C code, and is not actually run by anyone
but the FFTW developers. Everyone else just compiles and runs the
generated C code.
So the choice of OCaml comes down to familiarity and convenience for the
developers, rather than run-time performance. Given what it's doing (I
haven't looked at the OCaml code since the very first version came out,
so I've essentially forgotten it) I suspect that the pattern matching
functional nature was much more important than the type safety or
compiled performance. I suppose that Frigo and Johnson have written
about their choice of OCaml somewhere; there doesn't seem to be anything
on the FFTW web site, though. Skimming some of their papers, it's clearly a good
choice, because the expression simplifiers can be expressed quite neatly,
and matching seems to be used extensively. I suspect that Haskell or
Prolog or even one of the modern schemes (with structures and match)
could have been used just about as well, had the authors been so inclined.
Cheers,
--
Andrew
> hi Jon Harrop,
> [...]
>
> Recently, i answered to a post that used your name and your photo in
> groups.google.com's profile. When i answered that post, i thought it
> was from you. Thanks for letting me know otherwise.
>
> (See:
>
> Fake Jon Harrop post
> http://groups.google.com/group/comp.lang.lisp/msg/43f971ff443e2ce5
>
> Fake Jon Harrop profile
> http://groups.google.com/groups/profile?enc_user=Cv3pMh0AAACHpIZ29S1AglWPUrDEZmMqdL-C5pPggpE8SFMrQg3Ptg
> )
Thanks for reminding me. I enjoyed reading those also a second time.
Nicolas
Yes.
> and not performance critical.
No. OCaml's performance is as important here as it is in any other compiler:
compile time performance is important.
> The OCaml is used to generate C code, and is not actually run by anyone
> but the FFTW developers. Everyone else just compiles and runs the
> generated C code.
On the contrary, FFTW's performance stems from its ability to generate
custom codelets for transform lengths that the user values. Users do this
by running the OCaml code to generate more C. Hence the Debian source
package is downloaded more often than the binary package because users need
the OCaml source code.
> So the choice of OCaml comes down to familiarity and convenience for the
> developers, rather than run-time performance.
I do not believe so.
> Given what it's doing (I
> haven't looked at the OCaml code since the very first version came out,
> so I've essentially forgotten it) I suspect that the pattern matching
> functional nature was much more important than the type safety or
> compiled performance.
On the contrary, generating custom codelets is a very time consuming process
that required days of computation when FFTW was first released. Moreover,
the authors of FFTW would have repeated this process countless times to
obtain their results and perform thorough benchmarking of the run-time.
So OCaml's predictable and excellent performance was unquestionably of
critical importance to this high-profile project.
> I suppose that Frigo and Johnson have written
> about their choice of OCaml somewhere; there doesn't seem to be anything
> on the FFTW web site, though. Skimming some of their papers, it's clearly a good
> choice, because the expression simplifiers can be expressed quite neatly,
> and matching seems to be used extensively.
Yes.
> I suspect that Haskell or
> Prolog or even one of the modern schemes (with structures and match)
> could have been used just about as well, had the authors been so inclined.
Until there is evidence to support your hypothesis, we know only that OCaml
is capable of this.
The voted category and the installed do not show the wild fluctuations
of the recent or old categories. When you count the installed
category you are counting installs of software that is never used,
completely invalidating your analysis.
> The voted category and the installed do not show the wild fluctuations
> of the recent or old categories. When you count the installed category
> you are counting installs of software that is never used, completely
> invalidating your analysis.
Has it ever occurred to you that you are tirelessly arguing about a
statistic that is quite meaningless when you consider the original
question (popularity of programming languages)? Just asking.
The fact that your opponent can't fill up his day with anything
productive and thus has the time to engage in these inane arguments does
not mean that you should, too.
Tamas
> On the contrary, FFTW's performance stems from its ability to generate
> custom codelets for transform lengths that the user values. Users do
> this by running the OCaml code to generate more C. Hence the Debian
> source package is downloaded more often than the binary package because
> users need the OCaml source code.
No, all of the codelets are already generated and shipped with the source
distribution. These are tested for performance at run-time (or at least
when the wisdom is generated). The FreeBSD port of fftw3 does not have
ocaml as a compile or run-time dependency. Sure the ocaml source ships
with the distribution too, in case anyone cares to fiddle with it, I
suppose. Indeed, I have fftw3 on my system (it's a dependency of 18
other packages), but I don't have ocaml.
Maybe the Debian maintainers do things differently.
Cheers,
--
Andrew
But the mathematical factorial function is defined only on natural
numbers, hence this behavior is most likely incorrect.
Moreover, in dynamically typed languages like Lisp there is the risk
of having a statement like this buried deep in your program:
(if (or (long-and-complicated-computation-returned-true)
        (user-pressed-ctrl-alt-shift-tab-right-mouse-button))
    (setq bomb "goodbye")
    (setq bomb 42))
And then somewhere else a (fact bomb) makes your program crash and
your client angry.
> to get the same in C++ you would have to write:
>
> #include <rational.hxx>
> template <typename T> fact (T x){ return (x<=0)?1:x*fact(x-1);}
>
> ... fact(4),fact(4.0),fact(4.0d0),fact(Rational(5,2)) ...
>
> I hope Haskell is able to derive these instances automatically.
Why do you hope so, given that this behavior is likely to be
incorrect?
Anyway, Haskell infers a generic type if you don't provide type
declarations, but it's generally considered a good programming
practice to provide type declarations precisely to avoid such kinds of
behavior.
> But you can easily get even more genericity, either using non CL
> operators that you can easily redefine (eg. as lisp generic
> functions), by shadowing them to the same effect, or by using
> functions passed in argument. CL:SORT is generic because it takes a
> LESSP function, so it can be applied on sequences of any kind of
> object.
>
> So what statically checked type system proponents detract as duck
> typing is actually that lisp works in the Generic Mode gears by
> default.
I'm not sure about Common Lisp, but in Scheme, while arithmetic
operators are overloaded, most library function aren't (there are
different procedures for doing the same operation on lists, vectors,
strings and streams, for instance), hence I wouldn't consider the
language particularly generically typed.
> --
> __Pascal Bourguignon__ http://www.informatimago.com/
> I need a new toy.
> Tail of black dog keeps good time.
> Pounce! Good dog! Good dog!
No, *some* codelets are generated and shipped, but by no means all (infinitely many)
of them. This is precisely the purpose of the genfft tool: you can use it
to generate new codelets that are not bundled in order to improve
performance for certain lengths of transform.
That was not the original question.
The Lisp program will not crash. It will display an error,
dump an error report or send mail. The parts of the Lisp
program that don't have the bug will continue to run just fine.
If you want you can make the program send an error
report and a backtrace to the developer. He can send
a fix back, the client can load it and continue
to work without ever leaving his program.
It is quite fascinating to see how clients react to incremental
fixes that can be loaded - instead of waiting for a complete
new build or a bug-fix release of the whole software.
You can often see Lisp vendors shipping releases and then for
years only giving small patches to the customers. Often the
patches can be loaded into a running system.
For example the Lisp based server I'm using does not crash
every other day. I update the software from time to time
mostly while the server runs. When there is some problem
the server sends a mail with the error condition and
a backtrace. I log into the machine, connect to the
running software and fix the problem while it is running.
Actually problems are much better to debug, since all the
data and the software information (typed self-identifying objects,
documentation, etc.) is still there.
Lisp has been perfected over the years to support exactly this:
handling errors safely at runtime. In the last years
64bit Lisp systems have been used with very large datasets.
Crashing is no option. Handling errors is.
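A minimal sketch of the idea (in a real system the handler would mail
the backtrace to the developer; here FORMAT stands in for that):
(defun fact (x) (if (<= x 0) 1 (* x (fact (1- x)))))
(let ((bomb "goodbye"))            ; the ill-typed value from above
  (handler-case
      (fact bomb)                  ; (<= "goodbye" 0) signals a TYPE-ERROR
    (type-error (condition)
      ;; report and keep going instead of terminating the program
      (format t "recovered from: ~a~%" condition)
      :skipped)))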
Common Lisp has lots of limited generic functions. For example
there are operations that work on sequences. Sequences are
for example vectors and lists.
Common Lisp also has CLOS, the Common Lisp Object System,
where 'generic functions' are a basic building block.
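For example (a small sketch; the CIRCLE class is made up):
;; Sequence functions work on lists, vectors and strings alike:
(reverse '(1 2 3))   ; => (3 2 1)
(reverse #(1 2 3))   ; => #(3 2 1)
(reverse "abc")      ; => "cba"
;; CLOS generic functions dispatch on the classes of their arguments:
(defclass circle () ((radius :initarg :radius :reader radius)))
(defgeneric area (shape))
(defmethod area ((c circle)) (* pi (expt (radius c) 2)))
(area (make-instance 'circle :radius 2.0))   ; => ~12.566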
>
> > --
> > __Pascal Bourguignon__ http://www.informatimago.com/
> > I need a new toy.
> > Tail of black dog keeps good time.
> > Pounce! Good dog! Good dog!
The program will not crash with a segfault like C/C++ programs do, but
usually it will still terminate abruptly giving an error message which
is probably unintelligible to the user.
Even if you design your programs to catch the error, send a bug report
and try to keep running, probably there will be some loss of
functionality.
Which, in the specific case under discussion, wouldn't happen in a
statically typed language.
The proportion of unused installs of OCaml software would have to be an
order of magnitude higher than for Haskell software to account for the
observed discrepancy. That is clearly not realistic but let's quantify it
nonetheless.
The proportion of "vote" to "install" for the various packages is currently:
libfftw3-3 9.6%
unison-gtk 15.8%
mldonkey-gui 15.8%
darcs 11.7%
freetennis 6.6%
planets 5.3%
hpodder 8.4%
ledit 9.1%
hevea 7.5%
polygen 6.7%
As you can see, there is no evidence of any discrepancy at all, let alone
the order of magnitude discrepancy required to justify your assumption.
Why should it terminate? It will continue to run in most cases.
Why should the error message be unintelligible? The programmer
has full control over error handling and can present any type
of error user interface she likes.
>
> Even if you design your programs to catch the error, send a bug report
> and try to keep running, probably there will be some loss of
> functionality.
Sure. But you might want to consider that Erlang for example
is dynamically typed and designed to run extremely
complex telco software with zero downtime. There
are applications where a loss of some functionality
can be tolerated, but not the crash of the whole software.
Errors are isolated and fixed at runtime. Shutting down
a central switching system of a telco with many
customers affected is no option. Patching the running
system in a controlled fashion is an option.
Isn't it ironic that dynamically typed software
is at the heart of extremely demanding
applications with zero downtime?
The same is true for some Lisp systems. Before Erlang existed,
AT&T / Lucent built similar systems (ATM switches) in Lisp.
It was also designed for zero downtime. The switching nodes
were running LispWorks on some embedded systems.
>
> Which, in the specific case under discussion, wouldn't happen in a
> statically typed language.
Check this:
RJMBP:~ joswig$ sbcl
This is SBCL 1.0.16, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.
SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses. See the CREDITS and COPYING files in the
distribution for more information.
* (defun foo (a) (let ((bar 3)) (setf bar "baz") (+ bar a)))
; in: LAMBDA NIL
; (+ BAR A)
;
; note: deleting unreachable code
;
; caught WARNING:
; Asserted type NUMBER conflicts with derived type
; (VALUES (SIMPLE-ARRAY CHARACTER (3)) &OPTIONAL).
; See also:
; The SBCL Manual, Node "Handling of Types"
;
; compilation unit finished
; caught 1 WARNING condition
; printed 1 note
FOO
*
Looks like Lisp has detected that I want to set a variable
to a string and later want to use it in an addition.
> On 20 Ago, 02:07, p...@informatimago.com (Pascal J. Bourguignon)
> wrote:
>> DeverLite <derby.lit...@gmail.com> writes:
>> > [...] Although very flexible, and good at catching errors at compile
>> > time, I found the typing system in Haskell sometimes got in the way.
>> > Lisp let's you declare types, but if you don't want to you don't have
>> > to, and this tends to make it more flexible than Haskell.
>>
>> Actually, what the lisp type system means is that in lisp we program
>> in generic mode by default.
>>
>> When you write:
>>
>> int fact(int x){ return (x<=0)?1:x*fact(x-1); }
>>
>> it's statically typed, but it's also SPECIFICALLY typed.
>>
>> When you write in lisp:
>>
>> (defun fact (x) (if (<= x 0) 1 (* x (fact (1- x)))))
>>
>> it's already a generic function (not a lisp generic function,
>> technically, but what other programming languages call a generic
>> function):
>>
>> (mapcar 'fact '(4 4.0 4.0l0 5/2))
>> --> (24 24.0 24.0L0 15/8)
>
> But the mathematical factorial function is defined only on natural
> numbers, hence this behavior is most likely incorrect.
Please, explain how something that is outside of the scope encompassed
by a mathematical definition can be "incorrect"?
"Blue" is not defined by the mathmatical factorial function
definition. Does that make "blue" incorrect?
> Moreover, in dynamically typed languages like Lisp there is the risk
> of having a statement like this buried deep in your program:
>
> (if (or (long-and-complicated-computation-returned-true) (user-pressed-
> ctrl-alt-shift-tab-right-mouse-button))
> (setq bomb "goodbye")
> (setq bomb 42))
>
> And then somewhere else a (fact bomb) makes your program crash and
> your client angry.
But strangely enough, lisp programs crash less often than C and C++
programs...
> I'm not sure about Common Lisp, but in Scheme, while arithmetic
> operators are overloaded, most library function aren't (there are
> different procedures for doing the same operation on lists, vectors,
> strings and streams, for instance), hence I wouldn't consider the
> language particularly generically typed.
Indeed, scheme has almost no generic functions. A shame.
--
__Pascal Bourguignon__ http://www.informatimago.com/
PLEASE NOTE: Some quantum physics theories suggest that when the
consumer is not directly observing this product, it may cease to
exist or will exist only in a vague and undetermined state.
Who said that his definition is about the "mathematical factorial function"?
> Moreover, in dynamically typed languages like Lisp there is the risk
> of having a statement like this buried deep in your program:
>
> (if (or (long-and-complicated-computation-returned-true) (user-pressed-
> ctrl-alt-shift-tab-right-mouse-button))
> (setq bomb "goodbye")
> (setq bomb 42))
The risk is very low, unless you have total retards working on your code
base.
Pascal
--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
You don't seem to be well-informed about what's possible in good dynamic
languages.
That is nonsense. The factorial function has generalizations which behave like
factorial for the natural numbers.
Remove foot from mouth and visit:
http://en.wikipedia.org/wiki/Gamma_function
Of course, the above function isn't the gamma function. That's beside the
point; we could fix FACT so that it computes the gamma in the general case, but
is optimized using the factorial for special cases.
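A sketch of such a fix (LNGAMMA is assumed to come from some numerics
library; Common Lisp itself does not provide a gamma function):
(defun fact (x)
  (if (and (integerp x) (>= x 0))
      ;; exact factorial for the special case of natural numbers
      (let ((result 1))
        (loop for i from 2 to x do (setf result (* result i)))
        result)
      ;; general case: gamma(x+1) generalizes x!
      (exp (lngamma (+ x 1)))))   ; LNGAMMA: hypothetical library call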
> Moreover, in dynamically typed languages like Lisp there is the risk
> of having a statement like this buried deep in your program:
Bombs can be buried in code written in any programming language.
> (if (or (long-and-complicated-computation-returned-true) (user-pressed-
> ctrl-alt-shift-tab-right-mouse-button))
> (setq bomb "goodbye")
> (setq bomb 42))
You can use any mainstream programming language to hide an easter egg that is
activated by ctrl-alt-shift-tab-right-mouse-button, and other conditions.
> And then somewhere else a (fact bomb) makes your program crash and
> your client angry.
This is false. The program will not crash, but rather signal a condition, which
is something different.
Type is simply being treated as a run-time value.
Yes, when some run-time object has an invalid value, that is a programming bug.
Congratulations on figuring that out!
If types are not allowed to be run-time values, programmers invent ad-hoc type
systems which use existing value representations (such as integral
enumerations) as type.
For instance, in the BSD operating system, open files are represented by a
``struct vnode''. That's the C language type, but there is a ``v_type'' field
which can be, for instance, VDIR.
Faced with the static programming language, the programmers have hacked up an
ad-hoc, fragile form of dynamic typing which /really/ breaks if a critical type
check is missing. There is no recoverable condition. What happens is that the
kernel crashes, or worse: the filesystem becomes corrupt.
Of course, the Lisp kernel is built on a foundation of static typing rigidity.
Just like the BSD kernel, thanks to C static type checking, will not (in the
absence of memory corruption or bad casts) mistakenly treat the ``v_type''
field of a struct vnode as anything but a value of type ``enum vtype'', the Lisp will
correctly, reliably treat the bits of a value which represent its type.
If the given Lisp system encodes some type information in the top three bits of
the word representing a value, then by golly, all of its operations will
reliably store and retrieve that type information in those top three bits.
So you see there is the same kind of low-level static typing going on in Lisp,
and similar languages, just under the hood.
The programmer-visible types in Lisp are a higher level concept, because it's a
higher level language. They are simply an additional kind of domain value
attributed to an object. They are not like type in a static language, even
though they express the same concept in simple-minded programs which
implement a design that is also directly implementable in a static language.
Dynamic typing allows an entire class of maintainable, clear, useful program
/designs/ to be coded in the obvious, straightforward way, and safely executed
in the presence of run-time checks.
Static typing which rejects the straightforward implementation of these designs
nevertheless allows the programmer to implement these designs in a different
way, one which is not so straightforward, and not well supported (or at all) in
the language. Not only can this be done, but it is regular practice.
> I'm not sure about Common Lisp, but in Scheme ...
Trolling asshole, since you're sending articles to comp.lang.lisp rather than
comp.lang.scheme, maybe you should take a few steps to become sure.
> Right. Thanks for the clarification. I didn't mean to imply that there
> was no typing by default in lisp: just that it wasn't something you
> had to *declare*.
You don't have to declare it in Haskell. E.g. save this as Test.hs:
module Test where
import Data.Ratio
fact x = if x <= 0 then 1 else x * fact (x-1)
and then you can test it with a Haskell REPL, like GHCi:
*Test> :load Test
[1 of 1] Compiling Test ( Test.hs, interpreted )
Ok, modules loaded: Test.
*Test> :type fact
fact :: (Num a, Ord a) => a -> a
*Test> fact 4
24
*Test> fact 4.0
24.0
*Test> fact 4::Double
24.0
*Test> fact (5%2)
15%8
*Test> take 10 [fact x | x <- [1..]]
[1,2,6,24,120,720,5040,40320,362880,3628800]
I really like the last example :-)
--
Frank Buss, f...@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
I said *most likely* incorrect.
That behavior is correct if it's what you want to get, but when
somebody writes or uses a procedure which calculates the factorial
function he or she probably means it to be used on natural numbers.
Maybe I'm not very well informed, but I'm not aware of any application
of your function with arguments that are anything but natural numbers.
(The only partial exception I can think of is using it on floats
restricted to mantissa-length integer values, which can be used as
53-bit integers on systems which support 64-bit floats but not 64-bit
integers. But on Lisp systems these low-level issues are probably
irrelevant.)
> "Blue" is not defined by the mathmatical factorial function
> definition. Does that make "blue" incorrect?
>
> > Moreover, in dynamically typed languages like Lisp there is the risk
> > of having a statement like this buried deep in your program:
>
> > (if (or (long-and-complicated-computation-returned-true) (user-pressed-
> > ctrl-alt-shift-tab-right-mouse-button))
> > (setq bomb "goodbye")
> > (setq bomb 42))
>
> > And then somewhere else a (fact bomb) makes your program crash and
> > your client angry.
>
> But strangely enough, lisp programs crash less often than C and C++
> programs...
C is statically typed but type-unsafe, among other kinds of unsafe
"features" it has.
C++ is as unsafe as C and, due to its object orientation model, not
completely statically typed, but the language doesn't insert any
runtime dynamic type checks, so this is another source of unsafety.
Java has an OO model similar to that of C++ (but simplified) and it's
type-safe.
Runtime type errors can occur when attempting to cast the reference
of a given class type to a reference typed with one of its subclasses.
I don't have statistics, but I presume that this happens less
frequently than type errors in completely dynamically typed languages
like Lisp or Python.
(I'm not claiming that Java is better than Lisp or Python, I'm just
comparing this aspect).
> C is statically typed but type-unsafe, among other kinds of unsafe
> "features" it has.
> C++ is as unsafe as C and, due to its object orientation model, not
> completely statically typed, but the language doesn't insert any
> runtime dynamic type checks, so this is another source of unsafety.
Do you have a C++ example where object orientation makes it not completely
statically typed? And for runtime dynamic checks in C++, take a look at
http://www.google.com/search?q=c%2B%2B+rtti
Static type systems that require declarations are not state-of-the-art. A
statically typed language isn't defined as one which requires everything to be
declared prior to use. The requirement for explicit declarations merely
reflects 1960's state-of-the-art in static typing (even though it happens to be
a feature of some programming languages designed in the 1990's).
Some of the reasons that C is type unsafe are very good, to the point that a
language which doesn't allow the same flexibility is a useless pile of crap
for any real-world software development.
> C++ is as unsafe as C and, due to its object orientation model, not
> completely statically typed, but the language doesn't insert any
> runtime dynamic type checks, so this is another source of unsafety.
That is false. C++ has dynamic checks in the dynamic_cast conversion
operator. If a class pointer is converted using dynamic_cast, the result may be
a null value, which must be checked by the program. If a reference is
converted, then a bad_cast exception may be thrown.
Of course, you can use an unsafe conversion operator which doesn't have checks.
> Java has an OO model similar to that of C++ (but simplified) and it's
> type-safe.
From Java, you can call a native platform function. That function can do
anything: crash your kernel, reformat your hard drive, blow up your video
card with bad parameters, etc.
No language whose definition allows for some kind of undefined behavior
(in the place of which an extension may be provided) can be called safe.
Without the ability to call native platform functions, a programming
environment is a crippled pile of shit, unsuitable for anything but coding
academic examples.
> I don't have statistics, but I presume that this happens less
> frequently than type errors in completely dynamically typed languages
> like Lisp or Python.
Right, you don't have statistics, but you presume.
> (I'm not claiming that Java is better than Lisp or Python, I'm just
> comparing this aspect).
I.e. comparing your lack of statistics about Java to your lack of statistics
about Lisp and python.
By "statically typed" I mean that the compiler is always able to prove
which types of values a variable, parameter or expression represent.
class Foo {...};
class Bar : public Foo {...};
class Baz : public Foo {...};
...
Foo *a;
if (unpredictableAtCompileTime()) {
    a = new Bar();
} else {
    a = new Baz();
}
Which type of object is 'a' pointing to?
> And for runtime dynamic checks in C++, take a look at http://www.google.com/search?q=c%2B%2B+rtti
Ok, but it's an optional feature.
> --
> Frank Buss, f...@frank-buss.de, http://www.frank-buss.de, http://www.it4-systems.de
My example was specifically written to make the flaw apparent, but are
you sure that these situations are rare in practice?
Have you ever seen a program which manages a collection of objects of
heterogeneous type, which isn't known at compile-time?
Now let's assume there is an operation done infrequently on some
objects of that collection, which works on all the possible types
except an infrequently occurring one.
Maybe the guy who wrote that operation wasn't aware of the existence
of that type, or wasn't thinking about it, or that type was added
later by someone not aware of the existence and contract of that
operation. It doesn't seem a very unlikely scenario does it?
Anyway the result will be an infrequent runtime type error which will
probably escape the testing.
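Sketched in Lisp, as a made-up minimal example:
(defun describe-item (item)
  (etypecase item        ; signals a TYPE-ERROR for unhandled types
    (integer (format nil "integer: ~a" item))
    (string  (format nil "string: ~a" item))))
;; Fine during testing...
(mapcar #'describe-item (list 1 "two"))
;; ...until a FLOAT, added later, shows up at runtime:
(mapcar #'describe-item (list 1 "two" 3.0))   ; error escapes testing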
I'm aware of that function, which isn't what Pascal's code calculates.
> Of course, the above function isn't the gamma function.
Indeed.
> That's beside the
> point;
No, that's the point: you made a useless remark.
> we could fix FACT so that it computes the gamma in the general case, but
> is optimized using the factorial for special cases.
In which case it would be desirable to have two different definitions:
one for integers and one for reals, which means that generic
programming would be of no use.
> > Moreover, in dynamically typed languages like Lisp there is the risk
> > of having a statement like this buried deep in your program:
>
> Bombs can be buried in code written in any programming language.
>
> > (if (or (long-and-complicated-computation-returned-true) (user-pressed-
> > ctrl-alt-shift-tab-right-mouse-button))
> > (setq bomb "goodbye")
> > (setq bomb 42))
>
> You can use any mainstream programming language to hide an easter egg that is
> activated by ctrl-alt-shift-tab-right-mouse-button, and other conditions.
You missed the point.
That was not an example of an easter egg; it was meant to illustrate
that type errors can occur in response to external input or complex
calculations in dynamically typed systems.
> > And then somewhere else a (fact bomb) makes your program crash and
> > your client angry.
>
> This is false. The program will not crash, but rather signal a condition, which
> is something different.
Ok. Let's say it will malfunction, ok?
Scheme is Lisp.
The importance of the factorial function in production code is grossly
overestimated. ;)
Java is a language that serves as a good reference for illustrating how
static type systems can _introduce_ new sources of bugs which don't
exist in dynamically typed languages:
http://p-cos.net/documents/dynatype.pdf
Whether a type system is dynamic or not doesn't tell you a lot. You need
a _good_ type system. For example, one that gives something close to a
half when you divide one by two, or something that doesn't silently wrap
around when you get beyond 32 bits. Few type systems have these
characteristics.
Pascal
What if they are rare: How would you recognize that?
> Have you ever seen a program which manages a collection of objects of
> heterogeneous type, which isn't know at compile-time?
Yep, I use them quite a lot actually.
> Now let's assume there is an operation done infrequently on some
> objects of that collection, which works on all the possible types
> except an infrequently occurring one.
>
> Maybe the guy who wrote that operation wasn't aware of the existence
> of that type, or wasn't thinking about it, or that type was added
> later by someone not aware of the existence and contract of that
> operation. It doesn't seem a very unlikely scenario does it?
>
> Anyway the result will be an infrequent runtime type error which will
> probably escape the testing.
That's all just speculation.
Where do you see the masses of experience reports and anecdotes how such
errors _actually_ screw up software systems?
The most often reported screw up is buffer overruns, which can be
exploited in security attacks. Such overruns are best handled by dynamic
checks.
You see the pattern?
Pascal
This is almost the same as asking, ``Have you worked in the software
industry for any period within the last twenty years?''
Yes, there are programming errors that can be caught by static type
checking. I don't think anyone is disputing that.
But there are also plenty of errors that cannot be caught by static
typing. Any substantial program these days, even in a statically
typed language, has (or should have) numerous calls to `assert' or the
equivalent, checking preconditions of operations at runtime that
cannot be checked statically. Often, a formal proof that a particular
assertion is not violated would be quite difficult to produce; there
certainly is no static analyzer out there that can generate such
proofs in very many cases.
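For instance, a minimal sketch of such a runtime precondition
(NTH-SMALLEST is just a made-up example):
(defun nth-smallest (n sorted-vec)
  ;; Precondition: SORTED-VEC is sorted in ascending order. A typical
  ;; static type system cannot express this, so it is checked at runtime.
  (assert (loop for i from 1 below (length sorted-vec)
                always (<= (aref sorted-vec (1- i)) (aref sorted-vec i)))
          (sorted-vec)
          "NTH-SMALLEST requires a sorted vector.")
  (aref sorted-vec n))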
So although static typing does catch some errors, and can be useful,
it can't catch all of them. Furthermore, the ones it does catch are
those that are relatively easy to find anyway because they primarily
show up as violations of local constraints. Those that require deep
analysis, and particularly those requiring inductive proofs, are out
of its reach.
Static typing advocates want to make this a difference of kind, but
it's really just a difference of degree. With static typing, you
invest more effort up front and accept some limitations on the way you
can write your program, and in exchange the compiler finds some bugs
for you. It's a tradeoff. I'm not even saying it's never a trade
worth making. But in the end you still have to write interface
documentation to (try to) make sure that clients of your interface
don't abuse it.
-- Scott
>>
>> When you write in lisp:
>>
>> (defun fact (x) (if (<= x 0) 1 (* x (fact (1- x)))))
>>
But why would you write it like that?
I would write
(defun fact (n)
  (check-type n (integer 0 *))
  (when (< n 2) (return-from fact 1))
  ;; ITER and MULTIPLYING come from the iterate library
  (iter (for i from 2 to n) (multiplying i)))
The function you wrote is underspecified.
It assumes that x can be a negative number for instance and thus does not
comply with the mathematical definition.
Also it is inefficient.
(I assume you meant (- x 1). 1- only works for integers and would signal an
error.)
For that matter you could write
float fact (float n)
{
return n * fact(n - 1);
}
--------------
John Thingstad
> På Thu, 28 Aug 2008 17:33:25 +0200, skrev Vend <ven...@virgilio.it>:
>
>>>
>>> When you write in lisp:
>>>
>>> (defun fact (x) (if (<= x 0) 1 (* x (fact (1- x)))))
>>>
>
> But why would you write it like that?
> I would write
>
> (defun fact (n)
> (check-type n (integer 0 *))
> (when (< n 2) (return-from fact 1))
> (iter (for i from 2 to n) (multiplying i)))
>
> The function you wrote is underspecified.
Yes, that was my point: it doesn't need any overspecification of the
types, and by avoiding such overspecification it is quite a generic
function, something you would have to write with much heavier
machinery in languages such as C++.
> It assumes that x can be a negative number for instance and thus does
> not comply with the mathematical definition.
It's just an example, and doesn't mean to match any mathematical
definition.
By the same token the apparent use of CL operators such as + and <= is
just a basic example, which restricts the genericity since users
cannot redefine CL operators, but which give already enough genericity
(amongst the different kind of numbers) that I hoped my point would be
clear.
A similar function using only user defined operators would be even
more generic, since all these user defined operators could be
_redefined_ or defined as generic functions in the first place.
I have not mentioned in which package, and with which used packages,
this fact form was read; it could as well be in a package where all
the operators are shadowed from the CL package and are actually
user-defined generic functions.
Perhaps this could be qualified as meta-genericity; you could try to
process:
> float fact (float n)
> {
> return n * fact(n - 1);
> }
with any kind of #define macros, you would never be able to apply that
to polynomials, to colors or to chords. But it would be trivial with
the lisp form.
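Here is a minimal sketch of that shadowing setup (only the numeric
fallback methods are shown; methods for polynomials, colors or chords
would be added the same way):
(defpackage :generic-arith
  (:use :cl)
  (:shadow #:* #:<= #:1-))
(in-package :generic-arith)
;; The shadowed operators become ordinary CLOS generic functions...
(defgeneric * (x y))
(defgeneric <= (x y))
(defgeneric 1- (x))
;; ...whose methods on NUMBER fall back to the standard operators:
(defmethod * ((x number) (y number)) (cl:* x y))
(defmethod <= ((x number) (y number)) (cl:<= x y))
(defmethod 1- ((x number)) (cl:1- x))
;; FACT itself is unchanged, but is now extensible to any class that
;; implements the three operators:
(defun fact (x) (if (<= x 0) 1 (* x (fact (1- x)))))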
> Also it is inefficient.
Granted, but this is not what is discussed here.
> (I assume you meant (- x 1). 1- only works for integers and would
> signal an error.)
(mapcar '1- '( 3 3/2 3.0 #C(3 3))) --> (2 1/2 2.0 #C(2 3))
> For that matter you could write
>
> float fact (float n)
> {
> return n * fact(n - 1);
> }
and again, the _meaning_ of that C fact, is EXTREMELY different from
the _meaning_ of this Lisp fact:
(defun fact (x) (if (<= x 0) 1 (* x (fact (1- x)))))
--
__Pascal Bourguignon__ http://www.informatimago.com/
"You question the worthiness of my code? I should kill you where you
stand!"
Which affect programs written in C and C++.
> Such overruns are best handled by dynamic
> checks.
They are best handled by a combination of static and dynamic checks,
put in place by the compiler. Leaving array access unchecked, like
C/C++ does, is the worst way of doing it. It was justifiable given the
original design goal of C: a semi-portable efficient mid-level
language for OS programming, but it's a shame that it still remains in
modern C++.
Your paper lists three anti-patterns.
The first one, method stubs inserted due the need to define all
abstract methods in non-abstract classes, doesn't seem an anti-pattern
to me.
It actually forces the programmer to think about the methods that can
be called on a class.
Sure, it's possible to implement a stub and then forget about it, but
I think that's far less likely than forgetting to implement a method
in a language that doesn't complain about it.
(Anyway, you are right that the stub generated by Eclipse is bad).
The second issue, checked exceptions, is Java specific as far as I
know, and doesn't appear to be particularly related to static vs.
dynamic typing.
The third example is flawed:
First of all you present it as a flaw of static typing while it's
actually due to dynamic typing introduced by polymorphism. If Java was
fully statically typed, all occurrences of 'instanceof' would be
resolved at compile-time.
In fact, that kind of race condition can happen in dynamically typed
languages, which, if the language runtime is thread-safe, would raise
an exception equivalent to ClassCastException, otherwise would cause
unspecified and unsafe behavior.
Second, that pattern is an example of bad programming since in
multithreading environments you should always synchronize before
accessing a shared mutable resource.
> Whether a type system is dynamic or not doesn't tell you a lot. You need
> a _good_ type system. For example, one that gives something close to a
> half when you divide one by two, or something that doesn't silently wrap
> around when you get beyond 32 bits. Few type systems have these
> characteristics.
Agreed.
<snip>
That's what you think. I think something else. Now who's right? Hint:
We're all only guessing.
> (Anyway, you are right that the stub generated by Eclipse is bad).
...and the static type system is happy after what Eclipse generated. So
the combination of a static type system and a development environment
lures programmers into thinking he's safe, and nobody forced anyone to
think about anything.
> The second issue, checked exceptions, is Java specific as far as I
> know, and doesn't appear to be particularly related to static vs.
> dynamic typing.
A static type system classifies expressions according to one or more
static types. Java's exception system provides a classification into
whether an expression throws a checked exception or not. Hence, it's a
static type system. (Including the fact that if the type checker cannot
classify an expression into one of the two categories, it will reject
the corresponding program.)
This may not be a good type system, but it's a static type system
nonetheless, a widely used one, and at least once propagated as a very
good idea to include in a programming language by static thinkers.
> The third example is flawed:
>
> First of all you present it as a flaw of static typing while it's
> actually due to dynamic typing introduced by polymorphism. If Java was
> fully statically typed, all occurrences of 'instanceof' would be
> resolved at compile-time.
> In fact, that kind of race condition can happen in dynamically typed
> languages, which, if the language runtime is thread-safe, would raise
> an exception equivalent to ClassCastException, otherwise would cause
> unspecified and unsafe behavior.
>
> Second, that pattern is an example of bad programming since in
> multithreading environments you should always synchronize before
> accessing a shared mutable resource.
It's funny. Everybody agrees with at least one of the pattern and
disagrees with at least one other pattern. But it's always different
ones. (And this is independent whether they are supporters of static or
dynamic languages.)
Go figure.
As I said, we're all just guessing. Don't try to sell this as rational
thought. That's all I'm asking for...
Pascal was a language that checked array boundaries statically. If I
recall correctly, that was a serious failure in language design. Most
languages I am aware of perform array boundary checks dynamically, and
for good reason.
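In Common Lisp, for example (assuming an implementation running with
safety checks enabled, as is the usual default), an out-of-bounds
access signals a condition that can be handled rather than corrupting
memory:
(handler-case
    (aref (make-array 3) 5)         ; out-of-bounds access
  (error (c)
    (format t "caught: ~a~%" c)))   ; handled; the program keeps running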
Bet you've never held down a serious software job.
What exactly are your credentials?
> Leaving array access unchecked like C/C++ do
Under-informed idiot, C++ has higher level arrays in the std::vector template.
Of course, it retains compatibility with C by supporting C-like arrays.
In C++ you can easily program in a style which makes buffer overflows
impossible.
> is the worst way of doing it. It was justifiable given the
> original design goal of C: a semi-portable efficient mid-level
> language for OS programming, but it's a shame that it still remains in
> modern C++.
Doh, without its high level of C compatibility, C++ wouldn't be what it is.
C++ code can directly use platform interfaces which are defined in C, if only a
little care is taken in the header files (which is common practice nowadays).
In C++ you can directly #include some platform header, POSIX, Win32 or
whatever, and link to the corresponding library. The functions in that library
might use C style arrays. For instance, if you call uname on a Unix-like
system, the struct utsname structure contains C arrays. You can use that
directly from C++, arrays and all, without having to deal with any kind of
clunky foreign function wrapping.
Go back on kook medication.
I fail to see a problem here.
The obvious answer is that 'a' is pointing to an object of type Foo. For
any of the purposes for which one wants to use 'a', that should be all
that one needs to know. In fact, I would presume from this code that
the express intent of the programmer is to treat the object as being of
type Foo, which it in fact is.
So where's the problem? Since we have explicitly declared our intention
to not care which particular subclass is used, why is there any need to
worry about the issue?
You only get into trouble when you try to treat the object as being of a
different type. But doing that is a bad idea regardless of your
language.
--
Thomas A. Russ, USC/Information Sciences Institute
Did I say there was a problem?
> The obvious answer is that 'a' is pointing to an object of type Foo.
Right. But Foo is not the most specific type of the object 'a' is
pointing to. The most specific type is not knowable to the compiler,
hence the language is not completely statically typed.
Un(i)typed and dynamically typed languages can be seen as an extreme
case of this situation: all variables are of the same type which can
be seen as a base class of any value type.
Are you attempting to mount an argument from authority?
> > Leaving array access unchecked like C/C++ do
>
> Under-informed idiot, C++ has higher level arrays in the std::vector template.
Which isn't the language's default implementation of arrays.
Moreover, the [] operator of std::vector isn't required to perform
boundary checks.
> Of course, it retains compatibility with C by supporting C-like arrays.
It uses C-like arrays as its default and native implementation of
arrays; that's more than just retaining compatibility.
A compatibility that isn't complete anyway: an array allocated with
'new' can't be safely deallocated with 'free()' and an array allocated
with 'malloc()' can't be safely deallocated with 'delete []'.
And anyway I presume that it wouldn't be so difficult to add array
bounds checking to C and C++.
All heap-allocated arrays already contain length information in some
form to allow deallocation, so one would just need to add length
information for stack-allocated and global arrays in a consistent
representation.
I presume that the main problem would be preserving compatibility with
existing binary interfaces.
> In C++ you can easily program in a style which makes buffer overflows
> impossible.
Yes, in C too.
> > is the worst way of doing it. It was justifiable given the
> > original design goal of C: a semi-portable efficient mid-level
> > language for OS programming, but it's a shame that it still remains in
> > modern C++.
>
> Doh, without its high level of C compatibility, C++ wouldn't be what it is.
>
> C++ code can directly use platform interfaces which are defined in C, if only a
> little care is taken in the header files (which is common practice nowadays).
> In C++ you can directly #include some platform header, POSIX, Win32 or
> whatever, and link to the corresponding library. The functions in that library
> might use C style arrays. For instance, if you call uname on a Unix-like
> system, the struct utsname structure contains C arrays. You can use that
> directly from C++, arrays and all, without having to deal with any kind of
> clunky foreign function wrapping.
>
> Go back on kook medication.
Says somebody who insults people on the internet.
>
> And anyway I presume that it wouldn't be so difficult to add array
> bounds checking to C and C++.
> All heap-allocated arrays already contain length information in some
> form to allow deallocation, so one would just need to add length
> information for stack-allocated and global arrays in a consistent
> representation.
>
> I presume that the main problem would be preserving compatibility with
> existing binary interfaces.
>
And they do: Microsoft's C/C++ compiler has this as an optional feature.
It can also check for stack overflow and stack overwrite (in software).
Also, on newer Intel processors you can have DEP (Data Execution
Prevention).
(I recommend changing the OS setting to enable it for all programs, even
though that can cause some older programs to abort.)
For the last 25 years I have always run C/C++ with a bounds checker.
(CodeGuard or NuMega BoundsChecker)
It will inform me if I write to a deleted item on the heap or if I forget
to deallocate an object.
In fact, overflows and heap management are two of the most over-hyped
criticisms in C++ development.
I have to go back to the '80s to see the last time this was a serious issue.
Similarly, the first-fit heap management scheme is long dead, and heap
fragmentation is less of a problem.
--------------
John Thingstad
Good.
> Also, on newer Intel processors you can have DEP (Data Execution
> Prevention).
> (I recommend changing the OS setting to enable it for all programs, even
> though that can cause some older programs to abort.)
> For the last 25 years I have always run C/C++ with a bounds checker.
> (CodeGuard or NuMega BoundsChecker)
> It will inform me if I write to a deleted item on the heap or if I forget
> to deallocate an object.
Ok.
> In fact, overflows and heap management are two of the most over-hyped
> criticisms in C++ development.
> I have to go back to the '80s to see the last time this was a serious issue.
I disagree. Many security-critical bugs in programs still in use are
caused by unchecked array overflows. Not everybody is using bounds
checkers.
If enough everyday Lispers would mention it accidentally, maybe I
would care...
-PM
That's true ... but except in some simple instances, inference systems
can't infer the programmer's intended theoretical type, but only an ad
hoc type, or set of types, consistent with how the programmer uses the
variables. Declarations are (potentially at least) closer to the
intended theoretical type.
Personally I believe inference is fine for internal use but external
interfaces should always be explicitly declared. YMMV.
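For illustration (split_words is just an invented example): inferred
types stay inside the function body, while the externally visible
signature is declared in full, documenting the intended type at the
boundary:

#include <string>
#include <vector>

// External interface: parameter and return types spelled out.
std::vector<std::string> split_words(const std::string &text) {
    std::vector<std::string> words;
    auto begin = text.find_first_not_of(' ');     // inferred: size_type
    while (begin != std::string::npos) {
        auto end = text.find(' ', begin);         // inferred: size_type
        words.push_back(text.substr(begin, end - begin));
        begin = text.find_first_not_of(' ', end);
    }
    return words;
}

int main() {
    return split_words("declare the boundary").size() == 3 ? 0 : 1;
}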
George
Do you think it's easier to forget to implement a method after
writing a stub for it than after writing nothing about it?
> Now who's right? Hint:
> We're all only guessing.
So the discussion we had up to now is all meaningless?
> > (Anyway, you are right that the stub generated by Eclipse is bad).
>
> ...and the static type system is happy with what Eclipse generated. So
> the combination of a static type system and a development environment
> lures programmers into thinking they're safe, and nobody forced anyone
> to think about anything.
Shame on Eclipse.
> > The second issue, checked exceptions, is Java specific as far as I
> > know, and doesn't appear to be particularly related to static vs.
> > dynamic typing.
>
> A static type system classifies expressions according to one or more
> static types. Java's exception system provides a classification into
> whether an expression throws a checked exception or not. Hence, it's a
> static type system. (Including the fact that if the type checker cannot
> classify an expression into one of the two categories, it will reject
> the corresponding program.)
Yes, but the existence of checked exceptions is Java-specific, as far
as I know.
> This may not be a good type system, but it's a static type system
> nonetheless, a widely used one, and at least once propagated as a very
> good idea to include in a programming language by static thinkers.
>
> > The third example is flawed:
>
> > First of all you present it as a flaw of static typing while it's
> > actually due to dynamic typing introduced by polymorphism. If Java was
> > fully statically typed, all occurrences of 'instanceof' would be
> > resolved at compile-time.
> > In fact, that kind of race condition can happen in dynamically typed
> > languages, which, if the language runtime is thread-safe, would raise
> > an exception equivalent to ClassCastException, otherwise would cause
> > unspecified and unsafe behavior.
>
> > Second, that pattern is an example of bad programming since in
> > multithreading environments you should always synchronize before
> > accessing a shared mutable resource.
>
> It's funny. Everybody agrees with at least one of the patterns and
> disagrees with at least one other pattern. But it's always different
> ones. (And this is independent of whether they are supporters of static
> or dynamic languages.)
>
> Go figure.
>
> As I said, we're all just guessing. Don't try to sell this as rational
> thought. That's all I'm asking for...
There are random guesses and there are reasonable hypotheses. And then
there are facts.
My claim about your first pattern is a reasonable hypothesis in my
opinion; the second is more a question of semantics, and the third is a
fact.
In a dynamically typed language, you get a runtime exception for a missing
method/function as soon as there is an attempt to invoke it at runtime,
not sooner, not later. As soon as you get that exception, you know that
there is something that still needs to be implemented. You can even
implement it at that stage, while the default exception handler is waiting
for interactive resolution. As soon as the method/function is defined, you
can just continue execution from the place where the exception occurred.
If you are forced to implement a stub by the static type system, it's
likely that you get the stub wrong (as my paper shows), and the
situation will be much worse. The only correct stub to write is to
implement the functionality a dynamically typed language already
provides by default anyway.
You cannot force programmers to do anything. Programming languages
should support programmers, not stand in their way. That's what I think.
>> Now who's right? Hint:
>> We're all only guessing.
>
> So the discussion we had up to now is all meaningless?
No, it's not meaningless. But you have to realize that it's only an
exchange of opinions, not of hard facts.
There may be "subjectively hard" facts, like which programming style
suits your personality better. It may very well be that (certain)
statically typed languages are easier for you to handle than (certain)
dynamically typed languages. But that doesn't say anything about static
typing and dynamic typing in general.
The errors static thinkers assume pop up all the time in dynamic
languages don't exist in practice, at least when those languages are used
by programmers to whom the corresponding programming style fits well.
That's another ("subjectively hard") fact. Better deal with it.
>>> (Anyway, you are right that the stub generated by Eclipse is bad).
>> ...and the static type system is happy with what Eclipse generated. So
>> the combination of a static type system and a development environment
>> lures programmers into thinking they're safe, and nobody forced anyone
>> to think about anything.
>
> Shame on Eclipse.
No, this is not Eclipse's fault. It's the fault of the overall
ecosystem surrounding Java, including the fact that it has a relatively
bad type system.
>>> The second issue, checked exceptions, is Java specific as far as I
>>> know, and doesn't appear to be particularly related to static vs.
>>> dynamic typing.
>> A static type system classifies expressions according to one or more
>> static types. Java's exception system provides a classification into
>> whether an expression throws a checked exception or not. Hence, it's a
>> static type system. (Including the fact that if the type checker cannot
>> classify an expression into one of the two categories, it will reject
>> the corresponding program.)
>
> Yes, but the existence of checked exceptions is Java-specific, as far
> as I know.
As I said, it's not so much important whether a type system is static or
not, but it's rather important whether it's good or not.
>>> The third example is flawed:
>>> First of all you present it as a flaw of static typing while it's
>>> actually due to dynamic typing introduced by polymorphism. If Java was
>>> fully statically typed, all occurrences of 'instanceof' would be
>>> resolved at compile-time.
>>> In fact, that kind of race condition can happen in dynamically typed
>>> languages, which, if the language runtime is thread-safe, would raise
>>> an exception equivalent to ClassCastException, otherwise would cause
>>> unspecified and unsafe behavior.
>>> Second, that pattern is an example of bad programming since in
>>> multithreading environments you should always synchronize before
>>> accessing a shared mutable resource.
>> It's funny. Everybody agrees with at least one of the patterns and
>> disagrees with at least one other pattern. But it's always different
>> ones. (And this is independent of whether they are supporters of static
>> or dynamic languages.)
>>
>> Go figure.
>>
>> As I said, we're all just guessing. Don't try to sell this as rational
>> thought. That's all I'm asking for...
>
> There are random guesses and there are reasonable hypotheses. And then
> there are facts.
>
> My claim about your first pattern is a reasonable hypothesis in my
> opinion; the second is more a question of semantics, and the third is a
> fact.
Matthias Felleisen stated that the third example is the only one that
is actually about the static type system. (He dismissed the other two.)
I think the fact that people cannot even agree which of the examples are
valid ones or not proves the point of the paper: Static type systems
don't guarantee anything.
> In a dynamically typed language, you get a runtime exception for a missing
> method/function as soon as there is an attempt to invoke it at runtime,
> not sooner, not later.
Good compilers, like LispWorks, give a warning at compile time if you try
to call a function that is undefined.
--
Frank Buss, f...@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
True, forgot to mention that. (But they don't prevent programmers from
executing the code in such a case...)