>I'm sure you're aware that this is a perennial subject of contention.
>There are those of us who maintain that "dynamic type" is an oxymoron.
>The main problem with such a notion is that there is little, if any,
>distinction between a type and a general predicate.  I suppose you might
>say that types are _total_ predicates, and this distinction might be
>marginally useful for a language like Lisp, but I think it begins to
>fall down for nonstrict languages.
>This is not a value judgement, mind you.  There are weird and wonderful
>things you can do in Lisp or in the untyped lambda calculus, but I would
>say that describing Lisp as "dynamically typed", rather than "untyped",
>doesn't really say much.
1. You're doing too much violence to established usage.  There's more than one established notion of types and of typing.  This needn't have much "cost" because there's terminology for identifying which notion(s) you're talking about.  "Dynamic typing", for example.
2. Saying "dynamically typed" rather than "untyped" says something worth saying. There have been Lisps that were pretty much what I'd call "untyped". For instance, you could take the car and cdr of symbols and numbers. (I know this can be seen as not being about types, but that's not the only worthwhile way to see it.) I think calling Lisp "untyped" makes pretty much the same mistake as saying "in Lisp, everything is a list".
h...@aplcenmp.apl.jhu.edu (Marty Hall) writes:

>In article <BLUME.95Sep21153...@atomic.cs.princeton.edu>
>bl...@atomic.cs.princeton.edu (Matthias Blume) writes:
>[...]
>>When I still was a Scheme addict I used to believe in this, too.  But
>>after programming extensively in SML I now have to say that this is
>>not true at all.  Early _complete_ error checking leads (at least for
>>me) to a huge decrease in development time.

>I have been and continue to do a majority of my professional
>development work in Lisp (the remainder in C), and like the dynamic
>typing aspect.  But for those who haven't tried SML, it really is worth
>looking at.  I have considered the strongly typed aspect of other
>strongly typed languages a pain, but SML [...]
I agree that SML is a good language, and that ML-style type inference, and the power of the type system, make for a usable and flexible form of strong typing.
However, I nonetheless find SML a pain to use. I don't think it's the strong-typing _per se_, though I still find it too restrictive. I hate having to write output routines (Lisp typically provides enough automatically), pattern-matching is often too "positional", ML's not object-oriented, it doesn't have macros, the module system is in itself about as complex as all of Scheme, ... -- stuff like that.
| Yes, in Lisp you cannot mis-apply a primitive but, for me, safety also
| implies never mis-applying user-defined functions.  Without this
| guarantee, I'd be hard-pressed to concede that Lisp is a type-safe
| language.
this doesn't make sense.
| To take a specific example: if a point in space is represented as a
| pair of numbers, then the typical Lisp function that selects the
| x-coordinate:
|
|   (define (x-coord pt) (car pt))
|
| will happily work with any list I give it, whereas in most statically
| typed languages, this mis-application is caught during type-checking.
sorry, you're seriously confused. first of all, the above code is Scheme, not Lisp. second, if you use a primitive type to implement a higher-level type and you don't tell the type system, you get exactly the same kind of behavior in statically typed languages. the clue you missed is that you have to tell the type system what you think. in the case of Common Lisp, you use structures or classes, which become new types, which you can properly declare. (it is not Lisp's fault that Scheme lacks structures and classes. credit where credit is due, and blame where blame is due.)
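[The point about telling the type system what you think generalizes beyond Common Lisp.  A minimal sketch, using Python purely as a stand-in for a dynamically typed language with structures; the names `Point` and `x_coord` just mirror the quoted example and are otherwise hypothetical:]

```python
class Point:
    """A distinct point type, rather than a bare pair reused for points."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

def x_coord(pt):
    # Once points have their own type, mis-application fails loudly
    # (AttributeError) instead of silently returning the car of any list.
    return pt.x

print(x_coord(Point(3, 4)))   # 3
```

Handing `x_coord` an arbitrary list now signals an error at the point of misuse, which is the "safety net" behaviour a structure or class declaration buys you even without static checking.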
| The fact that Lisp has a ``safety-net'' (not present in C and
| assembler) in that primitives are never mis-applied, is very useful and
| nice, but I still wouldn't call it a type-safe language.
that's because you have misunderstood Lisp's safety-net, confuse Scheme with Lisp, and generally talk more than you should.
| [Some may claim that my X-COORD reflects bad style: that I should have
| used an explicitly tagged data structure for points and checked the tag
| in X-COORD.  Fine; I can, and do, write such type-safe programs in
| Lisp, and so can I, in C and assembler; but the LANGUAGE isn't
| type-safe.]
yes, it's bad style, but not for the reason you think.  the problem is that _you_ aren't type-safe, because you have used the name "Lisp" ambiguously.  Lisp is a class of languages, not a single language.  properties of one of the instances of that class are not necessarily properties of all instances, nor of the class.
I strongly suspect you refer to Scheme, and not necessarily a functionally useful implementation of Scheme, either.
In article <446l4l$...@info.epfl.ch>, "Stefan Monnier" <stefan.monn...@epfl.ch> writes:
|> In article <444h1m$...@larry.rice.edu>,
|> Shriram Krishnamurthi <shri...@europa.cs.rice.edu> wrote:
|> ] The problem is not what happens when the type checker accepts your
|> ] program, but rather when it doesn't.  The issue is not that it will
|> ] reject some correct programs; I know it will, and accept that when I
|> ] use SML.  Rather, I often find the Hindley-Milner algorithm
|> ] unintuitive in its type judgements.  In particular, reverse flow
|> ] problems can make some type errors fairly hard to understand.
|>
|> This is a serious problem and some research has been done that tries to keep
|> track of where which choices have been made so that when an error is detected,
|> the compiler can tell the user where he got some type from.  I don't know how
|> well this works, tho.
As an ML programmer, I don't see this problem as very serious.  It is true that every once in a while the compiler prints a confusing type mismatch error.  However, the addition of a type constraint or two can quickly yield better error messages.  Only a tiny fraction of my time programming in ML is spent tracking confusing type errors.
Kenneth Cline <cli...@SILVER.FOX.CS.CMU.EDU> wrote:

>|> [stuff about how hard it can be to track down type errors in ML]
>As an ML programmer, I don't see this problem as very serious.  It is
>true that every once in a while the compiler prints a confusing type
>mismatch error.  However, the addition of a type constraint or two can
>quickly yield better error messages.  Only a tiny fraction of my time
>programming in ML is spent tracking confusing type errors.
I both agree and disagree with Ken. I agree that for experienced ML programmers, tracking down type errors is rarely a problem. In the rare instance that you can't find the error fairly quickly, there are techniques that will usually help, such as adding a few type constraints.
However, I disagree because I do believe it is a serious problem, not for experienced ML programmers, but rather for novice ML programmers. The learning curve for figuring out how to deal with these type errors is fairly steep, and I think too many people give up in frustration. Without better tools and techniques for *novices* to track down type errors, ML and its relatives will never make a dent in mainstream programming.
I have a recurring friendly argument over these issues with a friend. She's an experienced programmer, but hates ML because when she tried to learn it, she found the type error messages too confusing. I maintain that however bad the type error messages might be, they're better than the type error message that C would give you in most cases ("Bus error - core dumped."). However, she has over the years developed the skills to cope with that kind of error (such as sprinkling the program with printf's) and is unwilling to invest the significant chunk of time and effort that would be required to learn the new set of skills for dealing with ML-style type errors.
Chris
-- 
-------------------------------------------------------------------------------
| Chris Okasaki       | Life is NOT a zero-sum game!                          |
| cokas...@cs.cmu.edu | http://foxnet.cs.cmu.edu/people/cokasaki/home.html    |
-------------------------------------------------------------------------------
matom...@lig.di.epfl.ch (Fernando D. Mato Mira) wrote:
>In article <002btk...@celeborn.satlink.net>,
>ferna...@celeborn.satlink.net (Fernando Martinez) writes:
>|> Sorry but I'm shock because nobody talk about Smalltalk, it's have all the
>|> advantages of ADA and LISP together.
>Sorry, but I think all Lisp and Ada programmers around must agree that you're wrong.
Smalltalk is more like a subset of Common Lisp.
A very powerful subset, but a subset nonetheless.
There are advantages to this fact, BTW. Being a subset of Common Lisp is not necessarily a *bad* thing!
"Poor design is a major culprit in the software crisis... ..Beyond the tenets of structured programming, few accepted... standards stipulate what software systems should be like [in] detail..." -IEEE Computer, August 1995, Bruce W. Weide
In article <19950925T1639...@naggum.no>, Erik Naggum <e...@naggum.no> writes:
|> ... first of all, the above code is Scheme, |> not Lisp.
I stand suitably corrected, thank you. Yes, I was thinking Scheme. Most of the people I know who use Scheme, including the inventors of Scheme, insist that Scheme is Lisp, but no doubt they, like me, are seriously misinformed.
|> that's because you ... generally talk more than you should.
I apologize profusely for what is probably my first posting to this list this year, a total of less than 20 lines of text. Obviously, it's too much.
|> I strongly suspect you refer to ... not necessarily a functionally |> useful implementation of Scheme ...
You are utterly right, of course. I am in admiration of your ability to discern this from my posting.
Now to go hide in my corner with some Earl Grey tea.
My guess is that one of the reasons we use (need) types is to distinguish between what is being denoted, and the syntax used to denote it. For example '2' could be an integer, a rational, a finite field element, a character, etc. In each of these denotations the behaviour of '2' differs.
graham
-- 
A cat named Easter
He say "will you ever learn"
Its just an empty cage girl
If you kill the bird
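[Graham's observation, that the behaviour of '2' depends on what it denotes, can be seen in any language in which the same glyph can stand for different values.  A quick sketch, with Python used purely as a convenient stand-in:]

```python
# The same digit denotes different values, with different behaviour:
print(2 + 2)        # integer addition
print(2.0 + 2.0)    # floating-point addition
print('2' + '2')    # string concatenation
```

The syntax is nearly identical in each case; it is the type that selects which '2' is meant and hence which operation `+` performs.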
In article <DFH4Mv....@cogsci.ed.ac.uk> j...@cogsci.ed.ac.uk (Jeff Dalton) writes:

>d...@silicon.csci.csusb.edu (Dr. Richard Botting) writes:

>>Most language have evolved towards the strongest form of typing.  For
>>example: CPL->BCPL->B->C->ANSI C->C++

>Well, for one thing, C++ came before ANSI C.
And CPL had a strong Algol-60-like type system, even if BCPL didn't.  It had real, integer, Boolean (sic capital), complex, list, string and array types; machine registers and memory words were given their own types, 'index' and 'logical' respectively.  On at least one machine 'integer' was represented internally as floating-point.
In article <19950925T1639...@naggum.no> Erik Naggum <e...@naggum.no> writes:
nikhil> [R.S. Nikhil]
nikhil> | Yes, in Lisp you cannot mis-apply a primitive but, for me,
nikhil> | safety also implies never mis-applying user-defined functions.
nikhil> | Without this guarantee, I'd be hard-pressed to concede that
nikhil> | Lisp is a type-safe language.
erik> this doesn't make sense.
It does make sense to me.  In LISP you may get a run-time error for applying "+" to a list.  In ML this can never happen.
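[The distinction can be made concrete in any dynamically typed language.  Here is a sketch using Python as a stand-in for the Lisp side of the comparison: the mis-application is caught, but only when the offending expression is actually evaluated, whereas an ML compiler rejects the analogous program before it runs at all.]

```python
# Dynamically typed: applying "+" to a list is an error, but one that
# surfaces only at run time, on this particular evaluation.
try:
    result = 1 + [2]
except TypeError as err:
    print("caught at run time:", err)
```

A path that never executes this expression would never report the error, which is exactly the guarantee static checking adds.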
nikhil> | To take a specific example: if a point in space is represented
nikhil> | as a pair of numbers, then the typical Lisp function that
nikhil> | selects the x-coordinate:
nikhil> |
nikhil> |   (define (x-coord pt) (car pt))
nikhil> |
nikhil> | will happily work with any list I give it, whereas in most
nikhil> | statically typed languages, this mis-application is caught
nikhil> | during type-checking.
erik> sorry, you're seriously confused.  first of all, the above code
erik> is Scheme, not Lisp.
I think rather you are confused.  Scheme is a LISP dialect; there are actually quite a number of them.  Or do you confuse LISP and Common Lisp?
erik> second, if you use a primitive type to implement a higher-level
erik> type and you don't tell the type system, you get exactly the same
erik> kind of behavior in statically typed languages.
It is certainly a good idea to use structures but this wasn't the point.
nikhil> | The fact that Lisp has a ``safety-net'' (not present in C and
nikhil> | assembler) in that primitives are never mis-applied, is very
nikhil> | useful and nice, but I still wouldn't call it a type-safe
nikhil> | language.
erik> that's because you have misunderstood Lisp's safety-net, confuse
erik> Scheme with Lisp, and generally talk more than you should.
I find the combination of arrogance and ignorance you display rather shocking. I have to say that the discussion in comp.lang.functional is usually on a higher level, but then this is a crossposting...
-- 
Thorsten Altenkirch
LFCS, University of Edinburgh
nik...@mattapoisett.crl.dec.com (R.S. Nikhil) writes:

> Yes, in Lisp you cannot mis-apply a primitive but, for me, safety
> also implies never mis-applying user-defined functions.
As some other responses have pointed out -- with, sadly, varying degrees of vituperation -- the difficulty here is that you're not making explicit the invariant you're trying to maintain with regard to points (in this case).  Tagging, mind you, is no better than an assembler-level solution; it still doesn't *solve the problem*.
However, there seems to be an increasing trend toward wanting the addition of what some call "opaque" types (or structures); I think "generative" might be a better name for them.  For instance, we have them in our local Scheme dialect (<http://www.cs.rice.edu/~scheme/>), and I've seen some talk on the RnRS authors' mailing list about them.  (Unfortunately, the dialog there suggests some confusion between opacity and "generativity"; I use quotes advisedly because it is both less and more than ML's generative `datatype' declaration.)  In fact, I have more than a few times been led into wild debugging romps because of the current confusion of structures with vectors.
However, you know all this, and I think you're trying to make the more interesting point that Scheme is insufficiently powerful to *express* the constraints you want to place. This is true, and problematic. I suspect Joe Fassel was trying to say the same thing, but I mangled the point somewhat.
I recently discovered a similar issue with very "practical" consequences. Say I'm writing something that inserts run-time checks at primitives. What do I do with `force'? Well, I could re-write
but there's no way of determining the arity of that procedure. In short, I can (as an annotator) never be certain that the object being forced is a procedure of arity one. (In addition, there is the matter of "What is a promise?", a question that so-called standard R4RS leaves fairly open.)
> I am wondering why we really need typing in programming languages.  I
> know that typing information is beneficial for compiling, for example,
> error checking.  And there are some languages, such as Lisp, which are
> non typing or weak typing.  Besides these, are there any benefits?
It has often been claimed that the development times of large projects are significantly reduced and that the reliability of the result is much greater.
I haven't got any references to hand - can anyone supply some hard evidence ??
In article <44qkqp$...@goanna.cs.rmit.EDU.AU> o...@goanna.cs.rmit.EDU.AU (Richard A. O'Keefe) writes:
>d...@silicon.csci.csusb.edu (Dr. Richard Botting) writes:
>>Consider the following facts.  There exist languages that have
>>no types (LISP), weak typing (K&R C), strong typing (Pascally, Ada),
>>automatic dynamic typing (APL, BASIC)
>APL's dynamic typing is highly unusual in that it has syntactic consequences:
>a statement like "X <- F G Y" is *parsed* differently depending on whether
>F and G are unary functions or G is a binary function, and that can change
>at run time, so strictly speaking APL statements have to be reparsed whenever
>they are executed (at least according to the semantics in the APL2 draft
>standard I have floating around).
This is correct, but it is RARELY, if ever, used by anyone who wishes to remain sane while maintaining an application. People write functions with a specific valence (monadic or dyadic), and leave them there.
The dynamic typing, by and large, is not a big deal, since you can deduce the type of just about everything statically. APEX, my parallel APL compiler, does just that. The place where dynamic typing REALLY pays off [and my compiler won't pick up on this one, sad to say] is when types change underfoot:
In APL, if you, for example, add two integer arrays together, X+Y, and the result won't fit in an integer format, you get an array of doubles out rather than an array of integers.  Everything down the line will then work on these floating values instead of the integer values.
This is why a large New York merchant bank running SHARP APL was able to make money (fast 8^} ) on Black Monday when all the other trading systems were either:
 - dead, because integer overflow took them off the air
 - slow or giving wrong answers [ignoring the overflow!] to traders,
   letting them trade themselves into Really Big Losses
What happened was that trading volumes and total $$ were so large that normally happy integer values went floating. APL kept humming along. Other systems, coded in C, COBOL, or other languages, did something else.
Now, having said that: Most APL programmers will claim that they know the types and dimensionality of all the arrays in their code. They are usually right.
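[The promotion behaviour described above has a rough modern analogue.  A minimal sketch in Python, used here only as a stand-in: instead of widening integers to floating point as SHARP APL did, Python widens them to arbitrary precision, so the overflow simply never happens.]

```python
import sys

# Fixed-width arithmetic would wrap or trap here; Python instead
# promotes silently past the machine-word limit to a bignum.
big = sys.maxsize
print(big + 1)            # exact result, no overflow
print(big + 1 > big)      # the arithmetic stays correct
```

The trade-offs differ (APL's promotion loses exactness; bignum promotion loses speed), but in both cases "types changing underfoot" is what keeps the program giving answers instead of going off the air.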
Erik Naggum <e...@naggum.no> writes:

>[Peter Hermann]
>| The benefit pays off in a drastic(!) reduction of programming errors.

>the value is in early detection of programming errors, ideally causing a
>reduction of programming errors in the running code, but this is not
>necessarily the case as even those "errors" that there is no value in
>detecting are detected, and "correcting" them only adds unnecessary
>complexity to the program.
It's only "unnecessary" if you didn't really want any help from the type system in the first place (in which case, there are other languages out there for you). Those of us that *like* strong types feel that the occasional need to satisfy the type system is a small price to pay for the detection of a large class of errors. Think of it as a lifestyle choice - if you would like to get to work faster, you might choose a motorcycle, rather than a car with seat belts.
A static type system is a subpart of a program specification which can be checked at compile time. If you're dashing off a small program, such checks may be unnecessary and unwelcome; write it in LISP. If you're building a large one, they can be damned useful; write it in a language that does (at least) some static type checking. As a side effect, you usually get some optimisations.
The challenge is to devise more flexible type systems that still manage to hold the programmer's hand at compile time (rather than crashing and burning at runtime), while remaining reasonably intuitive. That's hard, and, because some things are only semidecidable, there will always be people for whom current "static type system technology" is too restrictive to use.
These don't necessarily represent better or worse programmers - just different styles. There are also "soft" type systems, for dynamically typed languages, which don't impose type constraints but will issue warnings (when they can) and optimise code.
>further, as the number of types increases [...]
>the ability of even very intelligent and highly experienced
>programmers to keep track of them and obtain the desired result in the
>presence of overloaded operators is decreasing dramatically.
Gotta agree. But this is the same as keeping track of the function calls in a large dynamically typed system. It's precisely when systems become large that you'd *like* the compiler to check some of the more obvious potential errors for you.
>this leads me to my second argument against strong typing.  getting a type
>system right requires so much detailed specification that the cost is so
>high that mistakes are actively discouraged.
Strong type systems discourage mistakes! And you're arguing *against* it? :)
Hmm. There's Haskell (actually, Gofer in particular). Pretty flexible if you want it to be, but still reasonably safe. And you don't have to *use* the extra stuff if you're comfortable with a subset of the type system. Haskell/Gofer aren't the ultimate, of course (just reasonably nifty and intuitive); they seem like a reasonable trade off.
For a better example of a powerful dynamic type system which allows type constraints to be checked at compile time, there's Quest (Cardelli et al; papers on the web, and well worth reading). Quest is essentially explicitly typed (you mention type parameters to functions when necessary), which allows its type system to be very general, but still checkable. It supports parametric polymorphism, encapsulation, overloading, record (and other) subsumption, etc; it also seems to scale reasonably well between small, dashed off programs and very large systems.
All of these languages are strongly typed, being based on similar styles of typing technology, but go to different lengths to be flexible and/or efficient. Different people will choose different languages. This is how it should be.
Mike.
-- 
Mike McGaughey			AARNET:	m...@molly.cs.monash.edu.au
"Thousands at his bidding speed, And post o'er land and ocean without rest" - Milton.
Mike Mc Gaughey (m...@cs.monash.edu.au) wrote:

[good discussion of different typing systems deleted]

: Think of it as a lifestyle choice - if you would like to get to work
: faster, you might choose a motorcycle, rather than a car with seat belts.
Does this choice affect your career?  I'm thinking of the truck driver who insists on using a motorbike for his trucking!  Or the cross-town messenger in a truck!
I guess my point is that it is *more* than a life-style choice.  A safety belt may have saved my life once, but it didn't stop the car from crashing into somebody else's property.  When my word processor crashes I lose work.  When a bridge on our network crashes, labs, memos, etc. on campus are delayed.  A type system or language that lets a program crash may be a problem to others tho' a solution to the programmer.
I'm redirecting followups to this posting to comp.soft-eng.
In article <mmcg.812867...@bruce.cs.monash.edu.au> m...@cs.monash.edu.au "Mike Mc Gaughey" writes:
> All of these languages are strongly typed, being based on similar styles
> of typing technology, but go to different lengths to be flexible and/or
> efficient.  Different people will choose different languages.  This is
> how it should be.
What usually interests me far more than the strength of the typing system (which can easily be debated, as this thread shows) is the size of the compilation unit supported by the compiler.  I like to compile small units very fast, as I often change small (sometimes _very_ small) amounts of code repeatedly.
As with your description of typing as a "lifestyle" choice, I think that the compilation unit size is another "lifestyle" choice.  I'll happily go without types altogether (e.g. using Forth) if it means I can develop something quickly.  After it works, I might ask myself if I should rewrite it in another language, perhaps for more speed, or even for "delivery".  That's a personal choice, as the Forth implementation I've used is my own, and its support for producing stand-alone code is currently limited.