
mathematical/formal foundations of computing ?


ne...@absamail.co.za

Feb 22, 2007, 9:50:49 AM
2 independent issues here:

1. I'm still searching for material re. "the mathematical/formal
foundation[s] of computing" to help move programming away from an
art towards a science. Patterns seem to be a heuristic step in that
direction ?

Here's some text re. the language Joy:
http://www.latrobe.edu.au/philosophy/phimvt/joy/j00ovr.html
> Overview of the language JOY
> Two other functional languages are the combinatory logic of Curry and
> the FP language of Backus. They are not based on the lambda calculus,
> they eliminate abstraction completely and hence do not have to deal
> with substitution and environments. As a result these languages can be
> manipulated using simple algebraic techniques
> ........Backus acknowledges a debt to
> combinatory logic, but his aim was to produce a variable free notation
> that is amenable to simple algebraic manipulation by people. Such ease
> should produce clearer and more reliable programs. .........
> Paulson remarked that "Programming and pure mathematics are
> difficult to combine into one formal framework". Joy attempts this task.

2. the second issue is about how NOT to write tutors.
The correct way is to chain-backwards from the goal.
Ie. put the 'bottom line' first, so the reader knows if he wants
to invest the labour to 'build forward' towards the goal.

I've been reading pages of the details of 'joy', without knowing
if & when & how it will claim to allow manipulation of 'joy code'
in a formal, mathematical way.

Q - does anybody know if joy can help me reach that goal ?

I don't want to learn another language which is a hybrid of forth, lisp
& pop-11 just to repeat "hello world: 2+3=5".

The correct way to present the capabilities which I/we seek is:

1. In order to simplify/formalise/be-able-to-prove-XYZ, we
need to have the 'format ABC'.

2. This is achieved by transforming .....

3. On the basis of proven rules & manipulations ...

Ie. don't start with pages of rules & manipulations, where the
reader doesn't know if the effort will take him towards his goal.


== Chris Glur.

Chris Dollin

Feb 22, 2007, 10:00:42 AM
ne...@absamail.co.za wrote:

> I don't want to learn another language which is a hybrid of forth, lisp
> & pop-11 just to repeat "hello world: 2+3=5".

pop-11 is /already/ viewable as a hybrid of forth and lisp.

--
Chris "electric hedgehog" Dollin
"People are part of the design. It's dangerous to forget that." /Star Cops/

Bruce McFarling

Feb 22, 2007, 10:05:05 AM
On Feb 22, 9:50 am, n...@absamail.co.za wrote:
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science. Patterns seem to be a heuristic step in that
> direction ?

This may be the boy scout who helped the little old lady across the
street who did not want to cross the street.

The target of moving some aspects of programming from an art (ie,
craft) to a science is not to move all of programming to a science,
but to clear clutter from the palette to provide more leverage to the
craft of design.

It's like the move from pen to typewriter to word processor ... the
goal is not to eliminate the writer from the act of writing, it is to
reduce the effort of recording and editing the written word so that
attention can be focused on the actual composition.

werty

Feb 22, 2007, 11:29:45 AM
On Feb 22, 7:50 am, n...@absamail.co.za wrote:
> 2 independent issues here:
>
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science. Patterns seem to be a heuristic step in that
> direction ?
>
> Here's some text re. the language Joy: http://www.latrobe.edu.au/philosophy/phimvt/joy/j00ovr.html

arnuld

Feb 22, 2007, 11:52:15 AM
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science.

ever tried algorithms ?

ever tried Google with keyword: Lisp (where functions are data)

also, http://www.cs.mu.oz.au/research/mercury/

> Patterns seem to be a heuristic step in that direction ?


you mean Design-Patterns ?


> Here's some text re. the language Joy: http://www.latrobe.edu.au/philosophy/phimvt/joy/j00ovr.html

i can not understand that, as i am a programming Newbie.

Joachim Durchholz

Feb 22, 2007, 2:23:27 PM
ne...@absamail.co.za wrote:

> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science.

First, programming is a craft, not an art (though it *can* be art, just
as craft can be art).
Second, mathematics alone isn't going to help programming move along. It
won't help with getting customer requirements into something that can be
programmed (mathematics could even be an obstacle here), and it won't
help with getting bugs out of programs (which is a question of human
error, and quality testing: again, the obstacles here are more human
than technological, and mathematics isn't going to help with that).

That doesn't mean that mathematics is unimportant in programming, just
as an architect needs mathematics to do his work when calculating loads
and stability. However, it's just a tool, not an end to be pursued.

That said, "functional programming languages" are "more mathematical" in
some sense, and they do have properties that make it easier to avoid the
technical bugs, and a portion of the customer requirements bugs, too.
That's not because it's mathematics, it's because being nearer to
mathematics simplifies the semantics of these languages...

> Patterns seem to be a heuristic step in that direction ?

No. Patterns are just like tricks of the trade: helpful if you know what
you're doing, nonsensical if you don't.

> Here's some text re. the language Joy:

Language rationales list what the designers had in mind.
I have observed that few if any languages actually do as the designer
intended. Most underperform; a few overperform (Lisp and PHP being the
cases where the difference is most marked). Fewer perform as expected
(Haskell, I think).

> 2. the second issue is about how NOT to write tutors.
> The correct way is to chain-backwards from the goal.
> Ie. put the 'bottom line' first, so the reader knows if he wants
> to invest the labour to 'build forward' towards the goal.

An interesting thought, though I can't tell whether that style will be
successful. It's good to determine whether a language (or other
technology) fits your needs, but it doesn't help with actually learning
it, so I fear this style won't become very popular.

Regards,
Jo

Paul Rubin

Feb 22, 2007, 3:25:21 PM
Joachim Durchholz <j...@durchholz.org> writes:
> > 2. the second issue is about how NOT to write tutors.
> > The correct way is to chain-backwards from the goal.
> > Ie. put the 'bottom line' first, so the reader knows if he wants
> > to invest the labour to 'build forward' towards the goal.
>
> An interesting thought, though I can't tell whether that style will be
> successful. It's good to determine whether a language (or other
> technology) fits your needs, but it doesn't help with actually
> learning it, so I fear this style won't become very popular.

Reading this group makes me feel like the underwear gnomes from South
Park:

1. Abstract category theory, GADT, Curry-Howard isomorphism, etc.
2. ???
3. Software!

I've bought into it enough to be spending time banging my head against
learning Haskell, but it would help a lot if step 2 were explained
better.

Jerry Avins

Feb 22, 2007, 4:15:33 PM
ne...@absamail.co.za wrote:
> 2 independent issues here:
>
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science. Patterns seem to be a heuristic step in that

That direction is a dead end. It is no more possible to create a
mathematical/formal foundation[s] of computing than it is to create one
of bridge or cathedral design. To be sure, mathematics is a useful tool
in the process, but we would have no Ponte Vecchio or Chartres if it were
the foundation.

The foundation is craft. Math builds on that foundation.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Neelakantan Krishnaswami

Feb 22, 2007, 4:30:31 PM
In article <7x7iuar...@ruckus.brouhaha.com>, Paul Rubin wrote:
>
> Reading this group makes me feel like the underwear gnomes from South
> Park:
>
> 1. Abstract category theory, GADT, Curry-Howard isomorphism, etc.
> 2. ???
> 3. Software!
>
> I've bought into it enough to be spending time banging my head
> against learning Haskell, but it would help a lot if step 2 were
> explained better.

Step 2 is:

2. Algebraic structures == insanely effective design patterns for libraries

--
Neel R. Krishnaswami
ne...@cs.cmu.edu

Paul E. Bennett

Feb 22, 2007, 5:45:57 PM
Joachim Durchholz wrote:

> ne...@absamail.co.za wrote:
>> 1. I'm still searching for material re. "the mathematical/formal
>> foundation[s] of computing" to help move programming away from an
>> art towards a science.

Which gets me wondering why he hasn't done a Google search on formal
methods to start with, before worrying us here.

> First, programming is a craft, not an art (though it *can* be art, just
> as craft can be art)

I'm with you on that front. It is also a craft that requires a degree of
skill and intelligence to accomplish a satisfactory end product.

> Second, mathematics alone isn't going to help programming move along. It
> won't help with getting customer requirements into something that can be
> programmed (mathematics could even be an obstacle here), and it won't
> help with getting bugs out of programs (which is a question of human
> error, and quality testing: again, the obstacles here are more human
> than technological, and mathematics isn't going to help with that).

I have seen some places where maths (formal methods) has been a help in
extracting sense out of complex customer requirements in order to re-write
the requirements in a way that made a great deal more sense to system
developers. It is a process best left to those with a distinctly weird bent
of mathematical mind. On the other hand, there are limitations to the use
of formal methods and it is of no use at all in some situations.

> That doesn't mean that mathematics is unimportant in programming, just
> as an architect needs mathematics to do his work when calculating loads
> and stability. However, it's just a tool, not an end to be pursued.

Sometimes a highly specialised tool that needs to be wielded by experts.



> That said, "functional programming languages" are "more mathematical" in
> some sense, and they do have properties that make it easier to avoid the
> technical bugs, and a portion of the customer requirements bugs, too.
> That's not because it's mathematics, it's because being nearer to
> mathematics simplifies the semantics of these languages...

Have heard a proponent of Haskell and Gofer talk of Forth as a language
that is halfway between procedural and functional and "quite nice for some
things".

--
********************************************************************
Paul E. Bennett ....................<email://p...@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************

Paul Rubin

Feb 23, 2007, 1:06:21 AM
Neelakantan Krishnaswami <ne...@cs.cmu.edu> writes:
> Step 2 is:
> 2. Algebraic structures == insanely effective design patterns for libraries

Interesting, but similar library functions are showing up in languages
like C++ and Python, without all that theory, or am I missing something?

It also seems to me that the mathematical hair in the Haskell stuff
that I look at comes more from mathematical logic than from algebra.
Certainly all the stuff about types, which is pretty new to me (I did
get hold of Pierce's TAPL book and have started to look at it). I
can't help wondering how much of this approach is really necessary.

Logan Shaw

Feb 23, 2007, 2:39:28 AM
ne...@absamail.co.za wrote:
> 2 independent issues here:
>
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science.

You'll probably not have much luck in that. Programming consists of
making computers do what people want them to. People are capable
of wanting new things that break any formalism you come up with,
over and over and over again. That is why it is partially an art.

Now, understanding how computers work and what is possible can be
a science. But computer science is not programming.

> Patterns seem to be a heuristic step in that direction ?

I read an interesting perspective on patterns the other day, which
is that patterns are standard ways of dealing with weaknesses in
computer languages. Or to describe it a different way, patterns
are all about writing down abstractions that the language doesn't
support so that you can do them the same way and not have to
reinvent them. If the language (or library, whatever) already supported
the abstraction, you wouldn't need the pattern; you would just ask the
language to do it.

I'm not sure whether I agree with this or not, but it raises an
interesting point: once we have added some abstractions into a
language, we think of more abstractions. We codify those as part
of the language, or we codify them as part of a book on design
patterns. But either way, we are codifying them. And why is that
important? Because once we are done codifying them, we move on
to the next thing.

The point here is that there will always be a next thing. No matter
how many abstractions you think of and nail down as a science, there
will be more that can be formed.

Does anyone ever seriously suggest that we ought to turn "coming up
with new theorems in math" into a science? What about coming up
with new axioms for mathematical systems that are interesting or
useful? Does anyone suggest we should make that into a scientific
process? No, because the point is that it requires high-level
analysis. The analysis *is* the work.

- Logan

Logan Shaw

Feb 23, 2007, 2:44:40 AM
Paul Rubin wrote:
> Reading this group makes me feel like the underwear gnomes from South
> Park:
>
> 1. Abstract category theory, GADT, Curry-Howard isomorphism, etc.
> 2. ???
> 3. Software!
>
> I've bought into it enough to be spending time banging my head against
> learning Haskell, but it would help a lot if step 2 were explained
> better.

I recommend banging your head against functional programming some more.
At some point, you gain some enlightenment out of it. I've even found
myself gravitating towards writing in short fragments of functional style
in C++ (constructors wrapped around constructors wrapped around
constructors...) and Perl (map() instead of for loop, etc.).

It's not the be-all and end-all of software (because *nothing* is or
ever will be), but there is something very satisfying about it.

- Logan

Paul Rubin

Feb 23, 2007, 3:00:39 AM
Logan Shaw <lshaw-...@austin.rr.com> writes:
> Does anyone ever seriously suggest that we ought to turn "coming up
> with new theorems in math" into a science?

Sure, Hilbert's formalist program of the 1930's, and prior to that,
Leibniz's in the 18th century. Hilbert's program fell apart when they
found out about Gödel's incompleteness theorem, but for a while they
were trying to do precisely what you describe. Gödel himself wrote to
von Neumann in the 1950's asking basically about the complexity of
proving decidable theorems, leaving aside the undecidable ones; this
foresaw the development of NP-completeness theory.

I don't understand this foundations-of-computing stuff much myself,
but foundations of math is a perfectly respectable subject that
everyone should know something about. We have a much more precise
notion of what a mathematical theorem is today, than we had 100 years
ago. And it's only been recently that we've had full-blown formal
proofs of substantial theorems. We have a traditional "proof" that's
so complicated that nobody is sure it's correct (Hales' proof of
Kepler's conjecture) so we have the (ongoing) Flyspeck project to
settle the matter once and for all. And then we have godawful
software like Windows that crashes all the time; can we use (e.g.)
constructive type theory to make programs that verifiably do what
they're supposed to? I'm a wide-eyed newbie but this stuff seems much
more promising than the insanity that goes on in most of the software
world today.

Paul Rubin

Feb 23, 2007, 3:08:28 AM
Logan Shaw <lshaw-...@austin.rr.com> writes:
> > I've bought into it enough to be spending time banging my head
> > against learning Haskell...

> I recommend banging your head against functional programming some more.
> At some point, you gain some enlightenment out of it. I've even found
> myself gravitating towards writing in short fragments of functional style
> in C++ (constructors wrapped around constructors wrapped around
> constructors...) and Perl (map() instead of for loop, etc.).

I'm not having much trouble with functional part (it's familiar from
Lisp and Scheme). It's the type systems that are confusing me. I've
borrowed a 600 page book about type systems (Pierce, TAPL) which I
hope will help, but I see that it's volume 1 of a two-volume work.
Sigh.

arnuld

Feb 23, 2007, 3:31:04 AM
> On Feb 23, 12:39 pm, Logan Shaw <lshaw-use...@austin.rr.com> wrote:

> Now, understanding how computers work and what is possible can be
> a science. But computer science is not programming.

what ?

then what is programming?

and how will you define computer science?

algorithms ?

i really want to understand what you are talking about. i want to
learn.


> I read an interesting perspective on patterns the other day, which
> is that patterns are standard ways of dealing with weaknesses in
> computer languages. Or to describe it a different way, patterns
> are all about writing down abstractions that the languages doesn't
> support so that you can do them the same way and not have to
> reinvent them. If the language (or library, whatever) already supported
> the abstraction, you wouldn't need the pattern; you would just ask the
> language to do it.
>
> I'm not sure whether I agree with this or not, but it raises an
> interesting point: once we have added some abstractions into a
> language, we think of more abstractions. We codify those as part
> of the language, or we codify them as part of a book on design
> patterns. But either way, we are codifying them. And why is that
> important? Because once we are done codifying them, we move on
> to the next thing.
>
> The point here is that there will always be a next thing. No matter
> how many abstractions you think of and nail down as a science, there
> will be more that can be formed.

Logan, it reminds me of Bruce Lee. this is exactly how he developed
"jeet kuan do" after geting dissatisfied with Wing-Chun and other
expert-styles.


-- arnuld
http://arnuld.blogspot.com

Joachim Durchholz

Feb 23, 2007, 5:32:58 AM
Paul Rubin wrote:

> I can't help wondering how much of this approach is really necessary.

I'm not sure that you understood that the term "algebraic structures"
means just those sum types (unions-of-structs in C parlance), and
possibly the associated machinery (pattern matching in particular).

Of course, "algebraic structure" has fewer concepts than
"union-of-structs", and "union-of-structs" doesn't capture that the
entire thing is typesafe.
Propose a less intimidating terminology if you wish :-)
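For concreteness, a minimal Haskell sketch of such a sum type plus
pattern matching (names invented for illustration):

  data Shape = Circle Double
             | Rect Double Double

  area :: Shape -> Double
  area (Circle r) = pi * r * r   -- one equation per constructor;
  area (Rect w h) = w * h        -- the compiler checks coverage and types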

Regards,
Jo

Joachim Durchholz

Feb 23, 2007, 5:43:39 AM
Paul Rubin wrote:

> And then we have godawful software like Windows that crashes all the
> time; can we use (e.g.) constructive type theory to make programs
> that verifiably do what they're supposed to?

Insofar as you can formalize a requirement, yes.

E.g. one approach that actually made it into practice is Spark (Ada with
those parts stripped that are difficult to reason about, plus a proof
checker).

Problems with this approach are:
a) Formal requirements can be longer than the programs that implement
them, and then you end up debugging the requirements. This makes the
method unsuitable for some tasks.
b) As soon as intangibles like aesthetics or almost-intangibles like
ergonomics come into play, there's no set of formal rules that you can
check against, so such requirements are impossible to verify with a
formalism.
c) You need a way to deal with situations where the formalism says
something and reality says something different. E.g. if a file read
operation fails because the disk was disconnected, or some random
bit in RAM flipped, or whatever; and the formal requirements didn't
consider that case. You can largely ignore the issue for application
programs, but you cannot for operating systems and safety-critical
software. However, capturing such situations in a formalism is extremely
hard; in practice, people usually make sure that the system detects such
a failure and falls back to a restricted but safe mode of operation.
(Things get *really* hairy if there is no such mode, e.g. in nuclear
plants.)

Regards,
Jo

Joachim Durchholz

Feb 23, 2007, 5:52:47 AM
arnuld wrote:

>> On Feb 23, 12:39 pm, Logan Shaw <lshaw-use...@austin.rr.com> wrote:
>
>> Now, understanding how computers work and what is possible can be
>> a science. But computer science is not programming.
>
> what ?
>
> then what is programming?
>
> and how will you define computer science?

Computer science is scientific research about programming.

The relationship between computer science and programming is similar to
that between metallurgy and mechanical engineering.

> algorithms ?

Study of the properties of algorithms is CS.
Implementing them is programming.

>> The point here is that there will always be a next thing. No matter
>> how many abstractions you think of and nail down as a science, there
>> will be more that can be formed.
>
> Logan, it reminds me of Bruce Lee. this is exactly how he developed
> "jeet kuan do" after geting dissatisfied with Wing-Chun and other
> expert-styles.

The personal styles developed by martial arts experts are biased towards
the specific strengths and weaknesses of the developer. It is impossible
to appreciate the advantage of a specific style unless you have spent
years practicing it, at which point your opinion will be heavily biased.

I see a parallel to learning a concrete programming language here.
Different schools of sometimes violent opposition, each founded by an
"enlightened master".

Computer science is different. New knowledge about type systems is
always an improvement. (That's because it's just knowledge, not practice.)

Regards,
Jo

Ingo Menger

Feb 23, 2007, 6:17:26 AM
On Feb 23, 9:08 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:

> Logan Shaw <lshaw-use...@austin.rr.com> writes:
> > > I've bought into it enough to be spending time banging my head
> > > against learning Haskell...
> > I recommend banging your head against functional programming some more.
> > At some point, you gain some enlightenment out of it. I've even found
> > myself gravitating towards writing in short fragments of functional style
> > in C++ (constructors wrapped around constructors wrapped around
> > constructors...) and Perl (map() instead of for loop, etc.).
>
> I'm not having much trouble with functional part (it's familiar from
> Lisp and Scheme). It's the type systems that are confusing me.

How?

> I've
> borrowed a 600 page book about type systems (Pierce, TAPL) which I
> hope will help, but I see that it's volume 1 of a two-volume work.
> Sigh.

This is as if you bought a work about thermodynamics in order to
better understand the engine of your car, and also in order to drive
better.
This is not to say that TAPL isn't a great book.

For me, the type discipline has been a blessing in all languages that
have some form of strong typing. Unfortunately, the ALGOL descendants
require us to always write down the obvious, sometimes twice, as e.g.
in java:
List<Map<String, Integer>> foo = new LinkedList<Map<String, Integer>>();
AFAIK, C# has (or will have) something called local type inference,
which would allow dropping at least part of the clutter.
But, in functional languages, it is just wonderful when the compiler
says something like: Look, here you use "foo bar" as a list, but in
the definition of foo, you don't compute a list. This is surely not
what you intended, ohh great master.
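For comparison, a minimal Haskell sketch (hypothetical names) where
nothing needs to be written twice; the signature below is optional,
since the compiler can infer a (defaulted) type on its own:

  foo :: [(String, Integer)]
  foo = [("answer", 42)]    -- without the signature, the type is inferred
                            -- (a numeric type, defaulted to Integer)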

Tester

Feb 23, 2007, 10:40:38 AM
On Thu, 22 Feb 2007 08:50:49 -0600, ne...@absamail.co.za wrote:

>2 independent issues here:
>
>1. I'm still searching for material re. "the mathematical/formal
>foundation[s] of computing" to help move programming away from an
>art towards a science. Patterns seem to be a heuristic step in that
>direction ?
>
Are most aspects of the formal mathematical theory of computing all
that relevant to the day-to-day process of coming up with practical
programs which will handle the data they are likely to be given in
reasonable time and space?

For example, consider:

http://en.wikipedia.org/wiki/NP-complete

How is this likely to make practical computing more of a science?

Joachim Durchholz

Feb 23, 2007, 11:32:31 AM
Tester wrote:

> http://en.wikipedia.org/wiki/NP-complete
>
> How is this likely to make practical computing more of a science?

NP-completeness and undecidability just delineate what a knowledgeable
programmer will simply refuse to do, on grounds that it can't be done.

(Similar to an engineer who will refuse to build a dam made of bubblegum.)

Most of the time, other issues than the limits of what can be computed
in reasonable space and time are more prevalent. Such as bridging the
gap between requirements and programs, debugging, hardening against
attacks, occasional performance improvements, and adapting software to
changing requirements.

Regards,
Jo

J Thomas

Feb 23, 2007, 11:47:45 AM
On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:
> Tester wrote:
>
> >http://en.wikipedia.org/wiki/NP-complete
>
> > How is this likely to make practical computing more of a science?
>
> NP-completeness and undecidability just delineate what a knowledgeable
> programmer will simply refuse to do, on grounds that it can't be done.

No, that's a red herring. For some particular NP-complete problem you
might find that a solution for N=100 would take the current entire
world computing resources a million years to solve. But if your
customer wants a solution for N=50 and you can get your N=50 solution
in 250 milliseconds on an old PC, why refuse the job?

NP-completeness deserves at most a warning to the customer about
future upgrade problems.

> (Similar to an engineer who will refuse to build a dam made of bubblegum.)

Doesn't it depend on the job? A small dam that needs only a short
lifespan might feasibly be made of bubblegum, though some other
material might fit the needs better and/or cheaper.

Jerry Avins

Feb 23, 2007, 12:47:11 PM

For the effective equivalent of a dam made of bubblegum, look into the
history of D.B. Steinman's Peace River Bridge on the ALCAN highway. He
promised five years of service before collapse. It provided seven.

Paul Rubin

Feb 23, 2007, 2:02:34 PM
Joachim Durchholz <j...@durchholz.org> writes:
> > I can't help wondering how much of this approach is really necessary.
> I'm not sure that you understood that the term "algebraic structures"
> means just those sum types (unions-of-structs in C parlance), and
> possibly the associated machinery (pattern matching in particular).

I see, thanks. I thought it meant the collection of higher order
functions in libraries that are easily composable so you end up
writing programs that are sort of like commutative diagrams.

The part I'm having trouble with is the connection between
theoretical, foundational stuff and actual programming. In most other
programming language communities there's a vast disconnect between
foundations and practice, just like automobile engineering usually
doesn't concern itself with elementary particle physics.

> Of course, "algebraic structure" has fewer concepts than
> "union-of-structs", and "union-of-structs" doesn't capture that the
> entire thing is typesafe.
> Propose a less intimidating terminology if you wish :-)

This is pretty cool, and I think it's more like C++ templates than
union-of-structs.

Paul Rubin

Feb 23, 2007, 2:31:05 PM
"Ingo Menger" <quetz...@consultant.com> writes:
> > It's the type systems that are confusing me.
> How?

It's a lot of new stuff to try to internalize. As a simple example,
I don't understand why (Maybe a) is a monad. Past that, there's
a bunch of concepts like "higher kinded polymorphism" where I don't
even know what the words mean.

I also have trouble figuring out what's wrong with a program based on
error messages from Hugs. I don't know if that's from my own
inexperience or because the error messages actually aren't very good.
However it seems like the compiler faces a difficult problem, type
inference in the presence of type classes, so it doesn't know the
initial concrete type of anything. It's as if it tries to solve some
big system of equations based on type info from 17 different places in
the program, finds there's no solution, and announces that the program
must have an error, but is not so helpful in saying where the error is.

> But, in functional languages, it is just wonderful when the compiler
> says something like: Look, here you use "foo bar" as a list, but in
> the definition of foo, you don't compute a list. This is surely not
> what you intended, ohh great master.

That's nice in theory but the error messages (so far) are often hard
to understand. All I can tell is that the compiler was complaining
about -something-. Maybe it's easier in ML than in Haskell, because
there aren't type classes.

Joachim Durchholz

Feb 23, 2007, 2:52:45 PM
Paul Rubin wrote:

> That's nice in theory but the error messages (so far) are often hard
> to understand.

That's a typical newcomer problem in Haskell.
It seems that you get used to finding the real sources of the errors,
similar to when you get a syntax error, you almost automatically look at
the line *before* the one that the error was reported in.


Regards,
Jo

Thant Tessman

Feb 23, 2007, 3:34:46 PM
Paul Rubin wrote:
> Joachim Durchholz <j...@durchholz.org> writes:

[...]

>> Of course, "algebraic structure" has fewer concepts than
>> "union-of-structs", and "union-of-structs" doesn't capture that the
>> entire thing is typesafe.
>> Propose a less intimidating terminology if you wish :-)
>
> This is pretty cool, and I think it's more like C++ templates than
> union-of-structs.

This is the closest I've ever seen C++ come to capturing the concept.
And it still isn't quite there:

www.oonumerics.org/tmpw01/alexandrescu.pdf

-thant

Paul Rubin

Feb 23, 2007, 4:08:10 PM
Joachim Durchholz <j...@durchholz.org> writes:
> It seems that you get used to finding the real sources of the errors,
> similar to when you get a syntax error, you almost automatically look
> at the line *before* the one that the error was reported in.

Thanks, I'll keep that in mind. For amusement purposes, here's a
syntax error that confused me. I did manage to fix it by trial and
error, but I'm still not absolutely clear on the exact workings of the
rule that it broke. I'd be interested to know if you can spot the
error without trying to compile the program. In my case, even with
the error message I was not able to figure out what was wrong except
through a bunch of experiments.

main :: IO ()
main = do
    putStr "Enter num1: "
    n1 <- read1 "n1: "
    putStr "enter operation: "
    op <- getLine
    n2 <- read1 "Enter num2: "
    let func = case op of
        "+" -> Just (+);
        "-" -> Just (-);
        "*" -> Just (*);
        "/" -> Just (/);
        "**" -> Just (**);
        _ -> Nothing

    case func of
        Nothing -> print "unknown operator"
        Just op -> print (n1 `op` n2)

    main -- loop

Phil Armstrong

Feb 23, 2007, 3:49:30 PM
Paul Rubin <http://phr...@nospam.invalid> wrote:
> It's a lot of new stuff to try to internalize. As a simple example,
> I don't understand why (Maybe a) is a monad. Past that, there's
> a bunch of concepts like "higher kinded polymorphism" where I don't
> even know what the words mean.

(Maybe a) happens to be a monad because it obeys the Monad laws.
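For reference, the laws in question, as Haskell equations (a sketch;
m, f and g are arbitrary):

  return a >>= f   =  f a                       -- left identity
  m >>= return     =  m                         -- right identity
  (m >>= f) >>= g  =  m >>= (\x -> f x >>= g)   -- associativity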

> That's nice in theory but the error messages (so far) are often hard
> to understand. All I can tell is that the compiler was complaining
> about -something-. Maybe it's easier in ML than in Haskell, because
> there aren't type classes.

Sometimes it's helpful to start putting type annotations on your
functions to push the error around until it gets to the point where
you've made the actual mistake (as against the point where the
compiler finally notices that it has two irreconcilable types and
gives up, which can of course be somewhere else entirely.)

Phil

--
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt

Paul Rubin

Feb 23, 2007, 4:27:48 PM
Phil Armstrong <phil...@kantaka.co.uk> writes:
> (Maybe a) happens to be a monad because it obeys the Monad laws.

Well why does it even support the >>= operator? It could be that
I'm simply confused about how typeclasses work, but I thought something
like that couldn't happen by accident.

Neelakantan Krishnaswami

Feb 23, 2007, 7:00:04 PM
In article <7xzm75q...@ruckus.brouhaha.com>, Paul Rubin wrote:
> Neelakantan Krishnaswami <ne...@cs.cmu.edu> writes:
>> Step 2 is:
>> 2. Algebraic structures == insanely effective design patterns for libraries
>
> Interesting, but similar library functions are showing up in
> languages like C++ and Python, without all that theory, or am I
> missing something?

I don't think this is quite true. Many of the simpler constructions
show up, but as a general tendency the more complex ones do not. I
think this is because you don't get enough language support to use
them effectively. Either you don't get enough typechecking support
to catch stupid but hard-to-debug errors (eg, in Python), or the type
annotations you must write grow without bound (eg, Java or C++).

> It also seems to me that the mathematical hair in the Haskell stuff
> that I look at comes more from mathematical logic than from algebra.
> Certainly all the stuff about types, which is pretty new to me (I
> did get hold of Pierce's TAPL book and have started to look at it).
> I can't help wondering how much of this approach is really
> necessary.

If you're a mathematically minded programmer, you'll start noticing
all the algebraic structures that appear in your program -- for
example, you'll recognize that time intervals form a vector space,
that integers form a ring, that strings form a monoid, and so on. So
you'll start organizing your APIs to emphasize the algebraic
properties of each type, and then you'll start to hack.
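To make the monoid case concrete, a small sketch (a homegrown class,
so the example stays self-contained):

  class Monoid' a where
      unit  :: a
      joinM :: a -> a -> a     -- assumed associative, with unit as identity

  instance Monoid' [b] where   -- lists, and hence strings
      unit  = []
      joinM = (++)

  -- generic code written once against the structure:
  concatM :: Monoid' a => [a] -> a
  concatM = foldr joinM unit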

At that point, you'll notice that your code is mostly transformations
from one type to another, and most of them have homomorphic structure.
At this point, the natural thing (for a mathematician) is to start
using some category theory to organize all these types and mappings
between them.

This turns out to connect very naturally to type theory, because the
typed lambda calculus has a very elegant categorical semantics. So now
your programs end up very tightly connected to the semantic, algebraic
concepts you had: your programs are the constructive witnesses to the
existential statements in the math.

It all kind of happens just by following your nose, and I think that's
really, really cool.

Mark T.B. Carroll

Feb 23, 2007, 7:03:03 PM
Paul Rubin <http://phr...@NOSPAM.invalid> writes:

> Phil Armstrong <phil...@kantaka.co.uk> writes:
>> (Maybe a) happens to be a monad because it obeys the Monad laws.
>
> Well why does it even support the >>= operator?

Largely because it makes some sort of sense for it to.

> It could be that I'm simply confused about how typeclasses work, but I
> thought something like that couldn't happen by accident.

Indeed it doesn't: somewhere in the standard libraries there is an
instance declaration making Maybe an instance of Monad.

Monads are often about chaining computations together in one way or
another. The Maybe monad basically says, if one part of the computation
fails (returning Nothing), don't bother with the rest of it, just return
Nothing for the whole thing.
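The instance declaration itself is small; a sketch equivalent to the
one in the standard Prelude:

  instance Monad Maybe where
      return        = Just
      Nothing >>= _ = Nothing   -- a failed step short-circuits the chain
      Just x  >>= f = f x       -- a successful step feeds its result onward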

Mark.

--
Functional programming vacancy at http://www.aetion.com/

Daniel C. Wang

Feb 23, 2007, 11:03:53 PM (to Joachim Durchholz)
Joachim Durchholz wrote:
{stuff deleted}

> Insofar as you can formalize a requirement, yes.
>
> E.g. one approach that actually made it into practice is Spark (Ada with
> those parts stripped that are difficult to reason about, plus a proof
> checker).

BTW I've heard the Spark people give a talk; they were basically
verifying that their software implemented a particular military
encryption algorithm correctly.

The ACL2 folk have verified that both hardware and software stacks
implement various other security-related systems correctly.

arnuld

Feb 23, 2007, 11:46:49 PM
> On Feb 23, 3:52 pm, Joachim Durchholz <j...@durchholz.org> wrote:
> arnuld wrote:
>
> >> On Feb 23, 12:39 pm, Logan Shaw <lshaw-use...@austin.rr.com> wrote:
>
> >> Now, understanding how computers work and what is possible can be
> >> a science. But computer science is not programming.
>
> > what ?
>
> > then what is programming?
>
> > and how will you define computer science?
>
> Computer science is scientific research about programming.
>
> The relationship between computer science and programming is similar to
> that between metallurgy and mechanical engineering.
>
> > algorithms ?
>
> Study of the property of algorithms is CS.
> Implementing them is programming.


ok

> >> The point here is that there will always be a next thing. No matter
> >> how many abstractions you think of and nail down as a science, there
> >> will be more that can be formed.
>
> > Logan, it reminds me of Bruce Lee. this is exactly how he developed
> > "jeet kuan do" after geting dissatisfied with Wing-Chun and other
> > expert-styles.
>
> The personal styles developed by martial arts experts are biased towards
> the specific strengths and weaknesses of the developer.


Bruce Lee's "Jeet Kune Do" style is *no style*. you adapt yourself
according to the present situation; you develop a new style every time
you use Jeet Kune Do. Jeet Kune Do means how one can express himself,
totally and completely, without any restrictions imposed by all other
styles, Wing-Chun e.g.

> It is impossible
> to appreciate the advantage of a specific style unless you have spent
> years practicing it, at which point your opinion will be heavily biased.

you can't practice Jeet Kune Do for years because, as i said, if you
spend years practicing Bruce Lee's style, then you are just
practicing a new style every day

arnuld

Feb 23, 2007, 11:48:47 PM
> On Feb 23, 3:52 pm, Joachim Durchholz <j...@durchholz.org> wrote:

> I see a parallel to learning a concrete programming language here.
> Different schools of sometimes violent opposition, each founded by an
> "enlightened master".

yes, it is. IMVHO, Martial-Arts, Hacking and Music Composition are
very closely related, as per my experience. you can check the article
on my BLOG on this.

--arnuld
http://arnuld.blogspot.com

arnuld

Feb 23, 2007, 11:51:44 PM

to be clear, what i meant by "yes, it is" is that the next sentence
is right:

"i see a parallel to learning a concrete programming language"

Marlene Miller

Feb 24, 2007, 3:04:59 AM
"Paul Rubin" wrote:
> I'm not having much trouble with functional part (it's familiar from
> Lisp and Scheme). It's the type systems that are confusing me. I've
> borrowed a 600 page book about type systems (Pierce, TAPL) which I
> hope will help, but I see that it's volume 1 of a two-volume work.
> Sigh.

You might be interested in the lecture notes for a class Pierce taught last
fall.
http://www.cis.upenn.edu/~bcpierce/home.html

See Old Course Materials: Software Foundations, Schedule

(I downloaded the lectures and exams with solutions, because I plan to read
TAPL after EoPL and ToPL - if I live that long - and I am not sure how long
the links will stay around.)

Marlene


Paul Rubin

Feb 24, 2007, 3:11:11 AM
"Marlene Miller" <marlen...@worldnet.att.net> writes:
> You might be interested in the lecture notes for a class Pierce taught last
> fall.
> http://www.cis.upenn.edu/~bcpierce/home.html

Thanks, interesting. "A Gentle Introduction to Haskell" is also
looking useful. I think I looked at it when I was first getting
started and couldn't make sense of it then, but it's easier now.

Dirk Thierbach

Feb 24, 2007, 2:47:09 AM
Paul Rubin <http://phr...@nospam.invalid> wrote:
> Phil Armstrong <phil...@kantaka.co.uk> writes:
>> (Maybe a) happens to be a monad because it obeys the Monad laws.

> Well why does it even support the >>= operator?

For some Monads, one can think about (>>=) as some sort of "sequencing".
Then in an expression like "F >>= \x -> G[x]" (by G[x] I mean that x is
a free variable in the expression G), it means "first do F, bind the
result of F to x, if possible, and then do G".

For the Maybe monad, the result can be either "Just v" or "Nothing".
So the obvious thing to do is to bind v to x in the first case, and then
do G, and in the second case just "stop" and return "Nothing" for the
whole thing. (Exercise: What should return do?)
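A small usage sketch (tables and names invented for illustration):

  phoneOf :: String -> Maybe String
  phoneOf name = lookup name [("alice", "1234"), ("bob", "5678")]

  addressOf :: String -> Maybe String
  addressOf num = lookup num [("1234", "Elm St.")]

  -- look up the number and, if that succeeded, its address;
  -- a Nothing at either step makes the whole result Nothing
  contact :: String -> Maybe String
  contact name = phoneOf name >>= \num -> addressOf num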

This is like error handling: If an error happens in F, you stop, otherwise
you continue with G. That's the reason the Maybe monad is sometimes also
called the error monad. (Exercise: What do you have to do if not only
want to flag an error, but also have an extra argument (say, a string)
indicting what sort of error happened? What well-known datatype from
the library corresponds to that, and what new monad do you get this
way?)

It's not always so simple, the list monad for example does something
a bit more complex.

> It could be that I'm simply confused about how typeclasses work, but
> I thought something like that couldn't happen by accident.

Of course it doesn't happen by accident :-) You must prove to the
compiler that Maybe is indeed an instance of the Monad typeclass,
which is what the library does in the corresponding instance
declaration. Just look it up, and see if you can match the
implementation to the effect described above.

- Dirk

[Unnecessary NGs removed from f'up]

Markus E Leypold

Feb 23, 2007, 12:13:02 PM

"J Thomas" <jeth...@gmail.com> writes:

> On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:
>> Tester wrote:
>>
>> >http://en.wikipedia.org/wiki/NP-complete
>>
>> > How is this likely to make practical computing more of a science?
>>
>> NP-completeness and undecidability just delineate what a knowledgeable
>> programmer will simply refuse to do, on grounds that it can't be done.
>
> No, that's a red herring. For some particular NP-complete problem you
> might find that a solution for N=100 would take the current entire
> world computing resources a million years to solve. But if your
> customer wants a solution for N=50 and you can get your N=50 solution
> in 250 milliseconds on an old PC, why refuse the job?
>
> NP-completeness deserves at most a warning to the customer about
> future upgrade problems.

Ah - no. Some 15 years ago I had a customer who wanted a custom
database (and UI) which would be loaded with ~2000 records of
potential customers. The first thing he then did was to load it with
all ~20000 data sets from a CD directory (kind of an industry guide).
Since the index-building algorithm was anything but efficient (we took
some shortcuts because, due to limited memory, we could not just sort in
memory), the first build of the index of the larger database took more
than 2 days.

So scaling is not only an upgrade problem: Often your customer is a
bit vague about the problem size and you should probably write scaling
behaviour in the specs.

>> (Similar to an engineer who will refuse to build a dam made of bubblegum.)
>
> Doesn't it depend on the job? A small dam that needs only a short
> lifespan might feasibly be made of bubblegum, though some other
> material might fit the needs better and/or cheaper.

Right. But without theory you might just use bubble gum for every dam,
since you don't know about the principal restrictions of using bubble
gum.

My answer to the OP, BTW, would have been Jo's.

And, @tester: where do you read this thread? I don't want to continue
cross posting.

Regards -- Markus


Ingo Menger

Feb 24, 2007, 8:20:13 AM
On 23 Feb., 20:31, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> "Ingo Menger" <quetzalc...@consultant.com> writes:

> > But, in functional languages, it is just wonderful when the compiler
> > says something like: Look, here you use "foo bar" as a list, but in
> > the definition of foo, you don't compute a list. This is surely not
> > what you intended, ohh great master.
>
> That's nice in theory but the error messages (so far) are often hard
> to understand. All I can tell is that the compiler was complaining
> about -something-. Maybe it's easier in ML than in Haskell, because
> there aren't type classes.

Not really. It's - unfortunately - a property of the basic type
inference algorithm. AFAIK, there is some research going on to make
things better in this respect. Your notion of a set of equations that
can't be solved is not so far off the mark.
It helps sometimes to keep in mind that type inference has a left to
right bias (although not necessarily a top to bottom one), so, for
example, in

foo xs = head xs && xs

the compiler will most probably infer xs :: [Bool] (from the
definition of head and the appearance of (head xs) as first argument
of &&) and then it will bitterly complain that it can't unify Bool and
[Bool]. But note that if it'd do it the other way around, assuming
first that xs must be Bool, the result would be no better. In any
case, the compiler arrives at some conclusion about the type a certain
variable should have, and then it will stick to that, even if all
other uses clearly suggest another type. Often, this will be
counterintuitive to the human mind (which has its own notion about
this matter).
Note that this example is quite simple; it is perhaps not a good idea
to propose that the compiler should print out a detailed log about
what assumptions it has made, and how they were justified. A single
error message could easily become 1000 lines and more this way.
As others have pointed out, whenever you have problems with the type
system, you should explicitly annotate your definitions. Not only
will the error messages get more understandable, but the discipline of
thinking every moment about the question: "Well, what will the type of
this expression, this function, this variable be?" will most probably
prevent you from making type errors in the first place.
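In the example above, an annotation pins the intent down, so the
complaint can point at the use that disagrees with it (a sketch):

  foo :: [Bool] -> Bool
  foo xs = head xs && xs   -- still wrong, but now the mismatch should be
                           -- reported against the declared type of xs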
Last but not least: No matter how bad the messages from the type
inference algorithm are, they are certainly more informative than
Segmentation fault, core dumped
which is what you get when you run programs containing type errors
written in languages that allow you to confuse a boolean and a list of
booleans.

Chris Uppal

Feb 24, 2007, 1:17:07 PM
Paul Rubin wrote:

> I also have trouble figuring out what's wrong with a program based on
> error messages from Hugs. I don't know if that's from my own
> inexperience or because the error messages actually aren't very good.

It would be nice if there was some sort of switch to tell the system "I
know it's wrong, but I want to run it anyway", perhaps implemented as
some simple meta-interpreter. Beginners could then /watch/ it go wrong
in the debugger (I assume there's a halfway decent debugger available?)
which is usually enlightening.

For all I know, the feature may already be there.

-- chris

Arthur J. O'Dwyer

Feb 24, 2007, 2:28:50 PM

I believe this thread is discussing languages which are high-level
enough that "run it anyway" is a meaningless request. If the program
is malformed, you /can't/ "run it anyway"; there's no legal program
present to run!

Some interpreted languages (Turbo Pascal 3.0, Perl) will let you
start running a malformed program, bailing only when they get to the
unrecoverable error. However, those languages are far closer to the
machine than ML or Haskell, and don't do lots of inferences about
type, so they can afford to "ignore" large parts of the program at
first. With type inference, you need to figure out all the types in
advance, whether they're on executable codepaths or not.

And with a compiled language, the compiler /must/ understand everything
in the program, because it has to translate it to machine code. There's
no way for the compiler to "ignore" parts of the program it doesn't
understand; everything must be assigned a meaning.

HTH,
-Arthur

Arjan

Feb 24, 2007, 3:28:35 PM
In message <11721557...@vasbyt.isdsl.net>
ne...@absamail.co.za wrote:

> 2 independent issues here:
>
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science. Patterns seem to be a heuristic step in that
> direction ?

Have you read Edsger Dijkstra's "A discipline of programming"
or David Gries' "The science of programming"?

Regards,
Arjan

Paul Rubin

Feb 24, 2007, 2:52:21 PM
Arjan <ar...@example.com> writes:
> Have you read Edsger Dijkstra's "A discipline of programming"
> or David Gries' "The science of programming"?

Those sound hopelessly out of date. I looked at Dijkstra's book
a long time ago but I think current approaches are much different.

Someone on #haskell recommended this a while back:

http://www-2.cs.cmu.edu/~rwh/plbook/

It looks excellent and I hope to read it someday.

Henrik Huttunen

Feb 24, 2007, 3:18:39 PM
Paul Rubin wrote:
> Someone on #haskell recommend this a while back:
>
> http://www-2.cs.cmu.edu/~rwh/plbook/

Thank you for sharing this! It seems to discuss lots of things I am
currently hoping to learn.

--
Scalad - a salad of Scala abstractions:
http://users.utu.fi/hvkhut/scalad/scalad.htm

Ben Pfaff

Feb 24, 2007, 3:23:20 PM
"Arthur J. O'Dwyer" <ajon...@andrew.cmu.edu> writes:

> Some interpreted languages (Turbo Pascal 3.0, Perl) will let you
> start running a malformed program, bailing only when they get to the
> unrecoverable error.

I used Turbo Pascal 3.0 back in, um, perhaps 1988 or so, and I
don't remember it working that way. It compiled the whole
program, then ran it. Is there some newer product also called
Turbo Pascal 3.0 that works differently?
--
"In the PARTIES partition there is a small section called the BEER.
Prior to turning control over to the PARTIES partition,
the BIOS must measure the BEER area into PCR[5]."
--TCPA PC Specific Implementation Specification

David Hopwood

Feb 24, 2007, 4:08:14 PM
Chris Uppal wrote:
> Paul Rubin wrote:
>
>>I also have trouble figuring out what's wrong with a program based on
>>error messages from Hugs. I don't know if that's from my own
>>inexperience or because the error messages actually aren't very good.
>
> It would be nice if there was some sort of switch to tell the system "I
> know it's wrong, but I want to run it anyway", [...]

To the extent that this is possible, it can be done by replacing the term
that caused the compilation error with "undefined".
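For instance (a hypothetical fragment):

  average :: [Double] -> Double
  average xs = total / fromIntegral (length xs)
    where total = undefined   -- the ill-typed expression went here; the rest
                              -- still compiles, and fails only if evaluated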

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Paul Rubin

Feb 24, 2007, 4:17:08 PM
David Hopwood <david.nosp...@blueyonder.co.uk> writes:
> > It would be nice if there was some sort of switch to tell the system "I
> > know it's wrong, but I want to run it anyway", [...]
> To the extent that this is possible, it can be done by replacing the term
> that caused the compilation error with "undefined".

I could imagine compiling it with Lispish semantics, i.e. sticking the
inferred type into the runtime object:

(*) :: Integer -> Integer -> Integer -- ordinary multiplication
foobar = "foo" * "bar" -- attempt to multiply two strings

You'd get a runtime error when you try to actually evaluate foobar.
What good that would do, I don't know.

Benjamin Franksen

Feb 24, 2007, 4:18:29 PM
Joachim Durchholz wrote:
> Paul Rubin wrote:
> > Neelakantan Krishnaswami <ne...@cs.cmu.edu> writes:
> >> Step 2 is:
> >> 2. Algebraic structures == insanely effective design patterns for
> >> libraries
> >
> > Interesting, but similar library functions are showing up in languages
> > like C++ and Python, without all that theory, or am I missing something?
> >
> > It also seems to me that the mathematical hair in the Haskell stuff
> > that I look at comes more from mathematical logic than from algebra.
> > I can't help wondering how much of this approach is really necessary.
>
> I'm not sure that you understood that the term "algebraic structures"
> means just those sum types (unions-of-structs in C parlance), and
> possibly the associated machinery (pattern matching in particular).

>
> > Of course, "algebraic structure" has fewer concepts than
> "union-of-structs", and "union-of-structs" doesn't capture that the
> entire thing is typesafe.
> Propose a less intimidating terminology if you wish :-)

I think you misunderstood 'algebraic structure', especially as an 'insanely
effective design pattern for libraries'. The term here refers to the set of
algebraic laws that are satisfied by the operations exported from the
library. In most cases the interface to such a library /must/ make the
exported data types abstract so that the laws can be guaranteed to hold.
The so-called 'algebraic data types' you are referring to are merely the
basic building blocks, one could say they are '0-th order' algebraic
structures, that is, the laws that apply to them are the ones guaranteed by
the language alone.
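A sketch of the idea (invented module, for illustration): the
constructor is hidden, so every value satisfies the invariant, and
'merge' may rely on it.

  module Sorted (Sorted, fromList, toList, merge) where

  import Data.List (sort)

  newtype Sorted a = Sorted [a]   -- constructor not exported

  fromList :: Ord a => [a] -> Sorted a
  fromList = Sorted . sort

  toList :: Sorted a -> [a]
  toList (Sorted xs) = xs

  -- linear-time merge, correct only because both inputs are known sorted
  merge :: Ord a => Sorted a -> Sorted a -> Sorted a
  merge (Sorted xs) (Sorted ys) = Sorted (go xs ys)
    where go [] bs = bs
          go as [] = as
          go (a:as) (b:bs)
            | a <= b    = a : go as (b:bs)
            | otherwise = b : go (a:as) bs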

The following paper was a real eye-opener for me. It explains how the right
algebraic structure leads to a simpler, yet more effective interface. It
also explains how to use such laws internally to tweak performance while
maintaining correctness.

homepages.inf.ed.ac.uk/wadler/papers/prettier/prettier.pdf

Cheers
Ben

Randy Howard

Feb 24, 2007, 4:22:43 PM
On Sat, 24 Feb 2007 14:23:20 -0600, Ben Pfaff wrote
(in article <87hctbz...@blp.benpfaff.org>):

> "Arthur J. O'Dwyer" <ajon...@andrew.cmu.edu> writes:
>
>> Some interpreted languages (Turbo Pascal 3.0, Perl) will let you
>> start running a malformed program, bailing only when they get to the
>> unrecoverable error.
>
> I used Turbo Pascal 3.0 back in, um, perhaps 1988 or so,

I used it earlier than that, probably 3 or 4 years at least.

> and I don't remember it working that way.

Neither do I.

> It compiled the whole program, then ran it.

Right, no interpreter at all. No idea what he's talking about.


--
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those
who have not got it." - George Bernard Shaw

Benjamin Franksen

Feb 24, 2007, 4:38:36 PM

I suppose it's something to do either with the indentation of the (first)
case branches or with the semicolons separating them (or both). The
semicolons are not necessary (since you are using layout anyway) and may
interfere with the 'let'. I'd leave them out. And the case branches ("+" ->
Just (+) etc...) should be indented at least one space further than the
start of the let item (starting with "func = case op of").
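
For concreteness, a hypothetical fragment (not the OP's actual
code) with a layout that satisfies this rule:

lookupOp :: String -> Maybe (Integer -> Integer -> Integer)
lookupOp op =
    let func = case op of      -- branches start right of "func"
                 "+" -> Just (+)
                 "*" -> Just (*)
                 _   -> Nothing
    in func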

Haskell's layout rule really has its subtleties, especially in connection
with do-notation. There are some pitfalls that almost all beginners fall
into.

Cheers
Ben

Benjamin Franksen

unread,
Feb 24, 2007, 4:45:05 PM2/24/07
to
Phil Armstrong wrote:
> Paul Rubin <http://phr...@nospam.invalid> wrote:
>> It's a lot of new stuff to try to internalize. As a simple example,
>> I don't understand why (Maybe a) is a monad. Past that, there's
>> a bunch of concepts like "higher kinded polymorphism" where I don't
>> even know what the words mean.
>
> (Maybe a) happens to be a monad because it obeys the Monad laws.

More precisely, (Maybe a) /can be viewed as/ a monad by giving suitable
definitions of (>>=) and (return). Here, 'suitable' is of course meant to
entail that the monad laws are obeyed, but to be useful the monad should be
non-trivial, too.
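
For reference, a sketch of such suitable definitions, written here
as standalone functions that mirror what the Prelude's instance
does:

returnMaybe :: a -> Maybe a
returnMaybe = Just

bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing   -- failure propagates
bindMaybe (Just x) f = f x       -- success feeds the next step

-- The monad laws then hold by simple equational reasoning, e.g.
--   returnMaybe x `bindMaybe` f  =  f x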

Cheers
Ben

Paul Rubin

unread,
Feb 24, 2007, 5:16:25 PM2/24/07
to
Benjamin Franksen <benjamin...@bessy.de> writes:
> I suppose it's something to do either with the indentation of the (first)
> case branches or with the semicolons separating them (or both).

Oh woops, I put in the semicolons while trying to debug the problem
and then I forgot to take them out. And yes, the problem was that the
case branches were not indented far enough. It amazed me. Coming
from Python, I was used to significant indentation, but Haskell's
indentation rules are more complex and seem to make less sense, and
they aren't described in the tutorial I looked at.

In Python, if you indent a line further to the right than the previous
line, the lexical scanner calls that an "indent" and if you indent it
further to the left than the previous line, the scanner calls it a
"dedent". Python's "indent" and "dedent" are equivalent to begin/end
or left and right curly braces in other languages. But the amount of
additional indentation is not significant. Only the direction matters.

Joachim Durchholz

unread,
Feb 24, 2007, 5:32:50 PM2/24/07
to
Arthur J. O'Dwyer schrieb:

> With type inference, you need to figure out all the types in
> advance, whether they're on executable codepaths or not.

Sorry, that's plain wrong.
You can always assign an Error type to those names where type inference
didn't give a meaningful result, and run the code anyway. Just make sure
you have run-time checks in place when assigning to or from that name,
that's all you need to do to get a perfectly running program (which may
terminate with a type error, of course, but hey! - that's what was
requested).
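
A minimal sketch of the idea in Haskell terms (hypothetical code;
GHC later adopted exactly this strategy under the name
-fdefer-type-errors):

foobar :: a   -- the "Error type": trivially claims any type
foobar = error "deferred type error: cannot multiply two strings"

main :: IO ()
main = putStrLn "runs fine as long as foobar is never evaluated"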

> And with a compiled language, the compiler /must/ understand
> everything in the program, because it has to translate it to machine
> code.

That's even more wrong. You can do type checks with machine code just fine.

There's one thing you need to do: have the type information available at
runtime. If you leave that out, then the run-time system will have
trouble checking types (unsurprisingly).
I'm not sure how much of an impact on run-time system design that has.
It would probably be part of making the run-time data structures
printable (and without that "just running the code" doesn't make much
sense, you want to see the code in action), so it's probably no
additional overhead anyway.

Regards,
Jo

J Thomas

unread,
Feb 24, 2007, 11:00:19 PM2/24/07
to
On Feb 23, 12:13 pm, Markus E Leypold
<development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:

> "J Thomas" <jethom...@gmail.com> writes:
> > On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:

> >> NP-completeness and undecidability just delineate what a knowledgeable
> >> programmer will simply refuse to do, on grounds that it can't be done.
>
> > No, that's a red herring. For some particular NP-complete problem you
> > might find that a solution for N=100 would take the current entire
> > world computing resources a million years to solve. But if your
> > customer wants a solution for N=50 and you can get your N=50 solution
> > in 250 milliseconds on an old PC, why refuse the job?
>
> > NP-completeness deserves at most a warning to the customer about
> > future upgrade problems.
>
> Ah - no. Some 15 years ago I had a customer who wanted a custom
> database (and UI) which would be loaded with ~2000 records of
> potential customers. The first thing he then did, was to load it with
> all ~20000 data sets from a CD directory (kind of an industry guide).
> Since the index building algorithm was all but efficient (we took some
> short cuts because dur to limited memory we could not just sort in
> memory), the first build of the index of the larger database took more
> than 2 days.
>
> So scaling is not only an upgrade problem: Often your customer is a
> bit vague about the problem size and you should probably write scaling
> behaviour in the specs.

Yes, but at this point I believe that NP-complete is a red herring.
What the customer cares about is at what N the resources become an
issue, and at what N the solution becomes impractical.

As I understand it, proving a problem is NP-complete, or proving it
isn't, will not in either case answer those questions. To know that
scaling issues will at some point make the solution completely
impractical doesn't at all say whether it's impractical for the
customer's uses. And vice versa, it can scale gently and be too
unwieldy even at the scale the customer needs now.

So it looks to me like finding out whether it's NP-complete is solving
the wrong problem. What you need to know is something else. Some of
the details you use to decide NP-completeness might be used for that
other question, though.

> >> (Similar to an engineer who will refuse to build a dam made of bubblegum.)
>
> > Doesn't it depend on the job? A small dam that needs only a short
> > lifespan might feasibly be made of bubblegum, though some other
> > material might fit the needs better and/or cheaper.
>
> Right. But without theory you might just use bubble gum for every dam,
> since you don't know about the principal restrictions of using bubble
> gum.

If you don't understand your materials you can expect problems. On the
other hand, civil engineering has traditionally operated with a very
large component of precedent. If a theoretician decides that a
particular procedure is unsafe, but it has been in widespread use
without incident, he's likely to generate lots of comments about
proofs that bumblebees can't fly. On the other hand if a design is
constructed with traditional safety margins much reduced because
theory says they're not needed, it's likely to get a lot of interest
while the engineers wait for it to fall down.

Theory is a useful supplement to practice. When you operate outside
your area of expertise it's all you have. And this is probably why
people are generally so hesitant to do significant projects outside
their areas of expertise.

J Thomas

unread,
Feb 24, 2007, 11:11:20 PM2/24/07
to
On Feb 24, 4:38 pm, Benjamin Franksen <benjamin.frank...@bessy.de>
wrote:
> Paul Rubin wrote:

> > For amusement purposes, here's a
> > syntax error that confused me. I did manage to fix it by trial and
> > error, but I'm still not absolutely clear on the exact workings of the
> > rule that it broke. I'd be interested to know if you can spot the
> > error without trying to compile the program. In my case, even with
> > the error message I was not able to figure out what was wrong except
> > through a bunch of experiments.

> I suppose it's something to do either with the indentation of the (first)
> case branches or with the semicolons separating them (or both). The
> semicolons are not necessary (since you are using layout anyway) and may
> interfere with the 'let'. I'd leave them out. And the case branches ("+" ->
> Just (+) etc...) should be indented at least one space further than the
> start of the let item (starting with "func = case op of").
>
> Haskell's layout rule really has its subtleties, especially in connection
> with do-notation. There are some pitfalls that almost all beginners fall
> into.

This is why I say it's important for Forth to become more intuitive
for beginners. Each little trap that beginners predictably fall into
is a barrier, something that lengthens the learning curve.

Of course it's hard to expect the experts to change their habits for
beginners, when we have so many more experts than beginners. But
still....

Bjorn Reese

unread,
Feb 25, 2007, 6:26:16 AM2/25/07
to
J Thomas wrote:

> proofs that bumblebees can't fly. On the other hand if a design is
> constructed with traditional safety margins much reduced because
> theory says they're not needed, it's likely to get a lot of interest
> while the engineers wait for it to fall down.

That is not the reason why structures fall down. Initially new
structural designs are built with a large safety margin. Subsequently
this safety margin is not reduced dramatically because of theory, but
is rather gradually lowered little by little for practical reasons
(cost, aesthetics, etc.) as people gain confidence in the design.
Eventually the safety margin is gone and the structure falls down.

See Henry Petroski's "Design Paradigms" for a thorough explanation
of this phenomenon.

--
mail1dotstofanetdotdk

Joachim Durchholz

unread,
Feb 25, 2007, 7:12:48 AM2/25/07
to
J Thomas schrieb:

> On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:
>> Tester schrieb:
>>
>>> http://en.wikipedia.org/wiki/NP-complete
>>> How is this likely to make practical computing more of a science?
>> NP-completeness and undecidability just delineate what a knowledgeable
>> programmer will simply refuse to do, on grounds that it can't be done.
>
> No, that's a red herring.

Isn't :-)

Seriously, you're right that algorithms for NP-complete problems are
occasionally useful. Even algorithms for undecidable problems are used
(some varieties of dependent type systems do that).

However, NP-complete and undecidable problems drastically reduce your
design space, so the first reaction of a programmer asked to do such a
thing will be a search for alternatives.

I.e. CS defines bounds. You can overstep them, but you pay a steep price.

Regards,
Jo

Chris Smith

unread,
Feb 25, 2007, 12:03:41 PM2/25/07
to
<Paul Rubin <http://phr...@NOSPAM.invalid>> wrote:
> And then we have godawful
> software like Windows that crashes all the time; can we use (e.g.)
> constructive type theory to make programs that verifiably do what
> they're supposed to?

No. People claim that they do this, but all they really prove is that
one extrapolation of the requirement into a formal language (the
program) is equivalent to another (the input to the validator, which is
often confusingly called the "specification" or something like that, in
order to distract you from the fact that it can be every bit as likely
to contain bugs as the program). Sometimes, the fact that both
techniques, which are traditionally somewhat radically different from
each other, agree can be useful information. It probabilistically
supports the proposition that the program is correct. Any claim beyond
this -- that such a method actually proves that the program contains no
bugs or does "what it's supposed to do", for example -- is misleading
and exaggerated.

At a lower level, there are many commonly used proof systems that prove
the absence of specific program behaviors. They are called type
systems, and a good number of programming languages have them built in.

--
Chris Smith

Chris Uppal

unread,
Feb 25, 2007, 4:28:52 PM2/25/07
to
Arthur J. O'Dwyer wrote:

[me:]


> > It would be nice if there was some sort of switch to tell the
> > system "I know it's wrong, but I want to run it anyway", perhaps
> > implemented as some simple meta-interpreter.

> I believe this thread is discussing languages which are high-level
> enough that "run it anyway" is a meaningless request. If the program
> is malformed, you can't "run it anyway"; there's no legal program
> present to run!

I don't think that's true at all.

Functional languages (real ones, I mean, not LISP) all have trivial
variations on lambda calculus as their runtime semantics (the
NON-trivial variation is in evaluation order, which is vitally
important, but irrelevant here). It is simple to create an "untyped"
implementation of that concept. For instance the earliest functional
programming language implementation that I, personally, know of
(remember that I don't count LISP) was Turner's SASL -- an untyped,
lazy, pure functional programming language (for his PhD in 1979).

His SASL interpreter was done in C, which meant he had to implement GC,
etc (3.5 K lines of code in total). But these days nobody would follow
that path -- they'd probably write it /in/ Haskell, and the thing would
be trivial.
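
For a flavour of how little code that takes, a toy sketch of an
untyped call-by-name evaluator (it uses naive substitution and
ignores variable capture, which a real implementation would have
to handle by renaming):

data Term = Var String | Lam String Term | App Term Term

-- subst x s t replaces free occurrences of x in t by s
subst :: String -> Term -> Term -> Term
subst x s (Var y)   | x == y    = s
                    | otherwise = Var y
subst x s (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- call-by-name: substitute the argument unevaluated
eval :: Term -> Term
eval (App f a) = case eval f of
                   Lam x b -> eval (subst x a b)
                   f'      -> App f' a
eval t = t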

-- chris

Chris Smith

unread,
Feb 25, 2007, 8:47:09 PM2/25/07
to
Chris Uppal <chris...@metagnostic.REMOVE-THIS.org> wrote:
> Functional languages (real ones, I mean, not LISP)

> For instance the earliest functional
> programming language implementation that I, personally, know of
> (remember that I don't count LISP)

I'm curious why you don't count LISP as a functional language. I can't
stand LISP myself, but I don't see how you'd call it non-functional.
The mutation features are not much different from ML's mutation features
except that they aren't so clearly delineated so that mutation can
pollute programs. Other than that, the only significant difference I
see between LISP and the pure lambda calculus is that function
parameters are lists rather than being curried.

Is it one of those things, or something else?

--
Chris Smith

Ingo Menger

unread,
Feb 26, 2007, 4:55:02 AM2/26/07
to
On Feb 24, 11:16 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>Coming
> from Python, I was used to significant indentation, but Haskell's
> indentation rules are more complex and seem to make less sense, and
> they aren't described in the tutorial I looked at.

They *are* described, however, in the Haskell report. (in the
appendix)


Chris Uppal

unread,
Feb 26, 2007, 8:17:10 AM2/26/07
to
Chris Smith wrote:

[me:]


> > Functional languages (real ones, I mean, not LISP)

...


> I'm curious why you don't count LISP as a functional language. I
> can't stand LISP myself, but I don't see how you'd call it
> non-functional.

That Lisp has an important functional aspect is beyond dispute. But
for me (and, as far as I know, for most people outside the US[*]) Lisp
is not a member of that category of languages known as Functional
Programming Languages.

([*] At least that's how it was when I last used a functional
programming language in anger -- years ago.)

Why ? A matter of history, a matter of emphasis.

History: it never has been -- Lisp has always been in a category by
itself; functional programming is a separate, albeit related,
category. <shrug/>

Emphasis: Lisp is to a large degree about lists. Remove lists from
Lisp and you are left with nothing. Functional Programming languages
are about higher order functions, remove lists from them and you are
left with an FP language with reduced expressiveness. Applicative
evaluation order. Currying. Pattern matching. They all add up to a
style and feel which is common to all functional languages, and which
is not shared by Lisp. (I know there are exceptions to all the above
-- ML has the "wrong" evaluation order, for instance -- it's a "family
resemblence" thing, not a list of formal criteria).

OTOH, Lisp has a feel and a collection of features of its own, which is
not shared by functional programming languages.

I see no benefit in conflating the two.

-- chris


Rainer Joswig

unread,
Feb 26, 2007, 9:06:03 AM2/26/07
to
In article <xn0f2x2j9...@news.gradwell.net>,
"Chris Uppal" <chris...@metagnostic.REMOVE-THIS.org> wrote:

> Chris Smith wrote:
>
> [me:]
> > > Functional languages (real ones, I mean, not LISP)
> ...
> > I'm curious why you don't count LISP as a functional language. I
> > can't stand LISP myself, but I don't see how you'd call it
> > non-functional.
>
> That Lisp has an important functional aspect is beyond dispute. But
> for me (and, as far as I know, for most people outside the US[*]) Lisp
> is not a member of that category of languages known as Functional
> Programming Languages.
>
> ([*] At least that's how it was when I last used a functional
> programming language in anger -- years ago.)
>
> Why ? A matter of history, a matter of emphasis.
>
> History: it never has been -- Lisp has always been in a category by
> itself, functional, programming is a separate, albeit related,
> category. <shrug/>
>
> Emphasis: Lisp is to a large degree about lists.

I'd say you are stuck even before 1958.

Lisp is a family of programming languages.
LISP (all caps) is typically used to denote some ancient dialect of Lisp
btw.

In the family of Lisp typical dialects are Scheme and Common Lisp.
Both are not pure FPLs. Both support a functional programming
style. Scheme a bit more than Common Lisp. But Common Lisp also
carries the functional programming style over to object-oriented
programming (which is not provided in Scheme by default).
In Common Lisp, functions are objects, methods
are objects, there are meta-classes for those and so on.
A higher-order functional programming style is quite
common (!) in Common Lisp - even with CLOS. More so in Scheme.

It is true that Scheme and Common Lisp are quite different from Haskell,
SML, and some other FPLs.

> Remove lists from
> Lisp and you are left with nothing. Functional Programming languages
> are about higher order functions,

I use higher-order functions in Lisp all day. Thank you. It is
hard to write any Lisp code ignoring higher-order functions.

> remove lists from them and you are
> left with an FP language with reduced expressiveness. Applicative
> evaluation order. Currying. Pattern matching.

Pattern matching has nothing to do with FP. (like it might be
that some cars have automatic transmission for convenience,
but a car is perfectly a car with manual transmission).

> They all add up to a
> style and feel which is common to all functional languages, and which
> is not shared by Lisp.

Programming in Haskell is quite different from programming in SML.
Laziness, Monads, Type Classes, Syntax (significant whitespace), etc.
Erlang is different again. J? Mercury? Oz? Clean?


> (I know there are exceptions to all the above
> -- ML has the "wrong" evaluation order, for instance -- it's a "family
> resemblence" thing, not a list of formal criteria).
>
> OTOH, Lisp has a feel and a collection of features of its own, which is
> not shared by functional programming languages.
>
> I see no benefit in conflating the two.

FP is as vague as OOP nowadays. It is almost useless.
Inclusion or not inclusion is mostly political.

>
> -- chris

Rainer Joswig

unread,
Feb 26, 2007, 9:09:51 AM2/26/07
to
In article <Lisp-2007...@ram.dialup.fu-berlin.de>,
r...@zedat.fu-berlin.de (Stefan Ram) wrote:

> "Chris Uppal" <chris...@metagnostic.REMOVE-THIS.org> writes:
> >Remove lists from Lisp and you are left with nothing.
>

> (Try to post this on comp.lang.lisp, and see what happens.)
>
> ~
>
> A functional language /enables/ functional programming,
> which means:
>
> - functions are values and at the same time can be
> applied (to an argument)
>
> - it has function literals for a wide class of functions
>
> - functions can be returned from functions
>
> - evaluation of arguments can be suppressed somewhat,
> so as to implement »if« or »cond« as a function
>
> - it has a garbage collector
>
> - it has a constructor for some kind of container
> (like a list)
>
> A /pure/ functional language /enforces/ functional programming.
>
> - no inbuilt notion of state or statements or an
> execution sequence in time, only a notation for
> values being specified in terms of other values.

Which makes Haskell 'pure'. But not SML. Which I would say
is a much more fundamental difference than, say, having
pattern matching or not.
Probably I'm not telling anything new. ;-) But maybe for C.U..?

Matthias Blume

unread,
Feb 26, 2007, 10:34:49 PM2/26/07
to

Rainer Joswig <jos...@lisp.de> writes:

> FP is as vague as OOP nowadays. It is almost useless.
> Inclusion or not inclusion is mostly political.

A better term -- one I heard first from Andrew Appel (although I am
not sure where it originated) -- is "value-oriented programming",
which hits the nail on the head much better.

Whether a language is functional is (to me) a matter of degree:
Haskell is certainly functional (unless you program in the IO monad,
in which case it is still functional, but more so only in name, and
less so in spirit).

ML is a bit less functional yet, since you are forced to program in
the IO monad all the time. Still, most of its data structures are
pure, and there is a significant and useful subset of the language
where programmers, compilers, and program analysis tools can safely
rely on freedom from side effects.

Scheme is even less functional, although it is still quite close to
the ML. However, there are significantly fewer "pure" constructs.
Almost anything involving structured data has side effects. (CONS in
Scheme has a side effect, and even LAMBDA does, for crying out loud!)

Common Lisp and other Lisp 2s are still further removed, as they give
up on the notion of functions being "just ordinary data" by making a
distinction between the rules of evaluation in operator position and
other positions.

That's my take on it, anyway.

> Pattern matching has nothing to do with FP. (like it might be
> that some cars have automatic transmission for convenience,
> but a car is perfectly a car with manual transmission).

To me, as the elimination form for sums, pattern matching is an
important aspect of /typed/ functional programming.
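
A minimal Haskell illustration of that role: a case expression
eliminates a sum by giving one branch per constructor.

data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area s = case s of
           Circle r -> pi * r * r
           Rect w h -> w * h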

Matthias

Rainer Joswig

unread,
Feb 27, 2007, 3:15:45 AM2/27/07
to
In article <m2hct82...@hanabi.local>,
Matthias Blume <fi...@my.address.elsewhere> wrote:

> Rainer Joswig <jos...@lisp.de> writes:
>
> > FP is as vague as OOP nowadays. It is almost useless.
> > Inclusion or not inclusion is mostly political.
>
> A better term -- one I heard first from Andrew Appel (although I am
> not sure where it originated) -- is "value-oriented programming",
> which hits the nail on the head much better.

A Lisp programmer knows the value of everything, but the cost of
nothing. -- Alan Perlis



> Whether a language is functional is (to me) a matter of degree:
> Haskell is certainly functional (unless you program in the IO monad,
> in which case it is still functional, but more so only in name, and
> less so in spirit).

The not-side-effect-free outside world makes a lot of problems. ;-)

> ML is a bit less functional yet, since you are forced to program in
> the IO monad all the time. Still, most of its data structures are
> pure, and there is a significant and useful subset of the language
> where programmers, compilers, and program analysis tools can safely
> rely on freedom from side effects.

How practical is it in 'reality'?
Say, you don't (re)use hash-tables and arrays in 'typical' ML
programs?

> Scheme is even less functional, although it is still quite close to
> the ML. However, there are significantly fewer "pure" constructs.
> Almost anything involving structured data has side effects. (CONS in
> Scheme has a side effect, and even LAMBDA does, for crying out loud!)

What is the LAMBDA or CONS side effect?

> Common Lisp and other Lisp 2s are still further removed, as they give
> up on the notion of functions being "just ordinary data" by making a
> distinction between the rules of evaluation in operator position and
> other positions.

For my practical programming purposes, this is mostly a
complication, but not a limitation.

>
> That's my take on it, anyway.
>
> > Pattern matching has nothing to do with FP. (like it might be
> > that some cars have automatic transmission for convenience,
> > but a car is perfectly a car with manual transmission).
>
> To me, as the elimination form for sums, pattern matching is an
> important aspect of /typed/ functional programming.

If you drive a car with automatic transmission all the time, you
start to think that it is necessary to have it.
If you remove or add pattern matching, it does not make a
programming language more or less 'functional', IMHO.


>
> Matthias

Markus E Leypold

unread,
Feb 26, 2007, 11:13:53 AM2/26/07
to

"Chris Uppal" <chris...@metagnostic.REMOVE-THIS.org> writes:


And how does that relate to

"If the program is malformed, you can't "run it anyway"; there's no
legal program present to run!"

?

Regards -- Markus


Markus E Leypold

unread,
Feb 26, 2007, 11:18:38 AM2/26/07
to

"J Thomas" <jeth...@gmail.com> writes:

> On Feb 24, 4:38 pm, Benjamin Franksen <benjamin.frank...@bessy.de>
> wrote:
>> Paul Rubin wrote:
>
>> > For amusement purposes, here's a
>> > syntax error that confused me. I did manage to fix it by trial and
>> > error, but I'm still not absolutely clear on the exact workings of the
>> > rule that it broke. I'd be interested to know if you can spot the
>> > error without trying to compile the program. In my case, even with
>> > the error message I was not able to figure out what was wrong except
>> > through a bunch of experiments.
>
>> I suppose it's something to do either with the indentation of the (first)
>> case branches or with the semicolons separating them (or both). The
>> semicolons are not necessary (since you are using layout anyway) and may
>> interfere with the 'let'. I'd leave them out. And the case branches ("+" ->
>> Just (+) etc...) should be indented at least one space further than the
>> start of the let item (starting with "func = case op of").
>>
>> Haskell's layout rule really has its subtleties, especially in connection
>> with do-notation. There are some pitfalls that almost all beginners fall
>> into.
>
> This is why I say it's important for Forth to become more intuitive

^^^^^
I completely agree with that. You have my support,

Regards -- Markus

Markus E Leypold

unread,
Feb 26, 2007, 11:24:18 AM2/26/07
to

"J Thomas" <jeth...@gmail.com> writes:

The OP's problem was "what the **** is all that theory good for". Well:
You won't be able to discuss "upgrade problems" or "scaling" or even
see that there is a problem (and will stay here) if you don't know
about NP-completeness etc.

>
> As I understand it, to prove a problem is NP-complete or prove it
> isn't, will not in either case answer those questions. To know that
> scaling issues will at some point make the solution completely
> impractical doesn't at all say whether it's impractical for the
> customer's uses. And vice versa, it can scale gently and be too
> unwieldy even at the scale the customer needs now.
>
> So it looks to me like finding out whether it's NP-complete is solving
> the wrong problem. What you need to know is something else. Some of

? Whoever said that?

> the details you use to decide NP-completely might be used for that
> other question, though.

>> >> (Similar to an engineer who will refuse to build a dam made of bubblegum.)
>>
>> > Doesn't it depend on the job? A small dam that needs only a short
>> > lifespan might feasibly be made of bubblegum, though some other
>> > material might fit the needs better and/or cheaper.
>>
>> Right. But without theory you might just use bubble gum for every dam,
>> since you don't know about the principal restrictions of using bubble
>> gum.
>
> If you don't understand your materials you can expect problems. On the

Same applies to CS and algorithms and their complexity.

> other hand, civil engineering has traditionally operated with a very
> large component of precedent. If a theoretician decides that a
> particular procedure is unsafe, but it has been in widespread use
> without incident, he's likely to generate lots of comments about
> proofs that bumblebees can't fly. On the other hand if a design is
> constructed with traditional safety margins much reduced because
> theory says they're not needed, it's likely to get a lot of interest
> while the engineers wait for it to fall down.

> Theory is a useful supplement to practice. When you operate outside

Supplement? It's actually the _fundament_ of practice. If not, in
chemistry we'd still be searching how to make gold,
because "it might just work this time".

> your area of expertise it's all you have. And this is probably why
> people are generally so hesitant to do significant projects outside
> their areas of expertise.


Regards -- Markus

Chris Smith

unread,
Feb 27, 2007, 10:27:04 AM2/27/07
to
> "Chris Uppal" <chris...@metagnostic.REMOVE-THIS.org> writes:
> > Functional languages (real ones, I mean, not LISP) all have trivial
> > variations on lambda calculus as their runtime semantics (the
> > NON-trivial variation is in evaluation order, which is vitally
> > important, but irrelevant here). It is simple to create an "untyped"
> > implementation of that concept.

Markus E Leypold wrote:
> And how does that relate to
>
> "If the program is malformed, you can't "run it anyway"; there's no
> legal program present to run!"
>
> ?

Simple. The error messages from Hugs are almost certainly type errors.
In that case, they are complaining that something can go wrong, but the
program still has well-defined semantics. If one has trouble
identifying the meaning of the type error, one could just run the
program and try to watch something go wrong, and thereby identify the
meaning of the error.

That is assuming we're talking about type errors; but in practice, we
almost always are. Syntax errors are comparatively rare (except for
beginners when they have to do with Haskell's bizarre indentation rules)
and easy to fix. I think you can assume Chris's comment was intended
for type errors only.

--
Chris Smith

Jerry Avins

unread,
Feb 27, 2007, 10:29:19 AM2/27/07
to
Markus E Leypold wrote:
> "J Thomas" <jeth...@gmail.com> writes:

...

>> Theory is a useful supplement to practice. When you operate outside
>
> Supplement? It's actually the _fundament_ of practice. If not, in
> chemistry we'd still be searching how to make gold,
> because "it might just work this time".

Theory developed to explain practice. Practice came first. I recall a
well-written eleventh-century treatise titled something like "How to
Build a Thirty-Foot Bridge with Twelve-Foot Timbers". (Wood was becoming
scarce in England around then.) No theory, but instructions I could
follow. The heuristics were confined to footings and piers.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Markus E Leypold

unread,
Feb 27, 2007, 10:49:13 AM2/27/07
to

Chris Smith <cds...@twu.net> writes:

>> "Chris Uppal" <chris...@metagnostic.REMOVE-THIS.org> writes:
>> > Functional languages (real ones, I mean, not LISP) all have trivial
>> > variations on lambda calculus as their runtime semantics (the
>> > NON-trivial variation is in evaluation order, which is vitally
>> > important, but irrelevant here). It is simple to create an "untyped"
>> > implementation of that concept.
>
> Markus E Leypold wrote:
>> And how does that relate to
>>
>> "If the program is malformed, you can't "run it anyway"; there's no
>> legal program present to run!"
>>
>> ?
>
> Simple. The error messages from Hugs are almost certainly type errors.
> In that case, they are complaining that something can go wrong, but the
> program still has well-defined semantics.

No, I don't think so. The set of all well-typed language constructs
contains the set of all programs with a well-defined semantics. The
language definitions of Haskell and the ML languages I know of are such
that there is no such thing as a "badly typed program". "Well typed"
is a requirement for "being a program". If it doesn't type, there is
only some text that can be parsed into tokens (it adheres to the
lexis of the language) and can perhaps even be used to construct a
syntax tree (it adheres to the syntax of the language), but since it
can't be typed it's not a program. The semantics is only defined on
(well-typed) programs.

BTW it is exactly your kind of thinking -- that "well typed",
"compiled", "can be run", "has well defined semantics" are disjoint
concepts -- that makes C such a mess (at least for people who haven't
yet developed reflexes to stay well clear of areas of doubtful
reputation).


> If one has trouble identifying the meaning of the type error, one
> could just run the program and try to watch something go wrong, and
> thereby identify the meaning of the error.

Hardly. Doesn't sound like a good idea. Constructing a test case to
find the "error" would be tantamount to understanding the cause for
the type error in the first place. No need to run the non-program then.

> That is assuming we're talking about type errors; but in practice, we
> almost always are.

> Syntax errors are comparatively rare (except for
> beginners when they have to do with Haskell's bizarre indentation rules)
> and easy to fix. I think you can assume Chris's comment was intended
> for type errors only.

I know. I still think it a really bad idea. Esp. from a sociological
point of view: We know how warnings from the C-compiler are usually
treated by people who want to get their stuff done before the
deadline: --no-warn-all-errors (...).

Regards -- Markus

Chris Uppal

unread,
Feb 27, 2007, 11:05:29 AM2/27/07
to
Chris Smith wrote:

> That is assuming we're talking about type errors; but in practice, we
> almost always are. Syntax errors are comparatively rare (except for
> beginners when they have to do with Haskell's bizarre indentation
> rules) and easy to fix. I think you can assume Chris's comment was
> intended for type errors only.

It was -- in fact it had not even occurred to me that the OP might be
getting any /other/ kind of error from HUGS...

(Not an unreasonable assumption, given the OP's stated difficulty with
the type system -- but if I'd realised I was making that assumption
then I'd have been more explicit.)

-- chris

Neelakantan Krishnaswami

unread,
Feb 27, 2007, 11:13:01 AM2/27/07
to
In article <joswig-23FF7D....@news-europe.giganews.com>, Rainer
Joswig wrote:
>
> How practical is it in 'reality'? Say, you don't (re)use
> hash-tables and arrays in 'typical' ML programs?

For mappings, I normally use a purely functional balanced tree of some
kind rather than a hash table.
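
In Haskell terms, a sketch of that style with Data.Map (a
balanced-tree map; insertion returns a new map and the old one
stays intact):

import qualified Data.Map as M

example :: (Maybe Int, Maybe Int)
example = let m0 = M.fromList [("a",1),("b",2)]
              m1 = M.insert "c" 3 m0    -- m0 itself is untouched
          in (M.lookup "c" m0, M.lookup "c" m1)
          -- evaluates to (Nothing, Just 3)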

I use arrays very rarely, and in most of those cases I only need the
constant time lookup, so I don't ever modify an array after its
creation. In fact, I honestly can't remember the last time I needed to
modify an array after creation. (For example, in a state machine
implementation I might end with an array to hold the transition table,
which never gets modified after creation.)

This is real in the sense that it's how I program, but I dunno if that's
realistic enough for you. :)

> What is the LAMBDA or CONS side effect?

Scheme has the eq? primitive which tests for object identity, and cons
and lambda allocate memory in the heap. That means that

(let ((x (cons 1 2))
      (y (cons 1 2)))
  (cons x y))

and

(let ((x (cons 1 2)))
  (cons x x))

are observably different. (In fact, Scheme cons cells are mutable via
set-car! and set-cdr!, but eq? is enough to make allocation visible.
Ocaml has the same problem/feature as Scheme, via its == operator, but
SML does not.)

--
Neel R. Krishnaswami
ne...@cs.cmu.edu

Neelakantan Krishnaswami

unread,
Feb 27, 2007, 11:26:27 AM2/27/07
to
In article <MPG.204dfbb4f...@news.altopia.net>, Chris Smith wrote:
>
> Simple. The error messages from Hugs are almost certainly type
> errors. In that case, they are complaining that something can go
> wrong, but the program still has well-defined semantics. If one has
> trouble identifying the meaning of the type error, one could just
> run the program and try to watch something go wrong, and thereby
> identify the meaning of the error.

With some effort, you could make this true for ML, but you definitely
cannot for Haskell. This is because Haskell relies on typing to
resolve overloading, and so without a type you don't necessarily have
a program to run.
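
A concrete Haskell sketch of the problem: read is chosen by its
result type, so some expressions have no unique meaning to execute
until a type is supplied.

-- show (read "123")                 -- rejected: ambiguous type
resolved :: String
resolved = show (read "123" :: Int)  -- fine, evaluates to "123"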

Andreas Kochenburger

unread,
Feb 27, 2007, 3:31:26 PM2/27/07
to
Jerry Avins wrote:
>> Supplement? It's actually the _fundament_ of practice. If not, in
>> chemistry we'd still be searching how to make gold,
>> because "it might just work this time".
>
> Theory developed to explain practice. Practice came first. I recall a
> well written eleventh-century treatise titled something like "How to
> Build a Thirty-Foot Bridge with Twelve-Foot Timbers". (Wood was becoming
> scarce in England around then.) No theory, but instructions I could
> follow. The heuristics were confined to footings and piers.
>
> Jerry

Nice example, but how many trial bridges have collapsed to get to that
invention? IMO theory helps us to save time and costs - often just by
counter-proving wrong assumptions. And sometimes you can reach new
results by following the "mathematical mechanics" in a theory. Not
everybody is a Stephen Hawking who can rotate quantum physics in his
mind without a helpful sheet of scribbling paper.

Andreas
-------
MinForth: http://minforth.net.ms

Chris Uppal

unread,
Feb 27, 2007, 3:44:55 PM2/27/07
to
Neelakantan Krishnaswami wrote:

So that resolution would have to happen at runtime. And if it didn't
resolve "correctly" (in the sense of "how it would have done at
compile-time if the type checker had worked") then /that/ is the error
which is discovered at runtime.

Some languages need the type system before they can even /parse/ the
language (e.g. Avail, and maybe Maude?). But I don't think Haskell is
in that rather bizarre group, is it?

-- chris

J Thomas

unread,
Feb 27, 2007, 4:07:16 PM2/27/07
to

Correct theory helps us save time and costs.

Very few bridges get built to test how correct the theory is.

Meanwhile, there are engineers who spend their entire professional
lives re-using the rules they've learned that work within their area
of expertise. Try to get them to work outside their expertise and they
quite properly send you to somebody else. I haven't noticed so much of
that in programming outside SEI's CMM where organisations get the
highest ratings if they can manage to do the same project over and
over again. The software you're best qualified to write is the same
software you've already written a dozen times.

Markus E Leypold

unread,
Feb 27, 2007, 4:25:47 PM2/27/07
to

"J Thomas" <jeth...@gmail.com> writes:

> On Feb 27, 3:31 pm, Andreas Kochenburger <a...@privat.de> wrote:
>> Jerry Avins wrote:
>> >> Supplement? It's actually the _fundament_ of practice. If not, in
>> >> chemistry we'd still be searching how to make gold,
>> >> because "it might just work this time".
>>
>> > Theory developed to explain practice. Practice came first. I recall a
>> > well written eleventh-century treatise titled something like "How to
>> > Build a Thirty-Foot Bridge with Twelve-Foot Timbers". (Wood was becoming
>> > scarce in England around then.) No theory, but instructions I could
>> > follow. The heuristics were confined to footings and piers.
>>
>> > Jerry
>>
>> Nice example, but how many trial bridges have collapsed to get to that
>> invention? IMO theory helps us to save time and costs - often just by
>> counter-proving wrong assumptions. And sometimes you can reach new
>> results by following the "mathematical mechanics" in a theory. Not
>> everybody is a Stephen Hawking who has to rotate quantum physics in his
>> mind without a helpful sheet of scribbling paper.
>
> Correct theory helps us save time and costs.
>
> Very few bridges get built to test how correct the theory is.
>
> Meanwhile, there are engineers who spend their entire professional
> lives re-using the rules they've learned that work within their area

They haven't (usually) learned those rules by trial and error, though,
but by memorizing theory or the condensed rules for application of
theory during their professional education.

> of expertise. Try to get them to work outside their expertise and they
> quite properly send you to somebody else. I haven't noticed so much of
> that in programming outside SEI's CMM where organisations get the
> highest ratings if they can manage to do the same project over and
> over again. The software you're best qualified to write is the same
> software you've already written a dozen times.


Regards -- Markus

Markus E Leypold

unread,
Feb 27, 2007, 4:27:34 PM2/27/07
to

"Chris Uppal" <chris...@metagnostic.REMOVE-THIS.org> writes:

I think it's a useless exercise. Sort of making a kind of Scheme from
a statically typed language. As I already wrote: It will not help
testing or finding the cause for type errors.

Regards -- Markus

Steve Schafer

unread,
Feb 27, 2007, 4:57:40 PM2/27/07
to
On Tue, 27 Feb 2007 10:29:19 -0500, Jerry Avins <j...@ieee.org> wrote:

>Theory developed to explain practice. Practice came first.

And practice is extended by applying theory. Otherwise, it's _all_ just
trial and error.

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/

Chung-chieh Shan

unread,
Feb 27, 2007, 5:15:44 PM2/27/07
to
Chris Uppal <chris...@metagnostic.remove-this.org> wrote in article <xn0f2ys4z...@news.gradwell.net> in comp.lang.functional:

> So that resolution would have to happen at runtime. And if it didn't
> resolve "correctly" (in the sense of "how it would have done at
> compile-time if the type checker had worked") then /that/ is the error
> which is discovered at runtime.

Could you please describe how runtime resolution would work for the
expression

negate (read "123")

? Here "negate" negates a number, and the overloaded function "read"
converts a string to a value (which could be a number or a Boolean).

--
Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig
I think that it is much more likely that the reports of flying saucers
are the result of the known irrational characteristics of terrestrial
intelligence rather than the unknown rational efforts of extraterrestrial
intelligence. -Richard Feynman

Rainer Joswig

unread,
Feb 27, 2007, 5:26:03 PM2/27/07
to
In article <slrneu8m4d...@gs3106.sp.cs.cmu.edu>,
Neelakantan Krishnaswami <ne...@cs.cmu.edu> wrote:

> In article <joswig-23FF7D....@news-europe.giganews.com>, Rainer
> Joswig wrote:
> >
> > How practical is it in 'reality'? Say, you don't (re)use
> > hash-tables and arrays in 'typical' ML programs?
>
> For mappings, I normally use a purely functional balanced tree of some
> kind rather than a hash table.
>
> I use arrays very rarely, and in most of those cases I only need the
> constant time lookup, so I don't ever modify an array after its
> creation. In fact, I honestly can't remember the last time I needed to
> modify an array after creation. (For example, in a state machine
> implementation I might end with an array to hold the transition table,
> which never gets modified after creation.)
>
> This is real in the sense that it's how I program, but I dunno if that's
> realistic enough for you. :)

Ha, that's hard to say from here. ;-)
From my personal experience, I find it
difficult to work without mutable hashtables and arrays,
even though I could. Practical experience with 'production'
Lisp systems shows that manual code optimizations to reuse
arrays for example improves runtime performance.

>
> > What is the LAMBDA or CONS side effect?
>
> Scheme has the eq? primitive which tests for object identity, and cons
> and lambda allocate memory in the heap. That means that
>
> (let ((x (cons 1 2))
>       (y (cons 1 2)))
>   (cons x y))
>
> and
>
> (let ((x (cons 1 2)))
>   (cons x x))
>
> are observably different.

Referential transparency or side effects? Isn't there a difference?

> (In fact, Scheme cons cells are mutable via
> set-car! and set-cdr!, but eq? is enough to make allocation visible.

Finite memory is already enough to make allocation visible.

Matthias Blume

unread,
Feb 27, 2007, 5:29:07 PM2/27/07
to
Rainer Joswig <jos...@lisp.de> writes:

> In article <m2hct82...@hanabi.local>,
> Matthias Blume <fi...@my.address.elsewhere> wrote:
>
>> Rainer Joswig <jos...@lisp.de> writes:
>>
>> > FP is as vague as OOP nowadays. It is almost useless.
>> > Inclusion or not inclusion is mostly political.
>>
>> A better term -- one I heard first from Andrew Appel (although I am
>> not sure where it originated) -- is "value-oriented programming",
>> which hits the nail on the head much better.
>
> A Lisp programmer knows the value of everything, but the cost of
> nothing. -- Alan Perlis

Indeed. But that's not where the phrase originated, I think. (It may
have been inspired by this quote, though.)

>> Whether a language is functional is (to me) a matter of degree:
>> Haskell is certainly functional (unless you program in the IO monad,
>> in which case it is still functional, but more so only in name, and
>> less so in spirit).
>
> The not-side-effect-free outside world makes a lot of problems. ;-)

No. The beauty is that Haskell lets you choose effectful programming
when it is deemed necessary for some reason -- and backs this up with
a type system that clearly identifies pure parts of the code.

>> ML is a bit less functional yet, since you are forced to program in
>> the IO monad all the time. Still, most of its data structures are
>> pure, and there is a significant and useful subset of the language
>> where programmers, compilers, and program analysis tools can safely
>> rely on freedom from side effects.
>
> How practical is it in 'reality'?
> Say, you don't (re)use hash-tables and arrays in 'typical' ML
> programs?

Others have already answered. I personally use immutable data
structures in ML all the time -- together with a judicious sprinkling
of mutable ones. There is no reason for CONS (or any other datatype
constructor) to allocate mutable state. Same for tuples and other
records, or closures. When stateful computations are needed, this can
be requested explicitly. Not every language construct has to be
infected by it.

>> Scheme is even less functional, although it is still quite close to
>> the ML. However, there are significantly fewer "pure" constructs.
>> Almost anything involving structured data has side effects. (CONS in
>> Scheme has a side effect, and even LAMBDA does, for crying out loud!)
>
> What is the LAMBDA or CONS side effect?

Cons allocates mutable state. Lambda allocates a closure (which is
not mutable, but which has identity).

>> Common Lisp and other Lisp 2s are still further removed, as they give
>> up on the notion of functions being "just ordinary data" by making a
>> distinction between the rules of evaluation in operator position and
>> other positions.
>
> For my practical programming purposes, this is mostly a
> complication, but not a limitation.

Sure. When I program in assembler, the complete lack of high-level
language features is also mostly a complication but not a
limitation. :-)

>>
>> That's my take on it, anyway.
>>
>> > Pattern matching has nothing to do with FP. (like it might be
>> > that some cars have automatic transmission for convenience,
>> > but a car is perfectly a car with manual transmission).
>>
>> To me, as the elimination form for sums, pattern matching is an
>> important aspect of /typed/ functional programming.
>
>> If you drive a car with automatic transmission all the time, you
> start to think that it is necessary to have it.

Speak for yourself. This statement is not true for me.

> If you remove or add pattern matching, it does not make a
> programming language more or less 'functional', IMHO.

I realize that we may be talking about two different (albeit related)
issues here:

I was talking about matching against simple (non-nested) patterns.
The corresponding "case" expressing is the commonly used as the
elimination form of sum types. I don't see any other (equally simple,
yet fundamentally different, and still safe) way of providing the
same functionality. (Church-encoding "case" does not count.)

As for nested patterns -- I completely agree that they are nice to
have, but not essential.

Matthias

J Thomas

unread,
Feb 27, 2007, 5:48:10 PM2/27/07
to
On Feb 27, 4:25 pm, Markus E Leypold

<development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
> "J Thomas" <jethom...@gmail.com> writes:

> >> Nice example, but how many trial bridges have collapsed to get to that
> >> invention?
>

> > Correct theory helps us save time and costs.
>
> > Very few bridges get built to test how correct the theory is.
>
> > Meanwhile, there are engineers who spend their entire professional
> > lives re-using the rules they've learned that work within their area
>
> They haven't (usually) learned that rules by trial and error, though,
> but by memorizing theory or the condensed rules for application of
> theory during their professional education.

I am not a civil engineer, so maybe I shouldn't pontificate too much
about how their programs are structured. Still, it looks to me like
it's a very different domain. Whenever an important bridge fails, they
have a complex inquest where they try to figure out why it happened,
and which safety margins need to be increased. They do a lot of theory
but it's in response to actual data. A lot of their theory isn't very,
well, theoretical.

When software developers have a large project fail, how often is there
a major investigation to find out just why it happened? How often are
the results of the investigation published? I occasionally see the
factoid that more than half of all large software projects fail,
though of course "large software project" is left undefined, and if
the size is increased sufficiently it becomes trivially true -- of
course more than half of the largest software projects fail. But if
more than half of the largest bridge projects failed....

So anyway, proceeding as if this factoid meant something, the obvious
conclusion is that we know how to build bridges but -- theory aside --
we don't know how to run large software projects. It's a different
domain.

And one of the differences is that engineers know how to keep small
failures from morphing into big failures. They know how to make 27
cubic meters of concrete even more stable than 9 cubic meters of
concrete. Within wide limits, a bigger beam is stronger than a smaller
beam. But software people don't know how to make 10,000 lines of code
more stable than 1000 LOC, and in general a bigger program is not more
robust than a smaller program.

It might be a better comparison if every bridge had to be made
entirely of glass. Tighten a glass nut on a glass bolt too tight and
something cracks. A glass ballast might get a tiny crack that expands
conchoidally until the whole thing splits. A car breaking through the
glass guardrails on an upper deck might spin as it falls and hit the
lower deck hard enough to break it all the way across.... Lots of
minor-looking flaws that can turn into fatal flaws.

And then it would also be more like software if the engineers simply
accepted that things go wrong sometimes.

"Er, well, it was an accident."
"Ho! Vell, zee? An exident. Vell, vun ting ve Jagerkin *understend* is
dat crezy exidents *happen*, right boyz?"
"Hoo boy, yez."
"Dot's de truth."

If software was like civil engineering, it would be a rare disaster
when a software project failed. And it would be a scandal when a bug
fix was needed. We would have a large body of shared experience, with
a large collection of tools that work reliably. But somehow software
is a lot more like military R&D contracting.

Rainer Joswig

unread,
Feb 27, 2007, 6:31:21 PM2/27/07
to
In article <m17iu38...@hana.uchicago.edu>,
Matthias Blume <fi...@my.address.elsewhere> wrote:

...

> >> Scheme is even less functional, although it is still quite close to
> >> the ML. However, there are significantly fewer "pure" constructs.
> >> Almost anything involving structured data has side effects. (CONS in
> >> Scheme has a side effect, and even LAMBDA does, for crying out loud!)
> >
> > What is the LAMBDA or CONS side effect?
>
> Cons allocates mutable state. Lambda allocates a closure (which is
> not mutable, but which has identity).

I asked that in another posting. Are we talking about side effects
or referential transparency? Whether CONS allocates mutable state
or not does not make CONS have side effects. The side effect
happens if I (avoiding the 'you' so I can speak of myself)
change the CONS cell later. As long as I don't do that my
programs are side effect free - the usage of the CONS function
won't change that.

> >> Common Lisp and other Lisp 2s is still further removed, as they give
> >> up on the notion of functions being "just ordinary data" by making a
> >> distinction between the rules of evaluation in operator position and
> >> other positions.
> >
> > For my practical programming purposes, this is mostly a
> > complication, but not a limitation.
>
> Sure. When I program in assembler, the complete lack of high-level
> language features is also mostly a complication but not a
> limitation. :-)

I don't think this is on the same level. Having higher-order
functions, but using a slightly different notation/mechanism
doesn't really complicate the use of higher-order functions
that much. In Lisp 2 higher-order functions are used all the time.
Lots of them are part of the language standard.
In assembler I haven't seen them often (I should check
the assembler of my Lisp machine), though
I haven't looked at x86/PowerPC/... assembler much lately.

...

Matthias Blume

unread,
Feb 27, 2007, 6:51:23 PM2/27/07
to
Rainer Joswig <jos...@lisp.de> writes:

> Whether CONS allocates mutable state or not does not make CONS
> have side effects.

That is not the way I (and many others) use the term "effect".

> The side effect happens if I (avoiding the 'you' so I can speak of
> myself) change the CONS cell later.

That would be another side effect.

Frank Buss

unread,
Feb 27, 2007, 7:09:40 PM2/27/07
to
J Thomas wrote:

> We would have a large body of shared experience, with
> a large collection of tools that work reliably.

There are some good books of best practices, like "The Pragmatic
Programmer" by Andrew Hunt and David Thomas, and if you study computer
science, you'll learn things like how to mathematically prove software or
how to identify equivalence classes of a problem to design test cases
that cover most of the possible program states. And there are tools
that support these concepts.

But I think for many projects the problem is that it is more important to
release fast, with the idea that errors can be fixed later, and too many
customers have learned to accept this. And many programmers don't know
the theoretical background well enough. Of course, you can't mathematically
prove big programs in reasonable time, but knowing the concepts helps you
to write better programs anyway.

--
Frank Buss, f...@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

jos...@corporate-world.lisp.de

unread,
Feb 27, 2007, 7:34:34 PM2/27/07
to
On 28 Feb., 00:51, Matthias Blume <f...@my.address.elsewhere> wrote:
> Rainer Joswig <jos...@lisp.de> writes:
> > Whether CONS allocates mutable state or not does not make CONS
> > have side effects.
>
> That is not the way I (and many others) use the term "effect".

How should the above sentence help me to understand your point?

Jerry Avins

unread,
Feb 27, 2007, 10:21:40 PM2/27/07
to

There were plenty of collapses after theory ruled also. Also in England
-- great Britain by that time -- a great deal was learned about metal
fatigue from the frequent collapse of cast-iron railroad bridges, and of
course there was the Firth of Tay of bridge (and more recently,
Galloping Gertie over the Tacoma Narrows). Theory is being refined all
the time, and that's good, but the foundation of good design is
observing disasters and near disasters, then using whatever means
available -- usually physics and math -- to minimize the chance of it
happening again. The cathedral builders knew nothing about the "middle
ninth" rule for compression stress in piers, but they raised spires on
the pier tops to achieve it. Using the weight of the spires (later
generations, with theory to understand their purpose, nevertheless assumed
they were decorative) to close gaps in the masonry shifted the line of
action into that region. Theory is good. Observation is good. I'm an
engineer. I make mistakes. I try to learn from them.

Matthias Blume

unread,
Feb 27, 2007, 10:40:39 PM2/27/07
to
jos...@corporate-world.lisp.de writes:

> On 28 Feb., 00:51, Matthias Blume <f...@my.address.elsewhere> wrote:
>> Rainer Joswig <jos...@lisp.de> writes:
>> > Whether CONS allocates mutable state or not does not make CONS
>> > having side effects.
>>
>> That is not the way I (and many others) use the term "effect".
>
> How should above sentence help me to understand your point?

It is not a point that I am making but merely a matter of terminology.

You seem to make a distinction between what you call "not
referentially transparent" and "has an effect". I don't.

A computation is free of effects if I can run it twice without being
able to distinguish (now or later) the two results (in any way
whatsoever). Applications of CONS in Lisp or Scheme do not have this
property; applications of :: in ML or : in Haskell do.
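
A minimal Common Lisp sketch of that criterion (my illustration, not
Blume's own): two runs of the "same" CONS are distinguishable, first
by EQ and then under mutation, so CONS fails the test, while ML's ::
builds immutable values whose two results cannot be told apart.

  (let ((a (cons 1 2))
        (b (cons 1 2)))              ; "run it twice"
    (print (eq a b))                 ; NIL -- the results are already
                                     ; distinguishable by identity
    (setf (car a) 99)
    (print (list (car a) (car b))))  ; (99 1) -- and they diverge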

Jerry Avins

unread,
Feb 28, 2007, 12:05:18 AM2/28/07
to
Steve Schafer wrote:
> On Tue, 27 Feb 2007 10:29:19 -0500, Jerry Avins <j...@ieee.org> wrote:
>
>> Theory developed to explain practice. Practice came first.
>
> And practice is extended by applying theory. Otherwise, it's _all_ just
> trial and error.

Theory isn't always pertinent, or the insight needed to apply the
proper theory doesn't leap to mind. Phlogiston was a theory useful for
guiding some experiments but ultimately misleading. Nobody supposed
that a beam could fail in diagonal shear until slanted cracks appeared
in what practice predicted should be a safe design. The theory of web
crippling is known but not used in practice: follow the rules in the
handbook and it'll come out safe. Structural builders learned to avoid
brittle materials in tension or bending; very little in theory suggests
why we should. We turn to theory when things go wrong. Usually, we turn
to the handbook.

Jerry Avins

unread,
Feb 28, 2007, 7:31:13 AM2/28/07
to
Steve Schafer wrote:
> On Tue, 27 Feb 2007 10:29:19 -0500, Jerry Avins <j...@ieee.org> wrote:
>
>> Theory developed to explain practice. Practice came first.
>
> And practice is extended by applying theory. Otherwise, it's _all_ just
> trial and error.

There's a shorter way to say it than I thought of last night. It _is_
all trial and error. _Somebody_ has to try out the theory to see if it
works and if not, why not.

Steve Schafer

unread,
Feb 28, 2007, 8:12:11 AM2/28/07
to
On Wed, 28 Feb 2007 07:31:13 -0500, Jerry Avins <j...@ieee.org> wrote:

>There's a shorter way to say it than I thought of last night. It _is_
>all trial and error. _Somebody_ has to try out the theory to see if it
>works and if not, why not.

You're using some very nonstandard definitions.

Theory predicts what will happen when you try something; you use it to
_direct_ your trials. If you build a bridge and it falls down, and you
look at the remains and say to yourself, "Hmmm, I'll bet that if I beef
up those two beams, it will work," then you're using theory. _Any_ time
you do a trial based on a prediction of its performance, you're using
theory. The only way to _not_ use theory is to randomly try stuff until
something works.

Jerry Avins

unread,
Feb 28, 2007, 8:26:54 AM2/28/07
to

Theory was too weak to support the approach you describe until well
after Newton's contributions to math and physics. Many excellent
structures were built before that time. How?

jerry

J Thomas

unread,
Feb 28, 2007, 9:45:29 AM2/28/07
to
On Feb 28, 8:26 am, Jerry Avins <j...@ieee.org> wrote:
> Steve Schafer wrote:
> > On Wed, 28 Feb 2007 07:31:13 -0500, Jerry Avins <j...@ieee.org> wrote:

> > Theory predicts what will happen when you try something; you use it to
> > _direct_ your trials. If you build a bridge and it falls down, and you
> > look at the remains and say to yourself, "Hmmm, I'll bet that if I beef
> > up those two beams, it will work," then you're using theory. _Any_ time
> > you do a trial based on a prediction of its performance, you're using
> > theory. The only way to _not_ use theory is to randomly try stuff until
> > something works.
>
> Theory was too weak to support the approach you describe until well
> after Newton's contributions to math and physics. Many excellent
> structures were built before that time. How?

By stretching the idea of what theory is until it fits pretty much
anything anybody does.

But anyway, my point isn't that you can make things that work without
a sophisticated or correct theory about why they work. That's a given.
My point is that large engineering projects usually don't fall down,
but large software projects usually do fall down. This suggests that
our theory as currently understood is not adequate.

Tenth-century builders did have sophisticated theory to fall back on.
It was called astrology. It may have acted to reduce construction
accidents -- I don't know, I haven't seen the data. Certainly if the
project astrologer says not to work on Monday or there's likely to be
an accident, and you work on Monday anyway and sure enough there's an
accident, you're likely to believe he knows what he's talking about.

So is modern computer science more like pre-Newtonian physics, or is
it more like astrology? It may take decades of experience to find out.

But here's one reason that things might seem to go slower today -- we
are doing things far more in parallel. If we started on average one
large software project a year, we'd pay a lot more attention to it.
We'd put a lot more effort into analysing the failures. The ratio of
theory to experience would go up. And yet with so little data the
total amount of theory might stay fairly small. As it is we have more
projects than we can pay attention to, and more theories than we can
pay attention to. It's hard to pick out the good ones; there's just
too much there.

For whatever it's worth, my own theory about project development is
based heavily on John Gall.

"A complex system that works is invariably found to have evolved from
a simple system that works. A complex system designed from scratch
never works and cannot be patched up to make it work. You have to
start over, beginning with a working simple system."

I think he overstates the case a little; what he says is more like 95%
or 99% true. But it's still the way to bet.
