> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:ch...@cse.unsw.edu.au]
> Sent: Wednesday, May 09, 2001 12:57 AM
> To: b...@jenkon.com
> Cc: haskel...@haskell.org
> Subject: RE: Functional programming in Python
>
[Bryn Keller] [snip]
>
> > and I have to agree with Dr. Mertz - I find
> > Haskell much more palatable than Lisp or Scheme. Many (most?) Python
> > programmers also have experience in more typeful languages (typically at
> > least C, since that's how one writes Python extension modules) so perhaps
> > that's not as surprising as it might seem.
>
> Ok, but there are worlds between C's type system and
> Haskell's.[1]
>
[Bryn Keller]
Absolutely! C's type system is not nearly so powerful or unobtrusive
as Haskell's.
> > Type inference (to my mind at least) fits the Python mindset very
> > well.
>
> So, how about the following conjecture? Types essentially
> only articulate properties about a program that a good
> programmer would be aware of anyway and would strive to
> reinforce in a well-structured program. Such a programmer
> might not have many problems with a strongly typed language.
[Bryn Keller]
I would agree with this.
> Now, to me, Python has this image of a well designed
> scripting language attracting the kind of programmer who
> strives for elegance and well-structured programs. Maybe
> that is a reason.
[Bryn Keller]
This, too. :-)
[Bryn Keller] [snip]
> Absolutely. In fact, you have just pointed out one of the
> gripes that I have with most Haskell texts and courses. The
> shunning of I/O in textbooks is promoting the image of
> Haskell as a purely academic exercise. Something which is
> not necessary at all, I am teaching an introductory course
> with Haskell myself and did I/O in Week 5 out of 14 (these
> are students without any previous programming experience).
> Moreover, IIRC Paul Hudak's book <http://haskell.org/soe/>
> also introduces I/O early.
>
> In other words, I believe that this is a problem with the
> presentation of Haskell and not with Haskell itself.
>
> Cheers,
> Manuel
>
> [1] You might wonder why I am pushing this point. It is
> just because the type system seems to be a hurdle for
> some people who try Haskell. I am curious to understand
> why it is a problem for some and not for others.
[Bryn Keller]
	Since my first message and your and Simon Peyton-Jones's responses,
I've taken a little more time to work with Haskell, re-read Tackling the
Awkward Squad, and browsed the source for Simon Marlow's web server, and
it's starting to feel more comfortable now. In the paper and in the server
source, there is certainly a fair amount of IO work happening, and it all
looks fairly natural and intuitive.
Mostly I find when I try to write code following those examples (or
so I think!), it turns out to be not so easy, and the real difficulty is
that I can't even put my finger on why it's troublesome. I try many
variations on a theme - some work, some fail, and often I can't see why. I
should have kept all the versions of my program that failed for reasons I
didn't understand, but unfortunately I didn't... The only concrete example
of something that confuses me I can recall is the fact that this compiles:
main = do allLines <- readLines; putStr $ unlines allLines
  where readLines = do
          eof <- isEOF
          if eof then return [] else
            do
              line <- getLine
              allLines <- readLines
              return (line : allLines)
but this doesn't:
main = do putStr $ unlines readLines
  where readLines = do
          eof <- isEOF
          if eof then return [] else
            do
              line <- getLine
              allLines <- readLines
              return (line : allLines)
	Evidently this is wrong, but my intuition is that <- simply binds a
name to a value, and that:

  foo <- somefunc
  bar foo

should be identical to:

  bar somefunc
That was one difficulty. Another was trying to figure out what the $
sign was for. Finally I realized it was an alternative to parentheses,
necessary due to the extremely high precedence of function application in
Haskell. That high precedence is also disorienting, by the way. What's the
rationale behind it?
Struggling along, but starting to enjoy the aesthetics of Haskell,
Bryn
p.s. What data have your students' reactions given you about what is
and is not difficult for beginners to grasp?
_______________________________________________
Haskell-Cafe mailing list
Haskel...@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
> > From: Manuel M. T. Chakravarty [SMTP:ch...@cse.unsw.edu.au]
> > Absolutely. In fact, you have just pointed out one of the
> > gripes that I have with most Haskell texts and courses. The
> > shunning of I/O in textbooks is promoting the image of
> > Haskell as a purely academic exercise. Something which is
> > not necessary at all, I am teaching an introductory course
> > with Haskell myself and did I/O in Week 5 out of 14 (these
> > are students without any previous programming experience).
> > Moreover, IIRC Paul Hudak's book <http://haskell.org/soe/>
> > also introduces I/O early.
> >
> > In other words, I believe that this is a problem with the
> > presentation of Haskell and not with Haskell itself.
>
No, that is not the case. It does more: it executes an I/O action.
> foo <- somefunc
> bar foo
>
> should be identical to:
>
> bar somefunc
But it isn't; however, we have

  do
    let foo = somefunc
    bar foo

is identical to

  do
    bar foo
So, this all boils down to the question, what is the
difference between

  do
    let foo = somefunc   -- Version 1
    bar foo

and

  do
    foo <- somefunc      -- Version 2
    bar foo
The short answer is that Version 2 (the arrow) executes any
side effects encoded in `somefunc', whereas Version 1 (the
let binding) doesn't do that. Expressions given as an
argument to a function behave as if they were let bound, ie,
they don't execute any side effects. This explains why the
identity that you stated above does not hold.
So, at the core is that Haskell insists on distinguishing
expressions that can have side effects from those that
cannot. This distinction makes the language a little bit
more complicated (eg, by forcing us to distinguish between
`=' and `<-'), but it also has the benefit that both a
programmer and the compiler can immediately tell which
expressions do have side effects and which don't. For
example, this often makes it a lot easier to alter code
written by somebody else. It also makes it easier to
formally reason about code and it gives the compiler scope
for rather radical optimisations.
To reinforce the distinction, consider the following two
pieces of code (where `readLines' is the routine you defined
above):

  do
    let x = readLines
    y <- x
    z <- x
    return (y ++ z)

and

  do
    x <- readLines
    let y = x
    let z = x
    return (y ++ z)
How is the result (and I/O behaviour) different?
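For readers who want to experiment, the let-versus-arrow distinction can be checked with a small runnable sketch; the IORef counter below is an illustrative stand-in (not from the thread) for an effectful `somefunc`:

```haskell
import Data.IORef

-- A runnable sketch of the let-vs-arrow distinction. `tick` plays the
-- role of the hypothetical `somefunc`: an action with a visible side
-- effect (bumping a counter).
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  let tick = modifyIORef counter (+ 1)  -- only names the action; runs nothing
  n0 <- readIORef counter               -- still 0: the let executed no effect
  tick                                  -- executing the action: effect runs once
  tick                                  -- and again
  n2 <- readIORef counter               -- now 2
  print (n0, n2)                        -- prints (0,2)
```

Running it prints (0,2): the `let` binding alone never incremented the counter; only executing `tick` as a statement did.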
> That was one difficulty. Another was trying to figure out what the $
> sign was for. Finally I realized it was an alternative to parentheses,
> necessary due to the extremely high precedence of function application in
> Haskell. That high precedence is also disorienting, by the way. What's the
> rationale behind it?
You want to be able to write

  f 1 2 + g 3 4

instead of

  (f 1 2) + (g 3 4)
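A quick check that the two spellings agree; `f` and `g` here are arbitrary illustrative definitions, not from the thread:

```haskell
-- Application binds tighter than any infix operator, so both
-- spellings below parse to the same expression.
f, g :: Int -> Int -> Int
f x y = 10 * x + y    -- f 1 2 == 12
g x y = 100 * x + y   -- g 3 4 == 304

main :: IO ()
main = do
  print (f 1 2 + g 3 4)      -- application first, then (+)
  print ((f 1 2) + (g 3 4))  -- the fully parenthesized form: same value
```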
> p.s. What data have your students' reactions given you about what is
> and is not difficult for beginners to grasp?
They found it to be a difficult topic, but they found
"Unix/Shell scripts" even harder (and we did only simple
shell scripts). I actually made another interesting
observation (and keep in mind that for many that was their
first contact with programming). I had prepared for the
distinction between side effecting and non-side-effecting
expressions to be a hurdle in understanding I/O. What I
hadn't taken into account was the fact that they had
only worked in an interactive interpreter environment (as
opposed to, possibly compiled, standalone code) would pose
a problem for them. The interactive interpreter had allowed
them to type in input and get results printed all along,
so they didn't see why it should be necessary to complicate
a program with print statements.
I append the full breakdown of the student answers.
Cheers,
Manuel
-=-
                           Very                                  Very
                           difficult           Average           easy
Recursive functions         3.8%    16.1%    44.2%    25.2%    12.1%
List processing             5.2%    18.0%    44.0%    25.4%     8.8%
Pattern matching            3.0%    15.2%    41.4%    27.8%    14.0%
Association lists           4.5%    28.5%    48.5%    15.4%     4.5%
Polymorphism/overloading   10.9%    44.2%    37.8%     5.9%     2.6%
Sorting                     5.7%    33.5%    47.6%    11.6%     3.0%
Higher-order functions     16.9%    43.0%    31.4%     8.5%     1.6%
Input/output               32.6%    39.7%    19.7%     7.3%     2.1%
Modules/decomposition      12.8%    37.1%    35.9%    12.1%     3.5%
Trees                      29.5%    41.9%    21.9%     5.7%     2.6%
ADTs                       35.9%    36.4%    20.9%     6.1%     2.1%
Unix/shell scripts         38.5%    34.7%    20.7%     5.7%     1.9%
Formal reasoning           11.1%    22.6%    31.9%    20.9%    15.0%
> The short answer is that Version 2 (the arrow) executes any
> side effects encoded in `somefunc', whereas Version 1 (the
> let binding) doesn't do that. Expressions given as an
> argument to a function behave as if they were let bound, ie,
> they don't execute any side effects. This explains why the
> identity that you stated above does not hold.
>
> So, at the core is that Haskell insists on distinguishing
> expressions that can have side effects from those that
> cannot. This distinction makes the language a little bit
> more complicated (eg, by forcing us to distinguish between
> `=' and `<-'), but it also has the benefit that both a
> programmer and the compiler can immediately tell which
> expressions do have side effects and which don't. For
> example, this often makes it a lot easier to alter code
> written by somebody else. It also makes it easier to
> formally reason about code and it gives the compiler scope
> for rather radical optimisations.
>
[Bryn Keller]
Exactly the clarification I needed, thank you!
[Bryn Keller] [snip]
> > p.s. What data have your students' reactions given you about what is
> > and is not difficult for beginners to grasp?
>
> They found it to be a difficult topic, but they found
> "Unix/Shell scripts" even harder (and we did only simple
> shell scripts). I actually made another interesting
> observation (and keep in mind that for many that was their
> first contact with programming). I had prepared for the
> distinction between side effecting and non-side-effecting
> expressions to be a hurdle in understanding I/O. What I
> hadn't taken into account was the fact that they had
> only worked in an interactive interpreter environment (as
> opposed to, possibly compiled, standalone code) would pose
> a problem for them. The interactive interpreter had allowed
> them to type in input and get results printed all along,
> so they didn't see why it should be necessary to complicate
> a program with print statements.
[Bryn Keller]
Interesting!
Thanks for your help, and for sharing your students' observations. I
always knew shell scripting was harder than it ought to be. ;-)
Bryn
> You want to be able to write
> f 1 2 + g 3 4
> instead of
> (f 1 2) + (g 3 4)
I do? Personally, I find it a bit confusing, and I still often get it
wrong on the first attempt. The good thing is that the rule is simple
to remember. :-)
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
Same here. A while back someone said something along the lines that people
come to Haskell because of the syntax. For me it is the other way around.
My background is in Scheme/Lisp, and I still find it irritating that I cannot
just say indent-sexp and the like in Emacs. It is the other properties of the
language that keep me using it. I also get irritated when I get
precedence wrong, so in fact I tend to write (f 1 2) + (g 2 3), which to
my eye conveys the intended structure much better and compiles at first try.
--
pertti
> > From: Ketil Malde <ke...@ii.uib.no>
> > "Manuel M. T. Chakravarty" <ch...@cse.unsw.edu.au> writes:
> > > You want to be able to write
> >
> > > f 1 2 + g 3 4
> >
> > > instead of
> >
> > > (f 1 2) + (g 3 4)
> >
> > I do? Personally, I find it a bit confusing, and I still often get it
> > wrong on the first attempt.
>
> Same here. A while back someone said something along the lines that people
> come to Haskell because of the syntax. For me it is the other way around.
> My background is in Scheme/Lisp, and I still find it irritating that I cannot
> just say indent-sexp and the like in Emacs. It is the other properties of the
> language that keep me using it. I also get irritated when I get
> precedence wrong, so in fact I tend to write (f 1 2) + (g 2 3), which to
> my eye conveys the intended structure much better and compiles at first try.

In languages that don't use currying, you would write

  f (1, 2) + g (2, 3)

which also gives application precedence over infix
operators. So, I think, we can safely say that application
being stronger than infix operators is the standard
situation.
Nevertheless, the currying notation is a matter of habit.
It took me a while to get used to it, too (as did layout).
But now, I wouldn't want to miss them anymore. And as far
as layout is concerned, I think, the Python people have made
the same experience. For humans, it is quite natural to use
visual cues (like layout) to indicate semantics.
Cheers,
Manuel
Agreed, though you must remember that where I come from there is no
precedence at all.
> And as far
> as layout is concerned, I think, the Python people have made
> the same experience. For humans, it is quite natural to use
> visual cues (like layout) to indicate semantics.
Two points: I have been with Haskell less than half a year, and already
I have run into a layout-related bug in a tool that produces Haskell
source. This does not raise my confidence in the approach very much.
Second, to a Lisp-head like myself something like

  (let ((a 0)
        (b 1))
    (+ a b))

does exactly what you say: it uses layout to indicate semantics. The
parentheses are there only to indicate semantics to the machine, and to
make it easy for tools to pretty print the expression in such a way that
the layout reflects the semantics as seen by the machine.
But all this is not very constructive, because Haskell is not going to
change into a fully parenthesized prefix syntax at my wish.
--
Pertti Kellom\"aki, Tampere Univ. of Technology, Software Systems Lab
I agree, but let us not try to do that with just two (already overloaded)
symbols.
> (let ((a 0)
>       (b 1))
>   (+ a b))
let { a = 0; b = 1; } in a + b
is valid Haskell and the way I use the language. Enough and more descriptive
visual cues, I say.
Using layout is an option, not a rule (although the thing is called layout
rule...)
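That option can be checked directly; both forms below denote the same expression (`braced` and `layouted` are illustrative names, not from the thread):

```haskell
-- The explicit-brace form and the layout form of the same let.
braced :: Int
braced = let { a = 0; b = 1 } in a + b

layouted :: Int
layouted = let a = 0
               b = 1
           in a + b

main :: IO ()
main = print (braced == layouted && braced == 1)  -- prints True
```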
> But all this is not very constructive, because Haskell is not going to
> change into a fully parenthesized prefix syntax at my wish.
Thank god :-)
Arjan
Why not have your tool generate layout-less code? Surely that would be
easier to program, and be less error prone.
> Second, to a Lisp-head like myself something like
> (let ((a 0)
>       (b 1))
>   (+ a b))
> does exactly what you say: it uses layout to indicate semantic.
Yes, but the layout is not ENFORCED. I programmed in Lisp for many
years before switching to Haskell, and a common error is something like
this:
> (let ((a 0)
>       (b 1)
>    (+ a b)))
In this case the error is relatively easy to spot, but in denser code it
can be very subtle. So in fact using layout in Lisp can imply a
semantics that is simply wrong.
-Paul
Paul Hudak wrote:
> Why not have your tool generate layout-less code? Surely that would be
> easier to program, and be less error prone.
The tool in question is Happy, and the error materialized as an interaction
between the tool-generated parser code and the hand-written code in actions.
So no, this was not an option since the tool is not written by me, and given
my current capabilities in Haskell I could not even fix it. On the other hand
the bug is easy to work around, and it might even be fixed in newer versions
of Happy.
> Yes, but the layout is not ENFORCED. I programmed in Lisp for many
> years before switching to Haskell, and a common error is something like
> this:
>
> > (let ((a 0)
> >       (b 1)
> >    (+ a b)))
>
> In this case the error is relatively easy to spot, but in denser code it
> can be very subtle. So in fact using layout in Lisp can imply a
> semantics that is simply wrong.
Maybe I did not express my point clearly. What I was trying to say was that
because of the syntax, it is very easy for M-C-q in Emacs to convert that to

  (let ((a 0)
        (b 1)
        (+ a b)))

which brings the layout of the source code to agreement with how it is
perceived by the compiler/interpreter. So it is easy for me to enforce the
layout.
This is not so much of an issue when you are writing the code in the first
place, but I find it a pain to have to adjust indentation when I move bits
of code around in an evolving program. If there is good support for that,
then I'll just shut up and start using it. After all, I have only been using
Haskell for a very short period of time.
--
pertti
No problem :-)
> Maybe I did not express my point clearly. What I was trying to say was
> that because of the syntax, it is very easy for M-C-q in Emacs to convert
> that to ...
Ok, I understand now. So clearly we just need better editing tools for
Haskell, which I guess is part of your point.
By the way, there are many Haskell programmers who prefer to write their
programs like this:
  let { a = x
      ; b = y
      ; c = z
      }
  in ...
which arguably has its merits.
-Paul
> -----Original Message-----
> From: Manuel M. T. Chakravarty [SMTP:ch...@cse.unsw.edu.au]
> Sent: Tuesday, May 22, 2001 6:55 AM
> To: p...@cs.tut.fi
> Cc: haskel...@haskell.org
> Subject: Re: Functional programming in Python
>
> Pertti Kellomäki <p...@cs.tut.fi> wrote,
>
> > > From: Ketil Malde <ke...@ii.uib.no>
> > > "Manuel M. T. Chakravarty" <ch...@cse.unsw.edu.au> writes:
> > > > You want to be able to write
> > >
> > > > f 1 2 + g 3 4
> > >
> > > > instead of
> > >
> > > > (f 1 2) + (g 3 4)
> > >
> > > I do? Personally, I find it a bit confusing, and I still often get it
> > > wrong on the first attempt.
> >
> > Same here. A while back someone said something along the lines that
> people
> > come to Haskell because of the syntax. For me it is the other way
> around.
> > My background is in Scheme/Lisp, and I still find it irritating that I
> cannot
> > just say indent-sexp and the like in Emacs. It is the other properties
> of the
> > language that keep me using it. I also get irritated when I get
> > precedence wrong, so in fact I tend to write (f 1 2) + (g 2 3), which to
> > my eye conveys the intended structure much better and compiles at first
> try.
>
> In languages that don't use currying, you would write
>
>   f (1, 2) + g (2, 3)
>
> which also gives application precedence over infix
> operators. So, I think, we can safely say that application
> being stronger than infix operators is the standard
> situation.
[Bryn Keller]
	There's another piece to this question that we're overlooking, I
think. It's not just a difference (or lack thereof) in precedence, it's the
fact that parentheses indicate application in Python and many other
languages, and a function name without parentheses after it is a reference
to the function, not an application of it. This has nothing to do with
currying that I can see - you can have curried functions in Python, and they
still look the same. The main advantage I see for the Haskell style is
(sometimes) fewer keypresses for parentheses, but I still find it surprising
at times. Unfortunately in many cases you need to apply nearly as many
parens for a Haskell expression as you would for a Python one, but they're
in different places. It's not:

  foo( bar( baz( x ) ) )

it's:

  (foo ( bar (baz x) ) )

	I'm not sure why folks thought this was an improvement. I suppose it
bears more resemblance to lambda calculus?
> Nevertheless, the currying notation is a matter of habit.
> It took me a while to get used to it, too (as did layout).
> But now, I wouldn't want to miss them anymore. And as far
> as layout is concerned, I think, the Python people have made
> the same experience. For humans, it is quite natural to use
> visual cues (like layout) to indicate semantics.
[Bryn Keller]
	Absolutely. Once you get used to layout (Haskell style or Python
style), everything else looks like it was designed specifically to irritate
you. On the other hand, it's nice to have a brace-delimited style since that
makes autogenerating code a lot easier.
	Bryn
> Cheers,
> Manuel
> There's another piece to this question that we're overlooking, I
> think. It's not just a difference (or lack thereof) in precedence, it's the
> fact that parentheses indicate application in Python and many other
> languages, and a function name without parentheses after it is a reference
> to the function, not an application of it. This has nothing to do with
> currying that I can see - you can have curried functions in Python, and they
> still look the same. The main advantage I see for the Haskell style is
> (sometimes) fewer keypresses for parentheses, but I still find it surprising
> at times. Unfortunately in many cases you need to apply nearly as many
> parens for a Haskell expression as you would for a Python one, but they're
> in different places. It's not:
>
> foo( bar( baz( x ) ) )
> it's:
> (foo ( bar (baz x) ) )
>
> I'm not sure why folks thought this was an improvement. I suppose it
> bears more resemblance to lambda calculus?
In Haskell, one doesn't need to distinguish "a reference to the function" from
"an application of it". As a result, parentheses need to serve only a single
function, that of grouping. Parentheses surround an entire function
application, just as they surround an entire operation application:
foo (fum 1 2) (3 + 4)
I find this very consistent, simple, and elegant.
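A minimal sketch of that single grouping role; `fum` and `foo` are hypothetical definitions supplied here just to make the line compile:

```haskell
-- The same parentheses group a function application (fum 1 2)
-- and an operator application (3 + 4).
fum :: Int -> Int -> Int
fum x y = x + y   -- fum 1 2 == 3

foo :: Int -> Int -> Int
foo x y = x * y

main :: IO ()
main = print (foo (fum 1 2) (3 + 4))  -- foo 3 7, prints 21
```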
Dean
> Unfortunately in many cases you need to apply nearly as many
> parens for a Haskell expression as you would for a Python one, but
> they're in different places. It's not:
>
> foo( bar( baz( x ) ) )
> it's:
> (foo ( bar (baz x) ) )
Clearly the outer parentheses are unnecessary in the last expression.
One undeniable advantage of (f a) is it saves parentheses.
My feeling is that the f(a) (mathematical) notation works well when
type set or handwritten, but the (f a) (combinatory logic) notation
looks better with non-proportional fonts.
In a way the f(a) notation "represents things better": the f is at a
higher parenthesis level than the a.
Peter Hancock
> > foo( bar( baz( x ) ) )
> > it's:
> > (foo ( bar (baz x) ) )
>
> Clearly the outer parentheses are unnecessary in the last expression.
> One undeniable advantage of (f a) is it saves parentheses.
Yes and no. In

  ( ( ( foo bar) baz) x )

the parens can be omitted to leave

  foo bar baz x

but in

  ( foo ( bar (baz x) ) )

you would want the following, I think:

  foo . bar . baz x

which does have the parens omitted, but requires the composition operator.
--PeterD
Almost. To preserve the meaning, the composition syntax would need to be

  (foo . bar . baz) x

or

  foo . bar . baz $ x

or something along those lines. I favour the one with parens around
the dotty part, and tend to use $ only when a closing paren is
threatening to disappear over the horizon.

  do ...
     return $ case ... of
       ... -- many lines
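The equivalences above can be checked with a small sketch; `foo`, `bar`, and `baz` are illustrative definitions, not from the thread:

```haskell
-- Nesting, parenthesized composition, and composition applied with $
-- all denote the same value.
foo, bar, baz :: Int -> Int
foo = (+ 1)
bar = (* 2)
baz = subtract 3   -- baz 10 == 7

main :: IO ()
main = do
  print (foo (bar (baz 10)))    -- 15
  print ((foo . bar . baz) 10)  -- 15
  print (foo . bar . baz $ 10)  -- 15
```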
Regards,
Tom
Haskell                              Non-Haskell
Left Associative                     Right Associative
foo (bar (baz (x)))                  foo bar baz x
foo $ bar $ baz x                    foo bar baz x
add (square x) (square y)            add square x square y
add (square x) y                     add square x y
------------ From Prelude ------------
map f x                              (map f) x
f x (n - 1) x                        f x n - 1 x
f x (foldr1 f xs)                    f x foldr1 f xs
showChar '[' . shows x . showl xs    showChar '[' shows x showl xs
You just need to read from right to left accumulating a stack of
arguments. When you hit a function that can consume some arguments, it
does so. There is an error if you end up with more than one value on
the argument stack.
-Alex-
___________________________________________________________________
S. Alexander Jacobson Shop.Com
1-646-638-2300 voice The Easiest Way To Shop (sm)
Note that in your proposal,

  add square x y

is parsed as

  add (square x) y

instead of

  add (square (x y)),

so it's not right associative either.
As you explained, the parse of an expression depends on the types of the
sub-expressions, which imo is BAD. Just consider type inference...
-- Zhanyong
Ok, your complaint is that f a b c = a b c could have type
(a->b->c)->a->b->c or type (b->c)->(a->b)->a->c depending on the arguments
passed, e.g. (f head (map (+2)) [3]) has a different type from (f add 2 3).
Admittedly, this is different from how Haskell type checks now. I guess
the question is whether it is impossible to type check or whether it just
requires modification to the type checking algorithm. Does anyone know?
-Alex-
___________________________________________________________________
S. Alexander Jacobson Shop.Com
1-646-638-2300 voice The Easiest Way To Shop (sm)
>Admittedly, this is different from how haskell type checks now. I guess
>the question is whether it is impossible to type check or whether it just
>requires modification to the type checking algorithm. Does anyone know?
I don't think so... The only ambiguity that I can think of is with
passing functions as arguments to other functions, and you showed that it
can be resolved by currying:

  map f x

would have to be force-curried using parentheses:

  (map f) x

because otherwise, it would mean:

  map (f x)

which is both very wrongly typed and NOT the intention.
	I like your parsing scheme. I still DO like more explicit languages
better, though (i.e. map(f, x) style, like C & Co.). Currying is cool, but
it can be kept at a conceptual level, not affecting syntax.
Salutaciones,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology programmer | mailto:jc...@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html
Also, we can no longer take a divide-and-conquer approach to reading
code, since the syntax may depend on the types of imports.
| Ok, your complaint is that f a b c = a b c could have type
| (a->b->c)->a->b->c or type (b->c)->(a->b)->a->c depending on the arguments
| passed, e.g. (f head (map (+2)) [3]) has a different type from (f add 2 3).
|
| Admittedly, this is different from how Haskell type checks now. I guess
| the question is whether it is impossible to type check or whether it just
| requires modification to the type checking algorithm. Does anyone know?
Here's a troublesome example.
module M(trouble) where
f, g :: (a -> b) -> a -> b
f = undefined
g = undefined
trouble = (.) f g
-- ((.) f) g :: (a -> b) -> a -> b
-- (.) (f g) :: (a -> b -> c) -> a -> b -> c
<Villainous cackle>
Regards,
Tom
> Haskell                              Non-Haskell
> Left Associative                     Right Associative
> ------------ From Prelude ------------
> f x (foldr1 f xs)                    f x foldr1 f xs
Wouldn't the rhs actually mean f x (foldr1 (f xs)) in current notation?
> showChar '[' . shows x . showl xs    showChar '[' shows x showl xs
Wouldn't the rhs actually mean showChar '[' (shows x (showl xs))
in current notation? This is quite different to the lhs composition.
For these two examples, the correct right-associative expressions,
as far as I can tell, should be:
  f x (foldr1 f xs)                    f x (foldr1 f) xs
  showChar '[' . shows x . showl xs    showChar '[' . shows x . showl xs
Regards,
Malcolm
> It seems that right-associativity is so intuitive that even the
> person proposing it doesn't get it right. :-)
And even those who correct them :-)
>> f x (foldr1 f xs) f x foldr1 f xs
>
> Wouldn't the rhs actually mean f x (foldr1 (f xs)) in current notation?
No: f (x (foldr1 (f xs)))
Basically Haskell's style uses curried functions, so it's essential
to be able to apply a function to multiple parameters without a number
of nested parentheses.
BTW, before I knew Haskell I experimented with a syntax in which 'x f'
is the application of 'f' to 'x', and 'x f g' means '(x f) g'. Other
arguments can also be on the right, but in this case with parentheses,
e.g. 'x f (y)' is a function f applied to two arguments.
--
__("< Marcin Kowalczyk * qrc...@knm.org.pl http://qrczak.ids.net.pl/
\__/
^^ SYGNATURA ZASTĘPCZA
QRCZAK
> BTW, before I knew Haskell I experimented with a syntax in which 'x f'
> is the application of 'f' to 'x', and 'x f g' means '(x f) g'. Other
> arguments can also be on the right, but in this case with parentheses,
> e.g. 'x f (y)' is a function f applied to two arguments.
Hmmm. An experimental syntax, you say...
Oh, say, you reinvented FORTH?
(No args in parentheses there, a function taking something at its right
simply *knows* that there is something there).
Jerzy Karczmarczuk
Caen, France
>> BTW, before I knew Haskell I experimented with a syntax in which 'x f'
>> is the application of 'f' to 'x', and 'x f g' means '(x f) g'.
> Hmmm. An experimental syntax, you say...
> Oh, say, you reinvented FORTH?
Wouldn't

  x f g

in a Forth'ish machine mean

  g(f,x)   -- using "standard" math notation, for a change

rather than

  g(f(x))

?
-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
> Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:
>
> Wouldn't
>   x f g
> in a Forth'ish machine mean
>   g(f,x) -- using "standard" math notation, for a change
> rather than
>   g(f(x))
> ?
In PostScript, a Forth derivative, it would mean g(f(x)). The
difference comes down to when tokens in the input stream are
evaluated: as they are encountered, or at the very end. The input
stream is a queue - first in, first out (FIFO). A language *could*
treat the input stream as a stack (LIFO), but that would require
storing the entire stream in memory before computation could begin.
As Forth-like languages are usually designed for embedded systems
with low memory and processing power, a large LIFO stack including
the entire program is contra-indicated :-) Instead PostScript (and
other Forth-likes I've seen), treat the input stream as a FIFO
queue. This way, the interpreter can handle tokens immediately and
the stack doesn't get any larger than intermediate values in
computations. And it also matches well with the serial
communication these machines usually have with the producer of the
program (there's a reason PostScript laser printers got by fine with
serial ports, not parallel ones).
So, evaluation proceeds as follows:

  for each item in the stream (FIFO):
    evaluate it
    pop items off the stack if evaluation requires it
    push result(s) on to stack
So, presuming x is a variable with value "3", and f and g are
functions of one parameter:
x is identified and its value is pushed on to the stack.
so the stream is now "f g" and the stack is now "3"
f is identified as a function. The value of x is popped off the
stack (if f needed more than one parameter, more values would be
popped off - in this case, resulting in an error from an empty
stack).
The stream is now "g" and the stack is empty. The interpreter is
loaded with the function f and the value 3.
f is evaluated with the value of x. The result is pushed onto the stack.
The stream is "g" and the stack contains the result of f(3).
g is identified as a function and the stack is popped.
The stream is empty and the stack is too. The interpreter is loaded
with the function g and the value f(3).
g is evaluated with the value f(3). The result is pushed onto the stack.
As the stream is now empty, but the stack has items in it, a
PostScript interpreter would typically print the contents of the
stack.
This is of course g(f(x)), not g(f,x). If the input stream was a
stack, too, then "g" would be evaluated first. If g took two
arguments, it would produce g(f,x). If g took one argument, it
would produce g(f). If that was a function, then it would continue
with (g(f))(x), otherwise it would end with a stack containing two
items: g(f) on top of x.
The way you get g(f,x) in PostScript or other forths is quoting (as
in Lisp). PostScript handles this with a /slash for a single token
or {braces} for lists. Either can be evaluated later: if it's a
stream of tokens, then those are evaluated (FIFO). So, procedure
definition in PostScript looks like this:
/inches { 72 * } def
which pushes the symbol "inches" onto the stack and then the list
of tokens "72 *" onto the stack (PostScript's native unit is the
point, defined as 1/72 of an inch). "def" pops the top of the stack
and attaches it as a value to the variable named in the symbol next
in the stack (no symbol equals an error). Later, a statement like
"3 inches" (an unusually readable statement in a Forth-like
language :-) is equivalent to "3 {72 *}" which is equivalent to "3
72 *", or 216.
Brook
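The evaluation loop Brook describes can be sketched as a toy evaluator in Haskell; the Token type and eval function are hypothetical, for illustration only:

```haskell
-- Each token is handled as it is encountered (FIFO), popping arguments
-- off a value stack and pushing results back on.
data Token = Val Double | Fun (Double -> Double)

eval :: [Token] -> [Double]
eval = foldl step []
  where
    step stack       (Val v) = v : stack      -- push a value
    step (v : stack) (Fun f) = f v : stack    -- pop an argument, push result
    step []          (Fun _) = error "empty stack"

-- The "3 inches" example: push 3, then apply the inches action (* 72).
main :: IO ()
main = print (eval [Val 3, Fun (* 72)])  -- prints [216.0]
```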