
[Haskell-cafe] What's the deal with Clean?


Deniz Dogan

Nov 3, 2009, 3:31:33 PM
to haskell
Recently there has been a lot of discussion on this list about the
programming language Clean and converting Clean programs to Haskell.
Reading the Wikipedia article on the language, I can't really see any
major difference between that and Haskell, except for the monads vs.
uniqueness types.

So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?

--
Deniz Dogan
_______________________________________________
Haskell-Cafe mailing list
Haskel...@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

Andrew Coppin

Nov 3, 2009, 4:37:45 PM
to haskel...@haskell.org
Deniz Dogan wrote:
> Recently there has been a lot of discussion on this list about the
> programming language Clean and converting Clean programs to Haskell.
> Reading the Wikipedia article on the language, I can't really see any
> major difference between that and Haskell, except for the monads vs.
> uniqueness types.
>
> So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?
>

As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean
is notionally very similar to Haskell, but with lots of added clutter,
complexity and general ugliness - but it's probably somehow more
machine-efficient as a result.

(All of which makes the name "Clean" rather ironic, IMHO.)

Of course, this is merely the opinion I formed after performing a
cursory scan of some of the introductory documentation. I haven't
actually seen any code written with it or anything, so my opinion
probably doesn't mean a lot...

David Leimbach

Nov 3, 2009, 4:57:24 PM
to Andrew Coppin, haskel...@haskell.org
On Tue, Nov 3, 2009 at 1:37 PM, Andrew Coppin
<andrew...@btinternet.com>wrote:

> Deniz Dogan wrote:
>
>> Recently there has been a lot of discussion on this list about the
>> programming language Clean and converting Clean programs to Haskell.
>> Reading the Wikipedia article on the language, I can't really see any
>> major difference between that and Haskell, except for the monads vs.
>> uniqueness types.
>>
>> So what's the deal with Clean? Why is it preferable to Haskell? Why is it
>> not?
>>
>>
>
> As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean is
> notionally very similar to Haskell, but with lots of added clutter,
> complexity and general ugliness - but it's probably somehow more
> machine-efficient as a result.
>
> (All of which makes the name "Clean" rather ironic, IMHO.)
>
> Of course, this is merely the opinion I formed after performing a cursory
> scan of some of the introductory documentation. I haven't actually seen any
> code written with it or anything, so my opinion probably doesn't mean a
> lot...
>
>

It's preferable to Haskell in situations where Haskell isn't the best
choice.

The criteria for that decision differ from problem to problem.

Example:

I had to implement a ring buffer, and I wanted the code using it to be
written in Haskell. I ended up implementing the buffer in C, and wrapping
it in FFI from Haskell because implementing a destructive array in Haskell
is kind of unwieldy to someone of my experience level. In Clean, it looks
like the uniqueness typing allows for destructive updates in a very
controlled manner.

Disciplined Disciple might be interesting to look at here too, but i'm not
sure I'd deploy anything with DDC just yet :-)

Dave

Tracy Wadleigh

Nov 3, 2009, 5:16:47 PM
to haskel...@haskell.org
> I had to implement a ring buffer, and I wanted the code using it to be
> written in Haskell. I ended up implementing the buffer in C, and wrapping
> it in FFI from Haskell because implementing a destructive array in Haskell
> is kind of unwieldy to someone of my experience level. In Clean, it looks
> like the uniqueness typing allows for destructive updates in a very
> controlled manner.
>

The ST monad provides this functionality. The
never-instantiated-in-a-visible-way state parameter of the ST monad provides
the "uniqueness" required for doing destructive updates in a pure way.

Erik de Castro Lopo

Nov 3, 2009, 5:17:15 PM
to haskel...@haskell.org
David Leimbach wrote:

> Disciplined Disciple might be interesting to look at here too, but i'm not
> sure I'd deploy anything with DDC just yet :-)

Indeed. What DDC needs most at the moment is more people working
on it.

I've fixed a couple of bugs and I'm working on some others, but
there are a large chunk of them in the bug tracker which are
simply too deep for me with my current level of knowledge.

Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/

Stephen Tetley

Nov 3, 2009, 5:50:12 PM
to Andrew Coppin, Haskel...@haskell.org
2009/11/3 Andrew Coppin <andrew...@btinternet.com>:

>
> As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean is
> notionally very similar to Haskell, but with lots of added clutter,
> complexity and general ugliness - but it's probably somehow more
> machine-efficient as a result.
>
> (All of which makes the name "Clean" rather ironic, IMHO.)


Ouch - you really could put it the other way round.

Clean has very little clutter, though I suppose some people might take
offence at it having simple macros (:==) - but then, GHC has CPP via
-XCPP...

I think Clean had generics before Haskell had Data.Generics; otherwise
Haskell generally has more innovation - more people work on Haskell, and
Haskell's motivation was language research...

Clean has far fewer libraries, more people use Haskell...

Clean used to be considered faster than Haskell, though I don't know
what the situation is now:
http://www.haskell.org/pipermail/haskell-cafe/2007-October/033854.html

Clean is a very fine language, if I wasn't using Haskell I know what
my alternative choice would be.

Best wishes

Stephen

Evan Laforge

Nov 3, 2009, 6:01:22 PM
to Deniz Dogan, haskell
> So what's the deal with Clean? Why is it preferable to Haskell? Why is it not?

Purely from glancing through the language reference, two things that
it looks like clean has that I would love to have in haskell are
better records and better arrays. The records don't implement any of
the fancy subtyping stuff that the various haskell proposals do, but
they have the benefits of a nicer syntax and actually being
implemented.

Haskell arrays (by which I mean the IArray and MArray interfaces) are,
to me at least, really hard to use. There are little things, like using
closed ranges where the rest of the world uses half-open ones, opaque
documentation, and no consistency between IArray and MArray; and bigger
things, like how do you insert or delete something? My conclusion, after
wrestling with ixmap for 15 minutes, was to convert to a list, concatMap
across [(i, a)], and convert back to an array - which has the bonus of
runtime crashes if you forget an 'i'.
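
For what it's worth, a sketch of that list round-trip with plain Data.Array (deleteAt is a hypothetical helper, not a library function):

import Data.Array (Array, listArray, assocs, elems)

-- "Delete" index k from an immutable array by going through a list
-- and rebuilding; 0-based bounds assumed.
deleteAt :: Int -> Array Int a -> Array Int a
deleteAt k arr = listArray (0, length xs - 1) xs
  where xs = [x | (i, x) <- assocs arr, i /= k]

-- elems (deleteAt 1 (listArray (0,2) "abc")) == "ac"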

Sorry if this turned into a rant about arrays, but it's bothered me
for a while :) I think the clean guys got it right when they decided
to make good array support an explicit goal. I suppose haskell has
since gone a different way with the various array fusion libraries
with listlike interfaces, and maybe that's better in the long term.
Maybe type families can make a nice interface someday. Meanwhile it's
a mess though.

Ben Lippmeier

Nov 3, 2009, 6:15:04 PM
to David Leimbach, haskel...@haskell.org
David Leimbach wrote:
> Disciplined Disciple might be interesting to look at here too, but i'm
> not sure I'd deploy anything with DDC just yet :-)
:) Nor would I (and I wrote most of it). I think the approach is right,
but the compiler itself is still in the "research prototype" stage.

Ben.

David Leimbach

Nov 3, 2009, 6:42:44 PM
to Tracy Wadleigh, haskel...@haskell.org

Someone suggested that to me on IRC once I'd already cranked out a C
implementation with FFI bindings. It's just too easy to use the FFI in
Haskell :-)

If we raise the barrier of FFI, more people will use ST!

David Leimbach

Nov 3, 2009, 6:45:35 PM
to Ben Lippmeier, haskel...@haskell.org
On Tue, Nov 3, 2009 at 3:14 PM, Ben Lippmeier <Ben.Li...@anu.edu.au>wrote:

> David Leimbach wrote:
>
>> Disciplined Disciple might be interesting to look at here too, but i'm not
>> sure I'd deploy anything with DDC just yet :-)
>>
> :) Nor would I (and I wrote most of it). I think the approach is right, but
> the compiler itself is still in the "research prototype" stage.
>
> Ben.
>
>

I have to admit, the first time I hit the wiki page for DDC I said to myself,
"Self, this sounds crazy complicated". Then I read part of the PDF (your
thesis, I believe) about Region Types on the bus ride to work and thought,
"Gee, I think I scared myself off too quickly".

Uniqueness typing is quite interesting in Clean, but to control aliasing,
like really *control* aliasing, that's just far out man.

So I still have to wrap my head around "why this isn't going to get
completely out of control" and see why it's all safer than just writing C
code but I must say the attention I will be paying to DDC has just gone
quite a bit up.

Dave

Don Stewart

Nov 3, 2009, 6:57:52 PM
to Stephen Tetley, Haskel...@haskell.org
stephen.tetley:

> 2009/11/3 Andrew Coppin <andrew...@btinternet.com>:
>
> >
> > As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean is
> > notionally very similar to Haskell, but with lots of added clutter,
> > complexity and general ugliness - but it's probably somehow more
> > machine-efficient as a result.
> >
> > (All of which makes the name "Clean" rather ironic, IMHO.)
>
> Clean used to be considered faster than Haskell, though I don't know
> what the situation is now:
> http://www.haskell.org/pipermail/haskell-cafe/2007-October/033854.html

We've come a long way in 5 years. Haskell is almost always faster on
the shootout now. And parallelism goes a long way to helping there:

http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=clean&box=1

Though this is also true on a single core:

http://shootout.alioth.debian.org/u64/benchmark.php?test=all&lang=ghc&lang2=clean&box=1

It's just a lot closer. Clean continues to have a very small memory
footprint.

-- Don

brian

Nov 3, 2009, 9:12:52 PM
to David Leimbach, haskel...@haskell.org
Really, arrays in Haskell are the most @#!$! confusing thing in the
world.

There's a bunch of different array structures.

I can't tell which one works best, and all I want to do is x[i] = value.

I thought uvector was the answer, you know, fast unboxed ARRAYs.
Imagine my surprise when I saw this

indexU :: UA e => UArr e -> Int -> e

O(n). indexU extracts an element out of an immutable unboxed array.

An array implementation with an order N lookup. huh ?? That's not an
array, that's a list. I was looking for an array.

However, I then found in the same hackage:

readMU :: MUArr e s -> Int -> ST s e

O(1). readMU reads the element at the specified index of a mutable
unboxed array.

So O(1) for mutable, but O(n) for immutable ? See, confusing... I'm
sure there's a really good, lofty type safety, something
or other reason for that, that I'm sure I don't care about ;-)

There's also ST. So why is there a uvector, when there's ST ??

etc, etc, etc...

and then there's monads...

other than that, having fun with haskell :-)

Brian

Daniel Peebles

Nov 3, 2009, 9:24:06 PM
to brian, haskel...@haskell.org
In the presence of fusion (as is the case in uvector), it's hard to
give meaningful time complexities for operations as they depend on
what operations they are paired with. We need to think of a better way
to express this behavior in the documentation though.


Roman Leshchinskiy

Nov 3, 2009, 9:25:52 PM
to brian, haskel...@haskell.org
On 04/11/2009, at 13:12, brian wrote:

> indexU :: UA e => UArr e -> Int -> e
>
> O(n). indexU extracts an element out of an immutable unboxed array.

This is a typo (unless Don inserted a nop loop into the original DPH
code).

Roman

wren ng thornton

Nov 3, 2009, 9:26:54 PM
to Haskel...@haskell.org
Stephen Tetley wrote:
> 2009/11/3 Andrew Coppin <andrew...@btinternet.com>:
>
>> As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean is
>> notionally very similar to Haskell, but with lots of added clutter,
>> complexity and general ugliness - but it's probably somehow more
>> machine-efficient as a result.
>>
>> (All of which makes the name "Clean" rather ironic, IMHO.)
>
> Ouch - you really could put it the other way round.

Part of this really comes down to how one feels about the monads vs
uniqueness types argument. It's a silly argument to have since the ideas
are orthogonal and only really intersect at IO, but there's history
there which led to the current state of things.

Sometimes in Haskell I've thought about how uniqueness typing would make
something faster, but in general all the plumbing associated with it in
Clean makes me feel like I'm writing systems-level code (i.e. C, asm)
instead of using a high-level language. The extra plumbing really makes
it feel dirtier to work with. That doesn't mean Clean is bad, but I
think it does contribute to the "cluttered" feeling Haskellers get.

But as I said, it's a silly argument and folks should use whichever
gives them warm fuzzies. I also have a vague unnameable distaste
whenever working with Python, and rather enjoy working with Perl.
Nobody's perfect :)

--
Live well,
~wren

Roman Leshchinskiy

Nov 3, 2009, 9:27:29 PM
to Daniel Peebles, haskel...@haskell.org
On 04/11/2009, at 13:23, Daniel Peebles wrote:

> In the presence of fusion (as is the case in uvector), it's hard to
> give meaningful time complexities for operations as they depend on
> what operations they are paired with. We need to think of a better way
> to express this behavior in the documentation though.

I have to disagree here. Fusion never makes the complexity of
operations worse. If it does, it's a bug.

Roman

wren ng thornton

Nov 3, 2009, 9:36:02 PM
to haskel...@haskell.org
Roman Leshchinskiy wrote:
> On 04/11/2009, at 13:23, Daniel Peebles wrote:
>
>> In the presence of fusion (as is the case in uvector), it's hard to
>> give meaningful time complexities for operations as they depend on
>> what operations they are paired with. We need to think of a better way
>> to express this behavior in the documentation though.
>
> I have to disagree here. Fusion never makes the complexity of operations
> worse. If it does, it's a bug.

I think the point was more that the relevant complexity bound can change
in the presence of fusion. For a poor example: the first map over a list
is O(n) but all subsequent ones in a chain of maps are O(1) with fusion.
I'm sure there are better examples than that, but you get the idea. Some
people may care to know about that latter complexity rather than just
the "independent" complexity.

While this comes up with fusion, it's not a new problem. The same sort
of thing is gotten at by distinguishing worst-case vs average-case
complexity, or amortized worst-case vs non-amortized worst-case, etc.

--
Live well,
~wren

Don Stewart

Nov 3, 2009, 9:48:39 PM
to brian, haskel...@haskell.org
briand:

> Really, arrays in Haskell are the most @#!$! confusing thing in the
> world.
>
> There's a bunch of different array structures.
>
> I can't tell which one works best, and all I want to do is x[i] = value.
>
> I thought uvector was the answer, you know, fast unboxed ARRAYs.
> Imagine my surprise when I saw this
>
> indexU :: UA e => UArr e -> Int -> e
>
> O(n). indexU extracts an element out of an immutable unboxed array.

Umm.... That's a typo in the docs. Thanks.

-- Don

Gregory Crosswhite

Nov 3, 2009, 10:08:37 PM
to Roman Leshchinskiy, haskel...@haskell.org
Actually, it's not a typo. If you look at the source, what you'll see
is

indexU arr n = indexS (streamU arr) n

and then tracking down indexS, you'll see


indexS (Stream next s0 _) n0
  | n0 < 0    = error "Data.Array.Vector.Stream.indexS: negative index"
  | otherwise = loop_index n0 s0
  where
    loop_index n s = case next s of
      Yield x s' | n == 0    -> x
                 | otherwise -> s' `seq` loop_index (n-1) s'
      Skip s'                -> s' `seq` loop_index n s'
      Done                   -> error "Data.Array.Vector.Stream.indexS: index too large"


So in other words, indexU really does have O(n) complexity since it
first converts the array into a stream and then walks down the stream
in order to find the desired element.

Cheers,
Greg

a...@spamcop.net

Nov 3, 2009, 10:16:01 PM
to haskel...@haskell.org
G'day all.

Quoting wren ng thornton <wr...@freegeek.org>:

> Sometimes in Haskell I've thought about how uniqueness typing would
> make something faster, but in general all the plumbing associated with
> it in Clean makes me feel like I'm writing systems-level code (i.e. C,
> asm) instead of using a high-level language. The extra plumbing really
> makes it feel dirtier to work with. That doesn't mean Clean is bad, but
> I think it does contribute to the "cluttered" feeling Haskellers get.

I think you're right here.

Haskell has developed something of an aversion to naming things that
aren't important enough to have a name, such as variables whose only
reason to exist is "plumbing". We'd far rather spend effort on more
higher-order functions, monads, combinators and points-freeness than
name something that's unimportant. And the funny thing about this is
that it usually works, because in Haskell, abstraction is cheap.

I believe that this is the main reason why Haskell programmers haven't
embraced arrows, despite their theoretical advantages: Every notation
that has been implemented so far requires names for things that shouldn't
need names.
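
A small illustration, assuming GHC's Arrows extension - compare the point-free pipeline with the proc version, which must name its plumbing:

{-# LANGUAGE Arrows #-}
import Control.Arrow

-- Point-free: the intermediate value never gets a name.
incDouble :: Arrow a => a Int Int
incDouble = arr (+1) >>> arr (*2)

-- proc notation: x and y exist only to route data.
incDouble' :: Arrow a => a Int Int
incDouble' = proc x -> do
    y <- arr (+1) -< x
    arr (*2) -< y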

> But as I said, it's a silly argument and folks should use whichever
> gives them warm fuzzies.

I'd like to think that professional developers are a bit more scientific
than this.

Cheers,
Andrew Bromage

Philippos Apolinarius

Nov 3, 2009, 10:39:08 PM
to haskel...@haskell.org
Brian wrote:
> Really, arrays in Haskell are the most @#!$! confusing thing in the world.

Hi, Brian.
I am having great difficulty with arrays in Haskell. In the university where I study, functional programming is taught in Clean or in Haskell, depending on the professor who is teaching the subject in a given year. One year ago, when I took functional programming, the professor used Clean in his classes. I had no difficulty in learning how arrays and input/output work in Clean. In the case of arrays, the idea is very simple: one can update arrays, provided that s/he does not try to access the old array. Therefore, one needs to make a copy of any value of the old array that s/he will use before performing the update; the operation that makes copies also provides a new name for the array, which obliterates the old name. In order to get a better feeling of the thing, here is the `solvit' function, in Clean and Haskell (you can consider the # as a kind of do):

// Clean
leftSide acc i j n arr | j >= n = (acc, arr);
    # (v, arr) = arr![j, n];
      (a, arr) = arr![i, j];
    = leftSide (acc-v*a) i (j+1) n arr;

solvit i n arr | i < 0 = arr
    # (a, arr)   = arr![i, i];
      (acc, arr) = arr![i, n];
      (v, arr)   = leftSide acc i (i+1) n arr;
    = solvit (i-1) n {arr & [i, n] = v/a};

-- HASKELL
leftSide acc i j n arr | j > n = return acc
leftSide acc i j n arr = do
    v <- readArray arr (j, n+1)
    a <- readArray arr (i, j)
    leftSide (acc-v*a) i (j+1) n arr

solvit i n arr | i < 1 = return ()
solvit i n arr = do
    a <- readArray arr (i, i)
    acc <- readArray arr (i, n+1)
    v <- leftSide acc i (i+1) n arr
    writeArray arr (i, n+1) $! (v/a)
    solvit (i-1) n arr

And here comes the reason for writing this article. In the previous version of the Gauss elimination algorithm, I imported Data.Array.IO. I also wrote a version of the program that imports Data.Array.ST. The problem is that I don't know how to read an STUArray from a file, process it, and write it back to a file. Is it possible to transform it into an IOUArray pro tempore, read it, make it into an STUArray again in order to process it, and bring it back to IOUArray in order to print it? Below, you will find the Gauss elimination program in STUArray (by the way, it is slower than IOUArray). Could you modify the main function so that it reads the array `arr' from a file and writes the result to a file? (The main function is the first one.)

import Control.Monad
import Control.Monad.ST
import Data.Array.ST
import Data.Array.IO
import System.IO
import System.Random
import System (getArgs)


main = do
    xs <- rnList (1.0,1000.0)
    args <- getArgs
    let (n, m) = dims args
    xx <- stToIO $ do
        arr <- newArray_ ((1,1),(n,m+1)) ::
                 ST s (STUArray s (Int, Int) Double)
        fillArray xs 0.0 (1,n) (1,m) arr
        sLU arr n
        solvit n n arr
        x1 <- readArray arr (1, n+1)
        x2 <- readArray arr (1, n+1)
        return [x1, x2]
    print xx


{-  -- Other option:
main = do
    xs <- rnList (1.0,1000.0)
    args <- getArgs
    let (n, m) = dims args
    print $ runST $ do
        arr <- newArray_ ((1,1),(n,m+1)) ::
                 ST s (STUArray s (Int, Int) Double)
        fillArray xs 0.0 (1,n) (1,m) arr
        sLU arr n
        solvit n n arr
        x1 <- readArray arr (1, n+1)
        x2 <- readArray arr (1, n+1)
        return [x1, x2]
-}

fillArray xs s (i, n) (j, m) arr | i > n = return ()
fillArray xs s (i, n) (j, m) arr | i == n && j > m = do
    writeArray arr (i, j) $! s
    return ()
fillArray xs s (i, n) (j, m) arr | j > m = do
    writeArray arr (i, j) $! s
    fillArray xs 0.0 (i+1, n) (1, m) arr
fillArray (val:xs) s (i, n) (j, m) arr = do
    writeArray arr (i, j) $! val
    fillArray xs (s+val) (i, n) (j+1, m) arr

sLU arr n = sIJ 2 1 2 n arr

sIJ i j k n arr | i > n = return ()
sIJ i j k n arr | k > n = sIJ (i+1) i (i+1) n arr
sIJ i j k n arr = do
    {- im <- pmax (j+1) j
       swap j im 1 -}
    a <- readArray arr (k, j)
    forM_ [j..n+1] $ \l -> do
        ajj <- readArray arr (j, j)
        ajl <- readArray arr (j, l)
        akl <- readArray arr (k, l)
        writeArray arr (k, l) $! (akl - a*(ajl/ajj))
    sIJ i j (k+1) n arr
  where
    pmax line imax | line > n = return imax
    pmax line imax = do
        alj <- readArray arr (line, j)
        aij <- readArray arr (imax, j)
        if abs alj > abs aij
           then pmax (line+1) line
           else pmax (line+1) imax
    swap r s q | q > n+1 = return ()
    swap r s q | r == s = return ()
    swap r s q = do
        arq <- readArray arr (r,q)
        asq <- readArray arr (s,q)
        writeArray arr (s,q) $! arq
        writeArray arr (r,q) $! asq
        swap r s (q+1)


leftSide acc i j n arr | j > n = return acc
leftSide acc i j n arr = do
    v <- readArray arr (j, n+1)
    a <- readArray arr (i, j)
    leftSide (acc-v*a) i (j+1) n arr

solvit i n arr | i < 1 = return ()
solvit i n arr = do
    a <- readArray arr (i, i)
    acc <- readArray arr (i, n+1)
    v <- leftSide acc i (i+1) n arr
    writeArray arr (i, n+1) $! (v/a)
    solvit (i-1) n arr

rnList :: (Double, Double) -> IO [Double]
rnList r = getStdGen >>= \x -> return (randomRs r x)

dims [input] = (read input, read input)
dims _ = (1000, 1000)


Roman Leshchinskiy

Nov 3, 2009, 11:20:42 PM
to wren ng thornton, haskel...@haskell.org
On 04/11/2009, at 13:35, wren ng thornton wrote:

> Roman Leshchinskiy wrote:
>> On 04/11/2009, at 13:23, Daniel Peebles wrote:
>>> In the presence of fusion (as is the case in uvector), it's hard to
>>> give meaningful time complexities for operations as they depend on
>>> what operations they are paired with. We need to think of a better
>>> way
>>> to express this behavior in the documentation though.
>> I have to disagree here. Fusion never makes the complexity of
>> operations worse. If it does, it's a bug.
>
> I think the point was more that the relevant complexity bound can
> change in the presence of fusion. For a poor example: the first map
> over a list is O(n) but all subsequent ones in a chain of maps are
> O(1) with fusion. I'm sure there are better examples than that, but
> you get the idea. Some people may care to know about that latter
> complexity rather than just the "independent" complexity.

I think asymptotic complexity is the wrong tool for what you're trying
to do. You implement your algorithm using operations with known
complexities. This allows you to compute the complexity of the entire
algorithm. That's all you can use operation complexities for. The
compiler is then free to optimise the algorithm as it sees fit but is
supposed to preserve (or improve) its complexity. It is not guaranteed
or even supposed to preserve the original operations. To stay with
your example, each of the two maps is linear regardless of whether
fusion happens. Executing the two maps, be it one after another or
interlocked, is linear simply because O(n) + O(n) = O(n), not because
of fusion.

Essentially, you're trying to use complexity to describe an
optimisation which doesn't actually affect the complexity.

Roman

Roman Leshchinskiy

Nov 3, 2009, 11:30:55 PM
to Gregory Crosswhite, haskel...@haskell.org
On 04/11/2009, at 14:07, Gregory Crosswhite wrote:

> Actually, it's not a typo. If you look at the source, what you'll
> see is
>
> indexU arr n = indexS (streamU arr) n

I suspect it gets rewritten back to the O(1) version somewhere after
is has had a chance to fuse. If not, then it's a bug. In the vector
package, I do this instead, though:

indexU arr n = <O(1) implementation>

{-# RULES

"indexU/unstreamU" forall s n. indexU (unstreamU s) n = indexS s n

#-}

Jason Dusek

Nov 3, 2009, 11:31:56 PM
to Philippos Apolinarius, haskel...@haskell.org
How do you read in the IOUArray? By parsing a character string
or do you treat the file as binary numbers or ... ?

--
Jason Dusek

brian

Nov 3, 2009, 11:35:18 PM
to Philippos Apolinarius, haskel...@haskell.org

On Nov 3, 2009, at 7:38 PM, Philippos Apolinarius wrote:

> Brian wrote:
> > Really, arrays in Haskell are the most @#!$! confusing thing in
> the world.
>
> Hi, Brian.
> I am having a great difficulty with arrays in Haskell. In the
> university where I study, functional programming is taught in Clean
> or in

me too :-)

> And here comes the reason for writing this article. In the previous
> version of the Gauss elimination algorithm, I have imported

you're asking me ?? I have no idea. I can't even figure out which
package to use.

However if I had to guess, it seems to me that you want to read the
data into a list and then find some ST function which can initialize
an array using a list (maybe ?)

Brian

Roman Leshchinskiy

Nov 3, 2009, 11:38:14 PM
to Philippos Apolinarius, haskel...@haskell.org
On 04/11/2009, at 14:38, Philippos Apolinarius wrote:

> And here comes the reason for writing this article. In the previous
> version of the Gauss elimination algorithm, I have imported
> Data.Array.IO. I also wrote a version of the program that imports
> Data.Array.ST. The problem is that I don't know how to read an
> STUArray from a file, process it, and write it back to a file.

Why don't you use the IOUArray directly instead of converting it to
STUArray and back?

Roman

Richard O'Keefe

Nov 4, 2009, 12:04:56 AM
to Deniz Dogan, haskell

On Nov 4, 2009, at 9:30 AM, Deniz Dogan wrote:
> So what's the deal with Clean? Why is it preferable to Haskell? Why
> is it not?

(1) Speed.
(2) If you are a Windows developer, the fact that Windows is the primary
    platform and others (even Mac OS, which is historically ironic) are
    second- (or in the case of Solaris) third-class citizens.
(3) Did I mention speed?
(4) It comes with its own IDE. I don't think it can do anything much that
    Haskell tools can't do, but if you don't like looking for things, it's
    a help.
(5) Plus of course there's speed.
(6) They're working on a Haskell front end, so you won't actually have to
    choose. (Anyone doing a Clean front end for Haskell?)
(7) Haskell now has bang-patterns so you can specify (a bound on) intended
    strictness when you declare a function. But that's not in Haskell 98.
    (See the sketch after this list.)
(8) As a result of this, speed is a bit more "declarative" than adding
    $! in strange places.
(9) There's a theorem prover for Clean, called Sparkle. Sadly, it's
    Windows-only, but we all know what most computers on the planet run,
    don't we? (It's probably Symbian, actually.)
(10) And finally, of course, there's speed. Did I mention that?
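
A sketch of what (7) buys you, assuming GHC's BangPatterns extension (sumTo is a made-up example):

{-# LANGUAGE BangPatterns #-}

-- The bang declares the accumulator strict once, instead of forcing it
-- at every call site with $!.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)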

Don Stewart

Nov 4, 2009, 12:15:38 AM
to Gregory Crosswhite, haskel...@haskell.org
Well, it depends on which indexU the OP means. The one linked in the docs is
the O(1) UA type class version.

-- Don


Gregory Crosswhite

Nov 4, 2009, 12:17:29 AM
to Richard O'Keefe, haskell
So I take it you are saying that it really *cleans* Haskell's clock
when it comes to speed? ;-)

- Greg

Joe Fredette

Nov 4, 2009, 12:19:43 AM
to Gregory Crosswhite, haskell
Given the Shootout results, the difference is a matter of a few
seconds. If Clean programmers need those few extra seconds, they're
welcome to them.

We're Lazy around here. :)

/Joe

Don Stewart

Nov 4, 2009, 12:20:23 AM
to Richard O'Keefe, haskell
ok:

>
> On Nov 4, 2009, at 9:30 AM, Deniz Dogan wrote:
>> So what's the deal with Clean? Why is it preferable to Haskell? Why is
>> it not?
>
> (1) Speed.

I'd strongly argue that speed is not a reason to use Clean -- esp. if
you have more than one core:

http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=clean&box=1

Nor a reason to use OCaml, for that matter:

http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=ocaml&box=1

The Haskell compiler isn't the bottleneck. Use it when performance matters. I do.

Reading the OP's statements, though, it seems picking the right library
is more of a problem. And widespread expertise in fast Haskell.

-- Don

Gregory Crosswhite

Nov 4, 2009, 12:23:11 AM
to Don Stewart, haskel...@haskell.org
Oh, that's strange... the type class "UA" is defined twice, once in
Data.Array.Vector and once in Data.Array.Vector.UArr; in the first
module indexU is a separate function with the sources I exhibited, in
the second module it is a method of the UA type-class which seems to
have O(1) access for most of the defined instances.

That's incredibly confusing...

- Greg

Don Stewart

Nov 4, 2009, 12:25:31 AM
to Gregory Crosswhite, haskel...@haskell.org
gcross:

> Oh, that's strange... the type class "UA" is defined twice, once in
> Data.Array.Vector and once in Data.Array.Vector.UArr; in the first

No, it's exported from the former.

> module indexU is a separate function with the sources I exhibited, in
> the second module it is a method of the UA type-class which seems to
> have O(1) access for most of the defined instances.
>
> That's incredibly confusing...

There are direct and stream-based versions. You can choose which
version you need. If you use the stream-based implementations, the
compiler will apply the stream fusion optimization to your loops. If you
use the direct versions, that won't apply.

I'd be happy to talk more about the design of the library, if you like.

-- Don

brian

Nov 4, 2009, 12:26:05 AM
to Don Stewart, haskel...@haskell.org
Don,

There is more than one indexU ?

In Data.Array.Vector there is only 1 indexU that I can find.

Brian


Don Stewart

Nov 4, 2009, 12:29:27 AM
to brian, haskel...@haskell.org
UArr operations subject to stream fusion:

http://code.haskell.org/~dons/code/uvector/Data/Array/Vector/Strict/

Direct-style operations, not subject to the optimization:

http://code.haskell.org/~dons/code/uvector/Data/Array/Vector/UArr.hs

/me needs to write a tutorial on this.

-- Don


Ben Lippmeier

Nov 4, 2009, 1:44:51 AM
to David Leimbach, haskel...@haskell.org
David Leimbach wrote:
> I have to admit, the first time I hit the wiki page for DDC I said to
> myself "Self, this sounds crazy complicated". Then I read part of the
> PDF (your thesis I believe) about Region Types on the bus ride to work
> and thought. "Gee I think I scared myself off too quickly".
>
> Uniqueness typing is quite interesting in Clean, but to control
> aliasing, like really *control* aliasing, that's just far out man.
>
> So I still have to wrap my head around "why this isn't going to get
> completely out of control" and see why it's all safer than just
> writing C code but I must say the attention I will be paying to DDC
> has just gone quite a bit up.

:) A correct C program is just as safe as a correct Haskell/Disciple
program.

If you're using destructive update then aliasing, side effects and
mutability all start to matter. It might look complicated when you
reflect all these things in the type system, but you're really just
getting a handle on the inherent complications of the underlying program.

I suppose the trick is to be able to ignore said complications when you
just don't care, or they're not relevant for your particular problem...

Ben.

Martin DeMello

Nov 4, 2009, 3:57:09 AM
to Richard O'Keefe, haskell
On Wed, Nov 4, 2009 at 10:34 AM, Richard O'Keefe <o...@cs.otago.ac.nz> wrote:
> (4) It comes with its own IDE. I don't think it can do anything much that
>     Haskell tools can't do, but if you don't like looking for things, it's
>     a help.

And a well-integrated GUI toolkit. If it weren't for the Windows bias
I'd have definitely taken the time to learn the language.

martin

Alberto G. Corona

Nov 4, 2009, 4:48:58 AM
to haskell-cafe
The code executed by uniqueness types is somehow similar to the internal
code executed in a state monad (or in the case of IO, the IO monad). The
main difference is that the pairs of results (state, value) are explicitly
written in Clean by the programmer and the type system assures that the
order of executions makes sense at compile time, whereas in the case of the
state monad the sequence of instructions is lazily assembled at runtime in
the first step and executed in a second step. So there is a little more
overhead in Haskell but the code is higher level.

Am I right?
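
For comparison, a sketch of the two styles in Haskell - explicit (value, state) pairs versus the State monad hiding the plumbing (tick is a made-up example, using mtl's Control.Monad.State):

import Control.Monad.State (State, get, put, evalState)

-- Clean-style: the state threaded by hand as an explicit pair.
tick :: Int -> (Int, Int)
tick s = (s, s + 1)

twoTicks :: Int -> Int
twoTicks s0 = let (a, s1) = tick s0
                  (b, _ ) = tick s1
              in a + b

-- Monadic style: the same sequencing, plumbing hidden.
tickM :: State Int Int
tickM = do s <- get; put (s + 1); return s

-- twoTicks 0 == 1
-- evalState (do a <- tickM; b <- tickM; return (a + b)) 0 == 1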

2009/11/4 wren ng thornton <wr...@freegeek.org>

> Stephen Tetley wrote:
>
>> 2009/11/3 Andrew Coppin <andrew...@btinternet.com>:
>>
>> As far as I can tell, Clean is to Haskell as C is to Pascal. I.e., Clean
>>> is
>>> notionally very similar to Haskell, but with lots of added clutter,
>>> complexity and general ugliness - but it's probably somehow more
>>> machine-efficient as a result.
>>>
>>> (All of which makes the name "Clean" rather ironic, IMHO.)
>>>
>>

>> Ouch - you really could put it the other way round.

Duncan Coutts

Nov 4, 2009, 4:57:11 AM
to brian, haskel...@haskell.org
On Tue, 2009-11-03 at 18:12 -0800, brian wrote:
> Really, arrays in Haskell are the most @#!$! confusing thing in the
> world.
>
> There's a bunch of different array structures.
>
> I can't tell which one works best, and all I want to do is x[i] = value.

> I thought uvector was the answer, you know, fast unboxed ARRAYs.

Rather than confusing yourself with new packages like uvector I suggest
you just use the arrays from the standard 'array' package that comes
with GHC. It provides mutable and immutable, boxed and unboxed arrays.

The mutable ones have to be used in a monad (ST or IO).

The boxed ones can be used with any element type (eg an array of
records) while unboxed ones work with simple primitive types like ints,
floats etc. The difference is about memory layout and therefore
performance: unboxed ones are simple flat C-like arrays while the boxed
ones are arrays of pointers to heap objects.
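
A minimal sketch of the simplest case - a mutable unboxed array in IO, which is the x[i] = value asked for above:

import Data.Array.IO (IOUArray, newArray, readArray, writeArray)

main :: IO ()
main = do
    x <- newArray (0, 9) 0 :: IO (IOUArray Int Double)
    writeArray x 3 42      -- x[3] = 42
    v <- readArray x 3
    print v                -- 42.0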

Duncan

Ketil Malde

Nov 4, 2009, 5:14:51 AM
to haskel...@haskell.org
Duncan Coutts <duncan...@googlemail.com> writes:

> The boxed [array types] can be used with any element type (eg an array of
> records) while unboxed ones work with simple primitive types like ints,
> floats etc. The difference is about memory layout and therefore
> performance

..and of strictness. A boxed array can contain pointers to unevaluated
thunks (including references to other cells in the array), an unboxed
array only contains evaluated values.
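
A classic illustration of that difference, with plain Data.Array: a boxed array can lazily refer to its own cells, which an unboxed array never could:

import Data.Array (Array, listArray, (!))

-- Each cell is a thunk referring to earlier cells; only the cells you
-- actually index get evaluated.
fibs :: Int -> Array Int Integer
fibs n = a
  where
    a = listArray (0, n) [f i | i <- [0 .. n]]
    f 0 = 0
    f 1 = 1
    f i = a ! (i - 1) + a ! (i - 2)

-- fibs 50 ! 50 == 12586269025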

But yes, it'd be nice to tidy up the set of available array libraries,
and perhaps related functionality (bytestring, text), to provide a
unified and non-redundant whole. Platform prime, anyone?

-k
--
If I haven't seen further, it is by standing in the footprints of giants

Artyom Shalkhakov

Nov 4, 2009, 5:18:14 AM
to Alberto G. Corona, haskell-cafe
Hello,

2009/11/4 Alberto G. Corona <agoc...@gmail.com>:


> The code executed by uniqueness types is somehow similar to the internal
> code executed in a state monad (or in the case of IO, the IO monad). The
> main difference is that the pairs of results  (state, value) are explicitly
> written in Clean by the programmer and the type system assures that the
> order of executions makes sense at compile time, whereas in the case of the
> state monad the sequence of instructions is lazily assembled at runtime in
> the first step and executed in a second step. So there is a little more
> overhead in haskell but the code is higher level.
> Am I right?

I would rather say: code with uniqueness types allows for safe
destructive updates.

In Clean, a variable of unique type is ensured to have only one
reference to it, at any time (that's why it's called "uniqueness
typing"). So you can't write the code like this

> f(x) + f(x)

where f : *a -> int (x is of unique type), because x is clearly
referenced two times here. What to do? Let f yield another reference
to x! That also means that the old reference is not usable any more,
since you have a new one. f becomes:

> f : *a -> (int, *a)

and the code looks very familiar:

> let (a, x') = f(x)
> (b, x'') = f(x')
> in a + b

The function f can use destructive updates under the hood, though it
doesn't violate referential transparency. I bet you can see why.

I'd say that call-by-need is orthogonal to uniqueness typing.

Cheers,
Artyom Shalkhakov.
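
A hypothetical Haskell rendering of the same shape - pure copying via // where Clean's uniqueness typing would let the compiler update in place:

import Data.Array (Array, (!), (//))

-- f consumes the array and hands back a value plus the "new" array;
-- the caller threads it and never touches the old name again.
f :: Array Int Int -> (Int, Array Int Int)
f arr = (arr ! 0, arr // [(0, arr ! 0 + 1)])

g :: Array Int Int -> Int
g x = let (a, x'  ) = f x
          (b, _x'') = f x'
      in a + b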

Philippos Apolinarius

Nov 4, 2009, 5:48:37 AM
to brian, haskel...@haskell.org
Brian wrote:

> However if I had to guess, it seems to me that you want to read the
data into
> a list and then find some ST function which can initialize an
array using a list (maybe ?)

It is the other way around. I want to avoid lists. I would like to read the array elements from a file and store them directly into the array. This approach would spare me from building a possibly expensive, heap-hungry intermediate structure.


Philippos Apolinarius

Nov 4, 2009, 5:58:02 AM
to Jason Dusek, haskel...@haskell.org
Jason Dusek wrote:

> How do you read in the IOUArray? By parsing a character string
> or do you treat the file as binary numbers or ... ?

I always parse the file. Parsing the file has the advantage of allowing me to have files of any format. In general, for homework, TAs generate files using different tools. For instance, a professor of electrical protection of hardware made a lot of measurements of transient currents due to lightning. The file has thousands of three-column lines, each one containing time, voltage and current. Students are supposed to read the file, and plot voltage and current time series. Even the numbers are in a strange format... So, one needs to parse the file.


Ketil Malde

Nov 4, 2009, 8:34:23 AM
to Bulat Ziganshin, haskell
Bulat Ziganshin <bulat.z...@gmail.com> writes:

>> http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=clean&box=1
>> http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=ocaml&box=1

>> The Haskell compiler isn't the bottleneck. Use it when performance matters. I do.

> Don, shootout times may be used to measure how many people was
> contributed solutions for each language, but nothing more.

Well, it clearly demonstrates that it is possible to write fast code in
Haskell.

Last time I looked, much of the shootout code was overly complicated
(i.e. "enthusiasts writing low-level code"). And I can't help but notice
that Clean beats Haskell on code compactness. It'd be interesting to
see how well more naïve/idiomatic code fares. While it's nice to be
able to write fast programs, the main reason to use Haskell is to write
succinct and correct programs. (Is it possible to have an alternative
Haskell "track" in the shootouts?)

Since this was done, there has been great strides in available libraries
and GHC optimizations, and it'd also be interesting to see whether we
now are able to optimize ourselves away from much of the overhead.

-k
--
If I haven't seen further, it is by standing in the footprints of giants

Alberto G. Corona

Nov 4, 2009, 8:36:27 AM
to haskell-cafe
Artyom.

I know what uniqueness means. What I meant is that the context in which
uniqueness is used, for imperative sequences:

(y, s')= proc1 s x
(z, s'')= proc2 s' y
....

is essentially the same sequence as if we rewrite a state monad to make the
state explicit. When the state is the "world" state, then it is similar to
the IO monad.

A state monad forces a single use of the implicit state variable too
(unless you pass it through to the next step without changes; that can be
done in Clean too).


Bulat Ziganshin

Nov 4, 2009, 8:47:34 AM
to Ketil Malde, Bulat Ziganshin, haskell
Hello Ketil,

Wednesday, November 4, 2009, 4:31:20 PM, you wrote:

> Well, it clearly demonstrates that it is possible to write fast code in
> Haskell.

my measurements say that by spending 3x more time than for C you can
optimize haskell code to be only 3x slower than the C one

> succinct and correct programs. (Is it possible to have an alternative
> Haskell "track" in the shootouts?)

even w/o enthusiasts, the Shootout mainly measures the speed of libraries

> Since this was done, there has been great strides in available libraries
> and GHC optimizations, and it'd also be interesting to see whether we
> now are able to optimize ourselves away from much of the overhead.

eh, if it were possible, we would have seen it - both on the shootout and
here, when people are crying that their code isn't as fast as those ads say.
haskell compilation can't yet automatically avoid laziness and convert
pure high-level code into the equivalent C one. libraries don't change
anything - they provide low-level optimized solutions for particular
tasks but can't optimize your own code once you start to write it


--
Best regards,
Bulat mailto:Bulat.Z...@gmail.com

Alberto G. Corona

Nov 4, 2009, 9:58:49 AM
to haskell-cafe
I personally don't care about raw performance. Haskell is near the top of the
list of language performance. It has all the ingredients for improving
performance in the coming years: a core language, a clear execution
strategy, analysis and parsing, transformations based on math rules. So my
code will improve with each new compiler version at the same or better pace
than any other language. Moreover, I could not care less about how fast C is,
when I simply can not program many things I need in C or C++ or Java - and in
general any of the languages of the performance list that are above... or
below, because they lack the necessary type safety, expressiveness,
abstraction, etc. Not to mention time. Not to mention the growing community,
etc.

Regards.


Edsko de Vries

Nov 4, 2009, 10:15:42 AM
to Alberto G.Corona, haskell-cafe

On 4 Nov 2009, at 13:36, Alberto G. Corona wrote:

> Artyom.
>
> I know what uniqueness means. What I meant is that the context in
> which uniqueness is used, for imperative sequences:
>
> (y, s')= proc1 s x
> (z, s'')= proc2 s' y

> .....


>
> is essentially the same sequence as if we rewrite a state monad to
> make the state explicit. When the state is the "world" state, then
> it is similar to the IO monad.

Yes, as long as there is a single thing that is being updated there's
little difference between the state monad and a unique type. But
uniqueness typing is more general. For instance, a function which
updates two arrays

f (arr1, arr2) = (update arr1 0 'x', update arr2 0 'y')

is easily written in functional style in Clean, whereas in Haskell we
need to sequentialize the two updates:

f (arr1, arr2)
  = do writeArray arr1 0 'x'
       writeArray arr2 0 'y'

You can find a more detailed comparison in my thesis (https://www.cs.tcd.ie/Edsko.de.Vries/pub/MakingUniquenessTypingLessUnique-screen.pdf
, Section 2.8.7).

-Edsko

David Leimbach

Nov 4, 2009, 10:25:58 AM
to Ben Lippmeier, haskel...@haskell.org
On Tue, Nov 3, 2009 at 10:44 PM, Ben Lippmeier <Ben.Li...@anu.edu.au>wrote:

> David Leimbach wrote:
>
>> I have to admit, the first time I hit the wiki page for DDC I said to
>> myself "Self, this sounds crazy complicated". Then I read part of the PDF
>> (your thesis I believe) about Region Types on the bus ride to work and
>> thought. "Gee I think I scared myself off too quickly".
>>
>> Uniqueness typing is quite interesting in Clean, but to control aliasing,
>> like really *control* aliasing, that's just far out man.
>>
>> So I still have to wrap my head around "why this isn't going to get
>> completely out of control" and see why it's all safer than just writing C
>> code but I must say the attention I will be paying to DDC has just gone
>> quite a bit up.
>>
>
> :) A correct C program is just as safe as a correct Haskell/Disciple
> program.
>

Well, of course, the question is what sort of guarantees a language or
compiler provides, I guess.

>
> If you're using destructive update then aliasing, side effects and
> mutability all start to matter. It might look complicated when you reflect
> all these things in the type system, but you're really just getting a handle
> on the inherent complications of the underlying program.
>

So it's just really more notation to let you know which tools are being used
when you use them?

Does Disciple completely avoid the need for such things as unsafePerformIO?

(a perhaps overly paranoid comment but...)
I realize we're probably not supposed to worry about the existence of
unsafePerformIO, and that library authors "know what they're doing". But
doesn't it automatically mean that there's a bit of implicit trust whenever
I see a function that's of type (a -> a) that there *isn't* IO going on in
there? :-)

If Disciple can guarantee that no one is allowed to cheat, is that not a
better approach?


> I suppose the trick is to be able to ignore said complications when you
> just don't care, or they're not relevant for your particular problem...
>

Yes, the Disciple documentation says that this stuff can be inferred, but I
don't even let Haskell infer my types for *any* functions I write in any
code. I like to restrict what can go in and out of the function even if
it's more general. Perhaps this is the knee-jerk reaction of an angry
Erlang programmer who really wanted some types to rein in the
overly-dynamic evaluations that are allowed in that environment, but that's
how I roll baby! I will admit that on occasion I will write an expression
that I think does what I want, and look at it in ghci, having it tell me the
type because sometimes I've not had enough coffee to do it in my head, but I
either look at the inferred type and realize that it's what I originally
wanted, or add further restrictions.

Dave


> Ben.

David Leimbach

Nov 4, 2009, 10:27:43 AM
to Edsko de Vries, haskell-cafe
On Wed, Nov 4, 2009 at 7:11 AM, Edsko de Vries <edskod...@gmail.com>wrote:

>
> On 4 Nov 2009, at 13:36, Alberto G. Corona wrote:
>
> Artyom.
>>
>> I know what uniqueness means. What I meant is that the context in which
>> uniqueness is used, for imperative sequences:
>>
>> (y, s')= proc1 s x
>> (z, s'')= proc2 s' y
>> .....
>>
>> is essentially the same sequence as if we rewrite a state monad to make
>> the state explicit. When the state is the "world" state, then it is similar
>> to the IO monad.
>>
>
> Yes, as long as there is a single thing that is being updated there's
> little difference between the state monad and a unique type. But uniqueness
> typing is more general. For instance, a function which updates two arrays
>
> f (arr1, arr2) = (update arr1 0 'x', update arr2 0 'y')
>
> is easily written in functional style in Clean, whereas in Haskell we need
> to sequentialize the two updates:
>
> f (arr1, arr2)
>   = do writeArray arr1 0 'x'
>        writeArray arr2 0 'y'
>

Those sequential updates can be run concurrently on both, just with
different syntax though right?

Bulat Ziganshin

Nov 4, 2009, 10:35:46 AM
to Alberto G. Corona, haskell-cafe
Hello Alberto,

Wednesday, November 4, 2009, 5:58:31 PM, you wrote:

> I personally don't care about raw performance.

me too. actually, i write time-critical parts of my app in c++

> Haskell is in the
> top of the list of language performance.

this list is meaningless, as i said before

> It has all the ingredients
> for improving performance in the coming years:

that's a different question. i think that in the end lazy languages
will become as efficient as assembler, it just may happen not in my
lifespan :)

> I could not care less about how fast C is, when I simply can not
> program many things I need in C or C++ or Java - and in general any
> of the languages of the performance list that are above...

if you don't know how to implement things in C, you cannot do it
efficiently in Haskell either. when i write efficient code in Haskell, i
actually use Haskell as an obscure assembler (the same holds for C)

> or below,
> because they lack the necessary type safety, expressiveness,
> abstraction.etc.

they also can't make you coffee, but that's a different story. i use
Haskell too. i just know that it is slow and use other languages when
i really need speed. it's why my archiver is the world's fastest one :)

Ketil Malde

Nov 4, 2009, 11:44:02 AM
to haskell
Bulat Ziganshin <bulat.z...@gmail.com> writes:

>> Well, it clearly demonstrates that it is possible to write fast code
>> in Haskell.

> my measurements say that by spending 3x more time than for C you can
> optimize haskell code to be only 3x slower than the C one

Right¹, the interesting thing is not how fast I can get with N times the
effort, but if I can get fast enough with 1/N.

> when people are crying that their code isn't as fast as those ads say.
> haskell compilation can't yet automatically avoid laziness and convert
> pure high-level code into the equivalent C one.

Many of those people are making fairly simple mistakes. I think a
somewhat seasoned programmer using good libraries can write declarative,
concise, and readable code that still is reasonably fast.

-k

¹) At least for some approximation of the word. Only one benchmark on
the shootout has C at a 3x advantage.


--
If I haven't seen further, it is by standing in the footprints of giants

Bulat Ziganshin

Nov 4, 2009, 12:02:16 PM
to Ketil Malde, haskell
Hello Ketil,

Wednesday, November 4, 2009, 7:43:38 PM, you wrote:

> Right¹, the interesting thing is not how fast I can get with N times the
> effort, but if I can get fast enough with 1/N.

it depends entirely on how fast you need. so it's again changing the
topic - while i say that haskell is slow compared to other languages,
i don't say that it is slow for you or that you need speed at all. why
is it repeated again and again? why didn't you write to Don that you
don't need speed when he wrote that haskell is fast, but wrote this to
me? :(

>> when people are crying that their code isn't as fast as those ads say.
>> haskell compilation can't yet automatically avoid laziness and convert
>> pure high-level code into equivalent of C one.

> Many of those people are making fairly simple mistakes. I think a
> somewhat seasoned programmer using good libraries can write declarative,
> concise, and readable code that still is reasonably fast.

i don't think that omitting strictness declarations is a mistake :)

> ¹) At least for some approximation of the word. Only one benchmark on
> the shootout has C at a 3x advantage.

oh, can we stop talking about the shootout? if you want to see the speed
of pure haskell code, look at the papers about fast arrays/strings - their
authors have measured that lazy lists are hundreds of times slower than
idiomatic C code. is use of lazy lists counted as a mistake too, and did
the paper authors have too little haskell experience?

--
Best regards,
Bulat mailto:Bulat.Z...@gmail.com

_______________________________________________

Edsko de Vries

Nov 4, 2009, 12:12:06 PM
to David Leimbach, haskell-cafe
I'm not sure I follow you? The compiler can't reorder the two updates
or do them in parallel (IO is not a commutative monad). You might tell
the compiler this explicitly, but then you are writing lower- and
lower-level code, further removed from the functional paradigm.

Edsko

Don Stewart

Nov 4, 2009, 1:56:28 PM
to Bulat Ziganshin, haskell
bulat.ziganshin:

> oh, can we stop talking about the shootout? if you want to see the speed
> of pure haskell code, look at the papers about fast arrays/strings - their
> authors measured that lazy lists are hundreds of times slower than
> idiomatic C code. is the use of lazy lists counted as a mistake too, and
> did the paper authors have too little haskell experience?

Comparing apples against oranges is a mistake, yes.

-- Don

Richard O'Keefe

unread,
Nov 4, 2009, 4:15:04 PM11/4/09
to Martin DeMello, haskell

On Nov 4, 2009, at 9:56 PM, Martin DeMello wrote [about Clean]:
> And a well-integrated GUI toolkit. If it weren't for the Windows bias
> I'd have definitely taken the time to learn the language.

The GUI toolkit was originally available on the Mac.
But now, ah, now!
"The Object I/O Library 1.2 is currently
only available for the Wintel platform. "

The last time I tried Clean on a SPARC, using even the old
GUI toolkit required you to locate and install a graphics
library (XView) that Sun abandoned a _long_ time ago.

They have some _really_ interesting ideas about building web sites
using generics.

I paid for the Intel C compiler. I'd pay for Clean, if only I
_could_ use it.

Jason Dusek

unread,
Nov 4, 2009, 4:22:32 PM11/4/09
to Philippos Apolinarius, haskel...@haskell.org
2009/11/4 Philippos Apolinarius <phi5...@yahoo.ca>

> Jason Dusek wrote:
> > How do you read in the IOUArray? By parsing a character
> > string or do you treat the file as binary numbers or ... ?
>
> I always parse the file. Parsing the file has the advantage of
> allowing me to have files of any format.

From this description, it's hard for me to see what is hard
for you. When you "parse the file" I imagine you in face
"parse a String" or "parse a lazy ByteString" (a much better
idea). Take that `String` or `ByteString` and pass it to an
`ST` computation that parses it to make an `ST` array and then
operates on the array.
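Something along these lines, as a minimal sketch (the file format -
whitespace-separated doubles - and every name here is made up):

import qualified Data.ByteString.Lazy.Char8 as L
import Data.Array.ST (newListArray, runSTUArray)
import Data.Array.Unboxed (UArray)

-- hypothetical: parse whitespace-separated doubles into an unboxed
-- array, built inside ST (the naive `read` parse and the `length`
-- that forces the whole spine are just for illustration)
loadDoubles :: L.ByteString -> UArray Int Double
loadDoubles s = runSTUArray $ do
    let xs = map (read . L.unpack) (L.words s)
    newListArray (0, length xs - 1) xs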

--
Jason Dusek

Roman Leshchinskiy

unread,
Nov 4, 2009, 5:22:37 PM11/4/09
to Bulat Ziganshin, haskell
On 05/11/2009, at 04:01, Bulat Ziganshin wrote:

> oh, can we stop saying about shootout? if you want to see speed of
> pure haskell code, look at papers about fast arrays/strings - their
> authors have measured that lazy lists are hundreds times slower than
> idiomatic C code. is use of lazy lists counted as mistake too and
> paper authors had too small haskell experience?

In the papers I coauthored, I don't think we measured any such thing.
What we measured was that in algorithms that are best implemented with
(unboxed) arrays, using boxed lists is going to cost you. That's not a
very surprising conclusion and it's by no means specific to Haskell.
The problem was/is the lack of nice purely declarative array libraries
but that changing, albeit slowly. It's a question of using the right
data structure for the algorithm, not a C vs. Haskell thing.

Roman

Jason Dusek

unread,
Nov 4, 2009, 6:56:08 PM11/4/09
to Philippos Apolinarius, haskel...@haskell.org
2009/11/04 Jason Dusek <jason...@gmail.com>:
>  ...you "parse the file" I imagine you in face...

in face -> in fact

Sorry.

Philippos Apolinarius

unread,
Nov 4, 2009, 10:52:44 PM11/4/09
to Jason Dusek, haskel...@haskell.org
Let me see whether I understood you correctly... If I read the contents of a file, the string may be lazy (or something like that) and not consume memory? In fewer words, will the string behave like the infinite list of random numbers that I have used in the examples I posted?

--- On Wed, 11/4/09, Jason Dusek <jason...@gmail.com> wrote:

From: Jason Dusek <jason...@gmail.com>
Subject: Re: [Haskell-cafe] Arrays in Clean and Haskell
To: "Philippos Apolinarius" <phi5...@yahoo.ca>
Cc: haskel...@haskell.org

Received: Wednesday, November 4, 2009, 2:22 PM

2009/11/4 Philippos Apolinarius <phi5...@yahoo.ca>
> Jason Dusek wrote:
> > How do you read in the IOUArray? By parsing a character
> > string or do you treat the file as binary numbers or ... ?
>
> I always pare the file. Parsing the file has the advantage of
> alowing me to have files of any format.

  From this description, it's hard for me to see what is hard
  for you. When you "parse the file" I imagine you in face
  "parse a String" or "parse a lazy ByteString" (a much better
  idea). Take that `String` or `ByteString` and pass it to an
  `ST` computation that parses it to make an `ST` array and then
  operates on the array.

--
Jason Dusek


Jason Dusek

unread,
Nov 5, 2009, 12:20:11 AM11/5/09
to Philippos Apolinarius, haskel...@haskell.org
2009/11/04 Philippos Apolinarius <phi5...@yahoo.ca>

> Let me see whether I understood you correctly... If I read
> the contents of a file, the string may be lazy (or
> something like that) and not consume memory?

A `String` or a lazy `ByteString` will be lazy and consume
minimal memory. You can parse lazy `ByteString`s with
AttoParsec. To the best of my knowledge, the most patched-up
version of that parser is here:

http://hackage.haskell.org/package/bytestringparser-temporary/

It is really in your best interest to parse with `ByteString`s
instead of `String`s.

Disclosure of conflict of interest: The package I mention is my
own fork of Bryan O'Sullivan's AttoParsec (which is a little
broken in places).

> In fewer words, will the string behave like the infinite list
> of random numbers that I have used in the examples I posted?

In so far as it is lazy, yes.

Lazy IO. A terrible idea, except when it's a good idea.

Ketil Malde

unread,
Nov 5, 2009, 4:01:00 AM11/5/09
to haskell
Bulat Ziganshin <bulat.z...@gmail.com> writes:

>> Right, the interesting thing is not how fast I can get with N times the
>> effort, but if I can get fast enough with 1/N.

> - while i say that haskell is slow compared to other languages,

The "other languages" being C and C++. I believe Haskell is fast
compared to Ruby, Python or Lisp.

> i don't say that it is slow for you or that you need speed at all.
> why is it repeated again and again? why didn't you write to Don that you
> don't need speed when he wrote that haskell is fast

I'm not saying I don't need speed. I need programs that are fast
*enough*, and I would like to build them with a minimum of effort on my
part.

Which is why I would like to see how fast GHC can make more idiomatic
Haskell code.

> but wrote this to me? :(

Because I am trying to agree with you. :-)

>> Many of those people are making fairly simple mistakes. I think a
>> somewhat seasoned programmer using good libraries can write declarative,
>> concise, and readable code that still is reasonably fast.

> i don't think that omitting strictness declarations is a mistake :)

If your program is slow because you misunderstand how laziness affects
execution, and if it's easily fixed by adding a bit of strictness, then,
yes, that is the kind of mistake I'm talking about.
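A minimal illustration of the kind of fix I mean - the classic lazy
accumulator:

import Data.List (foldl')

-- lazy fold: builds a long chain of (+) thunks before anything is forced
sumLazy :: [Double] -> Double
sumLazy = foldl (+) 0

-- strict fold: the accumulator is forced at each step, constant space
sumStrict :: [Double] -> Double
sumStrict = foldl' (+) 0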

>> At least for some approximation of the word. Only one benchmark on
>> the shootout has C at a 3x advantage.

> oh, can we stop talking about the shootout?

I'm using the numbers that are available, you are free to suggest better
benchmarks.

> if you want to see the speed of pure haskell code, look at the papers
> about fast arrays/strings - their authors measured that lazy lists are
> hundreds of times slower than idiomatic C code. is the use of lazy lists
> counted as a mistake too

Yes!¹ You don't have to program a lot of Haskell to realize that for
performance-sensitive data, you need to use ByteStrings or some other
packed data structure, not lazy lists of Chars. Many of those
complaining about poor performance are advised to use ByteStrings instead,
and more often than not, it helps tremendously.

Again, I'm not contradicting your claim that C will usually be faster,
or that writing very fast programs in Haskell takes a lot of effort.
I'm saying that programs have to be fast *enough* and correct *enough*,
and when those goals are achieved, the question is how we can achieve
them with a minimum of effort, and with a maximum of clarity.

-k

¹) Probably not a mistake in those papers, that depends on what the authors
wanted to illustrate. But clearly a mistake in a performance sensitive
program, and clearly a mistake if you want to argue that Haskell is too
slow to use for real programs.


--
If I haven't seen further, it is by standing in the footprints of giants

Bulat Ziganshin

unread,
Nov 5, 2009, 4:46:07 AM11/5/09
to Ketil Malde, haskell
Hello Ketil,

Thursday, November 5, 2009, 12:00:44 PM, you wrote:

>> - while i say that haskell is slow compared to other languages,

> The "other languages" being C and C++. I believe Haskell is fast
> compared to Ruby, Python or Lisp.

of course

>> i don't say that it is slow for you or that you need speed at all.
>> why is it repeated again and again? why didn't you write to Don that you
>> don't need speed when he wrote that haskell is fast

> I'm not saying I don't need speed. I need programs that are fast
> *enough*, and I would like to build them with a minimum of effort on my
> part.

well, why don't you write this to Don? what i said is just that ghc
generates much slower code than gcc. there is nothing in my letter
about your haskell usage, so why all this?

> Which is why I would like to see how fast GHC can make more idiomatic
> Haskell code.

i think it would be more natural to write this to someone who thinks
that haskell is fast based on measurements on non-idiomatic code
rather than to me

>> but wrote this to me? :(

> Because I am trying to agree with you. :-)

really? :/

>>> Many of those people are making fairly simple mistakes. I think a
>>> somewhat seasoned programmer using good libraries can write declarative,
>>> concise, and readable code that still is reasonably fast.

>> i don't think that omitting strictness declarations is a mistake :)

> If your program is slow because you misunderstand how laziness affects
> execution, and if it's easily fixed by adding a bit of strictness, then,
> yes, that is the kind of mistake I'm talking about.

for me, analyzing a slow program and finding where to add a "bit of
strictness" isn't something easy. it's a whole optimization effort, and
in the end you find you need strict lists or strict arrays

>>> At least for some approximation of the word. Only one benchmark on
>>> the shootout has C at a 3x advantage.

>> oh, can we stop talking about the shootout?

> I'm using the numbers that are available, you are free to suggest better
> benchmarks.

it's here:

>> if you want to see the speed of pure haskell code, look at the papers
>> about fast arrays/strings - their authors measured that lazy lists are
>> hundreds of times slower than idiomatic C code. is the use of lazy lists
>> counted as a mistake too

> Yes!¹ You don't have to program a lot of Haskell to realize that for
> performance-sensitive data, you need to use ByteStrings or some other
> packed data structure, not lazy lists of Chars. Many of those
> complaining about poor performance are advised to use ByteStrings instead,
> and more often than not, it helps tremendously.

the problem is that it's string-only and doesn't even speed up all
programs using strings (for example, we have a hand-made BS.readInt
operation since it cannot be made efficient w/o low-level coding).
another example - afaik, there is no BS version of Parsec that
demonstrates a substantial speed increase

> Again, I'm not contradicting your claim that C will usually be faster,
> or that writing very fast programs in Haskell takes a lot of effort.
> I'm saying that programs have to be fast *enough* and correct *enough*,
> and when those goals are achieved, the question is how we can achieve
> them with a minimum of effort, and with a maximum of clarity.

it's obvious. why don't i write to everyone complaining about the lack
of some haskell feature to say that i don't need it?


> ¹) Probably not a mistake in those papers, that depends on what the authors
> wanted to illustrate. But clearly a mistake in a performance sensitive
> program, and clearly a mistake if you want to argue that Haskell is too
> slow to use for real programs.

i don't say that haskell is slow for real programs. i say that
idiomatic haskell is much slower than idiomatic C, and these papers
demonstrate how much, for simple computation-intensive loops

--
Best regards,
Bulat mailto:Bulat.Z...@gmail.com


David Virebayre

unread,
Nov 5, 2009, 4:50:02 AM11/5/09
to Ketil Malde, bulat.z...@gmail.com, haskell
I used to think - was I clueless! - that if the compiler was good enough,
it would perform magic for me, like transforming that list-based quicksort
into an unboxed, mutable-array-based one, automatically, if that made sense.
Like I said, I was clueless.

Since then, I understood that while modern GHC compilers are awesome, they
don't compete with Gandalf yet.

I think that's in a way what Bulat is saying: for Haskell to really
compete with C in *his view*, if I understand it, the compiler has to be
able to take idiomatic Haskell code and translate it into idiomatic C code
or better.

Or said another way, we have to be able to write things like SDL, jpeg or
mpeg processing in Haskell, instead of writing bindings to C libraries,
without losing on performance.

In short, maybe Bulat wishes to be able to write the time-critical parts of
his archiver, in Haskell, without resorting to low-level hacking. Then he'd
be happy with Haskell speed ?

Bulat Ziganshin

unread,
Nov 5, 2009, 5:06:18 AM11/5/09
to David Virebayre, bulat.z...@gmail.com, haskell
Hello David,

Thursday, November 5, 2009, 12:49:38 PM, you wrote:

> I think that's in a way what Bulat is saying: for Haskell to
> really compete with C in *his view*, if I understand it, the
> compiler has to be able to take idiomatic Haskell code and
> translate it into idiomatic C code or better.

> In short, maybe Bulat wishes to be able to write the time-critical
> parts of his archiver, in Haskell, without resorting to low-level
> hacking. Then he'd be happy with Haskell speed ?

i'm happy with haskell speed since i don't expect that it may be used
for time-critical parts (btw, i mostly use C libraries written by
other people). what i mean is that other people shouldn't expect that
Haskell can compete with C based on meaningless shootout numbers or
rare successful benchmarks published here (unsuccessful benchmarks
are just not published, i hope that's obvious?). the same people who
are proving that haskell is as fast as C when they want will prove
that Haskell is much slower when they need the opposite conclusion.
don't believe the ads - check it yourself with the code you actually
write

--
Best regards,
Bulat mailto:Bulat.Z...@gmail.com


John Lato

unread,
Nov 5, 2009, 5:59:45 AM11/5/09
to haskel...@haskell.org, Bulat Ziganshin
Hi Bulat,

I hope you don't mind me asking this, but when you state that
"idiomatic haskell is much slower than idiomatic C", what do you mean
by "much slower"? 2 times? 3 times? 20 times? Order of magnitude?

I'm not going to disagree with you, however it seems to me the gap is
closing. My opinion may be colored by having just examined Masayuki
Takagi's SPH code, which I considered very idiomatic Haskell and for
me performed equivalently to his C++ code with the changes I detailed
in an earlier email (none of which involved any substantial changes to
his code or style in my opinion).

Best,
John


> From: Bulat Ziganshin <bulat.z...@gmail.com>
>
> i don't say that haskell is slow for real programs. i say that
> idiomatic haskell is much slower than idiomatic C and these papers
> demonstrates how much, for simple computation-intensive loops
>

Bulat Ziganshin

unread,
Nov 5, 2009, 6:26:26 AM11/5/09
to John Lato, Bulat Ziganshin, haskel...@haskell.org
Hello John,

Thursday, November 5, 2009, 1:59:14 PM, you wrote:

> I hope you don't mind me asking this, but when you state that
> "idiomatic haskell is much slower than idiomatic C", what do you mean
> by "much slower"? 2 times? 3 times? 20 times? Order of magnitude?

it depends on the task, of course. with number-crunching code, it may be
hundreds or even thousands, but this application area isn't typical
for haskell, after all. for typical apps like text processing it may
be dozens. when we get into real life, the use of FFI libraries and
OS calls makes the difference even smaller, so i think the practical
difference is 3-10 times, in most cases

> I'm not going to disagree with you, however it seems to me the gap is
> closing.

except for the BS library, i don't see many practical improvements.
GHC itself was made ~20% faster with pointer tagging

> My opinion may be colored by having just examined Masayuki
> Takagi's SPH code, which I considered very idiomatic Haskell and for
> me performed equivalently to his C++ code with the changes I detailed
> in an earlier email (none of which involved any substantial changes to
> his code or style in my opinion).

i don't know why his haskell code is so fast - it may be due to
improvements in ghc's compilation of tight loops, unoptimal C code, or
bounds placed by memory speed. well, i know one problem in his C
compilation - he didn't use -O3 -fexcess-precision and other funny
optimization tricks

there is a difference in how good and bad comparisons are treated. when
someone shows that haskell is much slower, there are lots of suggestions
on how to improve the code: add strictness, use other libs, apply tricky
compiler options, even unroll loops by hand. when someone shows a small
gap between C and Haskell, no one checks that the C code is optimized as
much as possible. of course, it's mainly because many haskellers can't
optimize C code. but this should lead to the conclusion "i'm
incompetent at comparisons" rather than "haskell is fast"

John Lato

unread,
Nov 5, 2009, 8:05:13 AM11/5/09
to Bulat Ziganshin, haskel...@haskell.org
On Thu, Nov 5, 2009 at 11:25 AM, Bulat Ziganshin
<bulat.z...@gmail.com> wrote:
> Hello John,
>
> Thursday, November 5, 2009, 1:59:14 PM, you wrote:
>
>> I hope you don't mind me asking this, but when you state that
>> "idiomatic haskell is much slower than idiomatic C", what do you mean
>> by "much slower"? 2 times? 3 times? 20 times? Order of magnitude?
>
> it depends on the task, of course. with number-crunching code, it may be
> hundreds or even thousands, but this application area isn't typical
> for haskell, after all. for typical apps like text processing it may
> be dozens. when we get into real life, the use of FFI libraries and
> OS calls makes the difference even smaller, so i think the practical
> difference is 3-10 times, in most cases

Ok, this is a reasonable starting point.

>
>> I'm not going to disagree with you, however it seems to me the gap is
>> closing.
>
> except for the BS library, i don't see many practical improvements.
> GHC itself was made ~20% faster with pointer tagging
>

I'm really only interested in looking at comparison data here.

>> My opinion may be colored by having just examined Masayuki
>> Takagi's SPH code, which I considered very idiomatic Haskell and for
>> me performed equivalently to his C++ code with the changes I detailed
>> in an earlier email (none of which involved any substantial changes to
>> his code or style in my opinion).
>
> i don't know why his haskell code is so fast - it may be due to
> improvements in ghc's compilation of tight loops, unoptimal C code, or
> bounds placed by memory speed. well, i know one problem in his C
> compilation - he didn't use -O3 -fexcess-precision and other funny
> optimization tricks

True enough. If I did so with ghc, it's only fair to give g++ the
chance as well. Although when I tried it with -O3 -ffast-math, I
didn't see any gross changes in runtime speed gained from -O2. There
may be benefits, but it would take statistical analysis to determine
their significance.

Also, you may not believe it, but the results I posted were for the
first unfolding thresholds I picked. Those could probably be tuned
further as well.

>
> there is a difference in how good and bad comparisons are treated. when
> someone shows that haskell is much slower, there are lots of suggestions
> on how to improve the code: add strictness, use other libs, apply tricky
> compiler options, even unroll loops by hand. when someone shows a small
> gap between C and Haskell, no one checks that the C code is optimized as
> much as possible. of course, it's mainly because many haskellers can't
> optimize C code. but this should lead to the conclusion "i'm
> incompetent at comparisons" rather than "haskell is fast"
>

Re: most optimal C code

I will agree that for most discussions on this list nobody checks that
the C versions are as optimal as possible, however I would suggest
it's for a different reason. Many C libraries and programs are
already widely available in a highly-optimized form, often as a result
of years of work. Programs like the shootout lead to a similar effect
(that is, Haskellers don't bother checking that C is optimized because
the C submitters will be doing that). When doing comparisons against
code like that, I think it's fair to assume that the C versions are
already, if not as fast as possible, within a close enough percentage
to establish a fair benchmark.

For this specific case, I would look at it slightly differently. The
C++ version has a level of performance the author was satisfied with.
It presumably isn't optimized to the hilt, but it's very readable.
The Haskell version is similarly readable, however the author wasn't
happy with the performance (I never saw anything as poor as originally
reported, BTW) and asked how it could be improved. I made a few
suggestions, as did you and others, none of which IMO make the code
non-idiomatic Haskell, and none of which takes more than a few minutes
to implement.

Could the speed of the C++ version be improved? Probably so, but it
wouldn't look as nice. This is probably true for the Haskell code
too. Doing either one would likely take a significant amount of work.
Neither one may be most optimal, but I would argue that the C++ and
Haskell (with suggestions from this list incorporated) programs are
basically equivalent in being idiomatic to the language and
performance. You could say that both are essentially optimized to the
same standard, which is good enough for the task at hand.

Even without any changes, for me the starting difference was roughly
50%, which is under your threshold of 3x as "much slower". Most of
that difference was made up with compiler flags, whereas I doubt that
any flags you pass to g++ (on top of -O3) will yield anything close
to a 40% improvement.

Cheers,
John

brian

unread,
Nov 5, 2009, 9:47:18 AM11/5/09
to David Virebayre, bulat.z...@gmail.com, haskell

On Nov 5, 2009, at 1:49 AM, David Virebayre wrote:

> I think that's in a way what's Bulat is saying : for Haskell to
> really compete with C in *his view*, if I understand it, the
> compiler has to be able to take idiomatic Haskell code, and
> translate it in idomatic C code or better.
>
> Or said another way, we have to be able to write things like SDL,
> jpeg or mpeg processing in Haskell, instead of writing bindings to C
> libraries, without losing on performance.
>

And this is confusing to those of us who are not compiler experts.

Haskell knows when I have a list of Doubles, you know, because it's
strongly typed.

Then it proceeds to box them. Huh ?

The laziness thing has many examples of _reducing_ efficiency, but
there seems to be a real lack of examples
where it helps. In fact it seems to _always_ hurt. People sure seem
excited about it. Me, not so excited.

I've asked this question before and the answer, apparently, is
polymorphism.

Maybe I'm oversimplifying :-)

Maybe the folks at MLTON will add type classes to ML ;-)

Brian

Ketil Malde

unread,
Nov 5, 2009, 9:58:17 AM11/5/09
to haskell
brian <bri...@aracnet.com> writes:

> Haskell knows when I have a list of Doubles, you know, because it's
> strongly typed.

Right - so you can do

foo, bar :: [Double]
foo = 1:2:3:foo
bar = [1,2,3,undefined,4,5,6]

> Then it proceeds to box them. Huh ?

Otherwise, the above wouldn't work.

For lists of known constants, they could be unboxed and packed into an
array. But that means you'd have to have two different types of lists,
and if you really want an unboxed array, why not use one explicitly?
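For instance, a two-line sketch with the standard unboxed arrays:

import Data.Array.Unboxed (UArray, listArray)

-- three Doubles stored flat: no boxes, no thunks
xs :: UArray Int Double
xs = listArray (0, 2) [1, 2, 3]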

> The laziness thing has many examples of _reducing_ efficiency, but
> there seems to be a real lack of examples where it helps. In fact it
> seems to _always_ hurt. People sure seem excited about it. Me, not
> so excited.

Well - it allows me to easily process files larger than memory. Many of
my programs tend to follow a pattern like

(build operation based on command line switches etc)
writeFile outFile . map operation =<< readFile inFile

Lazy bytestrings is my current favorite, since it reads bytes in an
efficient, packed format, presents a list-like interface, and is
chunkwise lazy, so streaming can be done in constant space.
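A minimal concrete instance of that pattern (the file names and the
uppercasing "operation" are just placeholders):

import qualified Data.ByteString.Lazy.Char8 as L
import Data.Char (toUpper)

-- streams input to output chunk by chunk, in constant space
main :: IO ()
main = L.writeFile "out.txt" . L.map toUpper =<< L.readFile "in.txt"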

The "build operation" part often ends up a bit gross, but I have a plan
for that which I hope to come back to later on.

> I've asked this question before and the answer, apparently, is
> polymorphism.

Really?

-k


--
If I haven't seen further, it is by standing in the footprints of giants

Bulat Ziganshin

unread,
Nov 5, 2009, 9:58:49 AM11/5/09
to brian, bulat.z...@gmail.com, haskell
Hello brian,

Thursday, November 5, 2009, 5:46:51 PM, you wrote:

> The laziness thing has many examples of _reducing_ efficiency, but
> there seems to be a real lack of examples
> where it helps.

main = interact id


--
Best regards,
Bulat mailto:Bulat.Z...@gmail.com


S. Doaitse Swierstra

unread,
Nov 5, 2009, 10:20:15 AM11/5/09
to Deniz Dogan, haskell
One of the differences between Haskell and Clean I did not see
mentioned in this discussion is that Clean does not allow so-called
partial parametrisation, i.e. all function calls have to be fully
saturated. Although GHC can sometimes (often) find out that a call
is saturated, this becomes more complicated if higher-order functions
are involved. As a consequence, extra tests have to be performed at
each call, which partially explains the speed difference.
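A hypothetical example of such a call:

-- at this call site the compiler cannot know f's runtime arity
-- (f may itself be a partial application of a larger function),
-- so a generic-apply check is needed unless analysis proves saturation
applyBoth :: (Int -> Int -> Int) -> Int -> Int
applyBoth f x = f x (f x x)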

I have no idea how much difference it would make if such tests could
completely be avoided in Haskell implementations. We hope to to be
able to say something more about this difference in the future, based
on the global (GRIN-based) analysis done in the Utrecht Haskell
Compiler.

Doaitse


On 3 nov 2009, at 21:30, Deniz Dogan wrote:

> Recently there has been a lot of discussion on this list about the
> programming language Clean and converting Clean programs to Haskell.
> Reading the Wikipedia article on the language, I can't really see any
> major difference between that and Haskell, except for the monads vs.
> uniqueness types.
>
> So what's the deal with Clean? Why is it preferable to Haskell? Why
> is it not?
>
> --
> Deniz Dogan

Duncan Coutts

unread,
Nov 5, 2009, 10:35:08 AM11/5/09
to Ketil Malde, haskell
On Thu, 2009-11-05 at 15:57 +0100, Ketil Malde wrote:

> Lazy bytestrings is my current favorite, since it reads bytes in an
> efficient, packed format, presents a list-like interface, and is
> chunkwise lazy, so streaming can be done in constant space.
>
> The "build operation" part often ends up a bit gross, but I have a plan
> for that which I hope to come back to later on.

Yes, they're not good for construction atm. The Builder monoid from the
binary package is pretty good though. I've considered pulling it out and
putting it in the bytestring package.

Duncan

Jason Dagit

unread,
Nov 5, 2009, 11:26:25 AM11/5/09
to brian, bulat.z...@gmail.com, haskell
On Thu, Nov 5, 2009 at 6:46 AM, brian <bri...@aracnet.com> wrote:

>
> On Nov 5, 2009, at 1:49 AM, David Virebayre wrote:
>
>> I think that's in a way what Bulat is saying: for Haskell to really
>> compete with C in *his view*, if I understand it, the compiler has to be
>> able to take idiomatic Haskell code and translate it into idiomatic C code
>> or better.
>>
>> Or said another way, we have to be able to write things like SDL, jpeg or
>> mpeg processing in Haskell, instead of writing bindings to C libraries,
>> without losing on performance.
>>
>>
> And this is confusing to those of us who are not compiler experts.
>
> Haskell knows when I have a list of Doubles, you know, because it's
> strongly typed.
>
> Then it proceeds to box them. Huh ?
>

Imagine a computation which will yield a Double if evaluated, but has not
yet been evaluated. How do you store that in the list?


> The laziness thing has many examples of _reducing_ efficiency, but there
> seems to be a real lack of examples
> where it helps. In fact it seems to _always_ hurt. People sure seem
> excited about it. Me, not so excited.
>
> I've asked this question before and the answer, apparently, is
> polymorphism.
>

I can't really think of how laziness and polymorphism are related. For me
the big win with laziness is composability. Laziness allows us to express
things in ways that are more natural. The prelude function 'take' is a
perfect example. It allows you to use finite portions of infinite lists.
You could then express an infinite series very naturally and then decouple
from that the logic to process finite parts. The implication here is that
laziness allows you to use data structures for control flow. This all works
together to enable separation of concerns. Which is generally a very good
thing if you want to reason about your source code.
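A tiny sketch of that decoupling:

-- the producer defines an infinite series...
squares :: [Integer]
squares = map (^ 2) [1 ..]

-- ...and the consumer decides how much of it to evaluate
firstFive :: [Integer]
firstFive = take 5 squares    -- [1,4,9,16,25]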

Laziness can also be thought of as a transformation on the time complexity
of algorithms. Sure, the worst-case complexity still remains but often you
can get a better average case by only computing as much as you need.

I hope that helps,
Jason

Andrew Coppin

unread,
Nov 5, 2009, 4:52:35 PM11/5/09
to haskel...@haskell.org
Martin DeMello wrote:
> On Wed, Nov 4, 2009 at 10:34 AM, Richard O'Keefe <o...@cs.otago.ac.nz> wrote:
>
>> (4) It comes with its own IDE. I don't think it can do anything much that
>> Haskell tools can't do, but if you don't like looking for things, it's
>> a help.

>
> And a well-integrated GUI toolkit. If it weren't for the Windows bias
> I'd have definitely taken the time to learn the language.
>

I'm disappointed that Haskell doesn't have *more* of a Windows bias. It
_is_ the platform used by 90% of the desktop computers, after all. (As
unfortunate as that undeniably is...)

In particular, I really wish we could make it so that stuff from Hackage
actually compiles on Windows. (Disclaimer: Stuff written in Haskell
compiles just fine. It's FFI bindings that unanimously refuse to
compile.) It's also somewhat irritating that the I/O libraries have a
few quirks apparently related to mingw32. But hey, that one at least
should be fixable...

Erik de Castro Lopo

unread,
Nov 5, 2009, 5:02:53 PM11/5/09
to haskel...@haskell.org
Andrew Coppin wrote:

> I'm disappointed that Haskell doesn't have *more* of a Windows bias. It
> _is_ the platform used by 90% of the desktop computers, after all. (As
> unfortunate as that undeniably is...)

That is not true in my home and it's not true where I work.

In addition, saying "90% of all desktop computers" is misleading;
instead we should be talking about the computers of software developers
and there, the figure is almost certainly well below 90%.

> In particular, I really wish we could make it so that stuff from Hackage
> actually compiles on Windows. (Disclaimer: Stuff written in Haskell
> compiles just fine. It's FFI bindings that unanimously refuse to
> compile.) It's also somewhat irritating that the I/O libraries have a
> few quirks apparently related to mingw32. But hey, that one at least
> should be fixable...

The problem here is that windows is the odd one out.

Stuff written for userspace Linux will usually compile with little
more than minor alterations on OSX and all the other Unix-like
systems. Making that same code build on windows can be a significant
amount of work and that work should not be the responsibility of
the people who write code on Linux and Mac.

Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/

Andrew Coppin

unread,
Nov 5, 2009, 5:28:24 PM11/5/09
to haskel...@haskell.org
Erik de Castro Lopo wrote:
> Andrew Coppin wrote:
>
>
>> I'm disappointed that Haskell doesn't have *more* of a Windows bias. It
>> _is_ the platform used by 90% of the desktop computers, after all. (As
>> unfortunate as that undeniably is...)
>>
>
> That is not true in my home and it's not true where I work.
>
> In addition, saying "90% of all desktop computers" is misleading;
> instead we should be talking about the computers of software developers
> and there, the figure is almost certainly well below 90%.
>

Depends what you develop. I know of plenty of developers who use MS
Visual Studio for everything, for example.

You can pretend that Windows isn't popular and thus there's no need to
support it, but to me that seems like a fairly unrealistic point of view.

> The problem here is that windows is the odd one out.
>

This, it seems, is why there are programs in this world that are
designed for Windows but (sometimes) also run on Unix, and other
programs which are designed for Unix but (sometimes) also run on Windows.

I would like to add that GHC itself is really rather well-behaved under
Windows. Many Unix programs simply get recompiled under Cygwin or
something, resulting in a program that *runs* on Windows, but doesn't
follow any Windows-related conventions and so forth. GHC actually
behaves very well under Windows. And we have some quite nice library
support for writing Haskell programs which compile unmodified under
Windows and Linux. (E.g., filepath, ansi-terminal, etc.) I'm just
saying, we could still do better...

Deniz Dogan

unread,
Nov 5, 2009, 5:29:28 PM11/5/09
to haskel...@haskell.org
2009/11/5 Erik de Castro Lopo <mle...@mega-nerd.com>:

> Andrew Coppin wrote:
>
>> I'm disappointed that Haskell doesn't have *more* of a Windows bias. It
>> _is_ the platform used by 90% of the desktop computers, after all. (As
>> unfortunate as that undeniably is...)
>
> That is not true in my home and it's not true where I work.
>
> In addition, saying "90% of all desktop computers" is misleading;
> instead we should be talking about the computers of software developers
> and there, the figure is almost certainly well below 90%.
>

Why? After all, software is always (in one way or another) written for
users, not other software developers.

--
Deniz Dogan

Daniel Fischer

unread,
Nov 5, 2009, 5:32:14 PM11/5/09
to haskel...@haskell.org
Am Donnerstag 05 November 2009 23:02:30 schrieb Erik de Castro Lopo:
> Andrew Coppin wrote:
> > I'm disappointed that Haskell doesn't have *more* of a Windows bias. It
> > _is_ the platform used by 90% of the desktop computers, after all. (As
> > unfortunate as that undeniably is...)
>
> That is not true in my home and it's not true where I work.

Neither is it true in the group of people I know.
However, the number of computers in that group which had Windows installed
when they were bought may be close to 90% - it's close to impossible to buy
a completely assembled, ready-to-go computer without it here - which is
easily remedied by inserting an openSuse or Ubuntu disk as soon as it's
connected to power :)

>
> In addition, saying "90% of all desktop computers" is misleading;
> instead we should be talking about the computers of software developers
> and there, the figure is almost certainly well below 90%.
>
> > In particular, I really wish we could make it so that stuff from Hackage
> > actually compiles on Windows. (Disclaimer: Stuff written in Haskell
> > compiles just fine. It's FFI bindings that unanimously refuse to
> > compile.) It's also somewhat irritating that the I/O libraries have a
> > few quirks apparently related to mingw32. But hey, that one at least
> > should be fixable...
>
> The problem here is that windows is the odd one out.

Still it would be nice if things were easily installable on Windows, so maybe some Windows
user should write a tool which makes installing C libraries on windows feasible.

>
> Stuff written for userspace Linux will usually compile with little
> more than minor alterations on OSX and all the other Unix-like
> systems. Making that same code build on windows can be a significant
> amount of work and that work should not be the responsibility of
> the people who write code on Linux and Mac.
>
> Erik


Erik de Castro Lopo

unread,
Nov 5, 2009, 5:46:09 PM11/5/09
to haskel...@haskell.org
Deniz Dogan wrote:

> 2009/11/5 Erik de Castro Lopo <mle...@mega-nerd.com>:
>

> > In addition, saying "90% of all desktop computers" is misleading;
> > instead we should be talking about the computers of software developers
> > and there, the figure is almost certainly well below 90%.
> >
>
> Why? After all, software is always (in one way or another) written for
> users, not other software developers.

We're talking about Haskell libraries. The *only* people who are
interested in Haskell libraries are Haskell developers (ie not even
developers in general).

Desktop users who use nothing more than Word, Excel
and IE are not interested in Haskell libraries.

Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/

Alberto G. Corona

unread,
Nov 5, 2009, 5:54:21 PM11/5/09
to haskell-cafe
A comparison of the evolution in speed of Haskell from version to version,
rather than with other languages, could have been very informative about the
progress in haskell speed. I think that the progression has been
astonishing.

I thought about that while looking at this language shootout on windows

http://dada.perl.it/shootout/

where ghc performance is rather mediocre. Until I realized that the version
used was GHC 5.04.2 <http://www.haskell.org/>

2009/11/4 Don Stewart <do...@galois.com>

> bulat.ziganshin:
> > oh, can we stop talking about the shootout? if you want to see the speed
> > of pure haskell code, look at the papers about fast arrays/strings - their
> > authors measured that lazy lists are hundreds of times slower than
> > idiomatic C code. is the use of lazy lists counted as a mistake too, and
> > did the paper authors have too little haskell experience?
>

> Comparing apples against oranges is a mistake, yes.
>
> -- Don

Erik de Castro Lopo

unread,
Nov 5, 2009, 5:59:41 PM11/5/09
to haskel...@haskell.org
Andrew Coppin wrote:

> Depends what you develop. I know of plenty of developers who use MS
> Visual Studio for everything, for example.

And those developers do not care whether Haskell libraries compile
on windows or not.

> You can pretend that Windows isn't popular and thus there's no need to
> support it, but to me that seems like a fairly unrealistic point of view.

Windows is popular amongst general computer users, but less popular
amongst developers and less popular still amongst Haskell developers.

> > The problem here is that window is the odd one out.
>
> This, it seems, is why there are programs in this world that are
> designed for Windows but (sometimes) also run on Unix,

This is a very small proportion of all windows applications and many/
most of them run via Wine, a code base of 1.8 million lines with a
worth according to Ohloh of $29Mil (which I consider very conservative):

http://www.ohloh.net/p/wine

> and other
> programs which are designed for Unix but (sometimes) also run on Windows.

A far larger proportion of Unix programs run on windows, due to two
factors:

 - Unix people write command line apps and libraries, which are always
   easier to port to windows.
 - Unix people do things like windows backends for GTK+ and QT to make
   GUI applications portable to windows.

> I'm just saying, we could still do better...

My point is that it's up to people who care about windows to fix things
for windows.

I am a user of Debian and Ubuntu. About a year ago I became sick and
tired of the poor state of haskell in Debian. I didn't complain, I
joined the debian-haskell-maintainers group and started packaging
haskell stuff for Debian. Now, the situation has improved vastly.

Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/

wren ng thornton

unread,
Nov 5, 2009, 8:14:03 PM11/5/09
to haskel...@haskell.org
Roman Leshchinskiy wrote:
> wren ng thornton wrote:
>> Roman Leshchinskiy wrote:
>>> On 04/11/2009, at 13:23, Daniel Peebles wrote:
>>>> In the presence of fusion (as is the case in uvector), it's hard to
>>>> give meaningful time complexities for operations as they depend on
>>>> what operations they are paired with. We need to think of a better way
>>>> to express this behavior in the documentation though.
>>>
>>> I have to disagree here. Fusion never makes the complexity of
>>> operations worse. If it does, it's a bug.
>>
>> I think the point was more that the relevant complexity bound can
>> change in the presence of fusion. For a poor example: the first map
>> over a list is O(n) but all subsequent ones in a chain of maps are
>> O(1) with fusion. I'm sure there are better examples than that, but
>> you get the idea. Some people may care to know about that latter
>> complexity rather than just the "independent" complexity.
>
> I think asymptotic complexity is the wrong tool for what you're trying
> to do. [...] Executing the two maps, be it one after another or interlocked,
> is linear simply because O(n) + O(n) = O(n), not because of fusion.

As I said, it was a bad example. Off-hand I can't think of any examples
where fusion actually does affect asymptotic complexity rather than just
reducing the constant factor. But I think such examples (if they exist)
are what Daniel was concerned with, rather than any bugs where fusion
makes the complexity (or constant factors) worse.
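For reference, the kind of fusion at issue is the map/map rule that
the GHC documentation uses as its standard RULES example:

{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

Whether or not it fires, the list ends up traversed once instead of
twice and the intermediate list disappears - a constant-factor win,
not an asymptotic one, which matches Roman's point.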

--
Live well,
~wren

brian

unread,
Nov 5, 2009, 9:15:33 PM11/5/09
to da...@codersbase.com, bulat.z...@gmail.com, haskell

On Nov 5, 2009, at 8:26 AM, Jason Dagit wrote:

>
> I can't really think of how laziness and polymorphism are related.
> For me the big win with laziness is composability. Laziness allows
> us to express things in ways that are more natural. The prelude
> function 'take' is a perfect example. It allows you to use finite
> portions of infinite lists. You could then express an infinite
> series very naturally and then decouple from that the logic to
> process finite parts. The implication here is that laziness allows
> you to use data structures for control flow. This all works
> together to enable separation of concerns. Which is generally a
> very good thing if you want to reason about your source code.
>

My bad, I meant polymorphism as the answer as to why things are boxed.

> Laziness can also be thought of as a transformation on the time
> complexity of algorithms. Sure, the worst-case complexity still
> remains but often you can get a better average case by only
> computing as much as you need.
>
> I hope that helps,

It does.

brian

unread,
Nov 5, 2009, 9:16:12 PM11/5/09
to da...@codersbase.com, bulat.z...@gmail.com, haskell

On Nov 5, 2009, at 8:26 AM, Jason Dagit wrote:

>
>
> Haskell knows when I have a list of Doubles, you know, because it's
> strongly typed.
>
> Then it proceeds to box them. Huh ?
>
> Imagine a computation which will yield a Double if evaluated, but
> has not yet been evaluated. How do you store that in the list?
>

So laziness is causing the boxing to be necessary ?

Jason Dagit

unread,
Nov 6, 2009, 1:15:10 AM11/6/09
to brian, bulat.z...@gmail.com, haskell
On Thu, Nov 5, 2009 at 6:15 PM, brian <bri...@aracnet.com> wrote:

>
> On Nov 5, 2009, at 8:26 AM, Jason Dagit wrote:
>
>
>>
>> Haskell knows when I have a list of Doubles, you know, because it's
>> strongly typed.
>>
>> Then it proceeds to box them. Huh ?
>>
>> Imagine a computation which will yield a Double if evaluated, but has not
>> yet been evaluated. How do you store that in the list?
>>
>>
> So laziness is causing the boxing to be necessary ?
>
>

"Necessary" is a strong word within formal/mathematical communities. If you
mean it in that sense, then I'm not sure it's necessary. My (incomplete)
understanding is that no one has a better way than boxing that has as wide
applicability as boxing. Perhaps there are techniques that work better. My
guess is that they are either 1) special cases; or 2) have yet to be
discovered. I wonder if perhaps supercompilation or perhaps whole program
optimizations will eventually be able to eliminate much of the boxing we
have today. Strictness analysis has done a lot to remove boxing but it is
not perfect due to the halting problem.
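As a small aside, where the programmer commits to strictness
explicitly, GHC can already avoid the boxes - a sketch:

{-# LANGUAGE BangPatterns #-}

-- strict, unpacked fields: the two Doubles live flat in the constructor
data Point = Point {-# UNPACK #-} !Double {-# UNPACK #-} !Double

-- a strict accumulator that strictness analysis can keep unboxed
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)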

Jason

Evan Laforge

unread,
Nov 6, 2009, 2:02:19 AM11/6/09
to brian, bulat.z...@gmail.com, haskell
> And this is confusing to those of us who are not compiler experts.
>
> Haskell knows when I have a list of Doubles, you know, because it's strongly
> typed.
>
> Then it proceeds to box them. Huh ?
>
> The laziness thing has many examples of _reducing_ efficiency, but there
> seems to be a real lack of examples
> where it helps. In fact it seems to _always_ hurt. People sure seem
> excited about it. Me, not so excited.

I have a program that involves a somewhat involved "compilation"
pipeline. As a result of laziness, it's all done incrementally and I
can start getting results right away, which is essential to my
application. Without laziness I'd have to do something awkward and
complicated like break it into a lot of message passing threads or
process in chunks (and it's impossible to know how much context each
chunk will need without further hackery). I can abort the computation
cleanly by simply stopping the consumer, and everything gets GCed.

And I get all sorts of convenient idioms like 'zip [0..]' and
calculating stuff in 'where' that I may not need.

And it's just fun.

So I'm still excited about it :)

Ketil Malde

unread,
Nov 6, 2009, 2:45:29 AM11/6/09
to Duncan Coutts, haskell
Duncan Coutts <duncan...@googlemail.com> writes:

>> The "build operation" part often ends up a bit gross, but I have a plan
>> for that which I hope to come back to later on.
>
> Yes, they're not good for construction atm. The Builder monoid from the

I've used that a bit, but that's not what I'm talking about.
Serializing and deserializing to/from bytestrings is sometimes
complicated, but usually straightforward enough.

The operation in between is what is giving me headaches. Basically, I
have a bunch of command line options - filter on this, modify on that,
produce some kind of output to some file - that must be interpreted in
order to produce a combined filter/modifier processing.

The pipeline looks something like this:

readFoo :: IO [Foo]
process :: [Foo] -> IO [Foo]
writeFoo :: [Foo] -> IO ()

The hard part is how to elegantly construct the "process" part.
If it were just filtering or modifications, it could be a pure
function. The complexity comes from sometimes needing to split off
some output to some file.

Currently, I'm opening handles in advance, and processing one Foo at a
time, writing it to the correct handles, and finally closing handles
when done. This is a rather pedestrian approach.

I'm now considering defining

branch :: ([Foo] -> IO ()) -> [Foo] -> IO [Foo]
branch iop fs = do forkIO (iop fs)
                   return fs

Which, if I understand correctly, would allow me to write

process fs = filterM this fs >>= mapM that
             >>= branch (writeBar . map foo2bar) >>= filterM theother

So - is this a good way to approach it? I feel slightly queasy about
using concurrency for this, although I think it'll work well in
practice. It is very important that this is lazy - the list of Foos can
be larger than available memory, so one risk is that one thread might
run off with the list of Foos with the other trailing far behind,
leading to increased memory use. Previous experience seems to indicate
that the 'head' thread will be slowed by disk/memory and allow the
trailing threads to keep up.

I do have a nagging feeling that this could be solved more elegantly
with arrows or a lazy writer monad, or something else that I don't know
enough about. I'd be happy to hear any suggestions.

-k

PS: I probably need to catch the ThreadId, and wait for all threads to
finish as well. This is left as an exercise for the writer. :-)
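PPS: one minimal way to do that waiting, sketched with an MVar per
branch (names hypothetical):

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- like branch, but also hands back an action that blocks until the
-- forked writer has finished
branchWait :: ([a] -> IO ()) -> [a] -> IO (IO (), [a])
branchWait iop fs = do
    done <- newEmptyMVar
    _    <- forkIO (iop fs >> putMVar done ())
    return (takeMVar done, fs)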


--
If I haven't seen further, it is by standing in the footprints of giants

David Virebayre

unread,
Nov 6, 2009, 3:12:20 AM11/6/09
to Ketil Malde, haskell
On Fri, Nov 6, 2009 at 8:45 AM, Ketil Malde <ke...@malde.org> wrote:

>
> enough about. I'd be happy to hear any suggestions.
>

This is more a question than a suggestion, but would the iteratees package
fit your needs ?

David.

John van Groningen

unread,
Nov 6, 2009, 6:50:43 AM11/6/09
to S. Doaitse Swierstra, haskel...@haskell.org

Doaitse Swierstra wrote:
>One of the differences between Haskell and Clean I did not see mentioned in this discussion is that Clean does not allow so-called partial parametrisation, i.e. all function calls have to be fully saturated

I don't understand what you mean. Can you give an example ?

Kind regards,

John van Groningen

Paolo Losi

unread,
Nov 6, 2009, 11:11:35 AM11/6/09
to haskel...@haskell.org
Don Stewart wrote:
> I'd be happy to talk more about the design of the library, if you like.

Don,

I would be personally grateful if you could talk about the design of
the library and/or point to some comprehensive documentation.

Can you confirm that uvector is going to stay almost api compatible
with dph, and that the knowledge investment is going to be "reusable"
on dph?

Paolo

Roman Leshchinskiy

unread,
Nov 6, 2009, 11:32:40 AM11/6/09
to Paolo Losi, haskel...@haskell.org
On 07/11/2009, at 03:10, Paolo Losi wrote:

> Don Stewart wrote:
>> I'd be happy to talk more about the design of the library, if you
>> like.
>
> Don,
>
> I would be personally grateful if you could talk about the design of
> the library and/or point to some comprehensive documentation.
>
> Can you confirm that uvector is going to stay almost api compatible
> with dph, and that the knowledge investment is going to be "reusable"
> on dph?

uvector has (almost) nothing in common with DPH's API. It is forked
off the flat sequential array layer which DPH uses internally and
which the users aren't supposed to even know about. Also, the fork
happened quite a while ago, DPH has changed a lot since then and is
going to change a lot more in the future. My plan is to eventually use
my vector package to replace those flat arrays but sadly I don't have
a lot of time to work on it (although vector is quite usable by now
and even implements recycling which should improve DPH's performance
by quite a bit). The fact that everything DPH depends on will have to
be distributed with GHC doesn't help, either, since adding a new
package into the mix is a pretty big step.

Roman

L Spice

unread,
Nov 7, 2009, 7:50:28 PM11/7/09
to haskel...@haskell.org
John van Groningen <johnvg <at> cs.ru.nl> writes:

> Doaitse Swierstra wrote:
> > One of the differences between Haskell and Clean I did not see mentioned
> > in this discussion is that Clean does not allow so-called partial
> > parametrisation, i.e. all function calls have to be fully saturated
>
> I don't understand what you mean. Can you give an example ?
>
> Kind regards,
>
> John van Groningen

I think the idea was that Clean doesn't support a syntax like "map (**2)" for a
function that will take a list and square its elements. The call to map there
is not fully saturated, since it's waiting for another argument.

(As a disclaimer, I've not used Clean, so I could be speaking nonsense; it's
just how I read the original statement.)

Ketil Malde

unread,
Nov 8, 2009, 3:36:59 AM11/8/09
to haskel...@haskell.org
L Spice <jaden...@yahoo.com> writes:

>> Doaitse Swierstra wrote:

>>> One of the differences between Haskell and Clean I did not see mentioned in
>>> this discussion is that Clean does not allow so-called partial
>>> parametrisation, i.e. all function calls have to be fully saturated

>> I don't understand what you mean. Can you give an example ?

> I think the idea was that Clean doesn't support a syntax like "map
> (**2)"

This terminology is new to me; I would normally call that "partial
application". Googling "partial parametrization" gives me some papers¹
that appear to use this term as a synonym.

I'm surprised that (if) Clean doesn't support it.

-k

¹ http://cat.inist.fr/?aModele=afficheN&cpsidt=5420616
and http://www.springerlink.com/content/wg64116566522061/


--
If I haven't seen further, it is by standing in the footprints of giants

Stephen Tetley

unread,
Nov 8, 2009, 3:57:28 AM11/8/09
to L Spice, haskel...@haskell.org
Why speak nonsense when you can test it?

// ----------------------------------------------------------------------------

module nonsense

import StdEnv

nonsense = map ((^) 2)

Start = nonsense [1,2,3]

// ----------------------------------------------------------------------------

... Running gives:

[2,4,8]


Best wishes

Stephen

2009/11/8 L Spice <jaden...@yahoo.com>:

Stephen Tetley

unread,
Nov 8, 2009, 4:22:35 AM11/8/09
to haskel...@haskell.org
My impression is that the saturated-ness Doaitse speaks of is covered
in Urban Boquist's PhD thesis on the GRIN intermediate language -
circa page 31.

http://www.cs.chalmers.se/~boquist/phd/

As per the code snippet above Clean handles partial application
entirely adequately.

Best wishes

Stephen

>>> Doaitse Swierstra wrote:
>>> > One of the differences between Haskell and Clean I did not see mentioned
>>> > in this discussion is that Clean does not allow so-called partial
>>> > parametrisation, i.e. all function calls have to be fully saturated

Richard O'Keefe

unread,
Nov 8, 2009, 8:03:40 PM11/8/09
to Ketil Malde, haskel...@haskell.org

>>>> One of the differences between Haskell and Clean I did not see
>>>> mentioned in this discussion is that Clean does not allow so-called
>>>> partial parametrisation, i.e. all function calls have to be fully saturated

I think there may be a misunderstanding here.

Beware: I haven't used Clean in a while.

(1) Clean didn't have sections.
    This is no big deal. Clean does have "flip" in StdFunc.
        (x +)  =>  (+) x
        (+ y)  =>  (flip (+)) y

(2) Clean requires saturated *DEFINITIONS*.
    If you declare
        f :: A B C -> D
    then each rule you give for f must have exactly three arguments.
    If you declare
        f :: A -> B -> C -> D
    then each rule you give for f must have exactly one argument.
    See section 3.7 of the Clean 2.1 language report.

This has no consequences for how you can *apply* such a
function. Section 3.7.1 of the report is explicit:

In CLEAN all symbols (functions and constructors) are
defined with fixed arity. However, in an application
it is of course allowed to apply them to an arbitrary
number of arguments. A curried application of a
function is an application of a function with a number
of arguments which is less than its arity (note that
in CLEAN the arity of a function can be derived from
its type). With the aid of the predefined internal
function _AP a curried function applied on the
required number of arguments is transformed into an
equivalent uncurried function application. The type
axiom's (sic.) of the CLEAN type system include for
all s defined with arity n the equivalence of
s::(t1->(t2->(...(tn->tr)...)) with
s::t1 t2 ... tn -> tr.
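For contrast, in Haskell the arity of a definition is not pinned
down by its type - a small illustration:

-- both are accepted in Haskell; under Clean's rule the number of
-- patterns on the left must match the declared arity
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z          -- defined with three arguments

add3' :: Int -> Int -> Int -> Int
add3' x = \y z -> x + y + z     -- defined with one argument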

Henning Thielemann

unread,
Nov 11, 2009, 4:37:53 PM11/11/09
to Stephen Tetley, haskel...@haskell.org
Stephen Tetley wrote:

> Why speak nonsense when you can test it?
>
> // ----------------------------------------------------------------------------
>
> module nonsense
>
> import StdEnv
>
> nonsense = map ((^) 2)
>
> Start = nonsense [1,2,3]
>
> // ----------------------------------------------------------------------------
>
> .... Running gives:
>
> [2,4,8]

I think they wanted square numbers, not powers of two - ((^) 2) fixes
the base, not the exponent, so it computes 2^n. flip (^) 2 would give
[1,4,9].
