
Mar 7, 2006, 12:44:14 PM

I'm writing some Haskell code to interpret mathematical functions

expressed as trees. Here's a simplified example:

data Expr = Identity | Constant Float | Binop (Float -> Float) Expr Expr

with

eval Identity x = x

eval (Constant k) x = k

eval (Binop op l r) x = op (eval l x) (eval r x)

But I want 'eval e' to be polymorphic. I.e. I'd like to be able to write

let f = eval e
    a = f 1.0
    b = f (1.0 :: Double)
    c = f (1.0 :+ 2.0) in ...

In order to do this I'd like to redefine Expr as

data Expr = Id | Constant Double | Binop (a -> a) Expr Expr

This isn't legal Haskell. How do I define Expr and eval correctly to be

polymorphic in this way?

(Also, is this on topic for comp.lang.functional? I'd ask in the

Haskell Cafe but I can't figure out a way of posting there without

using my real email address and hence publicising it for spammers.)

Mar 7, 2006, 1:29:59 PM

"Dan Piponi" <goog...@sigfpe.com> writes:

(snip)

> data Expr = Id | Constant Double | Binop (a -> a) Expr Expr

>

> This isn't legal Haskell. How do I define Expr and eval correctly to be

> polymorphic in this way?

(snip)

Maybe,

data Expr a = Id | Constant Double | Binop (a -> a) (Expr a) (Expr a)

-- Mark

Mar 7, 2006, 2:01:40 PM

Mark T.B. Carroll suggested:

> data Expr a = Id | Constant Double | Binop (a -> a) (Expr a) (Expr a)

Some working code this time:

data Expr a = Identity | Constant a | Binop (a -> a -> a) (Expr a) (Expr a)

eval Identity x = x

eval (Constant k) x = k

eval (Binop op l r) x = op (eval l x) (eval r x)

test = let e = Constant 1.0
           f = eval e
           a = f (1.0 :: Float)
           b = f (1.0 :: Double) in (a,b)

I could have used Constant Float instead of Constant a, but that fails for the same reason: the compiler seems to deduce that a = Float and then sticks with it.
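[The behaviour described here is GHC's monomorphism restriction at work. A minimal illustration, independent of Expr and with made-up names, of how a let-bound function without a signature gets pinned to a single type:]

```haskell
-- A binding with an explicit polymorphic signature can be used at
-- Float and Double in one expression.
halve :: Fractional a => a -> a
halve x = x / 2

demo :: (Float, Double)
demo = (halve 1.0, halve 1.0)

-- demoBad = let g = halve in (g (1.0 :: Float), g (1.0 :: Double))
--   -- rejected under the monomorphism restriction: the signatureless
--   -- binding 'g' is monomorphised, so its first use fixes its type

main :: IO ()
main = print demo
```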

(Aside: what is the correct way to cast from Float to Double in

Haskell?)

Mar 7, 2006, 2:25:24 PM

On 2006-03-07, Dan Piponi <goog...@sigfpe.com> wrote:

> (Aside: what is the correct way to cast from Float to Double in

> Haskell?)

realToFrac

--

Aaron Denney

-><-

Mar 7, 2006, 2:44:02 PM

Aaron Denney said:

>> (Aside: what is the correct way to cast from Float to Double in

>> Haskell?)

>realToFrac

Thanks. With the clues given me so far the best solution I could come

up with was:

data Expr a = Identity | Constant Float | Binop (a -> a -> a) (Expr a) (Expr a)

class Evaluable a where
    eval :: Expr a -> a -> a
    fromFloat :: Float -> a
    eval Identity x = x
    eval (Constant k) x = fromFloat k
    eval (Binop op l r) x = op (eval l x) (eval r x)

instance Evaluable Float where
    fromFloat = id

instance Evaluable Double where
    fromFloat = realToFrac

test = let e = Binop (+) (Constant 1.0) Identity
           e' = Binop (+) (Constant 1.0) Identity
           f = eval e
           f' = eval e'
           a = f (1.0 :: Float)
           a' = f' (1.0 :: Double) in (a,a')

Is it possible to do better? In particular is it possible for f and f'

(or even e and e') to in fact be the same object rather than two

separate objects?

Mar 7, 2006, 4:29:50 PM

Dan Piponi <goog...@sigfpe.com> wrote:

> data Expr a = Identity | Constant Float | Binop (a -> a -> a) (Expr a) (Expr a)
>
> class Evaluable a where
>     eval :: Expr a -> a -> a
>     fromFloat :: Float -> a
>     eval Identity x = x
>     eval (Constant k) x = fromFloat k
>     eval (Binop op l r) x = op (eval l x) (eval r x)
>
> instance Evaluable Float where
>     fromFloat = id
>
> instance Evaluable Double where
>     fromFloat = realToFrac
>
> test = let e = Binop (+) (Constant 1.0) Identity
>            e' = Binop (+) (Constant 1.0) Identity
>            f = eval e
>            f' = eval e'
>            a = f (1.0 :: Float)
>            a' = f' (1.0 :: Double) in (a,a')

> Is it possible to do better? In particular is it possible for f and f'

> (or even e and e') to in fact be the same object rather than two

> separate objects?

Yes. To make that work, you have to think about type classes, and

when they are bound. Your original definition was better:

> data Expr a = Identity | Constant a | Binop (a -> a -> a) (Expr a) (Expr a)

This means that the expression tree uses the same type a for numbers throughout the whole tree, so it should indeed be "Constant a" and not "Constant Float".

> eval :: Expr a -> a -> a

> eval Identity x = x

> eval (Constant k) x = k

> eval (Binop op l r) x = op (eval l x) (eval r x)

That was fine. Now we need f. The defaulting rules would otherwise fix the type, so we have to make the polymorphism explicit by adding a type annotation:

> f :: Fractional a => a -> a

> f = eval (Constant 1.0)

Keep in mind that the literal "1.0" is interpreted as "fromRational 1.0",

and so is converted to the correct type. If you want to abbreviate

"Constant 1.0" by "e", then you again have to add the type annotation, for

the same reason, otherwise it will default.

Now, you can apply f to any fractional type:

*Main> f (1.0 :: Float)

1.0

*Main> f (1.0 :: Double)

1.0

*Main> :m Ratio

Prelude Ratio> Main.f (1 % 2)

1 % 1
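[Putting the pieces of this reply together as one runnable module (a sketch; the Binop expression is adapted from Dan's earlier test code):]

```haskell
-- Keep the expression type parametric, and give 'f' an explicit
-- polymorphic signature so it can be applied at several numeric types.
data Expr a = Identity
            | Constant a
            | Binop (a -> a -> a) (Expr a) (Expr a)

eval :: Expr a -> a -> a
eval Identity       x = x
eval (Constant k)   _ = k
eval (Binop op l r) x = op (eval l x) (eval r x)

-- Without this annotation, defaulting (or the monomorphism
-- restriction) would pin f to a single concrete type.
f :: Fractional a => a -> a
f = eval (Binop (+) (Constant 1.0) Identity)

main :: IO ()
main = print (f (1.0 :: Float), f (1.0 :: Double))
```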

- Dirk

Mar 7, 2006, 5:34:03 PM

Dirk said:

>To make that work, you have to think about type classes, and

> when they are bound.

> Now, you can apply f to any fractional type:

> *Main> f (1.0 :: Float)

> 1.0

> *Main> f (1.0 :: Double)

> 1.0

> *Main> :m Ratio

> Prelude Ratio> Main.f (1 % 2)

> 1 % 1

Fantastic. This is getting close to what I want. So now...how should I

annotate the function test below (a toy version of what I really want)

as my own attempt at annotation fails to compile:

> test :: (Fractional a) => (a -> a) -> (Double,Float)

> test f = (f (1.0 :: Double),f (1.0 :: Float))

--

Dan

Mar 8, 2006, 2:08:47 AM

Dan Piponi <goog...@sigfpe.com> wrote:

> Fantastic. This is getting close to what I want. So now...how should I

> annotate the function test below (a toy version of what I really want)

> as my own attempt at annotation fails to compile:

>> test :: (Fractional a) => (a -> a) -> (Double,Float)

>> test f = (f (1.0 :: Double),f (1.0 :: Float))

The problem here is that when you pass f in this way, it has type f :: (a -> a), where the type variable a must be the same for the first and the second application. That's not what you want: you want f to be polymorphic, so that a can vary.

There are several ways to do that. For example, you don't pass f as an

argument, but declare f locally:

test1 = (f (1.0 :: Double), f (1.0 :: Float)) where
    f :: Fractional a => a -> a
    f = eval (Constant 1.0)

Or, if you need to pass f as an argument, you must make sure it has

type f :: forall a. Fractional a => a -> a. That means the type of

test needs now higher rank polymorphism (explicit "forall"

annotations, not just the implicit ones in front), so you need to

enable the appropriate compiler extensions (-fglasgow-exts for GHC).

test2 :: (forall a. (Fractional a) => (a -> a)) -> (Double,Float)

test2 f = (f (1.0 :: Double), f (1.0 :: Float))

I don't know your application, but probably this is overkill, and

there's a way to do it without higher rank polymorphism.
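[A runnable version of test2 above, for reference (a sketch; modern GHC spells the relevant extension RankNTypes, which -fglasgow-exts implied at the time):]

```haskell
{-# LANGUAGE RankNTypes #-}

-- The higher-rank signature means the argument stays polymorphic
-- inside test2, so it can be used at Double and at Float.
test2 :: (forall a. Fractional a => a -> a) -> (Double, Float)
test2 f = (f 1.0, f 1.0)

main :: IO ()
main = print (test2 (\x -> x + 1.0))
```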

- Dirk

Mar 8, 2006, 2:04:35 PM

Dirk Thierbach said:

> I don't know your application, but probably this is overkill, and

> there's a way to do it without higher rank polymorphism.

My application is a numerical equation solver that solves equations

input as strings. The reason I want polymorphism is that the solver

algorithm wants to work with the 'same' equation over different types.

E.g. imagine an equation solver that wants to find real solutions but

which occasionally uses evaluations at complex values to glean useful

clues about what happens for real values. This isn't exactly what I'm

doing, but it's conceptually close.

The code actually works already but I was hoping to make better use of

polymorphism to make the code more elegant. You've been a great help so

far, thanks.

Mar 8, 2006, 3:53:11 PM

Dan Piponi <goog...@sigfpe.com> wrote:

> My application is a numerical equation solver that solves equations

> input as strings. The reason I want polymorphism is that the solver

> algorithm wants to work with the 'same' equation over different types.

Note that there is a performance penalty to pay for more polymorphism:

The compiler has to pass the typeclass dictionaries around all the

time, and it also has to keep the values boxed. I'd guess that transforming the input multiple times, say first into a real-valued function and then into a complex-valued one (or whatever you need), is probably more efficient.

But better test that (if it matters at all, and your equation

solver is not already fast enough, anyway).

- Dirk

Mar 9, 2006, 11:50:28 AM

Dirk Thierbach said:

> Note that there is a performance penalty to pay for more polymorphism

Yes, I think I'm seeing that. I actually have a function that parses a

string representing an equation and returns a function. Call this

function 'parse'. 'parse' is written so that if I evaluate something

like

map (parse "some equation") [some list of values]

the equation is parsed only once (as I can tell from using 'trace').

When I use the higher rank polymorphism approach 'parse' seems to be

reevaluated every time (parse "...") is applied to a new value. (This

isn't the exact same situation as I have in my code but it's

conceptually similar.)

I'm using the "transforming the input multiple times" approach and that

works fine, even if it doesn't look quite as pretty.

BTW parse was defined something like this:

parse eqn = ...

It could have been written as

parse' eqn x = ...

It would have had the same type but Haskell wouldn't be able to reduce

expressions involving 'parse' until it had been supplied both of the

arguments, whereas with the first definition it can reduce as soon as

the equation string is provided meaning it doesn't need to be

reevaluated in an example like map (parse ...) [...]. So even though

parse and parse' have the same type, their behaviour is different (from the point of view of CPU time). This is a long-winded explanation. Is there some standard terminology that differentiates between these two kinds of function so I can say this in fewer words in the future? (I

guess I want to differentiate between functions (A x B) -> C and A ->

(B -> C) which are usually identified when we talk about Haskell

functions.)

Mar 10, 2006, 2:03:49 AM

Dan Piponi <goog...@sigfpe.com> wrote:

> Dirk Thierbach said:

>> Note that there is a performance penalty to pay for more polymorphism

> Yes, I think I'm seeing that. I actually have a function that parses a

> string representing an equation and returns a function. Call this

> function 'parse'. 'parse' is written so that if I evaluate something

> like

> map (parse "some equation") [some list of values]

> the equation is parsed only once (as I can tell from using 'trace').

That was not the point I was trying to make. The point was that when

being less polymorphic (i.e., using a concrete type instead of a type

variable), the compiler can generate more efficient code, especially

for floats and doubles. Example:

double1 :: Num a => a -> a

double1 x = x + x

will work for every numeric type, but the compiled function will have

an extra parameter (the typeclass dictionary), and the values will

be represented by pointers ("boxed"). Now

double2 :: Float -> Float

double2 x = x + x

is potentially much more efficient, because there is no extra parameter

(which hurts more in a recursive function like "eval"), and the values

can be kept in floating point registers (in principle). You can control

this kind of optimization with GHC pragmas to some degree, but if things

are too complex, maybe it's better just to write one parsing function

that builds an AST, and several functions "evalReal", "evalComplex" that

do the evaluation for a concrete type.

That is not as elegant, but may be more efficient, especially if the

evaluation is needed in an inner loop somewhere. When in doubt, test

with profiling.
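[A sketch of that layout (the Ast constructors and the evaluator bodies here are illustrative, not taken from the thread's actual code): one shared syntax tree built by a single parser, with a separate monomorphic evaluator per concrete type.]

```haskell
import Data.Complex (Complex ((:+)))

-- One AST produced by the (single) parser.
data Ast = Var | Lit Rational | Add Ast Ast | Mul Ast Ast

-- Evaluators at concrete types: no dictionary parameter, and the
-- compiler can keep the numbers unboxed.
evalReal :: Ast -> Double -> Double
evalReal Var       x = x
evalReal (Lit k)   _ = fromRational k
evalReal (Add l r) x = evalReal l x + evalReal r x
evalReal (Mul l r) x = evalReal l x * evalReal r x

evalComplex :: Ast -> Complex Double -> Complex Double
evalComplex Var       x = x
evalComplex (Lit k)   _ = fromRational k
evalComplex (Add l r) x = evalComplex l x + evalComplex r x
evalComplex (Mul l r) x = evalComplex l x * evalComplex r x

main :: IO ()
main = do
  let ast = Add (Lit 1) (Mul Var Var)   -- 1 + x*x
  print (evalReal ast 2.0)
  print (evalComplex ast (0 :+ 1))
```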

> BTW parse was defined something like this:

>

> parse eqn = ...

>

> It could have been written as

>

> parse' eqn x = ...

>

> It would have had the same type but Haskell wouldn't be able to reduce

> expressions involving 'parse' until it had been supplied both of the

> arguments, whereas with the first definition it can reduce as soon as

> the equation string is provided meaning it doesn't need to be

> reevaluated in an example like map (parse ...) [...]. So even though

> parse and parse' have the same type their behaviour is different (from

> the point of CPU time required). This is a long-winded explanation - is

> their some standard terminology that differentiates between these two

> types of function so I can say this in fewer words in the future?

Maybe you're looking for "partial application". The application of parse'

to eqn is partial, so unless the compiler does some magic, it won't

be evaluated. But with lazy evaluation, evaluation often happens later

than you expect, anyway :-)

> (I guess I want to differentiate between functions (A x B) -> C and

> A -> (B -> C) which are usually identified when we talk about

> Haskell functions.)

This is called Currying, and while it enables partial application in

the first place, it's still a different principle.
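[The sharing difference being discussed can be made visible with Debug.Trace (a sketch; 'work' is a stand-in for real parsing, and exactly how much is shared depends on GHC's optimisation level):]

```haskell
import Debug.Trace (trace)

-- 'work' stands in for an expensive step such as parsing; the trace
-- message shows each time it actually runs.
work :: String -> Int
work s = trace "working" (length s)

-- parse-style: the expensive step runs once the first argument is
-- supplied, and the result 'n' is shared by every later call.
parse :: String -> Int -> Int
parse eqn = let n = work eqn in \x -> n + x

-- parse'-style: 'parse'' eqn' is a partial application still waiting
-- for its second argument, so the expensive step can rerun per call.
parse' :: String -> Int -> Int
parse' eqn x = work eqn + x

main :: IO ()
main = do
  print (map (parse  "x+1") [1, 2, 3])   -- "working" typically traced once
  print (map (parse' "x+1") [1, 2, 3])   -- "working" typically traced three times
```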

- Dirk

Mar 14, 2006, 7:30:12 PM

> Maybe you're looking for "partial application"

I think the term I wanted was "runtime compilation":

http://www.haskell.org/hawiki/RunTimeCompilation
