Nov 14, 2007, 12:30:13 PM

Hello, all,

Working with the State monad again. Wikipedia (http://en.wikipedia.org/wiki/Ackermann_function) gives the standard definition of the Ackermann function:

> a :: Integer -> Integer -> Integer
> a m n | m == 0          = n + 1
>       | m > 0 && n == 0 = a (m - 1) 1
>       | m > 0 && n > 0  = a (m - 1) (a m (n - 1))

After running it, I immediately thought of using the State monad to memoize intermediate results (a sort of top-down dynamic programming approach):

> import Control.Monad.State
> import Data.Map (Map)
> import qualified Data.Map as Map

> type AckMap  = Map (Integer, Integer) Integer
> type AckMapS = State AckMap Integer

> ackify :: Integer -> Integer -> AckMapS
> ackify m n = do mappo <- get
>                 case Map.lookup (m, n) mappo of
>                   Just num -> return num
>                   Nothing  -> ackify' m n

Now we need to recursively update the map:

> ackify' :: Integer -> Integer -> AckMapS
> ackify' m n | m == 0          = return (n + 1) >>= ackify'' m n
>             | m > 0 && n == 0 = ackify (m - 1) 1 >>= ackify'' m n
>             | m > 0 && n > 0  =
>                 ackify m (n - 1) >>= \arg ->
>                 ackify (m - 1) arg >>= ackify'' m n

> ackify'' :: Integer -> Integer -> Integer -> AckMapS
> ackify'' m n ans = do mappo <- get
>                       put (Map.insert (m, n) ans mappo)
>                       return ans

... and that works fine (ackify' is my monadic version of my Ackermann function 'a'), speeding up the computation exponentially, it seems to me.
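For reference, here is the memoized version above as a complete runnable sketch, with the driver that kicks it off from an empty memo table. `runAck` is my name for the driver, not something from the post; the `\arg ->` lambda in the third guard is restored where the archive lost it:

```haskell
import Control.Monad.State
import Data.Map (Map)
import qualified Data.Map as Map

type AckMap  = Map (Integer, Integer) Integer
type AckMapS = State AckMap Integer

-- Memoized entry point: consult the map first.
ackify :: Integer -> Integer -> AckMapS
ackify m n = do
  mappo <- get
  case Map.lookup (m, n) mappo of
    Just num -> return num
    Nothing  -> ackify' m n

-- Monadic Ackermann recursion for arguments not yet in the map.
ackify' :: Integer -> Integer -> AckMapS
ackify' m n
  | m == 0          = return (n + 1) >>= ackify'' m n
  | m > 0 && n == 0 = ackify (m - 1) 1 >>= ackify'' m n
  | otherwise       = ackify m (n - 1) >>= \arg ->
                      ackify (m - 1) arg >>= ackify'' m n

-- Store the answer for (m, n) before returning it.
ackify'' :: Integer -> Integer -> Integer -> AckMapS
ackify'' m n ans = do
  mappo <- get
  put (Map.insert (m, n) ans mappo)
  return ans

-- Hypothetical driver: run from an empty memo table.
runAck :: Integer -> Integer -> Integer
runAck m n = evalState (ackify m n) Map.empty
```

The well-known values A(2,3) = 9 and A(3,3) = 61 make convenient sanity checks.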

My question is, am I violating any principles of monadic style or brevity? If so, where are the guides for me to correct my style? Put another way: what are better ways to use Haskell programming techniques (monads or otherwise) here to increase function 'a's efficiency while keeping its declarative nature intact?

Sincerely,

Doug Auclair

BTW, Wikipedia has a higher-order functional definition that transliterates to Haskell very nicely:

> ack :: Integer -> Integer -> Integer

> ack 0 = succ

> ack m = iter $ ack (m - 1)

> iter :: (Integer -> Integer) -> Integer -> Integer

> iter f 0 = f 1

> iter f n = f $ iter f (n - 1)

(perhaps the Wikipedia def has an error, but that's not apropos to this discussion)
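For what it's worth, the higher-order definition agrees with the first-order one on small inputs, so it looks correct. A quick check, comparing the two on a small grid (the `agree` helper is mine, not from the post):

```haskell
-- First-order definition, as at the top of the post.
a :: Integer -> Integer -> Integer
a m n | m == 0          = n + 1
      | m > 0 && n == 0 = a (m - 1) 1
      | otherwise       = a (m - 1) (a m (n - 1))

-- Higher-order definition from Wikipedia.
ack :: Integer -> Integer -> Integer
ack 0 = succ
ack m = iter (ack (m - 1))

iter :: (Integer -> Integer) -> Integer -> Integer
iter f 0 = f 1
iter f n = f (iter f (n - 1))

-- Compare the two definitions on a small grid of arguments.
agree :: Bool
agree = and [ a m n == ack m n | m <- [0 .. 3], n <- [0 .. 5] ]
```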

Nov 16, 2007, 2:44:10 AM

dauc...@gmail.com wrote:

>> a :: Integer -> Integer -> Integer
>> a m n | m == 0          = n + 1
>>       | m > 0 && n == 0 = a (m - 1) 1
>>       | m > 0 && n > 0  = a (m - 1) (a m (n - 1))

[Version with State monad]

> My question is, am I violating any principles of monadic style or

> brevity? If so, where are the guides for me to correct my style? Put

> another way: what are better ways to use Haskell programming

> techniques (monads or otherwise) here to increase function 'a's

> efficiency while keeping its declarative nature intact?

I am not aware of any specific recommended monadic style. Nevertheless, I'd like to point out two alternatives to your way, using a more general approach. (I'd say the monadic style used is more or less the same; it's the way of thinking about memoization that's different.)

It's well known that recursion can be modelled as a fixpoint

> fix :: (a -> a) -> a

> fix f = let x = f x in x

of a function that has an extra argument which is used instead of the recursive calls. So let's rewrite the Ackermann function from above in that way. While we're at it, we replace the two arguments by a pair (the reason for that will be seen shortly); we assume the arguments are positive to avoid checking them all the time; and we use pattern matching for the zeroes:
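As an aside, the fixpoint pattern may be easier to see on a more familiar function first. A hypothetical factorial, with the recursive call abstracted into an extra argument and then tied back with `fix` (this example is mine, not Dirk's):

```haskell
fix :: (a -> a) -> a
fix f = let x = f x in x

-- Factorial with the recursive call replaced by the extra
-- argument 'self'; no direct recursion in the body.
factF :: (Integer -> Integer) -> Integer -> Integer
factF self 0 = 1
factF self n = n * self (n - 1)

-- Tying the knot recovers the ordinary recursive function.
factorial :: Integer -> Integer
factorial = fix factF
```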

> acker :: ((Integer, Integer) -> Integer) ->
>          ((Integer, Integer) -> Integer)
> acker a' (0,n) = n + 1
> acker a' (m,0) = a' (m-1, 1)
> acker a' (m,n) = a' (m-1, a' (m, n-1))

We can then write the old function as

> ackermann0 :: Integer -> Integer -> Integer
> ackermann0 m n
>   | n < 0 || m < 0 = error "negative arguments"
>   | otherwise      = fix acker (m,n)

Now, a simple way to memoize calls is to use an array. As arrays are lazy, we just have to create an array with an expression for each of the values we wish to memoize, and call-by-need will replace them with values as the calculation proceeds. We use the extra argument for recursive calls to "squeeze in" accesses to the array instead, but otherwise we're again just calculating the fixpoint. The only difference is that instead of starting out with an arbitrary type a, we now have to assume that we're looking at a function of type a -> b, because we need to process the arguments explicitly (and that's the reason for the uncurried form of "acker" we used above). In this version, we also simply calculate any values outside of the memoized range with recursion, as before (alternatively we could throw an error, for example).

To compare, fix had type

fix :: ( a -> a ) -> a

> import Data.Array

> memoizeA :: Ix a => (a,a) -> ((a -> b) -> (a -> b)) -> Array a b
> memoizeA b t = a where
>   a = listArray b [t f x | x <- range b]
>   f x | inRange b x = a ! x
>       | otherwise   = t f x

Now we can pick some bounds and write

> ackermann1 :: Integer -> Integer -> Integer
> ackermann1 m n
>   | n < 0 || m < 0 = error "negative arguments"
>   | otherwise      = memoizeA ((0,0),(4,1000)) acker ! (m,n)
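As a quick sanity check, the array version reproduces standard Ackermann values such as A(2,3) = 9 and A(3,3) = 61; here is the complete sketch assembled from the pieces above:

```haskell
import Data.Array

-- Generic array memoizer over a bounded index range, as above.
memoizeA :: Ix a => (a, a) -> ((a -> b) -> (a -> b)) -> Array a b
memoizeA b t = a
  where
    a = listArray b [ t f x | x <- range b ]
    f x | inRange b x = a ! x        -- memoized lookup
        | otherwise   = t f x        -- fall back to recursion

-- Uncurried Ackermann with the recursion abstracted out.
acker :: ((Integer, Integer) -> Integer)
      -> ((Integer, Integer) -> Integer)
acker a' (0, n) = n + 1
acker a' (m, 0) = a' (m - 1, 1)
acker a' (m, n) = a' (m - 1, a' (m, n - 1))

ackermann1 :: Integer -> Integer -> Integer
ackermann1 m n
  | n < 0 || m < 0 = error "negative arguments"
  | otherwise      = memoizeA ((0, 0), (4, 1000)) acker ! (m, n)
```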

If we know the expected range of arguments in advance, this version is likely quite fast, because we can exploit O(1) element access. But for the Ackermann function, n can easily get very large, so let's use a Map instead, like you did. First, we again rewrite the original function, but this time in monadic style:

> ackerM :: Monad m => ((Integer, Integer) -> m Integer) ->
>                      ((Integer, Integer) -> m Integer)
> ackerM a' (0,n) = return $ n + 1
> ackerM a' (m,0) = a' (m-1, 1)
> ackerM a' (m,n) = a' (m, n-1) >>= \n' -> a' (m-1, n')

Now we can "squeeze in" the lookup and update of the map in a way very similar to the above:

> type StateMap a b = State (Map a b) b

> memoizeM :: Ord a => ((a -> StateMap a b) -> (a -> StateMap a b)) -> (a -> b)
> memoizeM t x = evalState (f x) Map.empty where
>   g x = do
>     y <- t f x
>     m <- get
>     put $ Map.insert x y m
>     return y
>   f x = get >>= \m -> maybe (g x) return (Map.lookup x m)

and get the top-level function

> ackermann2 :: Integer -> Integer -> Integer
> ackermann2 m n
>   | n < 0 || m < 0 = error "negative arguments"
>   | otherwise      = memoizeM ackerM (m,n)
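The Map-based version checks out against the same known values. One caveat worth noting: since memoizeM runs evalState from Map.empty, the memo table is rebuilt on every top-level call and is not shared between calls to ackermann2. The complete sketch:

```haskell
import Control.Monad.State
import Data.Map (Map)
import qualified Data.Map as Map

type StateMap a b = State (Map a b) b

-- Map-based memoizer: look up first, otherwise compute,
-- store, and return.
memoizeM :: Ord a => ((a -> StateMap a b) -> (a -> StateMap a b)) -> (a -> b)
memoizeM t x = evalState (f x) Map.empty
  where
    g x = do
      y <- t f x
      m <- get
      put (Map.insert x y m)
      return y
    f x = get >>= \m -> maybe (g x) return (Map.lookup x m)

-- Monadic, uncurried Ackermann with the recursion abstracted out.
ackerM :: Monad m => ((Integer, Integer) -> m Integer)
                  -> ((Integer, Integer) -> m Integer)
ackerM a' (0, n) = return (n + 1)
ackerM a' (m, 0) = a' (m - 1, 1)
ackerM a' (m, n) = a' (m, n - 1) >>= \n' -> a' (m - 1, n')

ackermann2 :: Integer -> Integer -> Integer
ackermann2 m n
  | n < 0 || m < 0 = error "negative arguments"
  | otherwise      = memoizeM ackerM (m, n)
```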

The advantage of these approaches is that they leave the original function more or less intact, and one can re-use the memoization HOFs in other code. The disadvantage is that, due to the extra arguments, it's probably a bit less efficient than your inlined version.
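To illustrate the re-use point: the same memoizeA works unchanged for any recursion written in the extra-argument style. A hypothetical Fibonacci (my example, not from the thread):

```haskell
import Data.Array

-- The same generic array memoizer as above, unchanged.
memoizeA :: Ix a => (a, a) -> ((a -> b) -> (a -> b)) -> Array a b
memoizeA b t = a
  where
    a = listArray b [ t f x | x <- range b ]
    f x | inRange b x = a ! x
        | otherwise   = t f x

-- Fibonacci with the recursive call as an extra argument,
-- just like 'acker'.
fibF :: (Int -> Integer) -> Int -> Integer
fibF f n | n < 2     = fromIntegral n
         | otherwise = f (n - 1) + f (n - 2)

-- Memoize over an assumed range of 0..90.
fib :: Int -> Integer
fib n = memoizeA (0, 90) fibF ! n
```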

- Dirk
