Apparently in that paper, Hughes and O'Donnell describe a means of
introducing nondeterminism in a pure, referentially transparent,
functional notation, through the use of some type of sets where in
some sense each set is represented by a single element chosen from the
set. Can someone outline how that works, or point to (or publish) a
copy of the paper online?
> Apparently in that paper, Hughes and O'Donnell describe a means of
> introducing nondeterminism in a pure, referentially transparent,
> functional notation, through the use of some type of sets where in
> some sense each set is represented by a single element chosen from the
> set. Can someone outline how that works, or point to (or publish) a
> copy of the paper online?
This may be the "stupid answer", but maybe it was elaborating on what is
now the "list monad", i.e. the list type made an instance of the Monad
typeclass in Haskell?
V.
> This may be the "stupid answer", but maybe it was elaborating on what
> is now the "list monad", i.e. the list type made an instance of the
> Monad typeclass in Haskell?
Disclaimer: the following reflects my current understanding of monads in
general and the List monad in particular. Any corrections,
clarifications and rectifications appreciated.
Which, essentially, means that you simulate nondeterminism by evaluating
all possible results and storing them in a list (the "monad" bit just
refers to a specific way of implementing this - an elegant way, but it's
essentially still "just implementation").
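For example (a toy example of my own, not something from the paper), in
the list monad a "nondeterministic choice" just enumerates every
alternative, and the result is the list of all possible outcomes:

pairs :: [(Int, Int)]
pairs = do
    x <- [1, 2, 3]    -- "choose" x nondeterministically
    y <- [10, 20]     -- "choose" y nondeterministically
    return (x, y)     -- every combination shows up in the result

-- pairs == [(1,10),(1,20),(2,10),(2,20),(3,10),(3,20)]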
Regards,
Jo
--
Currently looking for a new job.
> you simulate nondeterminism by evaluating
> all possible results and storing them in a list
of course you meant "by storing all possible results in a list", since
you don't evaluate them. You know this, but others might misunderstand:
Prelude> [1,2] >>= (\ x -> [x,x])
[1,1,2,2]
Prelude> [undefined] >>= (\ x -> [x]) >> [3]
[3]
Bye
Vincenzo
The way it works is pretty straightforward, really. Each function
is declared to return a set of results. At the very top level, even
the main function is declared to return a set of results. However,
the system won't try to compute every element in the set returned from
the main function; it will just compute a single element of the set.
The set ADT provides operations to convert a value into a singleton set,
to take the union of two sets, and to map a function over every element
of a set to give a new set, but it provides no way to test whether a
particular value is in a set. As a result, the representation of
these sets doesn't need to hold all the elements, just a single element.
System functions whose behaviour is nondeterministic can be modelled as
returning the set of all possible results that they might return,
and all the usual mathematical principles of functional programming can
be applied when reasoning about these functions; but the actual
implementation of such functions only needs to compute a single value,
not the set of all possible values.
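In outline, such an ADT might look something like the following sketch
(the names and the trivial representation are illustrative only, not
actual code from the paper):

module NDSet (NDSet, singleton, union, ndmap, choose) where

-- The representation holds just ONE element chosen from the set
-- that it denotes; the constructor is not exported.
newtype NDSet a = NDSet a

-- Convert a value into a singleton set.
singleton :: a -> NDSet a
singleton = NDSet

-- Union of two sets.  The implementation is free to commit to either
-- representative; this version arbitrarily keeps the first.
union :: NDSet a -> NDSet a -> NDSet a
union (NDSet x) _ = NDSet x

-- Map a function over every element of a set, giving a new set.
ndmap :: (a -> b) -> NDSet a -> NDSet b
ndmap f (NDSet x) = NDSet (f x)

-- Used only by the top-level driver to extract the single element that
-- actually gets computed; note that no membership test is exported.
choose :: NDSet a -> a
choose (NDSet x) = x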
See also my mail to the Haskell mailing list
<http://www.dcs.gla.ac.uk/mail-www/haskell/msg00560.html> which
gives some concrete Haskell code for the NDSet ADT, shows how it can
be made an instance of the Monad class, and suggests how it can be used
for modelling the nondeterminism that arises when catching exceptions.
(This nondeterminism arises because the order of execution is unspecified.
That implies that in cases where the code being executed might throw
more than one exception, it is unspecified which of these exceptions
will occur.)
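To give the rough idea, with the sketch above the instance declarations
could be as simple as this (again just an illustration, not the code
from that mail; these go inside the NDSet module, where the constructor
is visible):

instance Functor NDSet where
    fmap f (NDSet x) = NDSet (f x)

instance Applicative NDSet where
    pure = NDSet
    NDSet f <*> NDSet x = NDSet (f x)

instance Monad NDSet where
    -- Bind just applies the continuation to the single representative.
    NDSet x >>= f = f x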
--
Fergus Henderson <f...@cs.mu.oz.au>  | "I have always known that the pursuit
The University of Melbourne          | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>   | -- the last words of T. S. Garp.
>Vincenzo aka Nick Name wrote:
>
>> This may be the "stupid answer", but maybe it was elaborating on what
>> is now the "list monad", i.e. the list type made an instance of the
>> Monad typeclass in Haskell?
No, it was not.
There are two common ways of using nondeterminism: committed choice
nondeterminism, where you may have a set of results, but you only want
to find a single member of that set, and all-solutions nondeterminism,
where you may need to find all solutions. The Hughes and O'Donnell
paper is about a way to model committed choice nondeterminism.
Modelling committed choice nondeterminism is important, because some
important operations are inherently nondeterministic unless the system
is overspecified, and yet cannot reasonably be implemented in a way that
returns all possible solutions. For example, merging the outputs of
two concurrently executing threads in the order in which those outputs
are generated is nondeterministic, unless the language specifies in
detail the exact thread scheduling policy and the relative time
that each language operation takes, which would essentially prevent
any optimization. And we certainly don't want to compute every possible
interleaving -- we just want to compute the particular interleaving
based on the order in which the values happened to be output.
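To make the merge example concrete, here is a toy sketch of my own
(reusing an NDSet-style type like the one sketched earlier in the
thread; this is not code from the paper). The merge is *specified* as
denoting the set of all interleavings of its inputs, but the
implementation commits to a single one:

-- A simple alternating interleaving stands in here for "whatever order
-- the outputs actually happened to arrive in".
mergeOutputs :: [a] -> [a] -> NDSet [a]
mergeOutputs xs ys = singleton (interleave xs ys)
  where
    interleave (a:as) bs = a : interleave bs as
    interleave []     bs = bs

Any reasoning about a program that uses mergeOutputs can treat it as the
set of every possible interleaving, even though only one interleaving is
ever computed.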