
[Haskell-cafe] Yampa vs. Reactive


Tony Hannan

Dec 16, 2008, 9:14:32 PM
to haskel...@haskell.org
Hello,

Can someone describe the advantages and disadvantages of the Yampa library
versus the Reactive library for functional reactive programming, or point me
to a link.

Thanks,
Tony

P.S. It is hard to google for Yampa and Reactive together because "reactive"
as in "functional reactive programming" always appears with Yampa.

Thomas Davie

Dec 17, 2008, 3:02:01 AM
to Tony Hannan, haskel...@haskell.org

Advantages of Yampa:
• Just at the moment, slightly more polished.
• (maybe) harder to introduce space/time leaks.

Advantages of Reactive:
• More functional programming like -- doesn't require you to use
arrows everywhere, and supports a nice applicative style.
• In very active development.
• Active community.

Hope that helps -- my personal preference is that Reactive is the one
I'd use for any FRP project at the moment.

Bob

Henrik Nilsson

Dec 17, 2008, 9:31:34 AM
to Thomas Davie, haskel...@haskell.org
Thomas Davie wrote:

> Advantages of Yampa:
> • Just at the moment, slightly more polished.
> • (maybe) harder to introduce space/time leaks.
>
> Advantages of Reactive:
> • More functional programming like -- doesn't require you to use
> arrows everywhere, and supports a nice applicative style.
> • In very active development.
> • Active community.

I have not used Reactive as such, but I did use "Classic FRP"
extensively, and as far as I know the setup is similar, even
if Reactive has a modern and more principled interface.

Based on my Classic FRP experience (which may be out of date, if so,
correct me), I'd say advantages of Yampa are:

* More modular.

Yampa's signal function type is explicitly parameterized on the
input signal type. In Classic FRP and Reactive (as far as I know),
the system input is implicitly connected to all signal functions
(or behaviours) in the system.

One consequence of this is that there were issues with reusing
Classical FRP for different kinds of system input, and it was difficult
to combine systems with different kinds of input. This was what
prompted a parameterization on the type of the system input in the
first place, which eventually led to Arrowized FRP and Yampa.

I don't know what the current story of Reactive is in this respect.
But having parameterized input has been crucial for work on big,
mixed-domain, applications. (For example, a robot simulator with an
interactive editor for setting up "the world". The robots were
FRP systems too, but their input is naturally of a different kind
than the overall system input. It also turned out to be very useful
to have an FRP preprocessor for the system input, which then was
composed with the rest of the system using what effectively was
arrow composition (>>>), but called something else at the time.)
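
To make this concrete, here is a minimal sketch -- not code from that
project; the types and the two preprocessors are made up purely for
illustration -- of how parameterizing on the input type lets one and the
same controller be composed with different front ends:

  import Control.Arrow ((>>>))
  import FRP.Yampa (SF)

  -- Illustrative types only:
  data HardwareSample = HardwareSample
  data SimState       = SimState
  data RobotInput     = RobotInput  { lightLevel :: Double, bumper :: Bool }
  data RobotOutput    = RobotOutput { leftWheel :: Double, rightWheel :: Double }

  -- The controller mentions only its own input type...
  robotController :: SF RobotInput RobotOutput
  robotController = undefined    -- elided

  -- ...so the very same controller can be driven by real hardware or by
  -- a simulator, simply by composing in a different input preprocessor:
  fromHardware :: SF HardwareSample RobotInput
  fromHardware = undefined       -- elided

  fromSimulator :: SF SimState RobotInput
  fromSimulator = undefined      -- elided

  onHardware :: SF HardwareSample RobotOutput
  onHardware = fromHardware >>> robotController

  inSimulation :: SF SimState RobotOutput
  inSimulation = fromSimulator >>> robotController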

* A clear separation between signals, signal functions, and ordinary
functions and values, yet the ability to easily integrate all kinds
of computations.

Arguably a matter of taste, and in some ways more a consequence of
the Arrow syntax than Arrows themselves. But in Classical FRP,
one had to do either a lot of explicit lifting (in practice, we
often ended up writing lifting wrappers for entire libraries), or
try to exploit overloading for implicit lifting. The latter is
quite limited though, partly due to how Haskell's type classes
are organized and that language support for overloaded constants
is limited to numerical constants.

In any case, when we switched to arrows and arrow syntax, I found
it liberating to not have to lift "everything" to signal functions
first, but that I could program both with signals and signal
functions on the one hand, and plain values and functions on the
other, at the same time and fairly seamlessly. And personally,
I also felt this made the programs conceptually clearer and easier
to understand.

My understanding is that Reactive is similar to Classical FRP in
this respect.
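
A small, self-contained fragment of the kind of thing I mean (a falling
ball in Yampa; the names and constants are made up for illustration):

  {-# LANGUAGE Arrows #-}
  import Control.Arrow (returnA)
  import FRP.Yampa (SF, integral)

  gravity :: Double          -- an ordinary constant: no lifting needed
  gravity = -9.81

  -- Plain values (gravity, y0) and signals (v, y) mix freely; the arrow
  -- syntax inserts the necessary lifts behind the scenes.
  fallingBall :: Double -> SF () (Double, Double)
  fallingBall y0 = proc () -> do
    v <- integral -< gravity
    y <- integral -< v
    returnA -< (y0 + y, v)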

* Classical FRP lacked a satisfying approach to handle dynamic
collections of reactive entities as needed when programming
typical video games for example. Yampa has a way. One can
argue about how satisfying it is, but at least it fulfills
basic requirements such as allowing logically removed entities to
be truly removed (garbage collected).

I don't know where Reactive stands here.

* There was also an issue with Classical FRP having to do with the
need to observe the output from one part of the system in
another part of the system. This is quite different from
parameterizing the second part of the system on the first,
as this approach loses sharing across switches. This led to the
development of "running in" constructs, which effectively
made it possible for behaviours to be both signals (i.e.
signal functions already applied to the system input),
and signal functions (not yet applied to the system input).

Yet there was no distinction between these "running behaviours"
(= signals) and normal behaviours (= signal functions) at the
type level. The approach also led to semantic difficulties, and,
when trying to resolve those, to a very complicated design
involving complicated overloading and auxiliary classes.

The arrows approach obviated the need for all of this, and
I consider that another distinct advantage.

Again, I don't know where Reactive stands here, but it needs to
have a good answer to this issue, or it is going to suffer from
limited expressivity.

Many of the above advantages are matters of opinion (but so are the
advantages initially put forward for Reactive above). However,
the development of AFRP and Yampa was motivated by fundamental
expressivity limitations of Classical FRP, in at least some ways
related to the very way the system was set up. To the extent Reactive
is similar in style to Classical FRP, I think Reactive also needs to
address those questions.

All the best,

/Henrik

--
Henrik Nilsson
School of Computer Science
The University of Nottingham
n...@cs.nott.ac.uk



Thomas Davie

Dec 17, 2008, 11:05:57 AM
to Henrik Nilsson, haskel...@haskell.org
I'll have an attempt at addressing the questions, although I freely
admit that I'm not as "into" Reactive as Conal is yet, so he may come
and correct me in a minute.

I'm not sure how this was set up in a classic FRP system, so I'm
unable to comment on how exactly it's changed. What I will say is
that as far I understand what you're saying, Reactive has explicitly
parameterized inputs. In your robot example I would expect something
along the lines of

data RobotInputs = RI { lightSensor :: Behavior Colour
                      , bumpSwitch  :: Event () }  -- A couple of example robot sensors

robotBehavior :: RobotInputs -> Behavior Robot
robotBehavior sensors = a behavior that combines the light sensor and
the bump switch to stay in the light, but not keep driving into things.

data UIInputs = UI { mousePoint :: Behavior Point; mouseClick :: Event (); ... }

world :: UIInputs -> Behavior World
world = interpret mouse and produce a world with barriers, robots and
lights in it

robotInputs :: World -> Behavior Robot -> RobotInputs
robotInputs = given a robot in a world, generate the robot's inputs
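
In case it is easier to see as something that actually compiles, here is
the same sketch as a skeleton with the bodies elided (Colour, Point,
Robot and World are placeholder types of mine; Behavior and Event as in
the reactive package's FRP.Reactive, if I have the module name right):

  import FRP.Reactive (Behavior, Event)

  data Colour = Colour
  data Point  = Point Double Double
  data Robot  = Robot
  data World  = World

  data RobotInputs = RI { lightSensor :: Behavior Colour
                        , bumpSwitch  :: Event () }

  data UIInputs = UI { mousePoint :: Behavior Point
                     , mouseClick :: Event () }

  robotBehavior :: RobotInputs -> Behavior Robot
  robotBehavior = undefined   -- combine the sensors as described above

  world :: UIInputs -> Behavior World
  world = undefined           -- interpret the mouse into a World

  robotInputs :: World -> Behavior Robot -> RobotInputs
  robotInputs = undefined     -- derive a robot's sensor inputs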

> * A clear separation between signals, signal functions, and ordinary
> functions and values, yet the ability to easily integrate all kinds
> of computations.
>
> Arguably a matter of taste, and in some ways more a consequence of
> the Arrow syntax than Arrows themselves. But in Classical FRP,
> one had to do either a lot of explicit lifting (in practice, we
> often ended up writing lifting wrappers for entire libraries), or
> try to exploit overloading for implicit lifting. The latter is
> quite limited though, partly due to how Haskell's type classes
> are organized and that language support for overloaded constants
> is limited to numerical constants.
>
> In any case, when we switched to arrows and arrow syntax, I found
> it liberating to not have to lift "everything" to signal functions
> first, but that I could program both with signals and signal
> functions on the one hand, and plain values and functions on the
> other, at the same time and fairly seamlessly. And personally,
> I also felt this made the programs conceptually clearer and easier
> to understand,
>
> My understanding is that Reactive is similar to Classical FRP in
> this respect.

I agree and disagree here (that'll be the matter of taste creeping
in). I agree that in Reactive you often spend a lot of keystrokes
lifting pure values into either an Event or a Behavior. Having said
that I'd argue that Yampa requires us to do this too -- it merely
enforces the style in which we do it (we must do it with arrows).

My personal opinion on this one is that I prefer the applicative
interface to the arrow based one, because it feels more like just
writing a functional program.

> * Classical FRP lacked a satisfying approach to handle dynamic
> collections of reactive entities as needed when programming
> typical video games for example. Yampa has a way. One can
> argue about how satisfying it is, but at least it fulfills
> basic requirements such that allowing logically removed entities to
> be truly removed (garbage collected).
>
> I don't know where Reactive stands here.

I reserve judgement at the moment because I haven't explicitly written
a reactive program involving a collection of behaviors. Having said
that, I see no reason why a value removed from the list in a Behavior
[a] should not get garbage collected.

> * There was also an issue with Classical FRP having to do with the
> need to observe the output from one part of the system in
> another part of the system. This is quite different from
> parameterizing the second part of the the system on the first,
> as this approach loses sharing across switches. This led to the
> development for "running in" constructs, which effectively
> made it possible for behaviours to be both signals (i.e.
> signal functions already applied to the system input),
> and signal functions (not yet applied to the system input).
>
> Yet there was no distinction between these "running behaviours"
> (= signals) and normal behaviours (= signal functions) at the
> type level. The approach also led to semantic difficulties, and,
> when trying to resolve those, to a very complicated design
> involving complicated overloading and auxiliary classes.
>
> The arrows approach obviated the need for all of this, and
> I consider that and other distinct advantage.
>
> Again, I don't know where Reactive stands here, but it needs to
> have a good answer to this issue, or it is going to suffer from
> limited expressivity.

My understanding is that Conal went to great lengths to make sure that
Behaviors get correctly cached, so that incremental values are only
evaluated once, but I'm afraid I can't answer this more sensibly.

> Many of the above advantages are matters of opinion (but so are the
> advantages initially put forward for Reactive above). However,
> the development of AFRP and Yampa was motivated by fundamental
> expresssivity limitations of Classical FRP, in at least some ways
> related to the very way the system was set up. To the extent Reactive
> is similar in style to Classical FRP, I think Reactive also needs to
> address those questions.

Yep, I very much agree that these are mostly matters of taste, I hope
I did something to answer some of the questions, and sorry I didn't
give a better example particularly in the case of the last question.

Your email triggered me to think about a couple of the other significant
differences between Yampa and Reactive:
* Reactive deals with continuous functions of time, not sampled ones.
This allows for asynchronous sampling, for example the ability to
sample a Behavior at 1/60th of a second rate for screen refreshes,
while sampling the same behavior also at 1/10th second for logging,
and 1/1000th for Euler integration of un-integratable Behaviors.
* Reactive is push based. The result of this is that we will not
waste CPU cycles for example refreshing the screen when nothing has
changed on the output.

There are, however, a number of problems that Yampa certainly did solve
that Reactive does not (yet). For example, recursive integrals do not
yet always work correctly in Reactive (although, when working, they
should be totally transparent). It is also much harder to make a
space leak in a Yampa program than in a Reactive one, thanks to the
protective nest of Signal Functions. I think this is really a general
problem of Haskell programming though -- it's possible to create space
leaks, be careful!

Thanks

Tom Davie

Henrik Nilsson

Dec 18, 2008, 8:59:19 AM
to Thomas Davie, haskel...@haskell.org
Hi Tom,

> I'll have an attempt at addressing the questions, although I freely
> admit that I'm not as "into" Reactive as Conal is yet, so he may come
> and correct me in a minute.

> [...]


> Reactive has explicitly parameterized inputs. In your robot example
> I would expect something along the lines of
>
> data RobotInputs =
>   RI { lightSensor :: Behavior Colour
>      , bumpSwitch  :: Event () }  -- A couple of example robot sensors
>
> robotBehavior :: RobotInputs -> Behavior Robot
> robotBehavior sensors = a behavior that combines the light sensor and
> the bump switch to stay in the light, but not keep driving into
> things.

This looks exactly like Classical FRP.
And if it is like Classical FRP behind the scenes, it nicely
exemplifies the problem.

In Classical FRP, a Behavior is actually what I would call a signal
function. When started (switched into), it maps the system input
signal from that point in time to a signal of some particular type.

So, the record RobotInputs is just a record of lifted projection
functions that select some particular parts of the overall system
input. Behind the scenes, all Behaviors are connected to the one
and only system input.

> data UIInputs = UI { mousePoint :: Behavior Point; mouseClick :: Event (); ... }
>
> world :: UIInputs -> Behavior World
> world = interpret mouse and produce a world with barriers, robots and
> lights in it

Fine, of course, assuming that all behaviours share the same kind
of system input, in this case UI input.

But what if I want my reactive library to interface to different kinds
of systems? The robot code should clearly work regardless of whether
we are running it on a real hardware platform, or in a simulated
setting where the system input comes from the GUI. In Classical FRP,
this was not easily possible, because all combinators at some level
need to refer to some particular system input type which is hardwired
into the definitions.

Had Haskell had ML-style parameterized modules,
that would likely have offered a solution: the libraries could
have been parameterized on the system input, and then one could obtain
say robot code for running on real hardware or in a simulated
setting by simply applying the robot module to the right kind of
system input.

An alternative is to parameterize the behaviour type explicitly on
the system input type:

Behavior sysinput a

This design eventually evolved into Arrowized FRP and Yampa.

So, from your examples, it is not clear to what extent Reactive
has addressed this point. Just writing functions that map behaviours
to behaviours does not say very much.

On a more philosophical note, I always found it a bit odd that if
I wanted to write a function that mapped a signal of, say, type "a",
which we can think of as

type Signal a = Time -> a

to another signal, of type "b" say, in Classical FRP, I'd have to write
a function of type

Behavior a -> Behavior b

which really is a function of type

(Signal SystemInput -> Signal a) -> (Signal SystemInput -> Signal b)


I find this unsatisfying, as my mapping from a signal of type a to
a signal of type b is completely independent from the system
input (or the function wouldn't have a polymorphic type).
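
Spelled out as Haskell, purely conceptually (these are not the actual
library representations, just the model I have in mind):

  type Time     = Double
  type Signal a = Time -> a

  data SystemInput = SystemInput   -- mouse, keyboard, and so on

  -- Classical FRP, conceptually: every behaviour is implicitly a
  -- function of the one system input signal.
  type Behavior a = Signal SystemInput -> Signal a

  -- Yampa/AFRP: the input signal type is an explicit parameter.
  newtype SF a b = SF (Signal a -> Signal b)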

> > * A clear separation between signals, signal functions, and ordinary
> > functions and values, yet the ability to easily integrate all
> > kinds of computations.
>

> I agree and disagree here (that'll be the matter of taste creeping
> in). I agree that in Reactive you often spend a lot of keystrokes
> lifting pure values into either an Event or a Behavior. Having said
> that I'd argue that Yampa requires us to do this too -- it merely
> enforces the style in which we do it (we must do it with arrows).

Yes, there is lifting in Yampa, but the arrow syntax mostly does it for
the programmer, which in practice (in my experience) translates to a lot
less effort, and, in my opinion, leads to clearer code as it is easy to
maintain a distinction between signals and static values. After all, why
should I want to lift a constant to a signal, if all I'm going to do
with it is to apply one and the same function to it over and over?

(I'm not worried about efficiency here, that can be fixed: it's
a philosophical point.)

Also, from practical experience when programming with Classical FRP,
we often lifted entire libraries we wanted to use to avoid having
to write explicit lifts all the time. Tedious, but OK, doable.

However, quite often we then discovered that actually, we needed the
unlifted version of the library too, leading to name clashes and
thus extra noise due to the need to disambiguate, be it by qualified
import or by naming the lifted versions differently.

Not a show stopper by any means, but a tedious extra level of concerns.

The arrow framework offers clear guidance in this case, which translates
to convenient coding practice: just use whatever library
you need and let the arrow syntax take care of liftings where
necessary.

> My personal opinion on this one is that I prefer the applicative
> interface to the arrow based one, because it feels more like just
> writing a functional program.

It is true that the arrow syntax sometimes is a bit too "linear". For
example, if (without using the basic arrow combinators), I want to apply
first one function sf1 and then another sf2 to some signal x, one
might write

y <- sf1 -< x
z <- sf2 -< y

whereas

z = sf2 (sf1 x)

would arguably be clearer in a case like this.

On balance, though, I find that the advantages of the arrow framework
outweigh such inconveniences.

(Of course, one can also write

z <- sf2 <<< sf1 -< x

And I think the arrow syntax likely could be tweaked to allow
something more similar to the second version, but that's of course
beside the point.)

> I reserve judgement at the moment because I haven't explicitly written
> a reactive program involving a collection of behaviors. Having said
> that, I see no reason why a value removed from the list in a Behavior
> [a] should not get garbage collected.

But just the possibility of having list output is not sufficient.

What is needed is a way to run a collection of independent behaviours in
parallel. Classical FRP provided essentially the following functionality
for this purpose:

[Behavior a] -> Behavior [a]

This is fine, until we get to a point where we want to remove
one of those behaviors without disturbing the others.

In Classical FRP, the only thing that could be done was to
apply a filter to the output list to *hide* the output(s) from some
of the behaviours from the outside world. But this only makes it
*look* as if they're gone. In fact, they're still there, consuming
computational resources, and can be resurrected at any point.

Yampa provides a way of maintaining dynamic collections of signal
functions, allowing new signal functions to be started and others
to be removed without affecting the other signal functions in the
collection.
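
For concreteness, a minimal sketch of the kind of thing I mean, using
Yampa's parB and pSwitchB (illustrative only; the routing of per-element
input and any error handling are elided):

  import Control.Arrow (arr, (>>>))
  import FRP.Yampa (SF, Event, constant, integral, parB, pSwitchB)

  -- Three independent integrators; the input is an external "delete" event.
  sfs0 :: [SF (Event ()) Double]
  sfs0 = [constant k >>> integral | k <- [1, 2, 3]]

  -- Run the collection in parallel; when the event fires, continue with
  -- the collection minus its second element. The survivors keep their
  -- accumulated state, because the continuation receives the running
  -- signal functions, frozen at the time of the switch.
  dynColl :: SF (Event ()) [Double]
  dynColl = pSwitchB sfs0 (arr fst) (\running () -> parB (dropSecond running))
    where
      dropSecond (x : _ : rest) = x : rest
      dropSecond xs             = xs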

It is still unclear to me if Reactive offers anything similar.
In principle, just looking from the outside, I cannot see why reactive
couldn't do something similar to Yampa or possibly adopt some other
design to the same effect. But my understanding is that Reactive has
a fairly elaborate run-time machinery behind the scenes, and I don't
know if that would get in the way or not.

> > * There was also an issue with Classical FRP having to do with the
> > need to observe the output from one part of the system in
> > another part of the system.
>

> My understanding is that Conal went to great lengths to make sure
> that Behaviors get correctly cached, so that incremental values are only
> evaluated once, but I'm afraid I can't answer this more sensibly.

This is a semantic issue, not one about efficiency.

The question is this. Suppose we define

let
n :: Behavior Int
n = <behaviour that counts left mouse button clicks>
in
n `until` <some event> -=> n

I'm not sure I got the syntax right. But the idea is that we
output the number of left mouse button clicks, and then at some
point, we switch to a behavior that again outputs the number of left
mouse button clicks, notionally the "same" one "n".

The question is, after the switch, do we observe a count that
continues from where the old one left off, i.e. is there a
single *shared* instance of "n" that is merely being *observed*
from within the two "branches" of the switch, or is the counting
behavior "n" restarted (from 0) after the switch?

In Classical FRP, the answer is the latter (because "n" really
is a signal function mapping system input to a signal carrying
integer counts).

Sometimes that's what one wants, other times not. Which is what
motivated the "running in" design to effectively allow a single
instance of a behavior to be created, whose output then could
be observed within the scope of the definition thus avoiding
restarting the behavior after each switch. But this design
became very complicated, and also somewhat confusing as there
was no type distinction between "running behaviors" (effectively
signals) and behaviors (signal functions).

Yampa, by effectively providing both signals and signal functions,
provides a much simpler answer, albeit at the price of a bit of
extra "plumbing" sometimes.
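
A tiny sketch of what I mean (the counting signal function and the
five-second trigger are made up for illustration):

  import Control.Arrow ((&&&))
  import FRP.Yampa (SF, Event, accumHoldBy, after, switch)

  -- Counts its input events from the moment this instance is started.
  countE :: SF (Event a) Int
  countE = accumHoldBy (\n _ -> n + 1) 0

  -- The continuation switches in a fresh instance of countE, so the
  -- count restarts from 0 five seconds in. To keep the old count, one
  -- would instead run a single instance and merely route its output to
  -- wherever it is needed -- and the choice is visible in the program.
  restarting :: SF (Event a) Int
  restarting = switch (countE &&& after 5 ()) (const countE)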

Again, I don't know where Reactive stands on this. But it is a real
concern in terms of being able to express what one needs to express
in real-life reactive programming, i.e. more than just a matter of
taste in this case.

> Your email triggered me to think about a couple of the other significant
> differences between Yampa and Reactive
> * Reactive deals with continuous functions of time, not sampled ones.
> This allows for asynchronous sampling, for example the ability to
> sample a Behavior at 1/60th of a second rate for screen refreshes,
> while sampling the same behavior also at 1/10th second for logging,
> and 1/1000th for euler integration of un-integratable Behaviors.

I'm not quite sure what you're getting at here.

On the one hand, Yampa also notionally has continuous time.
On the other hand, ANY implementation will have to do sampling
at some point.

But I suppose what you're saying is that Reactive allows different
parts of a system to be sampled at different rates in a transparent
manner?

Which is nice.

But the tradeoffs are not clear to me.

> * Reactive is push based. The result of this is that we will not
> waste CPU cycles for example refreshing the screen when nothing has
> changed on the output.

The optimizations of Yampa also achieve a fair amount of "pushing".
But granted, Yampa is fundamentally pull-based.

That said, for truly hybrid systems that do a lot of continuous
computation, it is not at all clear that push is a clear winner.

Only extensive benchmarking can really provide genuine insight
into the actual pros and cons here of different FRP implementations
for various applications, I'd say.

Also, when there is a need to combine output from different
parts of a system, and this is done by combining the various
outputs into a tuple or record, then one has to push the combined
output whenever any one of the constituent parts changes, which
means one loses track of the changes down the line, possibly
resulting in redundant computations anyway.

Best,

/Henrik

--
Henrik Nilsson
School of Computer Science
The University of Nottingham
n...@cs.nott.ac.uk




Thomas Davie

Dec 18, 2008, 11:53:52 AM
to Henrik Nilsson, haskel...@haskell.org
Hi Henrik,

I don't think this is really true. Behaviors and Events do not reveal
in their type definitions any relation to any system that they may or
may not exist in. A Behavior can exist whether or not it is being run
by a particular legacy adapter (a piece of code to adapt it to work as
expected on a legacy, imperative computer). I can define an Event e =
(+1) <$ atTimes [0,10..] and use it as a Haskell construct without
needing any system at all to run it within. Similarly I can define a
Behavior b = accumB 0 e that depends on this event, completely
independent of any system, or definition of what basic events and
behaviors I get to interact with it.

> > data UIInputs = UI { mousePoint :: Behavior Point; mouseClick :: Event (); ... }
> >
> > world :: UIInputs -> Behavior World
> > world = interpret mouse and produce a world with barriers, robots and
> > lights in it
>
> Fine, of course, assuming that all behaviours share the same kind
> of system input, in this case UI input.
>
> But what if I want my reactive library to interface to different kinds
> of systems? The robot code should clearly work regardless of whether
> we are running it on a real hardware platform, or in a simulated
> setting where the system input comes form the GUI. In Classical FRP,
> this was not easily possible, because all combinators at some level
> need to refer to some particular system input type which is hardwired
> into the definitions.

There are no hardwired definitions of what inputs I'm allowed to use
or not use. If I would like my reactive program to run on a "legacy"
robot which uses imperative IO, then I may write a legacy adapter
around it to take those IO actions and translate them into Events and
Behaviors that I can use. One such legacy adapter exists, called
reactive-glut, which ties GLUT's IO actions into reactive events one
can use. I could easily imagine several others, for example one that
interacts with robot hardware and presents the record above to the
behaviors it's adapting, or another still which works much like the
"interact" function, but instead of taking a String -> String, takes
an Event Char -> Event Char.

Yes, certainly that would be unsatisfactory. But I don't agree about
the type of the function -- this really is a (Time -> a) -> (Time -> b).
It may be though that the argument (Time -> a) is a system input
from our legacy adapter, or an internal part of our program.

> > > * A clear separation between signals, signal functions, and
> ordinary
> > > functions and values, yet the ability to easily integrate all
> > > kinds of computations.
> >
> > I agree and disagree here (that'll be the matter of taste creeping
> > in). I agree that in Reactive you often spend a lot of keystrokes
> > lifting pure values into either an Event or a Behavior. Having said
> > that I'd argue that Yampa requires us to do this too -- it merely
> > enforces the style in which we do it (we must do it with arrows).
>
> Yes, there is lifting in Yampa, but the arrow syntax mostly does it for
> the programmer, which in practice (in my experience) translates to a lot
> less effort, and, in my opinion, leads to clearer code as it is easy to
> maintain a distinction between signals and static values. After all, why
> should I want to lift a constant to a signal, if all I'm going to do
> with it is to apply one and the same function to it over and over?
>
> (I'm not worried about efficiency here, that can be fixed: it's
> a philosophical point.)

I'm not sure I understand you clearly. If I wish to apply a constant
function to a signal, can I not just use fmap?

Or one could write z = sf2 <$> (sf1 <$> x) in Reactive --
unfortunately, we pay the penalty of having to use a bulkier syntax
for application than just ' ', but it still works rather nicely. I
agree that the arrow notation is nice in some circumstances, but my
(admittedly limited) experience with Yampa left me often finding that I
wanted to express something much more simply, and less sequentially
without the arrows.

> And I think the arrow syntax likely could be tweaked to allow
> something more similar to the second version, but that's of course
> beside the point.)
>
> > I reserve judgement at the moment because I haven't explicitly written
> > a reactive program involving a collection of behaviors. Having said
> > that, I see no reason why a value removed from the list in a Behavior
> > [a] should not get garbage collected.
>
> But just the possibility of having list output is not sufficient.
>
> What is needed is a way to run a collection of independent behaviours in
> parallel. Classical FRP provided essentially the following functionality
> for this purpose:
>
> [Behavior a] -> Behavior [a]

Certainly, I can imagine something along the lines of
listB = foldr (liftA2 (:)) (pure [])  -- there's that lifting creeping in.

> This is fine, until we get to a point where we want to remove
> one of those behaviors without disturbing the others.
>
> In Classical FRP, the only thing that could be done was to
> apply a filter to the output list to *hide* the output(s) from some
> of the behaviours from the outside world. But this only makes it
> *look* as if they're gone. In fact, they're still there, consuming
> computational resources, and can be resurrected at any point.
>
> Yampa provides a way of maintaining dynamic collections of signal
> functions, allowing new signal functions to be started and others
> to be removed without affecting the other signal functions in the
> collection.
>
> It is still unclear to me if Reactive offers anything similar.
> In principle, just looking from the outside, I cannot see why reactive
> couldn't do something similar to Yampa or possibly adopt some other
> design to the same effect. But my understanding is that Reactive has
> a fairly elaborate run-time machinery behind the scenes, and I don't
> know if that would get in the way or not.

You would certainly need to ask Conal on this point, but I have no
reason to suspect that b' = [1,2,3,4,5] `stepper` listE [(1,[])] would
not deallocate the first list once it had taken its step.

> > > * There was also an issue with Classical FRP having to do with the
> > > need to observe the output from one part of the system in
> > > another part of the system.
> >
> > My understanding is that Conal went to great lengths to make sure
> > that Behaviors get correctly cached, so that incremental values are only
> > evaluated once, but I'm afraid I can't answer this more sensibly.
>
> This is a semantical issue, not one about efficiency.
>
> The question is this. Suppose we define
>
> let
> n :: Behavior Int
> n = <behaviour that counts left mouse button clicks>
> in
> n `until` <some event> -=> n
>
> I'm not sure I got the syntax right. But the idea is that we
> output the number of left mouse button clicks, and then at some
> point, we switch to a behavior that again output the number of left
> mouse button clicks, notionally the "same" one "n".
>
> The question is, after the switch, do we observe a count that
> continues from where the old one left off, i.e. is there a
> single *shared* instance of "n" that is merely being *observed*
> from within the two "branches" of the switch, or is the counting
> behavior "n" restarted (from 0) after the switch?

Yes, we really do get a shared n -- without doing that we certainly
would see a large space/time leak.

> In Classical FRP, the answer is the latter (because "n" really
> is a signal function mapping system input to an signal carrying
> integer counts).

Yep, in Reactive, a signal function is a

newtype BehaviorG tr tf a = Beh { unBeh :: (R.ReactiveG tr :. Fun tf) a }

i.e. it's a Reactive value containing functions of time. Each mouse
click event would create a new step in the Reactive value, allowing
(a) the old step to be garbage collected, (b) the value for the
current number of mouse clicks to be cached.

What may be a problem is that if we do not look at our mouse-click
counting behavior for a long time, we may spend a lot of time when we
do look at it, computing all of the (+1)s we have built up (but only
once).

> Sometimes that's what one wants, other times not. Which is what
> motivated the "running in" design to effectively allow a single
> instance of a behavior to be created, whose output then could
> be observed within the scope of the definition thus avoiding
> restarting the behavior after each switch. But this design
> became very complicated, and also somewhat confusing as there
> was no type distinction between "running behaviors" (effectively
> signals) and behaviors (signal functions).

Yep, such Behaviors are separated in Reactive only by the method you
create them with. I may use the `stepper` function to create a
behavior that increases in steps based on an event occurring, or I may
use fmap over time to create a continuously varying Behavior.

> > Your email triggered me to think about a couple of the other significant
> > differences between Yampa and Reactive
> > * Reactive deals with continuous functions of time, not sampled ones.
> >   This allows for asynchronous sampling, for example the ability to
> >   sample a Behavior at 1/60th of a second rate for screen refreshes,
> >   while sampling the same behavior also at 1/10th second for logging,
> >   and 1/1000th for Euler integration of un-integratable Behaviors.
>
> I'm not quite sure what you're getting at here.
>
> On the one hand, Yampa also notionally has continuous time.
> On the other hand, ANY implementation will have to do sampling
> at some point.
>
> But I suppose what you're saying is that Reactive allows different
> part of a system to be sampled at different rates in a transparent
> manner?
>
> Which is nice.
>
> But the tradeoffs are not clear to me.

Yes, indeed reactive allows different sample rates as you say. The
key though is that reactive itself does not ever do any sampling. The
user may create Events, and snapshot Behaviors at the times that the
Events occur, but reactive will never explicitly say "I am going to
sample now". What may happen is that the legacy adapter may have a
built-in Event at which it samples, for example for screen refreshes.

> > * Reactive is push based. The result of this is that we will not
> > waste CPU cycles for example refreshing the screen when nothing has
> > changed on the output.
>
> The optimizations of Yampa also achieves a fair amount of "pushing".
> But granted, Yampa is fundamentally pull-based.
>
> That said, for truly hybrid systems, that do a lot of continuous
> computation, it is not at all clear that push is a clear winner.

Yes, I agree. This is one of the things I'll be interested to
discover while working with Reactive: exactly when we run into problems
with the push-based semantics.

> Only extensive benchmarking can really provide genuine insight
> into the actual pros and cons here of different FRP implementations
> for various applications, I'd say.
>
> Also, when there is a need to combine output from different
> parts of a system, and this is done by combining the various
> outputs into a tuple or record, then one have to push combined
> output whenever any one of the constituent parts changes, which
> means one lose track of the changes down the line, possibly
> resulting in redundant computations anyway.

I'm not certain that this is the case. I *hope* that the caching in
the Behaviors is enough that this computation is avoided, but I'm not
100% certain.

I hope I got more on top of the problems you were pointing out this
time.

Thanks

Tom Davie

Henrik Nilsson

Dec 18, 2008, 1:09:19 PM
to Thomas Davie, haskel...@haskell.org
Hi Tom,

> I don't think this is really true. Behaviors and Events do not reveal
> in their type definitions any relation to any system that they may or
> may not exist in.

OK. So how does e.g.

mousePoint :: Behavior Point

get at the mouse input? unsafePerformIO?

I.e. it is conceptually a global signal?

> I'm not sure I understand you clearly. If I wish to apply a constant
> function to a signal, can I not just use fmap?

The question is why I would want to (conceptually). I'm just saying
I find it good and useful to be able to easily mix static values
and computations and signals and computations on signals.

> You would certainly need to ask Conal on this point, but I have no
> reason to suspect that b' = [1,2,3,4,5] `stepper` listE [(1,[])] would
> not deallocate the first list once it had taken its step.

It's not the lists that concern me, nor getting rid of a collection
of behaviors all at once. The problem is: if we want to run a collection
of behaviors in parallel, all potentially accumulating internal
state, how do we add/delete individual behaviors to/from that
collection, without "disturbing" the others?

For the sake of argument, say we have the following list of behaviours:

[integral time, integral (2 * time), integral (3 * time)]

We turn them into a single behavior with a list output in order to
run them. After one second the output is thus

[0.5, 1, 1.5]

Now, we want to delete the second behavior, but continue to
run the other two, so that the output at time 2 is

[2,6]

Simply applying a post-processing map that just drops the second element
from the output isn't a satisfactory solution.

> > let
> > n :: Behavior Int
> > n = <behaviour that counts left mouse button clicks>
> > in
> > n `until` <some event> -=> n
> >
> > I'm not sure I got the syntax right. But the idea is that we
> > output the number of left mouse button clicks, and then at some
> > point, we switch to a behavior that again output the number of left
> > mouse button clicks, notionally the "same" one "n".
> >
> > The question is, after the switch, do we observe a count that
> > continues from where the old one left off, i.e. is there a
> > single *shared* instance of "n" that is merely being *observed*
> > from within the two "branches" of the switch, or is the counting
> > behavior "n" restarted (from 0) after the switch?
>
> Yes, we really do get a shared n -- without doing that we certainly
> would see a large space/time leak.

Interesting, although I don't see why not sharing would imply
a space/time leak: if the behavior is simply restarted, there
is no catchup computation to do, nor any old input to hang onto,
so there is neither a time nor a space-leak?

Anyway, let's explore this example a bit further.

Suppose "lbp" is the signal of left button presses, and that
we can count them by

"count lbp"

Then the question is if

let
  n :: Behavior Int
  n = count lbp
in
  n `until` <some event> -=> n

means the same as

(count lbp) `until` <some event> -=> (count lbp)

If no, then Reactive is not referentially transparent, as we manifestly
cannot reason equationally.

If yes, the question is how to express a counting that starts over
after the switch (which sometimes is what is needed).

> Yep, such Behaviors are separated in Reactive only by the method you
> create them with. I may use the `stepper` function to create a
> behavior that increases in steps based on an event occurring, or I may
> use fmap over time to create a continuously varying Behavior.

But the question was not about events vs continuous signals. The
question is, what is a behavior conceptually, and when is it started?
E.g. in the example above, at what point do the various instances of
"count lbp" start counting? Or are the various instances of "count lbp"
actually only one?

Or if you prefer, are behaviours really signals that conceptually
start running all at once at a common time 0 when the system starts?
The answers regarding input behaviors like mousePosition, that
"n is shared", and the need to do catchup computations all seem
to indicate this. But if so, that leaves open an important
question on expressivity, exemplified by how to start counting from
the time of a switch above, and makes it virtually impossible
to avoid time and space leaks in general, at least in an embedded
setting. After all, something like "count lbp" can be compiled into
a function that potentially may be invoked at some point. And as
long as this possibility exists, the system needs to hang on to
the entire history of mouse clicks so that they can be counted
at some future point if necessary.

These are all questions that go back to classical FRP, which we
didn't find any good answers to back then, and which also were
part of the motivation for moving to AFRP/Yampa.

If Reactive has come up with better answers, that would be very
exciting indeed!

Best,

/Henrik

--
Henrik Nilsson
School of Computer Science
The University of Nottingham
n...@cs.nott.ac.uk




Thomas Davie

Dec 18, 2008, 2:53:16 PM
to Henrik Nilsson, haskel...@haskell.org
Hi Henrik,

On 18 Dec 2008, at 19:06, Henrik Nilsson wrote:

> Hi Tom,
>
> > I don't think this is really true. Behaviors and Events do not reveal
> > in their type definitions any relation to any system that they may or
> > may not exist in.
>
> OK. So how does e.g.
>
> mousePoint :: Behavior Point
>
> get at the mouse input? unsafePerformIO?
>
> I.e. it is conceptually a global signal?

main = adapter doSomeStuff

-- Note here that different adapters provide different "UI"s.
adapter :: (Behavior UI -> Behavior SomethingFixedThatYouKnowHowToInterpret) -> IO ()
adapter f = set up the system behaviors, pass them into f, grab the
outputs, and do the "something" to render.

doSomeStuff :: Behavior UI -> Behavior SomethingFixedThatYouKnowHowToInterpret

> > I'm not sure I understand you clearly. If I wish to apply a constant
> > function to a signal, can I not just use fmap?
>
> The question is why I would want to (conceptually). I'm just saying
> I find it good and useful to be able to easily mix static values
> and computations and signals and computations on signals.

Yep, I can see that. I think we need to agree to disagree on this
front: I would prefer to use fmap, or <$>, while you prefer arrow
syntax.

> > You would certainly need to ask Conal on this point, but I have no
> > reason to suspect that b' = [1,2,3,4,5] `stepper` listE [(1,[])] would
> > not deallocate the first list once it had taken its step.
>
> It's not the lists that concern me, nor getting rid of a collection
> of behaviors all at once. The problem is: if we want to run a collection
> of behaviors in parallel, all potentially accumulating internal
> state, how do we add/delete individual behaviors to/from that
> collection, without "disturbing" the others?
>
> For the sake of argument, say we have the following list of
> behaviours:
>
> [integral time, integral (2 * time), integral (3 * time)]
>
> We turn them into a single behavior with a list output in order to
> run them. After one second the output is thus
>
> [0.5, 1, 1.5]
>
> Now, we want to delete the second behavior, but continue to
> run the other two, so that the output at time 2 is
>
> [2,6]
>
> Simply mapping postprocessing that just drops the second element
> from the output isn't a satisfactory solution.

I'm not sure why mapping the function is not satisfactory -- it would
create a new Behavior, whose internals contain only the two elements
from the list -- that would expose to the garbage collector that the
second element has no reference from this behavior any more, and thus
the whole behavior could be collected.

> > Yes, we really do get a shared n -- without doing that we certainly
> > would see a large space/time leak.
>
> Interesting, although I don't see why not sharing would imply
> a space/time leak: if the behavior is simply restarted, there
> is no catchup computation to do, nor any old input to hang onto,
> so there is neither a time nor a space-leak?
>
> Anyway, let's explore this example a bit further.
>
> Suppose "lbp" is the signal of left button presses, and that
> we can count them by
>
> "count lbp"
>
> Then the question is if
>
> let
> n :: Behavior Int
> n = count lbp
> in
> n `until` <some event> -=> n
>
> means the same as
>
> (count lbp) `until` <some event> -=> (count lbp)
>
> If no, then Reactive is not referentially transparent, as we
> manifestly
> cannot reason equationally.
>
> If yes, the question is how to express a counting that starts over
> after the switch (which sometimes is what is needed).

That's a yes. My first answer to how to implement the resetting
counter would be something along the lines of this, but I'm not certain
it's dead right:

e = (1+) <$ mouseClick
e' = (const 0) <$ <some event>
b = accumB 0 (e `mappend` e')

i.e. b is the behavior got by adding 1 every time the mouse click
event occurs, but resetting to 0 whenever <some event> occurs.

> > Yep, such Behaviors are separated in Reactive only by the method you
> > create them with. I may use the `stepper` function to create a
> > behavior that increases in steps based on an event occurring, or I may
> > use fmap over time to create a continuously varying Behavior.
>
> But the question was not about events vs continuous signals. The
> question is, what is a behavior conceptually, and when is it started?
> E.g. in the example above, at what point do the various instances of
> "count lbp" start counting? Or are the various instances of "count lbp"
> actually only one?

They are indeed only one.

> Or if you prefer, are beahviours really signals, that conceptually
> start running all at once at a common time 0 when the system starts?
> The answers regarding input behaviors like mousePosition, that
> "n is shared", and the need to do catchup computations all seem
> to indicate this. But if so, that leaves open an important
> question on expressivity, examplified by how to start counting from
> the time of a switch above, and makes if virtually impossible
> to avoid time and space leaks in general, at least in an embedded
> setting. After all, something like "count lbp" can be compiled into
> a function that potentially may be invoked at some point. And as
> long as this possibility exists, the system needs to hang on to
> the entire history of mouse clicks so that they can be coounted
> at some future point if necessary.

Yes, this certainly is an ongoing problem. Currently there are
certain situations in which the mouse click history is disposed of
when we can show no one wants it any more, but this is not ideal.
One possible solution is to make sure that all events and behaviors
are updated continuously, but that may take a little more time than
your solution.

> These are all questions that go back to classical FRP, which we
> didn't find any good answers to back then, and which also were
> part of the motivation for moving to AFRP/Yampa.
>
> If Reactive has come up with better answers, that would be very
> exciting indeed!

I'm not certain that the answers are better yet. They feel more
comfortable to me from a functional programming point of view, but it
may yet turn out that space/time leaks really are impossible to
suppress. My inclination at the moment is to believe that they are
though.

Thanks

Tom Davie

Henrik Nilsson

Dec 18, 2008, 8:09:38 PM
to Thomas Davie, haskel...@haskell.org
Hi Tom,

> I'm not sure why mapping the function is not satisfactory -- It would
> create a new Behavior, who's internals contain only the two elements
> from the list -- that would expose to the garbage collector that the
> second element has no reference from this behavior any more, and thus
> the whole behavior could be collected.

We must be talking at cross purposes here: there is no way that
deleting the *output* from one of the behaviours from a list
of outputs would cause the underlying behavior whose output no longer
is observed to be garbage collected. After all, that list of
three numbers is just a normal value: why should removing one of
its elements, so to speak, affect the producer of the list?

But if we have a list of running behaviors or signals, and that list
is changed, then yes, of course we get the desired behavior (this
is what Yampa does).

So maybe that's what you mean?

> That's a yes. My first answer to how to implement the resetting
> counter would be something along the lines of this, but I'm not certain
> it's dead right:
>
> e = (1+) <$ mouseClick
> e' = (const 0) <$ <some event>
> b = accumB 0 (e `mappend` e')
>
> i.e. b is the behavior got by adding 1 every time the mouse click
> event occurs, but resetting to 0 whenever <some event> occurs.

Hmm. Looks somewhat complicated to me.

Anyway, it doesn't really answer the fundamental question: how
does one start a behavior/signal function at a particular point in time?

I consider the fact that Yampa, through supporting both signals and
signal functions, provides simple yet flexible answers to the question of
when a signal function starts to be one of its key strengths over
Classical FRP, and maybe also over Reactive.

Best,

/Henrik

--
Henrik Nilsson
School of Computer Science
The University of Nottingham
n...@cs.nott.ac.uk



Conal Elliott

Dec 18, 2008, 9:53:02 PM
to Tony Hannan, haskel...@haskell.org
Hi Tony,

Reactive so far has focused mainly on events and functions of time
(behaviors/signals), while Yampa on transformations between signals. I'm in
the process of building a higher-level interface with some semantic
similarity to the arrow/Yampa style. See recent posts at
http://conal.net/blog to get some flavor of where I'm going. The post "Why
classic FRP does not fit interactive behavior" in particular mentions part
of my motivation for doing something different from both classic FRP and
Yampa.

- Conal

2008/12/16 Tony Hannan <tonyh...@gmail.com>

Thomas Davie

Dec 19, 2008, 4:45:29 AM
to Henrik Nilsson, haskel...@haskell.org
Hi Henrik,

On 19 Dec 2008, at 02:05, Henrik Nilsson wrote:

> Hi Tom,
>
> > I'm not sure why mapping the function is not satisfactory -- it would
> > create a new Behavior, whose internals contain only the two elements
> > from the list -- that would expose to the garbage collector that the
> > second element has no reference from this behavior any more, and thus
> > the whole behavior could be collected.
>
> We must be talking at cross purposes here: there is no way that
> deleting the *output* from one of the behaviours from a list
> of outputs would cause the underlying behavior whose output no longer
> is observed to be garbage collected. After all, that list of
> three numbers is just a normal value: why should removing one of
> its elements, so to speak, affect the producer of the list?
>
> But if we have a list of running behaviors or signals, and that list
> is changed, then yes, of course we get the desired behavior (this
> is what Yampa does).
>
> So maybe that's what you mean?

I'm afraid not; rereading what I said, I really didn't explain what I
was talking about well. A Behavior in reactive is not just a
continuous function of time. It is a series of steps, each of which
carries a function of time. One such behavior might look like this:

(+5) -> 5 , (+6) -> 10 , integral

That is to say, this behavior starts off being a function that adds 5
to the current time. At 5 seconds, it steps, and the value changes to
a function that adds 6 to time. At this point, the function that adds
5 to time can be garbage collected, along with the step. At 10
seconds, it becomes the integral of time, and the (+6) function, along
with the next step is GCed.

To come back to your example, I'd expect the behavior to look like
this (using named functions only so that I can refer to them):
i1 t = integral t
i2 t = integral (2 * t)
i3 t = integral (3 * t)
f t = [i1 t, i2 t, i3 t]
g t = [i1 t, i3 t]
f -> 2, g

After 2 seconds, both f and the first step may be garbage collected.
As g does not have any reference to i2, i2 too can be garbage
collected.

I hope that answers you more clearly.

> > That's a yes. My first answer to how to implement the resetting
> > counter would be something along the lines of this, but I'm not certain
> > it's dead right:
> >
> > e = (1+) <$ mouseClick
> > e' = (const 0) <$ <some event>
> > b = accumB 0 (e `mappend` e')
> >
> > i.e. b is the behavior got by adding 1 every time the mouse click
> > event occurs, but resetting to 0 whenever <some event> occurs.
>
> Hmm. Looks somewhat complicated to me.
>
> Anyway, it doesn't really answer the fundamental question: how
> does one start a behavior/signal function at a particular point in
> time?

In reactive, one doesn't. All behaviors and events share the same
absolute time origin (time 0).

One can however simulate such a behavior, by using a `switcher`, or
accumB. In practice, having potentially large numbers of behaviors
running but not changing until a certain event is hit is not a major
problem. This is because reactive knows that the current step contains
a constant value, not a "real" function of time; because of this, no
changes are pushed, and no work is done, until the
event hits.
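
For example, here is a sketch of the kind of simulation I mean, written
against the reactive API as I understand it (so treat it as illustrative).
Note that b itself still has the global time origin; we merely hide its
output until startE fires, which is exactly the distinction you are
drawing:

  import Control.Applicative (pure, (<$))
  import FRP.Reactive (Behavior, Event, switcher)

  -- Hold a constant until startE fires, then behave as b.
  startAt :: a -> Event () -> Behavior a -> Behavior a
  startAt x0 startE b = pure x0 `switcher` (b <$ startE)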

I believe Conal is, however, working on semantics for relative-time-based
behaviors/events.

Thanks

Tom Davie

Paul L

Dec 19, 2008, 9:32:23 AM
to haskel...@haskell.org
Nice to see this discussion, and I just want to comment on the
applicative vs. arrow style. The example Henrik gave is

z <- sf2 <<< sf1 -< x

which models a composition, and is in general the strength of a
combinator approach. But the strength of Applicative, in my opinion,
is not composition but currying:

f <*> x <*> y

where f can have the type Behavior (a -> b -> c). I
don't think there is an exact match in arrows. One could, however,
require sf to be of type SF (a, b) c, and write

z <- sf -< (x, y)

The tupling may seem an extra burden, but it's an inherent design
choice of arrows, which build data structures on top of products, and
people can't run away from it when writing arrow programs.
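
To make the tupled variant concrete, here is a compilable toy version
(the liftSF2 helper and the Double types are placeholders of my own, not
anything from Yampa or Reactive):

  {-# LANGUAGE Arrows #-}
  import Control.Arrow (arr, returnA)
  import FRP.Yampa (SF)

  -- Pair the inputs and uncurry the function: the arrow analogue of
  -- lifting a curried function applicatively.
  liftSF2 :: (a -> b -> c) -> SF (a, b) c
  liftSF2 f = arr (uncurry f)

  example :: SF (Double, Double) Double
  example = proc (x, y) -> do
    z <- liftSF2 (+) -< (x, y)
    returnA -< z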

--
Regards,
Paul Liu

Yale Haskell Group
http://www.haskell.org/yale

Tony Hannan

Dec 19, 2008, 2:49:01 PM
to haskel...@haskell.org
Hi guys,

Thanks for the comments and lively discussion. After reading these posts, a
few papers, and Conal's blogs, I'm going to try Reactive because it is newer
and thus likely to incorporate most of the good things from past FRP
systems including Yampa, it is actively being developed, and it is mature
enough to start using now.

I'll see you on the Reactive mailing list.

Cheers,
Tony

Conal Elliott

Dec 19, 2008, 3:01:26 PM
to Paul L, haskel...@haskell.org
>
> The example Henrik gave [...] models a composition, and is in general the
> strength of a combinator approach. But the strength of Applicative, in my
> opinion, is not composition but currying [...]
>

Well put, Paul.

I really do like the semantic model of Yampa. Signal transformers model
interactive behaviors, whereas the behaviors/signals of classic FRP model
non-interactive behaviors. (See
http://conal.net/blog/posts/why-classic-frp-does-not-fit-interactive-behavior/.)
I also like currying.

As long as we use not just the arrow abstraction but also *arrow notation*,
I don't know how we'll ever be able to get an efficient implementation, in
which portions of computed signals get recomputed only when necessary. And
probably the Arrow abstraction itself is a bit too restrictive, given that
it disallows any conditions on its type arguments. So I've been noodling
some about formulations of signal functions that don't fit into the standard
arrow discipline.

Regards, - Conal

Conal Elliott

Dec 19, 2008, 3:10:47 PM
to Tony Hannan, haskel...@haskell.org
Hi Tony,

I'm glad for your interest in Reactive. Yes, it is certainly being
developed actively. To start you off with realistic expectations, I'd like
you to know that Reactive currently has one or more sneaky laziness bugs
that block serious application at the moment. While the Reactive
implementation is almost entirely pure (free of IO), it has some quite
subtle aspects, and it's taking a while to get it solid. There's also a new
higher-level programming model on the way, as hinted at in my blog.

Welcome!

- Conal

2008/12/19 Tony Hannan <tonyh...@gmail.com>

Paul L

Dec 19, 2008, 5:53:31 PM
to Conal Elliott, haskel...@haskell.org
On 12/19/08, Conal Elliott <co...@conal.net> wrote:
>
> As long as we use not just the arrow abstraction but also *arrow notation*,
> I don't know how we'll ever be able to get an efficient implementation, in
> which portions of computed signals get recomputed only when necessary. And
> probably the Arrow abstraction itself is a bit too restrictive, given that
> it disallows any conditions on its type arguments. So I've been noodling
> some about formulations of signal functions that don't fit into the standard
> arrow discipline.

Paul (Hudak) and I recently worked on a notion called Causal
Commutative Arrows, which actually gave a very good optimization
result for Yampa-like arrows. One notable feature is that all programs
normalize, regardless of whether they were originally written using
the Arrow combinators or translated from Arrow notation. I recently
gave a talk at NEPLS on this; the slides are here:

http://www.cs.yale.edu/homes/hl293/download/NEPLS-talk.pdf

Due to the use of arrow laws, our technique remains fully abstract
without committing to any concrete representation of arrows or
signals/streams.

The re-computation problem is another issue though. I fully agree with
Henrik's comment on push vs. pull. But if one really wants to avoid
re-computation at all costs, here is one possibility:

pass :: SF a b -> SF (Maybe a) (Maybe b)

It'll only invoke the given SF when the input is Just something, and
do nothing otherwise. Coupled with hold, it should lead to an efficient
implementation that avoids re-computation when inputs don't change.

Intuitively it's like selectively turning on/off part of a circuit
according to inputs, which naturally falls in the ArrowChoice class.
Also, one has to extract the implicit time from the implementation and
make it an explicit input in order for this to be semantically sound.

Henrik Nilsson

Dec 21, 2008, 7:11:28 AM
to Thomas Davie, haskel...@haskell.org
Hi Tom,

> In reactive, one doesn't. All behaviors and events have the same
> absolute 0 value for time.

Right. I believe the possibility of starting behaviors later
is quite important.

And from what Conal wrote in a related mail, I take it that this
is recognized, and that this capability is something that is
being considered for reactive?

Best,

/Henrik

--
Henrik Nilsson
School of Computer Science
The University of Nottingham
n...@cs.nott.ac.uk



Thomas Davie

Dec 22, 2008, 3:20:18 AM
to Henrik Nilsson, haskel...@haskell.org

On 21 Dec 2008, at 13:10, Henrik Nilsson wrote:

> Hi Tom,
>
> > In reactive, one doesn't. All behaviors and events have the same
> > absolute 0 value for time.
>
> Right. I believe the possibility of starting behaviors later
> is quite important.
>
> And from what Conal wrote in a related mail, I take it that this
> is recognized, and that this capability is something that is
> being considered for reactive?

Yep, it is indeed.

Thanks for this series of emails by the way. It's helped clarify in
my head exactly what problems Yampa solved, and exactly which of them
Reactive does or doesn't solve.

Thanks

Tom Davie
