Thanks, that is very interesting. Here's my attempt to understand and boil it down:
It seems like they are trying to avoid some of the leak issues found in other FRP systems while retaining the ability to create and destroy the data flow graph on the fly.
They introduce the concept of a "future type": you can have, for example, a cons cell where the tail is a "delayed value" that can't be accessed immediately. So a stream of values becomes a list where the head is the "current" value and the tail is a delayed computation of the rest of the stream. The consumer of the stream can observe the head value and keep a reference to the delayed tail for use the next time it receives an input.
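To make that concrete, here's a minimal sketch (the names are mine, not the paper's) of a stream as a cons cell whose tail is a zero-argument thunk: the head is observable now, and forcing the thunk is what advances to the next time step.

```python
def cons(value, delayed_tail):
    # A stream cell: the tail is a thunk, not a value.
    return (value, delayed_tail)

def head(s):
    return s[0]

def force_tail(s):
    # Forcing the thunk advances the stream one time step.
    return s[1]()

# A two-step stream: the consumer sees 1 now and keeps the thunk for later.
s = cons(1, lambda: cons(2, lambda: None))
assert head(s) == 1
rest = force_tail(s)   # the "next" time step
assert head(rest) == 2
```

The consumer never sees "the whole stream", only the current head plus a handle it can force later.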
The delayed computation closes over the state of a signal-generating function, so that's how you keep your state - put a recursive invocation into a delayed expression and you have a recursive stream.
Since the recursive invocation is delayed until a future time step, an otherwise infinitely recursive function can run in lockstep with its consumer(s) rather than using up all the resources.
I believe their "unfold" function roughly corresponds to the automaton in Elm - it takes a function that returns two values: one is the output to use, and the other is the parameter for the next call. Such a function can generate an infinite stream of arbitrary "stable" values.
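Here's what an unfold in that style could look like, assuming (as described above) a step function returning (output, next_seed), with the recursive call delayed so the stream is produced one element per time step:

```python
def unfold(step, seed):
    output, next_seed = step(seed)
    # Delay the recursive call: the next element is computed on demand.
    return (output, lambda: unfold(step, next_seed))

# Example step function: emit the square, advance the seed.
def squares(n):
    return (n * n, n + 1)

s = unfold(squares, 1)
first = []
for _ in range(4):
    value, delayed = s
    first.append(value)
    s = delayed()
# first == [1, 4, 9, 16]
```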
Their "delayed value" mechanism does allow the data flow graph to vary dynamically; since the streams are similar to lists, you can just start pulling values from a different list when conditions change. You can even have a dynamically sized list of streams that produces a dynamically sized list of values.
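A sketch of that kind of switching (all names here are illustrative): because a stream is just a head plus a delayed tail, a consumer can abandon one stream and start pulling from another when some condition fires, and the abandoned tail becomes garbage.

```python
def constant(x):
    return (x, lambda: constant(x))

def switch_when(pred, s, fallback):
    value, delayed = s
    if pred(value):
        # Abandon `s`; its remaining tail is now garbage-collectible.
        return fallback
    return (value, lambda: switch_when(pred, delayed(), fallback))

source = (1, lambda: (2, lambda: (99, lambda: constant(0))))
stream = switch_when(lambda v: v > 10, source, constant(-1))
out = []
for _ in range(4):
    value, delayed = stream
    out.append(value)
    stream = delayed()
# out == [1, 2, -1, -1]   (switched when 99 tripped the predicate)
```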
The type system avoids memory leaks by blocking access to past and future values; future values are kept as a lazy computation and past ones can be garbage collected. You can still manually buffer up past or future values, as long as they are "stable", but the compiler won't do it for you behind the scenes - you have to create that leak explicitly. That should cut down on accidental memory leaks.
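A sketch of what that explicit buffering might look like (illustrative names, thunk-based streams as above): the history only grows because the code visibly carries it along, so the "leak" is something you wrote, not something the runtime retained behind your back.

```python
def naturals(n):
    return (n, lambda: naturals(n + 1))

def with_history(s, history=()):
    value, delayed = s
    history = history + (value,)        # explicit, visible accumulation
    return ((value, history), lambda: with_history(delayed(), history))

s = with_history(naturals(0))
history = ()
for _ in range(3):
    (value, history), delayed = s
    s = delayed()
# history == (0, 1, 2)  -- grows without bound only because we asked it to
```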
I'd guess they had a strict (rather than lazy) implementation in mind, but I didn't find any explicit mention of that in the text.
I'd guess the whole point of this research is to prevent the programmer from accidentally creating a memory leak in an FRP program by forcing them to leak memory explicitly, if that's what they really want. Memory leaks must have been a big problem in the past.
If my memory serves, a big part of Elm's design is that signals have to be defined ahead of time in the application and must always have a value, and that this relates to the memory leak issues seen in other FRP implementations.
Somehow I'm not connecting with the point of having switching done at the level of a stream or a signal; the Elm approach seems fine to me. To manage a bunch of entities whose number varies over time, I don't need a signal/automaton for each one - I can just have one automaton with a list of entities as part of its state. The data flow graph would really only vary if the program itself was changing structure a lot.
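The "one automaton, list of entities in its state" approach can be sketched like this (the entity structure and event shapes are made up for the example): a single step function folds events over the whole list, so entities come and go without the data flow graph changing at all.

```python
def step(entities, event):
    # Each entity is just a position; a dict event drives the update.
    if event["kind"] == "spawn":
        return entities + [event["pos"]]
    if event["kind"] == "move":
        return [pos + event["dx"] for pos in entities]
    return entities                     # ignore irrelevant events

state = []
for event in [{"kind": "spawn", "pos": 10},
              {"kind": "spawn", "pos": 20},
              {"kind": "move", "dx": 1}]:
    state = step(state, event)
# state == [11, 21]
```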
The concern about switching that I pick up from this thread is really about performance rather than capabilities - what if we're "wasting time" reading inputs or updating automaton states that aren't used on the current screen? Given the way I design my game - basically one giant automaton accepting all inputs - I can easily ignore inputs I'm not using and skip code that isn't relevant to the current state. The cost of reading unused inputs seems small compared to everything else going on.