I'm wondering how to model inactivity in a networked RDP (Reactive Demand Programming) implementation like the one in [1]:
In ordinary, "local" RDP it's easy for a downstream behavior to do no work while its input signal is inactive.
But in networked, "remote" RDP the behavior has to do work even when its input signal is inactive: namely, it must periodically send messages over the network to check whether the signal is still inactive.
With a large number of behaviors, this polling could add up to a lot of background traffic and wasted work, even when nothing is changing.
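
To make the overhead concrete, here is a minimal Haskell sketch of the polling pattern I mean (Haskell only because most RDP work I've seen is in Haskell); `RemoteSignal`, `queryActive`, and `pollForActivity` are hypothetical names for illustration, not part of any actual RDP library:

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever, when)
import Data.IORef (newIORef, readIORef, writeIORef)

-- Hypothetical handle to a remote signal; 'queryActive' stands in for
-- a network round trip asking whether the signal is currently active.
newtype RemoteSignal = RemoteSignal { queryActive :: IO Bool }

-- The background work in question: even while the input stays
-- inactive, each networked behavior polls its upstream signal at a
-- fixed interval to learn whether it has become active again.
pollForActivity :: RemoteSignal -> Int -> (Bool -> IO ()) -> IO ()
pollForActivity sig intervalMicros onChange = do
    lastRef <- newIORef False
    forever $ do
        active <- queryActive sig        -- one network round trip
        prev   <- readIORef lastRef
        when (active /= prev) (onChange active)
        writeIORef lastRef active
        threadDelay intervalMicros       -- idle until the next poll
```

Multiply that loop across thousands of behaviors and the keep-alive traffic dominates even when every signal is inactive.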
Any ideas on how to address this problem?