Hmm, good question. I'll give an example that was one of the inspirations for Swarm in the first place:
Let's say you run a dating website, and you have millions of users. You have an algorithm that takes data about two users and predicts how well they will match.
The problem is that there is too much data to keep on one computer, and predictions need to be made very quickly, so you can't be retrieving data from other computers every time you make one.
With Swarm, the data would be distributed across multiple computers automatically. If you need a prediction involving a user whose data is on a remote computer, Swarm will transparently transfer the continuation to that computer.
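To make that concrete, here is a rough sketch of the control flow in Scala (the language Swarm is written in), assuming Scala's delimited continuations plugin, which Swarm builds on. Ref, predictMatch, and score are names I've made up for illustration, not Swarm's actual API, and the "hop" is simulated locally rather than serialised over the network:

    import scala.util.continuations._

    object MatchService {
      case class UserProfile(id: Long, features: Vector[Double])

      // Illustrative stand-in for a handle to data that may live on another
      // node. Reading it captures the current continuation `k`, which a
      // real system would ship to `node` rather than invoke locally.
      class Ref[A](val node: String, load: () => A) {
        def apply(): A @cps[Unit] = shift { (k: A => Unit) =>
          k(load()) // real Swarm: serialise `k`, send to `node`, resume there
        }
      }

      def score(x: UserProfile, y: UserProfile): Double =
        x.features.zip(y.features).map { case (p, q) => p * q }.sum

      // The whole prediction runs inside `reset`, so each Ref read can
      // suspend it and (conceptually) resume it on the node holding the data.
      def predictMatch(a: Ref[UserProfile], b: Ref[UserProfile]): Unit = reset {
        val profileA = a() // may hop to the node holding a's data
        val profileB = b() // may hop again, to the node holding b's data
        println(s"match score: ${score(profileA, profileB)}")
      }
    }

The point is that the programmer writes predictMatch as ordinary straight-line code; capturing the continuation is what lets the runtime move the rest of the computation to wherever the data lives.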
But clearly this would be very inefficient if it happened for every prediction, so the Swarm load balancer would identify which users tend to be tested against each other frequently (perhaps because they are in a similar geographic area) and automatically try to ensure that their data is stored on the same computer.
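Here is a toy sketch of that clustering step, just to show the shape of the idea; the greedy heuristic, the names, and the capacity model are my own illustration, not Swarm's actual balancer:

    object AffinityPlacement {
      type UserId = Long
      type NodeId = Int

      // coAccess: how many times each pair of users has been compared.
      // Greedily co-locates the most frequently compared pairs, subject
      // to a crude per-node capacity limit.
      def place(coAccess: Map[(UserId, UserId), Int],
                numNodes: Int,
                capacity: Int): Map[UserId, NodeId] = {
        var assignment = Map.empty[UserId, NodeId]
        var load = Map.empty[NodeId, Int].withDefaultValue(0)
        def leastLoaded: NodeId = (0 until numNodes).minBy(load)

        // Walk the pairs from most- to least-frequently compared.
        for (((a, b), _) <- coAccess.toSeq.sortBy { case (_, n) => -n }) {
          (assignment.get(a), assignment.get(b)) match {
            case (None, None) =>
              val n = leastLoaded // start a fresh cluster on the emptiest node
              if (load(n) + 2 <= capacity) {
                assignment += (a -> n)
                assignment += (b -> n)
                load += (n -> (load(n) + 2))
              }
            case (Some(n), None) if load(n) < capacity =>
              assignment += (b -> n); load += (n -> (load(n) + 1))
            case (None, Some(n)) if load(n) < capacity =>
              assignment += (a -> n); load += (n -> (load(n) + 1))
            case _ => () // both placed already, or the target node is full
          }
        }
        assignment
      }
    }

A real balancer would treat this as a graph-partitioning problem and rebalance incrementally as the statistics change, but greedy placement of the hottest pairs captures the intuition.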
Through this mechanism, the dating website's data is automatically and intelligently clustered so that it can be distributed across multiple computers, without an excessive communication overhead.
Does that make sense?
Runtime. It would be a background supervisor process, similar to "just in time" compilation or garbage collection.
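Concretely, I picture something like the following (a sketch only, with names of my own invention): a daemon thread that wakes up periodically, consults the co-access statistics, and migrates data, the way a GC cycle runs alongside your program:

    object RebalanceSupervisor {
      // Runs `rebalance` every `intervalMs` on a background (daemon)
      // thread, the way a GC or JIT thread runs alongside the application.
      def start(intervalMs: Long)(rebalance: () => Unit): Thread = {
        val t = new Thread(() => {
          try {
            while (true) {
              Thread.sleep(intervalMs)
              rebalance() // e.g. recompute placement and migrate hot user data
            }
          } catch {
            case _: InterruptedException => () // interrupt => clean shutdown
          }
        })
        t.setDaemon(true)
        t.start()
        t
      }
    }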
Ian.