The problem with long-running computations in a single-threaded UI is that they can block the UI and make it unresponsive for a period, which is definitely not an acceptable user experience.
I have created a library to help with AI-style searches. It provides a `next` function that steps the search along by examining a single node in the search space. The search space forms a graph; as each node is de-queued from the search buffer, it is checked to see if it is a goal state, and all of its successor states are created by a user-supplied `step` function and en-queued onto the search buffer.
```elm
type SearchResult state
    = Complete
    | Goal state (() -> SearchResult state)
    | Ongoing state (() -> SearchResult state)

next : SearchResult state -> SearchResult state
```
So I can run just one, or perhaps a few hundred, iterations of the search on each pass around the Elm `update` function. I can return a Cmd that will cause the `() -> SearchResult state` continuation to run some more iterations next time, and that also hands control back to the Elm kernel to process other Cmds and keep the UI responsive.
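As a rough sketch of what I mean, something like this (names other than `next`, `SearchResult`, `Ongoing` and so on are mine, not from the library, and the `Model`/`Msg` shapes are assumed):

```elm
-- Run up to n iterations of the search, stopping early if it
-- completes or reaches a goal.
stepN : Int -> SearchResult state -> SearchResult state
stepN n result =
    if n <= 0 then
        result

    else
        case result of
            Ongoing _ _ ->
                stepN (n - 1) (next result)

            _ ->
                result


type Msg
    = Slice (SearchResult State)


-- In update, run one slice, then schedule the next slice as a Cmd
-- so the runtime regains control between slices.
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Slice result ->
            case stepN 100 result of
                Ongoing state cont ->
                    ( { model | current = Just state }
                    , Task.perform Slice (Task.succeed (Ongoing state cont))
                    )

                done ->
                    ( { model | searchResult = Just done }, Cmd.none )
```

The `Task.succeed`/`Task.perform` round trip is just one way to yield; the point is only that each slice ends by returning to the runtime.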
It's explicit time-slicing of the CPU in application code, which is not so nice.
Ideally, I would have a background thread running the search. On every iteration it would check a volatile (shared memory) flag in case the user got bored waiting for the results and clicked cancel.
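On the JavaScript side (where a webworker would actually live), the cancel flag could be a `SharedArrayBuffer` read and written through `Atomics`; this is a sketch under that assumption, with all function names mine:

```javascript
// A one-word shared memory cell used as a cancel flag. The main
// thread and the worker each wrap the same SharedArrayBuffer in an
// Int32Array and use Atomics for race-free reads and writes.

// Main-thread side: create the buffer and a cancel setter.
// The buffer itself would be sent to the worker, e.g. via
// worker.postMessage({ buffer }).
function makeCancelFlag() {
  const buffer = new SharedArrayBuffer(4);
  const cell = new Int32Array(buffer);
  return {
    buffer,
    requestCancel: () => Atomics.store(cell, 0, 1),
  };
}

// Worker side: rebuild a view over the received buffer and poll it
// on every search iteration.
function makeCancelCheck(buffer) {
  const cell = new Int32Array(buffer);
  return () => Atomics.load(cell, 0) === 1;
}

// Sketch of the worker's search loop: step until complete, checking
// the shared flag between iterations. The search logic is elided.
function runSearch(isCancelled, next, result) {
  while (result.kind === "ongoing") {
    if (isCancelled()) {
      return { kind: "cancelled" };
    }
    result = next(result);
  }
  return result;
}
```
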
I know there have been a few experiments in hooking up Elm with webworker threads. Has anyone tried this out? In particular, was there a way to do inter-process communication, specifically the cancel flag I described above?
There is some IPC stuff mentioned under 'Future Plans' (spawn/kill/send) in the Process module:
Could this be implemented on top of webworker threads?