Walking the UI AST and injecting memoizers is an incomplete solution: it will yield render-tree pruning, but render-tree pruning will not work with graph values. Walking the UI AST to inject thick components (ones that can be forceUpdated) is still a candidate in my mind.
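To make the pruning limitation concrete, here's a minimal sketch (all names are mine, not from any library) of why reference-based memoization fails for graph values: a node deep in a shared, mutable graph can change without the root reference changing, so the memoizer prunes a render that actually needed to happen.

```typescript
// A node in a "graph value": identity-bearing, possibly shared/mutable
type GraphNode = { label: string; next?: GraphNode };

// memoize1: skip recomputation when the argument is reference-equal.
// This is the kind of memoizer an AST walker could inject.
function memoize1<A extends object, B>(f: (a: A) => B): (a: A) => B {
  let lastArg: A | undefined;
  let lastResult: B | undefined;
  return (a: A) => {
    if (a === lastArg) return lastResult as B;
    lastArg = a;
    lastResult = f(a);
    return lastResult;
  };
}

let renders = 0;
const render = memoize1((n: GraphNode) => {
  renders++;
  return `<li>${n.label} -> ${n.next?.label ?? "nil"}</li>`;
});

const b: GraphNode = { label: "b" };
const a: GraphNode = { label: "a", next: b };

const first = render(a); // renders once
b.label = "B!";          // change deep in the graph; `a`'s reference is unchanged
const second = render(a); // pruned: stale output is returned
```

The root reference never changed, so the memoizer returns the stale markup and the mutation to `b` is never reflected.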
The way an indexer/reconciler works (in both Om Next and Reagent, I believe), you want each horizontal layer in the UI tree responsible for connecting to its own dependencies. Reagent tracks reaction derefs to infer the connection between an instance and its dependency; Om Next provides the query protocol and uses that to connect the instance to its dependency. The approaches are more or less equivalent, right? Either way, each stratum in the tree is managed standalone and forceUpdated out of band.
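The shape of that pattern can be sketched as follows (a hypothetical store and component, not Reagent's or Om Next's actual API): each component declares its own dependency, the store indexes components by dependency, and a state change forceUpdates only the affected stratum, out of band, without re-rendering the parent.

```typescript
type Listener = () => void;

// A tiny store that indexes subscribers by the key they depend on
class Store {
  private state = new Map<string, unknown>();
  private listeners = new Map<string, Set<Listener>>();

  get(key: string): unknown {
    return this.state.get(key);
  }

  set(key: string, value: unknown): void {
    this.state.set(key, value);
    // Notify only the components that declared a dependency on this key
    this.listeners.get(key)?.forEach((l) => l());
  }

  subscribe(key: string, l: Listener): void {
    if (!this.listeners.has(key)) this.listeners.set(key, new Set());
    this.listeners.get(key)!.add(l);
  }
}

// A "thick component": it knows its own dependency and can forceUpdate itself
class ThickComponent {
  renders = 0;
  output = "";
  constructor(private store: Store, private dep: string) {
    store.subscribe(dep, () => this.forceUpdate());
    this.forceUpdate(); // initial render
  }
  forceUpdate(): void {
    this.renders++;
    this.output = `<div>${JSON.stringify(this.store.get(this.dep) ?? null)}</div>`;
  }
}

const store = new Store();
const parent = new ThickComponent(store, "user");
const child = new ThickComponent(store, "cart");

store.set("cart", ["apple"]);
// Only the child stratum re-rendered; the parent was never touched.
```

Reagent arrives at the same index implicitly (by tracking which reactions a render derefs) and Om Next explicitly (via the query protocol), but both end up with this dependency-to-instance mapping driving out-of-band updates.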
Any forceUpdate approach will necessarily violate function laws like dynamic scope, because we're perverting the evaluation rules of the AST to evaluate breadth-first instead of depth-first? So even if we can write our UI code as plain functions and use a macro to inject the components, the UI as it evaluates is going to violate function laws. So I don't know if I see a point in exploring this approach further, as the goal was to make UI code evaluation follow the same proper laws as regular code evaluation.
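Here's one way to see the dynamic-scope violation concretely (a simulation, not React): a dynamic variable is a binding stack that depth-first evaluation maintains correctly, but an out-of-band forceUpdate re-runs a child outside its enclosing binding, so the child observes the wrong value.

```typescript
// A dynamic variable implemented as a binding stack, as depth-first
// evaluation would maintain it
const stack: string[] = ["default-theme"];
const current = (): string => stack[stack.length - 1];

function withBinding<T>(value: string, body: () => T): T {
  stack.push(value);
  try {
    return body();
  } finally {
    stack.pop();
  }
}

let lastRender = "";
const child = (): void => {
  lastRender = `child sees ${current()}`;
};

// Depth-first evaluation: the child runs inside its parent's binding
withBinding("dark-theme", child);
const depthFirst = lastRender;

// Out-of-band forceUpdate: the same child is re-run later, after the
// depth-first pass has unwound, so the enclosing binding is gone
child();
const outOfBand = lastRender;
```

The child function didn't change, but its answer did, purely because forceUpdate evaluated it outside the dynamic extent it was written under.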
Another approach is to use incremental rendering as described in John De Goes's talk on PureScript Halogen. Essentially, if React lets us define our UI :: state -> VirtualDom, incremental rendering lets us define ΔUI :: Δstate -> ΔVirtualDom, and the runtime provides UI = reduce(ΔUI, Δstates), where Δstates is a stream of state changes. This would give us all our laws, be fast, and work with graph values in state, but the types involved are complex. Incremental rendering also loses the intuitive nature of React, though, where it feels like we are defining HTML templates. Thinking in terms of ΔHTML templates, I don't even know what that would make the UI code look like; probably something like Redux reducers. But it would be correct and optimally performant?
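The incremental shape above can be sketched like this (types and names are mine, not Halogen's): instead of a function from whole state to whole virtual DOM, we write ΔUI as a function from a state delta to a DOM patch, and the runtime folds the patch stream into the current DOM. Note how each case reads like a Redux reducer branch, which is the resemblance mentioned above.

```typescript
type Dom = { items: string[] };

// A state delta, rather than a whole new state
type DeltaState =
  | { kind: "add"; item: string }
  | { kind: "remove"; item: string };

// A DOM patch, rather than a whole new virtual DOM
type DeltaDom = (dom: Dom) => Dom;

// ΔUI: map each state change to the minimal DOM patch it implies
const deltaUI = (d: DeltaState): DeltaDom => (dom) =>
  d.kind === "add"
    ? { items: [...dom.items, d.item] }
    : { items: dom.items.filter((i) => i !== d.item) };

// UI = reduce(ΔUI, Δstates): the runtime folds the stream of patches
const deltaStates: DeltaState[] = [
  { kind: "add", item: "milk" },
  { kind: "add", item: "eggs" },
  { kind: "remove", item: "milk" },
];

const ui = deltaStates.reduce((dom, d) => deltaUI(d)(dom), { items: [] } as Dom);
```

Performance comes from each patch touching only what the delta names, and correctness from the fold being an ordinary pure reduction, but the tradeoff is visible even here: there is no HTML-template-looking description of the whole view anywhere in the code.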