At work we don't use Spark, but we have the concept of a jobs server that does heavy calculations on demand, alongside our web app. The way they work together: we use RabbitMQ to send a message from any of our 4 web servers to the jobs server, which then runs the task, written either in Scala or in a tool we wrote in Go. Both save their results to our database, and Lift reads the results from there.
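The pattern above can be sketched roughly like this. This is a minimal illustration, not our actual code: a buffered channel stands in for RabbitMQ, a map stands in for the database, and all names (`JobRequest`, `enqueue`, `processOne`) are made up for the example.

```go
package main

import "fmt"

// JobRequest is an illustrative message shape; real messages go over RabbitMQ.
type JobRequest struct {
	ID      string
	Payload string
}

var (
	broker   = make(chan JobRequest, 16) // stands in for RabbitMQ
	database = map[string]string{}       // stands in for the persistent store
)

// enqueue is the web-server side: publish a message instead of doing the work inline.
func enqueue(req JobRequest) {
	broker <- req
}

// processOne is the jobs-server side: consume a message, run the calculation,
// and persist the result so the web app can read it later.
func processOne() {
	req := <-broker
	result := fmt.Sprintf("processed:%s", req.Payload) // placeholder for the heavy job
	database[req.ID] = result
}

func main() {
	enqueue(JobRequest{ID: "job-1", Payload: "hello"})
	processOne()
	fmt.Println(database["job-1"]) // prints "processed:hello"
}
```

The point of the indirection is that the web servers never block on the calculation; they fire a message and move on, and the results live in a durable store rather than in any one process's memory.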
In our case, we want to keep the results in a persistent store, rather than just holding them in memory on the jobs server and having Lift display them only once.
Some of our jobs are really fast, finishing in just a few seconds, while others can take 8+ hours to process the amount of data we have, and this setup has worked well for both.
I'm not sure whether, in your case, the calculations are still useful after Lift's initial render.
In short, I think having Spark store the results in your database isn't such a bad idea, unless I'm missing critical details about your use case.
Thanks
Diego