Hi Eran,
Thanks for opening an interesting topic.
In my understanding (I'm not deeply familiar with it), spark-jobserver runs Spark jobs along with their dependencies.
The jobs are packaged as jars, which are then uploaded to spark-jobserver and run via its REST API.
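For reference, that upload-and-run flow looks roughly like this. This is just a sketch: the host, jar name, app name, and job class are all hypothetical, and it assumes spark-jobserver's standard /jars and /jobs endpoints.

```shell
# Upload a packaged job jar to spark-jobserver under an app name
# (assumes a jobserver listening on localhost:8090).
curl --data-binary @my-job.jar localhost:8090/jars/my-app

# Run the job by pointing at its main class via the REST API.
curl -d "" 'localhost:8090/jobs?appName=my-app&classPath=com.example.MyJob'
```

The jobserver replies with a job id, which you can poll later to get the job's status and result.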
Zeppelin is more like spark-shell. It runs as a Spark job, but unlike other Spark jobs it doesn't have any prebuilt code; instead it embeds a Scala interpreter. While Zeppelin is running, a Spark job keeps running, and Zeppelin's Scala interpreter dynamically interprets user code and sends it to the Spark workers to run.
Right now, I don't see a good way to use spark-jobserver as a Zeppelin backend.
On the other hand, if Zeppelin had the ability to submit prebuilt Spark jobs through spark-jobserver, with scheduling and triggering support, like Oozie / Azkaban for MapReduce, that might be helpful in some cases.
I'm not sure I understand your question about a monitoring process working with Spark. Can you explain a little bit more about it?
Thanks,
moon