A small number of use cases is an important reason. I see many people interested in Julia & Spark integration, but almost nobody interested enough to invest time into its development.
Another reason is that Julia's infrastructure (and especially Julia-Java integration) is not mature enough for integrations at this level. Instability of JNI, inconsistencies between Java and Scala, serialization issues in Julia - these are just a few of the difficulties I faced while working on Sparta.jl. Many people are doing great work to fix such issues, but at the moment Julia is far behind, say, Python.
Finally, it's simply a huge amount of work. I don't mean basic functionality like map and reduce operations over a text file, but the whole variety of supported data formats, DataFrames, subprojects like Spark Streaming and MLlib, etc. And without these features we get back to paragraph 1 - nobody is interested enough to invest time when there's already PySpark and SparkR.
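To give a sense of scale: the "basic functionality" level mentioned above really is small. Here is a minimal sketch in plain Python (no Spark) of a word count expressed as map and reduce over lines of text - everything in it (`word_count`, `add`) is an illustrative name, not any real Sparta.jl or Spark API. The point is that this part is trivial; the hard part is everything the paragraph lists beyond it.

```python
from functools import reduce

def word_count(lines):
    """Count word occurrences via map and reduce over lines of text."""
    # map step: split each line into words
    words = (w for line in lines for w in line.split())

    # reduce step: fold each word into a running count dictionary
    def add(counts, w):
        counts[w] = counts.get(w, 0) + 1
        return counts

    return reduce(add, words, {})

print(word_count(["to be or not to be"]))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

A distributed engine adds partitioning, shuffling, fault tolerance, and data-format support on top of this skeleton, which is where the real effort goes.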
All of this makes me think that a similar framework for big data analytics, written in pure Julia, could bypass many of these issues and generate more interest in the Julia community. I wonder if somebody would want to take part in such a challenge.