SequenceFile is not such a good choice here. Unfortunately, that class is not well documented (as I just checked), so I apologize. It uses Cascading's SequenceFileScheme, which encodes the Cascading tuple and its contents into a Hadoop sequence file. The issue is that to read it back you will have to set up the same Hadoop serializations you used in the scalding job. While this is doable, it might be easier to just run a scalding job to convert the data.
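A conversion job along these lines might work (a sketch, not tested: the field names 'key and 'value are placeholders for whatever your original job actually wrote, and the Tsv output is just one Spark-friendly choice):

```scala
import com.twitter.scalding._

// Hypothetical job: read the existing SequenceFile output with the
// same serializations registered as in the original job, then
// rewrite it in a format Spark can read without Cascading.
class ConvertJob(args: Args) extends Job(args) {
  SequenceFile(args("input"), ('key, 'value))
    .read
    .write(Tsv(args("output")))
}
```

The key point is that this job runs inside scalding, where the Cascading tuple serialization is already configured, so you never have to replicate that setup in Spark.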
I try to encourage people, as often as I can, to use Thrift, Avro, or protobuf for output data. This is exactly the reason why.
Also note that we recently added a TypedJson source and sink, which is safe to use (but slower than the choices mentioned above). With JSON you could also read the data in Spark.
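Usage looks roughly like this (a sketch: the Record case class and input source are made up for illustration, and TypedJson lives in the scalding-json module):

```scala
import com.twitter.scalding._

// Hypothetical record type; TypedJson serializes it as one JSON
// object per line, which Spark can read as a plain text file.
case class Record(id: Long, name: String)

class JsonOutJob(args: Args) extends Job(args) {
  TypedPipe.from(TypedTsv[(Long, String)](args("input")))
    .map { case (id, name) => Record(id, name) }
    .write(TypedJson[Record](args("output")))
}
```

Since the output is newline-delimited JSON, on the Spark side you can get at it with an ordinary text read plus your JSON parser of choice, with no Cascading classes on the classpath.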
Lastly, you might look at WritableSequenceFile. For that you will need to implement Hadoop's Writable interface for your output data. This should be readable from Spark as well.
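Implementing Writable is mostly boilerplate; something like this (a sketch, with a made-up record type; the no-arg constructor is required so Hadoop can instantiate it when reading):

```scala
import java.io.{DataInput, DataOutput}
import org.apache.hadoop.io.Writable

// Hypothetical value type implementing Hadoop's Writable.
// write and readFields must serialize the fields in the same order.
class MyRecord(var id: Long, var name: String) extends Writable {
  def this() = this(0L, "")  // required no-arg constructor

  override def write(out: DataOutput): Unit = {
    out.writeLong(id)
    out.writeUTF(name)
  }

  override def readFields(in: DataInput): Unit = {
    id = in.readLong()
    name = in.readUTF()
  }
}
```

You would then write with scalding's WritableSequenceFile sink (the exact constructor varies by scalding version) and read back on the Spark side with SparkContext's sequenceFile method, with your Writable class on Spark's classpath.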