What foldable-seq does (or at least, tries to do) is carve off n "chunks" from the start of the sequence (where, by default, n is 10) and reduce those chunks in parallel. As the result of each reduction becomes available, it's combined into an accumulator and one additional chunk is carved off to be reduced.
This means that the max memory that should ever be in use is n * the size of a chunk, plus whatever working memory the reduce and combine functions require.
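To make that concrete, here's a simplified sketch of the idea (illustrative only, not the actual foldable-seq code -- fold-chunked and its parameter names are made up for this example, with fold's chunk size and foldable-seq's chunk count collapsed into one function):

(defn fold-chunked
  "Reduce coll in chunks of chunk-size, keeping at most window chunk
  reductions in flight, combining each result as it completes."
  [window chunk-size combinef reducef coll]
  (loop [acc     (combinef)                          ; identity value from combinef
         pending clojure.lang.PersistentQueue/EMPTY  ; chunk reductions in flight
         chunks  (partition-all chunk-size coll)]    ; lazily carve off chunks
    (cond
      ;; Window full, or input exhausted with results still outstanding:
      ;; wait for the oldest chunk's result and combine it into the accumulator.
      (or (>= (count pending) window)
          (and (seq pending) (empty? chunks)))
      (recur (combinef acc @(peek pending)) (pop pending) chunks)

      ;; Room in the window and input remaining: carve off one more chunk
      ;; and start reducing it on another thread.
      (seq chunks)
      (recur acc
             (conj pending (future (reduce reducef (reducef) (first chunks))))
             (rest chunks))

      ;; No chunks left and nothing in flight: done.
      :else acc)))

Because the pending queue never grows beyond window entries, at most window * chunk-size elements of the underlying sequence are ever realised at once, which is where the memory bound above comes from.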
In your example, that bound is exactly why it runs out of RAM: each chunk contains 100 arrays, and multiplied by the 10 chunks that foldable-seq uses by default, you're trying to hold all 1000 arrays in memory at once.
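An int-array of 10,000,000 elements is around 40MB (4 bytes per int), so that's on the order of 40GB of live data.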
But it should be possible to parallelise successfully by choosing different parameters to fold and foldable-seq. I would expect the following, for example, to both parallelise and not run out of RAM:
(fold 10 +
      (fn ([] 0) ([x y] (+ x (count y))))
      (foldable-seq 4 (repeatedly 1000 #(int-array 10000000))))
Because in this case, each chunk contains 10 arrays, and foldable-seq only creates 4 chunks at a time. So the max memory should be 4 * 10 * array-size.
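(With 10,000,000-element int arrays at roughly 40MB each, that works out at about 4 * 10 * 40MB = 1.6GB.)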
Having said that, the above *does* give an out of memory error, which implies (damn!) that I have a bug somewhere in my implementation. Thanks for leading me to it - I'll see if I can work out what I've screwed up.