Other details:

- My file is never totally loaded in memory, even with ./bin/alluxio fs load /path/to/bigTextFile
- The worker memory size is 2 GB * 6~8 workers: 12 GB min to 16 GB max in total

2018-04-27 16:48 GMT+02:00 Pascal Gillet <pascal...@gmail.com>:

Hi Gene,

- Yes, the Spark job succeeds, but it takes as much time as if it read the file directly from the S3 backend.
- Yes, I have an 8 GB file that is never totally loaded in memory: between 25% and 50% when I run the job multiple times, and I can see that the in-memory data sometimes moves from some workers to others.
- Alluxio is deployed with High Availability via ZooKeeper, but not through Mesos: I have 3 Alluxio masters on separate servers, 3 Mesos masters, and the Alluxio workers are also the Mesos agents (whose resources are partitioned between Mesos and Alluxio).
- My Spark job is packaged in a Docker container executed through Mesos: I mounted the Alluxio client jar into the Spark classpath, and I also had to mount /mnt/ramdisk, otherwise Spark complains with FileNotFoundExceptions. (A sketch of the read path follows below.)
- 6~8 Alluxio workers (Mesos agents)

Thanks,
Pascal
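For reference, here is a minimal Scala sketch of the Spark read path described in this setup: a job reading the file through Alluxio with ZooKeeper-based HA. The ZooKeeper hostnames (zk1..zk3) and the Alluxio path are hypothetical placeholders, and it assumes the Alluxio client jar is already on the driver and executor classpaths; executors would additionally need the same alluxio.zookeeper.* properties passed at submit time (e.g. via spark.executor.extraJavaOptions). This is a sketch under those assumptions, not the exact job from the thread.

```scala
import org.apache.spark.sql.SparkSession

object AlluxioReadSketch {
  def main(args: Array[String]): Unit = {
    // The Alluxio client picks up configuration from JVM system
    // properties; with ZooKeeper HA enabled it discovers the current
    // leading master instead of connecting to a fixed master host.
    // zk1..zk3 are hypothetical ZooKeeper ensemble hosts.
    System.setProperty("alluxio.zookeeper.enabled", "true")
    System.setProperty("alluxio.zookeeper.address", "zk1:2181,zk2:2181,zk3:2181")

    val spark = SparkSession.builder()
      .appName("alluxio-read-sketch")
      .getOrCreate()

    // Read through Alluxio: blocks already cached in worker memory are
    // served from there; anything not cached falls back to the S3
    // under-store, which is why a partially cached file can read at
    // close to S3 speed.
    val lines = spark.read.textFile("alluxio:///path/to/bigTextFile")
    println(s"line count: ${lines.count()}")

    spark.stop()
  }
}
```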