--
You received this message because you are subscribed to the Google Groups "Alluxio Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to alluxio-users+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
-- Pei Sun
We haven't solved this issue yet; I'm not sure how I can share the details.

Hi Pei,

Thanks for checking on this. Here are the steps we follow:

1. Create an Alluxio Docker image (see the Dockerfile in the attached docker.zip).
2. Deploy Alluxio in Mesos using Marathon with the command and configuration below.
   - Command: /alluxio/integration/mesos/bin/alluxio-mesos-start.sh -w leader.mesos:5050 master -DHADOOP_USER_NAME=hdfs
   - Environment variables:
"ALLUXIO_MASTER_HOSTNAME": "master",
"ALLUXIO_JAVA_OPTS": "-Dalluxio.integration.worker.resource.mem=1048MB -Dalluxio.worker.memory.size=1048MB -Dalluxio.integration.mesos.alluxio.jar.url=http://downloads.alluxio.org/downloads/files/1.4.0/alluxio-1.4.0-bin.tar.gz -Dalluxio.user.file.writetype.default=CACHE_THROUGH -Dalluxio.underfs.address=hdfs://hdfs:8020/tmp"
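To show everything in one place, the command and environment variables from step 2 could be combined into a single Marathon app definition. This is only a sketch: the app id, CPU, and memory values are assumptions, not values from our deployment.

```shell
# Sketch only: a Marathon app definition combining the start command and
# environment variables above. The app id and the cpus/mem resource values
# are assumptions; the cmd and env entries are taken from the steps above.
cat > alluxio-master.json <<'EOF'
{
  "id": "/alluxio-master",
  "cmd": "/alluxio/integration/mesos/bin/alluxio-mesos-start.sh -w leader.mesos:5050 master -DHADOOP_USER_NAME=hdfs",
  "cpus": 1,
  "mem": 2048,
  "env": {
    "ALLUXIO_MASTER_HOSTNAME": "master",
    "ALLUXIO_JAVA_OPTS": "-Dalluxio.integration.worker.resource.mem=1048MB -Dalluxio.worker.memory.size=1048MB -Dalluxio.integration.mesos.alluxio.jar.url=http://downloads.alluxio.org/downloads/files/1.4.0/alluxio-1.4.0-bin.tar.gz -Dalluxio.user.file.writetype.default=CACHE_THROUGH -Dalluxio.underfs.address=hdfs://hdfs:8020/tmp"
  }
}
EOF
```

This file would then be submitted to Marathon's /v2/apps REST endpoint to launch the app.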
3. In the Spark driver we also set the following configuration and Java options:
fs.alluxio.impl=alluxio.hadoop.FileSystem
spark.driver.extraJavaOptions=-Dalluxio.integration.worker.resource.mem=1048MB -Dalluxio.worker.memory.size=1048MB -Dalluxio.integration.mesos.alluxio.jar.url=http://downloads.alluxio.org/downloads/files/1.4.0/alluxio-1.4.0-bin.tar.gz -Dalluxio.user.file.writetype.default=CACHE_THROUGH -Dalluxio.underfs.address=hdfs://hdfs:8020/tmp
spark.executor.extraJavaOptions=-Dalluxio.integration.worker.resource.mem=1048MB -Dalluxio.worker.memory.size=1048MB -Dalluxio.integration.mesos.alluxio.jar.url=http://downloads.alluxio.org/downloads/files/1.4.0/alluxio-1.4.0-bin.tar.gz -Dalluxio.user.file.writetype.default=CACHE_THROUGH -Dalluxio.underfs.address=hdfs://hdfs:8020/tmp

Could you:
1. Verify our Docker image and tell us whether the configuration is good, or whether we need different settings?
2. Look at the error messages we see related to the journal path? The master logs are attached.
3. Review the exact configuration values, also attached (Configuration values.txt)?

Our requirement is that Alluxio be deployed in Mesos, with HDFS as the under filesystem so that data is protected in case Alluxio crashes.

Please review and let me know your feedback.

Regards,
Jais
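An alternative to passing each setting on the spark-submit command line is a spark-defaults.conf fragment. This is only a sketch: the file name and the spark.hadoop.* prefix used to forward the Hadoop filesystem property are assumptions, and the Alluxio options are abbreviated here to the write-type and under-FS settings shown in step 3.

```shell
# Sketch only: the Spark settings from step 3 expressed as spark-defaults.conf
# entries. The file name and the spark.hadoop.* prefix are assumptions; the
# -D option values come from the configuration above.
cat > spark-alluxio.conf <<'EOF'
spark.hadoop.fs.alluxio.impl     alluxio.hadoop.FileSystem
spark.driver.extraJavaOptions    -Dalluxio.user.file.writetype.default=CACHE_THROUGH -Dalluxio.underfs.address=hdfs://hdfs:8020/tmp
spark.executor.extraJavaOptions  -Dalluxio.user.file.writetype.default=CACHE_THROUGH -Dalluxio.underfs.address=hdfs://hdfs:8020/tmp
EOF
```

Each line is equivalent to a --conf key=value argument to spark-submit; keeping them in one file avoids the driver and executor options drifting apart.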
--
Regards,
Jais Sebastian