Sorry, we don't have a prepared VM for setting up a Hadoop/Spark cluster; we only have a single-node VM for trying out BigDL.
It looks like you are new to Hadoop/Spark. A Hadoop/Spark cluster can be set up in one of two ways:
1. HDFS + YARN cluster, with Spark installed on the client machine (most common, and recommended)
2. HDFS + Spark Standalone cluster
From the application's point of view, the main difference is which cluster manager you submit the job to; there is a rough sketch below.
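Just to illustrate (this is my own sketch, not from any official guide): the same PySpark program runs on either setup, and what changes is the --master value you pass to spark-submit, e.g. "yarn" for option 1 or a spark://<master-host>:7077 URL for option 2.

    from pyspark.sql import SparkSession

    # The master is normally supplied by spark-submit rather than hard-coded:
    #   --master yarn                          (option 1, needs HADOOP_CONF_DIR pointing at the cluster config)
    #   --master spark://<master-host>:7077    (option 2, Spark Standalone)
    spark = SparkSession.builder.appName("cluster-smoke-test").getOrCreate()
    print("Running against cluster manager:", spark.sparkContext.master)
    spark.stop()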
BigDL is just a standard Spark library, so it only needs to be installed on the client machine.
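For example, once BigDL is on the client machine (the release tarball, or for Python something like pip install BigDL; the exact package name depends on the version), the driver program just creates a BigDL-aware SparkConf and initializes the engine. A rough sketch with the classic 0.x Python API, which may differ in newer releases:

    from pyspark import SparkContext
    from bigdl.util.common import create_spark_conf, init_engine

    # Classic BigDL 0.x bootstrap: create a Spark conf with BigDL's settings,
    # then initialize the native engine before defining or training any model.
    sc = SparkContext(appName="bigdl-hello", conf=create_spark_conf())
    init_engine()
    print("BigDL engine initialized on Spark", sc.version)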
There are lots of guides on the Internet for "How to set up a three-node Hadoop cluster" and "How to run Spark on a YARN cluster". Once you can run the SparkPi example on YARN successfully, we can move on to "How to run BigDL machine learning algorithms".
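For reference, the Pi check can be done either with the bundled Scala SparkPi example or with a small PySpark script like the sketch below, submitted with something like spark-submit --master yarn --deploy-mode client pi.py (exact flags and paths depend on your Spark version, so treat this as an assumption, not a copy-paste command):

    # pi.py -- a rough PySpark equivalent of the SparkPi example
    from random import random
    from operator import add
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("PythonPi").getOrCreate()
    partitions = 10
    n = 100000 * partitions

    def inside(_):
        # Sample a point in the unit square; count it if it lands inside the circle.
        x, y = random() * 2 - 1, random() * 2 - 1
        return 1 if x * x + y * y <= 1 else 0

    count = spark.sparkContext.parallelize(range(1, n + 1), partitions).map(inside).reduce(add)
    print("Pi is roughly %f" % (4.0 * count / n))
    spark.stop()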
Feel free to ask me if you run into any problems.
Also, I'm wondering whether you'd like to use BigDL's Python or Scala API?
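In case it helps you decide: the two APIs mirror each other, so the choice mostly comes down to which language you are more comfortable with. As a hedged sketch (classic 0.x Python API; the layer names are the same on the Scala side), defining a small model looks like this:

    from bigdl.nn.layer import Sequential, Linear, ReLU, LogSoftMax

    # A tiny MLP built with the classic BigDL 0.x Python layer API.
    model = Sequential()
    model.add(Linear(28 * 28, 128)).add(ReLU())
    model.add(Linear(128, 10)).add(LogSoftMax())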
Best,
-Xin