Hello
I am a new CloudLab user, and I am looking for a simple way to re-run multi-node cluster experiments without having to reinstall software and packages each time. I read through the documentation on creating profiles (which take disk snapshots) and on storing software and data in a directory such as /local/ rather than the user home directory, but that approach only seems to cover single-node experiments. How can I do the same for a multi-node cluster?

For example, I want to run experiments with Apache Spark on HDFS, which also requires a Hadoop setup. Setting up a multi-node Spark cluster (one master and multiple workers) is very time-consuming, and I would like to capture that setup in some kind of profile so I can simply instantiate it whenever I need to run experiments.
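Based on the geni-lib examples in the profile documentation, here is a rough sketch of what I imagine a multi-node profile script might look like. The image URN is just a placeholder for a snapshot I would create after setting up Spark and Hadoop on one node; please correct me if this is the wrong approach:

# Rough sketch of a multi-node profile, based on the geni-lib examples
# in the CloudLab manual. The disk image URN below is a placeholder,
# not a real image.
import geni.portal as portal
import geni.rspec.pg as pg

pc = portal.Context()
request = pc.makeRequestRSpec()

# Placeholder URN for a custom snapshot with Spark/Hadoop installed.
IMAGE = "urn:publicid:IDN+emulab.net+image+MyProject//spark-hadoop"

# A LAN connecting all of the cluster nodes.
lan = request.LAN("lan0")

# One master and three workers, all booted from the same snapshot.
for i, name in enumerate(["master", "worker1", "worker2", "worker3"]):
    node = request.RawPC(name)
    node.disk_image = IMAGE
    iface = node.addInterface("if0")
    iface.addAddress(pg.IPv4Address("10.10.1.%d" % (i + 1), "255.255.255.0"))
    lan.addInterface(iface)

pc.printRequestRSpec(request)

Would booting every node from the same snapshot like this be the recommended way to do it, or is there a separate mechanism for snapshotting a multi-node experiment?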
Can someone please point me in the right direction? This has become a major bottleneck in my research, and any help would be greatly appreciated.
Thanks,
Dhruv