Hi Alfred,
Getting your head around everything in OpenSTF will take some time and work. The available documentation, while good, doesn't seem to be targeted toward people without a lot of experience with networking, Docker, Linux, nginx, and Android development. I had no Docker experience when I started putting together my OpenSTF system, and originally tried to set it up using the "local" command. I quickly ran into all kinds of weird problems, which ultimately led me to bite the bullet and set it up using the documentation on the web site. I've been meaning to write a blog post about how I set things up, but I have been entirely too busy with a break-neck development schedule here at work.
So, let me see if I can share a little bit of hard-earned wisdom about OpenSTF, with the hope that it helps you out.
First, the documentation is really geared toward setting up OpenSTF as a large cluster of machines. I suspect that if you have a lot of devices, and lots of people trying to use them at the same time, using lots of machines would be a good idea. However, my initial setup for testing involved about 3-4 dozen devices, with only myself using them all. I was able to run a decent test setup like that on a small low-profile Dell machine with a Core i3 @ 3.3 GHz and 8 GB of RAM. It wasn't blazingly fast, but it worked well enough for my testing needs.
Next, it is important to understand that *MOST* of the "pieces" of OpenSTF can run wherever you want them to run. They can all run on the same box, or they can (mostly) each run on their own box. There are a few exceptions, but those are outlined in the documentation. The documentation seems a little scary at the beginning, but once you start to get a feel for how everything works, it becomes more approachable.
Next, it is worth pointing out that there are a LOT of dependencies to get OpenSTF up and running. I tried to install them one at a time and get it all going, and it was horribly painful. Along the way I ran into issues where the Linux version I was using didn't have exactly the right version of a library or tool. The Docker containers, while a little scary at first, completely remove the need to figure all of those dependencies out. When the container is downloaded to your machine, everything that is needed to run the system is already there! Even better, if you build your systemd startup scripts properly (which is the default from the documentation), you will get the latest release builds of OpenSTF any time you reboot the machine, or restart the systemd services!
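To show what I mean, the unit files in the deployment docs have an ExecStartPre line along these lines. (This is a sketch from memory, not a copy of a real unit; check the actual files in DEPLOYMENT.md. The "latest" tag is what causes a fresh pull every time the service starts.)

```ini
# Sketch of the pattern used by the DEPLOYMENT.md unit files.
# Pulling openstf/stf:latest before each start is what gets you
# the newest release on every reboot or service restart.
[Service]
ExecStartPre=/usr/bin/docker pull openstf/stf:latest
ExecStart=/usr/bin/docker run --rm openstf/stf:latest stf migrate
```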
Finally, it really isn't that hard to get things up and running if you use the available documentation, and a Linux system that can run docker and uses systemd. I'm running it on Ubuntu 16.04 server. I do *STRONGLY* recommend that you use a version of your favorite Linux distro that can easily install things like docker, and that already has systemd integrated and running. It will save you a *LOT* of time and pain.
But, on to my configuration. I currently have two OpenSTF instances running. The one in my home office is running everything *BUT* the provider service in a VM on a rather beefy Dell server. The provider is running on the previously mentioned low profile Dell desktop machine. Both machines are in the same layer 2 domain, and on the same IP subnet. (One at 192.168.64.55, the other at 192.168.64.56.)
However, as I said, I originally ran everything on a single box. If the documentation at https://github.com/openstf/stf/blob/master/doc/DEPLOYMENT.md makes you feel like an anxiety sufferer on a sugar bender, this is probably a good way to start out. It'll let you get everything working, and then allow you to branch out slowly from there. If, as I suggested, you are running a Linux distro that is current enough that you can "apt-get" (or yum, or whatever) docker, and that is running systemd as the default init mechanism, then pretty much all you need to do is: install docker ("apt-get install docker.io", FWIW); copy/paste all of the .service files in that document that *AREN'T* listed as optional; put them in /etc/systemd/system (or wherever your OS of choice puts them); change the IP addresses used (and *ONLY* the IP addresses! If you mess with the port numbers you will create a LOT of pain for yourself!); and then start them up with something like "systemctl start <service name>".
There are a few minor exceptions: the systemd units whose names end with @.service need to be started with a parameter, so you would use "systemctl start <service name>@<parameter>". (You can generally leave off the .service portion of the file name; systemd is smart enough to figure that out.) The parameters appear to be somewhat arbitrary, but to keep myself sane, I used the values that were in the documentation for each of the .service files (i.e. stf-app@3100, etc.).
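To make the "IP addresses only" edit less error-prone, you could do it with sed. This is just an illustrative sketch: the unit file contents, the 192.168.255.100 placeholder, and the /tmp path are all made up for the example (the real files come from DEPLOYMENT.md); 192.168.64.55 is my own machine's address.

```shell
# Hypothetical stand-in for one of the unit files copied from DEPLOYMENT.md:
mkdir -p /tmp/stf-units
cat > /tmp/stf-units/stf-app@.service <<'EOF'
[Service]
ExecStart=/usr/bin/docker run --rm openstf/stf:latest stf app \
    --port %i --connect-sub tcp://192.168.255.100:7150
EOF

# Rewrite ONLY the IP address, leaving the port numbers alone:
sed -i 's/192\.168\.255\.100/192.168.64.55/g' /tmp/stf-units/*.service

# Show the rewritten line:
grep 'tcp://' /tmp/stf-units/stf-app@.service
```

After checking the result, you would copy the edited files into /etc/systemd/system and start them as described above.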
Once you have all of those systemd service files on your system, you may want to have them run at startup. To do that, I added the following lines to the bottom of each service file:

    [Install]
    WantedBy=multi-user.target
And then used "systemctl enable <service-file>@<param>" for each one. From then on, when the machine boots, it should start those services.
Do note that the first time you start the services, it might take a while. They will be going out and downloading the docker containers for each of the pieces. If your network connection is slow, you might have to wait a bit. But, after that first download, they start up pretty fast.
Once you have everything running on a single machine with those service files, you can start to branch out and move service files to other machines. (Or add other machines to the cluster.) For my current two-machine setup, I put the adbd service and the stf-provider@ service on the second machine, and edited the service files to use the VM instance's IP address to connect to the cluster.
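For concreteness, the kind of edit I mean looks roughly like this in the stf-provider@ unit. This is a heavily abbreviated sketch, not the real unit (the one in DEPLOYMENT.md has more flags, and its option names are authoritative); the point is that the connect and storage addresses point at the main VM (192.168.64.55), while --public-ip is the provider box itself (192.168.64.56):

```ini
# Abbreviated sketch of a provider unit running on the second machine.
[Service]
ExecStart=/usr/bin/docker run --rm --net host openstf/stf:latest stf provider \
    --name "%i" \
    --connect-sub tcp://192.168.64.55:7250 \
    --connect-push tcp://192.168.64.55:7270 \
    --storage-url http://192.168.64.55/ \
    --public-ip 192.168.64.56 \
    --min-port 15000 --max-port 25000
```

Note that the min/max port range here stays in five-digit territory, which matters for the nginx configuration below.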
I think the only other thing that messed me up was the nginx configuration. When you bring up the stf-provider container, you will end up using a parameter like "01". In the nginx configuration file, you will see a configuration block like this:
    # Handle stf-pr...@floor4.service
    location ~ "^/d/floor4/([^/]+)/(?<port>[0-9]{5})/$" {
        proxy_pass http://192.168.255.200:$port/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Real-IP $remote_addr;
    }
On the "location" line, you will see the text "floor4". Whatever parameter you use when you start stf-provider@ needs to go where the "floor4" text is. And, you need a block like the one above for each provider that you have running.
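So if you started the provider as stf-provider@01, the block would look like this (the same as above with "floor4" swapped for "01"; the IP address is whatever machine that provider actually runs on, which in my setup is the second box):

```nginx
location ~ "^/d/01/([^/]+)/(?<port>[0-9]{5})/$" {
    proxy_pass http://192.168.64.56:$port/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Real-IP $remote_addr;
}
```

The [0-9]{5} part of the regex only matches five-digit ports, so keep the provider's port range at 10000 or above.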
So, in the end, probably the best way to get going is just to start with that deployment file and jump in. Then, when you hit snags, you can ask smaller, more specific questions along the way, and it will probably be easier to help.
Good luck!