Have you checked out the README on GitHub yet?
It gives a pretty good overview of getting everything built, but it does assume you can handle the dependencies (Postgres, PySide, Thrift, Cython).
In a nutshell:
1. Create the plow database
2. Build the java client and server, and start the plow server
3. Build the python client, and start rndaemon
4. Start the plow-wrangler application
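In practice, the steps above might look roughly like the sketch below. Every command, path, script name, and build tool here is an assumption based on a typical setup of this kind, not the project's documented procedure, so defer to the README for the real commands:

```shell
# HYPOTHETICAL walkthrough -- names and paths below are guesses, not
# the actual Plow build commands. Consult the README.

# 1. Create the plow database (assumes a local Postgres install).
createdb plow                       # database name is an assumption
psql plow < server/ddl/schema.sql   # schema file location is an assumption

# 2. Build the java client and server, then start the plow server.
cd server
mvn package                         # build tool is an assumption
./plow-server.sh                    # launcher name is an assumption

# 3. Build and install the python client, then start rndaemon on the host.
cd ../lib/python
python setup.py install
rndaemon

# 4. Start the GUI.
plow-wrangler
```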
With plow-wrangler running, you should be able to open up the cluster and node panels, and see your single (or more) rndaemon instances reporting in.
This will make it super easy to run a test job
With Blueprint installed, you should have the bluerun command. You can submit a test sleep job like this:
bluerun blueprint/tests/scripts/sleep.bps 1-5
This will create a job with a layer of 5 sleep tasks, which will start running right away on the available cores of your rndaemon. plow-wrangler should show the job in the Job Wrangler/Watch panels, and its tasks in the task panel after you double-click the job.
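To make the "1-5" argument concrete: a frame range expands to one task per frame, which is why 1-5 yields 5 sleep tasks in the layer. A minimal sketch of that expansion (this is illustrative only, not Blueprint's actual parser, whose range syntax is richer):

```python
def parse_frame_range(spec):
    """Expand a plain 'start-end' frame spec into a list of frame numbers.

    Illustrative sketch only -- the real Plow/Blueprint syntax supports
    more forms than the simple 'start-end' handled here.
    """
    if "-" in spec:
        start, end = (int(n) for n in spec.split("-", 1))
        return list(range(start, end + 1))
    return [int(spec)]

# "1-5" expands to five frames, hence five tasks in the layer.
print(parse_frame_range("1-5"))  # -> [1, 2, 3, 4, 5]
```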
The plow-wrangler is still being developed, so more features are coming. But it already lets you browse clusters, nodes, jobs, and tasks, view and tail logs, and inspect some properties. Some of the management features are there too, like locking/unlocking nodes, pausing and killing jobs, and killing/retrying tasks.
We obviously have a ways to go on the documentation. Eventually there will be binary distributions of the server and client so that you won't have to build from source. But as of right now, the project is still in the development stage, so building from source is usually required to work on it and pick up all the changes happening daily.