Add an initial tool (say ext/bin/check-command-perf ... stockpile-queue.tgz or something) that runs command-processing tests against puppetdb, likely via ./pdb, when given a stockpile queue archive. It's fine to require that lein uberjar has already been run. For now this only tests command processing, i.e. it intentionally side-steps the cost of command ingestion (via http).

For the initial version, assume we're using pdbbox, that PDBBOX is set in the environment, and that pdb is stopped. The tool should then:

  1. Require that the $PDBBOX/var/stockpile/cmd/q dir be empty.
  2. Untar the archive into $PDBBOX/var/stockpile/cmd/q.
  3. Start ./pdb services -c "$PDBBOX/pdb.ini".
  4. Time how long it takes for the queue to become empty.
  5. Check the relevant metrics at the end to make sure nothing unexpected has happened (e.g. too many commands deferred or sent to the DLO).

Given the metrics checks, we might want to consider writing this in something like clojure (perhaps as a lein alias), python, or ruby, though it might also be feasible in bash with help from jq.

At the moment, one way to create a suitable stockpile queue from an existing timeshifted database, for the purposes of working on this ticket, is to export the database via the archive endpoint, create a new pdbbox, make sure postgresql is stopped, import the export, and then run:

  cd "$PDBBOX/var/stockpile/cmd/q" && tar czpSf ../stockpile.tgz .

In the longer run, we may want to augment the command described in PDB-5095 to support an output option that writes directly to a stockpile queue instead of a wireformat archive. That should be reasonably easy, and much more efficient than round-tripping through an export. Perhaps something like:
lein timeshift-export ... --out-format stockpile "$PDBBOX/var/stockpile"
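
The untar/start/drain/time flow described above could be sketched roughly as follows. This is only a hypothetical shape for ext/bin/check-command-perf, not a final design; the helper names, the one-second polling interval, and the "empty dir" test are all assumptions:

```shell
#!/usr/bin/env bash
# Rough sketch of ext/bin/check-command-perf (names and layout are
# assumptions).  Expects PDBBOX to be set, pdb to be stopped, and a
# stockpile queue archive as the sole argument.
set -euo pipefail

# True when the given directory has no entries at all.
queue_empty() { [ -z "$(ls -A "$1")" ]; }

run_check() {
    local archive="$1"
    local queue="$PDBBOX/var/stockpile/cmd/q"

    # Refuse to run unless the queue starts out empty.
    queue_empty "$queue" || { echo "queue not empty: $queue" >&2; return 2; }
    tar xzpf "$archive" -C "$queue"

    ./pdb services -c "$PDBBOX/pdb.ini" &
    local pdb_pid=$! start
    start=$(date +%s)
    until queue_empty "$queue"; do sleep 1; done
    echo "queue drained in $(( $(date +%s) - start ))s"
    kill "$pdb_pid"
}

# Only run for real when invoked with an archive inside a pdbbox.
if [ "${PDBBOX:-}" ] && [ "$#" -eq 1 ]; then
    run_check "$1"
fi
```

Whatever language we pick, the same shape applies: assert preconditions, seed the queue, start the service, poll until drained, then report the elapsed time.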
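For the end-of-run metrics checks, a bash + jq approach might look roughly like this. The /metrics/v2/read endpoint path, the port, and the MBean names below are assumptions (placeholders, not verified against any particular PDB version) and would need to be checked against the version under test:

```shell
#!/usr/bin/env bash
# Hypothetical post-run metrics sanity check.  The endpoint path and the
# MBean names are assumptions; adjust them to the running PDB version.
set -euo pipefail

# Pull a counter out of a Jolokia-style metrics response on stdin,
# defaulting to 0 when the field is absent.
metric_count() { jq -r '.value.Count // 0'; }

# Fail if the named counter is nonzero (e.g. commands discarded to the DLO).
check_zero() {
    local name="$1" url="$2" n
    n=$(curl -s "$url" | metric_count)
    if [ "$n" -gt 0 ]; then
        echo "unexpected $name: $n" >&2
        return 1
    fi
}

if [ "${PDBBOX:-}" ]; then
    base="http://localhost:8080/metrics/v2/read"
    # Hypothetical MBean names -- substitute the real discarded/retried metrics.
    check_zero "discarded commands" "$base/puppetlabs.puppetdb.dlo:name=global.messages"
    check_zero "retried commands" "$base/puppetlabs.puppetdb.mq:name=retried"
fi
```

If the checks grow much beyond a couple of counters, that would argue for the clojure (lein alias) option, where the metrics registry can be queried directly.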