To make things easier, I typically create two programs:
* "aiapply" will apply all the machine's current recipes (e.g., "ai
/var/local/automateit/recipes/all.rb").
* "aiupgrade" will pull the latest changes from the recipe source
repository (e.g., git) and apply them.
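These wrappers aren't part of AutomateIt itself; a minimal sketch
(assuming the recipes live in /var/local/automateit and are tracked in
git) might look like:

```shell
# Hypothetical wrapper scripts; the paths and repo layout are
# assumptions, adjust to your setup.

# "aiapply": apply the machine's current recipes.
aiapply() {
  ai /var/local/automateit/recipes/all.rb
}

# "aiupgrade": pull the latest recipes, then apply them.
# The subshell keeps the cd from leaking into the caller.
aiupgrade() {
  ( cd /var/local/automateit && git pull --quiet ) && aiapply
}
```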
I find that using cron to "pull" by running "aiupgrade" once an hour
works well when you're managing many systems, the admin team has good
discipline about not making changes outside the recipes, and quality
assurance is good enough that there are no surprises. If you're
automatically applying changes, you REALLY should set up a monitoring
system to alert you if an automated pull breaks something (e.g., use
"monit" to retrieve a web page from a Rails app that makes database
calls and check that it contains an expected string as a sanity check).
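That monit check isn't spelled out above; a rough sketch (hostname,
URL, and expected string are all placeholders) might be:

```
check host railsapp with address www.example.com
  if failed port 80 protocol http
     request "/sanity_check"
     content = "DB OK"
  then alert
```

The request hits a page that exercises the database, so a pull that
breaks either the app or its schema trips the alert.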
Meanwhile, I find that doing a manual "push" (just running "aiupgrade"
on remote machines via ssh or aissh) is nice for applying complex
changes, or in situations where you've had less QA, are concerned that
changes were made behind your back, or are generally more worried about
something going wrong: a push lets you babysit the process and fix
anything that comes up.
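A trivial push helper (name and error handling are mine, not part of
any tool) could be as simple as:

```shell
# Hypothetical "push" helper: run aiupgrade on each remote host via
# ssh so you can watch the output and intervene if something breaks.
push_upgrade() {
  for host in "$@"; do
    echo "=== $host ==="
    ssh "$host" aiupgrade || echo "FAILED on $host" >&2
  done
}

# e.g.: push_upgrade web1 web2 db1
```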
A useful middle ground is to use cron to update the recipes from the
source repository and apply them with "--preview" once a day at 2am,
so that cron emails you about any outstanding changes you forgot to
apply, or alerts you that someone modified things outside the recipes.
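As crontab entries (times and paths are just examples, and the wrapper
scripts are assumed to be on cron's PATH), the two schedules might
look like:

```
# Hourly automated pull-and-apply:
0 * * * *  aiupgrade
# Daily 2am preview; cron mails any output, i.e. any pending changes:
0 2 * * *  cd /var/local/automateit && git pull --quiet && ai --preview recipes/all.rb
```

Since cron mails non-empty job output to the owner, a quiet git pull
plus a preview run means you only get mail when something is pending.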
-igal
I have been fooling around a bit; here is what I have done so far...
I have created a Ruby project which is checked into version control.
This project contains recipe bundles (a directory layout similar to
Puppet's or Chef's), each containing a recipe and all the templates
needed to run it. It also contains a directory called "hosts" which
holds host bundles (the host recipe and all of its files). I have
built up a pretty simple framework which runs these bundles after
determining the host name. I don't use the tag mechanism; instead,
the host recipe has a series of method calls like
has_tag 'blah'
has_tag 'something_else'
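A minimal sketch of how such a framework might hang together (the
class name, file names, and directory layout here are my guesses, not
the poster's actual code):

```ruby
# Hypothetical host-bundle runner. The host recipe declares tags via
# has_tag, and each tagged recipe bundle is then run from its own
# directory under recipes/.
class HostBundle
  attr_reader :tags

  def initialize(project_dir, hostname)
    @project_dir = project_dir
    @hostname = hostname
    @tags = []
  end

  # Called from the host recipe: has_tag 'blah'
  def has_tag(name)
    @tags << name
  end

  def run
    # Evaluate the host recipe so its has_tag calls land on this object.
    host_recipe = File.join(@project_dir, 'hosts', @hostname, 'recipe.rb')
    instance_eval(File.read(host_recipe), host_recipe)
    # Then execute each shared recipe bundle the host declared.
    @tags.each do |tag|
      load File.join(@project_dir, 'recipes', tag, 'recipe.rb')
    end
  end
end
```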
This allows me to share recipes amongst hosts, isolates the recipes to
a single directory, and allows me to put the configuration for all the
hosts in the same project.
Each recipe is executed inside a rescue block which captures all the
errors, and the entire invocation is wrapped in a rescue block too.
At the end of the process it checks whether there were errors and
emails them to me. It also sends a heartbeat to a Zabbix server,
which raises an alarm if the heartbeat doesn't arrive (meaning
something serious went wrong).
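In outline (class and method names are invented, and the mailer and
heartbeat are left as stubs for whatever you actually use), that
error handling might look like:

```ruby
# Hypothetical error-collecting runner: per-recipe rescue, an outer
# rescue around the whole invocation, then email + heartbeat.
class RecipeRunner
  attr_reader :errors

  def initialize
    @errors = []
  end

  # Run one recipe, capturing (not raising) any error.
  def run_recipe(name)
    yield
  rescue StandardError => e
    @errors << "#{name}: #{e.class}: #{e.message}"
  end

  def run_all(recipes)
    recipes.each { |name, block| run_recipe(name, &block) }
    email_errors unless @errors.empty?
    send_heartbeat  # Zabbix alarms if this stops arriving
  rescue StandardError => e
    # Fatal failure: report, and deliberately skip the heartbeat so
    # the missing-heartbeat alarm fires.
    @errors << "fatal: #{e.class}: #{e.message}"
    email_errors
  end

  def email_errors
    # stub: e.g. Net::SMTP, or shell out to mail(1)
  end

  def send_heartbeat
    # stub: e.g. shell out to zabbix_sender
  end
end
```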
I am thinking of incorporating facter into the process so I can send
"facts" to Zabbix as well. Zabbix can then raise alarms if a disk is
getting low on space or there is not enough RAM or something.
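zabbix_sender is the usual way to push values into Zabbix from a
script; a rough sketch of that glue (the fact names, key prefix, and
server address are all placeholders) might be:

```ruby
# Hypothetical facter -> Zabbix glue. Builds a zabbix_sender command
# line (-z server, -s host, -k key, -o value) per fact.
def zabbix_sender_cmd(server, host, key, value)
  ['zabbix_sender', '-z', server, '-s', host, '-k', key, '-o', value.to_s]
end

def send_facts(server, host, facts = %w[memorysize_mb hostname])
  require 'facter'
  facts.each do |fact|
    value = Facter.value(fact)
    # Key names like "fact.<name>" would need matching Zabbix items.
    system(*zabbix_sender_cmd(server, host, "fact.#{fact}", value))
  end
end
```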
So far I have not put this into production, but my simple tests show
that it seems to work OK. After I polish it up a bit I might put it
someplace where you guys can catch all the stupid mistakes I must
have made.