Hi Steve,
We're doing something like this in my team.
We have a Jenkins master machine that has 4 slaves:
- VMware hypervisor
- VirtualBox hypervisor
- QEMU hypervisor
- Instance on AWS with an instance profile that grants the necessary rights to build an AMI
The Packer definitions are checked out from a Git repository, a Git tag is created, a pre-build script is run, the templates are validated with jq and packer validate, and an image is produced by each hypervisor. Each image is uploaded to JFrog Artifactory into a staging repository. It's pulled down by Kitchen, booted up on each hypervisor, and tested. Additionally, we run automated security tests with OpenSCAP and get a compliance report for every machine image.
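For what it's worth, the pre-build validation boils down to something like this (a sketch only — the template path and the tag format here are made up, and our real script does a bit more):

```shell
#!/bin/sh
# Hypothetical pre-build check; template path and tag naming are assumptions.
prebuild() {
  template="$1"
  jq -e . "$template" > /dev/null || return 1   # fail fast on malformed JSON
  packer validate "$template"     || return 1   # let packer check builders/provisioners
  git tag "image-$(date +%Y%m%d)-$(git rev-parse --short HEAD)"
}
# e.g. prebuild centos7-base.json
```

The jq pass is there so a trivial syntax error fails in a second instead of after packer has started spinning up builders.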
At the end of our Jenkins pipeline there is a manual approval stage. For VirtualBox, VMware, and QEMU, approval moves the image from the staging repository into a production repository; Artifactory has a Vagrant Cloud-like API and notifies all users that new images are available. The other images are just left sitting there with their appropriate version information.
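The promotion itself is just Artifactory's item-move REST call. Roughly like this — the host and repository names here are invented for illustration:

```shell
#!/bin/sh
# Hypothetical promotion helper; the Artifactory host and repo names are assumptions.
ARTIFACTORY=https://artifactory.example.com/artifactory

move_url() {
  # $1 = path of the box inside the staging repo
  echo "${ARTIFACTORY}/api/move/vagrant-staging/$1?to=/vagrant-release/$1"
}

promote() {
  curl -sSf -X POST -u "$ARTIFACTORY_USER:$ARTIFACTORY_PASS" "$(move_url "$1")"
}
# e.g. promote "centos7/base/1.2.0/base-1.2.0.box"
```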
On AWS we change a tag on the image from stage: release-candidate to stage: release and set permissions on the AMI so all our other AWS accounts can access it. Our AMIs are stored in an isolated account that serves no purpose other than to build and distribute AMIs. As images age we remove the permissions for our other accounts to boot them.
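On the AWS side the release step is only a couple of CLI calls. Something along these lines — the AMI ID and account IDs are placeholders:

```shell
#!/bin/sh
# Hypothetical release step; AMI ID and account IDs below are placeholders.
release_ami() {
  ami="$1"; shift
  # flip the stage tag from release-candidate to release
  aws ec2 create-tags --resources "$ami" --tags Key=stage,Value=release
  # grant launch permission to each consuming account
  for account in "$@"; do
    aws ec2 modify-image-attribute --image-id "$ami" \
      --launch-permission "Add=[{UserId=$account}]"
  done
  # the aging step is the same call with Remove=[{UserId=...}]
}
# e.g. release_ami ami-0123456789abcdef0 111111111111 222222222222
```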
We have defined what we call "the machine image catalog": effectively a hierarchy of machine images, which goes as follows:
Foundation image: This runs on VirtualBox only; it basically gets us from CD-ROM to virtual disk. It gives just enough OS to boot.
Membrane image: Before stepping into this image, the VirtualBox VMDK is converted to formats that will boot on VMware and QEMU. We boot the image on all three hypervisors and install the tooling specific to each (OS guest additions, Vagrant user/SSH key, etc.); our developers consume these. For the AWS case we boot the VMDK in VirtualBox, install cloud-init, and then use the amazon-import post-processor. There are some tricks required to bring this in under the Red Hat Enterprise Linux metered billing that AWS offers, but it is possible.
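The format conversion at the start of the membrane stage can be done with qemu-img. A sketch, assuming the foundation build leaves a plain VMDK behind (file naming is made up):

```shell
#!/bin/sh
# Hypothetical conversion helper; the file-naming scheme is an assumption.
convert_image() {
  src="$1"                                    # VMDK exported from VirtualBox
  # qcow2 for QEMU/KVM
  qemu-img convert -f vmdk -O qcow2 "$src" "${src%.vmdk}.qcow2"
  # stream-optimized VMDK for VMware
  qemu-img convert -f vmdk -O vmdk -o subformat=streamOptimized \
    "$src" "${src%.vmdk}-vmware.vmdk"
}
# e.g. convert_image foundation.vmdk
```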
Base image: Here we implement all our company standards: umask, IPv6 disabling, logging, NTP servers, and so on.
From the base image we make several other images, such as one with the JDK baked in, one with MySQL baked in, and so on.
From these images we bake still more images; for example, from the JDK image we'll bake Tomcat.
Hope this is what you're looking for.
Ian.