I wanted to share how we run our AutoPkg recipes at Gusto using GitHub Actions, to see if anyone is doing the same and to gather optimization tips.
This process is the outcome of migrating our AutoPkg infrastructure to be version controlled and auditable, and it has been in production since December without any hiccups. We're pretty happy with the results, especially since we were able to retire a Mac Pro, and updating AutoPkg or Munki versions is now a one-line change. It currently takes about an hour to run through 89 recipes. We get 50,000 free runner minutes with GitHub Enterprise; since no other teams are using GitHub Actions, our AutoPkg builds are gratis, but if we had to pay it would be about $40/month.
AutoPkg runs out of a private repo on github.com. The repo structure isn't anything special, but the general concept is that everything related to AutoPkg should run out of this repo. We have a package promotion job and a repoclean job that run daily and weekly respectively; those run on Ubuntu runners and finish in a few seconds, so they barely cost anything. Recipe repos are cloned at the beginning of each run into `repos/`, which is excluded from version control by our `.gitignore`. Recipe trust overrides are stored in `overrides/`, and the list of recipe repos lives in `repo_list.txt`.
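For reference, the layout looks roughly like this (only the pieces mentioned in this post; the workflow filename is illustrative):

```
.
├── .github/
│   └── workflows/
│       └── autopkg.yml    # scheduled workflow (filename illustrative)
├── overrides/             # recipe trust overrides
├── repos/                 # recipe repos, cloned each run (gitignored)
├── autopkg_tools.py       # forked Facebook runner script
├── recipe_list.json       # recipes to run
└── repo_list.txt          # recipe repos to add
```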
My teammate Harry Seeber forked and rewrote Facebook's `autopkg_tools.py` script; it iterates over a list of recipes in `recipe_list.json` and pushes successful recipe builds into a separate Git LFS repo. The build results are posted to a Slack channel so we can fix any recipe trust issues with a pull request.
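Our fork isn't public yet, but the core loop is simple enough to sketch. This is a hypothetical, stripped-down illustration, not the actual script — the function names and reporting are mine: read `recipe_list.json`, shell out to `autopkg run` for each recipe, and collect failures for the Slack post.

```python
import json
import subprocess

def load_recipes(path):
    """Read the list of recipe identifiers from recipe_list.json."""
    with open(path) as f:
        return json.load(f)

def run_recipe(recipe):
    """Run a single recipe; return (recipe, succeeded) for later reporting.
    With FAIL_RECIPES_WITHOUT_TRUST_INFO set, trust failures exit non-zero."""
    result = subprocess.run(["autopkg", "run", "-v", recipe],
                            capture_output=True, text=True)
    return recipe, result.returncode == 0

def run_all(path):
    results = [run_recipe(r) for r in load_recipes(path)]
    failures = [r for r, ok in results if not ok]
    # The real script posts failures to Slack and commits new builds
    # to the Git LFS repo at this point.
    return failures
```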
We've done some optimization already, like only doing shallow git clones and running tasks on non-macOS runners where possible.
Future plans:
I'd like to add our developer cert as a GitHub Actions secret to sign packages like munkitools, but stamping our approval on unreviewed code scares me a little, so I haven't tackled it yet.
Storing the `Last-Modified` headers from AutoPkg's URLDownloader processor in S3 to avoid unnecessary installer downloads. Yesterday, 81 of our 89 recipes downloaded binaries, verified code signatures, and then didn't import them because the same versions already existed in Munki. That wastes CPU cycles, bandwidth, and (probably coal-powered) electricity. Caching the `Last-Modified` headers could cut our run times to roughly 10% of the current workflow.
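Since the runners are ephemeral, any local cache AutoPkg keeps is lost between runs, which is why the headers need to live somewhere persistent like S3. The decision logic is simple: issue a HEAD request, compare the server's `Last-Modified` against the cached value, and only download on a change. A hedged sketch, using a plain dict in place of the S3-backed cache (the S3 plumbing is omitted):

```python
from urllib.request import Request, urlopen

def is_stale(url, last_modified, cache):
    """Decide whether to re-download, given the Last-Modified header
    from a HEAD request. `cache` maps URL -> last seen header; in
    production this would be backed by S3, not an in-memory dict."""
    if last_modified is None:
        return True  # server sent no header; download to be safe
    if cache.get(url) == last_modified:
        return False  # unchanged since the cached run, skip download
    cache[url] = last_modified
    return True

def fetch_last_modified(url):
    """HEAD the download URL and return its Last-Modified header."""
    req = Request(url, method="HEAD")
    with urlopen(req) as resp:
        return resp.headers.get("Last-Modified")
```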
Migrating our Munki repo to S3/CloudFront/Lambda. The existing basic auth implementations I've seen are vulnerable to timing attacks, so we'd need to write a fix for that.
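The usual bug is comparing the supplied credentials with `==`, which short-circuits on the first differing byte and leaks timing information. The standard fix is a constant-time comparison via `hmac.compare_digest`. A sketch of what the check in a Lambda authorizer might look like (the header parsing and credential names are placeholders, not any existing implementation):

```python
import base64
import hmac

def check_basic_auth(header_value, expected_user, expected_password):
    """Validate an HTTP Basic auth header in constant time.
    hmac.compare_digest avoids the short-circuit timing leak of `==`."""
    try:
        scheme, encoded = header_value.split(" ", 1)
        if scheme.lower() != "basic":
            return False
        user, _, password = base64.b64decode(encoded).decode().partition(":")
    except Exception:
        return False
    # Evaluate both comparisons so a wrong username and a wrong
    # password take the same amount of time.
    user_ok = hmac.compare_digest(user, expected_user)
    pass_ok = hmac.compare_digest(password, expected_password)
    return user_ok and pass_ok
```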
Here's a redacted workflow file for running AutoPkg:
```yaml
name: AutoPkg run
on:
  schedule:
    - cron: '00 14 * * 1-5'
jobs:
  AutoPkg:
    runs-on: macos-latest
    timeout-minutes: 90
    steps:
      - name: Checkout AutoPkg recipes
        uses: actions/checkout@01aecccf739ca6ff86c0539fbc67a7a5007bbc81
        with:
          fetch-depth: 1
      - name: Install Munki
        run: |
          curl -L https://github.com/munki/munki/releases/download/v4.1.4/munkitools-4.1.4.3949.pkg --output /tmp/munkitools.pkg
          sudo installer -pkg /tmp/munkitools.pkg -target /
      - name: Install AutoPkg
        run: |
          curl -L https://github.com/autopkg/autopkg/releases/download/v2.1/autopkg-2.1.pkg --output /tmp/autopkg.pkg
          sudo installer -pkg /tmp/autopkg.pkg -target /
      - name: Checkout Gusto munki repo
        uses: actions/checkout@01aecccf739ca6ff86c0539fbc67a7a5007bbc81
        with:
          repository:
          token:
          fetch-depth: 1
          ref: refs/heads/master
          path: munki_repo
      - name: Configure AutoPkg
        run: |
          defaults write com.github.autopkg RECIPE_OVERRIDE_DIRS "$(pwd)/overrides/"
          defaults write com.github.autopkg RECIPE_REPO_DIR "$(pwd)/repos/"
          defaults write com.github.autopkg FAIL_RECIPES_WITHOUT_TRUST_INFO -bool YES
          defaults write com.github.autopkg MUNKI_REPO "$GITHUB_WORKSPACE/munki_repo"
          defaults write com.github.autopkg GITHUB_TOKEN
      - name: Add AutoPkg repos
        run: |
          for repo in $(cat repo_list.txt); do autopkg repo-add "$repo" && autopkg repo-update "$repo"; done
      - name: Run makecatalogs
        run: |
          /usr/local/munki/makecatalogs munki_repo
      - name: Run AutoPkg
        run: |
          python3 autopkg_tools.py -l recipe_list.json
```
I'm happy to answer any questions people might have and welcome any suggestions for improvements. I'm hoping we can share an example repo and blog article of this process before the beginning of July.