The strategy we use at SFU is to maintain our
own fork of the Instructure Canvas repository with our local modifications and deployment scripts. We use Atlassian Bamboo and Capistrano to deploy to our cluster; it's a complex setup, but you could replicate most of it manually.
The canvas root (/var/rails/canvas, for us) looks like this:
/var/rails/canvas/
├── current -> /var/rails/canvas/releases/20150317171944
├── releases
├── repo
├── revisions.log
└── shared
Canvas is deployed into datestamped directories under /releases (e.g. 20150317171944 — the date and time of that deployment). /current is a symlink to the active release. Capistrano keeps the last five deploys in /releases, so rolling back is just a matter of moving the symlink and restarting.
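Because the datestamped names sort chronologically, a rollback under this layout boils down to a symlink flip. A minimal sketch, assuming the paths above (the helper name is mine, not part of our actual tooling):

```shell
# Hypothetical helper: repoint "current" at the second-newest release
# and touch restart.txt so Passenger picks up the change.
rollback_to_previous() {
    root="$1"    # e.g. /var/rails/canvas
    # Datestamped names sort chronologically, so the second-to-last
    # entry in releases/ is the previous release.
    prev=$(ls -1 "$root/releases" | sort | tail -n 2 | head -n 1)
    ln -sfn "$root/releases/$prev" "$root/current"
    touch "$root/current/tmp/restart.txt"
}
```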
/shared contains the log and pids directories. These are symlinked from the Canvas installation directory, which looks like this:
.
├── app
├── bin
├── bower.json
├── build-number.txt
├── canvas-compiled.tar
├── Capfile
├── client_apps
├── config
├── CONTRIBUTING.md
├── COPYRIGHT
├── db
├── doc
├── Gemfile
├── Gemfile.d
├── Gemfile.lock
├── gems
├── guard
├── Guardfile
├── karma.conf.js
├── lib
├── LICENSE
├── log -> /var/rails/canvas/shared/log
├── loom
├── mnt
│ └── data
│ └── canvasfiles -> /mnt/data/canvasfiles
├── node_modules
├── package.json
├── provision
├── public
├── Rakefile
├── README.md
├── REVISION
├── script
├── spec
├── tmp
│ ├── pids -> /var/rails/canvas/shared/tmp/pids
│ └── restart.txt
├── Vagrantfile
├── Vagrant.md
└── vendor
The Apache configuration points at /var/rails/canvas/current, so we never have to touch it when we deploy.
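For reference, the relevant vhost bit might look something like this — a sketch only; the hostname is made up, and your Passenger directives will vary:

```apache
<VirtualHost *:80>
    # Hypothetical hostname
    ServerName canvas.example.edu
    # Points at the "current" symlink, so deploys never touch this file
    DocumentRoot /var/rails/canvas/current/public
    <Directory /var/rails/canvas/current/public>
        Options -MultiViews
        Require all granted
    </Directory>
</VirtualHost>
```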
When we do a deploy, the following happens:
- Capistrano clones the deployment branch of our fork to a new directory inside /releases
- Config files are copied from a central location (an NFS mount on each of our 20 app nodes) into the config directory
- The file store (another NFS mount) is symlinked into place in mnt/data
- The log and pids directories are symlinked back to /var/rails/canvas/shared
- The usual Canvas installation steps take place (bundle install, npm install, bundle exec rake canvas:compile_assets, bundle exec rake db:migrate)
- Assuming everything goes well (Capistrano aborts if any step exits with a non-zero exit code), the /current symlink is moved to the new release directory and restart.txt is touched, so the app restarts on the next request. We use Passenger Enterprise, which gives us rolling restarts - there's no downtime.
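The filesystem side of those steps can be sketched roughly as follows. This is a simplification with hypothetical paths for the config and file-store mounts; the clone, build, and migrate steps are elided as comments:

```shell
# Hypothetical sketch of preparing one release directory.
# $1 = canvas root, $2 = central config dir (NFS), $3 = file store (NFS)
prepare_release() {
    root="$1"; config_src="$2"; files_src="$3"
    release="$root/releases/$(date +%Y%m%d%H%M%S)"
    mkdir -p "$release"
    # (real deploy: Capistrano clones the deployment branch here)
    # Copy config files in from the central NFS mount
    mkdir -p "$release/config"
    cp -R "$config_src/." "$release/config/"
    # Symlink the NFS file store into place
    mkdir -p "$release/mnt/data"
    ln -s "$files_src" "$release/mnt/data/canvasfiles"
    # Symlink log and pids back to the shared directory
    ln -s "$root/shared/log" "$release/log"
    mkdir -p "$release/tmp"
    ln -s "$root/shared/tmp/pids" "$release/tmp/pids"
    # (real deploy: bundle install, npm install,
    #  rake canvas:compile_assets, rake db:migrate)
    echo "$release"
}
```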
In essence, we're not doing an "upgrade" - we're installing the new release and just moving a pointer.
The above is a bit oversimplified; there's a build server (Atlassian Bamboo) that does all of this for us, and it actually does it a bit differently (it builds the assets on a build box, and SCPs the built Canvas tarball over to each machine, rather than building it on all 20 servers).
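The build-once-and-distribute idea reduces to packing the compiled tree on the build box and unpacking it on each node. A rough sketch — the function names are mine, and the SCP between the two steps is elided:

```shell
# Hypothetical: on the build box, tar up the compiled Canvas tree.
package_build() {
    tar -cf canvas-compiled.tar -C "$1" .
}

# Hypothetical: on each app node, unpack into the new release directory.
unpack_build() {
    mkdir -p "$2"
    tar -xf "$1" -C "$2"
}
```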
This has worked very well for us; we mirror Instructure's three-week release cycle (but remain one release behind), so we're deploying live every three weeks. It takes about 20 minutes from pushing the "GO" button to having the new release serving users on all nodes.
--
Graham Ballantyne
IT Services
Simon Fraser University