Hoping To Get Some Advice On An Upgrade (Rails 4.2 -> Rails 5)

Matt Payne

Aug 20, 2016, 12:53:58 PM
to rubber
Hi,

Sorry - a bit of a long message below, but this needs a bit of explaining. Hopefully someone's bored on a Saturday and doesn't mind taking a look (Kevin?)

I'm running Rubber 3.2.2 and am in the process of upgrading our application from Rails 4.2 to Rails 5. As a result of this, I need to bump our Ruby version from 2.1.2 to Ruby 2.3.1.

I was hoping to validate my thoughts with someone familiar with Rubber since I'm a bit of an amateur with it. This is the first time I've tried something like this.

Here's our current infrastructure setup:
  1. PostgreSQL on RDS (not managed via Rubber)
  2. 10 EC2 instances, one of which has the db:primary role. All instances also have the web and app roles
  3. 2 EC2 instances both of which have the sidekiq role
  4. 1 EC2 instance running Redis (not managed via Rubber)
  5. The web instances are behind several ELBs
So far, I have done the following (the rubber-ruby.yml changes are sketched below):
  1. In rubber-ruby.yml, updated the ruby_build_version to 20160602
  2. In rubber-ruby.yml, updated the ruby_version to 2.3.1
  3. Updated the Rails app itself appropriately
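For reference, here's roughly what the rubber-ruby.yml change looks like (a sketch showing only the two keys I touched; the rest of the file is unchanged):

  # config/rubber/rubber-ruby.yml (relevant keys only)
  ruby_build_version: 20160602  # release of ruby-build used to compile the new Ruby
  ruby_version: 2.3.1           # version that cap rubber:bootstrap will install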
My goal is to get everything upgraded with as little downtime as possible, since we have a fairly busy application.

I see two possible approaches to doing the server upgrades (a rough command sketch follows each list):

The first:
  1. Merge my Rails 5 branch into master
  2. Go into offline mode
  3. cap rubber:sidekiq:quiet on the sidekiq workers so they stop accepting new jobs
  4. cap rubber:bootstrap across all instances to get the Ruby upgrade
  5. cap deploy across all instances to deploy the new code
  6. cap rubber:sidekiq:restart on the sidekiq workers
  7. Go back into online mode
  8. Hopefully everything works
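As a command sequence, that's roughly the following (I'm assuming offline/online mode here is Capistrano's standard deploy:web:disable/enable maintenance toggle - substitute whatever we actually use for that):

  cap deploy:web:disable      # go into offline mode
  cap rubber:sidekiq:quiet    # workers finish in-flight jobs, accept no new ones
  cap rubber:bootstrap        # build and install Ruby 2.3.1 on all instances
  cap deploy                  # deploy the Rails 5 code everywhere
  cap rubber:sidekiq:restart  # bring the workers back
  cap deploy:web:enable       # back into online mode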
The second:
  1. Do not merge to master yet. Switch to my Rails 5 branch
  2. cap rubber:create new instances, one for each of the original instances. 
  3. cap rubber:bootstrap all new instances. Then I should have 12 EC2 instances that are now running Ruby 2.3.1.
  4. Merge my Rails 5 branch into master
  5. Switch to master
  6. cap deploy all new instances. 
  7. Do not add the new instances to the ELBs yet.
  8. Test new instances
  9. Put app in offline mode
  10. Remove old instances from ELBs
  11. Add new instances to ELBs
  12. Put app back in online mode
  13. Hopefully everything works
  14. Decommission old instances
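And the Rubber side of the second approach in command form (the instance aliases and the FILTER usage are illustrative - rubber:create prompts for the alias and roles interactively):

  cap rubber:create                        # repeat for each of the 12 replacement instances
  FILTER=web11,web12 cap rubber:bootstrap  # bootstrap only the new boxes, installing Ruby 2.3.1
  # ...merge the Rails 5 branch into master, switch to it, then:
  FILTER=web11,web12 cap deploy            # deploy only to the new instances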
The problem that I see with the first approach is that it's a bit of a black box. I'm concerned about potential failures and downtime due to unforeseen issues.

The second approach seems logical, but also potentially problematic for the following reasons:
  1. The minute I cap deploy the sidekiq workers, they will begin processing jobs, which I don't really want. I guess I could quickly quiet them if I have to (sketched after this list), but maybe there's a better approach?
  2. I tested this approach with a single new instance on the web side of things, and it seemed to work, except that, for a reason I've been unable to pin down, it also installed and started PostgreSQL, which is not what I want.
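The quick-quiet workaround I mean in point 1 would look something like this (the FILTER values are made up, and there's still a small window where a freshly started worker could grab a job before the quiet lands):

  FILTER=sidekiq03,sidekiq04 cap deploy                # deploy only the new sidekiq boxes
  FILTER=sidekiq03,sidekiq04 cap rubber:sidekiq:quiet  # then immediately stop them taking new jobs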

If anyone on the list has an opinion or advice on how best to go about this, I'd really appreciate hearing it!

Cheers,
Matt

Matt Payne

Aug 21, 2016, 12:36:39 PM
to rubber
Hmmm ... just to add to this:

Just tried provisioning a new instance from my current config in master, with only the web and app roles, and for some reason it installs and starts PostgreSQL. I must have something misconfigured.

I do still have rubber-postgresql.yml and deploy-postgresql.rb files in the project, because in our staging environment everything is installed on a single server. If it helps, I can supply the contents of these files.
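One theory I'm checking: in Rubber's config files, top-level settings apply to every instance, while settings nested under a roles: key apply only to instances carrying that role. If the PostgreSQL server package ended up at the top level of rubber-postgresql.yml, that would explain it getting installed everywhere. Roughly the difference (the package and role names here are illustrative, not my actual file):

  # applies to ALL instances:
  packages: [postgresql]

  # applies only to instances with the db role:
  roles:
    db:
      packages: [postgresql]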

If anyone has any ideas about this issue, that would also be most appreciated, as it's starting to become a blocker to moving this all forward.

Cheers,
Matt