Atomic deployments


Stan Lemon

Dec 26, 2013, 6:01:50 PM
to ansible...@googlegroups.com
Hello,
I was wondering if anyone was aware of a playbook or module out in the wild that handled atomic deployments similar to the copy strategy in Capistrano 2?

Thanks in advance,
Stan

Brian Coca

Dec 27, 2013, 9:49:54 AM
to ansible...@googlegroups.com
No, but it's easy to implement with Ansible:

- action: whatever you need to deploy the code to a release-specific dir (git/copy/unarchive/etc.)

- file: path=/production/link src=/path/you/just/deployed/to state=link

- service: name=appserver state=reloaded|restarted
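
Fleshed out, those three steps might look like this (the repo URL, paths, and {{ version }} variable are all hypothetical, in 2013-era key=value syntax):

```yaml
# Each release lands in its own directory; flipping the symlink then
# makes it live in a single atomic step.
- name: deploy this release into its own directory
  git: repo=git@example.com:myapp.git
       dest=/var/www/releases/{{ version }}

- name: atomically point the live path at the new release
  file: path=/var/www/current
        src=/var/www/releases/{{ version }}
        state=link

- name: pick up the new code
  service: name=appserver state=restarted
```

Since the symlink swap is a single rename at the filesystem level, requests never see a half-deployed tree.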

Michael DeHaan

Dec 27, 2013, 2:11:59 PM
to ansible...@googlegroups.com
Yep, exactly. 

I would probably do something like making the destination path of the git module include the version number ({{ version }}),

and that way you wouldn't have to do a live update on the existing code tree.

(General disclaimer -- many folks would use a load balancer instead for this purpose, as not all applications can handle simply swapping out the code directory; some require service restarts. Of course, for those that can, this is great.)




--
You received this message because you are subscribed to the Google Groups "Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ansible-proje...@googlegroups.com.
To post to this group, send email to ansible...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.



--
Michael DeHaan <mic...@ansibleworks.com>
CTO, AnsibleWorks, Inc.
http://www.ansibleworks.com/

PePe Amengual

Dec 27, 2013, 3:16:49 PM
to ansible...@googlegroups.com
I'm doing exactly this, and once I finish I will upload it to Ansible Galaxy.

The rolling restart and LB pool problem is easy to solve if your LB can check for a status file on the webserver: if the file is not present, the LB takes that server out of the pool automatically. Then you can be sure there are no connections on the server, and you can perform your upgrade/deploy.
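
As a sketch, that status-file trick could look like the following tasks (the file path, pause length, and modules chosen are assumptions, not a finished role):

```yaml
# Drain a host before deploying: remove the status file the LB polls,
# wait for the LB to notice and drain connections, then restore it.
- name: take this host out of the LB pool
  file: path=/var/www/current/health.txt state=absent

- name: give the LB time to mark the host down and drain connections
  pause: seconds=30

# ... upgrade/deploy tasks go here ...

- name: put the host back into the pool
  copy: content=OK dest=/var/www/current/health.txt
```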

Michael DeHaan

Dec 27, 2013, 5:37:57 PM
to ansible...@googlegroups.com
Great! This would make an awesome role.

-- Michael


Stan Lemon

Dec 27, 2013, 4:59:36 PM
to ansible...@googlegroups.com
Thanks! I’d love to take a look at that. Rolling LBs aren’t a requirement for my current project, though they’re not entirely out of the foreseeable future.

Here is a gist of what I threw together today while fiddling around:

I’d appreciate any feedback or suggestions as I’m rather new to Ansible.

I have two, arguably three, apps that need this same type of deployment, so I was leaning toward putting the tasks in their own separate file and including that in an app-specific playbook. Any advice there would also be appreciated.

Thanks,

-- 
Stan Lemon



Brian Coca

Dec 29, 2013, 10:52:49 AM
to ansible...@googlegroups.com
Nothing broken that I can see, but a few things:

- local_action: and delegate_to: 127.0.0.1 mean the same thing; you can remove one, you don't need both.

- sudo: false is the default; since you don't set it to true at the play level, you should only need to set it to true for the tasks that require it.

- git archive can create a tarball that already excludes special files/dirs (like .git)
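
For reference, that last point could be a single local build step like this (the tarball path is hypothetical; note git archive only exports committed content, not the working tree):

```yaml
# Runs on the control machine; git archive exports the committed tree
# without .git or other repository metadata, ready for unarchive/copy.
- name: build a clean release tarball
  local_action: command git archive --format=tar.gz -o /tmp/myapp.tar.gz HEAD
```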



--
Brian Coca

Michael DeHaan

Dec 29, 2013, 7:25:28 PM
to ansible...@googlegroups.com
I'd maybe consider passing in the version (git tag, etc.?) you are deploying with "-e"; then you could probably skip the "timestamp" step too.

Note that the timestamp might differ between hosts, so passing it in seems better to me: that way your directory names would be consistent.

(Or otherwise, use the hostvars trick to get the variable from a very specific host)




Stan Lemon

Dec 30, 2013, 5:36:14 PM
to ansible...@googlegroups.com
Which is preferable then, local_action or delegate_to -- does it matter at all?

With regards to sudo... I'm sure this is a noob question, but right now I've been running my playbooks with the sudo flag, and the local operations are the ones I did not want to run with sudo, hence the sudo: false. I have needed to run my playbooks with sudo because some of them run yum installs and so forth; should I not be doing that? What's the best/recommended approach on this front?

The reason I didn't use git archive is that I have some post-checkout build operations I wanted to perform before compressing and sending to the server.

Thanks so much for your help!

Stan Lemon

Dec 30, 2013, 5:41:58 PM
to ansible...@googlegroups.com
I'd actually done this with my local copy over the weekend and tied it into the deploy.yml that gets included. I'm still very much iterating, but Ansible let me get atomic deploys up and running over the weekend for three deploys (two apps, one with a staging branch) on a super-crusty old server that is due to be rebuilt next month.

"-e" flag, for git?  Not sure I'm familiar with this... maybe I misunderstand?

I may not understand how this works, so please correct me if that's the case, but I was exporting the timestamp on the local (aka deployment) machine and registering it as a variable to be used on the node(s) when deploying. I considered using a git SHA-1 for the build too, but then my "keepers" pruning wouldn't work reliably. I don't have strict build tag numbers (yet) from my CI system, but that would be another option.

Michael DeHaan

Dec 30, 2013, 5:45:32 PM
to ansible...@googlegroups.com
The issue is that if you have a play that runs across 50 hosts and you delegate 50 steps to localhost, they will all use a different timestamp, because Ansible was told to register 50 different versions of that variable.

-e to Ansible is the --extra-vars flag.

ansible-playbook foo.yml -e "version=1.2.3.4"

etc

The alternative would be to do the timestamp step only once:

- hosts: localhost
  tasks:
     - shell:  whatever
       register: time

- hosts: webservers
  roles:
     - do real things here

and then where needed:

{{ hostvars["localhost"]["time"]["stdout"] }}

and that way the timestamp would be consistent amongst hosts.



Stan Lemon

Dec 30, 2013, 6:02:11 PM
to ansible...@googlegroups.com
Ahhhh, that makes sense!  That’s a great suggestion, thank you!

Brian Coca

Dec 30, 2013, 6:10:29 PM
to ansible...@googlegroups.com
I normally have 3 plays for my deployment playbook.

- hosts: localhost
  tasks:
   - notify start (mail/jabber/irc)
   - checkout from repo
   - generate unique ids for release (git rev-parse --short HEAD)
   - build stuff
   - create deployment package
 
And then a deploy play:

- hosts: targets
  tasks:
  - take out of rotation
  - copy package
  - install package/symlink into place
  - restart services
  - put into rotation

And a cleanup play

- hosts: localhost
  tasks: 
   -  notify end
   -  cleanup!
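
A minimal end-to-end sketch of that three-play structure might be the following (every repo URL, path, and module choice here is illustrative, not a finished playbook):

```yaml
# Play 1: build locally, stamped with a unique release id.
- hosts: localhost
  tasks:
    - name: check out from repo
      git: repo=git@example.com:myapp.git dest=/tmp/build

    - name: generate a unique id for this release
      command: git rev-parse --short HEAD chdir=/tmp/build
      register: release

    - name: create the deployment package
      command: tar -czf /tmp/myapp-{{ release.stdout }}.tar.gz . chdir=/tmp/build

# Play 2: deploy to the targets with a symlink flip.
- hosts: targets
  tasks:
    - name: take out of rotation (the LB watches this file)
      file: path=/var/www/current/health.txt state=absent

    - name: copy and unpack the package into its own release dir
      unarchive: src=/tmp/myapp-{{ hostvars['localhost']['release']['stdout'] }}.tar.gz
                 dest=/var/www/releases/{{ hostvars['localhost']['release']['stdout'] }}

    - name: symlink the new release into place
      file: path=/var/www/current
            src=/var/www/releases/{{ hostvars['localhost']['release']['stdout'] }}
            state=link

    - name: restart services
      service: name=appserver state=restarted

    - name: put back into rotation
      copy: content=OK dest=/var/www/current/health.txt

# Play 3: clean up on the control machine.
- hosts: localhost
  tasks:
    - name: clean up the build area
      file: path=/tmp/build state=absent
```

Note the hostvars lookup: because the release id is registered on localhost in the first play, every target reads the same value, which keeps directory names consistent across hosts.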
  