Deployment techniques


Jake

May 25, 2014, 8:32:11 AM
to nod...@googlegroups.com
I'd like to know what people have done for node.js application deployments.

Coming from a compiled-language background, I'm used to compiling things, packing them up into some artifact, and deploying that to a server.  This seems unnecessary in node though.  

A few questions on this:
  1. Have you seen any benefit to packing up (perhaps just a zip file) all the files from a node application and putting that in an artifact repository?
  2. Is there any reason not to just tag the source repo and copy (rsync, scp, etc) the files to the servers from there?
  3. Is there any benefit to publishing the application to an internal npm repository and deploying from there?

Patrick Debois

May 25, 2014, 3:27:45 PM
to nod...@googlegroups.com
Hi Jake,

The question is: do you want repeatable and fast deploys to production?

- running npm install on each of the servers will potentially install different versions (have fun debugging that)
- minifying, hashing, and uploading to the CDN should be done only once, not per server

During testing you install all your devDependencies and grunt tasks, and pull things from GitHub if needed.

After testing a version, you package the app:
  • `npm prune --production` (remove dev dependencies)
  • add all remaining dependencies as bundledDependencies (basically vendoring; see `npm install bundle-deps`)
  • `npm shrinkwrap` (lock versions)
You can then use:
  • `npm pack` (to create a local tarball) and put it somewhere on S3
  • or use `npm publish` (with a private repo like sinopia)

Then we tag the version with `npm tag app@VERSION pre-prod`

Then you install it on pre-prod however you like:

We use sinopia as the distribution point for the artefact (for the meta information) and curl the tarball directly:

NAME='app' TAG='pre-prod'
VER=$(npm view "$NAME@$TAG" dist-tags.$TAG)
URL=$(npm view "$NAME@$TAG" dist.tarball)
curl "$URL" -u "$USER:$PASSWORD" -o "$NAME-$VER.tgz"



The only thing left is to run `npm rebuild` to recompile your native modules if needed (e.g. if your test system does not match production).

If that works, you only need to tag this version for production: `npm tag app@VERSION production`

Tagging the source repo is usually not enough: imagine you have to create a new build because a dependency changed. There are now two builds of the same source version. Unless you add all vendored dependencies to git as well - but we find this confusing (though some like it that way).

Some notes:
- npm shrinkwrap does not play well with some versions of npm (we use 1.4.5 for now)
- npm install is slow for big npm packages; curl does not have that issue
- `npm publish --tag` always adds the `latest` tag too; adding the tag to publishConfig in package.json does NOT add it to `latest`


After adopting this process, our builds became reliable, with no dependencies on external services (git, npmjs) while deploying/scaling new servers. Fewer tools are needed on the server, too, so the initial machine creation is very fast. Same for updates: just curl the new version, unpack it, and switch the version in nginx.
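The "switch the version in nginx" step can be made atomic with a symlink flip, so the server never sees a half-unpacked tree. A minimal sketch (all paths and version numbers here are made up, not from Patrick's actual setup):

```shell
#!/bin/sh
# Sketch: each release unpacks into its own directory; a "current"
# symlink is flipped atomically to point at the new one.
set -e
APP_ROOT=$(mktemp -d)      # stand-in for something like /opt/app
mkdir -p "$APP_ROOT/releases/1.0.0" "$APP_ROOT/releases/1.0.1"
echo "1.0.1" > "$APP_ROOT/releases/1.0.1/VERSION"
# -n replaces the symlink itself rather than writing inside its target
ln -sfn "$APP_ROOT/releases/1.0.1" "$APP_ROOT/current"
cat "$APP_ROOT/current/VERSION"   # prints 1.0.1
```

nginx (or a process manager) would then point at $APP_ROOT/current and pick up the new code on the next reload.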

 So my vote is:

  • YES, build all the things in test
  • package it so you're working on the same artifact in test, pre-prod, and prod
  • whether you use a repo is just a matter of convenience

hope this helps.
--
Job board: http://jobs.nodejs.org/
New group rules: https://gist.github.com/othiym23/9886289#file-moderation-policy-md
Old group rules: https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
---
You received this message because you are subscribed to the Google Groups "nodejs" group.
To unsubscribe from this group and stop receiving emails from it, send an email to nodejs+un...@googlegroups.com.
To post to this group, send email to nod...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/nodejs/e8700c10-dd53-48a4-9bc0-0919675d3f0c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Aria Stewart

May 25, 2014, 3:27:50 PM
to nod...@googlegroups.com

On May 25, 2014, at 8:32 AM, Jake <jp02...@gmail.com> wrote:

> I'd like to know what people have done for node.js application deployments.
>
> Coming from a compiled-language background, I'm used to compiling things, packing them up into some artifact, and deploying that to a server. This seems unnecessary in node though.
>

I think it is not needed. Maybe if you’re using binary modules for node (there aren’t a lot of common ones, but they exist!). Most of those concerns, though, go away if your development and production environments are the same OS and revision.

I don’t check my dependencies into my git repos, though, so if you want to lock those down, this can be one way to accomplish that.

> A few questions on this:
> • Have you seen any benefit to packing up (perhaps just a zip file) all the files from a node application and putting that in an artifact repository?

Not really — unless it simplifies transporting things to servers. I treat it as a transfer tool, not an artifact.

It can get you verifiability with a hash; that said, git gets you that too.
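On the hash point, verification is just comparing digests of the artifact before and after transfer; a toy sketch (file names invented, a local cp standing in for the network hop):

```shell
#!/bin/sh
# Sketch: verify that a transferred artifact matches the one that was built.
set -e
WORK=$(mktemp -d)
echo "console.log('hi')" > "$WORK/app.js"
tar -czf "$WORK/app.tgz" -C "$WORK" app.js
BUILT=$(sha256sum "$WORK/app.tgz" | cut -d' ' -f1)
cp "$WORK/app.tgz" "$WORK/transferred.tgz"   # stand-in for scp/rsync/curl
RECEIVED=$(sha256sum "$WORK/transferred.tgz" | cut -d' ' -f1)
[ "$BUILT" = "$RECEIVED" ] && echo "artifact verified"
```

With git, the commit hash gives you the same guarantee over the source tree for free.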

> • Is there any reason not to just tag the source repo and copy (rsync, scp, etc) the files to the servers from there?

Not at all. Also consider git — it’s quite good for this!

> • Is there any benefit to publishing the application to an internal npm repository and deploying from there?

I wouldn’t say so. Often with an application, you want to control where it’s unpacked and run from, and don’t particularly want it in some global path.

Aria

// ravi

May 25, 2014, 4:27:11 PM
to nod...@googlegroups.com
On May 25, 2014, at 8:32 AM, Jake <jp02...@gmail.com> wrote:
I'd like to know what people have done for node.js application deployments.


I have been searching up and down for something that works and am unsatisfied with everything that I have come across. More on my travails further below.


Coming from a compiled-language background, I'm used to compiling things, packing them up into some artifact, and deploying that to a server.  This seems unnecessary in node though.  

A few questions on this:
  1. Have you seen any benefit to packing up (perhaps just a zip file) all the files from a node application and putting that in an artifact repository?

Yes on the first (benefit in packing up) and no on the second (putting them in an artefact repo): in fact, it is important to me that, using release tags and tools, I can recreate any build.

  2. Is there any reason not to just tag the source repo and copy (rsync, scp, etc) the files to the servers from there?

There are Git-based deployment tools like Propagit/Fleet. Even a ‘git pull’ might work. However, I would like to differentiate between sources (in the repository hierarchy that makes sense for sources) and “builds” (even if there is no compile step). And I see no reason for each deployment to contain the Git history of the entire repo.


  3. Is there any benefit to publishing the application to an internal npm repository and deploying from there?

I would think so, yes. To me that is a step up from deploying with Git (it addresses the two issues I raise above), but what about other non-node dependencies your project might have, such as a DB?

At a general level, I have looked at three alternative approaches to solving the deployment problem (for scenarios involving more than a handful of servers): heavyweight systems like Chef/Puppet/etc, self-contained images using LXC/Docker, and other simpler approaches like Ansible (SSH) and Propagit/Fleet (Git). Of these, I find LXC (with or without Docker) the one most worth pursuing. While I figure out how to move forward, here is what I am doing:

* Homegrown “build” script (much simpler and more application-specific than Grunt, Gulp, etc) to create install packages (tar).

* Deployment using SSH/tar.
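A homegrown tar-based package-and-deploy step can indeed stay small; a rough sketch of the idea (directory names invented, and the ssh hop replaced with a local untar so it runs anywhere):

```shell
#!/bin/sh
# Sketch of a tar-based "build": stage only what production needs,
# package it, and unpack it at the destination. In real use the
# unpack step would run over ssh on the target host.
set -e
SRC=$(mktemp -d); DEST=$(mktemp -d)
mkdir -p "$SRC/lib"
echo "module.exports = {}" > "$SRC/lib/index.js"
echo '{"name":"app","version":"1.0.0"}' > "$SRC/package.json"
tar -czf "$SRC/app-1.0.0.tgz" -C "$SRC" lib package.json
# locally here, instead of: ssh host 'tar -xzf - -C /opt/app' < app-1.0.0.tgz
tar -xzf "$SRC/app-1.0.0.tgz" -C "$DEST"
ls "$DEST"
```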

* Process management using PM2 and monitoring using PM2-web.

All of this is in the context of non-PaaS environments like mine (i.e., no EC2, Heroku, etc).

—ravi


Tim Walling

May 26, 2014, 10:14:40 PM
to nod...@googlegroups.com
We've been using pac on a project for about 8 months and have really liked it.


- Tim

Jake

May 27, 2014, 9:38:20 AM
to nod...@googlegroups.com
Thanks Patrick, this is just what I was looking for!  It never occurred to me that npm would cover all this, although now it sure seems obvious.

Thanks for the sinopia recommendation too.  A good internal npm registry has been on my list of tools to find.

Darren DeRidder

May 28, 2014, 10:02:38 AM
to nod...@googlegroups.com

pac looks good. There's also npmbox (https://github.com/arei/npmbox) and bundle.js (https://gist.github.com/jackgill/7687308). What I've been doing is adding my deps as bundledDependencies, running npm pack, and then - believe it or not - building RPMs out of the whole schmozzle. The bundledDependencies property seems to get overlooked sometimes. It's super useful. If you want to see native functionality for bundling dependencies in npm, you can upvote the issue at https://github.com/npm/npm/issues/4210
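For anyone who hasn't run across it: bundledDependencies is just an array in package.json naming the installed deps that `npm pack`/`npm publish` should fold into the tarball. A minimal made-up example (npm also accepts the `bundleDependencies` spelling):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "4.x"
  },
  "bundledDependencies": ["express"]
}
```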

Sam Roberts

Jun 12, 2014, 7:56:55 PM
to nod...@googlegroups.com
There's a bit of a rough consensus on how to deploy, see some of these
info sources:

- [The npm Debacle Was Partly Your
Fault](http://www.letscodejavascript.com/v3/blog/2014/03/the_npm_debacle)
- [Heroku Buildpack
README](https://github.com/heroku/heroku-buildpack-nodejs/blob/master/README.md)
Good even if you don't use heroku.
- [10 steps to nodejs nirvana in
production](http://qzaidi.github.io/2013/05/14/node-in-production/)
- <http://addyosmani.com/blog/checking-in-front-end-dependencies/>
- <http://www.futurealoof.com/posts/nodemodules-in-git.html>


Basically, it comes down to bundling as much as possible during build,
so you have no deploy-time dependencies on external resources, and you
deploy the same thing, every time.

There is some debate on compiled dependencies: building them at build time means no need for a compiler on your deploy servers... but then your build env must match your deploy env. Pros and cons both ways.

Where it gets heated is how you transport your app. A tarball works OK for some, in which case bundling your dependencies and doing an npm pack works fine, but you have to get the .npmignore file correct as well.

Also, lots of the above suggest committing your build deps to git... I did that for fun for a loopback app that had an angular front-end and used bower - your basic full-stack node app. Over a million lines went into the git commit... craziness. And then there are all the command-line shenanigans to keep it up to date and to modify your .gitignore. Doing this on a development tree doesn't make any sense to me.

And it's unnecessary with git: you can use git commit-tree to keep an exact source copy of your development HEAD in a deploy branch, and THEN add the build products. This allows robust git-push-to-PaaS deployment, but it also allows you to do an npm pack from completely git-controlled state, to tag, to roll back, etc. All the good things you get with git, minus the pain of having your dependencies in your dev tree.
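The commit-tree approach Sam describes can be sketched with plain git plumbing. This is only an illustration, assuming a build step stages its products into the index (faked here with one file), not strong-build's actual implementation:

```shell
#!/bin/sh
# Sketch: record dev HEAD's tree plus build products as a commit on a
# "deploy" branch, without committing the artifacts to the dev branch.
set -e
export GIT_AUTHOR_NAME=ci GIT_AUTHOR_EMAIL=ci@example.com
export GIT_COMMITTER_NAME=ci GIT_COMMITTER_EMAIL=ci@example.com
REPO=$(mktemp -d); cd "$REPO"; git init -q
echo "module.exports = {}" > index.js
git add index.js && git commit -qm "dev HEAD"
echo "built" > bundle.js        # pretend the build produced this
git add bundle.js               # stage the artifact in the index only
TREE=$(git write-tree)          # tree object: sources + build products
DEPLOY=$(git commit-tree "$TREE" -p HEAD -m "deploy build")
git update-ref refs/heads/deploy "$DEPLOY"
git reset -q                    # dev branch and index stay artifact-free
git rev-list --count deploy     # deploy branch now has both commits
```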

One problem with all this advice is that it takes a pretty ad-hoc set of tools to do it all. Lots of steps are manual or script-it-yourself, or, when scripted, are a bit creaky (bundle-deps doesn't bundle optional dependencies, for example).

Anyhow, I've been working on a tool to do the client-side build part of this, encapsulating what I think is the mostly-consensus best way to do these things, plus some extras like git commit-tree. It's in pre-release, which means get it from https://github.com/strongloop/strong-build until it's published to npm.

When you run it, you'll find the steps can be run one by one or all together, are reasonably customizable, and log all the git, npm, and shell commands they execute, to be as transparent as possible about what it is actually doing to your source.

I'd love to have some feedback if you give it a whirl.

Cheers,
Sam

Will Hoover

Jun 13, 2014, 11:18:17 AM
to nod...@googlegroups.com
If you're using Grunt/GitHub, another option would be https://www.npmjs.org/package/releasebot
The next release will have an option to bundle dependencies in released assets.

henrique matias

Jun 24, 2014, 1:10:49 PM
to nod...@googlegroups.com
I'm really interested in any way zero downtime deploys could happen.


Regarding switching between versions of the app:

Recently a friend of mine (necker on GitHub) introduced me to Thalassa and Aqueduct:

In short: 

 - Aqueduct connects to an HAProxy instance and dynamically updates/refreshes the configuration

 - Your node application registers itself with Aqueduct, saying which version of the app it is running

 - You have a GUI to see which applications have been registered, which are running, and to switch between different versions of your app


Seems pretty cool. 


Is anyone running this kind of "load balancer" management system? Any recommendations?
Matt

Jun 24, 2014, 2:08:35 PM
to nod...@googlegroups.com
For zero downtime I use a hand-crafted cluster-based restarter. It's very much like recluster (on npm), though, so I recommend using that instead.

The basic process is: install the new code, send the process SIGUSR2, and it reaps the children one by one and restarts them (using cluster's disconnect() method, so a child won't be restarted until it has stopped processing all connections).

You only need a full-blown restart if you change recluster itself, which is super rare.
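The signal half of that contract is easy to see in isolation; a toy sketch (the real reaping logic lives in the node master process, not in shell, and the pidfile path is hypothetical):

```shell
#!/bin/sh
# Toy sketch of the signal contract only: the real master process would
# reap and restart cluster workers here instead of just printing.
trap 'echo "reloading workers one by one"' USR2
kill -USR2 $$   # in real life: kill -USR2 "$(cat /var/run/app.pid)"
sleep 0         # let the pending trap run at the next command boundary
```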



henrique matias

Jun 24, 2014, 5:45:36 PM
to nod...@googlegroups.com
Cool, recluster seems useful in some cases.

Still, if you want to roll back or see which version of the app your server is running, you would have to build that yourself.

I think that is the beauty of Aqueduct (still, I might be wrong).


Ryan Graham

Jun 24, 2014, 5:46:02 PM
to nod...@googlegroups.com
On Tue, Jun 24, 2014 at 7:00 AM, henrique matias <hems....@gmail.com> wrote:
Recently a friend of mine (necker on GitHub) introduced me to Thalassa and Aqueduct:
 <snip>

Is anyone running this kind of "load balancer" management system? Any recommendations ?

I've experimented with a similar setup using Seaport+Bouncy (Thalassa was inspired by Seaport).

The part I didn't like, which seems to also be present in Thalassa, is how the service registry becomes part of the application instead of being purely a deployment-time decision.

Whether that's acceptable is up to the person building/deploying the app, of course, but it is something to keep in mind when looking into different deployment approaches.

~Ryan
--
http://twitter.com/rmgraham

Matt

Jun 24, 2014, 11:29:19 PM
to nod...@googlegroups.com

On Tue, Jun 24, 2014 at 2:21 PM, henrique matias <hems....@gmail.com> wrote:
Cool, recluster seems useful in some cases.

Still if you want to roll back or see which version of the app your server is running, you would have to build it yourself.

For that we run deploy_to_runit (deploying straight from Git), which creates a LAST_GIT_HASH environment variable, which we provide via an API in the app.

If we want to roll back we have to git revert, which may not be the best solution, but it works.

Matt.