What is the best approach to deploying a Ruby-based application with nginx


Maneesh M P

Dec 9, 2017, 12:45:57 PM12/9/17
to devops
We have a web application, XYZ, which is a Ruby service deployed on three EC2 instances. It receives traffic through the nginx configuration below.
4sadf24323, dhfsg345345 and sadfsd23423 uniquely identify servers A, B and C. These IDs are used for session stickiness, so the instance associated with an ID keeps receiving the traffic for its sessions and can complete the requests.

    .....
    location /XYZ/rotate/4sadf24323/ {
        proxy_pass http://192.168.1.2:8088;
    }
    location /XYZ/rotate/dhfsg345345/ {
        proxy_pass http://192.168.1.3:8088;
    }
    location /XYZ/rotate/sadfsd23423/ {
        proxy_pass http://192.168.1.4:8088;
    }

    location /XYZ {
        proxy_pass http://xyz-conf;
    }

    ......
$ cat xyz-conf
    upstream ows-oss {
        ip_hash;
        server 192.168.1.2:8088 fail_timeout=10s;
        server 192.168.1.3:8088 fail_timeout=10s;
        server 192.168.1.4:8088 fail_timeout=10s;
    }
We have been storing session data in memory (8 GB instances) so far. Every time we deploy a new version of the code, we have to restart the Ruby service, which clears all the in-memory sessions, and our customers were being impacted by this. So we decided to bring in a Redis cache to store sessions. While it would be advisable to move session storage completely to Redis, a full redesign of the application would take a lot of time and effort. So at this point, what we decided to do is keep all the sessions in memory and store a copy of each session in Redis as well. For example, when a request goes to server A (4sadf24323) and the requested session ID is not found on A, the service fetches the session from Redis and continues to function. While this design is working fine for us in development, we are facing a challenge during deployment.
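Roughly, the fallback looks like this (a simplified sketch only; the class name, key format and Redis connection details are illustrative, not our actual code):

    # Simplified sketch of the in-memory-first, Redis-fallback session lookup.
    require 'redis'
    require 'json'

    class SessionStore
      def initialize(redis: Redis.new(host: 'redis.internal'))
        @memory = {}     # existing in-memory store
        @redis  = redis  # shared copy that survives a restart of this instance
      end

      def write(session_id, data)
        @memory[session_id] = data
        @redis.set("session:#{session_id}", data.to_json)
      end

      def read(session_id)
        return @memory[session_id] if @memory.key?(session_id)

        # Session not held by this instance: fall back to the Redis copy.
        raw = @redis.get("session:#{session_id}")
        @memory[session_id] = JSON.parse(raw) if raw
      end
    end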

The solution we currently have is: when we deploy on Server A, we update the nginx configuration so that Server A's location points at Server B's IP address, i.e. location /XYZ/rotate/4sadf24323/ { proxy_pass http://192.168.1.3:8088; }. When deploying to Server B, we point Server B's location at Server C's IP address, and so on. This way none of our users faces an outage: even though a request that should have gone to Server A now goes to Server B, the service fetches the session from Redis and works as expected.
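In other words, during a deploy to Server A the relevant location block is temporarily rewritten and nginx is reloaded (the snippet below is just an illustration of that step):

    # Temporary override while Server A (4sadf24323) is being deployed:
    # its sticky location is pointed at Server B, which pulls the session
    # from Redis and keeps serving the user.
    location /XYZ/rotate/4sadf24323/ {
        proxy_pass http://192.168.1.3:8088;   # Server B instead of Server A
    }

    # then: nginx -t && nginx -s reload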

Though this deployment strategy works fine in development, I am not sure it is a good idea to change the nginx configuration on the fly during every production deployment.
Is this really bad? Can anyone give suggestions?
If it is really bad, what alternate approach can we take? (I know the application design is bad and needs fundamental changes, but we do not have the money or bandwidth to do that at this point.)

Juan Paulo Breinlinger

Dec 10, 2017, 9:00:31 AM12/10/17
to devops
The good thing is that you know what needs to be done, although you are lacking time / bandwidth...

Changing nginx upstreams on the fly is fine... nginx can handle that very well and will add / remove upstream members once in-flight requests have completed, so there won't be any downtime for your users. Bear in mind that before taking down any of the backend servers, you have to check that all the connections to that server have been closed by nginx. You can easily do that with netstat or ss, depending on the Linux version. That will ensure your users won't notice any blip in the service.
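For example, from the nginx box, something along these lines shows whether connections to the backend being taken down (Server A in your layout) are still open:

    # Check for connections still open from nginx to Server A's backend port
    ss -tn | grep '192.168.1.2:8088'
    # or, on older systems without ss:
    netstat -tn | grep '192.168.1.2:8088'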

I did something similar with one of my customers, but I built an API through which the backend servers can take themselves down or up with a simple curl call from the nginx box. The API is smart enough to add / remove backend servers from the nginx configuration, reload the config file and make sure no connections are alive before returning a status to the client. I believe it is a nicer / safer way to do it.
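Just to give an idea of the shape of it (the endpoints below are made up for illustration, not the actual API):

    # Hypothetical example: a backend asks the nginx box to drain and remove
    # it from the upstream before a deploy, then re-adds itself afterwards.
    curl -X POST http://nginx-box.internal/upstreams/xyz/192.168.1.2:8088/disable
    # ...deploy the new release, then...
    curl -X POST http://nginx-box.internal/upstreams/xyz/192.168.1.2:8088/enable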

Nginx also provides an enterprise version that comes with this functionality built in, but it is quite expensive, so it is probably not an option.

I don't see many options to be honest... but if your service is AWS based, you could explore auto-scaling your web tier up and down behind an ELB. It works pretty well, but it brings other challenges, like making sure your new release can run in parallel with your existing release for a short period of time. The ELB will automatically tell whether a new instance is ready to serve traffic, which removes the logic of adding / removing servers from nginx.

Now... to be honest, I'm afraid that without money or bandwidth there is not much you'll be able to improve.

Good luck!

Juan