We have a web application, XYZ, which is a Ruby service deployed on three EC2 instances. It receives traffic through the nginx configuration below.
The tokens 4sadf24323, dhfsg345345 and sadfsd23423 uniquely identify servers A, B and C. These ids are used for session stickiness, so the instance that owns a session keeps receiving its traffic and can complete the request.
.....
location /XYZ/rotate/4sadf24323/ {
}
location /XYZ/rotate/dhfsg345345/ {
}
location /XYZ/rotate/sadfsd23423/ {
}
location /XYZ {
}
......
$ cat xyz-conf
upstream ows-oss {
ip_hash;
}
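For context, here is a fuller sketch of what the abbreviated configuration above looks like when assembled. This is an illustration only: the IP addresses for servers A and C and the listen port are assumptions (B's address, 192.168.1.3:8088, is the one mentioned later in this question).

```nginx
# Sketch only: upstream members and most IPs are assumptions, not the real config.
upstream ows-oss {
    ip_hash;                      # pin a client to one backend by client IP
    server 192.168.1.2:8088;      # server A (4sadf24323) -- assumed IP
    server 192.168.1.3:8088;      # server B (dhfsg345345)
    server 192.168.1.4:8088;      # server C (sadfsd23423) -- assumed IP
}

server {
    listen 80;

    # Sticky per-instance entry points
    location /XYZ/rotate/4sadf24323/ {
        proxy_pass http://192.168.1.2:8088;   # server A
    }
    location /XYZ/rotate/dhfsg345345/ {
        proxy_pass http://192.168.1.3:8088;   # server B
    }
    location /XYZ/rotate/sadfsd23423/ {
        proxy_pass http://192.168.1.4:8088;   # server C
    }

    # Everything else goes through the ip_hash upstream
    location /XYZ {
        proxy_pass http://ows-oss;
    }
}
```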
So far we have been storing session data in memory (8 GB instances). Every time we deploy a new version of the code, we have to restart the Ruby service, which clears all in-memory sessions, and our customers were being impacted by this. So we decided to bring in Redis to store sessions.

While it would be advisable to move session storage to Redis entirely, a complete redesign of the application takes a lot of time and effort. So for now we decided to keep all sessions in memory and store a copy of each session in Redis as well. For example, when a request goes to server A (4sadf24323) and the requested session id is not found on A, the service fetches the session from Redis and continues to function. While this design is working fine for us in development, we are facing a challenge during deployment.
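The dual-write scheme described above can be sketched roughly as follows. This is a minimal illustration, not our actual code: the class and key names are made up, and the `redis` object stands in for a real Redis client (anything responding to `get`/`set`, e.g. the redis gem).

```ruby
require "json"

# Hypothetical sketch of the dual-write session store: the in-memory hash is
# the fast path, and every write is mirrored to Redis so that any instance can
# recover a session it has never seen (e.g. after a restart or re-routing).
class SessionStore
  def initialize(redis)
    @memory = {}
    @redis  = redis
  end

  def write(session_id, data)
    @memory[session_id] = data
    @redis.set("session:#{session_id}", JSON.generate(data))
  end

  def read(session_id)
    # Fast path: session was created on this instance.
    hit = @memory[session_id]
    return hit if hit

    # Miss: request was re-routed (deploy, restart) -- fall back to Redis.
    raw = @redis.get("session:#{session_id}")
    return nil unless raw

    data = JSON.parse(raw)
    @memory[session_id] = data  # re-warm the local copy
    data
  end
end
```

In production you would also want a TTL on the Redis keys so abandoned sessions expire rather than accumulate.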
Our current solution is: when we deploy on server A, we update the nginx configuration to point A's location block at server B's IP address (i.e., location /XYZ/rotate/4sadf24323/ { proxy_pass http://192.168.1.3:8088; }). When deploying to server B, we point B's location block at server C's IP address, and so on. This way no user faces an outage: even though a request that should have gone to server A now goes to server B, our service fetches the session from Redis and works as expected.
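Concretely, the only thing that changes during a deploy of server A is the proxy_pass target in A's sticky location block, which is then restored once A is back up (B's address is the one from above):

```nginx
# During deployment of server A: temporarily send A's sticky traffic to B.
location /XYZ/rotate/4sadf24323/ {
    proxy_pass http://192.168.1.3:8088;   # server B; reverted after deploy
}
```

The edit is applied with a graceful `nginx -s reload`, which picks up the new configuration without dropping in-flight connections.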
Though this deployment strategy works fine in development, I am not sure it is a good idea to change the nginx configuration on the fly during every production deployment.
Is this really bad? Can anyone give suggestions?
If it is really bad, what alternate approach can we take? (I know the application design is flawed and needs fundamental changes, but we do not have the money or bandwidth to do that at this point.)