Google Groups

timeouts in facebook/omniauth callbacks causing app to freeze


Alex Mar 29, 2011 10:38 AM
Posted in group: Devise
I'm using Devise v1.2.rc2 and Rails 3.0.5 on Heroku.

We've been having trouble with our app freezing around OAuth [and now
OmniAuth] callbacks for a few versions of Devise now. I'm not sure
whether it's something in Devise or in our code; when we upgraded to
the new 1.2.rc2 tag we rewrote our OmniAuth callback controller, so
that codepath is entirely new, yet the errors still cluster around the
oauth/omniauth callbacks.

One thing to note is that we're currently running only 1 dyno on
Heroku, so it's understandable that requests would time out under
heavy load, but the app goes totally unresponsive for 3 minutes at a
time [at which point my boss has a script that reboots it after that
much inactivity].

Here is an example log of what we're seeing [the name
facebook_register_questionmoms just lets us know which path the user
took into the Facebook auth]:

2011-03-29T05:25:45-07:00 heroku[nginx]: GET /users/auth/facebook_register_questionmoms HTTP/1.1 | 10.208.195.21 | 1930 | https | 302
2011-03-29T05:25:45-07:00 app[web.1]:
2011-03-29T05:25:45-07:00 app[web.1]:
2011-03-29T05:25:45-07:00 app[web.1]: Started GET "/users/auth/facebook_register_questionmoms/callback?code=2.kc__Bz2yepEu7t7rq1jNwA__.3600.1301407200-1479867708%7CMT9kOn3JNhRVhqgBh0Kk6FwDHKk" for 10.208.195.21 at 2011-03-29 05:25:45 -0700
2011-03-29T05:25:46-07:00 heroku[router]: GET www.questionmoms.com/users/auth/facebook_register_questionmoms/callback dyno=web.1 queue=0 wait=0ms service=889ms bytes=1802
2011-03-29T05:25:46-07:00 heroku[router]: GET www.questionmoms.com/status dyno=web.1 queue=0 wait=0ms service=470ms bytes=225
2011-03-29T05:25:46-07:00 app[web.1]:
2011-03-29T05:25:46-07:00 app[web.1]:
2011-03-29T05:25:46-07:00 app[web.1]: Started GET "/status" for 10.203.85.7 at 2011-03-29 05:25:46 -0700
2011-03-29T05:25:46-07:00 heroku[nginx]: GET /status HTTP/1.1 | 10.203.85.7 | 250 | http | 200
2011-03-29T05:25:46-07:00 app[web.1]:
2011-03-29T05:25:46-07:00 app[web.1]:
2011-03-29T05:25:46-07:00 app[web.1]: Started GET "/users/auth/facebook_register_questionmoms/callback?code=2.kc__Bz2yepEu7t7rq1jNwA__.3600.1301407200-1479867708%7CMT9kOn3JNhRVhqgBh0Kk6FwDHKk" for 10.208.195.21 at 2011-03-29 05:25:46 -0700
2011-03-29T05:25:46-07:00 heroku[nginx]: GET /users/auth/facebook_register_questionmoms/callback?code=2.kc__Bz2yepEu7t7rq1jNwA__.3600.1301407200-1479867708%7CMT9kOn3JNhRVhqgBh0Kk6FwDHKk HTTP/1.1 | 10.208.195.21 | 1853 | http | 302
2011-03-29T05:25:51-07:00 heroku[nginx]: GET /images/PowerMember.png HTTP/1.1 | 10.203.85.7 | 17059 | http | 200
2011-03-29T05:26:02-07:00 heroku[nginx]: GET /images/PowerMember.png HTTP/1.1 | 10.203.85.7 | 17059 | http | 200
2011-03-29T05:26:09-07:00 heroku[router]: Error H11 (Backlog too deep) -> GET www.questionmoms.com/status dyno=none queue=0 wait=0ms service=0ms bytes=0
2011-03-29T05:26:09-07:00 heroku[nginx]: GET /status HTTP/1.1 | 10.203.85.7 | 795 | http | 503
2011-03-29T05:26:14-07:00 heroku[nginx]: GET /images/PowerMember.png HTTP/1.1 | 10.208.195.21 | 236 | http | 304
2011-03-29T05:26:16-07:00 heroku[nginx]: GET /images/PowerMember.png HTTP/1.1 | 10.208.195.21 | 17052 | http | 200
2011-03-29T05:26:16-07:00 heroku[router]: Error H12 (Request timeout) -> GET www.questionmoms.com/users/auth/facebook_register_questionmoms/callback dyno=web.1 queue=0 wait=0ms service=0ms bytes=0
2011-03-29T05:26:16-07:00 heroku[nginx]: GET /users/auth/facebook_register_questionmoms/callback?code=2.kc__Bz2yepEu7t7rq1jNwA__.3600.1301407200-1479867708%7CMT9kOn3JNhRVhqgBh0Kk6FwDHKk HTTP/1.0 | 127.0.0.1 | 790 | http | 503
2011-03-29T05:26:16-07:00 heroku[nginx]: GET /users/auth/facebook_register_questionmoms/callback?code=2.kc__Bz2yepEu7t7rq1jNwA__.3600.1301407200-1479867708%7CMT9kOn3JNhRVhqgBh0Kk6FwDHKk HTTP/1.1 | 10.208.195.21 | 795 | https | 503

Has anyone seen this sort of activity? This happened to us 5 times
last night. Do we just need to increase our dyno count, or is there
something else going on here?

Thanks,
Alex