High Response Time from Passenger Workers


Abhishek Singh

Jul 16, 2017, 4:02:25 PM
to Phusion Passenger Discussions
Hi,

Recently our servers (running Nginx with Phusion Passenger) have started showing long response times to HTTP requests, which slows down the client apps.

We are not able to figure out what exactly is causing the slowdown. Initially we suspected latency from the Redis instance our backend server uses, but changing its capacity and subnets showed no sign of improvement.

Below is our passenger-status output:

----------- General information -----------
Max pool size : 20
App groups    : 1
Processes     : 10
Requests in top-level queue : 0

----------- Application groups -----------
<some-location> (production):
  App root: <some-location>
  Requests in queue: 0
  * PID: 17251   Sessions: 131      Processed: 3404    Uptime: 2h 3m 38s
    CPU: 8%      Memory  : 253M    Last used: 4s ago
  * PID: 17277   Sessions: 131      Processed: 2013    Uptime: 2h 3m 38s
    CPU: 8%      Memory  : 242M    Last used: 4s ago
  * PID: 17303   Sessions: 131      Processed: 481     Uptime: 2h 3m 38s
    CPU: 7%      Memory  : 232M    Last used: 3s ago
  * PID: 17325   Sessions: 131      Processed: 2733    Uptime: 2h 3m 38s
    CPU: 7%      Memory  : 243M    Last used: 3s ago
  * PID: 17353   Sessions: 130      Processed: 797     Uptime: 2h 3m 37s
    CPU: 7%      Memory  : 233M    Last used: 28m 33s
  * PID: 17382   Sessions: 130      Processed: 3633    Uptime: 2h 3m 37s
    CPU: 7%      Memory  : 232M    Last used: 28m 4s ago
  * PID: 17410   Sessions: 130      Processed: 4633    Uptime: 2h 3m 37s
    CPU: 7%      Memory  : 236M    Last used: 39s
  * PID: 17441   Sessions: 130      Processed: 2496    Uptime: 2h 3m 36s
    CPU: 5%      Memory  : 222M    Last used: 38s
  * PID: 17471   Sessions: 130      Processed: 953     Uptime: 2h 3m 35s
    CPU: 4%      Memory  : 210M    Last used: 18s
  * PID: 17501   Sessions: 130      Processed: 1629    Uptime: 2h 3m 35s
    CPU: 5%      Memory  : 218M    Last used: 4s 

In spite of heavy load on the server, the processes appear idle and requests in queue is zero.

I have attached the passenger-status --show=requests output as well (log3.log in the attachment), but I am not able to fully interpret it.

Please help me out. Your help will be highly appreciated. 
Thanks in Advance :)

Daniel Knoppel

Jul 17, 2017, 5:54:28 AM
to Phusion Passenger Discussions
Please describe your setup in more detail. For example: is it a Ruby app? Node.js? Python? Does it happen right away, or over time? Does restarting everything change anything?

- Daniel

Abhishek Singh

Jul 18, 2017, 2:07:27 AM
to Phusion Passenger Discussions
Hi Daniel,

We have a Rails app running on Rails 4.2.4 and Ruby 2.3.1.

It doesn't happen straight away; it develops over time. Restarting the nginx service resolves the high response times, but they return over time under high load.

More Info:

Two EC2 m4.large instances are running behind an Amazon ELB. Each instance has 2 nginx workers, and each nginx is configured to spawn 5 Passenger workers.

nginx keepalive_timeout is set to 10s.

Passenger configuration:

passenger_max_pool_size 20;
passenger_min_instances 10;
passenger_max_instances_per_app 0;
passenger_pre_start <our-url-endpoint>;
passenger_pool_idle_time 0;
passenger_max_request_queue_size 0;
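
To spell out our reading of these directives (based on the Passenger docs; please correct me if we've misunderstood any of them):

```nginx
passenger_max_pool_size 20;             # at most 20 app processes in total
passenger_min_instances 10;             # keep at least 10 processes alive
passenger_max_instances_per_app 0;      # no per-app process limit
passenger_pre_start <our-url-endpoint>; # spawn processes at nginx startup
passenger_pool_idle_time 0;             # never shut down idle processes
passenger_max_request_queue_size 0;     # unlimited request queue
```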

Thanks for your time. :)

Daniel Knoppel

Jul 18, 2017, 10:50:44 AM
to Phusion Passenger Discussions
If you're using Passenger Open Source with Ruby, each process should handle 1 session at a time (no multithreading). However, I see the session counters at around 130, so something seems to be going on with stuck sessions there. You can send a SIGQUIT to the PID of one of your processes to get Ruby backtraces and see where it is stuck.
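
Conceptually, the backtrace dump works like the following sketch (this is illustrative, not Passenger's actual handler code):

```ruby
# Illustrative sketch: dump a backtrace for every thread when the process
# receives SIGQUIT. Passenger's Ruby handler is similar in spirit.
Signal.trap("QUIT") do
  Thread.list.each do |thread|
    # Label each thread and print where it currently is.
    puts "--- Thread #{thread.object_id} (#{thread.status}) ---"
    puts (thread.backtrace || ["<no backtrace available>"]).join("\n")
  end
end

# With a handler like this installed, `kill -QUIT <pid>` prints the
# backtraces to the process's stderr/stdout, which Passenger forwards
# to its log (by default the Nginx error log).
```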

I'm also assuming you haven't (mistakenly) overridden passenger_force_max_concurrent_requests_per_process.

Btw, you should upgrade Passenger: the version you're using (together with Nginx) has security vulnerabilities.

- Daniel

Abhishek Singh

Jul 18, 2017, 11:58:21 AM
to Phusion Passenger Discussions
Hi Daniel,

We haven't overridden passenger_force_max_concurrent_requests_per_process.

We will send a SIGQUIT to get a backtrace once this situation arises again.

BTW, in the log file attached, the requests are in the processing state for anywhere from 26s (min) to 1m 45s (max). Can you let me know under what conditions this happens? Also, most of the requests are stuck at PARSING_HEADERS. Is that normal behaviour?

Thanks
-Abhishek

Camden Narzt

Jul 18, 2017, 1:47:55 PM
to Phusion Passenger Discussions
Have you checked the size and contents of your headers? Perhaps there's something out of the ordinary there.

It's not normal for things to be slow, but it may not be surprising depending on what the requests look like.
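
If you want to measure that, a minimal Rack-middleware sketch along these lines could log oversized request headers (the class name and the 8 KB threshold are my own, hypothetical choices, not from any library):

```ruby
# Hypothetical Rack middleware that logs unusually large request headers.
class HeaderSizeLogger
  def initialize(app, limit: 8 * 1024)
    @app = app
    @limit = limit
  end

  def call(env)
    # Rack exposes incoming request headers as HTTP_* keys in the env hash.
    header_bytes = env.reduce(0) do |sum, (key, value)|
      key.to_s.start_with?("HTTP_") ? sum + key.size + value.to_s.size : sum
    end
    if header_bytes > @limit
      warn "Large request headers (#{header_bytes} bytes) for #{env['PATH_INFO']}"
    end
    @app.call(env)
  end
end
```

In a Rails app you could enable it with `config.middleware.use HeaderSizeLogger` in config/application.rb and then watch the log for any requests with suspiciously large headers.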