Hello,
I need to handle at least 50,000 requests per second for 5 minutes without breaking the server, but when I try to do so, I get this in the logs:
2017/01/17 14:48:50 [error] 78#78: *77876 connect() to unix:/tmp/passenger.XVzTG7z/agents.s/core failed (11: Resource temporarily unavailable) while connecting to upstream, client: 54.89.44.6, server: xxxx.xx, request: "GET /api/v1/clients/configuration/nova HTTP/1.1", upstream: "passenger:unix:/tmp/passenger.XVzTG7z/agents.s/core:", host: "xxx.xxx.xx"
(I have replaced my real hostnames with x's there.)
This endpoint answers with data from Redis, which runs inside the same container and is reached over a Unix socket. Watching "top" while the load tests run, I can see that Redis is not getting overloaded, so I don't consider it to be the bottleneck here.
The server has 8 cores, so Nginx has one worker per core. It also has 30GB of RAM; "free -m" reports 29452 MB, which I used to calculate the number of Passenger instances to configure: (29452 * 0.75) / 135 ≈ 163.
To be on the safe side I assumed each instance of my app uses 135MB, though I know they probably use less than that.
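For reference, the sizing arithmetic as a small shell snippet (29452 and 135 are the figures measured above; the 75% headroom factor is my own rule of thumb):

```shell
# Pool size = usable RAM (MB) * 0.75 / assumed per-instance footprint (MB)
total_mb=29452        # from `free -m`
per_instance_mb=135   # conservative estimate per app instance
pool_size=$(( total_mb * 75 / 100 / per_instance_mb ))
echo "$pool_size"     # integer division rounds down -> 163
```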
So passenger_min_instances and passenger_max_pool_size are both set to 163, passenger_pre_start is set to my server's URL, and passenger_max_request_queue_size is set to 65536.
With min and max set to the same value, all the instances are already spawned by the time the requests arrive.
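Concretely, the Passenger part of my config looks like this (the pre-start URL is a placeholder for my real host, which I x'ed out above):

```nginx
# /etc/nginx/passenger.conf (excerpt)
passenger_min_instances 163;
passenger_max_pool_size 163;
passenger_pre_start http://xxxx.xx/;
passenger_max_request_queue_size 65536;
```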
When I load tested it for 10 minutes with 30k clients per second, free RAM stayed around 5GB and the test finished successfully.
As kernel parameters, I configured:
net.core.somaxconn=65535
vm.overcommit_memory = 1
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
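To make these settings persist across reboots, I keep the sysctls in a drop-in file (the filename is my own choice; it is loaded by `sysctl --system`):

```
# /etc/sysctl.d/99-loadtest.conf
net.core.somaxconn = 65535
vm.overcommit_memory = 1
```

The transparent hugepage toggles are not sysctls, so the two `echo never > /sys/...` lines have to be re-run at boot, e.g. from a startup script.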
And I configured nginx.conf like this:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
daemon off;
include /etc/nginx/main.d/*.conf;
events {
    # This command will show you how many worker_connections you can use for the current server:
    # ulimit -n
    worker_connections 65536;
    use epoll;
    multi_accept on;
}
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 20;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # Allow the server to close the connection on a non-responding client; this frees up memory
    reset_timedout_connection on;
    ##
    # Phusion Passenger config
    ##
    include /etc/nginx/passenger.conf;
    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Do you have any idea how I could increase the number of connections my server is able to accept, so it stops returning this "Resource temporarily unavailable" error?
Thank you in advance,
Jonathas Ribeiro