Lua Nginx solving WSARecv() failed (10054: An existing connection was forcibly closed by the remote host) while reading response header from upstream, client:

5,198 views

c0nw...@googlemail.com

Dec 28, 2016, 2:42:08 PM
to openresty-en
So I have a FastCGI upstream, as follows:

# loadbalancing PHP
upstream myLoadBalancer {
    server 127.0.0.1:9001 weight=1 fail_timeout=5;
    server 127.0.0.1:9002 weight=1 fail_timeout=5;
    server 127.0.0.1:9003 weight=1 fail_timeout=5;
    server 127.0.0.1:9004 weight=1 fail_timeout=5;
    server 127.0.0.1:9005 weight=1 fail_timeout=5;
    server 127.0.0.1:9006 weight=1 fail_timeout=5;
    server 127.0.0.1:9007 weight=1 fail_timeout=5;
    server 127.0.0.1:9008 weight=1 fail_timeout=5;
    server 127.0.0.1:9009 weight=1 fail_timeout=5;
    server 127.0.0.1:9010 weight=1 fail_timeout=5;
    least_conn;
}

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000+
#
location ~ \.php$ {
    root           html;
    fastcgi_pass   myLoadBalancer; # or multiple, see example above
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        fastcgi_params;
}

Occasionally I will receive an internal server error when accessing a page processed by PHP.

The error log shows this:

WSARecv() failed (10054: An existing connection was forcibly closed by the remote host) while reading response header from upstream, client:

Upon investigating, it seems that the PHP process started serving the response and sending Nginx data, then cut Nginx off, either because PHP crashed or was closed abruptly.

"PHP_FCGI_MAX_REQUESTS" could be closing the process for hitting the request limit.





Using Lua is it possible to prevent receiving a 500 internal server error and pass the request onto the next upstream that is what i had in mind as a good way to solve this if the process closes abruptly just take that request with lua and pass it to the next upstream in the que.

My "fastcgi_next_upstream error timeout;" is the default, as seen here: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_next_upstream

Does anyone know a decent way to go about solving this annoyance? Thanks in advance; looking forward to any light anyone can shed on this dilemma :)
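[Editorial note: a 10054 while reading the response header counts as an "error" condition for fastcgi_next_upstream, so the default "error timeout" should already retry it for safe methods. However, since nginx 1.9.13 requests with non-idempotent methods (POST, LOCK, PATCH) are not passed to the next server unless you opt in. A hedged sketch of the relevant directives; values are illustrative and behaviour on the Windows build is not verified here:]

```nginx
# retry connection errors/timeouts, and opt in to retrying
# non-idempotent requests (e.g. POST); needs nginx >= 1.9.13
fastcgi_next_upstream error timeout non_idempotent;

# cap attempts so one bad request cannot cycle through all 10 backends
fastcgi_next_upstream_tries 3;
```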

nginx for Windows

Dec 29, 2016, 5:10:59 AM
to openresty-en
Did you look at "multi_runcgi.cmd" with the service installer? (http://nginx-win.ecsds.eu/download/Install_nginx_php_services.zip)
e.g.:
set PHP_FCGI_CHILDREN=0
set PHP_FCGI_MAX_REQUESTS=10000


On Wednesday 28 December 2016 at 20:42:08 UTC+1, c0nw...@googlemail.com wrote:

c0nw...@googlemail.com

Dec 29, 2016, 2:21:33 PM
to openresty-en
Yes, but that does not stop a PHP process crashing every once in a while. When I check my Event Viewer, it shows the following details.

Faulting application name: php-cgi.exe
Fault bucket , type 0
Event Name: APPCRASH
Response: Not available
Cab Id: 0
Problem signature:

Now, we all know PHP suffers from memory leaks and a vast number of problems that could make it crash, but since these crashes are so few and far between, as a universal solution I would rather use Lua to just make sure the upstream completed its response and was not closed or cut off abruptly. In the scenario where it does close abruptly, just pass the request to the next upstream in the queue, where it will complete and be served just fine.

c0nw...@googlemail.com

Dec 30, 2016, 5:29:09 PM
to openresty-en
So, something like the following:

upstream myLoadBalancer {
    server 127.0.0.1:9001 weight=1 fail_timeout=5;
    server 127.0.0.1:9002 weight=1 fail_timeout=5;
    server 127.0.0.1:9003 weight=1 fail_timeout=5;
    server 127.0.0.1:9004 weight=1 fail_timeout=5;
    server 127.0.0.1:9005 weight=1 fail_timeout=5;
    server 127.0.0.1:9006 weight=1 fail_timeout=5;
    server 127.0.0.1:9007 weight=1 fail_timeout=5;
    server 127.0.0.1:9008 weight=1 fail_timeout=5;
    server 127.0.0.1:9009 weight=1 fail_timeout=5;
    server 127.0.0.1:9010 weight=1 fail_timeout=5;
    least_conn;

    balancer_by_lua_block {
        -- use Lua to do something interesting here
        -- as a dynamic balancer

        -- check first chunk of response
        -- check that the last chunk of the upstream response was received
        -- check the upstream did not prematurely/abruptly close the connection
        -- or crash while transmitting the response

        -- if the upstream response failed,
        -- pass the same request to the next upstream server in the queue to try again

        -- else the upstream response was a success and all went well, so do nothing
    } # end balancer_by_lua_block

} # end upstream


Is there anything that exists already that can demonstrate what I am after or trying to achieve here?

nginx for Windows

Dec 30, 2016, 7:38:35 PM
to openresty-en
The problem is caused by a PHP session accepting the connection, accepting the request, and then crashing while handling the request.

What you could do, inside the .cmd script which starts cgi and auto-restarts on exit, is detect whether an exit was a real crash or an exit caused by 'PHP_FCGI_MAX_REQUESTS' (which looks like it exits with error code zero).
When the code is zero you can exit back to nginx with some other code, and use next_upstream based on this code to skip the cgi node.

I've seen examples on the forum of such a construction with other backends using next_upstream (based on an error code).
e.g. let the cgi node restart and feed it a PHP script which only returns your required HTTP status code; nginx should take it from there.
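[Editorial note: the restart-wrapper idea above could look something like this minimal .cmd sketch. Purely illustrative: the port, log file name, and the assumption that a clean PHP_FCGI_MAX_REQUESTS exit returns code 0 are placeholders, not the installer's actual script:]

```bat
set PHP_FCGI_CHILDREN=0
set PHP_FCGI_MAX_REQUESTS=10000

:restart
php-cgi.exe -b 127.0.0.1:9001
rem exit code 0 = clean exit after hitting PHP_FCGI_MAX_REQUESTS: just respawn
if %ERRORLEVEL% EQU 0 goto restart
rem non-zero = real crash: record it, then respawn anyway
echo %DATE% %TIME% php-cgi exited with code %ERRORLEVEL% >> crash.log
goto restart
```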

c0nw...@googlemail.com

Dec 30, 2016, 9:48:48 PM
to openresty-en
Thanks! That is also a good idea! :) I feel sorry for the unlucky client who made the one request in, let's say, 10,000+ where the process crashed; on perhaps their first visit to my site, they will see the internal server error. (I think that is the status code Nginx shows.)

Is my idea as a solution not better than that? I figured mine could also be used for proxy upstreams to other servers too, if those servers restart or close. I don't have any proxy upstream problems, but I would use the Lua response-check code on them, since the same concept would apply (kills a few birds with one stone). Lua can do everything, and then there is no need to hack the cmd/batch scripts, right? Lua can pass the request to the next upstream when the current upstream's open connection/session is closed.

Either way, Lua is required for both methods and can definitely solve this, and if anyone can point me in the right direction I will get my hands dirty, since I enjoy Lua-related projects.

The way I view this PHP crash issue is like a fire burning in the corner of a room: we can't put the fire out, but we sure as hell can contain it, so let's do that before it burns the building down.

On your forum, how often are your users' PHP processes crashing? Are the developers of PHP aware of these issues? Does it affect Linux PHP distros too?

c0nw...@googlemail.com

Dec 31, 2016, 12:18:06 AM
to openresty-en
So I put this together, but once again I am stuck, since there is no documentation, wiki, or example that I can see or find on how on earth you pass a request to an upstream...

upstream myLoadBalancer {
    server 127.0.0.1:9001 weight=1 fail_timeout=5;
    server 127.0.0.1:9002 weight=1 fail_timeout=5;
    server 127.0.0.1:9003 weight=1 fail_timeout=5;
    server 127.0.0.1:9004 weight=1 fail_timeout=5;
    server 127.0.0.1:9005 weight=1 fail_timeout=5;
    server 127.0.0.1:9006 weight=1 fail_timeout=5;
    server 127.0.0.1:9007 weight=1 fail_timeout=5;
    server 127.0.0.1:9008 weight=1 fail_timeout=5;
    server 127.0.0.1:9009 weight=1 fail_timeout=5;
    server 127.0.0.1:9010 weight=1 fail_timeout=5;
    least_conn;

    balancer_by_lua_block {
        -- use Lua to do something interesting here
        -- as a dynamic balancer

        -- note: because of the "or" fallback, this can never be nil,
        -- so test against the empty string instead
        local upstream_status = (ngx.var.upstream_status or "")

        if upstream_status == "" then
            -- code here to pass the same request to the next upstream in the queue
        end
    } # end balancer_by_lua_block

} # end upstream

Perhaps a Lua function or method to pass a request to the next upstream in the queue simply does not exist yet?
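[Editorial note: OpenResty does expose an API for driving retries from inside balancer_by_lua_block: the ngx.balancer module from lua-resty-core. A hedged sketch of the usual pattern follows; the peer list is illustrative, and note that this phase runs before each connection attempt, so get_last_failure() reports why the previous attempt failed rather than inspecting response chunks:]

```lua
balancer_by_lua_block {
    local balancer = require "ngx.balancer"

    -- illustrative peer list; a real setup might load this dynamically
    local peers = { {"127.0.0.1", 9001}, {"127.0.0.1", 9002} }

    if ngx.ctx.tries == nil then
        -- first attempt for this request: allow retries on the remaining peers
        ngx.ctx.tries = 0
        local ok, err = balancer.set_more_tries(#peers - 1)
        if not ok then
            ngx.log(ngx.ERR, "set_more_tries failed: ", err)
        end
    else
        -- a previous attempt failed; state is "failed" or "next"
        local state, status = balancer.get_last_failure()
        ngx.log(ngx.WARN, "retrying after ", state,
                " (status ", status or "-", ")")
    end

    ngx.ctx.tries = ngx.ctx.tries + 1
    local peer = peers[(ngx.ctx.tries - 1) % #peers + 1]

    local ok, err = balancer.set_current_peer(peer[1], peer[2])
    if not ok then
        ngx.log(ngx.ERR, "set_current_peer failed: ", err)
        return ngx.exit(500)
    end
}
```

Which failures actually trigger a retry is still governed by the fastcgi_next_upstream / proxy_next_upstream settings in effect.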

c0nw...@googlemail.com

Dec 31, 2016, 6:05:22 AM
to openresty-en
I made some changes, but have not tested this in action yet; here it is.

upstream myLoadBalancer {
    server 127.0.0.1:9001 weight=1 fail_timeout=5;
    server 127.0.0.1:9002 weight=1 fail_timeout=5;
    server 127.0.0.1:9003 weight=1 fail_timeout=5;
    server 127.0.0.1:9004 weight=1 fail_timeout=5;
    server 127.0.0.1:9005 weight=1 fail_timeout=5;
    server 127.0.0.1:9006 weight=1 fail_timeout=5;
    server 127.0.0.1:9007 weight=1 fail_timeout=5;
    server 127.0.0.1:9008 weight=1 fail_timeout=5;
    server 127.0.0.1:9009 weight=1 fail_timeout=5;
    server 127.0.0.1:9010 weight=1 fail_timeout=5;
    least_conn;

    balancer_by_lua_block {
        -- use Lua to do something interesting here
        -- as a dynamic balancer

        -- note: because of the "or" fallback, this can never be nil,
        -- so test against the empty string instead
        local upstream_status = (ngx.var.upstream_status or "")

        if upstream_status == "" then
            -- code here to pass the same request to the next upstream in the queue;
            -- set status so the Nginx *_next_upstream config will pass the
            -- request to the next server in the queue
            ngx.var.upstream_status = "499"
        end
    } # end balancer_by_lua_block

} # end upstream


fastcgi_next_upstream error timeout http_499;
proxy_next_upstream error timeout http_499;

nginx for Windows

Dec 31, 2016, 7:59:32 AM
to openresty-en
This is not going to work; the balancer in nginx is a totally different thing from the one in Lua(*). Either stick to nginx and deal with it via a PHP script returning a custom code for next_upstream, or stick to Lua's (OpenResty's) load balancer, which AFAIK can't deal with this specific issue yet.

* The point here is 'who' is in control when processing is passed to PHP; this 'who' is the one who gets the return status, and there can only be one 'who'.

c0nw...@googlemail.com

Dec 31, 2016, 4:15:26 PM
to openresty-en
Do you have an example of a cmd script sending a status code when PHP crashes, or does that not exist either?