Can you tell us more about your setup?
On Thursday, August 31, 2017 at 3:05:56 PM UTC-7, Massimo Di Pierro wrote: Can you tell us more about your setup?
AWS Linux, Python 2.7.12, web2py shake-the-box: web2py (2.14.6) + Rocket + SQLite.
Since this is still evolving into production, and usually doesn't involve much maintenance, I'm still hand-starting from the command line (-i 0.0.0.0 -p 443 -c ...pem -k ....pem), typing the password at the prompt (I know, -a exists), and after a few seconds hitting ^Z and then bg.
I do the same for port 80, which should just return a 302, plus two -K invocations for the scheduler stuff (I have 2 apps).
I had a more recent failure-to-respond where there didn't seem to be anything in /var/log/messages, or any error in logs/web2py.log.
The 443 process was still running (in some sense), so I killed it and started a fresh one, and things were back to normal.
(This is the same system that isn't fully happy with large-file uploads from a Windows WinINet client ... but that one doesn't go comatose: it eventually times out, leaves a stack trace in logs/web2py.log, and by that point the uploaded file has been properly saved. I think it still responds to other requests between the client thinking it's done and the timeout appearing.)
/dps
[...]
[ec2-user@ip-172-31-16-18 web2py-2.14.6]$ curl -v --GET https://127.0.0.1/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* NSS error -5961 (PR_CONNECT_RESET_ERROR)
* TCP connection reset by peer
* Curl_http_done: called premature == 1
* Closing connection 0
curl: (35) TCP connection reset by peer
[ec2-user@ip-172-31-16-18 web2py-2.14.6]$
[ec2-user@ip-172-31-16-18 web2py-2.14.6]$ date
Thu Sep 14 04:39:03 UTC 2017
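For what it's worth, that PR_CONNECT_RESET_ERROR can be told apart from "nothing listening" or "host wedged" with a small probe. A Python 3 sketch (the labels and the rough mapping to curl's messages are mine, not anything web2py provides):

```python
import socket

def probe(host, port, timeout=5.0):
    """TCP-connect probe; classify failures roughly the way curl reports them."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except ConnectionRefusedError:
        return "refused"   # nothing listening on the port
    except ConnectionResetError:
        return "reset"     # peer answered with RST, like PR_CONNECT_RESET_ERROR
    except socket.timeout:
        return "timeout"   # SYN never answered (firewalled or wedged host)
```

Run from cron, something like this could notice the comatose state before the next manual curl does.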
63.143.42.248, 2017-09-13 22:19:39, HEAD, /, HTTP/1.1, 200, 0.022028
63.143.42.248, 2017-09-13 22:34:39, HEAD, /, HTTP/1.1, 200, 0.021980
63.143.42.248, 2017-09-13 22:49:39, HEAD, /, HTTP/1.1, 200, 0.022100
63.143.42.248, 2017-09-14 00:16:36, GET, /, HTTP/1.1, 200, 0.097000
63.143.42.248, 2017-09-14 00:19:15, HEAD, /, HTTP/1.1, 200, 0.022115
139.162.106.181, 2017-09-14 00:25:05, GET, /, HTTP/1.1, 303, 0.015381
139.162.106.181, 2017-09-14 00:25:05, GET, /user/login, HTTP/1.1, 200, 0.020568
63.143.42.248, 2017-09-14 00:34:15, HEAD, /, HTTP/1.1, 200, 0.021668
63.143.42.248, 2017-09-14 00:49:15, HEAD, /, HTTP/1.1, 200, 0.021978
63.143.42.248, 2017-09-14 01:04:15, HEAD, /, HTTP/1.1, 200, 0.022382
63.143.42.248, 2017-09-14 01:10:26, HEAD, /, HTTP/1.1, 200, 0.022271
63.143.42.248, 2017-09-14 01:25:26, HEAD, /, HTTP/1.1, 200, 0.024839
63.143.42.248, 2017-09-14 01:40:25, HEAD, /, HTTP/1.1, 200, 0.022224
63.143.42.248, 2017-09-14 01:55:25, HEAD, /, HTTP/1.1, 200, 0.021916
[...]
On Friday, September 1, 2017 at 4:07:33 PM UTC-7, Dave S wrote: [...] (the Windows-client upload issue described above)
While getting this client working with nginx + uwsgi, I discovered I needed to add a call to HttpEndRequest(). This doesn't solve the Rocket issue, but it changes it somewhat: the client now gets a timeout during that call, and it still takes 10 minutes for the request to show up in either httpserver.log or logs/web2py.log, but it now reports a 200 status, and there is no traceback.
On Friday, September 22, 2017 at 11:50:15 PM UTC-7, Dave S wrote:
[...]
This upload, BTW, was 5,933,947 bytes. A Linux client, where I get to use libcurl, did 11,721,087 bytes with no delay symptoms.
BTW, when uwsgi calls web2py, it seems to start a new logs/web2py.log (rotating the old ones out). Is there a way to have it keep using the currently open one? I'm not yet using --emperor; is that part of the issue?
[...]
I did an experiment with going back to http (no-ess) so that I could read the frames in tcpdump, and I see that the client is getting the Content-Length header in (and it appears to this Mk I eyeball to be correct). This experiment was with the nginx front end, and I only captured the client side (because I specified the port in the tcpdump command). After what appears to be the last byte of the data (which is the last byte of the hex dump for that frame), there are two 4500 0028 frames as the final handshake.
It may be a couple of days before I can repeat the experiment with Rocket getting the no-ess data.
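As an aside, those 4500 0028 frames can be decoded by hand; a sketch (pure arithmetic on the hex dump, nothing web2py-specific):

```python
import struct

def ip_header_summary(first4):
    """Decode the first four bytes of an IPv4 packet from a hex dump."""
    version = first4[0] >> 4                 # 4 -> IPv4
    header_len = (first4[0] & 0x0F) * 4      # IHL, in 32-bit words
    total_len = struct.unpack("!H", first4[2:4])[0]
    return version, header_len, total_len

# "4500 0028" -> IPv4, 20-byte IP header, 40 bytes total: that leaves
# exactly a 20-byte TCP header and no payload, i.e. bare FIN/ACK
# segments, consistent with a normal connection close.
summary = ip_header_summary(bytes.fromhex("45000028"))
```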
[...]
Lots of one-line log files is a bit annoying.