mod_wsgi process restarting (not) logging and "faulthandler"


Jesus Cea

Feb 6, 2018, 9:53:01 PM
to mod...@googlegroups.com
I find it annoying that mod_wsgi doesn't log when it decides to restart a
daemon process because it has become unresponsive. It would even be useful
to log when the maximum request number is reached (if defined) and the
daemon is recycled.

Besides that, it could be really useful for mod_wsgi to call "faulthandler"
when it is about to reboot the daemon. This module was introduced in Python 3.3
and is invaluable in deadlock situations, which is exactly the situation in
which mod_wsgi decides to restart a daemon because it is unresponsive.

The patch seems quite easy: instead of killing the process as is, first send
a signal to request a "faulthandler" dump, and then do the killing.

An "atexit()" handler might not be enough if the Python interpreter is in
bad shape, etc. Moreover, I don't think "atexit()" knows whether the daemon
is being killed because of a service timeout or because somebody just
uploaded a new WSGI script.
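
For illustration only, a minimal sketch of the daemon side of this idea,
assuming mod_wsgi were to send, say, SIGUSR1 before the final kill (the
signal number is just an example):

import faulthandler
import signal
import sys

# Dump the stack of every thread when SIGUSR1 arrives. The dump goes to
# stderr, which mod_wsgi redirects to the Apache error log.
faulthandler.register(signal.SIGUSR1, file=sys.stderr, all_threads=True)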

--
Jesús Cea Avión _/_/ _/_/_/ _/_/_/
jc...@jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/
Twitter: @jcea _/_/ _/_/ _/_/_/_/_/
jabber / xmpp:jc...@jabber.org _/_/ _/_/ _/_/ _/_/ _/_/
"Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/
"My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz


Graham Dumpleton

Feb 6, 2018, 9:55:41 PM
to mod...@googlegroups.com


> On 7 Feb 2018, at 1:52 pm, Jesus Cea <jc...@jcea.es> wrote:
>
> I find it annoying that mod_wsgi doesn't log when it decides to restart a
> daemon process because it has become unresponsive. It would even be useful
> to log when the maximum request number is reached (if defined) and the
> daemon is recycled.

On this specific issue, it will log lots of stuff if you have the Apache log level set to info.

LogLevel info

For request-timeout expiring, it even logs stack traces for you if it can, so you can see where it gets stuck.

It is too noisy to log at warn/err level, and not appropriate either.

I will look at the other issues you raised later.

> Besides that, it could be really useful for mod_wsgi to call "faulthandler"
> when it is about to reboot the daemon. This module was introduced in Python 3.3
> and is invaluable in deadlock situations, which is exactly the situation in
> which mod_wsgi decides to restart a daemon because it is unresponsive.
>
> The patch seems quite easy: instead of killing the process as is, first send
> a signal to request a "faulthandler" dump, and then do the killing.
>
> An "atexit()" handler might not be enough if the Python interpreter is in
> bad shape, etc. Moreover, I don't think "atexit()" knows whether the daemon
> is being killed because of a service timeout or because somebody just
> uploaded a new WSGI script.
>

Graham Dumpleton

Feb 6, 2018, 10:01:23 PM
to mod...@googlegroups.com
BTW, you can see an example of the stack trace generated when a request-timeout occurs in a recent mailing list discussion at:

    https://groups.google.com/d/msg/modwsgi/_i6MGs6fh6w/nH3x7_nuAwAJ

Graham

Jesus Cea

Feb 6, 2018, 10:49:46 PM
to mod...@googlegroups.com
On 07/02/18 04:01, Graham Dumpleton wrote:
> BTW, you can see an example of the stack trace generated when a request-timeout
> occurs in a recent mailing list discussion at:
>
>     https://groups.google.com/d/msg/modwsgi/_i6MGs6fh6w/nH3x7_nuAwAJ

Hmm. If you are referring to this comment of yours:

"""
You started using request-timeout, good. When using that option you get
the stack trace automatically when the process shuts down due to it.
"""

"request-timeout" generating tracebacks is not documented anywhere in
<https://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html>.

It should also be documented that the traceback is dumped at log level "info".

A random thought about this: the general advice in the documentation to use
"LogLevel info" should be replaced by "LogLevel wsgi:info". This way it can
be left on in production, as I just did :).
The other approach is very verbose for the entire Apache server and all its modules.
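
For example, assuming Apache 2.4 per-module log levels, the two variants would be:

    # Verbose for the entire server and every module:
    LogLevel info

    # Default level for everything else, info only for mod_wsgi:
    LogLevel warn wsgi:info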

Graham Dumpleton

Feb 6, 2018, 10:59:51 PM
to mod...@googlegroups.com
There is an actual example of what you see in the logs further down in that post.

[Mon Feb 05 21:11:52.378725 2018] [wsgi:info] [pid 31535:tid 140409583761152] mod_wsgi (pid=31535): Exiting process 'blah'.
[Mon Feb 05 21:11:52.404508 2018] [wsgi:info] [pid 31550:tid 140409652811520] mod_wsgi (pid=31550): Daemon process request time limit exceeded, stopping process 'blah'.
[Mon Feb 05 21:11:52.404559 2018] [wsgi:info] [pid 31550:tid 140409849104256] mod_wsgi (pid=31550): Shutdown requested 'blah'.
[Mon Feb 05 21:11:52.404619 2018] [wsgi:info] [pid 31550:tid 140409849104256] mod_wsgi (pid=31550): Dumping stack trace for active Python threads.
[Mon Feb 05 21:11:52.404624 2018] [wsgi:info] [pid 31550:tid 140409849104256] mod_wsgi (pid=31550): Thread 140409636026112 executing file "/usr/lib/python3.5/socket.py", line 677, in create_connection
[Mon Feb 05 21:11:52.404628 2018] [wsgi:info] [pid 31550:tid 140409849104256] mod_wsgi (pid=31550): called from file "/usr/lib/python3.5/smtplib.py", line 300, in _get_socket,
[Mon Feb 05 21:11:52.404631 2018] [wsgi:info] [pid 31550:tid 140409849104256] mod_wsgi (pid=31550): called from file "/usr/lib/python3.5/smtplib.py", line 308, in connect,
[Mon Feb 05 21:11:52.404633 2018] [wsgi:info] [pid 31550:tid 140409849104256] mod_wsgi (pid=31550): called from file "/usr/lib/python3.5/smtplib.py", line 226, in __init__,
[Mon Feb 05 21:11:52.404636 2018] [wsgi:info] [pid 31550:tid 140409849104256] mod_wsgi (pid=31550): called from file "/var/www/.virtualenvs/www-elNfpBxP/lib/python3.5/site-packages/django/core/mail/backends/smtp.py", line 42, in open,
....

As for the docs changes, I have created:

Jesus Cea

Feb 6, 2018, 11:14:49 PM
to mod...@googlegroups.com
On 07/02/18 03:55, Graham Dumpleton wrote:
> On this specific issue, it will log lots of stuff if you have the Apache log level set to info.
>
> LogLevel info
>
> For request-timeout expiring, it even logs stack traces for you if it can, so you can see where it gets stuck.

Not documented :-).

Could you possibly change "LogLevel info" to "LogLevel wsgi:info" in the
docs?

Also, the traceback is partial when a thread is inside a C routine, because
it is shown as the last called C function and the line number of the
entry point. We cannot see on what line it is waiting for a lock, for
instance.

For instance, I am causing a deadlock on purpose and I am seeing this:

"""
[Wed Feb 07 04:43:41.021026 2018] [wsgi:info] [pid 27347:tid
140119465424768] mod_wsgi (pid=27347): Thread 140119266129664 executing
file "/home/buffy/wsgi.py", line 449, in get_listado
"""

Line 449 is the "def get_listado()" definition line, not the "with lock"
line inside that function that is actually waiting for the lock.

I see the same effect throughout the traceback: the reported line numbers
are the starting line of each calling function, not the line number of
the actual call.

Is this a bug?

Graham Dumpleton

Feb 6, 2018, 11:23:03 PM
to mod...@googlegroups.com


> On 7 Feb 2018, at 3:14 pm, Jesus Cea <jc...@jcea.es> wrote:
>
> On 07/02/18 03:55, Graham Dumpleton wrote:
>> On this specific issue, it will log lots of stuff if you have the Apache log level set to info.
>>
>> LogLevel info
>>
>> For request-timeout expiring it even logs stack traces for you if it can so you can see where it gets stuck.
>
> Not documented :-).
>
> Could you possibly change "LogLevel info" to "LogLevel wsgi:info" in the
> docs?

Will work out what is best when get around to updating docs.

I cleared out 60+ issues from issue tracker when I was on holiday a few weeks back. Next holiday not until April. :-)

> Also, the traceback is partial when a thread is inside a C routine, because
> it is shown as the last called C function and the line number of the
> entry point. We cannot see on what line it is waiting for a lock, for
> instance.
>
> For instance, I am causing a deadlock on purpose and I am seeing this:
>
> """
> [Wed Feb 07 04:43:41.021026 2018] [wsgi:info] [pid 27347:tid
> 140119465424768] mod_wsgi (pid=27347): Thread 140119266129664 executing
> file "/home/buffy/wsgi.py", line 449, in get_listado
> """
>
> Line 449 is the "def get_listado()" definition line, not the "with lock"
> line inside that function that is actually waiting for the lock.
>
> I see the same effect throughout the traceback: the reported line numbers
> are the starting line of each calling function, not the line number of
> the actual call.

I'll look into it. Python stack traces are a bit of a pain as they don't give a separate stack frame when you call a function which is actually implemented in C code. Usually, though, one can at least work out the line in the calling code where the C function was called. At least that is the case when using profile hooks. I don't remember what happens with stack frame dumping.

Graham

Jesus Cea

Feb 6, 2018, 11:30:00 PM
to mod...@googlegroups.com
On 07/02/18 05:14, Jesus Cea wrote:
> On 07/02/18 03:55, Graham Dumpleton wrote:
>> On this specific issue, it will log lots of stuff if you have the Apache log level set to info.
>>
>> LogLevel info
>>
>> For request-timeout expiring, it even logs stack traces for you if it can, so you can see where it gets stuck.
>
> Not documented :-).
>
> Could you possibly change "LogLevel info" to "LogLevel wsgi:info" in the
> docs?
>
> Also, the traceback is partial when a thread is inside a C routine, because
> it is shown as the last called C function and the line number of the
> entry point. We cannot see on what line it is waiting for a lock, for
> instance.

This is a non sequitur with respect to the rest of the email. Ignore it for now.


> I see the same effect throughout the traceback: the reported line numbers
> are the starting line of each calling function, not the line number of
> the actual call.
>
> Is this a bug?

Let me elaborate. I have this traceback (I am generating a deadlock on
purpose):

"""
[Wed Feb 07 04:43:41.021026 2018] [wsgi:info] [pid 27347:tid
140119465424768] mod_wsgi (pid=27347): Thread 140119266129664 executing
file "/home/buffy/wsgi.py", line 449, in get_listado
[Wed Feb 07 04:43:41.021032 2018] [wsgi:info] [pid 27347:tid
140119465424768] mod_wsgi (pid=27347): called from file
"/home/buffy/wsgi.py", line 946, in do_get_listing,
[Wed Feb 07 04:43:41.021037 2018] [wsgi:info] [pid 27347:tid
140119465424768] mod_wsgi (pid=27347): called from file
"/home/buffy/wsgi.py", line 1033, in application.
"""

Let's see what lines 449, 946 and 1033 are:

"""
jcea@jcea:~/hg/webdav2cloud$ sed -n -e 449p -e 946p -e 1033p wsgi.py
def get_listado(self, URI):
def do_get_listing(environ, URI, start_response) :
def application(environ, start_response) :
"""

The line number is the function definition point, not the actual line
doing the call.

Jesus Cea

Feb 6, 2018, 11:32:42 PM
to mod...@googlegroups.com
On 07/02/18 05:22, Graham Dumpleton wrote:
> I'll look into it. Python stack traces are a bit of a pain as they don't
> give a separate stack frame when you call a function which is
> actually implemented in C code. Usually, though, one can at least work
> out the line in the calling code where the C function was called. At
> least that is the case when using profile hooks. I don't remember what
> happens with stack frame dumping.

I am hitting this issue myself in another project. It looks like a bug or
missing feature of the Python interpreter. Maybe something to work on for
Python 3.8, two years away.

What I am experiencing now is something different. I just sent an email
about it.

5:30 AM in Spain, time for some sleep!

Graham Dumpleton

Feb 6, 2018, 11:36:01 PM
to mod...@googlegroups.com
It could be that I am being naughty here by not heeding this:


    int f_lasti;                /* Last instruction if called */
    /* Call PyFrame_GetLineNumber() instead of reading this field
       directly.  As of 2.3 f_lineno is only valid when tracing is
       active (i.e. when f_trace is set).  At other times we use
       PyCode_Addr2Line to calculate the line from the current
       bytecode index. */
    int f_lineno;               /* Current line number */
    int f_iblock;               /* index in f_blockstack */
    char f_executing;           /* whether the frame is still executing */
    PyTryBlock f_blockstack[CO_MAXBLOCKS]; /* for try and loop blocks */
    PyObject *f_localsplus[1];  /* locals+stack, dynamically sized */

In other words, I am accessing f_lineno directly which may not work correctly now.

Have to remember some of this code was written a very very long time ago. :-)

Graham

Graham Dumpleton

Feb 7, 2018, 12:10:13 AM
to mod...@googlegroups.com
Fixed in:

    https://github.com/GrahamDumpleton/mod_wsgi/commit/f635d7f76380e9826b35d9d3ef09c2176a3e14d8

Jesus Cea

Feb 7, 2018, 8:37:10 PM
to mod...@googlegroups.com
On 07/02/18 06:10, Graham Dumpleton wrote:
> Fixed in:
>
> https://github.com/GrahamDumpleton/mod_wsgi/commit/f635d7f76380e9826b35d9d3ef09c2176a3e14d8

Cool. It works nicely.

Another request: when the thread tracebacks are being dumped, please
use "thread.name" if available. I annotate my threads, but mod_wsgi is
dumping the internal id, not the name.
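
For example, the threads are created with explicit names along these lines
(the target function here is just illustrative):

    threading.Thread(target=cleanup_cache, name='cache_cleanup', daemon=True).start()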

So many things to do, so little time :-)

Graham Dumpleton

Feb 7, 2018, 8:40:00 PM
to mod...@googlegroups.com
What are you setting the thread name to?

I did have this issue:

    https://github.com/GrahamDumpleton/mod_wsgi/issues/160

I closed it a few weeks back when I did my big purge of issues and deferred doing it until there was a demonstrated need.

If you are logging thread ID in access log, then setting thread ID to request ID and attaching it to the traceback sounds reasonable.

Graham

Jesus Cea

Feb 7, 2018, 9:08:41 PM
to mod...@googlegroups.com
On 08/02/18 02:39, Graham Dumpleton wrote:
> What are you setting the thread name to?

Besides the mod_wsgi threads running "application()", my code creates
tons of long-term threads like "cache_cleanup",
"periodic_cache_flush_to_disk", "map generation workers", "audio
transcoding", etc.

> https://github.com/GrahamDumpleton/mod_wsgi/issues/160

Uhm, the first thing in "application()" could be a "thread.name = URI", and
then a "finally" clause could do "thread.name = 'idle'", or that could go in
the "close()" code of the returned iterator.

This looks like a pattern for a near trivial middleware.
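
Something like this rough sketch (ignoring the complication of iterable
responses, where the rename back should really happen in "close()"):

import threading

class ThreadNameMiddleware:

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        # Label the handling thread with the request URI for the duration of
        # the request, then mark it as idle again.
        thread = threading.current_thread()
        thread.name = environ.get('REQUEST_URI', environ.get('PATH_INFO', '/'))
        try:
            return self.application(environ, start_response)
        finally:
            thread.name = 'idle'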

> If you are logging thread ID in access log, then setting thread ID to
> request ID and attaching it to the traceback sounds reasonable.

I am looking to be able to easily identify my threads in a
"request-timeout" traceback dump when I have something like 130 threads
running. They are nicely labeled in my code, but the mod_wsgi traceback
dump doesn't show the "name" field, only the opaque and uninformative
"thread.ident".

I have a "futures._WorkItem" overloaded to accept an extra "thread_name"
parameter in the "futures.executor.submit()" so I can annotate threads
doing background work.
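
Roughly, the effect is equivalent to this sketch (not my actual "_WorkItem"
override, just an illustration):

import concurrent.futures
import threading

class NamedTaskExecutor(concurrent.futures.ThreadPoolExecutor):

    def submit(self, fn, *args, thread_name=None, **kwargs):
        if thread_name is None:
            return super().submit(fn, *args, **kwargs)

        def labelled():
            # Rename the worker thread while the task runs, restore it afterwards.
            thread = threading.current_thread()
            original = thread.name
            thread.name = thread_name
            try:
                return fn(*args, **kwargs)
            finally:
                thread.name = original

        return super().submit(labelled)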

Now that I am using "request-timeout" traceback dumps, I would love to
have all that information available. Just dump "thread.name" if
available, instead of "thread.ident" :-).

Graham Dumpleton

Feb 7, 2018, 9:41:10 PM
to mod...@googlegroups.com

> On 8 Feb 2018, at 1:08 pm, Jesus Cea <jc...@jcea.es> wrote:
>
> On 08/02/18 02:39, Graham Dumpleton wrote:
>> What are you setting the thread name to?
>
> Besides the mod_wsgi threads running "application()", my code creates
> tons of long-term threads like "cache_cleanup",
> "periodic_cache_flush_to_disk", "map generation workers", "audio
> transcoding", etc.
>
>> https://github.com/GrahamDumpleton/mod_wsgi/issues/160
>
> Uhm, the first thing in "application()" could be a "thread.name = URI", and
> then a "finally" clause could do "thread.name = 'idle'", or that could go in
> the "close()" code of the returned iterator.
>
> This looks like a pattern for a near trivial middleware.

Using a WSGI middleware for that is a bad idea because of the complexity of implementing a WSGI middleware that properly bounds the full execution of the code involved in all parts of handling the request. The better way is to use the event system in mod_wsgi to be notified of the start and end of the request. This is more efficient and you don't need to wrap the WSGI application entry point.

import mod_wsgi
import threading

def event_handler(name, **kwargs):
    # Per request dictionary for stashing data shared between events of the same request.
    cache = mod_wsgi.request_data()
    thread = threading.current_thread()

    if name == 'request_started':
        # Remember the original thread name and replace it with the request URI.
        cache['original_thread_name'] = thread.name
        environ = kwargs['request_environ']
        thread.name = environ['REQUEST_URI']

    elif name == 'request_finished':
        # Restore the original thread name once the request completes.
        thread.name = cache['original_thread_name']

mod_wsgi.subscribe_events(event_handler)

I don't think overriding the thread name with a request URI is a good idea here though. I think it would be better to have mod_wsgi set it based on the existing request ID that Apache generates as that then matches what you can log for the request in the access log.

Overall, what is probably a better approach is for me to extend the event mechanism in a couple of ways.

The first is to add a new event type of 'request_active'. In mod_wsgi I could have a default reporting interval to pick up on long running requests and generate a 'request_active' event every 15 seconds (configurable), so long as the request is running.

A second event could be of type 'request_timeout'. This could be triggered for each request, specifically for the case where there are active requests when the process is being shutdown due to request-timeout expiring for the process.

From the event handler for either of these you could log any information you want. The only hard bit for me is that currently the mod_wsgi.request_data() call, which provides access to a per request dictionary where you can stash data, is based on thread locals, so calling it in these two events wouldn't work, as they aren't triggered from the request thread but from a separate thread. I had refrained from passing it as an explicit argument to the event handler for reasons I couldn't remember. Otherwise, when calling mod_wsgi.request_data() I would need to know that it is being called for these special cases and calculate the cache of request data another way, based on knowing which request I am working with when triggering the event.

>> If you are logging thread ID in access log, then setting thread ID to
>> request ID and attaching it to the traceback sounds reasonable.
>
> I am looking to be able to easily identify my threads in a
> "request-timeout" traceback dump when I have something like 130 threads running.
> They are nicely labeled in my code, but the mod_wsgi traceback dump
> doesn't show the "name" field, only the opaque and uninformative "thread.ident".
>
> I have overloaded "futures._WorkItem" to accept an extra "thread_name"
> parameter in "futures.executor.submit()", so I can annotate threads
> doing background work.
>
> Now that I am using "request-timeout" traceback dumps, I would love to
> have all that information available. Just dump "thread.name" if
> available, instead of "thread.ident" :-).

Graham

Graham Dumpleton

Feb 7, 2018, 10:44:50 PM
to mod...@googlegroups.com
FWIW, the other current events are:

* response_started
* request_exception
* process_stopping

Passing the per request data as a "request_data" argument to the event handler for request/response events was easy enough to do.

Getting access to the thread object in order to work out thread.name when dumping stack traces is definitely not easy. This is because you only have access to the integer thread ID and the Python frame object. You would have to use the thread ID to look up threading._active and hope it is the same value used for that. The problem, though, is that you don't know which Python interpreter the thread was active in, and you need to switch to that interpreter to do the lookup. When you have multiple sub interpreters, the thread could also have been used in more than one sub interpreter and be associated with multiple high level Python thread objects. This is the sort of mess that just gives one headaches. I therefore don't think that is practical.

So I don't think it can be done from where that current code gets triggered. The point where the "process_stopping" event is generated could instead be used. This is different because that is generated for each sub interpreter just before destroying it, so you are already executing code in the right context.

Right now at least, using the per request events, you could track current requests yourself in a global dictionary, with a link to the thread as part of the request data. The "process_stopping" event handler could access that cache to dump things out. What you will not know is why the process is being stopped, but that is possibly something that could be passed as an argument to "process_stopping", so you could decide what to do based on the reason for the process shutdown.
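
A rough sketch of that self-tracking approach, using only the request events
already described (the dictionary keys and field names are just illustrative):

import threading
import time
import mod_wsgi

current_requests = {}  # thread ident -> details of the request running on that thread
current_requests_lock = threading.Lock()

def event_handler(name, **kwargs):
    thread = threading.current_thread()

    if name == 'request_started':
        environ = kwargs['request_environ']
        with current_requests_lock:
            current_requests[thread.ident] = {
                'thread_name': thread.name,
                'request_uri': environ.get('REQUEST_URI'),
                'start_time': time.time(),
            }

    elif name == 'request_finished':
        with current_requests_lock:
            current_requests.pop(thread.ident, None)

    elif name == 'process_stopping':
        # Dump whatever was stashed for requests still in flight.
        with current_requests_lock:
            for ident, details in current_requests.items():
                print(ident, details['thread_name'], details['request_uri'],
                      time.time() - details['start_time'])

mod_wsgi.subscribe_events(event_handler)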

What mod_wsgi could do to make it easier is provide a way of getting back a dictionary of references, by request ID, to all the request_data objects for active requests. What you stash in them from the event handlers would be up to you. I need to think about the best way to expose that and give access to those request data objects.

Graham

Graham Dumpleton

Feb 8, 2018, 6:59:34 AM
to mod...@googlegroups.com
Passing request_data wasn't a good idea for certain reasons. I probably just rediscovered why I didn't do it that way in the first place. :-)

Anyway, provided you use the latest from the 'develop' branch of the GitHub repo, play around with the following code.

import os
import sys
import time
import threading
import traceback
import mod_wsgi

def event_handler(name, **kwargs):
    if name == 'request_started':
        # Stash the event details (request_environ, application_start, ...) in the
        # per request data so they can be read back when the process is stopping.
        request_data = mod_wsgi.request_data()
        request_data.update(kwargs)
        request_data['python_thread_id'] = threading.get_ident()

    elif name == 'process_stopping':
        if kwargs['shutdown_reason'] == 'request_timeout':
            print('SHUTDOWN')

            current_time = time.time()

            # Snapshot the frames of all threads and the requests still active.
            stacks = dict(sys._current_frames().items())
            active = dict(mod_wsgi.active_requests.items())

            for request_id, request_data in active.items():
                python_thread_id = request_data['python_thread_id']
                application_start = request_data['application_start']
                request_environ = request_data['request_environ']
                request_uri = request_environ['REQUEST_URI']

                running_time = current_time - application_start

                print()

                print('THREAD_ID', python_thread_id)
                print('REQUEST_ID', request_id)
                print('REQUEST_URI', request_uri)
                print('RUNNING_TIME', running_time)

                if python_thread_id in stacks:
                    print('STACK-TRACE')
                    traceback.print_stack(stacks[python_thread_id])

mod_wsgi.subscribe_events(event_handler)

def application(environ, start_response):
    sleep_duration = environ.get('HTTP_X_SLEEP_DURATION', 0)
    sleep_duration = float(sleep_duration or 0)

    status = '200 OK'
    output = b'Hello World!'

    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)

    if sleep_duration:
        time.sleep(sleep_duration)

    yield output

I am running that with:

    mod_wsgi-express start-server tests/request-timeout.wsgi --log-to-terminal --request-timeout 1 --threads 1 --log-level info

and running curl as:

    curl -H X-Sleep-Duration:60 http://localhost:8000/

That gets me the following. One stack trace I generate myself, just for that sub interpreter, out of the event mechanism. The other is process wide, generated from the C code level.

[Thu Feb 08 22:52:41.635819 2018] [wsgi:info] [pid 21486] mod_wsgi (pid=21486): Daemon process request time limit exceeded, stopping process 'localhost:8000'.
[Thu Feb 08 22:52:41.636053 2018] [wsgi:info] [pid 21486] mod_wsgi (pid=21486): Shutdown requested 'localhost:8000'.

[Thu Feb 08 22:52:41.636126 2018] [wsgi:error] [pid 21486] SHUTDOWN
[Thu Feb 08 22:52:41.636149 2018] [wsgi:error] [pid 21486]
[Thu Feb 08 22:52:41.636156 2018] [wsgi:error] [pid 21486] THREAD_ID 123145557110784
[Thu Feb 08 22:52:41.636162 2018] [wsgi:error] [pid 21486] REQUEST_ID YSp5DBLFVJs
[Thu Feb 08 22:52:41.636166 2018] [wsgi:error] [pid 21486] REQUEST_URI /
[Thu Feb 08 22:52:41.636179 2018] [wsgi:error] [pid 21486] RUNNING_TIME 1.85565185546875
[Thu Feb 08 22:52:41.636187 2018] [wsgi:error] [pid 21486] STACK-TRACE
[Thu Feb 08 22:52:41.819058 2018] [wsgi:error] [pid 21486]   File "/Volumes/graham/Projects/mod_wsgi/tests/request-timeout.wsgi", line 56, in application
[Thu Feb 08 22:52:41.819071 2018] [wsgi:error] [pid 21486]     time.sleep(sleep_duration)

[Thu Feb 08 22:52:41.819080 2018] [wsgi:info] [pid 21486] mod_wsgi (pid=21486): Dumping stack trace for active Python threads.
[Thu Feb 08 22:52:41.819083 2018] [wsgi:info] [pid 21486] mod_wsgi (pid=21486): Thread 123145557110784 executing file "/Volumes/graham/Projects/mod_wsgi/tests/request-timeout.wsgi", line 56, in application
[Thu Feb 08 22:52:46.636172 2018] [wsgi:info] [pid 21486] mod_wsgi (pid=21486): Aborting process 'localhost:8000'.
[Thu Feb 08 22:52:46.636375 2018] [wsgi:info] [pid 21486] mod_wsgi (pid=21486): Exiting process 'localhost:8000'.

Graham