Re: nginx + apache2 + modwsgi + django random freeze


Rodrigo Campos

Nov 19, 2011, 10:17:49 PM
to mod...@googlegroups.com
On Sun, Nov 20, 2011 at 12:08 AM, Rodrigo Campos <rodr...@gmail.com> wrote:
>
> This application uses a MySQL database (Percona, actually a fork of
> MySQL) hosted on another machine. Looking at the Munin and New Relic
> stats I don't see anything weird, although in Munin, with the mysql
> plugin, I see a huge increase in "binlog cache usage" for some
> minutes. That's the only "anomaly" I see.

Also, I forgot to mention that the DB is replicated (although the
replica is not being used). But I don't even know if the DB has
*anything* to do with the problem I saw.


Thanks a lot,
Rodrigo

Rodrigo Campos

Nov 19, 2011, 10:08:50 PM
to mod...@googlegroups.com
Hi,

I have an application that is freezing at some random times. It
doesn't seem to be obviously related to anything, and restarting
apache seems to fix it.

My first guess was to lower the "maximum-requests" setting used by
mod_wsgi from 500 to 200 to avoid any leak we might be hitting (open
fds, memory, or something like that), change the LogLevel from warn to
info in the apache config, and add "inactivity-timeout=600". Just in
case, the relevant WSGI parts of the apache vhost look like this now:

WSGIDaemonProcess <name> processes=4 maximum-requests=200 threads=1 inactivity-timeout=600
WSGIProcessGroup <some_user>
WSGIScriptAlias / /<path>/apache.wsgi
WSGIApplicationGroup %{GLOBAL}


But with this same config, it happened again today. In the error logs
from that time there are several errors like this one:

[Sat Nov 19 02:59:28 2011] [error] [client 127.0.0.1] Script timed out before returning headers: apache.wsgi

In fact, there are 47 of them. And then 3 like this:

[Sat Nov 19 08:02:25 2011] [info] mod_wsgi (pid=11550): Daemon process inactivity timer expired, stopping process '<name>'.

And 1 like:

[Sat Nov 19 08:03:06 2011] [info] mod_wsgi (pid=18334): Maximum requests reached '<name>'.
[Sat Nov 19 08:03:06 2011] [info] mod_wsgi (pid=18334): Shutdown requested '<name>'.
[Sat Nov 19 08:03:06 2011] [info] mod_wsgi (pid=18334): Stopping process '<name>'.
[Sat Nov 19 08:03:06 2011] [info] mod_wsgi (pid=18334): Destroying interpreters.
[Sat Nov 19 08:03:06 2011] [info] mod_wsgi (pid=18334): Cleanup interpreter ''.
[Sat Nov 19 08:03:07 2011] [info] mod_wsgi (pid=18334): Terminating Python.
[Sat Nov 19 08:03:07 2011] [info] mod_wsgi (pid=18334): Python has shutdown.


After this everything seems to be normal and working again. So it
seems the inactivity-timeout parameter worked as expected; it wasn't
necessary to restart apache this time.

Also, it seems there is something wrong with the date/hour that I
haven't looked into yet. Not that much time actually passed between
the "Script timed out" messages and the killing. It seems some
messages are logged using one timezone and others using a different
timezone, or something like that, because I have this in the logs, for
example:

[Sat Nov 19 08:04:31 2011] [info] mod_wsgi (pid=13853): Aborting process '<name>'.
[Sat Nov 19 03:04:31 2011] [error] [client 127.0.0.1] Premature end of script headers: apache.wsgi
[Sat Nov 19 08:04:31 2011] [info] mod_wsgi (pid=18356): Python has shutdown.
[Sat Nov 19 03:04:32 2011] [info] mod_wsgi (pid=18404): Attach interpreter ''.


But the time is actually misconfigured on that server (it is not in
our timezone, and I will fix it as soon as I can confirm the
application won't have any problems with that), so perhaps the
timezone is set one way in one place and another way in some other
config file.

This application uses a MySQL database (Percona, actually a fork of
MySQL) hosted on another machine. Looking at the Munin and New Relic
stats I don't see anything weird, although in Munin, with the mysql
plugin, I see a huge increase in "binlog cache usage" for some
minutes. That's the only "anomaly" I see.


Does anyone have any idea/tip on how I can debug this further?

I've seen there are commits in mod_wsgi (not included in any release
yet) that add blocked-timeout and blocked-requests parameters, and
another one to get some information about the processes/threads (which
information? a call-trace perhaps? :-)). If I understand correctly
this could be *very* helpful for debugging this problem. Also, if we
could have a trace showing where the blocked process was hanging, it
would be *very very* useful :-D

Is there some other configuration option that could help? Do you know
when there will be a new release that includes the "blocked request"
parameters?


Oh, btw, I'm using libapache2-mod-wsgi 3.3-2ubuntu2


Also, if you need more information, please let me know.

Thanks a lot,
Rodrigo

Graham Dumpleton

Nov 20, 2011, 1:18:57 AM
to mod...@googlegroups.com
I have lost Internet at home at the moment so it is hard to reply in depth.

Go to mod_wsgi docs and look for debugging tips page. Go to very last section. Implement that mechanism for being able to dump out python stack traces for process on demand.
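That mechanism boils down to a monitor thread inside the application that watches for a flag file and dumps every thread's stack when the file appears. A minimal sketch, assuming a polling approach; the function names, flag file path, and interval here are illustrative, not the exact code from the docs:

```python
import os
import sys
import threading
import time
import traceback

def dump_stacks():
    """Return a text dump of the stack of every active Python thread."""
    lines = []
    for thread_id, frame in sys._current_frames().items():
        lines.append('Thread %d:\n' % thread_id)
        lines.extend(traceback.format_stack(frame))
    return ''.join(lines)

def monitor(flagfile, interval=1.0):
    # Poll for the flag file; when it appears, dump all stacks to stderr
    # (which ends up in the Apache error log) and remove the file.
    while True:
        if os.path.exists(flagfile):
            sys.stderr.write(dump_stacks())
            os.remove(flagfile)
        time.sleep(interval)

def start_monitor(flagfile='/tmp/dump-stacks'):
    thread = threading.Thread(target=monitor, args=(flagfile,))
    thread.daemon = True  # don't prevent interpreter shutdown
    thread.start()
```

You would call start_monitor() from the WSGI script, then `touch /tmp/dump-stacks` on the server whenever the application hangs.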

Likely your code is deadlocking at some point.

More later when able to.

Graham

Rodrigo Campos

Nov 20, 2011, 10:02:14 AM
to mod...@googlegroups.com
On Sun, Nov 20, 2011 at 3:18 AM, Graham Dumpleton
<graham.d...@gmail.com> wrote:
> I have lost Internet at home at moment so hard to reply in depth.

Please reply when you can. Thanks a lot for your quick reply anyway :)

>
> Go to mod_wsgi docs and look for debugging tips page. Go to very last
> section. Implement that mechanism for being able to dump out python stack
> traces for process on demand.

Ohh, I forgot to mention that I read the debugging tips too. We
considered doing this, but we wanted to try the options I mentioned
first, just in case. Also, isn't mod_wsgi planned to dump a trace of
the application when it finds blocked requests? If that is the case, I
would really prefer to update to a newer mod_wsgi :)

Also, it would be great if getting the trace could be automated
somehow, because this tends to happen at 4:00 am, so if it could get
the trace and then restart automatically that would be great. If I
instrument the python application code to watch for the file that
triggers the stack trace dump, is there any parameter in mod_wsgi I
can use to run some script (one that creates that file, for example)
before it restarts the process because of the inactivity timer?

Looking through the mod_wsgi code I see:

    if (inactivity_time <= now) {
        ap_log_error(APLOG_MARK, WSGI_LOG_INFO(0), wsgi_server,
                     "mod_wsgi (pid=%d): Daemon process "
                     "inactivity timer expired, stopping "
                     "process '%s'.", getpid(),
                     daemon->group->name);

        restart = 1;
    }

Would calling system() here, or in the "if (restart)" below this, be
acceptable (in wsgi_monitor_thread())? (I don't know in which context
this runs, and whether apache modules have restrictions on that.) If
it is, perhaps a parameter could be added to run a script before the
process is restarted? And perhaps the program would have to "guess"
how long it will take to dump the trace, so the system() call does not
return before the python app has dumped all the stack traces. Although
there must be a better way if we think about it for more than 1 minute :-)

Also, we are using django. Perhaps there is a simpler way, using some
django machinery, to instrument our code to dump stack traces?
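One simpler instrumentation that works with Django (or any Python application) is a signal handler that dumps all thread stacks, so a dump can be triggered externally with `kill`. A sketch of the idea; note that I am not sure how an extra handler interacts with the mod_wsgi daemon process's own signal handling, so this is an assumption to verify rather than a known-good recipe:

```python
import signal
import sys
import traceback

def dump_stacks_handler(signum, frame):
    # Write a stack trace for every thread to stderr, which ends up in
    # the Apache error log. The 'frame' argument is ignored; we walk
    # every thread via sys._current_frames() instead.
    for thread_id, thread_frame in sys._current_frames().items():
        sys.stderr.write('Thread %d:\n' % thread_id)
        traceback.print_stack(thread_frame, file=sys.stderr)

def install_handler():
    # signal.signal() may only be called from the main thread.
    signal.signal(signal.SIGUSR1, dump_stacks_handler)
```

Then `kill -USR1 <pid>` against the daemon process would write the traces without restarting anything.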

>
> Likely your code is dead locking at some point.

Yeah, we were afraid of that :)
But we don't have any idea where. Perhaps waiting for the DB, perhaps
for disk, perhaps when internet has a mini-downtime, or perhaps it is
just a bug... :S

>
> More later when able to.

Yes, when you can please. Take your time :)

Thanks a lot,
Rodrigo

Graham Dumpleton

Nov 21, 2011, 1:07:46 AM
to mod...@googlegroups.com

The timezone difference will be because one log message comes from
Apache child process and the other from the mod_wsgi daemon process.
If Django is overriding timezone then what is displayed will be
different to what Apache child processes show.

> [Sat Nov 19 08:04:31 2011] [info] mod_wsgi (pid=13853): Aborting
> process '<name>'.
> [Sat Nov 19 03:04:31 2011] [error] [client 127.0.0.1] Premature end of
> script headers: apache.wsgi
> [Sat Nov 19 08:04:31 2011] [info] mod_wsgi (pid=18356): Python has shutdown.
> [Sat Nov 19 03:04:32 2011] [info] mod_wsgi (pid=18404): Attach interpreter ''.
>
>
> But the time is actually bad configured on that server (is not on our
> timezone and I will fix it as soon I can confirm the application
> wouldn't have any problems with that), so perhaps it is reported in
> some place as one timezone and some other in some other config file.
>
> This application uses a Mysql (percona, a fork of mysql actually)
> hosted on some other machine. Looking the munin and new-relic stats I
> don't see anything weird. Although in muning, with the mysql plugin, I
> see a huge increase for some minutes of "binlog cache usage". That's
> the only "anomaly" I see.
>
> Anyone has any idea/tip on how can I further debug this ?
>
> I've seen there are commits in WSGI (not included in any release yet)
> that add a blocked timeout and blocker request parameters

Yep. The inactivity-timeout has been serving double duty. It was
really intended for when the process is completely idle, with no
requests arriving or being handled. In place of a better solution at
the time, I also made it detect the case where no requests are
arriving but there are existing requests, and for the total of those
active requests there is no input or output. So, it worked as a poor
failsafe for when the process as a whole hung because all requests
were blocking for some reason.

In mod_wsgi 4.0 those concepts are properly separated. The
inactivity-timeout is only for the case where the process is
completely idle, with no active requests at all.

The new options for the busy case are blocked-requests and
blocked-timeout. By default the blocked restart is not enabled. It can
be enabled by setting blocked-timeout, in much the same way as the
inactivity-timeout option. When blocked-requests is not set, it
defaults to whatever the number of threads is for the mod_wsgi daemon
process.

What happens is that when blocked-timeout is defined and the number of
concurrent requests is equal to or exceeds blocked-requests, if there
is no input or output for any request for the period set by
blocked-timeout, then the process will be restarted.

So, if blocked-requests isn't set and defaults to total threads, then
when the whole process effectively hangs, it will restart.

If blocked-requests is set lower, for example to half the total
threads, then the behaviour is a bit different. In this case the check
for input and output starts at a lower number of potentially blocked
requests. So, the checks for no activity against the blocked timeout
period will happen, but not all threads will be in a blocked state at
that point, and so new requests could still be accepted and handled.
If it so happened that you had a lull period where no requests arrived
and the timeout expired with the requests still blocked, then the
process will be restarted.

Setting blocked-requests to a lower value than the total number of
threads therefore allows you to potentially have some head room where
you can still handle requests, but it will try to do a restart if it
can in an idle period. So, a form of graceful restart of sorts. In
practice though, if you are under a constant request load, it is
unlikely this will occur.
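Spelled out as configuration, the description above would correspond to something like the following (the timeout and thread values are illustrative only, and the exact directive syntax should be checked against the mod_wsgi 4.0 docs once released):

```apache
WSGIDaemonProcess <name> processes=4 threads=5 inactivity-timeout=600 blocked-timeout=300 blocked-requests=5
```

With blocked-requests equal to the thread count, a process is restarted only once every one of its threads has been stuck with no input or output for blocked-timeout seconds; setting blocked-requests lower gives the head room described above.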

I need to think about how this all works some more and see whether I
can better determine if individual requests are producing no input and
output and so are likely blocked. I may yet be able to come up with a
slightly better way of handling this and cause a restart sooner, as
soon as there are no other active requests in addition to the blocked
ones. That way a busy site will not end up reaching the maximum number
of threads being blocked before a forced restart.

> and an other
> one to get some information of the processes/threads (which
> information ?

This is some experimental stuff that works in conjunction with New
Relic agent to track number of concurrent requests and thread
utilisation. It can be graphed in New Relic using custom views. I am
not entirely happy with interface into mod_wsgi for this at this point
and so will be changing how this works.

> a call-trace perhaps ? :-)).

I have just committed some changes that add a feature whereby, if a
restart is performed due to blocked requests, it will dump out what
any active requests in the process are doing. You would therefore see
in the Apache logs:

[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Daemon process busy inactivity timer expired, stopping process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Shutdown requested 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Dumping stack trace for active Python threads.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Thread 4320296960 executing file "/Library/WebServer/Sites/hello-1/htdocs/environ.wsgi", line 8, in function
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): called from file "/Library/WebServer/Sites/hello-1/htdocs/environ.wsgi", line 12, in application.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Thread 4320833536 executing file "/Library/WebServer/Sites/hello-1/htdocs/environ.wsgi", line 8, in function
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): called from file "/Library/WebServer/Sites/hello-1/htdocs/environ.wsgi", line 12, in application.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Thread 4321370112 executing file "/Library/WebServer/Sites/hello-1/htdocs/environ.wsgi", line 8, in function
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): called from file "/Library/WebServer/Sites/hello-1/htdocs/environ.wsgi", line 12, in application.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Thread 4313858048 executing file "/Library/WebServer/Sites/hello-1/htdocs/environ.wsgi", line 8, in function
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): called from file "/Library/WebServer/Sites/hello-1/htdocs/environ.wsgi", line 12, in application.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 10 in daemon process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 9 in daemon process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 5 in daemon process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 8 in daemon process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 7 in daemon process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 6 in daemon process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 4 in daemon process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 3 in daemon process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 2 in daemon process 'hello-1'.
[Mon Nov 21 15:18:52 2011] [info] mod_wsgi (pid=12256): Exiting thread 1 in daemon process 'hello-1'.
[Mon Nov 21 15:18:57 2011] [info] mod_wsgi (pid=12256): Aborting process 'hello-1'.
[Mon Nov 21 15:18:57 2011] [error] [client 127.0.0.1] Premature end of script headers: environ.wsgi
[Mon Nov 21 15:18:57 2011] [error] [client 127.0.0.1] Premature end of script headers: environ.wsgi
[Mon Nov 21 15:18:57 2011] [error] [client 127.0.0.1] Premature end of script headers: environ.wsgi
[Mon Nov 21 15:18:57 2011] [error] [client 127.0.0.1] Premature end of script headers: environ.wsgi

It does require LogLevel in Apache to be set to 'info' though. There
is only file name, line number and function name context. It isn't
really feasible to give the line of code like normal Python tracebacks
do, as this is reaching in from underneath all the interpreters and is
written in C code. It is happening on shutdown and also has to avoid
doing anything that might result in the GIL being released and other
code being run. So, it has to be as simple as possible. It should be
sufficient though to help target the trouble spot.
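For comparison, here is roughly what that limited dump corresponds to at the Python level: walking each thread's frames and recording only the file name, line number and function name, without ever touching the source file. This is just an illustration of the information available on a frame object, not mod_wsgi's actual code, which gathers the same fields from C:

```python
import sys

def frame_summaries():
    # Collect (filename, lineno, funcname) for every frame of every
    # thread's stack -- the same limited context the in-C dump reports.
    summaries = {}
    for thread_id, frame in sys._current_frames().items():
        stack = []
        while frame is not None:
            code = frame.f_code
            stack.append((code.co_filename, frame.f_lineno, code.co_name))
            frame = frame.f_back
        summaries[thread_id] = stack
    return summaries
```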

> If I understand correctly
> this could be *very* helpful to debug this problem. Also, if we could
> have a trace where the blocked process where hanging it will be *very
> very* useful :-D
>
> Is there some other configuration option that could help ? Do you know
> when will be a new release that includes the "blocked request"
> parameters ?

The above is all I can think of right now, in a rush before I have to
go back home to no proper internet.

Graham

> Oh, btw, I'm using libapache2-mod-wsg 3.3-2ubuntu2
>
>
> Also, if you need more information, please let me know.
>
>
>
>
>
> Thanks a lot,
> Rodrigo
>

Rodrigo Campos

Nov 21, 2011, 10:55:33 AM
to mod...@googlegroups.com
On Mon, Nov 21, 2011 at 3:07 AM, Graham Dumpleton
<graham.d...@gmail.com> wrote:
> On 20 November 2011 14:08, Rodrigo Campos <rodr...@gmail.com> wrote:
>>
>> Also, it seems there is something wrong with the date/hour I didn't
>> yet look. It didn't happen so much time between the "Script timed out"
>> and the killing. It seems some messages are logged using one time and
>> some others using other timezone/something like that. Because I have
>> this in the logs, for example:
>
> The timezone difference will be because one log message comes from
> Apache child process and the other from the mod_wsgi daemon process.
> If Django is overriding timezone then what is displayed will be
> different to what Apache child processes show.

It's probably that, then. Thanks :-)

>
>> Anyone has any idea/tip on how can I further debug this ?
>>
>> I've seen there are commits in WSGI (not included in any release yet)
>> that add a blocked timeout and blocker request parameters
>
> Yep. The inactivity-timeout has been serving double duty. It was
> really intended for when the process is completely idle with no
> requests arriving or being handled. In place of a better solution at
> the time, also made it so that it would detect case where no requests
> arriving but there were existing requests and for the total of those
> active requests there was no input or output. So, worked as a poor
> failsafe for when the process as a whole hung because of all requests
> blocking for some reason.
>
> In mod_wsgi 4.0 those concepts are properly separated. The
> inactivity-timeout is only for case where process is completely idle,
> no active requests at all.

Is there any estimated date for when mod_wsgi 4.0 will be released?
Is it too risky to try to compile today's trunk and use it in
production (after several days of testing first)? Are there any
"blocker" issues you are aware of?

>
> The new options for the busy case are blocked-requests and
> blocked-timeout. By default the blocked restart is not enabled. It can
> be enabled by setting blocked-timeout in much the same was as
> inactivity-timeout option. When blocked-requests is not set, it
> defaults to what ever the number of threads is for the mod_wsgi daemon
> process.
>
> What happens is that when the blocked-timeout is defined, when the
> number of concurrent requests is equal to or exceeds blocked-requests,
> if there is no input or output for any requests for period set by
> blocked-timeout, then the process will be restarted.

Great, thanks!
That's why I asked about these parameters. I'd read a similar
description in your post on stackoverflow and then saw commits on
trunk that seem to implement them. This is a great feature for us
right now, we *just* need it =)

>
> So, if blocked-requests isn't set and defaults to total threads, then
> when whole process effectively hangs, then it will restart.

Right, makes sense

> I need to think about how this all works some more and see whether I
> can better determine if individual requests are producing no input and
> output and so likely blocked. I may yet be able to come up with a
> slightly better way of handling this and cause a restart sooner as
> soon as no other active requests in addition to the blocked ones. That
> way will not have situation where in busy site will still end up
> reaching maximum number of threads being blocked before forced
> restart.
>
>> and an other
>> one to get some information of the processes/threads (which
>> information ?
>
> This is some experimental stuff that works in conjunction with New
> Relic agent to track number of concurrent requests and thread
> utilisation. It can be graphed in New Relic using custom views. I am
> not entirely happy with interface into mod_wsgi for this at this point
> and so will be changing how this works.
>
>> a call-trace perhaps ? :-)).
>
> I have just committed some changes that adds a feature where by if a
> restart is performed due to blocked requests then it will dump out
> what any active requests in the process are doing. You would therefore
> see in the Apache logs:

Great, thanks a lot!

Sorry, I'm not sure I followed you here. The trace gives file name,
line number and function name context, but the line number is not
feasible? Or what is not feasible? Sorry, I lost you :S

>
>> If I understand correctly
>> this could be *very* helpful to debug this problem. Also, if we could
>> have a trace where the blocked process where hanging it will be *very
>> very* useful :-D
>>
>> Is there some other configuration option that could help ? Do you know
>> when will be a new release that includes the "blocked request"
>> parameters ?
>
> Above is all I can think of right now in rush before have to go back
> home to no proper internet.

That's enough :-)


Thanks a lot!

Rodrigo

Graham Dumpleton

Nov 21, 2011, 2:37:39 PM
to mod...@googlegroups.com
A python traceback will normally show the actual python code at the line number in the file. Because the python code is not shown here, you will always need to go to the original code file and go to that line to work out what code was being executed. Often the display of the single line of python code from each executing stack frame in the traceback is enough to work it out, but no such luxury here I'm afraid.

I would cut and paste an example to explain, but I can't do that on my phone.

Graham

Graham Dumpleton

Nov 21, 2011, 7:07:17 PM
to mod...@googlegroups.com
Here is an example to clarify, now that I am connected back to the
real world at last.

[Mon Oct 24 15:29:31 2011] [error] Traceback (most recent call last):
[Mon Oct 24 15:29:31 2011] [error]   File "/Library/WebServer/Sites/mingus-1/lib/python2.6/site-packages/newrelic-0.5.57.0/newrelic/core/application.py", line 211, in harvest
[Mon Oct 24 15:29:31 2011] [error]     connection, stats.metric_data())
[Mon Oct 24 15:29:31 2011] [error]   File "/Library/WebServer/Sites/mingus-1/lib/python2.6/site-packages/newrelic-0.5.57.0/newrelic/core/remote.py", line 122, in send_metric_data
[Mon Oct 24 15:29:31 2011] [error]     res = self.invoke_remote(conn,"metric_data",True,self._agent_run_id,self._agent_run_id,self._metric_data_time,now,metric_data)
[Mon Oct 24 15:29:31 2011] [error]   File "/Library/WebServer/Sites/mingus-1/lib/python2.6/site-packages/newrelic-0.5.57.0/newrelic/core/remote.py", line 156, in invoke_remote
[Mon Oct 24 15:29:31 2011] [error]     return self._remote.invoke_remote(connection, method, compress, agent_run_id, *args)
[Mon Oct 24 15:29:31 2011] [error]   File "/Library/WebServer/Sites/mingus-1/lib/python2.6/site-packages/newrelic-0.5.57.0/newrelic/core/remote.py", line 270, in invoke_remote
[Mon Oct 24 15:29:31 2011] [error]     raise Exception("%s failed: status code %i" % (method, response.status))
[Mon Oct 24 15:29:31 2011] [error] Exception: metric_data failed: status code 503

So for a normal Python traceback you get the code snippet, eg:

    return self._remote.invoke_remote(connection, method, compress, agent_run_id, *args)

I don't give that, as I can see issues with doing that.
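One reason the code snippet is costly: that text is not stored on the frame at all. The traceback machinery fetches it from the source file on disk (via the linecache module) at formatting time, exactly the kind of extra work a C-level, shutdown-time dump has to avoid. A small demonstration of where the snippet comes from:

```python
import linecache
import traceback

def snippet_source():
    # Take the top entry of our own call stack and show that its 'line'
    # text matches what linecache reads back from the source file.
    top = traceback.extract_stack()[-1]
    fetched = linecache.getline(top.filename, top.lineno).strip()
    # top.line is None when the source can't be read (e.g. exec'd code);
    # otherwise it is exactly the text linecache fetched from disk.
    return top.line, fetched or None
```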

Graham

Rodrigo Campos

Nov 21, 2011, 8:52:02 PM
to mod...@googlegroups.com
On Mon, Nov 21, 2011 at 9:07 PM, Graham Dumpleton
<graham.d...@gmail.com> wrote:
> Here is an example to clarify now that am connected back to the real
> world at last.

I'm glad you have internet again :)

Oh, I misunderstood you. When you said "code line" I thought "code
line number" and I mixed them up. Sorry. It's clear now :)

And yes, of course, the info it will provide is enough for us =)

Thanks a lot,
Rodrigo

Rodrigo Campos

Nov 23, 2011, 9:38:33 AM
to mod...@googlegroups.com
On Mon, Nov 21, 2011 at 12:55 PM, Rodrigo Campos <rodr...@gmail.com> wrote:
> On Mon, Nov 21, 2011 at 3:07 AM, Graham Dumpleton
> <graham.d...@gmail.com> wrote:
>> On 20 November 2011 14:08, Rodrigo Campos <rodr...@gmail.com> wrote:
>>
>> In mod_wsgi 4.0 those concepts are properly separated. The
>> inactivity-timeout is only for case where process is completely idle,
>> no active requests at all.
>
> Is there any estimated date when mod_wsgi 4.0 will be released ?
> Is it too risky to try to compile todays trunk and use it in
> production (previous sevral days on testing) ? Are there any "blocker"
> issue you are aware of ?

Sorry to bother you again, but was this question lost? If you didn't
answer because you are too busy, no problem, please take your time.
But I wanted to make sure it wasn't lost :)


Thanks a lot,
Rodrigo

Graham Dumpleton

Nov 23, 2011, 4:02:11 PM
to mod...@googlegroups.com

It wasn't lost, I did see it. I just don't have a good answer right now.

I would like to get mod_wsgi 4.0 out soon, as the Apache people are
getting closer to releasing Apache 2.4, and I need to have it out
before then because mod_wsgi 3.3 is not compatible with Apache 2.4.

There are a number of things to be done before I can release mod_wsgi
4.0. These are:

1. Validate that it compiles against the latest Python 3.3 trunk.
2. Solve Python library linking issues with Python 3.2+.

Also ideally:

3. Changes to New Relic configuration directives. (Not happy with how done now).
4. Changes to New Relic statistics sampler. (Not happy with how done now).

Despite the above, the current mod_wsgi 4.0 trunk is stable. This
version has had more testing during development than any prior
version, as for most of the past year I have had it running load tests
continuously, with New Relic monitoring it, so as to test the New
Relic Python agent. So, it has had a good working out.

I guess I would like to have it out by the end of the year at the
latest. The sooner the better though.

Graham

Rodrigo Campos

Nov 25, 2011, 9:22:55 AM
to mod...@googlegroups.com
On Wed, Nov 23, 2011 at 6:02 PM, Graham Dumpleton
<graham.d...@gmail.com> wrote:
> On 24 November 2011 01:38, Rodrigo Campos <rodr...@gmail.com> wrote:
>> On Mon, Nov 21, 2011 at 12:55 PM, Rodrigo Campos <rodr...@gmail.com> wrote:
>>> On Mon, Nov 21, 2011 at 3:07 AM, Graham Dumpleton
>>> <graham.d...@gmail.com> wrote:
>>>> On 20 November 2011 14:08, Rodrigo Campos <rodr...@gmail.com> wrote:
>>>>
>>>> In mod_wsgi 4.0 those concepts are properly separated. The
>>>> inactivity-timeout is only for case where process is completely idle,
>>>> no active requests at all.
>>>
>>> Is there any estimated date when mod_wsgi 4.0 will be released ?
>>> Is it too risky to try to compile todays trunk and use it in
>>> production (previous sevral days on testing) ? Are there any "blocker"
>>> issue you are aware of ?
>>
>> Sorry to bother you again. But was this question lost ? If you didn't
>> answer because you are too busy, no problem. Please take your time.
>> But I wanted to make sure it wasn't lost :)
>
> It wasn't lost, I did see it. I just don't have a good answer right now.

=)

>
> I would like to get mod_wsgi 4.0 out soon as Apache people are getting
> closer to Apache 2.4 being released and so need to have it out before
> then as mod_wsgi 3.3 is not compatible with Apache 2.4.

Ohh, I see.

>
> There are a number of things to be done before can release mod_wsgi
> 4.0. These are:
>
> 1. Validate compiles against last Python 3.3 trunk.
> 2. Solve Python library linking issues with Python 3.2+.
>
> Also ideally:
>
> 3. Changes to New Relic configuration directives. (Not happy with how done now).
> 4. Changes to New Relic statistics sampler. (Not happy with how done now).

Sounds nice

>
> Despite above, the current mod_wsgi 4.0 trunk is stable. This version
> has got more testing than any prior version when being developed as I
> have for most of the past year had it sitting running load tests
> continuously with New Relic monitoring it so as to test New Relic
> Python agent. So, has got a good working out.

Great, good to know

>
> I guess I would like to have it out by the end of the year at the
> latest. The sooner the better though.

Great! I'll let you know how it works for us if we decide to compile
it from source.


Thanks a lot!
Rodrigo
