Question about my web service performance using Apache + mod_wsgi + Flask


Ice Prince

Oct 30, 2014, 11:29:31 AM
to mod...@googlegroups.com
Hello again,
I'm running into a situation with my application's performance that is beyond my knowledge.
In short, my app has 2 steps: first, it queries an external web service to retrieve some data, and second, it processes that data and returns the result.
For a single request, the 1st step takes 0.18 sec and the 2nd step takes 0.02 sec on average, so it takes 0.2 sec in total to serve a single request. (I just put a time measurement in the code to get these numbers.)
And if I run: siege -i -d1 -c1 http://my_applition , the "Response time:" reported is also 0.2 sec.
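(For reference, the handler and the timing measurement look roughly like this; the route name, the external URL and process() are placeholders, not my real code:)

    import time
    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)

    def process(data):
        # stand-in for the real processing step (~0.02 sec)
        return {"result": len(data)}

    @app.route("/charge")
    def charge():
        t0 = time.time()
        # step 1: query the external web service (~0.18 sec on average)
        resp = requests.get("http://external-service.example/api", timeout=5)
        t1 = time.time()
        # step 2: process the returned data and build the response
        result = process(resp.json())
        t2 = time.time()
        app.logger.info("step1=%.3fs step2=%.3fs", t1 - t0, t2 - t1)
        return jsonify(result)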

Now the issue: if I run siege -i -d1 -c150 http://my_applition , the 1st step, which queries the external web service, goes up to 1 sec (I'm not sure whether it is overloaded or not), while the 2nd step stays at the normal 0.02 sec. But overall my app becomes slow, and the "Response time:" reported by siege is 6 sec.

I don't know which part is consuming the time; please give me an idea. Thanks in advance.

Minh Tuan.

Jason Garber

Oct 30, 2014, 2:16:33 PM
to mod...@googlegroups.com
You have not mentioned whether you are in daemon mode or embedded mode. I'll assume daemon mode, and if you are not already using it, switching is probably a good idea.

Assuming the external web service can handle the load you are throwing at it (which isn't something we should assume), the issue perhaps has to do with your processes and threads configuration. Unless you have something like 10 processes with 15 threads, and enough oomph on your server to handle that, your requests are going to be backlogged while the others complete.
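For example, in daemon mode that sizing is set on the WSGIDaemonProcess directive; a rough sketch of the Apache config (the daemon process group name and paths are placeholders, only the processes/threads sizing is the point):

    WSGIDaemonProcess myapp processes=10 threads=15
    WSGIProcessGroup myapp
    WSGIScriptAlias / /var/www/myapp/app.wsgi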

I suggest you write the request start, API call start, processing start, and processing end times to a log file for each request, and then see what is really happening.
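A minimal sketch of that kind of logging for a Flask app (the file path and attribute names are made up; the view itself would set g.api_start and g.processing_start around the relevant calls):

    import logging
    import time
    from flask import Flask, g, request

    logging.basicConfig(filename="/tmp/request-timing.log", level=logging.INFO)
    log = logging.getLogger("timing")

    app = Flask(__name__)

    @app.before_request
    def note_request_start():
        g.request_start = time.time()

    @app.after_request
    def log_request_timing(response):
        # the view would set g.api_start / g.processing_start around its own calls
        log.info("%s request_start=%s api_start=%s processing_start=%s end=%.6f",
                 request.path,
                 g.get("request_start", "n/a"),
                 g.get("api_start", "n/a"),
                 g.get("processing_start", "n/a"),
                 time.time())
        return response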



Graham Dumpleton

Oct 30, 2014, 5:11:48 PM
to mod...@googlegroups.com
Backing up what Jason says, we really need to see the Apache/mod_wsgi configuration you are using. This should include which Apache MPM is being used, the MPM settings, and any mod_wsgi daemon settings.

Also indicate what version of mod_wsgi you are using.

What you are seeing is typical behaviour when you overload an application. Benchmarking at full tilt almost never helps you understand where the bottlenecks are, except perhaps to show how your application will die.

One could use a web monitoring solution such as New Relic to try and understand bottlenecks, or there are some internal ways of getting at least some capacity utilisation metrics out of mod_wsgi itself. Recent mod_wsgi versions (4.X) also add various keys to the WSGI environ dictionary with start times for when the request hits Apache, when it is proxied, when it hits the daemon process, and so on. These can be used to track queueing time and backlogging.
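A rough way to get at those values without assuming anything about the exact key names or formats (they vary between mod_wsgi versions) is to wrap the application in a small piece of WSGI middleware in the WSGI script file and log whatever mod_wsgi puts in the environ:

    import logging

    logging.basicConfig(filename="/tmp/mod_wsgi-environ.log", level=logging.INFO)
    log = logging.getLogger("mod_wsgi.timing")

    class ModWSGITimingMiddleware(object):
        """Log any mod_wsgi.* entries in the WSGI environ for each request."""

        def __init__(self, application):
            self.application = application

        def __call__(self, environ, start_response):
            timings = dict((k, v) for k, v in environ.items()
                           if k.startswith("mod_wsgi."))
            log.info("%s %r", environ.get("PATH_INFO", ""), timings)
            return self.application(environ, start_response)

    # in the WSGI script file, wrap the Flask app object, for example:
    # application = ModWSGITimingMiddleware(app)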

Anyway, the place to start is the configuration you are using.

Graham

Minh Tuan

Oct 30, 2014, 10:53:01 PM
to mod...@googlegroups.com
Thanks, Jason, for the response.
I'm using daemon mode with 6 processes and 5 threads. My server is an HP Gen7 with 24 cores and 12 GB RAM.
I measured the time from the start of the API call to its end at 150 concurrent requests, and it is always around 1 + 0.02 sec. I don't know how to measure from the start of the request, so I just go by the siege result and actual experience that the average response is 6 sec.
So it seems the requests are queued waiting for the other ones to complete. Should I change to 10 processes and 15 threads?

P.S.: I'm assuming for now that the external web service whose API I call can handle this load, because it is in production by Comverse, serving the charging activities for an entire mobile operator's business.

Minh Tuan.





Minh Tuan

Oct 30, 2014, 11:00:56 PM
to mod...@googlegroups.com
Oh, anyway, forget the complicated API: I made the app just sleep 1.02 seconds and then return a default value. The overall response is still 6 sec at 150 concurrent requests.

Minh Tuan.

Graham Dumpleton

Oct 30, 2014, 11:08:01 PM
to mod...@googlegroups.com
Can you still provide details on which Apache MPM you are using and the MPM settings?

The requests are going to be backlogging and the MPM settings are important in understanding the reason why.
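As a rough back-of-envelope illustration with the numbers mentioned earlier (6 processes x 5 threads, each request holding a thread for about a second), this kind of arithmetic already gets close to the 6 sec you are seeing:

    # rough capacity arithmetic, not a measurement
    processes, threads = 6, 5
    slots = processes * threads          # 30 requests can be handled at once
    per_request = 1.02                   # seconds a thread is held, mostly waiting on the API
    concurrent_clients = 150

    # 150 clients competing for 30 slots means each request waits roughly
    # 150 / 30 = 5 "rounds" of ~1.02 sec before it completes, plus whatever
    # queueing happens in front of the daemon processes.
    print(concurrent_clients / float(slots) * per_request)   # ~5.1 sec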

Graham

Jason Garber

Oct 30, 2014, 11:40:10 PM
to mod...@googlegroups.com
As Graham says, we need to see the config. Just FYI, I regularly benchmark nginx + Apache + mod_wsgi at 4,000 requests per second using a test application. It is really fast when configured correctly.

