That's exactly what I'm saying, Anthony. Those total times are enormous considering the whole page, including images, is now a little under 300 KB. And I actually don't care whether it's called TTFB or something else; I just want quick response times.
--
---
You received this message because you are subscribed to a topic in the Google Groups "web2py-users" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/web2py/Suuc7DdjDn0/unsubscribe?hl=en.
To unsubscribe from this group and all its topics, send an email to web2py+un...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
There are 6 files which start to load at the same time. Or what did you mean?
Edit: the real problem in that graph is that there's no concurrency. I don't know if it's a feature of webpagetest.org, but apparently no new connection is opened until the previous one has finished.
Before I enabled mod_deflate I had times around 3.5 seconds; the TTFBs were about the same, ±50 ms.
Probably nothing; just check that they are not sequential. It's probably just how they draw their graphs.
On the TTFB note: did you try timing it without gzip compression turned on, just to check?
When I tested that application (the welcome app) I got an average of about 80 ms, which seems pretty slow too (localhost, no DB requests...).
Migrations are set to False and lazy tables to True.
I tried to do something with cache.ram (for another app), which had no effect at all; perhaps I did it wrong.
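For comparison, the usual cache.ram pattern in a web2py controller looks roughly like this (a sketch only; the cache key, `db.item` table, and function name are made up, and `cache`/`db` are injected by the framework, so this isn't runnable standalone):

```python
# In a controller; `cache` and `db` are supplied by web2py.
# 'homepage_rows' and db.item are illustrative names, not from the app above.
def index():
    # The lambda runs only on a cache miss; the result is kept in RAM
    # for 300 seconds, so repeated hits skip the database entirely.
    rows = cache.ram('homepage_rows',
                     lambda: db(db.item.id > 0).select(),
                     time_expire=300)
    return dict(rows=rows)
```

A common mistake is calling the function instead of passing it, e.g. `cache.ram('key', f(), ...)`: that evaluates `f()` on every request and caches nothing useful, which would explain seeing no effect.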
consider moving code to (imported) modules # I started on it when I realized I was using request.folder in my sitemap function, which I currently have in a module (which isn't that smart, I know; that's one of the first things I will change)
I don't know how to deal with these two points:
5. Add session.forget for methods which don't use the session object. # when I don't use session.something, can I add session.forget at the beginning of the controller?
6. Enable connection pooling depending on the database. # no idea what to do here
Haven't tried this one yet: compile the app.
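For point 6, connection pooling in web2py is just an extra argument to the DAL constructor. A sketch (the connection string is a placeholder, and pooling has no effect with SQLite):

```python
# In a model file, e.g. models/db.py.
# pool_size keeps up to 10 open connections and recycles them across
# requests, avoiding the cost of reconnecting to the database each time.
db = DAL('postgres://user:password@localhost/mydb',
         pool_size=10, lazy_tables=True, migrate=False)
```

The `lazy_tables=True` and `migrate=False` flags match the settings mentioned above; pooling only matters for databases with a real connection setup cost (PostgreSQL, MySQL, etc.).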
On your homepage you are serving at least 10 images via the fast_download controller; it's safe to put session.forget() there if it isn't already. It will allow concurrent requests from the same browser.
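For anyone following along, the pattern described here is a one-liner at the top of the action. A sketch only (the action name comes from the post above, but the body is illustrative):

```python
def fast_download():
    # This action never writes to the session, so release the session
    # file lock right away; otherwise web2py serializes concurrent
    # requests from the same browser while the lock is held.
    session.forget(response)
    return response.download(request, db)
```

The lock is what forces one-request-at-a-time per browser, so releasing it early is what makes the 10 image requests overlap.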
Thanks for all the suggestions; I'm trying to work through them one after the other:
Compiling it made things a lot worse: http://www.webpagetest.org/result/130410_9P_14QQ/
Thanks to the awesome help of Ricardo Pedroso, my performance increased by a factor of 100 (at least).
Well, what were the problems?
First of all, my vserver's CPU is very slow (600 MHz, 1 core), which is just ridiculous these days; my mobile phone has a faster processor.
Apache + mod_wsgi + web2py is pretty processor-intensive, especially if you add mod_pagespeed.
What did we do (in descending order of performance increase):
we enabled a lot of caching,
moved code from models to modules,
split controllers (from over 20 functions each to fewer than 7),
compiled the application,
used HTML instead of Python helper functions.
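To illustrate that last point: web2py helpers are Python objects that get built and serialized on every request, so replacing them with literal markup trades some convenience for CPU. A rough sketch (the names are made up; `XML()` marks the string as safe so web2py doesn't escape it):

```python
# Helper-based menu: DIV/UL/LI objects constructed and serialized
# server-side on every request.
menu = UL(*[LI(A(name, _href=URL('default', 'page', args=name)))
            for name in pages])

# String-based alternative: cheaper per request, at the cost of
# building (and escaping!) the HTML by hand.
menu = XML(''.join(
    '<li><a href="/app/default/page/%s">%s</a></li>' % (name, name)
    for name in pages))
```

On a 600 MHz core, skipping helper serialization on hot pages is exactly the kind of change that shows up in the numbers below.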
The last thing we did was switch from Apache to nginx, which is just awesome.
Just to show the difference, I list some of our test results:
After optimization, with Apache:
apache bench:
ab -n 100 -c 2 http://domain.com/
This is ApacheBench, Version 2.3
...
Server Software: Apache/2.2.22
Document Path: /
Document Length: 27429 bytes
Concurrency Level: 2
Time taken for tests: 17.198 seconds
Complete requests: 100
...
Requests per second: 5.81 [#/sec] (mean)
Time per request: 343.961 [ms] (mean)
Time per request: 171.981 [ms] (mean, across all concurrent requests)
Transfer rate: 158.22 [Kbytes/sec] received
Connection Times (ms)
min mean [+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 138 343 81.8 332 639
Waiting: 138 341 82.3 328 638
Total: 138 343 81.8 332 639
Percentage of the requests served within a certain time (ms)
50% 332
66% 358
75% 365
80% 396
90% 456
95% 493
98% 634
99% 639
100% 639 (longest request)
With nginx (and yes, the size is smaller because I compressed the PNGs):
ab -n 100 -c 2 http://domain.com/
...
Server Software: nginx/1.1.19
...
Document Length: 25420 bytes
Concurrency Level: 2
Time taken for tests: 10.821 seconds
Complete requests: 100
...
Requests per second: 9.24 [#/sec] (mean)
Time per request: 216.427 [ms] (mean)
Time per request: 108.214 [ms] (mean, across all concurrent requests)
Transfer rate: 233.14 [Kbytes/sec] received
Connection Times (ms)
min mean [+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 25 215 42.9 207 387
Waiting: 25 214 42.6 207 387
Total: 25 215 42.9 207 387
Percentage of the requests served within a certain time (ms)
50% 207
66% 210
75% 217
80% 265
90% 272
95% 278
98% 286
99% 387
100% 387 (longest request)
webpagetest.org now gives results which are not that good, but I suspect that depends a lot on their server's CPU utilization. After trying some other pages which I knew were good before, I've come to the conclusion that the site isn't reliable at all at the moment. The compilation result I mentioned earlier, which made things look worse, is probably an artifact of that too.
loadimpact.com tells me the server can handle 100 simultaneous users before it starts to fail, so a pretty good improvement. Unfortunately, I don't have loadimpact results from before.
I hope this helps some people who are looking for more performance, especially with slow CPUs.