Adrian,
I finally got round to doing some benchmarks on Lusca-r14809. The test rig:
CPU:      Intel quad-core Xeon, 2.93 GHz
RAM:      8 GB
NIC:      1 x Intel Gigabit
Storage:  2 x 10 GB AUFS on MLC SSD (max object size 1.5 MB)
          1 x 650 GB AUFS on HDD (max object size 150 MB)
Software: Lusca r14809 on Linux 3.0.3 (tickless)
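For reference, the cache_dir lines look roughly like this (mount points and L1/L2 directory counts are illustrative; the max-size values match the object-size limits above):

  cache_dir aufs /cache/ssd1 10240 16 256 max-size=1572864
  cache_dir aufs /cache/ssd2 10240 16 256 max-size=1572864
  cache_dir aufs /cache/hdd1 665600 64 256 max-size=157286400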
We use Web Polygraph with two slightly modified versions of the polymix-4 recipe:
1) Short Inc: for quickly finding the forward-proxy request-rate limit with caching disabled. This is just the polymix-4 inc1 phase on its own, with a very high maximum request rate.
2) Ramper: for testing the request-rate limit with caching enabled. This is the fill phase followed by the inc1 phase, again with a high maximum request rate.
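For what it's worth, each run is driven roughly like this (the workload file name is my own, the proxy address is illustrative, and the flags are from memory):

  polygraph-server --config polymix-4-short-inc.pg --log server.log
  polygraph-client --config polymix-4-short-inc.pg --proxy 192.168.0.10:3128 --log client.log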
Here are some summary measurements. In both cases:
* I use memory_pools with a 2 GB limit
* I use taskset to pin the main Lusca process to CPU-0
* I use /proc/irq/.../smp_affinity to pin the eth0 IRQ to CPU-1
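Concretely, that setup looks something like this (the PID lookup and IRQ number are illustrative; smp_affinity takes a hex CPU bitmask, so 2 = CPU-1):

  # squid.conf
  memory_pools on
  memory_pools_limit 2 GB

  # pin the main Lusca process to CPU-0
  taskset -pc 0 `pidof squid`

  # route eth0 interrupts to CPU-1 (find the right IRQ in /proc/interrupts)
  echo 2 > /proc/irq/30/smp_affinity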
No caching:
* configured Lusca with "acl bench dstdomain bench" and "cache deny bench"
~6000 req/sec
~350 Mbit/sec
At this point the main Lusca process is consuming 100% of one CPU core. Response times start to climb quickly, and eventually Lusca runs out of available file descriptors.
Caching:
~4500 req/sec
~250 Mbit/sec
Again, the main Lusca process reaches 100% CPU, but this time, Lusca
quickly logs the following warnings and appears to stop responding:
squid[6974]: squidaio_queue_request: Async request queue growing uncontrollably!
squid[6974]: squidaio_queue_request: Syncing pending I/O operations.. (blocking)
I would love to find a way to make Squid/Lusca behave more gracefully here.
Could Lusca bypass the cache stores when the AUFS queue grows too long?
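To make that concrete, I am imagining something along these lines in the swapout path. This is purely a sketch with invented names, not real Lusca code:

  /* hypothetical pseudo-C; none of these identifiers exist in Lusca */
  if (async_io_queue_depth() > QUEUE_HIGH_WATER) {
      /* The disk queue is saturated: stop trying to cache this object
       * and serve it straight through, rather than blocking on I/O. */
      mark_entry_uncacheable(entry);
      return; /* skip the swapout for this request */
  }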
I see that you have many feature branches open; are there any you would like me to check out and test for performance?
Thanks as always for your work on Lusca. If you are interested in seeing the full Polygraph results, just let me know.
-RichardW.