Well, if you look at the benchmarks, several things stand out.
1. All the WSGI servers are pre-loaded. That means the code is already compiled and sitting in RAM, ready to run. Web2py doesn't quite work that way: each request re-executes the models, which defines the database, runs migrations if necessary, and inserts a lot of objects into the scope.
2. They are all using gunicorn, which spawns about 4 workers on startup. We could certainly run Web2py in WSGI mode under gunicorn, but it wouldn't make much of a difference. I believe that if we are going to support sessions, they should be off by default and enabled per action with a decorator, @session_enabled or something like that. However, that may go against Web2py principles.
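To make the idea concrete, here is a minimal sketch of what an opt-in session decorator could look like. Everything here is hypothetical: @session_enabled, the dispatch function, and the dict standing in for a real session are all illustrative names, not existing web2py API.

```python
# Hypothetical sketch: sessions off by default, enabled per-action.
# None of these names are real web2py API.

def session_enabled(action):
    """Mark an action as needing a session; the framework would only
    initialize (and lock) the session storage for marked actions."""
    action._needs_session = True
    return action

def dispatch(action, request):
    # Framework side: attach a session only when the action asked for one.
    if getattr(action, '_needs_session', False):
        request['session'] = {}  # stand-in for a real Session().connect(...)
    return action(request)

@session_enabled
def profile(request):
    # This action pays the session cost because it opted in.
    request['session']['visits'] = request['session'].get('visits', 0) + 1
    return request['session']['visits']

def index(request):
    # No session cost for actions that never asked for one.
    return 'hello'
```

The win is that session-free actions skip the session file locking entirely, which is one of the per-request costs the benchmarked frameworks don't pay.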
3. Those benchmarks are not development-style setups. If we are going to compare benchmarks, we'll need to make sure that our configuration matches theirs.
4. Web2py was meant to be a teaching tool. However, as it stands, what it's teaching is years old. These days everything needs to be WSGI so you can run it on a PaaS such as Heroku, Iron.io, Azure, etc.
It's good to know how we compare on performance, but at the same time, I think we need to be realistic. Web2py needs to be re-architected if we're going to see wider adoption in the future.