I don’t think middleware is necessarily the solution. I strongly believe partitioning should be managed at a higher level.
This doesn’t mean you can’t use such middleware; the Paste library, for example, has a configuration file driven approach for grafting together WSGI applications at different sub URL contexts so they can all run in one process.
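To make concrete what such path-based grafting middleware actually does, here is a plain Python sketch (this is not the actual Paste API; the function and application names are made up): requests are routed to whichever WSGI application is mounted at the longest matching sub URL prefix.

```python
# A minimal sketch of urlmap-style dispatch middleware. Each request is
# handed to the application mounted at the longest matching prefix.

def make_url_map(mounts, not_found):
    """mounts: dict mapping a mount prefix (e.g. '/blog') to a WSGI app."""
    # Try longer prefixes first so '/blog/admin' beats '/blog'.
    ordered = sorted(mounts.items(), key=lambda item: len(item[0]), reverse=True)

    def application(environ, start_response):
        path = environ.get('PATH_INFO', '')
        for prefix, app in ordered:
            if path == prefix or path.startswith(prefix + '/'):
                # Shift the matched prefix from PATH_INFO onto SCRIPT_NAME,
                # as WSGI dispatchers conventionally do.
                environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + prefix
                environ['PATH_INFO'] = path[len(prefix):]
                return app(environ, start_response)
        return not_found(environ, start_response)

    return application
```

Note that a dispatcher like this imports every mounted application up front, which is exactly the dead-code-loading downside discussed below.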
I would only use such WSGI middleware as a fallback, though: when you need to run it all in one process during development, or when you have no other choice because you are deployed to a hosting service that doesn’t give you the flexibility to use a decent WSGI server, or doesn’t provide some means at its routing layer to direct traffic for different sub URLs to different backends.
Even with such WSGI middleware in place, you can still use the above approach with Apache/mod_wsgi to map the different URLs into different processes. Now that you are using WSGI middleware for grafting, though, you end up importing potentially dead code into each process, as the parts of the URL namespace that process isn’t handling get loaded as well, unless the WSGI middleware is smart enough to do lazy loading, which it usually isn’t.
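For the Apache/mod_wsgi approach, a configuration along these lines maps each sub URL into its own daemon process group (the paths, process names, and tuning values here are hypothetical):

```apache
# Each sub URL gets its own daemon process group, so the two
# applications load and run in completely separate processes.
WSGIDaemonProcess blog processes=2 threads=15
WSGIScriptAlias /blog /srv/blog/blog.wsgi process-group=blog

WSGIDaemonProcess wiki processes=2 threads=15
WSGIScriptAlias /wiki /srv/wiki/wiki.wsgi process-group=wiki
```

Because each process group only ever loads the one WSGI script file, neither process carries the other application’s code.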
The next level above doing separation with Apache/mod_wsgi alone is to use Docker to bundle up the separate WSGI applications. In this case you use Apache purely as a front end, proxying through to the different Docker containers, with each WSGI application running in its own container. Inside the Docker containers you can use mod_wsgi-express.
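A sketch of that front end arrangement, assuming each container exposes its application on its own port (the paths and ports are made up):

```apache
# Apache acts purely as a proxy; each backend is a Docker container
# running mod_wsgi-express for a single WSGI application.
ProxyPass /blog/ http://127.0.0.1:8001/blog/
ProxyPassReverse /blog/ http://127.0.0.1:8001/blog/

ProxyPass /wiki/ http://127.0.0.1:8002/wiki/
ProxyPassReverse /wiki/ http://127.0.0.1:8002/wiki/
```

Inside each container you would then run something like `mod_wsgi-express start-server blog.wsgi --port 8001 --mount-point /blog` so the application sees the same sub URL it is proxied under.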
I talk about that topic in:
As for handling this at the level of a PaaS, the typical PaaS doesn’t provide such support.
Amusingly, older styles of hosting service such as WebFaction can do this, but Heroku and OpenShift 2 cannot.
Next generation PaaS offerings coming out, such as OpenShift 3 (based around Docker and Kubernetes), will allow you to vertically separate WSGI applications under sub URLs of the same host name.
On OpenShift 3, for example, you can deploy your two separate WSGI applications, and when you expose each service using a route, you can specify a path as well as a hostname. The OpenShift routing layer will then handle passing HTTP requests under the different URL namespaces through to the right backend for you. This means you don’t need to set up Apache to do such proxying yourself.
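As a rough illustration, a route with a path might look like the following (a hypothetical fragment; the names and hostname are made up):

```yaml
# Only requests under /blog on this hostname are routed to the
# 'blog' service; a second route would carry /wiki elsewhere.
apiVersion: v1
kind: Route
metadata:
  name: blog
spec:
  host: www.example.com
  path: /blog
  to:
    kind: Service
    name: blog
```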
OpenShift 3 has some really interesting capabilities around handling many micro services. This is not just related to routing and exposing them under the one site at different URLs, but also the fact that each micro service can run independently, with different CPU and memory resources allocated to it. This way you can match the resources allocated to the actual amount used by your tuned WSGI server and application.
You therefore don’t have the situation you get with the current generation of PaaS, where you are handed a fixed bucket of resources and never use it all. There you either constantly fiddle with your WSGI server processes/threads to try and fill the space, or you give up and waste resources when adding more instances.
With OpenShift, you tune your WSGI server and application as best you can, then set CPU and memory based on what that actually uses. When you need to scale, you simply create more replicas. You don’t have wasted CPU and memory, as your allocation is a more accurate reflection of what is used. Thus when you scale, you can fit more instances within your global allocation of CPU and memory.
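In deployment terms that might look something like this (a hypothetical fragment; the container name, replica count, and limits are placeholders for whatever your monitoring says the tuned application uses):

```yaml
# Resource limits sized to measured usage of the tuned WSGI server;
# scaling is done by increasing the replica count, not the bucket size.
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: blog
        resources:
          limits:
            cpu: 250m
            memory: 256Mi
```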
So the important difference here is that next generation PaaS has your CPU and memory allocation per project, not per instance. That way you can divide up the allocation however you see fit. This need not even be restricted to a single WSGI application, as within the one project you can run more than one service (api, main, database, etc.), and they all draw from the project level bucket of CPU and memory. You thus have maximum flexibility.
Of course, monitoring becomes even more important in this world than it has been in the past. If you don’t have good monitoring, you will lack the ability to properly tune your application and WSGI server, understand what resources they actually use, and so make the most of the new flexibility in dividing up resources.
Anyway, hopefully you understand this ramble.
Graham