I doubt anyone here has run into this, but I've run out of things to try on my own...
We use Fabric (fabfile.org) to automate a lot of things. It is great.
I built a new routine in it this week, and I can't get it to clean up properly. The routine simply spins up an admin version of a Pyramid app, then hits some API endpoints to POST some filesystem data to it.
This is executed in a virtualenv. The problematic part of the routine is roughly:
@task
def import_data(c):
    with c.cd("localpath/to/pyramid_app"):
        proc_server = c.run("pserve dev_admin.ini", replace_env=False, asynchronous=True)
The issue is that I see two different processes on the operating system:
* cd localpath/to/pyramid_app && pserve dev_admin.ini
* /Library/Frameworks...Python /virtualenv/bin/pserve dev_admin.ini
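For what it's worth, I can reproduce the same two-process behavior outside of Fabric with plain `subprocess` (the paths/commands below are just stand-ins, not my actual app):

```python
import subprocess
import time

# A compound shell command keeps the shell itself alive as a parent
# process while the real program runs as its child -- hence two PIDs.
proc = subprocess.Popen(
    "cd /tmp && sleep 30 && echo stopped",  # stand-in for `cd app && pserve ...`
    shell=True,
    stdout=subprocess.DEVNULL,
)
time.sleep(0.3)

# proc.pid is the shell wrapper; the actual program is its child.
child_pid = subprocess.check_output(["pgrep", "-P", str(proc.pid)]).decode().strip()
print("shell:", proc.pid, "program:", child_pid)
```

So the PID Fabric hands back is the shell wrapper, and `pserve` is a child of it.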
`asynchronous=True` is used because `pserve` would otherwise block forever. I just watch the process's stderr for the "Serving" line and continue once it is emitted.
In Fabric, I can access the FIRST process via `proc_server.runner` and stop/kill/terminate it -- but that leaves the second process running. The second process is one PID higher, and it is the actual process running Pyramid and bound to the port.
My temporary workaround is to assume the second PID is one higher than the first and `c.run("kill %s" % pid2)` it, but that's janky and obviously fragile.
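To frame what I'd consider a "proper" fix: outside Fabric I would start the command in its own process group and signal the whole group, so the shell wrapper and its child die together. A rough sketch with plain `subprocess` (again, `sleep` stands in for `pserve`; this is not Fabric's API):

```python
import os
import signal
import subprocess
import time

# Sketch: start the command in its own session (=> its own process
# group), then signal the whole group so both the shell wrapper and
# the real program receive the signal.
proc = subprocess.Popen(
    "cd /tmp && sleep 30 && echo stopped",  # stand-in for `cd app && pserve ...`
    shell=True,
    stdout=subprocess.DEVNULL,
    start_new_session=True,  # new session => new process group
)
time.sleep(0.3)

os.killpg(os.getpgid(proc.pid), signal.SIGTERM)  # kills shell + children
proc.wait()
print("shell exit code:", proc.returncode)  # negative => died from a signal
```

What I don't know is whether Fabric's local runner lets me reach the underlying `Popen` (or pass something like `start_new_session`) to do this cleanly. The other blunt option I've considered is `c.run("pkill -f 'pserve dev_admin.ini'")`, which at least doesn't guess at PIDs.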
Has anyone encountered this before or have an idea on how to better handle this?