In this particular use case:
* The Pyramid Application may only be occasionally publicly available (firewall rules)
* The Scripts and Server are on the same machine.
I quickly put together a test implementation using config-file hashes, and it is working well. A Pyramid route publishes two hashes: one of the config file's filepath and one of the config file's contents. Based on that info, a "client" script can reasonably ascertain that it was invoked with the same configuration and is probably communicating with "itself". I've also tested adding this to my exception logging, as it looks promising for troubleshooting and identifying regressions.
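A minimal sketch of the hashing side, using only the standard library; the function name `config_hashes` and the choice of SHA-256 are my assumptions, not something the post specifies:

```python
import hashlib
from pathlib import Path


def config_hashes(config_path: str) -> dict:
    """Hash both the filepath string and the file's contents.

    Hypothetical helper; SHA-256 is an assumed digest choice.
    """
    path = Path(config_path).resolve()
    return {
        # hash of the (resolved) filepath string
        "path": hashlib.sha256(str(path).encode("utf-8")).hexdigest(),
        # hash of the raw file contents
        "contents": hashlib.sha256(path.read_bytes()).hexdigest(),
    }
```

In the Pyramid app, a view could return this dict as JSON from a status route; the client script computes the same dict from its own config file and compares the two.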
The reason for hashes is that a Pyramid application can behave very differently depending on the config file - I can't risk a script invoked with `staging.ini` thinking it is talking to a server running `production.ini`. The two may point at completely different databases or have entirely different functional behaviors.
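To make that staging/production mix-up fail fast, the client-side check can be as simple as comparing locally computed hashes with what the server publishes and refusing to continue on any mismatch. A sketch; `ConfigMismatch` and the function name are hypothetical:

```python
class ConfigMismatch(RuntimeError):
    """Raised when the script and server disagree about the config file."""


def assert_same_config(local: dict, remote: dict) -> None:
    # `local` and `remote` are {"path": ..., "contents": ...} hash dicts,
    # e.g. computed locally vs. fetched as JSON from the server's route.
    for key in ("path", "contents"):
        if local.get(key) != remote.get(key):
            raise ConfigMismatch(
                f"config {key} hash differs - refusing to run: "
                f"{local.get(key)!r} != {remote.get(key)!r}"
            )
```

Comparing both hashes means a script pointed at the right file contents but a different path (or vice versa) still gets flagged.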
The actual use case:
The Pyramid Application is a TLS Certificate Manager and centralized ACME Client; when running, it responds to public ACME challenges, and offers an API to OpenResty/Nginx servers for dynamic certificate loading.
I invoke periodic operations that are often long-running, such as renewing enrolled certificates, pulling ARI information, etc. These all need command-line scripts that can run on demand anyway, so I didn't bother with Celery. I need checks in place to ensure both the server and the script are invoked with the same configuration file. I guess I could probably also add the MAC address to identify the machine...
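For the MAC-address idea, the standard library already exposes one via `uuid.getnode()`, so a machine identifier could be published alongside the config hashes. A sketch with a hypothetical helper name; note that `getnode()` falls back to a random 48-bit number when no hardware address can be found, so this is best-effort:

```python
import hashlib
import uuid


def machine_id() -> str:
    # uuid.getnode() returns the hardware (MAC) address as a 48-bit int.
    # If no MAC is found it returns a random number instead (cached for
    # the process), so treat this as a best-effort identifier.
    mac = uuid.getnode()
    return hashlib.sha256(mac.to_bytes(6, "big")).hexdigest()
```

Hashing the raw address avoids leaking the MAC itself on the (occasionally public) status route while still letting the client compare identities.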