I prefer to let any 'long-running' component, like a Change Detection Connector, time out occasionally. Since your solution keeps an audit log, this gives you a way to monitor that it is alive and well. Even if there can be gaps in detected changes, having the task stop occasionally lets you drop a timestamp into your audit log when it stops, and another when it restarts. I've worked with a couple of monitors that used this pair as a heartbeat for the solution. If you instead let an AL run indefinitely, you have to add monitor-ability some other way - and since the change detection mechanisms are 'black box' loops that monopolize processing during the wait, that means additional ALs.
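Just to make the heartbeat idea concrete, here is a minimal sketch in plain JavaScript. The auditLog array and the STOPPED/RESTARTED message format are made up for illustration - in TDI you would write these entries wherever your solution already keeps its audit trail:

```javascript
// Hypothetical audit log: in a real solution these entries would go
// to the same place the rest of your audit records do.
function heartbeat(auditLog, event) {
  // event is "STOPPED" or "RESTARTED"; the timestamp pair lets a
  // monitor confirm the Change Detection AL is cycling as expected.
  auditLog.push(event + " " + new Date().toISOString());
  return auditLog;
}

var log = [];
heartbeat(log, "STOPPED");
heartbeat(log, "RESTARTED");
```

A monitor can then treat the STOPPED/RESTARTED pairs as the heartbeat: a STOPPED entry with no RESTARTED following within the expected window means the AL did not come back up.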
Another advantage of stopping and restarting the ALs is that you can avoid connection timeouts, whether in the connected systems or in intervening tech like firewalls. Long periods of inactivity can cause tunnels to close, session tokens to expire or connections to be lost. Re-establishing these periodically - ideally at shorter intervals than the timeout settings themselves - means you don't have to dig into TDI's (SDI's) Connection Lost and Failover functionality, which, although powerful, still means more configuration.
When it comes to keeping ALs afloat, I prefer to use Schedulers. For something like Change Detection I would use a Keep Alive Scheduler. For periodic tasks, like exports to BI or scanning for file uploads, you set the Scheduler to use a Schedule. The latter acts like a crontab - and in fact, you could put the mask itself in your properties file so it is easy to tune.
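To illustrate the crontab-like mask, here is a small sketch in plain JavaScript. This is just the matching idea, not the Scheduler's actual implementation, and the property name in the comment is made up:

```javascript
// The mask would live in your properties file, e.g.:
//   change.detection.schedule=0,30 6-18
// (hypothetical property name, for illustration only)

// Match one cron-style field ("*", "a,b,c", or "a-b") against a value.
function fieldMatches(field, value) {
  if (field === "*") return true;
  return field.split(",").some(function (part) {
    var range = part.split("-");
    if (range.length === 2) {
      return value >= Number(range[0]) && value <= Number(range[1]);
    }
    return value === Number(part);
  });
}

// Simplified two-field mask: "minutes hours".
function maskMatches(mask, date) {
  var fields = mask.split(/\s+/);
  return fieldMatches(fields[0], date.getMinutes()) &&
         fieldMatches(fields[1], date.getHours());
}

maskMatches("0,30 6-18", new Date(2024, 0, 1, 9, 30)); // → true  (09:30)
maskMatches("0,30 6-18", new Date(2024, 0, 1, 22, 0)); // → false (22:00)
```

The nice part of keeping the mask in the properties file is exactly this: tuning the schedule becomes a one-line edit, no reconfiguration of the AL itself.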
Finally, you mentioned that when an AL is started then it is given a client/tenant id which is used to read the correct properties. I am not sure how you implemented this, but my approach is to create my own getProperty() function in a library Script (Resources > Scripts) which I have preloaded for each AL. In addition to the property name, I also pass in some context info - like the client id, or server instance name, or whatever. My properties have these values encoded in them in the property file:
stark.enterprise.ldap.url=...
lannister.enterprise.ldap.url=...
...
My getProperty() method uses the context info to build the extended property name, fetches the property and returns it. In projects where the preference was one set of properties per context - e.g. stark.properties - getProperty() instead read in the matching properties file (in our case, on each call) based on the context information. To get the built-in handling of property encryption/decryption, we used the PropertyStore functions (found in the JavaDocs) for this.
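A stripped-down version of that getProperty() idea looks like this. It is plain JavaScript so it stands alone here - in the real library Script the lookup would go through TDI's property store (to pick up encryption/decryption) rather than a local object, and the URLs are placeholders:

```javascript
// Properties as they appear in the file, with the context id
// encoded into each key (values are made up for this sketch).
var props = {
  "stark.enterprise.ldap.url": "ldap://stark.example:389",
  "lannister.enterprise.ldap.url": "ldap://lannister.example:389"
};

// Build the extended name from the context info and fetch the value.
// In the real library Script this lookup would go through the TDI
// property store instead of a plain object.
function getProperty(context, name) {
  var key = context + "." + name;
  if (!(key in props)) {
    throw new Error("Missing property: " + key);
  }
  return props[key];
}

getProperty("stark", "enterprise.ldap.url"); // → "ldap://stark.example:389"
```

The point is that every AL script asks for the same logical property name and only the context argument changes, so adding a new client/tenant is just a new block of lines in the properties file.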
If the purpose of starting the ALs via the command line is to pass in the client id, then perhaps this is your simplest approach going forward. Using a Scheduler would mean preparing your AL by defining Operations - again, more configuration - while passing in the client id when launching the AL is simpler and more flexible.
Ok, I've rambled long enough. Let me know if you want to talk about any of these items :)
/Eddie