Here is the setup: there is a grid of files from which a user can generate 'offspring' files in all possible combinations. Previous inquiry has led me to think the scheduler could be used for multiprocessing here (instead of the ostensibly problematic multiprocessing module). The queued tasks are derived from the Python script that processes the user-selected files; it is called in the controller when those file ids are passed via ajax.
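For context, here is a minimal sketch of what "all possible combinations" means in my case, assuming offspring are generated from unordered pairs of parent files identified by id (the function name is illustrative, not from my actual script):

```python
from itertools import combinations

def offspring_pairs(file_ids):
    """Return every unordered pair of parent file ids.

    Each pair would correspond to one 'offspring' generation
    task queued with the scheduler.
    """
    return list(combinations(file_ids, 2))

# Four selected files yield six parent pairs.
pairs = offspring_pairs([1, 2, 3, 4])
print(pairs)  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

So the number of queued tasks grows quadratically with the user's selection, which is part of why I am worried about load.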
Is there a brick wall here? If, for instance, the app were on Google App Engine, could a large number of idle workers simply be started to handle spikes in user requests?
If I understand correctly, using the scheduler in my case would only be viable for my own processing purposes, not for multiple users. If so, it appears my only option would be to export a desktop version of the interface to be used for this processing. If I am off track here, please let me know. Otherwise, any other comments are welcome.
Could there be scalability issues with too many users attempting to run too many processes (on GAE for instance)?
My basic interpretation of your post: the scheduler shouldn't be managed by the webserver (shouldn't be controlled by user requests), which could basically create zombie processes and/or drop long-running processes. If you see no reason the scheduler shouldn't work for this purpose (while preventing long-running processes from being dropped, which is what I thought the timeout feature was for), please let me know.
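To make sure I understand the timeout behaviour I'm after, here is a generic Python sketch of it using concurrent.futures (this is not the scheduler's actual API, just the concept: a task that exceeds its time budget is abandoned rather than left running indefinitely under the webserver):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def long_running_task():
    # Stands in for heavy offspring-file processing.
    time.sleep(0.5)
    return "done"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(long_running_task)
    try:
        # Give up waiting after 0.1 s instead of blocking forever.
        result = future.result(timeout=0.1)
    except TimeoutError:
        result = "timed out"

print(result)  # "timed out"
```

My assumption is that the scheduler's timeout feature gives a comparable guarantee per queued task; if that's wrong, that changes my whole plan.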