9-9-15: I expected the scheduler to automatically start the tasks queued by the following code:
for dataID in dataIDs:
    scheduler.queue_task(ImportData, [dataID], immediate=True, timeout=100)
# tried without immediate
# tried db.commit() after the loop and after each queued task
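To illustrate why the commit matters (my understanding, using plain sqlite3 as a stand-in for the scheduler's task table): queue_task only inserts a row into the scheduler_task table, and the worker is a separate process with its own database connection, so it can only pick up the task once that insert is committed. A minimal sketch:

```python
import os
import sqlite3
import tempfile

# Stand-in for the scheduler_task table: one connection queues a task,
# a second connection (simulating the worker process) polls for it.
path = os.path.join(tempfile.mkdtemp(), "tasks.db")
producer = sqlite3.connect(path)
producer.execute("CREATE TABLE scheduler_task (task_name TEXT)")
producer.commit()

worker = sqlite3.connect(path)

producer.execute("INSERT INTO scheduler_task VALUES ('ImportData')")
# Not committed yet: the worker's connection sees nothing.
before = worker.execute("SELECT COUNT(*) FROM scheduler_task").fetchone()[0]

producer.commit()
# After the commit the worker can see the queued task.
after = worker.execute("SELECT COUNT(*) FROM scheduler_task").fetchone()[0]

print(before, after)  # prints: 0 1
```

In a web2py controller the commit normally happens automatically at the end of the request, which is why an explicit db.commit() is usually only needed when queueing from a shell or a long-running action.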
Is anything missing?
Any hints or redirection would be appreciated.
To reiterate the notes below: the task function is defined in a module, and the queued tasks need to run concurrently.
____ previous notes _____
edit: Based on Niphlod's response, it appears my main concern is 'probable issues in concurrently writing data to the database.' Could someone clarify the scheduler's limitations for this purpose?
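On the concurrent-write concern, here is a hedged sketch of what I think the contention looks like, with threads and plain sqlite3 standing in for multiple scheduler workers (ImportData and the table name are just illustrative): each writer keeps its transactions short, commits per item, and retries when the database is locked.

```python
import os
import sqlite3
import tempfile
import threading
import time

path = os.path.join(tempfile.mkdtemp(), "app.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE records (worker INTEGER, dataID INTEGER)")
init.commit()
init.close()

def import_data(worker_id, data_ids):
    # Each "worker" opens its own connection, like a scheduler process would.
    conn = sqlite3.connect(path, timeout=10)  # wait up to 10s for locks
    for data_id in data_ids:
        while True:
            try:
                conn.execute("INSERT INTO records VALUES (?, ?)",
                             (worker_id, data_id))
                conn.commit()  # commit per item keeps transactions short
                break
            except sqlite3.OperationalError:  # e.g. "database is locked"
                time.sleep(0.01)
    conn.close()

threads = [threading.Thread(target=import_data, args=(w, range(50)))
           for w in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

check = sqlite3.connect(path)
total = check.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(total)  # prints: 200 (4 workers x 50 records, none dropped)
```

If the app database is SQLite, this serialization of writes is presumably the main limitation; a server database (PostgreSQL, MySQL) would handle the concurrent inserts without the retry loop.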
1. My goal is to use the scheduler to process as many files concurrently as possible (I assume there is no limit to the number of tasks that can be posted to the scheduler, and that the scheduler will prevent the server from dropping them).
2. My tasks are defined in a module, not a model.
3. A potentially large number of items are selected in the view, which triggers the callback and the process below.
4. In the function importData, the database is accessed and records are added.
Is there anything I am missing to properly and fully utilize the scheduler for this purpose?
Please post any caveats, corrections, or tips, and let me know if any other information is needed.
Thanks,
PV
The original code:
for dataID in dataIDs:
    scheduler.queue_task(ImportData, [dataID], timeout=100)