Hi everyone,
I need to generate a PDF report for each entry of a Django queryset. There'll be between 30k and 40k entries.
The PDF is generated through an external API. Since it's currently generated on demand, this is handled synchronously via an HTTP request/response. That will be different for this task: I think I'll use a Django management command to loop through the queryset and trigger the PDF generation.
Which approach should I follow for this task? I thought about three possible solutions, although they involve technologies I've never used:
1) Celery: assign a task (an HTTP request with a varying payload) to a worker, then retrieve the result once it's done;
2) requests-futures: use requests in a non-blocking way;
3) the multiprocessing package, e.g. a Pool of workers.
The goal is to use the API concurrently (e.g. send 10 or 100 HTTP requests simultaneously, depending on how many concurrent requests the API can handle).
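To make the question concrete, here's a rough sketch of what I had in mind for the multiprocessing option, using a thread pool so there's no pickling involved (`multiprocessing.dummy` works on both 2.7 and 3.x). `generate_pdf` is just a hypothetical stand-in for the real HTTP call to the PDF API:

```python
from multiprocessing.dummy import Pool  # thread-based Pool, same API as multiprocessing.Pool

POOL_SIZE = 10  # tune to whatever the API can handle concurrently

def generate_pdf(entry_id):
    # Placeholder for the real call, e.g.:
    #   requests.post(API_URL, json=payload_for(entry_id), timeout=60)
    # followed by saving the returned PDF. Here we just echo the id.
    return entry_id

def generate_all(entry_ids):
    pool = Pool(POOL_SIZE)
    try:
        # map() blocks until all ids are processed and preserves input order
        return pool.map(generate_pdf, entry_ids)
    finally:
        pool.close()
        pool.join()

results = generate_all(range(100))
```

In the real command I'd pass queryset primary keys (e.g. `queryset.values_list('pk', flat=True)`) into `generate_all` instead of a range.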
The project runs on Python 2.7, and this task will occur approximately once a year.
Has anybody here handled a similar task and can give advice on how to proceed?