Background task - how to use request.add_finished_callback or tgext.asyncjob?


Juraj Variny

Oct 4, 2012, 9:10:31 AM
to turbo...@googlegroups.com
Hi, I need to send a notification email without the user having to wait for it to be sent. I tried calling request.add_finished_callback(function) from a controller as per the Pylons docs, but I got AttributeError: add_finished_callback. Alternatively, I was thinking about using tgext.asyncjob, but there is no mention of it at all in the 2.2 docs. What should I use, and how?

Juraj

Alessandro Molina

Oct 4, 2012, 9:39:51 AM
to turbo...@googlegroups.com
The TurboMail package provides asynchronous email sending through a queue.
You can of course achieve the same with tgext.asyncjob; the
documentation for asyncjob is available on its PyPI page:
http://pypi.python.org/pypi/tgext.asyncjob
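
If you go the asyncjob route, the whole thing is only a few lines. A minimal sketch (untested; start_async_worker and asyncjob_perform are from the package above, but check its PyPI page for the exact setup call, and the SMTP details here are just an example):

    # config/app_cfg.py -- start the background worker once at startup
    # (the exact setup call is documented on the PyPI page above)
    from tgext.asyncjob import start_async_worker
    start_async_worker()

    # myapp/lib/notifications.py (example module) -- the job itself,
    # running outside the request so the user never waits on SMTP
    import smtplib
    from email.mime.text import MIMEText

    def send_notification(address, subject, body):
        msg = MIMEText(body)
        msg['Subject'] = subject
        msg['From'] = 'noreply@example.com'
        msg['To'] = address
        server = smtplib.SMTP('localhost')  # assumes a local MTA
        server.sendmail(msg['From'], [address], msg.as_string())
        server.quit()

    # in the controller -- queue the job and return immediately
    from tgext.asyncjob import asyncjob_perform
    asyncjob_perform(send_notification, 'user@example.com', 'Welcome', 'Hi!')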

Best,
Alessandro

Juraj Variny

Oct 5, 2012, 4:12:25 AM
to turbo...@googlegroups.com
Thanks! I did not know about TurboMail.

Juraj

On Thursday, October 4, 2012 at 3:39:56 PM UTC+2, Alessandro Molina wrote:

kgk

Nov 26, 2012, 1:44:12 PM
to turbo...@googlegroups.com


On Thursday, October 4, 2012 6:39:56 AM UTC-7, Alessandro Molina wrote:
> The TurboMail package provides asynchronous email sending through a queue.
> You can of course achieve the same with tgext.asyncjob; the
> documentation for asyncjob is available on its PyPI page:
> http://pypi.python.org/pypi/tgext.asyncjob


Any chance you are making a version of asyncjob that uses multiprocessing instead of threading?

Alessandro Molina

Nov 26, 2012, 2:25:47 PM
to TurboGears
If your concern is the GIL, it should be fairly easy to make a version of asyncjob based on multiprocessing; far more difficult would be preventing the issues that might arise in the code the developer runs in the separate process.
Just keep in mind that asyncjob is meant for short-lived tasks and in production you will probably start more than one process of your application.
Starting more than one process results in multiple execution queues (one per process), which greatly reduces the side effects of having the GIL in place.

If, on the other hand, the concern is having a separate process that handles all the jobs through a task queue, that is probably a better solution, but it requires a separate tool.
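
For what it's worth, the mechanical part of the change is small. A rough sketch of the multiprocessing idea (my own illustration, not asyncjob code; it assumes fork-style process start as on Linux) showing where the caveats bite:

    import multiprocessing

    def _worker(queue):
        # runs in the child process: its own GIL, but also its own
        # DB connections, caches and globals -- nothing is shared
        while True:
            job = queue.get()
            if job is None:        # sentinel: shut the worker down
                break
            func, args = job
            func(*args)

    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=_worker, args=(queue,))
    worker.daemon = True
    worker.start()

    def perform(func, *args):
        # unlike the threaded version, func and args cross a process
        # boundary here, so they must be picklable
        queue.put((func, args))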

On Mon, Nov 26, 2012 at 7:44 PM, kgk <kkvil...@gmail.com> wrote:
> http://pypi.python.org/pypi/tgext.asyncjob

Craig Small

Nov 26, 2012, 4:30:37 PM
to TurboGears
On Mon, Nov 26, 2012 at 08:25:47PM +0100, Alessandro Molina wrote:
> Just keep in mind that asyncjob is meant for short-lived tasks and in
> production you will probably start more than one process of your
> application.
> Starting more than one process results in multiple execution
> queues (one per process), which greatly reduces the side effects of
> having the GIL in place.
Just to clarify, are you suggesting something like:
$ python myjob.py &
$ python myotherjob.py &

i.e. two different Python interpreters for different functions? I've got
a lot of back-end work going on, and while most (all?) of it is asynchronous
calls (with the associated extra madness that entails), I'm still worried
about processing times, mainly around database access speeds, though
of course I'm testing on SQLite, so that might be the problem.

- Craig
--
Craig Small VK2XLZ http://enc.com.au/ csmall at : enc.com.au
Debian GNU/Linux http://www.debian.org/ csmall at : debian.org
GPG fingerprint: 5D2F B320 B825 D939 04D2 0519 3938 F96B DF50 FEA5

Alessandro Molina

Nov 26, 2012, 5:28:59 PM
to TurboGears
On Mon, Nov 26, 2012 at 10:30 PM, Craig Small <csm...@enc.com.au> wrote:
On Mon, Nov 26, 2012 at 08:25:47PM +0100, Alessandro Molina wrote:
>    Just keep in mind that asyncjob is meant for short-lived tasks and in
>    production you will probably start more than one process of your
>    application.
>    Starting more than one process results in multiple execution
>    queues (one per process), which greatly reduces the side effects of
>    having the GIL in place.
Just to clarify, are you suggesting something like:
$ python myjob.py &
$ python myotherjob.py &


No, no :D
I was just saying that in production you probably have more than one worker, and the asyncjob queue is per worker.
So you already have multiple processes running async jobs.
 
i.e. two different Python interpreters for different functions? I've got
a lot of back-end work going on, and while most (all?) of it is asynchronous
calls (with the associated extra madness that entails), I'm still worried
about processing times, mainly around database access speeds, though
of course I'm testing on SQLite, so that might be the problem.

DB slowness here is usually related to the time that data created inside the controller takes to propagate to the real database,
which is why asyncjob provides the asyncjob_timed_query helper.
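
Conceptually, the helper just retries the query until the data committed in the controller becomes visible. Roughly this (a hypothetical sketch of the idea, not the real implementation; see the PyPI page for the actual API):

    import time

    def timed_query(query, timeout=30, interval=0.5):
        # poll a SQLAlchemy query until it returns a row or we give up
        deadline = time.time() + timeout
        while time.time() < deadline:
            row = query.first()
            if row is not None:
                return row
            time.sleep(interval)   # commit not visible yet, retry
        return None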

You might want to give a task queue like Celery a try and see if you get better results.
I'm also open to any patches to asyncjob; feel free to send them if you end up implementing changes.
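
For reference, a Celery task is only a few lines (this is Celery's standard API; the broker URL and names are just example values):

    from celery import Celery

    app = Celery('myapp', broker='redis://localhost:6379/0')

    @app.task
    def send_notification(address, subject, body):
        ...  # runs inside a separate celery worker process

    # from the controller: queue the job and return immediately
    send_notification.delay('user@example.com', 'Welcome', 'Hi!')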

kgk

Nov 27, 2012, 10:26:40 AM
to turbo...@googlegroups.com



> You might want to give a task queue like Celery a try and see if you get better results.
> I'm also open to any patches to asyncjob; feel free to send them if you end up implementing changes.

Actually I was comparing it to Celery. I realized that I could do many of the things I needed from Celery with multiprocessing. Basically, it would be nice to start with threading or local processes and then move to remote processes if the need arises in production. Celery looks like a good solution, but it takes more initial setup time, and I'm not sure its full capabilities are needed.
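
The standard library actually supports exactly that migration path: with concurrent.futures (backported to Python 2 as the "futures" package), switching from threads to local processes is a one-line change. A quick sketch (my own example):

    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def crunch(n):
        # stand-in for a CPU-bound task
        return sum(i * i for i in range(n))

    executor = ThreadPoolExecutor(max_workers=4)
    # executor = ProcessPoolExecutor(max_workers=4)  # same interface, sidesteps the GIL

    future = executor.submit(crunch, 100000)
    print(future.result())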

Carlos Daniel Ruvalcaba Valenzuela

Nov 27, 2012, 12:55:53 PM
to turbo...@googlegroups.com
I find Celery quite convenient for background, scheduled, and
distributed task processing. The problem I have is that Celery with
TurboGears is not as simple as Celery on Django, and while
tgext.asyncjob is enough for most common use cases, it would be good to
have another option. I saw that there is a celery-pylons package, but it
looks like a minimal integration, so I began working on tgext.celery,
which I expect to release soon, with the aim of being as convenient and
easy to get up and running as django-celery.

Regards,
Carlos Daniel Ruvalcaba Valenzuela

Michael Pedersen

Jan 2, 2013, 11:09:47 PM
to tg-trunk
I'm interested in hearing more about this. Carlos, do you have a release yet? (Since I'm still catching up, you might have announced it and I just haven't seen it. If so, please ignore me.)
--
Michael J. Pedersen
My Online Resume: http://www.icelus.org/ -- Google+ http://plus.ly/pedersen
Google Talk: m.ped...@icelus.org -- Twitter: pedersentg