import random
import time

from celery.decorators import task  # celery 2.x-era import path

@task
def flaky_add(a, b):
    time.sleep(5)
    if random.random() < 0.1:
        1/0  # fail roughly 10% of the time
    return a + b
import djcelery
djcelery.setup_loader()

BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"

CELERY_IMPORTS = ("cbridge.tasks",)
CELERY_RESULT_BACKEND = "database"
CELERY_RESULT_DBURI = "mysql://root:@localhost/trydjc"
> I'm having trouble getting my task results reliably delivered through the database backend. The problem repros in a trivial installation of celery + django. Either I'm missing something or there's a bug here.
[...]
> When I run the task, I will sometimes get the results and sometimes not:
>
> mbp:~/dev/try/djc$ ./manage.py shell
> Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
> [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> (InteractiveConsole)
> >>> import cbridge.tasks as t
> >>> r = t.flaky_add.delay(5,10)
> >>> r.result
> 15
> >>> r = t.flaky_add.delay(5,20)
> >>> r.result
> >>> r.result
> >>>
>
> I see the tasks running and completing or failing in celeryev, and I see the results in the celery_taskmeta table in MySQL; all of that is reliable. But the results *usually* don't make it back to the AsyncResult object in the shell where the task was initiated. The exception is the first task run after entering the interactive shell: if I wait for it to complete before asking for the result, then and only then do I get the result. Subsequent attempts in the same shell don't work. The fact that I do get results back under these limited but repeatable circumstances makes me think the system is properly configured.
>
I see that you are using MySQL; what transaction isolation level is it configured to use?
If it is using the default of REPEATABLE READ, you should try changing that to READ COMMITTED,
as described here:
http://ask.github.com/celery/faq.html#mysql-is-throwing-deadlock-errors-what-can-i-do
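To make the change survive a server restart, it can also be set in the MySQL server configuration rather than with a one-off SQL statement (a minimal fragment assuming a standard my.cnf layout; the exact option spelling can vary between MySQL versions):

```
# my.cnf -- make READ-COMMITTED the default isolation level for all
# new connections, instead of InnoDB's default REPEATABLE-READ
[mysqld]
transaction-isolation = READ-COMMITTED
```

Note that `SET GLOBAL TRANSACTION ISOLATION LEVEL ...` only affects connections opened after it runs, so existing shells and workers need to reconnect either way.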
--
Ask Solem
twitter.com/asksol | +44 (0)7713357179
> I figured this out. It's fixed by:
>
> mysql> set global transaction isolation level READ COMMITTED;
>
> This is quite a landmine. The docs warn about this if you're having deadlocks, but django-celery's database backend basically doesn't work with InnoDB unless you change this setting, because Django runs everything in transactions by default.
>
> I'd be happy to update the documentation to help with this if somebody tells me how.
>
Yeah, it is very unfortunate. Some code has been written to issue a warning
in this case, but it doesn't seem to work.
For docs, you can create a pull request on github or submit a patch!
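The failure mode discussed above can be sketched with a toy model (illustration only, not the real storage engine): under REPEATABLE READ a transaction reads from a snapshot frozen early in the transaction (real InnoDB takes it at the first read), so a result row committed by the worker *after* that point stays invisible no matter how often the shell polls; under READ COMMITTED each read sees the latest committed data.

```python
import copy

class ToyDB:
    """Toy stand-in for the celery_taskmeta table."""
    def __init__(self):
        self.committed = {}  # task_id -> result

    def begin(self, isolation):
        return ToyTxn(self, isolation)

class ToyTxn:
    def __init__(self, db, isolation):
        self.db = db
        self.isolation = isolation
        # REPEATABLE READ: freeze a snapshot when the transaction starts
        self.snapshot = copy.deepcopy(db.committed)

    def read(self, task_id):
        if self.isolation == "REPEATABLE READ":
            return self.snapshot.get(task_id)   # stale snapshot
        return self.db.committed.get(task_id)   # READ COMMITTED: latest data

db = ToyDB()

# Django shell: opens a transaction, then polls for the result.
shell_txn = db.begin("REPEATABLE READ")
assert shell_txn.read("task-1") is None   # task not finished yet

# Celery worker commits the result in its own transaction.
db.committed["task-1"] = 15

# The shell polls again inside the same long-lived transaction: still nothing.
assert shell_txn.read("task-1") is None

# Under READ COMMITTED the same poll sees the worker's commit.
assert db.begin("READ COMMITTED").read("task-1") == 15
```

This is why the first poll after opening a fresh shell can succeed (the snapshot is taken after the worker has committed) while later polls in the same session never see new rows.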
--
You received this message because you are subscribed to the Google Groups "celery-users" group.
To post to this group, send email to celery...@googlegroups.com.
To unsubscribe from this group, send email to celery-users...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/celery-users?hl=en.