The error occurs every time the task is executed and seems to be
related to the execution time: when I process smaller files (say
8,000 lines) everything is fine and the task ends successfully.
The current files have about 242,000 lines; execution started at
13:17:01 and failed at 16:08:47.
Using htop I noticed that CPU load is about 100% on both cores
(Intel(R) Xeon(R) CPU 3040 @ 1.86GHz). Could this cause MySQL
error 2013?
The Python code is quite long, and I am not copying it here because
the error is MySQL-related.
Does anybody have any ideas?
[2010-10-09 15:50:15,265: ERROR/MainProcess] Task igs.tasks.advanced_statistics[e886e849-cfcc-42b9-a171-0a2d56778f6b] raised exception: OperationalError(2013, 'Lost connection to MySQL server during query')
Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/celery-2.0.3-py2.6.egg/celery/worker/job.py", line 86, in execute_safe
    return self.execute(*args, **kwargs)
  File "/usr/local/lib/python2.6/dist-packages/celery-2.0.3-py2.6.egg/celery/worker/job.py", line 101, in execute
    return super(WorkerTaskTrace, self).execute()
  File "/usr/local/lib/python2.6/dist-packages/celery-2.0.3-py2.6.egg/celery/execute/trace.py", line 62, in execute
    retval = self._trace()
  File "/usr/local/lib/python2.6/dist-packages/celery-2.0.3-py2.6.egg/celery/execute/trace.py", line 76, in _trace
    return handler(trace.retval, trace.exc_type, trace.tb, trace.strtb)
  File "/usr/local/lib/python2.6/dist-packages/celery-2.0.3-py2.6.egg/celery/worker/job.py", line 122, in handle_failure
    exc = self.task.backend.mark_as_failure(self.task_id, exc, strtb)
  File "/usr/local/lib/python2.6/dist-packages/celery-2.0.3-py2.6.egg/celery/backends/base.py", line 46, in mark_as_failure
    traceback=traceback)
  File "/usr/local/lib/python2.6/dist-packages/celery-2.0.3-py2.6.egg/celery/backends/base.py", line 152, in store_result
    return self._store_result(task_id, result, status, traceback)
  File "/usr/local/lib/python2.6/dist-packages/django_celery-2.0.3-py2.6.egg/djcelery/backends/database.py", line 12, in _store_result
    traceback=traceback)
  File "/usr/local/lib/python2.6/dist-packages/django_celery-2.0.3-py2.6.egg/djcelery/managers.py", line 42, in _inner
    transaction.rollback_unless_managed()
  File "/usr/local/lib/python2.6/dist-packages/Django-1.2.3-py2.6.egg/django/db/transaction.py", line 188, in rollback_unless_managed
    connection._rollback()
  File "/usr/local/lib/python2.6/dist-packages/Django-1.2.3-py2.6.egg/django/db/backends/mysql/base.py", line 306, in _rollback
    BaseDatabaseWrapper._rollback(self)
  File "/usr/local/lib/python2.6/dist-packages/Django-1.2.3-py2.6.egg/django/db/backends/__init__.py", line 36, in _rollback
    return self.connection.rollback()
OperationalError: (2013, 'Lost connection to MySQL server during query')
> Using Django 1.2.3 with Celery 2.0.3 and django-celery 2.0.3 and MySQL
> 5.1.41 on Kubuntu 10.04, I receive the following error:
> OperationalError(2013, 'Lost connection to MySQL server during query').
>
> [...]
* Are you keeping a connection open to MySQL the whole time?
* What are the values of your timeout settings in the MySQL server configuration?
* Anything in the MySQL server error log?
* Anything in the MySQL slow query log, if you have that set up?
* Which process(es) on the server are using 100% CPU time?
I suspect that you're opening a connection to the server, doing something in Django for a long time, and then trying to use the connection. At this point MySQL has lost patience and closed the connection.
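If that is what's happening, one easy thing to try is to close the connection once the long processing is done; Django will transparently open a fresh one on the next query. A rough sketch of what I mean (the helper names are invented, only the connection.close() call matters):

    from django.db import connection

    def advanced_statistics(path):
        crunch_the_file(path)   # hypothetical: the hours of CPU-bound work with little or no DB traffic
        # The idle connection has most likely been dropped by the server by now,
        # so close it; Django opens a new one automatically on the next query.
        connection.close()
        save_results()          # hypothetical: the writes that currently blow up

Since django-celery's database result backend goes through the same Django connection, that should also cover the store_result/rollback calls in your traceback.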
Thanks,
Erik
This, and the fact that you have no errors or seriously long queries in your slow query log, indicates that the MySQL server is operating fine.
Something seems to be timing out on the MySQL side. I'm not sure if you mentioned it, but are these InnoDB tables? The stack trace you posted is in the rollback of a transaction. It's possible that everything is running within a single transaction that MySQL shuts down at some point. Try this:
SQL> SHOW VARIABLES WHERE Variable_name LIKE '%timeout%';
to see the values in your current server instance.
If nothing suspicious turns up, I guess it's time to follow the stack trace into the celery source code, or into the places where your own code calls celery, and add some debugging. Maybe it's a specific query that trips the code. Maybe it's a specific number of queries. Maybe it happens at a specific time after connecting to MySQL.
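You can also run the same check from inside the task, over the exact connection it uses, and, as an experiment rather than a fix, raise wait_timeout for that session only. A quick sketch (the 4-hour value is just an example):

    from django.db import connection

    cursor = connection.cursor()
    cursor.execute("SHOW VARIABLES WHERE Variable_name LIKE %s", ['%timeout%'])
    for name, value in cursor.fetchall():
        print name, value               # look for wait_timeout / interactive_timeout

    # SESSION scope, so this only affects the current connection, not the server.
    cursor.execute("SET SESSION wait_timeout = %s", [4 * 60 * 60])  # example: 4 hours

If the task then survives, you know it was the idle timeout and can decide whether to reconnect, keep the connection busy, or change the server setting.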
Thanks,
Erik
Here's the new piece of information:
Out of memory (Needed 24492 bytes)
Out of memory (Needed 24492 bytes)
Out of memory (Needed 24492 bytes)
Out of memory (Needed 24492 bytes)
Out of memory (Needed 24492 bytes)
Out of memory (Needed 24492 bytes)
Out of memory (Needed 16328 bytes)
Out of memory (Needed 16328 bytes)
Out of memory (Needed 16328 bytes)
Out of memory (Needed 16328 bytes)
Out of memory (Needed 16328 bytes)
Out of memory (Needed 16328 bytes)
Out of memory (Needed 8164 bytes)
Out of memory (Needed 8164 bytes)
Out of memory (Needed 8164 bytes)
Out of memory (Needed 8164 bytes)
Out of memory (Needed 8164 bytes)
Looks suspiciously like a memory leak...
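If it's the task itself that is eating the RAM and starving mysqld (for example by reading the whole 242,000-line file, or a big query result, into memory at once), a streaming style keeps the working set small. A sketch only; the ';'-separated layout and the function name are guesses:

    def summarise(path):
        totals = {}
        f = open(path)
        try:
            for line in f:                                     # one line at a time, never f.readlines()
                key, value = line.rstrip('\n').split(';')[:2]  # guessed field layout -- adjust
                totals[key] = totals.get(key, 0) + float(value)
        finally:
            f.close()
        return totals

On the ORM side, QuerySet.iterator() skips Django's per-queryset result cache, which helps when looping over large result sets.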