Logging transaction statements?


Erik Cederstrand

Feb 10, 2015, 11:16:36 AM
to Django Users
Hi list,

I'm tracking down a bug in my app that shouldn't be happening with the transaction statements I added in my code. In my logging settings, I can set 'django.db.backends' to DEBUG to log the queries (I'm using the postgresql backend). I see 'SAVEPOINT' statements logged, but I also need 'BEGIN' and 'COMMIT' so I can see when the transaction started and ended. How do I do that?
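For reference, a LOGGING setup along these lines is what produces the query logging described (a minimal sketch; handler names are illustrative, and it assumes settings.DEBUG is True, since Django only records queries in DEBUG mode):

```python
import logging.config

# Hypothetical minimal settings fragment: route django.db.backends
# at DEBUG level to the console so executed SQL is printed.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}

logging.config.dictConfig(LOGGING)
```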

Thanks,
Erik

Erik Cederstrand

Feb 10, 2015, 12:29:10 PM
to Django Users

On 10/02/2015 at 17:15, Erik Cederstrand <erik+...@cederstrand.dk> wrote:

> Hi list,
>
> I'm tracking down a bug in my app that shouldn't be happening with the transaction statements I added in my code. In my logging settings, I can set 'django.db.backends' to DEBUG to log the queries (I'm using the postgresql backend). I see 'SAVEPOINT' statements logged, but I also need 'BEGIN' and 'COMMIT' so I can see when the transaction started and ended. How do I do that?

Hmm, I'm beginning to think I don't understand Django transactions. I have the following two snippets, boiled down from my original code. The first one calls cache methods via helper functions, the other one calls the cache methods directly. If I run the code in parallel in two different processes, then the first example asserts almost instantly, while the second example survives several minutes. Can someone explain this?

import time
import random
from django.db import transaction
from django.core.cache import cache

def get_val(key):
    cache.get(key)

def set_val(key, val):
    cache.add(key, val)

def del_val(key):
    cache.delete(key)


# First example, fails
while True:
    with transaction.atomic():
        if not get_val('foo'):
            print('no key found')
            time.sleep(random.random())
            if set_val('foo', 'bar'):
                print('key added')
                time.sleep(random.random())
            else:
                assert False
            del_val('foo')
            print('key deleted')
            time.sleep(random.random())


# Second example, runs correctly
while True:
    with transaction.atomic():
        if not cache.get('foo'):
            print('no key found')
            time.sleep(random.random())
            if cache.add('foo', 'bar'):
                print('key added')
                time.sleep(random.random())
            else:
                assert False
            cache.delete('foo')
            print('key deleted')
            time.sleep(random.random())

Thanks,
Erik

Carl Meyer

Feb 10, 2015, 12:35:33 PM
to django...@googlegroups.com
Hi Erik,

On 02/10/2015 10:28 AM, Erik Cederstrand wrote:
> Hmm, I'm beginning to think I don't understand Django transactions. I
> have the following two snippets, boiled down from my original code. The
> first one calls cache methods via helper functions, the other one calls
> the cache methods directly. If I run the code in parallel in two
> different processes, then the first example asserts almost instantly,
> while the second example survives several minutes. Can someone explain this?

I assume you're using the 'db' cache backend? Otherwise, it wouldn't
make sense to expect transactions to affect cache calls at all.
The difference between the snippets is that you are doing a conditional
branch based on the return value of the cache add, but your set_val
helper method always returns None; it doesn't return the return value of
cache.add().
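The effect is easy to reproduce without Django at all. A minimal stand-in for cache.add() (hypothetical names, not Django's API) shows why the helper-based branch always takes the else path:

```python
def add(store, key, val):
    """Stand-in for cache.add(): True only if the key was absent."""
    if key in store:
        return False
    store[key] = val
    return True

def set_val(store, key, val):
    add(store, key, val)   # result discarded: the helper returns None

def set_val_fixed(store, key, val):
    return add(store, key, val)   # propagate the add's result

store = {}
print(set_val(store, 'foo', 'bar'))     # None: falsy even though the add succeeded
print(set_val_fixed({}, 'foo', 'bar'))  # True
```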

Carl


Erik Cederstrand

Feb 10, 2015, 3:28:23 PM
to Django Users
Hi Carl,

On 10/02/2015 at 18:34, Carl Meyer <ca...@oddbird.net> wrote:

> I assume you're using the 'db' cache backend? Otherwise, it wouldn't
> make sense to expect transactions to affect cache calls at all.

Yes, I'm using the db backend.


> The difference between the snippets is that you are doing a conditional
> branch based on the return value of the cache add, but your set_val
> helper method always returns None; it doesn't return the return value of
> cache.add().

Wow, *that* was embarrassing :-) You see what you want to see, I guess...

I'm still having issues with this, though. I reduced my own code to the following. The intent of the code is to use the cache to implement multiprocess/multiserver locking:

import time
import random
from django.db import transaction
from django.core.cache import cache

class CacheError(Exception):
    pass

cache.delete('foo')
while True:
    print('Trying to cache foo')
    try:
        with transaction.atomic():
            if cache.get('foo'):
                raise CacheError()
            print('adding foo to cache')
            assert cache.add(key='foo', value='bar')
        print('foo cached')
        time.sleep(random.random())
        with transaction.atomic():
            if cache.get('foo'):
                cache.delete('foo')
    except CacheError:
        print('Failed to cache foo')
        time.sleep(random.random())

Running this in parallel in two processes on the same machine returns this after a while:

Process A:
Trying to cache foo
2015-02-10 21:02:25,781 DEBUG    (0.001) SELECT cache_key, value, expires FROM "dispatch_cache" WHERE cache_key = 'foo'; args=['foo']
adding foo to cache
2015-02-10 21:02:25,782 DEBUG    (0.000) SELECT COUNT(*) FROM "dispatch_cache"; args=None
2015-02-10 21:02:25,782 DEBUG    (0.000) SAVEPOINT "s140735261451008_x1"; args=None
2015-02-10 21:02:25,783 DEBUG    (0.000) SELECT cache_key, expires FROM "dispatch_cache" WHERE cache_key = 'foo'; args=['foo']
2015-02-10 21:02:25,784 DEBUG    (0.000) INSERT INTO "dispatch_cache" (cache_key, value, expires) VALUES ('foo', 'gASVBwAAAAAAAACMA2JhcpQu', '9999-12-31 23:59:59'); args=['foo', 'gASVBwAAAAAAAACMA2JhcpQu', '9999-12-31 23:59:59']
2015-02-10 21:02:25,784 DEBUG    (0.000) RELEASE SAVEPOINT "s140735261451008_x1"; args=None
foo cached
2015-02-10 21:02:26,771 DEBUG    (0.000) SELECT cache_key, value, expires FROM "dispatch_cache" WHERE cache_key = 'foo'; args=['foo']
2015-02-10 21:02:26,772 DEBUG    (0.000) DELETE FROM "dispatch_cache" WHERE cache_key = 'foo'; args=['foo']


Process B:
Trying to cache foo
2015-02-10 21:02:25,782 DEBUG    (0.000) SELECT cache_key, value, expires FROM "dispatch_cache" WHERE cache_key = 'foo'; args=['foo']
adding foo to cache
2015-02-10 21:02:25,783 DEBUG    (0.000) SELECT COUNT(*) FROM "dispatch_cache"; args=None
2015-02-10 21:02:25,784 DEBUG    (0.000) SAVEPOINT "s140735261451008_x1"; args=None
2015-02-10 21:02:25,784 DEBUG    (0.000) SELECT cache_key, expires FROM "dispatch_cache" WHERE cache_key = 'foo'; args=['foo']
2015-02-10 21:02:25,791 DEBUG    (0.007) INSERT INTO "dispatch_cache" (cache_key, value, expires) VALUES ('foo', 'gASVBwAAAAAAAACMA2JhcpQu', '9999-12-31 23:59:59'); args=['foo', 'gASVBwAAAAAAAACMA2JhcpQu', '9999-12-31 23:59:59']
2015-02-10 21:02:25,792 DEBUG    (0.000) ROLLBACK TO SAVEPOINT "s140735261451008_x1"; args=None
Traceback (most recent call last):
  File "tmp.py", line 30, in <module>
    assert cache.add(key='foo', value='bar')
AssertionError


I don't see how this is possible when I'm using transaction.atomic().


Thanks,
Erik

Carl Meyer

Feb 10, 2015, 4:47:04 PM
to django...@googlegroups.com
On 02/10/2015 01:27 PM, Erik Cederstrand wrote:
>> On 10/02/2015 at 18:34, Carl Meyer <ca...@oddbird.net> wrote:
I believe you're running into the difference between the transaction
isolation levels REPEATABLE READ vs READ COMMITTED. The former is the
default for MySQL, the latter for PostgreSQL.

In REPEATABLE READ isolation level, once you enter a transaction your
view of the world will never change, except due to your own actions,
until the end of that transaction. In READ COMMITTED, your changes
within the transaction will still all be committed or rolled back as an
atomic unit, but your view of the rest of the world can change
mid-transaction, if another transaction commits in the meantime.

I think the results you are seeing are explainable if you are using READ
COMMITTED.

Either one of these levels can cause people trouble if they don't
understand the difference and know which one they are using. (I often
see MySQL users confused why a long-running shell process never seems to
see any database updates made by a different process.) You have to
decide which one is more appropriate for your application, and code
accordingly.
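If an application does need a different level, the Django 1.7 databases reference describes configuring it per-connection with psycopg2's constants. A sketch of that configuration (NAME and the other fields are placeholders; as discussed later in this thread, 1.7 has a bug that can keep this from being applied):

```python
import psycopg2.extensions

# Hypothetical settings fragment: request REPEATABLE READ for every
# connection made through this database alias.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'OPTIONS': {
            'isolation_level':
                psycopg2.extensions.ISOLATION_LEVEL_REPEATABLE_READ,
        },
    },
}
```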

Carl


Carl Meyer

Feb 10, 2015, 4:50:41 PM
to django...@googlegroups.com
You might find it useful to look at the implementation of Django's
QuerySet.get_or_create() method, if you want to see how to (mostly)
safely implement a get-or-create pattern like this without race
conditions under READ COMMITTED isolation level.
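The essence of that pattern is to let the database's unique constraint arbitrate the race instead of trusting a prior SELECT. A standalone sqlite3 analogy (table and function names are illustrative, not Django's):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE lock (name TEXT PRIMARY KEY)")

def acquire(conn, name):
    """Try to take the lock; the PRIMARY KEY constraint arbitrates races."""
    try:
        with conn:  # wraps the INSERT in a transaction
            conn.execute("INSERT INTO lock (name) VALUES (?)", (name,))
        return True
    except sqlite3.IntegrityError:
        # Another writer inserted first: treat the failure as "lock held".
        return False

print(acquire(conn, 'foo'))  # True
print(acquire(conn, 'foo'))  # False
```

The same INSERT-and-catch-IntegrityError shape is what makes get_or_create() (mostly) safe under READ COMMITTED: even if two transactions both see no row, only one INSERT can succeed.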

Carl


Erik Cederstrand

Feb 10, 2015, 5:31:12 PM
to Django Users
On 10/02/2015 at 21:27, Erik Cederstrand <erik+...@cederstrand.dk> wrote:
>
> Running this in parallel in two processes on the same machine returns this after a while:
>
> Process A:
> [...]
>
> Process B:
> [...]
> Traceback (most recent call last):
> File "tmp.py", line 30, in <module>
> assert cache.add(key='foo', value='bar')
> AssertionError
>
>
> I don't see how this is possible when I'm using transaction.atomic().

Phew. This situation is actually possible since "Read Committed" is the default isolation level in PostgreSQL, which means that non-repeatable reads are possible within a transaction.

My code relies on isolation level "Serializable" for a cache.get() followed by cache.add() to be reliable, but Django uses the default from PostgreSQL.

The isolation level is supposed to be configurable with the 'isolation_level' setting in OPTIONS (https://docs.djangoproject.com/en/1.7/ref/databases/#isolation-level). Except it doesn't work because django/db/backends/postgresql_psycopg2/base.py::_set_isolation_level() is never called anywhere, AFAICS.

I tried disabling autocommit (the docs are wrong, BTW: https://docs.djangoproject.com/en/1.7/ref/databases/#autocommit-mode says to put it in OPTIONS, but django/db/backends/__init__.py (line 123) looks at settings_dict['AUTOCOMMIT'], not OPTIONS) and hacking postgresql_psycopg2/base.py::_set_autocommit() to call _set_isolation_level(), but now process A blocks process B entirely, even when process A is outside the atomic() context manager.

Has anyone got isolation_level "Serializable" to work in Django 1.7?


Thanks,
Erik

Carl Meyer

Feb 10, 2015, 5:46:03 PM
to django...@googlegroups.com
Hi Erik,

On 02/10/2015 03:30 PM, Erik Cederstrand wrote:
> On 10/02/2015 at 21:27, Erik Cederstrand <erik+...@cederstrand.dk> wrote:
>>
>> Running this in parallel in two processes on the same machine
>> returns this after a while:
>>
>> Process A: [...]
>>
>> Process B: [...] Traceback (most recent call last): File "tmp.py",
>> line 30, in <module> assert cache.add(key='foo', value='bar')
>> AssertionError
>>
>>
>> I don't see how this is possible when I'm using
>> transaction.atomic().
>
> Phew. This situation is actually possible since "Read Committed" is
> the default isolation level in PostgreSQL, which means that
> non-repeatable reads are possible within a transaction.

Yes, that's what I said :-)

> My code relies on isolation level "Serializable" for a cache.get()
> followed by cache.add() to be reliable, but Django uses the default
> from PostgreSQL.

I don't think you need full Serializable, Repeatable Read should be
sufficient. Serializable is dangerous, has a tendency to cause deadlocks
(as I think you've discovered). Even with Repeatable Read, you'll need
to be a bit careful about any long-running processes that might hold open
a transaction.

> The isolation level is supposed to be configurable with the
> 'isolation_level' setting in OPTIONS
> (https://docs.djangoproject.com/en/1.7/ref/databases/#isolation-level).
> Except it doesn't work because
> django/db/backends/postgresql_psycopg2/base.py::_set_isolation_level()
> is never called anywhere, AFAICS.

But `self.connection.set_isolation_level()` is called in
`_set_autocommit()`, and that should be sufficient. (The
`_set_isolation_level()` method is dead code in 1.7 and has since been
removed, but what really matters is that
`self.connection.set_isolation_level()` is called.)

> I tried disabling autocommit (the docs are wrong BTW,
> https://docs.djangoproject.com/en/1.7/ref/databases/#autocommit-mode
> says to put it in OPTIONS, but django/db/backends/__init__.py (line
> 123) looks at settings_dict['AUTOCOMMIT'], not in OPTIONS)

OPTIONS['autocommit'] has been ignored since Django 1.6, as the docs you
linked clearly state: "This configuration is ignored and can be safely
removed."

You're looking for
https://docs.djangoproject.com/en/1.7/ref/settings/#autocommit instead.

(It would be good for the former to have a link to the latter, though.)

and
> hacking postgresql_psycopg2/base.py::_set_autocommit() to call
> _set_isolation_level(), but now process A blocks process B entirely,
> even when process A is outside the atomic() context manager.
>
> Has anyone got isolation_level "Serializable" to work in Django 1.7?

I strongly recommend against using the Serializable isolation level, or
turning off autocommit mode. The transaction.atomic() API will not work
correctly or predictably with autocommit turned off. See
https://docs.djangoproject.com/en/1.7/topics/db/transactions/#deactivate-transaction-management

You could switch to Repeatable Read (but leave AUTOCOMMIT on). Or (what
I would probably do) you could leave the isolation level at the default
(since it's the default for Django, some parts of Django, such as
QuerySet.get_or_create(), may not work correctly under a different
isolation level) and simply update your locking logic to guard against
that race condition (if the add fails, assume that another process has
grabbed the lock in the interim).
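That last option is a small change: treat a False return from cache.add() as "another process holds the lock" and back off, instead of asserting. A standalone sketch with a stand-in cache (names hypothetical; FakeCache mimics the add-only-if-absent semantics of the db backend):

```python
import random
import time

class FakeCache:
    """Stand-in mimicking cache.add(): succeeds only if the key is absent."""
    def __init__(self):
        self._data = {}
    def add(self, key, val):
        if key in self._data:
            return False
        self._data[key] = val
        return True
    def delete(self, key):
        self._data.pop(key, None)

def acquire_lock(cache, key, attempts=3):
    """Retry with jittered backoff; a failed add means the lock is held elsewhere."""
    for _ in range(attempts):
        if cache.add(key, 'locked'):
            return True
        time.sleep(random.random() * 0.01)  # brief, randomized backoff
    return False

cache = FakeCache()
print(acquire_lock(cache, 'foo'))  # True: the lock was free
print(acquire_lock(cache, 'foo'))  # False: still held, retries exhausted
cache.delete('foo')                # release when done
```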

Carl


Erik Cederstrand

Feb 10, 2015, 6:05:06 PM
to Django Users

> On 10/02/2015 at 23:45, Carl Meyer <ca...@oddbird.net> wrote:
>
> Yes, that's what I said :-)

Thanks for the great explanations; I didn't see your previous answer before posting my latest reply, hence the duplicate explanations :-)
>
>> My code relies on isolation level "Serializable" for a cache.get()
>> followed by cache.add() to be reliable, but Django uses the default
>> from PostgreSQL.
>
> I don't think you need full Serializable, Repeatable Read should be
> sufficient. Serializable is dangerous, has a tendency to cause deadlocks
> (as I think you've discovered). Even with Repeatable Read, you'll need
> be a bit careful about any long-running processes that might hold open a
> transaction.

I was thinking Serializable is needed since cache.get() is a query that could result in a phantom read. But since I have multiple long-running management jobs and incoming changes from a REST API, I also think I'll be too vulnerable to deadlocks if I change the isolation level.

> But `self.connection.set_isolation_level()` is called in
> `_set_autocommit()`, and that should be sufficient. (The
> `_set_isolation_level()` method is dead code in 1.7 and has since been
> removed, but what really matters is that
> `self.connection.set_isolation_level()` is called.)

It's only called if self.psycopg2_version < (2, 4, 2), at least in the file I'm looking at. My version is 2.5.2.

> OPTIONS['autocommit'] has been ignored since Django 1.6, as the docs you
> linked clearly state: "This configuration is ignored and can be safely
> removed."
>
> You're looking for
> https://docs.djangoproject.com/en/1.7/ref/settings/#autocommit instead.

Right you are :-) I should get some sleep.

> You could switch to Repeatable Read (but leave AUTOCOMMIT on). Or (what
> I would probably do) you could leave the isolation level at the default
> (since it's the default for Django, some parts of Django, such as
> QuerySet.get_or_create(), may not work correctly under a different
> isolation level) and simply update your locking logic to guard against
> that race condition (if the add fails, assume that another process has
> grabbed the lock in the interim).

Yeah, I think I'll do just that. My production code is a bit more complicated since it relies on two cache.get()'s before the cache.add(), but that will probably just have to go away.

Thanks again for your excellent help,
Erik

Carl Meyer

Feb 10, 2015, 6:39:53 PM
to django...@googlegroups.com
On 02/10/2015 04:03 PM, Erik Cederstrand wrote:
>> But `self.connection.set_isolation_level()` is called in
>> `_set_autocommit()`, and that should be sufficient. (The
>> `_set_isolation_level()` method is dead code in 1.7 and has since
>> been removed, but what really matters is that
>> `self.connection.set_isolation_level()` is called.)
>
> It's only called if self.psycopg2_version < (2, 4, 2), at least in
> the file I'm looking at. My version is 2.5.2.

You're totally right. That's a bug. Thanks for catching it! I've filed
https://code.djangoproject.com/ticket/24318

Carl
