Check out the queryset-refactor branch and run the null_queries test with
the 'mysql' (or 'mysql_old') backend.
`--> ./runtests.py --settings=settings1 null_queries
----------------------------------------------------------------------
File "/home/malcolm/BleedingEdge/Django/django.git/tests/regressiontests/null_queries/models.py", line ?, in regressiontests.null_queries.models.__test__.API_TESTS
Failed example:
Choice.objects.filter(id__exact=None)
Expected:
[]
Got:
[<Choice: Choice: Why Not? in poll Q: Why? >]
----------------------------------------------------------------------
Ran 1 test in 0.021s
FAILED (failures=1)
So the test at [1] is failing (note this is slightly different from
trunk, because ticket #2737 has been fixed/changed on the branch).
Clearly something is amiss here, since the "id" attribute is a primary
key column (not NULL and unique, by definition), so nothing should match
against the query. The generated SQL can be seen by looking at
Choice.objects.filter(id__exact=None).query.as_sql() and gives
'SELECT `null_queries_choice`.`id`, `null_queries_choice`.`poll_id`, `null_queries_choice`.`choice` FROM `null_queries_choice` WHERE `null_queries_choice`.`id` IS NULL'
which looks correct.
Now things get really weird: if you cut and paste the line that is
failing so that you run it twice in a row, it fails the first time
(returning a row) and passes the second time (returning nothing)!
Normally I wouldn't worry too much about this and just chalk it up to
the high quality weed they're smoking over there at MySQL Headquarters,
but this behaviour might actually explain some questions we see from
time to time on django-users. Somebody using MySQL will update a model
and then try to query it immediately and the new value won't appear in
the query for a little bit. We've always managed to eliminate
transaction issues, so the problem was just odd. Now I have a repeatable
case (at least for values of "repeatable" that mean "on my reasonably
fast machine, every time, using MySQL 5.0.45").
If somebody else can repeat this and has the brains to work out what is
going on, that would be interesting knowledge. Does MySQL delay writing
to the database for a second or so, just for laughs? And then not take
that into account when reading? I've never seen this before, but every
time I've used MySQL in really big projects, writes and reads of the
same piece of data have been fairly widely separated.
Thanks in advance,
Malcolm
--
Remember that you are unique. Just like everyone else.
http://www.pointy-stick.com/blog/
Are you sure these are not artifacts from a previous query? Maybe the
cursor was not properly freed or something?
--
Patryk Zawadzki
PLD Linux Distribution
Could be that. I meant it when I said I don't know what is causing it. I
haven't ruled out killer attack bunnies from Mars, either. All options
are on the table.
That's why I'm hoping somebody familiar with MySQL and the Python
wrapper will look more deeply and write a fix if something's going wrong
in the mysql backend.
What I do know is that all the other database backends behave as
expected: the data is written and read serially and the right result is
returned. There are no errors raised by MySQL and it's not as if it
hasn't written anything at all (since data is returned). It's just
decided not to write all the data and the bit it misses is 100%
repeatable. The output is consistent with somehow retaining the data we
sent to the database (which doesn't include the 'id') and not reading
back the real result, but that would be slightly insane behaviour.
Regards,
Malcolm
--
How many of you believe in telekinesis? Raise my hand...
http://www.pointy-stick.com/blog/
SQL_AUTO_IS_NULL = {0 | 1}
    If set to 1 (the default), you can find the last inserted row
    for a table that contains an AUTO_INCREMENT column by using the
    following construct:
        WHERE auto_increment_column IS NULL
    This behavior is used by some ODBC programs, such as Access.
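A minimal session sketch of what that documented behaviour looks like in practice (the table and column names here are hypothetical, just for illustration):

```sql
-- Assumes a throwaway table with an AUTO_INCREMENT primary key.
CREATE TABLE poll (id INT AUTO_INCREMENT PRIMARY KEY, question VARCHAR(200));
INSERT INTO poll (question) VALUES ('Why?');

-- With SQL_AUTO_IS_NULL=1 (the default), this returns the row just
-- inserted on this connection, even though id is never actually NULL:
SELECT * FROM poll WHERE id IS NULL;

-- With the option disabled, the same query correctly returns nothing:
SET SQL_AUTO_IS_NULL = 0;
SELECT * FROM poll WHERE id IS NULL;
```

Which would explain exactly the run-it-twice behaviour above: the magic only applies to the most recent insert on the connection, so the second identical query comes back empty.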
--
Karen
Oh dear. :-(
Thanks for hunting that down, Karen; it hadn't occurred to me to look in
that part of the documentation.
That's just wrong.
Regards,
Malcolm
--
I intend to live forever - so far so good.
http://www.pointy-stick.com/blog/
The MySQL docs and several things I've found online (including code from
Rails) indicate that simply "SET SQL_AUTO_IS_NULL=0" should work,
unless they broke it :)
--
Collin Grady
The past always looks better than it was. It's only pleasant because
it isn't here.
-- Finley Peter Dunne (Mr. Dooley)
Why are you including the "SESSION" in your SET statement?
mysql> SELECT @@SESSION.SQL_AUTO_IS_NULL;
+----------------------------+
| @@SESSION.SQL_AUTO_IS_NULL |
+----------------------------+
| 1 |
+----------------------------+
1 row in set (0.00 sec)
mysql> SET SQL_AUTO_IS_NULL=0;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT @@SESSION.SQL_AUTO_IS_NULL;
+----------------------------+
| @@SESSION.SQL_AUTO_IS_NULL |
+----------------------------+
| 0 |
+----------------------------+
1 row in set (0.00 sec)
--
Collin Grady
(1) Never draw what you can copy.
(2) Never copy what you can trace.
(3) Never trace what you can cut out and paste down.
Okay, everybody (or, at least, some people) is putting way too much
thought into this. Stop now.
I've already worked around it in queryset-refactor so that we never
generate this type of query. It was used in a test and as a way to avoid
incorrect results in one other place that I've now changed. It's a
complete non-issue now that I know what the bug is.
Regards,
Malcolm
--
What if there were no hypothetical questions?
http://www.pointy-stick.com/blog/
Malcolm, wouldn't it be reasonable to just call "SET SQL_AUTO_IS_NULL
= 0" whenever a connection is established? It would prevent people
from hitting this bug even when using custom queries (like
QuerySet.extra) and would eliminate any need for workarounds, as
Django itself does not depend on such quirks and oddities in SQL
behaviour.
Since we establish a new connection for every front-end request, it's
worth keeping any initialisation stuff down to a minimum, so, no, I
don't think this is necessarily a good idea. I'm also a little reluctant
to take configuration control out of people's hands like that (if we
enforce it inside Django, it's difficult to turn off and that's when you
find out that a couple of people rely on that behaviour for some reason).
Fortunately, my thinking here isn't burning any bridges. If somebody
wanted to test that out, or use it by default, they can use the database
initialisation options via DATABASE_OPTIONS in settings (see [1]). Both
"read_default_file" and "init_command" are useful there.
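For the record, a sketch of what that looks like in the settings of that era (assuming the MySQL backend passes DATABASE_OPTIONS straight through to MySQLdb.connect(), where "init_command" is run on every new connection):

```python
# settings.py -- sketch only. DATABASE_OPTIONS is handed verbatim to
# MySQLdb.connect(), so "init_command" executes once per connection.
DATABASE_ENGINE = 'mysql'
DATABASE_OPTIONS = {
    'init_command': 'SET SQL_AUTO_IS_NULL=0',
}
```

That keeps the decision in the hands of whoever controls settings.py, rather than hard-wiring it into the backend.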
I'm not saying this should be "no, forever", but right now I'm not
convinced it's worth doing. Balanced against that is how to let
people know that it might happen... *shrug*. I might add something to
databases.txt, for a start. I hate this sort of balancing act. :-(
[1]
http://www.djangoproject.com/documentation/databases/#connecting-to-the-database
Regards,
Malcolm
--
Honk if you love peace and quiet.
http://www.pointy-stick.com/blog/