* easy: => 0
Comment:
Same problem here; changing the column to CLOB resolved my issue with
ORA-12704: character set mismatch.
--
Ticket URL: <http://code.djangoproject.com/ticket/11487#comment:25>
Django <https://code.djangoproject.com/>
The Web framework for perfectionists with deadlines.
* ui_ux: => 0
Comment:
#9152 was a duplicate.
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:26>
Comment (by aaugustin):
#20201 was probably a duplicate.
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:29>
* cc: CollinAnderson (removed)
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:30>
* cc: shai@… (added)
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:31>
* cc: mboersma (removed)
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:32>
Comment (by graham.boyle@…):
We're seeing this when putting more than four message.success(..) calls on
a page. It seems the success messages go onto the session, and the
session_data gets too big.
The len() of the string going into the SESSION_DATA field is 2120, but
that's a unicode string, so I _strongly suspect_ the byte length of the
data going into the django_session.session_data column is twice that.
Looking at the HEX values of django_session.session_data already in the
database, we're seeing 0x6C00 for "1", for example.
We are using TextFields in our models with no problem, even with large
chunks of text, and even when creating a new row. We see that the SQL being
run to save the session is
u'INSERT INTO "DJANGO_SESSION" ("SESSION_KEY", "SESSION_DATA",
"EXPIRE_DATE") SELECT %s, %s, %s FROM DUAL'
from a call to cursor.execute(sql, params)
at line 937 of django\db\models\sql\compiler.py in execute_sql().
We note that there's a bulk_insert_sql() in
django.db.backends.oracle.base.py (line 411) that's doing inserts from
selects off dual.
We're using Oracle 11g, Django 1.5, Python 2.7.3, the python running under
Windows 7 Professional (64 bit), Service Pack 1.
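The suspicion above can be checked in plain Python (no Django or Oracle
needed): an NCLOB column in an AL16UTF16 database stores two bytes per
character, so the 2120-character payload mentioned above would occupy
roughly twice that in bytes and cross Oracle's 4000-byte bind limit. The
`"x" * 2120` string is a stand-in for the actual session payload.

```python
# Stand-in for the encoded session payload reported above (2120 characters).
session_data = "x" * 2120

char_length = len(session_data)
# AL16UTF16 is big-endian UTF-16: two bytes per BMP character.
byte_length = len(session_data.encode("utf-16-be"))

print(char_length)          # 2120
print(byte_length)          # 4240
print(byte_length > 4000)   # True: over Oracle's 4000-byte limit for plain binds
```

This matches the reported symptom: a string whose len() looks safely under
4000 still overflows the limit once it is measured in bytes.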
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:33>
Comment (by graham.boyle@…):
and the NLS database parameters of interest are:
NLS_CHARACTERSET AL32UTF8
NLS_NCHAR_CHARACTERSET AL16UTF16
and django_session.session_data is nclob
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:34>
* needs_tests: 0 => 1
Comment:
I'm not enough of an Oracle expert to comment on the fix, but I don't see a
test that's integrated into Django's test suite, so I'm marking this as
"needs tests".
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:35>
Comment (by ikelly):
The test is at backends.OracleChecks.test_long_string and was added in
[11285]. I note that it claims to test strings longer than 4000
characters but actually tests a string of exactly 4000 characters; this is
probably not important as the Oracle limitation is actually 4000
''bytes'', and the string in question is more than 4000 bytes in any
encoding.
The test is probably not reliable though, as the details of reproducing
this issue seem to be dependent on the configuration of the database.
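The chars-vs-bytes distinction above can be illustrated with a short
sketch. Oracle's limit is 4000 *bytes*, so a 4000-character string crosses
it as soon as any character needs more than one byte per character; the
multi-byte character used here is an arbitrary stand-in, not the actual
test string.

```python
# 4000 characters, each of which needs 2 bytes in UTF-8 and in UTF-16.
s = "\u00e9" * 4000  # "é"

print(len(s))                      # 4000 characters
print(len(s.encode("utf-8")))      # 8000 bytes in UTF-8
print(len(s.encode("utf-16-be")))  # 8000 bytes in UTF-16
```

So a string of "exactly 4000 characters" still exercises the over-4000-byte
code path, provided it contains non-ASCII content.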
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:36>
Comment (by timo):
Sorry, I was referring to a test for the patch that was added as an
attachment since the ticket was reopened (`long_string.diff`).
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:37>
Comment (by Arpit10jain):
Is there any update on this issue? We are still experiencing it with
strings of around 2000 characters, usually when they contain Unicode
characters. I read about the workaround for this issue, but it's not an
ideal solution. Does anyone have a fix?
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:38>
Comment (by RohitV24):
We are currently experiencing the same issue while inserting large strings
(between 2000 and 4000 characters) into the Oracle database. The
NLS_NCHAR_CHARACTERSET on our database is AL16UTF16, which assigns 2 bytes
per character. From digging in a little deeper, it looks like a string is
mapped to cx_Oracle.STRING, which is then mapped to either VARCHAR,
NVARCHAR, or LONG in Oracle, and the conversion to LONG in the case of long
values is causing the error. It looks like the issue with 4000 characters
was fixed by setting the input size to cx_Oracle.CLOB when the value
reached the character limit. Using 2000 (for UTF-16) seems to work fine and
solve the problem. Would setting the comparison value to 1000 (taking other
encoding formats into consideration) before switching to CLOB be the fix
for this issue?
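A minimal sketch of the threshold logic proposed above, using a
hypothetical helper (this is not Django's actual code). The idea: since the
4000 limit is in bytes, divide it by the worst-case bytes-per-character of
the database character set before deciding whether a value must be bound as
a CLOB.

```python
ORACLE_MAX_BYTES = 4000  # Oracle's bind limit for plain string binds, in bytes

def needs_clob(value, max_bytes_per_char=4):
    """Return True if `value` could exceed Oracle's 4000-byte bind limit.

    Hypothetical helper for illustration. max_bytes_per_char=4 covers
    AL32UTF8's worst case; 4000 // 4 = 1000, which matches the comparison
    value suggested in the comment above. For AL16UTF16, 2 would suffice
    for BMP characters.
    """
    return len(value) > ORACLE_MAX_BYTES // max_bytes_per_char

print(needs_clob("x" * 999))                          # False: safely under 1000 chars
print(needs_clob("x" * 1001))                         # True: could exceed 4000 bytes
print(needs_clob("x" * 2001, max_bytes_per_char=2))   # True under a UTF-16 assumption
```

In the real backend the switch would be made via
cursor.setinputsizes(cx_Oracle.CLOB) for the affected parameter; the
threshold arithmetic is the part sketched here.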
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:39>
Comment (by Tim Graham <timograham@…>):
In [changeset:"950171d7b20a148a964281aed16aff3d2c734ab4" 950171d]:
{{{
#!CommitTicketReference repository=""
revision="950171d7b20a148a964281aed16aff3d2c734ab4"
Refs #11487 -- Removed redundant test_long_string() test.
Redundant with model_regress.tests.ModelTests.test_long_textfield
since 3ede430b9a94e3c2aed64d2cf898920635bdf4ae.
}}}
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:40>
* cc: felixx (added)
* status: new => closed
* resolution: => needsinfo
Comment:
Apparently this hasn't been a problem lately, and the original description
and suggested solution have become largely irrelevant.
If you can reproduce a similar problem, please reopen this ticket or, more
usefully, open a new one.
--
Ticket URL: <https://code.djangoproject.com/ticket/11487#comment:41>