The other issue is that NCLOB does not accept strings longer than 2000
characters. In my experiments, even though this is supposed to work, it
doesn't unless you first truncate the string, save that to the database,
and then save the longer version again. This is very strange behavior, and
I don't know whether it's an Oracle bug or a cx_Oracle bug. The lab
environment is cx_Oracle 5.1.2 with Python 2.7 x86 on Windows 7.
I have managed to work around these issues by editing
django/db/backends/oracle/creation.py and replacing all instances of
NVARCHAR2 with VARCHAR2 and NCLOB with CLOB. I've also noticed a lot of
discussion about making this switch, and my understanding was that the
decision had already been made to correct this using the above
recommendation. I have attached the patch I am currently using, which
resolves all of these issues.
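The substitution described above can be sketched as follows. The dict below is a simplified, hypothetical stand-in for the type mapping in creation.py, not the actual Django source:

```python
# Hypothetical sketch of the substitution described above. Django's Oracle
# backend maps model field types to Oracle column types; the patch swaps
# the national-character types for their plain equivalents. This dict is
# a simplified stand-in, not the real mapping from creation.py.
data_types = {
    "CharField": "NVARCHAR2(%(max_length)s)",
    "TextField": "NCLOB",
}

# Apply the same textual swap the patch performs on the real file.
patched = {
    field: column.replace("NVARCHAR2", "VARCHAR2").replace("NCLOB", "CLOB")
    for field, column in data_types.items()
}

print(patched["CharField"])  # VARCHAR2(%(max_length)s)
print(patched["TextField"])  # CLOB
```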
--
Ticket URL: <https://code.djangoproject.com/ticket/20200>
Django <https://code.djangoproject.com/>
The Web framework for perfectionists with deadlines.
* needs_better_patch: => 0
* needs_tests: => 0
* needs_docs: => 0
Comment:
The second issue (the NCLOB problem, which switching to CLOB did not fix)
was not resolved by this patch. I have discovered some very strange
behavior there and have created a new ticket for it: #20201
The submitted patch is still valid, though, for the max_length issue.
--
Ticket URL: <https://code.djangoproject.com/ticket/20200#comment:1>
* cc: deejross (added)
--
Ticket URL: <https://code.djangoproject.com/ticket/20200#comment:2>
* cc: shai@… (added)
* needs_docs: 0 => 1
* needs_tests: 0 => 1
Comment:
Hi,
References to the conversations about switching NVARCHAR2 to VARCHAR2
would be nice here. I haven't seen any such conversations lately, and I
don't think I'd like Django deciding that it's OK to have strings that
don't fit into fields (when the database CHARSET is not Unicode).
W.r.t. the patch: especially given the claim that you are not making an
enhancement but fixing a bug, please add a test that fails without your
fix and passes with it. (I am not a core committer, and as I said, I am
against your fix, but without tests and a documentation note about the
change, the patch shouldn't even be considered by core.)
--
Ticket URL: <https://code.djangoproject.com/ticket/20200#comment:3>
* status: new => closed
* resolution: => wontfix
Comment:
Closing as "won't fix" given shai's objections and lack of follow-up from
OP.
--
Ticket URL: <https://code.djangoproject.com/ticket/20200#comment:4>
* status: closed => new
* resolution: wontfix =>
Comment:
I never got any notifications of the responses here, so I'm sorry for not
getting back to you. Here's the perfect reason why N-type fields should
not be used with Oracle and Django:
http://stackoverflow.com/questions/18978536/poor-performance-of-django-orm-with-oracle
We are at the tail end of converting from MySQL to Oracle, and I had
noticed a severe performance problem with Oracle. It turns out there are
cases where Oracle's implicit type conversion rules prevent indexes from
being used. This means full table scans, regardless of your indexes. The
only workarounds are to use cursor.execute() or to create a C2C index on
every field, neither of which is a suitable option.
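For illustration, here is a sketch of the two workarounds mentioned above. The table and column names are invented, and "C2C index" is taken here to mean a function-based index on Oracle's SYS_OP_C2C() conversion function:

```python
# Hypothetical sketch of the two workarounds described above. The table
# and column names are invented, and no database connection is made.

# 1. Raw SQL via cursor.execute(): the value is bound as a plain VARCHAR2
#    rather than NVARCHAR2, so the comparison needs no implicit conversion
#    and the ordinary index on the column can be used.
raw_query = "SELECT id FROM app_user WHERE username = :name"

# 2. A function-based index wrapping the column in SYS_OP_C2C(), so Oracle
#    can still use an index when an NVARCHAR2 bind forces the implicit
#    conversion. One such index would be needed on every affected column.
c2c_index_ddl = (
    "CREATE INDEX app_user_username_c2c "
    "ON app_user (SYS_OP_C2C(username))"
)

print(raw_query)
print(c2c_index_ddl)
```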
That question also refers to these threads on the subject:
http://comments.gmane.org/gmane.comp.python.db.cx-oracle/3049
http://comments.gmane.org/gmane.comp.python.db.cx-oracle/2940
--
Ticket URL: <https://code.djangoproject.com/ticket/20200#comment:5>
* status: new => closed
* resolution: => wontfix
--
Ticket URL: <https://code.djangoproject.com/ticket/20200#comment:6>