Before upgrading we did not have this type of error. Does anybody have a
similar problem?
Arne Ortlinghaus
ACS Data Systems
What database product are you actually working with? I can't think,
though I haven't had coffee yet, of any case in Oracle where one would
refer to a field as <name>.<name>.<name>. Nor do I understand what
actual: 13, maximum 7 indicates. Again this doesn't sound like Oracle.
If what you are saying is that occasionally your trigger is trying
to stuff 13 bytes into a 7 byte bucket this issue is more likely
something other than the patch you applied. Put in exception handling
and write the offending values to a table so you can view them.
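A minimal sketch of that logging approach, with illustrative table, procedure, and column names (none of these come from the OP's schema):

```sql
-- Illustrative log table for offending values.
CREATE TABLE trig_err_log (
  logged_at DATE,
  what      VARCHAR2(100),
  val       VARCHAR2(4000),
  err       VARCHAR2(4000)
);

-- Autonomous transaction so the log row survives the rollback
-- of the failing statement.
CREATE OR REPLACE PROCEDURE log_trig_err (p_what VARCHAR2,
                                          p_val  VARCHAR2,
                                          p_err  VARCHAR2)
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO trig_err_log VALUES (SYSDATE, p_what, p_val, p_err);
  COMMIT;
END;
/

-- Inside the trigger body, around the suspect assignment:
-- BEGIN
--   :NEW.some_col := v_value;
-- EXCEPTION
--   WHEN OTHERS THEN
--     log_trig_err('SOME_COL', v_value, SQLERRM);
--     RAISE;  -- re-raise so behaviour is otherwise unchanged
-- END;
```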
--
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damo...@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
> What database product are you actually working with? I can't think,
> though I haven't had coffee yet, of any case in Oracle where one would
> refer to a field as <name>.<name>.<name>. Nor do I understand what
> actual: 13, maximum 7 indicates. Again this doesn't sound like Oracle.
Excuse me?
$ sqlplus scott/tiger@zen1
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Oct 7 15:48:20 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Release 10.2.0.3.0 - Production
scott@ZEN1> create table x ( x varchar2(2) ) ;
Table created.
scott@ZEN1> insert into scott.x values ('foo');
insert into scott.x values ('foo')
*
ERROR at line 1:
ORA-12899: value too large for column "SCOTT"."X"."X" (actual: 3, maximum: 2)
Y.
SQL> create table x (x char(14));
Table created.
SQL> insert into x (x) Values ('abcdefghijklm');
1 row created.
SQL> insert into x (x) Values ('abcdefghijklmn');
1 row created.
SQL> insert into x (x) Values ('abcdefghijklmnop');
insert into x (x) Values ('abcdefghijklmnop')
*
ERROR at line 1:
ORA-12899: value too large for column "OPS$ORACLE"."X"."X" (actual: 16,
maximum: 14)
Yes, normally I know this message only from illegitimate inserts. But now I
have seen this type of error in at least two different programs/places at two
different sites, perhaps 20 times in total. In the same period the
corresponding SQL statements were executed tens of thousands of times
without errors. When the statement is executed again afterwards, everything
is OK.
Before updating to 10.2.0.4 I never saw this error at these two
different places.
It is difficult to open a service request with this information
because I do not have the statement that was actually executed.
<snip>
Perhaps this is a character set problem. The OP has an .it email address,
but the text in the OP looked German (??). Either way, the data probably
requires a multibyte character set to represent, and maybe that got hosed
in the upgrade somehow?
--
"Care must be exorcised when handring Opiticar System as it is apts to
be sticked by dusts and hand-fat." --Japanese Translators
"Keep your fingers off the lens." --Elton Byington, English Translator
I'm guessing you are relying on implicit date conversions, and you see
this because it is converting the internal date representation of 7
bytes into an external representation of some kind (which I would
expect to be 9 or 11 actual). Check your date environment variables,
and make your date conversions explicit in the trigger.
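As a sketch of what "explicit" means here (column and variable names are made up):

```sql
-- Implicit: result depends on the session's NLS_DATE_FORMAT.
:NEW.some_date_col := some_string_value;

-- Explicit: independent of NLS settings (the format model is illustrative).
:NEW.some_date_col := TO_DATE(some_string_value, 'YYYY-MM-DD HH24:MI:SS');

-- Likewise when a DATE is assigned to a string column:
:NEW.some_text_col := TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS');
```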
It could also be the character set issue, where your abracadabra string
is now defined differently.
"This is seen when exporting from a database with a 8 bit
NLS_CHARACTERSET (like WE8ISO8859P1, WE8MSWIN1252, WE8DEC …) or 16-bit
NLS_CHARACTERSET (like JA16SJIS, ZHS16GBK, KO16MSWIN949) to a database
with a NLS_CHARACTERSET set to AL32UTF8 or UTF8." (from a peoplesoft
blog found by googling)
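The expansion that quote describes is easy to demonstrate: LENGTH counts characters, LENGTHB counts bytes, and the two diverge as soon as a multibyte character set is involved.

```sql
-- On an AL32UTF8 database 'ü' is one character but two bytes;
-- in a single-byte set such as WE8ISO8859P1 it is one byte.
SELECT LENGTH('ü')  AS chars,
       LENGTHB('ü') AS bytes
FROM dual;
```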
jg
--
@home.com is bogus.
"Download complete. You may now disconnect from the Internet."
In a DATE column? Hm... I could also imagine some trigger
implementation issues. Maybe there is another trigger that's firing as
well. Reminds me of [1], especially item 2.
Cheers
robert
[1] http://rwijk.blogspot.com/2007/09/database-triggers-are-evil.html
> The OP had a .it email, but the text in the OP looked German (??).
The OP writes from the beautiful Italian autonomous province called
Provincia autonoma di Bolzano – Alto Adige / Autonome Provinz Bozen –
Südtirol / Provinzia Autonòma de Bulsan, which is a trilingual zone
(69.15% German, 26.47% Italian, 4.37% Ladin).
Y.
> Joey.D...@gmail.com <Joey.D...@gmail.com> wrote:
>>I tried to recreate your issue on 10.2.0.4 HP-Itanium. The only place
>>I got the error, it was legitimate. Is there any chance your input
>>data has changed?
>
> <snip>
>
> Perhaps this is a character set problem. The OP had a .it email, but the
> text in the OP looked German (??). Either way, probably requires a
> multibyte characterset to represent and maybe that got hosed in the
> upgrade somehow?
This is what I was wondering. However, I've never had to work with many
different character sets, so I'm a bit out of my depth. I would
certainly be looking to make sure the NLS settings are consistent with what
they were under the 9i platform. I'd also check any clients.
I think the best advice so far is Daniel's: put in some auditing and
logging of values and see what patterns are found. Without some real data
to examine, you are just shooting in the dark - you may get lucky, but...
Tim
P.S. While it may seem tempting, I would be very skeptical of any
theories that tend towards labelling the problem as an Oracle bug.
Concentrate on the environment settings and the application, as I think
they are far more likely candidates.
--
tcross (at) rapttech dot com dot au
You are right, it would be very helpful to know which SQL statement
causes the error.
I could not find anything similar on Metalink; all references that sounded
remotely relevant were for SQL*Loader.
Can you reproduce the error at will, or does it only occur sometimes?
What program or application issues the statement, and which API is
used to access Oracle?
In any case I would try to enable SQL tracing on the server side to
catch the statement that causes the problem.
You can use DBMS_MONITOR.DATABASE_TRACE_ENABLE(FALSE, TRUE) to enable
tracing for the whole database or
DBMS_MONITOR.SESSION_TRACE_ENABLE(NULL, NULL, FALSE, TRUE)
to trace the current session.
Tracing will slow things down and generate a lot of output.
To find the statement that causes the error, you can search for the error
number in the trace files.
To reduce the amount of data logged, you could write a logon trigger
that enables tracing only for a certain user.
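A sketch of such a logon trigger; the user name APPUSER is a placeholder:

```sql
CREATE OR REPLACE TRIGGER trace_app_user
AFTER LOGON ON DATABASE
BEGIN
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APPUSER' THEN
    DBMS_MONITOR.SESSION_TRACE_ENABLE(
      session_id => NULL,   -- NULL = the current session
      serial_num => NULL,
      waits      => FALSE,
      binds      => TRUE);
  END IF;
END;
/
```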
Yours,
Laurenz Albe
Thank you for your comments.
The fact that nobody knows of a similar problem is already an answer.
At the moment it is difficult to insert monitoring because the errors
happen too rarely compared to the throughput, but I will think about it.
Reproducing the problem with other tools is problematic.
BTW: I live in the north of Italy, in South Tyrol, where there are
three official languages, mainly Italian and German. Our company is mixed
German/Italian and produces administration software, for example ERP systems
designed for the multilingual market in South Tyrol together with the
financial situation in Italy. I am originally from Germany.
The strange variable ADCABRRZDATUMSTORNIERUNG, which looks like nonsense, is
an abbreviation for
"AuftraegeDurchlaeufeChecksAbrechnungRechenzentrumDatumStornierung",
which in English would be something like "cancellation date for the
data-center billing checks on orders and cycles" :-).
Arne Ortlinghaus
"Arne Ortlinghaus" <Arne.Ort...@acs.it> schrieb im Newsbeitrag
news:gcd457$9mq$1...@aioe.org...
Why would you need an abbreviation for that?
ORA-12899: value too large for column "ISC"."JOBSERVERDATEN"."UPDATEUSER"
(actual: 12594, maximum: 14)
Update Jobserverdaten set
  Jobstextausgefuehrt = '0032 - Datensicherung: ...',
  Jobstexttransient = 'start Job',
  Jobstextfehler = '',
  Jobstextwarnungen = '',
  Jobsdatumzeit = sysdate,
  Jobsversion = '1.38b0',
  Jobsprogstart = To_date('2008-10-08 17:54:42', 'YYYY-MM-DD HH24:MI:SS'),
  Jobslaufminuten = 0,
  Jobsinausfuehrung = 1,
  Jobsjobserverfreigabe = 2
where JobsbenID = 'MAUTOBACKUP'
The field UpdateUser is written by a BEFORE UPDATE trigger:
cuser VARCHAR (100);
...
SYS.DBMS_APPLICATION_INFO.read_client_info (cuser);
cuser := SUBSTR (cuser, 1, 14);
...
:NEW.updateuser := cuser;
...
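One detail worth checking, purely as a guess (and it would not explain an "actual" of 12594): SUBSTR counts characters, not bytes. If UPDATEUSER is defined as VARCHAR2(14 BYTE) and the client info can ever contain multibyte characters, 14 characters may occupy more than 14 bytes after character set conversion. A byte-based truncation would rule that out:

```sql
-- SUBSTRB truncates by bytes, matching a VARCHAR2(14 BYTE) column.
cuser := SUBSTRB(cuser, 1, 14);
```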
Arne Ortlinghaus
Unless ADCABRRZ translates in your local language to something that
complies with the ANSI or ISO naming standards I don't think you
can justify what looks like it was created with DBMS_CRYPTO.RANDOMBYTES.
Actually I am testing the following system change as a workaround:
alter system set event="10778 trace name context forever, level 1"
scope=spfile;
I still have to wait some time to see whether it resolves both types of
errors.
---------------------------------------
Bug 5745084 - Abstract: OERI[ksmals] [sql txt in kkslod] querying V$ views
which access X$KGLLK
Fixed-Releases: A205
Details:
This problem is introduced in 9.2.0.8. This fix can be used in preference
to the fix in bug 4969005, as that fix only protects against dumps under
kglLockIterator. Queries against views which access X$KGLLK can fail with
dumps or errors such as ORA-600 [ksmals] [sql txt in kkslod], or may return
garbage for some column values.
Note: This fix is disabled by default. To enable the fix, set event 10778
to level 1. With the fix enabled the latchless iterator code is not used
(as was the case prior to 9.2.0.8), which avoids the dumps / ORA-600 issues
but can result in higher latch gets from queries on the affected views.
---------------------------------------
Did your solution work? We have been having the same problem since
we upgraded one of our databases from 10.2.0.3 to 10.2.0.4 (value too
large notifications in programs that haven't changed; always from
triggers).
Yves Van Wert
Did anyone get this resolved? I am in the same situation, using a
large in-house application suite that has worked (in this scenario)
without problems on 8i, 9i (all patch levels) and 10.2.0.1. We have
more than 60 customers running this software. Since patching some
customers to 10.2.0.4, we have experienced this issue, even though they
have not changed the application at all. Our developers have simulated
test runs with 10,000 records where only a handful have generated
ORA-12899 and the other 9,000+ have gone through OK. It is not even
the same field that generates the ORA-12899, despite the fact that the
same rows are being processed in a batch job with the same trigger.
Sometimes it is a date field, but other times it can be a text field
that holds the userid of the session, which is always a fixed length.
There are several triggers with the same triggering event on the
table, but this has not been an issue in the past [pre-10.2.0.4].
We do not have any non-default NLS settings. This seems like a bug in
10.2.0.4 that has not yet been resolved or even reported, as I cannot
find anything useful on Metalink.
--andrew