ArrayIndexOutOfBoundsException


Vasco Visser

Mar 17, 2012, 3:23:40 PM
to h2-da...@googlegroups.com
Hi,

I'm getting an ArrayIndexOutOfBoundsException in both 1.3.162 and 1.3.164:

03-17 19:33:32 jdbc[2]: exception
org.h2.jdbc.JdbcSQLException: General error:
"java.lang.ArrayIndexOutOfBoundsException: 0"; SQL statement:
SELECT * FROM "SYS_2003652629_mc" ORDER BY "p_T1_ORG_NAME_edi" DESC [50000-164]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:329)
at org.h2.message.DbException.get(DbException.java:158)
at org.h2.message.DbException.convert(DbException.java:281)
at org.h2.command.Command.executeQuery(Command.java:191)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:173)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:152)
at org.h2.server.web.WebApp.getResult(WebApp.java:1311)
at org.h2.server.web.WebApp.query(WebApp.java:1001)
at org.h2.server.web.WebApp$1.next(WebApp.java:964)
at org.h2.server.web.WebApp$1.next(WebApp.java:967)
at org.h2.server.web.WebThread.process(WebThread.java:166)
at org.h2.server.web.WebThread.run(WebThread.java:93)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at org.h2.index.PageDataLeaf.getRowAt(PageDataLeaf.java:327)
at org.h2.index.PageDataCursor.nextRow(PageDataCursor.java:97)
at org.h2.index.PageDataCursor.next(PageDataCursor.java:49)
at org.h2.index.IndexCursor.next(IndexCursor.java:238)
at org.h2.table.TableFilter.next(TableFilter.java:353)
at org.h2.command.dml.Select.queryFlat(Select.java:513)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:618)
at org.h2.command.dml.Query.query(Query.java:297)
at org.h2.command.dml.Query.query(Query.java:267)
at org.h2.command.dml.Query.query(Query.java:36)
at org.h2.command.CommandContainer.query(CommandContainer.java:82)
at org.h2.command.Command.executeQuery(Command.java:187)
... 9 more


I haven't been able to reproduce this on simple data. The table has 1
million rows and about 160 columns. About fifty columns are varchar(255),
10 are integer, and the rest are tinyint (at this point all null values).
There is a single-column auto-increment primary key.

I noticed that reducing the size of the selection enough makes the
problem go away. I also noticed that the selection size at which the
problem starts to occur varies across instances of the database
(building the table multiple times, each time with the same content).
Also important: the ORDER BY seems not to be relevant at all; the
problem also occurs when selecting without ordering (I used a query
with an ORDER BY because the web interface has an implicit limit that
masks the problem without it). The data I use cannot be posted in the
group. If this is a bug in H2 and you need anything from me with
respect to the data used, please contact me directly.

Kind regards,

Vasco Visser

Vasco Visser

Mar 20, 2012, 5:26:00 PM
to h2-da...@googlegroups.com
I wanted to run a debugger over the H2 source today, but I haven't
been able to reproduce this bug. It seems like something
nondeterministic is going on. This is corroborated by the fact that
last week the problem occurred with different selections for different
database instances. So, last week I hit the problem three times in a
row, each time starting a DB from scratch, and now I can't reproduce
it. I realise this is probably too vague for anyone in the group to
say something useful about; nevertheless, I want to ask if anyone has
any input?

regards, Vasco

Thomas Mueller

Mar 30, 2012, 6:10:45 AM
to h2-da...@googlegroups.com
Hi,

If you get such problems, it's typically a database corruption. There
are some known reasons to get such problems, for example by disabling
the transaction log; it's also possible that there is still a bug in
the database engine in this area. To recover the data, use the tool
org.h2.tools.Recover to create the SQL script file, and then re-create
the database using this script.
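A minimal sketch of that recovery procedure, done programmatically rather than from the command line. It assumes the H2 jar (ideally the same version that wrote the database) is on the classpath, and that the corrupted file is ./mydb.h2.db; the file and database names here are illustrative:

```java
import org.h2.tools.Recover;
import org.h2.tools.RunScript;

public class RecoverDatabase {
    public static void main(String[] args) throws Exception {
        // Scans mydb.h2.db in the current directory and writes a
        // recovery script, mydb.h2.sql, next to it.
        Recover.execute(".", "mydb");

        // Re-create a fresh database from the generated script.
        RunScript.execute("jdbc:h2:./mydb-recovered", "sa", "",
                "mydb.h2.sql", null, false);
    }
}
```

The same two steps are available from the shell via `java -cp h2-*.jar org.h2.tools.Recover` and `org.h2.tools.RunScript`.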

Some known causes are:

With version 1.3.162 and older: on out of disk space, the database can
get corrupt sometimes, if later write operations succeed. The same
problem happens on other kinds of I/O exceptions (where one or some of
the writes fail, but subsequent writes succeed). Now the file is
closed on the first unsuccessful write operation, so that later
requests fail consistently.

Important corruption problems were fixed in version 1.2.135 and
version 1.2.140 (see the change log). Known causes for corrupt
databases are:

- The database was created or used with a version older than 1.2.135,
and the process was killed while the database was closing or writing a
checkpoint.
- Using the transaction isolation level READ_UNCOMMITTED (LOCK_MODE 0)
while at the same time using multiple connections.
- Disabling database file protection (setting FILE_LOCK to NO in the
database URL).

Some other areas that are not fully tested are: platforms other than
Windows XP, Linux, Mac OS X, or JVMs other than Sun 1.5 or 1.6; the
feature MULTI_THREADED; the features AUTO_SERVER and AUTO_RECONNECT;
the file locking method 'Serialized'.

If this is not the problem, I am very interested in analyzing and
solving this problem. Corruption problems have top priority for me.
The questions I typically ask are:

- Did the system ever run out of disk space?
- Could you send the full stack trace of the exception including message text?
- Did you use SHUTDOWN DEFRAG or the database setting DEFRAG_ALWAYS
with H2 version 1.3.159 or older?
- What is your database URL?
- How many connections does your application use concurrently?
- Do you use temporary tables?
- Did you use LOG=0 or LOG=1?
- With which version of H2 was this database created?
You can find it out using:
select * from information_schema.settings where name='CREATE_BUILD'
or have a look in the SQL script created by the recover tool.
- Did the application run out of memory (once, or multiple times)?
- Do you use any settings or special features (for example cache settings,
two phase commit, linked tables)?
- Do you use any H2-specific system properties?
- Is the application multi-threaded?
- What operating system, file system, and virtual machine
(java -version) do you use?
- How did you start the Java process (java -Xmx... and so on)?
- Is it (or was it at some point) a networked file system?
- How big is the database (file sizes)?
- How much heap memory does the Java process have?
- Is the database usually closed normally, or is the process
terminated forcefully or the computer switched off?
- Is it possible to reproduce this problem using a fresh database
(sometimes, or always)?
- Are there any other exceptions (maybe in the .trace.db file)?
Could you send them please?
- Do you still have any .trace.db files, and if yes could you send them?
- Could you send the .h2.db file where this exception occurs?

Regards,
Thomas


Vasco Visser

Apr 10, 2012, 10:24:03 AM
to h2-da...@googlegroups.com
Hi,

On Fri, Mar 30, 2012 at 12:10 PM, Thomas Mueller
<thomas.to...@gmail.com> wrote:
> If you get such problems, it's typically a database corruption. There
> are some known reasons to get such problems, for example by disabling
> the transaction log

I actually do disable the transaction log, because I don't require
durability and I need the performance. Right now I disable both the
UNDO_LOG (thinking this affects atomicity) and the LOG (thinking this
affects durability).
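For reference, a connection with both logs disabled looks something like the following sketch (the file name "data" is illustrative); in H2 1.3.x both settings can be passed in the database URL:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class FastButUnsafe {
    public static void main(String[] args) throws Exception {
        Class.forName("org.h2.Driver");
        // LOG=0 disables the transaction log, UNDO_LOG=0 disables the
        // undo log; both trade crash safety for write speed.
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:./data;LOG=0;UNDO_LOG=0", "sa", "");
        conn.close();
    }
}
```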

Can you give advice on what settings to use if I don't require
atomicity and durability, but do not want the database to become
corrupted when killing the process?

regards,

Vasco

Thomas Mueller

Apr 11, 2012, 3:36:41 PM
to h2-da...@googlegroups.com
Hi,

> I actually do disable the transaction log because I don't require
> durability and I need the performance. Right now I disable both the
> UNDO_LOG (thinking this affects atomicity) and the LOG (thinking this
> affects durability).

This is documented at http://h2database.com/html/faq.html#reliable

> Can you give an advice on what settings to use if I don't require
> atomicity and durability, but I do not want the database to become
> corrupted when killing the process.

Yes, see http://h2database.com/html/performance.html#database_performance_tuning
and below.
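One pattern along those lines (a sketch under my own assumptions, not a quote from the linked page): keep the transaction log enabled, so a killed process cannot corrupt the file, and instead raise WRITE_DELAY so commits are flushed to disk lazily. This gives up durability of the last few seconds but keeps the file consistent:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class FastButConsistent {
    public static void main(String[] args) throws Exception {
        Class.forName("org.h2.Driver");
        // Transaction log stays on (the default); WRITE_DELAY=5000
        // batches log flushes, trading durability of recent commits
        // for throughput without risking file corruption.
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:./data;WRITE_DELAY=5000", "sa", "");
        conn.close();
    }
}
```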

Regards,
Thomas
