java.lang.IllegalStateException: Chunk metadata too long [1.4.199/3]


areichel

Aug 28, 2019, 11:15:51 PM
to H2 Database
Dear All,

when transferring large amounts of data from Oracle queries into an H2 database using very simple batch insert statements, we eventually face a reproducible exception after about 20 minutes of transfer:

Caused by: java.lang.IllegalStateException: Chunk metadata too long [1.4.199/3]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:894)
at org.h2.mvstore.Chunk.writeChunkHeader(Chunk.java:165)
at org.h2.mvstore.MVStore.storeNow(MVStore.java:1370)
at org.h2.mvstore.MVStore.store(MVStore.java:1233)
at org.h2.mvstore.MVStore.commit(MVStore.java:1209)
at org.h2.mvstore.MVStore.beforeWrite(MVStore.java:2480)
at org.h2.mvstore.MVMap.beforeWrite(MVMap.java:897)
at org.h2.mvstore.MVMap.operate(MVMap.java:1644)

As a result, the top-level exception reads:
org.h2.jdbc.JdbcBatchUpdateException: File corrupted while reading record: "nio:/home/manticore/.manticore/ifrsbox_bak.mv.db". Possible solution: use the recovery tool; SQL statement:
insert /*+PARALLEL APPEND_VALUES*/ into CFE.INSTRUMENT (ID_INSTRUMENT ,ID_INSTRUMENT_COMMITMENT ,ID_INSTRUMENT_TYPE ,START_DATE ,END_DATE ,ID_CURRENCY ,ID_CALENDAR ,ID_BUSINESS_DAY_CONVENTION ,DISCOUNT_CURVE ,DISCOUNT_SPREAD)  VALUES (? ,? ,? ,? ,? ,? ,? ,? ,? ,?)  [90030-199]
at org.h2.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:1298)
at com.manticore.etl.ETLImportThread.writeDirect(ETLImportThread.java:423)
at com.manticore.etl.ETLImportThread.run(ETLImportThread.java:183)

org.h2.jdbc.JdbcSQLNonTransientConnectionException: File corrupted while reading record: "nio:/home/manticore/.manticore/ifrsbox_bak.mv.db". Possible solution: use the recovery tool; SQL statement:
insert /*+PARALLEL APPEND_VALUES*/ into CFE.INSTRUMENT (ID_INSTRUMENT ,ID_INSTRUMENT_COMMITMENT ,ID_INSTRUMENT_TYPE ,START_DATE ,END_DATE ,ID_CURRENCY ,ID_CALENDAR ,ID_BUSINESS_DAY_CONVENTION ,DISCOUNT_CURVE ,DISCOUNT_SPREAD)  VALUES (? ,? ,? ,? ,? ,? ,? ,? ,? ,?)  [90030-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:617)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.mvstore.db.MVTableEngine$Store.convertIllegalStateException(MVTableEngine.java:205)
at org.h2.mvstore.db.MVTable.convertException(MVTable.java:725)
at org.h2.mvstore.db.MVSecondaryIndex.add(MVSecondaryIndex.java:202)
at org.h2.mvstore.db.MVTable.addRow(MVTable.java:546)
at org.h2.command.dml.Insert.insertRows(Insert.java:179)
at org.h2.command.dml.Insert.update(Insert.java:131)
at org.h2.command.CommandContainer.update(CommandContainer.java:133)
at org.h2.command.Command.executeUpdate(Command.java:267)
at org.h2.server.TcpServerThread.process(TcpServerThread.java:398)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:175)
at java.lang.Thread.run(Thread.java:745)

Before this happens, around 10 similar queries have already been transferred successfully.
We use a recent version 1.4.199 built from Git. It will be difficult to share the DB file because it is rather large:

[manticore@ifrsbox-app ~]$ ls -lh .manticore/ifrsbox_bak.*
-rw-rw-r-- 1 manticore manticore  51G Aug 29 03:43 .manticore/ifrsbox_bak.mv.db
-rw-rw-r-- 1 manticore manticore 136K Aug 29 03:21 .manticore/ifrsbox_bak.trace.db
-rw-rw-r-- 1 manticore manticore  47K Aug 28 05:26 .manticore/ifrsbox_bak.xml
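For reference, the transfer described above is essentially the standard JDBC addBatch/executeBatch pattern. A minimal sketch (this is not the actual ETLImportThread code; the helper, class name, and batch size of 1000 are illustrative, and an H2/Oracle driver is assumed on the classpath):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical sketch of a batch copy from an Oracle ResultSet into H2.
public class BatchCopySketch {

    // Build a parameterized INSERT for the given table and columns.
    public static String buildInsert(String table, String... cols) {
        String placeholders = "?" + ", ?".repeat(cols.length - 1);
        return "INSERT INTO " + table + " (" + String.join(", ", cols)
                + ") VALUES (" + placeholders + ")";
    }

    // Copy all rows from an open source ResultSet into the H2 connection.
    public static void copy(ResultSet src, Connection h2, String table,
                            String... cols) throws Exception {
        String sql = buildInsert(table, cols);
        try (PreparedStatement ps = h2.prepareStatement(sql)) {
            int n = 0;
            while (src.next()) {
                for (int i = 1; i <= cols.length; i++) {
                    ps.setObject(i, src.getObject(i));
                }
                ps.addBatch();
                if (++n % 1000 == 0) {
                    ps.executeBatch(); // flush every 1000 rows
                }
            }
            ps.executeBatch(); // flush the remainder
        }
    }
}
```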

Is there anything else we could do or try from our end, please?
Best regards
Andreas

areichel

Aug 29, 2019, 1:01:20 AM
to H2 Database
Dear All,

good news first: the error was caused by running out of file-system space.
However, this leads to a few questions:

1) Would it not be possible to throw a more meaningful exception, e.g. "Failed to allocate table space"?
2) Why do indexes and constraints in particular consume so much space in H2?

To elaborate on question 2): I basically copy a few tables from Oracle to H2 using exactly the same table structures and indexes. Oracle shows a total tablespace size of 20 GB for these tables, indexes, LOBs etc. Less than half of that content, transferred to H2, already consumed more than 50 GB. I used only INSERT INTO statements; not a single SELECT or VIEW was involved, so I can assume that is "data only".

After dropping all constraints and indexes from the H2 database, I ran the transfer again. This time it worked, and much(!) less file space is used:


[manticore@ifrsbox-app ~]$ ls -lh .manticore/ifrsbox_bak.*
-rw-rw-r-- 1 manticore manticore 9.5G Aug 29 05:25 .manticore/ifrsbox_bak.mv.db
-rw-rw-r-- 1 manticore manticore 3.6K Aug 29 04:30 .manticore/ifrsbox_bak.trace.db
-rw-rw-r-- 1 manticore manticore  47K Aug 29 03:51 .manticore/ifrsbox_bak.xml


More than 40 GB for indexes and constraints on roughly 10 GB of content?

One last question, please:

3) Is there a way to use compression when writing the H2 database file? I would like to use H2 merely as a container for transporting data (instead of schema dumps), and I wonder whether a GZIP/BZIP stream could be used to write a compressed file.

Compressing the H2 file with LZMA reduces its size to (only) 2 GB, so there seems to be a lot of redundancy.

Thank you in advance for your kind help and support.
Cheers
Andreas

Noel Grandin

Aug 29, 2019, 8:41:14 AM
to h2-da...@googlegroups.com


On 2019/08/29 7:01 AM, areichel wrote:
> 1) would it not be possible to throw a more meaningful exception, e.g. "Failed to allocate table space"

We generally do; in fact, we have unit tests that check that, but obviously we are missing something. It is very hard to get error
handling right everywhere, especially for the edge cases.

> 2) why do especially indexes and constraints consume so much space in H2?
>

By default, the latest versions of H2 use a storage engine that trades off disk space for improved I/O speed.
You can switch to the older (and more compact) storage engine by appending MV_STORE=FALSE to the URL.
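As a concrete sketch, switching engines is just a URL change; the database path below is illustrative, and the H2 1.4.x driver is assumed to be on the classpath when the URL is actually used:

```java
// Build an H2 JDBC URL that selects the legacy PageStore engine.
public class PageStoreUrl {

    // Appending MV_STORE=FALSE makes H2 use the older, more compact engine.
    public static String withPageStore(String baseUrl) {
        return baseUrl + ";MV_STORE=FALSE";
    }

    public static void main(String[] args) {
        // The resulting URL is what you would pass to DriverManager.getConnection(...).
        System.out.println(withPageStore("jdbc:h2:~/ifrsbox_bak"));
    }
}
```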

> 3) Is there a way to use compression when writing the H2 database file? I would like to use H2 merely as a container
> for transporting data (instead of schema dumps), and I wonder whether a GZIP/BZIP stream could be used to write a
> compressed file.

H2 is not well suited to that use case. You'd be better off with one of the schema/data SQL dump tools floating around.

But we do have some options for compressing the data.
http://h2database.com/html/advanced.html#file_system
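To my understanding, the options on that page include a COMPRESS=TRUE connection setting (page compression in the MVStore) and read-only access to a database packed into a zip archive via the zip: file-system prefix. A small sketch building such URLs (paths are illustrative; treat the exact option names as something to verify against the linked page):

```java
// Hypothetical helpers for the two compression options mentioned above.
public class CompressionUrls {

    // Compress pages as they are written (trades CPU for disk space).
    public static String compressed(String baseUrl) {
        return baseUrl + ";COMPRESS=TRUE";
    }

    // Read-only access to a database file stored inside a zip archive.
    public static String zipped(String zipPath, String dbName) {
        return "jdbc:h2:zip:" + zipPath + "!/" + dbName;
    }

    public static void main(String[] args) {
        System.out.println(compressed("jdbc:h2:~/ifrsbox_bak"));
        System.out.println(zipped("~/data.zip", "ifrsbox_bak"));
    }
}
```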

areichel

Aug 30, 2019, 4:22:51 AM
to H2 Database
Thank you, Noel; I am fine with that explanation.
I also managed to work around the problem by 1) dropping all constraints and indexes (and rebuilding them only later) and 2) using BTRFS with ZStd compression enabled.
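The drop-then-rebuild part of that workaround can be scripted as two lists of statements to run around the bulk load. A hedged sketch in which the index and constraint names (and the CFE.CURRENCY reference) are hypothetical placeholders, not the real schema:

```java
import java.util.List;

// Sketch of the drop-load-rebuild workaround; all index/constraint
// names below are invented for illustration.
public class BulkLoadPlan {

    // SQL to run before the bulk load: drop secondary indexes and constraints.
    public static List<String> beforeLoad() {
        return List.of(
            "ALTER TABLE CFE.INSTRUMENT DROP CONSTRAINT IF EXISTS FK_INSTRUMENT_CURRENCY",
            "DROP INDEX IF EXISTS CFE.IDX_INSTRUMENT_START");
    }

    // SQL to run after the load: rebuild them in one pass over the data.
    public static List<String> afterLoad() {
        return List.of(
            "CREATE INDEX CFE.IDX_INSTRUMENT_START ON CFE.INSTRUMENT(START_DATE)",
            "ALTER TABLE CFE.INSTRUMENT ADD CONSTRAINT FK_INSTRUMENT_CURRENCY"
                + " FOREIGN KEY (ID_CURRENCY) REFERENCES CFE.CURRENCY(ID)");
    }

    public static void main(String[] args) {
        beforeLoad().forEach(System.out::println); // execute these, run the load,
        afterLoad().forEach(System.out::println);  // then execute these
    }
}
```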

Cheers
Andreas