Huge MVStore database file, compact operation fails.


Clyde Stubbs

Dec 1, 2019, 10:44:24 PM
to H2 Database
I have an MVStore file that is about 300MB. Running MVStoreTool -info gives me this:

Created: 2019-11-27 12:24:53 (+0 s)
Last modified: 2019-12-02 14:25:05 (+439211 s)
File length: 301449216
The last chunk is not listed
Chunk length: 1400832
Chunk count: 41
Used space: 1%
Chunk fill rate: 38%
Chunk fill rate excluding empty chunks: 40%
  Chunk 769549: 2019-12-01 10:14:20 (+337766 s), 0% used, 11 blocks, unused: 2019-12-01 10:25:56 (+338462 s)
  Chunk 781208: 2019-12-01 10:25:35 (+338441 s), 12% used, 8 blocks
  Chunk 781209: 2019-12-01 10:25:35 (+338441 s), 15% used, 6 blocks
  Chunk 781210: 2019-12-01 10:25:35 (+338441 s), 34% used, 5 blocks
  Chunk 781211: 2019-12-01 10:25:35 (+338441 s), 23% used, 9 blocks
  Chunk 781212: 2019-12-01 10:25:35 (+338441 s), 16% used, 6 blocks
  Chunk 781213: 2019-12-01 10:25:35 (+338441 s), 29% used, 8 blocks
  Chunk 781214: 2019-12-01 10:25:35 (+338441 s), 29% used, 4 blocks
  Chunk 781215: 2019-12-01 10:25:35 (+338441 s), 31% used, 12 blocks
  Chunk 781216: 2019-12-01 10:25:35 (+338441 s), 31% used, 4 blocks
  Chunk 781217: 2019-12-01 10:25:36 (+338442 s), 33% used, 11 blocks
  Chunk 781218: 2019-12-01 10:25:36 (+338442 s), 14% used, 7 blocks
  Chunk 781219: 2019-12-01 10:25:36 (+338442 s), 15% used, 6 blocks
  Chunk 781220: 2019-12-01 10:25:36 (+338442 s), 31% used, 6 blocks
  Chunk 781221: 2019-12-01 10:25:36 (+338442 s), 40% used, 10 blocks
  Chunk 781222: 2019-12-01 10:25:36 (+338442 s), 18% used, 6 blocks
  Chunk 781223: 2019-12-01 10:25:36 (+338442 s), 23% used, 9 blocks
  Chunk 781224: 2019-12-01 10:25:36 (+338442 s), 28% used, 6 blocks
  Chunk 781225: 2019-12-01 10:25:36 (+338442 s), 35% used, 12 blocks
  Chunk 781226: 2019-12-01 10:25:36 (+338442 s), 24% used, 14 blocks
  Chunk 781569: 2019-12-01 10:25:54 (+338460 s), 26% used, 5 blocks
  Chunk 781570: 2019-12-01 10:25:54 (+338460 s), 39% used, 8 blocks
  Chunk 781571: 2019-12-01 10:25:54 (+338460 s), 32% used, 5 blocks
  Chunk 781572: 2019-12-01 10:25:54 (+338460 s), 52% used, 9 blocks
  Chunk 781573: 2019-12-01 10:25:54 (+338460 s), 17% used, 7 blocks
  Chunk 781574: 2019-12-01 10:25:54 (+338460 s), 30% used, 6 blocks
  Chunk 781575: 2019-12-01 10:25:54 (+338460 s), 26% used, 6 blocks
  Chunk 781576: 2019-12-01 10:25:54 (+338460 s), 40% used, 5 blocks
  Chunk 781577: 2019-12-01 10:25:54 (+338460 s), 25% used, 7 blocks
  Chunk 781578: 2019-12-01 10:25:55 (+338461 s), 22% used, 8 blocks
  Chunk 781579: 2019-12-01 10:25:56 (+338462 s), 71% used, 47 blocks
  Chunk 781580: 2019-12-01 10:25:56 (+338462 s), 0% used, 4 blocks, unused: 2019-12-01 10:25:56 (+338462 s)
  Chunk 781581: 2019-12-01 10:25:56 (+338462 s), 63% used, 9 blocks
  Chunk 781582: 2019-12-01 10:25:56 (+338462 s), 60% used, 8 blocks
  Chunk 781583: 2019-12-01 10:25:56 (+338462 s), 28% used, 5 blocks
  Chunk 781584: 2019-12-01 10:25:56 (+338462 s), 66% used, 10 blocks
  Chunk 781585: 2019-12-01 10:25:56 (+338462 s), 49% used, 7 blocks
  Chunk 781586: 2019-12-01 10:25:56 (+338462 s), 52% used, 6 blocks
  Chunk 781587: 2019-12-01 10:25:56 (+338462 s), 59% used, 8 blocks
  Chunk 781588: 2019-12-01 10:25:56 (+338462 s), 33% used, 5 blocks
  Chunk 781589: 2019-12-01 10:25:56 (+338462 s), 70% used, 7 blocks

Does this really mean that only 1% of the file is used for data? My estimate of the amount of in-use data in the file is about 6MB, which is about 2%, so that is in the right ballpark. The question, then, is why the file is so big.
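For what it's worth, the back-of-envelope check behind that 2% figure (the file length is taken from the `-info` output above; the 6 MB of live data is my estimate):

```java
// Sanity check: ~6 MB of live data in a 301,449,216-byte file is roughly 2%,
// which is in the same ballpark as the 1% "Used space" reported by MVStoreTool.
public class FillCheck {
    public static void main(String[] args) {
        long fileLength = 301449216L;     // "File length" from MVStoreTool -info
        long liveData = 6L * 1024 * 1024; // estimated in-use data (~6 MB)
        double usedPct = 100.0 * liveData / fileLength;
        System.out.printf("live data: %.1f%% of file%n", usedPct);
    }
}
```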

If I attempt to compact it, I get this:

java -cp temp org.h2.mvstore.MVStoreTool -compact test.db
Exception in thread "main" java.lang.IllegalStateException: File corrupted in chunk 781582, expected page length 4..768, got 774975793 [1.4.200/6]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:951)
at org.h2.mvstore.Chunk.readBufferForPage(Chunk.java:368)
at org.h2.mvstore.MVStore.readBufferForPage(MVStore.java:1211)
at org.h2.mvstore.MVStore.readPage(MVStore.java:2235)
at org.h2.mvstore.MVMap.readPage(MVMap.java:672)
at org.h2.mvstore.Page$NonLeaf.getChildPage(Page.java:1043)

I have another file that gives similar results and has grown to 1.5GB. It seems that new chunks are being added to the end of the file, but none are being reused. Any idea what's going on and what I can do about it?
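One workaround I am considering, assuming the corruption is confined to dead chunks and the live data is still readable: instead of compacting in place, copy every map into a fresh store. This is only a sketch using the public `MVStore` API (the file names are placeholders), not something I have verified against this particular file:

```java
import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

public class CopyStore {
    public static void main(String[] args) {
        // Open the bloated/damaged file read-only, and a fresh target file.
        MVStore src = new MVStore.Builder().fileName("test.db").readOnly().open();
        MVStore dst = new MVStore.Builder().fileName("test-copy.db").open();
        try {
            // Copy each named map; only live pages are read, so dead chunks
            // are never touched and the new file contains only current data.
            for (String name : src.getMapNames()) {
                MVMap<Object, Object> from = src.openMap(name);
                MVMap<Object, Object> to = dst.openMap(name);
                to.putAll(from); // MVMap is a java.util.Map, so putAll works
            }
            dst.commit();
        } finally {
            src.close();
            dst.close();
        }
    }
}
```

Of course, if `readBufferForPage` fails on a live page (as in the stack trace above), this would hit the same corruption; it only helps if the bad chunk holds stale data.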


areichel

Dec 3, 2019, 9:41:56 AM
to H2 Database
Dear All,

I have also observed corruption when compacting large DB files. Unfortunately it is not reproducible on demand and seems to happen only with large DB files of 300MB or more.

Let us know how we can provide better information or traces, and we certainly will do so.

Best regards
Andreas
