Large files in temp from database


LostInSpace2011

Nov 14, 2009, 11:14:28 AM11/14/09
to H2 Database
I am using H2 (RDBMS: H2, version: 1.1.114 (2009-06-01)) on Mac OS X.
In the last few days things have been going crazy with the database.
It has started creating files in my temp folder which seem to
originate from H2. There are about 8000 of these, taking up more than
600 MB. My database is only 5 MB in size. The process seems to be
using up all available CPU as well.

An example of such a file in the temp folder:
___db_AddressBookDB.6347079200721256368.temp.db

I am guessing that the database is corrupt, which causes it to behave
like this. Are there any tools available to verify or repair the
database?

The change log mentions several fixes since that release, so I guess
the cause of this has been addressed. However, I would like to
investigate a bit further into why this has happened.

Thanks in advance.
Alex

Christopher Lakey

Nov 14, 2009, 2:17:55 PM11/14/09
to h2-da...@googlegroups.com, H2 Database
Alex,

Are you using AUTO_SERVER=true?
How much heap space does your JVM have?

I have noticed a similar problem in recent versions under certain
conditions. Unfortunately I have not had time to write a test case to
reproduce it.

It's possible that some object is holding on to a temp file and is not
being garbage collected. For us, the files go away when the server
(Tomcat) is shut down. I haven't tried forcing garbage collection.

-Christopher





Alexander Hartner

Nov 14, 2009, 2:47:59 PM11/14/09
to h2-da...@googlegroups.com
Hi Christopher,

I don't think I am using AUTO_SERVER. For me these files remain even after restarting my JVM.

Regards
Alex

Alexander Hartner

Nov 14, 2009, 2:54:18 PM11/14/09
to h2-da...@googlegroups.com
Could it be that I am not closing a particular transaction or connection? How can I trace that?


Christopher Lakey

Nov 15, 2009, 3:52:46 PM11/15/09
to h2-da...@googlegroups.com

Thomas Mueller

Nov 16, 2009, 4:31:39 PM11/16/09
to h2-da...@googlegroups.com
Hi,

What is your database URL?

The temporary files could be result sets from queries that generate a
lot of (potentially temporary) data, such as: SELECT TOP 100 * FROM
TEST ORDER BY X (if there is no index on X and the table is very
large; instead of a single large table it could also be a join of
multiple tables). Maybe you can see what it is if you open one of
those files? In any case the files should be deleted automatically;
I'm not sure what the problem could be.
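The scenario Thomas describes can be sketched as follows. This is an illustrative sketch only (table name TEST and column X come from his example; it assumes the H2 jar is on the classpath): a sorted query over a large table with no index on the sort column forces the engine to buffer the intermediate result, which may spill to temp files.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SortSpillSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
        Statement stat = conn.createStatement();
        try {
            // Without an index on X, H2 may have to sort the whole
            // table externally before returning the top 100 rows.
            ResultSet rs = stat.executeQuery(
                "SELECT TOP 100 * FROM TEST ORDER BY X");
            while (rs.next()) {
                // ... process rows ...
            }
            rs.close();

            // With an index, rows can be read in sorted order instead,
            // avoiding the intermediate buffer.
            stat.execute("CREATE INDEX IF NOT EXISTS IDX_TEST_X ON TEST(X)");
        } finally {
            stat.close();
            conn.close();
        }
    }
}
```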

> not closing a particular transaction or connection

I don't think that's the problem. Unclosed connections are detected
when they are garbage collected; you should see an exception message
in the .trace.db file.
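To rule out leaked connections, the usual pattern is to close every connection in a finally block and to turn up tracing via the database URL. A minimal sketch (the database path is illustrative; TRACE_LEVEL_FILE is a standard H2 URL setting):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class TraceSketch {
    public static void main(String[] args) throws Exception {
        // TRACE_LEVEL_FILE=3 writes detailed trace output to the
        // <database>.trace.db file next to the database file.
        Connection conn = DriverManager.getConnection(
            "jdbc:h2:~/AddressBookDB;TRACE_LEVEL_FILE=3", "sa", "");
        try {
            // ... run the application's queries ...
        } finally {
            // Closing in finally guarantees the connection is released
            // even if a query throws. A connection that is never closed
            // is only detected later, when it is garbage collected.
            conn.close();
        }
    }
}
```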

Regards,
Thomas

Thomas Mueller

Nov 24, 2009, 3:11:46 PM11/24/09
to Alexander Hartner, h2-da...@googlegroups.com, Christopher Lakey
Hi,

I'm not sure if you got my email; it looks like you didn't. So I will repeat it here:

Alexander Hartner

Nov 24, 2009, 3:24:11 PM11/24/09
to Thomas Mueller, h2-da...@googlegroups.com
I did get it, but I am not sure what to make of it. My application does use paging, but this should not prevent it from running for extended periods of time. Currently these temp files are filling up rather quickly, and they filled up the entire 200 GB disk previously. Since the entire database is only 5 MB, this is quite a big problem for me. I have done further testing, and in most cases these files are deleted on shutdown; however, given the number of files, this can take a while. I am not sure how this will behave in a shutdown situation.

Any suggestions on how I can deal with these files without having to restart the process?

Regards
Alex

Christopher Lakey

Nov 24, 2009, 4:52:37 PM11/24/09
to Thomas Mueller, Alexander Hartner, h2-da...@googlegroups.com
Thomas,

I am fairly certain that it was queries in our case, since that's all
the client was doing. It is possible the files were deleted on
shutdown, but the server runs for days or weeks w/o restart and there
is enough RAM to not require much garbage collection.

We did not try forcing garbage collection to see if this would in fact
clean up the files.


We're looking at moving to MySQL for the actual deployment, so this
has not been a priority for us.


Alexander Hartner

Nov 24, 2009, 5:17:20 PM11/24/09
to Thomas Mueller, h2-da...@googlegroups.com, Christopher Lakey
I have since upgraded my H2 version to h2-1.2.123.jar and still have this issue.

Regards
Alex


Alexander Hartner

Nov 24, 2009, 6:56:38 PM11/24/09
to Thomas Mueller, h2-da...@googlegroups.com, Christopher Lakey
Hi All,

Thanks for all your help. After having a look, I figured I'd check the manual, and found this:

Storing and Reading Large Objects
If it is possible that the objects don't fit into memory, then the data type CLOB (for textual data) or BLOB (for binary data) should be used. For these data types, the objects are not fully read into memory, by using streams. To store a BLOB, use PreparedStatement.setBinaryStream. To store a CLOB, use PreparedStatement.setCharacterStream. To read a BLOB, use ResultSet.getBinaryStream, and to read a CLOB, use ResultSet.getCharacterStream. When using the client/server mode, large BLOB and CLOB data is stored in a temporary file on the client side.

It seems this is expected behaviour when larger CLOBs and BLOBs are used. The manual also suggests that increasing MAX_LENGTH_INPLACE_LOB could resolve this, although it doesn't say so explicitly. I have tried setting this value larger than my largest BLOB using the following SQL statements; however, this did not prevent further files from being created:

SELECT length(IMAGE) FROM CONTACTS WHERE IMAGE IS NOT NULL ORDER BY 1 DESC
SET MAX_LENGTH_INPLACE_LOB 128000
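The streaming API the manual describes can be sketched like this. The CONTACTS table and IMAGE column come from the query above; the ID column, file name, and database path are illustrative assumptions, and the H2 jar must be on the classpath:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LobStreamSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
            "jdbc:h2:~/AddressBookDB", "sa", "");
        try {
            // Store a BLOB via a stream, so it is never fully
            // loaded into memory.
            File photo = new File("photo.jpg");
            PreparedStatement ins = conn.prepareStatement(
                "UPDATE CONTACTS SET IMAGE = ? WHERE ID = ?");
            InputStream in = new FileInputStream(photo);
            ins.setBinaryStream(1, in, (int) photo.length());
            ins.setInt(2, 1);
            ins.executeUpdate();
            in.close();
            ins.close();

            // Read it back as a stream rather than as a byte array.
            PreparedStatement sel = conn.prepareStatement(
                "SELECT IMAGE FROM CONTACTS WHERE ID = ?");
            sel.setInt(1, 1);
            ResultSet rs = sel.executeQuery();
            if (rs.next()) {
                InputStream img = rs.getBinaryStream(1);
                // ... consume the stream ...
                img.close();
            }
            rs.close();
            sel.close();
        } finally {
            conn.close();
        }
    }
}
```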

Any suggestions on how I can avoid these files being created?

Regards
Alex



Alexander Hartner

Nov 24, 2009, 7:15:15 PM11/24/09
to h2-da...@googlegroups.com
I have also tried setting the system property (h2.defaultMaxLengthInplaceLob) to 500,000, as this had been changed in release Version 1.1.119 (2009-09-26).

Changelog extract:

New system property h2.defaultMaxLengthInplaceLob to change the default maximum size of an in-place LOB object.

However this did not have any effect either.

Regards
Alex




Alexander Hartner

Nov 27, 2009, 2:28:52 PM11/27/09
to h2-da...@googlegroups.com, Thomas Mueller, Christopher Lakey
I just tried using the latest H2 release (h2-1.2.124) and it still produces these temp files. I tried forcing a GC by attaching JConsole to my VM. It did free up memory, but did not remove these files. I am beginning to think this is a bug, as every time I query a LOB new files are generated. Surely the point of producing these files is to allow the client to access the data stream from its local disk; however, reading the same data causes new files to be produced, which then have to be read again.

Are there any options available to configure when these files are produced and cleaned up? This is becoming quite critical for me, and if I can't resolve it I will have to find another database engine. I am really getting desperate for any suggestion on what might resolve this and allow my VM to run for long periods without having to be restarted to clean up these files.

Thanks in advance for any assistance.
Alex
