But what I can suggest is that you set this setting to a nice large number:
-Dh2.maxCompactTime=60000
which sets the maximum time to compact the database on shutdown (in
milliseconds), and reboot your application on a regular basis.
See here:
http://www.h2database.com/html/grammar.html#shutdown
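For illustration, a minimal sketch of triggering that compaction at a controlled moment by closing the database with SHUTDOWN COMPACT (the URL and credentials are hypothetical, and this assumes the H2 driver is on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CompactOnRestart {
    // Hypothetical database URL.
    static final String URL = "jdbc:h2:~/test";

    public static void compactAndClose() throws Exception {
        try (Connection conn = DriverManager.getConnection(URL, "sa", "");
             Statement stmt = conn.createStatement()) {
            // Closes the database and compacts the file; the time spent
            // compacting is bounded by h2.maxCompactTime (milliseconds).
            stmt.execute("SHUTDOWN COMPACT");
        }
    }
}
```

Setting -Dh2.maxCompactTime=60000 on the JVM command line then bounds how long the compaction phase may take.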
On 2012-02-15 11:51, IntensiveH2 wrote:
> Thanks for the quick answer but currently it's not possible to reboot
> my application on a regular basis because my application must ensure
> scheduling, failover, HA ....
> and it's not possible to stop it.
> I really need to have a solution at runtime without restarting my
> application.
> Thanks.
>
I had a quick bash at implementing this by taking an exclusive lock on
the DB and re-using the existing compacting code.
So I added a
COMPACT
command.
But the unit test I added indicates that I'm corrupting the database
somehow.
Patch attached - Thomas, perhaps you have an idea?
Regards, Noel.
On 2012-02-15 11:51, IntensiveH2 wrote:
> Thanks for the quick answer but currently it's not possible to reboot
> my application on a regular basis because my application must ensure
> scheduling, failover, HA ....
> and it's not possible to stop it.
> I really need to have a solution at runtime without restarting my
> application.
> Thanks.
>
I would first analyze why it grows so much. Possibly an uncommitted transaction?
Regards,
Thomas
> --
> You received this message because you are subscribed to the Google Groups "H2 Database" group.
> To post to this group, send email to h2-da...@googlegroups.com.
> To unsubscribe from this group, send email to h2-database...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/h2-database?hl=en.
>
> Can I add some debug settings on the customer side?
There are two options:
- Use the trace options to analyze the JDBC method calls (append
;TRACE_LEVEL_FILE=3 to the database URL)
- Use the Recover tool to find out if there are uncommitted
transactions in the (large) database file (or a copy of the file; in
this case you could even copy the database file while it's in use,
because you don't need a 'safe' copy of the database just to analyze
the transaction log).
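As a sketch of the first option, appending the trace setting to a (hypothetical) JDBC URL; the Recover invocation in the comment follows the tool's documented command-line form, with the database name used only as an example:

```java
public class TraceUrl {
    /** Appends the file-trace option (level 3 = debug) to a JDBC URL. */
    public static String withFileTrace(String jdbcUrl) {
        return jdbcUrl + ";TRACE_LEVEL_FILE=3";
    }

    public static void main(String[] args) {
        System.out.println(withFileTrace("jdbc:h2:~/test"));
        // Second option, run on a copy of the database file:
        //   java -cp h2.jar org.h2.tools.Recover -dir . -db test
        // which writes a <db>.h2.sql script including the transaction log.
    }
}
```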
> Perhaps we need to implement some kind of default timeout on how long a
> transaction can stay open?
That's a very good idea. I wonder if other databases support something
like this?
At the very least there should be a warning in the .trace.db file. I
will have a look at that.
Regards,
Thomas
In order to monitor for bad transactions, how about we add a
TRANSACTION_START column to the SESSIONS meta-table?
Regards, Noel Grandin
Then you can log the output of that table regularly to see who the culprit is.
If you want, I have some code that could be used as a starting point.
It's work in progress.
Warning: I haven't tested it; I didn't run it at all. It could be
completely wrong, and possibly corrupt the database. Also, the term
"TRANSACTION_TIMEOUT" is misleading and has to be changed (it should
be something like "really maximum transaction log size": if the
transaction log is larger than this, then the transaction of the
session with the oldest open transaction is rolled back).
http://h2database.com/p.html#29e32a33bca7541cb70ea3cc9d483814
Regards,
Thomas
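A dependency-free sketch of the policy described above (illustrative names, not H2 internals): track when each session opened its transaction, and once the transaction log outgrows the limit, pick the session with the oldest open transaction as the one to roll back.

```java
import java.util.HashMap;
import java.util.Map;

public class OldestTransactionPolicy {
    private final long maxLogSizeBytes;
    // sessionId -> time (millis) at which its open transaction started
    private final Map<Integer, Long> openTxStart = new HashMap<>();

    public OldestTransactionPolicy(long maxLogSizeBytes) {
        this.maxLogSizeBytes = maxLogSizeBytes;
    }

    public void begin(int sessionId, long nowMillis) {
        openTxStart.putIfAbsent(sessionId, nowMillis);
    }

    public void end(int sessionId) { // on commit or rollback
        openTxStart.remove(sessionId);
    }

    /** Session to roll back when the log is too large, or -1 if none. */
    public int victim(long currentLogSizeBytes) {
        if (currentLogSizeBytes <= maxLogSizeBytes || openTxStart.isEmpty()) {
            return -1;
        }
        int oldest = -1;
        long oldestStart = Long.MAX_VALUE;
        for (Map.Entry<Integer, Long> e : openTxStart.entrySet()) {
            if (e.getValue() < oldestStart) {
                oldestStart = e.getValue();
                oldest = e.getKey();
            }
        }
        return oldest;
    }
}
```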
OK, so it turns out my original idea was a bad one.
Because of the way the locking works, it is not possible to display that
information in the sessions table.
However, using JMX works. The attached patch correctly displays ongoing
transactions using the H2 JMX facility.
See the included unit test for how it works.
Regards, Noel.
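For reference, a small sketch of looking those beans up from inside the same JVM. This assumes H2 registers its MBeans under the org.h2 domain when ;JMX=TRUE is appended to the database URL; with no H2 database open, the query simply comes back empty.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class ListH2MBeans {
    /** All MBeans in the org.h2 domain on the platform MBean server. */
    public static Set<ObjectName> h2Beans() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            return server.queryNames(new ObjectName("org.h2:*"), null);
        } catch (MalformedObjectNameException e) {
            throw new IllegalStateException(e);
        }
    }
}
```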
On 2012-02-27 12:18, IntensiveH2 wrote:
> Any news regarding my issue?
>
> The last status was:
> - the compact command works but closes all current connections
> - I tried "TRANSACTION_TIMEOUT" = 33554432 but I had issues.
> - I tried code with TRANSACTION_START, but WHERE TRANSACTION_START IS
> NOT NULL returns nothing.
>
> On 23 fév, 10:56, IntensiveH2<tlegr...@infovista.com> wrote:
>> Hi,
>>
>> I tested the compact command.
>> I have no issues regarding "corruption", but I need time to perform
>> additional validations.
>> But the "compact command" closed all open connections, and if you have a
>> "pool" of connections, that's a problem.
>> The error message is:
>> org.h2.jdbc.JdbcSQLException: Database is already closed (to disable
>> automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the
>> db URL) [90121-164]
>>
>> Thierry.
>>
>> On 15 fév, 13:13, Noel Grandin<noelgran...@gmail.com> wrote:
>>> Hi
>>> I had a quick bash at implementing this by taking an exclusive lock on
>>> the DB and re-using the existing compacting code.
>>> So I added a
>>> COMPACT
>>> command.
>>> But the unit test I added indicates that I'm corrupting the database
>>> somehow.
>>> Patch attached - Thomas, perhaps you have an idea?
>>> Regards, Noel.
>>> On 2012-02-15 11:51, IntensiveH2 wrote:
>>>> Thanks for the quick answer but currently it's not possible to reboot
>>>> my application on a regular basis because my application must ensure
>>>> scheduling, failover, HA ....
>>>> and it's not possible to stop it.
>>>> I really need to have a solution at runtime without restarting my
>>>> application.
>>>> Thanks.
I added TRACE_LEVEL_FILE=3 during the shutdown compact and attached the result.
Can you tell me if you see an explanation about an uncommitted TRANSACTION, or anything explaining the size difference before and after the compact?
Thanks
Thierry.
> I added TRACE_LEVEL_FILE=3 during the shutdown compact and attached the result.
This will not really help much. It shows that the database shrank; that's
it. It doesn't say why the database was big.
Could you send me (or upload somewhere) the large database, or run the
Recover tool on a large database, and check if there is a large
transaction log?
Regards,
Thomas
I have now downloaded the file and run the Recover tool. The database
file is 3 GB, and the cockpit.h2.sql file is 9 GB. The end (tail) of
the .h2.sql script is:
---- Statistics ----
-- page count: 1552048, free: 24350
-- page data bytes: head 486669, empty 15198569, rows 8567178 (38% full)
-- free 7%, 113513 page(s)
-- data leaf 0%, 11842 page(s)
-- data node 0%, 843 page(s)
-- data overflow 1%, 20810 page(s)
-- btree leaf 0%, 1170 page(s)
-- btree node 0%, 17 page(s)
-- free list 0%, 7 page(s)
-- stream trunk 0%, 2764 page(s)
-- stream data 90%, 1401079 page(s)
That means 90% of the file size is transaction log. This is what I
expected. The transaction that was kept open was:
---- Transaction log ----
-- session 257 table 115 - 728
-- session 257 table 115 + ( /* key:794 */ 442, CAST('Discover and
Provision Cisco_Others' AS VARCHAR_IGNORECASE), CAST('' AS
VARCHAR_IGNORECASE), 95, 0, NULL
, NULL, 0, 0, 34, 76, 1, 49)
So it was connection #257 that did this: delete from table MCTASK
(table 115, O_115) where key=728, then insert into MCTASK (the values
above). This transaction was never committed or rolled back, so the
complete transaction log has to be kept around.
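As a sanity check, the 90% figure follows directly from the page counts in the Recover statistics above:

```java
public class RecoverStats {
    /** Fraction of all pages that are transaction-log ("stream data") pages. */
    public static double streamDataFraction(long streamDataPages, long totalPages) {
        return (double) streamDataPages / totalPages;
    }

    public static void main(String[] args) {
        // Page counts from the Recover output: 1401079 of 1552048 pages.
        System.out.printf("%.0f%%%n",
                100 * streamDataFraction(1401079, 1552048));
    }
}
```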
The database itself is quite small. I see you store lots of XML data,
uncompressed, as VARCHAR. I suggest using CLOB instead and compressing
the data; this will save additional space and probably speed things
up.
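A minimal sketch of the compression step using only the JDK; how the compressed bytes are then stored (e.g. in a BLOB column) is up to the application, and H2 also ships COMPRESS/EXPAND SQL functions that could serve the same purpose:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class XmlCompress {
    /** Deflate-compresses the given bytes (XML compresses very well). */
    public static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        return out.toByteArray();
    }

    /** Reverses compress(). */
    public static byte[] decompress(byte[] data) {
        try {
            Inflater inflater = new Inflater();
            inflater.setInput(data);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            while (!inflater.finished()) {
                out.write(buf, 0, inflater.inflate(buf));
            }
            return out.toByteArray();
        } catch (DataFormatException e) {
            throw new IllegalArgumentException("corrupt deflate stream", e);
        }
    }
}
```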
You use quite a lot of memory tables. I'm not sure if this is on purpose.
Regards,
Thomas