Re: H2 database embedded mode: Takes more time to connect to database when the size of the db file is over 100 MB


Thomas Mueller

Aug 10, 2012, 2:42:34 PM
to h2-da...@googlegroups.com
Hi,

Startup is normally very fast. If it takes 5-6 seconds, it sounds like something is wrong. Could you get some profiling data, as described in http://h2database.com/html/performance.html#built_in_profiler ?

Regards,
Thomas


On Wed, Aug 8, 2012 at 11:35 AM, Harish Alwala <harish...@gmail.com> wrote:
Hi All
I have a problem establishing a connection to an H2 database when the size of the db file is over 100 MB.
I have written a Swing application and I am using the H2 database in embedded mode. The application works fine (connection establishment & query processing) when the size of the db file is around 10-50 MB. When the file size increases to 100 MB, the application slows down.
Observation:
1. connection = java.sql.DriverManager.getConnection(getConnectionUrl(), username, password);
The above statement takes 5-6 seconds to get the connection.
2. CPU usage shoots up to 60% while fetching the connection.
3. While working with the database, my application always shows around 500 MB in jconsole.

What I have done:
I have replaced the above statement with
connectionPool = JdbcConnectionPool.create(getConnectionUrl(), username, password);
connectionPool.setMaxConnections(200);
Still I don't see any difference.

I have tried the same after replacing the H2 database with a SQL Server database.
Very interestingly, the application takes hardly any memory (max 100 MB) and the CPU shows 30% during continuous db operations.

Regarding memory, I feel that since I am embedding the H2 database within my application, the memory and CPU used by the H2 db are also shown as my application's.

Please let me know if there is any workaround.




--
You received this message because you are subscribed to the Google Groups "H2 Database" group.
To view this discussion on the web visit https://groups.google.com/d/msg/h2-database/-/dsjzxuG6gvoJ.
To post to this group, send email to h2-da...@googlegroups.com.
To unsubscribe from this group, send email to h2-database...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/h2-database?hl=en.

Thomas Mueller

Aug 16, 2012, 3:08:38 AM
to h2-da...@googlegroups.com
Hi,

So the main problem is reading from the disk (66%). This isn't only while opening the database; it includes operations after the database is already open, especially (a) below. The main problems are:

(a) 23% + 9% = 32% or more: select all rows from a table (I don't know which table), after the database is already open, in: org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectAllRowsFromTable(ExpressionQueryMechanism.java:2611)

(b) about 13%: initialization of the fulltext search while opening the database, especially loading the word list into memory: SELECT * FROM FT.WORDS

I don't know how to avoid (a) because I don't know which table is loaded or why. To avoid (b), using the Lucene fulltext search may help (see the docs).
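Switching to the Lucene fulltext search is mostly a matter of running the FTL_* setup statements (as described in the H2 docs) instead of the native FT_* ones. A minimal sketch, assuming a hypothetical DOCUMENTS table and placeholder database path and credentials; the h2 and Lucene jars must be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LuceneFulltextSetup {
    // SQL that registers and initializes the Lucene-based fulltext search.
    static final String[] SETUP_SQL = {
        "CREATE ALIAS IF NOT EXISTS FTL_INIT FOR \"org.h2.fulltext.FullTextLucene.init\"",
        "CALL FTL_INIT()",
        // Index a hypothetical DOCUMENTS table; NULL means all columns.
        "CALL FTL_CREATE_INDEX('PUBLIC', 'DOCUMENTS', NULL)"
    };

    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:file:/data/sample", "sa", "");
             Statement stmt = conn.createStatement()) {
            for (String sql : SETUP_SQL) {
                stmt.execute(sql);
            }
        }
    }
}
```

Passing NULL as the column list indexes all columns; a comma-separated list restricts the index to specific columns.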

Regards,
Thomas



On Wednesday, August 15, 2012, Denross Salenga wrote:
Hello, I tried the H2 profiler in my code; here is what I got:


Profiler: top 3 stack trace(s) of 11023 ms [build-168]:
1039/4515 (23%):
at java.io.RandomAccessFile.readBytes(Native Method)
at java.io.RandomAccessFile.read(RandomAccessFile.java:355)
at org.h2.store.fs.FileDisk.read(FilePathDisk.java:424)
at org.h2.store.fs.FileUtils.readFully(FileUtils.java:357)
at org.h2.store.FileStore.readFully(FileStore.java:276)
at org.h2.store.PageStore.readPage(PageStore.java:1285)
at org.h2.store.PageStore.getPage(PageStore.java:735)
at org.h2.index.PageDataIndex.getPage(PageDataIndex.java:226)
at org.h2.index.PageDataNode.getNextPage(PageDataNode.java:226)
at org.h2.index.PageDataLeaf.getNextPage(PageDataLeaf.java:392)
at org.h2.index.PageDataCursor.nextRow(PageDataCursor.java:90)
at org.h2.index.PageDataCursor.next(PageDataCursor.java:49)
at org.h2.index.IndexCursor.next(IndexCursor.java:238)
at org.h2.table.TableFilter.next(TableFilter.java:353)
at org.h2.command.dml.Select.queryGroup(Select.java:311)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:613)
at org.h2.command.dml.Query.query(Query.java:307)
at org.h2.command.dml.Query.query(Query.java:277)
at org.h2.command.dml.Query.query(Query.java:36)
at org.h2.command.CommandContainer.query(CommandContainer.java:86)
at org.h2.command.Command.executeQuery(Command.java:191)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:109)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeSelect(DatabaseAccessor.java:931)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:607)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:535)
at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:1717)
at org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:566)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:207)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:193)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeSelectCall(DatasourceCallQueryMechanism.java:264)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.selectAllRows(DatasourceCallQueryMechanism.java:646)
at org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectAllRowsFromTable(ExpressionQueryMechanism.java:2611)
625/4515 (13%):
at java.io.RandomAccessFile.readBytes(Native Method)
at java.io.RandomAccessFile.read(RandomAccessFile.java:355)
at org.h2.store.fs.FileDisk.read(FilePathDisk.java:424)
at org.h2.store.fs.FileUtils.readFully(FileUtils.java:357)
at org.h2.store.FileStore.readFully(FileStore.java:276)
at org.h2.result.ResultDiskBuffer.readRow(ResultDiskBuffer.java:194)
at org.h2.result.ResultDiskBuffer.nextUnsorted(ResultDiskBuffer.java:216)
at org.h2.result.ResultDiskBuffer.next(ResultDiskBuffer.java:209)
at org.h2.result.LocalResult.next(LocalResult.java:229)
at org.h2.jdbc.JdbcResultSet.nextRow(JdbcResultSet.java:3002)
at org.h2.jdbc.JdbcResultSet.next(JdbcResultSet.java:124)
at org.h2.fulltext.FullText.init(FullText.java:142)
at org.h2.fulltext.FullText$FullTextTrigger.init(FullText.java:845)
at org.h2.schema.TriggerObject.load(TriggerObject.java:71)
at org.h2.schema.TriggerObject.setTriggerClassName(TriggerObject.java:91)
at org.h2.command.ddl.CreateTrigger.update(CreateTrigger.java:101)
at org.h2.engine.MetaRecord.execute(MetaRecord.java:58)
at org.h2.engine.Database.open(Database.java:632)
at org.h2.engine.Database.openDatabase(Database.java:222)
at org.h2.engine.Database.<init>(Database.java:217)
at org.h2.engine.Engine.openSession(Engine.java:56)
at org.h2.engine.Engine.openSession(Engine.java:159)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:138)
at org.h2.engine.Engine.createSession(Engine.java:121)
at org.h2.engine.Engine.createSession(Engine.java:28)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:305)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:108)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:92)
at org.h2.Driver.connect(Driver.java:72)
at java.sql.DriverManager.getConnection(DriverManager.java:579)
at java.sql.DriverManager.getConnection(DriverManager.java:190)
at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:98)
429/4515 (9%):
at java.io.RandomAccessFile.readBytes(Native Method)
at java.io.RandomAccessFile.read(RandomAccessFile.java:355)
at org.h2.store.fs.FileDisk.read(FilePathDisk.java:424)
at org.h2.store.fs.FileUtils.readFully(FileUtils.java:357)
at org.h2.store.FileStore.readFully(FileStore.java:276)
at org.h2.store.PageStore.readPage(PageStore.java:1285)
at org.h2.store.PageStore.getPage(PageStore.java:735)
at org.h2.index.PageDataIndex.getPageOverflow(PageDataIndex.java:211)
at org.h2.index.PageDataLeaf.getRowAt(PageDataLeaf.java:345)
at org.h2.index.PageDataCursor.nextRow(PageDataCursor.java:97)
at org.h2.index.PageDataCursor.next(PageDataCursor.java:49)
at org.h2.index.IndexCursor.next(IndexCursor.java:238)
at org.h2.table.TableFilter.next(TableFilter.java:353)
at org.h2.command.dml.Select.queryGroup(Select.java:311)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:613)
at org.h2.command.dml.Query.query(Query.java:307)
at org.h2.command.dml.Query.query(Query.java:277)
at org.h2.command.dml.Query.query(Query.java:36)
at org.h2.command.CommandContainer.query(CommandContainer.java:86)
at org.h2.command.Command.executeQuery(Command.java:191)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:109)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeSelect(DatabaseAccessor.java:931)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:607)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:535)
at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:1717)
at org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:566)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:207)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:193)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeSelectCall(DatasourceCallQueryMechanism.java:264)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.selectAllRows(DatasourceCallQueryMechanism.java:646)
at org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectAllRowsFromTable(ExpressionQueryMechanism.java:2611)
at org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectAllReportQueryRows(ExpressionQueryMechanism.java:2554)
packages:
66%: org.h2.store.fs
6%: org.h2.store
4%: org.eclipse.persistence.internal.security


BTW, my application has an H2 db file of 5 GB, and it takes 10-12 seconds to connect to the db.

Regards,
Ross

Thomas Mueller

Aug 16, 2012, 3:08:39 AM
to h2-da...@googlegroups.com
Hi,

Maybe the database file is fragmented. What might also help is running "SHUTDOWN DEFRAG" (see the docs for details).
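For reference, the defrag is a single SQL statement issued just before the application exits; a minimal sketch, with a placeholder URL and credentials:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DefragOnExit {
    // Compacts and defragments the database file while closing it.
    static final String SHUTDOWN_SQL = "SHUTDOWN DEFRAG";

    public static void main(String[] args) throws Exception {
        // Placeholder URL; in the application this would come from getConnectionUrl().
        try (Connection conn = DriverManager.getConnection("jdbc:h2:file:/data/sample", "sa", "");
             Statement stmt = conn.createStatement()) {
            // Closes the database after defragmenting; subsequent connections
            // reopen the smaller, less fragmented file.
            stmt.execute(SHUTDOWN_SQL);
        }
    }
}
```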

Regards,
Thomas

Denross Salenga

Aug 17, 2012, 3:25:39 AM
to h2-da...@googlegroups.com
Thank you for the reply. After running "SHUTDOWN DEFRAG", here is my result:

Profiler: top 3 stack trace(s) of 10188 ms [build-168]:
988/4105 (24%):
630/4105 (15%):
380/4105 (9%):
at java.io.RandomAccessFile.readBytes(Native Method)
at java.io.RandomAccessFile.read(RandomAccessFile.java:355)
at org.h2.store.fs.FileDisk.read(FilePathDisk.java:424)
at org.h2.store.fs.FileUtils.readFully(FileUtils.java:357)
at org.h2.store.FileStore.readFully(FileStore.java:276)
at org.h2.result.ResultDiskBuffer.readRow(ResultDiskBuffer.java:198)
packages:
64%: org.h2.store.fs
6%: org.h2.store
5%: org.eclipse.persistence.internal.security
The H2 file size decreased from 5 GB to 2 GB. I will try the Lucene fulltext search and then post my results. BTW, is it okay to run SHUTDOWN COMPACT and then SHUTDOWN DEFRAG?

Harish Alwala

Aug 17, 2012, 9:37:54 AM
to h2-da...@googlegroups.com
Hi,
As suggested, I read through the cache settings; these are the changes I made to the connection URL:
url = "jdbc:h2:file:/" + getDBPath() + dbname + ";DEFAULT_TABLE_TYPE=CACHED;CACHE_SIZE=" + (2 * 1024 * 1024) + ";AUTOCOMMIT=TRUE;CACHE_TYPE=SOFT_LRU";

Impact:
I have loaded the db to 161 MB.
When I first get the connection, the memory in jconsole shows 50 MB, which is fine. When I perform multiple operations on the database, the memory shoots up to 300 MB and never comes down. If I continue doing many such operations, it even grows to 500 MB.

Any other suggestions, please?
I am using connection pooling as well.

Thomas Mueller

Aug 17, 2012, 4:52:01 PM
to h2-da...@googlegroups.com
Hi,

> is it okay to run SHUTDOWN COMPACT then SHUTDOWN DEFRAG?

Sure, you can do that.

Please note the main problem is still "selectAllRowsFromTable". To find out which component is reading all rows from a (probably large) table, you would have to change the profiler settings to show more stack trace elements, for example:

import org.h2.util.Profiler;
// ...
Profiler p = new Profiler();
p.depth = 48; // include up to 48 stack trace elements per trace
p.startCollecting();
// ... run the slow operations here ...
System.out.println(p.getTop(20)); // print the top 20 stack traces

> url = "jdbc:h2:file:/" + getDBPath() + dbname + ";DEFAULT_TABLE_TYPE=CACHED;CACHE_SIZE=" + 
> (2 * 1024 * 1024) + ";AUTOCOMMIT=TRUE;CACHE_TYPE=SOFT_LRU";

Many of those settings have no effect (DEFAULT_TABLE_TYPE, AUTOCOMMIT), and others are very unlikely to solve the given problem (CACHE_SIZE, CACHE_TYPE). I wouldn't use any of those options. I would simply use:

url = "jdbc:h2:file:/" + getDBPath() + dbname;

> when I perform multiple operations to the database the memory shoots up to 300 mb and never comes down

If you have a <databaseName>.trace.db file, what does it contain? You can use the Recover tool to find out what is causing the database to grow (see the docs for details). It might be uncommitted transactions or data you inserted.

Regards,
Thomas



Denross Salenga

Aug 21, 2012, 8:45:03 PM
to h2-da...@googlegroups.com
Here are the updated results.


Profiler: top 3 stack trace(s) of 6232 ms [build-168]:
646/2403 (26%):
346/2403 (14%):
48/2403 (1%):
at java.util.zip.Inflater.inflateBytes(Native Method)
at java.util.zip.Inflater.inflate(Inflater.java:256)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at javax.crypto.JarVerifier.verifySingleJar(JarVerifier.java:429)
at javax.crypto.JarVerifier.verifyJars(JarVerifier.java:322)
at javax.crypto.JarVerifier.verify(JarVerifier.java:250)
at javax.crypto.JceSecurity.verifyExemptJar(JceSecurity.java:148)
at javax.crypto.JceSecurityManager.getAppPermissions(JceSecurityManager.java:206)
at javax.crypto.JceSecurityManager.getCryptoPermission(JceSecurityManager.java:124)
at javax.crypto.Cipher.getConfiguredPermission(Cipher.java:2503)
at javax.crypto.Cipher.initCryptoPermission(Cipher.java:685)
at javax.crypto.Cipher.chooseProvider(Cipher.java:848)
at javax.crypto.Cipher.init(Cipher.java:1213)
at javax.crypto.Cipher.init(Cipher.java:1153)
at org.eclipse.persistence.internal.security.JCEEncryptor.<init>(JCEEncryptor.java:60)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at java.lang.Class.newInstance0(Class.java:372)
at java.lang.Class.newInstance(Class.java:325)
at org.eclipse.persistence.internal.security.PrivilegedAccessHelper.newInstanceFromClass(PrivilegedAccessHelper.java:410)
at org.eclipse.persistence.internal.security.SecurableObjectHolder.initSecurableObject(SecurableObjectHolder.java:93)
at org.eclipse.persistence.internal.security.SecurableObjectHolder.getSecurableObject(SecurableObjectHolder.java:55)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.updateLogins(EntityManagerSetupImpl.java:1589)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.updateSession(EntityManagerSetupImpl.java:1892)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:471)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getDatabaseSession(EntityManagerFactoryDelegate.java:188)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.createEntityManagerImpl(EntityManagerFactoryDelegate.java:277)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:294)
packages:
51%: org.h2.store.fs
9%: org.eclipse.persistence.internal.security
5%: org.h2.store

What is the average time to connect to the H2 file? I'm getting about 6 seconds. BTW, the "selectAllRowsFromTable" is in my code. Sorry for that :D

Thomas Mueller

Aug 31, 2012, 11:39:31 AM
to h2-da...@googlegroups.com
Hi,

> What is the average time to connect to the H2 file? I'm getting about 6 seconds

It depends, but usually the average time should be less than 0.5 seconds for medium-sized databases.

In your case, the problem is that the (native) fulltext search loads all words into memory at startup, which takes a long time. I guess Apache Lucene wouldn't do that, and is therefore faster for your use case.

Regards,
Thomas



