Re: The connection was not closed by the application and is garbage collected


happy fish

Jul 19, 2012, 9:49:55 PM
to h2-da...@googlegroups.com
You'd better just close the PooledConnection rather than the Connection.
At the end of the program, try to close the data source.
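
Something like this, just a rough sketch (the URL, user, and class name here are only examples):

import java.sql.Connection;
import javax.sql.PooledConnection;
import org.h2.jdbcx.JdbcDataSource;

public class CloseBothExample {
    public static void main(String[] args) throws Exception {
        JdbcDataSource ds = new JdbcDataSource();
        ds.setURL("jdbc:h2:~/test");   // example URL, adjust to yours
        ds.setUser("sa");
        ds.setPassword("");

        PooledConnection pooled = ds.getPooledConnection();
        Connection conn = pooled.getConnection();
        try {
            conn.createStatement().execute("SELECT 1");  // ... use the connection ...
        } finally {
            conn.close();    // hands the logical connection back
            pooled.close();  // closes the underlying physical connection
        }
    }
}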

On Fri, Jul 20, 2012 at 1:37 AM, rcoe <rc...@alumni.uwaterloo.ca> wrote:
I'm getting a whack of stack traces all at once indicating that my connections are not getting closed:

org.h2.message.DbException: The connection was not closed by the application and is garbage collected [90018-167]
at org.h2.message.DbException.get(DbException.java:169)
at org.h2.message.DbException.get(DbException.java:146)
at org.h2.message.DbException.get(DbException.java:135)
at org.h2.jdbc.JdbcConnection.closeOld(JdbcConnection.java:174)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:121)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:94)
at org.h2.Driver.connect(Driver.java:72)
at org.h2.jdbcx.JdbcDataSource.getJdbcConnection(JdbcDataSource.java:181)
at org.h2.jdbcx.JdbcDataSource.getXAConnection(JdbcDataSource.java:315)
at org.h2.jdbcx.JdbcDataSource.getPooledConnection(JdbcDataSource.java:341)

However, I am closing my connections in finally blocks wherever I am using them:

} finally {
    DbUtils.closeQuietly( conn, pstmt, rs );
}

My conn object is retrieved from a datasource and I'm using a pooled connection:

conn = ( (JdbcDataSource)ds ).getPooledConnection().getConnection();
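
Put together, each record is handled roughly like this (simplified from the actual code; insertRecord is just a stand-in name, and ds is the JNDI-registered data source):

void insertRecord(DataSource ds, String path, String filename,
                  Timestamp lastTouched, Timestamp createdTime) throws SQLException {
    Connection conn = null;
    PreparedStatement pstmt = null;
    ResultSet rs = null;
    try {
        conn = ((JdbcDataSource) ds).getPooledConnection().getConnection();
        // Statement.RETURN_GENERATED_KEYS == 1, which matches the trace below.
        pstmt = conn.prepareStatement(
            "INSERT INTO TABLE_X ( path, filename, lastTouched, createdTime ) VALUES( ?, ?, ?, ? )",
            Statement.RETURN_GENERATED_KEYS);
        pstmt.setString(1, path);
        pstmt.setString(2, filename);
        pstmt.setTimestamp(3, lastTouched);
        pstmt.setTimestamp(4, createdTime);
        pstmt.executeUpdate();
        rs = pstmt.getGeneratedKeys();
        if (rs.next()) {
            rs.getLong(1);
        }
    } finally {
        DbUtils.closeQuietly(conn, pstmt, rs);  // Apache Commons DbUtils
    }
}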

The code I'm running is scanning through a file and inserting records as it goes.  As each record is inserted, the connection should be getting closed.  I have verified the calls are made:

07-19 12:55:47 jdbc[55]: 
/**/conn108.prepareStatement("INSERT INTO TABLE_X ( path, filename, lastTouched, createdTime ) VALUES( ?, ?, ?, ? )", 1);
07-19 12:55:47 jdbc[55]: 
/**/PreparedStatement prep54 = conn108.prepareStatement("INSERT INTO TABLE_X ( path, filename, lastTouched, createdTime ) VALUES( ?, ?, ?, ? )");
07-19 12:55:47 jdbc[55]: 
/**/prep54.setString(1, "/xxx");
07-19 12:55:47 jdbc[55]: 
/**/prep54.setString(2, "xxxx.pdf");
07-19 12:55:47 jdbc[55]: 
/**/prep54.setTimestamp(3, Timestamp.valueOf("2005-09-16 00:00:00.0"));
07-19 12:55:47 jdbc[55]: 
/**/prep54.setTimestamp(4, Timestamp.valueOf("2012-07-19 12:55:47.848"));
07-19 12:55:47 jdbc[55]: 
/**/prep54.executeUpdate();
07-19 12:55:47 lock: 55 exclusive write lock requesting for TABLE_X
07-19 12:55:47 lock: 55 exclusive write lock added for TABLE_X
07-19 12:55:47 index:  TABLE_X  add ( /* key:27 */ 27, '/xxx', 'xxxx.pdf', TIMESTAMP '2005-09-16 00:00:00.0', TIMESTAMP '2012-07-19 12:55:47.848', NULL)
07-19 12:55:47 pageStore: log + s: 55 table: 15 row: ( /* key:27 */ 27, '/xxx', 'xxxx.pdf', TIMESTAMP '2005-09-16 00:00:00.0', TIMESTAMP '2012-07-19 12:55:47.848', NULL)
07-19 12:55:47 index: IDX_PATH add ( /* key:27 */ 27, '/xxx', 'xxxx.pdf', TIMESTAMP '2005-09-16 00:00:00.0', TIMESTAMP '2012-07-19 12:55:47.848', NULL)
07-19 12:55:47 index: IDX_FILENAME add ( /* key:27 */ 27, '/xxx', 'xxxx.pdf', TIMESTAMP '2005-09-16 00:00:00.0', TIMESTAMP '2012-07-19 12:55:47.848', NULL)
07-19 12:55:47 jdbc[55]: 
/*SQL l:101 #:1 t:1*/INSERT INTO  TABLE_X ( path, filename, lastTouched, createdTime ) VALUES( ?, ?, ?, ? ) {1: '/xxx', 2: 'xxxx.pdf', 3: TIMESTAMP '2005-09-16 00:00:00.0', 4: TIMESTAMP '2012-07-19 12:55:47.848'};
07-19 12:55:47 pageStore: log commit s: 55
07-19 12:55:47 lock: 55 exclusive write lock unlock TABLE_X
07-19 12:55:47 jdbc[55]: 
/**/ResultSet rs30 = prep54.getGeneratedKeys();
07-19 12:55:47 jdbc[55]: 
/*SQL #:1*/SELECT SCOPE_IDENTITY() WHERE SCOPE_IDENTITY() IS NOT NULL;
07-19 12:55:47 jdbc[55]: 
/**/rs30.next();
07-19 12:55:47 jdbc[55]: 
/**/rs30.getLong(1);
07-19 12:55:48 pageStore: pageOut.storePage [12] stream data key:2 pos:175 remaining:1873
07-19 13:31:49 jdbc[55]: 
/**/rs30.close();
07-19 13:31:49 jdbc[55]: 
/**/prep54.close();
07-19 13:31:49 jdbc[55]: 
/**/conn108.rollback();
07-19 13:31:49 jdbc[55]: 
/*SQL */ROLLBACK;
07-19 13:31:49 jdbc[55]: 
/**/conn108.setAutoCommit(true);

So, it seems close() is being called on the connections, yet they are not actually getting closed.  Has anyone seen this when running in embedded mode with a connection pool?

Thanks.


rcoe

Jul 20, 2012, 8:30:59 AM
to h2-da...@googlegroups.com
Actually, the JDBC driver is supposed to notify the underlying pool manager to recycle the connection.  Closing the pool is only necessary for a clean shutdown of the database.

From the Javadoc for PooledConnection,
"...When an application closes a connection, it calls the Connection method close. When connection pooling is being done, the connection pool manager is notified because it has registered itself as a ConnectionEventListener object using the ConnectionPool method addConnectionEventListener. The connection pool manager deactivates the handle to the PooledConnection object and returns the PooledConnection object to the pool of connections so that it can be used again. Thus, when an application closes its connection, the underlying physical connection is recycled rather than being closed..."


Thomas Mueller

Jul 24, 2012, 1:57:01 PM
to h2-da...@googlegroups.com
Hi,

When exactly do you get the messages "The connection was not closed by
the application"?

> Closing the pool is only necessary for a clean shutdown of the database.

Yes. So, do you close (dispose) the connection pool?

> Actually, the jdbc driver is supposed to handle notifying the underlying pool manager to recycle the connection.

Yes, it does that. A recycled connection is still referenced, so no error
message should be logged. H2 only logs this error message if a connection
is actually garbage collected.

Regards,
Thomas

rcoe

Jul 24, 2012, 2:12:25 PM
to h2-da...@googlegroups.com
Hi Thomas,

It seems the watchdog dictates when the exceptions get thrown.  In my application, I'm reading in a large file, and for every line in the file I transform it and insert records into an embedded database.  Part way through this rather long file, the watchdog closes the connections en masse, so I get a large number of simultaneous stack traces, while the application keeps parsing and inserting records.  As far as that goes, there is no corruption of the data stream, which is good.  My concern is whether I'm dealing with the data source correctly or whether I need to do more up-front work in setting up the connection pool.

In my app, I create an H2DataSourceFactory that sets up the JNDI initial context (I'm running this standalone, not in a container).  I use a separate DbConnectionFactory to deal out connections from the registered data sources.  So, my DbConnectionFactory first initializes the H2DataSourceFactory and then dispatches connections.  Since the H2 JDBC library doesn't specifically have a pooled DataSource, I'm assuming that calling getPooledConnection().getConnection() on the JdbcDataSource reference will do that work for me.  Perhaps this is my mistake?  I haven't checked the source yet to see whether that's the case.  Btw, I'm not specifically creating an H2 database connection pool, because I want to remain agnostic to the data sources registered in my DbConnectionFactory.  Basically, I'm hand-bombing a DI layer.
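
For what it's worth, the dispatch code boils down to something like this (a simplified sketch; the real classes also handle the JNDI lookups):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.h2.jdbcx.JdbcDataSource;

public class DbConnectionFactory {

    private final DataSource ds;  // registered by my H2DataSourceFactory

    public DbConnectionFactory(DataSource ds) {
        this.ds = ds;
    }

    public Connection getConnection() throws SQLException {
        // As far as I can tell, each call builds a fresh PooledConnection
        // and nothing keeps a reference to it afterwards.
        return ((JdbcDataSource) ds).getPooledConnection().getConnection();
    }
}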

Thanks,
Robin.

Thomas Mueller

Oct 30, 2012, 3:46:19 AM
to h2-da...@googlegroups.com
Hi,

I'm afraid I will not be able to help, because I myself don't fully understand how data sources, pooled connections, and so on are supposed to be used... The API is rather complicated (for me), and it is possible that there is a bug somewhere. But I wouldn't know how to fix it, I'm sorry.

Regards,
Thomas

LL

Mar 31, 2015, 4:01:58 AM
to h2-da...@googlegroups.com
Hi, I'm having the same problem.
I'm using h2-1.4.186 currently.

I'm inserting quite a lot of data too, but I don't use a connection pool.
For every entry, I open a connection and a statement and close them immediately after the insert.
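
Roughly what I do per entry (a simplified sketch; the URL, table, and column names are placeholders, not my real schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InsertEntryExample {
    static void insertEntry(long id, String payload) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/mydb", "sa", "");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO ENTRIES ( id, payload ) VALUES ( ?, ? )")) {
            ps.setLong(1, id);
            ps.setString(2, payload);
            ps.executeUpdate();
        }
    }
}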

The data is inserted correctly, but the trace.db size increases every time I do the operation, by around 2 MB per batch.

Is this supposed to be the correct way of using the H2 database?

Thanks.

Thomas Mueller

Apr 10, 2015, 1:42:00 AM
to h2-da...@googlegroups.com
Hi,

> trace.db size increased

I would just look at the contents of that file (it's a text file).
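
As an aside, and only if it turns out to be ordinary trace output that grows: the file trace level can be lowered or switched off in the database URL, for example:

// Sketch only: TRACE_LEVEL_FILE 0 = off, 1 = error, 2 = info, 3 = debug.
Connection conn = DriverManager.getConnection(
        "jdbc:h2:~/mydb;TRACE_LEVEL_FILE=0", "sa", "");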

Regards,
Thomas

