"org.h2.jdbc.JdbcSQLException: Row not found when trying to delete from index" with Version 1.3.162


Sanjeev Gour

May 13, 2013, 8:47:21 AM
to h2-da...@googlegroups.com
I am getting the following error when running a cleanup routine on some of the tables. I am using version 1.3.162. The JDBC URL is configured with the following options:

CACHE_TYPE=LRU;PAGE_SIZE=16384;MVCC=TRUE;DB_CLOSE_DELAY=-1
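
For reference, a full connection URL with these settings might look like the sketch below; the database path and credentials here are placeholders, not the actual ones from our setup:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OpenTimeseriesDb {
        public static void main(String[] args) throws Exception {
            // Placeholder path and credentials; the settings mirror the options listed above
            // (LRU cache, 16 KB page size, MVCC enabled, no close delay).
            String url = "jdbc:h2:/data/timeseries"
                    + ";CACHE_TYPE=LRU"
                    + ";PAGE_SIZE=16384"
                    + ";MVCC=TRUE"
                    + ";DB_CLOSE_DELAY=-1";
            try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
                System.out.println("Connected to " + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }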

I also tried with ";optimize_update=false", to no avail. This typically happens when my application runs for long hours, so I don't really have an easy test case to reproduce the problem. Is there anything else to try on this one?

org.h2.jdbc.JdbcSQLException: Row not found when trying to delete from index "TIMESERIES.IDX_METRIC_DATA_START_TIME: ( /* key:18158 */ 43848, TIMESTAMP '2013-03-27 18:30:00.0', TIMESTAMP '2013-03-27 23:10:00.0', X'aced000573720020636f6d2e63612e63686f7275732e74696d657365726965732e54534172726179018c63d06105fca80200054a0007656e6454696d654900066c656e6774684a0009737461727454696d654c000c636c6f636b4d656d656e746f74002a4c636f6d2f63612f63686f7275732f74696d657365726965732f6e756d657269632f4d656d656e746f3b4c000b646174614d656d656e746f71007e000178700000013dae1d7940000000390000013dad1d20407372003a636f6d2e63612e63686f7275732e74696d657365726965732e6e756d657269632e4e756d62657244656c74614f7574707574244d656d656e746f7390f51c1bf5d3fc0200014c00076d656d656e746f71007e0001787073720038636f6d2e63612e63686f7275732e74696d657365726965732e6e756d657269632e4e756d626572524c454f7574707574244d656d656e746ff49d0987111300c102000549000a63757272656e74496e744a000b63757272656e744c6f6e674900066c656e67746849000473697a654c00076d656d656e746f71007e000178700000000000000000000493e000000038000000437372003c636f6d2e63612e63686f7275732e74696d657365726965732e6e756d657269632e4e756d6265725061636b696e674f7574707574244d656d656e746f40a3b5b30dd1f4770c00007870770400000003737200106a6176612e7574696c2e4269745365746efd887e3934ab210200015b0004626974737400025b4a7870757200025b4a782004b512b1759302000078700000000413c39ed68e90205300000000000124f800000000000000000000000000000000787371007e00077704000000007371007e00097571007e000c000000080202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202000000000000000278')"; SQL statement:

DELETE FROM timeseries.metric_data WHERE end_time < ? LIMIT 1000 [90112-168]

                at org.h2.message.DbException.getJdbcSQLException(DbException.java:329)

                at org.h2.message.DbException.get(DbException.java:169)

                at org.h2.message.DbException.get(DbException.java:146)

                at org.h2.index.PageBtreeLeaf.remove(PageBtreeLeaf.java:225)

                at org.h2.index.PageBtreeNode.remove(PageBtreeNode.java:324)

                at org.h2.index.PageBtreeIndex.remove(PageBtreeIndex.java:241)

                at org.h2.index.MultiVersionIndex.remove(MultiVersionIndex.java:170)

                at org.h2.table.RegularTable.removeRow(RegularTable.java:361)

                at org.h2.command.dml.Delete.update(Delete.java:93)

                at org.h2.command.CommandContainer.update(CommandContainer.java:75)

                at org.h2.command.Command.executeUpdate(Command.java:230)

                at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:156)

                at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:142)

                at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.executeUpdate(WrappedPreparedStatement.java:493)

                at com.ca.chorus.db.DbExecutor$9.call(DbExecutor.java:829)

                ... 21 more
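
For completeness, the failing statement is executed through a JDBC PreparedStatement along these lines. This is only a sketch; the method name and the cutoff calculation are illustrative, not our actual application code:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    public final class MetricDataCleanup {
        // Deletes up to 1000 of the oldest rows, using the same statement
        // that appears in the stack trace above. Returns the number of rows removed.
        public static int deleteExpired(Connection conn, Timestamp cutoff) throws Exception {
            String sql = "DELETE FROM timeseries.metric_data WHERE end_time < ? LIMIT 1000";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, cutoff);   // e.g. "now" minus the retention period
                return ps.executeUpdate();    // the call that raises the JdbcSQLException
            }
        }
    }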

 


Noel Grandin

May 13, 2013, 9:07:16 AM
to h2-da...@googlegroups.com, Sanjeev Gour

You're also running quite an old version of H2; updating to the latest version would probably help.

I would also suggest running without MVCC; it's still a little rough around the edges.


Sanjeev Gour

May 14, 2013, 1:21:06 AM
to h2-da...@googlegroups.com, Sanjeev Gour
Sorry, I mentioned the incorrect version number; I am actually using 1.3.168. Earlier we also tried without MVCC, but that didn't help.

Just for your information, the index that is causing the problem is on a timestamp column, in case that gives you some clue about the issue.

We are very close to going into production, and upgrading H2 does not look like a preferred option. Also, I read through the change log from 1.3.168 to 1.3.171 and did not see any mention of this defect being fixed, so I am not quite sure whether upgrading would help. I will try to provide more information on this as and when I hit it again.

Sanjeev Gour

May 30, 2013, 10:30:31 AM
to h2-da...@googlegroups.com, Sanjeev Gour
We have tried this with the latest build, but that did not solve the problem for us. Any other advice on this one?

Thomas Mueller

May 30, 2013, 3:21:41 PM
to h2-da...@googlegroups.com
Hi,

Did you re-create the problematic index, or (even better) re-create the database? To re-create the database, first create a SQL script using the SCRIPT TO command, then create a new database and run RUNSCRIPT.
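
A minimal sketch of that rebuild via JDBC, assuming hypothetical file and database paths (SCRIPT TO and RUNSCRIPT are standard H2 SQL statements; the paths and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RebuildDatabase {
        public static void main(String[] args) throws Exception {
            // 1. Dump the existing database to a SQL script.
            try (Connection oldDb = DriverManager.getConnection("jdbc:h2:/data/timeseries", "sa", "");
                 Statement stmt = oldDb.createStatement()) {
                stmt.execute("SCRIPT TO '/backup/timeseries.sql'");
            }
            // 2. Replay the script into a brand-new database file.
            try (Connection newDb = DriverManager.getConnection("jdbc:h2:/data/timeseries_new", "sa", "");
                 Statement stmt = newDb.createStatement()) {
                stmt.execute("RUNSCRIPT FROM '/backup/timeseries.sql'");
            }
        }
    }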

Regards,
Thomas

Sanjeev Gour

Jun 5, 2013, 4:02:51 AM
to h2-da...@googlegroups.com
Hi Thomas-

We tried recreating the index when this problem occurs; however, recreating the index alone did not help, as the next attempt to delete rows from the table resulted in the same error. This indicates that the corruption was not in the physical index; instead, it is a bug somewhere in the logic that updates the index. Our case is this:

A scheduled service runs every 15 minutes to delete a maximum of one thousand rows from the table, starting from the oldest records based on the timestamp column. Sometimes this delete succeeds just fine (typically when the table holds fewer records, though that is not always the case, as we have occasionally seen the failure with few records as well). At other times the delete fails, removes no rows, and complains that it cannot find a particular key in the index. All of this happens under JBoss; restarting JBoss recovers from the problem, but it recurs some time later when the rows are deleted again.

As a short-term solution, we are dropping the index, deleting rows from the table, and then recreating the index. That way, we are more or less working around the problem. Obviously, the delete from the table after dropping the index is not as fast as it would be with the index, and recreating the index also consumes some time. The other downside of this workaround is that we have to acquire application-level read locks while reading from the table and write locks while writing to it and while dropping the index. Without these locks, the drop and recreate fails, saying it could not lock the table before the timeout.
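
Roughly, the sequence looks like the sketch below. The DDL is simplified: the CREATE INDEX column list is an assumption rather than our real index definition, and the application-level locking described above is omitted here:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    import java.sql.Timestamp;

    public final class CleanupWorkaround {
        // Drop the index, purge old rows, then rebuild the index.
        // The CREATE INDEX column list is simplified; the real definition may differ.
        public static void purge(Connection conn, Timestamp cutoff) throws Exception {
            try (Statement ddl = conn.createStatement()) {
                ddl.execute("DROP INDEX IF EXISTS timeseries.idx_metric_data_start_time");
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "DELETE FROM timeseries.metric_data WHERE end_time < ?")) {
                ps.setTimestamp(1, cutoff);
                ps.executeUpdate();
            }
            try (Statement ddl = conn.createStatement()) {
                ddl.execute("CREATE INDEX timeseries.idx_metric_data_start_time"
                        + " ON timeseries.metric_data(start_time)");
            }
        }
    }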

Trying with a new database is not an option for us, as replicating the database may be an expensive operation.

Regards-
Sanjeev.
