H2 Database CPU goes to 100% intermittently


Ezhil

Oct 23, 2009, 9:45:43 AM
to H2 Database, ezhil_a...@trimble.com
We use JBoss Application Server with an embedded H2 database. Under very high load
(both reads and writes) the CPU goes to 100%. We use a customized data
layer built on top of Spring to access the H2 database.

When we looked at the connected sessions, we saw a lot of MERGE
statements running for a long time (we use these statements to sync
the cache with the database).

Below is a snippet of the stack trace from a thread dump:

"pool-14-thread-4" prio=10 tid=0x000000005418f000 nid=0x333e runnable
[0x000000006c048000]
java.lang.Thread.State: RUNNABLE
at org.h2.command.dml.Update.update(Update.java:90)
at org.h2.command.dml.Merge.merge(Merge.java:175)
at org.h2.command.dml.Merge.update(Merge.java:126)
at org.h2.command.CommandContainer.update(CommandContainer.java:72)
at org.h2.command.Command.executeUpdate(Command.java:208)
- locked <0x00002aab542b0418> (a org.h2.engine.Session)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal
(JdbcPreparedStatement.java:139)
- locked <0x00002aab542b0418> (a org.h2.engine.Session)
at org.h2.jdbc.JdbcPreparedStatement.executeBatch
(JdbcPreparedStatement.java:1050)
at org.jboss.resource.adapter.jdbc.WrappedStatement.executeBatch
(WrappedStatement.java:774)
at org.springframework.jdbc.core.JdbcTemplate$4.doInPreparedStatement
(JdbcTemplate.java:881)
at org.springframework.jdbc.core.JdbcTemplate.execute
(JdbcTemplate.java:591)
at org.springframework.jdbc.core.JdbcTemplate.execute
(JdbcTemplate.java:619)
at org.springframework.jdbc.core.JdbcTemplate.batchUpdate
(JdbcTemplate.java:866)
at org.springframework.jdbc.object.BatchSqlUpdate.flush
(BatchSqlUpdate.java:183)
at
com.trimble.mrm.dataaccess.SpringDataAccess.batchUpdate_aroundBody32
(SpringDataAccess.java:351)
at com.trimble.mrm.dataaccess.SpringDataAccess$AjcClosure33.run
(SpringDataAccess.java:1)
at com.trimble.mrm.dataaccess.aspect.LogAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_LogAdvice$1$625e745proceed
(LogAdvice.aj:1)
at com.trimble.mrm.dataaccess.aspect.LogAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_LogAdvice$1$625e745(LogAdvice.aj:
36)
at com.trimble.mrm.dataaccess.SpringDataAccess.batchUpdate
(SpringDataAccess.java:334)
at
com.trimble.mrm.dataaccess.SpringDataAccess.batchInsert_aroundBody26
(SpringDataAccess.java:297)
at com.trimble.mrm.dataaccess.SpringDataAccess$AjcClosure27.run
(SpringDataAccess.java:1)
at com.trimble.mrm.dataaccess.aspect.LogAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_LogAdvice$1$625e745proceed
(LogAdvice.aj:1)
at com.trimble.mrm.dataaccess.aspect.LogAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_LogAdvice$1$625e745(LogAdvice.aj:
36)
at com.trimble.mrm.dataaccess.SpringDataAccess.batchInsert
(SpringDataAccess.java:296)
at
com.trimble.mrm.gm.cache.loader.PartitionCacheInsertHandler.insertBatchRecords_aroundBody6
(PartitionCacheInsertHandler.java:166)
at com.trimble.mrm.gm.cache.loader.PartitionCacheInsertHandler
$AjcClosure7.run(PartitionCacheInsertHandler.java:1)
at com.trimble.mrm.dataaccess.aspect.TransactionAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_TransactionAdvice$1$72acfe55proceed
(TransactionAdvice.aj:1)
at com.trimble.mrm.dataaccess.aspect.TransactionAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_TransactionAdvice$1$72acfe55
(TransactionAdvice.aj:29)
at
com.trimble.mrm.gm.cache.loader.PartitionCacheInsertHandler.insertBatchRecords
(PartitionCacheInsertHandler.java:163)
at
com.trimble.mrm.gm.cache.loader.PartitionCacheInsertHandler.batchInsertHandler_aroundBody0
(PartitionCacheInsertHandler.java:97)
at com.trimble.mrm.gm.cache.loader.PartitionCacheInsertHandler
$AjcClosure1.run(PartitionCacheInsertHandler.java:1)
at com.trimble.mrm.dataaccess.aspect.TransactionAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_TransactionAdvice$1$72acfe55proceed
(TransactionAdvice.aj:1)
at com.trimble.mrm.dataaccess.aspect.TransactionAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_TransactionAdvice$1$72acfe55
(TransactionAdvice.aj:29)
at
com.trimble.mrm.gm.cache.loader.PartitionCacheInsertHandler.batchInsertHandler
(PartitionCacheInsertHandler.java:47)
at
com.trimble.mrm.gm.cache.loader.CacheInsertHandler.processResultSet_aroundBody0
(CacheInsertHandler.java:88)
at com.trimble.mrm.gm.cache.loader.CacheInsertHandler$AjcClosure1.run
(CacheInsertHandler.java:1)
at com.trimble.mrm.dataaccess.aspect.TransactionAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_TransactionAdvice$1$72acfe55proceed
(TransactionAdvice.aj:1)
at com.trimble.mrm.dataaccess.aspect.TransactionAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_TransactionAdvice$1$72acfe55
(TransactionAdvice.aj:29)
at com.trimble.mrm.gm.cache.loader.CacheInsertHandler.processResultSet
(CacheInsertHandler.java:75)
at com.trimble.mrm.dataaccess.SpringDataAccessResultSetExtractor
$1.extractData(SpringDataAccessResultSetExtractor.java:35)
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement
(JdbcTemplate.java:653)
at org.springframework.jdbc.core.JdbcTemplate.execute
(JdbcTemplate.java:591)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:
641)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:
666)
at
org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query
(NamedParameterJdbcTemplate.java:113)
at
org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query
(NamedParameterJdbcTemplate.java:117)
at
com.trimble.mrm.dataaccess.SpringDataAccess.queryForResultSet_aroundBody12
(SpringDataAccess.java:130)
at com.trimble.mrm.dataaccess.SpringDataAccess$AjcClosure13.run
(SpringDataAccess.java:1)
at com.trimble.mrm.dataaccess.aspect.LogAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_LogAdvice$1$625e745proceed
(LogAdvice.aj:1)
at com.trimble.mrm.dataaccess.aspect.LogAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_LogAdvice$1$625e745(LogAdvice.aj:
38)
at com.trimble.mrm.dataaccess.SpringDataAccess.queryForResultSet
(SpringDataAccess.java:124)
at
com.trimble.mrm.gm.cache.loader.GenericCacheLoader.refreshData_aroundBody10
(GenericCacheLoader.java:262)
at com.trimble.mrm.gm.cache.loader.GenericCacheLoader$AjcClosure11.run
(GenericCacheLoader.java:1)
at com.trimble.mrm.dataaccess.aspect.TransactionAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_TransactionAdvice$1$72acfe55proceed
(TransactionAdvice.aj:1)
at com.trimble.mrm.dataaccess.aspect.TransactionAdvice.ajc$around
$com_trimble_mrm_dataaccess_aspect_TransactionAdvice$1$72acfe55
(TransactionAdvice.aj:29)
at com.trimble.mrm.gm.cache.loader.GenericCacheLoader.refreshData
(GenericCacheLoader.java:179)
at com.trimble.mrm.gm.cache.loader.CacheLoadController.refreshData
(CacheLoadController.java:255)
at com.trimble.mrm.gm.cache.loader.CacheLoadController.access$400
(CacheLoadController.java:22)
at com.trimble.mrm.gm.cache.loader.CacheLoadController$2.run
(CacheLoadController.java:204)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown
Source)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask.access$301(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown
Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

Thomas Mueller

Oct 26, 2009, 2:09:35 PM
to h2-da...@googlegroups.com
Hi,

> Under very high loads (both reads and writes) CPU goes to 100%.

Well, this sounds normal. What else do you expect? Did you have a look at:

http://www.h2database.com/html/performance.html#database_profiling
http://www.h2database.com/html/performance.html#database_performance_tuning
http://www.h2database.com/html/features.html#trace_options

> When we saw the sessions that are connected, we saw lot of merge
> statements running for a long time (we use these statements to sync
> the cache with the database)

How long do they take, and what do those statements look like? How big
are the batches (you are using batch updates)?

Regards,
Thomas

Ezhil

Oct 27, 2009, 8:48:30 AM
to H2 Database
Thomas,

Thanks for your reply.

We finally found the cause of the problem: the table that the MERGE
statement uses has no primary key. We were misled because the problem
did not occur for a different table (also without a primary key) but
did occur in this case.
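
For reference, a minimal sketch (our illustration, not the production schema) of the fix described above: give the MERGE target table a primary key on the column named in the KEY(...) clause, so the merge can locate the existing row through an index.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MergeKeySketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
        Statement stmt = conn.createStatement();
        // DEVICE_ID is the MERGE key, so it carries the primary key here.
        stmt.execute("CREATE TABLE CURRENT_MAP_POSITION_CACHE("
                + "DEVICE_ID INT PRIMARY KEY, CLIENT_ID INT NOT NULL, "
                + "LATITUDE DOUBLE, LONGITUDE DOUBLE)");
        // The merge updates the row if DEVICE_ID already exists, otherwise inserts it.
        stmt.execute("MERGE INTO CURRENT_MAP_POSITION_CACHE"
                + "(DEVICE_ID, CLIENT_ID, LATITUDE, LONGITUDE) KEY(DEVICE_ID) "
                + "VALUES(171003, 21915, 47.3, 8.5)");
        stmt.close();
        conn.close();
    }
}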

We also see the performance of queries run against the H2 cache
deteriorate after a lot of writes. Queries perform well at the start,
before the writes, and then slow down gradually (within an hour or so).
We suspected that the indexes became fragmented and wrote scheduled
jobs that dropped and recreated them, but we did not see much
performance improvement even with regular index recreation.

We tried the multi-threaded option (SET MULTI_THREADED 1). The writes
run without any problem as long as we do not recreate the indexes, but
an exception (wrong row id) is thrown when the index recreation jobs
run. Is multi-threading safe to use?

Our application serves frequent reads (more than 100 reads per second)
and about 15 writes per second (with a batch size of 25). We also
tried using more CPUs, but the performance improvement is not
substantial. The application runs on a 64-bit Linux box and uses 4 GB
of RAM (out of a total of 8 GB).

Can you suggest ways to improve the performance after writes?

Ezhil

Oct 27, 2009, 10:11:57 AM
to H2 Database
We should also add that we use in-memory mode, and that we execute
complex queries with GROUP BY clauses.

Thomas Mueller

Oct 30, 2009, 7:40:51 AM
to h2-da...@googlegroups.com
Hi,

> We encounter deterioration in performance of queries run against H2
> Cache after lot of writes. Queries performed well at the start before
> the writes and slows down gradually (in a hour or so). We thought that
> indexes became less tidy and wrote scheduled jobs which dropped and
> recreated indexes. Anyway we could not see much performance
> improvements even after timely recreation of indexes.

Large databases are usually slower than smaller ones - maybe this is
the reason? The problem might be solved by using a larger cache size.
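
A minimal sketch (example values only, and mainly relevant for disk-based databases, since the page cache applies to data stored on disk) of how the cache size can be increased, either in the database URL or at runtime:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CacheSizeSketch {
    public static void main(String[] args) throws Exception {
        // Option 1: set the cache size (in KB) in the database URL.
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test;CACHE_SIZE=262144", "sa", "");

        // Option 2: change it at runtime for the open database.
        Statement stmt = conn.createStatement();
        stmt.execute("SET CACHE_SIZE 262144");
        stmt.close();
        conn.close();
    }
}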

> We tried multitasking option (SET MULTI_THREADED 1).

I don't think this will help.

> exception (on wrong rowid) is thrown when index recreation jobs run.

That sounds like a bug. Could you post a simple, standalone test case
that reproduces the problem? It would be great if the test case did
not have any dependencies except the H2 jar file (that is, a simple
SQL script that can be run in the H2 Console, or a Java class that
uses the JDBC API and is run via a static main method). Please include
any initialization code (CREATE TABLE, INSERT and so on) in the Java
class or in a .sql script file.
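
As an illustration only (not an actual reproduction of the problem), a skeleton of the kind of standalone test case requested above, with no dependencies besides the H2 jar and all setup done in the main method:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexRecreateTest {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:test;MULTI_THREADED=TRUE;LOCK_MODE=0", "sa", "");
        Statement stmt = conn.createStatement();
        // Initialization: CREATE TABLE, INSERT, CREATE INDEX, and so on.
        stmt.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR)");
        stmt.execute("CREATE INDEX IDX_TEST ON TEST(ID)");
        stmt.execute("INSERT INTO TEST VALUES(1, 'Hello')");
        // ... start writer threads here, drop and recreate the index while
        // they are running, and print any exception that is thrown ...
        stmt.close();
        conn.close();
    }
}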

> Can you suggest ways to improve the performance after writes?

See http://www.h2database.com/html/performance.html#database_performance_tuning

Regards,
Thomas

Ezhil

Oct 30, 2009, 10:11:45 AM
to H2 Database
Hi,

We get this exception when we enable the multi-threaded option
together with LOCK_MODE=0. We don't mind reading slightly outdated
data (we don't have transactional requirements), but we don't want
exceptions to be thrown. It would be really helpful if we could
somehow make the multi-threaded option work, because we get extremely
good throughput with it. We don't mind adding more than 4 GB of memory
or more CPUs, but increasing them gives no better throughput. When we
went through the H2 code we found database-level synchronization when
the multi-threaded option is off; we know that is reliable, but in our
case it is slow.

To give some history of our implementation: we spent a lot of time
investigating the available technologies for caching data, as we have
an application with extremely high read and write requirements. We
tried a lot of in-memory databases and caches - JBoss Cache,
Terracotta, Derby and H2, to mention a few. H2 clearly outperformed
all of them for our application. Even though JBoss Cache was good, its
searchable cache was unreliable and extremely slow.

We use in-memory mode. Using a larger cache size did not help at all.
As mentioned, even adding 6 more 64-bit CPUs (to the existing 2) did
not help.

Below is one of our important tables, with its index details:

TABLE CURRENT_MAP_POSITION_CACHE
(
CLIENT_ID INT NOT NULL,
DEVICE_ID INT NOT NULL,
STATUS NUMBER ,
GPS_STATUS NUMBER ,
INVOKE_CODE NUMBER ,
OBSERVATION_TIME TIMESTAMP ,
LOC_OBS_TIME TIMESTAMP ,
UPDATE_TIME TIMESTAMP ,
HEADING NUMBER ,
SPEED NUMBER ,
LATITUDE DOUBLE(53) ,
LONGITUDE DOUBLE(53) ,
MRP NUMBER ,
MILEAGE NUMBER ,
DST NUMBER ,
GEO_COUNT NUMBER ,
SWITCH_STATUS NUMBER ,
AUX1 NUMBER ,
STOP_DURATION NUMBER ,
MONTH_DATE VARCHAR2(50) ,
SCREEN_NAME VARCHAR2(50) ,
LOCATION_MESSAGE VARCHAR2(100) ,
MAP_HEADING VARCHAR2(50) ,
SWITCH1 NUMBER NOT NULL ,
SWITCH2 NUMBER NOT NULL ,
SWITCH3 NUMBER NOT NULL ,
SWITCH4 NUMBER NOT NULL ,
AUX2 NUMBER ,
AUX3 NUMBER ,
AUX4 NUMBER ,
AUX5 NUMBER ,
AUX6 NUMBER ,
AUX7 NUMBER ,
AUX8 NUMBER ,
AUX9 NUMBER ,
AUX10 NUMBER ,
STATUS_NUMBER NUMBER,
MESSAGE_ID NUMBER,
MESSAGE_READ VARCHAR2(1),
BATTERY_VOLTAGE NUMBER,
TETHER_STATUS NUMBER NOT NULL ,
ADDRESS NUMBER,
STREET VARCHAR2(40),
CROSS_STREET VARCHAR2(40),
ZIP VARCHAR2(40),
COUNTY VARCHAR2(40),
CITY VARCHAR2(40),
STATE VARCHAR2(40),
LANDMARK_ID NUMBER,
LANDMARK VARCHAR2(50),
COUNTRY VARCHAR2(40),
LOC_EVENT_TIME TIMESTAMP,
HOUSE_NUMBER VARCHAR2(40),
CARD_ID VARCHAR2(25),
CARD_EVENT_TIME TIMESTAMP NOT NULL ,
CARD_EVENT_TYPE NUMBER,
LOGIN_TIME TIMESTAMP,
IS_PINGABLE NUMBER,
STOP1 NUMBER,
STOP2 NUMBER,
STOP3 NUMBER,
DEVICE_TYPE_ID NUMBER,
TIME_ZONE_ID NUMBER,
POSITION_COLOR NUMBER,
IS_POCKET_EDITION NUMBER,
DEVICE_GROUP_ID NUMBER,
EID VARCHAR2(60),
FIRST_NAME VARCHAR2(60),
LAST_NAME VARCHAR2(60),
LAST_UPDATE_TIME TIMESTAMP
)

INDEX IDX1_CURRENT_MAP_POSITION_CACHE ON CURRENT_MAP_POSITION_CACHE
(DEVICE_ID )

INDEX IDX2_CURRENT_MAP_POSITION_CACHE ON CURRENT_MAP_POSITION_CACHE
(CLIENT_ID)

INDEX IDX3_CURRENT_MAP_POSITION_CACHE ON CURRENT_MAP_POSITION_CACHE
(CLIENT_ID, DEVICE_ID)

INDEX IDX4_CURRENT_MAP_POSITION_CACHE ON CURRENT_MAP_POSITION_CACHE
(CLIENT_ID, DEVICE_ID, LOC_EVENT_TIME )

A lot of the read queries look like the following (with sample bind values):

SELECT
count(device_id) AS cnt,
FLOOR (ABS ((latitude - -90)) / 15.002643002755175 ) AS x,
FLOOR (ABS ((longitude - -180.0)) / 22.5) AS y,
MIN(ROUND(longitude + 180,
6) *1000000 + ROUND(latitude + 90,
6) *0.0001) AS centroid ,
CASE WHEN COUNT(device_id) <= 5 THEN group_concat
(device_id separator ', ')
ELSE NULL END ids ,
LAST_UPDATE_TIME
FROM
current_map_position_cache cmp
WHERE
cmp.invoke_code != -1
AND cmp.client_id = 21113
AND ( latitude BETWEEN -90
AND 90.0
OR latitude BETWEEN 90.0
AND -90 )
AND ( longitude BETWEEN -180.0
AND 180.0
OR longitude BETWEEN 180.0
AND -180.0 )
group by
x,
y

Merge statements are also executed alongside these queries (at a lower
rate) to sync with the database. When we went through the H2 code we
found that the merge actually deletes and re-inserts the data.

MERGE INTO CURRENT_MAP_POSITION_CACHE
(DEVICE_ID, CLIENT_ID, STATUS, GPS_STATUS, INVOKE_CODE, OBSERVATION_TIME,
LOC_OBS_TIME, UPDATE_TIME, HEADING, SPEED, LATITUDE, LONGITUDE, MRP, MILEAGE,
DST, GEO_COUNT, SWITCH_STATUS, AUX1, STOP_DURATION, MONTH_DATE, SCREEN_NAME,
LOCATION_MESSAGE, MAP_HEADING, SWITCH1, SWITCH2, SWITCH3, SWITCH4, AUX2, AUX3,
AUX4, AUX5, AUX6, AUX7, AUX8, AUX9, AUX10, STATUS_NUMBER, MESSAGE_ID,
MESSAGE_READ, BATTERY_VOLTAGE, TETHER_STATUS, ADDRESS, STREET,
CROSS_STREET, ZIP, COUNTY, CITY, STATE, LANDMARK_ID, LANDMARK, COUNTRY, LOC_EVENT_TIME,
HOUSE_NUMBER, CARD_ID, CARD_EVENT_TIME, CARD_EVENT_TYPE, LOGIN_TIME,
IS_PINGABLE, STOP1, STOP2, STOP3, DEVICE_TYPE_ID, TIME_ZONE_ID, POSITION_COLOR,
IS_POCKET_EDITION, DEVICE_GROUP_ID, LAST_UPDATE_TIME, EID, FIRST_NAME,
LAST_NAME) KEY(DEVICE_ID)
values

( ?, ? , ? , ? , ? ,? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ?
, ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ?
, ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? ,
? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ?)


I also wrote a small test program to simulate the writes made by the
application; it reads a list of devices from a file and updates the
database with a batch size of 25, at about 6 batch updates per second.

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;

import org.h2.jdbcx.JdbcConnectionPool;

/**
 * Simulates the write load of the application: reads device ids from a file
 * and repeatedly updates CURRENT_MAP_POSITION_CACHE in batches of 25.
 */
public class H2WriteCacheTest extends Thread {

    private static final String FILE_NAME = "rel_devices.log";

    public static JdbcConnectionPool connectionPool = null;

    private static String url =
        "jdbc:h2:tcp://10.40.51.30:9092/mem:vehiclecache;IFEXISTS=TRUE;LOCK_MODE=0;LOG=0";

    private static String userName = "sa";

    private static String password = "";

    private static int SIZE = 149564;

    private static int[] deviceIds = new int[SIZE];

    private static int[] clientIds = new int[SIZE];

    private static String[] screenNames = new String[SIZE];

    private static String UPDATE_CMP_RECORD =
        "UPDATE /* FDCCMPUpdateAgent:CMP 2 */ current_map_position_cache SET " +
        "status=?, gps_status=?, invoke_code=?, loc_obs_time=?, " +
        "update_time=?, heading=?, speed=?, latitude=?, longitude=?, mrp=?, mileage=?, dst=?, " +
        "stop_duration=?, screen_name=?, " +
        "loc_event_time=?, address=?, HOUSE_NUMBER=?, STREET=?, CITY=?, STATE=?, ZIP=?, COUNTRY=?, " +
        "CROSS_STREET=?, COUNTY=?, LANDMARK_ID=?, LANDMARK=? " +
        "WHERE device_id=? AND client_id=? AND ? >= loc_event_time";

    public H2WriteCacheTest(String str) {
        connectionPool = JdbcConnectionPool.create(url, userName, password);
    }

    public static void main(String args[]) throws FileNotFoundException,
            IOException, InterruptedException {
        new H2WriteCacheTest("H2WriteCacheTest");
        loadDevicesFromFile();
        updateCMP();
    }

    /** Runs an endless loop of batched UPDATE statements against the cache table. */
    public static void updateCMP() {

        int HITS = 150;
        int BATCH_SIZE = 25;
        int waitSecs = 1000 / (HITS / BATCH_SIZE);

        System.out.println("WaitSecs = " + waitSecs);

        try {
            Connection conn = connectionPool.getConnection();
            PreparedStatement stmt = conn.prepareStatement(UPDATE_CMP_RECORD);

            int count1 = 0;
            int failureCount = 0;

            while (true) {
                try {
                    // Generate one set of random values per pass over the device list.
                    int gpsStatus = random(0, 2);
                    int invokeCode = random(0, 50);
                    int latitude = random(-180, 180);
                    int longitude = random(-180, 180);
                    int stopDuration = random(0, 50);
                    java.sql.Timestamp locEventTime = new java.sql.Timestamp(new java.util.Date().getTime());
                    java.sql.Timestamp loc_obs_time = locEventTime;
                    java.sql.Timestamp update_time = locEventTime;
                    int address = random(1, 10000);
                    String houseNo = "h" + random(1, 1000000);
                    String street = "s" + random(1, 10000) + random(1, 10000);
                    String city = " city" + random(1, 10000);
                    String state = "state " + random(1, 10000);
                    String zip = "" + random(1, 10000);
                    String country = "country " + random(1, 10000);
                    String crossSt = "crossSt " + random(1, 10000);
                    String county = " county" + random(1, 10000);
                    String landmark = "landmark " + random(1, 10000);
                    int status = 0;
                    int heading = 45;
                    int speed = 80;
                    int mrp = 255;
                    double mileage = 50.0;
                    int dst = 1;
                    int landmarkId = 0;

                    for (int i = 0; i < deviceIds.length; i++) {
                        count1++;
                        int index = 1;

                        stmt.setInt(index++, status);
                        stmt.setInt(index++, gpsStatus);
                        stmt.setInt(index++, invokeCode);
                        stmt.setTimestamp(index++, loc_obs_time);
                        stmt.setTimestamp(index++, update_time);
                        stmt.setInt(index++, heading);
                        stmt.setInt(index++, speed);
                        stmt.setInt(index++, latitude);
                        stmt.setInt(index++, longitude);
                        stmt.setInt(index++, mrp);
                        stmt.setDouble(index++, mileage);
                        stmt.setDouble(index++, dst);
                        stmt.setInt(index++, stopDuration);
                        stmt.setString(index++, screenNames[i]);
                        stmt.setTimestamp(index++, locEventTime);
                        stmt.setInt(index++, address);
                        stmt.setString(index++, houseNo);
                        stmt.setString(index++, street);
                        stmt.setString(index++, city);
                        stmt.setString(index++, state);
                        stmt.setString(index++, zip);
                        stmt.setString(index++, country);
                        stmt.setString(index++, crossSt);
                        stmt.setString(index++, county);
                        stmt.setInt(index++, landmarkId);
                        stmt.setString(index++, landmark);

                        // where clause
                        stmt.setInt(index++, deviceIds[i]);
                        stmt.setInt(index++, clientIds[i]);
                        stmt.setTimestamp(index++, locEventTime);

                        stmt.addBatch();

                        // Flush the batch every BATCH_SIZE rows, or at the end of the list.
                        if ((i + 1) % BATCH_SIZE == 0 || (i + 1) == deviceIds.length) {
                            long prevTime = System.currentTimeMillis();
                            try {
                                int count[] = stmt.executeBatch();
                                System.out.println("Executed " + count1 + " Records in "
                                        + (System.currentTimeMillis() - prevTime));
                            } catch (Exception e) {
                                System.out.println("****Exception in Exception block, failure count="
                                        + (++failureCount) + " ******");
                                e.printStackTrace();
                                continue;
                            } catch (Throwable t) {
                                System.out.println("****Exception in Throwable block, failure count="
                                        + (++failureCount) + " ******");
                                t.printStackTrace();
                                continue;
                            }
                            // Thread.sleep(waitSecs);
                            count1 = 0;
                        }
                    }
                } catch (org.h2.jdbc.JdbcSQLException sqe) {
                    sqe.printStackTrace();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static int random(int min, int max) {
        return (int) (min + (max - min) * Math.random());
    }

    /** Loads device id, client id and screen name triples from the input file. */
    private static void loadDevicesFromFile() throws FileNotFoundException, IOException {

        BufferedReader in = new BufferedReader(new FileReader(FILE_NAME));

        int i = 0;
        String nextLine = null;

        while ((nextLine = in.readLine()) != null) {
            String line = nextLine;
            String arr[] = line.split(",");
            try {
                deviceIds[i] = Integer.parseInt(arr[0]);
                clientIds[i] = Integer.parseInt(arr[1]);
                screenNames[i] = arr[2];
            } catch (Throwable t) {
                System.out.println("Error line=" + line);
                System.out.println("line number=" + i);
            }
            i++;
        }
        System.out.println("#Devices loaded:" + i);
    }

}

Snippet of the input file used:

================

291569669,70515,fake5
291569670,70515,fake6
291569671,70515,fake7
291569672,70515,fake8
-369098744,20698,SimDev109969
-1107296256,70515,be00
170991,21915,EE003711
170996,21915,EE031035
171003,21915,BC000967
171004,21915,BC004132
171006,21915,DB009284
171007,21915,DD015141
171018,21915,CA024106
171037,21915,FE000126
171041,21915,AE020809
171017,21915,AA024550
171020,21915,DE015179
171022,21915,ED029549
171025,21915,DA018732
171027,21915,EB013660
171030,21915,CA009119
171031,21915,BA015708


Thomas Mueller

Nov 3, 2009, 2:58:52 PM
to h2-da...@googlegroups.com
Hi,

Unfortunately I was not able to reproduce the problem. What I did was:

- I started a server (the H2 Console); a programmatic sketch of this setup follows the output below
- I changed the database URL to
jdbc:h2:tcp://localhost:9092/mem:vehiclecache;IFEXISTS=TRUE;LOCK_MODE=0;LOG=0;MULTI_THREADED=TRUE
- I created the tables (by the way, your script is broken)
- I created the 'input file' with the values you posted in the mail
- I ran the test case twice concurrently
- The output was:

Executed 25 Records in 3
Executed 25 Records in 3
Executed 25 Records in 3
Executed 25 Records in 3
Executed 25 Records in 3
Executed 25 Records in 3
Executed 25 Records in 4
Executed 25 Records in 3
Executed 25 Records in 3

and so on.
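
For completeness, a minimal sketch (my own wording, not part of the original reply) of the same setup done programmatically: start an H2 TCP server, open the named in-memory database, and connect with the URL used in the test. Table creation and the test clients are omitted.

import java.sql.Connection;
import java.sql.DriverManager;

import org.h2.tools.Server;

public class ServerSetupSketch {
    public static void main(String[] args) throws Exception {
        // Start a TCP server on port 9092.
        Server server = Server.createTcpServer("-tcpPort", "9092").start();

        // The first connection creates the named in-memory database and keeps it alive.
        Connection keepAlive = DriverManager.getConnection(
                "jdbc:h2:tcp://localhost:9092/mem:vehiclecache;LOCK_MODE=0;LOG=0", "sa", "");

        // ... create the tables and run the test clients here ...

        keepAlive.close();
        server.stop();
    }
}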

If you want, you could submit another test case. It would be great if
you could upload the test case somewhere so I don't have to fix line
endings. The script should also be included - that is, the test case
should be standalone: it should create the tables, start the server,
start multiple threads, and reproduce the problem. You should not need
to describe how to run the test case; running it should just mean
running the main method. I'm sorry for all this, but otherwise I will
not have time to run it.

The query uses the index IDX2_CURRENT_MAP_POSITION_CACHE: CLIENT_ID =
21113 - I'm not sure if this is optimal.

Regards,
Thomas

Ezhil

Nov 4, 2009, 7:10:12 AM
to H2 Database
Hi,

We are grateful to you for analysing the problem. We are really
desperate to get this resolved and will provide more test cases or
information if you need them.

We have created a standalone test case which depends on two resource
files. We used Java 6 to run the code. Please download it from the
link below.

http://www.4shared.com/file/146169216/eeeca606/H2TestCase.html

Please let us know if you have any problems downloading these files or
any issues with the files themselves.

The test case is a bit rough (we treated it as throwaway code just to
analyze the problem); anyway, the exception is thrown within a minute
or so.

To recap, we use multi-threading as well as lock mode 0 (when all the
threads are running you can see this setting in the console). We can
live with outdated reads, but not with exceptions. The performance
degradation is too severe if we either disable multi-threading or set
lock mode 3.

Basically one thread does the updates, one thread does the merges
(which we believe are a delete plus an insert internally), and six
threads do the reads.

Regarding the use of index IDX2_CURRENT_MAP_POSITION_CACHE for
CLIENT_ID = 21113: we tried a combined index on client_id, latitude
and longitude, but the SQL query does not use the latitude and
longitude columns of that index.
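
One way to check which index the query actually uses is H2's EXPLAIN; a minimal sketch (our own, with a simplified version of the query above) follows.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainPlanSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:tcp://localhost:9092/mem:vehiclecache;IFEXISTS=TRUE", "sa", "");
        Statement stmt = conn.createStatement();
        // EXPLAIN prints the chosen plan, including the index used for the scan.
        ResultSet rs = stmt.executeQuery(
                "EXPLAIN SELECT COUNT(DEVICE_ID) FROM CURRENT_MAP_POSITION_CACHE "
                + "WHERE INVOKE_CODE != -1 AND CLIENT_ID = 21113 "
                + "AND LATITUDE BETWEEN -90 AND 90 AND LONGITUDE BETWEEN -180 AND 180");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}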

Regards,
Ezhil

Thomas Mueller

Nov 6, 2009, 1:54:05 PM
to h2-da...@googlegroups.com
Hi,

I tried your application. I will change the database in the next
release so that trying to disable locking (LOCK_MODE=0) throws an
exception when the multi-threaded kernel mode is used. Unfortunately
it is currently not possible to support your use case, I'm sorry! At
the moment this combination results in corruption when multiple
threads try to update the same table (even in in-memory mode).

The lock mode (LOCK_MODE=0) affects the whole database (all
connections), not only the current connection. I will document that.

There are many ways to improve the performance of your application,
however. For example, CMPRead creates a new SQL statement for every 20
operations; using a prepared statement would help. Another case is
ResultSet.getInt(columnName) - you should use the column index or the
uppercase column name. To identify performance bottlenecks I usually
use java -Xrunhprof:cpu=samples,depth=16. See also
http://www.h2database.com/html/performance.html#application_profiling
and below.
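
To illustrate the two suggestions above, here is a minimal sketch (my own, not the code from the test case): reuse a single PreparedStatement instead of building a new SQL string per call, and read result columns by index rather than by lowercase column name. The table and column names follow the schema posted earlier.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PreparedReadSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:tcp://localhost:9092/mem:vehiclecache;IFEXISTS=TRUE", "sa", "");

        // Prepare once, execute many times with different bind values.
        PreparedStatement ps = conn.prepareStatement(
                "SELECT DEVICE_ID, LATITUDE, LONGITUDE FROM CURRENT_MAP_POSITION_CACHE "
                + "WHERE CLIENT_ID = ? AND DEVICE_ID = ?");
        for (int deviceId = 1; deviceId <= 20; deviceId++) {
            ps.setInt(1, 21113);
            ps.setInt(2, deviceId);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                // Column index (or uppercase name) avoids the per-call name lookup cost.
                int id = rs.getInt(1);
                double lat = rs.getDouble(2);
                double lon = rs.getDouble(3);
                System.out.println(id + ": " + lat + "," + lon);
            }
            rs.close();
        }
        ps.close();
        conn.close();
    }
}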

Regards,
Thomas

Ezhil

Nov 15, 2009, 3:27:01 AM
to H2 Database
Thomas,

It's really nice of you to have helped us so much and given such
excellent support for our problems. Sorry for taking so long to
respond; we were looking into possible solutions to the problem.
Ultimately we settled on load balancing: we write to all available H2
servers (currently two) and read from both. Half of our application
servers (WebLogic) read from the first H2 instance and the other half
read from the second.

Thanks much!