[Hazelcast 3.2.3] SQLException after upgrade from Hazelcast 3.0.3


Tianyi Cong

Jun 21, 2014, 2:11:51 PM
to haze...@googlegroups.com
Hey all,

I just started using Hazelcast 3.2.3 yesterday. After configuring hazelcast.xml and starting my application, the logs look very different from what they were with Hazelcast 3.0.3.

For example, when loading the user map in 3.0.3, there were 2 partition threads and the log looked like this:
2014-06-21 01:05:59,185 INFO  [hz.HCAST_EVERTEST.cached.thread-2] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 1000
2014-06-21 01:05:59,188 INFO  [hz.HCAST_EVERTEST.cached.thread-3] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 886
and that's it.

After switching to 3.2.3, this is the log:
2014-06-21 10:51:00,669 INFO  [hz.HCAST_EVERTEST.cached.thread-12] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 3
2014-06-21 10:51:00,673 INFO  [hz.HCAST_EVERTEST.cached.thread-10] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 7
2014-06-21 10:51:00,676 INFO  [hz.HCAST_EVERTEST.cached.thread-1] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 7
2014-06-21 10:51:00,682 INFO  [hz.HCAST_EVERTEST.cached.thread-15] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 5
2014-06-21 10:51:00,684 INFO  [hz.HCAST_EVERTEST.cached.thread-8] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 10
2014-06-21 10:51:00,686 INFO  [hz.HCAST_EVERTEST.cached.thread-7] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 4
2014-06-21 10:51:00,688 INFO  [hz.HCAST_EVERTEST.cached.thread-13] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 5
2014-06-21 10:51:00,692 INFO  [hz.HCAST_EVERTEST.cached.thread-4] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 11
2014-06-21 10:51:00,694 INFO  [hz.HCAST_EVERTEST.cached.thread-9] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 2
2014-06-21 10:51:00,700 INFO  [hz.HCAST_EVERTEST.cached.thread-11] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 6
2014-06-21 10:51:00,700 INFO  [hz.HCAST_EVERTEST.cached.thread-6] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 4
2014-06-21 10:51:00,702 INFO  [hz.HCAST_EVERTEST.cached.thread-5] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 5
2014-06-21 10:51:00,702 INFO  [hz.HCAST_EVERTEST.cached.thread-17] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 6
2014-06-21 10:51:00,702 INFO  [hz.HCAST_EVERTEST.cached.thread-14] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 6
2014-06-21 10:51:00,703 INFO  [hz.HCAST_EVERTEST.cached.thread-16] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 9
2014-06-21 10:51:00,703 INFO  [hz.HCAST_EVERTEST.cached.thread-2] (StoreLoadUserMap.java:159) --- load all called for user Map - SYSTEM STARTUP : : 7
Each thread loads only a few entries at a time, and this repeats many times.

After a while, I get this exception:
2014-06-21 10:51:35,698 ERROR [hz.HCAST_EVERTEST.cached.thread-10] (StoreLoadUserMap.java:148) --- exception during StoreLoadUserMap load:
java.sql.SQLException: An attempt by a client to checkout a Connection has timed out.
    at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
    at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:527)
    at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
    at models.predmkts.StoreLoadUserMap.loadAll(StoreLoadUserMap.java:118)   ----> this line corresponds to conn = ds.getConnection(); in the loadAll method.
    at com.hazelcast.map.MapStoreWrapper.loadAll(MapStoreWrapper.java:132)
    at com.hazelcast.map.DefaultRecordStore$MapLoadAllTask.run(DefaultRecordStore.java:1010)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at com.hazelcast.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:57)
    at com.hazelcast.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:186)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
    at com.hazelcast.util.executor.PoolExecutorThreadFactory$ManagedThread.run(PoolExecutorThreadFactory.java:59)
Caused by: com.mchange.v2.resourcepool.TimeoutException: A client timed out while waiting to acquire a resource from com.mchange.v2.resourcepool.BasicResourcePool@4716cab2 -- timeout at awaitAvailable()
    at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1317)
    at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
    at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)
    ... 12 more

I had to increase my database max pool size from 50 to 150, but that is not a real solution, since the amount of data will keep growing.
Here is my hazelcast.xml. The pool size is 16; does this mean there will be 16 partition threads, so there should technically be a maximum of 16 DB connections?
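For context, the DataSource is a c3p0 pool (as the stack trace shows). It is configured roughly like this; the setters are real c3p0 API, but the values below are only illustrative, not our exact production settings:

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Illustrative c3p0 setup; the URL and numbers are examples, not our real config.
ComboPooledDataSource ds = new ComboPooledDataSource();
ds.setJdbcUrl("jdbc:mysql://localhost/mydb"); // hypothetical URL
ds.setMinPoolSize(5);
ds.setMaxPoolSize(50);         // the limit I had to raise to 150
ds.setAcquireIncrement(5);
ds.setCheckoutTimeout(30000);  // ms a client waits before the SQLException above
```

If I understand c3p0 correctly, raising checkoutTimeout (or setting it to 0, which means wait indefinitely) would make the loader threads queue for a connection instead of failing, at the cost of a slower startup.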

<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.2.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
    <management-center enabled="false">http://localhost:8080/mancenter</management-center>
    <network>
        <port auto-increment="true" port-count="100">5701</port>
        <outbound-ports>
            <!--
            Allowed port range when connecting to other nodes.
            0 or * means use system provided port.
            -->
            <ports>0</ports>
        </outbound-ports>
        <join>
            <multicast enabled="true">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="false">
                <interface>127.0.0.1</interface>
            </tcp-ip>
            <aws enabled="false">
                <access-key>xxxxxxxx</access-key>
                 <secret-key>xxxxxxxxxx</secret-key>
                 <!--optional, default is us-east-1 -->
                <region>us-east-1</region>
                <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
                <host-header>ec2.amazonaws.com</host-header>
                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
                <security-group-name>hazelcast-sg</security-group-name>
                <tag-key>type</tag-key>
                <tag-value>hz-nodes</tag-value>
            </aws>
        </join>
        <interfaces enabled="false">
            <interface>192.168.1.*</interface>
        </interfaces>
        <ssl enabled="false" />
        <socket-interceptor enabled="false" />
        <symmetric-encryption enabled="false">
            <!--
               encryption algorithm such as
               DES/ECB/PKCS5Padding,
               PBEWithMD5AndDES,
               AES/CBC/PKCS5Padding,
               Blowfish,
               DESede
            -->
            <algorithm>PBEWithMD5AndDES</algorithm>
            <!-- salt value to use when generating the secret key -->
            <salt>thesalt</salt>
            <!-- pass phrase to use when generating the secret key -->
            <password>thepass</password>
            <!-- iteration count to use when generating the secret key -->
            <iteration-count>19</iteration-count>
        </symmetric-encryption>
    </network>
    <partition-group enabled="false"/>
     <executor-service name="default">
        <pool-size>16</pool-size>
        <!--Queue capacity. 0 means Integer.MAX_VALUE.-->
        <queue-capacity>0</queue-capacity>
    </executor-service>
</hazelcast>

Daniel Gagnon

Jun 21, 2014, 2:29:08 PM
to haze...@googlegroups.com

Do you get the same results when storing as well?

It seems consistent with what I observed.


Tianyi Cong

Jun 21, 2014, 2:43:12 PM
to haze...@googlegroups.com
This exception happens during bootstrap, which loads all the maps from the DB.
If I increase the DB max pool size to 150, all the entries load successfully. I don't have any process that stores multiple entries at the same time, and processes such as storing a new user work fine after bootstrap; I guess bootstrap is the most intensive process in the whole application right now.
Is this the new way Hazelcast is supposed to work: multiple partition threads loading entries together, each loading just a few entries at a time?
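If many loader threads hitting the DB at once is indeed the intended behavior, I'm thinking of capping how many of them check out a connection at the same time instead of growing the pool. A minimal self-contained sketch of that idea (plain Java, not Hazelcast API; the class name and numbers are made up, and in a real MapStore the permit would be acquired around ds.getConnection()):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedLoader {
    // Cap concurrent "DB checkouts" at the executor pool-size from hazelcast.xml.
    static final Semaphore DB_PERMITS = new Semaphore(16);
    static final AtomicInteger active = new AtomicInteger();
    static final AtomicInteger peak = new AtomicInteger();

    static void loadChunk() throws InterruptedException {
        DB_PERMITS.acquire();                         // wait instead of timing out in c3p0
        try {
            int now = active.incrementAndGet();
            peak.accumulateAndGet(now, Math::max);    // record peak concurrency
            Thread.sleep(5);                          // stand-in for the JDBC work
            active.decrementAndGet();
        } finally {
            DB_PERMITS.release();
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate many partition threads each loading a small chunk.
        ExecutorService pool = Executors.newFixedThreadPool(64);
        for (int i = 0; i < 200; i++) {
            pool.execute(() -> {
                try { loadChunk(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("peak concurrent loads = " + peak.get()); // never above 16
    }
}
```

The semaphore makes the loader threads queue rather than exhaust the connection pool, so the DB max connections would only need to cover the permit count plus whatever the rest of the application uses.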

Tianyi Cong

Jun 23, 2014, 12:25:35 PM
to haze...@googlegroups.com
Has anyone else who started using 3.2.3 faced a similar problem?
Should I increase the database maximum connection size? If so, what would be the approximate ratio between pool-size and database maximum connections?