Terracotta - Cache replication Needed


mohds...@gmail.com

Aug 3, 2016, 10:14:11 AM
to terracotta-oss
  1. Terracotta Server version: 4.3
  2. Configuration: see the tc-config snippet below
  3. JDK version: 1.7
Hi,
I am evaluating the BigMemory Max trial version.
I am trying to set up a replicated cache on multiple nodes in a cluster.
For this I have set up multiple mirror groups in my tc-config.xml, precisely 3.

I populated 100K objects into my Terracotta cluster, but it looks like each of the 3 nodes got around 33K objects, i.e. the data is partitioned, not replicated.
I need my cache to be replicated on all 3 nodes.

Please suggest a solution or config change so that all 100K objects are replicated across all 3 nodes.

TC-CONFIG
<mirror-group group-name="StripeA">
  <server host="localhost" name="Region1TSA">
    ***
  </server>
  <server host="localhost" name="Region2TSA">
    *****
  </server>
</mirror-group>
<mirror-group group-name="StripeB">
  <server host="localhost" name="Region3TSA">
    *****
  </server>
  <server host="localhost" name="Region4TSA">
    ******
  </server>
</mirror-group>
<mirror-group group-name="StripeC">
  <server host="localhost" name="Region5TSA">
    *****
  </server>
  <server host="localhost" name="Region6TSA">
    ****
  </server>
</mirror-group>




Thanks
Shariq

Fabien Sanglier

Aug 3, 2016, 11:45:40 AM
to terraco...@googlegroups.com
Shariq,

Data gets replicated within mirror groups (e.g. Region1TSA and Region2TSA will have the same data) and partitioned across mirror groups (the idea of adding mirror groups is to increase the throughput of cache access from clients). This is expected behavior.
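
If you really do need every server to hold the full data set, you could collapse your config to a single mirror group; all data is then replicated within that one stripe (one active, the rest mirrors). A minimal sketch, with the server element contents elided just as in your snippet:

<servers>
  <mirror-group group-name="SingleStripe">
    <server host="localhost" name="Region1TSA">
      <!-- ports, data dirs, offheap: as in your existing config -->
    </server>
    <server host="localhost" name="Region2TSA">
      <!-- ... -->
    </server>
    <server host="localhost" name="Region3TSA">
      <!-- ... -->
    </server>
  </mirror-group>
</servers>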

On the client side though (e.g. your Java clients using Ehcache), you will indeed get a full view of your cache across all your app servers (all your app servers will see 100K items in the cache).
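
For instance, a minimal client-side ehcache.xml sketch (the server address and cache name here are placeholders, not taken from your setup) that gives every app server that clustered view:

<ehcache>
  <!-- point the client at the TSA; it discovers all stripes from here -->
  <terracottaConfig url="localhost:9510"/>
  <cache name="regionCache"
         maxEntriesLocalHeap="10000"
         eternal="true">
    <!-- marks the cache as clustered/distributed via Terracotta -->
    <terracotta/>
  </cache>
</ehcache>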

Hope that clears things up.
If not, please explain further why you think you need all Terracotta nodes to have the same replicated data...

Thanks.

Fabien


--
Fabien Sanglier
fabiens...@gmail.com

mohds...@gmail.com

Aug 3, 2016, 1:47:44 PM
to terracotta-oss
Thanks, Fabien, for the prompt reply.

If I understand correctly, one node in a mirror group acts as the active node and the rest of the nodes within the mirror group act as passive nodes, so in a cluster the cached data is actively served by only one node per group. Please correct me if I am wrong.

But I was looking for the same data replicated across multiple active nodes, so that under very heavy cache access the traffic for the same replicated data can be handled by two or three active nodes. This might also help in a geographically distributed cluster.

Louis Jacomet

Aug 4, 2016, 12:58:20 AM
to terraco...@googlegroups.com
Hi,

See answers inline.

Regards,
Louis


On Wed, Aug 3, 2016 at 11:17 PM <mohds...@gmail.com> wrote:
Thanks, Fabien, for the prompt reply.

If I understand correctly, one node in a mirror group acts as the active node and the rest of the nodes within the mirror group act as passive nodes, so in a cluster the cached data is actively served by only one node per group. Please correct me if I am wrong.

Your statement is correct, although "passive" may be misleading: the passives are effectively hot standbys, passive only in the sense that they do not handle client requests, just replication from the active.

 

But I was looking for the same data replicated across multiple active nodes, so that under very heavy cache access the traffic for the same replicated data can be handled by two or three active nodes. This might also help in a geographically distributed cluster.

This is not the deployment model of Terracotta 4.x, so there is no way to achieve exactly what you are asking for.
Also note that, because of the sensitivity to network latency, we never recommend a WAN link between the nodes of a cluster.

Remember that Terracotta with Ehcache also provides a near cache on your clients (heap, or heap plus offheap), which can dramatically reduce the load on the servers for read operations if the near cache can hold your hot set of data.
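
As an illustration, a sketch of such a near-cache setup (the cache name and tier sizes are made-up placeholders): the heap and offheap tiers serve reads locally on the client, while the terracotta element keeps the cache clustered:

<cache name="hotDataCache"
       maxEntriesLocalHeap="5000"
       maxBytesLocalOffHeap="1g"
       eternal="true">
  <!-- clustered cache; reads hit the local heap/offheap tiers first -->
  <terracotta/>
</cache>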
 


mohds...@gmail.com

Aug 4, 2016, 10:04:50 PM
to terracotta-oss
Thanks Louis and Fabien, your explanations cleared this up for me.

Danish Gondal

Nov 17, 2020, 6:11:34 AM
to terracotta-oss

I have some questions to clarify my understanding:
Question 1: Will the above configuration work with open-source Terracotta 5.6.4?
Question 2: How many active servers will be available at a time? E.g. Region1TSA, Region3TSA, Region5TSA, or all three?
Question 3: How will a client node interact with Stripe 1, Stripe 2, or Stripe 3? Or is there a single URL to connect to, with the stripes deciding where to place the cache?
Question 4: To confirm: the client will not interact with any of the mirrors in the stripes, right?

Waiting for a reply.
