Cluster, sharding and roles


Eduardo Fernandes

May 30, 2014, 7:20:42 AM
to akka...@googlegroups.com
Hi all.

Probably this is a silly question, but I couldn't find a clear answer in the group or in the docs.

Suppose I have a cluster of 4 nodes with 2 roles (2 nodes per role). How could I create two shardings, each one sending messages only to the nodes belonging to a particular role? The idea is to add a new node with a particular role and let cluster sharding distribute the work among all the nodes belonging to that role. I suppose I could create two shard regions, one per role, and assign each sharding to a role in some way?

I'm using Java and Akka 2.3.3.

Many thanks for your help.

Eduardo.


Patrik Nordwall

Jun 2, 2014, 4:24:34 AM
to akka...@googlegroups.com
Hi Eduardo,

The ClusterSharding extension supports configuring one role, so that sharding runs on a subset of the nodes, but that is not what you are looking for. Instead of using the ClusterSharding extension you may start the actors yourself and thereby specify the roles.
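For reference, that single-role restriction is one setting in the contrib module's configuration (a sketch based on the 2.3 contrib reference.conf; "backend-a" is a placeholder role name):

  akka.contrib.cluster.sharding {
    # Entries run only on cluster nodes carrying this role;
    # an empty string means all nodes are used.
    role = "backend-a"
  }

Because it is a single global setting, it cannot assign a different role to each sharding, which is why you need the manual approach.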
See:
ShardCoordinatorSupervisor.props
ShardCoordinator.props
ShardRegion.props

Note that the ShardCoordinatorSupervisor is supposed to be started with a ClusterSingletonManager. See here: https://github.com/akka/akka/blob/v2.3.3/akka-contrib/src/main/scala/akka/contrib/pattern/ClusterSharding.scala#L360
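A rough, untested Java sketch of that wiring, mirroring the linked Scala code. The parameter lists, and in particular the ClusterSingletonManager.defaultProps and ShardRegion.props overloads, are assumptions reconstructed from that file and should be verified against the 2.3.3 contrib sources:

  import scala.concurrent.duration.Duration;
  import akka.actor.ActorRef;
  import akka.actor.ActorSystem;
  import akka.actor.PoisonPill;
  import akka.actor.Props;
  import akka.contrib.pattern.ClusterSingletonManager;
  import akka.contrib.pattern.ShardCoordinator;
  import akka.contrib.pattern.ShardCoordinatorSupervisor;
  import akka.contrib.pattern.ShardRegion;

  public class RoleAwareSharding {
    // Starts one coordinator singleton and one shard region, both
    // restricted to the given cluster role.
    public static ActorRef startRegionForRole(
        ActorSystem system, String typeName, String role,
        Props entryProps, ShardRegion.MessageExtractor extractor) {

      Props coordinatorProps = ShardCoordinator.props(
          Duration.create(60, "seconds"),    // handOffTimeout
          Duration.create(10, "seconds"),    // rebalanceInterval
          Duration.create(3600, "seconds"),  // snapshotInterval
          new ShardCoordinator.LeastShardAllocationStrategy(
              10, 3)); // rebalance-threshold, max-simultaneous-rebalance

      Props singletonProps = ShardCoordinatorSupervisor.props(
          Duration.create(5, "seconds"),     // coordinatorFailureBackoff
          coordinatorProps);

      // Exactly one coordinator is kept alive among the nodes that
      // carry the role.
      system.actorOf(
          ClusterSingletonManager.defaultProps(
              singletonProps, "singleton", PoisonPill.getInstance(), role),
          typeName + "Coordinator");

      // The supervisor creates its coordinator child under the name
      // "coordinator" (see the linked source).
      String coordinatorPath =
          "/user/" + typeName + "Coordinator/singleton/coordinator";

      // The Java overload is assumed to take the role as a String and a
      // MessageExtractor; the Scala one takes Option[String] plus
      // idExtractor/shardResolver. Verify against your version.
      return system.actorOf(
          ShardRegion.props(typeName, entryProps, role, coordinatorPath,
              Duration.create(2, "seconds"), // retryInterval
              100000,                        // bufferSize
              extractor),
          typeName);
    }
  }

Calling startRegionForRole twice, once per role and type name, would give you the two independent shardings from your first mail.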

Cheers,
Patrik






--

Patrik Nordwall
Typesafe - Reactive apps on the JVM
Twitter: @patriknw


Eduardo Fernandes

Jun 2, 2014, 7:10:39 AM
to akka...@googlegroups.com
Many thanks, Patrik.

I'm afraid that if I manage the actors directly I'll lose all the cluster benefits, including the distribution of the objectId -> physical node mapping. I think I can reduce the problem to a case where I avoid creating new actors on a particular node in the cluster and then, once all its actors are effectively inactive, shut the node down.

I don't know where the mapping of entryId -> physical node lives.

Could I override the distribution logic somehow so that I can control on which physical node an actor will be instantiated in the cluster? That would be perfect. I have overridden the allocation policy with an AbstractShardAllocationStrategy subclass modeled on LeastShardAllocationStrategy, but I couldn't find where to adjust how the cluster assigns physical nodes to a particular sharding entry.

Many thanks, Patrik, for your help.

Regards.

Patrik Nordwall

Jun 2, 2014, 9:13:00 AM
to akka...@googlegroups.com
On Mon, Jun 2, 2014 at 1:10 PM, Eduardo Fernandes <edu...@gmail.com> wrote:
> Many thanks, Patrik.

> I'm afraid that if I manage the actors directly I'll lose all the cluster benefits, including the distribution of the objectId -> physical node mapping.

That would not change. The ClusterSharding extension "only" creates exactly the same actors for you, in a convenient way. I understand that it may seem overwhelming to create these actors yourself, but it is possible (and that is the reason the props methods are public).
 
> I think I can reduce the problem to a case where I avoid creating new actors on a particular node in the cluster and then, once all its actors are effectively inactive, shut the node down.

> I don't know where the mapping of entryId -> physical node lives.

> Could I override the distribution logic somehow so that I can control on which physical node an actor will be instantiated in the cluster? That would be perfect. I have overridden the allocation policy with an AbstractShardAllocationStrategy subclass modeled on LeastShardAllocationStrategy, but I couldn't find where to adjust how the cluster assigns physical nodes to a particular sharding entry.

Yes, that is controlled by the information returned by the AbstractShardAllocationStrategy. The currentShardAllocations parameter that is passed in contains the ActorRefs of the ShardRegion actors, and you can use the addresses of those refs to decide which nodes to use. You must somehow correlate those addresses with the addresses of the cluster members if you want to use the cluster role information.
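A minimal, untested sketch of such a strategy in Java, assuming the 2.3 contrib API in which AbstractShardAllocationStrategy exposes the allocations as a java.util.Map with Scala IndexedSeq values (verify the exact signatures against your version):

  import java.util.HashSet;
  import java.util.Map;
  import java.util.Set;

  import scala.collection.immutable.IndexedSeq;

  import akka.actor.ActorRef;
  import akka.actor.ActorSystem;
  import akka.actor.Address;
  import akka.cluster.Cluster;
  import akka.cluster.Member;
  import akka.contrib.pattern.ShardCoordinator;

  // A hypothetical role-aware strategy: allocate shards only to regions
  // whose node carries the given cluster role.
  public class RoleAllocationStrategy
      extends ShardCoordinator.AbstractShardAllocationStrategy {

    private final Cluster cluster;
    private final String role;

    public RoleAllocationStrategy(ActorSystem system, String role) {
      this.cluster = Cluster.get(system);
      this.role = role;
    }

    @Override
    public ActorRef allocateShard(ActorRef requester, String shardId,
        Map<ActorRef, IndexedSeq<String>> currentShardAllocations) {
      // Collect the addresses of the cluster members carrying the role.
      Set<Address> roleNodes = new HashSet<Address>();
      for (Member m : cluster.state().getMembers())
        if (m.hasRole(role))
          roleNodes.add(m.address());

      // Among the matching regions, pick the one with the fewest shards.
      ActorRef best = null;
      int bestCount = Integer.MAX_VALUE;
      for (Map.Entry<ActorRef, IndexedSeq<String>> e :
          currentShardAllocations.entrySet()) {
        if (roleNodes.contains(addressOf(e.getKey()))
            && e.getValue().size() < bestCount) {
          best = e.getKey();
          bestCount = e.getValue().size();
        }
      }
      // Fall back to the requesting region if no node matched.
      return best != null ? best : requester;
    }

    @Override
    public Set<String> rebalance(
        Map<ActorRef, IndexedSeq<String>> currentShardAllocations,
        Set<String> rebalanceInProgress) {
      return new HashSet<String>(); // no proactive rebalancing here
    }

    private Address addressOf(ActorRef region) {
      Address a = region.path().address();
      // A region running on this node has a purely local ActorRef whose
      // address carries no host/port; substitute the self address so it
      // can be compared with the member addresses.
      return a.host().isEmpty() ? cluster.selfAddress() : a;
    }
  }

The addressOf step is the correlation mentioned above: it is what lets you match ShardRegion ActorRefs against cluster member addresses and their roles.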

The AbstractShardAllocationStrategy does not allocate locations for individual entries. That is always done for a group of entries, a.k.a. a shard. You define the mapping between entry ids (messages) and shards in the MessageExtractor.
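For example, a sketch of that mapping with the Java MessageExtractor, assuming a hypothetical Envelope(id, payload) message class:

  import akka.contrib.pattern.ShardRegion;

  // Envelope is a hypothetical application message carrying an entry id.
  ShardRegion.MessageExtractor extractor = new ShardRegion.MessageExtractor() {
    @Override
    public String entryId(Object message) {
      return (message instanceof Envelope) ? ((Envelope) message).id : null;
    }

    @Override
    public Object entryMessage(Object message) {
      // The payload that is actually delivered to the entry actor.
      return (message instanceof Envelope)
          ? ((Envelope) message).payload : message;
    }

    @Override
    public String shardId(Object message) {
      // Entries are grouped into shards; here by hashing the entry id
      // into 10 shards. All entries of one shard live on the same node.
      return String.valueOf(Math.abs(entryId(message).hashCode()) % 10);
    }
  };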

/Patrik
 

> Many thanks, Patrik, for your help.
>
> Regards.




Eduardo Fernandes

Jun 2, 2014, 9:24:58 AM
to akka...@googlegroups.com
Many thanks, Patrik, for your time!

I'll check the addresses and let you know. With this info I could, in theory, implement a smooth node shutdown.

Best regards!



Eduardo Fernandes

Jun 2, 2014, 4:28:33 PM
to akka...@googlegroups.com
It worked perfectly!

Many thanks for your help!

Regards.

Luis Medina

Jun 2, 2014, 4:49:24 PM
to akka...@googlegroups.com
Hi Eduardo,

I recently implemented my own version of a ShardAllocationStrategy and made use of the ShardRegion's addresses. I made a post about it here: https://groups.google.com/forum/#!topic/akka-user/7p_fkEFJqHw

It doesn't solve your exact problem but maybe it will give you some ideas. Also, the code is in Java so if you're using Scala you might have to do a bit of translating.

Eduardo Fernandes

Jun 2, 2014, 6:23:16 PM
to akka...@googlegroups.com
Nice post!

I'll use your ideas to implement the progressive scale-down.

Many thanks for your info!