I am experimenting with the cluster sharding activator, but am having lots of issues with it. I have tried updating the activator to Akka 2.3.1, but to no avail (and other issues show up, such as the one described here: https://www.assembla.com/spaces/akka/simple_planner#/ticket:3967).

Problems noticed so far:

1) 100% of the time, the activator sends a lot of messages to the ClusterSystem deadLetters on startup of the seed node. Here is one example:

[INFO] [03/31/2014 09:37:00.654] [ClusterSystem-akka.actor.default-dispatcher-2] [akka://ClusterSystem/deadLetters] Message [akka.cluster.InternalClusterAction$InitJoin$] from Actor[akka://ClusterSystem/system/cluster/core/daemon/firstSeedNodeProcess#-438400827] to Actor[akka://ClusterSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[... many more akka.cluster.InternalClusterAction$InitJoin$ messages ...]
[INFO] [03/31/2014 09:37:05.518] [ClusterSystem-akka.actor.default-dispatcher-14] [akka://ClusterSystem/system/cluster/core/daemon/firstSeedNodeProcess] Message [akka.dispatch.sysmsg.Terminate] from Actor[akka://ClusterSystem/system/cluster/core/daemon/firstSeedNodeProcess#-438400827] to Actor[akka://ClusterSystem/system/cluster/core/daemon/firstSeedNodeProcess#-438400827] was not delivered. [6] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[... JOINING and Up messages ...]
[INFO] [03/31/2014 09:37:06.516] [ClusterSystem-akka.actor.default-dispatcher-16] [akka://ClusterSystem/user/sharding/AuthorListingCoordinator/singleton] Message [akka.contrib.pattern.ShardCoordinator$Internal$Register] from Actor[akka://ClusterSystem/user/sharding/AuthorListing#1471529820] to Actor[akka://ClusterSystem/user/sharding/AuthorListingCoordinator/singleton] was not delivered. [7] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [03/31/2014 09:37:06.516] [ClusterSystem-akka.actor.default-dispatcher-16] [akka://ClusterSystem/user/sharding/PostCoordinator/singleton] Message [akka.contrib.pattern.ShardCoordinator$Internal$Register] from Actor[akka://ClusterSystem/user/sharding/Post#589187748] to Actor[akka://ClusterSystem/user/sharding/PostCoordinator/singleton] was not delivered. [8] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

2) Using the default shared LevelDB journal configuration, sometimes (but not always) when the Bot node is started, the seed node goes nuts:
[INFO] [03/31/2014 09:46:00.768] [ClusterSystem-akka.actor.default-dispatcher-3] [Cluster(akka://ClusterSystem)] Cluster Node [akka.tcp://ClusterSystem@127.0.0.1:2551] - Leader is moving node [akka.tcp://ClusterSystem@127.0.0.1:50327] to [Up]
Uncaught error from thread [ClusterSystem-akka.remote.default-remote-dispatcher-24] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[ClusterSystem]
Uncaught error from thread [ClusterSystem-akka.actor.default-dispatcher-17] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[ClusterSystem]
Uncaught error from thread [ClusterSystem-akka.actor.default-dispatcher-28] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[ClusterSystem]
Uncaught error from thread [ClusterSystem-akka.actor.default-dispatcher-29] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[ClusterSystem]
[... keeps going forever ...]
^C
Java HotSpot(TM) 64-Bit Server VM warning: Exception java.lang.OutOfMemoryError occurred dispatching signal SIGINT to handler - the VM may need to be forcibly terminated

3) When it is working, the shared LevelDB journal seems to work reasonably well (except for the SPOF on the first node). However, when I change to either of the MongoDB replicated journals in contrib and test various combinations of node failures, things go nuts: DuplicateKeyExceptions (looping infinitely), OutOfMemoryErrors, and other weirdness. I know these are early implementations, but the similarity of the failures across the two different journal implementations makes me think the problems may not be with the journals themselves, but with akka-persistence instead.

4) When restarting the Bot node, there are lots of WARNings about unknown UIDs. The following message keeps repeating for Bots that have been shut down -- i.e. the node never appears to be actually removed from the cluster, even after the entire cluster is restarted:
[WARN] [03/31/2014 10:01:40.280] [ClusterSystem-akka.remote.default-remote-dispatcher-5] [Remoting] Association to [akka.tcp://ClusterSystem@127.0.0.1:50327] with unknown UID is reported as quarantined, but address cannot be quarantined without knowing the UID, gating instead for 5000 ms.
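For reference, the /user/sharding/Post and /user/sharding/PostCoordinator/singleton paths in the dead letters in (1) come from how the template starts its shard regions. A rough sketch from memory, assuming the akka-contrib 2.3.x API (Post, Post.idExtractor and Post.shardResolver are the sample's definitions, not something I have verified line by line):

    // Sketch of how the sample starts a shard region with akka-contrib 2.3.x.
    // typeName "Post" yields the region at /user/sharding/Post and its
    // coordinator singleton at /user/sharding/PostCoordinator/singleton.
    import akka.actor.{ ActorSystem, Props }
    import akka.contrib.pattern.ClusterSharding

    val system = ActorSystem("ClusterSystem")
    val postRegion = ClusterSharding(system).start(
      typeName = "Post",
      entryProps = Some(Props[Post]),
      idExtractor = Post.idExtractor,
      shardResolver = Post.shardResolver)

The Register dead letters presumably appear because the region retries registration before the coordinator singleton is up on the oldest node.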
Hi Raman and Michael,

I distilled this to 2 remaining issues:

1. NoSuchElementException at ClusterSharding.scala:1055
That looks like a bug. Please create a ticket with a description of how to reproduce it.

2. InvalidActorNameException: actor name must not be empty, at ClusterSharding.scala:802
That means that the id is "", which is not meaningful and not supported. We should add a check and handle it in a better way. Ticket, please.

Have I missed anything else?

/Patrik
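For #2, until such a check exists, one possible application-side guard is to let the idExtractor match only non-empty ids. A sketch only, assuming (hypothetically) that the sample's commands expose a postId field:

    // Sketch: refuse to extract an empty entry id, so "" never becomes an
    // actor name. Post.Command with a postId field is an assumed shape here.
    import akka.contrib.pattern.ShardRegion

    val idExtractor: ShardRegion.IdExtractor = {
      case cmd: Post.Command if cmd.postId.nonEmpty => (cmd.postId, cmd)
    }

Messages with an empty id then simply fall outside the partial function instead of triggering InvalidActorNameException.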
On Tue, Apr 1, 2014 at 4:23 PM, Raman Gupta <rocke...@gmail.com> wrote:
All right, at least I figured out the OOM problem. The sbt packaged with Fedora 20 does not set the perm gen size, so it uses the default of 64 MB, which is too small for sbt / Akka. That was probably causing a lot of my issues (a sketch of the sbt-side fix follows the log below). In case anyone cares, I created:

That took care of a lot of the weirdness! There are still issues, however. Here is another error I found by starting and stopping the 2552 node several times, specifically stopping it immediately after a "New post saved:" message.

Seen on the bot:
[INFO] [04/01/2014 10:20:58.686] [ClusterSystem-akka.actor.default-dispatcher-22] [akka.tcp://ClusterSystem@127.0.0.1:56185/user/sharding/AuthorListingCoordinator] Member removed [akka.tcp://ClusterSystem@127.0.0.1:2552]
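As for the perm gen workaround mentioned above, a minimal build.sbt sketch, assuming sbt 0.13 and a forked run (256m is an assumed value, not a tested minimum):

    // build.sbt (sketch): give forked runs an explicit perm gen ceiling, since
    // the Fedora-packaged sbt launcher does not set -XX:MaxPermSize itself.
    fork in run := true
    javaOptions in run += "-XX:MaxPermSize=256m"

Setting SBT_OPTS in the environment would achieve the same for sbt's own JVM.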
There is also one other (relatively minor) issue: the 8-10 dead letters on cluster startup. Do you consider that a bug? If so, I shall create a ticket for that as well.
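In the meantime, the noise can be turned down with the settings the log output itself names. A sketch (the values here are arbitrary examples, not recommendations):

    // Sketch: adjust dead-letter logging via the settings named in the
    // startup log output above. The values are arbitrary examples.
    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    val system = ActorSystem("ClusterSystem", ConfigFactory.parseString("""
      akka.log-dead-letters = 10
      akka.log-dead-letters-during-shutdown = off
    """).withFallback(ConfigFactory.load()))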
Hello Patrik,
I have created a ticket:
https://www.assembla.com/spaces/akka/tickets/3974
Please let me know if you need anything else.
I created https://www.assembla.com/spaces/akka/tickets/3975 re #2.