conf.setMaxTaskParallelism(80);
conf.setNumAckers(4);
For the above settings I don't see any warnings.
conf.setNumAckers(4);
conf.setNumWorkers(4);
PS: the storm.local.dir is not shared among supervisors, since they are on different nodes.
Am I missing something? Any help greatly appreciated.
Thanks,
Shrikar
Hi Shrikar,

I also ran into this problem. Could you please tell me how you solved it? It is very confusing and I have no clue. Any help greatly appreciated.

Thanks,
Saisai
On Sunday, October 14, 2012 at 3:32:06 AM UTC+8, Shrikar archak wrote:
I have a similar setup and am experiencing the same problem. I have three AWS EC2 instances (provisioned by Chef/OpsWorks): one Nimbus and two Supervisor nodes. When I execute the ExclamationTopology from the tutorial on a single Supervisor it works as expected. However, if this topology executes on both Supervisors I see a lot of the dropped-messages errors, mixed in with stdout from the PrintingBolt I'm using:

[2013-05-01 16:47:03,651] worker [WARN] Received invalid messages for unknown tasks. Dropping...
[2013-05-01 16:47:03,652] worker [WARN] Received invalid messages for unknown tasks. Dropping...
[2013-05-01 16:47:03,655] worker [WARN] Received invalid messages for unknown tasks. Dropping...
[2013-05-01 16:47:03,658] STDIO [INFO] source: exclaim2:6, stream: default, id: {}, [jackson!!!!!!]
[2013-05-01 16:47:03,756] STDIO [INFO] source: exclaim2:9, stream: default, id: {}, [mike!!!!!!]
[2013-05-01 16:47:03,758] worker [WARN] Received invalid messages for unknown tasks. Dropping...
I've read the Troubleshooting wiki as well as the above recommendations for setting /etc/hosts properly, but to no avail. AWS OpsWorks automatically adds records to /etc/hosts, so each machine has something very similar to the following:

# This file was generated by OpsWorks
# any manual changes will be removed on the next update.
127.0.0.1 localhost localhost.localdomain
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
# OpsWorks Layer State
127.0.0.1 storm-nimbus.localdomain storm-nimbus
10.141.143.158 storm-nimbus
<public IP> storm-nimbus-ext
10.137.30.236 storm-supervisor-1
<public IP> storm-supervisor-1-ext
10.142.132.134 storm-supervisor-2
<public IP> storm-supervisor-2-ext
10.209.138.3 zookeeper-1
<public IP> zookeeper-1-ext
10.254.226.114 zookeeper-2
<public IP> zookeeper-2-ext
10.209.135.84 zookeeper-3
<public IP> zookeeper-3-ext

As far as I can tell, everything in here is sane and works correctly. The one thing that sticks out in my mind is that the Storm UI indicates both Supervisor nodes report themselves as "localhost" (http://cl.ly/image/1I1D2D0W3T46). Likewise, I notice that whenever any of the Storm components (Nimbus and Supervisor) start up, the log output from the ZooKeeper client lists their hostname as localhost, as can be seen here:

[2013-05-01 17:36:30,742] ZooKeeper [INFO] Client environment:host.name=localhost

Does anyone have any ideas about what is happening here?

Thanks for listening!
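One thing worth trying for the "host.name=localhost" symptom (a hedged suggestion, not a confirmed fix from this thread): Storm resolves the machine's own hostname when workers register, and if that resolution yields "localhost", workers on other nodes end up sending tuples to an address that doesn't match any registered task, which is consistent with the "Received invalid messages for unknown tasks" warnings. Storm provides a storm.local.hostname setting in storm.yaml to override the detected name. A per-node sketch, reusing the hostnames already present in the /etc/hosts above:

```yaml
# storm.yaml on storm-supervisor-1 (sketch only; set the name other
# nodes use to reach this machine, per the /etc/hosts entries above)
storm.local.hostname: "storm-supervisor-1"
```

On storm-supervisor-2 the value would be "storm-supervisor-2", and on the Nimbus node "storm-nimbus". After changing it, restart the daemons and check whether the Storm UI still lists the supervisors as "localhost".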