Hi Sean,
On the wire:
You can look into https://github.com/akka/akka/tree/master/akka-remote/src/main/protobuf for what exactly we pass around on the wire.
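From memory, the envelope defined there (WireFormats.proto) looks roughly like the sketch below; field numbers and names may differ, so treat the linked file as authoritative:

message RemoteEnvelope {
  required ActorRefData recipient = 1;
  required SerializedMessage message = 2;
  optional ActorRefData sender = 4;
  optional fixed64 seq = 5;
}

message ActorRefData {
  required string path = 1;
}

The recipient (and optional sender) ActorRefData carries the full actor path as a string, which is where the per-message overhead discussed below comes from.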
Which serializer is used:
Enable debug logging; we log this in Serialization.scala: log.debug("Using serializer[{}] for message [{}]", ser.getClass.getName, clazz.getName)
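A minimal way to switch that on (standard Akka config; your logging backend must also let DEBUG through):

akka {
  loglevel = "DEBUG"
}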
Other hints:
Are you sure you're not CPU-bound (or bound on something else)? You should be able to saturate the network.
Have you played around with the number of threads, etc.?
What hardware are you benchmarking on?
[DEBUG] [08/06/2014 22:19:11.836] [0e6fb647-7893-4328-a335-5e26e2ab080c-akka.actor.default-dispatcher-4] [akka.serialization.Serialization(akka://0e6fb647-7893-4328-a335-5e26e2ab080c)] Using serializer[akka.serialization.JavaSerializer] for message [akka.dispatch.sysmsg.Terminate]
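As an aside: if that output shows JavaSerializer for your own message types as well, you can bind a more compact serializer via standard Akka config. A sketch, where com.example.MyMessage is a hypothetical protobuf-generated class:

akka.actor {
  serializers {
    proto = "akka.remote.serialization.ProtobufSerializer"
  }
  serialization-bindings {
    "com.example.MyMessage" = proto # must be a protobuf Message for this serializer
  }
}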
You can do wire-level compression.
On Thu, Aug 7, 2014 at 10:09 AM, Endre Varga <endre...@typesafe.com> wrote:
On Thu, Aug 7, 2014 at 10:05 AM, √iktor Ҡlang <viktor...@gmail.com> wrote:
Or add compression.
This is the Akka wire-level envelope; it cannot be directly controlled by users (unless someone writes a new transport, of course).
-Endre
On Aug 7, 2014 9:52 AM, "Endre Varga" <endre...@typesafe.com> wrote:
Hi Sean,

Unfortunately there is no way to reduce this overhead without changing the wire-level format, which we cannot do right now. As you correctly observed, practically all of the overhead comes from the paths of the destination and sender actors. In the future we plan to implement a scheme that allows the sender to abbreviate the most common paths to a single number, but this needs a new protocol.

So the answer, currently, is that you cannot reduce this overhead without introducing some batching scheme yourself: instead of sending MyMessage you can send Array[MyMessage], so the cost of the recipient path is only paid once for the batch, not for each individual message; i.e. you can amortize the overhead.

-Endre
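A minimal sketch of such a batching scheme (hypothetical names, not code from this thread; a real version would also flush on a timer so a partially filled batch is not held back indefinitely):

import akka.actor.{Actor, ActorRef}

// Buffers outgoing messages and forwards them to the remote target as one
// Vector, so the recipient path on the wire is paid once per batch instead
// of once per message. The receiving side unwraps and handles each element.
class BatchingProxy(target: ActorRef, batchSize: Int) extends Actor {
  private var buffer = Vector.empty[Any]

  def receive = {
    case msg =>
      buffer :+= msg
      if (buffer.size >= batchSize) {
        target ! buffer // one remote send carries the whole batch
        buffer = Vector.empty
      }
  }
}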
On Thu, Aug 7, 2014 at 8:11 AM, Sean Zhong <cloc...@gmail.com> wrote:
Is it possible to reduce the average message overhead?
200 bytes of extra cost per remote message doesn't look good...
On Thursday, August 7, 2014 1:45:12 PM UTC+8, Sean Zhong wrote:
Hi Michael,

I used Wireshark to capture the traffic. I found that for each message sent (the message is sent with option noSender), there is an extra cost for the ActorPath. For example, in the capture below, the message payload length is 100 bytes, but there is also a target actor path of 221 bytes (the path shown below), which is much bigger than the message itself. Can the ActorPath overhead be reduced?
akka.tcp://app0executor0@192.168.1.53:51582/remote/akka.tcp/2120193a-e10b-474e-bccb-8ebc4b3a0...@192.168.1.53:48948/remote/akka.tcp/cluster@192.168.1.54:43676/user/master/Worker1/app_0_executor_0/group_1_task_0#-768886794o
h1880512348131407402383073127833013356174562750285568666448502416582566533241241053122856142164774120:
--
Cheers,
√
Hi Viktor,

About wire-compression, do you mean this?

akka {
remote {
compression-scheme = "zlib" # Options: "zlib" (lzf to come), leave out for no compression
zlib-compression-level = 6 # Options: 0-9 (1 being fastest and 9 being the most compressed), default is 6
}
}
I meant a compressed link/interface.
Hi Sean,

To summarise our thoughts about the on-the-wire overhead:

* Viktor hinted at this technique: http://en.wikipedia.org/wiki/TCP_acceleration which requires special hardware AFAIK, but is able to transparently compress data sent over TCP connections. It's probably costly, but it can be applied without app modifications.

* We are aware of the large overhead and want to fix it soon; this will require changes in the remoting protocol, so it won't happen sooner than 2.4.x. The biggest part of the overhead is the ActorPath, and Endre has suggested a neat "actor path shortening" trick which will save a lot of space on the wire for popular senders / receivers.

* You could implement your own Transport (like our NettyTransport) and hook additional gzip stages into the Netty pipeline there... This is quite some work, but depending on your use-case perhaps it's worth it? For gzip to gain anything here, it would have to span multiple messages.
Cheers,
Konrad 'ktoso' Malawski
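To illustrate Konrad's last point with a hypothetical sketch (not code from the thread): gzip only pays off when it spans several messages, because a single ~100-byte payload barely compresses past the gzip header overhead:

import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

object GzipDemo extends App {
  // Compress an in-memory byte array with gzip.
  def gzip(bytes: Array[Byte]): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val gz = new GZIPOutputStream(bos)
    gz.write(bytes)
    gz.close()
    bos.toByteArray
  }

  val payload = Array.fill[Byte](100)(42) // stand-in for one 100-byte message
  println(gzip(payload).length)                          // single message: header overhead dominates
  println(gzip(Array.fill(100)(payload).flatten).length) // 100 messages batched: compresses well
}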
I was just reading through your discussion. I was quite curious: how did you measure message throughput in your app?
-prakhyat m m
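For what it's worth, one simple way to measure it (a hypothetical harness; Sean's actual setup is not shown in this thread) is to time how long N fire-and-forget sends take to be fully processed by a counting actor:

import akka.actor.{Actor, ActorSystem, Props}

// Counts incoming messages and reports messages/second once all n arrived.
class Counter(n: Int, startNanos: Long) extends Actor {
  private var seen = 0
  def receive = {
    case _ =>
      seen += 1
      if (seen == n) {
        val elapsed = (System.nanoTime() - startNanos) / 1e9
        println(f"$n%d messages in $elapsed%.2f s => ${n / elapsed}%.0f msg/s")
        context.system.shutdown() // Akka 2.3-era API
      }
  }
}

object ThroughputBench extends App {
  val n = 1000000
  val system = ActorSystem("bench")
  val counter = system.actorOf(Props(new Counter(n, System.nanoTime())))
  (1 to n).foreach(_ => counter ! "ping") // measures send + processing together
}

Pointing the counter at a remotely deployed actor would include the wire overhead discussed above in the measurement.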
Does 140MB/s include TCP, IP, and Ethernet header data?
Are you communicating across a local network or across the internet?
The greater the distance your packets have to travel (specifically, the number of hops), the higher the chance that they will get dropped and retransmitted.
Patrik Nordwall
Typesafe - Reactive apps on the JVM
Twitter: @patriknw