Web-Stomp to JS consumer: growing STOMP frames issue


Jakub S

Dec 2, 2015, 9:17:47 AM
to rabbitmq-users
Hi!

I'm quite new to RabbitMQ, but I'm currently using it for a school project.

The setup looks like this:
1. There is a publisher written in Erlang on one side
2. The publisher sends messages to the broker, which has the Web-Stomp plugin installed
3. The consumer on the other side is written in JavaScript using SockJS over WebSockets (exactly as in the examples)

The problem is that when the publisher sends a lot of messages (it was tested by sending 100k messages as fast as possible, every message approximately the same size), the consumer receives STOMP frames whose length keeps growing. When a frame body is examined, it turns out to pack many messages, more with every consecutive frame.

This is a screen cap from the Chrome debugger:

[screenshot not preserved in the archive]
As you can see, the last frame is very large and takes approx. 23 seconds to download. During this time the application is stuck waiting for the WebSocket to receive and "split out" the messages so they can be processed, which is not very efficient.
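The "split out" step above can be sketched as follows. This is a minimal illustration, not the actual SockJS/STOMP client code; the helper name `splitStompFrames` is made up, and real clients also handle partial frames and binary bodies:

```javascript
// Sketch: STOMP frames are NUL-terminated, so when several frames arrive
// batched in one WebSocket payload, a client can split them on '\u0000'.
// Simplified: ignores partial frames and significant whitespace in bodies.
function splitStompFrames(payload) {
  return payload
    .split('\u0000')                  // one entry per frame
    .map(function (f) { return f.trim(); })   // drop heart-beat newlines
    .filter(function (f) { return f.length > 0; }); // drop the trailing empty entry
}
```

With a batched payload of N messages this yields N separate frames, which is exactly the per-message work the application was stuck doing during those 23 seconds.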


Is the source of the problem the Web-Stomp plugin, which buffers messages (if there are many) and sends one huge frame to the consumer once it accumulates "enough" of them? Or is it somewhere else?


I've been trying to set the "max-length-bytes" queue policy to a smaller number, but that didn't change anything, and I was also unable to find any related issues online.
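For reference, such a policy would be set along these lines (the policy and queue names and the limit here are made-up values). Note that max-length-bytes caps the total bytes of ready messages held in the queue on the broker, not the size of frames delivered to consumers, which is presumably why it had no effect:

```shell
# Hypothetical example: cap the queue at ~100 kB of ready messages.
# This limits queue depth on the broker side; it does not control how
# many messages get batched into one WebSocket frame on delivery.
rabbitmqctl set_policy my-limit "^my-queue$" '{"max-length-bytes":100000}' --apply-to queues
```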


I would appreciate any hints or suggestions on what to do.


Regards, Jakub


Michael Klishin

Dec 2, 2015, 9:20:20 AM
to rabbitm...@googlegroups.com, Jakub S
 On 2 December 2015 at 17:17:50, Jakub S (slupe...@gmail.com) wrote:
> Is the problem source within the Web-Stomp plugin that buffers
> the messages (if there are many) and sends one huge frame to the
> consumer if it accumulate "enough" messages? Or is it somewhere
> else?
>
>
> I've been trying to set the "max-length-bytes" queue policy
> to smaller number, but that didn't change anything and also I
> was not able to find any related issues online.

If you want to limit deliveries, use manual acknowledgements ("client" or "client-individual"
subscriptions in STOMP parlance; see http://stomp.github.io/stomp-specification-1.2.html#ACK)
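With the Stomp.js browser client used in the Web-Stomp examples, that looks roughly like this. A sketch, assuming the standard Stomp.js/SockJS API; the URL, credentials, destination, and `process` handler are placeholders:

```javascript
// Sketch, assuming the Stomp.js client from the Web-Stomp examples.
// With ack: 'client', the broker stops delivering once the window of
// unacknowledged messages is full, until the client acks them.
var ws = new SockJS('http://127.0.0.1:15674/stomp'); // placeholder URL
var client = Stomp.over(ws);

client.connect('guest', 'guest', function () {
  client.subscribe('/queue/test', function (message) { // placeholder destination
    process(message.body); // hypothetical application handler
    message.ack();         // manual acknowledgement
  }, { ack: 'client' });
});
```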
--
MK

Staff Software Engineer, Pivotal/RabbitMQ


Jakub S

Dec 3, 2015, 2:30:52 AM
to rabbitmq-users
Thanks for your reply and the suggestion.

I set the subscription to ack: "client", but it didn't help. The consumer still receives frames the same way: larger and larger frames with more and more messages packed in.
The only difference now is that an ACK frame is sent for every message processed out of the big frame:



Michael Klishin

Dec 3, 2015, 4:07:37 AM
to rabbitm...@googlegroups.com, Jakub S
On 3 December 2015 at 10:30:54, Jakub S (slupe...@gmail.com) wrote:
> I set up the subscription to be ack: "client" but it didn't help.
> The way the consumer receives frame is still the same: larger
> and larger frame with more and more messages packed in.
> The only difference now is that the ACK frames are sent for every
> message it processed from the big frame:

OK. Web STOMP was significantly reworked in 3.6.0, so please give
https://github.com/rabbitmq/rabbitmq-server/releases/tag/rabbitmq_v3_6_0_rc1 a try,
including the raw WebSocket endpoint.
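For anyone following along, switching to the raw WebSocket endpoint is essentially a one-line change on the client side. A sketch with placeholder host and port; the `/ws` path is the raw endpoint added alongside the SockJS one:

```javascript
// Pre-3.6 SockJS endpoint:
//   var ws = new SockJS('http://127.0.0.1:15674/stomp');
// 3.6.0 raw WebSocket endpoint (placeholder host/port):
var ws = new WebSocket('ws://127.0.0.1:15674/ws');
var client = Stomp.over(ws);
// connect/subscribe as before
```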

Jakub S

Dec 4, 2015, 2:21:18 AM
to rabbitmq-users, slupe...@gmail.com
This solved the issue! Thank you for your help!

The program is no longer idle: the frames are smaller now, so there is no "downloading hang time".

One more question, though: are there any published figures on the performance of the Web-Stomp plugin?
Message consumption seems much slower than it was with this kind of setup on the previous RabbitMQ version.

For comparison: with 3.5.6 it took around 40 seconds to receive 100k messages (255 bytes each, including headers), while with 3.6.0 RC1 it takes around 2.2 minutes.

Michael Klishin

Dec 4, 2015, 3:30:02 AM
to rabbitm...@googlegroups.com, Jakub S
On 4 December 2015 at 10:21:21, Jakub S (slupe...@gmail.com) wrote:
> One more question tough: is there any specification saying
> how is the performance of Web-Stomp plugin?
> The message consuming speed seems to be much lower than it used
> to be in this kind of setup with the previous Rabbit version.
>
> For comparison it was around 40 seconds to get 100k messages(of
> size 255 bytes incl.headers) with 3.5.6 and now with 3.6.0 RC1
> it takes around 2.2 minutes.

Throughput is not a common thing Web STOMP users complain about, so we didn't
really benchmark the new implementation. A lot of apps would be fine with a few hundred
messages per second (in a Web client).

That said, if you provide us with some test scripts, we can take a look at what can be improved.

Are you comparing raw WebSockets in 3.6 to 3.5.6, though? Because that's not really
apples to apples, although we'd still be happy to see if we can improve things.

Jakub S

Dec 8, 2015, 5:30:40 AM
to rabbitmq-users, slupe...@gmail.com


On Friday, 4 December 2015 09:30:02 UTC+1, Michael Klishin wrote:
> Throughput is not a common thing Web STOMP users complain about, so we didn't
> really benchmark the new implementation. A lot of apps would be fine with a few hundred
> messages per second (in a Web client).
>
> That said, if you provide us with some test scripts, we can take a look at what can be improved.
>
> Are you comparing raw WebSockets in 3.6 to 3.5.6, though? Because that's not really
> apples to apples, although we'd still be happy to see if we can improve things.
> --
> MK
> Staff Software Engineer, Pivotal/RabbitMQ

In the previous version (3.5.6) I was using WebSockets over SockJS. With the latest version I switched to raw WebSockets.
Do you think this may have something to do with the throughput?

Jakub
 

Michael Klishin

Dec 8, 2015, 5:33:06 AM
to rabbitm...@googlegroups.com, slupe...@gmail.com
At least it is not an apples-to-apples comparison.