[rabbitmq-discuss] Simple benchmark and results


David Glaubman

Oct 19, 2009, 10:46:45 PM
to rabbitmq...@lists.rabbitmq.com

Hi, I am experimenting with RabbitMQ (v1.7.0) on Windows using the .Net
client.

First, my initial reaction to AMQP/RabbitMQ is very positive -- I've
received good support in getting started, the protocol and client API are
clear and seem straightforward, and (so far) everything works.

Second, I’d like to offer some performance numbers for a very simple
benchmark I am using.
Your mileage will most assuredly vary.

And third, I am looking for some insight as to suitability/tunability of
RabbitMQ for a particular use case. (but I'll leave that to my next message)

Thanks,
David

The setup
---------
In the local setup, I have one rabbitmq server and one single-threaded client
which runs the benchmark.
In the distributed setup, the client runs as in the local setup, but the
rabbitmq server runs on another machine.

Client machine: Quad CPU @ 2.4 GHz, 4 GB memory, running 32-bit Windows XP SP2
Server machine: Quad CPU @ 2.2 GHz, 4 GB memory, running 64-bit Windows Server
2008 Standard
Network: 1Gbps Ethernet, tracert shows 2 hops between machines

The benchmark
-------------
The benchmark consists of:
1. BasicPublish of 1000 24-byte messages to a Fanout exchange
(non-persistent)
2. Dequeuing these 1000 messages using a QueueingBasicConsumer
3. Sending 1000 24-byte messages using MSMQ
4. Receiving 1000 24-byte messages using MSMQ
(All benchmarks were run using MeasureIt, a free microbenchmarking tool for
.Net code.)

Results
-------
RabbitMQ:
In the local case, published about 4K messages per second, consumed about
10K messages per second.
In the remote case, median times were the same as for local, but big
outliers occurred on some runs.
(Using BasicGet instead of QueueingBasicConsumer, I got around 1500 messages
per sec remote, 3K local)

MSMQ:
In the local case, MSMQ sent about 40K messages per second, received about
40K messages per second.
In the remote case, MSMQ sent about 33K messages per second, received about
1.2K per second.

I've run the benchmark about 50 times so far, and median times are pretty
stable, though max time for a single Consumed message can run up to 5 ms.

So, one question I have is: Is there a way to achieve MSMQ-like Send
performance using rabbitmq?

The code
--------

(This is called by the harness with the hostname of the local or remote
machine, port = 5672, and a 24-byte binary message payload.)

private static void AmqpWorker(string serverAddress, int basePort,
                               LossBullet message)
{
    // LossBullet is the 24-byte test message type; timer1000 is the MeasureIt
    // timer used by the harness.
    var exchange = "fanout";
    var routingKey = "key";
    var endpoint = new AmqpTcpEndpoint(serverAddress, basePort++);
    var connection = new ConnectionFactory().CreateConnection(endpoint);
    var channel = connection.CreateModel();
    channel.ExchangeDeclare(exchange, ExchangeType.Fanout);
    string queue = channel.QueueDeclare("q1");
    channel.QueueBind(queue, exchange, routingKey, false, null);
    var consumer = new QueueingBasicConsumer(channel);

    LossBullet bullet = message;

    // Publish benchmark: non-persistent BasicPublish to the fanout exchange.
    timer1000.Measure("Publish fanout", 1, delegate
    {
        channel.BasicPublish(exchange, routingKey, null, bullet.ToBytes());
    });

    BasicDeliverEventArgs eventArgs = null;
    consumer.Model.BasicConsume(queue, null, consumer);

    // Consume benchmark: dequeue each delivery and acknowledge it explicitly.
    timer1000.Measure("Consume fanout", 1, delegate
    {
        eventArgs = consumer.Queue.Dequeue() as BasicDeliverEventArgs;
        message = new LossBullet(eventArgs.Body);
        consumer.Model.BasicAck(eventArgs.DeliveryTag, false);
    });
}


Matthias Radestock

Oct 20, 2009, 2:10:11 AM
to David Glaubman, rabbitmq...@lists.rabbitmq.com
David,

David Glaubman wrote:
> Hi, I am experimenting with RabbitMQ (v1.7.0) on Windows using the .Net
> client.

> [...]


> Client machine: Quad CPU @ 2.4Ghz, 4Gb memory, running 32-bit Windows XP SP2
> Server machine: Quad CPU @ 2.2GHz, 4Gb memory, running 64-bit Windows Server
> 2008 Standard
> Network: 1Gbps Ethernet, tracert shows 2 hops between machines

> [...]


> Benchmark consists of
> 1. BasicPublish of 1000 24-byte messages to a Fanout exchange
> (non-persistent)
> 2. Dequeuing these 1000 messages using a QueueingBasicConsumer
> 3. Sending 1000 24-byte messages using MSMQ
> 4. Receiving 1000 24-byte messages using MSMQ
> (All benchmarks were run using MeasureIt, a free microbenchmarking tool for
> .Net code.)
>
> Results
> -------
> RabbitMQ:
> In the local case, published about 4K messages per second, consumed about
> 10K messages per second.
> In the remote case, median times were the same as for local, but big
> outliers occurred on some runs.
> (Using BasicGet instead of QueueingBasicConsumer, I got around 1500 messages
> per sec remote, 3K local)
>
> MSMQ:
> In the local case, MSMQ sent about 40K messages per second, received about
> 40K messages per second.
> In the remote case, MSMQ sent about 33K messages per second, received about
> 1.2K per second.

It looks like MSMQ may have some optimisation for the local case -
perhaps it bypasses the TCP/IP stack?

As for rabbit, I suspect the main bottleneck is actually the .net
client. What's the CPU load like for the client? It may well be maxing
out one core.

We and other users have seen much higher message rates from RabbitMQ on
similar hardware than what you are reporting - not quite matching MSMQ
perhaps, but close.

If you can, try running some tests with the MulticastMain example that
ships with the Java client, for comparison.

Also, note that in your particular test setup the RabbitMQ server will
not be able to take full advantage of the multiple cores since there is
too little inherent parallelism. Presumably your intended use case does
have more than a single message stream, going from one producer to one
consumer; such a configuration will produce higher message rates overall.
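
For illustration, here is roughly what a multi-stream publish test could look
like with the .Net client -- a minimal sketch only, reusing the API calls from
the code posted above; the thread count, message count and exchange name are
arbitrary choices rather than a recommendation:

using System;
using System.Text;
using System.Threading;
using RabbitMQ.Client;

class MultiStreamPublish
{
    // Sketch: one connection and channel per publisher thread, so the broker
    // sees several independent message streams instead of a single one.
    static void Main(string[] args)
    {
        var endpoint = new AmqpTcpEndpoint(args[0], 5672);
        var payload = Encoding.ASCII.GetBytes("123456789012345678901234"); // 24 bytes
        const int threads = 4;
        const int perThread = 10000;

        var workers = new Thread[threads];
        for (int t = 0; t < threads; t++)
        {
            workers[t] = new Thread(delegate()
            {
                var connection = new ConnectionFactory().CreateConnection(endpoint);
                var channel = connection.CreateModel();
                channel.ExchangeDeclare("fanout", ExchangeType.Fanout);
                for (int i = 0; i < perThread; i++)
                    channel.BasicPublish("fanout", "key", null, payload);
                connection.Close();
            });
            workers[t].Start();
        }
        foreach (Thread w in workers)
            w.Join();
        Console.WriteLine("all publishers done");
    }
}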


Regards,

Matthias.

David Glaubman

Oct 21, 2009, 1:53:58 AM
to rabbitmq...@lists.rabbitmq.com

Matthias,

> It looks like MSMQ may have some optimisation for the local case -
> perhaps it bypasses the TCP/IP stack?

Yes, I believe local MSMQ is highly optimized (perhaps using pipes?).
My major interest there was the send rate of 33K messages per second
that MSMQ managed for the remote case,
compared to 4K with Rabbit's .Net client.
It's odd -- MSMQ Send seems to be 8X faster than Rabbit/.Net Publish,
but MSMQ Receive is 8X slower than Consume!

> As for rabbit, I suspect the main bottleneck is actually the .net
> client. What's the CPU load like for the client? It may well be maxing
> out one core.

For Rabbit, in the remote case, client CPU load peaks at around 40%
(2 cores at almost 60%, the other 2 < 10%).
Server CPU load goes to around 30%,
concentrated on two cores (around 60%, 50%, 5%, 5%).

> [...]


> If you can, try running some tests with the MulticastMain example that
> ships with the Java client, for comparison.

The Java client tells a very different story ...

./runjava.bat com.rabbitmq.examples.MulticastMain -h ca1tesla1 -x 1 -y 1
-z 20 -s 24 -a
(Single Publisher, single consumer each on its own thread)

Averaged 20K/sec sends, 20K/sec receives. All server CPUs were at 85-95%,
client CPUs at 35%,
avg reported latency around 8 or 9 ms.

If #Publishers and/or #Consumers > 1,
all server CPUs were maxed out the whole time, client stayed at 35%.

With #Producers = 2, # consumers = 1, sends/sec = 30K, rcv/sec = 14K.

With #Producers = 1, # consumers = 2, sends/sec = 17K, rcv/sec = 27K
(both consumers get the same messages)

> Also, note that in your particular test setup the RabbitMQ server will
> not be able to take full advantage of the multiple cores since there is
> too little inherent parallelism.

Is load on the server cores so evenly distributed when using the Java client
(even with single Publisher, single Consumer)
largely because sends and receives are running simultaneously on separate
(client) threads?

> Presumably your intended use case does
> have more than a single message stream,
> going from one producer to one
> consumer; such a configuration will produce higher message rates overall.

Yes. I'm trying to pipeline a large computation
which consists of a small number of massively parallel stages.
Each stage consists of a large number of components,
each of which transforms an input stream (from the previous stage)
into an output stream (for the next stage).
I'm basically looking for a way to send messages fast enough
to prove the concept is workable (or not).
If I can come close to saturating Gbit Ethernet
using COTS h/w and s/w,
I'll know I'm on the right track. (and, contrariwise).

David


Matthias Radestock

Oct 21, 2009, 3:42:28 AM
to David Glaubman, rabbitmq...@lists.rabbitmq.com
David,

David Glaubman wrote:
> For Rabbit, in the remote case, client CPU load peaks at around 40%
> (2 cores at almost 60%, the other 2 < 10%).
> Server CPU load goes to around 30%,
> concentrated on two cores (around 60%, 50%, 5%, 5%).

Given that sending is asynchronous, if neither the client nor the server
is maxing out a CPU, then the network communication would seem to be the
bottleneck.

Nagle is the obvious culprit, but both the client and the server disable
that.

Could it be that the Windows firewall or antivirus is getting in the
way? We have seen some very odd behaviour with these two. OTOH, that
should affect the Java client too. Hmm.

> The Java client tells a very different story ...
>
> ./runjava.bat com.rabbitmq.examples.MulticastMain -h ca1tesla1 -x 1 -y 1
> -z 20 -s 24 -a
> (Single Publisher, single consumer each on its own thread)
>
> Averaged 20K/sec sends, 20K/sec receives. All server CPUs were at 85-95%,
> client CPUs at 35%,
> avg reported latency around 8 or 9 ms.

These figures are much more in line with what I'd expect to see.

Note that there is one obvious difference to the .net tests: By setting
the "-a" flag the consumers will operate in auto-ack mode, whereas in
the .net code you posted you do an explicit ack. The former is quite a
bit more efficient and may well account for the bulk of the difference
between the .net and java consumer performance. The setting won't affect
the sending side at all, though.

The discrepancy between the .net and Java client merits investigation.
Can you package up your test code in a form that makes it
straightforward to run and thus try to reproduce your results? Bonus
points if it is easy to get the test to run under mono.

> If I can come close to saturating Gbit Ethernet using COTS h/w and
> s/w, I'll know I'm on the right track.

Message size has a big impact. Small messages carry a significant
(relative) framing and processing overhead. To get anywhere close to
saturating Gbit Ethernet your message payloads would have to be
substantially larger than 24 bytes. And you'd definitely want to create
multiple streams. Our OPRA testing with Intel two years ago did both
these things.


Regards,

Matthias.

David Glaubman

Oct 21, 2009, 1:44:37 PM
to rabbitmq...@lists.rabbitmq.com

Matthias,

You write:
[...]


> Note that there is one obvious difference to the .net tests:
> By setting the "-a" flag the consumers will operate
> in auto-ack mode, whereas in the .net code you posted you do an explicit
> ack.
> The former is quite a bit more efficient and may well account
> for the bulk of the difference between the .net and java consumer
> performance.

Bingo! Setting 'noAck' = true on basicConsume increases messages received to
about 33K per sec.
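
For reference, the consuming side with noAck set to true looks roughly like
this. This is a sketch only; the exact BasicConsume overload that takes the
noAck flag differs between .Net client versions, so check the signature in the
client you are using:

using RabbitMQ.Client;
using RabbitMQ.Client.Events;

static class NoAckConsumer
{
    // Drain 'count' messages with noAck = true: the broker treats each message
    // as settled the moment it is sent, so no BasicAck call is made here.
    public static void Drain(IModel channel, string queue, int count)
    {
        var consumer = new QueueingBasicConsumer(channel);
        channel.BasicConsume(queue, true /* noAck */, consumer);

        for (int i = 0; i < count; i++)
        {
            var eventArgs = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
            // process eventArgs.Body here; nothing to acknowledge in this mode
        }
    }
}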

As to the low send rate --
> Given that sending is asynchronous

Not so fast! (so to speak;-) -- rabbit .Net client uses TCPClient,
which "provides simple methods for connecting, sending, and receiving
stream data over a network in synchronous blocking mode."
(http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.aspx)

> Can you package up your test code in a form that makes it
> straightforward to run and thus try to reproduce your results?
> Bonus points if it is easy to get the test to run under mono.

Okay.

Thanks
David


Philippe Kirsanov

Oct 21, 2009, 1:57:16 PM
to rabbitmq...@lists.rabbitmq.com
What exactly "noAck" parameter in basicConsume mean? Is it auto-ack on
message or something else? API guide sais it is "handshake ack".
On api-guide page there is an example that sets noAvk = false and also
sends ack upon message delivery:
boolean noAck = false;
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume(queueName, noAck, consumer);
while (/* decide whether to continue reading */) {
QueueingConsumer.Delivery delivery;
try {
delivery = consumer.nextDelivery();
} catch (InterruptedException ie) {
continue;
}
// (process the message components ...)
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}


David Glaubman

Oct 21, 2009, 2:14:31 PM
to rabbitmq...@lists.rabbitmq.com

Philippe Kirsanov wrote:
>
>> What exactly does the "noAck" parameter in basicConsume mean? Is it auto-ack
>> on messages or something else? The API guide says it is "handshake ack".
>

I think it means auto-ack -- when I set it to true in BasicConsume
and remove the call to BasicAck, my receive performance more than tripled.

Here’s a quote on noAck:
(http://www.trapexit.org/forum/viewtopic.php?p=48116)
AMQP says that a broker shouldn't forget about a msg until it's been
ack'd. Now this can either happen by an explicit ack coming from the
consumer, or from an implicit ack, by setting noAck to true when
subscribing to the queue. Yes, the unfortunate naming of "noAck" is, um,
unfortunate. Internally, we tend to invert this and then call it
AckRequired.

Hope this helps.

David



Matthias Radestock

Oct 21, 2009, 2:16:41 PM
to David Glaubman, rabbitmq...@lists.rabbitmq.com
David,

David Glaubman wrote:
>> Given that sending is asynchronous
>
> Not so fast! (so to speak;-) -- rabbit .Net client uses TCPClient,
> which "provides simple methods for connecting, sending, and receiving
> stream data over a network in synchronous blocking mode."
> (http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.aspx)

Do you know what is meant by "synchronous blocking mode"? I thought it
just meant that, say, a call to Write won't return until the kernel has
taken responsibility for the data (which doesn't mean it's been sent,
let alone received by the other end or passed to the app layer there).
That would not involve any waiting unless the buffers involved are full
due to network congestion or the server not draining the data fast enough.


Matthias.

Sylvain Hellegouarch

Oct 21, 2009, 2:20:24 PM
to David Glaubman, rabbitmq...@lists.rabbitmq.com
David Glaubman a écrit :

> Philippe Kirsanov wrote:
>
>>> What exactly does the "noAck" parameter in basicConsume mean? Is it auto-ack
>>> on messages or something else? The API guide says it is "handshake ack".
>>>
> I think it means auto-ack -- when I set it true in BasicConsume
> and remove the call to BasicAck my receive performance more than tripled.
>
> Here’s a quote on noAck:
> (http://www.trapexit.org/forum/viewtopic.php?p=48116)
> AMQP says that a broker shouldn't forget about a msg until it's been
> ack'd. Now this can either happen by an explicit ack coming from the
> consumer, or from an implicit ack, by setting noAck to true when
> subscribing to the queue. Yes, the unfortunate naming of "noAck" is, um,
> unfortunate. Internally, we tend to invert this and then call it
> AckRequired.
>
> Hope this helps.
>

I assume this means that if the consumer fails to process the message
correctly it won't see it again, right? Whereas with an explicit ack from the
consumer, one can ensure consistency if the processing fails before the
ack call.

- Sylvain

Matthias Radestock

Oct 22, 2009, 4:58:12 AM
to Sylvain Hellegouarch, rabbitmq...@lists.rabbitmq.com
Sylvain,

Sylvain Hellegouarch wrote:
> I assume this means that if the consumer fails at processing correctly
> the message it won't see it again right, whereas with an ack by the
> consumer, one can ensure consistency if the processing fails before the
> ack call.

Sort of. In auto-ack/no-ack mode the broker forgets about the message as
soon as it has sent it. So any failure of the broker, network or
consuming client after the sending will result in the message being
lost. By contrast, when acks are enabled the broker only forgets about a
message when the client has sent it an ack. Any failure before then
results in an eventual retransmit.
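
In code, the explicit-ack pattern is the one from the first message of this
thread: dequeue, process, and only then ack. A rough sketch, with the same
caveat that the BasicConsume overload may differ between client versions:

using RabbitMQ.Client;
using RabbitMQ.Client.Events;

static class ExplicitAckConsumer
{
    // The broker redelivers any message whose ack it never receives (e.g. the
    // consumer dies mid-processing), so the ack is sent only after the work
    // for that message has completed.
    public static void Drain(IModel channel, string queue, int count)
    {
        var consumer = new QueueingBasicConsumer(channel);
        channel.BasicConsume(queue, false /* noAck */, consumer);

        for (int i = 0; i < count; i++)
        {
            var eventArgs = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
            Process(eventArgs.Body);                           // do the work first
            channel.BasicAck(eventArgs.DeliveryTag, false);    // then settle the message
        }
    }

    static void Process(byte[] body) { /* application-specific */ }
}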


Regards,

Matthias.

Matthias Radestock

Oct 22, 2009, 10:17:23 AM
to David Glaubman, rabbitmq...@lists.rabbitmq.com
David,

David Glaubman wrote:
> You write:
> [...]
>> Note that there is one obvious difference to the .net tests:
>> By setting the "-a" flag the consumers will operate
>> in auto-ack mode, whereas in the .net code you posted you do an explicit
>> ack.
>> The former is quite a bit more efficient and may well account
>> for the bulk of the difference between the .net and java consumer
>> performance.
>
> Bingo! Setting 'noAck' = true on basicConsume increases messages received to
> about 33K per sec.

I have just run some experiments with code very similar to yours - the
only significant changes I made were to set the message to be a byte[24]
array, use DateTime.Now for timing and do 50,000 iterations, i.e.

var publishStart = DateTime.Now;
for (int i = 0; i < 50000; i++) {
    channel.BasicPublish(exchange, routingKey, null, message);
}
Console.WriteLine("publish: {0}", DateTime.Now - publishStart);

In the local case I get sending rates of ~10kHz on my ancient machine
running mono on debian. That's significantly higher than the ~4kHz you
are reporting.

The consuming rate is half that but increases to ~15kHz with noAck=true.
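
For completeness, the consuming side of such a test can be timed the same way.
This is a sketch only; it assumes a QueueingBasicConsumer has already been
registered on the queue, as in the code earlier in the thread, and that the
50,000 messages are waiting:

var consumeStart = DateTime.Now;
for (int i = 0; i < 50000; i++) {
    var eventArgs = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
    // with noAck = false, an explicit channel.BasicAck(eventArgs.DeliveryTag, false) goes here
}
Console.WriteLine("consume: {0}", DateTime.Now - consumeStart);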


So something weird is going on in your set up.


Regards,

Matthias.

David Glaubman

Oct 22, 2009, 4:34:55 PM
to rabbitmq...@lists.rabbitmq.com

Matthias,

I did some profiling of a simple example (basically the SendString example,
with Publish called in a loop). I understand this is not a realistic
scenario,
but it does give some sense of where Publish spends its time on .Net.

Anyway, almost all of the time was spent in BinaryWriter.Write(byte) called
from Frame.WriteTo. Each Write(byte) was taking about 3-5 microseconds.

Since this was so, I looked at NetworkBinaryWriter,
since it calls BinaryWriter.Write(byte) 4x for every uint32 or int32 and 2x
for every uint16.

Using BinaryWriter as a guide, I turned the overrides of Write(int32) etc.
to write to a byte array in the proper network order
and then write the array to the underlying Stream.

Running against the original RabbitMQ.Client dll, doing 10K Publishes of a
24-char string to a remote server,
I got about 1.5K messages per second:
C:\> SendMultiString.exe ca1tesla1 directexchange direct key
123456789012345678901234 10000
10000 messages sent in in 7248 mSec

Buffering the int/short Writes in a modified version of NetworkBinaryWriter,
I got around 7.5K messages per sec (a 5x speedup):
C:\> SendMultiString.exe ca1tesla1 directexchange direct key
123456789012345678901234 10000
10000 messages sent in in 1391 mSec

Also turning on Nagle (setting TCP_NODELAY to false) got it up to 15K per
sec (a total speedup of more than 10x):
C:\> SendMultiString.exe ca1tesla1 directexchange direct key
123456789012345678901234 10000
10000 messages sent in in 676 mSec

Nagle enabled with the original NetworkBinaryWriter was slightly better (in
this case) than the modified NetworkBinaryWriter (about 8.5K):
C:\> SendMultiString.exe ca1tesla1 directexchange direct key
123456789012345678901234 10000
10000 messages sent in in 1136 mSec


I'm not proposing either that NetworkBinaryWriter should be rewritten along
the lines above or that Nagle's algorithm should be enabled on Windows.
But I do think this shows there is something going on with Microsoft's
Socket/TCPClient implementations, and that there may be real gains available
in writing a whole frame at once
to the socket stream (at least when using TCPClient on Windows).
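
One cheap way to approximate "write the whole frame at once" is to put a
BufferedStream between the writer and the socket and flush once per frame.
The sketch below is purely illustrative -- the real client wires up its
streams internally, so this shows the idea rather than a drop-in patch; the
hostname is just the test server mentioned above:

using System.IO;
using System.Net.Sockets;

static class BufferedFrameStream
{
    // Coalesce many small writes (e.g. the one-byte writes issued by
    // NetworkBinaryWriter) into fewer, larger socket sends.
    public static Stream Open(string host, int port)
    {
        var tcp = new TcpClient(host, port);
        tcp.NoDelay = true;                      // keep Nagle disabled, as the client does
        return new BufferedStream(tcp.GetStream(), 4096);
    }
}

// hypothetical usage:
//   Stream s = BufferedFrameStream.Open("ca1tesla1", 5672);
//   ... write a complete frame into s ...
//   s.Flush();   // the frame leaves the process in a single send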

regards,
David

http://www.nabble.com/file/p26016750/NetworkBinaryReader.cs
NetworkBinaryReader.cs
http://www.nabble.com/file/p26016750/SendMultiString.cs SendMultiString.cs

David Glaubman

Oct 22, 2009, 4:46:46 PM
to rabbitmq...@lists.rabbitmq.com

Matthias,

Oops -- I uploaded the original NetworkBinaryWriter, not the modified version
I used for the tests.

Here is the relevant code:

/// <summary>
/// Override BinaryWriter's method for network-order.
/// </summary>
public override void Write(short i) {
    buffer[0] = (byte)(i >> 8);
    buffer[1] = (byte)i;
    WriteBuffer(2);
}

/// <summary>
/// Override BinaryWriter's method for network-order.
/// </summary>
public override void Write(ushort i) {
    buffer[0] = (byte)(i >> 8);
    buffer[1] = (byte)i;
    WriteBuffer(2);
}

/// <summary>
/// Override BinaryWriter's method for network-order.
/// </summary>
public override void Write(int i) {
    buffer[0] = (byte)(i >> 24);
    buffer[1] = (byte)(i >> 16);
    buffer[2] = (byte)(i >> 8);
    buffer[3] = (byte)(i);
    WriteBuffer(4);
}

/// <summary>
/// Override BinaryWriter's method for network-order.
/// </summary>
public override void Write(uint i)
{
    buffer[0] = (byte)(i >> 24);
    buffer[1] = (byte)(i >> 16);
    buffer[2] = (byte)(i >> 8);
    buffer[3] = (byte)(i);
    WriteBuffer(4);
}

/// <summary>
/// Override BinaryWriter's method for network-order.
/// </summary>
public override void Write(long i) {
    buffer[0] = (byte)(i >> 56);
    buffer[1] = (byte)(i >> 48);
    buffer[2] = (byte)(i >> 40);
    buffer[3] = (byte)(i >> 32);
    buffer[4] = (byte)(i >> 24);
    buffer[5] = (byte)(i >> 16);
    buffer[6] = (byte)(i >> 8);
    buffer[7] = (byte)(i);
    WriteBuffer(8);
}

/// <summary>
/// Override BinaryWriter's method for network-order.
/// </summary>
public override void Write(ulong i) {
    buffer[0] = (byte)(i >> 56);
    buffer[1] = (byte)(i >> 48);
    buffer[2] = (byte)(i >> 40);
    buffer[3] = (byte)(i >> 32);
    buffer[4] = (byte)(i >> 24);
    buffer[5] = (byte)(i >> 16);
    buffer[6] = (byte)(i >> 8);
    buffer[7] = (byte)(i);
    WriteBuffer(8);
}

// Scratch buffer shared by the overrides above; each value is assembled in
// network byte order and then written to the stream in a single call.
private byte[] buffer = new byte[8];

private void WriteBuffer(int n)
{
    OutStream.Write(buffer, 0, n);
}

Matthias Radestock

Oct 22, 2009, 7:05:06 PM
to David Glaubman, rabbitmq...@lists.rabbitmq.com
David,

David Glaubman wrote:
> Buffering the int/short Writes in a modified version of NetworkBinaryWriter,
> I got around 7.5K messages per sec.: (5X speedup)

The lack of buffering in the .net client is an issue we noticed a while
ago and we filed a bug to look into it. Our results at the time didn't
show as dramatic a difference as your tests. I've increased the severity
of that bug now, so we'll address it sooner.

Is this a showstopper for you or is the performance as it stands
sufficient for your intended use case?

Thanks for your help in tracking this down.


Regards,

Matthias.

simon hegarty

Oct 22, 2009, 9:02:18 PM
to rabbitmq...@lists.rabbitmq.com

Matthias Radestock-2 wrote:
>
> Is this a showstopper for you or is the performance as it stands
> sufficient for your intended use case?
>

I'm yet another user looking to replace MSMQ, and I'm looking at AMQP with
.net -- Rabbit in particular.

I am shortly to begin a performance evaluation along the lines David has
done, and I have been following this thread with a degree of alarm.

It'll be easier to convince people to change to rabbit if I can produce
statistics showing strong performance compared to MSMQ.

Is there any way the required changes could be made available sooner rather
than later?

Thanks
simon


David Glaubman

Oct 22, 2009, 11:04:04 PM
to rabbitmq...@lists.rabbitmq.com

Not a showstopper, since with the partial fix, Rabbit should be sufficient to
show feasibility of the approach.

Still, sooner is better than later, and I hope you folks will take a close
look at network performance on .Net/Windows, since it seems to differ from
Mono (or even Java on Windows).

I also think it may be worthwhile to consider programming against Socket
interface rather than TCPClient to maximize performance.

(just my 0.01334 Euros.)

David


Alexis Richardson

Oct 23, 2009, 4:49:57 AM
to David Glaubman, rabbitmq...@lists.rabbitmq.com
On Fri, Oct 23, 2009 at 4:04 AM, David Glaubman <dgla...@acm.org> wrote:
>
> Not a showstopper, since with the partial fix, Rabbit should be sufficient to
> show feasibility of the approach.
>
> Still, sooner is better than later, and I hope you folks will take a close
> look at network performance on .Net/Windows, since it seems to differ from
> Mono (or even Java on Windows).

Understood.


> I also think it may be worthwhile to consider programming against Socket
> interface rather than TCPClient to maximize performance.

Is that something you could have a crack at?

alexis

David Glaubman

Oct 27, 2009, 12:47:15 PM
to rabbitmq...@lists.rabbitmq.com

I am interested. Would not be able to give this any cycles until after
Thanksgiving.

David


Alexis Richardson-4 wrote:
>
> On Fri, Oct 23, 2009 at 4:04 AM, David Glaubman <dgla...@acm.org> wrote:

> [...]


>> I also think it may be worthwhile to consider programming against Socket
>> interface rather than TCPClient to maximize performance.
>
> Is that something you could have a crack at?
>
> alexis
>


Matthias Radestock

Oct 27, 2009, 1:29:00 PM
to David Glaubman, rabbitmq...@lists.rabbitmq.com
David,

David Glaubman wrote:
> I am interested. Would not be able to give this any cycles until after
> Thanskgiving.

Too late. Already fixed on the trunk. See
http://hg.rabbitmq.com/rabbitmq-dotnet-client/rev/cd22d1910063

In our tests on Windows this gives a 5-10x improvement in send
performance for tiny messages.


Regards,

Matthias.
