On 19 February 2015 at 14:39:25, Marek T (marek...@gmail.com) wrote:
> How would your recommendation change if the hardware needed
> to handle an additional queue with 10K messages/second which
> occasionally grows to 10M messages due to consumers (let's assume
> 10 consumers) not keeping up?
It depends on the message size distribution. 10K messages per second can be handled by a development
laptop fairly easily if the message size is < 4 KB (so 4 cores, 4-8 GB of RAM, and a fast HDD should be OK).
There is currently a fixed per-message cost out of the box because our default queue index implementation
is RAM-only; I believe it is about 20 bytes per message.
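To put that overhead in perspective for the 10M backlog you describe, here is a rough
back-of-the-envelope using the ~20 bytes per entry figure above:

    # RAM held by default queue index entries alone (rough estimate)
    backlog = 10 * 1000 * 1000          # messages sitting in the queue
    index_entry_bytes = 20              # approximate per-entry cost
    print(backlog * index_entry_bytes / 1e6)  # => 200.0 (about 200 MB)

So the index itself adds a couple hundred MB on top of the message bodies; it is not the dominant
cost for this workload.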
CloudAMQP folks have a LevelDB-based index:
https://github.com/cloudamqp/msg_store_eleveldb_index
Note that it includes native code and may be OS-specific in some ways.
> If we want to ensure maximum performance, is it enough to optimize
> for RAM and just multiply the approximate volume of messages (10M x 500 bytes = 5 GB),
> taking the 40% watermark into account => 12.5 GB of RAM?
Yes, plus the queue index entry cost mentioned above. With that much total RAM you can set
the memory high watermark to 70-80%; 40% sounds unreasonably low to me.
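To make the arithmetic concrete, here is a rough sizing sketch using the numbers from this thread
(10M messages of 500 bytes, ~20 bytes per queue index entry) and the two watermark values mentioned:

    # total RAM needed so that the whole backlog fits under the memory watermark
    backlog = 10 * 1000 * 1000
    payload_bytes = backlog * 500       # ~5 GB of message bodies
    index_bytes = backlog * 20          # ~0.2 GB of queue index entries
    for watermark in (0.4, 0.75):
        total_gb = (payload_bytes + index_bytes) / watermark / 1e9
        print("watermark %.2f -> about %.1f GB of RAM" % (watermark, total_gb))
    # watermark 0.40 -> about 13.0 GB of RAM
    # watermark 0.75 -> about 6.9 GB of RAM

In other words, raising the watermark to something like 0.75 roughly halves the RAM you need to keep
the same backlog in memory.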
> My assumption is that RabbitMQ benefits from multi-core processors;
> is there a practical limit beyond which adding more processors
> is unnecessary (depending on the number of queues/consumers)?
> Would it be beneficial to separate this huge queue from the
> others into a separate broker?
The benefit from extra cores will be greatest when you have more queues and connections/channels
going. For 1K queues and 1K connections I'd expect 4 and 8 cores to perform about equally well;
16 should make a difference but shouldn't be necessary for the described workload.
Again, the usual "please test it with PerfTest with a similar workload and see for yourself" suggestion
applies, but your thinking is largely correct.
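For reference, a PerfTest run in the same ballpark as your workload might look something like the
line below. The numbers are just the ones from this thread; the exact launcher script and flag names
vary between PerfTest versions, so check the tool's help output rather than taking this verbatim:

    # 1 publisher at ~10K msg/s with 500-byte bodies, 10 consumers, run for 60 seconds
    runjava com.rabbitmq.examples.PerfTest -x 1 -y 10 -s 500 -r 10000 -z 60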