[{connection_readers,14584212},
{connection_writers,599240},
{connection_channels,2257084},
{connection_other,20160052},
{queue_procs,4866176},
{queue_slave_procs,0},
{plugins,10757228},
{other_proc,78505332},
{metrics,2195012},
{mgmt_db,12779264},
{mnesia,527712},
{other_ets,4157424},
{binary,24412160720},
{msg_index,296256},
{code,23760359},
{atom,1812721},
{other_system,40830848},
{allocated_unused,5273175896},
{reserved_unallocated,0},
{strategy,rss},
{total,
[{erlang,24630249640},{rss,22018035712},{allocated,29903425536}]}]},
During the spikes our RabbitMQ uses about 3.2 of the maximum 8 CPU cores. In the first queue the messages are still medium-sized (~3 MB). Each message is then transmitted to another queue (the workflow engine queue), where the messages are large (~20 MB). Afterwards the message is moved to a third queue and then back to the workflow engine queue. So maybe we have reached the CPU limitation of one core per queue.
Is I/O wait part of the CPU core per queue limit?
Garbage collection may run on a different core, right?
Throttling with publisher confirms sounds like a good idea for our workflow engine. But our consumers only process one request per instance, as they are slow and not capable of multi-threading. Therefore publisher throttling wouldn't work for them.
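For the workflow engine side, the throttling idea could look like a windowed-confirm loop: publish up to N messages, then block until the broker has confirmed them before continuing. This is a minimal sketch of that flow-control logic only; `publish` and `await_confirms` are hypothetical stand-ins for the real client calls (e.g. a confirm-enabled channel's publish and wait-for-confirms operations), not actual RabbitMQ client API.

```python
WINDOW = 100  # assumed max number of unconfirmed messages in flight

def publish_throttled(messages, publish, await_confirms, window=WINDOW):
    """Publish messages, blocking for broker confirms after each window.

    Caps how many unconfirmed (and thus broker-buffered) messages can
    exist at once, which is what throttles the publisher.
    """
    in_flight = 0
    for msg in messages:
        publish(msg)
        in_flight += 1
        if in_flight >= window:
            await_confirms()   # blocks until the broker acks the batch
            in_flight = 0
    if in_flight:
        await_confirms()       # flush the final partial window
```

With ~20 MB messages a much smaller window (even 1, i.e. confirm-per-publish) would keep the broker's buffered backlog small at the cost of publish throughput.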
Besides throttling, wouldn't publisher confirms increase memory usage, since messages are written to disk in batches?