Hello guys,
So, we have been updating our core processes to handle messages in batches and send them to RMQ. We average around 4M messages per day. While reviewing my PR, my coworker brought this
link from the RMQ doc to my attention. The example is misleading because it shows publishing, while the header above it actually reads
"Concurrency Considerations for Consumers". After some googling I came across
this other link from the doc, which states:
--------------------------------------------------------------------------
" In general, publishing on a shared "publishing context" (channel in AMQP 0-9-1, connection in STOMP, session in AMQP 1.0 and so on) should be avoided and considered unsafe.
Doing so can result in incorrect framing of data frames on the wire. That leads to connection closure.
With a small number of concurrent publishers in a single application using one thread (or similar) per publisher is the optimal solution. With a large number (say, hundreds or thousands), use a thread pool. "
-------------------------------------------------------------------------------
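If we read that guidance correctly, the "safe" shape would be something like the sketch below (hypothetical names, RabbitMQ.Client 6.x-style API, not our actual code), where every publishing task opens its own channel on a shared connection:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Client;

public static class PerTaskChannelPublisher
{
    public static Task PublishAsync(IConnection connection,
                                    IEnumerable<IReadOnlyList<string>> batches)
    {
        var tasks = batches.Select(batch => Task.Run(() =>
        {
            // One channel per publishing task, so no two threads ever
            // write frames on the same channel; only the connection is shared.
            using var channel = connection.CreateModel();
            foreach (var message in batch)
            {
                channel.BasicPublish(exchange: "",
                                     routingKey: "work",
                                     basicProperties: null,
                                     body: Encoding.UTF8.GetBytes(message));
            }
        }));
        return Task.WhenAll(tasks);
    }
}
```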
Our implementation, however, uses a single channel shared by processes that may run concurrently, depending on the System.Threading.Tasks.TaskScheduler. A code sample is attached (sample.cs); some methods and properties have been omitted for brevity.
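To make the question concrete, here is a stripped-down sketch of the pattern (illustrative names only, apart from SendAsync; this is not the attached sample.cs):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Client;

public class BatchPublisher
{
    // Single channel shared by every publishing task.
    private readonly IModel _channel;

    public BatchPublisher(IModel channel) => _channel = channel;

    // One Task per batch; depending on the TaskScheduler, several of them
    // may run in parallel and call BasicPublish on the same channel.
    public Task SendAsync(IEnumerable<IReadOnlyList<string>> batches)
    {
        var tasks = batches.Select(batch => Task.Run(() =>
        {
            foreach (var message in batch)
            {
                _channel.BasicPublish(exchange: "",
                                      routingKey: "work",
                                      basicProperties: null,
                                      body: Encoding.UTF8.GetBytes(message));
            }
        }));
        return Task.WhenAll(tasks);
    }
}
```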
We then ran a benchmark with 1M messages and a batch size of 10,000, and another with 10M messages and a batch size of 1M, against a local RMQ server. RMQ raised no errors, and the messages were consumed by the client without any frame errors.
We repeated the benchmark with 10,000 messages against a remote server; still no issues.
So the question is: what are we missing? Are we in for a big surprise in prod? Are the tasks in the SendAsync method simply not running concurrently?