Hi Luke,
I did some tests based on your reproduction scripts:
1/ Using the exact same RabbitMQ instance as in my initial issue: no memory issue, as you also found.
I then checked the connection churn: I get ~130 new connections/s on my machine.
When I initially hit the issue it was with a C++ consumer, which seems to be much faster: with 5 consumers I was at ~500 new connections/s.
I also tried with a RabbitMQ deployed locally (using minikube + helm, see [1]): same connection churn.
2/ Patching your `run.sh` to run 50 repro.py processes instead of 10: at ~450/s I finally reproduce the issue (memory increase, rabbit_event mailbox filling up).
=> I suggest you try with more repro.py processes until the connection churn saturates; you should then see the memory increasing (a sketch of the kind of churn loop I mean follows below).
If you still can't reproduce on your side, you could try with another RabbitMQ source, maybe the official Docker image?
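For reference, the churn is just connect/close in a tight loop; here is a minimal sketch using pika (not your repro.py, and the host/credentials are just what I use locally):

# churn.py - minimal connection-churn generator (sketch, not Luke's repro.py).
# Each iteration opens and tears down one AMQP connection; launching many
# copies of this script in parallel is what pushes the broker's churn up.
import pika

PARAMS = pika.ConnectionParameters(
    host="localhost",
    credentials=pika.PlainCredentials("guest", "guest"),
)

while True:
    conn = pika.BlockingConnection(PARAMS)  # one new AMQP connection
    ch = conn.channel()
    ch.close()
    conn.close()                            # and one connection closed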
Thanks,
Thomas
---
[1] minikube, helm:
values.yaml:
rabbitmq:
  username: guest
  password: guest
  plugins: |-
    [rabbitmq_management, rabbitmq_top].
  configuration: |-
    loopback_users.guest = false
# deploy helm chart
helm install --name rabbitmq-repro -f values.yaml stable/rabbitmq --version=4.10.0
# wait for pod and service to be ready
kubectl port-forward --namespace default svc/rabbitmq-repro 5672:5672 15672:15672 &
(This deploys the bitnami/rabbitmq:3.7.14 Docker image with Erlang 21.3.)
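If you want to watch the churn rate through the forwarded management port, something along these lines should work (a sketch: it assumes /api/overview exposes churn_rates, as it should on this 3.7.x management plugin, and reuses the guest/guest credentials from values.yaml):

# churn_watch.py - print the connection-creation rate reported by the
# management API (sketch; assumes the port-forward above and guest/guest).
import json
import time
import urllib.request

URL = "http://localhost:15672/api/overview"

# Basic-auth for the management API.
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, URL, "guest", "guest")
opener = urllib.request.build_opener(
    urllib.request.HTTPBasicAuthHandler(password_mgr))

while True:
    with opener.open(URL) as resp:
        overview = json.load(resp)
    rate = overview.get("churn_rates", {}).get(
        "connection_created_details", {}).get("rate")
    print("connections created/s:", rate)
    time.sleep(5)

Since rabbitmq_top is in the plugin list, you should also be able to spot rabbit_event climbing the top-processes list in the management UI once its mailbox starts filling up.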