Hey,
First of all, things will change next week :)
I think the short explanation for the differences between these pictures is simply the time scale: to produce the graph for the docs,
I ran a workload that filled the WAL in just a few seconds. In your case, it seems like it took 3 hours. Such a difference in time scale,
especially combined with a likely different Prometheus scrape interval, the exact Prometheus query and the Grafana settings, can make
the graphs look quite different. Additionally, Erlang dynamically allocates and releases memory for an Erlang process (not to be confused
with an operating system process), and the size of those increments/decrements depends on how quickly things happen. Specifically, since
I was publishing very quickly, as soon as the process released memory, it was already using more of it again. In your case, with slower
publishers, once the WAL memory was released, the process was pretty much empty, so the runtime released more memory and didn't
immediately allocate it again.
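If you want to see that effect directly, you can watch the WAL process itself from an Erlang shell on the node (for example via
rabbitmq-diagnostics remote_shell). A minimal sketch - I'm assuming here that the process is registered as ra_log_wal on your version;
registered(). will show the actual name if it differs:

    %% run this a few times while the workload is active and again once it's idle
    %% to see the process heap grow and shrink
    Pid = whereis(ra_log_wal),
    erlang:process_info(Pid, [memory, total_heap_size, garbage_collection]).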
The memory breakdown is harder to interpret for me. Without additional data I can't really tell exactly why binary memory dominates,
but RabbitMQ indeed mostly deals with binaries (in Erlang terms), so they will be there. The allocated unused part (purple)
is related to what I mentioned before - memory was released from the WAL/ETS but not returned to the operating system, since
the Erlang runtime assumed it might need it again.
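If you want to dig into that yourself, erlang:memory() shows the per-category totals (binary, ets, processes and so on), and recon,
which ships with recent RabbitMQ releases, can compare how much memory was taken from the operating system with how much is actually
in use. A sketch, to be run in the same remote shell; the "unused" figure corresponds roughly to the allocated-but-unused (purple) part:

    erlang:memory().                  %% per-category usage, including binary and ets
    recon_alloc:memory(allocated).    %% taken from the OS
    recon_alloc:memory(used).         %% actually in use
    recon_alloc:memory(unused).       %% allocated but currently unused

rabbitmq-diagnostics memory_breakdown gives a similar per-category view from the command line.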
As for "the appropriate value of raft.wal_max_size_bytes" - if there was the perfect value, we'd just set it and not make it configurable. ;)
We believe 512MB is a reasonable value. I wouldn't change it at all without a specific reason. And remember that things will change
in RabbitMQ 4.1 and likely again in the future. My recommendation would be not to touch it - such "magic values", even if they help
a bit in a given scenario, tend to outlive whatever problems they were solving. RabbitMQ might work very differently in a few years,
and you will still have it set to a value that is, by then, perhaps completely inadequate, "because it's always been like that". ;)
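For completeness: if you ever do have a concrete, measured reason to change it, it's a single line in rabbitmq.conf - the value below is
simply the current 512 MB default written out in bytes:

    # current default, shown for illustration only
    raft.wal_max_size_bytes = 536870912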
If you have a specific issue that you are trying to address - please share what it is.
Best,