Hi Aki, yes, if your publisher is publishing as fast as possible using PublishAsync, then the default NATS settings will produce the "slow consumer" disconnects you're seeing, especially when everything runs on localhost. The default pending messages limit for each subscription in NATS is 65536, and your connection's AckHandler subscription is subject to this limit. Even though Go's scheduler is highly efficient, you're publishing so fast that the number of pending ACKs on the ACK subscription quickly grows past 64K, at which point gnatsd disconnects your publishing connection.
That was a long way of saying that YES, it would be more reasonable to set your MaxPubAcksInFlight to a number like 1000, which is the default in examples/stan-bench.go. That benchmark can also help you figure out where your threshold is. For instance, I ran the following to reproduce the behavior you described:
% go run stan-bench.go -a -io -ns 1 -np 1 -mpa 16384 -n 2000000 foo
Starting benchmark [msgs=2000000, msgsize=128, pubs=1, subs=1]
write tcp [::1]:57927->[::1]:4222: write: broken pipe
exit status 1
The -mpa flag sets the publisher's MaxPubAcksInFlight. When I lowered it to 8192, there was no disconnect:
% go run stan-bench.go -csv test.csv -a -io -ns 1 -np 1 -mpa 8192 -n 2000000 foo
Starting benchmark [msgs=2000000, msgsize=128, pubs=1, subs=1]
NATS Streaming Pub/Sub stats: 143,355 msgs/sec ~ 17.50 MB/sec
Pub stats: 93,196 msgs/sec ~ 11.38 MB/sec
Sub stats: 71,677 msgs/sec ~ 8.75 MB/sec
And when I set it to 1000, the throughput was slightly better:
% go run stan-bench.go -a -io -ns 1 -np 1 -mpa 1000 -n 2000000 foo
Starting benchmark [msgs=2000000, msgsize=128, pubs=1, subs=1]
NATS Streaming Pub/Sub stats: 157,190 msgs/sec ~ 19.19 MB/sec
Pub stats: 78,604 msgs/sec ~ 9.60 MB/sec
Sub stats: 78,595 msgs/sec ~ 9.59 MB/sec
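In case it helps build intuition, the backpressure that MaxPubAcksInFlight provides can be sketched in plain Go without the NATS client at all. This is only an illustration of the mechanism, not the streaming client's actual implementation: a buffered channel acts as a counting semaphore, so the publisher blocks once the limit of unacknowledged messages is reached instead of letting pending ACKs grow without bound.

```go
package main

import (
	"fmt"
	"sync"
)

// publishAll "publishes" n messages while never allowing more than
// maxInflight unacknowledged messages at once. Each publish takes a
// slot from the buffered channel, and the simulated ACK handler frees
// it. It returns the peak number of pending ACKs observed.
func publishAll(n, maxInflight int) int {
	inflight := make(chan struct{}, maxInflight)
	peak := 0
	var wg sync.WaitGroup

	for i := 0; i < n; i++ {
		inflight <- struct{}{} // blocks once maxInflight ACKs are pending
		if l := len(inflight); l > peak {
			peak = l
		}
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-inflight // ACK arrives, slot freed
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	peak := publishAll(200000, 1000)
	fmt.Println("peak pending ACKs:", peak, "- stayed within limit:", peak <= 1000)
}
```

Without the semaphore (an unbounded in-flight count), the pending ACKs would grow as fast as the publish loop runs, which is exactly what pushes the ACK subscription past the 64K pending limit in your case.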
HTH