Thanks for the report!
You can use "ss -tin" with a recent ss binary to look at the bbr2 estimated bandwidth, estimated min_rtt, pacing rate, and cwnd to see how they may be impacting queuing. To see how to build a recent ss, check out:
That said, the results likely include at least a few test artifacts, because the netem rate limiting was applied on the sender. That creates queueing on the sender host itself, which interacts with the TCP Small Queues (TSQ) mechanism in a way that is unrealistic if the goal is to emulate a path in the middle of the network changing its link bandwidth.
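One quick way to see this sender-host queueing is to watch the qdisc statistics on the sender while the test runs; as a rough sketch (assuming the netem qdisc is attached to eth0):

tc -s qdisc show dev eth0

If the netem qdisc there reports a persistently large "backlog", the packets are sitting in a local queue that TSQ accounts for, rather than in an emulated network buffer.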
When using qdiscs to emulate a network path, to get high-fidelity results the qdiscs must be on the receiver machine or on a router in the middle of the path.
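As a rough sketch of one way to do that on the receiver (assuming the receiver's NIC is eth0, and reading the example config below as roughly a 50 Mbit/s bottleneck, 20ms RTT, and 100-packet buffer), you can redirect ingress traffic through an ifb device and emulate the bottleneck there:

modprobe ifb numifbs=1
ip link set dev ifb0 up
# redirect all ingress traffic on eth0 through ifb0 so a qdisc can shape it
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
# emulate the bottleneck: 20ms delay, 50 Mbit/s rate, 100-packet buffer
tc qdisc add dev ifb0 root handle 1: netem delay 20ms rate 50mbit limit 100
# ...then, partway through the test, drop the emulated link rate to 10 Mbit/s:
tc qdisc change dev ifb0 root handle 1: netem delay 20ms rate 10mbit limit 100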
If you want a tool that automatically sets up qdiscs in a sound configuration to emulate varying network conditions, you may want to check out the transperf tool that our team open-sourced:
https://github.com/google/transperf

An example config that should give you a scenario like the one you mention is:
conn=conn('bbr2', sender=0, start=0.0)           # one bbr2 flow from sender 0, starting at t=0
loss=0                                           # no random packet loss
bw=var_bw(bw(50, dur=10), bw(10, dur=10), 50)    # bottleneck bw: 50 for 10s, 10 for 10s, then 50
buf=100                                          # bottleneck buffer
rtt=20                                           # round-trip time
dur=40                                           # experiment duration
Once you have the experiment set up with qdiscs on the receiver (either manually or with transperf), I recommend looking at a sender-side tcpdump trace and either the transperf output graphs or an "ss -tin" dump to understand the dynamics.
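As a rough sketch of what that instrumentation might look like on the sender (the interface name, receiver address, and port below are just placeholders for your setup):

# capture packet headers for the test flow (assumed: 192.0.2.2 port 5201 over eth0)
tcpdump -i eth0 -s 120 -w bbr2_test.pcap host 192.0.2.2 and port 5201 &
# sample the sender's TCP state (bw, min_rtt, pacing rate, cwnd) every 100ms
while true; do
  date +%s.%N >> ss_log.txt
  ss -tin dst 192.0.2.2 >> ss_log.txt
  sleep 0.1
done

You can then line up the ss samples and the pcap time series against the points where the emulated bandwidth changes.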
best,
neal