Thanks, Bob.
In that example, the bottleneck is the pacing mechanism itself, and the inter-skb pacing delay is much longer than the RTT. In such cases, the simplified model of the system I mentioned, that "the time elapsed from a TCP data sender transmitting a data byte until that data sender receives a TCP ACK for that byte" is also the definition of RTT, is not a detailed enough model to reason through what's going on.
Since Eric switched the Linux pacing model to the EDT (earliest departure time) model in 2018, the RTT no longer includes time spent in the fq pacing layer.
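For intuition, here is a toy sketch of the EDT idea (in Python, not the actual kernel code; the function name and the all-at-once burst are my simplifying assumptions): TCP stamps each skb with an earliest departure time and hands it down immediately, and the pacing layer releases each skb only when its departure time arrives. The RTT sample is then taken from the release time, not from the time TCP handed the skb down.

```python
def assign_departure_times(skb_sizes_bytes, pacing_rate_bps, now_s=0.0):
    """Toy EDT model: a burst of skbs is handed to the pacing layer
    all at once at now_s. Return (tcp_xmit_time, edt_release_time)
    per skb, with release times spaced by the pacing rate."""
    times = []
    release = now_s
    for size in skb_sizes_bytes:
        # Queued by TCP at now_s, released to the NIC at `release`.
        times.append((now_s, release))
        release += size * 8 / pacing_rate_bps
    return times

# 84 MTU-sized skbs paced at 100 Mbit/sec: the last skb is released
# roughly 10 ms after TCP queued it, even though the path RTT is tiny.
times = assign_departure_times([1500] * 84, 100e6)
```

In this sketch, an RTT measured from `edt_release_time` stays near the path RTT, while an interval measured from `tcp_xmit_time` grows with the pacing backlog.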
So it looks like the difference between the "RTT" and "Wait" in your example is caused by the following:
+ The "RTT" is measuring the time between the EDT pacing release time and the time of the ACK.
+ The "Wait" is measuring the time between the TCP transmission (when TCP queued the skb into the fq pacing layer) and the time of the ACK. This includes time queued in the fq pacing layer waiting to be released to the NIC.
Because the pacing rate is so low here (100Mbit/sec), the skbs in the fq pacing layer spend a lot of time queuing, waiting for their turn to be released. So the "Wait" time (10ms) is considerably longer than the RTT (370us).
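A back-of-the-envelope check of those numbers (assuming MTU-sized 1500-byte skbs, which is my assumption, not something from the trace):

```python
PACING_RATE_BPS = 100e6   # 100 Mbit/sec pacing rate
SKB_BITS = 1500 * 8       # assumed MTU-sized skbs

inter_skb_delay = SKB_BITS / PACING_RATE_BPS   # 120 us between releases

wait = 10e-3    # reported "Wait"
rtt = 370e-6    # reported "RTT"
fq_queue_time = wait - rtt                      # ~9.6 ms spent queued in fq
skbs_ahead = fq_queue_time / inter_skb_delay    # ~80 skbs queued ahead
print(f"{inter_skb_delay*1e6:.0f} us/skb, ~{skbs_ahead:.0f} skbs ahead")
```

So a "Wait" of 10 ms is consistent with an skb sitting behind roughly 80 paced skbs in fq, each taking 120 us to release at 100 Mbit/sec.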
thanks,
neal