Thanks for the articles you noted; I'm in total agreement with the definitions they provide:
Latency:
- The delay incurred in communicating a message (the time the message spends “on the wire”)
- "We refer to latency as the time a request spends "on the wire" before getting to the software system" -Fowler
However, my question was more around why JMeter appears to alter that definition by stating the following:
Latency
- The duration between the end of the request and the beginning of the server response.
Maybe I'm just interpreting JMeter's definition too literally, but when I read it, specifically the part about "the beginning of the server response", it seems to include server processing time, since the server can't begin to respond until after it has performed some processing. Is that putting too fine a point on it? The Stack Overflow article seems to acknowledge this:
Latency:
- The difference between time when request was sent and time when response has started to be received
This seems to include server processing time as well, whereas other definitions only count time on the wire! There do seem to be conflicting definitions between what the industry perceives latency to be and how JMeter implements/calculates it.
So for all intents and purposes, in most circles latency is simply the time the request (or response) spent on the wire travelling from Host A to Host B, but in the JMeter world, latency is the duration between the end of the client's request and the beginning of the server's response, which includes "classic latency" (i.e. network time) but also server processing time.
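To make that concrete, here is a toy Java sketch (my own illustration, not JMeter's actual code; the URL is just a placeholder) of a client-side "time to start of response" measurement like the one JMeter describes:

```java
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LatencyDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder target; any reachable URL will do.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
                .GET()
                .build();

        long start = System.nanoTime();  // just before the request is sent

        // With BodyHandlers.ofInputStream(), send() returns as soon as the
        // status line and headers have arrived, i.e. when the response has
        // *started* -- roughly the point JMeter's "latency" stops at.
        HttpResponse<InputStream> response =
                client.send(request, HttpResponse.BodyHandlers.ofInputStream());
        long responseStarted = System.nanoTime();

        // Drain the body so we can compare against the full response time.
        try (InputStream in = response.body()) {
            in.readAllBytes();
        }
        long done = System.nanoTime();

        System.out.printf("JMeter-style latency (time to start of response): %d ms%n",
                (responseStarted - start) / 1_000_000);
        System.out.printf("Full response time: %d ms%n",
                (done - start) / 1_000_000);
    }
}
```

Whatever the server does between reading the request and writing its first byte lands inside that first interval, so from the client's side there is no way to separate pure wire time from server think time; that's why JMeter's number can't be "classic latency" on its own.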
If I am totally off base here, please don't hesitate to call me super dense, but there does seem to be a discrepancy in the definitions here.