I've been investigating an issue where I get a lot of timeouts from my devices. I finally tracked it down to the request frame for a read being sent right on the heels of the previous read's response, rather than waiting the 3.5-character (3.5T) silent interval called for by the spec.
Googling led me to this issue, where someone else observed the same behavior. The response from Stephane is "By design, libmodbus doesn't respect the timers defined by the protocol."
This seems problematic at best. Why is this "by design"? Is there a situation in which one would *not* want these inter-frame delays?
I've worked around the issue by hard-coding a 2 ms delay before each request, but this seems pretty hacky, and it eats into my throughput at 19,200 baud.
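For reference, here's a minimal sketch of how I derive that delay rather than hard-coding it (the 2 ms figure falls out of the spec's formula at 19,200 baud). The function name is my own; it assumes the standard 11-bit RTU character (start + 8 data + parity + stop) and the spec's rule that the inter-frame delay is fixed at 1750 µs above 19,200 baud:

```c
#include <stdint.h>

/* Modbus RTU 3.5-character inter-frame delay, in microseconds.
   One RTU character is 11 bits on the wire (start + 8 data + parity + stop).
   Per the Modbus-over-serial-line spec, the delay is a fixed 1750 us
   for baud rates above 19200. */
static uint32_t interframe_delay_us(uint32_t baud)
{
    if (baud > 19200)
        return 1750;  /* fixed value mandated by the spec */

    /* 3.5 characters * 11 bits/char * 1e6 us/s / baud */
    return (uint32_t)((3.5 * 11 * 1000000.0) / baud);
}
```

At 19,200 baud this gives 2005 µs, which is where my 2 ms comes from; sleeping for this amount before each `modbus_read_registers()` call is the workaround, but I'd rather the library handled it.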