On 4.08.2019 5:53, M Powell wrote:
>
> For starters I apologize if I’m in the wrong forum.
> A colleague is using asio IO to transfer messages between two applications. The message size is 300 bytes and one of the apps is leveraging a periodic timer predicated on chrono so the message is sent on local host at intervals (1 hertz, 10 hertz ...up to 10K hertz)
>
> For example:
> app 1 is set to transfer the message @ 100 Hz
> app 1 @T0 increments a counter within the message then transfers the message
>
> app 2 receives the message, verifies counter was incremented before incrementing the counter and sending the message
>
> The process repeats periodically at the interval specified. app1 has metrics that tracks transfer rate and dropped messages.
>
>
> At issue. The 10 kHz rate was flagged as questionable by a few members during a review. The code executes on Windows and Linux and there was added skepticism WRT Windows. The outputs didn't reveal any performance issues with 10 kHz transfers on either Windows or Linux.
>
> My question. Is there a limitation in TCP transactions or asio that would preclude 10 kHz transfers? The Windows OS?
I'm pretty sure the problematic point would not be TCP or the network
stack, but rather the overall performance of the computer, and
especially the variation in that performance. There is also nothing
special about the 10 kHz number; similar problems arise whenever you
require something to happen within a fixed time.
A consumer-grade OS like Windows or Linux does not guarantee that your
program gets a timeslice during every 0.1 ms interval (i.e. at 10 kHz).
For hard guarantees you need a real-time OS instead. On Windows or
Linux you must accept the possibility of occasional slowdowns, and
code accordingly.
The slowdowns may sometimes be severe. If the computer gets overloaded
with too many tasks or too much memory consumption, it may slow to a
crawl; your program might not get a timeslice for a whole second, let
alone within milliseconds. There are ways to mitigate this by playing
with thread/process priorities and by locking pages in RAM, which may
help to an extent.
Experience shows that Windows tends to be more prone to this
slowing-to-a-crawl behavior than Linux, and, what's worse, it appears
to recover from such situations much less gracefully, if at all. After
a memory-exhaustion event, often the only way to get Windows to
function normally again is a restart. A typical Windows box might need
a restart every few weeks anyway, to restore its original performance.
On Windows, you also never know when an antivirus or some other overly
snoopy piece of software will decide to install itself in the network
loopback interface and start monitoring all the traffic, causing
unpredictable delays. I recall that in one Windows product we replaced
the loopback TCP connection with a solution using shared memory,
because there were random delays in TCP which we were not able to
explain or eliminate. YMMV, of course.
In short: if you have full control over the hardware and the installed
software on the machine where your program is supposed to run, have
verified it by prolonged testing, can accept occasional loss of
traffic, and can ensure daily restarts of Windows, then you should
probably be OK.
Another way is to accept that there might be functionality loss and
tell the user to fix any problems. Your task is somewhat similar to
audio playback, although audio is a bit easier: the important
frequencies are well below 10 kHz, and audio has dedicated hardware
support as well (which I do not know much about, though). If an .mp3
file does not play well because the computer is overloaded, the user
can try closing other apps or restarting the machine. If that would be
acceptable for your users, you should be fine as well.