I'm currently working on a network audio project that uses configurable forward error correction and connection multihoming, with the goal of transmitting audio reliably while keeping latency to a minimum.
I'm having issues with the audio intermittently cutting out on WiFi, which I suspect is due to the way I've written the code rather than to unreliable WiFi itself.
My question is: in SonoBus / AoO, is all of the audio processing, including the Opus encode/decode, done within the high-priority audio callback? And is the networking code in a separate thread, or also in the audio callback?
At the moment I'm doing almost everything outside the audio callback: all the callback does is pull/push samples from a lock-free queue, which lower-priority threads fill/empty while communicating with the network.
I was wondering if this could be the wrong approach, since those lower-priority threads can get preempted and starve the audio callback of samples. At the same time, I've read that you're not supposed to make syscalls like send/recv inside the audio callback.
Any help would be greatly appreciated!