Hello,
I'm trying to build really big p2p rooms using WebRTC compiled for Android with a non-standard ADM (Audio Device Module).
For now, with standard settings (not modifying the SDP), I hit a limit of about 6-7 peer connections on my Samsung SM-T705 tablet before the CPU is overrun. (Not a high-end device anymore, but good enough; roughly comparable to a Samsung S5 or Note 4 smartphone.)
I believe I could get 2 or even 3 more with the standard ADM, but I can't use it since I need some spatialization...
So the question is: how do I efficiently reduce CPU usage?
I want it to work 2 or 3 times faster so it can support about 15-20 peer connections with no CPU overrun, preferably without using more threads than the WebRTC worker thread and the one where my ADM runs and delivers audio data to WebRTC.
For one peer, about 210 ms per second is currently spent to:
1. DeliverRecordedData (the most expensive, about 130 ms);
2. RequestPlayoutData (about 60 ms, growing with the number of peers even when all peers' remote audio tracks are disabled, i.e. set_enabled(false));
3. Apply spatialization (about 15 ms; really low complexity);
4. Minor things like obtaining data from the audio sink (about 2 ms).
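As a back-of-envelope check on these numbers (assuming they hold roughly linearly per peer, which may not be exact):

```cpp
#include <cassert>

// 210 ms of CPU time per second of audio per peer is ~21% of one core,
// so a single core saturates at roughly 1000 / 210 = 4 peers; the observed
// 6-7 suggests the load is already spread over more than one core.
int MaxPeersPerCore(int ms_per_second_per_peer) {
  return 1000 / ms_per_second_per_peer;
}
```

By the same arithmetic, reaching 15-20 peers on this device means getting the per-peer cost down to roughly 70 ms per second, i.e. the 2-3x speedup mentioned above.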
My assumptions about what might make it faster were:
1. Somehow tweak the Opus encoder:
- forcing maxsamplerate to 24000 gives nothing according to my benchmarks;
- setting kDefaultComplexity=1 for Opus wins about 15-20 ms, i.e. 10%, which is not significant;
- changing minptime and setting ptime to 40 gives nothing;
- I still haven't tried playing with the bitrate; I'll check tomorrow.
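For the bitrate experiment, one low-effort way (without rebuilding WebRTC) is SDP munging before SetLocalDescription. A minimal sketch, assuming the usual Chrome-style SDP where Opus is payload type 111 (real code should parse the a=rtpmap line instead of assuming it); the parameter name maxaveragebitrate comes from RFC 7587:

```cpp
#include <string>

// Append an Opus bitrate cap (in bps) to the a=fmtp line of a munged SDP.
// Payload type 111 is an assumption; parse a=rtpmap to find the real one.
std::string CapOpusBitrate(std::string sdp, int bps) {
  const std::string needle = "a=fmtp:111 ";
  size_t pos = sdp.find(needle);
  if (pos == std::string::npos) return sdp;  // no Opus fmtp line found
  size_t eol = sdp.find("\r\n", pos);
  if (eol == std::string::npos) eol = sdp.size();
  sdp.insert(eol, ";maxaveragebitrate=" + std::to_string(bps));
  return sdp;
}
```

One caveat: lowering maxaveragebitrate mainly reduces network usage; Opus encode cost is driven more by complexity and sample rate, so bitrate alone may not move the CPU numbers much.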
2. Disable DTLS:
- gives nothing, except I can no longer send binary data over the data channel =\
3. Use ISAC at 16 kHz:
- it really works, lowering the total from 210 ms to 130 ms (60 ms for DeliverRecordedData, 50 ms for RequestPlayoutData, plus ~15 ms for spatialization and the rest);
- but it doesn't sound as good as Opus =\
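In case it helps anyone reproduce this: codec preference can also be forced by SDP munging, moving the desired payload type to the front of the m=audio line. A sketch, where payload type 103 for ISAC/16000 is an assumption from Chrome's usual mapping (parse a=rtpmap to be sure):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Reorder an m=audio line so the given payload type is offered first.
// Fields are: "m=audio", port, proto, then the payload type list.
std::string PreferPayloadType(const std::string& m_line, const std::string& pt) {
  std::istringstream in(m_line);
  std::vector<std::string> fields;
  std::string word;
  while (in >> word) fields.push_back(word);
  std::string out = fields[0] + " " + fields[1] + " " + fields[2] + " " + pt;
  for (size_t i = 3; i < fields.size(); ++i)
    if (fields[i] != pt) out += " " + fields[i];  // keep the rest in order
  return out;
}
```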
So can someone please suggest any other options that would likely give better performance while still using the existing 'assembly'? (I mean, only the ADM is injected from outside; all other parts of WebRTC are kept untouched in their place and work as in Chrome or anywhere else.)
I know I'd be better off re-assembling WebRTC for my purposes: not encoding the local audio stream separately for every peer, i.e. encoding once for all* (or twice, for two different bandwidths)... but for now 'ain't nobody got time for that' =)
* Am I actually right that it encodes N times for N peers, or is that just overhead of the data delivery system?
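To make the "encode once" idea concrete, here is a conceptual sketch only; Encoder, Transport, Frame and Packet are all stand-ins I made up, not WebRTC APIs. The real change would mean feeding one encoder's output to each peer's RTP sender instead of running a full send pipeline per PeerConnection:

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

struct Frame { std::vector<int16_t> pcm; };      // captured audio (stand-in)
struct Packet { std::vector<uint8_t> payload; };  // encoded audio (stand-in)

// Encode each captured frame once, then fan the packet out to N peers.
class FanOutSender {
 public:
  using Encoder = std::function<Packet(const Frame&)>;
  using Transport = std::function<void(const Packet&)>;

  explicit FanOutSender(Encoder enc) : encode_(std::move(enc)) {}
  void AddPeer(Transport t) { peers_.push_back(std::move(t)); }

  void OnRecordedFrame(const Frame& f) {
    Packet p = encode_(f);              // one encode per frame...
    for (auto& send : peers_) send(p);  // ...shared by every peer
  }

 private:
  Encoder encode_;
  std::vector<Transport> peers_;
};
```

With this layout the encode cost stays constant as peers are added and only the (cheap) packetization/send work scales with N, which is exactly the saving the footnote is asking about.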
Regards, Yuri.