Changing audio device while in a call results in playout error

Elise Monaghan

Mar 22, 2021, 9:01:51 PM
to discuss-webrtc
I'm using WebRTC on Windows. I have a WASAPI Core Audio callback that gets invoked whenever an audio device change happens (e.g. headphones get unplugged). In this callback, I would like to switch audio devices on the fly (e.g. to continue routing audio playback to the speakers after the headphones are unplugged). However, I hit the following assert in audio_device_core_win before I can handle the device disconnection:

"playout error: rendering thread has ended pre-maturely"

How do I work around this? I tried calling StopPlayout / Terminate on the audio device module in my callback (I'm using a custom audio device), but the call blocks and never returns...

Elise

Henrik Andreasson

Mar 23, 2021, 4:50:51 AM
to discuss-webrtc
The old/default ADM on Windows does not support device switching properly and has not been updated in a very long time. You could try the newer (still experimental) ADM, called ADM2, where automatic restart in combination with device switching can be enabled using a new flag.

The new ADM (ADM2) is created using a new, dedicated factory method.

Examples of the old and the new ways of creating the ADM can be found here.

Note that you must now create your own external ADM and inject it into a PeerConnection; if you don't, a default (the old ADM) will be created automatically instead.
The new ADM can be injected in CreatePeerConnectionFactory, and an example of where an external ADM (a so-called fake ADM in that example) is injected can be found here.
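Putting those steps together, a minimal sketch of the ADM2 path could look roughly like the following. It assumes the M89-era declarations of CreateWindowsCoreAudioAudioDeviceModule (modules/audio_device/include/audio_device_factory.h) and the CreatePeerConnectionFactory overload that accepts an ADM; the helper function name and thread handling are illustrative, and on some branches the ADM has to be created on the worker thread.

```
// Minimal sketch (not the official sample code): create the experimental ADM2
// and inject it when building the PeerConnectionFactory. Check your branch for
// differences in these declarations.
#include "api/audio_codecs/builtin_audio_decoder_factory.h"
#include "api/audio_codecs/builtin_audio_encoder_factory.h"
#include "api/create_peerconnection_factory.h"
#include "api/task_queue/task_queue_factory.h"
#include "api/video_codecs/builtin_video_decoder_factory.h"
#include "api/video_codecs/builtin_video_encoder_factory.h"
#include "modules/audio_device/include/audio_device_factory.h"

rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface>
CreateFactoryWithAdm2(rtc::Thread* network_thread,
                      rtc::Thread* worker_thread,
                      rtc::Thread* signaling_thread,
                      webrtc::TaskQueueFactory* task_queue_factory) {
  // ADM2: the second argument is the new flag that enables automatic restart
  // after a device change (the behavior discussed in this thread).
  rtc::scoped_refptr<webrtc::AudioDeviceModule> adm =
      webrtc::CreateWindowsCoreAudioAudioDeviceModule(
          task_queue_factory, /*automatic_restart=*/true);

  // Passing a null ADM here would make WebRTC create the old default ADM
  // internally, which is exactly what we want to avoid.
  return webrtc::CreatePeerConnectionFactory(
      network_thread, worker_thread, signaling_thread, adm,
      webrtc::CreateBuiltinAudioEncoderFactory(),
      webrtc::CreateBuiltinAudioDecoderFactory(),
      webrtc::CreateBuiltinVideoEncoderFactory(),
      webrtc::CreateBuiltinVideoDecoderFactory(),
      /*audio_mixer=*/nullptr, /*audio_processing=*/nullptr);
}
```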


Elise Monaghan

Mar 29, 2021, 7:00:39 PM
to discuss-webrtc
Thanks for the instructions. I tried this, but I'm getting an unresolved external symbol linker error in my project for webrtc::CreateWindowsCoreAudioAudioDeviceModule. I'm using the WebRTC library built from branch M89 (4389), with the correct headers included (audio_device_factory.h, etc.).

I'm quite certain the library was built correctly, given that everything else works fine if I don't call this function. Is there something special that needs to be done to expose webrtc::CreateWindowsCoreAudioAudioDeviceModule?

Henrik Andreasson

Mar 30, 2021, 4:09:02 AM
to discuss-webrtc
I just built the latest version and ran a unit test that covers both ADM versions on Windows. It all worked for me.

./out/Debug/modules_unittests.exe --gtest_filter=*AudioDevice*.PlayoutDeviceNames*

[ RUN      ] AudioLayerWin/AudioDeviceTest.PlayoutDeviceNames/0  (first round is with ADM1)
[ RUN      ] AudioLayerWin/AudioDeviceTest.PlayoutDeviceNames/1  (second round is with ADM2)

Elise Monaghan

Mar 30, 2021, 2:33:25 PM
to discuss-webrtc
Can you give me a few more details about your build setup? I am using MSVS 2017 with the following gn flags:

gn gen windows_msvc_debug_x86 --root="src" --args="target_cpu=\"x86\" is_debug=true use_rtti=true rtc_include_tests=false symbol_level=2 is_clang=false"

Elise Monaghan

Mar 30, 2021, 9:55:07 PM
to discuss-webrtc
OK, it looks like I am facing the same issue as outlined here:

I was able to work around it by following the instructions in the above post, i.e. I had to manually edit the BUILD.gn for modules/audio_device/ to add audio_device_module_from_input_and_output and windows_core_audio_utility as dependencies of rtc_library("audio_device_impl").

This actually makes sense to me, because unless I'm missing something, the only place audio_device_module_from_input_and_output appears in the entire WebRTC BUILD codebase is under rtc_include_tests, which is strange. Shouldn't audio_device_module_from_input_and_output be available even when building with rtc_include_tests=false? (Strangely enough, I first tried building with rtc_include_tests=true, but even then I got the unresolved external symbol error.)

So now I'm able to use webrtc::CreateWindowsCoreAudioAudioDeviceModule (only on 32-bit though; more on the 64-bit issue later). Another strange thing is that I have to manually disable the RTC_DCHECK_EQ line here, because my app uses its own audio engine that also interacts with the Windows Core Audio layer and is initialized before I establish a WebRTC connection, so the RTC_DCHECK_EQ fails because the audio session state is already active at that point. IMO this seems like a potential design flaw, and it would be nice if it could be worked around without modifying the source in this way. Is that possible?

Let me know your thoughts on all the above, thanks

Henrik Andreasson

Mar 31, 2021, 3:37:45 AM
to discuss-webrtc
I created a clean build using the default settings here and did not change anything.

Henrik Andreasson

Mar 31, 2021, 3:45:18 AM
to discuss-webrtc
All audio device modules are old and not part of the official WebRTC stack. They are mainly intended as simple test drivers for audio, but they lack a lot of the support a real client would need, such as device selection, cooperation with other apps, multi-channel input/output, etc. ADM2 on Windows was developed so that users could overcome some known issues in the first version (still the default), but it is still experimental and the build process might not be perfect.

If you prefer using ADM2 over ADM1 and don't want to inject your own modified version, please file a bug at bugs.webrtc.org and propose changes that we can review.

PS: ADM2 works well on 64-bit for me.

Elise Monaghan

Apr 6, 2021, 6:04:50 PM
to discuss-webrtc
What is the proper way to handle device switching using ADM2?

For example, say my headphones are plugged in and outputting audio during an active WebRTC session, and while the session is ongoing I want to switch the audio output to the speakers. ADM2 correctly handles this switch if I disconnect/unplug the headphones, but is there a way to switch the output without needing to unplug? I tried many different sequences, e.g. StopPlayout(), SetPlayoutDevice(<speakers index>), StartPlayout(), but none of them seem to work...

Henrik Andreasson

Apr 7, 2021, 4:11:55 AM
to discuss...@googlegroups.com
What type of error do you encounter?

Elise Monaghan

Apr 8, 2021, 1:35:50 AM
to discuss-webrtc
So, for example, if I directly just call SetPlayoutDevice or SetRecordingDevice (trying to switch microphones), it returns -1.

If I call StopRecording(), SetRecordingDevice(<new device index>), and StartRecording(), all the calls return 0 (success), but the new device doesn't actually get engaged; i.e. in the case of microphones, the mic I called StopRecording() on stops recording, but the new mic I selected with SetRecordingDevice (followed by StartRecording()) does not start recording.

Elise Monaghan

Apr 12, 2021, 5:51:58 PM
to discuss-webrtc
For anyone following this: I was able to get it to work by inserting an Init call as well, i.e. the sequence of calls required to switch a device mid-session is (for the microphone case): StopRecording, SetRecordingDevice(<new device index>), InitRecording, StartRecording.
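As a reference, a minimal sketch of that sequence against the webrtc::AudioDeviceModule interface (the helper function and its error handling are illustrative, not part of WebRTC):

```
// Illustrative helper, not WebRTC code: switch the capture device mid-session
// using the sequence listed above. All of these ADM methods return 0 on
// success.
bool SwitchRecordingDevice(webrtc::AudioDeviceModule* adm,
                           uint16_t new_device_index) {
  if (adm->StopRecording() != 0) return false;
  if (adm->SetRecordingDevice(new_device_index) != 0) return false;
  // InitRecording() is the step that was missing: it re-initializes capture
  // on the newly selected endpoint before recording is restarted.
  if (adm->InitRecording() != 0) return false;
  return adm->StartRecording() == 0;
}

// Playout follows the same pattern:
// StopPlayout(), SetPlayoutDevice(index), InitPlayout(), StartPlayout().
```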

Now the new issue I'm facing is that, for some reason, Windows ducks the system volume whenever I am in a WebRTC call. This is well-established, expected behavior on Windows when the default communications device is engaged: the system volume gets ducked. However, I am using the default device, not the default communications device, so I have no idea why this is happening. In fact, I replaced every instance of kDefaultCommunicationsDevice in the WebRTC source with kDefaultDevice and built a lib from that, to rule out any possibility of the communications device getting engaged, but even that didn't help. I know the default device is the one being engaged because I have different devices set as default and communications, and I hear the output on the correct (i.e. default) device.

In addition, the Windows code that used to let me disable ducking (https://docs.microsoft.com/en-us/windows/win32/coreaudio/disabling-the-ducking-experience) no longer works.

Finally, the most mysterious part is that this only happens with ADM2; with the old default ADM, both issues disappear (no volume ducking happens, and I am able to programmatically disable ducking as outlined in the MS documentation linked above).

I'll keep this thread updated with my findings; any help/insight from anyone would be appreciated though.




Elise Monaghan

Apr 19, 2021, 6:46:44 PM
to discuss-webrtc
I discovered the solution to my problem here:

Henrik, you in fact have a TODO in the comments in that section of code; it would be nice if the category were configurable/exposed at the API layer, or at the very least not hard-coded to the Communications category, which triggers Windows' ducking behavior even when the default device is chosen!

Henrik Andreasson

Apr 20, 2021, 3:32:56 AM
to discuss...@googlegroups.com
Yes, that's right. AudioCategory_Communications will have the described effect even when the default device is selected, since it opts you in to the communications policy and communications processing.
The ADM API has been reduced in size over the years and more and more of it is locked down for the VoIP scenario; the same is true in this case for ADM2.
IMHO, all VoIP apps should use this mode since it gives a better communication experience (reduced echo, other communications processing, etc.).
There are currently no plans to expand the ADM API, rather the opposite. If your application has special needs, you will probably save time by injecting your own ADM (based on ADM2).

Elise Monaghan

Apr 20, 2021, 2:06:57 PM
to discuss-webrtc
Well, I am already (and always have been) injecting my own ADM based on ADM2, but it's still not possible to change AudioCategory_Communications to my own category without modifying the WebRTC source code directly and rebuilding the WebRTC library, correct?

Henrik Andreasson

Apr 20, 2021, 2:19:57 PM
to discuss...@googlegroups.com
I meant injecting your own (slightly modified) ADM2. No other changes.

Elise Monaghan

Apr 20, 2021, 2:27:58 PM
to discuss-webrtc
That's what I'm currently doing. I have made a custom class that inherits from webrtc::AudioDeviceModule, internally creates and maintains an instance of ADM2 (using webrtc::CreateWindowsCoreAudioAudioDeviceModule), and overrides only a select set of webrtc::AudioDeviceModule methods to cover my application's special needs.
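In condensed form, the wrapper looks roughly like this (a heavily abbreviated sketch; the class name is illustrative and only a couple of the interface's many methods are shown):

```
// Abbreviated sketch of the wrapper pattern described above; not complete.
// The real webrtc::AudioDeviceModule interface has many more pure-virtual
// methods that all need to be forwarded. Since the interface is reference
// counted, instantiate via rtc::make_ref_counted<WrappingAdm>(...) (or
// new rtc::RefCountedObject<WrappingAdm>(...) on older branches).
class WrappingAdm : public webrtc::AudioDeviceModule {
 public:
  explicit WrappingAdm(webrtc::TaskQueueFactory* task_queue_factory)
      : inner_(webrtc::CreateWindowsCoreAudioAudioDeviceModule(
            task_queue_factory, /*automatic_restart=*/true)) {}

  // Most methods are forwarded untouched.
  int32_t Init() override { return inner_->Init(); }
  int32_t StartPlayout() override { return inner_->StartPlayout(); }

  // A select few are overridden to apply application-specific behavior.
  int32_t SetPlayoutDevice(uint16_t index) override {
    // ... application-specific bookkeeping would go here ...
    return inner_->SetPlayoutDevice(index);
  }

  // ... remaining AudioDeviceModule methods forwarded the same way ...

 private:
  rtc::scoped_refptr<webrtc::AudioDeviceModule> inner_;
};
```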

Isn't this what you meant by injecting my own, slightly modified, ADM2? If not, can you please elaborate?

Henrik Andreasson

Apr 20, 2021, 3:15:04 PM
to discuss...@googlegroups.com
I meant that you could copy the default ADM2, make any changes you need locally, and use and maintain that version yourself outside the WebRTC repo (and inject it as is done today).
Then you can change to AudioCategory_Other in your local version if you prefer. Our default ADM will never fit every application's needs.
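For illustration, the local change being discussed comes down to the stream category handed to IAudioClient2::SetClientProperties in ADM2's Core Audio utility code; the helper below is a hedged sketch using the plain Windows API, not the actual WebRTC source:

```
// Hedged sketch, not the actual WebRTC code: in a private copy of ADM2, the
// category passed to IAudioClient2::SetClientProperties can be changed from
// the hard-coded AudioCategory_Communications to e.g. AudioCategory_Other,
// which avoids the communications policy (and the system volume ducking that
// comes with it) at the cost of the communications-mode processing mentioned
// above.
#include <audioclient.h>

HRESULT ApplyNonCommunicationsCategory(IAudioClient2* audio_client) {
  AudioClientProperties props = {};
  props.cbSize = sizeof(props);
  props.bIsOffload = FALSE;
  props.eCategory = AudioCategory_Other;  // instead of AudioCategory_Communications
  return audio_client->SetClientProperties(&props);
}
```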

Zoron

Feb 25, 2024, 9:12:45 PM
to discuss-webrtc
Hello, I found that an old version of WebRTC had a WinMM ADM implementation, but the new version only keeps the CoreAudio implementation. Currently, on one PC, I found that CoreAudio initialization fails and it cannot play sound (the format requested by WebRTC does not match the channel count and bit depth supported by the device, so initialization fails).

I tried changing the requested channel count to 4, but the bit depth (32-bit) is still incompatible, as WebRTC's structures only support 16-bit PCM data by default.
However, the WinMM interface initializes successfully on this PC.

Therefore, I would like to ask whether there is a patch that adds WinMM support back in the new version of WebRTC.
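To confirm this kind of mismatch, the device's shared-mode mix format can be inspected directly through WASAPI. The snippet below is a standalone diagnostic sketch (it assumes COM is already initialized) and is not part of WebRTC:

```
// Diagnostic sketch (not WebRTC code): print the shared-mode mix format of the
// default render endpoint, i.e. the channel count, bit depth and sample rate
// that Core Audio expects. Assumes COM has already been initialized.
#include <audioclient.h>
#include <combaseapi.h>
#include <mmdeviceapi.h>
#include <wrl/client.h>
#include <cstdio>

void PrintDefaultRenderMixFormat() {
  Microsoft::WRL::ComPtr<IMMDeviceEnumerator> enumerator;
  if (FAILED(CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                              IID_PPV_ARGS(&enumerator)))) {
    return;
  }
  Microsoft::WRL::ComPtr<IMMDevice> device;
  if (FAILED(enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device))) {
    return;
  }
  Microsoft::WRL::ComPtr<IAudioClient> client;
  if (FAILED(device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                              reinterpret_cast<void**>(client.GetAddressOf())))) {
    return;
  }
  WAVEFORMATEX* format = nullptr;
  if (SUCCEEDED(client->GetMixFormat(&format)) && format != nullptr) {
    std::printf("channels=%u bits=%u rate=%lu\n",
                static_cast<unsigned>(format->nChannels),
                static_cast<unsigned>(format->wBitsPerSample),
                static_cast<unsigned long>(format->nSamplesPerSec));
    CoTaskMemFree(format);
  }
}
```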

Henrik Andreasson

Feb 26, 2024, 3:26:47 AM
to discuss...@googlegroups.com
I am not familiar with any WinMM ADM. Is the question related to macOS?

In any case, there are no current plans to add multi-channel support in the native ADMs for WebRTC.

Zoron

Feb 28, 2024, 1:13:27 AM
to discuss-webrtc
The following is part of the code from an old version of WebRTC. When it detects that CoreAudio is not supported, it falls back to WaveAudio, which is implemented using WinMM-related APIs.
```
        if (AudioDeviceWindowsCore::CoreAudioIsSupported())
        {
            // create *Windows Core Audio* implementation
            ptrAudioDevice = new AudioDeviceWindowsCore(Id());
            WEBRTC_TRACE(kTraceInfo, kTraceAudioDevice, _id, "Windows Core Audio APIs will be utilized");
        }
        else
        {
            // create *Windows Wave Audio* implementation
            ptrAudioDevice = new AudioDeviceWindowsWave(Id());
            if (ptrAudioDevice != NULL)
            {
                // Core Audio was not supported => revert to Windows Wave instead
                _platformAudioLayer = kWindowsWaveAudio;  // modify the state set at construction
                WEBRTC_TRACE(kTraceWarning, kTraceAudioDevice, _id, "Windows Core Audio is *not* supported => Wave APIs will be utilized instead");
            }
        }
```
However, in the new version the WaveAudio implementation cannot be found. The multi-channel case mentioned above can be verified to work properly with WaveAudio. Could WaveAudio be kept as a fallback option in the new version? Its compatibility seems better.

Henrik Andreasson

Feb 28, 2024, 3:13:08 AM
to discuss...@googlegroups.com
Those are legacy APIs that we no longer use, maintain, or support. Please note that all ADM implementations are rather old and not really part of the WebRTC standard. They are mainly provided as test implementations to drive audio and are not in any way intended as reference implementations. Each user has different needs and expectations, and we are not able to support all combinations. Instead, we have ensured that users can inject their own external ADM implementations (see e.g. https://source.chromium.org/chromium/chromium/src/+/main:third_party/webrtc/pc/peer_connection_interface_unittest.cc;l=650). That way you could use the older ADM version and inject it instead, but you will have to maintain it on your own.

There are no current plans to add support for legacy APIs or to add multi-channel support to the default ADM.

Jozsef Vass

Mar 1, 2024, 11:16:20 AM
to discuss...@googlegroups.com
Wave audio was removed from WebRTC years ago. Bringing it back is our biggest patch. We found that more than 10% of Windows users cannot use CoreAudio, so we fall back to WaveAudio.

Jozsef
