Audio signal processing parameters in Android application


Mikael Nylund

Feb 10, 2014, 4:09:51 AM
to discuss...@googlegroups.com
Hello,

We are trying to improve audio quality in our native Android application, and the lack of documentation is slowing us down.

We are trying to set up basic parameters for the signal-processing part of WebRTC, which includes the following:

// Audio constraints.
const char MediaConstraintsInterface::kEchoCancellation[] =
    "googEchoCancellation";
const char MediaConstraintsInterface::kExperimentalEchoCancellation[] =
    "googEchoCancellation2";
const char MediaConstraintsInterface::kAutoGainControl[] =
    "googAutoGainControl";
const char MediaConstraintsInterface::kExperimentalAutoGainControl[] =
    "googAutoGainControl2";
const char MediaConstraintsInterface::kNoiseSuppression[] =
    "googNoiseSuppression";
const char MediaConstraintsInterface::kExperimentalNoiseSuppression[] =
    "googNoiseSuppression2";
const char MediaConstraintsInterface::kHighpassFilter[] =
    "googHighpassFilter";
const char MediaConstraintsInterface::kTypingNoiseDetection[] =
    "googTypingNoiseDetection";
const char MediaConstraintsInterface::kAudioMirroring[] = "googAudioMirroring";

We have not found out how to pass these parameters to the peer connection in such a way that they take effect.
What we have done is following:


AudioTrack localAudioTrack = peerConnectionFactory.createAudioTrack("xxxx");

if (!mediaStream.addTrack(localAudioTrack))
    throw new RuntimeException("Failed to add local audio track");

and then later

peerConnection.addStream(mediaStream, my_custom_media_constraints);

* Does the latter "addStream" call and its media constraints affect the local audio track given earlier to the mediaStream? If not, where is the configuration (setting the constraints) of the audioTrack done?
* How does one give options to the MediaConstraints object? We have tried the following:

my_custom_media_constraints.mandatory.add(new KeyValuePair("googNoiseSuppression", "true"));

These do not take effect on the audio signal processing. The lack of examples and documentation is sometimes really frustrating; hopefully someone here can help us. Thanks in advance.
-Mikael
 

Ami Fischman

Feb 10, 2014, 8:11:40 PM
to discuss...@googlegroups.com
Sorry for your frustration.  In general the Java API aims to mirror the C++ API so that no extra documentation is necessary (and so that extra docs don't become stale as they fall out of sync w/ the impl).  The AppRTCDemo app is meant to be the canonical example.

I think you've stumbled over an oversight in the Java API.  In the C++ API you can specify media constraints to PeerConnectionFactoryInterface::CreateAudioSource() and then feed the resulting AudioSource to PeerConnectionFactoryInterface::CreateAudioTrack().  In the Java impl I apparently started implementing an independent AudioSource object but then didn't tie it into the API, missing the fact that this elides the only place media constraints could be specified on the track.

Filed bug 2912 to track this.

Cheers,
-a

--
 
---
You received this message because you are subscribed to the Google Groups "discuss-webrtc" group.
To unsubscribe from this group and stop receiving emails from it, send an email to discuss-webrt...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Mikael Nylund

Feb 11, 2014, 4:39:04 AM
to discuss...@googlegroups.com


On Tuesday, February 11, 2014 3:11:40 AM UTC+2, Ami Fischman wrote:
Sorry for your frustration.  In general the Java API aims to mirror the C++ API so that no extra documentation is necessary (and so that extra docs don't become stale as they fall out of sync w/ the impl).  The AppRTCDemo app is meant to be the canonical example.

I think you've stumbled over an oversight in the Java API.  In the C++ API you can specify media constraints to PeerConnectionFactoryInterface::CreateAudioSource() and then feed the resulting AudioSource to PeerConnectionFactoryInterface::CreateAudioTrack().  In the Java impl I apparently started implementing an independent AudioSource object but then didn't tie it into the API, missing the fact that this elides the only place media constraints could be specified on the track.

Filed bug 2912 to track this.

Cheers,
-a


Big thanks, Ami, for the honest and fast reply; I really appreciate it.

Is there, in the meantime, a way for us to implement this ourselves? I was thinking of taking the WebRTCDemo application as a reference and creating our own JNI wrapper to set up the constraints on the AudioTrack.

Basically reusing this code and its JNI implementation?


If this worked, I would also get more freedom to set up the AEC parameters, e.g.:

public enum AecmModes {
    QUIET_EARPIECE_OR_HEADSET, EARPIECE, LOUD_EARPIECE,
    SPEAKERPHONE, LOUD_SPEAKERPHONE
}

So would the following work?

* Create our own JNI wrapper that creates an AudioTrack with certain constraints, returns it to the Java side, and then gives it to the peer connection?

Thanks in advance,
Mikael


 

Ami Fischman

Feb 11, 2014, 12:26:12 PM
to discuss...@googlegroups.com
If you were interested in hard-coding constraints, the minimal diff would be to add them instead of the NULL in the CreateAudioTrack call at



Keith

Feb 11, 2014, 3:58:24 PM
to discuss...@googlegroups.com

Ami:

 

We are also very interested in the explanation of the bug you mention to Mikael above.  We have been struggling with switching AudioSources in our application for some time now, so I really appreciate you filing the bug.  Can you give me any idea of when a fix will be made?  We are contemplating making a code change ourselves but would rather have a final fix from you.  You mention above that we could replace the NULL in the CreateAudioTrack method.  The second argument to CreateAudioTrack is an AudioSourceInterface, which quickly gets into details of the code we are not familiar with.  Is there any guidance you can provide that would simplify the fix you mention above?  Thanks for any help you can provide.

 

Keith Wimberly

Ami Fischman

Feb 11, 2014, 8:20:29 PM
to discuss...@googlegroups.com
Proposed fix posted to the bug.
(Please try out the fix and send feedback about it on the review log.)

Cheers,
-a

Andrey Grusha

Feb 13, 2014, 10:28:39 AM
to discuss...@googlegroups.com
Ami:
Could you please clarify what exactly should be passed to CreateAudioSource()? Are there any default values that should be passed if there are no special requirements for the audio source?

Ami Fischman

Feb 13, 2014, 10:51:53 AM
to discuss...@googlegroups.com
If there are no special requirements then you should specify an empty MediaConstraints object to createAudioSource, like PeerConnectionTest had to do in my change: https://code.google.com/p/webrtc/source/diff?path=/trunk/talk/app/webrtc/javatests/src/org/webrtc/PeerConnectionTest.java&spec=svn5540&r_previous=5539&r=5540&format=side
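[Editor's note] For reference, a minimal Java sketch of the post-fix flow. This assumes a build containing the fix (r5540 or later) and an already-initialized PeerConnectionFactory; the variable names and the track id "audio0" are placeholders, not from the thread, and this cannot run outside a device build with the libjingle_peerconnection bindings:

```java
// Sketch only: requires the org.webrtc Java bindings on an Android device.
// "factory" and "mediaStream" are assumed to exist already.
MediaConstraints audioConstraints = new MediaConstraints();
// Leave the lists empty for defaults, or add pairs such as:
audioConstraints.optional.add(
    new MediaConstraints.KeyValuePair("googNoiseSuppression", "true"));

AudioSource audioSource = factory.createAudioSource(audioConstraints);
AudioTrack audioTrack = factory.createAudioTrack("audio0", audioSource);
mediaStream.addTrack(audioTrack);
```

The constraints now travel with the AudioSource, which is the piece the original Java API was missing.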


On Thu, Feb 13, 2014 at 7:28 AM, Andrey Grusha <andrew...@gmail.com> wrote:
Ami:
Could you please clarify what exactly should be passed to CreateAudioSource()? Are there any default values that should be passed if there are no special requirements for the audio source?


Mikael Nylund

Feb 14, 2014, 1:58:20 AM
to discuss...@googlegroups.com
Thanks again, Ami, for reacting so fast!


I did read through the code changes, but unfortunately I can't test them now. We are having trouble with the latest trunk/stable releases: the AppRTC demo randomly crashes the whole device (we are using an
Android set-top box) within 2-30 min. The only release that has been working for us so far is stable@4889; anything newer does not work. But I'll write a new post on that matter.

Anyway, big thanks Ami!

-Mikael

Andrey Grusha

Feb 14, 2014, 6:25:46 AM
to discuss...@googlegroups.com
Hi there again!

I have a question about this new API. If I want to set some restrictions on my audio stream (for example bandwidth, sampling rate, etc.), what exactly should I put into this MediaConstraints set? Is there a list of fields/values that can be configured this way?

Mikael Nylund

Feb 14, 2014, 7:22:23 AM
to discuss...@googlegroups.com
Hi Andrey,

With media constraints (the audio part) you only change your local signal-processing parameters. In SDP offers you can specify how the codec behaves. Please see the documentation for signal processing:

Then the SDP documentation for Opus and iSAC:



Br,
Mikael

Andrey Grusha

Feb 14, 2014, 11:01:43 AM
to discuss...@googlegroups.com
What I want to know is exactly how I can define my local signal-processing parameters. For example, I want to set Opus to work with a 16 kHz clock rate.
The question is: what should my MediaConstraints look like to accomplish that?
Or is the only way to do that to add an "a=fmtp" field to my SDP manually?

Vikas

Feb 14, 2014, 6:47:29 PM
to discuss...@googlegroups.com
Hi,

I don't think you can achieve that through MediaConstraints currently; you would need to modify the SDP.

/Vikas
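[Editor's note] A minimal, self-contained sketch of the SDP munging Vikas suggests. The helper below is hypothetical (not part of the WebRTC API): it appends an Opus fmtp line to a CRLF-delimited SDP string. A robust version would merge the parameter into an existing a=fmtp line rather than add a second one, and whether the implementation honors the parameter is a separate question, as the follow-up about issue 1906 shows:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SdpMunge {
    // Appends "a=fmtp:<pt> maxplaybackrate=<rateHz>" right after the Opus
    // rtpmap line.  Assumes CRLF line endings and no existing fmtp line
    // for that payload type.
    static String addOpusMaxPlaybackRate(String sdp, int rateHz) {
        Pattern rtpmap = Pattern.compile("a=rtpmap:(\\d+) opus/48000.*");
        StringBuilder out = new StringBuilder();
        for (String line : sdp.split("\r\n")) {
            out.append(line).append("\r\n");
            Matcher m = rtpmap.matcher(line);
            if (m.matches()) {
                out.append("a=fmtp:").append(m.group(1))
                   .append(" maxplaybackrate=").append(rateHz).append("\r\n");
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String sdp = "m=audio 9 UDP/TLS/RTP/SAVPF 111\r\n"
                   + "a=rtpmap:111 opus/48000/2\r\n";
        // Prints the SDP with "a=fmtp:111 maxplaybackrate=16000" inserted
        // after the rtpmap line.
        System.out.print(addOpusMaxPlaybackRate(sdp, 16000));
    }
}
```

In an app this would be applied to the SessionDescription string between createOffer/createAnswer and setLocalDescription.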

Andrey Grusha

Feb 15, 2014, 1:21:26 PM
to discuss...@googlegroups.com
Hi, Vikas!

Thank you for your answer. But would it have any effect if I added an "a=fmtp" field to my SDP? I ask because I found issue 1906. Does it mean that all fmtp parameters are unsupported, or just 'maxplaybackrate'?

Vikas

Feb 18, 2014, 3:02:21 PM
to discuss...@googlegroups.com
Hi,

That's right, maxplaybackrate is currently not supported; please star the issue if you want it to be prioritized. The list of currently supported fmtp parameters can be found here.

/Vikas

Ömer Furkan Tercan

Mar 15, 2014, 2:55:57 PM
to discuss...@googlegroups.com
Hi!

I am now using r5703, which has this fix. I can see from logcat that by passing the following audio constraints I can override the default audio options. However, I can only tune the echoCancellation, autoGainControl, and highpassFilter parameters; for instance, noiseSuppression is ignored.

I am trying to tune the audio constraints to get the best audio quality. Are some audio parameters really ignored, or are they just not displayed in the logs?
Still, I am not able to notice any difference after tuning the parameters. Is there a set of best combinations that I can try out?

Here are the audio constraints that I am passing to the org.webrtc.PeerConnectionFactory.createAudioSource:
mandatory: [], optional: [googExperimentalEchoCancellation: false, googAutoGainControl: false, googHighpassFilter: true, googAudioMirroring: false, googTypingNoiseDetection: false, goooNoiseSuppression: true, googEchoCancellation: true, googExperimentalAutoGainControl: false]

And this is how I see from logcat that libjingle overrides the audio options with my preferences:
I/libjingle( 2909): Setting option overrides: AudioOptions {aec: true, agc: false, hf: true, swap: false, typing: false, }
I/libjingle( 2909): Applying audio options: AudioOptions {aec: true, agc: false, hf: true, swap: false, typing: false, experimental_agc: false, experimental_aec: false, experimental_ns: false, }

Thanks,
-Furkan

Tommi

Mar 17, 2014, 5:39:13 AM
to discuss...@googlegroups.com
Hi Furkan,

You have a typo in 'goooNoiseSuppression'.  It should be 'googNoiseSuppression'.
Also, instead of googExperimentalEchoCancellation, I think you want googEchoCancellation2.

Additionally, you might want to explicitly turn off other constraints (see here):
googAutoGainControl2, googNoiseSuppression2, googAudioMirroring.
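[Editor's note] Putting Tommi's corrections together, the correctly spelled constraint keys from the mediaconstraintsinterface.cc excerpt quoted at the top of the thread can be tabulated as below. A plain-Java sketch: the values simply mirror Furkan's example settings (they are not recommendations), and wiring them into MediaConstraints.optional is left to the reader:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AudioConstraintKeys {
    // Correct key spellings; note "googNoiseSuppression" (not "gooo...")
    // and "googEchoCancellation2" rather than
    // "googExperimentalEchoCancellation".
    static Map<String, String> example() {
        Map<String, String> c = new LinkedHashMap<>();
        c.put("googEchoCancellation", "true");
        c.put("googEchoCancellation2", "false");
        c.put("googAutoGainControl", "false");
        c.put("googAutoGainControl2", "false");
        c.put("googNoiseSuppression", "true");
        c.put("googNoiseSuppression2", "false");
        c.put("googHighpassFilter", "true");
        c.put("googTypingNoiseDetection", "false");
        c.put("googAudioMirroring", "false");
        return c;
    }

    public static void main(String[] args) {
        // Print each key/value pair, one per line.
        example().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```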

Cheers,
Tommi




Ömer Furkan Tercan

Mar 17, 2014, 6:12:49 AM
to discuss...@googlegroups.com
Thank you Tommi!

Shame on me for such an easy typo! All constraints can be tuned now. Also, thanks for pointing me to the mediaconstraintsinterface.

-Furkan

Keith Wimberly

Jan 4, 2015, 1:45:04 AM
to discuss...@googlegroups.com
I am getting back to this after the bug has already been fixed. How do you identify the audio source that you want to use? If there are multiple microphones on the device, how does the Java app specify which one to use? See the code snippet below:

audioSource = peerConnectionFactory.createAudioSource(pcMediaConstraints);
audioTrack = peerConnectionFactory.createAudioTrack("myAudioDesc", audioSource);
mediaStream.addTrack(audioTrack);

Firas Al Kafri

Nov 27, 2015, 1:53:30 AM
to discuss-webrtc

Ömer,

Did you get it to work? Voice cancellation?

Thanks in advance