Custom Audio Buffers for Multiple Tracks


Scott Godin

Oct 31, 2017, 11:53:37 AM
to discuss-webrtc
Hi All,

I currently have a WebRTC server implemented using the native C++ WebRTC code base. We are using a custom AudioDeviceModule to avoid interaction with the PC mic and speaker and to provide raw linear audio to the WebRTC engine for transmission to the browser side. We also use the custom AudioDeviceModule to access the audio received from the browser side (i.e., the RecordedDataIsAvailable and NeedMorePlayData APIs). This works great!
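
For context, pushing captured audio through the transport looks roughly like this on our side (a sketch; the transport pointer and buffer names are illustrative):

// Push 10 ms of 16 kHz mono linear PCM into the engine, as if it were
// captured from a microphone. pTransport is the webrtc::AudioTransport
// registered with our custom ADM via RegisterAudioCallback.
uint32_t newMicLevel = 0;
pTransport->RecordedDataIsAvailable(
    pLinearAudio,     // audioSamples
    160,              // nSamples (10 ms at 16 kHz)
    sizeof(int16_t),  // nBytesPerSample
    1,                // nChannels
    16000,            // samplesPerSec
    0,                // totalDelayMS
    0,                // clockDrift
    0,                // currentMicLevel
    false,            // keyPressed
    newMicLevel);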

We are starting to explore the ability to send multiple audio sources from the server to the browser and vice-versa. Our first attempt is to use multiple tracks within a single peer connection and stream. I see that I can use an AudioTrackSinkInterface to access the data received on a track, but I cannot find a way to provide a distinct/separate stream of audio to the WebRTC APIs for each track for transmission to the browser side. In fact, both tracks appear to just contain the single stream of data I provide in my custom AudioDeviceModule that is set on the PeerConnectionFactory.

Is anyone aware of a way for me to provide distinct audio buffers for each audio track to be transmitted to the browser?

If there is not currently a way to accomplish this, we may move on to exploring using multiple channels within a track next.

Thanks and Best Regards,
Scott Godin
SIP Spectrum, Inc.

Sergio Garcia Murillo

Oct 31, 2017, 12:01:54 PM
to discuss...@googlegroups.com
Seems this is quite a popular topic lately... :)

Check this thread; it contains quite useful information:
https://groups.google.com/forum/#!topic/discuss-webrtc/Di8JSd3nl5c

TL;DR: Implement an AudioSourceInterface (inherit from webrtc::LocalAudioSource or it will complain later on) and pass it as a parameter when creating the AudioTrack. You will get the audio track sink in AudioCapturer::AddSink(webrtc::AudioTrackSinkInterface* sink), and you can later call sink->OnData to feed data to the VoE channel.
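
In code, the shape is something like this (a minimal sketch; the class and member names are illustrative, not part of the WebRTC API):

class PushAudioSource : public webrtc::LocalAudioSource
{
public:
    // WebRTC hands us the track's sink here; keep it so we can push into it.
    void AddSink(webrtc::AudioTrackSinkInterface* sink) override { m_pSink = sink; }
    void RemoveSink(webrtc::AudioTrackSinkInterface* sink) override { m_pSink = nullptr; }

    // Call this from your own capture path with 10 ms chunks of PCM.
    void PushFrame(const void* pData, int nBitsPerSample, int nSampleRate,
        size_t nChannels, size_t nFrames)
    {
        if (m_pSink)
            m_pSink->OnData(pData, nBitsPerSample, nSampleRate, nChannels, nFrames);
    }

private:
    webrtc::AudioTrackSinkInterface* m_pSink = nullptr;
};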

Best regards
Sergio


Scott Godin

Nov 7, 2017, 12:49:17 PM
to discuss-webrtc
Hi Sergio,

Thanks for the response. I think I was reading the code comments wrong; I assumed sink->OnData was called by the WebRTC native library to deliver track audio to the application, so it's great that it's actually an API I can call to provide the audio data to the track for transmission to the browser side. I assume that once I start doing this, I would stop calling RecordedDataIsAvailable on the AudioTransport of the ADM, correct?

Can I also use the AudioSourceInterface to receive the audio data from the browser side for the track? I don't see an obvious callback on the AudioSourceInterface for that. Currently we are using NeedMorePlayData on the ADM AudioTransport, but I believe that is just a mix of all the tracks present.

Thanks so much!

Scott

Scott Godin

Nov 13, 2017, 5:44:47 PM
to discuss-webrtc
Hi Sergio,

I successfully implemented pushing audio to individual tracks using sink->OnData as you recommended. But I still can't figure out how to read/access the inbound track audio data (from the browser to the native app). Can you give me any pointers? There are a few more details in my previous post.

Thanks so much.
Scott

Scott Godin

Nov 17, 2017, 12:15:36 PM
to discuss-webrtc
I think I figured it out. I need to add handling to the OnAddTrack callback, ask the RtpReceiverInterface for its track(), cast it to AudioTrackInterface, and register (via AddSink) a custom AudioTrackSinkInterface that implements OnData.

For example:
class MyAudioTrackSinkInterface : public webrtc::AudioTrackSinkInterface
{
public:
    void OnData(const void* audio_data,
        int bits_per_sample,
        int sample_rate,
        size_t number_of_channels,
        size_t number_of_frames) override
    {
        // Remote (browser -> native) track audio is delivered here.
    }
};

void WebRTCConductor::OnAddTrack(rtc::scoped_refptr<webrtc::RtpReceiverInterface> receiver,
    const std::vector<rtc::scoped_refptr<webrtc::MediaStreamInterface>>& streams) 
{
    rtc::scoped_refptr<webrtc::MediaStreamTrackInterface> track = receiver->track();
    if (track->kind() == webrtc::MediaStreamTrackInterface::kAudioKind)
    {
        webrtc::AudioTrackInterface* audioTrack = static_cast<webrtc::AudioTrackInterface*>(track.get());
        // Note: AddSink does not take ownership, so allocating with new here
        // would leak. Keep the sink alive elsewhere (m_AudioSink is a member
        // in this sketch) and call RemoveSink before tearing it down.
        audioTrack->AddSink(&m_AudioSink);
}

Thanks for your assistance.

Scott

Sergio Garcia Murillo

Nov 17, 2017, 12:17:38 PM
to discuss...@googlegroups.com
Nice! Thanks for sharing
Sergio


Idan Beck

Nov 18, 2017, 4:27:15 AM
to discuss-webrtc
Hey Guys, 

Really appreciate this thread - I've been working on this issue for a few days now with little luck.

I've been trying to set up two independent audio channels between peers: one for the audio from the mic, and another for generated audio that we're capturing from an application.

I've tried to create a LocalAudioSource, and while I am indeed able to send data over the pipe, the microphone is still being recorded for some reason (this is with the audio disabled). Here is the code we're using to create the LocalAudioSource, in which I've overridden the AddSink function:

       
rtc::scoped_refptr<webrtc::AudioTrackInterface> pAudioTrack = nullptr;

// Set up constraints
webrtc::FakeConstraints audioSourceConstraints;
webrtc::PeerConnectionFactoryInterface::Options fakeOptions;

audioSourceConstraints.AddMandatory(webrtc::MediaConstraintsInterface::kGoogEchoCancellation, false);
audioSourceConstraints.AddOptional(webrtc::MediaConstraintsInterface::kExtendedFilterEchoCancellation, true);
audioSourceConstraints.AddOptional(webrtc::MediaConstraintsInterface::kDAEchoCancellation, true);
audioSourceConstraints.AddOptional(webrtc::MediaConstraintsInterface::kAutoGainControl, true);
audioSourceConstraints.AddOptional(webrtc::MediaConstraintsInterface::kExperimentalAutoGainControl, true);
audioSourceConstraints.AddMandatory(webrtc::MediaConstraintsInterface::kNoiseSuppression, false);
audioSourceConstraints.AddOptional(webrtc::MediaConstraintsInterface::kHighpassFilter, true);

m_pWebRTCLocalAudioSource = WebRTCLocalAudioSource::Create(fakeOptions, &audioSourceConstraints);
CN(m_pWebRTCLocalAudioSource);

pAudioTrack = rtc::scoped_refptr<webrtc::AudioTrackInterface>(
    m_pWebRTCPeerConnectionFactory->CreateAudioTrack(
        strAudioTrackLabel,
        m_pWebRTCLocalAudioSource));
pAudioTrack->AddRef();

pMediaStreamInterface->AddTrack(pAudioTrack);
Any tips would really be appreciated - again, the issue is that even with only this audio source, I still hear the microphone coming through (it's almost as if it's being mixed).

Sergio Garcia Murillo

Nov 18, 2017, 4:37:29 AM
to discuss...@googlegroups.com
Have you provided a dummy external ADM when creating the peerconnection factory?
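
For reference, that looks roughly like this (a sketch assuming the M62-era CreatePeerConnectionFactory overload that accepts a default ADM; the thread and variable names are placeholders):

// The dummy audio layer opens no real capture or render devices, so the
// built-in mic path cannot get mixed in with a custom source.
rtc::scoped_refptr<webrtc::AudioDeviceModule> pDummyADM =
    webrtc::AudioDeviceModule::Create(0, webrtc::AudioDeviceModule::kDummyAudio);

rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> pFactory =
    webrtc::CreatePeerConnectionFactory(
        pNetworkThread, pWorkerThread, pSignalingThread,
        pDummyADM,
        webrtc::CreateBuiltinAudioEncoderFactory(),
        webrtc::CreateBuiltinAudioDecoderFactory());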

Best regards
Sergio


Idan Beck

Nov 21, 2017, 4:46:40 PM
to discuss-webrtc
I have built a custom ADM before - but it was simply a wrapper around the default one. Using the dummy external ADM would then mean that none of the audio received would actually go to the audio hardware, though, and that's what I'm trying to avoid having to build. Also, the dummy ADM wouldn't allow me to create one track as a "mic" track and the other as a "screen capture" track, since I'm not able to get the audio, right?

Basically, I'm trying to avoid having to build a custom WASAPI implementation, but it seems like there's no real way around it. If I implement a custom WASAPI layer (mic capture, etc.), then I could effectively use the dummy ADM, use OnData to drive data onto a given track, and use OnData on the sink side to push the received audio to WASAPI.

Is this what you mean, or is there something I'm missing that would allow me to avoid all of that work on the platform audio side?

Let me know, and thanks again for the feedback and advice.  This is a pretty critical feature for us! 

Sergio Garcia Murillo

Nov 22, 2017, 5:18:01 AM
to discuss...@googlegroups.com
Just provide an ADM implementation with no-op recording methods and a wrapper around the default ADM for the playback ones.

Best regards
Sergio

Idan Beck

Nov 22, 2017, 1:05:08 PM
to discuss-webrtc
I attempted something similar when I wrapped the default ADM - but what you're saying is basically to create a new implementation that falls back to the default audio layer for playback. However, then I don't get the platform-based mic recording, correct? Am I missing something there?

Meanwhile, I've basically been building a WASAPI implementation to capture/render audio on Windows. This isn't cross-platform, but we're only on Windows right now (native C++ WebRTC only at the moment). Then I was going to use the dummy ADM to basically open up generic pipes to funnel that audio through. Longer term, I was going to roll this all up into a custom ADM that wraps the audio layer I'm building.

I've built a bunch of audio engines in the past, so it's not new work - but it's a lot of effort. Although the last time I did audio programming on Windows it was with DirectSound, and I'll admit WASAPI is a bit easier to get along with!

benjami...@gmail.com

Nov 24, 2017, 3:18:49 AM
to discuss-webrtc
Hi Scott,

I'm facing the same issues as you. I want to push some custom audio data to an audio track for a remote peer connected with a browser. You said that you successfully pushed data into an audio track with sink->OnData; can you give me more information about this process? To me, the OnData sink method is used to retrieve data from a specific track; moreover, the function prototype is:

void OnData(const void* audio_data,
            int bits_per_sample,
            int sample_rate,
            size_t number_of_channels,
            size_t number_of_frames)

which looks like a function that delivers data rather than one that receives it. But maybe I'm wrong...

Thanks in advance, best regards,

Benjamin BATTIN 

Sergio Garcia Murillo

Nov 24, 2017, 3:22:48 AM
to discuss...@googlegroups.com
You have to override the AudioSourceInterface when creating the audio track and provide the mic audio via the OnData method, as explained earlier in the thread.
Note that you will be bypassing the AEC and audio processing, so you'd better have a good reason for doing so.

Best regards
Sergio

Sergio Garcia Murillo

Nov 24, 2017, 3:24:02 AM
to discuss...@googlegroups.com
It can be used to provide audio from a LocalAudioSource to the track so that it is sent to the remote peer - it worked for me, at least ;)

Best regards
Sergio

benjami...@gmail.com

Nov 24, 2017, 3:34:10 AM
to discuss-webrtc
Hi Sergio,

thanks for your quick answer, but I'm a bit confused about this process. I read your previous reply:

TL;DR: Implement an AudioSourceInterface (inherit from webrtc::LocalAudioSource or it will complain later on) and pass it as a parameter when creating the AudioTrack. You will get the audio track sink in AudioCapturer::AddSink(webrtc::AudioTrackSinkInterface* sink), and you can later call sink->OnData to feed data to the VoE channel.

I understand that I have to implement a class inheriting from webrtc::LocalAudioSource and pass it to the track creation function. Then you mention AudioCapturer::AddSink(webrtc::AudioTrackSinkInterface* sink), but I can't find any AudioCapturer class in my WebRTC source tree (I'm on the M62 release)... Which release are you working on?

benjami...@gmail.com

Nov 24, 2017, 5:12:18 AM
to discuss-webrtc
And moreover, do I need to provide an external (dummy or not) ADM?

Sergio Garcia Murillo

Nov 24, 2017, 5:19:03 AM
to discuss...@googlegroups.com
If you don't, the internal ADM will be feeding audio data simultaneously with your LocalAudioSource, and the two will be mixed together.



Benjamin Battin

Nov 24, 2017, 5:41:30 AM
to discuss...@googlegroups.com
Okay, that will surely be a problem. But currently I'm still stuck on the audio data push problem.
To test the code, I'm trying to implement an audio loopback for a remote peer. This peer sends audio data to my application via an audio track (captured from the mic, with Chrome); all I want is to send this data back to it. To do that, I've implemented things like this:

1) Before my application sends the offer to the remote peer, I add the stream constituted of my local audio track:

  stream_audio_ = peer_connection_factory_->CreateLocalMediaStream("stream_audio");

  cricket::AudioOptions audio_options;
  audio_options.echo_cancellation = rtc::Optional<bool>(false);
  audio_options.auto_gain_control = rtc::Optional<bool>(false);
  audio_options.noise_suppression = rtc::Optional<bool>(false);
  audio_options.highpass_filter = rtc::Optional<bool>(false);
  audio_options.stereo_swapping = rtc::Optional<bool>(false);
  audio_options.typing_detection = rtc::Optional<bool>(false);
  audio_options.recording_sample_rate = rtc::Optional<unsigned int>(48000);
  audio_options.playout_sample_rate = rtc::Optional<unsigned int>(48000);
  audio_options.aecm_generate_comfort_noise = rtc::Optional<bool>(false);
  audio_options.experimental_agc = rtc::Optional<bool>(false);
  audio_options.extended_filter_aec = rtc::Optional<bool>(false);
  audio_options.delay_agnostic_aec = rtc::Optional<bool>(false);
  audio_options.experimental_ns = rtc::Optional<bool>(false);
  audio_options.intelligibility_enhancer = rtc::Optional<bool>(false);
  audio_options.level_control = rtc::Optional<bool>(false);
  audio_options.residual_echo_detector = rtc::Optional<bool>(false);
  audio_options.audio_network_adaptor = rtc::Optional<bool>(true);
  audio_source_ = webrtc::LocalAudioSource::Create(&audio_options);

  stream_audio_->AddTrack(peer_connection_factory_->CreateAudioTrack("audio", audio_source_));

  peer_connection_->AddStream(stream_audio_);

2) Then, in the OnAddStream function (when my application is notified that the remote peer sends audio data from its mic):

    stream->GetAudioTracks().at(0)->set_enabled(false); // Disable audio output in speaker
    stream->GetAudioTracks().at(0)->AddSink(&input_audio_sink_); // From this I am able to retrieve audio samples in OnData callback

At this point, I'm not able to push the retrieved samples back to the remote peer, since I don't have any interface on the audio source. You tell me that I can use the OnData method to do that, but I don't know how to achieve this.

Thanks for your assistance, I really appreciate it.

Best Regards.

Benjamin BATTIN

Sergio Garcia Murillo

Nov 24, 2017, 6:02:33 AM
to discuss...@googlegroups.com
You have to call the method on audio_source_, which should be an object you implement, not one created by the PC factory.

BR
Sergio

Benjamin Battin

Nov 24, 2017, 6:06:19 AM
to discuss...@googlegroups.com
Do you have any dummy implementation of the LocalAudioSource that you could share with me?

Thanks a lot, 

Benjamin BATTIN

Scott Godin

Nov 27, 2017, 5:27:04 PM
to discuss-webrtc
FYI - I just learned that in order to get the track audio, you still need to call NeedMorePlayData on the AudioTransport of the ADM (every 10 ms) in order to get the OnData callback to occur.

Scott

Scott Godin

Nov 27, 2017, 5:33:00 PM
to discuss-webrtc
Here is some code I used....

class MyLocalAudioSource : public webrtc::LocalAudioSource
{
public:
    static rtc::scoped_refptr<MyLocalAudioSource> Create(const std::string& sTrackName,
        const webrtc::MediaConstraintsInterface* constraints)
    {
        rtc::scoped_refptr<MyLocalAudioSource> source(
            new rtc::RefCountedObject<MyLocalAudioSource>(sTrackName, constraints));
        return source;
    }

    static rtc::scoped_refptr<MyLocalAudioSource> Create(const std::string& sTrackName, const cricket::AudioOptions& audio_options)
    {
        rtc::scoped_refptr<MyLocalAudioSource> source(
            new rtc::RefCountedObject<MyLocalAudioSource>(sTrackName, audio_options));
        return source;
    }

    const cricket::AudioOptions& options() const override { return m_Options; }

    void AddSink(webrtc::AudioTrackSinkInterface* sink) override 
    {
        m_pAudioTrackSinkInterface = sink;
    }
    void RemoveSink(webrtc::AudioTrackSinkInterface* sink) override
    {
        m_pAudioTrackSinkInterface = 0;
    }

    void OnData(const void* pAudioData, int nBitPerSample, int nSampleRate, size_t nNumChannels, size_t nNumFrames)
    {
        if (m_pAudioTrackSinkInterface)
        {
            m_pAudioTrackSinkInterface->OnData(pAudioData, nBitPerSample, nSampleRate, nNumChannels, nNumFrames);
        }
    }
protected:
    MyLocalAudioSource(const std::string& sTrackName, const webrtc::MediaConstraintsInterface* constraints) : m_sTrackName(sTrackName), m_pAudioTrackSinkInterface(0)
    {
        CopyConstraintsIntoAudioOptions(constraints, &m_Options);
    }
    MyLocalAudioSource(const std::string& sTrackName, const cricket::AudioOptions& audio_options) : m_sTrackName(sTrackName), m_Options(audio_options), m_pAudioTrackSinkInterface(0)
    {
    }
    ~MyLocalAudioSource() override {}

private:
    std::string m_sTrackName;
    cricket::AudioOptions m_Options;
    webrtc::AudioTrackSinkInterface* m_pAudioTrackSinkInterface;
};

void WebRTCConductor::AddStreams() 
{
    if (m_ActiveStream.find(g_StreamLabel) != m_ActiveStream.end())
    {
        return;  // Already added.
    }

    // Create stream
    rtc::scoped_refptr<webrtc::MediaStreamInterface> stream =
        m_pPeerConnectionFactory->CreateLocalMediaStream(g_StreamLabel);

    // Create Audio Track 1 
    m_ActiveAudioSources[g_AudioLabel1] = MyLocalAudioSource::Create(g_AudioLabel1, cricket::AudioOptions());
    rtc::scoped_refptr<webrtc::AudioTrackInterface> audio_track1(
        m_pPeerConnectionFactory->CreateAudioTrack(g_AudioLabel1, m_ActiveAudioSources[g_AudioLabel1]));
    // Add audio track to stream
    stream->AddTrack(audio_track1);

    // Create Audio Track 2
    m_ActiveAudioSources[g_AudioLabel2] = MyLocalAudioSource::Create(g_AudioLabel2, cricket::AudioOptions());
    rtc::scoped_refptr<webrtc::AudioTrackInterface> audio_track2(
        m_pPeerConnectionFactory->CreateAudioTrack(g_AudioLabel2, m_ActiveAudioSources[g_AudioLabel2]));
    // Add audio track to stream
    stream->AddTrack(audio_track2);

    // TODO - switch to using AddTrack API on m_pPeerConnection - since AddStream will eventually be deprecated

    if (!m_pPeerConnection->AddStream(stream))
    {
        std::stringstream ss;
        ss << __FUNCTION__ << ": Adding stream to PeerConnection failed";
        m_WebRTCHandler.onLog(IWebRTCHandler::Error, ss.str().c_str());
    }
    typedef std::pair<std::string, rtc::scoped_refptr<webrtc::MediaStreamInterface> > MediaStreamPair;
    m_ActiveStream.insert(MediaStreamPair(stream->label(), stream));
}

void WebRTCConductor::SendAudioTrackToWebRTCClient(int nTrackNum, short *pLinearAudio, int nSamples, int nSamplingFreqHz, int nChannels)
{
    auto it = m_ActiveAudioSources.find(nTrackNum == 2 ? g_AudioLabel2 : g_AudioLabel1);
    if (it != m_ActiveAudioSources.end())
    {
        int NumberSamplesFor10ms = nSamplingFreqHz / 100; // e.g. 80 for 8 kHz and 160 for 16 kHz
        assert(nSamples % NumberSamplesFor10ms == 0);

        for (int i = 0; i < nSamples / NumberSamplesFor10ms; i++)
        {
            it->second->OnData(&pLinearAudio[i * NumberSamplesFor10ms * nChannels],
                sizeof(pLinearAudio[0]) * 8,  // BitsPerSample (16 for linear PCM)
                nSamplingFreqHz,  // SampleRate
                nChannels,
                NumberSamplesFor10ms);   // NumFrames
        }
    }
}
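
For example, a hypothetical caller pushing 20 ms of 16 kHz mono PCM to track 1 (the buffer is filled from wherever your media pipeline captures audio):

short linearAudio[320];  // 2 x 160 samples = 20 ms at 16 kHz
// ... fill linearAudio from your capture source ...
pConductor->SendAudioTrackToWebRTCClient(1, linearAudio, 320, 16000, 1);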

Good luck.

The big problem we are having now is that Chrome does not seem to be able to take these two tracks we send it and separate them out to different speaker devices for playout.

Scott


benjami...@gmail.com

Dec 3, 2017, 1:20:51 AM
to discuss-webrtc
First of all, thank you Sergio and Scott for your answers. I resolved my problem by implementing my own external ADM.
Now I'm able to totally control the audio flow, captured either from a physical device or from my own user-generated data, and I take advantage of all the underlying audio processing of the WebRTC audio stack.
For anyone who wants to deal with audio, take a (long) look at the ADM class. It is the almost-mandatory entry point you have to deal with.

Idan Beck

Dec 5, 2017, 5:52:51 PM
to discuss-webrtc
FYI - I just learned that in order to get the track audio, you still need to call NeedMorePlayData on the AudioTransport of the ADM (every 10 ms) in order to get the OnData callback to occur.

Scott

Hey Scott - are you using a dummy ADM for this? How are you calling NeedMorePlayData every 10 ms? If you do this, is it then possible to get the OnData callback from the buffer being passed through the LocalAudioSource? When I tried to do this with a dummy ADM, I wasn't getting any data through.

Thanks! 

Scott Godin

Dec 6, 2017, 11:03:52 AM
to discuss...@googlegroups.com
Hi Idan,

Yes, I have a dummy ADM. I used the TimeUntilNextProcess and Process methods of the ADM to trigger my call to NeedMorePlayData every 10 ms.
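
For reference, a rough sketch of that pattern (the member names are hypothetical; it assumes the module-style TimeUntilNextProcess()/Process() callbacks that the WebRTC process thread invokes on the ADM in this drop):

int64_t MyDummyADM::TimeUntilNextProcess()
{
    // Ask the process thread to call Process() again every 10 ms.
    return std::max<int64_t>(0, 10 - (rtc::TimeMillis() - m_nLastProcessTimeMs));
}

void MyDummyADM::Process()
{
    m_nLastProcessTimeMs = rtc::TimeMillis();
    if (!m_pAudioTransport)
        return;

    // Pull 10 ms of playout audio (16 kHz mono here, for illustration).
    const size_t kSamplesPer10ms = 160;
    int16_t samples[kSamplesPer10ms];
    size_t nSamplesOut = 0;
    int64_t elapsed_time_ms = -1;
    int64_t ntp_time_ms = -1;
    m_pAudioTransport->NeedMorePlayData(kSamplesPer10ms, sizeof(int16_t), 1, 16000,
        samples, nSamplesOut, &elapsed_time_ms, &ntp_time_ms);
    // Driving this pull is what causes the per-track
    // AudioTrackSinkInterface::OnData callbacks to fire.
}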

Scott


Idan Beck

Dec 6, 2017, 11:23:08 AM
to discuss...@googlegroups.com
I tried that a bit and wasn't having much luck. I looked at your code, however, and it looks like you're on a significantly different version of WebRTC than us, and we really need to upgrade.

In the meantime, I've managed to sort of hack the RecordedDataIsAvailable function to mix a pending buffer into the audio received from the mic. It's a bit of a crude approach, but it's working well enough, except we're having an issue where the audio input is coming in faster than it can be processed for some reason.

Once this basic feature is up, I'm going to go back, update WebRTC, and try the dummy ADM / sink approach, since I think it's much more robust.

Thanks for your help! 




Idan Beck

Dec 15, 2017, 9:23:10 PM
to discuss-webrtc
Hey Scott, 

I recently updated our WebRTC to the most recent master branch, and I'm finally seeing some activity on the peer using the local audio source.  

In short, I'm creating the local audio source and then using the OnData from the sink that I get in the AddSink call. 

However, the peer client is getting the following error message: 

(channel.cc:1519): GetPlayoutTimestamp() failed to retrieve timestamp

Any idea what that might be about? 

The steps are as follows: I create a dummy-based ADM, which is really just wrapping AudioDeviceModule::Create(id, dummy); the ADM basically just holds on to the transport so I can call NeedMorePlayData every 10 ms on a thread (they recently got rid of Process and TimeUntilNextProcess). Then, in OnAddStreams, I register an AudioTrackSinkInterface on the track (I tried with the source, but it's the same). Then I stream audio to the source through the OnData call of the custom local audio source.

Would definitely appreciate any advice / thoughts! 

Scott Godin

Dec 18, 2017, 12:50:22 PM
to discuss...@googlegroups.com
Hi Idan,

I'm not sure what's going on. My WebRTC drop is a few months old, so that might be a difference. Also, I create and add my AudioTrackSinkInterface in the OnAddTrack callback (you can see the code I posted earlier in this topic).

You said: "Then I stream audio to the source through the OnData call of the custom local audio source. "

The OnData method of your custom AudioTrackSinkInterface is a callback method that is called by the WebRTC core; it is for receiving remote track data. Your statement above seems to indicate you are trying to send data to it instead of receiving from it? Sending track data is discussed in the first few posts of this topic.

Scott


Idan Beck

Dec 18, 2017, 2:27:35 PM
to discuss-webrtc
Sorry if I'm being confusing - and also thanks again for your support on this! 

So I am creating a new audio source in the WebRTC peer connection as described at the top of this thread; here is the code for that:

auto pWebRTCLocalAudioSource = WebRTCLocalAudioSource::Create(strAudioTrackLabel, fakeAudioOptions);
CN(pWebRTCLocalAudioSource);

pWebRTCLocalAudioSource->SetAudioSourceName(strAudioTrackLabel);

// Add to map
m_pWebRTCLocalAudioSources[strAudioTrackLabel] = pWebRTCLocalAudioSource;

pAudioTrack = rtc::scoped_refptr<webrtc::AudioTrackInterface>(
    m_pWebRTCPeerConnectionFactory->CreateAudioTrack(
        strAudioTrackLabel,
        pWebRTCLocalAudioSource));

Where WebRTCLocalAudioSource is a webrtc::LocalAudioSource. Then, like you, I'm sending data to it by way of the OnData function of the webrtc::AudioTrackSinkInterface it gets in the AddSink method. Here is the code for sending the data:

RESULT WebRTCLocalAudioTrack::SendAudioPacket(const AudioPacket &pendingAudioPacket) {
    RESULT r = R_PASS;

    CN(m_pLocalAudioTrackSink);

    //m_pLocalAudioSourceSink->OnData(
    //    pendingAudioPacket.GetDataBuffer(),
    //    pendingAudioPacket.GetBitsPerSample(),
    //    pendingAudioPacket.GetSamplingRate(),
    //    pendingAudioPacket.GetNumChannels(),
    //    pendingAudioPacket.GetNumFrames()
    //);

    int samples_per_sec = 44100;
    int nSamples = pendingAudioPacket.GetNumFrames();
    int channels = 1;

    static int count = 0;
    static int16_t *pDataBuffer = nullptr;

    if (pDataBuffer == nullptr) {
        pDataBuffer = new int16_t[nSamples];

        for (int i = 0; i < nSamples; i++) {
            pDataBuffer[i] = sin((count * 4200.0f) / samples_per_sec) * 10000;
            count++;
        }
    }

    m_pLocalAudioTrackSink->OnData(
        pDataBuffer,
        16,
        samples_per_sec,
        channels,
        nSamples);

Error:
    return r;
}

As you can see, I'm diverting the actual audio data and replacing it with a test signal.

Then, in OnAddStreams, I add my public webrtc::AudioTrackSinkInterface, which exposes the OnData callback that should be called when data is available on the channel.

Then, where I create the dummy ADM, I set up a thread to call NeedMorePlayData every 10 ms - this is because Process has recently been removed from the ADM, so it can no longer be used. This is the code for calling NeedMorePlayData on the transport:

m_audioProcessingThread = std::thread(&WebRTCConductor::ADMProcess, this);

...

RESULT WebRTCConductor::ADMProcess() {
    RESULT r = R_PASS;
    bool fDone = false;

    size_t nSamples = 441;
    size_t nBytesPerSample = 4;
    size_t nChannels = 2;
    uint32_t samples_per_sec = 44100;

    size_t nSamplesOut = 0;
    int64_t elapsed_time_ms = 0;
    int64_t ntp_time_ms = 0;

    void* pAudioBufferData = (void*)malloc(nSamples * nBytesPerSample * nChannels);
    CN(pAudioBufferData);

    while (fDone == false) {

        CN(m_pAudioDeviceModule);

        ntp_time_ms = -1;
        elapsed_time_ms = -1;
        nSamplesOut = 0;

        int retval = reinterpret_cast<WebRTCAudioDeviceModule*>(m_pAudioDeviceModule.get())->GetTransport()->NeedMorePlayData(
            nSamples,
            nBytesPerSample,
            nChannels,
            samples_per_sec,
            pAudioBufferData,
            nSamplesOut,
            &elapsed_time_ms,
            &ntp_time_ms);

        int16_t *pDataBuffer = (int16_t*)pAudioBufferData;

        // Sleep a bit
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }

Error:
    return r;
}

This is just a testing thing. 

On the "sender" side I don't see any issues, but on the receiver side the OnData callback for the track sink never gets hit and also I get that error message:

(channel.cc:1519): GetPlayoutTimestamp() failed to retrieve timestamp

The one bit of good news I can make sense of is that this error is logged over and over again, which suggests that the WebRTC peer is at least receiving something, but I'm not sure where it's going, why the timestamp is wrong in the first place, or how to set it.

Any thoughts?


jee...@gmail.com

Dec 21, 2017, 2:38:54 AM
to discuss-webrtc
Sergio, will AEC and audio processing be turned off entirely, or only for that track?