Unable to Receive Audio Track on C++ WebRTC Receiver (Video Works)

Agneya Singh

May 28, 2025, 4:51:46 PM
to discuss-webrtc

Hi everyone,

I'm working on a custom WebRTC application where one peer is a browser-based sender and the other is a C++ native receiver.

Goal

I want to receive both audio and video tracks from the remote peer in my C++ application. The video stream works correctly: I can attach a VideoSink and process frames as expected. However, no audio is being received even though the browser is sending it.
For audio, OnTrack fires in the C++ application, but my sink's OnData() is never called.

1. JavaScript (sender):

I'm sending both audio and video from the browser like this:

stream.getAudioTracks().forEach(track => {
  this.pc.addTrack(track, stream);
});
stream.getVideoTracks().forEach(track => {
  this.pc.addTrack(track, stream);
});


  On the JS side, chrome://webrtc-internals shows that audio packets are being sent (framesSent and packetsSent both keep increasing).

2. C++ application

In my C++ Conductor::OnTrack() handler, I'm successfully receiving the video track, but the audio sink never gets triggered. Here's my code:

void Conductor::OnTrack(rtc::scoped_refptr<webrtc::RtpTransceiverInterface> transceiver)
{
    auto track = transceiver->receiver()->track();
    std::cout << "Inside Conductor::OnTrack\n";

    if (track->kind() == webrtc::MediaStreamTrackInterface::kVideoKind) {
        rtc::scoped_refptr<webrtc::VideoTrackInterface> video_track =
            static_cast<webrtc::VideoTrackInterface*>(track.get());

        remote_video_sink_.reset(new VideoFrameExtractor());
        rtc::VideoSinkWants wants;
        wants.rotation_applied = true;
        video_track->AddOrUpdateSink(remote_video_sink_.get(), wants);
        std::cout << "OnTrack Video track received and sink attached.\n";
    }

    else if (track->kind() == webrtc::MediaStreamTrackInterface::kAudioKind) {
        rtc::scoped_refptr<webrtc::AudioTrackInterface> audio_track =
            static_cast<webrtc::AudioTrackInterface*>(track.get());

        std::cout << "[OnTrack] Audio track received:\n";
        std::cout << "  ID      : " << audio_track->id() << "\n";
        std::cout << "  State   : " << (audio_track->state() == webrtc::MediaStreamTrackInterface::TrackState::kLive ? "Live" : "Ended") << "\n";
        std::cout << "  Enabled : " << (audio_track->enabled() ? "true" : "false") << "\n";

        remote_audio_sink_.reset(new AudioSinkImpl());
        audio_track->AddSink(remote_audio_sink_.get());

        std::cout << "[OnTrack] Audio track attached.\n";
    }
}


- OnTrack() is triggered for both audio and video.
- The audio track appears to be live and enabled.
- I'm using a custom AudioSinkImpl, but its OnData() callback is never triggered.
- STUN and ICE are working (I receive remote video).
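For reference, here is a trimmed sketch of my AudioSinkImpl. In the real app it derives from webrtc::AudioTrackSinkInterface; I've stubbed a local copy of that interface below (same OnData() signature) so the snippet compiles on its own, and the body just counts calls instead of buffering PCM:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdio>

// Local stand-in for webrtc::AudioTrackSinkInterface so this snippet is
// self-contained; the real class lives in api/media_stream_interface.h and
// OnData() has this exact signature.
class AudioTrackSinkInterface {
 public:
  virtual ~AudioTrackSinkInterface() = default;
  virtual void OnData(const void* audio_data,
                      int bits_per_sample,
                      int sample_rate,
                      size_t number_of_channels,
                      size_t number_of_frames) = 0;
};

class AudioSinkImpl : public AudioTrackSinkInterface {
 public:
  void OnData(const void* audio_data,
              int bits_per_sample,
              int sample_rate,
              size_t number_of_channels,
              size_t number_of_frames) override {
    // The real implementation copies the PCM into a ring buffer; here we
    // just count invocations to prove the callback fires at all.
    calls_.fetch_add(1, std::memory_order_relaxed);
    std::printf("OnData: %d-bit, %d Hz, %zu ch, %zu frames\n",
                bits_per_sample, sample_rate, number_of_channels,
                number_of_frames);
  }

  int calls() const { return calls_.load(std::memory_order_relaxed); }

 private:
  std::atomic<int> calls_{0};
};
```

With the browser sending Opus at 48 kHz stereo, I would expect each OnData() call to carry one 10 ms chunk (480 frames per channel), but the callback never runs at all.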


Questions:

1. Is there anything else I need to enable in C++ to activate audio decoding or sink processing?
2. Are there known conditions under which AudioTrack::AddSink() would fail to deliver audio?
3. Is it possible that the audio is not being routed because of codec negotiation? (I am using Opus at 48000 Hz, 16-bit.)