Christoph, thanks so much for responding. Here are some additional details:
There is a sending device, a camera running custom firmware (which I did not write) that can only do H.264. The Android side I have set up only to receive a stream, not send one. I add the video sink in the onAddTrack callback:
@Override
public void onAddTrack(RtpReceiver rtpReceiver, MediaStream[] mediaStreams) {
    Log.d(TAG, "onAddTrack: new track added.");
    if (mediaStreams.length > 0 && mediaStreams[0].videoTracks.size() > 0) {
        Log.d(TAG, "onAddTrack: Adding video sink");
        mediaStreams[0].videoTracks.get(0).addSink(videoSink);
    }
}
I believe that is one of the legacy callbacks? I have also tried doing a similar thing in onAddStream, but the results are the same. From what I can tell, only one video sink is ever added to the media stream (the logs above fire only once per connection and streaming session).
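For completeness, the onAddStream variant I tried looked roughly like this (a sketch from memory, using the same videoSink field):

@Override
public void onAddStream(MediaStream mediaStream) {
    // Legacy (Plan B) callback; behaves the same as onAddTrack for me.
    if (mediaStream.videoTracks.size() > 0) {
        mediaStream.videoTracks.get(0).addSink(videoSink);
    }
}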
I've also been playing around with the rendering views to see if the problem is related to them, but I haven't been able to figure that out. I'll explain this as best I can, because I have a feeling something here may be the key:
The layout contains a TextureView. I add a SurfaceTextureListener (the wiring is sketched after the class below), and when the surface is ready in onSurfaceTextureAvailable I pass the SurfaceTexture to my ProxyVideoSink, where I create the render target:
private class ProxyVideoSink implements VideoSink {
    private SurfaceEglRenderer target = null;

    public synchronized void setSurfaceTexture(SurfaceTexture surfaceTexture) {
        if (surfaceTexture != null && target == null) {
            target = new SurfaceEglRenderer("Test");
            // eglContext is a shared EglBase.Context field (created elsewhere).
            target.init(eglContext, EglBase.CONFIG_PLAIN, new GlRectDrawer());
            target.createEglSurface(surfaceTexture);
        }
    }

    @Override
    public synchronized void onFrame(VideoFrame videoFrame) {
        if (target != null) {
            target.onFrame(videoFrame);
        }
    }
}
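And the listener wiring I mentioned, roughly (the view and field names are from my layout, trimmed for brevity):

textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        videoSink.setSurfaceTexture(surface);
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {}

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        // I return true here; presumably the SurfaceEglRenderer should also be
        // released at this point, which I may not be handling correctly yet.
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {}
});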
If I change the "onFrame" callback to the following:
@Override
public synchronized void onFrame(VideoFrame videoFrame) {
    // executor is an ExecutorService field (not shown).
    executor.execute(() -> {
        if (target != null) {
            target.onFrame(videoFrame);
        }
    });
}
...then I get smooth playback. But, as mentioned above, I eventually get a low-level crash from EglRenderer. And the onFrame() call itself has no delay even without the executor: if I put a log statement immediately above and below it, they both log at essentially the same time, even when I'm only getting a single frame every few seconds. It's the next onFrame() that is delayed.
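One thing I'm now wondering about that crash: as far as I know, a VideoFrame is only guaranteed valid for the duration of onFrame(), so by the time the executor task runs, the caller may already have released it. If that's the cause, the hand-off would need an explicit retain/release, something like this sketch:

@Override
public synchronized void onFrame(VideoFrame videoFrame) {
    // The frame is ref-counted; retain it before handing it to another
    // thread, and release it once the renderer is done with it.
    videoFrame.retain();
    executor.execute(() -> {
        if (target != null) {
            target.onFrame(videoFrame);
        }
        videoFrame.release();
    });
}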
And the thing that really boggles my mind is that when I run it without that modification to force rendering onto the executor thread, playback is bad for a long time and then *poof*! It just becomes super smooth at some point. I can't explain what happens there, but once it starts working, after the couple minutes of badness, it stays good indefinitely.
I've analyzed the stream with Wireshark and a packet capture, and the data ingest rate is normal. As I mentioned, I added logging to WebRTC itself, and I can see that frames are decoded and added to the ring buffer (in generic_decoder.cc) at the normal rate. It's just that whatever is supposed to trigger the "decoded" callback to pull each frame out of the ring buffer and hand it to the renderer is not happening.