Insertable Streams in AV1 frames.


Alla Eddine Attia

Nov 7, 2023, 5:48:39 AM
to discuss-webrtc
Hello, 

I am using insertable streams via the TransformableFrameInterface to append additional data to the AV1-encoded frames I am sending. My code is inspired by https://github.com/w3c/webrtc-encoded-transform/blob/main/explainer.md.
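The append/strip round trip can be sketched as follows. This is a minimal, self-contained model: `std::vector` stands in for the frame buffer that `TransformableFrameInterface::GetData()`/`SetData()` would expose, and the function names are illustrative, not actual WebRTC API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sender-side transform (illustrative name): append custom bytes after the
// encoded AV1 payload. In the real transform this enlarged buffer would be
// handed back via SetData().
std::vector<uint8_t> AppendTrailer(const std::vector<uint8_t>& frame,
                                   const std::vector<uint8_t>& my_data) {
  std::vector<uint8_t> out = frame;
  out.insert(out.end(), my_data.begin(), my_data.end());
  return out;
}

// Receiver-side transform (illustrative name): strip the trailer again.
// This only recovers the original frame if the packetizer/depacketizer
// delivered the bytes unmodified and the trailer length is known.
std::vector<uint8_t> StripTrailer(const std::vector<uint8_t>& frame,
                                  size_t trailer_size) {
  assert(frame.size() >= trailer_size);
  return std::vector<uint8_t>(frame.begin(), frame.end() - trailer_size);
}
```

The receiver-side transform must remove exactly the bytes the sender added before the decoder sees the frame; any modification in between breaks the round trip, which is the failure described below.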

The issue I am currently having is that the additional data I append to the end of the frame is treated as a separate OBU. On the receiver, after depacketization, I remove that additional data (OBU), but decoding the remaining frame then produces many decoding errors.

How would you advise inserting my data into the AV1-encoded frame so as to avoid these decoding issues?

Thank you in advance.

Philipp Hancke

Nov 13, 2023, 4:14:03 AM
to discuss...@googlegroups.com
Do you have a concrete example, similar to how dumps the frame? (Preferably a black 320x180 key frame, which should be reasonably small.)

The interaction between insertable streams and packetization / depacketization is not well defined (for AV1, see the issue I just filed), which is one of the reasons for this discussion: https://github.com/w3c/webrtc-encoded-transform/pull/186


Alla Eddine Attia

Nov 14, 2023, 9:19:40 AM
to discuss...@googlegroups.com
Hello,

I am using the C++ native library. For a code example, please see point 2, "Set up transform streams that perform some processing on data", in webrtc-encoded-transform/explainer.md (w3c/webrtc-encoded-transform on GitHub); my code is similar to that example.

So, after the transform, the result is:

TransformedFrame = OriginalFrame | MyData

This caused many issues: on the receiving side the data is modified, extracting the original frame fails, and the frames end up corrupted.

In another attempt, I prefixed my data with an OBU header and size so that it is treated as a metadata OBU by the packetizer and the depacketizer:

TransformedFrame = OriginalFrame | ObuHeader | leb128(MyData.Size) | MyData
where ObuHeader = 0b0'0101'010 (obu_type = OBU_METADATA, has_size_field = 1).
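Building that wrapper might look like the following sketch (self-contained; 0x2A is the header byte 0b0'0101'0'1'0 for obu_type = OBU_METADATA (5) with has_size_field = 1). Note that a spec-compliant metadata OBU payload would additionally begin with a metadata_type leb128 field, which, like the layout above, this sketch omits:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Unsigned LEB128 as used for obu_size in the AV1 bitstream:
// 7 payload bits per byte, MSB set on every byte except the last.
std::vector<uint8_t> Leb128(uint64_t value) {
  std::vector<uint8_t> out;
  do {
    uint8_t byte = value & 0x7F;
    value >>= 7;
    if (value != 0) byte |= 0x80;
    out.push_back(byte);
  } while (value != 0);
  return out;
}

// Wrap custom bytes as a metadata-typed OBU:
// header = forbidden(0) | obu_type(0101) | extension_flag(0) |
//          has_size_field(1) | reserved(0) = 0x2A.
std::vector<uint8_t> WrapAsMetadataObu(const std::vector<uint8_t>& my_data) {
  std::vector<uint8_t> obu{0x2A};
  const std::vector<uint8_t> size = Leb128(my_data.size());
  obu.insert(obu.end(), size.begin(), size.end());
  obu.insert(obu.end(), my_data.begin(), my_data.end());
  return obu;
}
```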

This method gave better results: most of the time I receive the frame with my data attached. But sometimes, on the receiver side, my data gets detached and I receive two frames instead of one. The first frame contains the original frame and the second contains my data:
1st received frame = OriginalFrame
2nd received frame = ObuHeader | leb128(MyData.Size) | MyData

This causes problems for me, since every frame I send has unique data tied to it. Do you have any idea why the receiver sometimes breaks the frame in two? How can I prevent the separation from happening so that I always receive one frame with my data attached?
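One receiver-side workaround can be sketched as follows, under a stated assumption: every OBU in the depacketized frame carries a size field (has_size_field = 1), which may not hold for all AV1 frames WebRTC produces. The sketch walks the OBUs from the front and drops a trailing metadata OBU, so the stripping works whether or not the packetizer kept the trailer attached:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Decode one unsigned LEB128 value starting at `pos`; advances `pos`.
uint64_t ReadLeb128(const std::vector<uint8_t>& buf, size_t& pos) {
  uint64_t value = 0;
  int shift = 0;
  while (pos < buf.size()) {
    const uint8_t byte = buf[pos++];
    value |= static_cast<uint64_t>(byte & 0x7F) << shift;
    if (!(byte & 0x80)) break;
    shift += 7;
  }
  return value;
}

// Scan the OBUs in a frame and, if the last one is a metadata OBU
// (obu_type == 5), return the frame with that OBU removed. On any
// size-less or malformed OBU, give up and return the frame unchanged.
std::vector<uint8_t> DropTrailingMetadataObu(
    const std::vector<uint8_t>& frame) {
  size_t pos = 0;
  size_t last_start = 0;
  uint8_t last_type = 0xFF;
  while (pos < frame.size()) {
    last_start = pos;
    const uint8_t header = frame[pos++];
    last_type = (header >> 3) & 0x0F;
    if (header & 0x04) ++pos;                     // skip extension byte
    if (!(header & 0x02)) return frame;           // no size field: give up
    const uint64_t size = ReadLeb128(frame, pos);
    if (pos > frame.size() || size > frame.size() - pos) return frame;
    pos += size;
  }
  if (last_type == 5)  // OBU_METADATA
    return std::vector<uint8_t>(frame.begin(), frame.begin() + last_start);
  return frame;
}
```

If the trailer arrives detached as its own frame, the function returns an empty buffer, which can serve as the signal that this "frame" was only the detached trailer and should be merged with the previous one rather than decoded.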

Thank you for your help.



--

Ala Eddine ATTIA. 

Software Engineer

Ex Secretary-General of CLL-FST