Hello,
Although I did not do a ton of experimenting with the filters, I had
awesome results and genuinely real-time encoding.
I made a simple coder based on the easy demo coder.
I took my source filters and ran them through a sample grabber. I ran
another thread at a scheduled frame rate; it copied the current
sample-grabber callback's buffer and encoded it. Same thing with the
audio, using C-based coders that take a byte pointer.
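
Roughly, the video side looks like this (a minimal C++ sketch, not my
exact code; OnVideoBuffer, encode_video_frame, and the globals are
placeholder names):

#include <atomic>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

// Most recent frame snapshotted out of the grabber callback.
static std::mutex g_frame_lock;
static std::vector<uint8_t> g_latest_frame;
static std::atomic<bool> g_running{true};

// Stand-in for the C coder that takes a byte pointer.
static void encode_video_frame(const uint8_t* data, size_t len) {
    (void)data; (void)len;  // hand the bytes to the real encoder here
}

// Called by the sample grabber on every frame; just copy the bytes out.
void OnVideoBuffer(const uint8_t* buffer, size_t len) {
    std::lock_guard<std::mutex> lock(g_frame_lock);
    g_latest_frame.assign(buffer, buffer + len);
}

// Scheduled encoder thread: wake at a fixed rate, grab whatever frame
// is current, and encode the copy so the grabber is never blocked.
void EncoderLoop(double fps) {
    const auto period = std::chrono::duration_cast<
        std::chrono::steady_clock::duration>(
        std::chrono::duration<double>(1.0 / fps));
    auto next = std::chrono::steady_clock::now();
    std::vector<uint8_t> local;
    while (g_running) {
        next += period;
        {
            std::lock_guard<std::mutex> lock(g_frame_lock);
            local = g_latest_frame;
        }
        if (!local.empty())
            encode_video_frame(local.data(), local.size());
        std::this_thread::sleep_until(next);
    }
}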
I responded to the audio sample-grabber callback by packing audio
packets into the output container and interleaving the video packets
where video stream time = audio stream time. The advantage I have with
this type of application is also being able to pass in
'keyframe_requested' on the hard cuts between sources initiated by
user clicks in the GUI.
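
In the same placeholder style, the interleave + keyframe logic amounts
to something like this (the queue and flag names are made up):

#include <cstdint>
#include <queue>

struct Packet {
    int64_t stream_time_ms;  // presentation time in milliseconds
    bool keyframe;           // forced true on a hard cut
    // ... encoded payload ...
};

static std::queue<Packet> g_video_q;       // video waiting to interleave
static bool g_keyframe_requested = false;  // set by the GUI on a click

// Stand-in for the muxer write.
static void write_to_container(const Packet& pkt, bool is_video) {
    (void)pkt; (void)is_video;
}

// From the encoder thread: on a hard cut, force the next packet to a
// keyframe, then queue it until the audio clock catches up.
void OnVideoPacket(Packet video) {
    if (g_keyframe_requested) {
        video.keyframe = true;
        g_keyframe_requested = false;
    }
    g_video_q.push(video);
}

// From the audio callback: write the audio packet, then flush every
// queued video packet whose stream time has been reached
// (video time <= audio time).
void OnAudioPacket(const Packet& audio) {
    write_to_container(audio, /*is_video=*/false);
    while (!g_video_q.empty() &&
           g_video_q.front().stream_time_ms <= audio.stream_time_ms) {
        write_to_container(g_video_q.front(), /*is_video=*/true);
        g_video_q.pop();
    }
}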
Compounding the issue of the muxer not releasing data as it is written
out of the coders, the hardware has no contract for which samples
become available first, or for the full delta between the first audio
time and the first video time. I've seen two or three second deltas
between audio input and video input that force the muxer to drop audio
and wait for the video time to be ready for first presentation, so
there isn't a ton of audio in the output before the video snaps in.
Even worse, I have received the video packet first, typically with a
5000 millisecond stream-time mark, and then received audio chunks
starting at stream time 2000. The video comes first with timestamps
far ahead of the audio, which arrives maybe a second after the first
video... it's really screwy and also system dependent. This makes the
lag partially dependent on the hardware driver implementation.
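
One way to tame that in the app (again a placeholder sketch, not a
general fix, since the offsets are driver dependent): latch the first
video stream time, drop audio stamped before it, and rebase both
streams to that mark.

#include <cstdint>

static int64_t g_first_video_ms = -1;  // first video stream time seen

// Latch the first video timestamp (e.g. the ~5000 ms mark I saw).
void OnFirstVideoTime(int64_t video_stream_time_ms) {
    if (g_first_video_ms < 0)
        g_first_video_ms = video_stream_time_ms;
}

// Rebase a packet's stream time so the first video frame becomes t=0;
// apply this to both audio and video before muxing.
int64_t RebaseStreamTime(int64_t stream_time_ms) {
    return stream_time_ms - g_first_video_ms;
}

// Decide whether an audio packet may be muxed; drops audio until video
// has started and discards chunks stamped before the first video
// frame, so the output doesn't open with seconds of audio-only lead-in.
bool KeepAudioPacket(int64_t audio_stream_time_ms) {
    if (g_first_video_ms < 0)
        return false;  // no video yet
    return audio_stream_time_ms >= g_first_video_ms;
}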
Fun!