
How to play back in reverse order


Tao

Dec 6, 2002, 11:21:51 AM

I have a clip encoded using MPEG-1. I was asked to play back the video (no
audio) in reverse order, and the reverse playback should be as smooth as
possible. I think I should first decode the video into a buffer frame by
frame, then show the frames in reverse order. But how do I decode the video
into a buffer? Or is there another, simpler solution?

Regards,


Alessandro Angeli [MVP::DigitalMedia]

Dec 6, 2002, 2:10:04 PM

No simple solution, since a -1.0 playback rate is not supported by any known
decoder. Software MPEG players that offer a rewind function usually show only
some poster frames and do not actually play the stream in reverse. So smooth
playback in reverse is not trivial.

Suppose you need to play back in reverse from time T2 to T1 (where T2 > T1):
you need to seek to T1, decode all frames from T1 to T2 as fast as possible
and store them somewhere, then render them in reverse order. You could put a
buffering filter between the MPEG decoder and the video renderer that does
not forward data until instructed by the application to do so, but you would
have a lot of problems.
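
In rough pseudo-C++, just to show the shape of it (SeekTo(), CurrentTime(),
DecodeNextFrame() and ShowFrame() are made-up placeholders for whatever
decode/display path you end up using, not real APIs):

// Decode the segment [T1, T2] forward into memory as fast as possible,
// then display the buffered frames backwards.
std::vector<Frame> frames;                  // Frame = one decoded picture
SeekTo(T1);                                 // made-up seek helper
while (CurrentTime() <= T2)
    frames.push_back(DecodeNextFrame());    // store every decoded frame

for (int i = (int)frames.size() - 1; i >= 0; --i)
    ShowFrame(frames[i]);                   // render in reverse order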

Using 2 graphs may work better. You create a custom source filter that
outputs decoded frames, insert it into an empty graph and let the graph
manager render its output pin. This custom source filter should internally
create another graph where you substitute the video renderer with a custom
renderer that just sinks data and forwards it to the wrapping filter, which
buffers it as needed. The source filter can then pilot its internal graph as
it pleases to make it produce frame groups that it can push into the
external graph from its output pin so that they get rendered. To make the
MPEG decoder in the internal graph work as fast as possible, you can either
set the graph rate to some impossibly high value (like 1000.0) or set a
custom clock for the graph that ticks very fast.
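
For the "as fast as possible" part, a rough sketch of both tricks
(pInternalGraph being the internal graph's IGraphBuilder; dropping the
reference clock with SetSyncSource(NULL) is another common way to get the
same effect as a fast custom clock):

// Make the internal graph decode without being throttled by real time.
// Option A: drop the reference clock, so renderers never wait on timestamps.
IMediaFilter *pMF = NULL;
if (SUCCEEDED(pInternalGraph->QueryInterface(IID_IMediaFilter, (void**)&pMF))) {
    pMF->SetSyncSource(NULL);          // no clock => samples are not throttled
    pMF->Release();
}

// Option B: ask for a very high playback rate (not every filter honors it).
IMediaSeeking *pMS = NULL;
if (SUCCEEDED(pInternalGraph->QueryInterface(IID_IMediaSeeking, (void**)&pMS))) {
    pMS->SetRate(1000.0);
    pMS->Release();
}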

"Tao" <t...@dcs.gla.ac.uk> wrote in message
news:044d01c29d43$95591c50$8af82ecf@TK2MSFTNGXA03...

Tao

Dec 7, 2002, 5:12:49 PM

Hi, Alessandro,

thanks for your helpful reply. But I'm not quite clear about the 'better
way' you mentioned, especially the sentence '... substitute the video
renderer with a custom renderer that just sinks data and forwards it to the
wrapping filter which buffers it as needed...'. I'm not sure which part of
your custom filter would take charge of the 'reverse'. Could you explain a
bit more? A simple block diagram would be very helpful.

Regards,

Tao

Alessandro Angeli [MVP::DigitalMedia]

Dec 7, 2002, 7:58:47 PM

It's kinda difficult to draw with ASCII ;)

Anyway, that's just a suggestion: I'd go that way because I think it will
work smoothly, but I don't suppose that's the only way to make it work.

Let's say you create a source filter that works in push mode,
MySourceFilter. MySourceFilter has only 1 output pin, which supports YUV
video data (the format the MPEG decoder outputs) and delivers samples
downstream by calling IMemInputPin::Receive() on the connected input pin
(most likely the input pin of the VideoRenderer).
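
A bare skeleton of that, built on the DirectShow base classes
(CSource/CSourceStream already call IMemInputPin::Receive() for you when
they deliver the sample FillBuffer() filled in); PopNextFrame() and the
class layout are only illustrative:

#include <streams.h>                 // DirectShow base classes

class MySourceFilter : public CSource {
public:
    // Illustrative helper: pops the next buffered frame, already ordered
    // for reverse playback, out of the filter's internal buffer.
    bool PopNextFrame(BYTE **ppData, long *pcb);
    // ... filter creation, Load(), internal graph handling, etc. ...
};

class MyOutputPin : public CSourceStream {
    MySourceFilter *m_pOwner;        // back pointer to the wrapping filter
public:
    MyOutputPin(HRESULT *phr, MySourceFilter *pOwner)
        : CSourceStream(NAME("MyOutputPin"), phr, pOwner, L"Out"),
          m_pOwner(pOwner) {}

    // Offer the same YUV format the MPEG decoder outputs (body omitted).
    HRESULT GetMediaType(CMediaType *pmt);
    // Request sample buffers big enough for one decoded frame (body omitted).
    HRESULT DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pReq);

    // Called on the pin's worker thread; the base class then delivers the
    // filled sample downstream via IMemInputPin::Receive().
    HRESULT FillBuffer(IMediaSample *pSample)
    {
        BYTE *pSrc = NULL;
        long  cb   = 0;
        if (!m_pOwner->PopNextFrame(&pSrc, &cb))
            return S_FALSE;              // nothing left: signal end of stream

        BYTE *pDst = NULL;
        HRESULT hr = pSample->GetPointer(&pDst);
        if (FAILED(hr))
            return hr;

        CopyMemory(pDst, pSrc, cb);
        // (a real filter would also restamp the sample here)
        return pSample->SetActualDataLength(cb);
    }
};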

When it's created, MySourceFilter internally creates a new graph where it
renders the MPEG file. To get the data from this internal graph, you need to
either hook some pin connection and spy on the data flow or insert some
filter of yours to sink the data (just like the SampleGrabber, but far more
efficient). The DirectShow way would be to insert a custom filter to sink
the data, MyDataSink, which is nothing else than a video renderer that does
not actually render the video data on a display. MyDataSink has only 1
input pin, which accepts the very same format the MPEG decoder outputs and
that MySourceFilter also outputs.
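
If you derive it from CBaseRenderer in the base classes (one reasonable way,
not the only one), MyDataSink is tiny; IMyCustomCallback here is a stand-in
for the MyCustomCallback described a bit further down, and the CLSID and
format checks are left out:

#include <streams.h>

// Stand-in for MyCustomCallback: 1 method, mirroring IMemInputPin::Receive().
struct IMyCustomCallback {
    virtual HRESULT OnSample(IMediaSample *pSample) = 0;
};

// A "renderer" that sinks the decoded frames instead of drawing them.
class MyDataSink : public CBaseRenderer {
    IMyCustomCallback *m_pCallback;
public:
    MyDataSink(HRESULT *phr)
        : CBaseRenderer(GUID_NULL /* use your own CLSID */,
                        NAME("MyDataSink"), NULL, phr),
          m_pCallback(NULL) {}

    // Custom method MySourceFilter calls to register its callback.
    void SetCallback(IMyCustomCallback *pCB) { m_pCallback = pCB; }

    // Accept only the YUV format the MPEG decoder outputs (check omitted).
    HRESULT CheckMediaType(const CMediaType *pmt) { return S_OK; }

    // Instead of drawing, hand each sample over to the wrapping filter.
    HRESULT DoRenderSample(IMediaSample *pSample)
    {
        return m_pCallback ? m_pCallback->OnSample(pSample) : S_OK;
    }
};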

If you let the graph manager have its way, the graph you create will end up
with a VideoRenderer. So either you create the graph manually (which is not
hard), or you render the file and then remove the VideoRenderer and insert
MyDataSink, or you create an empty graph, add MyDataSink and only after that
call RenderFile(), so that the manager will use the already available
renderer first, which is MyDataSink.
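
Building the internal graph the third way could look roughly like this
inside MySourceFilter (error handling trimmed; wszFileName is the name
received from Load(), and SetCallback()/OnSample() are the illustrative
hooks from the sketches above):

// Build InternalGraph with MyDataSink already in place, then RenderFile().
IGraphBuilder *pInternalGraph = NULL;
HRESULT hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                              IID_IGraphBuilder, (void**)&pInternalGraph);

MyDataSink *pSink = new MyDataSink(&hr);
pSink->AddRef();                       // keep our own reference to the sink
pSink->SetCallback(this);              // MySourceFilter implements the callback
pInternalGraph->AddFilter(pSink, L"MyDataSink");

// The graph manager tries the filters already in the graph first, so the
// chain built by RenderFile() ends in MyDataSink, not in a VideoRenderer.
pInternalGraph->RenderFile(wszFileName, NULL);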

To get the data, MySourceFilter tells MyDataSink to deliver the samples it
receives to some custom callback interface, MyCustomCallback, which has only
1 method, identical to the input pin's receive method. The input pin's
receive method on MyDataSink won't do anything else but forward the call to
this interface and let MySourceFilter do something with the data.
MySourceFilter actually copies the data to its own buffer, ready to be
delivered to the outside world when it's time to do so.
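
MySourceFilter's side of that callback can be as small as this (a sketch:
the Frame struct, m_FrameLock and m_Frames are illustrative members, and a
plain deep copy is used instead of the preallocated media samples suggested
in step 15 below, just to keep it short):

#include <streams.h>
#include <vector>
#include <deque>

// One buffered frame: its start time plus a deep copy of the YUV bits
// (all frames have the same size and media type).
struct Frame {
    REFERENCE_TIME t;
    std::vector<BYTE> bits;
    Frame(REFERENCE_TIME t_, const BYTE *p, long cb) : t(t_), bits(p, p + cb) {}
};

// MySourceFilter implements the callback and just copies each decoded frame
// into its own buffer; the output pin later plays the buffer back in reverse.
HRESULT MySourceFilter::OnSample(IMediaSample *pSample)
{
    BYTE *pData = NULL;
    HRESULT hr = pSample->GetPointer(&pData);
    if (FAILED(hr))
        return hr;

    REFERENCE_TIME tStart = 0, tStop = 0;
    pSample->GetTime(&tStart, &tStop);    // keep the timestamp for ordering

    CAutoLock lock(&m_FrameLock);         // CCritSec guarding m_Frames
    m_Frames.push_back(Frame(tStart, pData, pSample->GetActualDataLength()));
    return S_OK;
}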

The flow will be something like this (-: outside world, +: inside
MySourceFilter):

1- you create an instance of MySourceFilter and call
IFileSourceFilter::Load() on it (2-6 are inside Load())
2+ MySourceFilter creates an instance of MyDataSink
3+ MySourceFilter calls some custom method on MyDataSink to notify it about
MyCustomCallback which is implemented by MySourceFilter itself
4+ MySourceFilter creates an empty graph, InternalGraph
5+ MySourceFilter adds MyDataSink to InternalGraph
6+ MySourceFilter calls RenderFile() with the filename it received from the
call to Load()
7- you create an empty graph, ExternalGraph
8- you add MySourceFilter to ExternalGraph
9- you Render() MySourceFilter's output pin
10- you Run() the graph
11+ MySourceFilter spawns a thread to handle InternalGraph and returns from
Run() (so the rest of the operations are carried out by the new thread)
12+ MySourceFilter seeks InternalGraph to where the frame group it's going to
buffer starts
13+ MySourceFilter Run()s InternalGraph
14+ MyDataSink starts receiving samples and forwarding them to
MyCustomCallback
15+ MySourceFilter receives the data from MyCustomCallback and copies it in
its buffer (e.g. a linked list of preallocated media samples, since they are
all of the same size and same media type)
16+ MySourceFilter receives the last sample and Stop()s InternalGraph
17+ MySourceFilter delivers the samples from its buffer in reverse order to
ExternalGraph
18+ MySourceFilter repeats 12-18 until it reaches the beginning of the file

The exact sequence of operations may differ depending on how you like things
to be done.
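
In code, the loop the worker thread of step 11 runs (steps 12-18) might look
roughly like this; m_pSeeking/m_pControl are InternalGraph's
IMediaSeeking/IMediaControl, and m_Duration, m_GroupLength,
WaitForGroupComplete() and DeliverBufferedFramesReversed() are illustrative
names only:

// Worker thread: decode one group of frames forward in InternalGraph,
// hand it to the output pin in reverse, then move to the previous group.
void MySourceFilter::WorkerLoop()
{
    REFERENCE_TIME groupEnd = m_Duration;     // start from the end of the clip

    while (groupEnd > 0 && !m_bStopRequested) {
        REFERENCE_TIME groupStart =
            (groupEnd > m_GroupLength) ? groupEnd - m_GroupLength : 0;

        // 12: seek InternalGraph to the start of the group
        LONGLONG t0 = groupStart, t1 = groupEnd;
        m_pSeeking->SetPositions(&t0, AM_SEEKING_AbsolutePositioning,
                                 &t1, AM_SEEKING_AbsolutePositioning);

        // 13: run it; 14-15: MyDataSink feeds the callback, filling the buffer
        m_pControl->Run();

        // 16: wait until the last sample of the group has arrived, then stop
        WaitForGroupComplete();
        m_pControl->Stop();

        // 17: push the buffered frames out through the output pin, newest first
        DeliverBufferedFramesReversed();

        // 18: repeat with the group that precedes this one in the file
        groupEnd = groupStart;
    }
}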

Of course, you're going to have a pause during 12-16, but you can use 2
parallel threads, one that runs InternalGraph and one that pushes the data
into ExternalGraph, so that you keep decoding while rendering and never run
out of data. You will only have an initial delay during the first buffering
round. Of course, some simple synchronization between the 2 threads will be
necessary to access the buffer.
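
A minimal version of that shared buffer, handing over whole frame groups and
reusing the Frame struct from the callback sketch plus the CCritSec/CAMEvent
helpers from the base classes (the class and its names are only
illustrative):

// Hand-off buffer between the decoding thread (which fills one group at a
// time) and the delivering thread (which plays each group back in reverse).
class GroupQueue {
    CCritSec                         m_Lock;      // protects m_Groups
    std::deque< std::vector<Frame> > m_Groups;    // oldest group at the front
    CAMEvent                         m_NotEmpty;  // signalled when a group arrives
public:
    void PushGroup(std::vector<Frame> &group)     // empties the caller's vector
    {
        {
            CAutoLock lock(&m_Lock);
            m_Groups.push_back(std::vector<Frame>());
            m_Groups.back().swap(group);
        }
        m_NotEmpty.Set();
    }

    bool PopGroup(std::vector<Frame> &out)
    {
        CAutoLock lock(&m_Lock);
        if (m_Groups.empty())
            return false;
        out.swap(m_Groups.front());
        m_Groups.pop_front();
        return true;
    }

    void WaitForGroup() { m_NotEmpty.Wait(); }
};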

The only problem you might encounter is that the MPEG splitter does not
support frame-precise seeking, so you need to be careful where you seek so
as not to end up with some duplicated frames (but the timestamps should help
you discard those). Also, it would work better (and faster) if you knew
where the I-frames are and always seeked to those, but you need to scan the
file first to find them. Scanning the file would also help in the case of a
VBR stream, because otherwise seeking is not predictable. These problems,
however, are not related to the particular technique you choose, but are
always there because of how the MPEG splitter works and because of how MPEG
files are made (without an index). To really solve them you need to write
an MPEG source filter that scans the MPEG system stream and builds an index
(like VirtualDub's MPEG reader does with MPEG-1 system streams and DVD2AVI
does with MPEG-2 program streams).
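
For the duplicated frames, the timestamp check inside the callback can be as
simple as this (m_GroupStart/m_GroupEnd are illustrative members holding the
bounds of the group currently being decoded):

// Because the splitter cannot seek frame-exactly, a decode run for the group
// [m_GroupStart, m_GroupEnd) may also produce frames outside that range,
// which would duplicate frames belonging to another group. Drop them.
REFERENCE_TIME tStart = 0, tStop = 0;
if (SUCCEEDED(pSample->GetTime(&tStart, &tStop)) &&
    (tStart < m_GroupStart || tStart >= m_GroupEnd))
    return S_OK;                     // outside the current group: discard it
// ...otherwise buffer the frame as in the earlier callback sketch...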

As said, just inserting a buffering filter between the decoder and the
renderer might look easier. But you would be hard pressed to manage the
graph from inside a filter (a lot of threading issues, including possible
deadlocks and crashes) and to keep the data flow coherent with all the
seeks, starts and stops, since what you deliver will not be what the graph
manager thinks is being delivered.

"Tao" <t...@dcs.gla.ac.uk> wrote in message

news:055401c29e3d$c72e2900$89f82ecf@TK2MSFTNGXA01...
