OK, I changed the rendering mode from RenderFile to something similar
to Pesce's book.
I have tried several videos, all MPEGs that play in the SDK samples
(blended)... I can also play any of the files
individually. GraphEdit renders, but I can only see one video, not the
second. I assume it works since it doesn't complain. Every time I blend
them I get 262722 (0x00040242, VFW_S_PARTIAL_RENDER), which reports as
Partial Render. Please help. Any thoughts?
protected void RenderFileToVMR9(videoClipConfiguration vc)
{
    int hr = 0;
    IPin pin = null;
    IBaseFilter source = null;
    IBaseFilter audio = null;

    // Add the source filter for this clip.
    hr = graphBuilder.AddSourceFilter(vc.filePath, "", out source);
    DsError.ThrowExceptionForHR(hr);

    // Find the first unconnected output pin on the source.
    pin = DsFindPin.ByConnectionStatus(source,
        DirectShowLib.PinConnectedStatus.Unconnected, 0);
    if (pin == null)
    {
        throw new Exception("Could not render stream: no unconnected pin on source filter");
    }

    // If the clip has audio, add an audio renderer.
    if (vc.audio)
    {
        audio = (IBaseFilter)new DSoundRender();
        hr = graphBuilder.AddFilter(audio, "DirectSound Renderer");
        DsError.ThrowExceptionForHR(hr);
    }

    if (fgraph2 == null)
    {
        fgraph2 = (IFilterGraph2)graphBuilder;
    }

    // Render the pin, reusing renderers already in the graph (the VMR9).
    // Note: VFW_S_PARTIAL_RENDER (0x00040242) is a *success* code, so
    // ThrowExceptionForHR will not flag it; check hr explicitly.
    hr = fgraph2.RenderEx(pin,
        DirectShowLib.AMRenderExFlags.RenderToExistingRenderers, IntPtr.Zero);
}
More FYI:
I added a RenderFile(file) call to the BitMapMixer sample from the
DirectShowLib samples and got an ActiveMovie window.
I then added the RenderFileToVMR9 method and received the same
262722. Does anyone have any direction here?
I guess I'm confused here; not sure if this is my error, or something
about DirectShowLib/C# versus pure C++....
I have run Remote Graph Edit; here is the idea (video only):

Video1.mpg --> Mpeg2Demux --> (Video In) Nero Video Decoder (Video Out) --> VMR9
Video2.mpg --> (Input) Nero Splitter --> (SubPicture In) Nero Video Decoder (Video Out) --> VMR9

The Nero Video Decoder is the same instance for both videos and has one
output to the VMR9, which doesn't appear to be working. Do I need
to manually connect all of this to ensure proper rendering? Or is it
worth building separate graphs blending to one AP (allocator-presenter)?
The MultiVMR9 sample?
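To make the manual-connection question concrete, here is a minimal sketch of what I mean, assuming the demux/splitter, decoder, and VMR9 are already added to the graph. The variable names and pin indices are placeholders, not my actual code; ConnectDirect refuses to insert intermediate filters, so a cross-wired "smart" connection can't sneak in:

```csharp
// Sketch only: manually wire demux -> decoder -> VMR9 for one clip.
// Assumes graphBuilder (IGraphBuilder) plus demux, decoder, vmr9
// (IBaseFilter) already in the graph; pin indices are illustrative.
IPin demuxVideoOut  = DsFindPin.ByDirection(demux, PinDirection.Output, 0);
IPin decoderVideoIn = DsFindPin.ByDirection(decoder, PinDirection.Input, 0);
IPin decoderVideoOut = DsFindPin.ByDirection(decoder, PinDirection.Output, 0);
IPin vmrIn          = DsFindPin.ByDirection(vmr9, PinDirection.Input, 0);

// ConnectDirect connects exactly these two pins, no intelligent connect.
int hr = graphBuilder.ConnectDirect(demuxVideoOut, decoderVideoIn, null);
DsError.ThrowExceptionForHR(hr);
hr = graphBuilder.ConnectDirect(decoderVideoOut, vmrIn, null);
DsError.ThrowExceptionForHR(hr);
```

If the direct decoder-to-VMR9 hop fails on media type, graphBuilder.Connect on the same pin pair would let intelligent connect insert just that one hop.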
Most groups ask, as a courtesy, that the original poster report
discoveries and/or solutions to help others and to save potential
helpers from spending time on a solved problem. So I have been reporting
back to this thread as I have made discoveries. Unfortunately I
still do not have the solution, so please help. That said, I have more
discoveries.
It appears that upon a call to RenderEx on the IFilterGraph2 interface,
the (smart connect) isn't so smart.
Though it appears to work each time with the C++ samples, it fails
each time with the DirectShowLib (C#) code. The RenderEx method is
called with the AMRenderExFlags.RenderToExistingRenderers flag. I have
tried calling the method right after inserting each source filter, and
have also waited until all of the source filters are added and then
iterated with RenderEx. Each case is the same.

What is happening in the graph: the system selects the MPEG-2 demux for
the first video and a Nero Splitter for the second. Video 1's video
output pin connects to a Nero Video Decoder (video input pin). Now the
interesting part. The Nero Splitter has a video, an audio, and a
subpicture output pin. The video pin is not connecting, and the
subpicture pin connects to the same Nero Decoder that has its video
input pin connected to Video 1. The Nero Decoder has a video input and a
subpicture input, so it appears the videos are cross-wired.

In the C++ samples, Video 1 connects to a Nero Decoder and leaves the
subpicture input pin alone. Video 2 connects to a new instance of the
Nero Decoder and connects its subpicture and video outputs to that
decoder's subpicture and video inputs (this works). OBTW, I have
reversed the order of the inserted videos and the 'Song Remains The
Same'. The C# code (DirectShowLib) is just a wrapper. It doesn't really
implement anything.
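In case it helps anyone reproduce this, here is roughly how the cross-wiring can be seen from code rather than GraphEdit: walk every filter in the graph and print each pin's connection. A sketch using DirectShowLib's enumeration interfaces; COM Release / FreePinInfo cleanup is omitted for brevity:

```csharp
// Sketch: dump every pin connection in the graph so a cross-wired pin
// (e.g. a subpicture pin feeding the wrong decoder) shows up at once.
// Assumes 'graphBuilder' is the populated graph.
IEnumFilters enumFilters;
graphBuilder.EnumFilters(out enumFilters);
IBaseFilter[] filters = new IBaseFilter[1];
while (enumFilters.Next(1, filters, IntPtr.Zero) == 0)
{
    FilterInfo fi;
    filters[0].QueryFilterInfo(out fi);
    IEnumPins enumPins;
    filters[0].EnumPins(out enumPins);
    IPin[] pins = new IPin[1];
    while (enumPins.Next(1, pins, IntPtr.Zero) == 0)
    {
        PinInfo pi;
        pins[0].QueryPinInfo(out pi);
        IPin other;
        // ConnectedTo returns VFW_E_NOT_CONNECTED for unconnected pins.
        if (pins[0].ConnectedTo(out other) == 0 && other != null)
        {
            PinInfo otherPin;
            other.QueryPinInfo(out otherPin);
            FilterInfo otherFilter;
            otherPin.filter.QueryFilterInfo(out otherFilter);
            Console.WriteLine("{0}[{1}] -> {2}[{3}]",
                fi.achName, pi.name, otherFilter.achName, otherPin.name);
        }
        else
        {
            Console.WriteLine("{0}[{1}] -> (unconnected)", fi.achName, pi.name);
        }
    }
}
```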
So, the $64M question[s]:

Why does the C# example fail (it fails every time, the same way, at the
same point)?
Does timing matter in smart connect?
Could the wrapper code and C# force memory limitations, causing
the second instance of the decoder not to get created and connected?
If I connect everything manually, that is Video -> Demux/Splitter ->
Decoder, leaving the audio alone, will RenderEx then 'smartly' connect
the video to the VMR9 and the audio filters properly?
If the answer to the last question is yes, should I call GetStreamCaps
on the source filters to get the media type to pass to the demux's
CreateOutputPin?
Thanks
Solved: I had inadvertently set the number of streams to render to 1
instead of 2.
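For anyone who hits the same wall: the "number of streams" here is the VMR9 mixer stream count, which must be set before any input pins are connected. A sketch of the relevant setup (the surrounding graph code and the exact rendering mode are assumptions, not my verbatim code):

```csharp
// Sketch of the fix: tell the VMR9 up front how many input streams it
// will mix, BEFORE connecting any pins. Mine was inadvertently set to 1,
// so the second video produced VFW_S_PARTIAL_RENDER (262722).
IBaseFilter vmr9 = (IBaseFilter)new VideoMixingRenderer9();
int hr = graphBuilder.AddFilter(vmr9, "VMR9");
DsError.ThrowExceptionForHR(hr);

IVMRFilterConfig9 config = (IVMRFilterConfig9)vmr9;
hr = config.SetRenderingMode(VMR9Mode.Windowless); // mode per your setup
DsError.ThrowExceptionForHR(hr);

// Two clips blended => two mixer streams.
hr = config.SetNumberOfStreams(2);
DsError.ThrowExceptionForHR(hr);
```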