From what I've read and researched, I should use GetDeliveryBuffer(),
Deliver() and DeliverEndOfStream().
I haven't figured out, though, how to neutralize the "automatic" calls
to FillBuffer that CSource initiates.
Thanks for any help in implementing Deliver() - I understand it's not
so straightforward.
Thanks again,
Mechi
> From what I've read and researched, I should use GetDeliveryBuffer(),
> Deliver() and DeliverEndOfStream().
> I haven't figured out, though, how to neutralize the "automatic" calls
> to FillBuffer that CSource initiates.
If you wish to use the CSource base class you need to create an override
implementation of the DoBufferProcessingLoop() that is in the CSourceStream
class. The base implementation just calls FillBuffer as quick as possible
but you can alter it any way you see fit. Just beware of potential
deadlocks if you block on this streaming thread.
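To make the idea concrete, here is a stripped-down, platform-neutral model of such a paced loop (illustrative names only, not the actual CSourceStream code): instead of calling FillBuffer as fast as possible, the loop blocks on a condition variable until the capture side signals that a frame is ready.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Hypothetical stand-in for an overridden DoBufferProcessingLoop:
// "FillBuffer + Deliver" work only runs when the capture side signals
// that a frame is available, instead of free-running.
class PacedLoop {
public:
    void PushFrame(int frame) {                 // capture thread
        { std::lock_guard<std::mutex> lk(m_lock); m_frames.push(frame); }
        m_cv.notify_one();
    }
    void Stop() {
        { std::lock_guard<std::mutex> lk(m_lock); m_stop = true; }
        m_cv.notify_one();
    }
    // Streaming thread: drains remaining frames before honoring Stop().
    int Run() {
        int delivered = 0;
        for (;;) {
            std::unique_lock<std::mutex> lk(m_lock);
            m_cv.wait(lk, [this]{ return m_stop || !m_frames.empty(); });
            if (m_stop && m_frames.empty()) return delivered;
            m_frames.pop();        // "FillBuffer + Deliver" would go here
            ++delivered;
        }
    }
private:
    std::mutex m_lock;
    std::condition_variable m_cv;
    std::queue<int> m_frames;
    bool m_stop = false;
};
```

In a real override you would wait on a Win32 event alongside CheckRequest so that stop/pause commands still get through.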
--
http://www.chrisnet.net/code.htm
[MS MVP for DirectShow / MediaFoundation]
That's why I thought more control would help.
Mechi
>My real problem is that FillBuffer stops getting called and I can't figure
>out why. Frames are arriving on the Filter and there are always new buffers
>available. The video freezes and DoBufferProcessingLoop is not being called.
>
>
>That's why I thought more control would help.
>Mechi
If you set up your symbols correctly (using _NT_SYMBOL_PATH), then you
can see where this thread is stuck. The most likely explanation is
that you have sent a frame with a timestamp in the future and the
renderer is blocked on that. Other possibilities are that someone is
not releasing the buffers correctly so that you run out, or a case
where the audio renderer does not receive data, so the clock does not
advance.
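To make the timestamp arithmetic concrete (a sketch only; the typedef and constant stand in for the Windows headers): push-source frame stamps are normally computed in 100-nanosecond units, and a stamp ahead of stream time makes the renderer wait in Receive until the clock catches up.

```cpp
#include <cstdint>

typedef int64_t REFERENCE_TIME;          // 100-nanosecond units
const REFERENCE_TIME UNITS = 10000000;   // 10^7 units per second

// Usual push-source pattern: start/stop times for frame n at a fixed
// frame length. If these run ahead of the graph clock, the renderer
// blocks until stream time reaches the start time.
REFERENCE_TIME FrameStart(int64_t n, REFERENCE_TIME frameLen) {
    return n * frameLen;
}
REFERENCE_TIME FrameStop(int64_t n, REFERENCE_TIME frameLen) {
    return (n + 1) * frameLen;
}
```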
G
Hi again!
I enter "snapshot" mode - so there's no video streaming, just
individual shots at long intervals (or not at all).
After showing a few of the shots, FillBuffer is no longer called, even
though there are buffers available to be viewed.
Is there some kind of timeout?
I put breakpoints on all the errors, but I don't get to any of them.
Here's the constructor code for the Output pin in which I set 50fps as
default.
// UNITS = 10 ^ 7
// UNITS / 30 = 30 fps;
// UNITS / 20 = 20 fps, etc
const REFERENCE_TIME FPS_100 = UNITS / 100;
const REFERENCE_TIME FPS_50  = UNITS / 50;

//////////////////////////////////////////////////////////////////////////
// CVCamStream is the one and only output pin of CVCam which handles
// all the stuff.
//////////////////////////////////////////////////////////////////////////
CVCamStream::CVCamStream(HRESULT *phr, CVCam *pParent, LPCWSTR pPinName) :
    CSourceStream(NAME("BDR Camera"), phr, pParent, pPinName),
    m_pParent(pParent)
{
    CAutoLock cAutoLock(&m_cSharedState);
    m_rtFrameLength = FPS_50;   // as start
    m_iFrameNumber  = 0;
    m_pParent->CamInitialized = false;
    m_pParent->CamRunning = false;
    GetMediaType(0, &m_mt);     // so can have different ones
    //GetMediaType(&m_mt);
}
Thanks for any help.
Mechi
Thanks for your reply.
If I'm not interested in playback and I'm working only with video
(from a camera that outputs raw Bayer frames), do I need to worry
about the timestamps? Maybe, for now, I could just give consecutive
numbers?
> Other possibilities are that someone is
> not releasing the buffers correctly
Which buffers? As I understood, DoBufferProcessingLoop instantiates
(and releases) a new IMediaSample *pSample on each loop.
In my capture thread I have 3 buffers that are used cyclically for
capture and display.
Thanks for all your help,
Mechi
Sometimes a source filter that is not being constrained as to how fast
it can stream will run the time stamps up very rapidly, leaving behind
other samples and filters in the graph. Stopping and restarting the
graph does not necessarily cure this - I have had to disconnect
everything and rebuild to clear stuff like this up.
This problem comes up because the conventional implementation for
source filters is "keep the pipeline full" by doing as much as
possible each invocation.
One workable work-around is to time the stream-out yourself. I do that
by replacing the "wait for available sample" to "wait for available
sample and N milliseconds", and calculate N based on how far away I am
from the cue time. This limits the timeout to N milliseconds and
generally keeps the deadlock demons away.
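A sketch of that timeout calculation (illustrative names; the typedef stands in for the Windows headers): derive the per-wait timeout N in milliseconds from how far the next cue time is ahead of stream time, clamped so the thread never blocks indefinitely.

```cpp
#include <cstdint>
#include <algorithm>

typedef int64_t REFERENCE_TIME;          // 100-nanosecond units
const REFERENCE_TIME UNITS = 10000000;   // 10^7 units per second

// Illustrative version of the workaround: the wait for an available
// sample times out after N ms, where N tracks the distance to the cue
// time but is capped, keeping the deadlock demons away.
int WaitTimeoutMs(REFERENCE_TIME streamTime, REFERENCE_TIME cueTime,
                  int maxMs = 100) {
    REFERENCE_TIME ahead = cueTime - streamTime;        // in 100 ns
    if (ahead <= 0) return 0;                           // cue already due
    int ms = static_cast<int>(ahead / (UNITS / 1000));  // to milliseconds
    return std::min(ms, maxMs);                         // cap the wait
}
```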
I used the _NT_SYMBOL_PATH and started using WinDbg. It shows me the
threads, but I haven't been able to figure out how to see info for each
individual thread. I've been reading up on WinDbg - any ideas? The camera
runs well for hours, but when I leave it overnight, I come in the morning and
see that the application's thread that receives frames is working and
receiving frames, but the thread (FillBuffer) that's supposed to be
displaying is not getting called.
> Other possibilities are that someone is
> not releasing the buffers correctly so that you run out,
I checked this - the buffers are filled and then freed to be taken - but
FillBuffer is not called
> or a case
> where the audio renderer does not receive data, so the clock does not
> advance.
We don't have audio in our camera - it's for industrial use where
MachineVision is needed.
> G
>
Thanks,
Mechi
"Jamie Faye Fenton" wrote:
> Sometimes a source filter that is not being constrained as to how fast
> it can stream will run the time stamps up very rapidly, leaving behind
> other samples and filters in the graph. Stopping and restarting the
> graph does not necessarily cure this - I have had to disconnect
> everything and rebuild to clear stuff like this up.
>
> This problem comes up because the conventional implementation for
> source filters is "keep the pipeline full" by doing as much as
> possible each invocation.
>
> One workable work-around is to time the stream-out yourself. I do that
> by replacing the "wait for available sample" to "wait for available
> sample and N milliseconds", and calculate N based on how far away I am
> from the cue time. This limits the timeout to N milliseconds and
> generally keeps the deadlock demons away.
>
Where is this "stream-out" occurring?
But, when I enter "snapshot mode" there can sometimes be minutes or hours
between frames, and when a frame finally arrives, FillBuffer IS called and
displays the frame.
Thanks,
Mechi
I reread your post and have a better idea of what you are doing. The
idea is to capture from a camera to a stream, but have the stream
advance forward in time only when requested. This is very much like
how a video or motion-picture camera works in the real world: hold the
button down, it records; let up, it stops; when you replay, it shows
just the times when the button was down.
The key then is to build two graphs, one for capturing the camera
input as a relentless stream, and the other for recording the frames
to storage. This second graph could run at full speed for a while, or
even faster or slower. Siamese twins with one filter in each world.
The filters can both appear in the same DLL (the RGB filter example
shows how that works).
When you trigger a capture burst, you signal the receiving device to
release its thread-waiting, and that calls FillBuffer. You also set
the sample start and stop times to the appropriate value for each
frame.
A simple memory buffer can handle the coupling, with an AutoLock
critical section for mutual exclusion. When "record" is active,
FillBuffer copies data from the continuous graph over into the current
output buffer on the intermittent graph, and advances the time
position forward.
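Something like this, as a self-contained sketch of the coupling buffer (std::mutex stands in for the CAutoLock/CCritSec pair; all names are illustrative):

```cpp
#include <mutex>
#include <vector>

// Minimal coupling buffer between a "continuous" capture graph and an
// "intermittent" recording graph. The capture side always overwrites
// the latest frame; the recording side copies it out only while
// "record" is active.
class FrameCoupler {
public:
    // Capture graph thread: store the newest frame.
    void WriteLatest(const std::vector<unsigned char>& frame) {
        std::lock_guard<std::mutex> lk(m_lock);
        m_latest = frame;
        m_haveFrame = true;
    }
    // Intermittent graph's FillBuffer: copy the frame if recording.
    bool ReadIfRecording(bool recording, std::vector<unsigned char>& out) {
        std::lock_guard<std::mutex> lk(m_lock);
        if (!recording || !m_haveFrame) return false;
        out = m_latest;
        return true;
    }
private:
    std::mutex m_lock;
    std::vector<unsigned char> m_latest;
    bool m_haveFrame = false;
};
```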
I think I have to explain myself even more clearly.
When the video is streaming from the camera to the PC, I display the frames.
I don't record anything. [I supply Callback functions in our SDK, so if a
customer, etc. who's building his/her own application wants, they can save
the frames (in Bayer or RGB formats.)]
When I enter snapshot mode, the camera stops capturing and stops streaming.
There is NO traffic on the USB line, and the camera/sensor "sleeps".
When I give the trigger, 1 frame is captured by the camera and sent over the
USB to the PC. This frame is then displayed. This part works fine.
My problem is that sometimes, when the application has been working for a
while (overnight), I come and see that video is streaming from the camera,
over the USB lines and filling up the buffers, but FillBuffer is not
displaying - it's not getting called, and therefore it's not accessing
frames from the buffer.
I'm using VisualStudio C++ 2005 - and I can't figure out why it suddenly
stops being called. Any ideas?
What I'd like to be able to do is to put in some kind of flag, so that when
FillBuffer has "ignored" a certain number of frames, the application can
somehow close the filter and reopen it without having to stop the stream from
the camera. This way only a minimum of data will be lost.
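Something like this is what I have in mind for the flag, detection side only (the actual teardown/reconnect is graph-specific; all names here are illustrative):

```cpp
// Detection half of the idea: the capture thread ticks FrameArrived(),
// FillBuffer ticks FrameDelivered(); once arrivals outrun deliveries
// by the threshold, the app knows to close and reopen the filter.
class StallWatchdog {
public:
    explicit StallWatchdog(int threshold) : m_threshold(threshold) {}
    void FrameArrived()   { ++m_arrived; }
    void FrameDelivered() { ++m_delivered; }
    bool Stalled() const  { return m_arrived - m_delivered >= m_threshold; }
private:
    int m_threshold;
    int m_arrived = 0;
    int m_delivered = 0;
};
```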
Thanks!
Mechi
I found where it's stuck!
On the m_pInputPin->Receive(pSample); in Deliver().
Why does this happen?
Can someone explain in plain C++/English what the comment below is
trying to tell me?
> What I'd like to be able to do is to put in some kind of flag, so that when FillBuffer has "ignored" a certain number of frames, the application can
> somehow close the filter and reopen it without having to stop the stream from
> the camera. This way only a minimum of data will be lost.
Thanks!!
Mechi
from amfilter.cpp - the Deliver function:

/* Deliver a filled-in sample to the connected input pin. NOTE the object must
   have locked itself before calling us otherwise we may get halfway through
   executing this method only to find the filter graph has got in and
   disconnected us from the input pin. If the filter has no worker threads
   then the lock is best applied on Receive(), otherwise it should be done
   when the worker thread is ready to deliver. There is a wee snag to worker
   threads that this shows up. The worker thread must lock the object when
   it is ready to deliver a sample, but it may have to wait until a state
   change has completed, but that may never complete because the state change
   is waiting for the worker thread to complete. The way to handle this is for
   the state change code to grab the critical section, then set an abort event
   for the worker thread, then release the critical section and wait for the
   worker thread to see the event we set and then signal that it has finished
   (with another event). At which point the state change code can complete */

// note (if you've still got any breath left after reading that) that you
// need to release the sample yourself after this call. if the connected
// input pin needs to hold onto the sample beyond the call, it will addref
// the sample itself.

// of course you must release this one and call GetDeliveryBuffer for the
// next. You cannot reuse it directly.

HRESULT
CBaseOutputPin::Deliver(IMediaSample * pSample)
{
    if (m_pInputPin == NULL) {
        return VFW_E_NOT_CONNECTED;
    }

#ifdef DXMPERF
    PERFLOG_DELIVER( m_pName ? m_pName : L"CBaseOutputPin", (IPin *) this,
                     (IPin *) m_pInputPin, pSample, &m_mt );
#endif // DXMPERF

    return m_pInputPin->Receive(pSample);
}
I found out where I'm getting dead-locked -
return m_pInputPin->Receive(pSample);
doesn't return after a while.
Why does this happen? How can I prevent this?
Thanks!
Mechi
What's the timestamp on the sample at that point, and what is the
stream time? Either the sample is wrongly timestamped a long way in
the future, or the clock is not advancing -- this can happen if the
clock is derived from audio, and no audio is arriving at the audio
renderer.
If your demux does not have sufficient buffering on its output pins
for the offset between video and audio (and the decoders'
requirements), then you can get a deadlock like this. Until the video
is consumed, the demux cannot process any more audio. Since no audio
is arriving at the demux, the clock does not advance and so the video
is not consumed.
G
> What's the timestamp on the sample at that point, and what is the
> stream time? Either the sample is wrongly timestamped a long way in
> the future, or the clock is not advancing -- this can happen if the
> clock is derived from audio, and no audio is arriving at the audio
> renderer.
Hi!
I guess I've not been clear...
I don't have any audio - only video coming from a proprietary camera
over which I have full control (I wrote the code in the camera, too.)
I don't think the problem has anything to do with the time stamp.
I overrode DoBufferProcessingLoop and put in a whole bunch of logs.
I ran 2 cameras in 2 different Preview windows - 2 different threads,
graphs, etc.
One thread stopped (its preview screen froze, though frames are incoming
from the camera) while the other continued previewing.
The log file (below) from DoBufferProcessingLoop clearly shows that one
thread (pSample = 0x1783F48) stops while the other (pSample =
0x345688) continues. The sample size is 15116544 (5 Mp in RGB24). I
assume it's getting stuck in while (!CheckRequest(&com)),
even though the Wait(0) should return right away, because right after it
I have a log message.
2 questions -
1) Why is it getting stuck? How can I "break out" of the while and
restart?
2) Even though both cameras are 5Mp and output the same FPS, one of
them always gets "more attention" and has more frames displayed. How
can I even this out?
Thanks,
Mechi
The log file:
ms,line   msg
84599312,11 b4 GetDelivBuf
84599312,22 b4 FillBuf,0x1783F48,15116544
84599359,66 FB ret S_OK, b4 Deliver
84599375,88 sample released
84599375,11 b4 GetDelivBuf
84599375,22 b4 FillBuf,0x345688,15116544
84599390,66 FB ret S_OK, b4 Deliver
84599406,88 sample released
84599375,11 b4 GetDelivBuf
84599375,22 b4 FillBuf,0x1783F48,15116544
84599390,66 FB ret S_OK, b4 Deliver
84599406,88 sample released
84599437,11 b4 GetDelivBuf
84599437,22 b4 FillBuf,0x1783F48,15116544
84599453,66 FB ret S_OK, b4 Deliver
84599468,88 sample released
84599406,11 b4 GetDelivBuf
84599406,22 b4 FillBuf,0x345688,15116544
84599453,66 FB ret S_OK, b4 Deliver
84599468,88 sample released
84599468,11 b4 GetDelivBuf
84599468,22 b4 FillBuf,0x345688,15116544
84599484,66 FB ret S_OK, b4 Deliver
84599500,88 sample released
84599500,11 b4 GetDelivBuf
84599500,22 b4 FillBuf,0x345688,15116544
84599515,66 FB ret S_OK, b4 Deliver
84599531,88 sample released
84599531,11 b4 GetDelivBuf
84599531,22 b4 FillBuf,0x345688,15116544
84599531,66 FB ret S_OK, b4 Deliver
84599546,88 sample released