I have written a pin-centric AVStream-based driver, and I can successfully
preview both the audio and video. Video capture also works, but I am unable
to capture audio through AmCap. The graph is as follows:

AVStream capture filter (audio pin) ---> Smart Tee (capture pin) ---> AVI Mux ---> File Writer
AVStream capture filter (video pin) ---> 2nd Smart Tee (capture pin) ---> AVI Mux ---> File Writer
When I drag and drop the captured file into GraphEdit, the AVI file gets
connected to the AVI Splitter, which exposes audio and video pins. The video
pin connects to the uMedia video decoder and renders properly, and likewise
the audio pin connects to the uMedia audio decoder and renders properly.
But when I try to play the file, I get a black screen for the video.
However, when I connect the AVI file to the uMedia Demux instead, it
recognises the data as H.264 format and plays the video (only) successfully.
Can someone tell me what the issue could be?
Please let me know whether this is a timestamp issue in the driver or
something else.
The Process function of my driver looks like this; it is what fills in the
timestamp values for the frames:
NTSTATUS
CCapturePin::
Process (
    )
{
    PAGED_CODE();

    NTSTATUS Status = STATUS_SUCCESS;
    PKSSTREAM_POINTER Leading;

    Leading = KsPinGetLeadingEdgeStreamPointer (
        m_Pin,
        KSSTREAM_POINTER_STATE_LOCKED
        );

    while (NT_SUCCESS (Status) && Leading)
    {
        //
        // Read from our queue.
        //
        ULONG MappingsUsed =
            (m_Device->*(this->m_pfnGetDataFromDevice)) (
                &Leading->OffsetOut.Data,
                Leading->OffsetOut.Count,
                Leading->OffsetOut.Remaining
                );

        //
        // Update DataUsed with the amount of data filled into the buffer.
        //
        Leading->StreamHeader->DataUsed = MappingsUsed;

        if (MappingsUsed)
        {
#if 1
            if (m_Clock)
            {
                if (m_PinIndex == 0)
                {
                    //
                    // Audio pin: derive the duration from the bytes filled.
                    //
                    m_AudFrameNumber++;
                    Leading->StreamHeader->Duration =
                        ((ULONGLONG)MappingsUsed * NANOSECONDS) / 192000;
                        // 192000 = 48000 * 2
                }
                else if (m_PinIndex == 1)
                {
                    //
                    // Video pin: fixed duration taken from the format.
                    //
                    m_VidFrameNumber++;
                    Leading->StreamHeader->Duration =
                        m_VideoInfoHeader->AvgTimePerFrame;
                }

                Leading->StreamHeader->PresentationTime.Numerator =
                Leading->StreamHeader->PresentationTime.Denominator = 1;
                Leading->StreamHeader->PresentationTime.Time =
                    m_Clock->GetTime();
                Leading->StreamHeader->OptionsFlags =
                    KSSTREAM_HEADER_OPTIONSF_TIMEVALID |
                    KSSTREAM_HEADER_OPTIONSF_DURATIONVALID;

                if (Leading->StreamHeader->Size >=
                        sizeof (KSSTREAM_HEADER) + sizeof (KS_FRAME_INFO))
                {
                    PKS_FRAME_INFO FrameInfo =
                        reinterpret_cast <PKS_FRAME_INFO>
                            (Leading->StreamHeader + 1);

                    FrameInfo->ExtendedHeaderSize = sizeof (KS_FRAME_INFO);

                    if (m_PinIndex == 0)
                    {
                        FrameInfo->PictureNumber = (LONGLONG)m_AudFrameNumber;
                    }
                    else if (m_PinIndex == 1)
                    {
                        FrameInfo->PictureNumber = (LONGLONG)m_VidFrameNumber;
                    }

                    FrameInfo->DropCount = (LONGLONG)m_failureCount;
                }
            }
#endif
            //
            // Advance the stream pointer to the next frame.
            //
            Status = KsStreamPointerAdvance (Leading);
            m_failureCount = 0; // reset the counter
        }
        else
        {
            m_failureCount++;
            Status = STATUS_DEVICE_NOT_READY;
        }
    }

    if (!Leading)
    {
        Status = STATUS_PENDING;
    }

    //
    // If we didn't run the leading edge off the end of the queue, unlock it.
    //
    if (NT_SUCCESS (Status) && Leading)
    {
        KsStreamPointerUnlock (Leading, FALSE);
    }

    return Status;
}
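One aside on the audio duration constant above: 48000 * 2 is 96000, so
192000 bytes/sec only works out for something like 48 kHz, 16-bit, stereo
PCM (48000 samples/sec * 2 bytes * 2 channels). If that divisor doesn't
match the negotiated format, every audio Duration will be skewed. The math
itself is just bytes / bytes-per-second converted to KS stream time, which
is in 100-ns units (10,000,000 per second). A minimal sketch with a
hypothetical helper (PcmDurationIn100ns is my name, not from any sample):

LONGLONG
PcmDurationIn100ns (
    ULONG BytesFilled,      // bytes actually placed in the buffer
    ULONG BytesPerSecond    // e.g. 48000 * 2 * 2 = 192000 for 48 kHz/16-bit/stereo
    )
{
    // KS stream time is expressed in 100-ns units: 10,000,000 per second.
    return ((LONGLONG)BytesFilled * 10000000LL) / BytesPerSecond;
}

With something like that, the audio branch could take the real bytes/sec
from the connected WAVEFORMATEX instead of a hard-coded 192000.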
Note: What I also see in the avssamp sample is that the duration is not
filled in for audio frames, whereas it is filled in for video frames.
Additionally, the extended header info (KS_FRAME_INFO) is attached to the
video frames but not to the audio frames. Is my understanding of the sample
code correct or wrong? Please comment.
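If that reading is right, the sample's audio stamping would reduce to
roughly the following sketch (my paraphrase of the note above, not the
sample verbatim): only the presentation time is stamped, Duration is left
untouched, and no KS_FRAME_INFO is attached:

// Audio path as I read avssamp (sketch, not verbatim): timestamp only.
Leading->StreamHeader->PresentationTime.Numerator = 1;
Leading->StreamHeader->PresentationTime.Denominator = 1;
Leading->StreamHeader->PresentationTime.Time = m_Clock->GetTime();
// Only TIMEVALID: no DURATIONVALID flag, no Duration, no KS_FRAME_INFO.
Leading->StreamHeader->OptionsFlags = KSSTREAM_HEADER_OPTIONSF_TIMEVALID;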
Regards
Ahmed