Hi again,
On 17/09/12 15:47, marktrue wrote:
> Hello Michael,
> I have some new problems about jjmpeg.
> I find I can not get PTS of AVFrame, such as:
> *while((jvr = (JJReaderVideo) mMediaReader.readFrame()) != null)*
> *{*
> *System.out.println("pts :" + jvr.getFrame().getPTS());*
> *}*
> It always shows pts :-9223372036854775808
> If the pts does not work in AVFrame, how can I know when the AVFrame
> should be presented?
Use the dts if the pts is AV_NOPTS_VALUE (which is -9223372036854775808,
i.e. Long.MIN_VALUE - exactly the value you're seeing). I have no idea
why, but this is what the ffmpeg examples do. See
JJMediaReader.readFrame().
If you're using JJMediaReader it already has the pts for you; just use
it via:
long pts = jvr.convertPTS(mediaReader.getPTS());
which converts it to a usable millisecond value.
This always just uses the dts, which seems to be good enough for my
requirements but may not be correct if audio is important (I think it
might depend on the codec anyway, and some never set it).
> And another problem is that I called the function
> decodeVideo(AVFrame, AVPacket), but it always throws AVEncodingError.
> *code:
> complete = mCodec.decodeVideo(mFrame, packet);*
>
> That means avcodec_decode_video2 returns -1 and decoding failed.
> At first I guessed the reason could be that the AVFrame was not
> initialized, so I used AVFrame.allocFrame() to allocate it, but it
> still does not work.
I'm afraid there isn't enough information there for me to work out why
it isn't working.
ffmpeg is a complicated API; your best bet is to look at the example
code in jjmpeg and/or the stuff from ffmpeg, i.e. ffmpeg.c, ffplay.c,
and doc/examples/*.c, and search the libav archives. ffmpeg.c would be
the most useful if you're working on a transcoder - it isn't a trivial
task to do properly.
But if you're using JJMediaReader it already does the decoding for you,
so maybe start with that and see how it does it.
> And I copied the C++ code of the project
> https://github.com/havlenapetr/FFMpeg
> and translated it to Java.
> But I can't find how to override AVCodecContext.get_buffer and
> AVCodecContext.release_buffer.
Next time it would help if you let me know which file it was used in;
it took a while to find.
I don't know why they're tracking the global pts that way; I don't see
ffmpeg.c doing that. TBH it seems a little strange, because you always
get the frames decoded in the correct order anyway, so you can do the
same thing then.
ffmpeg.c does replace it for some codecs. My best guess from the
uncommented source is that it's a way to avoid copying frame data for
certain codecs, as they might re-use the buffers, but that just seems
like a performance optimisation to me.
> Maybe the AVFrame is initialized in avcodec_decode_video2, which
> calls get_buffer, I guess.
A custom get_buffer is just for replacing the memory management with
your own: there's no need to do this for things to work, as ffmpeg has
its own implementation.
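In case the mechanism is unclear: get_buffer/release_buffer are C
function pointers on AVCodecContext that the decoder calls to obtain and
return frame buffers, so there's no Java-level equivalent in jjmpeg. A
rough Java analogy of the idea - purely illustrative names, not any real
jjmpeg or ffmpeg API - is a pluggable allocator interface where the
custom version recycles buffers instead of allocating fresh ones:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual analogy of AVCodecContext.get_buffer / release_buffer:
// the decoder asks the allocator for a buffer per frame and hands it
// back when done; swapping in a custom allocator changes the memory
// management without changing the decoder.
public class BufferCallbackSketch {
    interface FrameAllocator {
        byte[] getBuffer(int size);      // analogue of get_buffer
        void releaseBuffer(byte[] buf);  // analogue of release_buffer
    }

    // Default behaviour: allocate fresh, let GC reclaim - like relying
    // on ffmpeg's built-in implementation.
    static class DefaultAllocator implements FrameAllocator {
        public byte[] getBuffer(int size) { return new byte[size]; }
        public void releaseBuffer(byte[] buf) { /* nothing to do */ }
    }

    // Custom behaviour: recycle buffers, the kind of copy-avoiding
    // optimisation a custom get_buffer is typically used for.
    static class PoolAllocator implements FrameAllocator {
        private final Deque<byte[]> pool = new ArrayDeque<>();
        public byte[] getBuffer(int size) {
            byte[] b = pool.poll();
            return (b != null && b.length >= size) ? b : new byte[size];
        }
        public void releaseBuffer(byte[] buf) { pool.push(buf); }
    }

    public static void main(String[] args) {
        FrameAllocator alloc = new PoolAllocator();
        byte[] frame = alloc.getBuffer(1024);
        alloc.releaseBuffer(frame);
        // The next request reuses the pooled buffer instead of allocating.
        System.out.println(alloc.getBuffer(1024) == frame);
    }
}
```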
> Should I add this to jjmpeg.c or you have already added?
If you need this for your own application you should write your own JNI
to do it separately - it's something that would have to be done in C,
would be very specific to a given application, and probably wouldn't be
reusable. It wouldn't really fit within jjmpeg unless something internal
needed it.
> Could I send my codes to you?
Well, have a go with the info above, and if you get really stuck come
and ask again. (I presume it's some free software project.)
As I said above, ffmpeg is a pretty complex - not really public - API,
so it can take a lot of trial and error and trawling through the ffmpeg
source to find out what's going on. Pretty much all the knowledge I have
of it is stored in the jjmpeg and jjmpegdemos projects, so you can find
it there yourself.
Regards,
Michael