Hello
I am trying to write a .y4m file to use as input to vpxenc. I have successfully generated the YUV 444 format, but the vpxenc I compiled from git a few weeks ago wants the YUV 420 format.
Though my YUV444 file plays as expected, I have been unsuccessful in writing a YUV420 file with my application. Is there a more detailed description of the file contents, with more information than the short summary I have been reading here
http://wiki.multimedia.cx/index.php?title=YUV4MPEG2 ?
Searching the internet, the only better source of information seems to be the ffmpeg source files, but I was hoping to find a textual description of the planar progressive (or interlaced) frame format to write into the .y4m file.
I am using the C420 color space (with coincident chroma planes, as my link describes it), and I write my red and blue chroma planes one right after the other in the .y4m frame, both the same size and computed the same way. But playing my file with `vlc` or `mpv` shows that I do not pack, align or place the blue-yellow plane correctly in my .y4m file: the playback shows blue colors periodically scrambled over the upper half of the frames, and the entire frame looks a little fuzzy, as if the color information is displaced by 1 pixel. The luma part is just fine throughout the file.
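To show concretely what I think the layout should be, here is a minimal sketch in plain C of the stream header and one frame, assuming even width and height and progressive frames; write_y4m_header() and write_y4m_frame() are just placeholder names for illustration:

    /* Minimal sketch of a C420 .y4m writer (even width/height, progressive). */
    #include <stdio.h>

    static void write_y4m_header(FILE *f, int w, int h, int fps_num, int fps_den)
    {
        /* Stream header: one line, parameters separated by single spaces. */
        fprintf(f, "YUV4MPEG2 W%d H%d F%d:%d Ip A1:1 C420\n", w, h, fps_num, fps_den);
    }

    static void write_y4m_frame(FILE *f, int w, int h,
                                const unsigned char *y,   /* w   * h   bytes */
                                const unsigned char *cb,  /* w/2 * h/2 bytes */
                                const unsigned char *cr)  /* w/2 * h/2 bytes */
    {
        /* Each frame starts with its own "FRAME" marker line; the three planes
         * then follow back to back, with no padding or alignment between them:
         * full-resolution Y, then quarter-size Cb, then quarter-size Cr. */
        fprintf(f, "FRAME\n");
        fwrite(y,  1, (size_t)w * h, f);
        fwrite(cb, 1, (size_t)(w / 2) * (h / 2), f);
        fwrite(cr, 1, (size_t)(w / 2) * (h / 2), f);
    }

Is this the correct layout, or am I missing something about plane order or sizes?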
So I think there is something about the frame format in the .y4m file that I do not yet know, and I have no proper source of information on the file format; maybe you guys know some other web page describing the frame format/content? In the end I used ffmpeg to convert my YUV444 file to YUV420, and I noticed ffmpeg outputs the C420jpeg color space, which I do not yet understand.
And another question: does libvpx also require the YUV420 color space, like the vpxenc executable does? Is there actually an advantage to giving the library YUV images instead of RGB images, i.e. should I expect it to encode faster with YUV frames than with RGB frames? I still have RGB frames as input anyway, but I think I can use the Intel IPP library functions to convert RGB frames to YUV if libvpx benefits from YUV more than from RGB (a rough sketch of the conversion I have in mind follows below).

My movies are short screen captures, and I am concerned about quality: I need a good-quality, close-to-pixel-exact reproduction of the video frames (my frame rate would be about 1/second), because any blur makes the fonts hard to read and users will perceive the video quality as too low or unacceptable. For this reason I would really rather encode the YUV444 frames instead of YUV420. What is a good set of encoder options for compressing such screen capture movies, or how do I get the vp9 encoder to work close to a pixel-exact compression of the frames?
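For reference, this is roughly the RGB-to-YUV conversion I would hand-roll (or replace with the corresponding IPP calls): BT.601 limited-range coefficients with simple 2x2 averaging for the chroma subsampling. rgb_to_i420() and its arguments are just illustrative names, and the sketch assumes even width/height and an arithmetic right shift for negative intermediates:

    /* Sketch: packed 24-bit RGB frame -> planar I420 (YUV 4:2:0), BT.601 limited range. */
    #include <stddef.h>

    static void rgb_to_i420(const unsigned char *rgb, int w, int h,
                            unsigned char *y, unsigned char *cb, unsigned char *cr)
    {
        for (int j = 0; j < h; j += 2) {
            for (int i = 0; i < w; i += 2) {
                int sum_cb = 0, sum_cr = 0;
                /* Process a 2x2 block: full-resolution luma, averaged chroma. */
                for (int dj = 0; dj < 2; dj++) {
                    for (int di = 0; di < 2; di++) {
                        const unsigned char *p = rgb + 3 * ((size_t)(j + dj) * w + (i + di));
                        int r = p[0], g = p[1], b = p[2];
                        /* BT.601 limited range: Y in [16,235], Cb/Cr in [16,240]. */
                        int yy = ((  66 * r + 129 * g +  25 * b + 128) >> 8) + 16;
                        int u  = (( -38 * r -  74 * g + 112 * b + 128) >> 8) + 128;
                        int v  = (( 112 * r -  94 * g -  18 * b + 128) >> 8) + 128;
                        y[(size_t)(j + dj) * w + (i + di)] = (unsigned char)yy;
                        sum_cb += u;
                        sum_cr += v;
                    }
                }
                cb[(size_t)(j / 2) * (w / 2) + i / 2] = (unsigned char)((sum_cb + 2) / 4);
                cr[(size_t)(j / 2) * (w / 2) + i / 2] = (unsigned char)((sum_cr + 2) / 4);
            }
        }
    }

If feeding libvpx YUV444 (or even RGB) directly is possible and just as fast, I would of course skip this step.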
Anyway, despite my worries, my short tests so far show that the resulting movie (obtained by converting YUV444 to 420 with ffmpeg, and then from YUV to WebM with vpxenc) has very good quality and excellent text readability (which I find amazing, by the way), but my current compression speed is too low for my requirements (at 10 frames per second, where the 10 frames are really mostly duplicated images). I would like to be able to record and compress continuously for 8 or 16 hours per day if needed.
Thank you,
Adrian Constantin