550D test build:
http://groups.google.com/group/ml-devel/browse_thread/thread/850ec268bc883ceb/84f39efd77501fd2?show_docid=84f39efd77501fd2
(see my previous message in that thread for a short description of
internals)
60D test build:
http://groups.google.com/group/ml-devel/browse_thread/thread/47f8a4b186bd9a81/af85aaa3d441ea4e?show_docid=af85aaa3d441ea4e
I've also rewritten this wiki page, added a small bitrate test and
tried to remove common misconceptions about QScale.
http://magiclantern.wikia.com/wiki/Bit_rate
I'm starting this thread to analyze how the compression works; it may
be possible to find additional parameters, for example the subsampling
mode (currently 4:2:0).
For analysis I will use the 550D/1.0.8 firmware, for which I have the
best documentation, from Arm.Indy and AJ. 5D2 2.0.4 is also a good
option.
There is a structure named AJ_Movie_CompressionRate_struct at 0x67bc,
also found in dryos.h under the name mvr_config. It is currently used
for the QScale and CBR parameters. My current understanding of it is here:
https://bitbucket.org/hudson/magic-lantern/changeset/fa92f754a949
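For reference, a hypothetical C sketch of how that structure might be addressed. The address 0x67bc is from the findings above; the two field names are illustrative placeholders, not the confirmed layout (the real one is in the changeset):

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical sketch only: mvr_config (AJ_Movie_CompressionRate_struct)
 * lives at 0x67bc on 550D/1.0.8. These fields are placeholders; the
 * actual layout is still being mapped out. */
struct mvr_config_sketch
{
    int32_t qscale;     /* placeholder: fixed quantizer in QScale mode */
    int32_t bit_rate;   /* placeholder: target bitrate in CBR mode */
    /* ... many more fields, meanings unknown ... */
};

#define MVR_CONFIG_ADDR 0x67bc

/* On the camera, fields would be read/patched through this pointer;
 * obviously do not dereference it on a PC. */
static inline struct mvr_config_sketch *mvr_config_ptr(void)
{
    return (struct mvr_config_sketch *)(uintptr_t)MVR_CONFIG_ADDR;
}
```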
Some raw H.264 parameters (no idea about their meaning; found in
SetEncodeH264Parameter):
0xc0e10080 jp62_sizer
0xc0e100c0 jp62_seqcr1
0xc0e100d0 jp62_piccr1
0xc0e100fc jp62_miscr
0xc0e1000c jp62_opmr3
0xc0e100e0 jp62_slcr1
0xc0e100e4 jp62_slcr2
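Collected as C constants, sorted by address (addresses and register names exactly as listed above; meanings still unknown). Note the warning later in this thread: reading some addresses in this region triggers ERR80, so probe them with care.

```c
#include <assert.h>

/* "jp62" H.264 encoder registers referenced by SetEncodeH264Parameter
 * (550D/1.0.8). Names are from the firmware; functions are unknown. */
#define JP62_OPMR3   0xc0e1000cu
#define JP62_SIZER   0xc0e10080u
#define JP62_SEQCR1  0xc0e100c0u
#define JP62_PICCR1  0xc0e100d0u
#define JP62_SLCR1   0xc0e100e0u
#define JP62_SLCR2   0xc0e100e4u
#define JP62_MISCR   0xc0e100fcu
```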
LVCDEV_H264EncodeStart(pInputAddress, pStreamAddress1, Size1,
pStreamAddress2, Size2, flag, struct(unk,unk,unk))
0x5240 MovieSize 0 1 2
0x5244 MovieMode 0 or 8, maybe crop
0x524c -1 ??
0x5260 pStreamAddress1
0x5264 image_width
0x6600: a structure with some parameters set from SetParameterH264Encode.
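Here is a speculative C layout for the 0x5240 region, reconstructed purely from the offsets above; the gaps are unidentified padding and the field names are my guesses, not Canon's:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Speculative layout of the parameter block around 0x5240.
 * Only the commented offsets come from observation. */
struct h264_start_params_sketch
{
    uint32_t movie_size;        /* 0x5240: 0, 1 or 2 */
    uint32_t movie_mode;        /* 0x5244: 0 or 8, maybe crop */
    uint32_t unk_5248;          /* 0x5248: not yet identified */
    uint32_t always_minus_one;  /* 0x524c: observed as -1, purpose unknown */
    uint32_t unk_5250[4];       /* 0x5250-0x525c: not yet identified */
    uint32_t stream_address1;   /* 0x5260: pStreamAddress1 */
    uint32_t image_width;       /* 0x5264 */
};
```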
Is anyone familiar with the internals of H.264 and wants to help?
--
http://magiclantern.wikia.com/
To post to this group, send email to ml-d...@googlegroups.com
To unsubscribe from this group, send email to ml-devel+u...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/ml-devel?hl=en
The PDF is very interesting, but huge.
More findings:
- Canon's CBR algorithm seems pretty similar to my old CBRe method,
i.e. it works by changing QScale. Of course, their implementation is
much better than mine.
- QScale will not go below -16. This means that, in many scenes
without much detail, the bitrate can be much lower than requested.
This happens without ML, too. When it happens, it may be dangerous:
you may have requested a bitrate higher than your card can support,
without even noticing!
In math jargon, CBR with a high bitrate will converge to QScale -16.
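That convergence can be illustrated with a toy controller. This is not Canon's actual algorithm, just a minimal model of "CBR by adjusting QScale, with a floor at -16":

```c
#include <assert.h>

#define QSCALE_MIN (-16)  /* floor observed in Canon's CBR */
#define QSCALE_MAX 16

/* Toy CBR step: nudge QScale toward the target bitrate.
 * Higher QScale = coarser quantization = lower bitrate. */
static int cbr_step(int qscale, int measured_kbps, int target_kbps)
{
    if (measured_kbps > target_kbps && qscale < QSCALE_MAX)
        return qscale + 1;  /* stream too big: quantize harder */
    if (measured_kbps < target_kbps && qscale > QSCALE_MIN)
        return qscale - 1;  /* stream too small: quantize less */
    return qscale;          /* on target, or clamped at a limit */
}
```

On a flat scene the measured bitrate stays below the target even at the finest allowed quantizer, so the loop walks down to -16 and stays there; the delivered bitrate then undershoots the request.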
- Reading from certain addresses around 0xc0e10000 will cause ERR80.
A message from a user who can't post to the mailing list yet:
---------- Forwarded message ----------
From: Jan Raiber <jan.r...@filmakademie.de>
Date: Tue, May 17, 2011 at 10:06 AM
Subject: H.264 codec analysis
To: broscu...@gmail.com
Hi,
A few days ago I asked Alex to implement a better CBR feature, and he
took the first steps. The new one works much better than the old: with
the old one, the bitrate varied too much, from below the firmware
default to beyond the speed limits of the card and/or the
card-writing unit, and all of that could happen within a single shot.
In other words, it wasn't very useful.
So why did I want, and still want, a higher bitrate? Because you can
see it. In scenes with heavy detail, the blocking artifacts of the
compression are visible to the naked eye. What counts as heavy detail?
Some of the situations I ran into over the last couple of days: rain,
waving leaves on trees, an anthill, rough wall textures, the pixel
structure of a computer monitor, or high ISO settings...
So you see, nothing very special.
I made a test shooting at maximum ISO (6400) to see at what point my
card could no longer handle the data rate. You can see a resulting
picture here:
surrandom.com/toDownload/CBR.jpg
(it shows the upper-right part of the frame twice, at 100% size)
Here it's very easy to see the difference between the firmware default
(around 45 Mbit/s) and CBR 1.7 (around 65 Mbit/s). Notice that the
default setting produces a very blurry image with a lot of blocking,
like a bad YouTube video; this is nothing you want to have in
postproduction.
In the picture on the right you see much more detail, in this case the
original grain of the image sensor (which you don't want either, but
hopefully you will never be in a situation where you have to shoot at
ISO 6400...).
So detailed grain is better; it looks much more natural. But don't
misunderstand: it's not that you have more of this grain in the right
image. In the left one you have the grain plus the degradation plus
the blocking, which is not easy to handle in color correction.
What kind of color correction do I mean? You can do heavy things like
strong contrast or desaturation and you won't notice anything. But if
you want to brighten the picture, or push the colors, you end up in a
mess of blocks.
And the worst thing you can do with heavily compressed material is
selective color correction, where you pick one color (the blue of the
sky, the green of the grass) and change it separately from the other
colors.
Try it with your material and you will see what I mean. With
uncompressed material, e.g. from a telecine, you can turn one yellow
car in a traffic jam blue without anybody noticing; with compressed
material you must be a master of masks to isolate it.
There is another very good way to see how much your material has been
reduced: load it into an application like After Effects and look at
the channel switch below the main frame.
Click it to show only the red or the blue channel, switch to full
quality and 100% view, then pick a detailed part of the picture and
you will see what the compression is doing to it. The eye is less
sensitive to red and blue (as opposed to green, or lightness), so the
reduction is more aggressive there.
And what do you see? Heaps of blocks all over the frame. This has
nothing to do with 4:2:0 subsampling (that should only make things
look twice as pixelated as normal), and the only way to reduce it is
to crank up the data rate as much as possible. You see these blocks
even in scenes with less detail; fewer of them, but you still see
them, because the rate drops to hold the same image quality.
If you think the best thing for grading and post would be 4:2:2
subsampling, you're both right and wrong. It's only better if you can
also raise the data rate, because you get about a third more sample
data to compress and store. So remember the picture above: is it
useful to have a picture with more detail that then has to be
compressed harder? I don't think so.
So the first step toward better image quality with these cameras is
the highest possible data rate. Once that works (CBR at a hard limit),
we can think about better subsampling, or maybe a better bit depth
(10 bits instead of 8 is a quarter more data too), which would give us
more flexibility in lightness and contrast than the subsampling would:
1024 steps of lightness instead of 256 (!). In my eyes, that's the
better deal.
But first: less compression.
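The size arithmetic here is easy to check: 4:2:0 carries 1.5 samples per pixel and 4:2:2 carries 2.0 (a third more), while 10-bit vs. 8-bit is a quarter more. A small calculation, in times-4 fixed point to stay in integers:

```c
#include <assert.h>

/* Uncompressed bits per pixel, scaled by 4 to avoid fractions.
 * chroma_x4 is the per-plane chroma-to-luma sample ratio times 4:
 * 4:2:0 -> 1 (each chroma plane is 1/4 size), 4:2:2 -> 2 (1/2 size). */
static int bits_per_pixel_x4(int bit_depth, int chroma_x4)
{
    return bit_depth * (4 + 2 * chroma_x4);  /* Y plane + Cb + Cr */
}
```

So 8-bit 4:2:0 gives 48/4 = 12 bpp, 8-bit 4:2:2 gives 64/4 = 16 bpp (+33%), and 10-bit 4:2:0 gives 60/4 = 15 bpp (+25%), which is the trade-off described above.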
Maybe there are other possibilities in the codec; I've seen AVC-HD
from Sony deliver better image quality (in terms of compression) at a
lower rate (35 Mbit/s) than the Canon codec (which is H.264 too). But
maybe that's limited by the processing power of the image processor.
I don't know enough to say whether it comes down to better use of
B-frames...???
So long, have a nice day,
greets and cheers,
Jan
So when you increase the bitrate, do you basically increase the
resolution? What would you say the bitrate for 2K, 3K, 4K, etc. would
be?

--- On Fri, 5/13/11, jraiber <jra...@web.de> wrote: