OK, I didn't know yet that this behaviour is called "parfocal". My
18-200 must be a so-called "varifocal" lens then.
Anyhow, wouldn't it still be a nice feature if Magic Lantern could
make varifocal lenses behave like parfocal ones?
I guess there are other people besides me using varifocal lenses.
On 9 Jan., 18:06, "K." <justynadric...@gmail.com> wrote:
> No, it's not the case here. Just buy a parfocal zoom. Your 18-200 isn't parfocal.
Yes, parfocal. I think Magic Lantern would need to recognize each lens and correct focus differently, because each model has a different focus shift when zooming in/out, and this varies between manufacturers. It would also depend on how far away your subject is: focus would need to be shifted differently for far and close subjects.
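The per-lens correction described above could be sketched as a lookup table per lens model, interpolated by focal length. All names and values below are hypothetical, just to illustrate the idea:

```c
#include <stddef.h>

/* Hypothetical per-lens focus-shift table: maps focal length (mm)
 * to a focus correction in (made-up) focus-motor steps. */
struct shift_entry { int focal_mm; int correction; };

/* Linearly interpolate the correction for a given focal length. */
static int focus_correction(const struct shift_entry *tab, size_t n, int focal_mm)
{
    if (focal_mm <= tab[0].focal_mm) return tab[0].correction;
    for (size_t i = 1; i < n; i++) {
        if (focal_mm <= tab[i].focal_mm) {
            int f0 = tab[i-1].focal_mm, f1 = tab[i].focal_mm;
            int c0 = tab[i-1].correction, c1 = tab[i].correction;
            return c0 + (c1 - c0) * (focal_mm - f0) / (f1 - f0);
        }
    }
    return tab[n-1].correction;
}
```

A second dimension (subject distance) would scale the correction further, as noted above; calibrating such tables per lens model is the hard part.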
> you need to work on higher resolution data than ML currently works with (ie the high-def video ram buffer).
That's true. The movie from Vimeo uses a 720p buffer (e.g. seg1 while
not recording). I think it can be downscaled without much loss of quality.
I'm thinking to process only adjacent pixels which fit in an int32.
Those pixels are coded as:
uint32_t pixel = v_row[x/2];
uint32_t p0 = (pixel >> 24) & 0xFF; // odd bytes are luma
uint32_t p1 = (pixel >>  8) & 0xFF;
So, from a single memory read, I want to compute the thresholded edge
strength, which is abs(p1 - p0) > thr ? 1 : 0. The threshold is
constant within one frame, but will change from frame to frame.
Is there a way to compute it very fast in assembler?
Drawing speed is not a problem, since only 1...5% of the tested pixels
will actually be displayed.
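Before reaching for assembler, the operation can at least be written branchlessly in C. A sketch (the helper name is made up; the byte layout assumes odd bytes are luma, as described above):

```c
#include <stdint.h>

/* Hypothetical helper: compute abs(p1 - p0) > thr from one packed
 * 32-bit read, without branches. Not ML's actual code, just a sketch. */
static inline uint32_t edge_from_packed(uint32_t pixel, uint32_t thr)
{
    int32_t p0 = (pixel >> 24) & 0xFF;  /* odd bytes are luma */
    int32_t p1 = (pixel >>  8) & 0xFF;
    int32_t d  = p1 - p0;

    /* branchless absolute value (relies on arithmetic right shift) */
    int32_t mask = d >> 31;
    uint32_t ad = (uint32_t)((d + mask) ^ mask);

    return ad > thr;  /* 1 = edge, 0 = no edge */
}
```

On ARM, a compiler will typically turn this into a handful of instructions with no branches, which is the kind of code the ASM experts could then tighten further.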
I think it requires more optimization than the Zebras, but with the
help of AJ and Piers (the ASM experts), it will be possible. My
estimate right now is exactly like a Debian release: when it's done.
It also depends on how much time we can dedicate to ML, since we
have day jobs, too.
> I'm not very comfortable with changing code that I don't know how it works. I am guessing this is similar to the Config parameters.
I've tried to change CONFIG_INT to signed int, and that made ML run
like a 286 with the turbo switch off... no idea what happened.
Did you notice that memory address X has the same contents as memory
address X | 0x40000000? The first address is cacheable, the second is
not (it bypasses the cache).
E.g. in the 550D, the image buffer at 0x40D07800 contains the same
data as the one at 0x00D07800.
and in 5D2:
seg1 = seg12,
seg2 = seg13,
seg3 = seg14,
Does this mean Canon upsamples this data when recording (with some
quality loss), or is it just a slightly smaller buffer used for other
purposes (like AE, AF...)?
AJ, is it worth analyzing the other buffers, or are they simply
copies? Or do they have different lags (e.g. one buffer keeps the last
frame, another keeps the previous frame)? Also, is it possible to do
some kind of vsync?
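The X vs X | 0x40000000 aliasing mentioned above is commonly expressed as a pair of macros. A sketch (the macro names are an assumption, not necessarily what ML uses):

```c
#include <stdint.h>

/* Bit 0x40000000 selects the uncacheable alias of the same physical
 * memory on these cameras; these macros convert between the views.
 * Names are hypothetical. */
#define UNCACHEABLE(addr) ((uint32_t)(addr) |  0x40000000u)
#define CACHEABLE(addr)   ((uint32_t)(addr) & ~0x40000000u)
```

E.g. UNCACHEABLE(0x00D07800) gives 0x40D07800, the 550D image buffer pair mentioned above. The uncacheable view matters when DMA or another core writes the buffer behind the CPU cache's back.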
Is anyone interested in taking silent pictures (without moving the
mirror), at 1720x974, uncompressed YUV422?
"silent pictures" - a dream within a dream :D
1720x974 uncompressed YUV422?
I am in...!!!!
If doing this requires different shutter hardware, we don't have any
chance.
The silent picture would be done like this:
- go to Movie mode
- move focus to the star (*) button (ML can change this setting if needed)
- half-press the shutter (I don't know how to stop a full press)...
or you can suggest another button (SET?)
- camera will start recording, dump the image buffer (4 MB), then
delete the movie (this takes 1-2 seconds or so)
- for postprocessing, I can write a script which converts the YUV422
to JPEG or TIFF.
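Such a postprocessing script boils down to unpacking each packed UYVY word into two RGB pixels. A rough sketch in C (the byte order follows the "odd bytes are luma" layout described earlier; the BT.601 full-range coefficients are an assumption, not verified against the camera):

```c
#include <stdint.h>

static uint8_t clamp8(int v)
{
    if (v < 0) return 0;
    if (v > 255) return 255;
    return (uint8_t)v;
}

/* Convert one packed UYVY word (two horizontal pixels sharing U/V)
 * into two RGB triples, using fixed-point BT.601 coefficients. */
static void uyvy_to_rgb(uint32_t pixel, uint8_t rgb[6])
{
    int u  = (int)((pixel >>  0) & 0xFF) - 128;
    int y0 = (int)((pixel >>  8) & 0xFF);
    int v  = (int)((pixel >> 16) & 0xFF) - 128;
    int y1 = (int)((pixel >> 24) & 0xFF);

    for (int i = 0; i < 2; i++) {
        int y = i ? y1 : y0;
        rgb[3*i + 0] = clamp8(y + ((1436 * v) >> 10));           /* R */
        rgb[3*i + 1] = clamp8(y - ((352 * u + 731 * v) >> 10));  /* G */
        rgb[3*i + 2] = clamp8(y + ((1815 * u) >> 10));           /* B */
    }
}
```

Writing the resulting RGB rows through libjpeg or libtiff would then produce the JPEG/TIFF output.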
For this to work, I also need to know how to do vsync. AJ, do you have
any suggestions for this?
The main limiting factor is how fast we can save those pictures. I
think 4 MB can be saved in 0.5 seconds or less. If I put QScale at
+16, the movie recording process shouldn't interfere too much with saving.
First, I was talking about stills. Implementing (or porting) a new
video codec would be much more difficult.
Second, 3:2 is only 1056x704. The high-res buffer (1720x974) is 16:9.
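As a sanity check on those figures, YUV422 at 2 bytes per pixel gives roughly 3.2 MB for the 1720x974 buffer, which is in the same ballpark as the 4 MB dump mentioned earlier:

```c
/* Buffer size for YUV422 data, which averages 2 bytes per pixel
 * (each pair of pixels shares one U and one V byte).
 * 1720x974 -> 3,350,560 bytes (~3.2 MB); 1056x704 -> ~1.4 MB. */
static unsigned yuv422_bytes(unsigned w, unsigned h)
{
    return w * h * 2;
}
```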
Great info, I should investigate this.
> YUV422 -> SCREEN
This will make your zoom capability much better-looking and simpler to
implement (you won't need the LUT any more). At some point, you said
that you wrote somewhere into a VRAM buffer and got something on
screen. Can you display something meaningful with this method?
> 1080p out of the camera
Don't we already have that? Or do you mean yuv 422 uncompressed 1080p?
How much RAM can we safely malloc?
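One crude way to answer that empirically (a host-side sketch of the probing idea, not how ML's allocator actually behaves on the camera) is to halve the request until malloc succeeds:

```c
#include <stdlib.h>

/* Probe the largest single allocation that succeeds, starting from
 * `start` bytes and halving on failure. The freed result is only a
 * rough upper bound; on the camera, fragmentation and Canon's own
 * allocations would shrink it further. */
static size_t probe_max_alloc(size_t start)
{
    size_t sz = start;
    while (sz > 0) {
        void *p = malloc(sz);
        if (p) {
            free(p);
            return sz;
        }
        sz /= 2;
    }
    return 0;
}
```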
return *(220728 + 4*arg0)
Also, this function seems to create JPEG files from LiveView:
sprintf_maybe(unk_SP /* points to unk_R5 */, "B:/DCIM/LV%06d.jpg",
*(0x5280), 12 + unk_R4 /* points to unk_R5 */)
FIO_CreateFile(unk_SP /* points to unk_R5 */)
It's pretty complex, so I don't know if it's safe to call or not, and
also what its parameters are.
Focus assist is not there; I've started to code it, but it's not ready yet.
The silent pictures are taken from the exact same image buffer which
will be used for focus assist.