--
--
http://magiclantern.wikia.com/
To post to this group, send email to ml-d...@googlegroups.com
To unsubscribe from this group, send email to ml-devel+u...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/ml-devel?hl=en
OK, I didn't know yet that this behaviour is called "parfocal". My
18-200 must be a so-called "varifocal" lens then.
Anyhow, wouldn't it still be a nice feature if Magic Lantern could
make varifocal lenses behave like parfocal ones?
I guess there are other people besides me using varifocal
lenses... :-)
On 9 Jan., 18:06, "K." <justynadric...@gmail.com> wrote:
> No, it's not the case here. Just buy a parfocal zoom. Your 18-200
> probably isn't parfocal.
>
> 2011/1/9 Chris71 <niedeg...@gmx.net>
Yes, parfocal. I think Magic Lantern would need to recognize each lens and
correct focus differently, because each model (and each manufacturer) has
different focus shifts when zooming in and out. It would also depend on how
far away your subject is; focus would need to be shifted differently for
far and for close subjects.
> you need to work on higher resolution data than ML currently works with (ie the high-def video ram buffer).
That's true. The movie on Vimeo uses a 720p buffer (e.g. seg1 while
not recording). I think it can be downscaled without much loss of
information.
I'm thinking of processing only adjacent pixels which fit in an int32.
Those pixels are coded as:
YYYYYYYYccccccccYYYYYYYYcccccccc
i.e.
uint32_t pixel = v_row[x/2];
uint32_t p0 = (pixel >> 24) & 0xFF; // odd bytes are luma
uint32_t p1 = (pixel >>  8) & 0xFF;
So, from a single memory read, I want to compute the thresholded edge
strength, which is abs(p1 - p0) > thr ? 1 : 0. Threshold is constant
within one frame, but will change from one frame to another.
Is there a way to compute it very fast in assembler?
Drawing speed is not a problem, since only 1...5% of the tested pixels
will actually be displayed.
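For reference, a minimal C sketch of the per-pair test described above. The bit layout is the one stated in this thread (luma in bits 31-24 and 15-8 of each packed word), and the function name is made up for illustration, not taken from the ML source:

```c
#include <stdint.h>
#include <stdlib.h>

/* Assumed packing from this thread: YYYYYYYY cccccccc YYYYYYYY cccccccc,
 * i.e. the two luma bytes sit in bits 31-24 and 15-8 of the word. */
static int edge_from_packed(uint32_t pixel, int thr)
{
    int p0 = (pixel >> 24) & 0xFF;   /* luma of first pixel  */
    int p1 = (pixel >>  8) & 0xFF;   /* luma of second pixel */
    return abs(p1 - p0) > thr;       /* thresholded edge strength: 0 or 1 */
}
```

On ARMv6 and later, the absolute-difference part maps naturally onto the USAD8 instruction mentioned later in this thread.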
Konarix:
I think it requires more optimization than the Zebras, but with the
help of AJ and Piers (the ASM experts), it will be possible. My
estimation right now is exactly like Debian releases: when it's done.
It also depends on the amount of time we can dedicate to ML, since we
have daily jobs, too.
> I'm not very comfortable with changing code that I don't understand. I am guessing this is similar to the Config parameters.
I've tried to change CONFIG_INT to signed int, and that made ML run
like a 286 with the turbo switch off... no idea what happened.
Did you notice that memory address X has the same contents as memory
address X | 0x40000000? The first address is cacheable, the second is
uncacheable.
e.g. in 550D, the image buffer 0x40D07800 contains the same stuff as
the one from 0x00D07800.
and in 5D2:
seg1 = seg12,
seg2 = seg13,
seg3 = seg14,
etc.
http://magiclantern.wikia.com/wiki/VRAM/550D
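The alias described above can be expressed as a pair of C macros. The bit-30 behaviour is as reported in this thread; the macro names themselves are hypothetical, not ML identifiers:

```c
#include <stdint.h>

/* On these cameras, setting bit 30 of an address reportedly selects the
 * uncacheable alias of the same physical memory; clearing it gives the
 * cacheable view. Macro names are made up for this sketch. */
#define UNCACHEABLE(addr) ((uint32_t)(addr) |  0x40000000u)
#define CACHEABLE(addr)   ((uint32_t)(addr) & ~0x40000000u)
```

For example, the 550D image buffer 0x00D07800 and its alias 0x40D07800 would round-trip through these macros.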
Does this mean Canon upsamples this data when recording (with some
quality loss), or is it just a slightly smaller buffer used for other
purposes (like AE, AF...)?
AJ, is it worth analyzing the other buffers, or are they simply
copies? Or do they have different lags (e.g. one buffer keeps the last
frame, another the previous frame)? Also, is it possible to do some
kind of vsync?
Is anyone interested in taking silent pictures (without moving the
mirror), at 1720x974, uncompressed YUV422?
"silent pictures" - a dream within a dream :D
1720x974 uncompressed YUV422?
I am in...!!!!
--
If there are hardware differences in the shutter that prevent this, we
don't have any chance.
The silent picture would be done like this:
- go to Movie mode
- assign focus to the star button (ML can change this setting if needed)
- half-press the shutter (I don't know how to block a full shutter
press)... or you can suggest another button (SET?)
- the camera will start recording, dump the image buffer (4 MB), then
delete the movie (this takes 1-2 seconds or so)
- for postprocessing, I can write a script which converts the YUV 422
to JPEG or TIFF.
For this to work, I also need to know how to do vsync. AJ, do you have
any suggestions for this?
The main limiting factor is how fast we can save those pictures. I
think 4 MB can be saved in 0.5 seconds or less. If I put QScale at
+16, the movie recording process shouldn't interfere too much with
this.
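The postprocessing step mentioned above (YUV 422 to JPEG or TIFF) boils down to a YCbCr-to-RGB conversion per pixel. Here is a rough integer sketch using standard full-range BT.601 coefficients; whether Canon's buffer is full-range or studio-range is an assumption that would need checking:

```c
#include <stdint.h>

static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Full-range BT.601, fixed point with 10 fractional bits:
 *   R = Y + 1.402 (Cr-128)
 *   G = Y - 0.344 (Cb-128) - 0.714 (Cr-128)
 *   B = Y + 1.772 (Cb-128)                                         */
static void yuv_to_rgb(int y, int cb, int cr, uint8_t rgb[3])
{
    int d = cb - 128, e = cr - 128;
    rgb[0] = clamp8(y + ((1436 * e) >> 10));
    rgb[1] = clamp8(y - ((352 * d + 731 * e) >> 10));
    rgb[2] = clamp8(y + ((1815 * d) >> 10));
}
```

A PC-side script would just walk the dumped buffer two pixels (one 32-bit word) at a time and feed each luma/chroma triple through this function before handing the RGB data to a JPEG or TIFF encoder.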
First, I was talking about stills. Implementing (or porting) a new
video codec is much more difficult.
Second, 3:2 is only 1056*704. The high-res buffer (1720x974) is 16:9.
> Vsync
Great info, I should investigate this.
> YUV422 -> SCREEN
This will make your Zoom capability much better-looking and simpler to
implement (you won't need the LUT any more). At some point, you said
that you wrote somewhere in a VRAM buffer and you got something on
screen. Can you display something meaningful with this method?
> 1080p out of the camera
Don't we already have that? Or do you mean yuv 422 uncompressed 1080p?
> malloc
How much RAM can we safely malloc?
NSTUB(0xFF0F41BC, AJ_lv_continous_frame_save_related)
return *(220728 + 4*arg0)
Called by:
AJ_lv_continous_frame_save+56: AJ_lv_continous_frame_save_related(0)
Also, this function seems to create JPEG files from LiveView:
NSTUB(0xFF0FE674, lvcdevResourceGet)
...
sprintf_maybe(unk_SP /* points to unk_R5 */ , 'B:/DCIM/LV%06d.jpg',
*(0x5280), 12 + unk_R4 /* points to unk_R5 */ )
FIO_CreateFile(unk_SP /* points to unk_R5 */ )
...
It's pretty complex, so I don't know whether it's safe to call, or
what its parameters are.
Focus assist is not there; I've started to code it, but it's not ready yet.
The silent pictures are taken from the exact same image buffer which
will be used for focus assist.
http://groups.google.com/group/ml-devel/msg/72451b985d90e38f
See this test:
http://www.dvxuser.com/V6/showthread.php?218686-Magic-Lantern-for-550D-in-progress!/page43
See http://magiclantern.wikia.com/wiki/ASM_Zedbra
On Wed, Jan 12, 2011 at 5:01 PM, mohan manu <moha...@gmail.com> wrote:
> very disappointing.... :( :( :(
>
> Does the 7D or the Mark II do the same?
>
So this means that legally I can market a camera which claims to 'record 1080p', but in reality samples fewer pixels which are then upsampled?
Isn't that a breach of the Trade Descriptions Act or something?
New! 4K Barbie-cam! Just upsample yourself in post to get Hollywood-quality results!
I don't remember; it was a long time ago and I can't find any link to the news... but it caught my attention.
It would be interesting if someone took a picture of a resolution chart and found out whether upsampling happens from 1872 -> 1920 (for recording), or whether the 1872-pixel buffer is itself being upsampled - and if so, what the real resolution is.
That number may equate to a number in the 'engio-struct' definition ... might be useful.
AJ
You are very optimistic about Canon. They never listen to their customers and have never added new features to an existing product (except the 5D2 video modes).
And don't mention 7D because that is a shame...
"I don't have anything against them; people complained about 24p and they gave it to them. People complained about manual audio and they did it for the 60D."
If they listen to their customers so well, then where is the manual audio in the 7D?
And why did they put the update counter in the 7D? I don't know any photographer who asked for that "feature"...
Only the 5D2 has got new functions in Canon's history... vs. the whole Canon DSLR line. -> To me this means they are not listening to their customers.
ML came from Trammell, not from Canon, so it doesn't count... :-)
I own a 7D but I went and bought a 550D just so I can run ML :) .
--
During LiveView operation, the buffer sizes are those from wiki:
http://magiclantern.wikia.com/wiki/VRAM/550D
So the camera does not sample the full sensor. I think this also
allows cooler operation when not recording.
During picture taking (with mirror movement), the buffer is 18MP.
If the entire sensor were scanned at 18 MP in LiveView, you would
get extreme jello effect. With a mechanical shutter, the sensor can
send the entire data without hurrying, and there's no jello (exposure
takes place only when shutter is open, but reading data continues
after completely closing the shutter).
I discovered that by dismantling an old Sony compact (for infrared
modding); when I removed the shutter, it took overexposed pictures
with strange patterns (something like Egyptian artwork).
These cameras do have an electronic shutter, which can be turned
on/off (and used only during LV or movie mode). I think the electronic
shutter is rolling.
If we can discover how to configure the scanning window (the area from
which the camera requests data from the sensor), then we'll be able to
take 18MP pictures in silent mode, but with jello. This would be
really nice imo, but very difficult to implement with my current
knowledge about the camera.
With 50/1.8 I doubt it will be reliable (it's too slow in LV focusing).
Ideas?
On Thu, Jan 13, 2011 at 7:02 PM, Lionel Davey <audio...@gmail.com> wrote:
> Would it be possible to lock the focus on a subject and have the firmware
> follow that subject maintaining focus?
>
A simpler thing to implement is to set rack focus start/end points by
moving the focus window (the little rectangle) on the screen.
But all of these will only be possible after implementing the focus
assist as you saw in the movie.
--
> 3) Someone can tell me if I am wrong: I think the most efficient way to do
> absolute differences is to use Exclusive Or.
> yyyyyyyy XOR YYYYYYYY = (I think this is = ) absolute difference.
2 xor 3 = 1, OK.
3 xor 5 = 6, not very exact...
Did you use USAD8 and USADA8? It seems they could help here.
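To make the XOR point concrete, and to show what USAD8 actually computes, here is a plain-C reference model. USAD8 itself is an ARMv6 instruction; this emulation just mirrors its documented semantics for illustration:

```c
#include <stdint.h>
#include <stdlib.h>

/* XOR only equals |a-b| when no borrows occur across bits:
 * 2 ^ 3 = 1 = |2-3|, but 3 ^ 5 = 6 while |3-5| = 2. */

/* Reference model of ARM USAD8: sum of absolute differences of the
 * four unsigned byte lanes of two 32-bit words. */
static uint32_t usad8_ref(uint32_t a, uint32_t b)
{
    uint32_t sum = 0;
    for (int i = 0; i < 32; i += 8)
        sum += (uint32_t)abs((int)((a >> i) & 0xFF) - (int)((b >> i) & 0xFF));
    return sum;
}
```

USADA8 is the same operation plus an accumulate, which is why the pair looks attractive for summing edge strengths over packed luma bytes.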
On Mon, Jan 10, 2011 at 11:42 AM, Antony Newman <antony...@gmail.com> wrote:
> Alex,
>
> 1) Points to note:
> ----------------------------------
> p0 = Luma of Pixel on right
> p1 = Luma of Pixel on left
> -----------------------------------
>
> 2) If you read 2 x vram pixels in a row, then you have 4 pixels, which
> means you can do 3 Absolute differences.
> If you only do the difference between each of the two pixels then you are
> wasting contrast information that you could use.
>
> 3) Someone can tell me if I am wrong: I think the most efficient way to do
> absolute differences is to use Exclusive Or.
> yyyyyyyy XOR YYYYYYYY = (I think this is = ) absolute difference.
>
> Therefore:
>
> unsigned int Pixel = Vram Pixel;
> unsigned int Contrast = Pixel ^ (Pixel >> 16) ; // would give you
> ???????? ???????? CCCCCCCC ????????
>
> // would give you contrast at the CCCCCCCC position.
>
>
> 4) In the ASM that I wrote ... I got a bit 'tricky' ... and thought .. what
> if you had:
> Vram1 = AAAAAAAA ???????? BBBBBBBB ????????
> Vram2 = CCCCCCCC ???????? DDDDDDDD ????????
>
> Vram1 ^ Vram2 = [AAAAAAAA^CCCCCCCC] [????????] [BBBBBBBB^DDDDDDDD]
> [????????]
>
> And so with one XOR ... you could get 2 x absolute differences in one
> instruction.
>
> 5) Sobel edge detection (as used by Trammell) using differences in the
> horizontal and vertical.
>
> The issue here is that if you want to process 1 x line, you end up
> reading the next one (from vram) to calculate the first one.
> So you end up reading 2 x vram lines for each one you display.
>
> A better solution may be to process N lines at a time - then you
> only need N + 1 vram lines for Sobel - ie you only need to read an
> extra line every N lines.
>
> In my asm version, I read two lines in parallel, and only 50% of the
> vram was re-read (rather than 100%).
>
>
> 6) Speed: I think the integrated Zebra, (Sobel) Edge detection, Focus
> accumulator and Overlay YUV transform assembler ran at around 24 fps for the
> entire screen. It didn't use the ML code base (whose graphics are modular).
>
>
> 7) Before you code something into ASM - I suggest that the algorithm
> you want to use be fine-tuned to do exactly what you need.
> It took me about 30 hours to decide on an optimised version of the ASM
> for the 'all-in-one' (on pieces of paper). Then about 4.5 days to write
> it in asm (mostly because I could not find examples of how to get GCC
> to do nothing other than compile my code, and secondly the discovery
> that the DryOS interrupts do something very naughty and expect one of
> your registers (r13) not to be used as temporary workspace).
>
> In a nutshell - I'd need to see your design first before I could help
> you optimise the code (in C or ASM).
>
> 8) If you want to test one line of ASM at a time - you can do it like
> this: (taken from my Logarithm code)
>
> /*******************************************************
> * OK .. use CLZ from ASM to count the leading Zeros *
> *******************************************************/
>
> unsigned int yclz = 32 - aj_CLZ( val ); // if val = 0,      yclz = 0
>                                         // if val = 2^32-1, yclz = 32
>
> ==============
>
> /*************************************************************************************************
> * aj_CLZ() - Do an asm( CLZ )
> *************************************************************************************************/
> unsigned int aj_CLZ( unsigned int input_num)
> {
> asm volatile(
>
> " CLZ r0,r0\n"
> " MOV r15,r14\n"
> //===============================================
> //===============================================
> //======= ^^ RETURN POINT OF ASM ^^ ==========
> //===============================================
> //===============================================
>
> : // Output operands eg.
> [output1]"+r"(g_r13_temp_store)
> : // Input operands eg. [input]"m"(parm1)
> : "r0" // eg "memory","cc" = Clobber list
> ); // end of asm volatile()
>
> return( 666 );
>
> } /* end of aj_clz() */
>
>
>
> 9) I have attached the annotated asm routine that did:
>
> +) YUV transformation lookup
> +) Colour dithering
> +) Under and Over exposure Zebra
> +) Screen clearing
> +) Sobel Edge detection
> +) Focus accumulation
> +) Overlay update (zebra or transformed YUV)
>
> Note - I don't use this anymore (and found a YUV lookup that takes 2 less
> cycles with a different methodology)
>
> AJ
>
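AJ's point 5 above (reading N+1 rows to process N) can be illustrated with a plain-C gradient pass over one row that reuses the row below it. This is only a sketch of the row-reuse idea with a simplified horizontal+vertical difference, not ML's actual code and not a full 3x3 Sobel kernel:

```c
#include <stdint.h>
#include <stdlib.h>

/* Gradient-style edge strength for one row of luma bytes, using the
 * row below for the vertical difference. Processing rows in pairs
 * this way means roughly one extra row read per pair, matching the
 * ~50% re-read figure mentioned above. */
static void edge_row(const uint8_t *cur, const uint8_t *below,
                     uint8_t *out, int width)
{
    for (int x = 0; x < width - 1; x++) {
        int dx = abs(cur[x + 1] - cur[x]);   /* horizontal difference */
        int dy = abs(below[x]   - cur[x]);   /* vertical difference   */
        int g  = dx + dy;
        out[x] = g > 255 ? 255 : (uint8_t)g; /* saturate to a byte    */
    }
    out[width - 1] = 0;                      /* no right neighbour    */
}
```

A caller processing N rows at a time would fetch rows r..r+N from vram and emit rows r..r+N-1, so only the boundary row is read twice.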
Known bugs:
* it's easily fooled by contrast (or lack of it)
* it may display red markers even if there's nothing in focus
Implementation details:
* Only horizontal edges are detected.
* Only adjacent pixels which fit in an int32 are considered.
* Threshold: the 1% percentile (i.e. roughly the top 1% of edge strengths count as edges).
* When recording, HD buffer is downsampled horizontally by 2.
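The percentile threshold above can be picked cheaply with a histogram of edge strengths. A sketch of that idea follows; the 256-bin size and the exact bookkeeping are assumptions for illustration, not lifted from zebra.c:

```c
#include <stdint.h>

/* Walk a 256-bin histogram of edge strengths from the top and return
 * the threshold above which roughly the top 1% of samples lie. */
static int percentile_threshold(const uint32_t hist[256], uint32_t total)
{
    uint32_t want = total / 100;      /* top 1% of samples */
    if (want == 0)
        want = 1;                     /* always keep at least one */
    uint32_t acc = 0;
    for (int t = 255; t > 0; t--) {
        acc += hist[t];
        if (acc >= want)
            return t;
    }
    return 0;
}
```

Since the threshold only changes from one frame to the next, the histogram from frame k can set the threshold for frame k+1, keeping the per-pixel loop a single compare.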
AJ, can you take a look at the code?
http://bitbucket.org/hudson/magic-lantern/src/tip/zebra.c in function
draw_focus_assist.
BTW, the current speed is achieved with the mirror buffer enabled
(i.e. it does not overwrite Canon's own drawing on the screen). But
because only 1% of pixels are actually drawn, the speed penalty is
minimal.
* Fixed the saving bug (you can now save the Focus Peak setting).
* Added some kind of color coding (warmer is better)
* Small optimizations (I'm not sure if they have any effect or not).
1) yes.
2) on my camera, it resumes when you release the button
3) there's no code in zebra.c which saves debug logs. Does it create
any new files? In which mode are you using it?
4) yes, but it's more complex to code.
I've disabled other stuff (cropmarks and zebras) for speed. They will
be re-enabled in the coming days.
I'm thinking of drawing cropmarks and the histogram at 1 fps, and
rewriting zebra with the same framework. I disabled them because I
thought this would need every bit of speed, but it's much faster than
I expected from pure C code.
=> will be fixed.
The colors show the small number in the upper left, which is the
threshold used for the edge image. In theory, colder colors might hint
at a false detection. In practice, experiments will tell whether they
mean anything or not.
I'll answer the other questions tomorrow, thanks for all the feedback!
> 2- What is the color code? I've seen dark blue, light blue and yellow
The threshold for edge detection (see previous mail).
> 3- Is this an appetizer until we get 1:1 magnification window? (just kidding ^_^)
I don't think we'll get 1:1 magnification (see Antony's work on this).