Focus assist experiments


Alex

unread,
Jan 9, 2011, 7:32:24 AM1/9/11
to Magic Lantern firmware development
I've begun experimenting with focus assist algorithms for Magic Lantern.

Theory: http://magiclantern.wikia.com/wiki/Focus_Assist
First experiment: http://vimeo.com/18584960

Suggestions welcome.

James Donnelly

unread,
Jan 9, 2011, 9:15:14 AM1/9/11
to ml-d...@googlegroups.com


What you posted on Vimeo seems to suggest that your experiments are very promising.  Is this using the same code that 5D users have deemed not very usable, or is it a new algorithm you have implemented?

Looks to me like it would already make very accurate critical focus possible under most conditions, can't wait to see it in a release to play with.

James Donnelly

unread,
Jan 9, 2011, 9:19:30 AM1/9/11
to ml-d...@googlegroups.com
I should have read more thoroughly; I see now that this was not done in the camera. Damn, got all excited again.

Alex

unread,
Jan 9, 2011, 9:20:25 AM1/9/11
to ml-d...@googlegroups.com
I've never seen how the 5D version works, so... I don't know. The Zebra code from ML is undecipherable for me... it has binary arithmetic, which I'm not good at.

What I'm pretty sure the 5D code doesn't have is the percentile threshold. I should also render a video with constant threshold to see the difference.


Rui Madruga

unread,
Jan 9, 2011, 10:12:11 AM1/9/11
to Magic Lantern firmware development
What I see in your video is a peaking filter. This is fantastic.
It works nicely if the image is in black and white.
Thank you for working on this.

Sincerely


Rui Madruga





Antony Newman

unread,
Jan 9, 2011, 10:42:03 AM1/9/11
to ml-d...@googlegroups.com
I believe the existing ML edge detection did work - maybe not at an especially high frame rate - and it was based on Sobel edge detection.
I believe it has a two-pixel resolution, and displays 32 'colours' depending on detail.

I've tried a single pixel 24fps version of this (coded in assembler).  I found that as the resolution of the target system was 1080p, having edge detection at 480p was not that helpful in getting focus perfect.

Following Marshal's own version of edge detection (single pixel), with far fewer shades of red for higher contrast:
I tried this too. Same problem.

My conclusion is that you need to work on higher resolution data than ML currently works with (ie the high-def video ram buffer).

I then implemented a 'contrast' accumulator that provides a sum of the 'Sobel' for the entire screen (and on each line).
The problem with this was how to 'show' this information in a meaningful way.

I've now stripped this from my asm routine library.  The binary arithmetic is a bit more involved than the C (which itself may already have looked involved), because I'd heavily optimised it.  If someone wants to resurrect it, I created an introduction to how the code works in an Excel spreadsheet.

The main reason I dropped the code was that for passive focusing (as opposed to actively getting the camera to change its focus), I think the 'Pop-up Overlay' may be a better solution.

AJ

 


Chris71

unread,
Jan 9, 2011, 11:44:20 AM1/9/11
to Magic Lantern firmware development
I want to suggest a different approach to keep focus when zooming:

I did video filming in the "good old" analogue times. The camera (a
Sony Video 8 camera) didn't autofocus as far as I can remember, but
keeping focus when zooming was quite easy: First you had to zoom fully
in, set the focus correctly on the desired object and then you could
start filming. The object stayed perfectly in focus throughout the
entire zoom range (at least as long as the distance to the object
stayed the same.)

When I film using my 550D I usually have the same approach, but
unfortunately the object doesn't stay perfectly in focus when zooming
out, so I have to correct it slightly manually. I'm using an EF-S
18-200mm IS Superzoom for filming and don't know if other zooms
(without such a big zoom range) stay better in focus when zooming.

Therefore I think it might be a nice feature if Magic Lantern would do
the necessary slight focus corrections when zooming out and in.

I guess that for this feature we would need to measure how the focus
on the different zoom lenses behaves when zooming, so there would be a
table for each supported lens that gives the focus correction for each
zoom setting.

Would this feature make sense for film makers?

Would this be possible to implement?

Chris

K.

unread,
Jan 9, 2011, 12:06:48 PM1/9/11
to ml-d...@googlegroups.com
No, it's not the case here. Just buy a parafocal zoom. Your 18-200 probably isn't parafocal.

2011/1/9 Chris71 <nied...@gmx.net>

Chris71

unread,
Jan 9, 2011, 12:20:07 PM1/9/11
to Magic Lantern firmware development
OK, I didn't know yet that this behaviour is called "parafocal". My
18-200 must be a so called "varifocal" lens then.

Anyhow, wouldn't it still be a nice feature if Magic Lantern could
make varifocal lenses behave like parafocal ones?

I guess there are other people besides me using varifocal
lenses... :-)


James Donnelly

unread,
Jan 9, 2011, 12:26:47 PM1/9/11
to ml-d...@googlegroups.com
I believe the term is parfocal rather than parafocal.

I think this is an excellent idea personally, and I don't see why in theory it shouldn't be possible with ML.  You would need a lens that reports its current focal length correctly for it to work.

However, I never zoom during a shot, so it doesn't interest me :)




K.

unread,
Jan 9, 2011, 1:00:03 PM1/9/11
to ml-d...@googlegroups.com
Yes, parfocal. I think Magic Lantern would need to recognize each lens and correct focus differently, because each model (and each manufacturer) has different focus shifts when zooming in/out.
It would also depend on how far away your object is; focus would need to be shifted differently for far and close objects.

2011/1/9 James Donnelly <jam...@easynet.co.uk>

konarix

unread,
Jan 9, 2011, 2:26:00 PM1/9/11
to Magic Lantern firmware development
Hi Alex.

When can we expect ML release with focus assist??

Best Regards.

chungdha

unread,
Jan 9, 2011, 3:01:24 PM1/9/11
to Magic Lantern firmware development
Nowadays zooming does not affect the newer lenses; mostly the old
lenses have this problem, the extreme case being a varifocal lens. But
with all new lenses you can zoom in to focus, zoom out, and the focus
will still be at the same point.

James Donnelly

unread,
Jan 9, 2011, 5:12:46 PM1/9/11
to ml-d...@googlegroups.com
On 9 January 2011 10:00, K. <justyna...@gmail.com> wrote:
Yes parfocal,I think MLantern would need to recognize each lens and correct focus differently because each model has different focus shifts when zooming in/out.Different manufacturers.
Also it would depend on how far your object is ,focus would need to be shifted differently on far and close objects.


Good point.  How about a calibration tool that plots a series of correction values for given focus distances and interpolates a graph/matrix for use while zooming in video mode?  To generate the points to interpolate from, with the camera in stills mode, could we track the (full-time?) autofocus corrections made by the camera during a slow zoom, and repeat for a range of focus distances?
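
(To make the idea concrete, a minimal sketch of what such a per-lens correction table and lookup could look like; the names, focal lengths and step values below are all invented for illustration, and a real table would also need a subject-distance dimension as K. notes above:)

struct zoom_focus_point { int focal_length_mm; int focus_steps; };

/* imaginary calibration points for one varifocal lens, measured at one subject distance */
static const struct zoom_focus_point corr_18_200[] = {
    { 18, 0 }, { 50, -12 }, { 100, -30 }, { 200, -75 },
};

/* linearly interpolate the focus correction for the current focal length */
static int focus_correction(const struct zoom_focus_point * t, int n, int fl)
{
    int i;
    if (fl <= t[0].focal_length_mm) return t[0].focus_steps;
    for (i = 1; i < n; i++)
        if (fl <= t[i].focal_length_mm)
            return t[i-1].focus_steps
                 + (t[i].focus_steps - t[i-1].focus_steps)
                 * (fl - t[i-1].focal_length_mm)
                 / (t[i].focal_length_mm - t[i-1].focal_length_mm);
    return t[n-1].focus_steps;
}

/* e.g. focus_correction(corr_18_200, 4, 75) gives the correction to apply at 75mm */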

Alex

unread,
Jan 10, 2011, 3:46:55 AM1/10/11
to ml-d...@googlegroups.com
Thanks for details, AJ.

> you need to work on higher resolution data than ML currently works with (ie the high-def video ram buffer).

That's true. The movie from vimeo uses a 720p buffer (e.g. seg 1 while
not recording). I think it can be downscaled without much loss of
information.

I'm thinking to process only adjacent pixels which fit in an int32.
Those pixels are coded as:
YYYYYYYYccccccccYYYYYYYYcccccccc
i.e.
    uint32_t pixel = v_row[x/2];
    uint32_t p0 = ((pixel >> 16) & 0xFF00) >> 8; // odd bytes are luma
    uint32_t p1 = ((pixel >>  0) & 0xFF00) >> 8;

So, from a single memory read, I want to compute the thresholded edge
strength, which is abs(p1 - p0) > thr ? 1 : 0. Threshold is constant
within one frame, but will change from one frame to another.

Is there a way to compute it very fast in assembler?

Drawing speed is not a problem, since only 1...5% of the tested pixels
will be actually displayed.
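
(Spelled out in plain C, the test described above looks roughly like this, before any asm optimization; mark_pixel() is just a placeholder for "draw a focus dot here", and width / thr / v_row are as above:)

    int x;
    for (x = 0; x < width; x += 2)
    {
        uint32_t pixel = v_row[x/2];          // two pixels per 32-bit read
        int p0 = (pixel >> 24) & 0xFF;        // luma of one pixel of the pair
        int p1 = (pixel >>  8) & 0xFF;        // luma of the other pixel
        int d  = p0 - p1;
        if (d < 0) d = -d;                    // edge strength = |p0 - p1|
        if (d > thr)
            mark_pixel(x);                    // only ~1...5% of pixels pass the threshold
    }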

Konarix:

I think it requires more optimization than the Zebras, but with the
help of AJ and Piers (the ASM experts), it will be possible. My
estimation right now is exactly like Debian releases: when it's done.
It also depends on the amount of time we can dedicate to ML, since we
have daily jobs, too.


Antony Newman

unread,
Jan 10, 2011, 4:42:12 AM1/10/11
to ml-d...@googlegroups.com
Alex,

1)  Points to note:
----------------------------------
p0 = Luma of Pixel on right
p1 = Luma of Pixel on left
-----------------------------------

2)   If you read 2 x vram pixels in a row, then you have 4 pixels, which means you can do 3 absolute differences.
If you only take the difference within each of the two pixel pairs then you are wasting contrast information that you could use.

3)  Someone can tell me if I am wrong:  I think the most efficient way to do absolute differences is to use Exclusive Or.
yyyyyyyy XOR YYYYYYYY  = (I think this is =) absolute difference.

Therefore:

unsigned int Pixel = Vram_Pixel;
unsigned int Contrast =  Pixel ^ (Pixel >> 16) ;        // would give you    ???????? ???????? CCCCCCCC ????????
                                                        // ie the contrast at the CCCCCCCC position.


4) In the ASM that I wrote ... I got a bit 'tricky' ... and thought .. what if you had:
Vram1 = AAAAAAAA ???????? BBBBBBBB ????????
Vram2 = CCCCCCCC ???????? DDDDDDDD ????????

Vram1 ^ Vram2 =  [AAAAAAAA^CCCCCCCC]  [????????] [BBBBBBBB^DDDDDDDD] [????????]

And so with one XOR ... you could get 2 x absolute differences in one instruction.

5) Sobel edge detection (as used by Trammell) uses differences in the horizontal and the vertical.

    The issue here is that if you want to process one line, you end up reading the next one (from vram) to calculate the first one.
    So you end up reading 2 x vram lines for each one you display.  

    A better solution may be to process N lines at a time - then you only need N + 1 vram lines for Sobel - ie you only need to read an extra line every
    N lines.

    In my asm version, I read two lines in parallel, and only 50% of the vram was re-read (rather than 100%).
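
(A rough C sketch of the same idea - keep the previously read line around and swap buffers, so each vram line is only fetched once; read_vram_line(), W and H are placeholders, not real ML functions:)

#include <stdint.h>

extern void read_vram_line(int y, uint32_t * dst);   /* hypothetical helper: copy one vram line */

#define W 720    /* 1440 pixels = 720 32-bit words per line (assumed) */
#define H 480

static uint32_t line_a[W], line_b[W];

unsigned int vertical_edge_sum(void)
{
    uint32_t * prev = line_a;
    uint32_t * cur  = line_b;
    unsigned int focus_sum = 0;
    int x, y;

    read_vram_line(0, prev);
    for (y = 1; y < H; y++)
    {
        read_vram_line(y, cur);                      /* only one new line per iteration */
        for (x = 0; x < W; x++)
        {
            int above = (prev[x] >> 24) & 0xFF;      /* luma from the line above   */
            int here  = (cur [x] >> 24) & 0xFF;      /* luma from the current line */
            focus_sum += (here > above) ? here - above : above - here;
        }
        { uint32_t * tmp = prev; prev = cur; cur = tmp; }   /* swap instead of re-reading */
    }
    return focus_sum;
}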


6) Speed:  I think the integrated Zebra, (Sobel) Edge detection, Focus accumulator and Overlay YUV transform assembler ran at around 24 fps for the entire screen.  It didn't use the ML code base (whose graphics are modular).
 

7) Before you code something into ASM - I suggest that the algorithm you want to use is fine-tuned to do exactly what you need.  
It took me about 30 hours to decide on an optimised version of the ASM for the 'all-in-one' (on pieces of paper). Then about 4.5 days to write it in asm (mostly because I could not find examples of how to get GCC to do nothing other than compile my code, and secondly the discovery that the DryOS interrupts do something very naughty and expect one of your registers (r13) not to be used as temporary workspace).

In a nutshell - I'd need to see your design first before I could help you optimise the code (in C or ASM).

8) If you want to test one line of ASM at the time - you can do it like this:   (taken from my Logarithm code)

   /*******************************************************
   *   OK .. use CLZ from ASM to count the leading Zeros  *
   *******************************************************/
  
   unsigned int yclz = 32 - aj_CLZ( val );      // if val =0,        yclz= 0
                                                // if val =2^32-1,   yclz= 32

==============

/*************************************************************************************************
*  aj_CLZ()  -  Do an asm( CLZ  )
*************************************************************************************************/
unsigned int aj_CLZ( unsigned int input_num)
{
   asm volatile(

"     CLZ r0,r0\n"
"     MOV r15,r14\n"    
//===============================================
//===============================================
//=======   ^^ RETURN POINT OF ASM ^^  ==========
//===============================================
//===============================================   
   
      :             // Output operands   eg.  [output1]"+r"(g_r13_temp_store)
      :             // Input operands    eg.  [input]"m"(parm1)
      : "r0"        // eg "memory","cc"    =   Clobber list      
   );  // end of asm volatile()

   return( 666 );

} /* end of aj_clz() */



9)  I have attached the annotated asm routine that did:

+) YUV transformation lookup
+) Colour dithering
+) Under and Over exposure Zebra
+) Screen clearing
+) Sobel Edge detection
+) Focus accumulation
+) Overlay update (zebra or transformed YUV)

Note - I don't use this anymore (and found a YUV lookup that takes 2 less cycles with a different methodology)

AJ
AJ3 Optimised ASM.xls

Alex

unread,
Jan 10, 2011, 4:49:58 AM1/10/11
to ml-d...@googlegroups.com
Wow, that's a lot of food for thought. Thanks.

Antony Newman

unread,
Jan 10, 2011, 6:41:08 AM1/10/11
to ml-d...@googlegroups.com
Forgot to mention that after I wrote the asm (in Excel), then cut and pasted it into the code, I found a bug (surprise surprise).

BGT and BLT (Branch greater than / Branch Less than) are 'Signed'
I had to switch these to use BLO and BHI (unsigned versions of the same instructions).

Simple ... but took me 6 hours to track down.

-

Side note:
Strangely enough - in my first attempt to integrate my code with ML .. the first variable that I chose was defined as 'unsigned' (in ML).
I don't know what the default is - and it caused my GCC to go a bit mad, and so I changed the type of the variable (in ML) to be 'unsigned int'.

I'm not very comfortable with changing code whose inner workings I don't know.    I am guessing this is similar to the Config parameters.

Although not 'type' correct,  I have coded everything (that I can) that interacts with the assembler to be of type 'unsigned int'.
(4 bytes unsigned = 1 x word = 1 register).


Side note 2:
ML has a few hard-coded structures. When one is read, 32 bytes (8 registers' worth) are loaded into cache from the memory address of that 32-byte-aligned item (assuming you are reading from cacheable memory).

When designing my false colour and histogram array, I store everything that I need (false colour and 256-level counter) in 1 x word.  I ensure this is allocated in cacheable memory, and aligned to a 32-byte boundary.   This means that only (256 / 8) = 32 reads are required before everything is cache resident.
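
(As a sketch, with a made-up name: forcing such a table onto a cache-line boundary with GCC looks something like this.)

/* 256 words, one per luma value; aligned(32) keeps the table on 32-byte
   cache-line boundaries, so 256/8 = 32 line fills bring the whole thing in. */
static unsigned int false_colour_and_count[256] __attribute__((aligned(32)));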


AJ

 

 

 

Alex

unread,
Jan 10, 2011, 6:46:21 AM1/10/11
to ml-d...@googlegroups.com
Signed BGT/BLT: maybe I should put this in the mini-asm wiki page.

> I'm not very comfortable with changing code (that I don't know how it works). I am guessing this is similar to the the Config parameters.

I've tried to change CONFIG_INT to signed int and that made ML run
like on a 286 with turbo switch off... no idea what happened.

Antony Newman

unread,
Jan 10, 2011, 7:54:16 AM1/10/11
to ml-d...@googlegroups.com
Alex,

Maybe a 'what to check when your asm goes t1tsup' page could be of use.

I blogged the BGT issue on http://magiclantern.wikia.com/wiki/2.0.4_AJ


Config / Menus:     When I get around to this, I'll be investigating in ML how:
+) To accept numerical input (which I'll need for User crop marks)
+) The ability to deactivate (grey out) options (eg when not possible)
+) The ability to store unsigned ints - which means that I'll have to read the code!

AJ

Alex

unread,
Jan 11, 2011, 2:26:07 AM1/11/11
to ml-d...@googlegroups.com
AJ,

Did you notice that memory address X has the same contents as memory
address X | 0x40000000 ? The first address is cacheable, second is
uncacheable.

e.g. in 550D, the image buffer 0x40D07800 contains the same stuff as
the one from 0x00D07800.
and in 5D2:
seg1 = seg12,
seg2 = seg13,
seg3 = seg14,
etc.
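
(The same trick expressed as a pair of helper macros; these particular names are only for illustration and may not match what is in the ML source:)

#include <stdint.h>

#define UNCACHEABLE(p)  ((void*)((uintptr_t)(p) |  0x40000000))   /* bypass the data cache   */
#define CACHEABLE(p)    ((void*)((uintptr_t)(p) & ~0x40000000))   /* back to the cached view */

/* e.g. on the 550D: UNCACHEABLE(0x00D07800) == (void*)0x40D07800 */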

Antony Newman

unread,
Jan 11, 2011, 3:41:23 AM1/11/11
to ml-d...@googlegroups.com
Hi Alex,

Very interesting, I had not noticed!    

When I saw the address of segment 1 as 0x04000080, I thought it was already in the uncachable region (ie I misread it as 0x40000080!).

This means that:
1) I have been accessing cached vram for the focus assist.   (ie the program will speed up when I switch to uncached video memory)

2) We only have half the number of vram segments to work out (HD has 4 banks now)

3) looks like the 'find segments' subroutine was pretty consistent (reordered the list below)

AJ


    {0x01B0FF00-0x5A0*0x17, 0x01B9CE9C, 534,  1440}, // seg  0
    {0x41B0FF50, 0x41B97FFC, 578, 1440},                       // seg 11 vram[0] uncachable 

    {0x04000080, 0x0415407C,1360, 2048},  // seg  1            HD_VRAM  [0] Cacheable
    {0x44000080, 0x4414FFFC,1344, 2048},  // seg 12           HD_VRAM [0]  UnCachable

    {0x10000080, 0x1015407C,1360, 2048},  // seg  2            HD_VRAM  [1] Cacheable
    {0x50000080, 0x5015407C,1328, 2048},  // seg 13           HD_VRAM  [1] UnCacheable
                                                           
    {0x1C00FF50, 0x1C097FFC, 578, 1440},  // seg  3       
    {0x5C00FF50, 0x5C09CE9C, 596, 1440},  // seg 14          vram[1] uncachable 

    {0x1C414000, 0x1C4F7FFC, 912, 1440},  // seg  4     
    {0x5C414000, 0x5C4F7FFC, 912, 1440},  // seg 15      
   
    {0x1C4F5278, 0x1C53FFFC, 224, 1260},  // seg  5    
    {0x5C4F5000, 0x5C5576FC, 286, 1260},  // seg 16
                    
    {0x1F60FF50, 0x1F69CE9C, 540, 1440},  // seg  6      
    {0x5F60FF50, 0x5F697FFC, 576, 1440},  // seg 18  
      
    {0x21B0FF50, 0x21B9CE9C, 596, 1440},  // seg  7

    {0x31B0FF50, 0x31B9CE9C, 596, 1440},  // seg  9     
          
    {0x5C578678, 0x5C5D7FFC, 320, 1260},  // seg 17    

    {0x24000080, 0x2414FFFC,1335, 2048},  // seg  8        HD_VRAM  [2] Cacheable
    {0x64000080, 0x6415407C,1328, 2048},  // seg 19        HD_VRAM  [2] UnCacheable
      

    {0x34000080, 0x3415407C,1328, 2048},  // seg 10        HD_VRAM  [3] Cacheable
    {0x74000080, 0x7415407C,1360, 2048}   // seg 20        HD_VRAM  [3] UnCacheable



Alex

unread,
Jan 11, 2011, 4:12:44 AM1/11/11
to ml-d...@googlegroups.com
Found one segment, 1056x704. This may grow when recording.

http://magiclantern.wikia.com/wiki/VRAM/550D

Antony Newman

unread,
Jan 11, 2011, 7:47:35 AM1/11/11
to ml-d...@googlegroups.com
CACHING
 
The CHDK coding guidelines are a good start for people .... I ended up reading the rather bulky GCC documents directly when I got stuck.  I still found there is a gap in what they cover:
 
1) Do you need to list R0 as a clobbered register if you change it in your code?
    I've got a bit defensive about this and have started clobbering the r0-r3 parameters passed to my routine ... just in case GCC thinks they have not changed.
 
2) What took me an entire day to understand is that there is a bug in gcc which relates to symbols (labels in asm you can branch to).  I upgraded from 4.3.2 -> 4.4.2 and found there is still a bug.  I've split my subroutine library into two, to ensure that in the second library there are no direct calls to the asm routines (Indy may have found another way too). 
 
CACHING & DMA
 
1) I have not got to the bottom of exactly what is causing the DMA issue with reading / writing.
    I currently limit my routines to read/write at most 4 registers at a time (reduces latency for interrupts). 
    But then I thought maybe the issue is that I am reading across two 32-byte boundaries .. which means that 64 bytes are being
    read into cache in one go .. and it is this that is causing the problem (ie reading 2 registers' worth).
    I will be mindful of this in future routines.
 
2) In my C code I have injected NOPs (to mimic Trammell).  I have never got any ERR70s at all in my code .. but I did get ARM lockups (with DMA still working in the background updating the screen).  In the end I used a dummy bmp_printf() in the main loop that 'solved' the lockup problem.
 
3)  I spent about 2 hours yesterday trying to track down why my Vector Cropmarks were vanishing .. and basically not behaving (I was trying to make sure they dynamically moved to the correct pixel locations when on LCD / HDMI standard / HDMI HD / and recording ... the last one needs more work for HDMI HD).  And I was getting very very confused.
 
NOW I think Alex has found the answer (I'm not at my PC).  The cropmark update was drawing just the outline box .. but .. this was going to the cache ... and NOT the screen ... the screen was only being updated when the cache was flushed at a later point (which is what I was trying to get to the bottom of!).  I'm back at midnightish and will test.
 
(sorry if this thread has gone off topic slightly).
 
SEGMENTS
 
The '1260' segment had 3 images in one.  They looked like low noise, high contrast segments ... probably used for face detection (or maybe contrast focus in the DryOS).
 
 
AJ
 
 
 
 
 

Antony Newman

unread,
Jan 11, 2011, 7:54:20 AM1/11/11
to ml-d...@googlegroups.com
Also .. CHDK.  They give examples of when coders have decided to go the 'naked' route
for asm subroutines - ie no prologue / epilogue.  I think it is more efficient to maximise
the number of registers that are passed directly into the routine (r0-r3), packing them
if possible (eg 3 items packed into 32 bits), rather than being forced to go to the stack or
alternative memory.  Ie going naked may seem clean - but you end up having to wrap the
code up with forced calls to access memory.
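
(A sketch of the packing idea with made-up field widths: three small values squeezed into one 32-bit word so they reach the asm routine in a single register.)

/* 11 + 11 + 10 bits = 32 bits total (field widths are just an example) */
static unsigned int pack_args(unsigned int x, unsigned int y, unsigned int mode)
{
    return (x & 0x7FF) | ((y & 0x7FF) << 11) | ((mode & 0x3FF) << 22);
}

/* ...and the asm side unpacks with shifts/masks:
   x = r0 & 0x7FF;  y = (r0 >> 11) & 0x7FF;  mode = r0 >> 22;  */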
 
AJ

Alex

unread,
Jan 11, 2011, 10:13:12 AM1/11/11
to ml-d...@googlegroups.com
The HD VRAM buffer is 1056*704 (3:2) when idle, and during recording
it increases to 1720x974.

Does this mean Canon upsamples this data when recording (with some
quality loss), or is it just a slightly smaller buffer used for other
purposes (like AE, AF...)?

AJ, is it worth analyzing the other buffers? Or are they simply
copies? Or do they have different lags? (e.g. one buffer keeps the last
frame, another keeps the previous frame). Also, is it possible to do some
kind of vsync?

Is anyone interested in taking silent pictures (without moving the
mirror), at 1720x974, uncompressed YUV422?

mohan manu

unread,
Jan 11, 2011, 10:18:51 AM1/11/11
to ml-d...@googlegroups.com
1720x974 uncompressed YUV422?

I am in...!!!!

xaos

unread,
Jan 11, 2011, 10:28:47 AM1/11/11
to ml-d...@googlegroups.com
> Is anyone interested in taking silent pictures (without moving the
> mirror), at 1720x974, uncompressed YUV422?

"silent pictures" - a dream within a dream :D

Carlos

unread,
Jan 11, 2011, 10:29:59 AM1/11/11
to ml-d...@googlegroups.com
yummm silent pictures... could this be inherited from 7D?

On Tue, Jan 11, 2011 at 4:18 PM, mohan manu <moha...@gmail.com> wrote:
1720x974 uncompressed YUV422?

I am in...!!!!

--

Alex

unread,
Jan 11, 2011, 10:48:40 AM1/11/11
to ml-d...@googlegroups.com
Since the 7D doesn't have ML, we can't spy on how it does the silent picture.
Does 5D2 do that?

If there are hardware differences in shutter for doing this, we don't
have any chance.

The silent picture would be done like this:
- go to Movie mode
- move focus to star button (ML can change this setting if needed)
- half-press shutter (I don't know how to stop full-press shutter)...
or you can suggest other button (set?)
- camera will start recording, dump the image buffer (4 MB), then
delete the movie (this takes 1-2 seconds or so)
- for postprocessing, I can write a script which converts the yuv 422
to jpeg or tiff.
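
(As a sketch of that postprocessing step, assuming the byte order discussed earlier in this thread (chroma, luma, chroma, luma per 32-bit word) and ordinary BT.601-style integer coefficients; it writes a plain PPM instead of jpeg/tiff, and a real dump may need tweaking:)

#include <stdio.h>
#include <stdint.h>

#define CLAMP(x) ((x) < 0 ? 0 : (x) > 255 ? 255 : (x))

/* convert a raw 422 dump (w*h*2 bytes) to a binary PPM */
static void yuv422_to_ppm(const uint8_t * in, int w, int h, FILE * out)
{
    int i, k;
    fprintf(out, "P6\n%d %d\n255\n", w, h);
    for (i = 0; i < w * h / 2; i++)              /* 4 bytes = 2 pixels        */
    {
        int u  = in[4*i + 0] - 128;              /* shared chroma             */
        int y0 = in[4*i + 1];
        int v  = in[4*i + 2] - 128;
        int y1 = in[4*i + 3];
        int y[2] = { y0, y1 };
        for (k = 0; k < 2; k++)                  /* both pixels share U and V */
        {
            fputc(CLAMP(y[k] + ((359 * v) >> 8)), out);            /* R */
            fputc(CLAMP(y[k] - ((88 * u + 183 * v) >> 8)), out);   /* G */
            fputc(CLAMP(y[k] + ((454 * u) >> 8)), out);            /* B */
        }
    }
}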

For this to work, I also need to know how to do vsync. AJ, do you have
any suggestions for this?

xaos

unread,
Jan 11, 2011, 10:59:16 AM1/11/11
to ml-d...@googlegroups.com
Can we take a defined number of pictures, not only one?

Alex

unread,
Jan 11, 2011, 11:20:16 AM1/11/11
to ml-d...@googlegroups.com
Of course.

The main limiting factor is how fast we can save those pictures. I
think 4 MB can be saved in 0.5 seconds or less. If I put QScale at
+16, the movie recording process shouldn't interfere too much with
this.

xaos

unread,
Jan 11, 2011, 11:38:01 AM1/11/11
to ml-d...@googlegroups.com
BTW.. Do we know how short a video we can take?

Antony Newman

unread,
Jan 11, 2011, 12:38:55 PM1/11/11
to ml-d...@googlegroups.com
Alex:

Vsync

Yes, there are vsync routines - I think you can register a routine to be called when a vsync occurs (on the bmp display device).
If you have my idc to hand, there is a well-commented structure that gives the name of the routine that registers that it wants a vsync call-back.

Or search for the routines that have vsync in them.   (I may have called it AJ_wait_vsync).

Silent Pictures

Taking a copy of vram from the uncached area should be possible.

(I think my Zebra routine was working at about 17 frames a second on the cached HD image - which means that a straight read of uncached memory should be doable in under a vsync).  Someone needs to see if you can malloc 1.4MB (to store the image).


HDVRAM buffers

We could really do with someone finding the memory locations (hopefully in a vram structure) that point to the base of each Vram segment.


Experiment 1 : RECORDING

It would be interesting to get someone to write to the uncached area of memory - and see if any one of these causes the output to get 'recorded'.   This would mean that 'that' vram segment is the one that the camera uses to record from.


Experiment 2 : YUV422 -> SCREEN

This is the what someone may be interested in investigating.
There is a YUV422filetoscreen routine.

I think this looks at the resolution of the screen (eg HD-HDMI) and displays the picture.

It may do this uncompressed, and without black bars.

The routine allocates memory, loads the YUV422 file into it, then displays the image directly.


Once you've worked out the 2 parms ... we should be able to change the Live-view to call this .... and instead of calling the routine to display a file, we simply point the display device at the HD HDMI buffer.

Fingers crossed ... we may get 1080p out of the camera.

AJ




K.

unread,
Jan 11, 2011, 12:44:06 PM1/11/11
to ml-d...@googlegroups.com
Alex, do you mean it would now be possible to record 3:2 aspect ratio in movie mode? So it doesn't crop to 16:9? This would be so great 

2011/1/11 Antony Newman <antony...@gmail.com>

Alex

unread,
Jan 11, 2011, 1:04:55 PM1/11/11
to ml-d...@googlegroups.com
> Alex do you mean it would be more likely possible to record 3:2 aspect ratio
> in movie mode now ???? So it doesnt crop to 16:9 ? This would be so great

First, I was talking about stills. To implement (or port) a new video
codec is much more difficult.
Second, 3:2 is only 1056*704. The high-res buffer (1720x974) is 16:9.

> Vsync
Great info, I should investigate this.

> YUV422 -> SCREEN
This will make your Zoom capability much better-looking and simpler to
implement (you won't need the LUT any more). At some point, you said
that you wrote somewhere in a VRAM buffer and you got something on
screen. Can you display something meaningful with this method?

> 1080p out of the camera
Don't we already have that? Or do you mean yuv 422 uncompressed 1080p?

> malloc
How much RAM can we safely malloc?

Alex

unread,
Jan 11, 2011, 2:25:44 PM1/11/11
to ml-d...@googlegroups.com
> HDVRAM buffers

NSTUB(0xFF0F41BC, AJ_lv_continous_frame_save_related)
return *(220728 + 4*arg0)

Called by:
AJ_lv_continous_frame_save+56: AJ_lv_continous_frame_save_related(0)

Also, this function seems to create JPEG files from LiveView:
NSTUB(0xFF0FE674, lvcdevResourceGet)
...
sprintf_maybe(unk_SP /* points to unk_R5 */ , 'B:/DCIM/LV%06d.jpg',
*(0x5280), 12 + unk_R4 /* points to unk_R5 */ )
FIO_CreateFile(unk_SP /* points to unk_R5 */ )
...

It's pretty complex, so I don't know if it's safe to call or not, and
also which are its parameters.

Antony Newman

unread,
Jan 11, 2011, 5:34:01 PM1/11/11
to ml-d...@googlegroups.com
"> YUV422 -> SCREEN
This will make your Zoom capability much better-looking and simpler to
implement (you won't need the LUT any more). At some point, you said"

If the HD segment is uncompressed, and we can point the screen directly at the segment, and the HDMI image that is rendered is not squashed ... we can get uncompressed HDMI output in Liveview.

Hook this up to a nano-flash and we've got something very cute.

(if it works, the magnification stuff that I've written would not be required).

-----------

AJ

Alex

unread,
Jan 12, 2011, 6:46:35 AM1/12/11
to ml-d...@googlegroups.com
Please test the latest build; it has the first implementation of
silent pictures.

Focus assist is not there; I've started to code it, but it's not ready yet.

The silent pictures are taken from the exact same image buffer which
will be used for focus assist.

http://groups.google.com/group/ml-devel/msg/72451b985d90e38f

Carlos

unread,
Jan 12, 2011, 6:53:01 AM1/12/11
to ml-d...@googlegroups.com
I'm anxious to try the silent picture feature.

Thanks to all the people involved

Alex

unread,
Jan 12, 2011, 8:44:28 AM1/12/11
to ml-d...@googlegroups.com
> Does this mean Canon upsamples this data when recording (with some quality loss)?

See this test:
http://www.dvxuser.com/V6/showthread.php?218686-Magic-Lantern-for-550D-in-progress!/page43

Carlos

unread,
Jan 12, 2011, 9:24:33 AM1/12/11
to ml-d...@googlegroups.com
aaaargh we are paying for 1080p and getting 974p! :P

That's why pana GH2 1080p videos look much sharper

Alex

unread,
Jan 12, 2011, 9:28:33 AM1/12/11
to ml-d...@googlegroups.com
And GH2 does not skip lines.

xaos

unread,
Jan 12, 2011, 9:33:37 AM1/12/11
to ml-d...@googlegroups.com
We don't have 4k, 3k, 2k, or even 1080p.. :P

mohan manu

unread,
Jan 12, 2011, 10:01:58 AM1/12/11
to ml-d...@googlegroups.com
very disappointing.... :(  :(  :(

Do the 7D and the Mark II do the same?

Alex

unread,
Jan 12, 2011, 10:03:59 AM1/12/11
to ml-d...@googlegroups.com
5D2 has 1872px horizontal resolution (not sure about the vertical one).

See http://magiclantern.wikia.com/wiki/ASM_Zedbra

On Wed, Jan 12, 2011 at 5:01 PM, mohan manu <moha...@gmail.com> wrote:
> very disappointing.... :(  :(  :(
>
> Does 7D , Mark do the same?
>

Carlos

unread,
Jan 12, 2011, 12:00:32 PM1/12/11
to ml-d...@googlegroups.com
Do you remember the guy that sued HP because they claimed one of their PDA's was able to reproduce 65K colors, but it wasn't true?

On Wed, Jan 12, 2011 at 5:55 PM, James Donnelly <jam...@easynet.co.uk> wrote:
So this means that legally I can market a camera which claims to 'record 1080p', but in reality samples fewer pixels which are then upsampled?

Isn't that a breach of the Trade Descriptions Act or something?

New! 4K Barbie-cam! Just upsample yourself in post to get Hollywood quality results!

K.

unread,
Jan 12, 2011, 12:05:32 PM1/12/11
to ml-d...@googlegroups.com
What happened? Did he lose?
I think Canon is a good company; they listen to customers. They released the 7D because people wanted 24p, then they gave 24p to the 5DII; I've never seen anything like this before from any company. You don't normally add features to existing products, but they did.

2011/1/12 Carlos <carlo...@gmail.com>

Antony Newman

unread,
Jan 12, 2011, 12:11:21 PM1/12/11
to ml-d...@googlegroups.com
It would be interesting if someone took a picture of a resolution chart and found out whether upsampling is happening from 1872 -> 1920 (for recording), or if the 1872 pixels are themselves upsampled - and if so, what the real resolution is.

That number may equate to a number in the 'engio-struct' definition ... might be useful.

AJ


Carlos

unread,
Jan 12, 2011, 12:25:18 PM1/12/11
to ml-d...@googlegroups.com

Don't remember, it was a long time ago and I can't find any link to the news... but it caught my attention



K.

unread,
Jan 12, 2011, 12:30:35 PM1/12/11
to ml-d...@googlegroups.com
I resized a 1720x974 picture from the 500D in silent still mode to 1920x1080, compared them, and the resized 1720x974 looks almost identical to a MOV frame grab. It looks like the camera uses bilinear resizing from 974p to 1080p before h.264 compression.
If someone were able to bypass this resize step, we would get a sharper, more detailed picture.


I don't have any res charts to test whether resolution improves a lot.

2011/1/12 Antony Newman <antony...@gmail.com>

pel

unread,
Jan 12, 2011, 12:22:26 PM1/12/11
to ml-d...@googlegroups.com

You are very optimistic about Canon. They never ever listen to their customers and never added new features to an existing product (except the 5D2 video modes).

And don't mention 7D because that is a shame...

Torrey Meeks

unread,
Jan 12, 2011, 2:17:05 PM1/12/11
to ml-d...@googlegroups.com
Hm, well I'm not a defender of the corporate policy or anything, but just looking at it from a general product quality standpoint, I can see why Canon takes so long with its updates.

If they release something that's buggy or frequently crashes or doesn't work perfectly every time, they'd get blasted everywhere for making faulty, inconsistent things. No one wants inconsistency. It'd make their cameras look much less appealing to the general photography hobbyist, which I'm guessing is the bulk of their market.

We have the luxury of getting it wrong sometimes and things not working perfectly. We're a relatively small, advanced portion of Canon's customer base and we're modding a stable platform. 

If they really wanted to be jerks, they could aggressively block attempts to do firmware cracks.

They might and if they do they're really lame, but they haven't yet.

aqu...@gmail.com

unread,
Jan 12, 2011, 2:25:11 PM1/12/11
to ml-d...@googlegroups.com
They introduced a firmware update counter with the release of the 7D,
that's effectively what halted ML development on the 7D.

K.

unread,
Jan 12, 2011, 2:45:52 PM1/12/11
to ml-d...@googlegroups.com
And they released a very cheap 24p camera, the 550D; now with Magic Lantern it's worth double the price.
I don't have anything against them. People complained about 24p and they gave it to them; people complained about manual audio and they did it for the 60D.
They are listening - adding features slowly, but listening.
Do you know any other company that added a new feature the way Canon did with 24p for the 5DII? Without 24p this cam might as well not exist for me, but 24p changed a lot.

pel

unread,
Jan 12, 2011, 3:02:53 PM1/12/11
to ml-d...@googlegroups.com

"I dont have anything against them ,people complained about 24p and they gave it to them.people complained about manual audio and they did it for 60D."

If they listen to their customers so well, then where is the manual audio on the 7D?

And why did they put the update counter in the 7D? I don't know any photographer who asked for that "feature"...

K.

unread,
Jan 12, 2011, 3:15:38 PM1/12/11
to ml-d...@googlegroups.com
It's one 7D against the 5D/550D/60D and maybe a hacked 500D :) They never promised manual audio on the 7D.

2011/1/12 pel <p...@pel.hu>

pel

unread,
Jan 12, 2011, 4:27:13 PM1/12/11
to ml-d...@googlegroups.com

Only the 5D2 has got a new function in Canon's history... vs. the whole Canon DSLR line. -> To me it means they are not listening to their customers.

ML came from Trammell, not from Canon, so it doesn't count... :-)

K.

unread,
Jan 12, 2011, 4:45:45 PM1/12/11
to ml-d...@googlegroups.com
The 5D also got manual settings in movie mode through a Canon firmware update.
I would be unhappy as a 7D owner too; I would sell it for a 550D.

2011/1/12 pel <p...@pel.hu>

Torrey Meeks

unread,
Jan 12, 2011, 4:50:33 PM1/12/11
to ml-d...@googlegroups.com
Fair enough : )

aqu...@gmail.com

unread,
Jan 12, 2011, 5:03:19 PM1/12/11
to ml-d...@googlegroups.com
The 550D may also have a firmware update counter; you/we're just not
noticing it because ML uses autoexec rather than the update firmware
function?

I own a 7D but I went and bought a 550D just so I can run ML :) .

Fernando Freire

unread,
Jan 13, 2011, 2:06:28 AM1/13/11
to ml-d...@googlegroups.com
Hi, on the issue of 974p vs 1080p .... we could just as well argue that the photos are not 18 MPx, and that they are also upscaled from 1056*704.

I mean that in this forum we are coming to conclusions based on very, very little information.

First, what some people are inspecting and using is the FIRMWARE, but the core CANON algorithms, such as h264 compression, are implemented on the DIGIC chip (not in firmware). Otherwise CANON would not have the necessary processing power, and battery drain would also be much higher.

What we have found is a memory buffer of at least 1704x974 that "flows" to paint the camera screen and, I guess, any external monitor.

It does not necessarily follow that the video process is not based on 1080p. We know nothing of the on-chip video processing.

So what if the photos taken from the buffer have equal or better quality than a video frame? That does not show anything, and indeed you cannot expect otherwise, since they are RAW photos in 4:2:2 while the video frames are h264 COMPRESSED, YV12 4:2:0 (resolution does not depend only on pixel count).

Regarding the alleged 7D firmware counter, it is one hypothesis, one in a million, and not very strong, based on what has happened with ONE camera that was being MANIPULATED, and manipulated with SW that was not sufficiently tested.

Good work, Alex; silent pictures is a very interesting feature for taking a burst of raw shots at very high fps...
I think BMP conversion isn't necessary (perhaps make it an option); it would lower the fps and battery time (more CPU intensive).




K.

unread,
Jan 13, 2011, 6:15:40 AM1/13/11
to ml-d...@googlegroups.com
Upscaled 974 looks identical to a frame from 1080p video. That says something.
We can hardly claim that 18mp was upscaled from 704; come on, let's be serious.

2011/1/13 Fernando Freire <nang...@gmail.com>




--

xaos

unread,
Jan 13, 2011, 7:00:22 AM1/13/11
to ml-d...@googlegroups.com
That means there is an 18mp vram buffer for pictures, right?

Alex

unread,
Jan 13, 2011, 7:17:56 AM1/13/11
to ml-d...@googlegroups.com
I wish it was.

During LiveView operation, the buffer sizes are those from wiki:
http://magiclantern.wikia.com/wiki/VRAM/550D

So the camera does not sample the full sensor. I think this also
allows cooler operation when not recording.

During picture taking (with mirror movement), the buffer is 18MP.

If the entire sensor were scanned at 18 MP in LiveView, you would
get an extreme jello effect. With a mechanical shutter, the sensor can
send the entire data without hurrying, and there's no jello (exposure
takes place only while the shutter is open, but reading data continues
after the shutter has completely closed).

I discovered that by dismantling an old Sony compact (for infrared
modding): when I removed the shutter, it took overexposed
pictures with strange patterns (something like Egyptian artwork).

These cameras do have an electronic shutter, which can be turned
on/off (and used only during LV or movie mode). I think the electronic
shutter is rolling.

If we can discover how to configure the scanning window (the area from
which the camera requests data from the sensor), then we'll be able to
take 18MP pictures in silent mode, but with jello. This would be
really nice imo, but very difficult to implement with my current
knowledge about the camera.

Antony Newman

unread,
Jan 13, 2011, 7:20:34 AM1/13/11
to ml-d...@googlegroups.com
Fernando.
 
If you are able to do some of the analysis to determine which theories are correct, that would be fantastic.
 
+) We have a multitude of video buffers (fact)
+) One of them may directly be used to create recorded movies. (theory)
 
If you can write a quick program to write to all the vram buffers - ie destructively - then we will be able to find out which vram buffer(s) is the one that h.264 recording compression is based on.
 
+) We have a list of 'DIGIC' structures for the vram segments (fact)
+) Once we know which structure it is (based on the above experiment) - we may be able to experiment with it (theory). 
 
 
+) We currently have almost-HD vram images (eg 1872 pixels) on the 5d2.   (fact)
+) These may be native resolution (theory) - we need someone to take a picture of a detailed subject (eg a resolution chart).
 
AJ
 
 

Lionel Davey

unread,
Jan 13, 2011, 12:02:59 PM1/13/11
to ml-d...@googlegroups.com
Would it be possible to lock the focus on a subject and have the firmware follow that subject maintaining focus?

Alex

unread,
Jan 13, 2011, 12:08:46 PM1/13/11
to ml-d...@googlegroups.com
With a clever algorithm, maybe. If the subject is moving sideways... I
don't think we have enough CPU power and robust algorithms to track
it.

With 50/1.8 I doubt it will be reliable (it's too slow in LV focusing).

Ideas?

On Thu, Jan 13, 2011 at 7:02 PM, Lionel Davey <audio...@gmail.com> wrote:
> Would it be possible to lock the focus on a subject and have the firmware
> follow that subject maintaining focus?
>

Lionel Davey

unread,
Jan 13, 2011, 12:38:17 PM1/13/11
to ml-d...@googlegroups.com
I don't have any ideas on how to do it.
And I certainly expect it to fail if there is a lack of contrast between subject and background, for example.

Initially, I was thinking more about assistance for small, slow changes of focus.
I mainly shoot stills, so I'm a bit slow at remembering to adjust focus.

Ok, I do have one thought from a video encoding background, which will probably fail due to lack of processing power:
can you lock onto an RGB value and track it by searching neighboring pixels?

Or actually, is it possible to modify face detection?


On Jan 14, 3:08 am, Alex <broscutama...@gmail.com> wrote:
> With a clever algorithm, maybe. If the subject is moving sideways... I
> don't think we have enough CPU power and robust algorithms to track
> it.
>
> With 50/1.8 I doubt it will be reliable (it's too slow in LV focusing).
>
> Ideas?
>

Alex

unread,
Jan 13, 2011, 1:17:44 PM1/13/11
to ml-d...@googlegroups.com
Face detection may be a good idea.

A simpler thing to implement is to set rack focus start/end points by
moving the focus window (the little rectangle) on the screen.

But all of these will only be possible after implementing the focus
assist as you saw in the movie.

JeremyOne

unread,
Jan 13, 2011, 1:38:30 PM1/13/11
to Magic Lantern firmware development
If one could take a still while video recording a resolution chart and
a digital timer, you could extract the same frame from the video and
still and compare them closely.

Jeremy

On Jan 13, 3:15 am, "K." <justynadric...@gmail.com> wrote:
> Upscaled 974 looks identical like frame from 1080p video.That says
> something.
> We could not think that 18mp was upscaled from 704 ,come on lets be serious.
>
> 2011/1/13 Fernando Freire <nangdo...@gmail.com>

Alex

unread,
Jan 13, 2011, 1:40:30 PM1/13/11
to ml-d...@googlegroups.com
If the resolution chart and camera are not moving, there's no need for timer ;)

JeremyOne

unread,
Jan 13, 2011, 1:47:05 PM1/13/11
to Magic Lantern firmware development
The thinking was that if you were able to extract the exact same
frame, you may be able to identify the same noise pattern and see
which image has more detail.

On a separate thought, if the 1080p videos are produced from a lower
resolution buffer, aliasing artifacts would occur at the buffer's
resolution (974?), instead of 1080. This would seem fairly easy to
test for.

Jeremy

On Jan 13, 10:40 am, Alex <broscutama...@gmail.com> wrote:
> If the resolution chart and camera are not moving, there's no need for timer ;)
>

Fernando Freire

unread,
Jan 13, 2011, 5:56:06 PM1/13/11
to ml-d...@googlegroups.com
Antony, I don't have a resolution chart or a home printer to print one from the internet, but I'll try to set up a subject this weekend. If I can find the time I will write the program.

My very first, very subjective impressions, with a high-resolution Canon L EF macro lens and a subject with fine details (subtle threads of ink at the borders of printed letters), shooting at 1:1, are that the resolution of the 1080p video is better than the resolution of the 422 raw pictures.

Fernando.
  

2011/1/13 Antony Newman <antony...@gmail.com>
 

--

Alex

unread,
Jan 13, 2011, 6:02:12 PM1/13/11
to ml-d...@googlegroups.com
422 pictures have two resolutions; the highest one is achieved when
you take them while recording a movie.

cristian paradiso

unread,
Jan 13, 2011, 6:42:06 PM1/13/11
to ml-d...@googlegroups.com
jj



Alex

unread,
Jan 14, 2011, 2:06:12 AM1/14/11
to ml-d...@googlegroups.com
AJ,

> 3) Someone can tell me if I am wrong: I think the most efficient way to do
> absolute differences its to use Exclusive Or.
> yyyyyyyy XOR YYYYYYYY = (I think this is = ) absolute difference.

2 xor 3 = 1 ok
3 xor 5 = 6 not very exact...

Did you use USAD8 and USADA8? It seems they could help here.
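
(For the plain-C fallback, a correct absolute difference is just the sketch below; compilers for ARM can usually turn the branch into conditional execution, so it stays cheap even without those instructions:)

static inline unsigned int abs_diff(unsigned int a, unsigned int b)
{
    return (a > b) ? a - b : b - a;     /* |a - b| for already-isolated luma bytes */
}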


Alex

unread,
Jan 14, 2011, 5:56:56 AM1/14/11
to ml-d...@googlegroups.com
Who wants to try the first implementation of focus peaking?

Known bugs:

* it's easily fooled by contrast (or lack of it)
* it may display red markers even if there's nothing in focus

Implementation details:

* Only horizontal edges are detected.
* Only adjacent pixels which fit in an int32 are considered.
* Threshold: 1% percentile.
* When recording, HD buffer is downsampled horizontally by 2.
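
(For reference, the percentile threshold can be derived from a histogram of the previous frame's edge strengths, roughly like the sketch below; this is only an illustration, not the actual code from zebra.c:)

/* pick the edge strength that only 'percent_above'% of samples exceed */
static int percentile_threshold(const int hist[256], int total, int percent_above)
{
    int thr, acc = 0;
    int target = total * percent_above / 100;
    for (thr = 255; thr > 0; thr--)
    {
        acc += hist[thr];
        if (acc >= target) return thr;   /* e.g. ~1% of differences are above this */
    }
    return 0;
}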

AJ, can you take a look at the code?
http://bitbucket.org/hudson/magic-lantern/src/tip/zebra.c in function
draw_focus_assist.

magiclantern-2011Jan14.550d.fw109.NoAudioMon.focus-peak.alex.zip

Carlos

unread,
Jan 14, 2011, 6:25:08 AM1/14/11
to ml-d...@googlegroups.com
I can't wait to get home and try this! (and of course, donate a little bit for your efforts :) )

Alex

unread,
Jan 14, 2011, 6:27:07 AM1/14/11
to ml-d...@googlegroups.com
Yay, thanks :)

Alex

unread,
Jan 14, 2011, 6:56:39 AM1/14/11
to ml-d...@googlegroups.com
The little number displayed is the threshold used for edge detection.
Usually, the bigger, the better. Maybe this can be color-coded
somehow.

Antony Newman

unread,
Jan 14, 2011, 8:18:40 AM1/14/11
to ml-d...@googlegroups.com
Alex:
 
XOR arithmetic: 
 
I hadn't noticed that in my (very quick) experiments.  In my first attempt at differences, I coded the parallel add / subtract feature of the ARM core .. and thought wow .. until I found out it's not on this core and had to remove it .. haha.
 
It is still possible to do 4 parallel byte subtract operations between two registers (so long as, for each of the respective bytes, the second number is less than the first).
 
Can I let some of the other 'C' coders out there review your zebra.c ... the main reason is that I don't use zebra.c in my code.
 
When I have some free time I may return to optimal 'edge' detection.  This is more likely to happen if I think that an active focus algorithm would benefit from it.  In which case I would have to consider the whole mechanism to ensure it ran as fast as possible.
 
AJ

Alex

unread,
Jan 14, 2011, 8:21:41 AM1/14/11
to ml-d...@googlegroups.com
> Can I let some of the other 'C' coders out there review your zebra.c ... the main reason is that I don't use Zebra.c in my code.
Sure. It's coded from scratch, it's there just because the older edge
detection was there, too. So you shouldn't have problems reviewing it.

BTW, the current speed is achieved with the mirror buffer enabled
(i.e. it does not overwrite Canon stuff on the screen). But because
only 1% of pixels are actually drawn, the speed penalty is minimal.

Alex

unread,
Jan 14, 2011, 8:53:08 AM1/14/11
to ml-d...@googlegroups.com
Here's the second try.

* Fixed the saving bug (you can now save the Focus Peak setting).
* Added some kind of color coding (warmer is better)
* Small optimizations (I'm not sure if they have any effect or not).
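
The color coding is along these lines; the cutoffs and palette indices below are made up for illustration, not the values used in zebra.c.

  /* Sketch only: map the per-frame detection threshold to a "warmth".
   * A higher threshold means the scene has strong edges, so a warmer color
   * suggests a more trustworthy peak. */
  enum { PEAK_DARK_BLUE = 1, PEAK_LIGHT_BLUE = 2, PEAK_YELLOW = 3, PEAK_RED = 4 };

  static int threshold_to_color(int thr)
  {
      if (thr >= 50) return PEAK_RED;
      if (thr >= 20) return PEAK_YELLOW;
      if (thr >= 10) return PEAK_LIGHT_BLUE;
      return PEAK_DARK_BLUE;
  }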

magiclantern-2011Jan14.550d.fw109.NoAudioMon.focus-peak-2.alex.zip

K.

unread,
Jan 14, 2011, 11:17:20 AM1/14/11
to ml-d...@googlegroups.com
When focus assist is working, cropmarks aren't drawn; when you turn focus assist off, they are visible again. Is this a problem, or will it stay like that?

2011/1/14 Alex <broscu...@gmail.com>

Fernando Freire

unread,
Jan 14, 2011, 11:26:57 AM1/14/11
to ml-d...@googlegroups.com
Hi Alex, when focus assist is working and I press half-shutter:

1) Focus assist stops: the edge detection freezes.

2) Focus assist refuses to function until the cam is restarted.

3) A lot of card activity (some debug activated).

4) Is it possible to draw thicker edges?


Thanks:

Fernando


Alex

unread,
Jan 14, 2011, 12:22:08 PM1/14/11
to ml-d...@googlegroups.com
Hi,

1) yes.
2) on my camera, it resumes when you de-press the button
3) there's no code which saves debug logs in zebra.c. Does it make any
new files? In which mode are you using it?
4) yes, but it's more complex to code.

I've disabled other stuff (cropmarks and zebras) for speed. Will be
enabled again these days.

sawomedia

unread,
Jan 14, 2011, 12:27:13 PM1/14/11
to Magic Lantern firmware development
When I press half-shutter (global redraw) the peaks freeze and then
disappear, which is what you expect when clearing the screen via the
half-shutter.

As soon as I let go of the half-shutter, the peaks continue to work
normally again.

I can confirm the bug with the cropmarks, though.

Alex

unread,
Jan 14, 2011, 12:32:02 PM1/14/11
to ml-d...@googlegroups.com
Yes, this is the exact behavior in my camera.

I'm thinking of drawing cropmarks and histogram at 1 fps, and rewriting
zebra with the same framework. I've disabled them because I thought
this would need every bit of speed, but it's much faster than I
expected from pure C code.

=> will be fixed.
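
The 1 fps idea is basically a rate limiter. Something like the sketch below; every function name here is a placeholder, not a real ML call.

  #include <stdint.h>

  /* Placeholders only -- not real ML functions. */
  extern uint32_t get_ms_clock(void);
  extern void draw_focus_peaking(void);
  extern void draw_cropmarks(void);
  extern void draw_histogram(void);

  static uint32_t last_slow_redraw = 0;

  /* Peaking runs on every pass; the heavier overlays only once a second. */
  void overlay_task_step(void)
  {
      draw_focus_peaking();

      uint32_t now = get_ms_clock();
      if (now - last_slow_redraw >= 1000)
      {
          last_slow_redraw = now;
          draw_cropmarks();
          draw_histogram();
      }
  }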

Fernando Freire

unread,
Jan 14, 2011, 1:16:45 PM1/14/11
to ml-d...@googlegroups.com
1) Focus assist stops: the edge detection freezes.
     ** My fault. I had Clear Screen Half Shutter set to OFF (thanks sawomedia).

 2) Focus assist refuses to function until the cam is restarted.
** Sequence:

Start the camera.
Liveview
ClrScr OFF and PEAK OFF
set PEAK ON: all peaking working at video and photo modes (very promising for hand held macro photography!!)
set ClrScr Half Shutter: no more Peaks until restart


3) A lot of card activity (some debug activated).
My fault: I had debugging active from my own programming. Disabled it and all OK.

 4) And what about changing their color on the fly, say selecting from the primary colors, so you can pick whichever is most visible in the current environment?


K.

unread,
Jan 14, 2011, 1:20:48 PM1/14/11
to ml-d...@googlegroups.com
I'm not sure about the colours... In the same scene, when I move the cam, the sharpest possible result on some subjects shows light blue and it can't get any sharper (it's in perfect focus). Then I move the cam to the left, where I have a different subject, and when it's light blue it's not in perfect focus; I must move the lens a bit until the focus peak colour is yellow, and then I am in perfect focus.
IMO this is misleading, because light blue means perfect focus in one place but not in others. I would prefer a single colour for this, because it's confusing.
In some areas it's not possible to get the yellow colour, and if I leave it at light blue then again I'm not sure whether it's in focus, because yellow is perfect focus when I move the camera a bit.

2011/1/14 Alex <broscu...@gmail.com>

K.

unread,
Jan 14, 2011, 1:41:49 PM1/14/11
to ml-d...@googlegroups.com
How is it on some monitors? Is it only red? I saw some vids and it's only red everywhere.
I think the colour least often seen in real environments is magenta. A blue colour might collide with bluescreen work.

2011/1/14 Fernando Freire <nang...@gmail.com>

Carlos

unread,
Jan 14, 2011, 2:32:04 PM1/14/11
to ml-d...@googlegroups.com
Hey Alex, tested the focus assistant. Here are my thoughts:
1- Will a threshold parameter be available? Right now the threshold is too high...
2- What is the color code? I've seen dark blue, light blue and yellow
3- Is this an appetizer until we get 1:1 magnification window? (just kidding ^_^)

konarix

unread,
Jan 14, 2011, 3:35:06 PM1/14/11
to Magic Lantern firmware development
Hi Alex.

Peaking seems to be a really big thing.

But what exactly do the line colors mean (l.blue, blue, yellow, red)?

Best Regards.

On 14 Jan, 20:32, Carlos <carlosh...@gmail.com> wrote:
> Hey Alex, tested the focus assistant. Here are my thoughts:
> 1- Will be available a threshold parameter? Now the threshold is too high...
> 2- What is the color code? I've seen dark blue, light blue and yellow
> 3- Is this an appetizer until we get 1:1 magnification window? (just kidding
> ^_^)
>
>
>
>
>
>
>
> On Fri, Jan 14, 2011 at 7:41 PM, K. <justynadric...@gmail.com> wrote:
> > How is it in some monitors ? Is it only red ? I saw some vids and its only
> > red everywhere
> >http://www.youtube.com/watch?v=s1jOK66-dNk
> >http://www.youtube.com/watch?v=P6O7U6H0H38
> > I think the least often colour visible in real enviroments is magenta.Blue
> > colour might collide with bluescreen.
>
> > 2011/1/14 Fernando Freire <nangdo...@gmail.com>
>
> > 1) Focus assists stops: freeze the edge detection.
> >>      ** My fault. I had set to OFF the Clear Screen Half Shutter (thanks
> >> sawomedia).
>
> >>  2) Focus assists refuse to function until restart the cam.
> >> ** Sequence.
>
> >> Start the camera.
> >> Liveview
> >> ClrScr OFF and PEAK OFF
> >> set PEAK ON: all peaking working at video and photo modes (very promising
> >> for hand held macro photography!!)
> >> set ClrScr Half Shutter: no more Peaks until restart
>
> >> 3) A lot or card activity (some debug activated).
> >> My fault: I have active the debugging from my own programming. Disabled
> >> and all OK.
>
> >>  4) ¿And to change their color online, say select from primary colors in
> >> order to pick the right color according to make them more visible according
> >> to environment?
>

James Donnelly

unread,
Jan 14, 2011, 5:17:14 PM1/14/11
to ml-d...@googlegroups.com
Alex,

YES!  This is what I've been waiting for.  Whatever enhancements you do, this feature, as is, transforms the camera for me.

It works better than I expected.  I've just taken a video indoors and kept my small very mobile subject in pretty good focus without the use of loupe, which would have been impossible before.

I am sending you a small donation to show my appreciation, which in no way compensates you for all your hours, but I hope others will do the same!

Cheers,

James

Alex

unread,
Jan 14, 2011, 7:56:51 PM1/14/11
to ml-d...@googlegroups.com
Just got back from a small concert. ISO 25600 was a must have, even @
f1.8 with 1/45, but focus assist was really useless in low light due
to noise. Anyway, Canon's phase detection focus worked really well (it
was slow, but reliable).

Colors show the small number from the upper left, which is the
threshold used for the edge image. In theory, colder colors might give
a hint that it may be a false detection. In practice, experiments will
tell if they mean anything or not.

I'll answer the other questions tomorrow, thanks for all the feedback!

Alex

unread,
Jan 15, 2011, 2:56:28 AM1/15/11
to ml-d...@googlegroups.com
> 1- Will be available a threshold parameter? Now the threshold is too high...
Yes.

> 2- What is the color code? I've seen dark blue, light blue and yellow

The threshold for edge detection (see previous mail).

> 3- Is this an appetizer until we get 1:1 magnification window? (just kidding ^_^)

I don't think we'll get 1:1 magnification (see Antony's work on this).

cp

unread,
Jan 15, 2011, 12:17:02 PM1/15/11
to Magic Lantern firmware development
Alex,

Issues with this build ...2011Jan14.fw109.NoaudioMon.focus-peak...

1) If you turn on the camera with peaking enabled, cropmarks,
histogram, and zebras don't work.
2) When turning on the camera with peaking off and then enabling it,
everything (crops, histogram, and zebras) seems to work, except that the
peaking erases the black borders of the cropmarks.

Great job, you da man!

-cp

On Jan 14, 4:56 am, Alex <broscutama...@gmail.com> wrote:
> Who wants to try the first implementation of focus peaking?
>
> Known bugs:
>
>   * it's easily fooled by contrast (or lack of it)
>   * it may display red markers even if there's nothing in focus
>
> Implementation details:
>
>   * Only horizontal edges are detected.
>   * Only adjacent pixels which fit in an int32 are considered.
>   * Threshold: 1% percentile.
>   * When recording, HD buffer is downsampled horizontally by 2.
>
> AJ, can you take a look at the code? http://bitbucket.org/hudson/magic-lantern/src/tip/zebra.c in function
> draw_focus_assist.
>
> On Fri, Jan 14, 2011 at 9:06 AM, Alex <broscutama...@gmail.com> wrote:
> > AJ,
>
> >> 3)  Someone can tell me if I am wrong:  I think the most efficient way to do
> >> absolute differences its to use Exclusive Or.
> >> yyyyyyyy XOR YYYYYYYY  = (I think this is = ) absolute difference.
>
> > 2 xor 3 = 1 ok
> > 3 xor 5 = 6 not very exact...
>
> > Did you use USAD8 and USADA8? it seems they can help here.
>
>  magiclantern-2011Jan14.550d.fw109.NoAudioMon.focus-peak.alex.zip
1060K