Focus assist experiments

723 views

Alex

Jan 9, 2011, 7:32:24 AM
to Magic Lantern firmware development
I've begun to experiment with focus assist algorithms for Magic Lantern.

Theory: http://magiclantern.wikia.com/wiki/Focus_Assist
First experiment: http://vimeo.com/18584960

Suggestions welcome.

James Donnelly

Jan 9, 2011, 9:15:14 AM
to ml-d...@googlegroups.com


What you posted on Vimeo seems to suggest that your experiments are very promising. Is this using the same code that 5D users have deemed not very usable, or is it a new algorithm you have implemented?

Looks to me like it would already make very accurate critical focus possible under most conditions; can't wait to see it in a release to play with.

James Donnelly

Jan 9, 2011, 9:19:30 AM
to ml-d...@googlegroups.com
I should have read more thoroughly; I see now that this was not done in the camera. Damn, got all excited again.

Alex

Jan 9, 2011, 9:20:25 AM
to ml-d...@googlegroups.com
I've never seen how the 5D code works, so... I don't know. The Zebra code in ML is undecipherable to me... it has binary arithmetic, which I'm not good at.

What I'm pretty sure the 5D code doesn't have is the percentile threshold. I should also render a video with a constant threshold to see the difference.
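The percentile threshold idea can be sketched in plain C: build a histogram of per-pixel edge strengths for one frame, then walk down from the top until the chosen fraction of pixels lies above the cutoff. This is only an illustration of the idea from the wiki page; the function name and interface are hypothetical, not ML code.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: pick a threshold so that roughly `fraction`
 * (e.g. 0.01 = top 1%) of edge-strength values lie above it.
 * `strength` holds one 8-bit edge value per tested pixel. */
static uint8_t percentile_threshold(const uint8_t *strength, int n, double fraction)
{
    uint32_t hist[256];
    memset(hist, 0, sizeof(hist));
    for (int i = 0; i < n; i++)
        hist[strength[i]]++;

    uint32_t target = (uint32_t)(n * fraction);  /* pixels allowed above the threshold */
    uint32_t above = 0;
    for (int t = 255; t > 0; t--) {
        above += hist[t];
        if (above >= target)
            return (uint8_t)t;   /* values above ~t make up `fraction` of the frame */
    }
    return 0;
}
```

Recomputing this per frame is what makes the overlay adapt to scene contrast, unlike a constant threshold.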

--
http://magiclantern.wikia.com/
 
To post to this group, send email to ml-d...@googlegroups.com
To unsubscribe from this group, send email to ml-devel+u...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/ml-devel?hl=en

Rui Madruga

Jan 9, 2011, 10:12:11 AM
to Magic Lantern firmware development
What I see in your video is a peaking filter. This is fantastic.
For working with it, it's nice if the image is in black and white.
Thank you for working on this.

Sincerely


Rui Madruga





Antony Newman

Jan 9, 2011, 10:42:03 AM
to ml-d...@googlegroups.com
I believe the existing ML edge detection did work - maybe not at an especially high frame rate - and was based on Sobel edge detection.
I believe it has a two-pixel resolution, and displays 32 'colours' depending on detail.

I've tried a single-pixel 24fps version of this (coded in assembler). I found that, as the resolution of the target system was 1080p, having edge detection at 480p was not that helpful in getting focus perfect.

Following Marshal's own version of edge detection (single pixel), with far fewer shades of red for higher contrast: I tried this too. Same problem.

My conclusion is that you need to work on higher resolution data than ML currently works with (ie the high-def video ram buffer).

I then implemented a 'contrast' accumulator that provides a sum of the 'Sobel' for the entire screen (and on each line).
The problem with this was how to 'show' this information in a meaningful way.

I've now stripped this from my asm routine library.  The binary arithmetic is a bit more involved than the C (which itself may already have looked involved), because I'd heavily optimised it.  If someone wants to resurrect it, I created an introduction to how the code worked in an Excel sheet.

The main reason I dropped the code was that, for passive focusing (as opposed to actively getting the camera to change its focus), I think the 'Pop-up Overlay' may be a better solution.

AJ

 


Chris71

Jan 9, 2011, 11:44:20 AM
to Magic Lantern firmware development
I want to suggest a different approach to keep focus when zooming:

I did video filming in the "good old" analogue times. The camera (a
Sony Video 8 camera) didn't autofocus as far as I can remember, but
keeping focus when zooming was quite easy: First you had to zoom fully
in, set the focus correctly on the desired object and then you could
start filming. The object stayed perfectly in focus throughout the
entire zoom range (at least as long as the distance to the object
stayed the same.)

When I film using my 550D I usually take the same approach, but
unfortunately the object doesn't stay perfectly in focus when zooming
out, so I have to correct it slightly by hand. I'm using an EF-S
18-200mm IS superzoom for filming and don't know whether other zooms
(without such a big zoom range) hold focus better when zooming.

Therefore I think it might be a nice feature if Magic Lantern would do
the necessary slight focus corrections when zooming out and in.

I guess that for this feature we would need to measure how focus
behaves on the different zoom lenses when zooming, so there would be a
table for each supported lens that gives the focus correction for each
zoom setting.

Would this feature make sense for film makers?

Would this be possible to implement?

Chris

K.

Jan 9, 2011, 12:06:48 PM
to ml-d...@googlegroups.com
No, it's not the case here. Just buy a parafocal zoom. Your 18-200 probably isn't parafocal.

2011/1/9 Chris71 <nied...@gmx.net>

Chris71

Jan 9, 2011, 12:20:07 PM
to Magic Lantern firmware development
OK, I didn't know that this behaviour is called "parafocal". My
18-200 must be a so-called "varifocal" lens then.

Anyhow, wouldn't it still be a nice feature if Magic Lantern could
make varifocal lenses behave like parafocal ones?

I guess there are other people besides me using varifocal
lenses... :-)

Message has been deleted

James Donnelly

Jan 9, 2011, 12:26:47 PM
to ml-d...@googlegroups.com
I believe the term is parfocal rather than parafocal.

I think this is an excellent idea personally, and I don't see why, in theory, it shouldn't be possible with ML.  You would need a lens that reports its current focal length correctly for it to work.

However, I never zoom during a shot, so it doesn't interest me :)




K.

Jan 9, 2011, 1:00:03 PM
to ml-d...@googlegroups.com
Yes, parfocal. I think Magic Lantern would need to recognize each lens and correct focus differently, because each model has a different focus shift when zooming in/out, and manufacturers differ too.
It would also depend on how far away your object is; focus would need to be shifted differently for far and for close objects.

2011/1/9 James Donnelly <jam...@easynet.co.uk>

konarix

Jan 9, 2011, 2:26:00 PM
to Magic Lantern firmware development
Hi Alex.

When can we expect ML release with focus assist??

Best Regards.

chungdha

Jan 9, 2011, 3:01:24 PM
to Magic Lantern firmware development
Nowadays zooming does not affect the newer lenses; mostly the old
lenses have this problem, with the extreme case being a varifocal
lens. But with all new lenses you can zoom in to focus, zoom out, and
the focus will still be at the same point.

James Donnelly

Jan 9, 2011, 5:12:46 PM
to ml-d...@googlegroups.com
On 9 January 2011 10:00, K. <justyna...@gmail.com> wrote:
Yes parfocal,I think MLantern would need to recognize each lens and correct focus differently because each model has different focus shifts when zooming in/out.Different manufacturers.
Also it would depend on how far your object is ,focus would need to be shifted differently on far and close objects.


Good point.  How about a calibration tool that plots a series of correction values for given focus distances and interpolates a graph/matrix for use while zooming in video mode?  To generate the points to interpolate from, we could, with the camera in still mode, track the (full-time?) autofocus corrections made by the camera during a slow zoom, and repeat for a range of focus distances.
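The per-lens correction table discussed above could be as simple as piecewise-linear interpolation over a handful of calibration points. Everything here is hypothetical: the struct, the units (lens focus steps), and the sample values are illustration only, not measurements from any real lens.

```c
/* Hypothetical per-lens calibration: focus-motor correction (in lens
 * steps) measured at a few focal lengths, linearly interpolated for
 * the zoom positions in between.  Out-of-range inputs clamp to the
 * nearest calibrated endpoint. */
typedef struct { int focal_mm; int correction_steps; } cal_point;

static int focus_correction(const cal_point *cal, int n, int focal_mm)
{
    if (focal_mm <= cal[0].focal_mm)   return cal[0].correction_steps;
    if (focal_mm >= cal[n-1].focal_mm) return cal[n-1].correction_steps;
    for (int i = 1; i < n; i++) {
        if (focal_mm <= cal[i].focal_mm) {
            int f0 = cal[i-1].focal_mm,         f1 = cal[i].focal_mm;
            int c0 = cal[i-1].correction_steps, c1 = cal[i].correction_steps;
            /* linear interpolation between the two surrounding points */
            return c0 + (c1 - c0) * (focal_mm - f0) / (f1 - f0);
        }
    }
    return 0; /* unreachable for n >= 2 */
}
```

As James notes, this only works with a lens that reports its focal length correctly, and as K. notes, a real table would also need a subject-distance dimension.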

Alex

Jan 10, 2011, 3:46:55 AM
to ml-d...@googlegroups.com
Thanks for details, AJ.

> you need to work on higher resolution data than ML currently works with (ie the high-def video ram buffer).

That's true. The movie on Vimeo uses a 720p buffer (e.g. seg 1 while
not recording). I think it can be downscaled without much loss of
information.

I'm thinking of processing only adjacent pixel pairs which fit in an int32.
Those pixels are coded as:
YYYYYYYYccccccccYYYYYYYYcccccccc
i.e.
    uint32_t pixel = v_row[x/2];
    uint32_t p0 = ((pixel >> 16) & 0xFF00) >> 8; // odd bytes are luma
    uint32_t p1 = ((pixel >>  0) & 0xFF00) >> 8;

So, from a single memory read, I want to compute the thresholded edge
strength, which is abs(p1 - p0) > thr ? 1 : 0. The threshold is constant
within one frame, but will change from one frame to the next.

Is there a way to compute it very fast in assembler?

Drawing speed is not a problem, since only 1...5% of the tested pixels
will actually be displayed.

Konarix:

I think it requires more optimization than the Zebras, but with the
help of AJ and Piers (the ASM experts), it will be possible. My
estimate right now is exactly like Debian releases: when it's done.
It also depends on how much time we can dedicate to ML, since we
have day jobs, too.


Antony Newman

Jan 10, 2011, 4:42:12 AM
to ml-d...@googlegroups.com
Alex,

1)  Points to note:
----------------------------------
p0 = Luma of Pixel on right
p1 = Luma of Pixel on left
-----------------------------------

2)   If you read 2 x vram pixels in a row, then you have 4 pixels, which means you can do 3 absolute differences.
If you only take the difference within each pair of pixels then you are wasting contrast information that you could use.

3)  Someone can tell me if I am wrong:  I think the most efficient way to do absolute differences is to use Exclusive Or.
yyyyyyyy XOR YYYYYYYY  = (I think this is) the absolute difference.

Therefore:

unsigned int Pixel = Vram_Pixel;
unsigned int Contrast =  Pixel ^ (Pixel >> 16) ;        // would give you    ???????? ???????? CCCCCCCC ????????
                                                        // i.e. the contrast at the CCCCCCCC position.


4) In the ASM that I wrote... I got a bit 'tricky'... and thought: what if you had:
Vram1 = AAAAAAAA ???????? BBBBBBBB ????????
Vram2 = CCCCCCCC ???????? DDDDDDDD ????????

Vram1 ^ Vram2 =  [AAAAAAAA^CCCCCCCC]  [????????] [BBBBBBBB^DDDDDDDD] [????????]

And so with one XOR... you could get 2 x absolute differences in one instruction.
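One note on point 3, since corrections were invited: XOR is not the absolute difference in general. It equals |a - b| only when the subtraction needs no borrows, and otherwise it overestimates (a ^ b is always >= |a - b|); e.g. 8 XOR 7 = 15 while |8 - 7| = 1. So the XOR trick over-reports contrast on some pixel pairs. A quick check:

```c
#include <stdint.h>

/* Compare the XOR trick against a true absolute difference for two
 * 8-bit luma values.  XOR matches |a - b| only when no borrow would
 * cross a bit boundary; otherwise it overestimates. */
static unsigned xor_contrast(uint8_t a, uint8_t b) { return a ^ b; }
static unsigned abs_contrast(uint8_t a, uint8_t b) { return a > b ? a - b : b - a; }
```

Whether the overestimate matters for a focus peaking display is a separate question: it never misses an edge that the true difference would flag at the same threshold, it only adds extra hits.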

5) Sobel edge detection (as used by Trammell) uses differences in the horizontal and vertical.

    The issue here is that if you want to process 1 line, you end up reading the next one (from vram) to calculate the first one.
    So you end up reading 2 vram lines for each one you display.

    A better solution may be to process N lines at a time - then you only need N + 1 vram lines for Sobel, i.e. you only need to read an extra line every
    N lines.

    In my asm version, I read two lines in parallel, so only 50% of the vram was re-read (rather than 100%).


6) Speed:  I think the integrated Zebra, (Sobel) Edge detection, Focus accumulator and Overlay YUV transform assembler ran at around 24 fps for the entire screen.  It didn't use the ML code base (whose graphics are modular).
 

7) Before you code something in ASM - I suggest that the algorithm you want to use be fine-tuned to do exactly what you need.
It took me about 30 hours to decide on an optimised version of the ASM for the 'all-in-one' (on pieces of paper), then about 4.5 days to write it in asm (mostly because I could not find examples of how to get GCC to do nothing other than compile my code, and secondly the discovery that the DryOS interrupts do something very naughty and expect one of your registers (r13) not to be used as temporary workspace).

In a nutshell - I'd need to see your design first before I could help you optimise the code (in C or ASM).

8) If you want to test one line of ASM at the time - you can do it like this:   (taken from my Logarithm code)

   /*******************************************************
   *   OK .. use CLZ from ASM to count the leading Zeros  *
   *******************************************************/
  
   unsigned int yclz = 32 - aj_CLZ( val );      // if val =0,        yclz= 0
                                                // if val =2^32-1,   yclz= 32

==============

/*************************************************************************************************
*  aj_CLZ()  -  Do an asm( CLZ  )
*************************************************************************************************/
unsigned int aj_CLZ( unsigned int input_num)
{
   asm volatile(

"     CLZ r0,r0\n"
"     MOV r15,r14\n"    
//===============================================
//===============================================
//=======   ^^ RETURN POINT OF ASM ^^  ==========
//===============================================
//===============================================   
   
      :             // Output operands   eg.  [output1]"+r"(g_r13_temp_store)
      :             // Input operands    eg.  [input]"m"(parm1)
      : "r0"        // eg "memory","cc"    =   Clobber list      
   );  // end of asm volatile()

   return( 666 );   // never reached: the MOV r15,r14 in the asm above returns directly, with the result in r0

} /* end of aj_clz() */



9)  I have attached the annotated asm routine that did:

+) YUV transformation lookup
+) Colour dithering
+) Under and Over exposure Zebra
+) Screen clearing
+) Sobel Edge detection
+) Focus accumulation
+) Overlay update (zebra or transformed YUV)

Note - I don't use this anymore (I since found a YUV lookup that takes 2 fewer cycles, with a different methodology)

AJ
AJ3 Optimised ASM.xls

Alex

Jan 10, 2011, 4:49:58 AM
to ml-d...@googlegroups.com
Wow, that's a lot of food for thought. Thanks.

Antony Newman

Jan 10, 2011, 6:41:08 AM
to ml-d...@googlegroups.com
Forgot to mention that after I wrote the asm (in Excel), then cut and pasted it into the code, I found a bug (surprise, surprise).

BGT and BLT (Branch if Greater Than / Branch if Less Than) are signed.
I had to switch these to BLO and BHI (the unsigned versions of the same instructions).

Simple... but it took me 6 hours to track down.
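The same trap exists outside asm: compare the same 32-bit patterns signed and unsigned and the answers flip once values pass 0x7FFFFFFF, which is exactly where camera memory addresses live. A minimal illustration (not ML code):

```c
#include <stdint.h>

/* Signed vs unsigned comparison of the same bit patterns.  As a signed
 * int, 0x80000000 is negative, so a signed "greater than" (what BGT
 * tests) gives the opposite answer from the unsigned one (BHI). */
static int signed_gt(uint32_t a, uint32_t b)   { return (int32_t)a > (int32_t)b; }
static int unsigned_gt(uint32_t a, uint32_t b) { return a > b; }
```

For values below 0x80000000 the two agree, which is why this kind of bug survives casual testing.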

-

Side note:
Strangely enough, in my first attempt to integrate my code with ML, the first variable that I chose was defined as 'unsigned' (in ML).
I don't know what the default is; it caused GCC to go a bit mad, so I changed the type of the variable (in ML) to 'unsigned int'.

I'm not very comfortable with changing code that I don't know the workings of.    I am guessing this is similar to the Config parameters.

Although not 'type' correct,  I have coded everything (that I can) that interacts with the assembler to be of type 'unsigned int'.
(4 bytes unsigned = 1 word = 1 register.)


Side note 2:
ML has a few hard-coded structures. When read into memory, 32 bytes (8 registers' worth) are loaded into cache from the memory address of that 32-byte-aligned item (assuming you are reading from cacheable memory).

When designing my false-colour and histogram array, I store everything that I need (false colour and 256-level counter) in 1 word.  I ensure this is allocated in cacheable memory, and aligned to a 32-byte boundary.   This means that only (256 / 8 =) 32 reads are required before everything is cache-resident.
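A sketch of the packing described in side note 2, with a hypothetical field layout (the real split between colour and counter bits is AJ's, not shown here): one 32-bit word per luma level holds both the false colour and its histogram counter, and the GCC `aligned(32)` attribute matches the 8-registers-per-cache-line loads mentioned above.

```c
#include <stdint.h>

/* Hypothetical layout: low 8 bits = false colour, high 24 bits = hit
 * count.  One word per 8-bit luma level; 32-byte alignment means the
 * whole 256-entry table is pulled in with 256/8 = 32 cache-line fills. */
static uint32_t lut[256] __attribute__((aligned(32)));

static void lut_set_colour(uint8_t level, uint8_t colour)
{
    lut[level] = (lut[level] & 0xFFFFFF00u) | colour;
}

static void lut_count(uint8_t level)
{
    lut[level] += 1u << 8;               /* bump the packed counter */
}

static uint8_t  lut_colour(uint8_t level) { return lut[level] & 0xFF; }
static uint32_t lut_hits(uint8_t level)   { return lut[level] >> 8; }
```

The point of the single-word packing is that one read per pixel serves both the false-colour lookup and the histogram update.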


AJ

 

 

 

Alex

Jan 10, 2011, 6:46:21 AM
to ml-d...@googlegroups.com
Signed BGT/BLT: maybe I should put this on the mini-asm wiki page.

> I'm not very comfortable with changing code (that I don't know how it works). I am guessing this is similar to the the Config parameters.

I've tried changing CONFIG_INT to signed int and that made ML run
like a 286 with the turbo switch off... no idea what happened.

Antony Newman

Jan 10, 2011, 7:54:16 AM
to ml-d...@googlegroups.com
Alex,

Maybe a 'what to check when your asm goes t1tsup' page could be of use.

I blogged the BGT issue on http://magiclantern.wikia.com/wiki/2.0.4_AJ


Config / Menus:     When I get around to this, I'll be investigating in ML how:
+) To accept numerical input (which I'll need for User crop marks)
+) The ability to deactivate (grey out) options (eg when not possible)
+) The ability to store unsigned ints - which means that I'll have to read the code!

AJ

Alex

Jan 11, 2011, 2:26:07 AM
to ml-d...@googlegroups.com
AJ,

Did you notice that memory address X has the same contents as memory
address X | 0x40000000? The first address is cacheable, the second is
uncacheable.

e.g. in 550D, the image buffer 0x40D07800 contains the same stuff as
the one from 0x00D07800.
and in 5D2:
seg1 = seg12,
seg2 = seg13,
seg3 = seg14,
etc.
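In code, the alias is a single bit. A sketch of helpers for switching between the two views of the same address (bit 0x40000000, per the observation above; the helper names are made up):

```c
#include <stdint.h>

/* On these cameras the same RAM appears twice in the address map:
 * setting bit 30 gives the uncacheable view of a cacheable address,
 * clearing it gives the cacheable view back. */
#define UNCACHEABLE_BIT 0x40000000u

static uint32_t as_uncacheable(uint32_t addr) { return addr |  UNCACHEABLE_BIT; }
static uint32_t as_cacheable(uint32_t addr)   { return addr & ~UNCACHEABLE_BIT; }
```

This is why AJ's cropmark writes only appeared when the cache was flushed: writing through the cacheable view does not reach the display hardware until eviction, while the uncacheable view does.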

Antony Newman

Jan 11, 2011, 3:41:23 AM
to ml-d...@googlegroups.com
Hi Alex,

Very interesting - I had not noticed!

When I saw the address of segment 1 as 0x04000080, I thought it was already in the uncacheable region (i.e. I misread it as 0x40000080!).

This means that:
1) I have been accessing cached vram for the focus assist (i.e. the program will speed up when I switch to uncached video memory).

2) We only have half the number of vram segments to work out (HD has 4 banks now).

3) Looks like the 'find segments' subroutine was pretty consistent (I've reordered the list below).

AJ


    {0x01B0FF00-0x5A0*0x17, 0x01B9CE9C, 534,  1440}, // seg  0
    {0x41B0FF50, 0x41B97FFC, 578, 1440},                       // seg 11 vram[0] uncachable 

    {0x04000080, 0x0415407C,1360, 2048},  // seg  1            HD_VRAM  [0] Cacheable
    {0x44000080, 0x4414FFFC,1344, 2048},  // seg 12           HD_VRAM [0]  UnCachable

    {0x10000080, 0x1015407C,1360, 2048},  // seg  2            HD_VRAM  [1] Cacheable
    {0x50000080, 0x5015407C,1328, 2048},  // seg 13           HD_VRAM  [1] UnCacheable
                                                           
    {0x1C00FF50, 0x1C097FFC, 578, 1440},  // seg  3       
    {0x5C00FF50, 0x5C09CE9C, 596, 1440},  // seg 14          vram[1] uncachable 

    {0x1C414000, 0x1C4F7FFC, 912, 1440},  // seg  4     
    {0x5C414000, 0x5C4F7FFC, 912, 1440},  // seg 15      
   
    {0x1C4F5278, 0x1C53FFFC, 224, 1260},  // seg  5    
    {0x5C4F5000, 0x5C5576FC, 286, 1260},  // seg 16
                    
    {0x1F60FF50, 0x1F69CE9C, 540, 1440},  // seg  6      
    {0x5F60FF50, 0x5F697FFC, 576, 1440},  // seg 18  
      
    {0x21B0FF50, 0x21B9CE9C, 596, 1440},  // seg  7

    {0x31B0FF50, 0x31B9CE9C, 596, 1440},  // seg  9     
          
    {0x5C578678, 0x5C5D7FFC, 320, 1260},  // seg 17    

    {0x24000080, 0x2414FFFC,1335, 2048},  // seg  8        HD_VRAM  [2] Cacheable
    {0x64000080, 0x6415407C,1328, 2048},  // seg 19        HD_VRAM  [2] UnCacheable
      

    {0x34000080, 0x3415407C,1328, 2048},  // seg 10        HD_VRAM  [3] Cacheable
    {0x74000080, 0x7415407C,1360, 2048}   // seg 20        HD_VRAM  [3] UnCacheable



Alex

Jan 11, 2011, 3:44:54 AM
to ml-d...@googlegroups.com

Alex

Jan 11, 2011, 4:12:44 AM
to ml-d...@googlegroups.com
Found one segment, 1056x704. This may grow when recording.

http://magiclantern.wikia.com/wiki/VRAM/550D

Antony Newman

Jan 11, 2011, 7:47:35 AM
to ml-d...@googlegroups.com
CACHING
 
The CHDK_Coding guidelines are a good start for people .... I ended up reading the rather bulky GCC documents directly when I got stuck.  I still found there is a gap in what they talk about:
 
1) Do you need to list R0 as a clobbered register if you change it in your code?
    I've got a bit defensive about this and have started clobbering the r0-r3 parameters passed to my routines ... just in case GCC thinks they have not changed.

2) What took me an entire day to understand is that there is a bug in GCC which relates to symbols (labels in asm you can branch to).  I upgraded from 4.3.2 -> 4.4.2 and found the bug is still there.  I've split my subroutine library into two, to ensure that in the second library there are no direct calls to the asm routines (Indy may have found another way too).
 
CACHING & DMA
 
1) I have not got to the bottom of exactly what is causing the DMA issue with reading / writing.
    I currently limit my routines to reading/writing at most 4 registers at a time (reduces latency for interrupts).
    But then I thought maybe the issue is that I am reading across two 32-byte boundaries, which means that 64 bytes are being
    read into cache in one go, and it is this that causes the problem (i.e. reading 2 registers' worth).
    I will be mindful of this in future routines.
 
2) In my C code I have injected NOPs (to mimic Trammell).  I have never got any ERR70s at all in my code, but I did get ARM lockups (with DMA still working in the background, updating the screen).  In the end I used a dummy bmp_printf() in the main loop that 'solved' the lockup problem.
 
3)  I spent about 2 hours yesterday trying to track down why my Vector Cropmarks were vanishing and generally misbehaving (I was trying to make sure they dynamically moved to the correct pixel locations when on LCD / HDMI standard / HDMI HD / and recording ... the last one needs more work for HDMI HD), and was getting very, very confused.

NOW I think Alex has found the answer (I'm not at my PC).  The cropmark update was drawing just the outline box .. but .. this was going to the cache ... and NOT the screen; the screen was only being updated when the cache was flushed at a later point (which is what I was trying to get to the bottom of!).  I'm back at midnight-ish and will test.
 
(sorry if this thread has gone off topic slightly).
 
SEGMENTS
 
The '1260' segment had 3 images in one.  They looked like low-noise, high-contrast segments ... probably used for face detection (or maybe contrast focus in DryOS).
 
 
AJ
 
 
 
 
 

Antony Newman

Jan 11, 2011, 7:54:20 AM
to ml-d...@googlegroups.com
Also .. CHDK.  They give examples of when coders have decided to go the 'naked' route
for asm subroutines - ie no prologue / epilogue.  I think it is more efficient to maximise
the number of arguments that are passed directly into the routine (r0-r3), packing them
if possible (eg 3 items packed into 32 bits), rather than being forced to go to the stack or
other memory.  Ie going naked may seem clean - but you end up having to wrap the
code up with forced calls to access memory.
 
AJ

Alex

Jan 11, 2011, 10:13:12 AM
to ml-d...@googlegroups.com
The HD VRAM buffer is 1056x704 (3:2) when idle, and during recording
it increases to 1720x974.

Does this mean Canon upsamples this data when recording (with some
quality loss), or is it just a slightly smaller buffer used for other
purposes (like AE, AF...)?

AJ, is it worth analyzing the other buffers? Or are they simply
copies? Or do they have different lags? (e.g. one buffer keeps the
last frame, another keeps the previous frame.) Also, is it possible
to do some kind of vsync?

Is anyone interested in taking silent pictures (without moving the
mirror), at 1720x974, uncompressed YUV422?

mohan manu

Jan 11, 2011, 10:18:51 AM
to ml-d...@googlegroups.com
1720x974 uncompressed YUV422?

I am in...!!!!

xaos

Jan 11, 2011, 10:28:47 AM
to ml-d...@googlegroups.com
> Is anyone interested in taking silent pictures (without moving the
> mirror), at 1720x974, uncompressed YUV422?

"silent pictures" - a dream within a dream :D

Carlos

Jan 11, 2011, 10:29:59 AM
to ml-d...@googlegroups.com
yummm silent pictures... could this be inherited from 7D?


Alex

Jan 11, 2011, 10:48:40 AM
to ml-d...@googlegroups.com
Since the 7D doesn't have ML, we can't spy on how it does the silent picture.
Does the 5D2 do that?

If there are hardware differences in the shutter for doing this, we don't
have any chance.

The silent picture would be done like this:
- go to Movie mode
- move focus to star button (ML can change this setting if needed)
- half-press the shutter (I don't know how to block full-press shutter)...
or you can suggest another button (SET?)
- the camera will start recording, dump the image buffer (4 MB), then
delete the movie (this takes 1-2 seconds or so)
- for postprocessing, I can write a script which converts the YUV 422
to JPEG or TIFF.
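The per-pixel math for that postprocessing step is standard; a sketch using BT.601 full-range integer coefficients is below. The exact byte order of the camera's 422 buffer is an assumption that would need to be verified against a real dump, so only the Y/U/V-to-RGB conversion is shown (in 422 data, each U/V pair is shared by two adjacent lumas).

```c
#include <stdint.h>

static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* BT.601 full-range integer conversion of one Y/U/V triple to 8-bit
 * RGB, using 16.16 fixed-point coefficients (1.402, 0.344, 0.714,
 * 1.772 scaled by 65536). */
static void yuv_to_rgb(int y, int u, int v, uint8_t rgb[3])
{
    int d = u - 128;                                  /* Cb offset */
    int e = v - 128;                                  /* Cr offset */
    rgb[0] = clamp8(y + (91881 * e >> 16));                       /* R = Y + 1.402 Cr */
    rgb[1] = clamp8(y - (22554 * d >> 16) - (46802 * e >> 16));   /* G */
    rgb[2] = clamp8(y + (116130 * d >> 16));                      /* B = Y + 1.772 Cb */
}
```

Writing the result out as a binary PPM would give the quickest first look at a dump, before worrying about JPEG or TIFF encoders.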

For this to work, I also need to know how to do vsync. AJ, do you have
any suggestions for this?

xaos

Jan 11, 2011, 10:59:16 AM
to ml-d...@googlegroups.com
Can we take a defined number of pictures, not only one?

Alex

Jan 11, 2011, 11:20:16 AM
to ml-d...@googlegroups.com
Of course.

The main limiting factor is how fast we can save those pictures. I
think 4 MB can be saved in 0.5 seconds or less. If I put QScale at
+16, the movie recording process shouldn't interfere too much with
this.

xaos

Jan 11, 2011, 11:38:01 AM
to ml-d...@googlegroups.com
BTW, do we know how short a video we can take?

Antony Newman

Jan 11, 2011, 12:38:55 PM
to ml-d...@googlegroups.com
Alex:

Vsync

Yes, there are vsync routines - I think you can register a routine to be called when a vsync occurs (on the bmp display device).
If you have my idc to hand, there is a well-commented structure that gives the name of the routine that registers the vsync call-back.

Or search for the routines that have vsync in them.   (I may have called it AJ_wait_vsync.)

Silent Pictures

Taking a copy of vram from the uncached area should be possible.

(I think my Zebra routine was working at about 17 frames a second on the cached HD image - which means that a straight read from uncached memory should be doable in under a vsync.)  Someone needs to see if you can malloc 1.4MB (to store the image).


HDVRAM buffers

We could really do with someone finding the memory locations (hopefully in a vram structure) that point to the base of each Vram segment.


Experiment 1 : RECORDING

It would be interesting to get someone to write to each uncached area of memory and see if any of them causes the output to get 'recorded'.   This would mean that 'that' vram segment is the one that the camera uses to record from.


Experiment 2 : YUV422 -> SCREEN

This is what someone may be interested in investigating.
There is a YUV422filetoscreen routine.

I think this looks at the resolution of the screen (eg HD-HDMI) and displays the picture.

It may do this uncompressed, and without black bars.

The routine allocates memory, loads the YUV422 file into memory, then displays the image directly.

Once you've worked out the 2 parms ... we should be able to change the Live-view to call this, and instead of

calling the routine to display a file ... we simply point the display device at the HD HDMI buffer.

Fingers crossed ... we may get 1080p out of the camera.

AJ




K.

Jan 11, 2011, 12:44:06 PM
to ml-d...@googlegroups.com
Alex, do you mean it would now be possible to record 3:2 aspect ratio in movie mode? So it doesn't crop to 16:9? This would be so great.

2011/1/11 Antony Newman <antony...@gmail.com>

Alex

Jan 11, 2011, 1:04:55 PM
to ml-d...@googlegroups.com
> Alex do you mean it would be more likely possible to record 3:2 aspect ratio
> in movie mode now ???? So it doesnt crop to 16:9 ? This would be so great

First, I was talking about stills. Implementing (or porting) a new
video codec is much more difficult.
Second, 3:2 is only 1056x704. The high-res buffer (1720x974) is 16:9.

> Vsync
Great info, I should investigate this.

> YUV422 -> SCREEN
This will make your zoom capability much better-looking and simpler to
implement (you won't need the LUT any more). At some point, you said
that you wrote somewhere into a VRAM buffer and got something on
screen. Can you display something meaningful with this method?

> 1080p out of the camera
Don't we already have that? Or do you mean yuv 422 uncompressed 1080p?

> malloc
How much RAM can we safely malloc?

Alex

Jan 11, 2011, 2:25:44 PM
to ml-d...@googlegroups.com
> HDVRAM buffers

NSTUB(0xFF0F41BC, AJ_lv_continous_frame_save_related)
return *(220728 + 4*arg0)

Called by:
AJ_lv_continous_frame_save+56: AJ_lv_continous_frame_save_related(0)

Also, this function seems to create JPEG files from LiveView:
NSTUB(0xFF0FE674, lvcdevResourceGet)
...
sprintf_maybe(unk_SP /* points to unk_R5 */ , 'B:/DCIM/LV%06d.jpg',
*(0x5280), 12 + unk_R4 /* points to unk_R5 */ )
FIO_CreateFile(unk_SP /* points to unk_R5 */ )
...

It's pretty complex, so I don't know whether it's safe to call, and
also what its parameters are.

Antony Newman

Jan 11, 2011, 5:34:01 PM
to ml-d...@googlegroups.com
"> YUV422 -> SCREEN
This will make your Zoom capability much better-looking and simpler to
implement (you won't need the LUT any more). At some point, you said"

If the HD segment is uncompressed, and we can point the screen directly at the segment, and the HDMI image that is rendered is not squashed ... we can get uncompressed HDMI output in LiveView.

Hook this up to a nano-flash and we've got something very cute.

(If it works, the magnification stuff that I've written would not be required.)

-----------

AJ

Alex

Jan 12, 2011, 6:46:35 AM
to ml-d...@googlegroups.com
Please test the latest build; it has the first implementation of
silent pictures.

Focus assist is not there; I've started to code it, but it's not ready yet.

The silent pictures are taken from the exact same image buffer which
will be used for focus assist.

http://groups.google.com/group/ml-devel/msg/72451b985d90e38f

Carlos

unread,
Jan 12, 2011, 6:53:01 AM1/12/11
to ml-d...@googlegroups.com
I'm anxious to try the silent picture feature.

Thanks to all the people involved

Alex

Jan 12, 2011, 8:44:28 AM
to ml-d...@googlegroups.com
> Does this mean Canon upsamples this data when recording (with some quality loss)?

See this test:
http://www.dvxuser.com/V6/showthread.php?218686-Magic-Lantern-for-550D-in-progress!/page43

Carlos

Jan 12, 2011, 9:24:33 AM
to ml-d...@googlegroups.com
aaaargh we are paying for 1080p and getting 974p! :P

That's why pana GH2 1080p videos look much sharper

Alex

Jan 12, 2011, 9:28:33 AM
to ml-d...@googlegroups.com
And GH2 does not skip lines.

xaos

Jan 12, 2011, 9:33:37 AM
to ml-d...@googlegroups.com
We don't have 4k, 3k, 2k and even 1080p.. :P

mohan manu

Jan 12, 2011, 10:01:58 AM1/12/11