Motion detection patch (60D vs. alpha 12)


Colin Peart

Jun 2, 2011, 1:47:40 PM
to ml-devel
Well, here is my first contribution.  It works for me -- tested on the 60D.

I have included the full file and the diff.  Let me know what you think.

Changes:
- The motion detection code uses a variable (K) to keep it from shooting immediately after starting live view and after a picture. However, if you set up motion detection, entered live view, used it for a bit, and then left live view, K was not reset. That meant that when you re-entered live view, it would take a shot right away instead of waiting for the exposure to settle. K is now reset properly.
- Changed the menu entry for motion detection and added a configuration item for the sensitivity.
- Changed the live view detection: a circular buffer of the last 5 readings is now kept and averaged. When a new reading is off the average by more than the sensitivity, a picture is taken (roughly as in the sketch below).
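
In pseudocode, the new check works roughly like this (a sketch only -- the names below are illustrative, not the actual shoot.c symbols):

/* Sketch of the circular-buffer trigger described above.
 * Names are illustrative, not the actual shoot.c symbols. */
#define MD_HISTORY 5

static int md_history[MD_HISTORY];
static int md_index = 0;
static int md_count = 0;    /* readings collected so far */

/* Feed one new exposure reading; returns 1 if a shot should fire. */
static int md_check(int reading, int sensitivity)
{
    int i, sum = 0, fire = 0;

    if (md_count >= MD_HISTORY)   /* wait until the buffer is full */
    {
        for (i = 0; i < MD_HISTORY; i++)
            sum += md_history[i];
        int avg = sum / MD_HISTORY;

        int diff = reading - avg;
        if (diff < 0) diff = -diff;
        if (diff > sensitivity)
            fire = 1;
    }

    md_history[md_index] = reading;           /* store the new reading */
    md_index = (md_index + 1) % MD_HISTORY;
    if (md_count < MD_HISTORY) md_count++;

    return fire;
}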

I tried to speed it up and ran some tests, but nothing worth mentioning worked, so I stuck to just the stuff I actually had working.  I found that the regular mode can trigger in between 240 and 375 ms, depending on the silent shooting mode in the Canon menu.  (Silent mode 1 or 2 is faster, because it doesn't close and re-open the shutter first.)

Tests using the Magic Lantern silent shooting mode put the response at under 100 ms, so I suggest using that for things like lightning.

Things I would like to add in the future:
- Configuration for the number of pictures taken in ML silent mode -- currently hard-coded to 3
- Still looking for a way to speed it up.



Colin Peart
cgap...@gmail.com

shoot.c
shoot.diff

Alex

Jun 2, 2011, 2:09:51 PM
to ml-d...@googlegroups.com
Thanks. I took a quick look and here are my comments:

- Sensitivity adjustment is welcome.

- The LiveView image is triple-buffered, and we may use this for more complex motion detection algorithms: for example, taking the difference between the last two buffers and computing some kind of average. This would detect when a subject is moving, but not enough to cause a variation in the exposure.

- The above method might enable another application: detecting the amount of camera motion (and taking a picture when the camera is not moving). But with such a big delay on shutter release... it may not work as expected.

- Instead of averaging over the last 5 items, we could use a moving average (very easy to compute, and the time constant is easy to adjust). Like this: x_avg = x_current * k + x_avg * (1-k), where you adjust k between 0 and 1. Of course, with integer arithmetic :) (see the sketch below)

- I get around 240ms on 550D, and if I hold the * button, it drops to around 190. Maybe faking some button presses will help.
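
For the curious, the integer version could look something like this (a hypothetical sketch, with k expressed in fixed point as K_NUM/256):

/* x_avg = x_current * k + x_avg * (1-k), with k in fixed point as K_NUM/256.
 * K_NUM = 256 reacts instantly; small values give a long time constant. */
#define K_NUM 64    /* k = 64/256 = 0.25 */

static int x_avg = 0;

static void update_avg(int x_current)
{
    /* all-integer equivalent of the floating-point formula above */
    x_avg = (x_current * K_NUM + x_avg * (256 - K_NUM)) / 256;
}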

--
http://magiclantern.wikia.com/
 

Colin Peart

Jun 2, 2011, 3:11:51 PM
to ml-d...@googlegroups.com
That's a good point about the * button.  Alternatively, I suspect that setting manual exposure might help too.

Averaging over the last 5 items was cheap, and I figured it would help stabilize the release a bit.  I am definitely open to alternate techniques; this was dipping my toes in before I dive.

The other target I want to look at is how the AEV is calculated, and where on the sensor that value comes from. 

However, before I go too far, it might help to get an idea of what it's used for.

e.g.
- Lightning (a fast, large, cross-sensor change in brightness)
- "Security" mode: trying to capture something like a bird coming out of a birdhouse, or an animal coming by (slower than a lightning flash, and the target area could be spread about; it might be more useful to trigger on color changes instead of overall brightness)
- Photo finish: capturing a sports event or something moving fast as it hits a particular part of the frame
- Etc, etc, etc.

Once I can figure out what kind of motion events are being targeted, I can probably come up with adjustments or profiles to get better performance.

--Colin

Colin Peart
cgap...@gmail.com

Alex

Jun 4, 2011, 8:45:06 AM
to ml-d...@googlegroups.com
I've applied your patch and also added a second method for motion detection (experimental).

We have now:

* "EXP" mode, where it detects variations in exposure (old method, good for lightning and big subjects which change the exposure value)

* "DIF" mode, where it takes the difference between the last two LiveView frames and computes some kind of average (it detects small movements which do not change the exposure). If GlobalDraw is on, it also displays the difference between the last two LV frames (some kind of background subtraction).

/* any better words for these two terms? */

Still no luck decreasing the response time (it's sometimes around 200ms... other times around 300...)

To test it, compile these revisions from Bitbucket (the code is committed for the 550D and 60D). There are a few other changes which have not been fully tested yet (e.g. some kind of panorama mode); see the changeset history for details.

https://bitbucket.org/hudson/magic-lantern/changeset/19504ef3047c 550D
https://bitbucket.org/hudson/magic-lantern/changeset/8720f389876c 60D

Colin Peart

Jun 5, 2011, 1:12:37 AM
to ml-d...@googlegroups.com
I will give that a go tomorrow, and see how it works out.  Should be interesting.

--Colin


------------
Colin Peart
Sent from mobile

Colin Peart

Jun 5, 2011, 3:41:57 PM
to ml-d...@googlegroups.com
I like the DIF mode very much -- holding the * button down dropped the time to 180 ms.
EXP mode: 200+.  I think the exposure measurement must be slowish; that DIF mode is good.

--Colin


Colin Peart
cgap...@gmail.com

Alex

Jun 5, 2011, 3:47:50 PM
to ml-d...@googlegroups.com
This may indicate that we are using the wrong buffer in EXP mode (i.e. not the one with the latest frame). Can you try to switch them?

Colin Peart

Jun 5, 2011, 3:53:11 PM
to ml-d...@googlegroups.com
Would that be by using the get_fastrefresh_422_buf() call instead of using YUV422_LV_BUFFER_DMA_ADDR?


Colin Peart
cgap...@gmail.com

Alex

Jun 5, 2011, 3:58:39 PM
to ml-d...@googlegroups.com
Yes. One of them is a pointer in the camera firmware (which I believe to be the buffer DMA is currently writing to), and the other is the buffer where Magic Zoom can write (i.e. after DMA wrote the image, but before the LCD task reads and displays it). The LiveView image is triple-buffered, so you may experiment with these buffers even more.

Probably the best moment to catch is just after the DMA went past the rectangular area checked by ML.

Michael Richards

Jun 6, 2011, 10:34:29 AM
to ml-d...@googlegroups.com
I'm not sure how much CPU is available for motion detection, but I can explain how it's been done in other implementations. A simple approach is to subtract the last frame from the current one. For a fixed camera this is effective, as you can then analyze the differences, scaling for sensitivity in luminance and chroma, which gives a good image of what has moved. From there, apply a threshold and do blob detection to determine whether the movement meets your criteria -- motion and size. This shouldn't be terribly expensive for the processor (see the sketch below).
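
To make that concrete, a minimal sketch of the threshold + blob-size check might look like this (all names are made up for illustration, not Magic Lantern APIs, and a real implementation would want a much smaller scratch stack):

/* Hypothetical sketch: measure connected regions ("blobs") in a binary
 * mask built from the thresholded frame difference. */
#define W 200
#define H 200

static unsigned char mask[H][W];   /* 1 where |current - last| > threshold */
static short stack_x[W * H];       /* scratch space for the flood fill */
static short stack_y[W * H];

/* Iterative 4-neighbour flood fill; returns the blob's size in pixels. */
static int blob_size(int sx, int sy)
{
    int top = 0, size = 0;
    stack_x[top] = sx; stack_y[top] = sy; top++;
    mask[sy][sx] = 0;                        /* mark visited */

    while (top > 0)
    {
        top--;
        int x = stack_x[top], y = stack_y[top];
        size++;
        /* push any unvisited 4-neighbours */
        if (x > 0   && mask[y][x-1]) { mask[y][x-1] = 0; stack_x[top] = x-1; stack_y[top] = y;   top++; }
        if (x < W-1 && mask[y][x+1]) { mask[y][x+1] = 0; stack_x[top] = x+1; stack_y[top] = y;   top++; }
        if (y > 0   && mask[y-1][x]) { mask[y-1][x] = 0; stack_x[top] = x;   stack_y[top] = y-1; top++; }
        if (y < H-1 && mask[y+1][x]) { mask[y+1][x] = 0; stack_x[top] = x;   stack_y[top] = y+1; top++; }
    }
    return size;
}

/* Fire only if some blob is big enough -- rejects single-pixel noise. */
static int motion_big_enough(int min_pixels)
{
    int x, y;
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            if (mask[y][x] && blob_size(x, y) >= min_pixels)
                return 1;
    return 0;
}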

An alternative that uses a lot more juice would be to implement the Lucas-Kanade algorithm for optical flow. Basically, you process the image to identify "features", then track their movement from frame to frame. This can deal with the entire frame moving as well, since the problem becomes points moving relative to each other. If you look up some of the OpenCV stuff, both of these are areas that have had a significant amount of research.

Alex

Jun 6, 2011, 12:29:08 PM
to ml-d...@googlegroups.com
The "DIF" option uses a simpler version of the first approach: it computes something like sum( abs(a_ij - b_ij) ), where a and b are the luma components of the last two frames, and then it applies a threshold. Currently it's done on a 200x200 cropped area, for speed.
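
For reference, that computation could be sketched like this (buffer handling and names are assumptions, not the committed code; it assumes the luma has already been extracted into plain 8-bit planes):

/* Rough sketch of the "DIF" computation: average absolute luma difference
 * over a 200x200 crop, compared against a threshold. */
#include <stdint.h>

static int frame_difference(const uint8_t* a, const uint8_t* b,
                            int pitch, int x0, int y0)
{
    int x, y, sum = 0;
    for (y = y0; y < y0 + 200; y++)
        for (x = x0; x < x0 + 200; x++)
        {
            int d = a[y * pitch + x] - b[y * pitch + x];
            sum += (d < 0) ? -d : d;         /* abs(a_ij - b_ij) */
        }
    return sum / (200 * 200);                /* average per pixel */
}

/* Trigger when the average per-pixel difference exceeds the threshold. */
static int motion_detected(const uint8_t* a, const uint8_t* b,
                           int pitch, int threshold)
{
    return frame_difference(a, b, pitch, 0, 0) > threshold;
}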

I don't think we have enough CPU power for more complex algorithms, but
if you'd like to try, you are welcome.
