Robustly tracking foreground and background objects


dennis.go...@neuro.fchampalimaud.org

Nov 13, 2017, 5:07:48 AM11/13/17
to Bonsai Users
Hi Bonsai users and developers,

I currently use Bonsai to do image-based tracking of single fruit flies in an arena with several food spots (yeast and sucrose). I use IR background lighting and an IR cut-on filter (R72). I record from four arenas simultaneously; see the attached image (test_2017-10-03-191752-0000.jpg).

In order to track the fly, I use background subtraction and an adaptive threshold to find connected components (the "find contours" node) and select the largest one as the fly centroid (Tracker.JPG). I would also like to track the food spots, and I started out using an adaptive threshold with a Gaussian window and then finding contours. This works robustly for the yeast spots, which are opaque. However, the sucrose spots are more transparent and, although visible, very noisy in the image.

Now, here are my concrete questions:

1) Is there a way to average multiple frames in order to get rid of some of the noise that is occluding the detection of sucrose spots?
2) How can I differentiate foreground from background objects (fly = foreground, spots = background)? I'd like to subtract the fly from the spot detection image.
3) Movements or forces from outside the setup may lead to differences detected by the background subtraction. Is there a way to manually reset/restart the background subtraction node (without using the adaptive term)? Or, even better, instead of taking the largest binary region, could I choose a region that "matches" the shape of a fly (based on axis lengths, etc.)?
4) Finally, if anyone has comments/suggestions about the provided workflow, please let me know.

Thank you in advance.
Dennis
CamTracking.bonsai
CamTracking.bonsai.layout
test_2017-10-03-191752-0000.jpg
Tracker.JPG

Gonçalo Lopes

Nov 14, 2017, 8:35:42 PM11/14/17
to Dennis Goldschmidt, Bonsai Users
Hi Dennis,

Cool setup :-) here are hopefully some tips:

1) Yes, you can average multiple frames in different ways. The simplest way may be to use the IncrementalMean node and then use Take to select how many frames you want to include in the average. You can then combine that average with the original stream for the subtraction using CombineLatest. You may also find this other thread interesting: https://groups.google.com/forum/#!msg/bonsai-users/vlNwBVGgp6c/5R3Cy9mPAwAJ
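For reference, what IncrementalMean followed by Take(N) computes can be sketched outside Bonsai. This is an illustrative NumPy sketch of the averaging step only, not the Bonsai workflow itself; the frame size, noise level, and frame count are made-up values:

```python
import numpy as np

def average_frames(frames):
    """Incremental mean over a frame sequence, one update per frame:
    mean_n = mean_{n-1} + (frame_n - mean_{n-1}) / n."""
    mean = None
    for n, frame in enumerate(frames, start=1):
        f = frame.astype(np.float64)
        mean = f if mean is None else mean + (f - mean) / n
    return mean

# Averaging noisy views of a static scene suppresses sensor noise,
# which is what should make the dim sucrose spots easier to threshold.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(50)]
background = average_frames(frames)
```

The incremental form gives exactly the arithmetic mean of the frames seen so far, without having to keep them all in memory.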

2) If you have isolated the fly using FindContours, you can draw it back as an image using DrawContours. This will produce an image which you can then subtract from the result of thresholding the spots.

3a) You can force BackgroundSubtraction to reset by placing the operator inside a triggered workflow. You can create a simple triggered workflow using the TriggeredWindow operator like so:


You would then place the BackgroundSubtraction operator inside the SelectMany node. What this does is effectively start a new background subtraction workflow every time you press a key, which causes the background to reset. You can read more about windows and other higher-order operators here: http://bonsai-rx.org/docs/higher-order/
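Conceptually, restarting the window just throws away the accumulated background model. A rough Python analogue of that effect (not the Bonsai/Rx machinery itself, only what re-entering the window does to the state) might look like:

```python
import numpy as np

class ResettableBackground:
    """Rough analogue of BackgroundSubtraction inside a triggered window:
    reset() discards the model, as re-entering the window would."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.background = None
        self.count = 0

    def apply(self, frame):
        f = frame.astype(np.float64)
        if self.background is None:
            self.background = f
        else:
            self.count += 1
            self.background += (f - self.background) / (self.count + 1)
        # Foreground = absolute difference from the current background estimate.
        return np.abs(f - self.background)

model = ResettableBackground()
frame = np.full((4, 4), 10.0)
model.apply(frame)                # builds the background model
foreground = model.apply(frame)   # static scene, so near-zero foreground
model.reset()                     # equivalent to re-triggering the window
```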

3b) Regarding template matching strategies, you could try to use the CmtTracker node. This is a specialized template tracking node that can be found in the CMT package (Bonsai.Cmt). In order for the tracker to work, you need to initialize it by drawing a RegionOfInterest around the object to be tracked. Try to use it manually first to get a feel for it, maybe even start with the video paused so you can select the bounding box and then play it to see how it behaves. If it works well for you we can discuss more clever ways to make the initialization automatic.

4) Regarding the workflow, I am curious why you have AdaptiveThreshold after BackgroundSubtraction. Usually AdaptiveThreshold works best on grayscale images, but not necessarily on black-and-white images (ah, maybe you are using BackgroundSubtraction simply to subtract the background without thresholding?). Other than that it looks good.

Hope this helps.

--
You received this message because you are subscribed to the Google Groups "Bonsai Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bonsai-users+unsubscribe@googlegroups.com.
Visit this group at https://groups.google.com/group/bonsai-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/bonsai-users/91fc3664-4754-4b84-ac56-261a496ad89d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

dennis.go...@neuro.fchampalimaud.org

Nov 16, 2017, 5:20:28 PM11/16/17
to Bonsai Users
Hi Gonçalo,
As usual, you are a fountain of wisdom. Thanks so much for the quick reply.

1) I tried out IncrementalMean, and it looks good. I actually realized that RunningAverage does the same job dynamically, am I right? If so, I will probably use that one. 

2) Do I understand it right, DrawContours simply converts the connected components into an opencv image?

3a) Great, this works amazingly well!

3b) I will try this tomorrow; for now it has not been my highest priority. I'll let you know how it works out.

4) You are right, I now use it beforehand. 

For robustness, I will now use a hybrid solution that combines background subtraction with previous information about the fly's position (for moments when the fly rests).
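The fallback logic of such a hybrid can be sketched in a few lines of Python (illustrative only; `detections` stands for whatever the background subtraction stage emits per frame, with `None` when nothing is found):

```python
def track_with_fallback(detections, last=None):
    """Hybrid tracking rule: use the detected centroid when background
    subtraction finds a blob, otherwise hold the previous position
    (e.g. when a resting fly fades into the background model)."""
    positions = []
    for det in detections:
        if det is not None:
            last = det
        positions.append(last)
    return positions

# A resting fly yields no detection; the last known position is held.
path = track_with_fallback([(10, 12), None, None, (11, 12)])
# path == [(10, 12), (10, 12), (10, 12), (11, 12)]
```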

One more follow-up question: Is it possible to have an automatic video calibration for translation, rotation, scaling and distortion? I think the first three should be easy to detect using HoughCircles or similar shape landmarks. I can't figure out how to determine the distortion within Bonsai. Let me know if anything is unclear.

Once more, thanks so much for your help!

Cheers,
Dennis

Gonçalo Lopes

Nov 19, 2017, 6:45:38 AM11/19/17
to Dennis Goldschmidt, Bonsai Users
Glad to hear it helped!
 
1) I tried out IncrementalMean, and it looks good. I actually realized that RunningAverage does the same job dynamically, am I right? If so, I will probably use that one.

RunningAverage does indeed do a similar job. The difference is that IncrementalMean computes the exact mean incrementally, while RunningAverage is an exponentially weighted (rolling) mean, with the effective window size set by the Alpha parameter.
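The two update rules can be written out explicitly; this is an illustrative Python sketch of the math, not the Bonsai operators themselves:

```python
def incremental_mean(xs):
    """Exact mean, updated one sample at a time (IncrementalMean-style)."""
    m = 0.0
    for n, x in enumerate(xs, start=1):
        m += (x - m) / n
    return m

def running_average(xs, alpha):
    """Exponentially weighted mean (RunningAverage-style): recent samples
    dominate, with the effective window set by alpha."""
    m = xs[0]
    for x in xs[1:]:
        m = (1 - alpha) * m + alpha * x
    return m

# A sequence that jumps from 0 to 10 halfway through: the exact mean
# settles at 5, while the running average tracks the recent value.
xs = [0.0] * 50 + [10.0] * 50
```

This is why RunningAverage adapts to slow changes in the scene, while IncrementalMean weighs all frames equally.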
 
2) Do I understand it right, DrawContours simply converts the connected components into an opencv image?

Yes, DrawContours simply draws the Contours into a new image. Note that holes in the objects may be eliminated (depending on how you configure the contour extraction, etc). In your case, I believe this will actually be an advantage.

One more follow-up question: Is it possible to have an automatic video calibration for translation, rotation, scaling and distortion. I think the first three should be easy to detect using HoughCircles or similar shape landmarks. I can't figure how to determine the distortion within Bonsai. Let me know if anything is unclear.

About this automatic video calibration: is the goal to automatically center, orient and undistort the video for each arena? If yes, you should be able to use a dynamically parameterized WarpAffine to do the centering and orienting (see Exercise 3 in this worksheet).

About correcting the distortion, the easiest way is to undistort the whole image first, and then you don't have to worry about local distortion in different image segments. You can use the Undistort node for this, and usually you can just replace the arenas with a checkerboard at the same distance and play with the distortion parameters until you make all the lines straight.

Hope this helps.
 

dennis.go...@neuro.fchampalimaud.org

Nov 19, 2017, 10:07:27 AM11/19/17
to Bonsai Users
The worksheet is a great help, thanks! I will use the *Undistort* node on a checkerboard, and will use the extracted information from the arena center positions to determine rotation and translation of the *WarpAffine* transform.

To come back to the *CmtTracker*: I was able to initialize it with an ROI defined around the fly, but I have the feeling that it might not surpass background subtraction in robustness. What does the supervised algorithm do, exactly? I guess that, in the end, background subtraction might be more efficient for online tracking?

Gonçalo Lopes

Nov 19, 2017, 10:15:49 AM11/19/17
to Dennis Goldschmidt, Bonsai Users
Here is a description of the CMT algorithm (there is also a publication on it):

Indeed, these trackers are mostly about following objects under very uncontrolled conditions of lighting and pose. For experiments where you have full control over the environment and illumination, it is usually better to go for a simple method like background subtraction.

