So I recently started tracking a red LED hat worn by my rats in a cage with a camera setup, using Bonsai. I attached a PNG screenshot of part of my workflow. After camera capture and crop, I split the image into its RGB color channels and convert the Green and Red channel images to matrices of pixel values. Then I subtract the Green channel from the Red channel to remove the green in the image and leave only red.

What about the Blue channel? Green and Blue are similar enough here that it doesn't matter much which one you subtract from Red. Because my optogenetic stimulation protocol emits green light from the tips of the patch cords, and because I don't know how to remove both the Green and Blue channels, I decided to remove Green. I then convert the newly calculated matrix, which represents the Red channel after the Green channel has been removed, back into an image. The result looks more or less like a grayscale image, except that white pixels correspond to red and black pixels correspond to all of the other colors in the image.

The Threshold node gates objects (which must be red at this point in the workflow branch, or they wouldn't show up in the image) to be at least a certain brightness. Set Threshold Type to Binary, leave Maximum Value at 255, and adjust the Threshold Value while Bonsai is running live with the object you are trying to track present in the camera frame. I suggest setting it low enough to still detect the object reliably, but high enough above everything else that you can feel confident Bonsai won't lose the object in different contexts (e.g., the rat is at a different location in the camera's frame than when you were adjusting the parameters, and at that location the red LED hat happens to appear less bright in Bonsai).
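In case it helps to see the idea outside of Bonsai, here is a minimal NumPy sketch of the channel-subtraction and binary-threshold steps. The frame values, assumed R-G-B channel order, and the example threshold of 100 are all placeholders I made up for illustration, not values from my workflow:

```python
import numpy as np

def red_minus_green(frame: np.ndarray) -> np.ndarray:
    """Subtract the Green channel from the Red channel, clipping at 0.
    Same idea as subtracting the two channels after the RGB split."""
    r = frame[:, :, 0].astype(np.int16)  # assumes channel order R, G, B
    g = frame[:, :, 1].astype(np.int16)
    return np.clip(r - g, 0, 255).astype(np.uint8)

def binary_threshold(img: np.ndarray, thresh: int, max_value: int = 255) -> np.ndarray:
    """Binary threshold: pixels above `thresh` become `max_value`, else 0,
    mirroring Threshold Type = Binary with Maximum Value = 255."""
    return np.where(img > thresh, max_value, 0).astype(np.uint8)

# Tiny synthetic frame: one bright red pixel (the LED), one green pixel
# (e.g., patch-cord light).
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (250, 20, 10)   # red LED -> survives the subtraction
frame[1, 1] = (30, 240, 20)   # green light -> removed by the subtraction
mask = binary_threshold(red_minus_green(frame), thresh=100)
# mask[0, 0] is 255 (red kept); mask[1, 1] is 0 (green rejected)
```

The clip at 0 matters: wherever green is brighter than red, the subtraction would otherwise wrap around in unsigned arithmetic instead of going to black.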
The FindContours node gates objects to be at least, and not more than, a certain size; however, by this point in the workflow branch FindContours should not have to contribute to detecting the object of interest over everything else. If it does, reconsider the design of the cage or whatever enclosure the object is in (i.e., change the materials, the color of the materials, the object you are tracking, the size of the object, etc.). I set FindContours in my workflow to a MinimumArea of one pixel and a MaximumArea of 100000, with Method set to ChainApproxNone, which may be important but I'm not entirely sure; it's just what works. If you are tracking multiple objects of interest, use the FindContours max and min parameters as you need them; just make sure they aren't too stringent, or you might lose the object more frequently, which is why I set my MinimumArea to 1 (e.g., the rat rears and the red LED hat appears smaller in the camera frame because of perspective).

By now we've reached the BinaryRegionAnalysis and LargestBinaryRegion nodes, which collectively do most of the work, at least when conditions are good (i.e., all of the previous nodes did what they needed to do); after that we can get the data we want. The output (right-click the LargestBinaryRegion node: Output > X Float, Y Float) gives us our XY coordinates, which I save as a CSV in my workflow after a Zip node pairs them (X, Y), to analyze later in MATLAB. I also use a branch in the workflow to calculate velocity values from those XY coordinates and save them as another CSV file. Anything else you can come up with for your experiment that you want to measure with Bonsai starts here, right after all of the nodes I've described. Good luck, and I hope my example helps you.
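For reference, the velocity branch amounts to frame-to-frame differences of the (X, Y) centroids scaled by the frame rate. Here is a minimal offline NumPy sketch of that calculation; the 30 fps and the example coordinates are placeholders, not values from my setup:

```python
import numpy as np

def velocity_from_xy(xy: np.ndarray, fps: float) -> np.ndarray:
    """Per-frame speed (pixels/second) from an (N, 2) array of
    (X, Y) centroids, one row per video frame."""
    dxy = np.diff(xy, axis=0)              # displacement between frames
    dist = np.hypot(dxy[:, 0], dxy[:, 1])  # Euclidean distance per step
    return dist * fps                      # convert to pixels per second

# Example: the tracked point moves 3 px right and 4 px up each frame,
# i.e., 5 px/frame; at 30 fps that is 150 px/s.
xy = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
speed = velocity_from_xy(xy, fps=30.0)  # -> [150.0, 150.0]
```

You get units of pixels per second; if you want real-world units you would additionally need a pixels-to-centimeters calibration for your camera, which I haven't shown here.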
Let me know if you have any issues adapting it to your setup.
Best,