Hi Vincent,
There are currently two main approaches to tracking multiple individuals in a video feed. The first is to use segmentation methods (e.g. color, or simply brightness contrast in the case of black and white mice) to distinguish each individual. The second is a marker-based approach using the
ArUco library. ArUco uses the structure of known fiducials to individually recognize markers, as well as recover their full 3D position and orientation in space relative to the camera. The disadvantage, of course, is that physical markers have to be attached to each individual.
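To give a concrete idea of the first approach, here's a rough standalone sketch in pure NumPy of brightness-based segmentation: threshold the frame, group bright pixels into connected blobs, and return each blob's centroid. This is only an illustration of the idea (a Bonsai workflow would do the equivalent with its vision nodes rather than with code like this), and the threshold and minimum-area values are made-up placeholders you would tune for your setup.

```python
import numpy as np
from collections import deque

def track_bright_blobs(frame, threshold=128, min_area=20):
    """Segment bright blobs (e.g. white mice on a dark background) by
    brightness thresholding, then return the centroid of each blob.
    threshold and min_area are illustrative values, not recommendations."""
    mask = frame > threshold
    labels = np.zeros(frame.shape, dtype=int)
    centroids = []
    next_label = 0
    # Flood-fill each unvisited bright pixel into a connected component.
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        next_label += 1
        labels[start] = next_label
        queue = deque([start])
        pixels = []
        while queue:
            y, x = queue.popleft()
            pixels.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < frame.shape[0] and 0 <= nx < frame.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
        # Ignore specks smaller than min_area pixels.
        if len(pixels) >= min_area:
            ys, xs = zip(*pixels)
            centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids
```

Note that this alone doesn't preserve identity across frames when animals touch or cross, which is exactly why the marker-based approach below can be attractive.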
To use this latter approach you need to install the Bonsai ArUco package (using the package manager). Unfortunately there are a couple of tricky steps, since you need to calibrate the intrinsic parameters of your camera and provide the DetectMarkers node with a file containing these parameters. In the future I will write a detailed tutorial on how to do this, but for now, if you want to give it a quick try, I'm attaching a calibration file for a 640x480 webcam, as well as a marker file. If you start Bonsai, feed your camera data to the DetectMarkers node, point it to the calibration parameters file, and show it a physical print of the marker, you can at least get a feel for what it looks like.
There are other computer vision approaches to tracking multiple objects that I would love to integrate with Bonsai. If by any chance anyone comes across open-source libraries that can do markerless model-based, feature-based, or template-matching tracking, let me know and I will gladly take a look at them. I have my own approaches to try in the future but haven't gotten around to implementing them yet.
Cheers,