Hey Mikkel,
The way I usually do it is to start by synchronizing the frame acquisition. This can be done in two ways, depending on the kind of camera you have:
1) If you have a professional/industrial grade camera (e.g. PointGrey, Basler, IDS), it usually provides a GPIO output that can be configured to send a digital pulse whenever the shutter is open for integration. If you feed this pulse directly into one of the OpenEphys digital inputs, you get precise hardware synchronization of when each individual frame was grabbed. If your camera supports this, it is definitely the gold standard, and you're pretty much done: no matter how much extra latency your analysis adds, you will always know which frame you were working on and, by extension, which slice of the neural data it matches. To account for dropped frames, you should also store the camera's hardware frame counter.
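Just to make the dropped-frame bookkeeping concrete, here is a minimal offline alignment sketch in Python. The function name, file layout and numbers are hypothetical; the only assumptions are that OpenEphys gives you the rising-edge times of the exposure pulses and that each saved frame carries its hardware counter.

# Hypothetical offline alignment sketch: map every saved frame to its
# acquisition time via the hardware frame counter, even when some exposure
# pulses had no corresponding frame saved to disk.
import numpy as np

def align_frames_to_ttl(ttl_times, frame_counters):
    """ttl_times: rising-edge times (s) of the exposure pulses recorded by the ephys system.
    frame_counters: hardware counter stored with each frame that made it to disk.
    Returns the acquisition time of each saved frame."""
    # Counters increase by one per exposure, so use them as indices into the pulse train.
    offsets = frame_counters - frame_counters[0]
    if offsets[-1] >= len(ttl_times):
        raise ValueError("More frames than TTL pulses; check the sync wiring.")
    return ttl_times[offsets]

# Example: 10 exposure pulses at 30 Hz; the frame with counter 1003 was dropped.
ttl = np.arange(10) / 30.0
counters = np.array([1000, 1001, 1002, 1004, 1005])
print(align_frames_to_ttl(ttl, counters))   # times of the 5 frames actually saved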
2) If you just have a webcam or a lower grade camera with no GPIO capabilities, then you will probably have to hack together an external sync pulse, something like an LED that blinks periodically in a place where your camera can see it at all times (i.e. away from the animal). This is most commonly just an Arduino running the Blink example sketch. In addition to driving the LED for the camera, you run a wire from the same Arduino output to an OpenEphys digital input. That way, whenever the LED goes high, the ephys system sees the digital pulse and the camera sees the optical signal. You can track when the optical signal lights up in Bonsai using a Crop node (see an example of Crop at https://www.youtube.com/watch?v=736G93Qaak0) followed by Sum, which you can then threshold to find your sync pulse rising edges.
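If it helps to see the Crop → Sum → threshold idea outside of Bonsai, here is a standalone Python/OpenCV sketch that does the same thing offline on a recorded video. The file name, ROI coordinates and threshold are placeholders you would tune for your own recording.

# Offline sketch of the Crop -> Sum -> threshold pipeline: sum the pixel
# values in the LED region of each frame, threshold the trace, and find
# the rising edges of the sync pulse.
import cv2
import numpy as np

capture = cv2.VideoCapture("session.avi")   # hypothetical video file
x, y, w, h = 10, 10, 40, 40                 # ROI containing the sync LED
threshold = 50000.0                         # tune to separate LED on/off

sums = []
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    roi = gray[y:y + h, x:x + w]            # "Crop"
    sums.append(float(roi.sum()))           # "Sum"
capture.release()

on = np.asarray(sums) > threshold           # threshold the summed brightness
rising_edges = np.flatnonzero(~on[:-1] & on[1:]) + 1
print("LED rising edges at frames:", rising_edges)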
As long as you have one of these methods in place, you can play with your online analysis without worrying so much about synchronization.
Regarding visualization of tracking coverage, this can also be done in a couple of ways:
1) If you prefer to do it in OpenEphys, then yes, you need to find a way to pipe the XY values to the GUI. I guess in principle you could set up Bonsai to report the XY to a NI-DAQ analog output and then read that into OpenEphys, but this seems to me like a bit of overkill and likely to introduce more latency than necessary.
The best way would be for Bonsai to communicate directly with the OpenEphys GUI using some kind of IPC (inter-process communication). Usually I do this using the Bonsai.OSC module. OSC is a lightweight binary communication protocol widely used in the electronic music scene to control various input and output devices (think of it as MIDI 2.0). Bonsai supports sending most data types directly through OSC, including points, orientations, times, etc. It would be nice if the OpenEphys GUI had an OSC source (and sink while you're at it). This would enable Bonsai and the OpenEphys GUI to exchange data back and forth much more effectively (Josh et al., what do you think?). The spec is very simple and there are a number of reference implementations you could probably grab and plug into a source.
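To give a feel for how little is involved, here is a sketch of the OSC exchange using the python-osc package (one of those reference implementations; Bonsai.OSC speaks the same protocol). The address "/position" and port 2323 are arbitrary choices you would just have to match on both ends.

# Minimal OSC sender/receiver sketch using the python-osc package.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Receiver side: whatever reads the stream (e.g. a plugin in the GUI)
# registers a handler for the agreed address.
def on_position(address, x, y):
    print(f"{address}: x={x}, y={y}")

dispatcher = Dispatcher()
dispatcher.map("/position", on_position)
server = BlockingOSCUDPServer(("127.0.0.1", 2323), dispatcher)  # binds the socket

# Sender side: push XY coordinates as a single OSC message.
client = SimpleUDPClient("127.0.0.1", 2323)
client.send_message("/position", [128.5, 240.0])

server.handle_request()   # process the one message we just sent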
2) Another way is to build the visualization using Bonsai itself. There are also a number of options here. The easiest is to simply use the built-in visualizers; for this you can just follow the steps in this tutorial (https://youtu.be/_uJVtsGtI1M). Another way is to accumulate a 2D histogram of tracked positions over time. This is easy enough to do with OpenCV and will be included in the next Bonsai release (2.2). If you prefer to go this way, let me know and I can show a couple of ways to do it in the current version using Python nodes (a standalone sketch of the idea is below). There are even other options, but they're probably too involved for what you need right now.
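The sketch below shows the 2D histogram idea as plain Python/NumPy/OpenCV, just to illustrate the logic; the arena size, bin count and random positions are placeholders, and the Bonsai Python-node version would follow the same steps inside the workflow.

# Accumulate a 2D occupancy histogram from tracked XY positions and show it
# as a heatmap.
import numpy as np
import cv2

arena_width, arena_height = 640, 480   # pixel extent of the tracking image
bins = 64                              # histogram resolution
occupancy = np.zeros((bins, bins), dtype=np.float32)

def update_occupancy(x, y):
    """Add one tracked position to the running histogram."""
    col = min(int(x / arena_width * bins), bins - 1)
    row = min(int(y / arena_height * bins), bins - 1)
    occupancy[row, col] += 1

# Feed in positions as they arrive from the tracker (here just fake data).
for x, y in np.random.rand(1000, 2) * [arena_width, arena_height]:
    update_occupancy(x, y)

# Normalize and display as a color-mapped coverage image.
heatmap = cv2.applyColorMap(
    cv2.convertScaleAbs(occupancy, alpha=255.0 / occupancy.max()),
    cv2.COLORMAP_JET)
cv2.imshow("coverage", cv2.resize(heatmap, (arena_width, arena_height),
                                  interpolation=cv2.INTER_NEAREST))
cv2.waitKey(0)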
Regarding alternative ways of tracking 3D position, you can use the Bonsai.Aruco module to extract the 3D position and orientation of known fiducial markers using the ArUco library. This works by printing out a square black-and-white code pattern which the camera can easily identify and extract the geometry of (see also Figure 4D in the Bonsai paper for an example). You also need to calibrate the camera's intrinsic parameters (either using OpenCV or a manual procedure), but it's better to ask about that in another thread if you are interested in this option.
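In case you want to poke at it directly, this is roughly the call sequence the ArUco approach boils down to, written against OpenCV's aruco module in Python. The camera matrix, distortion coefficients and marker size are placeholders (they come from the intrinsic calibration mentioned above), and note that OpenCV 4.7+ moved these functions onto cv2.aruco.ArucoDetector.

# Detect ArUco markers in one frame and estimate their 3D pose.
import cv2
import numpy as np

camera_matrix = np.array([[800.0, 0.0, 320.0],     # placeholder intrinsics
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)                           # assume negligible lens distortion
marker_length = 0.05                                # marker side length in meters

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

capture = cv2.VideoCapture(0)
ok, frame = capture.read()
capture.release()
if not ok:
    raise RuntimeError("could not grab a frame from the camera")

corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    for marker_id, tvec in zip(ids.flatten(), tvecs):
        print(f"marker {marker_id}: 3D position {tvec.ravel()} (m)")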
Regarding the overlay of physiology data directly on the video, this would also be very interesting to do. Again you could mix and match OpenEphys and Bonsai. Probably the easiest way here would be to start with Bonsai, since it already has built-in video visualizers. If OpenEphys could send OSC messages to Bonsai (see above), then you could stream in a series of color codes (one for each clustered cell) as an OSC stream to Bonsai. You could then combine that stream with the current tracking state and simply color the current position with the color of the cell that fired, giving you the classical place-cell visualization online. There are also many other options, but it's probably better to trail off here as this e-mail is getting long.
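The core of that overlay is just a few drawing calls. Here is a Python/OpenCV sketch of the idea; the positions, spike events and color table are hypothetical stand-ins for the tracking stream and the incoming OSC color codes.

# Paint the animal's current position with the color of whichever cell fired,
# accumulating the dots into a place-field-style image.
import cv2
import numpy as np

cell_colors = {0: (255, 0, 0), 1: (0, 255, 0), 2: (0, 0, 255)}   # BGR per cluster
canvas = np.zeros((480, 640, 3), dtype=np.uint8)                  # accumulates dots

def on_spike(position, cell_id):
    """Draw a dot at the tracked position, colored by the cell that fired."""
    x, y = int(position[0]), int(position[1])
    cv2.circle(canvas, (x, y), radius=3, color=cell_colors[cell_id], thickness=-1)

# Fake a few spikes at random positions to show the effect.
for _ in range(200):
    pos = np.random.rand(2) * [640, 480]
    on_spike(pos, np.random.randint(0, 3))

cv2.imshow("online place fields", canvas)
cv2.waitKey(0)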
I'm taking the liberty of forwarding this also to the Bonsai mailing list, as it turned out to be a useful synthesis of a very frequently asked category of questions :-) Feel free to sign up if you have future questions about how to use Bonsai.
Cheers and hope this helps. Thanks for the feedback,
Gonçalo