Yes, I guess a more standardized example of simultaneous ephys and image processing is a bit overdue.
Attached you can find two workflows that demonstrate basic ephys and image acquisition and processing scenarios. I tend to prefer keeping them as separate Bonsai workflows, logically and sometimes even physically. The reason to keep them separate is that you may need to stop and start the ephys independently from the video (e.g. if one of the amplifier connectors jumps out unexpectedly). If you prefer to have them together in the same workflow, the solution is simple: open a new Bonsai workflow and drag both files inside; two nested workflows will be created, one for each of the parallel tasks.
Ok, so what's in these workflows? To run them successfully you need specific hardware and software arrangements. Starting with the video:
Video Workflow:
* Hardware: PointGrey camera and FlyCapture software installed and configured. The workflow records both the acquired video stream and specific image metadata. In order for the metadata to be properly acquired, it needs to be activated in the FlyCapture control panel, under "Advanced Camera Settings". You should tick the boxes corresponding to the frame counter and GPIO pin state.
* Outputs: This workflow outputs 3 files:
- video.avi: the compressed video output from the camera
- video.csv: a text file containing the image metadata (one line per frame; the columns are the frame counter and the state of GPIO pins 0-3)
- tracking.csv: a text file containing tracking information about the largest segmented object from the video (one line per frame)
You can play around with the various parameters. The goal of the metadata is to provide possible synchronization points with the ephys. The frame counter tells you if and where you've dropped frames for some reason (you'll see a jump in counter values). The GPIO pins can be used to pass synchronization pulses to the camera using the GPIO cable. The tracking provided here is very basic, using just contrast segmentation and tracking the X, Y and angle position of the detected object. More sophisticated techniques include background subtraction, color segmentation and others; these can be inserted before the FindContours node.
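As a minimal sketch of how you might use the counter column offline, here is some Python that flags dropped frames from the counter values in video.csv (the wrap-around value is an assumption; check what your camera model actually uses):

```python
import numpy as np

def find_dropped_frames(counter, wrap=2**32):
    """Return indices of frames preceded by a gap, i.e. where the
    embedded frame counter jumped by more than one. The counter is
    assumed to wrap around at `wrap` (check your camera model)."""
    counter = np.asarray(counter, dtype=np.int64)
    diffs = np.diff(counter) % wrap          # handle wrap-around
    return np.nonzero(diffs > 1)[0] + 1      # frame index after each gap

# Example: a counter sequence where two frames were lost after index 2
drops = find_dropped_frames([10, 11, 12, 15, 16])
print(drops)  # → [3]
```

In practice you would load the counter column from video.csv first, e.g. with numpy.loadtxt, and pass it to this function.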
Ephys Workflow:
* Hardware: RHD2000 evaluation board or OpenEphys acquisition board. For this workflow to work, you also need the proper FPGA bitfile (see FAQ). The workflow records output data from multiple amplifiers, so you can connect headstages to the board for simultaneous recording of up to 256 channels. It also records data from the board ADCs and digital TTL inputs.
* Outputs: This workflow also outputs 3 files:
- amplifier.bin: the raw unsigned 16-bit values acquired from the amplifiers in column-major format
- adc.bin: same for the board ADCs
- sync.bin: the raw unsigned 8-bit value representing the board TTL digital inputs
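To give an idea of how to get the amplifier data back out, here is a sketch in Python. The channel count, the interpretation of "column-major" as channel values interleaved per time sample, and the offset-binary scaling (0.195 uV/bit around 32768, which is what Intan amplifiers typically use) are all assumptions you should check against your own setup:

```python
import numpy as np

NUM_CHANNELS = 64  # hypothetical; set to your actual channel count

def load_amplifier(path, num_channels=NUM_CHANNELS):
    """Load amplifier.bin as a (samples, channels) array in microvolts.
    Assumes column-major storage means the channel values for each
    time sample are stored contiguously (interleaved)."""
    raw = np.fromfile(path, dtype=np.uint16)
    data = raw.reshape(-1, num_channels)
    # Assumed offset-binary encoding: zero volts at 32768, 0.195 uV/bit
    # (check your amplifier datasheet).
    return (data.astype(np.int32) - 32768) * 0.195
```

The same pattern works for adc.bin (uint16) and sync.bin (uint8), with the scaling adjusted accordingly.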
The workflow also provides two basic visualizations: LFP and Spikes. For each of them you need to select the channel (or channels) you want to visualize. By default only one channel is visualized, to ensure good performance; visualizing 256 channels simultaneously at 30 kHz may prove challenging. Future versions of Bonsai will try to address this with more optimized filtering and visualization routines, but for now it should be no problem to visualize somewhere between 32 and 64 channels simultaneously. The ADC and TTL inputs from the board can be used to record extra sensors and signals synchronously with the neural data. This can again be used to solve synchronization between video and ephys, either by passing signals from the camera to the ephys system, or from an external signal source (e.g. an Arduino) to both of them.
As long as some kind of event is reliably recorded in both datasets simultaneously, aligning the data later should be no problem.
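To make the alignment idea concrete, here is a sketch in Python that finds rising edges of a shared sync pulse in both streams: one copy recorded in the board TTL inputs (sync.bin) and one in the per-frame GPIO column of video.csv. The bit position and the example pulse trains are made up for illustration:

```python
import numpy as np

def rising_edges(signal, bit=0):
    """Indices where the given bit of a digital signal goes 0 -> 1."""
    level = ((np.asarray(signal) >> bit) & 1).astype(np.int8)
    return np.nonzero(np.diff(level) == 1)[0] + 1

# Example: the same pulse train seen by both systems.
ephys_ttl = [0, 0, 1, 1, 0, 0, 1, 1, 0]   # sync.bin samples (e.g. 30 kHz)
frame_gpio = [0, 1, 0, 1, 0]              # GPIO column of video.csv

ephys_events = rising_edges(ephys_ttl)    # sample indices of the pulses
frame_events = rising_edges(frame_gpio)   # frame indices of the pulses
print(ephys_events, frame_events)         # → [2 6] [1 3]

# Pairing the n-th pulse in each stream gives (sample, frame) anchor
# points, from which a simple linear fit maps frames to ephys time.
anchors = list(zip(ephys_events, frame_events))
```

The key assumption is that no pulses were missed in either stream; the frame counter check above is one way to verify that on the video side.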
Hope you find this helpful. There is of course much to improve on these basic workflows, but they should cover a large swath of the basics.
Cheers,
Gonçalo