Hi Reese and welcome to the forums! I'll try to help you debug the situation. In general, the approach will be to benchmark each of the individual processing components until we pinpoint exactly which part is causing the slowdown.
I am attaching a modified version of your workflow where I simply grouped a couple of common nodes at the beginning (all the branches seem to start with the exact same crop) and replaced the final Zip operation on the AnalogInput node with a CombineLatest. This last modification is potentially relevant: Zip waits for matching pairs from both inputs, so if the Arduino input is slower or simply out of phase with the camera input, unmatched items will queue up and may cause memory and/or performance issues. CombineLatest avoids this by pairing each frame with the most recent analog sample instead.
It's worth a try, although I don't believe this alone will necessarily address your problem.
I'm curious about your report that changing the VideoWriter frame rate property prevented the workflow from crashing. This sounds strange, because that value only controls the playback speed of the recorded video; it does not influence the encoding process at all, which always happens at the frame rate of the input video stream. Can you confirm that this still happens with your final workflow?
Also, you mentioned that you can run your Python loop faster than 100 Hz. Can I ask how these measurements were performed? Were they run in Bonsai with offline video streams, or outside in another Python environment? If outside, how was the image processing handled?
Finally, to help with debugging, I suggest using the PointGrey camera's embedded hardware frame counter, which gives an unambiguous way of measuring capture performance. You can enable it by going to the "Advanced Camera Settings" tab in the PointGrey camera configuration dialog and making sure that the "Frame counter" checkbox is checked under the "Embedded Image Information" section. I have included a node in the modified workflow that extracts this information and records it to a CSV file called "frame_counter.csv". By looking at the difference between consecutive frame counter values in the CSV file, you will know precisely how many frames are being dropped, and when. This is the preferred way of debugging performance in PointGrey video acquisition in general.
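If it helps with the analysis, here is a small Python sketch (not part of the workflow itself; it assumes the frame counter values are in the first column of the CSV) that reads "frame_counter.csv" and reports dropped frames from the gaps between consecutive counter values:

```python
import csv

def count_dropped_frames(path):
    """Read a CSV of embedded frame counter values (first column) and
    report how many frames were dropped between consecutive rows."""
    with open(path, newline="") as f:
        counters = [int(row[0]) for row in csv.reader(f) if row]
    # A gap larger than 1 between consecutive counters means frames were lost.
    drops = [
        (i, b - a - 1)
        for i, (a, b) in enumerate(zip(counters, counters[1:]), start=1)
        if b - a > 1
    ]
    total = sum(n for _, n in drops)
    return total, drops  # total dropped, plus (row index, gap size) pairs
```

Any gap larger than 1 between consecutive counters means frames were lost at that point in the recording, so this tells you both how many frames dropped and where in the session it happened.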
In any case, assuming no other issue, I can see three potential bottlenecks in your image processing pipeline:
1) Video recording: encoding large-resolution color video can be an extremely CPU-intensive process. I would say this is probably the most costly step in your whole workflow. If your machine has multiple fast CPU cores, it should still be able to keep up at your resolution. However, one easy debugging step would be to remove the VideoWriter and record only the hardware frame counter, so we can get an idea of how much this step is impacting the rest of your processing.
If this turns out to be a problem, you can choose either to reduce your resolution, encode in grayscale, or change your video encoding strategy, possibly switching to a faster codec with FFmpeg as described in this post. Let me know if this turns out to be the issue and I can help you transition to whatever method fits best.
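Purely as an illustration of the codec trade-off (this is not something Bonsai configures for you, and the file names and settings below are placeholders), faster H.264 presets in FFmpeg use dramatically less CPU in exchange for larger files. A sketch of such an invocation, built from Python:

```python
import subprocess

def build_ffmpeg_command(src, dst, preset="ultrafast", crf=23):
    """Build an FFmpeg command that trades file size for encoding speed.
    Faster presets use much less CPU at the cost of larger output files."""
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "libx264",
        "-preset", preset,   # ultrafast..veryslow: speed vs. size trade-off
        "-crf", str(crf),    # constant-quality factor (lower = better quality)
        dst,
    ]

# To actually run it (requires ffmpeg on the PATH):
# subprocess.run(build_ffmpeg_command("raw_capture.avi", "session01.mp4"), check=True)
```

The same preset/CRF idea applies whether you re-encode offline afterwards or configure the encoder used during acquisition.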
2) How are you converting the colored video from the PointGrey? Usually these cameras acquire their raw color data through a Bayer mask and then use a debayering algorithm to reconstruct the color image. Interestingly, we have found that the debayering algorithm running in the PointGrey driver is actually slower than taking the raw Bayer image and running the conversion directly in Bonsai. If you upgrade your FlyCapture package to the latest version (2.2.0), you will be able to set the color processing method of the FlyCapture node to "NoColorProcessing". This will retrieve the raw Bayer array, which you can then convert back to color in Bonsai using the ConvertColor node with one of the Bayer*G2Bgr options.
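To give an intuition for what that conversion does, here is a toy pure-Python demosaic (my own simplification, assuming an RGGB mosaic and collapsing each 2x2 tile into a single output pixel; the real ConvertColor node uses optimized interpolation to produce a full-resolution image):

```python
def demosaic_rggb(mosaic):
    """Toy demosaic: collapse each 2x2 RGGB tile into one BGR pixel.
    `mosaic` is a list of rows of raw sensor values; the number of rows
    and columns must be even. Real debayering instead interpolates the
    missing color samples at every pixel, keeping full resolution."""
    out = []
    for y in range(0, len(mosaic), 2):
        row = []
        for x in range(0, len(mosaic[y]), 2):
            r = mosaic[y][x]                          # top-left: red
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # two greens
            b = mosaic[y + 1][x + 1]                  # bottom-right: blue
            row.append((b, g, r))  # BGR channel order, as OpenCV uses
        out.append(row)
    return out
```

The point is just that debayering is simple per-pixel arithmetic, which is why doing it on the host in Bonsai can outperform the driver's implementation.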
3) I realize you need to run two processing algorithms in parallel on each full frame, one on the color image and the other in grayscale. It would be good to know the relative cost of each of these branches to your overall processing. The easiest way to find out is to systematically delete each branch, run the workflow with only one of them at a time, and record the effect on the hardware frame counter.
It would be good to know the impact of each of these strategies on the results recorded in the "frame_counter.csv" file. If you could run some experiments along these lines and let me know the results, that would be immensely helpful in trying to understand what is going on.
Hope this helps!