On Monday, 27 April 2015 14:56:10 UTC+1, Johannes Larsch wrote:
(...) hooked it up to a Python transform that generates the stimulus image using OpenCV drawing functions.
I want to store the video of the animal and, separately, the playback image stream.
Nice!
Would it be enough to save the parameters that generated the playback stimulus image along with a timestamp?
You could reconstruct the playback image later for analysis and saving just the parameters is the most compressed representation you could possibly store ;-)
The required resolution and frame rate are yet to be determined. I was initially aiming at 1 MP video (and hoping 4 MP would also work). My video projector runs at 60 Hz, so that frame rate seemed like a good target for the closed loop. Grayscale is fine. I am using a recent Xeon processor (E5-1630, 16 GB RAM, and an NVIDIA GTX 960 card).
I haven't found a good way of profiling bonsai execution speed.
The easiest way to profile speed in Bonsai is to use the TimeInterval node. Basically it measures the interval (in seconds) between two incoming events and computes the result with sub-millisecond precision. You can place it after any node where you need to measure performance. Let me know if an example would be useful.
Another way is of course to record timestamps and then measure deltas from these.
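As a sketch of the timestamp approach, the deltas can be measured offline with a couple of lines of Python (the timestamp values below are invented for illustration, e.g. as loaded from a Bonsai CSV output):

```python
import numpy as np

# Hypothetical per-frame timestamps (seconds) recorded with Timestamp.
timestamps = np.array([0.000, 0.0167, 0.0334, 0.0501, 0.0671])

deltas = np.diff(timestamps)  # inter-frame intervals
print("mean interval: %.4f s" % deltas.mean())
print("jitter (std):  %.4f s" % deltas.std())
print("effective fps: %.1f" % (1.0 / deltas.mean()))
```

The standard deviation of the deltas is usually the number to watch in a closed loop: the mean tells you throughput, the jitter tells you how much the loop timing wanders.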
The workflow described above seemed to run smoothly at 100 fps/1 MP using XVID encoding via VideoWriter without FFmpeg (great!), but there was a delay of 50-80 ms between the recording and the playback which I would like to minimize. CPU load is ~45%.
This was measured using Timestamp, I presume? Usually, yes, I would expect delays on the order of tens of ms. It's very tricky to get jitter below 10 ms on any software-based loop in a non-RT OS, but you should be able to bring it a bit closer to that number, yes. If you really need to go below 10 ms, you will probably need to incorporate a hybrid microcontroller solution, which we can also work out.
Using 1920x1080, I reached up to 60 fps; at faster frame rates I received an insufficient memory error (buffering?)
It was probably a buffer overflow, yes. Are you using Bonsai 32- or 64-bit? Windows 32-bit processes have a default limit of 2 GB of heap memory per process. VideoWriter uses buffering to avoid delaying the image processing pipeline, but it requires memory to store images on the encoding queue.
64-bit Bonsai should in principle not give this error unless you exceed the page file size limit (HD space).
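For a rough sense of how quickly that encoding queue can hit the 32-bit heap limit, a back-of-envelope calculation (the 60 fps encoder throughput here is an assumed number for illustration, not a measurement):

```python
# How fast does the VideoWriter queue grow when frames arrive faster
# than the encoder can drain them?
width, height = 1920, 1080
bytes_per_frame = width * height       # mono8: 1 byte per pixel
fps = 100                              # camera frame rate
encode_fps = 60                        # assumed encoder throughput

backlog_per_second = (fps - encode_fps) * bytes_per_frame  # bytes/s queued
limit = 2 * 1024**3                    # ~2 GB 32-bit process heap limit
seconds_to_overflow = limit / backlog_per_second
print("queue grows by %.1f MB/s" % (backlog_per_second / 1e6))
print("hits 2 GB after ~%.0f s" % seconds_to_overflow)
```

With these assumed numbers the queue grows by roughly 83 MB/s and exhausts 2 GB in under half a minute, which is consistent with an out-of-memory error appearing shortly after raising the frame rate.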
In parallel I was playing with video encoding settings in StreamPix and discovered that Norpix's proprietary CUDA-optimized H.264 codec quadrupled the achievable frame rate from ~30 to ~120 fps @ 1920x1080 on my computer (not sure why it only gave 30 fps without GPU acceleration, but the 120 fps WITH the GPU were impressive).
Yeah, GPU encoding sounds nice, I will just need more time to add it in properly.
Also, I noticed that the animal tracking part of my workflow became slow if the input video was > 1 MP (I anecdotally tried 1.5x1.5 MP). Again I was thinking the GPU libraries of OpenCV might speed up thresholding, contour extraction, etc.
Again, I've noticed 64-bit OpenCV is much faster for large images. If you're using 32-bit I would recommend testing with Bonsai (x64).
Since you asked about color format:
Is there a way to read images from the camera as grayscale? My camera is configured to send mono8. When I use CameraCapture, the Bonsai ImageVisualizer shows me u8 as the image format, which I then convert to gray before applying a threshold.
Can I read grayscale from the camera to begin with?
Unfortunately, OpenCV's CameraCapture always sends out RGB images. Which camera are you using? We have been using PointGrey monochrome cameras as our standard imaging workhorse. The dedicated Bonsai node for these sends out grayscale images directly, and it's much faster, yes.
Should I be using CameraCapture or VideoCaptureDevice? (differences?)
VideoCaptureDevice supports a wider range of DirectShow devices (e.g. framegrabber cards) and gives more access to native camera configuration windows. The disadvantage is that AForge sources enforce buffering to avoid dropping any frames. While this sounds nice conceptually, if your processing is not exactly balanced, you may build up delay over time... I may change this eventually.
Is there documentation of the CameraCapture parameters? I made sense of some of the obvious ones such as FrameWidth/Height etc., but there are a lot more...
You can find a description of all the capture properties in the OpenCV.Net documentation.
I will try your suggestion regarding VBR; that sounds promising, since most of my video is indeed not changing at all for most of the time. Do you know a good reference that summarizes video encoder optimization for speed?
This is in general hard to find; people tend to be very obscure when talking about this... but you can find some help around the ffmpeg forums. There is no ideal summary reference, though. I got most of these parameters through trial and error, and sometimes by looking in the ffmpeg documentation.
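As one hedged starting point (not from the thread itself): when trading quality for encoding speed with libx264, the usual levers are `-preset`, `-tune` and `-crf`. Assembled here as an argument list for inspection, without actually running ffmpeg (the file names are placeholders):

```python
# Illustrative x264 options biased toward encoding speed; typical
# starting points, not a tuned recommendation.
ffmpeg_args = [
    "ffmpeg", "-y",
    "-i", "input.avi",
    "-c:v", "libx264",
    "-preset", "ultrafast",   # biggest single speed lever in x264
    "-tune", "zerolatency",   # drops lookahead/B-frame buffering
    "-crf", "23",             # quality-targeted VBR instead of fixed bitrate
    "output.mp4",
]
# import subprocess; subprocess.run(ffmpeg_args)  # uncomment to transcode
```

CRF mode fits the "mostly static video" case well, since the encoder spends bits only when the image actually changes.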
Thanks for spending your time on this!
best,
J
Thanks, it's always nice to see people push Bonsai to its limits. That's exactly how it will keep improving in the long term!
Best,
G