problems with webcams


p_lin

Sep 11, 2012, 10:32:07 AM
to beagl...@googlegroups.com
Hi,

I'm running Ubuntu 11.10 and I'm trying to develop an application using OpenCV and a webcam. I run into problems whenever I try resolutions higher than 320x240. I tried the PS3 Eye (driver: gspca_ov534) and the Logitech C260 (driver: uvcvideo). At 320x240 everything seems to work fine and I get a saved image, with only the occasional "select timeout" error.

However, when I try to run at 640x480 I get this output (with a black image file):

VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
Resolution set, entering loop... 
select timeout
select timeout
Saving Image


Any ideas on how to fix this error? It would be great to be able to save higher-resolution images... are there other ways to grab a still from the webcam?


    CvCapture* capture = cvCaptureFromCAM(-1);
    if (!capture)
    {
        printf("error opening capture\n");
        return 0;
    }

    // set camera resolution to low, perform object detection,
    // then set resolution to high and capture images
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 640);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 480);

    printf("Resolution set, entering loop... \r\n");

    // query twice so the camera has a frame ready at the new resolution
    IplImage* image = cvQueryFrame(capture);
    image = cvQueryFrame(capture);
    if (!image)
        return 0;

    printf("Saving Image\n");
    cvSaveImage("cam.jpg", image);  // save the image
    //start = time(NULL);  // get current time in seconds

    cvReleaseCapture(&capture);






Martin Fox

Jan 29, 2013, 1:30:32 AM
to beagl...@googlegroups.com
You can ignore the QUERYMENU error messages. Some cameras don't support this optional call, and OpenCV complains when it doesn't get a response. When it does get a response, it just throws the information away.

It has been several months since I last tried capturing OpenCV images on my dusty BeagleBone (using Ubuntu, not Angstrom). Back then I encountered two problems. One was that the USB machinery in the kernel wasn't up to snuff and could not deliver images across the USB bus quickly enough. I have no idea if that problem has been fixed, since my webcam project is on the back burner.

The second problem was that the OpenCV code was asking Video4Linux (v4l) to deliver RGB frames. v4l was in turn grabbing JPEG-compressed images from the camera and decompressing them with a very slow software JPEG decoder. No doubt that decoder uses a lot of floating point and is fast enough on desktop machines, but it is terribly slow on the BeagleBone and BeagleBoard-XM.

I was able to get much better capture frame rates once I wrote my own code to capture uncompressed YUV images and hand-convert them to RGB or grayscale. My C++ classes for doing this are hosted in a Mercurial repository at https://bitbucket.org/beldenfox/cvcapture. They are pretty minimal; I did only as much as I needed to get things working for my specific camera. You can dig up similar sample code on the web to fill in any gaps.

Martin

PS. I tracked down the JPEG performance problem using the 'perf' utility which was easy to install and use under Ubuntu. Highly recommended.

On Sunday, January 27, 2013 7:21:32 PM UTC-8, Michael Darling wrote:
I have had similar problems:  http://beagleboard.org/project/stache

Has anyone had success using the PS3 Eye with OpenCV on the BeagleBone? I have been able to capture images at 320x240 resolution, but I need to take advantage of the PS3 Eye's higher resolution (640x480) and frame rate (60 Hz) for my project. I am brand new to the BeagleBone and embedded Linux, but I have read that a driver patch is required to use the PS3 Eye at higher resolutions and frame rates. [1] Can anyone confirm whether or not I should be able to capture frames at 640x480 without a patch?

I am using the C++ interface of OpenCV and have tried setting the resolution through VideoCapture::set,

cv::VideoCapture cap(0);
cap.set(CV_CAP_PROP_FRAME_WIDTH,640.0);
cap.set(CV_CAP_PROP_FRAME_HEIGHT,480.0);

but this returns the following errors (the program runs successfully if the resolution is set to 320x240, or if the properties are not set at all):

VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument

select timeout
select timeout
^C

I am using a Rev A6a board running the latest BeagleBone image :

root@beaglebone:~# cat /etc/angstrom-version 
Angstrom v2012.05 (Core edition)
Built from branch: denzil
Revision: 38faf241f6666527870da99f5560c92ae1b5ee8c
Target system: arm-angstrom-linux-gnueabi

Any advice is greatly appreciated!
-Mike

[1] http://www.electronsonradio.com/2011/05/openembbeded-linux-kernel-adding-pseye-patched-modules/

Michael Darling

Mar 1, 2013, 1:03:19 PM
to beagl...@googlegroups.com
Thanks for your insights, Martin.
 

As an update, I have dug into the OpenCV source code a little and believe I may have found the source of my problem. In /OpenCV/modules/highgui/src/ there are two files named "cap_v4l.cpp" and "cap_lib_v4l.cpp" that seem to handle the camera through the Video4Linux API. Both of these files query the camera's capabilities (including the maximum height/width supported by the camera). They do this by first determining whether the device is V4L or V4L2 compatible and then doing something like:


/* Query the newly opened device for its capabilities */
//# if V4L:
ioctl(capture->deviceHandle, VIDIOCGCAP, &capture->capability)
//# if V4L2:
xioctl (capture->deviceHandle, VIDIOC_QUERYCAP, &capture->cap)


And later, the OpenCV code attempts to readjust the resolution to be less than or equal to the maximum supported resolution. (I'm working with version 2.4.2 of OpenCV -- available here: http://sourceforge.net/projects/opencvlibrary/files/?source=navbar)


if (capture==0) return 0;

if (w>capture->capability.maxwidth) {
w=capture->capability.maxwidth;
}

if (h>capture->capability.maxheight) {
h=capture->capability.maxheight;
}

capture->captureWindow.width=w;
capture->captureWindow.height=h;

//# (other stuff)
}


Again, I know that my camera (the PS3 Eye) supports 640x480 resolution, and I have confirmed this by invoking "v4l2-ctl -V" in a terminal on the BeagleBone, which reports 320x240 and 640x480 as supported resolutions. This leads me to believe that OpenCV is not querying the camera's resolution correctly. I have tried commenting out the lines that resize the height/width and rebuilding OpenCV from source on the BeagleBone using the simple cmake / make / make install instructions provided here (http://opencv.willowgarage.com/wiki/InstallGuide_Linux), but I end up with some kind of "Out of memory" error about 76% of the way through the build. I have started looking into cross-compiling OpenCV for the BeagleBone, but I haven't got a clue what I am doing, and I have no idea if commenting out these lines would solve my problem anyway.

Can anyone offer any advice on how I should proceed with this?  Many thanks to anyone who takes a look at this.

(I'm not sure that v4l2-ctl -V is totally accurate according to this:  http://www.mattfischer.com/blog/?p=211 )

Martin Fox

Mar 4, 2013, 7:07:28 PM
to beagl...@googlegroups.com
The sane way to approach this (for me) was to take one of the files you mentioned (cap_lib_v4l.cpp) and get it to compile outside of OpenCV. For the most part it's just standard v4l2 capture code adapted from readily available sample code; only at the very end is the image data packaged with an OpenCV wrapper. It's all based on publicly available v4l and OpenCV headers, so it can be isolated.

The code I pointed to earlier (https://bitbucket.org/beldenfox/cvcapture) is basically the v4l2-compatible path of cap_lib_v4l.cpp, cleaned up, pared down, and packaged with its own CMake files so it can be compiled outside of OpenCV. All you need is the development files for libv4l2, which in Ubuntu you can get by installing the libv4l-dev package. Even if the code doesn't work for you, it at least gives you a blob of v4l video-capture code that compiles outside of OpenCV itself. You can build up from there by copying and pasting from the OpenCV sources.

You don't mention whether you're getting the QUERYMENU error messages. If you are, you're using the v4l2-compatible path in the code, in which case the maxwidth and maxheight checks are not being applied (based on my somewhat hasty reading of the code).

I have managed to compile OpenCV 2.4.2 on a BeagleBoard-XM with an external hard drive. It took almost two hours. I really doubt you can compile it on a BeagleBone. I gave up on trying to cross-compile it; OpenCV relies on a bunch of external libraries (like v4l and jpeg) and I never figured out how to download the necessary libraries and insert them into a cross-compilation environment.

Michael Darling

Mar 5, 2013, 8:02:00 PM
to beagl...@googlegroups.com
Thanks for your prompt reply,

But as it turns out, I don't think OpenCV is my problem after all. I tried capturing frames from the command line using ffmpeg and still had bad results. As a point of comparison, I captured some frames on my desktop's Ubuntu partition through OpenCV: no problems at all, and even the 320x240 frames were much clearer than anything I could get from the BeagleBone.

To see what was going on, I ran "dmesg" on my desktop as well as the BeagleBone. I noticed that my desktop was interfacing with the webcam through something called ehci_hcd, while the BeagleBone was using musb-hdrc.


BeagleBone:
[    5.277099] PHY 0:01 not found
[    5.289520] ADDRCONF(NETDEV_UP): eth0: link is not ready
[   76.481750] usb 1-1: new high-speed USB device number 2 using musb-hdrc
[   76.624176] usb 1-1: New USB device found, idVendor=1415, idProduct=2000
[   76.624206] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[   76.624237] usb 1-1: Product: USB Camera-B4.09.24.1
[   76.624237] usb 1-1: Manufacturer: OmniVision Technologies, Inc.
[   77.393920] gspca_main: v2.14.0 registered
[   77.401489] gspca_main: ov534-2.14.0 probing 1415:2000
[   77.563629] usbcore: registered new interface driver ov534


Desktop:

[  577.672908] ftdi_sio 1-2.1:1.1: device disconnected
[  583.768329] usb 1-2: new high-speed USB device number 5 using ehci_hcd
[  583.903707] usb 1-2: New USB device found, idVendor=1415, idProduct=2000
[  583.903718] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[  583.903725] usb 1-2: Product: USB Camera-B4.09.24.1
[  583.903732] usb 1-2: Manufacturer: OmniVision Technologies, Inc.
[  583.985507] gspca_main: v2.14.0 registered
[  583.990621] gspca_main: ov534-2.14.0 probing 1415:2000
[  585.594388] usbcore: registered new interface driver snd-usb-audio
[  585.595905] usbcore: registered new interface driver ov534



It looks like these are some kind of Linux USB drivers. A call to "modinfo" confirmed that the ehci_hcd driver was not installed on the BB but was on my desktop. I found an (alleged) solution here: https://groups.google.com/forum/?fromgroups=#!topic/beagleboard/sgCwaP5RVUo relating to the ehci_hcd driver provided by Krcevina. I have not yet attempted to apply the fix. (I am brand new to Linux, and learning as I go has been quite slow.)

Do you have any knowledge of the Linux USB drivers?  If so, do you think this could potentially be the culprit?  My only rationale is that the USB is being slowed to USB 1 speeds instead of using USB 2.0.  Thoughts?


Thank you once again for taking the time to help me out.  I appreciate it very much!
-Mike


Chris Loughnane

Mar 6, 2013, 12:55:02 PM
to beagl...@googlegroups.com
Not quite an answer, but I'm experiencing similar problems. I just posted the specifics of the code I'm running to StackOverflow. Really trying to get this to go.

Martin Fox

Mar 6, 2013, 10:54:35 PM
to beagl...@googlegroups.com
The poor floating-point performance I mentioned in my initial post will probably affect just about any video-capture path that requests JPEG frames from the camera and decompresses them. It is likely that ffmpeg is doing this by default; it's a reasonable thing to do on a desktop machine since it reduces USB bandwidth, and desktop machines generally have blazing-fast floating-point units (compared to the dismal FPUs on the Beagles). It is worth doing some research to determine whether ffmpeg can be told to grab YUV frames from the camera so you can eliminate that bottleneck. For OpenCV work the only solution I found was to write my own capture code. (By the way, you can track down software issues like this using the "perf" utility.)

Several months ago when I was working on my project there were well-known kernel issues with the USB DMA machinery on the BeagleBone. I have no idea where that stands now. My initial work was on the BeagleBoard-XM which has the floating-point bottleneck but not the USB one and my project was put on hold just as I began working with the Bone.

Michael Darling

Mar 9, 2013, 7:47:00 PM
to beagl...@googlegroups.com

I get the following output to my screen:

VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QBUF: Invalid argument
VIDIOC_QBUF: Invalid argument
VIDIOC_QBUF: Invalid argument
VIDIOC_QBUF: Invalid argument
VIDIOC_QBUF: Invalid argument
VIDIOC_QBUF: Invalid argument

and I do get a 640x480 frame. Unfortunately it is completely garbled (attached).

Michael Darling

Mar 9, 2013, 7:49:33 PM
to beagl...@googlegroups.com
I am switching to a Logitech c270 webcam, by the way, to see if I can get that working first.  (It is a supported UVC device.)

Michael Darling

Mar 11, 2013, 7:26:33 PM
to beagl...@googlegroups.com
Hi Martin,

I hate to ask such an embarrassingly simple question -- but I am in completely unfamiliar territory and need to get this working ASAP so I can get back on track with my thesis:

I'm convinced that OpenCV's handling of the V4L2 API is indeed the issue. (I have found examples of people capturing higher-res frames on the BeagleBone without OpenCV.) So I would like to try bypassing OpenCV's buggy camera-capture functionality by using your custom code. However, I'm not sure what exactly I need to do to use your code in my project. Do I need to use CMake to build a binary of some sort? When I try, I get the error below. I know that the OpenCV libraries and includes are installed (by default with the stock Angstrom image I am using).

Instead of using CMake, can I simply add an #include "OCVCapture.h" line to my main file, modify my code to capture frames through an instance of your OCVCapture object, and compile everything with g++ as I usually do?
i.e.  g++ -Wall MyCode.cpp OCVCapture.cpp -o MyCode -I /usr/include -L /usr/lib -lopencv_core -lopencv_highgui -lopencv_imgproc

Also, what frame rate can I expect with your code? Ideally, I would like around 10 frames per second.

Thanks a ton for your help!  If I cannot get this to work, I may need to move to another board.
-Mike



CMake Error at CMakeLists.txt:15 (find_package):
  Could not find module FindOpenCV.cmake or a configuration file for package
  OpenCV.

  Adjust CMAKE_MODULE_PATH to find FindOpenCV.cmake or set OpenCV_DIR to the
  directory containing a CMake configuration file for OpenCV.  The file will
  have one of the following names:

    OpenCVConfig.cmake
    opencv-config.cmake

Martin Fox

Mar 12, 2013, 3:34:11 PM
to beagl...@googlegroups.com
Yes, just compile my code into your project as you describe. It needs to link against the v4l2 library, so add -lv4l2 to your command line and it should work. I compiled my sample application on Ubuntu using just this line:

g++ camera.cpp OCVCapture.cpp -lopencv_core -lopencv_highgui -lv4l2

I have no idea what frame rate you can expect. I didn't take measurements back when I was working on the Bone. At the time the USB drivers were in such poor shape there didn't seem to be much point so I just worked with a BeagleBoard-XM instead.

Michael Darling

Mar 12, 2013, 3:57:36 PM
to beagl...@googlegroups.com
Update: I set up a simple OpenCV script to capture frames using the tools Martin Fox developed. 320x240 frames are captured with no problems, but no luck at 640x480 -- same select-timeout errors. The result was the same for all three cameras I tried:

Capture: capabilities 5000001
Capture: channel 0
Capture: input 0 ov534 0
Capture: format YUYV YUYV
Capture: format RGB3 RGB3
Capture: format BGR3 BGR3
Capture: format YU12 YU12
Capture: format YV12 YV12
Capture: dimensions 640 x 480
Capture: bytes per line 1280
Capture: frame rate 30 fps
Capture: 4 buffers allocated
Capture: buffer length 614400
Capture: buffer length 614400
Capture: buffer length 614400
Capture: buffer length 614400
Capture 640 x 480 pixels at 30 fps
Capture: select timeout
Capture: select timeout

Any other ideas? 

Martin

Mar 13, 2013, 10:57:21 AM
to beagl...@googlegroups.com
Just out of curiosity:

Have you had a look at the "motion" package (http://www.lavrsen.dk/foswiki/bin/view/Motion/WebHome)?

I am using this on a beaglebone A3 board running ubuntu. Motion can be installed using "sudo apt-get install motion". On my board it can capture 640x480 images without problems.

I am not sure if motion uses OpenCV or how it grabs images from the camera. 

But it may be worth a look. If you can get it to work for your camera and board, take a look at how it does the capture; I believe the source code is available.

Martin

Michael Darling

Mar 13, 2013, 5:16:38 PM
to beagl...@googlegroups.com
I haven't, but I'll start looking into it.  Thanks for the recommendation!

Michael Darling

Apr 2, 2013, 4:19:09 PM
to beagl...@googlegroups.com
Hi Martin,

Sorry it took me so long to get back. I was having problems getting a stable version of Ubuntu installed on my board.  A new version was just released and that solved my problems.

I just installed the motion package. I copied the default motion.conf file to my working directory, renamed it, and changed the width and height values to 320 and 240, respectively. Everything works as expected using the PS3 Eye:

ubuntu@arm:~/Motion$ motion -c loResMotion.conf
[0] Processing thread 0 - config file hiResMotion.conf
[0] Motion 3.2.12 Started
[0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478784
[0] Thread 1 is from hiResMotion.conf
[0] motion-httpd/3.2.12 running, accepting connections
[0] motion-httpd: waiting for data on port TCP 8080
[1] Thread 1 started
[1] cap.driver: "ov534"
[1] cap.card: "USB Camera-B4.09.24.1"
[1] cap.bus_info: "usb-musb-hdrc.1-1"
[1] cap.capabilities=0x05000001
[1] - VIDEO_CAPTURE
[1] - READWRITE
[1] - STREAMING
[1] Config palette index 8 (YU12) doesn't work.
[1] Supported palettes:
[1] 0: YUYV (YUYV)
[1] Selected palette YUYV
[1] Test palette YUYV (320x240)
[1] Using palette YUYV (320x240) bytesperlines 640 sizeimage 153600 colorspace 00000008
[1] found control 0x00980900, "Brightness", range 0,255 
[1]     "Brightness", default 0, current 0
[1] found control 0x00980901, "Contrast", range 0,255 
[1]     "Contrast", default 32, current 32
[1] found control 0x00980911, "Exposure", range 0,255 
[1]     "Exposure", default 120, current 120
[1] found control 0x00980912, "Auto Gain", range 0,1 
[1]     "Auto Gain", default 1, current 1
[1] found control 0x00980913, "Main Gain", range 0,63 
[1]     "Main Gain", default 20, current 20
[1] mmap information:
[1] frames=4
[1] 0 length=155648
[1] 1 length=155648
[1] 2 length=155648
[1] 3 length=155648
[1] Using V4L2
[1] Resizing pre_capture buffer to 1 items
[1] Started stream webcam server in port 8081
[1] File of type 8 saved to: /tmp/motion/01-20130402201014.swf
[1] File of type 1 saved to: /tmp/motion/01-20130402201014-00.jpg
[1] File of type 1 saved to: /tmp/motion/01-20130402201018-01.jpg
[1] File of type 1 saved to: /tmp/motion/01-20130402201019-01.jpg
[1] File of type 1 saved to: /tmp/motion/01-20130402201021-00.jpg


But if I change the height and width in the conf file to 640 and 480, respectively, I get the following:

ubuntu@arm:~/Motion$ motion -c hiResMotion.conf
[0] Processing thread 0 - config file hiResMotion.conf
[0] Motion 3.2.12 Started
[0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478784
[0] Thread 1 is from hiResMotion.conf
[0] motion-httpd/3.2.12 running, accepting connections
[0] motion-httpd: waiting for data on port TCP 8080
[1] Thread 1 started
[1] cap.driver: "ov534"
[1] cap.card: "USB Camera-B4.09.24.1"
[1] cap.bus_info: "usb-musb-hdrc.1-1"
[1] cap.capabilities=0x05000001
[1] - VIDEO_CAPTURE
[1] - READWRITE
[1] - STREAMING
[1] Config palette index 8 (YU12) doesn't work.
[1] Supported palettes:
[1] 0: YUYV (YUYV)
[1] Selected palette YUYV
[1] Test palette YUYV (640x480)
[1] Using palette YUYV (640x480) bytesperlines 1280 sizeimage 614400 colorspace 00000008
[1] found control 0x00980900, "Brightness", range 0,255 
[1]     "Brightness", default 0, current 0
[1] found control 0x00980901, "Contrast", range 0,255 
[1]     "Contrast", default 32, current 32
[1] found control 0x00980911, "Exposure", range 0,255 
[1]     "Exposure", default 120, current 120
[1] found control 0x00980912, "Auto Gain", range 0,1 
[1]     "Auto Gain", default 1, current 1
[1] found control 0x00980913, "Main Gain", range 0,63 
[1]     "Main Gain", default 20, current 20
[1] mmap information:
[1] frames=4
[1] 0 length=614400
[1] 1 length=614400
[1] 2 length=614400
[1] 3 length=614400
[1] Using V4L2
[1] Resizing pre_capture buffer to 1 items
[1] v4l2_next: VIDIOC_DQBUF: EIO (s->pframe 0): Input/output error
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] Error capturing first image
[1] Started stream webcam server in port 8081
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] Video device fatal error - Closing video device
[1] Closing video device /dev/video0
[1] Retrying until successful connection with camera
[1] cap.driver: "ov534"
[1] cap.card: "USB Camera-B4.09.24.1"
[1] cap.bus_info: "usb-musb-hdrc.1-1"
[1] cap.capabilities=0x05000001
[1] - VIDEO_CAPTURE
[1] - READWRITE
[1] - STREAMING
[1] Config palette index 8 (YU12) doesn't work.
[1] Supported palettes:
[1] 0: YUYV (YUYV)
[1] Selected palette YUYV
[1] Test palette YUYV (640x480)
[1] Using palette YUYV (640x480) bytesperlines 1280 sizeimage 614400 colorspace 00000008
[1] found control 0x00980900, "Brightness", range 0,255 
[1]     "Brightness", default 0, current 0
[1] found control 0x00980901, "Contrast", range 0,255 
[1]     "Contrast", default 32, current 32
[1] found control 0x00980911, "Exposure", range 0,255 
[1]     "Exposure", default 120, current 120
[1] found control 0x00980912, "Auto Gain", range 0,1 
[1]     "Auto Gain", default 1, current 1
[1] found control 0x00980913, "Main Gain", range 0,63 
[1]     "Main Gain", default 20, current 20
[1] mmap information:
[1] frames=4
[1] 0 length=614400
[1] 1 length=614400
[1] 2 length=614400
[1] 3 length=614400
[1] Using V4L2
[1] v4l2_next: VIDIOC_DQBUF: EIO (s->pframe 0): Input/output error
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] Video device fatal error - Closing video device
[1] Closing video device /dev/video0
^C[0] httpd - Finishing
[0] httpd Closing
[0] httpd thread exit
[1] Thread exiting
[0] Motion terminating

It looks like the motion package is using Video4Linux, according to the Motion homepage.  Besides the fact that I am using a Rev. A6a board, what could possibly be different in my setup compared to yours?  I am running the 2013-03-28 Quantal 12.10 version of Ubuntu for BeagleBone.

Thanks!



Michael Darling

Apr 2, 2013, 4:44:36 PM
to beagl...@googlegroups.com
Sorry, just to be clear: the conf file I used was actually called hiResMotion.conf both times; I just changed the resolution between the two runs.

Michael Darling

May 13, 2013, 1:28:09 AM
to beagl...@googlegroups.com

Hi Martin,

I'm not sure if you're still interested in helping me, but I wanted to let you know that I have finally been able to grab 640x480 frames from the PS3 webcam on my BeagleBone. I ended up using your custom capture code, since the frame-rate setting in OpenCV doesn't work. (Thanks!)

I am able to set the camera to 15 fps and capture frames without select-timeout errors; however, I end up with significant motion blur due to the low frame rate. (Again, I plan to put this system on an airplane, so that won't cut it for me.) I blindly made a couple of adjustments to your code and am able to get frames with the camera set at 30 fps if I open three instances of my program, then close two. (This is what I have to do on my Mac with the third-party macam driver for the PS3 Eye -- that's where I got the idea.) Unfortunately, even at the 30 fps setting, I really only get about 10.

For my application it is okay if I only grab frames at 10 Hz, but I need the camera operating at a high enough frame rate to eliminate motion blur. I'm not very familiar with Video4Linux or the nitty-gritty of capturing frames from a webcam, so I was wondering if you might be able to provide some guidance. Is there any way to eliminate motion blur with a slow embedded processor and just tolerate dropped frames, or am I pretty much hosed?

Thanks for any help you can provide.
-Mike



Matthew Witherwax

Aug 12, 2013, 7:48:10 AM
to beagl...@googlegroups.com
Michael,

Would you be willing to tell me how you got the PS3 Eye running at 640x480? I am running Arch Linux, and the best I can do is 320x240 even with custom capture code. The call to select() in V4L2 times out regardless of how long the timeout is. The camera's capture LED comes on when the code executes, so I suspect it is an issue with the amount of data the camera tries to bulk-transfer over USB.

Thanks for your help and work,
Matthew

Michael Darling

Aug 13, 2013, 5:00:00 PM
to beagl...@googlegroups.com
Hi Matthew,

I have been able to get 640x480, but not at a very high frame rate. I've spent a lot of time asking questions in different forums, so let me see if I can condense things down to one or two links for you. (I haven't touched the vision part of my thesis for a while, so it's all a bit foggy in my memory. I will be returning to BeagleBone vision work in the next couple of weeks, so hopefully I can be of more help soon.)

I have three posts on this page starting at May 11 (near the bottom of the page). It links to some sloppy code that I put up on GitHub, which you might find useful. I believe I was running the most recent version of Ubuntu for the BeagleBone at the time.

If I remember right, the biggest problem was that the standard OpenCV capture code does not adjust the camera frame rate like it should (i.e., if you try to set the frame rate to 30 fps, the camera actually stays at the default 60 fps setting). I haven't been able to capture 640x480 at 60 fps (possibly a limitation of the board's hardware?), so you have to settle for a lower frame rate. To actually set the frame rate you have to use a custom piece of capture code compiled along with the rest of your project.
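For reference, the V4L2 request such custom capture code uses to set the frame rate is VIDIOC_S_PARM; here is a hedged sketch of how the request struct is filled in (an illustration only, not the code from Martin's repository; the makeFrameRateRequest name is mine). Note that the driver expects a frame *interval*, i.e. the reciprocal of the desired fps.

```cpp
#include <linux/videodev2.h>
#include <cstring>

// Build the VIDIOC_S_PARM request that asks a V4L2 capture device
// for a given frame rate, expressed as a 1/fps frame interval.
v4l2_streamparm makeFrameRateRequest(unsigned fps)
{
    v4l2_streamparm parm;
    std::memset(&parm, 0, sizeof(parm));
    parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    parm.parm.capture.timeperframe.numerator = 1;    // interval = 1/fps s
    parm.parm.capture.timeperframe.denominator = fps;
    return parm;
}
```

The struct would then be passed to ioctl(fd, VIDIOC_S_PARM, &parm) on the open device; drivers are free to round the interval to the nearest rate they support, which is why reading the struct back afterwards is worthwhile.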

Let me know if you get any farther with any of that.  As I said, I will try and get back in touch within a couple of weeks when I am working with this stuff again.  Go ahead and shoot me a reminder message just in case I forget.

Best of luck.
-Mike

Michael Darling

Sep 10, 2013, 7:42:17 PM
to beagl...@googlegroups.com
Matthew,

Well, I am back to working with the BeagleBone and OpenCV, this time using the BeagleBone Black. I had hoped that the faster processor, additional RAM, and on-board eMMC might somehow make a difference, but I am not able to capture images at 640x480 resolution with the new board either. For my first attempts I have stuck with the standard Angstrom image that ships with the BBB. I have tried a few command-line Video4Linux capture utilities (fswebcam and v4l2-ctl/v4l2grab), but I cannot get images at the higher resolution.

So I know that the problem is not with OpenCV. The problem must be either in the Video4Linux2 API or in the gspca-ov534 driver built into the kernel version that the standard Angstrom build uses. Or worse, it could be a problem with the hardware itself.

I'm quite new to Linux and embedded systems in general, but I will keep trying to track down the problem. I think I will try installing the Ubuntu image for the BBB and see if that improves things, as it might use a newer kernel. It has been a long time, but I remember that my *marginal* success came after tweaking some things in the Linux kernel for the Ubuntu image and building it myself -- what a headache!

Let me know if you make any progress on this. What project are you working on that needs the PS3 Eye, anyway? (Just curious.) If you have time to help me track down the issue, I have some email correspondence and online forum posts I can share that might be helpful. If I can get this to work, the rest of my master's thesis should be smooth sailing, so I definitely have the motivation to get it figured out.


I'll keep you updated on my successes and (more likely) failures.

- Mike



Matthew Witherwax

Sep 11, 2013, 8:13:30 AM
to beagl...@googlegroups.com
Michael,

I too am using the BeagleBone Black, and I have actually spent a lot of time working on this.  A lot of what I have discovered is in this post https://groups.google.com/forum/#!topic/beagleboard/2NO62mGcSvA
To recap: to capture at 640x480 with the PS3 Eye, you need to reduce the frame rate to no more than 15 fps using v4l2-ctl. You can also do 320x240 at up to 60 fps. It seems the PS3 Eye transfers data in bulk mode, putting a lot of data on the bus; once the amount of data reaches a certain limit, you get select timeouts.
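A quick back-of-the-envelope check supports that: uncompressed YUYV is 2 bytes per pixel, so the 640x480 frame size works out to exactly the 614400-byte buffers shown in the capture logs earlier in the thread, and the raw bus load scales linearly with fps. A small sketch of the arithmetic (the helper names are mine):

```cpp
// YUYV carries 2 bytes per pixel, so frame size and raw USB payload
// follow directly from resolution and frame rate.
long yuyvFrameBytes(int width, int height)
{
    return static_cast<long>(width) * height * 2;
}

double rawMBPerSecond(int width, int height, int fps)
{
    return yuyvFrameBytes(width, height) * static_cast<double>(fps) / 1e6;
}
```

For 640x480 this gives 614400 bytes per frame (matching the logged buffer lengths), roughly 18.4 MB/s at 30 fps and 9.2 MB/s at 15 fps, which is consistent with the 15 fps limit being a bandwidth ceiling rather than a camera limitation.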

The PS3 Eye sends uncompressed images, but if compressed images are acceptable for your application, you might want to look into a camera that supports MJPEG compression. Letting the camera compress the frames as JPEGs greatly reduces the amount of data sent over USB, as well as the CPU usage. I am currently capturing still images at 1920x1080 with the FPS set to 30 using the Logitech C920.
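For what it's worth, at the V4L2 level switching to MJPEG is just a different pixel format in the VIDIOC_S_FMT request; a sketch of the request struct (assuming the driver actually advertises V4L2_PIX_FMT_MJPEG, which the thread suggests the C920 does and the PS3 Eye does not; the makeMjpegFormat name is mine):

```cpp
#include <linux/videodev2.h>
#include <cstring>

// Build a VIDIOC_S_FMT request asking the camera for MJPEG frames
// instead of raw YUYV, so the JPEG compression happens on the camera.
v4l2_format makeMjpegFormat(unsigned width, unsigned height)
{
    v4l2_format fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    return fmt;
}
```

After ioctl(fd, VIDIOC_S_FMT, &fmt) the driver writes back the format it actually chose, so checking fmt.fmt.pix.pixelformat afterwards tells you whether the camera accepted MJPEG.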

Today I am posting an article on thresholding colors using OpenCV on my blog at blog.lemoneerlabs.com and will document more on the webcams I have been working with as soon as I can.

As for the project I am working on, I am giving sight to an autonomous robot.  I will post more on that as well... when I find the time.  :)  Please do let me know how your work goes, and I will do my best to help you.

Good luck!

Matthew Witherwax




Michael Darling

Sep 11, 2013, 2:39:38 PM
to beagl...@googlegroups.com
Wow!  You seem to be pretty knowledgeable about all of this. You might have sold me on going out and buying a C920 ASAP!  I've tried the Logitech C270 but couldn't get above 15fps so I sort of gave up on trying/buying other cameras since nobody could confirm better performance with anything else. 

I plan to put the camera on an autonomous RC aircraft that will be following another carrying very bright LEDs. I'm doing some simple "blob detection" and using the EPnP algorithm to estimate the pose (relative position and orientation) of the leader w.r.t. the follower.

At what resolution can you get about 60 fps with the Logitech C920?  I need a frame rate that is high enough to stop motion blur, but I don't need to process every frame with OpenCV. I am limited to 10 Hz by the navigation loop of the autopilot I'm using anyways. 

Do you think the C920 would work for my application?  

Matthew Witherwax

Sep 11, 2013, 3:27:27 PM
to beagl...@googlegroups.com
I will do some testing with my cameras to see what frame rate I can achieve just sending the data to /dev/null.  As of now, I have just confirmed what frame rates and resolutions will allow me to capture individual frames without select timeouts.  I have not tried capturing a continuous stream to see what the recordable fps is.

There are several things you should note:
1. If you are displaying the images on the BBB, it will increase cpu utilization and reduce your frame rate.
2. If you are writing the stream to disk, the latency in writing will affect your frame rate.
3. The Logitech C920 tops out hardware-wise at 30 FPS for all resolutions, if I recall correctly.
4. Capturing in MJPG offers reduced cpu use and increased throughput due to the smaller images - but you have to decide if compression artifacts will cause you issues.
5. There are forum posts such as this http://answers.opencv.org/question/4346/videocapture-parameters-cv_cap_prop_fourcc-or/ that suggest you cannot set the pixel format through OpenCV, so in order to capture in MJPG you may have to write your own capture code.
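To put rough numbers on point 4 -- note the 10:1 compression ratio below is a made-up figure purely for illustration; real MJPEG frame sizes depend on the scene and the camera's quality setting:

```python
# Rough frame-size comparison for 640x480.  The MJPEG ratio is an
# ASSUMED value for illustration only; actual sizes vary per frame.
width, height = 640, 480
raw_yuyv_bytes = width * height * 2        # uncompressed YUYV, 2 bytes/pixel
assumed_ratio = 10                         # hypothetical compression ratio
mjpeg_bytes = raw_yuyv_bytes // assumed_ratio
print(raw_yuyv_bytes, mjpeg_bytes)  # -> 614400 61440
```

Even at a modest ratio, that is an order of magnitude less data to move over USB per frame.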

I will let you know how the testing goes.




Michael Darling

Sep 11, 2013, 4:11:08 PM
to beagl...@googlegroups.com
To address some of the excellent points you brought up:

1.  I have no need to display images on the BBB -- the relative state of the leader w.r.t. the follower UAV (6 numbers, probably as integers) will be sent to the autopilot via serial over the BBB's UART pins (I still need to get that set up).  The only displaying I might do would be purely for debugging purposes.
2.  I may write frames to disk as part of my flight data recording -- but for the exact problem you mentioned I will probably only save off a single frame every couple of seconds or so.
3.  I think that 30 fps will be okay for my application.  That is the fastest I could get the PS3 eye to operate in 640x480 even on my laptop.  Faster would be better -- but I have done some ground tests that give me some hope that 30 fps might work.
4.  I don't need to do much video processing aside from detecting the bright LEDs in the red channel of my image.  I am using a modified version of OpenCV's "simple blob detector", which mainly identifies blobs by their brightness, circularity, inertia, etc.  Since brightness is the primary criterion here, I think I would be okay with compression as long as the LEDs appear semi-circular, but I will do more reading on it.
5.  I have actually been using custom capture code written by someone else (and modified a bit by me).  I can add the v4l2 code to change the pixel format if the capability is not already in the capture code I'm using.


Thanks so much for your help!  It's been incredibly helpful to find someone else wrestling with the same problems.
-Mike

Matthew Witherwax

Sep 11, 2013, 6:34:56 PM
to beagl...@googlegroups.com
Mike,

Just wanted to give you a quick update.  Using capture code based on the V4l2 example modified to capture frames in mjpeg format and throw them away, I am able to capture 640x480 at right around 30 fps using the C920.  I set up the capture, take 1000 frames, and then tear it down.  Inside the program I time actual runtime using calls to clock(), and the whole executable is timed with time on the command line.  I did this 10 times on the BBB, and all runs were right around 33.6 seconds which works out to about 29.76 captured frames per second.  While this was running, I had another connection open to the BBB running top, and the frame capture application used about .3% of the cpu.  Just so you don't glance over it, that was point 3%.
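The arithmetic, spelled out:

```python
# The frame-rate arithmetic from the run above: 1000 frames captured,
# runs timed at right around 33.6 s each.
frames = 1000
elapsed_s = 33.6          # wall-clock time reported by `time`
fps = frames / elapsed_s
print(round(fps, 2))      # -> 29.76
```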

Keep in mind, I did no processing on the frame, just grabbed it and tossed it, but with only .3% cpu in use, you probably have enough headroom to handle OpenCV.  Which, if you don't mind me suggesting, take a look at the code I posted on my blog today.  It shows how to threshold colors, and I have used the technique to find and track both green and red lasers.  Depending on your lighting conditions, and if your LEDs are bright and of a distinctive color, it may work for you and be less compute-intensive.

Matthew Witherwax

Matthew Witherwax

Sep 11, 2013, 6:36:34 PM
to beagl...@googlegroups.com
Mike,

As an addendum, I will post the code and calling details to my blog shortly... within the next day or so.

William C Bonner

Sep 11, 2013, 6:41:59 PM
to beagl...@googlegroups.com
I thought I'd mention that I've spent a lot of time playing with FFMPEG and the C920 on my BBB. If I'm capturing directly from the camera and writing to the uSD flash in an mp4 file, having FFMPEG do no transcoding, it usually runs at about 3% CPU if I'm running at 1GHz, and 10% if I'm running at 300MHz. If you've got your performance governor at the default of ondemand, it'll drop down to the lower frequency if you aren't taxing the cpu.


You received this message because you are subscribed to the Google Groups "BeagleBoard" group.
To unsubscribe from this group and stop receiving emails from it, send an email to beagleboard...@googlegroups.com.

Michael Darling

Sep 11, 2013, 7:39:29 PM
to beagl...@googlegroups.com
Wow!  Thanks so much Matthew and William!  It sounds like the C920 w/ MPEG encoding is the way to go.  I will definitely check out the blog you referenced and look over any code you have posted, Matthew.

I'll get myself a C920 and play around to see if I can get some simple OpenCV code working.  

I can't thank you guys enough!  I'm finally feeling optimistic about getting this thesis finished. :)

-Mike

Matthew Witherwax

Sep 12, 2013, 9:15:53 AM
to beagl...@googlegroups.com
Mike,

I have posted my capture code, the results of some timing tests, and my understanding of USB webcams on my blog here http://blog.lemoneerlabs.com/post/BBB-webcams

Michael Darling

Sep 13, 2013, 8:07:26 PM
to beagl...@googlegroups.com
Hi Matthew,

I read through your blog -- great work!  I have a question that I was hoping you could help me out with:

I was previously using a piece of custom capture code written by Martin Fox, (https://bitbucket.org/beldenfox/cvcapture/src/b7f279b278aa?at=default).  His capture code is basically taken from the sample video4linux2 capture code available on the LinuxTV.org website (http://linuxtv.org/downloads/v4l-dvb-apis/capture-example.html).  Martin's code assumes that the camera is capturing in raw YUYV format and goes through the equations for converting YUYV to a cv::Mat object in OpenCV, which is all fairly straightforward.  Not surprisingly, if I just change the format in Martin's code using V4L2_PIX_FMT_H264, I get mostly green frames since the implied YUYV to RGB conversion is no longer valid.
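For reference, the conversion I mean looks roughly like this -- a pure-Python sketch using the common ITU-R BT.601 integer approximation; the exact coefficients in Martin's code may differ, and the function names here are my own:

```python
# Convert one YUYV macropixel (4 bytes -> 2 RGB pixels) using the common
# BT.601 integer approximation.  Sketch only; coefficients in actual
# capture code may vary slightly.

def clamp(v):
    return max(0, min(255, v))

def yuv_to_rgb(y, u, v):
    c, d, e = y - 16, u - 128, v - 128
    r = clamp((298 * c + 409 * e + 128) >> 8)
    g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8)
    b = clamp((298 * c + 516 * d + 128) >> 8)
    return r, g, b

def yuyv_macropixel_to_rgb(y0, u, y1, v):
    # The two pixels share one U and one V sample (4:2:2 subsampling),
    # which is why YUYV needs only 2 bytes per pixel on average.
    return yuv_to_rgb(y0, u, v), yuv_to_rgb(y1, u, v)

print(yuyv_macropixel_to_rgb(128, 128, 128, 128))  # two neutral gray pixels
```

If the buffer actually holds H264 instead of YUYV, applying these equations to it produces exactly the mostly-green garbage I described, since the bytes are no longer luma/chroma samples at all.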

Since I would, however, like to take advantage of the C920's H.264 hardware compression, I need to make some significant modifications to Martin's code to make it work for my application.  What is the best way to go about decoding the H.264 video stream and converting it into the cv::Mat format that OpenCV understands?  Do you know where I could find some sample video4linux capture code that uses the H.264 format that I can study? I see where the V4L2_PIX_FMT_ is set to "YUYV" in Martin's code, but am not sure how the code might need to be modified if I set this to V4L2_PIX_FMT_H264.  (For example: do I need to change the buffer size or will this somehow be handled internally when the buffers are created?)  Will I need to make use of the libavcodec library to decompress the video stream?  (I imagine that there has to be *some* library I can use so that I don't have to reinvent the wheel.)

I've looked over your framegrabber.c code, but it looks like you aren't decoding the video streams just yet.  You are just writing out to a binary file -- correct?  (You write: "Capturing in H264 and YUYV format will also work, but you will not be able to simply open the resulting file in your favorite image editor.")

I've been Google-ing like crazy, but I was hoping that you could point me towards some helpful resources.

Thanks a ton!
-Mike

Matthew Witherwax

Sep 14, 2013, 11:09:26 AM
to beagl...@googlegroups.com
Mike,

You are correct, the code I posted is just writing out raw frames right now.  If the frame that is written is of MJPEG format, then it should be an actual jpeg image.  If you cannot open it, then pass it through the MJPEGPatcher program to insert the missing Huffman table.

If the frame is of YUYV format, then it will have to be converted to jpeg using code like the conversion code found in v4l2grab.  As I am using this on the BBB, it is more beneficial to have the camera compress and encode the frames as jpeg thus reducing the amount of data transferred and the need to do the conversion on the BBB.

For H264, things are a bit different.  In both MJPEG and YUYV, all the data for the frame is present -- or at least enough to reconstruct the image.  With H264, frames are dependent on other frames for decoding.  I have not begun working on decoding these, but it is my understanding that OpenCV uses, or can use, ffmpeg.  If this is the case, then ffmpeg should be able to read the H264 frames.  I would start with OpenCV's code for capturing frames from the webcam to see if it in fact does anything using ffmpeg, and go from there.  I will try to start looking into decoding H264 frames this evening.

On your question about using h264 and needing to size the buffers, there is nothing you should need to change.  If you have a look at framegrabber, when we receive a capture we also receive the amount of data returned.  The only thing you should need to do is set up a process to decode the h264 frames.



Michael Darling

Sep 14, 2013, 3:51:23 PM
to beagl...@googlegroups.com
Sorry for the ignorance here, but I've only been working with C/C++/Linux/OpenCV for about a year now -- the V4L2 code you modified originally sent the framebuffers as a stream to stdout. How can I get the stream passed to OpenCV as an argument in my own code?

Would it be sensible to compile the custom V4L2 capture code as a class that writes its frame buffer to some stream (FILE*) that can be passed into a standard OpenCV function that can operate on video streams (and hopefully decode the video), such as the VideoCapture class?  Or would it be better to leave the V4L2 code as its own command-line program and use system() calls in my OpenCV code?

I have played some with ffmpeg in the command line but don't want to mess with the API if I can help it since it is still a little bit beyond my understanding.

Thanks.

Matthew Witherwax

Sep 14, 2013, 8:54:12 PM
to beagl...@googlegroups.com
Mike,

I have been looking into this for a while, and there isn't a whole lot to go on when it comes to working with H264.  The current idea I am working on is modifying framegrabber to output continuously to stdout when the count is -1.  This way you can send a continuous stream of captures to stdout.

With this I would pipe the output to avconv and have it set up an rtp stream with something like
 ./framegrabber -f h264 -H 1080 -W 1920 -c 10 -I 30 -o | avconv -re -i - -vcodec copy -f  rtp rtp://xxx.xxx.x.x:5060/
replacing the xs with the ip address of my BBB.

I would then make sure opencv is compiled with ffmpeg support (and possibly gstreamer).  If not, it will need to be recompiled.  This step is the tricky part because recompiling on the bone will A) take a long time and B) possibly fail because you do not have enough free space depending on if you are running off a large sd card or the 2 gigs of nand.  To get around this you can set up a cross compiler on a desktop install of linux.  For this see http://archlinuxarm.org/developers/distcc-cross-compiling

I actually went the route of setting up a cross compiler running on a virtual box vm, and it wasn't terribly difficult.

After all this, OpenCV should be able to open the stream with something like
VideoCapture cap;
cap.open("rtp://xxx.xxx.x.x:5060/");


This is the approach I am going to take.  I will let you know how it turns out.  In the meantime you may want to try it out yourself or see if you can accomplish your goals with the MJPEG stream.  I updated my blog today with some final testing.  The C920 will stream 1920x1080 at 30 FPS in MJPEG.

Michael Darling

unread,
Sep 14, 2013, 10:20:36 PM9/14/13
to beagl...@googlegroups.com
Okay, I think I follow all of that, rtp streams are a new concept to me though.  Since I am not sending the video over a network to the BBB (it has to operate on the UAV), my initial thought was to send the video stream to the VideoCapture object using a FILE* pointer.  However, the only implementations for VideoCapture::open() are:

     VideoCapture::open(int devNo)
and
     VideoCapture::open(const string& filename)

So your use of an rtp stream should work since the address can be passed as a string (and OpenCV does indeed accept rtp streams it looks like), but I think I will get complaints if I try to pass a FILE* pointer. Instead, should I set up the rtp stream using the loopback IP address (127.0.0.1)?


Also, after having a better understanding of H.264 I think you are right that MJPEG will probably be fine for what I'm doing.  The C920 should be capable of delivering 640x480 MPEG compressed frames at up to 60 fps.  In my case, I want as high of a framerate in the camera hardware as I can get to prevent motion blur, but I really don't need to process every frame in software.  Is there any way that I can set the camera to deliver one out of every n frames that it captures?  I don't see a setting for that in the v4l2 controls.  I could choose to not mmap() some framebuffers in the video4linux capture code if processing becomes too costly, but by then the frame data has already come across the USB hardware. What I really want is the equivalent of a camera with fast shutter speed that delivers new frames in 640x480 res or higher at ~10 Hz.


Really sorry for my naiveté!  I never expected that I would have this sort of difficulty with my thesis.  Thanks again for your help and patience.

Matthew Witherwax

Sep 16, 2013, 7:35:26 AM
to beagl...@googlegroups.com
Mike,

You are correct, because your robot does not have a network connection, you would need to use 127.0.0.1.  I did some preliminary testing with the rtp stream yesterday, and I have a couple of issues to report.

First, the stream is delayed by several seconds.  No doubt this is due to avconv having to process the h264 stream from the camera and stream it out in the rtp format.  Being several seconds off is probably not going to work for your application.

The second issue is the amount of compression artifacts when the scene changes dramatically.  I am not sure if it is the camera or avconv, but if the camera is streaming a static scene and someone walks in to the view of the camera and moves about, it takes several seconds for the picture to stabilize.  Interestingly, some of it seems to be due to the camera autofocusing.  At any rate, I had hoped going the streaming route would allow us to make use of existing methods for consuming the h264 stream, but it looks like we will have to go back to figuring out how to consume it directly.

Once you open the camera, you are presented with a stream of captures.  As you have said, there is no way to only capture certain frames without moving the ones you do not want out of the way.  The best solution to this is probably to just continually grab frames from the camera and discard them until some signal is received.  When the signal is received, process the frame(s) in whatever manner you would like, then switch back to discarding the frames until the next signal.  Simply grabbing and discarding frames in MJPEG format is relatively cheap.  Capturing MJPEG frames at 1920x1080 with framegrabber uses just .3% of the cpu.
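In pseudocode-ish Python, the pattern I have in mind looks something like this, with a dummy frame source standing in for the V4L2 select()/dequeue loop and made-up signal points for illustration:

```python
# Grab-and-discard pattern: throw frames away until a signal fires, then
# process the current frame and go back to discarding.  The frame source
# here is a stand-in for the real camera dequeue loop.
import itertools

def frame_source():
    """Stand-in for the camera: yields frame numbers forever."""
    yield from itertools.count()

processed = []
want_frame = False  # in the real program, flipped by a timer/other thread

for i, frame in enumerate(frame_source()):
    if i in (5, 11):             # pretend the ~10 Hz signal fired here
        want_frame = True
    if want_frame:
        processed.append(frame)  # run blob detection etc. on this frame
        want_frame = False       # back to discarding
    if i >= 15:
        break

print(processed)  # -> [5, 11]
```

The discard branch is cheap since the frame is never decoded; only the frames you keep pay the decompression/processing cost.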

A couple things to note:
The select() call used to grab the frame data will suspend execution until there is data to grab: http://linuxtv.org/downloads/v4l-dvb-apis/func-select.html
You should be able to grab frames in an infinite loop without swallowing the cpu, as the suspend will yield the cpu to other tasks.

Per the statement above, you might want to do your capturing on a separate thread so your main program is not dependent on the arrival of data from the camera.

Finally you can always try to start up the camera, grab the frames you want, and shutdown the camera at certain time intervals to achieve the effect you desire, but I don't think that will work out too well.



Michael Darling

Sep 16, 2013, 9:12:11 AM
to beagl...@googlegroups.com
Interesting...

I have been doing some googling and found a few tutorials on video capture with the BeagleBone by a guy named Derek Molloy http://derekmolloy.ie/beaglebone/beaglebone-video-capture-and-image-processing-on-embedded-linux-using-opencv/ .  He has a repo on GitHub called "boneCV" with a few simple examples using the v4l2 sample capture code, OpenCV, and some command line utilities like avconv for RTP streaming. There's not really anything there you haven't already done, but I referenced his tutorial (http://derekmolloy.ie/streaming-video-using-rtp-on-the-beaglebone-black/) for testing RTP streaming via the loopback IP; unfortunately, I never got it working so that I could see the video stream in VLC.

Since I wasn't able to view the stream, I didn't get the chance to see the delay you mentioned for myself -- did you try MJPEG format as well as H264?  I'm not sure if it would make any difference in the delay, but I wonder if it might eliminate the artifacts from dramatic scene changes (knowing that H264 is an interframe method).  Also, if your application allows it, you could always disable the C920's autofocus and manually set it to "0" (out of 255 -- the equivalent of "infinity focus") with v4l2-ctl or libv4l. This is probably what I would do, since my UAVs will probably keep at least 30 feet of separation.

As far as the behavior of the select() calls in the v4l2 library -- I don't really care if my entire program has to suspend while it waits for data from the camera since the BBB is only serving as the processor for my vision subsystem. All of the flight controls are done with the ArduPilot Mega. The only exception to this is if I want to implement some kind of state estimator (Kalman filter) on the BBB to provide a "guess" in between successful localization estimates.

"...It looks like we will have to go back to figuring out how to consume it directly."  -- So you are now thinking that YUYV is the way to go after all?  It seems to me like the ~13.2 Mbits/s maximum transfer rate over USB that you found is likely to be hardware related.

**Some Googling.......**

This is all new-ish to me but it looks like there are three speeds of USB devices:
   Low Speed: 1.5 Mbits/s
   Full Speed:  12 Mbits/s   <--- looks close to what you estimated
   High Speed:  480 Mbits/s

The BeagleBone is equipped with a USB 2.0 port (LS/FS/HS) according to the "Features" table on the product Wiki

I confirmed that 480 Mbit/s should be possible on the BBB:
     beaglebone:~# cat /sys/bus/usb/devices/usb?/speed
     480
     480

Considering the fact that the PS3 Eye and C920 both support 60 fps at 640x480 in YUYV format, I would imagine that they are both High Speed USB devices and aren't what's limiting the data transfer.  I'm not sure where that leaves us, but maybe the issue is in software after all.
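Running the numbers on that (my own arithmetic, payload only, ignoring USB protocol overhead):

```python
# Raw payload of uncompressed 640x480 YUYV (2 bytes/pixel) at 60 FPS,
# in Mbit/s -- the mode both cameras claim to support.
rate_mbps = 640 * 480 * 2 * 8 * 60 / 1e6
print(round(rate_mbps, 1))   # -> 294.9

# Comfortably under High Speed's 480 Mbit/s and far beyond Full Speed's 12,
# so the USB 2.0 spec itself shouldn't be the bottleneck.
assert 12 < rate_mbps < 480
```

So on paper a High Speed link has the headroom; whatever is capping us at a fraction of that must be the controller hardware or the driver stack.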

Matthew Witherwax

Sep 16, 2013, 9:33:36 AM
to beagl...@googlegroups.com
Mike,

I only tested streaming with H264 but will look into MJPEG streaming when I have a moment.  Concerning the autofocus, I plan to see what happens with it as well as auto white balance turned off when I have some more free time.

On "...It looks like we will have to go back to figuring out how to consume it directly." I meant in order to process the H264 stream, we will have to figure out how to work with it directly as opposed to streaming it and allowing OpenCV to read from the stream.  I still believe for your purpose (and mine) MJPEG will be the most fruitful in terms of time to implement and performance.

Both cameras are High Speed devices.  It is likely there is an underlying hardware issue or low level usb driver issue that is limiting the amount of data we can push through.

For a hardware example, I have two laptops, both Dells and both with i5 processors.  I created a Linux thumbdrive and booted each to test the capture performance of the PS3 Eye.  The older laptop (by one processor generation) could not capture from the PS3 Eye at 640x480 at 60 FPS, while the newer one could.  In both cases the ports used were USB 2.0, and I used the same Linux thumbdrive/software.

As a software example, there are numerous reports of terrible webcam performance with the Raspberry Pi due to a bad low level usb driver. http://www.raspberrypi.org/phpBB3/viewtopic.php?f=28&t=23544

It is a bit difficult to say which (or both?) is the problem with the BBB, but for our immediate needs, we should probably focus on working with the relatively quick MJPEG capture we have and solve the USB throughput issue as time permits.

Michael Darling

Sep 17, 2013, 9:39:26 PM
to beagl...@googlegroups.com
I've spent the last couple of days looking into using the Libav libraries (particularly libavcodec), which avconv is built upon, to do real-time MJPEG decoding. My hope is to pass the frame buffers from the v4l2 capture code into some kind of decoder function written using Libav, and use the raw data to build-up a cv::Mat frame that OpenCV can work with (much like Martin Fox did to convert YUV pixel format to RGB in his own capture code).

I've been referencing the api-example.c as there is little documentation on the Libav API. I have reached out for some help with using the libraries on the libav-api mailing list.  I will keep you posted as I progress. Let me know if you have any thoughts on all of this.

-Mike



Michael Darling

Sep 18, 2013, 2:27:58 AM
to beagl...@googlegroups.com
On a side note, I've noticed some weird issues where the BBB only recognizes the C920 if it is plugged in during boot.  I can unplug/re-plug my PS3 eye and C270 and it will detect the change in "lsusb", but once I plug in the C920 the lsusb command keeps returning the previous state.  Everything works as expected on my laptop, though.

Do you have any clue what could cause this?  I have tried reflashing the eMMC with the standard Angstrom distro as well as the most recent version of Ubuntu for the BBB.  Are you having similar experiences?

-Mike

Matthew Witherwax

Sep 18, 2013, 8:32:41 AM
to beagl...@googlegroups.com
Mike,

You shouldn't need to decode the jpeg yourself. Attached is a quick example I threw together using python.  You can do the same thing in C or C++ see here http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#imdecode

It would be something like this

CvMat cvmat = cvMat(HEIGHT, WIDTH, CV_8UC3, (void*)buffer);
IplImage * img;
img = cvDecodeImage(&cvmat, 1);
Where you set the height and width to your image parameters.


OpenCVjpeg.py

Matthew Witherwax

Sep 18, 2013, 2:47:47 PM
to beagl...@googlegroups.com
Mike,

I have posted a working C example on my blog here http://blog.lemoneerlabs.com/post/opencv-mjpeg

William C Bonner

Sep 18, 2013, 8:18:47 PM
to beagl...@googlegroups.com
I was just reading this thread of messages after being away for a few days. I saw mention of the delay when streaming over a network using rtp. The closest to realtime I've been able to achieve with my own project using the C920 on the BBB appears to be just under two seconds. (rough estimate by pointing the C920 at the analog clock on my computer)

Is under 0.8 seconds possible with this camera and hardware? 

Does anyone have a suggestion of how I can accurately measure the lag that is being introduced, and where it's being introduced in the process?

Wim.



Michael Darling

Sep 19, 2013, 2:56:43 AM
to beagl...@googlegroups.com
Well... I've modified Martin Fox's capture code a bit to implement the OpenCV cvDecodeImage() function you pointed me to (big thanks!) when capturing in MJPEG format (code attached).  At 640x480 and 30 fps, it runs flawlessly on my laptop, giving nice, sharp real-time video at a measured framerate of just about 30 fps on the dot.

Next, I set up the code to save off one in every 100 frames as JPEGs so that I could evaluate the image quality when running on the BBB.  With the settings at 640x480, 30fps, and capturing in MJPEG mode, I actually end up with ~10 frames per second.  That might be enough for my application, but I have to add processing time on top of that as I am just acquiring video and throwing it away for now. Also, the few images that I saved off had a bit of motion blur when I jostled the camera around, which could be the biggest issue for me. So I went back to running the code on my desktop and saved off some frames for a comparison, but they appeared slightly blurred as well.

I ended up going back to the BBB again to try and measure the load on the CPU using the "top" utility, but this time recompiled my code with the -Ofast g++ option. At the beginning frames were capturing slowly again, but after a few seconds the framerate increased and settled to just about 20 fps with CPU usage of  approx 73%.  This might cut it for what I need to do with the BBB. I'm concerned that the motion blur will still be an issue, but that might be unavoidable with this kind of hardware.

Is there any other way you can think of to improve the frame rate or reduce the motion blur?

Thanks a ton for all of the help you have provided on this! I think (hope) this performance will be sufficient for me to move forward in my project.  =)

-Mike
OCVCapture.h
OCVCapture.cpp
camera.cpp

Matthew Witherwax

Sep 19, 2013, 9:22:26 AM
to beagl...@googlegroups.com
Mike,

I briefly looked at the code, and one thing I noticed is the MJPEG frame is being converted to an OpenCV Mat and then the Mat is saved.  If you want to save the images, you can write them out directly when they are captured as MJPEGs because they are essentially jpegs - there is no need to convert them.

Are you running the code that displays the images on the BBB?  If so, you should be able to greatly reduce the cpu use by not displaying the image, but I do not know if that will result in any significant increase in frame rate.  I can also tell you that if you forgo saving the images and just work with the images in memory, you should see better frame rates as writing files (in my experience) seems to take a while.

I know I can capture 30 FPS at 640x480 in MJPEG on the BBB without doing any processing on them.  I will modify my code to load the image into an OpenCV Mat and see how it affects frame rate.  I should have something to report this evening.

As far as motion blur, I do not think we will be able to eliminate it.  The cameras available (outside of the PS3Eye) were intended for video conferencing, etc. and do not capture at a high enough frame rate.  I read somewhere that the PS3Eye could produce jpegs as well as uncompressed images, but I have found no indication that it is possible with the driver under Linux.  Instead, we have to find ways to work with the noisy images.

I am glad to hear this has helped clear the way for you to continue on your project.  Maybe you'll let me read your thesis when you are done?



Matthew Witherwax

Sep 19, 2013, 9:35:13 AM
to beagl...@googlegroups.com
William,

I do not have a good way as of yet to measure the lag, but the first place I would look is the application you use to view the stream.  Depending on how it buffers the stream, it may buffer a second or so worth before displaying anything.  For instance, in VLC, in Open Media -> Network, click Show more options and there is a setting for how much to cache.  Another place where latency is probably introduced is in the conversion to the rtp stream on the BBB and the decoding on the other side.  In my case I am capturing from a usb webcam and streaming over a usb wifi dongle.  Since the BBB has just one usb port, I am sure there is latency introduced there as well.

William C Bonner

Sep 19, 2013, 1:27:54 PM
to beagl...@googlegroups.com
Matthew: Thanks for the suggestion on the network caching item. Do you have any idea how to do this in VLC using an SDP file? (I may have figured it out, but I'd like confirmation. In VLC on windows I've used the "Open Advanced" option, added the SDP file to the file selection, and reduced the caching to 0ms. )

A simple example of just pushing video from my webcam to my PC using a recent build of FFMPEG has the following command on my BBB:

ffmpeg -f v4l2 -video_size 1280x720 -framerate 30 -input_format h264 -i /dev/video0 -vcodec copy -f rtp rtp://239.8.8.8:8090

and then I open an SDP file with the following contents on my desktop in VLC:

v=0
o=- 1188340656180883 1 IN IP4 239.8.8.8
s=Bonecam streamed by GStreamer
i=bonecam
t=0 0
a=tool:GStreamer
a=type:broadcast
m=video 8090 RTP/AVP 96
c=IN IP4 239.8.8.8
a=rtpmap:96 H264/90000

I seem to never be able to get much less than a full second lag, testing visually. I've tried reducing the video size down as far as 320x240 and it doesn't seem to make any difference in the lag. The only reason I'm not using 1920x1080 as my default is because I don't want my video window filling my primary monitor entirely.

I should mention that when I start FFMPEG it outputs an SDP definition that doesn't seem to work in VLC, and the file I'm using above is one I got from http://www.oz9aec.net/index.php/gstreamer/473-using-the-logitech-c920-webcam-with-gstreamer

FFMPEG kicks out:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 239.8.8.8
t=0 0
a=tool:libavformat 55.15.100
m=video 8090 RTP/AVP 96
b=AS:-5
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1; sprop-parameter-sets=Z0JAKLtAMgOL+AokAAADAAQAAAMA8YEAALcbAAtx73vheEQjUA==,aM44gA==; profile-level-id=424028

I've also used the FFMPEG command:
ffmpeg -f v4l2 -video_size 1280x720 -framerate 30 -input_format h264 -i /dev/video0 -vcodec copy -f mpegts udp://192.168.0.10:8090

and opened the network stream udp://@:8090 with the caching set to zero, but I have not been able to get a latency I was happy with. I've read some documents comparing MPEG-TS over UDP with RTP as a transport protocol, and my understanding was that RTP should provide lower latency as well as less network congestion.

I should probably mention that the FFMPEG I'm using is one I compiled myself on my BBB and the only external library that I used was x264. Compiling ffmpeg on the BBB took about 2 hours.
git clone git://source.ffmpeg.org/ffmpeg.git ; date ; ./configure --prefix=/usr --enable-gpl --enable-libx264 ; date ; make ; date

Michael Darling

unread,
Sep 19, 2013, 4:17:35 PM9/19/13
to beagl...@googlegroups.com
When running on the BBB, I commented out all of the namedWindow() and imshow() lines that display the image, to reduce CPU usage.  I also had it save only 1 out of every 100 frames so that it didn't have to store a lot of data to memory (approximately one image every 5 seconds).  I found that the BBB could easily capture the raw image data at 30 fps using your capture code from earlier, but I think the extra overhead of decompressing the JPEGs might be the limiting factor now.

Let me know what you find.  I have a little bit more tinkering that I want to do with disabling the autofocus, and investigating a little more into what the PS3 Eye driver does support.

Also, I would be happy to send you a copy of my thesis whenever I finally get it finished!

Thanks!
Mike

Matthew Witherwax

unread,
Sep 19, 2013, 7:22:22 PM9/19/13
to beagl...@googlegroups.com
William,

I will have to look more into VLC, but what you did sounds close to what I remember.  I actually may have some code to post shortly that I have been working on for streaming.  I have been able to stream 640x480 at 30 FPS with no noticeable latency using a modified version of the framegrabber app running on the BBB and a C application running on a Windows machine.  The Windows app uses OpenCV to display the stream.  Transfer of the images from the BBB to the Windows machine is done using raw data over a socket.  I have more testing to do, but so far it looks promising.  CPU use on the BBB is 6 - 12%.  I hope to have it on my blog by tomorrow evening.


Matthew Witherwax

unread,
Sep 20, 2013, 9:01:23 AM9/20/13
to beagl...@googlegroups.com
William,

I have posted my streaming code and writeup here http://blog.lemoneerlabs.com/post/bbb-mjpeg-streaming


Matthew Witherwax

unread,
Sep 20, 2013, 3:57:38 PM9/20/13
to beagl...@googlegroups.com
Mike,

Here are the numbers for the version of framegrabber I used to capture and convert to OpenCV images.  I ran a run of 1000 and a run of 5000 captures.  In both cases, every frame was converted to an OpenCV image.

[root@alarm ~]# time ./framegrabber -f mjpeg -H 480 -W 640 -c 1000 -I 30  -o
Startup took 0.000000 seconds
Captured 1000 frames in 45.890000 seconds
Shutdown took 0.000000 seconds

real    0m47.135s
user    0m43.242s
sys     0m2.955s

[root@alarm ~]# time ./framegrabber -f mjpeg -H 480 -W 640 -c 5000 -I 30  -o
Startup took 0.000000 seconds
Captured 5000 frames in 228.000000 seconds
Shutdown took 0.000000 seconds

real    3m54.043s
user    3m33.734s
sys     0m14.566s
[root@alarm ~]#

In short, it runs at a little more than 21 FPS.  I noted that memory use during this was only 2%, but CPU use was almost 98%.  I am not sure it will get much better than that, as it appears most of the time is spent converting to an OpenCV image.
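As a quick sanity check of that "21 FPS" figure, dividing frames captured by elapsed capture time for both runs (awk used only for the floating-point math):

```shell
# Effective frame rates implied by the two timing runs quoted above.
awk 'BEGIN { printf "run1: %.1f fps\nrun2: %.1f fps\n", 1000/45.89, 5000/228.0 }'
# → run1: 21.8 fps
# → run2: 21.9 fps
```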

And here are the numbers for only converting every 3rd frame.

[root@alarm ~]# time ./framegrabber -f mjpeg -H 480 -W 640 -c 1000 -I 30  -o
Startup took 0.000000 seconds
Captured 1000 frames and Processed 334 in 14.830000 seconds
Shutdown took 0.010000 seconds

real    0m33.963s
user    0m14.105s
sys     0m1.031s


[root@alarm ~]# time ./framegrabber -f mjpeg -H 480 -W 640 -c 5000 -I 30  -o
Startup took 0.000000 seconds
Captured 5000 frames and Processed 1667 in 74.070000 seconds
Shutdown took 0.010000 seconds

real    2m47.270s
user    1m9.268s
sys     0m5.115s

Memory usage is still 2%, but CPU drops to ~45%.
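For what it's worth, the 334-of-1000 "Processed" count above is what you get if a frame is converted whenever (captured - 1) mod 3 equals zero; a quick sketch of that arithmetic (an illustration only, not the actual framegrabber logic):

```shell
# Count how many frames are converted when every 3rd frame (starting with
# the first) is processed out of 1000 captures, as in the run above.
N=3
converted=0
captured=1
while [ "$captured" -le 1000 ]; do
  if [ $(( (captured - 1) % N )) -eq 0 ]; then
    converted=$((converted + 1))
  fi
  captured=$((captured + 1))
done
echo "converted $converted of 1000 frames"
# → converted 334 of 1000 frames
```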

Michael Darling

unread,
Sep 20, 2013, 5:07:03 PM9/20/13
to beagl...@googlegroups.com
I'm making a final last-ditch effort at speeding things up just a little bit more.  I am not sure what the build configuration is for the OpenCV package in the Ubuntu 'apt-get' repo, but I have found that you can optionally build OpenCV to use a different JPEG image codec.  In particular, you can build it with 'libjpeg-turbo'  (http://libjpeg-turbo.virtualgl.org/), which supposedly is 2-4 times faster than libjpeg.  I am going to try installing libjpeg-turbo, and then building OpenCV from source using these build options:

cmake -DWITH_JPEG=ON -DBUILD_JPEG=OFF -DJPEG_INCLUDE_DIR=/path/to/libjpeg-turbo/include/ -DJPEG_LIBRARY=/path/to/libjpeg-turbo/lib/libjpeg.a /path/to/OpenCV

(http://stackoverflow.com/questions/10465209/how-to-compile-opencv-with-libjpeg-turbo).
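Once a build like that succeeds, one way to confirm which JPEG library OpenCV actually linked against is to inspect an installed library with ldd. A hedged sketch — the library path here is an assumption, so adjust it to your install prefix and OpenCV version:

```shell
# Inspect the dynamic dependencies of OpenCV's image-codec library to see
# whether libjpeg-turbo's libjpeg was picked up (a static link shows nothing).
LIB=/usr/local/lib/libopencv_highgui.so
if [ -e "$LIB" ]; then
  ldd "$LIB" | grep -i jpeg || echo "no dynamic libjpeg (statically linked?)"
else
  echo "library not found at $LIB"
fi
```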


I'm a little bit concerned that the BBB may not be able to build OpenCV, so if the build fails I'll have to look into cross-compiling for the BBB.  If I am able to get it built, I will recompile your framegrabber program and post the new results for comparison.

-Mike

Michael Darling

unread,
Sep 20, 2013, 7:45:28 PM9/20/13
to beagl...@googlegroups.com
Matthew,

Quick update:  The BBB ran out of space while building OpenCV near the 40% mark (no surprise there).  I will try to do my research on cross-compiling, starting with the link you posted a while back.

Let me know if you have any other tips/suggestions for getting OpenCV cross-compiled.  It seems like there are a lot of ways to go about it and none of them seem very straightforward.  I should have all of my dependencies for OpenCV pre-installed on the BBB, but I'm not sure that that does me any good now that I have to cross-compile, anyways.

-Mike

Matthew Witherwax

unread,
Sep 20, 2013, 9:55:56 PM9/20/13
to beagl...@googlegroups.com
I have a 64 GB class 10 SD card.  I can install Arch on it and see if I can compile OpenCV without running out of space.  If so, it may be an option for you if you cannot get the cross-compiler to work.  The instructions I sent you are apparently how all packages for Arch Linux ARM are built.  Let me know if you run into any issues, and I will see what I can do.

One thing you might also want to do is enable NEON support.  I am not sure which parts of OpenCV have NEON optimizations, but you might be able to squeeze a little more performance out of it.
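For reference, NEON support in 2.4-era OpenCV builds is gated behind explicit cmake options. A sketch of the extra flags (flag names taken from the OpenCV 2.4 build system — verify against your checkout; the command is echoed rather than run here):

```shell
# Extra cmake options believed to enable NEON/VFPv3 code paths in
# 2.4-era OpenCV builds on ARM; append them to the cmake line you use.
NEON_FLAGS="-DENABLE_NEON=ON -DENABLE_VFPV3=ON"
echo "cmake $NEON_FLAGS <other options> /path/to/OpenCV"
```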

Matthew Witherwax

unread,
Sep 20, 2013, 10:13:23 PM9/20/13
to beagl...@googlegroups.com
Mike,

Looks like Arch Linux ARM uses libjpeg-turbo as the system library.  Send me your code and I will compile and confirm.

Michael Darling

unread,
Sep 21, 2013, 12:30:03 AM9/21/13
to beagl...@googlegroups.com
I actually found some really straightforward directions for distributed cross-compilation for the Raspberry Pi. Distributed cross-compilation seems to be by far the easiest way to go for compiling big projects with many dependencies, like OpenCV. I'm adapting the instructions for the BBB and going to give it a shot tonight. I've been able to cross-compile a small 'hello world' program, so I know I have the right toolchain and am on the right path. 

I'll let you know if I run into any troubles. If all goes smoothly, I'll have some results soon.

Michael Darling

unread,
Sep 21, 2013, 5:16:22 AM9/21/13
to beagl...@googlegroups.com
Oh, and also: the BUILD.txt file in the libjpeg-turbo source directory says that the package automatically enables NEON on ARM systems, so I think that should all be taken care of. 

I had to spend some time getting my local network working again so I'll give distcc a shot tomorrow morning. 

jesc...@googlemail.com

unread,
Sep 21, 2013, 5:54:02 AM9/21/13
to beagl...@googlegroups.com
Matthew,

Thanks! That may be really useful for me. Am messing around with a PTZ camera at the moment, and lag in the video stream has been a bit of a problem so far..

Regards,
Jon

Michael Darling

unread,
Sep 21, 2013, 5:08:34 PM9/21/13
to beagl...@googlegroups.com
Matthew -- would you be willing to upload the version of your framegrabber.c program that you used to time frame conversion to cv::Mat objects?  It looks like OpenCV is building fine via distributed cross-compilation and I would like to be able to compare 'apples-to-apples' to see if libjpeg-turbo improves performance.

Thanks,
-Mike


Matthew Witherwax

unread,
Sep 21, 2013, 7:46:43 PM9/21/13
to beagl...@googlegroups.com
Mike,

Please see the attached file.  I left a comment in there for the lines to remove if you want to capture all frames instead of every 3rd.
framegrabberCV.c

Matthew Witherwax

unread,
Sep 21, 2013, 7:48:34 PM9/21/13
to beagl...@googlegroups.com
Jon,

I hope you find it useful.  I would appreciate any feedback you have to give.

Best of luck,
Matthew


Michael Darling

unread,
Sep 23, 2013, 4:48:18 AM9/23/13
to beagl...@googlegroups.com
I have confirmed that it is indeed possible to grab AND convert frames at 640x480 and 30 fps with the BBB and Logitech C920 operating in MJPEG mode.  My solution was basically to install libjpeg-turbo from source with NEON enabled and then build and install OpenCV from source, using libjpeg-turbo as the JPEG codec. I also built OpenCV with NEON enabled.

Here is sample output with every frame being processed. The 'top' utility shows about 50% CPU and only about 2% memory usage. Before executing the code, I set the CPU to operate at a constant 1 GHz using "sudo cpufreq-set -g performance" so that there isn't a "slow start" while the BBB grabs frames at 300 MHz before it finally kicks up to 1 GHz under load.

time ./framegrabber -f mjpeg -H 480 -W 640 -c 1000 -I 30 -o
Startup took 0.000000 seconds
Captured 1000 frames and Processed 1000 in 13.940000 seconds
Shutdown took 0.010000 seconds


real    0m33.760s
user    0m11.566s
sys    0m2.518s

I ended up using distcc to do a distributed cross-build of OpenCV on my PC, but still ran out of space trying to store all of the source and object files on the 2GB eMMC on the BBB.  I ended up mounting an ext2-formatted filesystem on a USB thumb drive hooked up to the BBB and building OpenCV from there.  I probably did a lot of extraneous things that had little to no effect, but I do have a working process for those who need to get 30 fps.  I need to go back and clean up some notes I took, and then I plan to post a step-by-step guide for anybody else who wants to replicate what I did.

Thanks,
-Mike

Matthew Witherwax

unread,
Sep 23, 2013, 8:12:47 AM9/23/13
to beagl...@googlegroups.com
Mike,

I look forward to your write-up.  I am glad to hear distcc worked for you.  A note about the files to put on the BBB: you should really only need the compiled libs and the header files.  The build I did on my Surface (Windows) created a directory called install that has a lib and an include directory with a total size of ~10 MB.  I would imagine the build for Linux would be close to that.

Michael Darling

unread,
Sep 23, 2013, 3:02:42 PM9/23/13
to beagl...@googlegroups.com
Yeah. I did a "make install" step that saved all of the files to /usr/local/lib and /usr/local/include. The thumb drive was just for temporary storage while building.


Matthew Witherwax

unread,
Sep 24, 2013, 2:41:14 PM9/24/13
to beagl...@googlegroups.com
Richard,

Please see the post http://blog.lemoneerlabs.com/post/BBB-webcams concerning the PS3Eye, C920, and frame rates.  I believe it will answer your questions.


On Tue, Sep 24, 2013 at 12:50 PM, rh_ <richard...@lavabit.com> wrote:
On Mon, 23 Sep 2013 01:48:18 -0700, Michael Darling <fndrpl...@gmail.com> wrote:

> I have confirmed that it is indeed possible to grab AND convert
> frames at 640x480 and 30 fps with the BBB and Logitech C920 operating

The PS3 Eye could not do this, or have you not tried it yet?
Will you say what the application is? Or at least, is it a stationary
camera, or is it something like robot vision?  Using that Motion
app (mentioned previously) would get a higher frame rate, I think,
unless Motion is post-processing. I'm sure not all USB cams are
created equal, but I wonder how much can be done via USB.
It seems to me that image subtraction (or whatever the correct term is)
would probably need to happen at the hardware level to get any
capture-rate increase. So maybe there's no way to get a higher
frame rate due to hardware limitations.



Michael Darling

unread,
Sep 24, 2013, 8:39:11 PM9/24/13
to beagl...@googlegroups.com
Hi Richard,

Here is a summary of what I ended up doing to get 30 fps out of the BBB. You are right -- it simply came down to taking advantage of NEON hardware acceleration.  My little "How-To" guide might be a bit verbose, but you can skip all of the background information and jump right to the steps I took if you like. However, reading some of the background might give you more insight into what I have already tried with the PS3 Eye (along with many others, like Matthew) and what we have learned. It's also chock-full of great references on the topic.

Right now, I am considering this a DRAFT as I have not gone back through to make sure that all the commands I pasted will work verbatim. If you're pretty comfortable in Linux, I'm sure this is enough for you to replicate what I have done.

I wrote it up in LaTeX, as that was the easiest for me, so here it is in both PDF and HTML format.

Best of luck.  If you end up taking a look at this, please let me know if you have any comments or suggestions for improvement.

-Mike


On Tue, Sep 24, 2013 at 3:52 PM, rh_ <richard...@lavabit.com> wrote:
On Tue, 24 Sep 2013 13:41:14 -0500, Matthew Witherwax <able...@gmail.com> wrote:

> Richard,
>
> Please see the post http://blog.lemoneerlabs.com/post/BBB-webcams concerning the PS3Eye,
> C920, and frame rates.  I believe it will answer
> your questions.

OK, it seems that OpenCV is the limiter, but the camera is significant.
I like the idea of using a USB camera due to low cost, but I don't like
its limited features.  However, the limitations of cheap USB might
be overcome by using two cameras, although only to a small degree.
You probably need a camera cape (and a feature-full camera) to get at
the camera features.  I am thinking of repurposing a video camera,
as most have lots of features but are fairly cheap.
BBB_30fps.html
BBB_30fps.pdf

Matthew Witherwax

unread,
Sep 24, 2013, 9:22:56 PM9/24/13
to beagl...@googlegroups.com
Mike,

Great write up.  When I have some free time, I will replicate your steps on my BBB running Arch Linux.  Not sure if you have a personal website, but would you mind me posting this to my blog once it makes it out of draft?

I look forward to seeing your aircraft in action; on to the CV problems!

Michael Darling

unread,
Sep 24, 2013, 9:53:04 PM9/24/13
to beagl...@googlegroups.com
Thanks!  You're absolutely welcome to distribute it freely. I don't have my own website so that's a great way to share the information.

Luckily I have most of my CV algorithm done. I have a bit of cleaning up to do but most of my work is implementing hardware from here on out.

Thanks again SO much for your help!  I look forward to getting feedback on the write up.

-Mike

Matthew Witherwax

unread,
Sep 25, 2013, 1:55:30 PM9/25/13
to beagl...@googlegroups.com
Mike,

Looking over the code in your document, I noticed some formatting was off, and I needed to clean up the way I handled processing the subset of the frames.  I am cleaning things up and will get you a new version shortly.

Matthew Witherwax

unread,
Sep 26, 2013, 7:46:39 AM9/26/13
to beagl...@googlegroups.com
Mike,

Here is the cleaned up one.  Here are the differences:
-o is not used to indicate which frames to convert to OpenCV Mats and requires an integer argument
    -o 1 would convert every frame
    -o 2 would convert every 2nd (every other) frame, etc.
default is 1

-p is similar to -o in the original framegrabber.  However, it doesn't actually output anything; it just controls whether any frames are to be converted.

Captured count and processed count variables have been renamed and moved to the top.

Formatting has been corrected.


Testing of your procedure to follow.
framegrabberCV.c

Matthew Witherwax

unread,
Sep 26, 2013, 7:50:14 AM9/26/13
to beagl...@googlegroups.com
-o is not used to indicate which frames to convert to OpenCV Mats and requires an integer argument
should read
-o is now used to indicate which frames to convert to OpenCV Mats and requires an integer argument



shedmeister

unread,
Oct 2, 2013, 10:44:12 AM10/2/13
to beagl...@googlegroups.com
I just wanted to say thanks for this awesome thread.  With your procedure and code, I now have VGA capture working too.  I would like to add the following comments:

1) I am using a Logitech C230 camera (looks like an eyeball).  So you can add this to the list of cameras that support MJPG capture mode with full JPEG output.
2) I am not getting a full 30 fps, only about 24.  But there are differences between my setup and yours.  I am running Ubuntu 12.04 (Precise) + LXDE, and haven't really optimized much except for building libjpeg-turbo and rebuilding OpenCV as you suggest.
3) I built OpenCV from source on the BBB without using distcc or an external thumb drive.  I am booting from a 4GB uSD card.  It built in 3-4 hours.

Thanks again!
 Jim


Michael Darling

unread,
Oct 2, 2013, 11:45:52 PM10/2/13
to beagl...@googlegroups.com
Jim,

I'm glad to hear that you found my write-up helpful.  I really appreciate your kind words and will definitely add the C230 as an MJPEG-ready webcam.  Out of my own curiosity, are you 1) doing any additional processing with OpenCV or 2) displaying the video in real time (either of which could account for a slower frame rate)? If you have LXDE, my guess is that #2 is the culprit. At the very least, it sounds like you're happy with the performance for your application, and that's what matters! =)

Thanks again!
- Mike



shedmeister

unread,
Oct 3, 2013, 4:41:28 PM10/3/13
to beagl...@googlegroups.com
Hi Mike,

Yes, I am doing additional OpenCV processing, and have 4 video windows displayed, 2 being updated in "real-time" (hah!).  I have capture running in a separate thread.  My frame processing rate is well under 1 fps, so I'm ecstatic with 24 fps capture rate.

Thanks again!
 Jim

josiasina...@gmail.com

unread,
Nov 3, 2013, 4:22:12 PM11/3/13
to beagl...@googlegroups.com
Excellent, excellent thread! I just purchased my BBB yesterday for a personal project, and I've spent many, many hours reading about the best way to stream video from the BBB to a PC over WiFi with the lowest latency I could get. This thread, including all the links that users posted here, was really helpful in understanding the steps I should take.

I'm just running to the store in a few minutes to buy a C920 and start trying stuff :) I'm glad I've waited and read a lot before buying the PS3 Eye... Thanks guys. I will update any results I achieve.

Michael Darling

unread,
Nov 3, 2013, 8:35:20 PM11/3/13
to beagl...@googlegroups.com
Glad to hear it. Looking forward to your updates.

-Mike



Matthew Witherwax

unread,
Nov 4, 2013, 7:07:58 AM11/4/13
to beagl...@googlegroups.com
Glad to hear our work is helping others.  Let us know how your endeavors go.
-Matthew



Learning Opencv

unread,
Jan 29, 2014, 2:52:50 PM1/29/14
to beagl...@googlegroups.com
I am working with the C920 on an ODroid, similar to the BBB.  I am also having a problem capturing more than 15 fps in OpenCV; however, I need OpenCV for doing processing on the video.  Since I am not just grabbing frames, I can't use something like framegrabber as-is.  Has anyone done this and found a solution?  I have tried changing settings in v4l before running my code, but have had no luck.

Thanks.

Michael Darling

unread,
Jan 29, 2014, 5:02:43 PM1/29/14
to beagl...@googlegroups.com
All you have to do is adapt the frame grabber code for your needs.  I actually went ahead and rewrote the capture code to be object oriented so that I could conveniently use it with OpenCV's C++ interface.  You might find some of my code helpful as an example to work from.  It is in a public git repo:

I think you would primarily be interested in taking a look at CamObj.hpp/CamObj.cpp, v4l2Cap.h/v4l2Cap.cpp, and/or v4l2_c.h/v4l2_c.c.  As a word of warning, my code is sloppy and specific to my needs, but it does what I need it to do.  Feel free to borrow from it and modify it to your heart's content.

I actually don't remember how all of this stuff works anymore, but you can look at main_threaded.cpp to see how I am using v4l2_c.h to get frames into a cv::Mat object.  If I remember right, I had to abandon the object-oriented approach later on so that I could implement multi-threading using pthreads. I think the magic you are looking for is in the function v4l2_process_image(cv::Mat&, const void*), where the frame buffer is actually decoded and saved to a cv::Mat object. Once you have that, you're home free and can just deal with OpenCV.

static void v4l2_process_image(cv::Mat &img, const void *p)
{
        // wrap the raw frame buffer (no copy); note cv::Mat takes (rows, cols)
        cv::Mat buff(img.rows, img.cols, CV_8UC3, (void*)p);
        // decode the compressed (MJPEG) buffer into a color image
        img = cv::imdecode(buff, CV_LOAD_IMAGE_COLOR);
}

I hope that helps you to get started.  If you need any help, just ask.  I'll do my best to help where I can.

Good luck,
Mike

Matthew Witherwax

unread,
Jan 29, 2014, 8:41:54 PM1/29/14
to beagl...@googlegroups.com
To piggyback on Mike's response: when we were testing the capture rate of the BBB, we used a modified version of framegrabber that is attached to this article: http://blog.lemoneerlabs.com/post/bbb-optimized-opencv-mjpeg-stream

It allows you to set all the parameters the regular framegrabber accepts but converts each frame to an OpenCV Mat.  It is C code and as another word of warning, I won't say the code is any less sloppy than Mike's :) It was written for testing purposes.

The article itself details how to compile OpenCV to use NEON and libjpeg-turbo courtesy of Mike.

I am surprised to hear the ODroid has trouble capturing higher than 15 fps.  I recently bought a Wanboard Quad, but the ODroid was on my short list.

A few things to consider:

Have you tested with framegrabber to see if you can capture the raw frames at a speed greater than 15 fps?  It could be that OpenCV is the bottleneck, and perhaps you should recompile it per Mike's instructions.

Is auto exposure turned off?  Even with the frame rate set to 30 fps, in low light conditions, the camera increases exposure time to compensate leading to about 15 fps.
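If it helps, auto exposure can usually be pinned from the command line before starting the capture program. This is just a sketch of the usual v4l2-ctl invocations; the control names (exposure_auto, exposure_absolute) vary by kernel and driver, so check the list your device actually reports first:

```shell
# list the controls your driver actually exposes
v4l2-ctl -d /dev/video0 -l

# typical UVC control names (verify against the list above):
# switch to manual exposure, then set a short exposure time
v4l2-ctl -d /dev/video0 --set-ctrl=exposure_auto=1
v4l2-ctl -d /dev/video0 --set-ctrl=exposure_absolute=100
```

Run these before launching your program; the settings persist on the device node until something changes them.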

Good luck, and as Mike said, if you need help, we will do our best.

epsilo...@gmail.com

unread,
Jan 30, 2014, 10:22:20 AM1/30/14
to beagl...@googlegroups.com
Thank you very much for both of your responses.  I will take a look at the code you both suggested.

In response to Matthew, the ODroid so far has proven to be great at almost everything.  I have already turned off auto exposure and auto focus.  The platform I have the camera on moves often near and far from objects as well as vibrates, so these of course caused issues.

The webcam seems to get the higher frame rate with no issue when I use external applications, such as Cheese.  I even used framegrabber and another application someone made for the BBB to test, and was able to get 30 fps.  The problem seems to be directly related to OpenCV and how it uses v4l.  Even if the resolution (in OpenCV) is set to the lowest resolution, the fps still maxes out at 15.  This rate is found with just image capture, so no other processing is performed.  This seems like a hard limit that OpenCV is causing specifically on the ODroid setup.

Michael Darling

unread,
Jan 30, 2014, 1:53:17 PM1/30/14
to beagl...@googlegroups.com
Okay, so the problem you're having has to do with bugs in OpenCV, itself.  Unfortunately, the capture methods in OpenCV do not set the camera properties correctly for video4linux devices.  In other words, you may write the line of code to set the frame rate to 30 fps, but the camera isn't actually getting the instruction to change the frame rate.

The fix Matthew and I have used is to just use our own video4linux capture code.  You might be able to modify the OpenCV source code, but the capture code is difficult to follow since there are so many layers of abstraction. (There are a lot of wrapper classes used to handle v4l2 devices, v4l1 devices, Mac, Windows so that the programmer doesn't have to handle each camera differently depending on his/her system.)

Hope that explains some things for you  :)
- Mike

Matthew Witherwax

unread,
Jan 30, 2014, 3:17:40 PM1/30/14
to beagl...@googlegroups.com
If you bypass OpenCV and capture directly like we did, you should test to see if you can capture successfully in YUYV format.  OpenCV can convert YUYV to a Mat with less than 3% cpu use.  If you capture in MJPEG, you will see cpu use of 90% or more to convert the image to a Mat.  This isn't so bad on the Wandboard because it only consumes one core, but can be intense on the single core BBB.  I can tell you from testing with the Wandboard Quad, it can push 30 fps in YUYV over USB.  However, it is only possible to stream from one camera at 30 fps in YUYV.  In short it is a tradeoff.  You can either saturate the USB and save processing or save bandwidth and increase processing. Something to consider depending on your needs.


--

qsc...@gmail.com

unread,
Feb 16, 2014, 1:37:01 PM2/16/14
to beagl...@googlegroups.com
Hello, I've been reading through this group and found it very useful. I am using an ODroid U2 with a Logitech C920, on Ubuntu 12.11 and OpenCV 2.4.6.1.
I've used the custom capture code found at https://github.com/mdarling39/LinuxVision/blob/master/OCVCapture.cpp to capture. While it certainly uses fewer resources than the built-in function in OpenCV, for some reason when I set the resolution to 1280x720 the fps won't go past 10, but for any smaller resolution I am able to get 15 fps no problem. The CPU usage is not maxed out when I use the 1280x720 resolution.

any help would be appreciated.

Thanks

qsc...@gmail.com

unread,
Feb 16, 2014, 1:57:43 PM2/16/14
to beagl...@googlegroups.com
Hello, I've been reading through the group and have found it very helpful. I am running an ODroid U2 with Ubuntu 12.11 and OpenCV 2.4.6.1.
I used the code I found in the mdarling39 git repo to capture from a Logitech C920. I ran into an issue where, when I set the resolution to 1280x720, the fps would not go past 10; however, for any smaller resolution I am able to get 15 fps no problem. The custom capture code certainly does use fewer resources than the built-in OpenCV function. I have 2 threads, the main one for capturing and the second for processing, neither of which maxes out its CPU, so I can't figure out why I'm getting this fps drop.


any help would be appreciated.

Thanks

Michael Darling

unread,
Feb 18, 2014, 2:12:16 PM2/18/14
to beagl...@googlegroups.com
My first question for you would be which pixel format are you capturing in?  If you do a "v4l2-ctl -d /dev/videoX --list-formats-ext" in the command line (where X is 0, 1, ...  whatever your C920 is) you can see the various pixel formats, resolutions, and frame rates supported by the camera.

For YUYV, the maximum frame rate at 1280x720 is 10 fps.  If you are using H.264 or MJPEG, the maximum frame rate is 30 fps.
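To rule out format negotiation, you can also pin the format and rate from the shell before launching your program, then read back what the driver actually accepted. A sketch using v4l2-ctl's standard flags (adjust the device node for your setup):

```shell
v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=MJPG
v4l2-ctl -d /dev/video0 --set-parm=30      # request 30 fps
v4l2-ctl -d /dev/video0 --get-fmt-video --get-parm
```

If the read-back shows a different rate than you requested, the driver rounded to the nearest mode the camera supports for that format and resolution.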

I'm doubtful that that's your problem since you are still only getting 15 fps at lower resolutions.  (Can you get 30 fps at 640x480?  That is the resolution I am using for my own project and might serve as a good baseline measurement.)

My only other thought for now is that your actual measurement of the frame rate could be wrong.  I don't have much experience with pthreads, but I know that some of the "clock" functions in the <ctime> header have to be handled differently when you are multi-threading.

Those are my only ideas for now.  I'll keep racking my brain and let you know if I come up with something else.

- Mike

qsc...@gmail.com

unread,
Feb 18, 2014, 2:31:22 PM2/18/14
to beagl...@googlegroups.com
You are absolutely right. At 1280x720 it is limited to 10 fps in YUYV format, and I do get about 26 fps at 640x480, but that could be just how the fps is calculated, so it could be 30 in reality.

thank you.

Michael Darling

unread,
Feb 18, 2014, 3:03:43 PM2/18/14
to beagl...@googlegroups.com
Glad I could help.  =]   Let me know what you find out after looking into it a little bit more!

Matthew Witherwax

unread,
Apr 1, 2014, 8:06:10 AM4/1/14
to beagl...@googlegroups.com
Adam,

If your issue is a low frame rate coming from the PS3Eye, I have written about this problem here http://blog.lemoneerlabs.com/post/BBB-webcams
To make a long story short, the method the PS3 Eye uses to transfer data over USB doesn't work on the BBB at high frame rates and/or high resolutions.

If the frame rate you are getting is sufficient, then the question becomes, what algorithm is being used to track faces?

Let me know what is going on, and I will help where I can.

Matthew


On Mon, Mar 31, 2014 at 4:50 AM, <ad...@ben-dror.com> wrote:
Hi all, 

I worked on this project a while back - www.ben-dror.com/pinokio - and I want to get it running on a BeagleBone. 
I have purchased a BBB and a PS3 Eye. OpenCV face-tracking seems to run at a very low frame rate. I guess I am looking for around 15fps.
Any insights would be greatly appreciated. 

Adam


Anh Tung Vu

unread,
Apr 4, 2014, 5:20:23 AM4/4/14
to beagl...@googlegroups.com
Hi,

I have done a lot of reading about using OpenCV with libjpeg-turbo on BBB and C920.

I'm running Ubuntu 12.04 with LXDE on the BBB. I compiled libjpeg-turbo from source; I tried many versions: 1.3.1, 1.3.0, 1.1.90 (of course I cleaned up before trying another version). But OpenCV seems not to recognize /opt/libjpeg-turbo/lib/libjpeg.a. After running ccmake, in Media I/O, JPEG is reported to use /opt/libjpeg-turbo/lib/libjpeg.a (ver  ). Yes, the version info is missing, as if the library is not valid.

Even with version info missing, OpenCV compiled without any error, but performance is poor. I can run "time ./framegrabber -f mjpg -H 480 -W 640 -c 1000 -p" in ~34 secs, pretty close to 30 fps, but CPU usage is ~65%.
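When the version shows up blank like that, CMake typically found the library file but could not read the version from its headers. One thing worth trying is pointing OpenCV's stock JPEG variables at libjpeg-turbo explicitly; WITH_JPEG, BUILD_JPEG, JPEG_INCLUDE_DIR, and JPEG_LIBRARY are OpenCV's standard CMake options, though the paths below are assumptions matching your /opt install:

```shell
cmake -D WITH_JPEG=ON \
      -D BUILD_JPEG=OFF \
      -D JPEG_INCLUDE_DIR=/opt/libjpeg-turbo/include \
      -D JPEG_LIBRARY=/opt/libjpeg-turbo/lib/libjpeg.so \
      ..
```

If only the static libjpeg.a is available, note that linking it into OpenCV's shared libraries requires libjpeg-turbo to have been built with position-independent code (-fPIC); the shared .so avoids that issue.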

Thank you for your time. I appreciate any help :)

Anh Tung Vu

unread,
Apr 6, 2014, 2:28:01 AM4/6/14
to beagl...@googlegroups.com
I have successfully followed Michael Darling's guide, and I made an updated one: http://vuanhtung.blogspot.com/2014/04/and-updated-guide-to-get-hardware.html

Full credit to Michael Darling and Lemoneer :). My guide is only intended to update the steps.

n8a...@gmail.com

unread,
Jan 9, 2015, 1:33:54 AM1/9/15
to beagl...@googlegroups.com
Hi Michael,

A few tips to get better video capture quality in highly dynamic motion situations:

* Motion blur is not caused by low frame rate, it is caused by:
    * rolling shutters - global shutters exhibit much better motion capture
    * poor dynamic range - most sub-$60 webcams have narrow dynamic ranges, so the shutter speed is often long
    * insufficient light capture - between cheap optics and small CMOS chips, lower-end cameras tend to gather little light and thus, again, have long shutter times.
* Note that megapixels have almost nothing to do with the above, other than that larger CMOS chips usually have higher MP.  People need to stop worrying so much about resolution and demand better quality for any given resolution.  </soapbox> <!-- your OCD wishes I had the other tag... now I'm not even going to finish this comment tag.  ;-)
* If you are paying less than $100 for a given camera, it is almost certainly a rolling shutter camera.  If you find a good global shutter, high-dynamic range camera for < $100, please let us all know.

Good luck,

-Nate

On Sunday, May 12, 2013 at 11:28:09 PM UTC-6, Michael Darling wrote:

Hi Martin,

I'm not sure if you're still interested in helping me, but I did want to let you know that I have finally been able to grab 640x480 frames on my BeagleBone from the PS3 webcam.  I did end up using your custom capture code since the framerate setting in OpenCV doesn't work. (Thanks!)

I am able to set the camera to 15fps and capture frames without select timeout errors; however, I end up with significant motion blur due to the low framerate.  (Again, I plan to put this system on an airplane, so that won't cut it for me.)  I blindly made a couple of adjustments to your code, and am able to get frames with the camera set at 30 fps if I open 3 instances of my program, then close two.  (This is what I have to do on my Mac with the 3rd party macam driver for the PS3 Eye -- that's where I got the idea from.)  Unfortunately, even at the 30 fps setting, I really am only getting about 10.

For my application, it is okay if I only grab frames at 10 Hz, but I need to have the camera operating at high enough of a frame rate that I can eliminate motion blur.  I'm not very familiar with Video4Linux or the nitty-gritty of capturing frames from a webcam, so I was wondering if you might be able to provide some guidance.  Is there any way to be able to eliminate motion blur with a slow embedded processor and just tolerate dropped frames, or am I pretty much hosed?

Thanks for any help you can provide.
-Mike


On Tue, Apr 2, 2013 at 1:44 PM, Michael Darling <fndrpl...@gmail.com> wrote:
Sorry, just to be clear... the conf file I used was actually called hiResMotion.conf both times. I just changed the resolution between the two instances.

On Tuesday, April 2, 2013 1:19:09 PM UTC-7, Michael Darling wrote:
Hi Martin,

Sorry it took me so long to get back. I was having problems getting a stable version of Ubuntu installed on my board.  A new version was just released and that solved my problems.

I just installed the motion package.  I copied the default motion.conf file to my working directory, renamed it, and changed the width and height values to 320 and 240, respectively -- Everything works as expected using the PS3 Eye:

ubuntu@arm:~/Motion$ motion -c loResMotion.conf
[0] Processing thread 0 - config file hiResMotion.conf
[0] Motion 3.2.12 Started
[0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478784
[0] Thread 1 is from hiResMotion.conf
[0] motion-httpd/3.2.12 running, accepting connections
[0] motion-httpd: waiting for data on port TCP 8080
[1] Thread 1 started
[1] cap.driver: "ov534"
[1] cap.card: "USB Camera-B4.09.24.1"
[1] cap.bus_info: "usb-musb-hdrc.1-1"
[1] cap.capabilities=0x05000001
[1] - VIDEO_CAPTURE
[1] - READWRITE
[1] - STREAMING
[1] Config palette index 8 (YU12) doesn't work.
[1] Supported palettes:
[1] 0: YUYV (YUYV)
[1] Selected palette YUYV
[1] Test palette YUYV (320x240)
[1] Using palette YUYV (320x240) bytesperlines 640 sizeimage 153600 colorspace 00000008
[1] found control 0x00980900, "Brightness", range 0,255 
[1]     "Brightness", default 0, current 0
[1] found control 0x00980901, "Contrast", range 0,255 
[1]     "Contrast", default 32, current 32
[1] found control 0x00980911, "Exposure", range 0,255 
[1]     "Exposure", default 120, current 120
[1] found control 0x00980912, "Auto Gain", range 0,1 
[1]     "Auto Gain", default 1, current 1
[1] found control 0x00980913, "Main Gain", range 0,63 
[1]     "Main Gain", default 20, current 20
[1] mmap information:
[1] frames=4
[1] 0 length=155648
[1] 1 length=155648
[1] 2 length=155648
[1] 3 length=155648
[1] Using V4L2
[1] Resizing pre_capture buffer to 1 items
[1] Started stream webcam server in port 8081
[1] File of type 8 saved to: /tmp/motion/01-20130402201014.swf
[1] File of type 1 saved to: /tmp/motion/01-20130402201014-00.jpg
[1] File of type 1 saved to: /tmp/motion/01-20130402201018-01.jpg
[1] File of type 1 saved to: /tmp/motion/01-20130402201019-01.jpg
[1] File of type 1 saved to: /tmp/motion/01-20130402201021-00.jpg


But if I change the height and width in the conf file to 640 and 480, respectively, I get the following:

ubuntu@arm:~/Motion$ motion -c hiResMotion.conf
[0] Processing thread 0 - config file hiResMotion.conf
[0] Motion 3.2.12 Started
[0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478784
[0] Thread 1 is from hiResMotion.conf
[0] motion-httpd/3.2.12 running, accepting connections
[0] motion-httpd: waiting for data on port TCP 8080
[1] Thread 1 started
[1] cap.driver: "ov534"
[1] cap.card: "USB Camera-B4.09.24.1"
[1] cap.bus_info: "usb-musb-hdrc.1-1"
[1] cap.capabilities=0x05000001
[1] - VIDEO_CAPTURE
[1] - READWRITE
[1] - STREAMING
[1] Config palette index 8 (YU12) doesn't work.
[1] Supported palettes:
[1] 0: YUYV (YUYV)
[1] Selected palette YUYV
[1] Test palette YUYV (640x480)
[1] Using palette YUYV (640x480) bytesperlines 1280 sizeimage 614400 colorspace 00000008
[1] found control 0x00980900, "Brightness", range 0,255 
[1]     "Brightness", default 0, current 0
[1] found control 0x00980901, "Contrast", range 0,255 
[1]     "Contrast", default 32, current 32
[1] found control 0x00980911, "Exposure", range 0,255 
[1]     "Exposure", default 120, current 120
[1] found control 0x00980912, "Auto Gain", range 0,1 
[1]     "Auto Gain", default 1, current 1
[1] found control 0x00980913, "Main Gain", range 0,63 
[1]     "Main Gain", default 20, current 20
[1] mmap information:
[1] frames=4
[1] 0 length=614400
[1] 1 length=614400
[1] 2 length=614400
[1] 3 length=614400
[1] Using V4L2
[1] Resizing pre_capture buffer to 1 items
[1] v4l2_next: VIDIOC_DQBUF: EIO (s->pframe 0): Input/output error
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] Error capturing first image
[1] Started stream webcam server in port 8081
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] Video device fatal error - Closing video device
[1] Closing video device /dev/video0
[1] Retrying until successful connection with camera
[1] cap.driver: "ov534"
[1] cap.card: "USB Camera-B4.09.24.1"
[1] cap.bus_info: "usb-musb-hdrc.1-1"
[1] cap.capabilities=0x05000001
[1] - VIDEO_CAPTURE
[1] - READWRITE
[1] - STREAMING
[1] Config palette index 8 (YU12) doesn't work.
[1] Supported palettes:
[1] 0: YUYV (YUYV)
[1] Selected palette YUYV
[1] Test palette YUYV (640x480)
[1] Using palette YUYV (640x480) bytesperlines 1280 sizeimage 614400 colorspace 00000008
[1] found control 0x00980900, "Brightness", range 0,255 
[1]     "Brightness", default 0, current 0
[1] found control 0x00980901, "Contrast", range 0,255 
[1]     "Contrast", default 32, current 32
[1] found control 0x00980911, "Exposure", range 0,255 
[1]     "Exposure", default 120, current 120
[1] found control 0x00980912, "Auto Gain", range 0,1 
[1]     "Auto Gain", default 1, current 1
[1] found control 0x00980913, "Main Gain", range 0,63 
[1]     "Main Gain", default 20, current 20
[1] mmap information:
[1] frames=4
[1] 0 length=614400
[1] 1 length=614400
[1] 2 length=614400
[1] 3 length=614400
[1] Using V4L2
[1] v4l2_next: VIDIOC_DQBUF: EIO (s->pframe 0): Input/output error
[1] v4l2_next: VIDIOC_QBUF: Invalid argument
[1] Video device fatal error - Closing video device
[1] Closing video device /dev/video0
^C[0] httpd - Finishing
[0] httpd Closing
[0] httpd thread exit
[1] Thread exiting
[0] Motion terminating

It looks like the motion package is using Video4Linux, according to the Motion homepage.  Besides the fact that I am using a Rev. A6a board, what could possibly be different in my setup compared to yours?  I am running the 2013-03-28 Quantal 12.10 version of Ubuntu for BeagleBone.

Thanks!


On Wednesday, March 13, 2013 7:57:21 AM UTC-7, Martin wrote:
Just out of curiosity:

Have you had a look at the "motion" package (http://www.lavrsen.dk/foswiki/bin/view/Motion/WebHome)?

I am using this on a beaglebone A3 board running ubuntu. Motion can be installed using "sudo apt-get install motion". On my board it can capture 640x480 images without problems.

I am not sure if motion uses OpenCV or how it grabs images from the camera. 

But maybe worth a look if you can get it to work for your camera and board, and if it works take a look at how it does the capture, I believe there is source code available.

Martin

On Tuesday, March 12, 2013 7:57:36 PM UTC, Michael Darling wrote:
Update:  I set up a simple OpenCV script to capture frames using the tools developed by Martin Fox.  320x240 frames are captured no problems, but no luck at 640x480 -- same select timeout errors.  The result was the same for all three cameras I tried:

Capture: capabilities 5000001
Capture: channel 0
Capture: input 0 ov534 0
Capture: format YUYV YUYV
Capture: format RGB3 RGB3
Capture: format BGR3 BGR3
Capture: format YU12 YU12
Capture: format YV12 YV12
Capture: dimensions 640 x 480
Capture: bytes per line 1280
Capture: frame rate 30 fps
Capture: 4 buffers allocated
Capture: buffer length 614400
Capture: buffer length 614400
Capture: buffer length 614400
Capture: buffer length 614400
Capture 640 x 480 pixels at 30 fps
Capture: select timeout
Capture: select timeout

Any other ideas? 
