Webcam Matrix


Daria

Aug 4, 2024, 8:08:39 PM
to gritysunprom
Is there any guidance on how to get an Arduino to capture an image or video feed and send it to an LED matrix? I can get the matrix working on its own, but when it comes to getting the video feed from the camera to the Arduino and then to the matrix, I get lost. Any help?

The Arduino doesn't have enough memory to process a whole image, as far as I know. You would have better luck using a computer to "read" the webcam and send data to the Arduino, which controls the matrix.


On the computer, yes. There is definitely a way to calculate the average brightness and color of 64 regions of an image, and I think the easiest way to convey that to the Arduino would be to send 64 bytes, one per matrix pixel, or more data if it is a multi-color matrix.
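A minimal sketch of that computer-side averaging, assuming Python with OpenCV and pyserial on the PC; the port name, baud rate, and one-byte-per-pixel protocol are placeholders, not something from this thread:

```python
import numpy as np

def frame_to_matrix(frame, grid=8):
    """Average an HxWx3 BGR frame down to a grid x grid block of
    brightness values (0-255), one byte per LED-matrix pixel."""
    h, w = frame.shape[:2]
    gray = frame.mean(axis=2)                 # cheap brightness estimate
    bh, bw = h // grid, w // grid
    out = np.empty((grid, grid), dtype=np.uint8)
    for r in range(grid):
        for c in range(grid):
            block = gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            out[r, c] = int(block.mean())
    return out.flatten()                      # 64 bytes for an 8x8 matrix

def stream_to_arduino(port_name="/dev/ttyUSB0"):
    """Hypothetical glue: grab webcam frames with OpenCV and ship 64 bytes
    per frame to the Arduino over serial (needs opencv-python + pyserial)."""
    import cv2, serial
    port = serial.Serial(port_name, 115200)
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        port.write(frame_to_matrix(frame).tobytes())
```

On the Arduino side you would just read 64 bytes per frame and map each one onto the matrix driver you already have working.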

But the Arduino definitely cannot do this on its own with a normal webcam. Maybe if you could find a super low-res camera; I think the ones in optical mice are pretty low-res.

Here's something promising:




Check out the Video Peggy ("Video Peggy in action", Evil Mad Scientist Laboratories). It uses a webcam connected to a PC to capture the data; the Processing language is then used to format the image data into a 25 x 25 matrix with 16 levels of grey. This matrix is then sent to the Peggy board (which is essentially an Arduino connected to the large LED matrix) for display. This happens 25 times every second.


THAT SAID, don't give up just because I say so! It'd be a good learning experience to get it all working. You might be able to grab their Processing code and modify it to display an 8x8 non-greyscale matrix; that way you could see what it'd look like first.


I would think that the actual size of a chessboard square would have to be known, but I did not see any mention of setting that in the tutorial link I provided. I basically had to scale down the chessboard grid so that it would work with my camera setup.


However, as I understand it, this refers to the number of squares in the grid, not the physical size of each square (in mm or inches). Interestingly enough, their example grid appears to actually be 10 x 7.


I believe you are referring to the input "squareSize", which defaults to 1. Is this the value that should be changed, and can the input be in any desired units? I don't believe anything else has to be changed except the grid array size?


The other point I wanted to note is that I am using a micro video lens and therefore dealing with a smaller field of view / region of interest. I don't believe this should play a role, but the size of a single square is about 1.4 mm x 1.4 mm. So if I wanted units of mm, then squareSize = 1.4, correct?


I printed the original chessboard, and each square seems to be about 21 mm. After measuring my chessboard more closely, each square is about 1.03 mm. However, by simply scaling, I do not see how the math works out.


I added that in and started playing with the parameters. I gave "square_size" a value of 1 and then 2. In either case, the camera matrix is the same. The rvecs are also the same. The tvecs, however, are twice as large when "square_size" is 2. Still, I don't see why the calculation for focal length doesn't work out using the data I get from the camera matrix. Is there something I am not taking into account?


Your pixel size is 4.8 μm, and you can multiply that by the resolution (the default camera resolution is 1280 x 1024) to get the sensor size in μm. Convert it to mm, then apply the formula Fx = fx * W / w. Use that instead of your 12 mm figure.
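Spelled out with the numbers from this thread (4.8 μm pixels, a 1280 px default width); the fx of 1100 px in the last line is only an illustrative value, not one from this poster's data:

```python
def sensor_width_mm(pixel_size_um, width_px):
    """Physical sensor width: pixel pitch (micrometres) times pixel count."""
    return pixel_size_um * width_px / 1000.0   # um -> mm

def focal_length_mm(fx_px, sensor_w_mm, image_w_px):
    """F = fx * W / w: convert a focal length in pixels to millimetres."""
    return fx_px * sensor_w_mm / image_w_px

W = sensor_width_mm(4.8, 1280)        # 6.144 mm, not the 12 mm datasheet figure
F = focal_length_mm(1100.0, W, 1280)  # illustrative fx = 1100 px -> ~5.28 mm
```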


Hi solarflare. Since the real value and the calculated value are in the same order of magnitude, I guess the problem could be that "w" and "W" don't match each other. I'm not sure about that, but going by your description, I believe you did nothing wrong in the calculation: "w = 1264 pixels (image size I set), W = 12 mm (from datasheet)".

As you said, W = 12 mm comes from the datasheet, but at your selected image size, or in video capture mode, perhaps not the whole CCD/CMOS sensor array is in use. These can simply differ for a webcam; for example, my Logitech C270 lists an optical resolution (true) of 1280 x 960 but captures video at 800 x 600. So if I use 800 instead of 1280 to calculate w/W, then ... To avoid problems like this, I suggest that if you can find the mm/pixel pitch of your camera's sensor, you use it to try again.


I don't know the camera parameters (it's a cheap webcam and the vendor does not provide them). I'm able to get the camera intrinsics with cvCalibrateCamera2 (after some cvFindChessboardCorners, as described here), and I save them with cvSave.


I am using OpenCV to calibrate my webcam. What I have done is fix my webcam to a rig so that it stays static, move a chessboard calibration pattern in front of the camera, and use the detected points to compute the calibration. So this is as we find in many OpenCV examples ( _py_calibration.html)


However, what I am interested in is the global extrinsic matrix: once I have removed the checkerboard, I want to be able to specify a point in the image scene (x, y) and its height, and have it give me the position in world space. As far as I understand, I need both the intrinsic and extrinsic matrix for this. How should one proceed to compute the extrinsic matrix from here? Can I use the measurements I have already gathered from the chessboard calibration step to compute the extrinsic matrix as well?


You will need at least 3 non-collinear points with corresponding 3D-2D coordinates for solvePnP to work (link), but more is better. To have good quality points, you could print a big chessboard pattern, put it flat in the floor, and use it as a grid. What's important is that the pattern is not too small in the image (the larger, the more stable your calibration will be).
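A minimal sketch of that floor-grid approach in Python, assuming OpenCV; the 9 x 6 pattern, the 25 mm square size, and the intrinsics passed in are placeholder values, not numbers from this thread:

```python
import numpy as np

def floor_grid_points(cols, rows, square_mm):
    """3D coordinates of the inner chessboard corners, flat on the floor (Z = 0)."""
    objp = np.zeros((rows * cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_mm
    return objp

def extrinsics_from_floor(img, K, dist):
    """Detect the floor chessboard and solve for the camera pose (needs cv2)."""
    import cv2
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6))
    assert found, "chessboard not detected; make the pattern bigger in frame"
    objp = floor_grid_points(9, 6, 25.0)      # assumed 25 mm squares
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)                # rotation, world -> camera
    return R, tvec
```

With R and tvec (plus the intrinsics), you can then back-project an image point at a known height onto the floor plane.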


And, very important: for the intrinsic calibration you used a chess pattern with squares of a certain size, but you told the algorithm (which internally does something like solvePnP for each pattern view) that the size of each square is 1. This is not explicit, but it is done in line 10 of the sample code, where the grid is built with coordinates 0,1,2,...:
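The construction in question looks roughly like this (in the style of the OpenCV calibration tutorial); multiplying by the real square size is the only change needed to get metric units:

```python
import numpy as np

square_size = 1.0   # the tutorial's implicit value: one "square unit"
# square_size = 1.4 # e.g. mm, if your printed squares are 1.4 mm across

objp = np.zeros((6 * 7, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2) * square_size
# With square_size = 1, the corner coordinates are 0, 1, 2, ... and the
# resulting tvecs come out in "squares" rather than millimetres.
```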


(I haven't done this, but it is theoretically possible; do it if you can't do option 2.) Reuse the intrinsic calibration by scaling fx and fy. This is possible because the camera sees everything only up to a scale factor, and the declared size of a square only changes fx and fy (and the T in the pose for each square, but that's another story). If the actual size of a square is L, then replace fx and fy with L*fx and L*fy before calling solvePnP.
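As a sketch of that substitution (the intrinsic values here are illustrative, not taken from this thread):

```python
import numpy as np

def scale_intrinsics(K, L):
    """Fold a real square size L into an intrinsic matrix that was
    calibrated with an implicit square size of 1: scale fx and fy by L."""
    K2 = K.copy().astype(float)
    K2[0, 0] *= L   # fx -> L * fx
    K2[1, 1] *= L   # fy -> L * fy
    return K2       # cx, cy are left untouched

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
K_mm = scale_intrinsics(K, 1.4)   # e.g. squares that are really 1.4 mm wide
```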


Hi, I have a problem during camera calibration using a chessboard. I'm using the C++ calibrateCamera method to find the intrinsic matrix. The problem is that fx and fy vary from 700 to 1100 depending on the distance from the camera, and cx, cy vary as well. My question is: could this be caused by poor camera quality (I'm using a no-name webcam with 640x480 resolution)? Could the low resolution alone be the problem?


Did you use the program in the code examples or roll your own? If you wrote your own, use the example program to make sure you are getting EXACTLY the same numbers and error values in yours before trusting it.


Do you have a proper chessboard? The book says the chessboard should have at least several corners in each direction, and that one dimension should have an odd number of corners and the other an even number. I use a 9 x 6 corner chessboard printed on an 8 1/2 x 11 sheet of paper taped to some cardboard.


Make sure your chessboard takes up as much of the camera view as possible, especially with your low resolution. Don't be afraid to rotate and skew it, but when you do, keep it big; remember, you only need the inside corners within the camera frame. (Not having smaller sectional views might affect the distortion-correction numbers, but the basic matrix is probably the first step.)


Finally, use the drawChessboardCorners function (or the example code's show-chessboards option) to pop up the chessboard-by-chessboard results. Take a close look at where OpenCV thinks the corners are. With low-res video stills, I've seen chessboards that OpenCV thinks are good where the corners are drawn half a square off from where they should be. Don't use bad corners in your calibration.
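A quick way to exercise that check; the helper names are my own, and the synthetic board is only there so you can sanity-test the inspection pipeline before pointing it at real low-res stills:

```python
import numpy as np

def synthetic_chessboard(squares_x=10, squares_y=7, px=40):
    """Render a plain black-and-white chessboard image (uint8 grayscale)."""
    img = np.zeros((squares_y * px, squares_x * px), np.uint8)
    for r in range(squares_y):
        for c in range(squares_x):
            if (r + c) % 2 == 0:
                img[r * px:(r + 1) * px, c * px:(c + 1) * px] = 255
    return img

def inspect_corners(gray, pattern=(9, 6), out_path="corners_check.png"):
    """Draw OpenCV's detected corners so you can eyeball half-square misses
    (needs cv2; the output filename is a placeholder)."""
    import cv2
    found, corners = cv2.findChessboardCorners(gray, pattern)
    vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.drawChessboardCorners(vis, pattern, corners, found)
    cv2.imwrite(out_path, vis)
    return found
```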


Thank you very much for your advice. I switched from JavaCV to OpenCV to try the original calibration demo. Now I'm playing with it and hope to get better results. Just a small question: is it not important to place the chessboard far away from the camera? You said it is important to cover as much of the image as possible with the chessboard; I thought differences in position (depth) make the results better. :-/


Well, I am still learning as well, so if you find a trustworthy source that says far-away chessboards are important, listen to them first. But if you pull up the intermediate results for the far-away chessboards (the red/rainbow circles and lines drawn on the frames), you'll see a ton of error on low-res, if you get detections at all. You could do a bunch of these and pick out the good ones. However, my intuition, based on a moderate-level understanding of what's going on under the hood, is that the "main" camera-matrix parameters, fx, fy, cx, and cy, should be OK with only big chessboard views (just remember to tilt/turn). I think the smaller views might help pick out lens and translation distortion.


Sounds reasonable. With the chessboard closer to the camera the results seem better. I'm still getting a big average back-projection error, though, so I have some more questions, please:
1) The cx, cy values often don't refer to the exact middle of the sensor. Is there any way to find them and use them as fixed values afterwards?
2) What exactly is CV_CALIB_SAME_FOCAL_LENGTH? Is it just to simplify the calibration calculation?
3) Is it at all possible to get back-projection errors really close to zero with such a bad camera?
4) Can I somehow check the correctness of the intrinsic matrix manually? Say I use the points on the chessboard with their real size in mm, then for a known distance of some point from the camera calculate its projection and compare it with the real picture?
