We are making our own cameras using the IMX412 sensor along with a selected 25 mm focal length lens. We have designed the camera PCB for the sensor along with the carrier board for the Orin AGX module. We are developing a product that uses 36 cameras per unit and want to optimize the hardware. We would like to have access to the Camera ISP Tuning tools to tune our cameras for optimal performance. Is this something that we can get access to?
After I have finished the internal setup of the camera, where I hit the 4 targets as well as possible, I spend some time on fine-tuning.
I position 5 pieces of wood as in the picture, which make up my test targets.
My biggest problem with this process is the light. Even the lousy original LED chain is too bright and my marks burn out. Today I turned off the light and put some duct tape over the light chain; that helps to see the points with the camera, but everything else in my workshop is now hopelessly dark.
The VVDN team brings unmatched expertise in imaging, ISP tuning, sensor characterization, and calibration, covering a wide array of sensors, platforms, and applications. Our specialized in-house image quality tuning and testing lab is designed to accelerate camera development and deliver vision solutions with a faster time to market.
VVDN works on 3A digital imaging technology (auto exposure, auto white balance, auto focus) to achieve maximum image contrast, correct overexposure or underexposure of the subject, and compensate for chromatic aberration under different lighting conditions.
We're using a camera tuning blob ( -depthai-data-local/misc/tuning_exp_limit_500us.bin), but a maximum shutter time of 500us is a bit too short. The maximum shutter time of the other blob ( -depthai-data-local/misc/tuning_exp_limit_8300us.bin) is too long. Can we make a custom blob too? Also, what exactly does the blob contain?
Hi @jakaskerl ,
Ok thanks, we'll do that. Right now we use the camera tuning blob, but we're also setting lots of camera parameters manually. And I suspect that setting lots of parameters manually after importing the camera blob might mess up the tuning again. So I'd be very happy to know which parameters are in the camera blob.
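For context, this is roughly how we load the blob and set manual controls with the DepthAI Python API; the blob path and the exposure/white-balance values below are just placeholders for illustration:

    import depthai as dai

    pipeline = dai.Pipeline()
    # Load the tuning blob before the pipeline is started on the device (path is a placeholder)
    pipeline.setCameraTuningBlobPath("tuning_exp_limit_500us.bin")

    cam = pipeline.create(dai.node.ColorCamera)
    # Manual controls set here are applied on top of whatever the blob configures,
    # which is why we wonder how the two interact
    cam.initialControl.setManualExposure(1000, 800)   # exposure in microseconds, ISO
    cam.initialControl.setManualWhiteBalance(4000)    # colour temperature in kelvin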
When using the AprilTag pipeline, you should use as high a resolution as you can while still maintaining a reasonable FPS. This is because higher resolution allows you to detect tags with higher accuracy and from larger distances.
Camera exposure and brightness control how bright the captured image will be, although they function differently. Camera exposure changes how long the camera shutter lets in light, which changes the overall brightness of the captured image. This is in contrast to brightness, which is a post-processing effect that boosts the overall brightness of the image at the cost of desaturating colors (making colors look less distinct).
For all pipelines, exposure time should be set as low as possible while still allowing for the target to be reliably tracked. This allows for faster processing as decreasing exposure will increase your camera FPS.
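As a rough worked example: with a 20 ms exposure, each frame needs at least 20 ms to capture, so the camera cannot exceed about 50 FPS; dropping the exposure to 10 ms raises that ceiling to roughly 100 FPS (the actual frame rate also depends on sensor readout and processing time).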
For reflective pipelines, after adjusting exposure and brightness, the target should be lit green (or the color of the vision tracking LEDs used). The more distinct the color of the target, the more likely it will be tracked reliably.
Unlike with retroreflective tape, AprilTag tracking is not very dependent on lighting consistency. If you have trouble detecting tags due to low light, you may want to try increasing exposure, but this will likely decrease your achievable framerate.
Orientation can be used to rotate the image prior to vision processing. This can be useful for cases where the camera is not oriented parallel to the ground. Do note that this operation can in some cases significantly reduce FPS.
This changes the resolution which is used to stream frames from PhotonVision. This does not change the resolution used to perform vision processing. This is useful to reduce bandwidth consumption on the field. In some high-resolution cases, decreasing stream resolution can increase processing FPS.
Specifically, the ALSC module has been dropped completely, as the data included in this module is certainly not applicable to most lens/sensor combinations we would use in a film scanner. Also, the color handling was completely modified; lastly, the existing gamma curve was replaced with the sRGB curve, resulting in better color reproduction (the pop colors are gone, and highlights and shadows are better recovered).
This section is completely gone at the moment. The reason is that it actually describes the current lens/camera combination, so it depends on the lens you are using at the moment. Since the lens on the HQ camera can be changed, any lens-related data in the tuning file is usually not correct (unless your lens happens to be the one used in the calibration).
Rec. 709 features a little less contrast than the contrast curve of the standard file. This also has an impact on color saturation: images obtained with the v0.8 tuning file will have slightly less color saturation than images captured with the standard file.
In the narrow range from 3700 K to 4000 K, the principal red component varies quite a bit in the original file, resulting in noticeable color changes for small deviations of the estimated color temperature. In contrast, the variation of the matrix components in the new tuning file is much smoother.
Needless to say, I like the smooth variation of the new tuning file better than the wiggling of the standard tuning file. With the standard tuning file, small changes in estimated color temperature lead to rather strong variations in the coefficients of the color matrix. This is especially pronounced in the 4000 K to 4400 K range. Granted, the color shifts introduced by this behavior might not really be noticeable to the viewer.
PS. Additional information.
The problem is the same with the other tuning files in the same directory.
Including the directory path explicitly in the libcamera-still call works. Looks like there is an error in the default path used by the library.
@PM490 - I just tried the libcamera-still command on both my RP4 and my RP3, and it failed exactly like you described. Maybe I had the tuning file somewhere in my path when I tried it previously? I have no idea. Anyway, you already described a way to use it: either copy the tuning file from the default location to the directory you are calling libcamera-still from, or include the full path to make it work, like so:
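    libcamera-still --tuning-file /usr/share/libcamera/ipa/raspberrypi/imx477.json -o test.jpg

(Adjust the path to wherever the tuning file actually lives on your system; depending on the OS release the files may sit under /usr/share/libcamera/ipa/raspberrypi/ or /usr/share/libcamera/ipa/rpi/vc4/, and imx477.json here is just an example file name.)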
Yes, we did remeasure the colour matrices so that the default matrices that users will get are slightly more accurate. Only the default tunings are changed, obviously not any others such as the scientific ones. Specifically, the files that have been changed are:
imx219.json
imx296.json
imx477.json
imx708.json
imx708_wide.json
ov5647.json
This change is not in the Raspberry Pi fork of libcamera yet, nor therefore in any of the packages that we distribute. The change is in mainline libcamera, though we recommend that Raspberry Pi users use the Raspberry Pi fork, because it contains better platform-specific settings. So only users who are building and installing from the upstream (non-recommended) repository would notice any change currently.
If I understand that correctly: the standard tuning files will have changed color science sometime in the near future, compared with the old ones. If you use one of these tuning files in your scanner, you might get results that differ from previous scans.
The CCMs (color matrices) show differences. From what I have seen in the libcamera repository, they changed the algorithm that optimizes the color matrices so that it no longer operates in RGB space but in Lab space or so. They still look much too weird to me:
If this were happening to my cams, I would set up a trap to capture the exact moment it happened. This would involve setting a single Trigger/Action rule for each cam. The Trigger on each rule would be when that cam turns off. The Action would be to turn on a designated Bulb for 1 minute.
So, when this happens again, all you have to do is look at the Rules History and find exactly when that Rule executed. That will give you an indication of when it happened, whether all the cams did it at the same time, or whether they were doing it at different times.
If this is happening on a regular basis, keeping a log of exactly when it happens on each cam is helpful for determining a pattern. That is why there needs to be some sort of feedback reporting produced by the Rules I suggested. The only other possible source is the shared account.
I have the same problem. I am getting damn tired of reinstalling the hard-wired carport floodlight and camera, a portable cam, and 2 doorbell cams. I was an early adopter and have various generations of camera: about 8 installed and several in boxes.
Moderator Note: Personal information has been manually removed from this post. Such information often gets included inadvertently in an email signature block when replying by email. The forum software attempts to automatically remove email signatures but it is not always successful. When replying to the forum by email, it is best to remove the signature block yourself before sending.
With the Camera class, it is not only possible to adjust camera settings, but we can also save the current state of all the camera settings in a file and initialize the camera settings from such a file.
Once we have opened a camera using the Camera class, if we want to configure the sensor Anti-Flicker filter similarly to what was shown in the previous section, we have to get the device from the camera with get_device():
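Here is a minimal sketch of that with the Python bindings; the name of the Anti-Flicker facility getter and its methods are written from memory and may differ between SDK versions, so check the API reference of your release:

    from metavision_sdk_stream import Camera

    # Open the first available event camera
    camera = Camera.from_first_available()
    device = camera.get_device()

    # Retrieve the Anti-Flicker facility from the device (facility getter name assumed)
    antiflicker = device.get_i_antiflicker_module()
    if antiflicker is not None:
        antiflicker.set_frequency_band(100, 120)  # band covering 50/60 Hz lighting harmonics
        antiflicker.enable(True)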
Say you tuned your camera using Metavision Studio (adjusted the biases, set an ROI, enabled a hardware filter, etc.), and you want to re-use those settings in some other application, like testing some algorithms with our Advanced modules samples.
The camera settings are saved in a JSON file that you can also edit manually as long as you respect the structure. For example, here is a JSON settings file that will set a RONI window (Region of Non Interest) of size 50,50 at coordinates 100,200 and that will set bias_fo to 11:
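Sketched out, such a file could look roughly like this; apart from bias_fo, the key names below are illustrative placeholders rather than the exact schema the SDK writes, so use a file produced by an actual save as your reference:

    {
        "roi": {
            "mode": "RONI",
            "window": { "x": 100, "y": 200, "width": 50, "height": 50 }
        },
        "biases": {
            "bias_fo": 11
        }
    }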