
radiometric calibration for a digital camera using a color chart?


MBALOVER

Nov 27, 2009, 9:08:15 AM
Hi all,

I know that a good way to radiometrically calibrate a low-cost camera
(which uses a color filter array) is to use a uniformly illuminated
color chart (such as the Macbeth color chart).

However, I do not know exactly how to do it. I have some experience
with colorimetric calibration of a scanner using a Q-60 target, and I
am wondering if it is similar to radiometric calibration of a camera.

I intend to do the following:

1. Take a picture of the Macbeth color chart in a light box. Average
the RGB values over each gray patch.

2. Measure the XYZ values of the gray patches of the color chart using
a spectroradiometer.

3. Fit the Y values against the corresponding L values (where L = R,
G, or B) for each color channel.
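A rough sketch of what I have in mind for step 3 (assuming numpy; the
patch values and Y readings below are invented placeholders, and a
simple power-law fit per channel is just one possible model):

```python
import numpy as np

# Mean camera values of the gray patches, one row per patch (R, G, B).
# These numbers are made up for illustration only.
rgb = np.array([
    [243.0, 244.0, 242.0],
    [200.0, 201.0, 199.0],
    [160.0, 161.0, 159.0],
    [121.0, 122.0, 120.0],
    [ 84.0,  85.0,  83.0],
    [ 50.0,  51.0,  49.0],
])

# Spectroradiometer Y readings for the same patches (also made up).
Y = np.array([90.0, 59.1, 36.2, 19.8, 9.0, 3.1])

# Fit a simple power law Y = a * L^g per channel by linear regression
# in log-log space: log Y = log a + g * log L.
for ch, name in zip(range(3), "RGB"):
    g, log_a = np.polyfit(np.log(rgb[:, ch]), np.log(Y), 1)
    print(f"{name}: gamma ~= {g:.2f}")
```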

I am wondering if the procedure is reasonable.

Do I need to worry about zoom, exposure, aperture, noise removal, etc.?

If anybody knows where I can read the details of radiometric
calibration with a color chart (either a book or a paper), please let
me know.

Thanks a lot.

Mike Russell

Nov 27, 2009, 7:17:29 PM
On Fri, 27 Nov 2009 06:08:15 -0800 (PST), MBALOVER wrote:

> Hi all,
>
> I know a good way to radiometrically calibrate a low-cost camera
> (which uses a color filter array) is using a uniformly illuminated
> color chart (such as Macbeth color chart).
>
> However, I do not know exactly how to do it. I have some experiences
> with colorimetric calibration for a scanner with a Q-60 target. I am
> wondering if it is similar to the radiometric calibration for a
> camera.
>
> I intend to do the following:
>
> 1. Take a picture of the Macbeth color chart in a light box. Average
> RGB values each gray patch.
>
> 2. Measure the XYZ values of the gray patches of the color chart using
> a spectroradiometer.
>
> 3. Fit Y values and corresponding L values (where L=R, G, or B) for
> each color channel.
>
> I am wondering if the procedure is reasonable.

Very reasonable for a first attempt, but not very practical unless you are
mainly interested in being able to make excellent images of the gray
patches of the color chart. One problem is that the checker does not have
enough patches. People have discovered that you need a relatively large
number of patches, dozens to hundreds, in order to generate an accurate
profile for a camera.

I think you will be surprised, though, at how poorly the colored areas
of the chart are reproduced. The reason for this is fairly
interesting, and has to do with the spectral response of the camera's
Bayer filters and light source, relative to the pigments of the
checker.

> Do I need to care about zoom, exposure, aperture, noise removing, etc?

Absolutely. Lenses generally have significant vignetting, and this can
vary for the same lens, depending on focal length and aperture.

> If anybody knows where I can read detail of the radiometric
> calibration with a color chart (either book or paper), please let me
> know.

Whether or not you are working with profiles, I would recommend that you at
least do a survey of what's been done already. I'd start by reading some
of the introductory material at the ICC web page, http://www.color.org .
Don't get bogged down in the details, though.

There is also excellent open source software, including camera calibration
utilities, available at http://www.littlecms.com and
http://www.argyllcms.com .

Hopefully Gernot Hoffmann or Danny Rich will have some suggestions.
--
Mike Russell - http://www.curvemeister.com

Gernot Hoffmann

Nov 28, 2009, 3:01:07 AM

Mike Russell wrote:

Hello Mike, hello OP,

I have done some experiments:

http://www.fho-emden.de/~hoffmann/camcal17122006.pdf

In my opinion, a camera cannot be color-calibrated in general. But a
reasonable calibration is possible for the reproduction of paintings
and the like, where camera parameters and illumination are the same
when taking shots of a flat object with and without the ColorChecker.
In the document above, the package ProfileMaker 5 was used.
Each field of the ColorChecker shows the raw color, the corrected
color, and the true color. The last two are rather alike.

The same data were handled by much simpler mathematics with
less success (not satisfying). As opposed to the ProfileMaker profile,
it is merely a matrix correction:

http://www.fho-emden.de/~hoffmann/leastsqu16112006.pdf
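For what it's worth, that kind of matrix correction can be sketched in
a few lines (a least-squares sketch only, assuming numpy; the patch
values below are invented, not the data from the documents above):

```python
import numpy as np

# Camera RGB of a few patches (invented placeholder values).
cam = np.array([
    [0.80, 0.20, 0.10],
    [0.30, 0.70, 0.15],
    [0.10, 0.25, 0.60],
    [0.50, 0.50, 0.50],
    [0.90, 0.85, 0.80],
    [0.05, 0.05, 0.05],
])

# Reference values of the same patches (also invented; in practice,
# measured values of the ColorChecker fields).
ref = np.array([
    [0.75, 0.18, 0.09],
    [0.28, 0.72, 0.14],
    [0.12, 0.24, 0.65],
    [0.48, 0.52, 0.49],
    [0.88, 0.86, 0.82],
    [0.06, 0.05, 0.06],
])

# Solve cam @ M ~ ref for the 3x3 matrix M in the least-squares sense,
# then apply it to the camera values.
M, *_ = np.linalg.lstsq(cam, ref, rcond=None)
corrected = cam @ M
print(np.abs(corrected - ref).max())  # worst-case residual over the patches
```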

Best regards --Gernot Hoffmann

ImageAnalyst

Dec 5, 2009, 9:08:55 PM
It depends on whether you're doing color correction (RGB to corrected
RGB) or color standardization (RGB to estimated XYZ). I usually do
color correction followed by XYZ or Lab estimation.

You're sort of on track, but you forgot to do a background correction
to account for vignetting and non-uniform lighting; otherwise a given
color patch won't have the same RGB values in different places in the
field of view, even though it should. So correct for that - you can do
it different ways, but generally it involves a division (NOT a
subtraction) by the background.

Then you can develop a model to convert RGB into estimated X, another
equation to convert RGB into estimated Y, and a third equation to get
Z. You might use a cross-channel cubic polynomial equation, e.g.:

estimatedX = a0 + a1*R + a2*G + a3*B + coefficients times RG, GB, RB,
R^2, G^2, B^2, etc. terms
estimatedY = b0 + b1*R + b2*G + b3*B + coefficients times RG, GB, RB,
R^2, G^2, B^2, etc. terms
estimatedZ = c0 + c1*R + c2*G + c3*B + coefficients times RG, GB, RB,
R^2, G^2, B^2, etc. terms

The b's won't equal the a's or the c's - none of the three sets will
be the same, of course. Go out as many terms as you want - I use 14,
except in cases where I want to estimate very white things. Because
they are near the edge of your training set, a cubic can get out of
hand and extrapolate wildly there, so for those I use a linear fit
rather than a cubic - it's much better behaved.
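The two steps above (flat-field division, then a cross-channel
polynomial fit) can be sketched like this, assuming numpy and using
entirely synthetic placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) Background correction: DIVIDE by an image of a uniform target
# shot under the same lighting, then rescale to preserve overall level.
img = rng.uniform(50.0, 200.0, size=(64, 64, 3))   # placeholder scene
flat_field = np.full((64, 64, 3), 180.0)           # placeholder background shot
corrected_img = img / flat_field * flat_field.mean()

# (2) Cross-channel polynomial model from corrected RGB to estimated Y.
#     Here: constant, linear, cross, and square terms (10 coefficients).
def design(rgb):
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(R), R, G, B,
                            R * G, G * B, R * B,
                            R ** 2, G ** 2, B ** 2])

patch_rgb = rng.uniform(0.0, 1.0, size=(24, 3))    # placeholder patch means
# Placeholder "measured" Y, made exactly linear so the fit recovers it.
patch_Y = 0.2 * patch_rgb[:, 0] + 0.7 * patch_rgb[:, 1] + 0.1 * patch_rgb[:, 2]

b, *_ = np.linalg.lstsq(design(patch_rgb), patch_Y, rcond=None)
est_Y = design(patch_rgb) @ b
print(np.abs(est_Y - patch_Y).max())  # fit error on the training patches
```

The same design-matrix function with b replaced by an a or c vector
gives the estimated X and Z fits.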

Back in the late '80s and early '90s, Henry Kang of Xerox did some
experiments to determine how many color chips you need and how many
terms in your equation you need. Basically he determined that you
don't need more than about 24 chips. Using the
two-hundred-and-something-chip Macbeth chart was overkill and didn't
give any more accuracy on chips not in the training set than 24 chips
did. He also found that going beyond about a 3rd-order polynomial did
not gain you anything in terms of accuracy on test chips. You might
look up his papers; they go into the math.

You can get even fancier and more accurate than that if you want by
characterizing the OECF of the camera, but I'm not going to get into
that. Basically it undoes the "gamma" of the camera to give increased
accuracy. Kang worked with scanners, but I asked him personally and he
said that the same process would also work for cameras, and in fact I
use it for cameras. Like Gernot said, it's not going to be perfect,
but it will probably work well enough for you to do what you need to
do (e.g. color image analysis, reproduction, etc.).

Mike Russell

Dec 14, 2009, 6:24:03 AM
On Sat, 5 Dec 2009 18:08:55 -0800 (PST), ImageAnalyst wrote:

> You're sort of on
> track but you forgot to do a background correction to account for
> vignetting and non-uniform lighting, otherwise that certain color
> patch won't have the same rgb values if it were in different places in
> the field of view despite the fact that it should. So correct for
> that - you can do it different ways but generally it involves a
> division (NOT a subtraction) by the background.

I would add that you can use the free software tool Gimp to accomplish this
division operation, if you do not already have a means of doing so.
Surprisingly, Photoshop does not support this.
