Namely, I have a project where the resolution of the camera sensor is
important. I am trying to determine, with the best available resolution,
where an object lies within the sensor array. If I am using a 640x480
monochrome sensor, then I have intensity variation on a pixel-by-pixel
basis. The problem arises because I may need to use a color 640x480 sensor
(this is due to cost and availability) and therefore the intensity
resolution is split up among the R, G, and B pixels. If I have a pure red
object, then I can only determine its position in the camera array by looking
at the red pixels, of which there are far fewer than 640x480. This is an
unlikely scenario, but I am concerned about the loss of resolution from using
a color sensor.
How does a 640x480 24-bit RGB image get created from a 640x480 color sensor
using a Bayer pattern?
Can I convert the raw Bayer pattern data to gray scale, and if so, how?
Is there really a loss of resolution when trying to determine the position
of an object when using a color sensor as opposed to a monochrome sensor?
Any comments are appreciated.
Gerald Morrison
You can simply convert to greylevel with:
Y= 0.299*R + 0.587*G + 0.114*B
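
For illustration, a minimal C sketch of that conversion (this is just a
generic example, not from the original post; it assumes the image has already
been demosaiced to an interleaved 24-bit RGB buffer):

#include <stddef.h>

/* Convert interleaved 8-bit-per-channel RGB to 8-bit gray using the
 * BT.601 luma weights quoted above. */
void rgb_to_gray(const unsigned char *rgb, unsigned char *gray,
                 size_t width, size_t height)
{
    size_t i, n = width * height;
    for (i = 0; i < n; i++) {
        double r = rgb[3 * i + 0];
        double g = rgb[3 * i + 1];
        double b = rgb[3 * i + 2];
        double y = 0.299 * r + 0.587 * g + 0.114 * b;
        gray[i] = (unsigned char)(y + 0.5);   /* round to nearest */
    }
}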
Regards
http://members.telecom.at/~elektro
Gerald Morrison <geraldmorrison@(nospam)canada.com> wrote in message
37e0...@news.cadvision.com...
Not quite. You have to interpolate the 2 missing colors at
each pixel location first. And no, there's no "standard" way
of doing the interpolation.
Also, you must white balance, color correct, and possibly
gamma correct prior to converting to gray (taking the Y of YCbCr space).
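
To make the order of operations concrete, here is a rough outline (the helper
functions are hypothetical placeholders, not any particular camera's API):

#include <stddef.h>

/* Hypothetical placeholders for the steps listed above. */
void demosaic(const unsigned char *bayer, unsigned char *rgb,
              size_t w, size_t h);      /* interpolate the 2 missing colors */
void white_balance(unsigned char *rgb, size_t w, size_t h);
void color_correct(unsigned char *rgb, size_t w, size_t h);
void gamma_correct(unsigned char *rgb, size_t w, size_t h);
void rgb_to_gray(const unsigned char *rgb, unsigned char *gray,
                 size_t w, size_t h);   /* Y = 0.299R + 0.587G + 0.114B */

void bayer_to_gray(const unsigned char *bayer, unsigned char *rgb_tmp,
                   unsigned char *gray, size_t w, size_t h)
{
    demosaic(bayer, rgb_tmp, w, h);
    white_balance(rgb_tmp, w, h);
    color_correct(rgb_tmp, w, h);
    gamma_correct(rgb_tmp, w, h);
    rgb_to_gray(rgb_tmp, gray, w, h);
}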
CCR is correct about interpolating the two missing colors. I believe this
is called color science by the chip manufacturers. Is there anything out
there that gives examples of how this interpolation would be done? I'm not
talking about mathematical interpolation algorithms in the abstract; what I
would like to see are examples of how the interpolation of the two missing
colors is done based on the nearest neighbors. For example, a Bayer pattern
might look like this:
R G R G R G
G B G B G B
Pixels 1, 3, and 5 are red pixels. So for pixel 3 (which is red), how would I
determine the G and B components for that location? Is it just the average
of the nearest neighbor Gs and Bs or is it more sophisticated?
Once I have the G and B intensities for the pixel 3 location, I can then
convert it to a gray scale intensity. But, ... how is the actual gray
scale resolution affected by this conversion? I know I end up with 640x480
gray scale in the end (I started with a 640x480 color sensor that uses a
Bayer pattern) but did I cheat on the resolution by using information from
the nearest neighbors when I determined the gray scale intensity of the
pixel of interest?
Thanks!
-Gerald
ccr <c...@biomorphic.com> wrote in message
news:bc8E3.5785$_x1.1...@news5.giganews.com...
You hit it exactly. It's going to depend on how good your interpolation
algorithm is. And, as I said previously, there is no "standard" way of
doing this. There probably isn't even a "best" way.
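
For what it's worth, the simplest version of the nearest-neighbour averaging
you describe might look something like the sketch below. It assumes the
R G / G B layout from your example and is only meant as an illustration; real
cameras usually do something more elaborate (edge-aware interpolation, etc.):

#include <stddef.h>

/* Site colour for the layout in the earlier post:
 *   R G R G R G
 *   G B G B G B
 * Returns 0 = R, 1 = G, 2 = B. */
static int site_color(size_t row, size_t col)
{
    if (row % 2 == 0)
        return (col % 2 == 0) ? 0 : 1;   /* even rows: R G R G ... */
    return (col % 2 == 0) ? 1 : 2;       /* odd rows:  G B G B ... */
}

/* Nearest-neighbour averaging demosaic: each missing component is the mean
 * of the same-colour sites in the surrounding 3x3 window; the component
 * actually measured at a site is kept untouched. */
void demosaic_average(const unsigned char *bayer, unsigned char *rgb,
                      size_t width, size_t height)
{
    for (size_t row = 0; row < height; row++) {
        for (size_t col = 0; col < width; col++) {
            long sum[3] = {0, 0, 0};
            int count[3] = {0, 0, 0};
            for (long dr = -1; dr <= 1; dr++) {
                for (long dc = -1; dc <= 1; dc++) {
                    long r = (long)row + dr, c = (long)col + dc;
                    if (r < 0 || c < 0 ||
                        r >= (long)height || c >= (long)width)
                        continue;
                    int ch = site_color((size_t)r, (size_t)c);
                    sum[ch] += bayer[(size_t)r * width + (size_t)c];
                    count[ch]++;
                }
            }
            /* keep the measured component at its own site */
            int own = site_color(row, col);
            sum[own] = bayer[row * width + col];
            count[own] = 1;
            for (int ch = 0; ch < 3; ch++)
                rgb[3 * (row * width + col) + ch] =
                    (unsigned char)(count[ch] ? sum[ch] / count[ch] : 0);
        }
    }
}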
> Once I have the G and B intensities for the pixel 3 location, I can then
> convert it to a gray scale intensity. But, ... how is the actual gray
> scale resolution affected by this conversion? I know I end up with 640x480
> gray scale in the end (I started with a 640x480 color sensor that uses a
> Bayer pattern) but did I cheat on the resolution by using information from
> the nearest neighbors when I determined the gray scale intensity of the
> pixel of interest?
I suppose the result will depend on the color of your object too. What
if your object is, for example, red? You will have a much stronger
signal from the red cells than from the blue ones. So if the object is
actually centered over a green cell, then a red neighbour will suggest it's
close to it, whereas a blue neighbour will suggest the opposite. Without being
an expert in any way, I would say you have to make sure the object is white.
Then you can treat all the R, G, and B cells as if it were a monochrome CCD.
/Bengt Cyren
Right, maybe! But it depends on your needs!
The image points of the RGB pixels are not missing, but the RGB values from
the Bayer pattern have a high correlation between the three channels. So it
makes no sense to interpolate the R, G, and B separately first. Mathematically
it can be shown that the result is better when it is done directly, and the
camera does it in a similar way. Note that in YCbCr space you have a
decimation of Cb and Cr (4:2:2), so you have to generate 4:4:4 first. But this
makes no sense for RGB!
Regards, Peter
ccr <c...@biomorphic.com> wrote in message:
Huh? In a BAYER pattern, every pixel is missing two of the three components.
>but the RGB values from the Bayer pattern have a high correlation
>between the three channels.
Not at edges, especially color edges. All dreams of correlation between
the colors go away there.
>So it makes no sense to interpolate the R, G, and B separately first.
>Mathematically it can be shown that the result is better when it is done
>directly, and the camera does it in a similar way. Note that in YCbCr space
>you have a decimation of Cb and Cr (4:2:2), so you have to generate 4:4:4
>first. But this makes no sense for RGB!
>
No, you don't. The 4:2:2 decimation is just an artifact invented for
compression. It has nothing to do with YCbCr space, other than that the
compression community recognized that the spatial frequencies of the Cb and
Cr components are generally low, so subsampling them is not overly detrimental
to image (color) quality.
I'm not really sure what point you're making, but to arrive at a high-quality
YCbCr image, you need a high-quality RGB image (assuming that RGB is your
starting space).
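
Just to illustrate what the 4:2:2 notation means (a generic sketch using the
usual BT.601 full-range equations, nothing camera-specific): Y is kept for
every pixel, while one Cb and one Cr are shared by each horizontal pair of
pixels.

/* Illustrative only: convert two horizontally adjacent RGB pixels into a
 * 4:2:2 group (two Y samples, one shared Cb, one shared Cr). */
static void rgb_to_ycbcr(double r, double g, double b,
                         double *y, double *cb, double *cr)
{
    *y  =  0.299    * r + 0.587    * g + 0.114    * b;
    *cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0;
    *cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0;
}

void pair_to_422(const unsigned char rgb0[3], const unsigned char rgb1[3],
                 unsigned char *y0, unsigned char *y1,
                 unsigned char *cb, unsigned char *cr)
{
    double ya, cba, cra, yb, cbb, crb;
    rgb_to_ycbcr(rgb0[0], rgb0[1], rgb0[2], &ya, &cba, &cra);
    rgb_to_ycbcr(rgb1[0], rgb1[1], rgb1[2], &yb, &cbb, &crb);
    *y0 = (unsigned char)(ya + 0.5);                 /* luma kept per pixel */
    *y1 = (unsigned char)(yb + 0.5);
    *cb = (unsigned char)((cba + cbb) / 2.0 + 0.5);  /* chroma shared */
    *cr = (unsigned char)((cra + crb) / 2.0 + 0.5);
}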