I'm trying to visualize the CT images from a Picker scanner. At this moment
I don't know anything about the format of the files, just the fact that they
are in the DICOM standard.
Does anyone have any similar experience? Where may I find the
specifications for these files?
Thank you,
Nick.
http://idt.net/~dclunie/medical-image-faq/html/part8.html
The web page at http://www.dvcco.com/rgb.htm has a method where you
just average the data from the nearest samples of the correct color.
Is this the best thing to do if computation time is not an issue? It
seems like this will blur the image more than necessary. Is there some
better way based on sampling theory? How do single-CCD color cameras
do it? Are there some standard papers or books I should read?
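For what it's worth, the averaging method that page describes can be sketched in a few lines. This is only a sketch, assuming an RGGB Bayer layout and that `mosaic` is a 2-D float array of raw sensor values; the function name is mine, not anything from the page:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic):
    """Fill in missing colours by averaging the nearest samples
    of each colour, assuming an RGGB Bayer layout."""
    h, w = mosaic.shape
    # Boolean masks marking where each colour was actually sampled.
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    g = np.zeros((h, w), bool); g[0::2, 1::2] = True; g[1::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    k = np.ones((3, 3))
    out = np.empty((h, w, 3))
    for i, mask in enumerate((r, g, b)):
        # Normalised convolution: sum of available samples in the
        # 3x3 neighbourhood divided by how many there were.
        vals = convolve2d(mosaic * mask, k, mode='same')
        count = convolve2d(mask.astype(float), k, mode='same')
        out[..., i] = vals / count
        # Keep the directly-sampled values exact at their own sites.
        out[mask, i] = mosaic[mask]
    return out
```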
Thanks
Not that I'm a huge expert, but when I was working this out in my head
before, the algorithm given on the web page is the one I came up with. It's
certainly the obvious one. I'm not sure about your concerns with blurring
the image. For any given pixel, one component is 'exact', and the others are
derived from the nearest neighbors. Anything else would seem more likely
to introduce blurring. One thing you could try is to use only pairs of
pixels instead of quads, and experiment to see if different orientations
give you better subjective results. I'm sure it would depend heavily on the
image, though, and it would be hard to automate the decision.
Cheers,
Pete
--
Pete Cockerell
California, USA
http://216.102.89.91
This is a difficult problem, known as "de-mosaicing". The obvious
algorithms don't give very good results, and the more clever ones are
patented. Here is a good article about this:
http://www.eetimes.com/news/98/1003news/digital_camera.html
Thierry
Bear in mind that without something really fancy, you will never get a
sharp dividing line, say from full intensity in that color to zero
intensity. Without some fancy algorithms, you'll end up with some
middle intensity pixels in there.
In order to minimize the effect of this, however, I would lean towards
something like a set of three two-dimensional cubic splines, which you
could then evaluate at the desired locations. This would be
processor-intensive, but should be manageable on current computers.
Since splines are affected by the entire pattern rather than just the
closest pixels, you would presumably get a better result.
You'd need to look up the algorithm for 2D splines, but they shouldn't
be too complicated. Basically you set up a matrix and then solve it
iteratively. I've got a 1-D spline class for C++ that I wrote up on my
webpage at http://members.home.net/cfriesen/software_projects.html
Chris
cullenfluf...@my-deja.com wrote:
> How do I take raw data from a CCD that is in a Bayer mask pattern and
> interpolate it to an RGB image?
>
> The web page at http://www.dvcco.com/rgb.htm has a method where you
> just average the data from the nearest samples of the correct color.
>
> Is this the best thing to do if computation time is not an issue? It
> seems like this will blur the image more than necessary. Is there some
> better way based on sampling theory? How do single-CCD color cameras
> do it? Are there some standard papers or books I should read?
The point with the Bayer pattern is that there is 2x as much green
information as blue and red. In any case, there are only HxV pixels in
the image (usually multiples of 2) and only half of them are green.
You *might* be able to use the wavelength sensitivity of the different
Bayer filters and their respective overlaps with the green component to
make a higher resolution attempt at the L component of the image.
However, there really is only one red sample for every 2x2 in the image
and only one blue for the same area.
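A quick way to see the sample ratio (a sketch, assuming an RGGB layout):

```python
import numpy as np

# Label each site of a small Bayer mosaic with its colour.
h, w = 6, 8
mosaic = np.zeros((h, w), dtype='<U1')
mosaic[0::2, 0::2] = 'R'
mosaic[0::2, 1::2] = 'G'
mosaic[1::2, 0::2] = 'G'
mosaic[1::2, 1::2] = 'B'
# Half the sites are green, a quarter each red and blue.
print((mosaic == 'G').sum(), (mosaic == 'R').sum(), (mosaic == 'B').sum())
```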
Assuming you know the Point Spread Function of the optics, you *may* be
able to improve the effective resolution by convolving with its inverse,
but all that really does is fix some of the problems in your optical path.
If your images are noisy (and they all are), especially if your images are
dark, then you have very little S/N and your spatial resolution is no
better than described. If your images are bright, then you may be able to
do one of these steps.
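A minimal sketch of the PSF-division idea, assuming the PSF is known, the same size as the image, and centred. The `noise_power` term is my addition, a Wiener-style regulariser; a plain inverse filter blows up wherever the PSF's frequency response is near zero, which is exactly the S/N caveat above:

```python
import numpy as np

def wiener_deconvolve(image, psf, noise_power=0.01):
    """Divide out a known PSF in the frequency domain.

    Assumes circular boundary handling and a PSF centred in an array
    of the same shape as the image.  noise_power regularises the
    division; with noisy images it must be raised, trading sharpness
    for stability."""
    # Move the PSF centre to the origin so phases come out right.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    # Wiener filter: conj(H) / (|H|^2 + noise_power).
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F))
```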
If you can pan the camera and take multiple images, you can
(theoretically) sample portions of the scene with different pixel colors
and build up a higher resolution version of the image.
One other possible problem with the Bayer pattern is bleed from
one pixel to the next (horizontally). You may find (depending upon
the camera, the anti-aliasing filter, and the A/D) that you are getting a
Red/Green or Blue/Green smear.
There are techniques for improving your images, but you will probably find
that you cannot achieve the same quality that a 3-Chip camera can give
you.
Also, since the product life of a CCD is getting shorter, you might want
to consider how much time it is worth putting into a software solution,
especially since by the time you're done, a 4MPixel (or 8 or 16) camera
will be out that costs the same as the 2MPixel (or 4 or 8) one you're
using now.
-Chris Russ