
Telephoto Picture & Technical Analysis


Hughes

Apr 26, 2009, 9:08:35 AM
Here is a photo I shot with the 1000mm telephoto with webcam.

http://www.pbase.com/image/111769165/original

The target (a scanned brochure) is located 3.8 meters from the
telephoto/webcam. (Note: the photo shows the middle portion of the
brochure, where the girl's neck meets the boy in the yellow shirt.)

http://www.pbase.com/image/111769109/original

Here is the scanned image with a zoomed portion of the image:

http://www.pbase.com/image/111769128/original

The actual picture measures 7 inches horizontal by
5 inches vertical. The telephoto/webcam, located 3.8 meters
away, shows an area only 0.4 inches across horizontally
(only 5.7% of the entire picture).

So the webcam picture covers 0.15 degrees of field
at a distance of 3.8 meters.

Calculations:

Specs

4" aperture Telephoto 1000mm focal length f/10
webcam 1/4 inch CMOS 640 X 480 Sensor

The picture area covered by the telephoto/webcam is 10mm
horizontal at a distance of 3.8 meters, hence:

field of view of telephoto/webcam = 2 x arctan(0.5 x 10mm / 3800mm)
= 0.15 degrees, or about 540 arcseconds

resolving power of telephoto (Dawes limit) = 116 / 100mm = 1.16 arcseconds

pixel scale = 206265 x (0.0057 / 1000)
= 1.18 arcseconds / pixel
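These numbers can be checked with a few lines of Python (a sketch using
only the figures quoted above; 206265 is the usual arcseconds-per-radian
constant and 116/D is the Dawes limit):

    import math

    # Field of view: a 10 mm wide strip of target seen from 3800 mm away.
    fov_deg = 2 * math.degrees(math.atan(0.5 * 10 / 3800))
    print(fov_deg, fov_deg * 3600)     # ~0.15 degrees, ~540 arcseconds

    # Dawes resolving limit for a 100 mm (4 inch) aperture.
    print(116 / 100)                   # 1.16 arcseconds

    # Pixel scale for a 5.7 micron pixel at 1000 mm focal length.
    print(206265 * 0.0057 / 1000)      # ~1.18 arcseconds/pixel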

Inquiries:

1. In the picture taken with the telephoto/webcam there is a
rectangular pattern. What is it? A printing artifact of the brochure?

2. Why can I see vertical lines moving upward in the webcam
preview on the monitor? Noise, or because the image is dim?

3. Using MACRO photography, what specs would let you see the same
rectangular printing pattern in the picture of the color brochure?

4. Using a DSLR, what would be the improvement in resolution and
colors, provided the DSLR and webcam have the same pixel pitch?

5. To be noise resistant, does the pixel (or sensel) have to be
at least 4.7 microns? How about a 2 micron pixel pitch like in a digicam?
Is there no possibility of constructing a 2 micron pixel pitch in the
future with the same noise-free performance as a present 6 micron DSLR?
What optical principles make it impossible? Something about the wavelength
of light impinging on the neighboring pixels or sensels? Or what?

Hughes

Nicko

Apr 26, 2009, 10:33:47 AM

You need to take your medications.

Hughes

Apr 26, 2009, 6:51:52 PM
> You need to take your medications.

Medications? Why did you say that? I'm asking because
eventually I want to test resolution charts on different
telescopes and telephotos. Right now I'm wondering whether
any 5.7 micron pixel pitch has at least a minimum
quality that is good enough. That is, even if a webcam is
cheap, can any 5.7 micron pixel implementation acquire
enough light that it is not far from DSLR CCD or CMOS
sensors at the pixel (sensel) level? Anyone?

Hu

Rich

Apr 26, 2009, 9:07:38 PM
Hughes <eugen...@gmail.com> wrote in news:da9020ae-37aa-4583-87e3-
a03ec4...@k19g2000prh.googlegroups.com:

> Here is a photo I shot with the 1000mm telephoto with webcam.

>

> 4" aperture Telephoto 1000mm focal length f/10
> webcam 1/4 inch CMOS 640 X 480 Sensor

I think I'm going to be sick. Now that cheap large sensors are here, even
the astronomical community isn't stupid enough to use 1/4" sensors with 8-
bit conversion for ANY images.


> 5. To be noise resistant, does the pixel (or sensel) have to be
> at least 4.7 microns? How about a 2 micron pixel pitch like in a digicam?
> Is there no possibility of constructing a 2 micron pixel pitch in the
> future with the same noise-free performance as a present 6 micron DSLR?

Check out the actual sensor noise performance from 2001 till now with same-
sized and pixel count sensors. Not much difference, is there? Most of the
noise control has been achieved in post-sensor processing. So come back in
about 20 years and ask if a 2 micron pixel pitch can produce image quality
like a 6 megapixel DSLR.

Hughes

Apr 26, 2009, 9:22:20 PM
On Apr 27, 9:07 am, Rich <n...@nowhere.com> wrote:
> Hughes <eugenhug...@gmail.com> wrote in news:da9020ae-37aa-4583-87e3-
> a03ec4bdf...@k19g2000prh.googlegroups.com:

>
> > Here is a photo I shot with the 1000mm telephoto with webcam.
>
> > 4" aperture Telephoto 1000mm focal length f/10
> > webcam 1/4 inch CMOS 640 X 480 Sensor
>
> I think I'm going to be sick.  Now that cheap large sensors are here, even
> the astronomical community isn't stupid enough to use 1/4" sensors with 8-
> bit conversion for ANY images.

I found my old webcam, which I bought 3 years ago, in the
storage room. I can use it for free, while I'd need to spend
$500 on a new large-sensor DSLR like the Canon 1000D. They
both have the same pixel pitch of ~5.7 microns. How do you
know my webcam has 8-bit conversion? And do you mean the
higher bit conversion of a DSLR gives a very significant
quality improvement (what conversion bit depth does the
DSLR you have used provide)?

>
> > 5. To be noise resistant, does the pixel (or sensel) have to be
> > at least 4.7 microns? How about a 2 micron pixel pitch like in a digicam?
> > Is there no possibility of constructing a 2 micron pixel pitch in the
> > future with the same noise-free performance as a present 6 micron DSLR?
>
> Check out the actual sensor noise performance from 2001 till now with same-
> sized and pixel count sensors.  Not much difference, is there?  Most of the
> noise control has been achieved in post-sensor processing.  So come back in
> about 20 years and ask if a 2 micron pixel pitch can produce image quality
> like a 6 megapixel DSLR.

Is this post-sensor processing inside the CCD component, or
are you talking about software? So you mean a 2 micron
pixel pitch would have similar quality without the post-sensor
processing? But you also said in the first paragraph that a
webcam has only 8 bits while a DSLR has higher bit conversion.
So how can the quality of the two be the same, unless the bit
conversion is post-sensor processing and PC-software based?

Hugh

Hughes

Apr 26, 2009, 9:59:19 PM
On Apr 27, 9:07 am, Rich <n...@nowhere.com> wrote:
> Hughes <eugenhug...@gmail.com> wrote in news:da9020ae-37aa-4583-87e3-
> a03ec4bdf...@k19g2000prh.googlegroups.com:
>
> > Here is a photo I shot with the 1000mm telephoto with webcam.
>
> > 4" aperture Telephoto 1000mm focal length f/10
> > webcam 1/4 inch CMOS 640 X 480 Sensor
>
> I think I'm going to be sick.  Now that cheap large sensors are here, even
> the astronomical community isn't stupid enough to use 1/4" sensors with 8-
> bit conversion for ANY images.

I just checked on the internet. Most webcams use 24-bit colour depth
conversion!
Where did you get the idea it uses only 8 bits??

My concern with my CMOS webcam is whether using the Celestron
NexImage CCD, with its allegedly better noise and color fidelity,
can increase the pixel resolution given the same pixel pitch and
sensor size. I'll use my webcam for resolution charts.

Hugh

Bob Larter

Apr 27, 2009, 1:38:35 AM
Hughes wrote:
> On Apr 27, 9:07 am, Rich <n...@nowhere.com> wrote:
>> Hughes <eugenhug...@gmail.com> wrote in news:da9020ae-37aa-4583-87e3-
>> a03ec4bdf...@k19g2000prh.googlegroups.com:
>>
>>> Here is a photo I shot with the 1000mm telephoto with webcam.
>>> 4" aperture Telephoto 1000mm focal length f/10
>>> webcam 1/4 inch CMOS 640 X 480 Sensor
>> I think I'm going to be sick. Now that cheap large sensors are here, even
>> the astronomical community isn't stupid enough to use 1/4" sensors with 8-
>> bit conversion for ANY images.
>
> I just checked on the internet. Most webcams use 24-bit colour depth
> conversion!
> Where did you get the idea it uses only 8 bits??

"24 bit" = 8 bits each for red, green & blue. That is the maximum colour
depth for any webcam that outputs JPEGs. DLSRs have 12-14 bits each for
red, green & blue, making a total of 36-42 bits per pixel.
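As a quick check of that arithmetic, a trivial Python sketch (nothing
here is camera-specific):

    for bits in (8, 12, 14):
        # bits per channel -> bits per RGB pixel and total colour count
        print(bits, "bits/channel:", 3 * bits, "bits/pixel,",
              (2 ** bits) ** 3, "colours")
    # 8 -> 24 bits, ~16.8 million colours; 12 -> 36 bits, ~68.7 billion;
    # 14 -> 42 bits, ~4.4 trillion.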


--
W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est
---^----^---------------------------------------------------------------

Hughes

Apr 27, 2009, 1:59:11 AM
On Apr 27, 1:38 pm, Bob Larter <bobbylar...@gmail.com> wrote:
> Hughes wrote:
> > On Apr 27, 9:07 am, Rich <n...@nowhere.com> wrote:
> >> Hughes <eugenhug...@gmail.com> wrote in news:da9020ae-37aa-4583-87e3-
> >> a03ec4bdf...@k19g2000prh.googlegroups.com:
>
> >>> Here is a photo I shot with the 1000mm telephoto with webcam.
> >>> 4" aperture Telephoto 1000mm focal length f/10
> >>> webcam 1/4 inch CMOS 640 X 480 Sensor
> >> I think I'm going to be sick.  Now that cheap large sensors are here, even
> >> the astronomical community isn't stupid enough to use 1/4" sensors with 8-
> >> bit conversion for ANY images.
>
> > I just checked on the internet. Most webcams use 24-bit colour depth
> > conversion!
> > Where did you get the idea it uses only 8 bits??
>
> "24 bit" = 8 bits each for red, green & blue. That is the maximum colour
> depth for any webcam that outputs JPEGs. DSLRs have 12-14 bits each for
> red, green & blue, making a total of 36-42 bits per pixel.
>

Canon 1000D has 36-42 bits per pixel?? Don't think so.
After downloading a sample Canon 1000D picture, Irfanview
reports it as 24 bit per pixel too.
Reading further, I think the 8 bits is the tonal range of the
entire image, meaning the brightness of all pixels
only varies over 256 levels - the dynamic range.

The 24 bits in each pixel is just the Bayer result,
which seems to differ from the 8-bit A/D converter,
which belongs to the entire sensor rather than
being 8 bits for each RGB color.

Or maybe the 8-bit A/D converter really does belong to
each RGB color - can any sensor expert confirm?

Hu

nospam

Apr 27, 2009, 2:15:54 AM
In article
<129bee1c-04ff-4f9d...@z16g2000prd.googlegroups.com>,
Hughes <eugen...@gmail.com> wrote:

> > > I just checked on the internet. Most webcams use 24-bit colour depth
> > > conversion!
> > > Where did you get the idea it uses only 8 bits??
> >
> > "24 bit" = 8 bits each for red, green & blue. That is the maximum colour
> > depth for any webcam that outputs JPEGs. DSLRs have 12-14 bits each for
> > red, green & blue, making a total of 36-42 bits per pixel.
>
> Canon 1000D has 36-42 bits per pixel?? Don't think so.

it has a 12 bit a/d converter, or 36 bit rgb. higher end nikon and
canon dslrs have a 14 bit a/d converter.

> After downloading a sample Canon 1000D picture, Irfanview
> reports it as 24 bit per pixel too.

was it jpeg?

David J Taylor

Apr 27, 2009, 2:52:26 AM
Hughes wrote:
[]

> Canon 1000D has 36-42 bits per pixel?? Don't think so.

All cameras offering RAW data do.

> After downloading a sample Canon 1000D picture, Irfanview
> reports it as 24 bit per pixel too.

JPEG is normally 8 bits per channel, as that's about the limit of what the
eye can see or the printer can print. Encoding is normally
gamma-corrected, not linear.

> Reading further. I think the 8 bit is the tonal range of the
> entire image.. meaning the brightness of all pixels
> only vary by 256 or the dynamic range.
>
> The 24 bit in each pixel is just the Bayer result
> which seems to differ from the 8 bit A/D converter
> which belongs to the entire sensor and not the
> 8 bit in each color RGB.
>
> Or maybe the 8 bit A/D converter really belongs to
> each color RGB, can any sensor expert confirm?
>
> Hu

The 12-bit sensor image, each RGB channel, is gamma corrected before
quantisation to 8-bit data for JPEG encoding. Hence 24-bit RGBs, which
have non-linear encoding. You would have to check precisely what your
Webcam did.
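A minimal Python sketch of that pipeline, assuming a simple 1/2.2
power-law gamma (real camera tone curves differ in detail):

    import numpy as np

    linear12 = np.arange(4096) / 4095.0              # 12-bit linear codes, 0..1
    gamma = linear12 ** (1 / 2.2)                    # gamma correction (simplified)
    jpeg8 = np.round(gamma * 255).astype(np.uint8)   # quantise to 8 bits for JPEG

    # Only ~9 of the 4096 linear input codes land in the lowest 16 output
    # codes, so shadow steps stay much finer than 1/256 of the linear signal.
    print(np.count_nonzero(jpeg8 < 16))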

David

Hughes

Apr 27, 2009, 5:08:09 AM
On Apr 27, 2:15 pm, nospam <nos...@nospam.invalid> wrote:
> In article
> <129bee1c-04ff-4f9d-bb6e-9b525c242...@z16g2000prd.googlegroups.com>,

>
> Hughes <eugenhug...@gmail.com> wrote:
> > > > I just checked on the internet. Most webcams use 24-bit colour depth
> > > > conversion!
> > > > Where did you get the idea it uses only 8 bits??
>
> > > "24 bit" = 8 bits each for red, green & blue. That is the maximum colour
> > > depth for any webcam that outputs JPEGs. DSLRs have 12-14 bits each for
> > > red, green & blue, making a total of 36-42 bits per pixel.
>
> > Canon 1000D has 36-42 bits per pixel??  Don't think so.
>
> it has a 12 bit a/d converter, or 36 bit rgb.  higher end nikon and
> canon dslrs have a 14 bit a/d converter.
>

I see. So the webcam has 8 bits for each RGB channel, or

256 x 256 x 256 = 16.7 million colors. That's good enough.

The Canon 1000D (12-bit) has 4096 x 4096 x 4096, or

68.7 billion colors.

I wonder how easy it is to tell the difference between the 8-bit
16 million colors and the 12-bit 68.7 billion colors
in a portrait picture.. Hmm..

H

David J Taylor

Apr 27, 2009, 5:32:38 AM
Hughes wrote:
[]

> Ic. So the webcam has 8-bit for each RGB or
>
> 256 x 256 x 256 = 16.7 Million Colors. That's good enough.
>
> Canon 1000D (12 bit) has 4096 x 4096 x 4096 or
>
> 68.7 Billion colors.
>
> I wonder how easy it is to detect between the 8-bit
> 16 million colors versus the 12-bit 68.7 billion colors
> in a portrait picture.. Hmm..
>
> H

You need to check exactly how many bits the Webcam has - the number of
bits /before/ the A-D convertor.

In a portrait, I think it would be very easy to detect quantisation in the
skin-tones in a linear 8-bit RGB image - remember that in digital cameras
the data is gamma-corrected, so that the lower bits represent steps which
are much less than 1/256 of the total signal.

David

Bob Larter

Apr 27, 2009, 7:13:59 AM
Hughes wrote:
> On Apr 27, 1:38 pm, Bob Larter <bobbylar...@gmail.com> wrote:
>> Hughes wrote:
>>> On Apr 27, 9:07 am, Rich <n...@nowhere.com> wrote:
>>>> Hughes <eugenhug...@gmail.com> wrote in news:da9020ae-37aa-4583-87e3-
>>>> a03ec4bdf...@k19g2000prh.googlegroups.com:
>>>>> Here is a photo I shot with the 1000mm telephoto with webcam.
>>>>> 4" aperture Telephoto 1000mm focal length f/10
>>>>> webcam 1/4 inch CMOS 640 X 480 Sensor
>>>> I think I'm going to be sick. Now that cheap large sensors are here, even
>>>> the astronomical community isn't stupid enough to use 1/4" sensors with 8-
>>>> bit conversion for ANY images.
>>> I just checked on the internet. Most webcams use 24-bit colour depth
>>> conversion!
>>> Where did you get the idea it uses only 8 bits??
>> "24 bit" = 8 bits each for red, green & blue. That is the maximum colour
>> depth for any webcam that outputs JPEGs. DSLRs have 12-14 bits each for
>> red, green & blue, making a total of 36-42 bits per pixel.
>>
>
> Canon 1000D has 36-42 bits per pixel?? Don't think so.

You'd be wrong then. ;^)

> After downloading a sample Canon 1000D picture, Irfanview
> reports it as 24 bit per pixel too.

That'll be a JPEG. I'm talking about RAW images (.CR2).

> Reading further. I think the 8 bit is the tonal range of the
> entire image.. meaning the brightness of all pixels
> only vary by 256 or the dynamic range.

In a JPEG, yes, but not a RAW image.

> The 24 bit in each pixel is just the Bayer result
> which seems to differ from the 8 bit A/D converter
> which belongs to the entire sensor and not the
> 8 bit in each color RGB.

DSLRs use either a 12 bit or 14 bit A2D converter.

> Or maybe the 8 bit A/D converter really belongs to
> each color RGB, can any sensor expert confirm?

As I've already said, webcams use (at most) an 8 bit A2D converter, but
DSLRs use either a 12 or 14 bit A2D converter.

Bob Larter

Apr 27, 2009, 7:15:13 AM
nospam wrote:
> In article
> <129bee1c-04ff-4f9d...@z16g2000prd.googlegroups.com>,
> Hughes <eugen...@gmail.com> wrote:
>
>>>> I just checked on the internet. Most webcams use 24-bit colour depth
>>>> conversion!
>>>> Where did you get the idea it uses only 8 bits??
>>> "24 bit" = 8 bits each for red, green & blue. That is the maximum colour
>>> depth for any webcam that outputs JPEGs. DSLRs have 12-14 bits each for
>>> red, green & blue, making a total of 36-42 bits per pixel.
>> Canon 1000D has 36-42 bits per pixel?? Don't think so.
>
> it has a 12 bit a/d converter, or 36 bit rgb. higher end nikon and
> canon dslrs have a 14 bit a/d converter.

Exactly.

>> After downloading a sample Canon 1000D picture, Irfanview
>> reports it as 24 bit per pixel too.
>
> was it jpeg?

$5 says it was - ie; 8 bit only.

Bob Larter

Apr 27, 2009, 7:24:57 AM
David J Taylor wrote:
> Hughes wrote:
> []
>> Ic. So the webcam has 8-bit for each RGB or
>>
>> 256 x 256 x 256 = 16.7 Million Colors. That's good enough.
>>
>> Canon 1000D (12 bit) has 4096 x 4096 x 4096 or
>>
>> 68.7 Billion colors.
>>
>> I wonder how easy it is to detect between the 8-bit
>> 16 million colors versus the 12-bit 68.7 billion colors
>> in a portrait picture.. Hmm..
>>
>> H
>
> You need to check exactly how many bits the Webcam has - the number of
> bits /before/ the A-D convertor.

Hrm. There's no such creature as "the number of bits" until it hits the
A2D converter. ;^)

> In a portrait, I think it would be very easy to detect quantisation in
> the skin-tones in a linear 8-bit RGB image - remember that in digital
> cameras the data is gamma-corrected, so that the lower bits represent
> steps which are much less that 1/256 of the total signal.

Indeed.

Bob Larter

Apr 27, 2009, 7:23:31 AM
Hughes wrote:
> On Apr 27, 2:15 pm, nospam <nos...@nospam.invalid> wrote:
>> In article
>> <129bee1c-04ff-4f9d-bb6e-9b525c242...@z16g2000prd.googlegroups.com>,
>>
>> Hughes <eugenhug...@gmail.com> wrote:
>>>>> I just checked on the internet. Most webcams use 24-bit colour depth
>>>>> conversion!
>>>>> Where did you get the idea it uses only 8 bits??
>>>> "24 bit" = 8 bits each for red, green & blue. That is the maximum colour
>>>> depth for any webcam that outputs JPEGs. DSLRs have 12-14 bits each for
>>>> red, green & blue, making a total of 36-42 bits per pixel.
>>> Canon 1000D has 36-42 bits per pixel?? Don't think so.
>> it has a 12 bit a/d converter, or 36 bit rgb. higher end nikon and
>> canon dslrs have a 14 bit a/d converter.
>>
>
> Ic. So the webcam has 8-bit for each RGB or
>
> 256 x 256 x 256 = 16.7 Million Colors. That's good enough.
>
> Canon 1000D (12 bit) has 4096 x 4096 x 4096 or
>
> 68.7 Billion colors.
>
> I wonder how easy it is to detect between the 8-bit
> 16 million colors versus the 12-bit 68.7 billion colors
> in a portrait picture.. Hmm..

It all depends on how you process the images. Shooting in RAW, I can
often pull brilliant images out of seemingly ruined low-light shots.

David J Taylor

Apr 27, 2009, 8:34:08 AM
Bob Larter wrote:
> David J Taylor wrote:
[]

>> You need to check exactly how many bits the Webcam has - the number
>> of bits /before/ the A-D convertor.
>
> Hrm. There's no such creature as "the number of bits" until it hits
> the A2D converter. ;^)

Thanks, Bob. Arrgh! Of course. Now what did I mean to write......

Check to how many bits the video is digitised, before it is converted to
JPEG, where gamma correction may have taken place.

Cheers,
David

Bob Larter

Apr 27, 2009, 8:44:49 AM

That's better. ;^)

David J Taylor

Apr 27, 2009, 8:46:37 AM
Bob Larter wrote:
> David J Taylor wrote:
>> Bob Larter wrote:
>>> David J Taylor wrote:
>> []
>>>> You need to check exactly how many bits the Webcam has - the number
>>>> of bits /before/ the A-D convertor.
>>>
>>> Hrm. There's no such creature as "the number of bits" until it hits
>>> the A2D converter. ;^)
>>
>> Thanks, Bob. Arrgh! Of course. Now what did I mean to write......
>>
>> Check to how many bits whe video is digitised, before it is
>> converted to JPEG where gamma correction may have taken place.
>
> That's better. ;^)

Thank you, sir!

David

nospam

Apr 27, 2009, 10:49:31 AM
In article <KCcJl.20052$OO7....@text.news.virginmedia.com>, David J
Taylor <david-...@blueyonder.not-this-part.nor-this.co.uk.invalid>
wrote:

> > Canon 1000D has 36-42 bits per pixel?? Don't think so.
>
> All cameras offering RAW data do.

almost all. :) medium format backs have a 16 bit a/d, pro dslrs are 14
bit, entry level & prosumer dslrs are 12 bit, sigma's cameras are 10
bit and the compact p&s cameras that offer raw are 8 bit.

David J Taylor

Apr 27, 2009, 11:24:40 AM

All offer more than 8-bits when in linear, i.e. RAW mode, which was the
point being questioned.

Unless I'm wrong, the 8-bit data in some older cameras' 8-bit TIFFs is
gamma-corrected, and the TIFF was offered simply to avoid JPEG's defects,
not as a RAW format. Some compact cameras today offer both full-precision
RAW and JPEG.

David

nospam

Apr 27, 2009, 11:46:32 AM
In article <Y6kJl.20198$OO7....@text.news.virginmedia.com>, David J
Taylor <david-...@blueyonder.not-this-part.nor-this.co.uk.invalid>
wrote:

p&s cams that offer raw use an 8 bit a/d, maybe 10 bit on higher end
ones (i haven't really kept up with that end of the market). i'd be
surprised if they are any higher since a tiny sensor doesn't warrant
it.

David J Taylor

Apr 27, 2009, 11:57:11 AM
nospam wrote:
[]

> p&s cams that offer raw use an 8 bit a/d, maybe 10 bit on higher end
> ones (i haven't really kept up with that end of the market). i'd be
> surprised if they are any higher since a tiny sensor doesn't warrant
> it.

I think you need to prepare for surprises, then! I think you will find
that all photographic quality cameras use more than 8-bit ADCs. Even a
P&S camera analysed in 2006 (Canon S70) showed a linear dynamic range of
2000:1, 11 bits.

http://www.clarkvision.com/imagedetail/evaluation-canon-s70/index.html

It would be helpful to see a table showing the ADC width of a variety of
modern compact cameras.

David

Hughes

Apr 27, 2009, 5:05:19 PM
Guys,

About resolution charts and resolving lines per millimeter:

What is their relationship to the A/D converter used - an
8-bit A/D versus 12-bit or 16-bit A/Ds? Can the higher A/D
resolve the lines better? Or, since the pixel can see,
say, 1.3 arcseconds (with an equivalent lines per millimeter),
and the lines per millimeter are in black and white,
does it not matter what A/D converter bits are
used? Astronomical CCDs are monochrome only,
and their quality is even better than colour ones. So I
wonder if there is a linear relationship in resolving
lines per millimeter against the A/D converter bits used.
Anyone?

Hug

Chris L Peterson

Apr 27, 2009, 6:03:59 PM
On Mon, 27 Apr 2009 14:05:19 -0700 (PDT), Hughes <eugen...@gmail.com>
wrote:

That's actually a very complex question, and it lacks a simple answer.
At its most basic, the A/D is unrelated to resolution. When you talk
about resolution in units like "lines per millimeter" or "arcseconds per
pixel" you are considering spatial resolution, and that is- to a first
order- determined by pixel spacing.

The number of bits that you use to digitize each pixel is important in
determining your dynamic range. Roughly, that is the range between the
lightest and darkest values you can detect, and the number of intensity
increments you can resolve between those. This also affects your S/N,
and comes into play with respect to your spatial resolution because of
the modulation transfer function, MTF. Imagine you are imaging a pattern
of alternating black and white lines. If they are far apart compared
with your spatial resolution, you will record them as black and white.
As they approach your resolution, however, they will sort of overlap,
and you'll see alternating bars of light and dark gray. Eventually
they'll merge into uniform gray. That exact point is usually how spatial
resolution is defined. If you digitize more bits of intensity, you'll be
better able to determine that point of maximum resolution (which happens
at minimum contrast).
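A toy numerical illustration of that paragraph (a sketch under
simplifying assumptions: one-dimensional bars, a Gaussian blur standing
in for the optics, no noise):

    import numpy as np

    def recorded_contrast(period_px, blur_px, bits):
        x = np.arange(4096)
        bars = (np.sin(2 * np.pi * x / period_px) > 0).astype(float)  # bar target
        k = np.exp(-0.5 * (np.arange(-25, 26) / blur_px) ** 2)
        img = np.convolve(bars, k / k.sum(), mode="same")             # optical blur
        core = img[50:-50]                                            # drop edges
        q = np.round(core * (2 ** bits - 1))                          # quantise
        return (q.max() - q.min()) / (2 ** bits - 1)                  # contrast

    for period in (64, 16, 8, 4):   # coarse bars -> bars near the blur width
        print(period, recorded_contrast(period, 3.0, 8),
              recorded_contrast(period, 3.0, 12))

As the bar period approaches the blur width, the recorded contrast
collapses, and the quantiser's step size sets the smallest contrast that
still registers.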

In practice, I wouldn't look at the choice of A/D this way, however. The
A/D should be selected to match the sensor. Very good sensors have about
13 bits of dynamic range. Typical modern sensors have 10-12 bits of
dynamic range. So a 14-bit A/D is adequate for any sensor you are likely
to see, and a 12-bit converter will do for most. For some cases, like
video, 8-10 bits is enough to cover what the sensor is capable of.
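That matching rule can be put in numbers: usable dynamic range is
roughly full-well capacity divided by read noise, and the A/D just
needs log2 of that many bits (the figures below are illustrative, not
from any particular datasheet):

    import math

    full_well = 25000    # electrons, illustrative
    read_noise = 8       # electrons RMS, illustrative
    print(full_well / read_noise)             # ~3125:1 dynamic range
    print(math.log2(full_well / read_noise))  # ~11.6 -> a 12-bit A/D suffices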

You might note that most astronomical CCD cameras these days use 16-bit
converters. Do not think this means that you are getting 16-bits of real
data from these cameras. There are still only 12-13 bits of information
available. There are some minor advantages to digitizing an extra bit or
two, and 16-bit, low speed A/D converters are cheap these days. Also,
binning modes can require more bit depth. There's a bit of a marketing
angle as well.
_________________________________________________

Chris L Peterson
Cloudbait Observatory
http://www.cloudbait.com

Hughes

Apr 27, 2009, 6:23:10 PM
On Apr 28, 6:03 am, Chris L Peterson <c...@alumni.caltech.edu> wrote:
> On Mon, 27 Apr 2009 14:05:19 -0700 (PDT), Hughes <eugenhug...@gmail.com>

In other words, a webcam with an 8-bit A/D converter would be
inferior even when resolving simple black and white
lines per millimeter, as it distinguishes poorly near
the resolving limit. So a webcam is used in planetary
viewing only because it can stack images, or take
them successively for minutes and then filter and
combine all the images to get the best picture. Even
while doing this, it can't resolve the subtle details
of Jupiter because of the poor 8-bit A/D converter. This
means that on nights of good seeing, a DSLR with a 12-bit
A/D converter can resolve more details on
Jupiter, even if the image would only be a tiny
mid portion of the picture taken. Agree with all
points? Comments?

Hug

Chris L Peterson

Apr 27, 2009, 7:00:04 PM
On Mon, 27 Apr 2009 15:23:10 -0700 (PDT), Hughes <eugen...@gmail.com>
wrote:

>In other words, a webcam with an 8-bit A/D converter would be
>inferior even when resolving simple black and white
>lines per millimeter, as it distinguishes poorly near
>the resolving limit.

Not necessarily. The spatial resolution of the detector could easily be
higher than the source. That's called oversampling, and is common in
astronomical imaging, where the atmospheric seeing usually degrades
resolution beyond the limits of both the actual target and the optics.

Also, the difference in the MTF (as recorded) between an 8-bit and
12-bit converter, under real world conditions, would be tiny.

>So webcam is used in planetary
>viewing only because it can stack images or take
>them successively for minutes and then filter and
>combine all images to get the best picture. Even
>while doing this, it can't resolve the subtle details
>of jupiter because of the poor 8 bit A/D converter. This
>means in nights of good seeing, a DSLR with 12
>bit A/D converter can resolve more details in
>jupiter even if the image would only be a tiny
>mid portion of the picture taken.

It would be rare for the DSLR to give better results. You have to
consider how long an exposure would be required to actually collect
enough photons to utilize the full dynamic range. Unless the telescope
is very large, that usually takes long enough that atmospheric motion
interferes- even under good seeing conditions. For typical telescope
apertures, with a bright planet like Jupiter, you will probably only get
6-8 bits of real data in the very short exposures used to beat seeing.

If you stack 250 8-bit images, the resulting image has close to 16 bits
of actual depth. It will be better than the DSLR image because it has
higher dynamic range, and because it was constructed from images with
better spatial resolution, selected to beat the seeing.
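A toy demonstration of that bit-depth claim (it assumes the per-frame
noise is at least a grey level or so, which is true of real video; the
numbers are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    true_level = 100.37      # scene brightness sitting between two 8-bit codes
    frames = np.round(true_level + rng.normal(0, 2.0, size=(250, 10000)))
    stack = frames.mean(axis=0)               # stack of 250 8-bit frames

    print(frames[0].std())   # ~2 codes of noise in a single frame
    print(stack.std())       # ~2/sqrt(250) = 0.13 codes after stacking
    print(stack.mean())      # ~100.37: recovers finer than one 8-bit step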

Hughes

Apr 27, 2009, 7:16:51 PM
On Apr 28, 7:00 am, Chris L Peterson <c...@alumni.caltech.edu> wrote:
> On Mon, 27 Apr 2009 15:23:10 -0700 (PDT), Hughes <eugenhug...@gmail.com>

> wrote:
>
> >In other words, a webcam with an 8-bit A/D converter would be
> >inferior even when resolving simple black and white
> >lines per millimeter, as it distinguishes poorly near
> >the resolving limit.
>
> Not necessarily. The spatial resolution of the detector could easily be
> higher than the source. That's called oversampling, and is common in
> astronomical imaging, where the atmospheric seeing usually degrades
> resolution beyond the limits of both the actual target and the optics.
>
> Also, the difference in the MTF (as recorded) between an 8-bit and
> 12-bit converter, under real world conditions, would be tiny.

The source will be resolution charts set up in a long hallway with
good enough lighting that you can ignore atmospheric
haze. In this scenario, can an 8-bit webcam and a 12-bit DSLR
with similar 5.7 micron pixels see or resolve the same
bar details, given the same telescope/telephoto lens on
both? You said the difference in recorded MTF between
an 8-bit and a 12-bit converter would be tiny. Does this also
hold for resolution chart testing in a long hallway, or did
you frame the statement for the astronomical imaging
scenario exclusively, due to atmospheric haze that
can render 8-bit vs 12-bit indistinguishable? Thanks.

Hug

Dave Typinski

Apr 27, 2009, 8:02:32 PM
Chris L Peterson <c...@alumni.caltech.edu> wrote:
>
>If you stack 250 8-bit images, the resulting image has close to 16 bits
>of actual depth. It will be better than the DSLR image because it has
>higher dynamic range, and because it was constructed from images with
>better spatial resolution, selected to beat the seeing.

What's a common exposure time for each image? And how soon can that
be repeated?

I get the overall idea of image stacking, but not much else about it.
How can you get 250 images of Jupiter without letting it rotate
noticeably in the process? A one second repetition rate would allow
Jupiter to turn through about 2°.

Does CCD imagery simply not take very long, or does the software
somehow correct for the changes in the image?
--
Dave

Chris L Peterson

Apr 27, 2009, 8:07:06 PM
On Mon, 27 Apr 2009 16:16:51 -0700 (PDT), Hughes <eugen...@gmail.com>
wrote:

>The source will be resolution charts set up in a long hallway with
>good enough lighting that you can ignore atmospheric
>haze. In this scenario, can an 8-bit webcam and a 12-bit DSLR
>with similar 5.7 micron pixels see or resolve the same
>bar details, given the same telescope/telephoto lens on
>both? You said the difference in recorded MTF between
>an 8-bit and a 12-bit converter would be tiny. Does this also
>hold for resolution chart testing in a long hallway, or did
>you frame the statement for the astronomical imaging
>scenario exclusively, due to atmospheric haze that
>can render 8-bit vs 12-bit indistinguishable? Thanks.

Do the experiment and see. My expectation would be that you'll see very
little difference.

That said, with a color sensor there are some very strange resolution
effects, and what you actually get depends on the sort of processing
applied. DSLRs tend to have much more sophisticated image processing to
eliminate fringing and Moire artifacts, and webcams usually just
de-Bayer and little else.

If you're comparing just the cameras, your test if fine. If you're
trying to learn something more fundamental about the effects of bit
depth, you really need to use a customized webcam driver that lets you
collect the raw image, and work with the raw DSLR image as well.

An easy test is to take a 12-bit raw with your DSLR, and convert it to
8-bit. Look at the two side-by-side and see if there is any visible
difference.
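With raw data in hand, that comparison is only a few lines (a sketch:
the array below is a random stand-in for a decoded 12-bit raw; a real
file could first be decoded with a tool such as dcraw, which comes up
later in this thread):

    import numpy as np

    raw12 = np.random.randint(0, 4096, size=(480, 640))  # placeholder 12-bit data
    as8 = (raw12 >> 4).astype(np.uint8)                  # keep the top 8 bits
    restored = as8.astype(np.int32) << 4                 # rescale for comparison
    print(np.abs(raw12 - restored).max())                # worst-case loss: 15 counts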

Chris L Peterson

Apr 27, 2009, 8:29:08 PM
On Mon, 27 Apr 2009 20:02:32 -0400, Dave Typinski <möbi...@trapezium.net>
wrote:

>What's a common exposure time for each image? And how soon can that
>be repeated?
>
>I get the overall idea of image stacking, but not much else about it.
>How can you get 250 images of Jupiter without letting it rotate
>noticeably in the process? A one second repetition rate would allow
>Jupiter to turn through about 2°.

With my 12" aperture, and a 4200mm focal length (around 0.25"/pixel) I
am able, using a webcam, to shoot at 15 frames per second, and I usually
set the exposure time at either 1/50 second or 1/100 second. That gives
me 4500 frames to pick from in a 300 second sequence, for which the
maximum motion blur will be about 0.5". I usually pull several stacks
out of a movie that length, so the actual motion blur will be less.

>Does CCD imagery simply not take very long, or does the software
>somehow correct for the changes in the image?

If you image with a DSLR or typical astronomical camera, you can't get
very many frames, because it takes a few seconds for each. So you don't
get enough to boost the dynamic range and improve the S/N, and you don't get
enough to select a large number with high quality. Under my imaging
conditions, I'm lucky if I can use 10% of the frames.

Chris Malcolm

Apr 27, 2009, 8:48:12 PM

Don't forget the P&S cameras with big sensors :-)

--
Chris Malcolm

Dave Typinski

Apr 27, 2009, 10:05:14 PM
Chris L Peterson <c...@alumni.caltech.edu> wrote:
>
>On Mon, 27 Apr 2009 20:02:32 -0400, Dave Typinski <möbi...@trapezium.net>
>wrote:
>
>>What's a common exposure time for each image? And how soon can that
>>be repeated?
>>
>>I get the overall idea of image stacking, but not much else about it.
>>How can you get 250 images of Jupiter without letting it rotate
>>noticeably in the process? A one second repetition rate would allow
>>Jupiter to turn through about 2°.
>
>With my 12" aperture, and a 4200mm focal length (around 0.25"/pixel) I
>am able, using a webcam, to shoot at 15 frames per second, and I usually
>set the exposure time at either 1/50 second or 1/100 second.

Oh! Much faster than I thought possible. CCD technology is
apparently better than I realize.

>That gives
>me 4500 frames to pick from in a 300 second sequence, for which the
>maximum motion blur will be about 0.5".

Okay, that checks.

So, basically, we're talking about a one or two pixel shift, which
isn't enough to worry about?

Of course, that only applies to the center of Jupiter's disk; the blur
would be smaller the further you get from the center and zero right at
the edge.

>I usually pull several stacks
>out of a movie that length, so the actual motion blur will be less.
>
>>Does CCD imagery simply not take very long, or does the software
>>somehow correct for the changes in the image?
>
>If you image with a DSLR or typical astronomical camera, you can't get
>very many frames, because it takes a few seconds for each. So you don't
>get enough to boost the dynamic range and improve the S/N, and you don't
>enough to select a large number with high quality.

So noted. Thanks!
--
Dave

Chris L Peterson

Apr 27, 2009, 10:32:24 PM
On Mon, 27 Apr 2009 22:05:14 -0400, Dave Typinski <möbi...@trapezium.net>
wrote:

>Oh! Much faster than I thought possible. CCD technology is
>apparently better than I realize.

Well, video cameras have been outputting 30 fps using CCD (or CMOS)
sensors for years. There's really nothing very new here. A webcam is
basically just an ordinary video camera with the frame grabber built in.

>So, basically, we're talking about a one or two pixel shift, which
>isn't enough to worry about?

It depends on the seeing. If it's good, I might just use a minute's
worth of data. That still gives hundreds of images to sort by quality
and select from. And Jupiter is the worst case. Other planets don't
rotate as fast.

>Of course, that only applies to the center of Jupiter's disk; the blur
>would be smaller the further you get from the center and zero right at
>the edge.

Exactly.

Hughes

Apr 28, 2009, 3:00:24 AM
On Apr 28, 8:07 am, Chris L Peterson <c...@alumni.caltech.edu> wrote:
> On Mon, 27 Apr 2009 16:16:51 -0700 (PDT), Hughes <eugenhug...@gmail.com>

You didn't mention the effect of noise when imaging
resolution chart bars near the resolving limit, where the light
grey and dark grey merge into plain grey. I tried to research
on the net the difference in noise between webcam pixels
and DSLR pixels but couldn't find any articles. They
have the same pixel (or sensel) sizes of, say, 5.7 microns.
The advantage of a DSLR over a point & shoot is that the former
has pixels more than twice the size of a point & shoot's 2 micron
ones. Now I wonder how webcam pixel noise compares with
DSLR pixel noise, since they have the same sizes. Take
the worst webcam in the world, made in China with
the worst 5.7 micron pixels. Could it be that,
because the pixel is big enough, it has reached a certain
critical size threshold where the signal-to-noise ratio
is large enough that noise is suppressed in any 5.7 micron
pixel, the best and the worst made on the planet
(used in the cheapest webcam and the priciest DSLR)?
Or is webcam pixel noise as bad as a point & shoot's,
just not noticed because one doesn't make
portrait pictures with a webcam? If the latter
is true, then what is the effect of noise when imaging
resolution bars near the resolving limit, where the
light grey and dark grey bars are on the threshold
of turning pure grey (when maximum resolution
is reached)?

Hughes

Martin Brown

Apr 28, 2009, 3:12:34 AM
Dave Typinski wrote:
> Chris L Peterson <c...@alumni.caltech.edu> wrote:
>> If you stack 250 8-bit images, the resulting image has close to 16 bits
>> of actual depth. It will be better than the DSLR image because it has
>> higher dynamic range, and because it was constructed from images with
>> better spatial resolution, selected to beat the seeing.
>
> What's a common exposure time for each image? And how soon can that
> be repeated?
>
> I get the overall idea of image stacking, but not much else about it.
> How can you get 250 images of Jupiter without letting it rotate
> noticeably in the process? A one second repetition rate would allow
> Jupiter to turn through about 2°.

A webcam is typically operating at 25 or 30 fps. So the time is about
30x shorter than you imagine. Also throwing away the worst images (as in
lowest adjacent pixel contrast) before stacking helps a lot. The
so-called lucky exposures are much better than the average one.


>
> Does CCD imagery simply not take very long, or does the software
> somehow correct for the changes in the image?

A webcam captures what is intended to be a realtime moving picture of
something nice and bright like Jupiter. This in effect freezes the
seeing (but only some of the images will be sharp). First-order
atmospheric distortions move the image about at the focal plane, which
is why shift-to-centroid and stacking works so well after you have
thrown away the obviously wrecked images.
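The selection step can be sketched in a few lines of Python
(hypothetical helpers, assuming the frames are greyscale numpy arrays;
dedicated tools such as RegiStax add registration and much more):

    import numpy as np

    def sharpness(frame):
        # Mean adjacent-pixel contrast: sharp "lucky" frames score highest.
        return np.abs(np.diff(frame.astype(float), axis=1)).mean()

    def lucky_stack(frames, keep=0.10):
        order = np.argsort([sharpness(f) for f in frames])[::-1]  # best first
        n = max(1, int(keep * len(frames)))
        # (real software also shifts each kept frame to a common centroid)
        return np.mean([frames[i] for i in order[:n]], axis=0)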

Regards,
Martin Brown

bugbear

Apr 28, 2009, 4:59:34 AM
David J Taylor wrote:
> nospam wrote:
> []
>> p&s cams that offer raw use an 8 bit a/d, maybe 10 bit on higher end
>> ones (i haven't really kept up with that end of the market). i'd be
>> surprised if they are any higher since a tiny sensor doesn't warrant
>> it.
>
> I think you need to prepare for surprises, then! I think you will find
> that all photographic quality cameras use more than 8-bit ADCs. Even a
> P&S camera analysed in 2006 (Canon S70) showed a linear dynamic range of
> 2000:1, 11 bits.


The Canon PowerShots are only 10 bit (sadly), at least that's
what's in the RAW files.

BugBear

bugbear

Apr 28, 2009, 5:02:00 AM
Hughes wrote:

I'm sorry; I've lost track.

What are you actually trying to achieve here?

Oh, and whilst bottom posting is good, it's only good
in conjunction with editing the post you're replying to.

BugBear

bugbear

Apr 28, 2009, 5:06:58 AM
Chris Malcolm wrote:
>> p&s cams that offer raw use an 8 bit a/d, maybe 10 bit on higher end
>> ones (i haven't really kept up with that end of the market). i'd be
>> surprised if they are any higher since a tiny sensor doesn't warrant
>> it.
>
> Don't forget the P&S cameras with big sensors :-)
>

Are you sure there should be an 's' on camera in that sentence?

BugBear

Pierre Vandevenne

Apr 28, 2009, 5:26:55 AM
On Apr 28, 09:00, Hughes <eugenhug...@gmail.com> wrote:

> You didn't mention about the effect of noise in imaging
> resolution chart bars near the resolving limit where the light
> grey and dark grey merging into just grey. I tried to research
> in the net the difference in the noise between webcam pixels
> and DSLR pixels but couldn't find any articles. They

Noise in CCD (and CMOS - most of today's consumer imaging devices are
built around CMOS sensors) sensors is a really complex and
multifactorial issue.

To get to the bottom of things, this document is a good start:

http://astro.union.rpi.edu/documents/CCD%20Image%20Sensor%20Noise%20Sources.pdf

In order to get the info for a particular webcam, open it, identify
the sensor it uses and read the sensor data sheet. Some links here.

http://homepage.ntlworld.com/molyned/web-cameras.htm

Detailed data sheets for current DSLR sensors are usually not
available publicly.

> Now I wonder how webcam pixel noise compare with
> dslr pixel noise since they have the same sizes. Say
> the worse webcam in the world made in china with
> the worse 5.7 micron size pixels. Could it be that
> because it is big enough, it has reached a certain
> critical size threshold where the signal to noise ratio
> is big enough that noise is suppressed in any 5.7 micron

All other things being equal, and provided you choose your optical
train to sample optimally in all cases, larger pixels are usually
better. However, all other factors (and there are many other factors)
aren't equal, some of them already mentioned above. A very cheap
webcam is likely to use low quality sensors whose pixels do not
respond evenly to photons and generate different amounts of dark
current because of the impurities they contain. A cheap webcam is
likely to use poor clocks and noisy amplifiers. In the priciest DSLR,
the whole chain will generally be of much higher quality.

In my practical experience, current DSLRs beat webcams by a very wide
margin in terms of noise. But "webcam" is a matter of definition -
are those "webcams"? http://www.lumenera.com/products/index.php.

> Or are webcam pixel noise as bad as point&shoot
> and they aren't noticed because one doesn't make
> make portrait picture out of webcam.

Well, I don't think "portrait pictures" are a good test for absolute
resolution. :-)

Last but not least, in the constrained environment you describe,
unlike in astronomical applications, you can increase the signal to
get close to the optimum SNR.
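That last point can be made concrete with the standard CCD noise
budget, SNR = S / sqrt(S + D*t + R^2), where S is the signal in
electrons, D the dark current, t the exposure time and R the read
noise (a sketch with generic illustrative values):

    import math

    def snr(signal_e, dark_e_per_s, t_s, read_noise_e):
        # Shot noise + dark-current noise + read noise, all in electrons.
        return signal_e / math.sqrt(signal_e + dark_e_per_s * t_s
                                    + read_noise_e ** 2)

    print(snr(1000, 5, 1, 10))    # ~30
    print(snr(20000, 5, 1, 10))   # ~141: more light pushes SNR toward sqrt(S)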

Pierre Vandevenne

Apr 28, 2009, 5:35:56 AM
On Apr 28, 04:05, Dave Typinski <möb...@trapezium.net> wrote:

> Oh!  Much faster than I thought possible.  CCD technology is
> apparently better than I realize.

Much faster - for example

http://www.cplab.com/PDF/Mega%20Speed%20V17.pdf

And cheap, widely available, CMOS based consumer P&S cameras are able
to do 1000+ fps...

David J Taylor

Apr 28, 2009, 5:38:52 AM

.. but that's still significantly more than 8 bits.

Cheers,
David

Hughes

Apr 28, 2009, 6:00:19 AM
On Apr 27, 2:52 pm, "David J Taylor" <david-tay...@blueyonder.not-this-
part.nor-this.co.uk.invalid> wrote:

> Hughes wrote:
>
> []
>
> > Canon 1000D has 36-42 bits per pixel??  Don't think so.
>
> All cameras offering RAW data do.
>
> > After downloading a sample Canon 1000D picture, Irfanview
> > reports it as 24 bit per pixel too.
>
> JPEG is normally 8 bits per channel, as that's about the limit of what the
> eye can see or the printer can print.  Encoding is normally
> gamma-corrected, not linear.
>
> > Reading further. I think the 8 bit is the tonal range of the
> > entire image.. meaning the brightness of all pixels
> > only vary by 256 or the dynamic range.
>
> > The 24 bit in each pixel is just the Bayer result
> > which seems to differ from the 8 bit A/D converter
> > which belongs to the entire sensor and not the
> > 8 bit in each color RGB.
>
> > Or maybe the 8 bit A/D converter really belongs to
> > each color RGB, can any sensor expert confirm?
>
> > Hu
>
> The 12-bit sensor image, each RGB channel, is gamma corrected before
> quantisation to 8-bit data for JPEG encoding.  Hence 24-bit RGBs, which
> have non-linear encoding.  You would have to check precisely what your
> Webcam did.
>
> David

The 12-bit sensor gamma correction will be superior to
8-bit gamma correction, even if both end up as 8-bit
non-linear JPEG images, right? By what percentage
will the 12-bit be better than the 8-bit? If they were the
final output, 12-bit would obviously be better than 8-bit. But
since both end up as 8-bit, how then do you
quantify the difference? Are there any image comparisons,
both JPEG, but one starting from a 12-bit gamma-corrected
sensor image and the other starting from an 8-bit
gamma-corrected sensor image?

H

David J Taylor

Apr 28, 2009, 7:03:54 AM
Hughes wrote:
[]

> The 12-bit sensor gamma correction will be superior to
> 8-bit gamma correction, even if both end up as 8-bit
> non-linear JPEG images, right?

That's not what I'm saying - an 8-bit gamma corrected image would have a
greater dynamic range than an 8-bit linear image, with the dynamic range
approaching that of a 12-bit linear image. The loss is in the precision
of the highlights.

Yes, 12-bits linear after gamma correction would be better than 8-bit
linear subject to the same gamma correction.

> By how many percentage
> will the 12-bit be better than 8-bit? If they are the final
> output, 12-bit would be obviously better than 8-bit. But
> since both would end up as 8-bit, how then do you
> qualify the difference? Any images comparisons of
> both jpeg but one of them starting as 12-bit sensor
> gamma corrected versus the second starting as 8-bit
> sensor gamma corrected?
>
> H

You would need to define exactly what you meant by better, and it would
then be something you could predict just with the maths. In practice, of
course, you would probably want a set of observers to evaluate the
results, and adjust your mathematical model to grade percentages into
"noticeable values". Perhaps you can find something already in the
literature about this.

As a guide, an 8-bit linear quantisation, when gamma-corrected, would
likely have very noticeable banding/contouring in the shadow regions.
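That contouring is easy to reproduce numerically (a toy sketch; a 1/2.2
power law stands in for the real display gamma):

    import numpy as np

    ramp = np.linspace(0, 0.05, 2000)          # smooth gradient deep in shadow
    lin8 = np.round(ramp * 255) / 255          # 8-bit *linear* quantisation
    shown = np.round(255 * lin8 ** (1 / 2.2))  # gamma-correct for display
    print(len(np.unique(shown)))               # ~14 distinct levels: banding
    # A 12-bit linear capture keeps ~205 distinct codes over the same range.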

David

Helpful person

Apr 28, 2009, 7:29:31 AM

Don't forget that when resolving bar charts you are looking at both
the fundamental frequency and higher orders. That said, with a decent
image analysis program I consider the three-bar chart an excellent way
to measure/compare performance. However, make sure you understand the
effects of phase reversal that can give you false results.

www.richardfisher.com

Hughes

Apr 28, 2009, 8:15:25 AM
On Apr 28, 7:03 pm, "David J Taylor" <david-tay...@blueyonder.not-this-

part.nor-this.co.uk.invalid> wrote:
> Hughes wrote:
>
> []
>
> > The 12-bit sensor gamma correction will be superior to
> > 8-bit gamma correction, even if both end up as 8-bit
> > non-linear JPEG images, right?
>
> That's not what I'm saying - an 8-bit gamma corrected image would have a
> greater dynamic range than an 8-bit linear image, with the dynamic range
> approaching that of a 12-bit linear image.  The loss is in the precision
> of the highlights.
>
> Yes, 12-bits linear after gamma correction would be better than 8-bit
> linear subject to the same gamma correction.

But if you convert it to an 8-bit JPEG, all the benefits would
be lost. Printing photos is only 8-bit. So if you don't
edit the raw image (exposure, white balance and so on), which
is the main purpose of 12-bit RAW, then getting (and
using) an 8-bit RAW (or 8-bit A/D) would be sufficient.
What do you think?

H

David J Taylor

Apr 28, 2009, 9:13:54 AM
Hughes wrote:
> On Apr 28, 7:03 pm, "David J Taylor"
[]

>> That's not what I'm saying - an 8-bit gamma corrected image would
>> have a greater dynamic range than an 8-bit linear image, with the
>> dynamic range approaching that of a 12-bit linear image. The loss is
>> in the precision of the highlights.
>>
>> Yes, 12-bits linear after gamma correction would be better than 8-bit
>> linear subject to the same gamma correction.
>
> But if you convert it to 8-bit Jpeg. All the benefits would
> be lost.

No, because of the extended dynamic range after gamma correction.

> Printing photos is only 8-bit. So if you don't
> edit Raw Image (like exposure, white balance) which
> is the main purpose of 12-bit Raw. Then getting (and
> using) an 8-bit Raw (or 8-bit A/D) would be sufficient.
> What do you think?
>
> H

It depends on your subject, and how badly using only 256-level
quantisation affects the images for your particular use. You could
probably recognise a face with 4-bit linear data. For the most subtle
skin tones or shadow detail you may need 10, 11 or even 12-bit data,
gamma-corrected to fit into the typical 8-bit display.

Cheers,
David

nospam

Apr 28, 2009, 9:46:04 AM
In article
<c822a264-e25d-4df2...@s38g2000prg.googlegroups.com>,
Hughes <eugen...@gmail.com> wrote:

> But if you convert it to 8-bit Jpeg. All the benefits would
> be lost. Printing photos is only 8-bit.

it can be 16 bit.

> So if you don't
> edit Raw Image (like exposure, white balance) which
> is the main purpose of 12-bit Raw. Then getting (and
> using) an 8-bit Raw (or 8-bit A/D) would be sufficient.
> What do you think?

if the jpeg is perfect out of the camera, then you don't really need
raw. good luck with that though.

Chris L Peterson

Apr 28, 2009, 10:07:35 AM
On Tue, 28 Apr 2009 00:00:24 -0700 (PDT), Hughes <eugen...@gmail.com>
wrote:

>You didn't mention the effect of noise when imaging
>resolution chart bars near the resolving limit, where the light
>grey and dark grey merge into plain grey.

You'll need to assess that by testing. If you are interested in
planetary imaging, where the exposures are short, noise is dominated by
readout noise, and that is similar in DSLRs and video cameras. Even
inexpensive webcams can have pretty good noise specs (you need to choose
the model for that), meaning they don't add much noise over the sensor
specs themselves. It is hard to assess DSLR noise, because all of these
cameras use internal processing to suppress it - an operation that also
results in a loss of actual signal.

bugbear

Apr 28, 2009, 10:58:03 AM
Hughes wrote:
> Printing photos is only 8-bit.

Would you care to defend that statement?

BugBear

Hughes

Apr 28, 2009, 6:20:03 PM
On Apr 28, 10:58 pm, bugbear <bugbear@trim_papermule.co.uk_trim>
wrote:

I read it here:

http://photo.net/learn/raw/

Excerpt:

"I said above that the data could be stored as an 8 or 16-bit TIFF
file. RAW data from most high end digital camera contains 12 bit data,
which means that there can be 4096 different intensity levels for each
pixel. In an 8-bit file (such as a JPEG), each pixel can have one of
256 different intensity levels. Actually 256 levels is enough, and all
printing is done at the 8 bit level, so you might ask what the point
is of having 12 bit data. The answer is that it allows you to perform
a greater range of manipulation to the image without degrading the
quality. You can adjust curves and levels to a greater extent, then
convert back to 8-bit data for printing. If you want to access all 12
bits of the original RAW file, you can convert to a 16-bit TIFF file.
Why not a 12-bit TIFF file? Because there's no such thing! Actually
what you do is put the 12 bit data in a 16 bit container. It's a bit
like putting a quart of liquid in a gallon jug, you get to keep all
the liquid but you have some free space. Putting the 12 bit data in a
8 bit file is like pouring that quart of liquid into a pint container.
It won't all fit so you have to throw some away."

About what I'm trying to achieve in this thread and newsgroup:
I'm just trying to decide whether to buy a DSLR, retain the
use of a CMOS webcam, or buy a CCD webcam - a Celestron
NexImage, which costs $91. But then a second-hand Canon
300D costs only twice as much, in the $200 range. So it is
hard to decide; added to that is whether to get a 1000D or 450D
or even 500D if I go to the next level. Hence I'm acquiring all
useful data to be able to decide. It's just a hobby; I'm not a
photographer or anything like that. Although the primary objective
is to test resolution charts for a better understanding of the
Modulation Transfer Function, contrast transfer, and the like.

Hughes

nospam

Apr 28, 2009, 7:46:46 PM
In article
<bbe55aa8-27fb-4062...@q33g2000pra.googlegroups.com>,
Hughes <eugen...@gmail.com> wrote:

> > > Printing photos is only 8-bit.
> >
> > Would you care to defend that statement?
>

> I read it here:
>
> http://photo.net/learn/raw/

that article is five years old.

Hughes

Apr 28, 2009, 10:09:59 PM
On Apr 28, 9:13 pm, "David J Taylor" <david-tay...@blueyonder.not-this-

Hi,

I downloaded some Canon raw images. I want to see the image
before Bayer demosaicing occurs. In Photoshop 4, when you
open RAW files, it automatically demosaics them. What
shareware program do you know that can open a RAW file without
automatic Bayer demosaicing? I want to zoom in on 4 pixels
and see the adjacent 2 green, 1 red and 1 blue pixels before
demosaicing occurs.

Hughes

Hughes

Apr 28, 2009, 10:11:40 PM
On Apr 28, 10:07 pm, Chris L Peterson <c...@alumni.caltech.edu> wrote:
> On Tue, 28 Apr 2009 00:00:24 -0700 (PDT), Hughes <eugenhug...@gmail.com>

I wonder if the following calculations would be the determining
factor.

I use an arcsecond pixel scale, which may be confusing for
photography, which uses line pairs per millimeter. So I
have to convert arcseconds to line pairs per millimeter.

My telephoto's manual gives it a resolution spec of 50 line
pairs per millimeter. Now, converting arcseconds to lp/mm
using a Canon 300D and the 1000mm telephoto:

pixel scale = 206265 x 0.0074 / 1000 = 1.53 arcsec/pixel

lines per millimeter = 206265 / (pixel scale x focal length)
                     = 135 lines/mm, or 135/2 = 67.5 line pairs/mm

Since my telephoto resolves only 50 lp/mm and the Canon
300D on it can sample 67.5 lp/mm, is it adequate for the task?

Or, using the telephoto's 50 lp/mm spec to calculate
the pixel scale:

line pairs per millimeter = 206265 / (2 x pixel scale x focal length)
50 = 206265 / (2 x pixel scale x 1000)
pixel scale = 206265 / (100 x 1000)
pixel scale = 2.06 arcsec/pixel

This means the telephoto, with its 50 lp/mm resolution spec,
can only deliver 2.06 arcsec in conjunction with the
7.4 micron Canon 300D, which has a 1.53 arcsec pixel
scale. So it is sufficient, even though it doesn't reach
the criterion where the pixel is 1/2 of the telephoto's
resolving power - a criterion which isn't standard anyway.

Can anyone else with a telephoto confirm the calculations?
If your telephoto has a resolution spec of only 50 lp/mm, is a
digicam used with it, with a theoretical resolution of
67.5 lp/mm, sufficient for the task? Again, the 67.5 lp/mm is
calculated from:

pixel scale = 206265 x 0.0074 / 1000 = 1.53 arcsec/pixel

lines per millimeter = 206265 / (pixel scale x focal length)
                     = 135 lines/mm, or 135/2 = 67.5 line pairs/mm

Are the calculations correct?
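The same conversions in a short Python sketch (using the definitions
above: pixel scale = 206265 x pitch / focal length, and one line pair
spanning two pixels at the Nyquist limit):

    pitch_mm = 0.0074      # Canon 300D pixel pitch
    fl_mm = 1000.0         # telephoto focal length

    pixel_scale = 206265 * pitch_mm / fl_mm          # ~1.53 arcsec/pixel
    sensor_lp_mm = 1 / (2 * pitch_mm)                # ~67.6 lp/mm at the sensor

    lens_lp_mm = 50.0                                # spec from the manual
    lens_arcsec = 206265 / (2 * lens_lp_mm * fl_mm)  # ~2.06 arcsec at the lens
    print(pixel_scale, sensor_lp_mm, lens_arcsec)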

Hughes

nospam

Apr 28, 2009, 10:39:44 PM
In article
<3edbe01e-0ab2-406d...@v1g2000prd.googlegroups.com>,
Hughes <eugen...@gmail.com> wrote:

> I downloaded some Canon Raw images. I want to see the image
> before Bayer demosaicing occurs. In Photoshop 4, when you
> open RAW files, it automatically demosaice it.

photoshop 4 is over ten years old and does not support raw at all. do
you mean cs4 or perhaps photoshop elements?

> What program
> shareware do you know that can open RAW file without
> automatic bayer demosaicing? I want to zoom at 4 pixels
> and see the adjacent 2 green and 1 red, blue pixels before
> demosaicing occurs.

dcraw

David J Taylor

Apr 29, 2009, 2:15:11 AM
Hughes wrote:
[]

> pixel scale = 206265 x 0.0074 / 1000 = 1.53 arcsec/pixel
>
> lines per millimeter = 206265 / (pixel scale x focal length)
>                      = 135 lines/mm, or 135/2 = 67.5 line pairs/mm
>
> Are the calculations correct?
>
> Hughes

[cross-posting trimmed]

Using a single number for "resolution" may not tell the whole story for
imaging systems, depending on the application. You need to consider the
MTF of the entire system, and also the noise spectrum. An image may look
better if the lower-frequency MTF is high, and different observers may
prefer an image with a different noise ("grain") spectrum, but the same
total noise. Remember that a sensor with finite sensing sites will most
likely have an anti-aliasing filter to reduce the Moiré fringing and
related effects.

David

bugbear

unread,
Apr 29, 2009, 4:15:21 AM4/29/09
to

It was wrong then too :-)

BugBear

Bob Larter

unread,
Apr 29, 2009, 4:31:35 AM4/29/09
to

So? It's true that all the usual printing technologies only use 8 bit
colour depth.


Bob Larter

unread,
Apr 29, 2009, 4:43:48 AM4/29/09
to
Hughes wrote:
> I downloaded some Canon Raw images. I want to see the image
> before Bayer demosaicing occurs. In Photoshop 4, when you
> open RAW files, it automatically demosaics them. What program
> shareware do you know that can open a RAW file without
> automatic Bayer demosaicing? I want to zoom in on a block of
> 4 pixels and see the adjacent 2 green, 1 red and 1 blue
> sensels before demosaicing occurs.

I don't know of any specific program that can do that. If you're a good
C programmer, you could probably modify Dcraw to do what you want:
<http://www.cybercom.net/~dcoffin/dcraw/>

PS: please try to trim your posts so that you're only quoting relevant
material from the previous person.

Chris Malcolm

unread,
Apr 29, 2009, 6:28:00 AM4/29/09
to

I only know with certainty of one, but not being familiar with every
high quality P&S on the market I can't be sure there aren't more.

--
Chris Malcolm

bugbear

unread,
Apr 29, 2009, 8:29:36 AM4/29/09
to
Bob Larter wrote:
> nospam wrote:
>> In article
>> <bbe55aa8-27fb-4062...@q33g2000pra.googlegroups.com>,
>> Hughes <eugen...@gmail.com> wrote:
>>
>>>>> Printing photos is only 8-bit.
>>>> Would you care to defend that statement?
>>> I read it here:
>>>
>>> http://photo.net/learn/raw/
>>
>> that article is five years old.
>
> So? It's true that all the usual printing technologies only use 8 bit
> colour depth.
>

PostScript (a rather common printing engine/language) has supported > 8 bit
samples since Level 2.

BugBear

nospam

unread,
Apr 29, 2009, 8:37:32 AM4/29/09
to
In article <49f8...@dnews.tpgi.com.au>, Bob Larter
<bobby...@gmail.com> wrote:

> >>>> Printing photos is only 8-bit.
> >>> Would you care to defend that statement?
> >> I read it here:
> >>
> >> http://photo.net/learn/raw/
> >
> > that article is five years old.
>
> So? It's true that all the usual printing technologies only use 8 bit
> colour depth.

mac os x supports 16 bit printing and photoshop cs4 takes advantage of
that. there are probably other apps that also support it.

Bob Larter

unread,
Apr 29, 2009, 11:41:48 AM4/29/09
to

Really? I didn't know that.

I actually used to program in PostScript, but that was before Level 2.
(i.e., a very long time ago.)

Bob Larter

unread,
Apr 29, 2009, 11:42:47 AM4/29/09
to

That doesn't mean that the printer uses more than 8 bits.


Hughes

unread,
Apr 28, 2009, 10:58:30 PM4/28/09
to
On Apr 29, 10:39 am, nospam <nos...@nospam.invalid> wrote:
> In article
> <3edbe01e-0ab2-406d-b018-de22b2b86...@v1g2000prd.googlegroups.com>,

>
> Hughes <eugenhug...@gmail.com> wrote:
> > I downloaded some Canon Raw images. I want to see the image
> > before Bayer demosaicing occurs. In Photoshop 4, when you
> > open RAW files, it automatically demosaics them.
>
> photoshop 4 is over ten years old and does not support raw at all.  do
> you mean cs4 or perhaps photoshop elements?
>

Photoshop CS4. When you open a RAW file, it automatically
performs Bayer interpolation?

When you look at the original RAW without interpolation
(or demosaicing), are you supposed to see colors, or just
a greyscale image, which is what the sensor sees? On the
following site:

http://www.kenrockwell.com/tech/bayer.htm#

a comparison is given between images with and without
interpolation by rolling the mouse over the image. Both
show colors. But pure RAW is just data and no colors
should be seen, so is the site incorrect to show the RAW
in colors before Bayer interpolation is applied?

H

Hughes

unread,
Apr 29, 2009, 4:44:16 AM4/29/09
to
On Apr 29, 10:39 am, nospam <nos...@nospam.invalid> wrote:
> In article
> <3edbe01e-0ab2-406d-b018-de22b2b86...@v1g2000prd.googlegroups.com>,
>
> Hughes <eugenhug...@gmail.com> wrote:
> > I downloaded some Canon Raw images. I want to see the image
> > before Bayer demosaicing occurs. In Photoshop 4, when you
> > open RAW files, it automatically demosaics them.
>
> photoshop 4 is over ten years old and does not support raw at all.  do
> you mean cs4 or perhaps photoshop elements?

Yes, it's CS4. As I understand it, all programs that
open RAW automatically demosaic it (apply Bayer
interpolation). If one could open a RAW (or raw, since
RAW is not an acronym) that has not been demosaiced,
the image would look very green. However, on the
following site:

http://www.kenrockwell.com/tech/bayer.htm

one can compare an image that has undergone Bayer
interpolation with one that supposedly hasn't. I think
it's wrong, because the one without doesn't look green
enough. Look at it and let me know your comments. Both
seem to have had Bayer interpolation applied (move your
mouse over the first image and out of it to switch).

Hughes

nospam

unread,
Apr 30, 2009, 12:05:06 AM4/30/09
to
In article
<a3147b58-8ab5-40f5...@v35g2000pro.googlegroups.com>,
Hughes <eugen...@gmail.com> wrote:

> Photoshop CS4. When you open a RAW file, it automatically
> performs Bayer interpolation?

yep, adobe camera raw is invoked when a raw image is opened. there's a
wealth of adjustments in it.

> When you look at the original RAW without interpolation
> (or demosaicing), are you supposed to see colors, or just
> a greyscale image, which is what the sensor sees?

it's just data, but typically the appropriate colour is applied to the
pixel, so you should see something somewhat greenish. if you look at
some of the papers on bayer algorithms, there should be some samples of
the raw data and the effects of the various algorithms.

> On the following site:
>
> http://www.kenrockwell.com/tech/bayer.htm#
>
> a comparison is given between images with and without
> interpolation by rolling the mouse over the image. Both
> show colors. But pure RAW is just data and no colors
> should be seen, so is the site incorrect to show the RAW
> in colors before Bayer interpolation is applied?

if it's at ken rockwell's site, then there's a decent chance it's not
accurate. read his about page -- he likes to make up stuff.

in any event, it looks like what he did was take a photo using a longer
lens and downsample it to show what it would be like had there been no
bayer interpolation done, i.e., with a 3-chip camera or a scanner.
that's not the same as looking at raw data prior to demosaicing.

if you want to extract the raw data itself, use dcraw:

<http://www.cybercom.net/~dcoffin/dcraw/>
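
if you'd rather poke at the mosaic from a script, the third-party
rawpy module (a python wrapper around libraw) exposes it directly.
untested sketch, and the filename is made up:

    import rawpy

    raw = rawpy.imread("IMG_1234.CR2")   # hypothetical file
    mosaic = raw.raw_image_visible       # 2-d array of sensel values, no demosaicing
    print(raw.raw_pattern)               # the 2x2 bayer layout, indices into...
    print(raw.color_desc)                # ...this, typically b'RGBG'
    print(mosaic[0:2, 0:2])              # one 2x2 quartet: 2 green, 1 red, 1 blue sensels

each sensel is a single number -- the colour only comes from knowing
which filter sits on top of it.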

Bob Larter

unread,
Apr 30, 2009, 3:59:31 AM4/30/09
to
Hughes wrote:
> On Apr 29, 10:39 am, nospam <nos...@nospam.invalid> wrote:
>> In article
>> <3edbe01e-0ab2-406d-b018-de22b2b86...@v1g2000prd.googlegroups.com>,
>>
>> Hughes <eugenhug...@gmail.com> wrote:
>>> I downloaded some Canon Raw images. I want to see the image
>>> before Bayer demosaicing occurs. In Photoshop 4, when you
>>> open RAW files, it automatically demosaics them.
>> photoshop 4 is over ten years old and does not support raw at all. do
>> you mean cs4 or perhaps photoshop elements?
>>
>
> Photoshop CS4. When you open a RAW file, it automatically
> performs Bayer interpolation?

Yes.

> When you look at the original RAW without interpolation
> (or demosaicing), are you supposed to see colors, or just
> a greyscale image, which is what the sensor sees? On the
> following site:
>
> http://www.kenrockwell.com/tech/bayer.htm#
>
> a comparison is given between images with and without
> interpolation by rolling the mouse over the image. Both
> show colors. But pure RAW is just data and no colors
> should be seen, so is the site incorrect to show the RAW
> in colors before Bayer interpolation is applied?

No, because each pixel has a red, green or blue colour filter over it.

Bob Larter

unread,
Apr 30, 2009, 4:01:19 AM4/30/09
to
Hughes wrote:
> On Apr 29, 10:39 am, nospam <nos...@nospam.invalid> wrote:
>> In article
>> <3edbe01e-0ab2-406d-b018-de22b2b86...@v1g2000prd.googlegroups.com>,
>>
>> Hughes <eugenhug...@gmail.com> wrote:
>>> I downloaded some Canon Raw images. I want to see the image
>>> before Bayer demosaicing occurs. In Photoshop 4, when you
>>> open RAW files, it automatically demosaics them.
>> photoshop 4 is over ten years old and does not support raw at all. do
>> you mean cs4 or perhaps photoshop elements?
>
> Yes, it's CS4. As I understand it, all programs that
> open RAW automatically demosaic it (apply Bayer
> interpolation). If one could open a RAW (or raw, since
> RAW is not an acronym) that has not been demosaiced,
> the image would look very green. However, on the
> following site:
>
> http://www.kenrockwell.com/tech/bayer.htm
>
> one can compare an image that has undergone Bayer
> interpolation with one that supposedly hasn't. I think
> it's wrong, because the one without doesn't look green
> enough. Look at it and let me know your comments. Both
> seem to have had Bayer interpolation applied (move your
> mouse over the first image and out of it to switch).

Read the text under the image for an explanation.

bugbear

unread,
Apr 30, 2009, 4:53:05 AM4/30/09
to
Hughes wrote:
> On Apr 29, 10:39 am, nospam <nos...@nospam.invalid> wrote:
>> In article
>> <3edbe01e-0ab2-406d-b018-de22b2b86...@v1g2000prd.googlegroups.com>,
>>
>> Hughes <eugenhug...@gmail.com> wrote:
>>> I downloaded some Canon Raw images. I want to see the image
>>> before Bayer demosaicing occurs. In Photoshop 4, when you
>>> open RAW files, it automatically demosaics them.
>> photoshop 4 is over ten years old and does not support raw at all. do
>> you mean cs4 or perhaps photoshop elements?
>
> Yes, it's CS4. As I understand it, all programs that
> open RAW automatically demosaic it (apply Bayer
> interpolation).

You've already been told of dcraw, which can be told (the -D flag)
to do what you want.

Please have the courtesy to read people's answers, and do some
work yourself.
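
For example, driving it from Python (the flags are straight from the
dcraw man page; the filename is made up):

    import subprocess

    # -D: document mode, totally raw - no demosaicing, no colour scaling
    # -4: linear 16-bit output; -T: write a TIFF instead of a PPM
    subprocess.run(["dcraw", "-D", "-4", "-T", "IMG_1234.CR2"], check=True)
    # result: IMG_1234.tiff, a 16-bit greyscale image of the bare mosaic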

BugBear

Hughes

unread,
Apr 30, 2009, 8:09:13 AM4/30/09
to

Well, when I posted the first reply to "nospam", I thought it
was lost because Google didn't post it for hours. That is why
I wrote a second message that practically rephrases the same
content as the first reply. It is also why I posted duplicates
of another message: I thought it was lost. Maybe Google was
congested by panic about the swine flu.

Hughes

Pierre Vandevenne

unread,
Apr 30, 2009, 1:31:20 PM4/30/09
to
On Apr 29, 2:27 am, Hughes <eugenhug...@gmail.com> wrote:

> My telephoto's manual gives a resolution spec of 50 line pairs
> per millimeter. Now, converting the pixel scale in arcseconds
> to lp/mm for a Canon 300D on a 1000mm telephoto:

So you have a 1000mm lens - is it the Phoenix 500mm with doubler at
roughly $100 or the $34000 Sigma?
I'll assume it is the former since you probably wouldn't be asking if
you had the Sigma: you would have chatted with the designer of the
lens.

The Phoenix lens is rated as 500mm - F/D 8 - and therefore has at most
a 62.5mm aperture.

Absolute resolution depends, in a perfect environment, on the diameter
of the objective. Its resolution at visual wavelengths is
therefore at best around 2 arc seconds = 0.25 x 0.5/0.0625

(http://ceres.hsc.edu/homepages/classes/astronomy/fall97/Mathematics/sec16.html)

which is roughly the resolution given by all manufacturers of 60mm
telescopes. That would be for a perfect single lens... Imperfect
lenses, doubling system, etc... will decrease the performance greatly.

Assuming you are using it with a barlow (the doubler) at 1000mm, you
are operating at around F/D 16 and therefore, on 7.1 µm pixels (or
7.4 µm pixels), imaging at approx 1.46 arc seconds per pixel. If you
want to sample the signal at 1 arc second per pixel, you'll have to go
to an F/D of 23-24.
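
A quick back-of-the-envelope check in Python, with the same numbers
(nothing new, just the arithmetic):

    pitch_mm = 0.0074                    # 7.4 micron pixels
    f_needed = 206265 * pitch_mm / 1.0   # focal length for 1 arcsec/pixel: ~1526 mm
    fd_needed = f_needed / 62.5          # with a 62.5 mm aperture: ~24
    print(f_needed, fd_needed)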

> Pixel scale = 206265 x 0.0074 / 1000 = 1.53 arcsec/pixel
>
> Lines per millimeter = 206265 / (pixel scale x focal length)
>                      = 135 lines / millimeter, or
>                        135/2 = 67.5 line pairs / millimeter

why complicate easy things? a line is 7.4 µm - a line pair 14.8 µm

1000/14.8 = 67.57... lp/mm

> This means the telephoto with its 50 lp/mm resolution spec
> can only produce 2.06 arcsec/pixel in conjunction with the

Which is another way of saying you have an effective aperture of
62.5mm

> Are the calculations correct?

Roughly yes.

The 50D would be a better match, with its 4.7 µm pixel size, giving
you around 1 arc second per pixel.

But that would only be the case with a perfect lens, on a perfect
monochrome sensor...

Initially, you said you were planning to shoot a chart indoors. At
which distance are you going to place that chart? Interesting
question, if you plan to shoot line pairs...

Hughes

unread,
Apr 30, 2009, 6:26:47 PM4/30/09
to
On May 1, 1:31 am, Pierre Vandevenne <pie...@datarescue.com> wrote:
> On Apr 29, 2:27 am, Hughes <eugenhug...@gmail.com> wrote:
>
> > My telephoto's manual gives a resolution spec of 50 line pairs
> > per millimeter. Now, converting the pixel scale in arcseconds
> > to lp/mm for a Canon 300D on a 1000mm telephoto:
>
> So you have a 1000mm lens - is it the Phoenix 500mm with doubler at
> roughly $100 or the $34000 Sigma?
> I'll assume it is the former since you probably wouldn't be asking if
> you had the Sigma: you would have chatted with the designer of the
> lens.

What I have is the 100mm aperture Rubinar, a native 1000mm
F/10. See:

http://www.kremlinoptics.com/catalog/item/rubinar_10_1000_telephoto_lens.html

Although I got it for only $560 at www.lzos.ru (click this twice
to make it appear).

>
> The Phoenix lens is rated as 500mm - F/D 8 - and therefore has at most
> a 62.5mm aperture.
>
> Absolute resolution depends, in a perfect environment, on the diameter
> of the objective. Its resolution at visual wavelengths is
> therefore at best around 2 arc seconds = 0.25 x 0.5/0.0625
>
> (http://ceres.hsc.edu/homepages/classes/astronomy/fall97/Mathematics/sec16.html)
>
> which is roughly the resolution given by all manufacturers of 60mm
> telescopes. That would be for a perfect single lens... Imperfect
> lenses, doubling system, etc... will decrease the performance greatly.
>
> Assuming you are using it with a barlow (the doubler) at 1000mm, you
> are operating at around F/D 16 and therefore, on 7.1 µm pixels (or
> 7.4 µm pixels), imaging at approx 1.46 arc seconds per pixel. If you
> want to sample the signal at 1 arc second per pixel, you'll have to go
> to an F/D of 23-24.
>
> > Pixel scale = 206265 x 0.0074 / 1000 = 1.53 arcsec/pixel
>
> > Lines per millimeter = 206265 / (pixel scale x focal length)
> >                      = 135 lines / millimeter, or
> >                        135/2 = 67.5 line pairs / millimeter
>
> why complicate easy things? a line is 7.4 µm - a line pair 14.8 µm
>
> 1000/14.8 = 67.57... lp/mm

The Rubinar has a spec of 50 lp/mm in the manual. A perfect
F/10 can resolve much more, from the formula

Linear resolving power = D / (f x wavelength) = 1 / (N x wavelength) lp/mm

For N = f/D = 10 and green light with a wavelength of 555nm:

Linear resolving power = 1 / (10 x 5.55x10^-4 mm) = 180 lp/mm

Taking half of that, 180/2 = 90 lp/mm, as a practical figure for a
perfect system (analogous to the Dawes limit), my Rubinar still has
a poor spec: it can only resolve 50 lp/mm versus the 90 lp/mm of a
perfect system. I'm trying to figure out the optimum pixel size for
my 50 lp/mm Rubinar without oversampling, so calculating:

50 lp/mm = 1000 um / (pixel size x 2)
pixel size = (1000/50) / 2
pixel size = 10 microns

So a pixel size of 10 microns is enough to use with my
Rubinar? But the Nyquist criterion says that for a perfect
sample, the pixel must be 1/2 the size of the finest detail.
In the case of the Rayleigh criterion, the 100mm aperture
has a maximum resolving power of 1.16 arcsec, so Nyquist
says the sampling must be 1/2 of it, or 0.58 arcsec. Now, in
the case of line pairs per millimeter, how do you apply the
Nyquist criterion that optimum sampling should be 1/2 of the
finest signal?

Does it mean that instead of a 10 micron pixel I should get
a 5 micron pixel to obey the Nyquist criterion? How does
Nyquist apply to line pairs per millimeter instead of
resolving power in arcseconds?
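
Here is how I would code the check in Python (my own reasoning, and
the 555nm wavelength is my assumption, so please correct me if the
logic is wrong):

    N = 10.0                             # focal ratio of the Rubinar
    lam_mm = 555e-6                      # green light, 555nm, in mm
    cutoff_lp_mm = 1.0 / (N * lam_mm)    # ~180 lp/mm cutoff for a perfect f/10
    spec_lp_mm = 50.0                    # figure from the Rubinar manual
    line_um = 1000.0 / (2 * spec_lp_mm)  # one line of a 50 lp/mm pair: 10 microns
    nyquist_px_um = line_um / 2.0        # two pixels per line: 5 microns
    print(cutoff_lp_mm, line_um, nyquist_px_um)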

Something else that confuses me: linear resolving power is
related to angular resolving power by the formula

Linear resolving power = 206265 / (angular resolving power x f)

In my example, where the angular resolving power is 1.16
arcsec and f = 1000mm:

Linear resolving power = 206265 / (1.16 x 1000) = approx. 180 lp/mm

Now here's what's bugging me.

Say a 100mm aperture telephoto or telescope has poor optics.
We still quote it as resolving 1.16 arcseconds; we don't
state that a poor-optics scope has lower resolving power,
like only 2 arcseconds for a 100mm aperture lens. It is
still 1.16 arcseconds. But in linear resolving power we get
a spec of 50 lp/mm instead of the perfect 90 lp/mm (for a 4"
aperture). Why don't telescopes carry a spec of 2 arcseconds
for a poor 4" lens, instead of the default 1.16 arcseconds
always being used? Is there proof that in a poor lens the
angular resolving power decreases too, just like the linear
resolving power? Please don't omit this issue in your reply,
because it is the key to understanding everything.

>
> > This means the telephoto with its 50 lp/mm resolution spec
> > can only produce 2.06 arcsec/pixel in conjunction with the
>
> Which is another way of saying you have an effective aperture of
> 62.5mm
>
> > Are the calculations correct?
>
> Roughly yes.
>
> The 50D would be a better match, with its 4.7 µm pixel size, giving
> you around 1 arc second per pixel.
>
> But that would only be the case with a perfect lens, on a perfect
> monochrome sensor...
>
> Initially, you said you were planning to shoot a chart indoors. At
> which distance are you going to place that chart? Interesting
> question, if you plan to shoot line pairs...

That's why I can't start using the resolution chart: I don't
know at what distance to place it. What distance do you
think?

Many thanks.
Hughes

Pierre Vandevenne

unread,
May 1, 2009, 4:14:14 PM5/1/09
to
On May 1, 12:26 am, Hughes <eugenhug...@gmail.com> wrote:

> What I have is the 100mm aperture Rubinar that is raw 1000mm
> F/10. See:

Ah, OK - well, in that case, there is another issue that comes into
play: this is a catadioptric optic. That means that there is a central
obstruction that will play a significant role. It will definitely have
a negative impact on the contrast, at least compared to an
unobstructed optical system of the same aperture. There are people
here who have a much better understanding of optics than I have so I
won't even attempt to go into the details. The effect on resolution
will be less noticeable, which is why interferometry (roughly
speaking, two small scopes separated by a large base - a kind of
central obstruction) increases resolution to some extent, especially
for point sources. But the effect of the loss of contrast will be more
significant with alternating black and white lines. Also, according to
the Rubinar specs, the 50 lp/mm is for the center of the field; the
edges of the field are given at 35 lp/mm "only", quite possibly because
coma (an optical aberration one encounters in many scopes) plays a role
there.
Then, there is the issue of collimation, which is very important in
catadioptrics.

Of course, the quality of the mirror also matters because it is one
thing to have a certain potential resolution but the amount of light
one effectively puts within that ideal minimal dot matters a lot

http://www.rfroyce.com/standards.htm

> Although I got it for only $560 at www.lzos.ru (click this twice
> to make it appear)

Good deal indeed, and lzos is a very respected optical manufacturer.

> In lines /mm instead of lines pair /mm. It's 180/2 = 90 l/mm
> which is for a perfect system analogous to Dawes limit.
> So my Rubinar has lower and poor spec as it can only
> resolve 50 l/mm versus the 90 l/mm from a perfect system.

When I was writing my first message, I was tempted to write that the
real-life performance would be about half of the theoretical,
diameter-based limit. I didn't because I didn't want to sound rude,
but that's probably what we are seeing here. It could be that LZOS
believes its optics to have a certain precision (see the Royce link
above) and factors that into its performance claim.

> I'm trying to figure out the optimum pixel size for my
> 50 lp/mm Rubinar without oversampling, so calculating:
>
> 50 lp/mm = 1000 um / (pixel size x 2)
> pixel size = (1000/50) / 2
> pixel size = 10 microns

50 lp/mm leads to a 20 micron size for the pair, and 10 microns for the
line itself. What you are sampling is the line, not the pair, so if
you are basing your sampling criterion on Nyquist, you should probably
use a 5 micron pixel. But that is only if your goal is to accurately
sample those lines as ideally projected on your sensor. As
you have noted, 50 lp/mm resolved 1 to 1 on a sensor would not be as
good as the theoretical maximum that could be achieved. (Interestingly
enough, this might be part of the reason why there is an F/D 20 adapter
for the Rubinar - http://www.rugift.com/photocameras/teleconverter_tkl2.htm -
which allows closer to optimum pixel matching for astrophotography.)

> Does it mean that instead of a 10 micron pixel I should
> get a 5 micron pixel to obey the Nyquist criterion? How
> does Nyquist apply to line pairs per millimeter instead of
> resolving power in arcseconds?

Well, Nyquist is a general principle. When I was taught about it, it
was presented as some kind of absolute truth and limit. When you dig a
bit today, the emphasis is on *sufficient*, which means that in some
cases (google "sparse sampling") the signal can be reconstructed under
other conditions.

> 90 lp/mm (for a 4" aperture)? Why don't telescopes carry a
> spec of 2 arcseconds for a poor 4" lens, instead of the
> default 1.16 arcseconds always being used?

Well, that is marketing. The figures quoted in telescope ads are
always the figures derived from the wavelength/diameter formula.
Obviously, there will be very good and very bad scopes, and their
resolutions will differ markedly (see Strehl in the above Royce link).

> Is there proof that in a poor lens the angular resolving power
> decreases too, just like the linear resolving power?
> Please don't omit this issue in your reply, because it is the
> key to understanding everything.

The "royce" link above will give you a few hints about what can go
wrong.

> > Initially, you said you were planning to shoot a chart indoors. At
> > which distance are you going to place that chart? Interesting
> > question, if you plan to shoot line pairs...

> That's why I can't start using the resolution chart: I don't
> know at what distance to place it. What distance do you
> think?

Frankly, I have no idea. If I really wanted to play with such a chart,
I would first determine the minimum distance at which the scope will
focus and, from there, approximate what would be a good test. I would
then print a test chart on a laser printer and measure what I can
actually resolve. The fact that you can't print 50 lp/mm doesn't really
matter, because the chart will be far away. In fact, you could even
print a rough chart and move it away until you can't resolve it
anymore. Then it would be fairly easy to calculate what your
practical resolution is.
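
A rough sizing of that experiment in Python (the resolution figure is
the ideal 100mm one, and the test distances are arbitrary choices):

    res_arcsec = 1.16                            # perfect 100 mm aperture, 116/D
    for d_m in (5.0, 10.0, 20.0):                # candidate chart distances
        w_mm = res_arcsec / 206265 * d_m * 1000  # finest resolvable bar width at the chart
        print(d_m, "m:", round(w_mm, 3), "mm bars,",
              round(1.0 / (2 * w_mm), 1), "lp/mm on the chart")

At 20 m the finest resolvable bars are already about 0.11 mm wide,
around 4.4 lp/mm on paper, which any decent laser printer can manage.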

The formulas give you a rough approximation of what to try. If you are
sampling optimally, the quality of the setup will be revealed - at
least you'll know why you can't resolve better. Since we are usually
stuck with a system whose diameter can't be increased, whose quality
can't be magically improved, whose optical formula can't be changed,
and whose pixel size is fixed, the easiest parameter to change is the
focal length (hence the extender)...

Hughes

unread,
May 1, 2009, 9:19:08 PM5/1/09
to
> > Although I got it for only $560 at www.lzos.ru (click this twice

> > to make it appear)
>
> Good deal indeed, and lzos is a very respected optical manufacturer.
>
> > Taking half of that, 180/2 = 90 lp/mm, as a practical figure for a
> > perfect system (analogous to the Dawes limit), my Rubinar still has
> > a poor spec: it can only resolve 50 lp/mm versus the 90 lp/mm of a
> > perfect system.
>
> When I was writing my first message, I was tempted to write that the
> real life performance would be about half of what the theoretical,
> diameter based, limit. I didn't because I didn't want to sound rude,
> but that's probably what we are seing here. It could be that lzos
> believes its optics to have a certain precision (see the royce link
> above) and factors that in its performance claim.

Say the Rubinar has a Strehl of 50%, i.e. only half of the
light is in the center of the Airy disc. This means the
diffraction rings are wider and brighter, and the entire
Airy disc with its rings is 3 times the size of a perfect
100% Strehl reference. Now, what would be the equivalent
aperture with the same Airy disc size, since a smaller
aperture has a larger Airy disc? I guess it's about 70mm.
So a 70mm perfect refractor would have similar linear
resolving power to a 100mm Rubinar with a more reasonable
Strehl of, say, 60%.

Now look at the following review of the Rubinar
vs the Canon 400 Telephoto.

http://photography-on-the.net/forum/showthread.php?t=353350&page=2

The reviewer said:
"That is my issue. With Canon lens at least I have some extra pixels
to crop around. With Rubinar I end up having the same sized output
without the cropping flexibility...".

From our theoretical understanding, a good Canon 300-400
would have the same resolving power as a bad Rubinar. But
how come the guy says his Canon 300 is still better? We have
to go into an arcsecond-per-pixel analysis.
Assuming the Canon 300-400 F4 has an aperture of 70mm (is
this correct?), its perfect resolving power is 116/70 = 1.6
arcsec. Pixel scale = 206265 x 0.0047 / 300 = 3.2
arcsec/pixel. The Airy disc size at F4 is around 4.6 microns.
Using his 4.7 micron 50D DSLR, one pixel can cover a whole
Airy disc, which means he can't image the diffraction
pattern; we can say it's undersampled.

Now the Rubinar, resolving 1.16 arcsec, has a pixel scale of
0.96 arcsec/pixel. The Airy disc size at F10 is around
13 microns. Using his 4.7 micron 50D DSLR, each pixel covers
less than half of the Airy disc, i.e. roughly 2-3 pixels for
every Airy disc.
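
Checking my Airy disc figures with a little Python (I use 555nm green
here; my figures above assumed a slightly shorter wavelength, so they
come out a little smaller):

    lam_mm = 555e-6
    pitch_mm = 0.0047                    # 4.7 micron 50D pixels
    for name, N, f_mm in (("Canon at f/4", 4.0, 300.0),
                          ("Rubinar at f/10", 10.0, 1000.0)):
        d_mm = 2.44 * lam_mm * N         # linear Airy disc diameter
        theta = 206265 * d_mm / f_mm     # angle it subtends
        print(name, round(d_mm * 1000, 1), "um,",
              round(theta, 2), "arcsec,",
              round(d_mm / pitch_mm, 1), "pixels across")

This gives ~5.4 um (about one pixel) for the Canon and ~13.5 um
(nearly 3 pixels) for the Rubinar.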

What the above means is that when he looks at his Canon 300
telephoto image, even at 100% zoom, he can't see the
diffraction blurring. In the case of the Rubinar he can see
it, as there are about 3 pixels per Airy disc. Worse, since
the Airy disc is 2-3 times bigger again because of the ~60%
Strehl, each Airy disc with its rings covers over 10 pixels.
No wonder he sees the Rubinar image with lower contrast:
over 10 pixels make up one Airy disc, while in his Canon 300
telephoto one pixel makes up the Airy disc.

With this information, is it indeed possible that a Canon
300-400mm telephoto can resolve the same as a 1000mm Rubinar
with poor optics?

Near the resolving limit there is indeed blurring. But what
about large-scale contrast, far from the Airy disc scale? I
think contrast at large scale should be the same, but since
the large scale is made up of small-scale Airy discs, it
could be equally affected too.

What I can't believe is what the author said here:

"Canon 100-400 without teleconverter provides the same picture
quality, is lighter, brighter, has autofocus and zoom. Yes it was
three times the price of Rubinar (not 30x!), but it is SO MUCH EASIER
TO USE.
The size of the picture of the surfer is about the max pixel-wise that
I can tolerate from Rubinar. If you look at its images at 1:1 (zoom
100%) - it looks awful, downsample a 13 megapixel frame to 1 megapixel
and it becomes a nice picture for the web sharing. But that is not my
target output. I need a 13 megapixel image that is good at pixel
level.
Just recently I cropped and down-sized some of my images to fit
exactly the Toshiba P56-QHD LCD, which is a 56 inch, 3840x2160 color
display (8 megapixel) - I could not believe the results - stunning. I
could not do this with Rubinar. That is my issue. With Canon lens at
least I have some extra pixels to crop around. With Rubinar I end up
having the same sized output without the cropping flexibility..."

Is it so bad that the Rubinar is only good for 1 megapixel
while his Canon 300 has the quality of 13 megapixels? But
then, magnification-wise, I think he can see the same detail
in 1 megapixel from the Rubinar as in zoomed pixels from the
Canon 300? I'm looking for resolution charts for the Canon
and can't seem to find them on the internet. If you know of
any, let me know.

What do you think of my analysis above? Please think it
over.


> > I'm trying to figure out the optimum pixel size for my
> > 50 l/mm Rubinar without oversampling so calculating:
>
> > 50 lpm = 1000/(pixel size x 2)
> > pixel size = (1000/50) /2
> > pixel size = 10 micron
>
> 50 lp/mm leads to a 20 micron size for the pair, and 10 microns for the
> line itself. What you are sampling is the line, not the pair, so if
> you are basing your sampling criterion on Nyquist, you should probably
> use a 5 micron pixel. But that is only if your goal is to accurately
> sample those lines as ideally projected on your sensor. As
> you have noted, 50 lp/mm resolved 1 to 1 on a sensor would not be as
> good as the theoretical maximum that could be achieved. (Interestingly
> enough, this might be part of the reason why there is an F/D 20 adapter

> for the Rubinar - http://www.rugift.com/photocameras/teleconverter_tkl2.htm -


> which allows closer to optimum pixel matching for
> astrophotography.)

Another thing to add to my above analysis: frankly, in
terrestrial photography we don't need to resolve two points
of light separated by the resolving limit; that's only for
binary stars. So I guess in terrestrial photography the
pixel doesn't have to be 1/2 of the resolving power in
arcseconds, but 1/2 the size of the Airy disc. Is there a
practical performance gain in resolving points in the image
that lie inside one Airy disc? Maybe if you just want to
spot that an ant exists on a tiger's face a mile away; but
then you can't make out the ant, just a dot, if you have to
resolve it like binary stars.

I have only an inkjet printer, so I haven't printed the test
chart, as very close lines can smudge into each other. I
can't have it printed at a cybercafe either, because their
laser system is not new, so it may not print the fine lines
either. I'm also looking for thicker paper so that, if I
print it, the inkjet ink won't soak through. I'd waste a lot
of ink just test-printing one page, so I want to be sure
before doing it.

Hughes

Ray Fischer

unread,
May 1, 2009, 11:49:04 PM5/1/09
to
Hughes <eugen...@gmail.com> wrote:
>Here is a photo I shot with the 1000mm telephoto with webcam.
>
>http://www.pbase.com/image/111769165/original
>
>Target (scanned) is located 3.8 meters from telephoto/webcam (note:

Nobody cares

>Measurement of actual picture is 7 inches horizontal,

Really, nobody cares.

>1. In the picture there is rectangular pattern taken with telephoto/
>webcam, what is it? Printing artifact of the brochure?

Ya think?

Read this: http://en.wikipedia.org/wiki/Color_printing

>2. How come I can see vertical lines moving upward in the webcam
>preview in the monitor? Noise or because image is dim??

Who knows? It's your hardware.

>4. Using a DSLR, what would be the improvement in resolution and
>colors provided dslr and webcam has same pixel pitch?

That depends on the webcam and the dSLR, doesn't it?

>5. To be noise resistance, does the pixel (or sensel) have to be
>at least 4.7 micron? How about 2 micron pixel pitch like in digicam.

How about: It depends upon the sensor, the ambient temperature, the
amplifiers, and the signal processing.

--
Ray Fischer
rfis...@sonic.net

Chris Malcolm

unread,
May 3, 2009, 4:55:39 AM5/3/09
to
In rec.photo.digital Pierre Vandevenne <pie...@datarescue.com> wrote:

> On May 1, 12:26?am, Hughes <eugenhug...@gmail.com> wrote:

>> What I have is the 100mm aperture Rubinar that is raw 1000mm
>> F/10. See:

> Ah, OK - well, in that case, there is another issue that comes into
> play: this is a catadioptric optic. That means that there is a central
> obstruction that will play a significant role. It will definitely have
> a negative impact on the contrast, at least compared to an
> unobstructed optical system of the same aperture. There are people
> here who have a much better understanding of optics than I have so I
> won't even attempt to go into the details.

There will be some effect due to the larger amount of diffraction
going on, but I would have thought that wouldn't produce a general
image-wide effect like loss of contrast. I've seen it suggested that
the lower contrast of these lenses is at least partly due to their
short fat shape, which makes them much more susceptible to light from
outside the field of view getting in and bouncing around and ending
up on the sensor. A long lens hood is often suggested as improving
contrast a lot.

--
Chris Malcolm

Hughes

unread,
May 3, 2009, 6:21:22 AM5/3/09
to

Where did you read it suggested that off-axis reflections
are the cause of the contrast loss? I think it's the
following:

1. F/10. Even if you are using a Hubble with an
F/10 sensor, you'd have the same amount of
light at the sensor. Canon lenses are designed for
F/2.8-F/5.6. The photographers may not have
used slower shutter speeds because they
can't take action shots that way, so a fast
shutter would produce a dim image.

2. The Airy disc is spread across more than 10
pixels: partly from the normal pixel scale, and
partly because light from the center of the Airy
disc is pumped into the surrounding rings, making
the disc bigger.

Note that almost any Canon telephoto used
in conjunction with any Canon DSLR would
have the Airy disc inside one pixel. But when
you have an Airy disc that is spread across
more than 10 pixels, there must be contrast loss.

The above is my analysis of the problem after thinking about
it for weeks. I stand to be corrected by others with a
better theory.

Hughes

Chris Malcolm

unread,
May 3, 2009, 12:37:12 PM5/3/09
to
In rec.photo.digital Hughes <eugen...@gmail.com> wrote:

If I understand your airy disc argument properly, that would lead to
local contrast loss, such as edge blurring, but not image wide general
contrast loss. I was under the impression that the low contrast loss
complaint about reflex lenses referred to a general loss of contrast
over the entire image such as would be produced by a general fogging.

--
Chris Malcolm

Pierre Vandevenne

unread,
May 3, 2009, 12:50:34 PM5/3/09
to
On May 3, 10:55 am, Chris Malcolm <c...@holyrood.ed.ac.uk> wrote:
> In rec.photo.digital Pierre Vandevenne <pie...@datarescue.com> wrote:
>
> > On May 1, 12:26?am, Hughes <eugenhug...@gmail.com> wrote:
> >> What I have is the 100mm aperture Rubinar that is raw 1000mm
> >> F/10. See:
> > Ah, OK - well, in that case, there is another issue that comes into
> > play: this is a catadioptric optic. That means that there is a central
> > obstruction that will play a significant role. It will definitely have
> > a negative impact on the contrast, at least compared to an
> > unobstructed optical system of the same aperture. There are people
> > here who have a much better understanding of optics than I have so I
> > won't even attempt to go into the details.
>
> There will be some effect due to the larger amount of diffraction
> going on, but I would have thought that wouldn't produce a general
> image-wide effect like loss of contrast. I've seen it suggested that

Well, the loss of contrast of catadioptrics is a sure thing, although
it is not as bad as one would intuitively expect.

http://www.telescope-optics.net/obstruction.htm
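
A quick way to get a feel for the magnitude in Python (the 33%
obstruction ratio is just a guess at a typical catadioptric; the
(1 - eps^2)^2 peak factor is the standard annular-aperture result):

    eps = 0.33                   # linear central obstruction ratio (assumed)
    blocked = eps ** 2           # ~11% of the incoming light is blocked
    peak = (1 - eps ** 2) ** 2   # ~0.79 of the unobstructed central peak;
                                 # the missing energy goes into the rings
    print(blocked, peak)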

But of course, as you say, other factors come into play as well.

Chris L Peterson

unread,
May 3, 2009, 12:51:10 PM5/3/09
to
On 3 May 2009 16:37:12 GMT, Chris Malcolm <c...@holyrood.ed.ac.uk> wrote:

>If I understand your airy disc argument properly, that would lead to
>local contrast loss, such as edge blurring, but not image wide general
>contrast loss. I was under the impression that the low contrast loss
>complaint about reflex lenses referred to a general loss of contrast
>over the entire image such as would be produced by a general fogging.

Don't try to understand his Airy disc discussion- he is obsessed with
Airy discs, even though they are generally useless from a practical
standpoint in understanding resolution.

If you view a point source, you can see the effect of an obstructed
system on the Airy disc: energy is transferred from the central peak
into outer diffraction rings. The effect is to broaden the entire
structure. This modifies the MTF in such a way as to generally lower
contrast (with some exceptions at particular image scales). It does this
across the entire image- after all, "contrast" only makes sense when you
are talking about edges, which are anyplace where you have a change in
intensity from one region to the next. The effect is minor- some people
can see it visually, some can't. For imaging, it makes no difference at
all (nearly all of the biggest and best imaging instruments have huge
central obstructions).

Pierre Vandevenne

unread,
May 3, 2009, 2:52:48 PM5/3/09
to
On May 3, 12:21 pm, Hughes <eugenhug...@gmail.com> wrote:

> 1. F/10. Even if you are using a Hubble with an
> F/10 sensor, you'd have the same amount of

What's an F/10 sensor? That doesn't really make sense.

> light at the sensor. Canon lenses are designed for
> F/2.8-F/5.6. The photographers may not have

No, they aren't. I have lenses that are nice at F/D 1.2 and others
that behave decently (but not optimally) at F/D 22. Ultimately, their
absolute resolving power is defined by their physical aperture.
Whether this is under-, optimally or fully exploited depends on the
focal length, the design, the number of lenses in the optical path,
their precision, etc...

> 2. The Airy disc is spread across more than 10
> pixels.

Hmmmmm.

> Note that almost any Canon telephoto used
> in conjunction with any Canon DSLR would
> have the Airy disc inside one pixel. But when

Totally incorrect. It is usually the opposite.

And of course, as it was stated at the start of the thread, the notion
of pixel is a bit artificial on a Bayer Matrix.

What about simply enjoying your photography?

Hughes

unread,
May 3, 2009, 5:47:12 PM5/3/09
to
On May 4, 2:52 am, Pierre Vandevenne <pie...@datarescue.com> wrote:
> On May 3, 12:21 pm, Hughes <eugenhug...@gmail.com> wrote:
>
> > 1. F/10. Even if you are using a Hubble with
> > F/10 sensor. You'd have the same amount of
>
> What's a F/10 sensor? That doesn't really make sense.

What I meant to say was: even if you have a scope with an
aperture the size of the moon, if the system is F/10 it will
show the same brightness at the focal plane as, say, a scope
50mm in aperture at F/10. Now, in photography F/10 is dim,
and there is a critical shutter speed below which you can't
go, because the wind affects the tree branches: if you shoot
slowly enough to match F/10, you can see blurring of the
trees. Maybe someone can produce a formula for this; it's
like the movement of the trees has to be small compared with
the shutter time.
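
Something like this, perhaps, if exposure time scales with the square
of the focal ratio at fixed ISO (the f/5.6 baseline shutter speed here
is just an assumption):

    t_ref = 1.0 / 500.0                # shutter that froze the scene at f/5.6
    t_f10 = t_ref * (10.0 / 5.6) ** 2  # same exposure at f/10: ~1/157 s
    print(t_f10)

So anything moving has to hold still about three times longer at F/10.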


>
> > light in the sensor. Canon are designed for
> > F/2.8-F/5.6. The photographers may not have
>
> No, they aren't. I have lenses nice up to F/D 1.2 and others that
> behave decently (but not optimally) at F/D 22. Ultimately, their
> absolute resolving power is defined by their physical aperture.
> Whether this is under/optimally/fully exploited depends on the focal
> length, the design, the number of lenses in the optical path, their
> precison, etc...

But for nature photography, F/5.6 is about the slowest one
can get away with, due to the movement of animals and trees;
a static subject would be unaffected, though, and shutter
priority could be used.

>
> > 2. The Airy disc is spread across more than 10
> > pixels.
>
> Hmmmmm.
>
> > Note that almost any Canon telephoto used
> > in conjunction with any Canon DSLR would
> > have the Airy disc inside one pixel. But when
>
> Totally incorrect. It is usually the opposite.

Not so, because a common Canon telephoto like the Canon 300
F/2.8 has a fairly short focal length, so the pixel scale is
quite large compared to the resolving power. For example:

Canon 300 F/2.8 resolving power: 116/70 = 1.6 arcsec

Pixel scale = 206265 x 0.0067 / 300 = 4.6 arcsec/pixel

Airy disc size at f/2.8 = about 3 microns

Angle subtended by the Airy disc = about 2 arcsec

Hence the pixel scale of 4.6 arcsec is larger than the Airy
disc's angular size of 2 arcsec, so the Airy disc falls
inside one pixel. Refute this.

>
> And of course, as it was stated at the start of the thread, the notion
> of pixel is a bit artificial on a Bayer Matrix.
>
> What about simply enjoying your photogaphy?

Well, I bought the Rubinar because I saw there is a general
contrast loss and I'd like to understand why, as it doesn't
tally with optical physics unless the contrast loss is
caused by off-axis light or some other unknown factor. I
also have another 100mm scope, an APM triplet with a Strehl
of 99%, or better than 1/14th wave. I'm interested mainly in
understanding optics, not actual photography, because
understanding optics is a prerequisite to my ultimate
interest, which is Einstein's general and special
relativity, with its gravitational lensing, light
wavelengths and interference effects. Understanding lenses,
telescopes and telephotos aids immensely in the basic
knowledge of optics and comprehension of the behavior of
light.
Hughes

Hughes

unread,
May 3, 2009, 6:03:00 PM5/3/09
to

So you are emphasizing that spherical aberration shouldn't
affect general large-scale contrast in an image. But then
how come the Rubinar has such bad contrast, as in the
following actual image:

http://www.duliskovich.com/rubinar/Bridge%20%282341%20meters%29%20and%20house%20%284878%20meters%29.jpg

Compare this image to:

http://www.flickr.com/photos/gps1/1356386350/sizes/l/

Now, why does this image have so much better contrast than
the first, considering that spherical aberration and Airy
disc/pixel resolution are supposedly irrelevant to general
large-scale contrast in an image?

I hope someone can shed some light on this. The Rubinar is
so well baffled that off-axis reflections are minimal, yet
large-scale contrast in the image is bad, which seems not to
tally with theoretical principles. Unless it is because of
the F/10 focal ratio? I hope the photography people can say
whether photos taken with an F/10 lens come out with bad
contrast when the shutter speed is not matched to it.

Again, since the contrast loss from spherical aberration is
only in the details near the resolving limit, how do you
make a photograph from the Rubinar as good as the second
image above, where the contrast is excellent?

Hughes

Hughes

unread,
May 3, 2009, 6:29:44 PM5/3/09
to
On May 4, 12:37 am, Chris Malcolm <c...@holyrood.ed.ac.uk> wrote:

Yes, that is also my understanding: image-wide general
contrast loss shouldn't be caused by Airy disc effects,
which only affect local contrast at the resolving limit.
Yet the big mystery is how the Rubinar image is affected
image-wide (no, it is not off-axis reflections, because its
internal baffles are finished with a totally non-reflecting,
pitch-black surface). Perhaps some kind of chain reaction
where local effects are magnified into large ones by some
kind of in-camera interpolation? Again, let me put to you
the same statements I addressed to Mr. Peterson.

***

Chris L Peterson

unread,
May 3, 2009, 6:44:34 PM5/3/09
to
On Sun, 3 May 2009 15:03:00 -0700 (PDT), Hughes <eugen...@gmail.com>
wrote:

>So you are emphasizing that spherical aberrations shouldn't
>affect general large scale contrast in an image. But how
>come the Rubinar has so bad contrast like in the following
>actual image:

I think you might be overanalyzing things. Perhaps it's just a bad lens?
Bad baffling, rough mirror, or some other problem? There are certainly
plenty of catadioptric camera lenses, with substantial obstructions,
that do just fine.

Hughes

unread,
May 3, 2009, 6:50:18 PM5/3/09
to

I'm analyzing Mr. Peterson's statements. I think we must
distinguish between terrestrial photography and
astrophotography. In astrophotography, we image two point
sources that may be near the resolving limit. So you break
out the wine when you can resolve a binary at, say, 1.5
arcsec, and you don't mind the Airy discs overlapping: you
just want to know that there are two sources, and that is
the main purpose. But in terrestrial photography, the
blurring or clarity of the image is dictated by how good the
neighboring pixels are. When you have pixels with
overlapping diffraction discs, you get blurring. Now look at
the image I took with the Rubinar, in the following:

http://www.pbase.com/eugenehughes/image/111769165


If you zoom to 100% on a monitor, the final Airy disc
diameter, after taking spherical aberration into account,
can be detected by eye, and this results in blurring of the
image. So I think it is possible that Airy disc effects can
indeed cause image-wide contrast loss. If you consider that
edges can be any place in the image where intensity changes
from one region to the next, then every inch of the image is
affected, hence the contrast loss can be general.

Astrophotography also involves imaging planets, which
doesn't differ from terrestrial photography, so how can
Mr. Peterson say it doesn't affect contrast image-wide at
large scale? Unless I'm wrong in understanding his
statements and he is saying that there is indeed image-wide
contrast loss. But then why did he say the Airy disc is not
important, when it alone can blur the image taken?

I hope someone can shed more light on this mystery of cosmic
proportions.

Hughes

Hughes

unread,
May 3, 2009, 7:01:22 PM5/3/09
to
On May 4, 6:44 am, Chris L Peterson <c...@alumni.caltech.edu> wrote:
> On Sun, 3 May 2009 15:03:00 -0700 (PDT), Hughes <eugenhug...@gmail.com>

No, because all 80,000 units of the Rubinar produce
identical contrast when shooting with a digicam. I really
can't understand why you don't believe the Airy disc could
be the culprit. Imagine the entire image composed of only 5
overlapping Airy discs: you can't see much detail in the
image. Make it millions of Airy discs and you can make out
the scenic lake. In terrestrial photography, the Airy disc
is like the pixel that writes the details. In astronomy, you
ignore the Airy discs because you are mainly interested in
the separation of two binary stars. But in imaging planets,
the same argument holds. Imagine an image of Jupiter made of
only 3 Airy discs: you can't make out any features. Make it
millions of Airy discs and Jupiter can be made out. Simple,
isn't it? Why can't you just accept this? If you didn't own
an observatory, I could have treated you as someone ignorant
of optics, but you own an observatory, and optics should be
your middle name. Think for a while from the perspective of
terrestrial photography, or better yet planetary imaging,
where you want to make the Airy discs as small as possible
lest they affect the entire image via the overlapping
diffraction rings, which can lower image-wide contrast.
Ponder on it.

Hughes

Chris L Peterson

unread,
May 3, 2009, 7:05:34 PM5/3/09
to
On Sun, 3 May 2009 15:50:18 -0700 (PDT), Hughes <eugen...@gmail.com>
wrote:

>If you zoom to 100% on a monitor, the final Airy disc
>diameter, after taking spherical aberration into account,
>can be detected by eye, and this results in blurring of the
>image. So I think it is possible that Airy disc effects can
>indeed cause image-wide contrast loss. If you consider that
>edges can be any place in the image where intensity changes
>from one region to the next, then every inch of the image is
>affected, hence the contrast loss can be general.

You're making things very difficult on yourself trying to reason out
contrast and resolution along these lines. We've discussed this many
times, but you apparently refuse to listen.

>Astrophotography also involves imaging planets, which
>doesn't differ from terrestrial photography, so how can
>Mr. Peterson say it doesn't affect contrast image-wide at
>large scale? Unless I'm wrong in understanding his
>statements and he is saying that there is indeed image-wide
>contrast loss. But then why did he say the Airy disc is not
>important, when it alone can blur the image taken?

It isn't that the Airy disc isn't important, it is that thinking about
the Airy disc isn't a very useful way to analyze the problem. The
central obstruction definitely affects image contrast, and over the
entire image. However, the impact is pretty small, and is unimportant
when imaging because you can trivially boost the contrast of the final
image (which of course, you can't do when using a telescope visually).

Again, for purposes of analysis, all you mainly care about is your
theoretical diffraction limited resolution, as determined by aperture
(and if you have suitable measuring tools, you may also consider
aberrations that can reduce that), and your pixel image scale, as
determined by focal length. If you're really inclined to rigorously test
the lens, you'll need to shoot proper resolution targets and measure the
MTF. The target you're currently using is essentially worthless for
figuring out anything useful.

Chris L Peterson

unread,
May 3, 2009, 7:30:42 PM5/3/09
to
On Sun, 3 May 2009 16:01:22 -0700 (PDT), Hughes <eugen...@gmail.com>
wrote:

>I really can't understand why you don't believe the Airy
>disc could be the culprit. Imagine the entire image composed
>of only 5 overlapping Airy discs: you can't see much detail
>in the image...

I give up.

Hughes

unread,
May 3, 2009, 7:51:32 PM5/3/09
to
On May 4, 7:05 am, Chris L Peterson <c...@alumni.caltech.edu> wrote:
> On Sun, 3 May 2009 15:50:18 -0700 (PDT), Hughes <eugenhug...@gmail.com>

Here's the culprit: if you boost the contrast of the final
image, it won't reconstruct the tiny details that are lost
to the overlapping diffraction at the resolving limit. We
must distinguish between large-scale contrast and
small-scale contrast. You can boost large-scale contrast,
but the detail in small-scale contrast near the resolving
limit is lost (for example, the lettering on an electric
post dozens of kilometers away) which you could have seen
with a larger-aperture telephoto/telescope.

Another illustration is the alternating black and white
lines of a resolution chart. As the bar separation nears the
resolving threshold, the bars overlap, producing light grey
and dark grey bars, until they just merge into one. In the
Rubinar image I took, it's as if the light and dark grey
bars can already be imaged one-to-one and seen by eye when
the image is zoomed to 100% on the monitor, hence the
overall contrast is so poor. Boosting the overall contrast
would only make the large-scale contrast easier to see; the
small-scale detail near the resolving limit is lost forever.


> Again, for purposes of analysis, all you mainly care about is your
> theoretical diffraction limited resolution, as determined by aperture
> (and if you have suitable measuring tools, you may also consider
> aberrations that can reduce that), and your pixel image scale, as
> determined by focal length. If you're really inclined to rigorously test
> the lens, you'll need to shoot proper resolution targets and measure the
> MTF. The target you're currently using is essentially worthless for
> figuring out anything useful.
> _________________________________________________
>

But it still helps in terrestrial photography if you try to make the
airy disc smaller or, translating into your language of
resolution and pixel image scale, to deliberately mismatch them
so you have less resolution. When the resolution
of your digicam or CCD is lower, the diffraction rings
can't be imaged, at the cost of fewer small details resolved.
(In terrestrial photography, maybe this is the philosophy:
small-scale details are not important as long
as the contrast of the overall image is better *naturally*.)
Of course, if you just want the details resolved, the sampling has
to match and obey the Nyquist criterion regardless of the
airy disc linear diameter to pixel pitch ratio. To further illustrate my
point: if the digicam/telescope/telephoto has bad
contrast at the resolving limit when imaging Jupiter or a snow cap
miles away, then don't use the digicam; just watch
directly with the eyes. You won't see anything
resolved, and you won't see any loss of contrast either,
because you won't see either of them at all. So you
agree that resolution and contrast can't both be
acquired perfectly. If one increases resolution or pixel
scale, contrast at the resolving limit gets bad (bars
become light and dark grey instead of white and black, but at least
some details can be resolved, per the Rayleigh criterion).
Maybe this is the point you are driving at??
The best of both worlds is maximum resolution or
pixel scale and, at the same time, maximum contrast
(make the diffraction rings smaller). This can only be
done perfectly by making the aperture larger.
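
To put numbers on "make the airy disc smaller": at the sensor, the
linear diameter of the airy disc depends only on the focal ratio
(the standard 2.44 * wavelength * f-ratio formula; the pixel pitch
below is an assumed example):

    wavelength_um = 0.55   # mid-visible (green) light
    pixel_um = 5.7         # assumed pixel pitch

    for f_ratio in (4, 5.6, 8, 10, 16):
        airy_um = 2.44 * wavelength_um * f_ratio  # first-minimum diameter
        print("f/%-4s airy disc %5.1f um = %.1f pixels"
              % (f_ratio, airy_um, airy_um / pixel_um))

At f/10 the disc already spans more than two 5.7 um pixels; a faster
focal ratio (a larger aperture at fixed focal length) shrinks it.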

Is it settled now? Comments?

Hughes

Helpful person

unread,
May 3, 2009, 8:10:58 PM5/3/09
to
On May 3, 7:30 pm, Chris L Peterson <c...@alumni.caltech.edu> wrote:
> On Sun, 3 May 2009 16:01:22 -0700 (PDT), Hughes <eugenhug...@gmail.com>

> wrote:
>
> >I really can't understand why you don't believe the airy disc could
> >be the culprit. Imagine the entire image composed of
> >only 5 overlapping airy discs; you can't see much
> >detail in the image...
>
> I give up.
> _________________________________________________
>
> Chris L Peterson
> Cloudbait Observatory http://www.cloudbait.com

Sometimes, no matter how hard you try (and you've tried very hard),
it is impossible to explain fundamentals to people with closed
minds. I admire your patience.

www.richardfisher.com

Hughes

unread,
May 3, 2009, 8:29:31 PM5/3/09
to

As they say, a picture is worth a thousand words. Try
to make the contrast of the following picture:

http://www.duliskovich.com/rubinar/Bridge%20%282341%20meters%29%20and%20house%20%284878%20meters%29.jpg

like this...

http://www.flickr.com/photos/gps1/1356386350/sizes/l/

Can you do that naturally? How?
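
For reference, the usual software "contrast boost" is just a levels
stretch; a sketch with numpy and Pillow (the file names are
placeholders) that remaps the 2nd and 98th percentiles to full black
and white:

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("hazy.jpg").convert("L"), dtype=float)
    lo, hi = np.percentile(img, (2, 98))      # clip the histogram tails
    out = (img - lo) / (hi - lo) * 255.0      # remap to the full range
    Image.fromarray(out.clip(0, 255).astype(np.uint8)).save("stretched.jpg")

It raises the large-scale contrast, but it cannot restore detail
finer than the blur.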

H


Hughes

unread,
May 3, 2009, 8:57:31 PM5/3/09
to
On May 4, 7:05 am, Chris L Peterson <c...@alumni.caltech.edu> wrote:
> On Sun, 3 May 2009 15:50:18 -0700 (PDT), Hughes <eugenhug...@gmail.com>

I think the key is dynamic range. In astrophotography, dynamic
range is not important in resolving binary stars, planetary
details, and deep sky, hence the airy disc smearing into
neighboring pixels is not important. But in terrestrial
photography, isn't dynamic range affected when many pixels make
up one airy disc?? The dynamic range can become much reduced.
I hope someone can point to an analysis of the relationship
between dynamic range and the airy-disc-to-pixel ratio.
Basically, what Mr. Peterson seems to be saying is that the signal
(point sources) to noise (airy disc diffraction) ratio is large, so
it is best to be able to resolve two points even if their
diffraction rings overlap. But this can cost dynamic range.
The following picture has very bad dynamic range:

http://www.duliskovich.com/rubinar/Bridge%20%282341%20meters%29%20and%20house%20%284878%20meters%29.jpg

While this one is great:

http://www.flickr.com/photos/gps1/1356386350/sizes/l/

In the first picture, many pixels make up one airy disc;
in the second picture, one pixel makes up one airy disc.
Is the dynamic range better in the second than the first
because of this? Anyone?
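
One way to probe that question is a toy signal-to-noise model (the
photon count and read noise below are invented for illustration, and
only shot noise and read noise are modeled): spread a fixed amount of
light over more pixels, and the per-pixel SNR, and with it the usable
tonal range, drops.

    import math

    photons = 10000.0   # assumed photons in one resolution element
    read_noise = 10.0   # assumed read noise, electrons RMS per pixel

    for pixels_per_disc in (1, 4, 9, 25):
        per_pixel = photons / pixels_per_disc
        snr = per_pixel / math.sqrt(per_pixel + read_noise ** 2)
        print("%2d px per disc: %7.0f e-/px, SNR %5.1f"
              % (pixels_per_disc, per_pixel, snr))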

Hughes

Chris L Peterson

unread,
May 3, 2009, 9:10:47 PM5/3/09
to
On Sun, 3 May 2009 17:57:31 -0700 (PDT), Hughes <eugen...@gmail.com>
wrote:

>Basically what Mr. Peterson seems to be saying is that the signal
>(point sources) to noise (airy disc diffraction) ratio is large so
>it is best to be able to resolve two points even if their
>diffraction rings overlap.

I'm saying nothing of the sort.

Hughes

unread,
May 3, 2009, 10:05:56 PM5/3/09
to
On May 4, 9:10 am, Chris L Peterson <c...@alumni.caltech.edu> wrote:
> On Sun, 3 May 2009 17:57:31 -0700 (PDT), Hughes <eugenhug...@gmail.com>

> wrote:
>
> >Basically what Mr. Peterson seems to be saying is that the signal
> >(point sources) to noise (airy disc diffraction) ratio is large so
> >it is best to be able to resolve two points even if their
> >diffraction rings overlap.
>
> I'm saying nothing of the sort.
> _________________________________________________
>
> Chris L Peterson
> Cloudbait Observatory http://www.cloudbait.com

But you said that the resolution is much smaller than
the airy disc, so we should aim for a pixel scale
that is half the resolving power in order
to satisfy the Nyquist criterion. What this means is that
the signal (point source) to noise (diffraction discs)
ratio is large, so we should aim for resolution and forget
about the airy disc, as it is not important in the analysis
because resolution is the meat of it.
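
That "half the resolving power" condition can be turned into a rule
of thumb. A sketch, using the same Dawes-limit assumption as earlier
in the thread: set the pixel scale equal to half of 116/D and solve
for the focal ratio N = f/D.

    def critical_f_ratio(pixel_um):
        # 206265 * (p / 1000) / f = (116 / D) / 2
        # => f / D = 206265 * p / 58000
        return 206265.0 * pixel_um / 58000.0

    for p in (2.0, 4.7, 5.7, 9.0):
        print("%.1f um pixels -> critical sampling near f/%.0f"
              % (p, critical_f_ratio(p)))

So a 5.7 um pixel is critically sampled only around f/20; at f/10 it
undersamples the airy disc, and at faster focal ratios even more so.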

You may be right, but for targets where dynamic range is
important, the airy disc size relative to the pixel may need to
be taken into account. If maximum dynamic range is
preferred, then we can decrease the CCD pixel-scale
resolution. Isn't this what nature photography is all
about, getting the best dynamic range at the cost of
resolution?

Image analysis is ongoing. I hope the mystery of why the
Rubinar image has such poor contrast (or dynamic range) can be
solved soon.

Hughes

Hughes

unread,
May 3, 2009, 10:40:37 PM5/3/09
to

This can be proven or disproven in Photoshop.

If I open an image in Photoshop, say 14-bit skin tones or
some other subject with great dynamic range, how can I
smudge every 10 pixels into a circle all over the
12-megapixel image to simulate the airy disc blurring
(one disc for every 10 pixels)? Are there any tools for
this in Photoshop or another image-editing tool, anyone?
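
Outside Photoshop this is easy to script. A sketch assuming numpy,
scipy, and Pillow are installed; "input.png" and "output.png" are
placeholder file names, and a flat 10-pixel disc is a crude stand-in
for the real airy pattern:

    import numpy as np
    from PIL import Image
    from scipy import ndimage

    # Circular averaging kernel about 10 px across (stand-in airy disc)
    r = 5
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    disc = (x * x + y * y <= r * r).astype(float)
    disc /= disc.sum()

    img = np.asarray(Image.open("input.png").convert("L"), dtype=float)
    blurred = ndimage.convolve(img, disc, mode="nearest")
    Image.fromarray(blurred.clip(0, 255).astype(np.uint8)).save("output.png")

Comparing the input and output histograms then shows directly how
much small-scale modulation the disc wipes out.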

Hughes

Chris L Peterson

unread,
May 3, 2009, 10:55:57 PM5/3/09
to
On Sun, 3 May 2009 19:05:56 -0700 (PDT), Hughes <eugen...@gmail.com>
wrote:

>What this means is that
>the signal (point source) to noise (diffraction discs)

That is not remotely how signal and noise are defined.
