Future of the megapixel race

John Navas

Jun 24, 2009, 11:25:30 AM
Many have decried the "megapixel race" that has resulted in ever smaller
photosites, not just for compact digital cameras, but also for dSLR
cameras. I think this is not being borne out in real-world
performance -- images from current high megapixel sensors in both
compact digital cameras (e.g., Panasonic DMC-FZ28) and dSLR cameras are
unquestionably better than images from earlier comparable sensors with
lower megapixel counts (e.g., Panasonic DMC-FZ8).

The reasons are that sensors are better and that image quality has come
to be dominated by in camera processing, especially as faster and more
powerful processors have become available.

I personally see no reason not to increase the sensor resolution as long
as in camera processing keeps pace -- I've seen some interesting
(non-public) papers on how smaller photosites can be aggregated to
produce better results than larger photosites, in part because of Bayer
issues.

I think it's quite possible we might see (say) 16 MP sensors aggregated
down to (say) 8 MP output that produce better results than comparable
8 MP sensors, especially for everyday shooting.
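
As a rough illustration of the aggregation idea -- a minimal sketch
assuming plain 2x2 averaging, not whatever the (non-public) papers
actually propose:

  import numpy as np

  def bin_2x2(img):
      # Aggregate a sensor image to quarter resolution by averaging
      # each 2x2 block of photosites -- a toy stand-in for smarter
      # aggregation schemes.
      h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
      img = img[:h, :w].astype(np.float64)
      return (img[0::2, 0::2] + img[0::2, 1::2] +
              img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

Averaging four photosites cuts random noise roughly in half (noise
falls as 1/sqrt(N)), which is one reason a 16 MP capture binned to
8 MP can plausibly compete with a native 8 MP sensor.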

--
Best regards,
John (Panasonic DMC-FZ28, and several others)

Robert Spanjaard

Jun 24, 2009, 11:42:44 AM
On Wed, 24 Jun 2009 08:25:30 -0700, John Navas wrote:

> Many have decried the "megapixel race" that has resulted in ever smaller
> photosites, not just for compact digital cameras, but also for dSLR
> cameras. I think this is not being borne out in real-world
> performance -- images from current high megapixel sensors in both
> compact digital cameras (e.g., Panasonic DMC-FZ28) and dSLR cameras are
> unquestionably better than images from earlier comparable sensors with
> lower megapixel counts (e.g., Panasonic DMC-FZ8).
>
> The reasons are that sensors are better and that image quality has come
> to be dominated by in camera processing, especially as faster and more
> powerful processors have become available.
>
> I personally see no reason not to increase the sensor resolution as long
> as in camera processing keeps pace

You don't see any reason because there's no direct comparison. If you
build a 6 MP APS-C sensor with current technology, you can use much
larger and better photosites than in the days of 6 MP-DSLRs. And if you
use current processing techniques afterwards, the image will get even
better.

I'm not saying that less is better, but you can't claim that more is
better either. As I said, there's no comparison available.

--
Regards, Robert http://www.arumes.com

Fred McKenzie

Jun 24, 2009, 2:11:18 PM
In article <a929$4a424974$5469b618$24...@cache60.multikabel.net>,
Robert Spanjaard <spam...@arumes.com> wrote:

> You don't see any reason because there's no direct comparison. If you
> build a 6 MP APS-C sensor with current technology, you can use much
> larger and better photosites than in the days of 6 MP-DSLRs. And if you
> use current processing techniques afterwards, the image will get even
> better.
>
> I'm not saying that less is better, but you can't claim that more is
> better either. As I said, there's no comparison available.

Robert-

If you apply the same pixel density of the 6 MP APS-C sensor to a
24mm x 36mm sensor, you would have about 13.5 MP. Comparing same-size
prints, they should be better because less magnification is required.
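
The figure follows from the nominal 1.5x APS-C crop factor (an
assumption on my part; the exact APS-C dimensions aren't stated):

  crop = 1.5               # nominal APS-C (Nikon DX) crop factor
  mp_ff = 6.0 * crop ** 2  # same pixel density on a 24mm x 36mm sensor
  print(mp_ff)             # 13.5 (MP)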

Or, are you claiming that 6 MP APS-C is "good enough"? While that may
be true, it isn't the issue!

Fred

Robert Spanjaard

Jun 24, 2009, 2:32:24 PM
On Wed, 24 Jun 2009 14:11:18 -0400, Fred McKenzie wrote:

> In article <a929$4a424974$5469b618$24...@cache60.multikabel.net>,


>> I'm not saying that less is better, but you can't claim that more is
>> better either. As I said, there's no comparison available.
>

> If you apply the same pixel density of the 6 MP APS-C sensor to a
> 24mm x 36mm sensor, you would have about 13.5 MP. Comparing same-size
> prints, they should be better because less magnification is required.

_That's_ not the issue. John and I were talking about differing numbers
of pixels on a single sensor size.

Robert Sneddon

Jun 24, 2009, 7:09:33 PM
In message <nsg445hsf9nnitng6...@4ax.com>, John Navas
<spamf...@navasgroup.com> writes
[Clip]

>images from current high megapixel sensors in both
>compact digital cameras (e.g., Panasonic DMC-FZ28) and dSLR cameras are
>unquestionably better than images from earlier comparable sensors with
>lower megapixel counts (e.g., Panasonic DMC-FZ8).

It's a truism that newer components tend to be better than older
components; if they were worse, there would be no point in designing
and building new ones.


>
>The reasons are that sensors are better and that image quality has come
>to be dominated by in camera processing, especially as faster and more
>powerful processors have become available.
>
>I personally see no reason not to increase the sensor resolution as long
>as in camera processing keeps pace -- I've seen some interesting
>(non-public) papers on how smaller photosites can be aggregated to
>produce better results than larger photosites, in part because of Bayer
>issues.

I'd like to see wider dynamic range for existing smaller sensor devices
rather than specifically increasing pixel density. One way to do this
might be to increase the number of filter channels -- a six-channel
filter, say, instead of the traditional 3-channel RGB filtering process.
Printing technology went this way some years back with the Hexachrome
process replacing older 4-colour CMYK print lines, and some high-gamut
LCD monitors and displays are starting to appear on the market with
five-colour filters rather than the conventional three-channel RGB
system that is a hangover from the old colour TV set CRT.

Increasing useful pixel density on existing small-surface sensors will
require improvements in lens quality and there isn't a lot of slack in
that process these days -- even cheap lenses are being built to function
at close to their theoretical optimum performance.
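
To put rough numbers on that "not a lot of slack" point -- a sketch
with illustrative values (the 2 um pixel pitch is an assumption, not
a figure from this post):

  def airy_diameter_um(f_number, wavelength_nm=550.0):
      # Diameter of the Airy disc out to its first minimum, in
      # microns: 2.44 * lambda * N.
      return 2.44 * f_number * wavelength_nm / 1000.0

  pitch_um = 2.0  # roughly the pitch of a 10 MP small-sensor compact
  for f in (2.8, 4.0, 5.6, 8.0):
      print(f, round(airy_diameter_um(f), 1))
  # At f/5.6 the Airy disc is ~7.5 um across -- nearly four such
  # pixels wide -- so extra pixel density stops buying resolution.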
--
To reply, my gmail address is nojay1 Robert Sneddon

trouble

Jun 24, 2009, 7:40:07 PM
The battle goes on.
I can guarantee that no one can distinguish an 8.5x11 inch print made from a
D70, a D200 or a D300 of an image made under identical circumstances.
Observers may distinguish subtle differences, but that is not the same thing
as being able to tell which sensor made which image.
One factor too many of you forget is that the inkjet printing process, both
the software driver and the physical properties of the medium, is the great
leveler of print image quality. In fact the particular printer type and
paper used will have more of an effect on the final printed image than the
dSLR image sensors listed.
There is no denying that viewing magnified sections of an image on a
computer monitor will easily distinguish specific sensor characteristics.
However that act is in no way comparable to viewing a finished ink jet
print.
For most advanced amateur purposes the megapixel race is illusory and
diverts attention from the persistent Achilles heel of digital image
capture: restricted dynamic range.

ribbit

Jun 24, 2009, 8:03:25 PM

You are talking to the deaf, mate.
Not long ago a wedding photography client complained to me that the
pictures from my Fuji S5 Pro showed all the skin blemishes on his new
wife's arms but his $150 Olympus P&S didn't.

He thought the lack of detail from his P&S was an Olympus feature! How
many proponents of P&S being as good as a DSLR have the same belief?

I used Noise Ninja at high levels to remove all the detail from the
photo. He asked me why I hadn't done that in the first place.

The notion that you cannot tell the difference between a shot from a P&S
and one from a DSLR with good quality glass is silly. Of course you
can... Unless you only want to look at outlines.

--
With age comes a new ability ... multi-tasking.
I can laugh, cough, sneeze, fart and pee all at the same time!

Scott W

Jun 24, 2009, 8:08:19 PM
On Jun 24, 1:40 pm, "trouble" <fac_...@hotmail.com> wrote:
> The battle goes on.
> I can guarantee that no one can distinguish an 8.5x11 inch print made from a
> D70, a D200 or a D300 of an image made under identical circumstances.

It depends on the subject matter. Some subjects are going to produce
a large amount of moiré pattern with the D70 that would not show up on
the D300. In this case even a 4x6 print can show the difference.

Scott

John Navas

Jun 24, 2009, 8:23:05 PM
On Thu, 25 Jun 2009 10:03:25 +1000, ribbit <rib...@news.group> wrote in
<7aft6gF...@mid.individual.net>:

>The notion that you cannot tell the difference between a shot from a P&S
>and one from a DSLR with good quality glass is silly. Of course you
>can... Unless you only want to look at outlines.

Nope. Sorry.

Ron Hunter

Jun 24, 2009, 8:49:59 PM
Sure, but at some point the sensor sites will become so small that the
signal-to-noise ratio approaches 0, which would be a 'point of
diminishing returns', literally. There are physical realities in this
case, photons and electrons, and they have finite sizes. Where is the
limit? Who can say, but commercial feasibility is what governs this
issue, not the physical/electronic limitations. We might make sensors
that have more pixels, but would the cost rise faster than the improved
quality?

Rich

Jun 24, 2009, 9:04:11 PM
On Jun 24, 11:25 am, John Navas <spamfilt...@navasgroup.com> wrote:
> Many have decried the "megapixel race" that has resulted in ever smaller
> photosites, not just for compact digital cameras, but also for dSLR
> cameras. I think this is not being borne out in real-world
> performance -- images from current high megapixel sensors in both
> compact digital cameras (e.g., Panasonic DMC-FZ28) and dSLR cameras are
> unquestionably better than images from earlier comparable sensors with
> lower megapixel counts (e.g., Panasonic DMC-FZ8).
>
> The reasons are that sensors are better and that image quality has come
> to be dominated by in camera processing, especially as faster and more
> powerful processors have become available.
>
> I personally see no reason not to increase the sensor resolution as long
> as in camera processing keeps pace -

Lens quality = cost. Processing can't create detail out of thin air;
it can only refine what is there.

Me

Jun 24, 2009, 10:46:42 PM
I tend to agree. I can see the OP's point, but from what I've seen of
resolution with APS-C dSLRs moving from ~12 MP to ~15 MP, there's
considerable overhead (file size, in-camera processing affecting battery
life, etc.) for at best negligible benefit -- sometimes not even a
measurable gain in resolution.
If there were visible "issues" such as aliasing or moire / Bayer
demosaicing artifacts at existing pixel densities, then it might be
"worth it". But I haven't seen such problems since using a D70 and
Canon 5D (Mk 1).
Unless there were suddenly much better lenses available, there
seems to be little to gain. Perhaps the only clue we've got as to what
could be achieved using better technology at the same pixel density was
when Nikon used "only" 12 MP in the D3. Was it significantly better than
the Canon 5D sensor? I think so.

David J Taylor

Jun 25, 2009, 2:55:04 AM
John Navas wrote:
> On Thu, 25 Jun 2009 10:03:25 +1000, ribbit <rib...@news.group> wrote
> in <7aft6gF...@mid.individual.net>:
>
>> The notion that you cannot tell the difference between a shot from a
>> P&S and one from a DSLR with good quality glass is silly. Of course
>> you can... Unless you only want to look at outlines.
>
> Nope. Sorry.

I've tried the test with the cameras I have, John, and the difference was
quite obvious.

David

Martin Brown

Jun 25, 2009, 3:59:15 AM

It depends on the pair of cameras. I would reckon some of the best 3x
zoom P&S lenses could give the stock zoom lens on cheaper DSLRs a run
for their money. All bets are off for good quality prime glass though.

And the guy does have a point -- showing up unwanted fine detail on skin
and out-of-place single hairs is characteristic of what the sharper
lenses show.

Big advantage of a compact P&S is that it fits in a pocket so you can
carry it anywhere. DSLRs are too big to carry around routinely (as were
early P&S digicams).

I expect the megapixel race in P&S will continue onwards and upwards in
steps of about 2 Mpixels per year. Just enough to keep dumb
consumers buying a new one to get that "extra" resolution.

Regards,
Martin Brown

Bob Williams

Jun 25, 2009, 4:35:44 AM

Is it visible on an 8x10 print?
I did a controlled test with a 4 MP Panasonic FZ15 (P&S) with a 12X Leica
lens vs. an 8 MP Canon Rebel XT DSLR with a kit lens.
I printed both images at 8x10 and the Panny/Leica combo outperformed the
DSLR Canon/Canon combo.
I repeated the test with a much more expensive Canon Lens (ca. $500) and
the prints were comparable with a slight margin still in favor of the
Panny/Leica. The thing that worked against the Canon Combo was the
shallower DOF. The subject was a child's stuffed cat.
Lots of very fine detail in hair and whiskers.
The Canon Combo could not get the whole cat in sharp focus.
To some people, that is not an issue. But to me it is.
Bob Williams
P.S.
I know that you have (had?) one or more Panasonic FZ series cameras.
Was one of those used in your comparison with your DSLR?
Bob

David J Taylor

Jun 25, 2009, 4:51:13 AM
Bob Williams wrote:
> David J Taylor wrote:
>> John Navas wrote:
>>> On Thu, 25 Jun 2009 10:03:25 +1000, ribbit <rib...@news.group> wrote
>>> in <7aft6gF...@mid.individual.net>:
>>>
>>>> The notion that you cannot tell the difference between a shot from
>>>> a P&S and one from a DSLR with good quality glass is silly. Of
>>>> course you can... Unless you only want to look at outlines.
>>>
>>> Nope. Sorry.
>>
>> I've tried the test with the cameras I have, John, and the difference
>> was quite obvious.
>>
>> David
>
> Is it visible on an 8x10 print?

I don't print. The difference was obvious filling a 20-inch 1600 x 1200
pixel display.

> I did a controlled test with a 4 MP Panasonic FZ15 (P&S) with a 12X
> Leica lens vs. an 8 MP Canon Rebel XT DSLR with a kit lens.
> I printed both images at 8x10 and the Panny/Leica combo outperformed
> the DSLR Canon/Canon combo.

The Canon kit lens had a very poor reputation, although I gather that the
newer 18-55mm IS is a better quality lens.

> I repeated the test with a much more expensive Canon Lens (ca. $500)
> and the prints were comparable with a slight margin still in favor of
> the Panny/Leica. The thing that worked against the Canon Combo was
> the shallower DOF. The subject was a child's stuffed cat.
> Lots of very fine detail in hair and whiskers.
> The Canon Combo could not get the whole cat in sharp focus.
> To some people, that is not an issue. But to me it is.
> Bob Williams
> P.S.
> I know that you have (had?) one or more Panasonic FZ series cameras.
> Was one of those used in your comparison with your DSLR?
> Bob

My comparison was of a sunlit, distant street scene with the 7MP compact
TZ3 and the 6MP Nikon D40 with the "kit" 18-55mm lens.

I can appreciate your point about the shallower depth-of-field, and it was
actually just that advantage of the DSLR which made my wife change from
her Panasonic FZ20 to a Nikon D60 with the 18-200mm VR lens. For a
comparison keeping the DoF the same, you would need to stop down the DSLR
more than the P&S.

One other thing which some people may not realise when comparing the
cameras is the settings for picture sharpness and colour saturation -- by
default these may be different on every camera you use.

I do still have my Panasonic FZ5, and I have been very pleased with the
images from both the FZ5 and the FZ20. Operationally, the DSLR wins for
me as it's faster to use (zooming by twisting a single ring versus zooming
with two push-buttons) and still produces good images at higher ISOs,
where the compact is struggling. I take the TZ3 for movies, when a spare
camera is needed, or where its small size is more convenient.

Cheers,
David

Bob Williams

Jun 25, 2009, 7:19:33 AM

Your points and preferences are well taken.
I am beginning to feel that lens quality is probably at least as
important to image quality as MP and sensor size (within limits of
course). In strong sunlight, a good Leica lens with a small (1/2.5",
4 MP) sensor can yield a better image than a mediocre lens with a larger
(APS-C, 8 MP) sensor.
Now, these tests were done in bright sunlight where the small sensors
can still collect a full well of photons and produce full color images.
I suspect that if the test were done in deep shade or in late evening
under dim light, the results might be quite different and favor the DSLR.
Bob

Alfred Molon

Jun 25, 2009, 1:32:21 PM
I see your point, but before further increasing the pixel counts they
should make full colour pixels. That alone would boost the effective
resolution substantially.
--

Alfred Molon
------------------------------
Olympus 50X0, 8080, E3X0, E4X0, E5X0 and E3 forum at
http://tech.groups.yahoo.com/group/MyOlympus/
http://myolympus.org/ photo sharing site

bugbear

Jun 25, 2009, 8:59:37 AM
Robert Sneddon wrote:
> In message <nsg445hsf9nnitng6...@4ax.com>, John Navas
> <spamf...@navasgroup.com> writes
> [Clip]
>> images from current high megapixel sensors in both
>> compact digital cameras (e.g., Panasonic DMC-FZ28) and dSLR cameras are
>> unquestionably better than images from earlier comparable sensors with
>> lower megapixel counts (e.g., Panasonic DMC-FZ8).
>
> It's a truism that newer components tend to be better than older
> components; if they were worse, there would be no point in designing
> and building new ones.

You haven't seen guitar players clawing over each other
to get vintage tone capacitors!

BugBear

bugbear

Jun 25, 2009, 9:00:31 AM
ribbit wrote:
> You are talking to the deaf, mate.
> Not long ago a wedding photography client complained to me that the
> pictures from my Fuji S5 Pro showed all the skin blemishes on his new
> wife's arms but his $150 Olympus P&S didn't.
>
> He thought the lack of detail from his P&S was an Olympus feature! How
> many proponents of P&S being as good as a DSLR have the same belief?

Built-in "glamour retouch"! Splendid!

BugBear

Ofnuts

Jun 25, 2009, 9:50:44 AM

They will soon insist on getting their electricity from coal-fired power plants.

--
Bertrand

nospam

Jun 25, 2009, 9:53:33 AM
In article <MPG.24add311c...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> I see your point, but before further increasing the pixel counts they
> should make full colour pixels. That alone would boost the effective
> resolution substantially.

actually the resolution increase would be much less than 'substantial',
the sensor will have more noise and, at least right now, be more
difficult to manufacture.

Don Stauffer

Jun 25, 2009, 10:15:47 AM

At some point the number of photons/photoelectrons captured during a
typical exposure will become so low that photon noise WILL become an
issue. At any reasonable exposure there are only so many photons
striking each square micron of detector surface area. We may not be
there yet, in spite of warnings, but we cannot go TOO small without
running into this problem. We cannot cut pixel size indefinitely
without ending up with cameras with a very low ISO sensitivity.
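
A back-of-envelope sketch of that limit (the full-well figure is
illustrative, not from this post): photon arrivals follow Poisson
statistics, so noise is sqrt(N) and SNR is N/sqrt(N) = sqrt(N).

  import math

  full_well = 20000  # electrons; a plausible small-pixel full well
  for fraction in (1.0, 0.1, 0.01):
      n = full_well * fraction  # photoelectrons actually captured
      print(fraction, round(math.sqrt(n), 1))  # SNR = sqrt(N)
  # 100x fewer photons -> 10x worse SNR: shrinking photosites walks
  # straight into this wall.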

bugbear

Jun 25, 2009, 11:00:34 AM

Remember the fad over the "Lomo Effect", and reproducing it in Gimp/Photoshop?!

BugBear

bugbear

Jun 25, 2009, 11:01:53 AM
Don Stauffer wrote:
> At some point the number of photons/photoelectrons captured during a
> typical exposure will become so low that photon noise WILL become an
> issue. At any reasonable exposure there are only so many photons
> striking each square micron of detector surface area. We may not be
> there yet, in spite of warnings, but we cannot go TOO small without
> running into this problem.

Can any passing astronomers comment on this - I strongly
suspect photon calculations are meat and drink in that
sphere.

BugBear

John Navas

Jun 25, 2009, 11:13:48 AM
On Thu, 25 Jun 2009 06:55:04 GMT, "David J Taylor"
<david-...@blueyonder.not-this-part.nor-this.co.uk.invalid> wrote in
<cbF0m.47241$OO7....@text.news.virginmedia.com>:

Then, and with all due respect, you're pixel peeping, or need to learn
how to properly use your compact camera, or need a better compact
camera.

I do such comparisons frequently, and both prints and viewed images from
my compact camera are often better than comparable pictures taken with a
dSLR camera.

To be clear, I'm talking real world use of real world images, not
"ultimate quality" under controlled conditions.

At a yacht club after a regatta last year I made my usual offer to add
my images to the projector showing pro dSLR images of the event. The
pro objected. In the past too many people had commented on how my
images were better.

A camera is just a tool, and a great photographer with a good camera
will tend to produce better images than a good photographer with a great
camera.

"The best camera is the one you have with you."

"The single most important component of a camera is the twelve inches
behind it." ~Ansel Adams

"A photograph is usually looked at - seldom looked into." ~Ansel Adams

"There are no rules for good photographs, there are only good
photographs." ~Ansel Adams

Your Camera Doesn't Matter, by Ken Rockwell
<http://www.kenrockwell.com/tech/notcamera.htm>

"Buying a Nikon doesn't make you a photographer. It makes you a Nikon
owner." ~Author Unknown

John Navas

Jun 25, 2009, 11:24:38 AM
On Thu, 25 Jun 2009 04:19:33 -0700, Bob Williams <mytbob...@cox.net>
wrote in <93J0m.51989$lB7....@newsfe19.iad>:

>I am beginning to feel that lens quality is probably at least as
>important to image quality as MP and sensor size (within limits of
>course).
>
>In strong sunlight, a good Leica lens with a small (1/2.5",
>4 MP) sensor can yield a better image than a mediocre lens with a larger
>(APS-C, 8 MP) sensor.


From a post I made back in 2007:

> In terms of resolution, the DMC-FZ8 Leica super-zoom actually surpasses
> the fixed prime Canon EF 50 mm f/1.4 on the EOS D60, 10D, and 300D, as
> well as the fixed prime Nikkor 50 mm f/1.4 on the Nikon D100, and fixed
> prime Nikkor 50 mm f/1.8 on the Nikon D50, D70s, and D40:
> http://www.dpreview.com/reviews/panasonicfz8/page16.asp
> http://www.dpreview.com/reviews/CanonEOS10D/page22.asp
> http://www.dpreview.com/reviews/NikonD100/page20.asp
> http://www.dpreview.com/reviews/nikond50/page25.asp
> http://www.dpreview.com/reviews/nikond40/page24.asp
>
> When typical comparable images (lens and exposure) are viewed as
> intended in the real world (e.g., as 8x10 prints), there is no
> meaningful difference, and the shot from the DMC-FZ8 will frequently be
> the better because of handling advantages.

John Navas

Jun 25, 2009, 11:27:21 AM
On Thu, 25 Jun 2009 19:32:21 +0200, Alfred Molon
<alfred...@yahoo.com> wrote in
<MPG.24add311c...@news.supernews.com>:

>I see your point, but before further increasing the pixel counts they
>should make full colour pixels. That alone would boost the effective
>resolution substantially.

When pixel count is increased, chrominance (color) resolution increases
as well, which is part of why the 16 MP sensor might well produce a
better 8 MP image than an 8 MP sensor.

David J Taylor

Jun 25, 2009, 12:29:35 PM
John Navas wrote:
> On Thu, 25 Jun 2009 06:55:04 GMT, "David J Taylor"
> <david-...@blueyonder.not-this-part.nor-this.co.uk.invalid> wrote
> in <cbF0m.47241$OO7....@text.news.virginmedia.com>:
>
>> John Navas wrote:
>>> On Thu, 25 Jun 2009 10:03:25 +1000, ribbit <rib...@news.group> wrote
>>> in <7aft6gF...@mid.individual.net>:
>>>
>>>> The notion that you cannot tell the difference between a shot from
>>>> a P&S and one from a DSLR with good quality glass is silly. Of
>>>> course you can... Unless you only want to look at outlines.
>>>
>>> Nope. Sorry.
>>
>> I've tried the test with the cameras I have, John, and the
>> difference was quite obvious.
>
> Then, and with all due respect, you're pixel peeping, or need to learn
> how to properly use your compact camera, or need a better compact
> camera.

No, looking at the full image on a 1.9MP 20-inch display, as I already
said elsewhere. Specifically /not/ pixel peeping. The compact camera was
correctly used. A "better" compact camera than the Panasonic TZ3 (of the
same era) would no longer be "compact".

David

C J Campbell

Jun 25, 2009, 1:01:52 PM
On 2009-06-24 08:25:30 -0700, John Navas <spamf...@navasgroup.com> said:

> The reasons are that sensors are better and that image quality has come
> to be dominated by in camera processing, especially as faster and more
> powerful processors have become available.
>

Uh, no. In-camera processing cannot recover information that was never
recorded or which was destroyed by the sensor. You don't get something
from nothing. It is not reasonable to say that a system that first
damages a photo and then makes cosmetic adjustments to "fix" it is
"better."

However, even RAW files have been showing some improvement. These are
not processed in-camera. The improvements have come from better
microlenses and other physical improvements to the sensors. But those
improvements are in spite of the increase in the number of photosites,
not because of them.

--
Waddling Eagle
World Famous Flight Instructor

John Navas

Jun 25, 2009, 1:20:15 PM
On Thu, 25 Jun 2009 10:01:52 -0700, C J Campbell
<christophercam...@hotmail.com> wrote in
<2009062510015216807-christophercampbellremovethis@hotmailcom>:

>On 2009-06-24 08:25:30 -0700, John Navas <spamf...@navasgroup.com> said:
>
>> The reasons are that sensors are better and that image quality has come
>> to be dominated by in camera processing, especially as faster and more
>> powerful processors have become available.
>
>Uh, no. In-camera processing cannot recover information that was never
>recorded or which was destroyed by the sensor.

The reality is actually quite different, a matter of extracting signal
from noise, at which camera processors (and algorithms) are getting
better and better.

>You don't get something
>from nothing.

True -- it takes more computing power and more sophisticated algorithms.

>It is not reasonable to say that a system that first
>damages a photo and then makes cosmetic adjustments to "fix" it is
>"better."

Again, the reality is actually quite different.
Read up on digital signal processing.

Alfred Molon

Jun 26, 2009, 3:14:23 AM
In article <250620090953333025%nos...@nospam.invalid>, nospam says...

> actually the resolution increase would be much less than 'substantial',

Substantial because right now 2/3 of the colour information is just
guessed. But do we need to reopen this discussion?

> the sensor will have more noise and, at least right now, be more
> difficult to manufacture.

You can't make such blanket statements. That depends of course on the
implementation.

nospam

Jun 25, 2009, 9:24:51 PM
In article <MPG.24ae937c1...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> > actually the resolution increase would be much less than 'substantial',
>
> Substantial because right now 2/3 of the colour information is just
> guessed.

it's not guessed, it's precisely calculated, and human vision is not
that sensitive to colour resolution so you can't see the difference
anyway.

> But do we need to reopen this discussion?

no

> > the sensor will have more noise and, at least right now, be more
> > difficult to manufacture.
>
> You can't make such blanket statements. That depends of course on the
> implementation.

it does depend on the implementation, but absent a breakthrough in
physics or manufacturing, it remains true. there's no free lunch.

John Navas

Jun 26, 2009, 12:17:38 AM
On Thu, 25 Jun 2009 21:24:51 -0400, nospam <nos...@nospam.invalid> wrote
in <250620092124514719%nos...@nospam.invalid>:

Actually it doesn't, as ample objective evidence makes clear.
Read my original post more carefully and with a more open mind.

>there's no free lunch.

Red herring.

nospam

Jun 26, 2009, 12:49:36 AM
In article <usi845t6hufndn7lb...@4ax.com>, John Navas
<spamf...@navasgroup.com> wrote:

> >> You can't make such blanket statements. That depends of course on the
> >> implementation.
> >
> >it does depend on the implementation, but absent a breakthrough in
> >physics or manufacturing, it remains true.
>
> Actually it doesn't, as ample objective evidence makes clear.

such as what?

> Read my original post more carefully and with a more open mind.

your post doesn't say anything about full colour sensors.

making a full colour sensor, at least with today's technology, requires
taking a full pixel and dividing it up into 3 (or more) parts. foveon
does it in layers and nikon's patent uses dichroic mirrors and
sub-pixels within a pixel. by dividing up the pixel, noise goes up,
that's just basic physics. maybe one day someone will come up with a
different technology but that's how it is today.

plus, nikon's patent looks like a bitch to manufacture, but maybe they
have solved that problem. nevertheless, it's vapor at the moment.
foveon is also expensive to manufacture.

that might change one day, but as of right now, bayer is the most cost
effective solution and it works quite well and far better than the full
colour pixel advocates believe.

> >there's no free lunch.
>
> Red herring.

uh, no.

Alfred Molon

Jun 26, 2009, 1:53:26 PM
In article <250620092124514719%nos...@nospam.invalid>, nospam says...

> it's not guessed, it's precisely calculated,

It's *wrongly* calculated. It's called "interpolation".

> and human vision is not
> that sensitive to colour resolution so you can't see the difference
> anyway.

Nonsense. Just enlarge the image and you will see the errors.

> it does depend on the implementation, but absent a breakthrough in
> physics or manufacturing, it remains true. there's no free lunch.

There is no law of physics stating that a full colour sensor must have
more noise than a Bayer sensor.

Tsk Tsk

Jun 26, 2009, 8:44:55 AM
On Thu, 25 Jun 2009 08:24:38 -0700, John Navas <spamf...@navasgroup.com>
wrote:

Now stop that!

You know what happens to psychotics when they are forced to face reality.
This is just going to upset their imaginary pretend-photographer's
DSLR-TROLL world again. The least you could do is have them submit a
scanned copy of their current anti-psychotic prescriptions or something, to
be sure they can handle this kind of information. Show them reality in a
responsible manner. This is just reckless of you.

Next time start out more slowly. Maybe with a post like:

"In three days I'm going to show you something that goes against every
DSLR-TROLL post you ever read in your twisted little virtual-reality world.
This will give all of you adequate time to prepare for it. At least consult
your psychiatrist and have them increase your maintenance dosages before
then."

Something like that will at least stop them from cutting and purging. The
way you did it here is dangerous for all of them. No warning, no nothing.
Just a slap upside their demented little heads, right out of nowhere.

WHAM! REALITY! DEAL WITH IT!

SEE? How very unkind and cruel of you. You could almost be reported to the
A.S.P.C.A. for a reckless stunt like this.

But it is nice of you to not throw reality in their face more than once
every 2 years. This is going to keep them agitated that long again. It
takes them at least that long to shrug off any nasty brush with reality.


Martin Brown

Jun 26, 2009, 9:04:30 AM
Alfred Molon wrote:
> In article <250620092124514719%nos...@nospam.invalid>, nospam says...
>
>> it's not guessed, it's precisely calculated,
>
> It's *wrongly* calculated. It's called "interpolation".

Interpolation of the Bayer array is generally well behaved except for
pathological test cases designed to break it. And even then the modern
heuristics for processing the sampled chroma data do very well.
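
For the curious, the interpolation in question is, at its simplest,
a bilinear demosaic like the sketch below (a toy version; real
converters use the much smarter, edge-aware heuristics mentioned
above):

  import numpy as np
  from scipy.ndimage import convolve

  def demosaic_bilinear(raw):
      # Bilinear demosaic of a Bayer mosaic: every missing colour
      # sample becomes the average of its nearest neighbours.
      h, w = raw.shape
      r, g, b = (np.zeros((h, w)) for _ in range(3))
      r[0::2, 0::2] = raw[0::2, 0::2]   # RGGB layout assumed
      g[0::2, 1::2] = raw[0::2, 1::2]
      g[1::2, 0::2] = raw[1::2, 0::2]
      b[1::2, 1::2] = raw[1::2, 1::2]
      k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
      k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
      return np.dstack([convolve(r, k_rb), convolve(g, k_g),
                        convolve(b, k_rb)])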

Most real photographic scenes do not push the envelope at all and Bayer
mask images are effectively indistinguishable from Foveon ones.

Provided that the luminance is OK and adjacent pixels are the right
colour, the eye tolerates some colour error in a pixel remarkably well.

If you think your eyes are that reliable at judging colour accurately
you may care to look at the following powerful optical illusion:
http://blogs.discovermagazine.com/badastronomy/2009/06/24/the-blue-and-the-green/


>
>> and human vision is not
>> that sensitive to colour resolution so you can't see the difference
>> anyway.
>
> Nonsense. Just enlarge the image and you will see the errors.

You are tilting at windmills. The loss of chroma information through
subsampling has only minor deleterious effects. Foveon is a cute
technology but it solves a non-problem. The human eye has a higher
resolution for luminance than it does for chrominance.


>
>> it does depend on the implementation, but absent a breakthrough in
>> physics or manufacturing, it remains true. there's no free lunch.
>
> There is no law of physics stating that a full colour sensor must have
> more noise than a Bayer sensor.

But it must contain at least 2N more active sensor sites and arrange to
filter the light into at least RGB.

Regards,
Martin Brown


nospam

Jun 26, 2009, 10:24:55 AM
In article <MPG.24af29553...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> > it's not guessed, it's precisely calculated,
>
> It's *wrongly* calculated. It's called "interpolation".

interpolation does not mean the result is always wrong.

> > and human vision is not
> > that sensitive to colour resolution so you can't see the difference
> > anyway.
>
> Nonsense. Just enlarge the image and you will see the errors.

if you pixel peep you can see problems no matter what camera was used.
the fact remains that human vision can't see colour detail anywhere
near as well as luminance detail, which is why the bayer chip works as
well as it does. blur just the colour channels in photoshop and you
won't be able to see a difference until you use very high levels of
blur.

> > it does depend on the implementation, but absent a breakthrough in
> > physics or manufacturing, it remains true. there's no free lunch.
>
> There is no law of physics stating that a full colour sensor must have
> more noise than a Bayer sensor.

there is if the pixel is divided up into parts, since each part will
have a lower s/n ratio. and with foveon, the conversion from the three
layers (not true rgb) into rgb also adds noise. everything is a
tradeoff.

Bob Larter

Jun 27, 2009, 2:56:39 AM

Don't get me started on audiophools...
<http://grumpyoldarts.com/2009/04/18/audiophools/>

--
W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est
---^----^---------------------------------------------------------------

nospam

Jun 27, 2009, 3:10:55 AM
In article <4a45c2a7$1...@dnews.tpgi.com.au>, Bob Larter
<bobby...@gmail.com> wrote:

> Don't get me started on audiophools...
> <http://grumpyoldarts.com/2009/04/18/audiophools/>

they're a hoot. how about a 770 pound turntable that uses bullet-proof
wood, for only $150k:
<http://www.needledoctor.com/Clearaudio-Statement-Turntable>

and don't cheap out on the needle:
<http://www.needledoctor.com/Clearaudio-Goldfinger-Phono-Cartridge>

Bob Larter

Jun 27, 2009, 3:17:31 AM
David J Taylor wrote:
> Bob Williams wrote:

>> David J Taylor wrote:
>>> John Navas wrote:
>>>> On Thu, 25 Jun 2009 10:03:25 +1000, ribbit <rib...@news.group> wrote
>>>> in <7aft6gF...@mid.individual.net>:
>>>>
>>>>> The notion that you cannot tell the difference between a shot from
>>>>> a P&S and one from a DSLR with good quality glass is silly. Of
>>>>> course you can... Unless you only want to look at outlines.
>>>>
>>>> Nope. Sorry.
>>>
>>> I've tried the test with the cameras I have, John, and the difference
>>> was quite obvious.
>>>
>>> David
>>
>> Is it visible on an 8x10 print?
>
> I don't print. The difference was obvious filling a 20-inch 1600 x 1200
> pixel display.
>
>> I did a controlled test with a 4MP Panasonic FZ15 (P/S) with a 12X
>> Leica lens VS a 8 MP Canon Rebel XT DSLR with a kit lens.
>> I printed both images at 8x10 and the Panny/Leica combo outperformed
>> the DSLR Canon/Canon combo.
>
> The Canon kit lens had a very poor reputation, although I gather that
> the newer 18-55mm IS is a better quality lens.

Well, if you shoot with a consumer zoom on a DSLR you might as well be
shooting with a P&S anyway.

Bob Larter

Jun 27, 2009, 3:23:12 AM

Mmm... Smeared detail. Yum!

Bob Larter

Jun 27, 2009, 3:31:21 AM
Alfred Molon wrote:
> I see your point, but before further increasing the pixel counts they
> should make full colour pixels. That alone would boost the effective
> resolution substantially.

In theory. The Foveon sensor falls short in practice.

Bob Larter

Jun 27, 2009, 3:32:33 AM
Alfred Molon wrote:
> In article <250620090953333025%nos...@nospam.invalid>, nospam says...
>
>> actually the resolution increase would be much less than 'substantial',
>
> Substantial because right now 2/3 of the colour information is just
> guessed.

"Interpolated" != "guessed".

> But do we need to reopen this discussion?

I hope not.

Bob Larter

Jun 27, 2009, 3:37:35 AM
nospam wrote:
> In article <MPG.24af29553...@news.supernews.com>, Alfred
> Molon <alfred...@yahoo.com> wrote:
>
>>> it's not guessed, it's precisely calculated,
>> It's *wrongly* calculated. It's called "interpolation".
>
> interpolation does not mean the result is always wrong.
>
>>> and human vision is not
>>> that sensitive to colour resolution so you can't see the difference
>>> anyway.
>> Nonsense. Just enlarge the image and you will see the errors.
>
> if you pixel peep you can see problems no matter what camera was used.
> the fact remains that human vision can't see colour detail anywhere
> near as well as luminance detail, which is why the bayer chip works as
> well as it does. blur just the colour channels in photoshop and you
> won't be able to see a difference until you use very high levels of
> blur.

Indeed. If anyone wants to try it out, you can do it in Photoshop by
converting the image to LAB, then blurring the a & b channels.
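
A scriptable version of the same experiment (a sketch using
scikit-image and SciPy; the sample image and sigma are arbitrary
choices):

  from skimage import color, data
  from scipy.ndimage import gaussian_filter

  rgb = data.astronaut() / 255.0  # any RGB image scaled to [0, 1]
  lab = color.rgb2lab(rgb)

  blurred = lab.copy()
  for ch in (1, 2):               # a & b (chroma) channels only
      blurred[..., ch] = gaussian_filter(lab[..., ch], sigma=5)

  out = color.lab2rgb(blurred)
  # 'out' is hard to tell apart from 'rgb'; blur channel 0 (L,
  # luminance) with the same sigma and the softening is obvious.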

Alfred Molon

Jun 27, 2009, 9:55:13 AM
In article <260620091024553082%nos...@nospam.invalid>, nospam says...

> interpolation does not mean the result is always wrong.

It's a guess, i.e. it could be right or wrong.

> > There is no law of physics stating that a full colour sensor must have
> > more noise than a Bayer sensor.
>
> there is if the pixel is divided up into parts, since each part will
> have a lower s/n ratio. and with foveon, the conversion from the three
> layers (not true rgb) into rgb also adds noise. everything is a
> tradeoff.

You are making assumptions about the implementation. Consider the
following example:

- monochrome sensor
- place RGB colour filters on it arranged in a Bayer pattern; take a
photo
- now place a rotating wheel with three red, green and blue windows.
Take three images, a green, a red and a blue one. Combine them.

The Bayer image has the same noise levels and the same dynamic range as
the full-colour image.

I'm not advocating this type of solution, just mentioning it to make
it clear that sensor performance depends on the implementation, and this
need not be the Foveon one. Some company is experimenting with three
transparent stacked colour-sensitive layers, to give another example.

Alfred Molon

Jun 27, 2009, 9:55:15 AM
In article <zH31m.2601$dz7...@newsfe04.iad>, Martin Brown says...


> Interpolation of the Bayer array is generally well behaved except for
> pathological test cases designed to break it. And even then the modern
> heuristics for processing the sampled chroma data do very well.

Lots of "pathological" cases in real world scenes. Green leaves against
a blue sky, urban scenes, just to give a couple of examples. Colour
changes between adjacent pixels are quite common. There are even colour
changes within a pixel, which no sensor is able to capture and just
averages out.

> You are tilting at windmills. The loss of chroma information through
> subsampling has only minor deleterious effects. Foveon is a cute
> technology but it solves a non-problem. The human eye has a higher
> resolution for luminance than it does for chrominance.

The human eye is not the measure of all things. You want to know if the
sensor has a certain resolution or not. If it is unable to capture
changes between adjacent pixels, the effective resolution is lower than
the nominal one (i.e. the pixel count).

To draw another parallel, the human eye can't see colours when it's too
dark. Should image sensors therefore also switch to monochrome in low
light levels, i.e. at night?

> > There is no law of physics stating that a full colour sensor must have
> > more noise than a Bayer sensor.
>
> But it must contain at least 2N more active sensor sites and arrange to
> filter the light into at least RGB.

You are making assumptions about the implementation here. See my other
post.

David J Taylor

Jun 27, 2009, 4:30:23 AM
Alfred Molon wrote:
[]

> The human eye is not the measure of all things. You want to know if
> the sensor has a certain resolution or not. If it is unable to capture
> changes between adjacent pixels, the effective resolution is lower
> than the nominal one (i.e. the pixel count).

.. but the human eye is what will be viewing the majority of digital
photos.

> To draw another parallel, the human eye can't see colours when it's
> too dark. Should image sensors therefore also switch to monochrome in
> low light levels, i.e. at night?

Try making one of your night-time pictures monochrome. It can be very
effective in adding atmosphere, and all the issues of multiple
colour-temperature light sources vanish. You might even add a little
noise for more effect.

David

nospam

Jun 27, 2009, 11:20:21 AM
In article <4a45cc3f$1...@dnews.tpgi.com.au>, Bob Larter
<bobby...@gmail.com> wrote:

> > if you pixel peep you can see problems no matter what camera was used.
> > the fact remains that human vision can't see colour detail anywhere
> > near as well as luminance detail, which is why the bayer chip works as
> > well as it does. blur just the colour channels in photoshop and you
> > won't be able to see a difference until you use very high levels of
> > blur.
>
> Indeed. If anyone wants to try it out, you can do it in Photoshop by
> converting the image to LAB, then blurring the a & b channels.

yep, and you can blur the a and b channels by quite a bit, roughly a
5-10 pixel radius, before it's even noticeable. meanwhile, blur
the luminance channel ever so slightly and you'll notice a difference
immediately.

nospam

Jun 27, 2009, 11:20:25 AM
In article <MPG.24b013d7a...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> > interpolation does not mean the result is always wrong.
>
> It's a guess, i.e. it could be right or wrong.

it's not a guess, it's calculated based on a lot of information from
surrounding pixels, and it is right *far* more often than it's wrong.
you make it sound like it's a random guess, which it very definitely is
not.

> > there is if the pixel is divided up into parts, since each part will
> > have a lower s/n ratio. and with foveon, the conversion from the three
> > layers (not true rgb) into rgb also adds noise. everything is a
> > tradeoff.
>
> You are making assumptions about the implementation. Consider the
> following example:
>
> - monochrome sensor
> - place RGB colour filters on it arranged in a Bayer pattern; take a
> photo
> - now place a rotating wheel with three red, green and blue windows.
> Take three images, a green, a red and a blue one. Combine them.
>
> The Bayer image has the same noise levels and the same dynamic range as
> the full-colour image.

a rotating wheel is limited to motionless subjects, hardly a useful
tradeoff, nor is it a full colour sensor.

> I'm not advocating this type of solution, just mentioning this to make
> it clear that sensor performance depends on the implementation and this
> need not to be the Foveon one. Some company is experimenting with three
> transparent stacked colour sensitive layers, to make another example.

several companies are working on it, including nikon, canon and fuji.
that doesn't mean what they come up with will be better, and even if it
is better, there's no telling how soon it will be in an actual product
you can buy or how much it will cost. maybe one day they'll solve most
of the problems, but it isn't going to be in the short term.

right now, bayer is the best solution and will remain so for a while.

nospam

Jun 27, 2009, 11:20:26 AM
In article <MPG.24b01599e...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> > Interpolation of the Bayer array is generally well behaved except for
> > pathological test cases designed to break it. And even then the modern
> > heuristics for processing the sampled chroma data do very well.
>
> Lots of "pathological" cases in real world scenes. Green leaves against
> a blue sky, urban scenes, just to give a couple of examples. Colour
> changes between adjacent pixels are quite common.

and there's a luminance change in those examples, so a bayer sensor
will resolve it. bayer falls apart when there is a colour change with
little to no luminance change, and not only does that not happen in the
real world, but your eye can't see that very well either.

> There are even colour
> changes within a pixel, which no sensor is able to capture and just
> averages out.

how does colour change within a pixel???

> > You are tilting at windmills. The loss of chroma information through
> > subsampling has only minor deleterious effects. Foveon is a cute
> > technology but it solves a non-problem. The human eye has a higher
> > resolution for luminance than it does for chrominance.
>
> The human eye is not the measure of all things.

it is if you take photos to be viewed by humans, which is what just
about everyone taking photos does. if you are taking photos for
scientific analysis, then you won't be using a consumer camera.

Ofnuts

Jun 27, 2009, 4:07:29 PM
bugbear wrote:

>
> Remember the fad over the "Lomo Effect", and reproducing it in
> Gimp/Photoshop?!
>

Looks fun....

--
Bertrand

tconway

Jun 27, 2009, 4:51:36 PM

"Bob Larter" <bobby...@gmail.com> wrote in message
news:4a45c78b$1...@dnews.tpgi.com.au...

> David J Taylor wrote:
>> Bob Williams wrote:
>>> David J Taylor wrote:
>>>> John Navas wrote:
>>>>> On Thu, 25 Jun 2009 10:03:25 +1000, ribbit <rib...@news.group> wrote
>>>>> in <7aft6gF...@mid.individual.net>:
>>>>>
>>>>>> The notion that you cannot tell the difference between a shot from
>>>>>> a P&S and one from a DSLR with good quality glass is silly. Of
>>>>>> course you can... Unless you only want to look at outlines.
>>>>>
>>>>> Nope. Sorry.
>>>>
>>>> I've tried the test with the cameras I have, John, and the difference
>>>> was quite obvious.
>>>>
>>>> David
>>>
>>> Is it visible on an 8x10 print?
>>
>> I don't print. The difference was obvious filling a 20-inch 1600 x 1200
>> pixel display.
>>
>>> I did a controlled test with a 4MP Panasonic FZ15 (P/S) with a 12X
>>> Leica lens VS a 8 MP Canon Rebel XT DSLR with a kit lens.
>>> I printed both images at 8x10 and the Panny/Leica combo outperformed
>>> the DSLR Canon/Canon combo.
>>
>> The Canon kit lens had a very poor reputation, although I gather that the
>> newer 18-55mm IS is a better quality lens.
>
> Well, if you shoot with a consumer zoom on a DSLR you might as well be
> shooting with a P&S anyway.
oh well, my Nikon 18-135 DX is highly rated though.
Does that count? (beats any 28-200 I've ever seen)


Alfred Molon

Jun 28, 2009, 6:31:24 AM
In article <270620091120268880%nos...@nospam.invalid>, nospam says...

> and there's a luminance change in those examples, so a bayer sensor
> will resolve it.

Not necessarily, and even if there was it would not help a Bayer sensor
to accurately reconstruct the image.

> how does colour change within a pixel???

Scene having more detail than the sensor can capture?

Bob Larter

Jun 28, 2009, 1:53:15 AM
nospam wrote:
> In article <4a45c2a7$1...@dnews.tpgi.com.au>, Bob Larter
> <bobby...@gmail.com> wrote:
>
>> Don't get me started on audiophools...
>> <http://grumpyoldarts.com/2009/04/18/audiophools/>
>
> they're a hoot.

To bring it back to photography, check out one of the comments on the
post, which compares audiophool idiocy to the photographic equivalent:
---
Exactly the same debate exists in photography: Photoshop does not
replace the fun of darkroom, for those of us who enjoy darkroom, but
that has nothing to do with objective resolution and dynamics, where
digital has won the battle years ago.
---

> how about a 770 pound turntable that uses bullet-proof
> wood, for only $150k:
> <http://www.needledoctor.com/Clearaudio-Statement-Turntable>

Jesus!

*cough* *splutter*

David J Taylor

Jun 28, 2009, 2:06:28 AM
Bob Larter wrote:
> nospam wrote:
[]

>> how about a 770 pound turntable that uses bullet-proof
>> wood, for only $150k:
>> <http://www.needledoctor.com/Clearaudio-Statement-Turntable>
>
> Jesus!
>
>> and don't cheap out on the needle:
>> <http://www.needledoctor.com/Clearaudio-Goldfinger-Phono-Cartridge>
>
> *cough* *splutter*

Somehow it would be more believable at $147K and $9625 for the needle!

Even more expensive than Leica!

David

nospam

Jun 28, 2009, 3:16:29 AM
In article <MPG.24b164e92...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> > and there's a luminance change in those examples, so a bayer sensor
> > will resolve it.
>
> Not necessarily, and even if there was it would not help a Bayer sensor
> to accurately reconstruct the image.

actually it would.

> > how does colour change within a pixel???
>
> Scene having more detail than the sensor can capture?

if it's beyond the capabilities of the sensor, then it doesn't matter
if it's full colour or not. it's not going to be resolved.

Frédérique & Hervé Sainct

Jun 28, 2009, 5:59:10 AM
Robert Sneddon <fr...@nospam.demon.co.uk> wrote:

> One way to do this might be to increase the number of filter channels -- a
> six-channel filter, say, instead of the traditional 3-channel RGB filtering
> process.

multispectral photography is my dream...

--
Frédérique & Hervé Sainct, h.sa...@laposte.net [fr,es,en,it]
Frédérique's initial is missing in front of the above address
l'initiale de Frédérique manque devant l'adresse email ci-dessus

Frédérique & Hervé Sainct

Jun 28, 2009, 5:59:10 AM
bugbear <bugbear@trim_papermule.co.uk_trim> wrote:

> Can any passing astronomers comment on this - I strongly
> suspect photon calculations are meat and drink in that
> sphere.

they already use multi-minute exposures, don't they?

Bob Larter

Jun 28, 2009, 8:40:51 AM

Exactly.

Tzorzakakis Dimitrios

Jun 28, 2009, 1:45:47 PM

"nospam" <nos...@nospam.invalid> wrote in message
news:270620090310554763%nos...@nospam.invalid...
My turntable, a Pro-Ject Debut III, goes for 370 euros, complete with Ortofon
OM5 moving magnet cartridge and Cambridge Audio 540P MM preamplifier. If
someone has $150,000 and is willing to spend it on this turntable...but
the Chinese have a proverb: even if you have acres of rice paddies,
you still eat only one helping of rice at supper.

--
Tzortzakakis Dimitris
major in electrical engineering
mechanized infantry reservist
hordad AT otenet DOT gr


Martin Brown

Jun 29, 2009, 3:23:02 AM
Alfred Molon wrote:
> In article <zH31m.2601$dz7...@newsfe04.iad>, Martin Brown says...
>
>> Interpolation of the Bayer array is generally well behaved except for
>> pathological test cases designed to break it. And even then the modern
>> heuristics for processing the sampled chroma data do very well.
>
> Lots of "pathological" cases in real world scenes. Green leaves against
> a blue sky, urban scenes just to make a couple of examples. Colour
> changes between adjacent pixels are quite common. There are even colour
> changes within a pixel, which no sensor is able to capture and just
> averages out.

You have several serious misconceptions about what happens when a real
3-colour scene is imaged by a Bayer sensor. The green channel carries over
50% of the luminance information and when corrected with the red and
blue channels it gives a very good proxy for luminance. It is true that
the edges for red and blue detail are necessarily soft since the best
that software can guess at is based on the luminance at a green site and
the signal at blue cells on either side. But that is usually still good
enough. The sorts of pathological target that Bayer fails to get right
are where you try to photograph a Bayer mask at the right distance to
match the sensor scale and offset so that the green channels are imaged
by the red and blue masks and vice versa.
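
The "over 50%" point is easy to sanity-check from standard luma
weights (Rec. 601 shown here; Rec. 709 weights green even more
heavily):

  r_w, g_w, b_w = 0.299, 0.587, 0.114  # Y = 0.299R + 0.587G + 0.114B
  print(g_w / (r_w + g_w + b_w))       # ~0.59: green alone carries
                                       # most of the luminance signal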


>
>> You are tilting at windmills. The loss of chroma information through
>> subsampling has only minor deleterious effects. Foveon is a cute
>> technology but it solves a non-problem. The human eye has a higher
>> resolution for luminance than it does for chrominance.
>
> The human eye is not the measure of all things. You want to know if the
> sensor has a certain resolution or not. If it is unable to capture
> changes between adjacent pixels, the effective resolution is lower than
> the nominal one (i.e. the pixel count).

Not at all. The effective resolution for *colour* is lower (although not
by all that much) but the resolution for luminance is pretty close to
the theoretical limit for the sensor. That is why it works so well. The
eye cannot see fine colour detail so well but it is very sensitive to
luminance. If the luminance is accurate then small chroma errors are
effectively invisible until you zoom in to pixel level detail.


>
> To draw another parallel, the human eye can't see colours when it's too
> dark. Should image sensors therefore also switch to monochrome in low
> light levels, i.e. at night?

There is some advantage in doing so since you can illuminate wildlife
with IR floodlights that do not disturb them. You should note here that
it was not until *1971* that colour film technology could reproduce
what the human eye would see under faint lighting if it were more
sensitive.

All the early colour astronomical photographs have way too much pink
and blue and no green (which is actually the brightest nebula emission
but just happens to sit on the safelight wavelength for colour film).


>
>>> There is no law of physics stating that a full colour sensor must have
>>> more noise than a Bayer sensor.
>> But it must contain at least 2N more active sensor sites and arrange to
>> filter the light into at least RGB.
>
> You are making assumptions about the implementation here. See my other
> post.

You can do multiple exposures with different filters on the same sensor
array, if you can be sure the subject will not move or alter in any
way -- an assumption that is invalid in almost all circumstances.

That is how colour images are done in professional astronomy where
having reliable raw values for every pixel in each waveband really
matters. And most of the objects, with a few exceptions, change appearance
only on geological timescales. But apart from quantitative
scientific imaging, the Bayer matrix is good enough in all practical
circumstances.

Regards,
Martin Brown

Martin Brown

unread,
Jun 29, 2009, 3:26:07 AM6/29/09
to
Alfred Molon wrote:
> In article <270620091120268880%nos...@nospam.invalid>, nospam says...
>
>> and there's a luminance change in those examples, so a bayer sensor
>> will resolve it.
>
> Not necessarily, and even if there was it would not help a Bayer sensor
> to accurately reconstruct the image.
>
>> how does colour change within a pixel???
>
> Scene having more detail than the sensor can capture?

All natural scenes meet that criterion: there is always finer detail
than the medium can support. The imaging device's lens or mirror limits
the highest spatial frequency that makes it to the sensor.

The Bayer demosaic is a *lot* more effective than you seem to think.

Regards,
Martin Brown

Martin Brown

unread,
Jun 29, 2009, 3:35:49 AM6/29/09
to
bugbear wrote:
> Don Stauffer wrote:
>> At some point the number of photons/photoelectrons captured during a
>> typical exposure will become so low that photon noise WILL become an
>> issue. At any reasonable exposure there are only so many photons
>> striking each square micron of detector surface area. We may not be
>> there yet, in spite of warnings, but we cannot go TOO small without
>> running into this problem.
>
> Can any passing astronomers comment on this - I strongly
> suspect photon calculations are meat and drink in that
> sphere.

They used to be an issue in the film days, when reciprocity failure
would cause faint image detail to never be recorded at room temperature.
All sorts of witchcraft involving baking film in nitrogen and hydrogen
gas were used to make it behave better at low light levels.

By comparison CCDs are fairly well behaved. A photon impact releases an
electron with some decent quantum efficiency, independent of the rate of
arrival. What hurts is thermal noise build-up during the exposure, which
is why astro CCDs are actively cooled, along with thermal noise/IR
photons emitted by the readout amplifier in the corner of the chip.
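
As a back-of-envelope illustration of the photon-counting statistics
(Python; the well depths are made-up round numbers, not any particular
chip's): for Poisson arrivals the noise is sqrt(N), so SNR = sqrt(N).

import math

for full_well in (1_000, 10_000, 100_000):   # electrons per photosite
    snr = math.sqrt(full_well)               # shot-noise-limited SNR
    print(f"{full_well:>7} e-  ->  SNR {snr:6.1f}  ({20 * math.log10(snr):.1f} dB)")

Shrink the photosite and you shrink the well, and the shot-noise SNR
falls as the square root of the collected charge.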

A reasonable introduction to the issues in astronomical CCDs is at:
http://www.astrophys-assist.com/educate/noise/noise.htm

Regards,
Martin Brown

John Navas

unread,
Jun 30, 2009, 12:30:10 PM6/30/09
to
On Sat, 27 Jun 2009 11:20:25 -0400, nospam <nos...@nospam.invalid> wrote
in <270620091120258778%nos...@nospam.invalid>:

>In article <MPG.24b013d7a...@news.supernews.com>, Alfred
>Molon <alfred...@yahoo.com> wrote:

>> - monochrome sensor
>> - place RGB colour filters on it arranged in a Bayer pattern; take a
>> photo
>> - now place a rotating wheel with three red, green and blue windows.
>> Take three images, a green, red and a blue one. Combine them.
>>
>> The Bayer image has the same noise levels and the same dynamic range as
>> the full-colour image.
>
>a rotating wheel is limited to motionless subjects, hardly a useful
>tradeoff,

Depends on how fast the wheel is rotating -- three (say) 1/500 sec
exposures could be combined into a single effective exposure of 1/160
sec.
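
(Arithmetic: 3 x 1/500 s = 3/500 s, about 1/167 s, i.e. the standard
1/160 s marking -- ignoring any dead time while the wheel moves between
windows.)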

>nor is it a full colour sensor.

Why not?

--
Best regards,
John (Panasonic DMC-FZ28, and several others)

Ray Fischer

unread,
Jun 30, 2009, 2:09:50 PM6/30/09
to
Alfred Molon <alfred...@yahoo.com> wrote:
> nospam says...

>> interpolation does not mean the result is always wrong.
>
>It's a guess, i.e. could be right or wrong.

By that "logic" ALL photos are guesses.

--
Ray Fischer
rfis...@sonic.net

nospam

unread,
Jun 30, 2009, 2:21:57 PM6/30/09
to
In article <s7fk4596r9ijo50ae...@4ax.com>, John Navas
<spamf...@navasgroup.com> wrote:

> >a rotating wheel is limited to motionless subjects, hardly a useful
> >tradeoff,
>
> Depends on how fast the wheel is rotating -- three (say) 1/500 sec
> exposures could be combined into a single effective exposure of 1/160
> sec.

then the subject must remain still for 1/160th, thus, no sports or
action photography where a faster shutter speed is needed, or outdoors
in bright sun where the lens can't be stopped down. not a good solution
for most people.

> >nor is it a full colour sensor.
>
> Why not?

a full colour sensor means each pixel captures all three colours, as
with foveon. three successive photos using different filtration is not
a full colour sensor.

Martin Brown

unread,
Jun 30, 2009, 6:11:13 PM6/30/09
to
John Navas wrote:
> On Sat, 27 Jun 2009 11:20:25 -0400, nospam <nos...@nospam.invalid> wrote
> in <270620091120258778%nos...@nospam.invalid>:
>
>> In article <MPG.24b013d7a...@news.supernews.com>, Alfred
>> Molon <alfred...@yahoo.com> wrote:
>
>>> - monochrome sensor
>>> - place RGB colour filters on it arranged in a Bayer pattern; take a
>>> photo
>>> - now place a rotating wheel with three red, green and blue windows.
>>> Take three images, a green, red and a blue one. Combine them.
>>>
>>> The Bayer image has the same noise levels and the same dynamic range as
>>> the full-colour image.
>> a rotating wheel is limited to motionless subjects, hardly a useful
>> tradeoff,
>
> Depends on how fast the wheel is rotating -- three (say) 1/500 sec
> exposures could be combined into a single effective exposure of 1/160
> sec.

The result would not be very satisfactory for sports photography. A
runner at a modest 7m/s covers 14mm in 1/500s which if you track the
runner will have him sharp and the background motion smeared or vice
versa. But if you have three time staggered exposures the background or
the runner becomes an extremely distracting tricolour rainbow.

The method is used for scientific photography for targets where the
subject is stationary or only very slowly changing.

Regards,
Martin Brown

John Navas

unread,
Jun 30, 2009, 7:42:06 PM6/30/09
to
On Tue, 30 Jun 2009 14:21:57 -0400, nospam <nos...@nospam.invalid> wrote
in <300620091421575623%nos...@nospam.invalid>:

>In article <s7fk4596r9ijo50ae...@4ax.com>, John Navas
><spamf...@navasgroup.com> wrote:
>
>> >a rotating wheel is limited to motionless subjects, hardly a useful
>> >tradeoff,
>>
>> Depends on how fast the wheel is rotating -- three (say) 1/500 sec
>> exposures could be combined into a single effective exposure of 1/160
>> sec.
>
>then the subject must remain still for 1/160th, thus, no sports or
>action photography where a faster shutter speed is needed, or outdoors
>in bright sun where the lens can't be stopped down. not a good solution
>for most people.

In other words, you were wrong, and are now scrambling.

>> >nor is it a full colour sensor.
>>
>> Why not?
>
>a full colour sensor means each pixel captures all three colours, as
>with foveon. three successive photos using different filtration is not
>a full colour sensor.

Actually it is.

John Navas

unread,
Jun 30, 2009, 7:52:30 PM6/30/09
to
On Tue, 30 Jun 2009 23:11:13 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote in
<r4w2m.2$IU...@newsfe05.iad>:

>John Navas wrote:
>> On Sat, 27 Jun 2009 11:20:25 -0400, nospam <nos...@nospam.invalid> wrote
>> in <270620091120258778%nos...@nospam.invalid>:
>>
>>> In article <MPG.24b013d7a...@news.supernews.com>, Alfred
>>> Molon <alfred...@yahoo.com> wrote:

>>> a rotating wheel is limited to motionless subjects, hardly a useful
>>> tradeoff,
>>
>> Depends on how fast the wheel is rotating -- three (say) 1/500 sec
>> exposures could be combined into a single effective exposure of 1/160
>> sec.
>
>The result would not be very satisfactory for sports photography. A
>runner at a modest 7m/s covers 14mm in 1/500s which if you track the
>runner will have him sharp and the background motion smeared or vice
>versa. But if you have three time staggered exposures the background or
>the runner becomes an extremely distracting tricolour rainbow.

EV 16 (daylight, ISO 200) is 1/4000 sec at f/4.0, or about 1/1300 sec
for a color composite image, more than fast enough for much (most?)
sports photography*, and slight color misregistration can be corrected
with in camera processing (much like chromatic aberration).

* Most of my film motorsports photography was at ISO 64, typically
1/250-1/1000 sec.

nospam

unread,
Jun 30, 2009, 11:16:32 PM6/30/09
to
In article <8l8l451n3gr6rcpoc...@4ax.com>, John Navas
<spamf...@navasgroup.com> wrote:

> >> >a rotating wheel is limited to motionless subjects, hardly a useful
> >> >tradeoff,
> >>
> >> Depends on how fast the wheel is rotating -- three (say) 1/500 sec
> >> exposures could be combined into a single effective exposure of 1/160
> >> sec.
> >
> >then the subject must remain still for 1/160th, thus, no sports or
> >action photography where a faster shutter speed is needed, or outdoors
> >in bright sun where the lens can't be stopped down. not a good solution
> >for most people.
>
> In other words, you were wrong, and are now scrambling.

quite the opposite. you're changing the meaning of motionless from
immobile such as a building to something that can be frozen by 1/160th.
it's still very limiting.

> >> >nor is it a full colour sensor.
> >>
> >> Why not?
> >
> >a full colour sensor means each pixel captures all three colours, as
> >with foveon. three successive photos using different filtration is not
> >a full colour sensor.
>
> Actually it is.

saying so doesn't make it so.

Alfred Molon

unread,
Jul 1, 2009, 12:41:14 PM7/1/09
to
In article <nZZ1m.4820$3O....@newsfe25.iad>, Martin Brown says...

> The green channel carries over
> 50% of the luminance information

That's not correct. The green channel only carries the luminance
information of the green image component. The bandwidth of the green
colour filter may be wider or narrower, but still the green pixels only
accurately measure the luminance of the green channel.

> and when corrected with the red and
> blue channels it gives a very good proxy for luminance.

The green channel is a poor approximation of luminance. The red and blue
channels are even poorer approximations of luminance, and together they
(i.e. R and B) make up 50% of the pixels of a typical Bayer sensor.

> It is true that
> the edges for red and blue detail are necessarily soft since the best
> that software can guess at is based on the luminance at a green site and
> the signal at blue cells on either side. But that is usually still good
> enough.

Define "good enough". "Good enough" may mean that your Bayer sensor at
the end of all calculations only has approx. 60%-70% of effective
resolution, i.e. a 10MP Bayer sensor having the same resolution as a 6
or 7 MP full colour sensor. All this of course depends on the scene you
are photographing.

> Not at all. The effective resolution for *colour* is lower

Also for luminance, because the luminance information in a Bayer sensor
is inaccurate.

But you are missing another big issue. In a Bayer sensor the luminance
resolution differs from the chroma resolution. This creates problems for
the dimensioning of the AA filters: either set them for the lower
chrominance resolution, with the result of reducing too much the overall
MTF of the camera, or set them for the higher luminance resolution and
then suffer from colour aliasing.

By the way, you are actually not disagreeing with what I am writing. You
also acknowledge that the effective resolution of a Bayer sensor is
lower than the nominal one. My guesstimate is that it is 60%-70% on
average, while you think it is a bit higher than that.

Martin Brown

unread,
Jul 1, 2009, 8:34:31 AM7/1/09
to
Alfred Molon wrote:
> In article <nZZ1m.4820$3O....@newsfe25.iad>, Martin Brown says...
>> The green channel carries over
>> 50% of the luminance information
>
> That's not correct. The green channel only carries the luminance
> information of the green image component.

Which represents almost 60% of the total luminance signal.
The standard CCIR 601 gives the coefficient weights as follows

Y' = 0.299 R + 0.587 G + 0.114 B

You would barely notice the effect that the blue channel had on total
luminance if it went missing entirely.
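
To put numbers on that (a quick Python check using the weights just
quoted):

def luma(r, g, b):
    # CCIR 601 luma weights
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma(128, 128, 128))   # 128.0 for a mid grey
print(luma(128, 128, 0))     # 113.4 with blue removed entirely

Zeroing blue moves the luma of a mid grey by only about 11%, while
zeroing green would move it by nearly 59%.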

> The bandwidth of the green
> colour filter may be wider or narrower, but still the green pixels only
> accurately measure the luminance of the green channel.

But because the human eye is constructed with a peak sensitivity to
green light, the green channel represents the bulk of the luminance signal.


>
>> and when corrected with the red and
>> blue channels it gives a very good proxy for luminance.
>
> The green channel is a poor approximation of luminance. The red and blue
> channels are even poorer approximations of luminance, and together they
> (i.e. R and B) make up 50% of the pixels of a typical Bayer sensor.

A quarter each to red and blue. There are some sensors using CMYG or an
emerald green to get extended green gamut colours.


>
>> It is true that
>> the edges for red and blue detail are necessarily soft since the best
>> that software can guess at is based on the luminance at a green site and
>> the signal at blue cells on either side. But that is usually still good
>> enough.
>
> Define "good enough". "Good enough" may mean that your Bayer sensor at
> the end of all calculations only has approx. 60%-70% of effective
> resolution, i.e. a 10MP Bayer sensor having the same resolution as a 6
> or 7 MP full colour sensor. All this of course depends on the scene you
> are photographing.

There is perhaps a 10-20% hit compared to the result for luminance
sampled with a dedicated unfiltered monochrome CCD. The exact number
would depend on signal to noise, how good the demosaic was, and how well
characterised the antialias filter on the imaging sensor is. The
estimated values when doubling the resolution of high signal-to-noise
images are very good. There is only an issue at very sharp edge
transitions, and even there the heuristics used are pretty good now.


>
>> Not at all. The effective resolution for *colour* is lower
>
> Also for luminance, because the luminance information in a Bayer sensor
> is inaccurate.

It is only very slightly inferior to the luminance from a dedicated
monochrome sensor with the same pixel pitch.


>
> But you are missing another big issue. In a Bayer sensor the luminance
> resolution differs from the chroma resolution. This creates problems for
> the dimensioning of the AA filters: either set them for the lower
> chrominance resolution, with the result of reducing too much the overall
> MTF of the camera, or set them for the higher luminance resolution and
> then suffer from colour aliasing.

These are only big issues if you spend your time trying to photograph
test charts and other Bayer filter mosaics at the limit of the sensor
resolution.


>
> By the way, you are actually not disagreeing with what I am writing. You
> also acknowledge that the effective resolution of a Bayer sensor is
> lower than the nominal one. My guesstimate is that it is 60%-70% on
> average, while you think it is a bit higher than that.

It cannot be worse than the sensor spacing of the green channel. There
is an argument in favour of having the pixels diagonally since in the
urban world buildings have lots of vertical and horizontal features.
_ _ _
|_|_|_|
|_|_|_|
|_|_|_|

So that the finer sampling along the 45 degree diagonals (rows of a
square grid with pitch A are A/sqrt(2) apart when measured along a
diagonal) is utilised more effectively. ISTR one or two cameras have
used this approach.

Regards,
Martin Brown

nospam

unread,
Jul 1, 2009, 2:48:58 PM7/1/09
to
In article <MPG.24b3e84e7...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> > The green channel carries over
> > 50% of the luminance information
>
> That's not correct. The green channel only carries the luminance
> information of the green image component. The bandwidth of the green
> colour filter may be wider or narrower, but still the green pixels only
> accurately measure the luminance of the green channel.

and it calculates the exact value based on the neighboring red and blue
pixels.

> > and when corrected with the red and
> > blue channels it gives a very good proxy for luminance.
>
> The green channel is a poor approximation of luminance. The red and blue
> channels are even poorer approximations of luminance, and together they
> (i.e. R and B) make up 50% of the pixels of a typical Bayer sensor.

actually the green channel is a very good approximation.

> > It is true that
> > the edges for red and blue detail are necessarily soft since the best
> > that software can guess at is based on the luminance at a green site and
> > the signal at blue cells on either side. But that is usually still good
> > enough.
>
> Define "good enough". "Good enough" may mean that your Bayer sensor at
> the end of all calculations only has approx. 60%-70% of effective
> resolution, i.e. a 10MP Bayer sensor having the same resolution as a 6
> or 7 MP full colour sensor. All this of course depends on the scene you
> are photographing.

that has more to do with aliasing than it being bayer.

> > Not at all. The effective resolution for *colour* is lower
>
> Also for luminance, because the luminance information in a Bayer sensor
> is inaccurate.

wrong, as can be seen by the output of the cameras.

> But you are missing another big issue. In a Bayer sensor the luminance
> resolution differs from the chroma resolution. This creates problems for
> the dimensioning of the AA filters: either set them for the lower
> chrominance resolution, with the result of reducing too much the overall
> MTF of the camera, or set them for the higher luminance resolution and
> then suffer from colour aliasing.

true, everything is a compromise.

> By the way, you are actually not disagreeing with what I am writing. You
> also acknowledge that the effective resolution of a Bayer sensor is
> lower than the nominal one. My guesstimate is that it is 60%-70% on
> average, while you think it is a bit higher than that.

and it won't be much higher with a full colour sensor because of
aliasing.

Alfred Molon

unread,
Jul 2, 2009, 11:08:54 AM7/2/09
to
In article <300620091421575623%nos...@nospam.invalid>, nospam says...

> a full colour sensor means each pixel captures all three colours, as
> with foveon. three successive photos using different filtration is not
> a full colour sensor.

Any sensor which captures a complete RGB triplet per pixel is full
colour.

nospam

unread,
Jul 2, 2009, 1:09:42 PM7/2/09
to
In article <MPG.24b5b9ade...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> > a full colour sensor means each pixel captures all three colours, as
> > with foveon. three successive photos using different filtration is not
> > a full colour sensor.
>
> Any sensor which captures a complete RGB triplet per pixel is full
> colour.

that's what i said.

John Navas

unread,
Jul 2, 2009, 1:25:28 PM7/2/09
to
On Thu, 02 Jul 2009 13:09:42 -0400, nospam <nos...@nospam.invalid> wrote
in <020720091309421877%nos...@nospam.invalid>:

Nope.
Proof: <news:300620091421575623%nos...@nospam.invalid>

nospam

unread,
Jul 2, 2009, 1:49:29 PM7/2/09
to
In article <1arp45tne58lueke5...@4ax.com>, John Navas
<spamf...@navasgroup.com> wrote:

> On Thu, 02 Jul 2009 13:09:42 -0400, nospam <nos...@nospam.invalid> wrote
> in <020720091309421877%nos...@nospam.invalid>:
>
> >In article <MPG.24b5b9ade...@news.supernews.com>, Alfred
> >Molon <alfred...@yahoo.com> wrote:
> >
> >> > a full colour sensor means each pixel captures all three colours, as
> >> > with foveon. three successive photos using different filtration is not
> >> > a full colour sensor.
> >>
> >> Any sensor which captures a complete RGB triplet per pixel is full
> >> colour.
> >
> >that's what i said.
>
> Nope.
> Proof: <news:300620091421575623%nos...@nospam.invalid>

scroll up. you even included what i said in the quote.

i'll say it again, three photos that are blended in post-processing is
not a full colour sensor.

it's the same as how taking multiple photos at different exposures doesn't
make a sensor a wide dynamic range sensor. when people use the term
full colour sensor they mean capturing rgb at every location. foveon is
the first implementation and others are sure to follow, assuming they
can overcome some of the technical hurdles.

John Navas

unread,
Jul 2, 2009, 1:55:53 PM7/2/09
to
On Thu, 02 Jul 2009 13:49:29 -0400, nospam <nos...@nospam.invalid> wrote
in <020720091349295113%nos...@nospam.invalid>:

>In article <1arp45tne58lueke5...@4ax.com>, John Navas
><spamf...@navasgroup.com> wrote:
>
>> On Thu, 02 Jul 2009 13:09:42 -0400, nospam <nos...@nospam.invalid> wrote
>> in <020720091309421877%nos...@nospam.invalid>:
>>
>> >In article <MPG.24b5b9ade...@news.supernews.com>, Alfred
>> >Molon <alfred...@yahoo.com> wrote:
>> >
>> >> > a full colour sensor means each pixel captures all three colours, as
>> >> > with foveon. three successive photos using different filtration is not
>> >> > a full colour sensor.
>> >>
>> >> Any sensor which captures a complete RGB triplet per pixel is full
>> >> colour.
>> >
>> >that's what i said.
>>
>> Nope.
>> Proof: <news:300620091421575623%nos...@nospam.invalid>
>
>scroll up. you even included what i said in the quote.
>
>i'll say it again, three photos that are blended in post-processing is
>not a full colour sensor.

You're arguing silly semantics, not a meaningful distinction.
It's a full color image, which is all that really matters.

nospam

unread,
Jul 2, 2009, 3:09:52 PM7/2/09
to
In article <j3tp45tc3t80sm6e6...@4ax.com>, John Navas
<spamf...@navasgroup.com> wrote:

> >i'll say it again, three photos that are blended in post-processing is
> >not a full colour sensor.
>
> You're arguing silly semantics, not a meaningful distinction.

as are you.

> It's a full color image, which is all that really matters.

and bayer outputs a full colour image too.

John Navas

unread,
Jul 2, 2009, 3:26:49 PM7/2/09
to
On Thu, 02 Jul 2009 15:09:52 -0400, nospam <nos...@nospam.invalid> wrote
in <020720091509522972%nos...@nospam.invalid>:

Bayer can only create a "good enough" demosaiced (interpolated),
anti-aliased image with lower accuracy and resolution, particularly
chrominance.

Color wheel is capable of creating a non-mosaiced non-interpolated image
with full chrominance and luminance resolution.

I'm done with this silly debate. Have the last word.

nospam

unread,
Jul 2, 2009, 4:04:34 PM7/2/09
to
In article <or1q451evbh50skic...@4ax.com>, John Navas
<spamf...@navasgroup.com> wrote:

> Bayer can only create a "good enough" demosaiced (interpolated),
> anti-aliased image with lower accuracy and resolution, particularly
> chrominance.

it's more than good enough for the vast majority of subjects and the
anti-alias filter is a requirement of any sampling system for accurate
resolution. omitting it results in *less* accurate results due to
aliasing. that's one of the flaws of sigma/foveon, although they spin
it as a feature. and lower chrominance isn't anything the eye can see,
which is exactly why bayer works so well.

> Color wheel is capable of creating a non-mosaiced non-interpolated image
> with full chrominance and luminance resolution.

only for subjects that don't move very much, if at all. that's a huge
limitation.

> I'm done with this silly debate. Have the last word.

in other words, you're wrong again and exit.

Alfred Molon

unread,
Jul 3, 2009, 4:38:50 AM7/3/09
to
In article <GJI2m.9740$iU7....@newsfe01.iad>, Martin Brown says...


> Which represents almost 60% of the total luminance signal.
> The standard CCIR 601 gives the coefficient weights as follows
>
> Y' = 0.299 R + 0.587 G + 0.114 B
>
> You would barely notice the effect that the blue channel had on total
> luminance if it went missing entirely.

If the blue channel went missing entirely you'd notice it very well.
Just try setting the blue components of the pixels of a photo to 0 and
see what difference it makes.

But we're not here to discuss whether the human eye is more sensitive to
red or green light. The intensity of light (let's avoid the term
luminance since it seems to cause confusion) is given by this formula:

L = sqrt (R^2 + G^2 + B^2)

i.e. the length of a vector in a three dimensional space.

> But because the human eye is constructed with a peak sensitivity to
> green light, the green channel represents the bulk of the luminance signal.

Once again, you are performing a measurement, so the sensitivities of
the human eye are not relevant. Otherwise you'd have to flatten the
colours when the light intensity goes below a certain level, because the
human eye can't see colours if it's too dark.

> There is perhaps a 10-20% hit

More than that in my opinion. Try the following:

- resize an image to 50% (i.e. a 12MP image down to 3MP)
- upsize the downsized image back to the original size, i.e. take the
3MP image and interpolate it to 12MP
- compare the original image with the downsized-upsized one: you'll be
surprised how little difference there is
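
The experiment is easy to script; a sketch using Python's Pillow
library ("photo.jpg" is just a placeholder for any high-resolution
image):

from PIL import Image, ImageChops

orig = Image.open("photo.jpg")
small = orig.resize((orig.width // 2, orig.height // 2), Image.LANCZOS)
back = small.resize(orig.size, Image.LANCZOS)

# Per-channel (min, max) of the residual; small numbers mean the
# downsize/upsize round trip lost little visible detail.
print(ImageChops.difference(orig, back).getextrema())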



> These are only big issues if you spend your time trying to photograph
> test charts and other Bayer filter mosaics at the limit of the sensor
> resolution.

Either a sensor is capable of a certain resolution or it is not. If a
12MP sensor is not capable of recording 12MP, and for instance only
records 8MP, its effective resolution is 8MP.

Alfred Molon

unread,
Jul 3, 2009, 4:38:52 AM7/3/09
to
In article <010720091448589346%nos...@nospam.invalid>, nospam says...


> and it calculates the exact value based on the neighboring red and blue
> pixels.

It does not.

> actually the green channel is a very good approximation.

It is not.

> that has more to do with aliasing than it being bayer.

Aliasing is a source of error and limits the effective resolution. As I
already wrote, since in a Bayer sensor the colour resolution is lower
than the intensity resolution, you can't dimension the AA filters
properly, and this will impact the resolution of the camera.

> wrong, as can be seen by the output of the cameras.

It is inaccurately calculated.

> and it won't be much higher with a full colour sensor because of
> aliasing.

It will be, among other reasons because you can dimension the AA filter
properly.

Alfred Molon

unread,
Jul 3, 2009, 5:03:52 AM7/3/09
to
In article <020720091604349325%nos...@nospam.invalid>, nospam says...

> in other words, you're wrong again and exit.

Nah, all points have already been made (and once again: this stuff pops
up every few months).

There is not much more which can be added to this discussion now. You
disagree and that is fine. You have the right to your personal opinion.

It's just that as soon as a company brings out a full-colour sensor
with good performance (the Foveon sensor does not have good
performance), the entire industry will switch to full colour sensors and
everybody will all of a sudden agree that full colour sensors are so
much better.

This reminds me of the debate which took place a few years ago here
about live view in DSLRs. It was argued in a very scientific manner
(google for Dave Martindale's posts) why live view would ruin the (SNR)
performance of DSLR sensors and should not be introduced. People were
totally against it. Now just see the Nikon D3 and its high ISO
performance - much better than the Canon 5D which does not have live
view.

It was also argued that DSLR should not record movies, that that was a
stupid silly feature. Now the capability of recording video is a most
welcome feature, also because there is a growing demand for short videos
in the stock photo sector.

The bottom line is that there are a lot of technological advances to
which people here initially seem totally opposed. Yet, once they are
introduced, these new features quickly become essential.

Bob Larter

unread,
Jul 3, 2009, 1:03:07 AM7/3/09
to
Alfred Molon wrote:
> In article <GJI2m.9740$iU7....@newsfe01.iad>, Martin Brown says...
>
>> Which represents almost 60% of the total luminance signal.
>> The standard CCIR 601 gives the coefficient weights as follows
>>
>> Y' = 0.299 R + 0.587 G + 0.114 B
>>
>> You would barely notice the effect that the blue channel had on total
>> luminance if it went missing entirely.
>
> If the blue channel went missing entirely you'd notice it very well.
> Just try setting the blue components of the pixels of a photo to 0 and
> see what difference it makes.

He's talking about the blue contribution to total luminosity, not to the
colour.


--
W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est
---^----^---------------------------------------------------------------

Martin Brown

unread,
Jul 3, 2009, 6:10:51 AM7/3/09
to
Alfred Molon wrote:
> In article <020720091604349325%nos...@nospam.invalid>, nospam says...
>
>> in other words, you're wrong again and exit.
>
> Nah, all points have already been made (and once again: this stuff pops
> up every few months).
>
> There is not much more which can be added to this discussion now. You
> disagree and that is fine. You have the right to your personal opinion.
>
> It's just that as soon as a company brings out a full-colour sensor
> with good performance (the Foveon sensor does not have good
> performance), the entire industry will switch to full colour sensors and
> everybody will all of a sudden agree that full colour sensors are so
> much better.

But they are not. The human eye cannot tell the difference. If the
luminance signal is accurate then small chroma errors are invisible.

As an example. There is an absolutely gross error in the JPEG codec used
in PaintShop Pro v8 which results in vertical chroma subsampling being
entirely wrong when the default 2x2 chroma subsampling is used.

A few people have commented on slightly odd sky noise but hardly anyone
has really noticed what is a major chroma sampling defect.

If you don't believe me, draw a few nearly vertical blue or red lines in
PSPro v8 on a plain background and save as JPEG 2x2 subsampled.
You will be surprised what you get when you reload.

What happens during subsampling depends critically on whether the
coloured line is in an odd or an even indexed pixel. I would be
interested to know if the problem persists in their later offerings. I
never bought their stuff again after that.
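
A toy model of that alignment sensitivity (generic 2x2 block averaging
in numpy, not PSP's actual codec):

import numpy as np

def subsample_2x2(chroma):
    # Average each 2x2 block, then replicate back up.
    h, w = chroma.shape
    blocks = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return blocks.repeat(2, axis=0).repeat(2, axis=1)

line = np.zeros((4, 8))
line[:, 2:4] = 1.0              # 2-px line aligned with a 2x2 block
print(subsample_2x2(line)[0])   # chroma survives at full strength

line = np.zeros((4, 8))
line[:, 1:3] = 1.0              # same line shifted one pixel
print(subsample_2x2(line)[0])   # chroma halves and smears over 4 px

Shift a two-pixel feature by one pixel and its subsampled chroma changes
completely, which is exactly the odd/even dependence described above.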

> This reminds me of the debate which took place a few years ago here
> about live view in DSLRs. It was argued in a very scientific manner
> (google for Dave Martindale's posts) why live view would ruin the (SNR)
> performance of DSLR sensors and should not be introduced. People were
> totally against it. Now just see the Nikon D3 and its high ISO
> performance - much better than the Canon 5D which does not have live
> view.

I too would expect live view to make the CCD readout amplifier run
warmer, hurting dark current and low-light usage. In fact I know that it
would hurt in exactly that way, as well as crippling battery life.


>
> It was also argued that DSLR should not record movies, that that was a
> stupid silly feature. Now the capability of recording video is a most
> welcome feature, also because there is a growing demand for short videos
> in the stock photo sector.

Recording movies is kind of cute if there is enough memory capacity.
Again it hammers the battery life.


>
> The bottom line is that there are a lot of technological advances to
> which people here initially seem totally opposed. Yet, once they are
> introduced, these new features quickly become essential.

However, full colour imaging is not one of them. Bayer sensors are
already good enough.

Despite Foveon's USP, no-one is rushing to buy their cameras.

Regards,
Martin Brown

nospam

unread,
Jul 3, 2009, 6:36:24 AM7/3/09
to
In article <7Pk3m.23723$S16....@newsfe23.iad>, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

> > It's just that as soon as a company brings out a full-colour sensor
> > with good performance (the Foveon sensor does not have good
> > performance), the entire industry will switch to full colour sensors and
> > everybody will all of a sudden agree that full colour sensors are so
> > much better.
>
> But they are not. The human eye cannot tell the difference. If the
> luminance signal is accurate then small chroma errors are invisible.

exactly.

> I too would expect live view to make the CCD readout amplifier run
> warmer, hurting dark current and low-light usage. In fact I know that it
> would hurt in exactly that way, as well as crippling battery life.

someone on dpreview has measured the temperature of the sensor on a
nikon d3 and it doesn't change very much.

> > The bottom line is that there are a lot of technological advances to
> > which people here initially seem totally opposed. Yet, once they are
> > introduced, these new features quickly become essential.
>
> However, full colour imaging is not one of them. Bayer sensors are
> already good enough.

yep.

> Despite Foveon's USP, no-one is rushing to buy their cameras.

very true. part of it is because the camera is so slow and flaky and
part of it is because it's not any better than what nikon/canon have.

nevertheless, the sd15 is going to be interesting because it really
isn't much different than an sd14 with a few bugs fixed, and the price
for the sd14 has dropped to $350 from its initial street price of
$1600. there's no way sigma can be making any money on these.

nospam

unread,
Jul 3, 2009, 6:36:26 AM7/3/09
to
In article <MPG.24b7e7e78...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> It's just that as soon as a company brings out a full-colour sensor
> with good performance (the Foveon sensor does not have good
> performance), the entire industry will switch to full colour sensors and
> everybody will all of a sudden agree that full colour sensors are so
> much better.

there are some very serious technical hurdles to overcome for a full
colour sensor to have good performance. if they can be solved, then
full colour sensors will replace bayer, and if not, they won't. so
far, foveon's attempt has been fairly poor, riddled with all sorts of
issues.

nospam

unread,
Jul 3, 2009, 6:36:28 AM7/3/09
to
In article <MPG.24b7c8dcd...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> > and it calculates the exact value based on the neighboring red and blue
> > pixels.
>
> It does not.

true. it's not exact but it's almost exact. the error is very small and
humans won't notice it at all except in well known edge cases such as
red/blue test charts.

> > actually the green channel is a very good approximation.
>
> It is not.

it is, but an even better approximation uses red and blue channel
information.

Martin Brown

unread,
Jul 3, 2009, 12:24:04 PM7/3/09
to
Alfred Molon wrote:
> In article <GJI2m.9740$iU7....@newsfe01.iad>, Martin Brown says...
>
>> Which represents almost 60% of the total luminance signal.
>> The standard CCIR 601 gives the coefficient weights as follows
>>
>> Y' = 0.299 R + 0.587 G + 0.114 B
>>
>> You would barely notice the effect that the blue channel had on total
>> luminance if it went missing entirely.
>
> If the blue channel went missing entirely you'd notice it very well.
> Just try setting the blue components of the pixels of a photo to 0 and
> see what difference it makes.
>
> But we're not here to discuss whether the human eye is more sensitive to
> red or green light. The intensity of light (let's avoid the term
> luminance since it seems to cause confusion) is given by this formula:
>
> L = sqrt (R^2 + G^2 + B^2)
>
> i.e. the length of a vector in a three dimensional space.

You can define it like that if you want to but you are on your own.
Everyone else works in terms of roughly what are "just noticeable
differences" in colours. That is nothing like what you get if you
generate the entire 0..255 range for each of red, green and blue.


>
>> But because the human eye is constructed with a peak sensitivity to
>> green light, the green channel represents the bulk of the luminance signal.
>
> Once again, you are performing a measurement, so the sensitivities of
> the human eye are not relevant. Otherwise you'd have to flatten the
> colours when the light intensity goes below a certain level, because the
> human eye can't see colours if it's too dark.

Doesn't matter. The same happens when only single bits of colour are
enabled in an image. A quick crude 12-bit greyscale can be had on a
normal display by dithering the least significant colour bit. You can
see the colour error on a test chart, but on a normal image with even a
slight amount of noise the chroma errors vanish unless you go pixel peeping.
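
For the curious, a toy version of that least-significant-bit dither
(my own illustration, not any driver's actual code):

import numpy as np

def dither_patch(value10):
    # Render a 10-bit grey on an 8-bit display: split into an 8-bit
    # base plus a 2-bit remainder, spread over a 2x2 ordered pattern.
    base, frac = divmod(value10, 4)
    thresholds = np.array([[0, 2], [3, 1]])   # 2x2 ordered-dither matrix
    return base + (thresholds < frac).astype(int)

print(dither_patch(513))   # patch mean 128.25 = 513/4

Viewed at normal distance the patch averages to the intended level, and
any image noise hides the pattern completely.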

What on earth is the point of recording something in a colour photograph
that the human eye cannot distinguish when you could use the same
resources to better effect?


>
>> There is perhaps a 10-20% hit
>
> More than that in my opinion. Try the following:
>
> - resize an image to 50% (i.e. a 12MP image down to 3MP)
> - upsize the downsized image back to the original size, i.e. take the
> 3MP image and interpolate it to 12MP
> - compare the original image with the downsized-upsized one: you'll be
> surprised how little difference there is

That just tells you how strongly correlated most natural images are from
pixel to pixel. It is the reason why Bayer works so well in practice.


>
>> These are only big issues if you spend your time trying to photograph
>> test charts and other Bayer filter mosaics at the limit of the sensor
>> resolution.
>
> Either a sensor is capable of a certain resolution or it is not. If a
> 12MP sensor is not capable of recording 12MP, and for instance only
> records 8MP, its effective resolution is 8MP.

The sensor is perfectly capable of the resolution, although many lenses
can only deliver it at their optimum aperture, and a lot of amateurs who
lack a steady hand cannot. Images with large JPEG file sizes tend to be
the ones with the largest high spatial frequency content.

Regards,
Martin Brown

nospam

unread,
Jul 3, 2009, 7:17:26 PM7/3/09
to
In article <Hgq3m.2318$dd4....@newsfe10.iad>, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

> What on earth is the point of recording something in a colour photograph
> that the human eye cannot distinguish when you could use the same
> resources to better effect?

exactly.

> > More than that in my opinion. Try the following:
> >
> > - resize an image to 50% (i.e. a 12MP image down to 3MP)
> > - upsize the downsized image back to the original size, i.e. take the
> > 3MP image and interpolate it to 12MP
> > - compare the original image with the downsized-upsized one: you'll be
> > surprised how little difference there is
>
> That just tells you how strongly correlated most natural images are from
> pixel to pixel. It is the reason why Bayer works so well in practice.

actually it tells you that there wasn't that much detail in the
original image. if there was, you *will* see a difference by
downsizing and upsizing. try it with a picture of a solid colour wall
and downsize/upsize all you want :)

Paul Furman

unread,
Jul 4, 2009, 2:08:11 PM7/4/09
to

I'm guessing they won't mount Nikkors, lens sales could close the gap.
That's a ridiculous price for a DSLR.

--
Paul Furman
www.edgehill.net
www.baynatives.com

all google groups messages filtered due to spam

nospam

unread,
Jul 4, 2009, 2:30:15 PM7/4/09
to
In article <h2o61k$dj3$1...@news.eternal-september.org>, Paul Furman
<paul-@-edgehill.net> wrote:

> >> Despite Foveon's USP, no-one is rushing to buy their cameras.
> >
> > very true. part of it is because the camera is so slow and flaky and
> > part of it is because it's not any better than what nikon/canon have.
> >
> > nevertheless, the sd15 is going to be interesting because it really
> > isn't much different than an sd14 with a few bugs fixed, and the price
> > for the sd14 has dropped to $350 from its initial street price of
> > $1600. there's no way sigma can be making any money on these.
>
> I'm guessing they won't mount Nikkors, lens sales could close the gap.
> That's a ridiculous price for a DSLR.

there are adapter rings for non-sigma mount lenses but like any adapter
ring, there will not be any autofocus or auto-aperture.

Alfred Molon

unread,
Jul 5, 2009, 8:27:35 AM7/5/09
to
In article <7Pk3m.23723$S16....@newsfe23.iad>, Martin Brown says...

> But they are not.

They are.

> If the
> luminance signal is accurate then small chroma errors are invisible.

Luminance is not accurate in a Bayer sensor.

> However, full colour imaging is not one of them. Bayer sensors are
> already good enough.

No, they are not. But feel free to continue using cameras with crippled
Bayer sensors, once high performance full colour sensors are available.

Besides, have you ever wondered why video cameras with three CCDs are
available, if one CCD is sufficient?

Alfred Molon

unread,
Jul 5, 2009, 8:43:58 AM7/5/09
to
In article <Hgq3m.2318$dd4....@newsfe10.iad>, Martin Brown says...

> What on earth is the point of recording something in a colour photograph
> that the human eye cannot distinguish when you could use the same
> resources to better effect?

By this logic we would not need very high resolution image sensors,
because the human eye cannot see the fine detail anyway.

Make an enlargement and the human eye will notice very well the
deficiencies of a Bayer sensor.

> That just tells you how strongly correlated most natural images are from
> pixel to pixel. It is the reason why Bayer works so well in practice.

No, this tells you how bad cameras with Bayer sensors are at recording
fine detail.

The typical scene you are recording has much more detail than your
sensor can capture.

> The sensor is perfectly capable of the resolution, although many lenses
> can only deliver it at their optimum aperture, and a lot of amateurs who
> lack a steady hand cannot. Images with large JPEG file sizes tend to be
> the ones with the largest high spatial frequency content.

Lenses are another issue affecting the recorded image.

In any case, and I already explained this, with a Bayer sensor there is
a basic problem with the AA filters, because you have:

- individual colour channels at 1/2 the line count of the sensor (i.e.
just one red pixel in each 2x2 pixel block)
- light intensity channel at a higher resolution

How to dimension the AA filter?

A. For the colour channels, cutting off at 1/2 the sensor line count or
B. at the line count of the sensor

Case A: more perceived recorded resolution, lots of colour Moire
Case B: less perceived recorded resolution, no colour Moire

Aliasing errors have the effect of reducing the effective resolution of
the recorded image, i.e. the recorded highest-resolution information
drowns in a sea of aliasing errors.
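
To put rough numbers on the mismatch (a Python back-of-envelope; the
5 micron pitch is purely illustrative):

# Nyquist bookkeeping for a Bayer sensor with pixel pitch p:
# luminance is reconstructed at roughly the full pitch p, while red
# and blue are each sampled on square grids at pitch 2p.
pitch_um = 5.0                                # illustrative pitch
nyquist_luma = 1000 / (2 * pitch_um)          # line pairs per mm
nyquist_chroma = 1000 / (2 * 2 * pitch_um)    # red/blue channels
print(f"luma   Nyquist ~ {nyquist_luma:.0f} lp/mm")     # ~100
print(f"chroma Nyquist ~ {nyquist_chroma:.0f} lp/mm")   # ~50

One AA filter cannot sit at both limits at once, hence the compromise.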

Alfred Molon

unread,
Jul 5, 2009, 10:01:25 AM7/5/09
to
In article <030720091617268453%nos...@nospam.invalid>, nospam says...

> try it with a picture of a solid colour wall
> and downsize/upsize all you want :)

You enjoy taking pictures of solid colour walls with no detail in them?

Martin Brown

unread,
Jul 5, 2009, 4:33:10 AM7/5/09
to
Alfred Molon wrote:
> In article <7Pk3m.23723$S16....@newsfe23.iad>, Martin Brown says...
>
>> But they are not.
>
> They are.

Tis. Tisn't. Tis. Tisn't.

You cannot point to any worthwhile full colour digital camera that
produces noticeably better image quality than the existing Bayer
sensors. The only example to date is noticeably worse.


>
>> If the
>> luminance signal is accurate then small chroma errors are invisible.
>
> Luminance is not accurate in a Bayer sensor.
>
>> However, full colour imaging is not one of them. Bayer sensors are
>> already good enough.
>
> No, they are not. But feel free to continue using cameras with crippled
> Bayer sensors, once high performance full colour sensors are available.

Since the only example of a so-called "better colour" camera using the
Foveon sensor is a complete dog, it is hard to see why you feel there is
an advantage worth pursuing.


>
> Besides, have you ever wondered why video cameras with three CCDs are
> available, if one CCD is sufficient?

In the old days it used to be the only way they could do it: a sensor
for each colour, and the industry is rather conservative about quality.
You do realise that all broadcast TV and many MPEG streams use 2x2
chroma subsampling to save transmission bandwidth, which is somewhat
more brutal than the 2x1 chroma subsampling implicit in the Bayer matrix.

Why bother to waste bandwidth on information that the eye cannot see?

If the JPEG standards committee had anticipated the Bayer matrix of
digicams it would have been possible to directly encode the sensor
measurements with better compression than the normal JPEG file format.

Regards,
Martin Brown
