
Highest Megapixels Possible in APS-Cs


lastico

Apr 27, 2009, 7:17:13 AM
Hi,

What's the highest megapixel count possible in APS-C
DSLRs before noise makes the quality bad... 20
megapixels? 40 megapixels? There will come a time
when the pixel sizes match those in the point &
shoot department. Will DSLRs go back to 35mm
lenses? What are the roadmaps for Nikon, Canon and
Sony in the years and decades ahead? What new
technology will produce a 50 megapixel DSLR with
lightweight lenses like the EF series? Or will
DSLRs reach a certain limit, say 30 megapixels,
where the manufacturers no longer push above it but
maintain it for decades or centuries to come? Or
will new, noise-resistant pixel technology produce
120 megapixels or even 1 gigapixel and beyond?

lastico

Tzortzakakis Dimitrios

Apr 27, 2009, 9:03:41 AM

? "lastico" <lasti...@yahoo.com> ?????? ??? ??????
news:c01725a0-4c7a-498a...@y34g2000prb.googlegroups.com...
No idea, nobody can predict the future. Maybe cameras will evolve in a
totally different way that we today cannot even imagine. If you think about
what people in the '80s believed the 21st century would be like, you will be
amazed. Everybody thought we would have flying cars, colonies on the Moon,
starships travelling to Jupiter... OTOH, what we actually got were mobile
phones, the collapse of the Soviet bloc in 1989, the internet and the
digital revolution in general.

--
Tzortzakakis Dimitrios
major in electrical engineering
mechanized infantry reservist
hordad AT otenet DOT gr


Don Stauffer

Apr 27, 2009, 9:53:30 AM
I am not sure of the exact format size of APS-C, so I cannot compute it
right now. However, even though "photo"lithography has moved to
submicron feature size, I doubt if sensor pixels will go below 1 micron
anytime soon. I see real problems with submicron pixels, although it is
theoretically possible. So a fair benchmark would be an array of 1
micron pixels.

Neil Harrington

Apr 27, 2009, 10:20:05 AM

It varies slightly among the so-called APS-C cameras; there is no "exact
format size." For example, a Nikon D70s has a sensor size of 23.4 x 15.6 mm,
while a D80 sensor is 23.6 x 15.8 mm. Those dimensions are typical for other
6- and 10-megapixel models respectively in Nikon's DSLR line.

Chris H

Apr 28, 2009, 4:43:20 AM
In message <c01725a0-4c7a-498a...@y34g2000prb.googlegroups.com>,
lastico <lasti...@yahoo.com> writes

>Hi,
>
>What's the highest megapixels possible in APS-C
>DSLRs before noise makes the quality bad...

Currently about 15MP?

At least after that they all seem to go FX

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Neil Harrington

Apr 28, 2009, 11:46:52 AM
Chris H wrote:
> In message
> <c01725a0-4c7a-498a...@y34g2000prb.googlegroups.com>,
> lastico <lasti...@yahoo.com> writes
>> Hi,
>>
>> What's the highest megapixels possible in APS-C
>> DSLRs before noise makes the quality bad...
>
> Currently about 15MP?
>
> At least after that they all seem to go FX

Maybe, but it's certainly *possible* to go much higher than that in a DX
sized sensor.

We now have compact and ultracompact cameras with up to 14 megapixels.
That's with an effective sensor area of somewhere around 45 sq mm. The same
pixel density on a DX sensor (about 373 sq mm) would be around 116
megapixels.

I'm not saying it's likely they'd ever want to do that, but apparently they
could if they wanted to.
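
A quick sanity check of that scaling in Python (using the approximate
areas quoted above, 45 sq mm for the compact sensor and 373 sq mm for
DX; the inputs are rough, so treat the result as an order-of-magnitude
estimate):

    # Scale a 14 Mp compact camera's pixel density up to a DX-sized sensor.
    compact_pixels = 14e6     # pixels in a 14-megapixel compact
    compact_area = 45.0       # approximate compact sensor area, sq mm
    dx_area = 373.0           # approximate DX sensor area, sq mm

    density = compact_pixels / compact_area      # pixels per sq mm
    dx_pixels = density * dx_area
    print("%.0f megapixels" % (dx_pixels / 1e6)) # prints ~116 megapixels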


Matt Ion

Apr 28, 2009, 6:26:39 PM

Any calculations and assumed limitations would also assume makers stay
with current technologies. A completely new and different sensor
technology could be discovered tomorrow that would completely invalidate
the very concept of "megapixels".

Rich

Apr 28, 2009, 8:15:10 PM
"Neil Harrington" <sec...@illumnati.net> wrote in
news:joCdneMIJLiIuGrU...@giganews.com:

And they'd charge some idiot $5000 for the rubbish such a sensor would
produce.

Kennedy McEwen

Apr 28, 2009, 9:05:56 PM
In article
<c01725a0-4c7a-498a...@y34g2000prb.googlegroups.com>,
lastico <lasti...@yahoo.com> writes

>Hi,
>
>What's the highest megapixels possible in APS-C
>DSLRs before noise makes the quality bad... 20
>Megapixels? 40 Megapixels?

What makes you think there is a limit at all?

In a conventional sensor, as was the case a couple of years ago, you
could say that there was no point in making the pixel smaller than the
diffraction limit of the optical system. In most cases that is around
f/2, and in a few cases it gets down to f/1.2, but few lenses meet this
theoretical limit of resolution. For green 550nm light that makes the
optical diffraction cut-off around 1500 cy/mm, so there is no point in
making pixels smaller than 0.33um, since it is impossible to resolve
more than that with f/1.2. On a typical APS-C sensor of 23x15mm that
works out at a shade over 3 gigapixels - the absolute limit.

But who is going to go there, and why, when most lenses are diffraction
limited at closer to f/4 or f/5.6? That results in a maximum useful
pixel count of about 145 Mpix, and even that doesn't take account of the
pixel resolution itself, just the optical limits.
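
Those numbers follow from the usual incoherent diffraction cut-off of
1/(wavelength x f-number); a rough Python check of the arithmetic,
assuming 550nm light and the 23x15mm sensor above:

    # Diffraction-limited pixel counts for a 23 x 15 mm (APS-C) sensor.
    wavelength_mm = 550e-6                # green light, 550 nm

    def max_pixels(f_number):
        cutoff = 1.0 / (wavelength_mm * f_number)  # cut-off, cycles/mm
        pitch = 1.0 / (2.0 * cutoff)               # Nyquist pixel pitch, mm
        return (23.0 / pitch) * (15.0 / pitch)

    print("f/1.2: %.1f gigapixels" % (max_pixels(1.2) / 1e9))  # ~3.2 Gp
    print("f/5.6: %.0f megapixels" % (max_pixels(5.6) / 1e6))  # ~145 Mp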

However, long before that you will see a change in the sensors
occurring, indeed it has already started with some of the SRAW type
concepts which trade resolution for noise.

The idea here is similar to "bit-stream" audio, where a single-bit DAC
running at 100MHz with digital filtering reproduces better audio than a
16-bit DAC fed the 44.1kHz data recorded on CD. Digital processing takes
that low bandwidth, high dynamic range digital signal and converts it
into a high bandwidth, low dynamic range stream for the DAC, so that it
can be simply filtered in the analogue domain to a low bandwidth, high
dynamic range output without the need for high precision analogue
components - literally making digital audio as cheap as chips!
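
A toy illustration of that bit-stream idea in Python - a first-order
1-bit modulator whose output, once averaged by a digital filter,
reproduces the input level (a minimal sketch for illustration, not any
particular converter design):

    def one_bit_modulate(samples):
        """First-order modulator: accumulate the error, emit a 0 or a 1."""
        acc, bits = 0.0, []
        for x in samples:                  # inputs in the range 0..1
            acc += x
            bit = 1 if acc >= 1.0 else 0
            acc -= bit
            bits.append(bit)
        return bits

    level = 0.37                               # a constant "grey" level
    bits = one_bit_modulate([level] * 10000)   # oversampled 1-bit stream
    print(sum(bits) / float(len(bits)))        # ~0.37 after averaging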

Why should the sensor just accumulate photons in pixel-sized buckets
that require high dynamic range analogue and ADC components to get good
quality images? Why not make the pixels, and the size of their
individual buckets, smaller and then use digital processing based on the
known lens parameters to compute just the information that the lens at
that aperture and zoom position can actually resolve? For example, a 3Gp
sensor with only a few hundred electrons of capacity per site could,
with appropriate processing power, meet the resolution limits of all
current optics with a signal to noise ratio that would outperform all of
today's sensors.

Take it another step and simply record the position at which each photon
has induced a free electron on the focal plane, and process that data
on-chip. Based on today's devices of, say, a 5um pixel with 25,000
electrons per pixel, that would be 0.4Tp for APS-C - with 1 electron
each. The digital output off the chip could be 50Mp with 12-bit dynamic
range if the lens was at its sweet spot, or only 1Mp with 18-bit dynamic
range if the lens was optically limited. You could have a variable
pixel density and dynamic range - just like VBR MP3 coding - across the
image, optimally encoding the image to account for the central sweet
spot and the aberration-limited corners.
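
The electron-budget arithmetic behind those figures, as a rough Python
sketch (5um pixels with 25,000 electrons each and a 23x15mm sensor, as
assumed above; the bit depths come out at roughly the 12- and 18-bit
values quoted):

    # Single-electron "sites" on an APS-C focal plane, and the dynamic
    # range available when they are pooled into larger output pixels.
    import math

    sites = (23000.0 / 5) * (15000.0 / 5) * 25000   # ~3.5e11, the ~0.4 Tp above

    for output_pixels in (50e6, 1e6):
        electrons = sites / output_pixels
        bits = math.log(electrons, 2)
        print("%3.0f Mp -> %6.0f electrons/pixel, ~%.0f bits"
              % (output_pixels / 1e6, electrons, bits))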

No, you are nowhere near the practical limits with today's technology
and a long way from the limits of theory.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)

Alfred Molon

Apr 29, 2009, 2:05:53 AM
Don't forget that full colour sensors are the future.
--

Alfred Molon
------------------------------
Olympus 50X0, 8080, E3X0, E4X0, E5X0 and E3 forum at
http://tech.groups.yahoo.com/group/MyOlympus/
http://myolympus.org/ photo sharing site

Kennedy McEwen

Apr 29, 2009, 8:03:28 AM
In article <MPG.24620caf9...@news.supernews.com>, Alfred Molon
<alfred...@yahoo.com> writes

>Don't forget that full colour sensors are the future.

With that level of oversampling it doesn't matter whether the sensors
are "full colour" at each site or a Bayer type matrix of single colours.

Chris Malcolm

Apr 29, 2009, 1:07:49 PM
Alfred Molon <alfred...@yahoo.com> wrote:

> Don't forget that full colour sensors are the future.

It's a shame we won't be able to upgrade our eyes :-)

--
Chris Malcolm

jdear

Apr 29, 2009, 1:36:46 PM
On Apr 28, 6:05 pm, Kennedy McEwen <r...@nospam.demon.co.uk> wrote:
> In article
> <c01725a0-4c7a-498a-a03c-7f6d4f40f...@y34g2000prb.googlegroups.com>,
> lastico <lastico...@yahoo.com> writes

I don't see how your idea would work. The single bit ADCs you talk
about are sigma-delta converters, that is, the next sample is either
one bit value higher or one bit value lower than the PREVIOUS sample.
Still image data is not temporal data, so there is no previous data to
compare to. There is spatial data that can be used for resolution
purposes and for very limited intensity purposes (blown highlights). A
single bit sensor wouldn't know the difference between a patch of grey
and a patch of white in the scene. Adding dither to the sensor would
help, but not enough.

Kennedy McEwen

Apr 30, 2009, 3:00:57 AM
In article
<9bb8f907-0887-4851...@a5g2000pre.googlegroups.com>,
jdear <jde...@yahoo.com> writes

>I don't see how your idea would work. The single bit ADCs you talk
>about are sigma-delta converters, that is, the next sample is either
>one bit value higher or one bit value lower than the PREVIOUS sample.

No that isn't how sigma-delta ADCs work.
See http://en.wikipedia.org/wiki/Digital_to_analog_converter

These are oversampling DACs with noise shaping digital filters. The
same effect can be achieved in ADCs with the digital filter on the
output, as I suggest.

>Still image data is not temporal data, so there is no previous data
>to compare to.

Ignoring your previous error, this is also irrelevant because whilst
there is no temporal previous data there is adjacent spatial data, not
only that, but unlike the time axis, there are two relevant orthogonal
spatial axes.


>A single bit sensor wouldn't know the difference between a patch of
>grey and a patch of white in the scene.

That is precisely the point. At the individual sample there are only
two levels, however that sampling density is so high - much higher than
the optical capabilities of any lens - that the useful image information
is achieved by averaging over a dynamically variable area, matched to
the capabilities of the lens. What you are measuring is a canonical
ensemble, the electron density on the silicon. Current devices
effectively have a single bit sensor already, individual electrons, but
they are spatially clustered - averaged over relatively large, fixed,
areas which are below the resolution of the best glass.

Alfred Molon

May 1, 2009, 2:19:56 AM
In article <LItMraAQ...@kennedym.demon.co.uk>, Kennedy McEwen
says...


> With that level of oversampling it doesn't matter whether the sensors
> are "full colour" at each site or a Bayer type matrix of single colours.

It does. Since you can't increase the pixel count for ever, the future
is better pixels, and full colour pixels are better than single colour
ones.

Alfred Molon

May 1, 2009, 2:21:58 AM
In article <75rfr5F...@mid.individual.net>, Chris Malcolm says...

> Alfred Molon <alfred...@yahoo.com> wrote:
>
> > Don't forget that full colour sensors are the future.
>
> It's a shame we won't be able to upgrade our eyes :-)

Already electronic sensors are better than our eyes:
- We can't see colours when it's too dark; electronic sensors can.
- We can't see infrared or ultraviolet; electronic sensors can.

Kennedy McEwen

May 1, 2009, 10:39:03 AM
In article <MPG.2464b2d6c...@news.supernews.com>, Alfred Molon
<alfred...@yahoo.com> writes

>In article <LItMraAQ...@kennedym.demon.co.uk>, Kennedy McEwen
>says...
>
>> With that level of oversampling it doesn't matter whether the sensors
>> are "full colour" at each site or a Bayer type matrix of single colours.
>
>It does. Since you can't increase the pixel count for ever, the future
>are better pixels and full colour pixels are better than single colour
>ones.

You have clearly missed the point, which is to eliminate pixels on the
focal plane entirely by increasing the sampling density significantly
beyond what can be resolved by any practical optic. The output pixels
are downsampled from a super-resolution sampling scale. You don't need
to increase the pixel count forever. Once you are past the point where
an optical system can differentiate the spatial extent of any colour
then spatially separated colour sensors (eg. Bayer) cease to have any
differentiation from full colour spatially coherent sensors (eg.
Foveon). However, single photo-electron localisation requires several
orders of magnitude higher sampling density than an optical diffraction
limit, so way beyond where it becomes possible to optically spatially
separate colours.

An analogy of this is to couple a conventional Bayer sensor to a very
poor resolution lens, something like a Lensbaby with a >20Mp sensor.
Since the Lensbaby cannot resolve any more than, say 10lp/mm, it cannot
differentiate the spatial difference of the 6um or so between coloured
pixels on the Bayer sensor. Rather, at 10lp/mm the smallest pixel that
the lens can resolve is about 50x50um, which is just a shade more than
8x8 Bayer pixels. Consequently after spatially filtering the image back
to the maximum resolution that the Lensbaby can reproduce, each output
pixel IS a full colour pixel. There need be no Bayer demosaicing
process, because the lens resolution comes nowhere near the sensor
resolution. Simply integrating the electron density in each colour
within an area determined by the lens resolution limit is enough.
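
Putting numbers on that example (a rough sketch, using the 10 lp/mm
lens resolution and 6um Bayer pixel pitch assumed above):

    # Smallest detail a 10 lp/mm lens resolves, versus 6 um Bayer pixels.
    lens_lp_per_mm = 10.0             # Lensbaby-class resolution
    bayer_pitch_um = 6.0              # Bayer pixel pitch

    resolvable_um = 1000.0 / (2.0 * lens_lp_per_mm)   # 50 um spot
    pixels_across = resolvable_um / bayer_pitch_um    # ~8.3 pixels
    print("lens-limited pixel: %.0f x %.0f um = %.1f x %.1f Bayer pixels"
          % (resolvable_um, resolvable_um, pixels_across, pixels_across))
    # Each lens-limited area spans roughly 8x8 Bayer pixels, so every
    # output pixel already contains all three colours - no demosaicing.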

Now you can argue that if each Bayer sensor was sensitive to three
colours in a Foveon or similar arrangement then you would get more
sensitivity, but that is just the limitations of the analogy not the
concept. With the 1-electron capacity at the ideal sensor resolution
limit, and hence single bit depth, that argument just ceases to be
relevant.

Alfred Molon

May 1, 2009, 5:26:30 PM
In article <TMoK6DDH...@kennedym.demon.co.uk>, Kennedy McEwen
says...

> Now you can argue that if each Bayer sensor was sensitive to three
> colours in a Foveon or similar arrangement then you would get more
> sensitivity,

... and that's essentially the reason why I was suggesting full colour
pixels. If you capture only one colour component per spatial area, you
are throwing away 2/3 of the incoming light, i.e. you have only 1/3 of
the sensitivity.

Besides, I suspect that the additional circuitry to read and process a
pixel occupies sensor area which could otherwise be used to capture
light. So, instead of having three single colour pixels it's better to
have just one full-colour pixel.

Kennedy McEwen

May 1, 2009, 4:50:11 PM
In article <MPG.2465873bc...@news.supernews.com>, Alfred Molon
<alfred...@yahoo.com> writes

>In article <TMoK6DDH...@kennedym.demon.co.uk>, Kennedy McEwen
>says...
>
>> Now you can argue that if each Bayer sensor was sensitive to three
>> colours in a Foveon or similar arrangement then you would get more
>> sensitivity,
>
>... and that's essentially the reason why I was suggesting to have full
>colour pixels. If you only capture only one colour component per spatial
>area, you are throwing away 2/3 of the incoming light, i.e. you have
>only 1/3 of the sentitivity.
>
Not if you are localising individual photo-electrons with 1-bit dynamic
range! Take a look at some of the non-Foveon concepts for full colour
pixels - the photoelectrons are stored on the silicon in different
spatial locations depending on their colour. Spatial separation of
colour signals does not always result in "throwing away 2/3 of the
incoming light", even though that happens with Bayer arrays.

jdear

May 1, 2009, 5:53:19 PM
On Apr 30, 12:00 am, Kennedy McEwen <r...@nospam.demon.co.uk> wrote:

> >A single bit sensor wouldn't know the difference between a patch of
> >grey and a patch of white in the scene.
>
> That is precisely the point.  At the individual sample there are only
> two levels, however that sampling density is so high - much higher than
> the optical capabilities of any lens - that the useful image information
> is achieved by averaging over a dynamically variable area, matched to
> the capabilities of the lens.  What you are measuring is a canonical
> ensemble, the electron density on the silicon.  Current devices
> effectively have a single bit sensor already, individual electrons, but
> they are spatially clustered - averaged over relatively large, fixed,
> areas which are below the resolution of the best glass.
> --
> Kennedy
> Yes, Socrates himself is particularly missed;
> A lovely little thinker, but a bugger when he's pissed.
> Python Philosophers         (replace 'nospam' with 'kennedym' when replying)

No, it won't work.

Think about this: you have a scene like this:

00000000000000000000
00000000000000000000
00000000000000000000
00000000000000000000
00000000000000000000
00000111111111100000
00000111111111100000
00000111111111100000
00000111111111100000
00000111111111100000
00000000000000000000
00000000000000000000
00000000000000000000
00000000000000000000

the zeros are not exposed, the ones are. Since this is a single bit
system, there is no way of telling whether some of the ones were
brighter than the others. Using adjacent bits won't help, as there
could be a dim spot in the center that is still bright enough to be
recorded as a one. Too much information has been lost and cannot be
recovered. The same pattern above would be recorded regardless of how
bright the center spot is, as long as it was bright enough to cause a
one to be recorded.
The only way this system could work is if you had a series of VERY
short exposures and you stacked the exposures. That amounts to reading
every photon as it comes in, in real time.

nospam

May 1, 2009, 7:25:04 PM
Alfred Molon <alfred...@yahoo.com> wrote:

> ... and that's essentially the reason why I was suggesting to have full
> colour pixels. If you only capture only one colour component per spatial
> area, you are throwing away 2/3 of the incoming light, i.e. you have
> only 1/3 of the sentitivity.

the 2/3 that is 'thrown away' is reconstituted later and if the sensor
outresolves the lens, there's essentially no loss at all. plus, having
a full colour pixel would mean a raw file that's three times as big,
requiring three times the bandwidth in the camera as well as three
times as much flash and hard drive storage.

> Besides I suspect that the additional circuitry to read and process a
> pixel occupies sensor area, which can otherwise be used to capture
> light. So, instead of having three single colour pixels it's better to
> have just one full-colour pixel.

except that there's a noise penalty to pay, even with some of the
non-foveon designs, so you have a full colour pixel that doesn't work
as well at higher isos.

Kennedy McEwen

May 1, 2009, 10:16:53 PM
In article
<e90c177f-ccf0-4b4d...@b6g2000pre.googlegroups.com>,
jdear <jde...@yahoo.com> writes

>
>No, it won't work.
>
>Think about this, you have seen like this:
>
>00000000000000000000
>00000000000000000000
>00000000000000000000
>00000000000000000000
>00000000000000000000
>00000111111111100000
>00000111111111100000
>00000111111111100000
>00000111111111100000
>00000111111111100000
>00000000000000000000
>00000000000000000000
>00000000000000000000
>00000000000000000000
>
>the zeros are not exposed, the one's are.

That is an impossible situation, since we are talking about a sampling
density many times higher than the optical resolution of any lens. You
simply can't get areas of the sensor exposed as above; there is just a
photo-electron density transition at the optical resolution. Lens
resolution is finite, even for an optically perfect lens.
Photoelectrons can be localised at much finer precision than the
optical lens can resolve.

You don't need to know that some of the 1's are brighter than others,
because they aren't - it's the total number of 1's in the entire area
that matters. In your diagram above, with 280 possible positions in
what would be only part of an optically resolved sample, there is just
over 8 bits of equivalent precision with a simple single-order digital
filter without any noise shaping.

Your issue with exposure time is also misguided. Photoelectrons are
generated by photons absorbed by the silicon; they don't just
disappear - they remain there until something allows them to move off
the focal plane. The concept isn't so different from the original
photocathodes on CRT sensors, just in solid state at a finer resolution.
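
A quick Monte Carlo sketch of that counting argument in Python
(assuming, purely for illustration, that each of the 280 sub-resolution
sites independently records an electron with a probability set by the
local brightness):

    import random
    random.seed(1)

    def count_ones(brightness, sites=280):
        # 1-bit samples: each site records a 1 with probability = brightness
        return sum(1 for _ in range(sites) if random.random() < brightness)

    print("white patch:", count_ones(0.95), "of 280")   # ~265 ones
    print("grey patch: ", count_ones(0.50), "of 280")   # ~140 ones
    # The total count over the lens-resolved area carries the intensity,
    # so a grey patch and a white patch are easily distinguished.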

Alfred Molon

May 2, 2009, 12:50:03 AM
In article <ZLJ4r7FD...@kennedym.demon.co.uk>, Kennedy McEwen
says...

> Not if you are localising individual photo-electrons with 1-bit dynamic
> range! Take a look at some of the non-Foveon concepts for full colour
> pixels - the photoelectrons are stored on the silicon in different
> spatial locations depending on their colour.

That sounds like Foveon... or are you suggesting that you are not
capturing all colours at every location on the sensor? Then you are
throwing away photons.

Alfred Molon

May 2, 2009, 12:54:04 AM
In article <010520091625049712%nos...@nospam.invalid>, nospam says...

> the 2/3 that is 'thrown away' is reconstituted later and if the sensor
> outresolves the lens, there's essentially no loss at all.

Yes there is because with Bayer you are throwing away 2/3 of all
photons, with consequent loss of sensitivity.

To put it in other terms, a Bayer sensor uses only 1/3 of incoming
light, therefore the full-colour sensor is 3 times as sensitive.

> plus, having
> a full colour pixel would mean a raw file that's three times as big,
> requiring three times the bandwidth in the camera as well as three
> times as much flash and hard drive storage.

Not much of an issue, since memory is cheap and getting cheaper. Besides
I suspect that you could use some data compression to reduce the memory
usage.

nospam

May 2, 2009, 12:49:41 AM
In article <MPG.2465f0575...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> To put it in other terms, a Bayer sensor uses only 1/3 of incoming
> light, therefore the full-colour sensor is 3 times as sensitive.

with foveon, there's also a loss of sensitivity from slicing a pixel
into three layers, so each 'colour' gets 1/3 the light (simplified),
along with the losses in the conversion to rgb. with nikon's dichroic
mirror patent, the 3 colour receptors have to fit into the space of one
pixel (plus the mirrors). a smaller receptor results in a lower s/n
ratio.

real cameras are consistent with that loss. the nikon d300 goes to iso
3200 with 6400 in extended mode. the sigma sd14 goes to iso 800, with
iso 1600 in extended mode. plus, iso 800 on the sigma is pretty bad,
worse than 3200 on a nikon d300.

> > plus, having
> > a full colour pixel would mean a raw file that's three times as big,
> > requiring three times the bandwidth in the camera as well as three
> > times as much flash and hard drive storage.
>
> Not much of an issue, since memory is cheap and getting cheaper. Besides
> I suspect that you could use some data compression to reduce the memory
> usage.

regardless of its price, full colour raw images will have three times
as much data to move and store than bayer images. compression doesn't
matter since bayer raw images can be compressed too. in fact, bayer
*is* a form of slightly lossy compression.

J. Clarke

May 2, 2009, 7:33:00 AM

Sounds like you're proposing that the existing systems be replaced with some
kind of digital half-toning with very, very small active sites. That,
unless there is a huge breakthrough in sensor technology, is going to flush
your sensitivity down the toilet.

Alfred Molon

May 2, 2009, 4:07:36 PM
In article <010520092149418345%nos...@nospam.invalid>, nospam says...

> with foveon, there's also a loss of sensitivity from slicing a pixel
> into three layers, so each 'colour' gets 1/3 the light (simplified),

No. With a full colour sensor each photon is captured, while with Bayer
2/3 of the photons are thrown away.

> along with the losses in the conversion to rgb.

What losses do you mean?



> in fact, bayer
> *is* a form of slightly lossy compression.

It's not. With bayer the missing colour data is guessed (incorrectly for
most pixels).

nospam

May 2, 2009, 4:20:13 PM
In article <MPG.2466c63bb...@news.supernews.com>, Alfred
Molon <alfred...@yahoo.com> wrote:

> > with foveon, there's also a loss of sensitivity from slicing a pixel
> > into three layers, so each 'colour' gets 1/3 the light (simplified),
>
> No. With a full colour sensor each photon is captured, while with Bayer
> 2/3 of the photons are thrown away.

they may be thrown away but the data can be regenerated. furthermore,
each individual foveon layer is itself noisier than one thick pixel.

> > along with the losses in the conversion to rgb.
>
> What losses do you mean?

unlike what sigma's ads show, foveon doesn't capture true rgb, but
rather three overlapping spectra that must be converted to rgb. for
instance, the top layer is mostly blue but has a lot of green and even
some red. the middle layer is mostly green but has a lot of blue and
red. simplifying, to get rgb, the layers need to be subtracted which
reduces the s/n ratio. the conversion is actually quite complex and
part of the reason why the camera and software is so slow.

> > in fact, bayer
> > *is* a form of slightly lossy compression.
>
> It's not. With bayer the missing colour data is guessed (incorrectly for
> most pixels).

it's calculated, not guessed. it's also very accurate except in a
couple of edge cases that don't matter in the real world. if most of
the bayer pixels were as inaccurate as you claim, then why do the
photos look as good as they do?

foveon also calculates the pixels (or should i say guesses). in fact,
there's more guessing in foveon than with bayer. if you take multiple
photos in succession with a sigma camera you might get different
colours. there are even differences between multiple cameras, which
shouldn't happen if it was more accurate.

Kennedy McEwen

May 2, 2009, 4:58:48 PM
In article <MPG.2465ef6a...@news.supernews.com>, Alfred Molon
<alfred...@yahoo.com> writes

>In article <ZLJ4r7FD...@kennedym.demon.co.uk>, Kennedy McEwen
>says...
>
>> Not if you are localising individual photo-electrons with 1-bit dynamic
>> range! Take a look at some of the non-Foveon concepts for full colour
>> pixels - the photoelectrons are stored on the silicon in different
>> spatial locations depending on their colour.
>
>That sounds like Foveon... or are you suggestion that not in every
>location on the sensor you are capturing all colours? Then you are
>throwing away photons.

No. Foveon uses penetration depth to differentiate photon energy and
thus wavelength, but that isn't the only way, and spatial storage does
not mean that photons are "thrown away", as you call it, just because
that is what Bayer filtering does - and even Bayer throws away much
less than the 2/3 that you claim.

To explain how this is feasible, start by recognising that the image
sensing and storage operations are completely separate functions. The
sensor can image every part of the focal plane, whilst the sensing and
storage uses only part of it. That happens already, using microlenses.
These enable image capture across a much larger area than that of the
underlying photodiode. Indeed, recent gap-less microlens designs
capture 100% of the incident photons. Since that is today's technology
I don't expect you to disagree that it is practical - all of the
photons incident on the focal plane are captured by the microlenses.
Only the Bayer filter of current sensors causes some photons to be
filtered out in parts of the focal plane and thus lost. Without Bayer
filters, all of the photons would be stored on the underlying
photodiode area - but without colour discrimination.

Now, instead of uniform microlenses, let's replace them with
microprisms, highly chromatic microlenses, or even diffraction
gratings - indeed, almost any systematic structure on the focal plane
at this resolution, less than the wavelength of light, will produce
spatial chromatic dispersion. Rather than separating the wavelength by
penetration depth in the silicon, as with Foveon, you now have a
spatial chromatic separation. All of the incident photons are captured
and separated spatially by photon energy; wavelength; colour. Remember,
this is at a resolution well beyond anything the lens can achieve, so
there is no spatial image content at this level. All photons are
captured, and all produce photoelectrons which are localised by
wavelength at a resolution well in excess of optical resolutions.

Thus the spatial location of the photoelectron determines what
wavelength of photon generated it and, since we are grossly oversampling
the optical resolution, the electron density can readily be downsampled
to optical resolution levels in full colour by noise shaping filters as
in bitstream technology.

Bob Larter

unread,
May 3, 2009, 1:52:57 AM5/3/09
to
Alfred Molon wrote:
> In article <010520092149418345%nos...@nospam.invalid>, nospam says...
>
>> with foveon, there's also a loss of sensitivity from slicing a pixel
>> into three layers, so each 'colour' gets 1/3 the light (simplified),
>
> No. With a full colour sensor each photon is captured, while with Bayer
> 2/3 of the photons are thrown away.
>
>> along with the losses in the conversion to rgb.
>
> What losses do you mean?
>
>> in fact, bayer
>> *is* a form of slightly lossy compression.
>
> It's not. With bayer the missing colour data is guessed (incorrectly for
> most pixels).

You aren't related to George Preddy, are you?

--
W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est
---^----^---------------------------------------------------------------

Alfred Molon

May 3, 2009, 5:13:16 AM
In article <QGaqYXEIQL$JF...@kennedym.demon.co.uk>, Kennedy McEwen
says...


> Now, instead of uniform micro lenses, lets replace them with
> microprisms, or a highly chromatic microlenses, or even diffraction
> gratings - indeed, almost any systematic structure on the focal plane at
> this resolution, less than the wavelength of light, will produce spatial
> chromatic dispersion. Rather than separating the wavelength by
> penetration depth on the silicon, as with Foveon, you now have a spatial
> chromatic separation. All of the incident photons are captured and
> separated spatially by photon energy; wavelength; colour. Remember
> this is at a resolution well beyond anything the lens can achieve, so
> there is no spatial image content at this level. All photons are
> captured, all produce photoelectrons which are localised by wavelength
> at a resolution well in excess of optical resolutions.

Is it technically feasible to have a microprism for each separate
pixel? That would be 10-20 million microprisms, one per pixel. Seems
impossible to produce such a sensor. And the silicon below would need a
separate photodiode placed exactly where the split light beam arrives.
Individual colour pixel capacity would also be very small. But again, I
have my doubts that it is possible to produce such a sensor.

Kennedy McEwen

May 3, 2009, 11:11:41 AM
In article <MPG.24677e711...@news.supernews.com>, Alfred Molon
<alfred...@yahoo.com> writes

>In article <QGaqYXEIQL$JF...@kennedym.demon.co.uk>, Kennedy McEwen
>says...
>
>> Now, instead of uniform micro lenses, lets replace them with
>> microprisms, or a highly chromatic microlenses, or even diffraction
>> gratings - indeed, almost any systematic structure on the focal plane at
>> this resolution, less than the wavelength of light, will produce spatial
>> chromatic dispersion. Rather than separating the wavelength by
>> penetration depth on the silicon, as with Foveon, you now have a spatial
>> chromatic separation. All of the incident photons are captured and
>> separated spatially by photon energy; wavelength; colour. Remember
>> this is at a resolution well beyond anything the lens can achieve, so
>> there is no spatial image content at this level. All photons are
>> captured, all produce photoelectrons which are localised by wavelength
>> at a resolution well in excess of optical resolutions.
>
>Is it technically feasible to have a microprism for each separate pixel?
>That would be 10-20 million microprisms, one per pixel. Seems impossible
>to produce such a sensor.

Yes, it must be totally impossible to produce a sensor with a crude
optical element for every pixel. That could never be done. Nikon,
Canon, Sony, Casio, Kodak, Panasonic - they are all lying, because
Alfred Molon says it's impossible! I guess it never occurred to you
that a prism is even simpler than a microlens.

> And the below silicon would need to have a
>separate photodiode placed exactly where the split light beam is
>arriving. Individual colour pixel capacity would also be very small.

Once more - you really haven't got your head round this have you - the
small individual pixel capacity is the objective, not the problem! All
that matters is that you have approximately the same storage per unit
area as you have with current sensors, so that when the noise shaping
filter is applied you achieve the same dynamic range or better on the
optically matched downsampled images.

And another thing, I probably mentioned it in the original post, but it
hasn't come up since then: as you are sampling the image at far higher
resolution than the optical limit, there is no need for an anti-alias
filter. Similarly, the reconstruction filter needs no interpolation and
can have a spatial frequency response which is flat out to the Nyquist
limit of the sampling density. In fact, it can even have an MTF which
increases, to compensate to some extent for the optical MTF. As a
result, images will be significantly sharper - in a similar way that
images from a 20Mp sensor downsampled appropriately to 5Mp are far
sharper than images directly from any top of the range 5Mp sensor
available a few years ago.
