The first step was to eliminate any effect of light fall-off caused by
the lens, which was quite simple - eliminate the lens! ;-)
So I set up a 20mA current flowing through a 5mm AGI-5N3CMPW white LED
(data sheet at http://www.maplin.co.uk/Media/PDFs/n21by.pdf) to produce
a "point source of light".
The Canon 5D camera without lens was mounted on a tripod about 80cm from
this LED source in a darkroom, so the only light reaching the camera was
from the LED. With the camera facing the LED, the TTL meter indicated a
manual exposure of 1/80th second.
Two exposures were made in jpeg format at this exposure. The first with
the camera facing the light source, giving light perfectly perpendicular
to the sensor. The second with the camera rotated on the tripod head so
that the light source cast a shadow across the centre of the frame - the
most extreme angle of incidence that it is possible to create from any
lens through the Canon mount. In fact, since the lens requires a
physical mount, which takes up some space, this is actually a steeper
angle of incidence than would be possible with a real lens. This was
less simple to achieve, since the focus screen diffuses the lens mount
shadow quite a lot, so a couple of test shots were made and reviewed on
the LCD screen to get the shadow in the centre of the frame.
The two jpeg images were then imported into Photoshop and cropped to the
exposed areas; obviously only half of the frame was exposed at the
extreme angle, with the lens mount casting a shadow over the other half.
Histograms for the exposed areas were:
Central source: 73.80 Photoshop levels.
Edge source: 71.82 Photoshop levels.
This indicates a light fall-off due to extreme angle of incidence of
only 2.68%, or approximately 4 HUNDREDTHS of a stop!
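For anyone who wants to check the arithmetic, a few lines of Python
reproduce it (a sketch only - it treats the jpeg levels as directly
proportional to exposure, just as the figures above do):

  import math

  centre = 73.80   # mean Photoshop level, light perpendicular to sensor
  edge   = 71.82   # mean Photoshop level, light at the extreme angle

  print(f"fall-off: {(centre - edge) / centre:.2%}")          # ~2.68%
  print(f"difference: {math.log2(centre / edge):.3f} stops")  # ~0.039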
This simple test, which *anyone* can independently repeat with their own
camera and confirm for themselves, categorically *PROVES* that there is
essentially *NO* sensitivity to angle of incidence on the Canon 5D
sensor (and I suspect *any* dSLR sensor!).
Any light fall off that is present on this camera is, to all intents and
purposes, *EXACTLY* the same as it was for full frame film cameras - the
effect is *ALL* in the lens.
This simple test, which *anyone* can independently repeat with their own
camera and confirm for themselves, also completely *debunks* one of the
primary Olympus arguments in favour of the 4/3 format!
By the way, this is *NOT* a test for those of a nervous disposition
likely to be scared by seeing dust on their focal plane.  A 5mm source
at 80cm range is equivalent to shooting at f/160 (800mm/5mm) and every
single speck shows up.  The test images from my apparently clean 5D sensor looked
positively filthy! ;-) I would, however, be interested to hear from
any Olympus owners who try it, since it would show whether the
ultrasonic cleaner really does work.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
> The first step was to eliminate any effect of light fall-off caused by
> the lens, which was quite simple - eliminate the lens! ;-)
Not a valid test, since it's the lens that causes extreme angle of
incidence of light on the sensor.
--
Jeremy | jer...@exit109.com
Don't be an idiot: the sensor doesn't know (let alone care) where the
photons are coming from.
> Don't be an idiot: the sensor doesn't know (let alone care) where the
> photons are coming from.
So you're saying that the light falling on the sensor is the same with or
without a lens? That's just not the case.
--
Jeremy | jer...@exit109.com
> eawck...@yahoo.com <eawck...@yahoo.com> wrote:
>
> > Don't be an idiot: the sensor doesn't know (let alone care) where the
> > photons are coming from.
>
> So you're saying that the light falling on the sensor is the same with or
> without a lens? That's just not the case.
One of the idiot theories being bandied about here is that there is
something special about a digital sensor in that it can't handle light
"landing" on the pixel at a high angle of incidence.
http://en.wikipedia.org/wiki/Angle_of_incidence
Look at the diagram. The pixel has no idea where that photon is coming
from, beyond its angle of arrival. None. It matters zero whether the
last thing the photon touched was a piece of glass or the PN junction
in an LED.
Basically, McEwen has shown the reigning (idiot) theory is, as
expected, total bullshit. His simple demonstration essentially makes
most of the Nikon Nutcases here look like ignoramuses, and if they
persist in the face of physical reality, ineducable idiots.
I know you are smart Jeremy. Please make the right choice.
> Look at the diagram. The pixel has no idea where that photon is coming
> from, beyond its angle of arrival.
I don't know sensor design. But people who do say differently, especially
with the Bayer filter and microlenses and whatnot involved.
> Basically, McEwen has shown the reigning (idiot) theory is, as
> expected, total bullshit. His simple demonstration essentially makes
> most of the Nikon Nutcases here look like ignoramuses, and if they
> persist in the face of physical reality, ineducable idiots.
"Nikon Nutcases" is a pretty silly indictment. I have nothing against Canon.
I see no real advantage in 35mm sensors, from Canon or Nikon or anyone else.
--
Jeremy | jer...@exit109.com
It's a pity you went to all that trouble to carry out a completely
irrelevant and grossly misleading test.
To be of any worth, your point source should have been somewhere close
to the distance from the sensor represented by the rear element of a
lens such as an EF 24mm or 28mm. Only then would you have seen the
significant difference in illumination caused by the changed angle of
incidence - you would also have to make an allowance for the decreased
illumination of the corners due to the inverse square law, but the
light fall off due to incident angle to the sensor would still have
been clearly apparent.
So what did you do instead? You put the light source 80cm away so the
angle of incidence at the corner was only 89.93 degrees rather than
90.00 degrees at the centre. 0.07 degrees different? It's so small,
it is almost impossible to measure. No wonder there was no difference
in illumination!
What a complete waste of time. You have proved nothing - except your
own complete inability to understand the problem.
Light fall-off is not caused by the sensor.
idiot theories
reigning (idiot)
total bullshit
Nikon Nutcases
ignoramuses
ineducable idiots
Wow, a most intelligent presentation.
> Two exposures were made in jpeg format at this exposure. The first with
> the camera facing the light source, giving light perfectly perpendicular
> to the sensor. The second with the camera rotated on the tripod head so
> that the light source cast a shadow across the centre of the frame - the
> most extreme angle of incidence that it is possible to create from any
> lens through the Canon mount. In fact, since the lens requires a
> physical mount, which takes up some space, this is actually a steeper
> angle of incidence than would be possible with a real lens. This was
> less simple to achieve, since the focus screen diffuses the lens mount
> shadow quite a lot, so a couple of test shots were made and reviewed on
> the LCD screen to get the shadow in the centre of the frame.
>
> The two jpeg images were then imported into Photoshop and cropped to the
> exposed areas; obviously only half of the frame was exposed at the
> extreme angle, with the lens mount casting a shadow over the other half.
> Histograms for the exposed areas were:
>
> Central source: 73.80 Photoshop levels.
> Edge source: 71.82 Photoshop levels.
>
> This indicates a light fall-off due to extreme angle of incidence of
> only 2.68%, or approximately 4 HUNDREDTHS of a stop!
>
> This simple test, which *anyone* can independently repeat with their own
> camera and confirm for themselves, categorically *PROVES* that there is
> essentially *NO* sensitivity to angle of incidence on the Canon 5D
> sensor (and I suspect *any* dSLR sensor!).
First, let me say that I take no brand side in these DSLR battles
and I think this is a big non-issue and people should go make
pictures rather than flaming each other over spec-sheet
hypotheticals. Also I am in favor of measuring things rather
than theorizing in a vacuum.
That said, I don't understand how you could get the same light
level in this experiment. The LED is at the same distance
from the sensor in both cases. But in the second case, the
camera is rotated, so each pixel surface is at an angle to the
LED. Put another way, from the LED's position, each pixel
subtends a smaller angle when rotated, so it should receive a
smaller amount of light. The amount of light falling on a
given pixel should be smaller by cos(theta), where theta is
the angle by which you rotated the camera. This is
independent of any arguments about whether or not the
pixel _sensitivity_ is dependent on angle of incidence - it
would be equally true of film.
Assuming that the Canon sensor to mount distance is 44mm
and the diameter of the lens mount is 54mm, you would have
rotated the camera by about theta = arctan(27/44) = 31.5 deg
to put the lensmount shadow on the center of the sensor.
Then the cos(theta) factor should be 0.85, so the signal
should be 15% lower even if the pixels are completely
insensitive to angle of incidence.
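For anyone checking the numbers, the same calculation in Python (the
44mm and 54mm figures are my assumptions, as stated above):

  import math

  flange = 44.0   # assumed Canon EF flange-to-sensor distance, mm
  throat = 54.0   # assumed lens mount inner diameter, mm

  theta = math.atan((throat / 2) / flange)   # camera rotation, radians
  print(f"theta = {math.degrees(theta):.1f} deg")         # ~31.5 deg
  print(f"cos(theta) loss = {1 - math.cos(theta):.0%}")   # ~15%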
I don't know enough about how the in-camera processing
determines the output jpeg pixel values. Is it possible that you
need to shoot raw to do this test?
> So what did you do instead? You put the light source 80cm away so the
> angle of incidence at the corner was only 89.93 degrees rather than
> 90.00 degrees at the centre. 0.07 degrees different? It's so small,
> it is almost impossible to measure. No wonder there was no difference
> in illumination!
He put the source 0.8m away and rotated the sensor. I guess you missed
that part, eh?
> What a complete waste of time. You have proved nothing - except your
> own complete inability to understand the problem.
You are beyond all possible hope, Polson.
> What makes you think the sensor treats light from a lens any differently
> from any other source of light?
It's not a matter of the sensor treating it differently. The light is
different coming from a lens than what you've tested.
> If the detector has less sensitivity to light from the lens at extreme
> angles then it has less sensitivity to light from any source at this same
> angle.
Your experiment doesn't test this. Your light source was so far away from
the sensor that it has no real relevance to what happens when a lens is on the
camera.
--
Jeremy | jer...@exit109.com
Does the lens matter? Yes, obviously. A lens will vignette, and this
test doesn't refute that. It merely suggests that the vignette will be
similar to what was seen on film.
I'd like to see the test shots for myself, but the test itself seems
pretty strongly conceived.
Will
> Wow, a most intelligent presentation.
Don't like what I write, killfile me. If you identify the newsreader
you are using, I'll happily google up the instructions for you.
> Isn't it? The lens is going to reduce transmission, and it is going to
> focus the photons, directing them from the scene onto very specific
> parts of the sensor at angles ranging from 0 to (perhaps) 90 degrees.
A pixel on the sensor is not illuminated by a point light source. It's
a circular light source. From the perspective of the edges of the sensor
it is an elliptical source. You can't model this by envisioning a single
ray coming from a point in the scene and landing on the corresponding
point on the sensor, because that's just not how it works, a fact you can
easily demonstrate by, for example, noting the effect that blocking a
small portion of the lens has on the final image.
--
Jeremy | jer...@exit109.com
> > Look at the diagram. The pixel has no idea where that photon is coming
> > from, beyond its angle of arrival.
>
> I don't know sensor design. But people who do say differently, especially
> with the Bayer filter and microlenses and whatnot involved.
They can say what they like, but the data remains.
>> Battleax wrote:
>> Don't be an idiot
>>idiot theories
>>reigning (idiot)
>>total bullshit
>>Nikon Nutcases
>>ignoramuses
>>ineducable idiots
>> Wow, a most intelligent presentation.
>
> Don't like what I write, killfile me. If you identify the newsreader
> you are using, I'll happily google up the instructions for you.
>
You prove you have electronics training, but you remain uneducated.
> You prove you have electronics training, but you remain uneducated.
I am not here to win friends and awards, fucktard. Once again: if you
don't like what I write, then killfile me. What is your newsreader?
> Kennedy McEwen <r...@kennedym.demon.co.uk> wrote:
>
> > What makes you think the sensor treats light from a lens any differently
> > from any other source of light?
>
> It's not a matter of the sensor treating it differently. The light is
> different coming from a lens than what you've tested.
The sensor has no idea. It doesn't even "know" there is (or isn't) a
lens! It just responds to light. It's why we call them "sensors".
> > If the detector has less sensitivity to light from the lens at extreme
> > angles then it has less sensitivity to light from any source at this same
> > angle.
>
> Your experiment doesn't test this. Your light source was so far away from
> the sensor that it has no real relevance to what happens when a lens is on the
> camera.
The sensor doesn't know or care how far the photon travelled.
> A pixel on the sensor is not illuminated by a point light source. It's
> a circular light source. From the perspective of the edges of the sensor
> it is an elliptical source. You can't model this by envisioning a single
> ray coming from a point in the scene and landing on the corresponding
> point on the sensor, because that's just not how it works,
What the hell are you babbling about? From the geometrical optic
perspective that is _exactly_ how it works. It's why we call them
"images": for each point on the focal plane, there is a corresponding
direction in space from whence the photon really came.
> a fact you can
> easily demonstrate by, for example, noting the effect that blocking a
> small portion of the lens has on the final image.
Yes, it has no effect at all on the projection. I don't understand why
you believe the sensor knows or cares about any thing but counting
photons.
Correct me if I am wrong, but I thought that angle of incidence had to do
with the angle at which the light strikes the sensor. A point source of
light at 80cm is going to have a very low angle of incidence--better to put
your point source at the nominal exit pupil position of a lens (just in
front of the mirror using a lens so that it is spread evenly across the
sensor. That would give you a truer representation of the incident angle of
light leaving the lens. Or do I totally have my head up my ass here?
Toby
>> If the detector has less sensitivity to light from the lens at extreme
>> angles then it has less sensitivity to light from any source at this same
>> angle.
>
>Your experiment doesn't test this. Your light source was so far away from
>the sensor that it has no real relevance to what happens when a lens is on the
>camera.
>
The reason the light source was placed so far away was so that the angle
of incidence could be changed by one, and only one, adjustment -
changing the angle of focal plane relative to the light source. What
other variation would you suggest is necessary?
Must be a troll.
That page shows no microlens. No sensor. Doesn't even mention the
word 'pixel' despite the poster's 'convenient' suggestion that it does.
In fact it has sweet fa to do with a light sensor, and merely shows a
*reflection*.
Here's just *one* REAL link on the topic, that is actually relevant.
http://micro.magnet.fsu.edu/primer/digitalimaging/cmosimagesensors.html
"The shape of the miniature lens elements approaches that of a convex
meniscus lens and serves to focus incident light directly into the
photosensitive area of the photodiode... over 70 percent of the
photodiode area may be shielded by transistors and stacked or
interleaved metallic bus lines, which are optically opaque and absorb
or reflect a majority of the incident photons colliding with the
structures. These stacked layers of metal can also lead to undesirable
effects such as vignetting, pixel crosstalk, light scattering, and
diffraction..... Reflection and transmission of incident photons occurs
as a function of wavelength, with a high percentage of shorter
wavelengths (less than 400 nanometers) being reflected, although these
losses can (in some cases) extend well into the visible spectral
region.... Although the application of microlens arrays helps to focus
and steer incoming photons into the photosensitive region and can
double the photodiode sensitivity, these tiny elements also demonstrate
a selectivity based on wavelength and incident angle."
Go visit the link - it has diagrams that actually show a real sensor,
not a flaming mirror.
Sheesh.
>Assuming that the Canon sensor to mount distance is 44mm
>and the diameter of the lens mount is 54mm, you would have
>rotated the camera by about theta = arctan(27/44) = 31.5 deg
>to put the lensmount shadow on the center of the sensor.
>Then the cos(theta) factor should be 0.85, so the signal
>should be 15% lower even if the pixels are completely
>insensitive to angle of incidence.
>
As I mentioned in another post, I was surprised the numbers were so
close, since I was anticipating having some more complex calculations to
make. I only measured two angles, normal and extreme left side. I'll
try others, as well as tilting the camera up and down through the
extremes to see if there is any sensitivity to rotational angle of
incidence on the sensor, and I want to average over several frames at
each angle to reduce whatever shutter variation the camera has, but it
might be a few days before I get time to do a more complete analysis.
In the Canon lens mount, this angle cannot exceed about 26deg, due to
the need for the mirror to clear the rear lens element and the limited
diameter of the lens mount.
That is an effect of the lens - it is *NOT* a matter that differentiates
the sensitivity to angle of incidence between film and digital sensors.
> You can't model this by envisioning a single
>ray coming from a point in the scene and landing on the corresponding
>point on the sensor, because that's just not how it works, a fact you can
>easily demonstrate by, for example, noting the effect that blocking a
>small portion of the lens has on the final image.
>
Jeremy, have you ever done *any* lens design? Have you ever spoken to
any lens designers? I have access to a whole team of lens designers
working with me every day of the week! I can assure you that every lens
*is* modelled by the effect of single rays of light and the difference
between these single rays is what determines how the lens behaves, how
flat the field is, what geometric distortion and aberrations it
produces. Every lens is modelled and designed this way, not just by my
guys, but by every lens designer that has ever turned his hand to the
skill.
>To be of any worth, your point source should have been somewhere close
>to the distance from the sensor represented by the rear element of a
>lens such as an EF 24mm or 28mm.
No - if the point source was close to the focal plane then there will be
a change of angle of incidence *across* the focal plane. That is not
what I wanted to achieve. By placing the source far away from the focal
plane that effect is minimised so that the angle of incidence is the
same for *all* pixels in the frame and controlled simply by rotating the
camera relative to the point source.
> Only then would you have seen the
>significant difference in illumination caused by the changed angle of
>incidence - you would also have to make an allowance for the decreased
>illumination of the corners due to the inverse square law, but the
>light fall off due to incident angle to the sensor would still have
>been clearly apparent.
But I wasn't trying to measure the inverse square law! I was trying to
measure the alleged variation in sensitivity of digital sensors to angle
of incidence of, not distance from, the light source! This is the only
alleged difference between film and digital sensor - everything else,
inverse square law, angles of incidence etc. are *EXACTLY* the same
whether the medium is digital or film.
>
>So what did you do instead? You put the light source 80cm away so the
>angle of incidence at the corner was only 89.93 degrees rather than
>90.00 degrees at the centre. 0.07 degrees different?
>
I think you have misread the test.  I didn't measure a change of angle
across the field, which was specifically minimised, as you have
calculated, to 0.07 degrees. I rotated the camera so that the light
from the point source was just grazing the edge of the lens mount and
thus incident on the detector at a more extreme angle than any practical
lens can produce. All the pixels in the half of the frame that was
exposed thus see this light at approximately the same angle
(+/-0.035deg). I compared that with light that was directly
perpendicular to the sensor (+/-0.035deg). The difference between these
angles was in excess of 30deg.
> In what way is this light different - do all the photons carry
> individual "I came from the lens" ID tags?
They come from various points around the lens. You've not simulated this
in your test, eliminating variables that are present in reality.
> The reason the light source was placed so far away was so that the angle
> of incidence could be changed by one, and only one, adjustment -
> changing the angle of focal plane relative to the light source. What
> other variation would you suggest is necessary?
Using a lens.
--
Jeremy | jer...@exit109.com
> What the hell are you babbling about? From the geometrical optic
> perspective that is _exactly_ how it works. It's why we call them
> "images": for each point on the focal plane, there is a corresponding
> direction in space from whence the photon really came.
No, that's not how it works.
Light from a point in the scene goes to *all* points on the lens; the lens
then, assuming that point in the scene is in focus, converges them all back
upon that point on the sensor. If the point in the scene is at a different
distance from the focus distance, they don't completely converge, but form
a circle on the sensor.
The light hitting a pixel at the edge of the sensor does *not* all come
from the edge of the lens. An experiment eliminating the lens does not
simulate the light that hits the sensor from a lens.
--
Jeremy | jer...@exit109.com
>> A pixel on the sensor is not illuminated by a point light source. It's
>> a circular light source. From the perspective of the edges of the sensor
>> it is an elliptical source.
>
> That is an effect of the lens - it is *NOT* a matter that differentiates
> the sensitivity to angle of incidence between film and digital sensors.
It could be. You don't create a valid experiment by changing multiple
things and then claiming that your observations are due to one of the
changes, without accounting for the others.
> Jeremy, have you ever done *any* lens design?
No, I haven't.
And it doesn't matter; we're not talking about lens design.
> I can assure you that every lens *is* modelled by the effect of single
> rays of light and the difference between these single rays is what
> determines how the lens behaves,
Can you assure me that the light that falls on a pixel on the sensor can
be modeled as a single ray? No, you can't, because that's not how it
works, and from your statements, I can only assume you know that.
--
Jeremy | jer...@exit109.com
"Unlike film's silver halide crystals, which are distributed over a
flat surface and will react to light hitting from any incident angle,
the pixels of silicon require that light strike them within a much
smaller deviation from the perpendicular. To compensate for this
difference (i.e., to redirect incoming light impacting the pixels from
different incident angles so the pixels receive a higher electric
charge), sensor designers bond a domed micro lens over every pixel.
This increases the angular response of the pixels and, hence, the
photosensitivity of the sensor."
1. Tell us, why *do* they fit microlenses to sensors?
2. Will a photon at 5 degrees result in a 'hit' on a sensor? (hint -
almost certainly not)
3. At what angle *does* it get a guaranteed hit?
4. What about film at 5 degrees? (hint - yes)
5. Was that the same answer as 2?
6. Will the sensor hit be accurate, ie the correct color?
7. Are there other issues?
I could go on, but my point is that simplistic comments and irrelevant
links do not an argument make.
So I'm not at all convinced Kennedy's experiment is correct; I believe
he has missed some basic issues, eg:
- what is the *real* angle of light coming from a *real* wide angle's
rear element? is the angle he was able to achieve really in the same
ball park?
- can the real, and quite complex, situation where a single sensor
element is receiving light rays from the whole or a large part of the
rear element, be modelled with a single point source?
- are there other issues here, eg blooming, diffraction, fringing,
internal reflections, incorrect colour, all of which may interact to
add or alter the result, that are being ignored/not measured?
It's interesting, but to blithely claim that the angle of incidence is
irrelevant on the basis of this simplistic test, or of e's opinion
(offered without supporting links, I note) set against 'some people who
design sensors' (unnamed, of course), seems to reflect more on the
claimant than it does on reality.  Can we see the links?  Otherwise it
is hearsay...
Forgive me if I am more interested in the opinions of educational
institutions and sensor designers and scientists, than Mr 'e...@yahoo',
especially given that lame link.
Actually, he did test it, and in a more accurate way. By using an
almost point source, and varying the angle, all angles can be
tested.  If there were detectable fall-off, then translating the
numbers to a real lens would need some math, which is what you are
alluding to, I think. But by testing individual angles, one could
see where the problem begins, and provide data for an accurate
model. The fact that little fall off was observed means one
does not need to do any more tests and modeling. The only thing
I didn't see in Kennedy's post was what the maximum angle was.
>>The reason the light source was placed so far away was so that the angle
>>of incidence could be changed by one, and only one, adjustment -
>>changing the angle of focal plane relative to the light source. What
>>other variation would you suggest is necessary?
>
> Using a lens.
That introduces other variables that would make results harder
to interpret as it has already been established that there is
no commercial camera lens in production with zero light fall off
that any of us could buy.
Roger
> eawck...@yahoo.com <eawck...@yahoo.com> wrote:
>
>
>>What the hell are you babbling about? From the geometrical optic
>>perspective that is _exactly_ how it works. It's why we call them
>>"images": for each point on the focal plane, there is a corresponding
>>direction in space from whence the photon really came.
>
>
> No, that's not how it works.
Yes it is.
>
> Light from a point in the scene goes to *all* points on the lens; the lens
> then, assuming that point in the scene is in focus, converges them all back
> upon that point on the sensor. If the point in the scene is at a different
> distance from the focus distance, they don't completely converge, but form
> a circle on the sensor.
Think of an individual photon. It travels in a straight line and hits
the lens in a particular spot; its direction is changed as it enters the
glass, etc until it emerges from the other side of a series of
lenses. Now do another photon hitting another part of the lens.
In lens design, this is called a ray, and tracking photon
paths through the lens is called ray tracing.
A typical ray tracing program tracks thousands of photon paths
through the system. I have written ray tracing programs and published
results from those programs.
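To make that concrete, here is a minimal sketch of the one step a ray
tracer repeats at every surface - Snell's law, n1*sin(i) = n2*sin(r) -
for a toy flat interface (real programs handle curved surfaces in 3D):

  import math

  def refract(angle_in, n1, n2):
      # Snell's law at an interface; angles measured from the normal
      s = n1 * math.sin(angle_in) / n2
      if abs(s) > 1.0:
          return None   # total internal reflection, no refracted ray
      return math.asin(s)

  # a ray hitting glass (n ~ 1.52) at 30 degrees from the normal
  r = refract(math.radians(30), 1.0, 1.52)
  print(f"refracted to {math.degrees(r):.1f} deg")   # ~19.2 deg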
>
> The light hitting a pixel at the edge of the sensor does *not* all come
> from the edge of the lens. An experiment eliminating the lens does not
> simulate the light that hits the sensor from a lens.
Yes, Kennedy's experiment does, but it tracks only one angle at a time.
That is a good thing: it is scientifically better because it takes out
a lot of variables that could confuse interpretation.
Roger
> That is absolutely correct and demonstrates that the sensor is
> lambertian, just like most other reflecting and absorbing surfaces. A
> sheet of paper doesn't appear any darker when viewed or lit off-normal
> to when it is viewed or lit perpendicular, yet that sheet of paper
> subtends a solid angle to the eye and the light source which is reduced
> proportional to the cosine of the angles, just as the sensor was.
Yes, but that's not really relevant to the test you want to make.
Lambertian-ness is really a statement about the surface's
diffuse reflecting properties. Here we are interested in its
absorbing properties. The thing is basically a photon counter.
When it's tilted relative to the source, it sees fewer photons
per surface area due to foreshortening. It has to get fewer counts
per pixel. As a reductio ad absurdum, imagine that you had a
bare sensor without a camera and you gradually turned it until
the sensor was nearly edge on to the light source - the signal
per area would have to drop to zero.
I'm assuming the shutter time was the same both times around,
so what I'm speculating is that the in-camera software commits
some kind of automatic gain adjustment to the raw data that
affects the numbers you see in the jpeg. I have no proof of
this and assume exposing in raw mode would prove or disprove it.
> eawck...@yahoo.com <eawck...@yahoo.com> wrote:
>
> > What the hell are you babbling about? From the geometrical optic
> > perspective that is _exactly_ how it works. It's why we call them
> > "images": for each point on the focal plane, there is a corresponding
> > direction in space from whence the photon really came.
>
> No, that's not how it works.
Come on, Mr. Nixon! Don't be foolish.
> Light from a point in the scene goes to *all* points on the lens; the lens
> then, assuming that point in the scene is in focus, converges them all back
> upon that point on the sensor. If the point in the scene is at a different
> distance from the focus distance, they don't completely converge, but form
> a circle on the sensor.
How the photons that arrive at a pixel make the trip is _irrelevant_.
If they all come in over a wide cone, or along a narrow cone, or even a
single line, the sensor is utterly ignorant of this fact: it just
counts them up. The sensor has no idea -- and neither does the person
looking at the final image -- if it is being illuminated by a pinhole
or super-expensive zero-aberration optic admitting a normal plane wave
(qualifiers omitted for space).
> The light hitting a pixel at the edge of the sensor does *not* all come
> from the edge of the lens. An experiment eliminating the lens does not
> simulate the light that hits the sensor from a lens.
The purpose of the experiment was (in essence) to measure the quantum
efficiency of a pixel as a function of angle of incidence. This
function is _intrinsic_ to the pixel -- not to the lens.
The data shows the function is a constant. (Maybe there are some wild
excursions between the two points, but given the data and physical
reality, I seriously doubt it.)
Now we can, as you irrationally demand, conduct the experiment using a
lens. But then we have the problem of deconvolving the _unknown_ light
fall-off of the lens. But even if we _knew_ this function, somehow,
and perfectly, the result obtained would be ... _the same as the one
observed by McEwen_.
>> The reason the light source was placed so far away was so that the angle
>> of incidence could be changed by one, and only one, adjustment -
>> changing the angle of focal plane relative to the light source. What
>> other variation would you suggest is necessary?
>
>Using a lens.
>
So are you changing your story now? You and others made repeated claims
that this was an effect of the sensor, and that the effect was different
on digital sensors to film. The test shows it clearly isn't present on
digital sensors, can only come from the lens, and must therefore also be
present to exactly the same degree on film.
You were one of those making the claim that it was a sensor defect. Now
you claim you need to add a lens to show that sensor defect because the
sensor on its own is defect free. Yet you don't believe that the effect
is due to *only* the lens. That is an absurd argument.
> Can you assure me that the light that falls on a pixel on the sensor can
> be modeled as a single ray?
Yes, that is how it works. The complete model of the system is
multiple rays, but understanding what every ray does is what
is important.  Kennedy showed that with two rays: one perpendicular
and one about 30 degrees off normal.  The advantage of the experiment
is it takes the lens out of the system so we really know what the
sensor does. It would be nice to see more angles. From multiple
angles, one could then understand what the sensor response to
a lens would be. This test by itself does not do that, except
that finding no variation then implies no variation with
a lens either (the sensor response part).  The lens will still
have light fall off.
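As a sketch of what that modelling step would look like - the response
function here is just a stand-in where measured per-angle data would
go; Kennedy's two points suggest it is ~1.0 out past 30 degrees:

  import math

  def sensor_response(theta):
      # stand-in per-angle pixel response; insert measured data here
      return 1.0

  def pixel_signal(cone_half_angle, steps=1000):
      # integrate the response over the cone of rays a pixel sees from
      # a uniformly bright exit pupil (on-axis, symmetric case)
      total = weight = 0.0
      for i in range(steps):
          theta = (i + 0.5) / steps * cone_half_angle
          w = math.sin(theta) * math.cos(theta)   # solid-angle weight
          total += sensor_response(theta) * w
          weight += w
      return total / weight

  print(pixel_signal(math.radians(15)))   # ~f/1.9 cone -> 1.0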
Roger
> An experiment eliminating the lens does not
>simulate the light that hits the sensor from a lens.
It doesn't have to! Removing the lens from the measurement eliminates
all other variables of lens performance, such as transmission
variations, vignetting etc. and permits *only* the sensitivity of the
sensor to angle of incidence of light from whatever source to be
assessed.
No variation has been measured between the most extreme angles (which
exceed any of the angles that a physical lens could produce at the field
corners) and perpendicular incidence (which a long focal length high f/#
lens would produce). Consequently, your original arguments that such a
sensitivity variation to angle of incidence existed and was responsible
for different light fall off between full frame film and full frame
digital sensors have been disproven, at least in the case of the Canon
5D. There are other sources of light fall off, but these are all
present in the lens and thus present equally in both film and digital
sensors.
But he backed up his argument with words like idiot, nutcases, ignoramuses.
Surely this holds some sway over your intelligent, well-researched response
:)
> Kennedy McEwen wrote:
>
>
>>That is absolutely correct and demonstrates that the sensor is
>>lambertian, just like most other reflecting and absorbing surfaces. A
>>sheet of paper doesn't appear any darker when viewed or lit off-normal
>>to when it is viewed or lit perpendicular, yet that sheet of paper
>>subtends a solid angle to the eye and the light source which is reduced
>>proportional to the cosine of the angles, just as the sensor was.
>
>
> Yes, but that's not really relevant to the test you want to make.
> Lambertian-ness is really a statement about the surface's
> diffuse reflecting properties. Here we are interested in its
> absorbing properties. The thing is basically a photon counter.
> When it's tilted relative to the source, it sees fewer photons
> per surface area due to foreshortening. It has to get fewer counts
> per pixel. As a reductio ad absurdum, imagine that you had a
> bare sensor without a camera and you gradually turned it until
> the sensor was nearly edge on to the light source - the signal
> per area would have to drop to zero.
>
> I'm assuming the shutter time was the same both times around,
> so what I'm speculating is that the in-camera software commits
> some kind of automatic gain adjustment to the raw data that
> affects the numbers you see in the jpeg. I have no proof of
> this and assume exposing in raw mode would prove or disprove it.
This is a good point. The foreshortening should reduce the number of
photons per pixel, independent of reflection or absorption
by the pixel, and independent of Lambertian or not. Think
of a pixel as a little square hole. As you tilt it, the
number of photons going through the hole gets smaller,
because the projected area gets smaller.
Roger
I just checked specifications of an 8Mp Kodak CCD sensor, 4/3 size.
OK, so it's not a great sensor, and maybe it is completely incomparable
to a Canon CMOS.. (does anyone have a link to the *full* tech specs of
any Canon or Sony sensor?)
In those specs, a simple graph shows the sensitivity to light incident
angle. At just 15 degrees (ie 75 degrees from the sensor face), the
light sensitivity is down by about 20%. At 25 degrees, it is nearer to
50%. The graph does not go any further... In fact there is a related
recommendation of keeping the incident angle to within 12 degrees of
straight on, to avoid noticeable vignetting.
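Taking the two quoted figures at face value, a crude interpolation
(mine, not Kodak's) shows how far that curve sits from Kennedy's 2.68%:

  # quoted KAF-8300 figures: ~20% loss at 15 deg, ~50% at 25 deg
  def quoted_loss(angle_deg):
      # crude linear interpolation between the two quoted points only
      return 0.20 + 0.30 * (angle_deg - 15.0) / 10.0

  for a in (15, 20, 25):
      print(f"{a} deg: ~{quoted_loss(a):.0%} loss")
  # Kennedy's test angle (~30 deg or more) is off the end of the graph,
  # but the trend sits far above the 2.68% he measured.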
Hmmm. Who to believe... So is the Canon sensor performance strikingly
different to this? Does anyone have links to similar specs for other
sensors?
The link is:
http://www.kodak.com/global/plugins/acrobat/en/digital/ccd/products/fullframe/KAF-8300CELongSpec.pdf
But be warned, it says it is 836 Mb!! Something tells me that must be
a misprint, but it is clearly a very large file, and I aborted before
it finished. The graph is Figure 5, I think..
> That is correct, I have measured the difference between two angles of
> incidence on the detector, the extreme conditions. I would expect any
> difference to show up at the extremes. It doesn't, so at what angles do
> you believe this angular sensitivity that you have been claiming will be
> present?
I have absolutely no idea.
>> Using a lens.
>
> So are you changing your story now?
No. I'm simply looking for a test that doesn't introduce changes whose
effects are not satisfactorily explained.
> You were one of those making the claim that it was a sensor defect. Now
> you claim you need to add a lens to show that sensor defect because the
> sensor on its own is defect free. Yet you don't believe that the effect
> is due to *only* the lens. That is an absurd argument.
No, it's not. I suggest testing film vs. digital by making film vs. digital
the only thing changed in the test. You can eliminate the falloff from the
lens by measuring the difference between the two.
I don't *care* which way the test turns out; I have no stake in it. But I
would like to know one way or the other.
Well, that's not even entirely true. Since I think we *are* headed in the
direction of having 35mm sensors in digital cameras, I would actually prefer
that you're right about this. (Of course, there are other downsides to the
35mm sensor format that are put forth, but this is one of them, and I'd
like it to be entirely fictional; I haven't been convinced that it is.)
--
Jeremy | jer...@exit109.com
> Come on, Mr. Nixon! Don't be foolish.
You must really hate being taken seriously.
> Now we can, as you irrationally demand, conduct the experiment using a
> lens. But then we have the problem of deconvolving the _unknown_ light
> fall-off of the lens. But even if we _knew_ this function, somehow,
> and perfectly, the result obtained would be ... _the same as the one
> observed by McEwen_.
Okay. So why not do it? We *can* measure the falloff from the lens as
opposed to the sensor by simply using film rather than digital.
--
Jeremy | jer...@exit109.com
> eawck...@yahoo.com <eawck...@yahoo.com> wrote:
>
> > Come on, Mr. Nixon! Don't be foolish.
>
> You must really hate being taken seriously.
You'll have to excuse me if I do not understand why a previously sane
entity is in the process of going off the deep end into irrationality.
But at least you now appear to be backing away from the abyss...
> > Now we can, as you irrationally demand, conduct the experiment using a
> > lens. But then we have the problem of deconvolving the _unknown_ light
> > fall-off of the lens. But even if we _knew_ this function, somehow,
> > and perfectly, the result obtained would be ... _the same as the one
> > observed by McEwen_.
>
> Okay. So why not do it?
For the reasons everyone is telling you: we can only estimate the
effect, we don't know (but probably can assume) what the
QE(angle-of-incidence) is for film, and a host of other problems ...
while on the other hand we now have a perfectly good, trivial, and
robust way of directly measuring the function for a pixel. Someone has
handed you a means to obtain easy knowledge on a silver platter. What
is there to dislike?
> For the reasons everyone is telling you: we can only estimate the
> effect, we don't know (but probably can assume) what the
> QE(angle-of-incidence) is for film, and a host of other problems ...
We can know simply by measuring it, which has the additional side effect
of being *exactly* the thing we're trying to measure. If you want to
know the difference between using film and using digital, why not measure
the difference between using film and using digital, rather than measuring
something else entirely?
--
Jeremy | jer...@exit109.com
>> Jeremy, have you ever done *any* lens design?
>
>No, I haven't.
>
>And it doesn't matter; we're not talking about lens design.
>
You really are making things difficult for yourself. You have been
talking about how a lens functions - how exactly do you think a lens
design is undertaken without considering how the lens functions? Each
possible ray path must be traced through the lens.
>> I can assure you that every lens *is* modelled by the effect of single
>> rays of light and the difference between these single rays is what
>> determines how the lens behaves,
>
>Can you assure me that the light that falls on a pixel on the sensor can
>be modeled as a single ray?
I did not say that the light falling on a single pixel could be modelled
as a single ray, it is clearly a number of rays extending over a range
of incident angles. I tested the sensitivity of the sensor at two
extremes of that range and found essentially no difference. The reason
for choosing the extremes was simple - if there is a difference it is
most likely to show up at the extremes. It didn't.
It is just possible, but unlikely, that some intermediate angles do
produce some significant variation and I certainly plan to test other
angles in due course.  However, when, as I now expect, each of those
lesser incident angles shows no essential difference, then clearly the
sum of any subset of all the incident angles will be the same, and thus
demonstrate fully that your claimed sensor response is completely bogus.
I am sure you know about Kirchhoff's laws, which relate to this:
incident = reflected + transmitted + absorbed.
In this case, transmitted is zero, since the sensor is opaque. You have
accepted that reflected is the same cosine law as lambertian, so
absorbed, and thus detected, is simply the difference.
> The thing is basically a photon counter.
>When it's tilted relative to the source, it sees fewer photons
>per surface area due to foreshortening. It has to get fewer counts
>per pixel. As a reductio ad absurdum, imagine that you had a
>bare sensor without a camera and you gradually turned it until
>the sensor was nearly edge on to the light source - the signal
>per area would have to drop to zero.
>
However, few surfaces are perfectly lambertian right out to extreme
angles of incidence and I doubt that the sensor is either, so your
reduction doesn't apply.
>I'm assuming the shutter time was the same both times around,
Yes - unchanged and the camera was in manual mode.
>so what I'm speculating is that the in-camera software commits
>some kind of automatic gain adjustment to the raw data that
>affects the numbers you see in the jpeg. I have no proof of
>this and assume exposing in raw mode would prove or disprove it.
>
That would mean it would be impossible to underexpose a jpeg even in
manual mode, which clearly is perfectly possible as other posts have
demonstrated.
No, a point source at a long distance has a low *variation* in the
range of angles of incidence across the field.  The average angle of
incidence can
be varied over the larger range that the lens mount will permit to enter
the camera. As other contributors, both negative and positive, have
noted, the total range can be varied by +/-31.5deg, which is more than
any practical lens can achieve in the same mount, with a variation at
any specific angle of +/-0.035deg across the field.
>--better to put
>your point source at the nominal exit pupil position of a lens (just in
>front of the mirror), using a lens so that it is spread evenly across the
>sensor. That would give you a truer representation of the incident angle of
>light leaving the lens.
With the light source right in front of the mirror there will be a
range of angles across the field - however, there will also be a range
of distances from the light source, and we know that the light
intensity follows an inverse square law.  In addition, the light
distribution from the LED
has an angular variation, which would be added to any sensor angular
response and inverse square law intensity variation. It would then be
very difficult, if not impossible, to separate all of the contributing
variables. By placing the point source a significant distance from the
focal plane all of the pixels get the same light intensity incident at
the same angle. That angle can be changed by rotating the focal plane
relative to the line between it and the light source.
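A rough sketch shows why the distance matters (my assumptions: a 36mm
frame width, a point source with no beam pattern of its own, and just
the inverse square law plus simple obliquity):

  import math

  def edge_vs_centre(source_dist_mm, half_width_mm=18.0):
      # illumination at the frame edge relative to the centre for an
      # on-axis point source: inverse-square factor times cos(theta)
      cos_theta = source_dist_mm / math.hypot(source_dist_mm, half_width_mm)
      return cos_theta ** 3

  for dist in (44.0, 800.0):   # near the exit pupil vs the 80cm used
      print(f"source at {dist:4.0f}mm: edge/centre = "
            f"{edge_vs_centre(dist):.4f}")   # ~0.79 vs ~0.999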
The test is very simple to conduct and it is simple to extend to
numerous angles, which is why I provided details of it - so that others
can, if they wish, make the same or similar tests on their cameras. Not
all digital sensors are the same, but I doubt that dSLR sensors will
vary significantly. However, for all we know, the small pixel APS and
4/3 sensors might actually have more of an angular sensitivity variation
than the larger FF sensors, or CCDs could be worse than CMOS - now that
would be a turn-up for the books!  It's a simple test, and it measures
exactly what it claims to measure - the angular response of the pixels.
How do you think the lens can create a larger angle of incidence than
what is achievable through the lens mount? There is nothing to bend the
light to a greater angle between the rear element of the lens, which
must be far enough forward to clear the mirror, and the focal plane. The
rear lens element must fit into the lens mount and thus must be smaller
than it.
If you can suggest any way of a lens creating a larger angle of
incidence on a dSLR then propose it.
> We can know simply by measuring it, which has the additional side effect
> of being *exactly* the thing we're trying to measure. If you want to
> know the difference between using film and using digital, why not measure
> the difference between using film and using digital, rather than measuring
> something else entirely?
The whole point of this exercise has been to question the theory that
there is something special about digital sensors re: high angle of
incidence, vignetting, etc.
An experiment was proposed, results published. If the results stand,
no further measurements are needed: any observed vignetting must be
from the lens, not the sensor. Using a different protocol is possible,
but at far higher cost, comparatively speaking.  Why do more work for
the same result?
Now if you want to know how film performs, well, that's something you
can undertake I guess.
Even without pixel depth the angle straight on for half the sensor would
be a 17mm wide swath of light vs about a 16mm wide swath at 30 degrees
or roughly 5% loss. I just sketched it out with autocad & counted the
number of 1mm lines that hit... somewhere in that ballpark. Presumably
lens designers are already trying to compensate for that and they have
to let it slide sometimes as part of the give & take of lens design. But
anyways the LED test should be showing around 5% just due to angle so
something isn't working. Maybe with the microprisms poking up it doesn't
act like it's angled light as long as each microlens isn't shading the
next one?
Will
> ...the angle straight on for half the sensor would
> be a 17mm wide swath of light vs about a 16mm wide swath at 30 degrees
> or roughly 5% loss. I just sketched it out with autocad & counted the
> number of 1mm lines that hit... somewhere in that ballpark. ...
> anyways the LED test should be showing around 5% just due to angle so
> something isn't working. Maybe with the microprisms poking up it doesn't
> act like it's angled light as long as each microlens isn't shading the
> next one?
Another thing that might be a factor is the LED light is not focused & a
blurry image is going to spread out... although holding the light back
may fix that, I don't know. Light falloff could be obscured through a
lens by throwing the focus way off because that spreads things around.
> As Roger has explained, that is not exactly as simple as you think to
> make quantitative measurements.  The qualitative observations of
> people who have used the same lens on both film and digital, including
> myself, suggest that any difference which does exist is very small. The
> test was an attempt to measure the only variable that could account for
> such a difference if it was present - sensor angular response. It
> didn't find any in the most extreme case.
The problem is: you're saying that the people who design and make the
sensors themselves are either wrong or lying about this. Perhaps they
are, but this test has not convinced me that is the case.
> The test is very simple to conduct and it is simple to extend to
> numerous angles, which is why I provided details of it - so that others
> can, if they wish, make the same or similar tests on their cameras.
You're also suggesting that the sensor designers never thought of this
simple test, and simply agreed among themselves that the angle of
incidence is important without actually verifying it.
I'm sorry, but I'm going to need a little more to believe that.
--
Jeremy | jer...@exit109.com
>>http://en.wikipedia.org/wiki/Angle_of_incidence
>
>
> Must be a troll.
>
> That page shows no microlens. No sensor. Doesn't even mention the
> word 'pixel' despite the poster's 'convenient' suggestion that it does.
> In fact it has sweet fa to do with a light sensor, and merely shows a
> *reflection*.
>
> Here's just *one* REAL link on the topic, that is actually relevant.
> http://micro.magnet.fsu.edu/primer/digitalimaging/cmosimagesensors.html
here's another:
<http://www.microscopy.fsu.edu/primer/digitalimaging/concepts/microlensarray.html>
> "The shape of the miniature lens elements approaches that of a convex
> meniscus lens and serves to focus incident light directly into the
> photosensitive area of the photodiode... over 70 percent of the
> photodiode area may be shielded by transistors and stacked or
> interleaved metallic bus lines, which are optically opaque and absorb
> or reflect a majority of the incident photons colliding with the
> structures. These stacked layers of metal can also lead to undesirable
> effects such as vignetting, pixel crosstalk, light scattering, and
> diffraction..... Reflection and transmission of incident photons occurs
> as a function of wavelength, with a high percentage of shorter
> wavelengths (less than 400 nanometers) being reflected, although these
> losses can (in some cases) extend well into the visible spectral
> region.... Although the application of microlens arrays helps to focus
> and steer incoming photons into the photosensitive region and can
> double the photodiode sensitivity, these tiny elements also demonstrate
> a selectivity based on wavelength and incident angle."
>
> Go visit the link - it has diagrams that actually show a real sensor,
> not a flaming mirror.
>
> Sheesh.
>
But by removing the photons that would be arriving from many different
angles might he also have removed some side-effect of photons
interacting that would change the result? I'm not saying, I'm asking.
I know I don't really understand quantum mechanics, but what I've been
able to understand tells me that deconstructing a particle system in an
experiment like this _might_ produce a counterintuitive result.
> Following from the discussion here over the past few days about light
> fall off on the Canon 5D and whether this is worse that it was with film
> I decided to try a little experiment.
>
Why not just shoot the same scene using a wide angle lens on both a digital
and a film camera body, scan the film and compare them like that? If they
both end up with the same fall off, you've answered the question.
This other test sounds like a BS way of "proving" this, especially given you
wanted a specific outcome to your test before you did it.
--
Stacey
> I am sure you know about Kirchhoff's laws, which relate to this:
> incident = reflected + transmitted + absorbed.
>
> In this case, transmitted is zero, since the sensor is opaque. You have
> accepted that reflected is the same cosine law as lambertian, so
> absorbed, and thus detected, is simply the difference.
When the sensor is at an angle to the source, foreshortening
means fewer photons are incident on the sensor per area.
If the surface is matte, the reflected photons are redistributed
over all angles, with an intensity given by the cosine law, which
is a statement about Lambertianness (Lambertiality?) But
this can't change the fact that there are fewer photons incident;
the incident intensity depends on the angle of illumination,
although the fraction reflected may not.
<http://en.wikipedia.org/wiki/Lambert%27s_cosine_law>
"This means that although the radiance of the surface
depends on the angle from the normal to the illuminating
source, it will not depend on the angle from the normal
to the observer." (in the section on Lambertian reflectors)
> > The thing is basically a photon counter.
> >When it's tilted relative to the source, it sees fewer photons
> >per surface area due to foreshortening. It has to get fewer counts
> >per pixel. As a reductio ad absurdum, imagine that you had a
> >bare sensor without a camera and you gradually turned it until
> >the sensor was nearly edge on to the light source - the signal
> >per area would have to drop to zero.
> >
> However, few surfaces are perfectly lambertian right out to extreme
> angles of incidence and I doubt that the sensor is either, so your
> reduction doesn't apply.
Independent of Lambertianness, think of how it behaves before
getting all the way to the extreme angle. Imagine that the angle
of incidence is 70 degrees off normal - there are cos(70) = 0.34
times as many photons incident per area. This is hard to see
by eye, but it's real. It's also why incident light meters use those
little white domes rather than a flat white disc.
It's like the higher latitudes in winter. The sun's rays are
incident on the earth at a greater angle, so there's less
energy/area, so it gets colder.
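The trend is easy to tabulate (pure geometry, nothing sensor-specific):

  import math

  for angle in (0, 15, 30, 45, 60, 70, 80):
      frac = math.cos(math.radians(angle))
      print(f"{angle:2d} deg off normal: {frac:.2f} of the photons/area")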
> >so what I'm speculating is that the in-camera software commits
> >some kind of automatic gain adjustment to the raw data that
> >affects the numbers you see in the jpeg. I have no proof of
> >this and assume exposing in raw mode would prove or disprove it.
> >
> That would mean it would be impossible to underexpose a jpeg even in
> manual mode, which clearly is perfectly possible as other posts have
> demonstrated.
All I'm questioning is the mapping between the pixel numbers
in the jpeg and the number of photons detected per pixel, which
is what we really want to know. One could take an underexposed
noisy picture, multiply all the pixel values by 100, and it would still
be an underexposed noisy picture.
The other possibility is that due to fluctuations in whatever
(shutter speed? LED output?) the test didn't detect the ~15%
difference that should be there due to a statistical fluke.
I kind of doubt this. But if it were true, it would mean you'd
have to take a number of exposures to figure out what the
noise in each measurement was. (Maybe you did this and
didn't say.)
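For what it's worth, the bookkeeping for that is trivial - the readings
below are made up, the point is just the standard error of the mean:

  import statistics

  levels = [73.6, 73.9, 74.1, 73.7, 73.8]  # hypothetical repeat readings
  mean = statistics.fmean(levels)
  sem = statistics.stdev(levels) / len(levels) ** 0.5
  print(f"{mean:.2f} +/- {sem:.2f} levels")

With enough frames, a real ~15% effect could not hide in the noise.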
Except that a sensor element with a microlens is a bit more complex than
a little square hole.
Does the anti-alias filter have any effects that depend on the angle of
light?
--
That was it. Done. The faulty Monk was turned out into the desert where it
could believe what it liked, including the idea that it had been hard done
by. It was allowed to keep its horse, since horses were so cheap to make.
-- Douglas Adams in Dirk Gently's Holistic Detective Agency
And to show I'm being fair, I would also need to know how much light
from various areas of the lens will actually *hit* the sensor from
those angles - does the opposite edge actually contribute much? I
suspect not, but without knowing the makeup of the light spread across
the sensor, it remains an unknown variable that would, or at least
could, affect the result..
And no matter how you look at it - there is a bit of a problem with
your results - (compounded by the fact that you didn't actually state
*what* incident angle you achieved)... That problem is that all of the
actual, published technical data that I can find (which isn't much
admittedly) on CCD and CMOS sensors, indicates that they suffer
noticeable and significant light fall-off at angles more than 10-15
degrees off normal. Losses of 15% and more, not 2.6%...
Your description of the experiment raised some other issues, too, eg:
>The first step was to eliminate any effect of light fall-off caused by
>the lens, which was quite simple - eliminate the lens! ;-)
But hang on - the lens *does* in fact restrict the light very
effectively to the image circle of that lens - how was your light
source collimated (which is not the same as a 'point source')? Could
there have been reflections from the chamber, focusing screen, etc? To
do this properly you would surely want a thin 'tube' of light.
>The second with the camera rotated on the tripod head so
>that the light source cast a shadow across the centre of the frame
This concerns me, because that description shows you have a significant
amount of light hitting the sides of the mirror chamber, the focussing
screen, etc, and bouncing around onto the sensor - this would not happen
(as much) with a lens, because there is effectively nothing much
outside the image circle.
>- the most extreme angle of incidence that it is possible to
>create from any lens through the Canon mount.
OK, but what angle *is* that?
> In fact, since the lens requires a physical mount, which takes up
>some space, this is actually a steeper angle of incidence than would
>be possible with a real lens.
I'm not sure I am convinced of that. I don't have a DSLR in front of
me, but when a wide angle lens is fitted and focused to infinity, and
the *opposite* side of the rear element is used as a potential light
source for an opposite-edge-located sensor, I would have thought it
would be quite a bit steeper angle than you could achieve without the
lens.  But without some much more precise detail, I can't really argue
this.
>This was less simple to achieve, since the focus screen diffuses the lens
>mount shadow quite a lot
Like I said, it sounds like there is a lot of light bouncing around!!!
This is a rather worrying statement, and as I stated above, it suggests
that the sensors may be getting illumination from areas other than your
point source.
>Histograms for the exposed areas were:
>Central source: 73.80 Photoshop levels.
>Edge source: 71.82 Photoshop levels.
>This indicates a light fall-off due to extreme angle of incidence of
>only 2.68%, or approximately 4 HUNDREDTHS of a stop!
Could we see the images and exif data? I don't usually work in 'PS
levels'! I would imagine by their nature they are not ridiculously
large files. It would also be interesting to see what the 'unexposed
areas' actually look like.. As was stated above though, RAW data would
be more interesting.
>This simple test, which *anyone* can independently repeat
That's where I disagree. I don't think this is a simple test at all!!
>categorically *PROVES* that there is essentially *NO* sensitivity to angle
>of incidence on the Canon 5D sensor (and I suspect *any* dSLR sensor!).
Categorically? Hmm. I used to work in the sciences, and I have heard
people get very excited before.... and then realise that somewhere in
their data there is a significant 'whoops' (or several..)...
And it flies in the face of the figures I have seen, eg Figure 5 on
that Kodak spec sheet I linked to.  I'll try to find another set of
data for Canon or Sony to see if that is just an unusual/poor sensor
design.
Please don't get me wrong, Kennedy - I appreciate the effort you have
gone to and it's a fascinating topic, but be open to suggestions to
improve the experiment, and don't dismiss the naysayers.  Sometimes,
naysayers are very useful people to have around! I'm happy to have my
arguments shredded, and if they are wrong I'll give up gracefully and
congratulate the 'winners', *but I want links and references and
verifiable data*! Too many people here with 'opinions'... Me
included!
And I have to say that the more research I do into this (yes, I have no
life!), the more convinced I am that there *is* an issue with light
incidence. How much, I don't know, and I am frustrated by the fact
that it should be fairly easy to test *properly*, using the same wide
lens on a full frame Canon-DSLR and film SLR. But has anyone done it?
- if they have, I can't find it.
PS There's another interesting read here...
http://www.swissarmyfork.com/digital_lens_faq.htm
Scroll down to "3) Do we really need digital lenses?" It's an old
document, and I'm not suggesting that Mr Wisniewski is a foremost
authority, but it sums up my argument pretty well, and his numbers are
backed up by what I read from the manufacturers and designers of
sensors. In simple terms, they *are* sensitive to angle of incidence
where film is not, and from what I have seen and read, this *can* be a
problem that is worse on digital than on film.
Just depends on the lens and the sensor. (o: Like I said, I'd *love*
to see the specs of the Canon CMOS, but I haven't found it yet - and
I'm not sure, if I was Canon, that I would want it to be 'out
there'....
In the same way that Olympus want to push their telecentric approach to
help boost the 4/3 system, Canon and Nikon and maybe Sony might be
biased in exactly the reverse way...
(O:
PS - More links:
http://www.dalsa.com/dc/documents/Image_Sensor_Architecture_Whitepaper_Digital_Cinema_00218-00_03-70.pdf
Check pages 7 thru 9. Before dismissing it, find out who 'Dalsa' are..
http://oemagazine.com/FromTheMagazine/feb02/detectors.html
"Microlensed imagers ... show a strong sensitivity dependence on
incident photon wavelength and angle."
OK, it's an old article, but the basic CCD and CMOS design hasn't
changed much. And again, find out who James Janesick is...
But if his test is repeatable, he does have a point.  How can a DSLR show
additional light fall-off compared to film if the sensor itself does
not exhibit that property?
ahem.  Repeatability is *not* the only criterion for proof of a
postulation..!! I reckon I would probably get similar results if I
repeated the test, but I am not at all convinced that this test is
valid. See above posts for details.
Shooting negative / print film, for example, would mask vignette and
other optical defects more than, say, Velvia would. Which one is our
benchmark for what "film" is?
Kennedy's test, which is very sound in its approach, not only removes
the variables of photochemical finishing, it also removes the variable
of the lens, whose inherent vignette is going to make it difficult to
empirically measure any exposure variation introduced by either medium
independent of the lens.
Which comes to the final reason a film-to-digital comparison would be a
waste: how do you empirically measure either of these in a way that is
relevant to the other? Print both of them? How? Scan the
slides/negatives? How? Printing and scanning both change the outcome as
much as everything else.
There are simply too many variables to make a worthwhile test by
shooting the two media against one another. Empirical testing is only
useful if there is one variable at play. And in this test, there is:
the angle of incidence of the light striking the sensor.
Will
Kennedy,
Thanks for that interesting test - it has provoked a lot of discussion!
One factor I have not seen mentioned is the light spectrum. I am guessing
that people are seeing light fall-off in blue skies with wide-angle
lenses. It would be interesting to see if the fall-off differed between
the red and blue ends of the spectrum.
I would also suggest measuring with RAW rather than JPEG data, as the
differences you are seeing are rather small.
Cheers,
David
It is showing around 6% in linear terms:
(73.80 / 71.82) ^ 2.2 = 1.06.
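For anyone checking the arithmetic, a short Python version (this
assumes Photoshop's 8-bit levels sit on a simple gamma-2.2 curve, which
is only approximately true):

  centre, edge = 73.80, 71.82          # Photoshop levels from the test
  ratio = (centre / edge) ** 2.2       # undo the assumed gamma of 2.2
  print(f"linear-light falloff ~ {ratio - 1:.1%}")   # ~6.2%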
Andrew.
Oh, really? You seem to be carefully avoiding some posts above that
show how non-trivial and non-robust this experiment is.
And please post just ONE SINGLE link to one of the 'people' who know
sensor design and state that incident angle is irrelevant. I've posted
several that state the exact reverse, including a graph from a sensor
manufacturer...  It appears they must have used a somewhat different
methodology, or perhaps Canon CMOS sensors are just completely
different to other sensors..?
And to go back to basics (feel free to argue these, point by point):
1.  The problem is 'light fall-off on DSLRs' - yes or no?
2. Hands up those who shoot their DSLR without a lens?
3. If the lens is kept the same, and you use it on a film camera (eg a
Canon SLR) and then a digital camera (eg a Canon FF DSLR) at the same
exposure, same focus setting, same scene, same lighting... You could
simply look at the images and compare. Too tricky? (And yes, of
course you would use transparency film to avoid processing issues)
Can you *please* explain how a 'better' test is to remove the lens
altogether, use a non-collimated light source, don't tell us what
angles were achieved, and not even know what angles *might* be expected
from a typical problematic wide angle lens....? Sheeesh.
If you can't get that, there is no hope for you, and you need *not*
attend any more of my science lectures.......
Your comments about 'we can only estimate the effect' and the 'host of
other problems' are incomprehensible..
The simplest way to identify the effect is by looking at, and
measuring, images that *show the problem*. And doesn't the problem
(allegedly) occur when you use a wide angle lens???? By pulling off
that lens and running this experiment you have *added* a whole new set
of variables and substituted something that is NOT the same. As I and
others have stated, a lens has an image circle - the light is
effectively constrained to that area - a point light source is not
collimated and is quite different - so even silly things like the small
amount of reflected light off the walls of the mirror box *could* be
relevant... But you can't know because you have removed the lens......
sigh.
I give up.
Can you give a message ID of a post that summarizes your position?
I did not find anything in the 'above posts' that clearly explains why
his experiment is the wrong way of measuring the sensor's response to
light with a maximum angle of incidence.
you did not use a coherent source of light, thus your test proves nothing.
And, BTW. how is it that sensor vendors document much bigger light
falloff than the one you "detected"? Do you even have any idea what the
actual mechanism of this light falloff is, so that you know what to measure?
B.
> Not quantitatively we can't - there are a lot of approximations involved
> which would no doubt be exploited by some as masking any difference. For
> example, even neglecting the response curve of the film, how can you be
> sure that the lens on your film camera stops down to exactly the same
> f/# as on the digital camera in the time between the mirror rising and
> the shutter opening? It might be the same lens, but the actual iris is
> a function of lots of other mechanical and electrical linkages that can
> change when you switch bodies. How do you know the shutter in both
> cameras has the same unevenness across the frame - all shutters vary
> some amount as they cross the frame. As others have tried to, comparing
> film and digital in a fair and meaningful way is not as trivial as you
> seem to think. That is why eliminating as many variables as possible,
> whilst retaining quantitative measurements is a much preferable approach.
Yes, you can do this qualitatively. Print film has similar
response curves to digital cameras. I've shown that on my web site.
Your f/# and shutter arguments are irrelevant. What you want to
test is the relative fall-off from center to corner.  It
matters not what the exposure time is. It does matter what the
f/stop is, but most decent systems should reproduce to a
fraction of an f/stop with the same lens. And if you do the
test with the lens wide open, the stop is not moving.
In my QE testing, I specifically changed aperture, and of course
one must change exposure. Normalizing aperture, the plot
is quite linear, showing that both shutter and aperture
are predictable and reproducible to a fraction of a stop.
See Figure 3 at:
http://www.clarkvision.com/imagedetail/evaluation-1d2
Roger
Photons do not interact in this case. E.G., all the photons
flying around in a lit room do not interact with each other.
The only thing that has changed in the experiment is diffraction
effects, but for the wide apertures we have been discussing,
that is not relevant either.
What is relevant and has been pointed out is scattered light
affecting the results. Kennedy needs to control
that in his experiment. Not having any pictures
or diagrams of the experiment, it is hard to evaluate.
Roger
> In article <441A345...@qwest.net>,
> Roger N. Clark (change username to rnclark) <user...@qwest.net> wrote:
>
>>This is a good point. The foreshortening should reduce the number of
>>photons per pixel, independent of reflection or absorption
>>by the pixel, and independent of Lambertian or not. Think
>>of a pixel as a little square hole. As you tilt it, the
>>number of photons going through the hole gets smaller,
>>because the projected area gets smaller.
>
>
> Except that a sensor element with a microlens is a bit more complex than
> a little square hole.
Yes, I agree. And depending on the lens design, it could
make the problem better or worse.
>
> Does the anti-alias filter have any effects that depend on the angle of
> light?
Probably.
Roger
photons are coming from."
With a focal plane using microlenses, this is not true. In the focal
plane itself (behind the microlens array), the active (light detecting)
area is only 25 - 35% of the total pixel area. (The rest of the area
is taken up with gates and electronics to get the photo-electrons out
of the focal plane.)  The microlens acts like a light funnel, catching
the light from most of the pixel area, and focusing it onto the active
area of the focal plane, and increasing the focal plane sensitivity.
Microlenses aren't particularly good lenses, but they do focus the
light into a blob smaller than the active area. At high incidence
angles, not all of the light from the microlens hits the active area
and there is an intensity fall-off. (This is in addition to the normal
cos^4 effect.)
An effect of microlens focal planes that I have SEEN was an on-axis
reduction in expected sensitivity at low f-numbers. What happened here
was that for low f-numbers (high cone angles), the light after the
microlens focused IN FRONT of the active area, and by the time it
reached the active area it had expanded, and the blur spot was larger
than the active area, reducing the light gathered.
The same thing occurred, more or less, with both Sony and Kodak focal
planes, which were the only candidates we were looking at at that time.
Note that both Sony and Kodak consider the details of their microlens
focal planes proprietary and were no help in tracking down this
problem. A lot of the focal plane data was "backed out." That said I
came up with a realistic, simple geometric model that matched the
experimental data we were getting very well. As I remember, the effect
was noticeable at around F/2.5 and faster. Since I don't know the
particulars of Canon's focal planes, YMMD.
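For what it's worth, a toy version of that sort of geometric model is
sketched below.  Every number in it is invented for illustration (the
real microlens prescriptions are proprietary, as I said); the point is
only that once the refocused blur spot outgrows the active area, the
collected fraction falls as the area ratio.

  import math

  def collected_fraction(f_number, focus_depth_um, active_depth_um, active_width_um):
      # Toy model: the microlens focuses the f/N cone to a point at
      # focus_depth_um; by active_depth_um it has re-expanded into a blur
      # spot of diameter |defocus|/N (cone half-angle ~ 1/(2N)).
      defocus = active_depth_um - focus_depth_um
      blur = abs(defocus) / f_number
      if blur <= active_width_um:
          return 1.0
      return (active_width_um / blur) ** 2   # blur spot overfills the active area

  # Hypothetical stack: focus 5um in front of a 2um-wide active area.
  for n in (8, 4, 2.8, 2.0, 1.4):
      frac = collected_fraction(n, focus_depth_um=3.0, active_depth_um=8.0, active_width_um=2.0)
      print(f"f/{n}: {frac:.0%} collected")

With these made-up numbers the loss appears right around f/2.5, which
is roughly the behaviour I described.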
BTW: Do you guys talk this way to people in person when you disagree?
Hack
--//--
No, I am not. There don't seem to be many sensor designers who do say
that there should be light fall-off due to microlens arrays - only those
in the 4/3 consortium, and none of the articles saying that come from
the sensor manufacturers, only the camera manufacturers. Also, as
already mentioned, the problem might even be more of an issue with their
smaller pixels than it is with larger, less noisy pixels.
> Perhaps they
>are, but this test has not convinced me that is the case.
>
>> The test is very simple to conduct and it is simple to extend to
>> numerous angles, which is why I provided details of it - so that others
>> can, if they wish, make the same or similar tests on their cameras.
>
>You're also suggesting that the sensor designers never thought of this
>simple test,
I would be very surprised if some sort of similar test wasn't actually
performed at least on a batch basis by the sensor manufacturer. I know
that we do something similar to this with our sensor production, which
is what prompted the idea in the first place. Then again, our sensors
routinely operate with f/1 lenses and faster, so extreme angular
response is important.
No - the light from the lens is incoherent, so constructive and
destructive interference effects average out.  If the lens was
photographing a laser-illuminated scene then perhaps there could be some
issues, but that is rather unusual and people are claiming this effect
is significant in normal light.
Not sure I understand what you are describing here, but I measured about
half the loss you are estimating, so it might not be very far from an
empirical explanation.
>This other test sounds like a BS way of "proving" this, especially given you
>wanted a specific outcome to your test before you did it.
Actually, I expected to find about half a stop difference - I was
extremely surprised by an entirely negligible difference.
You don't understand this, do you. I didn't *want* to "know the
difference between using film and digital"!
>, why not measure
>the difference between using film and using digital, rather than measuring
>something else entirely?
>
Because that "something else entirely" *is* what I wanted to know!
That "something else entirely" was precisely how much the sensitivity to
incident light angle the digital sensor (which you and many others
continually cite without any reference quantifying it) actually was. So
I set out to measure that - not the difference between film and digital,
but to quantify how big this effect that *you* were claiming dominated
corner light fall-off on the 5D actually was. Quantify the cause, not
the symptom.
The *implication* of this being zero is that there is no difference
between film and digital sensors *because* that is the only difference
people like you ever do cite. Over to you Batman, come up with some new
excuses why digital full frame sensors should be different to film,
since your original justification is now in flames!
> In article <121k2ou...@corp.supernews.com>, Jeremy Nixon
> <jer...@exit109.com> writes
> >eawck...@yahoo.com <eawck...@yahoo.com> wrote:
> >
> >> Look at the diagram. The pixel has no idea where that photon is coming
> >> from, beyond its angle of arrival.
> >
> >I don't know sensor design. But people who do say differently, especially
> >with the Bayer filter and microlenses and whatnot involved.
> >
> People with a vested interest say things which are not entirely
> accurate, that is why independently verifiable tests and measurements
> matter.
I very much like the idea of testability; the basis of science and all
that.
However, your theoretical limitation on the possible angle is based on
the light entering through the lens mount. Actual light coming out of
an actual lens exits the rear of the lens well inside the lens mount,
so that's not the real constraint on the real light angles on the
sensor. It may well be that real lenses don't have light exiting the
rear at sharper angles than possible through the lens mount, but I
can't prove why they couldn't.
--
David Dyer-Bennet, <mailto:dd...@dd-b.net>, <http://www.dd-b.net/dd-b/>
RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/>
Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/>
Dragaera/Steven Brust: <http://dragaera.info/>
> And from:
> http://www.ws.binghamton.edu/fridrich/562/sensors.pdf
>
> "Unlike film's silver halide crystals, which are distributed over a
> flat surface and will react to light hitting from any incident angle,
> the pixels of silicon require that light strike them within a much
> smaller deviation from the perpendicular. To compensate for this
> difference (i.e., to redirect incoming light impacting the pixels from
> different incident angles so the pixels receive a higher electric
> charge), sensor designers bond a domed micro lens over every pixel.
> This increases the angular response of the pixels and, hence, the
> photosensitivity of the sensor."
>
> 1. Tell us, why *do* they fit microlenses to sensors?
> 2. Will a photon at 5 degrees result in a 'hit' on a sensor? (hint -
> almost certainly not)
> 3. At what angle *does* it get a guaranteed hit?
> 4. What about film at 5 degrees? (hint - yes)
> 5. Was that the same answer as 2?
> 6. Will the sensor hit be accurate, ie the correct color?
> 7. Are there other issues?
>
> I could go on, but my point is that simplistic comments and irrelevant
> links do not an argument make.
>
> So I'm not at all convinced Kennedy's experiment is correct, I believe
> he has missed some basic issues, eg:
> - what is the *real* angle of light coming from a *real* wide angle's
> rear element? is the angle he was able to achieve really in the same
> ball park?
It was clearly greater than 5 degrees, anyway.  Much greater.
> - can the real, and quite complex, situation where a single sensor
> element is receiving light rays from the whole or a large part of the
> rear element, be modelled with a single point source?
It's a valid question, but I'm pretty sure the answer is yes. I'm
reasonably confident that two sources would add as expected from the
individual results.
> - are there other issues here, eg blooming, diffraction, fringing,
> internal reflections, incorrect colour, all of which may interact to
> add or alter the result, that are being ignored/not measured?
Internal reflections, obviously. Again, I don't think they account
for a big part of vignetting, though.
> It's interesting, but to blithely claim that the angle of incidence is
> irrelevant on the basis of this simplistic test, or on e's opinion
> (without supporting links, I note) that 'some people who design
> sensors' (unnamed of course), seems to reflect more on the claimant
> than it does on reality. Can we see the links? Otherwise it is
> hearsay...
I had one point to raise myself that I think the test misses. I'm
still interested in understanding what it does and doesn't prove.
> In article <1142565932.6...@j33g2000cwa.googlegroups.com>,
> mark.t...@gmail.com writes
> >
> >So I'm not at all convinced Kennedy's experiment is correct, I believe
> >he has missed some basic issues, eg:
> >- what is the *real* angle of light coming from a *real* wide angle's
> >rear element? is the angle he was able to achieve really in the same
> >ball park?
>
> How do you think the lens can create a larger angle of incidence than
> what is achievable through the lens mount? There is nothing to bend
> the light to a greater angle between the rear element of the lens,
> which must be far enough forward to clear the mirror, and the focal
> plane. The rear lens element must fit into the lens mount and thus
> must be smaller than it.
I see no reason for the restriction on exit angle that you're
assuming. What's your argument?
> In article <121kdmu...@corp.supernews.com>, Jeremy Nixon
> <jer...@exit109.com> writes
> >eawck...@yahoo.com <eawck...@yahoo.com> wrote:
> >
> >> Come on, Mr. Nixon! Don't be foolish.
> >
> >You must really hate being taken seriously.
> >
> >> Now we can, as you irrationally demand, conduct the experiment using a
> >> lens. But then we have the problem of deconvolving the _unknown_ light
> >> fall-off of the lens. But even if we _knew_ this function, somehow,
> >> and perfectly, the result obtained would be ... _the same as the one
> >> observed by McEwen_.
> >
> >Okay. So why not do it? We *can* measure the falloff from the lens as
> >opposed to the sensor by simply using film rather than digital.
> >
> Not quantitatively we can't - there are a lot of approximations
> involved which would no doubt be exploited by some as masking any
> difference. For example, even neglecting the response curve of the
> film, how can you be sure that the lens on your film camera stops down
> to exactly the same f/# as on the digital camera in the time between
> the mirror rising and the shutter opening? It might be the same lens,
> but the actual iris is a function of lots of other mechanical and
> electrical linkages that can change when you switch bodies. How do
> you know the shutter in both cameras has the same unevenness across
> the frame - all shutters vary some amount as they cross the frame. As
> others have tried to, comparing film and digital in a fair and
> meaningful way is not as trivial as you seem to think. That is why
> eliminating as many variables as possible, whilst retaining
> quantitative measurements is a much preferable approach.
Use an old manual-diaphragm lens, and the bulb setting for the shutter
(controlling the exposure with the lens cap, the power supply to the
light, or something). (Yeah, finding an old manual-diaphragm lens
that fits the camera and is an extreme enough wideangle to be a good
test may not be feasible).
He specifically refers to exposure time unevenness across the frame --
which clearly *is* relevant since what we'll be measuring in the end
is exposure differences across the frame.
You're right about the aperture of course, since we'll be making
comparisons only between spots in the *same* frame, not between
frames.
> Kennedy McEwen wrote:
>
> > Following from the discussion here over the past few days about light
> > fall off on the Canon 5D and whether this is worse that it was with film
> > I decided to try a little experiment.
>
> Why not just shoot the same scene using a wide angle lens on both a digital
> and a film camera body, scan the film and compare them like that? If they
> both end up with the same fall off, you've answered the question.
You've answered *a* question; the practical effects *with that lens*.
And if there are differences you don't know what caused them (which is
irrelevant to just using the camera, of course, but many of us are
curious beyond that).
This test was designed specifically to see if angle of incidence makes
a difference -- testing one factor independently.
It looks to me like there's *something* funny going on since the
measured difference is a lot less than simple cosine law would require.
> One problem that I see from such a test is that film itself is an
> unreliable medium for this kind of test. Which film stock would you use
> for the test? How would you expose the test? How would you process the
> test? How would you print the test? Each stage of this could change the
> outcome of your test, to exaggerate or hide exposure variations in the
> corners, and it would be hard to find a neutral happy medium.
Doesn't matter, doesn't matter, doesn't matter, and I wouldn't, I'd
measure directly from the processed film.
The measurements being made are of *relative* degree of exposure
within a single frame. The film stock is pretty uniform across the
area of a single frame, and the processing will be pretty uniform
across the area of a single frame, and that's all that matters.
> In article <eprSf.47949$F_3....@newssvr29.news.prodigy.net>, Paul
> Furman <paul-@-edgehill.net> writes
>
>>
>> Even without pixel depth the angle straight on for half the sensor
>> would be a 17mm wide swath of light vs about a 16mm wide swath at 30
>> degrees or roughly 5% loss.
>
>
> Not sure I understand what you are describing here, but I measured about
> half the loss you are estimating, so it might not be very far from an
> empirical explanation.
Ah, that's pretty close to the 5%. I drew a Nikon mount and also looked
at some lenses which coincidentally achieve about the same angles. It's
kind of sketchy but gives the general idea:
<http://www.edgehill.net/1/?SC=go.php&DIR=Misc/photography/lens-angles>
>[...]
I repeated McEwen's experiment last night with a Canon 1DMkII and a
PrincetonTec "Impact" LED flashlight (it has a very clean pattern).
The flashlight has a 20mm exit aperture, and was placed 7000mm from the
camera -- about f/350 equivalent.  The camera itself was mounted on an
Acratech ball, and carefully placed (albeit by eye) so that rotating
the head horizontally kept the sensor "in place", more or less. The
Acratech has 5 degree angle markers etched on it (used here for the
first time!). Observations below are the raw data from a square patch
of pixels 128x128 at the centre of the frame, collecting frames at 5
degree increments:
====== Bayer Channel ======
AOI G0 B R G1
--- ------ ------ ------ ------
-25 484.6 510.2 164.1 479.3
-20 801.6 861.6 295.8 799.5
-15 928.0 964.0 359.7 926.9
-10 992.6 1008.8 396.1 990.8
-05 1025.4 1031.6 415.7 1023.5
+00 1032.9 1036.9 420.4 1031.0
+05 1023.4 1029.3 415.1 1021.5
+10 998.7 1011.4 400.4 996.8
+15 948.4 974.8 371.8 946.3
+20 850.8 897.3 320.3 848.8
+25 539.8 570.1 189.8 543.0
(I tried to format this; apologies if it failed).  There are errors in
the AOI (set by eyeball, with attention paid to parallax effects), and
do not be misled by the number of digits in the sensor values: the
usual +-sqrt() shot noise applies.
I tried to get "AOI 0" to be square to the source, but again, this is
all eyeball and fingers stuff. It looks like I missed by a few
degrees. I also tried to level the head and such; again, eyeball and
fingers.
Nevertheless, as is clearly evident -- and in stark contrast to
McEwen's results -- there is a very strong function at work. Playing
around with gnuplot, a cos(aoi)^3 is a semi-reasonable fit, though
that's just dicking around with math -- no physical process is
proposed. There are also slight differences between the channels.
Dispersion effects?
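For what it's worth, the same fit can be run in Python instead of
gnuplot.  Using the G0 column normalised to its peak, and only the
|AOI| <= 20 rows (the 25-degree rows are clearly contaminated, see
below), a least-squares fit on the logs lands in the low threes:

  import math

  aoi = [-20, -15, -10, -5, 5, 10, 15, 20]
  g0  = [801.6, 928.0, 992.6, 1025.4, 1023.4, 998.7, 948.4, 850.8]
  peak = 1032.9                        # the +00 row

  # Fit n in signal ~ peak * cos(aoi)^n by least squares on the logs.
  num = den = 0.0
  for a, s in zip(aoi, g0):
      c = math.log(math.cos(math.radians(a)))
      num += c * math.log(s / peak)
      den += c * c
  print(f"best-fit exponent n ~ {num / den:.1f}")   # ~3.4 on this data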
Conducting this experiment revealed, however, that the above data is
probably a bit higher than it should be: there is a major problem with
glare.  The first few times I ran it, I noticed a strong left/right
gradient in the images in the AOI > 0 cases, and even some streaks of
flare.  Looking "up LED" from the camera, I noticed a number of
forward-scattering objects that may have contributed to this.  I
covered these with large black blankets (don't ask).  However, there
are still significant glare problems in the final images used to
produce the above table -- which is why I selected the centre of the
image, as it is probably the area least affected by this problem.  My
guess is that this glare is from the camera's internal baffling; it's
probably not as black as it should be, and it certainly wasn't
designed to deal with this kind of use of the camera.
So I think the test should be done with a collimated source; probably
the easiest to obtain is a laser pointer.  Unless someone beats me to
it, I'll re-do the test next week with such a source.
But looking at the above table, any falloff beyond cos(aoi) is
basically negligible within +-10 degrees, and only about 20% at +-20
degrees.
With a ruler, a room light and a piece of paper (a poor man's optical
bench), I tried to calculate the location of the exit pupil of my
17-35/2.8 lens at 17mm ... from this crude estimate, it looks like a
pixel at the corner of the sensor is still inside the 20 degree cone.
The same for my 20/2.8 and a 50/1.4, though the cones are in more
favourable positions of course.
Finally: DUST! This is indeed an excellent dust-detector! But there
is more: as one swings the sensor around, the dust "moves" because it
is offset from the pixels themselves. I found an easy to recognize
blob and observed its position as the sensor swung.  It shifted about
16 pixels per 5-degree step near AOI 0, so we can estimate that the
sensor itself is covered by about 16/tan(5) = 183 pixel-lengths of
glass (AA filter, etc).  At 8.2um per pixel, this comes to 1.5mm.
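The arithmetic, for anyone who wants to plug in their own dust blob
(note this treats the rays inside the stack as straight lines;
refraction in the glass, n ~ 1.5, would make the true thickness
somewhat larger):

  import math

  shift_px = 16      # observed dust-shadow shift per rotation step
  step_deg = 5       # rotation per step
  pitch_um = 8.2     # 1DMkII pixel pitch

  depth_px = shift_px / math.tan(math.radians(step_deg))
  print(f"cover stack ~ {depth_px:.0f} px = {depth_px * pitch_um / 1000:.2f} mm")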
>One factor I have not seen mentioned is the light spectrum. I am guessing
>that people are seeing light fall-off in blue skies with wide-angle
>lenses. It would be interesting to see if the fall-off differed between
>the red and blue ends of the spectrum.
>
I did think of that initially, but I happened to have the white LED and
DC current source readily at hand from some previous work. It was just
sitting there as I read the other threads on the topic and I thought
"Hang on, this is essentially all that complex mechanised kit at work
does, but I only need a few data points to see how significant it is."
;-)
However, after thinking about using colour LEDs I came to the conclusion
that if any angular response variation also had a colour component to it
then the shading caused by it would also have a colour cast. From the
images I have seen of the problem it doesn't appear to be coloured.
>I would also suggest measuring with RAW rather than JPEG data, as the
>differences you are seeing are rather small.
>
Agreed - and I will do for the additional tests.
Why would I want to use a coherent light source? None of the light
reaching the sensor is usually coherent and to do so, such as with a
laser illumination source, would only introduce speckle noise. The
correct light source for this type of measurement is an incoherent
source.
You probably mean, as others have suggested, that I did not use a
collimated source - which is something completely different. However,
to those who are concerned about such matters, calculate how collimated
the beam from a 5mm LED at 80cm distance is. Here's a hint - it is more
collimated than light from the sun, and more than adequate for this
purpose.
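If you can't be bothered doing it by hand, here it is in Python:

  import math

  led_mm, dist_mm = 5.0, 800.0
  led_deg = math.degrees(2 * math.atan((led_mm / 2) / dist_mm))
  print(f"LED subtends {led_deg:.2f} deg")   # ~0.36 deg; the sun is ~0.53 deg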
>And, BTW. how is it that sensor vendors document much bigger light
>falloff than the one you "detected"?
Do you have, or have you ever seen, a document specifying this parameter
for the Canon 5D sensor? I doubt it, but you are welcome to provide
references to it here and now.
>Do you even have idea what's the actual mechanism of this light fallof,
>so that you know what to measure?
>
Yes - I design optical imaging sensor systems for a living.  Do you?
As I have posted on many occasions in the past few months, I use my old
OM 18mm f/3.5 prime lens on the Canon 5D with an adapter - I have
*never* seen light fall off in the corners that is any worse than I
experienced on the OM-1, 2, 3, 4, 4Ti cameras I used this lens on in the
past. Others have related similar experience. Consequently a
film/digital comparison is likely to be dominated by media response
curve variations and other parameters. That is why a direct measurement
of the only cited cause of this alleged difference is more meaningful.
>Actual light coming out of
>an actual lens exits the rear of the lens well inside the lens mount,
>so that's not the real constraint on the real light angles on the
>sensor.
Well the rear lens element can't go very far inside the mount because
the mirror gets in the way - that is why all these super wide angle SLR
lenses are retrofocus designs.
> It may well be that real lenses don't have light exiting the
>rear at sharper angles than possible through the lens mount, but I
>can't prove why they couldn't.
Let's say they can be 4mm inside the mount flange, but the bayonet itself
is 5mm or so thick with any clearance for lens movement inside that and
then the mount for the rear element. Also, if you look inside the SLR
mount you will see that it is obscured at the bottom and top by the
support for the electrical contacts and the prism mount respectively.
Pretty soon you get down to a practical rear element limit that is about
30mm maximum diameter no closer than 40mm from the focal plane. Not
much scope for it to be any worse than the test limits, though I could
extend the shadow to the opposite edge of the frame with a couple of
extra degrees rotation, to take the test conditions well beyond the
limits of a practical SLR lens.
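Putting rough numbers on that geometric bound (Python; the 30mm and
40mm figures above, with the full-frame corner at about 21.6mm off
axis - this is a worst case, not what any real lens delivers):

  import math

  element_radius_mm = 15.0                  # 30mm maximum rear element
  distance_mm = 40.0                        # no closer than 40mm to the focal plane
  corner_mm = math.hypot(18.0, 12.0)        # full-frame corner offset, ~21.6mm

  # Steepest possible ray: far edge of the element to the opposite corner.
  worst = math.degrees(math.atan((element_radius_mm + corner_mm) / distance_mm))
  print(f"worst-case incidence ~ {worst:.0f} deg off normal")   # ~42 deg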
Yes, you are correct - it was late last night when I responded - there
*should* be a cos(theta) reduction *if* the pixels were flat, without
any microlenses at all.  The microlenses, however, change this
characteristic, since they present almost the same collection area
independent of the incident angle. Think of a sphere - it has the same
diameter from whatever angle you look at it. Obviously being spherical
segments rather than full spheres, the lenses will change their
effective collection area at extreme angles, but this may be well beyond
the angles available from practical camera lenses. So the actual area
reduction at each pixel really depends on how completely spherical the
microlenses are and also how much they obscure each other at high
incidence angles.
These microlenses are actually made by heating etch resist material
deposited on the pixel until it melts and reflows into a spherical
segment. We make thermal imaging detectors with a similar process to
create indium bump bonds at each pixel for connecting each cadmium
mercury telluride alloy sensel to its CMOS circuit in the readout
matrix. Depending on the thickness of the deposited material, it is
possible to create almost perfect spheres with a small flat base of much
less than the radius of the sphere itself. So creating almost perfect
truncated spheres for microlenses is quite possible, and that would
reduce the sensitivity to the incident light compared to a flat surface.
Also, if the microlenses were perfect immersion lenses then there would
be no deflection of the light off of the sensitive area at the centre of
each hemisphere even at extreme angles of incidence, which is the
conventional argument for the angular response. With perfect immersion
lenses the only loss in signal would be the obscuration by adjacent
microlenses.
Interestingly, if you model the microlenses as spheres with a diameter
equal to the pixel pitch, the obscuration of one sphere by its adjacent
sphere at an incident angle of 30deg works out at approximately 5.77%.
This may be what Paul Furman was talking about when he discussed the
Autocad modelling. (Is it, Paul?)
In exact terms, it is 1/3 - sqrt(3)/(2*pi) - I can go into the geometry
of this if you like, but it is reasonably simple: just the fractional
area of intersection of two circles of radius r separated by 2r.cos30.
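The number is easy to verify numerically with the standard formula for
the intersection area of two equal circles:

  import math

  r = 1.0
  d = 2 * r * math.cos(math.radians(30))    # centre separation at 30 deg incidence

  # Area of intersection of two equal circles of radius r, centres d apart.
  overlap = 2 * r**2 * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r**2 - d**2)
  print(f"{overlap / (math.pi * r**2):.4f}")                  # 0.0577
  print(f"{1/3 - math.sqrt(3) / (2 * math.pi):.4f}")          # exact form, same value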
Obviously the lenses will not be complete spheres, but truncated ones,
however the result remains the same even if they are only marginally
larger than hemispheres, provided the radius is maintained above the
surface at the point where the incident ray is tangential to the sphere.
This only requires the microlens to be marginally greater than a
hemisphere.  Finally, creating such a microlens geometry requires that
the lenses themselves be smaller than the actual pixel pitch - reducing
the percentage loss significantly for even small reductions in spherical
radius.
I think this explains the almost null result - in fact, at *less* than
5.77%, it provides a very close match to the actual observed signal
reduction of 2.68%!
>
>The other possibility is that due to fluctuations in whatever
>(shutter speed? LED output?) the test didn't detect the ~15%
>difference that should be there due to a statistical fluke.
>I kind of doubt this. But if it were true, it would mean you'd
>have to take a number of exposures to figure out what the
>noise in each measurement was. (Maybe you did this and
>didn't say.)
>
No, I didn't, but I plan to run a more comprehensive test now, which
will use a longer separation between the LED and the focal plane,
requiring a longer exposure time and hence improving accuracy and
repeatability.  The LED supply was a stabilised current source, so there
should be no variation in its output.
>> And, BTW. how is it that sensor vendors document much bigger light
>> falloff than the one you "detected"?
>
> Do you have, or have you ever seen, a document specifying this parameter
> for the Canon 5D sensor? I doubt it, but you are welcome to provide
> references to it here and now.
I certainly haven't. If you're suggesting that this particular sensor is
different from the others, okay -- maybe it is, and Canon is keeping quiet
about it. Why wouldn't they trumpet it from the rooftops, though?
--
Jeremy | jer...@exit109.com
> No, I am not. There don't seem to be many sensor designers who do say
> that there should be light fall-off due to microlens arrays - only those
> in the 4/3 consortium, and none of the articles saying that come from
> the sensor manufacturers, only the camera manufacturers.
Everyone who says anything about it says that angle of incidence matters
significantly.
> I would be very surprised if some sort of similar test wasn't actually
> performed at least on a batch basis by the sensor manufacturer.
So they performed the test, discovered that angle of incidence doesn't
matter at all, and then embarked on a vast conspiracy to convince the
world that it does?
There aren't that many companies making commercial sensors like this, so
I guess the conspiracy wouldn't have to be all that vast. But why? Why
mislead everyone about this single point? Why are you the first person
to reveal the horrible truth? Are there now assassins after you, to
silence you before you expose this plot?
--
Jeremy | jer...@exit109.com