Additionally, these scanners' CCDs usually do not have a single row of
2400 cells, but two rows of 1200 each, which are positioned at a
half-pixel offset.
Now, if this is true (please confirm), don't we effectively have 4x
multi-sampling when scanning at 1200 dpi?
There are several issues that I don't find clear.
First: when scanning at 1200 dpi, do scanners actually use both CCD
arrays and "mix" the results (I'm not simply saying "average" the
results, since it might be too simplistic given the half-pixel
offset), or do they only "turn on" one array?
Second: when scanning at 2400 dpi, do scanners give out pixels in the
order "1st pixel of 1st array | 1st pixel of 2nd array | 2nd pixel of
1st array | 2nd pixel of 2nd array", or do they somehow consider the
fact that nearby pixels overlap one another by half their width?
Of course, this also applies vertically, since while the motor moves by
1/2400th of an inch steps, pixels are 1/1200th of an inch "wide".
Third: when scanning at "4800" dpi, what do scanners do about the
horizontal resolution? Interpolation, I suppose. What kind of
interpolation? Does it vary from scanner to scanner?
And, do scanners that claim 2400x4800 resolution *really move the motor
by 1/4800th steps when instructed to scan at 4800 dpi*, or do they just
interpolate (since I know there are also other reasons for having
1/4800th stepping motors)? Does this vary from scanner to scanner?
Now, let's see how all this relates to multi-sampling.
Let's suppose I want to scan at 4800 dpi, with 2x multi-sampling -- for
the moment, let's ignore the fact that it might really be 4x
multi-sampling because of the double CCD array.
The scanner gives me an image. I can turn it into *two* images, one
made of the even lines of the original image, and the other made of the
odd lines (clearly, I must first downsample the original image
horizontally, since it was interpolated to 2x by the scanner).
I can then average the two images. Have I just obtained 2x
multi-sampling?
Apparently not, since I forgot that even and odd lines were sampled
1/4800th of an inch apart from each other.
But I do know they're separated by a consistent 1/4800th of an inch. So
I could first sub-pixel-align the two images (a no-brainer, since I
know they're misaligned by exactly one pixel), and only then do the
merge.
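For concreteness, here is a rough Python sketch of the naive version of that split/merge (rows as lists of grey values; this is the zero-shift pairing, i.e. it simply averages each even line with the odd line right after it and ignores the known offset):

```python
def split_and_merge(rows):
    """Split a scan into its even and odd lines, then average the
    corresponding row pairs. Zero-shift pairing: the 1/4800th-inch
    offset between the two half-images is deliberately ignored."""
    even = rows[0::2]   # lines 0, 2, 4, ...
    odd = rows[1::2]    # lines 1, 3, 5, ...
    n = min(len(even), len(odd))
    return [[(a + b) / 2 for a, b in zip(even[i], odd[i])]
            for i in range(n)]

scan = [[10, 20], [12, 22], [30, 40], [28, 38]]
print(split_and_merge(scan))  # [[11.0, 21.0], [29.0, 39.0]]
```

Note that this produces exactly the same pixels as a plain 2:1 vertical box-average, which is what makes the equivalence question below non-trivial.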
Have I now obtained 2x multi-sampling? Apparently, I have. But now I
wonder: what would have happened if I had just scaled down the original
image to half its size vertically?
Wouldn't that be equivalent to the procedure I described of splitting
it in two, aligning and merging?
Programs usually offer more than one algorithm for scaling down images:
bilinear, bicubic, etc.
Which of these is equivalent to splitting/aligning/merging, if any?
Now you probably also see why I asked all those questions about scanner
behavior above, since to answer my doubts about multi-sampling one must
be aware of how the scanner really behaves, and whatever it does to the
data *before* giving them out to the user.
Perhaps this whole article can be "scaled down" to the question: is
scanning at 4800 dpi and then scaling down to 1200 dpi (with what?
bilinear, bicubic...) equivalent to 4x multi-sampling at 1200 dpi?
(Make substitutions between 4800, 2400 and 1200 above, and you'll get
the other possible scenarios)
by LjL
ljl...@tiscali.it
>Additionally, these scanners' CCDs usually do not have a single row of
>2400 cells, but two rows of 1200 each, which are positioned at a
>half-pixel offset.
...
Check the archives (for example on Google). Kennedy
(<r...@nospam.demon.co.uk>) wrote many extensive and very technical
articles about this in quite some detail, for example:
Subject: Re: filmscanner vs hi-res flatbed
Subject: Re: REPOST: Re: Plustek OptikFilm 7200
etc.
Don.
Maybe even a bit *too* technical ;-)
I've read these and similar threads before, and I am aware that the
topic of "staggered CCD arrays" (and stepping motors that step less
than one pixel is wide) has been investigated to death.
However, it was mainly about "does a 1200+1200 dpi scanner resolve as
much as a 2400 dpi scanner?", "does a 1200+1200 dpi scanner resolve
anything more than a 1200 dpi scanner at all?", and "staggered arrays
reduce aliasing but make the image softer".
Instead, my post wanted to investigate the question: is scanning with a
1200+1200 dpi scanner comparable to multi-sampling with a 1200 dpi
scanner?
And if it is, should we process the image taking account of the pixel
offset/overlap, and if so, how?
I've read clues that unsharp masking can be a perfectly valid technique
to compensate for sensor overlap, for example... but it's all a bit too
vague in the threads I've read, covering wider topics than I am
currently focusing on -- such as resolution, aliasing, etc.
by LjL
ljl...@tiscali.it
>> >Additionally, these scanners' CCDs usually do not have a single row of
>> >2400 cells, but two rows of 1200 each, which are positioned at a
>> >half-pixel offset.
>> ...
>>
>> Check the archives (for example on Google). Kennedy
>> (<r...@nospam.demon.co.uk>) wrote many extensive and very technical
>> articles about this in quite some detail, for example:
>>
>> Subject: Re: filmscanner vs hi-res flatbed
>> Subject: Re: REPOST: Re: Plustek OptikFilm 7200
>> etc.
>
>Maybe even a bit *too* technical ;-)
Yes, Kennedy does that! ;o)
But I like it and always file such messages for future use even if
most of it is over my head at the time.
>Instead, my post wanted to investigate the question: is scanning with a
>1200+1200 dpi scanner comparable to multi-sampling with a 1200 dpi
>scanner?
>And if it is, should we process the image taking account of the pixel
>offset/overlap, and if so, how?
>I've read clues that unsharp masking can be a perfectly valid technique
>to compensate for sensor overlap, for example... but it's all a bit too
>vague in the threads I've read, covering wider topics than I am
>currently focusing on -- such as resolution, aliasing, etc.
I haven't really looked into all that because I'm too busy with my
film scanner so someone else will have to jump in...
Don.
Sorry, but sometimes it needs that technical detail to explain the true
implications of the concept.
<snip>
>
>Instead, my post wanted to investigate the question: is scanning with a
>1200+1200 dpi scanner comparable to multi-sampling with a 1200 dpi
>scanner?
That depends on whether the subject contains any information at higher
than 1200ppi and if the lens is capable of resolving it. If it isn't
then it is exactly the same as multisampling - which is why I always
jump on posters who claim that there is no advantage to this scanning
approach: even when there is no resolution advantage there is always the
multisampling advantage.
In simple terms, the double CCD captures twice as much information as a
single line device. If that information does not go into increased
resolution then it appears as increased signal to noise similar to
multisampling.
This is no different from a single line sensor with double the pixel
density when scanning an object which does not have as much resolution
in the original - there is always an advantage to getting more samples
of nominally the same data, but it can be debatable whether that
advantage is worth the time and effort to do so.
>And if it is, should we process the image taking account of the pixel
>offset/overlap, and if so, how?
>
The simplest method of doing this is a pixel average and downsample by a
factor of two. Suffice to say that there isn't an exact method of
separating the resolution from the SNR gain. Half pixel realignment
isn't really a solution in these cases because it involves resampling
losses in itself which are likely to exceed any benefit that they are
intended to gain. Some blurring, up to a quarter of a pixel may be
advantageous.
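A rough Python sketch of that recipe (rows as lists of grey values; the blur weights here are purely illustrative, not derived from any scanner's characteristics):

```python
def blur_and_downsample(rows, kernel=(0.125, 0.75, 0.125)):
    """Mild vertical blur followed by a 2:1 pair average, sketching
    the 'pixel average and downsample by two, with some blurring'
    idea. Edge rows are clamped. The kernel is an assumption,
    roughly a quarter-pixel spread."""
    h, w = len(rows), len(rows[0])
    k0, k1, k2 = kernel
    blurred = []
    for y in range(h):
        up = rows[max(y - 1, 0)]
        dn = rows[min(y + 1, h - 1)]
        blurred.append([k0 * up[x] + k1 * rows[y][x] + k2 * dn[x]
                        for x in range(w)])
    # average each disjoint pair of adjacent lines, halving the height
    return [[(a + b) / 2 for a, b in zip(blurred[y], blurred[y + 1])]
            for y in range(0, h - 1, 2)]

flat = [[100, 100]] * 4
print(blur_and_downsample(flat))  # [[100.0, 100.0], [100.0, 100.0]]
```

A flat area passes through unchanged, while high-frequency vertical detail is attenuated before the 2:1 average, trading a little resolution for noise reduction.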
>I've read clues that unsharp masking can be a perfectly valid technique
>to compensate for sensor overlap, for example... but it's all a bit too
>vague in the threads I've read, covering wider topics than I am
>currently focusing on -- such as resolution, aliasing, etc.
>
Yes, this is essentially the opposite of what you are trying to do - put
more of the additional information of the double scan into increased
resolution rather than improved signal to noise ratio at the lower
resolution. Hence my comment that limited blurring may offer a benefit.
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he's pissed.
Python Philosophers (replace 'nospam' with 'kennedym' when replying)
> In article <1126228438.1...@g43g2000cwa.googlegroups.com>,
> ljl...@tiscalinet.it writes
> >
> >Don ha scritto:
> >
> >> [snip]
> >>
> >> Check the archives (for example on Google). Kennedy
> >> (<r...@nospam.demon.co.uk>) wrote many extensive and very technical
> >> articles about this in quite some detail, for example:
> >>
> >> Subject: Re: filmscanner vs hi-res flatbed
> >> Subject: Re: REPOST: Re: Plustek OptikFilm 7200
> >> etc.
> >
> >Maybe even a bit *too* technical ;-)
>
> Sorry, but sometimes it needs that technical detail to explain the true
> implications of the concept.
Oh, but it wasn't a criticism of you, I meant too technical *for me*.
When you talk about MTF and so on, I think I can grasp the basic ideas
behind those concepts, but can't really *understand* them to any
extent.
But it's certainly a very good thing that you can discuss the more
technical details on a newsgroup with people who understand them,
that's just what the Internet is good for in research!
> <snip>
> >
> >Instead, my post wanted to investigate the question: is scanning with a
> >1200+1200 dpi scanner comparable to multi-sampling with a 1200 dpi
> >scanner?
>
> That depends on whether the subject contains any information at higher
> than 1200ppi and if the lens is capable of resolving it. If it isn't
> then it is exactly the same as multisampling - which is why I always
> jump on posters who claim that there is no advantage to this scanning
> approach: even when there is no resolution advantage there is always the
> multisampling advantage.
Yes. But I can see two scenarios:
1) when there is no resolution advantage, is it really *exactly* as
multisampling, or does it lose some ground because of the misalignment?
or can the lost ground be re-gained with appropriate post-processing?
2) when there *is* resolution advantage, can the multisampling
advantage be exploited *together* with the resolution advantage, or must a
choice be made?
What I suspected is that a choice must be made, and that the choice
typically favors resolution over multi-sampling (i.e. noise
reduction).
Anyway, you see, I was thinking more about the *vertical* axis of
scanning (i.e. the "4800 dpi" of my scanner), where the resolution gain
appears to be practically nil, with pixels overlapping by three
quarters of their size.
There is also a post by you where you say that half-stepping on the
vertical axis is next to useless, at least concerning resolution.
But I can clearly see that it *is* useful in terms of noise reduction,
just by taking a scan at 2400x4800 (and then downsampling the 4800) and
one at 2400x2400.
When half-stepping, scanners usually interpolate on the horizontal axis
to get a 1:1 ratio. This I don't like (and in fact I'm trying to modify
my SANE driver accordingly): I'd like to take a purely 2400x4800 scan,
and then downsample *appropriately* on the vertical axis.
My main concern, which you address below, was on the meaning of
"appropriate downsampling" when downsampling an image that is made by
3/4ths overlapping pixels.
> [snip]
>
> This is no different from a single line sensor with double the pixel
> density when scanning an object which does not have as much resolution
> in the original - there is always an advantage to getting more samples
> of nominally the same data, but it can be debatable whether that
> advantage is worth the time and effort to do so.
More than "debatable", I'd call it a personal choice.
My scans at 1200x1200 are awfully noisy; those at 2400x2400 are better,
but I certainly do appreciate the benefit of 2400x4800, at least for
some pictures.
What worries me is the "nominally the same data" part. It's not
nominally the same data in the real world, unless the original is of a
much lower resolution than the sampling rate.
It's *almost* the same data, but shifted -- half a pixel horizontally
(double CCD), and 1/4 of a pixel vertically (half-stepping).
So, I'm under the impression that scanning at 2400x4800 (let's talk
about the half-stepping and ignore the double CCD) and then
downsampling the vertical axis gives me a less noisy, but blurrier
image than scanning at 2400x2400.
This wouldn't happen with "real" multi-sampling, i.e. samples taken at
exactly the same position. Question is, is there a software fix for
this? I'm taking your answer, below, as a "mostly no"...?
> >And if it is, should we process the image taking account of the pixel
> >offset/overlap, and if so, how?
> >
> The simplest method of doing this is a pixel average and downsample by a
> factor of two.
I.e. an image made of (each pixel from line n + the corresponding
pixel from line n+1) / 2 (considering only one direction)?
But this is really the same as treating it as a "standard"
multi-sampling, i.e. with no offset, isn't it?
Then what about the various bilinears, biquadratics and bicubics?
> Suffice to say that there isn't an exact method of
> separating the resolution from the SNR gain.
Which is to say that the offset between each pair of scan lines can't
be really accounted for in software?
> Half pixel realignment
> isn't really a solution in these cases because it involves resampling
> losses in itself which are likely to exceed any benefit that they are
> intended to gain. Some blurring, up to a quarter of a pixel may be
> advantageous.
Hm. Blurring, at what stage? Scans taken at 4800 and then resampled to
2400 (Photoshop, bicubic) look already blurrier than scans taken at
2400, as I said.
So, I take it you'd be blurring by 1/4 of a pixel and then
downsampling? But you'd still be downsampling with the method you
described above (average), rather than the standard functions in say
Photoshop, correct?
In any case I don't fully understand why you say that half-pixel
realignment isn't worth doing. I know the explanation would get
technical, but just tell me, shouldn't it be just as worthless when
done on multi-scans (the Don way, I mean, taking multiple scans and
then sub-pixel aligning them)?
The only difference is that, in "our" case, the amount of misalignment
is known. Which should even be an advantage, or shouldn't it?
> >I've read clues that unsharp masking can be a perfectly valid technique
> >to compensate for sensor overlap, for example... but it's all a bit too
> >vague in the threads I've read, covering wider topics than I am
> >currently focusing on -- such as resolution, aliasing, etc.
> >
> Yes, this is essentially the opposite of what you are trying to do - put
> more of the additional information of the double scan into increased
> resolution rather than improved signal to noise ratio at the lower
> resolution. Hence my comment that limited blurring may offer a benefit.
I see. But do you agree with me, in any case, that on the vertical
axis, the 4800 dpi of "resolution" are worthless as *resolution* and
much more useful as a substitute for multi-sampling (i.e. for improving
SNR)?
But anyway, what do you have to say about the unsharp masking -- which
I certainly consider doing on 2400x2400 scans?
My impression is that the standard, consumer-oriented Internet sources
say "just apply as much unsharp masking as you see fit".
But shouldn't there be an exact amount and radius of unsharp masking
that can be computed from the scanner's characteristics, judging from
the things you said in the various threads (which I only very partially
understood, though)?
> [snip]
by LjL
ljl...@tiscali.it
> [snip]
I forgot one more thing I wanted to ask.
Assume I settle on a solution I like for downsampling my vertical 4800
dpi to 2400.
As I wrote in the other post, I'm trying to patch my scanner driver to
have it output 2400 dpi on the *horizontal* axis instead of
interpolated 4800, but I'm afraid I might not make it.
(SANE doesn't even natively support 4800x4800dpi with interpolated x
axis, I have to patch it for that; and then, doing 2400x4800dpi
*without* interpolated x axis looks very hard, because the driver isn't
really written with different x/y resolutions in mind.)
So, assuming I only get 4800x4800dpi with interpolated x axis, how do I
downsample that axis?
Bicubic resize in Photoshop gives blurrier data, *on the x axis* (also
on the y axis, but I've treated that in the other post), than a simple,
uninterpolated 2400x2400 scan does.
Must I know the exact interpolation algorithm used by my scanner, in
order to recover the original data? Or doesn't even that suffice, and
some data gets lost irremediably with the interpolation?
I suppose an interpolation that works as
pixel1 -- (pixel1 + pixel2)/2 -- pixel2 -- (pixel2 + pixel3)/2 --
pixel3 -- etc
should be easily "reversed". But I currently have no clue about the
interpolation used by my scanner, and don't know whether some
interpolation methods are "irreversible".
Is there even perhaps a safe bet about my scanner's algorithm, that is
do most or all scanners use a specific algorithm?
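For what it's worth, the "easily reversed" case really is trivial: under the midpoint scheme hypothesized above (which is only a guess, not a known scanner algorithm), the original pixels survive untouched at the even sample positions, so plain decimation recovers them exactly. A toy Python sketch:

```python
def interpolate_2x(pixels):
    """Midpoint 2x interpolation: p1, (p1+p2)/2, p2, (p2+p3)/2, ..."""
    out = []
    for a, b in zip(pixels, pixels[1:]):
        out += [a, (a + b) / 2]
    out.append(pixels[-1])
    return out

def deinterpolate(samples):
    """Reverse it: the original pixels sit at the even indices,
    so simply drop the interpolated samples."""
    return samples[0::2]

orig = [10, 30, 20, 40]
print(interpolate_2x(orig))                 # [10, 20.0, 30, 25.0, 20, 30.0, 40]
print(deinterpolate(interpolate_2x(orig)))  # [10, 30, 20, 40]
```

A scheme that resamples onto a shifted grid (as many bicubic implementations do) would not preserve the original samples, which may be why a generic Photoshop downsample looks softer.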
by LjL
ljl...@tiscali.it
> Many flatbed scanners claim to offer a vertical resolution that is
> twice the horizontal resolution, such as 2400x4800 dpi. I understand
> this to mean that, while there are only 2400 cells in the CCD, the
> stepping motor can move by steps of 1/4800th of an inch.
>
> Additionally, these scanners' CCDs usually do not have a single row of
> 2400 cells, but two rows of 1200 each, which are positioned at a
> half-pixel offset.
>
> Now, if this is true (please confirm), don't we effectively have 4x
> multi-sampling when scanning at 1200 dpi?
Actually, many linear CCDs are 8400 or 10200 cells (pixel sites), though
divided by three to give each colour Red, Green, and Blue. Kodak have some
nice White Papers on these.
So in theory an 8400 element linear CCD should be able to resolve 2800
dpi, and a 10200 element CCD should be able to do 3400 dpi. The reality is
that each pixel site is not that efficient, and only resolves a fraction
of the total possible. Often that can be 0.3 to 0.8 of the cell site for
commercial imagers. That would give us an actual best of 2720 dpi for the
10200 element CCD, and 2240 dpi for the 8400 element CCD.
You should be aware that there are linear CCDs in scanners that are less
than 8400 elements, and expect those to perform worse. The stepper motors
and scanner optics will affect resolution. The size of the cell site for a
linear CCD will affect resolution and colour. The scanner optics could
have the most effect on resolution, and often are the limiting factor in
low end and mid range gear.
>
>
> There are several issues that I don't find clear.
>
> First: when scanning at 1200 dpi, do scanners actually use both CCD
> arrays and "mix" the results (I'm not simply saying "average" the
> results, since it might be too simplistic given the half-pixel
> offset), or do they only "turn on" one array?
>
> Second: when scanning at 2400 dpi, do scanners give out pixels in the
> order "1st pixel of 1st array | 1st pixel of 2nd array | 2nd pixel of
> 1st array | 2nd pixel of 2nd array", or do they somehow consider the
> fact that nearby pixels overlap one another by half their width?
> Of course, this also applies vertically, since while the motor moves by
> 1/2400th of an inch steps, pixels are 1/1200th of an inch "wide".
>
> Third: when scanning at "4800" dpi, what do scanners do about the
> horizontal resolution? Interpolation, I suppose. What kind of
> interpolation? Does it vary from scanner to scanner?
Interpolation can happen at an up or down value. It is controlled by fixed
sets of algorithms determined by the scanner manufacturers. Obviously,
this would vary between companies. In short, there is not one answer to
your questions, since different scanners will arrive at final files by
using different methods.
>
> And, do scanners that claim 2400x4800 resolution *really move the motor
> by 1/4800th steps when instructed to scan at 4800 dpi*, or do they just
> interpolate (since I know there are also other reasons for having
> 1/4800th stepping motors)? Does this vary from scanner to scanner?
Usually interpolated. Don't think this is all bad. While more resolution
and details might not be visible, overscanning can give smoother colour
transitions, since there are more final pixels in the resulting file. Of
course this only works if your printing output can use that extra
information.
>
>
> Now, let's see how all this relates to multi-sampling [...]
Multi-sampling is usually just done to decrease noise or sometimes to help
colour accuracy. The effectiveness of this will vary for each type of
scanner, each scanner manufacturer, and the software in use.
>
>
> [...]
>
> Now you probably also see why I asked all those questions about scanner
> behavior above, since to answer my doubts about multi-sampling one must
> be aware of how the scanner really behaves, and whatever it does to the
> data *before* giving them out to the user.
>
> Perhaps this whole article can be "scaled down" to the question: is
> scanning at 4800 dpi and then scaling down to 1200 dpi (with what?
> bilinear, bicubic...) equivalent to 4x multi-sampling at 1200 dpi?
> (Make substitutions between 4800, 2400 and 1200 above, and you'll get
> the other possible scenarios)
Scanning at some multiple of the claimed resolution might improve your
scans, if that is what you are after with all this investigation. If you
really want to get technical, check out the Dalsa and Kodak web sites,
then find the White Papers for their linear CCDs. You will get far more
technical information that way, though maybe more than is practical.
Ciao!
Gordon Moat
A G Studio
<http://www.allgstudio.com/technology.html>
> ljl...@tiscalinet.it wrote:
>
> > [snip]
>
> [snip]
>
> Scanning at some multiple of the claimed resolution might improve your
> scans, if that is what you are after with all this investigation. If you
> really want to get technical, check out the Dalsa and Kodak web sites,
> then find the White Papers for their linear CCDs. You will get far more
> technical information that way, though maybe more than is practical.
I don't want to get *too* technical.
In short, my scanner's got 2400 dpi horizontal. Sure, there are
complications: it's a "staggered" CCD, for one, and then all you've
written that I snipped (although I believe my scanner has three --
actually six -- linear CCDs, one for each color, not one -- actually
two -- very big linear CCD).
But let's just pretend for a moment that it's 2400 dpi optical, period.
What I want to do is scan at 4800 dpi in the *vertical* direction, i.e.
run the motor at "half-stepping". My scanner can do that.
The problem is twofold:
1) (the less important one) My scanner's software insists on
interpolating horizontally in order to fake 4800 dpi on both the x and
y axis, and I don't know how to "revert" this interpolation to get the
original data back (just downsampling with Photoshop appears to lose
something). But as you said, the interpolation algorithm varies between
scanners, so I'll have to find out what mine does, I suppose -- or,
hopefully, just manage to hack the open-source driver I'm using to
support 2400x4800 with no interpolation.
2) (the more important one) I, of course, don't want a 2:1 ratio image.
I just want 2400x2400, and use the "extra" 2400 I've got on the y axis
as one would use multi-sampling on a scanner supporting it. Yes, to get
better image quality and less noise, as you said.
But the question is, how to do it *well*?
I feel that I shouldn't just pretend I'm really multi-sampling (i.e.
taking two readouts for each scanline), because I am not. I ought to
somehow take into account the fact that each scanline is shifted by
"half a pixel" from the previous one.
Should I ignore this, and go on processing as if I were "really"
multi-sampling? Or should I downsample the image using bilinear,
bicubic, or something else more appropriate -- something that can take
the half-pixel offset into account?
I realize that simply downsampling the picture to 2400x2400 in
Photoshop or something gives decent results. But I'd just like to know
if there's something I'm missing.
In my mind, the "right" thing to do would be to consider the scan as
two separate scans (one made from the even scanlines, one made from the
odd scanlines); then merge the two images at a half-pixel offset. But
Kennedy said this is not such a great idea.
And in any case, even if Kennedy were wrong, I suppose there must be
some simpler transformation that gives the same result as the alignment
thing above... after all, it seems stupid to actually perform the
alignment and then the merging, when we know the misalignment is
exactly x=0, y=0.5.
All the other questions I posed in the original message were mostly
about how all this relates (if at all) to the fact that the CCD is
"staggered" (which in turn means that each sensor already overlaps its
neighbours by half its size -- or *about* half its size, since as you
pointed out, things actually get a bit more complicated).
by LjL
ljl...@tiscali.it
> [snip]
Hey, I've come across an article by you (in "EPSON Scan wouldn't make
large files (>1000 MB)", 2004), where you say
--- CUT ---
You are right about resolution, even the theoretical resolution gain is
marginal and almost certainly well below the sample to sample production
variation. But I don't think there has ever been any question about the
noise reduction aspect - if you resample the image back to 3200ppi using
nearest neighbour resizing it is mathematically exactly the same as 2x
multiscanning. That yields exactly 1.414x noise reduction - and all in
a single pass with a scanner which formally doesn't provide
multisampling at all. With no significant resolution gain, the noise
reduction is just there in the image without resampling.
--- CUT ---
Is nearest neighbour resizing (though I've got no idea what it is! but
thankfully there's the Internet for that) what I am looking for?
I mean, "mathematically exactly the same as 2x multiscanning" is really
close to what I had in mind. Confirm?
But I think I can see some bad news, too, as in
--- CUT ---
Not always. Some, indeed most flatbeds these days, exploit what is
known as "half stepping" of the stepper motor drive. These half steps
are less precise than the full step and less robust because only half
the holding force is produced by the motor coils [...]
--- CUT ---
So does that mean that I might possibly be losing more than I gain by
half-stepping? Although I suppose that, at most, I would end up with a
scan whose geometry doesn't perfectly match that of the original... or?
by LjL
ljl...@tiscali.it
>
>--- CUT ---
>
>Not always. Some, indeed most flatbeds these days, exploit what is
>known as "half stepping" of the stepper motor drive. These half steps
>are less precise than the full step and less robust because only half
>the holding force is produced by the motor coils [...]
>
>--- CUT ---
>
>So does that mean that I might possibly be losing more than I gain by
>half-stepping? Although I suppose that, at most, I would end up with a
>scan whose geometry doesn't perfectly match that of the original... or?
>
No, you won't end up with bad geometry from this, just that the
precision of that half pixel shift is not constant - it may be 2/3rds of
a pixel on one step and 1/3rd on the next - or 3 at 7/16th followed by 1
at 1/4. This is just another reason why attempting subpixel alignment
for this benefit is likely to cause more pain than gain.
Check the explanation of linear and tri-linear CCDs at
http://www.kodak.com/global/en/service/professional/tib/tib4131.jhtml?id=0.1.14.34.5.10&lc=en
The entire Kodak inventory of linear CCDs is listed at
http://www.kodak.com/global/en/digital/ccd/products/linear/linearMain.jhtml
and not one has the interleaved colour structure you describe.
Neither the Sony inventory of linear CCDs listed at
http://products.sel.sony.com/semi/ccd.html#CCD%20Linear%20Sensors nor
the NEC product inventory at
http://www.necel.com/partic/display/english/ccdlinear/ccdlinear_list.html
nor the Fairchild site at
http://www.fairchildimaging.com/main/prod_fpa_ccd_linear.htm has
anything similar.
Now, I am not saying these devices don't exist, but I would like some
pointer as to where you are getting this information from since it is
not from the Kodak or Dalsa sites you reference, and more likely to be a
misunderstanding on your part. Whilst there may well be colour
interleaved linear CCDs these are certainly not used on any commercial
scanners that I am aware of.
>So in theory an 8400 element linear CCD should be able to resolve 2800
>dpi, and a 10200 element CCD should be able to do 3400 dpi.
A colour CCD with a total of 8400 elements would only be capable of
resolving 2800 colour samples across the A4 page - somewhat better than
300ppi - whilst your 10200 element colour CCD would only be capable of
400ppi! The real requirements for flatbed scanners are *much* higher
than these!
An A4 scanner with a 1200ppi capability has a tri-linear CCD with around
10,500 cells in *each* line, i.e. a total of more than 31,000 cells. A
4800ppi full page scanner requires a CCD with more than 42000 cells in
each line, a total of over 125,000 cells.
cf. http://www.necel.com/nesdis/image/S17546EJ1V0DS00.pdf for data on
such a device, where each line is in itself produced by having four real
lines offset by a quarter of a pixel pitch. Guess which scanner that's
in! ;-)
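Those cell counts are easy to sanity-check with a couple of lines of Python (the ~8.7 in scan width is my assumption, roughly A4 plus a little margin):

```python
# Rough check of the cell-count arithmetic for a tri-linear (RGB) CCD.
scan_width_in = 8.7  # assumed scan width; A4 is 8.27 in plus some margin

for ppi in (1200, 4800):
    cells_per_line = ppi * scan_width_in
    total_cells = 3 * cells_per_line  # one full line per colour
    print(ppi, round(cells_per_line), round(total_cells))
# 1200 -> 10440 cells per line (~10,500), 31320 total (>31,000)
# 4800 -> 41760 cells per line (~42,000), 125280 total (>125,000)
```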
>check out the Dalsa and Kodak web sites,
Dalsa don't make linear CCDs (in fact they don't design CCDs - all of
their products are identical in form, function and nomenclature to
Philips devices - even the data sheets are Philips with a DALSA sticker
over the top!).
Interleaved colours (by Bayer masking) are common on two dimensional
CCDs (indeed, Bayer was a Kodak employee!) but this is unnecessary in
linear devices. I suspect that you are confusing the two.
That's OK - I didn't take it that way and appreciate that you recognise
the value of the detail, something that seems to get overlooked more and
more these days.
>
>When you talk about MTF and so on, I think I can grasp the basic ideas
>behind those concepts, but can't really *understand* them to any
>extent.
MTF is just a measure of the contrast that a particular component can
reproduce as a function of spatial frequency - it is the spatial
frequency response of the component - just like the graphs that used to
be printed on the back of audio tapes and hi-fi components showing their
response to audio frequencies. The main advantage of MTF in the
analysis of imaging systems is that the total response of a system is
exactly the product of all of the linearly combined individual
components. So with a knowledge of the components, you can derive an
exact measure of the system frequency response - and with a knowledge of
the frequency response you can predict the behaviour. You probably do
understand this, but I added it after writing a lot more about MTF than
I initially intended to later in this post.
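As a toy illustration of that "product of components" property (the contrast numbers below are invented, not measured MTFs):

```python
def system_mtf(components, freq):
    """Total system MTF at a spatial frequency: the product of the
    individual component MTFs (lens, CCD aperture, motion, ...)."""
    total = 1.0
    for mtf in components:
        total *= mtf(freq)
    return total

# Invented example responses at one spatial frequency:
lens = lambda f: 0.8  # lens passes 80% contrast here
ccd = lambda f: 0.5   # sensor aperture passes 50%
print(system_mtf([lens, ccd], 10.0))  # 0.4 -> the system reproduces 40% contrast
```

The multiplicative structure is the whole point: measure or model each component once, and the system response at any frequency falls out directly.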
>But I can see two scenarios:
>1) when there is no resolution advantage, is it really *exactly* as
>multisampling, or does it lose some ground because of the misalignment?
>or can the lost ground be re-gained with appropriate post-processing?
Well, it isn't *exactly* the same as multisampling, but the difference
is minimal and averages out. In the special case where there is no
spatial frequency higher than 0cy/mm on the image (a completely bland
and uniform scene), it clearly doesn't matter where in that scene
the samples are taken, they should all produce the same data, varying
only by the noise of the system. However, if the scene contains a
single low spatial frequency of, say, 1cy/mm then there will be a
systematic difference between samples taken at different phases of that
pattern - even though the spatial frequency is much lower than the
resolution of the basic single CCD line let alone the combination of the
two offset lines. However, since there is no correlation between the
sensor and the scene, that difference will be positive just as often as
it is negative and on average it will cancel out.
With clearly no resolution to be gained, this effect is negligible.
However, you can see that with a spatial frequency at the limit of the
single line of sensors, ie. still no actual resolution to be gained,
a second sample with a half-pitch offset can differ by up to 50% of
the reproduced contrast level from the original sample. (For example,
say the original CCD line sampled the peaks and troughs of the sine
wave; then the corresponding offset samples would fall at the mid
points, with a 50% level difference from either adjacent original -
again averaging out to zero. Obviously the contrast itself is only 64%
of the peak due to the finite width of the CCD cell being half the
cycle of the sine wave, so you are looking at a total possible error
of 32% - and the lens reduces this significantly further, perhaps to
2-3% at this spatial frequency.)
The noise, however, is always reduced by the square root of the number
of samples used.
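Just to make that square-root law concrete, here is a quick numerical
sketch (synthetic Gaussian noise standing in for the scanner noise;
the function name and figures are illustrative, not measured):

```python
import random
import statistics

def noise_after_averaging(n_samples, n_trials=20000, sigma=1.0):
    """Std deviation of the mean of n_samples noisy readings of one pixel."""
    rng = random.Random(42)  # fixed seed so the sketch is repeatable
    means = [
        sum(rng.gauss(0.0, sigma) for _ in range(n_samples)) / n_samples
        for _ in range(n_trials)
    ]
    return statistics.pstdev(means)

# Averaging 4 samples should cut the noise roughly in half (1/sqrt(4)).
print(noise_after_averaging(1))   # roughly 1.0 (no averaging)
print(noise_after_averaging(4))   # roughly 0.5 = 1/sqrt(4)
```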
>2) when there *is* resolution advantage, can the multisampling
>advantage be exploited *together* with the resolution advantage, or
>must a choice be made?
>
I don't see what you mean by "a choice being made" - it is merely
increased data sampling and you can increase the frequency response of
the system through post processing to maximise the resolution gain at
the expense of signal to noise ratio or decrease the frequency response
to maximise the SNR at the expense of resolution.
The fact that the sensors overlap in object space is not really an issue
- in fact, the individual sensors of *ALL* scanners and digital cameras
can be considered to overlap in the same way due to the blurring effect
of the lens (you can view this as blurring the image of the scene on the
sensor and as blurring the image of the sensor on the scene). It just
comes down to the component MTFs and the sampling density employed as to
how significant that "overlap" appears relative to the samples used.
One analogy that may help you visualise this is to consider the linear
CCD, not as an array of individual identical sensors, but as a single
cell which scans the image along the axis of the CCD. This single cell
will produce a single continuous waveform as it scans along the axis of
the CCD. If that waveform is now sampled only at the precise positions
where the cells in the original CCD exist then the resulting sampled
waveform will be indistinguishable from the output of the original CCD.
Now, that probably doesn't seem to make much difference initially, but
since the result is the same, the same equation describes the
waveform. The waveform of the scanned single element is simply the
convolution of the image projected onto it by the lens and the point
spread function of the single element - effectively its width. This
corresponds exactly to the product of the fourier transform of the image
(ie. its representation as a series of spatial frequencies as reproduced
by the lens) and the MTF of the individual cell. So now we have a
spatial frequency representation of the continuous waveform of the
single scanned cell - the fourier transform of the waveform. The
sampling process is simply multiplying the waveform by a series of delta
functions at the sampling positions, which corresponds to convolving the
fourier transform with a series of delta functions at the sampling
frequency and its harmonics. (This is the source of the aliasing etc.
where the negative components in frequency space appear as positive
frequencies when their origin is shifted to the first sampling frequency
- but that is another issue.)
So we can derive an equation to describe the output of the linear CCD by
considering it as a sampled version of a single scanned element. The
real advantage is that this equation is not restricted by the physical
manufacturing limitations of the CCD - there is no relationship between
the pixel size and pitch inherent in that equation. The cell dimension
can be very small or very large compared to the sampling frequency - the
equation remains unchanged.
For a square cell of width a, the MTF is readily computed to be
sin(pi.a.f)/(pi.a.f) [ensuring you calculate the sin in radians]. You
might like to plot out a few of these curves for different sizes of
cell. A cell from a single line 1200ppi CCD will have an effective cell
width of around 20um, a cell from a single line 2400ppi CCD will have a
width of around 10um. What you should see from this exercise is that
changing the cell width only changes the spatial frequency response of
the system. This is completely independent of the sampling density -
the size of the CCD cell is just a spatial filter with a particular
frequency response, just the same as the lens itself is another such
filter with known frequency response. Unlike the CCD cellular MTF, the
lens has a finite MTF (meaning that it falls to zero at a particular
spatial frequency and stays there at higher frequencies than this
cut-off). One of the rules of fourier transforms is that finite
functions on one side of the transform result in infinite functions on
the other side - so, while the CCD cell has a finite dimension and
spread it has an infinite frequency response (albeit at low amplitude),
the lens has a finite frequency response and consequently an infinite
spreading effect on the image (albeit at low amplitude). Hence my
earlier comment that no optical scanner actually has sensors which
do not overlap to some degree. All that is different is how much
response remains in the system at the sampling density - ideally,
invoking Nyquist, there should be no response to frequencies greater
than half the sampling density.
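If you want to try the plotting exercise, here is a minimal sketch of
that cell MTF (the 20um and 10um widths are the approximate figures
above, and the list of spatial frequencies is just my arbitrary choice):

```python
import math

def cell_mtf(a_um, f_cy_per_mm):
    """MTF of a square CCD cell of width a (um) at spatial frequency f (cy/mm)."""
    x = math.pi * (a_um / 1000.0) * f_cy_per_mm  # convert width to mm first
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

# A cell spanning exactly half a cycle (a.f = 0.5) reproduces
# sin(pi/2)/(pi/2) = 2/pi of the contrast - the 64% figure above.
print(round(cell_mtf(20, 25), 3))  # -> 0.637

# Tabulate the two quoted cell widths across a few frequencies:
for a in (20, 10):  # ~1200ppi and ~2400ppi cell widths
    row = [round(cell_mtf(a, f), 2) for f in (10, 20, 40, 80)]
    print(f"{a}um cell:", row)
```

Plot enough of these points and you should see exactly the effect
described: halving the cell width only stretches the frequency
response, independently of the sampling density.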
With a scanner which has a staggered CCD, or which half-steps the
linear CCD in the scan axis, all that is happening is that you move the
sampling frequency further up the MTF curve - where the contrast
reproduced by the lens and the CCD itself is less. So there really
isn't a choice to be made: you would treat a full-stepped 4800ppi
scanned image just the same as you would treat a half-stepped 4800ppi
image - they both behave exactly the same.
>
>There is also a post by you where you say that half-stepping on the
>vertical axis is next to useless, at least concerning resolution.
>
Yes, for the reasons provided above. Once you include the optics MTF
and the CCD cell MTF and then plot where the sampling frequency is, it
is clear that all of the spatial frequencies that can be resolved by the
system are done so well before the advantage of half stepping
(effectively increasing the sampling density by x4) is realised.
>But I can clearly see that it *is* useful in terms of noise reduction,
>just by taking a scan at 2400x4800 (and then downsampling the 4800) and
>one at 2400x2400.
>
Yes, because the resolution benefit is negligible, so all of the
additional information is simply noise reduction.
>When half-stepping, scanners usually interpolate on the horizontal axis
>to get a 1:1 ratio. This I don't like (and in fact I'm trying to modify
>my SANE driver accordingly): I'd like to take a purely 2400x4800 scan,
>and then downsample *appropriately* on the vertical axis.
>
That would be an average of each of the two 4800ppi samples.
>My scans at 1200x1200 are awfully noisy; those at 2400x2400 are better,
>but I certainly do appreciate the benefit of 2400x4800, at least for
>some pictures.
>
Yes, 2400x2400ppi downsampled to 1200x1200ppi will have a x2 improvement
in SNR, assuming that the noise is not limited by the bit depth you use
in the process. 2400x4800ppi down to 1200x1200ppi should provide about
x2.8 in SNR.
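Those x2 and x2.8 figures are just the square root of the number of
samples averaged into each output pixel; as a sketch:

```python
import math

def snr_gain(scan_ppi_x, scan_ppi_y, out_ppi):
    """SNR improvement from downsampling: sqrt of samples per output pixel."""
    samples = (scan_ppi_x / out_ppi) * (scan_ppi_y / out_ppi)
    return math.sqrt(samples)

print(round(snr_gain(2400, 2400, 1200), 1))  # 4 samples -> 2.0
print(round(snr_gain(2400, 4800, 1200), 1))  # 8 samples -> 2.8
```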
>What worries me is the "nominally the same data" part. It's not
>nominally the same data in the real world, unless the original is of a
>much lower resolution than the sampling rate.
>It's *almost* the same data, but shifted -- half a pixel horizontally
>(double CCD), and 1/4 of a pixel vertically (half-stepping).
>
It is shifted, but that is just a higher frequency sample. The shift in
sample position will only produce a difference in signal if there is a
resolution gain to be obtained - what you are trying to do is forego any
resolution benefit for the SNR benefit.
>So, I'm under the impression that scanning at 2400x4800 (let's talk
>about the half-stepping and ignoring the double CCD)
- they are both the same thing in principle.
> and then
>downsampling the vertical axis gives me a less noisy, but blurrier
>image than scanning at 2400x2400.
>
The slight loss in doing this is due to the change in the MTF of the
system. If you average two cells from a 1200ppi line that are offset by
1/4800ppi then there is a slight increase in the overall size of the
cell - but this is marginal in the scheme of things. A rough gide of
how significant can be seen by examining the MTF of a cell 1/1200" wide
at a spatial frequency of 4800ppi, the shift that is present. A more
detailed assessment of the MTFs shows that the difference at the
limiting resolution of the 1200ppi image is only 3%, and less than 1% at
4800ppi, confirming that the shift itself is negligible as is the
resolution gain.
>This wouldn't happen with "real" multi-sampling, i.e. samples taken at
>exactly the same position. Question is, is there a software fix for
>this? I'm taking your answer, below, as a "mostly no"...?
>
No, not at all, just that re-alignment isn't it - there are too many
losses in the process for it to yield a worthwhile benefit.
>> >And if it is, should we process the image taking account of the pixel
>> >offset/overlap, and if so, how?
>> >
>> The simplest method of doing this is a pixel average and downsample by a
>> factor of two.
>
>I.e. an image made by (all pixels from line n + every pixel from line
>n+1) / 2 (that is considering only one direction)?
>But this is really the same as treating it as a "standard"
>multi-sampling, i.e. with no offset, isn't it?
Yes, because when the sampling density is this much higher than the
resolution of the system that shift is no longer significant.
>
>Then what about the various bilinears, biquadratics and bicubics?
>
Just different downsampling algorithms - the difference between them
swamps the effect that this minor shift has. In effect these are
interpolations with different frequency responses - the higher the
order, the flatter and sharper the cut-off of the frequency response,
so the better the result.
>> Suffice to say that there isn't an exact method of
>> separating the resolution from the SNR gain.
>
>Which is to say that the offset between each pair of scan lines can't
>be really accounted for in software?
>
Exactly - but it can be very closely approximated.
>
>In any case I don't fully understand why you say that half-pixel
>realignment isn't worth doing. I know the explanation would get
>technical, but just tell me, shouldn't it be just as worthless when
>done on multi-scans (the Don way, I mean, taking multiple scans and
>then sub-pixel aligning them)?
What you are doing is not the same as what Don is trying to achieve. You
are multisampling, which means the noise throughout the density range of
the image reduces by the square root of number of samples. Don is
extending the dynamic range of the image directly with an improvement in
the noise only in the extended region which is directly proportional to
the scale of the two exposures. These are very different effects for
very different applications. Don's technique, for example, is very
useful with high contrast and density originals, but offers no advantage
with low contrast materials such as negatives. Conversely,
multiscanning offers the same, albeit reduced, advantage to both. For
example, Don's technique can extend the effective scan density by, say,
10:1 (increasing the Dmax by 1) reducing the noise in the shadows by the
same amount, in only two exposures. Multiscanning will only reduce the
noise by 29% (ie. to 71% of its original level) with two exposures, or
by 68% (ie. to 32% of its original level) with 10 exposures.
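The 29% and 68% figures fall straight out of the square-root law; as a
sketch:

```python
import math

def multiscan_noise_reduction(n_exposures):
    """Fraction by which multiscanning reduces noise: 1 - 1/sqrt(n)."""
    return 1.0 - 1.0 / math.sqrt(n_exposures)

print(round(multiscan_noise_reduction(2) * 100))   # -> 29
print(round(multiscan_noise_reduction(10) * 100))  # -> 68
```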
Since the benefits of multiscanning are much less direct, being only a
square root function, the susceptibility of that benefit to unnecessary
processing losses is consequentially higher.
>The only difference is that, in "our" case, the amount of misalignment
>
>I see. But do you agree with me, in any case, that on the vertical
>axis, the 4800 dpi of "resolution" are worthless as *resolution* and
>much more useful as a substitute for multi-sampling (i.e. for improving
>SNR)?
>
Absolutely! (and have stated as much on several occasions).
>But anyway, what do you have to say about the unsharp masking -- which
>I certainly consider doing on 2400x2400 scans?
>My impression is that the standard, consumer-oriented Internet sources
>say "just apply as much unsharp masking as you see fit".
>
>But shouldn't there be an exact amount and radius of unsharp masking
>that can be computed from the scanner's characteristics, seeing from
>the things you said in the various threads (which I only very partially
>understood, though)?
>
Yes, there should be and there is. There is also an exact amount of USM
that is required for any particular output, whether screen or print and
that changes with scale etc. The general advice of "apply as required"
is usually given because estimating the exact amount of sharpening (not
just USM) to compensate for the scanner, printer (and any loss in the
original, such as the camera lens etc.) is extremely complex.
> Gordon Moat ha scritto:
>
> > ljl...@tiscalinet.it wrote:
> >
> > > [snip]
> >
> > [snip]
> >
> > Scanning at some multiple of the claimed resolution might improve your
> > scans, if that is what you are after with all this investigation. If you
> > really want to get technical, check out the Dalsa and Kodak web sites,
> > then find the White Papers for their linear CCDs. You will get far more
> > technical information that way, though maybe more than is practical.
>
> I don't want to get *too* technical.
>
Though you want to hack the driver. ;-)
>
> In short, my scanner's got 2400 dpi horizontal. Sure, there are
> complications: it's a "staggered" CCD, for one, and then all you've
> written that I snipped (although I believe my scanner has three --
> actually six -- linear CCDs, one for each color, not one -- actually
> two -- very big linear CCD).
If it is moving the optics, and not the CCD, then it has a three or four row
CCD with RGB filtering over it. If it is moving the CCDs, then it could be
several. Of course, you could crack it open and find out. ;-)
>
>
> But let's just pretend for a moment that it's 2400 dpi optical, period.
You would be lucky for it to be much better than half that, but for the
sake of discussion...
>
>
> What I want to do is scan at 4800 dpi in the *vertical* direction, i.e.
> run the motor at "half-stepping". My scanner can do that.
>
> The problem is twofold:
>
> 1) (the less important one) My scanner's software insists on
> interpolating horizontally in order to fake 4800 dpi on both the x and
> y axis, and I don't know how to "revert" this interpolation to get the
> original data back (just downsampling with Photoshop appears to lose
> something). But as you said, the interpolation algorithm varies between
> scanners, so I'll have to find out what mine does, I suppose -- or,
> hopefully, just manage to hack the open-source driver I'm using to
> support 2400x4800 with no interpolation.
Make that a threefold problem . . . how and what do you plan to use to
view that image? In Photoshop, you would view 2400 by 4800 as a
rectangle; if all the information was 2400 by 2400, viewing gives you a
square; if you want a square image and have a 2:1 ratio of pixels, then
your square image will be viewed like a stretched rectangle. This is
similar to a problem that comes up in video editing for still images:
video uses non-square pixels, so square-pixel still images need to be
altered to fit a non-square video display.
>
>
> 2) (the more important one) I, of course, don't want a 2:1 ratio image.
> I just want 2400x2400, and use the "extra" 2400 I've got on the y axis
> as one would use multi-sampling on a scanner supporting it. Yes, to get
> better image quality and less noise, as you said.
> But the question is, how to do it *well*?
Or how to actually still view it as a square image.
>
> I feel that I shouldn't just pretend I'm really multi-sampling (i.e.
> taking two readouts for each scanline), because I am not. I ought to
> somehow take into account the fact that each scanline is shifted by
> "half a pixel" from the previous one.
> Should I ignore this, and go on processing as if I were "really"
> multi-sampling? Or should I downsample the image using bilinear,
> bicubic, or something else more appropriate -- something that can take
> the half-pixel offset into account?
Perhaps using some high end video editing software would get you closer,
since you could work directly with non-square pixels.
>
>
> I realize that simply downsampling the picture to 2400x2400 in
> Photoshop or something gives decent results. But I'd just like to know
> if there's something I'm missing.
>
> In my mind, the "right" thing to do would be to consider the scan as
> two separate scans (one made from the even scanlines, one made from the
> odd scanlines); then merge the two image at an half-pixel offset. But
> Kennedy said this is not such a great idea.
> And in any case, even if Kennedy were wrong, I suppose there must be
> some simpler transformation that gives the same result as the alignment
> thing above... after all, it seems stupid to actually perform the
> alignment and then the merging, when we know the misalignment is
> exactly x=0, y=0.5.
Okay, just a side note on technology. Canon came up with a half pixel shift
idea in 3 CCD video several years ago. Panasonic and Sony tried something
similar, but basically gave up on it on professional 2/3" 3 CCD cameras. The
Canon idea was to slightly alter the spacing to enhance edge resolution, and
choose green since it corresponds to how human eyes like to view things. Then
the in-camcorder processing put all that back together as a real image. I
don't know of a way to separate out the original capture information, unless
you got that prior to in-camera processing.
>
>
> All the other questions I posed in the original message were mostly
> about how all this relates (if anyhow) with the fact the CCD is
> "staggered" (which in turn means that each sensors already overlaps
> each other sensors by half their size -- or *about* half their size,
> since as you pointed out, things get actually a bit more complicated).
I have not heard of anyone outside of Canon still using a staggered idea. I
think Microtek may have tried it, or possibly UMAX. In order to really do
something different with that, much like the video example above, it seems
you would need to get the electronic signal directly off the CCD prior to any
in-scanner processing of the capture signal. Basically that means hacking
into the scanner. I don't see how that would be practical; even if you came
up with something, you still have a low cost scanner with limited optical
(true) resolution and colour abilities.
> In article <4325DEBE...@attglobal.net>, Gordon Moat
> <mo...@attglobal.net> writes
> >
> >Actually, many linear CCDs are 8400 or 10200 cells (pixel sites), though
> >divided by three to give each colour Red, Green, and Blue. Kodak have some
> >nice White Papers on these.
> >
> Not generally - colour linear CCDs used in scanners are generally
> tri-linear. Each colour is a separate parallel line of CCDs; they are
> not divided. Now, I am not saying these devices don't exist, but
> I would like some
> pointer as to where you are getting this information from since it is
> not from the Kodak or Dalsa sites you reference, and more likely to be a
> misunderstanding on your part. Whilst there may well be colour
> interleaved linear CCDs these are certainly not used on any commercial
> scanners that I am aware of.
Okay, maybe I should have stated that better. So I will give you one to find
and read about. That is the Kodak KLI-10203 Imaging Sensor. It is correctly
termed a 3 x 10200 imager, so I apologize for not being more thorough in my
description of it. The white paper and long spec sheet for this one run
to 27 pages, so I will skip typing the details in this message.
>
>
> >So in theory an 8400 element linear CCD should be able to resolve 2800
> >dpi, and a 10200 element CCD should be able to do 3400 dpi.
>
> A colour CCD with a total of 8400 elements would only be capable of
> resolving 2800 colour samples across the A4 page - somewhat better than
> 300ppi - whilst your 10200 element colour CCD would only be capable of
> 400ppi! The real requirements for flatbed scanners are *much* higher
> than these!
If you could figure out what scanner uses the KLI-10203, then you might be
surprised at your statements. Just to give you a hint, it is only available in
a few high end products. The lowest spec (and lowest cost) of those does 3200
dpi true resolution. That is across the entire bed, and not just down the
middle.
>
>
> An A4 scanner with a 1200ppi capability has a tri-linear CCD with round
> 10,500 cells in *each* line ie. a total of more than 31,000 cells. A
> 4800ppi full page scanner requires a CCD with more than 42000 cells in
> each line, a total of over 125,000 cells.
Okay, just to throw out some numbers, and then you can do calculations,
or whatever. Using the KLI-10203 again, the cell sites are 7 µm square
pixels. There are 3 rows of 10200 cells each, so 30600 total cells. Row
spacing is 154 µm centre to centre. There is no sideways offset of cells
in each row, and the spacing allows a processing timing gap of 22 lines.
>
>
> cf. http://www.necel.com/nesdis/image/S17546EJ1V0DS00.pdf for data on
> such a device, where each line is in itself produced by having four real
> lines offset by a quarter of a pixel pitch. Guess which scanner that's
> in! ;-)
>
> >check out the Dalsa and Kodak web sites,
>
> Dalsa don't make linear CCDs (in fact they don't design CCDs - all of
> their products are identical in form, function and nomenclature to
> Philips devices - even the data sheets are Philips with a DALSA sticker
> over the top!).
Dalsa bought out the Philips imaging chip business, though they kept some
engineers and other workers. Is it still possible to buy imaging chips directly
from Philips? Anyway, they do have some nice information on chips on their
website. Fill Factory in Belgium are another company with some nice technical
information. With Sony, I have not been very impressed with the level of
information from them, though they do make lots of imaging chips for lots of
companies.
>
>
> Interleaved colours (by Bayer masking) is common on two dimensional CCDs
> (indeed, Bayer was a Kodak employee!) but this is unnecessary in linear
> devices. I suspect that you are confusing the two.
Okay, to be more specific, each row on a linear CCD has a colour filter over
it. In other words, on our KLI-10203 example, one row has a red filter, one row
has a blue filter, and one row has a green filter. There is no need for a Bayer
pattern, since one row is scanned at a time, and the final result is three
colour channels of information.
There are 3 CCD digital still cameras, and they do not use Bayer pattern
filters either. They do use one overall colour filter over the surface of each
of the three chips. The result again is three colour channels of information.
Bayer patterning is an alternating arrangement of colour filters over
each pixel on one imaging chip. Basically, the patterns vary across
manufacturers, though usually RGBG, with twice as many green-filtered
pixels as red or blue. The information to create three colour channels
is interpolated (often by in-camera processing). This is unlike
scanning, or 3 CCD still cameras.
Okay, so I don't recall mentioning interleaving, but interpolation was
mentioned, though only for upsizing or downsizing to change resolution. The OP
wants to use what he thinks might be extra resolution in one dimension of the
specifications for his scanner.
An exception to colour filtering is in many Nikon film scanners, since they use
coloured LEDs as a light source. I would suspect those are Sony imaging chips
in those Nikon scanners. While many do like the LED approach, it is interesting
to note that is not done in any high end scanning systems. I doubt it is some
patent issue, and more likely that a single light source provides a more
predictable scanning operation in regards to colour accuracy over the life of
the scanner.
Anyway, I apologize for not being more clear: a 10200 linear CCD should
be correctly termed a 3 x 10200 element linear CCD. Regardless, the
resolution is still limited by the physical size of the cell site, the
scanner optics, and the accuracy of movement of the imaging components
within the scanner. A linear image sensor with a single array of 1000
photosites of pitch 10 µm would have a resolution of 2540 dpi
(1000 / (1000 x .01 mm x 1"/25.4mm)). If that sensor were used in an
optical system to image an 8" wide document, then the resolution in the
document plane would be 125 dpi (1000 pixels / 8"). If we consider the
7 µm cell size for the KLI-10203, for example, then we can estimate for
that imager.
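The sensor-plane and document-plane figures above can be checked in a
couple of lines (a sketch; only the 10 µm pitch and 8" width examples
from the text are used, and the function names are my own):

```python
def sensor_dpi(pitch_um):
    """Resolution at the sensor plane: one sample per pitch, in inches."""
    return 25400.0 / pitch_um  # 25400 um per inch

def document_dpi(n_photosites, doc_width_in):
    """Resolution at the document plane: total samples over the scan width."""
    return n_photosites / doc_width_in

print(sensor_dpi(10))         # -> 2540.0
print(document_dpi(1000, 8))  # -> 125.0
```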
Scanner optics are still the most limiting factor, and could be the main piece
that contributes to limiting true optical resolution. Trying to find
information about scanner optics is tough, though there is a little about these
available from Rodenstock and a few other companies. Interestingly, it is much
easier to find out information on high end systems, and nearly impossible to
find useful information on low end systems. Maybe that is just the way it
should be.
> In article <1126572389.0...@g47g2000cwa.googlegroups.com>,
> ljl...@tiscalinet.it writes
>
> > [snip]
>
> [snip]
>
> >Is nearest neighbour resizing (though I've got no idea what it is! but
> >thankfully there's the Internet for that) what I am looking for?
> >
> That depends on the version you use. Direct nearest neighbour
> downsampling would not help since that just throws away the unused data.
> However many applications prefilter the data prior to downsampling, if
> not by exactly the correct blur average, by something very close to it.
> The exact function would be the 2:1 pixel average in each axis that you
> are downsampling by.
Oh. Which is what you suggested to me in other messages, isn't it?
Well, I suppose I'll settle for that then.
In the end I don't really care what applications do, since I'm going to
write my own little program to do this -- I need to do that for other
reasons, anyway.
So, just out of curiosity, nearest neighbour after applying the
"correct blur average" corresponds to a 2:1 pixel average?
Just one more time, to be sure, what you're telling me to do is
for (int y = 0; y < OriginalHeight/2; y++) {
    for (int x = 0; x < OriginalWidth; x++) {
        NewImage[x, y] = (OldImage[x, y*2] + OldImage[x, y*2 + 1]) / 2;
    }
}
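A runnable equivalent of that loop, as a sketch in Python (assuming the
image is a list of scanlines of 8-bit values, and using integer
division as the /2 above implies):

```python
def downsample_vertical_2to1(old):
    """Average each pair of adjacent scanlines: rows 2y and 2y+1 -> row y."""
    height, width = len(old), len(old[0])
    return [
        [(old[2 * y][x] + old[2 * y + 1][x]) // 2 for x in range(width)]
        for y in range(height // 2)
    ]

# Four scanlines in, two out: each output row is the mean of a pair.
scan = [[10, 20], [30, 40], [50, 60], [70, 80]]
print(downsample_vertical_2to1(scan))  # -> [[20, 30], [60, 70]]
```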
> [snip: half-stepping the motor is not very precise, and although it
> doesn't result in bad geometry, it contributes to making
> sub-pixel alignment worthless]
I see. Good to know.
by LjL
ljl...@tiscali.it
The KLI-10203 is a tri-linear CCD (check the FIRST line of the data
sheet!) - *each* of the lines is 10200 cells long and *each* of the
lines is a separate colour - no interleaving. So, contrary to your
claim that this could only resolve 3400ppi because it has 3 colours in
each line, it can resolve 5100 cycles across its length. Without
optical scaling it resolves
3600ppi, with optical scaling (as would be used in a scanner
application) this can be set to match whatever the scanner width is - on
the 8.5in flatbed scanner configuration that the OP is referencing, it
would produce around 1200ppi.
>
>> >So in theory an 8400 element linear CCD should be able to resolve 2800
>> >dpi, and a 10200 element CCD should be able to do 3400 dpi.
>>
>> A colour CCD with a total of 8400 elements would only be capable of
>> resolving 2800 colour samples across the A4 page - somewhat better than
>> 300ppi - whilst your 10200 element colour CCD would only be capable of
>> 400ppi! The real requirements for flatbed scanners are *much* higher
>> than these!
>
>If you could figure out what scanner uses the KLI-10203, then you might be
>surprised at your statements.
I don't think so, mainly since the statement is based on *YOUR* figures
that the 10200pixel CCD is only capable of 3400ppi! Perhaps you see now
why it was ridiculous? And before you wriggle further - KODAK DON'T
MAKE A 3400PIXEL LONG TRILINEAR (10200 TOTAL CELLS) CCD AND NEVER HAVE!!
An A4 flatbed scanner, as the type under discussion in this thread,
means a scan width of approximately 8.5"; 10200 pixels across that
distance yields exactly 1200ppi - no division by three because the
colours are on three separate lines of 10200 cells *each*, not
interleaved on a single line as you suggested.
>Just to give you a hint, it is only available in
>a few high end products. The lowest spec (and lowest cost) of those does 3200
>dpi true resolution. That is across the entire bed, and not just down the
>middle.
Not across the full A4 width on a single pass it isn't. To achieve
3200ppi resolution requires a scan width of no greater than 3.2" -
around a third of the width of the flatbed under discussion!
>
>> An A4 scanner with a 1200ppi capability has a tri-linear CCD with round
>> 10,500 cells in *each* line ie. a total of more than 31,000 cells. A
>> 4800ppi full page scanner requires a CCD with more than 42000 cells in
>> each line, a total of over 125,000 cells.
>
>Okay, just to through out some numbers, and then you can do calculations, or
>whatever. Using the KLI-10203 again, the cell sites are 7 μm square pixels.
>There are 3 rows of 10200 cells each, so 30600 total cells. Row spacing is 154
>μm centre to centre. There is no sideways offset of cells in each row, and the
>spacing allows a processing timing gap of 22 lines.
>
And how much of that determines the ppi of the final application? Hint
- nothing, but now we know you can read a data sheet!
So where does *your* figure of 3400ppi limitation for this particular
device come from - apart from your initial misreading of the data?
>
>Dalsa bought out the Philips imaging chip business, though they kept some
>engineers and other workers. Is it still possible to buy imaging chips directly
>from Philips?
Certainly was the last time I tried, which I believe was earlier this
year although time flies.
>Anyway, they do have some nice information on chips on their
>website.
They do, but *none* of them are linear arrays and making inferences from
the limitations of 2-D arrays, particularly colour arrays, on linear
devices is misleading at best and completely deceptive at worst. For
example, DALSA's biggest array is only 5344 pixels along the largest
axis - but you wouldn't interpret that as state of the art for a linear
array!
>>
>> Interleaved colours (by Bayer masking) is common on two dimensional CCDs
>> (indeed, Bayer was a Kodak employee!) but this is unnecessary in linear
>> devices. I suspect that you are confusing the two.
>
>Okay, to be more specific, each row on a linear CCD has a colour filter over
>it.
Precisely - but that isn't what you wrote last time! You stated that
the 3 colours resulted in a resolution of only one third of the number
of pixels in the line.
<snip the millennium prize for rewording the previous post!>
>
>Okay, so I don't recall mentioning interleaving, but interpolation was
>mentioned, though only for upsizing or downsizing to change resolution. The OP
>wants to use what he thinks might be extra resolution in one dimension of the
>specifications for his scanner.
>
No he doesn't - or at least that isn't what he has asked about. He is
interested in using available samples in two axes that do not provide as
much resolution as he would like as a means of achieving improved signal
to noise at a lower resolution.
The CCD in his case is similar to the NEC uPD8880 device, a trilinear
array with 21360 cells in each colour, capable of producing 2400ppi
across an A4 platform. Each of the colour lines comprises two rows of
10,680 cells capable of reproducing 1200ppi on the flatbed, but offset
by half a pixel pitch to create a 2400ppi sample density. In addition,
the scanner motor is capable of moving the scan head in 4800ppi steps,
further oversampling the original pixels. He is interested in using
these oversamples optimally for signal to noise improvement at 2400ppi
and possibly as low as 1200ppi rather than have some of their
information being used to achieve resolution which is already
compromised by the optical system of the scanner.
>An exception to colour filtering is in many Nikon film scanners, since they use
>coloured LEDs as a light source. I would suspect those are Sony imaging chips
>in those Nikon scanners.
You would be wrong.
>While many do like the LED approach, it is interesting
>to note that is not done in any high end scanning systems.
Wrong again! It is exactly the process used in high end film scanner
systems - the difference being that the LEDs are replaced with colour
lasers to achieve a higher intensity and thus a faster throughput.
>I doubt it is some
>patent issue, and more likely that a single light source provides a more
>predictable scanning operation in regards to colour accuracy over the life of
>the scanner.
>
>Anyway, I apologize for not being more clear, a 10200 linear CCD should be
>correctly termed a 3 x 10200 element linear CCD. Regardless the resolution is
>still limited by the physical size of the cell site, the scanner optics, and
>the accuracy of movement of the imaging components within the scanner. A linear
>image sensor with a single array of 1000 photosites of pitch 10 μm would have a
>resolution of 2540 dpi (1000 / (1000 x .01 mm x 1"/25.4mm)). If that sensor
>were used in an optical system to image an 8" wide document, then the
>resolution in the document plane would be 125 dpi (1000 pixels / 8"). If we
>consider the 7 μm cell size for the KLI-10203, for example, then we can
>estimate for that imager.
>
You don't need to go round the houses - the calculation is trivial. An
8.5in scan width with 10200 cells per line (no matter what the optical
system or the cell size or pitch is) results in 10200/8.5 = 1200ppi.
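That calculation, for anyone wanting to check other cell counts and bed widths, really is just one division (an illustrative sketch; the function name is made up):

```python
def sampling_ppi(cells_per_line, scan_width_inches):
    """Sampling resolution in pixels per inch: the cells in one CCD
    line divided by the width they are optically scaled to cover."""
    return cells_per_line / scan_width_inches

# KLI-10203 on an 8.5" flatbed: 10200 / 8.5 = 1200 ppi
assert sampling_ppi(10200, 8.5) == 1200.0
```

The same function gives 850 ppi for the same chip imaged across a 12" bed, which is the point being made later in the thread about single-swathe scanning.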
>Just one more time, to be sure, what you're telling me to do is
>
>for(int y=0; y<OriginalHeight/2; y++) {
> for(int x=0; x<OriginalWidth; x++) {
> NewImage[x, y] = (OldImage[x, y*2] + OldImage[x, y*2+1]) / 2;
> }
>}
>
Seems OK.
That'll give you half as many y pixels in the new image as the old, but
with an SNR of about x1.4 of the old - assuming that the bit depth in
NewImage[x,y] is adequate to avoid overflow prior to that divide by 2
step.
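The x1.4 figure follows from the usual square-root law for averaging independent, equally noisy samples; a quick check (an illustrative sketch, not code from this thread):

```python
import math

def snr_gain(n_samples):
    """Averaging n equally noisy, independent samples improves the
    signal-to-noise ratio by the square root of n."""
    return math.sqrt(n_samples)

# The figures quoted in this thread:
assert round(snr_gain(2), 1) == 1.4   # two lines averaged: "about x1.4"
assert round(snr_gain(8), 1) == 2.8   # eight pixels summed: "x2.8"
```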
You could go further with a reduction to 1200x1200ppi by summing 4y and
2x pixels for each NewImage[x,y] and achieve an SNR improvement of x2.8,
but you need to maintain adequate temporary precision to compute the sum
of 8 pixels, before dividing and truncating, to get the final result.
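The 4y/2x reduction described above can be sketched as follows (an illustrative Python version of the earlier C-style pseudocode; the function name is invented, and Python's arbitrary-precision integers stand in for the "adequate temporary precision"):

```python
def downsample_4y2x(old):
    """Box-filter reduction: sum 4 vertical x 2 horizontal pixels of a
    row-major image (a list of rows), then divide by 8, truncating only
    after the full sum so no intermediate precision is lost."""
    height, width = len(old), len(old[0])
    new = []
    for y in range(height // 4):
        row = []
        for x in range(width // 2):
            total = sum(old[4 * y + dy][2 * x + dx]
                        for dy in range(4) for dx in range(2))
            row.append(total // 8)  # divide and truncate last
        new.append(row)
    return new
```

A uniform patch survives unchanged (eight 10s sum to 80, giving 10 back), which is the sanity check one would expect of any averaging filter.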
It figures that an amateur mathematician hobbyist would have never used a high end
scanner. Theories disappear when you actually are able to use devices that have
these installed in them. There are no imaging chips with 100% efficient cell sites,
nor any without a dead zone between cell sites of greater than 1 µm in size. You can
calculate all you want, but actual tests of this gear are far better than theory.
>
> >
> >> >So in theory an 8400 element linear CCD should be able to resolve 2800
> >> >dpi, and a 10200 element CCD should be able to do 3400 dpi.
> >>
> >> A colour CCD with a total of 8400 elements would only be capable of
> >> resolving 2800 colour samples across the A4 page - somewhat better than
> >> 300ppi - whilst your 10200 element colour CCD would only be capable of
> >> 400ppi! The real requirements for flatbed scanners are *much* higher
> >> than these!
> >
> >If you could figure out what scanner uses the KLI-10203, then you might be
> >surprised at your statements.
>
> I don't think so, mainly since the statement is based on *YOUR* figures
> that the 10200pixel CCD is only capable of 3400ppi!
Not the CCD, but the system in which it is installed. You cannot have a flat bed
scanner without optical components. Those optical components will limit the total
system resolution. In fact, that resolution is based on actual tests of scanners
with that exact 10200 pixel (3 rows to be specific) imaging CCD. I don't pull these
numbers out of my ass, I get them from the industry that uses these things and
actually does test them.
> Perhaps you see now
> why it was ridiculous? And before you wriggle further - KODAK DON'T
> MAKE A 3400PIXEL LONG TRILINEAR (10200 TOTAL CELLS) CCD AND NEVER HAVE!!
That statement shows your level of ineptitude, and lack of reading comprehension.
The 3400 dpi figure is the OPTICAL resolution, not the size of the file. The number
of cells does not determine the optical resolution, since all system components
affect the "optical" (or true, or actual) resolution. In fact, the current best flat
bed actual optical resolution is 5600 dpi across the entire 12" by 17" scanner bed,
and those two particular scanners used an 8000 element tri-linear CCD. That very
simple fact should tell you that the optical resolution is not simply a factor of
the imaging chip construction.
>
>
> An A4 flatbed scanner, as the type under discussion in this thread,
> means a scan width of approximately 8.5"; 10200 pixels across that
> distance yields exactly 1200ppi - no division by three because the
> colours are on three separate lines of 10200 cells *each*, not
> interleaved on a single line as you suggested.
Three rows of 10200 cell sites each, 30600 in total . . . did you not read my second
reply, or are you just trying to be dense on purpose? Further information is that
the particular example I chose, the KLI-10203, has a physical dimension of 76.87 mm
by 1.6 mm . . . seems to me that is much smaller than 8.5" across, unless you are
using a different metric-to-English conversion.
Just to update you a little bit, the smallest bed width in which the KLI-10203 is
actually installed is 305 mm, or about 12". The length of that particular smallest
scanner is 457 mm, or about 18". Much larger than A4. In fact, I don't know of any
true high optical resolution scanners that are A4 sized, nor do I know of any A4
sized flatbeds that use the KLI-10203. Maybe I should have picked a lesser imager
for this discussion.
>
>
> >Just to give you a hint, it is only available in
> >a few high end products. The lowest spec (and lowest cost) of those does 3200
> >dpi true resolution. That is across the entire bed, and not just down the
> >middle.
>
> Not across the full A4 width on a single pass it isn't. To achieve
> 3200ppi resolution requires a scan width of no greater than 3.2" -
> around a third of the width of the flatbed under discussion!
What you are missing is that not all scanning systems in flat beds use a "pass" in
one direction method of scanning. There are XY scan and XY stitch, and variations of
that to scan the entire flat bed area. Send off an 8" by 10" transparency to
Creo/Kodak and ask them to scan it for you . . . of course, I should alert you that
they only offer that for potential customers who are serious about buying their
products.
>
> >
> >> An A4 scanner with a 1200ppi capability has a tri-linear CCD with round
> >> 10,500 cells in *each* line ie. a total of more than 31,000 cells. A
> >> 4800ppi full page scanner requires a CCD with more than 42000 cells in
> >> each line, a total of over 125,000 cells.
> >
> >Okay, just to throw out some numbers, and then you can do calculations, or
> >whatever. Using the KLI-10203 again, the cell sites are 7 μm square pixels.
> >There are 3 rows of 10200 cells each, so 30600 total cells. Row spacing is 154
> >μm centre to centre. There is no sideways offset of cells in each row, and the
> >spacing allows a processing timing gap of 22 lines.
> >
> And how much of that determines the ppi of the final application? Hint
> - nothing, but now we know you can read a data sheet!
>
> So where does *your* figure of 3400ppi limitation for this particular
> device come from - apart from your initial misreading of the data?
Actual test of high end scanning gear. True optical resolution. In fact, the very
best can do much better than 3400 dpi, though all of those use a different imaging
chip. Many of those use an 8000 element tri-linear CCD, and add better optics,
active chip cooling, and even more precise positioning. Try Creo EverSmart line,
Dainippon Screen Cezanne, and Fuji Lanovia Quattro. Actually, the Fuji Lanovia
Quattro has a 10500 Quad-linear CCD for colour scans based on their super CCD
technology, and adds a single line 16800 element CCD for copydot usage (do I need to
explain copydot scanning?), so that particular Fuji (and their FineScan 5000)
actually do better than 5000 dpi across the scanning bed. I could also mention
Purop-Eskofot, but they are not easy to find.
Just to give you a very simple explanation, that 1200 dpi figure you calculated
would be very close to the actual in a system in which very simple optics were used
in the scanner. In fact, around 1999, when these chips were new, that was nearly the
limit in almost any flat bed scanner. Since that time, scanner optics have improved,
and positioning of optical elements has improved. Those improvements are expensive
to implement, and why you only see them at the high end. However, those improved
optics and better ways to move the optical elements help that family of circa 72 mm
CCDs achieve better than 1200 dpi true optical resolution, and even high
interpolated resolution.
>
>
> >
> >Dalsa bought out the Philips imaging chip business, though they kept some
> >engineers and other workers. Is it still possible to buy imaging chips directly
> >from Philips?
>
> Certainly was the last time I tried, which I believe was earlier this
> year although time flies.
Okay, glad to see you got something right, and nice to hear Philips chip division is
still plugging away. ;-)
>
>
> >Anyway, they do have some nice information on chips on their
> >website.
>
> They do, but *none* of them are linear arrays and making inferences from
> the limitations of 2-D arrays, particularly colour arrays, on linear
> devices is misleading at best and completely deceptive at worst. For
> example, DALSA's biggest array is only 5344 pixels along the largest
> axis - but you wouldn't interpret that as state of the art for a linear
> array!
>
> >>
> >> Interleaved colours (by Bayer masking) is common on two dimensional CCDs
> >> (indeed, Bayer was a Kodak employee!) but this is unnecessary in linear
> >> devices. I suspect that you are confusing the two.
> >
> >Okay, to be more specific, each row on a linear CCD has a colour filter over
> >it.
>
> Precisely - but that isn't what you wrote last time! You stated that
> the 3 colours resulted in a resolution of only one third of the number
> of pixels in the line.
I misstated it, though hopefully it is more clear in the following posts. Also, I
did apologize for not being as correct and thorough as I usually write. Interesting
that your earlier tone is different . . . almost makes me feel that you respond in
line prior to reading everything, which would be careless in the event that is the
situation.
>
>
> <snip the millennium prize for rewording the previous post!>
> >
> >Okay, so I don't recall mentioning interleaving, but interpolation was
> >mentioned, though only for upsizing or downsizing to change resolution. The OP
> >wants to use what he thinks might be extra resolution in one dimension of the
> >specifications for his scanner.
> >
> No he doesn't - or at least that isn't what he has asked about. He is
> interested in using available samples in two axes that do not provide as
> much resolution as he would like as a means of achieving improved signal
> to noise at a lower resolution.
>
> The CCD in his case is similar to the NEC uPD8880 device, a trilinear
> array with 21360 cells in each colour, capable of producing 2400ppi
> across an A4 platform. Each of the colour lines comprises two rows of
> 10,680 cells capable of reproducing 1200ppi on the flatbed, but offset
> by half a pixel pitch to create a 2400ppi sample density. In addition,
> the scanner motor is capable of moving the scan head in 4800ppi steps,
> further oversampling the original pixels. He is interested in using
> these oversamples optimally for signal to noise improvement at 2400ppi
> and possibly as low as 1200ppi rather than have some of their
> information being used to achieve resolution which is already
> compromised by the optical system of the scanner.
Okay, so sounds like a UMAX, Epson, or maybe a Microtek.
>
>
> >An exception to colour filtering is in many Nikon film scanners, since they use
> >coloured LEDs as a light source. I would suspect those are Sony imaging chips
> >in those Nikon scanners.
>
> You would be wrong.
Big Fluffy Dog . . . I have run through enough broken Nikon scanners to avoid them.
They are poor production choices. Great shame they are not as well built and rugged
as their top level cameras.
>
>
> >While many do like the LED approach, it is interesting
> >to note that is not done in any high end scanning systems.
>
> Wrong again! It is exactly the process used in high end film scanner
> systems - the difference being that the LEDs are replaced with colour
> lasers to achieve a higher intensity and thus a faster throughput.
I don't recall Imacon using LEDs . . . okay, just checked and all current models are
not LEDs. Or perhaps you actually think a Nikon film scanner is a high end product?
Put any Nikon scanner into a high volume environment, and they break just a bit too
soon to take them seriously for producing income. It is better to spend a bit more
and get high resolution with high volume and little to no downtime. Now to be just a
little critical of Imacon, they did have some units in the recent past that were a
little more troublesome than should have been expected, though their service is very
fast and efficient (a statement few would make of the current situation at Nikon
USA).
>
>
> >I doubt it is some
> >patent issue, and more likely that a single light source provides a more
> >predictable scanning operation in regards to colour accuracy over the life of
> >the scanner.
> >
> >Anyway, I apologize for not being more clear, a 10200 linear CCD should be
> >correctly termed a 3 x 10200 element linear CCD. Regardless the resolution is
> >still limited by the physical size of the cell site, the scanner optics, and
> >the accuracy of movement of the imaging components within the scanner. A linear
> >image sensor with a single array of 1000 photosites of pitch 10 μm would have a
> >resolution of 2540 dpi (1000 / (1000 x .01 mm x 1"/25.4mm)). If that sensor
> >were used in an optical system to image an 8" wide document, then the
> >resolution in the document plane would be 125 dpi (1000 pixels / 8"). If we
> >consider the 7 μm cell size for the KLI-10203, for example, then we can
> >estimate for that imager.
> >
> You don't need to go round the houses - the calculation is trivial.
The calculation is from the spec sheet, and used as an example. It was also posted
as a lure to see how you would respond. I did not come up with the original
calculation in that paragraph, I merely transposed it. Anyway . . . . .
> An
> 8.5in scan width with 10200 cells per line (no matter what the optical
> system or the cell size or pitch is) results in 10200/8.5 = 1200ppi.
Why don't you tell me how 3400 dpi measured optical resolution is possible using a
circa 72 mm 10200 element tri-linear CCD. This should be quite amusing. Oh, and just
for fun, use that 12" by 17" bed as your explanation basis. The device is the Creo
iQSmart1, in case you have not figured that one out yet.
What I think you are missing is that "line" is a term for the line of the CCD, which
is about 72 mm, not 8.5". I am sure you have read about many scanners with a "sweet
spot" near the centre of the flat bed. This is due to limitations in movement of the
optics, mirror, CCD platen, or any other components that move to allow scanning to
occur. Low end and mid range systems, of which I am certain are your primary
experience, have very simple and very limited imaging components. Better control of
optics, movements, and signal processing will improve results.
Come on Kennedy, I thought you were smarter than this. See this as a challenge, and
then figure out why high end scanning gear works so well, and costs so much. I judge
scanners based on actual tests performed to determine true optical capability, and
not just resolution. Good design control will also help colour accuracy and Dmin to
Dmax performance. Read too many Epson, Canon, UMAX, MicroTek, Minolta, or other low
and mid range gear spec sheets, and you can easily be fooled into thinking these
cheaper devices are much better than they really perform. At least the lower cost
film scanners do better than the low cost flat bed scanners.
I have no idea, and care less, what your particular bent or limitation
is, although your comments betray lack of any scientific or
instrumentation design knowledge. I assume you have some photographic
knowledge, and as a consequence some experience of using commercial
scanner systems. Suffice to say that I have spent over 25 years in the
electro-optic imaging industry and in that time have designed, built and
tested many high end imagers and scanning systems for applications you
would probably never be able to contemplate. Please don't use your
personal limitations as an excuse for blatant stupidity.
>Theories disappear when you actually are able to use devices that have
>these installed in them. There are no imaging chips with 100% efficient
>cell sites,
>nor any without a dead zone between cell sites of greater than 1 μm in
>size. You can
>calculate all you want, but actual tests of this gear are far better
>than theory.
>
Dead zones between pixels determine the fill factor and *improve* the
resolved MTF - they make absolutely no difference to any of these
calculations! As far as tests are concerned - you should revise yours:
the Kodak specification for the device you referenced actually explains
this effect in surprising detail for a data sheet. Perhaps you will
read it, but it has no effect on the fact that this device will produce
a resolution of 1200ppi on an 8.5" scan width.
>>
>> I don't think so, mainly since the statement is based on *YOUR* figures
>> that the 10200pixel CCD is only capable of 3400ppi!
>
>Not the CCD, but the system in which it is installed. You cannot have a
>flat bed
>scanner without optical components.
No, you are wriggling again! Your initial comment made no statement
about optics - this was, according to you, the maximum that a 10200 cell
linear array could resolve, and it is as wrong now as it was then -
despite a feeble attempt to invoke optics at the last minute!
> Those optical components will limit the total
>system resolution. In fact, that resolution is based on actual tests of
>scanners
>with that exact 10200 pixel (3 rows to be specific) imaging CCD. I
>don't pull these
>numbers out of my ass,
>
Sounds like you are pulling excuses out of your ass though.
>I get them from the industry that uses these things and
>actually does test them.
There's the rub, bozo - I am part of that industry, and have been for two
and a half decades, and these figures are trivial to derive from basic
design criteria and tolerancing.
The MTF of your example Kodak array is around 60% at Nyquist, depending
on the clock rate. The MTF of a suitable optic can easily exceed 70% at
the same resolution. If you are measuring much less than 35% contrast
at 1200ppi on an 8.5" scan from this device then you really need to be
re-examining your optical layout, because it certainly isn't high
performance. As for the optical MTF at your claimed 3400ppi limit for
the device: it should readily exceed 90% and thus has little effect at
all on the performance of the device.
>> Perhaps you see now
>> why it was ridiculous? And before you wriggle further - KODAK DON'T
>> MAKE A 3400PIXEL LONG TRILINEAR (10200 TOTAL CELLS) CCD AND NEVER HAVE!!
>
>That statement shows your level of ineptitude, and lack of reading
>comprehension.
>The 3400 dpi figure is the OPTICAL resolution, not the size of the
>file.
On the contrary, it shows you have no idea what you are talking about.
Name ONE (even an obsolete example) Kodak trilinear CCD with 10200 total
cells in each line which had an optical resolution of only 3400ppi when
optically scaled to an 8.5" scan width. You really are talking
absurdities! Even directly at the focal plane itself, the KLI-10203
device is capable of 3600 samples per inch with an MTF of approximately
60% at that resolution (and I remind you that your allegation was not
specific to this device with its particular pixel size, but to all 10200
element linear arrays!).
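The 3600 samples-per-inch figure follows directly from the 7 μm cell size (a sketch; it assumes the pixel pitch equals the stated cell size, with no gap between sites):

```python
def native_spi(pixel_pitch_um):
    """Samples per inch at the focal plane from pixel pitch in microns:
    25400 microns per inch divided by the pitch."""
    return 25400.0 / pixel_pitch_um

# 7 um pixels give roughly 3628 samples/inch - the "3600 samples per
# inch" figure quoted above for the KLI-10203 at its own focal plane.
assert 3600 <= native_spi(7.0) <= 3650
```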
>The number
>of cells does not determine the optical resolution,
It certainly does in terms of "dpi", "ppi" parameters that you have been
quoting. These terms define the SAMPLING RESOLUTION!
>since all system components
>affect the "optical" (or true, or actual) resolution.
I suggest you learn something about imaging system design before making
yourself look even more stupid than you already do. First lesson should
be what defines optical resolution and what units it is measured in.
Clue: you haven't mentioned them once yet!
> In fact, the current best flat
>bed actual optical resolution is 5600 dpi across the entire 12" by 17"
>scanner bed,
>and those two particular scanners used an 8000 element tri-linear CCD.
>That very
>simple fact should tell you that the optical resolution is not simply a
>factor of
>the imaging chip construction.
>
You really don't have a clue, do you? How many swathes does this Rolls
Royce of scanners make to achieve 5600ppi on a 12" scan width with only
8000 pixels in each line? Perhaps you dropped a zero, or misunderstood
the numbers or just lied.
>>
>Further information is that
>the particular example I chose, the KLI-10203, has a physical dimension
>of 76.87 mm
>by 1.6 mm . . . seems to me that is much smaller than 8.5" across,
>unless you are
>using a different metric to english conversion.
You would build a scanner from such a detector without an imaging optic
to project the flatbed onto the focal plane? And you *STILL* claim you
know what you are talking about? You really are stretching credulity to
extremes now.
>
>Just to update you a little bit, the smallest bed width in which the
>KLI-10203 is
>actually installed is 305 mm, or about 12".
In which case it would be unable to yield much more than 800ppi in a
single swathe at that width!
>> >The lowest spec (and lowest cost) of those does 3200
>> >dpi true resolution. That is across the entire bed, and not just down the
>> >middle.
>>
>> Not across the full A4 width on a single pass it isn't. To achieve
>> 3200ppi resolution requires a scan width of no greater than 3.2" -
>> around a third of the width of the flatbed under discussion!
>
>What you are missing is that not all scanning systems in flat beds use
>a "pass" in
>one direction method of scanning.
I fail to see how I could have missed this point when I specifically
made reference to the condition of a single pass in the sentence you
have quoted above!
Since the scanner under discussion on this thread is a single pass
scanner, and the OP is specifically interested in what he can achieve in
that single pass, I see no need to extend the explanation to swathe
equipment.
>>
>> So where does *your* figure of 3400ppi limitation for this particular
>> device come from - apart from your initial misreading of the data?
>
>Actual test of high end scanning gear. True optical resolution.
Incredible. Not only because even cheap scanners now achieve better
than this, but because neither "ppi" nor "dpi" is an appropriate
measurement unit for "optical resolution" in the first place!
>
>Just to give you a very simple explanation, that 1200 dpi figure you calculated
>would be very close to the actual in a system in which very simple
>optics were used
>in the scanner.
I didn't suggest otherwise - a simple optic with a single pass scan.
That is what we are discussing in this thread. You are the one bringing
in additional complications to justify your original mistaken advice to
the OP.
>In fact, around 1999, when these chips were new, that was nearly the
>limit in almost any flat bed scanner. Since that time, scanner optics
>have improved,
>and positioning of optical elements has improved. Those improvements
>are expensive
>to implement, and why you only see them at the high end. However, those
>improved
>optics and better ways to move the optical elements help that family of
>circa 72 mm
>CCDs achieve better than 1200 dpi true optical resolution, and even high
>interpolated resolution.
>
I suggest you look up the original patents for this "microscan"
technology - you will find a familiar name in the inventors - and it was
well before 1999 - although that could be around the time that the
original patents expired. Even so, as the inventor of aspects of that
particular technology, I can assure you that diffraction is still the
limit of all optics.
>> Precisely - but that isn't what you wrote last time! You stated that
>> the 3 colours resulted in a resolution of only one third of the number
>> of pixels in the line.
>
>I misstated it, though hopefully it is more clear in the following
>posts.
No, your "following posts" were full of excuses and feeble
justifications (such as optics) to justify your original assertion
rather than a simple statement that you were wrong.
>Also, I
>did apologize for not being as correct and thorough as I usually write.
No you didn't, you said "OK Maybe I should have stated that better".
That does not, under any circumstances, amount to either an apology or
an admission of being incorrect, let alone both.
>Interesting
>that your earlier tone is different . . . almost makes me feel that you
>respond in
>line prior to reading everything, which would be careless in the event
>that is the
>situation.
No, I browse a post first to capture the gist of the message and then
respond to the specific lines I quote.
>> No he doesn't - or at least that isn't what he has asked about. He is
>> interested in using available samples in two axes that do not provide as
>> much resolution as he would like as a means of achieving improved signal
>> to noise at a lower resolution.
>>
>> The CCD in his case is similar to the NEC uPD8880 device, a trilinear
>> array with 21360 cells in each colour, capable of producing 2400ppi
>> across an A4 platform. Each of the colour lines comprises two rows of
>> 10,680 cells capable of reproducing 1200ppi on the flatbed, but offset
>> by half a pixel pitch to create a 2400ppi sample density. In addition,
>> the scanner motor is capable of moving the scan head in 4800ppi steps,
>> further oversampling the original pixels. He is interested in using
>> these oversamples optimally for signal to noise improvement at 2400ppi
>> and possibly as low as 1200ppi rather than have some of their
>> information being used to achieve resolution which is already
>> compromised by the optical system of the scanner.
>
>Okay, so sounds like a UMAX, Epson, or maybe a Microtek.
>
Or just about any consumer grade flatbed scanner in that class of the
market these days.
>>
>> >An exception to colour filtering is in many Nikon film scanners,
>> >since they use
>> >coloured LEDs as a light source. I would suspect those are Sony
>> >imaging chips
>> >in those Nikon scanners.
>>
>> You would be wrong.
>
>Big Fluffy Dog . . . I have run through enough broken Nikon scanners to
>avoid them.
>They are poor production choices. Great shame they are not as well
>built and rugged
>as their top level cameras.
>
And what does that have to do with your allegation that they contain
Sony CCDs? You are like a child pissing up a wall.
>>
>> >While many do like the LED approach, it is interesting
>> >to note that is not done in any high end scanning systems.
>>
>> Wrong again! It is exactly the process used in high end film scanner
>> systems - the difference being that the LEDs are replaced with colour
>> lasers to achieve a higher intensity and thus a faster throughput.
>
>I don't recall Imacon using LEDs . . . okay, just checked and all
>current models are
>not LEDs.
Did you actually read what was written, Bozo? Why are you still asking
about LEDs?
>I did not come up with the original
>calculation in that paragraph,
Why is that no surprise??
>> An
>> 8.5in scan width with 10200 cells per line (no matter what the optical
>> system or the cell size or pitch is) results in 10200/8.5 = 1200ppi.
>
>Why don't you tell me how 3400 dpi measured optical resolution is
>possible using a
>circa 72 mm 10200 element tri-linear CCD.
TIP: optical resolution is measured at the flatbed surface, not at the
focal plane - the reason for that is that only the flatbed surface is
accessible for testing other than during design and manufacture and it
is the only position that matters to the end user. The physical size of
the CCD has no direct influence on the resolution obtained other than
its implications on the optical system requirements. 7um pixels are
relatively trivial to resolve optically - low cost digital still cameras
work well with sub-3um pixels, albeit with limited minimum apertures,
but the pixel resolution is not particularly demanding.
> This should be quite amusing.
It should indeed since it is quite simple really. In terms of
measurement: assess the MTF of the scanner using an ISO-12233 or
ISO-16067 references depending on subject matter and determine the
optical resolution at an agreed minimum MTF. Industry standard is
nominally 10%, but some people play specmanship games though that is
unnecessary here. You should note that this optical resolution will not
be in dpi or ppi, but I leave it to you to figure what it will be, since
you demonstrate ignorance and need to learn some facts.
In terms of design, just for fun, use your example of the KLI-10203
which has a Nyquist MTF of better than 60% at 2MHz clock rate. Fit an
IR filter, cut-off around 750nm, to eliminate out of band response.
Select a 1:3 f/4 relay objective from one of many optical suppliers. Few
will fail to meet an MTF of over 70% on axis at the sensor's Nyquist
frequency and those from the better suppliers including Pilkington,
Perkin Elmer etc should achieve this across the entire field. Damping
mechanism and timing to eliminate lateral post-step motion or, ideally,
continuous backscan compensation of focal plane by multi-facet polygon.
Result: Scan width = 8.5"; sampling resolution = 1200ppi; MTF at Nyquist
for native resolution >=35% (ie. well resolved, optical resolution
exceeds sampling resolution!).
MTF at Nyquist for 3400dpi should exceed 80%, based on CTE limited MTF
of 95% for the detector and 90% optical MTF with 1 wavefront error at
this lower resolution.
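Those component figures combine by simple multiplication, since the MTFs of cascaded imaging components multiply at any given spatial frequency (an illustrative sketch using the numbers above):

```python
def system_mtf(*component_mtfs):
    """System MTF at one spatial frequency: the product of the
    component MTFs (detector, optics, etc.) at that frequency."""
    result = 1.0
    for mtf in component_mtfs:
        result *= mtf
    return result

# At the sensor's Nyquist frequency: 60% detector x 70% optics = 42%,
# comfortably above the >=35% system figure quoted.
assert system_mtf(0.60, 0.70) >= 0.35
# At the claimed 3400dpi limit: 95% CTE-limited detector x 90% optics,
# exceeding the 80% figure quoted.
assert system_mtf(0.95, 0.90) > 0.80
```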
These are just figures for optics and your example detector that I
happen to have in front of me at the moment - with a little searching it
might be possible to obtain better. Nevertheless, 1200ppi resolution is
clearly practical on an 8.5" scan width with the device you seem to
believe can only achieve 3400ppi. Hardly surprising though is it -
similar CCDs from other manufacturers are actually specified as
1200ppi/A4 devices!
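For what it's worth, the MTF figures above can be sanity-checked by treating the
system MTF at a given spatial frequency as roughly the product of the component
MTFs. A minimal Python sketch, using only the illustrative percentages quoted
above (not measured values):

```python
def system_mtf(*component_mtfs):
    """Cascade component MTFs by multiplication (a common first-order
    approximation; a real analysis needs the full MTF curves)."""
    result = 1.0
    for mtf in component_mtfs:
        result *= mtf
    return result

# Native 1200ppi case: detector ~60% at Nyquist, optics ~70% -> ~42%,
# comfortably above the 35% claimed.
print(system_mtf(0.60, 0.70))

# 3400dpi case: CTE-limited detector MTF 95%, optical MTF 90% -> ~85.5%,
# which exceeds the 80% claimed.
print(system_mtf(0.95, 0.90))
```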
>Oh, and just
>for fun, use that 12" by 17" bed as your explanation basis. The device
>is the Creo
>iQSmart1, in case you have not figured that one out yet.
>
Cor, shifting goalposts really is your forte isn't it. We determine a
projected resolution on an 8.5" width platform and you want to see it
achieved on a 12" platform. Do you understand the ratio of 8.5 and 12?
You are an idiot and I rest my case!
I really am enjoying your selective snipping, so without further delay, on with the
show . . . .
Kennedy McEwen wrote:
> In article <4327B3C8...@attglobal.net>, Gordon Moat
> <mo...@attglobal.net> writes
> >Kennedy McEwen wrote:
> >
> >> The KLI-10203 is a tri-linear CCD (check the FIRST line of the data
> >> sheet!) - *each* of the lines is 10200 cells long and *each* of the
> >> lines is a separate colour - no interleaving. So, contrary to your
> >> claim that this could only resolve 3400ppi, because it has 3 colours in
> >> each line, it can resolve 5100 cy/length. Without optical scaling it resolves
> >> 3600ppi, with optical scaling (as would be used in a scanner
> >> application) this can be set to match whatever the scanner width is - on
> >> the 8.5in flatbed scanner configuration that the OP is referencing, it
> >> would produce around 1200ppi.
> >
> >It figures that an amateur mathematician hobbyist would have never used
> >a high end
> >scanner.
>
> I have no idea, and care less, what your particular bent or limitation
> is, although your comments betray lack of any scientific or
> instrumentation design knowledge. I assume you have some photographic
> knowledge, and as a consequence some experience of using commercial
> scanner systems. Suffice to say that I have spent over 25 years in the
> electro-optic imaging industry . . . . .
And yet not one imaging publication pays you to write. So instead you come to
Usenet and try to gather fans. Interesting behaviour. When will I see your writings
in Imaging Technologies, Electronic Publishing, Photo Techniques, or Advanced
Imaging magazines?
As if you would care, my expertise is commercial imaging and commercial printing.
One does not need to know how to construct a camera, printer, scanner, or computer
in order to use them. It is possible to test devices without tearing them apart,
and many people have done just that, and written extensively about that.
>
>
> . . . . . . . . . .
> >Not the CCD, but the system in which it is installed. You cannot have a
> >flat bed
> >scanner without optical components.
>
> No, you are wriggling again! Your initial comment made no statement
> about optics
How could you not account for the optics? Also, I suggest you read again, since I
have mentioned optics many times. Maybe if you did not snip so much, you could
actually remember that.
> - this was, according to you, the maximum that a 10200 cell
> linear array could resolve, and it is as wrong now as it was then -
> despite a feeble attempt to invoke optics at the last minute!
The best system using such an array only does an actual 3400 dpi, though the
interpolated resolution can be higher.
>
>
> . . . . . . .
>
> >I get them from the industry that uses these things and
> >actually does test them.
>
> There's the rub, bozo
Resorting to name calling just makes you look bad, and I would have expected better
from you.
> - I am part of that industry and have been two and
> a half decades and these figures are trivial to derive from basic design
> criteria and tolerancing.
So you never actually test these devices? You only do calculations?
>
>
> The MTF of your example Kodak array is around 60% at Nyquist, depending
> on the clock rate. The MTF of a suitable optic can easily exceed 70% at
> the same resolution. If you are measuring much less than 35% contrast
> at 1200ppi on an 8.5" scan from this device then you really need to be
> re-examining your optical layout, because it certainly isn't high
> performance. As for the optical MTF at your claimed 3400ppi limit for
> the device: it should readily exceed 90% and thus has little effect at
> all on the performance of the device.
>
> >> Perhaps you see now
> >> why it was ridiculous? And before you wriggle further - KODAK DON'T
> >> MAKE A 3400PIXEL LONG TRILINEAR (10200 TOTAL CELLS) CCD AND NEVER HAVE!!
> >
> >That statement shows your level of ineptitude, and lack of reading
> >comprehension.
> >The 3400 dpi figure is the OPTICAL resolution, not the size of the
> >file.
>
> On the contrary, it shows you have no idea what you are talking about.
> Name ONE (even an obsolete example) Kodak trilinear CCD with 10200 total
> cells in each line which had an optical resolution of only 3400ppi when
> optically scaled to an 8.5" scan width.
Optically scaled, now there is the rub. ;-)
See how it is not possible to talk about the imager without mentioning the optics.
So I mention the optics and you attempt to slam me for it . . . it should be
assumed at the start that a flatbed scanner has optics in place; it should not even
need to be stated.
> You really are talking
> absurdities! Even directly at the focal plane itself, the KLI-10203
> device is capable of 3600 samples per inch with an MTF of approximately
> 60% at that resolution (and I remind you that your allegation was not
> specific to this device with its particular pixel size, but to all 10200
> element linear arrays!).
Approximately 72 mm length of a row with 10200 elements gives about 141 samples per
millimetre. Convert to inches and it is about 3600 samples per inch, imagine that.
So how would you arrange the carriage and optics to get that many samples? Hint:
this is done in two scanners using that imager.
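The arithmetic is easy to check; a quick sketch (the 72 mm row length is the
approximate figure given above):

```python
ELEMENTS = 10200        # cells per line in the KLI-10203
ROW_LENGTH_MM = 72.0    # approximate active row length
MM_PER_INCH = 25.4

samples_per_mm = ELEMENTS / ROW_LENGTH_MM
samples_per_inch = samples_per_mm * MM_PER_INCH
print(samples_per_mm)    # ~141.7 samples/mm
print(samples_per_inch)  # ~3598, i.e. about 3600 samples per inch
```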
My statement used the term best, but if you want to think that is "all", then go
ahead. The reason I stated that was to give the OP a sense that his scanner might
have an actual resolution much lower than the best scanners now on the market. That
is despite the resolution probably stated by his scanner's manufacturer, of 2400 by
4800 as he originally posted. I seriously doubt his scanner achieves an actual
optical resolution of 2400 dpi, though I don't doubt his file size stated that
dimension.
>
>
> >The number
> >of cells does not determine the optical resolution,
>
> It certainly does in terms of "dpi", "ppi" parameters that you have been
> quoting. These terms define the SAMPLING RESOLUTION!
The size in microns of the cells has more effect on the resolution, though the
optics in the scanner can still be a greater limit. Some scanner optics are only
good for around 40 to 50 lp/mm, even in some high end systems. A few are more
capable.
>
>
> >since all system components
> >affect the "optical" (or true, or actual) resolution.
>
> I suggest you learn something about imaging system design before making
> yourself look even more stupid than you already do. First lesson should
> be what defines optical resolution and what units it is measured in.
> Clue: you haven't mentioned them once yet!
Nothing wrong with assuming people already understand cycles per millimetre, or
line pairs per millimetre. No wonder your posts drone on for hours, so many
disclaimers and definitions . . . are you this boring in person too . . . . . .
;-)
I mentioned it several times: all the components affect the system resolution.
Start with a great imager chip, then throw a crap lens in front of it; or try using
a poor stepper motor. We can equate, or reference, dpi and ppi with lp/mm or cy/mm,
though there are some basics that should be stated. Even in commercial printing,
with imagesetters running 2400 dpi or 2540 dpi, the size of the dots is different,
and in some systems can be variable.
When we look at file sizes in ppi, the size of the image on the monitor can differ
on monitors with differing display resolutions; put an image at 100% size on one
monitor, then compare it to 100% size on a different monitor, and that image can
appear physically larger on one than the other. A pixel can even vary in size on an
imaging chip, though we do see that expressed as the micron size on the chip. Some
of the Nikon D-SLRs, and even some video systems, used non-square pixels. We often
assume square pixels, but in reality there are non-square pixels. We can also
assume square dots or spots in printing, though due to many factors (dot gain, ink
properties, paper properties, et al.) the actual dots or spots can end up more
rounded.
>
>
> > In fact, the current best flat
> >bed actual optical resolution is 5600 dpi across the entire 12" by 17"
> >scanner bed,
> >and those two particular scanners used an 8000 element tri-linear CCD.
> >That very
> >simple fact should tell you that the optical resolution is not simply a
> >factor of
> >the imaging chip construction.
> >
> You really don't have a clue, do you? How many swathes does this Rolls
> Royce of scanners make to achieve 5600ppi on a 12" scan width with only
> 8000 pixels in each line? Perhaps you dropped a zero, or misunderstood
> the numbers or just lied.
> >>
>
No misunderstanding. The optics and CCD carriage move in two dimensions to cover
the entire bed. This is different from the single-direction "pass" technique of
scanning used in some devices. The 8000 element tri-linear CCD has 5 µm cell sites,
though the movement of components is necessary to achieve the very high true
optical resolution possible. The line is not 12", since the imaging chip is closer
to 72 mm in size.
Feel free to have a test scan done by the manufacturer, don't just take my word for
it. If you have ever read anything I have written, you should know that I encourage
people to investigate more, and learn more. There would be no benefit for me to lie
about this, but it makes quite a statement about you to accuse me of doing that.
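A rough sketch of the swathe arithmetic behind an XY-stitch scan, assuming no
overlap between swathes (real XY-stitch scanners overlap and register the swathes,
so the true count would be higher):

```python
import math

BED_WIDTH_IN = 12     # scan bed width in inches
TARGET_PPI = 5600     # claimed true optical resolution
CCD_ELEMENTS = 8000   # cells per line of the tri-linear CCD

samples_across_bed = BED_WIDTH_IN * TARGET_PPI          # 67200 samples
swathes = math.ceil(samples_across_bed / CCD_ELEMENTS)  # at least 9 swathes
print(swathes)
```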
>
> >Further information is that
> >the particular example I chose, the KLI-10203, has a physical dimension
> >of 76.87 mm
> >by 1.6 mm . . . seems to me that is much smaller than 8.5" across,
> >unless you are
> >using a different metric to english conversion.
>
> You would build a scanner from such a detector without an imaging optic
> to project the flatbed onto the focal plane? And you *STILL* claim you
> know what you are talking about? You really are stretching credulity to
> extremes now.
Look up XY scan and XY stitch technologies. Components need to move to take
advantage of the imager, but patents on technologies limit the approaches of how
this is accomplished. That you have not used such a scanner does not invalidate the
facts that they exist. Again, I suggest having the manufacturers do test scans for
you.
>
> >
> >Just to update you a little bit, the smallest bed width in which the
> >KLI-10203 is
> >actually installed is 305 mm, or about 12".
>
> In which case it would be unable to yield much more than 800ppi in a
> single swathe at that width!
So figure it out . . . how can they possibly do better? If you are as smart as you
think you are, then it should be easy for you to solve this one, otherwise you have
become complacent in your 25 years in the "business".
>
>
> >> >The lowest spec (and lowest cost) of those does 3200
> >> >dpi true resolution. That is across the entire bed, and not just down the
> >> >middle.
> >>
> >> Not across the full A4 width on a single pass it isn't. To achieve
> >> 3200ppi resolution requires a scan width of no greater than 3.2" -
> >> around a third of the width of the flatbed under discussion!
> >
> >What you are missing is that not all scanning systems in flat beds use
> >a "pass" in
> >one direction method of scanning.
>
> I fail to see how I could have missed this point when I specifically
> made reference to the condition of a single pass in the sentence you
> have quoted above!
So since you don't understand how that works (two dimensional movement), you want
to restrict the discussion to your parameters . . . amazing. :-)
The entire point of mentioning high end systems is to point out that lower spec
systems are less capable. That the single pass scan is a further limit should be
obvious to many people reading this. My feeling is that when people are aware of
what is out there being used at the top, they might get a better understanding of
the low to mid range gear they can afford. View Camera magazine had a nice test of
some scanners a few months ago, including some lower priced flat bed scanners.
Those tests are just one source showing that claimed resolution and file sizes do
not equate to actual optical resolution in low and mid range scanners. The OP's
2400 by 4800 is very likely much less than that in true optical resolution, so why
jump through all those hoops to gain no additional resolution? My feeling is that
he should either be happy with what he has, or save his efforts and money to get a
better scanner. Unfortunately, if all I did was type one paragraph like this in my
original reply to him, he would have been left with questions. Maybe between both
of us he will get enough to go do some of his own investigation and come to his own
conclusions.
>
>
> Since the scanner under discussion on this thread is a single pass
> scanner, and the OP is specifically interested in what he can achieve in
> that single pass, I see no need to extend the explanation to swathe
> equipment.
So why did you see the need to encourage him to spend time on "improving" his
scanner when you knew it was limited? I would think the OP has better things to do
with his time than waste effort on a scanner already giving its best.
>
>
> >>
> >> So where does *your* figure of 3400ppi limitation for this particular
> >> device come from - apart from your initial misreading of the data?
> >
> >Actual test of high end scanning gear. True optical resolution.
>
> Incredible. Not only because even cheap scanners now achieve better
> than this . . . .
Bullshit. Testing by several different individuals, and writings in many
publications, indicate that the stated "resolution" in these low end systems is
merely the file size, not the true optical resolution. Perhaps you should try
reading actual test reports, rather than manufacturer brochures. Even Epson have
one high spec system where they state a close to true optical capability, yet they
also sell cheaper systems that claim more resolution . . . so why would imaging
professionals, pre-press specialists, and service bureaux use the higher cost and
seemingly lower spec Epson when choosing Epson gear . . . quite simply the
Expression 10000XL is a better and more capable scanner than the Perfection 4870 in
true optical resolution.
It is a shame that there is not some standard for stating specifications enforced
upon manufacturers. We would all do much better with actual performance numbers,
rather than calculated or theoretical. This happened previously with CRTs, when the
diagonal was stated, and then manufacturers were compelled by regulations to state
the "viewing area". I think this should happen with scanners, though somehow I
doubt we will see a change in the industry.
> , but because neither "ppi" nor "dpi" is an appropriate
> measurement unit for "optical resolution" in the first place!
Would you rather I use lp/mm, since we are discussing scanning of photos? I think
more readers would understand dpi and ppi than lp/mm or cy/mm, or other things like
lines per picture height, or even just simply lines. Perhaps photosites per inch is
more to your liking . . . . . MTF at Nyquist . . . . Rayleigh limit instead of
Nyquist limit . . . . . . I think it is better to just test the device as
configured and use terms other people understand, and I think many here understand
dpi and ppi.
>
>
> >
> >Just to give you a very simple explanation, that 1200 dpi figure you calculated
> >would be very close to the actual in a system in which very simple
> >optics were used
> >in the scanner.
>
> I didn't suggest otherwise - a simple optic with a single pass scan.
> That is what we are discussing in this thread. You are the one bringing
> in additional complications to justify your original mistaken advice to
> the OP.
I bring in the "additional complications" to point out that there are better
systems out there. If people understand those better systems, they might get a more
realistic sense of performance and capability in their lesser systems. Nothing
wrong with being on a budget for scanners, especially if it is not for generating
income, but one should not expect more than the budget scanner can accomplish, and
one would do better to not bemoan the scanner when performance is less than they
expected. This is performance for the dollar; you pay heavily to get the best, and
few people want (or need) to spend that much. When people just read the brochure
and think their under $1000 scanner is better than one over $10000, I think it can
help to understand a little of why that is simply not true (the lower cost scanner
specifications as stated by the manufacturer).
>
>
> >In fact, around 1999, when these chips were new, that was nearly the
> >limit in almost any flat bed scanner. Since that time, scanner optics
> >have improved,
> >and positioning of optical elements has improved. Those improvements
> >are expensive
> >to implement, and why you only see them at the high end. However, those
> >improved
> >optics and better ways to move the optical elements help that family of
> >circa 72 mm
> >CCDs achieve better than 1200 dpi true optical resolution, and even high
> >interpolated resolution.
> >
> I suggest you look up the original patents for this "microscan"
> technology - you will find a familiar name in the inventors - and it was
> well before 1999 - although that could be around the time that the
> original patents expired. Even so, as the inventor of aspects of that
> particular technology, I can assure you that diffraction is still the
> limit of all optics.
Sure, fairly common knowledge when doing image analysis. However, the technology I
mostly referenced, at least with the Creo scanners, is based on a separate patent.
That patent is held by Creo, though since the buyout from Kodak I would guess that
Kodak now controls those patents.
>
>
> >> Precisely - but that isn't what you wrote last time! You stated that
> >> the 3 colours resulted in a resolution of only one third of the number
> >> of pixels in the line.
>
> >
> >I misstated it, though hopefully it is more clear in the following
> >posts.
It is a loose association, and my oversimplified original statement did not contain
the ten more paragraphs of information that may have satisfied those few
individuals starving for explicit details.
>
> . . . . . . . . .
>
> >Also, I
> >did apologize for not being as correct and thorough as I usually write.
>
> No you didn't, you said "OK Maybe I should have stated that better".
> That does not, under any circumstances, amount to either an apology or
> an admission of being incorrect, let alone both.
Here's a suggestion: pull up the past posts, then do a search for the word
"apologize", and you will find it. If you want to ignore it, then that states a
great deal about your character. I would doubt that you would ever state that you
were incorrect, though we can all imagine why that might be true.
>
>
> >Interesting
> >that your earlier tone is different . . . almost makes me feel that you
> >respond in
> >line prior to reading everything, which would be careless in the event
> >that is the
> >situation.
>
> No, I browse a post first to capture the gist of the message and then
> respond to the specific lines I quote.
Certainly did not seem that way . . . glad to know you are at least "browsing".
;-)
>
>
> >> No he doesn't - or at least that isn't what he has asked about. He is
> >> interested in using available samples in two axes that do not provide as
> >> much resolution as he would like as a means of achieving improved signal
> >> to noise at a lower resolution.
> >>
> >> The CCD in his case is similar to the NEC uPD8880 device, a trilinear
> >> array with 21360 cells in each colour, capable of producing 2400ppi
> >> across an A4 platform. Each of the colour lines comprises two rows of
> >> 10,680 cells capable of reproducing 1200ppi on the flatbed, but offset
> >> by half a pixel pitch to create a 2400ppi sample density. In addition,
> >> the scanner motor is capable of moving the scan head in 4800ppi steps,
> >> further oversampling the original pixels. He is interested in using
> >> these oversamples optimally for signal to noise improvement at 2400ppi
> >> and possibly as low as 1200ppi rather than have some of their
> >> information being used to achieve resolution which is already
> >> compromised by the optical system of the scanner.
> >
> >Okay, so sounds like a UMAX, Epson, or maybe a Microtek.
> >
> Or just about any consumer grade flatbed scanner in that class of the
> market these days.
Sure . . . look at that, we agree on something. :-)
>
> >>
> >> >An exception to colour filtering is in many Nikon film scanners,
> >> >since they use
> >> >coloured LEDs as a light source. I would suspect those are Sony
> >> >imaging chips
> >> >in those Nikon scanners.
> >>
> >> You would be wrong.
> >
> >Big Fluffy Dog . . . I have run through enough broken Nikon scanners to
> >avoid them.
> >They are poor production choices. Great shame they are not as well
> >built and rugged
> >as their top level cameras.
> >
> And what does that have to do with your allegation that they contain
> Sony CCDs?
The term "I would suspect" means that was an assumption. English is my second
language, so if I was incorrect in that usage, feel free to correct me. The nature
and tone of your response is why I replied in the above manner; had you simply
stated what CCDs were in a Nikon scanner I would have replied differently.
Unfortunately, you chose another method of response, which again states more about
your personality.
> You are like a child pissing up a wall.
Which again states more about your personality. ;-)
>
> >>
> >> >While many do like the LED approach, it is interesting
> >> >to note that is not done in any high end scanning systems.
> >>
> >> Wrong again! It is exactly the process used in high end film scanner
> >> systems - the difference being that the LEDs are replaced with colour
> >> lasers to achieve a higher intensity and thus a faster throughput.
> >
> >I don't recall Imacon using LEDs . . . okay, just checked and all
> >current models are
> >not LEDs.
>
> Did you actually read what was written, Bozo?
Name calling shows your level of emotional maturity, but it does not bother me so
if you feel you must continue with that, enjoy yourself. ;-)
> Why are you still asking
> about LEDs?
Why did you not answer the question? You could simply state which high end scanning
system, or which scanning system you believe to be high end, uses LEDs as the light
source. The only high end film scanner I know on the market today is the line made
by Imacon, none of which use LEDs. My first mention of Nikon scanners was an aside
to indicate they used LEDs, so you mentioned other film scanners; again, the
question remains . . . though you could just resort to name calling again instead
of providing an answer . . . your choice.
>
>
> >I did not come up with the original
> >calculation in that paragraph,
>
> Why is that no surprise??
I posted that to see if you would refute it, and you did . . . go figure. My guess
is that you would refute it simply because you thought I wrote that, with a simple
desire for you to attempt to make yourself look like an intellectual. That you
would refute a calculation example from the very designers of that imager states a
great deal about you.
>
>
> >> An
> >> 8.5in scan width with 10200 cells per line (no matter what the optical
> >> system or the cell size or pitch is) results in 10200/8.5 = 1200ppi.
> >
> >Why don't you tell me how 3400 dpi measured optical resolution is
> >possible using a
> >circa 72 mm 10200 element tri-linear CCD.
>
> TIP: optical resolution is measured at the flatbed surface, not at the
> focal plane - the reason for that is that only the flatbed surface is
> accessible for testing other than during design and manufacture and it
> is the only position that matters to the end user.
Quite simple, and I don't really think that is something that needs to be
mentioned. All of us should assume that true optical resolution is a statement of
what a system is capable of achieving, and capable in a manner in which that
scanner would normally be put to use.
> The physical size of
> the CCD has no direct influence on the resolution obtained other than
> its implications on the optical system requirements. 7um pixels are
> relatively trivial to resolve optically - low cost digital still cameras
> work well with sub-3um pixels, albeit with limited minimum apertures,
> but the pixel resolution is not particularly demanding.
It does influence the fill factor, which can influence the saturation and colour
response. Differences in colour responsiveness can affect apparent edge definition.
Edge definition can be thought of as an aspect of resolution, though it is mostly a
function of contrast. It is interesting that the better resolving systems use
smaller micron sized imaging cells, but to claim there is no correlation requires
much more explanation.
Low fill factor reduces device sensitivity. Of course other aspects of the design
can improve sensitivity. A 5 µm square pixel on one imager can be as spectrally
sensitive as a 7 µm square pixel on another imager, if some aspects of the device
are improved, and that is despite the 7 µm square pixel having almost twice the
area of the 5 µm square pixel. One great effect is the colour filtering over the
pixels, and varying thickness can influence spectral sensitivity.
>
>
> > This should be quite amusing.
>
> It should indeed since it is quite simple really. In terms of
> measurement: assess the MTF of the scanner using ISO-12233 or
> ISO-16067 reference targets depending on subject matter and determine the
> optical resolution at an agreed minimum MTF. Industry standard is
> nominally 10%, but some people play specmanship games though that is
> unnecessary here. You should note that this optical resolution will not
> be in dpi or ppi, but I leave it to you to figure what it will be, since
> you demonstrate ignorance and need to learn some facts.
See, now there you go farting higher than your own rear. What would you like to
use, photosites per inch, or some other measure? Or do you just want to state MTF
at Nyquist and be done with it? Maybe we should agree on lp/mm, since it is heavily
used in photography.
> In terms of design, just for fun, use your example of the KLI-10203
> which has a nyquist MTF of better than 60% at 2MHz clock rate. Fit an
> IR filter, cut-off around 750nm, to eliminate out of band response.
I have not seen one of those installed with an IR cut-off filter. It is listed in
the specifications that the colour dyes of the filters are IR transparent beyond
700 nm, so your assumption is not a bad one, though why add another level of
complication?
>
> Select a 1:3 f/4 relay objective from one of many optical suppliers. Few
> will fail to meet an MTF of over 70% on axis at the sensor's nyquist
> frequency and those from the better suppliers including Pilkington,
> Perkin Elmer etc should achieve this across the entire field. Damping
> mechanism and timing to eliminate lateral post-step motion or, ideally,
> continuous backscan compensation of focal plane by multi-facet polygon.
> Result: Scan width = 8.5"; sampling resolution = 1200ppi; MTF at Nyquist
> for native resolution >=35% (ie. well resolved, optical resolution
> exceeds sampling resolution!).
>
> MTF at Nyquist for 3400dpi should exceed 80%, based on CTE limited MTF
> of 95% for the detector and 90% optical MTF with 1 wavefront error at
> this lower resolution.
So how do Creo, Dainippon Screen and Fuji Electronic Imaging achieve that and more?
>
>
> These are just figures for optics and your example detector that I
> happen to have in front of me at the moment - with a little searching it
> might be possible to obtain better. Nevertheless, 1200ppi resolution is
> clearly practical on an 8.5" scan width with the device you seem to
> believe can only achieve 3400ppi.
"Only achieve 3400 ppi" . . . how is that worse than 1200 ppi? I don't think 1200
ppi at 8.5" scan width is useful for current commercial printed intended images,
unless you want to restrict the output dimensions. More resolution is always
better, especially when going to large printed outputs.
> Hardly surprising though is it -
> similar CCDs from other manufacturers are actually specified as
> 1200ppi/A4 devices!
>
> >Oh, and just
> >for fun, use that 12" by 17" bed as your explanation basis. The device
> >is the Creo
> >iQSmart1, in case you have not figured that one out yet.
> >
> Cor, shifting goalposts really is your forte isn't it
What . . . I tell you a scanner, and ask you to explain how it achieves an optical
resolution at the imaging bed that is better than you claim is possible. You can
think Creo are not telling the truth, or you can send off an image to them and
request it to be scanned. Proof is just a short test run away.
> We determine a
> projected resolution on an 8.5" width platform and you want to see it
> achieved on a 12" platform. Do you understand the ratio of 8.5 and 12?
> You are an idiot and I rest my case!
Name calling again . . . . . I suppose maybe I should be expecting that from you by
now, though I am just a little disappointed in your behaviour.
Anyway, the one way to solve this is for you to send a sample transparency to Creo
and request a full resolution scan from the iQSmart and EverSmart Supreme scanners.
In this case, a picture (scanned) is worth a thousand words . . . or maybe two
thousand considering all we have typed so far. ;-)
Enjoy yourself, and do try to cut back on the caffeine. ;-) Getting out more often
might be nice too. ;-)
Have a nice day . . . and I mean that. ;-)
> ljl...@tiscalinet.it wrote:
>
> [snip]
>
> > I don't want to get *too* technical.
>
> Though you want to hack the driver. ;-)
Programming is my field, optics isn't. Not that I'm terribly good at
that either (in fact I haven't hacked the driver successfully so far),
but offhand I can't think of anything I'm terribly good at.
>> [snip]
>
> If it is moving the optics, and not the CCD, then it has a three or four row
> CCD with RGB filtering over it. If it is moving the CCDs, then it could be
> several. Of course, you could crack it open and find out. ;-)
Well, I dunno, but it looks like it's moving everything through the
glass.
In any case, I'm not really interested in the CCD layout, except for
its - ehm - staggeredness, in that the 2400dpi are obtained with two
shifted 1200dpi CCDs.
> > But let's just pretend for a moment that it's 2400 dpi optical, period.
>
> You would be lucky for it to be much better than half that, but for sake of
> discussion . . . . . . .
Well, if you can suggest a simple method for measuring real resolution,
I would be happy to try and find out. Well, I wouldn't necessarily be
*happy*, on a second thought.
But anyway, yes, let's just pretend for the sake of discussion.
> > What I want to do is scan at 4800 dpi in the *vertical* direction, i.e.
> > run the motor at "half-stepping". My scanner can do that.
> >
> > The problem is twofold:
> >
> > 1) (the less important one) My scanner's software insists on
> > interpolating horizontally in order to fake 4800 dpi on both the x and
> > y axis, and I don't know how to "revert" this interpolation to get the
> > original data back (just downsampling with Photoshop appears to lose
> > something). But as you said, the interpolation algorithm varies between
> > scanners, so I'll have to find out what mine does, I suppose -- or,
> > hopefully, just manage to hack the open-source driver I'm using to
> > support 2400x4800 with no interpolation.
>
> Make that a three fold problem . . . how and what do you plan to use to view
> that image? In PhotoShop, you would view 2400 by 4800 as a rectangle; if all
> the information was 2400 by 2400 viewing, then you have a square; if you want
> a square image and have 2:1 ratio of pixels then your square image will be
> viewed like a stretched rectangle. This is similar to a problem that comes up
> in video editing for still images; video uses non-square pixels, so the
> square pixel still images need to be altered to fit a non-square video
> display.
But keeping the image at a 2:1 ratio is not what I plan to do.
What I plan to do is to take it back to a 1:1 ratio by downsampling on
the y axis: this way I get a 2400x2400 dpi picture, which is less noisy
than the same picture taken directly at 2400x2400 dpi from the scanner.
My main doubt was, what is the "best" or the "right" algorithm to do
the downsampling? I usually just use (bi)cubic resampling when I want
to resize something; but in this case, I've got some specific
information about the original data -- i.e. that each line is shifted
by half a pixel from the previous line (a quarter of a pixel actually,
since the CCD is staggered, but I think this shouldn't be important as
long as I want to keep the final image at 2400x2400).
I just thought that this knowledge might have allowed me to choose an
appropriate downsampling algorithm, instead of just using whatever
Photoshop offers.
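For the sake of illustration, here is a minimal sketch (Python/NumPy; the function name and toy data are mine) of the simplest shift-agnostic version of that y-axis downsample: averaging each pair of adjacent scan lines, which for lines offset by half a pixel amounts to a small box filter centred between them.

```python
import numpy as np

def downsample_y_by_2(img):
    """Halve the vertical resolution (4800 -> 2400 dpi) by averaging
    each pair of adjacent scan lines. Since consecutive lines sample
    positions only half a pixel apart, the plain pairwise mean acts as
    a small box filter centred between the two lines."""
    h = img.shape[0] - (img.shape[0] % 2)   # drop a trailing odd line
    pairs = img[:h].reshape(h // 2, 2, *img.shape[1:])
    return pairs.mean(axis=1)

# Toy 4-line strip: each pair of lines averages into one output line.
strip = np.array([[0, 10, 20],
                  [2, 12, 22],
                  [100, 110, 120],
                  [102, 112, 122]], dtype=float)
print(downsample_y_by_2(strip))
```

Whether this beats bicubic for the half-pixel-shifted data is exactly the open question; it is only the baseline to compare against.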
> > 2) (the more important one) I, of course, don't want a 2:1 ratio image.
> > I just want 2400x2400, and use the "extra" 2400 I've got on the y axis
> > as one would use multi-sampling on a scanner supporting it. Yes, to get
> > better image quality and less noise, as you said.
> > But the question is, how to do it *well*?
>
> Or how to actually still view it as a square image.
By downsampling.
After all, it's the same with scanners that can do "real"
multi-sampling, only that
1) lines do not exhibit a sub-pixel shift, since the CCD is kept in the
same position for each sample
2) the scanner firmware, or the driver, hides it all from the user and
just gives out a ready-to-use 1:1 ratio image
> [snip]
>
> I have not heard of anyone outside of Canon still using a staggered idea. I
> think Microtek may have tried it, or possibly UMAX.
Well, Epson certainly does it in (most of) their flatbeds.
> In order to really do
> something different with that, much like the video example above, it seems
> you would need to get the electronic signal directly off the CCD prior to any
> in-scanner processing of the capture signal. Basically that means hacking
> into the scanner. I don't see how that would be practical; even if you came
> up with something, you still have a low cost scanner with limited optical
> (true) resolution and colour abilities.
But I don't think the data that come out of my scanner are very much
adulterated (at least if I disable color correction, set gamma=1.0 and
such stuff).
Well, when I scan at "4800 dpi" from Windows, they're certainly
adulterated in the sense that interpolation is used on the x axis, in
order to get a 1:1 ratio, apparent 4800x4800 dpi image.
But I know (really, I'm not just assuming) that this interpolation is
done in the *driver*, not in the scanner; so assuming that I can hack
the Linux driver to support 4800 dpi vertically, horizontal
interpolation becomes a non-issue: after all, it's easier to *not*
interpolate than to interpolate!
by LjL
ljl...@tiscali.it
ljl...@tiscalinet.it wrote:
> Gordon Moat ha scritto:
>
> > ljl...@tiscalinet.it wrote:
> >
> > [snip]
> >
> > > I don't want to get *too* technical.
> >
> > Though you want to hack the driver. ;-)
>
> Programming is my field, optics isn't. Not that I'm terribly good at
> that either (in fact I haven't hacked the driver successfully so far),
> but offhand I can't think of anything I'm terribly good at.
Perhaps you might want to look at the firmware. The code might be simpler, and you
may be able to effect a change in fewer attempts.
I had a project working on a future full frame digital camera a few months ago.
The programmer on that sub-contracted me to help with the colour aspects of the
software, and with user interface development. I admit I could never do the
programming end of it, and it is something which simply does not interest me.
Colour models are another realm, and I enjoyed that aspect. Wish I could tell you
more about the project, but I signed a confidentiality agreement.
>
>
> >> [snip]
> >
> > If it is moving the optics, and not the CCD, then it has a three or four row
> > CCD with RGB filtering over it. If it is moving the CCDs, then it could be
> > several. Of course, you could crack it open and find out. ;-)
>
> Well, I dunno, but it looks like it's moving everything through the
> glass.
If you can figure out how to easily get it apart, you might discover a bit more.
However, dust could then become an issue. It would make a nice opportunity to
clean the other side of the glass.
>
> In any case, I'm not really interested in the CCD layout, except for
> its - ehm - staggeredness, in that the 2400dpi are obtained with two
> shifted 1200dpi CCDs.
>
> > > But let's just pretend for a moment that it's 2400 dpi optical, period.
> >
> > You would be lucky for it to be much better than half that, but for sake of
> > discussion . . . . . . .
>
> Well, if you can suggest a simple method for measuring real resolution,
> I would be happy to try and find out. Well, I wouldn't necessarily be
> *happy*, on a second thought.
Ted Harris wrote an article in the May/June 2005 issue of View Camera about
scanners. He used a USAF1951 test target and a T4110 step wedge; might be
something to try out. Really well written and presented article, though it would
have been nice to include larger sample images.
>
> But anyway, yes, let's just pretend for the sake of discussion.
>
> > > What I want to do is scan at 4800 dpi in the *vertical* direction, i.e.
> > > run the motor at "half-stepping". My scanner can do that.
> > >
> > > The problem is twofold:
> > >
> > > 1) (the less important one) My scanner's software insists on
> > > interpolating horizontally in order to fake 4800 dpi on both the x and
> > > y axis, and I don't know how to "revert" this interpolation to get the
> > > original data back (just downsampling with Photoshop appears to lose
> > > something). But as you said, the interpolation algorithm varies between
> > > scanners, so I'll have to find out what mine does, I suppose -- or,
> > > hopefully, just manage to hack the open-source driver I'm using to
> > > support 2400x4800 with no interpolation.
> >
> > Make that a three fold problem . . . how and what do you plan to use to view
> > that image? In PhotoShop, you would view 2400 by 4800 as a rectangle; if all
> > the information was 2400 by 2400 viewing, then you have a square; if you want
> > a square image and have 2:1 ratio of pixels then your square image will be
> > viewed like a stretched rectangle. This is similar to a problem that comes up
> > in video editing for still images; video uses non-square pixels, so the
> > square pixel still images need to be altered to fit a non-square video
> > display.
>
> But keeping the image at a 2:1 ratio is not what I plan to do.
> What I plan to do is to take it back to a 1:1 ratio by downsampling on
> the y axis: this way I get a 2400x2400 dpi picture, which is less noisy
> than the same picture taken directly at 2400x2400 dpi from the scanner.
So you want to capture a stacked rectangle in one dimension, then compress the
long end to display a square. Sounds like some pretty hefty algorithm to avoid
creating artefacts in the final image file. Wouldn't that actually reduce the edge
sharpness and resolution of any diagonals in the source image?
>
>
> My main doubt was, what is the "best" or the "right" algorithm to do
> the downsampling? I usually just use (bi)cubic resampling when I want
> to resize something; but in this case, I've got some specific
> information about the original data -- i.e. that each line is shifted
> by half a pixel from the previous line (a quarter of a pixel actually,
> since the CCD is staggered, but I think this shouldn't be important as
> long as I want to keep the final image at 2400x2400).
I think Bart van der Wolf had a short test page of several algorithms. You might
want to search archives, or contact him about that. Few people ever suggest using
bilinear, though there are some images that work better using that. It would seem
that having an option to use more than one algorithm would be of greater benefit
than forcing just one to work. Of course, the programming would be much more hefty
to do that.
>
>
> I just thought that this knowledge might have allowed me to choose an
> appropriate downsampling algorithm, instead of just using whatever
> Photoshop offers.
In a production environment, what we do is a Scan Once, Output Many workflow.
Basically scanning at the highest resolution, then later using that file to match
output requirements on a case by case basis. Time constraints sometimes mean just
scanning at the output resolution needed for a project, though that can mean that
later on you would need to scan the same image in a different manner for another
project.
Ideally you try to do as much as you can prior to dumping the image file into
PhotoShop. Nearly all operations in PhotoShop are destructive editing. The other
issue is that getting the scan optimal reduces billing time in PhotoShop, though
that is more a commercial workflow necessity.
>
>
> > > 2) (the more important one) I, of course, don't want a 2:1 ratio image.
> > > I just want 2400x2400, and use the "extra" 2400 I've got on the y axis
> > > as one would use multi-sampling on a scanner supporting it. Yes, to get
> > > better image quality and less noise, as you said.
> > > But the question is, how to do it *well*?
> >
> > Or how to actually still view it as a square image.
>
> By downsampling.
Resize on one axis. Still images sent to video NLEs need a conversion to allow
them to display properly, i.e. you want a picture with a round basketball to still
look like a round basketball when displayed on a video monitor or television.
>
> After all, it's the same with scanners that can do "real"
> multi-sampling, only that
> 1) lines do not exhibit a sub-pixel shift, since the CCD is kept in the
> same position for each sample
> 2) the scanner firmware, or the driver, hides it all from the user and
> just gives out a ready-to-use 1:1 ratio image
Just a guess, but I would think the firmware would be the place to find these
instructions. Seems that if you found your information there, then it might just
be as simple as removing that portion of it, if it does in fact work the way you
suspect.
>
>
> > [snip]
> >
> > I have not heard of anyone outside of Canon still using a staggered idea. I
> > think Microtek may have tried it, or possibly UMAX.
>
> Well, Epson certainly does it in (most of) their flatbeds.
Interesting. Just a side note, the 4870 Photo was in that test group from View
Camera magazine. While Epson state it is a 4800 ppi scanner, the best Ted Harris
got was 2050 ppi. The interesting thing I found was that Dmax tested was
substantially less than Epson claims. Rather than attempt to repeat the article
here, it might be worth it for you to order a copy, or get an old issue of this.
My feeling is that working on the actual Dmax would be more beneficial to final
image quality than playing with the resolving ability. You can try simple things
like using drum scanner oil on the flat bed, though clean-up is another issue.
There have been some good articles in Reponses Photo (French) in the past about
this method, and sometimes in a few other publications.
>
>
> > In order to really do
> > something different with that, much like the video example above, it seems
> > you would need to get the electronic signal directly off the CCD prior to any
> > in-scanner processing of the capture signal. Basically that means hacking
> > into the scanner. I don't see how that would be practical; even if you came
> > up with something, you still have a low cost scanner with limited optical
> > (true) resolution and colour abilities.
>
> But I don't think the data that come out of my scanner are very much
> adulterated (at least if I disable color correction, set gamma=1.0 and
> such stuff).
Sort of like a RAW capture from a digital camera.
>
>
> Well, when I scan at "4800 dpi" from Windows, they're certainly
> adulterated in the sense that interpolation is used on the x axis, in
> order to get a 1:1 ratio, apparent 4800x4800 dpi image.
I don't think overscanning or interpolation is all bad. Kai Hammann and a few
others have written about this for a few scanning devices. Basically what I have
found in practice is that overscanning can allow a smoother tonal transition of
colour in large areas of colour, such as sky in landscape images. Overscanning
might not improve true resolution, but it can often make the colour output just a
bit smoother. If you have not guessed it by now, I am more inclined to favour
colour quality over outright resolution.
>
>
> But I know (really, I'm not just assuming) that this interpolation is
> done in the *driver*, not in the scanner; so assuming that I can hack
> the Linux driver to support 4800 dpi vertically, horizontal
> interpolation becomes a non-issue: after all, it's easier to *not*
> interpolate than to interpolate!
>
> by LjL
> ljl...@tiscali.it
Sounds like you are working with SANE. Maybe the driver would do it, but I still
think a look at the firmware might show you another option. Anyway, best of luck,
and enjoy your project.
> In article <4328DACD...@attglobal.net>, Gordon Moat
> <mo...@attglobal.net> writes
> >Good afternoon,
> >
> >I really am enjoying your selective snipping, so without further delay,
> >on with the
> >show . . . .
> >
> That is fine, but I have better things to do with my time than debate
> technical issues with someone who has absolutely no understanding of
> them and continues to lie in order to justify their previous errors. I
> wrote at the end of my last post that you were an idiot and I had
> finished debating with you, you are clearly too stupid to understand
> something as basic as that.
> --
Awe . . . did I hurt your feelings? Well, it was fun while it lasted. I guess
you have nothing left to learn. ;-)
I would certainly like to look at the firmware!
But I don't think there is any (documented) way to look at it and/or
modify it. I guess it's one of the things Epson most wants to keep
secret about their scanners!
> [snip]
>
> If you can figure out how to easily get it apart, you might discover a bit more.
> However, dust could then become an issue. It would make a nice opportunity to
> clean the other side of the glass.
Nah, it's too new to make such attempts...
>> [snip]
>>
>>Well, if you can suggest a simple method for measuring real resolution,
>>I would be happy to try and find out. Well, I wouldn't necessarily be
>>*happy*, on a second thought.
>
> Ted Harris wrote an article in the May/June 2005 issue of View Camera about
> scanners. He used a USAF1951 test target and a T4110 step wedge; might be
> something to try out. Really well written and presented article, though it would
> have been nice to include larger sample images.
I don't want to spend money (or send money abroad, as is often the case
with these things) on test targets. I would certainly like to know my
scanner's true resolution, but I'm not going to spend money for that...
after all, there's nothing I can do to improve it.
(Though I might consider getting a calibration target - that does have a
practical use!)
I am currently experimenting with "slanted edge" and Imatest, and I'll
publish some results soon, although I'm afraid they're going to be
heavily off, as I haven't quite understood the procedure.
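For anyone wanting to try the same thing, here is a heavily simplified sketch of the slanted-edge idea (Python/NumPy; Imatest's actual implementation certainly differs, and all names here are mine): locate the edge in each scan line at sub-pixel precision, pool the rows into an oversampled edge-spread function, differentiate to get the line-spread function, and take the Fourier magnitude.

```python
import numpy as np

def mtf_from_edge(edge_img, oversample=4):
    """Simplified slanted-edge MTF estimate.

    Locate the edge in each row at sub-pixel precision (centroid of the
    row's gradient), shift all rows onto a common axis, bin them into an
    oversampled edge-spread function (ESF), differentiate to get the
    line-spread function (LSF), and return the normalised magnitude of
    its Fourier transform."""
    h, w = edge_img.shape
    x = np.arange(w)
    grad = np.abs(np.diff(edge_img, axis=1))
    centers = (grad * (x[:-1] + 0.5)).sum(axis=1) / grad.sum(axis=1)
    bins = np.arange(-w, w, 1.0 / oversample)
    esf = np.zeros(len(bins) - 1)
    cnt = np.zeros(len(bins) - 1)
    for row, c in zip(edge_img, centers):
        idx = np.digitize(x - c, bins) - 1      # bin relative to the edge
        ok = (idx >= 0) & (idx < len(esf))
        np.add.at(esf, idx[ok], row[ok])
        np.add.at(cnt, idx[ok], 1)
    esf = esf[cnt > 0] / cnt[cnt > 0]
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                          # normalise so MTF(0) = 1
```

Real implementations fit a straight line through the per-row edge positions instead of trusting each centroid, and window the LSF before the FFT; this is only the skeleton.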
> [snip]
>
>>But keeping the image at a 2:1 ratio is not what I plan to do.
>>What I plan to do is to take it back to a 1:1 ratio by downsampling on
>>the y axis: this way I get a 2400x2400 dpi picture, which is less noisy
>>than the same picture taken directly at 2400x2400 dpi from the scanner.
>
> So you want to capture a stacked rectangle in one dimension, then compress the
> long end to display a square. Sounds like some pretty hefty algorithm to avoid
> creating artefacts in the final image file. Wouldn't that actually reduce the edge
> sharpness and resolution of any diagonals in the source image?
That's exactly my main concern.
On the other hand, just about everyone recommends scanning at a high
resolution and then scaling down instead of scanning directly at a lower
resolution. And this is exactly what I'm doing, except that the scaling
down is only in one direction in my case.
>>My main doubt was, what is the "best" or the "right" algorithm to do
>>the downsampling? I usually just use (bi)cubic resampling when I want
>>to resize something; but in this case, I've got some specific
>>information about the original data -- i.e. that each line is shifted
>>by half a pixel from the previous line (a quarter of a pixel actually,
>>since the CCD is staggered, but I think this shouldn't be important as
>>long as I want to keep the final image at 2400x2400).
>
> I think Bart van der Wolf had a short test page of several algorithms. You might
> want to search archives, or contact him about that. Few people ever suggest using
> bilinear, though there are some images that work better using that.
I have found http://heim.ifi.uio.no/~gisle/photo/interpolation.html .
Bart van der Wolf is involved, but I don't know if it's the page you meant.
But, you see, I'm not looking for the "best looking" algorithm in
general -- what I'm looking for is the right algorithm to downsample
things made with a half-pixel shift etc. etc.
It might even come out to be bilinear!
> It would seem
> that having an option to use more than one algorithm would be of greater benefit
> that forcing just one to work. Of course, the programming would be much more hefty
> to do that.
But my goal is to automate the scanning process, so looking at each
image before storing it isn't really an option.
Storing the images in the original ratio to leave room for future
decisions is also, well, not "not an option" but impractical, due to the
file sizes involved.
But again, more than the algorithm that "looks best", I'm searching for
the algorithm that is the most "correct" in the context I'm working with.
Hopefully, it will also be the one that looks best with most images!
> [snip]
> Ideally you try to do as much as you can prior to dumping the image file into
> PhotoShop. Nearly all operations in PhotoShop are destructive editing. The other
> issue is that getting the scan optimal reduces billing time in PhotoShop, though
> that is more a commercial workflow necessity.
My idea is to have a script (which, in a basic form, is already in
place) to batch-scan and do all the non-destructive (or
destructive-but-the-file-would-be-too-large-otherwise) corrections.
The resulting images would be stored for archival.
Another script, or the same script, would create copies of the images
for viewing, where the various destructive transformations *are* applied
(USM, the finer histogram corrections, resizing to 1200x1200dpi, etc).
This second part would of course be performed by me in Photoshop instead
of by the script, for pictures I care about particularly.
The script would just work as a "one hour photo" equivalent.
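A bare-bones sketch of such a batch script (Python; the paths and helper names are mine -- `--resolution` and `--mode` are standard scanimage options, while `--y-resolution` is only exposed by some SANE backends, so check `scanimage --help` for your scanner):

```python
import subprocess
import datetime
import pathlib

def scan_command(resolution=2400, mode="Color", y_resolution=None):
    """Build a scanimage invocation. --resolution and --mode are common
    SANE options; --y-resolution exists only on some backends."""
    cmd = ["scanimage", "--resolution", str(resolution), "--mode", mode]
    if y_resolution is not None:
        cmd += ["--y-resolution", str(y_resolution)]
    return cmd

def batch_scan(outdir, count=1, **opts):
    """Acquire `count` raw scans and store them untouched for archival.
    A second pass (not sketched here) would produce the downsampled,
    sharpened viewing copies."""
    outdir = pathlib.Path(outdir)
    outdir.mkdir(parents=True, exist_ok=True)
    for i in range(count):
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        target = outdir / f"scan-{stamp}-{i:03d}.pnm"
        with open(target, "wb") as f:
            subprocess.run(scan_command(**opts), stdout=f, check=True)
        input(f"Saved {target}. Place the next original and press Enter...")
```

For example, `batch_scan("archive", count=10, resolution=2400, y_resolution=4800)` would capture the 2:1-ratio raw scans discussed above, leaving all destructive processing for later.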
>>>> [snip]
>>>
>>>Or how to actually still view it as a square image.
>>
>>By downsampling.
>
> Resize on one axis. Still images sent to video NLEs need a conversion to allow
> them to display properly, i.e. you want a picture with a round basketball to still
> look like a round basketball when displayed on a video monitor or television.
Ok, but I'm not doing video...
Actually, I also intend to be able to display my pictures on TV (I've
got a "WebTV" from my ISP), but that's a very minor concern.
Besides, aren't you talking about NTSC? I'm PAL, and I'm not sure but I
think PAL pixels are square (we've got more scanlines than NTSC).
> [snip]
>
> My feeling is that working on the actual Dmax would be more beneficial to final
> image quality than playing with the resolving ability.
Well, multi-sampling does improve DMax AFAIK, and multi-sampling is what
I'm trying to "simulate" (actually there's nothing to simulate, 2x
multi-sampling is there in my scanner, it's just that it shifts the CCD
a little after the first sampling...).
You can see from the "positive vs negative" thread that I'm also trying
to work out the way exposure time control works in my scanner (which,
technically, comes with "auto-exposure" only). Longer exposure times
(or, possibly, superimposing a long exp scan to a short exp scan, as Don
does) would also help DMax.
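A sketch of what that superimposing might look like (Python/NumPy; the threshold and the clipping test are my assumptions, not necessarily what Don does): use the short-exposure scan where the long one approaches clipping, and the long one, divided back by the exposure ratio, in the shadows.

```python
import numpy as np

def merge_exposures(short, long_, ratio, threshold=0.9):
    """Combine a short- and a long-exposure scan of the same original.
    Both inputs are linear (gamma = 1.0) arrays scaled to 0..1, and
    `ratio` is the exposure-time ratio long/short. Where the long scan
    approaches clipping we keep the short one; elsewhere the long scan,
    divided back by the ratio, contributes its better shadow
    signal-to-noise (and hence better effective Dmax)."""
    long_scaled = long_ / ratio
    return np.where(long_ < threshold, long_scaled, short)

# Shadows come from the long exposure, highlights from the short one:
short = np.array([0.01, 0.50, 0.95])
long_ = np.array([0.04, 0.95, 1.00])   # 4x exposure; last two near clipping
print(merge_exposures(short, long_, ratio=4.0))
```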
> You can try simple things
> like using drum scanner oil on the flat bed, though clean-up is another issue.
> There have been some good articles in Reponses Photo (French) in the past about
> this method, and sometimes in a few other publications.
I could try it just to see what comes out of it, but I can't really do
that normally... this scanner is used for scanning paper sheets by other
people here. I don't have it all to myself to play with.
> [snip]
>
> Sounds like you are working with SANE.
Correct. SANE also has the advantage (over VueScan and Windows
software, though I don't have Windows on that computer anyway) that it
allows us to easily use the scanner over the network.
We've got three computers here, plus the "server" that the scanner is
connected to, and thanks to SANE and SaneTwain we can all scan from our
own computers.
> Maybe the driver would do it, but I still
> think a look at the firmware might show you another option. Anyway, best of luck,
> and enjoy your project.
Thanks! Well, the scans I'm getting right now aren't *so* bad for what I
must do with them, so at worst I'll be left with decent scans if I fail,
and will hopefully have learned something in the process.
Nothing to lose.
by LjL
ljl...@tiscali.it
Few things irritate me to the point of calling them, but liars, losers
and idiots are pretty high and you top all of those lists at the moment!
Look, perhaps this is none of my business, but... I know when you want
to insult people you leave them in no doubt that you have, but aren't
you just exaggerating a little? Reading back into the thread (but it's
3:53) it looks like he started, but still!
by LjL
ljl...@tiscali.it
>I know when you want to insult people you leave them in no doubt that
>you have,
that is generally the intention. :-)
>but aren't you just exaggerating a little?
The USAF-1951 test chart is a mediocre *analogue* test chart. Even when
used with analogue imaging components such as the film cameras it was
originally intended to measure, it suffers major drawbacks, not least of
which is the fact that each spatial frequency is restricted to only 2.5
line pairs. The result is that any second order component of the system
MTF (and the optics are more than likely to have higher orders)
attenuates or exaggerates the contrast of the middle black line of the
three by an unknown amount. Most professional measurement instruments
transitioned to 6.5 or more line pairs by the early '60s, or to FT-based
approaches later, but, having been the first publicly available standard
pattern, the USAF-1951 chart lived on and was copied in amateur circles
long after it had become obsolete in the scientific community.
With the advent of digital imaging, that limitation became even more
restrictive since all bar pattern based testing is open to
misinterpretation through aliasing. When there are many line pairs, the
onset of aliasing at the hard resolution limit of the system is obvious,
but it is virtually impossible to identify in the 2.5 line pairs of the
USAF-1951 target until it has reached a level which would be highly
objectionable in any image!
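The aliasing point is easy to demonstrate numerically (a Python/NumPy sketch of my own, for illustration): a bar pattern above the Nyquist limit doesn't vanish when sampled, it reappears at a lower apparent frequency that is indistinguishable from a genuine coarser pattern.

```python
import numpy as np

def sample_bars(freq_cycles_per_px, n_samples):
    """Sample a sinusoidal bar pattern of the given spatial frequency at
    one sample per pixel and return the dominant FFT bin. Above the
    Nyquist limit (0.5 cycles/px) the pattern does not disappear; it
    aliases down to a lower apparent frequency."""
    x = np.arange(n_samples)
    samples = np.sin(2 * np.pi * freq_cycles_per_px * x)
    spectrum = np.abs(np.fft.rfft(samples))
    return int(np.argmax(spectrum[1:]) + 1)  # dominant bin, ignoring DC

# A 0.7 cycles/px pattern and a genuine 0.3 cycles/px pattern look identical
# after sampling: both peak at bin 60 of 200, i.e. 0.3 cycles/px.
n = 200
print(sample_bars(0.7, n) / n, sample_bars(0.3, n) / n)
```

With many line pairs that false low frequency is obvious to the eye; with only 2.5 line pairs there is nothing to compare it against.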
In short, popular though it may be amongst novices who are enamoured by
its apparent fine detail and pseudo-technical journalists, it is worse
than useless for the assessment of digital images from any source,
producing ambiguous results at best and totally misleading results in
general.
It certainly is no exaggeration to state that only an idiot would
recommend such a misleading and ambiguous reference for the assessment
of a scanner.
BTW - I am sure you know about time zones and people aren't always in
the same time zone as their usenet server. ;-)
> In article <KBpWe.30279$nT3....@tornado.fastwebnet.it>, Lorenzo J.
> Lucchini <ljl...@tiscali.it> writes
>
> >I know when you want to insult people you leave them in no doubt that
> >you have,
>
> that is generally the intention. :-)
>
> >but aren't you just exaggerating a little?
>
> The USAF-1951 test chart is a mediocre *analogue* test chart.
>
> [snip]
>
> It certainly is no exaggeration to state that only an idiot would
> recommend such a misleading and ambiguous reference for the assessment
> of a scanner.
You know that's not the reason for this flamewar (not even remotely). He
said "Ted Harris wrote an article [...]. He used a USAF1951 test target
[...]; might be something to try out."
Hardly "recommending".
But I don't really want to argue this -- hey, it's your flamewar. I'm
even one of the few who enjoys reading flamewars.
All I intended to tell you is that, in my opinion, you overreacted. Of
this opinion you can do what you prefer... just thought it might have
been good to tell you.
> BTW - I am sure you know about time zones and people aren't always in
> the same time zone as their usenet server. ;-)
Uh? Are you referring to me saying it was 3:53?
I said that because it *was* 3:53 here (no, not PM) when I re-read the
thread, only to point out that I might have been too sleepy not to get
confused about something.
by LjL
ljl...@tiscali.it
> Gordon Moat wrote:
> > Good afternoon LjL,
> >
> > [snip]
> >
> > Perhaps you might want to look at the firmware. The code might be simpler, and you
> > may be able to affect a change in less attempts.
>
> I would certainly like to look at the firmware!
> But I don't think there is any (documented) way to look at it and/or
> modify it. I guess it's one of the things Epson most wants to keep
> secret about their scanners!
Not sure if you have any test gear, or chip readers, but I think that would be
necessary. Of course, the other way is if they offered a firmware download for update.
Unfortunately, I don't think Epson has much in the line of firmware for download.
>
>
> > [snip]
> >
> > If you can figure out how to easily get it apart, you might discover a bit more.
> > However, dust could then become an issue. It would make a nice opportunity to
> > clean the other side of the glass.
>
> Nah, it's too new to make such attempts...
>
> >> [snip]
> >>
> >>Well, if you can suggest a simple method for measuring real resolution,
> >>I would be happy to try and find out. Well, I wouldn't necessarily be
> >>*happy*, on a second thought.
> >
> > Ted Harris wrote an article in the May/June 2005 issue of View Camera about
> > scanners. He used a USAF1951 test target and a T4110 step wedge; might be
> > something to try out. Really well written and presented article, though it would
> > have been nice to include larger sample images.
>
> I don't want to spend money (or send money abroad, as is often the case
> with these things) on test targets. I would certainly like to know my
> scanner's true resolution, but I'm not going to spend money for that...
> after all, there's nothing I can do to improve it.
>
> (Though I might consider getting a calibration target - that does have a
> practical use!)
The small Kodak Q-13 is around $US 20. That has very useful test colour patches which
you could use to improve colour. You could also use a scan of that to create an overall
correction Action to use in PhotoShop. I think it could save you some time in
correcting colour on scans.
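As a sketch of what such a correction could compute (Python/NumPy; all names are mine, and a real Q-13 workflow would fit curves through all the grey steps rather than a single patch): derive per-channel gains that force a scanned neutral patch back to neutral.

```python
import numpy as np

def gray_balance_gains(patch_rgb, target=None):
    """Per-channel gains from the mean RGB of a scanned neutral patch
    (e.g. one grey step of a Q-13). A neutral patch should read
    R = G = B; gains that pull each channel to the common mean give a
    crude overall cast correction to apply to whole scans."""
    mean_rgb = np.asarray(patch_rgb, dtype=float).reshape(-1, 3).mean(axis=0)
    if target is None:
        target = mean_rgb.mean()
    return target / mean_rgb

# A patch reading slightly warm (red high, blue low):
print(gray_balance_gains([[210, 200, 190]]))  # trims red, boosts blue
```

Multiplying a whole linear scan by these gains is roughly what a recorded PhotoShop Action with per-channel Levels adjustments would do.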
>
>
> I am currently experimenting with "slanted edge" and Imatest, and I'll
> publish some results soon, although I'm afraid they're going to be
> heavily off, as I haven't quite understood the procedure.
Slanted edge is one way to do this. Recall that scanners and digital cameras work best
in a vertical or horizontal orientation; stray much toward the diagonal and the results
will always be lower. Of course, in the real world of images there are few perfectly straight
lines that are photographed, except in architecture.
>
>
> > [snip]
> >
> >>But keeping the image at a 2:1 ratio is not what I plan to do.
> >>What I plan to do is to take it back to a 1:1 ratio by downsampling on
> >>the y axis: this way I get a 2400x2400 dpi picture, which is less noisy
> >>than the same picture taken directly at 2400x2400 dpi from the scanner.
> >
> > So you want to capture a stacked rectangle in one dimension, then compress the
> > long end to display a square. Sounds like some pretty hefty algorithm to avoid
> > creating artefacts in the final image file. Wouldn't that actually reduce the edge
> > sharpness and resolution of any diagonals in the source image?
>
> That's exactly my main concern.
> On the other hand, just about everyone recommends scanning at a high
> resolution and then scaling down instead of scanning directly at a lower
> resolution. And this is exactly what I'm doing, except that the scaling
> down is only in one direction in my case.
It is a good work practice. However, if you know you will not need the greater file
size scan for any uses, then you can save lots of time just scanning at the size you
need. You can always scan at a larger size later, if you find you need it for a certain
output. Of course, if you have the time, nothing wrong with always scanning at maximum.
>
>
> >>My main doubt was, what is the "best" or the "right" algorithm to do
> >>the downsampling? I usually just use (bi)cubic resampling when I want
> >>to resize something; but in this case, I've got some specific
> >>information about the original data -- i.e. that each line is shifted
> >>by half a pixel from the previous line (a quarter of a pixel actually,
> >>since the CCD is staggered, but I think this shouldn't be important as
> >>long as I want to keep the final image at 2400x2400).
> >
> > I think Bart van der Wolf had a short test page of several algorithms. You might
> > want to search archives, or contact him about that. Few people ever suggest using
> > bilinear, though there are some images that work better using that.
>
> I have found http://heim.ifi.uio.no/~gisle/photo/interpolation.html .
> Bart van der Wolf is involved, but I don't know if it's the page you meant.
Well, as far as I remember, that was one of them. The reference to Douglas in Australia
is another aspect, and I was asked to become involved in some discussions at the
beginning of this year. I think many people want a very simple answer, though I do not
believe there is a simple answer, since many other aspects are involved and printing is
one of those.
>
>
> But, you see, I'm not looking for the "best looking" algorithm in
> general -- what I'm looking for is the right algorithm to downsample
> things made with a half-pixel shift etc. etc.
> It might even come out to be bilinear!
Very true. I was hoping that you finding that page might suggest some other things to
try out. If you only test two methods, maybe you might miss a third or fourth that
could have worked better. I don't really see one method for every type of image, and I
think some adapting depending upon image type could help. Not sure what Techno Aussie
does, though it was not too complex for me to come up with something that works in
a similar manner, probably taking more steps than Douglas used.
There are also some nice commercial products out there which you might be able to
alter. Perhaps as a programmer you could reverse engineer one of those. Maybe Genuine
Fractals, Nik Sharpener Pro, or getting the SDK for PhotoShop Plug-Ins. There are also
the old Kai's products that did some interesting localized interpolations, and might
yield some fun code.
>
>
> > It would seem
> > that having an option to use more than one algorithm would be of greater benefit
> > than forcing just one to work. Of course, the programming would be much more hefty
> > to do that.
>
> But my goal is to automate the scanning process, so looking at each
> image before storing it isn't really an option.
> Storing the images in the original ratio to leave room for future
> decisions is also, well, not "not an option" but impractical, due to the
> file sizes involved.
Okay, so a search for the best compromise.
>
>
> But again, more than the algorithm that "looks best", I'm searching for
> the algorithm that is the most "correct" in the context I'm working with.
> Hopefully, it will also be the one that looks best with most images!
Out of curiosity, what types of images would you normally be scanning?
>
>
> > [snip]
> > Ideally you try to do as much as you can prior to dumping the image file into
> > PhotoShop. Nearly all operations in PhotoShop are destructive editing. The other
> > issue is that getting the scan optimal reduces billing time in PhotoShop, though
> > that is more a commercial workflow necessity.
>
> My idea is to have a script (which, in a basic form, is already in
> place) to batch-scan and do all the non-destructive (or
> destructive-but-the-file-would-be-too-large-otherwise) corrections.
>
> The resulting images would be stored for archival.
You might want to download a trial version of BinuScan or Creo oXYgen Scan. Even though
they might not run with your Epson, you might get some ideas for programming your own
solution.
>
>
> Another script, or the same script, would create copies of the images
> for viewing, where the various destructive transformations *are* applied
> (USM, the finer histogram corrections, resizing to 1200x1200dpi, etc).
PhotoShop has had a nice Actions feature since version 5.0. You can create nearly any
combination to run on a folder of images. Once you start an Action, if you know it will
run a while, then you can leave the computer and take a break. ;-)
>
>
> This second part would of course be performed by me in Photoshop instead
> of by the script, for pictures I care about particularly.
> The script would just work as a "one hour photo" equivalent.
On Mac OS, there is AppleScript, and with Windows there is also scripting. Not really
programming, though it can function for some nice automation. I use a few of these
types of scripts, though mostly just when doing operations involving Quark or InDesign.
>
>
> >>>> [snip]
> >>>
> >>>Or how to actually still view it as a square image.
> >>
> >>By downsampling.
> >
> > Resize on one axis. Still images sent to video NLEs need a conversion to allow
> > them to display properly, i.e. you want a picture with a round basketball to still
> > look like a round basketball when displayed on a video monitor or television.
>
> Ok, but I'm not doing video...
> Actually, I also intend to be able to display my pictures on TV (I've
> got a "WebTV" from my ISP), but that's a very minor concern.
>
> Besides, aren't you talking about NTSC? I'm PAL, and I'm not sure but I
> think PAL pixels are square (we've got more scanlines than NTSC).
Just a different resize ratio. I mentioned video since your rectangular scan could be
considered as having non-square pixels, given that your final image would be square. Perhaps you
might get an idea from video still image processing. I think there is an automated
feature for something like that in PhotoShop CS, though of course you could just create
an Action that does the same thing.
That would be different than altering the Epson scanner driver. However, if you could
somehow prevent the Epson driver from making a square image from the rectangular
information, then you could perform the more automated steps in PhotoShop by using
Actions you create. Batch mode in PhotoShop using Actions does work nicely.
>
>
> > [snip]
> >
> > My feeling is that working on the actual Dmax would be more beneficial to final
> > image quality than playing with the resolving ability.
>
> Well, multi-sampling does improve DMax AFAIK, and multi-sampling is what
> I'm trying to "simulate" (actually there's nothing to simulate, 2x
> multi-sampling is there in my scanner, it's just that it shifts the CCD
> a little after the first sampling...).
>
Sounds more like poor registration. There is some documentation with SilverFast about
them implementing multi-scanning on scanners that did not originally offer it,
basically something to do with aligning pixels on successive scans instead of relying
on the scanner to be that accurate. Multi-scan does increase your scan times, though if
you hit ENTER and then had an automated process for the rest, the time might not be
impacted so badly.
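Gordon's point about averaging registered passes can be sketched in a few lines of numpy. This is only an illustration of why multi-scanning helps (the array sizes, noise figures, and function name are made up), and it assumes the passes are already registered pixel-for-pixel, which is exactly what SilverFast has to guarantee:

```python
import numpy as np

def multi_sample(scans):
    """Average a list of identically sized, already-registered scans.

    Random sensor noise drops roughly by sqrt(N) for N averaged passes;
    any misregistration between passes blurs instead of denoising.
    """
    stack = np.stack([s.astype(np.float64) for s in scans])
    return stack.mean(axis=0)

# Demo on synthetic noisy "scans" of a flat mid-gray patch:
rng = np.random.default_rng(0)
truth = np.full((64, 64), 128.0)
scans = [truth + rng.normal(0.0, 8.0, truth.shape) for _ in range(4)]
avg = multi_sample(scans)
```

With four passes the residual noise should be roughly half that of a single pass.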
>
>
> You can see from the "positive vs negative" thread that I'm also trying
> to work out the way exposure time control works in my scanner (which,
> technically, comes with "auto-exposure" only). Longer exposure times
> (or, possibly, superimposing a long exp scan to a short exp scan, as Don
> does) would also help DMax.
Crap . . . autoexposure only would suck. I don't think I could deal with a limitation
like that. Shame some software is not available for you to better control exposure. The
only thing that bothers me more in low end gear is a lack of focus control.
>
>
> > You can try simple things
> > like using drum scanner oil on the flat bed, though clean-up is another issue.
> > There have been some good articles in Reponses Photo (French) in the past about
> > this method, and sometimes in a few other publications.
>
> I could try just for seeing what comes out of it, but I can't really do
> that normally... this scanner is used for scanning paper sheet by other
> people here. I don't have it all for me to play with.
Bummer. The Kami oil evaporates very quickly, and will leave the surface clean. I think
Aztek might sell it direct, though you could also try ICG. Perhaps it is something to
try when you have more time with the scanner.
>
>
> > [snip]
> >
> > Sounds like you are working with SANE.
>
> Correct. SANE also has the advantage (over VueScan and Windows
> software, but I don't have Windows on that computer anyway) that it
> makes it easy to use the scanner over the network.
>
> We've got three computers here, plus the "server" that the scanner is
> connected to, and thanks to SANE and SaneTwain we can all scan from our
> own computers.
I think Caldera Graphics has some nice UNIX based imaging and scanning software. I used
Caldera Cameleo a few years ago at one location; not as user friendly as some software
though very effective.
>
>
> > Maybe the driver would do it, but I still
> > think a look at the firmware might show you another option. Anyway, best of luck,
> > and enjoy your project.
>
> Thanks! Well, the scans I'm getting right now aren't *so* bad for what I
> must do with them, so at worst I'll be left with decent scans if I fail,
> and will hopefully have learned something in the process.
> Nothing to lose.
Cool! Nice discussion with you, hope something works out.
> Kennedy McEwen ha scritto:
>
> > In article <KBpWe.30279$nT3....@tornado.fastwebnet.it>, Lorenzo J.
> > Lucchini <ljl...@tiscali.it> writes
> >
> > >I know when you want to insult people you leave them in no doubt that
> > >you have,
> >
> > that is generally the intention. :-)
> >
> > >but aren't you just exaggerating a little?
> >
> > The USAF-1951 test chart is a mediocre *analogue* test chart.
> >
> > [snip]
> >
> > It certainly is no exaggeration to state that only an idiot would
> > recommend such a misleading and ambiguous reference for the assessment
> > of a scanner.
>
> You know that's not the reason for this flamewar (not even remotely). He
> said "Ted Harris wrote an article [...]. He used a USAF1951 test target
> [...]; might be something to try out."
> Hardly "recommending".
>
> But I don't really want to argue this -- hey, it's your flamewar. I'm
> even one of the few who enjoys reading flamewars.
Hello again LjL,
Glad someone was enjoying what I wrote. ;-) Sometimes it seems that when
you enter comp.periphs.scanners one needs to check their humour at the
door. :-)
It seems I made him so mad that there is no way he would ever agree with
anything I type, even if it was relating the tests of others, or making a
statement like "the sky is blue". Oh well, I guess I will avoid typing
something he finds irritating.
Anyway, like I state to many people, don't just take one source for
anything; go out and investigate on your own based on many recommendations.
I also do not believe in re-inventing the wheel; so if someone else has
some useful information from however crude a test, then others might want to
read it, rather than repeat the test.
Not everyone has a Siemens star pattern for testing. In the world of
commercial printing, that is basically what we use, though in the US most
printing shops call them targets. Normally on a print this will also
indicate registration between plates and dot gain, though there are other
tools to do the same. If you have a nice high resolution laser printer, or
something else with fine output, I have a test target that can be printed
and used to evaluate commercial prints. It is often placed outside the crop
lines along the edge of the sheet. It also contains colours and percentages
of such, which are useful on print runs.
When I sent off items to test a few Creo scanners, I sent actual
transparencies (photos) of things I would need to scan. I think that sort
of test can be more important in choosing a scanner than some test target
resolution. I also have facilities to view at 4x, 7x, 8x and 10x through a
loupe, or 20x to 50x through a microscope; which should cover just about
any printing enlargement I would need to perform.
If you consider what output parameters you need to meet, then you can try
to fine tune your scanning to best fit into those parameters. Sometimes you
might find the printer is the greatest limit, and other times you will find
it is your scanner. Usually I have seen a greater problem with colour
issues than a lack of resolution. Kai Hammann has some nice articles about
scanning, and I basically agree with him that a skilled scanner operator
can nearly match a scan on a lesser scanner to what someone less skilled
can accomplish on a better scanner. Practice your skills, and hone your
eyes, and you can improve.
Even that's not the whole story. The sensor pitch and effective area
also determine limiting resolution.
SNIP
> Well, if you can suggest a simple method for measuring real
> resolution,
> I would be happy to try and find out. Well, I wouldn't necessarily
> be
> *happy*, on a second thought.
> But anyway, yes, let's just pretend for the sake of discussion.
Scanning a sharp (slanted approx. 5 degrees) edge, without clipping,
will allow you to determine the limiting resolution (according to the ISO, 10%
modulation is close to visual limiting resolution) and the Modulation
Transfer Function (MTF, or contrast as a function of spatial
resolution).
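Bart's slanted-edge recipe can be approximated in numpy on a synthetic edge. This is a simplified sketch of the ISO-style procedure (no sub-pixel binning across the slant, and the Gaussian blur width is an arbitrary stand-in for a real scanner's optics); the function name and sample counts are mine:

```python
import numpy as np
from math import erf

def mtf_from_edge_profile(esf):
    """ESF -> LSF (derivative) -> MTF (FFT magnitude, normalized)."""
    lsf = np.gradient(esf)
    lsf = lsf * np.hanning(len(lsf))   # window to tame FFT leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                # so MTF(0) = 1 by definition

# Synthetic edge blurred by a Gaussian (sigma = 2 samples), standing in
# for a real scan of a sharp, slightly slanted edge:
x = np.arange(256)
esf = np.array([0.5 * (1.0 + erf((xi - 128) / (2.0 * 2**0.5))) for xi in x])

mtf = mtf_from_edge_profile(esf)
# The ISO-style "limiting resolution": first frequency bin where the
# modulation falls below 10%.
limit_bin = int(np.argmax(mtf < 0.1))
```

On a real scan you would first project the slanted edge into a super-sampled ESF; that projection step is what this sketch omits.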
SNIP
> My main doubt was, what is the "best" or the "right" algorithm
> to do the downsampling?
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm
or in an attempt to focus on scanner related (subject to
grain-aliasing) down-sampling:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/example1.htm
[...]
> I just thought that this knowledge might have allowed me to
> choose an appropriate downsampling algorithm, instead of just
> using whatever Photoshop offers.
Answering that question also requires determining the actual resolution
limitations. Nevertheless, the better downsampling algorithms will
behave better regardless of the data quality 'thrown' at them.
SNIP
>> I have not heard of anyone outside of Canon still using a
>> staggered idea.
Canon (thru 'vibrating' mirrors) and Epson (thru staggered sensor
lines) are certainly contenders.
Bart
Well, let's refer to the other thread about this, as I have in fact now
tried the method you suggest.
>> My main doubt was, what is the "best" or the "right" algorithm
>> to do the downsampling?
>
> http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm
> or in an attempt to focus on scanner related (subject to grain-aliasing)
> down-sampling:
> http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/example1.htm
> [...]
Both pages were excellent reading, but I don't think they really address
*my* problem: basically, you started with high-resolution images,
resized down, and saw what happened to the detail that was too
high-resolution to correctly render in the smaller images.
My problem is: I have a 4800 dpi scan (well, only on the vertical axis),
but it really only contains the equivalent of a 2400 dpi scan at best,
so aliasing when downsampling shouldn't really be an issue.
But sharpness is an issue: as I know that pixels overlap, and I know by
how much (though not really, as you point out below), I feel there ought
to be an algorithm that is suited to my particular situation, though it
might not be the "best" algorithm in the general case.
But also read below.
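The even/odd-line scheme under discussion might look like this in numpy. A sketch only: it deliberately ignores the half-pixel vertical offset that is the open question here, and the array sizes are scaled down from a real 2400x4800 scan:

```python
import numpy as np

def split_and_average(scan):
    """Split a scan into even-line and odd-line images and average them."""
    even = scan[0::2].astype(np.float64)
    odd = scan[1::2].astype(np.float64)
    n = min(len(even), len(odd))
    return (even[:n] + odd[:n]) / 2.0

# Deterministic demo (a real scan would be e.g. 2400x4800 pixels):
scan = np.zeros((960, 480))
scan[0::2] = 100.0   # "first sampling" lines
scan[1::2] = 200.0   # "second sampling" lines
half = split_and_average(scan)   # every pixel becomes 150.0
```

If the half-pixel offset matters, the averaging would need to become a shifted interpolation rather than a plain mean.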
>> I just thought that this knowledge might have allowed me to
>> choose an appropriate downsampling algorithm, instead of just
>> using whatever Photoshop offers.
>
> Answering that question also requires determining the actual resolution
> limitations. Nevertheless, the better downsampling algorithms will
> behave better regardless of the data quality 'thrown' at them.
Well, are my Imatest results enough to attempt that?
And, perhaps the key in obtaining what I want really is in sharpening,
and not so much in the resizing algorithm.
In that case, I suppose I should find out the "correct" amount of
sharpening from the Imatest data -- though I should run Imatest on a
4800 dpi scan to get appropriate results for applying sharpening to a
(downsampled) 4800 dpi scan, shouldn't I?
Still, resampling-then-sharpening sounds like an unnecessarily lossy
operation, since I know everything (or, I better know everything) in
advance of resizing; and even resample-then-sharpen leaves the question
of what resampling algorithm to use, as I suppose they aren't all the
same even when aliasing is taken out of the equation.
I am very interested in this possibility of calculating the correct
amount of sharpening from MTF results anyway, even aside from the issue
of 4800 dpi scanning.
Obviously, even my 2400 dpi scans need some sharpening, and I'm not the
kind of guy who likes to decide the amount by eyeballing when there
is "a right way".
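For what it's worth, the usual "add back contrast" tool here is an unsharp mask. The sketch below uses numpy with made-up sigma/amount values, since deriving the correct amount from measured MTF data is exactly the unresolved part:

```python
import numpy as np

def gaussian_blur_1d(img, sigma, axis):
    """Separable Gaussian blur along one axis (zero-padded at borders)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    k /= k.sum()
    return np.apply_along_axis(
        lambda m: np.convolve(m, k, mode='same'), axis, img)

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """img + amount * (img - blurred), clipped to the 8-bit range."""
    blurred = gaussian_blur_1d(gaussian_blur_1d(img, sigma, 0), sigma, 1)
    return np.clip(img + amount * (img - blurred), 0.0, 255.0)

# Demo: a soft horizontal edge gets a steeper slope after sharpening.
cols = np.arange(64)
soft_edge = np.tile(255.0 / (1.0 + np.exp(-(cols - 32.0))), (64, 1))
sharp = unsharp_mask(soft_edge)
```

In principle the MTF measurement tells you how much contrast was lost at each frequency, which could set sigma and amount instead of eyeballing them.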
>>> [snip]
> [snip]
by LjL
ljl...@tiscali.it
But everybody can!
For those who want to do their own testing, I have created a target
file that is also better suited for digicam sensors, not only for
analog film, and you can make your own target from it at home with a
decent inkjet printer.
For HP/Canon inkjet printers (3.8MB):
http://www.xs4all.nl/~bvdwolf/main/downloads/Jtf60cy-100mm_600ppi.gif
For Epson inkjet printers (5.3MB):
http://www.xs4all.nl/~bvdwolf/main/downloads/Jtf60cy-100mm_720ppi.gif
Print it at the indicated ppi without printer enhancements on glossy
photo paper, which should produce a 100x100mm target, and shoot it with
your (digi)cam from a (non-critical) distance like between 25-50x the
focal length. That will produce a discrete sampled sensor array
capture that's limited by the combined optical components and capture
medium in the optical chain.
Lots of interesting conclusions can be drawn from the resulting
image. The target is cheap to produce, and when it gets worn-out, you
just print a new one.
Bart
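For the curious, the basic sinusoidal star (without Bart's improved centre rendition or calibrated grays) can be generated with a few lines of numpy; the size and cycle count below are arbitrary:

```python
import numpy as np

def siemens_star(size=512, cycles=60):
    """Sinusoidal star: spokes modulate with angle, on a mid-gray field."""
    y, x = np.mgrid[0:size, 0:size]
    cy = cx = (size - 1) / 2.0
    theta = np.arctan2(y - cy, x - cx)
    r = np.hypot(x - cx, y - cy)
    star = 127.5 + 127.5 * np.cos(cycles * theta)
    # Outside the star, fall back to flat mid-gray (RGB 128), which
    # doubles as the uniform-lighting / exposure check described above.
    return np.where(r <= size / 2.0 - 4, star, 128.0).astype(np.uint8)

target = siemens_star()
```

The mid-gray surround is the part worth keeping: if the scanned or photographed corners don't come back near 128, the capture's exposure or uniformity is off before you even look at the spokes.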
> "Gordon Moat" <mo...@attglobal.net> wrote in message
> news:432B71D2...@attglobal.net...
> > ljl...@tiscalinet.it wrote:
> SNIP
> > Not everyone has a Siemens star pattern for testing.
>
> But everybody can!
>
> For those who want to do their own testing, I have created a target
> file that is also better suited for digicam sensors, not only for
> analog film, and you can make your own target from it at home with a
> decent inkjet printer.
> For HP/Canon inkjet printers (3.8MB):
> http://www.xs4all.nl/~bvdwolf/main/downloads/Jtf60cy-100mm_600ppi.gif
> For Epson inkjet printers (5.3MB):
> http://www.xs4all.nl/~bvdwolf/main/downloads/Jtf60cy-100mm_720ppi.gif
>
That is awesome Bart! Thanks for sharing!
I already have software that generates these automatically, but nice to
see a ready made one.
>
> Print it at the indicated ppi without printer enhancements on glossy
> photo paper, which should produce a 100x100mm target, and shoot it with
> your (digi)cam from a (non-critical) distance like between 25-50x the
> focal length. That will produce a discrete sampled sensor array
> capture that's limited by the combined optical components and capture
> medium in the optical chain.
Have you tried just printing them to a B/W laser printer? Different
results than inkjet?
>
>
> Lots of interesting conclusions can be drawn from the resulting
> image. The target is cheap to produce, and when it gets worn-out, you
> just print a new one.
>
> Bart
As always Bart, you are a wealth of information.
Yes, there are several programs that can produce such a pattern, but
this one also includes improved centre rendition to avoid
print-aliasing (at 600 or 720 ppi native resolution) and it includes a
mid-gray background which allows post-processing to a 'correct'
contrast (the mid-grey corners should render as RGB 128/128/128, which
is also a check for uniform lighting and correct exposure). Adding a
grayscale step-wedge will help even more, but there are better,
spectrally uniform, versions available than CMY inkjet-inks allow.
SNIP
> Have you tried just printing them to a B/W laser printer?
> Different results than inkjet?
I use/suggest an inkjet printer because it allows more accurate
simulation of continuous tone images, so I designed the pattern for
those. A laser printer could be used, but the native printing density
in PPI needs to be an exact match with either 600 ppi (or 720 ppi) for
good results. The only laser printer versions that I've seen made by
others, were printed having a completely wrong gamma/contrast, so the
grayscale values would probably need a type of pre-calibration.
> As always Bart, you are a wealth of information.
Thanks. All I do is share some of the findings I've gathered/expanded
over the years.
I live by (at least) one of the principles I later heard explained by
one of the Apple 'fellows' (Guy Kawasaki, also described by some as a
'pyrotechnical mind') in (one of) his book(s) "How to Drive Your
Competition Crazy".
The (his wording) principle is:
Eat like a bird (many times your own weight (= absorb all kinds of
knowledge), and poop like an elephant (share huge amounts, even with
competitors)).
Bart