I've put up a first version of a webpage reviewing different downsampling methods.
Many original images, from scanners or digicams, are too large for
web display. However, downsizing without proper precautions will produce
resampling artifacts. The methods described make it possible to improve
image quality when downsizing needs to be applied.
FWIW, CS has new sampling algorithms for downsampling.
>FWIW, CS has new sampling algorithms for downsampling.
Bart checked both new CS bicubic methods (sharper and smoother) in his tests.
I've been using bicubic sharper to downsample since I got CS and it works well.
You're welcome. I thought it was intriguing to realize that by reducing the
size, we increase the artifacts. The results are theoretically simple to
predict, but it's still revealing to see them with your own eyes.
Correct, so the CS results shown on the webpage may/will look different with
older versions. That's why I included a link to the target, so people can
try it themselves.
It will work even better by pre-blurring in relation to the amount of size
reduction. Bicubic Sharper will require less additional sharpening, although
a separate sharpening step provides more control.
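The effect of a pre-blur is easy to demonstrate on a one-dimensional signal. This stdlib-only Python sketch uses a moving-average (box) blur as a crude stand-in for a Gaussian pre-blur and point sampling as a stand-in for naive resampling; both substitutions are simplifications chosen to keep the example self-contained:

```python
def box_blur(signal, radius):
    """Moving-average low-pass filter (stands in for Gaussian Blur)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def subsample(signal, factor):
    """Keep every factor-th sample, as naive downsizing does."""
    return signal[::factor]

# A 1-sample on/off pattern is far above the Nyquist limit of a 4x
# reduction, so it cannot survive; the question is how it fails.
pattern = [0.0, 1.0] * 16

naive = subsample(pattern, 4)                  # hits only the 0.0 samples
filtered = subsample(box_blur(pattern, 2), 4)  # averages to mid values

print(naive)     # all zeros: the detail aliased to a flat, wrong value
print(filtered)  # near-uniform mid values: detail averaged away instead
```

The pre-blurred version loses the fine detail too, of course; the point is that it degrades gracefully to an average instead of aliasing to something false.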
>From: "Bart van der Wolf" bvd...@no.spam
>It will work even better by pre-blurring in relation to the amount of size
What works for me with film scans is to not do ANY sharpening on the scan so
it's still a touch soft due to blooming from the scanner, then downsample in
50% steps using 'bicubic sharper' as many times as needed to get close to the
target size, then do it one more time to the target size. Since my film
scans are all pretty much one of three basic sizes, depending on whether I've
scanned 35 mm or 6x4.5 cm or 6x7 cm film, I have actions that do the steps and
it's pretty quick.
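The halving action described above can be sketched in a few lines of Python; the starting and target sizes below are made-up illustrations, not figures from the post:

```python
def halving_steps(size, target):
    """One way to mirror the action above: halve until another halving
    would overshoot the target, then jump to the target size."""
    steps = []
    while size // 2 > target:
        size //= 2
        steps.append(size)
    steps.append(target)
    return steps

# Hypothetical 5400 px scan down to a 600 px web size.
print(halving_steps(5400, 600))  # [2700, 1350, 675, 600]
```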
I think not sharpening the initial scan takes care of the pre-blurring :) With
'bicubic sharper' I can see artifacting or a kind of crunchy look sometimes
when I've downsampled sharpened files.
At any rate this method seems to work well for me on actual images, giving
better results than I got pre-CS with a variety of techniques. I can see how
it won't work well with a target like you used though. For grins I'll download
your test pattern and see if downsizing in increments (probably smaller than
50% increments since the file is so sharp ... maybe 10% steps?) gives better
results than resampling in one fell swoop.
I downsampled using bicubic sharper in CS in 90% steps (15 repetitions in an
action to get to 206x206 pixels) and the results are somewhere between
ImageMagick's Triangle and Lanczos results, and much better than resampling
in one step. This is without a pre-blur.
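Those numbers check out if the source was 1000x1000 pixels (an assumption; the original size isn't stated here), since 0.9^15 is about 0.206:

```python
def stepped_sizes(start, factor, steps):
    """Pixel sizes after repeated fractional resizes, rounding to whole
    pixels at each step as an image editor would."""
    sizes = [start]
    for _ in range(steps):
        sizes.append(round(sizes[-1] * factor))
    return sizes

# Fifteen 90% steps from an assumed 1000 px source land on 206 px.
print(stepped_sizes(1000, 0.9, 15)[-1])  # 206
```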
Interesting experiment ... thanks for posting. I think when I have images with
plenty of fine details I'll downsample in smaller increments using 'bicubic
sharper' ... perhaps at even finer steps it would be even better.
That would work a bit better with unsharpened (Bayer CFA) digicam images and
most flatbed scanners. A good film scanner (assuming a top-notch film) will
have real detail down to the single pixel.
> I think not sharpening the initial scan takes care of the pre-blurring :) With
> 'bicubic sharper' I can see artifacting or a kind of crunchy look sometimes
> when I've downsampled sharpened files.
In print that would probably become invisible, but downsizing is more often
done for web publishing, so artifacts are not welcome.
> At any rate this method seems to work well for me on actual images, giving
> better results than I got pre-CS with a variety of techniques. I can see how
> it won't work well with a target like you used though. For grins I'll download
> your test pattern and see if downsizing in increments (probably smaller than
> 50% increments since the file is so sharp ... maybe 10% steps?) gives better
> results than resampling in one fell swoop.
The benefit of the target is that it'll represent a worst-case scenario. If
the method used behaves well on the target (no artifacts beyond the radius
as a reduction percentage of the diagonal), there's no need to question less
critical subjects. Although pre-blurring may seem counterproductive for
increasing quality, remember that any pre-blur introduced will also be
reduced in size.
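The actual target isn't reproduced here, but a zone plate, a pattern whose spatial frequency rises toward the edges so that any aliasing shows up as spurious rings, is a classic worst-case test of this kind. A minimal stdlib-only generator (my own sketch, not the target from the webpage) might look like:

```python
import math

def zone_plate(size):
    """Zone-plate test pattern with values in [0, 1]; frequency
    increases with distance from the center."""
    c = (size - 1) / 2.0
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            row.append(0.5 + 0.5 * math.cos(math.pi * r2 / size))
        img.append(row)
    return img

plate = zone_plate(256)  # downsize this with each method and compare
```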
In Photoshop CS, an 8-bit/channel Gaussian blur extends to no more than the
following number of pixels:
Given the shape of a Gaussian curve, I'd say that e.g. a 5x reduction (to
20%) would tolerate up to a 1.2 radius before losing resolution that can't
be reliably restored by post-sharpening. It is also possible to apply a
self-defined averaging filter with the Other|Custom filter.
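The arithmetic behind that tolerance is just that the pre-blur shrinks along with the image:

```python
def effective_radius(pre_blur_radius, reduction_factor):
    """A blur applied before downsizing is scaled down too, so its
    footprint at the final size is radius / factor."""
    return pre_blur_radius / reduction_factor

# A 1.2 px pre-blur ahead of a 5x reduction (to 20%) leaves only about
# a quarter-pixel blur in the output, recoverable by post-sharpening.
print(round(effective_radius(1.2, 5), 2))  # 0.24
```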
Pretty good, although that may/will work out differently in other software.
> Interesting experiment ... thanks for posting.
That's the purpose. Especially if fixed reduction factors are used, one can
optimize the action quite well with that target.
> I think when I have images with plenty of fine details I'll downsample in
> smaller increments using 'bicubic sharper' ... perhaps at even finer steps
> it would be even better.
Yes, try finer and coarser because it depends on the software algorithms
used. Let the worst-case target be your guide.