Downsizing methods


Bart van der Wolf

May 2, 2004, 8:45:23 PM

FYI

I've put up a first version of a webpage reviewing different
resizing/downsampling methods:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm

Many original images from scanners or digicams are too large for web
display. However, downsizing without proper precautions will produce
resampling artifacts. The methods described help improve image quality
when downsizing needs to be applied.

Bart

Mike Russell

May 3, 2004, 1:53:00 PM


Thanks, Bart, for an original and very thorough article. Not much has been
done on this important topic and you've given us a lot to think about.
--

Mike Russell
www.curvemeister.com
www.geigy.2y.net


jjs

May 3, 2004, 5:06:19 PM


"Mike Russell" <REgei...@pacbellTHIS.net> wrote in message
news:0Gvlc.4948$Bu3....@newssvr27.news.prodigy.com...

> Thanks, Bart, for an original and very thorough article. Not much has
> been done on this important topic and you've given us a lot to think about.

FWIW, CS has new sampling algorithms for downsampling.


Bill Hilton

May 3, 2004, 5:16:23 PM

From: "jjs" nos...@please.xxx

> FWIW, CS has new sampling algorithms for downsampling.

Bart checked both new CS bicubic methods (sharper and smoother) in his tests.

I've been using bicubic sharper to downsample since I got CS and it works well
for me.

Bart van der Wolf

May 3, 2004, 5:45:10 PM


"Mike Russell" <REgei...@pacbellTHIS.net> wrote in message
news:0Gvlc.4948$Bu3....@newssvr27.news.prodigy.com...
SNIP

> Thanks, Bart, for an original and very thorough article. Not much has
> been done on this important topic and you've given us a lot to think about.

You're welcome. I thought it was intriguing to realize that by reducing the
size, we increase the artifacts. The results are theoretically simple to
predict, but still revealing to see with your own eyes.

Bart

Bart van der Wolf

May 3, 2004, 5:48:38 PM


"jjs" <nos...@please.xxx> wrote in message
news:109dd21...@news.supernews.com...
SNIP

> FWIW, CS has new sampling algorithms for downsampling.

Correct, so the CS results shown on the webpage may look different in
older versions. That's why I included a link to the target, so people can
try it themselves.

Bart

Bart van der Wolf

May 3, 2004, 5:52:39 PM


"Bill Hilton" <bhilt...@aol.comedy> wrote in message
news:20040503171623...@mb-m29.aol.com...
SNIP

> Bart checked both new CS bicubic methods (sharper and smoother) in his
> tests.
>
> I've been using bicubic sharper to downsample since I got CS and it
> works well for me.

It will work even better with a pre-blur matched to the amount of size
reduction. Bicubic sharper will require less additional sharpening,
although separate post-sharpening provides more control.

Bart

Bill Hilton

May 3, 2004, 7:17:55 PM

>> "Bill Hilton" wrote

>>
>> I've been using bicubic sharper to downsample since I got CS and it
>> works well for me.

From: "Bart van der Wolf" bvd...@no.spam
>
> It will even work better by pre-blurring in relation to the amount of size
> reduction.

What works for me with film scans is to not do ANY sharpening on the scan, so
it's still a touch soft due to blooming from the scanner, then downsample in
50% steps using 'bicubic sharper' as many times as needed to get close to the
target size, then do it one more time to the target size. Since my film scans
are all pretty much one of three basic sizes, depending on whether I've
scanned 35 mm or 6x4.5 cm or 6x7 cm film, I have actions that do the steps and
it's pretty quick.

I think not sharpening the initial scan takes care of the pre-blurring :) With
'bicubic sharper' I can see artifacting or a kind of crunchy look sometimes
when I've downsampled sharpened files.

At any rate this method seems to work well for me on actual images, giving
better results than I got pre-CS with a variety of techniques. I can see how
it won't work well with a target like you used though. For grins I'll download
your test pattern and see if downsizing in increments (probably smaller than
50% increments since the file is so sharp ... maybe 10% steps?) gives better
results than resampling in one fell swoop.

Bill


Bill Hilton

May 3, 2004, 8:20:01 PM

>For grins I'll download your test pattern and see if downsizing in increments
>(probably smaller than 50% increments since the file is so sharp ... maybe
>10% steps?) gives better results than resampling in one fell swoop.

Hi Bart,

I downsampled using bicubic sharper in CS in 90% steps (15 repetitions in an
action to get to 206x206 pixels) and the results are somewhere between
ImageMagick's Triangle and Lanczos results, much better than resampling with
one step. This is without a pre-blur.

Interesting experiment ... thanks for posting. I think when I have images with
plenty of fine details I'll downsample in smaller increments using 'bicubic
sharper' ... perhaps at even finer steps it would be even better.

Bill


Bart van der Wolf

May 3, 2004, 9:27:17 PM


"Bill Hilton" <bhilt...@aol.comedy> wrote in message
news:20040503191755...@mb-m12.aol.com...
SNIP

> What works for me with film scans is to not do ANY sharpening on the scan
> so it's still a touch soft due to blooming from the scanner, then
> downsample in 50% steps using 'bicubic sharper' as many times as needed
> to get close to the target size, then doing it one more time to the
> target size. Since my film scans are all pretty much one of three basic
> sizes, depending on whether I've scanned 35 mm or 6x4.5 cm or 6x7 cm
> film, I have actions that do the steps and it's pretty quick.

That would work a bit better with unsharpened (Bayer CFA) digicam images and
most flatbed scanners. A good film scanner (assuming top-notch film) will
have real detail down to the single pixel.

> I think not sharpening the initial scan takes care of the pre-blurring :)
> With 'bicubic sharper' I can see artifacting or a kind of crunchy look
> sometimes when I've downsampled sharpened files.

In print that would probably become invisible, but downsizing is more often
done for web publishing, so artifacts are not welcome.

> At any rate this method seems to work well for me on actual images,
> giving better results than I got pre-CS with a variety of techniques. I
> can see how it won't work well with a target like you used though. For
> grins I'll download your test pattern and see if downsizing in increments
> (probably smaller than 50% increments since the file is so sharp ...
> maybe 10% steps?) gives better results than resampling in one fell swoop.

The benefit of the target is that it represents a worst-case scenario. If
the method used behaves well on the target (no artifacts beyond the radius,
expressed as a reduction percentage of the diagonal), there's no need to
question less critical subjects. Although pre-blurring may seem
counterproductive for increasing quality, remember that any pre-blur
introduced will also be reduced in size.

In Photoshop CS, an 8-b/ch Gaussian blur extends to no more than the
following number of pixels:

Radius    Pixels
0.0-0.1   0
0.2-0.5   1
0.6-0.8   2
0.9-1.2   3
1.3-1.6   4
1.7-2.3   5
2.4-2.6   6
2.7-2.9   7
3.0-3.3   8
3.4-3.6   9
3.7-3.9   10
Given the shape of a Gaussian curve, I'd say that e.g. a 5x reduction (to
20%) would tolerate up to a 1.2 radius before losing resolution that can't
be reliably restored by post-sharpening. It is also possible to apply a
self-defined averaging filter with the Other|Custom filter.
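
Outside Photoshop, the pre-blur idea can be sketched with Pillow. This is
just an illustration: the function name is invented, the 1.2 radius follows
the 5x-reduction rule of thumb above, plain bicubic stands in for 'bicubic
sharper', and the 3x3 box kernel is a hypothetical stand-in for a
self-defined Other|Custom averaging filter.

```python
from PIL import Image, ImageFilter

def preblur_downsample(img, target, radius=1.2):
    """Gaussian pre-blur (a crude anti-alias filter), then a single-step
    bicubic resize; the blur itself shrinks along with the image, so
    little resolution is lost at this reduction factor."""
    blurred = img.filter(ImageFilter.GaussianBlur(radius=radius))
    return blurred.resize(target, Image.BICUBIC)

# A self-defined 3x3 averaging kernel, similar in spirit to
# Photoshop's Other|Custom filter:
box_average = ImageFilter.Kernel((3, 3), [1] * 9, scale=9)
```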

Bart

Bart van der Wolf

May 3, 2004, 9:43:25 PM


"Bill Hilton" <bhilt...@aol.comedy> wrote in message
news:20040503202001...@mb-m05.aol.com...
SNIP

> I downsampled using bicubic sharper in CS in 90% steps (15 repetitions
> in an action to get to 206x206 pixels) and the results are somewhere
> between ImageMagick's Triangle and Lanczos results, much better than
> resampling with one step. This is without a pre-blur.

Pretty good, although that may work out differently in other software.

> Interesting experiment ... thanks for posting.

That's the purpose. Especially if fixed reduction factors are used, one can
optimize the action quite well with that target.

> I think when I have images with plenty of fine details I'll downsample in
> smaller increments using 'bicubic sharper' ... perhaps at even finer steps
> it would be even better.

Yes, try finer and coarser because it depends on the software algorithms
used. Let the worst-case target be your guide.

Bart
