
colorspace-faq - purpose of the faq ?


Timo Autiokari

Feb 25, 1998

Dear Mr. Poynton,

What is the purpose of your GammaFAQ?

In your forewords you say:

***"In video, computer graphics and image processing, gamma represents a
numerical parameter that describes the nonlinearity of intensity reproduction.
Having a good understanding of the theory and practice of gamma will enable you
to get good results when you create, process and display pictures." ***

Many people read the above as though it applies to
digital-photographic-imaging as well.

However, you do not say that it does. Instead you speak about "image
processing" and about "create, process and display pictures". These differ from
digital-photographic-imaging in two aspects:

Digital photographic images (1) originate from the real world, and they are
then (2) enhanced, whereas "image processing" and "create, process and display
pictures" refer to something that originates from calculations or algorithms,
synthetically, and is then displayed as it is, without enhancement.

So, I find many fundamental issues in your GammaFAQ that are, to say the least,
misleading when they are applied to digital-photographic-imaging. I take only
one example from the GammaFAQ, the same text is in the ColorFAQ as well:

>>13. How many bits do I need to smoothly shade from black
>>to white?
>>...To shade smoothly over this range, so as to produce no
>>perceptible steps, at the black end of the scale it is necessary
>>to have coding that represents different intensity levels 1.00, 1.01,
>>1.02 and so on. If linear light coding is used, the "delta" of 0.01 must
>>be maintained all the way up the scale to white. This requires about
>>9,900 codes, or about fourteen bits per component. If you use nonlinear
>>coding, then the 1.01 "delta" required at the black end of the scale
>>applies as a ratio, not an absolute increment, and progresses like
>>compound interest up to white. This results in about 460 codes, or about
>>nine bits per component. Eight bits, nonlinearly coded according to
>>Rec. 709, is sufficient for broadcast-quality digital television at a contrast
>>ratio of about 50:1....
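For reference, the arithmetic behind those figures can be checked in a few
lines of Python, taking the FAQ's own assumptions (a 100:1 intensity range and
a 1% visibility threshold at black):

    import math

    # Linear coding: a fixed step of 0.01 from intensity 1.00 up to 100.
    linear_codes = (100 - 1.0) / 0.01
    # Nonlinear coding: steps grow as a 1.01 ratio, "compound interest".
    nonlinear_codes = math.log(100) / math.log(1.01)

    print(round(linear_codes), math.ceil(math.log2(linear_codes)))
    # 9900 codes -> 14 bits
    print(round(nonlinear_codes), math.ceil(math.log2(nonlinear_codes)))
    # ~463 codes -> 9 bits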

Now, the conditional expressions "If linear light coding is used..." and "If
you use nonlinear coding..." are fundamentally faulty when applied to digital
photographic imaging. Also, the meaning of "to code" itself gets to be a bit
blurry.

In digital-photographic-imaging where a CCD imager is used (cameras and
scanners) there is absolutely no possibility to choose what coding is used.
The CCD sees the light linearly, and the coding is done linearly by the A/D
converter inside the CCD device. _This_ is where the coding happens. And it
happens linearly.

So, the data that comes out from the CCD imager device (the integrated
circuit) is already coded, and that has been done linearly, always. The
light-to-data coding has been done at this point.

Thereafter it is only possible to alter the _result_ of this coding, in other
words to make the image data suitable for one output device or another. This is
the same as compensation.

In other words, after the coding is done it is not possible to simply calculate
better shading (or better human perception) into the data. That would require
more data.

On the other hand, if you _create_ computer generated graphics (like virtual
reality 'images') using algorithms, then you are free to affect the 'coding',
because the 'coding' is done by algorithms or calculations, so any intensity
values can be freely chosen. There you can have the better shading of the black
and the better perception for the eye, simultaneously.

So my question is: does your GammaFAQ cover digital-photographic-imaging, and
if your answer is "yes", then how do you explain the better perceptual coding
above?


Timo Autiokari
http://www.clinet.fi/~timothy/calibration/index.htm

Stephen H. Westin

Feb 25, 1998

tim...@clinet.fi (Timo Autiokari) writes:

> Dear Mr. Poynton,
>
> What is the purpose of your GammaFAQ?
>
> In your forewords you say:

> ***"In video, computer graphics and image processing, gamma
> represents a numerical parameter that describes the nonlinearity of
> intensity reproduction. Having a good understanding of the theory
> and practice of gamma will enable you to get good results when you
> create, process and display pictures." ***

> Many people read the above as though it applies to
> digital-photographic-imaging as well.

Actually, he is dealing simply with the issues of display of a known
digital image on a CRT. As you point out, there are additional issues
involved with image acquisition.

<snip>

> So, I find many fundamental issues in your GammaFAQ that are, to say
> the least, misleading when they are applied to
> digital-photographic-imaging. I take only one example from the
> GammaFAQ, the same text is in the ColorFAQ as well:

> >>13. How many bits do I need to smoothly shade from black
> >>to white?
> >>...To shade smoothly over this range, so as to produce no
> >>perceptible steps, at the black end of the scale it is necessary
> >>to have coding that represents different intensity levels 1.00, 1.01,
> >>1.02 and so on. If linear light coding is used, the "delta" of 0.01 must
> >>be maintained all the way up the scale to white. This requires about
> >>9,900 codes, or about fourteen bits per component. If you use nonlinear
> >>coding, then the 1.01 "delta" required at the black end of the scale
> >>applies as a ratio, not an absolute increment, and progresses like
> >>compound interest up to white. This results in about 460 codes, or about
> >>nine bits per component. Eight bits, nonlinearly coded according to
> >>Rec. 709, is sufficient for broadcast-quality digital television at a contrast
> >>ratio of about 50:1....

I think this description is a bit pessimistic. I would put the minimum
for a digital image displayed on a CRT at about 12 bits, if linear
encoding is used.

> Now, the conditional expressions "If linear light coding is
> used..." and "If you use nonlinear coding..." are fundamentally
> faulty when applied to digital photographic imaging. Also, the
> meaning of "to code" itself gets to be a bit blurry. In
> digital-photographic-imaging where a CCD imager is used (cameras
> and scanners) there is absolutely no possibility to choose what
> coding is used. The CCD sees the light linearly, and the coding is
> done linearly by the A/D converter inside the CCD device. _This_ is
> where the coding happens. And it happens linearly.

Then how does our Kodak DCS420 camera deliver images with a gamma
correction of around 1.6? We know; we measured it. And I think you
will find a similar nonlinearity in most commercially-available
cameras.

<snip>

> In other words, after the coding is done it is not possible to
> simply calculate better shading (or better human perception) into
> the data. That would require more data.

Yup. That's why you have to think about these issues. As do
manufacturers of digital cameras.

<snip>

Another subtle point is that it is, in general, impossible to
transform an acquired image into any standard tristimulus color space;
since the three filter responses are always different from the CIE
matching functions, there will always be cases where the
transformation will give the wrong answer. Fortunately, most spectra
in the real world don't incur egregious errors in such a process. Just
found this out a few weeks ago, myself.
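To illustrate that last point, a small numerical sketch; the Gaussian curves
below are made-up stand-ins for the real camera filters and CIE matching
functions (which are tabulated data), but the conclusion is the same:

    import numpy as np

    wl = np.arange(400.0, 701.0, 10.0)                    # wavelengths, nm
    gauss = lambda mu, s: np.exp(-0.5 * ((wl - mu) / s) ** 2)

    cie = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)])
    cam = np.stack([gauss(610, 35), gauss(540, 45), gauss(460, 25)])

    # Best 3x3 matrix mapping camera signals to tristimulus values,
    # fitted by least squares over random test spectra.
    spectra = np.random.rand(500, wl.size)
    signals = spectra @ cam.T
    tristim = spectra @ cie.T
    M, *_ = np.linalg.lstsq(signals, tristim, rcond=None)

    # Nonzero residual: no 3x3 matrix can be exact unless the camera
    # curves are linear combinations of the matching functions.
    print(np.abs(signals @ M - tristim).max())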

--
-Stephen H. Westin
Any information or opinions in this message are mine: they do not
represent the position of Cornell University or any of its sponsors.

Michael McGuire

Feb 25, 1998

....snippage

: In digital-photographic-imaging where a CCD imager is used (cameras and
: scanners) there is absolutely no possibility to choose what coding is used.
: The CCD sees the light linearly, and the coding is done linearly by the A/D
: converter inside the CCD device. _This_ is where the coding happens. And it
: happens linearly.

: So, the data that comes out from the CCD imager device (the integrated
: circuit) is already coded, and that has been done linearly, always. The
: light-to-data coding has been done at this point.

...snippage

: Timo Autiokari
: http://www.clinet.fi/~timothy/calibration/index.htm

A/D convertors in cameras and scanners are in fact external to the CCD's--not
on the same chip, and thus accessible to adjustment by other than the maker of
the CCD. Further there is no requirement that an A/D converter have an output
linearly proportional to its input voltage. An implementation of an A/D
consists of N voltage comparators and a voltage divider comprised of a string
of N resistors fed by a stabilized voltage. Each node of the divider is
connected to the minus input of a voltage comparator. The plus input of each of
the comparators is connected to the input signal. The output is the number of
comparators that are turned on by the input signal, that is all those whose
voltage divider (minus) input is less than the input signal. If the resistor
values in the divider are all equal then you get linear output. But they could
just as well be a power law sequence for a power law relationship of output to
input or whatever function you like.
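The idea is easy to model numerically; a Python sketch, using an arbitrary 2.2
power law as the example ladder:

    import numpy as np

    def flash_adc(v, references):
        # Output code = number of comparators whose reference voltage
        # lies below the input signal.
        return int(np.searchsorted(np.sort(references), v))

    n = 255                                   # 8-bit flash: 255 comparators
    taps = np.linspace(0.0, 1.0, n + 2)[1:-1] # equal-resistor divider taps
    linear_refs = taps
    power_refs = taps ** 2.2                  # a power-law ladder instead

    v = 0.02                                  # a dark-region input
    print(flash_adc(v, linear_refs))          # 5: coarse steps near black
    print(flash_adc(v, power_refs))           # 43: fine steps near black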

From the point of view of optimizing signal-to-noise performance, a square root
scaled sequence would be superior for CCD's given the Poisson law fluctuation
of the incoming photon flux--yet another expression of Nature's preference for
the non-linear, and in the same direction from linear as CRT gamma, printer
correction, and perceptual linearity.
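A quick numerical illustration of the square-root point (a sketch, not a
careful treatment):

    import numpy as np

    # Poisson statistics: at a mean of N photons the fluctuation is sqrt(N),
    # so a square-root transform makes the noise roughly uniform per step.
    rng = np.random.default_rng(0)
    for mean_photons in (100, 10_000):
        counts = rng.poisson(mean_photons, 200_000)
        print(mean_photons, counts.std(), np.sqrt(counts).std())
    # raw std grows ~10x for 100x more light; std of sqrt(counts) stays ~0.5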

Mike
--
Michael McGuire Hewlett Packard Laboratories
email:xmcg...@xhpl.xhp.com P.0. Box 10490 (1501 Page Mill Rd.)
(remove x's from email if not Palo Alto, CA 94303-0971
a spammer)
Phone: (650)-857-5491
************BE SURE TO DOUBLE CLUTCH WHEN YOU PARADIGM SHIFT.**********

Charles Poynton

Feb 26, 1998

Concerning an ongoing public and private attempt to educate Timo Autiokari
concerning transfer functions, in article
<34f66f5b...@news.clinet.fi>, tim...@clinet.fi (Timo Autiokari)
wrote under the heading "What is the purpose of your GammaFAQ?":

> the conditional expressions "If linear light coding is used..." and
> "If you use nonlinear coding..." are fundamentally faulty when
> applied to digital photographic imaging.

The statements are not faulty, and they apply quite well to digital
photographic imaging.

Timo continues,

> In digital-photographic-imaging where a CCD imager is used (cameras and
> scanners) there is absolutely no possibility to choose what coding is used.
> The CCD sees the light linearly, and the coding is done linearly by the A/D
> converter inside the CCD device. _This_ is where the coding happens. And it
> happens linearly.
>
> So, the data that comes out from the CCD imager device (the integrated
> circuit) is already coded, and that has been done linearly, always. The
> light-to-data coding has been done at this point.

In a note attached to this follow-up, I reiterate what I told Timo, in
private e-mail, yesterday. (He acknowledged receiving it from me.) The
note explains how scanners and cameras use CCD devices. I see that Michael
McGuire and Mitch Valburg have already posted follow-ups concerning CCDs
and ADCs, and I expect (and hope for) several more third-party follow-ups
in the next few days. I'm sorry that Timo paid little attention to my
message, because he could have cleared up some confusion, instead of
creating some more.

C.

p.s. Timo: I took care in my previous posting to direct follow-ups to just
<news:sci.image.processing> and <news:rec.photo.digital>. Perhaps you
could do the same, instead of continuing to cross-post to 6 groups.

--
Charles Poynton
<mailto:poy...@poynton.com> [Mac Eudora/MIME/BinHex/uu]
<http://www.inforamp.net/~poynton/>
--


A Rough note for Timo concerning the Gamma FAQ
as it applies to CCDs and ADCs in scanners and cameras


Copyright (c) 1998-02-26
Charles Poynton


Nearly all contemporary CCDs are intrinsically analog devices - or more
properly, their output is sampled but not quantized. (For definitions of
sampling and quantization, see Chapter 1 of my book; that chapter is on
the web.) Today, only a few CCD devices have integral A-to-D converters.
(Soon, many will, but solutions to the 8-bit linear light problem must
first be found.)

Contemporary desktop scanners generally take one of two approaches:

- Some have a 10-bit (or sometimes a 12-bit) A-to-D converter, followed
by a digital hardware lookup table where a nonlinear correction is
performed, producing 8 bits out.

- Some have an analog nonlinear correction circuit, followed by an
8-bit A-to-D converter.
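The first approach is essentially a lookup table; a rough sketch in Python,
using the Rec. 709 transfer function as the nonlinear correction (real hardware
tables may differ in detail):

    def rec709(L):
        # Rec. 709 transfer function; L is linear light in 0..1.
        return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

    # 12 bits in (linear light), 8 nonlinear bits out.
    lut = [round(255 * rec709(code / 4095.0)) for code in range(4096)]

    # The darkest ~1% of the linear range spreads over 11 output codes;
    # straight linear reduction to 8 bits would give it only ~2.
    print(lut[0], lut[40], lut[4095])    # 0 11 255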

Video cameras invariably take the second approach, except for the most
sophisticated studio cameras, costing $80,000 or more, which employ 12-bit
converters and digital gamma correction with 10 (nonlinear) bits out.
Industrial machine vision cameras, and astrophotography cameras, sometimes
directly produce linear-light intensity output.

A very low-end, cheap scanner might get away with no analog processing - just
a CCD and an 8-bit ADC. (I'm not certain whether this is done even in
cheap commercial units; I've never taken a really cheap unit apart. Maybe
a QuickTake 100 internally codes 8-bit linear-light - does anyone know?)
But such a scanner cannot reproduce smooth shades in dark regions of the
image. As I mention in my book, the quantization requirements near black
are relaxed when the contrast ratio of the display medium is low. Scanners
for print work generally have less demanding requirements than video
cameras, because offset printing generally has a lower contrast ratio than
television. Most demanding of all is motion picture film, or projected 35
mm transparencies [slides] in a dark room. (Most desktop computer
applications are not demanding of good shading near black, because most
desktop computer environments are brightly lit, consequently the contrast
ratio is poor.)

Timo suggested (in private e-mail) that the images in the following three
cases would appear exactly identical, assuming 8 bits per color component:

1. Raw, linear-light image data is processed through a display driver
that imposes a lookup table exactly compensating the nonlinearity
of the CRT.

2. Raw, linear-light image data is gamma-corrected by 8-bit per channel
image manipulation software, processed through no lookup table (or a
lookup table containing a ramp), and then displayed on a CRT.

3. Software in the camera or the scanner applies gamma
correction; the resulting 8-bit image is displayed directly on the CRT.

Timo incorrectly concludes that these three cases appear exactly
identical. His cases 1 and 3 correspond to the second and first rows
respectively of Figure 6.8 in my book. (The same figure is in question 14
of the Gamma FAQ.)

I'm sure that Timo correctly concluded that, in each of his 3 cases, black
is reproduced correctly, and white is reproduced correctly, and mid grey
is reproduced (almost) correctly. That's not the problem.

The problem is that the boundary between adjacent code values in dark
areas of the picture is more or less visible - or perhaps objectionable -
depending on which of these schemes is used! In decent viewing conditions,
with a video camera or a decent scanner, the images will _not_ appear
exactly the same. In case 1, the dark shades will exhibit banding; in case 3
they will not.

In case 1 (Figure 6.8, second row, CG), the rounding of intensities in the
dark shades to the same code value - by conversion of the intensity value
to 8 bits - erases distinctions between dark shades (intensities) that I
can see - that my vision can easily distinguish.

If you have seen banding in a (supposedly) continuous-tone image, at 24
bits per pixel, then you can be fairly certain that the image data has
been subjected to 8-bit linear-light coding (or poorly-chosen nonlinear
coding) someplace along the path from creation, capture, processing,
recording, placing in a framebuffer, running a lookup table, converting to
analog, and displaying.
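To make the banding argument concrete, here is a small numerical sketch,
assuming for simplicity a pure 2.2 power law rather than the exact Rec. 709
curve:

    import numpy as np

    dark = np.linspace(0.0, 0.05, 100_000)       # a smooth dark sweep, 0..5%

    case1 = np.round(dark * 255)                 # linear-light data to 8 bits
    case3 = np.round(dark ** (1 / 2.2) * 255)    # gamma-corrected, then 8 bits

    print(np.unique(case1).size)                 # 14 distinct codes -> banding
    print(np.unique(case3).size)                 # 66 distinct codes -> smooth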

--

Timo Autiokari

Feb 26, 1998

On 25 Feb 1998 23:40:19 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) wrote:

>A/D convertors in cameras and scanners are in fact external to the CCD's--not
>on the same chip, and thus accessible to adjustment by other than the maker of
>the CCD. Further there is no requirement that an A/D converter have an output
>linearly proportional to its input voltage. An implementation of an A/D
>consists of N voltage comparators and a voltage divider comprised of a string
>of N resistors fed by a stabilized voltage. Each node of the divider is
>connected to the minus input of a voltage comparator. The plus input of each of
>the comparators is connected to the input signal. The output is the number of
>comparators that are turned on by the input signal, that is all those whose
>voltage divider (minus) input is less than the input signal. If the resistor
>values in the divider are all equal then you get linear output. But they could
>just as well be a power law sequence for a power law relationship of output to
>input or whatever function you like.


In the article:
http://x4.dejanews.com/getdoc.xp?AN=296741174&CONTEXT=888477590.22675835&hitnum=1
you yourself say:

<< A well set up CCD--probably needs to be cooled--can put out 14 bit data
<< where all the noise intrinsic to the CCD is in the lowest order bit. CCD's
<< respond linearly to light intensity so 14 bits amounts to 14 doublings of
<< light intensity which is to say 14 stops. ...


<< Mike
<< --
<< Michael McGuire Hewlett Packard Laboratories

So please tell me, what are you trying to say now?

In your reply you use the wording "they could just as well be" in your
illusion: "If the resistor values in the divider are all equal then you get
linear output. But they could just as well be a power law sequence for a power
law relationship of output to input or whatever function you like."

There are two good reasons why the coding is linear in an AD converter:

(1) It would be rather foolish for the manufacturer of the converter to create
an AD converter that is very accurate in design, so that it can detect a very
small change at one end of the range, and then make the rest of the range
much looser.

(2) There are technical problems in the production of AD converters. It is
relatively easy to produce 8 resistors onto the chip that all have accurately
the same value. In an AD ladder it does not matter what the exact value of the
resistors is, only the value needs to be the same for each resistor in the
ladder. This is the very basic issue that the production of AD converters
relies on. It becomes increasingly difficult to produce more than 8 resistors
that have exactly the same value. This is the reason why higher bit-depth
converters are more expensive and that there are not many above 16-bit
accuracy. On the other hand producing 8 resistors that each have a different
value, but so that these values accurately follow some non-linear function,
would be *very* difficult. There is no technology on the horizon that could
make this approach feasible; it is much easier to produce a larger number of
linearly coded bits than the equivalent number of non-linearly compressed bits.

Below are links to AD converter pages of Analog Devices Inc., National
Semiconductor Corp., and Fujitsu Microelectronic Inc.; if anyone is interested,
please see the specifications. All the converters are linear. It is easiest to
just locate the *nonlinearity error* or the *linearity* spec.

http://products.analog.com/products/list_generics.asp?category=86
http://products.analog.com/products/list_generics.asp?category=85
http://www.national.com/catalog/AnalogDataAcquisition.html
http://fujitsumicro.com/products/analog/data.html

If anyone knows on-line specifications of CCD devices could you please post the
links.

And yes, there are both internal and external AD conversions with CCD imaging
systems. It does not change the fact that the conversion happens linearly,
inside an integrated circuit where the coding is in the form of physical
resistor elements on the chip.

It is also possible (unlikely but possible) that in some very specific camera
there is a non-linear analog signal conditioning between the CCD output and the
AD converter input. If Mr. Poynton's pages are targeted for such exotic systems
then in my opinion it would be nice if he could indicate this on these
pages.

Timo Autiokari

Charles Poynton

Feb 26, 1998

In article <34f637f...@news.clinet.fi>, tim...@clinet.fi (Timo
Autiokari) wrote:

> It is also possible (unlikely but possible) that in some very specific camera
> there is a non-linear analog signal conditioning between the CCD output and
> the AD converter input. If Mr. Poynton's pages are targeted for such exotic
> systems then in my opinion it would be nice if he could indicate this on
> these pages.

Could I ask for a few volunteers in s.e.t.a or s.e.t.b to explain to Mr.
Autiokari that this is not only possible but likely, and not just in "exotic"
systems: it is how ***ALL*** consumer and most professional video cameras
work? I have tried repeatedly in private e-mail and public posts to
explain, to no avail. A short note like this ought to suffice:

Dear Mr. Autiokari,

In every consumer video camera, and most professional video cameras, there
is nonlinear analog signal conditioning between the CCD output and the AD
converter input.

Certain high-end professional studio video cameras have no nonlinear
analog signal conditioning between the CCD output and the AD converter
input; in these cameras, the output of the CCD has more than 8 bits, and
nonlinear processing is performed digitally.

Alan Roberts

Feb 26, 1998

Charles Poynton (poy...@poynton.com) wrote:
: In article <34f637f...@news.clinet.fi>, tim...@clinet.fi (Timo
: Autiokari) wrote:
:
: > It is also possible (unlikely but possible) that in some very specific camera
: > there is a non-linear analog signal conditioning between the CCD output and
: > the AD converter input. If Mr. Poynton's pages are targeted for such exotic
: > systems then in my opinion it would be nice if he could indicate this on
: > these pages.
:
: Could I ask for a few volunteers in s.e.t.a or s.e.t.b to explain to Mr.
: Autiokari that this is not only possible but likely, and not just in "exotic"
: systems: it is how ***ALL*** consumer and most professional video cameras
: work? I have tried repeatedly in private e-mail and public posts to
: explain, to no avail. A short note like this ought to suffice:

I did that yesterday, recommending that he look at manufacturers' data sheets
on real cameras. He hasn't responded to me yet. He should do some reading if
he's going to try to catch up with my 30 years in the camera business, let
alone your experience :-)

--
******* Alan Roberts ******* BBC Research & Development Department *******
* My views, not necessarily Auntie's, but they might be, you never know. *
**************************************************************************

Timo Autiokari

Feb 26, 1998

On Thu, 26 Feb 1998 02:04:45 -0500, poy...@poynton.com (Charles Poynton) wrote:

>Today, only a few CCD devices have integral A-to-D converters.
>(Soon, many will, but solutions to the 8-bit linear light problem must
>first be found.)

No, the problem is the monitor gamma. It is not a problem for TV and video
because the images are not edited. But when the image is gamma compensated,
the image manipulation software sees the compensated (and possibly compressed)
image. Editing such an un-natural image gives poor quality. E.g. Photoshop
makes it easy to edit gamma compensated images (this is why it has the two
gamma settings). It shows the image properly on the monitor, but the data is
kept in the gamma space, and the problem is that image editing is done to the
gamma compensated image data. So Photoshop effectively hides the appearance of
the actual compensated data. To get a feeling for what the compensated data
actually looks like, just apply a gamma of, say, 1/2.0 to a decent image.

>Video cameras invariably take the second approach,

My concern is not the video, it is digital photographic imaging.

>A very low-end, cheap scanner might get away with no analog processing - just
>a CCD and an 8-bit ADC. (I'm not certain whether this is done even in
>cheap commercial units; I've never taken a really cheap unit apart. ...)

So, you are not certain. Most of the 'really cheap' digital cameras have the 8
bit/color CCD, see e.g. http://plugin.com/dcg2.html . Are you similarly
uncertain about them too? There really are no non-linear analog amplifiers in
them. Non-linear analog amplifiers are expensive and they eat a lot of current
(such amplifiers need to have a stable ambient temperature in order to be
accurate, so small miniature ovens are usually used to achieve this; similar
ovens are used for accurate frequency generation in counters and function
generators, to keep the crystal stable).

>Timo suggested (in private e-mail) that the images in the following three
>cases would appear exactly identical, assuming 8 bits per color component:

No, I did not say that. In my private e-mail I said:

"In the below three cases the image will appear *exactly* the same. There will
be no differences at all (still considering the 8bit/color CCD):"

What a master you are in the art of twisting words. You then go on and say:

>In decent viewing conditions, with a video camera or a decent
>scanner, the images will _not_ appear exactly the same. In
>case 1, the dark shades will exhibit banding; in case 3 they
>will not.

So here you change the "8bit/color CCD" into "a video camera or a decent
scanner". Again, my concern is not the video. My concern is cameras and
scanners in the area of digital photographic imaging. Maybe to you a "decent
scanner" is only a 12 bit/color scanner; many people do have 'cheap' 8
bit/color scanners.

May I please suggest that you place a warning on your FAQ pages like:
Not suitable for really cheap 8bit/color digital cameras nor for scanners that
are not decent. Or simply: Suitable only for video cameras and only for
displaying images.

Timo Autiokari

Timo Autiokari

Feb 26, 1998

On Thu, 26 Feb 1998 09:22:56 -0500, poy...@poynton.com (Charles Poynton) wrote:

>In every consumer video camera, and most professional video cameras, there
>is nonlinear analog signal conditioning between the CCD output and the AD
>converter input.

Really, I'm not concerned about the video. I have actually said that for video
the gamma compensation is good. You seem to have trouble handling the digital
photographic imaging and image editing aspect, as you avoid mentioning them in
your on-line documents as well.

The problem with your FAQs is that they are being applied to digital
photographic imaging, where the tool is the digital camera. It is different
from the video cameras.

Here is a simple way to see if there is non-linear analog signal conditioning
in the camera:

-open in Photoshop any decently exposed image that was acquired using an
8bit/color CCD device, with a gamma setting other than 1.0 in the acquire
module (in case there is such a setting).
-choose Image/Histogram.
-in Photoshop the Luminosity channel is smoothed, so select red, green or
blue from the dropdown box.

Now, if you do not see gaps in the histogram, then there is non-linear analog
signal conditioning in the camera or the scanner. But if you do see the gaps,
then the data has just been modified by the software of the camera or scanner.
To verify, open a couple of other images and do the same, to see that the gaps
appear generally in the same places.
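The effect is easy to reproduce synthetically; a short Python sketch, assuming
a plain 2.2 power-law mapping applied at 8 bits:

    import numpy as np

    codes = np.arange(256)                                # 8-bit input codes
    mapped = np.round(255 * (codes / 255.0) ** (1 / 2.2)) # 8-bit gamma map

    # 256 inputs cannot fill 256 output bins once the map is nonlinear:
    print(256 - np.unique(mapped).size)     # ~70 output bins stay empty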

Timo Autiokari

Keith Jack

Feb 26, 1998

Timo Autiokari wrote in message <34f637f...@news.clinet.fi>...

>On 25 Feb 1998 23:40:19 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire)
>wrote:
>

[edit]


>(2) There are technical problems in the production of AD converters. It is
>relatively easy to produce 8 resistors onto the chip that all have accurately
>the same value. In an AD ladder it does not matter what the exact value of the
>resistors is, only the value needs to be the same for each resistor in the
>ladder. This is the very basic issue that the production of AD converters
>relies on. It becomes increasingly difficult to produce more than 8 resistors
>that have exactly the same value. This is the reason why higher bit-depth
>converters are more expensive and that there are not many above 16-bit
>accuracy. On the other hand producing 8 resistors that each have a different
>value, but so that these values accurately follow some non-linear function,
>would be *very* difficult. There is no technology on the horizon that could
>make this approach feasible; it is much easier to produce a larger number of
>linearly coded bits than the equivalent number of non-linearly compressed bits.

Actually, at the last company I worked at, we developed an ADC that
was 8-bit flash (255 resistors), and designed to perform a nonlinear
function. It was just as simple to make as a standard ADC (the design
change took less than an hour), and was designed specifically for
interfacing to CCDs. Amazing what you can do when you have a physicist
available -- a simple experiment on his kitchen table proved the concept
and the first silicon worked great.


Michael McGuire

Feb 27, 1998

: On 25 Feb 1998 23:40:19 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) wrote:

: >A/D convertors in cameras and scanners are in fact external to the CCD's--not
: >on the same chip, and thus accessible to adjustment by other than the maker of
: >the CCD.

.....
: >values in the divider are all equal then you get linear output. But they could
: >just as well be a power law sequence for a power law relationship of output to
: >input or whatever function you like.

: << A well set up CCD--probably needs to be cooled--can put out 14 bit data
: << where all the noise intrinsic to the CCD is in the lowest order bit. CCD's
: << respond linearly to light intensity so 14 bits amounts to 14 doublings of
: << light intensity which is to say 14 stops. ...
: << Mike
: << --
: << Michael McGuire Hewlett Packard Laboratories

: So please tell me, what are you trying to say now?

------------------------------------------------------------------------------
Good juvenile lawyering, Timo, but irrelevant to the subject at hand. The
question there was about the possible dynamic range of CCD's, and not about the
details of the A/D conversion or output encoding. Expressed in linear bits
that's what can be achieved. I was not anticipating cross examination by you,
or I would have pointed out that the 14 linear bits could be encoded with no
perceptual loss with fewer bits with the appropriate non-linear function, as
all the rest of the world knowledgeable about this subject has been trying to
show you.
--------------------------------------------------------------------------------

: In your reply you use the wording "they could just as well be" in your
: illusion: "If the resistor values in the divider are all equal then you get
: linear output. But they could just as well be a power law sequence for a power
: law relationship of output to input or whatever function you like."

: There are two good reasons why the coding is linear in an AD converter:

: (1) It would be rather foolish for the manufacturer of the converter to create
: an AD converter that is very accurate in design, so that it can detect a very
: small change at one end of the range, and then make the rest of the range
: much looser.

--------------------------------------------------------------------------------
But, stripping away your pejorative verbiage, this is exactly the description
of a possible non-linear A/D, small steps at one end of the scale and
progressively larger to the other end. Obviously such an A/D would be designed
for a particular purpose and not offered for general purpose uses. An
alternative general purpose possibility would be a programmable A/D.
--------------------------------------------------------------------------------

: (2) There are technical problems in the production of AD converters. It is
: relatively easy to produce 8 resistors onto the chip that all have accurately
: the same value. In an AD ladder it does not matter what the exact value of the
: resistors is, only the value needs to be the same for each resistor in the
: ladder. This is the very basic issue that the production of AD converters
: relies on. It becomes increasingly difficult to produce more than 8 resistors
: that have exactly the same value. This is the reason why higher bit-depth
: converters are more expensive and that there are not many above 16-bit
: accuracy. On the other hand producing 8 resistors that each have a different
: value, but so that these values accurately follow some non-linear function,
: would be *very* difficult. There is no technology on the horizon that could
: make this approach feasible; it is much easier to produce a larger number of
: linearly coded bits than the equivalent number of non-linearly compressed bits.

-----------------------------------------------------------------------------
But as other posters to this thread have remarked, non-linear A/D's have been
made for and used in high end video cameras. You have conceded below my first
point of my original post, that in cameras and scanners, A/D's are not usually
combined with CCD's--apparently there are a few counter examples. Here we see
my other point that they need not have a linear output. But in the overall
context of this thread, it really doesn't matter. If a linear A/D has
sufficient dynamic range and a step size less than the noise level of the CCD,
then the transformation to non-linear encoding can be done digitally with
no perceptual loss.
------------------------------------------------------------------------------

: Below are links to AD converter pages of Analog Devices Inc., National
: Semiconductor Corp., and Fujitsu Microelectronic Inc.; if anyone is interested,
: please see the specifications. All the converters are linear. It is easiest to
: just locate the *nonlinearity error* or the *linearity* spec.

------------------------------------------------------------------------------
It is completely unsurprising and irrelevant to this discussion that general
purpose A/D's are linear.
------------------------------------------------------------------------------

: If anyone knows on-line specifications of CCD devices could you please post the
: links.

-------------------------------------------------------------------------------
Try digging in the web pages for Sony, Toshiba, or Philips. They all make them.
-------------------------------------------------------------------------------

: And yes, there are both internal and external AD conversions with CCD imaging
: systems. It does not change the fact that the conversion happens linearly,
: inside an integrated circuit where the coding is in the form of physical
: resistor elements on the chip.

----------------------------------------------------------------------------
Irrelevant. I can write programs for a digital computer that compute values
of both linear and non-linear functions. Are the bits that represent these
values linear or non-linear? Non-linear of course, they are either on or off.
Depending on the resistor values, I can go A/D linearly or non-linearly, but
of course the resistors are linear--so what?
-----------------------------------------------------------------------------

: It is also possible (unlikely but possible) that in some very specific camera
: there is a non-linear analog signal conditioning between the CCD output and the
: AD converter input. If Mr. Poynton's pages are targeted for such exotic systems
: then in my opinion it would be nice if he could indicate this on these
: pages.

------------------------------------------------------------------------------
No, the understanding that everyone but you seems to have is that encoding of
images for the same perceptual quality--minimization of contouring etc.--can be
achieved with noticeably fewer bits with the right non-linear encoding than
with linear.
------------------------------------------------------------------------------

Mike
--
Michael McGuire Hewlett Packard Laboratories

Alan Roberts

Feb 27, 1998

Timo Autiokari (tim...@clinet.fi) wrote:

: It is also possible (unlikely but possible) that in some very specific camera
: there is a non-linear analog signal conditioning between the CCD output and the
: AD converter input. If Mr. Poynton's pages are targeted for such exotic systems
: then in my opinion it would be nice if he could indicate this on these
: pages.

You should check your facts more carefully. Most TV cameras sold to
broadcasters have a non-linear circuit between the ccd and the ADC. It
isn't a standard gamma circuit, that's done in the digits, the "pre-gamma"
circuit is used to compress the video signal in a known way, including
knee function to handle high overloads. The precise nature of this curve
is used in the digital processing to notionally recover the linear signal
before applying the required non-linear processing. Cameras are really a
lot more complex than you seem to imagine. Have you actually looked at
one? I've been doing exactly that for 30 years now, and even some domestic
camcorders are processed in this way, hardly exotic.
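For what it's worth, the knee part is simple to sketch; the curve below is an
illustrative shape only (real cameras choose their own knee point and slope):

    def knee(v, knee_point=0.8, slope=0.25):
        # Linear below the knee; compressed above it so that highlight
        # overloads still fit into the available signal range.
        if v <= knee_point:
            return v
        return knee_point + (v - knee_point) * slope

    for v in (0.5, 0.9, 1.5, 2.0):        # values above 1.0 are overloads
        print(v, round(knee(v), 3))       # 2.0 -> 1.1: overload preserved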

Timo Autiokari

Feb 27, 1998

On 27 Feb 1998 02:58:02 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) wrote:

>Good juvenile lawyering, Timo, but irrelevant to the subject at hand.

In the article:
http://x4.dejanews.com/getdoc.xp?AN=296741174&CONTEXT=888477590.22675835&hitnum=1
you said:

"CCD's respond linearly to light intensity"

That is not irrelevant to this subject.

>I was not anticipating cross examination by you, or I would have pointed
>out that the 14 linear bits could be encoded with no perceptual loss with
>fewer bits with the appropriate non-linear function, as all the rest of the
>world knowledgeable about this subject has been trying to show you.

I have never said that it couldn't be done. I *have* said that it can be done.
And I have said it is good for TV and video.

Non-linear functions in general have the ability to compress data. This is
rather basic knowledge and has been widely used e.g. in audio.

What I have been saying is that even if it is good in TV and video to encode
the gamma correction and bit-depth compression into the video signal, it is not
at all good for digital photographic imaging.

It should not be difficult to see that when the acquire device does the gamma
compensation, then the image editor will see the compensated and compressed
"signal" that we, in digital photographic imaging, call the _image_.

In TV and video the signal (image) is suitable for the eye only after the
monitor does the gamma.

Now, Mr. Poynton's humbug about the perception misleads people in digital
photographic imaging, badly.

The gamma compensation in TV and video cameras is done because the effective
gamma from the scene to the eye needs to be linear (or very near to linear).

In TV this has been done from the early days of television in the TV camera. It
was so chosen in the beginning, so that a difficult and expensive pure analog
non-linear correction was not needed in the receivers.

There is no problem in this since no-one is looking at the signal itself. They
look at the picture on the TV, and the CRT first applies the gamma. Only after
that is the signal proper, natural, for the eye again.

So, all the TV sets have the gamma; therefore the gamma compensation must be
done, today and in the future also, in the TV and video camera (not necessarily
in the camera, but before sending the signal).

Now, technology has advanced so that the gamma of the television can be
actually useful. The best broadcast quality TV cameras can now acquire 10 bit
or 12 bit data, and because of the non-linearity of the monitor we get another
benefit for free, bit-depth compression.

The problem is that Mr. Poynton says that the "non-linear coding" is done
*because* it gives better image quality. This is not true, stated this way.

The better perception is there for TV and video, but it is the *consequence* of
the improved technology. And for TV and video it is a free benefit, due to the
unavoidable gamma correction that must be done anyway.

The benefit however is only there for TV and video, since there it is a
question of transmission only. The transmitted signal is gamma compensated and
it can be bit-depth compressed.

If the TV signal that is on the transmission path is converted into an image
without applying the same gamma that the CRT applies, the image cannot be
edited properly, because it is in a heavily un-natural form. But this is what
Mr Poynton is suggesting to everyone. And it misleads people in digital
photographic imaging badly.

Some image manipulation software, like Photoshop, makes it possible to edit
such gamma compensated images. This is why it has the two gamma settings. It
shows the image properly, but the image data is still in gamma space, and the
image manipulation operations are done to that data. To see what the data looks
like, just apply a gamma of 1/2.0 to an image that shows properly. Then think
what the various image editing operations might do with that.

>But, stripping away your pejorative verbiage

>But as other posters to this thread have remarked, non-linear A/D's have been
>made for and used in high end video cameras.

But, but, but, maybe he is kind enough to reveal the type code of this device.
I have worked as a component specialist for 12 years now and have not seen any
info about non-linear AD converters. I'm quite surprised that Mr. Michael
McGuire from Hewlett-Packard Laboratories believes that such a thing exists.

>If a linear A/D has sufficient dynamic range and a step size less than
>the noise level of the CCD, then the transformation to non-linear encoding
>can be done digitally with no perceptual loss.

The above is only partially true. You also need to say *where* the signal (or
data) is perceived. If it is perceived on TV or on an uncalibrated monitor,
then this is so because the CRT makes the image proper again by applying the
gamma. And again, the image editor perceives the signal (data) before the
monitor, and some rather heavy perceivable problems exist there.

Timo Autiokari

Timo Autiokari

Feb 27, 1998

On 27 Feb 1998 09:26:29 GMT, al...@rd.bbc.co.uk (Alan Roberts) wrote:

>You should check your facts more carefully. Most TV cameras sold to
>broadcasters have a non-linear circuit between the ccd and the ADC.

Firstly, TV and video cameras do not necessarily need an ADC (Analog to
Digital Converter) at all. You should know, considering your 30 years of
experience. There were no ADCs 30 years ago, but there was the television.

TV and video cameras supply the video signal; this signal is analog, not a
digital one. However, all modern TV and video cameras do have an ADC, even if
it is not obvious.

>It isn't a standard gamma circuit, that's done in the digits, the "pre-gamma"
>circuit is used to compress the video signal in a known way, including
>knee function to handle high overloads.

It was Mr Poynton who insists that there is an analog non-linear conversion
between CCD and ADC, and the context was that it would do the gamma
compensation.

Now such an *analog* amplifier that does the *gamma* compensation is a very
difficult one. If there are such devices in cameras today, they are rare.

I do know that there are pre-conditioning circuits. They are piecewise linear,
*not* non-linear. Then there are signal processors for this purpose, and even
if they seem to be analog devices (so that they have an analog input for the
CCD and analog video outputs) they actually have a flash ADC inside them.
Because of the speed that is needed with video they are often only 6 bit or 8
bit devices. There are fast 10 bit and 12 bit flash converters but they are
very expensive. Such are only used in broadcast quality systems. Then there are
other rather ingenious methods of achieving the "no missing codes" that is
often seen on the specs.

Do you know what happens to the CCD signal (information) when it goes through
such an analog-digital-analog device (or a chain of devices)? Have you ever
seen a spec of those devices? Their overall error is around 1% for a high
quality device. For lower grade devices the error is usually expressed (for
some reason) in decibels, and a value of 1 dB seems to be quite common. That
translates to a 10% overall error.

1% is equal to 2.5 levels in linearly coded light and 10% is equal to 25
levels. Now compare: Mr Poynton is worried about the "perception" issues below
1 level (0.4%) of linearly coded light, while this small portion of his signal
path generates 1% errors in a broadcast quality system and some 10% errors in
a consumer grade system.

>The precise nature of this curve is used in the digital processing to notionally
>recover the linear signal before applying the required non-linear processing.

Yes. As I explained above. And it generates large errors.

>Cameras are really a lot more complex than you seem to imagine. Have you
>actually looked at one? I've been doing exactly that for 30 years now, and
>even some domestic camcorders are processed in this way, hardly exotic.

Again: an *analog* amplifier that does the *gamma* compensation is a very
difficult one. If there are such devices, they are rare. That was the context
given by Mr. Poynton.

Congratulations, you have got one thing right: as you say, "even some domestic
camcorders are processed in this way".

_Some_ domestic camcorders do. Some others do not.

But Mr Poynton says his pages are suitable for anyone.

And the issue is not about TV or video cameras. My question was whether Mr.
Poynton's pages are applicable to digital photographic imaging, where we use
digital cameras. There are such things also; even if they are cameras, they are
neither TV nor video cameras.

And the digital cameras do not have video circuitry inside them. If they
provide gamma compensated images, then it is done by software. If the ADC in
the camera is 8-bit, then nothing is gained, only the images are damaged.

In my other message in this thread there is a simple procedure to see if the
camera provides such gamma compensated images, if anyone is interested in
experimenting a bit.

Timo Autiokari

Ed Ellers

Feb 27, 1998

Alan Roberts wrote:

"Cameras are really a lot more complex than you seem to imagine."

At least they are if they're any good. :-)

Stephen H. Westin

Feb 28, 1998

tim...@clinet.fi (Timo Autiokari) writes:

<snip>

> Again: an *analog* amplifier that does the *gamma* compensation is a very
> difficult one. If there are such devices, they are rare. That was the context
> given by Mr. Poynton.

Well, thirty years ago, these amplifiers were ubiquitous; do you
really think that broadcast TV cameras have been digital since the
'40s? After all, they all have built-in gamma correction.

<snip>

> And the digital cameras do not have video circuitry inside them. If they
> provide gamma compensated images, then it is done by software. If the ADC in
> the camera is 8-bit, then nothing is gained, only the images are damaged.

You keep saying this; where did you find this out?

<snip>

I still find it astonishing that you keep on arguing with people of
the stature of Mr. Roberts and Mr. Poynton, with no authoritative
references to back you up. Roberts and Poynton can serve as their
*own* authoritative references.

JG Smith

Mar 1, 1998

tim...@clinet.fi (Timo Autiokari) wrote:

>Dear Mr. Poynton,
>
>What is the purpose of your GammaFAQ? ... ... etc., etc.
>

This is a very amusing thread. Judging from the commentary, apparently
neither side understands what the other is talking about.

Rather typical of "appointed" authority (as compared to "recognized"
authority) in the academic field. Leaves one to wonder if either knows
what he's talking about ... hmmm!

Alan Roberts

Mar 2, 1998

Timo Autiokari (tim...@clinet.fi) wrote:

: On 27 Feb 1998 09:26:29 GMT, al...@rd.bbc.co.uk (Alan Roberts) wrote:
:
: >You should check your facts more carefully. Most TV cameras sold to
: >broadcasters have a non-linear circuit between the ccd and the ADC.
:
: Firstly, TV and video cameras do not necessarily need an ADC (Analog to
: Digital Converter) at all. You should know, considering your 30 years of
: experience. There were no ADCs 30 years ago, but there was the television.
:
: TV and video cameras supply the video signal; this signal is analog, not a
: digital one. However, all modern TV and video cameras do have an ADC, even if
: it is not obvious.

Oh, dear, I say to you again, check your facts. You are wrong. Quite wrong.
This is getting rather tedious; do you ever listen to anyone?
TV cameras come in all varieties, as I have been trying to tell you; you
cannot make blanket statements about them. They start with VHS camcorders
and end at studio cameras, each of which may be analogue or digital at any
stage of the processing. The latest breed of DOMESTIC cameras are digital,
but they still have analogue preprocessing so that the digital processing
is easier. If you can't understand that, I suggest you go away and read
the manuals on some of them. That's how I got to understand it, why can't you?

: It was Mr Poynton who insists that there is an analog non-linear conversion
: between CCD and ADC, and the context was that it would do the gamma
: compensation.
:
: Now such an *analog* amplifier that does the *gamma* compensation is a very
: difficult one. If there are such devices in cameras today, they are rare.

Nonsense, it's very easy. EVERY TV camera has one. Why not check your facts;
is that so hard to do?

: I do know that there are pre-conditioning circuits. They are piecewise linear,
: *not* non-linear.

Again, nonsense, they are not piecewise linear. Check the circuit diagrams.

: Then there are signal processors for this purpose, and even if they seem to be
: analog devices (so that they have an analog input for the CCD and analog video
: outputs) they actually have a flash ADC inside them. Because of the speed that
: is needed with video they are often only 6 bit or 8 bit devices. There are
: fast 10 bit and 12 bit flash converters but they are very expensive. Such are
: only used in broadcast quality systems. Then there are other rather ingenious
: methods of achieving the "no missing codes" that is often seen on the specs.

Not so. Check your facts. There are truly analogue circuits that do this,
and they are in common usage in TV cameras. Your depth of misunderstanding
is breathtaking.

: >Cameras are really a lot more complex than you seem to imagine. Have you
: >actually looked at one? I've been doing exactly that for 30 years now, and
: >even some domestic camcorders are processed in this way, hardly exotic.
:
: Again: an *analog* amplifier that does the *gamma* compensation is a very
: difficult one. If there are such devices, they are rare. That was the context
: given by Mr. Poynton.

Nonsense, when will you actually try to find out the truth by actually
looking at real cameras instead of inventing difficulties?

Alan Roberts

Mar 2, 1998

Stephen H. Westin (westin*nos...@graphics.cornell.edu) wrote:

: I still find it astonishing that you keep on arguing with people of
: the stature of Mr. Roberts and Mr. Poynton, with no authoritative
: references to back you up. Roberts and Poynton can serve as their
: *own* authoritative references.

Thanks Stephen, it makes a change to be recognised.

I intend to retreat from this thread now, clearly Timo has made his mind up
how the universe works, and will not be dissuaded.

Walter Hafner

Mar 2, 1998

tim...@clinet.fi (Timo Autiokari) writes:

> If anyone knows on-line specifications of CCD devices could you please post the
> links.

Sure. Have a look at:

http://www.photomet.com/

I think it's a very good page! CCD technology explained to the max.

Under http://www.photomet.com/ref/refgain.html the "gain" is described:
Photometrics cameras are indeed linear by default (this is an exception), but
the gain can be changed up to 4 in hardware (at least that's my interpretation
of the page).

The term "linearity" (http://www.photomet.com/ref/reflin.html) on the
photometrics pages refer to a different concept. I qoute:

: Hence, non-linearity is a measure of
: the deviation from the following relationship:
: Digital Signal = Constant x Amount of Incident Light

-Walter

--
Walter Hafner_____________________________ haf...@forwiss.tu-muenchen.de
<A href=http://www.forwiss.tu-muenchen.de/~hafner/>*CLICK*</A>
The best observation I can make is that the BSD Daemon logo is _much_
cooler than that Penguin :-) (Donald Whiteside)

Dave Martindale

Mar 2, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>It is also possible (unlikely but possible) that in some very specific camera
>there is a non-linear analog signal conditioning between the CCD output and the
>AD converter input. If Mr. Poynton's pages are targeted for such exotic systems
>then in my opinion it would be nice if he could indicate this on these
>pages.

Is a frame grabber digitizing the output of an analog video camera an
"exotic system"? The video camera is required to apply a non-linear
transform between the output of the CCD and the video voltage representing
light intensity, since this non-linear transformation is part of the
specification of the video signal for both NTSC and PAL. (The transform
is called "gamma correction"). The frame grabber simply does A/D conversion
on the video signal, so the non-linear transfer characteristic is built
into the digital sample values as well.

In addition, flatbed scanners and at least some digital cameras perform
their own sort of gamma correction on the voltages coming from the
CCD. They have to, or the images would look poor on most PCs.

>And yes, there are both internal and external AD conversions with CCD imaging
>systems. It does not change the fact that the conversion happens linearly,
>inside an integrated circuit where the coding is in the from of physical
>resistor elements on the chip.

But how the A/D conversion itself is performed is essentially irrelevant.
What you *care* about is how the digital sample values are related to
the original light intensity seen by the CCD. And it is a simple,
measurable fact of life that many image sensors have some sort of
nonlinear transform built in. In some cases, it is a nonlinear analog
circuit between the CCD and the A/D. In other cases, the A/D is directly
digitizing the CCD output to 12 or 14 bits, but this data is passed through
a lookup table to produce 8 or 10 bits of data to be stored in the image
file. The nonlinear transfer function is performed by the lookup table.

Timo, I'll happily believe that if you build a CCD camera the output
will be linearly proportional to intensity. But that just isn't the way
most real cameras and scanners are built, for a variety of good reasons.

Dave

Dave Martindale

Mar 2, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>Here is a simple way to see if there is non-linear analog signal conditioning
>in the camera:
>
>-open in Photoshop any decently exposed image that was acquired using an
>8bit/color CCD device, with a gamma setting other than 1.0 in the acquire
>module (in case there is such a setting).
>-choose Image/Histogram.
>-in Photoshop the Luminosity channel is smoothed, so select red, green or
>blue from the dropdown box.
>
>Now, if you do not see gaps in the histogram, then there is non-linear analog
>signal conditioning in the camera or the scanner. But if you do see the gaps,
>then the data has just been modified by the software of the camera or scanner.
>To verify, open a couple of other images and do the same, to see that the gaps
>appear generally in the same places.

This test is useless. It will tell you if an image has been modified
by a particular sort of lookup table that is altering the gamma by mapping
8-bit input values to 8-bit output values. But if the processing has
been done with samples that are wider than this, you won't see the
discontinuities in the histogram.

For example, I have worked with a particular image in the following
sequence:

- the image was photographed on film, with a gamma of 0.6 relative to
the original scene.

- the film was digitized with a CCD and a 14-bit A/D converter

- the output of the A/D converter was run through a lookup table to
produce a 10-bit value proportional to film density, and these
values were stored in image file #1. (This is a nonlinear
transform.)

- this file was read in by another program which converted from negative
density space back into an approximation of the light intensity in the
original scene. This new image was written with 16 bits per sample.
(Another nonlinear transform)

- File #2 was read into Photoshop in 16-bit mode. A gamma correction
factor was applied using the Levels control panel. In addition,
a small fraction of the 16-bit intensity space is expanded to fill
the whole range (equivalent to about a 3 f-stop exposure change).

- finally, the image was converted to 8 bits per sample and written out
again.

If you were to look at a histogram of the 10-bit data or the 16-bit
data with a tool that showed you all 1024 or 65536 "bins", you would
see discontinuities caused by using lookup tables for the nonlinear
transforms. But viewed in Photoshop, with its histograms that always
have 256 bins, you won't see anything. And the final resulting image,
despite having undergone *three* nonlinear transforms, has a nice
smooth histogram. This is because the nonlinear operations were done
with enough bits that the quantization errors remain well below the
size of the errors caused by the 8-bit output format itself.
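A quick way to demonstrate this, sketched in Python with NumPy (the gradient,
the gamma value, and the bin counts are illustrative assumptions, not
measurements from any particular device):

    import numpy as np

    # A smooth gradient, 0..1 in linear light
    lin = np.linspace(0.0, 1.0, 100000)

    # Badly designed path: quantize to 8 bits first, then gamma-correct
    # through an 8-bit-to-8-bit lookup table
    lut = np.round(255.0 * (np.arange(256) / 255.0) ** (1 / 2.2)).astype(int)
    out_lut = lut[np.round(lin * 255.0).astype(int)]

    # Well designed path: gamma-correct at full precision, quantize last
    out_wide = np.round(255.0 * lin ** (1 / 2.2)).astype(int)

    empty_bins = lambda a: 256 - np.unique(a).size
    # The 8-bit LUT leaves dozens of empty histogram bins; the wide path none
    print(empty_bins(out_lut), empty_bins(out_wide))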

In fact, I'd argue that *all* image processing should be done with
more than 8 bits, using 8 bits per sample only for storing final images
that will not undergo further processing.

Dave

Dave Martindale

Mar 2, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>Non-linear functions in general have the ability to compress data. This is
>rather basic knowledge and has been widely used e.g. in audio.
>
>What I have been saying is that even if it is good in TV and video to encode the
>gamma correction and bith-depth compression into the video signal, it is not at
>all good for digital photographic imaging.

Gamma correction in video turns out to be good for two things in video.
The first one, as you note, is to compensate for the nonlinearity of the
CRT. This compensation has to be done somewhere, and doing it in the
camera (of which there are few) is cheaper than doing it in the TV set
(of which there are many).

But it *also* turns out that gamma correction is closely related to the
nonlinear way that the human visual system perceives intensity. Gamma
correction causes more of the video voltage range (or code space, in
digital video) to be used for the darker portions of the picture and
less for the brighter parts of the picture, which matches the eye's
response. The net effect is that we can *usually* represent an image
without banding artifacts (Mach bands) using only 8 bits per sample
with gamma correction, while if the samples were linearly proportional
to intensity we'd need about 12 bits for the same performance.

And this second advantage has nothing at all to do with video, or with
the ultimate display device. It simply says that nonlinear representations
need fewer bits for the same quality, or give better quality with the same
number of bits, than a linear representation.

In addition, there is nothing wrong with this. Nowhere is there a stone
tablet with a commandment saying that the digital sample values in an image
file should be linearly related to intensity. And, in fact, few real
images really use a linear encoding for storing pixels. I do it sometimes,
but I'm usually careful to use about 16 bits per sample to avoid creating
artifacts. Sometimes I use 32-bit floating point for sample values.
It works great! But it's not efficient for storage.
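One can put rough numbers on this trade-off; here is a sketch in plain Python,
assuming a 100:1 useful brightness range and a pure power-law encoding (these
assumptions are mine, for illustration):

    def worst_step_ratio(bits, gamma, darkest=0.01):
        # Intensity ratio between adjacent codes at the dark end of the
        # range, for samples stored as intensity ** (1/gamma)
        n = 2 ** bits - 1
        c = max(1, round(n * darkest ** (1 / gamma)))
        return ((c + 1) / c) ** gamma

    print(worst_step_ratio(8, 1.0))    # 8-bit linear: ~1.33, a visible step
    print(worst_step_ratio(12, 1.0))   # 12-bit linear: ~1.02, invisible
    print(worst_step_ratio(8, 2.2))    # 8-bit gamma 2.2: ~1.07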

>Now, Mr. Poynton's humbug about perception misleads people in digital
>photographic imaging, badly.

No he doesn't. The *only* place where it is common for the sample values
stored in image files to be linearly proportional to intensity is in
computer graphics. And that's mostly because the graphics people don't
know any better.

>The problem is that Mr. Poynton says that the "non-linear coding" is done
>*because* it gives better image quality. This is not true, this way.

No, it *is* true. See above.

>If the TV signal that is on the transmission path is converted into an image
>without applying the same gamma that the CRT applies, the image cannot be
>edited properly, because it is in a heavily unnatural form.

What do you mean by "cannot be edited properly", and "un-natural"?

Do you mean that pixel values are no longer linearly related to light
intensity, and thus standard linear image processing operations like
blur, sharpen, and resize will not give the correct answer when applied
to these nonlinear images? If so, you're right.

There are two approaches for dealing with this. One is to realize that
the values in the file are just numbers, not light intensity, and you
can convert the numbers to (linear) intensity any time you want. So you
can take your 8-bit gamma-corrected image, convert it to a linear form
(remember to use at least 12 bits to avoid artifacts), do your image
processing operation, then re-convert the pixel values to gamma-corrected
8 bit samples. This adds only a little bit of roundoff error, and the
whole process has a lot less error than you would get by converting the
image to 8 bit linear and working with it in that space. (Though using
12 bit linear everywhere would be slightly better yet).
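A minimal sketch of that round trip in Python/NumPy (the gamma value and the
3x3 box blur are placeholder assumptions, not anyone's actual pipeline):

    import numpy as np

    GAMMA = 2.2  # assumed encoding gamma of the 8-bit file

    def to_linear(img8):
        # 8-bit gamma-corrected codes -> linear-light floats in 0..1
        return (img8.astype(np.float64) / 255.0) ** GAMMA

    def to_coded(lin):
        # linear-light floats -> 8-bit gamma-corrected codes
        coded = 255.0 * np.clip(lin, 0.0, 1.0) ** (1.0 / GAMMA)
        return np.round(coded).astype(np.uint8)

    def box_blur3(lin):
        # a stand-in "linear" operation: 3x3 box blur in linear light
        acc = np.zeros_like(lin)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += np.roll(np.roll(lin, dy, axis=0), dx, axis=1)
        return acc / 9.0

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    out = to_coded(box_blur3(to_linear(img)))  # blur done in linear space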

The other approach is to apply linear image processing operations to
the 8-bit pixels even though it is mathematically incorrect. This is
commonly done in video. When the process is done within a control
loop, with a human operator looking at the result and modifying the
parameters until they see what they like, this works quite well.
It doesn't matter much if the math is wrong, since nobody is making
measurements from the images - they just want something that looks good.

>Some image manipulation software like Photoshop makes it possible to edit such
>gamma-compensated images. This is why it has the two gamma settings. It shows
>the image properly, but the image data is still in gamma space and the image
>manipulation operations are done to that data.

Yes, this is approach #2 above.

>To see what the data looks like,
>just apply a gamma of 1/2.0 to an image that shows properly. Then think what
>the various image editing operations might do with that.

No, the data is intended to be viewed just as you see it. When you apply
the 0.5 gamma to the image, *you* are distorting the image to something
that it is not intended to be.

Timo, how old are you? Forgive me if this doesn't apply to you, but
you sound exactly like an undergrad who has always assumed that sample
values in image files should be linearly related to intensity because
that's the mathematically *right* way to do it. And you defend with
great vigour the purity of your conception of how images *ought* to
be stored in data files.

Sadly, it just isn't this simple. If you used floating-point for
storage and computation, a linear representation would be natural.
But real physical devices have to be built with no more bits than
necessary in order to keep the costs down, and nonlinear encoding
of pixels provides a way to maintain quality with fewer bits.
There are good reasons for this, mostly because our vision is
itself nonlinear. As you learn about perception and image processing,
you will see that there is no perfect way to do something. There are
just a bunch of tradeoffs, and all you can do is make intelligent ones.

In some cases, we are stuck with tradeoffs that made sense at the time
but no longer do. (e.g. using 20% of the video bandwidth to transmit
non-image information). In other cases, the original design was done
by people who really did understand the tradeoffs, and they are still
valid today.

Dave

Michael McGuire

Mar 3, 1998

...... snipping Timo's usual hymn to linearity

: >But, stripping away your pejorative verbiage
: >But as other posters to this thread have remarked, non-linear A/D's have been
: >made for and used in high end video cameras.

: But, but, but, maybe he is kind enough to reveal the type code of this device.
: I have worked as a component specialist for 12 years now and have not seen any
: info about non-linear AD converters. I'm quite surprised that Mr. Michael
: McGuire from Hewlett-Packard Laboratories believes that such a thing exists.

------------------------------------------------------------------------------
12 years a component specialist and never heard of an ASIC. That's an
Application Specific Integrated Circuit, to enlighten him. One of the
competitive advantages we have here is the ability to make our own chips. We
generally do not make them available or publish their specifications.
------------------------------------------------------------------------------

: >If a linear A/D has sufficient dynamic range and a step size less than
: >the noise level of the CCD, then the transformation to non-linear encoding
: >can be done digitally with no perceptual loss.

: The above is only partially true. You need also to say *where* the signal (or
: data) is perceived. If it is perceived on TV or on an uncalibrated monitor then
: this is so because the CRT makes the image proper again by applying the gamma.
: And again, the image editor perceives the signal (data) before the monitor and
: some rather heavy perceivable problems exist there.

-------------------------------------------------------------------------------
This last sentence is not true if one considers the most likely destinations
of the image or the most likely operations in an image editor that might cause
problems. If the destination is a CRT, then Timo apparently already agrees that
the correction for gamma = 2.2 is correct, especially if initially done at
higher bit depth. Now consider taking this to a Mac system where the
gamma = 1.8. The necessary correction from 2.2 -> 1.8 is gamma = 1.1. This is
much gentler and more accurate at 8 bits than banging it all the way from
gamma = 1.0. The other likely destination of the image is a printer. All the
printers I have tested, and that's quite a few, at least at their native dot
resolution have a response similar in shape to a CRT but usually somewhat more
radical. This is a consequence of round dots having to completely cover square
pixels to avoid gaps in solid fills. Put down dots covering 1/4 of the pixels,
but not overlapping, and you get pi/8 coverage of the paper, not 1/4. The
correction curve for a printer lies somewhere above that for a gamma = 2.2
monitor. Again, going there from being already accurately corrected for
gamma = 2.2 is going to be gentler and more accurate than going there from 1.0.
This leaves the question of what might be done in an image editor keeping in
mind these destinations for the image. Of course anything can be done, but
likely corrections to normal images are linear stretching or compression of the
tonal scale and gamma-curve-like corrections to enhance contrast in the shadows
or highlights. Any of these operations can produce contouring if applied
strongly enough, but they will do it sooner if applied to a gamma 1.0 image
which is then corrected for screen or printer destinations.
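The dot arithmetic above is easy to verify (a short Python check; the unit
square and dot geometry are as described in the text):

    import math

    # A round dot that completely covers a unit-square pixel must have the
    # square's diagonal as its diameter, so its area is pi/2, not 1.
    dot_area = math.pi * (math.sqrt(2) / 2) ** 2
    # Non-overlapping dots on 1/4 of the pixels then cover pi/8 of the paper:
    print(dot_area / 4)  # 0.3927..., noticeably more than 0.25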

I don't expect Timo will agree with much of this any more than he has with
Alan Roberts or Charles Poynton, but then he doesn't agree with anybody.

Valburg

Mar 3, 1998

To all the extremely knowledgeable gentlemen weighing in on this thread,
I say thank you! This has been very stimulating and informative.

Could one of you answer a couple of questions which are, perhaps, of
practical interest to those of us just working on still images with
commonly available tools, and not involved in R&D in this field?

The first is a repeat of a question I posed previously: Could you tell
us whether there is a way to determine the native gamma
of a particular scanner model, in order to avoid reducing the 8 bits
worth of information by adjusting the gamma or midpoint to another
setting? Or is this, perhaps, a concern of more significance in theory
than in practice; that is, perhaps the extent of mid-point adjustments
commonly practiced are gentle enough so as to make little difference in
image quality (loss of "bins" through quantization error?)?

Is there commonly available image manipulation software which allows
working at bit-depths greater than 8 and then reducing bit-depth to 8
for output or storage (as mentioned most recently in posts by Dave
Martindale)?

Thanks!

Mitch Valburg


Walter Hafner

Mar 3, 1998

Valburg <lk...@psu.edu> writes:

[snip]


> The first is a repeat of a question I posed previously: Could you tell
> us whether there is a way to determine the native gamma
> of a particular scanner model, in order to avoid reducing the 8 bits

[snap]

Related question:

Is the method of finding the gamma-factor of particular displays as
described in

http://www.povray.org/binaries/ (last paragraph)

any good?

Timo Autiokari

Mar 3, 1998

On 2 Mar 1998 09:40:48 GMT, al...@rd.bbc.co.uk (Alan Roberts) wrote:

>Stephen H. Westin (westin*nos...@graphics.cornell.edu) wrote:
>
>: I still find it astonishing that you keep on arguing with people of
>: the stature of Mr. Roberts and Mr. Poynton, with no authoritative
>: references to back you up. Roberts and Poynton can serve as their
>: *own* authoritative references.
>
>Thanks Stephen, it makes a change to be recognised.

At least the gentlemen have been authoring the colorspace-faq together:
http://www.altavista.digital.com/cgi-bin/query?pg=aq&what=web&kl=XX&q=%22Alan+Roberts%22+and+%22Charles+A.+Poynton%22&r=&d0=21%2FMar%2F86&d1=&search.x=65&search.y=8
So there is nothing special there if they both see the issue similarly.

>I intend to retreat from this thread now, clearly Timo has made his mind up
>how the universe works, and will not be dissuaded.

Not the universe. I'm just trying to explain that there is much more that needs
to be considered in imaging than just video. And what is good for TV and video
is not good for imaging.

Please see e.g. http://www-s.ti.com/sc/psheets/soca010/soca010.pdf . The title
of the Texas Instruments Application Report is "CCD Image Sensors and
Analog-to-Digital Conversion". Read it, folks, it provides some basics for this
discussion. It says that a typical CCD has a dynamic range of 60 dB, which is
equal to 10 linear bits, and that with double sampling it can be increased to
73 dB (about 12 linear bits). But often a 6-bit or 8-bit converter is used, so
there will be analog pre-conditioning before the linear AD conversion is done.
Yes, 6 or 8 linear bits, that's 48 dB or less. Because of the pre-conditioning
the gamma can be calculated into the signal using such a low digital
resolution. This is an accuracy trade-off. And analog signal processing is not
at all accurate, so imaging systems that have such analog processing are not
useful. They do provide the "no missing codes" and such, but it is more error
than actually captured data.
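The dB-to-bits conversion used here is just 20*log10(2), about 6.02 dB of
dynamic range per bit; a two-line check in Python:

    import math

    db_per_bit = 20 * math.log10(2)           # ~6.02 dB per linear bit
    print(60 / db_per_bit, 73 / db_per_bit)   # ~10 bits and ~12 bits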

The document, by the way, was written in '93 but has a '96 copyright. Today's
main improvement in video signal processing is the 12-bit converters that are
becoming available in the higher-end consumer video cameras, so the inaccurate
pre-conditioning is no longer needed.

But, the problem is that Mr. Poynton's terminology "more perceptual coding" is
misleading if applied to image editing. No one is viewing the video signal as it
appears in the middle of the transfer path.

And the whole issue is simply to convert the linear light into a
gamma-compensated signal with minimum errors, and if this is done with the
latest technology then the *result* of the gamma compensation will give a free
benefit, bit-depth compression. But Mr. Poynton calls it "more perceptual
coding".

It is "more perceptual" only after the CRT applies the gamma.

This gamma *is* good for TV and video since the CRT will UNcode the coding and
the image is then good for the eye. But if the image is to be edited then gamma
compensation is not good at all. The image will be in a "coded" state; it is
not natural and it is not good for the eye nor for image editing in any sense.
Editing "coded" images results in poor performance.

Timo Autiokari


Stephen H. Westin

Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:

<snip>

> Please see e.g.
> http://www-s.ti.com/sc/psheets/soca010/soca010.pdf. The title of the
> Texas Instruments Application Report is "CCD Image Sensors and
> Analog-to-Digital Conversion". Read it, folks, it provides some
> basics for this discussion. It says that a typical CCD has a
> dynamic range of 60 dB, which is equal to 10 linear bits, and that
> with double sampling it can be increased to 73 dB (about 12 linear bits).

And the camera I am using most at the moment has a chilled sensor, so
we see most of 12 bits out of it. So?

> But often a 6 bit or 8 bit converter is used, so there will be analog
> pre-conditioning before the linear AD conversion is done. Yes, 6 or 8 linear
> bits that's 48dB or less. Because of the pre-conditioning the gamma can be
> calculated into the signal using such a low digital resolution. This is an
> accuracy trade-off. And analog signal processing is not at all accurate so
> imaging systems that have such analog processing are not useful.

Again, "imaging systems that have such analog processing" have been
widely used over the last 60 years or so. Which makes me think that
they might just be useful. At least slightly :)

<snip>

> But, the problem is that Mr. Poynton's terminology "more perceptual
> coding" is misleading if applied to image editing. No one is viewing
> the video signal as it appears in the middle of the transfer path.

True. For most processing, you will want to linearize the signal. And
then, quite possibly, re-correct for display and storage.

> And the whole issue is simply to convert the linear light into a
> gamma-compensated signal with minimum errors, and if this is done with
> the latest technology then the *result* of the gamma compensation
> will give a free benefit, bit-depth compression. But Mr. Poynton
> calls it "more perceptual coding".

Which it is. Quantization steps are smaller for low brightness, which
is a Good Thing perceptually.

> It is "more perceptual" only after the CRT applies the gamma.

No, it's more perceptual, period. Some systems (e.g. Kodak Cineon) use
a logarithmic-based coding to achieve the same result, though it's not
correct for any CRT.

> This gamma *is* good for TV and video since the CRT will UNcode the
> coding and the image is then good for the eye. But if the image is
> to be edited then gamma compensation is not good at all. The image
> will be in "coded" state, it is not natural and it is not good for
> the eye nor for image editing in any sense.

Yes, it *is* good for the eye. Linear quantization will always, for a
given number of levels, show more quantization artifacts than a
gamma-like quantization.

Look up the CIE L*a*b* system; in an effort to model the visual
system, it uses a cube root transform on luminance. Which equates to
correction for a monitor gamma of 3.0.

> Editing "coded" images
> results in poor performance.

Except in the special case of Gaussian filter kernels; these simply
get narrower or wider as a result of gamma correction or its removal.

If you're trying to say that linear intensity is probably the domain
in which you want to process images, I don't think anyone will argue
with you.

Dave Martindale

Mar 3, 1998

westin*nos...@graphics.cornell.edu (Stephen H. Westin) writes:
>If you're trying to say that linear intensity is probably the domain
>in which you want to process images, I don't think anyone will argue
>with you.

I think the key observation in all this is that you don't have to *process*
images using the same encoding of (real number) intensity into (integer)
sample codes that you use to *store* images.

Just because Photoshop does arithmetic on whatever sample codes are stored
in the file doesn't mean that all image processing is, or should be, done
this way.

Another (but less important) observation is that if you are only going to
store 8 bits per sample, you *must not* use a linear encoding of intensity
into sample value because it will cause quantization artifacts in the
dark areas of the image, for typical image brightness range. If you
do convert to a linear space for processing, you should use *at least*
12 bits per sample.

8 bit linear is bad, awful, ugly, and there is no excuse for using it.

Dave

Timo Autiokari

Mar 3, 1998

On 2 Mar 1998 19:58:23 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:

> And it is a simple, measurable fact of life that many image sensors
>have some sort of nonlinear transform built in.

CCD's are quite linear.

>In some cases, it is a nonlinear analog circuit between the CCD and the A/D.

The "nonlinear analog circuits" in video most often is a device that have both
analog input and analog outputs but *internally* they have a 6 bit or 8 bit
flash AD converter (and in broadcast quality system even higher resolution
ADCs). They also can have other signal conditioning like automatic gain control
etc. So they just appear to be analog, but if you look into the specifications
you can easily see that they are not. They are so called mixed signal devices.
These devices do not provide digital output so if one likes to have digitized
data out from such video camera the video signal then needs to be converted.

Only way back, before the digital and CCD era, was video processed purely in
analog ways. It helped that the imager tubes, diode arrays, etc. of those days
had non-linear characteristics of their own, but the pure analog signal
conditioning was difficult indeed and very inaccurate.

>In other cases, the A/D is directly digitizing the CCD output to 12 or 14 bits,
>but this data is passed through a lookup table to produce 8 or 10 bits of data
>to be stored in the image file. The nonlinear transfer function is performed by
>the lookup table.

Yes, nowadays they are becoming available, and this is a very good improvement
for video quality, since the mixed-signal pre-conditioning and processing
components are not at all accurate.

>Timo, I'll happily believe that if you build a CCD camera the output
>will be linearly proportional to intensity. But that just isn't the way
>most real cameras and scanners are built, for a variety of good reasons.

I cannot see any real reason why the linear space is not allowed.

Maybe the manufacturers want to keep the border wide enough between high-end
and consumer-grade systems. Most of the high-end systems allow the linear
space. They need to, otherwise the pre-press people would not buy them.

Timo Autiokari

Timo Autiokari

Mar 3, 1998

On Tue, 03 Mar 1998 10:02:23 -0500, Valburg <lk...@psu.edu> wrote:

>Could you tell us whether there is a way to determine the native gamma
>of a particular scanner model,

If it has a CCD rod (a linear CCD array) then the native gamma is 1.0.

To check a scanner or a camera:

1. Scan a good image that also has a lot of shades of black (shadows). Open it
in Photoshop (some other software may not be able to show the histogram
accurately enough; Photoshop does).
2. Choose Image/Histogram.
3. In Photoshop the Luminosity channel in Histogram is smoothed, so select
red, green or blue from the dropdown box.

If you do not see any gaps in the red, green and blue histograms and there is
no spiking (hedgehog) either, then you have a linear setting.

If you see gaps, then the data is being modified by the software of the
scanner. To verify, scan a couple of other images and do the same to see that
the gaps appear generally in the same places.

If you do not see gaps (or there are only one or two of them) but there is
spiking somewhere on the curve, then there is either non-linear "analog"
signal conditioning hardware in the scanner or you have a 10-bit or better
scanner. Again, to verify, scan a couple of other images and do the same to
see that the spiking appears generally in the same places.

If you have a setting for the gamma in the scanner software, then adjust it
until no gaps or spiking are seen. The result should be rather linear then.

The "analog" signal processing can be easily detected by scanning the same
target three or more times and then doing subtractions in Photoshop. The error
of such "analog" signal conditioning circuit is often/usually in the range of
several percentages and can be easily found out since most of it is random, so
between the identical scans you can detect some 2 to 20 levels errors. In case
of direct AD conversion from the CCD the error between scans should be below
detection.
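For those who prefer not to count histogram bars by eye, here is a rough
sketch of the gap/spike check in Python (assuming the NumPy and Pillow
libraries; the file name and the spike threshold are arbitrary assumptions):

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("scan.tif").convert("RGB"))  # example file
    red = img[..., 0].ravel()
    hist = np.bincount(red, minlength=256)

    gaps = int(np.sum(hist == 0))
    # crude spike heuristic: bins far above the median occupied bin
    spikes = int(np.sum(hist > 4 * np.median(hist[hist > 0])))
    print("empty bins:", gaps, "spiking bins:", spikes)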

>in order to avoid reducing the 8 bits worth of information by adjusting
>the gamma or midpoint to another setting? Or is this, perhaps, a concern
>of more significance in theory than in practice; that is, perhaps the extent
>of mid-point adjustments commonly practiced are gentle enough so as
>to make little difference in image quality (loss of "bins" through quantization
>error?)?

If you do some experiments you will very soon notice that it is not theory but
very hard fact in practice. In video such errors just do not mean a thing.
Often on a CRT they do not mean much; in the case of web publishing the images
are usually scaled down, and this helps hide the problems. But if you display
the images at 100% scaling on-screen, or you print them, there will be
problems indeed because of the gamma-compensated images.

>Is there commonly available image manipulation software which allows
>working at bit-depths greater than 8 and then reducing bit-depth to 8
>for output or storage (as mentioned most recently in posts by Dave
>Martindale)?

Photoshop 4.0.x allows this but it is limited. You can do Levels and Curves,
Crop and Save. At least black-point and white-point scaling seem to work
properly in Levels. The middle box in Levels is *not* a correct gamma control,
so do not trust it; it will leave the shadows behind badly. In the Curves
dialog linear scaling seems to work properly; curve adjustments, especially
ones with many points, possibly do not work correctly. The maps (*.amp) in
Curves seem not to work properly (with many points). But 5.0 will have a lot
of 16-bit tools available.

Timo Autiokari

Timo Autiokari

Mar 3, 1998

On 3 Mar 1998 03:40:53 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) wrote:

>>>>: And again, the image editor perceives the signal (data) before the
>>>>: monitor and some rather heavy perceivable problems exist there.

>This last sentence is not true if one considers the most likely destinations
>of the image or the most likely operations in an image editor that might cause
>problems. If the destination is a CRT, then Timo apparently already agrees that
>the correction for gamma = 2.2 is correct, especially if initially done at
>higher bit depth.

Not all images are to be shown on the web, so compromises are not always
needed. Therefore gamma 2.2 is not always correct. As you say yourself, it
depends where the image is to be displayed. On a PC the correct gamma
compensation is 2.5; for a Mac it probably is 1.8.

> Now consider taking this to a Mac system where the gamma = 1.8. The necessary
>correction from 2.2 -> 1.8 is gamma = 1.1. This is much gentler and more
>accurate at 8 bits than banging it all the way from gamma = 1.0.

First, a quick lesson about how to calculate with gamma is needed here.

If the file has inverse gamma 2.2, then to change the gamma of the file to the
inverse of 1.8, you specify X = the required gamma that needs to be applied to
the file, and you calculate:

(1/2.2) * X = (1/1.8)
X = (1/1.8) / (1/2.2)
X = 2.2/1.8
X ≈ 1.22

So, in your example you will need to apply gamma 1.22 to the file. 1.22 is not
gentle, and it will be applied to data that _already_ contains errors and heavy
quantization.

>The other likely destination of the image is a printer. All the printers I have
>tested and that's quite a few, at least at their native dot resolution, have a
>response similar in shape to a CRT but usually somewhat more radical.

I have measured the transfer curves of many printers, and they very often do
have an artificial gamma match in their software. But only HP printers have as
high a value as 2.2. Actually, about a year ago the 5C had a driver with an
artificial gamma match at an enormous 2.6, but the new driver brought that
down to 2.0 or so.

>This is a consequence of round dots having to completely cover square
>pixels to avoid gaps in solid fills. Put down dots covering 1/4 of the pixels,
>but not overlapping, and you get pi/8 coverage of the paper, not 1/4.

A quick lesson about the dot gain of printers is needed here.

Dot gain can be positive or negative. Your round dots would leave white
showing (where the white should actually be covered with ink). This is the
opposite of dot gain: the dots are losing area, and that does _not_ make the
transfer curve appear as it does with a CRT, but exactly the opposite. If the
dots are gaining area, _then_ the images will print too dark, and then the
transfer curve of the printer bends towards the transfer curve of a CRT.

>The correction curve for a printer lies somewhere above that for a gamma = 2.2
>monitor.

No, it can be loosely approximated with a gamma compensation of 1.0 to 2.0, but
no printer follows a gamma curve accurately.

>This leaves the question what might be done in an image editor keeping in mind
>these destinations for the image. Of course anything can be done, but likely
>corrections to normal images are linear stretching or compression of the tonal
>scale

And here you will have _considerable_ trouble. When your image has gamma
compensation, then if you do a "linear stretching or compression of the tonal
scale" (e.g. Levels in Photoshop), what will happen to the colors (hues)?
They will change, since you apply a linear scaling to RGB values whose
components have an exponential distribution. The result is that the image is
no longer in any gamma space and the colors (hues) are distorted.

> and gamma-curve-like corrections to enhance contrast in the shadows or
>highlights. Any of these operations can produce contouring if applied strongly
>enough, but they will do it sooner if applied to a gamma 1.0 image which is
>then corrected for screen or printer destinations.

No, the contouring appears when you edit images that have errors and large
quantization.

The final compensation from 1.0 to any gamma space creates the errors and
quantization only once.

The best thing is that the resulting image will show the linear levels
_correctly_, without *any* errors, from level 0 to level 65 (in the case where
gamma compensation of 2.5 is applied).

And there is much more to image editing, such as color correction. With linear
images this is often very easy: you just make linear changes to the color
channels (easiest in Curves, but only linear changes are needed). This is for
cases where you need to correct, e.g., a wrong color temperature due to the
lights at the scene. If you look at CIE XYZ you can easily see that both the
RGB-to-CIE-XYZ conversion and color temperature changes are linear
transformations, so you can do this linearly in RGB also. If there is gamma
compensation in the image file, it is very hard to correct color temperature
or anything like that.

Most of the filters, such as Unsharp Mask, require a linear representation of
intensities. Scaling needs it, etc.

One may also want to use a printing service to print one's best shots on a
high-end dye-sub. A dye-sub is able to show many more colors and shades than a
home printer. A gamma image can show somewhat decently on an ink-jet, but when
the image is printed on a dye-sub the artifacts appear. If one has error-free
and quantization-free linear images, then high-quality printing is possible
using dye-subs; most if not all of them have a linear mode also, like the
Tektronix Phaser 450e that I use.

>I don't expect Timo will agree with much of this any more than he has with
>Alan Roberts or Charles Poynton, but then he doesn't agree with anybody.

Now, Mr Poynton's on-line pages were the reason why I started this thread. I
have spent a considerable amount of my time explaining to people what gamma
actually is. I got bored of doing that, and I have read his material very
carefully many times.

Mr Roberts has been co-authoring the colorspace-faq with Mr Poynton, see:
http://www.altavista.digital.com/cgi-bin/query?pg=aq&what=web&kl=XX&q=%22Alan+Roberts%22+and+%22Charles+A.+Poynton%22&r=&d0=21%2FMar%2F86&d1=&search.x=65&search.y=8
So there is nothing special there if they both see (or try to explain) the issue
similarly.

Timo Autiokari

Timo Autiokari

Mar 3, 1998

On 2 Mar 1998 20:44:35 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:

>Nowhere is there a stone tablet with a commandment saying that the
>digital sample values in an image file should be linearly related to intensity.

The problem is that there is a stone tablet that says we all need to obey the
non-linear space. This is what the manufacturers have been engraving into the
appliances, while it would be very easy for them to also allow the better
linear space.

>What do you mean by "cannot be edited properly", and "un-natural"?
>Do you mean that pixel values are no longer linearly related to light
>intensity, and thus standard linear image processing operations like
>blur, sharpen, and resize will not give the correct answer when applied
>to these nonlinear images? If so, you're right.

That is exactly what I mean. Thank you very much.

>There are two approaches for dealing with this. One is to realize that
>the values in the file are just numbers, not light intensity, and you
>can convert the numbers to (linear) intensity any time you want. So you
>can take your 8-bit gamma-corrected image, convert it to a linear form
>(remember to use at least 12 bits to avoid artifacts), do your image
>processing operation, then re-convert the pixel values to gamma-corrected
>8 bit samples. This adds only a little bit of roundoff error, and the
>whole process has a lot less error than you would get by converting the
>image to 8 bit linear and working with it in that space. (Though using
>12 bit linear everywhere would be slightly better yet).

This is not good, and quite often simply impossible. It creates unnecessary
errors even at 12 bits, and on most systems you can only acquire the 8-bit
gamma-compensated image. There is not much use in taking that into 16-bit
space anymore, because the quantization is already there.

>The other approach is to apply linear image processing operations to
>the 8-bit pixels even though it is mathematically incorrect. This is
>commonly done in video. When the process is done within a control
>loop, with a human operator looking at the result and modifying the
>parameters until they see what they like, this works quite well.
>It doesn't matter much if the math is wrong, since nobody is making
>measurements from the images - they just want something that looks good.

You are correct, it really does not matter in video, because the general image
quality there is only a very small fraction of what can be achieved in digital
photographic imaging using a digital camera (not a video camera). Video comes
and goes; it is just set to appear somewhat decent. But one can spend many
hours with one single photographic image. There, every bit of accuracy is very
much needed.

>>To see what the data looks like,
>>just apply a gamma of 1/2.0 to an image that shows properly. Then think what
>>the various image editing operations might do with that.
>
>No, the data is intended to be viewed just as you see it.

In Photoshop you do not see the _data_, you see an on-screen image of it after
the monitor gamma. The best way to see what the image _data_ looks like is to
put the value 1.0 into the "Monitor Gamma" box, provided that Photoshop is
properly calibrated using the Gamma slider.

>Timo, how old are you? Forgive me if this doesn't apply to you, but
>you sound exactly like an undergrad who has always assumed that sample
>values in images files should be linearly related to intensity because
>that's the mathematically *right* way to do it.

Well, for me the undergrad times are decades back. However, linear space is
not only mathematically right, it provides the best image quality. I have done
a lot of comparisons on high- and mid-quality systems.

>As you learn about perception and image processing, you will see that
>there is no perfect way to do something. There are just a bunch of tradeoffs,
>and all you can do is make intelligent ones.

The gamma space is a very bad trade-off, made for video at the expense of
digital photographic imaging.

And there is no real reason to have that trade-off. It would be a very simple
thing for the manufacturers to just provide a bypass for those who want to use
the linear space.

But for some unbelievable reason they do not provide it in general; only in the
most expensive devices do they allow linear imaging.

Some of the manufacturers, like HP, do not allow it at all, because they seem
to genuinely believe that the gamma space is so very good and so much better in
every case. Obviously they have been reading some FAQs and believe in them
blindly.

Timo Autiokari

Dave Martindale

Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>And here you will have _considerable_ trouble. When your image has gamma
>compensation, then if you do a "linear stretching or compression of the tonal
>scale" (e.g. Levels in Photoshop), what will happen to the colors (hues)?
>They will change, since you apply a linear scaling to RGB values whose
>components have an exponential distribution. The result is that the image is
>no longer in any gamma space and the colors (hues) are distorted.

If you do a simple linear stretch by moving the "white point" marker
and don't touch the black and midpoint markers, you scale all of the
intensities in the image by the same factor. This is equivalent to
changing the F-stop on a camera lens. This is true if the sample
values stored in the pixel are linearly proportional to intensity, but
it is ALSO true if the values are gamma corrected. Don't believe me?
Just do the math. It does work.
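For the doubtful, here is the math in a few lines of Python (the gamma value
and the stretch factor are arbitrary choices for the demonstration):

    gamma, k = 2.2, 1.2   # encoding gamma; white-point stretch factor

    intensities = [0.02, 0.2, 0.9]                   # linear-light values
    coded = [i ** (1 / gamma) for i in intensities]  # gamma-encoded samples
    stretched = [(k * v) ** gamma for v in coded]    # decode after stretch

    # every pixel's intensity is scaled by the same factor k**gamma, so the
    # stretch acts like a uniform exposure change, leaving hues untouched
    print([s / i for s, i in zip(stretched, intensities)])  # all ~1.49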

On the other hand, if you move the "black point" marker off zero, then
you are mucking about with the tonal scale of the whole image, and
the sample values are no longer related to original scene intensity
by a simple rule. This is true for both linear and gamma-encoded
sample values, so gamma-encoded samples suffer no additional disadvantage
here.

If you move the "midpoint" marker in Levels, you change the gamma of the
image. If it was already gamma-encoded, the new image is still gamma
encoded but with a different value of gamma. No problem. If the original
image was linear, you have now converted it to a gamma-encoded one.
Oops.

It seems like gamma-encoded images are *more* robust than linear ones
under the sort of manipulations you can do with the Levels menu.

>No, the contouring appears when you edit images that have errors and large
>quantization.
>
>The final compensation from 1.0 to any gamma space creates the errors and
>quantization only once.

Bullshit, to put it politely. I could acquire an image with an excellent
cooled CCD camera and 16-bit A/D conversion. There would be no contouring.
If I convert this image to 8-bit gamma-corrected encoding, there would
likely still be no contouring. But if I convert directly from the
16-bit form to 8-bit linear samples, rounding every sample value to the
nearest 8-bit value for the maximum accuracy, there will likely be
contouring in the shadow areas. The contours are cause because the
8-bit linear code simply isn't good enough to represent the image.
At this point, there has been no editing done, and no quantization
errors added other than those necessary to fit the data into 8 bits.

It's easy to see why. Suppose the image has a useful brightness range
of 100:1. Then the maximum intensity is represented by 255, and the
darkest shadow is 2.5. Oops, no way to represent 2.5 accurately, so we
have to use either 2 or 3. These sample values, when accurately reproduced
by the display, differ in intensity by a factor of 1.5. The brightness
difference between codes 3 and 4 is 1.33. Even though this is in dark areas
of the image where the eye's sensitivity is reduced, the eye can easily
still see this 50% or 33% brightness "step" caused by quantization.

If the same image was stored gamma-corrected with a gamma of 0.5, the
darkest shadow with an intensity of 0.01 of maximum would be represented
by a sample value of 25.5. The nearest integer codes are 25 and 26.
A code of 25 represents a (linear light) intensity of 0.0096, while
a code of 26 represents an intensity of 0.0104. The ratio between these
is 1.08 - so we have only an 8% brightness "step" between adjacent code
values in the shadow areas of the image, while the linear encoding gave
us a 50% step in the same place. No wonder the gamma-corrected version
works better.
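Those step ratios are quick to check in Python:

    # adjacent-code intensity ratios at ~1% of full scale, 8 bits/sample
    print((3 / 255) / (2 / 255))                 # linear, codes 2 -> 3: 1.5
    print((4 / 255) / (3 / 255))                 # linear, codes 3 -> 4: ~1.33
    print((26 / 255) ** 2 / (25 / 255) ** 2)     # gamma 0.5, 25 -> 26: ~1.08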

I *have* seen at least one image where even 8 bits gamma corrected
was not enough - you could see bands in an area of slowly changing
colour. But that's only one or two samples in years of looking at
images rather critically. In comparison, I've seen many 8-bit linear
images with quantization bands.

>in CIE XYZ you can easily see that both the RGB-to-CIE-XYZ conversion and color
>temperature changes are linear transformations, so you can do this linearly in
>RGB also. If there is gamma compensation in the image file, it is very hard
>to correct color temperature or anything like that.
>
>Most of the filters, such as Unsharp Mask, require a linear representation of
>intensities. Scaling needs it, etc.

Here, you are arguing that filtering operations should be done in linear
space. However, you do *not* have to store images using linear samples
in order to do your operations in linear space. The two issues are
essentially unrelated. I use an image package that lets me *store*
pixels in linear, gamma-corrected, logarithmic, or offset linear form,
yet it can convert any of these to linear floating point to do filtering
operations.

>One may also want to use a printing service to print one's best shots on a
>high-end dye-sub. A dye-sub is able to show many more colors and shades than a
>home printer. A gamma image can show somewhat decently on an ink-jet, but when
>the image is printed on a dye-sub the artifacts appear. If one has error-free
>and quantization-free linear images, then high-quality printing is possible
>using dye-subs; most if not all of them have a linear mode also, like the
>Tektronix Phaser 450e that I use.

Yes, but a dye-sub might be able to show the quantization errors in
the shadows that are inherent in using 8-bit linear encoding. Better
to give the printer 8-bit gamma corrected data with a gamma of about 1/1.8,
to ensure this doesn't happen. Why 1.8? Because most high-end scanners
and printers are designed to work with high-end image editing stations, which
are mostly Macs.

>Now, Mr Poynton's on-line pages were the reason why I started this thread. I
>have spent a considerable amount of my time explaining to people what gamma
>actually is. I got bored of doing that, and I have read his material very
>carefully many times.

Then why do you still disagree with it? He is accurately describing the
way the world is.

Dave

Dave Martindale

Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>The problem is that there is a stone tablet that says we all need to obey the
>non-linear space. This is what the manufacturers have been engraving into the
>appliances, while it would be very easy for them to also allow the better
>linear space.

The problem is that the linear space is clearly, demonstrably, very much
inferior to the nonlinear space when you are working with only 8 bits per sample.
It's only when you have at least 12 bits per sample that you can deal with
the brightness range of a good photographic print or transparency and
avoid quantization artifacts. And to handle the brightness range
captured by a photographic *negative*, you need about 16 bits.

So, are you saying that the appliance manufacturers should give you the
option of getting 8-bit linear data? That would look *worse* than what
you are getting now. Or are you saying that they should give you 12-bit
linear data? They could, but be prepared to pay extra for it. Since
8-bit gamma corrected works almost as well as 12-bit linear but costs
less (time, memory, CPU bandwidth), few people would be prepared to
pay extra just for the warm fuzzies of knowing they were doing things
"correctly".

>You are correct, it really does not matter in video, because the general image
>quality there is only a very small fraction of what can be achieved in digital
>photographic imaging using a digital camera (not a video camera). Video comes
>and goes; it is just set to appear somewhat decent. But one can spend many
>hours with one single photographic image. There, every bit of accuracy is very
>much needed.

If you really want to capture all of the information in a photographic
negative, manipulate it for hours, and write it back to film without
causing any artifacts due to quantization, you *must* use something
better than an 8-bit linear form of the image. 8-bit gamma will let
you get further without artifacts, but eventually quantization will
cause problems there as well. You should be using 12-bit linear
storage or better. So what if Photoshop won't do this (yet)? Write
your own image processing operations. Do all the calculations in
floating point. That will avoid any problems. Oh, and use scanners
and film recorders that handle at least 12 bits as well.

On the other hand, if you are limited by budget or programming ability
to only use 8-bit-wide samples, using gamma-corrected samples is the
best you can do. It's much better than linear, and somewhat better
than 8-bit logarithmic encoding. (Log encoding comes into its own
at 9 or 10 bits, but at 8 it still has some quantization artifacts
of its own).

>In Photoshop you do not see the _data_, you see an on-screen image of it after
>the monitor gamma. The best way to see what the image _data_ looks like is to
>put the value 1.0 into the "Monitor Gamma" box, provided that Photoshop is
>properly calibrated using the Gamma slider.

That's the correct way of displaying data that has been linearly encoded.
For data that has been nonlinearly encoded, this will *not* show you what
the image is supposed to look like. You are assuming, a priori, that
only linear data is "correct", and when you set up Photoshop to show
linear data and an image then looks bad, that the image itself must be
bad. But the real problem is your own assumption, and the incorrect
way it causes you to set up Photoshop. When Photoshop's monitor gamma
setting is set correctly *for the image in question*, it looks fine.

>However, linear space is
>not only mathematically right, it provides the best image quality. I've done a
>lot of comparisons on high- and mid-quality systems.

What were you comparing with what? How many bits wide were the images?
How many bits wide was the frame buffer and DACs? How did gamma correction
for the display ultimately get done? If you tell us these things, we can
probably figure out what you are seeing.

I've done many of my own experiments and demonstrations, using 8-bit
linear, gamma-corrected, and logarithmic encoding of intensity, as
well as doing processing using 12-bit linear, 16-bit linear, 32-bit
floating point, and 10-bit log encoding. I've written all of the software
involved myself, so I *know* where all the sources of roundoff error
are. I've had access to 12-bit CCD cameras and 12-bit film recorders.

My own experience is that 8-bit linear is unacceptable for photographic
imaging, while 8-bit gamma-corrected is good enough most of the time.

>The gamma space is a very bad trade-off, made for video at the expense of
>digital photographic imaging.

You keep stating this, but provide no believable reason *why*.

>And there is no real reason to have that trade-off. It would be a very simple
>thing for the manufacturers to just provide a bypass for those who want to use
>the linear space.
>
>But for some unbelievable reason they do not provide it in general; only in the
>most expensive devices do they allow linear imaging.

Many low-end devices do not have 12-bit A/D converters, so there is no
place you could have access to a 12-bit linear data stream. 8-bit
linear is much worse than 8-bit gamma corrected, for reasons listed above.
And if you use Photoshop for your imaging, how would you deal with 12-bit
or wider data anyway? Photoshop lets you do almost nothing with samples
wider than 8 bits.

>Some of the manufacturers like HP do not allow it at all, because they seem to
>genuinely believe that the gamma space is so very good and so much better in
>every case. Obviously they have been reading some FAQs and believe in them
>blindly.

Perhaps HP employs some engineers who understand the issues better than you
do, and understand that (a) providing an 8 bit linear path is useless,
and (b) providing a 12-bit linear path is not economically justified for
this device.

In general, I don't think HP engineers have to learn their imaging theory
from Web pages.

Dave

Dave Martindale

Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>To check a scanner or a camera:
>
>1. Scan a good image that also has a lot of shades of black (shadows). Open it
>in Photoshop (some other software may not be able to show the histogram
>accurately enough; Photoshop does).
>2. Choose Image/Histogram.
>3. In Photoshop the Luminosity channel in Histogram is smoothed, so select
>red, green or blue from the dropdown box.
>
>If you do not see any gaps in the red, green and blue histograms and there is
>no spiking (hedgehog) either, then you have a linear setting.

This method will probably identify a scanner which uses an 8-bit A/D converter
on the output of the CCD, followed by a lookup table to implement the
non-linear gamma correction. However, if the gamma correction is done
by an analog amplifier ahead of the CCD, the histogram will be smooth.
If the scanner uses a wider (e.g. 12 bit) A/D converter and does the
gamma correction in a lookup table, the histogram will also be smooth.

In other words, Timo's test allows you to identify a badly designed
scanner that is causing excessive quantization error. But it will tell
you nothing about a well-designed scanner, which will have a smooth
histogram regardless of whether the output is linear or gamma corrected.

If *I* wanted to measure the characteristic response of a scanner, I'd
get a step grey wedge, like the Kodak standard grey scale. I'd scan
it with the scanner, load the image into Photoshop, and calculate the
average sample value in each of the steps of the grey scale.

Then I'd enter the data into a spreadsheet. For each patch on the grey
scale, you know the photographic density, so you know how much light it
physically reflects. You also know what sample code values the scanner
assigned to each of those intensities. If you plot both on a linear
scale, you'll get a curved line. If you plot both on a log-log scale,
you should get an approximately straight line, with the slope of the
line being the gamma of the scanner. I might try a least-squares fit
of the data.
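A sketch of that fit in Python/NumPy; the density and sample numbers below
are made up for illustration, not measured from any real scanner:

    import numpy as np

    # Known densities of the grey-wedge patches; reflectance = 10**(-density)
    density = np.array([0.05, 0.20, 0.50, 0.80, 1.10, 1.40, 1.70, 2.00])
    reflectance = 10.0 ** (-density)

    # Average 8-bit sample value the scanner assigned to each patch
    sample = np.array([242, 207, 151, 110, 81, 59, 43, 31])

    # Straight-line fit in log-log space:
    #   log(sample/255) = slope * log(reflectance) + intercept
    slope, intercept = np.polyfit(np.log(reflectance),
                                  np.log(sample / 255.0), 1)
    print("exponent applied by scanner:", slope)   # ~0.45 for these numbers
    print("equivalent gamma:", 1.0 / slope)        # ~2.2 for these numbers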

To quote Sam Wormley, "One well-performed experiment is worth a
thousand expert opinions." Probably the most valuable thing I've
seen in 15 years of reading Usenet.

Dave

Dave Martindale

Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>>Timo, I'll happily believe that if you build a CCD camera the output
>>will be linearly proportional to intensity. But that just isn't the way
>>most real cameras and scanners are built, for a variety of good reasons.

>I can not see any real reason why the linear space is not allowed.

Are you talking about getting 8-bit linear data out? For reasons I've
explained in another article, and Poynton's FAQ also explains, 8-bit linear
suffers (badly!) from quantization artifacts being visible in dark areas
of the image.

Are you talking about getting 12-bit linear data out? Many devices just
don't have 12-bit linear data available anywhere internally. Even if they
did, it would cost time and space and complexity to pass it back to the
user. If the user is using Photoshop, they can't use the extra bits anyway.

I would make use of the extra bits - I write my own imaging software.
But most consumers wouldn't, so I don't expect to see this in consumer
products.

Dave

gkin...@cybernet1.com

Mar 4, 1998

In article <35064429...@news.clinet.fi>, tim...@clinet.fi (Timo
Autiokari) wrote: ...{{in the interest of saving bandwidth I have taken the
liberty of not including the previous message.}}

I got into reading this thread from rec.photo.digital, not the scientific
groups. I must confess that there is a great deal I do not understand well
enough to put into proper terminology. But perhaps you could diverge from your
learned disagreements to help me with a problem which is related to image
linearity.

When I have successfully tricked my supposedly idiot-proof auto-focus,
auto-exposure and auto-white-balancing camera (an Olympus D600L) into taking an
underexposed picture, I have a problem in salvaging a usable image. A histogram
shows a mass of data clumped in the lower third, with nothing in the highlights
or midrange. (I use Picture Publisher 5 from Micrografx.) Then I go into
"adjust tone balance" and move the highlight marker to the first point that has
image data greater than zero. Then I fool around with the midpoint selector to
get the smoothest image I can. The results don't look too bad on screen (I use
a PC with a 19" monitor), but the printed output resembles a color-coded
contour map more than a photograph.

When I open the histogram of the adjusted file, I see something that looks
like a picket fence made with random-length lumber. Clearly the image would be
healthier if those gaps were filled up. Those are the data bins my 8x3-bit
program sorts the data into, right? Why can't the full bins spill some of
their data into the less fortunate ones? Would that give me a better picture?

I read all 41 posts that my browser picked up in the hopes that I would learn
something I could use. Unfortunately my technical expertise is somewhat
lacking in this area, but this is REC.photo.digital and I want to rock and
rec. with my camera. Please put some of your erudite knowledge into concepts
that non-tech-head photographers can use.

True knowledge is a great puzzlement.

Glenn Kinsley
ex '59 MIT


Timo Autiokari

Mar 4, 1998

On 3 Mar 1998 19:25:48 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:
>tim...@clinet.fi (Timo Autiokari) writes:

>>I can not see any real reason why the linear space is not allowed.
>
>Are you talking about getting 8-bit linear data out? For reasons I've
>explained in another article, and Poynton's FAQ also explains, 8-bit linear
>suffers (badly!) from quantization artifacts being visible in dark areas
>of the image.

Yes, I'm talking about getting it not only out, but also getting it into
printers too. I do this in 8 bits every day and have no problems with the dark
areas of the image.

The "better shading" is the only argument what one can have to support the gamma
space. What is so special about the black color ? The important information
and the quality of the images is not in the shadows. It is in the colors, every
where else but not in shadows.

The eye cannot see the 1/256 intensity step, nor the 2/256 step. The gamma
space cuts out more than 50% of the available colors, and it cuts more heavily
from the highlights and midtones. In reality this generates artifacts in the
midtones and highlights very easily when such images are edited.

I choose the better and cleaner colors than "better shadows". And there usually
is no problems what so ever with the shadows. Even if there would be artificants
in deep shadows they are much easier to clean up than artificants that are in
the midtones and highlight areas.

Timo Autiokari

Bruce Lucas

Mar 4, 1998

Timo Autiokari wrote in message <34fc28a7...@news.clinet.fi>...

>This gamma *is* good for TV and video since the CRT will UNcode the coding
>and the image is then good for the eye. But if the image is to be edited
>then gamma compensation is not good at all. The image will be in a "coded"
>state; it is not natural and it is not good for the eye nor for image
>editing in any sense. Editing "coded" images results in poor performance.


Examples please?

Bruce Lucas

Timo Autiokari

Mar 4, 1998

On 3 Mar 1998 18:36:56 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:

>If you do a simple linear stretch by moving the "white point" marker
>and don't touch the black and midpoint markers, you scale all of the
>intensities in the image by the same factor. This is equivalent to
>changing the F-stop on a camera lens. This is true if the sample
>values stored in the pixel are linearly proportional to intensity, but
>it is ALSO true if the values are gamma corrected. Don't believe me?
>Just do the math. It does work.

If you only change the white point, then the gamma space does not change;
this is true. But in almost every case you need to change the black point
as well. However, there is another problem related to the white-point
adjustment:

What you do not seem to understand is that the image file (the data) will
degrade much sooner when the gamma compensation is in the image data. When
you scale the white point, the highest values (the R, G, B values of
pixels) will saturate; they will hit level 255 very soon. And because the
gamma compensation is in the image data, there are a _lot_ of high R, G or
B values in the data. This saturation amounts to artifacts. So the data
deteriorates and is no longer good for printing.

If you only stare at the image on the monitor, then there is no difference
between the performance of the linear image and the gamma-compensated
image with respect to the white-point adjustment.

In the case of the linear image you only cut out what you want to cut out,
and below that point there will be _no_ saturation. So the image file is
still good for printing (and performs the same or better on the monitor
than the originally gamma-compensated image does).

>On the other hand, if you move the "black point" marker off zero, then
>you are mucking about with the tonal scale of the whole image, and
>the sample values are no longer related to original scene intensity
>by a simple rule. This is true for both linear and gamma-encoded
>sample values, so gamma-encoded samples suffer no additional disadvantage
>here.

The above is not true at all. When you apply a linear transformation to a
linear image, the resulting image will surely be linear. This should be
quite elementary. There will be no hue shift; only the intensities change.
So with a linear image you get only the desired effect (you cut out what
you want to cut out).

But when you move the origin (the black point) of the inverse-gamma-
distributed intensities of an image, then in addition to the intensity
change the whole image moves out of the gamma space, and it will not land
in another gamma space either. It will be very difficult to correct.
Because the colors are the tristimulus components, there will be a hue
shift. So the result is that you get two problems from the black-point
change (yes, they are related to each other): (1) the non-linearity of the
image is no longer a simple gamma function, and (2) there is a color
shift. (If you can somehow figure out the correction curve that is needed
to recover from that, then of course both errors will be corrected
simultaneously.)

This problem is not easy to observe, as you are changing the intensity
scale in the first place, so when the non-linearity of the image changes
at the same time it usually goes unnoticed. The result, by the way, is
mainly too-dark midtones and shadows. Do the math.

>If you move the "midpoint" marker in Levels, you change the gamma of the
>image. If it was already gamma-encoded, the new image is still gamma
>encoded but with a different value of gamma. No problem.

You are correct with the gamma here.

A warning: the "midpoint" marker in Levels (in Photoshop) is *not* a true
gamma control; there is an Adobe tweak in it so that the shadows are not
affected much. (This is a problem in Photoshop, not in your reasoning.)

>It seems like gamma-encoded images are *more* robust than linear ones
>under the sort of manipulations you can do with the Levels menu.

No, gamma-encoded images are much worse, as I have explained above. Just
do the math.

>It's easy to see why. Suppose the image has a useful brightness range
>of 100:1. Then the maximum intensity is represented by 255, and the
>darkest shadow is 2.5. Oops, no way to represent 2.5 accurately, so we
>have to use either 2 or 3. These sample values, when accurately reproduced
>by the display, differ in intensity by a factor of 1.5. The brightness
>difference between codes 3 and 4 is 1.33.

You are quite wrong here. Those percentages would apply only if you could
view the image on a CRT in total darkness, with the light coming through
the glass of the monitor without any reflections, and with all the light
from the monitor that does not hit your eyes ending up in a black hole.

You cannot see a 1/256 linear difference on a monitor. Just experiment and
you will see.

>Even though this is in dark areas of the image where the eye's sensitivity
>is reduced,

Now I have the urge to quote Mr. Poynton:

 "Through an amazing coincidence, vision's response to intensity
 is effectively the inverse of a CRT's nonlinearity"

He seems to be saying totally the opposite of what you say above. You
know, the inverse gamma curve shoots up steeply in the dark, so according
to Mr. Poynton the sensitivity would be much higher there.

Is your statement correct, or is the statement of Mr. Poynton correct? It
seems to me that both cannot be correct at the same time. Or is someone
missing some piece of essential information?

>Here, you are arguing that filtering operations should be done in linear
>space. However, you do *not* have to store images using linear samples
>in order to do your operations in linear space.

Really. And I do not need to use the car to go to work, because I can just
use the helicopter.

>a dye-sub might be able to show the quantization errors in
>the shadows that are inherent in using 8-bit linear encoding. Better
>to give the printer 8-bit gamma corrected data with a gamma of about 1/1.8,
>to ensure this doesn't happen.

Please tell me what is so important in the shadows (which you really do
not even see)? The quality of images is *mostly* elsewhere: in the colors,
all over the image, not in the shadows. And editing gamma-compensated
images produces artifacts, decreases sharpness, flattens the chroma and
causes hue shifts there, to mention a few.

Timo Autiokari

Stephen H. Westin

Mar 4, 1998

tim...@clinet.fi (Timo Autiokari) writes:

> On 3 Mar 1998 19:25:48 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:
> >tim...@clinet.fi (Timo Autiokari) writes:
>
> >>I can not see any real reason why the linear space is not allowed.
> >
> >Are you talking about getting 8-bit linear data out? For reasons I've
> >explained in another article, and Poynton's FAQ also explains, 8-bit linear
> >suffers (badly!) from quantization artifacts being visible in dark areas
> >of the image.

> Yes, I'm talking about getting it not only out but also getting it
> into printers too.

Which are limited to about a 30:1 contrast ratio, much less than that
of a good CRT or projected transparencies.

> I do this in 8 bit every day and have no problems
> with the dark areas of the image.

Are these scanned or digitized images? If so, you may be benefiting
from noise that masks the quantization artifacts.

Also, have you controlled for linearity in transfer function? Where
does, say, 50% gray measure in reflectance compared to black and
white?

> The "better shading" is the only
> argument what one can have to support the gamma space. What is so
> special about the black color ?

Visual sensitivity, which discriminates intensity levels more finely
in dark areas of the image.

> The important information and the quality of the images is not in
> the shadows. It is in the colors, everywhere else but not in the
> shadows.

Until a human looks at the image.

> The eye cannot see the 1/256 intensity step, nor the 2/256 step.

Please cite the experiments that show this. Please also detail the
conditions: contrast range, ambient light level, visual adaptation
state, etc.

<snip>

Dave Martindale

Mar 4, 1998

Valburg <lk...@psu.edu> writes:
>The first is a repeat of a question I posed previously: Could you tell
>us whether there is a way to determine the native gamma
>of a particular scanner model, in order to avoid reducing the 8 bits
>worth of information by adjusting the gamma or midpoint to another
>setting? Or is this, perhaps, a concern of more significance in theory
>than in practice; that is, perhaps the extent of mid-point adjustments
>commonly practiced are gentle enough so as to make little difference in
>image quality (loss of "bins" through quantization error?)?

With most scanners, you're probably best off just leaving all of the
scanning controls set to their default setting, and adjusting things with
Photoshop later, since the scanning adjustments probably just do something
equivalent to "Levels" and "Curves" in Photoshop, but without any preview
or undo facility.

With *some* scanners, you may be able to adjust controls that affect the
analog signal processing ahead of the A/D converter, and in those cases
it may be worth playing with the adjustments to get the optimum data out
of the A/D converter.

To measure the actual scanner gamma, scan a grey scale that you know
the patch reflectances of. I described this in more detail in another
message in rec.photo.digital in the last few days.
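
A rough sketch of that measurement, assuming NumPy and hypothetical patch
values, with the model sample/255 = reflectance^(1/gamma):

    import numpy as np

    # Known patch reflectances (relative to white) and the average 8-bit
    # sample values scanned from those patches - made-up numbers here.
    reflectances = np.array([0.03, 0.09, 0.18, 0.36, 0.72, 1.00])
    samples = np.array([36.0, 67.0, 98.0, 145.0, 212.0, 255.0])

    # If sample/255 = reflectance**(1/gamma), then
    # log(sample/255) = (1/gamma) * log(reflectance),
    # so 1/gamma is the slope of a zero-intercept log-log fit.
    x = np.log(reflectances)
    y = np.log(samples / 255.0)
    inv_gamma = np.sum(x * y) / np.sum(x * x)
    print("estimated scanner gamma: %.2f" % (1.0 / inv_gamma))   # ~1.8 here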

>Is there commonly available image manipulation software which allows
>working at bit-depths greater than 8 and then reducing bit-depth to 8
>for output or storage (as mentioned most recently in posts by Dave
>Martindale)?

Photoshop lets you read in 16-bit data then rescale it (but not much else).
You have to convert it to 8 bit before you can apply most operations.
I don't know about other options.

Dave

Dave Martindale

Mar 4, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>What you do not seem to understand is that the image file (the data) will
>degrade much sooner when the gamma compensation is in the image data. When
>you scale the white point, the highest values (the R, G, B values of
>pixels) will saturate; they will hit level 255 very soon. And because the
>gamma compensation is in the image data, there are a _lot_ of high R, G or
>B values in the data. This saturation amounts to artifacts. So the data
>deteriorates and is no longer good for printing.

Again, rubbish. Suppose you start with an image that has been linearly
encoded. You look at the histogram and decide to rescale the image with
a white point of 230. When you do this, all values from 231 to 255 are
clamped to 255, while the range from 0 to 230 is rescaled to span 0 to 255.

Now suppose you start with the *same* image, but stored using gamma 0.5
encoding. A portion of the image that had a sample value of 230 in the
linear image would now be stored as a sample value of 242 in the new image.
So if you rescale the image with a new white point of 242, this produces
*exactly the same* visual result as rescaling at 230 in the linear image.
Exactly the same portions of the image saturate and are clamped at 255.
All other portions of the image are increased in brightness *by exactly
the same amount*. There is no difference at all in the result - except
that the gamma-corrected image has fewer quantization artifacts in
shadow areas, like it always did.

Please try working out the math for yourself rather than just baldly
posting statements that simply are not true.
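
A quick numeric check of this, as a Python/NumPy sketch (gamma 0.5 encode,
gamma 2.0 decode, as in the example above):

    import numpy as np

    linear = np.arange(256, dtype=float)         # linear samples 0..255
    gamma = 255.0 * (linear / 255.0) ** 0.5      # same image, gamma-0.5 encoded

    def rescale(samples, white):
        # Levels-style white-point move: map 0..white onto 0..255, clip above
        return np.clip(samples * 255.0 / white, 0.0, 255.0)

    lin_out = rescale(linear, 230.0)                          # white point 230
    gam_out = rescale(gamma, 255.0 * (230.0 / 255.0) ** 0.5)  # same point, ~242

    # Decode the gamma result back to linear light and compare:
    gam_decoded = 255.0 * (gam_out / 255.0) ** 2.0
    print(np.abs(lin_out - gam_decoded).max())   # ~0: identical visual result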

>In the case of the linear image you only cut out what you want to cut out,
>and below that point there will be _no_ saturation. So the image file is
>still good for printing (and performs the same or better on the monitor
>than the originally gamma-compensated image does).

Again, the effect of changing the white point is the same for both linear
and gamma-encoded images. A little bit of math proves this. If you don't
see this when you are looking at images, you are doing something wrong.

>>On the other hand, if you move the "black point" marker off zero, then
>>you are mucking about with the tonal scale of the whole image, and
>>the sample values are no longer related to original scene intensity
>>by a simple rule. This is true for both linear and gamma-encoded
>>sample values, so gamma-encoded samples suffer no additional disadvantage
>>here.
>
>The above is not true at all. When you apply a linear transformation to a
>linear image, the resulting image will surely be linear. This should be
>quite elementary. There will be no hue shift; only the intensities change.
>So with a linear image you get only the desired effect (you cut out what
>you want to cut out).

If you apply a linear transform to linearly encoded pixels,
the result is "linear" in the mathematical sense. Mathematically, the
sample values are linearly related to the intensity if the function
relating the two looks like:

sample = A * intensity + B

That's a linear equation. But in photographic terms, if the sample value
is linearly related to intensity, the value "B" in the above function must
be zero. This is necessary to have the property that doubling the
intensity doubles the sample value. Using a black point offset scales
the sample values so that "B" becomes non-zero, and sample values are no
longer proportional to intensity. They are proportional to intensity plus
an offset. If you look at the relationship between scene brightness and
image brightness on a log-log scale (the way the eye sees), the transfer
characteristic is no longer a straight line.

The same thing happens if you apply a black level shift to a gamma encoded
image.
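
A small numeric sketch of the point, in Python (gamma 0.5 encoding assumed;
the two samples represent a scene intensity I and its double 2I):

    def black_shift(sample, black=20.0):
        # Levels-style black-point move: map black..255 onto 0..255
        return max(0.0, (sample - black) * 255.0 / (255.0 - black))

    # Linear samples for intensities I and 2I:
    lin_i, lin_2i = 50.0, 100.0
    print(black_shift(lin_2i) / black_shift(lin_i))   # ~2.67, no longer 2.0

    # The same two intensities, gamma-0.5 encoded, shifted, then decoded:
    gam_i = 255.0 * (50.0 / 255.0) ** 0.5
    gam_2i = 255.0 * (100.0 / 255.0) ** 0.5
    ratio = black_shift(gam_2i) / black_shift(gam_i)
    print(ratio ** 2.0)                               # ~2.26, not 2.0 either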

>So the result is that you get two problems from the black-point change
>(yes, they are related to each other): (1) the non-linearity of the image
>is no longer a simple gamma function, and (2) there is a color shift.

Shifting the black level causes this effect in *both* linear and gamma
encoded images. Try doing the math - it's easy to see.

>A warning: the "midpoint" marker in Levels (in Photoshop) is *not* a true
>gamma control; there is an Adobe tweak in it so that the shadows are not
>affected much. (This is a problem in Photoshop, not in your reasoning.)

Yeah, I know about that.

>You are quite wrong here. Those percentages would apply only if you could
>view the image on a CRT in total darkness, with the light coming through
>the glass of the monitor without any reflections, and with all the light
>from the monitor that does not hit your eyes ending up in a black hole.
>
>You cannot see a 1/256 linear difference on a monitor. Just experiment and
>you will see.

It depends on how your monitor is calibrated, doesn't it? If you choose
to use linear sample values, then *by definition* a value of 3 should be
50% brighter than 2. If it *is* that much brighter, you can clearly see
the step between them. If it *isn't* that much brighter, then your
monitor is not calibrated to display your images correctly.

Of course, it would take a dark room to see this. But suppose that your
image has only a 30:1 brightness range. Then the darkest shadows will
have a sample value of 8 or 9. The difference between these is still
12.5%, and you can still see a step that size. In a gamma-encoded image,
the same brightness would be stored as 46 or 47, and the step size between
these two adjacent codes is only 4.3%.

The important difference is that in these (not terribly deep) shadows,
the smallest representable difference in brightness in a linear-encoded
image is three times the size of the smallest representable brightness
difference in the gamma-encoded image. The size of the steps in the
gamma image are usually (but not always) small enough to be invisible.
The three times larger steps in the linear image are often visible.
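
The arithmetic behind those step sizes, as a short Python sketch (gamma 0.5
encode / 2.0 decode, matching the example above):

    lin_code = 8    # darkest shadow of a 30:1 image, 8-bit linear
    gam_code = 46   # the same brightness, gamma-0.5 encoded

    lin_step = (lin_code + 1) / lin_code - 1.0           # relative step, linear
    gam_step = ((gam_code + 1) / gam_code) ** 2.0 - 1.0  # decoded relative step
    print("linear: %.1f%%  gamma: %.1f%%" % (100 * lin_step, 100 * gam_step))
    # linear: 12.5%  gamma: 4.4% - roughly a third of the linear step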

>Now I have the urge to quote Mr. Poynton:
>
> "Through an amazing coincidence, vision's response to intensity
> is effectively the inverse of a CRT's nonlinearity"
>
>He seems to be saying totally the opposite of what you say above. You
>know, the inverse gamma curve shoots up steeply in the dark, so according
>to Mr. Poynton the sensitivity would be much higher there.
>
>Is your statement correct, or is the statement of Mr. Poynton correct? It
>seems to me that both cannot be correct at the same time. Or is someone
>missing some piece of essential information?

No, both statements are correct, and neither contradicts the other.
The gamma correction or "inverse gamma" curve has a very high slope in
the dark part of the image, when viewed on linear axes. This tells us
that we need to allocate a larger portion of the 256 codes available
to us in the darker portion of the image, and fewer codes in the lighter
portion than linear encoding would do. This has the effect of making
the relative step size between adjacent codes smaller in the dark areas,
where we need that. It also makes the relative step size larger in the
bright areas, but we can get away with this because a linear code has
smaller steps than necessary in the bright areas.

Does anyone else think that I'm contradicting Charles Poynton anywhere?
Or is it just Timo who reads it that way?

>>Here, you are arguing that filtering operations should be done in linear
>>space. However, you do *not* have to store images using linear samples
>>in order to do your operations in linear space.
>
>Really. And I do not need to use the car to go to work, because I can just
>use the helicopter.

Ahem. How is this in any way relevant to the argument?

You can do all your processing in linear space if you want, and you can
store your image files on disk this way if you want. But either you'll
have to use 12 bits or more per sample, or you will get quantization
artifacts. No one is stopping you from doing this.

On the other hand, most of the world has figured out that they can get
most of the benefits of 12-bit linear coding in only 8 bits using
gamma encoding. So they use it. It's a reasonable compromise. There
are some people for whom this compromise is not acceptable (e.g. X-ray
imaging, astrophotography) and they continue using more bits.

Again, I ask: do you really want 8 bits linear, or 12 bits linear?
It is pointless to say that you just want "linear" devices and processing
without saying how many bits you intend to use. 8 bits linear is simply
bad because of quantization artifacts. 12 bits linear is quite a
reasonable choice, if you're prepared to pay the price (usually a factor
of 1.5 in disk space, a factor of 2 in RAM, and somewhat more CPU).

Dave

Dave Martindale

Mar 4, 1998

gkin...@cybernet1.com writes:
>A histogram shows
>a mass of data clumped in the lower third with nothing in the highlight
>or midrange. (I use Picture Publisher 5 from Micrografx.) Then I go into
>"adjust tone balance" and move the highlight marker to the first point
>that has image data greater than zero. Then I fool around with the
>midpoint selector to get the smoothest image I can get. The results don't
>look too bad on screen (I use a PC with a 19" monitor) but the printed
>output resembles a color-coded contour map more than a photograph. When I
>open the histogram of the adjusted file I see something that looks like a
>picket fence made with random-length lumber.

The problem is that your original image only uses 1/3 of the available
256 codes - so there are fewer than 100 distinct brightnesses in the
image. As long as it remains dark and muddy, you don't see the steps
between them. But when you rescale it to fill the full brightness range,
you can see how large the gaps between the sample values really are.

>Clearly the image would be healthier if those gaps were
>filled up. Those are the data bins my 8x3 bit program sorts the data into,
>right? Why can't the full bins spill some of their data into the less
>fortunate ones? Would that give me a better picture?

You *can* spread the image around into more histogram bins. Try adding
some random noise to the picture, then look at the histogram. The
pickets in that picket fence will spread out when you add noise, with the
amount of spreading depending on the amplitude of the noise. Adjust the
amplitude until you like the image, or until you like the histogram.
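
A sketch of that experiment on synthetic data (Python/NumPy; the numbers
are made up, but the mechanism is the one described):

    import numpy as np
    rng = np.random.default_rng(0)

    dark = rng.integers(0, 85, size=(256, 256))   # underexposed: lower third
    stretched = dark * 3                          # levels stretch: gaps appear
    noisy = np.clip(stretched + rng.normal(0.0, 2.0, dark.shape),
                    0, 255).astype(np.uint8)

    print(len(np.unique(stretched)), "levels before noise")  # ~85, picket fence
    print(len(np.unique(noisy)), "levels after noise")       # most gaps filled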

Unfortunately, you'll end up with a noisy image. The fine intensity
information that you would need to produce a truly good image is just not
there - it was lost between the CCD and the A/D converter when the
original image was taken, and nothing can get it back.

Dave

Dave Martindale

Mar 4, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>Yes, I'm talking about getting it not only out but also getting it into printers
>too. I do this in 8 bit every day and have no problems with the dark areas of
>the image.

If you are just printing on paper, you probably have a 30:1 brightness
range or less in your output. This is least likely to show artifacts
in the dark areas. People working with CRT displays in a dark room
have about a 100:1 brightness range available, so it's more significant
there. Projected transparency film (slides or movies) has a brightness
range of several hundred to 1; it's even more important there.

Also, if you always work with scanned images, the noise from the film grain,
CCD electronics noise, and other sources tends to mask quantization errors.
That doesn't mean they are not there, just that they are less visible.
Images produced by computer rendering techniques with no added noise are
the most likely to show quantization artifacts, because there is no
masking noise.

>The "better shading" is the only argument what one can have to support the gamma

>space. What is so special about the black color ? The important information


>and the quality of the images is not in the shadows. It is in the colors, every
>where else but not in shadows.

I can't say anything about your images. But in *mine*, the shadows are
important.

>The gamma space cuts more than 50% of the available colors, and it cuts
>more heavily from the highlights and midtones. In practice this generates
>artifacts in the midtones and highlights very easily when such images are
>edited.

A gamma encoded image has the same number of available colours as a
linear image - they are just distributed differently across the tonal scale.

What sort of artifacts are you talking about? Can you put even one example
image somewhere that people can FTP so they can look at it, so we can see
what you mean?

Also, please keep in mind that if you are looking at your images on
a typical graphics display with 8-bit DACs, your image *is* being converted
to gamma-corrected form before it reaches the DACs. All of the "extra
colours" that you so want to preserve disappear when your image goes
through the gamma-correction lookup table that *must* be present somewhere
in the system (either software or hardware) ahead of the DACs.

Only if you have a frame buffer with 10-bit or wider DACs, and a wider
lookup table to match it, will you see the extra resolution in the bright
areas of the image on screen.

Dave

gkin...@cybernet1.com

Mar 5, 1998

In article <6dkj1s$q...@redgreen.cs.ubc.ca>, da...@cs.ubc.ca (Dave
Martindale) wrote:

>gkin...@cybernet1.com writes:
>>A histogram shows a mass of data clumped in the lower third with nothing
>>in the highlight or midrange....(snip)
>The problem is that your original image only uses 1/3 of the available
>256 codes - so there are fewer than 100 distinct brightnesses in the
>image. (more snips)
>Unfortunately, you'll end up with a noisy image. The fine intensity
>information that you would need to produce a truly good image is just not
>there - it was lost between the CCD and the A/D converter when the
>original image was taken, and nothing can get it back.

Thanks Dave. I can see now why a 30- or 36-bit device would be a lot
handier for someone like me who likes to make things do stuff they can't
do.

Timo Autiokari

Mar 5, 1998

On Wed, 4 Mar 1998 10:49:48 -0500, "Bruce Lucas" <lu...@watson.ibm.com> wrote:

>Examples please?

My pleasure:
http://www.clinet.fi/~timothy/calibration/gamma_errors/comparison.htm

There are currently three cases, with more to come. They compare the
so-called "more perceptible coding from 12 bit" with the linear 8 bit.

Timo Autiokari

Timo Autiokari

Mar 5, 1998

On 03 Mar 1998 16:48:05 +0100, Walter Hafner <haf...@forwiss.tu-muenchen.de>
wrote:

>Is the method of finding the gamma-factor of particular displays as
>described in http://www.povray.org/binaries/ (last paragraph) any good?

Yes it is, if you also read the directions from the text file.

I would like to invite you to see mine at:
http://www.clinet.fi/~timothy/calibration/g/index.htm

Timo Autiokari

Timo Autiokari

Mar 5, 1998

In message 6dkin6$qb4@redgreen.cs.ubc.ca on 1998/03/04 da...@cs.ubc.ca (Dave
Martindale) wrote:

>tim...@clinet.fi (Timo Autiokari) writes:
>>This saturation amounts to artifacts. So the data deteriorates and is no
>>longer good for printing.

>Again, rubbish. Suppose you start with an image that has been linearly
>encoded. You look at the histogram and decide to rescale the image with
>a white point of 230. When you do this, all values from 231 to 255 are
>clamped to 255, while the range from 0 to 230 is rescaled to span 0 to 255.

>Now suppose you start with the *same* image, but stored using gamma 0.5
>encoding. A portion of the image that had a sample value of 230 in the
>linear image would now be stored as a sample value of 242 in the new image.
>So if you rescale the image with a new white point of 242, this produces
>*exactly the same* visual result as rescaling at 230 in the linear image.
>Exactly the same portions of the image saturate and are clamped at 255.
>All other portions of the image are increased in brightness *by exactly
>the same amount*. There is no difference at all in the result - except
>that the gamma-corrected image has fewer quantization artifacts in
>shadow areas, like it always did.

You are quite wrong with the above.

Your level 242 in the gamma-compensated image contains large quantization
error. Level 230 in the linear image does not contain _any_ quantization
error. So the image will be in much better condition for the printer.
Quantization is the prime source of artifacts.

>If you apply a linear transform to linearly encoded pixels,
>the result is "linear" in the mathematical sense. Mathematically, the
>sample values are linearly related to the intensity if the function
>relating the two looks like:

> sample = A * intensity + B

>That's a linear equation. But in photographic terms, if the sample value
>is linearly related to intensity, the value "B" in the above function must
>be zero. This is necessary to have the property that doubling the
>intensity doubles the sample value. Using a black point offset scales
>the sample values so that "B" becomes non-zero, and sample values are no
>longer proportional to intensity. They are proportional to intensity plus
>an offset. If you look at the relationship between scene brightness and
>image brightness on a log-log scale (the way the eye sees), the transfer
>characteristic is no longer a straight line.

>The same thing happens if you apply a black level shift to a gamma encoded
>image.

>Shifting the black level causes this effect in *both* linear and gamma
>encoded images. Try doing the math - it's easy to see.

With the above you are hopelessly wrong.

What you cut out using the black-point change is the _desired effect_.
That is the constant B in your equation. Note, you cut it out. Then what
you have left is the function A * intensity, which is a linear function.

No, you have not done the math, so you do not know; you only believe. Do
not worry, I have made this easy for you. Please see:
http://www.clinet.fi/~timothy/calibration/gamma_errors/comparison.htm
You can download a test setup and do it on your own; no need for math,
just do it and see the result. They do _not_ cause the same effect at all.
The gamma-compensated image gives a very bad result.

>>A warning: the "midpoint" marker in Levels (in Photoshop) is *not* a true
>>gamma control; there is an Adobe tweak in it so that the shadows are not
>>affected much. (This is a problem in Photoshop, not in your reasoning.)

>Yeah, I know about that.

Thank you for confirming it. Now we only need to get Adobe to acknowledge
this problem. Currently there is no way to correctly compensate the gamma
in Photoshop (unless you use a specially created *.amp file).

>>You cannot see a 1/256 linear difference on a monitor. Just experiment
>>and you will see.

>It depends on how your monitor is calibrated, doesn't it? If you choose
>to use linear sample values, then *by definition* a value of 3 should be
>50% brighter than 2. If it *is* that much brighter, you can clearly see
>the step between them. If it *isn't* that much brighter, then your
>monitor is not calibrated to display your images correctly.

It depends on the monitor calibration, but this affects both
gamma-compensated images and linear images. On a properly calibrated
monitor you cannot see the 1/256 linear-light intensity step. Again,
please go to:
http://www.clinet.fi/~timothy/calibration/gamma_errors/comparison.htm
and see the first image.

>Does anyone else think that I'm contradicting Charles Poynton anywhere?
>Or is it just Timo who reads it that way?

You said: "Even though this is in dark areas of the image where the eye's
sensitivity is reduced, "

Mr. Poynton clearly says that the sensitivity of the eye is not _reduced_
there; actually the eye is more sensitive in the shadows. Therefore he
suggests the "more perceptible coding" that puts more codes there.

>>>Here, you are arguing that filtering operations should be done in linear
>>>space. However, you do *not* have to store images using linear samples
>>>in order to do your operations in linear space.
>>
>>Really. And I do not need to use the car to go to work, because I can
>>just use the helicopter.

>Ahem. How is this in any way relevant to the argument?

You said: " I could acquire an image with an excellent cooled CCD camera and
16-bit A/D conversion." That is quite near to the case where I had a
helicopter, financially. So what you say is a very rare exception.

>You can do all your processing in linear space if you want, and you can
>store your image files on disk this way if you want. But either you'll
>have to use 12 bits or more per sample, or you will get quantization
>artifacts. No one is stopping you from doing this.

This is rubbish if anything is. Such a setup is available only to a few.

>On the other hand, most of the world has figured out that they can get
>most of the benefits of 12-bit linear coding in only 8 bits using
>gamma encoding. So they use it. It's a reasonable compromise.

So now you yourself admit that the gamma space is a compromise. BUT there
need not be a compromise. It is a question of software-driven digital
systems. It is the easiest thing to enable the linear space. Why should we
settle for a compromise?

Timo Autiokari

Dave Martindale

Mar 5, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>Your level 242 in the gamma-compensated image contains large quantization
>error. Level 230 in the linear image does not contain _any_ quantization
>error. So the image will be in much better condition for the printer.
>Quantization is the prime source of artifacts.

This makes no sense whatsoever. The original light intensity hitting the
CCD in the camera or the scanner is for all intents and purposes a real
number. To convert it to an 8-bit integer, it must be quantized.
Quantization is just the process of taking a quantity that can have any
real-number value, and deciding how to assign it a fixed integer value.
The integer value assigned always represents a whole range of real values
(intensity, in this case). The integer value represents a particular
real value, and the process of converting from real to integer causes the
value of the intensity to be changed somewhat. This is quantization error.

And it is present for *both* linear and gamma-encoded images. The only
difference is the size of the quantization error. Linear encoding results
in small quantization errors in bright areas, but much larger errors in
dark areas. Gamma encoded images spread the sample values differently,
yielding larger (but still usually not visible) errors in the bright
areas of the image, and reducing errors in dark areas.

But how can you say that a linear image "does not contain any quantization
error"? That doesn't make sense.
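
The difference in error size is easy to compute; a Python/NumPy sketch,
using 0.45 (roughly 1/2.2) as the encoding exponent:

    import numpy as np

    intensity = np.linspace(0.004, 1.0, 10000)   # scene intensity, 0..1

    # 8-bit round trips through the two encodings:
    lin = np.round(intensity * 255) / 255
    gam = (np.round(intensity ** 0.45 * 255) / 255) ** (1 / 0.45)

    rel_lin = np.abs(lin - intensity) / intensity
    rel_gam = np.abs(gam - intensity) / intensity

    dark = intensity < 0.02                      # deep shadows
    print("worst shadow error, linear: %.0f%%" % (100 * rel_lin[dark].max()))
    print("worst shadow error, gamma:  %.0f%%" % (100 * rel_gam[dark].max()))
    # roughly 33% for linear vs roughly 5% for gamma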

>No, you have not done the math, so you do not know; you only believe. Do
>not worry, I have made this easy for you. Please see:
>http://www.clinet.fi/~timothy/calibration/gamma_errors/comparison.htm
>You can download a test setup and do it on your own; no need for math,
>just do it and see the result. They do _not_ cause the same effect at all.
>The gamma-compensated image gives a very bad result.

I've looked at your tests. They do not show what they purport to show at
all.

You start off with a demonstration of black level shift. The "linear"
image just contains all 256 possible grey levels in a nice matrix.
You apply a black level shift, and then perform gamma correction.

Then for the second half of the test, you do the gamma correction (to the
linear image), then do a black level shift, and then compare the results.

However, several things about this are badly done. In the first place,
you should never, ever quantize an image to 8 bits more than once if you
can avoid it, particularly with such widely differing encodings. By
quantizing an image to 8 bits linear, you have thrown away a fair amount
of intensity information, particularly in the shadows. Then, when you
gamma correct that linear image, you discard additional information in
the highlights - but you can't get back the information in the shadows
because it's already been discarded. A similar thing happens when you
produce gamma-encoded 8 bit images and later convert them to linear.

So no matter what order you do the operations in, quantizing to 8-bit
linear and then 8-bit gamma (or vice versa) gives you an image that
is *worse* than a single quantization to either linear or gamma alone
would be. Essentially, you are demonstrating why you don't want to
quantize twice, not why one is better than the other.
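
A sketch of why quantizing twice hurts (Python/NumPy, same 0.45 exponent):

    import numpy as np

    intensity = np.linspace(0.0, 1.0, 100000)

    # Quantize once, straight to 8-bit gamma:
    once = np.round(intensity ** 0.45 * 255)
    # Quantize to 8-bit linear first, then to 8-bit gamma:
    twice = np.round((np.round(intensity * 255) / 255) ** 0.45 * 255)

    print(len(np.unique(once)), "codes used")    # 256
    print(len(np.unique(twice)), "codes used")   # noticeably fewer: bright
                                                 # linear codes collapse together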

The other thing that your demonstration shows is that *if* you are
looking at an artificially-generated grey scale, which you apply a
black-level shift to, and then display gamma corrected, you prefer the
subjective effect caused by applying the black level shift before the
gamma correction rather than after it.

But - so what? If someone was actually working with a gamma-corrected
image, it would come out of the camera or scanner that way - it
wouldn't have the artifacts that you have created by creating a grey
scale in linear space and converting to gamma encoding. In addition,
if you *really* want exactly the effect that you get by a straight
black level shift in linear space, you can get the same effect using
Curves in gamma space. On the other hand, for particular images
you might prefer the effect you get doing a simple black level shift
in gamma space. You need to look at real images when judging this,
not grey ramps.

>images and linear images. On a properly calibrated monitor you cannot see
>the 1/256 linear-light intensity step. Again, please go to:
>http://www.clinet.fi/~timothy/calibration/gamma_errors/comparison.htm
>and see the first image.

I've looked at it. I *can* easily see the first step if I set up my
monitor to display linear images correctly (gamma 1.0). If I set the
monitor to display gamma-encoded images (gamma 2.2) the first step is
not visible. I think this demonstrates that gamma encoding gives much
smaller step sizes than linear. What you seem to be doing is taking
a grey ramp and displaying it directly on your monitor as if it were
already gamma corrected, but pretending that it's linear.

>>Does anyone else think that I'm contradicting Charles Poynton anywhere?
>>Or is it just Timo who reads it that way?
>
>You said: "Even though this is in dark areas of the image where the eye's
>sensitivity is reduced, "
>
>Mr. Poynton clearly says that the sensitivity of the eye is not _reduced_
>there; actually the eye is more sensitive in the shadows. Therefore he
>suggests the "more perceptible coding" that puts more codes there.

We are not contradicting each other; you just haven't understood what
we are saying. Charles is saying that, when the response of the eye is
looked at in *absolute* terms, sensitivity is greater in the shadows.
I am talking about sensitivity in *relative* terms, and in those terms
contrast sensitivity goes down somewhat with decreasing light.

Let's take an example: Suppose the brightest object in a scene reflects
1000 units of light to our eye. If an object beside it reflects 990
units of light, we will just be able to see the difference in brightness.

Suppose that we have another object that reflects 100 units of light,
and another object beside it that reflects 99 units. It turns out that
we will also just barely be able to see this difference.

Now, the *absolute* difference between the first two objects is 10 units,
while the absolute difference between the next two objects is only 1
unit. But the *relative* difference is the same in both cases: 1% of
the total brightness.

So if you think in absolute terms, the eye is 10 times more sensitive
to changes in intensity when the intensity is reduced by a factor of 10.
It can see a difference of 1 unit at 100, but can't see any change
smaller than 10 units at 1000. On the other hand, if you think in
relative terms, the minimum change the eye can see is 1% of the light
level at both levels.

From this, you might think that the minimum change you could see at a
light level of 10 units would be 0.1 unit - so you could see the
difference between 9.9 and 10. It turns out that you can't, because
the eye's *relative* contrast sensitivity, which is constant at high
light levels, drops at lower levels. So you might only be able to
see the difference between 8 and 10.

How does this affect image encoding? A linear encoding assigns its
available codes as if the eye could see the same *absolute* differences
at all light levels. This is far from true, and you end up with the
codes spaced much more closely together than the eye can discriminate
in the bright areas, and so far apart that the eye sees quantization
in the dark areas. A logarithmic encoding provides constant *relative*
spacing throughout the brightness range, and in fact works really,
really well if you give it 9 or 10 bits. But at 8 bits, the representable
sample values are about 2% apart, and you can see quantization effects
in the brighter areas of the image, so it doesn't work.
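
The 2% figure is quick to verify; a short Python sketch of the constant
ratio between adjacent codes in an 8-bit logarithmic encoding:

    # With 256 codes spread logarithmically over a contrast range R,
    # adjacent codes differ by the constant ratio R ** (1/255).
    for contrast in (100, 300, 1000):
        step = contrast ** (1.0 / 255.0) - 1.0
        print("%5d:1 -> %.1f%% per step" % (contrast, 100.0 * step))
    # ~1.8%, ~2.3%, ~2.7% - right around the ~2% visibility threshold
    # in bright areas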

8-bit gamma encoding is somewhere in between linear and logarithmic.
It gives fewer codes to the highlight, but still enough that you can't
see the steps. It gives more to the shadow areas, making much smaller
steps than linear, and *usually* you can't see the size of steps that
remain because of the eye's reduced sensitivity to *relative* brightness
changes in shadow areas. It's a compromise which usually works, more
often than linear or log do at 8 bits wide.

>You said: " I could acquire an image with an excellent cooled CCD camera and
>16-bit A/D conversion." That is quite near to the case where I had a
>helicopter, financially. So what you say is a very rare exception.

But my point was that you *can* store your images in non-linear space
(e.g. gamma encoded) and still do your image processing operations in
linear space. Just take the 8-bit gamma encoded pixels, convert them to
12-bit linear using a lookup table (cheap!), do the image processing
operation, and convert back to 8-bit gamma using another lookup table.
This will give a result that has far fewer artifacts than storing and
processing the image as 8-bit linear. Now, how is this like a helicopter?
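
A minimal sketch of that hybrid scheme in Python/NumPy, assuming gamma 2.2
and a 12-bit linear working space:

    import numpy as np

    GAMMA = 2.2
    # 8-bit gamma -> 12-bit linear, and back, as lookup tables:
    to_lin12 = np.round(((np.arange(256) / 255.0) ** GAMMA) * 4095.0)
    to_gam8 = np.round(((np.arange(4096) / 4095.0) ** (1.0 / GAMMA))
                       * 255.0).astype(np.uint8)

    def process_in_linear(img8, linear_op):
        lin = to_lin12[img8]                         # decode via LUT (cheap)
        lin = np.clip(linear_op(lin), 0.0, 4095.0)   # do the work in linear light
        return to_gam8[np.round(lin).astype(int)]    # re-encode via LUT

    img8 = np.arange(256, dtype=np.uint8).reshape(16, 16)  # hypothetical image
    brighter = process_in_linear(img8, lambda lin: lin * 1.1)  # 10% more light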

Are you saying that Photoshop is like a car (i.e. you have one), while
doing 8-bit gamma/12-bit linear hybrid processing is like a helicopter
(because you don't have one)? Then I do have a helicopter - I just
write whatever software I need when Photoshop isn't good enough.

>>On the other hand, most of the world has figured out that they can get
>>most of the benefits of 12-bit linear coding in only 8 bits using
>>gamma encoding. So they use it. It's a reasonable compromise.
>
>So now you yourself admit that the gamma space is a compromise. BUT there
>need not be a compromise. It is a question of software-driven digital
>systems. It is the easiest thing to enable the linear space. Why should we
>settle for a compromise?

Anything is a compromise. Even if you use a cooled CCD and 16-bit A/D
converter, you still lose some of the intensity information that was
present in the original image. But there comes a point when an imaging
system is good enough to capture an image without visible artifacts.
This happens when it can capture the entire brightness range of interest
without clipping, and maintains sufficient intensity resolution in both
highlights and shadow that you can't see the steps. I find 32-bit
floating point, 16-bit linear, and 10-bit logarithmic encodings, when
designed well, capable of handling this task. Nothing else I've seen is.

But if you restrict yourself to 8 bits, *any* 8 bit encoding at all, I will
bet I can show you an image that has artifacts using that encoding.
8 bit linear is the worst of them. You might not see its artifacts
very often, if you work only in reflection prints where the contrast
range is only 30:1, and only with digitized images that have noise in
them. But it is not adequate for many people.

Linear is simpler than other systems. But it involves worse compromises
than 8-bit gamma for most users of imaging systems.

Dave

Dave Martindale

Mar 5, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>My pleasure:
>http://www.clinet.fi/~timothy/calibration/gamma_errors/comparison.htm

I started out looking at your monitor calibration images. I do not
understand the point of adjusting so "the black horizontal swatch just
merges with the background at the L5 marker". They are both exactly
the same pixel code (43 on the gamma 2.2 chart), so they will *always*
appear the same no matter what the contrast and brightness settings.
The same is true for the white swatch and background at L250.

These are nice demonstrations of the effect of gamma through the whole
brightness range. Other test charts I've seen like this only check a
single point (usually 50% grey).

Unfortunately, I can't look at the unsharp masking test images on your
web pages. My version of Netscape will not handle PNG directly. It is
configured to save PNG images to disk, but it just dies instead. I did
fetch the original images and process them in Photoshop in the way that
you indicated, so I hope I'm looking at the same images you are for the
comments that follow:

I think the main difference between the two "text" images is that you
didn't scale the "amount" of the mask properly when you were working in
gamma encoded space. Equal amounts of image processing operations do
not produce the same effect in gamma encoded space as they do in linear
space. For example, if you want to make the entire image 10% brighter
in linear space, you multiply every sample value by 1.1. If you want
the same effect in gamma-2.2 space, you need to multiply by

1.1 ^ (1/2.2) = 1.04

to get the same effect. The same applies to image processing
operations. To get the same effect visually, the numerical size of the
change must be different.
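
That factor follows directly from the encoding exponent; a two-line Python
check:

    # A 10% multiply in linear light equals a 1.1 ** (1/2.2) multiply in
    # gamma-2.2 space, whatever the starting intensity:
    intensity = 0.3
    print(((1.1 * intensity) ** (1 / 2.2)) / (intensity ** (1 / 2.2)))  # ~1.044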

In this case, you probably decided to apply a 120% amount unsharp mask
by judging the effect of the operation by eye, and adjusting until it
looked as good as you could make it. Then you apply the same operation in
gamma space, it doesn't look good, and you conclude that gamma space is
inferior. But that's the wrong conclusion.

If you start out with the gamma encoded image, and apply an unsharp
mask with the same radius and threshold but adjust the amount by hand
for optimum results, you'd probably pick a value of about 80. If you
then started with a new copy of the linear image and applied the same
unsharp mask to it, you would probably think the result was too soft.

The conclusion: you need to set the "amount" differently for image
processing operations in linear and gamma spaces. But this doesn't
prove that one is better than the other, just different.

You also show histograms for the two processed images. Notice how the
gamma one is smoother, while the linear one has some "picket fence"
effects? The picket fence is almost certainly because of the linear->
gamma conversion done with 8-bit inputs and outputs, causing some
additional quantization error. If you do the gamma correction in 16
bit and then convert to 8, or if you just leave the result in 8 bit
linear without doing the gamma correction at all, you get a smooth
histogram.

This is much more dramatic in the second example, which has more
information in the dark areas. See that horrible picket fence at the
left side of the linear histogram? The missing sample values are
information that you have lost forever.

Now about the "lupe" test images: The main problem I have with these
is that the original image is obviously somewhat gamma-corrected in the
first place - it is not linear. Just display the 16-bit original in
Photoshop, and then adjust the screen gamma. With it set to 1.0, the
image should look good if it is truly linear, but it's bright and
washed out. At 2.2, it's much better, but perhaps too dark. Setting
the CRT gamma to 1.8 gives about the best results. From this, I
conclude that the image came from some sort of scanner that applied a
built-in gamma of 1/1.8 to the image.

Given this, the additional 1/2.2 correction that you do via your lookup
table makes no sense. It makes the image look bad. But, you say, it
should look equally bad via both processing routes. Well, they are
pretty close. The difference is that the words "scale lupe" are
clearer in the linear-filtered image. This seems to be because in the
"linear" image, there is a huge brightness difference between the white
text and the black background. The unsharp mask operation actually
generates an overshoot - there is a black border around the white text
that is blacker than the body of the lupe. It's not too visible right
after the unsharp mask operation, but it does become visible after the
"gamma correction" Curves operation. When the unsharp mask operation
is performed on the gamma-corrected data, the black paint isn't nearly
so black, the contrast between white and black isn't so large, and the
overshoot doesn't happen.

When you look at the two images at the correct size, the one with the
overshoot (the black border around some of the letters) looks sharper,
even though it is less correct. Again, this is an example of where you
need to select the unsharp mask "amount" differently for a linear and a
gamma image. If you increased the amount for the gamma image in this
case, you could make it look very close to the linear version. Or
decreasing the amount applied to the linear image would make it look
very nearly like the gamma version.

Thanks for going to the trouble of setting up these sample images. I
still don't agree with your explanations of what is happening, but with
images to look at I can at least see what you see, and see for myself
why it's really happening.

By the way, how did you build the gamma-correction lookup table to use
in "Curves"? I did figure out that the middle box in "Levels" is not
quite proper gamma correction, but didn't figure out how to set up my
own table for it.

Dave

PS: I trimmed the Newsgroups list.

Dave Martindale

Mar 6, 1998

Walter Hafner <haf...@forwiss.tu-muenchen.de> writes:
>Is the method of finding the gamma-factor of particular displays as
>described in
>
>http://www.povray.org/binaries/ (last paragraph)
>
>any good?

The chart they provide is pretty good. Its use of long horizontal black
and white lines avoids problems caused by the more common checkerboard
patterns. It's the best chart I've seen for making a quick one-point
measurement of gamma.

Dave

Timo Autiokari

Mar 6, 1998

On 5 Mar 1998 23:52:47 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:

>I started out looking at your monitor calibration images. I do not
>understand the point of adjusting so "the black horizontal swatch just
>merges with the background at the L5 marker". They are both exactly
>the same pixel code (43 on the gamma 2.2 chart),

Yes. Maybe I should say "seems to merge". If that is a problem, then please
just use the other criterion: set the Brightness so that you can just
discriminate the left end of the swatch from the background.

>so they will *always* appear the same no matter what the contrast and
>brightness settings.

No. For example, set the Brightness to minimum.

>Unfortunately, I can't look at the unsharp masking test images on your
>web pages. My version of Netscape will not handle PNG directly.

I noticed this too: v4.02 does not show PNG, 4.04 does; I do not know
about 4.03.

>I think the main difference between the two "text" images is that you
>didn't scale the "amount" of the mask properly
>

> 1.1 ^ (1/2.2) = 1.04
>
>to get the same effect. The same applies to image processing
>operations. To get the same effect visually, the numerical size of the
>change must be different.

No. Most of the operations are kernels; if you decrease the amount then
you do not get the effect. Due to the gamma compensation, some operations
affect the shadows heavily and the highlights only slightly. In some other
operations it happens vice versa. On linear images they operate equally
well all over the image.

Maybe you could put up a better demonstration. Maybe you could show that
you get equally good sharpening for the gamma image as I get for the
linear image. That is why I provide the originals. Or use your own
originals. But you cannot, I know, I have tested this; but please try, and
if you succeed, put it up on the web, do not just speculate. Please. If
you lack the web space, I will provide it for you.

>The conclusion: you need to set the "amount" differently for image
>processing operations in linear and gamma spaces. But this doesn't
>prove that one is better than the other, just different.

The eye can see sharpness differences quite well, so it is not just a
difference.

>You also show histograms for the two processed images. Notice how the
>gamma one is smoother, while the linear one has some "picket fence"
>effects? The picket fence is almost certainly because of the linear->
>gamma conversion done with 8-bit inputs and outputs, causing some
>additional quantization error.

Well, that is one of the _main_ points in my presentation. These gaps
represent the 0.4% linear-light intensity step on a gamma 2.2 monitor. And
the eye cannot see such a small step.

>This is much more dramatic in the second example, which has more
>information in the dark areas. See that horrible picket fence at the
>left side of the linear histogram? The missing sample values are
>information that you have lost forever.

They are the 0.4% intensity steps there too. It is neither dramatic nor
horrible; it is just the gamma compensation that you see: the monitor will
put them side by side again, in 0.4% increments.

>Now about the "lupe" test images: The main problem I have with these
>is that the original image is obviously somewhat gamma-corrected in the
>first place - it is not linear.

You can consider them to be bullet-straight linear images. If you see
something else, then your system is not properly calibrated.

>Just display the 16-bit original in Photoshop, and then adjust the screen
>gamma. With it set to 1.0, the image should look good if it is truly linear,

Yes, _provided_ that the Gamma Slider is adjusted properly in the Calibrate
dialog, and it very rarely is. Please see
http://www.clinet.fi/~timothy/calibration/gammaimg.htm

>but it's bright and washed out.

Then you _really_ do not have a linear setup. Please download
http://www.clinet.fi/~timothy/calibration/mci07.gif and display it in
Photoshop. If you see that the gamma swatches match, then you have a
linear setup.

>At 2.2, it's much better, but perhaps too dark. Setting the CRT gamma to
>1.8 gives about the best results. From this, I conclude that the image came
>from some sort of scanner that applied a built-in gamma of 1/1.8 to the image.

As I say on the page, the images are from a Canon EOS*DCS*3 _camera_. It
provides linear images. Ask Canon. The originals are 12-bit linear images
from the Canon EOS*DCS*3 camera. It is not the same as the Kodak EOS*DCS*3
camera.

>The difference is that the words "scale lupe" are clearer in the
>linear-filtered image. This seems to be because in the "linear"
>image, there is a huge brightness difference between the white
>text and the black background. The unsharp mask operation actually
>generates an overshoot

You just keep on. Go to the lupe text and zoom up to 5x. You can then see
that the overshoot is in the image that was processed in the gamma space;
that is why it lacks sharpness. And you can see at 5x that in the image
that was processed in the linear space the intensity edge is much cleaner
and much sharper.

>Thanks for going to the trouble of setting up these sample images.

You are welcome. I only hope for an objective assessment.

>I still don't agree with your explanations of what is happening, but with
>images to look at I can at least see what you see, and see for myself
>why it's really happening.

Yes. It is hard when one has believed so hard and so long in something
that then, all of a sudden, appears to have been all in vain.

>By the way, how did you build the gamma-correction lookup table to use
>in "Curves"? I did figure out that the middle box in "Levels" is not
>quite proper gamma correction, but didn't figure out how to set up my
>own table for it.

I wrote them using VBA in Excel. There are some hundreds of them zipped on
my site, ready for download. What is a bit irritating is that there is a
bug in the Actions feature in Photoshop, so that an Action does not run a
Curves adjustment (one that loads a previously saved curve). You can
create the action, and you can run it. But nothing happens to the image
when you run it.
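
Such a table takes only a couple of lines to generate; a sketch in Python
rather than Excel, assuming a plain power-law curve over the 0..255 range:

    # 256-entry gamma-correction table of the kind loaded into Curves;
    # GAMMA is the display gamma being compensated for.
    GAMMA = 2.2
    table = [round(255 * (i / 255) ** (1 / GAMMA)) for i in range(256)]
    # e.g. table[128] == 186: the midpoint maps to a much lighter output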

Timo Autiokari

Dave Martindale

Mar 6, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>>so they will *always* appear the same no matter what the contrast and
>>brightness settings.
>
>No. For example, set the Brightness to minimum.

The L5 patch and the surround are exactly the same pixel code, so they
will always be the same brightness on any monitor at any brightness
or contrast setting.

Perhaps what you really mean is "adjust the brightness so the patch
to the left of L5 is visibly darker than the surround".

>>I think the main difference between the two "text" images is that you
>>didn't scale the "amount" of the mask properly
>>
>> 1.1 ^ (1/2.2) = 1.04
>>
>>to get the same effect. The same applies to image processing
>>operations. To get the same effect visually, the numerical size of the
>>change must be different.
>
>No. Most of the operations are kernels; if you decrease the amount then you do
>not get the effect. Due to the gamma compensation, some operations affect
>shadows heavily and highlights only slightly. In other operations it happens
>vice versa. On linear images they operate equally well all over the image.

Yes, they operate differently. In the case of the text, the text is
about the same darkness and the background is about the same lightness
so you can get a good result from Unsharp Mask in either linear or gamma
images. Your demonstration does not show that because it uses the wrong
amount of unsharp mask for the image in question - it's not a fair
demonstration. For this image, either linear or gamma works fine.

I will grant you that you probably *could* find an image that needs
the same amount of unsharp masking in shadows and highlights, and processing
the image in linear space does exactly what you want everywhere, while
doing the processing in gamma space results in too much sharpening in the
bright areas and too little in the shadows. The math says that this
may happen. If you found such an image and showed it to us, your argument
would be more convincing.

But what you've shown us so far just shows errors in processing, not an
image that is processed correctly in linear space and which *cannot* be
processed correctly in gamma space, no matter what the settings.

>Maybe you could put up a better demonstration. Maybe you could show that you
>get equally good sharpening for the gamma image as I get for the linear image.
>That is why I provide the originals. Or use your own originals. But you cannot,
>I know, I have tested this, but please try, and if you succeed put it up on the
>web, do not just speculate. Please. If you lack the www space, I will provide
>it for you.

Try taking your text image and sharpening it with 80% instead of 120%. The
text edges look almost identical. Any difference is well below the level
of other artifacts in the image.
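
The ratio quoted earlier (1.1 ^ (1/2.2) = 1.04) is easy to check; a two-line
sketch, where the 10% linear overshoot is just an illustrative assumption:

    gamma = 2.2
    overshoot_linear = 1.10                 # a 10% overshoot in linear light
    print(overshoot_linear ** (1.0 / gamma))
    # ~1.04: the same visible overshoot needs a much smaller numerical
    # "amount" when the data is gamma encoded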

>Well, that is one of the _main_ points in my presentation. These gaps
>represent the 0.4% linear light intensity step on a gamma 2.2 monitor. And the
>eye cannot see such a small step.

The eye sees *relative* intensity changes, not absolute ones. In the highlight
areas, the steps are indeed 0.4% of peak white (255), and you indeed cannot
see them. But at 1/30 of full intensity, the sample values have already
dropped to 8. The difference between 8 and 9 is 12.5% of the brightness
of the value 8. It doesn't matter that it's 0.4% of a bright area somewhere
else in the image; it's 12.5% of the local brightness, and the eye *can*
see that. Check any text on human vision - it is the relative intensity
that matters, not the absolute.
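
To make the arithmetic concrete, a minimal sketch (the ~1% visibility
threshold is the usual Weber-fraction rule of thumb, an assumption here):

    import numpy as np

    codes = np.arange(1, 256)       # 8-bit linear-light sample values
    rel_step = 1.0 / codes          # one code step, relative to local level

    print(rel_step[7])              # step from 8 to 9: 0.125, i.e. 12.5%
    print(rel_step[254])            # step from 254 to 255: ~0.4%
    print(codes[rel_step > 0.01])   # steps above ~1% (visible banding):
                                    # every code below about 100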

Another way of thinking of this: Can you draw a grey scale that shows all
of the pixel values from 0 to 255, and where you *cannot* see any of the
boundaries between adjacent patches? You can't do it with linear images.
(No fair using Photoshop's gradient, which adds noise to the gradient to
smooth it. Use a patch of all 0, a patch of all 1, etc.) Your "linear.psd"
test image is essentially already the image needed. But if you display
that image linearly (monitor gamma of 1), you see the steps. If you
treat the image as gamma encoded (just set monitor gamma to 2 or 2.2),
the boundaries mostly disappear.

>>This is much more dramatic in the second example, which has more
>>information in the dark areas. See that horrible picket fence at the
>>left side of the linear histogram? The missing sample values are
>>information that you have lost forever.
>
>They are the 0.4% intensity steps there too. It is neither dramatic nor
>horrible; it is just the gamma compensation that you see: the monitor will put
>them side by side again, in 0.4% increments.

No, there are large gaps between the "pickets". Some of those steps are
larger than 1% even in absolute intensity relative to 255. Surely you'll
agree that steps this large are not good?

Dave

Dave Martindale

unread,
Mar 6, 1998, 3:00:00 AM3/6/98
to

tim...@clinet.fi (Timo Autiokari) writes:
>I noticed this too: v4.02 does not show PNG, 4.04 does, I do not know about
>4.03.

Unfortunately, some of us are still using 3.01 on our old slow computers.
It is nice to see someone using PNG, and I *can* view PNGs if I can get
them downloaded. Perhaps you could put the PNG images into a zip file
that people can FTP. The problem with the current setup is that Netscape
dies if I force it to fetch those images, and until they've been fetched
Netscape won't save them.

>You can consider them to be bullet-straight linear images. If you see
>something else, then your system is not properly calibrated.
>
>>Just display the 16-bit original in Photoshop, and then adjust the screen
>>gamma. With it set to 1.0, the image should look good if it is truly linear,
>
>Yes _provided_ that the Gamma Slider is adjusted properly in the Calibrate
>dialog and it very rarely is. Please see
>http://www.clinet.fi/~timothy/calibration/gammaimg.htm

I've looked at your (nicely done) gamma calibration images. I'm using
a Mac, where the overall gamma of the display is set by the Gamma
control panel that comes with Photoshop, and SGI workstations, where I use
my own tool to load the LUT for gamma correction.

I can confirm that your 10.gif file looks right when I set the gamma to
1.0, that 18.gif looks right when I set the gamma to 1.8, and 22.gif
looks correct when the gamma is set to 2.2. So the monitor is set up
correctly.

>then you _really_ do not have a linear setup. Please download the
>http://www.clinet.fi/~timothy/calibration/mci07.gif and display it in Photoshop.
>If the gamma swatches match, then you have a linear setup.

I've loaded that, and the swatches do match when the display gamma is
set to 1.0. I *am* properly calibrated for linear display. But the
16-bit original image just does not look good under those conditions.
It looks like it has been gamma-corrected.

How did you produce the 16-bit original, anyway? I did a histogram of
it, and there are only 4067 non-zero entries, suggesting data that was
originally 12 bits. But it hasn't just been expanded from 12 to 16
bits by left-shifting or multiplying, because in that case the non-zero
entries would be spaced 16 codes apart. They aren't; they are about
every 10 apart at the upper (bright) end of the histogram, but much
further apart in the dark areas. If I delete all of the zero entries
in the histogram, I'm left with the set of 4067 sample values that are
used in the 16-bit file.

If I assume that these are 4067 consecutive sample values in the
original 12-bit data (i.e. the original image contained values 0, 1,
..., 4066) and plot them as an X-Y graph, that should show me the
transfer function that was used between the 12-bit original data and
the 16-bit data in your image file. When I do this, I get a nice
smooth curve that looks very much like a power function with an
exponent of about 2. If I replot the data on a log-log scale, I get an
almost straight line, with some droop in the dark areas.

From the look of this graph, and the general appearance of the image, I
have to conclude that some gamma correction *was* applied to this image
in converting from 12 to 16 bit form.

Then I used the histogram to build an inverse lookup table, converting
16-bit values back to their original 12-bit values (assuming each
12-bit value occurred at least once in the original image). When I do
this, I get a 12-bit image that *does* look linear.
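
That analysis is easy to reproduce; a sketch, assuming the 16-bit data is
already loaded into a NumPy array img16 (the file reading is left out):

    import numpy as np

    hist = np.bincount(img16.ravel(), minlength=65536)
    used = np.flatnonzero(hist)           # the 4067 distinct 16-bit codes

    # Assume the used codes correspond to consecutive 12-bit inputs
    # 0, 1, ..., 4066 and fit a power law on a log-log scale; a slope
    # of 1 would mean a plain rescale, anything else is a curve
    x = np.arange(1, used.size)           # skip 0 so the log is defined
    slope, _ = np.polyfit(np.log(x), np.log(used[1:].astype(float)), 1)
    print("apparent exponent:", slope)

    # Inverse lookup: map each used 16-bit code back to its 12-bit index
    inv = np.zeros(65536, dtype=np.uint16)
    inv[used] = np.arange(used.size, dtype=np.uint16)
    img12 = inv[img16]                    # this version looks linear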

>As I say on the page, the images are from a Canon EOS*DCS*3 _camera_, which
>provides linear images. Ask Canon. The originals are 12-bit linear images from
>the Canon EOS*DCS*3 camera; it is not the same as the Kodak EOS*DCS*3 camera.

Yes, it appears that the 12-bit image was linear, but something
happened during the 12->16 bit conversion. I can FTP the 12-bit image
I recreated for you to look at if you like, along with the lookup table
that did it. Just tell me where to put them.

Another thing that would be very useful: photograph a grey step tablet,
like the "Kodak Gray Scale" or similar. Then put the 12-bit data
somewhere. This would allow measuring the actual transfer
characteristic of the camera, rather than trusting Canon's description
of it.

>Yes. It is hard when one has believed so hard and so long in something that
>then, all of a sudden, appears to have been in vain.

Indeed. But some of your demonstrations have flaws that need to be
fixed before you'll convince me that the remaining differences are
solely due to the difference between linear and gamma encoding. We do
seem to be converging towards understanding, though. Please figure out
what went wrong during the 12->16 bit conversion, and retry the unsharp
mask test with truly linear and gamma-encoded images with individually
optimized "amount" values, and we can look at them again.

Or if you can provide the original 12-bit data directly from the Canon
camera, I can do the equivalent at this end, without the bogus
conversion to 16 bits. There seems to be no reason to use 16 bits at
all with this image, except that Photoshop doesn't support 12 bit data
very well.

Dave

Timo Autiokari

unread,
Mar 7, 1998, 3:00:00 AM3/7/98
to

On 6 Mar 1998 18:02:46 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:

>Unfortunately, some of us are still using 3.01 on our old slow computers.

I have changed that. They are now all JPEG by default, and there are currently
8 examples. Please see
http://www.clinet.fi/~timothy/calibration/gamma_errors/index.htm
More to come.

>I've looked at your (nicely done) gamma calibration images. I'm using
>a Mac, where the overall gamma of the display is set by the Gamma
>control panel that comes with Photoshop, and SGI workstations using my
>own tool to load the LUT for gamma correction.

I have PC systems, and the originals are linear. Please see the examples: if
there were a 1.6 gamma compensation in the originals, then after I add a gamma
2.2 compensation there would be a total gamma compensation of 3.52, which is
quite impossible. Possibly your problem is with your own tool that loads the
LUT.

>How did you produce the 16-bit original, anyway? I did a histogram [...]
>Yes, it appears that the 12-bit image was linear, but [...]
>But some of your demonstrations have flaws that [...]
>Please figure out what went wrong during the 12->16 bit conversion, [...]

Nothing went wrong. The images are not the _raw_ data from the camera; the
black point and white point have been linearly adjusted, slightly. The other
way to achieve this would be to set up the lighting and exposure settings with
the utmost care (and a lot of lighting hardware). This is very time-consuming
and most often impossible. So the originals are slightly scaled, linearly.
There will _always_ be some amount of fencing in any sort of 12-bit camera
data when the lighting on the scene does not match the dynamic range of the
camera perfectly. However, I did the lighting carefully, so only a small
adjustment was required. After you gamma-compensate the original and then
convert it into the 8-bit space, you will see from the histogram that the
image does indeed have the so-called "more perceptual coding" (no gaps in the
histogram).

Timo Autiokari


Dave Martindale

unread,
Mar 7, 1998, 3:00:00 AM3/7/98
to

tim...@clinet.fi (Timo Autiokari) writes:
>I have PC systems, and the originals are linear. Please see the examples: if
>there were a 1.6 gamma compensation in the originals, then after I add a gamma
>2.2 compensation there would be a total gamma compensation of 3.52, which is
>quite impossible. Possibly your problem is with your own tool that loads the
>LUT.

Please read what I've written carefully. My own LUT-loading tool and the
Mac's Gamma control panel *are* working correctly, because calibration test
images appear correct. This includes your own calibration images - they
appear correct too. If there were any problem, your own calibration images
would reveal it. There is none.

>Nothing went wrong. The images are not the _raw_ data from the camera; the
>black point and white point have been linearly adjusted, slightly. The other
>way to achieve this would be to set up the lighting and exposure settings with
>the utmost care (and a lot of lighting hardware). This is very time-consuming
>and most often impossible. So the originals are slightly scaled, linearly.
>There will _always_ be some amount of fencing in any sort of 12-bit camera
>data when the lighting on the scene does not match the dynamic range of the
>camera perfectly. However, I did the lighting carefully, so only a small
>adjustment was required. After you gamma-compensate the original and then
>convert it into the 8-bit space, you will see from the histogram that the
>image does indeed have the so-called "more perceptual coding" (no gaps in the
>histogram).

No, the 16-bit image is *not* linearly scaled from the original 12-bit
data. If you did a straightforward scaling from 12 bits to 16 bits,
you would either just multiply each sample value by 16, or multiply by
65535/4095. If you then take a histogram of the 16-bit data, most of
the "bins" in the histogram will be zero, but the non-zero bins will
be spaced every 16 apart. Even if you adjusted the black point as
well as the white point, the mathematics of a linear transform
guarantee that the spacing of the sample values in 16-bit code will
be uniform, though not necessarily every 16 anymore. Do you agree so far?
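
For concreteness, a minimal sketch of that claim (the black point of 50 and
white point of 4000 are made-up values):

    import numpy as np

    x12 = np.arange(4096)                        # all 12-bit codes
    y16 = np.round((x12 - 50) * 65535.0 / (4000 - 50)).clip(0, 65535)
    print(np.unique(np.diff(np.unique(y16))))    # only 16s and 17s:
                                                 # uniform up to rounding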

In fact, the histogram of your 16-bit image is not like this at all.
The non-zero bins are very far apart in the shadow areas of the image,
and much closer together in the highlights. From the histogram, I can
approximately reconstruct the function used to map 12 bits to 16 bits,
and its graph is curved, not straight. The graph is approximately the
same as one that is performing gamma correction for a gamma somewhere
around 1.7.

Now, if you don't believe me, just find a tool that creates a histogram
of your own 16-bit "lupe" image and look at it for yourself. Or I can
mail you the one that I generated here if you don't have such a tool.
Then build a map of what original 12-bit codes were mapped to what 16-bit
codes - or I can mail you the one I have built already. Then draw a
graph of this mapping. It is not a straight line. Then generate an ideal
gamma correction function with an exponent of 1/1.7, 12 bits input and 16
bits output. Graph it too, and see how closely it matches
the 12-bit to 16-bit mapping that you used. Or I can mail you an Excel
spreadsheet that shows this.

You seem to believe that I'm waving my hands and making false and
unsupported assertions. But I downloaded the 16-bit image that *you*
provided and did some simple computations on it. Anyone else who
fetches the same image can do the same. And there is clearly something
wrong in the 12- to 16-bit conversion that produced an effect that is
very close to a gamma change. There can be no doubt about this if
you actually look at the data.

Given this, I ask again: how did you do the 12 to 16 bit conversion?
Did you use Photoshop's Levels feature, or something else? If you used
Levels, did you just set the black and white point, or did you set the
midpoint slider as well? (This would cause a gamma change). Or perhaps
you left the midpoint slider alone, but Photoshop did something strange
that involved the monitor gamma you have set. I do know that PC Photoshop
handles gamma correction differently from Mac and SGI Photoshop, because
the system handles the gamma correction on the latter two platforms and
Photoshop seems to do that itself on the PC.

Or could you simply make the unaltered 12-bit data available? I'm just
trying to diagnose what happened during the 12-bit to 16-bit conversion.
It absolutely, definitely, was not a linear mapping. I'm not imagining
it, and I'm not doing the math wrong. Please try duplicating the
calculations I did - or I can mail you the resulting files and spreadsheet.

It may be that the 12->16 bit conversion doesn't affect what you have
on your web page. Perhaps you worked directly from the 12-bit original
without using the 16-bit image. But I'm trying to reproduce what you
did using the 16-bit image, and it is clearly *not* equivalent to the
original 12-bit data. So I'm stuck until I get a decent version of
the 12-bit data.

As for there being no gaps in the 8-bit histogram: Well, of course there
aren't. Any reasonable mapping that takes 12-bit data into 8-bit data,
even nonlinearly, even if it makes a pass through 16 bits on the way,
should have filled histogram bins. There are so many fewer output bins
than original sample values that this should always happen. Typically,
the only time you get unfilled bins in the histogram is when you map from
8 bits to 8 bits - and that always involves some degradation in the image.
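
A quick sketch of why an 8-bit to 8-bit remap must leave gaps, using a plain
gamma-correction LUT as the example operation:

    import numpy as np

    codes = np.arange(256)
    lut = np.round(255 * (codes / 255.0) ** (1 / 2.2)).astype(np.uint8)
    print(256 - np.unique(lut).size)   # dozens of output codes are never
                                       # produced, so the histogram of the
                                       # remapped image must have gaps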

Dave

Martin Tom Brown

unread,
Mar 9, 1998, 3:00:00 AM3/9/98
to

In article <6dpjob$a...@redgreen.cs.ubc.ca>
da...@cs.ubc.ca "Dave Martindale" writes:

> tim...@clinet.fi (Timo Autiokari) writes:

> >No. Most of the operations are kernels; if you decrease the amount then you
> >do not get the effect. Due to the gamma compensation, some operations affect
> >shadows heavily and highlights only slightly. In other operations it happens
> >vice versa. On linear images they operate equally well all over the image.
>
> Yes, they operate differently. In the case of the text, the text is
> about the same darkness and the background is about the same lightness
> so you can get a good result from Unsharp Mask in either linear or gamma
> images. Your demonstration does not show that because it uses the wrong
> amount of unsharp mask for the image in question - it's not a fair
> demonstration. For this image, either linear or gamma works fine.

I can't comment on that particular image, but in general he does have
a point where manipulations which require linearity are concerned.
For example, given an oversampled image 2N x 2M, if I want to predict
accurately the image at the lower resolution N x M, I have to go back to
linear intensity space to do the intensity summation calculation.
Binning the data down in blocks of four requires that the intensity
data be used, and not the nonlinear gamma data.

If you do it on gamma-corrected data you get a bias, because

    a^g + b^g + c^g + d^g  !=  (a+b+c+d)^g    for g != 1

where x^g means the value x raised to the power g.
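
A minimal numerical illustration of that bias, binning one 2x2 block down to
a single pixel (the sample values are made up):

    import numpy as np

    g = 2.2
    block = np.array([0.9, 0.9, 0.9, 0.1])    # linear intensities of a 2x2
                                              # block straddling a bright edge
    correct = block.mean()                    # average in linear space: 0.70
    naive = (block ** (1 / g)).mean() ** g    # average the gamma-encoded
                                              # values, then decode: ~0.62
    print(correct, naive)                     # the naive result is too dark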

> But what you've shown us so far just shows errors in processing, not an
> image that is processed correctly in linear space and which *cannot* be
> processed correctly in gamma space, no matter what the settings.

The counter-example is above. However, there is nothing at all to
stop you taking a gamma-corrected image back to true intensity space,
doing the calculations, and then gamma-correcting it for final display.

It is more usual to keep scientific images in linear intensity space
until it is required to display them, and then gamma-correct. Unless
you intend to do photometry, the differences may well be academic.

Regards,
--
Martin Brown <mar...@nezumi.demon.co.uk> __ CIS: 71651,470
Scientific Software Consultancy /^,,)__/


Timo Autiokari

unread,
Mar 9, 1998, 3:00:00 AM3/9/98
to

On Mon, 09 Mar 98 09:51:44 GMT, Martin Tom Brown <Mar...@nezumi.demon.co.uk>
wrote:

>However, there is nothing at all to stop you taking a gamma-corrected
>image back to true intensity space, doing the calculations, and then
>gamma-correcting it for final display.

Yes there is. Un-compensating and then compensating again, back and forth,
will eat details: not from the shadows, which convert accurately, but the
midtones and highlights are heavily affected.

>It is more usual to keep scientific images in linear intensity space
>until it is required to display them and then gamma correct.

It may be more usual there, but it is more accurate for all imaging, not just
for scientific images.

>Unless you intend to do photometry the differences may well be academic.

No, there are huge differences.
Please see my comparisons and if you like repeat the experiments.
http://www.clinet.fi/~timothy/calibration/gamma_errors/index.htm

Timo Autiokari

Dave Martindale

unread,
Mar 9, 1998, 3:00:00 AM3/9/98
to

Mar...@nezumi.demon.co.uk writes:
>I can't comment on that particular image, but in general he does have
>a point where manipulations which require linearity are concerned.
>For example given an oversampled image 2N x 2M if I want to predict
>accurately the image at lower resolution N x M I have to go back to
>the linear intensity space to do the intensity summation calculation.
>The process of binning down the data in blocks of four requires
>that the intensity data be used and not the non linear gamma data.
>
>If you do it on gamma-corrected data you get a bias, because
>
> a^g + b^g + c^g + d^g != (a+b+c+d)^g

Yes, that's true for any nonlinear encoding, not just gamma encoding.
Several people, including myself, have agreed with Timo that image
processing operations are ideally *applied* in a linear space.
This does not necessarily mean that images should be *stored* in
a linear encoding - the two issues are separate.

>However, there is nothing at all to
>stop you taking a gamma corrected image back to true intensity space
>doing the calculations and then gamma correcting it for final display.

Indeed, I've suggested this to Timo. He does not seem receptive to
this suggestion.

As far as I can tell from what he's said and his examples, Timo *does*
have a point. Sometimes linear *is* better, under certain assumptions.

1) If the only image processing tool you use is Photoshop, and

2) you are working on images that will ultimately be printed on paper,
so the contrast range is limited to 30:1 or less,

then using 8-bit linear encoding to store your images might be the best
tradeoff available.

My reasoning to support this is:

1) Photoshop forces you to do most image processing with 8-bit samples,
since it supports almost no operations on wider data.

2) This forces you to choose between 8-bit linear and some non-linear
encoding during processing. 8-bit linear avoids the processing
bias you described above, but has the problem of banding artifacts
due to insufficient resolution in the shadows. 8-bit gamma encoding
avoids the banding artifacts, but has some errors in the image
processing operations.

3) Many Photoshop users are working on images that will go back to film,
so a 100:1 brightness range (or more) is needed. Gamma encoding is
likely necessary for them to avoid artifacts. But if your images
are limited to 30:1 or less, this may not be a problem.

4) The biases introduced by nonlinear sample values are significant in
certain types of filtering operations, but not in many of the other
things that Photoshop does. So the importance of linear samples
depends on what you use Photoshop for.

In the end, if you have to use Photoshop, and your only output medium
is paper, and you make significant use of image filtering operations in
Photoshop, then linear samples may indeed give the best results. This
does seem to describe Timo's situation.

The problem is that many other Photoshop users have different
situations, and gamma-encoded samples are an overall better tradeoff
for many of those users.

In addition, many other people are not bound to Photoshop at all, and
have the freedom to do image processing operations on wider samples,
and the freedom to store images in one space while processing them in
another space. These users have yet different tradeoffs available.

The problem with Timo's articles is that, having identified that 8-bit
linear is theoretically better for something, he has turned this into
a religious crusade and is painting anyone who advocates using gamma-
encoded pixels under any conditions as the devil.

Dave

Dave Martindale

unread,
Mar 9, 1998, 3:00:00 AM3/9/98
to

Mar...@nezumi.demon.co.uk writes:
>It is more usual to keep scientific images in linear intensity space
>until it is required to display them, and then gamma-correct. Unless
>you intend to do photometry, the differences may well be academic.

Storing the values in linear form is the ideal, but you'll generally
need to use 12 to 16 bits per sample to do this without losing information,
at least with photographic images. You need 12 bits if you want to store
only the intensity range that can be recorded in a transparency, while
15 or 16 bits are needed for the intensity range that was captured by
a negative.

If you can't afford that much storage, 10-bit logarithmic encoding will
still store the full negative intensity range with intensity steps that
are still too small to see over the whole intensity range. This is
"visually lossless" storage. Of course, these 10-bit log values should
be converted to 16-bit integer linear or floating point for processing,
to avoid the bias introduced by non-linear processing.
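
A sketch of such a 10-bit log encoding (the 1000:1 range, the base, and the
endpoints are all assumptions here; real log formats differ in detail):

    import numpy as np

    RANGE = 1000.0                    # assumed usable intensity range
    DECADES = np.log10(RANGE)         # 3 decades over 1024 codes, i.e.
                                      # ~0.7% per step, below visibility

    def encode10(linear):             # linear light in (1/RANGE, 1.0]
        d = np.log10(np.clip(linear, 1.0 / RANGE, 1.0)) + DECADES
        return np.round(d / DECADES * 1023).astype(np.uint16)

    def decode16(code10):             # widen to 16-bit linear for processing
        linear = 10.0 ** (code10 / 1023.0 * DECADES - DECADES)
        return np.round(linear * 65535).astype(np.uint16)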

But if you can only afford to store 8 bits per sample, no encoding is
completely visually lossless. 8-bit gamma encoded storage, combined
with conversion to 12-bit linear form for processing, is probably the best
option available. This does imply compressing or clamping the scene
intensity range to what the output medium will accommodate - it's not
reasonable to try to capture the full negative intensity range using
*any* 8-bit encoding.

If you are using Photoshop, you don't even have the ability to store
8-bit gamma and process in 12-bit linear. You have three choices:

- Store and process in 8-bit gamma. This gives some errors in filtering
operations.

- Store and process in 8-bit linear. This has quantization problems in
dark areas of the image.

- Store in 8-bit gamma, process in 8-bit linear, with repeated conversions
back and forth. Adds lots of extra quantization steps, so probably
always worse than either of the two previous choices. Ugh. Yechh.

Dave

Dave Martindale

unread,
Mar 9, 1998, 3:00:00 AM3/9/98
to

>On Mon, 09 Mar 98 09:51:44 GMT, Martin Tom Brown <Mar...@nezumi.demon.co.uk>
>wrote:
>>However, there is nothing at all to stop you taking a gamma-corrected
>>image back to true intensity space, doing the calculations, and then
>>gamma-correcting it for final display.

tim...@clinet.fi (Timo Autiokari) writes:
>Yes there is. Un-compensating and then compensating again, back and forth,
>will eat details.

You're both right, in a way. Timo, if you convert back and forth between
8-bit gamma and 8-bit linear, you get cascading quantization errors.
This is always bad practice, as you note.

However, Martin is talking about converting from a gamma-encoded *storage*
format to another linear-encoded *computational* format. He didn't say
anything about sample widths. Any reasonable scientist would realize
that if your samples are stored as 8-bit gamma encoded, they need to
be converted to a linear format that is at least 12 bits wide for
computation. These days, with CPUs that do floating-point multiplies
faster than integer multiplies, it often makes sense to convert directly
to 32-bit floating point for computations.

In this way, you can do the computations in linear space, avoiding any
artifacts from nonlinear encoding, and then convert back to gamma encoding
for storage without adding any significant quantization errors from the
conversions.
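
A sketch of that round trip, with float as the working space (gamma 2.2 and
the little box blur are just stand-ins for a real encoding and operation):

    import numpy as np

    GAMMA = 2.2

    def to_linear(img8):              # 8-bit gamma-encoded -> float linear
        return (img8 / 255.0) ** GAMMA

    def to_gamma(linear):             # float linear -> 8-bit gamma-encoded
        v = np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA)
        return np.round(v * 255).astype(np.uint8)

    def blur3(linear):                # stand-in linear-space operation:
        out = linear.copy()           # a 3x1 horizontal box blur
        out[:, 1:-1] = (linear[:, :-2] + linear[:, 1:-1] + linear[:, 2:]) / 3
        return out

    img8 = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    result = to_gamma(blur3(to_linear(img8)))   # one quantization, at the end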

I'm sure Martin is not suggesting converting to 8-bit linear space for
computation - no sensible scientist would suggest that as a way to
*reduce* errors.

Dave

Michael McGuire

unread,
Mar 9, 1998, 3:00:00 AM3/9/98
to

Dave Martindale wrote:
: >On Mon, 09 Mar 98 09:51:44 GMT, Martin Tom Brown <Mar...@nezumi.demon.co.uk>

: >wrote:
: >>However, there is nothing at all to stop you taking a gamma-corrected
: >>image back to true intensity space, doing the calculations, and then
: >>gamma-correcting it for final display.

: tim...@clinet.fi (Timo Autiokari) writes:
: >Yes there is. Un-compensating and then compensating again, back and forth,
: >will eat details.

: You're both right, in a way. Timo, if you convert back and forth between
: 8-bit gamma and 8-bit linear, you get cascading quantization errors.
: This is always bad practice, as you note.

....
There is a simple way to avoid the cascading quantization errors while keeping
the image in gamma-compensated form. A lookup table for three 8-bit color
planes will have at most 768 entries. It is no great computational burden to
compute this many entries with floating-point arithmetic, concatenating the
transformation from gamma to linear, the linear operation itself, and the
transformation back to gamma before rounding to integers. Whether Photoshop
does this in some cases is a matter for some investigation. Ideally, an image
editor would track all the operations done to an image and assemble them in
this manner into one grand transformation before integerizing and taking the
source image into the destination image. I believe Live Picture does something
like this, but I haven't looked into it.
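
A sketch of that concatenation for one channel (the 20% linear-space exposure
boost is a made-up example operation):

    import numpy as np

    GAMMA = 2.2
    codes = np.arange(256) / 255.0               # every possible input code

    linear = codes ** GAMMA                      # gamma -> linear, in float
    boosted = np.clip(linear * 1.2, 0.0, 1.0)    # the linear point operation
    lut = np.round(boosted ** (1.0 / GAMMA) * 255).astype(np.uint8)

    # apply as channel = lut[channel]: one rounding step in total,
    # instead of one per conversion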

Mike
--
Michael McGuire Hewlett Packard Laboratories
email:xmcg...@xhpl.xhp.com P.O. Box 10490 (1501 Page Mill Rd.)
(remove x's from email if not Palo Alto, CA 94303-0971
a spammer)
Phone: (650)-857-5491
************BE SURE TO DOUBLE CLUTCH WHEN YOU PARADIGM SHIFT.**********

Dave Martindale

unread,
Mar 10, 1998, 3:00:00 AM3/10/98
to

mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) writes:
>There is a simple way to avoid the cascading quantization errors while keeping
>the image in gamma compensated form. A lookup table for three 8 bit color
>planes will have at most 768 entries. It is no great computational burden to
>compute this many entries with floating point arithmetic, concatenating the
>transformation from gamma to linear, the linear operation itself, and the
>transformation back to gamma before rounding to integers.

That works for "point" operations - where the red output value depends
only on the red input value. You could implement Levels and Curves
in Photoshop this way.

Unfortunately, most operations aren't that simple. Some operations
(e.g. hue/saturation changes) have outputs that depend on all three of
RGB at one pixel. So a lookup table that enumerated all 256 possible
R values times all possible G values times all possible B values would
be 2^24 bytes in size. You'd need three of them, for a total of 48 MB
of memory dedicated to your LUT. In addition, calculating a LUT this
size takes a long time. If your *image* is smaller than the LUT, it's
faster to just directly calculate the output image than the LUT.

And the above still assumes that the output pixel depends only on the
input pixel in the same location. The output of spatial filtering
operations (sharpen, blur, unsharp mask) depends on the value of 9 or
25 or even more input pixels surrounding the output pixel location.
Trying to build lookup tables for all possible inputs to *this*
sort of operation is quite futile.

Still, you can use lookup tables for the conversion from 8-bit gamma to
some suitable wide linear integer, and from there back to 8-bit gamma.

Dave

Dave Martindale

unread,
Mar 10, 1998, 3:00:00 AM3/10/98
to

>In article <6e1fje$d...@redgreen.cs.ubc.ca> da...@cs.ubc.ca (Dave Martindale) writes:
>
> But if you can only afford to store 8 bits per sample, no encoding is
> completely visually lossless. 8-bit gamma encoded storage, combined
> with conversion to 12-bit linear form for processing, is probably the best
> option available.

nou...@nohost.nodomain (Thomas) writes:
>Digital cameras store images in compressed formats, meaning that they
>use a variable number of bits per pixel, dependent on both the
>location of the pixel and the content of the image. So, it really
>makes no sense to talk about "8 bit gamma encoded storage" when you
>are talking about current digital cameras.

We have been discussing storing images with a limited number of bits per
sample, but *no* compression of the sort you are discussing. Of course,
compression adds its own artifacts. But most flatbed cameras and
drum scanners and fixed CCD cameras (basically, anything that is
connected to a computer for output) usually provide uncompressed
output. Under these conditions, you can make reasonable calculations
about the maximum error that can be caused by various storage formats
and calculations. And, you can design systems that are pretty much
guaranteed to introduce no artifacts of their own under any conditions.

Portable digital cameras have the problem of storing images in
very limited storage space, so they use compression. Because they
compress across multiple dimensions of the image at once, it becomes
far more complex trying to describe and measure all of the possible
problems that might occur at different levels of compression, and for
different tradeoffs between spatial, intensity, and colour resolution.

Dave

ran...@alumni.rpi.edu

unread,
Mar 10, 1998, 3:00:00 AM3/10/98
to

In article <889437...@nezumi.demon.co.uk>,
Mar...@nezumi.demon.co.uk wrote:

> However, there is nothing at all to
> stop you taking a gamma-corrected image back to true intensity space,
> doing the calculations, and then gamma-correcting it for final display.

Provided, of course, you use a high enough resolution to prevent
loss during the corrections. Dave has repeated this in several of his
messages, and it's extremely important. Ideally, you would do it
all in floating point, but if you don't want to do that, at least use
3 or 4 more bits for the intermediate linear data than for the stored data.

> It is more usual to keep scientific images in linear intensity space
> until it is required to display them and then gamma correct. Unless
> you intend to do photometry the differences may well be academic.

Yes, if the original data is linear you should store it that way. The
PNG format allows you to mark your file with a chunk that says "this
image is linear" (file_gamma=1.0), so the only time you do a gamma
correction is when you display it, and if the destination is a printer
that also is linear, no conversion will happen at all (which is what
Timo was after in the first place, I think). If the destination
is a monitor with gamma=2.2, then the conversion happens at the last
possible moment, after you've done any image processing in linear space.

On the other hand, if the original data is photographic data in a
gamma-encoded space, you should store it that way, and mark the PNG
file with the file_gamma=0.45 chunk. But if you are doing compositing
and what-not, convert to high-resolution linear space and do the
composition, and then leave it in linear space, marked with file_gamma=1.0,
to avoid repetitive gamma-encoding and decoding when you do more
image processing on it.
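
The decode step a conforming viewer performs follows directly from the gAMA
value; a sketch, per the PNG convention that sample = light^file_gamma:

    def decode_png_sample(sample8, file_gamma):
        # light = sample ** (1 / file_gamma)
        return (sample8 / 255.0) ** (1.0 / file_gamma)

    print(decode_png_sample(128, 1.0))    # linear file: identity, ~0.502
    print(decode_png_sample(128, 0.45))   # gamma-encoded file: ~0.216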

Glenn Randers-Pehrson
PNG/MNG development group

Michael McGuire

unread,
Mar 10, 1998, 3:00:00 AM3/10/98
to


I would agree with you that "matrix" operations will not succumb to this
approach, but the point operations do account for a large percentage of the
operations you would likely want to do to an image, if not a large percentage
of the total possible operations. I will point out that in an appropriate
color space some of the operations you cite become one-dimensional, or at
most two-dimensional. Saturation changes are just linear remappings of a and
b in CIELab. A hue change for 8-bit channels would take a 16K lookup table
(taking advantage of symmetry) addressed by a and b, which would be within
the bounds of reason to compute.

As for spatial filtering operations, it is generally best if they are applied
only to the luminance component, because among other things the eye has more
spatial frequency response and is more sensitive to noise in that channel.
Since the L component is perceptually linear, that coding of the luminance is
probably best, although it may not be as easy to analyze mathematically as
intensity-linear coding. Contrast adjustment is done only to the L component
and does not have the effect of changing saturation as it does in RGB.

Of course, one has to get to CIELab from whatever flavor of RGB one starts
with, and to the destination space when you are finished, which are both
matrix-type operations and not satisfactory to do with floating point. But if
the coding of the source and destination images is not so different from the
cube root of CIELab, this won't bite too hard. Some things aren't so sweet in
CIELab, though: there are issues of gamut mismatch between what you have done
and what's available in the destination space. Dealing with a large change of
illuminant or white point isn't at all convenient in CIELab, but it is easily
done in RGB as a simple point operation. A certain amount of smarts about the
order in which operations are performed helps a lot.
