colorspace-faq - purpose of the FAQ?


Timo Autiokari

Feb 25, 1998, 3:00:00 AM2/25/98

Dear Mr. Poynton,

What is the purpose of your GammaFAQ?

In your foreword you say:

***"In video, computer graphics and image processing, gamma represents a
numerical parameter that describes the nonlinearity of intensity reproduction.
Having a good understanding of the theory and practice of gamma will enable you
to get good results when you create, process and display pictures." ***

Many people read the above as if it applied to
digital-photographic-imaging as well.

However, you do not say that it does. Instead you speak about "image
processing" and about "create, process and display pictures". These differ from
digital-photographic-imaging in two respects:

Digital photographic images (1) originate from the real world, and they are
then (2) enhanced, whereas "image processing" and "create, process and display
pictures" refer to something that originates synthetically, from calculations
or algorithms, and is then displayed as it is, without enhancement.

So, I find many fundamental issues in your GammaFAQ that are, to say the least,
misleading when they are applied to digital-photographic-imaging. I take only
one example from the GammaFAQ, the same text is in the ColorFAQ as well:

>>13. How many bits do I need to smoothly shade from black
>>to white?
>>...To shade smoothly over this range, so as to produce no
>>perceptible steps, at the black end of the scale it is necessary
>>to have coding that represents different intensity levels 1.00, 1.01,
>>1.02 and so on. If linear light coding is used, the "delta" of 0.01 must
>>be maintained all the way up the scale to white. This requires about
>>9,900 codes, or about fourteen bits per component. If you use nonlinear
>>coding, then the 1.01 "delta" required at the black end of the scale
>>applies as a ratio, not an absolute increment, and progresses like
>>compound interest up to white. This results in about 460 codes, or about
>>nine bits per component. Eight bits, nonlinearly coded according to
>>Rec. 709, is sufficient for broadcast-quality digital television at a contrast
>>ratio of about 50:1....
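The FAQ's arithmetic here can be reproduced with a short sketch (a quick check, not from the FAQ itself; the 100:1 range and the 1% step at black are the FAQ's own figures):

```python
import math

contrast_ratio = 100.0   # black coded as 1.00, white as 100 (100:1 range)
delta = 0.01             # smallest step distinguishable at black (1%)

# Linear-light coding: the absolute step of 0.01 must hold up to white.
linear_codes = (contrast_ratio - 1.0) / delta          # 9900 codes
linear_bits = math.ceil(math.log2(linear_codes))       # 14 bits

# Nonlinear (ratio) coding: each code is 1% above the previous one,
# compounding like interest from black up to white.
ratio_codes = math.log(contrast_ratio) / math.log(1.01)  # about 463 codes
ratio_bits = math.ceil(math.log2(ratio_codes))           # 9 bits

print(linear_codes, linear_bits, ratio_codes, ratio_bits)
```

This reproduces the "about 9,900 codes, or about fourteen bits" and "about 460 codes, or about nine bits" figures quoted above.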

Now, the conditional expressions "If linear light coding is used..." and "If
you use nonlinear coding..." are fundamentally faulty when applied to digital
photographic imaging. The meaning of what "to code" actually means also
becomes a bit blurry.

In digital-photographic-imaging where a CCD imager is used (cameras and
scanners) there is absolutely no possibility to choose what coding is used. The
CCD sees the light linearly and the coding happens linearly by the AD converter
that is inside the CCD device. _This_ is where the coding happens. And it
happens linearly.

So, the data that comes out from the CCD imager device (the integrated circuit)
is already coded and that has been done linearly, always. The light-to-data
coding has been done at this point.

Thereafter it is only possible to alter the _result_ of this coding, in other
words to make the image data suitable for one output device or another. This is
the same as compensation.

In other words, after the coding is done it is not possible to simply calculate
any better shading (or better human perception) into the data. That would
require more data.

On the other hand, if you _create_ computer-generated graphics (like virtual
reality 'images') using algorithms, then you are free to affect the 'coding',
because the 'coding' is done by algorithms or calculations, so any intensity
values can be freely chosen. There you can have both the better shading of the
black and the better perception for the eye, simultaneously.

So my question is: does your GammaFAQ cover digital photographic imaging, and
if your answer is "yes", how do you explain the better perceptual coding above?


Timo Autiokari
http://www.clinet.fi/~timothy/calibration/index.htm

Stephen H. Westin

Feb 25, 1998, 3:00:00 AM2/25/98

tim...@clinet.fi (Timo Autiokari) writes:

> Dear Mr. Poynton,
>
> What is the purpose of your GammaFAQ?
>
> In your forewords you say:

> ***"In video, computer graphics and image processing, gamma
> represents a numerical parameter that describes the nonlinearity of
> intensity reproduction. Having a good understanding of the theory
> and practice of gamma will enable you to get good results when you
> create, process and display pictures." ***

> Many people understand the above as if it would apply to
> digital-photographic-imaging as well.

Actually, he is dealing simply with the issues of display of a known
digital image on a CRT. As you point out, there are additional issues
involved with image acquisition.

<snip>

> So, I find many fundamental issues in your GammaFAQ that are, to say
> the least, misleading when they are applied to
> digital-photographic-imaging. I take only one example from the
> GammaFAQ, the same text is in the ColorFAQ as well:

> >>13. How many bits do I need to smoothly shade from black
> >>to white?
> >>...To shade smoothly over this range, so as to produce no
> >>perceptible steps, at the black end of the scale it is necessary
> >>to have coding that represents different intensity levels 1.00, 1.01,
> >>1.02 and so on. If linear light coding is used, the "delta" of 0.01 must
> >>be maintained all the way up the scale to white. This requires about
> >>9,900 codes, or about fourteen bits per component. If you use nonlinear
> >>coding, then the 1.01 "delta" required at the black end of the scale
> >>applies as a ratio, not an absolute increment, and progresses like
> >>compound interest up to white. This results in about 460 codes, or about
> >>nine bits per component. Eight bits, nonlinearly coded according to
> >>Rec. 709, is sufficient for broadcast-quality digital television at a contrast
> >>ratio of about 50:1....

I think this description is a bit pessimistic. I would put the minimum
for a digital image displayed on a CRT at about 12 bits, if linear
encoding is used.

> Now, the conditional expressions: "If linear light coding is
> used..." and "If you use nonlinear coding...". are fundamentally
> faulty when applied to digital photographic imaging. Also the
> meaning of what actually the "to code" means gets to be a bit
> blurry. In digital-photographic-imaging where a CCD imager is used
> (cameras and scanners) there is absolutely no possibility to choose
> what coding is used. The CCD sees the light linearly and the coding
> happens linearly by the AD converter that is inside the CCD device.
> _This_ is where the coding happens. And it happens linearly.

Then how does our Kodak DCS420 camera deliver images with a gamma
correction of around 1.6? We know; we measured it. And I think you
will find a similar nonlinearity in most commercially-available
cameras.

<snip>

> In other words after the coding is done it is not possible to just
> calculate any better shading (or better human perception) into the
> data. That would require more data.

Yup. That's why you have to think about these issues. As do
manufacturers of digital cameras.

<snip>

Another subtle point is that it is, in general, impossible to
transform an acquired image into any standard tristimulus color space;
since the three filter responses are always different from the CIE
matching functions, there will always be cases where the
transformation will give the wrong answer. Fortunately, most spectra
in the real world don't incur egregious errors in such a process. Just
found this out a few weeks ago, myself.

--
-Stephen H. Westin
Any information or opinions in this message are mine: they do not
represent the position of Cornell University or any of its sponsors.

Michael McGuire

Feb 25, 1998, 3:00:00 AM2/25/98

....snippage

: In digital-photographic-imaging where a CCD imager is used (cameras and
: scanners) there is absolutely no possibility to choose what coding is used. The
: CCD sees the light linearly and the coding happens linearly by the AD converter
: that is inside the CCD device. _This_ is where the coding happens. And it
: happens linearly.

: So, the data that comes out from the CCD imager device (the integrated circuit)
: is already coded and that has been done linearly, always. The light-to-data
: coding has been done at this point.

...snippage

: Timo Autiokari
: http://www.clinet.fi/~timothy/calibration/index.htm

A/D convertors in cameras and scanners are in fact external to the CCD's--not
on the same chip, and thus accessible to adjustment by other than the maker of
the CCD. Further there is no requirement that an A/D converter have an output
linearly proportional to its input voltage. An implementation of an A/D
consists of N voltage comparators and a voltage divider comprised of a string
of N resistors fed by a stabilized voltage. Each node of the divider is
connected to the plus input of a voltage comparator. The minus input of each of
the comparators is connected to the input signal. The output is the number of
comparators that are turned on by the input signal, that is all those whose
voltage divider (plus) input is less than the input signal. If the resistor
values in the divider are all equal then you get linear output. But they could
just as well be a power law sequence for a power law relationship of output to
input or whatever function you like.
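The comparator-string description above can be modeled directly. A minimal sketch (the 8-resistor ladder and the square-law threshold sequence are illustrative choices, not taken from any real part):

```python
import numpy as np

def flash_adc(v_in, ladder, v_ref=1.0):
    """Idealized flash ADC: the resistor string divides v_ref into node
    voltages; the output code is the number of comparators whose node
    voltage lies below the input signal."""
    r = np.asarray(ladder, dtype=float)
    nodes = v_ref * np.cumsum(r) / r.sum()   # divider node voltages
    return int(np.sum(v_in > nodes[:-1]))    # comparators turned on

# Equal resistors give a linear transfer function...
equal = np.ones(8)

# ...while resistors sized so the node voltages follow (k/8)**2 give a
# square-law transfer function from the same circuit, with finer steps
# near zero.
thresholds = (np.arange(1, 9) / 8.0) ** 2
square_law = np.diff(np.concatenate(([0.0], thresholds)))

print(flash_adc(0.2, equal), flash_adc(0.2, square_law))
```

With these ladders an input of 0.2 full scale lands on code 1 through the equal-resistor string but code 3 through the square-law string: the same circuit, a different resistor sequence, a nonlinear coding.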

From the point of view of optimizing signal-to-noise performance, a square root
scaled sequence would be superior for CCD's given the Poisson law fluctuation
of the incoming photon flux--yet another expression of Nature's preference for
the non-linear, and in the same direction from linear as CRT gamma, printer
correction, and perceptual linearity.
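The signal-to-noise argument can be checked numerically. A sketch under assumed figures (the 16384-electron full well and the 256-code output are hypothetical, chosen only to make the arithmetic visible):

```python
import math

full_well = 16384   # hypothetical full-well capacity, in photoelectrons
codes = 256         # hypothetical 8-bit square-root-coded output

def signal_of_code(c):
    # Inverse of: code = codes * sqrt(signal / full_well)
    return full_well * (c / codes) ** 2

# Compare the width of each code step to the shot noise, sqrt(signal),
# at that level: under square-root coding the ratio is nearly constant,
# so no codes are spent resolving levels that Poisson noise already blurs.
ratios = []
for c in (16, 64, 192):
    n = signal_of_code(c)
    step = signal_of_code(c + 1) - n
    ratios.append(step / math.sqrt(n))

print(ratios)  # roughly constant, about 2 * sqrt(full_well) / codes
```

A linear coding, by contrast, has a constant step, so its step-to-noise ratio shrinks as the signal grows: codes are wasted at the bright end, exactly the point being made here.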

Mike
--
Michael McGuire Hewlett Packard Laboratories
email:xmcg...@xhpl.xhp.com P.0. Box 10490 (1501 Page Mill Rd.)
(remove x's from email if not Palo Alto, CA 94303-0971
a spammer)
Phone: (650)-857-5491
************BE SURE TO DOUBLE CLUTCH WHEN YOU PARADIGM SHIFT.**********

Charles Poynton

Feb 26, 1998, 3:00:00 AM2/26/98

Concerning an ongoing public and private attempt to educate Timo Autiokari
concerning transfer functions, in article
<34f66f5b...@news.clinet.fi>, tim...@clinet.fi (Timo Autiokari)
wrote under the heading "What is the purpose of your GammaFAQ?":

> the conditional expressions: "If linear light coding is used..." and
> "If you use nonlinear coding...". are fundamentally faulty when
> applied to digital photographic imaging.

The statements are not faulty, and they apply quite well to digital
photographic imaging.

Timo continues,

> In digital-photographic-imaging where a CCD imager is used (cameras and
> scanners) there is absolutely no possibility to choose what coding is
> used. The CCD sees the light linearly and the coding happens linearly by
> the AD converter that is inside the CCD device. _This_ is where the
> coding happens. And it happens linearly.
>
> So, the data that comes out from the CCD imager device (the integrated
> circuit) is already coded and that has been done linearly, always. The
> light-to-data coding has been done at this point.

In a note attached to this follow-up, I reiterate what I told Timo, in
private e-mail, yesterday. (He acknowledged receiving it from me.) The
note explains how scanners and cameras use CCD devices. I see that Michael
McGuire and Mitch Valburg have already posted follow-ups concerning CCDs
and ADCs, and I expect (and hope for) several more third-party follow-ups
in the next few days. I'm sorry that Timo paid little attention to my
message, because he could have cleared up some confusion, instead of
creating some more.

C.

p.s. Timo: I took care in my previous posting to direct follow-ups to just
<news:sci.image.processing> and <news:rec.photo.digital>. Perhaps you
could do the same, instead of continuing to cross-post to 6 groups.

--
Charles Poynton
<mailto:poy...@poynton.com> [Mac Eudora/MIME/BinHex/uu]
<http://www.inforamp.net/~poynton/>
--


A Rough note for Timo concerning the Gamma FAQ
as it applies to CCDs and ADCs in scanners and cameras


Copyright (c) 1998-02-26
Charles Poynton


Nearly all contemporary CCDs are intrinsically analog devices - or more
properly, their output is sampled but not quantized. (For definitions of
sampling and quantization, see Chapter 1 of my book; that chapter is on
the web.) Today, only a few CCD devices have integral A-to-D converters.
(Soon, many will, but solutions to the 8-bit linear light problem must
first be found.)

Contemporary desktop scanners generally take one of two approaches:

- Some have a 10-bit (or sometimes a 12-bit) A-to-D converter, followed
by a digital hardware lookup table where a nonlinear correction is
performed, producing 8 bits out.

- Some have an analog nonlinear correction circuit, followed by an
8-bit A-to-D converter.
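The first of these approaches can be sketched as a lookup table. This assumes a Rec. 709-style transfer curve and 10 linear bits in, 8 nonlinear bits out, purely for illustration; real scanners use their own curves:

```python
import numpy as np

def rec709_oetf(v):
    """Rec. 709-style transfer function: linear segment near black,
    power-law (exponent 0.45) above the 0.018 breakpoint."""
    v = np.asarray(v, dtype=float)
    return np.where(v < 0.018, 4.5 * v, 1.099 * v ** 0.45 - 0.099)

# Hardware-style LUT: every possible 10-bit linear code maps to an
# 8-bit nonlinear code.
lut = np.round(255 * rec709_oetf(np.arange(1024) / 1023.0)).astype(np.uint8)

# A sample path through the scanner: 10-bit linear code from the ADC,
# then one table lookup to get the 8-bit nonlinear output.
sample_10bit = 512            # mid-scale linear-light code
out_8bit = int(lut[sample_10bit])
print(out_8bit)
```

Note how mid-scale linear input maps to roughly 70% of output scale: the nonlinear curve spends its codes preferentially on the dark end, which is the whole point of correcting before (or while) dropping to 8 bits.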

Video cameras invariably take the second approach, except for the most
sophisticated studio cameras, costing $80,000 or more, which employ 12-bit
converters and digital gamma correction with 10 (nonlinear) bits out.
Industrial machine vision cameras, and astrophotography cameras, sometimes
directly produce linear-light intensity output.

A very low-end, cheap scanner might get away with no analog processing, a
CCD and an 8-bit ADC. (I'm not certain whether this is done even in
cheap commercial units; I've never taken a really cheap unit apart. Maybe
a QuickTake 100 internally codes 8-bit linear-light, does anyone know?)
But such a scanner cannot reproduce smooth shades in dark regions of the
image. As I mention in my book, the quantization requirements near black
are relaxed when the contrast ratio of the display medium is low. Scanners
for print work generally have less demanding requirements than video
cameras, because offset printing generally has a lower contrast ratio than
television. Most demanding of all is motion picture film, or projected 35
mm transparencies [slides] in a dark room. (Most desktop computer
applications are not demanding of good shading near black, because most
desktop computer environments are brightly lit, consequently the contrast
ratio is poor.)
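The point about contrast ratio can be quantified with the FAQ's own 1% criterion: with ratio coding, the number of codes needed is ln(CR)/ln(1.01). A sketch (the contrast-ratio figures for each medium are illustrative, except the 50:1 television figure quoted earlier):

```python
import math

# Codes needed for imperceptible steps at a 1% visibility threshold,
# as a function of the display medium's contrast ratio.
for medium, cr in [("brightly-lit desktop", 20),
                   ("offset print", 30),
                   ("television", 50),
                   ("projected film, dark room", 1000)]:
    codes = math.log(cr) / math.log(1.01)
    bits = math.ceil(math.log2(codes))
    print(f"{medium} ({cr}:1): {codes:.0f} codes, {bits} bits")
```

The lower the contrast ratio, the fewer codes are needed near black, which is why print scanners can be less demanding than video cameras, and projected film the most demanding of all.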

Timo suggested (in private e-mail) that the images in the following three
cases would appear exactly identical, assuming 8 bits per color component:

1. Raw, linear-light image data is processed through a display driver
that imposes a lookup table exactly compensating the nonlinearity
of the CRT.

2. Raw, linear-light image data is gamma-corrected by 8-bit per channel
image manipulation software, processed through no lookup table (or a
lookup table containing a ramp), and then displayed on a CRT.

3. Software in the camera or the scanner applies gamma correction;
the resulting 8-bit image is displayed directly on the CRT.

Timo incorrectly concludes that these three cases appear exactly
identical. His cases 1 and 3 correspond to the second and first rows
respectively of Figure 6.8 in my book. (The same figure is in question 14
of the Gamma FAQ.)

I'm sure that Timo correctly concluded that, in each of his 3 cases, black
is reproduced correctly, and white is reproduced correctly, and mid grey
is reproduced (almost) correctly. That's not the problem.

The problem is that the boundary between adjacent code values in dark
areas of the picture is more or less visible - or perhaps objectionable -
depending on which of these schemes is used! In decent viewing conditions,
with a video camera or a decent scanner, the images will _not_ appear
exactly the same. In case 1 the dark shades will exhibit banding; in
case 3 they will not.
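The difference between cases 1 and 3 can be demonstrated by counting the distinct 8-bit codes available in the darkest tenth of the intensity scale under each scheme (the 2.2 exponent is an assumed CRT figure; this is a sketch, not the FAQ's own code):

```python
import numpy as np

gamma = 2.2                                # assumed CRT exponent
intensity = np.linspace(0.0, 0.1, 10001)   # darkest 10% of the scale

# Case 1: linear-light intensity quantized to 8 bits; gamma correction
# is applied later, by a lookup table at the display.
case1 = np.unique(np.round(255 * intensity))

# Case 3: gamma correction applied first, then quantized to 8 bits.
case3 = np.unique(np.round(255 * intensity ** (1 / gamma)))

# Case 3 preserves several times more distinguishable dark shades,
# which is why case 1 bands near black and case 3 does not.
print(len(case1), len(case3))
```

Both schemes reproduce black, white and mid grey essentially correctly; what differs is how many distinct dark shades survive the trip through 8 bits.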

In case 1 (Figure 6.8, second row, CG), the rounding of intensities in the
dark shades to the same code value - by conversion of the intensity value
to 8 bits - erases distinctions between dark shades (intensities) that I
can see - that my vision can easily distinguish.

If you have seen banding in a (supposedly) continuous-tone image, at 24
bits per pixel, then you can be fairly certain that the image data has
been subjected to 8-bit linear-light coding (or poorly-chosen nonlinear
coding) someplace along the path from creation, capture, processing,
recording, placing in a framebuffer, running a lookup table, converting to
analog, and displaying.

--

Timo Autiokari

Feb 26, 1998, 3:00:00 AM2/26/98

On 25 Feb 1998 23:40:19 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) wrote:

>A/D convertors in cameras and scanners are in fact external to the CCD's--not
>on the same chip, and thus accessible to adjustment by other than the maker of
>the CCD. Further there is no requirement that an A/D converter have an output
>linearly proportional to its input voltage. An implementation of an A/D
>consists of N voltage comparators and a voltage divider comprised of a string
>of N resistors fed by a stabilized voltage. Each node of the divider is
>connected to the plus input of a voltage comparator. The minus input of each of
>the comparators is connected to the input signal. The output is the number of
>comparators that are turned on by the input signal, that is all those whose
>voltage divider (plus) input is less than the input signal. If the resistor
>values in the divider are all equal then you get linear output. But they could
>just as well be a power law sequence for a power law relationship of output to
>input or whatever function you like.


In the article:
http://x4.dejanews.com/getdoc.xp?AN=296741174&CONTEXT=888477590.22675835&hitnum=1
you yourself say:

<< A well set up CCD--probably needs to be cooled--can put out 14 bit data
<< where all the noise intrinsic to the CCD is in the lowest order bit. CCD's
<< respond linearly to light intensity so 14 bits amounts to 14 doublings of
<< light intensity which is to say 14 stops. ...
<< Mike
<< --
<< Michael McGuire Hewlett Packard Laboratories

So please tell me, what are you trying to say now?

In your reply you use the wording "could just as well be" in your speculation:
"If the resistor values in the divider are all equal then you get linear
output. But they could just as well be a power law sequence for a power law
relationship of output to input or whatever function you like."

There are two good reasons why the coding is linear in AD converter:

(1) It would be rather foolish for the manufacturer of the converter to create
an AD converter that is very accurate in design, so that it can detect a very
small change at one end of the range, and then make the rest of the range
much coarser.

(2) There are technical problems in the production of AD converters. It is
relatively easy to produce 8 resistors on a chip that all have accurately
the same value. In an AD ladder it does not matter what the exact value of the
resistors is; only the value needs to be the same for each resistor in the
ladder. This is the very basic principle that the production of AD converters
relies on. It becomes increasingly difficult to produce more than 8 resistors
that have exactly the same value. This is the reason why higher bit-depth
converters are more expensive and why there are not many over 16-bit accuracy.
On the other hand, producing 8 resistors that each have a different value, but
such that these values accurately follow some non-linear function, would be
*very* difficult. There is no technology on the horizon that could make this
approach feasible; it is much easier to produce a larger number of linearly
coded bits than the equivalent number of non-linearly compressed bits.

Below are links to the AD converter pages of Analog Devices Inc., National
Semiconductor Corp., and Fujitsu Microelectronics Inc.; if anyone is
interested, please see the specifications. All the converters are linear. It is
easiest to just locate the *nonlinearity error* or the *linearity* spec.

http://products.analog.com/products/list_generics.asp?category=86
http://products.analog.com/products/list_generics.asp?category=85
http://www.national.com/catalog/AnalogDataAcquisition.html
http://fujitsumicro.com/products/analog/data.html

If anyone knows of on-line specifications of CCD devices, could you please post
the links.

And yes, there are both internal and external AD conversions in CCD imaging
systems. It does not change the fact that the conversion happens linearly,
inside an integrated circuit where the coding is in the form of physical
resistor elements on the chip.

It is also possible (unlikely but possible) that in some very specific camera
there is non-linear analog signal conditioning between the CCD output and the
AD converter input. If Mr. Poynton's pages are targeted at such exotic systems
then in my opinion it would be nice if he could indicate this on those pages.

Timo Autiokari

Charles Poynton

Feb 26, 1998, 3:00:00 AM2/26/98

In article <34f637f...@news.clinet.fi>, tim...@clinet.fi (Timo
Autiokari) wrote:

> It is also possible (unlikely but possible) that in some very specific
> camera there is an non-linear analog signal conditioning between the CCD
> output and the AD converter input. If Mr. Poynton's pages are targeted for
> such exotic systems then in my opinion it would be nice if he could say
> indicate this on these pages.

Could I ask for a few volunteers in s.e.t.a or s.e.t.b to explain to Mr.
Autiokari that this is not only possible and likely, and not just in "exotic"
systems, but is how ***ALL*** consumer and most professional video cameras
work? I have tried repeatedly in private e-mail and public posts to
explain, to no avail. A short note like this ought to suffice:

Dear Mr. Autiokari,

In every consumer video camera, and most professional video cameras, there
is nonlinear analog signal conditioning between the CCD output and the AD
converter input.

Certain high-end professional studio video cameras have no nonlinear
analog signal conditioning between the CCD output and the AD converter
input; in these cameras, the output of the CCD has more than 8 bits, and
nonlinear processing is performed digitally.

Alan Roberts

Feb 26, 1998, 3:00:00 AM2/26/98

Charles Poynton (poy...@poynton.com) wrote:
: In article <34f637f...@news.clinet.fi>, tim...@clinet.fi (Timo

: Autiokari) wrote:
:
: > It is also possible (unlikely but possible) that in some very specific camera
: > there is an non-linear analog signal conditioning between the CCD output
: > and the AD converter input. If Mr. Poynton's pages are targeted for such
: > exotic systems then in my opinion it would be nice if he could say indicate
: > this on these pages.
:
: Could I ask for a few volunteers in s.e.t.a or s.e.t.b to explain to Mr.
: Autiokari that this not only possible and likely, and not just in "exotic"
: systems, but in ***ALL*** consumer and most professional video cameras
: work? I have tried repeatedly in private e-mail and public posts to
: explain, to no avail. A short note like this ought to suffice:

I did that yesterday, recommending that he look at manufacturers' data sheets
on real cameras. He hasn't responded to me yet. He should do some reading if
he's going to try to catch up with my 30 years in the camera business, let
alone your experience :-)

--
******* Alan Roberts ******* BBC Research & Development Department *******
* My views, not necessarily Auntie's, but they might be, you never know. *
**************************************************************************

Timo Autiokari

Feb 26, 1998, 3:00:00 AM2/26/98

On Thu, 26 Feb 1998 02:04:45 -0500, poy...@poynton.com (Charles Poynton) wrote:

>Today, only a few CCD devices have integral A-to-D converters.
>(Soon, many will, but solutions to the 8-bit linear light problem must
>first be found.)

No, the problem is the monitor gamma. It is not a problem for TV and video
because the images are not edited. But if the image is gamma compensated, then
the image manipulation software sees the compensated (and possibly compressed)
image. Editing such an un-natural image gives poor quality. E.g. Photoshop
makes it easy to edit gamma compensated images (this is why it has the two
gamma settings). It shows the image properly on the monitor but the data is
kept in the gamma space, and the problem is that image editing is done to the
gamma compensated image data. So Photoshop effectively hides the appearance
of the actual compensated data. To get a feeling for what the compensated data
actually looks like, just apply a gamma of, say, 1/2.0 to a decent image.

>Video cameras invariably take the second approach,

My concern is not the video, it is digital photographic imaging.

>A very low-end, cheap scanner might get away with no analog processing, a
>CCD and an 8-bit ADC. (I'm not certain whether this is done even in in
>cheap commercial units, I've never taken a really cheap unit apart. ...)

So, you are not certain. Most of the 'really cheap' digital cameras have the 8
bit/color CCD, see e.g. http://plugin.com/dcg2.html . Are you similarly
uncertain about them too? There really are no non-linear analog amplifiers in
them. Non-linear analog amplifiers are expensive and they draw a lot of current
(such amplifiers need a stable ambient temperature in order to be accurate, so
small miniature ovens are usually used to achieve this; similar ovens are used
to keep the crystal stable for accurate frequency generation in counters and
function generators).

>Timo suggested (in private e-mail) that the images in the following three
>cases would appear exactly identical, assuming 8 bits per color component:

No I did not say that. In my private e-mail I said:

"In the below three cases the image will appear *exactly* the same. There will
be no differences at all (still considering the 8bit/color CCD):"

What a master you are in the art of twisting words. You then go on and say:

>In decent viewing conditions, with a video camera or a decent
>scanner, the images will _not_ appear exactly the same. In
>case 1, the dark shades will exhibit banding, and case 3 they
>will not.

So here you change the "8bit/color CCD" into "a video camera or a decent
scanner". Again, my concern is not the video. My concern is cameras and
scanners in the area of digital photographic imaging. Maybe to you a "decent
scanner" is only a 12 bit/color scanner; many people do have 'cheap' 8
bit/color scanners.

May I please suggest that you place a warning on your FAQ pages like:
Not suitable for really cheap 8bit/color digital cameras nor for scanners that
are not decent. Or simply: Suitable only for video cameras and only for
displaying images.

Timo Autiokari

Timo Autiokari

Feb 26, 1998, 3:00:00 AM2/26/98

On Thu, 26 Feb 1998 09:22:56 -0500, poy...@poynton.com (Charles Poynton) wrote:

>In every consumer video camera, and most professional video cameras, there
>is nonlinear analog signal conditioning between the CCD output and the AD
>converter input.

Really, I'm not concerned about the video. I've actually said that for video
the gamma compensation is good. You seem to have trouble handling the digital
photographic imaging and image editing aspect, as you avoid mentioning them in
your on-line documents as well.

The problem with your FAQs is that they are being applied to digital
photographic imaging, where the tool is the digital camera. It is different
from the video camera.

Here is a simple way to see if there is a non-linear analog signal conditioning
in the camera:

-open in Photoshop any decently exposed image acquired using an 8bit/color CCD
device, with a gamma setting other than 1.0 in the acquire module (in case
there is such a setting).
-choose Image/Histogram.
-in Photoshop the Luminosity channel is smoothed, so select red, green or blue
from the drop-down box.

Now, if you do not see gaps in the histogram, then there is non-linear analog
signal conditioning in the camera or the scanner. But if you do see the gaps,
then the data has just been modified by the software of the camera or scanner.
To verify, open a couple of other images and do the same, to see that the gaps
appear generally in the same places.
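The gap test described above can be simulated: quantize to 8 bits once, apply gamma in software, and count the empty histogram bins (the 2.2 gamma and the synthetic uniform image are illustrative assumptions):

```python
import numpy as np

def histogram_gaps(channel):
    """Count empty code values between the lowest and highest occupied
    bins of an 8-bit channel's histogram -- the 'gaps' described."""
    hist = np.bincount(channel.ravel(), minlength=256)
    occupied = np.nonzero(hist)[0]
    lo, hi = occupied[0], occupied[-1]
    return int(np.sum(hist[lo:hi + 1] == 0))

rng = np.random.default_rng(0)
linear = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # dense codes

# Applying gamma 1/2.2 to already-quantized 8-bit data spreads the dark
# codes apart, leaving unpopulated bins -- the gaps visible in a histogram.
adjusted = np.round(255 * (linear / 255.0) ** (1 / 2.2)).astype(np.uint8)

print(histogram_gaps(linear), histogram_gaps(adjusted))
```

The raw quantized channel has an essentially gap-free histogram, while the software-adjusted one shows many empty bins: redistributing 256 codes after quantization cannot create new levels, only spread the existing ones apart.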

Timo Autiokari

Keith Jack

Feb 26, 1998, 3:00:00 AM2/26/98

Timo Autiokari wrote in message <34f637f...@news.clinet.fi>...


>On 25 Feb 1998 23:40:19 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) wrote:
>

[edit]

>(2) There are technical problems in the production of AD converters. It is
>relatively easy to produce 8 resistors onto the chip that all have accurately
>the same value. In an AD ladder it does not matter what the exact value of the
>resistors is, only the value needs to be the same for each resistor in the
>ladder. This is the very basic issue that the production of AD converters relies
>on. It becomes increasingly difficult to produce more than 8 resistors that have
>exactly the same value. This is the reason why higher bit-dept converters are
>more expensive and that there are not many over 16 bit accuracy. On the other
>hand producing 8 resistors that each have different value, but so that these
>values accurately follows some non-linear function would be *very* difficult.
>There is no technology in the horizon that could make this approach feasible, it
>is much easier to produce larger amount of linearly coded bits than the
>equivalent amount of non-linearly compressed bits.

Actually, at the last company I worked at, we developed an ADC that
was 8-bit flash (255 resistors), and designed to perform a nonlinear
function. It was just as simple to make as a standard ADC (the design
change took less than an hour), and was designed specifically for
interfacing to CCDs. Amazing what you can do when you have a physicist
available -- a simple experiment on his kitchen table proved the concept
and the first silicon worked great.


Michael McGuire

Feb 27, 1998, 3:00:00 AM2/27/98

: On 25 Feb 1998 23:40:19 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) wrote:

: >A/D convertors in cameras and scanners are in fact external to the CCD's--not
: >on the same chip, and thus accessible to adjustment by other than the maker of
: >the CCD.

.....
: >values in the divider are all equal then you get linear output. But they could
: >just as well be a power law sequence for a power law relationship of output to
: >input or whatever function you like.

: << A well set up CCD--probably needs to be cooled--can put out 14 bit data
: << where all the noise intrinsic to the CCD is in the lowest order bit. CCD's
: << respond linearly to light intensity so 14 bits amounts to 14 doublings of
: << light intensity which is to say 14 stops. ...
: << Mike
: << --
: << Michael McGuire Hewlett Packard Laboratories

: So please tell me what are you trying to say now ?

------------------------------------------------------------------------------
Good juvenile lawyering, Timo, but irrelevant to the subject at hand. The
question there was about the possible dynamic range of CCD's, and not about the
details of the A/D conversion or output encoding. Expressed in linear bits
that's what can be achieved. I was not anticipating cross examination by you,
or I would have pointed out that the 14 linear bits could be encoded with no
perceptual loss with fewer bits with the appropriate non-linear function, as
all the rest of the world knowledgeable about this subject has been trying to
show you.
--------------------------------------------------------------------------------

: In your reply you use the wording "the could just be" in your illusion: "If the
: resistor values in the divider are all equal then you get linear output. But
: they could just as well be a power law sequence for a power law relationship of
: output to input or whatever function you like."

: There are two good reasons why the coding is linear in AD converter:

: (1) It would be rather foolish for the manufacturer to design an AD converter
: accurate enough to detect a very small change at one end of the range and then
: make the rest of the range much looser.

--------------------------------------------------------------------------------
But, stripping away your pejorative verbiage, this is exactly the description
of a possible non-linear A/D, small steps at one end of the scale and
progressively larger to the other end. Obviously such an A/D would be designed
for a particular purpose and not offered for general purpose uses. An
alternative general purpose possibility would be a programmable A/D.
--------------------------------------------------------------------------------

: (2) There are technical problems in the production of AD converters. It is
: relatively easy to produce 8 resistors on the chip that all have accurately
: the same value. In an AD ladder the exact value of the resistors does not
: matter; the value only needs to be the same for each resistor in the ladder.
: This is the basic principle that the production of AD converters relies on. It
: becomes increasingly difficult to produce more than 8 resistors that have
: exactly the same value. This is why higher bit-depth converters are more
: expensive and why few exceed 16-bit accuracy. On the other hand, producing 8
: resistors that each have a different value, such that those values accurately
: follow some non-linear function, would be *very* difficult. There is no
: technology on the horizon that could make this approach feasible; it is much
: easier to produce a larger number of linearly coded bits than the equivalent
: number of non-linearly compressed bits.

-----------------------------------------------------------------------------
But as other posters to this thread have remarked, non-linear A/D's have been
made for and used in high end video cameras. You have conceded below my first
point of my original post, that in cameras and scanners, A/D's are not usually
combined with CCD's--apparently there are a few counter examples. Here we see
my other point that they need not have a linear output. But in the overall
context of this thread, it really doesn't matter. If a linear A/D has
sufficient dynamic range and a step size less than the noise level of the CCD,
then the transformation to non-linear encoding can be done digitally with
no perceptual loss.
------------------------------------------------------------------------------
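Mike's closing point can be sketched numerically: once a linear A/D has delivered enough bits, the nonlinear encoding is just a monotonic lookup table applied to the digital samples. A toy illustration, assuming a 14-bit linear input, an 8-bit output, and a pure power-law curve with gamma 1/2.2 (real cameras use more elaborate curves):

```python
GAMMA = 1 / 2.2  # assumed pure power-law encoding, for illustration only

def encode(linear14: int) -> int:
    """Map a 14-bit linear sample to an 8-bit gamma-encoded code."""
    return round((linear14 / 16383) ** GAMMA * 255)

# The whole transform is a 16384-entry lookup table, and it is monotonic,
# so it can be applied digitally after a plain linear A/D conversion.
lut = [encode(i) for i in range(16384)]
assert all(a <= b for a, b in zip(lut, lut[1:]))

# Fine steps are preserved near black; coarse steps are merged near white.
print(len(set(lut[:100])), len(set(lut[-100:])))
```

The darkest hundred linear inputs spread across many output codes, while the brightest hundred collapse into just a couple, which is exactly the perceptually-motivated allocation of code space.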

: Below are links to AD converter pages of Analog Devices Inc., National
: Semiconductor Corp., and Fujitsu Microelectronic Inc.; if anyone is
: interested, please see the specifications. All the converters are linear. It
: is easiest to just locate the *nonlinearity error* or *linearity* spec.

------------------------------------------------------------------------------
It is completely unsurprising and irrelevant to this discussion that general
purpose A/D's are linear.
------------------------------------------------------------------------------

: If anyone knows on-line specifications of CCD devices could you please post the
: links.

-------------------------------------------------------------------------------
Try digging in the web pages for Sony, Toshiba, or Philips. They all make them.
-------------------------------------------------------------------------------

: And yes, there are both internal and external AD conversions in CCD imaging
: systems. That does not change the fact that the conversion happens linearly,
: inside an integrated circuit where the coding is in the form of physical
: resistor elements on the chip.

----------------------------------------------------------------------------
Irrelevant. I can write programs for a digital computer that computes values
of both linear and non-linear functions. Are the bits that represent these
values linear or non-linear? Non-linear of course, they are either on or off.
Depending on the resistor values, I can go A/D linearly or non-linearly, but
of course the resistors are linear--so what?
-----------------------------------------------------------------------------

: It is also possible (unlikely but possible) that in some very specific camera
: there is non-linear analog signal conditioning between the CCD output and the
: AD converter input. If Mr. Poynton's pages are targeted at such exotic systems
: then in my opinion it would be nice if he could indicate this on these pages.

------------------------------------------------------------------------------
No, the understanding that everyone but you seems to have is that encoding of
images for the same perceptual quality--minimization of contouring and so
on--can be achieved with noticeably fewer bits with the right non-linear
encoding than with linear.
------------------------------------------------------------------------------

Mike
--
Michael McGuire Hewlett Packard Laboratories

Alan Roberts
Feb 27, 1998

Timo Autiokari (tim...@clinet.fi) wrote:

: It is also possible (unlikely but possible) that in some very specific camera
: there is non-linear analog signal conditioning between the CCD output and the
: AD converter input. If Mr. Poynton's pages are targeted at such exotic systems
: then in my opinion it would be nice if he could indicate this on these pages.

You should check your facts more carefully. Most TV cameras sold to
broadcasters have a non-linear circuit between the ccd and the ADC. It
isn't a standard gamma circuit, that's done in the digits, the "pre-gamma"
circuit is used to compress the video signal in a known way, including
knee function to handle high overloads. The precise nature of this curve
is used in the digital processing to notionally recover the linear signal
before applying the required non-linear processing. Cameras are really a
lot more complex than you seem to imagine. Have you actually looked at
one? I've been doing exactly that for 30 years now, and even some domestic
camcorders are processed in this way, hardly exotic.

Timo Autiokari
Feb 27, 1998

On 27 Feb 1998 02:58:02 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) wrote:

>Good juvenile lawyering, Timo, but irrelevant to the subject at hand.

In the article:
http://x4.dejanews.com/getdoc.xp?AN=296741174&CONTEXT=888477590.22675835&hitnum=1
you said:

"CCD's respond linearly to light intensity"

That is not irrelevant to this subject.

>I was not anticipating cross examination by you, or I would have pointed
>out that the 14 linear bits could be encoded with no perceptual loss with
>fewer bits with the appropriate non-linear function, as all the rest of the
>world knowledgeable about this subject has been trying to show you.

I have never said that it couldn't be done. I *have* said that it can be done.
And I have said it is good for TV and video.

Non-linear functions in general have the ability to compress data. This is
rather basic knowledge and has been widely used e.g. in audio.
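The audio analogy is apt: telephone audio has long used mu-law companding (standardized in G.711, with mu = 255), which compresses small amplitudes the same way gamma coding compresses dark image tones. A sketch of the continuous mu-law curve (the standard itself uses a piecewise-linear approximation):

```python
import math

MU = 255.0  # mu-law parameter used in North American/Japanese telephony

def mu_compress(x: float) -> float:
    """Compand a sample in [-1, 1]: small signals get most of the code range."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y: float) -> float:
    """Invert mu_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A signal at 1% of full scale uses over 20% of the companded range,
# and the continuous round trip is exact to floating-point precision.
assert mu_compress(0.01) > 0.2
for x in (-0.5, -0.01, 0.0, 0.01, 0.5, 1.0):
    assert abs(mu_expand(mu_compress(x)) - x) < 1e-12
```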

What I have been saying is that even if it is good in TV and video to encode the
gamma correction and bit-depth compression into the video signal, it is not at
all good for digital photographic imaging.

It should not be difficult to see that when the acquiring device does the gamma
compensation, then the image editor will see the compensated and compressed
"signal" that we, in digital photographic imaging, call the _image_.

In TV and video the signal (image) is suitable for the eye only after the
monitor does the gamma.

Now, Mr. Poynton's humbug about perception misleads people in digital
photographic imaging, badly.

The gamma compensation in TV and video cameras is done because the effective
gamma from the scene to the eye needs to be linear (or very near to linear).

In TV this has been done in the camera since the early days of television. It
was chosen that way in the beginning so that a difficult and expensive purely
analog non-linear correction was not needed in the receivers.

There is no problem in this, since no-one looks at the signal itself. They look
at the picture on the TV, and the CRT first applies the gamma. Only after that
is the signal proper, natural, for the eye again.

So, since all the TV sets have the gamma, the gamma compensation must be done,
today and in the future, in the TV and video camera (not necessarily in the
camera itself, but before sending the signal).

Now, technology has advanced so that the gamma of the television can actually be
useful. The best broadcast quality TV cameras can now acquire 10 bit or 12 bit
data, and because of the non-linearity of the monitor we get another benefit for
free: bit-depth compression.

The problem is that Mr. Poynton says that the "non-linear coding" is done
*because* it gives better image quality. Put this way, it is not true.

The better perception is there for TV and video, but it is the *consequence* of
the improved technology. And for TV and video it is a free benefit, due to the
unavoidable gamma correction that must be done anyway.

The benefit however is only there for TV and video, since there it is the
question of transmission only. The transmitted signal is gamma compensated and
it can be bit-depth compressed.

If the TV signal that is on the transmission path is converted into an image
without applying the same gamma that the CRT applies, the image cannot be
edited properly, because it is in a heavily un-natural form. But this is what
Mr. Poynton is suggesting to everyone, and it misleads people in digital
photographic imaging badly.

Some image manipulation software like Photoshop makes it possible to edit such a
gamma-compensated image. This is why it has the two gamma settings. It shows the
image properly, but the image data is still in gamma space and the image
manipulation operations are done to that data. To see what the data looks like,
just apply a gamma of 1/2.0 to an image that shows properly. Then think about
what the various image editing operations might do with that.
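The thought experiment is easy to run numerically: pushing 8-bit data through a gamma curve cannot hit all 256 output codes, which is exactly what a gappy Photoshop histogram reveals. A sketch using gamma 2.0, as suggested above:

```python
GAMMA = 2.0

# Map every 8-bit code through a 1/2.0 gamma curve, as in the experiment.
out = {round((i / 255) ** (1 / GAMMA) * 255) for i in range(256)}
gaps = sorted(set(range(256)) - out)

# 256 inputs through a nonlinear map cannot cover all 256 outputs:
# codes collide at one end of the scale and leave holes at the other.
print(len(out), len(gaps))
```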

>But, stripping away your pejorative verbiage

>But as other posters to this thread have remarked, non-linear A/D's have been
>made for and used in high end video cameras.

But, but, but, maybe he is kind enough to reveal the type code of this device. I
have worked as a component specialist for 12 years now and have not seen any
info about non-linear AD converters. I'm quite surprised that Mr. Michael
McGuire from Hewlett-Packard Laboratories believes that such a thing exists.

>If a linear A/D has sufficient dynamic range and a step size less than
>the noise level of the CCD, then the transformation to non-linear encoding
>can be done digitally with no perceptual loss.

The above is only partially true. You also need to say *where* the signal (or
data) is perceived. If it is perceived on a TV or on an uncalibrated monitor,
then this holds because the CRT makes the image proper again by applying the
gamma. But again, the image editor perceives the signal (data) before the
monitor, and some rather heavily perceivable problems exist there.

Timo Autiokari

Timo Autiokari
Feb 27, 1998

On 27 Feb 1998 09:26:29 GMT, al...@rd.bbc.co.uk (Alan Roberts) wrote:

>You should check your facts more carefully. Most TV cameras sold to
>broadcasters have a non-linear circuit between the ccd and the ADC.

Firstly, TV and video cameras do not necessarily need an ADC (Analog to Digital
Converter) at all. You should know this, considering your 30 years of
experience. There were no ADCs 30 years ago, but there was television.

TV and video cameras supply the video signal; this signal is analog, not
digital. However, all modern TV and video cameras do have an ADC, even if it is
not obvious.

>It isn't a standard gamma circuit, that's done in the digits, the "pre-gamma"
>circuit is used to compress the video signal in a known way, including
>knee function to handle high overloads.

It was Mr. Poynton who insisted that there is an analog non-linear conversion
between the CCD and the ADC, and the context was that it would do the gamma
compensation.

Now, such an *analog* amplifier that does the *gamma* compensation is a very
difficult one. If there are such devices in cameras today, they are rare.

I do know that there are pre-conditioning circuits. They are piecewise linear,
*not* non-linear. Then there are signal processors for this purpose, and even if
they seem to be analog devices (in that they have an analog input for the CCD
and analog video outputs), they actually have a flash ADC inside them. Because
of the speed that is needed for video they are often only 6 bit or 8 bit
devices. There are fast 10 bit and 12 bit flash converters, but they are very
expensive; such are only used in broadcast quality systems. Then there are other
rather ingenious methods for achieving the "no missing codes" that is often seen
in the specs.

Do you know what happens to the CCD signal (information) when it goes through
such an analog-digital-analog device (or a chain of them)? Have you ever seen a
spec for those devices? Their overall error is around 1% for a high quality
device. For lower grade devices the error is usually expressed (for some reason)
in decibels, and a value of 1 dB seems to be quite common; that translates to
roughly 10% overall error.

1% is equal to 2.5 levels in 8-bit linearly coded light, and 10% is equal to 25
levels. Now compare: Mr. Poynton is worried about "perception" issues below
1 level (0.4%) of linearly coded light, while this small portion of his signal
path generates 1% errors in a broadcast quality system and some 10% errors in a
consumer grade system.

>The precise nature of this curve is used in the digital processing to notionally
>recover the linear signal before applying the required non-linear processing.

Yes. As I explained above. And it generates large errors.

>Cameras are really a lot more complex than you seem to imagine. Have you
>actually looked at one? I've been doing exactly that for 30 years now, and
>even some domestic camcorders are processed in this way, hardly exotic.

Again: an *analog* amplifier that does the *gamma* compensation is a very
difficult one. If there are such devices, they are rare. That was the context of
Mr. Poynton's claim.

Congratulations, you have got one thing right: as you say, "even some domestic
camcorders are processed in this way".

_Some_ domestic camcorders do. Some others do not.

But Mr Poynton says his pages are suitable for anyone.

And the issue is not about TV or video cameras. My question was whether Mr.
Poynton's pages are applicable to digital photographic imaging, where we use
digital cameras. There are such things too; even though they are cameras, they
are neither TV nor video cameras.

And the digital cameras do not have video circuitry inside them. If they provide
gamma-compensated images then it is done by software. If the ADC in the camera
is 8-bit, then nothing is gained; the images are only damaged.

In my other message in this thread there is a simple procedure to see whether a
camera provides such gamma-compensated images, in case anyone is interested in
experimenting a bit.

Timo Autiokari

Ed Ellers
Feb 27, 1998

Alan Roberts wrote:

"Cameras are really a lot more complex than you seem to imagine."

At least they are if they're any good. :-)

Stephen H. Westin
Feb 28, 1998

tim...@clinet.fi (Timo Autiokari) writes:

<snip>

> Again: an *analog* amplifier that does the *gamma* compensation is very
> difficult one. If there is such devices they are rare. That was the context by
> Mr. Poynton.

Well, thirty years ago, these amplifiers were ubiquitous; do you
really think that broadcast TV cameras have been digital since the
'40s? After all, they all have built-in gamma correction.

<snip>

> And the digital cameras do not have video circuitry inside them. If they provide
> gamma compensated images then it is done by software. If the ADC in the camera
> is 8bit then nothing is gained only the images are damaged.

You keep saying this; where did you find this out?

<snip>

I still find it astonishing that you keep on arguing with people of
the stature of Mr. Roberts and Mr. Poynton, with no authoritative
references to back you up. Roberts and Poynton can serve as their
*own* authoritative references.

JG Smith
Mar 1, 1998

tim...@clinet.fi (Timo Autiokari) wrote:

>Dear Mr. Poynton,
>
>What is the purpose of your GammaFAQ? ... ... etc., etc.
>

This is a very amusing thread. Judging from the commentary, apparently
neither side understands what the other is talking about.

Rather typical of "appointed" authority (as compared to "recognized"
authority) in the academic field. Leaves one to wonder if either knows
what he's talking about ... hmmm!

Alan Roberts
Mar 2, 1998

Timo Autiokari (tim...@clinet.fi) wrote:

: On 27 Feb 1998 09:26:29 GMT, al...@rd.bbc.co.uk (Alan Roberts) wrote:
:
: >You should check your facts more carefully. Most TV cameras sold to
: >broadcasters have a non-linear circuit between the ccd and the ADC.
:
: Firstly, TV and Video cameras necessarily need *not* an ADC (Analog Digital
: Converter) at all. You should know considering your 30 year experience. There
: was no ADCs 30 year ago, but there was the television.
:
: TV and video cameras supply the video signal, this signal is analog, not a
: digital one. However all modern TV and video cameras do have ADC even if it is
: not obvious.

Oh, dear, I say to you again, check your facts. You are wrong. Quite wrong.
This is getting rather tedious, do you ever listen to anyone?
TV cameras come in all varieties, as I have been trying to tell you, you
cannot make blanket statements about them. They start with vhs camcorders
and end at studio cameras, each of which may be analogue or digital at any
stage of the processing. The latest breed of DOMESTIC cameras are digital,
but they still have analogue preprocessing so that the digital processing
is easier. If you can't understand that, I suggest you go away and read
the manuals on some of them. That's how I got to understand it, why can't you?

: It was Mr Poynton that insists that there is an analog non-linear conversion
: between CCD and ADC and the context was that it would do the gamma compensation.
:
: Now such an *analog* amplifier that does the *gamma* compensation is very
: difficult one. If there is such devices in cameras today, they are rare.

Nonsense, it's very easy. EVERY tv camera has one. Why not check your facts?
Is that so hard to do?

: I do know that there is pre-conditioning circuits. They are piecewise linear
: *not* non-linear.

Again, nonsense, they are not piecewise linear. Check the circuit diagrams.

: Then there are signal processors for this purpose and even if
: they seem to be analog devices (so that they have analog input for the CCD and
: analog video outputs) the actually have a flash ADC inside them. Because of the
: speed that is needed with video they are often only 6 bit or 8 bit devices.
: There are fast 10 bit and 12 bit flashs but they are very expensive. Such are
: only used in broadcast quality systems. Then there are other rather genius
: methods in achieving the "no missing codes" that is often seen on the specs.

Not so. Check your facts. There are truly analogue circuits that do this,
and they are in common usage in tv cameras. Your depth of misunderstanding
is breathtaking.

: >Cameras are really a lot more complex than you seem to imagine. Have you
: >actually looked at one? I've been doing exactly that for 30 years now, and
: >even some domestic camcorders are processed in this way, hardly exotic.

:
: Again: an *analog* amplifier that does the *gamma* compensation is very
: difficult one. If there is such devices they are rare. That was the context by
: Mr. Poynton.

Nonsense. When will you try to find out the truth by actually looking at real
cameras instead of inventing difficulties?

Alan Roberts
Mar 2, 1998

Stephen H. Westin (westin*nos...@graphics.cornell.edu) wrote:

: I still find it astonishing that you keep on arguing with people of
: the stature of Mr. Roberts and Mr. Poynton, with no authoritative
: references to back you up. Roberts and Poynton can serve as their
: *own* authoritative references.

Thanks Stephen, it makes a change to be recognised.

I intend to retreat from this thread now, clearly Timo has made his mind up
how the universe works, and will not be dissuaded.

Walter Hafner
Mar 2, 1998

tim...@clinet.fi (Timo Autiokari) writes:

> If anyone knows on-line specifications of CCD devices could you please post the
> links.

Sure. Have a look at:

http://www.photomet.com/

I think it's a very good page! CCD technology explained to the max.

Under http://www.photomet.com/ref/refgain.html the "gain" is described:
Photometrics cameras are indeed linear by default (this is an exception), but
the gain can be changed up to 4x in hardware (at least that's my interpretation
of the page).

The term "linearity" (http://www.photomet.com/ref/reflin.html) on the
Photometrics pages refers to a different concept. I quote:

: Hence, non-linearity is a measure of
: the deviation from the following relationship:
: Digital Signal = Constant x Amount of Incident Light

-Walter

--
Walter Hafner_____________________________ haf...@forwiss.tu-muenchen.de
<A href=http://www.forwiss.tu-muenchen.de/~hafner/>*CLICK*</A>
The best observation I can make is that the BSD Daemon logo is _much_
cooler than that Penguin :-) (Donald Whiteside)

Dave Martindale
Mar 2, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>It is also possible (unlikely but possible) that in some very specific camera
>there is an non-liner analog signal conditioning between the CCD output and the
>AD converter input. If Mr. Poynton's pages are targeted for such exotic systems
>then in my opinion it would be nice if he could say indicate this on these
>pages.

Is a frame grabber digitizing the output of an analog video camera an
"exotic system"? The video camera is required to apply a non-linear
transform between the output of the CCD and the video voltage representing
light intensity, since this non-linear transformation is part of the
specification of the video signal for both NTSC and PAL. (The transform
is called "gamma correction"). The frame grabber simply does A/D conversion
on the video signal, so the non-linear transfer characteristic is built
into the digital sample values as well.
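The transform Dave refers to is written into the video standards themselves. The Rec. 709 form (a close relative of the NTSC/PAL camera practice) is a power law with a small linear segment near black; shown here as a sketch:

```python
# Rec. 709 opto-electronic transfer function ("gamma correction") as
# specified for studio video: a 0.45 power law with a linear segment
# near black. NTSC/PAL camera practice is closely related.
def rec709_oetf(L: float) -> float:
    """Scene light L in [0, 1] -> video signal V in [0, 1]."""
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

# The two segments meet (to within rounding of the published constants)
# at the breakpoint, and reference white maps to full signal.
assert abs(4.5 * 0.018 - rec709_oetf(0.018)) < 1e-3
assert abs(rec709_oetf(1.0) - 1.0) < 1e-9
```

The linear toe exists precisely because a pure power law has infinite gain at zero, which would amplify sensor noise near black.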

In addition, flatbed scanners and at least some digital cameras perform
their own sort of gamma correction on the voltages coming from the
CCD. They have to, or the images would look poor on most PCs.

>And yes, there are both internal and external AD conversions with CCD imaging
>systems. It does not change the fact that the conversion happens linearly,
>inside an integrated circuit where the coding is in the from of physical
>resistor elements on the chip.

But how the A/D conversion itself is performed is essentially irrelevant.
What you *care* about is how the digital sample values are related to
the original light intensity seen by the CCD. And it is a simple,
measurable fact of life that many image sensors have some sort of
nonlinear transform built in. In some cases, it is a nonlinear analog
circuit between the CCD and the A/D. In other cases, the A/D is directly
digitizing the CCD output to 12 or 14 bits, but this data is passed through
a lookup table to produce 8 or 10 bits of data to be stored in the image
file. The nonlinear transfer function is performed by the lookup table.

Timo, I'll happily believe that if you build a CCD camera the output
will be linearly proportional to intensity. But that just isn't the way
most real cameras and scanners are built, for a variety of good reasons.

Dave

Dave Martindale
Mar 2, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>Here is a simple way to see if there is a non-linear analog signal conditioning
>in the camera:
>
>-open any decently exposed image into Photoshop that is acquired using a
>8bit/color CCD device using a gamma setting other than 1.0 in the acquire module
>(in case there is such setting) .
>-choose Image/Histogram.
>-in Photoshop the Luminosity channel is smoothed, so select the red, green or
>blue from the drop-down box.
>
>Now, if you do not see gaps in the histogram then there is the non-linear analog
>signal conditioning in the camera or the scanner. But if you do see the gaps
>then the data is just modified by the software of the camera or scanner. To
>verify, open a couple of other images and do the same to see that the gaps
>appear generally in the same places.

This test is useless. It will tell you if an image has been modified
by a particular sort of lookup table that is altering the gamma by mapping
8-bit input values to 8-bit output values. But if the processing has
been done with samples that are wider than this, you won't see the
discontinuities in the histogram.
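Dave's objection can be demonstrated directly. A sketch comparing the two pipelines, assuming a 14-bit linear source and a pure gamma-1/2.2 curve (an illustrative simplification of real camera curves):

```python
GAMMA = 1 / 2.2
N = 16384  # e.g. 14-bit samples straight off a linear A/D

# Pipeline A (what the histogram test detects): reduce to 8-bit linear
# first, then apply the gamma curve as an 8-to-8-bit lookup table.
narrow = {round((j / 255) ** GAMMA * 255) for j in range(256)}

# Pipeline B (Dave's point): apply the curve to the wide samples,
# then reduce to 8 bits.
wide = {round((i / (N - 1)) ** GAMMA * 255) for i in range(N)}

# Pipeline A leaves dozens of unused codes (histogram gaps); pipeline B
# leaves only a couple, at the very darkest codes that even 14 linear
# bits cannot reach.
print(256 - len(narrow), 256 - len(wide))
```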

For example, I have worked with a particular image in the following
sequence:

- the image was photographed on film, with a gamma of 0.6 relative to
the original scene.

- the film was digitized with a CCD and 14 bit A/D convertor

- the output of the A/D convertor was run through a lookup table to
produce a 10-bit value that is proportional to film density, then
these values are stored in image file #1. (This is a nonlinear
transform)

- this file was read in by another program which converted from negative
density space back into an approximation of the light intensity in the
original scene. This new image was written with 16 bits per sample.
(Another nonlinear transform)

- File #2 was read into Photoshop in 16-bit mode. A gamma correction
factor was applied using the Levels control panel. In addition,
a small fraction of the 16-bit intensity space is expanded to fill
the whole range (equivalent to about a 3 f-stop exposure change).

- finally, the image was converted to 8 bits per sample and written out
again.

If you were to look at a histogram of the 10-bit data or the 16-bit
data with a tool that showed you all 1024 or 65536 "bins", you would
see discontinuities caused by using lookup tables for the nonlinear
transforms. But viewed in Photoshop, with its histograms that always
have 256 bins, you won't see anything. And the final resulting image,
despite having undergone *three* nonlinear transforms, has a nice
smooth histogram. This is because the nonlinear operations were done
with enough bits that the quantization errors remain well below the
size of the errors caused by the 8-bit output format itself.

In fact, I'd argue that *all* image processing should be done with
more than 8 bits, using 8 bits per sample only for storing final images
that will not undergo further processing.

Dave

Dave Martindale
Mar 2, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>Non-linear functions in general have the ability to compress data. This is
>rather basic knowledge and has been widely used e.g. in audio.
>
>What I have been saying is that even if it is good in TV and video to encode the
>gamma correction and bit-depth compression into the video signal, it is not at
>all good for digital photographic imaging.

Gamma correction in video turns out to be good for two things in video.
The first one, as you note, is to compensate for the nonlinearity of the
CRT. This compensation has to be done somewhere, and doing it in the
camera (of which there are few) is cheaper than doing it in the TV set
(of which there are many).

But it *also* turns out that gamma correction is closely related to the
nonlinear way that the human visual system perceives intensity. Gamma
correction causes more of the video voltage range (or code space, in
digital video) to be used for the darker portions of the picture and
less for the brighter parts of the picture, which matches the eye's
response. The net effect is that we can *usually* represent an image
without banding artifacts (Mach bands) using only 8 bits per sample
using gamma correction, while if the samples were linearly proportional
to intensity we'd need about 12 bits for the same performance.

And this second advantage has nothing at all to do with video, or with
the ultimate display device. It simply says that nonlinear representations
need fewer bits for the same quality, or give better quality with the same
number of bits, than a linear representation.
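The 8-versus-12-bit claim comes down to the worst-case brightness ratio between adjacent codes. A sketch comparing 8-bit linear coding with 8-bit gamma-2.2 coding over the usable range, ignoring the nearly-black codes (the 1% cutoff here is an illustrative assumption):

```python
# Worst-case relative brightness step between adjacent codes, for codes
# at or above 1% of white.
def max_relative_step(decode):
    levels = [decode(i) for i in range(256)]
    return max(
        (b - a) / a
        for a, b in zip(levels, levels[1:])
        if a >= 0.01  # ignore the nearly-black codes
    )

linear_step = max_relative_step(lambda i: i / 255)          # linear light
gamma_step = max_relative_step(lambda i: (i / 255) ** 2.2)  # gamma coded

# Against a ~1% visibility threshold, linear 8-bit coding has steps of
# roughly a third of the level near the dark end; gamma coding stays
# several times smaller at the same point.
print(f"{linear_step:.3f} {gamma_step:.3f}")
```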

In addition, there is nothing wrong with this. Nowhere is there a stone
tablet with a commandment saying that the digital sample values in an image
file should be linearly related to intensity. And, in fact, few real
images really use a linear encoding for storing pixels. I do it sometimes,
but I'm usually careful to use about 16 bits per sample to avoid creating
artifacts. Sometimes I use 32-bit floating point for sample values.
It works great! But it's not efficient for storage.

>Now, Mr. Poynton's humbug about the perception mis-leads people in digital
>photographic imaging, badly.

No he doesn't. The *only* place where it is common for the sample values
stored in image files to be linearly proportional to intensity is in
computer graphics. And that's mostly because the graphics people don't
know any better.

>The problem is that Mr. Poynton says that the "non-linear coding" is done
>*because* it gives better image quality. This is not true, this way.

No, it *is* true. See above.

>If the TV signal that is on the transmission path is converted into image
>without applying the same gamma that the CRT applies, the image can not be
>edited properly. Because it is heavily un-natural form.

What do you mean by "cannot be edited properly", and "un-natural"?

Do you mean that pixel values are no longer linearly related to light
intensity, and thus standard linear image processing operations like
blur, sharpen, and resize will not give the correct answer when applied
to these nonlinear images? If so, you're right.

There are two approaches for dealing with this. One is to realize that
the values in the file are just numbers, not light intensity, and you
can convert the numbers to (linear) intensity any time you want. So you
can take your 8-bit gamma-corrected image, convert it to a linear form
(remember to use at least 12 bits to avoid artifacts), do your image
processing operation, then re-convert the pixel values to gamma-corrected
8 bit samples. This adds only a little bit of roundoff error, and the
whole process has a lot less error than you would get by converting the
image to 8 bit linear and working with it in that space. (Though using
12 bit linear everywhere would be slightly better yet).
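The round trip described above is nearly lossless even before any processing happens in between. A sketch using 16 linear bits (comfortably above the 12-bit floor suggested above):

```python
GAMMA = 2.2
BITS = 16                  # linear working depth, an assumption here
TOP = (1 << BITS) - 1

def to_linear(code8: int) -> int:
    """Decode an 8-bit gamma-coded pixel to a wide linear integer."""
    return round((code8 / 255) ** GAMMA * TOP)

def to_gamma8(lin: int) -> int:
    """Re-encode a wide linear integer back to an 8-bit gamma code."""
    return round((lin / TOP) ** (1 / GAMMA) * 255)

# Round trip all 256 codes: at most the single darkest nonzero code
# (about 5e-6 of white in linear terms) falls below the linear floor;
# everything else survives exactly.
errors = [abs(to_gamma8(to_linear(c)) - c) for c in range(256)]
print(sum(e != 0 for e in errors), max(errors))
```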

The other approach is to apply linear image processing operations to
the 8-bit pixels even though it is mathematically incorrect. This is
commonly done in video. When the process is done within a control
loop, with a human operator looking at the result and modifying the
parameters until they see what they like, this works quite well.
It doesn't matter much if the math is wrong, since nobody is making
measurements from the images - they just want something that looks good.

>Some image manipulation software like Photoshop makes it possible to edit such
>gamma compensated image. This is why it has the two gamma settings. It shows the
>image properly but the image data is still in gamma space and the image
>manipulation operations are done to the data.

Yes, this is approach #2 above.

>To see how the data looks like,
>just apply a gamma 1/2.0 to a image that show properly. Then think what the
>various image editing operations might do with that.

No, the data is intended to be viewed just as you see it. When you apply
the 0.5 gamma to the image, *you* are distorting the image to something
that it is not intended to be.

Timo, how old are you? Forgive me if this doesn't apply to you, but
you sound exactly like an undergrad who has always assumed that sample
values in images files should be linearly related to intensity because
that's the mathematically *right* way to do it. And you defend with
great vigour the purity of your conception of how images *ought* to
be stored in data files.

Sadly, it just isn't this simple. If you used floating-point for
storage and computation, a linear representation would be natural.
But real physical devices have to be built with no more bits than
necessary in order to keep the costs down, and nonlinear encoding
of pixels provides a way to maintain quality with fewer bits.
There are good reasons for this, mostly because our vision is
itself nonlinear. As you learn about perception and image processing,
you will see that there is no perfect way to do something. There are
just a bunch of tradeoffs, and all you can do is make intelligent ones.

In some cases, we are stuck with tradeoffs that made sense at the time
but no longer do. (e.g. using 20% of the video bandwidth to transmit
non-image information). In other cases, the original design was done
by people who really did understand the tradeoffs, and they are still
valid today.

Dave

Michael McGuire
Mar 3, 1998

...... snipping Timo's usual hymn to linearity

: >But, stripping away your pejorative verbiage
: >But as other posters to this thread have remarked, non-linear A/D's have
: >been made for and used in high end video cameras.

: But, but, but, maybe he is kind enough to reveal the type code of this
: device. I have worked as a component specialist for 12 years now and have not
: seen any info about non-linear AD converters. I'm quite surprised that
: Mr. Michael McGuire from Hewlett-Packard Laboratories believes that such a
: thing exists.

------------------------------------------------------------------------------
12 years a component specialist and never heard of an ASIC. That's an
Application Specific Integrated Circuit, to enlighten him. One of the
competitive advantages we have here is the ability to make our own chips. We
generally do not make them available or publish their specifications.
------------------------------------------------------------------------------

: >If a linear A/D has sufficient dynamic range and a step size less than
: >the noise level of the CCD, then the transformation to non-linear encoding
: >can be done digitally with no perceptual loss.

: The above is only partially true. You also need to say *where* the signal
: (or data) is perceived. If it is perceived on a TV or an uncalibrated
: monitor, then this is so because the CRT makes the image proper again by
: applying the gamma.
>>>: And again, the image editor perceives the signal (data) before the
>>>: monitor, and some rather heavy perceivable problems exist there.

-------------------------------------------------------------------------------
This last sentence is not true if one considers the most likely destinations
of the image or the most likely operations in an image editor that might cause
problems. If the destination is a CRT, then Timo apparently already agrees
that the correction for gamma = 2.2 is correct, especially if initially done
at higher bit depth. Now consider taking this to a Mac system where the
gamma = 1.8. The necessary correction from 2.2 -> 1.8 is gamma = 1.1. This is
much gentler and more accurate at 8 bits than banging it all the way from
gamma = 1.0.

The other likely destination of the image is a printer. All the printers I
have tested, and that's quite a few, at least at their native dot resolution,
have a response similar in shape to a CRT but usually somewhat more radical.
This is a consequence of round dots having to completely cover square pixels
to avoid gaps in solid fills. Put down dots covering 1/4 of the pixels, but
not overlapping, and you get pi/8 coverage of the paper, not 1/4. The
correction curve for a printer lies somewhere above that for a gamma = 2.2
monitor. Again, going there from being already accurately corrected for
gamma = 2.2 is going to be gentler and more accurate than going there from 1.0.

This leaves the question of what might be done in an image editor, keeping in
mind these destinations for the image. Of course anything can be done, but
likely corrections to normal images are linear stretching or compression of
the tonal scale and gamma-curve-like corrections to enhance contrast in the
shadows or highlights. Any of these operations can produce contouring if
applied strongly enough, but they will do it sooner if applied to a gamma 1.0
image which is then corrected for screen or printer destinations.
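McGuire's pi/8 figure above follows from one line of geometry: a round dot that completely covers a square pixel is the circumscribed circle, whose area is pi/2 times the pixel's. A quick sketch of the arithmetic:

```python
import math

# A round dot covering a square pixel of side s completely must be the
# circumscribed circle: diameter s*sqrt(2), area pi*(s*sqrt(2)/2)**2,
# i.e. (pi/2) times the pixel area.
dot_area_per_pixel = math.pi / 2

# Ink 1/4 of the pixels with such dots, none overlapping:
coverage = 0.25 * dot_area_per_pixel
print(coverage)  # pi/8, about 0.39 of the paper rather than 0.25
```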

I don't expect Timo will agree with much of this any more than he has with
Alan Roberts or Charles Poynton, but then he doesn't agree with anybody.

Valburg
Mar 3, 1998

To all the extremely knowledgeable gentlemen weighing in on this thread,
I say thank you! This has been very stimulating and informative.

Could one of you answer a couple of questions which are, perhaps, of
practical interest to those of us just working on still images with
commonly available tools, and not involved in R&D in this field?

The first is a repeat of a question I posed previously: Could you tell
us whether there is a way to determine the native gamma
of a particular scanner model, in order to avoid reducing the 8 bits
worth of information by adjusting the gamma or midpoint to another
setting? Or is this, perhaps, a concern of more significance in theory
than in practice; that is, perhaps the extent of mid-point adjustments
commonly practiced is gentle enough to make little difference in
image quality (loss of "bins" through quantization error)?

Is there commonly available image manipulation software which allows
working at bit-depths greater than 8 and then reducing bit-depth to 8
for output or storage (as mentioned most recently in posts by Dave
Martindale)?

Thanks!

Mitch Valburg


Walter Hafner
Mar 3, 1998

Valburg <lk...@psu.edu> writes:

[snip]


> The first is a repeat of a question I posed previously: Could you tell
> us whether there is a way to determine the native gamma
> of a particular scanner model, in order to avoid reducing the 8 bits

[snap]

Related question:

Is the method of finding the gamma-factor of particular displays as
described in

http://www.povray.org/binaries/ (last paragraph)

any good?

Timo Autiokari
Mar 3, 1998

On 2 Mar 1998 09:40:48 GMT, al...@rd.bbc.co.uk (Alan Roberts) wrote:

>Stephen H. Westin (westin*nos...@graphics.cornell.edu) wrote:
>
>: I still find it astonishing that you keep on arguing with people of
>: the stature of Mr. Roberts and Mr. Poynton, with no authoritative
>: references to back you up. Roberts and Poynton can serve as their
>: *own* authoritative references.
>
>Thanks Stephen, it makes a change to be recognised.

At least the gentlemen have been authoring the colorspace-faq together:
http://www.altavista.digital.com/cgi-bin/query?pg=aq&what=web&kl=XX&q=%22Alan+Roberts%22+and+%22Charles+A.+Poynton%22&r=&d0=21%2FMar%2F86&d1=&search.x=65&search.y=8
So there is nothing special there if they both see the issue similarly.

>I intend to retreat from this thread now, clearly Timo has made his mind up
>how the universe works, and will not be dissuaded.

Not the universe. I'm just trying to explain that there is much more that
needs to be considered in imaging than just video. And what is good for TV
and video is not good for imaging.

Please see e.g. http://www-s.ti.com/sc/psheets/soca010/soca010.pdf . The title
of the Texas Instruments Application Report is "CCD Image Sensors and
Analog-to-Digital Conversion". Read it folks, it provides some basics for this
discussion. It says that a typical CCD has a dynamic range of 60dB, which is
equal to 10 linear bits, and that with double sampling it can be increased to
73dB (13 linear bits). But often a 6 bit or 8 bit converter is used, so there
will be analog pre-conditioning before the linear AD conversion is done. Yes,
6 or 8 linear bits, that's 48dB or less. Because of the pre-conditioning the
gamma can be calculated into the signal using such a low digital resolution.
This is an accuracy trade-off. And analog signal processing is not at all
accurate, so imaging systems that have such analog processing are not useful.
They do provide the "no missing codes" and such, but it is more error than
actually captured data.
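The dB figures convert to equivalent linear bits at roughly 6.02 dB per bit (20*log10(2), the amplitude convention); a sketch of the conversion. Note that by this rule 73 dB works out to just over 12 bits, so the 13-bit figure is a generous rounding:

```python
import math

def db_to_bits(db):
    """Equivalent linear bits for a dynamic range given in dB,
    using the amplitude convention of 20*log10(2) ~ 6.02 dB per bit."""
    return db / (20 * math.log10(2))

print(round(db_to_bits(60)))  # 10, matching the 60 dB figure
print(db_to_bits(73))         # just over 12
```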

The document, btw, was written in '93 but has a '96 copyright. Today's main
improvement in video signal processing is the 12 bit converters that are
becoming available in the higher end consumer video cameras. So inaccurate
pre-conditioning is not needed.

But the problem is that Mr. Poynton's terminology "more perceptual coding" is
misleading if applied to image editing. No one is viewing the video signal as
it appears in the middle of the transfer path.

And the whole issue is simply to convert the linear-light into a gamma
compensated signal with minimum errors, and if this is done with the latest
technology then the *result* of the gamma compensation will give a free
benefit, bit-depth compression. But Mr. Poynton calls it "more perceptual
coding".

It is "more perceptual" only after the CRT applies the gamma.

This gamma *is* good for TV and video since the CRT will UNcode the coding and
the image is then good for the eye. But if the image is to be edited then
gamma compensation is not good at all. The image will be in a "coded" state;
it is not natural and it is not good for the eye nor for image editing in any
sense. Editing "coded" images results in poor performance.

Timo Autiokari


Stephen H. Westin
Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:

<snip>

> Please see e.g.
> http://www-s.ti.com/sc/psheets/soca010/soca010.pdf. The title of the
> Texas Instruments Application Report is "CCD Image Sensors and
> Analog-to-Digital Conversion". Read it folks, it provides some
> basics for this discussion. It says that a typical CCD has a
> dynamic range of 60dB, which is equal to 10 linear bits, and that
> with double sampling it can be increased to 73dB (13 linear bits).

And the camera I am using most at the moment has a chilled sensor, so
we see most of 12 bits out of it. So?

> But often a 6 bit or 8 bit converter is used, so there will be analog
> pre-conditioning before the linear AD conversion is done. Yes, 6 or 8 linear
> bits that's 48dB or less. Because of the pre-conditioning the gamma can be
> calculated into the signal using such a low digital resolution. This is an
> accuracy trade-off. And analog signal processing is not at all accurate so
> imaging systems that have such analog processing are not useful.

Again, "imaging systems that have such analog processing" have been
widely used over the last 60 years or so. Which makes me think that
they might just be useful. At least slightly :)

<snip>

> But, the problem is that Mr. Poynton's terminology "more perceptual
> coding" is misleading if applied to image editing. No one is viewing
> the video signal as it appears in the middle of the transfer path.

True. For most processing, you will want to linearize the signal. And
then, quite possibly, re-correct for display and storage.

> And the whole issue is simply to convert the linear-light into a gamma
> compensated signal with minimum errors and if this is done with the
> latest technology then the *result* of the gamma compensation
> will give a free benefit, bit-depth compression. But Mr. Poynton
> calls it "more perceptual coding".

Which it is. Quantization steps are smaller for low brightness, which
is a Good Thing perceptually.

> It is "more perceptual" only after the CRT applies the gamma.

No, it's more perceptual, period. Some systems (e.g. Kodak Cineon) use
a logarithmic-based coding to achieve the same result, though it's not
correct for any CRT.

> This gamma *is* good for TV and video since the CRT will UNcode the
> coding and the image is then good for the eye. But if the image is
> to be edited then gamma compensation is not good at all. The image
> will be in "coded" state, it is not natural and it is not good for
> the eye nor for image editing in any sense.

Yes, it *is* good for the eye. Linear quantization will always, for a
given number of levels, show more quantization artifacts than a
gamma-like quantization.

Look up the CIE L*a*b* system; in an effort to model the visual
system, it uses a cube root transform on luminance. Which equates to
correction for a monitor gamma of 3.0.
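The cube-root transform Westin mentions is the CIE 1976 lightness function; a sketch of the standard formula (including the linear segment near black that the cube root alone omits):

```python
def cie_lightness(y):
    """CIE 1976 L* as a function of relative luminance y = Y/Yn in [0, 1]."""
    if y > (6 / 29) ** 3:
        return 116.0 * y ** (1.0 / 3.0) - 16.0
    return (29 / 3) ** 3 * y  # linear segment very near black

print(cie_lightness(1.0))   # 100.0 for reference white
print(cie_lightness(0.18))  # roughly 50: an 18% grey card is mid-lightness
```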

> Editing "coded" images
> results in poor performance.

Except in the special case of Gaussian filter kernels; these simply
get narrower or wider as a result of gamma correction or its removal.

If you're trying to say that linear intensity is probably the domain
in which you want to process images, I don't think anyone will argue
with you.

Dave Martindale
Mar 3, 1998

westin*nos...@graphics.cornell.edu (Stephen H. Westin) writes:
>If you're trying to say that linear intensity is probably the domain
>in which you want to process images, I don't think anyone will argue
>with you.

I think the key observation in all this is that you don't have to *process*
images using the same encoding of (real number) intensity into (integer)
sample codes that you use to *store* images.

Just because Photoshop does arithmetic on whatever sample codes are stored
in the file doesn't mean that all image processing is, or should be, done
this way.

Another (but less important) observation is that if you are only going to
store 8 bits per sample, you *must not* use a linear encoding of intensity
into sample value because it will cause quantization artifacts in the
dark areas of the image, for typical image brightness range. If you
do convert to a linear space for processing, you should use *at least*
12 bits per sample.

8 bit linear is bad, awful, ugly, and there is no excuse for using it.
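The arithmetic behind this: in an 8-bit linear code, adjacent dark codes differ by large intensity ratios, while a gamma-style code keeps the ratio between neighbours small. A sketch, assuming a pure 2.2 power law:

```python
# Intensity ratio between adjacent codes near black, 8-bit linear vs.
# 8-bit gamma encoded (pure 2.2 power law assumed for illustration).
def linear_ratio(code):
    """Ratio of intensities represented by codes (code+1) and code."""
    return (code + 1) / code

def gamma_ratio(code, gamma=2.2):
    """Same ratio when the codes are gamma-encoded sample values."""
    return ((code + 1) / code) ** gamma

# Gamma code 25 decodes to roughly the same intensity as linear code 1.5,
# so these two steps sit at about the same shadow luminance:
print(linear_ratio(3))   # 1.33: a 33% jump between adjacent dark codes
print(gamma_ratio(25))   # about 1.09: a far gentler step
```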

Dave

Timo Autiokari
Mar 3, 1998

On 2 Mar 1998 19:58:23 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:

> And it is a simple, measurable fact of life that many image sensors
>have some sort of nonlinear transform built in.

CCD's are quite linear.

>In some cases, it is a nonlinear analog circuit between the CCD and the A/D.

The "nonlinear analog circuits" in video are most often devices that have both
analog inputs and analog outputs, but *internally* they have a 6 bit or 8 bit
flash AD converter (and in broadcast quality systems even higher resolution
ADCs). They can also have other signal conditioning like automatic gain
control etc. So they just appear to be analog, but if you look into the
specifications you can easily see that they are not. They are so-called mixed
signal devices. These devices do not provide digital output, so if one likes
to have digitized data out from such a video camera the video signal then
needs to be converted.

Only way back before the digital and CCD era was video processed purely in
analog ways. It helped that the imager tubes, diode arrays etc. of those days
had non-linear characteristics of their own, but the pure analog signal
conditioning was difficult indeed and very inaccurate.

>In other cases, the A/D is directly digitizing the CCD output to 12 or 14
>bits, but this data is passed through a lookup table to produce 8 or 10 bits
>of data to be stored in the image file. The nonlinear transfer function is
>performed by the lookup table.

Yes, nowadays these are becoming available, and this is a very good
improvement for video quality, since the mixed signal pre-conditioning and
processing components are not at all accurate.

>Timo, I'll happily believe that if you build a CCD camera the output
>will be linearly proportional to intensity. But that just isn't the way
>most real cameras and scanners are built, for a variety of good reasons.

I cannot see any real reason why the linear space is not allowed.

Maybe the manufacturers want to keep the border wide enough between high-end
and consumer grade systems. Most of the high-end systems allow the linear
space. They need to, otherwise the pre-press people would not buy them.

Timo Autiokari

Timo Autiokari
Mar 3, 1998

On Tue, 03 Mar 1998 10:02:23 -0500, Valburg <lk...@psu.edu> wrote:

>Could you tell us whether there is a way to determine the native gamma
>of a particular scanner model,

If it has a CCD-rod then the native gamma is 1.0.

To check a scanner or a camera:

1. Scan a good image that also has a lot of shades of black (shadows). Open it
in Photoshop (some other software may not be able to show the histogram
accurately enough; Photoshop does).
2. Choose Image/Histogram.
3. In Photoshop the Luminosity channel in Histogram is smoothed, so select
red, green or blue from the drop-down box.

If you do not see any gaps in the red, green and blue histograms and there is
no spiking (hedgehog) either, then you have a linear setting.

If you see gaps, then the data is being modified by the software of the
scanner. To verify, scan a couple of other images and do the same, to see that
the gaps appear generally in the same places.

If you do not see gaps (or there are only one or two of them) but there is
spiking somewhere on the curve, then there is either non-linear "analog"
signal conditioning hardware in the scanner or you have some 10 bit or better
scanner. Again, to verify, scan a couple of other images and do the same, to
see that the spiking appears generally in the same places.

If you have a setting for the gamma in the scanner software, then adjust it
until no gaps nor spiking are seen. It should be rather linear then.

The "analog" signal processing can be easily detected by scanning the same
target three or more times and then doing subtractions in Photoshop. The error
of such an "analog" signal conditioning circuit is often in the range of
several percent, and it can be easily found since most of it is random, so
between the identical scans you can detect errors of some 2 to 20 levels. In
the case of direct AD conversion from the CCD, the error between scans should
be below detection.
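The histogram inspection Timo describes can be roughed out in code; this is a sketch only, assuming an 8-bit single channel supplied as a flat list of values, with arbitrary (uncalibrated) thresholds:

```python
def histogram_defects(pixels):
    """Count empty bins (gaps) and outlier bins (spikes) inside the used
    range of an 8-bit channel. The 5x-mean spike threshold is arbitrary."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    occupied = hist[min(pixels):max(pixels) + 1]
    gaps = sum(1 for count in occupied if count == 0)
    mean = sum(occupied) / len(occupied)
    spikes = sum(1 for count in occupied if count > 5 * mean)
    return gaps, spikes

# A channel that only uses even codes (as after a 2x stretch) shows gaps:
print(histogram_defects([v for v in range(0, 256, 2) for _ in range(4)]))
```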

>in order to avoid reducing the 8 bits worth of information by adjusting
>the gamma or midpoint to another setting? Or is this, perhaps, a concern
>of more significance in theory than in practice; that is, perhaps the extent
>of mid-point adjustments commonly practiced are gentle enough so as
>to make little difference in image quality (loss of "bins" through quantization
>error?)?

If you did some experiments you would very soon notice that it is not theory
but very hard fact in practice. In video such errors just do not mean a thing.
Often on a CRT they do not mean much; in the case of www publishing the images
are usually scaled down, and this helps hide the problems. But if you display
the images at 100% scaling on-screen, or you print the images, there will
indeed be problems because of the gamma compensated images.

>Is there commonly available image manipulation software which allows
>working at bit-depths greater than 8 and then reducing bit-depth to 8
>for output or storage (as mentioned most recently in posts by Dave
>Martindale)?

Photoshop 4.0.x allows this but it is limited. You can do Levels and Curves,
Crop and Save. At least black-point and white-point scaling seem to work
properly in Levels. The middle box in Levels is *not* a correct gamma control,
so do not trust it; it will leave the shadows behind badly. In the Curves
dialog linear scaling seems to work properly; curve adjustments, especially
such that have many points, possibly do not work correctly. The maps (*.amp)
in Curves seem not to work properly (if many points). But 5.0 will have a lot
of 16 bit stuff available.

Timo Autiokari

Timo Autiokari
Mar 3, 1998

On 3 Mar 1998 03:40:53 GMT, mi...@xhplmmcg.xhpl.xhp.com (Michael McGuire) wrote:

>>>>: And again, the image editor perceives the signal (data) before the
>>>>: monitor, and some rather heavy perceivable problems exist there.

>This last sentence is not true if one considers the most likely destinations
>of the image or the most likely operations in an image editor that might cause
>problems. If the destination is a CRT, then Timo apparently already agrees
>that the correction for gamma = 2.2 is correct, especially if initially done
>at higher bit depth.

Not all images are to be shown on the www, so compromises are not always
needed. Therefore gamma 2.2 is not always correct. As you say yourself, it
depends where the image is to be displayed. On a PC the correct gamma
compensation is 2.5; for a Mac it probably is 1.8.

> Now consider taking this to an Mac system where the gamma = 1.8. The necessary
>correction from 2.2 -> 1.8 is gamma = 1.1. This is much gentler and more accurate
>at 8 bits than banging it all the way from gamma = 1.0.

First, a quick lesson about how to calculate with gamma is needed here.

If the file has inverse gamma 2.2, then to change the gamma of the file to the
inverse of 1.8 you specify X = required_gamma that needs to be applied to the
file, and you calculate:

(1/2.2) * X = (1/1.8)
X = (1/1.8) / (1/2.2)
X = 2.2/1.8
X = 1.22

So, in your example you will need to apply gamma 1.22 to the file. 1.22 is not
gentle, and it will be applied to data that _already_ contains errors and
heavy quantization.
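The exponent can be checked numerically: raising a 1/2.2-encoded value to the power 2.2/1.8 gives exactly the 1/1.8 encoding of the same linear-light value. A sketch:

```python
# Re-encoding a gamma-1/2.2 file for a gamma-1/1.8 target:
# stored = linear ** (1/2.2), and we want stored ** X == linear ** (1/1.8).
X = 2.2 / 1.8                    # ~1.22, the exponent to apply to the file

linear = 0.3                     # any linear-light value in (0, 1)
stored_22 = linear ** (1 / 2.2)  # the value as stored in the file
restored = stored_22 ** X        # after applying gamma 1.22
print(abs(restored - linear ** (1 / 1.8)) < 1e-12)  # True
```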

>The other likely destination of the image is a printer. All the printers I have
>tested and that's quite a few, at least at their native dot resolution, have a
>response similar in shape to a CRT but usually somewhat more radical.

I have measured the transfer curves of many printers and they very often do
have an artificial gamma match in their software. But only HP printers have
such a high value as 2.2. Actually, about 1 year ago the 5C had a driver that
had an artificial gamma match at some enormous 2.6, but the new driver put
that at 2.0 or so.

>This is a consequence of round dots having to completely cover square
>pixels to avoid gaps in solid fills. Put down dots covering 1/4 of the pixels,
>but not overlapping, and you get pi/8 coverage of the paper, not 1/4.

A quick lesson about the dot gain of printers is needed here.

The dot gain can be positive or negative. Your round dots would leave white
displayed (where the white should actually be covered with ink). This is the
opposite of dot gain; the dots are losing area, and that does _not_ make the
transfer curve appear as it does with a CRT, but exactly the opposite. In
case the dots are gaining area, _then_ the images will print too dark, and
then the transfer curve of the printer bends towards the transfer curve of a
CRT.

>The correction curve for a printer lies somewhere above that for a gamma = 2.2
>monitor.

No, it can be loosely approximated with a gamma compensation of 1.0 to 2.0.
But no printer follows a gamma curve accurately.

>This leaves the question what might be done in an image editor keeping in mind
>these destinations for the image. Of course anything can be done, but likely
>corrections to normal images are linear stretching or compression of the tonal
>scale

And here you will have _considerable_ troubles. When your image has a gamma
compensation, then if you do a "linear stretching or compression of the tonal
scale" (e.g. Levels in Photoshop), what will happen to the colors (hues)?
They will change, since you apply a linear scaling to RGB values whose
components have a power-law distribution. The result is that the image is not
in any gamma space anymore and the colors (hues) are distorted.

> and gamma-curve-like corrections to enhance contrast in the shadows or
>highlights. Any of these operations can produce contouring if applied strongly
>enough, but they will do it sooner if applied to a gamma 1.0 image which is
>then corrected for screen or printer destinations.

No, the contouring appears when you edit images that have errors and large
quantization.

The final compensation from 1.0 to any gamma space creates the errors and
quantization only once.

The best thing is that the resulting image will show the linear levels
_correctly_, without *any* errors, from level 0 to level 65 (in case gamma
compensation 2.5 is applied).

And there is much more in image editing, such as color correction. With linear
images this is often very easy: you just do linear changes to the color
channels (easiest in Curves, but only linear changes are needed). This is for
the case where you need to correct e.g. a wrong color temperature due to the
lights at the scene. If you look at CIE_XYZ you can easily see that both the
RGB to CIE_XYZ conversion and color temperature changes are linear
transformations. So you can do this linearly in RGB also. In case there is
gamma compensation in the image file, it is very hard to correct color
temperature or anything like that.

Most of the filters, such as UnsharpMask, require a linear representation of
intensities. Scaling needs it, etc.
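Timo's "linear changes to the color channels" amounts to a per-channel gain applied in linear light (a von Kries-style diagonal scaling); a sketch with made-up gain values, purely for illustration:

```python
# White-balance as per-channel gain; only valid on linear-light RGB.
# These gain values are invented for illustration, not measured data.
gains = (1.15, 1.00, 0.85)  # warm up a too-blue image: boost R, cut B

def balance(rgb_linear):
    """Apply the diagonal (per-channel) gains to a linear RGB triple."""
    return tuple(g * c for g, c in zip(gains, rgb_linear))

print(balance((0.2, 0.2, 0.2)))  # a neutral grey shifts toward warm
```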

One can also want to use a printing service to print his/her best shots with a
high end dye-sub. A dye-sub is able to show many more colors and shades than a
home printer. A gamma image can show ~decently using an ink-jet, but when the
image is printed using a dye-sub the artifacts appear. If one has the error
free and quantization free linear images, then high quality printing is
possible using dye-subs; most if not all of them also have the linear mode,
like the Tektronix Phaser 450e that I use.

>I don't expect Timo will agree with much of this any more than he has with
>Alan Roberts or Charles Poynton, but then he doesn't agree with anybody.

Now, Mr Poynton's on-line pages were the reason why I started this thread. I
have spent a considerable amount of my time explaining to people what the
gamma actually is. I got bored of doing that, and I've read his material very
carefully many times.

Mr Roberts has been co-authoring the colorspace-faq with Mr Poynton, see:
http://www.altavista.digital.com/cgi-bin/query?pg=aq&what=web&kl=XX&q=%22Alan+Roberts%22+and+%22Charles+A.+Poynton%22&r=&d0=21%2FMar%2F86&d1=&search.x=65&search.y=8
So there is nothing special there if they both see (or try to explain) the
issue similarly.

Timo Autiokari

Timo Autiokari
Mar 3, 1998

On 2 Mar 1998 20:44:35 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:

>Nowhere is there a stone tablet with a commandment saying that the
>digital sample values in an image file should be linearly related to intensity.

The problem is that there is a stone tablet that says we all must obey the
non-linear space. This is what the manufacturers have been engraving into the
appliances, while it would be very easy for them to also allow the better
linear space.

>What do you mean by "cannot be edited properly", and "un-natural"?
>Do you mean that pixel values are no longer linearly related to light
>intensity, and thus standard linear image processing operations like
>blur, sharpen, and resize will not give the correct answer when applied
>to these nonlinear images? If so, you're right.

That is exactly what I mean. Thank you very much.

>There are two approaches for dealing with this. One is to realize that
>the values in the file are just numbers, not light intensity, and you
>can convert the numbers to (linear) intensity any time you want. So you
>can take your 8-bit gamma-corrected image, convert it to a linear form
>(remember to use at least 12 bits to avoid artifacts), do your image
>processing operation, then re-convert the pixel values to gamma-corrected
>8 bit samples. This adds only a little bit of roundoff error, and the
>whole process has a lot less error than you would get by converting the
>image to 8 bit linear and working with it in that space. (Though using
>12 bit linear everywhere would be slightly better yet).

This is not good, and quite often simply impossible. It creates unnecessary
errors even at 12 bits, and in most systems you can only acquire the 8 bit
gamma compensated image. There is not much use in taking that into a 16 bit
space anymore, because the quantization is already there.

>The other approach is to apply linear image processing operations to
>the 8-bit pixels even though it is mathematically incorrect. This is
>commonly done in video. When the process is done within a control
>loop, with a human operator looking at the result and modifying the
>parameters until they see what they like, this works quite well.
>It doesn't matter much if the math is wrong, since nobody is making
>measurements from the images - they just want something that looks good.

You are correct, it really does not matter in video, because the general image
quality is only a very small fraction of what can be achieved in digital
photographic imaging using a digital camera (not a video camera). Video comes
and goes; it is just set to appear somewhat decently. But one can spend many
hours with one single photographic image. There, every bit of accuracy is very
much needed.

>>To see what the data looks like,
>>just apply a gamma 1/2.0 to an image that shows properly. Then think what the
>>various image editing operations might do with that.
>
>No, the data is intended to be viewed just as you see it.

In Photoshop you do not see the _data_; you see an on-screen image of it,
after the monitor gamma. The best way to see what the image _data_ looks like
is to put the value 1.0 into the "Monitor Gamma" box, provided that Photoshop
is properly calibrated using the Gamma Slider.

>Timo, how old are you? Forgive me if this doesn't apply to you, but
>you sound exactly like an undergrad who has always assumed that sample
>values in images files should be linearly related to intensity because
>that's the mathematically *right* way to do it.

Well, for me the undergrad times are tens of years back. However, linear space
is not only mathematically right, it also provides the best image quality.
I've done a lot of comparisons on high and mid quality systems.

>As you learn about perception and image processing, you will see that
>there is no perfect way to do something. There are just a bunch of tradeoffs,
>and all you can do is make intelligent ones.

The gamma space is a very bad trade-off, made for video at the expense of
digital photographic imaging.

And there is no real reason to have that trade-off. It would be a very simple
thing for the manufacturers to just provide a by-pass for those who want to
use the linear space.

But for some unbelievable reason they do not provide it in general; only in
the most expensive devices do they allow linear imaging.

Some of the manufacturers, like HP, do not allow it at all, because they seem
to genuinely believe that the gamma space is so very good and so much better
in every case. Obviously they have been reading some FAQs and believe in them
blindly.

Timo Autiokari

Dave Martindale
Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>And here you will have _considerable_ troubles. When your image has a gamma
>compensation then if you do a "linear stretching or compression of the tonal
>scale" (e.g. the Levels in Photoshop) what will happen to the colors (hues)?
>They will change, since you apply a linear scaling to RGB values whose
>components have exponential distribution. The result is that the image is not
>anymore in any gamma space and colors (hues) are distorted.

If you do a simple linear stretch by moving the "white point" marker
and don't touch the black and midpoint markers, you scale all of the
intensities in the image by the same factor. This is equivalent to
changing the F-stop on a camera lens. This is true if the sample
values stored in the pixel are linearly proportional to intensity, but
it is ALSO true if the values are gamma corrected. Don't believe me?
Just do the math. It does work.
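Dave's "just do the math" invitation is easy to accept: scaling gamma-encoded samples by a constant k multiplies every channel's linear intensity by the same k**gamma, so the R:G:B ratios, and hence the hue, survive. A sketch assuming a pure 2.2 power law:

```python
GAMMA = 2.2
k = 0.8                           # white-point scale applied to stored samples

rgb_linear = (0.40, 0.20, 0.10)   # some linear-light pixel
stored = [c ** (1 / GAMMA) for c in rgb_linear]
scaled = [k * s for s in stored]  # what the white-point slider does
decoded = [s ** GAMMA for s in scaled]

# Each channel's intensity is multiplied by the same factor k**GAMMA
# (about 0.61 here), so the channel ratios, i.e. the hue, are unchanged:
ratios = [d / c for d, c in zip(decoded, rgb_linear)]
print(all(abs(r - k ** GAMMA) < 1e-12 for r in ratios))  # True
```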

On the other hand, if you move the "black point" marker off zero, then
you are mucking about with the tonal scale of the whole image, and
the sample values are no longer related to original scene intensity
by a simple rule. This is true for both linear and gamma-encoded
sample values, so gamma-encoded samples suffer no additional disadvantage
here.

If you move the "midpoint" marker in Levels, you change the gamma of the
image. If it was already gamma-encoded, the new image is still gamma
encoded but with a different value of gamma. No problem. If the original
image was linear, you have now converted it to a gamma-encoded one.
Oops.

It seems like gamma-encoded images are *more* robust than linear ones
under the sort of manipulations you can do with the Levels menu.

>No, the contouring appears when you edit images that have errors and large
>quantization.
>
>The final compensation from 1.0 to any gamma space creates the errors and
>quantization only once.

Bullshit, to put it politely. I could acquire an image with an excellent
cooled CCD camera and 16-bit A/D conversion. There would be no contouring.
If I convert this image to 8-bit gamma-corrected encoding, there would
likely still be no contouring. But if I convert directly from the
16-bit form to 8-bit linear samples, rounding every sample value to the
nearest 8-bit value for the maximum accuracy, there will likely be
contouring in the shadow areas. The contours are caused because the
8-bit linear code simply isn't good enough to represent the image.
At this point, there has been no editing done, and no quantization
errors added other than those necessary to fit the data into 8 bits.

It's easy to see why. Suppose the image has a useful brightness range
of 100:1. Then the maximum intensity is represented by 255, and the
darkest shadow is 2.5. Oops, no way to represent 2.5 accurately, so we
have to use either 2 or 3. These sample values, when accurately reproduced
by the display, differ in intensity by a factor of 1.5. The brightness
difference between codes 3 and 4 is 1.33. Even though this is in dark areas
of the image where the eye's sensitivity is reduced, the eye can easily
still see the 50% or 33% brightness "step" caused by quantization.

If the same image was stored gamma-corrected with a gamma of 0.5, the
darkest shadow with an intensity of 0.01 of maximum would be represented
by a sample value of 25.5. The nearest integer codes are 25 and 26.
A code of 25 represents a (linear light) intensity of 0.0096, while
a code of 26 represents an intensity of 0.0104. The ratio between these
is 1.08 - so we have only an 8% brightness "step" between adjacent code
values in the shadow areas of the image, while the linear encoding gave
us a 50% step in the same place. No wonder the gamma-corrected version
works better.
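
The step sizes in the two paragraphs above can be checked directly. An
illustrative Python sketch, using 8-bit codes and gamma 0.5 as in the example:

```python
# Brightness ratio between adjacent 8-bit codes in the deep shadows.
def lin_intensity(code):
    """Linear coding: intensity is proportional to the code value."""
    return code / 255.0

def gamma_intensity(code, g=0.5):
    """Gamma coding: decode the sample back to linear light."""
    return (code / 255.0) ** (1.0 / g)

lin_step = lin_intensity(3) / lin_intensity(2)        # a 50% jump
gam_step = gamma_intensity(26) / gamma_intensity(25)  # about an 8% jump

assert abs(lin_step - 1.5) < 1e-9
assert abs(gam_step - (26 / 25) ** 2) < 1e-9          # = 1.0816
```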

I *have* seen at least one image where even 8 bits gamma corrected
was not enough - you could see bands in an area of slowly changing
colour. But that's only one or two samples in years of looking at
images rather critically. In comparison, I've seen many 8-bit linear
images with quantization bands.

>in CIE_XYZ you can easily see that both the RGB to CIE_XYZ and color
>temperature changes are linear transformations. So you can do this linearly in
>RGB also. In case there is gamma compensation in the image file it is very hard
>to correct color temp or anything like such.
>
>Most of the filters require linear representation of intensities such as
>UnsharpMask. Scaling needs it etc.

Here, you are arguing that filtering operations should be done in linear
space. However, you do *not* have to store images using linear samples
in order to do your operations in linear space. The two issues are
essentially unrelated. I use an image package that lets me *store*
pixels in linear, gamma-corrected, logarithmic, or offset linear form,
yet it can convert any of these to linear floating point to do filtering
operations.

>One can also want to use a printing service to print his/her best shots with a
>high end dye-sub. A dye-sub is able to show much more colors and shades than the
>home printer. A gamma image can show ~decently using an ink-jet but when the
>image is printed using dye-sub the artifacts appear. If one has the error free
>and quantization free linear images then high quality printing is possible using
>dye-subs, most if not all of them have the linear mode also, like the Tektronix
>Phaser 450e that I use.

Yes, but a dye-sub might be able to show the quantization errors in
the shadows that are inherent in using 8-bit linear encoding. Better
to give the printer 8-bit gamma corrected data with a gamma of about 1/1.8,
to ensure this doesn't happen. Why 1.8? Because most high-end scanners
and printers are designed to work with high-end image editing stations, which
are mostly Macs.

>Now, Mr Poynton's on-line pages were the reason why I started this thread. I
>have spent a considerable amount of my time in explaining people what the gamma
>actually is. I got bored of doing that and I've read his material very carefully
>many times.

Then why do you still disagree with it? He is accurately describing the
way the world is.

Dave

Dave Martindale

Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>The problem is that there is a stone tablet that says we all need to obey the
>non-linear space. This is what the manufacturers have been engraving into the
>appliances. While it would be very easy for them to allow also the better linear
>space.

The problem is that the linear space is clearly, demonstrably, very much
inferior to the gamma-corrected space when you are working with only 8 bits
per sample.
It's only when you have at least 12 bits per sample that you can deal with
the brightness range of a good photographic print or transparency and
avoid quantization artifacts. And to handle the brightness range
captured by a photographic *negative*, you need about 16 bits.

So, are you saying that the appliance manufacturers should give you the
option of getting 8-bit linear data? That would look *worse* than what
you are getting now. Or are you saying that they should give you 12-bit
linear data? They could, but be prepared to pay extra for it. Since
8-bit gamma corrected works almost as well as 12-bit linear but costs
less (time, memory, CPU bandwidth), few people would be prepared to
pay extra just for the warm fuzzies of knowing they were doing things
"correctly".

>You are correct, it really does not matter in video because the general image
>quality is only a very small fraction of that what can be achieved in digital
>photographic imaging, using a digital camera (not a video camera). Video comes
>and goes, it is just set to appear somewhat decently. But one can spend many
>hours with one single photographic image. There every bit of accuracy is very
>much needed.

If you really want to capture all of the information in a photographic
negative, manipulate it for hours, and write it back to film without
causing any artifacts due to quantization, you *must* use something
better than an 8-bit linear form of the image. 8-bit gamma will let
you get further without artifacts, but eventually quantization will
cause problems there as well. You should be using 12-bit linear
storage or better. So what if Photoshop won't do this (yet)? Write
your own image processing operations. Do all the calculations in
floating point. That will avoid any problems. Oh, and use scanners
and film recorders that handle at least 12 bits as well.

On the other hand, if you are limited by budget or programming ability
to only use 8-bit-wide samples, using gamma-corrected samples is the
best you can do. It's much better than linear, and somewhat better
than 8-bit logarithmic encoding. (Log encoding comes into its own
at 9 or 10 bits, but at 8 it still has some quantization artifacts
of its own).

>In Photoshop you do not see the _data_, you see on-line image of it after the
>monitor gamma. The best way to see how the image _data_ looks like is to put the
>value 1.0 into the "Monitor Gamma" box, provided that Photoshop is properly
>calibrated using the Gamma Slider.

That's the correct way of displaying data that has been linearly encoded.
For data that has been nonlinearly encoded, this will *not* show you what
the image is supposed to look like. You are assuming, a priori, that
only linear data is "correct", and when you set up Photoshop to show
linear data and an image then looks bad, that the image itself must be
bad. But the real problem is your own assumption, and the incorrect
way it causes you to set up Photoshop. When Photoshop's monitor gamma
setting is set correctly *for the image in question*, it looks fine.

>However linear space is
>not only mathematically right, but it provides best image quality. I've done a
>lot of comparisons on high and mid quality systems.

What were you comparing with what? How many bits wide were the images?
How many bits wide was the frame buffer and DACs? How did gamma correction
for the display ultimately get done? If you tell us these things, we can
probably figure out what you are seeing.

I've done many of my own experiments and demonstrations, using 8-bit
linear, gamma-corrected, and logarithmic encoding of intensity, as
well as doing processing using 12-bit linear, 16-bit linear, 32-bit
floating point, and 10-bit log encoding. I've written all of the software
involved myself, so I *know* where all the sources of roundoff error
are. I've had access to 12-bit CCD cameras and 12-bit film recorders.

My own experience is that 8-bit linear is unacceptable for photographic
imaging, while 8-bit gamma-corrected is good enough most of the time.

>The gamma-space is a very bad trade-off for video on the expense of digital
>photographic imaging.

You keep stating this, but provide no believable reason *why*.

>And there is no real reason to have that a trade-off. It would be a very simple
>thing for the manufacturers to just provide a by-pass for those who want to use
>the linear space.
>
>But for some unbelievable reason they do not provide it in general, only in the
>most expensive devices they allow linear imaging.

Many low-end devices do not have 12-bit A/D converters, so there is no
place you could have access to a 12-bit linear data stream. 8-bit
linear is much worse than 8-bit gamma corrected, for reasons listed above.
And if you use Photoshop for your imaging, how would you deal with 12-bit
or wider data anyway? Photoshop lets you do almost nothing with samples
wider than 8 bits.

>Some of the manufacturers like HP do not allow it at all, because they seem to
>genuinely believe that the gamma space is so very good and so much better in
>every case. Obviously they have been reading some FAQs and believe in them
>blindly.

Perhaps HP employs some engineers who understand the issues better than you
do, and understand that (a) providing an 8 bit linear path is useless,
and (b) providing a 12-bit linear path is not economically justified for
this device.

In general, I don't think HP engineers have to learn their imaging theory
from Web pages.

Dave

Dave Martindale

Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>To check a scanner or a camera:
>
>1. Scan a good image that also has a lot of shades of black (shadows). Open
>into Photoshop (some other sw may not be able to show the histogram accurately
>enough, Photoshop does)
>2. Choose Image/Histogram.
>3. In Photoshop the Luminosity channel in Histogram is smoothed, so select the
>red, green or blue from the dropdown box.
>
>If you do not see any gaps in the red, green and blue histograms, and there is
>no spiking (hedgehog) either, then you have a linear setting.

This method will probably identify a scanner which uses an 8-bit A/D converter
on the output of the CCD, followed by a lookup table to implement the
non-linear gamma correction. However, if the gamma correction is done
by an analog amplifier ahead of the CCD, the histogram will be smooth.
If the scanner uses a wider (e.g. 12 bit) A/D converter and does the
gamma correction in a lookup table, the histogram will also be smooth.

In other words, Timo's test allows you to identify a badly designed
scanner that is causing excessive quantization error. But it will tell
you nothing about a well-designed scanner, which will have a smooth
histogram regardless of whether the output is linear or gamma corrected.
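
The gap effect described for an 8-bit-A/D-plus-lookup-table scanner is easy to
reproduce. A sketch assuming an encoding gamma of 0.45; the exact value does
not matter:

```python
# 256 input codes pushed through a gamma lookup table cannot reach all 256
# output codes, so the histogram of such a scan shows gaps ("combing").
GAMMA = 0.45  # assumed encoding gamma, for illustration only

lut = [round(255 * (c / 255.0) ** GAMMA) for c in range(256)]
reachable = set(lut)

assert len(reachable) < 256   # some output codes can never occur
```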

If *I* wanted to measure the characteristic response of a scanner, I'd
get a step grey wedge, like the Kodak standard grey scale. I'd scan
it with the scanner, load the image into Photoshop, and calculate the
average sample value in each of the steps of the grey scale.

Then I'd enter the data into a spreadsheet. For each patch on the grey
scale, you know the photographic density, so you know how much light it
physically reflects. You also know what sample code values the scanner
assigned to each of those intensities. If you plot both on a linear
scale, you'll get a curved line. If you plot both on a log-log scale,
you should get an approximately straight line, with the slope of the
line being the gamma of the scanner. I might try a least-squares fit
of the data.
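
The wedge measurement reduces to a least-squares slope on log-log data. A
sketch with synthetic patches; the 0.1-density steps and the ideal scanner
response are assumptions made for illustration:

```python
import math

TRUE_GAMMA = 0.45
# Reflectances of a grey wedge in 0.1-density steps, and the codes an ideal
# gamma-0.45 scanner would assign to them.
reflectances = [10 ** (-0.1 * step) for step in range(1, 20)]
codes = [255 * r ** TRUE_GAMMA for r in reflectances]

# Slope of the least-squares line through (log reflectance, log code).
xs = [math.log(r) for r in reflectances]
ys = [math.log(c) for c in codes]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

assert abs(slope - TRUE_GAMMA) < 1e-9   # the slope recovers the gamma
```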

To quote Sam Wormley, "One well-performed experiment is worth a
thousand expert opinions." Probably the most valuable thing I've
seen in 15 years of reading Usenet.

Dave

Dave Martindale

Mar 3, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>>Timo, I'll happily believe that if you build a CCD camera the output
>>will be linearly proportional to intensity. But that just isn't the way
>>most real cameras and scanners are built, for a variety of good reasons.

>I can not see any real reason why the linear space is not allowed.

Are you talking about getting 8-bit linear data out? For reasons I've
explained in another article, and Poynton's FAQ also explains, 8-bit linear
suffers (badly!) from quantization artifacts being visible in dark areas
of the image.

Are you talking about getting 12-bit linear data out? Many devices just
don't have 12-bit linear data available anywhere internally. Even if they
did, it would cost time and space and complexity to pass it back to the
user. If the user is using Photoshop, they can't use the extra bits anyway.

I would make use of the extra bits - I write my own imaging software.
But most consumers wouldn't, so I don't expect to see this in consumer
products.

Dave

gkin...@cybernet1.com

Mar 4, 1998

In article <35064429...@news.clinet.fi>, tim...@clinet.fi (Timo
Autiokari) wrote: ...{{in the interest of saving bandwidth I have taken the
liberty of not including the previous message.}}

I got into reading this thread from rec.photo.digital, not the scientific
groups. I must confess that there is a great deal I do not understand well
enough to put into proper terminology. But perhaps you could diverge from your
learned disagreements to help me with a problem which is related to image
linearity.

When I have successfully tricked my supposedly idiot-proof auto-focus,
auto-exposure and auto-white-balancing camera (an Olympus D600L) into taking
an underexposed picture, I have a problem salvaging a usable image. A
histogram shows a mass of data clumped in the lower third with nothing in the
highlights or midrange. (I use Picture Publisher 5 from Micrografx.) Then I go
into "adjust tone balance" and move the highlight marker to the first point
that has image data greater than zero. Then I fool around with the midpoint
selector to get the smoothest image I can get.

The results don't look too bad on screen (I use a PC with a 19" monitor), but
the printed output resembles a color-coded contour map more than a photograph.
When I open the histogram of the adjusted file I see something that looks like
a picket fence made with random-length lumber. Clearly the image would be
healthier if those gaps were filled up. Those are the data bins my 8x3-bit
program sorts the data into, right? Why can't the full bins spill some of
their data into the less fortunate ones? Would that give me a better picture?

I read all 41 posts that my browser picked up in the hopes that I would learn
something I could use. Unfortunately my technical expertise is somewhat
lacking in this area, but this is REC.photo.digital and I want to rock and
rec. with my camera. Please put some of your erudite knowledge into concepts
that non-tech-head photographers can use.

True knowledge is a great puzzlement.

Glenn Kinsley
ex '59 MIT


Timo Autiokari

Mar 4, 1998

On 3 Mar 1998 19:25:48 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:
>tim...@clinet.fi (Timo Autiokari) writes:

>>I can not see any real reason why the linear space is not allowed.
>
>Are you talking about getting 8-bit linear data out? For reasons I've
>explained in another article, and Poynton's FAQ also explains, 8-bit linear
>suffers (badly!) from quantization artifacts being visible in dark areas
>of the image.

Yes, I'm talking about getting it not only out, but also into printers. I do
this in 8 bits every day and have no problems with the dark areas of the
image.

The "better shading" is the only argument one can make to support the gamma
space. What is so special about the color black? The important information
and the quality of the images is not in the shadows. It is in the colors,
everywhere else, but not in the shadows.

The eye cannot see the 1/256 intensity step, nor the 2/256 step. The gamma
space cuts out more than 50% of the available colors, and it cuts more heavily
from the highlights and midtones. In reality this generates artifacts in the
midtones and highlights very easily when such images are edited.

I choose better and cleaner colors over "better shadows". And there usually
are no problems whatsoever with the shadows. Even if there were artifacts in
the deep shadows, they would be much easier to clean up than artifacts in the
midtones and highlight areas.

Timo Autiokari

Bruce Lucas

Mar 4, 1998

Timo Autiokari wrote in message <34fc28a7...@news.clinet.fi>...

>This gamma *is* good for TV and video since the CRT will UNcode the coding
>and the image is then good for the eye. But if the image is to be edited
>then gamma compensation is not good at all. The image will be in "coded"
>state, it is not natural and it is not good for the eye nor for image
>editing in any sense. Editing "coded" images results in poor performance.

Examples please?

Bruce Lucas

Timo Autiokari

Mar 4, 1998

On 3 Mar 1998 18:36:56 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:

>If you do a simple linear stretch by moving the "white point" marker
>and don't touch the black and midpoint markers, you scale all of the
>intensities in the image by the same factor. This is equivalent to
>changing the F-stop on a camera lens. This is true if the sample
>values stored in the pixel are linearly proportional to intensity, but
>it is ALSO true if the values are gamma corrected. Don't believe me?
>Just do the math. It does work.

If you only change the white point then the gamma space does not change, this
is true, but in almost every case you need to change the black point also.
However, there is another problem related to the white-point adjustment:

What you do not seem to understand is that the image file (the data) will
degrade much sooner when the gamma compensation is in the image data. When you
scale the white point, the highest values (the R, G, B values of pixels) will
saturate; they will hit level 255 very soon. And because there is gamma
compensation in the image data, there are a _lot_ of high R, G or B values in
the data. This saturation is the same as artifacts. So the data deteriorates
and is not good for printing anymore.

If you only stare at the image on the monitor then there is no difference
between the performance of the linear image and the gamma-compensated image
with respect to the white-point adjustment.

In case of the linear image you only cut out what you want to cut out and below
that point there will be _no_ saturation. So the image file is still good for
printing (and has the same or better performance on monitor than the originally
gamma compensated image has).

>On the other hand, if you move the "black point" marker off zero, then
>you are mucking about with the tonal scale of the whole image, and
>the sample values are no longer related to original scene intensity
>by a simple rule. This is true for both linear and gamma-encoded
>sample values, so gamma-encoded samples suffer no additional disadvantage
>here.

The above is not true at all. When you apply a linear transformation onto a
linear image, the resulting image will surely be linear. This should be quite
elementary. There will be no hue shift; only the intensities change. So with a
linear image you get only the desired effect (you cut out what you want to cut
out).

But when you move the origin (black point) of the inverse-gamma-distributed
intensities of an image, then in addition to the intensity change the whole
image moves out of the gamma space, and it will not be in another gamma space
either. It will be very difficult to correct. Because the colors are the
tristimulus components, there will be a hue shift. So you get two problems
from the black-point change (yes, they are related to each other): (1) the
non-linearity of the image is no longer a simple gamma function, and (2) there
is a color shift. (If you can somehow figure out the correction curve needed
to recover from that, then of course both errors will be corrected
simultaneously.)

This problem is not easy to observe: you are changing the intensity scale in
the first place, so when the non-linearity of the image changes at the same
time it is usually not noticed. The result, by the way, is mainly too-dark
midtones and shadows. Do the math.

>If you move the "midpoint" marker in Levels, you change the gamma of the
>image. If it was already gamma-encoded, the new image is still gamma
>encoded but with a different value of gamma. No problem.

You are correct with the gamma here.

A warning: the "midpoint" marker in Levels (in Photoshop) is *not* a true
gamma control; there is an Adobe tweak in it so that the shadows will not be
affected much. (This is a problem in Photoshop, not in your reasoning.)

>It seems like gamma-encoded images are *more* robust than linear ones
>under the sort of manipulations you can do with the Levels menu.

No, gamma-encoded images are much worse, as I have explained above. Just do
the math.

>It's easy to see why. Suppose the image has a useful brightness range
>of 100:1. Then the maximum intensity is represented by 255, and the
>darkest shadow is 2.5. Oops, no way to represent 2.5 accurately, so we
>have to use either 2 or 3. These sample values, when accurately reproduced
>by the display, differ in intensity by a factor of 1.5. The brightness
>difference between codes 3 and 4 is 1.33.

You are quite wrong here. Those percentages would apply only if you could view
the image on a CRT in total darkness, where the light would come through the
glass of the monitor without any reflections and all the light from the
monitor that did not hit your eyes ended up in a black hole.

You cannot see a 1/256 linear difference on a monitor. Just experiment and you
will see.

>Even though this is in dark areas of the image where the eye's sensitivity
>is reduced,

Now I have the urge to quote Mr Poynton:

"Through an amazing coincidence, vision's response to intensity
is effectively the inverse of a CRT's nonlinearity"

He seems to be saying totally the opposite of what you say above. You know,
the inverse gamma does a rocket launch in the dark, so according to Mr.
Poynton the sensitivity would be much higher there.

Is your statement correct, or is the statement of Mr. Poynton correct? It
seems to me that both cannot be correct at the same time. Or is someone
missing some piece of essential information?

>Here, you are arguing that filtering operations should be done in linear
>space. However, you do *not* have to store images using linear samples
>in order to do your operations in linear space.

Really. And I do not need to use the car to go to work because I can just use
the helicopter.

>a dye-sub might be able to show the quantization errors in
>the shadows that are inherent in using 8-bit linear encoding. Better
>to give the printer 8-bit gamma corrected data with a gamma of about 1/1.8,
>to ensure this doesn't happen.

Please tell me what is so important in the shadows (which you really do not
even see)? The quality of images is *mostly* elsewhere: in the colors, all
over the image, not in the shadows. And editing gamma-compensated images
produces artifacts, decreases sharpness, flattens the chroma, and causes hue
shifts, to mention a few.

Timo Autiokari

Stephen H. Westin

Mar 4, 1998

tim...@clinet.fi (Timo Autiokari) writes:

> On 3 Mar 1998 19:25:48 -0800, da...@cs.ubc.ca (Dave Martindale) wrote:
> >tim...@clinet.fi (Timo Autiokari) writes:
>
> >>I can not see any real reason why the linear space is not allowed.
> >
> >Are you talking about getting 8-bit linear data out? For reasons I've
> >explained in another article, and Poynton's FAQ also explains, 8-bit linear
> >suffers (badly!) from quantization artifacts being visible in dark areas
> >of the image.

> Yes, I'm talking about getting it not only out but also getting it
> into printers too.

Which are limited to about a 30:1 contrast ratio, much less than that
of a good CRT, or projected transparencies.

> I do this in 8 bit every day and have no problems
> with the dark areas of the image.

Are these scanned or digitized images? If so, you may be benefiting
from noise that masks the quantization artifacts.
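
The noise-masking effect is the principle behind dithering: noise added before
quantization trades banding for grain. A minimal sketch; the numbers are
illustrative, not from any post:

```python
import random

random.seed(1)
true_value = 2.42   # a dark intensity, expressed in 8-bit linear code units

# Plain rounding always lands on the same code, a systematic error.
undithered = round(true_value)

# With +/- 0.5 codes of noise, the rounded codes average out to the truth.
dithered = [round(true_value + random.uniform(-0.5, 0.5))
            for _ in range(100_000)]
mean_dithered = sum(dithered) / len(dithered)

assert abs(undithered - true_value) > 0.4    # biased by almost half a code
assert abs(mean_dithered - true_value) < 0.01
```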

Also, have you controlled for linearity in transfer function? Where
does, say, 50% gray measure in reflectance compared to black and
white?

> The "better shading" is the only
> argument what one can have to support the gamma space. What is so
> special about the black color ?

Visual sensitivity, which discriminates intensity levels more finely
in dark areas of the image.

> The important information and the quality of the images is not in
> the shadows. It is in the colors, every where else but not in
> shadows.

Until a human looks at the image.

> The eye can not see the 1/256 intensity step nor the 2/256 step.

Please cite the experiments that show this. Please also detail the
conditions: contrast range, ambient light level, visual adaptation
state, etc.

<snip>

Dave Martindale

Mar 4, 1998

Valburg <lk...@psu.edu> writes:
>The first is a repeat of a question I posed previously: Could you tell
>us whether there is a way to determine the native gamma
>of a particular scanner model, in order to avoid reducing the 8 bits
>worth of information by adjusting the gamma or midpoint to another
>setting? Or is this, perhaps, a concern of more significance in theory
>than in practice; that is, perhaps the extent of mid-point adjustments
>commonly practiced are gentle enough so as to make little difference in
>image quality (loss of "bins" through quantization error?)?

With most scanners, you're probably best off just leaving all of the
scanning controls set to their default setting, and adjusting things with
Photoshop later, since the scanning adjustments probably just do something
equivalent to "Levels" and "Curves" in Photoshop, but without any preview
or undo facility.

With *some* scanners, you may be able to adjust controls that affect the
analog signal processing ahead of the A/D convertor, and in those cases
it may be worth playing with the adjustments to get the optimum data out
of the A/D convertor.

To measure the actual scanner gamma, scan a grey scale that you know
the patch reflectances of. I described this in more detail in another
message in rec.photo.digital in the last few days.

>Is there commonly available image manipulation software which allows
>working at bit-depths greater than 8 and then reducing bit-depth to 8
>for output or storage (as mentioned most recently in posts by Dave
>Martindale)?

Photoshop lets you read in 16-bit data then rescale it (but not much else).
You have to convert it to 8 bit before you can apply most operations.
I don't know about other options.

Dave

Dave Martindale

Mar 4, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>What you do not seem to understand is that the image-file (the data) will
>degrade much sooner in case when the gamma compensation is in the image data.
>When you scale the white-point then the highest values (R, G, B value of
>pixels) will saturate, they will hit the level 255 very soon. And because there
>is the gamma compensation in the image data there are a _lot_ of high R, G or B
>values in the data. This saturation is the same as artifacts. So the data
>deteriorates and is not good for printing anymore.

Again, rubbish. Suppose you start with an image that has been linearly
encoded. You look at the histogram and decide to rescale the image with
a white point of 230. When you do this, all values from 231 to 255 are
clamped to 255, while the range from 0 to 230 is rescaled to span 0 to 255.

Now suppose you start with the *same* image, but stored using gamma 0.5
encoding. A portion of the image that had a sample value of 230 in the
linear image would now be stored as a sample value of 242 in the new image.
So if you rescale the image with a new white point of 242, this produces
*exactly the same* visual result as rescaling at 230 in the linear image.
Exactly the same portions of the image saturate and are clamped at 255.
All other portions of the image are increased in brightness *by exactly
the same amount*. There is no difference at all in the result - except
that the gamma-corrected image has fewer quantization artifacts in
shadow areas, like it always did.

Please try working out the math for yourself rather than just baldly
posting statements that simply are not true.
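
The 230-versus-242 numbers check out; a one-line verification, assuming
gamma 0.5 as in the example above:

```python
GAMMA = 0.5

linear_white = 230 / 255.0                   # white point in the linear image
gamma_white = 255.0 * linear_white ** GAMMA  # same intensity, gamma-encoded

assert round(gamma_white) == 242   # the equivalent gamma-space white point
```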

>In case of the linear image you only cut out what you want to cut out and below
>that point there will be _no_ saturation. So the image file is still good for
>printing (and has the same or better performance on monitor than the originally
>gamma compensated image has).

Again, the effect of changing the white point is the same for both linear
and gamma-encoded images. A little bit of math proves this. If you don't
see this when you are looking at images, you are doing something wrong.

>>On the other hand, if you move the "black point" marker off zero, then
>>you are mucking about with the tonal scale of the whole image, and
>>the sample values are no longer related to original scene intensity
>>by a simple rule. This is true for both linear and gamma-encoded
>>sample values, so gamma-encoded samples suffer no additional disadvantage
>>here.
>
>The above is not true at all. When you apply a linear transformation onto linear
>image the resulting image will surely be linear. This should be quite
>elementary. There will be no hue-shift, only the intensities change. So with a
>linear image you get only the desired effect (you cut out what you want to cut
>out).

If you apply a linear transform to linearly encoded pixels,
the result is "linear" in the mathematical sense. Mathematically, the
sample values are linearly related to the intensity if the function
relating the two looks like:

sample = A * intensity + B

That's a linear equation. But in photographic terms, if the sample value
is linearly related to intensity, the value "B" in the above function must
be zero. This is necessary to have the property that doubling the
intensity doubles the sample value. Using a black point offset scales
the sample values so that "B" becomes non-zero, and sample values are no
longer proportional to intensity. They are proportional to intensity plus
an offset. If you look at the relationship between scene brightness and
image brightness on a log-log scale (the way the eye sees), the transfer
characteristic is no longer a straight line.

The same thing happens if you apply a black level shift to a gamma encoded
image.

>So the result is that you get two problems
>from the black-point change (yes, they are related to each other) (1) the
>non-linearity of the image is no longer a simple gamma function, and (2) there
>is a color shift.

Shifting the black level causes this effect in *both* linear and gamma
encoded images. Try doing the math - it's easy to see.
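The math really is short enough to sketch in a few lines of Python. This is
only an illustration: the gamma value (0.5) matches the example used earlier in
this thread, and the black point of 20 is an arbitrary choice, not from any
particular image.

```python
# Hedged sketch: a black-point shift breaks the proportionality
# sample = A * intensity for BOTH linear and gamma-encoded samples.

GAMMA = 0.5

def encode_linear(intensity):
    # intensity is a fraction of white, 0.0 .. 1.0
    return 255.0 * intensity

def encode_gamma(intensity):
    return 255.0 * intensity ** GAMMA

def shift_black(sample, black=20.0):
    # Levels-style black point move: 'black' maps to 0, 255 stays at 255
    return max(0.0, (sample - black) * 255.0 / (255.0 - black))

for encode in (encode_linear, encode_gamma):
    s1 = shift_black(encode(0.10))
    s2 = shift_black(encode(0.20))   # scene intensity doubled
    # If samples were still proportional to intensity, the ratio would be
    # exactly 2.0 (linear) or 2**GAMMA (gamma). It is neither.
    print(encode.__name__, round(s2 / s1, 3))
```

Run it with any encoding you like: once B is non-zero, doubling the intensity
no longer doubles (or gamma-doubles) the sample value.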

>A warning: the "midpoint" marker in Levels (in Photoshop) is *not* a true gamma
>control, there is an Adobe tweaking in it so that the shadows will not be
>affected much. (This is a problem in Photoshop, not in your reasoning).

Yeah, I know about that.

>You are very wrong here. Those percentages would apply only if you could
>view the image on a CRT in total darkness, the light would come through the
>glass of the monitor without any reflections, and all the light from the
>monitor that does not hit your eyes would end up in a black hole.
>
>You can not see a 1/256 linear difference on a monitor. Just experiment and you
>will see.

It depends on how your monitor is calibrated, doesn't it? If you choose
to use linear sample values, then *by definition* a value of 3 should be
50% brighter than 2. If it *is* that much brighter, you can clearly see
the step between them. If it *isn't* that much brighter, then your
monitor is not calibrated to display your images correctly.

Of course, it would take a dark room to see this. But suppose that your
image has only a 30:1 brightness range. Then the darkest shadows will
have a sample value of 8 or 9. The difference between these is still
12.5%, and you can still see a step that size. In a gamma-encoded image,
the same brightness would be stored as 46 or 47, and the step size between
these two adjacent codes is only 4.3%.

The important difference is that in these (not terribly deep) shadows,
the smallest representable difference in brightness in a linear-encoded
image is three times the size of the smallest representable brightness
difference in the gamma-encoded image. The size of the steps in the
gamma image is usually (but not always) small enough to be invisible.
The three times larger steps in the linear image are often visible.
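The step sizes quoted above can be checked directly. A sketch, assuming the
gamma-0.5 encoding used earlier in this thread (the exact percentages shift
slightly with the gamma value chosen):

```python
# Relative brightness step between adjacent 8-bit codes, linear vs gamma 0.5.

GAMMA = 0.5

def rel_step_linear(code):
    # brightness jump from one linear code to the next, as a fraction
    return (code + 1) / code - 1.0

def rel_step_gamma(code):
    # decoded brightness is (code/255)**(1/GAMMA); compare adjacent codes
    return ((code + 1) / code) ** (1.0 / GAMMA) - 1.0

print(rel_step_linear(8))   # 12.5% step between linear codes 8 and 9
print(rel_step_gamma(46))   # roughly 4.4% step between gamma codes 46 and 47
```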

>Now I have the urge to quote Mr Poynton:
>
> "Through an amazing coincidence, vision's response to intensity
> is effectively the inverse of a CRT's nonlinearity"
>
>He seems to be saying totally the opposite of what you say above. You know,
>the inverse gamma does a rocket launch in the dark, so according to Mr.
>Poynton the sensitivity would be much higher there.
>
>Is your statement correct or is the statement of Mr Poynton correct? It seems to
>me that both can not be correct at the same time. Or does someone miss some
>piece of essential information?

No, both statements are correct, and they do not contradict each other.
The gamma correction or "inverse gamma" curve has a very high slope in
the dark part of the image, when viewed on linear axes. This tells us
that we need to allocate a larger portion of the 256 codes available
to us in the darker portion of the image, and fewer codes in the lighter
portion than linear encoding would do. This has the effect of making
the relative step size between adjacent codes smaller in the dark areas,
where we need that. It also makes the relative step size larger in the
bright areas, but we can get away with this because a linear code has
smaller steps than necessary in the bright areas.
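One way to see this reallocation is simply to count codes. A sketch, assuming
8-bit codes and the gamma-0.5 encoding from this thread; the 10%-of-white
cutoff is an arbitrary choice for illustration:

```python
# How many of the 256 codes land in the darkest 10% of the intensity range?

GAMMA = 0.5

def decoded_intensity(code):
    # gamma decoding: code -> linear intensity fraction
    return (code / 255.0) ** (1.0 / GAMMA)

linear_codes = sum(1 for c in range(256) if c / 255.0 <= 0.10)
gamma_codes = sum(1 for c in range(256) if decoded_intensity(c) <= 0.10)

print(linear_codes)  # linear coding spends 26 codes on the darkest tenth
print(gamma_codes)   # gamma-0.5 coding spends 81 codes on the same range
```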

Does anyone else think that I'm contradicting Charles Poynton anywhere?
Or is it just Timo that reads it that way?

>>Here, you are arguing that filtering operations should be done in linear
>>space. However, you do *not* have to store images using linear samples
>>in order to do your operations in linear space.
>
>Really. And I do not need to use the car to go to work because I can just
>use the helicopter.

Ahem. How is this in any way relevant to the argument?

You can do all your processing in linear space if you want, and you can
store your image files on disk this way if you want. But either you'll
have to use 12 bits or more per sample, or you will get quantization
artifacts. No one is stopping you from doing this.

On the other hand, most of the world has figured out that they can get
most of the benefits of 12-bit linear coding in only 8 bits using
gamma encoding. So they use it. It's a reasonable compromise. There
are some people for whom this compromise is not acceptable (e.g. X-ray
imaging, astrophotography) and they continue using more bits.

Again, I ask: do you really want 8 bits linear, or 12 bits linear?
It is pointless to say that you just want "linear" devices and processing
without saying how many bits you intend to use. 8 bits linear is simply
bad because of quantization artifacts. 12 bits linear is quite a
reasonable choice, if you're prepared to pay the price (usually a factor
of 1.5 in disk space, a factor of 2 in RAM, and somewhat more CPU).
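The trade-off can be made concrete by comparing the smallest representable
relative step at a deep-shadow brightness. A sketch; the 1%-of-white figure
and the gamma-0.5 encoding are illustrative assumptions, not measurements:

```python
# Smallest representable relative brightness step at 1% of white,
# for 8-bit linear, 12-bit linear, and 8-bit gamma-0.5 coding.

GAMMA = 0.5

def step_linear(bits, intensity):
    levels = 2 ** bits - 1
    code = max(1, round(intensity * levels))
    return (code + 1) / code - 1.0

def step_gamma8(intensity):
    code = max(1, round(255 * intensity ** GAMMA))
    return ((code + 1) / code) ** (1 / GAMMA) - 1.0

print(step_linear(8, 0.01))   # 8-bit linear: coarse steps in the shadows
print(step_linear(12, 0.01))  # 12-bit linear: fine steps
print(step_gamma8(0.01))      # 8-bit gamma: in between
```

The 8-bit gamma step lands between the two linear cases, which is the
compromise described above.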

Dave

Dave Martindale
Mar 4, 1998

gkin...@cybernet1.com writes:
>A histogram shows
>a mass of data clumped in the lower third with nothing in the highlight or
>midrange. (I use Picture Publisher 5 from Micrografx). Then I go into "adjust
>tone balance" and move the highlight marker to the first point that has
>image data greater than zero. Then I fool around with the midpoint selector
>to get the smoothest image I can get. The results don't look too bad on
>screen (I use a PC with a 19" monitor) but the printed output resembles a
>color-coded contour map more than a photograph. When I open the histogram
>of the adjusted file I see something that looks like a picket fence made with
>random length lumber.

The problem is that your original image only uses 1/3 of the available
256 codes - so there are less than 100 distinct brightnesses in the
image. As long as it remains dark and muddy, you don't see the steps
between them. But when you rescale it to fill the full brightness range,
you can see how large the gaps between the sample values really are.

>Clearly the image would be healthier if those gaps were
>filled up. Those are the data bins my 8x3 bit program sorts the data into,
>right? Why can't the full bins spill some of their data into the less
>fortunate ones. Would that give me a better picture?

You *can* spread the image around into more histogram bins. Try adding
some random noise to the picture, then look at the histogram. The
pickets in that picket fence will spread out when you add noise, with the
amount of spreading depending on the amplitude of the noise. Adjust the
amplitude until you like the image, or until you like the histogram.
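The picket fence and the noise trick are easy to reproduce on synthetic data.
A sketch; it assumes an image confined to codes 0..85 and a simple 3x stretch,
which is roughly the situation described above:

```python
# Picket-fence histogram from rescaling, and spreading it with noise.
import random

random.seed(1)
# An "image" whose values all sit in the bottom third of the 0..255 range:
dark = [random.randint(0, 85) for _ in range(10000)]

# Stretch 0..85 to 0..255: every output code is a multiple of 3, so two
# out of every three histogram bins stay empty - the picket fence.
stretched = [v * 3 for v in dark]
used_bins = len(set(stretched))
print(used_bins)   # only 86 of the 256 bins are occupied

# Adding a little random noise before display spreads the pickets out:
noisy = [min(255, max(0, v + random.randint(-2, 2))) for v in stretched]
print(len(set(noisy)) > used_bins)   # more bins occupied - but a noisier image
```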

Unfortunately, you'll end up with a noisy image. The fine intensity
information that you would need to produce a truly good image is just not
there - it was lost between the CCD and the A/D converter when the
original image was taken, and nothing can get it back.

Dave

Dave Martindale
Mar 4, 1998

tim...@clinet.fi (Timo Autiokari) writes:
>Yes, I'm talking about getting it not only out but also getting it into printers
>too. I do this in 8 bit every day and have no problems with the dark areas of
>the image.

If you are just printing on paper, you probably have a 30:1 brightness
range or less in your output. This is least likely to show artifacts
in the dark areas. People working with CRT displays in a dark room
have about a 100:1 brightness range available, so it's more significant
there. Projected transparency film (slides or movies) have a brightness
range of several hundred to 1; it's even more important there.

Also, if you always work with scanned images, the noise from the film grain,
CCD electronics noise, and other sources tends to mask quantization errors.
That doesn't mean they are not there, just that they are less visible.
Images produced by computer rendering techniques with no added noise are
the most likely to show quantization artifacts, because there is no
masking noise.

>The "better shading" is the only argument one can have to support the gamma
>space. What is so special about the black color? The important information
>and the quality of the images is not in the shadows. It is in the colors,
>everywhere else but not in the shadows.

I can't say anything about your images. But in *mine*, the shadows are
important.

>The gamma space cuts more than 50% of the available colors, and it cuts more
>heavily from the highlights and midtones. In reality this generates artifacts
>in the midtones and highlights very easily when such images are edited.

A gamma encoded image has the same number of available colours as a
linear image - they are just distributed differently across the tonal scale.

What sort of artifacts are you talking about? Can you put even one example
image somewhere that people can FTP so they can look at it, so we can see
what you mean?

Also, please keep in mind that if you are looking at your images on
a typical graphics display with 8-bit DACs, your image *is* being converted
to gamma-corrected form before it reaches the DACs. All of the "extra
colours" that you so want to preserve disappear when your image goes
through the gamma-correction lookup table that *must* be present somewhere
in the system (either software or hardware) ahead of the DACs.

Only if you have a frame buffer with 10-bit or wider DACs, and a wider
lookup table to match it, will you see the extra resolution in the bright
areas of the image on screen.
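For concreteness, here is what such a gamma-correction lookup table might look
like. A sketch only: a display gamma of 2.2 is assumed, and real systems
differ in the exact curve they load:

```python
# An 8-bit gamma-correction LUT sitting ahead of the DACs: it maps a linear
# sample to the DAC code that produces the intended brightness on a
# gamma-2.2 display.

DISPLAY_GAMMA = 2.2

lut = [round(255 * (i / 255) ** (1 / DISPLAY_GAMMA)) for i in range(256)]

# Bright linear codes are squeezed together (adjacent inputs map to the same
# or adjacent outputs), while dark codes are spread far apart - so the "extra
# colours" in the highlights of a linear image cannot survive this table.
print(lut[200], lut[201])   # adjacent bright inputs: outputs differ by <= 1
print(lut[1], lut[2])       # adjacent dark inputs: outputs far apart
```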

Dave

gkin...@cybernet1.com
Mar 5, 1998

In article <6dkj1s$q...@redgreen.cs.ubc.ca>, da...@cs.ubc.ca (Dave
Martindale) wrote:

>gkin...@cybernet1.com writes:
>>A histogram shows
>>a mass of data clumped in the lower third with nothing in the
>>highlight or midrange. ...(snip)
>
>The problem is that your original image only uses 1/3 of the available
>256 codes - so there are less than 100 distinct brightnesses in the
>image. (more snips)
>
>Unfortunately, you'll end up with a noisy image. The fine intensity
>information that you would need to produce a truly good image is just
>not there - it was lost between the CCD and the A/D converter when the
>original image was taken, and nothing can get it back.
>
>Dave

Thanks Dave. I can see now why a 30 or 36 bit device would be a lot
handier for someone like me who likes to make things do stuff they
can't do.

Timo Autiokari
Mar 5, 1998

On Wed, 4 Mar 1998 10:49:48 -0500, "Bruce Lucas" <lu...@watson.ibm.com> wrote:

>Examples please?

My pleasure:
http://www.clinet.fi/~timothy/calibration/gamma_errors/comparison.htm

There are currently three cases, with more to come. They compare the so-called
"more perceptible coding from 12 bit" against the linear 8 bit.

Timo Autiokari

Timo Autiokari
Mar 5, 1998

On 03 Mar 1998 16:48:05 +0100, Walter Hafner <haf...@forwiss.tu-muenchen.de>
wrote:

>Is the method of finding the gamma-factor of particular displays as
>described in http://www.povray.org/binaries/ (last paragraph) any good?

Yes it is, if you also read the directions from the text file.

I would like to invite you to see mine at:
http://www.clinet.fi/~timothy/calibration/g/index.htm

Timo Autiokari

Timo Autiokari
Mar 5, 1998

In message 6dkin6$qb4@redgreen.cs.ubc.ca on 1998/03/04 da...@cs.ubc.ca (Dave
Martindale) wrote:

>tim...@clinet.fi (Timo Autiokari) writes:
>>This saturation is the same as artifacts. So the data deteriorates and
>>is not good for printing anymore.

>Again, rubbish. Suppose you start with an image that has been linearly
>encoded. You look at the histogram and decide to rescale the image with
>a white point of 230. When you do this, all values from 231 to 255 are
>clamped to 255, while the range from 0 to 230 is rescaled to span 0 to 255.

>Now suppose you start with the *same* image, but stored using gamma 0.5
>encoding. A portion of the image that had a sample value of 230 in the
>linear image would now be stored as a sample value of 242 in the new image.
>So if you rescale the image with a new white point of 242, this produces
>*exactly the same* visual result as rescaling at 230 in the linear image.
>Exactly the same portions of the image saturate and are clamped at 255.
>All other portions of the image are increased in brightness *by exactly
>the same amount*. There is no difference at all in the result - except
>that the gamma-corrected image has fewer quantization artifacts in
>shadow areas, like it always did.

You are quite wrong with the above.

Your level 242 in the gamma-compensated image contains large quantization. The
level 230 in the linear image does not contain _any_ quantization. So the image
will be in much better condition for the printer. Quantization is the prime
source of artifacts.

>If you apply a linear transform to linearly encoded pixels,
>the result is "linear" in the mathematical sense. Mathematically, the
>sample values are linearly related to the intensity if the function
>relating the two looks like:

> sample = A * intensity + B

>That's a linear equation. But in photographic terms, if the sample value
>is linearly related to intensity, the value "B" in the above function must
>be zero. This is necessary to have the property that doubling the
>intensity doubles the sample value. Using a black point offset scales
>the sample values so that "B" becomes non-zero, and sample values are no
>longer proportional to intensity. They are proportional to intensity plus
>an offset. If you look at the relationship between scene brightness and
>image brightness on a log-log scale (the way the eye sees), the transfer
>characteristic is no longer a straight line.

>The same thing happens if you apply a black level shift to a gamma encoded
>image.

>Shifting the black level causes this effect in *both* linear and gamma
>encoded images. Try doing the math - it's easy to see.

With the above you are hopelessly wrong.

What you cut out using the black point change is the _desired effect_. That is
the constant B in your equation. Note, you cut it out. Then what you have left
is the function A * intensity, which is a linear function.

No, you have not done the math so you do not know, you only believe. Do not
worry, I have made this easy for you. Please see:
http://www.clinet.fi/~timothy/calibration/gamma_errors/comparison.htm
You can download a test setup and do it on your own, no need for math, just do
it and see the result. They do _not_ cause the same effect at all. The gamma
compensated image gives a very bad result.

>>A warning: the "midpoint" marker in Levels (in Photoshop) is *not* a true gamma
>>control, there is an Adobe tweaking in it so that the shadows will not be
>>affected much. (This is a problem in Photoshop, not in your reasoning).

>Yeah, I know about that.

Thank you for confirming it. Now we only need to get Adobe to acknowledge this
problem. Currently there is no way to correctly compensate the gamma in
Photoshop. (unless you use a specially created *.amp file)

>>You can not see a 1/256 linear difference on a monitor. Just experiment and you
>>will see.

>It depends on how your monitor is calibrated, doesn't it? If you choose
>to use linear sample values, then *by definition* a value of 3 should be
>50% brighter than 2. If it *is* that much brighter, you can clearly see
>the step between them. If it *isn't* that much brighter, then your
>monitor is not calibrated to display your images correctly.

It depends on the monitor calibration, but this affects both the gamma
compensated images and the linear images. On a properly calibrated monitor you
can not see the 1/256 linear-light intensity step. Again please go to:
http://www.clinet.fi/~timothy/calibration/gamma_errors/comparison.htm
and see the first image.

>Does anyone else think that I'm contradicting Charles Poynton anywhere?
>Or is it just Timo that