OIIO maketx and 8-bit maps, saving space and bandwidth


Dorian Fevrier

Feb 5, 2014, 10:28:35 PM2/5/14
to apples...@googlegroups.com
This is to continue the interesting discussion about 8-bit maps without
disturbing the GSoC discussion.

Dorian:
- Tx compatibility (with Arnold txmake).

Franz:
Can you elaborate on this?

Dorian:
maketx from OpenImageIO (which Arnold uses) supports RenderMan and Arnold
.tx files. The point is to avoid EXR (and thus 16-bit floating-point
textures) when you only want to store 8-bit colors (most of the time,
actually).
.tx files are essentially tiled TIFFs. The very good point is that
OpenImageIO also stores a hash in the image metadata. So, if your
production has a lot of textures with the same pixel content (middle
gray, say), you can use that hash as a key in your cache to handle this
(maybe someone can confirm that the OpenImageIO cache system already
supports this).
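The deduplication idea can be sketched in a few lines of plain Python. This is only an illustration of the principle (the texture names and the cache layout are made up); OIIO itself stores the pixel hash in metadata (the "oiio:SHA-1" attribute, if I recall correctly), while here we simply hash the raw pixel bytes with hashlib:

```python
import hashlib

def pixel_hash(pixels: bytes) -> str:
    """Hash of the raw pixel data, similar in spirit to the SHA-1
    that maketx stores in the .tx metadata."""
    return hashlib.sha1(pixels).hexdigest()

def deduplicate(textures: dict) -> dict:
    """Map each texture name to the canonical name sharing the same pixels."""
    canonical = {}  # hash -> first texture name seen with that content
    aliases = {}    # texture name -> canonical name
    for name, pixels in textures.items():
        h = pixel_hash(pixels)
        canonical.setdefault(h, name)
        aliases[name] = canonical[h]
    return aliases

# Two identical middle-gray tiles and one distinct tile (made-up names):
gray = bytes([128, 128, 128]) * 4
textures = {
    "wall_a.tx": gray,
    "wall_b.tx": gray,
    "floor.tx": bytes([10, 20, 30]) * 4,
}
aliases = deduplicate(textures)
# wall_b.tx resolves to the same cache entry as wall_a.tx.
```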

Franz:
So the main goal is to save disk space and network bandwidth? Too bad
that OpenEXR doesn't support 8-bit images...
(About the hash and duplication detection) Interesting, I was not aware
that OIIO was doing this.

Dorian:
Yes. Don't forget you often (always, actually) have mipmaps. In float,
they increase texture size a lot.
Totally agree with that (EXR doesn't support 8-bit). I suppose EXR was
not intended to be a "texture" format. Don't know.

Franz:
(About mipmap size) Yes, 4x in theory (not accounting for compression).
(About EXR and 8bit) I think the main feature of OpenEXR has always been
HDRI support, typically to store environment maps; that probably
explains the focus on 16-bit and 32-bit channels.
If I'm not mistaken, the concept of mipmaps was present from the
beginning too.
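The numbers are easy to check with a quick sketch (sizes are raw, uncompressed and untiled; a 2x factor for half, 4x for float, and the full pyramid costing about 4/3 of the base level):

```python
def pyramid_bytes(width, height, channels, bytes_per_channel):
    """Total size of a full mip pyramid, halving each level down to 1x1,
    not accounting for tiling or compression."""
    total = 0
    w, h = width, height
    while True:
        total += w * h * channels * bytes_per_channel
        if w == 1 and h == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return total

base_8bit = pyramid_bytes(4096, 4096, 3, 1)   # 8-bit RGB
base_half = pyramid_bytes(4096, 4096, 3, 2)   # 16-bit half RGB
base_float = pyramid_bytes(4096, 4096, 3, 4)  # 32-bit float RGB
# half costs 2x the 8-bit pyramid, float 4x, and the whole pyramid
# is only ~4/3 the size of the base level.
```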

Est:
In the OSL branch, being merged right now, we will probably support all
those things, including using maketx
to generate mipmaps. That said, I don't agree with the idea of 8-bit
textures in general. It's true that in some
very specific cases it can save space, but when using a linear color
workflow, 8-bit textures need to be
linearized in the shaders. This is error prone, inefficient and slightly
incorrect filtering-wise.

OpenEXR supports better compression algorithms than TIFF, so the cost of
compressed 16-bit half mipmaps is usually not much more than that of
8-bit mipmaps.

I think that the best workflow is to linearize all textures when making
mipmaps.
maketx can do that easily, using hardcoded color conversions or OpenColorIO.

Dorian:
(About 8-bit maps having to be linearized in the shader and the problems
this can bring) Excellent point Est, thanks for that!

Franz:
I guess there is a point in trying to save disk space and network
bandwidth by storing textures with 8 bits per channel, especially when
the source material for these textures is 8-bit anyway. The storage
format is orthogonal to the in-memory format, where you'll likely want
half or float per channel, for instance for proper linearization.

Then imagine how tiny textures would be on disk if they were both
represented with 8-bit per channel AND compressed with OpenEXR :)

Franz (again):
I partially take that back; I didn't realize you guys were talking about
mip pyramids and not just plain images. So yes, you want to build the
mipmaps in linear space, so you need to linearize the input textures
first, which means you'll want to convert them to half or float format
upon loading and before linearization. Which means that in the end your
mipmaps will be in half or float, and you would lose information by
quantizing back to 8-bit mip pyramids.

So yeah I tend to agree with Est here, doesn't seem worth the hassle really.
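The information loss Franz describes can be illustrated numerically: linearize every possible 8-bit sRGB code with the standard sRGB transfer function, quantize the linear results back to 8 bits, and count how many codes survive (a plain-Python sketch, no OIIO involved):

```python
def srgb_to_linear(x: float) -> float:
    """Standard sRGB decoding (electro-optical transfer function)."""
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

# All 256 possible 8-bit sRGB codes, linearized to float...
linear = [srgb_to_linear(c / 255.0) for c in range(256)]

# ...then quantized back to 8 bits, but this time in *linear* space:
requantized = {round(v * 255.0) for v in linear}

# Several dark sRGB codes collapse onto the same 8-bit linear value,
# so an 8-bit linear mip pyramid loses shadow precision.
lost = 256 - len(requantized)
```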

Dorian, can you tell us more about the size of the data sets you are
dealing with, and why/how you deal with 8-bit mipmapped textures in your
pipeline? I'm immensely curious to know more now :)


And my final answer (Dorian):
> Dorian, can you tell us more about...
No, I actually can't, because I've never had to deal directly with
Arnold .tx myself.

But I can say network bandwidth can be a huge problem, especially in
small studios. I remember a shot on a project where the texture was in
front of the camera at full definition. We were using pyramidal mental
ray maps at that time, and the render farm computing the shot completely
saturated the network. I guess there was a cache-miss problem, so this
was maybe not related to the map size itself but rather to the number of
network IOs.
A second problem I encountered on another project: when someone started
a render that required a lot of textures, every render node tried to
access the data at the same time. The network bandwidth graphs spiked
for a few minutes, and it was a mess during that time.
Maybe 8-bit maps would not have solved the problem in those cases. I
just consider network bandwidth a valid input, but Est is right: this
problem is not a sufficient reason to force shaders to do the conversion
themselves.

> I guess there is a point in trying to save disk space and network
bandwidth by storing textures with 8 bits per channel
Actually, I'm not sure, because when you convert 8-bit values to linear
you will (I suppose) end up with a lot of floating-point values that are
actually the same. And so, the EXR compression should take advantage of
this. I don't have any practical proof of what I'm saying; it just makes
sense.
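That hypothesis is easy to probe with zlib (OpenEXR's ZIP compression is deflate-based too). This is a synthetic sketch, not a measurement on real EXR files: floats linearized from 8-bit sources can take at most 256 distinct values per channel, which a dictionary compressor exploits, while arbitrary continuous floats barely compress:

```python
import random
import struct
import zlib

random.seed(42)

def srgb_to_linear(x: float) -> float:
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

n = 65536
# Floats linearized from 8-bit sources: at most 256 distinct values.
from_8bit = [srgb_to_linear(random.randrange(256) / 255.0) for _ in range(n)]
# Arbitrary continuous floats: essentially all distinct.
continuous = [random.random() for _ in range(n)]

packed_8bit = struct.pack(f"{n}f", *from_8bit)
packed_cont = struct.pack(f"{n}f", *continuous)

ratio_8bit = len(zlib.compress(packed_8bit)) / len(packed_8bit)
ratio_cont = len(zlib.compress(packed_cont)) / len(packed_cont)
# The 8-bit-sourced data compresses much better than the continuous data.
```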

Anyway, you have both convinced me that there are more problems than
advantages in still dealing with 8-bit sRGB maps. Thanks a lot for your
valuable feedback guys, I really appreciate it!

There is still the hash given by maketx that can be very useful for
duplicate detection on the appleseed side.

François Beaune

Feb 6, 2014, 4:54:56 AM2/6/14
to apples...@googlegroups.com
Thanks for the summary!

Franz




--
You received this message because you are subscribed to the Google Groups "appleseed-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to appleseed-dev+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Est

Feb 6, 2014, 7:16:18 AM2/6/14
to apples...@googlegroups.com

SPI's color management workflow is described here:

It includes some info about textures.
Also check the part where they talk about the diffuse texture colorspace (dt)
in SPI's OpenColorIO configs.

Est.

François Beaune

Feb 6, 2014, 8:34:20 AM2/6/14
to apples...@googlegroups.com
Very interesting, thanks.

Franz



Dorian Fevrier

Feb 6, 2014, 11:05:03 PM2/6/14
to apples...@googlegroups.com
Very interesting read!

Est, I have a question for you (since you raised the point about the problems of linearizing on the shader side):

On: https://sites.google.com/site/opencolorio/profiles/spi-workflow

> Color processing (linearization) is applied before mipmap generation in order to assure energy preservation in the render.  If the opposite processing order were used, (mipmap in the original space, color convert in the shader), the apparent intensity of texture values would change as the object approached to receeded to the camera.

Tell me if I'm wrong, but is it the fact that sRGB mipmaps are downscaled, so pixels are merged and blurred with their sRGB values, that makes the pixels of deep mipmap levels have values that, once linearized by the shader, are brighter than they are supposed to be?

> Painted textures that are intended to modulate diffuse color components are labelled dt (standing for "diffuse texture").

Do any of you understand what they have in mind when they use the word "modulate"? We are just talking about diffuse color, right? Or maybe at SPI there is a diffuse color AND a diffuse texture: if you want the diffuse texture, the diffuse color is just white (1.0, 1.0, 1.0), and you use the diffuse color as a "tint" variation.
Actually, we could consider that the diffuse color modulates the diffuse texture.

Est

Feb 7, 2014, 5:55:21 AM2/7/14
to apples...@googlegroups.com


On Friday, 7 February 2014 05:05:03 UTC+1, Narann wrote:
Very interesting read!

Est, I have a question for you (since you raised the point about the problems of linearizing on the shader side):

On: https://sites.google.com/site/opencolorio/profiles/spi-workflow

> Color processing (linearization) is applied before mipmap generation in order to assure energy preservation in the render.  If the opposite processing order were used, (mipmap in the original space, color convert in the shader), the apparent intensity of texture values would change as the object approached to receeded to the camera.

Tell me if I'm wrong, but is it the fact that sRGB mipmaps are downscaled, so pixels are merged and blurred with their sRGB values, that makes the pixels of deep mipmap levels have values that, once linearized by the shader, are brighter than they are supposed to be?

There are two filtering steps: one when the mipmap is created and another when the mipmap is sampled.
Both are wrong in sRGB (or any other non-linear) colorspace.

> Painted textures that are intended to modulate diffuse color components are labelled dt (standing for "diffuse texture").

Do any of you understand what they have in mind when they use the word "modulate"? We are just talking about diffuse color, right? Or maybe at SPI there is a diffuse color AND a diffuse texture: if you want the diffuse texture, the diffuse color is just white (1.0, 1.0, 1.0), and you use the diffuse color as a "tint" variation.
Actually, we could consider that the diffuse color modulates the diffuse texture.

By modulate they mean multiply.
This colorspace is used to ensure energy conservation in materials.
dt, when linearized, has pixel values between black (0) and almost white (0.9XX).
When you use the texture for any color parameter, for example diffuse color, you can be sure that your material will not
reflect more light than arrives at the surface. This detail is important in a physically based renderer like appleseed.
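In other words (a tiny sketch; the texel values below are made up for illustration):

```python
def modulate(color, texel):
    """'Modulate' simply means a per-component multiply."""
    return tuple(c * t for c, t in zip(color, texel))

dt_texel = (0.93, 0.90, 0.88)    # hypothetical linearized dt values (< 1.0)
diffuse_color = (1.0, 1.0, 1.0)  # plain white "tint"
albedo = modulate(diffuse_color, dt_texel)
# Every component stays below 1.0, so the surface never reflects
# more diffuse energy than it receives.
```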

Est.

Dorian FEVRIER

Feb 7, 2014, 6:43:23 AM2/7/14
to apples...@googlegroups.com
Thanks for the explanation Est! :)


François Beaune

Feb 7, 2014, 3:56:29 PM2/7/14
to apples...@googlegroups.com
On Fri, Feb 7, 2014 at 5:05 AM, Dorian Fevrier <fevrier...@yahoo.fr> wrote:

On: https://sites.google.com/site/opencolorio/profiles/spi-workflow

> Color processing (linearization) is applied before mipmap generation in order to assure energy preservation in the render.  If the opposite processing order were used, (mipmap in the original space, color convert in the shader), the apparent intensity of texture values would change as the object approached to receeded to the camera.

Tell me if I'm wrong, but is it the fact that sRGB mipmaps are downscaled, so pixels are merged and blurred with their sRGB values, that makes the pixels of deep mipmap levels have values that, once linearized by the shader, are brighter than they are supposed to be?

I'm not Est but let me offer an answer here:

Computing mipmap levels requires linearly combining pixel colors. For instance, to go from mipmap level N to mipmap level N+1 using a box filter, each pixel at level N+1 will be the average of four pixels from level N (computing the average of two colors involves two kinds of linear operations: additions and multiplications by a scalar).

The problem is this: you cannot linearly combine colors that are not in linear space. Let's call T the transformation from linear RGB to sRGB. T itself is not a linear transformation because it involves some kind of gamma correction. Let's then say you've got two sRGB colors. Since they are in the sRGB color space, they are transformed versions of colors (let's call them A and B) from the linear RGB color space, so let's refer to our two sRGB colors as T(A) and T(B).

Let's now say you want to compute the "average" of these two sRGB colors (something which doesn't make sense): it is given by 0.5 * (T(A) + T(B)). Now, if T were a linear transformation, these two rules would apply:

    (1)    T(A) + T(B) = T(A + B)
    (2)    0.5 * T(A + B) = T(0.5 * (A + B))

and since these rules would apply, we could conclude that:

    the average of our sRGB colors T(A) and T(B)
        which by definition is given by 0.5 * (T(A) + T(B))
        is equal to T(0.5 * (A + B)) thanks to the application of the two rules above
        is equal to the average, in sRGB, of the linear RGB colors A and B

which would be awesome. Unfortunately, as stated above, T is not a linear transformation, so the two rules (1) and (2) do not apply, and 0.5 * (T(A) + T(B)) is NOT the average, in sRGB, of colors A and B. It is something else, and it might very well be brighter or darker than the true average, depending on which operations are performed and on the colors themselves.

I hope this is somewhat understandable :)
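To make the argument concrete, here is the computation with actual numbers, taking linear black and white as A and B and using the standard sRGB transfer functions (a self-contained sketch):

```python
def srgb_to_linear(x: float) -> float:
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

def linear_to_srgb(x: float) -> float:
    return x * 12.92 if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

A, B = 0.0, 1.0                          # linear black and white
TA, TB = linear_to_srgb(A), linear_to_srgb(B)

# Naive: average the sRGB codes directly, as a mipmapper working
# in sRGB space would, then linearize the result:
wrong = srgb_to_linear(0.5 * (TA + TB))  # ~0.214 in linear terms
# Correct: linearize first, then average:
right = 0.5 * (srgb_to_linear(TA) + srgb_to_linear(TB))  # ~0.5
# The naive average comes out much darker than the true one.
```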

> Painted textures that are intended to modulate diffuse color components are labelled dt (standing for "diffuse texture").

Any of you understand what they have in mind when they use the word "modulate"?

You can consider that there are two kinds of textures:
  1. Textures that contain actual RGB colors.
  2. Textures that contain RGB multiplication factors, which are meant to "modulate" (i.e. multiply, pixel-by-pixel, component-by-component) other textures.
I suppose that what they call dt is a texture of the second kind, which then really contains multipliers. dt are necessarily linear by nature (i.e. they must never be color-transformed) because multipliers are NOT colors (even if they follow the RGB format and look like textures) and thus don't belong to a color space. I guess that's why they make this distinction.

Franz

Est

Feb 7, 2014, 11:33:12 PM2/7/14
to apples...@googlegroups.com


On Friday, 7 February 2014 21:56:29 UTC+1, Franz Beaune wrote:
On Fri, Feb 7, 2014 at 5:05 AM, Dorian Fevrier <fevrier...@yahoo.fr> wrote:

> [Dorian's question and Franz's full explanation of why averaging sRGB colors is incorrect, snipped]

That's correct. Thanks for the detailed explanation Franz.
The same reasons apply when you sample the mipmap at render time.

> Painted textures that are intended to modulate diffuse color components are labelled dt (standing for "diffuse texture").

Any of you understand what they have in mind when they use the word "modulate"?

You can consider that there are two kinds of textures:
  1. Textures that contain actual RGB colors.
  2. Textures that contain RGB multiplication factors, which are meant to "modulate" (i.e. multiply, pixel-by-pixel, component-by-component) other textures.
I suppose that what they call dt is a texture of the second kind, which then really contains multipliers. dt are necessarily linear by nature (i.e. they must never be color-transformed) because multipliers are NOT colors (even if they follow the RGB format and look like textures) and thus don't belong to a color space. I guess that's why they make this distinction.

dt (diffuse texture) is an sRGB-like colorspace. Normally, it's used in Photoshop and other sRGB apps.
When you linearize it, pixel values are never above 1. You can use the resulting texture as a
diffuse color map or in any other case where having bounded (0-1) values is important.
It would be case 1 in Franz's post.

In SPI's configs, the case-2 colorspace is called nc (non-color).

Est.

François Beaune

Feb 8, 2014, 4:25:17 AM2/8/14
to apples...@googlegroups.com
On Sat, Feb 8, 2014 at 5:33 AM, Est <rame...@gmail.com> wrote:

> [Est's clarification that dt is an sRGB-like colorspace (case 1), and that the case-2 colorspace is called nc (non-color), snipped]

Ok, I was wrong, my bad. Thanks for the clarification!

Franz

Dorian Fevrier

Feb 8, 2014, 8:19:39 AM2/8/14
to apples...@googlegroups.com
Thanks for the deep explanation Franz!

> In SPI's configs, the colorspace case 2, is called nc (non-color).

- dt for sRGB colors
- nc for modulation maps (e.g. a spec mask)

That makes more sense, yes. :)