WebGL2 has no texture swizzles for L/LA emulation


Alecazam

Feb 24, 2015, 9:16:16 PM
to webgl-d...@googlegroups.com
This will prevent simple emulation of L and LA textures.  These formats were stripped from the GL 3.2 core spec in a breaking change to existing apps.  Texture swizzles can emulate them, but Apple's initial implementation of 3.2 core didn't include swizzle on all platforms (later OS updates did).

Unfortunately, it looks like WebGL 2 has decided not to expose texture swizzles either.  Won't this be a breaking change for WebGL 1 apps that rely on these formats?  Apps rely on them to feed grayscale data to color shaders without having to modify the shader (see the sketch below).  Each texture fetch will now require manual swizzling in the shader, which then has to be disabled for color formats.
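For context, a minimal sketch of the WebGL 1 pattern in question (variable names are illustrative).  A grayscale upload via LUMINANCE samples as (L, L, L, 1.0), so an RGBA color shader works unchanged:

  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);  // no mips needed
  // One byte per texel; texture2D() returns (L, L, L, 1.0) in the shader,
  // so the color shader needs no per-format swizzle.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, width, height, 0,
                gl.LUMINANCE, gl.UNSIGNED_BYTE, grayPixels);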

Mark Callow

Feb 24, 2015, 11:10:54 PM
to webgl-d...@googlegroups.com


All true, but L & LA textures remain a (deprecated) part of OpenGL ES 3 and therefore WebGL 2.

Regards

-Mark


Alecazam

Feb 24, 2015, 11:15:49 PM
to webgl-d...@googlegroups.com, khr...@callow.im

But the swizzles are in GL3/ES3 precisely to emulate L and LA.  They're just not going to be supported by WebGL 2.  I'm suggesting that you need one or the other: either expose the swizzles, or else leave in L and LA support (which already have DX equivalents; WebGL 2 could use the swizzles internally without exposing them).

Mark Callow

Feb 24, 2015, 11:27:27 PM
to webgl-d...@googlegroups.com
On Feb 25, 2015, at 1:15 PM, Alecazam <al...@figma.com> wrote:

> But the swizzles are in GL3/ES3 precisely to emulate L and LA.

They're useful for more than that.

> They're just not going to be supported by WebGL 2.  I'm suggesting that you need one or the other: either expose the swizzles, or else leave in L and LA support (which already have DX equivalents; WebGL 2 could use the swizzles internally without exposing them).

It seems you did not understand what I wrote:

> L & LA textures remain … part of OpenGL ES 3 and therefore WebGL 2

Regards

    -Mark


Alecazam

Feb 24, 2015, 11:31:17 PM
to webgl-d...@googlegroups.com, khr...@callow.im
To clarify my last statement: DX9 had the L and LA formats.  DX11 only has R and forces shaders to be patched with manual swizzles.  The hardware has all sorts of programmable swizzle support internally (as evidenced by the GL texture swizzle), but maybe DX12 will re-expose that.  I just think this will be a mess for WebGL developers as well.

Sorry, Mark, I just saw your reply.  I re-read your sentence, and it's good to see these are still in WebGL 2; on first read I thought you said they had been removed.  Now I'm just not sure how the DX11/ANGLE layer emulates them efficiently, other than modifying the shader.

Shannon Woods

Feb 24, 2015, 11:32:44 PM
to webgl-d...@googlegroups.com, khr...@callow.im
On Tue Feb 24 2015 at 11:15:52 PM Alecazam <al...@figma.com> wrote:

> But the swizzles are in GL3/ES3 precisely to emulate L and LA.  They're just not going to be supported by WebGL 2.  I'm suggesting that you need one or the other: either expose the swizzles, or else leave in L and LA support (which already have DX equivalents; WebGL 2 could use the swizzles internally without exposing them).

L and LA have equivalents in D3D9. D3D11 drops support for these, and also lacks swizzle. Developers on that API achieve the same effect by sampling .rrr instead.
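In GLSL terms, that workaround looks like the following (a sketch with illustrative names, written as a WebGL shader string):

  var fsSource =
      'precision mediump float;\n' +
      'uniform sampler2D uTex;   // single-channel texture; only .r is populated\n' +
      'varying vec2 vUV;\n' +
      'void main() {\n' +
      '  // Replicate the red channel manually, as a LUMINANCE fetch would.\n' +
      '  gl_FragColor = vec4(texture2D(uTex, vUV).rrr, 1.0);\n' +
      '}\n';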
 




Shannon Woods

Feb 24, 2015, 11:34:50 PM
to webgl-d...@googlegroups.com, khr...@callow.im
On Tue Feb 24 2015 at 11:31:19 PM Alecazam <al...@figma.com> wrote:

> To clarify my last statement: DX9 had the L and LA formats.  DX11 only has R and forces shaders to be patched with manual swizzles.  The hardware has all sorts of programmable swizzle support internally (as evidenced by the GL texture swizzle), but maybe DX12 will re-expose that.  I just think this will be a mess for WebGL developers as well.
>
> Sorry, Mark, I just saw your reply.  I re-read your sentence, and it's good to see these are still in WebGL 2; on first read I thought you said they had been removed.  Now I'm just not sure how the DX11/ANGLE layer emulates them efficiently, other than modifying the shader.

ANGLE over D3D11 expands L/LA to RGBA behind the scenes, at a performance cost at texture upload/readback time.

 


Alecazam

Feb 24, 2015, 11:37:30 PM
to webgl-d...@googlegroups.com, khr...@callow.im
I did a search for LUMINANCE in the WebGL 2 spec but didn't see it, so I'm assuming it's inherited from the WebGL 1 context.  Again, monkey-patching shaders (especially a blur shader) to add swizzles isn't ideal (one draw using L, the next LA, and the next RGBA).  These all require different swizzles/conditionals or else separate shaders, as sketched below.  If you have multiple textures, each with varied formats, you quickly get a lot of shader variants.  That was the original point of the swizzles: to avoid shader modifications.
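A minimal sketch of that variant explosion (hypothetical defines; `format` and `blurShaderBody` are illustrative names):

  // Without API-level swizzles, the app prepends a per-format define and
  // compiles a separate shader variant; with multiple textures of varied
  // formats, the variant count multiplies.
  var swizzles = {
    L:    '#define SAMPLE(t, uv) vec4(texture2D(t, uv).rrr, 1.0)\n',  // R-backed L
    LA:   '#define SAMPLE(t, uv) texture2D(t, uv).rrrg\n',            // RG-backed LA
    RGBA: '#define SAMPLE(t, uv) texture2D(t, uv)\n'
  };
  var fsSource = swizzles[format] + blurShaderBody;  // body fetches via SAMPLE()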

Mark Callow

Feb 24, 2015, 11:40:12 PM
to Alecazam, webgl-d...@googlegroups.com

On Feb 25, 2015, at 1:37 PM, Alecazam <al...@figma.com> wrote:

> I did a search for LUMINANCE in the WebGL 2 spec but didn't see it, so I'm assuming it's inherited from the WebGL 1 context.

Yes it is. WebGL2ContextBase implements WebGLContextBase.

Regards

    -Mark


Alecazam

Feb 24, 2015, 11:43:19 PM
to webgl-d...@googlegroups.com, khr...@callow.im
> ANGLE over D3D11 expands L/LA to RGBA behind the scenes, at a performance cost at texture upload/readback time.

Oh, that just made me so sad, but I do appreciate that piece of info.  Is there any 1- or 2-channel 8-bit format (A8?) that doesn't get expanded?  This would be good to note in the WebGL 1 performance docs.  It basically makes these formats highly unpredictable for memory tracking, on an API that won't even tell you how much video memory you have.

Shannon Woods

Feb 24, 2015, 11:54:12 PM
to webgl-d...@googlegroups.com, khr...@callow.im
RED and RG are directly supported and don't get expanded. I don't recall whether they're exposed via WebGL or WebGL 2, but in ANGLE those are the recommended 1- and 2-channel formats for performance.

It's also worth noting: texture-object-level swizzle is implemented in ANGLE, but has its own significant drawbacks. Because the feature doesn't exist in D3D, we end up pre-swizzling the texture when the swizzle changes and caching the result, at a memory cost and a performance hit at swizzle/upload/readback time. This turned out to be less onerous than attempting to shadow sampler objects through shaders and potentially needing to patch and recompile at draw time.

ANGLE itself has some information available on performance caveats in this presentation from 2013, although our ES3 implementation was very early at the time, so there's not a lot of detail on those features. 

-- Shannon --


Alecazam

Feb 25, 2015, 12:00:53 AM
to webgl-d...@googlegroups.com, khr...@callow.im
RED and RG aren't WebGL 1 formats.  There's not even support for R32F, so there's no way to use MRT and output floating-point depth while keeping the bit depth at 32 bits.  There are a lot of texture gaps on desktop in WebGL 1 (no R32F, no RGBA16un) that would be good to address in WebGL 2.  Ken Russell has responded about these on some of my other threads.

I'm assuming A8un/A16f/A32f don't get replicated, since there's no swizzle on them.  L32f converting to RGBA32f on D3D11/ANGLE is a 4x multiplier on video memory.
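To put a number on that: a 2048x2048 L32f texture is 2048 * 2048 * 4 bytes = 16 MB; expanded to RGBA32f it becomes 64 MB for the same grayscale data.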

Alecazam

Feb 25, 2015, 11:32:17 AM
to webgl-d...@googlegroups.com, khr...@callow.im
I'm not sure what the texture format conformance suite for WebGL covers (OS X tests below); see the GL_RED results in particular.  It appears Chrome is exposing a WebGL 2.0 feature in WebGL 1.0.

I'm just suggesting that this format minefield is a big problem for developers.  I'd rather have GL_RED/GL_ALPHA and write my own swizzles in shaders than see a 32-bit format expanded to 128 bits.  There doesn't appear to be any workaround for the two-component expansion of LA that keeps the same texture size under ANGLE DX11.

Thanks for the great paper, Shannon.  I've read the OpenGL Insights chapter on early versions of ANGLE, and having worked with both APIs, I realize what a difficult problem bridging DX and GL is.  I'm just trying to find the least common denominator for performance here.

For 8un, 16f, and 32f:

GL_RED
  Chrome:  success
  Firefox: "Error: WebGL: texImage2D: Invalid format RED: Requires WebGL version 2.0 or newer."
  Safari:  INVALID_ENUM (so not helpful)

GL_ALPHA: success, but likely not a renderable format like R32F, so it typically can't be used with MRT.

GL_LUMINANCE:       valid on all GL/iOS/Android impls, but expands to 4 components on ANGLE DX11 (4x increase)
GL_LUMINANCE_ALPHA: valid on all GL/iOS/Android impls, but expands to 4 components on ANGLE DX11 (2x increase)

Alecazam

Feb 25, 2015, 11:54:55 AM
to webgl-d...@googlegroups.com, khr...@callow.im
I should add that the paper recommends the following for performance, but it doesn't work in Safari/Firefox WebGL 1; it appears to be a Chrome-only workaround (I'm only testing OS X right now, so no ANGLE).  It also sounds like ANGLE opted to replicate the channels to emulate GL_LUMINANCE instead.

  Use GL_RED for single-channel textures instead of GL_LUMINANCE

Shannon Woods

Feb 25, 2015, 12:56:45 PM
to webgl-d...@googlegroups.com, khr...@callow.im
I should probably note that the paper I linked is mostly geared towards using ANGLE as an ES implementation-- using ANGLE via WebGL adds another level of interaction and restrictions. We contributed a chapter to WebGL Insights which is more specifically WebGL-focused.

GL_RED_EXT and GL_RG_EXT are available via EXT_texture_rg in ES 2.0, but WebGL doesn't incorporate this extension. GL_RED and GL_RG are in the core ES 3.0 feature set, but should not be available in WebGL 1; WebGL validation should catch and reject these, I believe. (If it doesn't in Chrome, I suspect that's a bug.)

I'm not sure what you mean by "ANGLE opted for replicating the channels ... instead"-- ANGLE doesn't have the option of dropping L/LA support, as they're included in ES, so it has to provide them somehow. Channel expansion is how we provide them when using the D3D11 backend. Client applications that are able to use RED or RG (or the EXT versions) can avoid the expansion, but WebGL isn't able to yet.


Alecazam

Feb 25, 2015, 2:06:44 PM
to webgl-d...@googlegroups.com, khr...@callow.im
I'm actually seeing errors on GL_RED usage in Chrome as well, so something was off in my format tests and error intercepts.  It's unfortunate that the only way to test for a format is to try to allocate it (leading to GL errors in the browser console); a probe along those lines is sketched below.  So GL_ALPHA is the only non-expanded single-byte format when going through the DX11/ANGLE path.
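A minimal version of such a probe (sketch; the allocation attempt itself is what logs the console error):

  function supportsFormat(gl, format, type) {
    while (gl.getError() !== gl.NO_ERROR) {}  // drain any prior errors
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // An unsupported format/type raises INVALID_ENUM or INVALID_OPERATION.
    gl.texImage2D(gl.TEXTURE_2D, 0, format, 1, 1, 0, format, type, null);
    var ok = (gl.getError() === gl.NO_ERROR);
    gl.deleteTexture(tex);
    return ok;
  }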

> I'm not sure what you mean by "ANGLE opted for replicating the channels ... instead"-- ANGLE doesn't have the option of dropping L/LA support, as they're included in ES, so it has to provide them somehow.

I'm not saying that choice wasn't warranted, given that ANGLE keeps the existing shaders.  I was just pointing out that the document suggests using GL_RED in place of GL_LUMINANCE, but that doesn't appear to be an option in WebGL 1.  WebGL 1 could use extensions for the GL_RED/GL_RG formats, and for GL_RGBA16 (also needed in WebGL 2).  That would address the two-component expansion problem.  iOS has had all of these formats for quite a few years, so it's not a mobile limitation there.  It's unfortunate that DX exposed a reduced set of formats with no way to swizzle them in the samplers (that wasn't the case on consoles).

I'd also like to avoid the upload-time swizzling of texture content that comes with emulating ES3 swizzles in ANGLE, so I'll have to look at alternatives there as well.

Kenneth Russell

Feb 25, 2015, 2:13:57 PM
to webgl-d...@googlegroups.com, Mark Callow
On Wed, Feb 25, 2015 at 11:06 AM, Alecazam <al...@figma.com> wrote:
> I'm actually seeing errors on GL_RED usage in Chrome as well,

Chrome doesn't yet have a WebGL 2 prototype available, so GL_RED textures won't be supported yet. There's a lot of ongoing work to get this prototype out, so please stay tuned.

-Ken

Alecazam

Feb 25, 2015, 2:49:25 PM
to webgl-d...@googlegroups.com, khr...@callow.im
Looking over DXGI_FORMAT, even the GL_ALPHA formats aren't available (they would require replication, or shader swizzles from R8/R16/R32F).  Only GL_RED and GL_RG have direct mappings to DX.  GL_RED and GL_RED_EXT are the same token, so gl.RED could be allowed as a parameter to the texture-creation calls in WebGL 1; GL_RG would need an extension.

And, yes, I know WebGL 2 is coming, but this seems like a big omission for efficient texture handling in WebGL 1.  One classic example: normal maps can use LA8un for high quality, DXT5nm for compressed, and DXT1nm for low quality.  You really don't want the LA8un format to expand.  RG also doesn't map to the same .ga swizzle typically done in the shader, but LA8 does; see the sketch below.
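A sketch of that fetch (GLSL in a JS string; names illustrative).  The DXT5nm/LA convention samples .ga, so swapping in an RG texture means touching every such shader:

  var normalFetch =
      '  vec2 nxy = texture2D(uNormalTex, vUV).ga * 2.0 - 1.0;\n' +  // .rg for an RG texture
      '  // Reconstruct Z for a unit-length normal from the stored X/Y.\n' +
      '  vec3 n = vec3(nxy, sqrt(max(0.0, 1.0 - dot(nxy, nxy))));\n';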

Alecazam

Feb 25, 2015, 4:56:12 PM
to webgl-d...@googlegroups.com, khr...@callow.im
From the documentation, it looks like IE11's WebGL has to do the same expansion as ANGLE-DX11: Alpha, L, and LA textures are all converted into an RGBA texture.  That means there is no way to get a true 1- or 2-component texture, which puts pressure on valuable GPU memory.  So to summarize, here's the current situation for texture formats as I understand it.  ANGLE is also the default on Windows, so an app could exhaust the GPU more quickly there than on the OpenGL backend.

GL 3.2 core doesn't support ALPHA or LUMINANCE formats, so only a swizzled GL_RED texture works there.

FMT    GL      ES      DX      IE/ANGLE-DX11 expansion
R      R       R       R       1x    <- no access to gl.RED from WebGL 1; being added in WebGL 2
RG     RG      RG      RG      1x    <- no access to gl.RG from WebGL 1; being added in WebGL 2
L      R(s)    L       RGBA    4x    <- available in WebGL 1 and 2, but still expanded on IE/ANGLE-DX11
LA     RG(s)   LA      RGBA    2x       "
A      R(s)    A       RGBA    4x       "
RGB    RGBA    RGBA    RGBA    4/3x  <- unavoidable expansion, since hw only handles 4-byte multiples
RGBA   RGBA    RGBA    RGBA    1x

((s) = requires a texture swizzle)

Alecazam

Feb 25, 2015, 6:37:36 PM
to webgl-d...@googlegroups.com, khr...@callow.im
A8_UNORM is a DX format, so it looks like that's the sole single-component format with a non-swizzled direct equivalency (no expansion needed).  There are no A equivalents for 16f/32f (only R/RG).  The chart rows should read as follows.

FMT    GL      ES      DX      IE/ANGLE-DX11 expansion
A8     R8(s)   A8      A8      1x       "
A      R(s)    A       RGBA    4x       "  (16f/32f)

Alecazam

Feb 26, 2015, 2:12:43 PM
to webgl-d...@googlegroups.com, khr...@callow.im
I can't speak for what IE does internally, but the Chromium format tables are quite enlightening: they detail the exact mappings and expansions of each format in ANGLE DX11.  They show the multipliers from my table, and also show A8 as the sole single-channel format currently accessible.  It feels like WebGL 1 implementations should fast-track EXT_texture_rg as an extension, to get RED and RG formats that also upload directly on desktop (and on many mobile impls).  That would also pave the way for WebGL 2 to allow these formats; a hypothetical usage sketch follows.
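Purely as a hypothetical sketch (EXT_texture_rg is a real ES 2.0 extension, but no WebGL binding for it existed, so the extension string and the ext.RED_EXT token below are assumptions, not a shipped API):

  var ext = gl.getExtension('EXT_texture_rg');  // hypothetical; returns null in current browsers
  if (ext) {
    // One byte per texel, with a direct DXGI R8_UNORM mapping: no RGBA expansion.
    gl.texImage2D(gl.TEXTURE_2D, 0, ext.RED_EXT, width, height, 0,
                  ext.RED_EXT, gl.UNSIGNED_BYTE, grayPixels);
  }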