WebGL 1 extensions


Alecazam

unread,
Oct 31, 2014, 2:27:22 PM10/31/14
to webgl-d...@googlegroups.com
I'm a fan of desktop-based WebGL, so here are some extensions missing from the list.  Tojiro had mentioned sRGB support on Chrome Canary, which is one of those important extensions for color fidelity.

Since WebGL 1 just got enabled by Apple and implemented by Microsoft, and all the browsers are catching up to a fully implemented set of extensions (sRGB, MRT, etc.), it feels like waiting for WebGL 2 adoption across these platforms may be just that - waiting.  Extensions would let these vendors keep their WebGL 1 support while still providing access to the underlying GPU capabilities.

1.  Access to NV_texture_barrier (OSX) and EXT_shader_framebuffer_fetch (iOS/Nvidia).
       This is a critical extension for doing in-place buffer modifications without ping-ponging.  The two have subtle differences in their shader specifications (GLSL and GLSL/ES being quite inconsistent on many fronts).  DX has a way of doing this without modifying the shader source (binding the same texture as both src and dst).  If this extension is advertised, then WebGL shouldn't disallow the src/dst textures being the same (especially if GL is used as the back end).  A sketch of the ping-pong workaround this eliminates appears after this list.

2.  ARB_framebuffer_object
        This represents the ability to mix and match attachment sizes, e.g. a depth buffer bigger than the color buffer, so you can share depth across targets.  I'm not sure whether ES/WebGL enforce that depth/color be the same size, but that is not a requirement on any desktop hardware that I've used.

3.  Multisample texture creation/resolve.  
        Desktop has had this for quite some time.

4.  Geometry/Tessellation Shader access.
        Using a simple linked shader would avoid the need for SSO or constant buffers, so these might be a simpler target to expose.  Geometry shaders are obviously the oldest and most widely supported.
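
Re item 1, a minimal sketch of the ping-pong workaround WebGL 1 forces today (the fbo/tex pairs and pass count are assumed to be created elsewhere); the barrier extension would let a single texture be edited in place instead:

// Ping-pong: two FBO/texture pairs, because WebGL 1 forbids sampling a
// texture that is also the current render target.
let src = 0, dst = 1;                  // indices into the fbo[] / tex[] pairs
for (let pass = 0; pass < passCount; ++pass) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[dst]);
  gl.bindTexture(gl.TEXTURE_2D, tex[src]);
  gl.drawArrays(gl.TRIANGLES, 0, 6);   // full-screen post-process pass
  [src, dst] = [dst, src];             // swap roles for the next pass
}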



        

Andre Weissflog

unread,
Oct 31, 2014, 4:48:53 PM10/31/14
to webgl-d...@googlegroups.com
It would be good to see more WebGL1 extensions which lead towards WebGL2.  I think that even after WebGL2 is released we'll still be stuck with WebGL1 for a few years, at least on mobile; with extensions it's probably easier to support both the old and the new world.  On my personal wish list (ordered from highest priority to lowest):

- uniform buffer objects
- pixel buffer objects
- ETC2/EAC texture compression formats
- ES3's additional texture pixel formats

Cheers,
-Floh.

Alec Miller

unread,
Oct 31, 2014, 5:11:47 PM10/31/14
to webgl-d...@googlegroups.com
I second more texture formats.  The additional texture formats are really a big win for little change.  For example, ES1/2’s omission of RGBA16u as a viable format is really limiting for the desktop, but probably stems from the precision-specifier additions.  RGBA16f just doesn’t have enough precision, but it’s better than nothing.  I think exposing more of the desktop texture formats would be a big help to desktop WebGL.

UBOs typically need SSO (separate shader objects) as well, so it's probably best to wait on WebGL2 for this.  This is a major change to the way shaders work, and part of why ES/GL suffer compared to DX.  With SSO, GL and ES work more like the DX loose-linking model, but GL and ES still have the overly verbose location specifiers instead of the custom semantics that DX uses (and a forced main() name for the shader).  The main() requirement still makes it hard to store all shaders in one file and compile them from that unit.

DX has had a stronger, more versatile, and more readable shader model (along with semantics) from the beginning.  The linked shaders of old GL that ES adopted make it difficult to mix data from multiple stages, and make compiles more expensive when doing uniform reductions.  Mac/iOS currently have SSO support across all their active hardware.

Everything is moving to ASTC, which has similar compression characteristics to PVRTC, but hopefully there will be a public-domain compressor.  The only PVRTC compressor that I know of is closed-source and comes directly from Imagination.

cheers.


Mark Callow

unread,
Nov 4, 2014, 9:38:56 PM11/4/14
to webgl-d...@googlegroups.com
On 01/11/2014 03:27, Alecazam wrote:

I'm a fan of desktop-based WebGL, so here are some extensions missing from the list.  Tojiro had mentioned sRGB support on Chrome Canary, which is one of those important extensions for color fidelity.

Since WebGL 1 just got enabled by Apple and implemented by Microsoft, and all the browsers are catching up to a fully implemented set of extensions (sRGB, MRT, etc.), it feels like waiting for WebGL 2 adoption across these platforms may be just that - waiting.  Extensions would let these vendors keep their WebGL 1 support while still providing access to the underlying GPU capabilities.

...
This discussion should be on the public_webgl list.

I believe the various implementers should focus on completing their WebGL 2 implementations rather than adding more extensions to WebGL 1. Once WebGL 2 is released the only reasons to keep using WebGL 1 are

  1. Implementations on hardware that does not have OpenGL ES 3 level features.
  2. People who do not or cannot update their browsers.
Given the length of time OpenGL 3.3 (the base for ES 3) has been out, the hardware in no. 1 will be entirely mobile and embedded devices.  These OpenGL ES 2 devices will also be unable to run many of the proposed extensions.

Adding extensions does nothing to resolve the issues of those covered by no. 2.

I further suspect that adding this slate of extensions to a WebGL 1.0 implementation would take about the same amount of time as completing a WebGL 2 implementation, though as I am not an implementer, I can't say with certainty.

Regards

    -Mark

Alecazam

unread,
Nov 5, 2014, 5:37:50 PM11/5/14
to webgl-d...@googlegroups.com
I'm more concerned about exposing desktop GPU functionality.  The mobile side will either catch up or diversify dramatically.  GL Next/Mantle (desktop) and Metal (iOS) are good examples of where ES 2/3 no longer suffices to represent the underlying GPU architecture, and the vendors have opted to bypass the abstraction of GL and ES.  Granted you still have fallbacks to ES2/ES3, but mobile compute is currently only accessible to Metal apps.

In the meantime, there's a lot of GL 3.2+ and DX11+ functionality and texture formats that could easily be added.  We already test for 16f/32f support, so why not have RGBA16u and others as possible formats for desktop WebGL users?  I just don't think the extensions go far enough, but they would at least provide WebGL1 desktop systems with more of the underlying functionality for little more than constant additions to some switch statements.  That seems considerably simpler than a full WebGL2 implementation.  I'll take that too, but I just don't see widespread WebGL2 for quite some time.
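
For reference, this is how apps feature-test the float formats in WebGL 1 today; a hypothetical RGBA16u extension could be probed through the same getExtension mechanism:

// Shipping WebGL 1 extensions for 32f/16f texture support.
const gl = document.createElement('canvas').getContext('webgl');
const hasFloat = gl.getExtension('OES_texture_float') !== null;
const hasHalfFloat = gl.getExtension('OES_texture_half_float') !== null;

// A hypothetical RGBA16u extension would slot into the same mechanism:
// const hasRGBA16u = gl.getExtension('WEBGL_texture_rgba16u') !== null;
console.log('32f:', hasFloat, '16f:', hasHalfFloat);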

Mark Callow

unread,
Nov 5, 2014, 9:28:02 PM11/5/14
to webgl-d...@googlegroups.com

On 06/11/2014 07:37, Alecazam wrote:
Granted you still have fallbacks to ES2/ES3, but mobile compute is currently only accessible to Metal apps.

That is entirely Apple's choice, probably made as a way to push Metal. ES 3.1 has compute shaders.

...  That seems considerably simpler than a full WebGL2 implementation.  I'll take that too, but I just don't see widespread WebGL2 for quite some time.
ES 3.0 is backward compatible with ES 2.0, therefore implementing WebGL 2 consists of slightly modifying a WebGL 1 implementation (different context name, etc.) and then adding all the new features. That latter part is, as I said before, about the same amount of work as implementing extensions to add those same features to WebGL 1. For implementers using ANGLE, many of the requested extensions would require the same ANGLE work that is being done now to bring it to the ES 3.0 level. Others go beyond ES 3.0 and will be extensions for WebGL 2 as well.

Alecazam

unread,
Nov 5, 2014, 11:41:43 PM11/5/14
to webgl-d...@googlegroups.com
ES3 is a considerably different API and shader language, with SSO and constant buffers (UBOs) to name a few things, although it still has no spec for geometry or tessellation shaders, and WebGL 2 has selected a non-compute API.  ES3 has numerous layout specifiers and named fragment outputs that ES2 doesn't have, and these require separate shader generation.  ES3 is essentially a new shader language with integer textures, new texture constructs, and sampler and image objects.  The work to support all this in GL 3.2/4.1 and DX11 (ignoring DX9) is considerable, as is falling back to similar shaders in ES2.

Many of those features are also why DX11 is so different from DX9.  DX11 has a lot of immutable creation and immutable state objects, which further complicate the loose state model of DX9/GL/ES.  I've written engines against all of the mobile and desktop APIs, and GL and ES are the ones where porting shaders from one API version to the next is most difficult.  I can't even get ES 2 to compile if (x > 0) because the ES spec requires if (x > 0.0).  And the shader precision specifiers further complicate shader development and porting.
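
To make that concrete, here is a minimal compile check (the shader itself is a trivial stand-in; the int-vs-float literal behavior is per the GLSL ES spec, which has no implicit int-to-float conversion):

// "x > 0" is a compile error in GLSL ES; "x > 0.0" compiles.
const src =
    'precision mediump float;\n' +
    'uniform float x;\n' +
    'void main() {\n' +
    '  gl_FragColor = (x > 0.0) ? vec4(1.0) : vec4(0.0);\n' +  // not (x > 0)
    '}\n';
const shader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(shader, src);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(shader));
}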

On the desktop, there is a lot more similarity and feature parity now that Apple has finally moved to GL 3.2 and 4.1, and hopefully 4.3+.  NV_texture_barrier is available across all of their Intel/AMD/Nvidia hardware, yet there's no spec in WebGL 1 or 2 to access it.  Similarly, texture formats have a lot of correspondence, and DX11 has a rich set of well-described formats.  Perhaps Microsoft will lead the way and expose more of the desktop through WebGL, since they are DX11 clean.

Alecazam

unread,
Nov 7, 2014, 12:00:51 AM11/7/14
to webgl-d...@googlegroups.com
A few more extensions:

5.  Expose the vendor/device id.  The id's are absolute and consistent, and can help with table-based workarounds.  The strings supplied by WEBGL_debug_renderer_info are sadly not.

   VENDOR = 0x10de, DEVICE = 0x0fe9

6. glDrawElementsBaseVertex support, so that 16-bit indices can be used to draw into a single much larger VB without re-binding it.  This avoids the need to use 32-bit indices for most cases.  The ES3 spec left out this basic and useful functionality, so mobile shouldn't dictate an LCD set of features.  A hypothetical WebGL binding is sketched after the signature below.

glDrawElementsBaseVertex (GLenum mode, GLsizei count, GLenum type, const GLvoid *indices, GLint basevertex)
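
A sketch of what the WebGL 1 binding could look like; the extension and method names here are hypothetical, mirroring the GL entry point:

// Hypothetical extension exposing glDrawElementsBaseVertex.
const ext = gl.getExtension('WEBGL_draw_elements_base_vertex');  // hypothetical
const indexCount = 300;      // indices in this sub-mesh
const byteOffset = 0;        // offset into the bound 16-bit index buffer
const baseVertex = 4096;     // added to each index before vertex fetch
if (ext) {
  // Many sub-meshes share one large VB; the 16-bit indices stay in range
  // because baseVertex offsets them, with no buffer re-binding per draw.
  ext.drawElementsBaseVertexWEBGL(gl.TRIANGLES, indexCount,
                                  gl.UNSIGNED_SHORT, byteOffset, baseVertex);
}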


Kenneth Russell

unread,
Nov 7, 2014, 8:48:43 PM11/7/14
to webgl-d...@googlegroups.com
On Thu, Nov 6, 2014 at 9:00 PM, Alecazam <al...@figma.com> wrote:
> A few more extensions:
>
> 5. Expose the vendor/device id. The id's are absolute and
> consistent, and can help with table-based workarounds. The strings supplied
> by WEBGL_debug_renderer_info are sadly not.
>
> VENDOR = 0x10de, DEVICE= 0x0fe9

It would be possible to expose these when they're available. On
non-PCI devices (like Android phones, for example), all that's
available are the renderer and vendor strings.

Please feel free to put up a pull request against the
WEBGL_debug_renderer_info adding a couple of new enums (you can leave
them as ????) for making these queries. The various WebGL implementers
can comment on it more easily then.


> 6. glDrawElementsBaseVertex support, so that 16-bit indices can be used to
> draw into a single much larger vb without re-binding it. Avoids the need to
> use 32-bit indices for most cases. ES3 spec left out this basic and useful
> functionality, so mobile shouldn't dictate an LCD set of features.
>
> glDrawElementsBaseVertex (GLenum mode, GLsizei count, GLenum type, const
> GLvoid *indices, GLint basevertex)

It's absurd that this took so long to incorporate into the ES specs.
It's present in ES 3.1. I think it should be spec'ed, but as a WebGL
2.0 extension. There are problems with using an ES 3.0 or 3.1 context
to back a WebGL 1.0 implementation (the draw_buffers extension will be
broken on most hardware, for one thing, and there may be other
problems.) This will also offer a kick in the pants to browsers to get
WebGL 2.0 implementations out.

-Ken

Alecazam

unread,
Nov 8, 2014, 2:28:42 AM11/8/14
to webgl-d...@googlegroups.com
I'll write up a spec for these when I get the chance, so thanks for the suggestions.  I'd really like these to be WebGL 1.0 extensions.  DirectX 9/11 (and therefore ANGLE) and GL 2+ have had baseVertex support forever.  I really don't want to wait for Khronos to make a performant version of GL or ES.  "GL Next" may help, but again only on desktop or on Tegra K1.  These small extensions (and more texture formats) would help a lot of desktop WebGL implementations and possibilities.  The older mobile systems still can't handle the javascript and GPU load from simple apps under WebGL 1, and I've tested on recent ES3 devices.

Alecazam

unread,
Nov 8, 2014, 5:31:07 PM11/8/14
to webgl-d...@googlegroups.com
WEBGL_debug_renderer_info

This proposal recommends adding the following constants to get at the vendor/renderer id integers.  These are far more reliable for table-based black/white-listing, and for correlating crashes and usage with specific GPU implementations.  The vendor/renderer strings have many variations, even for the same GPU, and require complex parsing rules that can be thrown off as new vendor strings are created and rebranded (e.g. GeForce 4 and GeForce4 MX are drastically different GPUs with similar naming, the latter being a rebranded GeForce 2).
There are a small number of vendor id's in use, and many more renderer id's.  Id tables require updates as new id's are introduced, but vendors supply tables of newly introduced renderer id's.  The assumptions are that new cards are at least as capable as previous ones, and that black/whitelists are updated in subsequent revisions of an application.
about:gpu shows that the browser already records these id's (typically from calls to Direct3D on Windows, or from IOKit on OSX).  Mobile devices do not supply the PCI-based id's, but can still supply the string forms.

Two new enums, UNMASKED_VENDOR_ID_WEBGL and UNMASKED_RENDERER_ID_WEBGL, are accepted by the pname parameter of getParameter().

[NoInterfaceObject]
interface WEBGL_debug_renderer_info {
    const GLenum UNMASKED_VENDOR_ID_WEBGL   = 0x????;  /* integer, e.g. 0x10de (Nvidia) */
    const GLenum UNMASKED_RENDERER_ID_WEBGL = 0x????;  /* integer, e.g. 0x0fe9 */
};
  
  The following pname arguments return an integer describing some aspect of the underlying graphics driver:
  UNMASKED_VENDOR_ID_WEBGL        Returns the VENDOR_ID integer of the underlying graphics driver.
  UNMASKED_RENDERER_ID_WEBGL      Returns the RENDERER_ID integer of the underlying graphics driver.
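
Hypothetical usage, assuming the two enums above are adopted (their values are still unassigned):

// The extension name is the shipping one; the two *_ID_WEBGL enums are
// the proposed additions.
const dbg = gl.getExtension('WEBGL_debug_renderer_info');
const vendorId = gl.getParameter(dbg.UNMASKED_VENDOR_ID_WEBGL);      // proposed
const rendererId = gl.getParameter(dbg.UNMASKED_RENDERER_ID_WEBGL);  // proposed

// Table-based workarounds keyed on stable integers instead of parsed strings.
if (vendorId === 0x10de && rendererId === 0x0fe9) {
  // apply a workaround specific to this Nvidia part
}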

 

Alecazam

unread,
Nov 8, 2014, 11:19:26 PM11/8/14
to webgl-d...@googlegroups.com
These are the desktop extensions that I think WebGL 1 and 2 would benefit from.  Some of these are fleshed out more than others.  Again, the focus here is on long-standing GPU functionality on the desktop.  Once there is critical mass behind these, that's more encouragement for the mobile APIs to include support for them.

ARB_debug_output
Expose access to this important debugging extension on GL 4.3+ implementations.
GL_EXT_debug_label / GL_EXT_debug_marker
Expose access to these important debugging extensions on ES2+/GL2+ implementations.
WEBGL_max_buffer_memory

Expose GL 1.2-era/ES3+ constants to WebGL 1.  These provide a recommended maximum count of vertex data and index data for best performance of glDrawRangeElements.

Two new enums, MAX_ELEMENTS_VERTICES and MAX_ELEMENTS_INDICES, are accepted by the pname parameter of getParameter().

const GLenum MAX_ELEMENTS_VERTICES =         0x80E8;
const GLenum MAX_ELEMENTS_INDICES  =         0x80E9;
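
Hypothetical usage, assuming the enums are hung off an extension object:

// Extension name is hypothetical; the enum values are the GL ones above.
const ext = gl.getExtension('WEBGL_max_buffer_memory');  // hypothetical
if (ext) {
  const maxVertices = gl.getParameter(ext.MAX_ELEMENTS_VERTICES);
  const maxIndices = gl.getParameter(ext.MAX_ELEMENTS_INDICES);
  // Split batches so drawElements calls stay at or below these counts.
}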

  
WEBGL_max_texture_memory

Expose the adapter's maximum texture memory as a constant in WebGL 1.  This gives a limit for balancing texture and buffer memory allocations.  A 16K x 16K texture can fall below the dimension limits but exceed this maximum because of its format (e.g. RGBA32f).  This helps the caller understand when a memory allocation will never succeed vs. when freeing/re-compaction of the GPU heap may help recover from a GL_OUT_OF_MEMORY error.

One new enum, WEBGL_TEXTURE_MEMORY, is accepted by the pname parameter of getParameter().

#define WEBGL_TEXTURE_MEMORY              0x????

Calls to implement this extension:

DX11:   DXGI_ADAPTER_DESC adapterDesc;
        pDXGIAdapter->GetDesc(&adapterDesc);
        adapterDesc.DedicatedVideoMemory
OSX:    CGLDescribeRenderer(..., kCGLRPTextureMemoryMegabytes, &textureMemory);
Linux:  system call?
Mobile: none; uniform address space, access to RAM size?
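
A worked example of the intended use, assuming the proposed enum; a 16K x 16K RGBA32f texture needs 16384 * 16384 * 16 bytes = 4 GiB, which can exceed adapter memory while still passing the MAX_TEXTURE_SIZE check:

// Hypothetical query using the proposed enum.
const ext = gl.getExtension('WEBGL_max_texture_memory');  // hypothetical
const textureMemoryBytes = gl.getParameter(ext.WEBGL_TEXTURE_MEMORY);

const requestBytes = 16384 * 16384 * 16;  // 16K x 16K RGBA32f = 4 GiB
if (requestBytes > textureMemoryBytes) {
  // This allocation can never succeed: reduce the size or format up front
  // instead of retrying after GL_OUT_OF_MEMORY.
}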
ARB_framebuffer_object

This extension allows sharing depth targets/textures that are larger than or the same dimensions as the color targets/textures.  Sharing depth can be a large savings, since a single depth target or texture serves several color targets.  The caller must ensure that FBO resizes keep the depth dimensions at least as large as any color target shared with that depth.

Can depth be smaller?  One could move a tiled depth target/texture underneath a color target and only render a scissored area of it.

The extension also permits the following:

         * Attachments with different width and height (mixed
           dimensions)

         * Color attachments with different formats (mixed
           formats), e.g. different bit depths 16f + 32f + 8u on MRT
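
A sketch of the shared-depth setup this would permit; the calls are standard WebGL 1 API, but today the completeness check below fails with mixed sizes, so the proposed extension would have to be enabled:

// One 2048x2048 depth renderbuffer shared with a smaller color target.
const depth = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, 2048, 2048);

const color = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, color);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1024, 1024, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);

const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, color, 0);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                           gl.RENDERBUFFER, depth);

// Currently FRAMEBUFFER_INCOMPLETE_DIMENSIONS in WebGL 1; complete under
// the proposal.
const ok = gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE;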

WEBGL_texture_barrier

This extension allows setting the color target to the same texture as one bound for reading.  This relaxes the usual WebGL constraint that the src and dst textures not be the same.  It is incompatible with the MRT extensions.  The barrier is called whenever subsequent draw operations may overlap pixels that were previously drawn; this allows in-place edits and post-processing of textures without the need to ping-pong buffers.
It simply exposes NV_texture_barrier, which despite the name is actually available on most GL3+ GPUs and on Nvidia mobile GPUs.
EXT_shader_framebuffer_fetch on mobile has shader implications that differ between ES2/ES3 and this call.  NV_texture_barrier does not affect shader source.  Mobile devices without this extension can continue to ping-pong buffers as before.
gl.ext.textureBarrier();  -> simply calls glTextureBarrierNV() where available.
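
A hypothetical usage sketch of the proposal (the extension name and method are proposed here, not shipping API):

// In-place post-processing without a ping-pong copy.
const ext = gl.getExtension('WEBGL_texture_barrier');  // proposed
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);  // fbo's color attachment is tex
gl.bindTexture(gl.TEXTURE_2D, tex);       // tex is also sampled by the shader

// ... draw pass 1: writes some texels of tex ...
ext.textureBarrier();  // flush texture caches between overlapping passes
// ... draw pass 2: reads the texels pass 1 wrote and overwrites them ...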
  

WEBGL_geometry_shader

This extension allows access to geometry shaders as part of linked-shader creation.  It avoids stream-out, but allows the shader stage to compile and execute with the various adjacency specifications.  The implementation can come from EXT_geometry_shader4 (a GL 2.0-era specification), ARB_geometry_shader4, or ARB_gpu_shader5.

Accepted by the mode parameter of drawArrays and drawElements:
        LINES_ADJACENCY_EXT                              0xA
        LINE_STRIP_ADJACENCY_EXT                         0xB
        TRIANGLES_ADJACENCY_EXT                          0xC
        TRIANGLE_STRIP_ADJACENCY_EXT                     0xD

Accepted by the type parameter of createShader:
        GEOMETRY_SHADER                                  0x8DD9

GLSL modifications:

layout(input_primitive) in;
layout(output_primitive, max_vertices = vert_count) out;

in gl_PerVertex
{
    vec4  gl_Position;
    float gl_PointSize;
    float gl_ClipDistance[];
} gl_in[];

in int gl_PrimitiveIDIn;  // face id

Leave these out:

layout(invocations = num_instances) in;
in int gl_InvocationID;    // requires GLSL 4.0 or ARB_gpu_shader5
out int gl_Layer;          // layered rendering to cube maps
out int gl_ViewportIndex;  // ARB_viewport_array, available on 3.3 HW from Nvidia/AMD

Open question: how should the input/output primitive types and max_vertices be exposed through the WebGL API?
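
A sketch of how program creation might look under this proposal; the extension name is illustrative only, and vs/fs/program/geometrySource/indexCount are the usual compiled stages, program object, shader string, and draw count assumed to exist already:

// Hypothetical binding: a geometry stage compiled and linked alongside
// the usual vertex and fragment stages.
const ext = gl.getExtension('WEBGL_geometry_shader');  // proposed, not shipping
const gs = gl.createShader(ext.GEOMETRY_SHADER);
gl.shaderSource(gs, geometrySource);  // GLSL using the modifications above
gl.compileShader(gs);

gl.attachShader(program, vs);
gl.attachShader(program, gs);
gl.attachShader(program, fs);
gl.linkProgram(program);

// Adjacency primitives feed the geometry stage extra neighbor vertices.
gl.drawElements(ext.TRIANGLES_ADJACENCY_EXT, indexCount, gl.UNSIGNED_SHORT, 0);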


Alecazam

unread,
Nov 14, 2014, 7:22:26 PM11/14/14
to webgl-d...@googlegroups.com
ARB_framebuffer_object extension justification:

Here's a recent attempt to share a larger depth buffer with a smaller color buffer on the desktop:

    WebGL: checkFramebufferStatus: attachments do not have the same dimensions 

This has big memory and performance implications, by forcing extra depth buffers to be allocated and swapped.  This has absolutely been supported on desktop GL via ARB_framebuffer_object for many years now, but WebGL enforces a constraint of limited mobile devices in its asserts, inherited from EXT_framebuffer_object's limitations.  These should be relaxed when ARB_framebuffer_object is implemented, enabled, and present on the hardware.

This would also allow mixing MRT surfaces of different bit depths for WEBGL_draw_buffers.  These are good optimizations for WebGL 1 and WebGL 2.
 

Mark Callow

unread,
Nov 17, 2014, 11:10:49 PM11/17/14
to webgl-d...@googlegroups.com

On 15/11/2014 09:22, Alecazam wrote:
ARB_framebuffer_object extension justification:

...


These are good optimizations for WebGL 1 and WebGL 2.

WebGL 2 (OpenGL ES 3) already supports mixed size attachments. IIRC the only requirement is that the attached depth buffer be at least as big as the color buffer. So an extension is only needed for WebGL 1.

Alecazam

unread,
Nov 18, 2014, 3:06:49 PM11/18/14
to webgl-d...@googlegroups.com
Good to know.  DX has done this since its inception, since the target memory was owned.  GL was originally designed for a shared frame buffer that backed all of the windows; I remember when the SGI Iris cut from 24-bit to 12-bit when double-buffering was enabled.  I only bring up DX since ANGLE should be able to support all of the extensions that I mentioned.  Consoles make good use of shared buffers, and Intel even has EDRAM now that would benefit from shared depth targets.

One extension that I forgot to list is fences.  These are pretty critical to performant graphics, and they have a small footprint call/code-wise.  With them, one could start to track when buffer and texture usage has completed.  So many APIs leave these out, but they are critical to reusing resources effectively.  A sketch of the pattern follows.
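
A sketch of the fence pattern, using the WebGL 2 sync API as the model for what a WebGL 1 fence extension could expose (fenceSync/clientWaitSync are the real WebGL 2 calls):

// Insert a fence after submitting work, then poll it without blocking.
const sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
gl.flush();  // ensure the fence is actually submitted

function pollFence() {
  const status = gl.clientWaitSync(sync, 0, 0);  // zero timeout: just test
  if (status === gl.ALREADY_SIGNALED || status === gl.CONDITION_SATISFIED) {
    gl.deleteSync(sync);
    // The GPU is done with everything before the fence: safe to reuse or
    // orphan the buffers and textures referenced by that work.
  } else {
    requestAnimationFrame(pollFence);  // check again next frame
  }
}
pollFence();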

Alecazam

unread,
Nov 25, 2014, 1:15:36 PM11/25/14
to webgl-d...@googlegroups.com
One last extension sorely needed is async readback.  This and glGetError are the biggest causes of stalls in GL and ES.  The browsers appear to be using IOSurface for the canvas, so perhaps that is one approach.

Another approach within GL is to use a PixelBufferObject (basically a linear buffer): read into that, then wait/test a fence, and/or stall on the lock of the PBO sometime later.  Fences and PBOs have been around on the desktop forever and are cross-platform, but that's obviously two extensions to WebGL 1.  DX11 has system-memory staging textures that work similarly.  I hope this at least makes the WebGL 2 cut, but it's a very needed WebGL 1 extension.
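
For illustration, here is the PBO-plus-fence readback pattern as it looks with the WebGL 2 API; a WebGL 1 extension along these lines would expose the same two pieces:

// readPixels into a PIXEL_PACK_BUFFER returns without stalling; a fence
// tells us when the copy has finished so the later read is cheap.
const width = 256, height = 256;
const pbo = gl.createBuffer();
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, pbo);
gl.bufferData(gl.PIXEL_PACK_BUFFER, width * height * 4, gl.STREAM_READ);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, 0);  // to PBO

const sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
gl.flush();

function tryRead() {
  const s = gl.clientWaitSync(sync, 0, 0);
  if (s === gl.ALREADY_SIGNALED || s === gl.CONDITION_SATISFIED) {
    gl.deleteSync(sync);
    const pixels = new Uint8Array(width * height * 4);
    gl.getBufferSubData(gl.PIXEL_PACK_BUFFER, 0, pixels);  // no GPU stall
  } else {
    requestAnimationFrame(tryRead);
  }
}
tryRead();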
