Detecting when WebGL is out of memory


Ivan Kuckir

Aug 28, 2017, 4:12:16 AM
to WebGL Dev List
Hi guys,

What is the correct way of detecting that WebGL is out of memory? Is an error thrown by JS, or should I check the WebGL error state after every call?

My app needs as much memory as possible. I know there is a constant GLenum OUT_OF_MEMORY = 0x0505;, but I don't know how it should be used.

Alecazam

Aug 29, 2017, 9:02:58 AM
to WebGL Dev List
This is the worst part of old GL, but WebGL has no alternative. Most of the create calls that could return null just allocate an id. Calling gl.getError() until it returns 0 and manually deciphering the cryptic error codes is the only way, and calling it forces the CPU and GPU to synchronize, causing stalls. This is one big reason for GL to go away. DirectX 11 eliminated error codes from most of its functions, since you are really just building command streams that get executed in another thread/process.
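
A minimal sketch of that polling approach (the texture size and the fallback are placeholders, not a recommendation):

 function tryAllocateTexture(gl, width, height) {
   const tex = gl.createTexture();
   gl.bindTexture(gl.TEXTURE_2D, tex);
   gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                 gl.RGBA, gl.UNSIGNED_BYTE, null);

   // getError() returns one code per call, so drain the queue until NO_ERROR (0).
   let err, oom = false;
   while ((err = gl.getError()) !== gl.NO_ERROR) {
     if (err === gl.OUT_OF_MEMORY) oom = true;   // 0x0505
     // other codes (INVALID_VALUE, INVALID_ENUM, ...) indicate different bugs
   }

   if (oom) {
     gl.deleteTexture(tex);   // free whatever partially succeeded
     return null;             // caller falls back to a smaller allocation
   }
   return tex;
 }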

Kenneth Russell

Aug 29, 2017, 6:01:43 PM
to WebGL Dev List
In theory, you can call gl.getError() after calls like texImage2D which allocate storage. However, note that Chrome won't currently deliver that error -- it will instead lose the WebGL context -- but we're working on changing that behavior.

Still, it is best if you put a cap on your application's memory usage. One idea that came up a while ago which has been successful for other WebGL applications is to base your application's maximum GPU memory usage on the resolution of the main screen (window.screen.availWidth * window.screen.availHeight * window.devicePixelRatio * window.devicePixelRatio). Assume you should use a maximum of a certain number of bytes per pixel on all platforms. That heuristic works pretty well to keep video memory usage under control, but still let the app use more memory on more powerful machines (which usually also have larger screens).
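
A rough sketch of that heuristic (the bytes-per-pixel constant is an arbitrary example, not a recommended value):

 const BYTES_PER_SCREEN_PIXEL = 16;   // assumed tuning constant

 function gpuMemoryBudgetBytes() {
   const dpr = window.devicePixelRatio || 1;
   const pixels = window.screen.availWidth * window.screen.availHeight * dpr * dpr;
   return pixels * BYTES_PER_SCREEN_PIXEL;   // cap all texture/buffer allocations at this
 }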

-Ken



Alecazam

Aug 29, 2017, 6:18:11 PM
to WebGL Dev List
I've found that caps based on a guess really don't work well, but they're our only way to avoid a lost context. Restoring from a lost context is more likely to fail, since the heap is already fragmented. So we set a cap of 1 GB and fail all GPU allocations after that. But most users have 1.5 GB to 8 GB or more on their cards these days, so we're artificially capping something the user would benefit from us not capping, just to avoid the lost context. Also, oversubscription of GPU memory works on many platforms. It would at least be good to have an extension to get at the GPU memory amount. Fingerprint-paranoid users could disable it, but at least progressive web apps could finally get data to set reasonable limits.
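
Enforcing a cap like that comes down to bookkeeping on the page's side. A sketch; the byte estimates are necessarily approximate, since drivers pad allocations and may keep extra copies:

 const GPU_BUDGET_BYTES = 1024 * 1024 * 1024;   // the 1 GB cap mentioned above
 let estimatedGpuBytes = 0;

 function reserveGpuBytes(bytes) {
   if (estimatedGpuBytes + bytes > GPU_BUDGET_BYTES) {
     return false;               // caller downscales or evicts something instead
   }
   estimatedGpuBytes += bytes;
   return true;
 }

 function releaseGpuBytes(bytes) {
   estimatedGpuBytes = Math.max(0, estimatedGpuBytes - bytes);
 }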

Kenneth Russell

Aug 29, 2017, 6:30:21 PM
to WebGL Dev List
There's unfortunately no reliable way to determine the amount of available "video memory" in OpenGL or OpenGL ES. Some vendor extensions exist, but the WebGL working group wasn't successful in encouraging development of a cross-GPU standard. Basing the memory usage on the screen resolution is the best heuristic we've come up with.



Alec Miller

Aug 29, 2017, 6:34:26 PM
to webgl-d...@googlegroups.com
I don't really care if ES exposes this, but Chrome displays these numbers for the desktop in chrome://gpu on both Mac and Windows. Those are the numbers to expose. If they're not available, then don't expose the extension and we can keep guessing as we do now. These are the calls for macOS; I'm sure DXGI has equivalents, IIRC.

 CGLDescribeRenderer(info, j, kCGLRPVideoMemoryMegabytes, &videoMemory);
 CGLDescribeRenderer(info, j, kCGLRPTextureMemoryMegabytes, &textureMemory);



Alecazam

Sep 15, 2017, 10:03:44 PM
to WebGL Dev List
So what's the way for an application to get at the driver's szDisplayMemoryEnglish, displayed in chrome://gpu? We really don't want to hardcode some guess here, and it sounds like the context is lost if we exceed it, so all the more reason to report this number.

szDisplayMemoryEnglish: 8267 MB

Jeff Dash

Sep 15, 2017, 10:26:06 PM
to webgl-d...@googlegroups.com
Chrome should fix their implementation so as not to lose context when allocation fails. Allocation is supposed to be fallible in this way. Exposing the max memory size doesn't help when memory is used by other instances.
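
In the meantime, a page that can hit OOM still needs the standard context-loss handling so it can recover when the context is dropped. A minimal sketch (canvas is the app's canvas element; stopRenderLoop, startRenderLoop, and reinitGlResources are hypothetical application hooks):

 canvas.addEventListener('webglcontextlost', (e) => {
   e.preventDefault();     // allow the browser to restore the context later
   stopRenderLoop();       // stop issuing GL calls against the dead context
 });

 canvas.addEventListener('webglcontextrestored', () => {
   reinitGlResources(gl);  // every previous GL object is gone and must be re-created
   startRenderLoop();
 });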


Alec Miller

Sep 15, 2017, 10:30:25 PM
to webgl-d...@googlegroups.com
Why not do both? We're moving to WebVR and other uses of the GPU that really shouldn't have to guess at the upper limits. And yes, if it's exhausted or oversubscribed, or whatever else causes a failure, then we'll have to handle the out-of-memory regardless. You'd like to know the physical core count, the physical memory amount (which is also shared by multiple processes, but still useful), and the GPU memory.
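
(Of those three, only the core count is exposed to a page today:

 const cores = navigator.hardwareConcurrency || 4;   // logical cores, with a fallback guess

There's no standard query for system RAM or GPU memory.)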


Jeff Dash

Sep 15, 2017, 11:19:41 PM
to webgl-d...@googlegroups.com
It's fine to ask for both, but not losing context on OOM is much more useful than knowing the max vram limit. Making assumptions about available vram based on max vram is not recommended. It's not flat useless, but it's a rather poor signal, and creates false confidence in stability.

What I'm trying to say is don't ask for a band-aid when your arm has fallen off. A max vram query is nice-to-have, but robust OOM behavior should be a requirement for creating good experiences.

WebVR is unlikely to materially change how useful this is, or possibly would have less of a problem with OOM due to perf constraints. (users will either have faster-therefore-bigger cards or will be reducing settings to compensate regardless)

I believe that games in general largely consume memory based on their graphics settings anyway, but I would be interested to learn of any heuristics in common use.



Alecazam

Sep 16, 2017, 2:57:17 AM
to WebGL Dev List
I worked on 8 titles at EA, and the first thing we established was the GPU memory amount. One approach was to allocate render targets until memory was exhausted, but that's not exactly being a good steward of the GPU. DX9 would truncate the actual bytes to some multiple of 1 MB. On my Mac, I have a 2 GB GPU, but it can oversubscribe to 4 or 6 GB. I'm not sure about the Intel part with 1.5 GB. It looks like chrome://gpu doesn't even bother to gather or report the memory numbers there (snippet included in an earlier post), but it does on Windows.

I want the physical memory on the GPU (or shared memory in the case of Intel). That's a reasonable amount to target, then oversubscribe until you hit out-of-memory and fall back.

I'm not writing a game. I'm writing a design tool, and user content can vary in complexity. It sounds like the recommended approach, then, is not to use any limits: just allocate with abandon until out of memory (at least on Firefox/Safari/Edge), or use a fixed guess and hope that you don't lose your context (on Chrome). I regularly see systems with anywhere from 384 MB up to 8 GB, so should I limit to 384 MB? That's not a great user experience, nor is it future-proof as GPUs get more memory (or shared memory).


