WebGL texture creation performance


Edgar

Aug 26, 2015, 9:42:25 PM
to Chromium-dev
I need to upload a set of 50 relatively large images (5 megapixel ARGB, i.e. ~20 MB each uncompressed) to the GPU. Currently each call to texImage2D() takes ~170 ms, or roughly 118 MB/second of transfer to the GPU, which is much slower than I expected. This is on a laptop with a Core i7 CPU and an Nvidia Quadro K1100M GPU running Windows 7. Is this performance normal? Is there any way to improve it, e.g. by calling texImage2D in parallel somehow?
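
Roughly, what I'm timing is a single texImage2D call per image, along these lines (a simplified sketch; the setup code and names are placeholders, not the actual app code):

// Simplified sketch of one upload (WebGL 1; "image" is an already-loaded Image element).
var gl = canvas.getContext('webgl');
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
var t0 = performance.now();
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
console.log('texImage2D: ' + (performance.now() - t0) + ' ms');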

Thanks.

Kenneth Russell

Aug 26, 2015, 9:52:08 PM
to lightc...@gmail.com, Chromium-dev
Could you please provide a test case? Given the copying between
Chromium's renderer and GPU processes, it's perhaps not surprising that
this is slower than desired, but it's definitely something that should
be looked at and optimized further.

Edgar

Aug 27, 2015, 3:33:20 PM
to Chromium-dev, lightc...@gmail.com
Here's a quick test I created:
https://jsfiddle.net/bt3r2u1a/
Loading a 2000*1700 image took ~110ms on the same test system mentioned in my previous post.

Is there anything else we can do to improve the performance in the meantime?
Thanks.

Kenneth Russell

Aug 27, 2015, 4:36:32 PM
to Edgar Chen, Chromium-dev, Justin Novosad, Stephen White
On Thu, Aug 27, 2015 at 12:33 PM, Edgar <lightc...@gmail.com> wrote:
> Here's a quick test I created:
> https://jsfiddle.net/bt3r2u1a/
> Loading a 2000*1700 image took ~110ms on the same test system mentioned in
> my previous post.

It's not clear whether the time is being spent decompressing the PNG
or uploading the texture.

We should ship the ImageBitmap API, which would allow the
decompression to occur in the background rather than at texture
upload time. There are a few embarrassingly old bugs about this:
crbug.com/249384 , crbug.com/249382 .
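
Once it ships, the usage should look roughly like this (a sketch only,
assuming the PNG bytes are available as a Blob; the decode would then
happen inside createImageBitmap instead of on the first texImage2D
call):

// Sketch: decode off the main thread, then upload the decoded bitmap.
createImageBitmap(pngBlob).then(function(bitmap) {
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
});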

> Is there anything else we can do to improve the performance in the meantime?

Investigate using the "crunch" texture compressor and doing the
transcoding to one of the compressed texture formats in a web worker,
then transferring the compressed texture data to the main thread using
an ArrayBuffer and uploading using compressedTexImage2D. See
https://github.com/toji/texture-tester for an example. This should
eliminate a large portion of your texture upload cost from the main
thread, and is a portable technique across browsers.
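
A rough sketch of the main-thread half of that approach (assuming the
worker has already transcoded the crunch data to DXT1 and that the
S3TC extension is available; names are placeholders):

// Sketch: receive transcoded DXT1 data from a worker and upload it.
var ext = gl.getExtension('WEBGL_compressed_texture_s3tc');
worker.onmessage = function(e) {
  var msg = e.data;  // expected shape: { width, height, data: ArrayBuffer }
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB_S3TC_DXT1_EXT,
                          msg.width, msg.height, 0, new Uint8Array(msg.data));
};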

-Ken

Noel Gordon

Aug 27, 2015, 10:09:22 PM
to Chromium-dev, lightc...@gmail.com, ju...@chromium.org, senor...@chromium.org
On Friday, August 28, 2015 at 6:36:32 AM UTC+10, Kenneth Russell wrote:
> There are a few embarrassingly old bugs about this:
> crbug.com/249384 , crbug.com/249382

Indeed.  Also https://crbug.com/91208 re: gl.texImage2D upload slowness.

~noel

Edgar

Aug 28, 2015, 4:09:44 PM
to Chromium-dev, lightc...@gmail.com, ju...@chromium.org, senor...@chromium.org
Thanks Ken for the suggestions. I was under the impression that once the image onload event fires, the PNG assigned to the image src is fully decompressed, since I do see a corresponding jump in memory usage, roughly equivalent to the uncompressed footprint of the image, after the onload event. I noticed this while trying to find a way to keep a set of PNGs (still compressed) in memory (e.g. in ArrayBuffers), decompress one on demand (the image object's src property doesn't take ArrayBuffers, unfortunately), and upload it to the GPU.
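
(One way to get an ArrayBuffer into an image element is to go through a Blob object URL, sketched below, though the decode still runs through the normal image pipeline, so it doesn't really solve the on-demand decompression problem.)

// Sketch: turn compressed PNG bytes held in an ArrayBuffer into an <img>.
var blob = new Blob([pngArrayBuffer], { type: 'image/png' });
var url = URL.createObjectURL(blob);
var img = new Image();
img.onload = function() {
  URL.revokeObjectURL(url);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
};
img.src = url;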

In any case, I did try calling texImage2D directly with an ArrayBuffer (the same size as the uncompressed footprint of the test image), and it performed significantly better (~10 ms), so maybe the performance issue is related to those still-unresolved bugs you and Noel mentioned? Regarding compressedTexImage2D: that won't work for us, unfortunately, since the texture compression schemes are all lossy, and we need lossless because we are actually encoding data in the textures that will be used by the shader code.
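
For reference, the direct upload that came in at ~10 ms looks roughly like this (a sketch; width, height and the pixel buffer are placeholders for the real data):

// Sketch: upload raw RGBA pixels from a typed array (no PNG decode involved).
var pixels = new Uint8Array(width * height * 4);  // already-decoded RGBA data
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, pixels);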

Kenneth Russell

Sep 1, 2015, 7:50:32 PM
to Edgar Chen, Chromium-dev, Justin Novosad, Stephen White
Image decompression happens after the onload handler fires. In order
to handle web pages containing lots of large images, the browser needs
to decompress only the visible images, and aggressively unload those
that have scrolled past the viewport.

The ImageBitmap API will fix this problem. We should prioritize
finishing and shipping it.

In the meantime, you might want to look at png.js. That would give you
a portable, though slower, PNG decoding path that you could run on a
web worker. Then you can transfer the decompressed data back to the
main thread for upload via texImage2D.
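
A rough sketch of that handoff (decodePng() below stands in for
whatever decode call the library actually exposes, so treat the exact
API as an assumption):

// worker.js (sketch): decode the PNG off the main thread.
self.onmessage = function(e) {
  // Hypothetical helper; returns { width, height, rgba: Uint8Array }.
  var decoded = decodePng(e.data.pngBytes);
  self.postMessage(decoded, [decoded.rgba.buffer]);  // transfer, don't copy
};

// Main thread (sketch): upload the already-decoded pixels.
worker.onmessage = function(e) {
  var img = e.data;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, img.width, img.height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, img.rgba);
};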

-Ken