Swiftshader

John Davis

Mar 9, 2014, 3:07:55 PM
to anglep...@googlegroups.com
What happens if the instruction count exceeds what the gpu can handle?  Will ANGLE kick over to Swiftshader?

Mark Callow

Mar 10, 2014, 1:30:03 AM
to jda...@pcprogramming.com, anglep...@googlegroups.com

On 2014/03/10 4:07, John Davis wrote:
What happens if the instruction count exceeds what the gpu can handle?  Will ANGLE kick over to Swiftshader?

Be careful what you wish for. I have taken to calling it Unswiftshader because, when Chrome switched to it after dropping GPU support on Windows XP, the performance of the WebGL apps I was looking at dropped by about an order of magnitude.

Regards

    -Mark

--

NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies.

Nicolas Capens

Mar 10, 2014, 10:51:26 AM
to jda...@pcprogramming.com, anglep...@googlegroups.com

Hi John,

No, the program will fail to link. Unlike desktop OpenGL, OpenGL ES allows a program to fail to link when it runs out of hardware resources.

The Direct3D 11 back-end of ANGLE increases the limits quite a bit, but it will take a while for it to come to Chrome Stable. Even then you'll have to provide a fallback path for older systems.

People generally prefer to keep using the GPU, even if the shaders have to be simplified. Do you have a use case where you would prefer to keep using complex shaders but fall back to CPU-based rendering?
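
For illustration, here is a minimal TypeScript/WebGL sketch of such a fallback path; the gl context and the shader source strings are assumed placeholders, not anything ANGLE itself provides:

// Assumed to exist elsewhere in the application: a GL context and shader sources.
declare const gl: WebGLRenderingContext;
declare const vsSource: string;   // vertex shader source
declare const complexFS: string;  // the complex fragment shader
declare const simpleFS: string;   // a simplified fallback fragment shader

function buildProgram(ctx: WebGLRenderingContext, vs: string, fs: string): WebGLProgram | null {
  const compile = (type: number, source: string): WebGLShader => {
    const shader = ctx.createShader(type)!;
    ctx.shaderSource(shader, source);
    ctx.compileShader(shader);
    return shader;
  };
  const program = ctx.createProgram()!;
  ctx.attachShader(program, compile(ctx.VERTEX_SHADER, vs));
  ctx.attachShader(program, compile(ctx.FRAGMENT_SHADER, fs));
  ctx.linkProgram(program);
  // OpenGL ES allows the link to fail when the shaders exceed hardware resource
  // limits, so always check LINK_STATUS instead of assuming success.
  if (!ctx.getProgramParameter(program, ctx.LINK_STATUS)) {
    console.warn(ctx.getProgramInfoLog(program));
    ctx.deleteProgram(program);
    return null;
  }
  return program;
}

// Try the complex shader first; fall back to the simplified one if linking fails.
const program = buildProgram(gl, vsSource, complexFS) ?? buildProgram(gl, vsSource, simpleFS);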

Cheers,
Nicolas
Lead SwiftShader developer


On Mar 9, 2014 3:19 PM, "John Davis" <jda...@pcprogramming.com> wrote:
What happens if the instruction count exceeds what the gpu can handle?  Will ANGLE kick over to Swiftshader?


John Davis

Mar 10, 2014, 12:36:54 PM
to Nicolas Capens, anglep...@googlegroups.com
Yes, render to texture with very complex shaders.

Nicolas Capens

Mar 11, 2014, 2:06:16 AM
to callo...@artspark.co.jp, John Davis, anglep...@googlegroups.com
Hi Mark,

SwiftShader is very efficient, as a piece of software. But it can't make your CPU perform more operations than what the CPU hardware is capable of. Likewise the GPU driver software isn't the primary factor in GPU rendering performance. The GPU hardware is. So please make the correct distinction.

Future processors will most likely have unified CPU and GPU cores, so software and hardware rendering coincide.

Cheers,
Nicolas


Nicolas Capens

Mar 11, 2014, 2:11:53 AM
to John Davis, anglep...@googlegroups.com
Hi John,

Automatically switching to SwiftShader in the middle of a frame won't be possible, but in theory we could allow creating a new context implemented by SwiftShader. We've recently been working on adding control over creating a D3D11 or D3D9 version of an ANGLE implementation, which we could extend. That said, this extension isn't exposed in WebGL, and I'm doubtful it ever will be.

I'll have a discussion about it with my peers though...

Regards,
Nicolas

Mark Callow

Mar 11, 2014, 2:57:48 AM
to nicolas...@gmail.com, John Davis, anglep...@googlegroups.com

On 2014/03/11 15:06, Nicolas Capens wrote:
Hi Mark,

SwiftShader is very efficient, as a piece of software. But it can't make your CPU perform more operations than what the CPU hardware is capable of. Likewise the GPU driver software isn't the primary factor in GPU rendering performance. The GPU hardware is. So please make the correct distinction.

Yes I do know that. I have implemented a software renderer or two.

My leaning to "unswift" comes from the observation that WebGL Aquarium in Chrome runs at only 4 fps on my 1.86GHz dual-core desktop machine at a window size of approximately 1024 x 768, while with our s/w OpenGL ES 1.1 implementation Quake levels played using all of the pixels on a dual core 1024x768 easily ran at 30fps.

Yes, I know my desktop is old and, by today's standards, low-powered; yes, I know ES 2 has to deal with calling user-supplied shaders; and yes, I know the scene is different. But still, it seems like a large difference.

When Chrome was using my GPU, WebGL Aquarium ran at 60fps.


Future processors will most likely have unified CPU and GPU cores, so software and hardware rendering coincide.

I don't understand what you mean by "coincide" in this comment. Performance-wise I think there will still be a big difference.

John Davis

Mar 11, 2014, 4:51:45 AM
to Nicolas Capens, anglep...@googlegroups.com
Having a WebGL SwiftShader CPU-based context which could be invoked from JavaScript would be awesome, for complex shaders used to generate textures beyond what a GPU can handle because of instruction count. This could hold us over until OpenCL is released.

John Davis

Mar 11, 2014, 4:55:01 AM
to Nicolas Capens, anglep...@googlegroups.com
Or perhaps we could use WebGL extensions to indicate DX9, DX11, or SwiftShader?

Mark Callow

Mar 11, 2014, 5:15:38 AM
to nicolas...@gmail.com, John Davis, anglep...@googlegroups.com

On 2014/03/11 15:57, Mark Callow wrote:

 ES 1.1 implementation Quake levels played using all of the pixels on a dual core 1024x768 easily ran at 30fps.

-> dual-core 1024x768 *laptop* (with similar cpu speed)

Nicolas Capens

Mar 11, 2014, 12:38:19 PM
to Mark Callow, John Davis, anglep...@googlegroups.com
On Tue, Mar 11, 2014 at 2:57 AM, Mark Callow <callo...@artspark.co.jp> wrote:

On 2014/03/11 15:06, Nicolas Capens wrote:
Hi Mark,

SwiftShader is very efficient, as a piece of software. But it can't make your CPU perform more operations than what the CPU hardware is capable of. Likewise the GPU driver software isn't the primary factor in GPU rendering performance. The GPU hardware is. So please make the correct distinction.
Yes I do know that. I have implemented a software renderer or two.

My leaning to "unswift" comes from the observation that WebGL Aquarium in Chrome runs at only 4 fps on my 1.86GHz dual core desktop machine at a window size of approximately 1024 x 768 while with our s/w OpenGL ES 1.1 implementation Quake levels played using all of the pixels on a dual core 1024x768 easily ran at 30fps.

For reference, SwiftShader runs things like Max Payne 2 at 30+ FPS at such a resolution on such a CPU. 

Yes, I know my desktop is old and, by today's standards, low-powered; yes, I know ES 2 has to deal with calling user-supplied shaders; and yes, I know the scene is different. But still, it seems like a large difference.

When Chrome was using my GPU, WebGL Aquarium ran at 60fps.

I know it might be surprising, but the Aquarium demo is actually far more complex than something like Quake or Max Payne. It uses a cube map for the environment, performs bumpy diffraction on the aquarium glass, and renders several layers of god rays; all the fish and the seaweed use multiple matrices and attributes for animation; the shaders use a fair number of trigonometric functions; and to top it all off, it uses anti-aliasing.


Future processors will most likely have unified CPU and GPU cores, so software and hardware rendering coincide.
I don't understand what you mean by "coincide" in this comment. Performance-wise I think there will still be a big difference.

What I mean is that SwiftShader is just an implementation of OpenGL which executes software on the CPU, while graphics drivers are implementations of OpenGL which execute software on the GPU. When the GPU and CPU cores unify into one, the distinction between what some call 'software rendering' and 'hardware rendering', both of which consist of a layer of software and hardware to execute it, will disappear entirely.

Performance won't be an issue. AVX-512 will sooner or later make it into CPU cores, and it offers no less than 8 times the floating-point computing power of what SwiftShader currently uses (up to SSE4). On top of that, it will feature a fast gather implementation to load multiple elements from memory in parallel. And finally, when the integrated GPU is replaced by more unified cores, peak performance doubles once again.

It may take many more years for this leap to happen, but it is bound to happen due to the scaling nature of computing power vs. bandwidth.

Cheers,
Nicolas

Nicolas Capens

Mar 11, 2014, 12:51:05 PM
to John Davis, anglep...@googlegroups.com
I think the 'inverse' of the failIfMajorPerformanceCaveat context creation flag would be feasible: something like veryHighShaderInstructionLimit, which would demand either a D3D11-based implementation or SwiftShader. I'll discuss it higher up. Could you describe your use case in more detail so that I can convince people it's worth the effort? Thanks.
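
For clarity, a rough sketch of what that could look like at context creation; failIfMajorPerformanceCaveat is a real WebGL context attribute, while veryHighShaderInstructionLimit is only the hypothetical flag discussed above:

const canvas = document.createElement("canvas");

// failIfMajorPerformanceCaveat: when true, rejects contexts that would be backed
// by a slow (software) renderer. The flag proposed above would express the
// opposite preference and does not exist in any spec.
const gl = canvas.getContext("webgl", {
  failIfMajorPerformanceCaveat: false,
  // veryHighShaderInstructionLimit: true,  // hypothetical, not part of WebGL
});

if (gl === null) {
  // Context creation can still fail outright, so keep a non-WebGL code path.
}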

John Davis

Mar 11, 2014, 12:59:22 PM
to Nicolas Capens, anglep...@googlegroups.com
The use case is render-to-texture instead of downloading textures. In a nutshell: generate procedural textures on the fly, with a single pass for a given patch producing a static texture. While this could be done in JavaScript, it would be slow due to the lack of SIMD.
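
To make that concrete, a minimal sketch of such a bake pass; proceduralProgram and drawFullScreenQuad are placeholders for the application's own shader and geometry code:

// Placeholder for the application's own full-screen-quad draw call.
declare function drawFullScreenQuad(gl: WebGLRenderingContext): void;

// One-time pass: run an expensive procedural shader into a texture attached to a
// framebuffer, then reuse that texture as a static input for normal rendering.
function bakeProceduralTexture(gl: WebGLRenderingContext, size: number,
                               proceduralProgram: WebGLProgram): WebGLTexture {
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);

  gl.viewport(0, 0, size, size);
  gl.useProgram(proceduralProgram);
  drawFullScreenQuad(gl);  // the shader does all the work; the geometry is trivial

  // Unbind so later draws go to the default framebuffer; the texture is now static.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.deleteFramebuffer(fbo);
  return texture;
}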