What happens if the instruction count exceeds what the GPU can handle? Will ANGLE kick over to SwiftShader?
Be careful what you wish for. I have taken to calling it "Unswiftshader", because when Chrome switched to it after dropping GPU support on Windows XP, the performance of the WebGL apps I was looking at dropped by about an order of magnitude.
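(For what it's worth, a page can usually tell which renderer it ended up on. Here is a minimal TypeScript sketch, assuming the browser exposes the WEBGL_debug_renderer_info extension; the exact renderer string is implementation-defined, so matching "SwiftShader" in it is a heuristic, not a guarantee.)

function isSoftwareRendered(gl: WebGLRenderingContext): boolean | null {
  // WEBGL_debug_renderer_info exposes the unmasked renderer string.
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  if (!ext) return null; // extension not available; renderer unknown
  const renderer = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) as string;
  // Heuristic: Chrome's software fallback reports SwiftShader here.
  return /swiftshader/i.test(renderer);
}

const gl = document.createElement("canvas").getContext("webgl");
if (gl) {
  console.log("software rendering:", isSoftwareRendered(gl));
}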
Regards
-Mark
Hi John,
No, the program will fail to link. Unlike desktop OpenGL, OpenGL ES explicitly allows a link to fail when hardware resources are exhausted.
The Direct3D 11 back-end of ANGLE raises the limits considerably, but it will take a while for it to reach Chrome Stable. Even then, you'll have to provide a fallback path for older systems.
People generally prefer to keep using the GPU, even if the shaders have to be simplified. Do you have a use case where you would rather keep the complex shaders and fall back to CPU-based rendering?
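To illustrate the fallback pattern, here is a minimal TypeScript sketch against the standard WebGL API; the shader sources complexFsSource and simpleFsSource in the usage comment are hypothetical placeholders for an application's own shaders.

function buildProgram(
  gl: WebGLRenderingContext,
  vsSource: string,
  fsSource: string
): WebGLProgram | null {
  const compile = (type: number, src: string): WebGLShader | null => {
    const shader = gl.createShader(type);
    if (!shader) return null;
    gl.shaderSource(shader, src);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      console.warn(gl.getShaderInfoLog(shader));
      gl.deleteShader(shader);
      return null;
    }
    return shader;
  };

  const vs = compile(gl.VERTEX_SHADER, vsSource);
  const fs = compile(gl.FRAGMENT_SHADER, fsSource);
  if (!vs || !fs) return null;

  const program = gl.createProgram();
  if (!program) return null;
  gl.attachShader(program, vs);
  gl.attachShader(program, fs);
  gl.linkProgram(program);
  // Under OpenGL ES semantics, linking may fail simply because the
  // shader exceeds the hardware's instruction or resource limits.
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.warn(gl.getProgramInfoLog(program));
    gl.deleteProgram(program);
    return null;
  }
  return program;
}

// Try the complex shader first, then fall back to a simplified one:
// const program = buildProgram(gl, vsSource, complexFsSource)
//             ?? buildProgram(gl, vsSource, simpleFsSource);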
Cheers,
Nicolas
Lead SwiftShader developer
On 2014/03/11 15:06, Nicolas Capens wrote:
> Hi Mark,
> SwiftShader is very efficient, as a piece of software. But it can't make your CPU perform more operations than what the CPU hardware is capable of. Likewise, the GPU driver software isn't the primary factor in GPU rendering performance; the GPU hardware is. So please make the correct distinction.
Yes, I do know that. I have implemented a software renderer or two.
My leaning toward "unswift" comes from the observation that WebGL Aquarium in Chrome runs at only 4 fps on my 1.86 GHz dual-core desktop machine at a window size of approximately 1024x768, while with our s/w OpenGL ES 1.1 implementation, Quake levels played using all of the pixels on a dual core 1024x768 easily ran at 30 fps.
Yes, I know my desktop is old and, by today's standards, low-powered; yes, I know ES 2 has to deal with calling user-supplied shaders; and yes, I know the scene is different. Still, it seems like a large difference.
When Chrome was using my GPU, WebGL Aquarium ran at 60fps.
> Future processors will most likely have unified CPU and GPU cores, so software and hardware rendering coincide.
I don't understand what you mean by "coincide" in this comment.
Performance-wise I think there will still be a big difference.
> ES 1.1 implementation Quake levels played using all of the pixels on a dual core 1024x768 easily ran at 30fps.
Correction: that should read "dual-core 1024x768 *laptop* (with similar CPU speed)".