I have a 2D HTML5 game engine (www.scirra.com) and really want to detect whether WebGL is going to render with Chrome 18's 'SwiftShader' software renderer. If so, we would much prefer to fall back to the ordinary canvas 2D context, as happens in other browsers. Masses of people out there have low-end machines with weak CPUs that turn the game into a slideshow under software rendering, and I think in many cases the 2D canvas would have been hardware accelerated. However, WebGL context creation never fails in Chrome, and there is no obvious way to detect SwiftShader.
I could try taking into account things like the maximum texture size or the other MAX_* properties, but how do I know they don't vary between machines even with SwiftShader? And since I guess SwiftShader aims to mimic common hardware, that approach might still produce a lot of false positives.
I don't want to have to add in-game switches or a user setting, because how many users care about that? If the game is slow they'll just quit and most likely not search for a solution. "This game sucks, I'll go somewhere else." I think only a minority of users would bother reading instructions like "by the way, if this game is slow, try changing this setting to 'canvas 2D'..."
Note the addition of WEBKIT_WEBGL_compressed_textures. Some quick research indicates that this may or may not be widely supported. See this support table - both GL_EXT_texture_compression_s3tc and GL_ARB_texture_compression appear widely supported on desktop cards. Also the table only seems to list reasonably old models, so I could hazard a guess that all modern desktop graphics cards would support WEBKIT_WEBGL_compressed_textures... therefore my detection criteria for SwiftShader would be: the WebGL context creates successfully, but WEBKIT_WEBGL_compressed_textures is not supported.
Of course, if SwiftShader adds compressed texture support in future, this breaks again. But I can't see the advantage of compressed textures with a software renderer! Also, it will still get lots of false positives if there are many real working video cards out there that don't support WEBKIT_WEBGL_compressed_textures!
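A more direct signal, where browsers expose it, is the WEBGL_debug_renderer_info extension, which unmasks the renderer string; later WebGL implementations also added a failIfMajorPerformanceCaveat context attribute that makes context creation fail outright when only a software path is available. A minimal sketch of the string-matching approach, with the classification split into a pure function (the substrings matched here are illustrative, not an exhaustive list):

```javascript
// Heuristic: classify a WebGL renderer string as a software renderer.
// The string itself would come from the WEBGL_debug_renderer_info
// extension (availability varies by browser and version):
//   const ext = gl.getExtension('WEBGL_debug_renderer_info');
//   const renderer = ext && gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
function isSoftwareRenderer(renderer) {
  if (!renderer) return false; // unknown: assume hardware rather than punish it
  return /swiftshader|software|llvmpipe/i.test(renderer);
}

// Example decision: fall back to the 2D canvas when software rendering
// is detected (the context names mirror HTMLCanvasElement.getContext).
function chooseContext(renderer) {
  return isSoftwareRenderer(renderer) ? '2d' : 'webgl';
}
```

Because the classification is a plain function of the string, it can be unit-tested without a browser, and the substring list can be extended as new software renderers appear.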
SwiftShader is actually faster than some integrated graphics. So detecting GPU or CPU rendering gives you no guarantees about actual performance. Also, SwiftShader is the fastest software renderer around and should do a really decent job with simple games. Are you sure your application is properly optimized?
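Since neither the renderer name nor any extension list guarantees real-world speed, one option is to measure it: run a short warm-up scene, sample frame times via requestAnimationFrame, and fall back only if the measurements are actually bad. A sketch of the decision logic as a pure function (the budget and tolerance values here are illustrative, not tuned):

```javascript
// Decide whether to fall back to canvas 2D based on measured frame times (ms).
// In the engine, frameTimesMs would be collected over ~1 second of
// requestAnimationFrame callbacks during a representative warm-up scene.
function shouldFallBackTo2D(frameTimesMs, budgetMs = 33, tolerance = 0.5) {
  if (frameTimesMs.length === 0) return false; // no data: keep WebGL
  // Count frames that blew the budget (33 ms is roughly a 30 FPS floor).
  const slow = frameTimesMs.filter(t => t > budgetMs).length;
  // Fall back only if a majority of sampled frames were too slow.
  return slow / frameTimesMs.length > tolerance;
}
```

This sidesteps the false-positive problem entirely: a fast software renderer on a strong CPU keeps WebGL, and a weak GPU that happens to report a hardware name still gets the fallback.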
SwiftShader provides shared libraries (DLLs) which implement standardized graphics APIs. Applications already using these APIs thus don't require any changes to use SwiftShader. It can run entirely in user space, or as a driver (for Android), and output to either a frame buffer, a window, or an offscreen buffer.
To achieve exceptional performance, SwiftShader is built around two major optimizations that shape its architecture: dynamic code generation and parallel processing. Generating code at run time makes it possible to eliminate branches and optimize register usage, specializing the processing routines for exactly the operations required by each draw call. Parallel processing means both utilizing the CPU's multiple cores and processing multiple elements across the width of the SIMD vector units.
The API layer is an implementation of a graphics API, such as OpenGL (ES) or Direct3D, on top of the Renderer interface. It is responsible for managing API-level resources and rendering state, as well as compiling high-level shaders to bytecode form.
Reactor is an embedded language for C++ for dynamically generating code in a WYSIWYG fashion. It makes it possible to specialize the processing routines for the state and shaders used by each draw call. Its syntax closely resembles C and shading languages, keeping the code-generation logic easily readable.
The JIT layer is a run-time compiler, such as LLVM's JIT, or Subzero. Reactor records its operations in an in-memory intermediate form which can be materialized by the JIT into a function which can be called directly.
While making Reactor's syntax so similar to the C++ in which it is written might cause some confusion at first, it provides a powerful abstraction for code specialization. For example, to produce the code for an addition or a subtraction, one could write x = addOrSub ? x + y : x - y;. Note that only one of the two operations ends up in the generated code.
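Reactor itself is C++, but the core idea, resolving a condition at code-generation time so that the generated routine contains no branch, can be sketched in JavaScript using the Function constructor as a stand-in for the JIT (this is an analogy, not SwiftShader's API):

```javascript
// Analogue of Reactor-style specialization: addOrSub is known when the
// routine is generated, so the emitted function contains only one
// operation and no branch.
function makeAddOrSub(addOrSub) {
  const op = addOrSub ? '+' : '-';
  // The branch is resolved here, at generation time, not at call time.
  return new Function('x', 'y', `return x ${op} y;`);
}

const add = makeAddOrSub(true);  // generated body: return x + y;
const sub = makeAddOrSub(false); // generated body: return x - y;
```

The same principle is what lets SwiftShader emit a pixel routine containing only the blend modes, tests, and shader operations that the current draw call actually uses.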
The VertexRoutine produces a function for processing a batch of vertices. The fixed-function T&L pipeline is implemented by VertexPipeline, while programmable vertex processing with a shader is implemented by VertexProgram. Note that the vertex routine also performs vertex attribute reading, vertex caching, viewport transform, and clip flag calculation all in the same function.
The PixelRoutine takes a batch of primitives and performs per-pixel operations. The fixed-function texture stages and legacy integer shaders are implemented by PixelPipeline, while programmable pixel processing with a shader is implemented by PixelProgram. All other per-pixel operations such as the depth test, alpha test, stenciling, and alpha blending are also performed in the pixel routine. Together with the traversal of the pixels in QuadRasterizer, it forms one function.
The GLSL compiler is implemented in src/OpenGL/compiler/. It uses Flex and Bison to tokenize and parse GLSL shader source. It produces an abstract syntax tree (AST), which is then traversed to output assembly-level instructions in OutputASM.cpp.
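The tokenize → parse → emit pipeline described above can be illustrated with a toy compiler for simple arithmetic expressions (the real compiler uses Flex and Bison and handles full GLSL; everything here is a deliberately minimal sketch with hypothetical names):

```javascript
// Tokenize: split source into identifiers, numbers, and operators.
function tokenize(src) {
  return src.match(/[A-Za-z_]\w*|\d+|[+*()]/g) || [];
}

// Parse: recursive descent producing a small AST, '*' binding tighter than '+'.
function parse(tokens) {
  let pos = 0;
  function primary() {
    const t = tokens[pos++];
    if (t === '(') { const e = expr(); pos++; /* skip ')' */ return e; }
    return { kind: 'id', name: t };
  }
  function term() {
    let node = primary();
    while (tokens[pos] === '*') { pos++; node = { kind: '*', left: node, right: primary() }; }
    return node;
  }
  function expr() {
    let node = term();
    while (tokens[pos] === '+') { pos++; node = { kind: '+', left: node, right: term() }; }
    return node;
  }
  return expr();
}

// Emit: traverse the AST and output assembly-level instructions into
// temporaries, in the spirit of OutputASM.cpp's traversal.
function emit(ast, out, tmp = { n: 0 }) {
  if (ast.kind === 'id') return ast.name;
  const l = emit(ast.left, out, tmp);
  const r = emit(ast.right, out, tmp);
  const dst = 't' + tmp.n++;
  out.push(`${ast.kind === '+' ? 'add' : 'mul'} ${dst}, ${l}, ${r}`);
  return dst;
}

// Usage: 'a + b * c' lowers to two instructions.
const out = [];
emit(parse(tokenize('a + b * c')), out);
// out: ['mul t0, b, c', 'add t1, a, t0']
```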
The EGL API is implemented in src/OpenGL/libEGL/. Its entry functions are listed in libEGL.def (for Windows) and libEGL.lds (for Linux), and defined in main.cpp and implemented in libEGL.cpp. The Display, Surface, and Config classes are respective implementations of the abstract EGLDisplay, EGLSurface, and EGLConfig types.
OpenGL ES 2.0 is implemented in src/OpenGL/libGLESv2/. Note that while OpenGL ES 3.0 functions are implemented in libGLESv3.cpp, they are compiled into the libGLESv2 library, as is standard among most implementations (some platforms have libGLESv3 symbolically linked to libGLESv2). We'll focus on OpenGL ES 2.0 in this documentation.
I think I've read the Graphics Performance Help page at least ten times and I'm very frustrated. I have a GeForce RTX 2070 graphics card, but whenever I run the compatibility check, it says Google SwiftShader is being used to render the graphics. I have done everything the help page says, and it still won't work. If anyone can help me figure this out, I'd really appreciate it, because as it is now, it takes minutes for Onshape to register any interaction I have with parts.
You used to have to go to the Nvidia Control Panel and change the preferred graphics processor for your browser to the RTX 2070. However with a new update to Windows I think this has changed. I was trying to get Chrome to use my RTX 2070 and noticed this notification on the NVIDIA Control Panel:
I ended up having to go to Settings > System > Display. At the bottom of that page there is "Graphics settings". I then had to set the graphics performance preference for Chrome. Of course, Chrome was not already in the drop-down menu of apps, so I had to browse for it. Then I was able to change the preference to High performance (RTX 2070). I ran the check again and it was using the RTX 2070.
Hi Katie! Do you have multiple graphics cards installed on your device? Assuming you are using Chrome on a Windows device, you can ensure Windows defaults to the better graphics card by navigating through your system settings to Graphics settings and setting your browser to High performance. From there you can also ensure WebGL and hardware acceleration are turned on:
@Domenico_D Windows 10 has replaced the need to access the nVidia Control Panel for setting the GPU preference per application. The area is under Graphics Settings. It's a more generalized approach regardless of the graphics card vendor, nVidia, AMD, Intel, etc...
@Katie_Bosilovich , I just went through a similar issue. In my case, changing the display setting in my browser (Firefox) fixed the problem (somewhere in Firefox > Options, if I recall correctly). The default setting uses the integrated graphics hardware to conserve power. There is a high-performance setting to override that and use your GeForce GPU.
CesiumJS relies on WebGL, which is hardware accelerated using the GPU. You should have no issues running CesiumJS on an integrated, discrete or mobile GPU. Could you share a Sandcastle demonstrating a sample scene where you are seeing performance issues? Additionally, it would be helpful if you could share the output from WebGL Report.