The High-Level Shader Language[1] or High-Level Shading Language[2] (HLSL) is a proprietary shading language developed by Microsoft for the Direct3D 9 API to augment the shader assembly language, and went on to become the required shading language for the unified shader model of Direct3D 10 and higher.
HLSL is analogous to the GLSL shading language used with the OpenGL standard. It is very similar to the Nvidia Cg shading language, as it was developed alongside it. Early versions of the two languages were considered identical, only marketed differently.[3] HLSL shaders can enable profound speed and detail increases as well as many special effects in both 2D and 3D computer graphics.[citation needed]
HLSL programs come in six forms: pixel shaders (fragment in GLSL), vertex shaders, geometry shaders, compute shaders, tessellation shaders (Hull and Domain shaders), and ray tracing shaders (Ray Generation Shaders, Intersection Shaders, Any Hit/Closest Hit/Miss Shaders). A vertex shader is executed for each vertex that is submitted by the application, and is primarily responsible for transforming the vertex from object space to view space, generating texture coordinates, and computing per-vertex lighting attributes such as the vertex's normal, tangent, and bitangent vectors. When a group of vertices (normally three, forming a triangle) comes through the vertex shader, their output positions are interpolated to form pixels within the triangle's area; this process is known as rasterization.
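As an illustration, a minimal vertex shader of this kind might look like the following sketch; the constant buffer, structure, and matrix names are assumptions for the example, not part of any fixed API.

    // Minimal HLSL vertex shader sketch; all names here are hypothetical.
    cbuffer PerObject
    {
        float4x4 WorldViewProjection; // combined object-to-clip-space transform
    };

    struct VSInput
    {
        float3 position : POSITION;
        float3 normal   : NORMAL;
        float2 uv       : TEXCOORD0;
    };

    struct VSOutput
    {
        float4 position : SV_Position; // clip-space position consumed by the rasterizer
        float3 normal   : NORMAL;
        float2 uv       : TEXCOORD0;
    };

    VSOutput VSMain(VSInput input)
    {
        VSOutput output;
        output.position = mul(float4(input.position, 1.0f), WorldViewProjection);
        output.normal   = input.normal; // a full shader would also transform the normal
        output.uv       = input.uv;
        return output;
    }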
Optionally, an application using a Direct3D 10/11/12 interface and Direct3D 10/11/12 hardware may also specify a geometry shader. This shader takes as its input some vertices of a primitive (triangle/line/point) and uses this data to emit additional primitives, discard primitives, or change the primitive type; the resulting primitives are each then sent to the rasterizer.
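A pass-through geometry shader illustrates the shape of such a program. This sketch reuses the hypothetical VSOutput structure from the vertex shader example above and simply forwards each triangle, where a real shader might emit more primitives or a different primitive type.

    // Pass-through geometry shader sketch.
    [maxvertexcount(3)]
    void GSMain(triangle VSOutput input[3], inout TriangleStream<VSOutput> stream)
    {
        for (int i = 0; i < 3; ++i)
        {
            stream.Append(input[i]); // forward each vertex to the rasterizer
        }
        stream.RestartStrip();
    }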
GPUs listed are the hardware that first supported the given specifications. Manufacturers generally support all lower shader models through drivers. Note that games may claim to require a certain DirectX version, but don't necessarily require a GPU conforming to the full specification of that version, since developers can use a higher DirectX API version to target lower-Direct3D-spec hardware; for instance, DirectX 9 exposes features of DirectX 7-level hardware that DirectX 7 itself did not, targeting that hardware's fixed-function T&L pipeline.
The High Level Shading Language for DirectX implements a series of shader models. Using HLSL, you can create C-like programmable shaders for the Direct3D pipeline. Each shader model builds on the capabilities of the model before it, implementing more functionality with fewer restrictions.
Shader model 1 started with DirectX 8 and included assembly-level and C-like instructions. This model has many limitations caused by early programmable shader hardware. Shader models 2 and 3 greatly expanded the number of instructions and constants shaders could use. They are much more powerful than shader model 1, but still carry some of the first shader model's limitations.
Starting with Windows Vista, shader model 4 is a complete redesign. It allows unlimited instructions and constants (within the hardware constraints of your machine), has templated objects to make texture sampling cleaner and more efficient, and has the fewest restrictions of any shader model. It does, however, require the Windows Display Driver Model (WDDM), which is only available on the Windows Vista (or later) operating system.
The HLSL shader model is a versioning approach indicating which new features are added to the language. Each level allows an application or game to target a well-known set of functionality for development, and allows hardware and driver developers to target that same description for support.
Advanced Texture Operations adds SampleCmpLevel, a new sample-compare operation that neither uses the MIP level determined by the derivatives (as SampleCmp does) nor is limited to MIP level zero (as SampleCmpLevelZero is). Instead, the shader author specifies the level through a parameter.
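In HLSL this might be used as in the following sketch; the texture and sampler names are assumptions for the example.

    // SampleCmpLevel (Shader Model 6.7): the MIP level is an explicit argument.
    Texture2D<float> ShadowMap;
    SamplerComparisonState CmpSampler;

    float SampleShadowAtLevel(float2 uv, float compareValue, float level)
    {
        return ShadowMap.SampleCmpLevel(CmpSampler, uv, compareValue, level);
    }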
Shader Model 6.7 adds two new built-in functions to HLSL: QuadAny and QuadAll. These functions return whether the provided expression parameter is true for any lane (QuadAny) or for all lanes (QuadAll) in the current quad. With these, you can avoid undefined behavior by making shader code conditional on quad-level uniformity.
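A sketch of that pattern, with hypothetical resource names: QuadAny ensures every lane in the quad reaches the Sample call, so the implicit derivative calculation is well-defined.

    Texture2D ColorTex;
    SamplerState LinearSampler;

    float4 ShadeTexel(float2 uv, bool needsSample)
    {
        float4 result = 0;
        if (QuadAny(needsSample))
        {
            // All lanes in the quad execute this Sample, so derivatives are defined.
            float4 sampled = ColorTex.Sample(LinearSampler, uv);
            if (needsSample)
                result = sampled;
        }
        return result;
    }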
Shader Model 6.7 adds the WaveOpsIncludeHelperLanes entry function attribute. When this attribute is applied, any wave ops invoked by shaders compiled with that entry function include helper lanes in their calculations. This further advances the ability to leverage and identify helper lanes in shader development.
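For example, a pixel shader entry point might opt in as in this sketch; the shader body is hypothetical.

    [WaveOpsIncludeHelperLanes]
    float4 PSMain(float4 pos : SV_Position) : SV_Target
    {
        // With the attribute applied, helper lanes are counted here too.
        uint lanes = WaveActiveCountBits(true);
        return float4(lanes / 64.0f, 0, 0, 1);
    }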
I have DirectX 12, but SFM tells me I need DirectX 9.0 Shader Model 3.0. Do I need a separate shader model or something for DirectX 12? My AO is not working; even SSBIAS doesn't work, and it makes my posters look ugly. Do I need a new graphics card or something for DirectX?
Which shader models are supported cross-platform for DirectX on Windows and for OpenGL?
I would like to know the newest shader model that is supported on these two platforms.
I'm using MonoGame 3.7.
DXTweaker (aka DirectX Tweaker) spoofs values, but apps/games do some additional checks and do not detect Shader Model 3.0. By the way, it only exists as a time-bombed beta, and you need to set the date in a VM to somewhere in 2005 to get it working if you want to try its tweaks.
I am writing a small utility that reports system capabilities. One is the highest shader model supported by the installed graphics card, and I am currently detecting this using Direct3D 9.0c's device capabilities and checking the VertexShaderVersion and PixelShaderVersion fields of the D3DCAPS9 structure.
Edit - purpose: I am not using this code to dynamically change the features of a program, i.e. to select shaders. I am using it to report hardware capabilities as a 'ping' to a server, so that we have a good idea of the typical hardware our customers use, which can inform future product decisions. (For example: how many customers have SM4 or above? How many are using a 64-bit OS? Etc.) This is why either (a) gracefully failing, so we know it failed, or (b) getting an accurate shader model number are the two preferred modes.
Edit - answers so far: The answer below by SigTerm suggests instantiating DirectX 11, 10.1, 10, and 9.0c in order, and basing the reported shader model on which version instantiates without failure (shader models 5, 4.1, and 4, then the D3D9 caps check, in that order). If possible, I'd appreciate a code example of the DX11 and DX10 ways to do this.
This may not be a reliable solution. For example, I am running Windows on a VMware Fusion virtual machine on OS X. The Fusion drivers report DX11 in DxDiag, yet I know from the Fusion tech specs that it only supports DX9.0c and shader model 3. Still, that exception aside, this method seems the best way so far.
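For the Direct3D 11 route, here is a minimal C++ sketch of the approach described above, assuming the D3D11 headers and library are available: it requests a descending list of feature levels and maps the obtained level to a shader model, falling back to the D3DCAPS9 check when device creation fails. Treat it as an untested illustration rather than production code.

    #include <d3d11.h> // link against d3d11.lib

    const char* DetectShaderModel()
    {
        const D3D_FEATURE_LEVEL requested[] = {
            D3D_FEATURE_LEVEL_11_0, // shader model 5
            D3D_FEATURE_LEVEL_10_1, // shader model 4.1
            D3D_FEATURE_LEVEL_10_0, // shader model 4
        };
        D3D_FEATURE_LEVEL obtained = (D3D_FEATURE_LEVEL)0;
        // Pass null device/context pointers: we only want the feature level.
        HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            requested, sizeof(requested) / sizeof(requested[0]),
            D3D11_SDK_VERSION, nullptr, &obtained, nullptr);
        if (FAILED(hr))
            return "pre-SM4: fall back to the D3DCAPS9 check";
        switch (obtained)
        {
            case D3D_FEATURE_LEVEL_11_0: return "5.0";
            case D3D_FEATURE_LEVEL_10_1: return "4.1";
            default:                     return "4.0";
        }
    }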
I think you're asking for the impossible, because shaders are provided by DirectX, and the driver/GPU might not even have a concept of a "shader model" under the hood. In that case, the only way to detect capabilities would be to build a GPU database of some sort, detect the installed devices, and return the answer from the database. This won't be reliable, of course.
To enable these gathers, resources with various formats can be aliased to unsigned integer resource views with unsigned integer formats of equal size to the full element. These resource views are then used in the shader to retrieve the raw texture data.
For example, an R32_UINT format resource view could be created for an R8G8B8A8 texture; within the shader, the R32_UINT resource view could then be raw-gathered into four 32-bit unsigned integers that represent the raw R8G8B8A8 data. The author is then able to use that data however they wish.
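Assuming the GatherRaw intrinsic that Shader Model 6.7 introduces for this purpose, the shader side might look like this sketch (the view and sampler names are hypothetical):

    Texture2D<uint> RawView;   // R32_UINT view aliasing the R8G8B8A8 texture
    SamplerState PointSampler;

    uint4 GatherRawTexels(float2 uv)
    {
        // Each component is the raw 32-bit value of one of the four texels.
        return RawView.GatherRaw(PointSampler, uv);
    }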
Current Sample and Load operations require their offsets to be immediate integers, so programmers had to decide on the offset values they wanted before the shader was even compiled. To say the least, this made them of limited use.
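With Shader Model 6.7's programmable offsets, the offset may instead be computed at runtime, as in this sketch (resource names are hypothetical; as I read the feature description, only the low-order bits of each offset component are honored):

    Texture2D ColorTexture;
    SamplerState LinearSampler;

    float4 SampleWithOffset(float2 uv, int2 offset)
    {
        // Before SM 6.7, 'offset' had to be an immediate integer literal.
        return ColorTexture.Sample(LinearSampler, uv, offset);
    }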
Given that DX9-level hardware is getting rare for anyone running Vista or later these days, I tend to just stick with shader model 4.0 as a minimum. That said, you may feel differently. At least one source indicates that 20% of gamers can't run DX10, whether due to hardware or OS, though it doesn't qualify that with market demographics.
You can use #pragma directives to indicate that a shader program requires certain GPU features. At runtime, Unity uses this information to determine whether a shader program is compatible with the current hardware.
You can specify individual GPU features with the #pragma require directive, or specify a shader model with the #pragma target directive. A shader model is a shorthand for a group of GPU features; internally, it is the same as a #pragma require directive with the same list of features.
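For instance, a shader program block might declare its requirements in either style; the feature name here is one example from the documented set, and the target value is illustrative.

    #pragma target 4.5        // shader model shorthand for a group of features
    #pragma require compute   // or list individual GPU features explicitly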
It is important to correctly describe the GPU features that your shader requires. If your shader uses features that are not included in the list of requirements, this can result in compile-time errors, or in devices failing to support the shader at runtime.
If Unity determines that your shader needs GPU features that the list of requirements (or the equivalent target value) does not already include, it adds those requirements automatically and displays a warning message when it compiles the shader. To avoid seeing this warning message, explicitly add the requirements or use an appropriate target value in your code.
You can also use the #pragma require directive followed by a colon and a list of space-delimited shader keywords. This means that the requirement applies only to variants that are used when any of the given keywords are enabled.
Likewise, you can use the #pragma target directive followed by its value and a list of space-delimited shader keywords, so that the requirement applies only to variants that are used when any of the given keywords are enabled.
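Based on the syntax described above, keyword-scoped requirements might look like this sketch (the feature and keyword names are hypothetical):

    #pragma require geometry : EXAMPLE_KEYWORD_A
    #pragma target 4.0 EXAMPLE_KEYWORD_B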