It's time to learn how to work with shaders and shader resources in developing your Microsoft DirectX game for Windows 8. We've seen how to set up the graphics device and resources, and perhaps you've even started modifying its pipeline. So now let's look at pixel and vertex shaders.
If you aren't familiar with shader languages, a quick discussion is in order. Shaders are small, low-level programs that are compiled and run at specific stages in the graphics pipeline. Their specialty is very fast floating-point mathematical operations. The most common shader programs are:

- Vertex shaders, which run once for each vertex drawn and typically transform vertex positions and other per-vertex data.
- Pixel shaders, which run once for each pixel covered by the geometry being drawn and compute the color value written to the render target.
Shader programs are written in Microsoft High Level Shader Language (HLSL). HLSL syntax looks a lot like C, but without the pointers. Shader programs must be very compact and efficient. If your shader compiles to too many instructions, it cannot be run and an error is returned. (Note that the exact number of instructions allowed is part of the Direct3D feature level.)
In this workflow, shaders are not compiled at run time; they are compiled when the rest of the app is built. When you compile your app with Microsoft Visual Studio 2013, the HLSL files are compiled to CSO (.cso) files that your app must load and place in GPU memory prior to drawing. Make sure you include these CSO files with your app when you package it; they are assets just like meshes and textures.
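Loading a .cso file is ordinary binary file I/O. Here is a minimal sketch; the function name and the .cso filename in the usage comment are assumptions, and the Direct3D call that consumes the bytecode is shown only as a comment since it requires a Windows device object:

```cpp
#include <cassert>
#include <fstream>
#include <string>
#include <vector>

// Read a compiled shader object (.cso) into memory. The bytecode is
// then handed to ID3D11Device::CreateVertexShader or CreatePixelShader.
std::vector<char> LoadShaderBytecode(const std::string& path)
{
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file) return {};                       // file not found
    const std::streamsize size = file.tellg();  // total bytecode length
    std::vector<char> bytecode(static_cast<size_t>(size));
    file.seekg(0);
    file.read(bytecode.data(), size);
    return bytecode;
}

// Usage on Windows (not compiled here; "SimpleVertexShader.cso" is a
// hypothetical asset name):
//   auto vsBytes = LoadShaderBytecode("SimpleVertexShader.cso");
//   device->CreateVertexShader(vsBytes.data(), vsBytes.size(),
//                              nullptr, &vertexShader);
```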
It's important to take a moment to discuss HLSL semantics before we continue, because they are often a point of confusion for new Direct3D developers. HLSL semantics are strings that identify a value passed between the app and a shader program. Although they can be almost any string, the best practice is to use one that indicates the usage, such as POSITION or COLOR. You assign these semantics when you are constructing a constant buffer or input layout. You can also append a number between 0 and 7 to a semantic so that similar values use separate registers. For example: COLOR0, COLOR1, COLOR2...
Semantics that are prefixed with "SV_" are system value semantics that are written to by your shader program; your game itself (running on the CPU) cannot modify them. Typically, these semantics contain values that are inputs or outputs from another shader stage in the graphics pipeline, or that are generated entirely by the GPU.
Additionally, SV_ semantics have different behaviors when they are used to specify input to or output from a shader stage. For example, SV_POSITION (output) contains the vertex data transformed during the vertex shader stage, and SV_POSITION (input) contains the pixel position values that were interpolated by the GPU during the rasterization stage.
When declaring the structure for the constant buffer in your C++ code, ensure that all of the data is correctly aligned along 16-byte boundaries. The easiest way to do this is to use DirectXMath types, like XMFLOAT4 or XMFLOAT4X4, as seen in the example code. You can also guard against misaligned buffers by declaring a static assert:
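The referenced assert might look like this. The contents of ConstantBufferStruct are an assumption (a typical world/view/projection set); on Windows, XMFLOAT4X4 comes from DirectXMath (`<DirectXMath.h>`), and a plain stand-in is declared here only so the sketch compiles anywhere:

```cpp
#include <cassert>

// Stand-in for DirectX::XMFLOAT4X4 (<DirectXMath.h>) so this sketch
// compiles outside Windows: a 4x4 matrix of 32-bit floats.
struct XMFLOAT4X4 { float m[4][4]; };

// Hypothetical constant buffer: world, view, and projection matrices.
struct ConstantBufferStruct
{
    XMFLOAT4X4 world;
    XMFLOAT4X4 view;
    XMFLOAT4X4 projection;
};

// Fails at compile time if the struct is not 16-byte aligned.
static_assert((sizeof(ConstantBufferStruct) % 16) == 0,
              "Constant buffer size must be 16-byte aligned.");
```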
This line of code will cause an error at compile time if ConstantBufferStruct is not 16-byte aligned. For more information about constant buffer alignment and packing, see Packing Rules for Constant Variables.
The vertex buffer supplies the triangle data for the scene objects to the vertex shader(s). As with the constant buffer, the vertex buffer struct is declared in the C++ code, using similar packing rules.
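Such a vertex struct might look like the following sketch. The struct name and fields are assumptions (a position plus a color, as used throughout this article); on Windows, XMFLOAT3 comes from DirectXMath, and a stand-in is declared here so the sketch compiles anywhere:

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for DirectX::XMFLOAT3 (<DirectXMath.h>): three packed floats.
struct XMFLOAT3 { float x, y, z; };

// One vertex: position followed by color, tightly packed. The input
// layout passed to CreateInputLayout must describe this same byte layout.
struct VertexPositionColor
{
    XMFLOAT3 pos;    // byte offset 0
    XMFLOAT3 color;  // byte offset 12
};
```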
There is no standard format for vertex data in Direct3D 11. Instead, we define our own vertex data layout using a descriptor; the data fields are defined using an array of D3D11_INPUT_ELEMENT_DESC structures. Here, we show a simple input layout that describes the same vertex format as the preceding struct:
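The input layout referenced here might look like the following sketch, describing a float3 position at byte offset 0 and a float3 color at offset 12. The declarations at the top are stand-ins for the real ones so the sketch compiles outside Windows; in an actual app, include `<d3d11.h>` instead:

```cpp
#include <cassert>

// Stand-ins for the <d3d11.h>/<dxgiformat.h> declarations used below,
// so this sketch compiles anywhere. On Windows, include <d3d11.h>.
typedef unsigned int UINT;
enum DXGI_FORMAT { DXGI_FORMAT_R32G32B32_FLOAT = 6 };
enum D3D11_INPUT_CLASSIFICATION { D3D11_INPUT_PER_VERTEX_DATA = 0 };
struct D3D11_INPUT_ELEMENT_DESC
{
    const char*                SemanticName;
    UINT                       SemanticIndex;
    DXGI_FORMAT                Format;
    UINT                       InputSlot;
    UINT                       AlignedByteOffset;
    D3D11_INPUT_CLASSIFICATION InputSlotClass;
    UINT                       InstanceDataStepRate;
};

// Describes a vertex with a float3 position at offset 0 and a float3
// color at offset 12, matching the C++ vertex struct.
static const D3D11_INPUT_ELEMENT_DESC iaDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
```

This array is passed to ID3D11Device::CreateInputLayout along with the vertex shader bytecode, so Direct3D can validate the layout against the shader's input struct.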
If you add data to the vertex format when modifying the example code, be sure to update the input layout as well, or the shader will not be able to interpret it. You might modify the vertex layout like this:
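For example, adding a per-vertex normal might grow the layout like this (a sketch: the declarations at the top are stand-ins for the real `<d3d11.h>` ones so it compiles anywhere, and the offsets assume tightly packed float3 fields):

```cpp
#include <cassert>

// Stand-ins for the <d3d11.h> declarations used below, so this sketch
// compiles anywhere. On Windows, include <d3d11.h> instead.
typedef unsigned int UINT;
enum DXGI_FORMAT { DXGI_FORMAT_R32G32B32_FLOAT = 6 };
enum D3D11_INPUT_CLASSIFICATION { D3D11_INPUT_PER_VERTEX_DATA = 0 };
struct D3D11_INPUT_ELEMENT_DESC
{
    const char* SemanticName; UINT SemanticIndex; DXGI_FORMAT Format;
    UINT InputSlot; UINT AlignedByteOffset;
    D3D11_INPUT_CLASSIFICATION InputSlotClass; UINT InstanceDataStepRate;
};

// Position at offset 0, color at 12, and a new normal at offset 24;
// the C++ vertex struct must gain a matching float3 field.
static const D3D11_INPUT_ELEMENT_DESC iaDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 24,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
```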
Just as with the constant buffer, the vertex shader has a corresponding buffer definition for incoming vertex elements. (That's why we provided a reference to the vertex shader resource when creating the input layout: Direct3D validates the per-vertex data layout against the shader's input struct.) Note how the semantics match between the input layout definition and this HLSL buffer declaration. However, COLOR has a "0" appended to it. It isn't necessary to add the 0 if you have only one COLOR element declared in the layout, but it's a good practice to append it in case you choose to add more color elements in the future.
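The HLSL declaration referenced here might look like the following sketch (the struct name VS_INPUT is an assumption; the semantics match the input layout):

```hlsl
// Matches the input layout: a float3 POSITION and a float3 COLOR0.
struct VS_INPUT
{
    float3 vPos   : POSITION;
    float3 vColor : COLOR0;
};
```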
Shaders take input types and return output types from their main functions upon execution. For the vertex shader defined in the previous section, the input type was the VS_INPUT structure, and we defined a matching input layout and C++ struct. An array of this struct is used to create a vertex buffer in the CreateCube method.
The vertex shader returns a PS_INPUT structure, which must minimally contain the 4-component (float4) final vertex position. This position value must have the system value semantic, SV_POSITION, declared for it so the GPU has the data it needs to perform the next drawing step. Notice that there is not a 1:1 correspondence between vertex shader output and pixel shader input; the vertex shader returns one structure for each vertex it is given, but the pixel shader runs once for each pixel. That's because the per-vertex data first passes through the rasterization stage. This stage decides which pixels "cover" the geometry you're drawing, computes interpolated per-vertex data for each pixel, and then calls the pixel shader once for each of those pixels. Interpolation is the default behavior when rasterizing output values, and is essential in particular for the correct processing of output vector data (light vectors, per-vertex normals and tangents, and others).
The example vertex shader is very simple: take in a vertex (position and color), transform the position from model coordinates into perspective projected coordinates, and return it (along with the color) to the rasterizer. Notice that the color value is interpolated right along with the position data, providing a different value for each pixel even though the vertex shader didn't perform any calculations on the color value.
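A pass-through vertex shader of the kind described might look like this sketch. The constant buffer layout and the struct names are assumptions; `mul` and the matrix types are standard HLSL:

```hlsl
// World, view, and projection matrices supplied by the app each frame.
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix mWorld;
    matrix mView;
    matrix mProjection;
};

struct VS_INPUT
{
    float3 vPos   : POSITION;
    float3 vColor : COLOR0;
};

struct PS_INPUT
{
    float4 Position : SV_POSITION;
    float4 Color    : COLOR0;
};

PS_INPUT main(VS_INPUT input)
{
    PS_INPUT output;

    // Transform the model-space position into clip space.
    float4 pos = float4(input.vPos, 1.0f);
    pos = mul(pos, mWorld);
    pos = mul(pos, mView);
    pos = mul(pos, mProjection);
    output.Position = pos;

    // Pass the color through; the rasterizer interpolates it per pixel.
    output.Color = float4(input.vColor, 1.0f);
    return output;
}
```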
A more complex vertex shader, such as one that sets up an object's vertices for Phong shading, might look more like this. In this case, we're taking advantage of the fact that the vectors and normals are interpolated to approximate a smooth-looking surface.
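A sketch of such a shader follows. The constant buffer contents, struct names, and the choice of view space for lighting are all assumptions; the normal transform assumes uniform scaling (otherwise an inverse-transpose matrix is needed):

```hlsl
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix mWorld;
    matrix mView;
    matrix mProjection;
    float4 vLightPosition;   // light position in world space (assumed)
};

struct VS_INPUT
{
    float3 vPos    : POSITION;
    float3 vNormal : NORMAL0;
};

struct PS_INPUT
{
    float4 Position : SV_POSITION;
    float3 vNormal  : NORMAL0;    // interpolated per pixel
    float3 vToLight : TEXCOORD0;  // interpolated per pixel
};

PS_INPUT main(VS_INPUT input)
{
    PS_INPUT output;

    // Position in world, then view, then clip space.
    float4 worldPos = mul(float4(input.vPos, 1.0f), mWorld);
    float4 viewPos  = mul(worldPos, mView);
    output.Position = mul(viewPos, mProjection);

    // Rotate the normal into view space (assumes uniform scaling).
    output.vNormal = mul(mul(float4(input.vNormal, 0.0f), mWorld), mView).xyz;

    // Vector from the vertex toward the light, in view space.
    float4 lightView = mul(vLightPosition, mView);
    output.vToLight  = (lightView - viewPos).xyz;

    return output;
}
```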
The pixel shader in this example is quite possibly the absolute minimum amount of code you can have in a pixel shader. It takes the interpolated pixel color data generated during rasterization and returns it as output, where it will be written to a render target. How boring!
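That minimal pixel shader might look like this (the struct name PS_INPUT is an assumption; it must match the vertex shader's output struct):

```hlsl
struct PS_INPUT
{
    float4 Position : SV_POSITION;
    float4 Color    : COLOR0;
};

// Return the interpolated vertex color unchanged.
float4 main(PS_INPUT input) : SV_TARGET
{
    return input.Color;
}
```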
The important part is the SV_TARGET system-value semantic on the return value. It indicates that the output is to be written to the primary render target, which is the texture buffer supplied to the swap chain for display. This is required for pixel shaders - without the color data from the pixel shader, Direct3D wouldn't have anything to display!
In another example, the pixel shader takes its own constant buffers that contain light and material information. The input layout in the vertex shader would be expanded to include normal data, and the output from that vertex shader is expected to include transformed vectors for the vertex, the light, and the vertex normal in the view coordinate system.
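A sketch of such a pixel shader follows; the constant buffer names and layouts, and the simple Lambert diffuse model, are assumptions chosen to match the vectors described above:

```hlsl
cbuffer LightConstantBuffer : register(b0)
{
    float4 vLightColor;  // color/intensity of the light (assumed)
    float4 vAmbient;     // constant ambient term (assumed)
};

cbuffer MaterialConstantBuffer : register(b1)
{
    float4 vDiffuse;     // material diffuse reflectance (assumed)
};

struct PS_INPUT
{
    float4 Position : SV_POSITION;
    float3 vNormal  : NORMAL0;    // view-space normal, interpolated
    float3 vToLight : TEXCOORD0;  // view-space vector to the light
};

float4 main(PS_INPUT input) : SV_TARGET
{
    // Re-normalize: interpolation shortens unit vectors between vertices.
    float3 N = normalize(input.vNormal);
    float3 L = normalize(input.vToLight);
    float  d = saturate(dot(N, L));  // Lambert diffuse term
    return vAmbient + vDiffuse * vLightColor * d;
}
```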
Shaders are very powerful tools that can be used to generate procedural resources like shadow maps or noise textures. In fact, advanced techniques require that you think of textures more abstractly: not as visual elements, but as buffers. They hold data such as height information, or other values that can be sampled in the final pixel shader pass, or earlier in the same frame as part of a multi-stage effects pass. Multi-pass rendering of this kind is a powerful tool and the backbone of many modern visual effects.
Hopefully, you're comfortable with DirectX 11 at this point and are ready to start working on your project. Here are some links to help answer other questions you may have about development with DirectX and C++:
I'm trying to figure out how to debug pixel and vertex shaders in DirectX. I've tried using PIX for Windows, but I find it to be quite buggy and effectively non-operational. Is there an alternative that would allow me to debug these shaders inside my own application?
If there is a bug, you should first check whether the buffers you use really contain the right values. Early on, it often happens that textures are created with the wrong parameters or that the data is expected in a different order; in my experience, this is a common source of errors.
Just output debug information as color. For example, you can output 255*(normal+1) as r,g,b, you can output some intermediate shader variable as a color, or you can check in the shader whether a value is within bounds and output white if it is and black if it's not. That usually helps.
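As an HLSL sketch of that suggestion (PS_INPUT and vNormal are assumed names): render-target colors are in the [0, 1] range, so the normalized equivalent of mapping a [-1, 1] normal into displayable values is 0.5*(normal+1):

```hlsl
struct PS_INPUT
{
    float4 Position : SV_POSITION;
    float3 vNormal  : NORMAL0;  // interpolated normal (assumed input)
};

// Debug visualization: map each normal component from [-1, 1] to [0, 1]
// and display it as an RGB color.
float4 main(PS_INPUT input) : SV_TARGET
{
    return float4(0.5f * (normalize(input.vNormal) + 1.0f), 1.0f);
}
```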
PIX for Windows comes with the DirectX SDK (the June 2010 release, I can verify) and will let you visually debug the state you are setting, as well as step through your shaders line by line. It's a life saver.
I am learning Direct3D 11, and in all the basic tutorials I have found on shader writing, the vertex and pixel shaders are written to transform the whole scene the same way; for example, rendering a cube with a texture...
But I wonder: how do you differentiate objects? What if you want, for example, to simulate a mirror surface on some object and use different shaders to render the rest of the scene? I think most games must use many vertex and pixel shaders to achieve their various looks and transformations.
While initializing the engine and loading the game world, prepare all the shaders you will use for rendering. By "prepare" I mean load them into memory, compile them if necessary, and do all the ID3D11Device::CreatePixelShader and similar calls to get the D3D shader objects allocated and ready to go. Keep the objects in an array or some other data structure.
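That "array or some other data structure" might be sketched as a small name-to-shader map. This is a portable sketch (class and method names are hypothetical): on Windows, the mapped value would be a ComPtr holding the ID3D11PixelShader created via ID3D11Device::CreatePixelShader, but here we store the raw bytecode so it compiles anywhere:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Registry of shaders prepared at load time, looked up by name at
// draw time. On Windows, the value type would be a D3D shader object.
class ShaderCache
{
public:
    // Store a shader under a name chosen by the engine.
    void Register(const std::string& name, std::vector<char> bytecode)
    {
        m_shaders[name] = std::move(bytecode);
    }

    // Return the shader for this name, or nullptr if it isn't loaded.
    const std::vector<char>* Find(const std::string& name) const
    {
        auto it = m_shaders.find(name);
        return it == m_shaders.end() ? nullptr : &it->second;
    }

private:
    std::unordered_map<std::string, std::vector<char>> m_shaders;
};
```

At draw time, each object's material names the shader to use; look it up in the cache and bind it with ID3D11DeviceContext::PSSetShader (or VSSetShader) before issuing the draw call.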