You'll need to check the GPU capabilities through D3DCAPS9 (see the DirectX SDK docs if you're not familiar with it). Beyond just determining the shader model, you can check for specific supported capabilities.
If you are using D3D9, you can query the device capabilities reported by your card using the IDirect3D9::GetDeviceCaps method. This fills in a structure containing a lot of interesting information about what the hardware supports. Of interest for this problem will be the fields VertexShaderVersion and PixelShaderVersion, as well as possibly MaxVertexShader30InstructionSlots and MaxPixelShader30InstructionSlots. All four are described on the linked documentation page.
For D3D10 you should be guaranteed SM4. For D3D11 (which I'd recommend over 10, since if you can use 10 you should also be able to use 11), device capabilities are categorized into feature levels. If you're using feature level 10_0 or greater, you should be guaranteed SM4. Below 10_0 you have some odd 10Level9 differences to take into account -- the upshot is that you have to use odd shader model designations like vs_4_0_level_9_1 in some scenarios (perhaps all; we're getting into territory I haven't explored much in practice).
You'll note that I said "should be guaranteed" in a few places. This is because, as you alluded to, it's possible for cards to lie, or for driver bugs to effectively render particular hardware/driver combinations non-functional (or at least broken in a fashion you'd want to work around). This is much rarer these days than it used to be, but in those cases you can't really trust the hardware or driver anyhow and will have to "do it yourself." One approach is simply to try to create something using SM3 and see if it fails -- although this will not catch all bugs or failures.
What I have done to account for that kind of issue in the past is build up a locally-maintained "feature database" API that lets me store information about particular card/driver failures and fall back to safe alternative code paths when that hardware/driver combination is present on the end-user's machine. Populating this database generally requires trial and error across a lot of different hardware configurations, so it can unfortunately be difficult for a lone developer to do.
Just a general question about Direct3D: if I were to write an application entirely in D3D11 with HLSL 5.0 shaders, would it still be compatible with systems that are only D3D10- or D3D9-capable, just with less graphically pleasing effects and without the efficiency of D3D11? If so, how much less efficient would it be? If not, would I seriously need to make two or three different versions of the exact same program, using D3D9, D3D10, and D3D11?
To sum this up, using DX11 makes sense if your target platform is DX10.0+-class hardware (because aside from hardware tessellation, the differences are tolerable and/or there are workarounds for missing features). If you want to support DX9 hardware, I'd suggest sticking with the DX9 SDK, as this will also allow you to run your app on XP.
But that doesn't really get you much, because you need to pass in a list of feature levels you support (or NULL, which implies a default list), and the function has an out parameter telling you which feature level you got. So a DX9-only card would come back with D3D_FEATURE_LEVEL_9_1 (or 9_2/9_3).
No, it is not backward compatible. If you want to run on more platforms: a DirectX 11-capable graphics card will certainly support DirectX 9, so either develop once in DX9, or develop in both DX11 and DX9.
It's telling you exactly what the problem is: your GPU is incompatible with the game's requirements. If you're lucky, you might just be missing a driver update, but if you look up your GPU model and it doesn't support the requirements, you can't play on your hardware.
Hello! I recently downloaded AE Beta 24.2 to try bringing in some 3D models from Blender, but it doesn't seem to be working: the models do not appear on screen, even though I can see that they are there in the project. I found that the Advanced 3D option is missing. Could you please let me know which version includes Advanced 3D mode, or do you have any recommendations on this matter? Thank you!
Thank you, that helps. Unfortunately, Advanced 3D currently requires a video card with 4 GB of GPU memory, and it must be a D3D11-compatible GPU (Feature Level 11.0, Shader Model 5.0). The Intel card your machine is currently using does not meet those requirements.
You can use #pragma directives to indicate that a shader program requires certain GPU features. At runtime, Unity uses this information to determine whether a shader program is compatible with the current hardware.
You can specify individual GPU features with the #pragma require directive, or specify a shader model with the #pragma target directive. A shader model is a shorthand for a group of GPU features; internally, it is the same as a #pragma require directive with the same list of features.
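Following the syntax described above, a shader might declare its requirements like this (the feature names `integers` and `mrt8` are taken from Unity's documented feature list; either form can be used):

```hlsl
// Require a specific shader model (shorthand for a group of GPU features):
#pragma target 4.0

// Or require individual GPU features explicitly:
#pragma require integers mrt8
```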
It is important to correctly describe the GPU features that your shader requires. If your shader uses features that are not included in the list of requirements, this can result in either compile-time errors, or in devices failing to support the shader at runtime.
If the list of requirements (or the equivalent target value) does not already include the features your shader uses, Unity displays a warning message when it compiles the shader, to indicate that it has added these requirements automatically. To avoid this warning, explicitly add the requirements or use an appropriate target value in your code.
You can also use the #pragma require directive followed by a colon and a list of space-delimited shader keywords. This means that the requirement applies only to variants that are used when any of the given keywords are enabled.
You can also use the #pragma target directive followed by a list of space-delimited shader keywords. This means that the requirement applies only to variants that are used when any of the given keywords are enabled.
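A hypothetical example combining the two keyword-scoped forms described above (EXAMPLE_ON is a placeholder keyword name, and `geometry` is one of Unity's documented feature names):

```hlsl
#pragma multi_compile __ EXAMPLE_ON

// The geometry requirement applies only to variants that use EXAMPLE_ON:
#pragma require geometry : EXAMPLE_ON

// The same idea expressed with a target value:
#pragma target 4.0 EXAMPLE_ON
```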