The only shader model 5.1 features that relate to DX11.3 are for the pixel shader: conservative rasterization (plus stencil output). The compute shader is virtually unchanged from 5.0 to 5.1 as far as DX11.3 is concerned. It is safe to say that rejecting a 5.1 shader here is excessive, but it is not as if you would have needed it anyway.
The High-Level Shader Language[1] or High-Level Shading Language[2] (HLSL) is a proprietary shading language developed by Microsoft for the Direct3D 9 API to augment the shader assembly language, and went on to become the required shading language for the unified shader model of Direct3D 10 and higher.
HLSL is analogous to the GLSL shading language used with the OpenGL standard. It is very similar to the Nvidia Cg shading language, as it was developed alongside it. Early versions of the two languages were considered identical, only marketed differently.[3] HLSL shaders can enable profound speed and detail increases as well as many special effects in both 2D and 3D computer graphics.
HLSL programs come in six forms: pixel shaders (fragment shaders in GLSL), vertex shaders, geometry shaders, compute shaders, tessellation shaders (hull and domain shaders), and ray tracing shaders (ray generation, intersection, any hit, closest hit, and miss shaders). A vertex shader is executed for each vertex that is submitted by the application, and is primarily responsible for transforming the vertex from object space to view space, generating texture coordinates, and calculating lighting coefficients such as the vertex's normal, tangent, and bitangent vectors. When a group of vertices (normally 3, to form a triangle) comes through the vertex shader, their output positions are interpolated to form pixels within the triangle's area; this process is known as rasterization.
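As a sketch, a minimal vertex shader of the kind described above might look like the following HLSL (all names here are illustrative, not from any particular codebase): it transforms the position by a combined world-view-projection matrix and passes the texture coordinate through to the rasterizer.

```hlsl
// Illustrative sketch: constant buffer and entry-point names are assumptions.
cbuffer PerObject
{
    float4x4 worldViewProjection; // object space -> clip space
};

struct VSInput
{
    float3 position : POSITION;
    float2 uv       : TEXCOORD0;
};

struct VSOutput
{
    float4 position : SV_Position; // consumed by the rasterizer
    float2 uv       : TEXCOORD0;   // interpolated across the triangle
};

VSOutput VSMain(VSInput input)
{
    VSOutput output;
    output.position = mul(float4(input.position, 1.0), worldViewProjection);
    output.uv = input.uv;
    return output;
}
```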
Optionally, an application using a Direct3D 10/11/12 interface and Direct3D 10/11/12 hardware may also specify a geometry shader. This shader takes as its input some vertices of a primitive (triangle/line/point) and uses this data to generate/degenerate (or tessellate) additional primitives or to change the type of primitives, which are each then sent to the rasterizer.
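A minimal pass-through geometry shader sketch makes the stage's shape concrete: it receives one whole triangle and re-emits its three vertices unchanged, where a real geometry shader would add, drop, or retype primitives at this point (names are illustrative).

```hlsl
// Illustrative sketch: struct and entry-point names are assumptions.
struct GSInput  { float4 position : SV_Position; };
struct GSOutput { float4 position : SV_Position; };

[maxvertexcount(3)] // upper bound on vertices this shader may emit
void GSMain(triangle GSInput input[3],
            inout TriangleStream<GSOutput> stream)
{
    // Re-emit the input triangle unchanged; additional primitives
    // could be appended here before rasterization.
    for (int i = 0; i < 3; ++i)
    {
        GSOutput output;
        output.position = input[i].position;
        stream.Append(output);
    }
    stream.RestartStrip();
}
```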
GPUs listed are the hardware that first supported the given specifications. Manufacturers generally support all lower shader models through drivers. Note that games may claim to require a certain DirectX version, but don't necessarily require a GPU conforming to the full specification of that version, as developers can use a higher DirectX API version to target lower-spec Direct3D hardware; for instance, DirectX 9 exposes features of DirectX 7-level hardware that DirectX 7 itself did not, targeting its fixed-function T&L pipeline.
I talked to some Unreal Engine Mac users and they told me the difference between Unity and Unreal on Mac is that Unreal supports Shader Model 5.0 on Macs, which makes it possible to get AAA graphics on a modern Mac.
The bit about mobile focus is news to me, probably because I have been focused on compute shaders rather than normal shaders. In that regard I never felt like the Metal support in Unity was mobile-focused in particular, and I've had some great results on the desktop, especially once we were past a certain version of Unity and macOS.
There is stuff about the pragma target in the 2018.1 beta release notes: a two-step approach, based on relaxing the requirements for pragma target 5.0, and a new granular way of checking for supported features:
Graphics: Improved shader import handling when using #pragma target. If no #pragma geometry, #pragma hull or #pragma domain statements are used to specify entry-point functions, these shader features (geometry or tessellation) are now dropped from the internal shader capability requirements, allowing greater compatibility across non-DX11 graphics targets. In practice, this now allows using #pragma target 5.0 with Metal, as long as geometry shaders are not used.
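For illustration, the compilation directives of a Unity shader benefiting from this relaxed behaviour might look like the following hypothetical fragment (`vert` and `frag` are assumed entry-point names, not required ones):

```hlsl
// Hypothetical Unity shader directives: target 5.0 is requested, but no
// geometry/hull/domain entry points are declared, so the geometry and
// tessellation capability requirements are dropped and the shader can
// still compile for Metal.
#pragma vertex vert
#pragma fragment frag
#pragma target 5.0
// no "#pragma geometry ...", "#pragma hull ..." or "#pragma domain ..." lines
```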
OK, I will be very specific: it's a combination of you being wrong about certain particular things, some of your questions being far too vague, and also the timing as regards the evolution of Unity and where things are at with 2018.1.
The present: the future of Unity high-end graphics is the HD scriptable pipeline and whatever other high-end pipelines other people make. The HD pipeline requires compute capabilities, and all the work Unity did in the past to make Unity compute shaders compile to and work with Metal pays off. macOS machines with Metal are very much a target of the new HD pipeline, and they would even like to extend this to high-end mobile Metal devices one day.
Also note that, in terms of Unity being confident about Metal, it's only in 2018.1 that macOS Metal support inside the editor is turned on by default and no longer considered experimental. So all these pieces are only just coming together, and I expect to see further progress in 2018 for the HD pipeline in general, including on the Mac.
How much RAM have you got? Neither UE4 nor Unity works magic when loading very large assets into large scenes. It's not an OS-specific issue, but it is one of the barriers to working on this stuff with your dev machine, and it may affect Mac users a bit more due to the number of Mac models that don't feature upgradable RAM!
Plus there are all the reasons why particular shiny graphics might not be included in a game on some platforms due to performance considerations other than just the GPU. Some assumptions are bound to be made by devs as to the average power available on different platforms, and the hardware Apple have sold for many years does no favours to those perceptions, even if you happen to have a Mac with a bit more power on the GPU front.
That won't stop me using a game engine to develop graphics that I can run on a Mac with a 1080 Ti in it if I want; the power is/will be available to me in the engine, but I certainly won't expect to see lots of similar eye candy targeting this largely fictitious level of Mac GPU power. Mind you, we do live in an age where this doesn't have to be a fiction, because eGPUs are an officially supported option now, though at this stage not one I think developers are placing many bets on (similar story with macOS VR).
Also, with all this focus on graphics, I should have taken time to say that there are other reasons why the shiny extras may be switched off, e.g. if other parts of Unity perform slower on macOS than Windows. And here again we have the same story: this is a moment of change for Unity, where efforts regarding game performance have been focused on the Job System/ECS/Burst, and much of these systems are still at the experimental stage. It would not surprise me if ECS/Burst are not performing as well on macOS as Windows right now, but again it's early days, and Burst is not even available in standalone builds yet, if I remember correctly. So anyway, these are more parts of Unity where the Mac is not being ignored; it may not be treated as priority #1, but it certainly isn't languishing far down the priorities list.

I look forward to the modular Mac Pro in the hope that all this love will find a most suitable target. Failing that, the iMac Pro would be quite good for development when paired with a beefy GPU in a Thunderbolt 3 enclosure, although I understand there is quite a lot of thermal throttling. I might live with a top-end machine from the non-Pro iMac lineup, again paired with an eGPU.
Theoretically, you do want to use the lowest level you possibly can. The lower the level, the wider the range of hardware that your game supports, and the more people that can play (and buy) your game.
That said, as you're using DX10/11 already, you're already targeting markets that actually use Vista and up, which mostly coincide with the markets that have DX10-level hardware or better. Targeting SM 5.1 is probably a mistake, but 4.0 is likely just fine.
Given that DX9-level hardware is getting rare for anyone running Vista up these days, I tend to just stick with shader model 4.0 as a minimum. That said, you may feel differently. At least one source indicates that 20% of gamers can't run DX10 either due to hardware or OS, though it doesn't qualify that with market demographics.
There are markets where DX9 is sadly still the API of choice, but those are markets you're very unlikely to have any luck in for a variety of non-technical reasons (e.g. national laws that make life difficult for foreign games, nationalist loyalty from gamers towards domestic game manufacturers, or wildly different game design preferences than present in the West).
This of course is one of the supposed advantages of GL vs DX. With GL, you can use the latest hardware features even on older Windows versions. The flip side is that the drivers are likely out of date and buggy, so your GL app is likely to generate far more support requests and unhappy customers than a DX version. This is why, for example, browsers all implement WebGL via a DX layer on Windows rather than just using GL directly. Even when the current drivers are high quality, folks on older Windows OSes are far more likely to be running out-of-date drivers, and to not even know how to upgrade them.
The out-of-date driver problem also affects DX, though to a far lesser degree. I'd suggest targeting only modern Windows OSes unless you have a very strong reason to indicate that XP users make up a significant portion of your target market, as the cost of targeting XP (in terms of development, testing, support, etc.) is just getting higher and higher as time goes on.
In this post, I will discuss the Direct3D shader bytecode format, and how to parse it without using the Direct3D API. This is basically a write-up of SlimShader, a .NET library that can read in Direct3D 10 / 11 HLSL bytecode and give you back the shader instructions and metadata as nicely structured objects.
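As a rough sketch of what such a parser deals with first, the compiled shader sits inside a chunked "DXBC" container: a fixed header, then a table of chunk offsets, then the chunks themselves. The layout below reflects the format as established by public reverse-engineering efforts such as SlimShader; the offsets and meanings are to the best of my knowledge, not an official specification.

```
DXBC container layout (byte offsets from the start of the file):

0x00   4     "DXBC" magic FourCC
0x04   16    checksum of the rest of the container
0x14   4     always 1
0x18   4     total container size in bytes
0x1C   4     chunk count N
0x20   4*N   chunk offsets (each from the start of the container)

Each chunk then begins with its own 4-byte FourCC (e.g. RDEF for resource
definitions, ISGN/OSGN for input/output signatures, SHDR or SHEX for the
instruction stream, STAT for statistics) and a 4-byte chunk size, followed
by the chunk data.
```

A parser can therefore locate the instruction stream without the Direct3D API at all: read the header, walk the offset table, and find the chunk whose FourCC is SHDR or SHEX.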