Just a heads up: you can finally download the DX11 benchmark for the Chinese game Passion Leads Army from Giant IronHorse. The game is powered by Unreal Engine 3, so you've pretty much seen it all in terms of tech, but for those of you who really like benchmarks, you can download it and give it a spin.
Although Unreal Engine 5 was announced just a few weeks ago, it is already being incorporated into games and software. One such utility is EzBench, a free graphics benchmark with support for ray tracing and 8K textures.
For most studios, the number of open bug tickets climbs in a steady linear trajectory from the end of preproduction until the last 10-20% of development. Once the team's focus shifts to polish, the count falls drastically as the team crunches through many overtime hours, hoping to fix the majority by release day. For large studios that follow this pattern, the peak bug count is regularly in the thousands or tens of thousands. However, if automated testing, orchestrated in tandem with a solid CI/CD pipeline, runs throughout development after preproduction, the developers can catch a significant portion of bugs as soon as they are introduced, when they are much easier to fix. Taking the automated-testing approach means development will be slower, or the studio will have to pay extra for dedicated test engineers, but the number of open bugs should stay relatively small throughout the process. This not only avoids a large number of crunch hours, but also means a build of the game can be ready very quickly whenever it needs to be shown to a publisher or potential new investors.
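The two backlog trajectories described above can be sketched with a toy model. All numbers here are invented purely for illustration (bugs introduced per week, the fraction caught immediately by automated tests); the point is only the shape of the curve, not the magnitudes:

```python
def open_bugs(weeks, introduced_per_week, catch_rate):
    """Toy backlog model: each week some bugs are introduced, and a
    fraction of the *new* bugs is caught immediately by automated
    tests; the rest accumulate until a polish phase (not modelled)."""
    open_count = 0.0
    history = []
    for _ in range(weeks):
        open_count += introduced_per_week * (1 - catch_rate)
        history.append(round(open_count))
    return history

# Hypothetical numbers: 100 bugs introduced per week over 30 weeks.
no_ci = open_bugs(30, 100, catch_rate=0.1)    # little automated coverage
with_ci = open_bugs(30, 100, catch_rate=0.8)  # most bugs caught on commit

print(no_ci[-1], with_ci[-1])  # backlog at week 30: 2700 vs 600
```

Even with crude assumptions, the untested backlog ends up several times larger, which is exactly the peak the polish-phase crunch then has to burn down.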
As you can see, Unreal already ships with hundreds of tests that verify all the base functions and classes are performing as expected. This is particularly useful if you are modifying engine code.
With the rise in popularity of blockchain technology driving high demand for high-end graphics cards, and supply-chain shortages during the COVID-19 pandemic, the cost of acquiring a performant graphics card has been steadily increasing.[1] Because the global chip shortage is expected to continue over the course of 2022,[2] it is worth exploring other ways to gain access to high-end graphics cards instead of overpaying for a card or relying on an enthusiast friend to lend you one. The first thing that might come to mind is the use of disposable computational resources from the major cloud providers. But how do the traditionally available high-end graphics cards compare to the cards offered in the cloud? In the following, we compare the newest G-family instances available on AWS: two publicly available Unreal Engine benchmarking tools will be run and evaluated on a g5.4xlarge instance. As a reference for the instance performance, the enthusiast friend mentioned above has kindly provided his high-performance gaming setup running a GeForce RTX 3090.
As expected with the NVIDIA AWS AMI, the A10G card is recognized correctly and has the necessary drivers installed. Two benchmarking tools based on Unreal Engine were installed to test the graphics performance of the g5.4xlarge instance: the Unreal Engine 4 Elemental DX12 tech demo[6] and Pure RayTracing Benchmark v1.51.[7] Additionally, a technical demo built on Unreal Engine 5 called The Market of Light was installed and benchmarked using the frame-rate capture tool native to the online gaming platform Steam. Because it is an interactive, game-like demo, we want to find out subjectively whether an Unreal Engine 5 based game or demo is enjoyable when hosted on a VM in AWS.
With the chosen benchmarking tools taking different approaches to stressing the instance, their results will be discussed separately. The reference high-end gaming PC uses an Intel i7-7700K processor, an ASUS ROG Strix GeForce RTX 3090, and 16 GB of DDR4 RAM at 2400 MHz. The benchmarks for each tool are measured on the g5.4xlarge instance and compared to this reference.
The Elemental DX12 tech demo is tested with no interaction between user and demo, running a cinematic sequence rendered in real time by Unreal Engine 4. During the demo, the current FPS is shown along with the frame latency. The demo was released in Q3 2015, so the benchmark is quite old; nevertheless, it remains demanding on hardware. Remember, the GeForce 900 series cards were introduced around the same time, and their top-tier models remain desirable to this day, selling even above their 2016 retail price.[8]
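The FPS and latency figures shown in the demo overlay are two views of the same quantity: the latency of a steadily rendered frame is simply the reciprocal of the frame rate. A quick conversion helper (hypothetical, not part of the demo) makes the relationship explicit:

```python
def frame_time_ms(fps):
    """Convert a steady frame rate (FPS) to per-frame latency in ms."""
    return 1000.0 / fps

# 60 FPS corresponds to roughly 16.7 ms per frame;
# 30 FPS doubles that to about 33.3 ms.
print(round(frame_time_ms(60), 1), round(frame_time_ms(30), 1))
```

This is why a drop from 60 to 30 FPS feels so much worse than one from 120 to 90: the per-frame latency grows non-linearly as FPS falls.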
Concluding the first test run, we can confirm that the g5.4xlarge instance with the NVIDIA A10G card does not quite match our high-end reference, but delivers impressive performance nonetheless. A summary of all benchmark data is available at the end of the article.
In contrast to the Elemental DX12 tech demo, the Pure RayTracing benchmark is an interactive benchmark, where a user can interact with the environment rendered by Unreal Engine in real time. For comparability, we will stick to the cinematic demo mode also offered by the Pure RayTracing benchmark. Being newer than the benchmark in the previous run, it is potentially more relevant to our use case. Some benchmark results are already posted on the website, but figures for an NVIDIA A10G card are still missing.[7] The benchmark was run three times at full HD resolution (1920 × 1080 pixels), with the quality setting varied between usual, high, and pro.
Figure 4: A screenshot of the Pure RayTracing benchmark performed on a g5.4xlarge instance in interactive mode. Parameters such as resolution, quality settings, and current and average FPS are shown at the bottom of the figure.
Compared to the benchmark results listed on the website,[7] the A10G's performance is comparable to that of an ASUS ROG Strix GeForce RTX 2080 Ti. I leave it to you to look up the current price of that card. Just a hint: at Q1 2022 prices, you could run a g5.4xlarge for quite a while without worrying about overshooting the price of the aforementioned 2080 Ti. Not surprisingly, the performance of an overclocked GeForce RTX 3090 listed on the site[7] was quite similar to our reference, coming in only 3 FPS higher: the OC 3090 is listed at a whopping 57 FPS on pro settings compared to our reference's 54 FPS.
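The "run the instance for quite a while" claim can be made concrete with a break-even calculation. The card price and hourly rate below are illustrative assumptions, not quoted figures; check current retail and AWS on-demand pricing for your region before drawing conclusions:

```python
def break_even_hours(card_price_usd, hourly_rate_usd):
    """Hours of on-demand instance time the price of the card buys."""
    return card_price_usd / hourly_rate_usd

# Assumed figures for illustration only:
card_price = 1500.0  # hypothetical street price of an RTX 2080 Ti in Q1 2022
g5_rate = 1.60       # hypothetical on-demand USD/hour for a g5.4xlarge

hours = break_even_hours(card_price, g5_rate)
print(f"{hours:.0f} hours (~{hours / 24:.0f} days) of g5.4xlarge time")
```

Under these assumptions the card buys roughly a month of continuous instance time, and far longer if the instance only runs during working hours, which is the scenario that makes the cloud option attractive.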
Figure 6: A screenshot of the Pure RayTracing benchmark performed on a g4dn.16xlarge instance in interactive mode. With an average of only 17 FPS, the interactive mode is not as enjoyable as on a g5.4xlarge instance.
This further illustrates the performance increase between instance generations within an AWS instance family, as well as the high computational power demanded by the benchmarking tools used.
Heaven was a DirectX 11 benchmark where you could explore a mythical village floating in the cloudy sky. The buildings and structures in the village were highly detailed and realistic thanks to the use of dynamic tessellation, compute shaders, and Shader Model 5.0.
In recent years, Unreal Engine has emerged as a powerful tool for filmmakers in many ways. It can be used to create photo-real digital films; equally, it can be used as a VFX tool in a traditional VFX pipeline, as well as an engine to drive virtual production environments and real-time VFX.
The GeForce RTX 4090 is the current GPU king, featuring a staggering 24GB of memory. This card wins in virtually all benchmarks and is fully capable of running UE5 and a host of other 3D and post-production applications.
What I have in mind is to benchmark my GTX 650 now with some software, then underclock my GTX 970 until it matches the older card's benchmark score. Is that a good approach? Would it effectively give the same performance?