Passmark Performance Download

Carlito Austin

Aug 4, 2024, 2:16:28 PM
to snowlaveca
We offer independent evaluations of software products for performance and system impact. Our consultancy services help you stay ahead of your competitors at any point in your product's lifespan.

We specialize in developing tools for evaluating computer hardware and software, focusing on solutions that monitor and compare both.


This chart is made up of millions of PerformanceTest benchmark results and is updated daily with new graphics card benchmarks. This high-end chart contains high-performance video cards typically found in premium gaming PCs. Recently introduced AMD video cards (such as the AMD RX 6950 XT) and nVidia graphics cards (such as the nVidia GeForce RTX 3090) using the PCI-Express (PCI-E) standard are common in our high-end video card charts.


I am looking for a utility that will benchmark CPU performance in single- and multi-threaded scenarios. At present I have an old rig with a dual-core CPU (E7500) at 3.6 GHz, and I am looking at replacing it with a quad-core CPU (Q9400) at 3.2 GHz. I want to see whether I will notice a performance improvement from the extra two cores (albeit with a drop in core speed). I will clock the CPUs with the same FSB (400 MHz), the cache size per core is the same (1.5 MB), and for what it's worth I have 4 GB of RAM (with the potential to upgrade to 6 GB).
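
If you just want a rough feel for the scaling before buying, a few lines of Python can approximate the single- vs multi-core comparison (a minimal sketch; the workload and loop counts are arbitrary, not a calibrated benchmark):

    # Sketch: compare single-core vs multi-core throughput on the same
    # CPU-bound workload. Uses multiprocessing rather than threads so the
    # GIL doesn't serialize the work.
    import time
    from multiprocessing import Pool

    def work(n):
        # Arbitrary CPU-bound busy loop: sum of squares.
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        N = 5_000_000
        jobs = 4  # one chunk per core on a quad-core

        t0 = time.perf_counter()
        for _ in range(jobs):
            work(N)
        single = time.perf_counter() - t0

        t0 = time.perf_counter()
        with Pool(jobs) as pool:
            pool.map(work, [N] * jobs)
        multi = time.perf_counter() - t0

        print(f"single-threaded: {single:.2f}s  multi-process: {multi:.2f}s")
        print(f"speedup: {single / multi:.2f}x")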


Alternatively, one can use stress-ng. It has a CPU stress test as one of the many stress tests built into the tool. The CPU stressor offers many different methods covering integer, floating-point, bit operations, mixed compute, prime computation, and a wide range of other workloads.
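
For example, a sketch of scripting a run (assuming stress-ng is installed; the flags below are from the versions I've used, so check stress-ng --help on yours):

    # Sketch: run stress-ng's CPU stressor on all cores for 60 seconds and
    # print its metrics. --cpu 0 means "one stressor per CPU"; stress-ng
    # writes its summary (bogo-ops rates) to stderr.
    import subprocess

    result = subprocess.run(
        ["stress-ng", "--cpu", "0", "--cpu-method", "matrixprod",
         "--metrics-brief", "--timeout", "60s"],
        capture_output=True, text=True,
    )
    print(result.stderr)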


But I was recently looking for a tool available in multiple "distros" (Termux not really being a distro), including Ubuntu, and while the above-mentioned packages are a good common choice, I read here: _linux_stress_test_benchmark_cpu_perf/ that 7-Zip has a built-in benchmarking tool! And 7-Zip can be found in nearly every distro's repository.
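
The built-in benchmark is just 7z b from the command line. A small sketch of scripting it and pulling out the summary, assuming the usual output format with its Avr:/Tot: rows:

    # Sketch: run 7-Zip's built-in benchmark and show the summary lines.
    # Assumes a "7z" binary is on PATH (package names vary: p7zip, 7zip, ...).
    import subprocess

    out = subprocess.run(["7z", "b"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Avr:" in line or "Tot:" in line:
            print(line)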


Phoronix automates and standardizes the benchmarking of several real-world use cases such as compression, encryption, and databases. Most tests benchmark open-source software projects, but some also cover closed-source software. They also host test results at a public site that anyone can upload to, so you can compare your results against other systems.
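
If you want to try it, a minimal sketch of driving one run from a script (the profile name is just an example I've picked for illustration):

    # Sketch: run one Phoronix Test Suite profile from a script.
    # "pts/compress-7zip" is an example profile name; list real ones with
    # "phoronix-test-suite list-available-tests". Note that "benchmark"
    # prompts interactively; use batch-setup + batch-benchmark for
    # unattended runs.
    import subprocess

    subprocess.run(["phoronix-test-suite", "benchmark", "pts/compress-7zip"],
                   check=True)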


We can see how that compares with other systems at: -linux-kernel. I'm running an AMD Ryzen 7 PRO 7840U in a Lenovo ThinkPad P14s, and for that CPU the public results were 129 +/- 9. So, sad face: there seems to be something wrong with my system, as I'm considerably slower than those results. Maybe a performance-mode issue (e.g. the power profile not set to High Performance)? But at least this illustrates the awesome value of having public results available!


There the test result is 22191.17 Bogo Ops/s, which is how stress-ng reports its output; it means simply how many operations it performed in a given amount of time. So this is a different type of test: rather than benchmarking the time to complete a fixed task, it sets a timer and runs as many operations as possible.
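
The distinction is easy to reproduce yourself; a minimal sketch contrasting the two styles (the op() workload is an arbitrary stand-in):

    # Sketch: fixed work (time how long N operations take) vs fixed time
    # (count how many operations fit in T seconds, bogo-ops style).
    import time

    def op():
        return sum(i * i for i in range(1000))  # arbitrary unit of work

    # Fixed work: time-to-completion.
    N = 2000
    t0 = time.perf_counter()
    for _ in range(N):
        op()
    print(f"{N} ops took {time.perf_counter() - t0:.2f}s")

    # Fixed time: ops-per-second.
    T = 2.0
    count = 0
    t0 = time.perf_counter()
    while time.perf_counter() - t0 < T:
        op()
        count += 1
    print(f"{count / T:.0f} ops/s over {T}s")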


SOLIDWORKS Performance Test is a set of tests that compares your system against others. For more information, see About the SOLIDWORKS Performance Test. You can also Share Your Scores with other users.


Comprehensive performance evaluation software developed by the world's leading hardware vendors in cooperation with Dassault Systèmes to exercise a full range of real-world graphics and CPU functionality.


In general, server-grade hardware will perform better than desktop-grade equivalents under load from multiple processes (e.g. running several services serving requests, hosting multiple users, etc.). For day-to-day desktop usage, server hardware won't necessarily perform any better, and can underperform. However, server hardware is typically less error- and fault-prone, and is often found in "workstation-grade" machines--effectively high-end desktops used for professional purposes such as CAD, animation, or running local services like databases.


For a typical desktop user it's better to focus on the hardware metrics that reduce latency: more L2 or L3 cache on a CPU can be more beneficial than many more cores, a faster FSB frequency can be better than extra clock speed, and so on.
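
A crude way to see the latency point in action (a sketch; interpreter overhead dominates in Python, but the gap between sequential and random access to the same data still shows up):

    # Sketch: sum the same array in sequential vs random order. The random
    # walk defeats caching and prefetching, so it runs markedly slower.
    # Shrink N if memory is tight.
    import random
    import time

    N = 5_000_000
    data = list(range(N))
    order = list(range(N))
    random.shuffle(order)

    t0 = time.perf_counter()
    s = sum(data[i] for i in range(N))
    seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    s = sum(data[i] for i in order)
    rnd = time.perf_counter() - t0

    print(f"sequential: {seq:.2f}s  random: {rnd:.2f}s")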


The typical tradeoffs are that server hardware is more expensive, can (in some cases) consume more electricity, and isn't fully utilized in a desktop environment. In terms of performance per dollar for desktop usage it's usually best to get high quality desktop components--a motherboard with a high-end chipset, quality RAM, and quality SSD(s). Many-core server CPUs, ECC memory, and SAS drives don't provide the gains to a desktop that they do to a server.


All else being roughly equal, a processor is a processor. "Server" processors tend to have larger caches, which helps when switching among many tasks but matters less for single-threaded processing. Which processor gives you an edge will depend heavily on the load you're placing on it. The Passmark score is a good general indication of performance, so your new server should be slightly faster overall than the old one, and in some specific applications noticeably faster.


Other people have already given good explanations of the advantages and disadvantages of server hardware vs. desktop hardware; however, there is another thing to point out here: you are going from 4 cores to 8. This means you'll get much better responsiveness when running concurrent (and/or multithreaded) CPU-intensive applications. Of course, if that is not your use case, 8 cores will not be much more useful than 4.


You will notice that the response times and other measures are not absolutes. Taking a page from Six Sigma manufacturing principles, the cost to move from 1 exception in a million to 1 in a billion is extraordinary, and the cost to reach zero exceptions is usually not bearable by the average organization. What is considered acceptable response time for an application unique to your organization will likely be entirely different from that for a highly commoditized, public internet-facing offering. For highly competitive solutions, response-time expectations on the internet are trending toward the 2-3 second range, where user abandonment picks up severely. This has dropped over the past decade from 8 seconds, to 4 seconds, and now into the 2-3 second range. Some applications, like Facebook, shoot for almost imperceptible response times in the sub-one-second range for competitive reasons. If you are looking for a hard standard, one just doesn't exist.
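
One practical way to make such targets concrete is to state them as percentiles rather than averages, since the tail is what users actually notice. A sketch with made-up latency samples:

    # Sketch: summarize measured response times as percentiles.
    # The samples below are invented; substitute real measurements.
    import statistics

    samples_ms = [230, 250, 310, 280, 900, 260, 240, 2700, 270, 300]

    cuts = statistics.quantiles(samples_ms, n=100)
    print(f"p50={cuts[49]:.0f}ms  p90={cuts[89]:.0f}ms  p99={cuts[98]:.0f}ms")
    print(f"mean={statistics.mean(samples_ms):.0f}ms  <- misleading on its own")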


To test the front end, YSlow is great for getting statistics on how long your pages take to load from a user's perspective. It breaks the load down into stats for each specific HTTP request, the time it took, etc. Get it at


Firebug, of course, is also essential. You can profile your JS explicitly, or in real time by hitting the Profile button, making optimisations where necessary and seeing how long all your functions take to run. This changed the way I measure the performance of my JS code.


Really the big thing I would look at is response time, but other indicators I would watch are processor and memory usage versus the number of concurrent users/processes. I would also check that everything is performing as expected under normal and then peak load. You might encounter scenarios where higher load causes application errors due to various requests stepping on each other.
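
A minimal sketch of sampling those indicators while a load test runs (assumes the third-party psutil package, installed with pip install psutil):

    # Sketch: print CPU and memory utilization once per second for ten
    # seconds; run this alongside the load generator.
    import psutil

    for _ in range(10):
        cpu = psutil.cpu_percent(interval=1)  # averaged over the interval
        mem = psutil.virtual_memory().percent
        print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%")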


If you really want to get detailed information you'll want to run different types of load/stress tests. You'll probably want to look at a step load test (a gradual increase of users on the system over time) and a spike test (a significant number of users all accessing at the same time where almost no one was accessing it before). I would also run tests against the server right after it's been rebooted to see how that affects the system.
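
For a quick homegrown step load, something like the following sketch works (the URL is a placeholder; point it at a staging endpoint, never production):

    # Sketch: step load test that doubles the number of concurrent request
    # threads each round and reports success rate and average latency.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/health"  # hypothetical endpoint

    def hit(_):
        t0 = time.perf_counter()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
            return time.perf_counter() - t0
        except OSError:
            return None  # a real harness would tally errors separately

    for users in (1, 2, 4, 8, 16):
        with ThreadPoolExecutor(max_workers=users) as ex:
            results = list(ex.map(hit, range(users * 10)))
        ok = [r for r in results if r is not None]
        avg = sum(ok) / len(ok) if ok else float("nan")
        print(f"{users:3d} users: {len(ok)}/{len(results)} ok, avg {avg:.3f}s")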


You'll also probably want to look at a concept called HEAT (Hostile Environment Application Testing). This shows what happens when some part of the system goes offline. Does the system degrade gracefully? This should be a key standard.
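
In code terms, graceful degradation often looks like serving stale data instead of an error when a dependency is down. A hypothetical sketch (the URL, cache shape, and fallback policy are all illustrative assumptions):

    # Sketch: probe a dependency; on failure, fall back to the last good
    # response instead of failing outright.
    import json
    import urllib.request

    _cache = {}

    def fetch(url):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                data = json.load(resp)
                _cache[url] = data  # refresh cache on success
                return data, "live"
        except OSError:
            return _cache.get(url), "degraded"  # stale data beats an error page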


My one really big piece of advice is to establish what the system is supposed to do before doing the testing. The main reason is accountability. Get people to agree that the system is supposed to do something, and then test to see whether it holds true. This is key because people will immediately see the results, and those will become the baseline benchmark for what is acceptable.


The gist:

I used the free trial period of PassMark PerformanceTest to benchmark my machine. If you're willing to run a similar test, I'd really love to see the comparison. It will assess and stress-test the CPU, memory, drive(s), and 2D and 3D graphics. If you've got something else (that's free), feel free to run it on your system and just let me know which tool you used so I can grab it myself.


The background and expansion:

I had been using AutoCAD C3D 2014 for a while with relatively poor performance -- slow and choppy panning and zooming with 2D drafting, often locking up and crashing on anything more serious -- on a Dell M4800 with a Quadro K1100M card. That's more than enough per Autodesk's hardware specifications (the rest of the system, too). I upgraded to C3D 2019 and had slightly worse (than my already crappy) performance, and it was affecting my work substantially. So I decided to buy/build a new machine: a custom computer built around the Quadro P4000 card. 2x 6-core Xeon E5-2640 CPUs, 64 GB RAM, SSD, P4000 GPU, Windows 10 Pro 64-bit, the very latest drivers and BIOS, etc.


I ran a benchmarking test on my machine and found some interesting results. The most surprising was a fairly low benchmark score on 2D graphics with this card. For 3D graphics I'm at 7369, or the 86th percentile. 2D graphics, on the other hand, is 462, or the 34th percentile. I'm not thrilled with my memory or disk benchmarks either, but they're in the 60s percentile-wise, which suggests they're not the weak link.
