AIDA64 Memory Benchmark Download

Lina Gullace
Jan 25, 2024, 8:14:12 PM

Benchmark pages of AIDA64 Extreme provide several methods to measure system performance. These benchmarks are synthetic, so their results show only the theoretical (maximum) performance of the system. The CPU and FPU benchmarks of AIDA64 Extreme are built on the multi-threaded AIDA64 Benchmark Engine, which supports up to 1280 simultaneous processing threads and is aware of multi-processor, multi-core and HyperThreading-enabled systems.
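For illustration only, here is a minimal sketch of how a multi-threaded benchmark engine can dispatch work, written in C with POSIX threads (compile with -pthread). It is not AIDA64 code: the per-thread kernel is a placeholder, and the only details taken from the description above are the 1280-thread upper bound and the one-thread-per-logical-CPU idea.

    /* Illustrative multi-threaded benchmark dispatcher (not AIDA64 code). */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_THREADS 1280          /* upper bound mentioned in the text */

    static void *worker(void *arg)
    {
        volatile unsigned long acc = 0;
        for (unsigned long i = 0; i < 100000000UL; i++)
            acc += i;                  /* placeholder integer kernel */
        *(unsigned long *)arg = acc;
        return NULL;
    }

    int main(void)
    {
        long n = sysconf(_SC_NPROCESSORS_ONLN);   /* one thread per logical CPU */
        if (n > MAX_THREADS) n = MAX_THREADS;

        pthread_t tid[MAX_THREADS];
        unsigned long result[MAX_THREADS];

        for (long i = 0; i < n; i++)
            pthread_create(&tid[i], NULL, worker, &result[i]);
        for (long i = 0; i < n; i++)
            pthread_join(tid[i], NULL);

        printf("ran kernel on %ld threads\n", n);
        return 0;
    }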

These benchmarks measure single and double precision (also known as 32-bit and 64-bit) floating-point performance through the computation of a scene with a SIMD-enhanced ray tracing engine. The code behind this benchmark method is written in Assembly, and it is extremely optimized for all popular AMD, Intel and VIA processor core variants by utilizing the appropriate x87, SSE, SSE2, SSE3, SSSE3, SSE4.1, AVX, AVX2, XOP, FMA, FMA4 and AVX-512 instruction set extensions. Both the FP32 and FP64 Ray-Trace tests are HyperThreading, multi-processor (SMP) and multi-core (CMP) aware.
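As a rough idea of the floating-point work such a test performs, here is a hedged C sketch of a single-precision ray-sphere intersection, the basic primitive of a ray tracer; an FP64 variant would simply use double instead of float. The scene, the scalar style and the math layout are illustrative assumptions; the real kernel is hand-written, SIMD-optimized assembly.

    /* Single-precision ray-sphere intersection, the core primitive of a
     * ray tracer. Illustrative only; dir is assumed to be normalized. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } vec3;

    static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Returns the nearest positive hit distance, or -1 if the ray misses. */
    static float ray_sphere(vec3 orig, vec3 dir, vec3 center, float radius)
    {
        vec3 oc = { orig.x - center.x, orig.y - center.y, orig.z - center.z };
        float b = 2.0f * dot(oc, dir);
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - 4.0f * c;
        if (disc < 0.0f) return -1.0f;
        return (-b - sqrtf(disc)) / 2.0f;
    }

    int main(void)
    {
        vec3 o = {0, 0, 0}, d = {0, 0, 1}, c = {0, 0, 5};
        printf("hit at t = %f\n", ray_sphere(o, d, c, 1.0f));  /* expect 4.0 */
        return 0;
    }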

Memory bandwidth benchmarks (Memory Read, Memory Write, Memory Copy) measure the maximum achievable memory data transfer bandwidth. The code behind these benchmark methods is written in Assembly, and it is extremely optimized for all popular AMD, Intel and VIA processor core variants by utilizing the appropriate x86/x64, x87, MMX, MMX+, 3DNow!, SSE, SSE2, SSE4.1, AVX, AVX2 and AVX-512 instruction set extensions. The Memory Latency benchmark measures the typical delay when the CPU reads data from system memory. Memory latency time means the penalty measured from the issuing of the read command until the data arrives in the integer registers of the CPU.
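To make the latency definition concrete, here is a hedged C sketch of the common pointer-chasing technique: each load must complete before the next address is known, so the average time per step approximates the read latency. The 64 MiB buffer, step count and POSIX timing are my own choices, not AIDA64's parameters.

    /* Rough pointer-chase latency sketch (not AIDA64's method). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ELEMS (size_t)(64 * 1024 * 1024 / sizeof(void *))  /* ~64 MiB buffer */
    #define STEPS (size_t)(16 * 1024 * 1024)

    int main(void)
    {
        void **buf = malloc(ELEMS * sizeof(void *));
        size_t *idx = malloc(ELEMS * sizeof(size_t));
        if (!buf || !idx) return 1;

        /* Shuffle the indices (Fisher-Yates), then link them into one big
         * cycle so hardware prefetchers cannot guess the next address. */
        for (size_t i = 0; i < ELEMS; i++) idx[i] = i;
        srand(1);
        for (size_t i = ELEMS - 1; i > 0; i--) {
            size_t k = (size_t)rand() % (i + 1);
            size_t t = idx[i]; idx[i] = idx[k]; idx[k] = t;
        }
        for (size_t i = 0; i < ELEMS; i++)
            buf[idx[i]] = &buf[idx[(i + 1) % ELEMS]];
        free(idx);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        void **p = (void **)buf[0];
        for (size_t i = 0; i < STEPS; i++)
            p = (void **)*p;                 /* dependent load chain */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("average read latency: %.1f ns (sink %p)\n", ns / STEPS, (void *)p);
        free(buf);
        return 0;
    }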

This simple integer benchmark focuses on the branch prediction capabilities and the misprediction penalties of the CPU. It finds the solutions to the classic "Queens problem" on a 10 by 10 chessboard. At the same clock speed, the processor with the shorter pipeline and smaller misprediction penalties will theoretically attain higher benchmark scores. For example -- with HyperThreading disabled -- Intel Northwood core processors score higher than Intel Prescott core based ones due to their 20-stage versus 31-stage pipelines. CPU Queen test uses integer MMX, SSE2 and SSSE3 optimizations.
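For reference, a plain C backtracking solver for the same 10 by 10 Queens problem looks like the sketch below. It is illustrative only (the actual test uses SIMD-optimized integer code), but it shows why the workload is dominated by hard-to-predict branches.

    /* Backtracking N-queens counter; branch-heavy integer workload. */
    #include <stdio.h>

    #define N 10

    static int col[N];                     /* col[r] = column of queen in row r */

    static int safe(int r, int c)
    {
        for (int i = 0; i < r; i++)
            if (col[i] == c || r - i == c - col[i] || r - i == col[i] - c)
                return 0;                  /* same column or same diagonal */
        return 1;
    }

    static long solve(int r)
    {
        if (r == N) return 1;
        long count = 0;
        for (int c = 0; c < N; c++) {      /* hard-to-predict branch per column */
            if (safe(r, c)) {
                col[r] = c;
                count += solve(r + 1);
            }
        }
        return count;
    }

    int main(void)
    {
        printf("%d-queens solutions: %ld\n", N, solve(0));   /* 724 for N = 10 */
        return 0;
    }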

This benchmark stresses the SIMD integer arithmetic execution units of the CPU and also the memory subsystem. CPU PhotoWorxx test uses the appropriate x87, MMX, MMX+, 3DNow!, 3DNow!+, SSE, SSE2, SSSE3, SSE4.1, SSE4A, AVX, AVX2, XOP and AVX-512 instruction set extensions, and it is NUMA, HyperThreading, multi-processor (SMP) and multi-core (CMP) aware.
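The PhotoWorxx kernels themselves are not public, but the following hedged C sketch shows the general kind of SIMD integer pixel processing involved: brightening 8-bit pixels sixteen at a time with an SSE2 saturating add (compile with SSE2 enabled). The pixel data and the operation are placeholders.

    /* SSE2 saturating add over 16 pixels at once; illustrative only. */
    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t pixels[16] = { 0, 50, 100, 150, 200, 250, 10, 20,
                               30, 40, 60, 70, 80, 90, 110, 120 };
        __m128i v  = _mm_loadu_si128((const __m128i *)pixels);
        __m128i up = _mm_set1_epi8((char)40);           /* add 40 to every pixel */
        __m128i r  = _mm_adds_epu8(v, up);              /* unsigned saturating add */
        _mm_storeu_si128((__m128i *)pixels, r);

        for (int i = 0; i < 16; i++)
            printf("%d ", pixels[i]);                   /* 250 clamps to 255 */
        printf("\n");
        return 0;
    }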

This integer benchmark measures combined CPU and memory subsystem performance through the public ZLib compression library. CPU ZLib test uses only the basic x86 instructions, and it is HyperThreading, multi-processor (SMP) and multi-core (CMP) aware.
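A minimal, hedged sketch of such a compression benchmark in C is shown below: it compresses a buffer repeatedly with zlib's one-shot compress() and times it (link with -lz). Buffer size, contents and repeat count are arbitrary choices, not AIDA64's.

    /* Timed zlib compression loop; parameters are placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <zlib.h>

    #define SRC_LEN (4 * 1024 * 1024)
    #define REPEATS 20

    int main(void)
    {
        unsigned char *src = malloc(SRC_LEN);
        uLong bound = compressBound(SRC_LEN);
        unsigned char *dst = malloc(bound);
        if (!src || !dst) return 1;

        for (size_t i = 0; i < SRC_LEN; i++)
            src[i] = (unsigned char)(i % 251);           /* mildly compressible data */

        clock_t t0 = clock();
        uLongf dlen = bound;
        for (int r = 0; r < REPEATS; r++) {
            dlen = bound;                                /* reset output capacity */
            compress(dst, &dlen, src, SRC_LEN);          /* zlib one-shot compress */
        }
        double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("compressed %d x %d MiB in %.2f s (%lu bytes out)\n",
               REPEATS, SRC_LEN / (1024 * 1024), sec, (unsigned long)dlen);
        free(src); free(dst);
        return 0;
    }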

This benchmark measures CPU performance using AES (Advanced Encryption Standard) data encryption. In cryptography, AES is a symmetric-key encryption standard. AES is used in several compression tools today, like 7z, RAR and WinZip, and also in disk encryption solutions like BitLocker, FileVault (Mac OS X) and TrueCrypt. CPU AES test uses the appropriate x86, MMX and SSE4.1 instructions, and it is hardware accelerated on VIA PadLock Security Engine capable VIA C3, VIA C7, VIA Nano and VIA QuadCore processors, as well as on processors supporting the Intel AES-NI instruction set extension and on future VAES-capable processors. The test is HyperThreading, multi-processor (SMP) and multi-core (CMP) aware.
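To show what the hardware acceleration looks like in practice, here is a hedged C sketch using the Intel AES-NI intrinsics (compile with -maes). It performs only a single AES round on one block; real AES-128 needs a proper key schedule and 10 rounds, and the zeroed key and block are placeholders.

    /* One AES round via AES-NI intrinsics; not a complete cipher. */
    #include <wmmintrin.h>   /* AES-NI intrinsics */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t block[16] = {0}, key[16] = {0};          /* placeholder data */
        __m128i state = _mm_loadu_si128((const __m128i *)block);
        __m128i rk    = _mm_loadu_si128((const __m128i *)key);

        state = _mm_xor_si128(state, rk);          /* AddRoundKey (round 0) */
        state = _mm_aesenc_si128(state, rk);       /* one full AES round */
        state = _mm_aesenclast_si128(state, rk);   /* final round (no MixColumns) */

        uint8_t out[16];
        _mm_storeu_si128((__m128i *)out, state);
        for (int i = 0; i < 16; i++) printf("%02x", out[i]);
        printf("\n");
        return 0;
    }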

This benchmark measures CPU performance using the SHA1 hashing algorithm defined in the Federal Information Processing Standards Publication 180-4. The code behind this benchmark method is written in Assembly, and it is optimized for all popular AMD, Intel and VIA processor core variants by utilizing the appropriate MMX, MMX+/SSE, SSE2, SSSE3, AVX, AVX2, XOP, BMI, BMI2 and AVX-512 instruction set extensions. CPU Hash benchmark is hardware accelerated on VIA PadLock Security Engine capable VIA C7, VIA Nano and VIA QuadCore processors.
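A hedged sketch of a hashing throughput test in C is given below, using OpenSSL's EVP_Digest with SHA-1 and reporting MB/s (link with -lcrypto). The buffer size and contents are placeholders, and OpenSSL stands in for AIDA64's own hand-optimized assembly.

    /* SHA-1 throughput measurement with OpenSSL; illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <openssl/evp.h>

    #define BUF_LEN ((size_t)64 * 1024 * 1024)   /* 64 MiB of test data */

    int main(void)
    {
        unsigned char *buf = malloc(BUF_LEN);
        if (!buf) return 1;
        for (size_t i = 0; i < BUF_LEN; i++)
            buf[i] = (unsigned char)i;

        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int md_len = 0;

        clock_t t0 = clock();
        EVP_Digest(buf, BUF_LEN, md, &md_len, EVP_sha1(), NULL);
        double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("SHA-1 of 64 MiB in %.3f s (~%.0f MB/s), digest %u bytes\n",
               sec, BUF_LEN / 1e6 / sec, md_len);
        free(buf);
        return 0;
    }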

This benchmark measures the single precision (also known as 32-bit) floating-point performance through the computation of several frames of the popular "Julia" fractal. The code behind this benchmark method is written in Assembly, and it is extremely optimized for all popular AMD, Intel and VIA processor core variants by utilizing the appropriate x87, 3DNow!, 3DNow!+, SSE, AVX, AVX2, FMA, FMA4 and AVX-512 instruction set extensions. FPU Julia test is HyperThreading, multi-processor (SMP) and multi-core (CMP) aware.
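As an illustration of the underlying math, here is a compact single-precision C sketch of the Julia iteration z = z^2 + c over a pixel grid with a fixed constant c. The resolution, constant and iteration cap are arbitrary assumptions, not the benchmark's parameters.

    /* Single-precision Julia iteration over a pixel grid; illustrative only. */
    #include <stdio.h>

    #define W 512
    #define H 512
    #define MAX_ITER 256

    int main(void)
    {
        const float cr = -0.7f, ci = 0.27015f;    /* classic Julia constant */
        long total = 0;                           /* total iterations = the "work" */

        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++) {
                float zr = 3.0f * x / W - 1.5f;   /* map pixel to [-1.5, 1.5] */
                float zi = 3.0f * y / H - 1.5f;
                int it = 0;
                while (zr * zr + zi * zi < 4.0f && it < MAX_ITER) {
                    float t = zr * zr - zi * zi + cr;
                    zi = 2.0f * zr * zi + ci;
                    zr = t;
                    it++;
                }
                total += it;
            }
        }
        printf("total iterations: %ld\n", total);
        return 0;
    }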

This benchmark measures the double precision (also known as 64-bit) floating-point performance through the computation of several frames of the popular "Mandelbrot" fractal. The code behind this benchmark method is written in Assembly, and it is extremely optimized for all popular AMD, Intel and VIA processor core variants by utilizing the appropriate x87, SSE2, AVX, AVX2, FMA, FMA4 and AVX-512 instruction set extensions. FPU Mandel test is HyperThreading, multi-processor (SMP) and multi-core (CMP) aware.
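The double-precision counterpart looks almost identical; the sketch below shows the Mandelbrot inner loop, where c comes from the pixel position and z starts at zero. Again, all parameters are illustrative.

    /* Double-precision Mandelbrot inner loop; illustrative only. */
    #include <stdio.h>

    #define W 512
    #define H 512
    #define MAX_ITER 256

    int main(void)
    {
        long total = 0;

        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++) {
                double cr = 3.5 * x / W - 2.5;    /* map pixel to [-2.5, 1.0] */
                double ci = 2.0 * y / H - 1.0;    /* map pixel to [-1.0, 1.0] */
                double zr = 0.0, zi = 0.0;
                int it = 0;
                while (zr * zr + zi * zi < 4.0 && it < MAX_ITER) {
                    double t = zr * zr - zi * zi + cr;
                    zi = 2.0 * zr * zi + ci;
                    zr = t;
                    it++;
                }
                total += it;
            }
        }
        printf("total iterations: %ld\n", total);
        return 0;
    }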

This benchmark measures the extended precision (also known as 80-bit) floating-point performance through the computation of a single frame of a modified "Julia" fractal. The code behind this benchmark method is written in Assembly, and it is extremely optimized for all popular AMD, Intel and VIA processor core variants by utilizing trigonometric and exponential x87 instructions. FPU SinJulia is HyperThreading, multi-processor (SMP) and multi-core (CMP) aware.
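The exact formula AIDA64 iterates here is not public, so the following C sketch only demonstrates the ingredients: long double arithmetic (80-bit x87 extended precision on x86) mixed with trigonometric and exponential calls (link with -lm). The recurrence itself is a made-up placeholder.

    /* Extended-precision transcendental workload sketch; the formula is a
     * placeholder, not the SinJulia kernel. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        long double acc = 0.0L;

        /* Iterate a transcendental-heavy recurrence over many points. */
        for (int i = 1; i <= 200000; i++) {
            long double x = (long double)i / 200000.0L;
            acc += sinl(x) * expl(-x) + cosl(x * x);   /* 80-bit trig/exp work */
        }
        printf("checksum: %.18Lf\n", acc);
        return 0;
    }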

2) I have read in many places that the Everest benchmark (now AIDA64) did not report realistic values (on the X58 platform) compared to other software (the STREAM benchmark, for example), and that this could be because AIDA64's memory benchmark is single-threaded.

Quite frankly, it's an exaggeration to say that it would "not provide actual values". It's true that our memory benchmarks are single-threaded, and that this is a different approach to measuring memory bandwidth than what certain other software uses. However, it is still a useful benchmark: it can be used to compare the performance of various systems, to measure the performance gain from a hardware upgrade, or to measure the gain from overclocking or fine-tuning your PC.

We've already started developing multi-threaded memory benchmarks, but they're lower on our priority list than certain other improvements, e.g. brand new CPU benchmark methods, an auto-update feature, or GPGPU benchmarks. We do feel that there is room for improvement in our memory benchmarks, but we don't consider the existing benchmarks useless or crippled at all. Our technology partners (incl. Intel Corporation) happily use the existing memory benchmarks for various purposes in their labs, and that alone proves that they're indeed reliable and useful.

My guess is this: the cores are in a sleep state (I know this for sure, since I run AIDA64 with the Ryzen Master window open) and the benchmark finishes before the cores wake up. When a fixed core frequency is used, the benchmark runs on cores that are already awake, and the results are more accurate.

The latest AIDA64 update implements 64-bit AVX-512 accelerated benchmarks, adds monitoring of sensor values on Asus ROG RGB LED motherboards and video cards, and supports the latest AMD and Intel CPU platforms as well as the new graphics and GPGPU computing technologies by both AMD and nVIDIA.

It's a very strange issue. It might be because something interferes with the CPU clock measurement part of AIDA64. When you get zero benchmark results, please check the BCLK shown on the AIDA64 CPUID Panel (main menu / Tools / AIDA64 CPUID).

BIOS optimized settings, which only overclock the CPU to 3.65 GHz (the memory clock is the default 2133). I can't complete the Cache and Memory Benchmark; my computer blacks out during the copy phase every time. The system stability test works fine, with the CPU temperature holding at about 61 °C.

Again: The only thing that will return memory latency reports to normal values (after opening the Steam Client) is restarting the computer. Nothing else I've tried will revert this behavior once the Steam Client has been opened.

Fresh PC start> AIDA64 Cache & Memory Benchmark> 68ns memory latency> Open Steam> AIDA64 Cache & Memory Benchmark> 71/72ns memory latency> Force closing the Steam Client> AIDA64 Cache & Memory Benchmark> 71/72ns memory latency.

Fresh PC start> AIDA64 Cache & Memory Benchmark> 68ns memory latency> Open Steam> AIDA64 Cache & Memory Benchmark> 70ns memory latency> Force closing the Steam Client> AIDA64 Cache & Memory Benchmark> 68ns memory latency.

Oddly enough, apparently Steam is running or activating some background process/service when the HTC Vive is plugged in as usual. This is not only consuming a few more CPU cycles than it normally would, but *also continues to do so even after force closing the Steam Client*. That's why memory latency does NOT return to normal values under those conditions.

Fresh PC start> AIDA64 Cache & Memory Benchmark> 68ns memory latency
Open Steam> AIDA64 Cache & Memory Benchmark> 70ns memory latency
Force closing the Steam Client> AIDA64 Cache & Memory Benchmark> 68ns memory latency.

AIDA64 provides a basic synthetic benchmark for comparing the read, write, and copy performance of system memory while also measuring latency. This should give us the raw bandwidth for each memory configuration that was tested. Later on, we'll see how this translates into real-world performance.

Our preliminary results show us the expected memory scaling. The faster DDR3-2133 memory has a 36% advantage over the slowest DDR3-1333 memory in the read test, and the copy and latency tests show similar results. However, the write test closes the gap with only a 7% difference between the fastest and the slowest. Overall, we see a linear performance increase as the memory clock speed is raised as well as when the CAS latency is lowered. Synthetic tests are really the best-case scenario, so let's move on to find out how the extra raw bandwidth affects our other tests.
