Cinebench Not Running

Lorriane Nasuti

Aug 3, 2024, 2:48:19 PM
to farcdesoled

Also make sure you are using good thermal paste. I would use a dab about the size of a green pea. There are lots of opinions out there on the right way to do this. I always pre-spread it and put a very thin layer on the cooler base too.

If that doesn't help and your performance is not being crippled by throttling, then you are within spec. If you think you are losing performance, talk to AMD support for advice and, if need be, start the RMA process. -email-form For warranty: -information/pib

I am seeing a lot of people saying that as long as you don't reach 95 degrees (although the lower the better, of course), everything is fine. I ran another Cinebench after changing my power settings and got a maximum of 79 degrees, which I am happy with considering the factors (25 degrees ambient being one).

I also don't think the heat does me any favours, so I am going to wait and do more tests in a cooler environment. If it starts causing real problems such as blue screens or shutdowns, I will reconsider my approach and get in touch with AMD.

Prime95 "small FFTs" is by far the best test for heat. Everything else is just a moderate-to-high load, even CPU-Z. If you are worried about your setup, or whether something is not working as it should with regard to temps, just run Prime95 small FFTs and really push it to the limit. The reason you want to do this is that you never really know what kind of load an app, a game, an encode, or anything else is going to put on your CPU... not all 100% loads are created equal.

For me, I have PBO enabled, which pushes the temps far too high for my liking even with my iCUE H100i RGB PRO XT, so I found a nice undervolt which is rock-solid stable with PBO. At first it seemed an offset of -0.14375V was good, but after a few random crashes I settled on -0.13125V.

There are plenty of utilities that will push the limits of a processor. Everyone seems to have a favorite. Exactly as you said, if CPU-Z causes overheating, it shows you have an issue. You don't have to run Prime95 all night to know.

With an air cooler, 15 minutes should be plenty of time to find the max temp... unless of course you have a case with bad airflow, in which case it may take longer to heat up the air inside the case. But if your max is under 85C for this torture test, I would put your fears to rest...

In my experience, good airflow in and around the case helps with CPU cooling and all temperatures. If intake of fresh air and extraction of hot air are not in equilibrium, hot air can get stuck around the CPU cooler, like in an oven.

It is constantly staying above 100C, which is somewhat concerning. When I was compiling some software, the fans maxed out but the CPU was still at 100.2C while running at around 4.2 GHz. It stayed that high for five minutes or more.

In addition, CPU performance is affected not just by temps but by your other components. In particular, RAM running below 5600 MT/s will have a significant impact, but I could see a slow SSD or other components also having some effect.

I did look at that post and someone mentioned scores that were also around 13500. I just wanted to see if the majority of users with the Ryzen 9 7940HS version of the Framework 16 were experiencing the same issues.

This user with a 7840HS says they got about 15k in R23 once support sent them a new mainboard with fixed thermal performance. They were only getting about 14k with their original board with large temperature deltas between cores. High Thermals on a 7840HS - #19 by Snipe

In some previous benchmarks, my CPU had around a 13 degree difference between the hottest core at 99.6 (it reaches even higher, but that was the temp when I took the screenshot) and the lowest core at around 86.2 degrees. This is one of the screenshots I filed with support.

Please keep us posted on how your ticket goes. I have a 7840HS, but I am seeing the same as you: an average delta of 13 degrees between my hottest and coldest cores under stress, though it reaches as high as 16 degrees at its peak.

I have been using mostly Cinebench, and before seeing this thread and the one linked about the 23 degree delta, I hate to say it, but I was only looking at the listed package temperature and not individual core temperatures.

I was using HWMonitor and just leaving it open and watching it. I have been rerunning it with HWInfo this time. Core 1 is consistently the coolest core and Core 6 is consistently the hottest for me. I have a TDP-locked and a TDP-unlocked run logged here, but they average out to be about the same after the limit kicks in, with just the peaks being higher.
TDP Locked (screenshot attachment, 154 KB)

I will also add that I have tried a few operating systems on the FW16 now. Fedora 39/40, Zorin 17(17.1 I think but I cannot swear to it), and Windows 11 all just ran the CPU up to 99/100 degrees and kept it screaming there. Now that I am on Windows 10 it seems to actually care about the CPU somewhat and the temps are much more reasonable.

I am currently trying to make a batch file to run a simple Cinebench benchmark on the CPU and then output the results. I can make it run the tests and output into the command prompt window, but I'm not sure how to capture this output and put it into a file.

Here is a copy from my CLI trying to run my current command. I shortened the output for clarity, as without it the output is extremely long. All of the "Loading" lines show no signs of error other than the first module, which to my knowledge is not critical to running the benchmarks, as the app can run, the benchmarks will complete, and the app can create its own report files. The folder is also empty on a fresh install, so I don't know its purpose.

I want this output to go into a file with the path "C:\Test Results\CPUBench.txt". I would also like to filter out all lines containing "Finalizing" and "Loading", as these aren't relevant for me. I would also like to completely overwrite the file each time, if at all possible.

I have some basic UNIX/LINUX experience, but Windows CLI is relatively unknown to me. My go-to standard output redirection symbol '>' doesn't seem to be working for this task. I can make it a batch file myself after I have the command working, so I really just need a way for this one command to do what I want.

I needed this functionality as well. Cinebench must be launched with start and its own console in order to see its stdout, but start doesn't play well with stdout redirection. Here's what I wrote after some experimentation:

This performs 5 runs of Cinebench R23's multi-core test, with no minimum duration per test. The full log output is stored in r23_output.txt; the actual test results are extracted from the full log and stored in r23_results.txt. On my system I've chosen to put the Cinebench executable in c:\miscprogs\R23 - change to whatever path you've placed it in. If you'd like to run the single-core test then change g_CinebenchCpu1Test= to true.
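The batch script itself isn't reproduced in this thread, so here is a minimal Python sketch of the same idea (run the benchmark, keep the full log, extract just the result lines into a second file). The `CB \d+` result pattern, the file names, and all flags other than the g_CinebenchCpu1Test option mentioned above are assumptions, not the original script:

```python
import re
import subprocess
from pathlib import Path

def run_and_extract(cmd, full_log="r23_output.txt", results="r23_results.txt",
                    pattern=r"CB \d+"):
    """Run a command, write its full output to full_log, and copy the
    lines matching pattern into results (both files are overwritten)."""
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    Path(full_log).write_text(out)
    hits = [line for line in out.splitlines() if re.search(pattern, line)]
    Path(results).write_text("\n".join(hits) + "\n")
    return hits

# Hypothetical usage -- the flags shown are guesses at Cinebench's CLI:
# run_and_extract([r"c:\miscprogs\R23\Cinebench.exe",
#                  "g_CinebenchCpu1Test=false",
#                  "g_CinebenchAllTestRepeats=5"])
```

The same filtering could be done in plain cmd with findstr, but keeping it in one script makes the result extraction easier to adjust.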

I could run python -m timeit, which is probably the simplest way of measuring how long it takes to execute some code[1]. But maybe it's too simple, and I owe my readers some way of benchmarking that won't be thrown off by sudden CPU spikes on my computer?

So here are a couple of different tools and techniques I tried. At the end of the article, I will tell you which one I chose and why. Plus, I will give you some rules of thumb for when each tool might be handy.

The easiest way to measure how long it takes to run some code is to use the timeit module. You can write python -m timeit "your_code()", and Python will print out how long it took to run whatever your_code() does. I like to put the code I want to benchmark inside a function for clarity, but you don't have to. You can directly write multiple Python statements separated by semicolons, and that will work just fine. For example, to see how long it takes to sum up the first 1,000,000 numbers, we can run this code:
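As a sketch, that command would be python -m timeit "sum(range(1_000_000))"; the timeit module offers the same measurement from inside Python (the loop count here is an arbitrary choice, while the command-line form picks one automatically):

```python
import timeit

# Equivalent of: python -m timeit "sum(range(1_000_000))"
# number=20 is an arbitrary loop count for this sketch.
total = timeit.timeit("sum(range(1_000_000))", number=20)
print(f"{total / 20:.6f} seconds per loop")
```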

However, the python -m timeit approach has a major drawback: it doesn't separate the setup code from the code you want to benchmark. Let's say you have an import statement that takes a relatively long time compared to executing a function from that module. One such import is import numpy. If we benchmark those two lines of code:

python -m timeit "import numpy; numpy.arange(10)"

the import will take most of the time during the benchmark. But you probably don't want to benchmark how long it takes to import modules. You want to see how long it takes to execute some functions from that module.

To separate the setup code from the benchmarked code, timeit supports the -s parameter. Whatever code you pass there will be executed but won't be part of the benchmark. So we can improve the above code and run it like this: python -m timeit -s "import numpy" "numpy.arange(10)".
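In the timeit module API, the same separation is the setup argument. A small stdlib-only sketch (sorting is just a stand-in workload here, since numpy may not be installed):

```python
import timeit

# setup runs once and is not timed, mirroring the -s flag;
# only the stmt string is measured.
t = timeit.timeit(
    stmt="sorted(data)",
    setup="import random; random.seed(0); "
          "data = [random.random() for _ in range(1000)]",
    number=1000,
)
print(f"{t / 1000:.6f} seconds per sort")
```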

We can be a bit more strict and decide to execute our code the same number of times each time. By default, if you don't specify the -n (or --number) parameter, timeit will try running your code 1, 2, 5, 10, 20, ... times until the total execution time exceeds 0.2 seconds. A slow function will be executed once, but a very fast one will run thousands of times. If you think executing different code snippets a different number of times affects your benchmarks, you can set this parameter to a predefined number.
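In the module API this is the number argument; pinning it makes two snippets directly comparable (a sketch with an arbitrary loop count):

```python
import timeit

# Same loop count for both snippets, like passing -n 10000 on the command line.
N = 10_000
fast = timeit.timeit("sum(range(100))", number=N)
slow = timeit.timeit("sum(range(10_000))", number=N)
print(f"fast: {fast:.4f}s  slow: {slow:.4f}s over {N} loops each")
```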

One of the issues with running benchmarks with python -m timeit is that sometimes other processes on your computer might affect the Python process and randomly slow it down. For example, I've noticed that if I run my benchmarks with all the usual applications open (multiple Chrome instances with plenty of tabs, Teams and other messenger apps, etc.), they all take a bit longer than when I close basically all the apps on my computer.

And the results seemed more consistent than without using Docker. Rerunning benchmarks multiple times, I was getting results with smaller deviations. I still had some deviation - some runs were slightly slower, and some were slightly faster - but that was the case for short code examples (running under 1 second). For longer code examples (running at least a few seconds), the difference between runs was still around 5% (I've tested Docker with my bubble sort example from the Upgrade Your Python Version article). So, as one vigilant commenter suggested, Docker doesn't really help much here.
