Is PT taking background CPU/GPU/Disk usage into account?

  • #1

    It appears the (mean?) averages displayed on the website can sometimes be skewed when a few users submit results significantly worse than the rest; this is especially the case when there are only tens of results.

    e.g. I was looking at the results for the A9-9420e and found only 15 entries in PTv10; most scores fall in the 1200 - 1300 range, while one user submitted a score of 700.
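    To put some numbers on it, here is a quick sketch of how much one low result moves the mean at this sample size (the scores below are made-up round numbers for illustration, not the actual A9-9420e submissions):

    ```python
    import statistics

    # Hypothetical example: 14 "normal" results around 1250 plus one outlier of 700.
    scores = [1250] * 14 + [700]

    print(statistics.mean(scores))    # ~1213: the single outlier drags the mean down ~3%
    print(statistics.median(scores))  # 1250: the median is unaffected
    ```

    With hundreds of submissions the same single outlier would barely register, which is why this mostly matters for rarely benchmarked CPUs.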

    Looking at the downloaded baseline, it appears your software also stores which background processes are running (not sure why?), yet perhaps it doesn't refuse to run a benchmark even when the CPU is overloaded with background tasks (whether open browsers or background Windows tasks), and worse still, doesn't inform the user and lets them submit a skewed result?

    I came across a video of a user who was a little dumbfounded when he ran UserBenchmark and found his machine scored significantly worse than other similarly spec'd machines, unaware that background tasks likely produced the wrong figures - the website even stated that there had been significant background CPU usage!

  • #2
    For a given CPU there is often a wide range of results. Here is the distribution of results for the Ryzen 7 2700X:

    [Image: CPUMark-2700x.png - distribution of CPU Mark results for the Ryzen 7 2700X]

    But it is too simplistic to just blame background processes.

    Most background processes don't use much CPU time. Typically they sit there doing nothing until there is some trigger condition. This might be the arrival of an email, the reload of a web page, scanning a newly opened file for viruses, and a hundred other things. So it isn't the number of background processes that is the problem, it is how active they are and what they are doing.

    But it is true that from time to time background tasks will be triggered in the middle of a benchmark. For example, Windows might decide it is time to check for updates. If you are an IT expert this can be reduced by stopping services and the like, but it can't be fully eliminated. However, PerformanceTest has the option to automatically do multiple test runs and take the highest result. This removes the effect of short duration random background activity, but makes a test take longer.
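    To illustrate the best-of-N idea, here is a rough Python sketch of the general pattern (not PerformanceTest's actual code):

    ```python
    import time

    def cpu_workload() -> float:
        """A stand-in CPU-bound task; returns a score in iterations per second."""
        iterations = 2_000_000
        start = time.perf_counter()
        total = 0
        for i in range(iterations):
            total += i * i
        return iterations / (time.perf_counter() - start)

    def best_of_n(runs: int = 5) -> float:
        """Run the workload several times and keep the highest score, so a short
        burst of background activity during one run doesn't drag the result down."""
        return max(cpu_workload() for _ in range(runs))

    print(f"Best of 5 runs: {best_of_n(5):,.0f} iterations/sec")
    ```

    Taking the maximum rather than the mean works because background interference can only make a run slower, never faster.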

    But there are many other reasons a CPU might benchmark badly (or above average). Some examples:
    - Overclocking
    - Windows power management (especially on laptops running on battery)
    - Poor cooling and thermal throttling, e.g. dust, a broken fan, no thermal paste
    - Running single channel RAM when the CPU supports dual (or quad) channel
    - Poor memory setup when there are multiple CPUs (NUMA)
    - Running slower than typical RAM (or RAM that is faster than what is typical for this CPU model)
    - The CPU microcode has been patched for security issues
    - Some cores are disabled
    - Hyperthreading is disabled

    This page has a longer list
    https://forums.passmark.com/performa...-for-a-slow-pc

    But the saving grace is that all CPUs more or less suffer the same fate. So while the average is lower than it would be if every system was perfectly set up, the scores remain comparable between models (i.e. every system is roughly equally messed up, on average, once we have enough samples).



    • #3
      Many thanks for taking the time to reply, David.

      I realise it's overly simplistic to blame only background processes, and poor results could be down to countless other things.
      And I'm sure you're right that most of the time poor results won't be due to background processes.

      But I guess my post was also something for you guys to think about in terms of implementing a check in a future release, to take one more element out of the equation when it comes to poor results; e.g. wait a few seconds for the system to become idle, and if it doesn't, warn the end user and let them decide whether to go ahead anyway.
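      Something along these lines is what I had in mind; a minimal sketch using Python and the psutil library (the 10% threshold and 30 second timeout are arbitrary values I picked for illustration):

      ```python
      import psutil

      def wait_for_idle(threshold_pct: float = 10.0, timeout_s: int = 30) -> bool:
          """Poll overall CPU usage once per second until it drops below the
          threshold, or give up after timeout_s seconds."""
          for _ in range(timeout_s):
              # cpu_percent(interval=1) blocks for one second and returns the average usage
              if psutil.cpu_percent(interval=1) < threshold_pct:
                  return True
          return False

      if not wait_for_idle():
          answer = input("Background CPU usage is still high; run the benchmark anyway? [y/N] ")
          if answer.strip().lower() != "y":
              raise SystemExit("Benchmark cancelled.")
      ```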

      Another thing to consider, since you already check thermals, is to warn the end user and ask before running a test if your software detects idle temperatures above (for example) the 90°C mark. Again, it's just something PT could take into account to gather more accurate results.
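      In the same spirit, a pre-flight temperature check could look roughly like this (note that psutil only exposes sensors_temperatures() on Linux/FreeBSD; on Windows the reading would have to come from WMI or a vendor library instead, so this is purely a sketch):

      ```python
      import psutil

      IDLE_TEMP_LIMIT_C = 90.0  # example threshold only

      def max_cpu_temp_c():
          """Return the hottest reported sensor temperature, or None if unavailable."""
          if not hasattr(psutil, "sensors_temperatures"):
              return None  # not available on this platform (e.g. Windows)
          temps = psutil.sensors_temperatures()
          readings = [t.current for entries in temps.values() for t in entries]
          return max(readings) if readings else None

      temp = max_cpu_temp_c()
      if temp is not None and temp > IDLE_TEMP_LIMIT_C:
          print(f"CPU is already at {temp:.0f} C while idle - results may be thermally throttled.")
      ```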

      Yet another thing your software could do is automatically switch to the 'high performance' power plan, again to eliminate any throttling.
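      For what it's worth, on Windows the active power plan can be switched with powercfg, so the benchmark could do something like this around a run (a sketch only; SCHEME_MIN is the built-in alias for the High performance plan, and the user's original plan should be restored afterwards):

      ```python
      import subprocess

      def get_active_power_scheme() -> str:
          """Return the GUID of the currently active Windows power plan."""
          out = subprocess.run(["powercfg", "/getactivescheme"],
                               capture_output=True, text=True, check=True).stdout
          # Output looks like: Power Scheme GUID: <guid>  (Balanced)
          return out.split("GUID:")[1].split()[0]

      previous = get_active_power_scheme()
      subprocess.run(["powercfg", "/setactive", "SCHEME_MIN"], check=True)  # High performance
      try:
          pass  # ... run the benchmark here ...
      finally:
          subprocess.run(["powercfg", "/setactive", previous], check=True)  # restore original plan
      ```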


      Originally posted by David (PassMark):
      But the saving grace is that all CPUs more or less suffer the same fate. So while the average is lower than it would be if every system was perfectly set up, the scores remain comparable between models (i.e. every system is roughly equally messed up, on average, once we have enough samples).
      Yes, that's a good point, but again, it applies more to cases where the same hardware has hundreds or thousands of submitted results.
