
Single Thread Score rating


  • #76
    Originally posted by Xeinaemm
    ... and you're telling me that in less than a month Intel magically upgraded older CPUs?
    I was irritated to read this because I remembered the CPUs I was interested in buying and how they performed across this whole V9/V10 change.
    Intel's i3-9100F had a 'monstrous' 8900 CPUMark and some 2400 ST. Now it's 6900 and some 2500 ST.
    Meanwhile, AMD's Ryzen 1600/2600 had some 1800 ST; now they are 300 points higher.


    • #77
      Intel's i3-9100F had a 'monstrous' 8900 cpumark and some 2400 ST. Now it's 6900 and some 2500 ST.
      As of today, the i3-9100F single-threaded results are:
      V9 - 2406
      V10 - 2486
      ~3% increase

      Ryzen 5 1600
      V9 - 1825
      V10 - 2108
      ~15% increase.

      So are you:
      1) Upset that AMD benefited more than Intel in this case (i.e. the opposite of some others in this topic)? It is a no-win situation for us; any move up or down upsets roughly half our user base.
      2) Or just upset because you don't like change? Again, it is a no-win: eventually the numbers will become irrelevant unless we update them every decade or so, leaving people upset about the change, or upset that we didn't change.


      • #78
        I see the single thread performance ranking is now topped by a yet-to-be-released Zen 3 based processor, and yes, its improvement is significant. I am wondering to what extent the compiler used and its optimizations are taking advantage of this new processor's architecture. If the benchmark program is compiled with a standard set of options, how fair is the ranking in assessing actual capability? Any performance trick not yet known to the compiler, or simply not enabled so the program can run on older architectures, would of course underestimate the potential of the hardware.


        • #79
          Yes, it seems to be a big jump over the previous generation. But it is early days: we only have one sample for the AMD Ryzen 5 5600X, so the numbers will change.

          We are using Visual Studio 2019, so a modern compiler. We haven't changed compiler versions since the release of PerformanceTest, so the version has remained at 16.4.5, which came out in Feb 2020.

          I had a quick look through the release notes for Visual Studio 2019, but didn't see anything obvious that was included especially for Ryzen 5000. So I would be pretty surprised if using the current 16.7.6 release of VS2019 would make any significant difference.

          And generally speaking, compilers don't like including multiple code paths. As developers we pick a target CPU instruction set, and the compiler creates code that runs on that CPU and all later CPUs. It doesn't create six different code paths for the same piece of code and then pick the optimal path at run time based on the CPU's capabilities. Some libraries do this, however. Here is an example of memcpy that has multiple code paths.


          • #80
            First post for me. I'm ex platform management for a telco, so I used to have access to professional benchmark databases. What I would like to see is real results and not regression scores. Having a background in stats and operations research, I know that a weakness of weighted scores is the weightings. The purpose of this type of analysis is to look at best-fit charts. It's the complete set that is being analysed, not individual results: is the trend linear, geometric, that type of stuff. As an IT person I want to see MOPS results, mostly because it's the only way I can assess whether your benchmark is useful, in other words real. Is this possible? I feel very uneasy with scores derived from weightings.


            • #81
              What I would like to see is real results and not regression scores
              I'm not sure what you mean. What even is a regression score in the context of a CPU benchmark?
              What's not 'real' about the results?

              I know that a weakness of weighted scores is the weightings
              If we thought weightings were a weakness we wouldn't be doing it. People like having a single number like the CPUMark, and to get a single number from many different measurements, all with different scales and units, some type of weighting system isn't a weakness, it is a necessity.

              The purpose of this type of analysis is to look at best fit charts
              What chart? Fitted to what value? Without context, this doesn't mean anything.

              As an IT person I want to see MOPs results
              I assume MOPS is Millions of Operations Per Second?
              This is exactly what the PerformanceTest already reports. Have you used the software?

              [Screenshot: MOPS.png — PerformanceTest results window reporting per-test results in millions of operations per second]


              • #82
                Dave, if I may say so, after reading five or so of these whinges I realized something you've done by changing the CPUMark so that the scores are so different on different processors.
                You've exposed the fact that old CPUs are old!

                It's a classic case of a "collector" suddenly finding out that his treasure-trove of priceless relics is just a room full of junk.
                You've really got two choices. Keep benchmarking using code that your team built 5 years ago on a 5-year-old PC, with a 5-year-old compiler, library files that are at least 5 years old, and the settings used 5 years ago. And never, ever change it, because it's more holy than the Holy Grail.

                Or update your benchmark and let the tears fall where they may.
                I say that you come out with a new version every month and just flush the results that are over a month old.

                That way you can optimize its utility and usefulness without having to worry about how different the scores are from earlier versions on earlier hardware.
                And that way it is more likely to remain RELEVANT.

                Unless you want to peg it to a K5 at 10 or something, as someone suggested earlier, laughably.
                Like it's only good if a K5 system scores a 10 on it.
                Maybe run it by NIST to get the proper scores.

                Or should we go back to the days when benchmarks would score lower on AMD chips than Intel simply because the compilers didn't recognize AMD chips as fully Intel compatible, and so would revert to using standard x86 calls instead of SSE calls? Or should you try every compiler, compiler setting, and compiler version out there until you get a Ryzen 5 to beat a Xeon Gold?

                You can't go around worrying about what will make these fanbois happy; that will just hobble your benchmark with a label of being biased towards Intel or AMD, for or against modern CPUs, and so on. Pick a goal and code to it, then defend the goal and the coding.