CPU Speed Reported Differs between OS

  • CPU Speed Reported Differs between OS

    Hi,

    We are running some tests on a SuperMicro server with 2 x Intel 5570 (2.93 GHz) CPUs.

    We have compared benchmarks under Windows 2003 32-bit and 64-bit, but we don't understand why the CPU speed is reported differently in each OS with the same hardware, configuration, etc.

    32-bit shows: 2668.3 MHz / 2657.9 MHz
    64-bit shows: 2933.1 MHz / 2932.8 MHz

    The 64-bit speed is as expected.


    Thanks,
    Mario

  • #2
    There are some known problems with the speed measurement of Xeon and some other high-end CPUs. There should be some improvement in the next PT patch release.

    Basically, modern CPUs throttle themselves when not under load. To measure the speed properly we have to put them under some load first, to bring them up to a higher power state. As CPUs have become faster, the amount of load we were applying before the measurement has become trivial, and newer CPUs haven't been throttling up to deal with it. So in the next patch we will be increasing the amount of load used.
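    The warm-up-then-measure idea described above can be sketched roughly like this (a minimal Python illustration; the function name, warm-up duration, and workload are my own choices, not PerformanceTest's actual method):

    ```python
    import time

    def measure_speed(warmup_s=0.5, work_iters=5_000_000):
        """Estimate effective CPU speed as loop iterations per second.

        The busy-wait warm-up gives a power-saving CPU time to clock up
        to its full speed before the timed section starts.
        """
        # Warm-up: spin until the deadline to raise the CPU's power state.
        deadline = time.perf_counter() + warmup_s
        while time.perf_counter() < deadline:
            pass

        # Timed section: a fixed amount of integer work.
        start = time.perf_counter()
        x = 0
        for i in range(work_iters):
            x += i
        elapsed = time.perf_counter() - start
        return work_iters / elapsed

    print(f"~{measure_speed():.3e} loop iterations/second")
    ```

    With too short a warm-up, the timed section starts while the CPU is still in a low power state, which is exactly the under-reporting described above.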



    • #3
      Thanks

      Thanks for your reply, it makes complete sense.

      I'll look out for the patch to be available.



      A suggestion...
      There are certain businesses that benefit more from lower internal system latency than from a huge amount of processing power. These businesses can take advantage of the powerful multi-core processors available now.

      These businesses might benefit from having only one of these modern CPUs installed in a system rather than two, depending on whether a given thread is running on processor 1 or 2, and on the memory configuration of that system.

      Does PassMark have any system 'latency' tests available, even in a separate application? If not, is there an official 'wishlist' link I could use to add this feature request?


      I am happy to provide more details if needed.


      Thanks,
      Mario



      • #4
        I don't think there is any easy way to do an accurate CPU latency test. Any "latency" in a CPU should be roughly inversely proportional to its speed. There is an overhead in swapping a process between CPUs, but this is more of an OS overhead. And any small CPU latency should be swamped by 1) the time required to load the executable software from disk (or from the disk cache) and 2) the time required to send the result out over a network.



        • #5
          Yes, I agree; this is not very easy to do.

          There is a market, however, that could benefit from this kind of information. A 10 or 20 µs improvement in system latency makes a difference for people running applications for research, weather scenarios, financial algorithms, etc.

          With the new powerful multi-core processors, each with its own multiple dedicated memory channels, performance testing and tuning presents a new set of challenges for those of us working in one of these areas.


          In fact, we've recently noticed a big difference depending on whether an application thread runs on one core or another (generally controlled via 'CPU affinity'). Some sort of 'core test' comparing individual cores would be highly beneficial.

          PassMark runs very well in a multi-core, hyper-threaded environment with its multi-task tests. Would it be very difficult to add a 'core test' or 'affinity test' to see which core consistently gives better results, for whatever reason?
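          The kind of 'core test' suggested here can be sketched by pinning one process to each core in turn and timing the same fixed workload (a Linux-only Python illustration using `os.sched_setaffinity`, which is not available on Windows; all names and iteration counts here are my own):

          ```python
          import os
          import time

          def time_workload(iters=2_000_000):
              """Time a fixed CPU-bound workload; returns wall time in seconds."""
              start = time.perf_counter()
              x = 0
              for i in range(iters):
                  x += i * i
              return time.perf_counter() - start

          def per_core_times(iters=2_000_000):
              """Run the same workload pinned to each available core in turn.

              Linux only: os.sched_setaffinity is not available on Windows.
              """
              original = os.sched_getaffinity(0)
              results = {}
              try:
                  for core in sorted(original):
                      os.sched_setaffinity(0, {core})   # pin this process to one core
                      results[core] = time_workload(iters)
              finally:
                  os.sched_setaffinity(0, original)     # restore the original mask
              return results

          for core, seconds in per_core_times().items():
              print(f"core {core}: {seconds:.4f} s")
          ```

          Each run should be repeated several times per core, since a single sample is easily skewed by whatever else the OS schedules during the measurement.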


          Mario



          • #6
            Changes in performance to do with CPU affinity (i.e. locking a running task to a single core or CPU) are not the same as latency. Locking a task to a CPU can often be a bad thing in a multitasking environment. You can end up with multiple tasks on one overloaded core, running very slowly, while many other cores sit around idle.

            Task scheduling in the operating system is there for a good reason.

            I think you need to define precisely what you mean by "latency" in this context.



            • #7
              I understand what you're saying.

              I think maybe I didn't make my previous post very clear (it also seems I have drifted off the topic of the original post; let me know if I should start a new thread instead).


              My previous post refers to two different points: latency and core performance.

              When I refer to latency, I mean measuring the additional time needed for something to execute on processor x compared to processor y, and then measuring the difference when threads run against proc 2's memory compared to proc 1's. This is in the order of nanoseconds and, as you said, is difficult to do.
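              For the memory side of this, one classic microbenchmark is a pointer chase: each load depends on the result of the previous one, so the accesses cannot be overlapped, and the average time per hop approximates access latency. A rough Python sketch (the function name and parameters are illustrative, and in Python the interpreter overhead dominates, so treat the numbers as relative rather than absolute):

              ```python
              import random
              import time

              def pointer_chase_ns(n=1_000_000, hops=1_000_000):
                  """Average time per dependent memory access, in nanoseconds.

                  Builds one random cycle over n slots, then follows it; each
                  access depends on the previous one, so they cannot overlap.
                  """
                  # Build a single random cycle covering all n slots, so the
                  # access pattern is unpredictable and never collapses into
                  # a short loop that fits in cache.
                  order = list(range(1, n))
                  random.shuffle(order)
                  perm = [0] * n
                  cur = 0
                  for nxt in order:
                      perm[cur] = nxt
                      cur = nxt
                  perm[cur] = 0  # close the cycle

                  idx = 0
                  start = time.perf_counter()
                  for _ in range(hops):
                      idx = perm[idx]            # each access depends on the last
                  elapsed = time.perf_counter() - start
                  return elapsed / hops * 1e9

              print(f"~{pointer_chase_ns():.0f} ns per dependent access")
              ```

              Running this pinned to a core near the memory versus a core on the other socket is one rough way to see the proc 1 / proc 2 difference described above.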

              Regarding the second point, I think there is scope here for some tests. We've seen consistently better results on certain cores when forcing a test application to use a particular core. The differences are in the order of microseconds.

              Again, I agree with you: the OS task scheduling does an excellent job. However, on normal systems the CPU rarely gets close to 100% per core. Consequently, I think it is possible to optimize an application by running it (or part of it) on a particular core previously found to be the 'optimal core' for that system, without impacting the rest of the OS.

              - That's where a 'core test' result would come in handy.



              Anyway, just to let you know that if you ever decide to include a 'system latency' or 'core performance' test, I would be interested in those features.

              Thanks for your comments.

              Mario

