Underperforming DDR4 3200MHz CL16


  • Underperforming DDR4 3200MHz CL16

    Hey there,

    I built a new custom PC:
    Mainboard: ASUS TUF X299 Mark 2
    CPU: Intel Core i7-7820X @ 4.3GHz
    Memory: 4x4GB G.Skill Trident Z DDR4-3200 CL16-18-18-38 @ 3200MHz (F4-3200C16Q-16GTZB)

    The PassMark rating is fine except for the Memory Mark of only 2700. CL16 is not the best, but it is DDR4-3200 in quad channel(!), which I expected to score at least around 3500. I tested the same sticks in a ROG Strix Z370-F in dual channel with an 8700K and got around 3200 Memory Mark. Considering that my former i5 with 4x4GB DDR3 G.Skill Ares managed 3100 Memory Mark in dual channel, I have the strong impression something is wrong with my new X299 system.

    I am using the latest BIOS and set the modules to 3200MHz via the XMP profile. I did no custom tuning, nor do I have any strange settings in the BIOS. CPU-Z and HWiNFO show the correct frequency of 1600MHz (24 x 66.7 MHz, i.e. 3200 MT/s effective), the correct timings and the full 16GB. CPU-Z also shows the NB frequency at 2700MHz, the channel mode as quad, and the refresh cycle at 417 clocks. There are no heavy background tasks running while the PassMark tests are performed.

    Any idea why my rating is so low? I should be able to expect more than 2700 Memory Mark, right? Thanks in advance.
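    Just to explain where my expectation comes from, here is the back-of-the-envelope bus arithmetic I had in mind (a rough sketch of my own, nothing to do with how PassMark computes its score):

[CODE]
/* Rough peak-bandwidth arithmetic for DDR4-3200 (illustrative only).
 * DDR4-3200 means 3200 MT/s on a 64-bit (8-byte wide) channel. */
#include <stdio.h>

int main(void)
{
    const double transfers_per_sec = 3200e6; /* 3200 MT/s (1600 MHz clock, double data rate) */
    const double bytes_per_transfer = 8.0;   /* 64-bit channel width */

    double quad = transfers_per_sec * bytes_per_transfer * 4 / 1e9; /* X299 quad channel */
    double dual = transfers_per_sec * bytes_per_transfer * 2 / 1e9; /* Z370 dual channel */

    printf("Theoretical peak, quad channel: %.1f GB/s\n", quad); /* ~102.4 GB/s */
    printf("Theoretical peak, dual channel: %.1f GB/s\n", dual); /* ~51.2 GB/s  */
    return 0;
}
[/CODE]

    That is only the raw bus limit, of course, and real-world throughput sits well below it, but it is why I expected quad channel to pull clearly ahead.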

  • #2
    I pulled a bunch of results from other 7820X systems with assorted motherboard and RAM configs. None of the dozen I looked at had more than 3200 Memory Mark (some were overclocked as well). The average was 2673.

    So based on my (limited) sample you are above average.

    So I don't think there is anything wrong with your machine in particular. Maybe it is a general thing with this CPU, or all X299 boards.



    • #3
      I see (though the samples I picked from the last baseline were all around ~3000). I really don't know much about benchmarking; what does this mean technically? Does it mean that DDR4 in quad channel really performs worse than the DDR3 in dual channel in my outdated i5 system? Or does it mean Memory Mark does not take things like quad channel into account (no offense meant)?
      To me it seems more and more that X299 wasn't such a wise choice. Maybe an 8700K with a decent OC or a Ryzen 1800X would have been better until Ice Lake 8- or 10-core CPUs arrive with AVX-512 support (I am really keen on the new AVX instructions for video encoding). Would you mind sharing your honest opinion on that? Yeah, it might hurt, but so does the Memory Mark.



      • #4
        I just tested my RAM with AIDA64; here are the results (which are excellent):
        Read: 76,000 MB/s
        Write: 82,000 MB/s
        Copy (r/w): 67,000 MB/s
        Compared to an 8700K system in dual channel with almost the same kind of RAM, these rates are nearly twice as high. So I do not really get what PassMark's Memory Mark is about when it gives an X299 quad-channel system a rating of 2700 while an 8700K system gets 3200+ with exactly the same sticks.
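        As far as I understand it, AIDA's copy figure is roughly what a large memcpy over a buffer much bigger than the caches would give. A minimal sketch of that kind of measurement (my own illustration, not AIDA's or PassMark's actual code, and single-threaded, so it will not saturate quad channel):

[CODE]
/* Crude memory-copy bandwidth estimate: copy a buffer much larger than the
 * CPU caches and divide the bytes moved by the elapsed time (POSIX timing).
 * Real benchmarks add multiple threads, NUMA pinning, non-temporal stores,
 * etc., which is one reason published numbers differ. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_BYTES (512UL * 1024 * 1024)  /* 512 MB, far larger than any L3 */
#define REPEATS   8

int main(void)
{
    char *src = malloc(BUF_BYTES);
    char *dst = malloc(BUF_BYTES);
    if (!src || !dst) { perror("malloc"); return 1; }

    memset(src, 1, BUF_BYTES);           /* touch the pages so they are mapped */
    memset(dst, 0, BUF_BYTES);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < REPEATS; i++)
        memcpy(dst, src, BUF_BYTES);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* A copy reads and writes every byte, so count the bytes twice. */
    printf("Approx. copy bandwidth: %.1f GB/s\n",
           2.0 * REPEATS * BUF_BYTES / secs / 1e9);

    free(src);
    free(dst);
    return 0;
}
[/CODE]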
        Last edited by Scanner; Mar-05-2018, 05:24 PM.



        • #5
          You would need to talk to the AIDA people about their results. I did a quick search and found someone else with your CPU, X299 and quad channel.
          See
          https://forums.aida64.com/topic/4062...read-and-copy/
          Their results were better than yours, but they also stated that "there must be a bottleneck about memory write bandwidth in the IMC (Integrated Memory Controller) of Skylake-X processors".


          Originally posted by Scanner View Post
          Compared to an 8700K system in dual channel with almost the same kind of RAM, these rates are nearly twice as high.

          Quad channel doesn't give twice the system performance of dual channel, despite the name. In real-life software the impact of quad over dual channel is so small it is nearly impossible to measure. (The CPU cache hides the RAM speed, and not many applications are truly limited by RAM bandwidth.)
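          A quick way to see the cache effect for yourself (a rough, generic sketch, not one of our actual test routines): read the same total amount of data once from a buffer that fits in cache and once from a buffer far larger than L3. Only the second case touches RAM bandwidth at all, and most application working sets look much more like the first.

[CODE]
/* Read the same total volume of data from (a) a buffer that fits in cache and
 * (b) a buffer far larger than L3. Extra memory channels can only help in
 * case (b); case (a) never leaves the CPU. Illustration only (POSIX timing). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile uint64_t g_sink;   /* keeps the compiler from deleting the work */

static uint64_t sum_pass(const uint64_t *buf, size_t n, int passes)
{
    uint64_t a0 = 0, a1 = 0, a2 = 0, a3 = 0;   /* independent accumulators */
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i + 3 < n; i += 4) {
            a0 += buf[i];
            a1 += buf[i + 1];
            a2 += buf[i + 2];
            a3 += buf[i + 3];
        }
    return a0 + a1 + a2 + a3;
}

static double timed_gbps(const uint64_t *buf, size_t n, int passes)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    g_sink = sum_pass(buf, n, passes);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (double)n * sizeof(uint64_t) * passes / secs / 1e9;
}

int main(void)
{
    size_t small_n = (256UL * 1024) / sizeof(uint64_t);          /* 256 KB: cache resident */
    size_t large_n = (512UL * 1024 * 1024) / sizeof(uint64_t);   /* 512 MB: forced to RAM  */

    uint64_t *small = calloc(small_n, sizeof(uint64_t));
    uint64_t *large = calloc(large_n, sizeof(uint64_t));
    if (!small || !large) { perror("calloc"); return 1; }

    /* Same total bytes read in both cases (2048 * 256 KB == 512 MB). */
    printf("cache-resident: %.1f GB/s\n", timed_gbps(small, small_n, 2048));
    printf("RAM-resident:   %.1f GB/s\n", timed_gbps(large, large_n, 1));
    return 0;
}
[/CODE]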



          • #6
            Thanks for the info. I checked the linked thread as well and I really don't like what I read there. I got my 7820X mainly for x265 encoding and the boost from quad channel, but now more and more downsides are coming up; power consumption is another one. I have a hardware power meter running and my whole system (without display) draws ~300W under full (AVX-heavy) load, running at only 4.3GHz. Even worse, the fps rate is almost the same as with an 8700K.

            I have 2 or 3 days left of the period in which I can send the 7820X back to the dealer. I have learned by now that quad channel is just a minor feature, and the costly X299 platform seems to have certain issues. Moreover, the 7820X is almost the same speed as an 8700K or 1800X when it comes to HEVC encoding.

            I would really appreciate your expert opinion here; it seems many important downsides are not mentioned in the endless number of reviews. Stepping down to something cheaper (a 1700X?) now and going all in on a fresh 8- or 10-core Ice Lake in a few months would also be an option for me, while getting a 7980XE is certainly not. Thanks in advance!



            • #7
              The 8700K is a great CPU. It is still #1 on the single-threaded chart:
              https://www.cpubenchmark.net/singleThread.html
              and fairly high on the multi-threaded charts as well.

              So there are very few CPUs on the market that would be considered a meaningful upgrade from an 8700K. The only reason to replace the 8700K with something else is if the software you are using is highly optimised for multi-threading (i.e. it can fully load 8 or more cores for extended periods). So it all comes down to what software you are using in the end.

              The 7820X is by no means a bad CPU, however. But the software needs to scale to 8+ cores to justify it.

              i.e. in the end you need to look for benchmarks that use the exact software (and same version) that you use.



              • #8
                I understand, and as I said, I am mainly using x265 for video encoding. I had the chance to run a test with an 8700K and a 7820X using the same x265 version (2.6) and the same 50-minute 1080p 10Mbit video. The results were:
                -- x265 10-bit, Slow, CRF 22.5: 112 min encoding time (9.29 fps) on the 8700K @ 4.7GHz
                -- x265 10-bit, Slow, CRF 22.5: 89 min encoding time (11.70 fps) on the 7820X @ 4.2GHz

                The results seem to depend heavily on the x265 preset (fast, medium, slow, slower, ...), because
                -- x265 10-bit, Fast, CRF 23: 12 min encoding time on the 8700K @ 4.7GHz
                -- x265 10-bit, Fast, CRF 23: 10 min encoding time on the 7820X @ 4.2GHz
                is not that much of a difference. I only use the Slow preset, though.

                I think these results show that the 8700K is very strong in x265 with two fewer cores than the 7820X. There are many Handbrake or x265 tests in reviews, but strangely the 7820X and the 8700K almost never appear in the same test.
                What really bothers me is that the 7820X @ 4.2GHz needs about 300W while the 8700K @ 4.7GHz took 220/230W. This might also come from the power-hungry X299 platform. Maybe that does not sound like much, but when encoding runs 24/7 it adds up quickly. I don't need 8 SATA ports, countless USB ports or three slots for graphics cards, and I will not upgrade to an i9 X299 CPU as it is out of budget. The X299 system was also around €300 more expensive.
                I find it really hard to decide whether I should keep the X299 system; it doesn't seem to make perfect sense for my purpose. What exactly are the "very few CPUs on the market that would be considered a meaningful upgrade from an 8700K"?
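                To put a number on the 24/7 point, here is my rough arithmetic (the electricity price is just an assumed placeholder, plug in your own tariff):

[CODE]
/* Rough yearly running-cost difference between the two systems under constant
 * encoding load. The electricity price is an ASSUMED placeholder value. */
#include <stdio.h>

int main(void)
{
    const double watts_7820x = 300.0;   /* measured at the wall with my power meter   */
    const double watts_8700k = 225.0;   /* midpoint of the 220/230 W I saw            */
    const double eur_per_kwh = 0.30;    /* ASSUMPTION: replace with your local tariff */
    const double hours_per_year = 24.0 * 365.0;

    double extra_kwh = (watts_7820x - watts_8700k) * hours_per_year / 1000.0;
    printf("Extra energy per year at 24/7 load: %.0f kWh\n", extra_kwh); /* ~657 kWh */
    printf("Extra cost per year: ~%.0f EUR\n", extra_kwh * eur_per_kwh); /* ~197 EUR */
    return 0;
}
[/CODE]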



                • #9
                  Originally posted by Scanner View Post
                  ...is not that much of a difference

                  The speed-up was 17-20% in both cases (measured as the reduction in encoding time), so that is a significant win for the 7820X. Power usage was about 30% higher, so there is no free lunch. But a 20% gain is a 20% gain.
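                  Spelling out the arithmetic behind those percentages, using the times and wattages quoted in post #8:

[CODE]
/* Speed-up and power figures derived from the encode times quoted in post #8.
 * "Speed-up" here means the reduction in wall-clock encoding time. */
#include <stdio.h>

static double time_saved_pct(double mins_8700k, double mins_7820x)
{
    return 100.0 * (mins_8700k - mins_7820x) / mins_8700k;
}

int main(void)
{
    printf("Slow preset: %.1f%% less time (112 -> 89 min)\n", time_saved_pct(112, 89));  /* ~20.5% */
    printf("Fast preset: %.1f%% less time (12 -> 10 min)\n",  time_saved_pct(12, 10));   /* ~16.7% */
    printf("Power: 300 W vs 230 W is %.0f%% higher\n", 100.0 * (300.0 - 230.0) / 230.0); /* ~30%   */
    return 0;
}
[/CODE]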

                  As the encoding software likes lots of CPU cores, anything in the top 10 of this list would be a meaningful upgrade:
                  https://www.cpubenchmark.net/high_end_cpus.html
                  None of them are good value however. So you need to really put a lot of value on your time to justify them.



                  • #10
                    Originally posted by David (PassMark) View Post
                    The speed-up was 17-20% in both cases (measured as the reduction in encoding time), so that is a significant win for the 7820X. Power usage was about 30% higher, so there is no free lunch. But a 20% gain is a 20% gain.
                    Yes, indeed. I think I was expecting a lot more from the 7820X for some reason, I guess because of the two extra cores compared to the 8700K. There is one more thing that counts in favour of Skylake-X: it already supports AVX-512, the new vector instruction set: https://www.anandtech.com/show/11928/intels-document-points-to-avx512-support-by-consumer-cannon-lake-cpus

                    x265 is supposed to make use of AVX-512 in the near future and the devs are already working on it. This should give the encoding speed a boost: https://bitbucket.org/multicoreware/...vx-512-support

                    The 8700K does not support these instructions at all, but Ice Lake will. However, we don't know which version of x265 will introduce AVX-512; maybe it will still take about a year. By then Ice Lake will arrive with 8 or 10 cores, and I hardly believe the 7820X will keep up with them, since the 7800X was already outperformed by the 8700K.
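                    For anyone curious whether their CPU actually exposes AVX-512, here is a quick runtime check (a minimal sketch using the GCC/Clang builtin; it only reports what the CPU supports, not whether x265 will use it):

[CODE]
/* Quick runtime check for AVX-512 support using the GCC/Clang helper.
 * This only tells you the CPU exposes the instructions, not that any given
 * encoder build actually uses them. */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();   /* initialise the CPU feature data */

    printf("AVX2:      %s\n", __builtin_cpu_supports("avx2")     ? "yes" : "no");
    printf("AVX-512F:  %s\n", __builtin_cpu_supports("avx512f")  ? "yes" : "no");
    printf("AVX-512BW: %s\n", __builtin_cpu_supports("avx512bw") ? "yes" : "no");
    return 0;
}
[/CODE]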

                    Well, I am going to stick with the 7820X for now and see what the situation looks like in about a year. Thanks for your input!
                    Last edited by Scanner; Mar-06-2018, 04:49 PM.



                    • #11
                      I've found the same issue with Memory Mark, where it under-reports RAM performance on the X299 platform versus Z270/Z370. The attached table shows three RAM kits tested across three motherboards in both PassMark and Sandra. Just like Scanner's AIDA results, Sandra shows X299 well above the Z270/Z370. So now we have two benchmarks reporting significantly increased performance while Memory Mark reports significantly decreased performance...



                      • #12
                        The Memory Mark is a weighted average of the various sub-tests in the memory test suite, so it would be instructive to look at the individual results.

                        There is also some dependency of the "Available RAM" test on the amount of free RAM, so comparing systems with 16GB and 32GB isn't fair.

                        Do you have the baseline files for each of the test runs? It would be more instructive to graph the individual tests. Also, when you are looking for small performance differences, we suggest doing multiple test runs and taking the best result (or an average if doing a large number of runs). In the PerformanceTest preferences window you can set it up to auto-run the tests multiple times and keep the best result.
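                        The general pattern, independent of PerformanceTest, is simply to time the operation several times and keep the fastest run. A trivial sketch (using a plain memory copy as a stand-in for whatever is being measured):

[CODE]
/* Repeat a timed operation several times and keep the best (fastest) run.
 * Generic illustration of "multiple runs, take the best result"; this is not
 * PerformanceTest code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define RUNS      5
#define BUF_BYTES (256UL * 1024 * 1024)

int main(void)
{
    char *src = malloc(BUF_BYTES), *dst = malloc(BUF_BYTES);
    if (!src || !dst) { perror("malloc"); return 1; }
    memset(src, 1, BUF_BYTES);

    double best = 1e30;  /* smallest elapsed time seen so far, in seconds */
    for (int r = 0; r < RUNS; r++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        memcpy(dst, src, BUF_BYTES);     /* stand-in for the operation under test */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        if (secs < best) best = secs;
        printf("run %d: %.3f s\n", r + 1, secs);
    }
    printf("best run: %.3f s (%.1f GB/s copy)\n", best, 2.0 * BUF_BYTES / best / 1e9);

    free(src);
    free(dst);
    return 0;
}
[/CODE]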



                        • #13
                          Hi. First post here. I have the same issue with my memory.

                          So, recently, I built a new PC: AMD Ryzen 5 3600 + Team Group Night Hawk RGB 16GB (2x) DDR4 3200MHz CL16.

                          I have activated the XMP profile, and CPU-Z confirms the sticks are running in dual channel. Memory Mark, however, is around 2750.

                          My previous build with an Intel Core i7-3770K + GeIL Enhance Corsa 16GB (4x4GB) DDR3 1600MHz CL9 also scored ~2750 Memory Mark, and even touched 2812 once.

                          Things don't add up here. So DDR4 3200MHz and DDR3 1600MHz make no difference?



                          • #14
                            Intel memory controllers have been faster in recent history. So I would guess that the Ryzen 5 isn't using the full memory bandwidth available in the DDR4-3200.



                            • #15
                              Originally posted by David (PassMark) View Post
                              Intel memory controllers have been faster in recent history. So I would guess that the Ryzen 5 isn't using the full memory bandwidth available in the DDR4-3200.
                              Yes, I have heard things about the Ryzen series' higher latencies, but I don't think it is slow enough to be on par with DDR3 1600MHz memory. Here are AIDA64 memory test screenshots for both my old and new rigs. Read and copy speeds are much higher than with DDR3 1600MHz, write speed is almost the same, but latency was lower with Intel.

                              Do you think the higher latency on AMD is enough to bring my Memory Mark down to the level of a DDR3 1600MHz kit? Or might there be other reasons?
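                              From what I have read, latency figures like AIDA's come from chasing dependent pointers through a buffer much larger than the caches, so every access is a full round trip to RAM. A rough sketch of that idea (my own illustration, not AIDA's or PassMark's actual method):

[CODE]
/* Crude memory-latency estimate: walk a chain of dependent pointers scattered
 * through a buffer much larger than the caches, so each load misses cache and
 * cannot start until the previous one has finished. Illustration only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (64UL * 1024 * 1024)    /* 64M entries * 8 bytes = 512 MB */
#define STEPS (20UL * 1000 * 1000)

int main(void)
{
    size_t *next = malloc(N * sizeof(size_t));
    if (!next) { perror("malloc"); return 1; }

    /* Build one long random cycle (Sattolo's algorithm) so the walk cannot
     * be predicted by the hardware prefetchers. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(12345);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;                    /* j in [0, i-1] */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < STEPS; s++)
        p = next[p];                  /* each load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / STEPS;
    printf("Average load-to-use latency: %.1f ns (final index %zu)\n", ns, p);
    free(next);
    return 0;
}
[/CODE]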

