RX 6700 XT Baseline Variations


  • RX 6700 XT Baseline Variations

    Having just purchased a 6700 XT, I was curious to see how it fared compared to the other cards out there. I've run the benchmarks a few times and got results from 22,250 to 23,100 for the 3D tests. I'm happy with this, as it has it nipping at the heels of the 3070 and comfortably quicker than the 3060 Ti.

    What confuses me, though, is looking at the averages of the results: currently the average sits at around 18,400. There are people running the tests and getting 3D scores as low as 11,000 to 14,000. How can such a big difference be possible with the same GPU?

    Is this normal for these tests? Are people running systems that are really badly set up, or is this some kind of glitch?
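
    For anyone wondering about the arithmetic, a quick back-of-envelope (with made-up values matching the ranges above, not real submissions) shows how a cluster of low results drags the average down:

    ```python
    # Back-of-envelope: a cluster of low outliers drags the average well below
    # the typical score. All values are invented to match the ranges above.
    healthy = [22250, 22600, 23100] * 20   # most 6700 XT systems: ~22-23k
    impaired = [11000, 12500, 14000] * 15  # badly set up / misdetected systems

    scores = healthy + impaired
    print(sum(scores) / len(scores))  # 18300.0 -- near the ~18,400 site average
    ```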

  • #2
    Is this normal for these tests? Are people running systems that are really badly set up?
    Yes, there will generally be a fairly wide range of results for similar hardware, as every system is different, and in some cases there will be patterns of bad performance. This is a good example of why it is so useful to be able to run a benchmark and compare it to systems with the same components.

    Looking at a subset of the lower results, I can't see anything immediately obvious (e.g. all the same motherboard or driver version), though they are doing really badly on the DX12 test (and lower on the DX9 and DX11 tests). It's possible they are all using some third-party software that is triggering this (e.g. there was recently a bug in the Asus AI Suite 3 software that was making things slower when running the floating point CPU test).
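
    To illustrate the kind of grouping involved (the file and column names below are hypothetical, not the real result schema):

    ```python
    import pandas as pd

    # Hypothetical export of RX 6700 XT submissions; file and column names
    # are invented for illustration.
    df = pd.read_csv("rx6700xt_results.csv")

    low = df[df["g3d_mark"] < 15000]  # the suspiciously low cluster

    # Check whether the low scorers share a motherboard, driver or OS version
    for col in ["motherboard", "driver_version", "os_version"]:
        print(low[col].value_counts().head())
    ```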

    • #3
      That would make sense. I've heard and read horror stories about the AI Suite from Asus before and haven't used it myself. The trouble with Asus motherboards is that it pops up as a suggested install with the chipset drivers, so those who don't know better or are inexperienced will install it and be none the wiser.

      • #4
        It seems that some of the low-score results are due to the GPU being in a laptop, which would explain a lot given the reduced power budget, cooling, etc.

        Is it possible to filter laptops out of the results?

        • #5
          Generally, if we detect it is a mobile card we append "mobile" after the name and treat it differently, so it may be that this one isn't being flagged as a mobile card (though I can't see any mention of a mobile version from AMD).

          AMD has a history of re-badging/reusing the same hardware IDs for different products, so this is probably what's happened here (it's possible this is the as-yet-unreleased RX 6000M). Once we see a few examples we should be able to update our lookup table to distinguish between the two.

          I should point out this is what the device driver is calling the card too, so potentially this will be fixed with an updated device driver that uses the correct name for the card.
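
          As a rough sketch of what such a lookup involves (the IDs below are invented for illustration, not our actual table):

          ```python
          # Sketch of a GPU lookup table: the PCI device ID alone can be shared
          # between desktop and mobile parts, so a second identifier (e.g. the
          # subsystem ID) is needed to tell them apart. IDs here are invented.
          GPU_LOOKUP = {
              # (device_id, subsystem_id) -> reported name
              (0x73DF, 0x0001): "Radeon RX 6700 XT",
              (0x73DF, 0x0002): "Radeon RX 6700 XT (mobile)",  # re-badged laptop part
          }

          def identify(device_id: int, subsystem_id: int) -> str:
              return GPU_LOOKUP.get((device_id, subsystem_id), "Unknown AMD GPU")
          ```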
          Last edited by Tim (PassMark); Apr-05-2021, 10:44 PM.

          • #6
            I have tested systems with the RX 6800 and RX 6700 XT and both have low DX9 results with low GPU utilization.

            I have tried to raise this as an issue before, but I'm not sure what it would take for it to be addressed.

            • #7
              @Mcleod: I tinkered with the Radeon settings and managed to get a much better score by turning on Radeon Anti-Lag and Radeon Enhanced Sync. GPU utilization is higher now, but still not 100% during the test. This is also with an RX 6700 XT.

              Tim (PassMark), see the attached image for proof of the issue.


              [Attached image: passmark_proof.PNG]

              • #8
                The standard DX9 test (as opposed to the advanced tests, under the Advanced menu) is relatively low resolution and doesn't use any of the new features in DX10, 11 & 12. It was designed 10+ years ago and can run on very old low-end systems. So to get high results you need a good CPU as well as a good GPU, as the CPU can rapidly become a bottleneck if the GPU is fast.

                On the other hand, the DX12 test is much newer, runs at a higher resolution and uses new GPU features. So the ratio of CPU load to GPU load is much more tilted toward the GPU for DX12. Plus it uses a lot more video RAM. But this test won't run on older machines.
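
                A crude way to model the bottleneck (my simplification for illustration, not how the benchmark actually computes anything):

                ```python
                def expected_fps(cpu_ceiling: float, gpu_ceiling: float) -> float:
                    # Each frame needs both CPU work (logic, draw-call submission)
                    # and GPU work (rendering); the slower side caps the frame rate.
                    return min(cpu_ceiling, gpu_ceiling)

                # DX9-style test: light GPU load, so a fast card ends up CPU-limited
                print(expected_fps(cpu_ceiling=250, gpu_ceiling=900))   # 250

                # DX12-style test: heavy GPU load, so the GPU is the cap
                print(expected_fps(cpu_ceiling=250, gpu_ceiling=140))   # 140
                ```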

                • #9
                  Originally posted by David (PassMark) View Post
                  The standard DX9 test (as opposed to the advanced tests, under the Advanced menu) is relatively low resolution and doesn't use any of the new features in DX10, 11 & 12. It was designed 10+ years ago and can run on very old low-end systems. So to get high results you need a good CPU as well as a good GPU, as the CPU can rapidly become a bottleneck if the GPU is fast.

                  On the other hand, the DX12 test is much newer, runs at a higher resolution and uses new GPU features. So the ratio of CPU load to GPU load is much more tilted toward the GPU for DX12. Plus it uses a lot more video RAM. But this test won't run on older machines.
                  I think you're missing my point. The DX9 test results in major stutters, 53-79 FPS and under 10% GPU utilization, with Radeon Anti-Lag and Radeon Enhanced Sync off. With those settings turned on, the result is over 200 FPS with higher GPU utilization.

                  • #10
                    Yes, that is a big difference. It would be nice if none of this video card device driver tuning mattered and things just worked correctly with default settings, especially with old DirectX 9 code.
                    But unfortunately it is often two steps forward, one step back with video card drivers.

                    • #11
                      Originally posted by docarter View Post
                      @Mcleod: I tinkered with the Radeon settings and managed to get a much better score by turning on Radeon Anti-Lag and Radeon Enhanced Sync. GPU utilization is higher now, but still not 100% during the test. This is also with an RX 6700 XT.
                      Thanks for that. I had a bit of a tinker with the settings myself to see what sort of results I got, and I have to say I was staggered by the difference these two settings alone made lol.
                      Here are the results with the settings on.
                      [Attached image: enhsync anti lag on.JPG]
                      And now with the settings off.

                      [Attached image: enh sync anti lag off.JPG]

                      As you can see, there is a huge difference. What is bugging me is that I can't remember if those settings were on by default when I first installed the card and drivers. The results I got and posted on here originally were roughly 22,500, so I'm wondering if AMD changed the defaults in an update or something.

                      Another thing I played with was the Smart Access Memory (Resizable BAR) feature. I don't have a 500 series chipset or a Zen 3 CPU, but recent updates to my board's BIOS made it available to try out anyway (Asus Strix B450-F Gaming and R5 3600).
                      The screenshots above were taken with SAM disabled; the one below is with it enabled, and it's quite a performance hit when you don't have the bandwidth of PCIe 4.0 to play with lol.

                      [Attached image: sam on snyc etc on.JPG]

                      This was also with Enhanced Sync and Anti-Lag on, but just enabling SAM cost me a third of the performance lol. Quite interesting results.
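
                      Roughly, with scores approximated from the screenshots (exact values differ):

                      ```python
                      # Approximate scores read off the screenshots above.
                      sam_off = 22500
                      sam_on = 15000
                      print(f"{(sam_off - sam_on) / sam_off:.0%}")  # 33% -- about a third
                      ```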

                      • #12
                        Originally posted by Mcleod View Post

                        Thanks for that. I had a bit of a tinker with the settings myself to see what sort of results I got, and I have to say I was staggered by the difference these two settings alone made lol.
                        Here are the results with the settings on.
                        I also find the disparities interesting. But more surprising is that AMD and PassMark cannot be bothered to work on a fix.

                        I wonder if that's why UL is the industry standard benchmark.

                        • #13
                          There is nothing to fix. The opposite in fact.

                          Device driver settings have always effected quality and frame rates. That is their purpose. If device driver settings didn't effect the benchmark, then it would be a pretty poor benchmark (i.e. it wouldn't match the behaviour of real life games & 3D apps).

                          When you tick a box like "Limit frame rates to save power", then it is totally normal that frame rates are limited to save power. Same for underclocking your video card to reduce heat. You expect lower frame rates. To claim that AMD (or us) should somehow "fix" this is a nonsense.

                          • #14
                            The DX9 test exhibits extreme stutters and frame rate fluctuations with default driver settings. This is despite the RX 6000 series scaling normally in other DX9 applications like 3DMark06 (results from Notebookcheck.com).

                            PassMark's answer: nothing to see here.

                            The DX9 test performs more smoothly and twice as fast (CPU-limited due to no multithreading) with random settings enabled.

                            PassMark's answer: "[w]hen you tick a box like 'Limit frame rates to save power'"


                            David, did you look at the screenshots? Because nothing I enabled had anything to do with limiting frame rates.


                            Your hubris is astounding for someone who can't discern the correct usage of "effect" and "affect." Why don't you look at the distribution of results for the RX 6000 series versus the RTX 3000 series and tell me there is nothing to fix.

                            • #15
                              You are comparing the AMD RX 6800 to the nVidia RTX 3070.
                               The 3070 is 10x more popular than the 6800, so the bell curve distribution is much smoother for the 3070. This is normal for real-world data plotted as a histogram.
                              [Please don't bother pointing out it isn't exactly a normal distribution bell curve, it is close enough]
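
                               A toy illustration of the sample-size effect (the distribution parameters below are invented, not fitted to our data):

                               ```python
                               import numpy as np

                               rng = np.random.default_rng(0)

                               # Same-shaped score distribution, 10x more samples for the
                               # more popular card; means/spreads invented for illustration.
                               rtx3070 = rng.normal(loc=22000, scale=2500, size=5000)
                               rx6800 = rng.normal(loc=23000, scale=2500, size=500)

                               # With 10x fewer samples the same bins give a lumpier histogram
                               for name, data in [("RTX 3070", rtx3070), ("RX 6800", rx6800)]:
                                   counts, _ = np.histogram(data, bins=30)
                                   print(name, counts)
                               ```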

                               If the default device driver settings are rubbish for this combination of video card / CPU / device driver version / monitor / OS version / resolution, that isn't really our problem. Just change the driver settings if you are expert enough to do so, and move on. Yes, it will result in a wider distribution of results, but that will be reflective of the real world.

                               Also, there is a known issue with the latest Windows patch releases causing stutters:
                              https://kotaku.com/microsoft-fixes-w...1846768098/amp

                              correct usage of "effect" and "affect."
                              Petty.
                               As per Godwin's law, your next post should bring up the Nazis.
