  • Intel 750/770 GPU Benchmark scores

    I first want to say I've mainly used this site to keep an eye on value GPUs and current prices, as you do a tremendous job there.

    The Intel 750/770 cards are the first time I've noticed such low scores for a card that other sources tout as being more in line with top value cards, in this case the AMD 66XX series. Are the G3D Marks on your site still based on scores from the old drivers? Or is there something particular about the Intel cards that still yields such poor results even with the new drivers? (I would still buy an AMD card right now.)


  • #2
    Nearly all data in the charts comes from user submissions from our PerformanceTest software. The results are an average of all submissions, so the more samples we get, the more accurate the results become, whereas GPUs with only a few samples have less accurate results.

    With the Intel Arc series it may take a while, as the drivers are a work in progress and their non-DirectX 12 performance is currently severely lacking. The G3D Mark, the figure in the chart, is a composite of the various tests.

    [Attached image: image.png]
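    As a rough illustration only - the test names and weights below are placeholders, not the actual G3D Mark formula - the aggregation works along these lines: average each test across all submissions for a model, then combine the per-test averages into the composite figure.

```python
# Rough illustration of how a composite figure can be built from submissions.
# Test names and weights are placeholders, not the actual G3D Mark formula.
from statistics import mean

submissions = [
    {"DX9": 150.0, "DX10": 95.0, "DX11": 120.0, "DX12": 60.0},
    {"DX9": 140.0, "DX10": 90.0, "DX11": 115.0, "DX12": 58.0},
    # ... thousands more samples for the same GPU model
]

# Step 1: average each individual test across all submissions for the model.
per_test_avg = {test: mean(s[test] for s in submissions) for test in submissions[0]}

# Step 2: combine the per-test averages into a single composite figure.
weights = {"DX9": 1.0, "DX10": 1.0, "DX11": 1.0, "DX12": 1.0}  # placeholder weights
composite = sum(per_test_avg[t] * w for t, w in weights.items()) / sum(weights.values())
print(per_test_avg, round(composite, 1))
```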



    • #3
      Thanks for the explanation.



      • #4
        Although the averages are based on fewer samples, once your site hits more than 100 samples for a product the values always seem more accurate than what is going on with the Intel ARC products. Which is strange, because the number does not seem to improve nearly as much as what you see in driver-improvement game reviews. I still think something specific is hurting their value in the PassMark test suite. The numbers are just too far apart from the real-world experiences people have and from where other reviewers position the card performance-wise. I do not expect you to change the test suite, but at least look into it, because the numbers are not off by a little but by a mile.
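        As a side note, here is a back-of-the-envelope sketch (my own illustrative numbers, nothing from PassMark) of why I would expect averages to settle once a product passes roughly 100 samples: the standard error of the mean shrinks with the square root of the sample count, so a stubbornly low average points at the submitted scores themselves rather than the sample size.

```python
# Back-of-the-envelope only: how the standard error of an average shrinks
# with sample size. The spread value is an assumption, not PassMark data.
import math

assumed_spread = 1500.0  # assumed standard deviation of scores for one GPU model

for n in (10, 50, 100, 500, 1000):
    standard_error = assumed_spread / math.sqrt(n)
    print(f"n={n:4d}  standard error of the average ~ {standard_error:6.1f} points")

# Past ~100 samples the error is already small, so a "stuck" average usually
# means the submitted scores themselves haven't improved (e.g. old drivers),
# not that there are too few samples.
```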



        • #5
          I also forgot to ask something, but I could not edit my previous post. How heavily does each DirectX version count towards the total number? Because if they all count the same, that would not represent usage in games at all. The distribution among the number of commercially released games, without taking release date into account, is roughly (see the toy calculation after this list):
          5% for DirectX 12 (still young and more and more used)
          50% for DirectX 11
          3% for DirectX 10 (never a very popular DirectX version)
          42% for DirectX 9 (mostly older games, hardly anything released after 2018 depended on this version, but still a lot of older games that are played a lot to this day)
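          To make the question concrete, here is the toy calculation mentioned above. It uses the usage shares from the list, hypothetical per-API sub-scores, and not PassMark's actual weighting - it only shows how much the choice of weights can move a composite figure.

```python
# Toy calculation: equal weighting vs. usage-share weighting of DirectX
# sub-scores. Sub-scores are hypothetical and this is NOT PassMark's actual
# formula; it only illustrates how much the weighting choice matters.
subscores = {"DX9": 160.0, "DX10": 100.0, "DX11": 120.0, "DX12": 45.0}

equal_weights = {api: 0.25 for api in subscores}
usage_share = {"DX9": 0.42, "DX10": 0.03, "DX11": 0.50, "DX12": 0.05}  # from the list above

def composite(scores, weights):
    return sum(scores[api] * weights[api] for api in scores)

print("equal weighting:      ", round(composite(subscores, equal_weights), 1))
print("usage-share weighting:", round(composite(subscores, usage_share), 1))
# A card that is weak in only one API moves very differently depending on
# how heavily that API counts.
```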

          Of course, the newer the DirectX version, the more recent the games that use it tend to be. DirectX 11 has been available since October 2009, and as it has become the most popular version it cannot be bad. It is a shame that so many companies kept making games for years that were still based on DirectX 9 from 2002. Not that my ranting matters for your test suite, which should just reflect the real usage in released games.

          I know there are a lot of games that use version 9 that stick in people's memory, and people say they are still popular titles and put them in lists as if they are still played a lot. But many (yes, I know not all) of those mentioned have a player count lower than 100 worldwide. The list of DirectX 9 titles that still have a huge player base is not as big as many want to believe.



          • #6
            because the number does not seem to improve nearly as much as what you see in driver-improvement game reviews
            I think Intel is only optimizing for selected DX9, 10 & 11 games (i.e. cherry picking in their press releases). I suspect the average improvement across all DX9 games is a lot lower than for the few they optimized for. Those numbers in the post above were also posted 3 months ago and are slightly out of date now.

            Also after the initial un-optimised driver release it can take a long time and a lot of new benchmark samples before the average starts to look better. Half the people who bought these cards probably installed them and never updated the drivers after the initial install.




            • #7
              DirectX 9 gets continued use because Microsoft never made DX12 available on Win7.

              Developers want a large audience, and the new features in DX12 compared to DX9 aren't that important for many games.

              The formula for the 3DMark and PassMark ratings can be found here.



              • #8
                Well, curiously enough, I've got some info I can add to this discussion. There are definitely some issues, for sure - and they stem from different places.

                PLACE #1 - Intel
                Intel's driver updates have definitely made a HUUUUGE difference in the performance of the A750/A770 cards in 'everyday' use; they've also made a big change in their PassMark performance. When I first began testing the PC setup I'm on now (all the same except for an upgrade to system RAM), A770s were getting ~8900 in PassMark - about the same as a Radeon RX 580! Today, PassMark runs are showing ~14000, which is a far cry better, but still not truly accurate (then again, the sample size is only 533 samples according to the website).

                PLACE #2 - Versioning
                The videocardbenchmark.net page for the A770 (https://www.videocardbenchmark.net/g...c+A770&id=4605) and every other card page I checked say 'From submitted results to PerformanceTest V10 as of 2nd of October 2023.' I don't know how that factors in now that everything is supposed to use/reference V11, but it may be b0rking things up when trying to compare.

                PLACE #3 - PerformanceTest 'Oddities'
                There are definitely some weird things that I can see just looking at PerformanceTest results and videocardbenchmark.net. Take this screenshot for example, which I captured from https://www.videocardbenchmark.net/c...Force-RTX-3060:
                [Attached image: Screenshot 2023-10-03 105120.png]

                It says the numbers in parentheses are the percent difference to the max in the group. However, it seems to be running everything against the LAST card in the group (the 3060), not the MAX in the group (the 3080) - EXCEPT for the G2D rating, which actually DOES use the 3080 as the max in group.
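                To illustrate what I mean, here's a quick sketch with made-up ratings (placeholders, not the actual values from the screenshot) showing how different "percent of the max in the group" and "percent of the last card in the group" can look:

```python
# Made-up ratings to show "relative to max in group" vs "relative to last card".
group = {"RTX 3080": 25000, "RX 580": 8900, "RTX 3060": 17000}  # placeholder values

max_rating = max(group.values())
last_rating = list(group.values())[-1]  # the last card listed (the 3060 here)

for name, rating in group.items():
    vs_max = 100.0 * rating / max_rating
    vs_last = 100.0 * rating / last_rating
    print(f"{name:9s}  vs max: {vs_max:6.1f}%   vs last card: {vs_last:6.1f}%")
# If the label means "% difference to the max in the group", the first column
# is the one that should be shown for every row.
```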


                As for items directly from PerformanceTest, I ran some numbers yesterday, 2023-10-02. I personally ran all of these benchmarks EXCEPT for the top two. As you can see, there was definitely something weird going on with my 13900K, since only once across all of those tests, on both the 1001 and 1006 builds, does it show all 24 cores.

                Now, I think that we'd likely all agree that the only way a 3080 will be SLOWER than an RX 580 is if something is wrong.

                [Attached image: Screenshot 2023-10-03 104003.png]
                [Attached image: PerfRes2.png]

                I think there may also be a Vsync/VRR bug specifically on the DX10 test. For me to have run it 5 different times on 3 different dates with 2 different revisions of the program and gotten exactly perfect 60 FPS scores each time...? Also, that 3080 getting a perfect 30 FPS? I'm wondering if they were running 4K30 on a secondary monitor and got hardlocked to 30 FPS for their score. I was running it at 4K60 on a single monitor on the A770 and saw framerates between 21 and 126 FPS, yet every run averaged a perfect 60 FPS.
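                For what it's worth, a rough heuristic along these lines (entirely my own idea, not anything that exists in PerformanceTest) would catch runs like that: an average sitting exactly on a common refresh rate while the min/max range is wide looks capped.

```python
# Heuristic sketch (my own idea, not part of PerformanceTest): flag runs whose
# average FPS sits exactly on a common refresh rate even though the observed
# min/max range during the run was wide.
COMMON_CAPS = (30.0, 60.0, 120.0, 144.0)

def looks_capped(avg_fps, min_fps, max_fps, tolerance=0.5):
    near_cap = any(abs(avg_fps - cap) <= tolerance for cap in COMMON_CAPS)
    wide_range = (max_fps - min_fps) > 20.0  # frame rate varied a lot during the run
    return near_cap and wide_range

print(looks_capped(60.0, 21.0, 126.0))  # True  -> likely Vsync/VRR capped
print(looks_capped(58.3, 41.0, 73.0))   # False -> plausible uncapped average
```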



                • #9


                  Let's add a few other 3080 baselines to the mix just for posterity. I'm all for keeping *ALL* records to show proper/expected performance curves from samples, but there does need to be a point where results that are either A) outside of 'reasonably expectable' - let's say something like 'greater than 3 standard deviations from the mean' - or B) zero/null values get excluded from calculations or from use as 'baselines.' Perhaps as an option/preference setting or checkbox, so that users who want 100% of everything can have it, and those who want a more 'real-world accurate' comparison can leave them out?
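                  Something like the following is what I have in mind - just a sketch of the suggestion with made-up sample values, not how PassMark actually filters anything today:

```python
# Sketch of the idea above: drop zero/null scores and anything more than
# 3 standard deviations from the mean before charting or picking baselines.
# This is a suggestion with made-up values, not PassMark's current logic.
from statistics import mean, stdev

def filter_outliers(scores, sigmas=3.0):
    valid = [s for s in scores if s is not None and s > 0]  # drop zero / null results
    if len(valid) < 2:
        return valid
    mu, sd = mean(valid), stdev(valid)
    return [s for s in valid if abs(s - mu) <= sigmas * sd]

# 20 plausible 3080-class scores plus a null, a zero and one wildly low result.
samples = [25000 + 150 * i for i in range(-10, 10)] + [None, 0, 7300]
cleaned = filter_outliers(samples)
print(len(samples), "submitted ->", len(cleaned), "kept")  # 23 submitted -> 20 kept
```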





                  The page for the 3080 says that 13,602 data samples have been submitted for the 3080, so that's a fairly healthy sample size. The graph/chart of the distribution curve is significantly bonkazoid, though. It's broken up into 100 brackets, like so:




                  One of the things I noticed is that the bracket names overlap - the first bracket is "7000 - 7280" and the second one is "7280 - 7560" instead of "7281 - 7560." I don't think any results are being counted twice (or dropped), but it could be an issue?
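                  For reference, the usual way to avoid that ambiguity is half-open brackets, where each bucket includes its lower edge and excludes its upper edge. A quick sketch, taking the 280-point width from the labels above and assuming 100 brackets as described (everything else is made up):

```python
# Half-open bracketing: each bucket is [low, high) so a score of exactly 7280
# lands in one bucket only. Width of 280 and 100 brackets taken from the
# chart labels above; the rest is illustrative.
BRACKET_START, BRACKET_WIDTH, NUM_BRACKETS = 7000, 280, 100

def bracket_index(score):
    i = int((score - BRACKET_START) // BRACKET_WIDTH)
    return max(0, min(NUM_BRACKETS - 1, i))

for s in (7279, 7280, 7281):
    i = bracket_index(s)
    low = BRACKET_START + i * BRACKET_WIDTH
    print(f"score {s} -> bracket {i + 1}: {low} - {low + BRACKET_WIDTH - 1}")
# score 7279 -> bracket 1: 7000 - 7279
# score 7280 -> bracket 2: 7280 - 7559
# score 7281 -> bracket 2: 7280 - 7559
```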

                  Also, the data distribution is MASSIVELY skewed by the long run-out into the lower score brackets. Even accounting for different cooling solutions and cases and airflow, there seems to be an issue here with erroneous results. I did some number crunching based on the info shown here and put it all in a Google Sheet:

                  https://docs.google.com/spreadsheets...1880526&rtpof= true&sd=true

                  In case that link isn't allowed, I've snipped a portion of it as shown below. Out of the 13602 samples:
                  • about a quarter of them fall into brackets 70-77
                  • about half of them fall into brackets 65-82
                  • about three-quarters of them fall into brackets 60-88
                  • the (roughly) top 5 percent of them occupy brackets 89-100
                  • the remaining TWENTY TWO PERCENT (almost 1/4 of the entire quantity of samples!) occupy brackets 1-59

                  I'm not a full-on stats guy, but when 22% of results take up 60% of the distribution brackets and 75% of results fall into 29% of them... there are some erroneous results that, at a minimum, need to be flagged with some kind of asterisk.




                  • #10
                    A770s were getting ~8900 in PassMark - about the same as a Radeon RX 580! Today, PassMark runs are showing ~14000, which is a far cry better
                    This is good data.
                    It's a shame a significant number of people who already own an ARC card will probably never get around to updating their drivers, so the average will likely be suppressed for a long while.

                    says 'From submitted results to PerformanceTest V10 as of 2nd of October 2023.'
                    We'll fix the typo. The vast majority of results are now from V11, not V10 anymore.

                    ... it seems to be running everything against the LAST card in the group
                    Agreed. While the numbers aren't wrong, it's not totally logical at the moment. We'll fix it.

                    I think there may also be a Vsync/VRR bug specifically on the DX10 test
                    The benchmark application doesn't request Vsync/VRR to be used, so this is more likely a driver bug or a driver/monitor configuration issue.



                    • #11
                      ...there are some erroneous results that, at a minimum, need to be flagged with some kind of asterisk.
                      Unfortunately real life is a mess.
                      A larger percentage of people than you might expect have machines that are sub-optimal.
                      Dozens of different factors impact the results. Driver versions, OS versions, driver settings, monitor resolutions, CPU pairings, RAM speeds, RAM channels in use, power config settings, cooling, PCIe slot width, PCIe slot generation in use, overclocking, security patches, background crapware, overlays and streaming apps, DPI settings, HDR settings, etc.. etc...

                      All of this makes for a wide distribution of results. But the mess pretty much affects all computers, so it kind of evens out when doing relative comparisons.



                      • #12
                        Originally posted by David (PassMark)

                        Unfortunately real life is a mess.
                        A larger percentage of people than you might expect have machines that are sub-optimal.
                        Dozens of different factors impact the results. Driver versions, OS versions, driver settings, monitor resolutions, CPU pairings, RAM speeds, RAM channels in use, power config settings, cooling, PCIe slot width, PCIe slot generation in use, overclocking, security patches, background crapware, overlays and streaming apps, DPI settings, HDR settings, etc.. etc...

                        All of this makes for a wide distribution of results. But the mess pretty much affects all computers, so it kind of evens out when doing relative comparisons.
                        A wide distribution of results, sure - I'm not saying anything against that at all, OR against your software - I truly love it! I genuinely just want to help it be the best that it can be. My point with the comment about flagging results was just that: to help guide the folks looking at the DB numbers or trying to see how their system is doing, and to make PerformanceTest even better.

                        The folks out there (like me) who do things like slap hyper-powered cards into older MB/CPU/RAM combos, or older cards into modern MBs (I've got a 5600G in an X370 motherboard with PCI slots, to play with an old Voodoo Banshee) just for the silliness of it know we're being silly. The kid who slaves away mowing a million miles of lawns to save up for a rocking awesome video card to drop into the PC they got for their birthday two years prior may not realize that their bargain-sale PC is bottlenecking that rocking GPU.

                        Some sort of "Yo dawg, your GPU is turning in scores that are >80% below the average for your model - you might need to look into the rest of your PC hardware and software because this seems outta whack" warning/message, coupled with a 'bounds limiter' for videocardbenchmark.net that occludes/hides/stars stuff that looks sus. At the very least, flag cases where results are submitted with 'test ran successfully with no errors' but the returned test score is a flat zero - I mean, if 95%+ of all submissions for a particular model have a score greater than zero, then any 0 FPS scores can basically be guaranteed to be inaccurate.
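                        Roughly, the check I'm imagining would look like this - purely a sketch of the suggestion with made-up thresholds, not anything PassMark has said they'd implement:

```python
# Sketch of the suggested warning / bounds-limiter logic. Thresholds are
# made up for illustration; this is a suggestion, not existing PassMark code.
def review_submission(score, model_average, reported_errors):
    if not reported_errors and score == 0:
        return "exclude: successful run but a zero score is almost certainly bogus"
    if model_average > 0 and score < 0.2 * model_average:
        return ("warn: this score is >80% below the average for your GPU model - "
                "check drivers, power settings and the rest of the system")
    return "ok"

print(review_submission(0, 25000, reported_errors=False))
print(review_submission(4000, 25000, reported_errors=False))
print(review_submission(24500, 25000, reported_errors=False))
```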


                        Also - What's with the 13900K showing 8 cores every time EXCEPT for once when it showed all 24?



                        • #13
                          We have plans for the next major release to start to move into the space of automatically diagnosing performance problems (or at least giving intelligent suggestions). So some of this would be covered in that future release.

                          What's with the 13900K showing 8 cores every time EXCEPT for once when it showed all 24?
                          Where? Can you provide an example?
                          Note: older versions of PerformanceTest didn't know anything about performance and efficiency cores (nor does Win7).



                          • #14
                            As shown in my earlier screenshot of some runs I had done - I've reattached it below, hopefully in full size this time. The baselines for the 13900K with the Arc A770 were all done on Windows 11...



                            [Attached image: Screenshot 2023-10-03 104003.png]



                            • #15
                              It is a display bug in the baseline search window. It seems it is displaying the number of P-cores for baselines downloaded from our database. The total core count is correct in other windows, however, and it doesn't impact the results.

                              We'll fix it up.

