I'm getting some very odd-looking and erratic results with two tests in the 2D suite on Intel G43 chipsets.
I'm comparing results on three different machines:
1) Q8400, G43 onboard GFX, 4GB RAM, Win7x64, Latest Intel driver, default timings (no overclock), ASRock Mobo.
2) Same as 1, except 6GB RAM and Intel Mobo.
3) Q9550, Nvidia 220GT, 8GB RAM, Win7x64, recent (but not latest) Nvidia driver, CPU overclocked, but default timings on everything else including GFX, Gigabyte Mobo.
As you can see, all are socket 775 with generally similar properties.
Almost all of the benchmark scores make sense and are proportional to expected system capabilities, except for the 2D suite of tests. Systems 1 and 2 are getting enormous and wildly variable scores in "2D Complex Vectors" and "2D Image Rendering". System 3, with the Nvidia card, is scoring far lower than the other two.
Specifically:

                  Complex Vectors    Image Rendering
System 1, Run 1:  873                14159
System 1, Run 2:  9308               7236
System 2, Run 1:  3059               3461
System 3, Run 1:  123                422
I thought that perhaps the HPET might have something to do with this, but in a test I ran on System 2 there was only a negligible difference in the scores with the HPET enabled versus disabled.
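For what it's worth, here is why a timer could matter at all. This is a hypothetical sketch of my own (not the benchmark's actual code): if very short 2D operations are timed with a coarse timer, the measured duration depends mostly on where the operation falls relative to the timer's ticks, so a score derived from it swings wildly from run to run. Working in integer microseconds:

```python
def measured_us(true_duration_us, timer_resolution_us, start_phase_us):
    """Simulate timing one operation with a timer that only advances
    in whole ticks of timer_resolution_us."""
    q = timer_resolution_us
    start_tick = start_phase_us // q
    end_tick = (start_phase_us + true_duration_us) // q
    return (end_tick - start_tick) * q

# A 300 us operation timed with a 1000 us (1 ms) timer reads as either
# 0 us or 1000 us, depending purely on phase -- so a score computed as
# work divided by measured time is erratic (or divides by zero):
print(measured_us(300, 1000, 0))    # -> 0
print(measured_us(300, 1000, 800))  # -> 1000

# The same operation timed with a 1 us timer reads a stable 300 us:
print(measured_us(300, 1, 0))       # -> 300
```

Since switching the HPET on and off barely moved the scores, this particular effect evidently isn't the culprit here, but it's the kind of thing I was trying to rule out.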
Now the problem ought to be obvious: the Intel G43 chips are racking up scores that appear (to me) to be way out of line, and they vary wildly from trial to trial. The scores from the Nvidia card are consistent from trial to trial, to under 1% variation. Should these Intel G43 chipsets outperform a dedicated graphics card by these kinds of margins on these two tests? I doubt it. So what's the explanation? Here are two different sets of hardware, same OS, same chipset, same driver, etc., both of which are achieving what appear to be impossible scores compared to a competent lower-end dedicated graphics card. BTW, the Nvidia easily outscores the G43s on all 3D tests. On the other 2D tests the scores are in the same ballpark.
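Pure speculation on my part as to how a benchmark could produce "impossible" 2D scores: if the timer is stopped when drawing commands are submitted to the driver rather than when rendering actually completes, the score measures how fast the driver accepts commands, not how fast it draws. A driver that batches aggressively would look absurdly fast, and the score would vary with the state of its queue. A minimal simulation of that distinction (stand-in functions, not any real graphics API):

```python
import time

def fake_draw_call(cmds, n):
    """Stand-in for a 2D draw call: submitting is just queueing."""
    cmds.append(n)

def fake_flush(cmds):
    """Stand-in for actually rendering the queued commands."""
    total = 0
    for n in cmds:
        total += sum(i * i for i in range(n))  # pretend rendering work
    cmds.clear()
    return total

cmds = []
t0 = time.perf_counter()
for _ in range(1000):
    fake_draw_call(cmds, 2000)
submit_time = time.perf_counter() - t0   # timer stopped too early

fake_flush(cmds)
full_time = time.perf_counter() - t0     # timer stopped after real work

# Stopping the timer at submission yields a hugely inflated "ops/sec":
print(submit_time < full_time)  # -> True
```

Again, I have no idea whether the benchmark does anything like this; it's just one mechanism that would match the symptoms (inflated and erratic scores on one driver, stable ones on another).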
Looks to me like there could be a bug in the benchmark test in systems where G43 chipsets are used.