Big GPU Shootout, Part III: PCIe 1.0 vs. PCIe 2.0
3DMark06
3DMark06 remains the number-one utility used for system benchmarking. The numbers it produces aren't indicative of real-world gameplay – or of any gameplay at all – and for that reason we really dislike using it to compare different systems. However, as long as the rest of the tech world uses it to evaluate gaming performance, we will too. We find it most useful for tracking changes within a single system, which is exactly what we are doing here. 3DMark06 uses four "mini-games" to benchmark graphics, as well as two CPU tests. The sub-scores are weighted and combined into an overall number, and a further breakdown of these mini-games is possible, which we chart for you below.
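The weighting step described above can be sketched as follows. Note that the weights and sub-scores below are made-up placeholders for illustration only; Futuremark's actual scoring formula is published in its own whitepaper and is not reproduced here:

```python
# Illustrative sketch of how a benchmark suite might combine sub-scores
# into one overall number. The weights here are hypothetical placeholders,
# NOT Futuremark's actual 3DMark06 formula.

def overall_score(gpu_tests, cpu_tests, gpu_weight=0.8, cpu_weight=0.2):
    """Combine graphics "mini-game" and CPU sub-scores into one weighted number."""
    gpu_avg = sum(gpu_tests) / len(gpu_tests)  # average of the four graphics tests
    cpu_avg = sum(cpu_tests) / len(cpu_tests)  # average of the two CPU tests
    return gpu_weight * gpu_avg + cpu_weight * cpu_avg

# Four graphics sub-scores and two CPU sub-scores (invented numbers):
print(overall_score([5200, 5400, 6100, 5900], [2300, 2500]))
```

This is only meant to show why two systems with identical graphics sub-scores can still post different overall numbers when their CPU tests differ.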
Above is a scene from one of the four "mini-games" used to benchmark GPU performance. It will give your PC a real workout even though the default resolution is only 1280×1024 (as pictured). Here are the results of our 3DMark06 comparison at the benchmark's default settings:
3DMARK06
Unlike in Parts I and II of our series, we have a small surprise. We can see the ranking, and we note that both vendors' drivers are mature. The HD 4870 X2 scales well, although CrossFireX-3 still barely scales with this combination of drivers and hardware in this synthetic benchmark. The thing to note, however, is the comparison between our two motherboards. First, there is little difference with the 4870 512MB on either the PCIe 1.0 board or the PCIe 2.0 board. However, the GTX 280 makes a significant jump from 15073 to 16045 – roughly 6% – simply by moving to PCIe 2.0! There is almost no difference in 4870 X2 or CrossFireX-3 performance between the two motherboards, which could possibly be attributed to drivers. We also note that the 4870 1GB scores a bit higher than its 512MB sister card.
The 3DMark06 mini-games compare the video cards' performance more specifically, and we note the differences are more significant than the final overall 3DMark score might suggest. Here we also see that the 4870 X3 outperforms the 4870 X2 in every test, and that there is generally a small performance increase going from the limited bandwidth of PCIe 1.0 to PCIe 2.0. Again, there is little difference in this benchmark whether the 4870 X2's 2GB of VRAM is paired with a second CrossFired card carrying 512MB instead of 1GB, or whether that second card is limited to x4 PCIe instead of x16.
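For context on what "limited bandwidth" means here, the theoretical link rates can be worked out from the published PCIe transfer rates. Both PCIe 1.x (2.5 GT/s per lane) and PCIe 2.0 (5 GT/s per lane) use 8b/10b encoding, so usable one-way bandwidth is the transfer rate times the lane count times 8/10, converted from gigabits to gigabytes:

```python
# Theoretical per-direction PCIe bandwidth. PCIe 1.x runs at 2.5 GT/s per
# lane and PCIe 2.0 at 5 GT/s per lane; both use 8b/10b encoding, so only
# 8 of every 10 bits transferred carry payload.
GT_PER_S = {"1.0": 2.5, "2.0": 5.0}  # giga-transfers per second, per lane

def pcie_bandwidth_gbs(gen, lanes):
    """Theoretical one-way bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return GT_PER_S[gen] * lanes * (8 / 10) / 8  # apply 8b/10b, bits -> bytes

print(pcie_bandwidth_gbs("1.0", 16))  # x16 PCIe 1.0 -> 4.0 GB/s
print(pcie_bandwidth_gbs("2.0", 16))  # x16 PCIe 2.0 -> 8.0 GB/s
print(pcie_bandwidth_gbs("2.0", 4))   # x4  PCIe 2.0 -> 2.0 GB/s
```

So an x4 PCIe 2.0 slot offers the same theoretical bandwidth as an x8 PCIe 1.0 slot, and half of what a full x16 PCIe 1.0 slot provides, which helps explain why the x4 limitation barely registers in these tests.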
Curious, why’d you set Catalyst A.I. to “Advanced”?
How about a few links explaining Catalyst A.I. and what "Advanced" really does? Here is an older article on it:
http://www.hardocp.com/article.html?art=NjY2LDI=
And here is the tweak guide that supports my own research:
http://www.tweakguides.com/ATICAT_7.html
“Catalyst A.I. allows users to determine the level of ‘optimizations’ the drivers enable in graphics applications. These optimizations are graphics ‘short cuts’ which the Catalyst A.I. calculates to attempt to improve the performance of 3D games without any noticeable reduction in image quality. In the past there has been a great deal of controversy about ‘hidden optimizations’, where both Nvidia and ATI were accused of cutting corners, reducing image quality in subtle ways by reducing image precision for example, simply to get higher scores in synthetic benchmarks like 3DMark. In response to this, both ATI and Nvidia have made the process transparent to a great extent. You can select whether you want to enable or disable Catalyst A.I. for a further potential performance boost in return for possibly a slight reduction in image quality in some cases. If Catalyst AI is enabled, you can also choose the aggressiveness of such optimizations, either Standard or Advanced on the slider. The Advanced setting ensures maximum performance, and usually results in no problems or any noticeable image quality reduction. If on the other hand you want to always ensure the highest possible image quality at all costs, disable Catalyst A.I. (tick the ‘Disable Catalyst A.I.’ box). I recommend leaving Catalyst A.I enabled unless you experience problems. ATI have made it clear that many application-specific optimizations for recent games such as Oblivion are dependent on Catalyst AI being enabled.
Note: As of the 6.7 Catalysts, Crossfire users should set Catalyst A.I. to Advanced to force Alternate Frame Rendering (AFR) mode in all Direct3D games for optimal performance. Once again, Catalyst A.I. should only be disabled for troubleshooting purposes, such as if you notice image corruption in particular games”
In other words, you can choose the aggressiveness of the optimizations: either "Standard" or "Advanced". The Advanced setting ensures maximum performance – ideal for benchmarking games – with no noticeable image quality reduction. However, if you are doing image quality comparisons, as BFG10K did, and want to guarantee the very highest image quality, then disable Catalyst A.I. [but not for CrossFire; set it to "Standard" instead]. I have always recommended leaving Catalyst A.I. enabled unless you experience glitches in games.
You have to realize that Catalyst A.I. is not necessarily supposed to give you a boost in every single game. It attempts optimizations where possible, but often they are either not applicable to a particular game, or the in-game settings you have chosen are too low for them to make any noticeable impact.
That is why I recommend leaving it on "Advanced": you get a possible performance boost, and if not, you lose nothing. Alternatively, set it to Standard or turn it off if you feel your image quality is being degraded.
Hope that explains it.
Very interesting. I'll definitely be checking your site on a regular basis now.