Part IV: Big GPU Shootout – Bringing it all Together – the Summary
CRYSIS
Now we move on to Crysis, one of the most demanding games released to date for the PC. Crysis is a sci-fi first-person shooter developed by Crytek and published by Electronic Arts in November 2007. It is set in a fictional near-future where an ancient alien spacecraft is discovered buried on an island near the coast of Korea. The single-player campaign has you assume the role of a U.S. Delta Force operator, ‘Nomad’ in the game. He is armed with various futuristic weapons and equipment, including a “Nano Suit” which enables the player to perform extraordinary feats. Crysis uses DirectX 10 for graphics rendering.
A standalone but related game, Crysis Warhead, was released in September 2008. It is notable for providing a similar graphical experience to Crysis, but with lower graphical demands on the PC at its highest ‘enthusiast’ settings. CryEngine2 is the game engine that powers both Crysis and Warhead, and it is an extended version of the CryEngine that powered Far Cry. As well as supporting Shader Model 2.0, 3.0, and DirectX 10’s Shader Model 4.0, CryEngine2 is multi-threaded to take advantage of SMP-aware systems. Crysis also comes in 32-bit and 64-bit versions, and Crytek has developed its own proprietary physics system, called CryPhysics. There are three built-in demos that are very reliable for comparing video card performance; note, however, that actually playing the game runs a bit slower than the demos imply.
GPU Demo, Island
All of our settings are set to ‘maximum’ but we do NOT apply any AA/AF in the game. Here are Crysis’ Island demo benchmarks, first at 1920×1200 resolution and then at 1680×1050:
We see the 4870-1GB take a jump over the first September and October driver releases, while the 4870-X2 and CrossFireX-3 configurations mostly hold steady. Nvidia posts a decent increase over the same months; performance slips very slightly with the latest release, however.
(4870/Catalyst 8.9 correction – 31 maximum, 24.80 average and 20 minimum)
(GeForce 180.48/181.20, not 181.84/181.40)
Not much difference here compared to the 1920×1200 resolution, though a little improvement is generally noted at 1680×1050. There is also no real advantage to playing with CrossFireX-3 over the 4870-X2.
Man, there’s a *ton* of data here. Nice work.
Could have sworn I already posted this here, but it seems it’s gone. But anyways, why’d you use Catalyst A.I. “Advanced”?
heck .. i posted a really detailed response and it is gone also. =(
Let me check into this .. basically, Cat AI set to “Advanced” maximizes performance – which is what you want for benchmarking – without impacting IQ.
However, if you are doing an IQ comparison – like BFG10K’s – you set Cat AI to Off, or to Standard for CrossFire.
I have zero idea what happened; but I found what I posted in my notes [thankfully]; here is a C&P of what I submitted originally – right after you asked:
How about a few links to explanations of Catalyst AI and what “advanced” really does? Here is an old article on it:
http://www.hardocp.com/article.html?art=NjY2LDI=
Here is the tweak guide which supports my own research:
http://www.tweakguides.com/ATICAT_7.html
“Catalyst A.I. allows users to determine the level of ‘optimizations’ the drivers enable in graphics applications. These optimizations are graphics ’short cuts’ which the Catalyst A.I. calculates to attempt to improve the performance of 3D games without any noticeable reduction in image quality. In the past there has been a great deal of controversy about ‘hidden optimizations’, where both Nvidia and ATI were accused of cutting corners, reducing image quality in subtle ways by reducing image precision for example, simply to get higher scores in synthetic benchmarks like 3DMark. In response to this, both ATI and Nvidia have made the process transparent to a great extent. You can select whether you want to enable or disable Catalyst A.I. for a further potential performance boost in return for possibly a slight reduction in image quality in some cases. If Catalyst AI is enabled, you can also choose the aggressiveness of such optimizations, either Standard or Advanced on the slider. The Advanced setting ensures maximum performance, and usually results in no problems or any noticeable image quality reduction. If on the other hand you want to always ensure the highest possible image quality at all costs, disable Catalyst A.I. (tick the ‘Disable Catalyst A.I.’ box). I recommend leaving Catalyst A.I enabled unless you experience problems. ATI have made it clear that many application-specific optimizations for recent games such as Oblivion are dependent on Catalyst AI being enabled.
Note: As of the 6.7 Catalysts, Crossfire users should set Catalyst A.I. to Advanced to force Alternate Frame Rendering (AFR) mode in all Direct3D games for optimal performance. Once again, Catalyst A.I. should only be disabled for troubleshooting purposes, such as if you notice image corruption in particular games”
In other words, you can choose the aggressiveness of your optimizations, either “Standard” or “Advanced”. The Advanced setting ensures maximum performance – which is what you want for benchmarking games – with no noticeable image quality reduction. However, if you are doing IQ comparisons as BFG10K did, and want to guarantee the very highest image quality, then disable Catalyst A.I. [but not for CrossFire; set it to “Standard” there]. I have always recommended leaving Catalyst A.I. enabled unless you experience any glitches in games.
You have to realize that Cat AI is not necessarily supposed to give you a boost in every single game. It tries to do optimizations, if possible, but many times these are either not possible with a particular game, or the settings you’ve chosen in the game may be too low for it to make any noticeable impact.
That is why I recommend leaving it on “Advanced”: you get a possible performance boost, and if not, you lose nothing. Or you can set it to Standard or Off if you feel your image quality is being degraded.
Hope that explains it.
I figured it out =P
There are 4 parts so far to the “Big GPU Shootout”; this is ‘Part 4, The Summary’.
You posted your question in ‘Part 3, PCIe 1.0 vs 2.0’ yesterday:
http://alienbabeltech.com/main/?p=2249#comments
Ah, got ya.
Sorry about posting the same question twice.
I just thought it was a little different since most performance reviews I’ve seen leave it at “Standard”, but they may just do so to simulate a straight-out-of-the-box scenario.
No problem whatsoever. It took me a bit to figure out what happened myself.
I am trying to simulate how some of us actually play games. I usually use “Advanced” in my own games – and always play with settings as maxed-out as the frame rates support. When there are issues, I note them. My results may vary slightly from other sites’, as there are no recognized standard or universal settings for all of us to use. Each of the games that I pick for my benchmarking is well-tested to make sure that “Advanced” does not cheat IQ-wise from driver release to driver release. There is also not a big difference even in still close-ups, nor is the performance increase very large. I doubt anyone can tell any real difference while actually playing the game, in most cases. This “Standard vs. Advanced” setting may also be the focus of my own IQ article in the future – if there is interest. I also play through most of these games myself and will update Source Engine’s HL2 to L4D, for example, soon enough.
In fact, there are 3 new game benches that I am validating and adding for the upcoming “Quad-Core vs. Dual-Core Shootout – Q9550S vs. E8600”, to be published this Sunday or Monday. Planned for next week is a really expanded test, including CrossFireX-3, covering the same ground so as to compare both CPUs at 3.99 GHz, and also with the E8600 OC’d further to 4.25 GHz. It will also examine CPU “scaling” from stock speeds and in-between to try to find an elusive “sweet spot” for top video cards.