The GTX 590 vs. the HD 6990 – Only One is “the World’s Fastest Video Card”
Not long ago we reintroduced Nvidia’s “Tank”, the GTX 580/570, as a much leaner, meaner and faster machine, all the while improving on the thermals, power draw and noise of the reference GTX 480. The Tank refers to Nvidia’s flagship video cards, which are equipped to handle any gaming situation at high resolution with maximum details, filtering and anti-aliasing applied. Today, we see Nvidia’s new $699 flagship released – Tank times two! – the dual-GF110 GTX 590, which is designed to take on AMD’s $699 flagship, the HD 6990.
Nvidia released its long-awaited GeForce GTX 480, based on its brand new Fermi DX11 GF100 architecture, back in April of last year, six months later than AMD’s own DX11 Cypress video cards. The Fermi GPU – Graphics Processing Unit, a term originated by Nvidia – continues the strategy Nvidia has pursued since the G80, which launched over three years ago: create a general-purpose processor, co-equal with the CPU, that also renders amazing graphics. The culmination of Nvidia’s efforts with the GF100 DX11 Fermi architecture was the GTX 480, with the caveat that it ran rather hot and that cooling solutions based on the reference design were rather noisy.
Things changed very rapidly as Nvidia introduced a refined GTX 400 series ‘Tank’, the Galaxy GTX 480 SuperOverclock on a mature process, which we covered in this review. Shortly thereafter came the completely redesigned Nvidia Tank, the GTX 580 – at $499 suggested etail pricing and designed to be faster and more efficient than even the super-overclocked GTX 480s.
We saw AMD introduce their new lineup, the HD 68×0 series, to replace the HD 58×0 series in our review here. We found that the “Barts” GPU it is based on is only a mid-range part so far, with the HD 6870 only slightly faster than the HD 5850; the best part is that it replaces it for less money. At the end of last year we saw AMD’s Cayman release in the form of the HD 6970 and HD 6950. Nvidia took aim at the HD 6970 with the GTX 570, while the GTX 560 Ti takes on the HD 6870 and the HD 6950.
In turn, AMD released their flagship dual-GPU, $700 video card a few days ago on March 8, and we reviewed it here. It absolutely blew away the GTX 580 and AMD’s own HD 6970, with the caveat that it is rather noisy under full load. Now Nvidia has just released their own dual-GPU graphics card, their flagship GTX 590. Each of its two independent NVIDIA GF110 GPUs boasts 512 CUDA cores, and they are internally connected to deliver 1024 total cores of processing power. Each GPU has six 64-bit memory controllers, which make up a 384-bit memory interface per GPU. This is matched with 3 GB of total GDDR5 video frame buffer – 1.5 GB per GPU, just like two down-clocked GTX 580s.
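To put that memory interface in perspective, a quick back-of-the-envelope sketch can estimate per-GPU bandwidth from the 384-bit bus and the 1707 MHz memory clock listed later in this review (assuming, as with GDDR5 here, two data transfers per listed clock; the constants are this review’s figures, not an official Nvidia calculation):

```python
# Rough per-GPU memory bandwidth estimate for the GTX 590 (a sketch;
# figures are taken from this review's spec listing).
BUS_WIDTH_BITS = 384       # six 64-bit memory controllers per GPU
MEM_CLOCK_MHZ = 1707       # memory clock as listed in this review
TRANSFERS_PER_CLOCK = 2    # assumed double data rate per listed clock

bytes_per_second = (BUS_WIDTH_BITS / 8) * MEM_CLOCK_MHZ * 1e6 * TRANSFERS_PER_CLOCK
print(f"{bytes_per_second / 1e9:.1f} GB/s per GPU")  # prints "163.9 GB/s per GPU"
```

That works out to roughly 164 GB/s per GPU, in line with a down-clocked GTX 580.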
Here are the specifications for the GTX 590:
Since Nvidia’s new GTX 590 also comes with an MSRP of $699, which one is worth your hard-earned dollars, and is either worth the $700 that one would currently spend for a single video card?
To properly bring you this review, we are using our reference HD 6990 and HD 6970, which we put through their paces last week with the release 11.4 (beta) Catalyst drivers in the launch article. We are putting the HD 6990 head to head with the GTX 590 in 28 modern games and in 3 synthetic benchmarks to see which card may be for you, using 1680×1050, 1920×1200 and/or 2560×1600 resolutions. Since we are using very fast video cards, it makes sense to test them at the highest resolutions and with the most demanding playable settings that they can handle. Since we are matching the two top dual-GPU video cards against each other in a performance showdown, we also include HD 6970 CrossFire as well as GTX 580 SLI and GTX 560 Ti SLI configurations to get an idea of value.
Before we do performance testing, let’s take a look at the original Fermi GF100 GTX 480 and quickly recap its DX11 architecture and features, which we covered in our reviews of the GTX 480, published here, here and here. Senior Editor BFG10K reviewed the GTX 470 here and here, and Senior Editor MrK covered the GTX 465 here.
We also examined the performance of Galaxy’s GTX 480 SuperOverclock, and we reran the GTX 480 against stock and overclocked versions of the HD 5870, HD 6870 and HD 6850 here just a few weeks ago. We covered the GTX 580 in a review a few months ago, as well as the GTX 570 and the launch of the HD 69×0 series against the GTX 460. And Senior Editor Leon Hyman covered the GTX 460-768MB vs. the HD 5830 here last week. Now you are up to date.
Specifications
The GeForce GTX 590 is basically two GTX 580s on a single PCB. However, it was an exceptional engineering feat to get these two very powerful GPUs into a single two-slot video card that can handle the extreme thermals and wattage without sounding like a hairdryer.
Vapor Chamber Cooling
One of the reasons the GTX 590 is barely any louder than a GTX 580 – about on the same level as an HD 6970 – is that the thermals are tamed by Nvidia’s vapor chambers.
Here is a good look behind the fan.
Take a look at the coolers.
The GTX 590 was designed from the ground up to deliver exceptional tessellation performance, a key component of Microsoft’s DirectX 11 development platform for PC games. Tessellation allows game developers to take advantage of the GeForce GTX 590’s dual-GPU tessellation ability to increase the geometric complexity of models and characters and deliver far more realistic and visually rich gaming environments. You will soon see that although Nvidia’s GTX 590 is clocked far lower at 605/1707 MHz than the reference GTX 580 at 772/2004 MHz, it still delivers impressive performance.
In our testing, we lowered the clocks of a pair of GTX 580s in SLI to match the reference GTX 590 and discovered that the pair generally used 150 W less than at stock clocks – and this was without even lowering the voltage. Of course, Nvidia has had quite a bit of time to tweak these cards, and we find that the GTX 590 generally outperforms the underclocked GTX 580 SLI pair. We will draw attention to this in our regular testing of the individual games, where we compare underclocked GTX 580 SLI against the GTX 590.
Of course, we know that GTX 580 SLI beats HD 6970 CrossFire and the HD 6990. We do not know if the GTX 590 can manage the same feat, as it is rather underclocked. We also want to know if there is any more headroom to overclock it further; we suspect there is.
We see the new card supports the HDMI 1.4a standard, with one mini HDMI output and 3 DVI outputs. We can finally enjoy Nvidia’s Surround – their answer to AMD’s Eyefinity – on a single card. Unfortunately, all three displays must have the same native resolution, unlike Eyefinity, which only requires that all three displays support a common resolution. Next time we will bring you Eyefinity vs. Surround.
Now let’s compare the brackets and the connectors of the GTX 590 (above) with those of the HD 6990 (below).
The GTX 590 is much shorter than the AMD card and is physically able to fit in a lot more cases. No doubt the OEMs will really appreciate this.
Needless to say, the new Fermi dual-GF110 GTX 590 brings a lot of features to the table that current Nvidia customers will appreciate, including improved CUDA and PhysX, 2D and 3D Surround to drive up to 3 LCDs with a single card, superb tessellation capabilities, and two really fast GPUs compared to the GT200 series and even the hot-running GF100 series cards.
WHY choose a GTX 590?
Nvidia itself believes that the GTX 590 is about 1.5 times faster than a single GTX 580. Since the GTX 580 retails for $500, the performance-to-value ratio remains about the same. Now you can even have ultimate performance with Quad-SLI – two GTX 590s – as long as you have the right motherboard.
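The performance-per-dollar claim above is easy to sanity-check with quick arithmetic; this sketch uses only the figures already quoted (Nvidia’s own 1.5x estimate and the $500/$699 street prices), so take it as illustrative rather than a measured result:

```python
# Sanity check of the performance-per-dollar claim, using the review's
# figures: the GTX 590 at ~1.5x a GTX 580's speed, $699 vs. ~$500.
gtx580 = {"price": 500, "perf": 1.0}   # baseline
gtx590 = {"price": 699, "perf": 1.5}   # Nvidia's own performance estimate

ratio_580 = gtx580["perf"] / gtx580["price"]
ratio_590 = gtx590["perf"] / gtx590["price"]
print(f"GTX 590 delivers {ratio_590 / ratio_580:.2f}x the performance per dollar")
```

The result comes out to roughly 1.07x, which is why the value proposition is essentially a wash: you pay for convenience and a single slot, not for extra performance per dollar.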
Another reason to choose a GTX 590 would be where you only have one 16x PCIe slot available and you want the single fastest video card. Either the GTX 590 or the Radeon HD 6990 might fit the bill. But which one to choose? Read on.
Should you SLI your GTX 590?
If you have a top PC and $1500 to spend on graphics, you might consider Quad-SLI. It is supported by the GTX 590, and scaling has improved. There are also more compelling reasons besides increased performance to consider GTX 590 SLI, including being able to experience Nvidia’s multi-display 3D Vision Surround. The only other way to get similar, though lesser, performance is with 3 x GTX 580 Tri-SLI for about the same cost.
At this time, Nvidia does not support GTX 580 plus GTX 590 Tri-SLI, although they could easily enable it in the drivers if they chose. It would give enthusiasts a cheaper way to experience Tri-SLI without a very expensive motherboard, as the two cards can run together without a PCIe slot in between, as shown below. Notice the GeForce logo lights up on the GTX 590; a nice touch!
The GTX 580 was used with the GTX 590 as a dedicated PhysX card; the SLI bridge is only shown for illustration. Here is what the control panel looks like.
New Power Monitoring Hardware – or no more Furmark!
Nvidia has added a power draw limitation system to their cards beginning with the GTX 580, and also to the GTX 570 and GTX 560 Ti. When either Furmark or OCCT is detected, sensors measure the incoming current and voltage to calculate the total power draw. If the power draw exceeds a predetermined limit, the GTX 590 will automatically downclock to avoid damage to hardware components. After the power draw drops back within safe limits, the GPU returns to normal clocks, much the same as with thermal management.
Because of this, we will no longer use Furmark for showing power draw and will return to using games to illustrate real-world situations. Currently, this power management only switches on when Furmark or OCCT is detected, and it should not limit overclocking unless Nvidia extends this management to regular PC games. Evidently it works by having the GeForce driver detect the program, much as antivirus software identifies a known executable. In the case of the GTX 580 and the GTX 590, this power-limiting circuitry is mandated by Nvidia for its partners.
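The behavior described above boils down to a simple control loop: sample current and voltage, compute power, and pick a clock. This is a hypothetical sketch of that logic, not Nvidia’s actual firmware; every name and threshold here (the 365 W limit, the throttled clock) is illustrative:

```python
# Hypothetical sketch of the GTX 580/590-style power limiter described
# above. All names and numeric thresholds are illustrative assumptions,
# not Nvidia's actual implementation.
POWER_LIMIT_W = 365          # illustrative board power limit
NORMAL_CLOCK_MHZ = 605       # the GTX 590's reference core clock
THROTTLED_CLOCK_MHZ = 405    # illustrative reduced clock

def next_clock(current_a, voltage_v, stress_app_detected):
    """Pick the core clock for the next interval from sensor readings."""
    if not stress_app_detected:      # the driver only arms this for Furmark/OCCT
        return NORMAL_CLOCK_MHZ
    power_w = current_a * voltage_v  # total draw computed from the sensors
    if power_w > POWER_LIMIT_W:
        return THROTTLED_CLOCK_MHZ   # downclock until the draw is safe again
    return NORMAL_CLOCK_MHZ

print(next_clock(35.0, 12.0, True))   # 420 W under a stress app -> 405 (throttled)
print(next_clock(35.0, 12.0, False))  # same load in a game -> 605 (untouched)
```

Note how the second call mirrors the review’s point: identical power draw in a regular game leaves the clocks alone, because the limiter is keyed to the detected application, not to the load itself.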
As a total package, the new GTX 590 looks (and sounds) great! It looks and feels solid. Let’s show you the results of our one-day (yes, we got it yesterday!) hands-on test drive, shall we? We will put it to the test in 28 PC games and in three synthetic tests. But first, head to the next page to check out our test bed configuration.
Test Configuration
Test Configuration – Hardware
- Intel Core i7 920 (reference 2.66 GHz, overclocked to 3.8 GHz; 21x multiplier yields 3.97 GHz with Turbo on).
- Gigabyte EX58-UD3R (Intel X58 chipset, latest BIOS, PCIe 2.0 specification; CrossFire/SLI 16x+16x).
- 6 GB OCZ DDR3 PC1800 Kingston RAM (3×2 GB, tri-channel at PC1600 speeds; 2×2 GB supplied by Kingston)
- GeForce GTX 590, 3 GB reference design and clocks (605/1707 MHz; also overclocked to 690/1825MHz) supplied by Nvidia under NDA.
- Two – GTX 560 Ti, 1 GB reference design and clocks (833/2004 MHz), supplied by Galaxy/Nvidia
- GeForce GTX 570, 1.2 GB reference design and clocks (732/1900 MHz), supplied by Nvidia.
- Two – GeForce GTX 580; 1.5 GB, (at reference clocks 772/2004 MHz; also under-clocked to 605/1707MHz) supplied by Nvidia
- ATI Radeon HD 6990 (4GB, reference clocks, 830/1250 MHz; also overclocked to 960/1390) supplied by AMD
- ATI Radeon HD 6970 (2GB, reference clocks, 880/1375 MHz) supplied by AMD
- ATI Radeon HD 6950 (2GB, 800/1250 MHz; also flashed to stock HD 6970) supplied by AMD
- Two – ATi Radeon HD 6870 (1GB, reference clocks, 900/1050 MHz) supplied by AMD
- Onboard Realtek Audio
- Two identical 500 GB Seagate Barracuda 7200.12 hard drives configured and set up identically from drive image; one partition for Nvidia GeForce drivers and one for ATI Catalyst drivers
- Two – Thermaltake ToughPower 775 W power supply units supplied by Thermaltake
- Thermaltake Element G Case supplied by Thermaltake
- Noctua NH-U12P SE2 CPU cooler, supplied by Noctua
- Philips DVD SATA writer
- HP LP3065 2560×1600 thirty inch LCD; ASUS VG236 120Hz 1920×1080 twenty-three inch LCD with 3D Vision kit supplied by Nvidia/ASUS for 3D Vision evaluation.
Test Configuration – Software
- ATI Catalyst 11.2 WHQL driver for all Radeons except the HD 6990/HD 6970 (11.4 release/beta); latest CrossFire profiles; highest quality mip-mapping set in the driver; surface format optimizations are off; “use application settings” is checked
- NVIDIA GeForce release candidate 267.71 driver for the GTX 590 and the under-clocked GTX 580 SLI pair; WHQL 266.58 used for the other GeForce cards; High Quality texture filtering set in the driver
- Windows 7 64-bit; very latest updates
- DirectX July/November 2010
- All games are patched to their latest versions.
- vsync is forced off in the control panel.
- Varying AA enabled as noted in games, and “forced” in Catalyst Control Center for UT3; all in-game settings are specified, with 16xAF always applied; 16xAF forced in the control panel for Crysis.
- All results show average, minimum and maximum frame rates except as noted.
- Highest quality sound (stereo) used in all games.
- Windows 7 64-bit; all DX9 titles were run under the DX9 render path, DX10 titles under the DX10 render path, and DX11 titles under the DX11 render path.
The Benchmarks
- Vantage
- 3DMark11
- F.E.A.R.
- X3:Terran Conflict
- Enemy Territory: Quake Wars
- Call of Duty 4
- Unreal Tournament 3
- Batman: Arkham Asylum
- Serious Sam: The Second Encounter HD (2010)
- Wolfenstein
- Left 4 Dead
- Grand Theft Auto IV
- Mafia II
- Call of Juarez
- Crysis
- Crysis Warhead
- Lost Planet
- World in Conflict
- Far Cry 2
- Just Cause 2
- H.A.W.X.
- Resident Evil 5
- Aliens vs. Predator
- Battleforge
- S.T.A.L.K.E.R.: Call of Pripyat
- Dirt 2
- F1 2010
- Metro 2033
- Lost Planet 2
- H.A.W.X. 2
- Heaven 2
We have got an interesting project going. Let’s check our results and see if we can determine which card can wear “the world’s fastest video card” crown.
Vantage
Vantage is Futuremark’s DX10 test. It is really useful for tracking changes in a single system – especially driver changes. There are two mini-game tests, Jane Nash and New Calico, and also two CPU tests, but we are focusing on the graphics performance. Here is a scene from Vantage’s second mini-game.
Let’s go right to the graphs and first check the basic tests with the default benchmark scores:
We note the rankings. Unfortunately, scores are completely meaningless when they are presented this way.
We also underclocked our GTX 580 SLI pair to the same clocks as the GTX 590, 605/1707 MHz, and got a score of 33069, considerably lower than our GTX 590’s. We tested the vRAM clocks separately from the core clocks – it is a delicate balance to get the right performance. Overclocking only the core from 605 to 690 MHz got us 41311, while leaving the core stock and overclocking the memory from 1707 to 1850 MHz got a much lower score of 38701. Overclocking both as far as they would work together optimally – 690/1825 MHz – got us 41592. The memory clocks need to go up as the core is raised – but not too much, or there is instability.
For our purposes here, Vantage scores are meaningless, although Futuremark does attempt to compare one video card’s performance to another using two short timedemos. Let’s move on to the latest Futuremark benchmark, 3DMark11, which is DX11-only.
3DMark11
3DMark11 is Futuremark’s brand new DX11-only benchmark. We are keeping track of the overall (for these tests, meaningless) scores and the framerates of the 4 graphics tests (which are more meaningful). First the basic tests results.
(Above, the HD 6990’s score is not actually zero; the correct score is 9647.)
We note the rankings. Unfortunately, scores are completely meaningless when they are presented in this way. However, the next set of tests actually measures framerates of four short timedemos that focus on DX11 graphics performance.
We see an interesting lineup. Unfortunately for our purposes, 3DMark11 scores are just as meaningless as Vantage in an attempt to compare one video card’s performance to another – even in the same system.
Let’s move on to PC games and real-world situations. We will create our “snapshot” of current performance for our two dual-GPU video cards to see how they scale in CrossFire or SLI, and we will compare them to the fastest single-GPU video cards as well as to CrossFired and SLI’d pairs to attempt to determine price-to-performance.
F.E.A.R.
F.E.A.R. (First Encounter Assault Recon) is a DX9c game by Monolith Productions that was originally released in October 2005 by Vivendi Universal. Later, there were two expansions, the latest of which, Perseus Mandate, was released in 2007. Although the game engine is aging, it still has some of the most spectacular effects of any game. F.E.A.R. showcases a powerful particle system, complete with sparks and smoke for collisions, as well as bullet marks and other effects including “soft shadows”. This is highlighted by the built-in performance test, although it was never updated.
This performance test will tell you how F.E.A.R. will run, but both of its expansions are progressively more demanding on your PC’s graphics and will run slower than the demo. We always run at least two sets of tests with all in-game features at ‘maximum’. F.E.A.R. uses the Jupiter Extended Technology engine from Touchdown Entertainment. We test this game with the most demanding settings: fully maxed details with 4xAA/16xAF, and soft shadows ‘off’, as they do not play well with AA. Let’s start first at 2560×1600:
We see an interesting situation. Here the HD 6870 as a single GPU drops down in the minimums. Adding a second HD 6870 in CrossFire allows it to easily play this game fluidly. We also see the minimums nearly triple when a second card is added for CrossFire. However, generally we see scaling of the average framerates is not great with SLI nor CrossFire in F.E.A.R.
Let’s look at 1920×1200:
The GTX 590 is the stronger of the two cards here, although the HD 6990 comes much closer at 2560×1600 than at 1920×1200. The GTX 580 is the strongest single-GPU video card in this DX9 game, yet it is easily beaten by GTX 560 Ti SLI. We also see CrossFired HD 6870s and HD 5870s scale a little better than their GeForce counterparts. In practice there is not much difference in playing F.E.A.R. between the fastest and the slowest video configurations, as the minimums are already sufficiently high.
X3: Terran Conflict
X3:Terran Conflict (X3:TC) is another beautiful stand-alone benchmark that runs multiple tests and will really strain a lot of older video cards. X3:TC is a space trading and combat simulator from Egosoft and is the most recent of their X-series of computer games.
X3:TC is a standalone expansion of X3: Reunion, based in the same universe and on the same engine. It complements the story of previous games in the X-Universe and especially continues the events after the end of X3: Reunion. Compared to Reunion, Terran Conflict features a larger universe, more ships, and of course, new missions. The X-Universe is huge. The Terran faction was added with their own set of technology including powerful ships and stations. Many new weapons systems were developed for the expansion and it has generally received good reviews. It has a rather steep learning curve.
First we note the results at 2560×1600 with completely maxed out settings plus 8xAA:
Only the very slightest edge goes to the GTX 590 over the HD 6990. This time all of our video cards run close to each other in a fairly tight grouping, except for the more budget-oriented cards. However, all of our video cards perform well, with similar minimum framerates and a similar playing experience. We also note no change in the minimums from adding a second card, yet all of our multi-GPU configurations blow past the fastest single-GPU video cards.
Enemy Territory: Quake Wars
Enemy Territory: Quake Wars is an objective-driven, class-based first-person shooter set in the Quake universe. It was developed by id Software and Splash Damage and published by Activision. Quake Wars pits the combined human armies of the Global Defense Force against the technologically superior Strogg, an alien race that has come to Earth to use humans for spare parts and food. It allows you to play a part – probably best as an online multiplayer experience – in the battles waged around the world in mankind’s desperate war to survive.
Quake Wars is an OpenGL game based on id’s Doom3 game engine with the addition of their MegaTexture technology. It also supports some of the latest 3D effects seen in today’s games, including soft particles, although it is somewhat dated and less demanding on video cards than many DX10 games. id’s MegaTexture technology is designed to provide very large maps without having to reuse the same textures over and over again.
For our benchmark we chose the flyby, Salvage Demo. It is one of the most graphically demanding of all the flybys and it is very repeatable and reliable in its results. It is fairly close to what you will experience in-game. All of our settings are set to ‘maximum’ and we also apply 4xAA or 8xAA plus 16xAF in game. First we test at 2560×1600 resolution with all settings fully maxed in-game plus 4xAA/16xAF:
Let’s crank up the anti-aliasing from 4x to 8x while we test at 1920×1200 resolution.
The HD 6990 is faster; however, the minimums are so high that there is no practical difference playing with either card. Even cranking up the AA to 8x made little difference in performance, ranking or percentages. SLI simply does not scale well in this game, whereas CrossFire scales fairly well.
Wolfenstein
Wolfenstein is a science fiction first-person shooter video game co-developed mainly by Raven Software and id Software and published by Activision. It is the sequel to Return to Castle Wolfenstein and uses the id Tech 4 engine. The game was released in 2009. Our timedemo benchmark was created by ABT’s own Senior Editor and lead reviewer, BFG10K. It is very accurate and totally repeatable.
First we test at 2560×1600 with completely maxed out in-game settings.
Now we test at 1920×1200 with maxed out settings.
It looks like driver issues are to blame for Nvidia’s poor relative performance in this OpenGL game compared to the competing Radeons. The game needs to be optimized to run well on GeForce hardware, although it surprisingly does scale with SLI. We also see GTX 560 Ti SLI easily surpass our GTX 580’s performance in this game.
In Wolfenstein the HD 6990 absolutely embarrasses the GTX 590. Fortunately the game is not so demanding, and the GTX 590’s results are sufficient to get by. We have noted that AMD beats Nvidia in the OpenGL games that we test, and we are beginning to wonder whether the cause is architectural or driver-related. We look forward to Rage’s release later this year to perhaps answer this question.
Call Of Duty 4: Modern Warfare
Call of Duty 4: Modern Warfare (CoD4) is a first person shooter running on a custom engine. It has nice graphics but the engine is somewhat dated compared to others and it runs well on modern PCs. It is the first CoD installment to take place in a modern setting instead of in World War II.
It differs from the previous Call of Duty games by having a more film-like plot that uses intermixed story lines from two perspectives; that of a USMC sergeant and a British SAS sergeant. There is also a variety of short missions where players control other characters in flashback sequences to advance the story. Call of Duty 4’s move to modern warfare introduced a variety of modern conventional weapons and technologies including plastic explosives.
There are currently about 20 multiplayer maps in CoD4. It is very popular, and there is a new expansion for it. CoD: Modern Warfare 2 was also released with updated visuals, but it is likewise not very demanding on graphics cards.
Our timedemo benchmark was created by ABT’s own Senior Editor and lead reviewer, BFG10K. It is very accurate and totally repeatable. Here is CoD4, first at 2560×1600 resolution with all in-game settings completely maxed out plus 4xAA. We see some pretty good scaling that approaches 100% in some cases; apparent scaling greater than 100% can be attributed to benchmark “noise”, the way averages are rounded off, or the nature of Alternate Frame Rendering (AFR). Let’s next test at 1920×1200. We see that a popular multiplayer game is very playable even on midrange graphics cards, and it plays very smoothly with this generation’s top video cards. The HD 6990 pulls ahead of the GTX 590 in this benchmark. However, there is no practical difference, as the minimums are already so high.
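For readers wondering how the scaling percentages above are derived: scaling simply compares a dual-GPU configuration’s average framerate against a single card’s. A minimal sketch (the framerate numbers are illustrative, not from our charts):

```python
# How multi-GPU scaling percentages are computed (a sketch; the sample
# framerates below are illustrative, not measured results).
def scaling_percent(single_fps, dual_fps):
    """Percent gain from adding a second GPU via Alternate Frame Rendering."""
    return (dual_fps / single_fps - 1.0) * 100.0

print(scaling_percent(50.0, 100.0))  # perfect doubling -> 100.0
print(scaling_percent(60.0, 121.0))  # rounding of averages can nudge past 100
```

Real scaling cannot exceed 100%, so figures slightly above it are artifacts of rounded averages or AFR’s frame pacing, as noted above.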
Unreal Tournament 3 (UT3)
Unreal Tournament 3 (UT3) is the fourth game in the Unreal Tournament series. UT3 is a first-person shooter and online multiplayer video game by Epic Games. Unreal Tournament 3 provides a good balance between image quality and performance, rendering complex scenes well even on lower-end PCs. Of course, on high-end graphics cards you can really turn up the detail.
UT3 is primarily an online multiplayer title offering several game modes and it also includes an offline single-player game with a campaign. For our tests, we used the very latest game patch for Unreal Tournament 3.
The game doesn’t have a built-in benchmarking tool, so we used FRAPS and did a fly-by of a chosen level. Note that the performance numbers reported are a bit higher than in-game. The map we use is called “Containment” and it is one of the most demanding of the fly-bys. Our tests were run at resolutions of 2560×1600 and 1920×1200 with UT3’s in-game graphics options set to their maximum values.
One drawback of the UT3 engine’s design is that there is no built-in support for anti-aliasing. We forced 4xAA for 2560×1600 and 8xAA for 1920×1200 in each vendor’s control panel – 8xQ for Nvidia to match AMD’s 8xMSAA setting. We record a demo in the game, and a set number of frames is saved to a file for playback. When playing back the demo, the game engine renders the frames as quickly as possible, which is why you will often see it play back more quickly than you would actually play the game.
Here is Containment Demo, first at 2560×1600 with 4xAA forced in each vendor’s control panel:
We also tested 2560×1600 with 8xAA: the HD 6990 got 136 FPS versus 127 FPS for the GTX 590.
Now at 1920 x 1200 and with 8xAA (8xQ in Nvidia’s Control Panel) forced.
There is absolutely no problem playing this game fully maxed out with any of our graphics configurations, although the GTX 580 puts in the best single-GPU showing at 2560×1600. The GTX 560 Ti has no trouble handling the HD 6870, and all SLI and CrossFire configurations give decent scaling, beating the flagship single-GPU video cards.
Batman: Arkham Asylum

Batman: Arkham Asylum is played as an over-the-shoulder, third-person action-adventure game with a primary focus on Batman’s combat abilities, stealth and detective skills, complete with an arsenal of gadgets that can be used both in combat and for exploring in “detective mode”.
Batman: Arkham Asylum uses a highly modified version of the Unreal Engine 3. It does not support AA natively but must be added in and supported by the game’s developer. Unfortunately we cannot compare Batman: Arkham Asylum using our GeForce exactly against the Radeon with PhysX on; so all of our testing is with it off. We are using the Game of the Year Edition of Batman: Arkham Asylum which supports in-game AA settings for both Radeon and GeForce cards.
We begin testing at 2560×1600 with details maxed and with 8xAA applied in the game’s setting control panel (8xQ for Nvidia).
All of our cards can play Batman at 1920×1200 with 8xAA. The GTX 580 is the fastest single video card. We had some real issues with our GTX 590 and Batman that appear to be driver-related: here we see evidence of “negative scaling”, where a single GTX 580 is faster than the dual-GPU card. We didn’t see this with GTX 580 SLI. The HD 6990 wins by default.
Left 4 Dead
Left 4 Dead (L4D) is a 2008 co-op first-person shooter that was developed by Turtle Rock Studios, which was purchased by Valve Corporation during the game’s development. Left 4 Dead uses Valve’s proprietary Source engine. L4D is set in the aftermath of a worldwide pandemic and pits its four protagonists against hordes of infected zombies. There are four game modes: a single-player mode in which your allies are controlled by AI; a four-player co-op campaign mode; an eight-player online versus mode; and a four-player survival mode. In all modes, an artificial intelligence dubbed the “Director” controls pacing and spawns to create a more dynamic experience with increased replay value. It is best as a multiplayer game with humans.
There is no built-in benchmark, so we created our own custom timedemo, which is very repeatable. The game is updated regularly by Steam, and we chose the highest detail settings and 8xAA. Unfortunately Steam updated the game just as we began testing our GTX 590, and we had to create another timedemo. It is very similar to the previous one but different enough that the two cannot be compared to each other. You can see benches with the rest of the configurations in the Performance Summary.
We will save our comments until after we present both charts. First we test at 2560×1600 resolution:
On to our next chart at 1920×1200:
Left 4 Dead leaves the GTX 590 slightly behind the HD 6990. There is no practical advantage in framerates, as the minimums are already sky-high for both cards.
Grand Theft Auto IV
Grand Theft Auto IV (GTA IV) is a sandbox-style action-adventure video game released by Rockstar in late 2008. It is the sixth game in the Grand Theft Auto series, and two episodic expansion packs have since been released, the latest as recently as April of last year. The game is set in a redesigned rendition of Liberty City, a fictional city based heavily on modern-day New York City. It follows Niko Bellic, a war veteran from Eastern Europe who comes to the United States in search of the American Dream and enters a world of organized crime, gangs and corruption. GTA IV is mostly composed of elements from driving games and third-person shooters and features free-roaming gameplay. It also features an online multiplayer mode, the first in the GTA series to do so. Here are the settings that we used. The 1.5GB GTX 480/570/580 and the 2GB HD 69×0 Radeons, by virtue of having more than 1GB of vRAM, can use even higher settings than the 1GB video cards (which are pictured below running nearly out of resources). Unfortunately, HD 6970 CrossFire and the HD 6990 did not even allow us to change any settings, and we could not even make a clean exit.
First we test at 2560×1600 resolution.
This benchmark appears to be highly CPU-limited, at the very least at 1920×1200, except for our weakest cards, which benefit from multi-GPU. At 2560×1600 we start to see some differences, with some limited multi-GPU scaling.
In GTA IV, the GTX 590 wins over the HD 6990 by default. We used the latest AMD CrossFire profiles at the very beginning of this week, and still the game refused to allow us to change settings.
Serious Sam: The Second Encounter HD (2010)

Serious Sam is the title of a series of first-person shooters created by the Croatian development team Croteam. It follows the adventures of its hero Sam “Serious” Stone and his fight against the forces of the extraterrestrial overlord Mental, who seeks to destroy humanity. Its gameplay is a throwback to early first-person shooters like Quake and Doom, with the twist of being set in wide-open environments with large groups of enemies attacking at any time; there are many hidden areas and treasures to find and puzzles to solve.
Serious Sam features cooperative gameplay and allows for split-screen action supporting up to 4 players. Serious Sam: The Second Encounter was remade as “HD” using Serious Engine 3 and released for the PC in April 2010. Besides updated visuals, new game modes, including “Co-op Tournament” and “Survival” for single player, were introduced in this remake.
We use the three basic “ultra” presets for benching Serious Sam: The Second Encounter HD. Further fine-tuning is possible to make the game even more demanding, but we chose the “ultra” presets with only one higher GPU setting to allow for testing beyond 1920×1080.
We test first at 2560×1600 resolution:
And finally at 1920×1200 with the same ultra presets:
Serious Sam: The Second Encounter HD on Serious Engine 3 is quite demanding, and yet all of our top configurations play it satisfactorily using the game’s built-in “ultra” presets. Although the GTX 560 Ti trades blows with the HD 6870, the HD 6950 is a bit faster, and the stock GTX 580 simply blasts past them all. GTX 560 Ti SLI edges a single GTX 580 at 2560×1600 and is much faster at 1920×1200.
In Serious Sam: The Second Encounter HD, even the overclocked GTX 590 cannot catch the stock HD 6990. Again, the minimums are so high that there is little practical difference at ultra settings and we would be tweaking settings in the control panel to make the game even more demanding (and gorgeous).
Mafia II
Mafia II is a third-person action-adventure video game and the sequel to Mafia: The City of Lost Heaven. Developed by 2K Czech and published by 2K Games, it was released last year. Mafia II is set from 1943 to 1951 in Empire Bay, a fictional city based mostly on San Francisco and New York City with some influences from Chicago and Detroit.
Mafia II is a gritty drama which chronicles the rise of World War II veteran Vito Scaletta, who joins the Falcone crime family and becomes a ‘made’ man. There are 15 chapters in the game and over two hours of engine-generated cutscenes. Mafia II makes extensive use of Nvidia’s PhysX, whose full effects are seen smoothly only on a PhysX-enabled GeForce, preferably with a second video card dedicated to it.
For this article, we used the full retail game with Mafia II’s built-in benchmark with the highest settings for 2560×1600 and 1920×1200 – without PhysX – and this time we will reserve comment until after both charts.
First we test at 2560×1600.
Now at 1920×1200:
The GTX 580 is the fastest single-GPU card, followed by the HD 6970; however, the GTX 560 Ti falls slightly behind the HD 6870. In Mafia II, the GTX 590 is well ahead of the HD 6990 at 1920×1200 maxed out. At 2560×1600, however, the HD 6990 pulls ahead again and it takes the overclocked GTX 590 to take it down.
This covers our DX9 games, and we note some variability with SLI and CrossFire scaling. Let’s move on to DX10 and DX11 games to see if anything changes.
Call of Juarez
Call of Juarez is one of the very earliest DX10 games. It is loosely based on Spaghetti Westerns that became popular in the early 1970s. Call of Juarez features its Chrome Engine using Shader Model 4 with DirectX 10. Our benchmark is built into Call of Juarez. It runs a simple flyby of a level that is created to showcase its DX10 effects. It offers good repeatability and it is a good stress test for DX10 features in graphics cards, although it is not quite the same as actual gameplay because the game logic and AI are stripped out of this demo.
Running the Call of Juarez benchmark is easy. You are presented with a simple menu to choose resolution, anti-aliasing, and two shadow quality options. We set the shadow quality to “high” and the shadow map resolution to the maximum, 2048×2048. At the end of the run, the demo presents the minimum, maximum, and average frame rates, along with the option to quit or run the benchmark again. We always ran the benchmark at least a second time and recorded that generally higher score.
Here are the Call of Juarez DX10 benchmark results, first at 1920×1200, as there is no 2560×1600 option available in the benchmark:
Now we test at 1680×1050:
Here the GTX 590 leads over the HD 6990. However, this game is no challenge for either of our cards at the benchmark’s limited resolutions. There is no problem maxing out this game at 2560×1600 and you can even play with higher levels of filtering.
Lost Planet
Lost Planet: Extreme Condition is a Capcom port of an Xbox 360 game. It takes place on the icy planet of E.D.N. III which is filled with monsters, pirates, big guns, and huge bosses. This frozen world highlights high dynamic range lighting (HDR) as the snow-white environment reflects blinding sunlight as DX10 particle systems toss snow and ice all around.
The game looks great in both DirectX 9 and 10 and there isn’t really much of a difference between the two versions except perhaps shadows. Unfortunately, the DX10 version doesn’t look that much better when you’re actually playing the game and it still runs slower than the DX9 version.
We use the in-game performance test from the retail copy of Lost Planet, updated through Steam to the latest version. This run isn’t completely scripted, as the creatures act a little differently each time, requiring multiple runs. Lost Planet’s Snow and Cave demos are run continuously by the performance test and blend into each other.
Here are our benchmark results with the more demanding benchmark, Snow. All settings are fully maxed out in-game including 2x or 4xAA/16xAF. Let’s start with 1920×1200 resolution with 2xAA.
Now at 1680×1050 and with 4xAA:
The HD 6870 is edged by the GTX 570, while the GTX 580 convincingly takes the single-card crown. The HD 6870 is faster than our GTX 560 Ti, however, and CrossFired HD 6870s also beat GTX 560 Ti SLI. We note another oddity in otherwise near-perfect CrossFire scaling: in one case, the scaling is slightly greater than 100%.
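The scaling oddity noted above is easy to quantify. Here is a quick sketch, with purely hypothetical FPS figures for illustration, of how multi-GPU scaling efficiency is computed from single-card and dual-card average frame rates:

```python
def scaling_efficiency(single_fps: float, dual_fps: float) -> float:
    """Return the second GPU's added contribution as a percentage of a
    single card's performance. 100% is perfect scaling (dual = 2x single);
    run-to-run variance or CPU/driver effects can push it past 100%."""
    return (dual_fps / single_fps - 1.0) * 100.0

# Hypothetical numbers for illustration only:
print(scaling_efficiency(50.0, 100.0))  # perfect scaling: 100.0
print(scaling_efficiency(50.0, 102.0))  # the "greater than 100%" oddity
```

Anything a point or two above 100% is well within normal benchmark variance rather than a genuine free lunch.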
This game used to be impossible to play at 2560×1600 with any great DX10 level of details. Here we see the HD 6990 take a lead over the GTX 590. Neither video card will have issues at the highest resolutions.
CRYSIS
Next we move on to Crysis, a science-fiction first-person shooter by Crytek. It remains one of the most demanding games for any PC, and it is also still one of the most beautiful games released to date. Crysis is set in a fictional near future where an alien spacecraft is discovered buried on an island near the coast of Korea. The single-player campaign has you assume the role of a US Delta Force operator, ‘Nomad’, who is armed with futuristic weapons and equipment.
Crysis uses DirectX 10 for graphics rendering. A standalone but related game, Crysis Warhead, was released the following year. CryEngine 2 is the game engine that powers Crysis and Warhead; it is an extended version of the CryEngine that also powers Far Cry. As well as supporting Shader Model 2.0, 3.0, and DirectX 10’s 4.0, CryEngine 2 is multi-threaded to take advantage of dual-core SMP-aware systems, and Crytek has developed its own proprietary physics system, called CryPhysics.
Note that actually playing this game is a bit slower than the demo implies. All of our settings are set to the in-game maximum, “Very High”, including 2xAA at 2560×1600, 1920×1200 and 1680×1050, and we force 16xAF in the control panels. Here is Crysis’ Island demo benchmark, first at 1920×1200 resolution:
Here the GTX 590 takes a solid lead over the HD 6990 at the lower resolution but its lead is cut down as the resolution goes up. We will make sure to test further in Warhead, a better optimized game.
Crysis Warhead
Crysis Warhead is a science-fiction first-person shooter developed by the Hungarian studio of Crytek and published by Electronic Arts. Crysis Warhead is a stand-alone expansion to Crysis that was released in 2008. It is better optimized than the original Crysis, looking just as good while requiring fewer hardware resources to render.
We test first at 1920×1200 with 2xAA/16xAF and maxed-out in-game “Enthusiast” (very high) settings:
And now the same settings at 1680×1050:
The GTX 560 Ti is slightly faster than the HD 6870 and, as in Crysis, we see all of our cards scaling well. This time the HD 6990 beats the GTX 590 until the highest resolution, where the situation is reversed.
Far Cry 2
Far Cry 2 uses the name of the original Far Cry, but it is not connected to the first game; it brings you a new setting and a new story. Ubisoft created it using their Dunia Engine. The game takes place in an unnamed African country during an uprising between two rival warring factions. Your mission is to kill “The Jackal”, the Nietzsche-quoting mercenary who arms both sides of the conflict you are dropped into.
The Far Cry 2 game world is loaded in the background and on the fly to create a completely seamless open world. The Dunia engine provides good visuals that scale well. The Far Cry 2 design team actually went to Africa to add realism to the game. One thing to especially note is the engine’s very realistic fire propagation, which is a far cry from the scripted fire and explosions we are used to seeing.
First we test the Far Cry 2 benchmark at 2560×1600 with AI enabled, using the Ranch Long benchmark with ultra settings plus 4xAA. When we benched at 8xAA, the factory-overclocked HD 6990 averaged 104.53 FPS, down from 127.82 at 4xAA, and was barely edged out by the GTX 590 at 105.04 FPS.
Let’s move on down to 1920×1200 resolution while increasing our AA from 4x to 8x.
The GTX 580 is clearly the fastest of the single cards, making a clean sweep in Far Cry 2, while the GTX 560 Ti even beats the fastest Radeon, the HD 6970, in this game. CrossFired HD 6870s show nearly perfect scaling, although they are beaten by GTX 560 Ti SLI.
Here we see the GTX 590 edge the HD 6990 at 2560×1600 by the slightest of margins, increasing its lead a bit at lower resolutions. There is no practical difference playing this game with any of our multi-GPU setups. And as we saw, playing at 8xAA at 2560×1600 makes no difference to the ranking.
We again compared underclocked GTX 580 SLI to a GTX 590 at the same clocks. This time the GTX 590 wins by a significant margin: 125.4 FPS to 110.77!
World in Conflict Soviet Assault
World in Conflict is set on an alternate-history Earth where the Cold War did not end: Russia invaded the USA in 1989, and the remaining Americans decided to strike back. World in Conflict (WiC) is a real-time tactical/strategy video game developed by Massive Entertainment and released in 2007. The expansion, Soviet Assault, was released in 2009.
Although it is generally considered a real-time strategy (RTS) game, World in Conflict includes gameplay typical of real-time tactical (RTT) games. WiC is filled with real vehicles from both the Russian and the American military. There are also tactical aids, including calling in massive bombing raids, access to chemical warfare, nuclear weapons, and far more.
Here is yet another amazing, very customizable and detailed DX10 benchmark, available in-game or as a stand-alone. We use the full retail game’s in-game benchmark, as it offers more settings than the demo and is updated by patches. The particle effects and explosions in World in Conflict: Soviet Assault are truly spectacular! Every setting is fully maxed out.
We start our benching at 2560×1600:
Now we test at 1920×1200
You can call the results in World in Conflict a draw between the HD 6990 and the GTX 590.
Just Cause 2
Just Cause 2 is a 2010 sandbox-style action video game by Swedish developer Avalanche Studios and Eidos Interactive, and is the sequel to the 2006 video game Just Cause. Just Cause 2 employs the Avalanche Engine 2.0, an updated version of the engine used in the original, and its visuals are impressive as it is made exclusively for DX10.
It is set on the fictional tropical island of Panau in Southeast Asia. Rico Rodriguez returns as the protagonist, who aims to overthrow the evil dictator “Baby” Panay and to confront his former boss, rogue agent Tom Sheldon. The gameplay is similar to its predecessor’s in that the player is free to roam the huge open world without needing to focus on the storyline. The Just Cause 2 AI has been rewritten, and the game even includes dual grappling hooks, which give players the ability to tether objects to each other, including tethering enemies to vehicles and to each other. This works very well, as one of your goals is to cause maximum chaos. It is a lot of fun!
Here are the maximum settings available to a GeForce card; the bottom two, the Bokeh filter and GPU water simulation, are unavailable to Radeons, so they are left off in all runs to give solid apples-to-apples comparisons for all of our tested video cards. We used the Dark Tower benchmark built into the retail game. First, the benches at 2560×1600 with 8xAA:
The stock HD 6990 sits in between the stock and overclocked GTX 590. Let’s look at the performance at 1920×1200, but now with 2xAA, as in our usual testing:
Again it takes the overclocked GTX 590 to catch the stock HD 6990.
Here we see the HD 6990 run faster than the GTX 590. However, the practical differences are small, although they increase a bit at the highest resolution.
Tom Clancy’s H.A.W.X.
Tom Clancy’s H.A.W.X. is an air combat video game developed and published by Ubisoft. It was released in the United States on March 6, 2009. You have the opportunity to fly 54 aircraft over real-world locations and cities in somewhat realistic environments created with satellite data. This game is more of an arcade take on flying than a real simulation, and it has received mixed reviews.
The game story takes place during the time of Tom Clancy’s Ghost Recon Advanced Warfighter. H.A.W.X. is set in the year 2014 where private military companies have replaced government-run military in many countries. The player is placed into the cockpit as an elite ex-military pilot who is recruited by one of these corporations to work for them as a mercenary. You later return to the US Air Force with a team as you try to prevent a full scale terrorist attack on the United States which was started by your former employer.
H.A.W.X. runs on DX10.1 faster and with more detail than on the DX10 pathway. All of our video cards can take advantage of DX10.1. Let’s check out H.A.W.X. with our top cards at 2560×1600 with fully maxed out in-game settings and 8xAA:
The GTX 580 jets away from the Radeons and cleanly beats the GTX 570, while the GTX 560 Ti sits in between the CrossFired HD 6870s and HD 6970s, and the GTX 590 outflies the HD 6990.
Here are our results at 1920×1200 resolution:
Although all of our single-GPU cards give a similar playing experience in this game with maxed-out settings and 8xAA, the new GTX 580 is clearly the top gun, followed by the GTX 570. We see much the same thing with our multi-GPU setups, as the GTX 590 is the top-gun single video card. Of course, the HD 6990 is in the same league and can play this game in Eyefinity with no issues. Unfortunately, we did not get our Surround setup ready in time to do any super-widescreen testing.
Resident Evil 5
Resident Evil 5 is a survival horror third-person shooter developed and published by Capcom that has become the best selling single title in the series. The game is the seventh installment in the Resident Evil series and it was released for Windows in September 2009. Resident Evil 5 revolves around two investigators pulled into a bio-terrorist threat in a fictional town in Africa.
Resident Evil 5 features online co-op play over the internet and also takes advantage of Nvidia’s 3D Vision technology. The PC version comes with exclusive content the consoles do not have. The developer’s emphasis is on optimizing for high frame rates, but they have implemented HDR, tone mapping, depth of field and motion blur in the game.
Resident Evil 5‘s custom game engine, ‘MT Framework’, already supports DX10 to benefit from lower memory usage and faster loading. Resident Evil 5 gives you the choice of DX10 or DX9, and we naturally ran the DX10 pathway. There are two benchmarks built into Resident Evil 5; we chose the variable benchmark, as it is best suited for testing video cards. Here it is at 2560×1600 resolution with maxed-out in-game settings plus 8xAA:
Here are the results at 1920×1200 resolution:
The GTX 580 simply powers past all of its single-GPU competition. However, all of our video cards turn in respectable performances, and their overall playability is similar at 1920×1200. Scaling is good overall but somewhat mixed for multi-GPU. This time we see the GTX 590’s lead over the HD 6990 widen further as the resolution increases.
S.T.A.L.K.E.R.: Call of Pripyat
S.T.A.L.K.E.R.: Call of Pripyat is the third game in the S.T.A.L.K.E.R. series. All of these games have non-linear storylines which feature role-playing elements. The player assumes the identity of a S.T.A.L.K.E.R., an illegal artifact scavenger in “The Zone”, which encompasses about 30 square kilometers. It is the location of an alternate-reality story surrounding the Chernobyl Power Plant after another (fictitious) explosion. S.T.A.L.K.E.R.: Call of Pripyat features “a living breathing world” with highly developed NPC creature AI.
Call of Pripyat is compatible with DirectX 8, 9, 10 and 10.1, and it uses the X-Ray 1.6 engine with DX11, one outstanding feature being the inclusion of real-time GPU tessellation. It is a Shader Model 3.0 & 4.0 graphics engine featuring HDR, parallax and normal mapping, soft shadows, motion blur, weather effects and day-to-night cycles. As with other engines using deferred shading, the original DX9c X-Ray engine does not support anti-aliasing with dynamic lighting enabled, although the DX10 and DX11 versions do.
We are using the stand-alone “official” benchmark from the game’s creators. We picked the most stressful of the four tests, “Sun Shafts”. It brings the heaviest penalty due to its extreme use of shaders to create DX10/DX10.1 and DX11 effects. We ran this benchmark fully maxed out in DX11 with “ultra” settings plus 4xAA, including edge-detect MSAA, which chokes performance even further.
Here we present our maxed-out DX11 settings for the S.T.A.L.K.E.R.: Call of Pripyat DX11 benchmark with 4xAA at 2560×1600:
The HD 6990 wins the top spot by a couple of frames per second. Now we back off the AA and resolution to our usual settings.
Now we move on to 1680×1050 with 2xAA:
This game was impossible to play last year at the highest settings with a single card and much if any AA. Now we see our cards breeze through 1680×1050 and 1920×1200, with the GTX 590 leading the HD 6990. When we upped the resolution to 2560×1600 and increased the AA to the maximum 4xAA the benchmark supports, the HD 6990 pulled ahead of the GTX 590 by nearly 2 frames per second. We are seeing a trend where the GTX 590’s lead gets cut down as the resolution and AA go way up. Part of it may be attributed to the larger framebuffer of the HD 6990, but it is just as likely architectural differences.
BattleForge
BattleForge is an online PC game developed by EA Phenomic and published by Electronic Arts. The full game was released in March 2009. BattleForge is a card-based RTS that revolves around acquiring cards and winning, with micro-transactions for buying new cards. By May 2009, BattleForge had become a Play 4 Free game with fewer cards than the retail version.
BattleForge supports DirectX 11 with full support for hardware tessellation. It is very impressive visually and quite demanding on any system. First we test with our cards at 1920×1200 using the BattleForge built-in benchmark with all of its settings completely maxed out and with 4xAA:
Now we test at 1680×1050, again with 4xAA:
The GTX 580 is again the fastest video card in BattleForge, followed by the GTX 570 and then, more distantly, by the HD 6970. The GTX 560 Ti is faster than AMD’s HD 6970 in this benchmark at our tested resolutions. In BattleForge, the GTX 590 rules the lower resolutions, and at 2560×1600 with 8xAA the GTX 590 further distinguishes itself. We think there may still be issues with the Catalyst drivers, and we look forward to our future monthly driver performance tests.
Aliens vs. Predator
Aliens vs. Predator, known to fans as Aliens versus Predator 3 or AVP3, is a video game developed by Rebellion Developments and published by Sega in February 2010. It is the sixth game in the Aliens versus Predator series.
There are three campaigns in the game, one for each race or faction (the Predators, the Aliens and the Colonial Marines), that form one main storyline, although the objectives differ depending on your choice of campaign. The Aliens vs. Predator DX11 benchmark is a stand-alone bench that, as the name says, is only for DX11 cards. It is generally more demanding than actually playing the game.
First we bench at 1920×1200 with maxed out settings plus 2xAA.
The GTX 580 is the fastest single-GPU card. Now we test at 1680×1050 and 2xAA.
With the Aliens vs. Predator DX11 benchmark, it takes the overclocked GTX 590 to match or beat the stock HD 6990.
DiRT 2
Colin McRae: DiRT 2 is a racing game that was released in September 2009 and is the sequel to Colin McRae: DiRT. It includes many new race events, including stadium events, as your RV travels from one event to another through many real-world environments across four continents. DiRT 2 includes five different event types and even lets you compete at new locations. It also includes a new multiplayer mode.
DiRT 2 is powered by an updated version of the EGO engine featured in Race Driver: GRID, which also includes an updated physics engine. We are using the DiRT 2 full retail game’s built-in benchmark at the highest “ultra” DX11 setting with 8xAA applied.
First we test at 2560×1600:
The GTX 580 gets the single-GPU DiRT 2 checkered flag on the DX11 pathway, as the GTXes pull further away from the Radeons at 1920×1200. However, even the lowest-priced HD 6870 can play this game satisfactorily at the highest resolutions.
The GTX 590 beats the HD 6990 at the highest resolution, although it is not as close as at 1920×1200. However, the HD 6990 has no issues speeding through completely maxed out at 2560×1600 and is known to tackle Eyefinity at 5760×1600 as well.
Metro 2033
Metro 2033 is the “Crysis” of 2010. It is a very demanding game on any PC, with the very latest DX11 visuals. Metro 2033 is an action-oriented video game with a combination of survival horror and first-person shooter elements. The game is based on the novel “Metro 2033” by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010.
The game utilizes the multi-platform 4A Engine, and there is some doubt whether the game’s engine is related to the original X-Ray engine used in S.T.A.L.K.E.R. The Metro 2033 story takes place mostly in post-apocalyptic Moscow’s metro system, but occasionally the player has to go above ground on missions and to search for valuables. Metro 2033‘s locations reflect the dark atmosphere of real metro tunnels, but in a much more dangerous and lethal manner. Strange phenomena and noises are frequent, and the player mostly has to rely on a flashlight to find their way around in otherwise total darkness. Even more deadly is the surface, as it is severely irradiated and a gas mask must be worn at all times due to the toxic air.
THQ has released an official benchmark for Metro 2033 which provides minimum/maximum/average framerates, and you can adjust many graphics settings including PhysX, AA, DOF and tessellation, and the number of runs. Our presets are set to maximum (very high) with 1xAA and no PhysX nor DOF enabled. Here is our first chart at 1920×1200 as 2560×1600 proves too demanding without turning off most of the visuals that make this game really impressive.
However, in actually playing the game, one can tolerate minimums into the 20s without noticing severe lag. We test at Very High settings with AA and DOF off except as noted. Now at 1680×1050:
All of our single cards struggle with Metro 2033 with the aggressive settings that we used except for the GTX 580. Metro 2033 is an interesting benchmark.
We see the HD 6990 ahead of the GTX 590 at the two resolutions we tested. Since these cards did so well, we further upped the resolution to 2560×1600, added 4xAA and enabled DOF.
This time, the GTX 590 (39.48 FPS) beats the HD 6990 (39.2 FPS)! In fact, we can further enable PhysX (which we cannot do on the Radeon at all) to drop the frame rate to an almost satisfactory 35.33 FPS. Overclocking would be the answer here in maxing out Metro 2033 at 2560×1600 with a GTX 590.
F1 2010
F1 2010 is a racing game based on the 2010 season of the Formula One world championship and the sequel to the 2009 video game in the same series. It was released in September 2010 by Codemasters and is powered by the EGO 1.5 engine. The weather system is one of the best seen in a racing game and requires the player to adjust to changing track conditions. Watch out for bad AI drivers! First we test at 2560×1600 using ultra settings with the built-in benchmark.
AMD fixed their “negative scaling” since our last testing – where CrossFired cards were actually much slower in the minimums than their single-card counterparts – with a recent CrossFire profile. Had they not, you would have to disable CrossFire (or SLI) in the respective control panels or suffer unacceptable dips in the minimums. Now we test at 1920×1200.
In this game the Radeons have the advantage and take the checkered flag.
Lost Planet 2
Lost Planet 2 is the sequel to Lost Planet: Extreme Condition and is also made by Capcom. The events take place ten years after the first game on the same, now thawed, E.D.N. III. The PC version was released on October 12, 2010, and it runs on the MT Framework 2.0 engine, an updated version of the engine used in several Capcom games. Campaign mode can have up to four players working together over the Internet. Lost Planet 2 allows players to create and customize their own characters, unlocking more items after leveling up and downloading content.
We are using the retail game’s built-in benchmark in DX11 with maximum settings. As the game is quite demanding, we first test at 2560×1600 resolution with no AA.
We also tested the framerates of the GTX 590 at 2560×1600 with 4xAA against the HD 6990 and can see the GeForce pull even further away in this tessellation-heavy game.
GTX 590 – 59.8
HD 6990 – 43.6
Just as in the original game, none of our single-GPU cards can easily play Lost Planet 2 at 2560×1600 at the highest settings, although the GTX 580 comes closest; this game is made for multi-GPU.
We also underclocked our GTX 580s in SLI to match the clockspeeds of the GTX 590:
GTX 590 (605/1707MHz) – 59.8 FPS
GTX 580 SLI (605/1707) – 56.2 FPS
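The clock-for-clock numbers above work out as follows (a trivial check using the Lost Planet 2 scores just listed):

```python
def pct_faster(a: float, b: float) -> float:
    """How much faster a is than b, expressed as a percentage."""
    return (a / b - 1.0) * 100.0

# Both configurations at the same 605/1707MHz clocks:
gtx590_fps, gtx580_sli_fps = 59.8, 56.2
print(f"GTX 590 leads by {pct_faster(gtx590_fps, gtx580_sli_fps):.1f}%")
```

Roughly a 6% clock-for-clock advantage for the GTX 590 over the underclocked SLI pair in this test.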
Tom Clancy’s H.A.W.X. 2
Tom Clancy’s H.A.W.X. 2 is an air combat video game developed by Ubisoft for the PC. We are using the built-in benchmark from the full retail game. The way tessellation is implemented shows AMD graphics cards are perhaps unnaturally slow compared with other DX11 titles, although their newer drivers have made significant performance gains over earlier drivers in H.A.W.X. 2 without sacrificing any noticeable image quality.
H.A.W.X. 2 runs on DX11 faster and with more detail than on the DX10 pathway. Here the emphasis is on terrain tessellation which looks outstanding in DX11 and “flat” in DX10. Let’s check out H.A.W.X. 2 with our video cards at 2560×1600 and with fully maxed out in-game settings and with 8xAA:
And now we test at 1920×1200 resolution:
We see the GTX 580 flying away from the single-GPU competition. However, the single Radeons can also play this game maxed out at 1920×1200. GTX 560 Ti SLI rules the skies in its price range and above, as it outflies HD 6970 CrossFire by a significant margin. And the GTX 590 is the top gun as the single fastest video card.
Heaven 2.1 Unigine
Finally we come to our last benchmark, Heaven 2.1, on the Unigine engine. It uses DX11 and heavy tessellation, which will strain any graphics card. At least two DX11 games based on Unigine are expected to be released this year. We use the settings for “extreme tessellation” and high shaders, and we also set AF to 16x. We will tell you right now that this test chokes the GTX 580 at the highest settings and resolution, so we had not run it at 2560×1600 – until now.
Here is Heaven 2.1 benchmark with maxed settings, extreme tessellation and 2xAA at 1920×1200:
The GTX 580 is clearly the fastest single video card at the extreme tessellation setting of this benchmark, and the GTX 590 is faster than the HD 6990. All of our multi-GPU configurations scale well. However, this is a synthetic benchmark, and we will withhold judgment until we play PC games using the Unigine engine.
3D Vision Testing
We received our 3D Vision Kit from ASUS/Nvidia the same day that we received our GTX 590. The kit is pictured below:
The 23 inch 1920×1080 ASUS 120Hz display is beautiful and the screen is extra-bright for 3D Vision.
Here are the results of our benching with 3D Vision enabled versus 2D. There is a significant performance hit because each frame is rendered twice – once for the left eye and once for the right. We will go into much more detail in an upcoming review of 3D Vision.
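Since every frame is rendered once per eye, a rough rule of thumb is that the 3D frame rate tops out at half the 2D rate, capped by the 60 FPS-per-eye target. A minimal sketch of that estimate (an idealized model; real per-game overhead varies and actual results can differ):

```python
def estimated_3d_fps(fps_2d: float, per_eye_cap: float = 60.0) -> float:
    """Idealized 3D Vision estimate: each displayed frame is rendered
    twice (left and right eye), so throughput is at best half the 2D
    rate, capped at the per-eye target the glasses sync to."""
    return min(fps_2d / 2.0, per_eye_cap)

# Hypothetical 2D frame rates for illustration:
print(estimated_3d_fps(150.0))  # plenty of headroom: capped at 60.0
print(estimated_3d_fps(70.0))   # demanding game: 35.0
```

In practice, driver overhead means the real 3D figure often lands a little below this naive halving.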
All of the games were played with maximum settings with a single GTX 590. Metro 2033 did not enable AA, DoF or PhysX, and Heaven was run with extreme tessellation. 3D Vision is quite impressive:
3D Vision aims for 60 frames per second. Triple buffering is locked on in the drivers and cannot be disabled. We shall cover this in great detail in our 3D Vision evaluation article coming up in about a month. In the meantime, stay tuned for more 3D Vision results in upcoming evaluations.
Overclocking, noise and power usage
Using Vantage as an example, we also underclocked and overclocked our GTX 590 and our GTX 580 SLI pair. For overclocking, we did not raise the GTX 590’s voltage, nor did we alter the fan profile. It overclocked to a maximum of 690/1825MHz from the reference clocks of 605/1707MHz. It only got slightly warmer, and the fan tended to come on sooner and remain on longer, but the sound was never annoying. In fact, it is only slightly louder than a GTX 580 and at about the same sound level as a single HD 6970! You are never aware of the GTX 590’s fan while gaming.
We underclocked our GTX 580 SLI pair to the same clocks as the GTX 590, 605/1707MHz. We discovered that it generally used 150W less than the stock-clocked pair! And this was without even lowering the voltage. We got a Vantage score of 33069, considerably lower than our GTX 590 score of 38435.
We tested the GTX 590 vRAM clocks separately from the core clocks – it is a delicate balance to get the right performance. Overclocking only the core from 605 to 690MHz got us 41311, while leaving the core stock and overclocking the memory from 1707 to 1850MHz got a much lower score of 38701. Overclocking both to as far as they would work together optimally – 690/1825MHz – got us 41592. The memory clocks need to go up as the core is raised – but not too much or there is instability and a performance decrease.
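Expressed as percentage gains over the stock score, the Vantage numbers above make the core/memory balance clear:

```python
stock = 38435  # GTX 590 Vantage score at reference 605/1707MHz
runs = {
    "core only (690MHz)":       41311,
    "memory only (1850MHz)":    38701,
    "core + memory (690/1825)": 41592,
}
for label, score in runs.items():
    gain = (score / stock - 1.0) * 100.0  # percent over stock
    print(f"{label}: +{gain:.1f}%")
```

The core overclock alone delivers nearly all of the gain (about +7.5%), while the memory overclock alone adds under 1%; raising both together yields roughly +8.2%.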
It might seem strange that our GTX 590 consistently outperformed the GTX 580 pair in SLI; it could be drivers. It could also be more efficient memory timings, which Nvidia worked to perfect in the GTX 590 over many weeks.
Noise
The GTX 590 stands out in stark contrast to the HD 6990 in design philosophy. It appears that the HD 6990 is engineered with brute force in mind. The clocks are set very high and the fan must run very fast to compensate for the increased thermals. It isn’t annoying in the normal (warrantied) factory-clocked setting of 830/1250 but it starts to become very noticeable in the (unwarrantied) BIOS position number two (880/1375MHz). If you overclock it further, you may well be startled in game wondering where the new ‘helicopter’ is that suddenly showed up in your medieval fantasy game.
Both cards – the GTX 590 and the HD 6990 – are amazingly fast. The GTX 590 is noticeably quieter in comparison to the Radeon. The closest analogy to the differing design philosophies might be the way car manufacturers approach designing their sports cars. Some are quite loud and others very quiet.
This section is also unfinished. We plan to add a relative noise comparison to this review, as well as power draw numbers. Basically, the numbers are very close overall, and the cards are in the same class in power draw and thermals. It’s the way that each company deals with it that expresses their individual design philosophies. Both cards have great appeal to the enthusiast who wants a single very powerful video card.
Performance Summary
Conclusion
This has been quite an enjoyable five-day hands-on experience for us, comparing our brand-new, under-NDA GTX 590 against our HD 6990 and other video card configurations, and we look forward to evaluating further new products from AMD and Nvidia. We always wish we had more time than we were allowed to benchmark the GTX 590 before giving you our first impressions. Fortunately, we have been gaming for months with our other test cards, so we can provide you with a reliable comparison.
We feel privileged to bring you our very first benchmarks and performance testing of Nvidia’s amazing-value GTX 590. We like it quite a lot, and it has exceeded this editor’s own expectations. In the meantime, feel free to comment below, ask questions or have a detailed discussion in our ABT forum. If you have any requests on what you would like us to focus on for further testing, or for any other information, please join our ABT forum or leave a comment.
Nvidia GTX 590
Pros and Cons:
Pros:
- Nvidia’s GTX 590 is much faster than the GTX 580, and it wins more benchmarks against the HD 6990 than it loses. Nvidia has brought great performance into a $700 package.
- There is further room for overclocking and good scalability.
- The new Fermi GF110 architecture brings support for GPU computing and a level of performance well beyond the last generation.
- DX11 and great support for tessellation, PhysX and CUDA, 3D gaming, and 2D/3D Surround on a single video card bring realism to gaming
- Nvidia’s highly efficient cooler is great for achieving and keeping your OC by keeping your GPU cool. It is one awesome cooler that tames the GTX 590’s thermals very quietly, even at full load.
- Quad-SLI is possible with two of these cards; ultimate performance for 3D Vision.
Cons:
- Price.
- Tri-SLI – GTX 590 plus GTX 580 – is unsupported.
That’s it. For about the same price as a reference HD 6990, you get all the features that Nvidia video cards have to offer in a very solidly-built, cool and quiet-running GTX 590! If you require quiet running, Nvidia’s superb cooling is a real winner, and it is the single greatest difference that sets it apart from its competitor. Nvidia has succeeded in packing the performance of two underclocked GTX 580s into a single $700 package; it’s the same price as the HD 6990, and quieter running comes as a bonus for your ears.
The Competition:
The ATI Radeon HD 6990 offers its own set of unique features, although it can no longer unequivocally claim to be the “world’s fastest card”. Launched just two weeks ago, the HD 6990 continues to offer excellent performance at the same price point as the GTX 590. It also gives you an immersive gaming experience with AMD Eyefinity Technology, driving up to five displays simultaneously, and AMD HD3D technology for stereoscopic 3D gaming and Blu-ray 3D playback. The HD 6990 scales superbly, especially at resolutions of 2560×1600 and above. With AMD you also have the possibility of pairing two HD 6990s for Quad-Fire or, for even more flexibility, an HD 6990 plus an HD 6970 for Tri-Fire-X3.
The Future
We do not know what the future will bring, but this amazing card brings a great value to the Fermi family of upgraded GTX “tanks” in Nvidia’s lineup. Look for it at an etailer immediately. Pricing for the reference GTX 590 is $699 and will go up from there depending on what the individual partners offer.
This editor believes that Nvidia brings a very remarkable full-featured DX11 GPU lineup to the market that will find good acceptance among customers and their fans alike. The Fermi architecture is impressive and flexible, and it does translate to performance in gaming. We have also seen Nvidia’s drivers improve, and their multi-GPU SLI scaling for newer games is very impressive. If you currently game on an older card, you will do yourself a big favor by upgrading. The move to a GTX 590 will give you better visuals on the DX11 pathway, and if money is no object, you are no doubt thinking of GTX 590 Quad SLI further down the road for even higher performance with Surround’s three-panel display, or even 3D Vision or 3D Vision Surround for really intense gaming.
If the many exclusive features of the new GTX 590 appeal to you and you are gaming at 1920×1080 or above, you cannot go wrong with a GTX 590. In this editor’s experience, it is also a great choice if you are considering overclocking further as scaling is superb and the reference cooling is up to the task. The competition is hot as the competing Radeons offer their own set of features including a cheaper way to experience 3-panel multi-display with Eyefinity.
Although there can only be “one fastest” video card, it is going to depend on which games you test, their resolutions, and what level of detail and filtering you use for each game. And driver updates will sometimes change performance from month to month. Both cards are super-fast and rather equal in overall performance – but different. If you prefer a noticeably quieter card, the GTX 590 may well be for you.
Stay tuned, there is a lot coming from us at ABT. We are going to follow up this review with much more multi-GPU testing. Watch for our EVGA GTX 560 Ti review this weekend! We also expect to cover a new card launching very shortly!
Mark Poppin
ABT Senior Editor
Please join us in our Forums
Become a Fan on Facebook
Follow us on Twitter
For the latest updates from ABT, please join our RSS News Feed
Join our Distributed Computing teams
- Folding@Home – Team AlienBabelTech – 164304
- SETI@Home – Team AlienBabelTech – 138705
- World Community Grid – Team AlienBabelTech
This review was so absolutely great. Miles above Anand’s 10 games. I really really think you go above and beyond. One of the few places you can get a full review anymore. Such a great collection of data! Most sites do a handful of games, and there is no way anyone could get a true picture. Most sites have diminished their launch articles to a point of incompleteness, where one has to go somewhere else to get a full picture.
Apoppin, I know it’s a lot of work for you, but I commend you for your dedication to all the data. Yours again is the most useful, especially on a card that is so close in performance to the competitors. I cannot thank you enough for a fantastic article!
Very nice in-depth review, bookmarking the site now! Thank you for your hard work!
I love ABT reviews. They always seem to be pretty fair when it comes to in-depth testing. Keep up the good work guys. Really. I mean that. I plan to work for one of the many review sites when I finish my CE degree, and ABT is high on the list.
I’m impressed that Nvidia beat ATI in cooling with their dual GPU. It’s usually the opposite.
It is nice to see the cost for both cards is about the same which gives more choices for the consumers to pick from.
Nice Job!
Also a small note:
When I click “View All” it doesn’t work, could you guys get it fixed?
Our web master is aware of this issue and is working on it.
Thank-you all for your comments!
The HD 6990 is honored under warranty by most vendors with the OC switch. So I’m quite puzzled why you OC’d the 590 and kept the 6990 the same.
If you want to see the HD 6990 tested with the BIOS position No. 2, please check out our HD 6990 launch article:
http://alienbabeltech.com/main/introducing-the-worlds-fastest-graphics-card-amds-flagship-hd-6990
The same settings were used and now you have both settings completely covered.
You will also note in the Performance Summary of this GTX 590 launch article that we pitted our overclocked HD 6990 (960/1390MHz) against our overclocked GTX 590 (690/1825MHz)
I believe we covered all of the bases.
You pointed out that BIOS switch 2 was tested in another review. But that does not answer why it was not tested in this review side by side with the 590 (I hope that was not your answer as to why). It’s just a bit weird that you throw in a bunch of old games, one being from 2005, but don’t test something most 6990 owners would be using.
BIOS position No. 2 was tested in the review immediately preceding this one and anyone can easily check the performance difference:
http://alienbabeltech.com/main/introducing-the-worlds-fastest-graphics-card-amds-flagship-hd-6990
The launch article about the GTX 590 tested the stock GTX 590 against the stock HD 6990. Then we tested the overclocked GTX 590 and the overclocked HD 6990 – both overclocked as far as they would go – the 6990 being overvolted in the BIOS No. 2 position and the GTX 590 at stock voltage.
Including a third set of slightly-overclocked Radeon numbers would have cluttered the charts and taken precious extra time that was spent on benching, especially because these figures are easily found in the previous article, which was tested at the same settings and with the same drivers.
We don’t “throw in” a bunch of old games at random. We have been following many of these games for nearly three years – when some were only a couple of years old and many top cards struggled with them – and our regular readers appreciate it. We also include a good mix of new games and are always adding more. Next to be added will be Bulletstorm and Shogun 2, and of course Crysis 2 when it is fully debugged and running on the DX11 pathway.
magnificent review
This wasn’t a terrible review but it was definitely biased towards nVidia.
In the Metro 2033 section you actually claim that the GTX 590 “beats” the Radeon 6990 with 39.48 FPS vs. 39.2 FPS. You even used an exclamation point!
Comparisons within 3-6 FPS are inconclusive, let alone 0.3 FPS! Calling that any kind of definitive win is just absurd and proves you’re milking every possible edge the 590 might have.
Don’t get me wrong; I’m using an nVidia card right now (Good ol’ GTX 460 SC) but I appreciate that both companies make stellar cards and it’s only fair to try and be unbiased as a reviewer since GPU consumers tend to be so polarized.
Ultimately nVidia tends to have a slight performance edge and AMD tends to have a price edge. The only things that really make each company’s cards unique are special features like CUDA and Eyefinity.
I don’t think the 590 or equivalent is such a big step from the card I have (the 480) for me really to be that excited.
The step from 8800GTX to GTX 480 was a really huge step though (3x the improvement); this is more like .5x the improvement, so I don’t really give a shit.
Sorry for mentioning all those Nvidia cards, but it’s what I purchase. I work with CUDA for my GPGPU programming, so maybe even I’m a little biased, but OpenCL is what you use for AMD cards, and it’s just as good.
Way too many comparisons of other GPUs, it’s 590 vs 6990, remember? I was all confused with the color schemes on the graphs.
I know quite little about graphics cards, so I didn’t find answers as to why Nvidia is better in some games and AMD in other games. DirectX vs OpenGL? I play only WaW, BO, ET and Brink, so I still don’t know whether to buy a 580, 590, 6970 or 6990 next. I have 570 SLI now, but somehow my computer doesn’t work well when SLI is on, so a single card is the solution. But which one?
I would highly recommend that you ask your question in our forum. You will get good information there and far more detail than from comments here.
SLI should work well if you have an SLI motherboard. And there are many reasons why Nvidia is better in some games and AMD in others.
http://alienbabeltech.com/abt/
Would it be possible to get a copy/screenshots of the current bios setting using during the tests?
Wicked not biased review. Not.
What kind of a joke is this? We are comparing overall performance, so why the HELL is PhysX disabled? It is fair because it’s an official feature of Nvidia; if AMD does not have it, it’s not Nvidia’s fault. We want an answer. It looked like you guys wanted Nvidia to look slow or something?
There is no other way to compare performance directly.
How many PhysX games are we comparing? Two out of nearly thirty games. Just ignore those results and look at the rest of the review. We test about 3 times more games than any other tech site.
@najeeb what kind of joke are you on about? physx makes Nvidia cards slower in performance if turned on, if physx was on it’d be unfair to Nvidia right? In your case you should say “It actually seems like they’re trying to make Nvidia cards look faster than the AMD counterparts by disabling certain features.”
Wow, that was extremely biased. Xbitlabs.com’s comparison was perfect.
I just watched a YouTube video comparing both cards, and the Radeon draws a lot less power from the PSU while giving better or equal FPS results than the Nvidia. Other reviews find both cards to be equivalent. I think this review is an outlier and biased.