Introducing Nvidia’s GTX 580 – Fermi Improved!
We would like to reintroduce Nvidia’s “Tank” as the GTX 580, and this time we present a much leaner, meaner and faster machine – one that also improves on the thermals, power draw and noise of the reference GTX 480. “The Tank” refers to Nvidia’s flagship GTX 480, a card equipped to handle any gaming situation at high resolution with maximum details, filtering and anti-aliasing applied.
Nvidia advertises their new high-performance GPU as “the World’s Fastest DX11 GPU” and we now bring you the details of our performance showdown with the reference GTX 480, the Galaxy SuperOverclocked GTX 480 (overclocked to match our GTX 580’s overclocked speeds), the reference Diamond HD 5870 and the reference HD 6870 to see if we can verify Nvidia’s claim. On top of that, we also overclock our reference GTX 580 even further to see how it scales in 23 modern games. Here is the brand new reference GTX 580 (lower image) compared with the reference GTX 480 released back in April of this year.
Nvidia released its long-awaited GeForce GTX 480, based on its brand new Fermi DX11 GF100 architecture, back in April of this year – six months after AMD’s own DX11 Cypress video cards. The Fermi GPU – Graphics Processing Unit, a term coined by Nvidia – continues the strategy the company has pursued since the G80 launched over three years ago: to create a general-purpose processor, co-equal with the CPU, that also renders amazing graphics. The culmination of Nvidia’s efforts with the DX11 Fermi architecture, the flagship GTX 480, has until now been the fastest single GPU, with the caveat that it runs rather hot and that cooling solutions based on the reference design are rather noisy.
Just over six weeks ago, we introduced a refined ‘Tank’, the Galaxy GTX 480 SuperOverclock built on a mature process, which Galaxy calls “the fastest GTX 480 card in the world.” The slower-clocked reference GTX 480 is already the fastest single-GPU video card, as we covered in this review, and the Galaxy card pushes it further still. We found that the new Galaxy GTX 480 SOC is not only super-fast, but also 30 dBA quieter and 30C cooler than the reference version, thanks to its impressive and well-engineered 3-slot design and Arctic Cooling VGA cooler – all for a suggested etail price of $489. Well now, enter the completely redesigned Nvidia Tank at $499 suggested etail pricing. Best of all, the GTX 580 is designed to be faster and more efficient than even the super-overclocked GTX 480s.
We saw AMD introduce their new HD 68×0 series lineup to replace the HD 58×0 series in our review last month here. We found that the “Barts” GPU it is based on is so far only a mid-range launch, with the HD 6870 only slightly faster than the HD 5850; the best part is that it replaces it for less money. We will use the HD 6870 and HD 5870 to show you where the GTX 580 fits in relation to them, and we also await AMD’s high-end “Cayman” HD 6900 series, due to be released on December 13, 2010. An evaluation such as this one can only give you the equivalent of a “snapshot” of a moment in time, and we shall attempt to determine the value of this new video card in relation to the others and what we can probably expect going forward.
AMD’s Take on GTX 580 – ABT’s interview with Stanley Ossias, Director, Mobile Discrete Graphics Product Management at AMD
AlienBabelTech was fortunate to interview Stanley Ossias, Director, Mobile Discrete Graphics Product Management at AMD, this morning right about the time the GTX 580 NDA ended. We will bring you the full interview with Mr. Ossias later this week, but we summarize his response in our conclusion.
The Competing Cards
The Galaxy GTX 480 SOC (top) is a massive 3-slot design with a heavy backplate to keep it cool; contrast it with the reference GTX 480 (center) and GTX 580 (bottom). It is also pictured with the HD 5870 (right) and HD 6870 (left), all of which we are going to benchmark for you.
Nvidia’s new GTX 580 comes with an MSRP of $499. So we need to answer the question: is it worth the $150 premium over the $350 or so that one would currently spend for AMD’s top single-GPU video card, the HD 5870? Even AMD’s dual-GPU-on-a-single-card HD 5970 is now dropping in price from over $600 to around $500, although it has always been in limited supply. AMD is aggressively preparing for Nvidia’s new Tank in much the same way that Nvidia met AMD’s new HD 68×0 series – with sharp price drops on their current cards – as both companies bring out refreshed product lines on the 40 nm process.
To properly bring you this review, we are using our reference Diamond HD 5870 (850/1200 MHz) as well as our reference stock-clocked AMD HD 6870 (900/1050 MHz) which we put through their paces this week with the very latest WHQL drivers – Catalyst 10-10. We have already tested similar combinations many times using our older video cards but shall not include them as this review only concerns the performance of the fastest of the fast single-GPU video cards.
You will see us pit our HD 5870 and the HD 6870 against the new GTX 580, both stock and overclocked, and also against the reference GTX 480 as well as our overclocked-to-the-max Galaxy GTX 480 SuperOverclock in 23 modern games and in 2 synthetic benchmarks, generally using 1920×1200 and 2560×1600 resolutions. Since we are using the fastest of the fast single-GPU video cards, it makes sense to test them at the highest resolutions and with the most demanding settings. Since we are matching the top single-GPU video cards to each other in a performance showdown, we do not include the dual-GPU HD 5970, nor CrossFire, nor SLI configurations.
Is the GTX 580 worth $499, which is about $150 more than its rival, AMD’s HD 5870? And what about overclocked versions of the GTX 480, which are also priced at about $500?
We have already analyzed and compared the reference GTX 480 and the HD 5870 at all performance levels, many times since last April, and we can definitively announce a performance winner – the GTX 480 – despite its higher TDP, noise levels and price. Now we are going to look at Nvidia’s much more refined and quieter version of Fermi’s GF100, which they have reworked into GF110. We set it against the reference and super-overclocked GTX 480 as well as two of AMD’s fastest single-GPU cards, the HD 5870 (850/1200 MHz) and the brand new HD 6870 (900/1050 MHz), to see if the new GTX 580 is worth its price premium as the new “fastest single-GPU video card”.
It is very important to note that we overclocked our Galaxy GTX 480 SuperOverclock as far as it will go – from the GTX 480 reference clocks of 700/1848 MHz to 850/2004 MHz – to exactly match our overclocked-as-far-as-it-will-go GTX 580, taken from its stock 772/2004 MHz to the *same* 850/2004 MHz, to give you a good idea of the performance increase with core scaling between the old and the new GPUs. In this manner, we can more easily see the architectural improvements of the GTX 580 over the GTX 480.
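The arithmetic behind this clock-matching approach can be sketched in a few lines of Python; with both GPUs at identical clocks, any remaining difference is attributable to architecture rather than frequency. The frame rates below are hypothetical placeholders, not our measured results:

```python
# Sketch: isolating architectural (per-clock) gains by matching clocks.
# The fps figures below are hypothetical placeholders, not measured data.

def per_clock_gain(fps_old: float, fps_new: float) -> float:
    """With both GPUs at identical core and memory clocks, any fps
    difference reflects architecture rather than frequency."""
    return (fps_new - fps_old) / fps_old * 100.0

# Both cards overclocked to the same 850 MHz core / 2004 MHz memory:
gtx480_fps = 60.0   # hypothetical GTX 480 SOC result
gtx580_fps = 66.0   # hypothetical GTX 580 result
print(f"Architectural improvement: {per_clock_gain(gtx480_fps, gtx580_fps):.1f}%")
```

Any gap the charts show at these matched clocks is therefore the GF110’s extra CUDA cores and efficiency tweaks at work, not a higher frequency.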
Before we do performance testing, let’s take a look at the GTX 580 and quickly recap the DX11 architecture and features of the original Fermi GF100, which we covered in our reviews of the GTX 480, published here, here and here. Senior Editor BFG10K reviewed the GTX 470 here and here, and Senior Editor MrK covered the GTX 465 here.
We also recently examined the performance of Galaxy’s GTX 480 SuperOverclock, and we reran the GTX 480 against stock and overclocked versions of the HD 5870, HD 6870 and HD 6850 here just a few weeks ago.
Specifications
The GeForce GTX 580 was designed from the ground up to deliver exceptional tessellation performance, a key component of Microsoft’s DirectX 11 development platform for PC games. Tessellation allows game developers to increase the geometric complexity of models and characters to deliver far more realistic and visually rich gaming environments. You will soon see that the GTX 580 is clocked far higher than the reference GTX 480, and that we were able to push even further past the reference core clock that Nvidia set for the GTX 580 while the card remained cool and quiet.
Here is the specification chart for the GTX 580 at a glance; right away we notice its lower TDP of only 244W and that the new GPU now supports the new HDMI 1.4a connector standard.
Needless to say, the new Fermi GF110 GTX 580 brings a lot of features to the table that current Nvidia customers will appreciate, including improved CUDA and PhysX, 2D and 3D Surround to drive up to 3 LCDs with GTX SLI and Tri-SLI, superb tessellation capabilities, and a really fast GPU compared with the GT200 series and even the GF100 GTX 480 series. Let’s see how Nvidia breaks down the performance enhancements of their new GTX 580 over the reference GTX 480.
While the GTX 480 and GTX 580 share the same basic SM design, the newer GTX 580 improves performance on a clock-for-clock basis in two key areas: it now supports full FP16 texture filtering, and new tile formats increase efficiency by 5% or more in many cases over the GTX 480. Besides that, Nvidia has increased the clock frequencies and raised the CUDA core count from 480 in the GTX 480 to 512 in the GTX 580, and there are also more texture units and SMs, all operating more efficiently.
Beauty is more than skin deep
The GF100 Fermi GTX 480 was completely re-engineered at the transistor level into the GF110 GTX 580; there are now about 200,000 fewer transistors, for a total of about 3 billion in the GF110 GPU. Through a complete Fermi redesign on TSMC’s now-mature 40 nm process, the GTX 580 achieves higher clock speeds than the GTX 480 with less power. In other words, Nvidia increased the CUDA core count from 480 to 512, upped the clock speed and lowered the power requirements. This has led to an amazing reduction in noise from the VGA cooling fan, which now sits in a lower dB range than the GTX 285! In fact, our own ears tell us that the GTX 580 is now about as quiet as the HD 5870 – neither card suffers from the loud and sometimes startling “spin up” of the GTX 480 under load in a game. How has Nvidia achieved this?
Nvidia’s new vapor chamber
Vapor chamber cooling is not new to the PC world; AMD Graphics first began shipping video cards with vapor chamber coolers in 2007 and has continued to develop them in every product generation since, including the $180 Radeon HD 6850. However, this kind of cooling is new for Nvidia with the GTX 580.
The GTX 580 employs a custom sealed copper vapor chamber to efficiently remove heat from the GPU. It then dissipates the heat by blowing the GPU-heated air through a large dual-slot heatsink and out the back of the video card and thus out of the PC case. But there is more to quietly cooling a hot GPU than a vapor chamber.
Nvidia has completely redesigned the GTX 480’s reference fan. The new GTX 580 fan has been re-engineered to produce a lower pitch and tone that is less noticeable to human hearing. Not once in the past week did this reviewer notice the noise coming from the GTX 580 – in sharp contrast to the GTX 480, which could be quite noticeable during a game. In fact, the GTX 580 features a new adaptive fan speed control algorithm that smooths the ramp-up and ramp-down of fan RPMs so that they are far less noticeable than the GTX 480’s. This improved programmable fan controller enables fine-grained, continuously variable control and handles multiple fan profiles simultaneously, responding instantly to temperature changes; similar controllers are also found in competing Radeon products’ own internal micro-controllers.
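For illustration, an adaptive fan ramp of this kind can be sketched as follows; the temperature-to-RPM curve, the RPM limits and the smoothing factor are our own assumptions for the sketch, not Nvidia’s actual firmware values:

```python
# Sketch of an adaptive fan-speed ramp of the kind described above.
# The temperature-to-RPM curve, RPM limits and smoothing factor are
# illustrative assumptions, not Nvidia's firmware values.

def target_rpm(temp_c: float) -> float:
    """Map GPU temperature to a target fan speed (hypothetical curve)."""
    if temp_c <= 40:
        return 1200.0                            # idle floor
    if temp_c >= 90:
        return 4000.0                            # full tilt
    return 1200.0 + (temp_c - 40) / 50 * 2800.0  # linear ramp in between

def smooth_rpm(current_rpm: float, temp_c: float, alpha: float = 0.1) -> float:
    """Move only a fraction of the way toward the target each control tick,
    so the fan ramps gradually instead of 'spinning up' abruptly."""
    return current_rpm + alpha * (target_rpm(temp_c) - current_rpm)

rpm = 1200.0
for _ in range(30):        # simulate 30 control ticks at a steady, hot 85 C
    rpm = smooth_rpm(rpm, 85.0)
print(round(rpm))          # approaches the target RPM without overshooting
```

The key point is the exponential smoothing in `smooth_rpm`: instead of jumping straight to the target speed when a game loads the GPU, each tick closes only part of the gap, which is what makes the ramp inaudible.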
To add to the SLI experience, even the cover of the GTX 580 has been redesigned: it is angled to offer better airflow between the cards in tight SLI configurations.
Can you tri-SLI your GTX 580?
Tri-SLI is supported by the GTX 580, and SLI scaling is improved over the GTX 480. There are also more compelling reasons besides raw performance to consider GTX 580 SLI, including being able to experience Nvidia’s multi-display 2D/3D Surround. You will also require a less powerful PSU to run GTX 580 SLI than to power GTX 480 SLI. And because of the cover’s new angled design, a GTX 580 can be used in an SLI configuration with another GTX 580 and still get decent cooling in many X58 motherboards that currently overheat with two GTX 480s in SLI. Also, using the latest GeForce 260 drivers, each card can keep its own unique clocks; that is, they can be set asynchronously. Because of severe time constraints on this article, SLI will be examined in depth in a further article, as will 3-panel 2D Surround versus Eyefinity.
New Power Monitoring Hardware – or no more Furmark!
In order to stay below the 300 W power limit imposed by the PCIe specification, Nvidia has added a power draw limitation system to the card. When either Furmark or OCCT is detected, sensors measure the incoming current and voltage to calculate the total power draw. If the power draw exceeds a predetermined limit, the GTX 580 automatically downclocks to avoid damage to hardware components. After the power draw drops back to safe limits, the GPU returns to normal clocks, much as it does under thermal management.
Because of this, we will no longer use Furmark to show power draw and will return to using games to illustrate real-world situations. Currently, this power management only switches on when Furmark or OCCT is detected, and it should not limit overclocking unless Nvidia extends it to regular PC games. Evidently it works by having the GeForce driver detect the program, much as antivirus software detects a known signature; however, we have found a workaround and we shall update the Power section of this evaluation later on by completely maxing out the power draw – probably for the last time in our testing, as Nvidia responds with better countermeasures.
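The capping behavior described above is simple to sketch: per-rail current and voltage readings are combined into board power, and the core clock steps down only while a known stress application is running and the draw exceeds the limit. All thresholds and clock steps below are illustrative, not Nvidia’s real firmware parameters:

```python
# Minimal sketch of the GTX 580 power-capping logic described above.
# Constants are illustrative assumptions, not Nvidia's real thresholds.

POWER_LIMIT_W = 300.0        # PCIe specification board-power ceiling
NORMAL_CLOCK_MHZ = 772.0     # GTX 580 reference core clock
THROTTLED_CLOCK_MHZ = 386.0  # assumed 50% throttle step

def board_power(rails):
    """rails: list of (volts, amps) tuples, one per 12 V input."""
    return sum(v * a for v, a in rails)

def next_clock(rails, stress_app_detected):
    """Downclock only when a known stress app is running AND the measured
    draw exceeds the limit; otherwise run at full clocks."""
    if stress_app_detected and board_power(rails) > POWER_LIMIT_W:
        return THROTTLED_CLOCK_MHZ
    return NORMAL_CLOCK_MHZ

# Normal game load of roughly 244 W: full clocks
print(next_clock([(12.0, 10.0), (12.0, 10.3)], stress_app_detected=False))
# Furmark detected and pulling well over 300 W: clocks step down
print(next_clock([(12.0, 14.0), (12.0, 14.0)], stress_app_detected=True))
```

Note the two-condition gate: a game that briefly spikes past the limit is left alone, which is why the mechanism should not affect overclocking in normal titles.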
Let’s flip the GTX 580 over and check out the other side:
As a total package, the new GTX 580 looks (and sounds) great! Let’s show you the results of our one-week test drive, shall we? We put it to the test in 23 PC games and in two synthetic tests. But first, head to the next page to check out our test bed configuration.
Test Configuration
Test Configuration – Hardware
- Intel Core i7 920 (reference 2.66 GHz and overclocked to 3.8 GHz); Turbo is off.
- Gigabyte EX58-UD3R (Intel X58 chipset, latest BIOS, PCIe 2.0 specification; CrossFire/SLI 16x+16x).
- 6 GB OCZ DDR3 PC-1800 and Kingston RAM (3×2 GB, tri-channel at PC-1600 speeds; 2×2 GB supplied by Kingston)
- GeForce GTX 580, 1.5 GB reference design and clocks, supplied by Nvidia
- GeForce GTX 480, 1.5 GB reference design and clocks, supplied by Nvidia
- Galaxy GTX 480 SOC; 1.5 GB, overclocked version and overclocked further, supplied by Galaxy
- ATI Radeon HD 5870 (1 GB, reference clocks, 850/1200 MHz) by Diamond
- ATI Radeon HD 6870 (1 GB, reference clocks, 900/1050 MHz) supplied by AMD
- Onboard Realtek Audio
- Two identical 250 GB Seagate Barracuda 7200.10 hard drives configured and set up identically from drive image; one partition for Nvidia GeForce drivers and one for ATI Catalyst drivers
- Thermaltake ToughPower 775 W power supply unit supplied by Thermaltake
- Thermaltake Element G Case supplied by Thermaltake
- Noctua NH-U12P SE2 CPU cooler, supplied by Noctua
- Philips DVD SATA writer
- HP LP3065 2560×1600 thirty inch LCD
Test Configuration – Software
- ATI Catalyst 10-10; highest quality mip-mapping set in the driver, Catalyst AI set to “Standard”; surface performance optimizations are off
- NVIDIA GeForce 262.99 beta release drivers for the GeForce cards; High Quality
- Windows 7 64-bit; very latest updates
- DirectX July 2010
- All games are patched to their latest versions.
- vsync is off in the control panel and is never set in-game.
- Varying AA enabled as noted in games and “forced” in Catalyst Control Center for UT3 and Batman: Arkham Asylum; all in-game settings are specified with 16xAF always applied; 16xAF forced in control panel for Crysis.
- All results show average, minimum and maximum frame rates except as noted.
- Highest quality sound (stereo) used in all games.
- All DX10 titles were run under their DX10 render paths; DX11 titles under their DX11 render paths.
The Benchmarks
- Vantage
- Call of Juarez
- Crysis
- Far Cry 2
- Just Cause 2
- X3:Terran Conflict
- Dirt 2
- Lost Planet
- Lost Planet 2
- Unreal Tournament 3
- Resident Evil 5
- STALKER, Call of Pripyat
- Batman: Arkham Asylum
- H.A.W.X.
- H.A.W.X. 2
- Battleforge
- Enemy Territory: Quake Wars
- F.E.A.R.
- Call of Duty 4
- Alien vs. Predator
- Serious Sam, Second Encounter HD (2010)
- Metro 2033
- Mafia II
- Grand Theft Auto IV
- Heaven 2
Vantage
Vantage is Futuremark’s latest test. It is really useful for tracking changes in a single system – especially driver changes. There are two mini-game tests, Jane Nash and New Calico, and also two CPU tests, but we are focusing on graphics performance. Here is a scene from Vantage’s second mini-game.
Let’s go right to the graphs and first check the basic tests with the default benchmark scores:
We note the rankings. Unfortunately, they are completely meaningless when they are presented this way.
We see an interesting lineup. Unfortunately for our purposes, Vantage remains a weak way to compare one video card’s performance to another – even in the same system. Let’s move on to PC games and real-world situations and create our “snapshot” of the current performance of the top single-GPU video cards!
Call of Juarez
Call of Juarez is one of the very earliest DX10 games. It is loosely based on the Spaghetti Westerns that became popular in the late 1960s and early 1970s. Call of Juarez runs on Techland’s Chrome Engine using Shader Model 4 with DirectX 10. Our benchmark is built into Call of Juarez; it runs a simple flyby of a level created to showcase the game’s DX10 effects. It offers good repeatability and is a good stress test of DX10 features in graphics cards, although it is not quite the same as actual gameplay because the game logic and AI are stripped out of this demo.
Running the Call of Juarez benchmark is easy. You are presented with a simple menu to choose resolution, anti-aliasing and shadow quality options. We set shadow quality to “high” and the shadow map resolution to the maximum, 2048×2048. At the end of the run, the demo presents you with the minimum, maximum and average frame rates, along with the option to quit or run the benchmark again. We always ran the benchmark at least a second time and recorded that generally higher score.
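This run-at-least-twice, keep-the-better-score practice can be wrapped in a tiny harness; `run_benchmark` here is a hypothetical stand-in for launching the game’s demo and reading back its average fps:

```python
# Sketch of a best-of-N benchmark harness. run_benchmark is a hypothetical
# stand-in for launching the game's built-in demo; here it is faked with
# canned numbers for illustration.

def best_of(run_benchmark, runs=2):
    """Run the benchmark `runs` times and keep the highest average fps,
    since the first pass is typically depressed by disk and shader caching."""
    return max(run_benchmark() for _ in range(runs))

# Simulated results: the first run pays the caching penalty
results = iter([41.2, 44.7])
print(best_of(lambda: next(results)))   # keeps the higher second run
```

Taking the better of the repeated runs simply discards the one-time caching penalty; minimums and maximums are still read from the run that is kept.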
Here are Call of Juarez DX10 benchmark results, first at 1920×1200; there is no 2560×1600 run available:
Now we test at 1680×1050:
The GTX 580 takes the lead over both the stock and the overclocked GTX 480s. At any rate, the GTX 580 is the clear winner, with some nice scaling when we overclock it that translates into a real performance increase over the reference version. Even when the GTX 480 is solidly overclocked from 700/1848 MHz to 850/2004 MHz to match the GTX 580’s overclocked speeds, it still falls short of the new GF110 GPU.
Crysis
Next we move on to Crysis, a science-fiction first-person shooter by Crytek. It remains one of the most demanding games for any PC and is still one of the most beautiful games released to date. Crysis is set in a fictional near-future where an alien spacecraft is discovered buried on an island near the coast of Korea. The single-player campaign has you assume the role of a US Delta Force operator, ‘Nomad’, who is armed with futuristic weapons and equipment. Crysis uses DirectX 10 for graphics rendering.
A standalone but related game, Crysis Warhead, was released in 2008. CryEngine 2 is the game engine that powers Crysis and Warhead; it is an extended version of the CryEngine that powers Far Cry. As well as supporting Shader Model 2.0, 3.0 and DirectX 10’s Shader Model 4.0, CryEngine 2 is multi-threaded to take advantage of SMP-aware dual-core systems, and Crytek has developed their own proprietary physics system, called CryPhysics. Note that actually playing this game is a bit slower than the demo implies.
GPU Demo, Island
All of our settings are at the in-game maximum of “very high”, including 2xAA for 2560×1600 and 4xAA for 1920×1200, and we force 16xAF in the control panels. Here is Crysis’ Island demo benchmark, first at 2560×1600 resolution (2xAA):
Although the HD 5870 passes the stock GTX 480, the stock GTX 580 as well as the overclocked GTX 480 move right past it. Next we test at 1920×1200 and we up the anti-aliasing from 2x to 4xAA as we lower the resolution from the previous run.
The reference GTX 480 is edged out by the HD 5870, although the GTX 480, when overclocked, is definitely faster. However, the GTX 580 is faster still than the highly overclocked GTX 480, while the Radeons play Crysis a bit slower at our admittedly extreme settings. So far, no single GPU offers playable framerates at 2560×1600 at ‘very high’ plus 2xAA, although our overclocked GTX 580 comes closest. Perhaps the larger framebuffer of the GTX cards makes a little performance difference over the Radeons.
All of our top cards now play Crysis acceptably at 1920×1200 if you are willing to compromise on anti-aliasing and/or lower a couple of detail settings. The experience is similar across the cards, although the GTX 580 is definitely faster if you are keeping score, and our overclocked version scales nicely.
Far Cry 2
Far Cry 2 uses the name of the original Far Cry, but it is not connected to the first game; it brings you a new setting and a new story. Ubisoft created it with their Dunia Engine. The game takes place in an unnamed African country during an uprising between two rival warring factions. Your mission is to kill “The Jackal”, the Nietzsche-quoting mercenary who arms both sides of the conflict into which you are dropped.
The Far Cry 2 game world is loaded in the background and on the fly to create a completely seamless open world. The Dunia engine provides good visuals that scale well, and the Far Cry 2 design team actually went to Africa to add realism to the game. One thing to especially note is the engine’s very realistic fire propagation, a far cry from the scripted fire and explosions we are used to seeing. First we test the Far Cry 2 benchmark at 2560×1600 with AI enabled, using the Ranch Long benchmark with ultra settings plus 8xAA.
Our extreme settings are too much for the Radeons which stumble in the minimums. Let’s move on to 1920×1200 resolution:
The GTX 580 and the GTX 480 run away from the Radeons, and the overclocked GTX 580 is clearly the fastest of the very best. Here we see a clean sweep for the GTX 580 in Far Cry 2.
Enemy Territory: Quake Wars
Enemy Territory: Quake Wars is an objective-driven, class-based first-person shooter set in the Quake universe. It was developed by Splash Damage together with id Software and published by Activision. Quake Wars pits the combined human armies of the Global Defense Force against the technologically superior Strogg, an alien race that has come to Earth to use humans for spare parts and food. It lets you play a part – probably best as an online multiplayer experience – in the battles waged around the world in mankind’s desperate war to survive.
Quake Wars is an OpenGL game based on id’s Doom3 game engine with the addition of their MegaTexture technology. It also supports some of the latest 3D effects seen in today’s games, including soft particles, although it is somewhat dated and less demanding on video cards than many DX10 games. id’s MegaTexture technology is designed to provide very large maps without having to reuse the same textures over and over again.
For our benchmark we chose the flyby, Salvage Demo. It is one of the most graphically demanding of all the flybys and it is very repeatable and reliable in its results. It is fairly close to what you will experience in-game. All of our settings are set to ‘maximum’ and we also apply 4xAA or 8xAA plus 16xAF in game. First we test at 2560×1600 resolution with all settings fully maxed in-game plus 4xAA/16xAF:
All four cards offer a similarly excellent experience, with the GTX 580 again beating the competition – including the highly overclocked GTX 480 – by more than a couple of frames per second. Let’s crank the anti-aliasing from 4x to 8x while we test at 1920×1200 resolution.
All of our video cards have no trouble handling this game fully maxed out. The new GTX 580 really stands out as the quickest among the very best in this game. This is the tank you want to defend against the Strogg in this game.
F.E.A.R.
F.E.A.R. – First Encounter Assault Recon – is a DX9c game by Monolith Productions, originally released in October 2005 by Vivendi Universal. Later there were two expansions, with the latest, Perseus Mandate, released in 2007. Although the game engine is aging, it still has some of the most spectacular effects of any game. F.E.A.R. showcases a powerful particle system, complete with sparks and smoke for collisions, as well as bullet marks and other effects including “soft shadows”. This is highlighted by the built-in performance test, although it was never updated.
This performance test will tell you how F.E.A.R. will run, but both of its expansions are progressively more demanding on your PC’s graphics and will run slower than the demo. We always run at least two sets of tests with all in-game features at ‘maximum’. F.E.A.R. uses the Jupiter Extended Technology engine from Touchdown Entertainment. We test this game with the most demanding settings: fully maxed details with 4xAA/16xAF, and soft shadows ‘off’ as they do not play well with AA. Let’s start at 2560×1600:
None of our cards has issues with this game at the highest settings and resolution, although the Radeons drop off in the minimums. The GTX 580 again stands out, although the highly overclocked GTX 480 nearly catches the stock-clocked reference GTX 580. Let’s now check out 1920×1200 with the same maxed-out settings:
The GTX 580 is strongest in this DX9 game – first the overclocked version, followed by the overclocked GTX 480 and then the reference version. Both the HD 5870 and the HD 6870 also offer very playable experiences. Even so, there is not much practical difference in playing F.E.A.R. between the fastest and the slowest video cards, as the minimums are already sufficiently high.
Batman: Arkham Asylum

Batman: Arkham Asylum is an action-adventure/stealth video game based directly on DC Comics’ long-running Dark Knight character. The Joker devises an elaborate plot from inside Arkham Asylum that Batman is personally forced to put a stop to. The game’s primary characters are superbly voiced by Kevin Conroy, Mark Hamill and Arleen Sorkin, who reprise their roles as Batman, the Joker and Harley Quinn.
The game is played as an over-the-shoulder, third-person action-adventure with a primary focus on Batman’s combat abilities, stealth and detective skills, complete with an arsenal of gadgets that can be used both in combat and for exploring in “detective mode”. The game uses a “Freeflow” combat system as well as Batarangs and the Bat-Claw. The player also has access to progressively stronger counter-attacks, as well as a special attack that can quickly take down a single foe. Stealth tactics include silent takedowns by sneaking up on foes and dropping and/or gliding from overhead perches.
Batman: Arkham Asylum uses a highly modified version of the Unreal Engine 3, which does not support AA natively; AA must be added in and supported by the game’s developer. Unfortunately, we cannot compare Batman: Arkham Asylum on our GeForces exactly against the Radeons with PhysX on. In the game’s control panel, the settings also differ depending on whether you play with a GeForce or a Radeon.
This time we did something different from our usual testing – we simply did not test the Radeons in direct comparison with the GeForce cards. The developer optimized in-game MSAA for GeForce cards, but on Radeons you must set non-optimized AA in the Catalyst Control Center, with a substantially higher performance hit. Only the Game of the Year Edition of Batman: Arkham Asylum supports in-game AA settings for both Radeon and GeForce cards; we have purchased that edition and will test with it in subsequent evaluations.
We begin testing at 2560×1600 with details maxed and with 8xMSAA applied in the game’s setting control panel.
In each case, both cards, at any clockspeed, are able to offer similar playing experiences, as the minimums are sufficiently high even at 2560×1600 with details maxed and 8xMSAA applied. 1920×1200 can only be faster.
Even when it is highly overclocked, the GTX 480 falls short of the reference GTX 580. There is absolutely no problem playing this game fully maxed out with either the GTX 480 or the GTX 580, and this game would be a superb candidate for playing in 3D Vision or even 3D Surround.
Call of Duty 4: Modern Warfare
Call of Duty 4: Modern Warfare (CoD4) is a first-person shooter running on a custom engine. It has nice graphics, and although the engine is somewhat dated compared with others, it runs well on modern PCs. It is the first CoD installment to take place in a modern setting instead of World War II. It differs from the previous Call of Duty games by having a more film-like plot that intermixes story lines from two perspectives: that of a USMC sergeant and a British SAS sergeant. There is also a variety of short missions where players control other characters in flashback sequences to advance the story. Call of Duty 4’s move to modern warfare introduced a variety of modern conventional weapons and technologies, including plastic explosives.
There are currently about 20 multiplayer maps in CoD4. It is very popular and there is a new expansion for it. CoD Modern Warfare 2 was also released with updated visuals but it is also not very demanding on graphics cards. For multiplayer, CoD4 includes five preset classes and introduces the Perks system. Perks are special abilities which allow users to further customize their character to suit their personal style. Our timedemo benchmark was created by ABT’s own Senior Editor and lead reviewer, BFG10K. It is very accurate and totally repeatable. Here is CoD4, first at 2560×1600 resolution with all in-game settings completely maxed out plus 4xAA:
The HD 5870 edges out the GTX 480 and the stock GTX 580, although our new overclocked GTX 580 wins overall. Next we test at 1920×1200:
This time the GTX 580 takes the lead, both stock and overclocked, followed by the reference HD 5870 matching the highly overclocked GTX 480, then the reference GTX 480 and, last, the HD 6870. We see results similar to Unreal Tournament 3, where a popular multiplayer game is very playable even on midrange graphics cards from the last generation and plays very smoothly with this generation’s top video cards. The overclocked GTX 580 is the fastest in this maxed-out benchmark, although the HD 5870 puts in a strong showing.
Aliens vs. Predator

Aliens vs. Predator, known to fans as Aliens versus Predator 3 or AVP3, is a video game developed by Rebellion Developments and published by Sega in February 2010. It is the sixth game in the Aliens versus Predator series. There are three campaigns in the game, one for each race or faction (the Predators, the Aliens and the Colonial Marines), that form one main storyline, although the objectives differ depending on your choice of campaign.
Aliens vs. Predator’s DX11 benchmark is a stand-alone test that, as the name implies, runs only on DX11 cards. First we bench at 2560×1600 with maxed-out settings plus 2xAA.
The GTX 580 is the fastest stock card, although it is almost caught by our super-overclocked Galaxy GTX 480 SOC. Now we test at 1920×1080 and up the anti-aliasing to 4xAA.
We see the GTX 580s take a solid performance lead, scaling nicely alongside the super-overclocked GTX 480. The Radeons appear to struggle a bit more when 4xAA is applied.
Serious Sam features cooperative gameplay and allows for split-screen action supporting up to 4 players. Serious Sam: The Second Encounter was remade as “HD” using Serious Engine 3. It was released in April 2010 for PC. Besides updated visuals, new game modes, including “Co-op Tournament” and “Survival” for single player, were introduced in this remake. Serious Sam 3 is currently in development by Croteam and is expected to debut at E3 2011. We use the three basic “ultra” presets for benching Serious Sam: The Second Encounter HD. Further fine-tuning is possible that would make the game even more demanding, but we chose the “ultra” presets with only one higher GPU setting to allow for testing beyond 1920×1080. We test first at 2560×1600 resolution:
And finally at 1920×1200 with the same ultra presets:
Serious Sam: The Second Encounter HD on Serious Engine 3 is quite demanding, and yet all of our top configurations play it satisfactorily using the game’s built-in “ultra” presets. Although the HD 5870 trades blows with the stock GTX 480, the stock GTX 580 simply blasts past them both. However, even the HD 6870 puts in a good showing and is sufficient to play this game at 1920×1200.
Mafia II is a third-person action-adventure video game and the sequel to Mafia: The City of Lost Heaven. Developed by 2K Czech and published by 2K Games, it was released last month. Mafia II is set from 1943 to 1951 in Empire Bay, a fictional city based mostly on San Francisco and New York City, along with some influences from Chicago and Detroit.
Mafia II is a gritty drama which chronicles the rise of World War II veteran Vito Scaletta who joins the Falcone Crime Family and becomes a ‘made’ man. There are 15 chapters in the game and over two hours of game engine generated cutscenes.
Mafia II makes extensive use of Nvidia’s PhysX whose full effects are seen smoothly only by playing on a PhysX-enabled GeForce and preferably with a second video card dedicated to it. For this article, we used the full retail game with Mafia II‘s built-in benchmark with the highest settings for 1920×1200 and 1680×1050 – without PhysX except as noted – and this time we will reserve comment until after both charts. Playing at 2560×1600 is difficult without substantially lowering the settings that we love to test at.
Now at 1680×1050:
All of our video cards give a satisfactory playing experience despite the benchmark’s suggestion that the framerates drop too low. However, enabling AA and high PhysX together will bring even a GTX 580 to its knees at 1920×1200, although medium PhysX settings are quite playable on a stock GTX 580 at 1680×1050, as you can see above. If you want to run Mafia II with both AA and high PhysX settings on, consider getting a dedicated PhysX card.
Metro 2033 is the “Crysis” of 2010: a very demanding game on any PC, with the very latest DX11 visuals. Metro 2033 is an action-oriented video game combining survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010. The game utilizes the multi-platform 4A Engine, and there is some doubt as to whether the game’s engine is related to the original X-Ray engine used in S.T.A.L.K.E.R.
The Metro 2033 story takes place mostly in post-apocalyptic Moscow’s metro system but occasionally the player has to go above ground on some missions and to search for valuables. Metro 2033‘s locations reflect the dark atmosphere of real metro tunnels but in a much more dangerous and lethal manner. Strange phenomena and noises are frequent, and mostly the player has to rely only on their flashlight to find their way around in otherwise total darkness. Even more deadly is the surface as it is severely irradiated and a gas mask must be worn at all times due to the toxic air.
Last month, THQ released an official benchmark for Metro 2033. It is available when Steam updates the game, and it includes a quality benchmark that provides minimum/maximum/average framerates; you can adjust many graphics settings, including PhysX, AA, DOF and tessellation, and the number of runs. Our presets are set to maximum (very high) with 1xAA and neither PhysX nor DOF enabled:
Here is our first chart at 1920×1200 as 2560×1600 proves too demanding without turning off most of the visuals that make this game really impressive. We test at very High settings with AA and DOF off except as noted.
Let’s test at the same settings, now at 1680×1050.
All of our cards struggle with Metro 2033 at the aggressive settings that we used, except for the GTX 580 and our highly overclocked GTX 480. Metro 2033 is tessellation-heavy and appears to take advantage of the GF100/GF110’s tessellators better than the Radeons’ single tessellator. The HD 6870 still brings up the rear, which is not unusual considering its low $240 pricing – less than half that of the GTX 580.
X3: Terran Conflict
X3: Terran Conflict (X3:TC) offers another beautiful stand-alone benchmark that runs multiple tests and will really strain a lot of video cards. X3:TC is a space trading and combat simulator from Egosoft and is the most recent of their X-series of computer games. X3:TC is a standalone expansion of X3: Reunion, based in the same universe and on the same engine. It complements the story of previous games in the X-Universe and especially continues the events after the end of X3: Reunion. Compared to Reunion, Terran Conflict features a larger universe, more ships, and of course, new missions. The X-Universe is huge. The Terran faction was added with their own set of technology, including powerful ships and stations. Many new weapons systems were developed for the expansion, and it has generally received good reviews. It has a rather steep learning curve.
First we note the results at 2560×1600 with completely maxed out settings plus 8xAA:
Now we test at 1920×1200:
This time all of our video cards run close to each other in a fairly tight grouping except for the more budget-oriented HD 6870. However, all of our video cards perform well and all of them experience similar minimum framerates and a similar playing experience.
DiRT 2
Colin McRae: DiRT 2 is a racing game that was released in September 2009 and is the sequel to Colin McRae: DiRT. It includes many new race events, including stadium events, as your RV travels from one event to another through real-world environments across four continents. DiRT 2 includes five different event types and allows you to compete at new locations. It also includes a new multiplayer mode. DiRT 2 is powered by an updated version of the EGO engine featured in Race Driver: GRID, which also includes an updated physics engine.
We are using the Dirt 2 full retail game built-in benchmark at the highest “ultra” DX11 setting with 8xAA applied. First we test at 2560×1600:
The GTX 580 pulls ahead of an otherwise tight pack. What about 1920×1200?
The GTX 580 gets the DiRT 2 checkered flag on the DX11 pathway as the GTXes pull further away from the Radeons at 1920×1200. However, even the lowest priced $240 HD 6870 can play this game satisfactorily at the highest resolutions.
Just Cause 2
Just Cause 2 is a 2010 sandbox-style action video game by Swedish developer Avalanche Studios and Eidos Interactive, and is the sequel to the 2006 video game Just Cause. Just Cause 2 employs the Avalanche Engine 2.0, an updated version of the engine used in the original, and its visuals are impressive as it is made just for DX10. It is set on the fictional tropical island of Panau in Southeast Asia. Rico Rodriguez returns as the protagonist, who aims to overthrow the evil dictator “Baby” Panay and also to confront his former boss, rogue agent Tom Sheldon.
The gameplay is similar to that of its predecessor in that the player is free to roam the huge open world without needing to focus on the storyline. The Just Cause 2 AI has been rewritten to use a planning system that enables the in-game enemies to do more, and there is also more vertical gameplay, as well as a manual aiming system that allows the player to target an enemy NPC’s specific limbs. Just Cause 2 also includes an adaptive difficulty system which scales as the player progresses. There are new weapons, including laser-guided rockets, as well as several new vehicles, including a Boeing 737. Just Cause 2 now includes dual grappling hooks, which give players the ability to tether unlimited objects to each other – including tethering enemies to vehicles and to each other – which works very well, as one of your goals is to cause maximum chaos. It is a lot of fun!
Here are the maximum settings available to a GeForce card; the bottom two, the Bokeh filter and GPU water simulation, are unavailable to Radeons, so they are left off in all runs to give solid apples-to-apples comparisons across all of our tested video cards. We used the Dark Tower benchmark built into the retail game.
First the benches at 2560×1600:
The HD 5870 trades blows with the stock GTX 480 but the GTX 580 is solidly faster than either of them. Now let’s look at the performance at 1920×1200:
The GTX 580 scores an impressive win over the reference GTX 480, although both Radeons deliver a playable experience at 1920×1200 with maxed-out details and 8xAA. Overclocking either the GTX 580 (moderately) or the GTX 480 (highly) brings similarly excellent scaling with core clockspeed increases.
Lost Planet
Lost Planet: Extreme Condition is a Capcom port of an Xbox 360 game. It takes place on the icy planet of E.D.N. III, which is filled with monsters, pirates, big guns, and huge bosses. This frozen world highlights high dynamic range lighting (HDR) as the snow-white environment reflects blinding sunlight and DX10 particle systems toss snow and ice all around. The game looks great in both DirectX 9 and 10, and there isn’t much of a difference between the two versions except perhaps shadows. Unfortunately, the DX10 version doesn’t look that much better when you are actually playing the game, and it still runs slower than the DX9 version.
We use the in-game performance test from the retail copy of Lost Planet, updated through Steam to the latest version, for our runs. This run isn’t completely scripted, as the creatures act a little differently each time you run it, requiring multiple runs. Lost Planet’s Snow and Cave demos are run continuously by the performance test and blend into each other. Here are our benchmark results with the more demanding benchmark, Snow. All settings are fully maxed out in-game, including 2x or 4xAA and 16xAF. Let’s start with 2560×1600 resolution with 2xAA.
Now at 1920×1200 and with 4xAA:
All of our top cards have issues with minimum framerates at the highest resolution. The reference GTX 480 is beaten by the top two Radeons, although the overclocked version wins out. However, the stock GTX 580 convincingly beats up on even the highly overclocked GTX 480. Let’s move on to Lost Planet 2 and DX11.
Lost Planet 2
Lost Planet 2 is the sequel to Lost Planet: Extreme Condition and is also made by Capcom. The events take place ten years after the first game and on the same, now thawed, EDN III. The PC version was released on October 12, 2010 and it runs on the MT-Framework 2.0 engine; an updated version of the engine used in several Capcom games. Campaign mode can have up to 4 players working together over the Internet. Lost Planet 2 allows players to create and customize their own characters which will allow them to unlock more things after leveling up and downloading content.
We are using the stand-alone benchmark in DX11 with maximum settings. We also have the full retail game with the identical benchmark, which we shall use next time. As the game is quite demanding, we first test at 2560×1600 resolution with no AA except as noted on the chart.
Just as in the original game, none of our cards can play Lost Planet 2 at 2560×1600 at the highest settings. However, we do see the GTX 580 passing its competition with the reference GTX 580 even passing the highly overclocked GTX 480. Let’s lower the resolution to 1920×1200, add 4xAA, and test again with all of our cards.
In this more tessellation-heavy DX11 game, the HD 6870 nearly catches the HD 5870 (finally)! That is quite an accomplishment; however, even the stock-clocked GTX 480 beats any of the Radeons by a significant margin. The GTX 580 is impressive, as the reference version even edges out the highly overclocked GTX 480.
Grand Theft Auto IV
Grand Theft Auto IV (GTA IV) is a sandbox-style action-adventure video game released by Rockstar in late 2008. It is the sixth game in the Grand Theft Auto series. Two episodic expansion packs have since been released, the latest as recently as April of this year. The game is set in a redesigned rendition of Liberty City, a fictional city based heavily on modern-day New York City. It follows Niko Bellic, a war veteran from Eastern Europe who comes to the United States in search of the American Dream and enters a world of organized crime, gangs and corruption. GTA IV mostly combines elements from driving games and third-person shooters and features free-roaming gameplay. It features an online multiplayer mode, the first in the GTA series to do so.
Here are the settings that we used. The GTX 480 and GTX 580, by virtue of having 1.5 GB of vRAM, can use even higher settings than the Radeons (which are pictured below running nearly out of resources).
First we test at 2560×1600 resolution.
Now we test at 1920×1200.
All of our GTXes are grouped tightly and they all give a similar, playable experience. AMD’s Radeons are significantly slower at the settings we have chosen, although their playability according to the demo results is OK.
Unreal Tournament 3 (UT3)
Unreal Tournament 3 (UT3) is the fourth game in the Unreal Tournament series. UT3 is a first-person shooter and online multiplayer video game by Epic Games. Unreal Tournament 3 provides a good balance between image quality and performance, rendering complex scenes well even on lower-end PCs. Of course, on high-end graphics cards you can really turn up the detail. UT3 is primarily an online multiplayer title offering several game modes and it also includes an offline single-player game with a campaign.
For our tests, we used the very latest game patch for Unreal Tournament 3. The game doesn’t have a built-in benchmarking tool, so we used FRAPS and did a fly-by of a chosen level. Here we note that the performance numbers reported are a bit higher than in-game. The map we use is called “Containment” and it is one of the most demanding of the fly-bys. Our tests were run at resolutions of 2560×1600 and 1920×1200 with UT3’s in-game graphics options set to their maximum values.
One drawback of the way the UT3 engine is designed is that there is no support for anti-aliasing built in. We forced 4xAA for 2560×1600 and 8xAA for 1920×1200 in each vendor’s control panel; 8xQ for Nvidia to match AMD Graphics’ 8xMSAA settings. We record a demo in the game and a set number of frames are saved in a file for playback. When playing back the demo, the game engine then renders the frames as quickly as possible, which is why you will often see it playing it back more quickly than you would actually play the game.
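Since UT3 has no built-in benchmark, the charted numbers come from FRAPS, which simply records a timestamp for every rendered frame during the fly-by. A minimal sketch of how such a frame log reduces to the average and minimum FPS figures we chart (the function name and the post-processing details are our illustrative assumptions, not FRAPS itself):

```python
# Reduce a list of per-frame timestamps (milliseconds, as a FRAPS-style
# frametimes log provides) to average and minimum FPS. Hypothetical
# helper for illustration; FRAPS only records the raw timestamps.

def fps_stats(timestamps_ms):
    """Return (average_fps, minimum_fps) from per-frame timestamps."""
    total_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    average = (len(timestamps_ms) - 1) / total_s
    # Minimum FPS: the slowest single frame, expressed as a rate.
    worst_frame_ms = max(b - a for a, b in zip(timestamps_ms, timestamps_ms[1:]))
    minimum = 1000.0 / worst_frame_ms
    return average, minimum

# Example: 4 frames spread over 50 ms -> ~60 fps average.
avg, low = fps_stats([0.0, 16.7, 33.3, 50.0])
```

This is also why a demo playback reports higher numbers than real play: the engine renders the recorded frames as fast as it can, shrinking every frame time in the log.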
Here is Containment Demo, first at 2560×1600 with 4xAA forced in each vendor’s control panel:
Now at 1920 x 1200 and with 8xAA forced:
There is absolutely no problem playing this game fully maxed out with any of our graphics configurations. The HD 5870 catches and passes even the highly overclocked GTX 480 at 1920×1200 although the GTX 580 puts in the best showing.
Resident Evil 5
Resident Evil 5 is a survival horror third-person shooter developed and published by Capcom that has become the best selling single title in the series. The game is the seventh installment in the Resident Evil series and it was released for Windows in September 2009. Resident Evil 5 revolves around two investigators pulled into a bio-terrorist threat in a fictional town in Africa.
Resident Evil 5 features online co-op play over the internet and also takes advantage of NVIDIA’s new GeForce 3D Vision technology. The PC version comes with exclusive content the consoles do not have. The developers’ emphasis is on optimizing for high frame rates, but they have implemented HDR, tone mapping, depth of field and motion blur in the game. RE5’s custom game engine, ‘MT Framework’, already supports DX10 to benefit from lower memory usage and faster loading. Resident Evil 5 gives you the choice of DX10 or DX9, and we naturally ran the DX10 pathway.
There are two benchmarks built into Resident Evil 5. We chose the variable benchmark as it is best suited for testing video cards. Here it is at 2560×1600 resolution with maxed-out in-game settings plus 8xAA:
Here are the results at 1920×1200 resolution:
Although the HD 5870 comes close to the stock GTX 480 at the highest resolution, the GTX 580 simply powers past all of its competition including shaming even the highly overclocked GTX 480. However, all of our video cards turn in respectable performances and their overall playability is similar. This would be another good candidate for 3D Vision technology with good framerates on the GTX 580.
S.T.A.L.K.E.R.: Call of Pripyat
S.T.A.L.K.E.R.: Call of Pripyat became a new DX11 benchmark for us after GSC Game World released another story expansion to the original Shadow of Chernobyl. It is the third game in the S.T.A.L.K.E.R. series. All of these games have non-linear storylines which feature role-playing game elements. In each game, the player assumes the identity of a S.T.A.L.K.E.R., an illegal artifact scavenger in “The Zone”, which encompasses about 30 square kilometers. It is the location of an alternate-reality story surrounding the Chernobyl Power Plant after another (fictitious) explosion.
S.T.A.L.K.E.R.: Call of Pripyat features “a living, breathing world” with highly developed NPC and creature AI. Call of Pripyat utilizes the X-Ray 1.6 engine, allowing advanced modern graphical features through DirectX 11 to be fully integrated; it is also compatible with DirectX 8, 9, 10 and 10.1. One outstanding feature is the inclusion of real-time GPU tessellation. The Shader Model 3.0/4.0 graphics engine features HDR, parallax and normal mapping, soft shadows, motion blur, weather effects and day-to-night cycles. As with other engines using deferred shading, the original DX9c X-Ray engine does not support anti-aliasing with dynamic lighting enabled, although the DX10 and DX11 versions do.
We are using the stand-alone “official” benchmark from Call of Pripyat’s creators. Call of Pripyat is top-notch and worthy to be part of the S.T.A.L.K.E.R. universe, with even more awesome DX11 effects that help to enhance the game’s already incredible atmosphere. As with Clear Sky before it, DX10 – and now DX11 – comes with steep hardware requirements, and this new game still really needs multi-GPU to run at its maximum settings. We picked the most stressful of the four tests, “Sun shafts”. It brings the heaviest penalty due to its extreme use of shaders to create DX10/DX10.1 and DX11 effects. We ran this benchmark fully maxed out in DX11 with “ultra” settings plus 4xAA, including edge-detect MSAA, which chokes performance even further.
Here we present our maxed-out DX11 settings for the S.T.A.L.K.E.R.: Call of Pripyat DX11 benchmark with 4xAA at 1920×1200:
Now we move on to 1680×1050 with 4xAA:
The GTX 580 makes a clean sweep of these benches although we would still lower settings at 1920×1200 to have a completely smooth playing experience. We see the stock GTX 580 equal the highly overclocked GTX 480’s performance – all the while remaining quieter and cooler than the older video card.
Tom Clancy’s H.A.W.X.
Tom Clancy’s H.A.W.X. is an air combat video game developed by Ubisoft Romania and published by Ubisoft. It was released in the United States on March 6, 2009. You have the opportunity to fly 54 aircraft over real-world locations and cities in somewhat realistic environments created from satellite data. This game is more of an arcade take on flying than a true simulation, and it has received mixed reviews. The story takes place during the time of Tom Clancy’s Ghost Recon Advanced Warfighter. H.A.W.X. is set in the year 2014, when private military companies have replaced the government-run military in many countries. The player is placed in the cockpit as an elite ex-military pilot recruited by one of these corporations to work as a mercenary. You later return to the US Air Force with a team as you try to prevent a full-scale terrorist attack on the United States, started by your former employer.
H.A.W.X. runs faster, and with more detail, on the DX10.1 pathway than on DX10. All of our video cards can take advantage of DX10.1. Let’s check out H.A.W.X. with our top cards at 2560×1600 with fully maxed-out in-game settings and 8xAA:
The GTX 580 jets away from the Radeons and clearly beats the reference and even highly overclocked GTX 480. Here are our results at 1920×1200 resolution:
Although all of our four cards give a similar playing experience in this game with maxed out settings and 8xAA, the new GTX 580 is clearly the top gun.
Tom Clancy’s H.A.W.X. 2
(–the full retail DX11 game)
Tom Clancy’s H.A.W.X. 2 is an as-yet unreleased air combat video game soon to be published by Ubisoft for the PC. It will be released later this week, but the game’s details remain under NDA for now. We are using the built-in benchmark from the full retail game. The way tessellation is implemented here makes AMD graphics cards look perhaps unnaturally slow compared to other DX11 titles. AMD is still working on a driver-based solution, due in time for the final release of the game, that they claim improves performance without sacrificing image quality. When we get that new driver, we shall update our results here and also in a short H.A.W.X. 2 performance analysis.
H.A.W.X. 2 runs on DX11 faster and with more detail than on the DX10 pathway. Here the emphasis is on terrain tessellation which looks outstanding in DX11 and “flat” in DX10. Let’s check out H.A.W.X. 2 with our video cards at 2560×1600 with fully maxed out in-game settings and with 8xAA:
And now we test at 1920×1200 resolution:
We see the GTX 580 at stock clocks even beating the highly overclocked GTX 480. However, even with unoptimized drivers, the two Radeons can play this game maxed out. We also see that the HD 6870, although generally slower than the HD 5870, is much faster in this tessellation-heavy game.
BattleForge
BattleForge is an online PC game developed by EA Phenomic and published by Electronic Arts. The full game was released in March 2009. BattleForge is a card-based RTS that revolves around acquiring new cards, partly by means of micro-transactions. By May 2009, BattleForge had become a Play 4 Free game with fewer cards than the retail version. BattleForge supports DirectX 11 with full support for hardware tessellation. It is very impressive visually and quite demanding on any system.
First we test with our cards at 2560×1600 using the BattleForge built-in benchmark with all of its settings completely maxed out and with 2xAA:
Now we test at 1920×1200 and with 4xAA.
The overclocked GTX 580 is again the fastest in BattleForge, although the stock-clocked GTX 580 is surpassed by the highly overclocked Galaxy GTX 480 SOC; the reference GTX 480 follows, and then the HD 5870 and the HD 6870. We also note the performance hit that the GTX 480 and the GTX 580 take when enabling higher levels of anti-aliasing.
Unigine Heaven 2.1
Finally we come to our last benchmark, Heaven 2.1, on the Unigine engine. It uses DX11 and heavy tessellation which will strain any graphics card. At least two DX11 games based on Unigine are expected to be released this year.
We use the setting for “extreme tessellation” and high shaders and we also set AF to 16x. We will tell you right now that this test chokes the GTX 580 at the highest settings and resolution so we do not run it at 2560×1600. Here is Heaven 2.1 benchmark with maxed settings, extreme tessellation and 2xAA at 1920×1200:
The extreme tessellation and 2xAA completely choke the Radeons. Here again we see that the Cypress architecture is bottlenecked by its tessellator, as the generally lower-performing HD 6870, with its tweaked tessellator, is faster than the HD 5870. We can hope that AMD’s soon-to-be-released Cayman architecture improves tessellation further over Barts and Cypress.
And now we test at 1680×1050:
This is a synthetic benchmark and we will withhold judgment until we play PC games using the Unigine engine. However, the GTX 580 again scores highest.
Overclocking
When it first launched, it was wrongly assumed that the GTX 480’s high thermals and high TDP would limit overclocking. However, the maximum temperatures achieved with “worst case” FurMark were in the upper 90s C, with thermal throttling coming into play at 105C. Clearly the GF100 GPU was built for high thermals and gave us good overclocks: the reference GTX 480 clocked +125 MHz on its core, from 700 MHz to 825 MHz, and the Galaxy GTX 480 SOC – thanks to its outrageous Arctic Cooling VGA cooler – clocked to 850 MHz at stock voltage. Now we have a much cooler-running GF110 GPU and we want to overclock it further from its 772/2004 MHz clocks.
This editor has his own set of criteria for overclocking the GTX 580 and the Galaxy GTX 480 SOC further.
- Voltage is kept stock. Not many people are willing to boost voltage on a five-hundred dollar video card.
- It must be 100% stable and continue to scale with each increase of any clock.
- The fan must stay on automatic. No one wants a fan at 80%.
Enter the Galaxy Xtreme Tuner overclocking tool and notice the overclocked settings that we achieved with GTX 480:
Here are our GTX 480 clocks: Reference >> Galaxy’s OC >> new OC:
- Graphics Clock – 700 MHz >> 760MHz >> 850 MHz
- Processor Clock – 1401 MHz >> 1520MHz >> 1700 MHz
- Memory Clock – 3696 MHz >> 3800MHz >> 4004 MHz
Now we use the same Galaxy Xtreme Tuner to set our maximum GTX 580 overclocks:
Here are our GTX 580 clocks: Reference >> the new OC:
- Graphics Clock – 772 MHz >> 850 MHz
- Processor Clock – 1544 MHz >> 1700 MHz
- Memory Clock – 4004 MHz >> 4004 MHz
An overclock of +150 MHz over the reference GTX 480’s core – from 700 MHz to 760 MHz with Galaxy’s overclock, then to 850 MHz – brought us some solid performance increases, earning it the title of “World’s fastest GTX 480” in our last review. By pure coincidence, we reached the same maximum overclock with our GTX 580, so you can compare both cards very closely, clock for clock, which shows the GTX 580’s solid performance advantage from its architectural improvements and newly enabled parts.
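Because both cards top out at 850 MHz, the relative size of each overclock differs, which is worth keeping in mind when judging scaling. A quick check of the clock arithmetic, using only the MHz figures from our charts:

```python
# Percentage core-clock gains from the chart figures above.
# The function is just arithmetic shown for clarity.

def pct_gain(stock_mhz, oc_mhz):
    """Percentage increase from stock to overclocked core speed."""
    return 100.0 * (oc_mhz - stock_mhz) / stock_mhz

gtx480_core = pct_gain(700, 850)   # reference GTX 480 core -> our max OC
gtx580_core = pct_gain(772, 850)   # reference GTX 580 core -> our max OC
# The GTX 480 needed roughly a 21% core overclock to reach 850 MHz,
# while the GTX 580 gets there with only about a 10% bump.
```

So at identical 850/1700 MHz clocks, any remaining performance gap between the two cards is attributable to GF110's architectural changes, not clock speed.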
Power Usage
Power usage is important to many people, as a very hot-running GPU is not only not “green”, it throws warm air into your room that your air conditioner must work extra hard to compensate for. Of course, for those of us like this editor who live in a cooler climate, a small space-heater in one’s PC is a plus. We have seen that the GTX 480’s 250W TDP specification is far higher than the HD 5870’s 188W TDP – and both the GTX 580 and the GTX 480 require 6-pin plus 8-pin PCIe connectors, as shown below on the reference card and on Galaxy’s GTX 480 SOC version; the Radeon HD 5870, in contrast, requires only two 6-pin PCIe connectors. You will also note that the reference Diamond HD 5870 is physically longer than the reference GeForce GTX 480, and some cutting modification had to be made to the Cooler Master Gladiator 600 to accommodate it. In fact, the Galaxy GTX 480 SOC is even longer than our HD 5870, and we had to remove the HDD cage in our oversized mid-tower Thermaltake Element G to accommodate it. In contrast, the faster GTX 580 is only 10.5 inches long and fits into most mid-towers easily, just as the reference GTX 480 does.
The reference GTX 480’s performance does come at a power cost. Compare the total system power draw at the wall, first with the HD 5870 – at idle and then at maximum GPU usage running FurMark – in the exact same system back in April, and then with the GTX 480 in the same PC in place of the HD 5870, again at idle and with the GPU maxed out in FurMark. The second image is of our overclocked GTX 480: we pulled well over 250W from the wall, and today we saw 550W with the overclocked Galaxy GTX 480 SOC – almost 50 watts more than the reference GTX 480 in the same PC. FurMark stresses a GPU’s stability and produces maximum thermals that one would never see in-game; consider FurMark’s torture tests “worst case” scenarios for power and heat. Here is a screenshot of FurMark running at 2560×1600 with the reference GTX 480, which is very hot at 97C. Now compare the temperatures of the overclocked Galaxy GTX 480 SOC, which peaks at only 70C running FurMark – but first note that the overclocked Galaxy GTX 480 SOC pulls about fifty more watts than the reference GTX 480, requiring that you have a very stable PSU:
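Note that wall readings are AC draw for the whole system; the card itself sees less, because the PSU wastes some power as heat. A minimal sketch of the conversion, assuming a PSU that is roughly 80% efficient at these loads (the efficiency figure is our assumption, not a measurement – our meter only reads AC watts):

```python
# Estimate the extra DC load a card adds from the extra AC (wall) draw.
# PSU_EFFICIENCY is an assumed typical figure, not measured.

PSU_EFFICIENCY = 0.80

def dc_delta_watts(wall_delta_watts, efficiency=PSU_EFFICIENCY):
    """Extra power actually delivered to components, given extra wall draw."""
    return wall_delta_watts * efficiency

# The ~50 W extra we saw at the wall with the overclocked Galaxy
# GTX 480 SOC corresponds to roughly 40 W more at the card itself.
extra_dc = dc_delta_watts(50)
```

This is why a 550W wall reading does not mean the GPU alone is pulling anywhere near that figure.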
Now let’s compare the reference GTX 480 at 97 C with the Galaxy GTX SOC running the same Furmark hot-as-hell test at 70C, or 27C cooler with the Galaxy’s quiet Arctic Cooler:
The reference GTX 480 definitely runs toasty at 97C as “worst case”, but the reference cooling solution appears up to the task – with plenty of noise. In cold contrast, the Galaxy GTX 480 SOC has tamed the thermals quietly, hitting only 70C in FurMark, and it never seems to leave the 50s C during actual gaming!! That is 97C for a reference GTX 480 vs. 70C for the Galaxy GTX 480 SOC – all the while, the Galaxy’s overclocked GPU is putting out 50 more watts!
This time we are unable to provide a solid apples-to-apples FurMark comparison with the GTX 580, because Nvidia has added a power draw limitation system to this new card. Three sensors measure the inrush current and voltage on all 12V lines (PCI-E slot, 6-pin, 8-pin) to calculate power. As soon as the power draw exceeds a predefined limit, the card automatically clocks down, much as it does when a safe temperature is exceeded. As with temperatures, the limiter restores clocks as soon as the overcurrent has ended. We are uncertain how this new safety feature will affect extreme overclockers.
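The limiter's behavior as described can be sketched in a few lines. Everything here – the threshold, the throttled clock, the function names – is our illustrative assumption about the logic, not Nvidia's actual firmware:

```python
# Sketch of the GTX 580 power limiter as we understand it: sum V*I
# across the three 12 V inputs, throttle when over a predefined limit,
# restore clocks when back under. All numbers are assumed for illustration.

POWER_LIMIT_W = 300.0  # assumed threshold, not Nvidia's real value

def total_power(rails):
    """rails: list of (volts, amps) tuples, one per 12 V input."""
    return sum(v * i for v, i in rails)

def next_clock(rails, stock_mhz=772, throttled_mhz=400):
    if total_power(rails) > POWER_LIMIT_W:
        return throttled_mhz   # over the limit: clock down
    return stock_mhz           # overcurrent ended: restore full clocks

# Example: slot + 6-pin + 8-pin under a heavy FurMark-style load.
clock = next_clock([(12.0, 9.0), (12.0, 9.0), (12.0, 12.0)])
```

The key point for overclockers is that the check is continuous: clocks bounce back the moment the measured draw falls under the limit, so games that never trip the threshold run at full speed.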
UPDATE: We have managed to work around the GTX 580 power draw limiter by using an old version of FurMark; in that case, it draws as much power as our overclocked GTX 480! A version of GPU-Z has also recently been released that gives the option to disable this limiter.
We will emphasize, however, that the GTX 580 runs much cooler than the reference GTX 480 and is far quieter. When the GTX 580 spins up under load, it is about as quiet as the HD 5870; in contrast, the reference GTX 480 was much like the much noisier, last-generation dual-GPU HD 4870-X2. We can also confirm that our GTX 480 is noticeably louder than the GTX 580 at every fan speed. In actual games, the GTX 580 runs far cooler than the stock GTX 480 and – unlike with the GTX 480 – you can handle your new card right after you turn off your PC.
Price to Performance
It is pretty clear from our 24 games and two synthetic tests that the GTX 580 is a potent GPU to put up against AMD’s upcoming Cayman video cards – the HD 6950 and HD 6970, the forthcoming replacements for and upgrades over the current HD 5850 and HD 5870. The GTX 580 has the clear distinction of being the fastest single-GPU video card, and its suggested retail price of $499 sets it just above the average price of most overclocked GTX 480s. The GTX 480 has been steadily gaining on the HD 5870 in performance with newer GeForce drivers, and the GTX 580 is simply in a higher class than the current top Radeon, the HD 5870. The only AMD card that we do not have, the $600 HD 5970, is a dual-GPU video card that is able to match and/or beat the GTX 580 in many games where CrossFire scales well; it is also coming down in price.
Considering its awesome, cool and quiet cooler and its much better performance, it is a no-brainer to go for Nvidia’s new GTX 580 if you were considering buying a GTX 480. We also expect that some of its success will depend on market pricing and on what AMD does with HD 5970 pricing, which appears to be dropping as AMD responds to GF110. But if you want the fastest single GPU with an awesome, cool and quiet VGA cooler, the new GTX 580 lets you have your cake and eat it too.
Clearly AMD is confident in its own mature products, the HD 5870 and HD 5970, and is apparently going to counter Nvidia with discounted pricing; it is also clear that AMD is leaving its partners to use their own judgment. Will this strategy work? How will Nvidia respond? There is a good chance that AMD’s new high-end HD 6000 series, including both Cayman and Antilles, will be introduced later this year. Until then – and right now – it is an excellent time to upgrade, as there is price competition again, something we have not seen in a long time. The GTX 580 is priced right at Nvidia’s own suggested retail price for the reference and overclocked GTX 480s.
This Morning’s Interview with Stanley Ossias, Director of Mobile Discrete Graphics Product Management at AMD
We had an opportunity to interview Stanley Ossias, Director of Mobile Discrete Graphics Product Management at AMD, right about the time that the GTX 580 NDA ended. Basically, his message was that AMD expected this improved version of Fermi and is well prepared to counter it – both now and with the further release of their upcoming HD 6000 series. They feel that AMD has the right product at the right price point and that competition is good. We will publish more of the actual interview with Mr. Ossias in a separate article on ABT. Stay tuned as this graphics war gets more and more exciting – ABT is reporting right from the battlefront.
Conclusion
This has been quite an enjoyable – if physically exhausting – week of hands-on experience, comparing our brand-new, under-NDA GTX 580 against our two GTX 480s, our HD 5870 and our HD 6870, and we look forward to evaluating further new products from AMD and Nvidia. We did all-fresh testing with the very latest drivers for all of these video cards, and we wish we had been allowed more than 7 days to benchmark the GTX 580 for these first impressions. Fortunately, we have been gaming for months with both the reference GTX 480 and our HD 5870, so we can provide you with a reliable comparison. It was certainly worth it, and we feel privileged to bring you our very first benchmarks and performance testing of Nvidia’s amazingly improved GTX 580.
We like the new GTX 580 quite a lot, and we plan to follow up with this editor’s special areas of interest – multi-GPU scaling and multi-display Eyefinity versus (2D) Surround. We were amazed by the superb scaling of SLI in newer games when we tested GTX 460 SLI, so expect an ongoing series. We also expect GF108 from Nvidia to take on the HD 5500/5400 series. Soon we will cover AMD’s continued launch of their HD 6000 series Cayman GPU, which they expect to take on GF110.
In the meantime, feel free to comment below, ask questions, or start a detailed discussion in our ABT forum. If you have any requests for what you would like us to focus on in further testing, or for any other information, please join our ABT forum or leave a comment.
GTX 580
Pros and Cons:
Pros:
- The GTX 580 is confirmed as the world’s fastest DX11 GPU and it currently is the fastest single-GPU video card (period!)
- The GTX 580 is much faster than its competition, the HD 5870, and is solidly faster than the GTX 480; it often wins even against our highly overclocked GTX 480.
- There is further room for overclocking, and the overclocked GTX 580 is solidly faster than the overclocked GTX 480 at identical clock speeds.
- New architecture brings support for GPU computing and a level of performance way beyond the last generation.
- DX11 and great support for tessellation, PhysX and CUDA, 3D gaming, and 2D/3D Surround (with SLI) bring realism to gaming
- Nvidia’s new vapor chamber cooler is great for achieving and keeping your OC by keeping your GPU cool. It is one awesome cooler that tames GTX 580’s thermals very quietly, bringing GTX 580 performance at or near HD 5870 volume levels.
- Power draw and thermals have improved; current limiters may be a mixed blessing – great for protecting the system (but may limit extreme overclocking).
- If you are considering SLI (for performance, 3D, or Surround), 2 x GTX 580 is a very potent performance solution, and you may also consider Tri-SLI; in either case, you are unlikely to need a further dedicated PhysX card. Two of these cards are designed to be placed close together and still exhaust air and stay cool.
Cons:
- Price and uncertainty about AMD’s Cayman. The market will decide.
That’s it. For about the same price as – or slightly more than – a reference or overclocked GTX 480, you get all the features that Nvidia video cards have to offer in a very solidly-built, cool and quiet-running GTX 580 with overclocking headroom to spare! Add to this all the benefits of a flagship card, and we feel that Nvidia has a real winner in the GTX 580 – we are pleased to award it our ABT Editor’s Choice award!
We do not know what the future will bring, but this amazing card brings great value to the Fermi family of GTX “tanks” in Nvidia’s lineup. Look for it at etailers this week. This editor believes that Nvidia, although late, brings a very remarkable, full-featured DX11 GPU lineup to market that will find good acceptance among customers and fans alike. The Fermi architecture is impressive and flexible, and it does translate to gaming performance – although with a bit of a price premium. We have also seen Nvidia’s drivers improve, and their multi-GPU SLI scaling in newer games is very impressive. We also like the direction they are heading with the simplified installation of the GeForce 260 series drivers.
If you currently game on an HD 4870, 8800 GTX, 8800 GTS, or 9800 GT class of card – on up to an HD 4870-X2, GTX 280, GTX 285, or even GTX 295 – you will do yourself a big favor by upgrading. The move to a GTX 580 will give you better visuals on the DX11 pathway, and you are no doubt thinking of GTX 580 SLI if you want even higher performance or want to use Surround’s three-panel display (which we are going to explore in a future article versus Eyefinity).
If the many exclusive features of the new GTX 580 appeal to you and you are gaming at 1920×1080 or above, you cannot go wrong with a GTX 580. In this editor’s experience, it is also a great choice if you are considering overclocking further. The competition is hot, as prices on both the HD 5970 and the HD 5870 have softened, and those cards offer their own set of features, including a cheaper way to experience three-panel multi-display with Eyefinity. And AMD is also bringing out their Cayman-based highest-performing single-GPU video cards shortly. Stay tuned – there is a lot coming from us at ABT.
Mark Poppin
ABT Senior Editor
Please join us in our Forums
Amazing review guys, I wonder if I can evolve my GTX 480 to 580 using EVGA’s RMA process. I need maor powah!!!
I think this is the best GTX 580 review I’ve seen on the net! And I’ve read almost all of them. Thanks for testing so many game titles and especially DX9 titles. I still run XP, so it’s important for me to see how the card did in them. Most reviews only have one or two DX9 games tested, some don’t have any. After reading this review I made up my mind about getting this card. Thanks!
Oh, and I forgot to ask: please use same titles when reviewing the upcoming HD 6970. Thanks!
I had 3 580 GTX’s in my system using a Core i7 980x, and it crashed during minesweeper.
How did it go with the step up program?
Thanks for the kind words. And I generally use the same titles and same settings for the high end cards.
I will be very busy for the next few days.
A very comprehensive review.
It seems that Nvidia has upped its game. I’ve always preferred single GPU solutions over SLI or Crossfire, as have many others, although right now this is out of my budget. It exceeded expectations – cooler and faster. I wonder what AMD’s response will be like.