GTX 480 (825/1100 MHz) vs. HD 5870 (975/1300 MHz), Overclocked Performance Analysis, Part 2
NVIDIA released its long-awaited GeForce GTX 480, based on its brand-new Fermi DX11 GF100 architecture, just over two weeks ago. We brought you Part One of our GTX 480 series, “NVIDIA’s GTX 480 Performance Testing,” here. We have now had two weeks of hands-on experience testing the GTX 480 against the HD 5870, and we have learned quite a bit more that we would like to share with you.
In Part One, we learned that the GTX 480 does indeed overclock. We were able to take it from its stock clocks of 700/924 MHz to a stable 825/1100 MHz! So now, for Part Two, we will compare the performance of our overclocked GTX 480 (825/1100 MHz) to our overclocked HD 5870 (975/1300 MHz), and we have added one more benchmark. In Part Three, we will look at the relative performance hit of 8xMSAA over 4xMSAA on each card, and we shall continue to add games as we progress through this GTX 480 vs. HD 5870 series, exploring the features of each video card along the way.
In Part One, we compared the relative performance of five GPUs – GTX 480, GTX 280, HD 5870, HD 4870-X2 and HD 4870. The GTX 480 won most of the benchmarks, just ahead of the HD 5870, followed by the HD 4870-X2. The HD 4870 and GTX 280 were left in the DX10 dust as the DX11 cards showed their advantages over even a dual-GPU flagship of the last generation. This time, we will compare only the HD 5870 and the GTX 480, and we shall overclock each as far as we can. We do want to note that we did not overclock the GTX 480’s shader clocks for this review, as there is no equivalent adjustment on the competing card; you can expect even further performance improvement by doing so. In a later part of this series, after we get WHQL drivers for the GTX 480, we shall also overclock the shaders to further explore this new GF100 GeForce architecture in gaming.
We are also going to revisit the “Power Usage” section, as befits an overclocking review. We have heard back from NVIDIA about the apparent discrepancy between the published TDP specification of 250 W “Maximum Board Power” and the near-300 W power draw that we observed with our own GTX 480, which has been confirmed by other reviewers. NVIDIA has finally clarified how they determine the published “Maximum Board Power” (TDP) specifications for their graphics cards!
AMD has already launched its entire 5000-series DX11 lineup from top to bottom – from the $600 dual-GPU HD 5970 down to the passively cooled $60 HD 5450. To compete, NVIDIA has launched its new DX11 GeForce lineup with its GTX 480 flagship and second-fastest card, the GTX 470. The GTX 480 comes with an MSRP of $499 and the GTX 470 retails for $349. We still need to answer the question: is it worth the roughly $100 premium over the $400 that one would currently spend for AMD’s top single-GPU video card, the HD 5870?
That question is important because we expect that NVIDIA will shortly launch its own entire DX11 lineup based on the GF100 “Fermi” architecture. We expect to see the GTX 460 and 450 launch within a few months, and we need to see what this new GF100 Fermi architecture brings over the GT200b series besides DX11 and a smaller process. Overclocking our GTX 480 may give us some idea of future GF100 core scalability as NVIDIA refines its process, perhaps bringing out an “Ultra” version of the GTX 480 with all 512 CUDA cores enabled and perhaps a higher clock than what we achieved with our own 480-core GTX 480 for this review.
To properly bring you this review, we purchased a Diamond HD 5870 from NewEgg and put it through its paces with the very latest performance drivers, Catalyst 10-3a. AMD is quite proud of this driver set, as it brings solid performance increases over Catalyst 10-2 and even over the latest WHQL drivers, Catalyst 10-3, which were released two weeks ago. The results would be more in NVIDIA’s favor if we had used the older Catalyst 10-2 driver set or even the current WHQL Catalyst 10-3 drivers. Also, remember that AMD has had a long time to mature their drivers, while the GTX 480’s GeForce 197.17 drivers that we are still using are beta “release drivers”; they should leave some room for improvement by NVIDIA’s GeForce driver team in the months to come.
Today you will see us pit our Diamond reference-design HD 5870, now overclocked from 850/1200 MHz to 975/1300 MHz, against the new GTX 480, overclocked from 700/924 MHz to 825/1100 MHz. We continue to benchmark with 14 modern games and 3 synthetic benchmarks at resolutions ranging from 1680×1050 to 1920×1200 to 2560×1600, with details fully maxed and with 4xAA/16xAF.
Is GTX 480 worth nearly $100 more than its rival, AMD’s HD 5870?
We were not able to fully answer that question in Part One of our GTX 480 performance evaluation, even though we declared the GTX 480 the performance winner. This review continues as a series, and we believe that we have a much clearer picture now. Part One analyzed and compared stock GTX 480 performance to stock HD 5870 performance, and the winner was the GTX 480, but not overwhelmingly so. Later on, we will also look at the individual features of each video card to see what else the new NVIDIA GPU brings to the table and whether it is worth the nearly $100 premium over its AMD counterpart. We learned that AMD is going to respond immediately to NVIDIA’s GTX 480/470 Fermi GF100 launch by allowing their partners to overclock the current HD 5870.
Here is AMD’s not-so-secret weapon; notice the free downloadable copy of Call of Duty: Modern Warfare 2 bundled with the PowerColor HD 5870 PCS+ as an incentive. The PowerColor HD 5870 PCS+ is a mildly overclocked version of our reference HD 5870, at 875/1225 MHz – up 25 MHz on each of the core and memory clocks. Worthy of note, especially for smaller cases, the PowerColor HD 5870 PCS+ PCB is shorter and wider than reference, and its cooling solution is both quieter and more effective than the reference cooler.
Widespread e-tail availability of both GeForce GTX 480 and GTX 470 will happen this coming week of April 12, 2010. So you still have a little time to decide what to do and this review is designed to help you with an important potential upgrade. NVIDIA says that they are building tens of thousands of units for initial availability, and this will ensure that their partners have ample volume for what is certainly one of the most anticipated GPU launches ever.
We already pointed out in Part One that it is practical to upgrade from the HD 4870/GTX 280 class – which includes the GTX 260 and 275 and, by extension, the GTX 285. We also discovered that it is logical to upgrade to DX11 from an HD 4870-X2 or HD 4870 CrossFire, or, by extension, GTX 260 or 275 SLI, or even a GTX 295, which is a bit more powerful than our HD 4870-X2. Since we do not want any chance of our CPU “bottlenecking” our graphics, we continue testing both graphics cards with our Intel Core i7 920 at 3.80 GHz (3.97 GHz effectively with the 21x multiplier in turbo mode), 6 GB of Kingston DDR3 and a Gigabyte X58 full 16x + 16x PCIe CrossFire/SLI motherboard.
Later on we plan to also test our AMD DX11 video cards on AMD’s Dragon platform. We also acquired a brand new ECS black label A890GXM-A CrossFire motherboard which is a nice performance upgrade from our current Gigabyte 790X motherboard and we shall post that review early this week.
Overclocking and Temperatures
Each of our competing video cards was overclocked as far as it could go without raising the core voltage past safe limits or reaching temperatures that would throttle the respective GPUs. We got 100% stability from each GPU at the noise expense of running their cooling fans near 90%.
We used MSI’s Afterburner to overclock our reference Diamond HD 5870 to 975/1300 MHz – a solid overclock of +125 MHz on the core and +100 MHz on the vRAM. Correspondingly, we overclocked the GTX 480 with EVGA’s tool from 700/924 MHz to 825/1100 MHz – another solid overclock, coincidentally also +125 MHz on the core; on the GTX 480’s vRAM, however, we managed +176 MHz. We did not attempt to adjust the GTX 480’s shader clock, which would give a further performance increase. Here is the EVGA overclocking tool showing stock clocks; note the fan speed.
Now we see our maximum overclock; note the fan speed is now set at 90% as we finally settled on 1650/2200 MHz as displayed by the tool (the 1650 MHz shader clock and 2200 MHz effective memory clock correspond to 825/1100 MHz) with stock voltage. We had to do the same thing with the HD 5870’s fan when we overclocked it to 975/1300 MHz and upped its core voltage to 1.35 V. The fan noise of either video card is intolerable, and we suggest water cooling or a better aftermarket air cooler if you are going to push your GPU to a near-extreme overclock.
Temperatures remained well within each card’s specifications and no throttling was noted. Both fans were set at 90% to ensure thermal stability, and most of the testing proceeded in a warm room with ambient temperatures of 76-80 F. Each card was stable at its respective overclock and no issues were noted with either one.
Both cards are very noisy with their fans set at 90%; it would be intolerable under gaming conditions without headphones. However, we wanted to test a near-maximum overclock of each card. The Diamond HD 5870 actually overclocked further than our PowerColor HD 5870 PCS+ (1325/1300 MHz) because the PowerColor is a non-reference design with a shorter, wider PCB than the reference card, and it lacks voltage adjustments.
We eventually settled on 1.35 V for the Diamond HD 5870, the highest voltage allowed by the MSI Afterburner overclocking tool. Even so, we could not pass 975 MHz on the core, but that is still a good overclock over the stock core clock of 850 MHz. 1300 MHz seems to be the upper limit of the GDDR5 used in both of our HD 5870s.
No voltage adjustments were done on our GTX 480 as it already ran pretty warm with stock voltage. You could probably push it further with a better cooling solution than the reference fan but it still managed a nice overclock from 700 MHz on the core to 825 MHz. The GTX 480’s vRAM was able to overclock from 924 MHz to 1100 MHz; not bad!
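To put these overclocks in perspective, here is a quick back-of-the-envelope sketch (our own illustration, not any vendor tool) that computes each card's percentage gains and the effective memory data rate, assuming the usual convention that GDDR5's effective rate is four times the quoted memory clock:

```python
# Illustrative only: compute overclock gains and effective GDDR5 data rates.
# Clock figures are the ones quoted in this review; the 4x multiplier is the
# common convention for GDDR5 (quad data rate relative to the quoted clock).

def oc_gain(stock_mhz, oc_mhz):
    """Percentage increase from stock to overclocked."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

cards = {
    "HD 5870": {"core": (850, 975), "mem": (1200, 1300)},
    "GTX 480": {"core": (700, 825), "mem": (924, 1100)},
}

for name, clocks in cards.items():
    core_gain = oc_gain(*clocks["core"])
    mem_gain = oc_gain(*clocks["mem"])
    eff = clocks["mem"][1] * 4  # effective GDDR5 rate in MT/s
    print(f"{name}: core +{core_gain:.1f}%, mem +{mem_gain:.1f}% "
          f"({eff} MT/s effective)")
```

Interestingly, although both cards gained the same +125 MHz on the core, that is a larger percentage gain for the lower-clocked GTX 480, which may partly explain its better scaling in the benchmarks that follow.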
Make sure you check out the revised Power Usage section later on in this same review which revisits and explains how NVIDIA measures the GTX 470/480 TDP and perhaps explains the apparent discrepancy between the published specification of 250 W “maximum board power” and what we and other reviewers observed and measured.
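For readers following the TDP question, here is a rough sketch of how a card's draw can be estimated from a wall-socket meter reading; the PSU efficiency and system baseline figures below are assumptions for illustration only, not measurements from our testing:

```python
# Illustrative only: estimate a card's DC power draw from a wall-socket
# (Kill-A-Watt style) reading. The 85% PSU efficiency and the non-GPU
# system baseline below are assumed values for the sake of the example,
# not measurements from this review.

PSU_EFFICIENCY = 0.85     # assumed PSU efficiency at this load
SYSTEM_BASELINE_W = 180   # assumed draw of the rest of the system (DC side)

def estimated_card_watts(wall_watts):
    """DC power delivered to the whole system, minus the non-GPU baseline."""
    dc_total = wall_watts * PSU_EFFICIENCY
    return dc_total - SYSTEM_BASELINE_W

# A hypothetical ~560 W wall reading under load would imply roughly:
print(f"{estimated_card_watts(560):.0f} W at the card")
```

The point is that wall readings overstate the card's own draw by the PSU's conversion losses and the rest of the system, which is why measured numbers and a vendor's "maximum board power" specification can honestly disagree.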
Ok, let’s look at our test bed for both of these overclocked cards.
Test Configuration
Test Configuration – Hardware
- Intel Core i7 920 (reference 2.66 GHz, overclocked to 3.8 GHz); Turbo (21x multiplier for 3.97 GHz on a single core) is on.
- Gigabyte EX58-UD3R (Intel X58 chipset, latest BIOS, PCIe 2.0 specification; CrossFire/SLI 16x+16x).
- 6 GB Kingston DDR3 RAM (3×2 GB, tri-channel at PC 16000 speeds; 2×2 GB supplied by Kingston)
- NVIDIA GTX 480, reference design (supplied by NVIDIA under NDA)
- ATI Radeon HD 5870 (2GB, reference clocks) by Diamond
- Onboard Realtek Audio
- Two identical 250 GB Seagate Barracuda 7200.10 hard drives configured and set up identically from drive image; one partition for NVIDIA GeForce drivers and one for ATI Catalyst drivers
- SilentPro 600 M power supply unit supplied by Cooler Master
- Cooler Master Gladiator 600 Case supplied by Cooler Master
- Noctua UD CPU cooler, supplied by Noctua
- Five Case fans by Cooler Master and Noctua
- Philips DVD SATA writer
- HP LP3065 2560×1600 thirty inch LCD
Test Configuration – Software
- ATi Catalyst 10-3a; highest quality mip-mapping set in the driver, Catalyst AI set to “Standard”
- NVIDIA GeForce 197.13 WHQL for GTX 280 and 197.17 beta release drivers for GTX 480; High Quality
- Windows 7 64-bit; very latest updates
- DirectX February 2010
- All games are patched to their latest versions.
- vsync is off in the control panel and is never set in-game.
- 4xAA enabled in all games and “forced” in Catalyst Control Center for UT3; all in-game settings at “maximum” or “ultra” with 16xAF always applied; 16xAF forced in the control panel for Crysis.
- All results show average, minimum and maximum frame rates except as noted.
- Highest quality sound (stereo) used in all games.
- Under Windows 7 64-bit, all DX10 titles were run under their DX10 render paths and DX11 titles under their DX11 render paths, except for the Dirt 2 demo, which ran in DX9c.
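Since every chart in this review reports minimum, average, and maximum frame rates, here is a minimal sketch of how such figures are typically derived from a log of per-frame render times; the sample frame times are invented for illustration:

```python
# Illustrative only: derive min/avg/max FPS from per-frame render times (ms),
# the way a FRAPS-style frametime log would be post-processed.
# The sample frame times below are invented for the example.

frame_times_ms = [16.7, 18.2, 14.9, 33.4, 16.1, 15.8, 17.0]

def fps_stats(times_ms):
    """Return (min, avg, max) FPS for a list of frame times in milliseconds."""
    fps = [1000.0 / t for t in times_ms]
    avg = len(times_ms) / (sum(times_ms) / 1000.0)  # true average over the run
    return min(fps), avg, max(fps)

lo, avg, hi = fps_stats(frame_times_ms)
print(f"min {lo:.1f} / avg {avg:.1f} / max {hi:.1f} FPS")
```

Note that a single long frame (the 33.4 ms spike above) drags the minimum down sharply while barely moving the average, which is why we report minimums alongside averages throughout this series.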
The Benchmarks
•Vantage
•Call of Juarez
•Crysis
•S.T.A.L.K.E.R., Call of Pripyat
•Far Cry 2
•World in Conflict
•X3:Terran Conflict
•Dirt 2
•Left4Dead
•Lost Planet
•Unreal Tournament 3
•Resident Evil 5
•ARMA2
•H.A.W.X.
•Battleforge
•Heaven 1.0 (Unigine)
•Heaven 2.0 (Unigine)
We have added Heaven 2.0 as a third synthetic benchmark because it uses extreme shaders and may well portend the future of games using tessellation, DX11’s most noticeable feature over DX10. In fact, the Unigine engine will be the basis for at least two new DX11 games this year, one of which is Primal Carnage – a dinosaur game. For Part Three of this series, we will also add Just Cause 2 as a game benchmark and perhaps revisit some of our older games as we explore the performance hit of 8xMSAA over 4x with both of our competing video cards. Let’s move on to Vantage, then to 14 games and the Unigine benchmarks, as we explore how the GTX 480 and HD 5870 scale when overclocked.
Vantage
Vantage is Futuremark’s latest benchmark. It is really useful for tracking changes within a single system – especially driver changes. There are two mini-game tests, Jane Nash and New Calico, plus two CPU tests, but we are focusing on graphics performance. Here is a scene from Vantage’s second mini-game.
Let’s go right to the graphs and first check the basic tests with the default benchmark scores:
We see an interesting lineup. However, the overall score is fairly meaningless for comparing one video card’s performance to another. In contrast, the mini-games may show a bit more, as they actually bench frame rates.
Here we see the GTX 480 ranked below the HD 5870 at stock speeds, but it moves ahead when both are overclocked to their individual maximums. Let’s move on to PC games and real-world situations to see whether this is a trend.
Call of Juarez
Call of Juarez is one of the very earliest DX10 games. It is loosely based on Spaghetti Westerns that became popular in the early 1970s. Call of Juarez features its Chrome Engine using Shader Model 4 with DirectX 10.
Our benchmark isn’t built into Call of Juarez, but is an official stand-alone that is identical to the one that is built-into the game. It runs a simple flyby of a level that is created to showcase its DX10 effects. It offers good repeatability and it is a good stress test for DX10 features in graphics cards, although it is not quite the same as actual gameplay because the game logic and AI are stripped out of this demo.
Running the Call of Juarez benchmark is easy. You are presented with a simple menu to choose resolution, anti-aliasing, and shadow quality options. We set the shadow quality to “high” and the shadow map resolution to the maximum, 2048×2048. At the end of the run, the demo presents you with the minimum, maximum, and average frame rates, along with the option to quit or run the benchmark again. We always ran the benchmark at least a second time and recorded that generally higher score.
Here are Call of Juarez DX10 benchmark results at 1920×1200:
This time the overclocked GTX 480 takes a solid lead, and we notice a bigger performance gap over the HD 5870 even when both are overclocked. Now on to 1680×1050 resolution:
Our top cards are almost wasted on Call of Juarez at 1680×1050 resolution. This is so different from when Call of Juarez was first released as it was almost a slide show on the fastest cards of just three and a half years ago. We will next be looking to add 8xAA at our testing resolutions to see the relative performance hit to each card over 4xAA. At any rate, the overclocked GTX 480 is the clear winner.
Crysis
Next we move on to Crysis, a science-fiction first-person shooter by Crytek. It remains one of the most demanding games for any PC and it is also still one of the most beautiful games released to date. Crysis is set in a fictional near future where an alien spacecraft is discovered buried on an island near the coast of Korea. The single-player campaign has you assume the role of U.S. Delta Force operator ‘Nomad’, who is armed with futuristic weapons and equipment.
Crysis uses DirectX 10 for graphics rendering. A standalone but related game, Crysis Warhead, was released in 2008. CryEngine 2 is the game engine that powers Crysis and Warhead; it is an extended version of the CryEngine that powers Far Cry. As well as supporting Shader Model 2.0, 3.0, and DirectX 10’s 4.0, CryEngine 2 is multi-threaded to take advantage of dual-core SMP-aware systems, and Crytek has developed their own proprietary physics system, called CryPhysics. Note, however, that actually playing the game is a bit slower than the demo implies.
GPU Demo, Island
All of our settings are set to maximum “very high” including 4xAA and we force 16xAF in the control panels. Here is Crysis’ Island Demo benchmark, first at 2560×1600 resolution:
Crysis at 2560×1600 still requires multi-GPU to play smoothly. At stock clocks, the HD 5870 and GTX 480 are pretty much neck-and-neck overall, although the Radeon stumbles slightly at times; perhaps the larger framebuffer of the GTX makes a difference. Neither of our overclocked video cards played this game particularly well at maximum resolution, not even without AA/AF, but the overclocked GTX 480 pulls slightly ahead of the overclocked HD 5870 – a change from the stock-clock ranking. The GTX 480 appears to scale a bit better as it is overclocked.
Let’s move on to 1920×1200:
This time the performance is closer. Both of our top cards are now playable with Crysis at 1920×1200 if you are willing to compromise with AA/AF or lower a couple of detail settings. This time the overclocked GTX 480 catches the overclocked HD 5870 and edges it slightly performance-wise in the averages; a subtle change in ranking from their stock clocks.
The Radeon HD 5870 is slightly faster than the GeForce GTX 480 when both are stock clocked in Crysis. When they are both overclocked, the GTX is now a bit faster in the minimums and averages. We might also point out that the new GTX 480 is still running on beta release 197.17 drivers while the AMD cards have mature drivers in Catalyst 10-3a. We will revisit this benchmark every month in our Catalyst and GeForce driver performance analysis and report any changes that we may find.
Far Cry 2
Far Cry 2 uses the name of the original Far Cry, but it is not connected to the first game; it brings you a new setting and a new story. Ubisoft built it on their Dunia engine. The game takes place in an unnamed African country during an uprising between two rival warring factions. Your mission is to kill “The Jackal”, the Nietzsche-quoting mercenary who arms both sides of the conflict you are dropped into.
The Far Cry 2 game world is loaded in the background and on the fly to create a completely seamless open world. The Dunia game engine provides good visuals that scale well. The Far Cry 2 design team actually went to Africa to give added realism to this game. One thing to especially note is Far Cry 2’s very realistic fire propagation by their engine that is a far cry from the scripted fire and explosions that we are used to seeing.
First let’s check out 2560×1600:
Here the GTX 480 takes an even more commanding lead over the HD 5870 when both are overclocked. Clearly the GTX 480 runs away from the HD 5870 at our highest resolution. Now we test the Far Cry 2 benchmark at 1920×1200 – all of the resolutions we test are with AI enabled.
We note that only at the maximum frame rate does the Radeon almost equal the GTX 480’s stock performance. Finally we test at 1680×1050 resolution:
Here we see a clean sweep by GTX 480 in Far Cry 2 – and when it is overclocked, the performance difference is magnified.
World in Conflict
World in Conflict is set in an alternate-history Earth where the Cold War did not end; Russia invaded the USA in 1989 and the remaining Americans decided to strike back. World in Conflict (WiC) is a real-time tactical/strategy video game developed by Massive Entertainment. Although it is generally considered a real-time strategy (RTS) game, World in Conflict includes gameplay typical of real-time tactical (RTT) games. WiC is filled with real vehicles from both the Russian and American militaries. There are also tactical aids, including calling in massive bombing raids, access to chemical warfare, nuclear weapons, and far more.
Here is yet another amazing and very customizable and detailed DX10 benchmark that is available in-game or as a stand-alone. The particle effects and explosions in World in Conflict are truly spectacular! Every setting is fully maxed out.
We start our benching at 2560×1600:
Again the GTX 480 adds to its lead when it is overclocked. World in Conflict is very playable at 2560×1600 on our GTX 480 even to very acceptable minimums under the game’s most demanding situations. Next we see the results at 1920×1200 resolution:
The GTX 480 adds to its lead over HD 5870 when both are overclocked. Now at 1680×1050 resolution:
The GTX 480 delivers excellent performance all the way up to and including 2560×1600. You probably want the GTX 480 if you play a lot of World in Conflict at higher resolutions with details and filtering fully maxed out.
X3: Terran Conflict
X3: Terran Conflict (X3:TC) is another beautiful stand-alone benchmark that runs multiple tests and will really strain a lot of video cards. X3:TC is a space trading and combat simulator from Egosoft and the most recent of their X series of computer games. It is a standalone expansion of X3: Reunion, based in the same universe and on the same engine. It complements the story of the previous games in the X-Universe and continues the events after the end of X3: Reunion.
Compared to Reunion, Terran Conflict features a larger universe, more ships, and of course, new missions. The X-Universe is huge. The Terran faction was added with their own set of technology including powerful ships and stations. Many new weapons systems were developed for the expansion and it has generally received good reviews. It has a rather steep learning curve.
Nothing seems to help the minimums at 2560×1600. We see minimal performance improvement from both video cards when they are overclocked and their relative ranking remains unchanged. Next we note the results at 1920×1200:
We see much the same results where the overclocked HD 5870 has the highest minimum but is edged out by the overclocked GTX 480 in average and maximum frame rates. Now at 1680×1050:
Again we see a fairly tight grouping. However, both video cards perform well and both experience similar minimum frame rates. Overall, the overclocked GTX 480 is slightly faster than the overclocked HD 5870, although you will not really notice any difference actually playing this game.
DiRT 2 Demo – (DX9c)
Colin McRae: DiRT 2 is a racing game that was released in September 2009 and is the sequel to Colin McRae: DiRT. It includes many new race events, including stadium events, as your RV travels from one event to another through many real-world environments across four continents. Dirt 2 includes five different event types, allows you to compete at new locations, and adds a new multiplayer mode. It is powered by an updated version of the EGO engine that was featured in Race Driver: GRID, which also features an updated physics engine.
We have been using the Dirt 2 demo to benchmark up until now, as it works just as well as the retail game – until you try to run DX11 on an NVIDIA DX11 card, in which case it reverts to DX9c. Evidently the developer did not provide support for NVIDIA’s new DX11 card in the demo, although the retail game has no such issue. Since we ran all of our tests with the Dirt 2 demo in Part One, it was too late to switch to the full game for Part Two. We instead edited the configuration file so that the HD 5870 also ran on the DX9 pathway, giving us a solid apples-to-apples performance comparison across the cards. Later on, in further testing, we will use the full retail game for the DX11 pathway, as the visuals are better.
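Forcing a render path this way is just a one-attribute tweak to the game's settings file. As a hedged illustration only – the element and attribute names below are hypothetical stand-ins, not Dirt 2's actual configuration format:

```python
# Illustrative only: flip a renderer flag in a game's XML settings file.
# The element and attribute names here are hypothetical stand-ins; Dirt 2's
# real configuration file differs.
import xml.etree.ElementTree as ET

def force_dx9(path):
    """Set a (hypothetical) forcedx9 flag in the graphics settings."""
    tree = ET.parse(path)
    gfx = tree.getroot().find("graphics")  # hypothetical element name
    gfx.set("forcedx9", "true")            # hypothetical attribute name
    tree.write(path)
```

The point is simply that with the flag set, both cards render the identical DX9c pathway, which is what makes the comparison apples-to-apples.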
First we test our two video cards at 2560×1600:
The GTX 480 pulls ahead of the HD 5870 in a not so tight race when they are stock clocked and the GTX improves its lead when it is overclocked. What about 1920×1200?
Again the GTX 480 leads stock clocked and pulls further ahead of the HD 5870 when they are both overclocked.
Dirt 2 gets the checkered flag on the GTX 480 on the DX9c pathway, especially as it is pushed to its red line. We look forward to bringing you DX11 results in subsequent testing.
Left 4 Dead
Left 4 Dead (L4D) is a 2008 co-op first-person shooter that was developed by Turtle Rock Studios, which was purchased by Valve Corporation during the game’s development. Left 4 Dead uses Valve’s proprietary Source engine, and it replaces our older Source benchmark, which used Half-Life 2’s Lost Coast demo.
Left 4 Dead is set in the aftermath of a worldwide pandemic which pits its four protagonists against hordes of the infected zombies. There are four game modes: a single-player mode in which your allies are controlled by AI; a four-player, co-op campaign mode; an eight-player online versus mode; and a four-player survival mode. In all modes, an artificial intelligence (AI), dubbed the “Director”, controls pacing and spawns, to create a more dynamic experience with increased replay value. It is best as a multiplayer game with humans.
There is no built in benchmark, so we used ABT Senior Editor BFG10K’s custom time demo which is very repeatable. The game is updated regularly by Steam and we chose the highest detail settings and 4xAA. First we test at 2560×1600 resolution. Please note that these charts do not begin at zero; we are emphasizing differences in a tight grouping:
The situation reverses from stock clocks: the overclocked GTX 480 now pulls ahead of the overclocked HD 5870. On to our next chart at 1920×1200:
We note the same thing. Let’s move on to 1680×1050 resolution:
At 2560×1600, the stock-clocked GTX 480 stumbles in comparison to the HD 5870 but moves ahead when it is overclocked. The overclocked GTX 480 takes the lead in Left 4 Dead. However, we note that for playing Source engine games, an HD 4870 or GTX 260+ is usually plenty, and we will be looking to set MSAA to at least 8x for both of our DX11 cards, which we shall explore in Part Three.
Lost Planet
Lost Planet: Extreme Condition is a Capcom port of an Xbox 360 game. It takes place on the icy planet of E.D.N. III, which is filled with monsters, pirates, big guns, and huge bosses. This frozen world highlights high-dynamic-range lighting (HDR) as the snow-white environment reflects blinding sunlight, while DX10 particle systems toss snow and ice all around. The game looks great in both DirectX 9 and 10; there isn’t much difference between the two versions except perhaps shadows. Unfortunately, the DX10 version doesn’t look much better when you’re actually playing, and it still runs slower than the DX9 version.
We use the in-game performance test from the retail copy of Lost Planet and updated through Steam to the latest version for our runs. This run isn’t completely scripted as the creatures act a little differently each time you run it, requiring multiple runs. Lost Planet’s Snow and Cave demos are run continuously by the performance test and blend into each other.
Here are our benchmark results with the more demanding benchmark, Snow. All settings are fully maxed out in-game, including 4xAA/16xAF. Let’s start with 2560×1600. In Part One, there was a typo in the chart where the stock GTX 480’s maximum frame rate was wrongly presented as 85 FPS; this chart corrects that error – it is 57 FPS.
Now at 1920×1200 resolution:
Finally at 1680×1050:
Both of our top cards are tightly grouped. In the averages, the GTX 480 edges the HD 5870. Performance is close, but the GTX 480 at 825/1100 MHz wins the duel in Lost Planet.
Unreal Tournament 3 (UT3)
Unreal Tournament 3 (UT3) is the fourth game in the Unreal Tournament series. UT3 is a first-person shooter and online multiplayer video game by Epic Games. Unreal Tournament 3 provides a good balance between image quality and performance, rendering complex scenes well even on lower-end PCs. Of course, on high-end graphics cards you can really turn up the detail. UT3 is primarily an online multiplayer title offering several game modes and it also includes an offline single-player game with a campaign.
For our tests, we used the very latest 1.5 game patch for Unreal Tournament 3, released after its ‘Titan’ pack. The game doesn’t have a built-in benchmarking tool, so we used FRAPS and did a fly-by of a chosen level. Note that the performance numbers reported are a bit higher than in-game performance. The map we use is called “Containment” and it is one of the most demanding of the fly-bys. Our tests were run at 2560×1600, 1920×1200 and 1680×1050 with UT3’s in-game graphics options set to their maximum values.
One drawback of the way the UT3 engine is designed is that there is no support for anti-aliasing built in so we forced 4xAA in each vendor’s control panel. We record a demo in the game and a set number of frames are saved in a file for playback. When playing back the demo, the game engine then renders the frames as quickly as possible, which is why you will often see it playing it back more quickly than you would actually play the game. Here is Containment Demo, first at 2560×1600:
Now at 1920×1200:
Finally at 1680×1050:
There is absolutely no problem playing this game fully maxed out even with our older graphics cards such as the HD 4870 or GTX 280, never mind the GTX 480 and HD 5870, which provide overkill frame rates. Still, the GTX 480 wins in the Unreal Tournament 3 arena.
Resident Evil 5
Resident Evil 5 is a survival horror third-person shooter developed and published by Capcom that has become the best selling single title in the series. The game is the seventh installment in the Resident Evil series and it was released for Windows in September 2009. Resident Evil 5 revolves around two investigators pulled into a bio-terrorist threat in a fictional town in Africa.
Resident Evil 5 features online co-op play over the internet and also takes advantage of NVIDIA’s new GeForce 3D Vision technology. The PC version comes with exclusive content the consoles do not have. The developer’s emphasis is on high frame rates, but they have implemented HDR, tone mapping, depth of field and motion blur in the game. RE5’s custom game engine, ‘MT Framework’, already supports DX10 to benefit from lower memory usage and faster loading. Resident Evil 5 gives you the choice of DX10 or DX9, and we naturally ran the DX10 pathway.
There are two benchmarks built-into Resident Evil 5. We chose the fixed benchmark. Here it is at 2560×1600:
The overclocked HD 5870 catches up to the stock GTX 480, but overclocking the NVIDIA video card further widens the performance gap at this high resolution. However, neither card has any issues playing this game fully maxed out. Here are the results at 1920×1200 resolution:
Let’s check out 1680×1050:
The overclocked GTX 480 is able to distinguish itself, although the stock cards also turn in good performance in Resident Evil 5.
S.T.A.L.K.E.R., Call of Pripyat
S.T.A.L.K.E.R., Call of Pripyat became a brand-new DX11 benchmark for us after GSC Game World released another story expansion to the original Shadow of Chernobyl. It is the third game in the S.T.A.L.K.E.R. series. All of these games have non-linear storylines featuring role-playing elements. The player assumes the identity of a S.T.A.L.K.E.R., an illegal artifact scavenger in “The Zone”, which encompasses about 30 square kilometers. It is the setting of an alternate-reality story surrounding the Chernobyl Power Plant after another (fictitious) explosion.
S.T.A.L.K.E.R., Call of Pripyat features “a living breathing world” with highly developed NPC creature AI. Call of Pripyat utilizes the X-Ray 1.6 engine, allowing advanced modern graphical features to be fully integrated through the use of DirectX 11; it is also compatible with DirectX 8, 9, 10 and 10.1. One outstanding feature is the inclusion of real-time GPU tessellation in a Shader Model 3.0 & 4.0 graphics engine featuring HDR, parallax and normal mapping, soft shadows, motion blur, weather effects and day-to-night cycles.
As with other engines using deferred shading, the original DX9c X-Ray engine does not support anti-aliasing with dynamic lighting enabled, although the DX10 and DX11 versions do. We are using the stand-alone “official” benchmark from the game’s creators. Call of Pripyat is top-notch and worthy of the S.T.A.L.K.E.R. universe, with even more impressive DX11 effects that enhance the game’s already incredible atmosphere. As with Clear Sky before it, DX10 and now DX11 come with steep hardware requirements, and this new game still really needs multi-GPU to run at its maximum settings.
We picked the most stressful test out of the four, “Sun shafts”. It brings the heaviest penalty due to its extreme use of shaders to create DX10/DX10.1 and DX11 effects. We ran this benchmark fully maxed out in DX11.0 with “ultra” settings plus 4xAA, including applying edge-detect MSAA which chokes performance even further.
We present our maximum DX11 settings for S.T.A.L.K.E.R., Call of Pripyat DX11 benchmark at 2560×1600:
Now on to the benchmarks at 2560×1600:
Next at 1920×1200:
Let’s check out 1680×1050:
The overclocked HD 5870 gains some nice ground on the stock GTX 480 but the performance gap widens when the GTX is overclocked and it makes a clean sweep of these benches.
ARMA 2
ARMA 2 is the third installment in Bohemia Interactive’s series of realistic modern military simulation games. It features a player-driven story with more than 70 weapons and over 100 different vehicles. With a game world of 225 square km built from actual surveillance photos, you can expect truly massive online battles with five distinct armed groups to choose from.
ARMA 2 can be considered a tactical shooter where the player commands a squad of AI – or many squads – with elements of real-time tactics. The ARMA 2 demo was released in late June 2009 and, coming in at 2.6 GB, allows you to experience the same gameplay that is featured in the full version of ARMA 2 – including multiplayer – as well as a few of the vehicles, weapons and units. The demo also contains part of the Chernarus terrain, a small section of the full game world set in the fictional “Black Russia”.
In previous testing, there was always a massive performance hit on any DX10/10.1 card with maximum details enabled at the resolutions that we test; AA is set to “high”. Let’s see how our overclocked video cards do with ARMA 2 at 2560×1600. Again, the charts do not start at zero as the grouping is very tight.
Notice there is not much difference overclocked; the cards are within 1-2 FPS from their stock to maximum overclock:
We see the same thing: a small performance increase when each card is overclocked, and the GTX 480 is 2 frames per second faster than the HD 5870.
Again, the GTX 480 just edges out the HD 5870 in ARMA 2, beating it by 1 frame per second both overclocked and at stock.
Tom Clancy’s H.A.W.X.
Tom Clancy’s H.A.W.X. is an air combat video game developed by Ubisoft Romania and published by Ubisoft for Microsoft Windows, Xbox 360 and PlayStation 3. It was released in the United States on March 6, 2009. You have the opportunity to fly 54 aircraft over real-world locations and cities in somewhat realistic environments created with satellite data. This game is more of a take on flying than a real simulation, and it has received mixed reviews.
The game story takes place during the time of Tom Clancy’s Ghost Recon Advanced Warfighter. H.A.W.X. is set in the year 2014 where private military companies have replaced government-run military in many countries. The player is placed into the cockpit as an elite ex-military pilot who is recruited by one of these corporations to work for them as a mercenary. You later return to the US Air Force with a team as you try to prevent a full scale terrorist attack on the United States which was started by your former employer.
Let’s check out H.A.W.X. at 2560×1600:
Although the overclocked HD 5870 nearly catches the stock GTX 480, the overclocked GTX 480 flies away from it, picking up solid framerates over its own stock clocks. Here are our results at 1920×1200 resolution:
Again, the GTX 480 speeds by the HD 5870, which in turn gained nicely from its own overclock to beat the stock GTX 480’s performance. Let’s see what testing at 1680×1050 shows.
H.A.W.X. is clearly fastest on the overclocked GTX 480. Let’s move on to a DX11 online game, BattleForge.
BattleForge
BattleForge is an online PC game developed by EA Phenomic and published by Electronic Arts. The full game and a demo were released in March 2009. BattleForge is a card-based RTS that revolves around acquiring cards, with micro-transactions for buying new ones. By May 2009, BattleForge became a Play 4 Free game with fewer cards than the retail version. BattleForge supports DirectX 11 with full support for hardware tessellation. It is very impressive visually and quite demanding on any system.
First we test with our three top cards at 2560×1600 using the BattleForge built-in benchmark with all of its settings completely maxed out and with 4xAA:
The stock GTX 480 is ahead of the overclocked HD 5870; overclocking the GTX 480 gives it a further solid performance boost.
Again, the overclocked GTX 480 clearly leads, followed by the same card at stock clocks. Now at 1680×1050 resolution:
The overclocked GTX 480 is fastest in BattleForge, followed by the stock-clocked GTX 480, even though overclocking our HD 5870 gained it some framerates in every case.
Heaven benchmark, Unigine Engine
Now we test the synthetic Heaven 1.0 benchmark based on the Unigine engine. It uses DX11 and fairly heavy tessellation which will strain any graphics card. Here are the settings we used for this benchmark (we checked ‘full-screen’).
Here is our benchmark run at 2560×1600. In Part One we mistakenly posted our DX10 results as DX11, where the HD 5870 and GTX 480 were 0.1 FPS apart. This chart is correct:
Now at 1920×1200:
And finally at 1680×1050:
It won’t be until later this year that we will see our first DX11 games based on the Unigine Engine. For what it is worth, GTX 480 excels in this benchmark.
Let’s include Heaven 2.0 alongside Heaven 1.0 as they both use the Unigine engine; version 2.0 simply applies much more extreme tessellation, with nicer visuals as a result.
Heaven 2.0
There will be at least two DX11 games based on Unigine released later this year. And there is a brand new and even more stressful Heaven 2.0 benchmark that was just released this week, which we explore right now.
As you can see, there is a setting for “extreme tessellation”. We will tell you right now that this test chokes the GTX 480 at the highest settings, but it is still better than the slide show the Radeon HD 5870 manages. However, the visuals are also extraordinary. Here are the results at 2560×1600:
Of course, this is a synthetic test based on a game engine that has yet to appear in a retail PC game. But it is worth noting the tessellation capabilities of the GTX 480 and its scaling with clock speed.
Performance Summary Chart
Power Usage – NVIDIA’s TDP
This is important for many people, as a very hot-running GPU is not only not “green”, it throws warm air into your room that your air conditioner must work extra hard to compensate for. Of course, for those of us like this editor who live where it is more often cool than warm, a small space heater in one’s PC is a plus. The GTX 480’s TDP specification is 250 W – far more than the HD 5870’s 188 W TDP – and the GTX 480 requires 6-pin plus 8-pin PCIe connectors as shown below, while the Radeon requires only 6-pin plus 6-pin connectors. You will also note that the HD 5870 is physically longer than the GeForce, and some cutting modification had to be made to the Cooler Master Gladiator 600 to accommodate it. The GTX 480’s performance does come at a power cost; compare the total system power draw at the wall with the HD 5870 first – at idle and then at maximum GPU usage running FurMark.
Now the total system power draw from the wall with the same PC, but with the GTX 480 inside instead of the HD 5870. First, we see the idle state and then with the GTX GPU maxed out running FurMark.
Of course, the second image is of our overclocked GTX 480. We see that we would be pulling over 250W from the wall! This also brings up overclocking which we shall cover shortly.
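For readers who want to translate these wall readings into a rough estimate of the card’s own draw, here is a minimal sketch of the arithmetic. The PSU efficiency figure and the wattages used here are illustrative assumptions, not our measured values:

```python
# Rough estimate of a graphics card's DC power draw from AC wall readings.
# All figures below are illustrative assumptions, not our measured numbers.

PSU_EFFICIENCY = 0.85  # assumed PSU efficiency at this load


def card_power_estimate(wall_load_w, wall_idle_w, card_idle_w=50.0):
    """Estimate the card's DC draw under load from total system wall power.

    wall_load_w: AC draw with the GPU maxed out (e.g., FurMark)
    wall_idle_w: AC draw of the same system sitting at the desktop
    card_idle_w: assumed DC idle draw of the card itself
    """
    # The delta at the wall is (mostly) the card's extra draw, inflated by
    # PSU conversion losses; convert back to DC and add the idle draw.
    delta_dc = (wall_load_w - wall_idle_w) * PSU_EFFICIENCY
    return delta_dc + card_idle_w


print(card_power_estimate(480.0, 190.0))  # → 296.5
```

This is only a ballpark method – PSU efficiency varies with load, and the CPU also works harder during a GPU stress test – but it explains how a 250 W-rated card can plausibly push a wall-measured delta toward 300 W.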
FurMark will stress a GPU’s stability and give the maximum thermals that one would never see in-game. You can consider FurMark’s torture tests, “worst case” scenarios for power and heat. Here is a screen shot of FurMark running at 2560×1600:
Here is GPU-Z right next to FurMark results:
It definitely runs toasty at 97 C as a “worst case”, but the reference cooling solution appears up to the task. At no time while we were overclocking our GTX 480 to 825/1100 MHz and running it in games did we see temperatures as high as what FurMark achieved at stock clocks. We might also note that our case temperatures remained below 100 F even though ambient temperatures rose to 80 F! We attribute this to the six 120 mm to 140 mm case fans cooling our Gladiator 600 mid-tower case.
Why the apparent discrepancy between published specs and the tests?
A 250 W TDP “Maximum Board Power” is given in NVIDIA’s specifications, but we see from our own total-system measurement that the GTX 480 actually draws more than that, and this has also been confirmed by other reviewers.
We asked NVIDIA to explain, and the reply was that TDP is a measure of maximum power draw over time in real-world applications. It does not represent the maximum power draw in extreme cases such as FurMark, nor the momentary peaks in games; instead, the draw is averaged over time. Readers are advised to check this out:
http://en.wikipedia.org/wiki/Thermal_design_power
The thermal design power (TDP), sometimes called thermal design point, represents the maximum amount of power the cooling system in a computer is required to dissipate. . . . The TDP is typically not the most power the chip could ever draw, such as by a power virus, but rather the maximum power that it would draw when running real applications. . . . Since safety margins and the definition of what constitutes a real application vary between manufacturers, TDP values between different manufacturers cannot be accurately compared.
Evidently NVIDIA doesn’t release its peak board power specifications publicly. Clearly it is possible to go well over 250 W with tests like FurMark. But now we know that NVIDIA’s “maximum board power = 250 W” is measured “over time in real-world gaming” – it is an average measurement, tested across many games.
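To make that definition concrete, a toy calculation shows how an average over sampled draws can sit right at a 250 W spec while individual samples spike toward 300 W. The per-sample wattages below are invented purely for illustration:

```python
# Toy illustration of "maximum board power measured over time":
# the average of sampled draws can sit at the spec while peaks exceed it.
samples_w = [210, 235, 250, 298, 260, 240, 255, 252]  # invented per-second samples

average_w = sum(samples_w) / len(samples_w)
peak_w = max(samples_w)

print(f"average: {average_w:.0f} W, peak: {peak_w} W")  # → average: 250 W, peak: 298 W
```

This is why a card can honestly carry a 250 W “maximum board power” rating while a power virus like FurMark, or a sensitive wall meter catching a momentary peak, records something closer to 300 W.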
Conclusion
This conclusion is a short one and it practically dictated itself. The HD 5870 overclocks solidly and, once overclocked to about 975/1300 MHz, it will in many cases catch the stock GTX 480 where performance is already reasonably close. However, the overclocked GTX 480 also turns into a performance monster and in many cases runs away from even the overclocked HD 5870. This bodes very well for the GF100 Fermi architecture and indicates that NVIDIA has a very solid, scalable new architecture to build on, although it is still on a maturing process.
We look forward to seeing what NVIDIA and its partners have in store with future variants of the GTX 480 and the rest of the Fermi GeForce family. We also look forward to AMD’s response beyond overclocking its current HD 5870. This has been quite an enjoyable two-week hands-on experience for us, comparing the GTX 480 versus the HD 5870 with each video card solidly overclocked.
We feel privileged to bring you our very first overclocked benchmarks and performance testing of the GTX 480 versus the HD 5870. We like it so much that we will make this a series until we have covered the subject in depth. Next up in this series is an exploration of the relative performance hit of 8xMSAA over 4xMSAA for the HD 5870 versus the GTX 480. We also expect to explore GTX 480 SLI versus HD 5870 CrossFire and NVIDIA’s claims of incredible scaling – to 90% or so – under Windows 7.
In the meantime, feel free to comment below, ask questions or have a detailed discussion in our ABT forum. If you have any requests on what you would like us to focus on for Part Three or for any other information, please join our ABT forum.
Mark Poppin
ABT Senior Editor
Please join us in our Forums
Become a Fan on Facebook
Follow us on Twitter
For the latest updates from ABT, please join our RSS News Feed
Join our Distributed Computing teams
- Folding@Home – Team AlienBabelTech – 164304
- SETI@Home – Team AlienBabelTech – 138705
- World Community Grid – Team AlienBabelTech
Thanks for all the tests! This just confirms what I was already leaning towards doing – waiting for the next round of refresh cards from Nvidia vs. AMD’s next gen cards.
OC’ing Fermi shows that with some tweaks and getting thermals to a lower level, this card has plenty of untapped potential.
Correction, this architecture has plenty of untapped potential.
What a monster than GTX480 is… Too bad I cannot afford it nor a new PSU to for it D:
The HD 5870 is a 6-month-old card. While it’s good to see nVidia back to being competitive (performance-wise), I wonder how long it will last before ATI releases a GTX-480 killer?
nVidia can keep its price point for now because of sales to benchmark enthusiasts and nVidia loyalists, but assuming ATI releases a new card soon (and prices reasonably), they’re going to have to make some significant price cuts.
But, as I said before, it’s nice to see them back in the game. I’ve always been an AMD & ATI fan (worked out real well for me when they merged), but it’s never fun to see one side or the other sitting on top for too long.
Why put these cards head to head?
The 480GTX price is much close to the 5970 then the 5870 so why no comparison with the 5970?
The 5870 is an old card now….
As its been said nvidia Failed hard on this new card
Why? Because the HD 5870 and GTX 480 are respectively the fastest single-GPU video cards from AMD and NVIDIA. The HD 5970 has 2 GPUs on a single video card and it is also over $100 more expensive than the GTX 480.
By asking us to benchmark the GTX 480 against the HD 5970, you are basically asking us to test 2 x HD 5870 in CrossFire against a single-GPU GTX 480 video card. It may be unfair, but no worries, we shall do this test soon – putting HD 5870 CF, which is a little faster than the HD 5970, against the GTX 480 at stock and also overclocked.
This is a *series* on GTX 480 performance and upcoming Part 3 shall use the new 197.41 WHQL drivers to test against the release Betas and we shall also compare the relative performance hit of 8xMSAA vs. 4xMSAA on HD 5870 vs. GTX 480. Expect this part of the review up next Monday.
For Part 3, I am also returning about 8 games to my benching suite and I will also add Just Cause 2.
As you may have noticed, I predicted that AMD would allow their partners to bring out their highly overclocked HD 5870s – also priced about $500 – to compete with the GTX 480. That is the GTX 480’s competition: the overclocked HD 5870, not the HD 5970. There will be no price war until NVIDIA brings out its “Ultra” and AMD responds with a “5890”.
James:
“Why put these cards head to head?
The 480GTX price is much close to the 5970 then the 5870 so why no comparison with the 5970?”
Where do you shop? At newegg, the cheapest 5870 is $409.99. All the stock GTX 480s are $499.99.
That is $90.00 difference.
The cheapest 5970 is $699.99; that is a $200 difference.
The GTX480 and HD5870 are MUCH closer in price, but the GTX480 leads in performance and feature set.
who cares how many gpus there are everybody put gtx 295 against 5870 and 7950gx2 against x1900 sure it was fair then but not now lol its about fastest graphic card right ? or may be you don’t want to upset nvidia ?
No. This series on GTX 480 performance has not been about the fastest video card. Other sites do that but we prefer to explore in far more depth through a *series* of reviews.
Ours is an ongoing performance analysis of GTX 480 vs. HD 5870. So far, it compares AMD’s single fastest GPU against NVIDIA’s single fastest GPU.
We also pointed out that the $500 GTX 480 is set against the overclocked versions of HD 5870 which are also about $500; not against a $700 dual-Radeon card.
Later on in this series, we shall expand it to include comparing GTX 480 and perhaps GTX 480 SLI versus HD 5870 CrossFire. That will give you an idea of how GTX 480 performance compares to that of two Radeons.
If you ignore power consumption, heat and noise. nVidia wins this round(providing you’re willing to pay a little more in your price range). STFU about comparing a dual-gpu card to a single gpu card, as the guy before me said, when the GX2 came out against the X1950XTX i assure you the nvidia loyalists didnt argue it was unfair. (despite X1950XTX’s in CF outperforming the quad SLI(two GX2’s)). Pick whichever card you wish, personally, with soaring energy costs, a large scale economic recession every penny helps, in the long run ATi’s solutions are cheaper and offer more than enough performance, but if you want an extra couple of fps (along with extra heat and noise and a larger energy bill) fork out the extra for the green option. The companies don’t care about you just your money, side with whoever has the best product for your needs.
PS: european prices on GPU’s arent the same as US ones, theyre way higher. Using a single website for a price comparison isnt really fair. even if it is newegg.
About gtx480, 5870 and 5970…
Here in Europe (precisely in Poland), those cards are priced around 474$ for 5870, 664$ for GTX480, and 804$ for the 5970 (yeah, who would have guessed we’re so rich, I can’t see it in anything else than the prices). That gives 190$ difference between 5870 and gtx480, and 139$ diff between nv gtx and 5970. When we look at the difference as percents, we get 40% and 21% differences respectively. When we add to that the fact that the gtx uses as much energy as the 5970, which means nvidia won’t be physically able to create a dual gpu card from their 480, so not comparing it to the 5970 is giving a handicap to nvidia actually.
The 5870 competitor is the gtx470, at least price wise, because again the ati card uses less energy, produces less heat and so on. If nvidia will try to take on the 5970, their only chance is to create some down clocked dual 470 card, which might not be able to do the trick, especially because stock 5970 has so much oc headroom.
Anyway, great to see competitive cards from nvidia… although they didn’t pull ati prices down, which proves that they’re not much of a threat.
Well, as I pointed out in Part 1 three weeks ago, NVIDIA is not going to affect ATI pricing for quite a while. It appears to be intentional that the GTX 480 and GTX 470 do not compete in the same price slots as the HD 5870 and HD 5850.
It appears that both companies benefit by not engaging in a price war with each other now. ATI has instead decided to offer their partner-overclocked HD 5870s at about the same price as GTX 480 (US $500) as competition to it.
HD 5970 is in a unique position until NVIDIA brings out their own dual-GPU card. And when we see regularly overclocked versions of GTX 480, then we will probably see an ATI refresh of 5870 – or their new architecture.
You guys OCed the GTX480 that much? Did your computer shoot across the room when the fan ramped up? How did you avoid burning the lab down in a horrid flash fire?
Worst thing about the Fermi is temps and power. It gets ludicrously close to the 105C throttle point when running under load. A hot summer day and some dust will be enough to piss off a lot of gamers.
Nope. Our GTX 480 has plenty of overclocking headroom and it never went near 100C in any game even with very warm ambient temperatures of 80 degrees F.
The GTX 480’s fan at 90% is just as annoying as our reference Diamond HD 5870’s fan at 90%. Both of these video cards are simply intolerable without headphones and neither reference card was designed to have their fans running continuously at that speed for 24/7/365. You need to make a proper fan profile but the spin-up and loudness of either card is annoying in 3D gaming.
For either a moderately or a highly overclocked GTX 480, I would definitely recommend a non-reference fan or else water-cooling. For reference clockspeeds or even for a mild overclock, you will be fine with a reference GTX 480.
Yep untapped potential, if you are smart you will wait for the improved version of fermi which I am sure nv is working on. You can OC that even more. Wait for GTX485 with improved thermals / power consumption / clockrates and possibly with up to 512 CUDA cores and higher overclockability.
Remember how fast GTX 280 became obsolete when 285 came out? same story here.
Thanks for this article! I haven’t run a video card at stock speeds for almost 10 years… That pretty much makes most reviews somewhat meaningless since the overclocking headroom of each card can (and usually does) dramatically change the price/performance landscape.
Since I usually aim for one-step below the top-of-the-line cards due to the large price premiums they carry, I would love to see the 470 and 5950 added to the mix. That would also paint a clearer picture of what the “real” difference in performance is between them and their bigger brothers…
I know this seems silly, but where oh where have the 2D tests gone?
I know that 2D is generally fast enough and just peachy. However, personally I get annoyed when a browser window filled with all kinds of junk does not scroll 100% smoothly.
Bitblt, text rendering etc should be tested as part of your standard repertoire.
I dont know what to say…I had the 5870 now got the 480 for testing purposes…..I gotta say your numbers look off….way off…favouring Nvidia…far cry 2 nvidia overclocked gets 20 more fps and ati only 3? with a bigger oc?????? come on guys who are we kidding here…..be real….
The 2 GB HD 5870 is currently a “specialty” card for Eyefinity-6. We’d certainly like to test it but we seriously doubt that the performance figures in any games would change by very much.
As to the response by prpz about our Diamond reference HD 5870 not scaling very well at the extreme limit of its core and vRAM; we agree. It did not scale particularly well. And it brings us to “why?”
We have just begun testing our “new core” PowerColor HD 5870 PCS+ against our reference “old core” Diamond HD 5870 at incremental overclocks to see if we can discover anything interesting. At any rate, expect a full review of the PowerColor HD 5870 PCS+ early next month.
The very next part of this series – Part 3, GTX 480 vs. HD 5870, 8xMSAA Performance Analysis – is going up this week.
GTX580 and FS2004? my FS 9 isn t running any more
Sorry, I don’t have FS series. I will be lucky to add a new game for benchmarking every month and I have to play it first to see how the game relates to the benchmark.
I am running 23 game benchmarks now and H.A.W.X. 2 and Batman Arkham Asylum GotY edition are my latest games. F1 2010 is confirmed as my 24th benchmark game.