NVIDIA’s GTX 480 Performance Testing
NVIDIA has finally released its long-awaited GeForce GTX 480, based on its brand-new DX11 Fermi GF100 architecture. This new GPU – Graphics Processing Unit, a term NVIDIA coined – continues the strategy NVIDIA has pursued since the G80 launched over three years ago: to create a general-purpose processor, co-equal with the CPU, that also renders amazing graphics. Here is the culmination of those efforts with the new DX11 Fermi architecture: the GTX 480, their flagship GPU.
NVIDIA knows it is six months behind its rival AMD in delivering DX11 video cards. NVIDIA's original intention was to launch the GTX 480 alongside Windows 7, but this 3-billion-transistor GPU proved very difficult to manufacture on TSMC's new 40 nm process, which had issues of its own. To improve yields, NVIDIA cut the GTX 480's shader count from 512 to 480 so as to guarantee enough chips to supply the heavy demand expected when it finally launched.
In fact, AMD has already launched its entire 5000-series DX11 lineup from top to bottom – from the $600 dual-GPU HD 5970 down to the passively cooled HD 5450 at $60. Today, NVIDIA launches its new DX11 GeForce lineup with the GTX 480 flagship and its second-fastest card, the GTX 470. The GTX 480 carries an MSRP of $499 and the GTX 470 will retail for $349. So we need to answer the question: is it worth the roughly $100 premium over the $400 one would currently spend on AMD's top single-GPU video card, the HD 5870?
To bring you this review properly, we purchased a Diamond HD 5870 from Newegg and put it through its paces this week with the very latest performance drivers, Catalyst 10-3a. AMD is quite proud of this driver set, as it brings solid performance increases over Catalyst 10-2 and even over the latest WHQL drivers, Catalyst 10-3, released this week. We suspect the results would have been much more in NVIDIA's favor had we used the older Catalyst 10-2 set. Also remember that AMD has had a long time to mature its drivers, while the GeForce 197.17 release drivers we are using for the GTX 480 are still beta and leave room for solid improvement by NVIDIA's driver team in the months to come.
So you will see us pit our Diamond reference-design HD 5870 against the new GTX 480 in 14 modern games and 2 synthetic benchmarks at resolutions from 1680×1050 to 2560×1600. We are also using our standard reference video card, the HD 4870-X2 – the fastest video card of AMD's last generation and still very competitive with the HD 5870 in many games.
Is the GTX 480 worth $500 – nearly $100 more than its rival, AMD's HD 5870?
This review will come in two parts. This first part analyzes and compares GTX 480 and HD 5870 performance, and hopefully we can announce a performance winner. We will also look at the details to see what the new NVIDIA GPU brings to the table and whether it is worth the nearly $100 premium over its AMD counterpart. We also believe we have a good handle on how AMD is going to respond to NVIDIA's GTX 480/470 Fermi launch, and we will share our analysis and insights with you. The second part will be much expanded, with more game benchmarks and with AMD's likely answer to NVIDIA's Fermi GTX launch.
Widespread e-tail availability of both the GeForce GTX 480 and GTX 470 will come the week of April 12, 2010, so you have a little time to decide what to do, and this review is designed to help you with an important potential upgrade. NVIDIA says it is building tens of thousands of units for initial availability, which should ensure that its partners have ample volume for what is certainly one of the most anticipated GPU launches ever.
We will also help answer whether it is practical to upgrade from the HD 4870/GTX 280 class – which includes the GTX 260 and 275 and, by extension, the GTX 285. We will also consider whether it is practical or useful to upgrade from an HD 4870-X2 or HD 4870 CrossFire, or, by extension, from GTX 260 or 275 SLI or even a GTX 295, which is a bit more powerful than our HD 4870-X2. Since we do not want any chance of our CPU "bottlenecking" our graphics, we are testing all of our graphics cards with our Intel Core i7 920 at 3.80 GHz (3.97 GHz effective with the 21x multiplier in Turbo mode), 6 GB of Kingston DDR3 and a Gigabyte X58 motherboard with full 16x + 16x PCIe CrossFire/SLI support.
Later on, we plan to also test our AMD DX11 video cards on AMD's Dragon platform. We have also acquired a brand-new ECS Black Label A890GXM-A CrossFire motherboard – a nice performance upgrade from our current Gigabyte 790X motherboard – and we shall post that review this week.
Before we do performance testing, let’s take a look at the GTX 480 and quickly recap its new DX11 architecture and features.
Architecture and Features
Architecture
We have covered the GF100 architecture in a lot of detail previously. You can read our articles here, as well as our three-part coverage of NVIDIA's GPU Technology Conference here, here and here.
Because of severe time constraints on this article, the new GTX 480/470 architecture will be examined in depth in Part Two of this series, which pits the GTX 480 against the HD 5870 with both cards overclocked.
Specifications
The GeForce GTX 480 was designed from the ground up to deliver exceptional tessellation performance, a key component of Microsoft's DirectX 11 development platform for PC games. Tessellation allows game developers to increase the geometric complexity of models and characters to deliver far more realistic and visually rich gaming environments.
Needless to say, the new GTX brings a lot of features to the table that current NVIDIA customers will appreciate, including CUDA with improved PhysX, 2D and 3D Surround to drive up to 3 LCDs with GTX SLI, superb tessellation capabilities and a GPU that is really fast compared to the GT200 series.
Test Configuration
Test Configuration – Hardware
- Intel Core i7 920 (reference 2.66 GHz, overclocked to 3.8 GHz); Turbo (21x multiplier for 3.97 GHz on a single core) is on.
- Gigabyte EX58-UD3R (Intel X58 chipset, latest BIOS, PCIe 2.0 specification; CrossFire/SLI 16x+16x).
- 6 GB OCZ/Kingston DDR3 PC 18000 RAM (3×2 GB, tri-channel at PC 16000 speeds; 2×2 GB supplied by Kingston)
- NVIDIA GTX 480, reference design (supplied by NVIDIA under NDA)
- NVIDIA GTX 280, reference design (by BFG)
- ATI Radeon HD 5870 (1 GB, reference clocks), by Diamond
- ATI Radeon HD 4870-X2 (2 GB, reference clocks 750/900), by VisionTek
- ATI Radeon HD 4870 (1 GB, reference clocks 750/900), by ASUS
- Onboard Realtek Audio
- Two identical 250 GB Seagate Barracuda 7200.10 hard drives configured and set up identically from drive image; one partition for NVIDIA GeForce drivers and one for ATI Catalyst drivers
- Cooler Master Silent Pro M600 power supply unit, supplied by Cooler Master
- Cooler Master Gladiator 600 Case supplied by Cooler Master
- Noctua UD CPU cooler, supplied by Noctua
- Five Case fans by Cooler Master and Noctua
- Philips DVD SATA writer
- HP LP3065 30-inch 2560×1600 LCD
Test Configuration – Software
- ATI Catalyst 10-3a; highest-quality mip-mapping set in the driver, Catalyst AI set to "Standard"
- NVIDIA GeForce 197.13 WHQL for the GTX 280 and 197.17 beta release drivers for the GTX 480; High Quality
- Windows 7 64-bit; very latest updates
- DirectX February 2010
- All games are patched to their latest versions.
- vsync is off in the control panel and is never set in-game.
- 4xAA enabled in all games and "forced" in Catalyst Control Center for UT3; all in-game settings at "maximum" or "ultra" with 16xAF always applied; 16xAF forced in the control panel for Crysis.
- All results show average, minimum and maximum frame rates except as noted.
- Highest quality sound (stereo) used in all games.
- Under Windows 7 64-bit, all DX10 titles were run under their DX10 render paths and DX11 titles under their DX11 render paths, except for the Dirt 2 demo, which ran in DX9c.
The Benchmarks
•Vantage
•Call of Juarez
•Crysis
•S.T.A.L.K.E.R.: Call of Pripyat
•Far Cry 2
•World in Conflict
•X3:Terran Conflict
•Dirt 2
•Left 4 Dead
•Lost Planet
•Unreal Tournament 3
•Resident Evil 5
•ARMA2
•H.A.W.X.
•BattleForge
•Heaven 1.0 (Unigine)
Vantage
Vantage is Futuremark's latest test. It is really useful for tracking changes in a single system – especially driver changes. There are two mini-game tests, Jane Nash and New Calico, and also two CPU tests, but we are focusing on the graphics performance. Here is a scene from Vantage's second mini-game.
Let’s go right to the graphs and first check the basic tests with the default benchmark scores:
We see an interesting lineup. Basically, the overall score is a meaningless number for comparing one video card's performance to another. However, the mini-games tell us a bit more, as they actually bench framerates.
Here we see the GTX 480 ranked below the HD 4870-X2 and the HD 5870. Let’s move on to PC games and to real world situations!
Call of Juarez
Call of Juarez is one of the very earliest DX10 games. It is loosely based on the Spaghetti Westerns that became popular in the early 1970s. Call of Juarez features the Chrome Engine, using Shader Model 4 with DirectX 10. Our benchmark isn't built into Call of Juarez but is an official stand-alone identical to the one built into the game. It runs a simple flyby of a level created to showcase the game's DX10 effects. It offers good repeatability and is a good stress test of DX10 features in graphics cards, although it is not quite the same as actual gameplay, since the game logic and AI are stripped out of this demo.
Running the Call of Juarez benchmark is easy. You are presented with a simple menu to choose resolution, anti-aliasing and shadow quality options. We set shadow quality to "high" and the shadow map resolution to the maximum, 2048×2048. At the end of the run, the demo presents the minimum, maximum and average frame rates, along with the option to quit or run the benchmark again. We always ran the benchmark at least a second time and recorded that generally higher score.
Here are Call of Juarez DX10 benchmark results at 1920×1200:
This time the GTX 480 takes the lead, and we notice a huge performance gap between last generation's cards and this one. It takes a dual-GPU card or CrossFired HD 4870s to keep up with a single GPU of this new DX11 generation! Now on to 1680×1050 resolution:
Our top cards are almost wasted on Call of Juarez at 1680×1050 resolution; we would be looking to add more than 4xAA here. At any rate, the GTX 480 is the clear winner.
Crysis
Next we move on to Crysis, a science-fiction first-person shooter by Crytek. It remains one of the most demanding games for any PC, and it is also still one of the most beautiful games released to date. Crysis is set in a fictional near future where an alien spacecraft is discovered buried on an island near the coast of Korea. The single-player campaign has you assume the role of a US Delta Force operator, 'Nomad', armed with futuristic weapons and equipment. Crysis uses DirectX 10 for graphics rendering. A standalone but related game, Crysis Warhead, was released in 2008. CryEngine 2 is the game engine powering Crysis and Warhead, an extended version of the CryEngine that also powers Far Cry. As well as supporting Shader Model 2.0, 3.0 and DirectX 10's 4.0, CryEngine 2 is multi-threaded to take advantage of SMP-aware systems, and Crytek has developed its own proprietary physics system, called CryPhysics. Note, however, that actually playing the game runs a bit slower than the demo implies.
GPU Demo, Island
All of our settings are at maximum ("very high"), including 4xAA, and we force 16xAF in the control panels. Here is Crysis' Island demo benchmark, first at 2560×1600 resolution:
We did not bother to continue testing our HD 4870 and GTX 280 at 2560×1600 resolution; it would be a slideshow, as even the HD 4870-X2 stumbles at this resolution. The maximums are certainly not as important as the minimums and averages, which show that Crysis at 2560×1600 still requires at least multi-GPU to play smoothly. None of our video cards played this game particularly well at maximum resolution – not even with AA/AF disabled.
However, the HD 5870 and GTX 480 are pretty much neck-and-neck overall, although the Radeon noticeably stumbles at times. Perhaps the larger framebuffer of the GTX makes a difference; we will investigate later on. For Part Two of this review, we will also test Crysis with HD 5870 CrossFire to see how playable it is at 2560×1600. Let's move on to 1920×1200:
This time the HD 4870-X2 takes the lead. All three of our top cards can now play Crysis at 1920×1200 if you are willing to compromise on AA/AF or lower a couple of detail settings. And now at 1680×1050:
The Radeons just edge out the GeForce cards in Crysis. However, the new GTX 480 is running on beta drivers while the AMD cards have very mature drivers. We will revisit this benchmark every month in our Catalyst and GeForce driver performance analyses and report any changes we find.
Far Cry 2
Far Cry 2 uses the name of the original Far Cry, but it is not connected to the first game; it brings you a new setting and a new story. Ubisoft created it on their Dunia Engine. The game takes place in an unnamed African country during an uprising between two rival warring factions. Your mission is to kill "The Jackal", the Nietzsche-quoting mercenary who arms both sides of the conflict you are dropped into.
The Far Cry 2 game world is loaded in the background and on the fly to create a completely seamless open world. The Dunia engine provides good visuals that scale well, and the Far Cry 2 design team actually went to Africa to add realism to the game. One thing to especially note is Far Cry 2's very realistic fire propagation, which is a far cry from the scripted fire and explosions we are used to seeing.
First we test the Far Cry 2 benchmark at 1920×1200 – all resolutions are tested with AI enabled. The GTX 480 runs away from the Radeons at the higher resolutions.
And again at 1680×1050 resolution:
Here we see a clean sweep by GTX 480 in Far Cry 2.
World in Conflict
World in Conflict is set in an alternate-history Earth where the Cold War did not end: the Soviet Union invades the USA in 1989 and the Americans strike back. World in Conflict (WiC) is a real-time tactical/strategy video game developed by Massive Entertainment. Although it is generally considered a real-time strategy (RTS) game, it includes gameplay typical of real-time tactics (RTT) games. WiC is filled with real vehicles from both the Russian and American militaries. There are also tactical aids, including calling in massive bombing raids, access to chemical warfare, nuclear weapons and far more.
Here is yet another amazing, very customizable and detailed DX10 benchmark, available in-game or as a stand-alone. The particle effects and explosions in World in Conflict are truly spectacular! Every setting is fully maxed out.
We start our benching at 2560×1600:
Next we see the results at 1920×1200; again the GTX 480 wins, and the HD 4870-X2 is faster than the HD 5870.
Now at 1680×1050 resolution:
The GeForce GTX 480 delivers good performance all the way up to 2560×1600! You want the GTX 480 if you play a lot of World in Conflict.
X3: Terran Conflict
X3: Terran Conflict (X3:TC) is another beautiful stand-alone benchmark that runs multiple tests and will really strain a lot of video cards. X3:TC is a space trading and combat simulator from Egosoft and the most recent of their X series of computer games. It is a standalone expansion of X3: Reunion, based in the same universe and on the same engine. It complements the story of previous games in the X-Universe and continues the events after the end of X3: Reunion.
Compared to Reunion, Terran Conflict features a larger universe, more ships, and of course, new missions. The X-Universe is huge. The Terran faction was added with their own set of technology including powerful ships and stations. Many new weapons systems were developed for the expansion and it has generally received good reviews. It has a rather steep learning curve.
First we note the results at 1920×1200:
Now at 1680×1050:
This time the GTX 480 and HD 4870-X2 run fastest, with the HD 5870 close behind in a fairly tight grouping. However, all of our video cards perform well, and all of them post similar minimum framerates.
DiRT 2 Demo – (DX9c)
Colin McRae: DiRT 2 is a racing game released in September 2009 and the sequel to Colin McRae: DiRT. It includes many new race events, including stadium events, as your RV travels from one event to another across real-world environments on four continents. DiRT 2 includes five different event types, lets you compete at new locations, and adds a new multiplayer mode. It is powered by an updated version of the EGO engine featured in Race Driver: GRID, which also brings an updated physics engine.
We have been using the DiRT 2 demo to benchmark up until now, as it works just as well as the retail game – until you try to run DX11 on an NVIDIA DX11 card, in which case it reverts to DX9c. Evidently the developer did not provide support for NVIDIA's new DX11 card in the demo, although the retail game has no such issue. Since we ran all of our tests with the DiRT 2 demo, it was too late to switch to the full game. We instead edited the configuration file so that the HD 5870 also ran on the DX9 pathway, giving us a solid apples-to-apples comparison across all of the cards. Later on, in further testing, we will use the full retail game for the DX11 pathway, as the visuals are better.
First we test our top three cards at 2560×1600:
The GTX 480 pulls ahead in a not-so-tight race. What about 1920×1200?
Again the GTX 480 leads, and this time the HD 4870-X2 even pulls ahead of the HD 5870.
DiRT 2 gives the checkered flag to the GTX 480 on the DX9c pathway. However, even the older single-GPU cards run DiRT 2 satisfactorily at 1920×1200. We look forward to bringing you DX11 results in subsequent testing.
Left 4 Dead
Left 4 Dead (L4D) is a 2008 co-op first-person shooter developed by Turtle Rock Studios, which was purchased by Valve Corporation during the game's development. Left 4 Dead uses Valve's proprietary Source engine, and it replaces our older Source benchmark, which used Half-Life 2's Lost Coast demo. L4D is set in the aftermath of a worldwide pandemic and pits its four protagonists against hordes of the infected. There are four game modes: a single-player mode in which your allies are controlled by AI; a four-player co-op campaign mode; an eight-player online versus mode; and a four-player survival mode. In all modes, an artificial intelligence dubbed the "Director" controls pacing and spawns to create a more dynamic experience with increased replay value. It is best as a multiplayer game with humans.
There is no built-in benchmark, so we used ABT Senior Editor BFG10K's custom timedemo, which is very repeatable. The game is updated regularly through Steam, and we chose the highest detail settings and 4xAA. We will save our comments until after we present all three charts. First we test at 2560×1600 resolution:
On to our next chart at 1920×1200:
Finally at 1680×1050:
Here the top cards all perform pretty closely at 1680×1050 and 1920×1200. But at 2560×1600, the GTX 480 stumbles badly compared to the Radeons, and the HD 5870 moves ahead of the older HD 4870-X2 in an impressive show of what a new generation does to the one that preceded it. However, for Source engine games, an HD 4870 or GTX 260+ is plenty. We will wait to see if NVIDIA's driver team improves the GTX 480's performance relative to the HD 5870 at the highest resolution.
Lost Planet
Lost Planet: Extreme Condition is a Capcom port of an Xbox 360 game. It takes place on the icy planet of E.D.N. III, which is filled with monsters, pirates, big guns and huge bosses. This frozen world highlights high-dynamic-range (HDR) lighting as the snow-white environment reflects blinding sunlight and DX10 particle systems toss snow and ice all around. The game looks great in both DirectX 9 and 10, and there isn't much of a difference between the two versions except perhaps the shadows. Unfortunately, the DX10 version doesn't look that much better when you're actually playing the game, and it still runs slower than the DX9 version.
We use the in-game performance test from the retail copy of Lost Planet, updated through Steam to the latest version. The run isn't completely scripted – the creatures act a little differently each time – so multiple runs are required. Lost Planet's Snow and Cave demos are run continuously by the performance test and blend into each other. Here are our results with the more demanding benchmark, Snow. All settings are fully maxed out in-game, including 4xAA/16xAF.
Let's start with 2560×1600. Please note that there is a typo in the GTX 480's maximum FPS; instead of 85, it should read 52 FPS.
Now at 1920×1200 resolution:
Finally at 1680×1050:
All of our top cards are tightly grouped. On the averages, the GTX 480 edges the top two Radeons but then falls very slightly behind the HD 5870 at 2560×1600. Performance is too close to call.
Unreal Tournament 3 (UT3)
Unreal Tournament 3 (UT3) is the fourth game in the Unreal Tournament series. UT3 is a first-person shooter and online multiplayer video game by Epic Games. It provides a good balance between image quality and performance, rendering complex scenes well even on lower-end PCs; of course, on high-end graphics cards you can really turn up the detail. UT3 is primarily an online multiplayer title offering several game modes, and it also includes an offline single-player campaign.

For our tests, we used the latest 1.5 game patch for Unreal Tournament 3, released after its 'Titan' pack. The game doesn't have a built-in benchmarking tool, so we used FRAPS and did a fly-by of a chosen level; note that the performance numbers reported are a bit higher than in-game. The map we use is called "Containment" and it is one of the most demanding of the fly-bys. Our tests were run at 2560×1600, 1920×1200 and 1680×1050 with UT3's in-game graphics options set to their maximum values. One drawback of the UT3 engine's design is that there is no built-in support for anti-aliasing, so we forced 4xAA in each vendor's control panel. We record a demo in the game, and a set number of frames are saved to a file for playback. When playing back the demo, the game engine renders the frames as quickly as possible, which is why it often plays back more quickly than you would actually play the game.
Here is Containment Demo, first at 2560×1600:
Now at 1920×1200:
Finally at 1680×1050:
There is absolutely no problem playing this game fully maxed out with any of our graphics configurations. Generally, the dual-GPU HD 4870-X2 is a bit faster, followed by the GTX 480 and the HD 5870. The older cards turn in a respectable showing, and there is no need to upgrade for this game, even for 2560×1600.
Resident Evil 5
Resident Evil 5 is a survival-horror third-person shooter developed and published by Capcom that has become the best-selling single title in the series. The game is the seventh installment in the Resident Evil series, and it was released for Windows in September 2009. Resident Evil 5 revolves around two investigators pulled into a bio-terrorist threat in a fictional town in Africa.
Resident Evil 5 features online co-op play over the internet and also takes advantage of NVIDIA's new GeForce 3D Vision technology. The PC version comes with exclusive content the consoles do not have. The developer's emphasis is on optimizing for high frame rates, but they have implemented HDR, tone mapping, depth of field and motion blur in the game. RE5's custom game engine, 'MT Framework', already supports DX10 to benefit from lower memory usage and faster loading. Resident Evil 5 gives you the choice of DX10 or DX9, and we naturally ran the DX10 pathway.
There are two benchmarks built into Resident Evil 5; we chose the fixed benchmark. Here it is at 2560×1600:
Here are the results at 1920×1200 resolution:
Let’s check out 1680×1050:
We see results that are fairly normal until we hit 2560×1600, which suggests there may be room for driver improvements: the GTX 480 leads the top Radeons until the HD 4870-X2 suddenly pulls ahead. The GTX 480 is faster than our Diamond reference HD 5870, and all of our older video cards turn in a respectable performance.
S.T.A.L.K.E.R.: Call of Pripyat
S.T.A.L.K.E.R.: Call of Pripyat became a brand-new DX11 benchmark for us after GSC Game World released another story expansion to the original Shadow of Chernobyl. It is the third game in the S.T.A.L.K.E.R. series. All of these games have non-linear storylines with role-playing elements. The player assumes the identity of a stalker – an illegal artifact scavenger – in "The Zone", which encompasses about 30 square kilometers around the Chernobyl Power Plant in an alternate-reality story following a second (fictitious) explosion.
S.T.A.L.K.E.R.: Call of Pripyat features "a living, breathing world" with highly developed NPC and creature AI. Call of Pripyat utilizes the X-Ray 1.6 engine, allowing advanced modern graphical features to be fully integrated through DirectX 11; it remains compatible with DirectX 8, 9, 10 and 10.1. One outstanding feature of the engine is real-time GPU tessellation, and it also offers HDR, parallax and normal mapping, soft shadows, motion blur, weather effects and day-to-night cycles.
As with other engines using deferred shading, the original DX9c X-Ray engine does not support anti-aliasing with dynamic lighting enabled, although the DX10 and DX11 versions do. We are using the stand-alone "official" benchmark from the game's creators. Call of Pripyat is top-notch and worthy of the S.T.A.L.K.E.R. universe, with even more awesome DX11 effects that help create and enhance the game's already incredible atmosphere. As with Clear Sky before it, DX10 – and now DX11 – comes with steep hardware requirements, and this new game still really needs multi-GPU to run at its maximum settings. We picked the most stressful of the four tests, "Sun shafts", which brings the heaviest penalty due to its extreme use of shaders for DX10/DX10.1 and DX11 effects. We ran this benchmark fully maxed out in DX11 with "ultra" settings plus 4xAA, including edge-detect MSAA, which chokes performance even further.
Please note that the HD 4870/4870-X2 ran on the DX10.1 pathway and the GTX 280 on the DX10 pathway. The DX11 pathway is much more demanding, so the HD 4870-X2 has a much easier time in this benchmark than the HD 5870 or the GTX 480 do. Here are our maxed-out DX11 settings for the S.T.A.L.K.E.R.: Call of Pripyat benchmark:
Now on to the benchmarks at 2560×1600 – remember that the HD 4870-X2 is on the far less demanding DX10.1 pathway, while the HD 5870 and GTX 480 use DX11:
Next at 1920×1200 – and remember that our two DX11 cards not only work harder, they produce better visuals than the DX10/10.1 cards:
Let’s check out 1680×1050:
Considering that the HD 5870 and GTX 480 are running DX11 while the HD 4870-X2 is on DX10.1, the results are all the more impressive for the new cards – and the GTX 480 makes a clean sweep of these benches.
ARMA 2
ARMA 2 is our newest benchmark, taken from the third installment in Bohemia Interactive's series of realistic modern military simulation games. It features a player-driven story with more than 70 weapons and over 100 different vehicles. With a 225-square-kilometer game world built from actual surveillance photos, you can expect truly massive online battles, with five distinct armed groups to choose from. ARMA 2 can be considered a tactical shooter where the player commands a squad of AI – or many squads – with elements of real-time tactics.

The ARMA 2 demo was released in late June 2009; at 2.6 GB, it lets you experience the same gameplay featured in the full version – including multiplayer – as well as a few of the vehicles, weapons and units. The demo contains part of the Chernarus terrain, a small section of the full game world set in the fictional "Black Russia". There is a massive performance hit on any DX10/10.1 card when maximum details are enabled at the resolutions we test; AA is set to "high". Let's see how our top three video cards do with ARMA 2 at 2560×1600:
The new cards really differentiate themselves from the older HD 4870-X2. Here are our results at 1920×1200 resolution:
Wow! We are talking about a massive performance increase over the last generation – the HD 4870-X2's internal CrossFire does not even appear to scale positively here.
If you play ARMA 2, you will do yourself a huge favor by upgrading to one of the newer cards from either AMD or NVIDIA. The GTX 480 just edges out the HD 5870.
Tom Clancy’s H.A.W.X.
Tom Clancy's H.A.W.X. is an air combat video game developed by Ubisoft Romania and published by Ubisoft for Microsoft Windows, Xbox 360 and PlayStation 3. It was released in the United States on March 6, 2009. You have the opportunity to fly 54 aircraft over real-world locations and cities in somewhat realistic environments created from satellite data. The game is more of a take on flying than a real simulation, and it has received mixed reviews.

The story takes place during the time of Tom Clancy's Ghost Recon Advanced Warfighter. H.A.W.X. is set in the year 2014, when private military companies have replaced the government-run military in many countries. The player is placed in the cockpit as an elite ex-military pilot recruited by one of these corporations to work as a mercenary. You later return to the US Air Force with a team as you try to prevent a full-scale terrorist attack on the United States, started by your former employer.

H.A.W.X. runs faster and with more detail on the DX10.1 pathway than on DX10. The ATI video cards can take advantage of DX10.1, while our GTX 280 is necessarily restricted to the DX10 pathway. Let's check out H.A.W.X. with our top three cards at 2560×1600:
The GTX 480 flies away from the other two cards. Here are our results at 1920×1200 resolution:
Again, the GTX 480 edges the HD 4870-X2 and speeds past the HD 5870. Let's see what testing at 1680×1050 shows.
H.A.W.X. is clearly faster on the GTX 480. Let’s move on to a DX11 online game, BattleForge.
BattleForge
BattleForge is an online PC game developed by EA Phenomic and published by Electronic Arts. The full game and a demo were released in March 2009. BattleForge is a card-based RTS in which you acquire new cards by means of micro-transactions. By May 2009, BattleForge had become a Play 4 Free game with fewer cards than the retail version. BattleForge supports DirectX 11 with full support for hardware tessellation. It is very impressive visually and quite demanding on any system.
First we test with our three top cards at 2560×1600 using the BattleForge built-in benchmark with all of its settings completely maxed out and with 4xAA:
The GTX 480 pulls ahead of the HD 5870, and the HD 4870-X2 – on a less demanding pathway with lesser visuals – is a very distant third.
Again, the GTX 480 clearly leads the pack. Now at 1680×1050 resolution:
The GTX 480 is fastest in BattleForge followed by the HD 5870.
The Future? – Heaven 1.0 Unigine
Finally, the Heaven benchmark on the Unigine engine is the last of the two synthetic benchmarks in this Part One of our GTX 480 performance analysis. It uses DX11 and fairly heavy tessellation, which will strain any graphics card. Here are the settings we used for this benchmark (no worries, we checked 'full-screen').
Here is our benchmark run at 2560×1600. As there are only two DX11 cards, we will compare them to each other. Look very carefully at the next graph and see if you can spot what is wrong:
(Edit: There is an error in this chart; it is incorrectly labeled DX11 when it actually shows DX10 results. The correct DX11 FPS at 2560×1600 are: GTX 480, 23.4; HD 5870, 20.3.)
Wow! The gap in the graph looks like a lot, but there is only a 0.1 FPS difference!! Beware of just looking at the graph. Our HD 5870 can keep up with the GTX 480 at the maximum resolution in the Heaven benchmark.
For the rest of the test we did something different: we ran our GTX 480 against the HD 5870 in DX11, and the three older cards on the DX10 pathway. Do not compare DX10 to DX11 results! The DX10 pathway is far less stressful than the heavy tessellation used on the DX11 pathway. The visuals are also far more impressive in DX11 than in DX10, primarily because of tessellation – something the GeForce GTX 480 and HD 5870 both appear to excel at.
And now at 1680×1050; again the newer cards are on the far more demanding DX11 pathway:
At least two DX11 games based on Unigine will be released this year. And a brand-new, even more stressful Unigine 2.0 benchmark was released just this week; we will explore it next week in Part Two of this review.
As you can see, there is a setting for "extreme tessellation". We will tell you right now that this test chokes the GTX 480 at the highest settings, but it does better than the slideshow the Radeon HD 5870 manages. However, the visuals are also extraordinary.
Performance Summary Chart
The GeForce GTX 480 “wins” – at what cost?
– Power Usage
This is important to many people, as a very hot-running GPU is not only not "green", it throws warm air into your room that your air conditioner must work extra hard to compensate for. Of course, for those of us like this editor who live where it is more often cool than warm, a small space heater in one's PC is a plus. We have seen the GTX 480's TDP specification: 250 W – far more than the HD 5870's 188 W TDP – and the GTX 480 requires 6-pin + 8-pin PCIe connectors, as shown below.
In contrast, the HD 5870 requires only 6-pin + 6-pin PCIe connectors. You will also note that the HD 5870 is physically longer than the GeForce, and some cutting had to be done to the Cooler Master Gladiator 600 to accommodate it.
The GTX 480's performance does come at a power cost. Compare the total system power draw at the wall with the HD 5870 first – at idle and then at maximum GPU usage running FurMark.
Now the total system power draw from the wall for the same PC, but with the GTX 480 in place of the HD 5870 – first at idle and then with the GTX GPU maxed out running FurMark.
Of course, the second image shows our overclocked GTX 480 – we see that we would be pulling over 250 W from the wall! This also brings up overclocking, which we shall cover shortly.
FurMark stresses a GPU's stability and produces maximum thermals that one would never see in-game; consider FurMark's torture tests "worst case" scenarios for power and heat. Here is a screenshot of FurMark running at 2560×1600:
Here is GPU-Z right next to FurMark results:
It definitely runs toasty at 97 C as a "worst case", but the reference cooling solution appears up to the task.
The GeForce GTX 480 “wins” – at what cost?
-Overclocking
It was wrongly assumed that the GTX 480's high thermals and TDP would limit overclocking. Look at the maximum temperatures achieved with "worst case" FurMark – clearly this GPU was built to tolerate high thermals. Now we want to know how far it overclocks. Enter EVGA's overclocking tool:
At idle we can see the temperatures are good and the fan is at a barely audible 44%. Let's go for the maximum overclock, shall we? Using this tool, we set the fan to just over 90% – where it is loud and where it never goes in real gameplay – and raised our core and memory clocks a bit at a time to 826/2200!! This is what we got:
Pretty radical. The clocks are way up, although the Vantage score rises only modestly over the GTX at stock clocks, from 21239 to 21416 – less than one percent. Let's try the same overclock in Far Cry 2 at 1920×1200 with fully maxed details plus 4xAA:
Now that is a large improvement from overclocking: we started at 75/93/143 and increased to 87/107/154 FPS – roughly a 15% gain on the average, in line with the ~18% core overclock (826 MHz versus the 700 MHz stock clock). Since we have just received a potentially very highly overclockable HD 5870 PCS+ (Professional Cooling Series plus), we will pit our overclocked GTX 480 against that Radeon in Part Two of this article, where we will also explore HD 5870 CrossFire performance.
The GeForce GTX 480 “wins” – at what cost?
– Price to Performance
It is pretty clear from our 14 games and two synthetic tests that the GTX 480 has regained the fastest-single-GPU performance crown for NVIDIA. However, it does not have the distinction of being the fastest video card: the HD 5970 – AMD's dual-GPU solution – still holds that crown. So let's look at pricing versus performance to determine whether NVIDIA has a winner with its new Fermi architecture. We expect many variations of GeForce GF100 Fermi video cards to go up against AMD's current offerings. Let's look at the current DX11 cards to see where they sit in both price and performance:
- HD 5970 – $600
- GTX 480 – $500
- HD 5870 – $430
- GTX 470 – $350
- HD 5850 – $330
As we can see, these cards do not line up exactly with each other in either price or performance; each seems to have settled into its own slot. This suggests we may not see the kind of price war AMD and NVIDIA have waged since the GTX 280 launched at $650 and the HD 4870 arrived shortly afterward, forcing the GTX down to $500 – a war that paused when AMD launched its unchallenged DX11 lineup over six months ago.
So we have to wonder what AMD's strategy might be. Well, we have the answer for you pictured below; it arrived at ABT HQ this Friday afternoon, just in time to spoil NVIDIA's launch. Notice the free downloadable copy of Call of Duty: Modern Warfare 2 bundled with the PowerColor HD 5870 PCS+ as an incentive.
Clearly AMD is confident in its own mature product, and it is apparently not going to rush out an "HD 5890". AMD is leaving it to its partners to use their own cooling solutions and overclock the HD 5870 well past reference speeds to try to catch NVIDIA's GTX 480. Will this strategy work? How will NVIDIA respond? Will they unlock the extra shaders in the GTX 480 for an "ultra" version on the current A3 stepping? Will there even be another stepping and respin? We will attempt to cover this in Part Two of our GTX 480 performance analysis against a potentially highly overclocked HD 5870 – and of course we shall overclock our GTX as well.
Conclusion
This has been quite an enjoyable – if physically exhausting – hands-on week for us, comparing the GTX 480 with the HD 5870 and with the previous generation of video cards. We wish we had had more than the week allowed under NDA to benchmark the GTX 480 and give you our first impressions; during that same week, we also had to set up and benchmark the HD 5870. However, it was certainly worth it, and we feel privileged to bring you our very first benchmarks and performance testing of the GTX 480. We like it a lot – so much so that we will make this a series until we have covered the subject in depth. We expect to explore GTX 480 SLI and NVIDIA's claims of incredible scaling – 90% or so – under Windows 7.
In the meantime, feel free to comment below, ask questions or start a detailed discussion in our ABT forum. If you have any requests on what you would like us to focus on for Part Two, or for any other information, please join our ABT forum.
GTX 480 Pros and Cons:
Pros:
It is the fastest single GPU – period! – and there is further room for overclocking
The new architecture brings support for GPU computing and a level of performance way beyond the last generation.
DX11 and great support for tessellation, PhysX and CUDA, 3D gaming, and 2D/3D Surround (with SLI) bring added realism to gaming
Cons:
It runs very warm, with a TDP of 250 watts
It is not the fastest video card; the HD 5970's dual GPUs outperform it for about $100 more
The fan is fairly noticeable at high RPMs
- This editor believes that NVIDIA, although late, has brought a very remarkable, full-featured DX11 GPU to market that will find good acceptance among customers and fans alike. The GF100 Fermi architecture is impressive, and it does translate into gaming performance – although with a price premium and with high TDP and thermals. On the plus side, we believe NVIDIA's drivers will continue to improve at a faster rate than AMD's already mature drivers, which will also improve, albeit more slowly.
- If you currently game on a GTX 280/GTX 260/GTX 275/HD 4870/HD 4890-class card, you will do yourself a big favor by upgrading. If you have an HD 4870-X2, GTX 260 or GTX 275 SLI, or perhaps even a GTX 295, the move to a GTX 480 will give you better visuals on the DX11 pathway – and you are no doubt already thinking of GTX 480 SLI. If you have a Radeon HD 5870 and are satisfied with its drivers and performance, you are unlikely to consider a change unless the many exclusive features of the new GTX 480 appeal to you; very likely you are considering overclocking as an alternative way to get more performance.
- Of course, we will add much more information when we publish Part Two, covering a potentially highly overclocked HD 5870 versus an overclocked GTX 480, and we will of course throw CrossFired HD 5870s into the mix. Stay tuned; there is a lot coming from us at ABT.
- We want you to join us and Live in Our World. It is expanding fast, and we think you will like what you progressively discover here.
- Mark Poppin
- ABT Senior Editor
Please join us in our Forums
Become a Fan on Facebook
Follow us on Twitter
For the latest updates from ABT, please join our RSS News Feed
Join our Distributed Computing teams
- Folding@Home – Team AlienBabelTech – 164304
- SETI@Home – Team AlienBabelTech – 138705
- World Community Grid – Team AlienBabelTech
Image Gallery
From reading all the reviews online, I don't see a reason to buy either of the two new Nvidia cards. Most of the reviews show that at the higher resolutions the ATI cards take back the lead or get really close, and high resolutions are one of the primary reasons to buy a high-end card. The only reason to go Nvidia at this point is really PhysX. The audio through HDMI isn't as good as what ATI is offering either. I have seen multiple sites say they had these cards running at 99 degrees, and in SLI they are expected to draw close to 800 watts. Most sites are reporting that they take more power than the card's spec lists as the max. The price just isn't worth it either. This launch was 6 months late and still isn't actually happening until two weeks into April. I fully expect a refresh of ATI's current cards, and possibly price drops or adjustments that give them a clear advantage. Also remember that because this launch is so delayed, ATI is 6 months ahead for the next generation of graphics cards. AMD and ATI are finally getting all their eggs in a row after their merger, and the GPUs are getting die shrinks faster than they were before, leading to cooler-running GPUs. Not only will Nvidia's Fermi cards eat up your wallet now, they will continue to do so with their power draw, the increased cooling for your PC, and the possible extra cost of air conditioning this coming summer.
Thanks for the review, Mark! Although my opinions remain moderately negative and disheartened regarding Nvidia's first-generation GF100 chips and the entire 40nm fabrication process in general, I am somewhat enamored by the marketing team's ability to turn this generation of enthusiast GeForce products into a highly demanded revenue stream with only two major marketing points.
When I was over at Nvidia's headquarters in late October and asked several engineers for their initial impressions of the GF100 architecture, the consensus response was simply, "oh... Fermi got the flu." At first, I wasn't quite sure how to take the response, as it was delivered in a very general, casual fashion. What I soon learned from Fudo, Theo Valich and several technical marketing managers at Nvidia was that the chip's original November launch plans had been completely removed from the 2009 timeframe.
Over the next few months, it was very disheartening to read commentary after commentary on the state of the fabrication process: rampant production issues with high power walls, leaky transistors (resulting in the shader drop from 512 cores to 480) and variable chip temperatures (due to bad bumps).
What I am surprisingly thankful for out of all of this mess, however, was Nvidia’s ability to produce a wonderful tessellation architecture with unparalleled SLI scalability across additional GPU cores. I’m quite sure that many journalists, analysts and consumers did not see it coming either, but it was a very impressive comeback and a much needed marketing point for the green team.
As much as I am looking forward with great anticipation to the *properly designed* second-generation GF100 chips between Q4 2010 and Q1 2011, I still have a level of respect for the company's ability to take a product aboard a sinking ship and rescue it with incredible multi-GPU scalability and 3D Vision Surround!
Dude, your review is broken. On the Left 4 Dead page you said the GTX 480 stumbles badly vs. the 5870…
But it doesn't.. at least not any worse than anywhere else. It's just that your chart is broken!!! Instead of the far left being "0 fps" it's "74 fps", effectively zooming in on the gap and making it look wayyy bigger than it actually is.
I see this problem actually applies to a lot of the charts. I suggest you just fix them in Excel to avoid confusing or misleading people. There are also some cases where, because of this, the delta looks much bigger than it actually is in NVIDIA's favor.
In Excel you can right-click the X-axis, go into its properties and set the minimum value to 0. This will force your graph to always show the full chart instead of the zoomed-in version.
Thanks for the suggestion, Joe.
I stand by what I said about the GTX 480 “stumbling badly” compared to HD 5870. Let’s look at the context:
The GTX 480 is generally leading the HD 5870 until we get to Left 4 Dead. Then it *stumbles*
In my own personal opinion, minus 4-1/2 frames is significant when you consider that the GTX leads the Radeon in the other benches. That is why I suggested it could be immature drivers holding back the GTX in this Source engine benchmark.
As to my charts being misleading, I pointed this out in Unigine Heaven 1.0 benchmark:
http://alienbabeltech.com/main/?p=16475&page=19
I want to make absolutely sure that my readers look carefully at the numbers, not just at the charts and at the “gap”. Marketing and PR often use a chart to further their own agenda.
Also, if one is comparing just two or maybe three video cards, it is good to emphasize the difference between them, and I believe it is not necessary to start a graph at "0" – especially when dealing with reasonably high FPS.
At any rate, always look at the frame rate. The numbers are not misleading in any way and the charts are representative of the close performance between these two most excellent video cards – HD 5870 and GTX 480; each in its own price range and each with a compelling set of features.
To Jon Worrel,
Many thanks for your kind words, and sorry for the delay in my response; there have been issues with our editorial staff being able to post comments. :O
I feel positively about the GTX 480. Perhaps you were expecting a bit more and were disappointed.
I look at this launch differently. It is as though AMD had brought an HD 2900 XTX to the market and it had beaten the 8800 GTX performance-wise instead of only matching the 8800 GTS. I believe NVIDIA pulled off quite a feat. I remember Jensen's words from GTC: "We want a General Purpose Processor that can do amazing graphics."
IMO, they delivered – in spades. And I am not referring to any proprietary "features" the new GF100 brings to the market. Besides, I am mostly saving CUDA, 3D and Surround for Part Two of this review – an overclocked GTX 480 vs. the PowerColor HD 5870 PCS+ (which I am led to believe is a super-overclocker).
I believe that this new GF100 chip is the beginning. I believe that NVIDIA could have brought it to market 6 months from now with tamed thermals and all SPs enabled, but that did not seem to make sense from a business standpoint. If you do not like the TDP of the GTX 480, the GTX 470 is 35 W lower. And of course the chip will be respun; it is only logical that they will aim to improve yields and performance and likely bring out a 512-SP-enabled "ultra" – perhaps even on the current stepping.
I also do not doubt that we will see some kind of “GX2” if their ultra cannot catch the HD 5970. It is their nature to be so competitive – which is wonderful for us as consumers and enthusiasts.
NVIDIA has recently brought hotter and hungrier GPUs to market for the enthusiasts – that is just the way they do things with their philosophy of monolithic dies. AMD achieves a similar level of performance with a completely different philosophy. No one can say who is right or wrong – just who they prefer.
I really like both companies. I am impressed with the HD 5870 for about $400 with a CoD MW2 game tossed in. But then I am also impressed by the raw performance of the GTX 480 for $500. I love them both, and I look forward to pitting CrossFired HD 5870s against SLI'd GTX 480s.
And you can look at it another way: GTX 480 Tri-SLI would match the performance of your HD 5870 QuadFire with a very similar thermal output and would even be price-competitive. I also realize that it would not be such a great "upgrade" for you.
@apoppin:
“I believe that this new GF100 chip is the beginning. I believe that NVIDIA could have brought it to market 6 months from now with tamed thermals and all SPs enabled.
… I also do not doubt that we will see some kind of “GX2” if their ultra cannot catch the HD 5970. It is their nature to be so competitive – which is wonderful for us as consumers and enthusiasts.”
all undoubtedly true. as jonny has already pointed out however, the problem for nvidia is that they are a good 6 months behind ati, who not only have a lot of manoeuvrability with pricing and will probably produce a hd 5890 to do even more damage to nvidia in price/performance terms, but will also be introducing a completely new architecture in 6 months time. and if this doubles evergreens compute power with a similar die size/tdp, then fermi as the leading directx11/directcompute architecture will be superseded.
of course it remains to be seen what the middle and bottom tier fermi cards will be like – which is of course where the money is made – but nvidia are up against it both time and tech-wise. fundamentally, fermi has left nvidia unable to turn tight. lucky for them they have a legion of loyal customers and a large marketing budget. and then, as you rightly point out, all of this is very good for us consumers.
Hello fellas
The main question here is: how long will it take Nvidia to release a PROPER Fermi-based GPU with all 512 cores enabled (keep in mind the 480 and 470 are only half-baked chips that most likely didn't pass full 512-core testing – imo, same story as usual – while their prime stock of 512-core chips goes to 3D studio/design/engineering rendering cards)? How long will it take them to release a properly polished, shiny and squeaky-clean set of drivers? And how much more time does Nvidia still have until ATI slaps them in the face with their (I'm assuming the name) 6xxx line of cards?
I remember when I bought my SLI combo of 9800 GTs; I had it for a while on some earlier drivers, and then suddenly a newer revision came out and brought a good performance gain across the board. Some titles benefited greatly, but all the games I played showed an increase in smoothness.
At this very moment, any arguments about "480 this" and "5870 that" are steam let out too soon. I have a sneaking suspicion that the delay until the 12th of April isn't only hardware-related; somewhere there is an entire floor of hungry programmers who aren't getting any food until they compile proper drivers ;)!
For the next 2 weeks we can only sit and wait. Most likely new drivers will show up within 7–10 days (before the 12th); then we'll see again what the numbers tell us.
Cheers
Marcin
My main issue with the late release date and the not-so-overwhelming numbers is that they are being compared to 6+ month old ATI cards. I believe someone from ATI said there will be a refresh of ATI's cards at some point, which should speed them up. Then of course ATI has a 6-month head start on the next generation of cards, and with the issues Fermi has brought up, I don't see Nvidia making up that gap with a product that can compete while staying cool at a lower power draw. And if you already have a high-end ATI card, I don't see much reason to switch to Nvidia. As for drivers, I have been super impressed with ATI so far this year; they are bringing new features and bumping up performance in games as well. Nvidia is pulling drivers because of issues, only to re-release them later. I haven't been impressed with Nvidia's drivers in a long time.
Thanks, Aaron .. already fixed.
I’m glad I had some more time on my hands and could give you a much closer look. Very impressive work and an impressive site. Added to my bookmarks.
Great review.. I suspect the GTX 480 will be just like the HD 2900 XT was back then on 80nm. At least it's not as much of a "failure" as the HD 2900 XT, but it's way more power-hungry. True, the HD 2900 XT was considerably slower than the 8800 Ultra, but its power consumption and heat output were pretty much the same. Heck, its consumption was actually equal to the less hungry 8800 GTX according to some reviews.
Anyway, the point here is that Nvidia made sure they came out on top with the fastest single-GPU card, but at some cost. The real point (LOL) is that the Fermi architecture appears to be designed with the future in mind, in the same way the R600 was. With a reduced number of TMUs (albeit more efficient ones), the ratio of texture units to shaders seems better suited for scaling in the future. On a smaller 28nm shrink, we'd be seeing a far better design with at least double the specifications for a more-than-100% increase in performance (similar to RV770 versus the original R600) while consuming even less power.