NVIDIA’s GTX 480 Performance Testing


NVIDIA has finally released its long-awaited GeForce GTX 480, based on its brand new Fermi DX11 GF100 architecture. This new GPU (Graphics Processing Unit, a term coined by NVIDIA) continues the strategy NVIDIA has pursued since the G80 launched over three years ago: to create a general-purpose processor, co-equal with the CPU, that also renders amazing graphics. The GTX 480, their flagship GPU, is the culmination of those efforts with the new DX11 Fermi architecture.


NVIDIA realizes that it is six months behind its rival AMD in bringing DX11 video cards to market. NVIDIA’s original intention was to launch the GTX 480 alongside Windows 7, but this 3-billion-transistor GPU proved very difficult to manufacture on TSMC’s new 40 nm process, which had issues of its own. To improve yields, NVIDIA had to cut the GTX 480’s shader count from 512 to 480 so as to guarantee enough chips to supply the big demand expected when it finally launched.


In fact, AMD has already launched its entire 5000-series DX11 lineup from top to bottom: $600 for the dual-GPU HD 5970 down to $60 for the passively cooled HD 5450. So today, NVIDIA launches its new DX11 GeForce lineup with its GTX 480 flagship and second-fastest card, the GTX 470. The GTX 480 comes with an MSRP of $499 and the GTX 470 will retail for $349. So we need to answer the question: is the GTX 480 worth the $100 premium over the $400 one would currently spend for AMD’s top single-GPU video card, the HD 5870?

To properly bring you this review, we purchased a Diamond HD 5870 from Newegg and put it through its paces this week with the very latest performance drivers, Catalyst 10-3a. AMD is quite proud of this driver set, as it brings solid performance increases over Catalyst 10-2 and even over the latest WHQL drivers, Catalyst 10-3, which were released this week. We suspect the results would have been much more in NVIDIA’s favor had we used the older Catalyst 10-2 driver set. Also, remember that AMD has had a long time to mature its drivers, while the release GeForce 197.17 drivers we are using for the GTX 480 are still beta and leave room for solid improvement by NVIDIA’s driver team in the months to come.

So you will see us pit our reference-design Diamond HD 5870 against the new GTX 480 in 14 modern games and 2 synthetic benchmarks at resolutions from 1680×1050 to 2560×1600. We are also using our standard reference video card, the HD 4870-X2, the very fastest video card of AMD’s last generation and still very competitive with the HD 5870 in many games.


Is the GTX 480 worth $500, nearly $100 more than its rival, AMD’s HD 5870?

This review is going to be in two parts. This first one will analyze and compare GTX 480 and HD 5870 performance and hopefully we can announce a performance winner. We will also look at the details to see what the new NVIDIA GPU brings to the table and if it is worth the nearly $100 premium over its AMD counterpart. We also believe we have a good handle on how AMD is going to respond to NVIDIA’s GTX 480/470 Fermi launch and we will share our analysis and insights with you. The second part will be much expanded with more game benchmarks and with AMD’s likely answer to NVIDIA’s Fermi GTX launch.

Widespread e-tail availability of both the GeForce GTX 480 and GTX 470 will happen the week of April 12, 2010, so you have a little time to decide what to do, and this review is designed to help you with an important potential upgrade. NVIDIA says it is building tens of thousands of units for initial availability, which will ensure that its partners have ample volume for what is certainly one of the most anticipated GPU launches ever.

We will also help answer whether it is practical to upgrade from the HD 4870/GTX 280 class, which includes the GTX 260, 275 and, by extension, the GTX 285. We will also consider whether it is practical or useful to upgrade from an HD 4870-X2 or HD 4870 CrossFire, or by extension GTX 260 or 275 SLI, or even a GTX 295, which is a bit more powerful than our HD 4870-X2. Since we do not want any chance of our CPU bottlenecking our graphics, we are testing all of our graphics cards with our Intel Core i7 920 at 3.80 GHz (3.97 GHz effectively with the 21x multiplier in turbo mode), 6 GB of Kingston DDR3 and a Gigabyte X58 motherboard with full 16x + 16x PCIe CrossFire/SLI support.

Later on we plan to also test our AMD DX11 video cards on AMD’s Dragon platform. We also acquired a brand new ECS black label A890GXM-A CrossFire motherboard which is a nice performance upgrade from our current Gigabyte 790X motherboard and we shall post that review this week.

Before we do performance testing, let’s take a look at the GTX 480 and quickly recap its new DX11 architecture and features.



apoppin

Founder and Senior Editor of ABT.

14 Responses

  1. jonny says:

    From reading all the reviews online I don’t see a reason to buy either of the two new nvidia cards. Most of the reviews show that at the higher resolutions ati cards take back the lead or get really close, and high resolutions are one of the primary reasons to buy a high end card. The only reason to go nvidia is really for physx at this point. The audio through hdmi isn’t as good as what ati is offering either. I have seen multiple sites say that they had these cards running at 99 degrees, and in sli they are expected to draw close to 800 watts. Most sites are reporting that they are taking more power than the spec for the card shows is the max. Price just isn’t worth it either. This launch was 6 months late and still isn’t actually launching until two weeks into April. I fully expect a refresh from ATI for current cards and possibly price drops or adjustments that give them a clear advantage. Also remember that because this launch is so delayed, ATI is 6 months ahead for the next gen of graphics cards. AMD and ATI are finally getting all their ducks in a row after their merger, and the gpu’s are getting die shrinks faster than they were before, leading to cooler running gpu’s. Not only will Nvidia fermi cards eat up your wallet now, they will continue to do so with the power draw, increased cooling for your pc and possibly extra air conditioning costs this upcoming summer.

  2. Jon Worrel says:

    Thanks for the review Mark! Although my opinions remain moderately negative and disheartened against Nvidia’s GF100 first-generation chips and the entire 40nm fabrication process in general, I am somewhat enamored by the marketing team’s ability to turn this generation of enthusiast GeForce products into a highly-demanded revenue stream with only two major marketing points.

    When I was over at Nvidia’s headquarters in late October and asked several engineers what their initial impressions were of the GF100 architecture, the consensus response was simply, “oh.. Fermi got the flu.” At first, I wasn’t quite sure how to interpret the response, as it was delivered in a very general, casual fashion. What I soon learned from Fudo, Theo Valich and several technical marketing managers at Nvidia was that the chip’s original November launch had been pushed entirely out of the 2009 timeframe.

    Over the next few months, it was very disheartening to read several commentaries on the state of the fabrication process regarding the rampant production issues of high power walls, leaky transistors (resulting in the shader drop from 512 cores to 480 cores) and variable chip temperatures (due to bad bumps).

    What I am surprisingly thankful for out of all of this mess, however, was Nvidia’s ability to produce a wonderful tessellation architecture with unparalleled SLI scalability across additional GPU cores. I’m quite sure that many journalists, analysts and consumers did not see it coming either, but it was a very impressive comeback and a much needed marketing point for the green team.

    As much as I am looking forward to the *properly designed* second-generation GF100 chips between Q4 2010 and Q1 2011 with great anticipation, I still have a level of respect for the company’s ability to take a product aboard a sinking ship and rescue it with incredible multi-GPU scalability and 3D Vision Surround!

  3. Joe Smith says:

    Dude your review is broken. On the Left 4 Dead page you said the GTX 480 stumbles badly vs. the 5870…

    But it doesn’t.. at least any worse than anywhere else.. It’s just that your chart is broken!!! Instead of the far left being “0fps” it’s “74fps”, effectively zooming in on the gap, making it look wayyyy bigger than it actually is.

  4. Joe Smith says:

    I see this problem actually applies to a lot of the charts. I suggest you just fix them in Excel to avoid confusing or misleading people. There are also some cases where, because of this, the delta looks much bigger than it actually is in NVIDIA’s favor.

    In Excel you can right-click the X-axis, go into its properties and set the minimum value to 0. This will force your graph to always show the full bars instead of the zoomed-in version.
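The distortion described above is easy to quantify. Here is a minimal Python sketch (the 74 fps baseline comes from Joe’s example; the two frame rates are hypothetical, chosen only for illustration) showing how the same 4 fps gap is drawn when the axis starts at 0 versus 74:

```python
# Hypothetical frame rates for illustration only.
fps_a = 80.0  # e.g. the leading card
fps_b = 76.0  # e.g. the trailing card, 4 fps behind

def bar_lengths(values, baseline):
    """Length of each bar as drawn when the axis starts at `baseline`."""
    return [v - baseline for v in values]

full = bar_lengths([fps_a, fps_b], baseline=0)    # axis starts at 0 fps
zoomed = bar_lengths([fps_a, fps_b], baseline=74)  # axis starts at 74 fps

# With a zero baseline the bars differ by only 5%...
print(full[1] / full[0])      # 0.95
# ...but with the axis starting at 74 fps, one bar is drawn 3x as long
# as the other, even though the underlying gap is identical.
print(zoomed[1] / zoomed[0])  # ~0.33
```

The numbers have not changed between the two charts; only the drawn bar lengths have, which is exactly why a reader should check the axis labels before judging the gap.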

  5. apoppin says:

    Thanks for the suggestion, Joe.

    I stand by what I said about the GTX 480 “stumbling badly” compared to HD 5870. Let’s look at the context:

    The GTX 480 is generally leading the HD 5870 until we get to Left 4 Dead. Then it *stumbles*

    In my own personal opinion, minus 4-1/2 frames is significant when you consider that the GTX leads the Radeon in the other benches. That is why I suggested that it could be immature drivers holding back the GTX in this Source Engine benchmark.

    As to my charts being misleading, I pointed this out in Unigine Heaven 1.0 benchmark:

    http://alienbabeltech.com/main/?p=16475&page=19

    “Wow! If you look at the graph it looks like a lot, but there is only a 0.1 FPS difference!! Beware of just looking at the graph. Our HD 5870 can keep up with the GTX 480 at the maximum resolution in the Heaven benchmark.”

    I want to make absolutely sure that my readers look carefully at the numbers, not just at the charts and at the “gap”. Marketing and PR often use a chart to further their own agenda.

    Also, if one is comparing just two or maybe three video cards, it is good to emphasize the difference between them, and I believe that it is not necessary to start a graph at “0”; especially if you are dealing with a reasonably high FPS.

    At any rate, always look at the frame rate. The numbers are not misleading in any way and the charts are representative of the close performance between these two most excellent video cards – HD 5870 and GTX 480; each in its own price range and each with a compelling set of features.

  • apoppin says:

    To Jon Worrel,

    Many thanks for your kind words. And sorry for the delay in my response. There have been issues with our editorial staff being able to post comments. :O

    I feel positively about the GTX 480. Perhaps you were expecting a bit more and were disappointed.

    I look at this launch differently. It is as though AMD had brought an HD 2900-XTX to the market and it had beaten the 8800-GTX performance-wise instead of only matching the 8800-GTS. I believe that NVIDIA pulled off quite a feat. I remember Jensen’s words from GTC: “We want a General Purpose Processor that can do amazing graphics”.

    IMO, they delivered – in spades. And I am not referring to any proprietary “features” the new GF100 brings to the market. Besides, I am mostly saving CUDA, 3D and Surround for Part Two of this review – O/C’d GTX 480 vs. PowerColor HD 5870 PCS+ (which I am led to believe is a super-overclocker).

    I believe that this new GF100 chip is the beginning. I believe that NVIDIA could have brought it to market 6 months from now with tamed thermals and all SPs enabled. But that did not seem to make sense from a business standpoint. If you do not like the TDP of the GTX 480, the GTX 470 is 35W lower. And of course it will be respun. It is only logical that they would aim to improve yield, performance and likely bring out a 512 SP-enabled “ultra” – perhaps even with the current stepping.

    I also do not doubt that we will see some kind of “GX2” if their ultra cannot catch the HD 5970. It is their nature to be so competitive – which is wonderful for us as consumers and enthusiasts.

    NVIDIA has recently brought hotter and hungrier GPUs to market for the enthusiasts – that is just the way they do things with their philosophy of monolithic dies. AMD achieves a similar level of performance with a completely different philosophy. No one can say who is right or wrong – just who they prefer.

    I really like both companies. I am impressed with the HD 5870 for about $400 with a CoD MW2 game tossed in. But then I am also impressed by the raw performance of the GTX 480 for $500. I love them both and I look forward to pitting CrossFired HD 5870s against SLI’d GTX 480s.

    And you can look at it another way. I would say that GTX 480 TriSLI would match the performance of your HD 5870 QuadFire with a very similar thermal output and even be price competitive. I also realize that it would not be such a great “upgrade” for you.

  • turingpest says:

    @apoppin:

    “I believe that this new GF100 chip is the beginning. I believe that NVIDIA could have brought it to market 6 months from now with tamed thermals and all SPs enabled.
    … I also do not doubt that we will see some kind of “GX2” if their ultra cannot catch the HD 5970. It is their nature to be so competitive – which is wonderful for us as consumers and enthusiasts.”

    all undoubtedly true. as jonny has already pointed out however, the problem for nvidia is that they are a good 6 months behind ati, who not only have a lot of manoeuvrability with pricing and will probably produce a hd 5890 to do even more damage to nvidia in price/performance terms, but will also be introducing a completely new architecture in 6 months time. and if this doubles evergreens compute power with a similar die size/tdp, then fermi as the leading directx11/directcompute architecture will be superseded.
    of course it remains to be seen what the middle and bottom tier fermi cards will be like – which is of course where the money is made – but nvidia are up against it both time and tech-wise. fundamentally, fermi has left nvidia unable to turn tight. lucky for them they have a legion of loyal customers and a large marketing budget. and then, as you rightly point out, all of this is very good for us consumers.

  • Marcin3891 says:

    Hello fellas

    The main question here is: how long will it take for Nvidia to release a PROPER fermi based GPU with all 512 cores enabled (keep in mind, the 480 and 470 are only half baked chips that most likely didn’t pass the whole 512 core testing – imo, same story as usual – and their prime stock of 512 core chips goes to 3d studio/design/engineering rendering cards), how long will it take for them to release a properly polished off, shiny and squeaky clean set of drivers, and how much more time does Nvidia still have until ATI slaps them in the face with their (I’m assuming the name) 6xxx line of cards?

    I remember when I bought my sli combo of 9800gt’s, I had it for a while on some earlier drivers, and then suddenly a newer revision came out and brought a good performance gain across the board; some titles benefited greatly, but all games I’ve played showed an increase in smoothness.

    At this very moment, any arguments about 480 this, and 5870 that are steam let out too soon, I have a sneaky suspicion that the delay for proper release of the cards until 12th of April isn’t only hardware related, somewhere there is an entire floor of hungry programmers that aren’t getting any food until they compile proper drivers ;)!

    For the next 2 weeks, we can only sit and wait. Most likely new drivers will show up within 7~10 days (before the 12th), then we’ll see again what the numbers are telling us.

    Cheers
    Marcin

  • jonny says:

    My main issue with the late release date and the not-overwhelming numbers is that they are compared to 6+ month old ATI cards. I believe someone from ATI said there will be a refresh at some point for ATI cards, so that should speed them up. Then of course ATI has a 6 month head start on the next gen of cards, and with the issues that fermi has brought up I don’t see nvidia being able to close that gap with a product that can compete and stay cool at a lower power draw. And if you already have a high end ATI card I don’t see much of a reason to switch to nvidia. As far as drivers go, I have been super impressed with ATI so far this year. They are bringing new features and bumping up performance for games as well. Nvidia is pulling drivers because of issues only to re-release them later. I haven’t been impressed with nvidia drivers for a long time.

  • Aaron says:

    If im not mistaken, on page 23, the number 1 is listed as the “HD 5890” when I do believe that its supposed to be “HD 5970”. Just so ya know :D

  • Aaron says:

    and err, Call of Defence? Dont you mean Duty? xD

  • apoppin says:

    Thanks, Aaron .. already fixed.

  • I’m glad I had some more time on my hands and could give you a much closer look. Very impressive work and an impressive site. Added to my bookmarks. :)

  • Bo_Fox says:

    Great review.. I suspect the GTX 480 is just like the HD2900XT was back then on 80nm. At least it’s not as much of a “failure” as the HD2900XT, but it’s way more power-hungry. True, the HD2900XT was considerably slower than the 8800 Ultra, but the power consumption and heat output were pretty much the same. Heck, the consumption was actually equal to the less hungry 8800GTX according to some reviews.

    Anyways, the point here is that Nvidia made sure that they came out on top with the fastest single-GPU card, but at some cost. I mean, the real point here is (LOL) that the Fermi architecture appears to be designed with the future in mind, in the same way that the R600 was designed. With a reduced number of TMUs (albeit more efficient ones), the ratio seems better suited to the number of shaders for better scaling in the future. On a smaller 28nm shrink, we’d be seeing a far better design with at least double the specifications for a more than 100% increase in performance (similar to RV770 versus the original R600) while consuming even less power.
