Vega Thread
Quote:An MSI company representative posting on Dutch tech forums confirmed our worst fears: the RX Vega will have a very high power draw. "Specs van Vega RX gezien. Tering wat power heeft die nodig. Wij zijn er aan bezig, dat is een start dus launch komt dichterbij," said the representative, who goes by "The Source" on Dutch tech forums. Going by Google Translate, and citing VideoCardz (which cited a native Dutch speaker), the MSI rep's statement translates as "I've seen the specs of Vega RX. It needs a damn lot of power. We're working on it, which is a start, so launch is coming closer."
Quote:It's PC World's comments on the Vega card's gaming performance that might pique your interest. In its report, the publication comments that the Radeon Pro Vega Frontier Edition offers gaming performance that is faster than NVIDIA's GeForce GTX 1080, but slightly slower than its GTX 1080 Ti graphics card. To back its statement, PC World claims to have run the Vega Frontier Edition and TITAN Xp in "Doom" with Vulkan API, "Prey" with DirectX 11, and "Sniper Elite 4" with DirectX 12.
Quote:Gaming PC builder Falcon Northwest teased a picture of its upcoming Tiki compact high-performance desktop built on the AMD Radeon theme. The silver-bodied beast shows off a Radeon Pro Vega Frontier Edition graphics card through an acrylic cutout on its side, and will be one of the first pre-built desktops you can buy with the $1,000-ish air-cooled Radeon Pro Vega Frontier Edition.
And Vega Frontier Edition is released:
Quote:Update 1: #define has made an update with a screenshot of the card's score in 3DMark's Fire Strike graphics test. The user reported that the Pro drivers' score "didn't make sense", which we take to mean they are uncooperative with actual gaming workloads. On the Game Mode driver side, though, #define reports GPU frequencies that are "all over the place". This is probably a result of AMD's announced typical/base clock of 1382 MHz and up-to-1600 MHz peak/boost clock. It is as yet unknown whether these frequencies scale as much with GPU temperature and power constraints as NVIDIA's Pascal architecture does, but the fact that #define is using a small case along with the Frontier Edition's blower-style cooler could mean the graphics card is heavily throttling. That would also go some way towards explaining the actual 3DMark score of AMD's latest (non-gaming-geared, I must stress) graphics card: a 17,313-point score isn't especially convincing. Other test runs resulted in higher scores of 21,202; 21,421; and 22,986. However, do keep in mind these are the launch drivers we're talking about, on a graphics card that isn't officially meant for gaming (at least, not in the sense we are all used to). It is also unclear whether there are some configuration hoops that #define failed to go through.
Quote:That’s it. The baseplate has some thermal pads for the FETs & inductors, alongside some fins near the DisplayPort & I/O, but is otherwise plain. The cooler is a large, aluminum block with standard 90-degree fin pitch and standard fin density, using flat-top fins to force air through the cooler block more efficiently. This exhausts out the back of the case, as expected. Conduction is handled by a copper protrusion in the copper baseplate, which covers both the GPU & HBM2 (two stacks). This is a vapor chamber cooler.

As for measurements, we took a pair of calipers to get some rough measurements. These may be off by 0.25-0.5mm. Here’s what we came up with:
  • 30mm x 30mm total size of interposer + GPU (does not include substrate)
  • 20.25mm x ~26mm GPU die size
  • 10mm x ~12mm HBM2 size (x2)
  • ~4mm package height (this is the one that’s least accurate, but gives a pretty good ballpark)
  • 64mm x 64mm mounting hole spacing (square, center-to-center)
  • PCB ~1mm
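Turning those caliper figures into approximate areas is straightforward; a quick sketch, bearing in mind the stated 0.25-0.5 mm error per dimension:

```python
# Approximate footprint areas from the caliper measurements above.
# Each dimension carries roughly 0.25-0.5 mm of error, so treat every
# result as a ballpark figure rather than a datasheet value.

def area_mm2(width_mm: float, height_mm: float) -> float:
    """Area of a rectangular footprint in mm^2."""
    return width_mm * height_mm

interposer_area = area_mm2(30.0, 30.0)      # interposer + GPU, excluding substrate
gpu_die_area    = area_mm2(20.25, 26.0)     # the ~26 mm side is approximate
hbm2_area_total = 2 * area_mm2(10.0, 12.0)  # two HBM2 stacks

print(interposer_area, gpu_die_area, hbm2_area_total)
```

The die comes out around 526 mm² this way, somewhat above the 484 mm² figure quoted later in the thread, which is about what you'd expect from rough outside-edge caliper measurements.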

Our VRM & component-level analysis will come next, conducted by GN resident overclocker Buildzoid. Power, thermals, and noise testing will be published around the same time.
Quote:Also, when asked about the Frontier Edition's (lacking) gaming chops, AMD's Jason Evangelho has come out with the warning that we all expected, and that we ourselves conveyed here: "it's premature to worry about a product's gaming performance by judging a different product NOT optimized for gaming."
Quote:For the math-savvy around here (or even just for those of you who have read the headline), that particular equation should solve towards a perfect 484 mm² die area. Good news for AMD: this isn't the company's biggest die-size in consumer GPUs ever. That dubious honor goes to the company's Fiji XT silicon which powered the company's R9 Fury X, coming in at a staggering 596 mm² in the 28 nm process. For comparison, AMD's current Polaris 20 XTX-based RX 580 chip comes in at slightly less than half the confirmed RX Vega's die-size, at a much more yield-friendly 232 mm². NVIDIA's current top-of-the-line Titan Xp comes in at a slightly smaller 471 mm² die-size.
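A quick arithmetic check of those comparisons (a sketch in Python; the die areas are the ones quoted above):

```python
# Die areas (mm^2) as quoted above.
dies = {"Vega 10": 484, "Fiji XT": 596, "Polaris 20": 232, "Titan Xp": 471}

# "slightly less than half the confirmed RX Vega's die-size"
assert dies["Polaris 20"] / dies["Vega 10"] < 0.5

# Fiji XT remains AMD's largest consumer die, ~23% bigger than Vega 10,
# while the Titan Xp's die is slightly smaller than Vega 10's.
fiji_ratio = dies["Fiji XT"] / dies["Vega 10"]
print(round(fiji_ratio, 2))  # -> 1.23
```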
Quote:The folks at Videocardz have put together an interesting chart detailing the 687F:C1 RX Vega's score history since benchmarks of it first started appearing, around three months ago. This chart shows an impressive performance improvement over time, with AMD's high-performance GPU contender showing an improvement of roughly 15% since it was first benchmarked. That averages out at around a 5% improvement per month, which bodes well for the graphics card... At least in the long term. We have to keep in mind that this video card brings with it some pretty extensive differences from existing GPU architectures in the market, with the implementation of HBC (High Bandwidth Cache) and HBCC (High Bandwidth Cache Controller). These architectural differences naturally require large amounts of additional driver work to enable them to function to their full potential - full potential that we aren't guaranteed RX Vega GPUs will be able to deliver come launch time.
Quote:Disclaimer things first: take this with a grain of salt, since this hasn't seen the amount of confirmations we'd like.
However, it seems that AMD's BIOS is only scheduled to be sent to AIB partners on August 2nd, and AIB partners still have no word on launch dates from AMD, which would hamper their ability to move on to mass production of their designs. This may mean a paper launch from AMD, or perhaps a launch with only AMD reference designs available for order. Remember that the final BIOS is a particularly important input to partners' design customizations, since it usually includes the stock AMD-defined power and temperature limits, power states, and fan curve, which partners leverage in building their customized cooling solutions.
And the liquid-cooled Vega Frontier Edition is out:
Quote:There are a few ways this could play out, based on what we know so far. The 56-CU Vega XL could be positioned between the GTX 1060 and the 1070 with the 64-CU Vega XT (air-cooled) landing between the GTX 1070 and GTX 1080. The air-cooled and water-cooled Vega XTX (OC) would then try to close the gap with the 1080 and approach the 1080 Ti. This makes sense if AMD wants to match the GTX 1080 with its air-cooled XTX, then surpass it by a solid margin with its water-cooled variant.

Alternatively — and this assumes game benchmark scores are low due to driver issues or possibly a need for that water cooler — the Vega XL actually drops in between the 1070 and 1080; the air-cooled Vega XT has a win-some-lose-some battle with the GTX 1080; the air-cooled RX Vega XTX (OC) beats or at least matches the GTX 1080 in most games; and the water-cooled RX Vega XTX (OC) performs well enough to be a slightly slower (but also slightly less expensive) competitor to the GTX 1080 Ti. But based on what we’ve seen to-date from the Vega Frontier Edition, AMD needs to pull two rabbits out of its hat to make that happen. First, it needs to pick up at least 10 percent in average game performance thanks to better drivers. Second, it needs to pick up 8-12 percent of clock speed over the Vega Frontier Edition’s 1.6GHz.
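Putting rough numbers on those two "rabbits" (a sketch; the assumption that performance scales linearly with clock is mine, not the article's):

```python
base_clock_ghz = 1.6          # Vega Frontier Edition's peak clock, per the quote

# Rabbit #2: 8-12 percent more clock than the Frontier Edition.
low  = base_clock_ghz * 1.08  # ~1.73 GHz
high = base_clock_ghz * 1.12  # ~1.79 GHz

# Rabbits #1 + #2 combined, assuming performance scales linearly with clock:
# ~10% from drivers on top of ~10% from clocks is roughly a 21% total uplift.
combined_uplift = 1.10 * 1.10 - 1.0
print(f"target clock range: {low:.2f}-{high:.2f} GHz, "
      f"total uplift needed: ~{combined_uplift:.0%}")
```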
Quote:The trouble with this solution is that it is imperfect by nature. First, every chip is not made the same; ours may undervolt better or worse than others out there, and that means there’s no easy “use these numbers” method. You’ll ultimately have to guess and check at stability to find the numbers that work, which means more work is involved in getting this solution to be rock-steady. That’s not to say it’s difficult work, but it’s certainly not as easy as plugging a card in and using it. We found that some games required 1120mv to remain stable, while others were fine at 1090mv. Ideally, you’d make a profile for each application – but that’s a bit annoying, and becomes difficult to maintain. The next option would be to choose the lowest stable voltage for all applications (in our case, that might be 1120mv). You lose some of the efficiency argument when doing this, as the bottom-end is cut off, but still gain overall.

A straight +50% overpower configuration is a huge waste of power down the PCIe cables, which results in running hotter than necessary and thereby louder.

Software is also buggy and frustrating. No, not everyone sees the same issues – that’s the nature of buggy software. It is difficult to precisely pinpoint the issue causing HBM2’s brutal downclocking of -445MHz, but we have seen it happen routinely and on multiple systems with multiple environments. We think that this has to do with manually configuring all 7 DPM states and their corresponding voltage states; when we only configured DPMs 4-7, the downclocking issue did not occur. Fan speed curves are also inaccurate, and report about 200RPM higher than what the user requests. Wattool has the same bugs as WattMan, and Afterburner can’t adjust voltage (yet). The point isn’t to say that it’s impossible to undervolt like we did, it’s just to say that you should really be aware of all the different variables when tweaking. It’s possible to inadvertently hinder performance (in major ways) if HBM2 underclocks without the user’s knowledge. Keep an eye on it.

As for the task at hand, it seems the best possible configuration is to overpower the core (+50%), undervolt the core (roughly -110 to -90mv), and run a fan RPM that keeps temperatures at or below 80C. That’ll depend on your cooling solution, case, and case/room ambient temperatures.

This yields a decent boost to application performance (professional and gaming) without costing the insane +90W draw of a straight +50% overpower configuration.
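The recipe above can be sketched as a tuning profile. To be clear, this is purely illustrative: `profile`, `hbm2_downclock_bug`, the 1200 mV stock voltage, and the 945 MHz HBM2 clock are hypothetical placeholders, not a real WattMan/WattTool API; only the +50% power limit, the -110 to -90 mV undervolt, the 80C target, and the ~-445 MHz HBM2 bug come from the article.

```python
# Hypothetical sketch of the overpower + undervolt recipe described above.
# None of these names correspond to a real driver API, and the voltage and
# clock constants are assumptions, not measured values.

STOCK_DPM7_MV = 1200   # assumed stock voltage for the top DPM state

profile = {
    "power_limit_pct": +50,                 # overpower the core (+50%)
    "dpm7_voltage_mv": STOCK_DPM7_MV - 90,  # undervolt roughly -110 to -90 mV
    "max_gpu_temp_c": 80,                   # pick a fan curve that holds <= 80C
}

def hbm2_downclock_bug(hbm2_mhz: int, stock_mhz: int = 945) -> bool:
    """Flag the ~-445 MHz HBM2 downclock the article warns about.

    945 MHz as the stock HBM2 clock is an assumption for illustration."""
    return stock_mhz - hbm2_mhz >= 445
```

Per the article, configuring only DPM states 4-7 (rather than all seven) avoided the HBM2 downclock, and per-game stable voltages varied (1090-1120 mV), so the safest single profile uses the highest of those.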
Quote:Which leads to the real question: what does this mean for the upcoming RX Vega product? While I know that testing the Frontier Edition in ONLY gaming is a bit of a faux pas, much of our interest was in using this product to predict what AMD is going bring to gamers later this month. It is apparent now that if the clocks are in the 1600 MHz range consistently, rather than dipping into 1400 MHz and below states as we found with the air cooler at stock, Vega in its current state can be competitive with the GeForce GTX 1080. That’s an upgrade over where it stood before – much closer to GTX 1070 performance.

The question, then, is whether AMD can provide that type of gaming experience at the $499 price point, and whether it will require liquid cooling to do so. AMD probably isn't holding back some magic air-cooler performance tricks for RX Vega, so I don't expect performance and clock consistency to change much. BUT board partner cards that aren't limited to blower-style coolers will likely be able to add a lot of RX Vega performance and capability. This is clearly an instance where a better cooling solution can make a dramatic impact; in recent years, and with current GeForce cards, that hasn't been the case. I do think we will see a liquid-cooled version of RX Vega, but at a higher cost because of the complexity of the design used here. We might also see that HBM2 memory clocked up another 100 MHz or more, boosting things ever so slightly.
Despite the added performance, the Frontier Edition pricing scale leaves a lot to be desired. You are going to shell out $500 more for this version of the card, netting you at most 15% added performance for 50% added cost. Yes, the cooling solution design is impressive, and the card stays under 65°C in our testing at stock settings under full load, but it's a steep price to pay. If the cooler really DOES cost enough to warrant that change, then it's bad news for the potential of a water-cooled RX Vega card and its ability to stay within a reasonable price window compared to GeForce products.

Before doing this testing, I had dismissed the liquid-cooled version of the Vega Frontier Edition completely, but the performance and improvement in clock rate/temperature we saw today gives me hope that RX Vega will be able to turn in a better-than-expected performance when it hits the scene. There are still plenty of ways that AMD could drop the ball, but if they can launch RX Vega at similar performance to the GTX 1080, as we saw here with the card in its 350-watt state, and price it correctly, there will be a line of buyers. And that is despite the obvious power-efficiency disadvantage that AMD will have to accept.
350W new tech to compete with 180W year old tech will be the battle cry on the forums.

The response will be:

A. I don't want a card that might have less VRAM than advertised like the 970.
B. I want a card that plays Game X better because I play a lot of Game X.
C. I want new technology like HBM2.
D. I keep my card a long time and AMD drivers just keep getting better.

Lather, rinse, repeat.

It's always the same.

My guess is something went horribly wrong with Vega, and what launches will be a 2nd or 3rd revision of it. AMD just doesn't have the cash to operate at the level of Intel and NVIDIA.

However, I think the level they do operate at is pretty amazing given the loss of staff and cash they've been through.
This has the markings of AMD leaving the segment, they aren't playing the same game anymore.

Vega is 484 mm², the 1080 Ti is 471 mm²: Vega is bigger, not by much, but it is bigger.

Power consumption- 1080ti is guzzling down a whopping 250 Watts, in order to get Vega to perform *at its advertised clock rates* it is looking like it needs 350 Watts... 

It's built using HBM 2, adding considerable overhead for what amounts to, at the moment, no real benefit. 

If this part was launching alongside Pascal, it would be a run of the mill case of AMD coming up short, but that isn't where we are at- We are staring down Volta.

Given AMD's rightly earned reputation for epically failing to fill retail channels for product launches, and given the utter catastrophe that has been Vega making its way to an actual shipping consumer product, they are looking at one ramp-up quarter of possible sales at *passable* margins before Volta destroys their balance sheets.

This part is markedly more expensive to make than the 1080 Ti: it requires a more expensive cooler and uses far more expensive RAM, and on a cost basis I am doing AMD a *huge* favor by comparing it to the 1080 Ti. While nVidia has been printing huge margins and selling out of 1080 Tis for quite a while now, the much lower-margin Vega, even if it were performing at the same level, would be doing so for smaller margins in an insanely narrow time window before they are dealing with Volta.

In this wet dream scenario for Vega, they would have at most a quarter of ramped production to not only recoup their R&D costs but also try to finance closing a gap which is now looking at being close to two generations: two generations behind the company that many see as the one that is going to knock Intel off the computing throne it has held for decades.

AMD is not competition for nVidia at this point; I don't know why there is any real confusion on this point. By volume AMD has somewhere between one third and one half the market share nV does, entirely at the bottom of the segment in all aspects, most importantly margin-wise. Right now nV's competition is themselves and *maybe* Intel or Google on the outside. Data Center/AI/Auto is where nV's growth is coming from; it will soon be the majority of their income, with better margins and significantly longer return periods per device developed. As a general example: Audi is about to launch the world's first Level 3 autonomous driving vehicle (Tesla could probably argue their newest cars qualify, but Audi is actually claiming it, making them liable). This system is based on the Tegra K1, you know, the chip that debuted in January of 2014.

*This* is the major element that some people seem not to be grasping: nVidia is now entering into mass production with *KEPLER*-based parts that should be generating revenue in the hundreds of millions of dollars. Just entering.

I bring this up in the Vega thread to try to impress an accurate assessment of how this architecture actually stacks up. Validation for these segments, particularly automotive, takes years; actual returns on investment are also years out. AMD, with this architecture, has shown that given an extra year, a larger die, a more expensive platform, and a borderline shocking amount of additional power, they can still soundly get their ass kicked, without nVidia even needing to get Volta out the door.

We are past the hypothetical phase on these future nV revenue streams. They are going into mass production, and the pipeline is already full for the next four (barring Intel/Mobileye popping up something insane real quick, five) years. And Vega... Vega was the last chance AMD had to really try to become remotely competitive with nVidia in at least one segment.

Now, the datacenter is dominated by Intel with nV taking a big slice and AMD being almost nonexistent (and nigh completely absent in the GPU space). Auto is being dominated by nVidia with Intel playing on the fringes and AMD not even being contemplated. And, oh, there are these desktop GPU things they have going too.

The regular refrain from the AMD camp was that they didn't have enough R&D to compete with nVidia (heh, Larrabee, but I digress) because of Ryzen. Well, now you are competing with revenue streams whose margins obliterate desktop GPUs, and which pay dividends many years after you are into straight black (Kepler products entering mass production now, 2H 2017).

This isn't AMD having a run-of-the-mill bad generation. All of the above would apply if Vega were going toe to toe with the 1080 Ti; they aren't even close to that stupidly low bar (given an extra year, a bigger die, additional platform costs, and an almost 50% higher power budget). They aren't even fucking close.

This has the markings of AMD leaving the segment, they aren't playing the same game anymore.

I don't say that lightly or without extensive reasoning.
I will just post here what I posted on ArsTechnica.  None of this surprises me.  I predicted this sort of thing would happen the day I learned Eric Demers had abandoned ship for Qualcomm.

gstanford Wrote:This is what happens when the Chief GPU architect leaves a company. 5 years since Eric Demers left AMD after giving them GCN. 5 years of twiddling with what he left them, desperately rearranging the deck chairs on the Titanic multiple times.

and quite frankly the only reason Eric Demers worked as hard as he did on GCN before leaving is because AMD was being paid by Sony to put the features Sony wanted on the chip.
Adam knew he should have bought a PC but Eve fell for the marketing hype.

Homeopathy is what happened when snake oil salesmen discovered that water is cheaper than snake oil.

The reason they call it the American Dream is because you have to be asleep to believe it. -- George Carlin
(07-19-2017, 01:43 PM)BenSkywalker Wrote: A whole bunch of stuff that makes pretty good sense.

Ben I think you need to copy/paste your post to AnandTech Video Cards forum.

There are a bunch of guys over there like AtenRa, SlowSpyder, Nurtured Hate, etc ad infinitum who want to talk to you about this.

:D

Actually you should do it just to make them earn their free Vegas, the sight of this will launch a dozen PM meetings between them and their mothership.

I disagree on one thing-

I don't see AMD leaving the segment.

A. It's how they make a living. They're not going to just say, "The jig is up, we're beat. Time to fish.". They've lingered around in the CPU market for decades, rarely competing.
B. They have the console market that NVIDIA does not seem to want. They need to keep advancing their GPU tech.
Quote:All in all, I have to say, this tour doesn't inspire confidence in me. This isn't the kind of "in your face" comparison we're used to seeing from companies who know they have a winning product; were the comparison largely in favor of AMD, I posit the company would be taking every advantage of that by showcasing its performance leadership. There did seem to be an inordinate amount of smoke and mirrors here, with AMD going out of its way to prevent attendees from being able to discern between its offering and its competitors'.

AMD reportedly told attendees that the AMD and NVIDIA systems had a $300 difference in AMD's favor. All other hardware being equal, and accounting for AMD's stance that a FreeSync monitor tends to cost around $200 less than a comparable NVIDIA G-Sync enabled one, that leaves around $100 savings solely towards the RX Vega part of the equation. This means the RX Vega could sell around the $459-$500 bracket, if current pricing of the GTX 1080 is what AMD considered.
Post on ATVF!
The readers there haven't seen the Raja quote where he says his driver team wishes Vega were the same as Fiji enough times yet.

Only when that has been posted 500X will people believe writing drivers is hard and Vega will slay! :)
Here, I'll write my request BoFox style!

"Strain turgid against the BVD boundaries of ABT and explode all over the AT forum in a frenzy of fappening!"

(heh- that guy was fun to read, I'm boring compared to him!)
LOL this thread is pretty great. I don't agree with BenSkywalker at all politically, but he was really spot on with his post about Vega. Very well thought out and very accurate IMO.
(07-19-2017, 11:54 PM)SickBeast Wrote: LOL this thread is pretty great.  I don't agree with BenSkywalker at all politically, but he was really spot on with his post about Vega.  Very well thought out and very accurate IMO.
Fully agreed.
An interesting tidbit from here:

Quote:There's a more concerning aspect in the "costs $300 less" argument, since the FreeSync monitor retails for $800 compared to $1300 for the G-Sync display. That puts the G-Sync display plus GTX 1080 at $1850 (MSRP), compared to $1550 for the FreeSync display plus RX Vega (MSRP), but the real price difference is the $500 delta between the monitors. This suggests AMD is looking at around $700 (give or take) for the RX Vega, which would put it into direct competition with the GTX 1080 Ti. If the demonstration really used a GTX 1080 and not a 1080 Ti, performance would appear to be in Nvidia's favor.
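The two competing readings of AMD's bundle math (the earlier $200 monitor-delta framing versus this $500 retail delta) can be laid side by side; a sketch assuming a $500 GTX 1080 street price (the exact brackets shift with whatever 1080 price you plug in):

```python
# Back-of-the-envelope bundle math from the two quotes in this thread.
GTX_1080_PRICE = 500   # assumed street price for the comparison
BUNDLE_DELTA   = 300   # AMD's claimed total system-price difference, AMD's favor

# Reading 1: AMD's framing, FreeSync monitor ~$200 cheaper than G-Sync,
# leaving ~$100 of the delta attributable to the GPU itself.
vega_price_if_monitor_delta_200 = GTX_1080_PRICE - (BUNDLE_DELTA - 200)

# Reading 2: actual retail monitor prices, $800 FreeSync vs $1300 G-Sync,
# which flips the GPU delta the other way.
monitor_delta = 1300 - 800
vega_price_if_monitor_delta_500 = GTX_1080_PRICE + (monitor_delta - BUNDLE_DELTA)

print(vega_price_if_monitor_delta_200, vega_price_if_monitor_delta_500)
```

The spread between roughly $400 and roughly $700 for the same card is exactly why the demo raised more questions than it answered.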

$700USD for GTX 1080 performance. No thanks. I'm not quite sure what AMD is thinking.
RX Vega is revealed:
  • 8 GB HBM2
  • Price not yet announced
  • More details coming
  • The Alienware gaming computer with 16C/32T Threadripper suffered a RAM issue on stage, requiring the RAM to be swapped out
  • 16C/32T Threadripper scores 2,866 in Cinebench
  • Base clock for 16C/32T Threadripper appears to be 2.6 GHz

Quote:The Radeon RX Vega pictured above has a similar aluminum cooler shroud as the Radeon Vega Frontier Edition except in a brushed silver finish. The Radeon logo and other LEDs on the card also seem to shine red here to go with the Radeon color scheme. The cooling solution is similar to reference air-cooled VGAs from AMD with a two-slot blower cooler design and what should be an aluminum or copper heatsink underneath the shroud. The reference RX Vega sports three full-size DisplayPort and one full-size HDMI connectors all in the same row allowing airflow exit holes above and making this a potential one-slot card if paired with a single-slot cooling solution such as a water block on an AIO or as part of a custom watercooling loop. Powering the card are two 8-pin PCIe power connectors which add credence to the previous rumors about a 300+ W TDP on the card.
Lastly, and perhaps most interestingly, pictured is what AMD themselves refer to as a "Radeon Holocube", which has Radeon Vega printed on it and a mention of being "enabled by Radeon Software". It appears to be a display with a screen on at least one side, so perhaps it connects to a system and acts as a GPU status monitor? Your guess is as good as mine here, but needless to say it is intriguing, and we will bring you more information as we get it.
Vega releases August 14, starting at $500:
Quote:Deep technical details aside, we know that Vega 10 is both dense and physically large, outweighing Nvidia’s GP102 in transistor count and die size. It’s going to be hot—official board power specs from 295W to 345W for the RX Vega 64 assure us of this. And AMD’s own guidance puts the card up against a more than year-old GeForce GTX 1080. Make no mistake, we’re excited to see AMD closing the gap with its competition, and we know die-hard AMD fans are glad to see Radeon RX Vega finally coming to fruition (well, almost). But when it comes time to recommend where you best spend at least $500, we’re going to want to see these cards win somewhere. It looks like relative value could be the company’s best hope for this round. The wait is almost over. August 14th is just around the corner.
Actually, according to the Tom's link, Vega starts at $399.
Vega Nano is announced:

The fancy holocube is not being released in August:
Quote:We’re not preemptively calling this Nvidia’s game, not by a long shot. But AMD’s pricing, bundle, and overall part positioning seem to imply the RX Vega 56 will compete against the GTX 1070 while the Vega 64 competes against the GTX 1080. And if you care about power consumption, unless AMD has some crazy last minute optimizations up their sleeve, we’ll be watching the company’s new GPU face off with Nvidia’s May 2016 flagship, not the more recently launched GTX 1080 Ti.
(07-30-2017, 08:07 PM)SteelCrysis Wrote: RX Vega is revealed:
  • 8 GB HBM2
  • Price not yet announced
  • More details coming
  • The Alienware gaming computer with 16C/32T Threadripper suffered a RAM issue on stage, requiring the RAM to be swapped out
  • 16C/32T Threadripper scores 2,866 in Cinebench
  • Base clock for 16C/32T Threadripper appears to be 2.6 GHz

That video just makes me feel sad for all concerned. :(
Quote:According to AMD, the $200 off FreeSync monitor discount is only applicable in 5 countries- USA, Mexico, Canada, Australia, and Singapore. The rest of the world does not even get that option, which makes the whole deal a lot less appealing. All countries get the $100 off Ryzen 7 combo which includes an option of Ryzen 7 1700X and Ryzen 7 1800X CPUs paired with an option of three X370 motherboards- the ASUS ROG Crosshair VI Extreme X370 Motherboard, the GIGABYTE GA-AX370-Gaming K7 Motherboard, and the MSI X370 XPOWER GAMING TITANIUM Motherboard. These are all very expensive motherboards which have not justified their price point, since Ryzen CPUs' limited overclocking potential can be reached on less expensive motherboards as well. Lastly, the two games available are Wolfenstein II: The New Colossus and Prey for all countries aside from Germany, Austria and Switzerland, where Wolfenstein is replaced by Sniper Elite 4. Between all this, and the Samsung CF791 not having the best FreeSync implementation with users reporting display flickering in Ultimate Engine mode that increases the FreeSync range (48-100 Hz), the Radeon Packs are not looking as lucrative anymore and hopefully will be revised, or have the price reduced accordingly for other regions.
Quote:Unless the sensors are lying to us, the GPU’s maximum temperature is 84°C (85°C peak), while the HBM2 gets up to 95°C (96°C peak). That latter figure seems fairly high, but it does end up close to the ceiling for GDDR5X. We'll keep an eye on both temperatures as we get our hands on Radeon RX Vega 64; it's important to ensure the sensor data is 100% accurate.

What jumps out to us is that the board just below the GPU reads ~5°C cooler than the inside of Vega 10! The obvious question is why, and we already gave the answer earlier in our review. The Radeon Vega FE’s GPU and HBM2 sit on an interposer attached to a package substrate positioned on top of the PCB. Furthermore, the interposer doesn’t seem to make full contact with this substrate, causing a so-called underfill issue. Air between the layers acts almost like insulation.
Let’s not beat around the bush: this is the best reference cooler we've seen from AMD since its Radeon HD 2900. A much-improved radial fan supplied by Delta allows the Radeon Vega Frontier Edition to compete on even footing with Nvidia’s GeForce GTX 1080 Founders Edition. According to our measurements, the card’s maximum noise level is 44 dB(A), which almost completely drowns out the coil whine present at very high frame rates. Our video demonstrates this well, especially towards the end:

The frequency analysis doesn’t reveal any low-frequency growling that would be a sign of bad bearings or cheap motors. Meanwhile, the coil whine can be spotted around 8 kHz on the following graph:

Most of the noise is generated by the fan, but it's certainly acceptable. An Nvidia GeForce GTX 1080 Ti, with its ~250W of waste heat, isn’t significantly quieter, even though it's a lower-power card. If you push the 1080 Ti to a more comparable 270W by increasing its power target, noise rises to the same level as AMD's Radeon Vega FE.

This particular competition ends in a draw, with both manufacturers probably hitting the physical limits of what can be achieved. Moving to more expensive cooling solutions would only yield diminishing returns.
It’s hard to reach a conclusion. We just can’t speak to what might or might not be right around the corner. Either way, a fast workstation graphics card that can also be used for gaming does have a certain charm. The Radeon Vega FE’s price point helps a lot as well. We wouldn’t recommend it for the well-off gamer, but those who find themselves in the prosumer space may want to consider it as an alternative to pricier pro boards.

If your workstation-class application works well with AMD's Radeon Vega FE, then this card becomes a steal for the performance you get. That's full driver support in high-end titles for a fraction of the competition's asking price.
Quote:In an interview, AMD's Chris Hook justified Vega's delayed release due to a wish to increase available stock for gamers who want to purchase the new high-performance architecture by AMD. In an interview with HardOCP, Chris Hook had this to say:

"Part of the reason it's taken us a little longer to launch Vega - and I'll be honest about that - is that we wanted to make sure we were launching with good volume. (...) Obviously we've got to compensate for things like coin-miners, they're going to want to get their hands on these. We believe we're launching with a volume that will ensure that gamers can get their hands on them, and that's what's important to us."

It appears that AMD tried their best to increase production and stock volumes so as to mitigate price fluctuations upon Vega's entry to the market due to above normal demand from cryptocurrency miners. The jury is still out on whether Vega will be an option for mining due to its exquisite architecture, however. Still, this sounds as good a reason as any to delay Vega for as long as it has been already. Just a few more days until we see what AMD managed with this one, folks. Check the video after the break.
Tom's advises caution on rumor of Vega being an insanely good mining card:
Vega 64 review sample packaging:
AMD manipulating release of review content again:
Quote:We praised AMD’s cooling solution in our Vega Frontier Edition review. However, a pleasant breeze turns into a raging tornado this time around due to power consumption that's way too high. Then again, an Nvidia GeForce GTX 1080 Ti Founders Edition with a 295W maximum power target is almost as loud.

The version of AMD’s cooling solution used on Radeon RX Vega 64 is significantly different from the one we found on Vega Frontier Edition. It’s too aggressive, too hot, and, of course, too loud. Designing a thermal solution to be "good enough" never works out well for thermals or noise.

Let's get the messy stuff out of the way first. Radeon RX Vega 64 is late. It's hot. It's aimed at the competition's third-fastest product (which is 15 months old, uses a lot less power, and is quieter). And a lot of the architecture's new features are future-looking, rather than beneficial today.

AMD chose a $500 price point to match the 1080, then gave Nvidia enough time to make sure cryptocurrency-inflated prices on its cards were reined in ahead of Vega 64's launch. So now you have a whole handful of third-party GTX 1080s in stock at $500 online.

Yes, AMD does surprise us with performance that typically exceeds our expectations. Based on the company's earlier hints, we were anticipating Radeon RX Vega 64 to tie GTX 1080, at best. However, AMD enjoys an advantage in Doom, The Division, and Dawn of War III. It roughly matches GeForce GTX 1080 in Ashes of the Singularity, Hitman, Metro, and Rise of the Tomb Raider. And it only really loses in Ghost Recon and The Witcher 3. The card is exceptional for 2560x1440 and respectable at Ultra HD, where you'll need to make quality compromises in certain games for smooth frame rates.

Of course, AMD had to flog its Vega 10 GPU to get there. Gaming power consumption in excess of 280W is particularly painful when GeForce GTX 1080 is 100W lower. Even the much faster GeForce GTX 1080 Ti barely passes the 210W mark, based on our measurements. Obviously this isn't an ideal situation, especially when we factor in the temperature and noise measurements. So Vega 64 includes two BIOSes with three power profiles each, allowing enthusiasts to dial in the right balance between performance and acoustics. We plan to test the various outcomes of these settings, but suspect that enthusiasts paying top-dollar for high-end graphics won't want to readily give up frame rates in exchange for a quieter fan. After all, certain GeForce GTX 1080 partner cards already address Vega's shortcomings and have been sitting on shelves for months.
Quote:AMD's Radeon RX Vega 64 delivers good performance that, when averaged, roughly matches the GTX 1080 Founders Edition. This makes the card almost 20% faster than the GTX 1070, 30% faster than the R9 Fury X, and 30% slower than the GTX 1080 Ti. What is interesting, though, is that individual benchmarks show huge differences between NVIDIA and AMD. In some games NVIDIA is far ahead, in others AMD is the clear winner. The difference here doesn't seem to be DirectX 12 (which was the case with Polaris). For example, in Hitman, which is even an AMD-sponsored title, the GTX 1080 is quite far ahead. Also worth a look are games that are CPU-limited, like Civilization VI. Here we see AMD's Vega cards bunched up against an invisible wall; a wall which sits at a lower FPS value than on NVIDIA. This suggests that AMD's drivers consume more CPU time than NVIDIA's to complete the same tasks, leaving less CPU time for the game. Given the numbers we see in our review, we can't recommend RX Vega 64 for 4K gaming; it will shine at 1440p, though. AMD needs to promote its new architectural features, such as packed math and primitive shaders, to game developers. A game that utilizes Vega's hardware resources the way AMD intended will be significantly faster than on the GTX 1080, even including optimizations for "Pascal."

Power efficiency has been AMD's weak point for a while, and RX Vega 64 can't really make that a thing of the past. It seems that in order to achieve the clock speeds required to beat the GTX 1080, the company had to increase voltages far outside the Vega GPU's comfort range, which results in terrible power consumption. Power consumption alone won't matter to most people, but higher power means more heat, which the cooler has to deal with, which usually means more fan noise. AMD has done what it can to limit noise levels by capping fan RPM and allowing temperatures to go up to 85°C, but it's just not enough. The card is very noisy at 45 dBA in gaming, which almost disqualifies it for people who are noise-sensitive. I also miss an idle-fan-off feature that turns the card's fans off during desktop work, Internet browsing, or even light gaming. Some coil noise at high FPS is audible on our sample; other reviewers confirm this, too.
Quote:AMD Radeon RX Vega 56 is the little sister of Vega 64. It comes at a 20% lower price than the RX Vega 64, yet it has only 12.5% fewer stream processors and runs at lower operating frequencies. In our testing, we see a surprisingly small performance impact from cutting down the "Vega 10" silicon, with performance just 11% lower than on the RX Vega 64. This makes the RX Vega 56 6% faster than the competing GeForce GTX 1070 and 12% slower than the GTX 1080, at only $399. I would recommend Vega 56 over Vega 64 for 1440p gaming as it provides perfectly sufficient framerates for that scenario, especially when paired with a FreeSync monitor, which will give you a stutter-free experience even if FPS dip a bit below 60. The money saved over Vega 64 could go toward that FreeSync monitor, or let you upgrade other components of your system, like moving from an HDD to an SSD.
Besides good performance the other big surprise is how power-efficient Vega can be, if it's operating in the right clock/voltage band. Our testing shows power efficiency being close to the GeForce GTX 1060, which means Pascal is not that far away, when running at the right settings. This increased efficiency translates into lower power draw, which means less heat is generated. This lets the cooler work less hard and results in less fan noise distracting your ears.

The card is by no means silent, but it is clearly quieter than the RX Vega 64, running lower temperatures at the same time. If AMD had optimized the fan profile a bit better (allowing higher temperatures), then fan noise could be well below 40 dBA at full load, especially on custom designs, where board partners will most certainly use better coolers than the AMD reference solution.
Quote:Another reason for the performance deficit between the RX Vega 64 and the GTX 1080 in our initial standings is, I think, a lot simpler. The RX Vega 64 reference card seems to be running on the ragged edge of its voltage-and-frequency-scaling curve, and its factory fan profile doesn't allow the GPU to run at its peak clock speeds for extended periods, if at all. Our card was plenty happy to overclock with its blower cranked and a bunch of fans blowing on it, but the eyebrow-raising power draw and physically painful noise levels that ensued showed why AMD isn't pushing Vega over the shoulder of the voltage-and-frequency-scaling curve and into its ear.

Even at stock speeds, power consumption is the bane of the Vega GPU. Our system power draw with the RX Vega 64 installed exceeded that of even the GeForce GTX 1080 Ti, at 408W for the RX Vega 64 and about 350W for the GTX 1080 Ti, and it peaked at over 500W for the Vega chip once we informally explored overclocking. For a more apples-to-apples comparison, installing the GTX 1080 Founders Edition caused our system to draw about 272W. Those extra watts mean more expensive power supplies, more robust cooling fans, potentially higher noise levels, and a need for better climate control in one's gaming den. The Power Saver Wattman profile goes a long way toward taming the RX Vega 64 for next to no performance cost, but there is no denying that performance-per-watt remains a challenge for AMD's architects.
The RX Vega 56 is a happier story for AMD. As was the case with the R9 Fury versus the R9 Fury X, losing eight of the full Vega 10 chip's compute units to the world's tiniest chainsaw just doesn't hurt the Vega 56 that much. Our indices of 99th-percentile frame times and average frames per second put the Vega 56 dead-on with a hot-clocked GTX 1070, and only 10% behind the Vega 64 in our FPS index. The 56 does draw much more power than a GTX 1070 in our test system, but not in the eye-popping way of the Vega 64.

The RX Vega 56 looks especially nice in light of the FreeSync monitor selection these days. One can get a 144-Hz IPS gaming display with a 30-Hz-to-144-Hz FreeSync range for just $500 now, compared to about $800 for a comparable G-Sync display. That $300 could go a long way toward more powerful components elsewhere in a system, or it could stay in one's pocket. It's not a stretch any longer to say that a variable-refresh-rate display is the way to game, and the RX Vega duo completes a puzzle for FreeSync that has been unfinished for a long time. I could go on and on about how revelatory VRR tech still is for the gaming experience, and I'm happy to see it become potentially more accessible again for high-end gaming. Assuming you can tolerate the higher heat and noise levels of the Vega 56 compared to the Pascal competition, its FreeSync support makes it a strong contender for the entry-level high-end graphics card to get.
Quote:Another interesting bit this design has going for it is the use of 1x 6-pin and 1x 8-pin power connectors, whereas AMD's reference design makes use of 2x 8-pin. Maybe AMD is being especially careful with its reference design after the RX 480's PCIe power hogging?
Quote:We checked around, and the only other web shops with listings for RX Vega cards are Best Buy, which is carrying XFX's range, and NCIX US and its selection of back-ordered Sapphire cards. Meanwhile, Fry's Electronics, OutletPC, SuperBiiz, and even Amazon are bereft of listings for the new video cards. Altogether, these data points would indicate that we've simply been too quick on the draw. It's likely the listings are less "sold out" and "back ordered" and more akin to "newly listed."
Quote:Trying to keep our community entertained and distracted from the growing pains of waiting for AMD's NDAs on Vega reviews to expire is one of our missions. As such, while we know what you want are actual performance numbers, price/performance charts, and an in-depth, independent review, you'll not find such answers in this post. You will find, however, some interesting tidbits on AMD RX Vega designs. In this case, a triple-fan cooling solution for AMD's RX Vega 56 (the smaller Vega).

Linus' intro will make all of us smile:

Edit: Nearly forgot this, go to time index 07:04:

Alright, I wanted to wait to reply to this thread until the reviews were live and we all had a chance to get a good range of data points to digest. I read every post daily, even though I don't often reply. :)

Quote:A. It's how they make a living. They're not going to just say, "The jig is up, we're beat. Time to fish.". They've lingered around in the CPU market for decades, rarely competing.

This has a couple of different elements to it, and both of them are rather important. One is the rate at which either division needs to advance, and one is a straight balance sheet issue.

If I were to tell someone today that I had an i5 3570K in my primary gaming rig, nobody would bat an eye; it wouldn't be given a second thought. That chip was mid-range in 2012. Mid-range in 2012 on the other end? GTX 660. You all, I would hope, would be busting my balls about running that dinosaur in anything with gaming duties in 2017. The performance disparity in most games between a 3570K and a top-of-the-line current $1K Intel CPU (or AMD) is going to be less than 100% almost all of the time, and the majority of the time it will be *significantly* less. The difference between a 660 and a 1080 Ti? Can we just throw out 400% and agree that is conservative?

From an engineering perspective the CPU team was "chasing" a snail strolling uphill and spent years and billions of dollars to not quite do it. YoY CPU improvements have been well below 20% for a very long time now. The GPU team, on the other hand, is facing competition moving so fast that it has managed to swipe ~$1 billion in data center sales in the first half of this year with a fucking graphics card. MIT has named nVidia the smartest company on the planet (links have been giving me issues when posting; you can Google it, no joke).

So from an engineering perspective, on one side you have a division that used a couple billion to chase down that snail, and almost managed to do it. On the other side, you have a division that in the last five years went from neck and neck with the competition to being a year late, significantly more expensive to build, and slower by a staggering margin (this is the engineering side: Vega 64 goes toe to toe with Titan, the 56 with the 1080 Ti; that is engineering reality) while consuming a borderline insane amount of additional power.

To make matters worse, you are staring down a *major* architectural upgrade from the competition which will hit before you have production fully ramped, likely relegating your halo product to somewhere around the competition's 4th or 5th tier, against parts likely using about 20% of the power.

That is the first side to look at, and that is where they look best.

Now on the business side we are simply taking the realities of the financial aspects into account.

The Vega 56 costs more to produce than the Titan Xp. That isn't spin, that isn't me trying to bash AMD; that is simply a point of fact. It is larger, uses far more expensive memory, and uses considerably more power, requiring more power circuitry and a better cooler.

The Vega 64 in any realistic sense is DoA; only drooling fanboys are going to touch it, and the Vega 56 is only somewhat viable because of the miners' impact on 1070 pricing (MSRP is $350 for the 1070). Given that this product is over a year late, is more expensive to produce than the Titan, and can only reasonably be sold for around $400, what is a realistic estimate on margins? I would say we could be very generous and call it 30%. Next we have some market numbers to look at:


The two elements we want to focus on are the total shipments and the segment breakdown. Less than 10% of the market is in the enthusiast segment, probably closer to 5%, but I'm trying to err on AMD's side here, so we'll call it 10%. Based on their sales, we'll round up for their benefit again, and we get 300K units per quarter. We are giving them 30% margins, which works out to gross margins of roughly $45 million per quarter.

To put that number into perspective, nVidia's quarterly R&D budget was $85 million... in 2005. They have now pushed over $400 million a quarter and implied that it will quickly grow to over $500 million a quarter.

"Now wait a second here, Ben, you're leaving out that crazy console money." Fair point; let's be very generous once again and give AMD a whole lot of loving on that end.

We are going to call it a $25 *margin* on every console sold for this entire generation and we will give all of the credit to the GPU side, sound good?

Right now we are sitting at 90 million consoles shipped for the two AMD-powered platforms over the course of four years, give or take. I'm going to do AMD a solid and say they will move 200 million consoles over the course of seven years; sound generous enough? That gives them a whopping $5 billion in straight margins, which broken down to a quarterly basis puts us at about $179 million per quarter. If we add that to the Vega margins we end up at about half of what nVidia is using for R&D right now. Let's be clear: nVidia was making considerably less than that in margins on its contracts, and both MS and Sony turned their noses up at nV this generation because they were too greedy, but like I said, I'm going all in on trying to prop up the AMD side of the equation here.

Now we still have the other 90% of the discrete market to look at, which is true, so let's take that 2.7 million units and give them another 30% margin on an ASP of $250: $202 million more per quarter coming in as straight margins.

Anyone got any problems with my numbers? Anyone thinking I am being unfair to AMD in any way? I'm bending over backwards to give them every benefit I can, and these are the numbers I'm hitting.

In this dream-land, pie-in-the-sky scenario, AMD can equal nV's current quarterly R&D budget and make *$0* profit. It actually turns negative if I get down to the final figures: they would be losing a few million a quarter even in the fanboy dreamland scenario I spelled out.
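For anyone who wants to check the arithmetic, here is the whole back-of-envelope estimate above as a small Python sketch. All the inputs are the post's own deliberately AMD-friendly assumptions (300K enthusiast units per quarter at a ~$500 ASP, a flat $25 margin on 200 million consoles over seven years, 2.7 million mainstream units at a $250 ASP), not real financials:

```python
# Back-of-envelope recap of the margin estimate in the post above.
# Every input is the post's own deliberately AMD-friendly assumption.

QUARTERS = 4  # quarters per year

# Enthusiast (Vega) segment: 300K units/quarter, ~$500 ASP, 30% margin
vega_margin = 300_000 * 500 * 0.30            # $45M per quarter

# Consoles: 200M units over 7 years at a flat $25 margin each
console_margin = 200_000_000 * 25 / (7 * QUARTERS)  # ~$179M per quarter

# Remaining 90% of the discrete market: 2.7M units/quarter, $250 ASP, 30% margin
mainstream_margin = 2_700_000 * 250 * 0.30    # ~$202M per quarter

total = vega_margin + console_margin + mainstream_margin
print(f"Vega:       ${vega_margin / 1e6:.0f}M/quarter")
print(f"Consoles:   ${console_margin / 1e6:.0f}M/quarter")
print(f"Mainstream: ${mainstream_margin / 1e6:.0f}M/quarter")
print(f"Total:      ${total / 1e6:.0f}M/quarter")
```

The total lands around $426 million per quarter, which is the post's point: even with every assumption bent in AMD's favor, gross margins from graphics roughly equal nVidia's current quarterly R&D spend, leaving nothing for AMD's own R&D, operations, or profit.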

So now we must put the two different factors together: the engineering one and the financial one. The engineering side is way behind nVidia, and normally that would indicate a need to increase R&D to a level that would allow you to catch up to your competition. If nVidia were a merely competent company, a reasonable assumption would be that matching their R&D would allow you to close the gap; but this is "The Smartest Company in the World," so it is reasonable to assume you would have to outspend them to close it.

That's where we run into the next problem. Intel, the 800-lb gorilla of the computing world for decades, has outspent nVidia by orders of magnitude and is losing ground to them at a staggering rate. Intel had its pick of all the top engineering talent for many years; now nVidia has that luxury, while AMD would likely be, at best, the fifth option on most top engineers' lists of places to work (Qualcomm, Apple, Samsung, Sony, MS, etc.).

If you are the CEO sitting back, going over the financials and combining that with the engineering reality that your part is roughly 40% slower *and* consumes 30% more power *and* is more expensive to produce: is that a path you want to try to rectify, given the financial realities?

All of this is looking at it from AMD's perspective. nVidia is moving into so many different markets that its margin for error simply keeps growing; they have leveraged GPUs into the fastest-growing commodity in the market, and have done so while continually setting new high marks for performance/watt and revenue generated per $ of R&D (by diversifying the segments effectively covered by their technology).

If you are sitting there as CEO of AMD today, do you really look at all these numbers and green-light any new projects?

AMD got its modern shot at relevance through the brilliant design team it acquired: ArtX. Those were the people responsible for the legendary R300, the first time we saw a company go toe to toe with current nVidia tech and have a compelling argument that it was flat-out superior. This was a group of former SGI engineers; they designed the N64 rasterizer and had monopolized Nintendo console designs since. So with that said, and with nVidia opting out of the last console round altogether, what happened with this generation of Nintendo?

They went with a years-old, off-the-shelf ultra-mobile chip from nVidia, despite nVidia's markedly higher margins. Can AMD risk the other two heading in a similar direction? Vega vs Volta, which is where the majority of this generation will be fought, could realistically see AMD's 350-watt parts struggling against nV's 100-watt offerings. As a console manufacturer, margins or not, where do you draw the line? Additional power usage requires better cooling and a bigger power supply, and lowers reliability; I mean, how much is worth it? What makes matters worse, nV already has laptop chips that can go toe to toe with Vega 64, and they are markedly cheaper to produce.

Honest analysis: we are staring down a situation (Volta vs Vega) where nVidia's mid-tier laptop parts are likely to perform anywhere from roughly equal to clearly superior to AMD's 350-watt desktop parts. So we could be looking at a situation where nVidia demands 60% margins versus 30% for AMD, and nV still ends up *markedly* cheaper.

Again, if you are the CEO, honestly assessing the competitive landscape and the realistic limitations of your GPU design team, are you going to spend that $400 million to try to go toe to toe with nVidia and, in the best-case scenario, only lose tens of millions a year?
Ben, your points are well thought out as usual.

I would note that AMD doesn't "have" to make NVs R&D budget, they have to hire well. Have you seen the movie "Moneyball"?

In every industry there are people who have the ability to play above their pay grade that don't get hired by the big players for a variety of reasons. Too young/inexperienced, told a boss to kiss his/her ass, can't get to work on time, can't play nice with others, booze or dope, etc..

I think AMD just needs to hold on, sell some Vegas, and that for a while Ryzen and consoles are going to be funding the graphics division.

AMD made too much of an investment in graphics and graphics are too intertwined with CPUs these days for AMD to just walk away.

If they can hire well in terms of engineering talent, maybe they can make strides with Navi or the next gen. Ryzen has certainly given some legitimacy back to the brand, which should help AMD's prospects as an employer.
Quote:In every industry there are people who have the ability to play above their pay grade that don't get hired by the big players for a variety of reasons. Too young/inexperienced, told a boss to kiss his/her ass, can't get to work on time, can't play nice with others, booze or dope, etc..

You are describing Richard Huddy, Roy Taylor and Dave Baumann to perfection there! and they have ended up exactly where they deserve to be -- at the bottom of the barrel!

My opinion is that Vega was a paper launch. Newegg apparently had only 20 Vega cards to sell, and Newegg is Canada's largest PC hardware retailer. Other stores didn't even get them. AMD is looking more and more desperate. They just blatantly lied again, saying that the cards would be launched with good supply. I don't know where they find their marketing people. They make the company look like a complete and utter fly-by-night operation. It has become a complete joke.
