Turing Discussion Thread
#1
https://www.techpowerup.com/245088/nvidi...rce-series
Quote:The BoM also specifies a tentative timeline for each of the main stages of product development, leading up to mass-production. It stipulates 11-12 weeks (2-3 months) leading up to mass-production and shipping, which could put product-launch some time in August (assuming the BoM was released some time in May-June). A separate table also provides a fascinating insight into the various stages of development of a custom-design NVIDIA graphics card.
Reply
#2
https://www.techpowerup.com/245318/nvidi...-indicates
Quote:It's more likely, though, that we're looking at a product launch and announcement that precedes the Hot Chips presentation. This breadcrumb trail could be not much more than wishful thinking, though: NVIDIA CEO Jensen Huang himself said at COMPUTEX 2018 that we might have to wait for a long time before new GeForce hardware is actually launched.

This is both expected and unexpected for a variety of reasons. Personally, I believe NVIDIA would only reap benefits by introducing its new 1100 or 2000 series GeForce graphics cards before AMD has its act together for their next generation Radeon products. NVIDIA has enjoyed an earlier time to market with their solutions for some time now, and that means they tend to entrench themselves in the market with their new solutions first, addressing the urge for users to get the next shiny piece of graphics hardware they can. At the same time, it gives them the opportunity to launch products with raised costs upfront (if mumblings of increased base pricing of GeForce products to capitalize on expected cryptocurrency demand are anything to go by). This means the company could begin filling up its war chest for price cuts should AMD pull a rabbit out of its proverbial hat with an extremely competitive lineup of products - as it has done in the past.
Reply
#3
https://www.techpowerup.com/245399/nvidi...-inventory
Quote:Write-offs in inventory are costly (just ask Microsoft), and apparently, NVIDIA has found itself guilty of a miscalculation: overestimating gamers' and miners' demand for their graphics cards. When it comes to gamers, NVIDIA's Pascal graphics cards have been available in the market for two years now - it's relatively safe to say that the majority of gamers who needed higher-performance graphics cards have already taken the plunge. As to miners, the cryptocurrency market contraction (and other factors) has led to a taper-out of graphics card demand for this particular workload. The result? NVIDIA's demand overestimation has led, according to Seeking Alpha, to a "top three" Taiwan OEM returning 300,000 GPUs to NVIDIA, and "aggressively" increased GDDR5 buying orders from the company, suggesting an excess stock of GPUs that need to be made into boards.

With no competition on the horizon from AMD, it makes sense that NVIDIA would give the market time to assimilate their excess graphics cards. A good solution for excess inventory would be price-cuts, but the absence of competition brings that to a halt: NVIDIA's solutions are selling well in the face of current AMD products in the market, and as such, there is no need to artificially increase demand - and lower ASP in the meantime. Should some sort of pressure be applied, NVIDIA can lower MSRP at a snap of its proverbial fingers.
Reply
#4
New GTX 1180 sighting, and this time it's appearing as Turing, using GDDR6, with a September 28 release date: https://www.techpowerup.com/245657/nvidi...ese-stores
Reply
#5
The existence of the GTX 1160 has reportedly been leaked by Lenovo: https://www.techpowerup.com/245719/lenov...e-gtx-1160
Reply
#6
Leaked email indicates that the GeForce 11 series is coming, starting on August 30, with the GTX 1180+, GTX 1180, GTX 1170, and GTX 1160: https://www.techpowerup.com/246207/nvidi...tes-leaked
Reply
#7
https://www.tomshardware.com/news/geforc...37498.html
Quote:The above information should be taken with a grain of salt. Gamer Meld didn’t give any clue as to which OEM the information came from, so we can’t begin to verify the details within.
Reply
#8
https://www.tomshardware.com/news/nvidia...37530.html
Quote:Nvidia's announcement says the "event will be loaded with new, exclusive, hands-on demos of the hottest upcoming games, stage presentations from the world’s biggest game developers, and some spectacular surprises." Naturally, speculation is running high that Nvidia will announce its latest GPUs, which are rumored to come to market in late August, at the event.

Nvidia is holding the event on August 20, which is the same date as Nvidia's now-canceled "Next Generation Mainstream GPU" presentation at Hot Chips 2018. That presentation was largely viewed as the first introduction of the finer details of the new GPU architecture, but the conference removed the listing from the schedule after extensive press coverage.

The event is open to the public. The Eventbrite registration says the venue has limited capacity and entry will be first-come, first-served. The company is also livestreaming the event, so you won't be entirely left out if you aren't in Cologne, Germany for the gaming celebration. Nvidia hasn't provided a link to view the live streamed event, but we expect that information will come to light when it announces the location of the secretive event.
Reply
#9
https://www.tomshardware.com/news/nvidia...37391.html
Quote:The latest and perhaps most substantive development is a story from VideoCardz with an image of what may (or may not) be a leaked Nvidia-made GTX 1180 PCB. The image, which comes from a Baidu user, seems to show a reference board with both a six- and eight-pin power connector, a non-standard SLI connector (perhaps as part of an NVLink implementation), and a fairly small pinout area for the GPU itself. VideoCardz also points out that there is no DVI connector on this board. Perhaps Nvidia has nixed the aging port in favor of the reported VR-centric VirtualLink connector based on USB-C. There will certainly be gamers with older monitors who will be affected by this if that's the case--even if they just have to buy a new cable or an adapter.

As with any such leaks, it's tough to say anything for sure about this image. It could be legit, it could be doctored, or it could be a PCB for a future card other than the GTX 1180/GTX 2080. So you shouldn't take any of this as fact. But with a rumored launch at Gamescom in August, we may not have long to wait before we know more for certain.
Reply
#10
AIDA 64 beta release lists GTX 1180: https://www.techpowerup.com/246565/lates...e-gtx-1180
Reply
#11
https://www.neowin.net/news/the-founders...oling-fans
Quote:It's not uncommon for third-party graphics card makers to build dual-fan GPUs based on reference designs from Nvidia, but this would be the first time that the company does it in its own variation of the GPU. This could indicate that the card will have significantly more power inside, thus the need for additional cooling.

This is backed up by comments from GPU manufacturer Galax, who says that the performance will see a "breakthrough growth" with Nvidia's upcoming line of hardware. The company also says to expect information about the cards sometime in September, which could be the closest we have to an official announcement date, though this should still be taken with a grain of salt. Earlier this year, the cards were expected to debut in June, a launch that never materialized.

https://www.techpowerup.com/246656/nvidi...-dual-fans
Quote:The PCB pictures revealed preparation for an unusually strong VRM design, given that this is an NVIDIA reference board. It draws power from a combination of 6-pin and 8-pin PCIe power connectors, and features a 10+2 phase setup, with up to 10 vGPU and 2 vMem phases. The size of the pads for the ASIC and no more than 8 memory chips confirmed that the board is meant for the GTX 1080-successor. Adding to the theory of this board being unusually hot is an article by Chinese publication Benchlife.info, which mentions that the reference design (Founders Edition) cooling solution does away with a single lateral blower, and features a strong aluminium fin-stack heatsink ventilated by two top-flow fans (like most custom-design cards). Given that NVIDIA avoided such a design for even big-chip cards such as the GTX 1080 Ti FE or the TITAN V, the GTX 1080-successor is proving to be an interesting card to look forward to. But then what if this is the fabled GTX 1180+ / GTX 2080+, slated for late-September?
Reply
#12
Turing officially announced, Quadro cards will be coming first, no word on Turing Geforce cards: https://www.tomshardware.com/news/nvidia...37599.html
Reply
#13
Turing Quadro cards are using GDDR6: https://www.techpowerup.com/246759/samsu...s-solution
Reply
#14
https://www.tomshardware.com/news/ray-tr...37600.html
Quote:Nvidia has been pushing ray-tracing technology for at least a decade. In 2008, it acquired a ray-tracing company called RayScale, and two years later at Siggraph 2010, it showed the first interactive ray-tracing demo running on Fermi-based Quadro cards. After witnessing the demo first-hand, we surmised that we would see real-time ray-tracing capability “in a few GPU generations.”

A few generations turned into six generations, but Nvidia finally achieved real-time ray tracing with the new Quadro RTX lineup. When the company releases gaming-class GPUs that support real-time ray tracing, which could happen as soon as next week, we should see a big improvement in graphics fidelity in future video games. Real-time ray tracing is a foundational step towards game graphics that are indistinguishable from the real world around us.

https://www.extremetech.com/computing/27...w-possible
Quote:Jen-Hsun Huang is claiming that this new GPU represents a fundamental shift in capabilities and could drive the entire industry towards a new mode of graphics. It’s possible he’s right — Nvidia is in a far more dominant position to shift the graphics industry than most companies. But I’m also reminded of another company that thought it could revamp the entire graphics industry with a new GPU architecture that would be a radical departure from everything anyone had done before, with a specific goal of enabling RTRT. The company was Intel, the GPU was Larrabee, and the end result was not much in particular. After a brief flurry of interest, Intel killed the card and the industry went along its path.

Obviously, that’s not going to happen here, given that Nvidia is shipping silicon, but the major question will be whether the very different techniques associated with real-time ray tracing can catch on with developers and drive a major change in how consumer graphics are created and consumed. The odds of a global market transformation in favor of real-time ray tracing will increase substantially if companies like AMD and eventually Intel throw their own weight behind it.
Reply
#15
https://www.tomshardware.com/news/ashes-...37605.html
Quote:Which brings us back to Ashes of the Singularity: Escalation. We've long used the game to benchmark new graphics cards and processors--it offers a variety of quality settings, relies on both the GPU and CPU and is a generally accepted measure of how well a system handles real-time strategy games. Now someone with the handle "nvgtltest007" has used the game on Crazy settings at 4K resolution to benchmark the "Nvidia Graphics Device" paired with an Intel Core i7-5960X clocked at 3GHz (and yes, we suspect that gobbledygook of a username is supposed to refer to secret testing of Nvidia cards).

The "Nvidia Graphics Device" appeared to score well enough. It managed to squeeze out 75.1, 60.6 and 57.4 frames per second in the game's normal, medium and heavy batch tests, respectively. The recorded CPU frame rates were 138.6, 118.4 and 91.9, respectively. That averages out to a frame rate of 63.5 and CPU frame rate of 113 on Crazy settings. For reference, we've gotten between 40 and 59.5 frames per second out of the GeForce GTX 1080 Ti 11GB with Crazy settings at a 3,840 x 2,160 resolution. Therefore, this mysterious Nvidia device bottoms out near the top of what the GTX 1080 Ti can achieve.

Those numbers should be taken with a pound of salt, however, because we didn't run the tests on the "Nvidia Graphics Device" ourselves. We don't know all the differences between the setups and methodologies we use and what "nvgtltest007" uses. But if the benchmark is at least close to accurate, there's another reason to be excited for whatever Nvidia plans to announce sometime soon. We're looking forward to getting this mystery device into our own test systems so we can benchmark it ourselves. And, you know, finally play some Crysis.
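
A side note on those averages: a simple arithmetic mean of the three batch results comes out to ~64.4 (GPU) and ~116.3 (CPU), not the quoted 63.5 and 113, but the harmonic mean reproduces both figures almost exactly - which suggests the benchmark averages frame times rather than frame rates. A quick check (my own arithmetic, not from the article):

Code:
# Sanity check of the reported Ashes averages: the harmonic mean of the three
# batch results (i.e., averaging frame times, assuming equal frame counts per
# batch) matches the quoted 63.5 / 113 figures.
def harmonic_mean(values):
    return len(values) / sum(1.0 / v for v in values)

gpu_batches = [75.1, 60.6, 57.4]    # normal / medium / heavy batch FPS
cpu_batches = [138.6, 118.4, 91.9]

print(round(harmonic_mean(gpu_batches), 1))  # -> 63.5
print(round(harmonic_mean(cpu_batches), 1))  # -> 113.0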
Reply
#16
Nvidia hints that the Turing series GPUs will be the 20xx series, not the 11xx series: https://www.extremetech.com/gaming/27536...-next-week
Reply
#17
Gainward is teasing a new product that will be released on August 20: https://www.techpowerup.com/246821/gainw...ugust-20th
Reply
#18
Leaked photos of RTX 2080 and RTX 2080 Ti: https://www.tomshardware.com/news/msi-ge...37629.html
Reply
#19
https://www.tomshardware.com/news/geforc...37635.html
Either these prices are preorder prices, a consequence of the lack of competition at the high end, or just placeholders.
Quote:A sharp-eyed Redditor spotted a listing on PNY's website for its GeForce RTX 2080 Ti Overclocked XLR8 Edition for $1,000. The leak, which has since been taken down, also listed specs for the GeForce RTX 2080 8GB XLR8 Gaming Overclocked Edition.

The cards come bearing the Turing architecture. According to the listings, the $1,000 RTX 2080 Ti comes equipped with 4,352 CUDA cores and 11GB of GDDR6 with a 352-bit memory bus that provides 616 GB/s of throughput. PNY listed boost clocks at 1,545 MHz. The card is fed with two 8-pin connectors. However, it is also possible that the $1,000 price tag for the 2080 Ti is merely a placeholder. This same model also comes as a 1080 Ti that retails for $899, which potentially gives us an idea of the price deltas we can expect between generations.

The RTX 2080 comes packing 2944 CUDA cores and 8GB of GDDR6 with a 256-bit memory bus that pushes 448 GB/s of bandwidth. Boost speeds weigh in at 1,710 MHz. The card is fed with 8-pin and 6-pin power connectors. PNY purportedly listed this model for $800, but the listing was removed before we could grab a screenshot. Again, this is likely a placeholder.
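
The quoted bandwidth figures check out against the listed memory specs - GDDR6 bandwidth is just the per-pin data rate times the bus width in bytes. A quick check:

Code:
# Peak GDDR6 bandwidth = data rate per pin (Gbps) x bus width (bits) / 8.
def gddr6_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(gddr6_bandwidth_gb_s(14, 352))  # RTX 2080 Ti listing -> 616.0 GB/s
print(gddr6_bandwidth_gb_s(14, 256))  # RTX 2080 listing    -> 448.0 GB/s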
Reply
#20
Turing cards announced; prices are for the Founders Edition cards only: https://www.tomshardware.com/news/nvidia...37647.html
There's also this: https://www.techpowerup.com/246905/usb-t...-by-nvidia
Quote:With the latest GeForce RTX "Turing" family, NVIDIA could push for the adoption of USB type-C connectors with DisplayPort wiring, and perhaps even USB-PD standards compliance, pushing up to 60 Watts of power from the same port. This USB+DP+Power connector is called VirtualLink. This could make it easier for VR HMD manufacturers to design newer generations of their devices with a single USB type-C connection for display and audio input from the GPU, USB input from the system, and power. We reckon 60W is plenty of power for a VR HMD.
Reply
#21
SEP prices for the Turing cards: https://www.techpowerup.com/246926/nvidi...the-making
Reply
#22
https://www.gamersnexus.net/news-pc/3355...ce-release
Quote:NVIDIA announced its new Turing video cards for gaming today, including the RTX 2080 Ti, RTX 2080, and RTX 2070. The cards move forward with an upgraded-but-familiar Volta architecture, with some changes to the SMs and memory. The new RTX 2080 and 2080 Ti ship with reference cards first, and partner cards largely at the same time (with some more advanced models coming 1+ month later), depending on which partner it is. The board partners did not receive pricing or even card naming until around the same time as media, so expect delays in custom solutions. Note that we were originally hearing a 1-3 month latency on partner cards, but that looks to be only for advanced models that are just now entering production. Most tri-fan models should come available on the same date.

Another major point of consideration is NVIDIA's decision to use a dual-axial reference card, eliminating much of the value of partner cards at the low-end. Moving away from blower reference cards and toward dual-fan cards will most immediately impact board partners, something that could lead to a slow crawl of NVIDIA expanding its direct-to-consumer sales and bypassing partners. The RTX 2080 Ti will be priced at $1200 and will launch on September 20, with the 2080 at $800 (and September 20), and the 2070 at $600 (TBD release date).
...
NVIDIA has moved the price forward in significant ways with this launch. The 2070 is not a linear line from the 1070 in price -- it's closer to a 1080, and the 2080 Ti has replaced a Titan X-class card in pricing. With market dominance, and with the biggest high-end competition being Pascal, this is sensible, if not what we wanted to see. We were hoping to see the 2080 Ti closer to $900.

As for the rest, a few notes:
  • Partners did not get pricing or naming until around the same time we did. Depending on partner, this means that non-reference cards won't ship for another 1-3 months.
  • Partners are going to be contending directly with NVIDIA at the low-end, which means that most will move to tri-fan designs.
  • Discussion of a single Tensor core equating 10x 1080 Tis in performance requires some clarification for viewers at home: In this scenario, note that a single Tensor core does not equate 10x 1080 Tis except when working specifically with machine learning, AI, and other DL tasks. If Tensor cores were good at every aspect of graphics, the entire card would be Tensor cores.
  • NVIDIA is doing what confused so many people at the Pascal launch in 2016: There is now a separate reference and FE spec, which wasn't the case before (FE replaced "reference" in name). Now, with two separate SKUs, there will undoubtedly be more confusion. FE is higher clocked. That is the primary change, as we understand it now.
  • NVLink is available for $80 for SLI
  • NVIDIA has been working on Turing for 10 years
  • It will take some time for developer adoption of the RTX SDK and of ray-tracing in general
We'll post another story later. That's it for now.
Reply
#23
https://www.tomshardware.com/news/nvidia...37672.html
Quote:Unsurprisingly, consumers have some passionate responses to these prices. Readers in the Tom’s Hardware forums are calling for more competition from AMD, wondering if the performance benefit is worth it and saying they just won’t buy right now. On the Nvidia subreddit, there were similar thoughts on competitive pricing and keeping existing cards.

So, why are Nvidia's new Turing cards so expensive? Nvidia didn’t respond to questions about why the cards are priced as they are, but we’ll update the story if we hear back.

Analyst Jon Peddie suggests that the cost may just be a result of what it takes to make this kind of hardware.

“Simple cost-of-goods… “ he told Tom’s Hardware over email. “These giant (and they are really big) chips cost a lot to make and test, and the huge amount of memory is expensive plus the cooling systems - just [cost of goods]. There's no rip off here, no conspiracy.”

But it could also be for a variety of other reasons. Stephen Baker, vice president of industry analysis at NPD, suggested it could be due to inventory or availability.

“I think they are likely trying to price these as very premium products,” Baker said. “Certainly if there is a significant amount of series 10 cards floating around they would want to at least draw that down somewhat.”

Baker also suggested that they can use the high pricing for a gradual release as the company better understands demand: “The market for cards has been so crazy the last couple of years, between the explosion in interest in gaming, the cryptomining bubble and the upgrade in quality and demand that they would be doing themselves a disservice to come out at lower prices,” he said. Lastly, he theorizes that high prices could be a way to protect against limited inventories after the launch.

https://www.tomshardware.com/news/wait-t...37673.html
Quote:Nvidia’s new GeForce RTX 20-series graphics cards were just announced, but there’s a few solid reasons you shouldn’t jump on the ray-tracing train and purchase one of the new Turing-based GPUs. At least not yet.

High Pricing (For Now)
...
No Gaming Benchmarks (Yet)
...
Ray Tracing Isn’t A Thing (Yet)
...
Pre-Orders Are Already Out of Stock (Mostly)

https://www.extremetech.com/computing/27...080-family
Quote:Buying CPUs and GPUs for a first-generation feature is almost always a bad idea. If you bought an Nvidia Maxwell or Pascal video card because you thought DX12 and Vulkan were the future, do you feel like you got what you paid for as far as that feature is concerned? Probably not. AMD doesn’t get to take a bow on this either. True, DX12 has been kinder to Team Red than Team Green, but if you bought a 2013 Radeon thinking Mantle was going to take over the gaming industry, you didn’t get a lot of shipping titles before it was retired in favor of other APIs. If you bought a Radeon in 2013 thinking you were getting in on the dawn of a new age of gaming, well, you were wrong.

The list goes on. The first DX10 cards weren’t particularly fast, including models like Nvidia’s GeForce 8800 Ultra. The first AMD GPUs to support tessellation in DX11 weren’t all that good at it. If you bought a VR headset and a top-end Pascal, Maxwell, or AMD GPU to drive it, guess what? By the time VR is well-established, if it ever is, you’ll be playing it on very different and vastly improved hardware. The first strike against buying into RTX specifically is that by the time ray tracing is well-established, practically useful, and driving modern games, the RTX 2080 will be a garbage GPU. That’s not an indictment of Nvidia, it’s a consequence of the substantial lead time between when a new GPU feature is released and when enough games take advantage of that feature to make it a serious perk.

But there’s also some reason to ask just how much performance these GPUs are going to deliver, period, and Nvidia left substantial questions on the table on that point. The company showed no benchmarks that didn’t involve ray tracing. To try and predict what we might see from this new generation, let’s take a look at what past cards delivered. We’re helped in this by [H]ardOCP, which recently published a massive generational comparison of the GTX 780 versus the GTX 980 and 1080. They tested a suite of 14 games from Crysis 3 to Far Cry 5. Let’s compare the GPUs to the rate of performance improvement and see what we can tease out:
...
Now we come to RTX 2080. Its fill rate is actually slightly less than the GTX 1080. Its core count increase is smaller than either of the previous two generations. Its bandwidth increase is smaller. And those facts alone suggest that unless Nvidia managed to deliver the mother of all IPC improvements via rearchitecting its GPU core, the RTX 2080 family is unlikely to deliver a huge improvement in games. This tentative conclusion is further strengthened by the company’s refusal to show any game data that didn’t focus on ray tracing this week.
...
But the RTX hardware in the Nvidia GPU, including the RTX 2080 Ti, isn’t going to be fast enough to simply ray trace an entire AAA game. Even if it was, game engines themselves are not designed for this. This point simply cannot be emphasized enough. There are no ray tracing engines for gaming right now. It’s going to take time to create them. At this stage, the goal of RTX and Microsoft DXR is to allow ray tracing to be deployed in certain areas of game engines where rasterization does poorly, and ray tracing could offer better visual fidelity at substantially less performance cost.
...
Look to the RTX’s features to provide a nominal boost to image quality. But don’t expect the moon. And never, ever, buy a GPU for a feature someone has promised you will appear at a later date. Buy a GPU for the features it offers today, in shipping titles, that you can definitely take advantage of.

I’m unwilling to declare the RTX 2080’s performance a settled question because numbers don’t always tell the whole story. When Nvidia overhauled its GPUs from Fermi to Kepler, it moved to a dramatically different architecture. The ability to predict performance as a result of comparing core counts and bandwidth broke as a result. I haven’t seen any information that Turing is as large a departure from Pascal as Kepler was from Fermi, but it’s always best to err on the side of caution until formal benchmark data is available. If Nvidia fundamentally reworked its GPU cores, it’s possible that the gains could be much larger than simple math suggests.

Nonetheless, simple math suggests the gains here are not particularly strong. When you combine that with the real-but-less-than-awe-inspiring gains from the incremental addition of ray tracing into shipping engines and the significant price increases Nvidia has tacked on, there’s good reason to keep your wallet in your pocket and wait and see how this plays out. But the only way the RTX 2080 is going to deliver substantial performance improvements above Pascal, over and above the 1.2x – 1.3x suggested by core counts and bandwidth gains, is if Nvidia has pulled off a huge efficiency gain in terms of how much work can be done per SM.
Reply
#24
https://www.tomshardware.com/news/nvidia...37679.html
Quote:Right out of the gate, we see that six of the 10 tested games include results with Deep Learning Super-Sampling enabled. DLSS is a technology under the RTX umbrella requiring developer support. It purportedly improves image quality through a neural network trained by 64 jittered samples of a very high-quality ground truth image. This capability is accelerated by the Turing architecture’s tensor cores and not yet available to the general public (although Tom’s Hardware had the opportunity to experience DLSS, and it was quite compelling in the Epic Infiltrator demo Nvidia had on display).

The only way for performance to increase using DLSS is if Nvidia’s baseline was established with some form of anti-aliasing applied at 3840x2160. By turning AA off and using DLSS instead, the company achieves similar image quality, but benefits greatly from hardware acceleration to improve performance. Thus, in those six games, Nvidia demonstrates one big boost over Pascal from undisclosed Turing architectural enhancements, and a second speed-up from turning AA off and DLSS on. Shadow of the Tomb Raider, for instance, appears to get a ~35 percent boost from Turing's tweaks, plus another ~50 percent after switching from AA to DLSS.

In the other four games, improvements to the Turing architecture are wholly responsible for gains ranging between ~40 percent and ~60 percent. Without question, those are hand-picked results. We’re not expecting to average 50%-higher frame rates across our benchmark suite. However, enthusiasts who previously speculated that Turing wouldn’t be much faster than Pascal due to its relatively lower CUDA core count weren’t taking underlying architecture into account. There’s more going on under the hood than the specification sheet suggests.

A second slide calls out explicit performance data in a number of games at 4K HDR, indicating that those titles will average more than 60 FPS under GeForce RTX 2080.

Nvidia doesn’t list the detail settings used for each game. However, we’ve already run a handful of these titles for our upcoming reviews, and can say that these numbers would represent a gain over even GeForce GTX 1080 Ti if the company used similar quality presets.

Of course, we’ll have to wait for final hardware, retail drivers, and our own controlled environment before drawing any concrete conclusions. But Nvidia’s own benchmarks at least suggest that Turing-based cards improve on their predecessors in a big way, even in existing rasterized games.
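
Worth keeping in mind that the two gains in the Shadow of the Tomb Raider example stack multiplicatively, not additively, so the combined uplift works out to roughly 2x rather than ~85%. Back-of-the-envelope (both percentages are approximate, read off Nvidia's slide):

Code:
# The architectural uplift and the AA-to-DLSS uplift compound.
turing_uplift = 1.35   # ~35% from Turing's architectural tweaks vs. Pascal
dlss_uplift = 1.50     # ~50% more from swapping conventional AA for DLSS

print(round(turing_uplift * dlss_uplift, 2))  # -> 2.03, i.e. roughly double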
Reply
#25
https://www.extremetech.com/gaming/27596...rong-cards
Quote:After Nvidia’s price hikes, the RTX 2080 is no longer the appropriate point of comparison for the GTX 1080. So what happens when we compare the RTX 2080 with its actual competitor, the $700 GTX 1080 Ti? It just so happens that [H]ardOCP recently published an article comparing the GTX 1080 against the 1080 Ti. So let’s look at that data set and estimate how much the 1080 Ti would slice into Nvidia’s results.

[H]ardOCP tested Crysis 3, Tomb Raider, GTA V, Witcher 3, Fallout 4, Rise of the Tomb Raider, Doom, Deus Ex Mankind Divided, Battlefield 1, Sniper Elite 4, Mass Effect Andromeda, Kingdom Come Deliverance, and Far Cry 5. The GTX 1080 Ti is, on average, 1.27x faster than the GTX 1080. If we assume that those averages hold across the ecosystem, we can expect the RTX 2080 to be roughly 1.23x faster than the GTX 1080 Ti at the same price point. There could be some shifts depending on resolution and detail level differences, but we’d expect those to favor the 1080 Ti if anything. Any bottleneck that specifically slowed the GTX 1080 would give the 1080 Ti’s additional horsepower more room to shine.

There’s nothing wrong with a 1.2x performance boost, but Nvidia knows it’s not the kind of thing that’ll get gamers talking. It’s certainly not the kind of improvement that gets someone to rush out and replace a 1080 Ti they just bought within the past 18 months. So instead of acknowledging this point, they elided it by comparing the GTX 1080 to a much more expensive GPU that it wouldn’t normally compete against. When you set the competition appropriately, up to half of Nvidia’s claimed performance improvement may vanish.
...
For four generations (GTX 295 – GTX 680), Nvidia kept the same $500 price for its flagship card. The GTX 780 surged up to $649 for launch but fell lower in six months thanks to AMD’s Hawaii-based R9 290 and R9 290X. Maxwell tacked a modest $50 increase onto its top-end price, but nothing crazy. Beginning in 2016, however, Nvidia began aggressively charging more, especially if you bought a Founders card. If you want to buy an RTX 2080 card in 2018 the way you bought a GTX 980 card in 2014, Nvidia wants an extra $150 – $250 for the privilege. It’s a far cry from what we used to see, just a few years ago, when Nvidia brought dramatically improved performance to the same price points year-on-year, even at the top of the market. AMD’s difficulty competing at the top of the GPU market is reflected in Nvidia’s pricing. At the rate costs are increasing, Intel’s 2020 GPUs can’t come soon enough.

Based on the results we’ve seen to date, Nvidia’s RTX 2080 looks to deliver between 1.15x – 1.3x additional performance compared with the GTX 1080 Ti at the same price point in mainstream titles that do not take advantage of its features. The claims of 50 percent-plus improvements do not withstand scrutiny given the difference in price between the two solutions. As always, all of this analysis should be considered preliminary and speculative based on publicly available information, but not final hardware. It’s possible that other, as-yet-undisclosed enhancements to the GPU core could impact the final analysis.
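
To make the arithmetic explicit: the ~1.23x figure comes from dividing an assumed RTX 2080 vs. GTX 1080 uplift by [H]ardOCP's measured 1.27x for the 1080 Ti over the 1080. The 2080-vs-1080 number isn't given in the excerpt, so the sketch below just sweeps a plausible range based on Nvidia's own marketing claims:

Code:
# Rough sketch of the comparison above. 1.27x is [H]ardOCP's measured average
# for the 1080 Ti over the 1080; the 2080-vs-1080 uplift is an assumption.
speedup_1080ti_vs_1080 = 1.27

for speedup_2080_vs_1080 in (1.50, 1.56, 1.60):  # assumed values
    implied = speedup_2080_vs_1080 / speedup_1080ti_vs_1080
    print(f"{speedup_2080_vs_1080:.2f}x of a 1080 -> {implied:.2f}x of a 1080 Ti")
# Anything in the ~1.5-1.6x band lands at ~1.18-1.26x, bracketing the
# article's ~1.23x estimate.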
Reply
#26
https://www.techpowerup.com/247140/first...card-leaks
Quote:A Time Spy benchmark score of one of NVIDIA's upcoming RTX 20-series graphics cards has come out swinging in a new leak. We say "one of NVIDIA's" because we can't say for sure which core configuration this graphics card worked on: the only effective specs we have are the 8 GB of GDDR6 memory working at 14 Gbps, which translates to either NVIDIA's RTX 2070 or RTX 2080 graphics cards. If we were of the betting type, we'd say these scores are likely from an NVIDIA RTX 2080, simply because the performance improvement over the last generation 1080 (which usually scores around the 7,300's) sits pretty at some 36% - more or less what NVIDIA has been doing with their new generation introductions.

The 10,030 points scored in Time Spy by this NVIDIA RTX graphics card brings its performance up to GTX 1080 Ti levels, and within spitting distance of the behemoth Titan Xp. This should put to rest questions regarding improved performance in typical (read, non-raytracing) workloads on NVIDIA's upcoming RTX series. It remains to be seen, since we don't know the die configuration, whether this improvement stems from actual per-core rasterization performance improvements or only from an increased number of execution units (NVIDIA says it doesn't, by the way).
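
Quick check on that "some 36%" claim, assuming a GTX 1080 baseline somewhere in the mid-7,300s as the article says:

Code:
# The exact GTX 1080 Time Spy baseline isn't given, so a few plausible
# "around the 7,300s" scores are assumed here.
rtx_score = 10030
for gtx1080_baseline in (7300, 7350, 7400):
    uplift = rtx_score / gtx1080_baseline - 1
    print(f"vs. {gtx1080_baseline}: +{uplift:.1%}")
# -> roughly +35.5% to +37.4%, in line with the quoted figure.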
Reply
#27
Tom's Hardware penned an insane article urging people to just buy the RTX series, and now Tom's Hardware US is censoring former staff.

Also, there's this: https://www.gamersnexus.net/news-pc/3358...tel-solder
Quote:In a new report from Digitimes, the Taiwanese semiconductor market experts analyze Nvidia’s Q2 revenue earnings, which they claim are backed by massive contracted GPU shipments to board partners. Interestingly, the report claims that “more than 10 graphics cards makers have no other choice but to swallow contracted shipments released by Nvidia to deplete its inventories”. This list chiefly includes ASUS, MSI, and Gigabyte among the partners stuck with GTX 10-series GPUs.

Nvidia has likely been left with a large, residual supply of 10-series GPUs from the cryptocurrency mining bubble, and the Digitimes report seems to suggest that Nvidia’s stronger than expected Q2 earnings are resultant of forcing AIB partners to absorb them. Additionally, the report claims AIB partners had to absorb the contracted shipments “in order to secure that they can be among the first batch of customers to get sufficient supply quotas of new-generation GPUs.”

We think this is why nVidia’s prices are a bit higher than the market might have expected. It will allow 10-series to continue shipping alongside the 20-series graphics cards, and the RTX branding will also allow for the two markets to exist separately. We think that nVidia might even re-introduce the GTX branding for lower-end cards in the 20 series, like those that may be unfit for ray tracing. The future 2060 would be an example.
Reply
#28
https://www.techpowerup.com/247209/nvidi...-at-launch
Quote:Tom Petersen had this to say on the HotHardware webcast: "The partners are basically going to hit the entire price point range between our entry level price, which will be $499, up to whatever they feel is the appropriate price for the value that they're delivering. (...) In my mind, really the question is saying 'am i gonna ever see those entry prices?' And the truth is: yes, you will see those entry prices. And it's really just a question of how are the partners approaching the market. Typically when we launch there is more demand than supply and that tends to increase the at-launch supply price."

Of course, there were some mitigating words left for last: "But we are working really hard to drive that down so that there is supply at the entry point. We're building a ton of parts and it's the natural behaviour of the market," Tom Petersen continued. "So it could be just the demand/supply equation working its way into retail pricing - but maybe there's more to it than that."
Reply
#29
https://www.tomshardware.com/news/micron...37755.html
Quote:Micron penned a blog post today announcing that the company is providing Nvidia with GDDR6 memory for the GeForce RTX 20-Series cards, which includes the 2070, 2080, and 2080 Ti. Micron is Nvidia's launch partner, but other companies will also chip in to supply the RTX series with GDDR6 memory in the coming months.
...
Nvidia's selection of GDDR6 over HBM2 should lead to lower manufacturing costs, though it's debatable if end users will see those savings any time soon. Luckily, Micron claims it has plenty of production capacity to prevent a repeat of the crippling memory shortages the industry weathered last year. The rampant memory shortages, along with the cryptocurrency mining boom, contributed to painfully high graphics card prices over the last year.

Micron's GDDR6 will also make its way into automotive and networking applications, but the real question is how it will perform in Nvidia's GeForce RTX cards. It appears the memory has some room for overclocking: Micron recently presented a research paper claiming that it was able to successfully overclock one of its GDDR6 prototype memory chips to a staggering 20 Gbps with just a slight bump in I/O voltage.

Of course, hands-on testing will tell the tale. We won't have to wait much longer to see how the performance of the Nvidia GeForce RTX GPUs pans out in our test suite–stay tuned!
Reply
#30
https://www.extremetech.com/gaming/27647...rtx-launch
Quote:Luckily, Micron recalled its own product announcement by paragraph 3. The company notes that GDDR6 is shipping with up to 14Gbps performance today. Faster speeds are possible; in June Micron announced it had managed to overclock its GDDR6 up to 20Gbps with a voltage bump. Clearly, there’s some headroom for more bandwidth over the long run, which honestly isn’t all that surprising. The first GDDR5 GPUs, like the Radeon HD 4890, offered up to 125GB/s of bandwidth on a 256-bit bus. Today, GPUs like the RX 580 (which also uses standard GDDR5 and a 256-bit bus) have hit 256GB/s of bandwidth. It’s good to see Micron being able to push GDDR6 up to 20Gbps, but that’s honestly what we should expect to see given the expected longevity of the standard.

The aggressively pro-Nvidia sales pitch is still more than a little strange, however, and it’s not a function of the fact that Nvidia is the launch partner on GDDR6. Nvidia was also the only GPU company that used GDDR5X, and Micron the only company that built it, but none of the blog posts the company published over 2015 and 2016 take the same ridiculous tone or over-the-top presentation.

RTX GPUs are coming. They’re going to be faster than current cards, though they’re also arriving at higher prices in a break with recent years in which Nvidia has opted for either no price increases or smaller jumps. Whether you intend to buy one or not, your backyard will remain safe for the foreseeable future.
Reply
#31
https://www.techpowerup.com/247429/nvidi...tier-cards
Quote:During that Q&A, NVIDIA's Colette Kress put Turing's performance at a cool 2x improvement over their 10-series graphics cards, discounting any raytracing performance uplift - and when raytracing is indeed brought into consideration, she said performance has increased by up to 6x compared to NVIDIA's last generation. There's some interesting wording when it comes to NVIDIA's 20-series lineup, though; as Kress puts it, "We'll start with the ray-tracing cards. We have the 2080 Ti, the 2080 and the 2070 overall coming to market," which, in context, seems to point towards a lack of raytracing hardware in lower-tier graphics cards (apparently, those based on the potential TU106 silicon and lower-level variants).

This is just speculation - based on Kress's comments, though - but if that translates to reality, this would be a tremendous misstep for NVIDIA and raytracing in general. The majority of the market games on sub-**70 tier graphics cards (the 20-series has even seen a price hike up to $499 for the RTX 2070...), and failing to add RT hardware to lower-tier graphics would exclude a huge portion of the playerbase from raytracing effects. This would mean that developers adding NVIDIA's RTX technologies and implementing Microsoft's DXR would be spending development resources catering to the smallest portion of gamers - the ones with high-performance discrete solutions. And we've seen in the past what developers think of devoting their precious time to such features.
...
All in all, it seems to this editor that segregation of graphics cards via RTX capabilities would be a mistake, not only because of userbase fracturing, but also because the largest share of players game at **60 and lower levels. Developers wouldn't be so inclined to add RTX to their games for such a small userbase, and NVIDIA would be looking at diluting its gaming brand via RTX and GTX - or risk confusing customers by branding a non-RTX card with the RTX branding. If any of these scenarios come to pass, I risk saying it might have been too soon for the raytracing push - even as I applaud NVIDIA for doing it, anyway, and pushing graphics rendering further. But perhaps timing and technology could have been better? But I guess we all just better wait for actual performance reviews, right?
Reply
#32
https://www.gamersnexus.net/industry/336...iff-prices
Quote:Tom’s Hardware founder Thomas Pabst, who left the site in 2007, recently posted a comment on the GamersNexus facebook page to address our Tom’s Hardware video. The response was made to our coverage of the “Just Buy It” article that was posted by Tom’s Hardware’s current Editor-in-Chief Avram Piltch, where, at one point, I jokingly asked, “Tom, what have you become? Look at yourself! I’m using Tom as an all-inclusive term for anyone who writes at Tom’s Hardware.”

Pabst took the question and ran with it, replying as follows:
Quote:“You asked ‘What happened, Tom? What has become of you?’ Well, there’s only one person who can answer that!

Tom has become a daddy of two boys and doesn’t have anything to do with Tom’s Hardware anymore.

So far so good.

What does Tom think of the article you love so emphatically?

“Well, I’d say Tom would have been less kind than you have been with his assessment! It is ultimately ridiculous, it is indeed suicidal, and its conclusions are epically nonsensical. There is value in being an early adopter? Aren’t we, who are impatiently into the latest tech, sorely aware of what we keep doing to ourselves when we purchase technology at high prices ‘ahead of the curve’? It’s maso-f***ing-chism! We make ourselves paying beta testers, and wait for software (and often other hardware) that will hopefully bless our brand new tech with the meaning and usefulness they simply do not have by the time of our premature purchase! There are no RTX games, but there is value in adopting Turing early, because current games *might* be faster than on our 1080ti SLI setup? Yes, this is madness, and good old Tom is scratching his head no less than you are, Steve! There’s got to be value in masochism!”
Reply
#33
https://www.techpowerup.com/247479/nvidi...ll-q1-2019
Quote:NVIDIA CFO Colette Kress, speaking in the company's latest post-results financial analyst call, confirmed that NVIDIA isn't retiring its GeForce GTX 10-series products anytime soon, and that the series could coexist with the latest GeForce RTX series, leading up to Holiday-2018, which ends with the year. "We will be selling probably for the holiday season, both our Turing and our Pascal overall architecture," Kress stated. "We want to be successful for the holiday season, both our Turing and our Pascal overall architecture," she added. NVIDIA is expected to launch not just its RTX 2080 Ti and RTX 2080, but also its RTX 2070 towards the beginning of Q4-2018, and is likely to launch its "sweetspot" segment RTX 2060 by the end of the year.

NVIDIA reportedly has mountains of unsold GeForce GTX 10-series inventory, in the wake of not just a transition to the new generation, but also a slump in GPU-accelerated crypto-currency mining. The company could fine-tune prices of its popular 10-series SKUs such as the GTX 1080 Ti, the GTX 1080, GTX 1070 Ti, and GTX 1060, to sell them at slimmer margins. To consumers this could mean a good opportunity to lap up 4K-capable gaming hardware; but for NVIDIA, it could mean those many fewer takers for its ambitious RTX Technology in its formative year.
Reply
#34
https://www.extremetech.com/gaming/27700...tx-1080-ti
Quote:If you’re having trouble keeping all of this straight, allow us to summarize. The 1080 Ti’s performance scores imply that very little AA is in-use at 4K, while the RTX family’s performance jump from Graph #1 to Graph #2 implies that a heavy AA solution has just been disabled and replaced by a much more efficient one. With a heavy AA solution engaged, the 1080 Ti’s average level of performance should be much lower (check our own review results above in Rise of the Tomb Raider and Metro Last Light Redux for examples of how much performance you lose when enabling SSAA).

One more thing. Nvidia’s 4K@60 line is 131 pixels tall, which means each pixel is “worth” approximately 0.46fps. The height gap between the RTX 2080 and the GTX 1080 Ti, measured top-left corner to top-left corner, is 20 pixels. If we assume that Nvidia’s graph is accurate, this implies the RTX 2080 will outperform the GTX 1080 Ti by ~9 percent at the same price (if $700 cards are even in stock for launch). And this, in turn, may explain why Nvidia is putting so much emphasis on DLSS and ray tracing in the first place — because introducing a 9 percent performance improvement at the same price isn’t anything that’s going to get anyone excited.
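
Spelling out the pixel arithmetic: 60 fps spread over 131 pixels is where the ~0.46 fps-per-pixel figure comes from, so a 20-pixel gap is roughly 9 fps. Turning that into the quoted ~9 percent also depends on where the 1080 Ti's bar actually sits on the chart, which the excerpt doesn't give, so treat this as the fps delta only:

Code:
# Graph-measurement arithmetic from the article.
fps_at_gridline = 60       # the "4K @ 60 FPS" reference line
gridline_height_px = 131   # measured height of that line on Nvidia's chart
gap_px = 20                # RTX 2080 bar top vs. GTX 1080 Ti bar top

fps_per_pixel = fps_at_gridline / gridline_height_px
print(round(fps_per_pixel, 2))           # -> 0.46 fps per pixel
print(round(gap_px * fps_per_pixel, 1))  # -> ~9.2 fps gap between the bars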
Reply
#35
RTX 2080 Ti delayed to September 27: https://techreport.com/news/34097/nvidia...ptember-27
Reply
#36
https://www.techpowerup.com/247657/nvidi...d-for-10nm
Quote:NVIDIA could launch successors to its GeForce GTX 1060 series and GTX 1050 series only by 2019, according to a statement by an ASUS representative, speaking with PC Watch. This could mean that the high-end RTX 2080 Ti, RTX 2080, and RTX 2070 could be the only new SKUs for Holiday 2018 from NVIDIA, alongside cut-rate GeForce GTX 10-series SKUs. This could be due to a combination of swelling inventories of 10-series GPUs and insufficient volumes of mid-range RTX 20-series chips, should NVIDIA even decide to extend real-time ray-tracing to mid-range graphics cards.
...
The PC Watch interview also states that NVIDIA's "Turing" architecture was originally designed for Samsung's 10-nanometer silicon fabrication process, but faced delays and was redesigned for the 12 nm process. This partially explains why NVIDIA hasn't kept up with the generational power-draw reduction curve of the previous 4 generations. NVIDIA has left the door open for a future optical shrink of Turing to the 8 nm silicon fabrication node, an extension of Samsung's 10 nm node with reduced transistor sizes.

https://www.techpowerup.com/247660/nvidi...er-variant
Quote:We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU to correspond to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The Turing -300 variant is designated to be used on cards targeting the MSRP price point, while the 300-A variant is for use on custom-design, overclocked cards. Both are the same physical chip, just separated by binning, and pricing, which means NVIDIA pretests all GPUs and sorts them by properties such as overclocking potential, power efficiency, etc.

When a board partner uses a -300 Turing GPU variant, factory overclocking is forbidden. Only the more expensive -300-A variants are meant for this scenario. Both can still be overclocked manually by the user, though, but it's likely that the overclocking potential on the lower bin won't be as high as on the higher rated chips. Separate device IDs could also prevent consumers from buying the cheapest card, with reference clocks, and flashing it with the BIOS from a faster factory-overclocked variant of that card (think buying an MSI Gaming card and flashing it with the BIOS of the Gaming X).

All Founders Edition and custom designs that we could look at so far use the same -300-A GPU variant, which means the device ID is not used to separate Founders Edition from custom design cards.
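
If you're curious which variant a retail card carries, the PCI device ID is readable straight from the OS. A minimal Linux-only sketch (0x10de is NVIDIA's PCI vendor ID; mapping a given hex device ID back to a -300 or -300-A bin still needs an external lookup, which isn't covered here):

Code:
# List PCI device IDs of NVIDIA display adapters via Linux sysfs.
from pathlib import Path

NVIDIA_VENDOR = "0x10de"

for dev in Path("/sys/bus/pci/devices").iterdir():
    vendor = (dev / "vendor").read_text().strip()
    pci_class = (dev / "class").read_text().strip()
    if vendor == NVIDIA_VENDOR and pci_class.startswith("0x03"):  # display class
        device_id = (dev / "device").read_text().strip()
        print(f"{dev.name}: device ID {device_id}")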

https://www.techpowerup.com/247669/nvidi...fantasy-xv
Quote:Taking a look at the RTX 2080 Ti results shows it beating out the GTX 1080 Ti by 26% and 28% in the standard and high quality tests respectively, at 2560x1440. Increasing the resolution to 3840x2160 again shows the RTX 2080 Ti ahead, this time by 20% and 31% respectively. The RTX 2080 offers a similar performance improvement over the GTX 1080 at 2560x1440, where it delivers gains of 28% and 33% in the same standard and high quality tests. Once again, increasing the resolution to 3840x2160 results in performance being 33% and 36% better than the GTX 1080. Overall, both graphics cards are shaping up to be around 30% faster than the previous generation without any special features. With Final Fantasy XV getting DLSS support in the near future, it is likely the performance of the RTX series will further improve compared to the previous generation.
Reply
#37
https://www.tomshardware.com/reviews/nvi...09-14.html
Quote:Now, the problem with GeForce RTX 2080, Nvidia’s second-fastest Turing card, is that it’s only marginally faster than GeForce GTX 1080 Ti. Moreover, Nvidia’s Pascal-based flagship is currently available for about $100 less than the RTX 2080 Founders Edition. Neither card allows you to enjoy 4K gaming unconditionally. If you want to crank up the detail settings, they both necessitate dropping to 2560x1440 occasionally. In fact, it’s easier to think of them as ideal companions for high-refresh QHD monitors.

Nvidia does try for a more favorable comparison by pitting the 2080 against GeForce GTX 1080. But there’s no way to reconcile a greater-than $300 difference between the cheapest 1080s and GeForce RTX 2080 Founders Edition’s asking price. It’d be like comparing GeForce GTX 1080 Ti to GTX 980 a year ago; they’re simply in different classes.

Notice that we’re not talking about ray tracing as a reason to buy GeForce RTX 2080 instead of GTX 1080 Ti. Without question, the technology packed into Turing has us captivated. And if the first game to launch with real-time hybrid rendering support bowls us over, we’ll change our tune. However, we’re not going to recommend paying a premium today for hardware that can’t be fully utilized yet based solely on what it should be able to do in the future. This is doubly true since we don't yet know how RTX 2080 fares with two-thirds of RTX 2080 Ti's RT core count.

There will come a time when the availability of Pascal-based GeForce cards tapers off, removing the choice between GeForce GTX 1080 Ti and RTX 2080. By then we hope to see 2080s that start in the $700 range, replacing GTX 1080 Ti with a more capable successor. For now, though, GeForce RTX 2080 feels like a side-grade to GTX 1080 Ti. It's faster and more expensive, with a lot more potential, but not something we'd rush to jump into right now.

https://www.tomshardware.com/reviews/nvi...05-14.html
Quote:But we fancy ourselves advocates for enthusiasts, and we still can't recommend placing $1200 on the altar of progress to create an audience for game developers to target. If you choose to buy GeForce RTX 2080 Ti, do so for its performance today, not based on the potential of its halo feature.
...
In the end, Nvidia built a big, beautiful flagship in its GeForce RTX 2080 Ti Founders Edition. We’ve smothered CEO Jensen Huang’s favorite features with caveats just to cover our bases. And we commiserate with the gamers unable to justify spending $1200 on such a luxury. But there’s no way around the fact that, if you own a 4K monitor and tire of picking quality settings to dial back in exchange for playable performance, this card is unrivaled.

https://www.techpowerup.com/reviews/NVID...on/38.html
Quote:While we understand that Turing GPUs are bigger and pack more components, bring more performance to the table, and that they're "more than a GPU from 2017," the current pricing is hard to justify, with the RTX 2080 starting at $700, and the RTX 2080 Ti at $1000. The once basic Founders Edition adds another $100/$200 on top of that, and custom board designs will be even more expensive. Similar leaps in technology in the past did not trigger such hikes in graphics card prices from one generation to the next. We hence feel that in their current form, the RTX 20-series is overpriced by at least 20% across the board, and could deter not just bleeding-edge enthusiasts, but also people upgrading from older generations such as "Maxwell." For games that don't use RTX, the generational performance gains are significant, but not as big as those between "Maxwell" and "Pascal." On the other hand, I doubt that many gamers will opt for Pascal when they choose to upgrade their graphics card, especially with the promises of RTX and AI that NVIDIA is definitely going to market big. The key factor here will be game support, which looks to be gaining steam fast, going by recent announcements.

https://www.techpowerup.com/reviews/NVID...on/38.html
Quote:At $799 for the Founders Edition and $699 as the baseline price, the GeForce RTX 2080 has a more justifiable price-tag than the RTX 2080 Ti, given that the GTX 1080 launched at $699 for the Founders Edition. We feel that the RTX 2080 is still overpriced by at least 10%, despite the fact that Turing is "more than a 2017 GPU" on account of its new on-die hardware. Most factory-overclocked custom design cards could be priced north of $800, which puts them out of reach not just for people wanting to upgrade from "Pascal," but also for those coming from "Maxwell" and actually needing such an upgrade. Since the RTX 2080 convincingly beats the GTX 1080 Ti, choosing this card over a GTX 1080 Ti that's hovering around the $700 mark makes abundant sense. Similar leaps in technology, like RTX, did not in the past raise prices to this extent from one generation to the next. If this is a ploy to get rid of unsold "Pascal" cards, it could backfire for NVIDIA. Every "Pascal" customer is one less "Turing RTX" customer for the foreseeable future.
Reply
#38
https://www.extremetech.com/gaming/27738...ousy-value
Quote:Nvidia has done an incredible job launching new features that, as of today, literally not one title can take advantage of. The company has done amazing work pulling a bait-and-switch by promising performance for the RTX 2080 that you’ll need an RTX 2080 Ti to get. If you love paying Nvidia a lot of money, you’re going to love the 2080 Ti. Otherwise, your best bet is to hope that AMD or Intel put something into market in the next 12-18 months to convince Nvidia that it doesn’t have a license to treat gamers like the apples you feed into a cider press. Because with no competition in the market, hey, they’ve got the unilateral right to dictate market pricing. Based on this launch, we can see exactly how Nvidia wants to use that position.

Happy gaming.
Reply
#39
https://www.gamersnexus.net/hwreviews/33...tx-1080-ti
Quote:Conclusion: Is the RTX 2080 Worth It?

No -- not yet.

The card is fine, and what nVidia is trying to do is commendable and, we think, an eventual future for gaming technology. That does not mean that it's worth the price at present, however. The RTX 2080 is poor value today. NVidia's own GTX 1080 Ti offers superior value at $150 less, in some cases, or $100 less on average. The cards perform equivalently, and yet the 1080 Ti is cheaper and still readily available (and with better models, too). The RTX cards may yet shine, but there aren't any applications making use of the namesake feature just yet -- at least, not any outside of tech demonstrations, and those don't count. Until we see a price drop in the 2080, compelling RTX implementations in an actually relevant game, or depleted stock on the 1080 Ti, there is no strong reason we would recommend the RTX 2080 card.

On the upside, the nVidia Founders Edition PCB and VRM are of superior quality, and we question how much value board partners will be able to add (electrically) for this generation. It seems that nVidia will chip away at relevance for AIB partners in the dual-axial market, as it'll be difficult to beat the reference PCB design. The cooler, as usual, could use work -- a lot of it -- but it's certainly improved over Pascal's blower cooler. We still wouldn't recommend the reference card for air cooling, but for an open loop build, its VRM will be difficult to outmatch.

We also want to again recognize the direction nVidia is trying to push. More framerate offers limited usefulness, at some point, and although this is partially a means to play-out the current process node, it also offers merit for developers. We think the RTX cards would be interesting options for game developers or, as software updates to support Tensor/RT cores, potentially 3D artists. The cards are objectively good insofar as their performance, it's just that there needs to be a value proposition or RTX adoption -- one must be true, and presently, neither is. The goal to sidestep manual graphics tuning for planar reflections, caustics, and global illumination is a noble goal. Most of these effects require artist oversight and artist hours, like creating an environment map to reflect lights (this could be done in nVidia's Sol demo, for instance), creating cube maps to reflect city streets in windows, or faking caustics and refractions. With a toggle to ray trace and sample, things would be much easier. It's not here today, and we cannot review a product based on what might be here tomorrow. We will revisit the product if and when RTX games roll-out.

We would recommend 1080 Ti purchases in the $650-$700 class presently. If your region has the 2080 and 1080 Ti price-locked, well, the 2080 is equivalent in performance and would be a worthwhile purchase. We can't justify the extra $100-$150 in the US, but recognize that price equivalence would swing in favor of the 2080.
Reply
#40
https://www.techpowerup.com/reviews/NVID...ing/7.html
Quote:We are happy to report that the RTX 2080 Ti is finally able to overwhelm PCIe gen 3.0 x8, posting a small but tangible 2%–3% performance gain when going from gen 3.0 x8 to gen 3.0 x16, across resolutions. Granted, these are single-digit percentage differences that you won't be able to notice in regular gameplay, but graphics card makers expect you to pay $100 premiums for factory overclocks that fetch essentially that much more performance out of the box - so the difference isn't nothing, even if, like those small out-of-the-box gains, it's impossible to feel while actually gaming.
...
For the first time since the introduction of PCIe gen 3.0 (circa 2011), 2-way SLI on a mainstream-desktop platform, such as Intel Z370 or AMD X470, could be slower than on an HEDT platform, such as Intel X299 or AMD X399, because mainstream-desktop platforms split one x16 link between two graphics cards, while HEDT platforms (not counting some cheaper Intel HEDT processors), provide uncompromising gen 3.0 x16 bandwidth for up to two graphics cards. Numbers for gen 3.0 x8 and gen 3.0 x4 also prove that PCI-Express gen 2.0 is finally outdated, so it's probably time you considered an upgrade for your 7-year old "Sandy Bridge-E" rig.

By this time next year, we could see the first desktop platforms and GPUs implementing PCI-Express gen 4.0 in the market. If only "Turing" supported PCIe gen 4.0, you would have the luxury of running it at gen 4.0 x8 without worrying about any performance loss. That is exactly the promise of PCIe gen 4.0: not more bandwidth per device, but each device working happily with fewer lanes, so processor makers aren't required to add more of them.
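
For context on why gen 4.0 x8 would be a free lunch here: per-lane throughput doubles every PCIe generation, so gen 4.0 x8 matches gen 3.0 x16. Rough theoretical per-direction figures (encoding overhead included):

Code:
# Theoretical per-direction PCIe bandwidth in GB/s, per lane count.
PER_LANE_GB_S = {
    "gen 2.0": 0.5,    # 5 GT/s, 8b/10b encoding
    "gen 3.0": 0.985,  # 8 GT/s, 128b/130b encoding
    "gen 4.0": 1.969,  # 16 GT/s, 128b/130b encoding
}

for gen, per_lane in PER_LANE_GB_S.items():
    for lanes in (4, 8, 16):
        print(f"{gen} x{lanes}: ~{per_lane * lanes:.1f} GB/s")
# e.g. gen 3.0 x16 ~15.8 GB/s, which gen 4.0 x8 would match.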

https://techreport.com/blog/34116/weighi...erformance
Quote:The RTX 2080 Ti doesn't enjoy as large a gain, but it still reduces its time spent rendering difficult frames by 67% at the 16.7 ms threshold. For minor differences in image quality, I don't believe that's an improvement that any gamer serious about smooth frame delivery can ignore entirely.

It's valid to note that all we have to go on so far for DLSS is a pair of largely canned demos, not real and interactive games with unpredictable inputs. That said, I think any gamer who is displeased with the smoothness and fluidity of their gaming experience on a 4K monitor—even a G-Sync monitor—is going to want to try DLSS for themselves when more games that support it come to market, if they can, and see whether the minor tradeoffs other reviewers have established for image quality are noticeable to their own eyes versus the major improvement in frame-time consistency and smooth motion we've observed thus far.
Reply

