Ampere Discussion Thread
#1
http://www.redgamingtech.com/geforce-gtx...eol-rumor/
Give the rest a read as well.
Quote:But rumors circulating on 3DCenter tell us that Nvidia cut production of the GP102-based products back in November 2017 – thus meaning that this flagship card is now EoL. As a result, in the next month or two, the GTX 1080 Ti cards will no longer be available to purchase new, and in theory we should see other cards in the GeForce Pascal lineup end up similarly in the next few months too.

Instead, these cards will be replaced by the GeForce GTX 2080 and GeForce GTX 2070, although rather than being powered by Volta (as was originally rumored), we would instead see them use Ampere, an architecture designed for gaming. Given that the release date of both the GeForce GTX 2080 and GTX 2070 cards is April 12th, 2018, Ampere will likely be shown off in March at either GTC or GDC… or possibly a later livestream hosted by Nvidia exclusively to reveal this new architecture.
#2
https://www.techpowerup.com/241872/nvidi...next-month
Quote:An Expreview report points at the possibility of a GeForce product, one that you can buy in your friendly neighborhood PC store and play games with. The "Ampere" based GPU will still be based on the 12 nanometer silicon fabrication process at TSMC, and is unlikely to be a big halo chip with exotic HBM stacks. Why NVIDIA chose to leapfrog is uncertain. GTC gets underway late-March.
#3
False alarm: https://www.techpowerup.com/241995/repor...-after-all
#4
https://www.techpowerup.com/246451/nvidi...080-listed
Quote:Case in point: NVIDIA AIB Manli Technology Group has registered at portal.eaeunion.org (with the Eurasian Economic Union) product listings that include references to some NVIDIA GA104 and GA104-400 products, as well as nomenclature for NVIDIA's next-gen cards as being GTX 2070 and GTX 2080. Granted, this could be a registration for future, future products, but it's very unlikely. The jury is still out on this leak, but if ever the carousel will stop spinning, we don't know it.
#5
Manli denies the submission: https://www.techpowerup.com/246480/its-a...-codenames
#6
https://www.tomshardware.com/news/samsun...39583.html
Quote:According to a report from DigiTimes, Nvidia will fab its upcoming Ampere architecture, which is expected to succeed Turing, on Samsung's 7nm EUV process rather than on the 7nm process from TSMC, which has been Nvidia's foundry partner for years. Ampere is expected to launch in 2020, though how the architecture differs from Turing isn't clear yet.
#7
https://www.tomshardware.com/news/nvidia...39787.html
Quote:Nvidia reportedly confirmed that it's partnered with Samsung to manufacture its Ampere GPU, which is expected to launch in 2020 using its 7nm extreme ultraviolet lithography (EUVL) process rather than continuing to source GPUs from longtime foundry partner TSMC.
...
It's not clear why Nvidia decided to switch foundry partners for Ampere. EETimes reported in early June that Samsung "aggressively undercut" TSMC. We do know that TSMC's 7nm process has already become popular with Apple and AMD.
#8
https://www.techpowerup.com/259842/nvidi...in-1h-2020
Quote:According to sources over at Igor's Lab, NVIDIA could launch its next generation of GPUs, codenamed "Ampere", as soon as the first half of 2020 arrives. Having just recently launched the GeForce RTX Super lineup, NVIDIA could surprise us again in the coming months with a replacement for its Turing lineup of graphics cards. Expected to directly replace the current high-end GPU models, like the GeForce RTX 2080 Ti and RTX 2080 Super, Ampere should bring the many performance and technology advancements a new graphics card generation is usually associated with.
#9
https://www.techpowerup.com/261164/nvidi...med-hopper
Quote:NVIDIA has reportedly codenamed a future GPU architecture "Hopper," in honor of Grace Hopper, an eminent computer scientist who invented one of the first linkers and programmed the Harvard Mark I computer that aided the American war effort in World War II. This came to light as Twitter user "@kopite7kimi," who has had a fairly high hit-rate with NVIDIA info, tweeted not just the codename, but also a key NVIDIA product design change. The tweets were later deleted, but not before 3DCenter.org reported on them. To begin with, "Hopper" is reportedly succeeding the upcoming "Ampere" architecture slated for the first half of 2020.

"Hopper" is also rumored to introduce MCM (multi-chip module) GPU packages, which are packages with multiple GPU dies. Currently, GPU MCMs are packages that have one GPU die surrounded by memory dies or stacks. This combination of GPU dies could make up "giant cores," at least in the higher end of the performance spectrum. NVIDIA reserves MCMs for only its most expensive Tesla family of compute accelerators, or Quadro professional graphics cards, and seldom offers client-segment GeForce products.
#10
https://www.extremetech.com/gaming/30411...50-percent
Quote:The Taipei Times has reported that the Yuanta Securities Investment Consulting Company has issued an investment note to its clients advising them to expect big things from Nvidia's next-generation architecture, codenamed Ampere. The note states: "Nvidia's next-generation GPU based on the Ampere architecture is to adopt 7-nanometer technology, which would lead to a 50 percent increase in graphics performance while halving power consumption."

That’s a pretty significant set of improvements, but one of them is a lot more likely than the other. [H]ardOCP has gone offline, but the site previously conducted an extensive investigation of Nvidia scaling over time. While the full articles are no longer archived online, the pages that were available show that the GTX 1080 is often much faster than the GTX 980, particularly when the two were compared in newer titles. Anandtech’s “Bench” results for the GTX 1080 versus the GTX 980 also show strong general uplift.
...
The idea that Nvidia would cut absolute power consumption by 50 percent, however, seems unlikely and ahistorical. GPUs tend to sell into TDP bands up to ~300-350W (AMD has historically been more willing to push TDP a bit harder than NV). If you compare power consumption figures for modern GPUs, they don’t tend to fluctuate by nearly this much, and there’s been a steady upward trend. Anandtech records full-system loaded power consumption in Shadow of the Tomb Raider as 205W with the GTX 980, 225W with the 1080, 314W with an RTX 2080, and 350W with an RTX 2080 Super. The RTX 2080 Super scores 127.5fps in SotTR according to Anandtech, compared with 52.3fps for the GTX 980, which means it’s definitely a more power-efficient GPU, with a calculated 2.44x increase in frame rate in exchange for a 1.7x increase in power consumption. But it still uses more power in absolute terms.
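
The efficiency math there checks out. A quick Python sketch using the Anandtech figures quoted above (these are full-system power numbers, so GPU-only efficiency will differ):

Code:
# Perf-per-watt check using the Anandtech SotTR figures quoted above
# (full-system loaded power in watts, average frame rate in fps).
power = {"GTX 980": 205, "RTX 2080 Super": 350}
fps = {"GTX 980": 52.3, "RTX 2080 Super": 127.5}

gain = fps["RTX 2080 Super"] / fps["GTX 980"]      # ~2.44x frame rate
cost = power["RTX 2080 Super"] / power["GTX 980"]  # ~1.71x power draw
print(f"{gain:.2f}x the frames for {cost:.2f}x the power")
print(f"fps per watt: {fps['GTX 980'] / power['GTX 980']:.3f} -> "
      f"{fps['RTX 2080 Super'] / power['RTX 2080 Super']:.3f}")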
#11
https://www.techpowerup.com/263128/rumor...s-detailed
Quote:NVIDIA's next generation of graphics cards, codenamed Ampere, is set to arrive sometime this year, presumably around GTC 2020, which takes place on March 22nd. Before NVIDIA CEO Jensen Huang officially reveals the specifications of these new GPUs, we have the latest round of rumors coming our way. According to VideoCardz, which cites multiple sources, the die configurations of the upcoming GeForce RTX 3070 and RTX 3080 have been detailed. Using the latest 7 nm manufacturing process from Samsung, this generation of NVIDIA GPUs offers a big improvement over the previous generation.

For starters, the two dies which have appeared carry the codenames GA103 and GA104, standing for the RTX 3080 and RTX 3070 respectively. Perhaps the biggest surprise is the Streaming Multiprocessor (SM) count. The smaller GA104 die has as many as 48 SMs, resulting in 3072 CUDA cores, while the bigger, oddly named GA103 die has as many as 60 SMs, resulting in 3840 CUDA cores in total. These improvements in SM count should result in a notable performance increase across the board. Alongside the increase in SM count, there is also a new memory bus width. The smaller GA104 die that should end up in the RTX 3070 uses a 256-bit memory bus allowing for 8/16 GB of GDDR6 memory, while its bigger brother, the GA103, has a 320-bit wide bus that allows the card to be configured with either 10 or 20 GB of GDDR6 memory. In the images below you can check out the alleged diagrams for yourself and judge whether they look fake; however, it is recommended to take this rumor with a grain of salt.
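
Those rumored figures are internally consistent if you assume 64 FP32 CUDA cores per SM (Turing's ratio; whether Ampere keeps it is an open question at this point) and one 32-bit GDDR6 channel per memory chip, with 8 Gbit or 16 Gbit chips giving the two capacity options:

Code:
# Sanity check of the rumored GA103/GA104 configurations.
CORES_PER_SM = 64  # assumption: Turing's FP32-per-SM ratio carries over

for name, sms, bus_bits in (("GA103 / RTX 3080?", 60, 320),
                            ("GA104 / RTX 3070?", 48, 256)):
    cores = sms * CORES_PER_SM
    chips = bus_bits // 32  # one GDDR6 chip per 32-bit channel
    print(f"{name}: {cores} CUDA cores, {chips} GB or {2 * chips} GB of GDDR6")
# GA103: 3840 cores, 10 or 20 GB; GA104: 3072 cores, 8 or 16 GB (matches the rumor)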
#12
https://www.techpowerup.com/264359/two-u...bly-ampere
Quote:NVIDIA is expected to launch its next-generation Ampere lineup of GPUs during the GPU Technology Conference (GTC) event happening from March 22nd to March 26th. Just a few weeks before the release of these new GPUs, a GeekBench 5 compute score measuring the OpenCL performance of unknown GPUs, which we assume are a part of the Ampere lineup, has appeared. Thanks to Twitter user "_rogame" (@_rogame), who obtained a GeekBench database entry, we have some information about the CUDA core configuration, memory, and performance of the upcoming cards.
...
The results are not recent, as they date back to October and November, so these may be engineering samples, and the clock speed and memory configuration might change before launch.
#13
https://www.tomshardware.com/news/nvidia...-cuda-hbm2
Quote:An Nvidia graphics card with a huge core count was apparently benchmarked with Geekbench 5 before any official announcement from the manufacturer, as spotted by a Twitter user. We were already taken aback by two unidentified Nvidia graphics cards last week listed with 7,552 and 6,912 CUDA cores. But this latest finding is mind-blowing, with the CUDA core count allegedly reaching 7,936.

Since this is hardware that Nvidia hasn't even confirmed yet, we should take the benchmark results with a grain of salt. Interestingly enough, the Geekbench 5 submission is from the same time frame as the submission for those other two unannounced Nvidia cards. The system used even has the same CPU, an Intel Core i7-8700K, and motherboard, an Asus Prime Z370-A.
...
Given the scale of the specifications, all three unannounced graphics cards potentially hail from the Quadro or Tesla families -- more likely the latter. What we can infer based on the specs shared on Geekbench is that the cards target workstation and data center users as opposed to gamers. The graphics cards are rumored to be based on Nvidia's upcoming Ampere microarchitecture using TSMC and Samsung's 7nm facilities.
#14
https://www.tomshardware.com/news/nvidia...nouncement
Quote:On May 14 at 6 a.m. PT, Nvidia's CEO Jensen Huang will host the GTC 2020 keynote on YouTube. Now although the announcement of this keynote doesn't specifically mention Ampere, it would be a major surprise if we didn't hear about the next-generation graphics architecture.
...
Although an Ampere announcement isn't totally guaranteed, Nvidia did say we should "Get amped for latest platform breakthroughs in AI, deep learning, autonomous vehicles, robotics, and professional graphics" — a not so subtle hint of Ampere if ever we saw one.
#15
https://techreport.com/news/3470335/nvid...te-may-14/
Quote:And while it’s possible that Huang could reveal a GeForce RTX 3070 or 3080 at the event, it’s unlikely. The GeForce cards are a big enough product on their own that shoehorning them into a bigger presentation is a poor idea. The next generation of RTX cards will likely be revealed later this year and closer to launch. TweakTown notes that we’re likely to see a Tesla or Quadro first.
#16
https://www.tomshardware.com/news/nvidia...u-dgx-a100
Quote:In less than two weeks, Nvidia will host its GTC keynote, where we're hoping to hear more about Ampere. But while we're waiting, we're already seeing hints of the next-gen graphics architecture in a filed trademark for the DGX A100, as spotted by hardware leaker Komachi Ensaka.

The most noteworthy part of this name is A100. The name falls in line with the two most recent Tesla GPUs. The Pascal architecture-based GPU was the P100, with the P standing for Pascal, while the V100 used Volta. The A100 naming points to an Ampere-based Tesla GPU.
#17
https://www.tomshardware.com/news/nvidia...teaser-gtc
Quote:Today, Nvidia uploaded a video to YouTube showing CEO Jensen Huang cooking up something spicy (the video is unlisted at the moment, so kudos go to Andreas Schilling for spotting it):

In the brief clip, Jensen wants to show us something and pulls it out of the oven while the cameraman is hardly able to contain his laughter.

"Okay, ladies and gentlemen, what we've got here is the world's largest graphics card," the exec says.

There's little point in speculating at this point, but this is almost certainly an Ampere-based Tesla graphics card packed with eight GPUs and tied together with an NVLink interconnect. We suspect the GPUs are Nvidia A100s, which haven't been formally announced but are expected to succeed the Pascal-based P100 and Volta-based V100.

https://www.techpowerup.com/266986/nvidi...his-ampere
Quote:Update May 12th, 5 pm UTC: NVIDIA has since listed the video; it is no longer unlisted.

https://www.techpowerup.com/266982/graph...ower-logic
Quote:Power Logic, a graphics card cooling solution OEM, in an interview with Taiwan tech industry observer DigiTimes, commented that it expects graphics card shipments to rise in the second half of 2020, on the backs of new product announcements from both NVIDIA and AMD, as well as HPC accelerators from the likes of Intel and NVIDIA. NVIDIA is expected to launch its "Ampere" based GeForce RTX 30-series graphics cards, while AMD is preparing to launch its Radeon RX 6000-series "Navi 2#" graphics cards based on the RDNA2 graphics architecture. Power Logic has apparently commenced prototyping certain cooling solutions, and is expected to begin mass-production at its Jiangxi-based plant towards the end of Q2-2020; so it could begin shipping coolers to graphics card manufacturers in the following quarters.
#18
https://www.techpowerup.com/267079/nvidi...t-7nm-chip
Quote:Not long ago, Intel's Raja Koduri claimed that the Xe HP "Ponte Vecchio" silicon was the "big daddy" of Xe GPUs, and the "largest chip co-developed in India," larger than the 35 billion-transistor Xilinx VU19P FPGA co-developed in the country. It turns out that NVIDIA is in the mood for setting records. The "Ampere" A100 silicon has 54 billion transistors crammed into a single 7 nm die (not counting the transistor counts of the HBM2E memory stacks).

https://www.tomshardware.com/news/nvidia...hics-cards
Quote:As reported by MarketWatch, Nvidia CEO Jensen Huang has confirmed in a media briefing prior to GTC 2020 that the chipmaker will use the latest Ampere microarchitecture for all of its next-generation graphics cards.
...
Unfortunately, Nvidia didn't reveal any details about Ampere-powered GeForce graphics cards. Nevertheless, Huang was quoted saying that "there's great overlap in the architecture, but not in the configuration."

https://www.techpowerup.com/267098/nvidi...production
Quote:NVIDIA today announced that the first GPU based on the NVIDIA Ampere architecture, the NVIDIA A100, is in full production and shipping to customers worldwide.
#19
https://techreport.com/news/3470558/nvid...enter-gpu/
Quote:Nvidia designed the A100 GPU for data centers, rather than traditional PCs. Jensen Huang noted during his GTC keynote that cloud services are surging and said that he expects the Ampere line to “do remarkably well,” calling it Nvidia’s best data center GPU ever.

https://www.techpowerup.com/267171/atos-...r-core-gpu
Quote:Atos, a global leader in digital transformation, today announces its new BullSequana X2415, the first supercomputer in Europe to integrate NVIDIA's Ampere next-generation graphics processing unit architecture, the NVIDIA A100 Tensor Core GPU. This new supercomputer blade will deliver unprecedented computing power to boost application performance for HPC and AI workloads, tackling the challenges of the exascale era. The BullSequana X2415 blade will increase computing power by more than 2x and optimize energy consumption thanks to Atos' 100% highly efficient water-cooled patented DLC (Direct Liquid Cooling) solution, which uses warm water to cool the machine.

Forschungszentrum Jülich will integrate this new blade into its booster module, extending its existing JUWELS BullSequana supercomputer and making it the first system worldwide to use this new technology. The JUWELS Booster will provide researchers across Europe with significantly increased computational resources. Some of the projects it will fuel are the European Commission's Human Brain Project and the Jülich Laboratories of "Climate Science" and "Molecular Systems". Once fully deployed this summer, the upgraded supercomputing system, operated under ParTec's ParaStation Modulo software, is expected to provide a computational peak performance of more than 70 petaflops, making it the most powerful supercomputer in Europe and a showcase for European exascale architecture.
#20
https://www.tomshardware.com/news/nvidia...ards-brand
Quote:As pointed out by Heise.de this week, Nvidia didn't use the Tesla moniker for its recently announced A100 GPU. Instead, the Ampere-based GPU is simply called the Nvidia A100. The publication suggested that Nvidia did this to avoid any confusion with Elon Musk's endeavor.

We reached out to Nvidia to confirm this information, and indeed, we won't be seeing any more of Nvidia's scientific and data center GPUs branded as Tesla.

In fact, according to the representative, Nvidia ditched the Tesla name back in 2018 with the Nvidia T4. However, Nvidia originally dubbed the GPU the Nvidia Tesla T4 before renaming it.

"We began making the change after introducing our Turing-based Nvidia T4 GPU in the fall of 2018 to eliminate any confusion with Tesla Automotive," an Nvidia rep told Tom's Hardware.

However, it's clear that not everybody noticed the change. That's not surprising though, given that Nvidia didn't make any announcement about the change.
#21
https://www.tomshardware.com/news/nvidia...lease-date
Quote:Nvidia recently hosted its online GTC keynote, and although the company did announce its Ampere architecture as expected, not a word was said about the consumer/gaming market. Today, a report from DigiTimes claimed that both AMD and Nvidia will launch their next generation of graphics cards in September.

DigiTimes didn't name its source, but giving the report more weight is TrendForce's press release this week that noted that the release of new graphics cards, as well as the PlayStation 5 and Xbox Series X, will elevate DRAM demand and that "Nvidia and AMD are planning to release new GPUs in 3Q20."
#22
https://techreport.com/news/3471044/nvid...090-leaks/
Quote:Despite how long this year has felt, summer is nearly here and fall is quickly approaching. That means that the wave of hardware coming this fall is going into production so that we can all dump our wallets out into the coffers of AMD, Nvidia, and the console makers. And once physical elements of hardware are under production, that means leaks. We’ve seen a few leaks in the past couple days purporting to be a GeForce “Ampere” RTX 3080 Founders’ Edition and while every rumor should be taken with enough salt to give you heartburn, these are starting to look credible.

The leaked photographs show a heatsink with a complex design that has fans on opposite sides of the shroud, presumably with one fan for intake and one for output.

Igor Wallossek of Igor's Lab says the pictures are most likely accurate; his sources tell him that an internal investigation has been launched within Nvidia into the source of the leaked cooler shots, and he adds that Nvidia may change the designs as a result of the leak. The shroud as it currently exists reportedly costs $150 to build, suggesting that these cards are going to be expensive. These Founders' Edition cards reportedly use a custom PCB that supports this dual-sided cooler, and Igor says this PCB will be different from the one going out to vendors like Gigabyte, EVGA, and Asus.

Nvidia is apparently looking to mix things up this time, too. Igor says there will be three Founders’ Edition cards at launch: a 3080, a 3080 Ti, and a 3090 Ti. All three cards will reportedly use the GA102 GPU, which is a change from the 20-series cards, which each used a different GPU.

https://www.extremetech.com/extreme/3115...gb-of-vram
Quote:The framing of these GPUs makes me wonder if Nvidia is launching its absolute top-end market stack first. With Pascal, Nvidia led with the GTX 1080 and 1070, with the 1080 Ti debuting months later. For Turing, Nvidia launched the RTX 2080 Ti, 2080, and 2070 simultaneously, but used a different GPU for each. This positioning sounds like Nvidia will lead with what we’d have typically called a “Titan / xx80 Ti / xx80” positioning as opposed to “xx80 Ti / xx80, xx70.”

The RAM loadout is also interesting. With consoles now packing 16GB of unified RAM and some high-end GPUs like the Radeon VII already featuring 16GB, I think there’s been a certain amount of assumption that 16GB would be the RAM capacity of choice next generation. This data suggests otherwise. The 24GB of VRAM on the 3090 Ti/Super is a nod to the card’s datacenter/workstation roots, not an attempt to move the market towards higher VRAM loadouts.
...
If the TDPs are to be believed, Nvidia is also finally leaving the 250W TDP point behind at the high end. Both Nvidia and AMD have flirted with higher-power GPUs before, but 250W has been an anchor point for GPUs in much the same way that 125W TDPs were an anchor for consumer CPUs for many years. Intel and AMD have both exceeded that mark in recent years, and if these rumors are accurate, we should expect GPUs to do so as well. This would free AMD to essentially pursue the same path.

An increased TDP isn’t necessarily surprising. Nvidia may have chosen to maximize performance at the top end, gambling that high-end gamers who would consider these cards in the first place have systems powerful enough to handle them. If you have an 850W – 1.2kW PSU and adequate cooling, a 250W CPU and 350W GPU won’t be anything you can’t handle in the first place.

No word on pricing, but the one thing you can bet these cards won’t be is cheap. Nvidia may position them competitively relative to where Turing or Pascal debuted if it feels AMD is a threat or if it’s worried about the impact of coronavirus on GPU sales, but I’d expect the company to hold the line on pricing to the greatest degree possible.
#23
https://www.techpowerup.com/268733/possi...cs-surface
Quote:Alleged specifications of NVIDIA's upcoming GeForce RTX 3090, RTX 3080, and next-generation TITAN graphics cards, based on the "Ampere" graphics architecture, surfaced in tweets by KatCorgi, mirroring an early-June kopite7kimi tweet; both are sources with a high hit-rate on NVIDIA rumors. All three SKUs will be based on the 7 nm "GA102" silicon, but with varying memory and core configurations, targeting three vastly different price-points. The RTX 3080 succeeds the current RTX 2080/Super, and allegedly features 4,352 CUDA cores. It features a 320-bit GDDR6X memory interface, with its memory ticking at 19 Gbps.

The RTX 3090 is heir-apparent to the RTX 2080 Ti, and is endowed with 5,248 CUDA cores, 12 GB of GDDR6X memory across a 384-bit wide memory bus clocked at 21 Gbps. The king of the hill is the TITAN Ampere, succeeding the TITAN RTX. It probably maxes out the GA102 ASIC with 5,326 CUDA cores, offers double the memory amount of the RTX 3090, at 24 GB, but at lower memory clock speeds of 17 Gbps. NVIDIA is expected to announce these cards in September, 2020.
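
For reference, peak memory bandwidth follows directly from bus width and per-pin data rate; applying that to the rumored SKUs (the 384-bit bus for the TITAN is my assumption, since it supposedly maxes out GA102):

Code:
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits * gbps / 8

for name, bus, rate in (("RTX 3080", 320, 19),
                        ("RTX 3090", 384, 21),
                        ("TITAN Ampere", 384, 17)):  # 384-bit assumed for TITAN
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
# RTX 3080: 760 GB/s, RTX 3090: 1008 GB/s, TITAN Ampere: 816 GB/s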
#24
https://www.techpowerup.com/269328/asus-...-ti-leaked
Quote:Here's possibly the first picture of an ASUS ROG Strix GeForce RTX 3080 Ti graphics card, which not only confirms NVIDIA's nomenclature for its next-generation GeForce RTX graphics cards, but also provides fascinating insights into the direction ASUS is taking with its next-generation ROG Strix graphics cards. The design language involves matte black metal surfaces accented by brushed metal elements that conceal more RGB LED elements. ASUS's Axial Tech fans do the heavy lifting along with a large aluminium fin-stack heatsink underneath. The mention of "RTX 3080 Ti" also casts a shadow of doubt over "RTX 3090" leading the lineup. We should learn more about what ASUS and NVIDIA have in store, as we inch closer to the September unveil of this series.

https://www.techpowerup.com/269347/nvidi...f-tsmc-7nm
Quote:NVIDIA's upcoming GeForce "Ampere" family of GPUs will be built almost entirely on Samsung's 8 nanometer silicon fabrication process that's derived from its 10 nm node, rather than TSMC's 7 nm process, according to kopite7kimi, a source with a high hit-rate on NVIDIA rumors in the past. The 8LPP silicon fabrication node by Samsung is an extension of the company's 10LPP (10 nm) node. Both have the same fin pitch, but reductions are made in the gate pitch (down by 6%), resulting in a transistor density of over 61 million/mm². Apparently NVIDIA's entire high-end product stack, including the GA102 silicon that powers at least three high-end consumer SKUs, is expected to be based on Samsung 8LPP.
#25
https://www.techpowerup.com/269957/the-c...mpere-gpus
Quote:Over the past few days, we've heard chatter about a new 12-pin PCIe power connector for graphics cards being introduced, particularly from Chinese-language publication FCPowerUp, including a picture of the connector itself. Igor's Lab also did an in-depth technical breakdown of the connector. TechPowerUp has some new information on this from a well-placed industry source. The connector is real, and will be introduced with NVIDIA's next-generation "Ampere" graphics cards. The connector appears to be NVIDIA's brain-child, and not that of any other IP or trading group such as the PCI-SIG, Molex or Intel. The connector was designed in response to two market realities: high-end graphics cards inevitably need two power connectors, and it would be neater for consumers to have a single cable than to wrestle with two; while lower-end (<225 W) graphics cards can make do with one 8-pin or 6-pin connector.
...
As for the power delivery, we have learned that the designers will also specify the cable gauge, and with the right combination of wire gauge and pins, the connector should be capable of delivering 600 Watts of power (so it's not 2*75 W = 150 W), and not a scaling of 6-pin. Igor's Lab published an investigative report yesterday with some numbers on cable gauge that helps explain how the connector could deliver a lot more power than a combination of two common 6-pin PCIe connectors.

Looking at the keying, we can see that it will not be possible to connect two classic six-pins to it. For example, pin 1 is square on the PCIe 6-pin, but on NVIDIA's 12-pin it has one corner angled. It also won't be possible to use weird combinations like 8-pin + EPS 4-pin, or similar—NVIDIA made sure people won't be able to connect their cables the wrong way.

On the topic of the connector's proliferation, in addition to PSU manufacturers launching new generations of products with 12-pin connectors, most prominent manufacturers are expected to release aftermarket modular cables that can plug into their existing PSUs. Graphics card vendors will include ketchup-and-mustard adapters that convert 2x 8-pin to 1x 12-pin, while most case/power manufacturers will release fancy aftermarket adapters with better aesthetics.
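
The 600 W figure in that piece is plausible from pin-current arithmetic alone. A rough sketch, assuming six of the twelve pins carry +12 V and the terminals run near the Micro-Fit 3.0 maximum of roughly 8.5 A with heavy wire gauge (NVIDIA's exact rating isn't public):

Code:
# Rough ceiling for the 12-pin connector (6 x +12 V pins, 6 x ground).
LIVE_PINS = 6
AMPS_PER_PIN = 8.5  # assumption: near the Micro-Fit 3.0 terminal maximum
VOLTS = 12.0

print(f"~{LIVE_PINS * AMPS_PER_PIN * VOLTS:.0f} W ceiling")  # ~612 W
print(f"naive two-6-pin reading: {2 * 75} W")  # the 150 W figure the quote rejects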
#26
https://www.tomshardware.com/news/nvidia...ark-result
Quote:Nvidia's Ampere A100 GPU has been out since May, but we had no idea of how it performed until today. Jules Urbach, founder and CEO of OTOY, has tweeted what seems to be the first benchmark of the A100.

The A100 scored 446 points on OctaneBench, thus claiming the title of fastest GPU to ever grace the benchmark. The Nvidia Titan V was the previous record holder with an average score of 401 points. The A100 delivered up to 11.2% higher performance than the Titan V. Urbach highlighted that the A100 run was with RTX disabled.

It doesn't come as a complete shock that the A100 would topple the Titan V if you look closely at the A100's composition. The GA100 silicon measures 826 square millimeters and flaunts 54.2 billion transistors, which is possible thanks to TSMC's 7nm FinFET manufacturing process. The silicon comes equipped with 128 streaming multiprocessors (SMs), amounting to 8,192 CUDA cores. The A100 doesn't leverage the full die, but its specifications are impressive nonetheless.
...
OctaneBench benchmarks graphics cards with the OctaneRender, and one of its requirements is Nvidia CUDA. Therefore, you won't find any Radeon GPUs from the Red Team on the leaderboard. You will find a generous lot of GeForce, Quadro and Tesla devices on the list though.

The GeForce RTX 2080 Ti ranks 14 on the OctaneBench leaderboard with an average score of 302. The A100 is up to 47.7% faster. Keep in mind that GA100 silicon is tailored for Nvidia's data center products. It's delusional to think it will make its way to Nvidia's forthcoming consumer graphics cards, presumably dubbed RTX 3080 and RTX 3090. The A100 is the successor to the GV100 (Volta), so it could end up in a Titan GPU.
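
The numbers in that article hang together. The density and core-count math follows from the quoted silicon description, reconciles with the 6,912-core card spotted earlier in the thread (the shipping A100 enables 108 of GA100's 128 SMs), and both performance percentages check out against the scores:

Code:
# GA100 silicon sanity checks from the quoted figures.
transistors, die_mm2 = 54.2e9, 826
CORES_PER_SM = 64  # FP32 CUDA cores per GA100 SM
print(f"density: {transistors / die_mm2 / 1e6:.1f} M transistors/mm^2")  # ~65.6
print(f"full die (128 SMs): {128 * CORES_PER_SM} CUDA cores")            # 8192
print(f"A100 (108 SMs):     {108 * CORES_PER_SM} CUDA cores")            # 6912

# OctaneBench relative performance from the quoted scores.
a100, titan_v, rtx_2080_ti = 446, 401, 302
print(f"A100 vs Titan V:     +{(a100 / titan_v - 1) * 100:.1f}%")        # +11.2%
print(f"A100 vs RTX 2080 Ti: +{(a100 / rtx_2080_ti - 1) * 100:.1f}%")    # +47.7%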
#27
https://www.techpowerup.com/270821/video...ce-amperes
The source is WCCFTech, so take it with a grain of salt.
Quote:NVIDIA's GeForce RTX 20-series "Turing" graphics card series did not increase video memory sizes in comparison to GeForce GTX 10-series "Pascal," although the memory itself is faster on account of GDDR6. This could change with the GeForce RTX 30-series "Ampere," as the company looks to increase memory sizes across the board in a bid to shore up ray-tracing performance. WCCFTech has learned that in addition to a variety of strange new memory bus widths, such as 320-bit, NVIDIA could introduce certain higher variants of its RTX 30-series cards with video memory sizes as high as 20 GB and 24 GB.

Memory sizes of 20 GB or 24 GB aren't new for NVIDIA's professional-segment Quadro products, but it's certainly new for GeForce, with only the company's TITAN-series products breaking the 20 GB-mark at prices due north of $2,000. Much of NVIDIA's high-end appears to be resting on segmentation of the PG132 common board design, coupled with the GA102 silicon, from which the company could carve out several SKUs spaced far apart in the company's product stack. NVIDIA's next-generation GeForce "Ampere" family is expected to debut in September 2020, with product launches in the higher-end running through late-Q3 and Q4 of 2020.

https://www.techpowerup.com/270837/nvidi...ecountdown
Quote:NVIDIA today shared the first real teaser in what seems to be the start of the Ampere marketing push. A post via Twitter shared an image showing an explosion of cosmic proportions, with a "#theultimatecountdown" tag alongside the "21 days. 21 years" tagline. This is a likely throwback to August 31st, 1999, when NVIDIA launched its first GeForce-branded graphics card - the GeForce 256 - setting it on its journey to become today's most successful dedicated graphics card maker.

Following the teaser's logic, we should expect some very interesting announcements from NVIDIA come August 31st, 2020 - and with Ampere around the corner, it's highly unlikely we'll be hearing about anything other than that.
#28
https://techreport.com/news/3472835/nvid...benchmark/
Quote:There’s no question that the RTX 30-series cards from Nvidia–or whatever they end up calling them–will be a nice boost over the RTX 20-series cards, but details about the card ahead of Nvidia’s probable August reveal are sparse. Now, though, the RTX 3080 has surfaced (in a now-removed post) on Userbenchmark, giving us a potential peek at the card’s specs.

Spotted by Twitter user Rogame, the potential 3080 has a device ID of "10DE 2206." The listing shows the card with 10 GB of GDDR6X VRAM running at 4750 MHz across a 320-bit bus interface (for about 16 Gbps throughput). The GPU on the card sports a frequency up to 2.1 GHz, too.
...
Also on the Nvidia front, rumors on pricing are starting to leak, too. According to a user posting on ChipHell, we're looking at prices approaching $2,000 US on the RTX 3090 card (via TechPowerUp). A user posted a screenshot purporting to be from the internal plans of Nvidia AIB partner Colorful. The company will be releasing two RTX 3090 cards, the screenshot shows: an air-cooled Vulcan and a liquid-cooled Neptune. The documentation has the Neptune model listed at CNY (Chinese Yuan) 12,999 ($1,875 US) and the Vulcan X OC at two prices, CNY 13,999 and CNY 12,000, or $2,000 and $1,730. It's unclear which of those prices is real.

If accurate, though, it gives us an eye into the potential price of the RTX 3090. The supposed -90 name would be a new product category for Nvidia. GeForce cards have typically come in 3-4 levels, usually -70 and -80 to start, with -60 and -50 following later. It's possible that the RTX 3090 is meant to replace the "Titan" cards that have capped off each generation of Nvidia cards, to make it clearer where each card sits in relation to the others. If it is a Titan RTX replacement, though, $2,000 would be a steal. The current Titan RTX has never, to the best of my knowledge, retailed for south of $2,000.

This is all rumor, and we’ll probably know a lot more in a couple weeks when Nvidia is most likely going to reveal the RTX 30-series.
#29
https://www.tomshardware.com/news/nvidia...-on-camera
Quote:Twitter user @GarnetSunset has shared two photographs of one of Nvidia's looming Ampere graphics cards. The markings on the backplate are barely visible, but the mysterious graphics card appears to be the GeForce RTX 3090.

It's not the first time that we've seen the peculiar design, but the graphics card is simply massive when compared to the GeForce RTX 2080. According to the images, the GeForce RTX 3090 occupies up to three PCI slots, as evidenced by the I/O bracket. It remains to be seen whether aftermarket models will follow suit, though. If so, we can see graphics cards with included AIO liquid coolers getting more popular, or enthusiasts simply slapping a waterblock on the graphics card and rolling with a full liquid-cooling system.

The triple-slot design certainly raises the question of whether Ampere will pull a lot of power and, thus, whether that's why it has such a beefy heatsink. Early rumors were already floating around that the flagship Ampere graphics card could debut with a 350W TDP (thermal design power). Then the subject of the new 12-pin PCIe power connector emerged and added more fuel to the fire.

https://www.extremetech.com/computing/31...e-a100-gpu
Quote:Microsoft is deploying Nvidia’s new A100 Ampere GPUs across its data centers, and giving customers a massive AI processing boost in the process. Get it? In the– (a sharply hooked cane enters, stage right)

Ahem. As I was saying. The ND A100 v4 VM family starts with a single VM and eight A100 GPUs, but it can scale up to thousands of GPUs with 1.6Tb/s of bandwidth per VM. The GPUs are connected with a 200GB/s InfiniBand link, and Microsoft claims to offer dedicated GPU bandwidth 16x higher than the next cloud competitor. The reason for the emphasis on bandwidth is that the total available bandwidth often constrains AI model size and complexity.
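
The quote mixes gigabytes and terabits, but the figures reconcile if each GPU's link is 200 gigabits per second (HDR InfiniBand), which is my reading of the spec:

Code:
# Reconciling the quoted bandwidth figures for the ND A100 v4 VM.
# Assumption: "200GB/s InfiniBand link" means 200 Gb/s HDR per GPU.
gpus_per_vm = 8
link_gbps = 200  # gigabits per second, per GPU
print(f"{gpus_per_vm * link_gbps / 1000} Tb/s per VM")  # 1.6 Tb/s, as quoted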
#30
https://www.tomshardware.com/news/season...s-850w-psu
Quote:Update, 8/23/2020 11:45am PT: Added more pictures posted by Seasonic to bilibili forums.

Original Article:

The countdown to Nvidia's Ampere GeForce GPUs has started, and meanwhile, the rumor mill is working as hard as it can to get you excited. We recently caught wind of a rumor detailing Nvidia's new 12-pin power connector and verified through our sources that such a cable does exist, and now a new picture of a 12-pin cable has surfaced - and it's purportedly posted by Seasonic, a PSU manufacturer.

The posting was spotted by Twitter user HXL, who found the image on the bilibili forums. It shows a Seasonic cable next to its box, adapting dual 8-pin connectors into a single 12-pin connector. The "Nvidia 12-pin PCIe Molex Micro-Fit 3.0 Connector" is 750 mm long, and the box notes "It is recommended to use a power supply rated 850 W or higher with this cable."

As you can see above, the 12-pin micro connector is small, being not much larger than a single one of the 8-pins on the other end. This cable features two male 8-pin connectors; it's intended to plug straight into the modular ports on a Seasonic PSU -- rather than to be used as an adapter on your existing 8-pin cables.
...
It's unclear whether this new 12-pin connector will make it to the entire Ampere GeForce line. According to Google Translate, Seasonic notes "Is it really unsure, dare to send because there is no NDA. If it is true, the power supply manufacturer will increase the cost of the bonus line, and Lao Huang will not give us a penny. It is currently only used for testing"

The note on the box says "It is recommended to use a power supply rated 850 W or higher with this cable," but that might only be needed for the most power-hungry cards, as we could imagine it would upset quite a large group of people if something like the RTX 3070 would also need a near-kilowatt PSU.

That being said, it's also possible that the note is just there as a warning. After all, we can't deny that a single 12-pin connector would be a much more elegant solution than multiple connectors, especially if you decide to indulge in individually-sleeved cables.
#31
https://www.tomshardware.com/news/msi-re...hics-cards
Quote:As PC Gamer reported, the Eurasian Economic Commission (EEC) published a very interesting entry today. Apparently, MSI has registered 29 different Nvidia graphics cards based on the next-gen Ampere architecture with the regulatory body. These could be MSI's takes on the GeForce RTX 3090, RTX 3080, and RTX 3070.

Instead of publicly exposing the model names for the graphics cards, MSI cunningly filed their part numbers. It's a good tactic, since hardware sleuths are constantly on the lookout for new model name listings, but almost no one pays attention to the part numbers.
...
As mentioned, Nvidia has a big GeForce-related online event scheduled for September 1. If you believe in coincidences, Nvidia announced the GeForce 256 graphics card on August 31, 1999. So this year marks the 21st anniversary of "the world's first GPU," as Nvidia marketed it. There's simply no better time or a bigger stage to announce Ampere.

At this point, it's pretty much anybody's guess which models will star in Nvidia's Ampere show. The GeForce RTX 3090 and RTX 3080 are strong candidates, although we wouldn't be surprised if the RTX 3070 also gets in on the fun.

https://www.techpowerup.com/271359/ek-te...-at-launch
Quote:EK Waterblocks via Facebook teased availability of its custom-designed, custom-fitting watercooling solutions for NVIDIA's RTX 3000 series. Usually, users have to wait for a while before aftermarket cooling solutions become available for the latest and greatest; but apparently, not anymore. In response to a user question on Twitter on whether the company would have 3000 series blocks available on launch day, the official EK handle answered that "we'll have some things ready at or close to launch".

This is surely good news for users who have the economic power to go after NVIDIA's halo products (which, if reports about this graphics card generation's costs are correct, are bound for a significant upward movement), especially considering the most recent leaks painting the RTX 3090 as quite the three-slot behemoth.

https://www.techpowerup.com/271367/nvidi...e-pictured
Quote:Here's the first picture of an NVIDIA "Ampere" GA102 GPU die. This is the largest client-segment implementation of the "Ampere" architecture by NVIDIA, targeting the gaming (GeForce) and professional-visualization (Quadro) market segments. The "Ampere" architecture itself debuted earlier this year with the A100 Tensor Core scalar processor that's winning hearts and minds in the HPC community faster than ice cream on a dog day afternoon. There's no indication of die-size, but considering how tiny the 10.3 billion-transistor AMD "Navi 10" die is, the GA102 could come with a massive transistor count if its die is as big as that of the TU102. The GPU in the picture is also a qualification sample, and was probably pictured off a prototype graphics card. Powering the GeForce RTX 3090, the GA102-300 is expected to feature a CUDA core count of 5,248. According to VideoCardz, there's a higher trim of this silicon, the GA102-400, which could make it to NVIDIA's next halo product under the TITAN brand.
#32
https://www.tomshardware.com/news/nvidia...-connector
Quote:If you've been following the hard-to-miss news about the upcoming Nvidia Ampere graphics cards, you'll know that the rumors point to a few unusual design choices. There is talk of a 12-pin power connector and a whole new PCB and cooler design that is radically different from before. Now, in a new video detailing the design philosophy, Nvidia has confirmed the new cooler and 12-pin power connector, subtly explaining why certain decisions have been made.
...
"Whenever we talk about GPU performance, it all comes from the more power you give, and you can dissipate, the more performance you get." says Gabriele Gorla, Director of Systems Engineering at Nvidia "The biggest challenge, when you do a very high-end board and try to squeeze it into 6-7 inches, is that the power density becomes really really high"

So it looks like Nvidia had a handful of ambitions with the new cards, and challenges to solve in order to accomplish the goals. The team wanted to pump more power into the cards, and thus the cooler had to grow. But, this couldn't happen without shrinking the PCB to ensure the card in its entirety didn't grow too much, limiting the PCB itself to be 6-7 inches long.

As it turns out, that's where the new 12-pin power connector comes in. It may not seem like much, but it's smaller than the dual 6-8 pin connectors Nvidia previously used. And that's not only because of its physical dimensions, but because it looks like Nvidia is planning on mounting the connector perpendicular to the board — this couldn't have been done with the old PCIe power connectors.

Nvidia didn't explicitly detail too much, but the images provided in the video confirm the rumors. The PCB will be quite short and have a triangular bite taken out at the end to make room for a fan in the cooler that can pull the air in through the bottom, and out the top end of the card, tapping into the airflow path to be exhausted by the rear case fan.

Given that the RTX 3090 is the first card to come from Nvidia with the x90 nomenclature since the GTX 690, we're expecting the halo GPU to pack an incredible punch — it's almost certainly going to be a card that Nvidia builds to show off what it can do, but will in practice be far too expensive for most consumers.

Whether this new cooler design with the smaller PCB and new power connector will make it to the more mainstream models, such as the RTX 3070 and RTX 3080, remains to be seen. If it does, though, there is no cause for concern, as Nvidia also stated in this video that adapters will be provided — so no, you will not need a new power supply for the Ampere graphics cards, unless you're going for the halo card and need more watts.

However, it must be noted that these rumors are all about the Nvidia reference card, so they don't necessarily tell us anything about the custom cards that make up a large chunk of the market. Nvidia's board partners will undoubtedly have their own designs, possibly sticking to the classic card design with roomier PCBs and the old-style PCIe power connectors. As noted in Seasonic's power cable leak, the 12-pin cable from the PSU manufacturer was only meant to be used with PSUs of 850W and above, so I believe there's a good chance that the 12-pin connector is only a thing on the halo GPU.

https://www.techpowerup.com/271410/nvidi...-confirmed
Quote:Although the video does not reveal any picture of the finished product, the bits and pieces of the product's wire-frame model and the PCB wire-frame confirm the design of the Founders Edition, which has been extensively leaked over the past few months. NVIDIA mentioned that all its upcoming cards that come with the 12-pin connector include free adapters to convert standard 8-pin PCIe power connectors to 12-pin, which means there's no additional cost for you. We've heard from several PSU vendors who are working on adding native 12-pin cable support to their upcoming power supplies.

The promise of backwards compatibility has further implications: there is no technical improvement—other than the more compact size. If the connector works through an adapter cable with two 8-pins on the other end, its maximum power capability must be 2x 150 W, at the same current rating as defined in the PCIe specification. The new power plug will certainly make graphics cards more expensive, because it is produced in smaller volume, thus driving up BOM cost, plus the cost for the adapter cable. Several board partners hinted to us that they will continue using traditional PCIe power inputs on their custom designs.
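
TechPowerUp's backwards-compatibility argument is straightforward PCIe spec arithmetic:

Code:
# Power ceiling implied by the bundled 2x 8-pin adapter.
EIGHT_PIN_W = 150  # PCIe spec rating per 8-pin connector
print(f"12-pin fed via the adapter: {2 * EIGHT_PIN_W} W max")  # 300 W
# The connector itself can carry far more (see the ~600 W pin math earlier),
# but fed from two 8-pins it inherits the 2 x 150 W specification limit.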
#33
https://www.tomshardware.com/news/nvidia...-dollar499
Quote:After a long but worth-it wait, Nvidia has announced the chipmaker's spanking new GeForce RTX 30-series (codename Ampere) graphics cards. The GeForce RTX 3090 and GeForce RTX 3080 will be available on September 24 and 17, respectively, with the GeForce RTX 3070 coming at a later date in October. These three cards boast impressive specs that will vie for our Best Graphics Cards for Gaming list.

Built on a custom Samsung 8nm process, Ampere comes equipped with Nvidia's second-generation Ray Tracing cores and third-generation Tensor cores. The GeForce RTX 3090, RTX 3080 and RTX 3070 are also the first Nvidia consumer graphics cards to come with PCIe 4.0 support. We have the architectural deep dive details here.
...
The GeForce RTX 3090 is aimed at 8K gaming at 60 frames per second, so it comes equipped with 10,496 CUDA cores that feature a boost clock up to 1.7 GHz. There's also 24GB of 19.5 Gbps GDDR6X memory across a 384-bit memory interface. The graphics card has a 350W TDP (thermal design power) and requires two 8-pin PCIe power connectors.

Professional and hardcore enthusiasts will be delighted to know that the GeForce RTX 3090 is the only Ampere-based graphics card to support SLI through Nvidia's NVLink connector. This opens the door to pairing up two of these beasts together for an awesome compute machine. The GeForce RTX NVLink Bridge costs $79.99 and will be available on the same day as the GeForce RTX 3090.
...
Nvidia touts the GeForce RTX 3080 as the flagship Ampere SKU. The chipmaker is promising up to double the performance over the previous GeForce RTX 2080. According to Nvidia, the GeForce RTX 3080 is capable of providing a perfect 60 frames per second gaming experience at 4K even with ray tracing enabled.
...
Ultimately, the GeForce RTX 3070 will continue to be the sweet spot for gamers. The graphics card starts at $499 and delivers higher performance than last generation's flagship GeForce RTX 2080 Ti at half the price. We have the deep-dive details in our Nvidia GeForce RTX 3070: RTX 2080 Ti Performance at $499 article.
...
Regardless of the model, the Ampere-based graphics cards offer three DisplayPort 1.4a outputs and one HDMI 2.1 port. Gone is the VirtualLink port that debuted with Turing. It doesn't come as a huge surprise since the standard never really caught on.

Given the power requirements, Nvidia recommends a 750W power supply for the GeForce RTX 3090 and GeForce RTX 3080, while the GeForce RTX 3070 can get by with a 650W unit. Nvidia's recommendations are based around a high-end system with the Intel Core i9-10900K, so you could get away with a power supply that has a lower capacity than suggested.

https://www.tomshardware.com/news/nvidia...ng-we-know
Quote:We don't have actual hard performance data yet, and we won't until some time in October. However, everything on paper makes the GeForce RTX 3070 look like an easy recommendation. Performance has increased substantially over the previous generation cards, and price has remained the same.

If you've been skeptical of ray tracing, and whether it's even necessary for games, now it really doesn't matter. You can get higher ray tracing performance than Nvidia's first generation hardware, and at the same time you'll get a major boost in performance for traditional rendering modes. That's our expectation at least; stay tuned for the full review next month.

We understand why a lot of people skipped the Turing generation of GPUs. They were expensive, and not really that much faster than Pascal. Besides, if you already bought a GTX 1070, GTX 1080, or GTX 1080 Ti, there wasn't a real need to upgrade. Skipping a generation in hardware — and skipping the first generation of any new technology — is rarely a bad idea. But now, based on what we've seen of the specs, features, and pricing? This is going to be a very tasty card.

Nvidia has thrown down the gauntlet with the GeForce RTX 3070. What remains to be seen is whether AMD can match or even beat Nvidia when it comes to the next generation GPUs. Big Navi might end up being just as 'big' as Ampere. We should know more by the time these cards launch in October.

https://www.extremetech.com/gaming/31448...ere-turing
Quote:There’s a lot more to talk about with the RTX 3000 series, including some of the new software capabilities and the cooling system. Look for that content in a bit. Overall, my initial impression of the positioning on these GPUs is very positive — Nvidia clearly is bracing for a fight when RDNA2 launches, and the claimed 1.9x performance improvement over Turing is going to be sharply compared and contrasted against AMD’s claimed 1.5x further improvement over RDNA. On paper, Nvidia now leads if you apply these claims against existing GPU power draw, but broad targets can’t compete with actual measurements where efficiency is concerned.

https://www.techpowerup.com/271626/fsp-r...-connector
Quote:FSP today rolled out the "Ampere Cable," an accessory that converts two 8-pin PCIe inputs from modular PSUs to the 12-pin Molex MicroFit connector standard introduced to the client segment by NVIDIA and its GeForce RTX 3000 "Ampere" Founders Edition graphics cards. The cables are compatible with any modular PSU that has 8-pin 12 V connectors that put out 8-pin PCIe. It should work with all FSP branded modular PSUs, as well as modular PSUs of other brands that use Fortron as their OEM source. FSP is Fortron's DIY channel brand. FSP is setting up web-forms for existing owners of FSP power supplies to claim a free unit.
#34
https://www.tomshardware.com/news/nvidia...d-best-buy
Quote:The launch of Nvidia's new Ampere GPUs is imminent, and with that, listings are appearing every day with more and more RTX 3000 series cards from various manufacturers. This time Best Buy and Amazon have several new listings ready for the RTX 3090 release.

https://www.tomshardware.com/news/geforc...0-scalpers
Quote:Nvidia's Ampere milkshake has brought all the kids out to play, including scalpers. As evidenced by a now removed Reddit discussion, eBay merchants are selling preorders of the GeForce RTX 3080 for extortionate prices.
...
Shocking as it may sound, eBay allows sellers to put up listings of products that he or she doesn't physically have under the presale policy. The platform's rules state that "Presale listings must clearly state that they are 'presale' in the title and description, and guarantee delivery within 30 days of purchase." As long as the postings abide by the indications, eBay will allow them to stay online.

That's a double whammy, then. If you buy an overpriced RTX 3080 off of eBay, there's a good chance you're just going to sit around waiting for the seller to successfully acquire the cards and then resell it to you. There's a very good chance you could buy a card directly before the eBay time window on presales closes, especially if you're willing to pay a 50% or higher markup.

We don't recommend consumers blindly purchase a product until a proper review is out. And truth be told, unless the eBay seller has contacts at retailers or lightning-speed reflexes to click that buy button, his or her odds of snatching up a GeForce RTX 3080 on September 17 are pretty much the same as yours.

https://techreport.com/news/3473104/what...ia-rtx-io/
Quote:After months of teases and waiting, Nvidia finally unveiled its new generation of graphics cards, the RTX 3070, 3080, and 3090. Alongside the announcement of these impressively powerful and comparatively inexpensive cards, Nvidia talked about some tech coming alongside them. Perhaps the most interesting of these was RTX IO, Nvidia’s answer to the growing sizes of games and capabilities of NVMe SSDs.

Right now, your PC rarely, if ever, uses its NVMe SSD at full capacity. Unless you’re working in heavy content creation, you’re not getting the most out of it, and even then you might not be. The APIs that manage storage can’t keep up with the speeds NVMe drives are capable of, especially on PCIe Gen 4.

And so here comes RTX IO. Nvidia says that RTX IO is a “suite of technologies that enable rapid GPU-based loading and game asset decompression, accelerating I/O performance by up to 100x over traditional hard drives and storage APIs.”
...
This is going to be somewhat of a slow-burn technology, though. Nvidia hasn’t said for sure that it’ll be available on RTX 20-series cards, though that seems like it’s very possible. But it’ll also require greater adoption of NVMe SSDs as PC storage solutions and of PCIe Gen 4. It doesn’t seem like PCIe Gen 4 is required, though.

It’ll also require game developers to be on board, to some degree. This stuff works in the upcoming PlayStation 5 and Xbox Series X because developers can target the hardware and APIs in those systems directly. They know what hardware will be available and what that hardware is capable of. A game developer can’t develop a game for a person with an RTX card and a high-end NVMe SSD when there are people out there with AMD cards and games stored on rotational media. AMD will have to present its own IO solution, too, then. That also seems likely considering that AMD is behind the SoCs in the upcoming consoles.

For those of us with brand new systems, we'll likely see improvements from RTX IO and DirectStorage immediately. The improvements will only continue, though, as developers start expecting that decompression hardware to be present on GPUs and can build games with it in mind. It seems likely, too, that a suite of other technologies and APIs will begin to accompany this. Developers could have an SSD-optimized installation for those with the hardware to do it, and a bulkier standard installation for the rest of us. It could be something we enable manually or it could be automated.

There’s no doubt that Nvidia’s 30-series GeForce RTX video cards are a huge jump over the 20-series cards, but RTX IO could be the low-key game changer here in terms of how we play and enjoy games.

https://www.techpowerup.com/272072/nvidi...-confirmed
Quote:NVIDIA, in a GeForce community forums post by a staff member, announced that reviews of the GeForce RTX 3080 Founders Edition have been delayed to September 16, with the review NDA lifting at 6:00 AM Pacific Time. This NDA was originally slated to lift on September 14. According to a Reddit post by an NVIDIA representative, "NV-Tim," the delay was in response to certain reviewers requesting more time from NVIDIA as COVID-19 impacted their sampling logistics.

In other news, NVIDIA announced that its $499 (starting) GeForce RTX 3070 graphics card will be available on October 15, 2020. The performance segment graphics card is hotly anticipated by the gaming community as the other two products in the series—RTX 3080 and RTX 3090—are enthusiast segment products. NVIDIA claims that the RTX 3070 beats the RTX 2080 Ti in performance, which means the card should be capable of 1440p high refresh-rate gaming, and 4K UHD gaming at 60 Hz.

https://www.techpowerup.com/272122/nvidi...8gb-memory
Quote:Possible specifications of NVIDIA's next-generation flagship Quadro RTX professional graphics card leaked to the web. The SKU is possibly based on the same 8 nm "GA102" silicon as the GeForce RTX 3090, but features more of the silicon unlocked. It apparently features 10,752 CUDA cores, or exactly one TPC (two SMs) more than the RTX 3090. With 84 SM (42 TPC), the unnamed Quadro RTX should feature 84 RT cores, 336 Tensor cores, and 336 TMUs.

NVIDIA's choice for memory for the upcoming Quadro RTX flagship is interesting, as it's prioritizing memory size over bandwidth (which is more relevant in the professional visualization use-case dealing with large data sets). The card features 48 GB of conventional GDDR6 memory clocked at 16 Gbps over the chip's 384-bit wide memory interface, which should work out to 768 GB/s of memory bandwidth. The max GPU Boost frequency is pegged at 1860 MHz. There's no word on availability. Pictured below is the previous-gen Quadro RTX 5000.

https://www.techpowerup.com/272156/seaso...ies-owners
Quote:Seasonic today shared with us some additional details regarding its 12-pin Molex MicroFit 3.0 modular PSU cable, and how it plans to market it. If you've read our NVIDIA GeForce RTX 3080 Founders Edition Unboxing Preview, you'll know that NVIDIA ships its Founders Edition cards with a printed warning not to use any third-party adapters to convert 8-pin PCIe power connectors to the 12-pin connector. The warning also asks you to use the included adapter, and that using a third-party adapter would void the warranty. There's no clarity from NVIDIA on whether modular cables that plug directly into a modular PSU and put out 12-pin connectors strictly fit this description, as they're not "adapters."

Seasonic believes that its cable is a perfect match for NVIDIA's GeForce RTX 30 series graphics cards. The cable uses 16 AWG wires. Each 12 V pin of the connector is rated for 9 A current, which means the cable is capable of ferrying up to 540 W of power. On one end of the cable are two 8-pin 12 V connectors (which plug into the 12 V modular back-plane of your PSU), and on the other is one 12-pin connector. The cable is 75 cm in length. The cable is compatible with the company's Prime, Focus, and Core series modular/semi-modular PSUs.

If you own a Seasonic PSU (Prime/Focus/Core) and an NVIDIA GeForce RTX 30 series graphics card that has a 12-pin input, then you're eligible to receive the cable free of charge. Later this week, on Wednesday, the Seasonic website will have an online form with which you can request the cable. You'll have to provide the serial number and a photo of the PSU, and proof of purchase and serial number for the GeForce 30 graphics card.

Certain RTX 30 series graphics card SKUs ask for 750 W (or higher) PSUs in their system requirements, while certain other RTX 30 series cards mention 650 W. Be sure that the PSU you're seeking your modular cable for meets the system requirements of the graphics card you own. Seasonic mentioned that it has a limited supply of the cable and hasn't yet decided whether it wants to sell the cable on the open market. The company will make that call by year-end.
Reply
#35
https://www.tomshardware.com/news/rtx-3080-sold-out
Quote:Nvidia’s RTX 3080 launched earlier today, but unless you woke up as soon as the sales went live or, perhaps, had a bot that bought it for you, you wouldn’t know. We did our best to guide potential customers on where and how to buy the RTX 3080 yesterday afternoon, but the new holder of the Best Graphics Card crown is already sold out at every website we listed.

B&H’s entire site is down right now, in fact. And, according to customers, the cards sold out so fast that no amount of preparation would have let them get one.
...
Online retail is more important than ever right now, with many unable or unwilling to enter brick-and-mortar stores in the midst of the pandemic. And, unfortunately, that means we’re also more vulnerable to bots than ever, as well. As with the 1080 launch, we’d expect stock to start reappearing anywhere from 2 weeks to 2 months from now. Though we could also see the shortage lasting into next year, given the card's stark jump in performance and the potential for cryptocurrency miners to start adopting it as a new standard.

In the meantime, our friends over at PC Gamer reached out to Nvidia, which reportedly said the following:

We've just spoken to Nvidia: “We are seeing unprecedented demand for the RTX 3080. We have been in production since August and are making them as quickly as possible. Our NV team and partners are shipping more every day to etailers & retailers”

https://www.tomshardware.com/news/nvidia...-apologize
Quote:Nvidia's GeForce RTX 3080 GPU, one of the best graphics cards, launched today, and it sold out almost instantly. Nvidia has apologized for the situation regarding orders on its own website, and Newegg posted a statement to Twitter explaining what happened.

Nvidia put a statement up on its forum in a locked thread entitled "RTX 3080 Nvidia Store Availability," which, as of this writing, has been downvoted 126 times.
...
It further added that the company is manually reviewing orders in an attempt to thwart scalpers and bots, and that more cards are being shipped daily.
...
Newegg claims it got more traffic today than on Black Friday morning, and that it sold out in five minutes, even with bot protection. It encourages those who want an RTX 3080 to use its Auto Notify feature and check in regularly. In response, some Twitter users have claimed that cards had sold out before Auto Notify let them know about stock.

In a follow-up, Newegg added that the card would have sold out even faster if the site didn't have traffic issues, and added that Auto Notify isn't a guarantee.

EVGA, which sells its own cards, thanked followers on Twitter for patience after "overwhelming demand" and also recommended the use of a notify feature.
...
The cards also sold out almost immediately at Best Buy, B&H, Amazon and Microcenter, the latter of which was featured in photos that went semi-viral on social media with lines outside stores.

https://www.tomshardware.com/news/nvidia...rtage-2020
Quote:Since the Nvidia Ampere keynote, we've known the RTX 3080 and RTX 3090 are going to be absolute power suckers. Nvidia itself recommends a bare minimum of a 750W power supply for the RTX 3080 and RTX 3090, and some systems will need larger units still. Our testing backs that up, too; we observed up to a 335W average power draw from the 3080 alone. This means many of us will need new power supplies, unfortunately right in the middle of a PSU shortage that has sent pricing skyward on some models (more on that shortly).
...
Power Supply Requirements for Nvidia GeForce RTX 3090
For the Nvidia GeForce RTX 3090 PSU requirements, you'll want a 750W power supply if you pair the card with a mainstream Core i5/i7 processor or Ryzen 5/7 CPU. That requirement jumps to 850W for Intel Core i9 and AMD Ryzen 9 chips, with a peak of 1000W for Intel HEDT and AMD Threadripper platforms.

Power Supply Requirements for Nvidia GeForce RTX 3080
Mainstream Core i5/i7 and Ryzen 5/7 system PSU requirements stay the same at 750W for the Nvidia GeForce RTX 3080 as they are for the RTX 3090. However, you only need an 850W power supply for HEDT/Threadripper and Core i9/Ryzen 9 platforms.

Power Supply Requirements for Nvidia GeForce RTX 3070
Finally, for the Nvidia GeForce RTX 3070 PSU requirements, you'll "only" need 650W for Core i5/i7 & Ryzen 5/7 systems, 750W for Core i9/Ryzen 9 systems, and 850W for HEDT/Threadripper platforms.
...
If only it were that easy. Unfortunately, due to the pandemic, power supply volume has been at an all-time low, causing stock outages and prices to rise on some models. We touched upon the topic a month ago, and the cause of the PSU shortage is a multi-pronged issue:
...
Looking at 750W power supply listings on Newegg.com, it appears that stock for PSUs is okay today, but prices are still over-inflated on most units. So if you're one of those unlucky people who don't have a qualifying unit to complement your RTX 3080, be prepared to pay even more for your RTX 3080 upgrade than what you see on the GPU's sticker.

Supply chain issues can take months to resolve, and we may not be in the worst of the power supply shortage yet. Given the current state of the market and the expected rush of new PSU orders as enthusiasts build out new Ampere systems, it's possible we could see the supply situation worsen, or pricing increase further. We'll update as necessary.

https://www.tomshardware.com/news/amazon...ber-launch
Quote:The GeForce RTX 3070, which could be a serious contender for the best graphics card in the mid-range category, lands on October 15. Despite that, Amazon has already listed PNY's GeForce RTX 3070 8GB XLR8 Gaming Epic-X RGB Triple Fan Edition and GeForce RTX 3070 8GB Dual Fan, as spotted via @momomo_us.
...
Nvidia has disabled preorders with Ampere, meaning everyone gets a fair chance at snatching up a GeForce RTX 3070 come October 15.

https://www.tomshardware.com/features/nv...benchmarks
Quote:If you're thinking about purchasing the RTX 3080, we still maintain that having a good CPU is important, meaning at least a six-core Intel or AMD Ryzen processor. Of course, it's all relative as well. Maybe you're looking to upgrade your entire PC in the coming months, but you want to wait for Zen 3 to launch. Or maybe you just can't afford to upgrade everything at once. Depending on your current graphics card, making the jump to the 3080 might be warranted even on an older PC, especially if it's only going to be a temporary solution. How you want to upgrade is a personal decision, and it's entirely possible to stuff the RTX 3080 into an old 2nd Gen Core or AMD FX build if you want. We wouldn't recommend doing that, but it's possible.

While the biggest differences in performance occurred at lower resolutions and settings, it's also important to look at the bigger picture. The Core i9-9900K and 10900K might 'only' be 10% faster than a slower CPU, but that's almost a whole step in GPU performance. RTX 2080 Ti was 15-20% faster than the RTX 2080 Super, but it cost 70% more. If you're running an older GPU with an older CPU and you're planning to upgrade your graphics card, it's probably a good idea to wait and see what the GeForce RTX 3070 brings to the table, as well as AMD's Big Navi. You might not miss out on much in terms of performance while saving several hundred dollars, depending on the rest of your PC components.

As always, the sage advice for upgrades is to only make the jump when your current PC doesn't do what you want it to do. If you're pining for 4K ultra at 60 fps, or looking forward to games with even more ray tracing effects, the GeForce RTX 3080 is a big jump in generational performance. It's clearly faster than the previous king of the hill (the RTX 2080 Ti, or even the Titan RTX), at a substantially lower price. Just don't expect to make full use of the 3080 if you pair it up with an old processor.

https://www.extremetech.com/gaming/31509...erformance
Quote:Why open a discussion of Nvidia’s latest Ampere with a reminder of Turing’s launch flaws? Because Ampere, as near as I can tell, fixes every single one of them. Having taken such a negative stance on Turing at launch, it’s appropriate to revisit the question of whether I feel the same way about Ampere on its debut.

I do not.

ExtremeTech has not yet received an Ampere GPU for review, so I’m basing this opinion on coverage from our sister site PCMag, as well as HotHardware and Ars Technica. Sometimes, different reviewers have significantly different opinions on a product, but that’s not the case here. Everyone thinks Ampere is an amazing GPU.
...
Ampere offers performance uplifts of 1.5x – 1.8x in 4K gaming, with the degree of improvement depending on the title. If the 1080 Ti was the first GPU to offer uncompromising 4K performance, the RTX 3080 is the first GPU to offer good 4K performance with a chance of turning ray tracing on. Features like DLSS 2.0, while not strictly introduced with Ampere, make DLSS far more interesting (and useful when it comes to boosting game performance). At $700, the GPU isn’t cheap — but it’s a vastly better deal than the RTX 2080 represented at launch. Comparing backward against Pascal, the launch price on the RTX 3080 is higher, but performance is far better. The eventual Ampere GPUs that hit the $500 price point will still significantly outperform any GTX 1080.

As for AMD, RDNA2 is going to have to be something very special to compete against Nvidia at the top of the product stack. According to data at 3DCenter.de, the multiple-review average for the 5700 XT is ~50.5 percent of the RTX 3080’s performance. Because scaling is never perfectly linear, AMD will need a potent combination of increased core counts (80 CUs is the most common rumor, up from 40), increased clock speeds, and potentially some additional IPC gains from RDNA2 in order to cleanly beat the RTX 3080.

AMD has promised a 1.5x performance-per-watt improvement over RDNA, but that doesn’t tell us anything about what the average performance improvement will be. We can assume that doubling up core counts would roughly double performance assuming the GPU is balanced, but that’s only an approximation — and AMD would need additional gains from clock or efficiency to bring Big Navi in ahead of the RTX 3080.

AMD always has the option to position Big Navi as a strong option against the RTX 3070 and lower GPUs, similar to how the RX 5700 XT and 5700 were competitors for RTX 2060 and 2070. For now, however, Nvidia is decisively redefining high-end performance — and they’re doing it at better launch pricing this time around.

I haven’t had the ability to test one yet, but I trust the opinions of my colleagues. Ampere looks to be everything Turing wasn’t, I’m considering buying one out of my own pocket for AI upscaling work, and the GPU has no weak points that anyone has identified. Stellar work.
Reply
#36
https://www.tomshardware.com/news/how-th...aunch-mass
Quote:In an article published yesterday, PC Mag spoke to the people behind Bounce Alerts, as well as a few of their customers, about how the company's automated purchasing bots work. For the uninitiated, Bounce Alerts is a service that members can subscribe to for $75 a month, which gives them access to scripts that monitor store pages and automatically purchase items when they come in stock. The company first sprang up in the sneakers market, but there's nothing stopping savvy resellers from applying it to tech as well.
...
A Bounce Alerts member was a bit more candid with the publication, giving the magazine insight into the actual bot ordering process. The member said the bot works by running "an automated script to run basically from the product page to payment information and then to checkout." They added that it does have a noticeable flaw, though: the product page needs to stay live for the bot to keep working. "Whenever the site died [from too much traffic], I would have to restart the script and hope for it just to get through on the next one."
...
Nvidia declined to comment on Bounce Alerts’ bots when PC Mag reached out with questions, but did tell the publication that it is manually reviewing RTX 3080 orders to try to weed out scalpers and bots. The GPU maker also told the outlet that it does limit RTX 3080 orders to one per customer, though Bounce Alerts countered that it’s come up with ways to circumvent these limits.

With the pandemic limiting customers from just going to brick and mortar stores to snatch up purchases by hand, product launches are more vulnerable to online bot orders than ever. Like in high frequency stock trading, even the fastest, most diligent customers can’t hope to keep up with the speeds these bots have. If manufacturers want their products to actually make their way to people who intend to use them rather than just mark them up, they’re going to have to up their anti-scalping protections, and fast.

https://www.extremetech.com/gaming/31521...080-launch
Quote:Those who tried to purchase Nvidia’s Founders Edition cards found the Nvidia site to be unreliable. Even when they did manage to get a GPU in their carts, they would be unable to complete the transaction. As PC gamers started to simmer over the apparently botched launch, the resellers were gloating on Twitter.
...
Nvidia claims it has a policy of limiting purchases per customer, but it would appear Bounce Alerts found a workaround. There’s no reason Nvidia should be sending 42 order confirmations to one email — that’s pretty clear evidence something is up. Nvidia says it will go through and manually confirm orders, which we can only hope will cause these resellers to lose their ill-gotten merchandise. It hasn’t been specific about what, if anything, it will do to prevent more bot orders in the future. In the meantime, don’t buy overpriced video cards from resellers. The restocks will come, and you don’t want to reward this kind of operation.

https://www.tomshardware.com/news/rtx-30...ock-record
Quote:The GeForce RTX 3080, which is the current performance king of graphics cards, is also a stud in overclocking. The Ampere-powered graphics card (via Wccftech) has set new world records in 3DMark's Time Spy and Port Royal benchmarks.

Brazilian overclocking legend Ronaldo "Rbuass" Buassali pushed his Galax GeForce RTX 3080 SG to 2,340 MHz to secure the first place on the Port Royal leaderboard. Time Spy is a bit more demanding so the avid overclocker dropped the GPU core clock to 2,130 MHz. Nevertheless, it was fast enough to allow Buassali to set a new world record in Time Spy as well. Interestingly, the GeForce RTX 3080's memory remained untouched in both runs so when Buassali gets around to overclocking it, the scores should improve.

Despite the fact that the GeForce RTX 3080 went on sale yesterday, some manufacturers still haven't updated the product pages for their flagship models. With the information that we have so far, the MSI GeForce RTX 3080 Gaming X Trio 10G appears to be the fastest custom GeForce RTX 3080 on the market, with a boost clock that tops out at 1,815 MHz. So Buassali's overclocked GeForce RTX 3080 was running up to 28.9% faster than MSI's over-engineered model. However, the overclocker was likely using exotic cooling, such as liquid nitrogen, to keep his graphics card's temperatures in check.

https://www.tomshardware.com/news/report...until-2021
Quote:According to the outlet, the limited availability at launch wouldn't just affect the GeForce RTX 3080, which is the new king of gaming graphics cards, but the entire Ampere product stack. We can't say we didn't see it coming, though, since there were early signs of the GeForce RTX 30-series having a tight supply problem. For one, no retailer is accepting preorders for Ampere, which means everyone has the same chance of securing a graphics card at launch time. Sadly, this has also opened the floodgates for scalpers to capitalize on the situation with the GeForce RTX 3080 release. Furthermore, mainstream Ampere relies on a custom Samsung 8nm process node, which might require a bit of time for yields to improve. The global supply chain disruptions from the coronavirus pandemic likely play a role, too.

According to the Commercial Times report, the short supply originated in the second quarter of this year. Initially, the transition from Turing to Ampere slowed down the production schedule; then the shortage of other components stretched it further and negatively impacted delivery times. The lead time for Ampere is estimated to have increased by two to three weeks. There is hope that it will improve in the fourth quarter, though.

https://www.tomshardware.com/news/gigaby...0gb-models
Quote:Nvidia's been busy this quarter pumping out new Ampere GPUs. The RTX 3080 debuted this week, with the RTX 3090 and RTX 3070 close behind it. But it looks like Nvidia could be bringing even more Ampere cards to market. As spotted by VideoCardz, Gigabyte has leaked three unannounced GPUs: the RTX 3060 Ti, RTX 3080 20GB and RTX 3070 16GB.

In the model numbers, you can see the unknown SKUs featuring an "S" at the end of the Ampere numbering scheme. The "S" could stand for "Super," like last generation's refresh, or it could stand for "Ti." In any case, these GPUs will be more advanced models of their base-tier SKUs as they strive to be some of the best graphics cards available to PC gamers.
...
VideoCardz believes the RTX 3080 20GB and RTX 3070 16GB will arrive sometime after AMD's RDNA 2 launch, with the RTX 3060 Ti arriving earlier, more specifically in late October.

https://www.tomshardware.com/news/gigaby...blower-gpu
Quote:The GeForce RTX 3090, poised to be one of the fastest and best graphics cards on the planet, will hit stores on September 24 for $1,499. Gigabyte is one of the first, if not the only, brave vendor to release a GeForce RTX 3090 with a blower-style design.
...
However, the biggest concern we have with the GeForce RTX 3090 Turbo 24G is not the cooler itself, but Ampere's large appetite for power. The GeForce RTX 3090 is rated for 350W, so some will question whether a blower design can keep the beast under control. Manufacturers could more easily get away with a blower design on the GeForce RTX 3070, which is rated for 220W, but a blower on a GeForce RTX 3090 sounds like a gamble.
...
Not that it changes anything power-wise, but Gigabyte opted for two normal 8-pin PCIe power connectors rather than Nvidia's 12-pin PCIe power connector. The manufacturer conveniently placed both connectors at the rear of the graphics card.

https://www.techpowerup.com/272312/nvidi...be-a-novel
Quote:In other corners of the Internet, however, expectations were met and attempts flourished. These seem to have been mostly met by scalpers, though, so there is nothing idyllic in this particular painting - it's more akin to Edvard Munch's The Scream than it is to Vincent van Gogh's Starry Night. On eBay, an RTX 3080 card was allegedly sold for $70,000 - a particularly criminal act, if I've ever seen one. It's also common, right now, to see some of these going for prices ranging between $1,300 and $5,000 - and at this point, this writer feels he's almost out of metaphors for this particular situation. Apparently, a service named Bounce Alerts was used - it appears that most RTX 3080 orders were done through this service, which automatically bought as much RTX 3080 stock as it could from wherever it was sold. A user reported having acquired some 42 RTX 3080s from the NVIDIA store before stock ran out. There are even bots designed to bid on eBay sales so as to waste scalpers' time with orders that will never be fulfilled - a sort of poetic justice, if you will, though not of a kind Shakespeare himself would have conceived.
Reply
#37
https://www.tomshardware.com/news/nvidia...ted-supply
Quote:If there was any doubt about the GeForce RTX 3090's performance, Nvidia just confirmed it via a blog post today. The GeForce RTX 3090, which Nvidia is dubbing the big ferocious GPU or "BFGPU," is only 10% to 15% faster than the GeForce RTX 3080 at 4K gaming. The graphics card is sure to once again shake up our list of best gaming graphics cards, hopefully in a good way.

The GeForce RTX 3090 officially goes on sale tomorrow. The Founders Edition will retail at $1,499, and custom models could cost anywhere from $1,499.99 to $1,799.99. Don't get your hopes up just yet, though, since Nvidia itself has acknowledged that stock will be extremely limited.

In the post, Nvidia stated that "Since we built GeForce RTX 3090 for a unique group of users, like the TITAN RTX before it, we want to apologize upfront that this will be in limited supply on launch day. We know this is frustrating, and we’re working with our partners to increase the supply in the weeks to come."

https://www.techpowerup.com/272492/nvidi...d-pictured
Quote:Here is the first picture of an alleged next-generation Quadro RTX graphics card based on the "Ampere" architecture, courtesy of YouTube channel "Moore's Law is Dead." The new Quadro RTX 6000-series shares many of its underpinnings with the recently launched GeForce RTX 3080 and RTX 3090, being based on the same 8 nm "GA102" silicon. The reference board design retains a lateral blower-type cooling solution, with the blower drawing in air from both sides of the card, through holes punched in the PCB, "Fermi" style. The card features the latest NVLink bridge connector, and unless we're mistaken, it features a single power input near its tail end, which is very likely a 12-pin Molex MicroFit 3.0 input.

As for specifications, "Moore's Law is Dead" shared a handful of alleged specifications that include maxing out the "GA102" silicon, with all 42 TPCs (84 SMs) enabled, working out to 10,752 CUDA cores. As detailed in an older story about the next-gen Quadro, NVIDIA is prioritizing memory size over bandwidth, which means this card will receive 48 GB of conventional 16 Gbps GDDR6 memory across the GPU's 384-bit wide memory interface. The 48 GB is achieved using twenty-four 16 Gbit GDDR6 memory chips (two chips per 32-bit wide data path). This configuration provides 768 GB/s of memory bandwidth, which is only 8 GB/s higher than that of the GeForce RTX 3080. The release date of the next-gen Quadro RTX will depend largely on the supply of 16 Gbit GDDR6 memory chips, with leading memory manufacturers expecting to ship them in 2021, unless NVIDIA has secured an early production batch.

https://www.techpowerup.com/272508/galax...s-rtx-2080
Quote:An alleged event by GALAX targeted at distributors in China revealed up to three upcoming SKUs in NVIDIA's RTX 30-series. This comes as yet another confirmation from a major NVIDIA AIC partner of the 20 GB variant of the GeForce RTX 3080. The RTX 3080 originally launched with 10 GB of memory earlier this month, and it is widely expected that NVIDIA will fill the price-performance gap between this $700 SKU and its $1,500 sibling. The 20 GB variant would use twenty 8 Gbit GDDR6X memory chips (two chips per 32-bit data path), much like how the RTX 3090 achieves its 24 GB memory amount.

Elsewhere we see GALAX mention the RTX 3060, a performance-segment SKU positioned under the RTX 3070. You'll notice that the product-stack graph by GALAX suggests performance comparisons to previous-generation SKUs. The RTX 3080 and RTX 3090 are faster than everything from the previous generation, while the RTX 3070, which is coming next month, is shown trading blows with both the RTX 2080 Ti and the RTX 2080 Super. In this same graph, the RTX 3060 is shown matching up to the RTX 2080 (non-Super), a card NVIDIA originally launched at $700.

There's an unnamed SKU slotted between the RTX 3070 and the RTX 3080 10 GB, which is codenamed "PG142 SKU 0." Some AICs/OEMs are referring to this as the "RTX 3070 Ti," and others the "RTX 3070 Super." Given that the Super brand extension has been used by NVIDIA to denote a mid-life refresh for an existing product stack, it's very likely that this SKU is simply the RTX 3070 16 GB (RTX 3070 with 16 GB of GDDR6 memory, and perhaps bolstering in other areas). This SKU is shown trading blows with the RTX 2080 Ti and the TITAN RTX.
Reply
#38
https://www.tomshardware.com/news/new-nv...ere-series
Quote:Overclocking your brand-new RTX 30-series GPU is a risky move, but someone has already done it. Hardwareluxx reports that a ROG Strix RTX 3080 has been flashed with the BIOS of an RTX 3080 TUF Gaming, using a new version of NVFlash that includes support for Nvidia's new Ampere graphics cards.
...
Now, all we're left with is flashing the graphics card BIOS; since we aren't modifying the actual BIOS, this is still something we can do to hopefully get higher clock speeds. Technically, as long as you grab a BIOS from another card with the same GPU (say, RTX 3080 Founders Edition to RTX 3080 TUF Gaming), that can work.

But should you do it? Probably not. Flashing a BIOS from one card onto a different card is obviously very risky and not something the cards were designed for. In a worst-case scenario, your graphics card will be completely bricked and the warranty claim rejected because you tampered with your card. Hardcore overclocking is the only real reason to do it, and if you don't mind risking your GPU's life in the process, the performance gains could be incredible.

https://www.extremetech.com/gaming/31570...ity-issues
Quote:We’ve covered the reports of instability across the RTX 3080 and RTX 3090 product families, even cards from different manufacturers. An early theory, coined by Igor of Igor’s Lab, is that the issue could be caused by sub-par power circuitry, especially since there seemed to be evidence that the problems were concentrated in certain specific GPU families much more so than others.
...
Here’s the truth of the situation as we know it today:

Nvidia launched Ampere. A lot of customers had problems with Ampere. A logical theory based on the quality of the power rail circuitry was advanced and endorsed as plausible.

Meanwhile, people discovered that lowering your GPU boost clock by ~100MHz or locking your GPU clock to a high, fixed frequency both produced more stability. This was not in tension with the first theory — a circuit that can't quite keep up with the demands placed on it at 2.1GHz might be perfectly at ease at 1.9GHz. Similarly, repeatedly switching a GPU's clock speed up and down requires much more complex power delivery compared with running things at a static clock.

After several days of speculation, Nvidia has released a new driver, 456.55, that seems to have dramatically improved the situation for most players. Games that were previously unstable at low clocks now run rock-solid at higher frequencies. Some people are still having problems after the update, but it appears to have worked for the majority.

We’re still going to keep an eye on this solution as it evolves, but Nvidia’s latest driver appears to resolve the issue quite effectively.

https://www.extremetech.com/gaming/31571...nstability
Quote:One of the frustrating things about trying to sort out the RTX instability issues from last week’s launch has been the relative paucity of comments from vendors. Now, however, they’ve collectively broken their silence — and they’re all saying pretty much the same thing: Update your video drivers.
...
Overclocker der8auer, otherwise known as "Person who does things I don't have the guts to try," decided to replace two of Gigabyte's stock 470 µF SP-CAP capacitors with twenty 47 µF MLCC capacitors (the total capacitance works out the same for both setups). His maximum stable overclock went up 2 percent as a result, or about 30MHz.

Der8auer’s results do show that power rail hardware can make a small difference, but it’s not enough to really move the needle one way or the other. Instead, the problem really does appear to have been driver-related.
...
The example above is hypothetical; we don’t know what Nvidia adjusted in its driver to improve stability, and while there have been reports of clock drops, there have also been reports of clock improvements.

I’ve been following this story since it broke and I’ve written a number of updates to illustrate how quickly something can evolve — and how early reports, even when they accurately identify a problem, can incorrectly identify the cause. Until and unless new evidence emerges showing the problem is still somehow linked to the POSCAP / MLCC question, the Nvidia 455.56 driver appears to resolve the extant problems. If they stay resolved, they’ll be remembered as a hiccup on the way to a successful overall launch.
Reply
#39
https://www.extremetech.com/gaming/31586...tx-2080-ti
Quote:Last generation, Nvidia raised the price of the RTX 2080 Ti to $1,200, nearly double the price of the GTX 1080 Ti. Now, the company is promising an RTX 3070 that can match the 2080 Ti and costs just 42 percent as much. According to new data released by the company and helpfully tabulated by VideoCardz, we can see exactly what improvements Nvidia is claiming:
...
How accurate is this information likely to be? Pretty accurate. Companies don’t typically bother to show incorrect results they know reviewers will be able to disprove. What’s more typical is that they show accurate benchmark results, but cherry-pick the test and settings to showcase products in the most favorable light.

What do we see here? The RTX 3070 appearing to fully match the RTX 2080 Ti’s overall performance. At worst, the two GPUs are the same speed. At best — mostly in applications — the RTX 3070 can be 1.23x faster than the older card.

If we compare the specs of the RTX 3070 with the RTX 2080 Ti, this ranking makes sense. The RTX 2080 Ti has 4,352 GPU cores, 272 texture units, and 88 ROPs. The RTX 3070 has 5,888 cores, 184 TMUs, and 96 ROPs. On paper, we'd expect the RTX 3070 to potentially be even faster relative to the RTX 2080 Ti than Nvidia is claiming. So on the question of "Is Nvidia presenting cherry-picked tests?" — I doubt it. The RTX 3070 is, on average, about 1.6x faster than the RTX 2070, so the upgrade value here is pretty strong, especially if you're coming from a GPU like the GTX 980 or GTX 1080.

Of course, whether you’ll be able to buy one is anyone’s guess.

https://www.extremetech.com/gaming/31581...ot-debacle
Quote:Nvidia’s RTX 3080 and RTX 3090 sales were the worst examples yet of how badly online bots are damaging product launches, and the company wants to prevent a similar event from happening when it launches the RTX 3070. To that end, Nvidia will delay the RTX 3070 debut by two weeks, from October 15 to October 29, in order to build inventory and ensure an adequate supply of cards.

This is going to be an interesting stress test of the bot armies, OEM manufacturing, and retailer attempts to identify real orders versus scalpers. I’m not terribly optimistic about the outcome. As I wrote earlier this week, Nvidia has every reason to crack down on bots and scalpers, but other companies in the distribution chain don’t necessarily see things that way.

According to Rob Fahey at GamesIndustry.biz, Amazon apparently took no action to prevent people from buying pre-order stocks before immediately re-listing those exact same products for sale at a substantial markup compared with previous listings. Companies like eBay have no reason to attempt to block preorder scams and scalping, given that they literally make their money from online auctions and will earn more from an inflated sales price than a normal one.
...
If the customer who bought 42 GPUs was an outlier, Nvidia is fine. If he represents the median bot purchase — or is even within one standard deviation of it — then we're talking about bots sucking down one to two dozen cards apiece. If 1,000 to 2,000 bots can account for 12,000 to 48,000 video cards, it's going to take an awful lot of inventory to overwhelm the collective credit limits and resources of the botters. Some scammers might take whatever profits they earned from the first wave of RTX 3080 and 3090 order abuse, then pour those profits into buying more RTX 3070s in the hopes of pulling the same trick again.

I’m glad to see Nvidia taking the situation seriously and I hope retailers and manufacturers do the same in order to make certain hardware gets into the hands of customers attempting to buy it as opposed to flipping it for profit, but the bots have definitely won Round 1 of our metaphorical match-up. Here’s hoping better detection methods and more inventory can hand a win to the good guys in Round 2. Nvidia has claimed the $500 RTX 3070 will outperform the $1,200 RTX 2080 Ti, and that’s going to have a lot of people eyeing the RTX 3070 as a potential upgrade.

https://techreport.com/news/3473505/nvid...two-weeks/
Quote:The post doesn’t talk at all about what measures Nvidia might be taking or asking retailers to take to deal with one of the biggest problems with the 3080 launch, though. As mentioned above, a huge number of RTX 3080 GPUs ended up in the hands of scalpers via purchasing bots. Nvidia said in a postportem blog that it moved its store to a dedicated environment, increased server capacity, and added “pot protection” via CAPTCHA. The company also cancelled hundreds of orders snapped up by bots.

https://www.tomshardware.com/news/nvidia...until-2021
Quote:If you thought it would become easier to purchase an Nvidia RTX 3080 or 3090 by the end of the year, you might be wrong. And it doesn't look good for the RTX 3070, either. Today Nvidia CEO Jensen Huang revealed that the company expects the crushing shortages of RTX 3080 and 3090 graphics cards to persist through the end of 2020, saying:

"I believe that demand will outstrip all of our supply through the year," Huang said. "Remember, we're also going into the double-whammy. The double-whammy is the holiday season. Even before the holiday season, we were doing incredibly well, and then you add on top of it the ‘Ampere factor,’ and then you add on top of that the ‘Ampere holiday factor,’ and we're going to have a really really big Q4 season."
...
"The 3080 and 3090 have a demand issue, not a supply issue," said Huang. "The demand issue is that it is much much greater than we expected — and we expected really a lot."

"Retailers will tell you they haven't seen a phenomenon like this in over a decade of computing. It hearkens back to the old days of Windows 95 and Pentium when people were just out of their minds to buy this stuff. So this is a phenomenon like we've not seen in a long time, and we just weren't prepared for it."

"Even if we knew about all the demand, I don't think it's possible to have ramped that fast. We're ramping really, really hard. Yields are great, the product's shipping fantastically, it's just getting sold out instantly," said Huang. "I appreciate it very much, I just don't think there's a real problem to solve. It's a phenomenon to observe. It's just a phenomenon."
...
Huang's assessment of the breadth of the shortage is similar to reports we've seen from several China-based media outfits that predicted the shortages would last until 2021. Those reports also outlined that we could see third-party graphics card makers create bundle deals that force users to buy a motherboard with the GPU to upsell customers. That hasn't happened...yet.

https://www.tomshardware.com/news/gamer-...nd-it-runs
Quote:The RTX 3090 might be overpriced for a gaming card, especially with 24GB of VRAM you will never fully saturate...or will you? Strife212 on Twitter had the genius idea of using that 24GB frame buffer to run Crysis 3, but not in the way you think.

Using a program called "VRAM Drive," Strife212 was able to create a 15GB virtual disk in the RTX 3090's VRAM and install Crysis 3 onto it, leaving 9GB of VRAM for Crysis 3 to use as graphics memory, which is plenty for any video game by today's standards.

She reports that Crysis 3 loads fast and performance is really good (the screenshot shows 75 FPS). She ran Crysis 3 at 4K with very high settings, with VRAM utilization barely hitting the 20GB mark.

https://www.tomshardware.com/news/rtx-a6...adro-cards
What's going to happen to PNY, since they made all of the Quadro cards?
Quote:Nvidia Quadro is dead, long live Nvidia Quadro. Following in the footsteps of the Nvidia A100 that replaces the previous generation Tesla V100, and the Nvidia RTX 3090, which we called the "heir to the Titan throne" in our review, Nvidia today announced two new graphics cards that have the power of the workstation-focused Quadro line, but not the branding. These are the Nvidia RTX A6000 and Nvidia RTX A40, and they each pack the Ampere architecture with 10,752 CUDA cores and 48GB of VRAM.

We're not sure why Nvidia's dropping the Quadro branding, but much like the A100 and RTX 3090, it might be that the company's trying to centralize all of its new cards under a simpler "RTX + number" format. We don't know too much about these cards yet, from release date to specs to price. What we do know is that the cards will use 48GB GDDR6 VRAM, which is twice as much as the RTX 3090 but without the 'X' factor.
...
The Quadro RTX 8000, which was the last high-end workstation GPU that Nvidia released, cost $5,800 at launch (and will still run you about $5,300 to buy now), so don't expect either the A6000 or the A40 to come cheap. You can sign up to be notified of availability for both of these cards on Nvidia's website. Nvidia says the cards should begin shipping in December.

https://www.tomshardware.com/news/nvidia...validation
Quote:Hot on the heels of a potential RTX 3070 mobile GPU, a new post from Twitter user @Avery78 (via Videocardz) shows a picture of Nvidia's desktop GA104-300 Ampere GPUs being validated for production. The picture originated from a user on the Baidu forums, but it has since been removed. As with all leaked info, we have to approach this with some caution, but it does appear to be legitimate.

The GA104-300 is assumed to be a cut-down version of the GA104 die, with 5888 CUDA cores, 184 TMUs, and 96 ROPs. This would be the RTX 3070 GPU, which features the same specifications. However, it could also be a different configuration for the RTX 3060 or RTX 3060 Ti. Two — possibly all three, if the 3060 Ti is real — of those GPUs are expected to launch in the next two months to compete with future RDNA2 products.
...
Regardless, having RTX 3070 in full production comes as no surprise, as the RTX 3070 launches on October 29th. RTX 3060 and/or RTX 3060 Ti are likewise rumored to arrive by Black Friday, though Nvidia hasn't officially announced either part. Performance for the RTX 3070 should be somewhere around the RTX 2080 Ti as indicated by Nvidia's latest slide showcasing RTX 3070 performance vs the Turing generation. Let's hope the delayed launch will allow supply to surpass demand at least for a little while.
Reply
#40
https://www.extremetech.com/gaming/31589...-into-2021
Quote:Jensen also reiterated that the problem with Ampere was overwhelming demand, not an issue of supply. So many people apparently want the cards, it's impossible to keep them on store shelves. I wouldn't be surprised if this was part of the reason: Turing uptake has been quite low, a lot of gamers are still on 10xx GPUs, and the RTX 30xx cards look like great upgrades relative to those cards. Could Ampere demand be the entire reason stocks are so hard to come by? It could be — but I suspect it isn't the entire explanation.

First, we know bots were a problem at the RTX 3080 and RTX 3090 launches, and it’s not as if those users are going to have quit using them in the intervening period. Every retailer who isn’t running effective bot detection is going to be a problem.

Second, according to semiconductor analyst Daniel Nenni, Samsung’s 8nm yields are anything but good. In an August 30 podcast, Nenni said: “Samsung has had yield problems – serious, serious yield problems – throughout their history because they were first to a node, but TSMC is always first to high volume manufacturing, so you really have to separate the two.”

This does not automatically mean that Samsung’s 8nm is a low-yielding node, but there have certainly been questions about what yields look like, and no clear answers yet. Samsung has struggled to land major customers other than its own business for more recent advanced nodes. IBM and Nvidia are both fairly recent announcements, especially Nvidia’s high-end manufacturing.
...
The big question now is: “Will AMD have a similar problem?” If RDNA2 and Ampere are relatively well-matched and one of them is $500 theoretically and $800 practically, that’s going to impact people’s buying decisions. AMD probably wins some sales based on Nvidia’s difficulty supplying the market, if prices on Nvidia GPUs stay inflated. Alternately, it’s possible that RDNA2 will get hit by exactly the same wave of bots or upgrade demand. It won’t really matter how RDNA2 compares with Ampere if both of them are impossible to find, and the reason won’t really matter to customers who don’t get to buy one.

Either way, I wouldn’t necessarily pin your hopes on a new GPU this Christmas.

https://www.tomshardware.com/news/evga-i...ing-system
Quote:Even now that a few weeks have passed since the RTX 3080 and RTX 3090 launches, availability is still a huge issue. Per Nvidia's own admission, supply problems are likely to persist into 2021, and that isn't good news -- especially with bots grabbing every purchase the moment cards are available. But EVGA has come up with a creative workaround to give everyone a fair chance: classic British queueing.
...
EVGA's product management director, Jacob Freeman, has also been actively keeping the community updated about RTX 30-series orders through his Twitter account, and credit where credit is due, he is doing a stellar job managing expectations.

Of course, when you receive the email that it's your turn to order an RTX 30-series card, you still only have 5 hours to actually place the order -- so you better pray that it comes at a time when your boss isn't watching.

https://www.techpowerup.com/273010/evga-...hics-cards
Quote:Check after the break for the full explanation of how this system works, given by EVGA's own Jacob Freeman. The system is first being rolled out in the US, with other regions following depending on its success.

https://www.tomshardware.com/news/crysis...benchmarks
Quote:A couple of days ago, word got out that a gamer was running Crysis from a 3090 GPU RAM disk. It worked, but of course, we had questions. Specifically, how well does Crysis 3 run from a RAM disk compared to boring old SSD storage? That's easy enough to test, so we set about downloading Crysis 3, VRAM Drive, and ImDisk Toolkit—those last two were required for the GPU and system RAM disk testing.
...
Yeah, that earlier bit about Crysis 3 not being very storage limited? This is the result. Launch times varied by about 0.2 seconds in our testing, but some of that might be human error. No one is likely to notice a 0.2 second difference in load times, and the SATA SSD actually outperformed the NVMe SSD. The GPU RAM Disk ended up coming in last, perhaps just due to software overhead. The VRAM Drive ought to perform as well as other storage options, but again, a few tenths of a second aren't particularly meaningful. The time to load a save game was effectively tied.

As for actual in-game performance, there's a bit more variability between runs, with the VRAM Drive coming out just a hair ahead of the two SSDs. Running the game off of system RAM ended up being the slowest, which again doesn't make much sense, but it was consistently nearly 1 fps slower than the other storage options. There were also still occasional stutters on all of the test options (particularly on the first run, where minimum fps dropped into the single digits), so extreme RAM drive storage didn't fix that.

The issue with RAM drives is that the applications have no idea they're residing on blazingly fast storage. This is why letting the application or game or even OS manage memory is usually a better overall solution. Think about what we've done here in our testing of Crysis 3.

For the RAM drive, we've allocated a chunk of system memory as storage. When we launch the game, the CPU reads data out of that portion of memory, copies it into another section of RAM, then processes the data in various ways and loads some portions of the data into the GPU memory. There's a lot of wasted resources and effort.

For the VRAM drive, it's even worse. Data gets copied over the PCIe bus to the system RAM when the game launches, then it gets processed there, and eventually, textures and other portions of the game get copied back to the VRAM over the PCIe bus. We're using DDR4-3600 memory that provides 57.6 GBps of bandwidth, but the PCIe Gen3 bus only manages 16 GBps. PCIe Gen4 might help in this case, but even so, we're still generally going to end up being limited by other elements rather than storage throughput.

The takeaway is that, besides being an extremely expensive storage solution, VRAM and RAM disks typically aren't necessary for games. It would be better for games to optimize their use of memory than to pre-allocate a fixed portion of memory—system or GPU—as storage, copy files over to that drive, and maybe gain some benefit. There are situations where a RAM disk can be more beneficial, particularly in the server realm. But for games like Crysis 3? Not so much.
Reply

