A Big Die Size May Be an Advantage for Tesla GPUs – Updated!!
A couple of us were just wildly speculating about the future, and the subject of AMD’s small, “efficient” die size was being touted as a current advantage over Nvidia’s huge monolithic die. Well, we looked ahead to see what Nvidia’s engineers might be thinking:
Try opening an image of a GT200 card in an image editor and copying images of a RAM package over it. We found that you could fit roughly 12 RAM packages into the same area.
The RAM chips themselves are smaller than the packages that house them, so we guesstimate there is enough room on the die for all 16 RAM packages a GTX 280 uses – plus the NVIO die.
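As a quick sanity check on that estimate, here is a minimal back-of-envelope sketch. The dimensions are assumptions for illustration only (a GT200 GPU package of roughly 45 × 45 mm and a GDDR3 FBGA memory package of roughly 11 × 14 mm), not measured values:

```python
# Back-of-envelope check of the "about 12 RAM packages fit" estimate.
# Both footprints below are assumed values for illustration.

GPU_PACKAGE_MM = (45, 45)   # assumed GT200 package footprint (mm)
RAM_PACKAGE_MM = (11, 14)   # assumed GDDR3 FBGA footprint (mm)

gpu_area = GPU_PACKAGE_MM[0] * GPU_PACKAGE_MM[1]   # 2025 mm^2
ram_area = RAM_PACKAGE_MM[0] * RAM_PACKAGE_MM[1]   # 154 mm^2

# Raw area ratio, ignoring packing inefficiency at the edges.
packages_by_area = gpu_area // ram_area
print(packages_by_area)  # → 13
```

A raw area ratio of about 13 is consistent with fitting roughly 12 packages once real-world packing losses at the edges are accounted for.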
We know that IBM has been researching ways of stacking multiple dies on top of each other for a long time now. This (currently) has a heat disadvantage, but offers many other advantages, including, but not limited to, shorter wiring and the much higher speeds it makes possible. Nvidia recently joined the SOI Consortium – and this is important, since SOI (silicon-on-insulator) is a major technique employed in die stacking.
Anyway, putting all this together, we can see that the Tesla GPU die is already large enough for die-stacking the GPU + vRAM to be feasible for Nvidia in future chips!!
Watch for it. We think in the near future the large die size will actually help Nvidia progress things even further by “integrating” or stacking the memory onto the GPU itself. It would certainly give future chips a big (and probably unexpected) boost versus Intel’s forthcoming Larrabee too.
– Mark Poppin
It appears that our speculation may not be wild at all:
First 3-D processor runs at 1.4 GHz on new architecture
‘Rochester Cube’ points way to more powerful chip designs
The next major advance in computer processors will likely be the move from today’s two-dimensional chips to three-dimensional circuits, and the first three-dimensional synchronization circuitry is now running at 1.4 gigahertz at the University of Rochester.
Unlike past attempts at 3-D chips, the Rochester chip is not simply a number of regular processors stacked on top of one another. It was designed and built specifically to optimize all key processing functions vertically, through multiple layers of processors, the same way ordinary chips optimize functions horizontally. The design means tasks such as synchronicity, power distribution, and long-distance signaling are all fully functioning in three dimensions for the first time.
“I call it a cube now, because it’s not just a chip anymore,” says Eby Friedman, Distinguished Professor of Electrical and Computer Engineering at Rochester and faculty director of the design of the processor. “This is the way computing is going to have to be done in the future. When the chips are flush against each other, they can do things you could never do with a regular 2-D chip.”
The new “Rochester cube’s” processing functions are optimized vertically in the same way a regular chip’s are optimized horizontally. Friedman developed the 3-D chip as, in effect, an entire circuit board folded up into a tiny footprint.
Friedman maintains that, with the new technology, typical chips in consumer devices could be made with ten times the processing power of current ones at one tenth the size. Of course, there are many practical hurdles to overcome – including thermal ones – and the team is hard at work developing a new control system for it. Friedman says, “Getting all three levels of the 3-D chip to act in harmony is like trying to devise a traffic control system for the entire United States—and then layering two more United States above the first and somehow getting every bit of traffic from any point on any level to its destination on any other level—while simultaneously coordinating the traffic of millions of other drivers.”
This cube design is the first to make power distribution and long-distance signaling functional in three dimensions. The chip itself was manufactured at MIT and is designed to allow vertical connections to the transistors in each layer. Since horizontal scaling is apparently nearing its limits, vertical scaling is the future.
aka apoppin/ABT editor