Galaxy GTS 250 512 MB Review
Eye Candy: The Card Interior
Time to pop off the hood and take a look at what powers this card. To remove the cooler, you need to undo the four screws at the back.
After removing the cooler, here is what you see.
The cooler used here is designed by Cooler Master. It uses a temperature-controlled fan with a maximum speed of 2,800 rpm. The cooler is designed to keep noise low and stays really quiet even at full speed while cooling the GPU, memory and MOSFETs at the same time. The hot air is blown out of the back of the case.
This card uses an 8-layer PCB with all-solid-state Japanese capacitors. The power circuitry is 4+1 phase, i.e. four phases for the GPU and one for the memory. Each phase has two power transistors; they are covered by the gold-colored heatsink in the image on the right below.
The card uses eight 64 MB GDDR3 memory chips. They are made by Samsung and carry the model number K4J52324QH-HJ08. The suffix '08' indicates that the chips have a cycle time of 0.8 ns and are rated to run at 1200 MHz. They use a voltage of 2.05 V. The data sheet can be found here (PDF link) if you want to get your hands dirty with more technical information.
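The numbers above make for a quick back-of-the-envelope calculation. As a sketch (the bus width and stock memory clock below are the usual GTS 250 figures, not stated in this review), the total capacity and theoretical memory bandwidth work out as:

```python
# Rough memory-subsystem numbers for this card.
# Assumptions (not from the review): a 256-bit memory bus and an
# 1100 MHz stock memory clock, the common figures for a GTS 250.

chips = 8
capacity_mb = chips * 64            # eight 64 MB chips -> 512 MB, as reviewed
bus_width_bits = 256                # assumed GTS 250 bus width
mem_clock_mhz = 1100                # assumed stock memory clock

# GDDR3 is double data rate: two transfers per clock cycle.
effective_mt_s = mem_clock_mhz * 2  # 2200 MT/s ("2200 MHz effective")
bandwidth_gb_s = effective_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

print(f"{capacity_mb} MB, {effective_mt_s} MT/s, {bandwidth_gb_s:.1f} GB/s")
```

Under those assumptions this comes to roughly 70 GB/s of theoretical peak bandwidth.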
Now let's have a look at the GPU itself. This is a 55 nm G92b GPU housing 754 million transistors. To give you an idea of its actual size, I have placed a US dime on the GPU; the GPU is only slightly bigger than the dime. It goes to show how much technology has progressed: 754 million transistors in an area roughly the size of a dime. The '0902B1' code on the chip indicates that this GPU was manufactured in the second week of 2009 and uses the 55 nm fabrication process, as indicated by 'B1'. The core voltage on the GPU is set at 1.15 V.
You're not even hitting the card's limits in Furmark to correctly gauge its temperature or power consumption under load. At 1440 x 900 with 16x MSAA and post-processing, the card starts to throttle as it reaches temperatures of 105°C.
@stridhiryu030363 – I tried to get the card temperature close to the maximum a game would produce. I don't think any game would be able to heat the card to 105°C, unless of course the ambient temperature is very hot.
As for power consumption, I tried to max it out. With AA enabled, the power consumption is lower than with no AA.
Just putting that out there, as that is the worst-case scenario should a game put that much stress on the card in the near future. I have an older 9800 GT that doesn't put out that much heat on the same settings, and a friend's GTX 260 maxes out at 85°C. There's something wrong with Galaxy's non-reference design, imo.
I did not know AA lowered power consumption. You'd think sharpening textures would put more stress on the card.
Karan, were you testing total system power consumption, or just the card’s?
Adding AA should generally increase a card's load power consumption because it has to work harder.
Update: my card just died. Maybe there was something wrong with mine that caused the overheating. No, I didn't overheat the thing to death; it locked up my system during a gaming session and now refuses to boot.
Anyone else willing to confirm the temps for me?
@BFG10K – I was measuring total system power consumption.
IIRC using high AA saturates the GPU bus, which makes the shaders and texturing units idle while they wait for the AA samples to clear the ROPs.
@stridhiryu030363 – Do you have the same card?
No game was able to hit the temps Furmark hit in the review.
Yes, I have the same card. The funny thing is, the game I was playing was hardly graphics-intensive, so my card stayed around an average of 58°C when it bit the dust.
By "confirm the temps" I meant attempting to run Furmark with the same settings I had set, to recreate my result. My card died yesterday under very mild operating conditions, so I was wondering whether a defect in my card was causing the high temperatures.
IIRC using high AA saturates the GPU bus, which makes the shaders and texturing units idle while they wait for the AA samples to clear the ROPs.
I’m not sure what you mean by “GPU bus”. Yes, AA hits the ROPs harder but it also hits the memory too. In general the card’s consumption should increase when AA is enabled because the rendering load is higher.
@BFG10K – I was measuring total system power consumption.
I have a theory then. If the GPU becomes saturated due to AA, the CPU might not be working as hard because it’s waiting for the GPU to catch up, thereby lowering overall system consumption. If you could test just the GPU’s consumption then it should increase when AA is applied.
@BFG10K – It could be the CPU idling, as you said, but I didn't notice any change in CPU usage with and without AA.
It could also be the shaders and texture units waiting for the AA samples to clear the ROPs.
Could be a combination of both things.
I have thought about measuring GPU-only power usage, but haven't come up with a way to do so yet.
Well, I finally received my card back from RMA, and with the same Furmark settings as before I seem to top out at 96°C after 20 minutes.
@stridhiryu030363 – What is your ambient temperature?
Not sure, I don't have any way of checking. It was around 2 a.m. when I attempted this, so it couldn't have been very hot.
@stridhiryu030363 – Check the weather on the internet or something. What state or country are you in?
California.
@stridhiryu030363 – That should explain it; it must be hot where you live.
How hot does the card get in games?
Not at 2 a.m.
Right now it's 84°F according to Google. I've been folding units all day with Folding@home and the card is only at 71°C with 51% fan speed, but that's not a really demanding application. I will stress test again later tonight.
60°F according to Google. Same settings, same results: tops out at 96°C.
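Since the thread quotes ambient temperatures in Fahrenheit and GPU temperatures in Celsius, a quick conversion makes the two runs easier to compare (a trivial sketch, not from the original thread):

```python
def f_to_c(f):
    """Convert a Fahrenheit temperature to Celsius."""
    return (f - 32) * 5 / 9

# Ambient temperatures reported in the thread:
print(round(f_to_c(84), 1))  # 84°F ambient -> 28.9°C
print(round(f_to_c(60), 1))  # 60°F ambient -> 15.6°C
```

So the two Furmark runs happened at ambients roughly 13°C apart, yet both topped out at 96°C, which suggests the card's own cooling, not the room, is the limiting factor.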