Galaxy GTS 250 512 MB Review
Overclocking
For overclocking I turned to our gaming benchmarks. I increased the clocks 20 MHz at a time, testing with Crysis Warhead, until I found the maximum clocks at which Warhead was stable. I then ran the other games at these clocks; whenever a benchmark failed, I backed the overclock down by 10 MHz. The clocks at which all the benchmarks passed are shown above in the image. Note that GPU-Z reports the clocks requested from the driver, as set in RivaTuner’s overclocking panel, whereas RivaTuner’s Hardware Monitoring module reads the clocks from the hardware itself, which is the accurate figure. This is why the GPU-Z and RivaTuner clocks differ slightly.
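The search procedure described above can be sketched as a simple two-phase loop. This is only an illustration: the `primary` and `others` callables are hypothetical stand-ins, since in practice each "stability check" means setting the clock in RivaTuner and running the game by hand.

```python
# Sketch of the overclock search described above. The stability checks
# (primary, others) are hypothetical stand-ins: in reality each call
# corresponds to applying a clock in RivaTuner and running a benchmark.

def find_stable_clock(stock_mhz, primary, others, step_up=20, step_down=10):
    """Raise the clock 20 MHz at a time against the primary benchmark
    (Crysis Warhead), then back off 10 MHz until every benchmark passes."""
    clock = stock_mhz
    # Phase 1: push upward while the primary benchmark stays stable.
    while primary(clock + step_up):
        clock += step_up
    # Phase 2: back down whenever any of the other benchmarks fail.
    while not all(bench(clock) for bench in others):
        clock -= step_down
    return clock
```

For example, if Warhead were stable up to 840 MHz but another game only up to 820 MHz, the search would settle at 818 MHz when starting from the 738 MHz stock core clock.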
Starting from stock clocks of 738 MHz/1836 MHz/1100 MHz (Core/Shader/Memory), I reached a final overclock of 820 MHz/2040 MHz/1320 MHz (Core/Shader/Memory), which is an 11.1%/11.1%/20% overclock on core/shader/memory. The memory overclock is particularly impressive; at 1320 MHz it runs at an effective DDR speed of 2640 MHz.
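As a quick sanity check, the quoted percentages and the effective DDR speed follow directly from the stock and final clocks:

```python
# Verify the overclock percentages quoted above.
stock = {"core": 738, "shader": 1836, "memory": 1100}   # MHz
final = {"core": 820, "shader": 2040, "memory": 1320}   # MHz

for domain in stock:
    gain = (final[domain] - stock[domain]) / stock[domain] * 100
    print(f"{domain}: {stock[domain]} -> {final[domain]} MHz (+{gain:.1f}%)")

# GDDR3 is double-data-rate, so effective speed is twice the memory clock.
print(f"effective memory speed: {final['memory'] * 2} MHz DDR")
```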
You’re not even hitting the card’s limits in Furmark to correctly gauge its temperature or power consumption under load. At 1440 x 900 with 16x MSAA and post-processing, the card starts to throttle as it reaches temperatures of 105 °C.
@stridhiryu030363 – I tried to push the card close to the maximum temperature a game would reach. I don’t think any game could heat a card to 105 °C, unless of course the ambient temperature is very high.
For power consumption, I tried to max it out. With AA enabled, the power consumption is actually lower than without AA.
Just putting that out there, as that is the worst-case scenario should a game put that much stress on the card in the near future. I have an older 9800 GT that doesn’t put out that much heat on the same settings, and a friend’s GTX 260 maxes out at 85 °C. There’s something wrong with Galaxy’s non-reference design, imo.
Did not know AA lowered power consumption. You’d think sharpening textures would put more stress on the card.
Karan, were you testing total system power consumption, or just the card’s?
Adding AA to a card should generally increase its load consumption because it works harder.
Update: my card just died. Maybe there was something wrong with mine that caused the overheating. No, I didn’t overheat the thing to death; it locked up my system during a gaming session and now refuses to boot.
Anyone else willing to confirm the temps for me?
@BFG10K – was measuring total system power consumption.
IIRC using high AA saturates the GPU bus, which makes the shaders and texturing units idle while they wait for the AA samples to clear the ROPs.
@stridhiryu030363 – Do you have the same card?
No game was able to hit the temps Furmark hit in the review.
Yes, I have the same card. Funny thing is, the game I was playing was hardly graphics-intensive, so my card was averaging around 58 °C when it bit the dust.
By confirming the temps, I meant running Furmark with the same settings I had set, to try to recreate my result. My card died yesterday under very mild operating conditions, so I was wondering whether a defect in my card was causing the high temperatures.
IIRC using high AA saturates the GPU bus, which makes the shaders and texturing units idle while they wait for the AA samples to clear the ROPs.
I’m not sure what you mean by “GPU bus”. Yes, AA hits the ROPs harder, but it also hits the memory. In general the card’s consumption should increase when AA is enabled because the rendering load is higher.
@BFG10K – was measuring total system power consumption
I have a theory then. If the GPU becomes saturated due to AA, the CPU might not be working as hard because it’s waiting for the GPU to catch up, thereby lowering overall system consumption. If you could test just the GPU’s consumption then it should increase when AA is applied.
@BFG10K – It could be the CPU idling as you said. But I didn’t notice any change in CPU Usage with and without AA.
Also it could be the shaders and texture units waiting for the AA samples to clear the ROPs.
Could be a combination of both things.
I have thought of measuring GPU-only power usage, but haven’t come up with a way to do so yet.
Well, I finally received my card back from RMA, and with the same Furmark settings as before I seem to top out at 96 °C after 20 minutes.
@stridhiryu030363 – what is your ambient temperature?
Not sure; I don’t have any way of checking. It was around 2 A.M. when I attempted this, so it couldn’t have been very hot.
@stridhiryu030363 – check the weather on the internet or something. What state or country are you in?
California.
@stridhiryu030363 – that should explain it. It must be hot where you live.
How hot does the card get in games?
Not at 2 a.m. it isn’t.
Right now it’s 84 °F according to Google. I’ve been folding units all day with Folding@home and the card is only at 71 °C at 51% fan speed, but that’s not a really demanding application. Will stress test again later tonight.
60 °F according to Google. Same settings, same result: tops out at 96 °C.
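For reference, converting those Google ambient readings to Celsius (a quick sketch) shows how cool the room actually was in the second run:

```python
# Convert the ambient readings quoted above from Fahrenheit to Celsius.
def f_to_c(f):
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32) * 5 / 9

for f in (84, 60):
    print(f"{f} F = {f_to_c(f):.1f} C")
```

At roughly 15.6 °C ambient, the same 96 °C result makes room temperature an unlikely explanation for the high Furmark readings.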