This article is another installment in the ongoing investigation of bottlenecking in video cards; the first installment was done on an 8800 Ultra.
To quickly recap the basic premise: nVidia hardware allows separate core, shader and memory clocks. Underclocking one of these while leaving the rest at stock therefore lets you find out which part of the GPU is the biggest bottleneck in any given situation.
There’s a limit to how far the core and shader clocks can separate from each other, and in the case of the GTX260+ that limit is quite small: a drop from 1242 MHz to 1153 MHz, or about 7%. This is much smaller than the 8800 Ultra’s possible drop of 19%, but it should still be enough to show some kind of difference.
I’ll run the card at stock speed, then underclock each clock by 7% in turn, leaving the others at stock, to see which has the most impact on performance. In addition to using a different video card this time, I’ll also use a different selection of games, with more modern characteristics overall than the last batch.
The stock clocks are as follows: core 576 MHz, shader 1242 MHz, memory 999 MHz.
The underclocked values, with the corresponding colors, are as follows: core 535 MHz, shader 1153 MHz, memory 927 MHz, each an underclock of approximately 7%.
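As a sanity check on those numbers, the per-domain drop can be computed directly; this is a minimal sketch using the clock values quoted above, nothing more:

```python
# Stock and underclocked frequencies (MHz) for each clock domain,
# taken from the values listed above.
stock = {"core": 576, "shader": 1242, "memory": 999}
under = {"core": 535, "shader": 1153, "memory": 927}

for domain in stock:
    drop_pct = (stock[domain] - under[domain]) / stock[domain] * 100
    print(f"{domain}: {drop_pct:.1f}% underclock")
# core: 7.1% underclock
# shader: 7.2% underclock
# memory: 7.2% underclock
```

All three domains land within about a tenth of a percentage point of each other, which is what makes the per-domain comparisons fair.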
- Intel Core 2 Duo E6850 (reference 3 GHz clock).
- nVidia GeForce GTX260+ (896 MB, nVidia reference clocks).
- 4 GB DDR2-800 RAM (4×1 GB, dual-channel).
- Gigabyte GA-G33M-DS2R (Intel G33 chipset, F7 BIOS).
- Creative X-Fi XtremeMusic.
- Windows XP 32-bit SP3.
- nVidia Forceware 181.22, high quality filtering, all optimizations off, LOD clamp enabled.
- DirectX November 2008.
- All games patched to their latest versions.
- 16xAF forced in the driver, vsync forced off in the driver.
- AA forced either through the driver or enabled in-game, whichever works better.
- Since I’m on XP, all DX10 titles were run under DX9 render paths.
- Highest quality sound (stereo) used in all games.
- All results show an average framerate.
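The interpretation step can be sketched in a few lines: whichever domain's underclock causes the largest relative drop in average framerate is the biggest bottleneck for that game. The framerate numbers below are hypothetical placeholders, not results from this article:

```python
def biggest_bottleneck(stock_fps, underclocked_fps):
    """Return the clock domain whose ~7% underclock hurts average FPS most.

    stock_fps: average framerate at stock clocks.
    underclocked_fps: dict mapping domain name -> average framerate
    with only that domain underclocked.
    """
    drops = {d: (stock_fps - fps) / stock_fps for d, fps in underclocked_fps.items()}
    return max(drops, key=drops.get)

# Hypothetical example numbers, for illustration only:
print(biggest_bottleneck(60.0, {"core": 55.0, "shader": 52.0, "memory": 59.0}))
# shader
```

In this made-up case the shader underclock costs the most performance, so the game would be called shader-limited; drops that are all much smaller than 7% would instead suggest a bottleneck outside the GPU, such as the CPU.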