Intel i5 750 Performance Test
Introduction
This article serves as part two of my E6850 bottleneck investigation. Some of you commented that because I didn’t use four cores, I couldn’t conclude that a quad-core processor wouldn’t show a performance gain. This is of course untrue: if two cores can fully saturate a GTX285, an extra two cores won’t remove that bottleneck and make the GPU go faster. As further evidence, I’ll be using a quad-core processor today.
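To make that reasoning concrete, here’s a minimal sketch of the frame-pipeline model behind it, in Python. All the timings are purely hypothetical and are not drawn from my benchmarks; the point is just that whichever of the CPU and GPU takes longer per frame sets the framerate.

```python
# Minimal bottleneck model: each frame needs CPU work and GPU work, and with
# the two stages pipelined, the slower stage dictates the framerate.
# All timings are hypothetical, for illustration only.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Framerate when the slower of the two stages is the limit."""
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 25.0                    # say the GTX285 needs 25 ms per frame (a 40 fps cap)
dual_core_ms = 12.0              # two cores already finish well inside that budget
quad_core_ms = dual_core_ms / 2  # idealised quad-core: CPU time halves

print(fps(dual_core_ms, gpu_ms))  # 40.0 -- GPU-bound
print(fps(quad_core_ms, gpu_ms))  # 40.0 -- still GPU-bound; extra cores gain nothing
```

Once the CPU finishes its share of each frame before the GPU does, adding cores (or clock speed) can’t raise the framerate; only a faster GPU can.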
Even though that article clearly demonstrated that a faster CPU offered little to no benefit in my gaming situations, I was still growing bored of my E6850 setup. My intention was to wait for Westmere and pick up an i5 660 or 670, which would give me a high-clocked dual-core processor with Hyper-Threading. Unfortunately, Intel’s high pricing left a lot to be desired; even worse were the abysmal memory bandwidth and memory latency, a result of the memory controller being shifted onto the integrated graphics core.
So I decided to pick up Intel’s best bang-for-buck quad-core CPU: the i5 750.
This article will mimic the E6850 article, retesting the same games at the same settings to see the performance change from an E6850 to an i5 750. Left 4 Dead has also been added to the mix, bringing the total to 17 tested titles, all released from 2006 onwards. And again, the settings used are the actual settings I use when I play these games; in other words, I want to see the real-world impact on my gaming rather than run a bunch of theoretical settings that I don’t even use.
In the text results, MS denotes transparency multi-sampling, while SS denotes transparency super-sampling.
Hardware (Old)
- Intel Core 2 Duo E6850 (reference 3 GHz clock).
- 4 GB DDR2-800 RAM (4×1 GB, dual-channel).
- Gigabyte GA-G33M-DS2R (Intel G33 chipset, F7 BIOS).
Hardware (New)
- Intel Core i5 750 (2.8 GHz, 21x multiplier set in BIOS, Turbo Boost enabled).
- 4 GB DDR3-1333 RAM (2×2 GB, dual-channel).
- Gigabyte GA-P55-UD3 (Intel P55 chipset, F6 BIOS).
Hardware (Common)
- nVidia GeForce GTX285 (1 GB, reference clocks).
- Creative X-Fi XtremeMusic.
- 30” HP LP3065 (maximum resolution 2560×1600).
Software
- Windows XP 32-bit SP3.
- nVidia driver 196.21, high quality filtering, all optimizations off, LOD clamp enabled.
- DirectX February 2010.
- All games patched to their latest versions.
Settings
- 16xAF forced in the driver, vsync forced off in the driver.
- AA forced either through the driver or enabled in-game, whichever works better.
- Highest quality sound (stereo) used in all games.
- All results show an average framerate.
2xAA
In this set of tests, Far Cry 2 shows a small performance gain of 7.64% with the i5 750. The rest of the games are unchanged, falling within the margin of benchmarking error. Clearly the GTX285 is almost entirely the bottleneck here, and the E6850 was already pushing it hard enough to expose that fact.
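For anyone checking the maths, the percentages quoted throughout are plain relative changes between the two average framerates. A quick sketch, using made-up framerates rather than my actual results:

```python
# Relative change between two average framerates.
# The framerates below are invented examples, not my benchmark data.

def percent_change(old_fps: float, new_fps: float) -> float:
    return (new_fps - old_fps) / old_fps * 100.0

print(f"{percent_change(55.0, 59.2):+.2f}%")   # +7.64% -- a small but real gain
print(f"{percent_change(60.0, 59.95):+.2f}%")  # -0.08% -- benchmarking noise
```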
1920×1200
In the next set of games, we see a slight performance gain of 4.73% in Wolfenstein. The rest of the games again show no performance change.
2560×1600
At these settings the GTX285 is completely the bottleneck, so the i5 750 makes absolutely no difference here.
Other Thoughts
Some people have reported that their system feels smoother in gaming, with less hitching and faster load times, after moving from a dual-core to a quad-core processor. Subjectively I have yet to see any such improvements, and I have a gaming library of about 100 games in active, heavy rotation. All of the games I tried feel the same, and their load times don’t appear any faster.
Also, the i5 750’s stock cooler is the worst CPU cooler I’ve ever used; it’s only about half the height of the stock E6850 cooler. I never had any trouble with the E6850, but the i5 750’s cooler whines and drones, sometimes even when the CPU is idle. My case (Antec 902) has ample cooling and airflow, so it’s clearly the rubbish cooler at fault as it struggles to cope with the workload. I’m seriously considering buying a better CPU cooler to reduce the noise, even though I don’t overclock.
Conclusion
So there you have it. Aside from small gains in two titles, the i5 750 provided zero performance benefit in 15 of the 17 titles I tested today, at the settings I play them at. Meanwhile a 5870 or Fermi would provide a substantial performance gain across all titles, even if I were using my E6850 underclocked to 2 GHz.
Once again it’s clear the GPU is the biggest bottleneck here, while hard-disk speed is obviously holding back gaming load times.
I’ll echo what I mentioned in the first article: if you want maximum performance and image quality, buy the biggest monitor with the highest resolution you can afford, then buy the fastest GPU setup you can to feed that display. You’ll get far more benefit overall than by buying a top-end CPU for the handful of fringe titles that are truly unplayable on anything but the fastest quad-core.
After all this, I can honestly say the E6850 has been one of the finest gaming CPUs I’ve ever used. It lasted me around two years and across four graphics card upgrades (8800 GTS (640 MB), 8800 Ultra, GTX260+ and GTX285). With each upgrade it pushed the GPU hard enough to provide me with substantial performance gains. And a Fermi would do exactly the same thing if I had kept my E6850 for it.
Comments
I would like to thank you for the honest articles! Very few gaming sites conduct tests under realistic circumstances. Hmm, if only I could sell my gaming PC and put a dedicated 5870 for Windows 7 in my Mac Pro. The tests show that the 2.8 GHz Penryn quad is more than adequate for 1080p gaming.
Good article highlighting that the video card is the make-or-break component in a gaming system. I would have liked to see another run of the bench suite, this time with a Win7 x64 install, side by side with the XP x86 SP3 results, to put the microscope on the difference the OS and drivers make. Having a run with an ATi card would be icing on the cake.
Good job! I’m amazed the GTX285 is a bottleneck with only 2xAA in most/all of those games.
Thanks for the comments. In the future I plan on permanently migrating to Windows 7 x64, but I’m not ready right now.
What I found most interesting were the games that ran slightly slower on the i5 750. It looks like benchmarking noise, but the results are consistent from run to run.
It can’t be the reduced clock speed on the i5, because I’ve already shown the same games don’t lose performance when dropping down to 2 GHz on the E6850 at the same settings. Not to mention that Turbo Boost is definitely working on the i5, even with the crappy stock cooler.
Maybe it’s the GPU drivers and/or OS reacting slightly differently to the new platform.
I am not so sure that the 200 MHz difference in the CPU clocks does not account for the consistent 1% or so difference you found.
I found the same thing in my testing. CPU clock speed does make a little difference to overall performance, although it is very slight compared to the GPU.
If 200 MHz made a difference, then a 1000 MHz difference on the E6850 (3 GHz vs 2 GHz) should have made an even bigger one, but it didn’t.
As an example, look at Call of Juarez 1.
A 2.41% performance drop on the i5, yet on the E6850 @ 2 GHz it was only 0.08%, which is virtually zero.
Well, you are now dealing with a CPU of a different architecture from your old one, and it is hard to draw conclusions from a single example. I believe the only certain way to rule out the 200 MHz making any performance difference with the i5 would be to test it.
Nice test. I wish I’d read this before I upgraded my E6750 (also with the GA-G33M-DS2R) to an i5-750, spending $600 or so in the process. Oh well…
Apoppin: it’s not just one test; it’s pretty much every title that was slower on the i5 750. Go back and check; here’s the link: http://alienbabeltech.com/main/?p=13454
Yes, the architectures are different, which is exactly why you can’t infer that 3 GHz on both CPUs would yield the same results. Also, there’s no way an E6850 @ 2 GHz is faster than an i5 750 @ 2.8 GHz plus Turbo Boost. It’s something other than clock speed causing this.
Well, if it is something other than clock speed, then maybe you could do some further testing to find out what. That would surely make an interesting article as well.
Although I agree that the CPU has ZERO effect on FPS in gaming, I have this question:
Won’t the CPU limit the RAM, which will limit the performance?
For example, you can’t use triple-channel DDR3 on a P4 system.
And from that I think RAM DOES have an effect on performance. Don’t you agree?
IlyaD, system RAM is lumped under the “platform”. If the GPU is the primary bottleneck, RAM speed will have little to no effect.
As an example, dual-core i5 processors have abysmal bandwidth and cache latencies compared to quad-core i5 processors, yet they don’t appear to be hampered very much in gaming.
Just wanted to say I really liked the post. You have really put a lot of time into your content and it is just wonderful!
Great article. You have my confirmation: CPUs are close to irrelevant for today’s gaming. My Q6600 still does its work in magnificent fashion, and obviously the same would hold if it were an Exxxx. It’s not worth spending money to upgrade the system to the Nehalem architecture.
Can you test under multiplayer conditions? The added stress should be related to the CPU.
Well, I got some gains out of my E6850 by replacing the stock cooler with a Zalman Performa: around 2 fps quicker in Crysis Warhead, and 2 minutes faster converting a 3-hour AVI to DVD using Nero. The CPU was 62°C under load in games and programs, and is now 42°C with the new cooler.