Exploring “Frame time” measurement – Part 1 – Is Fraps a good tool?
Faster. Smoother. Richer. These three words define Nvidia’s Kepler marketing campaign, which attempts to convey what a gamer wants to experience: (1) Faster; higher raw frame rates, as measured in frames per second (fps). (2) Smoother; the gamer should not experience hiccups or stuttering, which we are now able to measure and chart easily. And (3) Richer; the overall gaming experience should be excellent and fully featured.
One of the tools most often used to convey the performance differences between video cards is Fraps. Most commonly, it is used to measure fps (frames per second), or frame rates. Fraps is quite popular among enthusiasts, and the free version can measure frame times (which convey how evenly the frames are delivered) as well as frame rates. It is a versatile tool, as seen from the Fraps website:
It has been noted for nearly a decade that measuring fps alone does not convey the full experience if the frames are delivered unevenly, that is, if there is stutter, micro-stutter, or jitter. Recently, Scott Wasson of the Tech Report began to measure “frame times,” which Fraps also reports. He is not the first: Lost Circuits experimented with this in 2008 but returned to fps charts for simplicity’s sake.
It is pretty well established that Fraps is a very good measure of frame rates (fps), but what about its ability to measure frame times? Fraps also outputs these results to a file that can be charted in Excel or a similar spreadsheet program if “frametimes” is checked under the “FPS” tab (along with “FPS” and “MinMaxAvg,” which are usually measured).
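For readers who want to chart the output themselves, here is a minimal sketch of turning a Fraps-style frametimes log into per-frame times. It assumes the common layout of that file (a header row, then a frame index and a cumulative timestamp in milliseconds per row); the sample data below is hypothetical, not taken from our runs.

```python
import csv
import io

def frame_times_from_fraps(csv_text):
    """Convert cumulative timestamps (ms), one row per frame, into per-frame times (ms)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    stamps = [float(t) for _, t in rows[1:]]          # skip the header row
    return [b - a for a, b in zip(stamps, stamps[1:])]  # successive differences

# Hypothetical excerpt of a frametimes log: frame index, cumulative time in ms.
sample = "Frame,Time (ms)\n1,0.000\n2,16.712\n3,33.305\n4,52.118\n"
times = frame_times_from_fraps(sample)
print(times)                                          # per-frame times in ms
print(f"{1000 * len(times) / sum(times):.1f} average fps over the excerpt")
```

From there, pasting the per-frame times into a spreadsheet (or any plotting tool) produces the same kind of trend chart we show in this article.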
We are going to look at two in-game benchmarks that also measure frame times, and we will compare their results against Fraps frame time measurements. We are using the STALKER: Call of Pripyat stand-alone benchmark and Metro 2033’s official benchmark, and we shall also look at Far Cry 2.
We are using the stock-clocked GTX 680, and if our testing shows Fraps to be a useful tool, later benching will pit its smoothness of frame delivery against the HD 7970 at GHz Edition clocks. Raw frame rates simply do not convey the complete gaming experience, and this is easily illustrated with multi-GPU setups, where otherwise satisfactory frame rates may feel much slower than they should due to uneven delivery of frames.
One thing to note is that we overclocked our Core i7-3770K CPU to 4.5GHz to make sure that there is no CPU bottleneck. We are testing at 1920×1080. Although we are only testing three games in this Part 1, we generally use 2x/4xAA or 8xAA plus 16xAF with maximum (ultra) DX11/10.1/10/9c details whenever they are available. We are benching with GeForce 310.70 WHQL drivers.
Please continue on to the next page for the complete hardware and software setup of our testing platform.
Wow. These results are extremely similar, more so than I ever thought. Fraps results do differ by a tiny amount, but this is because the game engine and Fraps grab their data completely differently. Fraps monitors DirectX/OpenGL calls after the fact, and being an external program it throws in its own latencies, however small they may be. But through all this, the trend graphs show near-identical trends for Fraps vs. the internal game engines. It’s amazingly accurate, and the results are like mirror images.
Some people may not know the point of these tests, so to be clear I’ll add it. Most games have no way to gather frame times from the engine at all. Fraps can do this rather easily at any time while playing or benchmarking. The question was whether Fraps was accurate enough to rely on at all, and whether the spikes meant what they were represented as. Having these tests with the game engines producing near-identical trend graphs gives high credence to Fraps being an acceptable tool for capturing frame times. We also have the reviewer’s experience, which seemed to align with the frame time data very nicely: he saw the small hitching or stuttering that the trend charts represent.
It’s remarkably easy, actually. Fraps and in-game timing both work by timestamping the ‘swap buffer’ command, which indicates the end of a frame. As the timestamp is an internal CPU counter, the only difference would be on either side of the procedure call, so the two would differ by at most a few nanoseconds. Fraps would be at most a few kilobytes of code, mostly a stub forwarding calls to the ‘real’ driver.
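This point can be illustrated with a toy timing loop (Python stand-ins, not real D3D code): the engine stamps the counter just before the swap call, and an interposer like Fraps stamps it just after forwarding the call, so the two per-frame delta series can disagree only by the cost of the call itself.

```python
import time

def swap_buffers():
    pass  # stand-in for the real SwapBuffers/Present call

engine_times, fraps_times = [], []
prev_engine = prev_fraps = None
for _ in range(5):
    time.sleep(0.016)               # ~16 ms of simulated frame work
    t_engine = time.perf_counter()  # engine stamps just before the swap
    swap_buffers()
    t_fraps = time.perf_counter()   # interposer stamps just after forwarding
    if prev_engine is not None:
        engine_times.append(t_engine - prev_engine)
        fraps_times.append(t_fraps - prev_fraps)
    prev_engine, prev_fraps = t_engine, t_fraps

# The two per-frame series differ only by the tiny call overhead.
for e, f in zip(engine_times, fraps_times):
    print(f"engine {e*1000:7.3f} ms   interposer {f*1000:7.3f} ms")
```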
The question “whether fraps was accurate enough to rely on at all” was not answered “at all,” as Fraps and the games used the same measure of “frame time”: the time between calls to “swap buffer,” as Pancake mentioned here.
But the real question is: does it correctly measure all the stuttering we see on the screen? The answer is almost certainly “no,” since after the call to Present() the driver can do quite a lot of work before finally presenting the frame to the display device.
And although this doesn’t matter in terms of average frame rates, it does matter when you want to measure “jitter.”
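To make that concrete, here is a small illustration with made-up frame-time series: both deliver the same average fps, but one would be perceived as jittery. The frame-time standard deviation (one possible jitter metric) separates them where average fps cannot.

```python
from statistics import mean, pstdev

smooth = [16.7] * 6                           # steady delivery, ~60 fps
jittery = [8.0, 25.4, 8.0, 25.4, 8.0, 25.4]   # same average, uneven delivery

for name, times in (("smooth", smooth), ("jittery", jittery)):
    avg_fps = 1000 / mean(times)
    print(f"{name}: {avg_fps:.1f} fps average, "
          f"frame-time std dev {pstdev(times):.1f} ms")
```

An fps chart shows these two runs as identical; only a frame-time chart (or a dispersion statistic over frame times) reveals the difference.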
Sadly, the only way to measure frame times correctly is to use profiling tools from the GPU vendor (Nvidia or AMD), and the game needs to support this “profiling” mode (none of them, obviously, ships with it “on”), or to find some clever workaround, such as making the game run with a “hacked” D3D library.