Article posted in September 2015

Calculating FPS past requestAnimationFrame limit with requestIdleCallback

Reading time: 10 minutes

Topics: JavaScript, requestAnimationFrame, requestIdleCallback

This is a legacy post, ported from the old system:
Images might be too low quality, code might be outdated, and some links might not work.



Rendering and frames per second

One of the simplest and best tools to measure the performance of code that has to run in real time is frames per second, FPS for short. FPS tells us how many frames are being drawn per second and, more generally, the frequency at which our drawing code runs.

For clarity, I will use "render" to mean the critical code run on each frame, but it could be physics stepping, simulation, or any other frame-based code. Used in this broader sense, FPS gauges the frequency at which our code can run.

The FPS metric is simple: executions over time, and it's directly related to the time the code takes to run. If our render code takes 1 second to run, we're running at 1 FPS: 1 frame rendered in 1 second. If it takes 200ms (1/5th of a second), we're running at 5 FPS: 5 frames rendered in 1 second. Frequency is the number of executions over 1 second, or inversely, 1 second divided by the time the frame code takes to run.
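As a quick illustration of that arithmetic (frameTimeMs here is just a hypothetical measured value):

    // FPS is 1 second divided by the time one frame takes.
    const frameTimeMs = 200;                 // our render code took 200ms this frame
    const fps = 1000 / frameTimeMs;          // 1000ms / 200ms = 5 FPS

    // Inversely, a target framerate gives us the time budget per frame.
    const targetFPS = 60;
    const frameBudgetMs = 1000 / targetFPS;  // ~16.6ms per frame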

We used to rely on setTimeout or setInterval to get our JavaScript code to run at specific rates. For instance, setTimeout( fn, 100 ) would execute fn every 100ms, or at 10 FPS. The problem is that it doesn't always work out like that: setTimeout/setInterval just push the fn call onto the queue, and it is retrieved and executed only once the main thread has finished its work. Check out this video to learn more: "What the heck is the event loop anyway?" by Philip Roberts.
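Something along these lines (a sketch, not the post's original code):

    // Naive 10 FPS loop with setTimeout: fn is *queued* every ~100ms,
    // but it only runs once the main thread is free, so the real rate drifts.
    function fn() {
      // ... render something ...
      setTimeout( fn, 100 );
    }
    setTimeout( fn, 100 );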

Then we got requestAnimationFrame (rAF), a big improvement over relying on JavaScript timers, which helps us schedule our render calls correctly. With rAF, we pass our render code as the callback, and it will be executed at the beginning of every frame. Now, at what framerate does that code run?
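A typical rAF loop looks like this (a minimal sketch):

    function render( now ) {
      // ... draw the current frame ...

      // schedule ourselves again for the next frame
      requestAnimationFrame( render );
    }
    requestAnimationFrame( render );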

Usually render code tries to keep up with the display refresh rate to prevent tearing, in what is called V-Sync. The refresh rate of a display is stated in Hz, a measure of frequency, so it translates directly to FPS. Common displays (like laptop and mobile screens) run at 60Hz: they can show up to 60 frames per second.

We could still be rendering 1 frame per second, and the display would be refreshing 60 times per second, showing the same image over and over.

requestAnimationFrame runs at the display's refresh rate (usually 60Hz): our render function will get called at most 60 times per second, that is, every 16.6ms (1/60 of a second). That is our time budget per frame. If our render code takes more than 16.6ms to execute, we won't make it in time for the current frame: we'll be over budget and start eating into the next frame's (or frames') time. The frame that went over budget will eventually be rendered, but all the frames that couldn't be rendered in the meantime are dropped frames.

In RAIL terms, our animation time is way over 16ms, and our response time is in danger of going over 100ms.

It's important to remember that framerate and display refresh rate are related but not the same, since the hardware will execute on time, but software can be delayed or blocked.

There are plenty of solutions to measure the framerate of our JavaScript code: Chrome DevTools' FPS meter and the Timeline's Framerate graph; libraries like stats.js or rStats; and many more. Chrome's tools are designed for common performance debugging, so they're capped at 60FPS. stats.js, rStats and the like are not tied to any specific refresh rate, but since the code they measure is usually driven by rAF, they're bound to 60FPS (or the display's refresh rate) too.

To make things a bit more complicated, there are displays that run at 120Hz, and VR HMDs like the Oculus Rift run at 75Hz or even 90Hz! For most of the calculations I'll be sticking to 60Hz, but remember that it's not universal.

If our code runs in a perfect 16ms, fitting into the frame budget, it's all good: we're hitting 60FPS, not dropping any frames, life is great. What happens if it takes longer? We know when it takes more than 16ms, because we start dropping frames. But what if our code runs faster than that? It would mean we have leeway to add more effects or run more complex code. And we want to squeeze the most out of the hardware, don't we?

How do we measure the exact rendering time? In code that runs synchronously, like 2D canvas, that's not a problem, because we have many tools to measure exactly how much time each call is taking. We can use performance.now(), console.time()/console.timeEnd(), or the tools mentioned previously. But WebGL drawing code runs asynchronously, and those solutions report shorter times than the actual values.
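For the synchronous case, the measurement is straightforward; something like this sketch, where render() stands in for our drawing code:

    // Works for synchronous (e.g. 2D canvas) rendering:
    const start = performance.now();
    render();                                  // our synchronous drawing code
    const elapsed = performance.now() - start; // ms the render actually took

    // Or equivalently, with the console helpers:
    console.time( 'render' );
    render();
    console.timeEnd( 'render' );               // logs "render: X ms"

    // With WebGL, calls are queued and executed asynchronously,
    // so both approaches report less time than the frame really costs.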

Enter requestIdleCallback

With requestIdleCallback (rIC) landing in Canary a few weeks ago, we now have a way of knowing more about each frame. The idea of rIC is to schedule work when there is free time at the end of a frame, or when the user is inactive. The callback we get from rIC receives a few things, the most important being timeRemaining(), which tells us how many milliseconds are left in the frame.
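In its simplest form it looks like this (a sketch based on the API as it landed in Canary):

    requestIdleCallback( function ( deadline ) {
      // deadline.timeRemaining() returns the ms left in the current frame;
      // deadline.didTimeout tells us if we were called because of a timeout.
      console.log( 'Idle time left this frame:', deadline.timeRemaining() );
    } );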

This opens up a lot of possibilities, and one of them applies directly to measuring FPS: if we schedule our render code with requestAnimationFrame and use requestIdleCallback to know the time remaining in each frame, we can work out how long the render code took for the current frame and calculate FPS values above 60FPS.

Let's get a simple FPS meter using requestAnimationFrame and a bit of math:
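Something along these lines (a minimal sketch, since the original embedded code didn't survive the port):

    let frames = 0;
    let lastTime = performance.now();
    let fps = 0;

    function render( now ) {
      // ... render code would go here (nothing for now) ...

      frames++;
      if ( now - lastTime >= 1000 ) {
        fps = frames;          // frames accumulated over ~1 second
        frames = 0;
        lastTime = now;
        console.log( 'FPS:', fps );
      }

      requestAnimationFrame( render );
    }
    requestAnimationFrame( render );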

No surprises here, it should run at 60FPS, since the render code is doing essentially nothing.

So what we do is count a frame every time the rAF callback is called, and then check whether the elapsed time is greater than 1000ms. The number of frames accumulated during that time (which should be around 1 second) is the framerate, because it's literally frames per second.

Calculating the elapsed time on every rAF might seem like overkill, and we could do it with a setTimeout or setInterval instead, but besides the drawbacks of the timing functions stated before, it's nicer to have everything in a single callback.

We will now add some code to simulate render work that keeps the CPU busy: a sleep() function that takes a parameter specifying the time, in ms, that the code should keep running.
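A sketch of such a sleep(), busy-waiting on purpose, since blocking the main thread is exactly what we want to simulate:

    // Block the main thread for `ms` milliseconds to fake expensive render work.
    function sleep( ms ) {
      const start = performance.now();
      while ( performance.now() - start < ms ) {
        // busy-wait: keep the CPU occupied
      }
    }

    // Inside the rAF callback:
    // sleep( frameTime );  // frameTime would come from the demo's slider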

Now you can try moving the slider up and down and see how the framerate changes depending on the frame time. Notice also that the FPS value never goes over 60FPS (except by rounding), even if we lower the render time below 16ms.

Let's now add an FPS meter that works using requestIdleCallback. The idea here is simple: if our rIC callback gets called, it means there is still time remaining in the frame. We then calculate the actual rendering time by subtracting timeRemaining() from the frame budget. I'm assuming a 60Hz budget here, which is wrong for the reasons stated before; there should be a way of knowing either the refresh rate of the display (similar to devicePixelRatio) or the ideal frame time for the current display.
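A sketch of that calculation, hard-coding the 60Hz budget:

    const FRAME_BUDGET = 1000 / 60;  // assuming a 60Hz display, ~16.66ms

    function onIdle( deadline ) {
      // Time the render took = frame budget minus the idle time left over.
      // (Only meaningful while a rAF loop is keeping every frame busy.)
      const renderTime = FRAME_BUDGET - deadline.timeRemaining();
      if ( renderTime > 0 ) {
        const idleFPS = Math.round( 1000 / renderTime );  // can go well above 60
        console.log( 'rIC FPS:', idleFPS );
      }
      requestIdleCallback( onIdle );
    }
    requestIdleCallback( onIdle );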

Notice that now we get framerates over 60FPS when the frame time value is below 16ms, but no updates when the framerate is below 60FPS. That is because rIC won't be called when there's no time remaining in the frame, and that's precisely when we drop below 60FPS.

The solution is to mix both methods and choose the appropriate value depending on the situation: if we are rendering at 60FPS or below, we use the normal rAF-based value, since it doesn't really matter how far over budget we are; if we are rendering above 60FPS, we use the value calculated with requestIdleCallback.
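In code, the selection could look something like this (a sketch, with rafFPS and idleFPS standing for the two values computed above):

    // rafFPS: value from the rAF counter (never goes meaningfully above 60).
    // idleFPS: value from the rIC calculation (only trustworthy above 60).
    const fps = ( rafFPS >= 60 ) ? idleFPS  // under budget: show how far above 60 we are
                                 : rafFPS;  // dropping frames: rIC stops firing anyway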

Why not use the timeout parameter of requestIdleCallback here? It could definitely be used to detect that rIC is not being called, but since I already have rAF doing that check, it feels redundant and slower to react to FPS changes.

The values over 60FPS are very unstable, which makes sense, since they're based on single per-frame samples. Ideally, the framerate calculated this way should be averaged or smoothed in some fashion.
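One simple way to do that (my own suggestion, not something the demo does) would be an exponential moving average:

    // Smooth the per-frame value: 0 < alpha <= 1, lower = smoother.
    const alpha = 0.1;
    let smoothedFPS = 60;

    function updateSmoothedFPS( instantFPS ) {
      smoothedFPS += alpha * ( instantFPS - smoothedFPS );
      return smoothedFPS;
    }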

Final words

There's a WebGL demo I first coded to try the rIC-based FPS meter. Add and remove scenes to see how it affects the framerate; it's especially interesting in Chrome Dev for Android.

As always, thanks for reading and feel free to post a comment or hit me up on Twitter!
