The default timer resolution on Windows is 15.6 ms – a timer interrupt 64 times a second. When programs increase the timer frequency they increase power consumption and harm battery life. They also waste more compute power than I would ever have expected – they make your computer run slower! Because of these problems Microsoft has been telling developers for years not to increase the timer frequency.
So how come almost every time I notice that my timer frequency has been raised it’s been done by a Microsoft program (or Chrome), that is not doing anything to justify requesting a high-frequency timer?
Update March 2015: Chrome now avoids raising the timer frequency unnecessarily.
This article was updated July 13, 2013, based on feedback from readers. See the bottom for the new material.
Update July 15, 2014. Google has locked the Chrome bug to further editing. The last comment from Google says that Chrome doesn’t always raise the timer resolution, and besides, other programs also raise it. Chrome may not always raise the timer resolution, but with a home page of about:blank it does. That seems bad. And while other programs may raise the timer resolution, I avoid running those. So I guess Google is telling me to avoid running Chrome as well. Okay. Done.
Seeing the current timer frequency is easy – just run the ClockRes tool from Sysinternals.
ClockRes v2.0 – View the system clock resolution
Copyright (C) 2009 Mark Russinovich
SysInternals – http://www.sysinternals.com
Maximum timer interval: 15.600 ms
Minimum timer interval: 0.500 ms
Current timer interval: 1.000 ms
For maximum battery life the current timer interval (which can be changed with timeBeginPeriod) should be 15.6 ms, but as you can see above some program had set it to 1.0 ms. That means the timer interrupt is firing an extra 936 times per second, which should only be done if the benefits justify the costs.
Finding the culprit – WPF
Finding out who raised the timer frequency is non-obvious, but still fairly easy. Just open an administrator command prompt and run “powercfg -energy duration 5”. Part of the resulting HTML report will look like this:
The stack of modules responsible for the lowest platform timer setting in this process.
Requested Period 10000
Requesting Process ID 3932
Requesting Process Path
C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe
Calling Module Stack
So, Visual Studio 11, through its use of WPF, requested a 1.0 ms timer interval, confusingly displayed as 10,000 with the units being 100 ns. This is a known problem with WPF. All versions of Visual Studio trigger this behavior sometimes, and presumably most WPF programs can also trigger it. While increasing the timer frequency might make sense for an application that is trying to maintain a steady frame rate it does not make sense for WPF to leave the timer frequency raised even when there is no animation going on.
Finding the culprit – SQL Server
Another common culprit on my machine is sqlservr.exe. I think this was installed by Visual Studio but I’m not sure. I’m not sure if it is being used or not. Either way, SQL Server should not be raising the timer frequency. If doing so is needed to improve performance then that sounds like a design flaw. And, as with WPF, if raising the frequency is needed then it should only be done when SQL Server is busy, instead of leaving it permanently raised.
Platform Timer Resolution:Outstanding Timer Request
A program or service has requested a timer resolution smaller than the platform maximum timer resolution.
Requested Period 10000
Requesting Process ID 2384
Requesting Process Path \Device\HarddiskVolume1\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
Finding the culprit – quartz.dll
I don’t have the powercfg output for it but C:\Windows\System32\quartz.dll is another cause of an increased timer frequency. I’m not even sure what Quartz is (Expression Web Designer?) but I know it is sometimes wasting energy.
Finding the culprit – Chrome
Microsoft is the usual culprit on my machine, but Google’s Chrome is also an offender. If I run Chrome then it instantly raises the timer frequency to 1,000 Hz, even when I’m on battery power and just displaying a raw HTML page.
To the right we can see Chrome displaying a harsh indictment of Chrome.
Finding the culprit – svchost.exe
Sometimes svchost.exe raises the timer frequency to 100 Hz. That’s nowhere near as bad as 1,000 Hz, but still annoying. It’s particularly frustrating because I can’t tell which service is doing it.
Tragedy of the commons – highest frequency wins
The Windows timer interrupt is a global resource and it ticks at one rate for the entire system. That means that a single program that raises the timer frequency affects the behavior of the entire system.
When a process calls timeBeginPeriod this frequency request is in force until it is explicitly cancelled with timeEndPeriod or until the process terminates. Most programs (including my own test program below) never call timeEndPeriod, relying instead on Windows process cleanup. This works, and is reasonable for any application that needs the timer frequency high for its entire lifetime, but for any process whose lifetime may outlast its need for a high frequency timer, it’s time to start calling timeEndPeriod. As Microsoft recommends, this includes movie players that are paused, and games that are minimized. It also includes web browsers that do not currently need high-resolution timers, or are running on battery power.
(see Sleep Variation Investigated for what the timer frequency affects)
Does it matter?
My main home computer is a laptop. I use it on the bus every day and I like to save my battery power for useful things rather than having it wasted on waking up the CPU 1,000 times a second.
Microsoft says that it matters. In this article they say “We are committed to continuously improving the energy efficiency of Windows PCs” and yet, four years later, they don’t seem to be following their own guidelines or heeding their own warnings, which say “Some applications reduce this to 1 ms, which reduces the battery run time on mobile systems by as much as 25 percent.”
One handy way of estimating the power cost is to use the Intel Power Gadget tool. On supported Intel processors this shows you the power drawn by the CPU package in real time with a claimed precision of 0.01 W. Power Gadget is handy because it works equally well whether on battery power or plugged in. On my Windows 7 Sandy Bridge laptop it consistently shows a 0.3 W increase in power draw from having the timer frequency increased. That’s almost 10% of the idle CPU package power draw, although a lower percentage of total system power draw.
An increase of 0.3 W may not seem like much but there are a couple of reasons to take it seriously. One is that if your software is on average running on 33 million machines (a conservative bet for something like Chrome) then increasing the timer frequency could be wasting about ten MW of power. A check-in that fixes such a bug gives you enough carbon-offset-karma to last a lifetime.
Another reason to take this issue seriously is that I have been told that the importance of this issue is only increasing over time. With newer CPUs and with better timer coalescing the frenetic interrupts are likely to consume a greater percentage of total compute power.
Fast timers waste performance
Executing interrupts also uses some execution resources so having more interrupts per second should make your computer run a little bit slower. I tested this theory by writing a program that spins in a busy loop and reports every second on how quickly it’s getting work done. While this program was running I would change the timer resolution and see whether its throughput was affected.
It was affected. A lot.
I just did some quick tests on two machines, so the exact values shouldn’t be taken too seriously, and results will certainly vary depending on machine type, load, etc. But the results clearly indicate a performance cost to having high-frequency interrupts enabled. The overhead that I measured varied from 2.5% to 5%. That’s about an order of magnitude more than I expected. This level of slowdown is significant enough that it makes the common practice of raising the timer frequency in high-performance animation software seem counter-productive.
Raising the Windows timer frequency is bad. It wastes power and makes your computer slower. Routinely doing this in all sorts of programs that end up sitting idle for hours really needs to stop.
Here are some raw results:
The 20 second period in the middle where performance suddenly drops is exactly when the timer resolution increase happened, and I got similar results every time I tried. I tested this both on my laptop on battery power and my workstation on wall power and the results were always similar.
It’s not science without disclosing the source code, so here’s my performance measuring program:
#include <windows.h>
#include <stdio.h>

const double kDelayTime = 1.0;     // reporting interval, in seconds

LARGE_INTEGER g_frequency;         // performance-counter ticks per second
int g_array[1024];
volatile int g_sum;                // volatile so the work isn't optimized away

double GetTime()
{
    LARGE_INTEGER counter;
    QueryPerformanceCounter(&counter);
    return counter.QuadPart / double(g_frequency.QuadPart);
}

void SpinABit(int offset)          // a fixed chunk of busy work
{
    for (int i = 0; i < ARRAYSIZE(g_array); ++i)
        g_sum += g_array[i + offset];
}

int main(int argc, char* argv[])
{
    QueryPerformanceFrequency(&g_frequency);
    for (;;)                       // report throughput once per second
    {
        double start = GetTime();
        int iterations = 0;
        double elapsed;
        do
        {
            SpinABit(0);
            ++iterations;
            elapsed = GetTime() - start;
        } while (elapsed < kDelayTime);
        printf("%1.5e iterations/s\n", iterations / elapsed);
    }
}
And here’s my program that raises the timer frequency for 20 s.
#include <windows.h>
#pragma comment(lib, "winmm.lib")

int main(int argc, char* argv[])
{
    timeBeginPeriod(1);            // request a 1 ms timer interval
    Sleep(20000);                  // hold it raised for 20 seconds
    // timeEndPeriod call is omitted because process
    // cleanup will do that.
    return 0;
}
Don’t forget to check the system timer resolution using ClockRes before running the test. Make sure the timer interval is at least 10 ms before doing the test, or else you won’t see dramatic changes.
And fix your code. Everybody.
Update, July 13, 2013
I’ve added some clarifications based on reader confusion, and some new information that I learned from reader comments. Enjoy.
I have not tried this on Windows 8 but one reader reports that the performance slowdown is gone on Windows 8. This Ars Technica article discusses the move to a tick-less kernel in Windows 8. It seems that some of the cost of just having the timer enabled has gone away with the move to tick-less. Now the cost should be proportional to how frequently applications ask to be woken, which is much saner. I have not verified these changes myself but it sounds encouraging. I have not seen many technical details about the tick-less Windows kernel, but a recent article about the tick-less Linux kernel explains some of the issues and challenges. In particular it is quite likely that Windows 8 still runs the interrupt on one processor, so that timeGetTime will have its increased precision. The article Timer-Resolution.docx discusses timer coalescing, and disabling of timer interrupts on processors that don’t need them, which is presumably part of what was changed in Windows 8.
There are two reasons for raising the timer frequency. One is that it improves the resolution of Sleep(n) and of timeouts on WaitForSingleObject. For instance, some games have a power saving mode that throttles the game to 30 fps and this can only be done accurately if Sleep(1) returns in one millisecond, rather than 15-16 milliseconds. By enabling a lower frame rate without requiring busy waiting the higher timer frequency actually saves energy, in this case. For details on Sleep(n) and timer frequency read Sleep Variation Investigated. Multi-media playback often raises the timer frequency for variants of this reason, but these programs should reset the frequency when they are no longer animating.
Another reason for raising the timer frequency is so that timeGetTime will be more accurate. This is used, for instance, by SQL Server to more accurately measure query times. This behavior can be controlled using trace flag T8038, and is discussed more in KB931279. For details on the difference between timeGetTime and GetTickCount see timeGetTime versus GetTickCount.
The Chrome developers realized years ago that raising the timer frequency on battery power was a bad idea, as documented in Chrome: Cranking Up The Clock. However their mitigation of not raising the frequency when on battery power regressed. Issue 153139 tracks this – star it if you think it’s important or want to be notified of changes. If this article causes Chrome to fix this issue then it will have been worthwhile as that will probably save many megawatts of power. Or, as one reader prefers, many billions of joules of energy (per hour that the fix is in effect).
The Windows timer frequency is set to the highest frequency requested by a running program. Timer frequency requests can be cancelled by calling timeEndPeriod, but this is rarely done. Timer frequency requests are automatically cancelled when a process ends. If powercfg -energy duration 5 shows that a process has raised the timer frequency you can solve this by killing that process.
Preventing the timer frequency from being raised on your machine is simple. All you have to do is inject code into every process and shim timeBeginPeriod so that calls to it become a no-op. However, despite this being an obviously trivial task that could be put together in mere seconds, nobody has yet offered up anything more than code snippets and links to references.
Timer Queues were suggested as being a better timer mechanism, but the advantages of this better timer mechanism were not described.
An unexpected side effect of this article is that many developers said “Cool – now I know how to increase the timer frequency!” That makes me nervous, but as long as those developers raise the timer frequency for good reasons, and reset it with timeEndPeriod when they no longer need it, then all will be well.
Reddit discussion is here.
OSNews discussion is here.
A comment on some random forum suggested that this article was misguided because on a busy server the wasted energy is swamped by the energy used for real work. That is true, but that hardly makes my claims irrelevant: On a busy computer the issue is the wasted performance. On an idle computer the issue is the wasted electricity. I continue to believe that (on Windows 7 and below at least) a raised timer frequency is harmful. And if you don’t believe my test results, feel free to do your own.
Raising the timer frequency isn’t (despite everything I’ve said) universally bad. It can be necessary. Many games (including those that I work on) raise the timer frequency in order to allow high frame rates (100+ fps). Having a high timer frequency means we can call Sleep(1) while waiting for the next frame, which means that we save power compared to busy waiting for the next frame! My complaint is with programs that raise the timer frequency and then leave it raised for days at a time, even when the program is just sitting idle in the background. That is what bothers me.