Windows Timer Resolution: Megawatts Wasted

The default timer resolution on Windows is 15.6 ms – a timer interrupt 64 times a second. When programs increase the timer frequency they increase power consumption and harm battery life. They also waste more compute power than I would ever have expected – they make your computer run slower! Because of these problems Microsoft has been telling developers to not increase the timer frequency for years.

So how come almost every time I notice that my timer frequency has been raised it’s been done by a Microsoft program (or Chrome) that is not doing anything to justify requesting a high-frequency timer?

This article was updated July 13, 2013, based on feedback from readers. See the bottom for the new material.

Update July 15, 2014. Google has locked the Chrome bug to further editing. The last comment from Google says that Chrome doesn’t always raise the timer resolution, and besides, other programs also raise it. Chrome may not always raise the timer resolution, but with a home page of about:blank it does. That seems bad. And while other programs may raise the timer resolution, I avoid running those. So I guess Google is telling me to avoid running Chrome as well. Okay. Done.

Seeing the current timer frequency is easy – just run the clockres tool from Sysinternals:

ClockRes v2.0 – View the system clock resolution
Copyright (C) 2009 Mark Russinovich
SysInternals – http://www.sysinternals.com

Maximum timer interval: 15.600 ms
Minimum timer interval: 0.500 ms
Current timer interval: 1.000 ms

For maximum battery life the current timer interval (which can be changed with timeBeginPeriod) should be 15.6 ms, but as you can see above some program had set it to 1.0 ms. That means the timer interrupt is firing an extra 936 times per second (1,000 times instead of 64), which should only be done if the benefits justify the costs.

Finding the culprit – WPF

Finding out who raised the timer frequency is non-obvious, but still fairly easy. Just open an administrator command prompt and run “powercfg -energy -duration 5”. Part of the resulting HTML report will look like this:

The stack of modules responsible for the lowest platform timer setting in this process.
Requested Period 10000
Requesting Process ID 3932
  Requesting Process Path
C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe
  Calling Module Stack
C:\Windows\SysWOW64\ntdll.dll
C:\Windows\SysWOW64\winmm.dll
C:\Windows\Microsoft.NET\Framework\v4.0.30319\WPF\wpfgfx_v0400.dll
C:\Windows\SysWOW64\kernel32.dll
C:\Windows\SysWOW64\ntdll.dll

So, Visual Studio 11, through its use of WPF, requested a 1.0 ms timer interval, confusingly displayed as 10,000 with the units being 100 ns. This is a known problem with WPF. All versions of Visual Studio trigger this behavior sometimes, and presumably most WPF programs can also trigger it. While increasing the timer frequency might make sense for an application that is trying to maintain a steady frame rate, it does not make sense for WPF to leave the timer frequency raised even when there is no animation going on.

Finding the culprit – SQL Server

Another common culprit on my machine is sqlservr.exe. I think it was installed by Visual Studio, but I’m not sure, and I don’t know whether it is actually being used. Either way, SQL Server should not be raising the timer frequency. If doing so is needed to improve performance then that sounds like a design flaw. And, as with WPF, if raising the frequency is needed then it should only be done when SQL Server is busy, instead of being left permanently raised.

Platform Timer Resolution: Outstanding Timer Request
A program or service has requested a timer resolution smaller than the platform maximum timer resolution.
Requested Period 10000
Requesting Process ID 2384
Requesting Process Path \Device\HarddiskVolume1\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlservr.exe

Finding the culprit – quartz.dll

I don’t have the powercfg output for it but C:\Windows\System32\quartz.dll is another cause of an increased timer frequency. I’m not even sure what Quartz is (Expression Web Designer?) but I know it is sometimes wasting energy.

Finding the culprit – Chrome

Microsoft is the usual culprit on my machine, but Google’s Chrome is also an offender. If I run Chrome then it instantly raises the timer frequency to 1,000 Hz, even when I’m on battery power and just displaying a raw HTML page.

To the right we can see Chrome displaying a harsh indictment of Chrome.

Finding the culprit – svchost.exe

Sometimes svchost.exe raises the timer frequency to 100 Hz. That’s nowhere near as bad as 1,000 Hz, but still annoying. It’s particularly frustrating because I can’t tell which service is doing it.

Tragedy of the commons – highest frequency wins

The Windows timer interrupt is a global resource and it ticks at one rate for the entire system. That means that a single program that raises the timer frequency affects the behavior of the entire system.

When a process calls timeBeginPeriod this frequency request is in force until it is explicitly cancelled with timeEndPeriod or until the process terminates. Most programs (including my own test program below) never call timeEndPeriod, relying instead on Windows process cleanup. This works, and is reasonable for any application that needs the timer frequency high for its entire lifetime, but for any process whose lifetime may outlast its need for a high frequency timer, it’s time to start calling timeEndPeriod. As Microsoft recommends, this includes movie players that are paused, and games that are minimized. It also includes web browsers that do not currently need high-resolution timers, or are running on battery power.

(see Sleep Variation Investigated for what the timer frequency affects)

Does it matter?

My main home computer is a laptop. I use it on the bus every day and I like to save my battery power for useful things rather than having it wasted on waking up the CPU 1,000 times a second.

Microsoft says that it matters. In this article they say “We are committed to continuously improving the energy efficiency of Windows PCs” and yet, four years later, they don’t seem to be following their own guidelines or heeding their own warnings, which say “Some applications reduce this to 1 ms, which reduces the battery run time on mobile systems by as much as 25 percent.”

One handy way of estimating the power cost is to use the Intel Power Gadget tool. On supported Intel processors this shows you the power drawn by the CPU package in real time, with a claimed precision of 0.01 W. Power Gadget is handy because it works equally well whether on battery power or plugged in. On my Windows 7 Sandy Bridge laptop it consistently shows a 0.3 W increase in power draw from having the timer frequency increased. That’s almost 10% of the idle CPU package power draw, although a lower percentage of total system power draw.

An increase of 0.3 W may not seem like much but there are a couple of reasons to take it seriously. One is that if your software is on average running on 33 million machines (a conservative bet for something like Chrome) then increasing the timer frequency could be wasting about ten MW of power. A check-in that fixes such a bug gives you enough carbon-offset-karma to last a lifetime.

Another reason to take this issue seriously is that I have been told that the importance of this issue is only increasing over time. With newer CPUs and with better timer coalescing the frenetic interrupts are likely to consume a greater percentage of total compute power.

Fast timers waste performance

Executing interrupts also uses execution resources, so having more interrupts per second should make your computer run a little bit slower. I tested this theory by writing a program that spins in a busy loop and reports every second on how quickly it’s getting work done. While this program was running I would change the timer resolution and see whether its throughput was affected.

It was affected. A lot.

I just did some quick tests on two machines, so the exact values shouldn’t be taken too seriously, and results will certainly vary depending on machine type, load, etc. But the results clearly indicate a performance cost to having high-frequency interrupts enabled. The overhead that I measured varied from 2.5% to 5%. That’s about an order of magnitude more than I expected. This level of slowdown is significant enough that it makes the common practice of raising the timer frequency in high-performance animation software seem counter-productive.

Raising the Windows timer frequency is bad. It wastes power and makes your computer slower. Routinely doing this in all sorts of programs that end up sitting idle for hours really needs to stop.

Here are some raw results:

4.03904e+006 iterations/s
4.08690e+006 iterations/s
4.09211e+006 iterations/s
4.09437e+006 iterations/s
4.05934e+006 iterations/s
4.00926e+006 iterations/s
4.07723e+006 iterations/s
4.10709e+006 iterations/s
4.02196e+006 iterations/s
4.10028e+006 iterations/s
4.10170e+006 iterations/s
4.10272e+006 iterations/s
4.10708e+006 iterations/s
4.10137e+006 iterations/s
3.95200e+006 iterations/s
3.90879e+006 iterations/s
3.92327e+006 iterations/s
3.91697e+006 iterations/s
3.92326e+006 iterations/s
3.91740e+006 iterations/s
3.92221e+006 iterations/s
3.91711e+006 iterations/s
3.91795e+006 iterations/s
3.92029e+006 iterations/s
3.92204e+006 iterations/s
3.92487e+006 iterations/s
3.91863e+006 iterations/s
3.92451e+006 iterations/s
3.92307e+006 iterations/s
3.92017e+006 iterations/s
3.91865e+006 iterations/s
3.91699e+006 iterations/s
3.92120e+006 iterations/s
3.90531e+006 iterations/s
3.98594e+006 iterations/s
4.10586e+006 iterations/s
4.10674e+006 iterations/s
4.11726e+006 iterations/s
4.11836e+006 iterations/s
4.11177e+006 iterations/s
4.10970e+006 iterations/s

The 20-second period in the middle where performance suddenly drops is exactly when the timer resolution was raised, and I got similar results every time I tried. I tested this both on my laptop on battery power and on my workstation on wall power, and the results were always similar.

Source code

It’s not science without disclosing the source code, so here’s my performance measuring program:

#include "stdafx.h"

#include <stdio.h>
#include <Windows.h>

LARGE_INTEGER g_frequency;
const double kDelayTime = 1.0;

double GetTime()
{
    LARGE_INTEGER counter;
    QueryPerformanceCounter(&counter);
    return counter.QuadPart / double(g_frequency.QuadPart);
}

int g_array[1024];
int offset; // always zero, but the optimizer can't prove that
int g_sum;

void SpinABit()
{
    for (int i = 0; i < ARRAYSIZE(g_array); ++i)
    {
        g_sum += g_array[i + offset];
    }
}

void Stall()
{
    double start = GetTime();
    int iterations = 0;
    for (;;)
    {
        ++iterations;
        SpinABit();
        double elapsed = GetTime() - start;
        if (elapsed >= kDelayTime)
        {
            printf("%1.5e iterations/s\n", iterations / elapsed);
            return;
        }
    }
}

int main(int argc, char* argv[])
{
    QueryPerformanceFrequency(&g_frequency);
    for (;;)
        Stall();
    return 0;
}

And here’s my program that raises the timer frequency for 20 s.

#include <stdio.h>
#include <Windows.h>

#pragma comment(lib, "winmm.lib")

int main(int argc, char* argv[])
{
    timeBeginPeriod(1);
    printf("Frequency raised.\n");
    Sleep(20000);
    printf("Frequency lowered.\n");
    // timeEndPeriod call is omitted because process
    // cleanup will do that.
    return 0;
}

Don’t forget to check the system timer resolution with clockres before running the test. Make sure the timer interval is at least 10 ms before starting, or else you won’t see dramatic changes.

And fix your code. Everybody.

Update, July 13, 2013

I’ve added some clarifications based on reader confusion, and some new information that I learned from reader comments. Enjoy.

I have not tried this on Windows 8, but one reader reports that the performance slowdown is gone there. This article in ArsTechnica discusses the move to a tickless kernel in Windows 8. It seems that some of the cost of just having the timer enabled has gone away with the move to tickless operation. Now the cost should be proportional to how frequently applications ask to be woken, which is much saner. I have not verified these changes myself, but it sounds encouraging.

I have not seen many technical details about the tickless Windows kernel, but a recent article about the tickless Linux kernel explains some of the issues and challenges. In particular it is quite likely that Windows 8 still runs the interrupt on one processor, so that timeGetTime will retain its increased precision. The article Timer-Resolution.docx discusses timer coalescing, and disabling of timer interrupts on processors that don’t need them, which is presumably part of what was changed in Windows 8.

There are two reasons for raising the timer frequency. One is that it improves the resolution of Sleep(n) and of timeouts on WaitForSingleObject. For instance, some games have a power saving mode that throttles the game to 30 fps, and this can only be done accurately if Sleep(1) returns in one millisecond rather than 15-16 milliseconds. By enabling a lower frame rate without requiring busy waiting, the higher timer frequency actually saves energy in this case. For details on Sleep(n) and timer frequency read Sleep Variation Investigated. Multimedia playback often raises the timer frequency for variants of this reason, but these programs should reset the frequency when they are no longer animating.

Another reason for raising the timer frequency is so that timeGetTime will be more accurate. This is used, for instance, by SQL Server to more accurately measure query times. This behavior can be controlled using trace flag T8038, and is discussed more in KB931279. For details on the difference between timeGetTime and GetTickCount see timeGetTime versus GetTickCount.

The Chrome developers realized years ago that raising the timer frequency on battery power was a bad idea, as documented in Chrome: Cranking Up The Clock. However their mitigation of not raising the frequency when on battery power regressed. Issue 153139 tracks this – star it if you think it’s important or want to be notified of changes. If this article causes Chrome to fix this issue then it will have been worthwhile as that will probably save many megawatts of power. Or, as one reader prefers, many billions of Joules of power (per hour that the fix is in effect).

Using QueryPerformanceCounter gives even more accurate time results, but QPC has a history of being buggy. More timing discussions can be found here and here.

The Windows timer frequency is set to the highest frequency requested by any running program. Timer frequency requests can be cancelled by calling timeEndPeriod, but this is rarely done. Timer frequency requests are automatically cancelled when a process ends. If powercfg -energy -duration 5 shows that a process has raised the timer frequency you can solve this by killing that process.

Preventing the timer frequency from being raised on your machine is simple. All you have to do is inject code into every process which shims timeBeginPeriod before it is called so that calls to it are a NOP. However, despite this being an obviously trivial task that could be put together in mere seconds, nobody has yet offered up anything more than code snippets and links to references.

Timer Queues were suggested as being a better timer mechanism, but the advantages of this better timer mechanism were not described.

An unexpected side effect of this article is that many developers said “Cool – now I know how to increase the timer frequency!” That makes me nervous, but as long as those developers raise the timer frequency for good reasons, and reset it with timeEndPeriod when they no longer need it, then all will be well.

Reddit discussion is here.

OSNews discussion is here.

A comment on some random forum suggested that this article was misguided because on a busy server the wasted energy is swamped by the energy used for real work. That is true, but that hardly makes my claims irrelevant: On a busy computer the issue is the wasted performance. On an idle computer the issue is the wasted electricity. I continue to believe that (on Windows 7 and below at least) a raised timer frequency is harmful. And if you don’t believe my test results, feel free to do your own.

Raising the timer frequency isn’t (despite everything I’ve said) universally bad. It can be necessary. Many games (including those that I work on) raise the timer frequency in order to allow high frame rates (100+ fps). Having a high timer frequency means we can call Sleep(1) while waiting for the next frame, which means that we save power compared to busy waiting for the next frame! My complaint is with programs that raise the timer frequency and then leave it raised for days at a time, even when the program is just sitting idle in the background. That is what bothers me.

About brucedawson

I'm a programmer, working for Valve (http://www.valvesoftware.com/), focusing on optimization and reliability. Nothing's more fun than making code run 5x faster. Unless it's eliminating large numbers of bugs. I also unicycle. And play (ice) hockey. And juggle.
This entry was posted in Performance, Rants. Bookmark the permalink.

68 Responses to Windows Timer Resolution: Megawatts Wasted

  1. mmm… I’m going to be a smart engineer and sell an app that periodically switches the timer back to a reasonable value and claim huge battery life savings!

    Jokes aside, I remember a Windows developer detailing that Win XP was full of timeBeginPeriod(1) calls and they had to get rid of them all for Vista and 7, since doing so yielded major battery savings.
    Too bad the MSVC team didn’t get the memo. That’s another strike for them – lately I’ve been increasingly annoyed by some of the MSVC team’s design choices, going back to VC 2010.

    As for SQL, it is needed by some MSVC components, mostly MS 2010 Visual Web Developer and some optional C# components.

  2. A gem I just found about Chrome.
    I was actually googling “force timeBeginPeriod system wide” (Windows is ignoring my requests to decrease the timer frequency while Chrome is running because it asked for a higher frequency, and my timeEndPeriod(1) calls are being rejected because my process is not Chrome’s).
    I don’t understand what their fuss is about the QPC API. Yes, it’s really broken, but AMD CPUs do have a fix (I happen to own one of those buggy processors) and the problem goes away if the thread issuing the QPC is locked to a single processor.

    • brucedawson says:

      Thanks for the link to the Chrome article. I have two complaints about it (are you listening Google?) One is it says that they only increase the timer frequency when on battery power, and on recent version of Chrome that is not true. That appears to be a regression.

      Second, they should only raise the timer frequency when needed. If everybody does it all the time because everybody else is doing it then, well, chaos.

      That said, it would be nice if Windows offered ways to wake up at a precise time without having to globally change the timer interrupt frequency. Tickless kernels (scheduling exactly the interrupts that you need) are one way of doing that.

      The cost is real, and people pay the cost even when they are not reaping the benefits.

  3. Aaron Avery says:

    FYI, quartz.dll is a (the?) major component of DirectShow. At least that one might have some business bumping the timer frequency. It’s most likely the renderer, which does need frame-accurate timings and needs accurate waitable timers or Sleep() in order to “play nice” and not spin.

    • brucedawson says:

      Quartz (and Flash, and Chrome, and WPF) should only raise the frequency when needed. All of these systems seem to raise the frequency at startup and then leave it raised until process destruction. Sloppy. I think it’s time to rethink that behavior. A sure sign that Quartz and DirectShow are not appropriately balancing power consumption with their frame-accurate timing needs is that Visual Studio so often ends up raising the timer frequency because of these things — and Visual Studio is not a timing critical animation program.

      • Aaron Avery says:

        I agree 100%. It’s criminal that in order to get accurate timers, one has to do this system-hogging 1ms timer “trick”. With as multimedia-centric as Microsoft is trying to be with Windows, you’d think they would have addressed this by now.

        As to quartz.dll showing up under Visual Studio, I can only guess that VS is hosting some web page with Flash on it. When I ran your tests to check the timer resolution, I only ever saw quartz.dll show up while a DirectShow-using application was actively running.

  4. Pingback: Windows Timer Resolution | musingstudio

  5. Pingback: Is It Just Me? v233893843 - Page 141

  6. Robert says:

    Hello Bruce,

    I’m a colleague of Martin’s, and actually I have a lot to do with Windows timers. There are more potential issues with the Windows timer, or with poor PC hardware and third-party drivers. Just take a look at this paper, “The Problems You’re Having May Not Be the Problems You Think You’re Having: Results from a Latency Study of Windows NT”:

    http://research.microsoft.com/apps/pubs/default.aspx?id=68734

    Best regards!
    Robert

    • brucedawson says:

      Yep, that’s the bug. Reported seven months ago. I just added a comment and linked back to here. Vote on the bug if you want it fixed I guess. Thanks for posting the link.

  7. Gavin S says:

    Noticed you’re not calling timeEndPeriod at the end of your sample. The documentation doesn’t mention it – do you know if there is an assumed call to timeEndPeriod at process termination, or have you just left the timer in limbo?

    • brucedawson says:

      There is an assumed timeEndPeriod at process termination. Process cleanup is quite thorough and I count on that.

      Unfortunately most users of timeBeginPeriod assume that but then leave their process running for days, even when the higher timer resolution isn’t needed. :-(

  8. Alexander Graef says:

    If timer resolution is a system-wide, global property, how would you know if you could safely reduce the resolution again without disrupting other processes that also need the higher resolution? Or better yet, how would you feel if Microsoft had decided to make all their programs reset the resolution to the default on process exit, when your program is relying on having the higher resolution?

    I agree that something has to be done, i.e. the global timer resolution should be the maximum of all processes that requested a higher resolution, so when all programs that requested the higher resolution have exited, the resolution could fall back, and you could reset the resolution mid-execution without actually affecting it or other programs.

    • Gavin S says:

      I believe that is the way it’s designed to work.
      The call to timeBeginPeriod shows your desire to have the higher resolution, and timeEndPeriod releases that desire. Not sure if there is an implicit timeEndPeriod on process termination though.

    • brucedawson says:

      You are describing exactly the current behavior. The global timer frequency is the maximum of all processes requesting a higher frequency. When processes exit their request is cancelled. I probably should have made that clearer in the post.

      • Alexander Graef says:

        So then the problem remains that we don’t know why those applications require a higher timer frequency. For quartz.dll it’s clear: a DirectShow graph needs a high-precision master clock, and although it is usually provided by the audio renderer, that renderer usually samples at about 48 kHz, so even the higher precision contains much uncertainty. The question is whether the high resolution needs to be turned off while the graph is paused or not running at all.
        It might be interesting to see how disruptive changing the timer frequency would be to an application. Maybe they stick with the higher frequency because raising and lowering it would cause problems in the application itself.
        Unfortunately, .NET doesn’t call SetSystemTimeAdjustment directly, so I could not do a call trace on why it did that and whether it has any methods that undo that change.

        • brucedawson says:

          I think we know why WPF and DirectShow need (or think they need) a high timer frequency. However they clearly leave that high frequency enabled when they no longer need it. If it is important to not unnecessarily leave the timer running at high frequency (and Microsoft has said that it is, and my testing confirms that) then Microsoft should make the effort to lower the frequency when it is not strictly needed.

          This is especially true when running on battery power. WPF and DirectShow should probably default to lowering the timer frequency when on battery power, and they should also cancel their high-frequency request whenever they are not currently using it, which on my machine in devenv.exe is most of the time.

          It is plainly obvious to anyone looking at the static and non-animating Visual Studio process that is often raising my timer frequency that it is being raised unnecessarily.

  9. Ric says:

    Just stop using Windows, and the problems are gone.

  10. Pingback: Programs that set the Windows timer to 1ms | Some Things Are Obvious

  11. Pingback: Why Mobile Web Apps are so Slow, Drew Crawford | musingstudio

  12. Alexander says:

    There’s a way to find out which service is to be blamed. You can split service groups, so less services will be started in the same process. Then you have the processID to be blamed, and Process Explorer can show the service contents of each process.

  13. Sik says:

    At some point you mention games but I don’t get why a game would use the standard timer. They’re more likely to use the query performance one, which is counter-based rather than interrupt-based (and thereby much more power management friendly as it won’t take the CPU’s attention all the time). In fact, that timer is probably better for things like animations and such due to its better accuracy (I guess some non-game programs may still want to use standard timers so they don’t have something going on alongside the message loop, but even then 1000Hz seems overkill in that sense).

    That said, I know the query performance timers did have bugs on some systems at some point (but that should be gone by now), so maybe that’s why some programs are resorting to the standard timers. There shouldn’t be much of a need for that anymore, though.

    • brucedawson says:

      The main reason games would increase the timer frequency is so that they can get scheduled every ms instead of every 15.6 ms. For instance, when we want to frame-limit a game to, say, 300 fps we do that by calling Sleep(1) the necessary number of times, and that only works if the timer frequency is 1,000 Hz.

      If we didn’t set the timer frequency to 1,000 Hz then we would have to busy wait, which would actually waste even more power. So, in that context the faster timer actually saves power.

  14. Kat Marsen says:

    Until there is some other way to make Sleep(1), or really WaitForSingleObject(h, 1), actually only sleep for 1 ms (even with timeBeginPeriod(1) they sleep for closer to 1.98 ms on Win7+), I think you’ll just have to suck it up and buy a second battery. 1 millisecond is an eon in computing time… that the most responsive my program can be with a single thread in the case of timeout is 64 Hz, unless I make that call, is ridiculous. On a machine with /billions/ of hertz to go around.

    I do agree that absent of high-level access to genuine timer facilities (whatever happened to HPET anyway?– should we be able to use that by now?), it’s silly that applications have to resort to global calls that affect the whole system.

  15. Tom says:

    Raising the timer frequency to 1000Hz inside a VM (with VirtualBox at least) can also drastically increase idle CPU usage. Instead of the system hanging out close to 0% CPU usage, it can end up as high as 10-20% even when nothing’s going on. I got the author of one of my favorite pieces of software to stop calling timeBeginPeriod and it completely solved the issue when running that software in a VM.

  16. scheherazade says:

    TBH, one of the reasons people pick on windows for being ‘slow’, is the fact that the timer isn’t always at max rate, and it results in lower responsiveness.
    Those 1ms time slices help make things feel ‘snappy’.

    But this isn’t even salient, as high time resolution applications should make use of the HPET or RTC to achieve timeliness.

    You can do this in Linux with timerfd() and read(), if you’re after a ~fixed schedule.
    If you want simply accurate sleeps, the newer kernels already implement the usleep() function via HPET, falling back onto RTC, falling back onto soft clock.

    In Windows, you have to write HPET or RTC specific code to use either one for timing or sleeping, and the built in Sleep() function is always a soft clock.

    However, using the hardware interrupt timers, both Windows and Linux can give you resolution on the order of N microseconds – effectively eliminating any meaningful [scheduler] speed differences between either O.S..

    [IIRC. I wrote this stuff a while ago and reuse the same code. Haven't had to look at it in a while.]

    Video drivers and compositing are a separate issue, but that will affect the perception of responsiveness when given a chance.

    -scheherazade

  17. Eric says:

    Why, on God’s green earth, do they even have such a function available that will change the behavior of EVERY SINGLE THING? The function call that sets this absolutely should be hacked to a NOOP, and it should be completely forgotten about.

  18. Pingback: Windows Timer Resolution – who’s wasting energy | Short Cut Blog

  19. nipunbatra says:

    Reblogged this on .

  20. Pingback: Windows Timer Resolution: Megawatts Wasted (via: Random ASCII) « The Wiert Corner – irregular stream of stuff

  21. Title should be: “Moronic game developers: Lots of Energy Wasted” I mean if you wanted to have a title which has something to do with the content….

    • brucedawson says:

      Hmmm. I disagree. Some of our games increase the timer frequency so that we can clamp the frame rate to a maximum 30 fps without busy waiting. Thus, we use the increased timer resolution to *save* energy. Most game developers raise it and then run at as high a frame rate as they can, in which case the timer resolution is not relevant to energy consumption.

      Chrome, WPF, and Quartz, on the other hand, leave the timer frequency raised when they aren’t using it. Aren’t they the wasteful ones?

  22. jrv says:

    It’s easy to figure out what is being run by svchost.exe because the request stack includes the process id. In the former SysInternals now Microsoft tool Process Explorer (procexp.exe), find the process id, then right-click, properties. The “command line” textbox will show the startup. In my case it was “C:\Windows\system32\svchost.exe -k netsvcs”. As best I can tell, all I have to do to reduce my timer is not run on a network. I wish I’d known sooner that that was all it took.

    • brucedawson says:

      That technique only works reliably if you reconfigure svchost so that it puts one service in each process – by default it puts many in each process, and that is what makes attributing blame trickier. Note that you don’t need to use Process Explorer to see what services are running in each svchost instance – just use the Services tab in Task Manager.

      netsvcs does not normally raise the timer frequency (or else everybody would be running at high frequency) so there must be something else going on.

  23. Pingback: Switching to Firefox again | XRubio.com

  24. Interesting article. I only began researching the system timer resolution calls because I heard a lower one would fix my Crysis 3 problems. After writing a tiny program to maximise the resolution (5ms on my system), the occasional stuttering I was experiencing has gone and I’m seeing a significant fps increase. I wasn’t expecting it to be so effective.

    I figured I shouldn’t force it to 5ms all the time because it’s clearly making the CPU work harder… so I’m grateful having stumbled across all this info. Thanks. :)

      • brucedawson says:

        What was the timer resolution before? And what did the framerate change from/to?

        If Crysis needs a higher timer resolution then they should be raising it. That’s fine. It’s only tragic when programs leave the resolution raised for long periods of time when they don’t need it, especially on battery power.

        I’m talking to you Chrome.

        • Sorry, I only just noticed your reply.

          Crysis 3 doesn’t touch the timer resolution. It stays at 15.625ms. In areas with lots of NPCs, particularly outdoors, the framerate would frequently dip below 30fps and there was occasional stuttering. After setting it to 0.500ms, the stuttering vanished and the frame rate never dropped below 45fps. I played the game through a second time and was amazed at the difference.

  25. Pingback: Bugs I Got Other Companies to Fix in 2013 | Random ASCII

  26. Tebjan Halm says:

    Just for your interest: I’ve made a little tool which gets and sets the Windows system timer; the code and download are here: https://github.com/tebjan/TimerTool

    • brucedawson says:

      Cool! I assume that it can’t lower the frequency if another process has raised it, correct?

      You might want to tweak the display. It looks like it says “Current: 1 min” (i.e., current is one minute). Separate lines for Current/Min/Max would avoid that.

      • Tebjan Halm says:

        Right, you can only decrease the timer values. I’ve googled a bit, but there seems to be no way to override the timer resolution if another process has set it.
        But I have improved the GUI as you suggested, and it can now also set 0.5 ms as the timer resolution.

  27. j0hnwayn3 says:

    Do you know why Windows 8 would report 5003 as maxres when you try to set it to 5000? Or 10007 when you try to set it to 10000? This happens on multiple laptops I have (Asus/HP, 4th gen i7/i5). However, in Windows 7 it sets fine to 5000/10000 respectively. In Windows 8 Sleep(1) is timed at ~1.49998 ms and in Windows 7 Sleep(1) is ~0.99997 ms… this boggles my mind! Have you seen the same issue when using NtSetTimerResolution?

    • brucedawson says:

      Sorry, I don’t know. I’ve never actually used NtSetTimerResolution. Maybe post a link to a project that shows the problem and see if anybody knows the answer?

  28. Clayton H says:

    Inspired by the flurry of “activity” (i.e. noise) on the Chrome bug thread today, I decided to have a look.

    It seems Spotify does the same thing. Even when paused. And I’m not entirely sure it needs it at all (16ms is a little long, but it doesn’t strike me as a completely unreasonable amount to buffer).

    • brucedawson says:

      Audio apps need to buffer dozens of ms to avoid dropouts because, 1ms timer or not, they may end up not being scheduled for a while. I emphatically agree that there is no reason for Spotify to be changing the timer precision. Playing background music needn’t harm your battery life. From a power draw perspective, the Zune software is the worst that I have measured.

      When Valve found that Steam was accidentally raising the timer frequency the bug got fixed quickly. The change in power draw was quite measurable so a quick fix was obviously correct.

  29. Pingback: Dataverlies door Energie Besparen | TD-er

  30. Pingback: Google Chrome will stop draining your laptop battery soon | Microsoft | Geek.com | What Happen Today?!?

  31. Pingback: Google Chrome stop draining your laptop battery TechGool

  32. Zlip says:

    Just to let you know, Chrome has now decided to fix this timer issue to improve power usage.

  33. Pingback: Browser Chrome leert Notebook-Akkus | virtualfiles.net

  34. cpu says:

    Yeah, we are trying to fix it. Chrome is a platform, not a plain app, so things on top of us can force our hand, most notably web pages. We have a few patches in flight, but after a few years at 1 ms who knows what has grown to depend on that. This will take some time to be fully sorted out.

    Regarding the Win8 tickless kernel, I have no empirical evidence of that. All my tests indicate it behaves the same, that is, timeBeginPeriod causes everybody in the system to wake up faster. If somebody has user-mode code that shows how Win8 improves anything, let me know on the Chromium bug.

    • brucedawson says:

      My understanding is that Windows 8 maintains the same behavior but does it in a more power efficient way. So, the cost of raising the timer frequency is apparently reduced on Windows 8. But I haven’t measured that myself.

      It is impossible to comment on the chromium bug because it has been locked — a pity.

      I hope that Chrome is able to fix this. I understand the concerns about being a platform. The problem would have been much easier to fix initially but now it is possible that web pages depend on it. But probably not, given the necessity of being portable to other browsers. So fix it.

      Steam is also a platform. When Steam realized they had accidentally raised the timer frequency they fixed it promptly. No problem.

  35. Pingback: Google will fix the battery-eating 'bug' in its Chrome browser - TechReact - We provide computers to Kids

  36. xplosneer says:

    I also have been drawn in from the Google links today, and here’s what I’ve found:
    I was hitting the 1ms resolution on every trace. Generally I found some background programs that always run but are not listed as startup programs, most egregiously the National Instruments software support suite that had timers which I’m sure are necessary for high-resolution measurement, but NOT WHEN THE PROGRAM ISN’T OPEN.

    Firefox was giving me a 1ms resolution timer as well, just from the start page, but I didn’t check on my extensions.

    iTunes was giving me 1ms. Last.FM was giving me 5ms, which is sad given that I only use it for scrobbling and never playing (and that was with iTunes turned off for that trace, so it should be completely idle…)

    Seems like this is a pretty egregious problem in a number of programs.

    • xplosneer says:

      Also, this was all running Windows 8.1, on a Sony S15 laptop in battery-saver mode on battery power.

    • xplosneer says:

      In addition, I also found audiodg.exe (the standard Windows audio engine process?) increasing the resolution as well…

      Once I finally forced closed all of these things and reran powercfg, the rate was finally back to the default.

      • brucedawson says:

        It seems very odd that audiodg.exe would increase the timer resolution. That program runs on all Windows computers and I have never seen it doing that. Did you verify that with powercfg? If so, what was the call stack? Maybe there is an audiodg add-in that does it.

  37. Pingback: Google will fix the battery-eating 'bug' in its Chrome browser | Hihid News
