Windows has long had a reputation for slow file operations and slow process creation. Have you ever wanted to make these operations even slower? This week’s blog post covers a technique you can use to make all file operations on Windows run at one tenth their normal speed (or slower), in a way that will be untraceable for most users!
And, of course, this post will also cover how to detect and avoid this problem.
Flaky failures are the worst. In this particular investigation, which spanned twenty months, we suspected hardware failure, compiler bugs, linker bugs, and other possibilities. Jumping too quickly to blaming hardware or build tools is a classic mistake, but in this case the mistake was that we weren’t thinking big enough. Yes, there was a linker bug, but we were also lucky enough to have hit a Windows kernel bug which is triggered by linkers!
Zombies probably won’t consume 32 GB of your memory like they did to me, but zombie processes do exist, and I can help you find them and make sure that developers fix them. Tool source link is at the bottom.
Is it just me, or do Windows machines that have been up for a while seem to lose memory? After a few weeks of use (or a long weekend of building Chrome over 300 times) I kept noticing that Task Manager showed me running low on memory, but it didn’t show the memory being used by anything. In the example below Task Manager shows 49.8 GB of RAM in use, plus 4.4 GB of compressed memory, and yet only 5.8 GB of paged/non-paged pool, few processes running, and no process using anywhere near enough to explain where the memory had gone:
My machine has 96 GB of RAM – lucky me – and when I don’t have any programs running I think it’s reasonable to hope that I’d have at least half of it available.
Describing performance improvements exists at the intersection of mathematics and linguistics. It is quite common to use incorrect math to describe performance improvements, and it is possible to use incorrect, misleading, or just sub-optimal rhetoric to describe your math.
Consider this hypothetical press release:
AirTrain Inc. is proud to announce the new AirTrain-8000. This revolutionary new plane can fly from London to Seattle at an average speed of 7,700 km/h – a huge improvement over the 770 km/h of other jets. This drops the travel time from ten hours to just one, making the AirTrain-8000 90% faster than our competitors.
A press release like this would never be released. The new plane is ten times faster than previous planes (7,700 km/h divided by 770 km/h), and no marketing team would allow this improvement to be summarized as “90% faster”, which means “almost twice as fast.” And yet, when talking about computers – where ten-times speedups happen fairly often – this mistake is made quite frequently.
This abuse of percentages has made them meaningless for describing optimizations – we need to stop using them. The AirTrain-8000 is ten times as fast, full stop.
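The underlying arithmetic is simple enough to put in code. This sketch (the function names are mine, purely for illustration) shows why “percent reduction in time” and “times as fast” diverge so badly for large speedups:

```python
def times_as_fast(old_time, new_time):
    # A 10-hour flight reduced to 1 hour is 10 times as fast.
    return old_time / new_time

def percent_time_reduction(old_time, new_time):
    # The same change is a 90% reduction in travel time -- easy to
    # misread as "90% faster", which would actually mean less than
    # twice as fast.
    return 100.0 * (old_time - new_time) / old_time

print(times_as_fast(10, 1))           # 10.0
print(percent_time_reduction(10, 1))  # 90.0
```

The two numbers describe the same improvement, but only the first one scales sensibly: a 100x speedup is “99% less time”, which sounds barely better than the 90% figure.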
Some years ago I saw a set of resumes created by not-yet-graduated university students. They were all three to four pages long. This is too long, and I said to my daughter “Even Jesus Christ’s resume fits on one page”. His resume was then revealed to us during an epic road trip and I was inspired to share it with the world:
The recent reveal of Meltdown and Spectre reminded me of the time I found a related design bug in the Xbox 360 CPU – a newly added instruction whose mere existence was dangerous.
Back in 2005 I was the Xbox 360 CPU guy. I lived and breathed that chip. I still have a 30-cm CPU wafer on my wall, and a four-foot poster of the CPU’s layout. I spent so much time understanding how that CPU’s pipelines worked that when I was asked to investigate some impossible crashes I was able to intuit how a design bug must be their cause. But first, some background…
Part of my job always seems to include crash analysis. A program crashes on a customer’s machine, a minidump is uploaded to the cloud, and it might be my desk that it appears on when Monday morning rolls around. The expectation is that I can make sense of it so that we can ship more reliable software.
The Windows ecosystem makes this as easy as possible because the debuggers will automatically download the binaries and the debug symbols, and if source indexing has been applied then they will also automatically download the correct source files as needed.
Chrome has a public symbol server and uses source indexing, so debugging Chrome is accessible to any developer.
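As a concrete illustration, pointing a Windows debugger at a symbol server is a one-line environment setting. The Microsoft server URL below is the documented public one; treat the cache directory, and any additional server you append, as placeholders to adapt:

```shell
:: Tell WinDbg/Visual Studio where to find symbols, caching them
:: locally in C:\symbols (any writable directory works).
setx _NT_SYMBOL_PATH "SRV*C:\symbols*https://msdl.microsoft.com/download/symbols"
```

With source indexing applied to the PDBs, the debugger can then fetch the exact source revision for each frame as well, which is what makes cloud-uploaded minidumps practical to analyze.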