Doubles are not floats, so don’t compare them

I’ve seen a few online discussions linking to my Comparing Floating Point Numbers page for misguided reasons and I wanted to discuss those reasons to help people understand why throwing epsilons at the problem without understanding the situation is a Really Bad Idea™. In some cases, comparing floating-point numbers for exact equality is actually correct.

If you do a series of operations with floating-point numbers then, since they have finite precision, it is normal and expected that some error will creep in. If you do the same calculation in a slightly different way then it is normal and expected that you might get slightly different results. In that case a thoughtful comparison of the two results with a carefully chosen relative and/or absolute epsilon value is entirely appropriate.
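A minimal sketch of what such a comparison might look like is below – the function name and the default thresholds are placeholders of mine, and the right values depend entirely on the magnitudes and expected error of your particular calculation (see the comparing-floating-point-numbers article for a fuller treatment):

#include <algorithm>
#include <cmath>

// Sketch of a combined absolute/relative comparison. The default
// thresholds are illustrative placeholders only.
bool nearlyEqual(double a, double b,
                 double absEpsilon = 1e-12, double relEpsilon = 1e-9)
{
    const double diff = std::fabs(a - b);
    if (diff <= absEpsilon)      // absolute test, for values near zero
        return true;
    const double larger = std::max(std::fabs(a), std::fabs(b));
    return diff <= larger * relEpsilon;  // relative test otherwise
}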

However if you start adding epsilons carelessly – if you allow for error where there should be none – then you get a chaotic explosion of uncertainty where you can’t tell truth from fiction.

Floating-point numbers aren’t cursed

Sometimes people think that floating-point numbers are magically error prone. There seems to be a belief that if you redo the exact same calculation with the exact same inputs then you might get a different answer. Now this can happen if you change compilers (such as when changing CPU architectures), change compiler settings (such as optimization levels), and it can happen if you use instructions (like fsin/fcos/ftan) whose value is not precisely defined by the IEEE standard (and if you run your code on a different CPU that implements them differently). But if you stick to the basic five operations (plus, minus, divide, multiply, square root) and you haven’t recompiled your code then you should absolutely expect the same results.

Update: this guarantee is mostly straightforward (if you haven’t recompiled then you’ll get the same results) but nailing it down precisely is tricky. If you change CPUs or compilers or compiler options then, as shown in this chart from this post, you can get different results from the same inputs, even on very simple code. And, it turns out that you can get different results from the same machine code – if you reconfigure your FPU. Most FPUs have a (per-thread) setting to control the rounding mode, and x87 FPUs have a setting to control register precision. If you change those then the results will change. So the guarantee is really that the same machine code will produce the same results, as long as you don’t do something wacky.
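As a quick sketch of that last point, the snippet below shows one way the rounding mode can change the result of the same source expression. The volatile qualifiers are there to discourage compile-time folding of the divisions; strictly conforming code would also want #pragma STDC FENV_ACCESS ON, which not every compiler honors:

#include <cfenv>
#include <cstdio>

int main()
{
    volatile double a = 1.0, b = 3.0;

    std::fesetround(FE_DOWNWARD);
    volatile double down = a / b;   // 1/3 rounded toward minus infinity

    std::fesetround(FE_UPWARD);
    volatile double up = a / b;     // 1/3 rounded toward plus infinity

    std::fesetround(FE_TONEAREST);  // restore the default rounding mode
    printf("identical results: %s\n", down == up ? "yes" : "no");  // prints "no"
}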

Understanding these guarantees is important. I talked to somebody who had spent weeks trying to understand how to deal with floating-point instability in his code – different results on different machines from the same machine code – when it was clear to me that the results should have been identical. Once he realized that floating-point instability was not the problem he quickly found that his code had some race conditions, and those had been the problem all along. Floating-point as the scapegoat delayed his finding of the real bug by almost a month.

Constants compared to themselves

The other example I’ve seen where people are too quick to pull out an epsilon value is comparing a constant to itself. Here is a typical example of the code that triggers this:

float x = 1.1;
if (x != 1.1)
    printf("OMG! Floats suck!\n");

On a fairly regular basis somebody will write code like this and then be shocked that the message is printed. Then somebody inevitably points them to my article and tells them to use an epsilon, and whenever that happens another angel loses their wings.

If floating-point math is incapable of getting correct results when there are no calculations (except for a conversion) involved then it is completely broken. And yet, other developers manage to get excellent results from it. The more logical conclusion – rather than “OMG! Floats suck!” – is that the code above is flawed in some way.

And indeed it is.

Fatally flawed floats

The problem is that there are two main floating-point types in most C/C++ implementations. These are float (32 bits) and double (64 bits). Floating-point constants in C/C++ are double precision, so the code above is equivalent to:

if (float(1.1) != (double)1.1)
    printf("OMG! Floats suck!\n");

In other words, if 1.1 is not the same when stored as a float as when stored as a double then the message will be printed.

Given that there are twice as many bits in a double as there are in a float it should be obvious that there are many doubles that cannot be represented in a float. In fact, if you take a randomly selected double then the odds of it being perfectly representable in a float are about one part in 4 billion. Which is poor odds.
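If you want to check a particular value, one simple way (just a helper I'm sketching here, not anything from a library) is to round-trip the double through a float and see whether it survives unchanged:

// Returns true if 'd' is exactly representable as a float.
// (NaN would need special handling, since NaN never compares equal.)
bool fitsInFloat(double d)
{
    return static_cast<double>(static_cast<float>(d)) == d;
}

// fitsInFloat(1.5) is true; fitsInFloat(1.1) is false.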

It looks so simple…

The confusion, presumably, comes from the fact that 1.1 looks like such a simple number, and therefore the naive expectation is that it can trivially be stored in a float. That expectation is incorrect – it is impossible to perfectly represent 1.1 in a binary float. To see why, let’s look at what happens when we try converting 1.1 to binary. But first, let’s practice base conversion.

To convert the fractional part of a number to a particular base you just repeatedly multiply the number by the base. After each step the integer portion is the next digit. You then discard the integer portion and continue. Let’s try this by converting 1/7 to base 10:

  1. 1/7 (initial value)
  2. 10/7 = 1+3/7 (multiply by ten, first digit is one)
  3. 30/7 = 4+2/7 (discard integer part, multiply by ten, next digit is four)
  4. 20/7 = 2+6/7 (discard integer part, multiply by ten, next digit is two)
  5. 60/7 = 8+4/7 (discard integer part, multiply by ten, next digit is eight)
  6. 40/7 = 5+5/7 (discard integer part, multiply by ten, next digit is five)
  7. 50/7 = 7+1/7 (discard integer part, multiply by ten, next digit is seven)
  8. 10/7 = 1+3/7 (discard integer part, multiply by ten, next digit is one)

The answer is 0.142857142857… We can see that steps 2-7 repeat endlessly, so we will never get to a remainder of zero. Instead those same six digits will repeat forever.
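Here is the same procedure as a small code sketch, using integer arithmetic for the numerator so the repeating pattern stays exact (the function name and parameters are just illustrative):

#include <cstdio>

// Print the first 'count' digits of the fraction numerator/denominator
// in the given base, by repeatedly multiplying by the base and peeling
// off the integer part as the next digit.
void printFractionDigits(int numerator, int denominator, int base, int count)
{
    printf("0.");
    for (int i = 0; i < count; ++i)
    {
        numerator *= base;                      // multiply by the base
        printf("%d", numerator / denominator);  // integer part is the next digit
        numerator %= denominator;               // discard it and continue
    }
    printf("\n");
}

int main()
{
    printFractionDigits(1, 7, 10, 12);  // prints 0.142857142857
    printFractionDigits(1, 10, 2, 20);  // 1/10 in base two: 0.00011001100110011001
}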

1.1 is not a binary float

Let’s try the same thing with converting 1.1 to base two. The leading ‘1’ converts straight across, and to generate subsequent binary digits we just repeatedly multiply by two. After each multiply we take the integer portion of the result as our next digit, and discard the integer portion before the next multiply:

  1. 0.1 (initial value)
  2. 0.2 (multiply by two, first digit is zero)
  3. 0.4 (multiply by two, next digit is zero)
  4. 0.8 (multiply by two, next digit is zero)
  5. 1.6 (multiply by two, next digit is one)
  6. 1.2 (discard integer part then multiply by two, next digit is one)
  7. 0.4 (discard integer part, then multiply by two, next digit is zero)
  8. 0.8 (multiply by two, next digit is zero)

Notice that steps 3-6 repeat endlessly, so the binary representation of 1.1 repeats endlessly. The net result is that the binary representations of 1.1 in float (24-bit mantissa) and double (53-bit mantissa) precision are:

float(1.1) = %1.00011001100110011001101

double(1.1) = %1.0001100110011001100110011001100110011001100110011010

The slight breaking of the pattern at the end of both numbers is because the conversion is done with round-to-nearest rather than truncation, in order to minimize the error in the approximation, and in both cases it is rounded up.
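If you want to verify those bit patterns, one way is to copy the float’s bytes into an integer and print the 23-bit mantissa field (this sketch assumes the usual IEEE-754 binary32 layout):

#include <cstdio>
#include <cstdint>
#include <cstring>

int main()
{
    const float f = 1.1f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits));   // reinterpret the bytes portably

    printf("raw bits: 0x%08X\nmantissa: ", (unsigned)bits);
    for (int i = 22; i >= 0; --i)      // the low 23 bits are the mantissa
        printf("%d", (int)((bits >> i) & 1));
    printf("\n");                      // prints 00011001100110011001101
}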

All binary numbers can be exactly represented in decimal, but not all decimal numbers can be exactly represented in binary. Just as 1/7 cannot be represented as a decimal number, 1/10 cannot be represented as a binary number. For more details on conversions and precision see this post.

And therefore…

float(1.1) = 1.10000002384185791015625

double(1.1) = 1.100000000000000088817841970012523233890533447265625

and these values are not equal. Don’t expect the result of double-precision calculations to equal the result of float-precision calculations, even when the only ‘calculation’ is converting from decimal to binary floating point. Two reasonable ways to fix the initial code would be:

float x = 1.1f; // Float constant
if (x != 1.1f) // Float constant
    printf("OMG! Floats suck!\n");

or:

double x = 1.1;
if (x != 1.1)
    printf("OMG! Floats suck!\n");

Both of these pieces of code will behave intuitively and will not print the message. No epsilon required. And no angels harmed.
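If you want to see those long decimal expansions for yourself, asking printf for enough digits will show them on most modern C runtimes (some older runtimes stop producing exact digits after about 17 significant figures):

#include <cstdio>

int main()
{
    printf("float(1.1)  = %.30f\n", 1.1f);  // the float is promoted to double here
    printf("double(1.1) = %.55f\n", 1.1);
    printf("1.1f == 1.1 is %s\n", 1.1f == 1.1 ? "true" : "false");  // false
}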


9 Responses to Doubles are not floats, so don’t compare them

  1. This is a very clear and straight explanation of how floats and doubles behave, and the extra about base conversion was very helpful to remember how it worked! As a programmer who used to avoid floats and doubles whenever possible (because I didn’t understand the “magic” behind them) and is getting into C/C++ game programming, this post and all the others about the subject taught me I was doing it wrong. Thank you!

  2. Excellent discussion. Even knowing some of this stuff, it’s easy to forget day-to-day. The examples are extremely valuable when one’s thinking isn’t as clear as it should be.

  3. Joseph Lunderville says:

    As a case in point, all of Relic’s games have used floating-point extensively in gameplay code, but are still completely deterministic across CPUs from the Pentium II through the i7 — we know this because you can play networked games and replays on different generations of machine without desyncing.

    There are caveats: the way some SSE instructions are specified is a bit of a minefield, and we’re trying to figure out now how we can make things work reliably across architectures, but the main point is valid. Floating point is subtle, but it’s not inherently nondeterministic.

  4. Pascal Cuoq says:

    Cogently written. The worst counter-example to the superstition “testing floats with equality is always bad” may be this one, where the tested value is an infinity: http://stackoverflow.com/questions/11421756/weverything-yielding-comparing-floating-point-with-or-is-unsafe

    A couple more examples are in this post of mine (it can get tricky, but that does not undermine the message that floating-point is deterministic): http://blog.frama-c.com/index.php?post/2011/11/08/Floating-point-quiz

    • brucedawson says:

      Nice — I appreciate the comment and the links.

      One quibble: in the Floating-point-quiz you say “For a decimal number to be representable as a (base 2) floating-point number, its decimal expansion has to end in 5” but that is only true if there is a fractional part. All of the integers from 0 to 2^24 (about 16 million) are representable in a float, and many more are representable in a double, and most of them don’t end in 5.

  5. Pingback: Float Precision Revisited: Nine Digit Float Portability | Random ASCII

  6. Pingback: Comparing Floating Point Numbers, 2012 Edition | Random ASCII

  7. An awesome reminder of how to do base conversion and why we had to learn it in the first place!

    Thanks,
    Donato
