Last year I pointed out that float variables can be converted to text and then back to the same binary value using printf("%1.8e"). I also supplied a test program that used C++ 11 threading to quickly prove this claim on VC++ 2012.
However what was left untested was whether a float that is converted to text using one compiler would faithfully be restored using a different compiler on a different platform. Today that question is (mostly) tested.
The summary of the results is that across a test of all two billion positive floats VC++ gets the last digit wrong four times, has legitimate disagreements with g++ 6,694,304 times, but the discrepancies probably don’t matter.
This article is part of a series of floating-point articles published mostly in 2012. The complete list is:
- 1: Tricks With the Floating-Point Format – an overview of the float format
- 2: Stupid Float Tricks – incrementing the integer representation of floats
- 3: Don’t Store That in a Float – a cautionary tale about time
- 3b: They sure look equal… – special bonus post (not on altdevblogaday)
- 4: Comparing Floating Point Numbers, 2012 Edition – tricky but important
- 5: Float Precision—From Zero to 100+ Digits – what does precision mean, really?
- 5b: C++ 11 std::async for Fast Float Format Finding – special bonus post (not on altdevblogaday) on fast scanning of all floats
- 6: Intermediate Precision – its effect on performance and results
- 7.0000001: Floating-Point Complexities – a lightning tour of all that is weird about floating point
- 8: Exceptional Floating Point – using floating-point exceptions to find bugs
- 9: That’s Not Normal–the Performance of Odd Floats
- 10: Floating-Point Poetry: haiku for programmers
- 11: Doubles are not floats, so don’t compare them: explaining the problems with 0.1f == 0.1
- 12: Game Developer Magazine Floating Point: the companion article
In the original float-precision article I claimed that:
- A 32-bit float can uniquely identify all six-digit decimal numbers within its normalized range
- A 32-bit float can be uniquely identified by printing it with nine decimal digits of mantissa
- Printing the exact value of a 32-bit float can take up to 112 decimal digits of mantissa
For the purposes of this article the crucial distinction to understand is between the number of digits required to “uniquely identify” a particular float, and the number required to “exactly print” its value. The value 0.1f is a simple example of the distinction. 0.1f uniquely identifies a particular float (the float whose value is closest), but the value of this float isn’t 0.1 – it is actually 0.100000001490116119384765625. Close, but different.
Most developers rarely need to print the exact value of a float but it is useful to be able to print a float and be confident that you can retrieve the identical binary float value from the text.
ASCII non-equivalence: rounding rules
My first test was to write code to print all ~2 billion positive floats using VC++ 2010 (x86) and using g++ 4.6.3 on x86 Ubuntu. I had hoped to find that the ASCII representations were identical, but I was disappointed. Exactly 6,694,308 of the positive floats returned different decimal representations when printed with %1.8e with VC++ compared to g++. That’s roughly 0.3% of the total.
Here’s an example of one of the differences:
- +6.10351563e-005: VC++
- +6.10351562e-05: g++
- +6.103515625e-005: full precision, using the code from this article
There’s one cosmetic difference since VC++ prints the exponents as three-digit numbers and g++ uses two digits. That’s easy enough to ignore.
The next difference is that VC++ and g++ disagree about the final (ninth) digit. Looking at the full precision value we can see that the next (tenth) digit was a 5 – halfway between – and VC++ rounded up and g++ rounded down. Unfortunately it appears that there is no standard to mandate behavior in this situation. g++ appears to use the round-to-nearest-even rule for ties, and VC++ rounds away from zero for ties. I think that round-to-nearest-even is more in keeping with the spirit of the IEEE float standard, but that’s just my personal preference, so I can’t declare either one of them to be right or wrong.
ASCII non-equivalence: double rounding
Here’s another example:
- +4.30373587e-015: VC++
- +4.30373586e-15: g++
- +4.30373586499999995214071901727947988547384738922119140625e-015: full precision
This one exhibits a different issue. By looking at the full precision representation of the float we can see that g++ is definitively correct in its decision to round down. The tenth digit is a four and you don’t have to look any farther to realize that rounding down is correct – VC++ is just plain wrong.
This appears to be a case of double rounding. VC++ prints a maximum of 17 digits of mantissa – if you ask for more you always get zeroes – and it looks like VC++ handles variable precision printing of floats by always printing to 17 digits and then rounding that result (or appending zeroes). The initial rounding to 17 digits rounds up to 4.3037358650000000, and when that is rounded to nine digits it is rounded up again. In a correctly rounded world the result should never be off by more than 0.5 in the last place, and VC++ fails this test, by a tiny margin.
I did some analysis of the discrepancies and I found the following:
- In 6,694,304 cases the actual result is exactly half between what VC++ and g++ print and the difference is just a printing policy difference
- In the remaining four cases VC++ prints the wrong value due to double rounding
While checking the results I found that in all but three cases the discrepancy was that g++ had a two as the last digit and VC++ had a three. That makes sense because .25 and .75 would be common binary float endings that could lead to ambiguous rounding, and with .75 both compilers would agree to round up to .8. Most of the two versus three discrepancies were just policy disagreements, but one was a case of double rounding.
My analysis also showed that across all ~two billion positive floats VC++ only does double rounding four times – the other 6,694,304 discrepancies were just a policy difference about what rounding rules should be used. That means that in the vast majority of cases VC++ and g++ print results that are no more than 0.5 ULPs (nine-digit decimal) away from the actual float value, and in the remaining four cases VC++ is just about 0.50000001 ULPs away.
Here are the four positive floats that VC++ double rounds, printed to full precision:
Aside: the reason VC++ prints to 17 digits is because %e has to be able to print double values and these require a 17 digit mantissa in order to round-trip reliably. The printf code always receives a double and I guess the library writers decided that it was easiest to print to 17 digits and then adjust from there.
Luckily for software developers it is quite likely that none of this matters. The discrepancy between the g++ and VC++ results is always less than one part in 100,000,000, and the difference between the printed result and the actual float value is barely 0.5 parts in 100,000,000. Since the maximum precision of a float is one part in ~16,777,216 this means that the difference between the g++ and VC++ results is less than one sixth of the difference between adjacent floats. Therefore, as long as the conversions from text to binary (scanf) do not contain egregious errors then we should always be able to retrieve the original binary value.
In other words, the maximum printf error is normally 0.5 (nine-digit decimal) ULPs, and occasionally 0.50000001 ULPs, but either way still much less than the distance between adjacent floats (minimum 6.019 ULPs around 1e-28), so even the differently rounded results still uniquely identify the correct float.
To verify this I did a scanf of each platform’s output on the other platform, for all ~2 billion positive floats, and in all cases I got back my original value. If you scanf them back into a double then you will get different results – because the ninth digit is then more significant – but scanning back to a float works.
Floats and debuggers
As I mentioned in the original precision post, VS 2010’s watch window prints floats with eight digits of mantissa, leading to ambiguity when debugging. I filed a bug on that and VS 2012’s watch window prints floats with nine digits. gdb (on x86 Linux) prints floats with nine digits.
Knowing that you can count on preserving the value of a float when printed with %1.8e is important. Many game developers serialize floating-point data to text files, and they often do it incorrectly. One mistake is to use fewer than nine digits of mantissa, which means that they will occasionally lose information. Another mistake is to not trust %1.8e and print with "%08x", *(int*)&f instead, thus losing the readability of a text format. A few game developers even print floats both ways, which has all the fashion advantages of wearing both a belt and suspenders. I hope I have proven to everyone’s satisfaction that using %1.8e can be trusted.
For more information and different perspectives on this topic I recommend this article, which points out that printf("%.1f\n", 0.25); is enough to show gcc/VC++ differences. I find that the whole site is quite interesting.
Readers interested in deeper details might want to read How to Print Floating-Point Numbers Accurately. It’s worth mentioning that this article does not say that accurately printing floating-point numbers is hard – it’s actually quite easy and simple – but doing it both accurately and efficiently is quite subtle and tricky.
C++ programmers should take a look at Incorrect Round-Trip Conversions in Visual C++. Apparently iostreams in VC++ has a bug where it will not correctly convert some 17-digit strings to doubles. That’s pretty darned serious.