Printing Floating Point Numbers
To complete the printf() function of PDCLib – which so far lacks support for %e, %f, and %g – I needed to solve the problem of converting the binary, in-memory representation of double and long double to decimal.
Using the same algorithm as for integers (divide by ten, take quotient, recurse with remainder) is not an option. Not only would repeated floating point divisions be horrendously slow: Multiplying / dividing a (base 2) floating point by 10 repeatedly would also accumulate rounding errors. You would have slow, wrong results.
What is in a floating point number
The lowest bit in an integer has the value 2⁰ (1). The next bit counts for 2¹ (2), the next for 2² (4), and so on.
The basic concept of a floating point number is that the first bit has the value 2⁰ (1), with the next having the value 2⁻¹ (0.5), then 2⁻² (0.25), and so on. With this fractional part (also called the mantissa, or significand) comes an exponent, which scales the fractional part. The final value of the floating point number is 𝑓×2ᵉ. Usually there is a dedicated sign bit to allow for negatives. (Yes, this means there is such a thing as a negative zero.)
IEEE 754 brought a standard for floating point formats and arithmetic, which most CPUs today adhere to. It defines a 32bit “single precision” format and a 64bit “double precision” format. These are usually what you get when you declare a float or double, respectively.
The C type long double is trickier. It could be the same 64bit format as double, the 128bit “quadruple precision” format defined by IEEE 754, or Intel's 80bit “x86 extended precision” format.
There are a couple of special cases that need to be kept in mind when working with floating point numbers.
Implicit Decimal
The first bit of a floating point number, the one valued 2⁰, is always set (but see Subnormals below). For this reason, most floating point formats don't bother actually storing it, and instead imply that it is set, with the first mantissa bit actually stored being the 2⁻¹ one.
Biased Exponent
Instead of using two's complement to allow for positive and negative exponents, IEEE 754 uses biased exponents: The exponent bits are interpreted as an unsigned integer, but to get the “real” exponent value, you need to subtract the bias value, which is FLT_MAX_EXP - 1, DBL_MAX_EXP - 1, or LDBL_MAX_EXP - 1, respectively.
Infinity
If a value becomes too big for the exponent to represent, it turns into infinity. This is represented by an all-bits-set exponent and an all-zero mantissa. It allows computation to continue, checking for overflow just once at the end instead of after each operation.
Not a Number
Some mathematical operations (like 0.0 / 0.0) are undefined, and make the value take on a special state called “Not a Number” (NaN). This is represented by an all-bits-set exponent and a non-zero mantissa. The bit pattern of the mantissa can hold additional information, but that is beyond the scope of this document.
Subnormals
When a floating point number gets too close to zero to be represented even with the smallest exponent, most platforms support subnormal numbers, i.e. numbers that are not normalized to have the implicit decimal bit set (see above). These are represented by a no-bits-set exponent. The exponent is treated the same as ( 1 - bias ), and there is no implicit decimal bit. That means the closer to zero the remaining mantissa gets, the fewer bits of precision it holds.
Unnormals
The 80bit Intel x86 Extended Precision format has an explicit decimal bit, which originally allowed for a number of additional combinations, like “pseudo subnormals” with a no-bits-set exponent but a set decimal bit, which were evaluated the same as subnormals. There were also unnormal numbers, with a non-zero exponent but a zero decimal bit. These were evaluated at an exponent one lower than subnormals, providing some additional precision for very small values.
While the explicit decimal bit still exists in the architecture, the corresponding special combinations were last used by the 80286. From the 80386 onward, they are neither generated nor accepted by the CPU. While PDCLib aims at maximum portability, I decided not to aim for 80286 compatibility in the generic implementation.
In PDCLib
The general idea for PDCLib is as follows:
- When passed to printf(), a floating point value is sent to either _PDCLIB_bigint_from_dbl() or _PDCLIB_bigint_from_ldbl() for “deconstruction” into its components: sign, exponent, and mantissa.
  - There is no need for _PDCLIB_bigint_from_flt(), since float gets promoted to double when passed through the variable argument list of printf().
- This deconstruction makes use of bit-twiddling macros defined in _PDCLIB_config.h as appropriate for the platform.
- The remaining conversion code works on the _PDCLIB_bigint_t holding the deconstructed floating point value, without needing to know its original composition.
- For now, I limited myself to base-2 floating points. Other formats (like base-8, or base-16) could probably be added as needed.
Dragon4
Now that we have learned a bit about what a floating point number looks like, we will focus on how to convert one to a decimal representation.
The seminal work in this area is a paper by Guy L. Steele Jr. and Jon L. White, How to Print Floating-Point Numbers Accurately. It describes an algorithm they named Dragon 4, which solved the problem. Years later, Robert G. Burger and R. Kent Dybvig improved its performance significantly with Printing Floating-Point Numbers Quickly and Accurately.
I am aware of later optimizations, like Printing Floating-Point Numbers Quickly and Accurately with Integers by Florian Loitsch, but since that requires a fallback mechanism for cases when the improved algorithm fails, I opted for the Burger & Dybvig variant.
Overview
Dragon 4 finds the shortest decimal number that uniquely identifies the binary number.
Given the limited precision of the mantissa, each representable binary value has exactly one predecessor, and one successor. If we have printed enough decimal digits to unambiguously identify this binary value, we can (and should!) stop printing further digits, as those no longer signify any usable precision.
Let's have a look at such a triple of consecutive numbers, for the sake of brevity in single precision format:
- 0x4040 0001 – 3.0000002384185791015625
- 0x4040 0002 – 3.000000476837158203125
- 0x4040 0003 – 3.0000007152557373046875
You will see that everything after the seventh fractional digit is, effectively, useless. The decimal number 3.0000004 uniquely identifies the binary value 0x4040 0002, and is all that should be printed, even if the user requested a precision of 20 digits.
The initial reflex might be, “but what if I need those additional digits?” The simple answer is: you really don't. You already have no way to distinguish whether the value you're looking at really was …047683… or, for example, …039871…, or …058812…. If you think you need those digits, you should not be using a float.