7.6. Don’t Assume Floating-Point Numbers Are Perfectly Accurate

Computers can only store the digits of the binary number system, which are 1 and 0. To represent the decimal numbers we’re familiar with, we need to translate a number like 3.14 into a series of binary ones and zeros. Computers do this according to the IEEE 754 standard, published by the Institute of Electrical and Electronics Engineers (IEEE, pronounced “eye-triple-ee”). For simplicity, these details are hidden from the programmer, allowing you to type numbers with decimal points and ignore the decimal-to-binary conversion process:

>>> 0.3
0.3

Although the details of specific cases are beyond the scope of this book, the IEEE 754 representation of a floating-point number won’t always exactly match the decimal number. One well-known example is 0.1:

>>> 0.1 + 0.1 + 0.1
0.30000000000000004
>>> 0.3 == (0.1 + 0.1 + 0.1)
False

This bizarre, slightly inaccurate sum is the result of rounding errors caused by how computers represent and process floating-point numbers. This isn’t a Python gotcha; the IEEE 754 standard is a hardware standard implemented directly in a CPU’s floating-point circuits. You’ll get the same results in C++, JavaScript, and every other language that runs on a CPU that uses IEEE 754 (which is effectively every CPU in the world).
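Because of this, avoid comparing float results with the == operator. One option from Python’s standard library (a supplementary sketch, not something used elsewhere in this section) is math.isclose(), which considers two floats equal if they fall within a small tolerance of each other:

```python
import math

total = 0.1 + 0.1 + 0.1
print(total == 0.3)              # False: the sum is actually 0.30000000000000004
print(math.isclose(total, 0.3))  # True: equal within a small relative tolerance
```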

The IEEE 754 standard, again for technical reasons beyond the scope of this book, also cannot represent all whole number values greater than 2**53. For example, 2**53 and 2**53 + 1, as float values, both round to 9007199254740992.0:

>>> float(2**53) == float(2**53) + 1
True
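Note that this limit applies only to the float type. Python’s int values have arbitrary precision, so the difference between these two numbers is preserved until you convert them to floats. A short sketch of the distinction:

```python
# As ints, 2**53 and 2**53 + 1 are distinct values:
print(2**53 + 1 == 2**53)                # False
# As floats, both round to 9007199254740992.0:
print(float(2**53 + 1) == float(2**53))  # True
```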

As long as you use the floating-point data type, there’s no workaround for these rounding errors. But don’t worry. Unless you’re writing software for a bank, a nuclear reactor, or a bank’s nuclear reactor, rounding errors are small enough that they’ll likely not be an important issue for your program. Often, you can resolve them by using integers with smaller denominations: for example, 133 cents instead of 1.33 dollars or 200 milliseconds instead of 0.2 seconds. This way, 10 + 10 + 10 adds up to 30 cents or milliseconds rather than 0.1 + 0.1 + 0.1 adding up to 0.30000000000000004 dollars or seconds.

But if you need exact precision, say for scientific or financial calculations, use Python’s built-in decimal module, which is documented at https://docs.python.org/3/library/decimal.html. Although they’re slower, Decimal objects are precise replacements for float values. For example, decimal.Decimal('0.1') creates an object that represents the exact number 0.1 without the imprecision that a 0.1 float value would have.
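The integer-denomination workaround described above can be sketched like this (the variable names are just for illustration):

```python
# Summing float dollars accumulates rounding error:
print(0.10 + 0.10 + 0.10)  # 0.30000000000000004
# Summing integer cents is exact:
total_cents = 10 + 10 + 10
print(total_cents)         # 30
# Convert to dollars only when formatting for display:
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $0.30
```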

Passing the float value 0.1 to decimal.Decimal() creates a Decimal object that has the same imprecision as a float value, which is why the resulting Decimal object isn’t exactly Decimal('0.1'). Instead, pass a string of the float value to decimal.Decimal(). To illustrate this point, enter the following into the interactive shell:

>>> import decimal
>>> d = decimal.Decimal(0.1)
>>> d
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> d = decimal.Decimal('0.1')
>>> d
Decimal('0.1')
>>> d + d + d
Decimal('0.3')

Integers don’t have rounding errors, so it’s always safe to combine them with Decimal objects in math expressions. Combining Decimal objects with float values, however, raises a TypeError. Enter the following into the interactive shell:

>>> 10 + d
Decimal('10.1')
>>> d * 3
Decimal('0.3')
>>> 1 - d
Decimal('0.9')
>>> d + 0.1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'decimal.Decimal' and 'float'
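If you already have a float value stored in a variable, a common pattern (a convention, not a dedicated decimal API) is to convert it through str() first, so the Decimal matches the number’s printed decimal form rather than its exact binary value:

```python
import decimal

x = 0.1  # a float, with the usual hidden binary imprecision
print(decimal.Decimal(x))       # 0.1000000000000000055511151231257827021181583404541015625
print(decimal.Decimal(str(x)))  # 0.1
```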

But Decimal objects don’t have unlimited precision; they simply have a predictable, well-defined level of precision. For example, consider the following operations:

>>> import decimal
>>> d = decimal.Decimal(1) / 3
>>> d
Decimal('0.3333333333333333333333333333')
>>> d * 3
Decimal('0.9999999999999999999999999999')
>>> (d * 3) == 1 # d is not exactly 1/3
False

The expression decimal.Decimal(1) / 3 evaluates to a value that isn’t exactly one-third. But by default, it’ll be precise to 28 significant digits. You can find out how many significant digits the decimal module uses by accessing the decimal.getcontext().prec attribute. (Technically, prec is an attribute of the Context object returned by getcontext() , but it’s convenient to put it on one line.) You can change this attribute so that all Decimal objects created afterward use this new level of precision. The following interactive shell example lowers the precision from the original 28 significant digits to 2:

>>> import decimal
>>> decimal.getcontext().prec
28
>>> decimal.getcontext().prec = 2
>>> decimal.Decimal(1) / 3
Decimal('0.33')
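Changing decimal.getcontext().prec affects all Decimal arithmetic performed afterward in the program. If you want a different precision only temporarily, the decimal module also provides decimal.localcontext(), which scopes the change to a with block:

```python
import decimal

with decimal.localcontext() as ctx:
    ctx.prec = 4  # applies only inside this with block
    d = decimal.Decimal(1) / 3
print(d)                       # 0.3333 (computed with 4 significant digits)
print(decimal.Decimal(1) / 3)  # 0.3333333333333333333333333333 (default 28 digits)
```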

The decimal module provides you with fine control over how numbers interact with each other. The decimal module is documented in full at https://docs.python.org/3/library/decimal.html.