You've probably seen this old chestnut by now.

This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2009/05/why-do-computers-suck-at-math.html

$ python
Python 2.5.2 (r252:60911, Sep 20 2008, 22:32:52)
[GCC 4.2.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 399999999999999 - 399999999999998
1

Python WIN!!!
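The integer subtraction above is exact because Python integers are arbitrary-precision, but the same ease does not extend to floats. A quick sketch of the contrast (not part of the original session):

```python
# Integer arithmetic in Python is arbitrary-precision, so this is exact.
print(399999999999999 - 399999999999998)   # 1

# Floats are IEEE-754 doubles with roughly 15-16 significant decimal
# digits, and many decimal values cannot be stored exactly:
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False
```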

There are languages that handle arithmetic well.

I know of two examples (I am sure there are more):

Python: has built-in infinite-precision integer numbers.

Lisp: has built-in infinite-precision integer and rational numbers, and has arithmetic operations defined the same way as in math (e.g., (mod -1 10) = 9, not -1).

So it is clearly a solved problem; I am not sure why not all language designers use infinite-precision integers and rationals.
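Python actually checks both of the boxes above; a minimal sketch using only the standard library (the Fraction class lives in the stdlib fractions module, and Python's % operator already follows the mathematical convention):

```python
from fractions import Fraction

# Arbitrary-precision integers: no overflow, ever.
print(2 ** 100)                            # 1267650600228229401496703205376

# Exact rationals, as in Lisp: 1/10 + 2/10 is exactly 3/10.
print(Fraction(1, 10) + Fraction(2, 10))   # 3/10

# Mathematical modulo: -1 mod 10 is 9, matching Lisp's (mod -1 10).
print(-1 % 10)                             # 9
```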

In mathematics, the repeating decimal 0.999… denotes a real number equal to one, and this equality has long been accepted by professional mathematicians and taught in textbooks.

Not the textbooks I read (or did when I was in university - it’s not a favorite pastime of mine…). The correct mathematical explanation is that 0.99999… (zero point nine recurring) approaches 1. Any pure mathematician (probably not many read Coding Horror…) would surely agree.

Good post, though!

PHP also calculates the example perfectly.

In case it hasn’t bit you yet, you might be surprised that a nice large, simple decimal like 0.1 is an infinitely repeating value in binary: .0001100110011…

You have to be very careful doing floating point math[1]. Especially when it comes to testing for equality[2].

[2] http://stackoverflow.com/search?q=floating+point+equality
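A hedged sketch of the usual workaround those Stack Overflow answers converge on: compare floats within a tolerance rather than with == (math.isclose is available from Python 3.5 onward):

```python
import math

a = 0.1 + 0.2
b = 0.3

print(a == b)              # False: exact equality fails for floats
print(math.isclose(a, b))  # True: relative-tolerance comparison
print(abs(a - b) < 1e-9)   # True: the classic hand-rolled epsilon test
```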

Here we go again (thanks, Jeff, for re-opening a can of worms).

@Darren: Nope, sorry, 0.999… EQUALS EXACTLY 1.0 no ifs ands or buts. Read the Wikipedia article. It includes some very straightforward proofs.
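The shortest of those proofs takes only a few lines of standard algebra (sketched here, not quoted from the article):

```latex
\begin{align*}
x   &= 0.999\ldots \\
10x &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x  &= 9 \quad\Rightarrow\quad x = 1
\end{align*}
```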

Not to mention that typical floating-point numbers can’t represent some decimal values accurately. Some programming languages, including C#, have a decimal type that’s floating-point but works in base 10 instead of base 2.

The article is a bit misleading (to me, at least). I understand the sentence "A standard floating point number has roughly 16 decimal places of precision" as "If you have a real number with about 16 decimal places, floating point can represent it accurately", which isn’t true.

A trivial example of this is the decimal number 0.2, which can never be accurately represented in IEEE-754, because in binary it is 0.0011001100110011…, with the pattern 0011 repeating forever.
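You can inspect exactly what 0.2 becomes when stored as a double; a sketch using the stdlib (Fraction(0.2) recovers the exact rational value of the nearest representable double):

```python
from fractions import Fraction

# The hex form exposes the binary mantissa: 1.999...a (hex) times 2^-3.
print((0.2).hex())                     # 0x1.999999999999ap-3

# The exact rational value of the double nearest to 0.2:
print(Fraction(0.2))                   # 3602879701896397/18014398509481984

# ...which is a hair more than the true 1/5:
print(Fraction(0.2) - Fraction(1, 5))  # 1/90071992547409920
```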

Interestingly, the launch failure of the Ariane 5 rocket, which exploded 37 seconds after liftoff on June 4, 1996, occurred because of a software error that resulted from converting a 64-bit floating point number to a 16-bit integer.

I’m tired of reading such nonsense. That’s a total misunderstanding of the reasons that led to the loss of the rocket.

Reusing Ariane-4 software in the larger Ariane-5 (with different flight constraints) can hardly be called a rounding error.

Putting the Excel 850*77.1 bug in there is a little misleading, as Excel does in fact calculate the result correctly. The display logic was broken at the assembler level, which isn’t exactly Excel doing the calculation wrong, but rather a programmer’s oversight.

It’s not an artifact of how computers handle numbers in contrast to the other examples you give.

Oh no. Yet another place infected with the 0.(9) vs 1.0 debate!

@Simon: Python and Lisp implementations handle short numbers (fixnums, as they are called in Lisp) at the same speed as machine-word arithmetic (e.g., as in C). Long arithmetic is only used when numbers no longer fit into registers. So you can see that for short numbers it makes no difference, while for long numbers a correct but slightly slower operation is preferred to a fast but incorrect one.

@Dennis:

"simple decimal like 0.1 is an infinitely repeating value in binary"

Nobody forces you to represent 0.1 as a binary fraction. 0.1 is a rational number and should usually be treated as such (e.g., when precision is required).
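A sketch of what "treat 0.1 as a rational" buys you in Python (fractions is in the stdlib, and Fraction accepts decimal strings directly):

```python
from fractions import Fraction

# Ten binary floats 0.1 do not sum to 1.0...
print(sum([0.1] * 10) == 1.0)            # False

# ...but ten exact rationals 1/10 do.
print(sum([Fraction('0.1')] * 10) == 1)  # True
```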

@Darren

I am a mathematician, and there is no difference between .9(bar) and 1. Just because we can’t count to infinity doesn’t mean that it doesn’t exist. At infinity, it is equal to 1. As Dennis said, see the Wikipedia page on it for proofs.

http://en.wikipedia.org/wiki/0.999…

@Simon Buchan: Python, Lisp and Haskell get it partially because they are already pretty slow

"Lisp is slow" is a myth. Most Lisp compilers do *not* use bytecode. SBCL compiles to fast machine code that can compete with C code, including in numerical computations (and sometimes outperforms it, thanks to its high-level facilities). SBCL can often prove that arithmetic will not overflow a register and performs various optimizations accordingly (including the most interesting cases involving bit-twiddling).

Most languages don’t support bignum because of the code bloat inherent. C++ will likely compile these statements like:

That’s a problem with the C++ compiler and with its static typing (i.e., typing variables, not values). A Lisp compiler will compile it into completely different code because it uses tagged values (and in most cases the runtime dispatch is *very* fast).

There are usually three steps in creating software: make it work, make it right, make it fast. Using machine-word arithmetic screws up the second step.

Computers are fine with math, it’s the programmers that suck.

Raymond Chen wrote about how Calculator got an infinite-precision engine: http://blogs.msdn.com/oldnewthing/archive/2004/05/25/141253.aspx

(In my opinion, the biggest waste of development resources you could think of, but oh well.)

In discussions like these, people seem to forget two important things:

First:

Computers are only high-speed idiots.

Second:

People normally calculate in decimal (base 10), whereas computers calculate in binary floating point (base 2). Although both use some form of dot notation, using floating point in programs will always give you errors in the real world, for instance with money.

"using floating point in programs will always give you errors in the real world, for instance with money"

The conclusion: do not use floating point for such values; use precise rational arithmetic instead.
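One common way to follow that advice in Python is the stdlib decimal module (base-10 arithmetic, like the C# decimal type mentioned earlier); fractions.Fraction works just as well. A sketch:

```python
from decimal import Decimal
from fractions import Fraction

# Binary floats drift on money amounts:
print(0.10 + 0.20)                          # 0.30000000000000004

# decimal.Decimal keeps base-10 values exact when built from strings:
print(Decimal('0.10') + Decimal('0.20'))    # 0.30

# Exact rationals work as well:
print(Fraction('0.10') + Fraction('0.20'))  # 3/10
```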