Why Do Computers Suck at Math?

Since nobody mentioned Haskell so far: Haskell is another language that handles large numbers and fractions very well. It even has a rational number datatype that represents numbers as the quotient of two integers of arbitrary size.
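Python offers something comparable in its standard fractions module. A quick illustrative sketch (Python rather than Haskell, but the same idea of representing a number as a quotient of two arbitrary-size integers):

```python
from fractions import Fraction

# Each Fraction is stored as a numerator/denominator pair of
# arbitrary-precision integers, so no rounding ever occurs.
x = Fraction(10**30 + 1, 10**30)
print(x - 1)              # 1/1000000000000000000000000000000, exact

# Compare with binary floating point, where the tiny part is lost:
print((1 + 1e-30) - 1)    # 0.0
```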

@the old [o]rang:

A simple way (very simplified) is to say that 0.99999… is so close to 1.0 that I wish not to have to print all them darned nines, and for my purposes I will round it off to 1.0, so I don't have to spend all day writing out senseless nines. The precision of avoiding the incredibly small differences is not worth the effort, since each place is 1/10 the size of the previous.

At the risk of enabling a troll, let me correct you.

0.999… is not pretty close to one. It is one. Not by convention, not by habit, not by agreement, not by tradition, not by laziness. The problem here, I think, is that you are ignoring what that … means. It doesn't mean a whole bunch more, nor an arbitrarily large number of. It means an infinite number of. It means they go on forever. Not till you get tired of writing them, or holding down the 9 key. For. Ever.

What's more, your rambling about positive zero and negative zero, about how computers can only add, the rest is just tricks, and computers can't do anything but binary makes me think you're a representative from the Time Cube organization. Look up bignums and BCD and get back to us. (Or don't.)

In general, to the commenters who are saying that Jeff is speaking to the wrong crowd…

It's good that you had an excellent computer science education. But you seem to be failing to account for the common nature of these issues. Maybe Jeff is saying things that have already been said, but as long as these mistakes keep happening these things bear repeating. At the very top of the article, it's observed that Google has made this common, easily-corrected error (I say easily-corrected because with Guido van Rossum in their organization somewhere, you'd think they could make the search engine's math output at least as good as the default handler in Python). If Google's screwing up something this simple, maybe it's not as obvious to the average programmer as we think it is?

So perhaps the people who write common and widely-used software should improve their computer science education. And perhaps the people who comment on these articles should (http://drupal.org/node/29405) help by contributing to successful projects like Drupal.

Maybe computer science prowess and popular, successful programs are semi-independent variables.

And maybe in a world where they are semi-independent, the people who know what they are doing should get up off the backs of the people who try to educate the rest of us.

The only problem I have with 0.99999… = 1 is this: give me any finite number N of 9's after the decimal point, and I can give you an infinite number of numbers between that number and 1. Not necessarily on a computer, but in theory.

You know the old saying… garbage in, garbage out.

Don't blame the poor unknowing computer. It's just doing what it's told, and following the rules it's given to produce a result.

Trickiest computer math gotcha I stumbled upon in reality: modulo repetition. Given a float x, think there's no difference between x%1 and x%1%1 (% being the modulo operator)? Think again:

Python 2.6.2 (release26-maint, Apr 19 2009, 01:58:18)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> x = -1e-20
>>> x % 1
1.0
>>> x % 1 % 1
0.0

Jolly good. If you see somebody using floating point for currency representation, they surely don't know what they're doing. Beware bad textbooks.

Decimal numbers are coming back to computers. There is a modern IEEE standard for them (decimal floating point in IEEE 754-2008), and hardware support is said to be coming for this standard. Even the software versions are actually quite fast, but most importantly, they're correct. There are implementations for C and C++ as well.

Mike Cowlishaw made the programming language REXX, which actually uses decimal numbers, and he's done some important work on decimal numbers and the IEEE standard. He's also the man behind JSR-13 for BigDecimal in Java. Densely Packed Decimal is a variant of Chen-Ho encoded decimal numbers.

Some links on decimal arithmetic on computers for those who are interested:

http://domino.research.ibm.com/comm/research_people.nsf/pages/cowlishaw.index.html
http://en.wikipedia.org/wiki/Mike_Cowlishaw
http://www.intel.com/technology/itj/2007/v11i1/s2-decimal/1-sidebar.htm
http://en.wikipedia.org/wiki/Densely_Packed_Decimal
http://speleotrove.com/decimal/
http://speleotrove.com/decimal/DPDecimal.html

Simply put: There is not much excuse for not doing math correctly on computers. I can understand why floating point arithmetic is wanted by Fortran/HPC/Science programmers, who at least for some systems need as much speed as they can get their hands on, but for anything else, decimal arithmetic is the way to go.

Anyway, it's bliss to use a language which just does it correctly.
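For anyone wanting to try decimal arithmetic without special hardware: Python's standard decimal module is one such software implementation, and it makes the contrast with binary floats easy to see. A minimal sketch:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly:
print(0.1 + 0.2 == 0.3)     # False

# Decimal arithmetic stores base-10 digits, so it can:
a = Decimal('0.1') + Decimal('0.2')
print(a == Decimal('0.3'))  # True
print(a)                    # 0.3
```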

@zokier

So if Java is like a house, then C++ is like a house without a kitchen sink?

Sounds about right to me… :wink:

So… you've found a problem. What's the solution?

In mathematics, the repeating decimal 0.999… denotes a real number equal to one. In other words: the notations 0.999… and 1 actually represent the same real number.
This equality has long been accepted by professional mathematicians and taught in textbooks.

Really? Glad to know it gets taught; nobody taught that to ME. I swear, just a couple of months ago I was randomly thinking about periodic fractions and non-decimal bases (nothing better to think about… maybe that's why it took me so long to get married :slight_smile: ), and I stumbled upon this fact in total bewilderment. It was a clear, unmistakable and inescapable consequence of a few basic mathematical facts. How fun!

To me, the most interesting consequence is that our conventional numeric notation system (even with the use of … or the vinculum sign) is not a bijective representation of the set of reals, even though for the longest time I had assumed it was.

@Craig Fritzpatrick

"So when we divide 9 by 2 for example, we as people might write: 4 1/2. Simple, a string of 5 characters including the space. No precision problems."

Good luck when you want to calculate sqrt(2). Seriously, there are many rational classes, but I doubt any store them as strings.

@Jim

"It doesn't help that the real numbers are uncountably infinite, not merely countably infinite like the integers." … "Any two different real numbers have an infinite number of other reals between them, so it's impossible to represent any nonempty segment of the real number line exactly."

That's not just a problem with uncountably infinite sets; the rationals are countable, but any two rational numbers have an infinite number of rationals between them. But this isn't really the problem with representing numbers on computers: integers have just as much of a problem if they are sufficiently large.

Not related to floating point numbers: Stack Overflow has spoiled me. Everything on the internet needs to be able to be up-voted or down-voted. I read this article in Google Reader and for a split second was looking for the up-arrow to click on. Sharing or starring an article just doesn't feel the same as up-voting.

@Daren:
The correct mathematical explanation is that 0.99999… (zero point nine recurring) approaches 1.

No, actually it has nothing to do with limits. I know it looks like it, but it's not.

Here's a simplified version of what I stumbled upon:

1/3 = 0.33333...
2/3 = 0.66666...

Nothing weird there. Those two are clearly exactly equivalent. Now:

1/3 + 2/3 = 1

No possible doubt there. There are no limits or rounding involved. Therefore,

0.33333... + 0.66666... = 0.99999...

Which means that 0.99999… MUST therefore be an alternative (non-normalized) representation of 1.
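The same argument can be checked mechanically with exact rational arithmetic, for instance Python's fractions module:

```python
from fractions import Fraction

# 1/3 and 2/3 are held exactly, with no repeating-decimal truncation.
total = Fraction(1, 3) + Fraction(2, 3)
print(total)         # 1
print(total == 1)    # True
```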

Ruby and PHP rule too :slight_smile: Just like Python:

[ruby]
~$ irb
irb(main):001:0> 399999999999999-399999999999998
=> 1
irb(main):002:0> exit

[php]
~$ php -r 'echo 399999999999999-399999999999998 . "\n";'
1
~$

I got the correct answer from Excel 2007 when I tried =850*77.1

The identity, 1 = 0.999…, simply indicates that we can represent the number that we call 'one' as two different power series,

0 + 9/10 + 9/100 + 9/1000 + …

and

1 + 0/10 + 0/100 + 0/1000 + …

(This is simply the definition of our decimal expansion notation; if the trailing coefficients of the series are repeating zeros, we omit writing them for convenience.) Put another way, the equation, 1 = 0.999…, just says that two different sequences (the partial sums of the two power series above) converge to one.
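That convergence is easy to watch numerically; a small sketch of the partial sums of the first series:

```python
# Partial sums s_n = 9/10 + 9/100 + ... + 9/10**n of the first series.
s = 0.0
for n in range(1, 8):
    s += 9 / 10**n
    print(n, s, 1 - s)
# The gap 1 - s_n shrinks by a factor of 10 at every step,
# so the sequence of partial sums converges to exactly 1.
```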

Python rulez ?

>>> 38.1 * .198
7.5438000000000009
>>> .1
0.10000000000000001

http://docs.python.org/tutorial/floatingpoint.html

I had the same problem on an e-commerce website in C#, and in Perl/PHP/Tcl/C, and as everybody knows, computers are not faulty, only developers :slight_smile:

Normally, to avoid problems, it is better not to work with floats, even BCD (binary coded decimal) ones (unless your CPU natively computes in decimal :slight_smile: ), but rather to use the fixed-point trick.

In real life it means manipulating only integers (you don't store a price as a float, but as an integer representing tenths of cents, for instance) and the decimal point is just a matter of presentation.

It doesn't work in every case, but it is better than nothing.
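A minimal sketch of that fixed-point trick (the helper names here are made up for illustration): the price is kept as a plain integer counting tenths of cents, and the decimal point exists only at display time.

```python
# Store money as an integer number of tenths of cents (1/1000 of a dollar).
def to_units(dollars, cents, tenths=0):
    return dollars * 1000 + cents * 10 + tenths

def format_price(units):
    # The decimal point is purely presentational.
    return "${}.{:02d}".format(units // 1000, units % 1000 // 10)

a = to_units(19, 99)        # $19.99
total = 3 * a               # exact integer arithmetic throughout
print(format_price(total))  # $59.97
```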

Computers don't suck at math. People simply use floating point variables for purposes that floating point wasn't designed for.

The technology to do true decimal arithmetic and return precise numbers has existed for more than 50 years. On business platforms, such as the IBM i platform, it's the default numeric data type. It's the right choice when you are working with money, weights, quantities and the other precise numbers used in business.

Floating point was designed more for scientific applications or graphical applications. It wasn't designed for business.

Strangely, most languages for the PC platform are lacking true decimal arithmetic, and the developers aren't clamoring for it. I've never understood that. How can floating point be good enough for your business?

The Excel bug is very confusing… if you take that 850 * 77.1 result and format the cell as a date, you get the same value as when 65535 is formatted as a date… if you format it pretty much any other way, you see 100000 rather than 65535.

But subtraction can be anywhere from exact to completely inaccurate. If two numbers agree to n figures, you can lose up to n figures of precision in their subtraction.

This makes no sense to me, and neither this article nor the linked article attempts to explain it. Why should subtraction be harder than addition, multiplication, or division? I've tried thinking about it from various angles, and don't see why subtraction should introduce this kind of difficulty, and especially why the agreement of the two operands should have an effect on the precision of the results. Can you elaborate?
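The effect being asked about is usually called catastrophic cancellation: when two nearly equal values are subtracted, the leading digits they share cancel out, leaving only the trailing digits, which are exactly the ones already contaminated by rounding. A small sketch of a classic case:

```python
import math

x = 1e-8

# (1 - cos(x)) / x**2 should be about 0.5 for small x, but cos(1e-8)
# rounds to a double indistinguishable from 1.0, so the subtraction
# cancels every significant digit.
naive = (1 - math.cos(x)) / x**2
print(naive)    # far from the true value 0.5

# Rewriting to avoid subtracting nearly equal numbers fixes it:
stable = 2 * math.sin(x / 2)**2 / x**2
print(stable)   # ~0.5
```

The reason subtraction is special: adding same-sign numbers, multiplying, and dividing all keep the result's relative error on the order of the operands' relative errors, but subtracting nearly equal operands produces a tiny result, so the same small absolute error becomes a huge relative one.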