Why Do Computers Suck at Math?

Jeff, why do you insist on including some kind of image/clipart on every post you make? Occasionally they help make the post entertaining. More often than not they are simply annoying.

I’m a mathematician, and it is true that 0.(9)=1, but I hate that result. It’s really annoying. If you want to get rid of it, you can use infinitesimals and nonstandard analysis, but they are tricky too.

But that’s not really Jeff’s point, since computer math doesn’t even get close to being able to say that 0.(9)=1, unless you go all symbolic on me.

This makes no sense to me, and neither this article nor the linked article attempts to explain it.

It’s a simple enough problem to understand why you are confused…

  1. Computers are not user friendly.
  2. Computers can ONLY add…
  3. At that, they can only add 1+1, 1+0, or 0+0. They add only binary integers (not decimals).
  4. Programming, whether hardware, software or firmware, seemingly allows computers to do more, but in essence it only takes the numbers and manipulates them to force the execution of subtraction, multiplication and division.
  5. (here is where it gets complex) Subtraction is performed by forcing the number to become a negative value (called the complement), and then the two are added together to get the answer. That is a little hard to understand, but if you look at the equation 5 - 4, you see the same thing stated. You turn a ‘positive’ four into a negative number (-4) and add them together to end up with ‘1’ (5 + (-4) = 1).
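The complement trick in step 5 can be sketched in a few lines of Python. This is my own illustration (the function name and the 8-bit register width are assumptions, not from the comment):

```python
# Subtraction implemented as addition of the two's complement,
# assuming fixed-width 8-bit registers.
def sub_via_complement(a, b, bits=8):
    mask = (1 << bits) - 1          # 0xFF for 8 bits
    neg_b = (~b + 1) & mask         # two's complement of b, i.e. -b
    return (a + neg_b) & mask       # carry-out discarded, as in hardware

print(sub_via_complement(5, 4))     # 1, same as 5 + (-4)
```

The adder circuit never subtracts; it only adds the complemented operand, which is exactly the 5 + (-4) = 1 rewriting described above.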

(before anyone sounds off…)

I am not a college graduate…
I used to be a very low level language programmer…
I had to read and comprehend a book on ‘Higher Unary Mathematics’ for a job I had dealing with navigation.
I may not be explaining things well, but I know the essence of what I have said is sound.
I used to have a sign on my desk that said:

Your wisdom is akin to the result of the most complex and detailed problem in higher unary mathematics.

(ALL problems in ALL levels of Unary Mathematics are answered with 0, the only number in Unary Mathematics. The only question about the answer was whether it is negative or positive… and THAT answer took a whole book).

One of my favorite rounding stories. During the first Gulf War, Patriot missiles fired by the Americans got less and less reliable over time, and often missed their target. But strangely enough, the ones fired by the Dutch forces didn’t.

Turned out, the Americans kept their systems going all the time, whereas the Dutch, frugal as they are, switched them off when there wasn’t any threat. When the Dutch switched them on again, the systems reset, looked up the time on the network, and were happy. But the American systems never reset, and rounding errors caused the real time and the tracked time to drift apart, causing them to aim in the wrong direction (because the clock was used to orient the systems).

I don’t know whether the moral is to be frugal or to have more bytes in your time representation, but it seems to show that even addition (+1 second) can lead to problems.
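The drift in that story can be reproduced in miniature: 0.1 has no exact binary representation, so repeatedly adding it accumulates error. A Python sketch of the general effect (not of the actual Patriot code):

```python
# Count one second in ten 0.1-second ticks.
t = 0.0
for _ in range(10):
    t += 0.1

print(t == 1.0)   # False
print(t)          # 0.9999999999999999
```

Scale the loop up to hours of uptime and the accumulated error becomes large enough to matter, which is why a periodic reset hides the problem.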

It’s good that you are acknowledging that it is important to know some mathematics, but I find this post slightly disturbing, insofar as it was felt to be necessary.

When I were a lad, learning my trade, one of the first things you’d learn was how computers represented numbers, and what that meant for precision. The fact that this post is needed suggests there may be a generation of programmers who don’t care about what’s happening under the hood. Maybe dynamic languages are to blame, or maybe not enough programmers learn C.

In any event, as others have pointed out, the post is slightly misleading in that you don’t mention that computers use binary; the problem is that a fraction might have a short, exact decimal expansion but an infinite one in binary.

Last time I was bitten by floating point arithmetic was calculating a triangle area with Heron’s formula.

http://en.wikipedia.org/wiki/Heron%27s_formula#Numerical_stability

I’ve never had to care about those little errors with big numbers, but in a naive implementation of Heron’s formula, a badly disproportionate triangle results in a semiperimeter equal to one of the sides and 0.0 as the resulting area.
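For what it’s worth, the needle-triangle failure is easy to reproduce, and Kahan’s rearrangement of Heron’s formula (discussed at the link above) avoids it. A Python sketch with made-up side lengths:

```python
import math

def heron_naive(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def heron_kahan(a, b, c):
    # Kahan's numerically stable version: sort so a >= b >= c, then
    # group the terms so small differences don't cancel catastrophically.
    a, b, c = sorted((a, b, c), reverse=True)
    return 0.25 * math.sqrt(
        (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))
    )

# A long, thin "needle" triangle: two huge sides, one tiny one.
a, b, c = 1e16, 1e16, 0.5
print(heron_naive(a, b, c))   # 0.0 -- the semiperimeter rounds to a side
print(heron_kahan(a, b, c))   # ~2.5e15, the right answer
```

In the naive version, (a + b + c) / 2 rounds to exactly 1e16, so s - a is 0.0 and the whole product collapses.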

JavaScript also passed this test:

<html>
<head>
<title>JavaScript Test</title>

<script language="JavaScript">
	function doIt() {
		var x = 399999999999999;
		var y = 399999999999998;

		var z = x - y;

		document.write(z);
	}
</script>

</head>

<body onLoad="doIt();">

</body>
</html>

Hmm I put that little Excel equation into Excel 2007 and got the correct answer of 65,535. Did I do something wrong???

The only problem I have with the .99999… = 1 is that, given any finite number N of 9s after the 0., I can give you an infinite number of numbers between that number and 1. Not necessarily on a computer, but in theory.
One problem with all the discussion is that .99999… = 1 is a mis-statement. It is ‘Wrong’, or more precisely, a mis-interpretation of what is symbolised.

A simple way (very simplified) is to say that 0.99999… is so close to 1.0, that I wish not to have to print all them darned nines, and for my purposes I will round it off to 1.0, so I don’t have to spend all day writing out senseless nines. The precision of avoiding the incredibly small differences is not worth the effort, since each space is 1/10 the size of the previous.

BTW pi to the 10th position is precise enough to negate almost any need to go further, except to make one think pi to the millionth place is a good encryption tool. (and a nifty dandy way to test your abilities at programming decimal precision.)

Programmers should definitely be aware that computers do this, and of how not to step into the traps it opens.

For example, when increasing a value by a small fraction to predict growth in the far future. Say your daily increase was calculated to d = 0.00000000000000000001241224111244. School math tells us to take (1 + d)^100 to forecast the value 100 days from now. With a float of insufficient precision, adding 1 to the very small number cuts off more decimals than expected. If you instead apply the increase day by day you’ll get a different, more accurate result.
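In 64-bit doubles that d is in fact swallowed completely by the addition. Python’s math.log1p and math.expm1 exist precisely for this situation; a sketch using the d value from the comment:

```python
import math

d = 0.00000000000000000001241224111244   # about 1.24e-20

# Naive: adding 1 discards d entirely, since doubles carry ~16 digits.
print((1.0 + d) == 1.0)          # True
print((1.0 + d) ** 100)          # 1.0 -- the forecast shows no growth at all

# log1p/expm1 keep the tiny quantity separate from the 1.
growth = math.expm1(100 * math.log1p(d))
print(growth)                    # ~1.24e-18, the actual 100-day increase
```

The lesson is less "iterate day by day" than "never form 1 + d explicitly when d is near the rounding threshold".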

Google got this wrong:

$ perl -e 'print 399999999999999-399999999999998 . "\n";'
1
$

Python got this wrong:

$ perl -e 'print 38.1 * .198 . "\n";'
7.5438
$

Final Grade:

Google: 50% (F)
Python: 50% (F)
Perl: 50% (A+)

Perl Wins!

Aren’t all numbers (other than crazy mathematical constants) representable by finite fractions? Like 0.33333… is represented perfectly well by 1/3, 0.1 by 1/10, etc.? It would be interesting to see if any work is being done on using such a data type to represent ‘floats’.

“A standard floating point number has roughly 16 decimal places of precision”

Assuming ‘decimal places’ means ‘significant figures’, this is the precision in a 64-bit double. You only get 6 or 7 significant figures from a 32-bit float.
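The difference is easy to see by round-tripping a value through a 32-bit float, e.g. with Python’s struct module:

```python
import struct

x = 0.123456789012345678   # more digits than either format can hold

# A 64-bit double keeps ~15-16 significant figures of this.
print(repr(x))

# Pack into a 32-bit float and back: only ~7 significant figures survive.
f32 = struct.unpack('f', struct.pack('f', x))[0]
print(repr(f32))
```

The float32 round-trip changes the value around the 8th significant figure, while the double preserves it to about the 16th.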

Not sure your calculator and Excel 2007 examples are 100% accurate. On my Win7 machine running Office 2007, the calculator and Excel examples return the correct values. I have emailed you screenshots of both, just so you can see that it appears those examples are no longer good. Other than that, this was a really good article on how we as programmers need to understand how computers handle math.

Have a great day.

@ J. Stoever: There are very many more [vast understatement] crazy irrational numbers than there are rational ones.

@J. Stoever: Only rational numbers. That’s why they are called rational. Irrational numbers like pi, sqrt(2), etc are not representable by fractions, unless you round them.

That said, there are in fact libraries and even built-in mechanisms in many languages for handling fractions the way you describe. The only issue is that they are much slower, resulting in a precision-speed trade-off when considering the two.
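Python, for instance, ships fractions.Fraction in the standard library, which does exactly this:

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding error, at the cost of speed.
third = Fraction(1, 3)
print(third + third + third == 1)                # True

tenth = Fraction(1, 10)
print(tenth + tenth + tenth == Fraction(3, 10))  # True

# The float equivalent fails, because 0.1 has no finite binary expansion.
print(0.1 + 0.1 + 0.1 == 0.3)                    # False
```

The numerator and denominator grow without bound under repeated arithmetic, which is where the speed cost comes from.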

When I were a lad, learning my trade, one of the first things you’d learn was how computers represented numbers, and what that meant for precision. The fact that this post is needed suggests there may be a generation of programmers who don’t care about what’s happening under the hood.

Don’t blame the tools/generation, blame the field. And if it’s this young slapdash generation, how come f-p bugs have been causing problems since before I was born?
I did Java long before messing with low-level languages, and this issue still arose, because I was doing something that involved floating point arithmetic. You can find this issue on a pocket calculator if you’re doing the ‘right’ sum. But it’s worth a reminder that it exists, that it’s incredibly prevalent, and what it means for computing.
Or maybe Jeff just needed to fill his blog quota.

@Darren

0.99999… doesn’t approach anything. It’s a number. Saying it approaches 1 is like saying 1.1 approaches 1.2, it’s clearly nonsense.

0.99999… = 1
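For reference, the standard algebraic argument:

```latex
\text{Let } x = 0.\overline{9}. \quad
\text{Then } 10x = 9.\overline{9}, \quad
\text{so } 10x - x = 9x = 9, \quad
\text{hence } x = 1.
```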

@Martin obviously the moral is to always run NTP daemons on your rocket launchers. :)

@Steve W: number representation and all those basics are part of any decent comp sci program, and I’d think you’d find them in a university-level software engineering program too. Nowadays most programmers get trained in trade schools, though, and not all of them teach the basics.

Though as someone pointed out, floating-point arithmetic has its issues in dynamic languages too. Any decent language guide will contain a section on it, so it seems to me most programmers would eventually get round to learning about it, one way or another.