Why Do Computers Suck at Math?

The difference is: if a computer does a wrong calculation, it is caused by a bug in either hardware or software. If a human calculates wrong, it is most likely caused by a mistake, lack of talent, or math that is too complicated. Unless of course you define humans as bugs.

Reminds me of how brilliant I thought I was when javascript:parseInt('010') returned 8 instead of 10. "I have found a bug in JavaScript," I thought. Only to find out it was because of the octal conversion. :slight_smile:
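For what it's worth, that octal inference was real in pre-ES5 engines; ES5 removed it, but passing an explicit radix still sidesteps the ambiguity entirely. A quick sketch in modern JavaScript:

```javascript
// Pre-ES5 engines treated a leading zero as octal, so parseInt('010')
// could return 8. ES5 dropped that inference, but an explicit radix
// still makes the intent unambiguous.
console.log(parseInt('010'));     // 10 in modern engines (8 in some old ones)
console.log(parseInt('010', 10)); // 10 — always decimal
console.log(parseInt('010', 8));  // 8  — explicitly octal
```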

@jul I think you are confusing binary coded decimals (BCD) and binary floating point representation (IEEE 754):

http://en.wikipedia.org/wiki/Binary-coded_decimal
http://en.wikipedia.org/wiki/IEEE_754

BCD is (was?) a technique often used in assembly code to store large numbers and perform precise arithmetic on them.
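As a rough illustration of what packed BCD looks like — two decimal digits per byte, one per nibble; the helper names here are mine, not from any standard library:

```javascript
// Packed BCD: each byte holds two decimal digits (one per nibble).
// A minimal sketch — these function names are illustrative only.
function toPackedBCD(numStr) {
  if (numStr.length % 2) numStr = '0' + numStr; // pad to an even digit count
  const bytes = [];
  for (let i = 0; i < numStr.length; i += 2) {
    const hi = numStr.charCodeAt(i) - 48;       // '0' has char code 48
    const lo = numStr.charCodeAt(i + 1) - 48;
    bytes.push((hi << 4) | lo);
  }
  return bytes;
}

function fromPackedBCD(bytes) {
  return bytes.map(b => String(b >> 4) + String(b & 0x0f)).join('');
}

console.log(toPackedBCD('1234'));         // [ 18, 52 ]  i.e. 0x12, 0x34
console.log(fromPackedBCD([0x12, 0x34])); // '1234'
```

Because every decimal digit is stored exactly, arithmetic done nibble-by-nibble never picks up binary rounding error — which is why calculators and financial code used it.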

I really hate it when some data type isn't good or large enough. When I do calculations, I am not interested in the data type one bit. If the value is big, do you think I care? The computer should enlarge the variable to hold the big value — automatically. Automation is the name of the game anyway.

Or do you think it is reasonable that when a big value occurs, the computer whispers to me: "Hey, psst… hey, programmer." "What?" "This is really embarrassing, but could you kindly enlarge my variable?" "Oh, for God's sake. Argh, alright then, but this will be the last time!" "Yes, goody goody goody! Thanks!"
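Some languages really do this — Python promotes integers to arbitrary precision automatically. In JavaScript you have to ask for it yourself with BigInt; a quick sketch of exactly where plain Number gives up:

```javascript
// Number loses exact integers past 2^53 - 1; BigInt grows as needed,
// though you must opt in with the n suffix — it isn't automatic.
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
console.log(9007199254740992 === 9007199254740993); // true — precision lost
console.log(9007199254740992n + 1n);                // 9007199254740993n — exact
```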

These responses scare me very much.

How could this article be written without a discussion of machine-epsilon?
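For the curious: machine epsilon is exposed in JavaScript as Number.EPSILON (the gap between 1 and the next representable double), and the classic 0.1 + 0.2 case shows why it matters:

```javascript
// Machine epsilon: the gap between 1 and the next representable double.
console.log(Number.EPSILON);                 // 2.220446049250313e-16
console.log(0.1 + 0.2 === 0.3);              // false — both sides carry rounding error
console.log(Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON); // true — within one ulp
```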

The desktop calculator designed by Dr. Larry Nylund is a complete replacement for the traditional calculator that comes with Microsoft Windows©. It solves the floating-point problem.

Download your own desktop calculator today, at almost no cost! http://www.math-solutions.org

1/3 = 0.333333333…
multiply both sides by 3
1 = 0.9999999…
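For completeness, the same identity falls out of the geometric series without the 1/3 detour (sketched in LaTeX):

```latex
0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
               \;=\; 9 \cdot \frac{1/10}{1 - 1/10}
               \;=\; 1
```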

don’t be a d-bag

To be clear, for all of those talking about languages that handle this, and how PHP, Lisp, Ruby, Python, etc. are all superior because they handle it: this isn't special. I'm not aware of any programming language that would be unable to handle it. The integer in question requires 49 bits; what programming language still in use doesn't have 64-bit integers?

The problem only exists because Google is using single-precision floats for this, presumably because they can get things done faster that way on average for a typical query. If your language can pass this test while using single-precision floats, THAT would be impressive… but it can't. I highly doubt Google even sees this as a bug of any sort, because most users aren't exactly using the Google calculator in ways that would make this any more than a novelty. The fact that everybody looks at 399999999999999-399999999999998 rather than 30347423581692-303474235816991 or something from a real-world example that went bad is a testament to this.
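Whether or not Google actually used single precision (the comment above only presumes it), the failure mode is easy to reproduce in JavaScript with Math.fround, which rounds a double to the nearest float32:

```javascript
// Both values fit exactly in a double (they're below 2^53),
// so the double subtraction is exact:
console.log(399999999999999 - 399999999999998);  // 1

// Rounded to single precision, the gap between adjacent floats near 4e14
// is about 2^25 ≈ 33 million, so both values collapse to the same float32:
const a = Math.fround(399999999999999);
const b = Math.fround(399999999999998);
console.log(a === b);  // true
console.log(a - b);    // 0 — the wrong answer the thread is about
```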

[0.(9) equals 1] is false.
[0.(9) does not equal 1] is also false.

[0.(9) is probably equal to 1] is true.

Probability is used to solve unsolvable problems. For 99.(9)% of our needs [0.(9) equals 1] is true. This includes engineering and applied mathematics. And it’s for the simple fact that you have to decide on a level of precision (number of decimal places) or wait for eternity as the 9s roll out, and thus never get anything done.

In theoretical physics [0.(9) does not equal 1] is true. Think big questions like the size of our finite universe and what’s on the other side if it is finite. In this context, 0.(9) does not equal 1.

It equals 42. :stuck_out_tongue:

shane is being a d-bag

You should know that

a) programming language A is slower/faster than B,
b) programming language A is better/worse than B,
c) the 0.9-repeating vs. 1 discussion (added just now),

are taboo subjects. It's too late now. This will go on and on forever.

You people need to learn numerical methods before commenting on this. Numbers to a computer don't exist on a continuous line; everything is discrete, and not linearly spaced on that discrete number line. This sort of floating-point error is a common occurrence in poorly written code.
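The "not linearly spaced" point is easy to see directly — the gap between adjacent doubles grows with the magnitude of the number:

```javascript
// Near 1, the gap between adjacent doubles is Number.EPSILON (~2.2e-16):
console.log(1 + Number.EPSILON > 1);  // true — one ulp at 1

// Near 1e16 the gap is 2, so adding 1 does nothing at all:
console.log(1e16 + 1 === 1e16);       // true  — 1 is below one ulp at 1e16
console.log(1e16 + 2 === 1e16);       // false — 2 reaches the next double
```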

Wow, this is pretty pointless. Who gives a damn.

And furthermore, I have tested and retested this theory with the calculator in XP, and I cannot recreate a math error with any calculation, period — no matter what level of precision or how many digits.

@Lilian, Adamsson:
From the site:

No, Wolfram|Alpha handles it properly.

I guess it depends on the nature of the computer

http://www.wolframalpha.com/input/?i=3999999999999999999999999-3999999999999999999999998&asynchronous=false&equal=Submit

There is a problem with Daren's explanation that

0.9999… = 1

because

(10*0.999…) - 0.999… = (10-1)*0.999… = 9*0.999…

and

(10*0.999…) - 0.999… = 9.999… - 0.999… = 9

But for any finite truncation,

(10*0.999…9 (N nines)) - 0.999…9 (N nines) = 9.99…9 ((N-1) nines) - 0.99…9 (N nines) = 8.99…91 ((N-1) nines, then a 1) != 9

so the argument only works for the genuinely infinite repeating decimal. There,

9*0.999… = 9

and the axioms of arithmetic say that if x*y = x for x != 0, then y = 1 (1 is the unique neutral element of the * operation). Thus 0.999… = 1.

Based on the definition of decimal representation of real numbers, it’s clear that 0.999… = 1. That’s not what I have a problem with. Many posters, in making this claim, have appealed to the obvious notion that 0.333… = 1/3. My question is, if you don’t accept that 0.999… = 1, then on what basis would you accept that 0.333… = 1/3?

Wolfram|Alpha uses at most 66 decimal digits :frowning:

http://www.wolframalpha.com/input/?i=(1%2B1e-66)^1e66-1&asynchronous=false&equal=Submit

if you don’t accept that 0.999… = 1, then on what basis would you accept that 0.333… = 1/3?

I don’t know why they don’t accept 0.999… = 1, but there is a more intuitive reason why 1/3 = 0.333…; just try dividing 1 by 3 using long division.
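The long-division intuition can even be mechanized — a digit-by-digit sketch (the function name is mine): the remainder when dividing 1 by 3 is always 1, so the emitted digit is always 3, forever.

```javascript
// Long division, one decimal digit at a time. For 1/3 the remainder
// after each step is always 1, so the digit stream is 3, 3, 3, …
function longDivide(numerator, denominator, digits) {
  let out = Math.floor(numerator / denominator) + '.';
  let rem = numerator % denominator;
  for (let i = 0; i < digits; i++) {
    rem *= 10;                              // bring down a zero
    out += Math.floor(rem / denominator);   // next decimal digit
    rem %= denominator;                     // carry the remainder forward
  }
  return out;
}

console.log(longDivide(1, 3, 10)); // '0.3333333333'
console.log(longDivide(1, 4, 4));  // '0.2500' — terminating fractions hit remainder 0
```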

Just some thoughts as a mathematician. Sorry about my English.

There’s a huge difference between INFINITE precision and ARBITRARY precision. An arbitrary-precision system is able to represent numbers at any FINITE precision, but not at INFINITE precision — it would not be able to handle all real numbers even with infinite memory. Actually, it could handle hardly any real numbers. There are just too many of them.

http://en.wikipedia.org/wiki/Cardinality

About the 0.999… = 1 confusion: just forget all the proofs and equations. Just check the definitions of decimal notation and real numbers.