The difference is: if a computer does a wrong calculation, it is caused by a bug in either hardware or software. If a human calculates wrong, it is very likely caused by a mistake, lack of talent, or math that is too complicated. Unless of course you define humans as bugs.
Reminds me of how brilliant I thought I was when javascript:parseInt('010') returned 8 instead of 10. I have found a bug in JavaScript, I thought. Only to find out it was because of the octal conversion.
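For context: older JavaScript engines treated a leading zero as an octal prefix when `parseInt` was called without a radix (ES5 removed that behavior, so modern engines return 10). A small sketch showing why passing the radix explicitly avoids the surprise:

```javascript
// Passing the radix explicitly makes the intent unambiguous.
console.log(parseInt('010', 10)); // → 10 (decimal)
console.log(parseInt('010', 8));  // → 8  (octal, the old implicit behavior)
console.log(parseInt('010', 2));  // → 2  (binary, for comparison)
```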
I really hate when some data type isn’t good or large enough. When I do calculations, I am not interested in the datatype one bit. If the value is big, do you think I care? The computer should enlarge the variable to be able to hold the big value - automatically. Automation is the name of the game anyway.
Or do you think it is reasonable that when a big value occurs, the computer whispers to me: hey, psst… hey, programmer. What? This is really embarrassing, but could you kindly enlarge my variable? Oh for God's sake. Argh, alright then, but this will be the last time! Yes, goody goody goody! Thanks!
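The automatic enlargement being asked for does exist in several languages (Python integers grow this way, for example). A quick JavaScript sketch of the contrast: plain Numbers silently lose integer precision past 2^53, while BigInt grows as needed without asking anyone:

```javascript
// JavaScript Numbers are 64-bit doubles: integers above 2**53 silently
// lose precision instead of asking the programmer to "enlarge" anything.
console.log(9007199254740992 + 1 === 9007199254740992); // → true (oops)

// BigInt behaves the way the comment above wants: it just grows.
console.log(9007199254740992n + 1n); // → 9007199254740993n
```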
To be clear, for all of those talking about languages that will handle this, and how PHP, Lisp, Ruby, Python, etc. are all superior because they handle it: this isn't special. I'm not aware of any programming language that would be unable to handle it. The integer in question requires 49 bits; what programming language still in use doesn't have 64-bit integers?
The problem only exists because Google is using single-precision floats for this, presumably because they are able to get things done faster that way on average for a typical query. If your language can pass this test while using single-precision floats, THAT would be impressive… but it can't. I highly doubt Google even sees this as a bug of any sort, because most users aren't exactly using the Google calculator in ways that would make this any more than a novelty. The fact that everybody looks at 399999999999999 - 399999999999998 rather than 30347423581692-303474235816991 or something from a real-world example that went bad is a testament to this.
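Assuming single precision really is the culprit (the comment above says "presumably"), the failure is easy to reproduce with JavaScript's Math.fround, which rounds a double to the nearest 32-bit float:

```javascript
// 399999999999999 needs 49 significand bits; a float32 has only 24,
// so the gap between adjacent float32 values near 4e14 is about 3.4e7.
// Two numbers that differ by 1 collapse to the same float32.
const a = Math.fround(399999999999999);
const b = Math.fround(399999999999998);
console.log(a - b);   // → 0, not 1
console.log(a === b); // → true: both round to the same float32
```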
[0.(9) equals 1] is false.
[0.(9) does not equal 1] is also false.
[0.(9) is probably equal to 1] is true.
Probability is used to solve unsolvable problems. For 99.(9)% of our needs [0.(9) equals 1] is true. This includes engineering and applied mathematics. And it’s for the simple fact that you have to decide on a level of precision (number of decimal places) or wait for eternity as the 9s roll out, and thus never get anything done.
In theoretical physics [0.(9) does not equal 1] is true. Think big questions like the size of our finite universe and what’s on the other side if it is finite. In this context, 0.(9) does not equal 1.
You people need to learn numerical methods before commenting on this. Numbers in a computer don't exist on a continuous line: everything is discrete, and not linearly spaced on that discrete number line. This sort of floating-point error is a common occurrence in poorly written code.
And furthermore, I have tested and retested this theory with the calculator in XP, and I cannot recreate a math error with any calculation, period, no matter what level of precision or how many digits.
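The uneven spacing of floating-point values described two comments up can be seen directly in JavaScript: the gap between adjacent representable doubles (the "ulp") doubles with each power of two.

```javascript
// Near 1, the gap between adjacent doubles is Number.EPSILON (~2.2e-16),
// so adding it to 1 is visible. Near 1e16 the gap has grown to 2, so
// adding 1 does nothing at all.
console.log(1 + Number.EPSILON === 1); // → false: the bump is representable
console.log(1e16 + 1 === 1e16);        // → true: 1 falls below the gap

// And the classic decimal-fraction rounding error:
console.log(0.1 + 0.2 === 0.3);        // → false
console.log(0.1 + 0.2);                // → 0.30000000000000004
```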
Based on the definition of decimal representation of real numbers, it’s clear that 0.999… = 1. That’s not what I have a problem with. Many posters, in making this claim, have appealed to the obvious notion that 0.333… = 1/3. My question is, if you don’t accept that 0.999… = 1, then on what basis would you accept that 0.333… = 1/3?
Just some thoughts as a mathematician. Sorry about my English.
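For what it's worth, both equalities come from the same geometric-series limit, which is why accepting one while rejecting the other is hard to justify:

```latex
0.999\ldots = \sum_{n=1}^{\infty}\frac{9}{10^n} = \frac{9/10}{1-1/10} = 1,
\qquad
0.333\ldots = \sum_{n=1}^{\infty}\frac{3}{10^n} = \frac{3/10}{1-1/10} = \frac{1}{3}.
```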
There's a huge difference between INFINITE precision and ARBITRARY precision. An arbitrary-precision system is able to represent numbers at any FINITE precision, but not at INFINITE precision – it would not be able to handle all real numbers even with infinite memory. Actually, it could handle hardly any real numbers: there are just too many of them. (The reals are uncountable, while finite representations are only countable.)
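A sketch of what "arbitrary but finite" means in practice, using JavaScript's BigInt: any finite size works, limited only by memory, yet every value is still a finite string of digits.

```javascript
// BigInt computes 2**1000 exactly – far beyond 64 bits – but the
// result is still a FINITE digit string. Almost all real numbers
// have no finite description at all.
const big = 2n ** 1000n;
console.log(big.toString().length); // → 302 decimal digits
```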