Why Do Computers Suck at Math?

osp70:

What you say is true, but backwards. What you’re missing is that we aren’t giving you N many 9’s in 0.999… We’re putting infinitely many 9’s there. You cannot give any number with more 9’s than infinitely many 9’s, so you cannot give a number between that and 1, which means there is no difference.

1/1, 2/2, 5/5, 9/9, multiplicative identity, 1/2 + .5, all mean the same thing, but people get hung up on this one representation of the number 1.
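One familiar way to check the same point with exact arithmetic (a quick sketch using Python’s fractions module, not anything from the original post): each partial sum 0.9, 0.99, 0.999, … falls short of 1 by exactly 1/10^n, and that shrinking gap is the only thing that ever separates the 9’s from 1.

from fractions import Fraction

# Partial sums 0.9, 0.99, 0.999, ... as exact fractions.
# The gap to 1 is exactly 1/10**n, so no fixed positive number can fit
# between every partial sum and 1 -- which is the point made above.
for n in range(1, 6):
    partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    print(n, partial, 1 - partial)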

You know, that 0.9999 would only be accurate if it produced a graphic that would reach the edge of the screen and beyond :slight_smile:

Err, it’s not the test of the overflow flag, but rather a comparison against zero that causes the incorrect behavior with the 32-bit code but not with the 16-bit code. My point still stands, though.

@Anders Sandvig

To be precise, I was referring to fixed-point representation:

http://docs.python.org/library/decimal.html
http://msdn.microsoft.com/en-us/library/system.decimal(VS.80).aspx

It really spares a lot of problems when working on e-commerce.
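As a quick illustration of what those decimal types buy you (a minimal Python sketch based on the first link above, nothing e-commerce specific):

from decimal import Decimal

# Binary floats accumulate representation error for 0.1...
print(0.1 + 0.1 + 0.1 == 0.3)                  # False
# ...while decimal arithmetic keeps 0.1 exact.
print(Decimal('0.1') * 3 == Decimal('0.3'))    # True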

I may have been confus(ing|ed) though :slight_smile:

I just did not remember that there is no such thing as a language working in BCD anymore, only oooolllllddd computers (IBM).

That’s the problem with having graduated last century :slight_smile:

@the old rang:

I hate to sound off on that, but it isn’t a very simplified way of looking at 0.99999… vs. 1, that’s a wrong way of looking at 0.99999… vs. 1. Because the point is that they are literally the same number expressed in two different ways. Not so close as makes no difference, but the exact same number.

I’m an engineer. I know “close enough as makes no difference” intimately.

It doesn’t help that the real numbers are uncountably infinite, not merely countably infinite like the integers. At least with integers you can say okay, you get to represent the first N above and/or below zero and just make N large if needed. Any two different real numbers have an infinite number of other reals between them, so it’s impossible to represent any nonempty segment of the real number line exactly.

tomc:

Not true! For the exact same reason that 1 + 1/2 + 1/4 + 1/8… doesn’t equal infinity even though you are adding together an infinite number of positive numbers, it also doesn’t need to take infinite space to represent the concept given infinitely fine pixel density (which could be approximated procedurally via an infinite zooming algorithm).

Alternatively, he could replace the graphic with the numeral 1 :).

I am afraid these two examples have very little to do with precision.

In the first case, I suppose, it was an optimization where all the digits were compared, and if they were equal, the result was 0. Obvious. However, the last digit was forgotten. In pseudo C code:

int streq(const char *a, const char *b) { return strlen(a) == strlen(b) && memcmp(a, b, strlen(a) - 1) == 0; /* bug: the last character is never compared */ }

Common mistake, in either direction.

Second error: I guess Excel is keeping big numbers in an array of 16-bit words, and when a result somehow (it is likely that precision played a role here) happens not to fit in those 16 bits (sign + real(0.1) fraction(1/10)), the top bit has to be carried over and something is/was screwed up there.

Marian

News flash: in our ongoing series, after discovering proper password salting, and after discovering the processor supervisor state/ring 0, Jeff Atwood now discovers floating-point numbers. In other news, programmers everywhere discover the bozo bit can also be flipped on another fellow programmer.

Obligatory Bistromathics reference:

Bistromathics itself is simply a revolutionary new way of understanding the behavior of numbers. Just as Einstein observed that time was not an absolute but depended on the observer’s movement in space, and that space was not an absolute, but depended on the observer’s movement in time, so it is now realized that numbers are not absolute, but depend on the observer’s movement in restaurants.

http://www.tudy.ro/2007/07/10/the-bistromathic-drive/
http://en.wikipedia.org/wiki/Starship_Billion_Year_Bunker

Maybe this is the reason why this cheap calculator I bought a few weeks ago does some calculations completely wrong. 2060/3.8*3.8=1 according to that piece of crap. Even other cheap calculators don’t have that problem.

Crap, I still remember when stupid 16-bit int counters would overflow at 32K and unsigned chars wrapped at 256.

As Dennis mentioned above, 0.1 is a simple base 10 fraction that has a repeating form in base 2, i.e., 0.0(0011) where the part in parentheses repeats. 0.3 is another example: 0.0(1001). Thus, they cannot be exactly represented in IEEE arithmetic. I’ve made some base conversion tools for numbers with fractional parts available at

http://www.knowledgedoor.com/1/Base_Conversion/Convert_a_Number_with_a_Fractional_Part.htm

These can be very helpful in illustrating just how easy it is to run into numerical problems when switching between base 10 and base 2 representations.
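A quick way to see the effect of those repeating binary fractions without any external tool (a small Python check, not part of the linked converter):

# The double nearest to 0.1 is slightly more than 0.1; printing extra
# digits makes the stored value visible, and the classic comparison fails.
print(format(0.1, '.25f'))   # 0.1000000000000000055511151
print(0.1 + 0.2 == 0.3)      # False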

Computers only suck at math when they don’t use IBM COBOL packed decimal (BCD) fields ^^

I hate to sound off on that, but it isn’t a very simplified way of looking at 0.99999… vs. 1, that’s a wrong way of looking at 0.99999… vs. 1. Because the point is that they are literally the same number expressed in two different ways. Not so close as makes no difference, but the exact same number.

Hmmm… hehehehe… Ok, engineer… Take 1.0. Subtract your number 0.99999. Test for ‘zero’, and if it passes the test when you are between the Moon and Mars, you are right, and complete your mission. If you are wrong, you find temperatures in the solar range, and you… should go to the sun when it is night.

The numbers are the same by convention, NOT mathematics. People did not deal with such long decimals in olden times (when the rules of mathematics and arithmetic were developed).

Better still… make a gamble out of it. If the odds are that you have a 0.999999… chance of surviving an ordeal, you might take the bet… But the odds are not 1.0… not a certainty. Would you still chance the bet? Murphy wrote the laws. The important one was ‘at the worst possible time’… If certainty is required, 1.0 is NOT equal to .99999…

Supporting precise math and having it built in are two different things. C++ supports arbitrary-precision math just fine when using the GNU MP Bignum library, http://gmplib.org/; it just isn’t Java, which has everything including the kitchen sink built in.

I do agree that infinite-precision math isn’t required in end-user applications (besides maybe matlab/mathcad and the like).
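(For what it’s worth, Python has something comparable built in rather than bolted on: its integers are arbitrary precision and the fractions module gives exact rationals. A quick sketch, not GMP itself:)

from fractions import Fraction

print(399999999999999 - 399999999999998)       # 1 -- Python ints never overflow
print(Fraction('12.51') - Fraction('12.52'))   # -1/100, exact rather than -0.00999...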

One can easily guess that Jeff is a little biased against Microsoft from his not including this link in his post:
http://search.live.com/results.aspx?q=399999999999999-399999999999998&mkt=en-US

And this link http://blogs.msdn.com/oldnewthing/archive/2004/05/25/141253.aspx

Ruby did not exist and Google was a very small startup when Microsoft fixed this bug.

Unlike the strange floating-point calculations, the 850*77.1 trick in Excel was a real bug. Fortunately, I’ve just tried that trick in Excel 2007 SP2 and could not reproduce the issue.
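For what it’s worth, the floating-point side of that bug is easy to see: the double product lands just under 65535, and as I recall Excel 2007’s display code mishandled a handful of values in that neighborhood. A quick check (plain Python, not an Excel reproduction):

# 77.1 is not exactly representable in binary, so the product comes out
# a hair under 65535 instead of exactly 65535.0.
print(repr(850 * 77.1))   # prints a value just below 65535, not 65535.0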

Within the real number system, ‘0.999…’ is, by definition, equal to the real number ‘one’.

In other number systems things can be defined differently:
http://en.wikipedia.org/wiki/0.999…#Alternative_number_systems

As far as I know systems are ‘best effort’ approximations of reality.

You are making a ‘modern’ assumption based on incorrect data.

Unless proper correcting software is included, computers are terrible at math. They can do it in binary, octal and hexadecimal. Unfortunately, they cannot do decimal math. (There is NO octal equivalent to decimal 0.05… period.) Also, there is a major difference between negative zero (-0.00) and positive zero (0.00). Control Data used to manufacture a decimal computer, and these problems were not really a question. Since the Windows takeover of reality, precision decimal math has taken a back seat to ‘enhancements’ instead of true and accurate function.

In the olde tymes (when BAL and machine language were the only languages, and punched paper tape was high-speed input), computers had to have math functions programmed (oftentimes), since, if you really know the facts, computers can only add.

Writing functions to actually do math correctly is a lost art. After all, computers are great at multiplying and dividing!! Right!!!… er… ummm…

No. They aren’t. Not unless a human, with understanding and knowledge, fixes the basic problem… Computers are binary, and binary, converted to octal, doesn’t function in decimal.

Launch a spaceship from Earth to… ummm… Mars. The error in the system, unless corrected for (have you ever heard of mid-mission flight corrections??), will miss Mars, and not by a small factor.

Unary mathematics is almost unheard of nowadays… I remember a thick tome that was required reading to do flight plotting. Seems a negative zero will, according to the computer, either be equal or not equal to positive zero, unless Murphy said it ain’t, and you better listen.

Mathematically speaking, if the remainder of a subtraction is even 0.000… (999 more zeroes)…01, there is a difference. Unless the difference is zero (positive or negative), there is a difference! If there is a positive or a negative zero, then you have to know why and test to see IF the difference is significant. Not doing so will create errors like you might not believe.
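On the negative-zero point: under IEEE 754 the two zeros carry different bit patterns but compare equal, so whether a program treats them the same really does depend on how the test is written. A small Python check, offered as an aside:

import math, struct

print(0.0 == -0.0)                       # True: the comparison says they are equal
print(math.copysign(1.0, -0.0))          # -1.0: the sign is still there
print(struct.pack('>d', 0.0).hex(),      # ...and the bit patterns differ
      struct.pack('>d', -0.0).hex())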

Hmm…

einstein:~ siuying$ irb
irb(main):001:0> 399999999999999-399999999999998
=> 1
irb(main):002:0> 12.51-12.52
=> -0.00999999999999979
irb(main):003:0> quit

einstein:~ siuying$ python
Python 2.5.1 (r251:54863, Jan 13 2009, 10:26:13)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 12.51-12.52
-0.0099999999999997868