Why Do Computers Suck at Math?

@Darren

0.9999… = 1

because

(10*0.999…)-0.999… = (10-1)*0.999… = 9*0.999…

and

(10*0.999…)-0.999… = 9.999… - 0.999… = 9

Thus 9 = 9*0.999…

Arithmetic’s axioms say that if x*y = x and x ≠ 0, then y = 1 (1 is the unique neutral element for the * operation).
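Writing x for 0.999…, the whole argument fits on one line:

$$10x - x = 9.999\ldots - 0.999\ldots = 9 \quad\Longrightarrow\quad 9x = 9 \quad\Longrightarrow\quad x = 1.$$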

Actually, you’re talking about calculation, not math.

Hi

Try this query in Google:

1 usd in dkk

and then try:

1 us dollar in dkk

A small difference… but it could be more serious when working with larger numbers :wink:

// WiredSource

@Pierre Lebeaupin

Well, you think this is trivial the same way Unicode, date/time, CRLF, and typography are. But I am still surprised at how often I stumble on mistakes in expert programmers’ code that I have to rewrite.

It seems recoding a framework suits the so-called experts, while real-life non-expert developers like me have to deal with the so-called trivial issues in their code.

So THIS is a great topic. Just as “don’t put clear-text passwords in the database” should go without saying, so should “use ISO, IEEE, and RFC standards for representations.”

For the record, I was once told my application (talking to an international web service) had a bug because I did not use “UK” as the Great Britain country code (there is no such thing as Great Britain in ISO 3166; the code is “GB”). Another time I was told “EN” should be used as the England country code.

Trivialities are the basis of beginning to know something, while expertise is the art of ignoring so-called trivia.

While experts are said to have a narrow, specialized culture, developers should have a broad culture; that’s why I like Coding Horror.

Great topic Jeff, thx.

heheh, the vb guy said dim

Why does anyone need a calculator to see that the answer is 1?

Works on Live Search :slight_smile:

http://search.live.com/results.aspx?q=399+999+999+999+999+-+399+999+999+999+998&form=QBLH

> “In mathematics, the repeating decimal 0.999… denotes a real number equal to one,” and “This equality has long been accepted by professional mathematicians and taught in textbooks.”
>
> Not the textbooks I read (or did when I was in university – it’s not a favorite pastime of mine…). The correct mathematical explanation is that 0.99999… (zero point nine recurring) approaches 1. Any pure mathematician (probably not many read Coding Horror…) would surely agree.
>
> Good post, though!
>
> – Darren on May 13, 2009 10:47 PM

Uhh… No. Any pure mathematician would know that a single number doesn’t converge to anything.

Certainly, the *series* {0.9, 0.99, 0.999, 0.9999, …} converges to 1, but that’s just further proof that the *number* 0.999… is 1.

Consider this: if 1 and 0.999… were not equal, then there would be infinitely many numbers between the two. I defy you to find one.
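The “infinitely many” part is just the density of the reals: for any two distinct reals a < b,

$$a \;<\; \frac{a+b}{2} \;<\; b,$$

and the midpoint construction can be repeated as often as you like.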

I normally read CH in an aggregator, but came to comment about the Ariane 5 mention. Given the notoriety of the launch failure and its subsequent use in many undergrad CS/IT courses, it’s not surprising that it has its own article: http://en.wikipedia.org/wiki/Ariane_5_Flight_501

@Vinzent beat me to it, however… the problem wasn’t with the number conversion per se, but with the inappropriate re-use of software from a different project, without checking the design constraints properly and without appropriate in-situ testing.
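For readers who haven’t seen the inquiry report: the reused alignment code converted a 64-bit floating-point value to a 16-bit signed integer without a range check, and Ariane 5’s faster trajectory pushed the value out of range. A minimal C sketch of that kind of unchecked narrowing (illustrative values only; the actual flight software was written in Ada, where the out-of-range conversion raised an unhandled Operand Error):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Made-up horizontal-bias value: fine on an Ariane 4 trajectory,
       out of 16-bit range on Ariane 5's faster one. */
    double horizontal_bias = 65536.0;

    /* The missing guard: without it, converting an out-of-range double
       to int16_t is undefined behavior in C (and raised an unhandled
       exception in the Ada flight code). */
    if (horizontal_bias > INT16_MAX || horizontal_bias < INT16_MIN) {
        printf("out of range: %f does not fit in int16_t\n", horizontal_bias);
        return 1;
    }
    int16_t bh = (int16_t)horizontal_bias;
    printf("converted: %d\n", bh);
    return 0;
}
```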

Jeff - misuse of previous code is a negligent act that should be talked about more, and with more attention than the Windows 3.1 calculator should ever get 17 years later.

Goldberg’s What Every Computer Scientist Should Know About Floating-Point Arithmetic is a must-read (and a must-understand) for every software engineer.
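One canonical surprise from this territory, as a minimal sketch: 0.1, 0.2, and 0.3 have no exact binary representation, so the familiar identity fails.

```c
#include <stdio.h>

int main(void) {
    double sum = 0.1 + 0.2;

    /* Neither 0.1 nor 0.2 (nor 0.3) is exactly representable in
       binary floating point, so the rounded sum differs from the
       rounded literal 0.3 in the last bit. */
    if (sum == 0.3)
        printf("equal\n");
    else
        printf("not equal: 0.1 + 0.2 = %.17g\n", sum); /* 0.30000000000000004 */
    return 0;
}
```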

Also, it may be useful to read the relevant chapters of Knuth’s The Art of Computer Programming. It may be a bit dense, but it’s good.

It’s worth noting that besides the clear difference between the numeric models we learn in school and the models employed in our computers, there’s yet another one, specific to the design of programming languages…

Let’s look at basic C/C++ problems with math, problems which have plagued an incredible amount of existing software. (A runnable sketch of several of these follows the list.)

  1. Signed types get promoted to unsigned, where necessary and unnecessary. The following condition is always false (it evaluates to 0):
    1U > -1
    Yeah, nice: +1 isn’t greater than -1. This is totally brain-dead.
    Although it’s usually not cheap to compare signed and unsigned values, it is possible, and I see no point in making the programmer do it in convoluted ways. Very often (usually?) he doesn’t, for one reason or another (laziness or ignorance), and ends up with a bug.

  2. Suppose our int is 16 bits long and our long is longer than that. Then the following condition is always false as well:
    32767 + 1 == 32768
    Sweet.
    That’s because the numbers on the LHS are ints, but the number on the RHS is a long, per the language design. And per that same simplistic design, there’s no provision to avoid the overflow on the LHS when moving it into an lvalue (or comparing it with an rvalue) that has more bits, where the overflow would become apparent and is likely unwanted.

  3. The following is false as well, also due to signed-unsigned conversion:
    (-3) % 3U == 0
    I’d love to see this produce mathematically more expected results at the expense of extra CPU cycles. Too bad it’s not so.

  4. Assuming 16-bit ints, the following will be false:
    32767 * 2 == 65534
    But these will be true:
    32767L * 2 == 65534
    32767U * 2 == 65534
    It is obvious that the product of two 16-bit ints is going to need up to 32 bits of storage. It is brain-dead to require the programmer to instruct the compiler to produce the full product rather than just the least significant half of its bits. It’s easy to make a mistake here and forget the explicit conversion of one of the multiplicands to long.

  5. The following is also false:
    -2 + 1U + 1.0 == -2 + 1.0 + 1U
    (A+B)+C is no longer the same as (A+C)+B. Now, that’s nice, broken commutativity!

  6. Again assuming 16-bit ints, shifts by 16 or more positions, left or right, are undefined per the design; in practice you get funny results, as if the shifts were done by count % 16. It would be natural for right shifts by 16 or more to produce 0 (assuming we’re talking about unsigned ints; it’s a different thing with signed ints), and I’d be OK with getting 0 (or anything defined, e.g. UINT_MAX) when doing the same in the other direction.
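Here is the promised sketch. It makes points 1, 3, and 5 visible directly when compiled as C on a typical platform with 32-bit ints; points 2, 4, and 6 need a 16-bit-int platform to misbehave, so they are only noted in comments.

```c
#include <stdio.h>

int main(void) {
    /* Point 1: -1 is converted to unsigned (becoming UINT_MAX),
       so the comparison is 1 > 4294967295: false. */
    printf("1U > -1        -> %d\n", 1U > -1);

    /* Point 3: -3 is converted to unsigned before the remainder is
       taken, so the result is not 0. */
    printf("(-3) %% 3U == 0 -> %d\n", (-3) % 3U == 0);

    /* Point 5: the result depends on the order in which the
       signed-to-unsigned and int-to-double conversions happen. */
    printf("-2 + 1U + 1.0  -> %f\n", -2 + 1U + 1.0); /* 4294967296.0 here */
    printf("-2 + 1.0 + 1U  -> %f\n", -2 + 1.0 + 1U); /* 0.0 */

    /* Points 2, 4, and 6 happen to come out "right" with 32-bit ints,
       which is exactly why such code breaks when ported to narrower
       platforms. */
    return 0;
}
```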

Basically, the problem is that in C/C++ a lot of what you learned about arithmetic is no longer true, and not just for the rational (AKA floating-point) numbers. That is, you can’t just use whatever you learned at school. You must learn the way the language does math and write your code accordingly, adapting your ideal-world ideas to the brutality of real computing. C’s arithmetic expressions look familiar and seem to make sense to anybody who understands math, but make no mistake: behind this façade hides a great deception.

It’s possible to explain in part why C/C++ is so goddamn math-unfriendly. It’s basically a generalized assembly language, which must be quite primitive so that it’s easy to compile into the comparably primitive instructions of the target CPU. For instance, very few CPUs have comparison and division of a signed and an unsigned operand. It is uncommon for CPUs to support shift counts larger than the register size in bits. It’s possible to construct such operations, but nobody bothered back when C was being designed, and now it’s too late.

Every C/C++ programmer has to learn this the hard way and figure out a way to do in C what an ideal (or almost so) calculator would do w/o any surprises.
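For instance, one of those “convoluted ways” for point 1, as a hypothetical helper (a sketch, not a standard function): comparing a signed and an unsigned int with the mathematically expected result.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical helper: true iff s < u in ordinary mathematics. */
static bool signed_lt_unsigned(int s, unsigned u) {
    /* Every negative signed value is below every unsigned value;
       non-negative values fit in unsigned and can compare directly. */
    return s < 0 || (unsigned)s < u;
}

int main(void) {
    printf("naive  -1 < 1U : %d\n", -1 < 1U);                    /* 0: wrong */
    printf("helper -1 < 1U : %d\n", signed_lt_unsigned(-1, 1U)); /* 1 */
    return 0;
}
```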

For a long time I actually found programming in assembly more transparent, giving more expected results, than programming in C. In part I attribute that to the difference between the two: when you learn an assembly language, you have to open the CPU manual to see what registers and instructions are there and how they work. In C you don’t seem to need to learn how +, *, or == work, because they’re familiar and you intuitively know how to use them. Unfortunately, those are wrong expectations and assumptions, and wrong expectations and assumptions rarely work well in software engineering.

What helps in the case of C/C++ is reading and understanding the language standard. The standard isn’t an easy read, so many people end up buying books on C/C++. These days there are titles that cover the issues well, but I remember the days when C/C++ books covered this poorly or ignored the topic almost entirely. I hated them, and only finally got everything straight in my head after I’d made every possible mistake and written a sufficient amount of very portable C code.

Btw, last time I checked, the C standard was available online for a mere $18 (cheaper than a book on C/C++ you’d pick up at your local bookstore). Not knowing how to use your tools correctly is not just bad, it’s very bad and wrong. At the same time, I wish C/C++ had never existed in its current form – it could’ve been done better.

1 and 0.999… are simply condensed notation for different infinite sequences that converge to the same real number. I think that the controversy about the 1 = 0.999… thing stems from the fact that most people do not think of a decimal expansion as the limit of a convergent sequence. Here’s an excellent explanation:

http://en.wikipedia.org/wiki/Decimal_representation
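In that spirit, the notation 0.999… literally denotes the limit of the partial sums of a geometric series:

$$0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^k} \;=\; \lim_{n\to\infty}\left(1 - 10^{-n}\right) \;=\; 1.$$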

I never understand why some people have such an issue with 0.999… = 1, but never with 0.333… = 1/3. And given that, what do they think 3 × 0.333… is?
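Spelled out:

$$3 \times 0.333\ldots \;=\; 0.999\ldots, \qquad 3 \times \tfrac{1}{3} \;=\; 1.$$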

That calculator bug existed for quite a while in Windows.
It was still present in Windows for Workgroups 3.11.

bram

At www.karenware.com you can get a freeware calculator program for Windows that will do calculations to hundreds of thousands of digits.
The bigger numbers will take a while.
I imagine it won’t do an infinite number of digits.

The Windows calculator issue went away when MS rewrote it to use arbitrary precision for basic operations…
http://blogs.msdn.com/oldnewthing/archive/2004/05/25/141253.aspx

Using real numbers, 0.999… is absolutely one. Limits are involved in the proof, but aren’t necessary for the original statement. But, as pointed out by William, there are other number systems (hyperreal, superreal, surreal) that have an infinite number of numbers between 0.999… and 1. The defiance shouldn’t be to come up with such numbers (in Hackenstrings, 10(1) followed by any combination of 0s and 1s), but rather to find a use for them in CS.

Joe wrote:

> What did you expect from Windows?

On my Linux system, xcalc does the example correctly, even with 5.0002 - 5.0001. Ditto for command-line dc.

/smug

In a similar vein, this was explained to me by a teacher who was also a priest, which made it all the funnier…

A naked woman is standing 10 meters away from a Mathematician and an Engineer. They are allowed to walk halfway towards the woman before stopping, then half the remaining distance again, and so on and so on…

The Mathematician says: you will never reach her.

The Engineer says: you will get close enough that it makes no difference.

When I was a freshman, my professor asked our class to explain the behavior of

for (i = 0.2; i != 10; i++)

I was the only one to answer this question right and I got an extra point in my final exam!!!
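The expected answer (assuming i was declared as a float or double) is that the loop never terminates: i steps from 9.2 to 10.2 without ever being exactly 10, so the != test never fails. A sketch with a safety cap added so it actually stops:

```c
#include <stdio.h>

int main(void) {
    int steps = 0;
    double i;

    /* i takes the values 0.2, 1.2, ..., 9.2, 10.2, ... and is never
       exactly 10 (0.2 isn't even exactly representable in binary),
       so without the cap this loop would run forever. */
    for (i = 0.2; i != 10; i++) {
        if (++steps > 12)
            break; /* safety cap for the demo */
    }
    printf("gave up at i = %.17g after %d iterations\n", i, steps);
    return 0;
}
```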