An Initiate of the Bayesian Conspiracy

@Dan Neuman:
As far as I know, CRM114 uses a combination of techniques (including HMMs) to catch spam.

Regarding hidden Markov models, the number of words the model “remembers” depends on its order: a first-order HMM takes into account only the single previous state when deciding which state to move to next, while a second-order HMM takes into account the last two states it was in.

Most models I’ve seen are first-order, because the computational and storage costs of higher-order HMMs grow exponentially with the order.
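To make that trade-off concrete, here is a minimal Python sketch (my own illustration, not CRM114's actual implementation, and the variable names are mine) of why the transition table blows up with the model's order:

```python
# Minimal sketch: a k-th order Markov model over a small state set.
# A k-th order model conditions on the last k states, so its transition
# table needs n_states**k rows, one per history -- exponential in k.
from itertools import product

states = ["spam", "ham"]

def transition_table_entries(order, n_states):
    # rows (histories) times columns (next states)
    return (n_states ** order) * n_states

for k in (1, 2, 3):
    print(f"order {k}: {transition_table_entries(k, len(states))} entries")

# First-order: next-state probabilities given only the current state.
first_order = {
    "spam": {"spam": 0.7, "ham": 0.3},
    "ham":  {"spam": 0.1, "ham": 0.9},
}

# Second-order: condition on the last *two* states instead.
second_order = {
    history: {"spam": 0.5, "ham": 0.5}   # placeholder probabilities
    for history in product(states, repeat=2)
}
```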

If you’re curious about HMMs, a great resource is Durbin et al.’s “Biological Sequence Analysis”, or Rabiner’s classic tutorial, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition”.

I’d be embarrassed to display my pathetic attempts at solving this - thanks for the links. I can scarcely remember this concept from a stats class, but I doubt I really understood it even then.

It looks very interesting - hopefully I’ll become a co-conspirator soon.

OK, what do you make the probability?
I make it a little over 8.4%.
Why is that wrong? It seems simple enough to me.

Everyone should have got the right answer, and if they did not, it should be because they could not be a#sed to work it out; after all, the readership are all computer programmers . . .

Well, management probably did better than most, because they would give the answer as “about 1%”.

I like numbers and, above all, examples:

our case: 1000 women

following the given probabilities, we can reason:
10 have BC
990 don’t have BC

if all of them take a M (mammography)
it can be P (positive) or N (negative)

of the 10 with BC we have

  • 8 have BC and M is P
  • 2 have BC and M is N

of the 990 without BC we have

  • 950 don’t have BC and M is N
  • 40 don’t have BC and M is P

if we are in the case of a positive M, then the
probability of having BC is 8/(40+8), i.e.
between 16 and 17%

I like numbers and, above all, examples
(but I’m home sick with flu and fever
and I have an excuse for the bad
calculations in the previous post!) :slight_smile:

our case: 1000 women

following the given probabilities, we can reason:
10 have BC
990 don’t have BC

if all of them take a M (mammography)
it can be P (positive) or N (negative)

of the 10 with BC we have

  • 8 have BC and M is P
  • 2 have BC and M is N

of the 990 without BC we have

  • 895 don’t have BC and M is N
  • 95 don’t have BC and M is P

if we are in the case of a positive M, then the
probability of having BC is 8/(8 + 95), i.e.
between 7 and 8%
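In case it helps, here is the same natural-frequency calculation as a small Python sketch (the variable names are mine):

```python
# Natural-frequency version of the mammography problem.
women = 1000
p_cancer = 0.01          # 1% of women have BC
sensitivity = 0.80       # 80% of women with BC get a positive M
false_pos_rate = 0.096   # 9.6% of women without BC get a positive M

with_bc = women * p_cancer               # 10
without_bc = women - with_bc             # 990

true_pos = with_bc * sensitivity         # 8
false_pos = without_bc * false_pos_rate  # 95.04, i.e. about 95

posterior = true_pos / (true_pos + false_pos)
print(f"P(BC | positive M) = {posterior:.3f}")  # ~0.078, between 7 and 8%
```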

If you can read the question, it’s pretty obvious that the number of false positives must be high. I have to say, as far as statistical problems go, this one’s pretty easy.

Oh, the irony. I just tried to post a comment here asking how Bayesian filtering works, and tried to use an example as a question, using a medication for floppy junk and a medication for hair loss as the two examples, and the comment posting program told me "Your comment could not be submitted due to questionable content: " and then listed the floppy junk medication as the reason.

Bayesian filtering at work.

I didn’t realize this was new. I learned this in college stats in 1977.

Hmm, how’s this:

P(A | B) = P(A ^ B) / P(B)
and
P(A ^ B) = P(B | A) P(A)

so that

P(A | B) = P(B | A) P(A) / P(B)

In other words, even if A is a cause of B, we can consider them as correlated variables and deduce the probability of A given B from knowing the probability of B given A (how often the cause produces the effect), the probability of A (the cause) all by itself, and the probability of B.

There has been some argument (see Wikipedia), however, about assigning the “a priori” probability P(A). Is that valid, considering we are measuring P(A | B)?
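For what it's worth, the identity checks out numerically on the mammography figures; here is a quick Python sketch (my own naming: A = has cancer, B = positive test):

```python
# Numeric check of Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B).
p_A = 0.01              # prior probability of cancer
p_B_given_A = 0.80      # positive test given cancer (sensitivity)
p_B_given_notA = 0.096  # positive test given no cancer (false positive rate)

# Total probability of a positive test, via the law of total probability.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

p_A_given_B = p_B_given_A * p_A / p_B
print(f"P(A | B) = {p_A_given_B:.4f}")  # ~0.0776
```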

James M., the two stats that do not specify an age group must apply to ALL women, thus do apply to women aged 40, so do apply to this case.

This is amazing. Either I’m completely daft, or you are all falling for a rather cheap and very old trick – extra information that doesn’t matter and is simply presented to confuse you.

The answer is NOT something with 7. The answer is clearly 100-9.6, which should be 90.4%. That’s the chance she really has cancer. That’s all the math you have to do. Really. I’ll show you why:

Ah, how embarrassing. Five minutes after posting I find why this looked so easy to me, and why you all got it wrong (or rather didn’t). I guess my initial assumption, that I was daft, was the correct one.

I read:

9.6% of women without breast cancer will also get positive mammographies.

and understood:

9.6% of women who got positive mammographies will have no breast cancer.

Good thing I got here late and didn’t make a fool of myself on the first page :wink:

@ Jon Raynor:

Actually, the “gentle introduction” article mentions this. The “positive” result is a low-occurrence but “weak” piece of evidence, whereas a “negative” result is the typical case and very “strong” evidence. The point of the test isn’t really finding out who DOES have cancer; it’s finding out who DOESN’T. On that score, the test mentioned in the question is quite accurate; false negatives are very rare.
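To put a number on how strong that negative evidence is, here is a quick sketch (my arithmetic, using the problem's figures) of the probability of *not* having cancer given a negative result:

```python
# Negative predictive value: P(no cancer | negative M).
p_cancer = 0.01
sensitivity = 0.80       # so 20% of cancers are missed (false negatives)
false_pos_rate = 0.096   # so 90.4% of healthy women test negative

# Total probability of a negative test.
p_neg = (1 - sensitivity) * p_cancer + (1 - false_pos_rate) * (1 - p_cancer)

npv = (1 - false_pos_rate) * (1 - p_cancer) / p_neg
print(f"P(no cancer | negative M) = {npv:.4f}")  # ~0.9978
```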

Woo, got the correct 7.7% chance without reading anything. I guess being married to a statistician gives knowledge by osmosis.

I solved the problem by building a matrix of true+, false-, false+ and true- before I noticed all I cared about were the true+ and false+ portions. From there it is a simple ratio. If this is Bayes, I’m a Bayes intuitive.

So where’s my prize money?
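For anyone curious, that matrix approach looks something like this in Python (a sketch with my own variable names):

```python
# The 2x2 matrix of outcomes for 10,000 women, from the problem's figures.
N = 10_000
true_pos  = N * 0.01 * 0.80    # cancer, positive M     ->   80
false_neg = N * 0.01 * 0.20    # cancer, negative M     ->   20
false_pos = N * 0.99 * 0.096   # no cancer, positive M  ->  950.4
true_neg  = N * 0.99 * 0.904   # no cancer, negative M  -> 8949.6

# The simple ratio: true positives over all positives.
print(true_pos / (true_pos + false_pos))  # ~0.0776
```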

I was lucky enough to get this, but only because I made a table to work it out.

For me, the “intuitive” step is realizing how easily false positives can skew the results. I used to work in anti-virus/anti-spyware, so false positives are an issue we think about a lot – it may have biased me towards looking for similar issues in any test :slight_smile:

I’ve written up a detailed explanation here, with the table, in case it helps anyone.

http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/

Appreciate the post.

suppose there are 10000 women.
So:
1) group A of 10000 * 0.01 = 100 women have the cancer
2) group B of 10000 * 0.99 = 9900 women have no cancer
3) group C = (group A) * 0.8 = 80 women have the cancer and positive mammographies
4) group D = (group B) * 0.096 = 950.4 women have no cancer but positive mammographies

Since the woman got a positive mammography, she must be in the union of group C and group D. Then the chance of having cancer is:
(group C) / ((group C) + (group D)) = 80 / (80 + 950.4) = 0.07764

What a small chance. :slight_smile:

I like the other poster’s response,

If 9.6% of the positive results are false, then 90.4% are correct. Since she did the exam and it came up positive, she has a 90.4% chance that she has cancer.
That is the problem with modern tests: they are too good.

If 9.6% of the positive results are false,

Alex, you have failed miserably at reading comprehension. Have a nice day.

Seriously, when I read the question, I thought that there must be some kind of red herring, or something like that. Then, after reading the comments, I took my pen and started solving the problem as I used to in high school.

I seriously can’t believe that I used Bayes’ theorem all along in high school without even knowing the formula! It’s all common sense. When you see numbers and you are asked about probability, ALWAYS grab a pen, paper, and a calculator. BTW, the actual probability that a person with a positive mammography has cancer can’t be lower than 10%, right?