P(+|C) vs P(+,C) vs P(C|+) definition

Oct 23, 2011, 2:36:20 PM10/23/11
to Stanford AI Class
In section 2.14, I am confused about the definitions of P(+|C) vs P(+,C) vs
P(C|+). Could someone please explain this topic?

Thanks and Regards

Stuart Gale

Oct 23, 2011, 2:43:21 PM10/23/11
to stanford...@googlegroups.com
Hi Vijay,

P(+|C): What is the probability of the test being positive, given the
patient has cancer?
P(+, C): What is the probability of the test being positive and the
patient having cancer?
P(C|+): What is the probability of the patient having cancer, given
that the test was positive?

You then use the various probability parameters stated in the problem,
Bayes' Rule, etc. to work out each of these probabilities.
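As a sketch of how the three quantities relate numerically, here is a small Python example. The numbers (prior 0.01, sensitivity 0.9, false-positive rate 0.2) are illustrative assumptions, not necessarily the ones from the problem:

```python
# Illustrative (assumed) numbers for the cancer-test setup.
p_c = 0.01               # P(C): prior probability of cancer
p_pos_given_c = 0.9      # P(+|C): probability of a positive test given cancer
p_pos_given_not_c = 0.2  # P(+|~C): false-positive rate

# Joint probability: P(+, C) = P(+|C) * P(C)
p_pos_and_c = p_pos_given_c * p_c

# Total probability of a positive test: P(+) = P(+|C)P(C) + P(+|~C)P(~C)
p_pos = p_pos_given_c * p_c + p_pos_given_not_c * (1 - p_c)

# Bayes' Rule: P(C|+) = P(+, C) / P(+)
p_c_given_pos = p_pos_and_c / p_pos

print(round(p_pos_and_c, 4))   # 0.009
print(round(p_c_given_pos, 4)) # 0.0435
```

Note how small P(C|+) is compared to P(+|C): the low prior dominates.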

Hope that helps.



Vijayakumar Ramdoss

Oct 23, 2011, 4:12:54 PM10/23/11
to stanford...@googlegroups.com
Thanks for your explanation Stuart

David Weiseth

Oct 24, 2011, 12:08:07 AM10/24/11
to stanford...@googlegroups.com
Let me add a slightly more intuitive explanation, in case you find yourself, as I did, with less exposure to probability.

You have a sample set  S of all elements.

The chance that you can find an element with a combination of characteristics in this set S is given by the Joint Probability.  We represent this by P(characteristic 1, characteristic 2, ...)  // comma separated; remember, this is the chance of pulling such an element from the entire sample set S, e.g. P(A,B).

The chance that you can find an element with a characteristic (or set of characteristics) given that the set has been prefiltered, i.e. reduced to the subset of elements that have a given characteristic, is the Conditional Probability.  We represent this by P( characteristic to find | characteristic that was prefiltered ), e.g. P( A | B )  // read "the chance of A given that B has already occurred"; you are then only looking at the chance within that subset, not the full set S.

The whole complication surrounds the fact that, in general, P(A,B) =/= P(A) * P(B)  // because A and B cannot be assumed to occur independently in the sample set: the presence of A can affect the presence of B.


A = lightning
B = rain

You cannot simply say that the chance of both A and B together is given by multiplying their individual chances of occurring in S, e.g. P( lightning ) * P( rain ).  They tend to occur together, not independently, so we need more involved mechanics to derive the missing information our Agent needs to make decisions about its environment.  This is what Bayes' Theorem and its surrounding rules describe.
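To make the dependence concrete, here is a small sketch with a made-up sample set of ten "days", where lightning only ever occurs on rainy days:

```python
# A tiny made-up sample set S of "days"; lightning here only occurs
# together with rain, so the two characteristics are dependent.
S = (
    [{"rain": True,  "lightning": True}]  * 3 +
    [{"rain": True,  "lightning": False}] * 2 +
    [{"rain": False, "lightning": False}] * 5
)

def p(pred):
    """Fraction of the sample set S whose elements satisfy pred."""
    return sum(1 for day in S if pred(day)) / len(S)

p_rain  = p(lambda d: d["rain"])                     # P(rain)      = 0.5
p_light = p(lambda d: d["lightning"])                # P(lightning) = 0.3
p_joint = p(lambda d: d["rain"] and d["lightning"])  # P(rain, lightning) = 0.3

# Dependent characteristics: the joint is NOT the product of the marginals.
print(p_joint, p_rain * p_light)  # 0.3 vs 0.15
```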

If you are new to this topic, like me, I recommend a little outside study: Khan Academy, or the book "Probabilities for Dummies" that I got for my Kindle for about $10.

Professor Thrun has put us on notice that this topic will be central to all forthcoming topics and is one of the most important in A.I., so the time invested in getting familiar with it will be well worth it.

Also, when working out the probabilities it can be quite helpful to draw out the graph/tree and see which outcomes satisfy the goal of the problem or program.

Marginal Probabilities are concerned with just one characteristic, whereas the other kinds, Joint and Conditional, involve two or more.

Hope that helps too. --David

P(A) + P(~A) = 1

P( A | B) * P(B) = P(A,B) = P(B,A) = P( B | A) * P(A)

Note:   P( A | B ) =/= P(A) ;  P( B | A ) =/= P(B) // since they cannot be assumed to be independent
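The identity P(A|B) P(B) = P(A,B) = P(B|A) P(A) can be checked by simple counting. Here is a sketch with made-up counts in a sample set of 100 elements:

```python
# Made-up counts in a sample set S of 100 elements.
n    = 100
n_a  = 40   # elements with characteristic A
n_b  = 25   # elements with characteristic B
n_ab = 15   # elements with both A and B

p_a, p_b, p_ab = n_a / n, n_b / n, n_ab / n
p_a_given_b = n_ab / n_b   # P(A|B): restrict attention to the B subset
p_b_given_a = n_ab / n_a   # P(B|A): restrict attention to the A subset

# Both routes recover the same joint probability:
#   P(A|B) P(B) = P(A,B) = P(B,A) = P(B|A) P(A)
assert abs(p_a_given_b * p_b - p_ab) < 1e-12
assert abs(p_b_given_a * p_a - p_ab) < 1e-12

# And here P(A|B) != P(A): knowing B changed the odds of A.
print(p_a_given_b, p_a)  # 0.6 vs 0.4
```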

David Weiseth

Oct 24, 2011, 1:37:40 AM10/24/11
to stanford...@googlegroups.com
Oops, I should have added a bit more...

step 1

Start with sample S

step 2   apply P(B)  // this filter is applied alone, so the normal (marginal) terms apply

Get the subset of S where B is present

step 3  apply P(A | B)  // this is the combination where things get a little different, hence the different term

Get the subset of subset B where A is present

The resulting fraction of S  =  P(A,B)
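The three steps above can be sketched with a concrete, made-up sample set:

```python
# Made-up sample set S of 10 elements, tagged with characteristics A and B.
S = [{"A": a, "B": b} for a, b in
     [(1, 1)] * 3 + [(1, 0)] * 2 + [(0, 1)] * 1 + [(0, 0)] * 4]

# step 1: start with the full sample set S

# step 2: keep only the elements where B is present
subset_b = [x for x in S if x["B"]]

# step 3: within that subset, keep only the elements where A is also present
subset_ab = [x for x in subset_b if x["A"]]

p_b = len(subset_b) / len(S)                  # P(B)   = 4/10
p_a_given_b = len(subset_ab) / len(subset_b)  # P(A|B) = 3/4
p_ab = len(subset_ab) / len(S)                # P(A,B) = 3/10

# The fraction of S that survives both filters is the joint probability:
assert abs(p_a_given_b * p_b - p_ab) < 1e-12
```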

Note that
P(A | B) P(B) + P(~A | B) P(B) = P(B)  // gives me the whole subset where B is present in S, by recombining A & ~A
P(B | A) P(A) + P(~B | A) P(A) = P(A)  // gives me the whole subset where A is present in S, by recombining B & ~B

meaning: if you look at the subset where B is present, you divide it into the part where A is also present and the part where A is not present ( ~ = NOT); adding those two pieces back together gets you the whole subset, as described by applying P(B) or P(A) to S, depending on which one you are working on.  (Equivalently, P(A | B) + P(~A | B) = 1.)

Here another one that can be useful

P(A) = P(A | B) P(B) + P(A | ~B) P(~B)

so this one recovers P(A) against the full set S: it removes the subgrouping on B by adding the B and ~B cases back together, while keeping the condition on A.
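A quick numeric sketch of that identity, with made-up values:

```python
# Total probability: P(A) = P(A|B) P(B) + P(A|~B) P(~B), made-up numbers.
p_b = 0.3            # P(B)
p_a_given_b = 0.5    # P(A|B)
p_a_given_not_b = 0.2  # P(A|~B)

# Sum the A-probability over the B and ~B cases, weighted by their chances.
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)

print(round(p_a, 2))  # 0.29
```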

Remember to use the graph/tree so you do not forget any parts of the total set.  You might need to add some of them back in, depending on the goal of the problem.

P(c1)  + P(c2) + ... + P(cn)  = 1  // if you add back all the pieces of a complete partition they should sum to 1, but you might only need to add back the ones that answer the problem.

It is all about the mechanics of transforming the data you have into the answer to the problem: math stuff.

Hope this helps.  --David

David Weiseth

Oct 24, 2011, 2:17:03 AM10/24/11
to stanford...@googlegroups.com

P(A | B) P(B) + P(~A | B) P(B) = P(B)  // gives me the whole subset where B is present in S, by recombining A & ~A
P(B | A) P(A) + P(~B | A) P(A) = P(A)  // gives me the whole subset where A is present in S, by recombining B & ~B
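These two identities can be checked by counting in a made-up sample set:

```python
# Made-up counts in a sample set of 100 elements.
n, n_a, n_b, n_ab = 100, 40, 25, 15

p_a, p_b = n_a / n, n_b / n
p_a_given_b     = n_ab / n_b           # P(A|B)
p_not_a_given_b = (n_b - n_ab) / n_b   # P(~A|B)
p_b_given_a     = n_ab / n_a           # P(B|A)
p_not_b_given_a = (n_a - n_ab) / n_a   # P(~B|A)

# Recombining A and ~A inside the B subset recovers P(B), and vice versa.
assert abs(p_a_given_b * p_b + p_not_a_given_b * p_b - p_b) < 1e-12
assert abs(p_b_given_a * p_a + p_not_b_given_a * p_a - p_a) < 1e-12
```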