Probabilistic Learning: Bayesian Methods
Probability
The probability of an event can be estimated from observed data
by dividing the number of trials in which the event occurred by the
total number of trials. For example, if it rained on 3 out of 10 days, the
probability of rain can be estimated as 30%. Similarly, if 10 out of
50 emails are spam, then the probability of spam can be estimated
as 20%. The notation P(A) is used to denote the probability of
event A, as in P(spam) = 0.20.
The total probability of all possible outcomes of a trial must
always be 100%. Thus, if the trial has only two outcomes that cannot
occur simultaneously, such as rain or shine, spam or not spam, then
knowing the probability of either outcome reveals the probability
of the other.
When two events are mutually exclusive and exhaustive (they
cannot occur at the same time and are the only two possible
outcomes) and P(A) = q, then the probability of the complement is
P(not A) = 1 - q.
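As a quick illustration, here is a minimal Python sketch of this counting-based estimate (the function name estimate_probability is ours, chosen for illustration):

```python
# Minimal sketch: estimating probabilities as observed relative frequencies.

def estimate_probability(event_count, total_trials):
    """Estimate P(event) as the fraction of trials in which it occurred."""
    return event_count / total_trials

p_rain = estimate_probability(3, 10)   # rained 3 out of 10 days -> 0.30
p_spam = estimate_probability(10, 50)  # 10 of 50 emails are spam -> 0.20

# For two mutually exclusive and exhaustive outcomes, probabilities sum to 1,
# so the complement of P(A) = q is 1 - q.
p_not_spam = 1 - p_spam                # 0.80
print(p_rain, p_spam, p_not_spam)
```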
Joint Probability:
We may be interested in monitoring several non-mutually
exclusive events for the same trial. If these events occur along with the
event of interest, we may be able to use them to make predictions.
Consider, for instance, a second event based on the outcome that
the email message contains the word Viagra. For most people, this
word is only likely to show up in a spam message; its presence
would be strong evidence that the email is spam. Suppose the probability
that an email contains the word Viagra is 5%.
We know that 20% of all messages were spam, and 5% of all
messages contained Viagra. We need to quantify the degree of
overlap between the two proportions; that is, we hope to estimate
the probability of both spam and Viagra occurring, which can be
written as P(spam ∩ Viagra).
Calculating P(spam ∩ Viagra) depends on the joint probability
of the two events. If the two events are totally unrelated, they are
called independent events. Dependent events, on the other hand,
are the basis of predictive modeling. For instance, the presence of
clouds is likely to be predictive of a rainy day.
If we assume that P(spam) and P(Viagra) are independent, we
can calculate P(spam ∩ Viagra) as the product of the probabilities of each:
P(spam ∩ Viagra) = P(spam) × P(Viagra) = 0.20 × 0.05 = 0.01
That is, 1% of all messages would be spam and contain the word Viagra.
In reality, it is far more likely that P(spam) and P(Viagra) are highly
dependent, which means that this calculation is incorrect.
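As a sanity check, this small Python sketch contrasts the independence assumption with the dependence we actually expect, using the numbers from the text:

```python
# Joint probability under the (incorrect) independence assumption.
p_spam = 0.20    # 20% of all messages are spam
p_viagra = 0.05  # 5% of all messages contain the word "Viagra"

# If spam and Viagra were independent events:
p_joint_if_independent = p_spam * p_viagra
print(p_joint_if_independent)  # 0.01 -> only 1% of messages

# The likelihood table in the next section shows the observed joint
# frequency is 4/100 = 0.04, four times larger, so the events are dependent.
```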
Conditional probability with Bayes’ theorem:
The relationship between dependent events can be described using
Bayes’ theorem. The notation P(A|B) is read as the probability of
event A given that event B has occurred. This is known as
conditional probability, since the probability of A is dependent
(that is, conditional) on what happened with event B.
P(A|B) = P(B|A) × P(A) / P(B) = P(A ∩ B) / P(B)
To understand a little better how Bayes' theorem works,
suppose we are tasked with guessing the probability that an
incoming email is spam. Without any additional evidence, the
most reasonable guess would be the probability that any prior
message was spam (20%). This estimate is known as the prior
probability.
Now suppose that we obtain an additional piece of evidence: the
term Viagra was used in the incoming message. The
probability that the word Viagra was used in previous spam
messages is called the likelihood, and the probability that Viagra
appeared in any message at all is known as the marginal likelihood.
By applying Bayes' theorem to this evidence, we can compute a
posterior probability that measures how likely the message is to
be spam. If the posterior probability is greater than 50%, the
message is more likely to be spam than not.
P(spam|Viagra) = P(Viagra|spam) × P(spam) / P(Viagra)
Likelihood table for the word Viagra:

Frequency     Viagra: Yes    Viagra: No     Total
spam             4/20           16/20          20
non-spam         1/80           79/80          80
Total            5/100          95/100        100
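Plugging the table values directly into Bayes' theorem gives the single-word posterior, a value the text does not state explicitly but that follows from the table; here is a short Python sketch:

```python
# P(spam | Viagra) = P(Viagra | spam) * P(spam) / P(Viagra),
# with every value read off the likelihood table above.
likelihood = 4 / 20   # P(Viagra | spam): 4 of the 20 spam messages
prior = 20 / 100      # P(spam): 20 of 100 messages
marginal = 5 / 100    # P(Viagra): 5 of 100 messages contain the word

posterior = likelihood * prior / marginal
print(posterior)  # 0.8 -> a message containing "Viagra" is 80% likely to be spam
```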
Suppose the incoming message contains the words Viagra and Unsubscribe
but not Money or Groceries. Using the values in the likelihood table for
Viagra, together with the corresponding tables for Money, Groceries, and
Unsubscribe, we can start filling the numbers into these equations. Because
the denominator is the same for both classes, we will ignore it for now.
The overall likelihood of spam is then:
(4/20) × (10/20) × (20/20) × (12/20) × (20/100) = 0.012
while the likelihood of non-spam given this pattern of words is:
(1/80) × (66/80) × (71/80) × (23/80) × (80/100) = 0.002
Since 0.012/0.002 = 6, this says that an email with this pattern of words is 6
times more likely to be spam than non-spam.
To convert these likelihoods into probabilities, we divide each by their sum:
0.012/(0.012 + 0.002) = 0.857 = 85.7%
The probability that the message is spam is equal to the likelihood that the
message is spam divided by the sum of the likelihoods that the message is either
spam or non-spam. Similarly, the probability of non-spam is
0.002/(0.012 + 0.002) = 0.143.
Given the pattern of words in the message, we expect that the message is
spam with 85.7% probability and non-spam with 14.3% probability.
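The whole four-word calculation can be reproduced in a few lines of Python; the word pattern (Viagra and Unsubscribe present, Money and Groceries absent) is our reading of the numbers above:

```python
# Unnormalized likelihoods for the observed word pattern, as in the text.
spam_likelihood = (4/20) * (10/20) * (20/20) * (12/20) * (20/100)  # ~0.012
ham_likelihood  = (1/80) * (66/80) * (71/80) * (23/80) * (80/100)  # ~0.002

# Normalize so the two outcomes sum to 1.
total = spam_likelihood + ham_likelihood
print(spam_likelihood / total)  # ~0.85 (the text rounds to 0.012/0.014 = 85.7%)
print(ham_likelihood / total)   # ~0.15
```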
The naïve Bayes classification algorithm can be summarized by the
following formula. The probability of level L of class C, given the
evidence provided by features F1, F2, …, Fn, is equal to the product of the
probabilities of each piece of evidence conditioned on the class level, times
the prior probability of the class level, times a scaling factor 1/Z which
converts the result into a probability:
P(C_L | F_1, F_2, …, F_n) = (1/Z) × p(C_L) × ∏_{i=1..n} p(F_i | C_L)
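A compact, general sketch of this formula follows; the function name and the dictionary layout are our own choices for illustration:

```python
from math import prod

def naive_bayes_posterior(priors, cond_probs):
    """Return P(C_L | F_1, ..., F_n) for every class level L.

    priors:     {level: p(C_L)}
    cond_probs: {level: [p(F_i | C_L) for each observed feature value]}
    """
    # Unnormalized score: p(C_L) * product of p(F_i | C_L)
    scores = {lvl: priors[lvl] * prod(cond_probs[lvl]) for lvl in priors}
    z = sum(scores.values())  # the scaling factor Z from the formula
    return {lvl: s / z for lvl, s in scores.items()}

# The worked spam example from above:
print(naive_bayes_posterior(
    priors={"spam": 20/100, "non-spam": 80/100},
    cond_probs={"spam":     [4/20, 10/20, 20/20, 12/20],
                "non-spam": [1/80, 66/80, 71/80, 23/80]},
))  # {'spam': ~0.85, 'non-spam': ~0.15}
```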
A problem arises if an event never occurs for one or more levels of
the class in the training set. For example, suppose the term Groceries had
never previously appeared in a spam message. Consequently, P(Groceries|spam) = 0.
Because probabilities in naïve Bayes are multiplied, this zero value causes the
posterior probability of spam to be zero, giving a single word the ability to
nullify and overrule all of the other evidence.
A solution to this problem involves using the Laplace estimator. The
Laplace estimator adds a small number to each of the counts in the
frequency table, which ensures that each feature has a nonzero probability
of occurring with each class. Typically, the estimator is set to 1.
Let us see how this affects our prediction for a message that contains all
four terms: Viagra, Money, Groceries, and Unsubscribe. Without smoothing,
its spam likelihood collapses to zero because of the Groceries term:
(4/20) × (10/20) × (0/20) × (12/20) × (20/100) = 0
Using a Laplace value of 1, we add 1 to each numerator in the likelihood
function. The total number of 1s added must also be added to each
denominator. The likelihood of spam becomes:
(5/24) × (11/24) × (1/24) × (13/24) × (20/100) ≈ 0.0004
and the likelihood of non-spam is:
(2/84) × (15/84) × (9/84) × (24/84) × (80/100) ≈ 0.0001
Probability of spam = 0.0004/(0.0004 + 0.0001) = 0.8 = 80%
Probability of non-spam = 20%
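The smoothing step itself is easy to express in code; this sketch (with an illustrative helper name smoothed) reproduces the 20 → 24 and 80 → 84 denominators from the text:

```python
def smoothed(count, class_total, laplace=1, n_features=4):
    """Laplace-smoothed estimate of P(word | class)."""
    # Add `laplace` to each count; the denominator grows by one unit
    # per feature, as in the text (20 -> 24, 80 -> 84).
    return (count + laplace) / (class_total + laplace * n_features)

# Word counts for Viagra, Money, Groceries, Unsubscribe in the 20 spam
# messages; Groceries never occurred, which caused the zero likelihood.
spam_counts = [4, 10, 0, 12]

spam_likelihood = 20 / 100  # start from the prior P(spam)
for count in spam_counts:
    spam_likelihood *= smoothed(count, 20)
print(spam_likelihood)  # ~0.0004: no longer nullified by the zero count
```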
Using numeric features with naïve Bayes
Because naïve Bayes uses frequency tables to learn from the data, each
feature must be categorical in order to create the combinations of class and
feature values. Since numeric features do not have categories of values, the
naïve Bayes algorithm would not work on them without modification.
One easy and effective solution is to discretize a numeric feature, which
means that the numbers are put into categories known as bins. For this reason,
discretization is often called binning.
There are several different ways of binning a numeric feature. The most
common is to explore the data for natural categories or cut points in the
distribution. For example, suppose you added a feature to the spam dataset
that recorded the time (on a 24-hour clock) at which each email was sent.
We might divide the day into four bins of 6 hours each, based on the fact that
message frequency is low in the early hours of the morning, picks up
during business hours, and tapers off in the evening. This pattern suggests
four natural bins of activity. Each email then gets a categorical feature
stating which bin it belongs to.
Note that if there are no obvious cut points, one option is to discretize the
feature using quantiles.
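As a sketch of this idea (the bin labels and the sample hours are made up for illustration):

```python
hours_sent = [2, 7, 9, 13, 18, 22]  # hour of day each email was sent

def hour_to_bin(hour):
    """Map an hour on a 24-hour clock to one of four 6-hour bins."""
    if hour < 6:
        return "early morning"  # 00:00-05:59, low message frequency
    elif hour < 12:
        return "morning"        # 06:00-11:59, activity picks up
    elif hour < 18:
        return "afternoon"      # 12:00-17:59, business hours
    else:
        return "evening"        # 18:00-23:59, tapers off

print([hour_to_bin(h) for h in hours_sent])
# ['early morning', 'morning', 'morning', 'afternoon', 'evening', 'evening']
```

With no obvious cut points, the bin boundaries could instead come from quantiles of the observed hours (for example via numpy.quantile).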
Practice exercises on Naïve Bayes
Exercise 1:
Using the data above, find the probability that an email is spam or not spam if it contains:
- "groceries", "Money", and "unsubscribe", but not "Viagra"
- "Viagra" and "groceries", but not "money" or "unsubscribe"
Exercise 2:
A retail store carries a product that is supplied by three manufacturers: A, B, and C.
30% of the stock comes from A, 20% from B, and 50% from C.
It is known that 2% of the products from A are defective, 3% of those from B are
defective, and 5% of those from C are defective.
A) If a product is randomly selected from this store, what is the probability that it is
defective?