Unit IV Probability Bayes
Discrete probability, Bayes’ theorem, expected value and variance, Boolean functions,
minimization of circuits.
Why?
Ideas and techniques from probability theory are used to determine the average-case
complexity of algorithms.
Probabilistic algorithms can be used to solve many problems that cannot be easily or
practically solved by deterministic algorithms.
In combinatorics, probability theory can even be used to show that objects with certain
properties exist.
Probability theory can help us answer questions that involve uncertainty, such as
determining whether we should reject an incoming mail message as spam based on the
words that appear in the message.
Terminology
An experiment is a procedure that yields one of a given set of possible outcomes.
The sample space of the experiment is the set of possible outcomes.
An event is a subset of the sample space.
The probability of an event is a number between 0 and 1.
Laplace’s definition of the probability of an event with finitely many possible
outcomes: if S is a finite sample space of equally likely outcomes and E ⊆ S is an
event, then p(E) = |E| / |S|.
Discrete probability deals with events that can take on distinct values. The probability of
an event is a measure of the likelihood that the event will occur.
Consider a fair six-sided die. The probability of rolling a specific number (e.g., a 4) is: 1/6
If you roll the die once, what is the probability of rolling an even number?
1/2
An urn contains four blue balls and five red balls. What is the probability that a ball chosen
at random from the urn is blue?
To calculate the probability, note that there are nine possible outcomes, and four of these
possible outcomes produce a blue ball. Hence, the probability that a blue ball is chosen is
4/9.
What is the probability that when two dice are rolled, the sum of the numbers on the two
dice is 7?
There are a total of 36 equally likely possible outcomes when two dice are rolled. There are
six successful outcomes, namely, (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), and (6, 1), where the
values of the first and second dice are represented by an ordered pair. Hence, the
probability that a seven comes up when two fair dice are rolled is 6/36 = 1/6.
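The 6/36 count above can be verified by brute-force enumeration; a quick sketch in Python:

```python
from itertools import product
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))
favorable = [o for o in outcomes if sum(o) == 7]

p = Fraction(len(favorable), len(outcomes))
print(favorable)  # the six pairs (1, 6), (2, 5), ..., (6, 1)
print(p)          # 1/6
```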
A sequence of 10 bits is randomly generated. What is the probability that at least one of
these bits is 0?
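By the complement rule, p(at least one 0) = 1 − p(all ten bits are 1) = 1 − (1/2)^10 = 1023/1024. A quick check in Python:

```python
from fractions import Fraction
from itertools import product

# Complement rule: p(at least one 0) = 1 - p(all ten bits are 1)
p = 1 - Fraction(1, 2) ** 10
print(p)  # 1023/1024

# Brute-force check over all 2^10 equally likely bit strings
count = sum(1 for bits in product('01', repeat=10) if '0' in bits)
assert Fraction(count, 2 ** 10) == p
```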
Let E1 and E2 be events in the sample space S. Then
p(E1 ∪ E2) = p(E1) + p(E2) − p(E1 ∩ E2).
What is the probability that a positive integer selected at random from the set of positive
integers not exceeding 100 is divisible by either 2 or 5?
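Inclusion–exclusion gives 50 + 20 − 10 = 60 favorable outcomes out of 100; a small check in Python:

```python
from fractions import Fraction

# Positive integers up to 100 divisible by 2 or by 5,
# counted both directly and via inclusion-exclusion.
S = range(1, 101)
div2 = sum(1 for n in S if n % 2 == 0)      # 50
div5 = sum(1 for n in S if n % 5 == 0)      # 20
div10 = sum(1 for n in S if n % 10 == 0)    # 10 (divisible by both 2 and 5)

assert div2 + div5 - div10 == 60
print(Fraction(div2 + div5 - div10, 100))   # 3/5
```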
Conditional Probability
A bit string of length four is generated at random so that each of the 16 strings of length
four is equally likely. What is the probability that it contains at least two consecutive 0s,
given that its first bit is a 0? (We assume that 0 bits and 1 bits are equally likely.)
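Using the definition p(E | F) = p(E ∩ F) / p(F), the 16 strings can simply be enumerated; a quick sketch in Python:

```python
from fractions import Fraction
from itertools import product

# E: the string contains at least two consecutive 0s; F: the first bit is 0.
strings = [''.join(bits) for bits in product('01', repeat=4)]
F = [s for s in strings if s[0] == '0']
E_and_F = [s for s in F if '00' in s]

# p(E|F) = p(E and F) / p(F), with all 16 strings equally likely
p = Fraction(len(E_and_F), len(F))
print(p)  # 5/8
```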
Conditional probability provides a way to calculate the likelihood of one event given
another, Bayes' Theorem takes that further by allowing us to revise the probabilities
of hypotheses in light of new data.
Revising Probabilities: Apply Bayes' Theorem when you need to update the probability
of a hypothesis (an event) based on new evidence or data. For example, if you initially
estimate the probability of a disease being present in a patient, and then you receive a
positive test result, you would use Bayes' Theorem to revise the probability of the disease
being present.
Suppose that one person in 100,000 has a particular rare disease for which there is a fairly
accurate diagnostic test. This test is correct 99.0% of the time when given to a person
selected at random who has the disease;
it is correct 99.5% of the time when given to a person selected at random who does not
have the disease.
Given this information can we find
(a) the probability that a person who tests positive for the disease, has the disease?
(b) the probability that a person who tests negative for the disease, does not have the
disease?
Should a person who tests positive be very concerned that he or she has the disease?
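Plugging the stated rates into Bayes’ theorem answers both questions; a sketch in Python:

```python
# Bayes' theorem applied to the rare-disease test described above.
p_d = 1 / 100_000          # prior: one person in 100,000 has the disease
p_pos_d = 0.99             # test is correct 99.0% of the time for the sick
p_neg_nd = 0.995           # test is correct 99.5% of the time for the healthy

p_pos_nd = 1 - p_neg_nd    # false-positive rate
p_nd = 1 - p_d

# (a) p(disease | positive test)
p_d_pos = (p_pos_d * p_d) / (p_pos_d * p_d + p_pos_nd * p_nd)
print(round(p_d_pos, 4))   # about 0.002

# (b) p(no disease | negative test)
p_neg_d = 1 - p_pos_d
p_nd_neg = (p_neg_nd * p_nd) / (p_neg_nd * p_nd + p_neg_d * p_d)
print(round(p_nd_neg, 7))  # about 0.9999999
```

So a person who tests positive has only about a 0.2% chance of actually having the disease: the extreme rarity of the disease outweighs the accuracy of the test.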
Simplified Formula
Bayes’ theorem describes the probability of an event based on prior knowledge of
conditions related to the event. If E and F are events with p(E) ≠ 0 and p(F) ≠ 0, then
p(F | E) = p(E | F) p(F) / [ p(E | F) p(F) + p(E | F′) p(F′) ],
where F′ denotes the complement of F.
The expected value of a random variable is the sum over all elements in a sample space of
the product of the probability of the element and the value of the random variable at
this element.
Consequently, the expected value is a weighted average of the values of a random
variable.
The expected value of a random variable provides a central point for the distribution of
values of this random variable.
We can solve many problems using the notion of the expected value of a random variable,
such as determining who has an advantage in gambling games and computing the
average-case complexity of algorithms.
Another useful measure of a random variable is its variance, which tells us how spread
out the values of this random variable are.
We can use the variance of a random variable to help us estimate the probability that a
random variable takes values far removed from its expected value
Expected Value of a Die:
Let X be the number that comes up when a fair die is rolled. What is the expected value of
X?
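The expected value is the probability-weighted sum over the six faces, E(X) = (1 + 2 + ··· + 6)/6 = 7/2; a quick sketch in Python:

```python
from fractions import Fraction

# E(X) = sum over outcomes of p(outcome) * value of X at that outcome
values = range(1, 7)
p = Fraction(1, 6)                 # each face of a fair die is equally likely
expected = sum(p * v for v in values)
print(expected)  # 7/2
```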
H/w: A fair coin is flipped three times. Let S be the sample space of the eight possible
outcomes, and let X be the random variable that assigns to an outcome the number of heads
in this outcome. What is the expected value of X?
Expected value and variance are fundamental concepts in probability and statistics that help
us understand the behavior of random variables.
The expected value, also known as the mean, represents the average outcome if an
experiment were repeated many times.
Variance, on the other hand, measures the spread or dispersion of a set of values.
For example, if you were to roll a fair six-sided die, the expected value of the roll would be
the average of all possible outcomes (1 through 6), which is 3.5. The variance would give
you an idea of how much each roll deviates from this average value.
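For the fair die, the deviation from the mean 3.5 can be computed exactly; a small sketch in Python:

```python
from fractions import Fraction

# Var(X) = E[(X - E(X))^2] for one roll of a fair six-sided die
values = range(1, 7)
p = Fraction(1, 6)
mean = sum(p * v for v in values)                 # 7/2 = 3.5
var = sum(p * (v - mean) ** 2 for v in values)
print(mean, var)  # 7/2 35/12  (about 2.92)
```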
Variance can also be expressed using the expected value in the following way:
Var(X) = E[(X − E(X))²]
A high variance indicates that the data points are spread out widely around the mean,
suggesting high variability or dispersion in the dataset. This implies greater uncertainty or
risk
A low variance indicates that the data points are clustered closely around the mean,
suggesting low variability or dispersion in the dataset. This implies greater consistency and
less uncertainty.
Variance cannot be negative. Since variance is calculated as the average of the squared
differences from the mean, it is always a non-negative value.
A game where you can win $10 (probability 0.5) or lose $5 (probability 0.5).
Calculate expected value?
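The expected value is 0.5 · 10 + 0.5 · (−5) = 2.5, and the same weighting gives the variance; a quick check in Python:

```python
# Expected winnings of the game: win $10 or lose $5, each with probability 0.5
outcomes = {10: 0.5, -5: 0.5}

expected = sum(value * prob for value, prob in outcomes.items())
print(expected)  # 2.5: on average you gain $2.50 per play

# Variance via Var(X) = E[(X - E(X))^2]
variance = sum(prob * (value - expected) ** 2 for value, prob in outcomes.items())
print(variance)  # 56.25
```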
Boolean Functions
Definition:
A Boolean function is a function that takes Boolean inputs (true/false or 1/0) and produces a Boolean
output.
Example:
Scenario: Consider a function f(A,B)=A∧B (logical AND).
Each Boolean expression represents a Boolean function.
The dual of a Boolean expression is obtained by interchanging Boolean sums and Boolean products and
interchanging 0s and 1s.
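A minimal sketch of the example function f(A, B) = A ∧ B in Python, together with its dual (interchanging the Boolean product with the Boolean sum gives A ∨ B):

```python
# f(A, B) = A AND B, a Boolean function of two Boolean inputs
def f(a: int, b: int) -> int:
    return a & b

# The dual interchanges Boolean products and sums: A OR B
def f_dual(a: int, b: int) -> int:
    return a | b

# Print the truth tables side by side
for a in (0, 1):
    for b in (0, 1):
        print(a, b, f(a, b), f_dual(a, b))
```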
Minimization of Circuits
Definition:
Minimization of circuits involves reducing the number of gates and inputs in a logic circuit
while maintaining the same output.
Methods:
1. Boolean Algebra: Use algebraic methods to simplify expressions.
2. Karnaugh Maps (K-maps): Visual method for simplifying Boolean expressions.
Self read
Minimizing Circuits. A circuit is minimized if it is a sum-of-products expression
that uses the least number of products of literals, and each product contains the least
number of literals possible, to produce the desired output.
The laws of Boolean algebra can often be used to simplify a complicated Boolean
expression, particularly the sum-of-products expressions that we have used to represent
Boolean functions.
Any such simplification also leads to a simplification of combinatorial circuits.
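As an illustration, the sum-of-products xy + xy′ simplifies to x, the same reduction a K-map would give for two adjacent cells; a quick exhaustive check in Python:

```python
from itertools import product

# Verify the simplification x*y + x*y' = x over all input combinations.
def original(x: int, y: int) -> int:
    return (x & y) | (x & (1 - y))

def simplified(x: int, y: int) -> int:
    return x

for x, y in product((0, 1), repeat=2):
    assert original(x, y) == simplified(x, y)
print("x*y + x*y' equals x for all inputs")
```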
Simplification can be carried out either with K-maps or directly with the laws of Boolean algebra.