Discrete Random Variable
Jyotsna Singh
Division of ECE, NSUT
Concept of Random Variable; Probability Mass Function and Probability Density Function; Examples of Random Variables; the Binomial, Poisson, Normal, Geometric, Gamma, and Uniform Random Variables and their Properties; Functions of One Random Variable.
TOTAL LECTURES: 10
Definition
A random variable is a numerical quantity that is generated
by a random experiment.
Experiment | Number X | Possible Values of X
Roll two fair dice | Sum of the number of dots on the top faces | 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
Flip a fair coin repeatedly | Number of tosses until the coin lands heads | 1, 2, 3, 4, …
Measure the voltage at an electrical outlet | Voltage measured | 118 ≤ x ≤ 122
Operate a light bulb until it burns out | Time until the bulb burns out | 0 ≤ x < ∞
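To make these examples concrete, here is a minimal Python sketch (my addition, not part of the original slides) that simulates the first two random variables in the table, assuming a fair die and a fair coin; the function names are illustrative only.

```python
import random

def roll_two_dice():
    """X = sum of the dots on the top faces of two fair dice (values 2..12)."""
    return random.randint(1, 6) + random.randint(1, 6)

def tosses_until_heads():
    """X = number of tosses of a fair coin until it first lands heads (values 1, 2, 3, ...)."""
    tosses = 0
    while True:
        tosses += 1
        if random.random() < 0.5:  # heads with probability 1/2
            return tosses

# A few simulated values of each random variable
print([roll_two_dice() for _ in range(10)])
print([tosses_until_heads() for _ in range(10)])
```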
μ=E(X)=Σx P(x)
The mean of a random variable may be interpreted as the average of
the values assumed by the random variable in repeated trials of the
experiment.
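As a quick illustration (my own, using an assumed example distribution), the sketch below computes μ = E(X) = Σ x P(x) when the probability mass function is given as a Python dictionary.

```python
# Mean of a discrete random variable: mu = E(X) = sum of x * P(x)
pmf = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}  # an assumed example PMF

mu = sum(x * p for x, p in pmf.items())
print(mu)  # 0*0.1 + 1*0.3 + 2*0.4 + 3*0.2 = 1.7
```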
The variance measures how far the values of X are from their mean, on
average.
Variance is the mean squared deviation of a random variable from its own mean.
If X has high variance, we can observe values of X a long way from the mean.
If X has low variance, the values of X tend to be clustered tightly around the
mean value.
Definition
The standard deviation, σ, of a discrete random variable X is the square root of
its variance, and hence is given by the formulas
σ² = Σ (x − μ)² P(x) = [Σ x² P(x)] − μ²
σ = √( Σ (x − μ)² P(x) ) = √( [Σ x² P(x)] − μ² )
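A short sketch (my addition, reusing the assumed example PMF from above) checking that the two forms of the variance formula agree:

```python
import math

pmf = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}  # an assumed example PMF

mu = sum(x * p for x, p in pmf.items())

# Variance as the mean squared deviation from the mean ...
var_deviation = sum((x - mu) ** 2 * p for x, p in pmf.items())
# ... and via the shortcut formula sum(x^2 P(x)) - mu^2
var_shortcut = sum(x ** 2 * p for x, p in pmf.items()) - mu ** 2

sigma = math.sqrt(var_deviation)
print(var_deviation, var_shortcut, sigma)  # both variances equal 0.81, sigma = 0.9
```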
Example: Two fair dice are rolled. Let X be the sum of the number of dots on the top faces; find the probability distribution of X.
Solution:
The sample space of 36 equally likely outcomes is
11 12 13 14 15 16
21 22 23 24 25 26
31 32 33 34 35 36
41 42 43 44 45 46
51 52 53 54 55 56
61 62 63 64 65 66
The possible values for X are the numbers 2 through 12.
X = 2 is the event {11}, so P(2) = 1/36.
X = 3 is the event {12, 21}, so P(3) = 2/36.
Continuing this way we obtain the table.
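Rather than continuing by hand, a brief sketch (my addition) that enumerates the 36 outcomes and builds the same probability table:

```python
from collections import Counter
from fractions import Fraction

# Enumerate the 36 equally likely outcomes and tally each possible sum
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

pmf = {x: Fraction(c, 36) for x, c in sorted(counts.items())}
for x, p in pmf.items():
    print(x, p)  # e.g. 2 -> 1/36, 3 -> 2/36 (= 1/18), ..., 7 -> 6/36, ..., 12 -> 1/36
```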
For a < b, define the events
C = "X ≤ a"
D = "a < X ≤ b"
E = "X ≤ b".
Then C and D are mutually exclusive, and their union is the
event E. By the third axiom of probability, this tells us that
Pr(E) = Pr(C) + Pr(D)
⟹ Pr(X ≤ b) = Pr(X ≤ a) + Pr(a < X ≤ b)
⟹ Pr(a < X ≤ b) = Pr(X ≤ b) − Pr(X ≤ a)
⟹ Pr(a < X ≤ b) = FX(b) − FX(a)
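A quick numerical check (my addition, reusing the dice-sum distribution above) that Pr(a < X ≤ b) = FX(b) − FX(a):

```python
from collections import Counter
from fractions import Fraction

counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
pmf = {x: Fraction(c, 36) for x, c in counts.items()}

def F(t):
    """Cumulative distribution function FX(t) = Pr(X <= t) for the dice sum."""
    return sum(p for x, p in pmf.items() if x <= t)

a, b = 4, 9
lhs = sum(p for x, p in pmf.items() if a < x <= b)  # Pr(a < X <= b) computed directly
rhs = F(b) - F(a)                                   # FX(b) - FX(a)
print(lhs, rhs)  # both equal 24/36 = 2/3
```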
The cumulative distribution function gives the cumulative
probability associated with a random variable: FX(x) = Pr(X ≤ x).
In a frequency table, the frequency is the number of times a particular number or
item occurs; the cumulative frequency is the total count of values up to and
including a given number.
The cumulative distribution function works in the same way, except with probability.
[Figure: in the example plot of dog weights, the shaded (yellow) area represents the probability of a dog weighing more than 11 pounds.]
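As an illustration (my own, with made-up counts), the sketch below turns a small frequency table into cumulative frequencies and an empirical CDF:

```python
from itertools import accumulate

# An assumed frequency table: value -> number of times it was observed
freq = {1: 2, 2: 5, 3: 8, 4: 3, 5: 2}

values = sorted(freq)
cum_freq = list(accumulate(freq[v] for v in values))   # running totals
total = cum_freq[-1]
cdf = [c / total for c in cum_freq]                    # cumulative probabilities

for v, cf, p in zip(values, cum_freq, cdf):
    print(v, cf, round(p, 3))  # value, cumulative frequency, FX(value)
```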
The cumulative distribution function FX(x) of a random
variable X has three important properties:
1. The cumulative distribution function FX(x) is a non-decreasing
function. This follows directly from the result we have just derived:
For a<b, we have
Pr(a<X≤b)≥0
⟹ FX(b)−FX(a)≥0
⟹ FX(a)≤FX(b).
2. As x→−∞, the value of FX(x) approaches 0 (or equals 0). That
is, limx→−∞ FX(x) = 0. This follows in part from the fact
that Pr(∅) = 0.
3. As x→∞, the value of FX(x) approaches 1.
That is, limx→∞ FX(x) = 1. This follows in part from the fact
that Pr(S) = 1, where S is the whole sample space.
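To tie the three properties together, a brief check (my addition) that the CDF of the dice sum is non-decreasing and runs from 0 to 1:

```python
from collections import Counter
from fractions import Fraction

counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
pmf = {x: Fraction(c, 36) for x, c in counts.items()}

def F(t):
    """FX(t) = Pr(X <= t) for the sum of two fair dice."""
    return sum(p for x, p in pmf.items() if x <= t)

values = [F(t) for t in range(0, 15)]

# Property 1: FX is non-decreasing
assert all(a <= b for a, b in zip(values, values[1:]))
# Properties 2 and 3: FX is 0 below the smallest value and 1 at or above the largest
assert F(1) == 0 and F(12) == 1
print([float(v) for v in values])
```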