Recall that a random process X(t) is wide-sense stationary (wss) if its mean is constant,

E[X(t)] = µ

and its autocorrelation depends only on τ = t1 − t2. The mean-square value (average power) is then

Rxx(0) = E[|X(t)|²]
Linear Filtering of Random Processes

The above example combines weighted values of X(t) and X(t − t0) to form Y(t). Statistical parameters E[Y], E[Y²], var(Y) and Ryy(τ) are readily computed from knowledge of E[X] and Rxx(τ).

The techniques can be extended to linear combinations of more than two samples of X(t):

Y(t) = ∑_{k=0}^{n−1} h_k X(t − t_k)

This is an example of linear filtering with a discrete filter with weights h = [h0, h1, . . . , hn−1].

The corresponding relationship for continuous time processing is

Y(t) = ∫_{−∞}^{∞} h(s)X(t − s) ds = ∫_{−∞}^{∞} X(s)h(t − s) ds

Filtering Random Processes

Let X(t, e) be a random process. For the moment we show the outcome e of the underlying random experiment.

Let Y(t, e) = L[X(t, e)] be the output of a linear system when X(t, e) is the input. Clearly, Y(t, e) is an ensemble of functions selected by e, and is a random process.

What can we say about Y when we have a statistical description of X and a description of the system? Note that L does not need to exhibit random behavior for Y to be random.
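As a numerical aside (not part of the original lecture), the discrete relation is easy to check directly. In the sketch below the weights h, the input statistics, and all variable names are illustrative assumptions; it filters a white input with mean µ and compares the sample mean of the output with E[Y] = µ ∑ h_k.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative wss input: white Gaussian samples around a constant mean.
mu_x = 1.0
x = mu_x + rng.standard_normal(100_000)

# Discrete filter weights h = [h0, h1, ..., h_{n-1}] (arbitrary choice).
h = np.array([0.5, 0.3, 0.2])

# Y(t) = sum_k h_k X(t - k): each output sample is a weighted
# combination of the current and previous input samples.
y = np.convolve(x, h, mode="valid")

# For this input, E[Y] = mu_x * sum(h); compare with the sample average.
print("E[Y]   ~", y.mean(), "  predicted:", mu_x * h.sum())
print("E[Y^2] ~", np.mean(y**2))
```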
Output Autocorrelation

The autocorrelation function of the output is

Ryy(t1, t2) = E[y(t1)y*(t2)]

We are particularly interested in the autocorrelation function Ryy(τ) of the output of a linear system when its input is a wss random process. When the input is wss and the system is time invariant, the output is also wss.

The autocorrelation function can be found for a process that is not wss and then specialized to the wss case without doing much additional work. We will follow that path.

Crosscorrelation Theorem

Let x(t) and y(t) be random processes that are related by

y(t) = ∫_{−∞}^{∞} x(t − s)h(s) ds

Then

Rxy(t1, t2) = ∫_{−∞}^{∞} Rxx(t1, t2 − β)h(β) dβ

and

Ryy(t1, t2) = ∫_{−∞}^{∞} Rxy(t1 − α, t2)h(α) dα

Therefore,

Ryy(t1, t2) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} Rxx(t1 − α, t2 − β)h(α)h(β) dα dβ
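The discrete-time analogue of the theorem can be verified numerically. In the following sketch (the filter taps and sample size are arbitrary assumptions) the input is unit-variance white noise, so Rxx(k) = δ(k) and the double sum collapses to Ryy(k) = ∑_a h_a h_{a+k}:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unit-variance white input: Rxx(k) = delta(k), so the discrete form of
# the theorem predicts Ryy(k) = sum_a h[a] * h[a + k].
x = rng.standard_normal(200_000)
h = np.array([1.0, 0.5, 0.25])
y = np.convolve(x, h, mode="valid")

def acorr(v, k):
    """Empirical autocorrelation E[v(t) v(t + k)] at integer lag k >= 0."""
    return np.mean(v[: len(v) - k] * v[k:])

predicted = np.correlate(h, h, mode="full")  # correlation of h with itself
mid = len(h) - 1
for k in range(len(h)):
    print(k, acorr(y, k), predicted[mid + k])
```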
Photon Pulses (continued)

Let us first assume that |τ| > ε. Then it is impossible for the instants t and t + τ to fall within the same pulse, so the values X(t) and X(t + τ) are independent.

E[X(t + τ)X(t)] = ∑_{x1} ∑_{x2} x1 x2 P(X1 = x1, X2 = x2)
= 0·0·P(0, 0) + 0·h·P(0, h) + h·0·P(h, 0) + h²·P(h, h)
= h² P(X1 = h)P(X2 = h)

The probability that the pulse waveform will be at level h at any instant is λε, which is the fraction of the time occupied by pulses. Hence,

E[X(t + τ)X(t)] = (hλε)² for |τ| > ε

Now consider the case |τ| < ε. Then, by the Poisson assumption, there cannot be two pulses so close together, so X(t) = h and X(t + τ) = h only if t and t + |τ| fall within the same pulse.

P(X1 = h, X2 = h) = P(X1 = h)P(X2 = h|X1 = h) = λε P(X2 = h|X1 = h)

The probability that t + |τ| also hits the pulse is

P(X2 = h|X1 = h) = 1 − |τ|/ε

Hence,

E[X(t + τ)X(t)] = h²λε(1 − |τ|/ε) for |τ| ≤ ε

If we now let ε → 0 and keep hε = 1, the triangle becomes an impulse of area λ and we have

Rxx(τ) = λδ(τ) + λ²
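A rough simulation sketch of the pulse train (the rate λ, width ε, height h, and grid step are assumed values chosen so that hε = 1; pulse overlap is ignored, as in the derivation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed parameters: pulse rate lam, width eps, height h with h*eps = 1.
lam, eps, h = 0.05, 0.5, 2.0
dt, T = 0.01, 20_000.0
t_grid = np.arange(0.0, T, dt)

# Poisson pulse starts; X(t) = h while t lies inside a pulse.
starts = rng.uniform(0.0, T, rng.poisson(lam * T))
x = np.zeros_like(t_grid)
for s in starts:
    x[int(s / dt): int((s + eps) / dt)] = h

# Fraction of time at level h should be about lam * eps.
print("P(X = h)   ~", np.mean(x == h), "  predicted:", lam * eps)

# At a lag beyond eps the correlation should be near (h*lam*eps)^2.
k = int(2 * eps / dt)
print("Rxx(2 eps) ~", np.mean(x[:-k] * x[k:]), "  predicted:", (h * lam * eps) ** 2)
```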
White Noise

Suppose that w(t) is white noise and that

y(t) = ∫_0^t w(s) ds

Then

E[y²(t)] = ∫_0^t ∫_0^t E[w(u)w(v)] du dv
= ∫_0^t ∫_0^t q(u)δ(u − v) du dv
= ∫_0^t q(v) dv

If the noise is stationary then

E[Y(t)] = ∫_0^t µw ds = µw t
E[Y²(t)] = qt

[Figure: plots of y(t) for t = 20, 200, 1000.]
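Plots like the ones referenced above can be reproduced approximately with a short simulation. In this sketch the intensity q, the step dt, and the number of paths are arbitrary assumptions; the integral is approximated by a cumulative sum and the growth E[y²(t)] ≈ qt is checked at t = 20, 200, 1000.

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete-time stand-in for zero-mean stationary white noise of
# intensity q: independent samples of variance q/dt, so that the
# cumulative sum approximates y(t) = integral from 0 to t of w(s) ds.
q, dt, n_paths, n_steps = 2.0, 0.1, 500, 10_000
w = rng.normal(0.0, np.sqrt(q / dt), size=(n_paths, n_steps))
y = np.cumsum(w, axis=1) * dt

# E[y^2(t)] should grow linearly, as qt.
for idx in (199, 1_999, 9_999):        # t = 20, 200, 1000
    t = (idx + 1) * dt
    print(f"t = {t:6.0f}   E[y^2] ~ {np.mean(y[:, idx]**2):9.1f}   qt = {q*t:9.1f}")
```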
Practical Calculations

Suppose that you are given a set of samples of a random waveform. Represent the samples with a vector x = [x0, x1, . . . , xN−1]. It is assumed that the samples are taken at some sampling frequency fs = 1/Ts and are representative of the entire random process. That is, the process is ergodic and the set of samples is large enough.

Sample Mean: The mean value can be approximated by

X̄ = (1/N) ∑_{i=0}^{N−1} x_i

This computation can be represented by a vector inner (dot) product. Let 1 = [1, 1, . . . , 1] be a vector of ones of the appropriate length. Then

X̄ = ⟨x, 1⟩ / N

Mean-squared value: In a similar manner, the mean-squared value can be approximated by

(1/N) ∑_{i=0}^{N−1} x_i² = ⟨x, x⟩ / N

Variance: An estimate of the variance is

S² = (1/(N − 1)) ∑_{j=0}^{N−1} (x_j − X̄)²

It can be shown that

E[S²] = σ²

and S² is therefore an unbiased estimator of the variance.
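These estimators map directly onto inner products in code. A minimal sketch (the data vector x is a synthetic stand-in, not actual measurements):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(3.0, 2.0, size=10_000)    # assumed sample vector
N = len(x)

# Sample mean as an inner product with the all-ones vector: <x, 1>/N.
ones = np.ones(N)
x_bar = np.dot(x, ones) / N              # equals x.mean()

# Mean-squared value as <x, x>/N.
msq = np.dot(x, x) / N

# Unbiased variance estimate with the 1/(N - 1) normalization.
s2 = np.sum((x - x_bar) ** 2) / (N - 1)  # equals x.var(ddof=1)

print("mean:", x_bar, " mean-square:", msq, " variance:", s2)
```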