Lecture 13

This document discusses wide-sense stationary (WSS) stochastic processes and linear filtering of random processes. It defines a WSS process as having a constant mean and an autocorrelation that depends only on the time difference. It shows that the autocorrelation and other statistics of the output of a linear, time-invariant system can be determined from the input autocorrelation using convolution. The crosscorrelation and autocorrelation theorems relate the input and output autocorrelations and crosscorrelations.


Linear Filtering of Random Processes

Lecture 13
Spring 2002

Wide-Sense Stationary

A stochastic process X(t) is wss if its mean is constant,

    E[X(t)] = µ

and its autocorrelation depends only on τ = t1 − t2:

    Rxx(t1, t2) = E[X(t1)X*(t2)]
    E[X(t + τ)X*(t)] = Rxx(τ)

Note that Rxx(−τ) = Rxx*(τ) and

    Rxx(0) = E[|X(t)|²]

Example

We found that the random telegraph signal has the autocorrelation function

    Rxx(τ) = e^(−c|τ|)

We can use the autocorrelation function to find the second moment of linear combinations such as Y(t) = aX(t) + bX(t − t0).

    Ryy(0) = E[Y²(t)] = E[(aX(t) + bX(t − t0))²]
           = a²E[X²(t)] + 2abE[X(t)X(t − t0)] + b²E[X²(t − t0)]
           = a²Rxx(0) + 2abRxx(t0) + b²Rxx(0)
           = (a² + b²)Rxx(0) + 2abRxx(t0)
           = (a² + b²)Rxx(0) + 2ab e^(−c t0)

Example (continued)

We can also compute the autocorrelation Ryy(τ) for general τ.

    Ryy(τ) = E[Y(t + τ)Y*(t)]
           = E[(aX(t + τ) + bX(t + τ − t0))(aX(t) + bX(t − t0))]
           = a²E[X(t + τ)X(t)] + abE[X(t + τ)X(t − t0)]
             + abE[X(t + τ − t0)X(t)] + b²E[X(t + τ − t0)X(t − t0)]
           = a²Rxx(τ) + abRxx(τ + t0) + abRxx(τ − t0) + b²Rxx(τ)
           = (a² + b²)Rxx(τ) + abRxx(τ + t0) + abRxx(τ − t0)
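The closed-form result for Ryy(0) can be checked with a small Monte Carlo sketch. For the random telegraph signal, X(t) and X(t − t0) differ exactly when an odd number of switches (a Poisson count with rate c/2) falls in an interval of length t0. The parameter values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the lecture):
c = 2.0              # Rxx(tau) = exp(-c|tau|); the switching rate is c/2
a, b, t0 = 1.0, 0.5, 0.3

# Closed-form result from the example, with Rxx(0) = 1:
# Ryy(0) = (a^2 + b^2) + 2ab exp(-c t0)
ryy0_formula = (a**2 + b**2) + 2 * a * b * np.exp(-c * t0)

# Monte Carlo: sample the telegraph signal at t - t0 and at t.
n = 200_000
x1 = rng.choice([-1.0, 1.0], size=n)              # X(t - t0)
odd = rng.poisson(c / 2 * t0, size=n) % 2 == 1    # odd number of switches?
x2 = np.where(odd, -x1, x1)                       # X(t)
y = a * x2 + b * x1
ryy0_mc = np.mean(y**2)

print(ryy0_formula, ryy0_mc)
```

The two printed values should agree to within Monte Carlo error.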
Linear Filtering of Random Processes

The above example combines weighted values of X(t) and X(t − t0) to form Y(t). Statistical parameters E[Y], E[Y²], var(Y) and Ryy(τ) are readily computed from knowledge of E[X] and Rxx(τ).

The techniques can be extended to linear combinations of more than two samples of X(t):

    Y(t) = Σ_{k=0}^{n−1} hk X(t − tk)

This is an example of linear filtering with a discrete filter with weights

    h = [h0, h1, . . . , hn−1]

The corresponding relationship for continuous-time processing is

    Y(t) = ∫_−∞^∞ h(s)X(t − s) ds = ∫_−∞^∞ X(s)h(t − s) ds

Filtering Random Processes

Let X(t, e) be a random process. For the moment we show the outcome e of the underlying random experiment. Let Y(t, e) = L[X(t, e)] be the output of a linear system when X(t, e) is the input. Clearly, Y(t, e) is an ensemble of functions selected by e, and is a random process.

What can we say about Y when we have a statistical description of X and a description of the system? Note that L does not need to exhibit random behavior for Y to be random.
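The discrete filtering relation can be tried out numerically. If the input samples are uncorrelated with zero mean and power q, expanding E[Y²] for Y = Σ hk X(t − tk) leaves only the k = j cross terms, so E[Y²] = q Σ hk². The weights and q below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.5, 0.3, 0.2])     # hypothetical filter weights (illustrative)
q = 4.0                           # input power E[X^2]

# Uncorrelated (white) input samples with zero mean and variance q
x = rng.normal(0.0, np.sqrt(q), 500_000)

# Y[n] = sum_k h[k] X[n - k]: discrete linear filtering by convolution
y = np.convolve(x, h, mode='valid')

# For uncorrelated samples only the squared terms survive in E[Y^2]:
print(np.mean(y**2), q * np.sum(h**2))
```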

Time Invariance

We will work with time-invariant (or shift-invariant) systems. The system is time-invariant if the response to a time-shifted input is just the time-shifted output:

    Y(t + τ) = L[X(t + τ)]

The output of a time-invariant linear system can be represented by convolution of the input with the impulse response, h(t):

    Y(t) = ∫_−∞^∞ X(t − s)h(s) ds

Mean Value

The following result holds for any linear system, whether or not it is time invariant or the input is stationary:

    E[LX(t)] = LE[X(t)] = L[µ(t)]

When the process is stationary we find µy = L[µx], which is just the response to a constant of value µx.
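The mean-value result is easy to see in discrete time: filtering a stationary process with mean µx gives an output whose mean is the filter's response to the constant µx, i.e. µx times the sum of the weights. A quick sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([0.4, 0.4, 0.2])    # hypothetical impulse response (illustrative)
mu_x = 3.0

# Stationary input with constant mean mu_x
x = mu_x + rng.normal(0.0, 1.0, 200_000)
y = np.convolve(x, h, mode='valid')

# mu_y = L[mu_x]: the filter applied to the constant mu_x
print(np.mean(y), mu_x * h.sum())
```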
Output Autocorrelation

The autocorrelation function of the output is

    Ryy(t1, t2) = E[y(t1)y*(t2)]

We are particularly interested in the autocorrelation function Ryy(τ) of the output of a linear system when its input is a wss random process. When the input is wss and the system is time invariant, the output is also wss.

The autocorrelation function can be found for a process that is not wss and then specialized to the wss case without doing much additional work. We will follow that path.

Crosscorrelation Theorem

Let x(t) and y(t) be random processes that are related by

    y(t) = ∫_−∞^∞ x(t − s)h(s) ds

Then

    Rxy(t1, t2) = ∫_−∞^∞ Rxx(t1, t2 − β)h(β) dβ

and

    Ryy(t1, t2) = ∫_−∞^∞ Rxy(t1 − α, t2)h(α) dα

Therefore,

    Ryy(t1, t2) = ∫_−∞^∞ ∫_−∞^∞ Rxx(t1 − α, t2 − β)h(α)h(β) dα dβ

Crosscorrelation Theorem (continued)

Proof: Multiply the first equation by x(t1) and take the expected value:

    E[x(t1)y(t)] = ∫_−∞^∞ E[x(t1)x(t − s)]h(s) ds = ∫_−∞^∞ Rxx(t1, t − s)h(s) ds

This proves the first result. To prove the second, multiply the first equation by y(t2) and take the expected value:

    E[y(t)y(t2)] = ∫_−∞^∞ E[x(t − s)y(t2)]h(s) ds = ∫_−∞^∞ Rxy(t − s, t2)h(s) ds

This proves the second and third equations. Now substitute the second equation into the third to prove the last.

Example: Autocorrelation for Photon Arrivals

Assume that each photon that arrives at a detector generates an impulse of current. We want to model this process so that we can use it as the excitation X(t) to detection systems. Assume that the photons arrive at a rate λ photons/second and that each photon generates a pulse of height h and width ε.

To compute the autocorrelation function we must find

    Rxx(τ) = E[X(t + τ)X(t)]
Photon Pulses (continued)

Let us first assume that |τ| > ε. Then it is impossible for the instants t and t + τ to fall within the same pulse.

    E[X(t + τ)X(t)] = Σ_{x1} Σ_{x2} x1 x2 P(X1 = x1, X2 = x2)
                    = 0·0·P(0, 0) + 0·h·P(0, h) + h·0·P(h, 0) + h²P(h, h)
                    = h²P(X1 = h)P(X2 = h)

The probability that the pulse waveform will be at level h at any instant is λε, which is the fraction of the time occupied by pulses. Hence,

    E[X(t + τ)X(t)] = (hλε)²    for |τ| > ε

Photon Pulses (continued)

Now consider the case |τ| < ε. Then, by the Poisson assumption, there cannot be two pulses so close together, so that X(t) = h and X(t + τ) = h only if t and t + |τ| fall within the same pulse.

    P(X1 = h, X2 = h) = P(X1 = h)P(X2 = h | X1 = h) = λε P(X2 = h | X1 = h)

The probability that t + |τ| also hits the pulse is

    P(X2 = h | X1 = h) = 1 − |τ|/ε

Hence,

    E[X(t + τ)X(t)] = h²λε (1 − |τ|/ε)    for |τ| ≤ ε

If we now let ε → 0 and keep hε = 1, the triangle becomes an impulse of area λ and we have

    Rxx(τ) = λδ(τ) + λ²
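The constant term (hλε)² can be checked by simulating the pulse train on a fine grid. This is a rough sketch with made-up rate, height, and width values; λε must be small for the non-overlapping-pulse approximation used above to be accurate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parameters (illustrative); lam*eps = 0.04 is small
lam, hgt, eps, dt = 20.0, 2.0, 0.002, 0.0005
n = 2_000_000

# Poisson arrivals on a fine grid; each arrival turns on a pulse of
# height hgt and width eps (k grid cells)
arrivals = (rng.random(n) < lam * dt).astype(float)
k = int(eps / dt)
x = hgt * (np.convolve(arrivals, np.ones(k), mode='same') > 0)

# Empirical E[X(t + tau)X(t)] at a lag well beyond the pulse width
shift = 10 * k
r_emp = np.mean(x[:-shift] * x[shift:])
print(r_emp, (hgt * lam * eps)**2)   # prediction: (h*lam*eps)^2
```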

Detector Response to Poisson Pulses

It is common for a physical detector to have internal resistance and capacitance. A series RC circuit has impulse response

    h(t) = (1/RC) e^(−t/RC) step(t)

The autocorrelation function of the detector output is

    Ryy(t1, t2) = ∫_−∞^∞ ∫_−∞^∞ Rxx(t1 − α, t2 − β)h(α)h(β) dα dβ
                = (1/(RC)²) ∫_0^∞ ∫_0^∞ [λδ(τ + α − β) + λ²] e^(−(α+β)/RC) dα dβ
                = (λ/(RC)²) ∫_0^∞ e^(−(τ+2α)/RC) dα + λ² [∫_0^∞ (1/RC) e^(−u/RC) du]²
                = (λ/2RC) e^(−τ/RC) + λ²    for τ ≥ 0

White Noise

We will say that a random process w(t) is white noise if its values w(ti) and w(tj) are uncorrelated for every ti and tj ≠ ti. That is,

    Cw(ti, tj) = E[w(ti)w*(tj)] = 0    for ti ≠ tj

The autocovariance must be of the form

    Cw(ti, tj) = q(ti)δ(ti − tj),    where q(ti) = E[|w(ti)|²] ≥ 0

is the mean-squared value at time ti. Unless specifically stated to be otherwise, it is assumed that the mean value of white noise is zero. In that case, Rw(ti, tj) = Cw(ti, tj).

Examples: a coin tossing sequence (discrete); thermal resistor noise (continuous).
White Noise (continued)

Suppose that w(t) is white noise and that

    y(t) = ∫_0^t w(s) ds

Then

    E[y²(t)] = ∫_0^t ∫_0^t E[w(u)w(v)] du dv
             = ∫_0^t ∫_0^t q(u)δ(u − v) du dv
             = ∫_0^t q(v) dv

If the noise is stationary then

    E[Y(t)] = ∫_0^t µw ds = µw t

    E[Y²(t)] = qt

[Figure: plots of y(t) for t = 20, 200, 1000]
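The growth law E[Y²(t)] = qt can be illustrated with a discrete random walk: each increment w(s) ds contributes variance q·dt, so the variance of the integral grows linearly in t. The step size and horizon below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
q, dt, t_end = 1.0, 0.01, 2.0    # noise intensity, step, horizon (illustrative)
n_paths = 20_000

# Discrete approximation of y(t) = integral of w over [0, t]:
# independent increments, each with variance q*dt
steps = int(t_end / dt)
w = rng.normal(0.0, np.sqrt(q * dt), size=(n_paths, steps))
y_end = w.sum(axis=1)

print(np.var(y_end), q * t_end)   # E[y^2(t)] = q t
```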

Filtered White Noise

Find the response of a linear filter with impulse response h(t) to white noise.

    Ryy(t1, t2) = ∫_−∞^∞ ∫_−∞^∞ Rxx(t1 − α, t2 − β)h(α)h(β) dα dβ

With x(t) = w(t) we have Rxx(t1, t2) = qδ(t1 − t2). Then, letting τ = t1 − t2 we have

    Ryy(t1, t2) = ∫_−∞^∞ ∫_−∞^∞ qδ(τ − α + β)h(α)h(β) dα dβ
                = q ∫_−∞^∞ h(α)h(α − τ) dα

Because δ(−τ) = δ(τ), this result is symmetric in τ.

Example

Pass white noise through a filter with the exponential impulse response h(t) = Ae^(−bt) step(t).

    Ryy(τ) = A²q ∫_−∞^∞ e^(−bα) e^(−b(α−τ)) step(α) step(α − τ) dα
           = A²q e^(bτ) ∫_τ^∞ e^(−2bα) dα
           = (A²q/2b) e^(−bτ)    for τ ≥ 0

Because the result is symmetric in τ,

    Ryy(τ) = (A²q/2b) e^(−b|τ|)

Interestingly, this has the same form as the autocorrelation function of the random telegraph signal and, with the exception of the constant term, also that of the Poisson pulse sequence.
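The result Ryy(0) = A²q/2b can be checked by discretizing: white noise of intensity q is approximated by samples of variance q/dt, and the convolution integral by a Riemann sum. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical parameters (illustrative)
A, b, q = 1.0, 2.0, 1.0
dt = 0.002
n = 200_000

# Discretized white noise: variance q/dt per sample, so that the
# autocorrelation approximates q*delta(t1 - t2)
w = rng.normal(0.0, np.sqrt(q / dt), n)

# Truncated exponential impulse response h(t) = A exp(-b t) step(t)
t = np.arange(0.0, 5.0 / b, dt)
h = A * np.exp(-b * t)

# Riemann-sum approximation of the convolution integral
y = np.convolve(w, h, mode='valid') * dt

print(np.var(y), A**2 * q / (2 * b))   # theory: Ryy(0) = A^2 q / (2b)
```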
Practical Calculations

Suppose that you are given a set of samples of a random waveform. Represent the samples with a vector x = [x0, x1, . . . , xN−1]. It is assumed that the samples are taken at some sampling frequency fs = 1/Ts and are representative of the entire random process. That is, the process is ergodic and the set of samples is large enough.

Sample Mean: The mean value can be approximated by

    X̄ = (1/N) Σ_{i=0}^{N−1} xi

This computation can be represented by a vector inner (dot) product. Let 1 = [1, 1, . . . , 1] be a vector of ones of the appropriate length. Then

    X̄ = ⟨x, 1⟩ / N

Mean-Squared Value: In a similar manner, the mean-squared value can be approximated by

    (1/N) Σ_{i=0}^{N−1} xi² = ⟨x, x⟩ / N

Variance: An estimate of the variance is

    S² = (1/(N − 1)) Σ_{j=0}^{N−1} (xj − X̄)²

It can be shown that

    E[S²] = σ²

and S² is therefore an unbiased estimator of the variance.
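These estimators translate directly into dot products. A minimal sketch, using made-up Gaussian samples with true mean 2 and true variance 9:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical samples (illustrative): true mean 2, true variance 9
x = rng.normal(2.0, 3.0, 100_000)
N = len(x)
ones = np.ones(N)

x_bar = np.dot(x, ones) / N                 # sample mean  <x, 1>/N
msq   = np.dot(x, x) / N                    # mean-squared value  <x, x>/N
s2    = np.sum((x - x_bar)**2) / (N - 1)    # unbiased variance estimate

print(x_bar, msq, s2)
```

For a large ergodic sample these estimates should approach the true mean, mean-squared value, and variance.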
