EE 6333/6307 assignment 5 (Bayesian and Least Squares Estimators)
Instructions
1. You may need to use the results given at the end of this assignment.
Question 1. Consider the data model y[n] = A + ω[n] for n = 0, 1, ..., N − 1, where
ω[n] is zero-mean WGN with variance σ². The parameter A is to be estimated using a
Bayesian MMSE estimator. The prior pdf is A ∼ N(µA, σA²). Show that
$$\hat{A}_{\mathrm{MMSE}} = \mu_A + \frac{\sigma_A^2}{\sigma_A^2 + \sigma^2/N}\,(\bar{y} - \mu_A), \qquad \text{where } \bar{y} = \frac{1}{N}\sum_{n=0}^{N-1} y[n].$$
(Hint: Use results 1 and 2.)
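The closed-form expression above can be sanity-checked numerically by comparing it against the posterior mean computed by brute-force integration of prior × likelihood over a grid of A values. This is only a sketch; all variable names and the parameter values are my own choices.

```python
import numpy as np

# Sanity check of the MMSE formula (a sketch; names and values are mine).
rng = np.random.default_rng(0)
mu_A, sigma_A2, sigma2, N = 1.0, 2.0, 0.5, 10

A_true = rng.normal(mu_A, np.sqrt(sigma_A2))
y = A_true + rng.normal(0.0, np.sqrt(sigma2), size=N)
ybar = y.mean()

# Closed-form Bayesian MMSE estimate
A_hat = mu_A + sigma_A2 / (sigma_A2 + sigma2 / N) * (ybar - mu_A)

# Grid-based posterior mean: posterior is proportional to prior(A) * likelihood(y | A)
grid = np.linspace(mu_A - 10, mu_A + 10, 200001)
log_post = (-(grid - mu_A) ** 2 / (2 * sigma_A2)
            - ((y[:, None] - grid) ** 2).sum(axis=0) / (2 * sigma2))
w = np.exp(log_post - log_post.max())
A_hat_grid = (grid * w).sum() / w.sum()

print(A_hat, A_hat_grid)  # the two should agree closely
```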
Question 2. The state and output equations of a linear stochastic discrete-time system
are given by
$$x(k+1) = x(k) + 2v(k), \qquad y(k) = x(k) + \omega(k).$$
Here x, v, y, ω are scalars. v and ω are zero-mean Gaussian white noise processes with
unit variance, uncorrelated with each other; v is uncorrelated with x(0), and ω is
uncorrelated with x. Given the data y(1) = 2, y(2) = 1, assume the initial estimate
x̂0|0 = 1 and initial mean square error P0|0 = E[(x(0) − x̂0|0)²] = 1. Use a Kalman filter
to estimate the state.
(a) Find predicted estimate and mean square error, x̂1|0 , P1|0 .
(b) Find corrected estimate and mean square error, x̂1|1 , P1|1 .
(c) Repeat this for k = 2.
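The hand computation in (a)-(c) can be checked with a minimal scalar Kalman filter. Note that the process noise enters as 2v(k), so its variance contribution in the prediction step is 2² · Var(v) = 4. This is a sketch for self-checking; variable names are mine.

```python
# Scalar Kalman filter for x(k+1) = x(k) + 2v(k), y(k) = x(k) + w(k);
# a sketch to check hand calculations (variable names are mine).
Qe = 4.0   # effective process noise variance: Var(2 v(k)) = 4 * Var(v)
R = 1.0    # measurement noise variance

x, P = 1.0, 1.0          # x_hat_{0|0}, P_{0|0}
history = []
for yk in [2.0, 1.0]:    # measurements y(1), y(2)
    # Prediction (state transition is identity)
    x_pred = x
    P_pred = P + Qe
    # Correction
    K = P_pred / (P_pred + R)
    x = x_pred + K * (yk - x_pred)
    P = (1 - K) * P_pred
    history.append((x_pred, P_pred, x, P))

for k, (xp, Pp, xc, Pc) in enumerate(history, start=1):
    print(f"k={k}: x_pred={xp:.4f}, P_pred={Pp:.4f}, x_corr={xc:.4f}, P_corr={Pc:.4f}")
```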
Question 3. A discrete-time nonlinear system is given by the following state and output
equations:
$$X(k+1) = f(X(k), u(k)) + v(k), \qquad y(k) = h(X(k)) + \omega(k), \tag{1}$$
where
$$f(X(k), u(k)) = \begin{bmatrix} x_1(k) + T\,x_2(k) \\ x_2(k) - 1.2T \sin x_1(k) + 0.5\,u(k) \end{bmatrix}, \qquad h(X(k)) = \sin x_1(k),$$
with v(k) ∼ N(0, Q) and ω(k) ∼ N(0, R). Given sampling time T = 0.1, initial estimate
x̂0|0 = [0.1 0]ᵀ, error covariance P0|0 = diag(0.01, 0.01), Q = diag(10⁻⁴, 10⁻⁴),
R = 0.005, u(0) = 0.2, and measurement y(1) = 0.105.
Find one step of the predicted and corrected state and error covariance using the EKF.
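One EKF predict/correct step can be sketched as below, under my reading of the model: the Jacobians F = ∂f/∂X and H = ∂h/∂X are evaluated at the current estimate. All variable names are my own.

```python
import numpy as np

# One EKF predict/correct step for the system above (a sketch; names are mine).
T, Q, R = 0.1, np.diag([1e-4, 1e-4]), 0.005
x = np.array([0.1, 0.0])            # x_hat_{0|0}
P = np.diag([0.01, 0.01])           # P_{0|0}
u0, y1 = 0.2, 0.105

def f(x, u):
    return np.array([x[0] + T * x[1],
                     x[1] - 1.2 * T * np.sin(x[0]) + 0.5 * u])

# Prediction, with Jacobian F = df/dX evaluated at x_hat_{0|0}
F = np.array([[1.0, T],
              [-1.2 * T * np.cos(x[0]), 1.0]])
x_pred = f(x, u0)
P_pred = F @ P @ F.T + Q

# Correction, with Jacobian H = dh/dX evaluated at x_hat_{1|0}
H = np.array([[np.cos(x_pred[0]), 0.0]])
S = H @ P_pred @ H.T + R            # innovation variance (1x1)
K = P_pred @ H.T / S                # Kalman gain (2x1)
innov = y1 - np.sin(x_pred[0])
x_corr = x_pred + (K * innov).ravel()
P_corr = (np.eye(2) - K @ H) @ P_pred

print("x_pred =", x_pred)
print("x_corr =", x_corr)
```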
Question 4. Rain is accumulating in a cylindrical tank at a steady rate, and it is desired
to estimate this rate. Measurements of the water level in the tank are initiated at a
certain time, and they are noisy. Assume that the measurements follow the process
y(k) = a + bk + v(k), where y(k) is the water level at instant k, b is the rate of rainfall,
a is the amount of water accumulated before measurements are taken, and v(k) is the
noise. Denote the data collected at k = 1, 2, ..., N as y(1), y(2), ..., y(N). Both a and b
are unknown but may be estimated from this data.
(a) Using the method of least squares, obtain an estimate of the rate of rainfall b̂.
(b) Find the bias of the estimate E[b̂ − b] if v(k ) is a zero mean process.
(c) If v(k) is a zero-mean white noise process with unit variance, prove that the variance
of the estimate is
$$E[(\hat{b} - b)^2] = \frac{S_0}{S_0 S_2 - S_1^2}, \qquad \text{where } S_0 = N,\quad S_1 = \sum_{k=1}^{N} k = \frac{N(N+1)}{2},\quad S_2 = \sum_{k=1}^{N} k^2 = \frac{N(N+1)(2N+1)}{6}.$$
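The normal equations for the straight-line fit can be checked numerically. A sketch, under the assumption of unit-variance noise; variable names and the simulation values are mine.

```python
import numpy as np

# Least-squares fit of y(k) = a + b*k, k = 1..N (a sketch; names are mine).
rng = np.random.default_rng(1)
N, a_true, b_true = 200, 3.0, 0.5
k = np.arange(1, N + 1)
y = a_true + b_true * k + rng.normal(size=N)   # unit-variance white noise

# Normal equations expressed via the sums S0, S1, S2 from the problem statement
S0, S1, S2 = N, k.sum(), (k ** 2).sum()
Sy, Sky = y.sum(), (k * y).sum()
det = S0 * S2 - S1 ** 2
b_hat = (S0 * Sky - S1 * Sy) / det
a_hat = (S2 * Sy - S1 * Sky) / det

# Theoretical variance of b_hat for unit-variance white noise
var_b = S0 / det
print(b_hat, a_hat, var_b)
```

Comparing `b_hat` against `np.polyfit(k, y, 1)` is a quick way to confirm the normal-equation algebra.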
Question 5. Design a recursive least squares estimator with a forgetting factor to
estimate a parameter θ from noisy data y(k) = θ + ω(k). Consider the cost function
$$J = \sum_{k=0}^{N-1} W^{N-1-k}\,(y(k) - \theta)^2, \qquad 0 < W < 1.$$
(a) Minimize J to find the least squares estimate.
(b) Consider the estimate found in part (a) as θ̂N (the estimate using N data points) and
find a recursive relation between θ̂N and θ̂N+1.
(c) What is the significance of your choice of W?
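The batch estimate and a candidate recursive update can be compared numerically. The recursion shown in the code is my own derivation (not given in the problem), so treat it as a reference against which to check your part (b); variable names are mine.

```python
import numpy as np

# Exponentially weighted LS estimate of theta from y(k) = theta + w(k),
# compared against a recursive update (a sketch; the recursion is my derivation).
rng = np.random.default_rng(2)
theta_true, W, N = 2.0, 0.95, 500
y = theta_true + rng.normal(size=N)

# Batch estimate: minimize J = sum_k W^(N-1-k) * (y(k) - theta)^2
weights = W ** (N - 1 - np.arange(N))
theta_batch = (weights * y).sum() / weights.sum()

# Recursive form: S_{n+1} = W*S_n + 1,  theta_{n+1} = theta_n + (y(n) - theta_n) / S_{n+1}
theta_rec, S = 0.0, 0.0
for yk in y:
    S = W * S + 1.0
    theta_rec += (yk - theta_rec) / S

print(theta_batch, theta_rec)
```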
Question 6: (coding problem 1) Consider an AR(1) model y(k) = −a1 y(k − 1) + ω(k),
where ω(k) is zero-mean white Gaussian noise with unit variance. Simulate this model
with a1 = 0.72, storing data for 1000 instants (use the MATLAB function randn, or its
equivalent in Python's numpy library, for the noise process), and plot the data with
respect to k. Now consider that the parameter a1 is unknown, and estimate it from the
data using the formula
$$\hat{a}_1 = -\frac{\hat{R}_{yy}(1)}{\hat{R}_{yy}(0)}.$$
(The minus sign follows from the Yule-Walker relation Ryy(1) = −a1 Ryy(0) for this model.)
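A possible Python solution is sketched below; the seed, file name, and variable names are my own choices. With 1000 samples the estimate should land within a few hundredths of the true value.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script also runs headless
import matplotlib.pyplot as plt

# Simulate y(k) = -a1*y(k-1) + w(k) and estimate a1 from sample
# autocorrelations (a sketch; names and the seed are mine).
rng = np.random.default_rng(3)
a1, N = 0.72, 1000
w = rng.normal(size=N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1 * y[k - 1] + w[k]

# Sample autocorrelations R_yy(0) and R_yy(1)
R0 = np.mean(y * y)
R1 = np.mean(y[1:] * y[:-1])
a1_hat = -R1 / R0          # Yule-Walker: R_yy(1) = -a1 * R_yy(0)
print("a1_hat =", a1_hat)

plt.plot(y)
plt.xlabel("k")
plt.ylabel("y(k)")
plt.savefig("ar1_data.png")
```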
Question 7: (coding problem 2) Consider the scalar process xk+1 = 0.5xk + vk , yk =
0.5xk + wk , where xk is the state, yk is the output, and vk , wk are independent white
noise processes following a Gaussian distribution with zero mean and unit variance.
Simulate this process, plot it, and store the output data yk for N = 10000 instants. Using
this output data, estimate the state by constructing a Kalman filter. Plot y and ŷ in the
same figure. Also plot x̂⁺ and P⁺, each with respect to k.
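The simulation-plus-filter loop can be sketched as follows (plotting omitted for brevity; the initial guesses x̂0 = 0, P0 = 1 and all names are my assumptions). The error covariance P⁺ should settle to a steady-state value within the first few iterations.

```python
import numpy as np

# Kalman filter for x_{k+1} = 0.5 x_k + v_k, y_k = 0.5 x_k + w_k
# (a sketch; names and initial guesses are mine). F = H = 0.5, Q = R = 1.
rng = np.random.default_rng(4)
F, H, Q, R, N = 0.5, 0.5, 1.0, 1.0, 10000

# Simulate the process
x = np.zeros(N)
y = np.zeros(N)
for k in range(N):
    y[k] = H * x[k] + rng.normal()
    if k + 1 < N:
        x[k + 1] = F * x[k] + rng.normal()

# Run the filter, storing corrected state and error covariance
x_hat = np.zeros(N)
P_plus = np.zeros(N)
xk, Pk = 0.0, 1.0            # initial estimate and MSE (my assumption)
for k in range(N):
    # Predict
    x_pred = F * xk
    P_pred = F * Pk * F + Q
    # Correct
    K = P_pred * H / (H * P_pred * H + R)
    xk = x_pred + K * (y[k] - H * x_pred)
    Pk = (1 - K * H) * P_pred
    x_hat[k], P_plus[k] = xk, Pk

y_hat = H * x_hat            # filtered output estimate, for the y vs y_hat plot
print("steady-state P+ ~", P_plus[-1])
```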
Results:
1. Gaussian conditional pdf: If the random vectors X and Y are jointly Gaussian with
mean vector $[E(X)^T \; E(Y)^T]^T$ and partitioned covariance matrix
$$C = \begin{bmatrix} C_{xx} & C_{xy} \\ C_{yx} & C_{yy} \end{bmatrix},$$
then the conditional pdf $f(X \mid Y)$ is also Gaussian with mean
$E(X \mid Y) = E(X) + C_{xy} C_{yy}^{-1} (Y - E(Y))$ and covariance
$C_{X|Y} = C_{xx} - C_{xy} C_{yy}^{-1} C_{yx}$.
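The conditional covariance in result 1 is the Schur complement of Cyy in C, which equals the inverse of the X-block of the precision matrix C⁻¹. This identity gives a quick numerical check (a sketch; the matrix values are arbitrary):

```python
import numpy as np

# Check result 1's conditional covariance: the Schur complement of Cyy in C
# equals the inverse of the X-block of C^{-1} (arbitrary test matrix).
rng = np.random.default_rng(5)
M = rng.normal(size=(4, 4))
C = M @ M.T + 4 * np.eye(4)          # symmetric positive-definite covariance
Cxx, Cxy = C[:2, :2], C[:2, 2:]
Cyx, Cyy = C[2:, :2], C[2:, 2:]

C_cond = Cxx - Cxy @ np.linalg.inv(Cyy) @ Cyx    # result 1's expression
Lam = np.linalg.inv(C)
C_cond_alt = np.linalg.inv(Lam[:2, :2])          # inverse of precision block

print(np.allclose(C_cond, C_cond_alt))
```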
2. Matrix Inversion Lemma: $(A + BCD)^{-1} = A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$,
where A and C are square, invertible matrices.
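The lemma can be verified numerically with random matrices (a sketch; the dimensions and the diagonal shifts that keep the matrices well-conditioned are arbitrary choices):

```python
import numpy as np

# Numerical check of the matrix inversion lemma (arbitrary random matrices;
# diagonal shifts keep A and C comfortably invertible).
rng = np.random.default_rng(6)
n, m = 4, 2
A = rng.normal(size=(n, n)) + 5 * np.eye(n)
B = rng.normal(size=(n, m))
C = rng.normal(size=(m, m)) + 5 * np.eye(m)
D = rng.normal(size=(m, n))

lhs = np.linalg.inv(A + B @ C @ D)
Ai, Ci = np.linalg.inv(A), np.linalg.inv(C)
rhs = Ai - Ai @ B @ np.linalg.inv(Ci + D @ Ai @ B) @ D @ Ai

print(np.allclose(lhs, rhs))
```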