
Detection and Estimation Theory (CT505)

Assignment 8
May 22, 2020

1. The data x[n], n = 0, 1, . . . , N − 1 are observed having the PDF

   \[ p(x[n] \mid \mu) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{1}{2\sigma^2} \left( x[n] - \mu \right)^2 \right]. \]

   The x[n]'s are independent when conditioned on µ. The mean µ has the prior PDF µ ∼ N(µ₀, σ₀²). Find the MMSE and MAP estimators of µ. What happens as σ₀² → 0 and as σ₀² → ∞?
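   For reference, a sketch of the standard conjugate-Gaussian result that the derivation should reproduce, writing x̄ = (1/N) Σ_{n=0}^{N−1} x[n] for the sample mean: the posterior is Gaussian, so the MMSE and MAP estimators coincide,

   \[ \hat{\mu} = \frac{\sigma_0^2}{\sigma_0^2 + \sigma^2/N}\,\bar{x} + \frac{\sigma^2/N}{\sigma_0^2 + \sigma^2/N}\,\mu_0, \]

   so that as σ₀² → 0 the estimator collapses to the prior mean µ₀, while as σ₀² → ∞ it approaches the sample mean x̄.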
2. For the posterior PDF

   \[ p(\theta \mid x) = \begin{cases} \exp\left[ -(\theta - x) \right], & \theta > x \\ 0, & \theta < x, \end{cases} \]

   find the MMSE and MAP estimators.
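   As a check, this posterior is a unit-rate exponential shifted to begin at x, which suggests

   \[ \hat{\theta}_{\mathrm{MMSE}} = E[\theta \mid x] = \int_x^{\infty} \theta\, e^{-(\theta - x)}\, d\theta = x + 1, \qquad \hat{\theta}_{\mathrm{MAP}} = x, \]

   since the density decreases monotonically from its maximum at the left edge of the support.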
3. The data x[n] = A + w[n] for n = 0, 1, . . . , N − 1 are observed. The unknown parameter A is assumed to have the prior PDF

   \[ p(A) = \begin{cases} \lambda \exp(-\lambda A), & A > 0 \\ 0, & A < 0, \end{cases} \]

   where λ > 0, and w[n] is WGN with variance σ² and is independent of A. Find the MAP estimator of A.
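   One standard route, for reference: maximize the log-posterior over A > 0 by setting

   \[ \frac{d}{dA} \left[ -\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \left( x[n] - A \right)^2 - \lambda A \right] = \frac{N}{\sigma^2} \left( \bar{x} - A \right) - \lambda = 0, \]

   which, once the constraint A > 0 is enforced by truncation, points to \( \hat{A} = \max\left( 0,\; \bar{x} - \lambda\sigma^2/N \right) \), with x̄ the sample mean.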
4. Consider the data model

   \[ x[n] = A \cos(2\pi f_0 n + \phi) + w[n], \]

   where

   \[ A = \sqrt{a^2 + b^2}, \qquad \phi = \arctan\left( \frac{-b}{a} \right). \]

   If θ = [a b]ᵀ ∼ N(0, σ_θ² I), show that the PDF of A is Rayleigh, the PDF of φ is U[0, 2π], and that A and φ are independent.
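   A sketch of the usual change-of-variables argument: writing a = A cos φ and b = −A sin φ, the Jacobian of the transformation has magnitude A, so the joint density becomes

   \[ p(A, \phi) = \frac{A}{2\pi\sigma_\theta^2} \exp\left( -\frac{A^2}{2\sigma_\theta^2} \right), \qquad A \ge 0,\; 0 \le \phi < 2\pi, \]

   which factors into a Rayleigh density in A times a uniform density in φ, giving the claimed independence.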
5. Consider the vector MAP estimator

   \[ \hat{\boldsymbol{\theta}} = \arg\max_{\boldsymbol{\theta}}\, p(\boldsymbol{\theta} \mid \mathbf{x}). \]

   Show that this estimator minimizes the Bayes risk for the cost function

   \[ C(\boldsymbol{\epsilon}) = \begin{cases} 1, & \|\boldsymbol{\epsilon}\| > \delta \\ 0, & \|\boldsymbol{\epsilon}\| < \delta, \end{cases} \]

   where ε = θ − θ̂, ‖ε‖² = Σ_{i=1}^{p} ε_i², and δ → 0.
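   A sketch of the usual argument for this 0-1 ("hit-or-miss") cost: the Bayes risk is

   \[ \mathcal{R} = E\left[ C(\boldsymbol{\epsilon}) \right] = \int \left[ 1 - \int_{\|\boldsymbol{\theta} - \hat{\boldsymbol{\theta}}\| < \delta} p(\boldsymbol{\theta} \mid \mathbf{x})\, d\boldsymbol{\theta} \right] p(\mathbf{x})\, d\mathbf{x}, \]

   so minimizing the risk amounts to maximizing the posterior mass inside a ball of radius δ centred at θ̂. As δ → 0 that mass behaves like p(θ̂ | x) times the ball's volume, and it is therefore maximized by placing θ̂ at the posterior mode.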

6. If x is a 2 × 1 random vector with PDF

   \[ \mathbf{x} \sim \mathcal{N}(\mathbf{0}, \mathbf{C}), \]

   prove that the PDF of xᵀC⁻¹x is that of a χ²₂ random variable.
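   A sketch of one standard route, via whitening: letting y = C^{−1/2} x gives y ∼ N(0, I), so

   \[ \mathbf{x}^T \mathbf{C}^{-1} \mathbf{x} = \mathbf{y}^T \mathbf{y} = y_1^2 + y_2^2, \]

   a sum of squares of two independent N(0, 1) variables, which is χ²₂ by definition (its PDF is (1/2) e^{−u/2} for u ≥ 0).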
