Spectral Density Function Explained

The document discusses the spectral density function, focusing on the distinction between deterministic and stochastic signals. It explains Parseval's theorem, which states that the total energy of a signal is conserved when transforming between time and frequency domains, and introduces the Power Spectral Density (PSD) for stochastic processes. Additionally, it covers the Wiener-Khinchin theorem, which relates the power spectral density to the autocorrelation function of wide-sense stationary processes.
SPECTRAL DENSITY FUNCTION

Key Concepts: Deterministic vs. Stochastic


• Deterministic signal x(t): its energy spectrum is $|X(\omega)|^2$

$$X(\omega) = \int_{-\infty}^{+\infty} x(t)\,e^{-j\omega t}\,dt$$

Energy distribution: $|X(\omega)|^2$ across frequency
Parseval’s Theorem - Definition
Parseval’s theorem states that the total energy of a signal is the same
whether calculated in the time domain or the frequency domain.
Mathematically expressed as:

$$\int_{-\infty}^{+\infty} |x(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{+\infty} |X(\omega)|^2\,d\omega$$

Where:
- $x(t)$ is the time-domain signal
- $X(\omega)$ is its Fourier transform (frequency-domain representation)
Intuitive Meaning
• Parseval’s theorem confirms that energy is conserved when
we transform a signal between time and frequency domains
• The energy content remains unchanged regardless of our
perspective (time or frequency)
• It establishes that the Fourier transform is an energy-preserving operation
Mathematical Foundation
For a deterministic signal $x(t)$ with Fourier transform $X(\omega)$:

1. Fourier transform: $X(\omega) = \int_{-\infty}^{+\infty} x(t)\,e^{-j\omega t}\,dt$
2. Signal energy in the time domain: $E_t = \int_{-\infty}^{+\infty} |x(t)|^2\,dt$
3. Parseval's theorem shows: $E_t = \dfrac{1}{2\pi}\int_{-\infty}^{+\infty} |X(\omega)|^2\,d\omega = E_f$
4. Therefore: $E_t = E_f = E$ (total signal energy)
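As an illustrative check (not part of the original slides), energy conservation can be verified numerically by approximating the continuous Fourier transform with an FFT. The Gaussian test pulse and grid sizes below are arbitrary choices for the sketch:

```python
import numpy as np

# Sample a Gaussian pulse x(t) = exp(-t^2) on a grid wide enough
# that the signal has decayed to ~0 at the edges.
t = np.linspace(-10, 10, 4096, endpoint=False)
dt = t[1] - t[0]
x = np.exp(-t**2)

# Energy in the time domain: integral of |x(t)|^2
E_t = np.sum(np.abs(x)**2) * dt

# Approximate the continuous Fourier transform X(w) by dt * FFT,
# then integrate |X(w)|^2 / (2*pi) over the FFT frequency grid.
X = dt * np.fft.fft(x)
dw = 2 * np.pi / (len(t) * dt)          # frequency-bin spacing in rad/s
E_f = np.sum(np.abs(X)**2) * dw / (2 * np.pi)

print(E_t, E_f)   # both approximate sqrt(pi/2) ~ 1.2533
```

The two sums agree to machine precision, since the discrete Parseval identity underlies the FFT.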
Spectral density function
• Power spectrum
– For a deterministic signal x(t)
• The spectrum is well defined: if $X(\omega)$ represents its Fourier transform

$$X(\omega) = \int_{-\infty}^{+\infty} x(t)\,e^{-j\omega t}\,dt,$$

• then $|X(\omega)|^2$ represents its energy spectrum. This follows from Parseval's theorem, since the signal energy is given by

$$\int_{-\infty}^{+\infty} |x(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{+\infty} |X(\omega)|^2\,d\omega = E.$$

• Thus $|X(\omega)|^2\,\Delta\omega$ represents the signal energy in the band $(\omega, \omega + \Delta\omega)$.

[Figure: a signal $X(t)$ in the time domain and its energy spectrum $|X(\omega)|^2$, with a shaded strip marking the energy in the band $(\omega, \omega + \Delta\omega)$.]
Example: Rectangular Pulse

For a rectangular pulse $x(t) = 1$ for $|t| < T/2$ and $0$ elsewhere:

• Time-domain energy: $E_t = \int_{-T/2}^{T/2} 1^2\,dt = T$
• Fourier transform: $X(\omega) = \dfrac{\sin(\omega T/2)}{\omega/2}$
• Frequency-domain energy: $E_f = \dfrac{1}{2\pi}\int_{-\infty}^{+\infty} \left(\dfrac{\sin(\omega T/2)}{\omega/2}\right)^2 d\omega = T$
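The frequency-domain energy above can be checked numerically (a sketch, not from the slides; the pulse width $T=2$ and the truncation of the integral at $|\omega| = 500$ are arbitrary):

```python
import numpy as np

T = 2.0                                    # pulse width (arbitrary choice)
w = np.arange(-500.0, 500.0, 5e-4)         # frequency grid in rad/s
dw = w[1] - w[0]

# X(w) = sin(wT/2)/(w/2) = T * sinc(wT/(2*pi)) with numpy's normalized
# sinc, which also handles the w = 0 point cleanly.
X = T * np.sinc(w * T / (2 * np.pi))

# Frequency-domain energy: (1/2pi) * integral of |X(w)|^2
E_f = np.sum(np.abs(X)**2) * dw / (2 * np.pi)

print(E_f)   # close to T = 2 (tails beyond |w| = 500 are truncated)
```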
Discrete-Time Version
For discrete-time signals, Parseval’s theorem takes the form:
$$\sum_{n=-\infty}^{+\infty} |x[n]|^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left|X(e^{j\omega})\right|^2 d\omega$$

For the Discrete Fourier Transform (DFT) of length N:

$$\sum_{n=0}^{N-1} |x[n]|^2 = \frac{1}{N}\sum_{k=0}^{N-1} |X[k]|^2$$
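The DFT form of the identity holds exactly and is easy to verify (an illustrative sketch; the random test signal is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)            # arbitrary real test signal

X = np.fft.fft(x)

lhs = np.sum(np.abs(x)**2)               # time-domain energy
rhs = np.sum(np.abs(X)**2) / len(x)      # (1/N) * sum |X[k]|^2

print(lhs, rhs)   # identical up to floating-point rounding
```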
Definitions & Preliminaries
Stochastic process X(t): Power Spectral Density (PSD)
The Power Spectral Density (PSD) $S_{xx}(\omega)$ describes how the power of a signal is distributed over different frequencies. Formally, for a signal $x(t)$, the PSD is defined as:

$$S_{xx}(\omega) = \lim_{T\to\infty} \frac{1}{T}\,E\!\left\{|X_T(\omega)|^2\right\}$$

where $X_T(\omega)$ is the Fourier transform of the truncated signal $x_T(t)$.
Definitions & Preliminaries
Properties:
- The PSD is always real and non-negative: $S_{xx}(\omega) \ge 0$
- For real-valued signals, the PSD is even: $S_{xx}(\omega) = S_{xx}(-\omega)$
- The total area under the PSD curve equals the total average power:

$$P_{\text{total}} = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{xx}(\omega)\,d\omega$$
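The total-power property can be illustrated with SciPy's Welch estimator (a sketch, assuming `scipy` is available; the white-noise test signal and segment length are arbitrary choices):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 1.0
x = rng.standard_normal(200_000)         # unit-variance white noise

# One-sided Welch PSD estimate in density scaling (power per Hz)
f, Pxx = signal.welch(x, fs=fs, nperseg=1024)

# Integrating the one-sided PSD over frequency recovers the average power
power = np.sum(Pxx) * (f[1] - f[0])
print(power, np.var(x))   # both close to 1
```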
Average Power Distribution
– The average power distribution based on (–T, T) is the ensemble average of the power distribution:

$$P_T(\omega) = E\!\left\{\frac{|X_T(\omega)|^2}{2T}\right\} = \frac{1}{2T}\int_{-T}^{T}\!\int_{-T}^{T} E\{X(t_1)\,X^*(t_2)\}\,e^{-j\omega(t_1-t_2)}\,dt_1\,dt_2$$

$$= \frac{1}{2T}\int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1, t_2)\,e^{-j\omega(t_1-t_2)}\,dt_1\,dt_2$$

– This represents the power distribution of X(t) based on (–T, T).

– If X(t) is assumed to be w.s.s., then

$$R_{XX}(t_1, t_2) = R_{XX}(t_1 - t_2)$$

– and we have

$$P_T(\omega) = \frac{1}{2T}\int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1 - t_2)\,e^{-j\omega(t_1-t_2)}\,dt_1\,dt_2.$$
Wiener-Khinchin Theorem
The Wiener-Khinchin Theorem states that:
The power spectral density of a wide-sense stationary random
process is equal to the Fourier transform of its autocorrelation
function.
Mathematically:

$$S_{xx}(f) = \mathcal{F}\{R_{xx}(\tau)\} = \int_{-\infty}^{\infty} R_{xx}(\tau)\,e^{-j2\pi f\tau}\,d\tau$$

Conversely:

$$R_{xx}(\tau) = \mathcal{F}^{-1}\{S_{xx}(f)\} = \int_{-\infty}^{\infty} S_{xx}(f)\,e^{j2\pi f\tau}\,df$$
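The transform pair can be checked numerically for a concrete autocorrelation (an illustrative sketch, using the $\omega$ convention $S(\omega) = \int R(\tau)e^{-j\omega\tau}d\tau$ that appears elsewhere in these notes; the choice $R(\tau) = e^{-2|\tau|}$ is arbitrary):

```python
import numpy as np

# Autocorrelation R(tau) = exp(-2|tau|); its transform in the omega
# convention is S(w) = 4/(4 + w^2).
tau = np.arange(-20.0, 20.0, 1e-3)
R = np.exp(-2 * np.abs(tau))
dtau = tau[1] - tau[0]

def S_numeric(w):
    # Numerical Fourier transform of R(tau) at a single frequency w;
    # R is even, so the transform is real and reduces to a cosine integral.
    return np.sum(R * np.cos(w * tau)) * dtau

print(S_numeric(0.0), S_numeric(2.0))   # ~1.0 and ~0.5 (= 4/4 and 4/8)
```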
Proof of the Theorem
Part 1: Truncated Signal Approach
Consider a truncated version of the signal $x(t)$:

$$x_T(t) = \begin{cases} x(t), & |t| \le T/2 \\ 0, & |t| > T/2 \end{cases}$$

The Fourier transform of $x_T(t)$ is:

$$X_T(f) = \int_{-T/2}^{T/2} x(t)\,e^{-j2\pi f t}\,dt$$
Proof (continued)
The energy spectral density of $x_T(t)$ is:

$$|X_T(f)|^2 = X_T(f)\cdot X_T^*(f)$$

Expanding this product:

$$|X_T(f)|^2 = \int_{-T/2}^{T/2}\!\int_{-T/2}^{T/2} x(t)\,x^*(s)\,e^{-j2\pi f(t-s)}\,dt\,ds$$

Setting $\tau = t - s$:

$$|X_T(f)|^2 = \int_{-T}^{T}\left[\int_{-T/2}^{T/2-\tau} x(s+\tau)\,x^*(s)\,ds\right] e^{-j2\pi f\tau}\,d\tau$$
Proof (continued)
For a WSS process, the inner integral approaches (in expectation) $T\cdot R_{xx}(\tau)$ as $T \to \infty$:

$$\int_{-T/2}^{T/2-\tau} x(s+\tau)\,x^*(s)\,ds \approx T\cdot R_{xx}(\tau)$$

The power spectral density is defined as:

$$S_{xx}(f) = \lim_{T\to\infty} \frac{1}{T}\,E\!\left\{|X_T(f)|^2\right\}$$

Substituting:

$$S_{xx}(f) = \lim_{T\to\infty} \frac{1}{T}\int_{-T}^{T} T\cdot R_{xx}(\tau)\,e^{-j2\pi f\tau}\,d\tau$$

Simplifying:

$$S_{xx}(f) = \int_{-\infty}^{\infty} R_{xx}(\tau)\,e^{-j2\pi f\tau}\,d\tau = \mathcal{F}\{R_{xx}(\tau)\}$$
Proof (continued)
Part 2: Alternative Approach Using Expectation
We can also prove the theorem by considering the expected value of the
Fourier transform directly.
For a WSS process, we define:

$$S_{xx}(f) = \lim_{T\to\infty} \frac{1}{T}\,E\!\left\{|X_T(f)|^2\right\}$$

Using the definition of autocorrelation and taking expectations:

$$E\{x(t)\,x^*(t+\tau)\} = R_{xx}(\tau)$$

This leads directly to:

$$S_{xx}(f) = \int_{-\infty}^{\infty} R_{xx}(\tau)\,e^{-j2\pi f\tau}\,d\tau$$
Spectral density function
– Khinchin-Wiener theorem
• The autocorrelation function and the power spectrum of a w.s.s
process form a Fourier transform pair.
RXX ( ) ⎯→
FT
S XX ( )  0.
+
RXX ( ) = 21 − S XX ( )e j d 
+
S XX ( ) = lim PT ( ) = − RXX ( )e − j d  0
T →

• For  = 0,
+
1
2 − S XX
( )d  = RXX (0) = E{| X (t ) |2 } = P, the total power.

17
Spectral density function
• Khinchin-Wiener theorem
• The area under Sxx() represents the total power of the process X(t),
and hence Sxx() truly represents the power spectrum.

S XX ( ) S XX (  ) 
represents the power
in the band ( ,  +  )

  +  
0

• The nonnegative-definiteness property of the auto-correlation function


translates into the “nonnegative” property for its Fourier transform
(power spectrum).
RXX ( ) nonnegative - definite  S XX ( )  0.

18
Spectral density function
• Khinchin-Wiener theorem
• If X(t) is a real w.s.s. process, then

$$R_{XX}(\tau) = R_{XX}(-\tau)$$

• so that

$$S_{XX}(\omega) = \int_{-\infty}^{+\infty} R_{XX}(\tau)\,e^{-j\omega\tau}\,d\tau = \int_{-\infty}^{+\infty} R_{XX}(\tau)\cos(\omega\tau)\,d\tau = 2\int_{0}^{\infty} R_{XX}(\tau)\cos(\omega\tau)\,d\tau = S_{XX}(-\omega) \ge 0$$

• The power spectrum is an even function (in addition to being real and non-negative).
Spectral density function
• Fourier Expansion
– A process x(t) is mean-square periodic with period T if $E\{|x(t+T) - x(t)|^2\} = 0$ for all t.
– A WSS process is mean-square periodic if its autocorrelation $R(\tau)$ is periodic with period $T = 2\pi/\omega_0$.
– Expanding $R(\tau)$ into a Fourier series:

$$R(\tau) = \sum_{n=-\infty}^{\infty} \gamma_n\,e^{jn\omega_0\tau}, \qquad \gamma_n = \frac{1}{T}\int_0^T R(\tau)\,e^{-jn\omega_0\tau}\,d\tau$$

– For a WSS periodic process x(t) with period T, form the sum:

$$\hat{x}(t) = \sum_{n=-\infty}^{\infty} c_n\,e^{jn\omega_0 t}, \qquad c_n = \frac{1}{T}\int_0^T x(t)\,e^{-jn\omega_0 t}\,dt$$

– This sum equals x(t) in the MS sense: $E\{|x(t) - \hat{x}(t)|^2\} = 0$ for all t.
Spectral density function
• Karhunen-Loeve Expansion
– In the general case:

$$\hat{x}(t) = \sum_{n=1}^{\infty} c_n\,\varphi_n(t), \qquad 0 < t < T$$

• where $\{\varphi_n(t)\}$ is a set of orthonormal functions on the interval (0, T):

$$\int_0^T \varphi_n(t)\,\varphi_m^*(t)\,dt = \delta[n-m]$$

• and the coefficients $c_n$ are random variables given by

$$c_n = \int_0^T x(t)\,\varphi_n^*(t)\,dt$$

• In this development, we consider the problem of determining a set of orthonormal functions $\varphi_n(t)$ such that: (a) the sum equals x(t); (b) the coefficients $c_n$ are orthogonal.
White Noise
• Definition: A random process whose PSD is constant for all
frequencies
• Properties:
– $S_{xx}(\omega) = N_0/2$ (two-sided PSD)
– $R_{xx}(\tau) = (N_0/2)\,\delta(\tau)$ (autocorrelation is a delta function)
– Samples are uncorrelated at different times
• Idealization: Theoretical construct (infinite bandwidth, infinite
power)
Colored Noise
• Definition: Random process with non-uniform PSD
• Common examples:
– Pink noise: $S_{xx}(\omega) \propto 1/|\omega|$
– Brown noise: $S_{xx}(\omega) \propto 1/\omega^2$
– Blue noise: $S_{xx}(\omega) \propto \omega^2$
Practical Estimation: Periodogram Method
• Definition: $\hat{P}(\omega) = |X_T(\omega)|^2/2T$, where $X_T(\omega)$ is the FT of x(t) over (−T, T)
• Properties:
– Asymptotically unbiased: $E\{\hat{P}(\omega)\} \to S_{xx}(\omega)$ as $T \to \infty$
– But does not converge in mean square (the variance does not decrease with T)
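The non-decreasing variance of the periodogram, and the effect of Welch-style averaging, can be illustrated with SciPy (a sketch, assuming `scipy` is available; the record lengths and segment size are arbitrary choices):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)

def spread(Pxx):
    # Relative spread of the estimate across frequency bins
    # (edge bins excluded because of one-sided scaling).
    return np.std(Pxx[1:-1]) / np.mean(Pxx[1:-1])

for N in (1024, 16384):
    x = rng.standard_normal(N)                   # white noise: flat true PSD
    _, P_per = signal.periodogram(x)
    _, P_wel = signal.welch(x, nperseg=256)
    print(N, spread(P_per), spread(P_wel))
# The periodogram's relative spread stays near 1 regardless of N, while
# Welch's averaged estimate gets smoother as the number of segments grows.
```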
Exercise 1: White Noise Analysis
A white noise process has a constant PSD $S_{xx}(\omega) = N_0/2$.
a) Find the autocorrelation function $R_{xx}(\tau)$.
b) Calculate the average power of the process.
c) If the noise is passed through an ideal low-pass filter with cutoff frequency $\omega_c$, what is the PSD of the output?
Exercise 1: White Noise Analysis
Solution:
a) Finding the autocorrelation function:

$$R_{xx}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{xx}(\omega)\,e^{j\omega\tau}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{N_0}{2}\,e^{j\omega\tau}\,d\omega = \frac{N_0}{4\pi}\int_{-\infty}^{\infty} e^{j\omega\tau}\,d\omega = \frac{N_0}{2}\,\delta(\tau)$$

This is a fundamental result: white noise has a delta-function autocorrelation, meaning it is uncorrelated at different time instants.
Exercise 1: White Noise Analysis
b) Average power calculation:

$$P = R_{xx}(0) = \frac{N_0}{2}\,\delta(0)$$

Since $\delta(0) = \infty$ theoretically, this gives $P = \infty$, which highlights that ideal white noise is a mathematical abstraction with infinite power. In practice, real noise always has finite bandwidth and power.
Exercise 1: White Noise Analysis
c) After passing through an ideal low-pass filter with cutoff frequency $\omega_c$:

$$S_{yy}(\omega) = |H(\omega)|^2 \cdot S_{xx}(\omega)$$

where $H(\omega)$ is the filter's frequency response:

$$H(\omega) = \begin{cases} 1, & |\omega| \le \omega_c \\ 0, & |\omega| > \omega_c \end{cases}$$

Therefore:

$$S_{yy}(\omega) = \begin{cases} N_0/2, & |\omega| \le \omega_c \\ 0, & |\omega| > \omega_c \end{cases}$$

The output power is finite:

$$P_y = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c} \frac{N_0}{2}\,d\omega = \frac{N_0\,\omega_c}{2\pi}$$

This demonstrates how filtering limits the infinite power of ideal white noise to a finite value.
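A discrete-time analogue of this result is easy to simulate (an illustrative sketch: discrete white noise is ideally low-pass filtered in the FFT domain, and the output power equals the input power times the retained band fraction, mirroring the $P_y = N_0\omega_c/2\pi$ result; the cutoff and record length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
x = rng.standard_normal(N)               # discrete white noise, power ~ 1

# Ideal low-pass filtering in the frequency domain: keep bins with
# |f| <= fc (normalized so the full band is |f| <= 0.5).
fc = 0.25                                # keep half of the band
f = np.fft.fftfreq(N)
X = np.fft.fft(x)
X[np.abs(f) > fc] = 0.0
y = np.fft.ifft(X).real

# Output power ~= input power * retained band fraction (here 1/2)
print(np.var(x), np.var(y))   # ~1.0 and ~0.5
```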
Exercise 2: Autocorrelation and PSD
Consider a WSS process with autocorrelation function $R_{xx}(\tau) = \sigma^2 e^{-a|\tau|}$, where $a > 0$.
a) Find the power spectral density $S_{xx}(\omega)$.
b) Calculate the total power of the process.
c) Find the mean-square value of the process after passing through an ideal bandpass filter with passband $(\omega_1, \omega_2)$.
Exercise 2: Autocorrelation and PSD
a) Finding the power spectral density:

$$S_{xx}(\omega) = \int_{-\infty}^{\infty} R_{xx}(\tau)\,e^{-j\omega\tau}\,d\tau = \int_{-\infty}^{\infty} \sigma^2 e^{-a|\tau|}\,e^{-j\omega\tau}\,d\tau$$

$$= \sigma^2\left[\int_{-\infty}^{0} e^{(a-j\omega)\tau}\,d\tau + \int_{0}^{\infty} e^{-(a+j\omega)\tau}\,d\tau\right]$$

$$= \sigma^2\left[\left.\frac{e^{(a-j\omega)\tau}}{a-j\omega}\right|_{-\infty}^{0} + \left.\frac{-e^{-(a+j\omega)\tau}}{a+j\omega}\right|_{0}^{\infty}\right] = \sigma^2\left[\frac{1}{a-j\omega} + \frac{1}{a+j\omega}\right] = \sigma^2\cdot\frac{2a}{a^2+\omega^2}$$

This is a Lorentzian spectrum that decreases as $|\omega|$ increases, showing that the process has more power at lower frequencies.
Exercise 2: Autocorrelation and PSD
b) The total power is:

$$P = R_{xx}(0) = \sigma^2$$

This can also be verified by integrating the PSD:

$$P = \frac{1}{2\pi}\int_{-\infty}^{\infty} \sigma^2\cdot\frac{2a}{a^2+\omega^2}\,d\omega = \sigma^2$$
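The PSD integral can be confirmed numerically (a sketch; the values $\sigma^2 = 1$, $a = 2$ and the truncation at $|\omega| = 1000$ are arbitrary choices):

```python
import numpy as np

sigma2, a = 1.0, 2.0
w = np.arange(-1000.0, 1000.0, 1e-2)
S = sigma2 * 2 * a / (a**2 + w**2)

# (1/2pi) * integral of S(w) dw should recover R(0) = sigma^2
P = np.sum(S) * (w[1] - w[0]) / (2 * np.pi)
print(P)   # ~1.0 (small deficit from truncating the tails)
```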
Exercise 2: Autocorrelation and PSD
c) For the mean-square value after passing through an ideal bandpass filter:

$$P_{\text{out}} = \frac{1}{2\pi}\int_{\omega_1}^{\omega_2} S_{xx}(\omega)\,d\omega = \frac{1}{2\pi}\int_{\omega_1}^{\omega_2} \sigma^2\cdot\frac{2a}{a^2+\omega^2}\,d\omega = \frac{\sigma^2}{\pi}\int_{\omega_1}^{\omega_2} \frac{a}{a^2+\omega^2}\,d\omega$$

Using the standard integral $\int \frac{a}{a^2+\omega^2}\,d\omega = \tan^{-1}\frac{\omega}{a}$:

$$P_{\text{out}} = \frac{\sigma^2}{\pi}\left[\tan^{-1}\frac{\omega_2}{a} - \tan^{-1}\frac{\omega_1}{a}\right]$$

This result shows how the output power depends on the filter bandwidth and the parameter $a$ of the process.
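The closed-form arctangent expression can be cross-checked against direct numerical integration (a sketch; the parameter values $a = 2$, $\omega_1 = 1$, $\omega_2 = 3$ are arbitrary):

```python
import numpy as np

sigma2, a = 1.0, 2.0
w1, w2 = 1.0, 3.0

# Closed form from the slide
P_closed = sigma2 / np.pi * (np.arctan(w2 / a) - np.arctan(w1 / a))

# Direct numerical integration of (1/2pi) * S_xx(w) over (w1, w2)
dw = 1e-5
w = np.arange(w1, w2, dw)
S = sigma2 * 2 * a / (a**2 + w**2)
P_numeric = np.sum(S) * dw / (2 * np.pi)

print(P_closed, P_numeric)   # the two values agree
```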
Exercise 3: Linear Systems and PSD
A WSS process X(t) with PSD 𝑆𝑥𝑥 𝜔 is input to an LTI system
with frequency response 𝐻 𝜔 .
a) Find the PSD of the output process Y(t).
b) If $S_{xx}(\omega) = \dfrac{1}{1+\omega^2}$ and $H(\omega) = \dfrac{j\omega}{1+j\omega}$, find $S_{yy}(\omega)$.
c) Calculate the power of the output process.
Exercise 3: Linear Systems and PSD
a) The PSD of the output process Y(t) is:

$$S_{yy}(\omega) = |H(\omega)|^2 \cdot S_{xx}(\omega)$$
This is a fundamental result in random signal processing: when a
random signal passes through an LTI system, its PSD is shaped by the
magnitude squared of the system’s frequency response.
Exercise 3: Linear Systems and PSD
b) Given $S_{xx}(\omega) = \dfrac{1}{1+\omega^2}$ and $H(\omega) = \dfrac{j\omega}{1+j\omega}$:

First, calculate $|H(\omega)|^2$:

$$|H(\omega)|^2 = \left|\frac{j\omega}{1+j\omega}\right|^2 = \frac{|j\omega|^2}{|1+j\omega|^2} = \frac{\omega^2}{1+\omega^2}$$

Therefore:

$$S_{yy}(\omega) = |H(\omega)|^2 \cdot S_{xx}(\omega) = \frac{\omega^2}{1+\omega^2}\cdot\frac{1}{1+\omega^2} = \frac{\omega^2}{(1+\omega^2)^2}$$

This shows the system acts as a high-pass filter on the input spectrum, attenuating low frequencies.
Exercise 3: Linear Systems and PSD
c) The power of the output process:

$$P_y = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{yy}(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{\omega^2}{(1+\omega^2)^2}\,d\omega$$

This integral can be solved using contour integration or standard integration tables:

$$\int_{-\infty}^{\infty} \frac{\omega^2}{(1+\omega^2)^2}\,d\omega = \frac{\pi}{2}$$

Therefore:

$$P_y = \frac{1}{2\pi}\cdot\frac{\pi}{2} = \frac{1}{4}$$

This result shows that the system reduces the power of the input process (which is 1/2) to 1/4.
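The tabulated value $\pi/2$, and hence $P_y = 1/4$, can be confirmed numerically (a sketch; the truncation at $|\omega| = 2000$ is an arbitrary choice):

```python
import numpy as np

w = np.arange(-2000.0, 2000.0, 1e-2)
Syy = w**2 / (1 + w**2)**2

P_y = np.sum(Syy) * (w[1] - w[0]) / (2 * np.pi)
print(P_y)   # ~0.25
```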
Solution 3
a) $S_{yy}(\omega) = |H(\omega)|^2 \cdot S_{xx}(\omega)$

b) $|H(\omega)|^2 = \dfrac{\omega^2}{1+\omega^2}$, so $S_{yy}(\omega) = \dfrac{\omega^2}{(1+\omega^2)^2}$

c) $P_y = \dfrac{1}{2\pi}\displaystyle\int_{-\infty}^{\infty} \dfrac{\omega^2}{(1+\omega^2)^2}\,d\omega = \dfrac{1}{4}$
Exercise 8: Filter Design Based on PSD
A signal has PSD $S_{xx}(\omega) = 1/(1+\omega^4)$, plus white noise with PSD $N_0/2 = 0.1$.
a) Design a Wiener filter 𝐻 𝜔 that minimizes mean square
error in estimating the signal.
b) Calculate the resulting minimum MSE.
c) How does the filter change if 𝑁0 increases to 1.0?
Solution 8
a) $H(\omega) = \dfrac{S_{xx}(\omega)}{S_{xx}(\omega) + N_0/2} = \dfrac{1/(1+\omega^4)}{1/(1+\omega^4) + 0.1}$

b) $\text{MSE} = \dfrac{1}{2\pi}\displaystyle\int_{-\infty}^{\infty} \dfrac{S_{xx}(\omega)\,N_0/2}{S_{xx}(\omega) + N_0/2}\,d\omega$ (requires numerical integration)
c) As noise increases, the filter becomes more aggressive in
attenuating frequencies with low SNR
Exercise 9: System Response to Random Inputs
A WSS process with autocorrelation $R_{xx}(\tau) = e^{-|\tau|}$ is input to a system with impulse response $h(t) = e^{-2t}\,u(t)$.
a) Find the PSD of the input process.
b) Find the PSD of the output process.
c) Calculate the autocorrelation of the output process.
d) Find the cross-correlation between input and output.
Solution 9
a) $S_{xx}(\omega) = \dfrac{2}{1+\omega^2}$

b) $S_{yy}(\omega) = |H(\omega)|^2\,S_{xx}(\omega) = \left|\dfrac{1}{2+j\omega}\right|^2\cdot\dfrac{2}{1+\omega^2} = \dfrac{2}{(4+\omega^2)(1+\omega^2)}$

c) $R_{yy}(\tau)$ is the inverse Fourier transform of $S_{yy}(\omega)$

d) $R_{xy}(\tau) = h(\tau) * R_{xx}(\tau) = \displaystyle\int_{-\infty}^{\infty} h(\tau - u)\,R_{xx}(u)\,du$
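The frequency response $H(\omega) = 1/(2+j\omega)$ used in part b) can be verified by numerically transforming the impulse response (an illustrative sketch; the integration grid is an arbitrary choice):

```python
import numpy as np

# h(t) = exp(-2t) u(t); its frequency response should be H(w) = 1/(2+jw)
t = np.arange(0.0, 20.0, 1e-4)
h = np.exp(-2 * t)
dt = t[1] - t[0]

def H_numeric(w):
    # Numerical Fourier transform of h(t) at a single frequency w
    return np.sum(h * np.exp(-1j * w * t)) * dt

for w in (0.0, 1.0, 2.0):
    print(w, H_numeric(w), 1 / (2 + 1j * w))   # pairs agree closely
```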
Summary of Key Concepts
• Power Spectral Density provides frequency domain
characterization of random processes
• Wiener-Khinchin theorem connects time domain
(autocorrelation) to frequency domain (PSD)
• PSD estimation techniques balance resolution, variance, and
computational complexity
• Applications span multiple disciplines from communications
to finance
Advanced Topics
• Multidimensional spectral analysis
• Time-frequency analysis (Wigner-Ville distributions)
• Higher-order spectra (bispectrum, trispectrum)
• Spectral estimation in non-stationary environments
Recommended Literature
1. Papoulis, A. & Pillai, S.U. “Probability, Random Variables and
Stochastic Processes”
2. Proakis, J.G. & Manolakis, D.G. “Digital Signal Processing”
3. Kay, S.M. “Modern Spectral Estimation: Theory and
Application”
4. Stoica, P. & Moses, R. “Spectral Analysis of Signals”
Proof of Wiener-Khinchin Theorem
Starting with the definition of the power spectral density as:
$$S_{xx}(\omega) = \lim_{T\to\infty} E\!\left\{\frac{|X_T(\omega)|^2}{2T}\right\}$$

where $X_T(\omega)$ is the Fourier transform of $x(t)$ over the interval $(-T, T)$:

$$X_T(\omega) = \int_{-T}^{T} x(t)\,e^{-j\omega t}\,dt$$
Proof Continued
Expanding $|X_T(\omega)|^2 = X_T(\omega)\,X_T^*(\omega)$:

$$|X_T(\omega)|^2 = \int_{-T}^{T}\!\int_{-T}^{T} x(t_1)\,x^*(t_2)\,e^{-j\omega(t_1-t_2)}\,dt_1\,dt_2$$

Taking the expectation:

$$E\{|X_T(\omega)|^2\} = \int_{-T}^{T}\!\int_{-T}^{T} E\{x(t_1)\,x^*(t_2)\}\,e^{-j\omega(t_1-t_2)}\,dt_1\,dt_2 = \int_{-T}^{T}\!\int_{-T}^{T} R_{xx}(t_1-t_2)\,e^{-j\omega(t_1-t_2)}\,dt_1\,dt_2$$
Proof Conclusion
Substituting $\tau = t_1 - t_2$ and applying the change of variables leads to:

$$S_{xx}(\omega) = \int_{-\infty}^{\infty} R_{xx}(\tau)\,e^{-j\omega\tau}\,d\tau,$$

which establishes the Wiener-Khinchin theorem.
Exercise 10: Spectral Estimation Performance
Compare the performance of the periodogram, Bartlett's method, and Welch's method for estimating the PSD of a process with $R_{xx}(\tau) = e^{-2|\tau|}\cos(5\tau)$.
a) Derive the true PSD of this process analytically
b) For a finite record of length 𝑁 = 1000 samples, calculate and
plot the bias and variance of each estimator
c) Discuss the tradeoffs between resolution and variance for
each method
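For part a), a candidate closed form follows from the modulation property applied to $\mathcal{F}\{e^{-a|\tau|}\} = 2a/(a^2+\omega^2)$: multiplying by $\cos(\omega_0\tau)$ splits and shifts the Lorentzian, giving $S(\omega) = \frac{a}{a^2+(\omega-\omega_0)^2} + \frac{a}{a^2+(\omega+\omega_0)^2}$ with $a = 2$, $\omega_0 = 5$. This can be checked numerically (a sketch; the grids are arbitrary):

```python
import numpy as np

a, w0 = 2.0, 5.0
tau = np.arange(-20.0, 20.0, 1e-3)
R = np.exp(-a * np.abs(tau)) * np.cos(w0 * tau)
dtau = tau[1] - tau[0]

def S_numeric(w):
    # Direct numerical Fourier transform of R(tau); R is even, so the
    # transform reduces to a real cosine integral.
    return np.sum(R * np.cos(w * tau)) * dtau

def S_closed(w):
    # Modulation property applied to F{exp(-a|tau|)} = 2a/(a^2 + w^2)
    return a / (a**2 + (w - w0)**2) + a / (a**2 + (w + w0)**2)

for w in (0.0, 5.0, 10.0):
    print(w, S_numeric(w), S_closed(w))   # pairs agree closely
```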
References and Resources
Textbooks
1. Stoica, P., & Moses, R. (2005). Spectral Analysis of Signals. Prentice
Hall.
2. Kay, S. M. (1988). Modern Spectral Estimation: Theory and
Application. Prentice Hall.
3. Marple, S. L. (1987). Digital Spectral Analysis with Applications.
Prentice Hall.
4. Percival, D. B., & Walden, A. T. (1993). Spectral Analysis for Physical
Applications. Cambridge University Press.
5. Proakis, J. G., & Manolakis, D. G. (2007). Digital Signal Processing.
Pearson.
Journal Articles
1. Welch, P. D. (1967). “The use of fast Fourier transform for the estimation
of power spectra: A method based on time averaging over short, modified
periodograms.” IEEE Transactions on Audio and Electroacoustics, 15(2), 70-
73.
2. Thomson, D. J. (1982). “Spectrum estimation and harmonic analysis.”
Proceedings of the IEEE, 70(9), 1055-1096.
3. Burg, J. P. (1975). “Maximum entropy spectral analysis.” PhD dissertation,
Stanford University.
4. Cohen, L. (1989). “Time-frequency distributions—a review.” Proceedings
of the IEEE, 77(7), 941-981.
5. Mallat, S. G., & Zhang, Z. (1993). “Matching pursuits with time-frequency
dictionaries.” IEEE Transactions on Signal Processing, 41(12), 3397-3415.
Online Resources
1. MIT OpenCourseWare: “Digital Signal Processing”
2. Stanford University: “Spectral Analysis for Physical
Applications”
3. MATLAB Documentation: “Signal Processing Toolbox”
4. Python Documentation: “SciPy Signal Processing”
5. NASA Goddard Space Flight Center: “Power Spectral Density
Computation”
