Lecture 5: Wavefront reconstruction and prediction
Rufus Fraanje
[email protected]
Delft Center for Systems and Control, TU Delft
May 30, 2011
Exercise
Study the relation between the Strehl ratio S, the Fried parameter r0 and the
telescope diameter D. Do this by means of a computer program that:
1. Generates random wavefront phase aberrations satisfying the von Kármán
   model; choose L0 = 10 m, r0 = 0.2, ..., 1 m, and D = 2, ..., 20 m.
   Make sure the grid is of sufficient size and resolution.
2. Computes the corresponding point spread function.
3. Computes the Strehl ratio
       S = I(0, 0) / I_o(0, 0)
   where I(0, 0) and I_o(0, 0) are the intensities at the intersection of the
   optical axis with the image plane, with and without phase aberration.
4. Averages your results over a sufficient number of realizations of the
   wavefront phase.
Make graphs of the Strehl ratio S versus the Fried parameter r0 for several
choices of the telescope diameter D. How can you explain the results?
Matlab example

L0      = 10;          % [m] outer scale of turbulence
D_list  = [2:2:20];    % telescope diameter
r0_list = [0.1:0.1:1]; % Fried parameter
nD      = length(D_list);
nr0     = length(r0_list);
Nav     = 500;                 % number of realizations for averaging
Strehl  = zeros(nr0, nD, Nav); % array for Strehl ratios
for iD = 1:nD,
  D = D_list(iD);              % telescope diameter
  for ir0 = 1:nr0,
    r0 = r0_list(ir0);         % Fried parameter
    radius = min(max(16, ceil(2*D/r0)), 40); % # points along radius
    ...
Matlab example

    Corr = vonkarmancorr_space(x, y, r0, L0); % compute corr. coeffs
    Corr_sqrt = chol(Corr);                   % Cholesky factoriz.
    ...
    % Compute undistorted wave over circular telescope grid:
    Af = circshift(fft2(A), (p+1)*radius*[1 1]);
    % intensity at center:
    I_0 = abs(Af((p+1)*radius+1, (p+1)*radius+1))^2;
    for iav = 1:Nav,
      % Generate a square random phase screen
      % (chol returns R with R'*R = Corr, so R'*z has covariance Corr)
      phi = Corr_sqrt' * randn((2*radius+1)^2, 1);
      ... % place phi at center of grid, i.e. fill Phi
Matlab example

      % Compute the wave over the circular telescope grid:
      Ap = A .* exp(sqrt(-1)*Phi);
      % perform the Fourier transform of the distorted wave:
      Apf = circshift(fft2(Ap), (p+1)*radius*[1 1]);
      % intensity at center:
      I_p = abs(Apf((p+1)*radius+1-av:(p+1)*radius+1+av, ...
                    (p+1)*radius+1-av:(p+1)*radius+1+av)).^2;
      Strehl(ir0, iD, iav) = I_p / I_0;
    end; % number of averages Nav
  end; % number of Fried parameters nr0
end; % number of telescope diameters nD
mean(Strehl, 3), % average over realizations
Matlab example
[Figure: Strehl ratio S versus Fried parameter r0 (log-log axes) for telescope
diameters D = 2, 4, ..., 20 m.]
Matlab example
[Figure: Strehl ratio versus Fried parameter r0 (log-log axes), uncompensated
(left) and tip-tilt compensated (right), for D = 0.10, 0.20, ..., 1.00 m.]
Outline
Preliminaries: derivatives of functions w.r.t. matrices;
Wavefront reconstruction from pixel intensities;
Wavefront reconstruction from local gradients;
Wavefront prediction;
Current research.
Preliminaries
Lemma (1)
    d/dX tr(A X B^T) = A^T B
Proof.
Note that
    tr(A X B^T) = Σ_k Σ_i Σ_j a_ki x_ij b_kj
hence
    d tr(A X B^T) / dX_ij = Σ_k a_ki b_kj = a_i^T b_j
where A = [a_1 ... a_n] and B = [b_1 ... b_m] (columns a_i, b_j).
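Lemma (1) is easy to sanity-check numerically. A NumPy sketch (the matrix sizes here are arbitrary choices) comparing a central finite-difference gradient of tr(A X B^T) against A^T B:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # A is 4x3, so X must be 3x5 and B 4x5
X = rng.standard_normal((3, 5))
B = rng.standard_normal((4, 5))

def f(X):
    """Scalar function tr(A X B^T)."""
    return np.trace(A @ X @ B.T)

# Numerical gradient via central differences, entry by entry
eps = 1e-6
num_grad = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X); E[i, j] = eps
        num_grad[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

analytic = A.T @ B   # Lemma (1): d/dX tr(A X B^T) = A^T B
print(np.allclose(num_grad, analytic, atol=1e-5))   # True
```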
Preliminaries
Lemma (2)
    d/dX tr(A X^T B^T) = B^T A
Lemma (3)
    d/dX tr(A X B X^T C) = A^T C^T X B^T + C A X B
Proof.
Use the product rule:
    d tr(A X B X^T C) = tr(A dX B X^T C) + tr(A X B dX^T C)
then evaluate each term using the previous two lemmas.
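Lemma (3) can be checked the same way; a NumPy sketch (square matrices for simplicity) comparing finite differences against A^T C^T X B^T + C A X B:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
X = rng.standard_normal((n, n))

def f(X):
    """Scalar function tr(A X B X^T C)."""
    return np.trace(A @ X @ B @ X.T @ C)

# Numerical gradient via central differences
eps = 1e-6
num_grad = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        E = np.zeros_like(X); E[i, j] = eps
        num_grad[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

analytic = A.T @ C.T @ X @ B.T + C @ A @ X @ B   # Lemma (3)
print(np.allclose(num_grad, analytic, atol=1e-4))   # True
```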
Preliminaries
Theorem (Weighted least squares)
Let V = V^T and W = W^T and
    J(X) = tr( V (C - AXB) W (C - AXB)^T )
then
    dJ(X)/dX = -2 A^T V (C - AXB) W B^T
and dJ(X)/dX = 0 if and only if X satisfies the (generalized) normal equations:
    (A^T V A) X (B W B^T) = A^T V C W B^T
Proof.
The proof follows by straightforward use of the previous lemmas.
Exercise: verify this.
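A numerical sketch of the theorem (sizes and weights are illustrative; V and W are built symmetric positive definite): solve the generalized normal equations and confirm that no small perturbation of X decreases the cost.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, q = 5, 3, 2, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
C = rng.standard_normal((m, q))
# Symmetric positive-definite weights V (m x m) and W (q x q)
V = np.eye(m) + 0.1 * np.ones((m, m))
W = np.diag(rng.uniform(0.5, 2.0, q))

def J(X):
    """Weighted least-squares cost tr(V (C - AXB) W (C - AXB)^T)."""
    E = C - A @ X @ B
    return np.trace(V @ E @ W @ E.T)

# Solve the generalized normal equations (A^T V A) X (B W B^T) = A^T V C W B^T
X = np.linalg.solve(A.T @ V @ A, A.T @ V @ C @ W @ B.T) @ np.linalg.inv(B @ W @ B.T)

# The cost is convex in X, so no perturbation may decrease it at the solution
base = J(X)
for _ in range(100):
    assert J(X + 1e-3 * rng.standard_normal(X.shape)) >= base - 1e-9
print("minimum confirmed:", base >= 0)
```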
Wavefront reconstruction from pixel intensities
Problem of wavefront reconstruction: how to get φ(ξ, η) from i(x, y)?
Spatial impulse response given by:
    s(x, y) = F^{-1}{ a(ξ, η) e^{jφ(ξ,η)} }
Obtained image (incoherent imaging case, * denoting convolution):
    i(x, y) = o(x, y) * |s(x, y)|^2 + n(x, y)
[Diagram: pupil plane and image plane related by F{.} and F^{-1}{.} along the
optical axis z.]
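The pupil-to-image relation can be sketched in a few lines of NumPy (grid size, aperture radius and the toy defocus-like phase are illustrative choices, not from the lecture):

```python
import numpy as np

# Circular aperture a(xi, eta) on an N x N grid (pupil radius = N/4 samples)
N = 128
x = np.arange(N) - N // 2
XI, ETA = np.meshgrid(x, x)
a = (XI**2 + ETA**2 <= (N // 4) ** 2).astype(float)

phi = 1e-3 * (XI**2 + ETA**2) * a      # toy quadratic (defocus-like) phase
s = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(a * np.exp(1j * phi))))
psf = np.abs(s) ** 2                   # |s(x, y)|^2: the point spread function

# Without aberration (phi = 0) the PSF peaks on the optical axis
s0 = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(a)))
psf0 = np.abs(s0) ** 2
strehl = psf[N // 2, N // 2] / psf0[N // 2, N // 2]
print(0.0 < strehl <= 1.0)             # True: aberration reduces the on-axis peak
```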
Wavefront reconstruction from pixel intensities
Problem of wavefront reconstruction: how to get φ(ξ, η) from |s(x, y)|, knowing
a(ξ, η)?
    Wavefront φ(ξ, η)  →  PSF |s(x, y)| = |F^{-1}{ a(ξ, η) e^{jφ(ξ,η)} }|
the phase-retrieval problem.
Wavefront reconstruction from pixel intensities
Methods for point objects:
Method 1: Solve a nonlinear optimization problem, e.g., using
    Gerchberg-Saxton iterations^1,2;
Method 2: Use a wavefront sensor, e.g., Shack-Hartmann, and determine the
    wavefront by:
    - For each lenslet/sub-aperture determine the local tip-tilt by
      peak-detection or center of mass;
    - For each lenslet/sub-aperture determine the local wavefront by
      Method 1;
    - Determine the global wavefront by Method 1 using information on the
      geometry of the lenslet array.
Other: wavefront sensorless methods^3, ...
1 Gerchberg & Saxton, Optik, 237 (1972)
2 Fienup, Applied Optics, 21(15), (1982)
3 cf. Song et al., Optics Express, 23(18), (2010)
Wavefront reconstruction from pixel intensities
For extended objects:
Method 1: Solve a nonlinear optimization problem to determine both the wavefront
    phase and the object, e.g., using phase diversity^4;
Method 2: Use a wavefront sensor, e.g., Shack-Hartmann, and determine the
    wavefront by:
    - For each lenslet/sub-aperture determine the local tip-tilt by
      correlation tracking;
    - For each lenslet/sub-aperture determine the local wavefront by
      Method 1;
    - Determine the global wavefront by Method 1 using information on the
      geometry of the lenslet array.
4 Gonsalves, Opt. Eng., 21, (1982)
Wavefront reconstruction from pixel intensities
Spot shift by center of mass:
    Δx = ∬ x I(x, y) dx dy / ∬ I(x, y) dx dy
    Δy = ∬ y I(x, y) dx dy / ∬ I(x, y) dx dy
Wavefront phase (small angles, sin(θ) ≈ tan(θ) ≈ θ):
    θ_x ≈ Δx / f,    θ_y ≈ Δy / f
where f is the focal distance, such that
    φ(ξ, η) ≈ θ_x ξ + θ_y η
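The center-of-mass spot-shift estimate is one line per axis; a NumPy sketch on a synthetic Gaussian spot (grid size, spot width and the injected shift of (3.0, -2.0) pixels are hypothetical values):

```python
import numpy as np

# Toy focal-plane spot: Gaussian centered at (dx, dy) = (3.0, -2.0) pixels
N = 64
y, x = np.mgrid[0:N, 0:N]
x0, y0 = N / 2 + 3.0, N / 2 - 2.0
I = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * 4.0 ** 2))

# Center of mass: Delta_x = sum(x * I) / sum(I), and likewise for y
dx = (x * I).sum() / I.sum() - N / 2
dy = (y * I).sum() / I.sum() - N / 2
print(round(dx, 2), round(dy, 2))   # 3.0 -2.0
```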
Wavefront reconstruction from pixel intensities
Gerchberg-Saxton algorithm:
Input:  |s(x, y)| and a(ξ, η)
Output: ŝ(x, y) and φ̂(ξ, η)
1: ŝ(x, y) ← |s(x, y)|
2: while not converged do
3:     Ŝ(ξ, η) ← F(ŝ(x, y))
4:     Ŝ(ξ, η) ← a(ξ, η) e^{j∠Ŝ(ξ,η)}
5:     ŝ(x, y) ← F^{-1}(Ŝ(ξ, η))
6:     ŝ(x, y) ← |s(x, y)| e^{j∠ŝ(x,y)}
7: end while
8: return φ̂(ξ, η) = ∠Ŝ(ξ, η)
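Steps 1-8 above map directly onto a few NumPy lines; in this sketch the aperture size and the toy tilt phase are hypothetical choices, and the "measurement" |s(x, y)| is simulated from a known phase:

```python
import numpy as np

# Error-reduction (Gerchberg-Saxton) iteration for the pupil/image
# magnitude constraints.
N = 64
x = np.arange(N) - N // 2
XI, ETA = np.meshgrid(x, x)
a = (XI**2 + ETA**2 <= (N // 4) ** 2).astype(float)      # aperture a(xi, eta)

phi_true = a * (0.05 * XI + 0.08 * ETA)                  # phase to be recovered
s_abs = np.abs(np.fft.ifft2(a * np.exp(1j * phi_true)))  # "measured" |s(x, y)|

s = s_abs.astype(complex)          # step 1: start from the measured magnitude
errs = []
for _ in range(100):
    S = np.fft.fft2(s)                       # step 3: to the pupil plane
    S = a * np.exp(1j * np.angle(S))         # step 4: impose pupil magnitude
    s = np.fft.ifft2(S)                      # step 5: back to the image plane
    errs.append(np.linalg.norm(np.abs(s) - s_abs))
    s = s_abs * np.exp(1j * np.angle(s))     # step 6: impose image magnitude

# Fienup (1982): the error of this iteration is non-increasing
print(errs[-1] <= errs[0] + 1e-12)           # True
```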
Wavefront reconstruction from pixel intensities
Gerchberg-Saxton algorithm:
Simulation with 10 realizations of von Kármán turbulence: L0 = 10 m, D = 0.05 m
and r0 = 0.60 m, with a Hartmann mask to get more amplitude constraints.
[Figure: error ||S - Ŝ||^2 versus iteration number, for a zero-phase initial
condition and a center-of-mass determined initial condition.]
Wavefront reconstruction from pixel intensities
Gerchberg-Saxton algorithm:
[Figure: the true wavefront and the center-of-mass and Gerchberg-Saxton
reconstructions over the 0.05 m x 0.05 m aperture.]
Wavefront reconstruction from pixel intensities
Comparison of center-of-mass (COM) and Gerchberg-Saxton (GS):

                 COM              GS
Complexity:      O(N)             O(N^2 log(N)) per iteration
Accuracy:        only tip-tilt    depending on initialization
Application:     zonal & global   zonal & global
Wavefront reconstruction from local gradients
Local tilt measurements:
    t_x(i, j) = φ(i+1, j) - φ(i, j) + n_x(i, j)
    t_y(i, j) = φ(i, j+1) - φ(i, j) + n_y(i, j)
and all stacked in vectors:
    t = Gφ + n
Reconstruction:
1. Solve by line integration (neglecting or filtering the noise),
2. Solve by inversion (or estimation) in the Fourier domain,
3. Least squares: minimize ||t - Gφ̂||_2^2,
4. Linear estimation:
   - n a random variable with known statistics,
   - n and φ random variables with known statistics.
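The structure of G follows directly from the difference equations above; a NumPy sketch building G for a small (hypothetical) 4 x 4 phase grid and checking that the piston mode lies in its null space:

```python
import numpy as np

# Build the gradient matrix G for a small nx x ny phase grid, matching
# t_x(i, j) = phi(i+1, j) - phi(i, j) and t_y(i, j) = phi(i, j+1) - phi(i, j).
nx = ny = 4
n_phi = nx * ny
idx = lambda i, j: i * ny + j        # row-major flat index of phi(i, j)

rows = []
for i in range(nx - 1):              # x-differences
    for j in range(ny):
        r = np.zeros(n_phi); r[idx(i + 1, j)] = 1.0; r[idx(i, j)] = -1.0
        rows.append(r)
for i in range(nx):                  # y-differences
    for j in range(ny - 1):
        r = np.zeros(n_phi); r[idx(i, j + 1)] = 1.0; r[idx(i, j)] = -1.0
        rows.append(r)
G = np.vstack(rows)

print(G.shape)                              # (24, 16)
print(np.allclose(G @ np.ones(n_phi), 0))   # True: piston is in the null space
```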
Wavefront reconstruction from local gradients
Fourier domain method
Local tilt measurements:
    t_x(i, j) = φ(i+1, j) - φ(i, j)
    t_y(i, j) = φ(i, j+1) - φ(i, j)
after Fourier transform (FFT)^a:
    T_x(ω_x, ω_y) = (e^{jω_x} - 1) Φ(ω_x, ω_y)
    T_y(ω_x, ω_y) = (e^{jω_y} - 1) Φ(ω_x, ω_y)
Solve in the least-squares sense for each (discrete) frequency (ω_x, ω_y) ≠ (0, 0):
    Φ̂(ω_x, ω_y) = [ (e^{-jω_x} - 1) T_x(ω_x, ω_y) + (e^{-jω_y} - 1) T_y(ω_x, ω_y) ]
                  / [ 4 (sin^2(ω_x/2) + sin^2(ω_y/2)) ]
a Poyneer et al., JOSA, 19(10), (2002)
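The formula above can be exercised end to end on a periodic grid; in this NumPy sketch (grid size and the random screen are illustrative choices) noise-free differences are generated from a known zero-mean phase, which is then recovered exactly:

```python
import numpy as np

# Recover a periodic phase screen from its first differences via the FFT formula.
rng = np.random.default_rng(4)
N = 32
phi = rng.standard_normal((N, N))
phi -= phi.mean()                             # remove the unobservable piston

tx = np.roll(phi, -1, axis=1) - phi           # phi(i+1, j) - phi(i, j)
ty = np.roll(phi, -1, axis=0) - phi           # phi(i, j+1) - phi(i, j)

Tx, Ty = np.fft.fft2(tx), np.fft.fft2(ty)
w = 2 * np.pi * np.fft.fftfreq(N)             # discrete frequencies omega
WX, WY = np.meshgrid(w, w)                    # WX along axis 1, WY along axis 0
num = (np.exp(-1j * WX) - 1) * Tx + (np.exp(-1j * WY) - 1) * Ty
den = 4 * (np.sin(WX / 2) ** 2 + np.sin(WY / 2) ** 2)
den[0, 0] = 1.0                               # (0, 0): piston undetermined
Phi = num / den
Phi[0, 0] = 0.0                               # fix the piston to zero

phi_rec = np.real(np.fft.ifft2(Phi))
print(np.allclose(phi_rec, phi, atol=1e-8))   # True
```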
Wavefront reconstruction from local gradients
Least squares
Recall the stacked tilt measurements
    t = Gφ + n
Determine φ̂ by minimizing the squared errors
    J(φ̂) = (t - Gφ̂)^T (t - Gφ̂) = tr( (t - Gφ̂)(t - Gφ̂)^T )
By the weighted least-squares theorem (Preliminaries):
    dJ(φ̂)/dφ̂ = 0   ⇔   G^T G φ̂ = G^T t
If G^T G is invertible, the solution is given by:
    φ̂ = (G^T G)^{-1} G^T t
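A minimal NumPy check of the least-squares reconstruction on a 1-D difference matrix (noise-free, so the phase is recovered exactly up to piston; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
G = np.eye(n)[1:] - np.eye(n)[:-1]          # (n-1) x n first-difference matrix
phi = rng.standard_normal(n)
t = G @ phi                                  # stacked tilt measurements

phi_hat, *_ = np.linalg.lstsq(G, t, rcond=None)   # minimizes ||t - G phi_hat||

# lstsq returns the minimum-norm solution: the true phase with piston removed
print(np.allclose(phi_hat, phi - phi.mean()))     # True
```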
Wavefront reconstruction from local gradients
Least squares
However, usually G·1 = 0 (the piston mode is unobservable), such that G^T G is
singular! Then there are infinitely many solutions; one possible solution:
    φ̂ = G^+ t
with G^+ the Moore-Penrose pseudoinverse of G. (Exercise: verify this.)
Moore-Penrose pseudoinverse
Let the singular value decomposition of G be given by
    G = U [ Σ  0 ; 0  0 ] V^T
with Σ a diagonal matrix with strictly positive elements on its diagonal.
Then the Moore-Penrose pseudoinverse G^+ of G is given by
    G^+ = V [ Σ^{-1}  0 ; 0  0 ] U^T
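The SVD construction above is a few lines of NumPy; the sketch below (with a tiny 1-D difference matrix as example) drops the zero singular values and matches `np.linalg.pinv`:

```python
import numpy as np

def pinv_svd(G, tol=1e-10):
    """Moore-Penrose pseudoinverse from the SVD, dropping zero singular values."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    r = s > tol * s.max()                  # strictly positive singular values
    return Vt[r].T @ np.diag(1.0 / s[r]) @ U[:, r].T

G = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])           # 1-D differences; G @ [1,1,1] = 0
print(np.allclose(pinv_svd(G), np.linalg.pinv(G)))   # True
```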
Wavefront reconstruction from local gradients
Linear estimation
Assume n is a zero-mean random variable with Gaussian distribution
    p(n) = |2π R_n|^{-1/2} e^{-n^T R_n^{-1} n / 2}
Hence, the distribution of t = Gφ + n is given by
    p_φ(t) = |2π R_n|^{-1/2} e^{-(t - Gφ)^T R_n^{-1} (t - Gφ) / 2}
Maximize the likelihood of the measurement t by minimizing:
    J(φ̂) = (t - Gφ̂)^T R_n^{-1} (t - Gφ̂) = tr( R_n^{-1} (t - Gφ̂)(t - Gφ̂)^T )
By the weighted least-squares theorem (Preliminaries):
    dJ(φ̂)/dφ̂ = 0   ⇔   G^T R_n^{-1} G φ̂ = G^T R_n^{-1} t
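The maximum-likelihood normal equations are a single weighted solve; a NumPy sketch with a hypothetical full-column-rank G and diagonal noise covariance (noise-free measurements, so the phase is recovered exactly):

```python
import numpy as np

rng = np.random.default_rng(6)
G = rng.standard_normal((8, 3))
phi = rng.standard_normal(3)
Rn = np.diag(rng.uniform(0.5, 2.0, 8))     # known noise covariance
t = G @ phi                                 # noise-free measurements

# Solve G^T Rn^{-1} G phi_hat = G^T Rn^{-1} t
Rn_inv = np.linalg.inv(Rn)
phi_hat = np.linalg.solve(G.T @ Rn_inv @ G, G.T @ Rn_inv @ t)
print(np.allclose(phi_hat, phi))            # True
```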
Wavefront reconstruction from local gradients
Linear estimation
Assume φ is also a zero-mean random variable with Gaussian distribution
    p(φ) = |2π R_φ|^{-1/2} e^{-φ^T R_φ^{-1} φ / 2}
and independent of n.
Suppose φ̂ is given by a linear estimator
    φ̂ = L t
where L minimizes
    J(L) = E tr( (φ - φ̂)(φ - φ̂)^T )
         = E tr( (φ - L(Gφ + n))(φ - L(Gφ + n))^T )
         = tr( [I - LG  -L] E{ [φ; n][φ^T  n^T] } [I - LG  -L]^T )
Wavefront reconstruction from local gradients
Linear estimation

    J(L) = tr E ( [I - LG   -L] [φ; n] [φ^T  n^T] [I - LG   -L]^T )
         = tr ( [I - LG   -L] [Rφ 0; 0 Rn] [I - LG   -L]^T )

By the theorem on slide 11, dJ(L)/dL = 0 if and only if

    L (G Rφ G^T + Rn) = Rφ G^T

which is equivalent to

    (Rφ^{-1} + G^T Rn^{-1} G) L = G^T Rn^{-1}

(also follows from maximizing the likelihood of p(φ, t) given t)

26 / 37
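The claimed equivalence of the two forms of L follows from the matrix inversion lemma; it can also be checked numerically. A small sketch with random stand-in matrices (any SPD Rφ and Rn will do):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 12, 6
G = rng.standard_normal((m, n))
B = rng.standard_normal((n, n))
R_phi = B @ B.T + np.eye(n)        # SPD prior covariance (stand-in)
Rn = 0.1 * np.eye(m)               # SPD noise covariance (stand-in)

# Form 1: L = R_phi G^T (G R_phi G^T + Rn)^{-1}
L1 = R_phi @ G.T @ np.linalg.inv(G @ R_phi @ G.T + Rn)
# Form 2: L = (R_phi^{-1} + G^T Rn^{-1} G)^{-1} G^T Rn^{-1}
L2 = np.linalg.inv(np.linalg.inv(R_phi) + G.T @ np.linalg.inv(Rn) @ G) \
     @ G.T @ np.linalg.inv(Rn)

print(np.allclose(L1, L2))   # True
```

Form 1 inverts an m×m matrix, form 2 an n×n matrix, so the cheaper choice depends on whether there are more measurements or more phase points.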
Wavefront reconstruction from local gradients
Linear estimation

Hence φ̂ can be solved from

    (Rφ^{-1} + G^T Rn^{-1} G) φ̂ = G^T Rn^{-1} t

For Kolmogorov turbulence Rφ^{-1} appears to be sparse, so fast iterative solvers
can be used, e.g., (multigrid / preconditioned) conjugate gradient [Ellerbroek,
JOSA, 19(9), (2002)].

27 / 37
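A minimal sketch of such an iterative solve with SciPy's conjugate gradient. Here Rφ^{-1} is a stand-in sparse SPD matrix and G is taken as the identity to keep the example short; in practice Rφ^{-1} would come from the turbulence model and G from the sensor geometry:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)
n = 100
# stand-in sparse SPD approximation of R_phi^{-1} (tridiagonal, Laplacian-like)
R_phi_inv = sp.diags([-np.ones(n - 1), 2.5 * np.ones(n), -np.ones(n - 1)],
                     [-1, 0, 1], format="csr")
G = sp.eye(n, format="csr")          # trivial geometry matrix for the sketch
sigma_n2 = 0.01                      # noise variance, Rn = sigma_n^2 I assumed
t = rng.standard_normal(n)

A = R_phi_inv + (1.0 / sigma_n2) * (G.T @ G)   # system matrix, sparse SPD
b = (1.0 / sigma_n2) * (G.T @ t)
phi_hat, info = cg(A, b)
print(info)   # 0 means the iteration converged
```

Because A stays sparse, each CG iteration costs O(nnz) rather than O(n²), which is the point of the sparse-Rφ^{-1} observation above.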
Wavefront reconstruction from local gradients
Overview wavefront reconstruction from local gradients

1   Line integration: fast but inaccurate, or slow but accurate, depending on
    the line-integration method;
2   Fourier domain inversion: errors due to boundary effects of the Fourier
    transform (noise can be weighted);
3   Least squares:
        φ̂ = G^+ t = (G^T G)^+ G^T t
4   Linear estimation:
        φ̂ = (G^T Rn^{-1} G)^+ G^T Rn^{-1} t
    or
        φ̂ = (Rφ^{-1} + G^T Rn^{-1} G)^{-1} G^T Rn^{-1} t

28 / 37
Wavefront reconstruction from local gradients
Wavefront reconstruction over time

Measurements over time are given by:

    t(k) = G φ(k) + n(k)

Vector stacking over the time window k = 1, ..., N:

    t_{1:N} = (I_N ⊗ G) φ_{1:N} + n_{1:N}

where E[φ_{1:N} φ_{1:N}^T] and E[n_{1:N} n_{1:N}^T] may be given.

Usually not the way to go:
- for large N this leads to extremely complex problems;
- usually φ(k+1) needs to be estimated from t(1), ..., t(k).

⇒ Time-recursive methods!

29 / 37
Wavefront prediction

Problem formulation:

Given measurements:

    t(k) = G φ(k) + n(k),    k = 1, 2, ...

Given correlation coefficients:

    E( [φ(k); n(k)] [φ(k-τ); n(k-τ)]^T ) = [ Rφ(τ)  0 ; 0  σn² I δ(τ) ]

Determine the estimate φ̂(k+1|k) of φ(k+1) given t(k), t(k-1), ..., t(1)
such that

    J_k(φ̂(k+1|k)) = E( (φ(k+1) - φ̂(k+1|k))^T (φ(k+1) - φ̂(k+1|k)) )

is minimized.

30 / 37
Wavefront prediction

Solution [Fraanje et al., JOSA, 27(11), (2010)]:

    φ̂(k+1|k) = A_k t̄(k)

where

    t̄(k) = [ t(1)^T, ..., t(k)^T ]^T
    A_1  = Rφ(1) G^T (G Rφ(0) G^T + σn² I)^{-1}
    A_k  = linear function of Rφ(0), ..., Rφ(k), G, σn²

Warning: memory grows with time index k!

31 / 37
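The one-step predictor A_1 above can be formed directly. A short sketch with stand-in matrices: Rφ(0) is a random SPD covariance and Rφ(1) a simple AR(1)-like lag-1 covariance chosen only to make the example concrete:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 4
G = rng.standard_normal((m, n))
B = rng.standard_normal((n, n))
R0 = B @ B.T + np.eye(n)        # R_phi(0): lag-0 covariance (SPD stand-in)
R1 = 0.9 * R0                   # R_phi(1): lag-1 covariance (AR(1)-like stand-in)
sigma_n2 = 0.05

# A_1 = R_phi(1) G^T (G R_phi(0) G^T + sigma_n^2 I)^{-1}
A1 = R1 @ G.T @ np.linalg.inv(G @ R0 @ G.T + sigma_n2 * np.eye(m))

t1 = rng.standard_normal(m)     # first gradient measurement (stand-in data)
phi_pred = A1 @ t1              # predicted phase phi_hat(2|1)
print(phi_pred.shape)           # (4,)
```

For k > 1 the stacked t̄(k) grows, which is exactly the memory warning on the slide; the AR and state-space predictors below avoid this.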
Wavefront prediction
AR predictor

Suppose an auto-regressive model of the wavefront is given:

    φ(k+1) = Σ_{i=0}^{p} A_i φ(k-i) + e(k+1)
           = [ A_0 ... A_p ] [ φ(k); ...; φ(k-p) ] + e(k+1)
           = 𝒜_p φ_p(k) + e(k+1)

where e(k) is a zero-mean white-noise process.

Multiplying from the right with φ_p(k)^T and taking the expected value yields:

    E[ φ(k+1) φ_p(k)^T ] = 𝒜_p E[ φ_p(k) φ_p(k)^T ] = 𝒜_p Σ_p

and Re = E[ e(k) e(k)^T ] satisfies Re = Rφ(0) - 𝒜_p Σ_p 𝒜_p^T.

32 / 37
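These normal equations say 𝒜_p = E[φ(k+1) φ_p(k)^T] Σ_p^{-1}, which can be estimated from data. A sketch for the simplest case p = 0 (one lag), with a stand-in stable coefficient matrix and simulated data:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 3, 50_000
A0 = np.array([[0.8, 0.1, 0.0],
               [0.0, 0.7, 0.1],
               [0.0, 0.0, 0.6]])          # stable AR coefficient (stand-in)
phi = np.zeros((N, n))
for k in range(N - 1):                    # simulate phi(k+1) = A0 phi(k) + e(k+1)
    phi[k + 1] = A0 @ phi[k] + 0.1 * rng.standard_normal(n)

Theta = (phi[1:].T @ phi[:-1]) / (N - 1)  # sample E[phi(k+1) phi(k)^T]
Sigma = (phi[:-1].T @ phi[:-1]) / (N - 1) # sample Sigma = E[phi(k) phi(k)^T]
A0_hat = Theta @ np.linalg.inv(Sigma)     # normal equations: A0 = Theta Sigma^{-1}
print(np.max(np.abs(A0_hat - A0)))        # close to zero
```

For p > 0 the same solve applies with the stacked regressor φ_p(k) in place of φ(k).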
Wavefront prediction
AR predictor

Assume φ̂(k+1|k) is estimated by

    φ̂(k+1|k) = Σ_{i=0}^{p} L_i t(k-i)
              = [ L_0 ... L_p ] [ t(k); ...; t(k-p) ]
              = ℒ_p [ Gφ(k) + n(k); ...; Gφ(k-p) + n(k-p) ]
              = ℒ_p (I_p ⊗ G) φ_p(k) + ℒ_p n_p(k)

33 / 37
Wavefront prediction
AR predictor

    φ(k+1)   = 𝒜_p φ_p(k) + e(k+1)
    φ̂(k+1|k) = ℒ_p (I_p ⊗ G) φ_p(k) + ℒ_p n_p(k)

such that

    φ(k+1) - φ̂(k+1|k) = (𝒜_p - ℒ_p (I_p ⊗ G)) φ_p(k) + e(k+1) - ℒ_p n_p(k)

The objective is to minimize:

    J_k(ℒ_p) = E( (φ(k+1) - φ̂(k+1|k))^T (φ(k+1) - φ̂(k+1|k)) )
             = tr( (𝒜_p - ℒ_p (I_p ⊗ G)) Σ_p (𝒜_p - ℒ_p (I_p ⊗ G))^T + Re + σn² ℒ_p ℒ_p^T )

Hence, by the theorem on slide 11, ℒ_p is a solution if and only if:

    ℒ_p ( (I_p ⊗ G) Σ_p (I_p ⊗ G^T) + σn² I ) = 𝒜_p Σ_p (I_p ⊗ G^T)

34 / 37
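The gain equation above is a linear system in ℒ_p. A sketch for p = 0 (so the Kronecker factors drop out) with stand-in Σ, 𝒜 and G; the gain follows from a transposed linear solve, since ℒ M = 𝒜 Σ G^T is equivalent to M^T ℒ^T = (𝒜 Σ G^T)^T:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 8, 4
G = rng.standard_normal((m, n))
B = rng.standard_normal((n, n))
Sigma = B @ B.T + np.eye(n)     # Sigma_p stand-in (SPD)
A = 0.9 * np.eye(n)             # AR coefficient 𝒜_p stand-in
sigma_n2 = 0.05                 # noise variance

M = G @ Sigma @ G.T + sigma_n2 * np.eye(m)        # right-hand factor, SPD
L = np.linalg.solve(M.T, (A @ Sigma @ G.T).T).T   # solves L M = A Sigma G^T
print(np.allclose(L @ M, A @ Sigma @ G.T))        # True
```

Unlike the full-history predictor A_k, this ℒ_p has fixed size, so the memory problem flagged earlier disappears.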
Wavefront prediction
State-space predictors

State-space model for the wavefront phase, with state x(k):

    x(k+1) = A x(k) + K e(k)
    φ(k)   = C x(k) + e(k)

and measurement equation:

    t(k) = G φ(k) + n(k)

The solution [cf. Fraanje et al., JOSA, 27(11), (2010)] is given by the Kalman filter:

    t̂(k|k-1)  = G C x̂(k|k-1)
    x̂(k+1|k)  = A x̂(k|k-1) + K_t ( t(k) - t̂(k|k-1) )
    φ̂(k+1|k)  = C x̂(k+1|k)

where K_t is the Kalman gain.

35 / 37
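The predictor recursion above is cheap to run once the gain is fixed. A sketch with small stand-in system matrices and a small fixed K_t chosen only to keep the example short; in a real design K_t would follow from the Riccati equation of the model, and t(k) would be real sensor data:

```python
import numpy as np

rng = np.random.default_rng(6)
nx, nphi, nt = 3, 2, 4
A = 0.9 * np.eye(nx)                       # state transition (stand-in)
C = rng.standard_normal((nphi, nx))        # state-to-phase map (stand-in)
G = rng.standard_normal((nt, nphi))        # phase-to-gradient map (stand-in)
Kt = 0.01 * rng.standard_normal((nx, nt))  # small fixed gain for the sketch

x_hat = np.zeros(nx)
for k in range(50):
    t_meas = rng.standard_normal(nt)           # measurement t(k) (stand-in data)
    t_hat = G @ C @ x_hat                      # t_hat(k|k-1) = G C x_hat(k|k-1)
    x_hat = A @ x_hat + Kt @ (t_meas - t_hat)  # x_hat(k+1|k)
    phi_hat = C @ x_hat                        # phi_hat(k+1|k)
print(phi_hat.shape)   # (2,)
```

Per step the cost is a few small matrix-vector products, independent of the time index k, which is the advantage over the growing-memory predictor of slide 31.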
Research issues

- Phase reconstruction from pixels for extended sources;
- Phase reconstruction from a single pixel for point sources;
- Efficient distributed wavefront reconstruction / prediction (implementation on
  GPU / n-core CPU);
- Fast wavefront reconstruction for nonstationary turbulence;
- Phase reconstruction over multiple layers (tomography);
- Data-driven methods (one-button operation).

36 / 37
Overview

Wavefront reconstruction from pixel intensities:
- Gerchberg-Saxton;
- Center-of-mass algorithm.

Wavefront reconstruction from local gradients:
- Line integration;
- Fourier domain inversion;
- Least squares;
- Linear estimation (exploit statistical information).

Wavefront prediction:
- Time-recursive methods needed;
- AR-predictors;
- Kalman filter (state-space models).

37 / 37