Algorithms 2024, 17, 100
Article
Clustering/Distribution Analysis and Preconditioned Krylov
Solvers for the Approximated Helmholtz Equation and
Fractional Laplacian in the Case of Complex-Valued, Unbounded
Variable Coefficient Wave Number µ
Andrea Adriani 1 , Stefano Serra-Capizzano 1,2, * and Cristina Tablino-Possio 3
1 Department of Science and High Technology, University of Insubria, Via Valleggio 11, 22100 Como, Italy;
[Link]@[Link]
2 Division of Scientific Computing, Department of Information Technology, Uppsala University, Lägerhyddsv 2,
hus 2, SE-751 05 Uppsala, Sweden
3 Department of Mathematics and Applications, University of Milano-Bicocca, Via Cozzi 53, 20125 Milano,
Italy; [Link]@[Link]
* Correspondence: [Link]@[Link]
Abstract: We consider the Helmholtz equation and the fractional Laplacian in the case of the
complex-valued unbounded variable coefficient wave number µ, approximated by finite differences.
In a recent analysis, singular value clustering and eigenvalue clustering have been proposed for a τ
preconditioning when the variable coefficient wave number µ is uniformly bounded. Here, we extend
the analysis to the unbounded case by focusing on the case of a power singularity. Several numerical
experiments concerning the spectral behavior and convergence of the related preconditioned GMRES
are presented.
for every i, j ∈ Z. The discrete version of the fractional Laplacian in such a setting is given by
$$
(-\Delta_h)^{\alpha/2}(u(x,y)) := \frac{1}{h^{\alpha}} \sum_{k_1,k_2 \in \mathbb{Z}} b^{(\alpha)}_{k_1,k_2}\, u(x + k_1 h,\, y + k_2 h), \qquad (2)
$$
where $b^{(\alpha)}_{k_1,k_2}$ are the Fourier coefficients of the function
$$
t_{\alpha}(\eta,\Psi) = \left( 4\sin^2\frac{\eta}{2} + 4\sin^2\frac{\Psi}{2} \right)^{\alpha/2}, \qquad (3)
$$
that is,
$$
b^{(\alpha)}_{k_1,k_2} = \frac{1}{4\pi^2} \int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} t_{\alpha}(\eta,\Psi)\, e^{-\mathrm{i}(k_1\eta + k_2\Psi)}\, d\eta\, d\Psi,
$$
where i is the imaginary unit.
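As a quick numerical sanity check, the coefficients $b^{(\alpha)}_{k_1,k_2}$ can be approximated by a periodic trapezoidal rule (which reduces to a plain mean on a uniform grid and is exact for trigonometric polynomials). The following Python/NumPy sketch is illustrative only; the grid size N = 256 and the sample values of α are arbitrary choices:

```python
import numpy as np

# Uniform grid on [-pi, pi)^2; the periodic trapezoidal rule is a plain mean.
N = 256
eta = -np.pi + 2 * np.pi * np.arange(N) / N
E, P = np.meshgrid(eta, eta, indexing="ij")

def b_coeff(alpha, k1, k2):
    """Approximate the Fourier coefficient b^{(alpha)}_{k1,k2} of t_alpha(eta, Psi)."""
    t = (4 * np.sin(E / 2) ** 2 + 4 * np.sin(P / 2) ** 2) ** (alpha / 2)
    return np.real(np.mean(t * np.exp(-1j * (k1 * E + k2 * P))))

# For alpha = 2 the symbol reduces to the classical 5-point Laplacian stencil:
# b_{0,0} = 4, b_{+-1,0} = b_{0,+-1} = -1, all other coefficients vanish.
print(b_coeff(2.0, 0, 0), b_coeff(2.0, 1, 0), b_coeff(2.0, 1, 1))
```

For α = 2 the rule recovers the five-point stencil exactly, while for fractional α the coefficients decay only polynomially, consistently with the nonlocal nature of (2).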
Proceeding as in [1], we trace back the original problem to solving the linear system
$$
A_n u := (B_n + D_n(\mu))\, u = f, \qquad n = (n, n), \qquad (4)
$$
where Bn = (1/hα) B̂n and B̂n is the two-level symmetric Toeplitz matrix generated by tα(η, Ψ), i.e., B̂n = Tn(tα), with tα as in (3). For the sake of simplicity, the previous equation is rewritten in the scaled form Ân u := (B̂n + hα Dn(µ))u = hα f.
For the two-level notations and the theory regarding Toeplitz structures, refer to [3].
In the case where µ(x, y) = 1/(x + iy)γ, we can give sufficient conditions on the coefficient γ, depending on α, in order to guarantee that {hα Dn(µ)}n is zero distributed in the eigenvalue/singular value sense, thus obtaining the spectral distribution of the sequence {Ân}n, which, under mild conditions, has to coincide with that of {B̂n}n. In the next section, we first introduce the necessary tools and then present theoretical results completing those in [1,2], together with related numerical experiments. The numerical experiments concern the visualization of the distribution/clustering results and the optimal performance of the related preconditioning when the preconditioned GMRES is used.
We highlight that the spectral analysis for the considered preconditioned and nonpreconditioned matrix-sequences for unbounded µ(x, y) is completely new. In fact, in [1,2]
the assumption of boundedness of the wave number is always employed; furthermore,
in [2] the results are focused on eigenvalue localization findings, while in [1] the singular
value analysis is the main target. Finally, we stress that our eigenvalue results are nontrivial
given the non-Hermitian and even non-normal nature of the involved matrix sequences.
2. Spectral Analysis
First, we report a few definitions regarding the spectral and singular value distribution,
the notion of clustering and a few relevant relationships among the various concepts. Then,
we present the main theoretical tool taken from [4] and we perform a spectral analysis of the
various matrix-sequences. Numerical experiments and visualization results corroborating
the analysis are presented in the last part of the section.
In particular, {An}n ∼σ ψ and {An}n ∼λ ψ mean, respectively, that
$$
\lim_{n\to\infty} \frac{1}{d_n} \sum_{i=1}^{d_n} F(\sigma_i(A_n)) = \frac{1}{\mu_t(D)} \int_D \frac{\sum_{i=1}^{r} F(\sigma_i(\psi(\mathbf{x})))}{r}\, d\mathbf{x}, \qquad \forall F \in C_c(\mathbb{R}), \qquad (6)
$$
$$
\lim_{n\to\infty} \frac{1}{d_n} \sum_{i=1}^{d_n} F(\lambda_i(A_n)) = \frac{1}{\mu_t(D)} \int_D \frac{\sum_{i=1}^{r} F(\lambda_i(\psi(\mathbf{x})))}{r}\, d\mathbf{x}, \qquad \forall F \in C_c(\mathbb{C}). \qquad (7)
$$
If A ∈ Cm×m , then the singular values and the eigenvalues of A are denoted by
σ1 ( A), . . . , σm ( A) and λ1 ( A), . . . , λm ( A), respectively. Furthermore, if A ∈ Cm×m and
1 ≤ p ≤ ∞, then ∥ A∥ p denotes the Schatten p-norm of A, i.e., the p-norm of the vector
(σ1 ( A), . . . , σm ( A)); see [5] for a comprehensive treatment of the subject. The Schatten
∞-norm ∥ A∥∞ is the largest singular value of A and coincides with the spectral norm ∥ A∥.
The Schatten 1-norm ∥ A∥1 is the sum of the singular values of A and coincides with the
so-called trace-norm of A, while the Schatten 2-norm ∥ A∥2 coincides with the Frobenius
norm of A, which is of great popularity in the numerical analysis community because of its
low computational complexity.
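These coincidences between the Schatten norms and the classical matrix norms are easy to verify numerically; a minimal Python/NumPy sketch (the 5 × 5 random complex matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# Schatten p-norm: the p-norm of the vector of singular values of A.
s = np.linalg.svd(A, compute_uv=False)          # singular values, descending
schatten = lambda p: np.sum(s ** p) ** (1.0 / p)

print(schatten(2), np.linalg.norm(A, "fro"))    # Schatten 2-norm = Frobenius norm
print(s[0], np.linalg.norm(A, 2))               # Schatten infinity-norm = spectral norm
print(np.sum(s), np.linalg.norm(A, "nuc"))      # Schatten 1-norm = trace norm
```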
At this point, we introduce the definition of clustering, which, as for the distribution
notions, is a concept only of the asymptotic type. For z ∈ C and ϵ > 0, let B(z, ϵ) be the disk with center z and radius ϵ, B(z, ϵ) := {w ∈ C : |w − z| < ϵ}. For S ⊆ C and ϵ > 0, we denote by B(S, ϵ) the ϵ-expansion of S, defined as $B(S, \epsilon) := \bigcup_{z \in S} B(z, \epsilon)$.
qϵ (n, S) = o (dn ), as n → ∞.
If {An}n is strongly or weakly clustered at S and S is not connected, then the connected components of S are called sub-clusters. Of special importance in the theory of preconditioning is the case of spectral single-point clustering, where S consists of a unique complex number s.
The same notions hold for the singular values, where s is a nonnegative number and S is a
nonempty closed subset of the nonnegative real numbers.
Theorem 1. If ER(ψ) = {s} with s a fixed complex number, then we have the following equivalence: {An}n ∼λ ψ iff {An}n is weakly clustered at s in the eigenvalue sense. Analogously, if ER(|ψ|) = {s} with s a fixed nonnegative number, then we have the following equivalence: {An}n ∼σ ψ iff {An}n is weakly clustered at s in the singular value sense.
Theorem 3. Let µ( x, y) = 1/( x + iy)γ . Then, for every γ ≥ 0 such that α > γ − 1 (α ∈ (1, 2)),
we have
a1 {hα Dn (µ)}n ∼λ 0;
a2 { B̂n + hα Dn (µ)}n ∼λ tα .
Then, we estimate the quantity $\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}}$, under the hypothesis that γ ∈ (0, 1). We have
$$
\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}} = 2 \sum_{i=1}^{n} \sum_{j=1}^{i-1} \frac{1}{(i^2+j^2)^{\gamma}} + \frac{1}{2^{\gamma}} \sum_{i=1}^{n} \frac{1}{i^{2\gamma}}.
$$
Therefore,
$$
\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}} \le 2 \sum_{i=1}^{n} \frac{1}{i^{2\gamma-1}} + \left( \frac{1}{2^{\gamma}} - 2 \right) \sum_{i=1}^{n} \frac{1}{i^{2\gamma}}.
$$
Note that $\frac{1}{2^{\gamma}} - 2 < 0$ for every γ ∈ (0, 1). A basic computation leads to
$$
2 \sum_{i=1}^{n} \frac{1}{i^{2\gamma-1}} \le 2 \int_{0}^{n} \frac{dt}{t^{2\gamma-1}} = \frac{n^{2-2\gamma}}{1-\gamma}
$$
and
$$
\sum_{i=1}^{n} \frac{1}{i^{2\gamma}} \ge \int_{0}^{n} \frac{dt}{(t+1)^{2\gamma}} = \begin{cases} \dfrac{(n+1)^{1-2\gamma}-1}{1-2\gamma}, & \gamma \neq \frac{1}{2}, \\[2mm] \log(n+1), & \gamma = \frac{1}{2}, \end{cases}
$$
so that $\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}} \le c_n(\gamma) \sim \frac{n^{2-2\gamma}}{1-\gamma}$ for every γ ∈ (0, 1). This immediately implies that
$$
h^{2\alpha-2\gamma} \sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}} \le \frac{c_n(\gamma)}{n^{2\alpha-2\gamma}} \sim \frac{n^{2-2\alpha}}{1-\gamma} = o(n^2)
$$
for every α ∈ (1, 2), as required to apply Theorem 1 in [4] and conclude the proof.
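The growth rate $n^{2-2\gamma}$ used in the proof can be checked numerically by brute force; a minimal sketch (γ = 0.7 and the grid sizes n = 200, 400 are arbitrary choices):

```python
import numpy as np

def S(n, gamma):
    """Compute sum_{i,j=1}^n 1/(i^2+j^2)^gamma directly."""
    i = np.arange(1, n + 1, dtype=float)
    return np.sum(1.0 / (i[:, None] ** 2 + i[None, :] ** 2) ** gamma)

gamma = 0.7
s1, s2 = S(200, gamma), S(400, gamma)
rate = np.log2(s2 / s1)   # observed exponent; should approach 2 - 2*gamma = 0.6
print(rate)
```

Doubling n multiplies the sum by roughly $2^{2-2\gamma}$, in agreement with the estimate $c_n(\gamma) \sim n^{2-2\gamma}/(1-\gamma)$.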
From the computation in Equation (8), we immediately deduce the following: if γ > 1, then $\sum_{i,j=1}^{n} \frac{1}{(i^2+j^2)^{\gamma}} \le c_{\gamma}$ for every n, so that ∥hα Dn(µ)∥2^2 = o(n²) if and only if 2α − 2γ + 2 > 0. As above, this leads to the conclusion that ∥hα Dn(µ)∥2^2 = o(n²) if and only if α > γ − 1, that is, γ < α + 1. Then the proof is complete by Theorem 2 with dn = n².
With reference to the proof of Theorem 3, from a technical point of view, it should be observed that in [7–9] one can find more refined bounds for sums such as $\sum_{j=1}^{n} j^{\ell}$, with various choices of the real parameter ℓ.
The next corollary complements the previous result.
for every x, y ∈ [0, 1]. Then, for every γ ≥ 0 such that α > γ − 1 (α ∈ (1, 2)), we have
b1 {hα Dn (µ)}n ∼λ 0;
b2 { B̂n + hα Dn (µ)}n ∼λ tα .
2.2. Preconditioning
For a symmetric Toeplitz matrix Tn ∈ Rn×n with first column [t1, t2, . . . , tn]⊤, the matrix
$$
\tau(T_n) := T_n - H(T_n) \qquad (9)
$$
is the natural τ preconditioner of Tn, already considered decades ago in [10–12], when a great amount of theoretical and computational work was dedicated to preconditioning strategies for structured linear systems. Here, H(Tn) denotes a Hankel matrix, whose entries are constant along each antidiagonal and whose precise definition is the following: the first row and the last column of H(Tn) are given by [t3, t4, . . . , tn, 0, 0] and [0, 0, tn, . . . , t4, t3]⊤, respectively. Notice that, by using the sine transform matrix Sn, defined as
$$
[S_n]_{k,j} = \sqrt{\frac{2}{n+1}}\, \sin\frac{\pi k j}{n+1}, \qquad 1 \le k, j \le n,
$$
it is known that every τ matrix is diagonalized as τ ( Tn ) = Sn Λn Sn , where Λn is a diagonal
matrix constituted by all eigenvalues of τ (Tn ), and Sn = ([Sn ] j,k ) is the real, symmetric,
orthogonal matrix defined before, so that Sn = SnT = Sn−1 . Furthermore, the matrix Sn is
associated with the fast sine transform of type I (see [13,14] for several other sine/cosine
transforms). Indeed, the multiplication of the matrix Sn by a real vector can be performed in O(n log n) real operations, at around half the cost of the celebrated fast Fourier transform [15]. Therefore, all the relevant matrix operations in this algebra cost O(n log n) real operations, including matrix–matrix multiplication, inversion, solution of a linear system, and computation of the spectrum, i.e., of the diagonal entries of Λn.
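The one-level construction and its diagonalization can be checked directly. The following Python/NumPy sketch (the size n = 8 and the test coefficients are arbitrary choices) builds τ(Tn) via the Hankel correction, forms Sn explicitly, and verifies both the diagonalization and the classical fact that the eigenvalues are samples of the cosine symbol at the grid points πj/(n + 1):

```python
import numpy as np
from scipy.linalg import toeplitz

n = 8
t = np.array([2.5, -1.0, 0.3, -0.1, 0.05, 0.0, 0.0, 0.0])  # first column [t_1, ..., t_n]

T = toeplitz(t)
# Hankel correction: constant antidiagonals, first row [t_3, ..., t_n, 0, 0]
# and last column [0, 0, t_n, ..., t_3]^T.
v = np.concatenate([t[2:], np.zeros(3), t[2:][::-1]])
H = v[np.add.outer(np.arange(n), np.arange(n))]
tau = T - H

# Sine transform of type I: real, symmetric, orthogonal.
k = np.arange(1, n + 1)
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, k) / (n + 1))

D = S @ tau @ S                      # should be diagonal: tau(T_n) = S_n Lambda_n S_n
theta = np.pi * k / (n + 1)
f = t[0] + 2 * np.sum(t[1:, None] * np.cos(np.outer(np.arange(1, n), theta)), axis=0)
print(np.max(np.abs(D - np.diag(f))))   # eigenvalues = cosine symbol sampled at theta_j
```

In practice the dense multiplication by Sn would be replaced by a fast sine transform of type I, which is what yields the O(n log n) cost quoted above.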
Using standard and known techniques, the τ algebra has d-level versions for every d ≥ 1, in which τ(Tn), with Tn a d-level symmetric Toeplitz matrix of size ν(n) = n1 n2 · · · nd, n = (n1, . . . , nd), has the diagonalization form τ(Tn) = Sn Λn Sn, Sn = Sn1 ⊗ · · · ⊗ Snd, with Λn the diagonal matrix obtained via a d-level sine transform of type I of the first column of Tn. Again, the quoted d-level transform and all the relevant matrix operations in the related algebra have a cost of O(ν(n) log ν(n)) real operations, which is quasi-optimal given that the matrices have size ν(n).
At the algebraic level, the explicit construction can be conducted recursively using the additive decomposition (9): first at the most external level, and then applying the same operation to any block which is a (d − 1)-level symmetric Toeplitz matrix, and so on, until arriving at scalar blocks.
In light of the excellent structural, spectral, and computational features of the τ algebra in d levels, two different types of τ preconditioning for the related linear systems were proposed in [2] and one in [1] (with d = 2 and n = (n, n)). Here, we consider the latter. In fact, {Tn(f) − τ(Tn(f))}n ∼σ,λ 0 for any Lebesgue integrable f, thanks to the distribution results on multilevel Hankel matrix-sequences generated by any L1 function f proven in [16]. From this, as proven in [1], the preconditioner Pn = τ(Tn(tα)) is such that the preconditioned matrix-sequence is clustered at 1, both in the eigenvalue and in the singular value sense, under mild assumptions. In fact, using the notion of an approximating class of sequences, the eigenvalue perturbation results in [6], and the GLT apparatus [3], it is enough that µ(x, y) is Riemann integrable or simply bounded. Here, we extend the spectral distribution results to the case where µ(x, y) is not bounded and even not integrable. More precisely, as in Corollary 1, we consider the case of a power singularity.
for every x, y ∈ [0, 1]. Consider the preconditioner Pn = τ(Tn(tα)). Then, for every γ ∈ [0, 1) and for every α ∈ (1, 2), we have
c1 {Pn−1 hα Dn(µ)}n ∼λ 0;
c2 {Pn−1 (B̂n + hα Dn(µ))}n ∼λ 1.
and Xn + Yn with
$$
X_n = P_n^{-1/2} \hat{B}_n P_n^{-1/2}.
$$
Now Xn is real symmetric, and in fact positive definite, since so is B̂n. As proven in [1], the spectral distribution function of {Xn}n is 1, thanks to a basic use of the GLT theory. Furthermore, the minimal eigenvalue of Pn = τ(Tn(tα)) is positive and tends to zero as hα, since tα has a unique zero of order α at zero (see, e.g., [17]). Therefore,
$$
\| h^{\alpha} P_n^{-1/2} \| = \frac{h^{\alpha}}{\lambda_1(\tau(T_n(t_\alpha)))^{1/2}} \le D h^{\alpha/2}, \qquad
\| P_n^{-1/2} \| = \frac{1}{\lambda_1(\tau(T_n(t_\alpha)))^{1/2}} \le D h^{-\alpha/2}.
$$
if and only if γ < 1. In conclusion, the desired result follows from Theorem 2 with dn = n2
and f = 1.
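The clustering at 1 stated in Theorem 4 can be illustrated on a small one-dimensional analog of the problem. The sketch below makes simplifying assumptions chosen purely for illustration: a 1D symbol tα(η) = (4 sin²(η/2))^{α/2}, a real-valued µ(x) = x^{−γ}, and a modest size n = 64; it is not the 2D experiment of the paper:

```python
import numpy as np
from scipy.linalg import toeplitz, eigh

def toeplitz_coeffs(alpha, n, N=4096):
    """Fourier coefficients c_0, ..., c_{n-1} of t_alpha(eta) = (4 sin^2(eta/2))^(alpha/2)."""
    eta = 2 * np.pi * np.arange(N) / N
    t = (4 * np.sin(eta / 2) ** 2) ** (alpha / 2)
    return np.array([np.mean(t * np.cos(k * eta)) for k in range(n)])

def tau_of(t):
    """tau preconditioner: T_n minus the Hankel correction of (9)."""
    n = len(t)
    v = np.concatenate([t[2:], np.zeros(3), t[2:][::-1]])
    return toeplitz(t) - v[np.add.outer(np.arange(n), np.arange(n))]

alpha, gamma, n = 1.5, 0.5, 64
h = 1.0 / (n + 1)
c = toeplitz_coeffs(alpha, n)
B = toeplitz(c)                                                   # 1D analog of B_n
D = np.diag(h ** alpha * (h * np.arange(1, n + 1)) ** (-gamma))   # h^alpha D_n(mu)
P = tau_of(c)

# Generalized symmetric eigenproblem: the eigenvalues of P^{-1}(B + D) are real.
ev_prec = eigh(B + D, P, eigvals_only=True)
ev_none = np.linalg.eigvalsh(B + D)
print(np.median(np.abs(ev_prec - 1)), np.median(np.abs(ev_none - 1)))
```

The preconditioned eigenvalues concentrate around 1, while the unpreconditioned ones spread over the whole range of the symbol, mirroring qualitatively the 2D results reported below.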
Theorem 4 cannot be sharp, since the estimate in (11) would hold also if the preconditioner were chosen as Pn = hα In2. A more careful estimate would require taking into account that the eigenvalues of τ(Tn(tα)) are explicitly known, as we will see in the numerical experiments.
Figure 1. Eigenvalues of the matrix Ân for γ = 1, α = 1.2 and n² = 2^12. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].
Figure 2. Eigenvalues of the matrix Ân for γ = 1, α = 1.4 and n² = 2^12. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].
Figure 3. Eigenvalues of the matrix Ân for γ = 1, α = 1.6 and n² = 2^12. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].
Figure 4. Eigenvalues of the matrix Ân for γ = 1, α = 1.8 and n² = 2^12. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].
Table 1. Number of outliers No (ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01
and related percentage for increasing dimension n2 .
Table 2. Number of outliers No (ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01
and related percentage for increasing dimension n2 .
Table 3. Number of outliers No (ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01
and related percentage for increasing dimension n2 .
µ(x, y) = 1/(x + iy)^γ, γ = 1

n     No(0.1)   Percentage          No(0.01)   Percentage
α = 1.2
2^4   8         3.125000 × 10^0     235        9.179688 × 10^1
2^5   11        1.074219 × 10^0     450        4.394531 × 10^1
2^6   22        5.371094 × 10^−1    588        1.435547 × 10^1
α = 1.4
2^4   2         7.812500 × 10^−1    128        50
2^5   6         5.859375 × 10^−1    164        1.601562 × 10^1
2^6   15        3.662109 × 10^−1    217        5.297852 × 10^0
α = 1.6
2^4   0         0                   58         2.265625 × 10^1
2^5   2         1.953125 × 10^−1    76         7.421875 × 10^0
2^6   8         1.953125 × 10^−1    123        3.002930 × 10^0
α = 1.8
2^4   0         0                   27         1.054688 × 10^1
2^5   0         0                   43         4.199219 × 10^0
2^6   0         0                   78         1.904297 × 10^0
Table 4. Number of outliers No (ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01
and related percentage for increasing dimension n2 .
The clustering in the preconditioning setting is also visualized in Figures 5–8, while Table 5 accounts for the fact that the localization around 1 is very good, since we do not have large outliers. The moderate size of the outliers indicates that the preconditioned GMRES is expected to be optimal and robust with respect to all the involved parameters. The latter is evident in Table 6, with only a slight increase in the number of iterations as γ increases, since the number and the magnitude of the outliers become slightly larger.
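The behavior reported in Table 6 can be reproduced in spirit with SciPy's GMRES on a small one-dimensional analog of system (4). This is a hedged sketch: the 1D symbol, the parameters α = 1.5, γ = 0.5, n = 128, and the tolerance are illustrative choices, and the τ preconditioner is applied via a dense solve for clarity rather than via the O(n log n) sine transform:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

# 1D analog of (4): symbol t_alpha(eta) = (4 sin^2(eta/2))^(alpha/2), mu(x) = x^(-gamma).
alpha, gamma, n = 1.5, 0.5, 128
h = 1.0 / (n + 1)
N = 4096
eta = 2 * np.pi * np.arange(N) / N
sym = (4 * np.sin(eta / 2) ** 2) ** (alpha / 2)
c = np.array([np.mean(sym * np.cos(k * eta)) for k in range(n)])

A = toeplitz(c) + np.diag(h ** alpha * (h * np.arange(1, n + 1)) ** (-gamma))
# tau preconditioner of (9), applied here through a dense solve for simplicity.
v = np.concatenate([c[2:], np.zeros(3), c[2:][::-1]])
P = toeplitz(c) - v[np.add.outer(np.arange(n), np.arange(n))]
b = np.ones(n)

def run(M):
    res = []
    kw = dict(restart=n, maxiter=n, callback=lambda r: res.append(r),
              callback_type="pr_norm")
    try:
        x, info = gmres(A, b, M=M, rtol=1e-10, atol=0.0, **kw)
    except TypeError:                       # older SciPy versions use `tol`
        x, info = gmres(A, b, M=M, tol=1e-10, atol=0.0, **kw)
    return x, info, len(res)

Minv = LinearOperator((n, n), matvec=lambda w: np.linalg.solve(P, w), dtype=float)
x1, info1, it_plain = run(None)
x2, info2, it_prec = run(Minv)
print(it_plain, it_prec)    # tau preconditioning cuts the iteration count sharply
```

Even on this toy analog, the preconditioned iteration count is essentially independent of n, in line with the optimality observed in the tables.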
Figure 5. Eigenvalues of the preconditioned matrix of size n² = 2^12 for γ = 0.5 and α = {1.2, 1.4, 1.6, 1.8}, respectively.
Figure 6. Eigenvalues of the preconditioned matrix of size n² = 2^12 for γ = 0.8 and α = {1.2, 1.4, 1.6, 1.8}, respectively.
Figure 7. Eigenvalues of the preconditioned matrix of size n² = 2^12 for γ = 1 and α = {1.2, 1.4, 1.6, 1.8}, respectively.
Figure 8. Eigenvalues of the preconditioned matrix of size n² = 2^12 for γ = 1.5 and α = {1.2, 1.4, 1.6, 1.8}, respectively.
Table 5. Maximal distance of eigenvalues of the preconditioned matrix from 1 for increasing dimension n².

µ(x, y) = 1/(x + iy)^γ

γ = 0.5
n     α = 1.2            α = 1.4            α = 1.6            α = 1.8
2^4   1.289012 × 10^−1   1.065957 × 10^−1   8.471230 × 10^−2   5.120695 × 10^−2
2^5   1.660444 × 10^−1   1.580600 × 10^−1   1.257282 × 10^−1   6.848000 × 10^−2
2^6   2.157879 × 10^−1   2.018168 × 10^−1   1.609333 × 10^−1   9.085515 × 10^−2
γ = 0.8
n     α = 1.2            α = 1.4            α = 1.6            α = 1.8
2^4   1.495892 × 10^−1   1.105071 × 10^−1   8.629435 × 10^−2   5.992612 × 10^−2
2^5   1.699374 × 10^−1   1.589429 × 10^−1   1.257050 × 10^−1   6.831767 × 10^−2
2^6   2.172238 × 10^−1   2.017168 × 10^−1   1.604429 × 10^−1   9.037168 × 10^−2
γ = 1
n     α = 1.2            α = 1.4            α = 1.6            α = 1.8
2^4   1.775892 × 10^−1   1.177315 × 10^−1   9.016597 × 10^−2   6.761970 × 10^−2
2^5   1.783878 × 10^−1   1.626862 × 10^−1   1.274329 × 10^−1   6.947072 × 10^−2
2^6   2.226273 × 10^−1   2.038328 × 10^−1   1.612753 × 10^−1   9.085565 × 10^−2
γ = 1.5
n     α = 1.2            α = 1.4            α = 1.6            α = 1.8
2^4   6.788881 × 10^−1   3.445147 × 10^−1   1.712513 × 10^−1   1.195110 × 10^−1
2^5   8.354951 × 10^−1   3.702033 × 10^−1   1.651846 × 10^−1   1.109992 × 10^−1
2^6   1.029919 × 10^0    3.983900 × 10^−1   1.686921 × 10^−1   1.048754 × 10^−1
Table 6. Number of preconditioned GMRES iterations to solve the linear system for increasing dimension n² till tol = 10^−11.

µ(x, y) = 1/(x + iy)^γ

γ = 0.5
      α = 1.2     α = 1.4     α = 1.6     α = 1.8
n     -     Pτ    -     Pτ    -     Pτ    -     Pτ
2^4   35    9     41    8     47    7     54    7
2^5   54    9     67    9     83    8     101   7
2^6   82    10    109   9     144   8     189   7
2^7   124   11    177   10    251   9     351   7
2^8   189   11    288   10    437   9     >500  8
2^9   287   11    467   10    >500  9     >500  8
γ = 0.8
      α = 1.2     α = 1.4     α = 1.6     α = 1.8
n     -     Pτ    -     Pτ    -     Pτ    -     Pτ
2^4   36    9     42    9     49    8     56    7
2^5   55    10    68    9     85    8     103   7
2^6   83    11    111   10    147   9     192   7
2^7   126   11    180   10    256   9     356   8
2^8   191   12    293   10    449   9     >500  8
2^9   290   12    477   11    >500  10    >500  8
γ = 1
      α = 1.2     α = 1.4     α = 1.6     α = 1.8
n     -     Pτ    -     Pτ    -     Pτ    -     Pτ
2^4   36    10    42    9     49    8     56    7
2^5   55    11    69    10    86    9     104   7
2^6   84    11    112   10    148   9     193   8
2^7   127   12    182   10    258   9     359   8
2^8   193   12    295   11    452   10    >500  8
2^9   293   12    480   11    >500  10    >500  8
γ = 1.5
      α = 1.2     α = 1.4     α = 1.6     α = 1.8
n     -     Pτ    -     Pτ    -     Pτ    -     Pτ
2^4   38    13    44    11    51    9     57    8
2^5   59    14    71    12    88    10    106   8
2^6   89    16    116   12    152   10    197   8
2^7   136   17    188   13    264   10    366   8
2^8   206   18    306   13    461   10    >500  8
2^9   312   20    498   14    >500  11    >500  9
Figure 9. Eigenvalues of the matrix Ân for γ = 3, α = 1.2 and n² = 2^12. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].
Figure 10. Eigenvalues of the matrix Ân for γ = 3, α = 1.4 and n² = 2^12. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].
Figure 11. Eigenvalues of the matrix Ân for γ = 3, α = 1.6 and n² = 2^12. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].
Figure 12. Eigenvalues of the matrix Ân for γ = 3, α = 1.8 and n² = 2^12. The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].
Figure 13. Eigenvalues of the preconditioned matrix of size n2 = 212 for γ = 3 and α = 1.8.
Table 7. Number of outliers No(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01 and related percentage for increasing dimension n².

µ(x, y) = 1/(x + iy)^γ, γ = 3

n     No(0.1)   Percentage          No(0.01)   Percentage
α = 1.8
2^4   15        5.859375 × 10^0     85         3.320312 × 10^1
2^5   26        2.539062 × 10^0     162        1.582031 × 10^1
2^6   50        1.220703 × 10^0     303        7.397461 × 10^0
Table 8. Number of preconditioned GMRES iterations to solve the linear system for increasing dimension n² till tol = 10^−11.

µ(x, y) = 1/(x + iy)^γ, γ = 3, α = 1.8

n     -      Pτ
2^4   69     17
2^5   125    22
2^6   236    32
2^7   454    47
2^8   >500   70
2^9   >500   108
3. Conclusions
In this work, we considered a fractional Helmholtz equation approximated by ad hoc centered differences with variable wave number µ(x, y), in the specific case where the complex-valued function µ(x, y) has a pole of order γ. Eigenvalue distribution and clustering results have been derived, extending those in [1,2]. The numerical results presented in this work corroborate the analysis.
Many more intricate cases can be treated using the same type of theoretical apparatus,
including the GLT theory [3,18] and non-Hermitian perturbation results, such as those
in [4,6]. We list a few of them.
• The numerical results in Section 2.5 seem to indicate that the spectral distribution of the original matrix-sequence and the spectral clustering at 1 of the preconditioned matrix-sequence hold also when the Frobenius norm condition in [4] is violated; this is an indication that Theorem 1 in [4] may not be sharp. A related conjecture is that the key condition ∥Yn∥2^2 = o(n) in Theorem 2 could be replaced by ∥Yn∥p^p = o(n), for any p ∈ [1, ∞), which would be very useful when the trace norm is considered, i.e., for p = 1.
• Definition 1 has been reported with a matrix size of the symbol equal to r ≥ 1. In our study of matrices arising from finite differences, the parameter r is always equal to 1. However, when considering isogeometric analysis approximations with polynomial degree p and regularity k ≤ p − 1, we have r = (p − k)^d [19,20]. Notice that a particular case of the previous formula is the case of degree-p finite elements in space dimension d, which leads to r = p^d [20,21], since k = 0. Also, the discontinuous Galerkin techniques of degree p are covered: we have r = (p + 1)^d [19], because k = −1.
• The above analysis could be extended to the case where the fractional Laplacian is defined on a non-Cartesian d-dimensional domain Ω, or equipped with variable coefficients, or with approximations on graded grids. In fact, the related GLT theory is already available [3,18,19] to encompass such generality, while the non-Hermitian perturbation tools do not depend on a specific structure of the involved matrix-sequences.
Author Contributions: S.S.-C. is responsible for funding acquisition; the contribution of all authors is
equal regarding all the listed items: Conceptualization, methodology, validation, investigation, data
curation, writing—original draft preparation, writing—review and editing, visualization, supervision,
project administration. All authors have read and agreed to the published version of the manuscript.
Funding: The work of Stefano Serra-Capizzano is supported by GNCS-INdAM and is funded by
the European High-Performance Computing Joint Undertaking (JU) under grant agreement No.
955701. The JU receives support from the European Union’s Horizon 2020 research and innovation
programme and Belgium, France, Germany, and Switzerland. Furthermore, Stefano Serra-Capizzano
is grateful for the support of the Laboratory of Theory, Economics and Systems—Department of
Computer Science at Athens University of Economics and Business.
Data Availability Statement: Data are contained within the article.
Acknowledgments: We thank the anonymous Referees for their careful work and for the explicit
appreciation of our results.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Adriani, A.; Sormani, R.L.; Tablino-Possio, C.; Krause, R.; Serra-Capizzano, S. Asymptotic spectral properties and preconditioning
of an approximated nonlocal Helmholtz equation with Caputo fractional Laplacian and variable coefficient wave number µ.
arXiv 2024, arXiv:2402.10569.
2. Li, T.-Y.; Chen, F.; Sun, H.W.; Sun, T. Preconditioning technique based on sine transformation for nonlocal Helmholtz equations
with fractional Laplacian. J. Sci. Comput. 2023, 97, 17. [CrossRef]
3. Garoni, C.; Serra-Capizzano, S. Generalized Locally Toeplitz Sequences: Theory and Applications; Springer: Cham, Switzerland, 2018;
Volume II.
4. Barbarino, G.; Serra-Capizzano, S. Non-Hermitian perturbations of Hermitian matrix-sequences and applications to the spectral
analysis of the numerical approximation of partial differential equations. Numer. Linear Algebra Appl. 2020, 27, e2286. [CrossRef]
5. Bhatia, R. Matrix Analysis, Graduate Texts in Mathematics; Springer: New York, NY, USA, 1997; Volume 169.
6. Golinskii, L.; Serra-Capizzano, S. The asymptotic properties of the spectrum of nonsymmetrically perturbed Jacobi matrix
sequences. J. Approx. Theory 2007, 144, 84–102. [CrossRef]
7. Agarwal, R.P. Difference Equations and Inequalities: Second Edition, Revised and Expanded; Marcel Dekker: New York, NY, USA, 2000.
8. Guo, S.-L.; Qi, F. Recursion Formulae for $\sum_{m=1}^{n} m^{k}$. Z. Anal. Anwend. 1999, 18, 1123–1130. [CrossRef]
9. Kuang, J.C. Applied Inequalities, 2nd ed.; Hunan Education Press: Changsha, China, 1993. (In Chinese)
10. Bini, D.; Capovani, M. Spectral and computational properties of band symmetric Toeplitz matrices. Linear Algebra Appl. 1983,
52–53, 99–126.
11. Chan, R.; Ng, M. Conjugate gradient methods for Toeplitz systems. SIAM Rev. 1996, 38, 427–482. [CrossRef]
12. Serra-Capizzano, S. Superlinear PCG methods for symmetric Toeplitz systems. Math. Comp. 1999, 68, 793–803.
13. Di Benedetto, F.; Serra-Capizzano, S. Optimal multilevel matrix algebra operators. Linear Multilinear Algebra 2000, 48, 35–66.
[CrossRef]
14. Kailath, T.; Olshevsky, V. Displacement structure approach to discrete-trigonometric-transform based preconditioners of G. Strang
type and of T. Chan type. SIAM J. Matrix Anal. Appl. 2005, 26, 706–734. [CrossRef]
15. Loan, C.V. Computational Frameworks for the Fast Fourier Transform; Frontiers in Applied Mathematics; Society for Industrial and
Applied Mathematics (SIAM): Philadelphia, PA, USA, 1992. [CrossRef]
16. Fasino, D.; Tilli, P. Spectral clustering properties of block multilevel Hankel matrices. Linear Algebra Appl. 2000, 306, 155–163.
[CrossRef]
17. Serra-Capizzano, S. On the extreme spectral properties of Toeplitz matrices generated by L1 functions with several min-
ima/maxima. BIT 1996, 36, 135–142. [CrossRef]
18. Barbarino, G. A systematic approach to reduced GLT. BIT 2022, 62, 681–743.
19. Barbarino, G.; Garoni, C.; Serra-Capizzano, S. Block generalized locally Toeplitz sequences: Theory and applications in the
multidimensional case. Electr. Trans. Numer. Anal. 2020, 53, 113–216. [CrossRef]
20. Garoni, C.; Speleers, H.; Ekström, S.-E.; Reali, A.; Serra-Capizzano, S.; Hughes, T.J.R. Symbol-based analysis of finite element
and isogeometric B-spline discretizations of eigenvalue problems: Exposition and review. Arch. Comput. Methods Eng. 2019, 26,
1639–1690. [CrossRef]
21. Garoni, C.; Serra-Capizzano, S.; Sesana, D. Spectral analysis and spectral symbol of d-variate Q p Lagrangian FEM stiffness
matrices. SIAM J. Matrix Anal. Appl. 2015, 36, 1100–1128. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.