
Article

Clustering/Distribution Analysis and Preconditioned Krylov Solvers for the Approximated Helmholtz Equation and Fractional Laplacian in the Case of Complex-Valued, Unbounded Variable Coefficient Wave Number µ

Andrea Adriani 1, Stefano Serra-Capizzano 1,2,* and Cristina Tablino-Possio 3

1 Department of Science and High Technology, University of Insubria, Via Valleggio 11, 22100 Como, Italy; [Link]@[Link]
2 Division of Scientific Computing, Department of Information Technology, Uppsala University, Lägerhyddsv 2, hus 2, SE-751 05 Uppsala, Sweden
3 Department of Mathematics and Applications, University of Milano-Bicocca, Via Cozzi 53, 20125 Milano, Italy; [Link]@[Link]
* Correspondence: [Link]@[Link]

Abstract: We consider the Helmholtz equation and the fractional Laplacian in the case of the
complex-valued unbounded variable coefficient wave number µ, approximated by finite differences.
In a recent analysis, singular value clustering and eigenvalue clustering have been proposed for a τ
preconditioning when the variable coefficient wave number µ is uniformly bounded. Here, we extend
the analysis to the unbounded case by focusing on the case of a power singularity. Several numerical
experiments concerning the spectral behavior and convergence of the related preconditioned GMRES
are presented.

Keywords: Caputo fractional derivatives; Helmholtz equations; eigenvalue asymptotic distribution; spectral symbol; clustering; Generalized Locally Toeplitz sequences; preconditioning

Citation: Adriani, A.; Serra-Capizzano, S.; Tablino-Possio, C. Clustering/Distribution Analysis and Preconditioned Krylov Solvers for the Approximated Helmholtz Equation and Fractional Laplacian in the Case of Complex-Valued, Unbounded Variable Coefficient Wave Number µ. Algorithms 2024, 17, 100. https://[Link]/10.3390/a17030100

Received: 22 January 2024; Revised: 14 February 2024; Accepted: 19 February 2024; Published: 26 February 2024

Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://[Link]/licenses/by/4.0/).

1. Introduction

In the present work, the fractional Laplacian operator (−∆)^{α/2}(·) is considered. Its formal definition is

$$(-\Delta)^{\alpha/2}(u(x,y)) = c_\alpha\,\mathrm{P.V.}\!\int_{\mathbb{R}^2}\frac{u(x,y)-u(\tilde{x},\tilde{y})}{\big[(x-\tilde{x})^2+(y-\tilde{y})^2\big]^{\frac{2+\alpha}{2}}}\,d\tilde{x}\,d\tilde{y},\qquad c_\alpha=\frac{2^{\alpha}\,\Gamma\!\big(\frac{\alpha+2}{2}\big)}{\pi\,\big|\Gamma\!\big(-\frac{\alpha}{2}\big)\big|},$$

where Γ(·) is the Gamma function. More explicitly, our problem consists in finding fast solvers for the numerical approximation of a two-dimensional nonlocal Helmholtz equation with fractional Laplacian, described by the equations

$$\begin{cases}(-\Delta)^{\alpha/2}u(x,y)+\mu(x,y)u(x,y)=v(x,y), & (x,y)\in\Omega\subset\mathbb{R}^2,\ \alpha\in(1,2),\\ u(x,y)=0, & (x,y)\in\Omega^{c},\end{cases}\qquad(1)$$

with a given variable-coefficient, complex-valued wave number µ = µ(x, y) and source term v. Here, Ω is taken to be [0, 1]² ⊂ R² and Ω^c is the complement of Ω. In what follows, µ(x, y) = 1/(x + iy)^γ for some γ > 0; the case of a bounded µ(x, y) has been studied by Adriani et al. [1] and by Li et al. [2].

To approximate Equation (1), we employ the fractional centered differences (FCD). Given a positive integer n, we take h = 1/(n + 1) as the step size. We define x_i = ih and y_j = jh

for every i, j ∈ Z. The discrete version of the fractional Laplacian in such a setting is given by

$$(-\Delta_h)^{\alpha/2}(u(x,y)) := \frac{1}{h^{\alpha}}\sum_{k_1,k_2\in\mathbb{Z}} b^{(\alpha)}_{k_1,k_2}\,u(x+k_1 h,\,y+k_2 h),\qquad(2)$$

where b^{(α)}_{k1,k2} are the Fourier coefficients of the function

$$t_\alpha(\eta,\Psi)=\left(4\sin^2\frac{\eta}{2}+4\sin^2\frac{\Psi}{2}\right)^{\frac{\alpha}{2}},\qquad(3)$$

that is,

$$b^{(\alpha)}_{k_1,k_2}=\frac{1}{4\pi^2}\int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} t_\alpha(\eta,\Psi)\,e^{-\mathrm{i}(k_1\eta+k_2\Psi)}\,d\eta\,d\Psi,$$

where i is the imaginary unit.

Proceeding as in [1], we trace back the original problem to solving the following linear system

$$A_{\boldsymbol{n}}u := (B_{\boldsymbol{n}}+D_{\boldsymbol{n}}(\mu))u = f,\qquad \boldsymbol{n}=(n,n),\qquad(4)$$

where B_n = (1/h^α) B̂_n, and B̂_n is the two-level symmetric Toeplitz matrix generated by tα(η, Ψ), i.e., B̂_n = T_n(tα), with

$$\hat B_{\boldsymbol{n}}=\begin{bmatrix} B_0 & B_1 & B_2 & \cdots & B_{n-2} & B_{n-1}\\ B_1 & B_0 & B_1 & \cdots & B_{n-3} & B_{n-2}\\ B_2 & B_1 & B_0 & \cdots & B_{n-4} & B_{n-3}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ B_{n-2} & B_{n-3} & B_{n-4} & \cdots & B_0 & B_1\\ B_{n-1} & B_{n-2} & B_{n-3} & \cdots & B_1 & B_0 \end{bmatrix},\qquad
B_j=\begin{bmatrix} b^{(\alpha)}_{0,j} & b^{(\alpha)}_{1,j} & b^{(\alpha)}_{2,j} & \cdots & b^{(\alpha)}_{n-2,j} & b^{(\alpha)}_{n-1,j}\\ b^{(\alpha)}_{1,j} & b^{(\alpha)}_{0,j} & b^{(\alpha)}_{1,j} & \cdots & b^{(\alpha)}_{n-3,j} & b^{(\alpha)}_{n-2,j}\\ b^{(\alpha)}_{2,j} & b^{(\alpha)}_{1,j} & b^{(\alpha)}_{0,j} & \cdots & b^{(\alpha)}_{n-4,j} & b^{(\alpha)}_{n-3,j}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ b^{(\alpha)}_{n-2,j} & b^{(\alpha)}_{n-3,j} & b^{(\alpha)}_{n-4,j} & \cdots & b^{(\alpha)}_{0,j} & b^{(\alpha)}_{1,j}\\ b^{(\alpha)}_{n-1,j} & b^{(\alpha)}_{n-2,j} & b^{(\alpha)}_{n-3,j} & \cdots & b^{(\alpha)}_{1,j} & b^{(\alpha)}_{0,j} \end{bmatrix}.$$

For the sake of simplicity, the previous equation is rewritten in the following scaled form

$$\hat A_{\boldsymbol{n}}u := (\hat B_{\boldsymbol{n}}+h^{\alpha}D_{\boldsymbol{n}}(\mu))u = v,\qquad \boldsymbol{n}=(n,n).\qquad(5)$$
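To make the discretization concrete, the following sketch assembles the scaled matrix Ân of Equation (5) densely for a small n: the coefficients b^{(α)}_{k1,k2} are approximated by a trapezoidal rule applied to samples of tα (a 2D FFT), and B̂n is formed as the corresponding two-level symmetric Toeplitz matrix. This is only an illustrative construction under our own naming and quadrature-size choices; a fast solver would of course never form Ân explicitly.

```python
import numpy as np

def fcd_coefficients(alpha, n, m=256):
    """Fourier coefficients b^(alpha)_{k1,k2}, 0 <= k1, k2 < n, of
    t_alpha(eta, Psi) = (4 sin^2(eta/2) + 4 sin^2(Psi/2))^(alpha/2),
    via an m x m trapezoidal rule, i.e., a 2D FFT of symbol samples."""
    eta = 2 * np.pi * np.arange(m) / m
    E, P = np.meshgrid(eta, eta, indexing="ij")
    t = (4 * np.sin(E / 2) ** 2 + 4 * np.sin(P / 2) ** 2) ** (alpha / 2)
    return (np.fft.fft2(t).real / m**2)[:n, :n]   # real and even in each index

def assemble_A_hat(alpha, gamma, n):
    """Dense scaled matrix A_hat = B_hat + h^alpha D(mu) of Equation (5),
    with mu(x, y) = 1/(x + iy)^gamma sampled at the interior grid points."""
    h = 1.0 / (n + 1)
    b = fcd_coefficients(alpha, n)
    idx = np.arange(n)
    lag = np.abs(idx[:, None] - idx[None, :])     # |i1 - i2|
    # two-level symmetric Toeplitz: entry ((i1,j1),(i2,j2)) = b_{|i1-i2|,|j1-j2|}
    B_hat = b[lag[:, None, :, None], lag[None, :, None, :]].reshape(n * n, n * n)
    x = h * np.arange(1, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    return B_hat + h**alpha * np.diag((1.0 / (X + 1j * Y) ** gamma).ravel())
```

Subtracting the (complex) diagonal part leaves the symmetric real Toeplitz portion, which is a quick structural sanity check on the assembly.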

For the two-level notations and the theory regarding Toeplitz structures, we refer to [3]. In the case where µ(x, y) = 1/(x + iy)^γ, we can give sufficient conditions on the coefficient γ, depending on α, which guarantee that {h^α Dn(µ)}n is zero-distributed in the eigenvalue/singular value sense, thus obtaining the spectral distribution of the sequence {Ân}n which, under mild conditions, has to coincide with that of {B̂n}n. In the next section, we first introduce the necessary tools and then present theoretical results completing those in [1,2], together with related numerical experiments. The numerical experiments concern the visualization of the distribution/clustering results and the optimal performance of the related preconditioning when the preconditioned GMRES is used.
We highlight that the spectral analysis for the considered preconditioned and nonpreconditioned matrix-sequences with unbounded µ(x, y) is completely new. In fact, in [1,2] the assumption of boundedness of the wave number is always employed; furthermore, in [2] the results are focused on eigenvalue localization findings, while in [1] the singular value analysis is the main target. Finally, we stress that our eigenvalue results are nontrivial, given the non-Hermitian and even non-normal nature of the involved matrix-sequences.

2. Spectral Analysis
First, we report a few definitions regarding the spectral and singular value distribution,
the notion of clustering and a few relevant relationships among the various concepts. Then,
we present the main theoretical tool taken from [4] and we perform a spectral analysis of the
various matrix-sequences. Numerical experiments and visualization results corroborating
the analysis are presented in the last part of the section.

Definition 1. Let {An}n be a sequence of matrices, with An of size dn, and let ψ : D ⊂ R^t → C^{r×r} be a measurable function defined on a set D with 0 < µt(D) < ∞.
• We say that {An}n has an (asymptotic) singular value distribution described by ψ, and we write {An}n ∼σ ψ, if

$$\lim_{n\to\infty}\frac{1}{d_n}\sum_{i=1}^{d_n}F(\sigma_i(A_n))=\frac{1}{\mu_t(D)}\int_{D}\frac{\sum_{i=1}^{r}F(\sigma_i(\psi(\mathbf{x})))}{r}\,d\mathbf{x},\qquad\forall F\in C_c(\mathbb{R}).\qquad(6)$$

• We say that {An}n has an (asymptotic) spectral (or eigenvalue) distribution described by ψ, and we write {An}n ∼λ ψ, if

$$\lim_{n\to\infty}\frac{1}{d_n}\sum_{i=1}^{d_n}F(\lambda_i(A_n))=\frac{1}{\mu_t(D)}\int_{D}\frac{\sum_{i=1}^{r}F(\lambda_i(\psi(\mathbf{x})))}{r}\,d\mathbf{x},\qquad\forall F\in C_c(\mathbb{C}).\qquad(7)$$

If A ∈ Cm×m , then the singular values and the eigenvalues of A are denoted by
σ1 ( A), . . . , σm ( A) and λ1 ( A), . . . , λm ( A), respectively. Furthermore, if A ∈ Cm×m and
1 ≤ p ≤ ∞, then ∥ A∥ p denotes the Schatten p-norm of A, i.e., the p-norm of the vector
(σ1 ( A), . . . , σm ( A)); see [5] for a comprehensive treatment of the subject. The Schatten
∞-norm ∥ A∥∞ is the largest singular value of A and coincides with the spectral norm ∥ A∥.
The Schatten 1-norm ∥ A∥1 is the sum of the singular values of A and coincides with the
so-called trace-norm of A, while the Schatten 2-norm ∥ A∥2 coincides with the Frobenius
norm of A, which is of great popularity in the numerical analysis community because of its
low computational complexity.
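As a small illustration of the norms just recalled, the sketch below computes the Schatten p-norm directly from its definition (the function name is our own):

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm of A: the l^p norm of its singular value vector.
    p = 1 gives the trace norm, p = 2 the Frobenius norm, and p = inf
    the spectral norm."""
    s = np.linalg.svd(A, compute_uv=False)
    return float(s.max()) if np.isinf(p) else float((s ** p).sum() ** (1.0 / p))
```

For p = 2 this agrees with `np.linalg.norm(A, 'fro')`, and the Schatten norms are nonincreasing in p, so ∥A∥₁ ≥ ∥A∥₂ ≥ ∥A∥.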
At this point, we introduce the definition of clustering, which, as for the distribution notions, is a concept of asymptotic type. For z ∈ C and ϵ > 0, let B(z, ϵ) be the disk with center z and radius ϵ, B(z, ϵ) := {w ∈ C : |w − z| < ϵ}. For S ⊆ C and ϵ > 0, we denote by B(S, ϵ) the ϵ-expansion of S, defined as B(S, ϵ) := ∪_{z∈S} B(z, ϵ).

Definition 2. Let {An}n be a sequence of matrices, with An of size dn tending to infinity, and let S ⊆ C be a nonempty closed subset of C. {An}n is strongly clustered at S in the sense of the eigenvalues if, for each ϵ > 0, the number of eigenvalues of An outside B(S, ϵ) is bounded by a constant qϵ independent of n. In symbols,

$$q_\epsilon(n,S):=\#\{j\in\{1,\dots,d_n\}:\lambda_j(A_n)\notin B(S,\epsilon)\}=O(1),\quad\text{as } n\to\infty.$$

{An}n is weakly clustered at S if, for each ϵ > 0,

$$q_\epsilon(n,S)=o(d_n),\quad\text{as } n\to\infty.$$

If {An}n is strongly or weakly clustered at S and S is not connected, then the connected components of S are called sub-clusters. Of special importance in the theory of preconditioning is the case of spectral single-point clustering, where S is made up of a unique complex number s. The same notions hold for the singular values, where s is a nonnegative number and S is a nonempty closed subset of the nonnegative real numbers.
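For a single-point cluster S = {s}, the outlier count qϵ(n, S) of Definition 2 is straightforward to compute; the helper below (our own naming) is exactly the quantity reported as No(ϵ) in the numerical section.

```python
import numpy as np

def count_outliers(eigs, s, eps):
    """q_eps(n, {s}) of Definition 2 for a single-point cluster: the number
    of eigenvalues outside the open disk B(s, eps)."""
    return int(np.sum(np.abs(np.asarray(eigs) - s) >= eps))
```

For instance, the sequence λ_k = 1 + 1/k is weakly (but not strongly) clustered at 1: for each fixed ϵ, only finitely many terms lie outside B(1, ϵ), yet that number grows as ϵ shrinks.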
For a measurable function g : D ⊆ R^t → C, the essential range of g is defined as E R(g) := {z ∈ C : µt({g ∈ B(z, ϵ)}) > 0 for all ϵ > 0}, where {g ∈ B(z, ϵ)} := {x ∈ D : g(x) ∈ B(z, ϵ)}. E R(g) is always closed and, if g is continuous and D is contained in the closure of its interior, then E R(g) coincides with the closure of the image of g.

Hence, if {An}n ∼λ ψ (with {An}n, ψ as in Definition 1), then, by ([6], Theorem 4.2), {An}n is weakly clustered at the essential range of ψ, defined as the union of the essential ranges of the eigenvalue functions λi(ψ), i = 1, . . . , r: E R(ψ) := ∪_{i=1}^{r} E R(λi(ψ)). All the considerations above can be translated into the singular value setting as well, with obvious minimal modifications.
In addition, the following result holds.

Theorem 1. If E R(ψ) = {s}, with s a fixed complex number, then we have the following equivalence: {An}n ∼λ ψ iff {An}n is weakly clustered at s in the eigenvalue sense. Analogously, if E R(|ψ|) = {s}, with s a fixed nonnegative number, then {An}n ∼σ ψ iff {An}n is weakly clustered at s in the singular value sense.

A noteworthy example treated in the previous theorem is that of zero-distributed sequences {An}n, expressed by definition as {An}n ∼σ 0 (see [3]).
We will make use of Theorem 1 in [4], which we report below and which extends previous results in [6] in the context of the zero distribution of zeros of perturbed orthogonal polynomials.

Theorem 2. Let {Xn}n be a matrix-sequence such that each Xn is Hermitian of size dn and {Xn}n ∼λ f, where f is a measurable function defined on a subset of R^q for some q, with finite and positive Lebesgue measure. If ∥Yn∥₂ = o(√dn), with ∥·∥₂ being the Frobenius norm, then {Yn}n ∼λ 0 and {Xn + Yn}n ∼λ f.

2.1. Main Results

We study the eigenvalue distribution of the two matrix-sequences {h^α Dn(µ)}n and {Ân = B̂n + h^α Dn(µ)}n, in the sense of Definition 1. The same kind of matrices and matrix-sequences are treated in [1,2]. In [2] eigenvalue localization results are studied, while in [1] singular value and eigenvalue distribution results are obtained; in both cases, the coefficient µ(x, y) is assumed bounded. Here, we extend the results in the quoted literature.

Theorem 3. Let µ(x, y) = 1/(x + iy)^γ. Then, for every γ ≥ 0 such that α > γ − 1 (α ∈ (1, 2)), we have
a1. {h^α Dn(µ)}n ∼λ 0;
a2. {B̂n + h^α Dn(µ)}n ∼λ tα.

Proof. In the proof we strongly rely on Theorem 2. Therefore, we compute

$$\|h^{\alpha}D_{\boldsymbol{n}}(\mu)\|_2^2=\sum_{i,j=1}^{n}|\mu_{ij}|^2 h^{2\alpha}=\sum_{i,j=1}^{n}\big(|ih|^2+|jh|^2\big)^{-\gamma}h^{2\alpha}=h^{2\alpha-2\gamma}\sum_{i,j=1}^{n}\frac{1}{(i^2+j^2)^{\gamma}}.$$

Then, we estimate the quantity ∑_{i,j=1}^n 1/(i²+j²)^γ under the hypothesis that γ ∈ (0, 1):

$$\sum_{i,j=1}^{n}\frac{1}{(i^2+j^2)^{\gamma}}=2\sum_{i=1}^{n}\sum_{j=1}^{i-1}\frac{1}{(i^2+j^2)^{\gamma}}+\frac{1}{2^{\gamma}}\sum_{i=1}^{n}\frac{1}{i^{2\gamma}}.$$

Now, the first sum can be estimated as

$$2\sum_{i=1}^{n}\sum_{j=1}^{i-1}\frac{1}{(i^2+j^2)^{\gamma}}\le 2\sum_{i=1}^{n}\frac{i-1}{i^{2\gamma}}=2\sum_{i=1}^{n}\frac{1}{i^{2\gamma-1}}-2\sum_{i=1}^{n}\frac{1}{i^{2\gamma}}.\qquad(8)$$

Therefore,

$$\sum_{i,j=1}^{n}\frac{1}{(i^2+j^2)^{\gamma}}\le 2\sum_{i=1}^{n}\frac{1}{i^{2\gamma-1}}+\left(\frac{1}{2^{\gamma}}-2\right)\sum_{i=1}^{n}\frac{1}{i^{2\gamma}}.$$

Note that 1/2^γ − 2 < 0 for every γ ∈ (0, 1). A basic computation leads to

$$2\sum_{i=1}^{n}\frac{1}{i^{2\gamma-1}}\le 2\int_{0}^{n}\frac{dt}{t^{2\gamma-1}}=\frac{n^{2-2\gamma}}{1-\gamma}$$

and

$$\sum_{i=1}^{n}\frac{1}{i^{2\gamma}}\ge\int_{0}^{n}\frac{dt}{(t+1)^{2\gamma}}=\begin{cases}\dfrac{(n+1)^{1-2\gamma}-1}{1-2\gamma}, & \gamma\neq\tfrac12,\\[2mm]\log(n+1), & \gamma=\tfrac12.\end{cases}$$

As a consequence, we conclude that

$$\sum_{i,j=1}^{n}\frac{1}{(i^2+j^2)^{\gamma}}\le\frac{n^{2-2\gamma}}{1-\gamma}+\left(\frac{1}{2^{\gamma}}-2\right)\begin{cases}\dfrac{(n+1)^{1-2\gamma}-1}{1-2\gamma}, & \gamma\neq\tfrac12,\\[2mm]\log(n+1), & \gamma=\tfrac12,\end{cases}$$

so that ∑_{i,j=1}^n 1/(i²+j²)^γ ≤ c_n(γ) ∼ n^{2−2γ}/(1−γ) for every γ ∈ (0, 1). This immediately implies that

$$h^{2\alpha-2\gamma}\sum_{i,j=1}^{n}\frac{1}{(i^2+j^2)^{\gamma}}\le\frac{c_n(\gamma)}{n^{2\alpha-2\gamma}}\sim\frac{n^{2-2\alpha}}{1-\gamma}=o(n^2)$$

for every α ∈ (1, 2), as required to apply Theorem 1 in [4] (reported here as Theorem 2) and conclude the proof.

From the computation in Equation (8), we immediately deduce the following: if γ > 1, then ∑_{i,j=1}^n 1/(i²+j²)^γ ≤ c_γ for every n, so that ∥h^α Dn(µ)∥₂² = o(n²) if and only if 2α − 2γ + 2 > 0, that is, α > γ − 1.


Finally, when γ = 1, we obtain the estimate

$$\sum_{i,j=1}^{n}\frac{1}{i^2+j^2}\le k+\int_{1}^{n}\!\int_{1}^{n}\frac{1}{x^2+y^2}\,dx\,dy\le k+\int_{1}^{n}\left(\int_{0}^{\frac{\pi}{2}}\frac{1}{r}\,d\theta\right)dr=k+\frac{\pi}{2}\log(n),$$

with k being any constant independent of n satisfying

$$k\ge 2\sum_{j=1}^{\infty}\frac{1}{j^2+1}-\frac{1}{2}.$$

As above, this leads to the conclusion that ∥h^α Dn(µ)∥₂² = o(n²) if and only if α > γ − 1, that is, γ < α + 1. Then the proof is complete by Theorem 2 with dn = n².
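The growth rate used in the proof can be checked numerically. The sketch below (illustrative, with our own function names) evaluates S(n, γ) = ∑_{i,j=1}^n 1/(i²+j²)^γ and its ratio to the leading-order estimate n^{2−2γ}/(1−γ); for γ ∈ (0, 1) the ratio remains bounded as n grows, while for γ > 1 the sum itself is bounded, as stated after Equation (8).

```python
import numpy as np

def lattice_sum(n, gamma):
    """S(n, gamma) = sum_{i,j=1}^n 1/(i^2 + j^2)^gamma, the quantity
    estimated in the proof of Theorem 3."""
    i = np.arange(1.0, n + 1)
    return float(((i[:, None] ** 2 + i[None, :] ** 2) ** (-gamma)).sum())

def bound_ratio(n, gamma):
    """Ratio of S(n, gamma) to the leading-order estimate n^(2-2g)/(1-g);
    for gamma in (0, 1) it stays bounded as n grows, as the proof predicts."""
    return lattice_sum(n, gamma) / (n ** (2 - 2 * gamma) / (1 - gamma))
```

In quick experiments the ratio stabilizes below 1 for moderate n, consistent with the asymptotic estimate c_n(γ) ∼ n^{2−2γ}/(1−γ).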

With reference to the proof of Theorem 3, from a technical point of view, it should be observed that in [7–9] one can find more refined bounds for terms such as ∑_{j=1}^{n} j^ℓ, for various choices of the real parameter ℓ.
The next corollary complements the previous result.

Corollary 1. Assume that there exist positive constants c ≤ C for which

$$c/|x+iy|^{\gamma}\le|\mu(x,y)|\le C/|x+iy|^{\gamma}$$

for every x, y ∈ [0, 1]. Then, for every γ ≥ 0 such that α > γ − 1 (α ∈ (1, 2)), we have
b1. {h^α Dn(µ)}n ∼λ 0;
b2. {B̂n + h^α Dn(µ)}n ∼λ tα.

Proof. It follows directly by Theorem 3 with the observation that

$$c\,\delta_{\boldsymbol{n}}\le\|h^{\alpha}D_{\boldsymbol{n}}(\mu)\|_2\le C\,\delta_{\boldsymbol{n}},$$

with δn = ∥h^α Dn(µγ)∥₂ and µγ(x, y) = 1/(x + iy)^γ. Finally, the proof is concluded by invoking Theorem 2 with dn = n².

2.2. Preconditioning

For a symmetric Toeplitz matrix Tn ∈ R^{n×n} with first column [t1, t2, . . . , tn]^⊤, the matrix τ(Tn) defined as

$$\tau(T_n):=T_n-H(T_n)\qquad(9)$$

is the natural τ preconditioner of Tn; it was already considered decades ago in [10–12], when a great amount of theoretical and computational work was dedicated to preconditioning strategies for structured linear systems. Here, H(Tn) denotes a Hankel matrix, whose entries are constant along each antidiagonal and whose precise definition is the following: the first row and the last column of H(Tn) are given by [t3, t4, . . . , tn, 0, 0] and [0, 0, tn, . . . , t4, t3]^⊤, respectively. Notice that, by using the sine transform matrix Sn, defined as

$$[S_n]_{k,j}=\sqrt{\frac{2}{n+1}}\,\sin\!\left(\frac{\pi kj}{n+1}\right),\qquad 1\le k,j\le n,$$

every τ matrix is diagonalized as τ(Tn) = Sn Λn Sn, where Λn is the diagonal matrix formed by the eigenvalues of τ(Tn) and Sn = ([Sn]_{j,k}) is the real, symmetric, orthogonal matrix defined above, so that Sn = Sn^⊤ = Sn^{−1}. Furthermore, the matrix Sn is associated with the fast sine transform of type I (see [13,14] for several other sine/cosine transforms). Indeed, the multiplication of the matrix Sn by a real vector can be conducted in O(n log n) real operations, at around half the cost of the celebrated fast Fourier transform [15]. Therefore, all the relevant matrix operations in this algebra cost O(n log n) real operations, including matrix–matrix multiplication, inversion, solution of a linear system, and computation of the spectrum, i.e., of the diagonal entries of Λn.
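A minimal sketch of the construction (9) and of its sine-transform diagonalization, assuming the standard indexing of the Hankel correction (first row [t3, . . . , tn, 0, 0]); all helper names are our own:

```python
import numpy as np
from scipy.linalg import toeplitz, hankel

def tau_matrix(col):
    """tau(T_n) = T_n - H(T_n), Equation (9), for the symmetric Toeplitz
    matrix with first column col = [t1, ..., tn]; the Hankel correction has
    first row [t3, ..., tn, 0, 0] (standard indexing, assumed here)."""
    col = np.asarray(col, dtype=float)
    first_row = np.r_[col[2:], 0.0, 0.0]
    return toeplitz(col) - hankel(first_row, first_row[::-1])

def tau_eigenvalues(col):
    """Eigenvalues of tau(T_n): the cosine symbol t1 + 2 sum_j t_{j+1} cos(j t)
    sampled at t_k = pi k/(n+1); in practice this is a DST-I, in O(n log n)."""
    col = np.asarray(col, dtype=float)
    n = len(col)
    theta = np.pi * np.arange(1, n + 1) / (n + 1)
    return col[0] + 2 * np.cos(np.outer(theta, np.arange(1, n))) @ col[1:]

def sine_transform_matrix(n):
    """S_n with [S_n]_{k,j} = sqrt(2/(n+1)) sin(pi k j/(n+1)): real,
    symmetric and orthogonal, so S_n = S_n^T = S_n^{-1}."""
    k = np.arange(1, n + 1)
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, k) / (n + 1))
```

One can verify numerically that Sn Λn Sn reproduces Tn − H(Tn) exactly, and that a tridiagonal symmetric Toeplitz matrix is left unchanged by the map, as it already belongs to the τ algebra.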
Using standard and known techniques, the τ algebra has d-level versions for every d ≥ 1, in which τ(Tn), with Tn a d-level symmetric Toeplitz matrix of size ν(n) = n1 n2 · · · nd, n = (n1, . . . , nd), has the diagonalization form

$$\tau(T_{\boldsymbol{n}})=S_{\boldsymbol{n}}\Lambda_{\boldsymbol{n}}S_{\boldsymbol{n}},\qquad S_{\boldsymbol{n}}=S_{n_1}\otimes\cdots\otimes S_{n_d},\qquad(10)$$

with Λn the diagonal matrix obtained as a d-level sine transform of type I of the first column of Tn. Again, the quoted d-level transform and all the relevant matrix operations in the related algebra have a cost of O(ν(n) log ν(n)) real operations, which is quasi-optimal given that the matrices have size ν(n).
At the algebraic level, the explicit construction can be conducted recursively using the additive decomposition (9): first at the most external level, then applying the same operation to any block, which is a (d − 1)-level symmetric Toeplitz matrix, and so on, until arriving at scalar entries.
In light of the excellent structural, spectral, and computational features of the τ algebra in d levels, two different types of τ preconditioning for the related linear systems were proposed in [2] and one in [1] (with d = 2 and n = (n, n)). Here, we consider the latter. In fact, {Tn(f) − τ(Tn(f))}n ∼σ,λ 0 for any Lebesgue integrable f, thanks to the distribution results on multilevel Hankel matrix-sequences generated by L¹ functions proven in [16]. From this, as proven in [1], the preconditioner Pn = τ(Tn(tα)) is such that the preconditioned matrix-sequence is clustered at 1 both in the eigenvalue and the singular value sense, under mild assumptions. In fact, using the notion of an approximating class of sequences, the eigenvalue perturbation results in [6], and the GLT apparatus [3], it is enough that µ(x, y) is Riemann integrable or simply bounded. Here, we extend the spectral distribution results to the case where µ(x, y) is not bounded and even not integrable. More precisely, as in Corollary 1, we consider the case of a power singularity.
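To illustrate the overall solver, the sketch below assembles the scaled system (5) densely for a small n, applies the inverse of the two-level τ preconditioner through two one-dimensional sine transforms, and runs SciPy's GMRES. The preconditioner eigenvalues are obtained by sampling the truncated cosine symbol of tα at the sine grid; all function names, the quadrature size m, and the tolerances are our own illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.fft import fft2
from scipy.sparse.linalg import LinearOperator, gmres

def assemble(alpha, gamma, n, m=256):
    """Dense assembly of the scaled system (5) for small n, plus the
    eigenvalues of the 2-level tau preconditioner P_n = tau(T_n(t_alpha)),
    obtained by sampling the truncated cosine symbol at the sine grid."""
    h = 1.0 / (n + 1)
    eta = 2 * np.pi * np.arange(m) / m
    E, P = np.meshgrid(eta, eta, indexing="ij")
    t = (4 * np.sin(E / 2) ** 2 + 4 * np.sin(P / 2) ** 2) ** (alpha / 2)
    b = (fft2(t).real / m**2)[:n, :n]              # Fourier coefficients of t_alpha
    idx = np.arange(n)
    lag = np.abs(idx[:, None] - idx[None, :])
    B_hat = b[lag[:, None, :, None], lag[None, :, None, :]].reshape(n * n, n * n)
    x = h * np.arange(1, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    A_hat = B_hat + h**alpha * np.diag((1.0 / (X + 1j * Y) ** gamma).ravel())
    theta = np.pi * np.arange(1, n + 1) / (n + 1)  # sine-transform grid
    lags = np.arange(1 - n, n)                     # lags -(n-1) .. n-1
    C = np.cos(np.outer(theta, lags))
    lam = C @ b[np.abs(lags)][:, np.abs(lags)] @ C.T   # n x n eigenvalues of P_n
    return A_hat, lam

def solve(alpha, gamma, n):
    """Preconditioned GMRES for (5); returns (relative residual, info flag)."""
    A_hat, lam = assemble(alpha, gamma, n)
    k = np.arange(1, n + 1)
    S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, k) / (n + 1))

    def apply_Pinv(v):
        # P^{-1} v = (S kron S) diag(lam)^{-1} (S kron S) v via two 1D transforms
        W = S @ v.reshape(n, n) @ S
        return (S @ (W / lam) @ S).ravel()

    M = LinearOperator((n * n, n * n), matvec=apply_Pinv, dtype=complex)
    rhs = np.ones(n * n, dtype=complex)
    u, info = gmres(A_hat, rhs, M=M, restart=n * n, maxiter=n * n, atol=1e-10)
    return np.linalg.norm(A_hat @ u - rhs) / np.linalg.norm(rhs), info
```

In a production code the dense matrix would be replaced by an FFT-based matrix-vector product exploiting the two-level Toeplitz structure, while the preconditioner application above already has the quasi-optimal O(n² log n) cost when the dense S-multiplications are replaced by fast DST-I calls.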

Theorem 4. Assume that there exist positive constants c ≤ C for which

$$c/|x+iy|^{\gamma}\le|\mu(x,y)|\le C/|x+iy|^{\gamma}$$

for every x, y ∈ [0, 1]. Consider the preconditioner Pn = τ(Tn(tα)). Then, for every γ ∈ [0, 1) and for every α ∈ (1, 2), we have
c1. {Pn^{−1} h^α Dn(µ)}n ∼λ 0;
c2. {Pn^{−1}(B̂n + h^α Dn(µ))}n ∼λ 1.

Proof. We rely on Theorem 2, on Corollary 1, and on a standard symmetrization trick. First of all, we observe that the eigenvalues of Pn^{−1} h^α Dn(µ) and Yn = Pn^{−1/2} h^α Dn(µ) Pn^{−1/2} are the same, because the two matrices are similar. The same holds for

$$P_{\boldsymbol{n}}^{-1}\big(\hat B_{\boldsymbol{n}}+h^{\alpha}D_{\boldsymbol{n}}(\mu)\big)$$

and Xn + Yn, with

$$X_{\boldsymbol{n}}=P_{\boldsymbol{n}}^{-1/2}\hat B_{\boldsymbol{n}}P_{\boldsymbol{n}}^{-1/2}.$$

Now Xn is real symmetric and, in fact, positive definite, as is B̂n. As proven in [1], the spectral distribution function of {Xn}n is 1, thanks to a basic use of the GLT theory. Furthermore, the minimal eigenvalue of Pn = τ(Tn(tα)) is positive and tends to zero as h^α, since tα has a unique zero of order α at zero (see, e.g., [17]). Therefore,

$$\|h^{\alpha}P_{\boldsymbol{n}}^{-1/2}\|=\frac{h^{\alpha}}{\sqrt{\lambda_1(\tau(T_{\boldsymbol{n}}(t_\alpha)))}}\le D\,h^{\alpha/2},\qquad \|P_{\boldsymbol{n}}^{-1/2}\|=\frac{1}{\sqrt{\lambda_1(\tau(T_{\boldsymbol{n}}(t_\alpha)))}}\le D\,h^{-\alpha/2}.$$

As a consequence of Corollary 1, we deduce

$$\|Y_{\boldsymbol{n}}\|_2\le D^2\|D_{\boldsymbol{n}}(\mu)\|_2=o(n)\qquad(11)$$

if and only if γ < 1. In conclusion, the desired result follows from Theorem 2 with dn = n² and f = 1.

Theorem 4 cannot be sharp, since the estimate in (11) would hold also if the preconditioner were chosen as Pn = h^α I_{n²}. A more careful estimate would require taking into account that the eigenvalues of τ(Tn(tα)) are explicitly known, and in fact we will see in the numerical experiments that the spectral clustering at 1 of the preconditioned matrix-sequence is observed also for γ much larger than 1.

2.3. Numerical Evidence: Visualizations of the Original Matrix-Sequence

In the current subsection, we report visualizations regarding the analysis in Theorem 3. More precisely, in Figures 1–4, we plot the eigenvalues of the matrix Ân when the matrix size is n² = 2¹² and when (α, γ) ∈ {(1.2, 1), (1.4, 1), (1.6, 1), (1.8, 1)}, satisfying the assumption of Theorem 3 given by γ < α + 1. As can be observed, the clustering at zero of the imaginary part of the eigenvalues of Ân and the relation {Ân} ∼λ tα are visible already for a moderate matrix size of 2¹². A remarkable fact is that no outliers are present: the imaginary parts are always negligible, and the graph of an equispaced sampling of tα and that of the real parts of the eigenvalues, both taken in nondecreasing order, superpose completely.
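The comparison drawn in Figures 1–4 can be reproduced in a few lines: assemble Ân for a small n, sort the real parts of its eigenvalues, and compare them with an equispaced sampling of tα (a sketch under our own naming; in the paper the matrix size is much larger).

```python
import numpy as np

def eig_vs_symbol(alpha, gamma, n, m=256):
    """Sorted real parts of eig(A_hat_n) versus a sorted equispaced sampling
    of t_alpha on (-pi, pi)^2 -- the comparison drawn in Figures 1-4."""
    h = 1.0 / (n + 1)
    eta = 2 * np.pi * np.arange(m) / m
    E, P = np.meshgrid(eta, eta, indexing="ij")
    t = (4 * np.sin(E / 2) ** 2 + 4 * np.sin(P / 2) ** 2) ** (alpha / 2)
    b = (np.fft.fft2(t).real / m**2)[:n, :n]
    idx = np.arange(n)
    lag = np.abs(idx[:, None] - idx[None, :])
    B_hat = b[lag[:, None, :, None], lag[None, :, None, :]].reshape(n * n, n * n)
    x = h * np.arange(1, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    A_hat = B_hat + h**alpha * np.diag((1.0 / (X + 1j * Y) ** gamma).ravel())
    eigs = np.linalg.eigvals(A_hat)
    th = np.pi * (2 * np.arange(1, n + 1) - n - 1) / n   # n equispaced points
    T1, T2 = np.meshgrid(th, th, indexing="ij")
    samples = (4 * np.sin(T1 / 2) ** 2 + 4 * np.sin(T2 / 2) ** 2) ** (alpha / 2)
    return np.sort(eigs.real), np.sort(samples.ravel()), eigs.imag
```

Plotting the first two returned arrays against each other reproduces the right panels of Figures 1–4, while the third array shows the smallness of the imaginary parts.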

Figure 1. Eigenvalues of the matrix Ân for γ = 1, α = 1.2 and n² = 2¹². The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].

Figure 2. Eigenvalues of the matrix Ân for γ = 1, α = 1.4 and n² = 2¹². The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].
Figure 3. Eigenvalues of the matrix Ân for γ = 1, α = 1.6 and n² = 2¹². The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].

Figure 4. Eigenvalues of the matrix Ân for γ = 1, α = 1.8 and n² = 2¹². The left panel reports the eigenvalues in the complex plane. The right panel reports in blue the real part of the eigenvalues and in red the equispaced samplings of tα in nondecreasing order, in the interval [min tα = 0, max tα = 2^{3α/2}].

2.4. Numerical Evidence: Preconditioning, Visualizations, GMRES Iterations

In the present subsection, we consider the preconditioned matrices. More in detail, Tables 1–4 measure the clustering at 1 with radius ϵ = 0.1, 0.01, for γ = 0.5, 0.8, 1, 1.5, for α = 1.2, 1.4, 1.6, 1.8, and for various matrix dimensions n² = 2⁸, 2¹⁰, 2¹². As can be seen, the number No(ϵ) = No(ϵ, n) increases moderately with n, but the percentage of outliers with respect to the matrix size tends to zero fast, in agreement with the forecast of Theorem 4, at least for γ < 1. The situation is indeed better than the theoretical predictions, because even when the condition γ < 1 is violated we still observe a clustering at 1, which is not a surprise given the comments after Theorem 4.

Table 1. Number of outliers No(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01 and related percentage for increasing dimension n².

µ(x, y) = 1/(x + iy)^γ, γ = 0.5

n      No(0.1)  Percentage         No(0.01)  Percentage
α = 1.2
2^4    2        7.812500 × 10^−1   227       8.867188 × 10^1
2^5    6        5.859375 × 10^−1   302       2.949219 × 10^1
2^6    16       3.906250 × 10^−1   307       7.495117 × 10^0
α = 1.4
2^4    1        3.906250 × 10^−1   89        3.476562 × 10^1
2^5    5        4.882812 × 10^−1   99        9.667969 × 10^0
2^6    13       3.173828 × 10^−1   154       3.759766 × 10^0
α = 1.6
2^4    0        0                  35        1.367188 × 10^1
2^5    3        2.929688 × 10^−1   60        5.859375 × 10^0
2^6    8        1.953125 × 10^−1   112       2.734375 × 10^0
α = 1.8
2^4    0        0                  20        7.812500 × 10^0
2^5    0        0                  38        3.710938 × 10^0
2^6    0        0                  73        1.782227 × 10^0

Table 2. Number of outliers No(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01 and related percentage for increasing dimension n².

µ(x, y) = 1/(x + iy)^γ, γ = 0.8

n      No(0.1)  Percentage         No(0.01)  Percentage
α = 1.2
2^4    3        1.171875 × 10^0    233       9.101562 × 10^1
2^5    7        6.835938 × 10^−1   395       3.857422 × 10^1
2^6    16       3.906250 × 10^−1   456       1.113281 × 10^1
α = 1.4
2^4    2        7.812500 × 10^−1   115       4.492188 × 10^1
2^5    6        5.859375 × 10^−1   131       1.279297 × 10^1
2^6    15       3.662109 × 10^−1   182       4.443359 × 10^0
α = 1.6
2^4    0        0                  46        1.796875 × 10^1
2^5    3        2.929688 × 10^−1   66        6.445312 × 10^0
2^6    8        1.953125 × 10^−1   116       2.832031 × 10^0
α = 1.8
2^4    0        0                  20        7.812500 × 10^0
2^5    0        0                  39        3.808594 × 10^0
2^6    0        0                  75        1.831055 × 10^0
Table 3. Number of outliers No(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01 and related percentage for increasing dimension n².

µ(x, y) = 1/(x + iy)^γ, γ = 1

n      No(0.1)  Percentage         No(0.01)  Percentage
α = 1.2
2^4    8        3.125000 × 10^0    235       9.179688 × 10^1
2^5    11       1.074219 × 10^0    450       4.394531 × 10^1
2^6    22       5.371094 × 10^−1   588       1.435547 × 10^1
α = 1.4
2^4    2        7.812500 × 10^−1   128       50
2^5    6        5.859375 × 10^−1   164       1.601562 × 10^1
2^6    15       3.662109 × 10^−1   217       5.297852 × 10^0
α = 1.6
2^4    0        0                  58        2.265625 × 10^1
2^5    2        1.953125 × 10^−1   76        7.421875 × 10^0
2^6    8        1.953125 × 10^−1   123       3.002930 × 10^0
α = 1.8
2^4    0        0                  27        1.054688 × 10^1
2^5    0        0                  43        4.199219 × 10^0
2^6    0        0                  78        1.904297 × 10^0

Table 4. Number of outliers No(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01 and related percentage for increasing dimension n².

µ(x, y) = 1/(x + iy)^γ, γ = 1.5

n      No(0.1)  Percentage         No(0.01)  Percentage
α = 1.2
2^4    20       7.812500 × 10^0    237       9.257812 × 10^1
2^5    33       3.222656 × 10^0    547       5.341797 × 10^1
2^6    55       1.342773 × 10^0    923       2.253418 × 10^1
α = 1.4
2^4    9        3.515625 × 10^0    154       6.015625 × 10^1
2^5    13       1.269531 × 10^0    246       2.402344 × 10^1
2^6    24       5.859375 × 10^−1   367       8.959961 × 10^0
α = 1.6
2^4    3        1.171875 × 10^0    78        3.046875 × 10^1
2^5    5        4.882812 × 10^−1   117       1.142578 × 10^1
2^6    11       2.685547 × 10^−1   181       4.418945 × 10^0
α = 1.8
2^4    1        3.906250 × 10^−1   42        1.640625 × 10^1
2^5    1        9.765625 × 10^−2   61        5.957031 × 10^0
2^6    1        2.441406 × 10^−2   97        2.368164 × 10^0

The clustering in the preconditioned setting is also visualized in Figures 5–8, while Table 5 shows that the localization around 1 is very good, since there are no large outliers. The moderate size of the outliers indicates that the preconditioned GMRES is expected to be optimal and robust with respect to all the involved parameters. The latter is evident in Table 6, with only a slight increase in the number of iterations when γ increases, as the number and the magnitude of the outliers become slightly larger.
Figure 5. Eigenvalues of the preconditioned matrix of size n² = 2¹² for γ = 0.5 and α ∈ {1.2, 1.4, 1.6, 1.8}, respectively.

Figure 6. Eigenvalues of the preconditioned matrix of size n² = 2¹² for γ = 0.8 and α ∈ {1.2, 1.4, 1.6, 1.8}, respectively.
Figure 7. Eigenvalues of the preconditioned matrix of size n² = 2¹² for γ = 1 and α ∈ {1.2, 1.4, 1.6, 1.8}, respectively.

Figure 8. Eigenvalues of the preconditioned matrix of size n² = 2¹² for γ = 1.5 and α ∈ {1.2, 1.4, 1.6, 1.8}, respectively.

Table 5. Maximal distance of eigenvalues of the preconditioned matrix from 1 for increasing dimension n².

µ(x, y) = 1/(x + iy)^γ

n      α = 1.2            α = 1.4            α = 1.6            α = 1.8
γ = 0.5
2^4    1.289012 × 10^−1   1.065957 × 10^−1   8.471230 × 10^−2   5.120695 × 10^−2
2^5    1.660444 × 10^−1   1.580600 × 10^−1   1.257282 × 10^−1   6.848000 × 10^−2
2^6    2.157879 × 10^−1   2.018168 × 10^−1   1.609333 × 10^−1   9.085515 × 10^−2
γ = 0.8
2^4    1.495892 × 10^−1   1.105071 × 10^−1   8.629435 × 10^−2   5.992612 × 10^−2
2^5    1.699374 × 10^−1   1.589429 × 10^−1   1.257050 × 10^−1   6.831767 × 10^−2
2^6    2.172238 × 10^−1   2.017168 × 10^−1   1.604429 × 10^−1   9.037168 × 10^−2
γ = 1
2^4    1.775892 × 10^−1   1.177315 × 10^−1   9.016597 × 10^−2   6.761970 × 10^−2
2^5    1.783878 × 10^−1   1.626862 × 10^−1   1.274329 × 10^−1   6.947072 × 10^−2
2^6    2.226273 × 10^−1   2.038328 × 10^−1   1.612753 × 10^−1   9.085565 × 10^−2
γ = 1.5
2^4    6.788881 × 10^−1   3.445147 × 10^−1   1.712513 × 10^−1   1.195110 × 10^−1
2^5    8.354951 × 10^−1   3.702033 × 10^−1   1.651846 × 10^−1   1.109992 × 10^−1
2^6    1.029919 × 10^0    3.983900 × 10^−1   1.686921 × 10^−1   1.048754 × 10^−1

Table 6. Number of preconditioned GMRES iterations to solve the linear system for increasing dimension n², until tol = 10^−11 ("−" denotes no preconditioning, "Pτ" the τ preconditioner).

µ(x, y) = 1/(x + iy)^γ

       α = 1.2      α = 1.4      α = 1.6      α = 1.8
n      −     Pτ     −     Pτ     −     Pτ     −     Pτ
γ = 0.5
2^4    35    9      41    8      47    7      54    7
2^5    54    9      67    9      83    8      101   7
2^6    82    10     109   9      144   8      189   7
2^7    124   11     177   10     251   9      351   7
2^8    189   11     288   10     437   9      >500  8
2^9    287   11     467   10     >500  9      >500  8
γ = 0.8
2^4    36    9      42    9      49    8      56    7
2^5    55    10     68    9      85    8      103   7
2^6    83    11     111   10     147   9      192   7
2^7    126   11     180   10     256   9      356   8
2^8    191   12     293   10     449   9      >500  8
2^9    290   12     477   11     >500  10     >500  8
γ = 1
2^4    36    10     42    9      49    8      56    7
2^5    55    11     69    10     86    9      104   7
2^6    84    11     112   10     148   9      193   8
2^7    127   12     182   10     258   9      359   8
2^8    193   12     295   11     452   10     >500  8
2^9    293   12     480   11     >500  10     >500  8
γ = 1.5
2^4    38    13     44    11     51    9      57    8
2^5    59    14     71    12     88    10     106   8
2^6    89    16     116   12     152   10     197   8
2^7    136   17     188   13     264   10     366   8
2^8    206   18     306   13     461   10     >500  8
2^9    312   20     498   14     >500  11     >500  9

2.5. Numerical Evidence: When the Hypotheses Are Violated


Here, we check the clustering at zero of the imaginary parts of the eigenvalues of
Ân and the relation { Ân } ∼λ t_α for a moderate matrix size n^2 = 2^12, for γ = 3, and for
α = 1.2, 1.4, 1.6, 1.8; see Figures 9–12. We stress that the condition γ < α + 1 in Theorem 3
is violated. Nevertheless, the clustering at 0 of the imaginary parts is still present, and the
agreement between an equispaced sampling of t_α and the real parts of the eigenvalues of
Ân remains striking.
However, the number and the magnitude of the outliers of the preconditioned matrices
start to become significant, as reported in Table 7 and Figure 13; hence, the number of
preconditioned GMRES iterations starts to grow moderately with the matrix size, as can be
seen in Table 8.
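The clustering check just described can be reproduced in a few lines. The sketch below is illustrative only: it assembles a toy two-dimensional discretization built from the 1D fractional centered differences (one stencil per direction), shifts it by the sampled wave number µ(x, y) = 1/(x + iy)^γ, and inspects the imaginary parts of the eigenvalues. The exact scheme, scaling, and sign conventions of [1,2] are assumptions here, and the matrix size is kept small so that a dense eigensolver applies.

```python
import numpy as np
from scipy.linalg import toeplitz, eigvals
from scipy.special import gamma as G

def fcd_coeffs(alpha, n):
    """Fractional centered-difference coefficients
    g_k = (-1)^k Gamma(alpha+1) / (Gamma(alpha/2-k+1) Gamma(alpha/2+k+1)),
    used here as a stand-in for the scheme analyzed in [1,2]."""
    k = np.arange(n)
    return ((-1.0) ** k * G(alpha + 1)
            / (G(alpha / 2 - k + 1) * G(alpha / 2 + k + 1)))

def assemble(n, alpha, gam):
    """Toy analogue of A_n on the interior grid of (0,1)^2:
    tensor-sum fractional Laplacian minus diag(mu), with
    mu(x,y) = 1/(x+iy)^gam (the sign convention is an assumption)."""
    h = 1.0 / (n + 1)
    T = toeplitz(fcd_coeffs(alpha, n)) / h ** alpha  # 1D symmetric Toeplitz
    I = np.eye(n)
    L = np.kron(T, I) + np.kron(I, T)                # 2D Laplacian part
    x = h * np.arange(1, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    mu = 1.0 / (X + 1j * Y) ** gam
    return L - np.diag(mu.ravel())

# gamma = 3 violates gamma < alpha + 1, as in Section 2.5
lam = eigvals(assemble(16, 1.8, 3.0))
print("eigenvalues:", lam.size,
      "| with |Im| > 0.1:", int(np.sum(np.abs(lam.imag) > 0.1)))
```

For γ = 3 a thin tail of eigenvalues with nonnegligible imaginary part appears near the singularity of µ, while the bulk stays close to the real axis, in qualitative agreement with Figures 9–12.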

Figure 9. Eigenvalues of the matrix Ân for γ = 3, α = 1.2 and n^2 = 2^12. The left panel reports
the eigenvalues in the complex plane. The right panel reports in blue the real parts of the
eigenvalues and in red the equispaced samplings of t_α in nondecreasing order, in the interval
[min t_α = 0, max t_α = 2^(3α/2)].
Figure 10. Eigenvalues of the matrix Ân for γ = 3, α = 1.4 and n^2 = 2^12. The left panel reports
the eigenvalues in the complex plane. The right panel reports in blue the real parts of the
eigenvalues and in red the equispaced samplings of t_α in nondecreasing order, in the interval
[min t_α = 0, max t_α = 2^(3α/2)].

Figure 11. Eigenvalues of the matrix Ân for γ = 3, α = 1.6 and n^2 = 2^12. The left panel reports
the eigenvalues in the complex plane. The right panel reports in blue the real parts of the
eigenvalues and in red the equispaced samplings of t_α in nondecreasing order, in the interval
[min t_α = 0, max t_α = 2^(3α/2)].
Figure 12. Eigenvalues of the matrix Ân for γ = 3, α = 1.8 and n^2 = 2^12. The left panel reports
the eigenvalues in the complex plane. The right panel reports in blue the real parts of the
eigenvalues and in red the equispaced samplings of t_α in nondecreasing order, in the interval
[min t_α = 0, max t_α = 2^(3α/2)].

Figure 13. Eigenvalues of the preconditioned matrix of size n^2 = 2^12 for γ = 3 and α = 1.8.

Table 7. Number of outliers N_o(ϵ) with respect to a neighborhood of 1 of radius ϵ = 0.1 or ϵ = 0.01
and related percentage for increasing dimension n^2.

µ(x, y) = 1/(x + iy)^γ, γ = 3, α = 1.8
n      N_o(0.1)   Percentage         N_o(0.01)   Percentage
2^4    15         5.859375 × 10^0    85          3.320312 × 10^1
2^5    26         2.539062 × 10^0    162         1.582031 × 10^1
2^6    50         1.220703 × 10^0    303         7.397461 × 10^0
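The counts in Table 7 follow the stated definition directly: N_o(ϵ) is the number of eigenvalues of the preconditioned matrix outside the disc of radius ϵ centered at 1, and the percentage is taken over all n^2 eigenvalues. A minimal helper (the function name is ours):

```python
import numpy as np

def outliers(lam, eps):
    """Count eigenvalues outside the disc |lambda - 1| <= eps
    and return the count together with its percentage."""
    no = int(np.sum(np.abs(lam - 1.0) > eps))
    return no, 100.0 * no / lam.size

# e.g. 15 outliers among 2^8 = 256 eigenvalues give 5.859375 per cent,
# matching the first row of Table 7.
```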

Table 8. Number of preconditioned GMRES iterations to solve the linear system for increasing
dimension n^2, up to tol = 10^-11.

µ(x, y) = 1/(x + iy)^γ, γ = 3, α = 1.8
n      -      Pτ
2^4    69     17
2^5    125    22
2^6    236    32
2^7    454    47
2^8    >500   70
2^9    >500   108
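The Pτ column reflects a sine-transform (τ) preconditioner, which can be applied in O(n^2 log n) operations via the two-dimensional discrete sine transform of type I. The sketch below shows the mechanism on the classical case α = 2, where the tridiagonal Toeplitz blocks belong to the τ algebra exactly, so preconditioned GMRES converges in essentially one iteration. The symbol samples used for the diagonal, and the omission of the µ term, are simplifying assumptions relative to the actual Pτ of the paper.

```python
import numpy as np
from scipy.fft import dstn
from scipy.sparse.linalg import LinearOperator, gmres

def tau_preconditioner(n, alpha):
    """P^{-1} = S D^{-1} S with S the orthonormal (involutory) 2D DST-I
    and D the tensor-sum samples of the 1D symbol on the sine grid."""
    h = 1.0 / (n + 1)
    th = np.pi * np.arange(1, n + 1) / (n + 1)
    f = (2.0 * np.sin(th / 2.0)) ** alpha / h ** alpha  # 1D symbol samples
    d = f[:, None] + f[None, :]                         # 2D tensor-sum symbol
    S = lambda X: dstn(X, type=1, norm="ortho")
    def apply(v):
        V = v.reshape(n, n)
        W = (S(V.real) + 1j * S(V.imag)) / d            # handles complex v
        return (S(W.real) + 1j * S(W.imag)).ravel()
    return LinearOperator((n * n, n * n), matvec=apply, dtype=np.complex128)

# classical Laplacian (alpha = 2): the tau preconditioner is exact
n = 16
T = ((n + 1) ** 2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
A = (np.kron(T, np.eye(n)) + np.kron(np.eye(n), T)).astype(complex)
b = np.ones(n * n, dtype=complex)
x, info = gmres(A, b, M=tau_preconditioner(n, 2.0))
print("gmres info:", info,
      "relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Since DST-I with orthonormal scaling is its own inverse, applying P^{-1} costs two transforms and one diagonal scaling per iteration, which is what makes the Pτ iteration counts in Tables 6 and 8 affordable.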

3. Conclusions
In this work, we considered a fractional Helmholtz equation approximated by ad
hoc centered differences with variable wave number µ(x, y), in the specific case where
the complex-valued function µ(x, y) has a pole of order γ. Eigenvalue distribution and
clustering results have been derived in [1,2]. The numerical results presented in this
work corroborate the analysis.
Many more intricate cases can be treated using the same type of theoretical apparatus,
including the GLT theory [3,18] and non-Hermitian perturbation results, such as those
in [4,6]. We list a few of them.
• The numerical results in Section 2.5 seem to indicate that the spectral distribution of
the original matrix sequence and the spectral clustering at 1 of the preconditioned
matrix sequence hold also when the Frobenius norm condition in [4] is violated; this
is an indication that Theorem 1 in [4] may not be sharp. A related conjecture is that the
key condition ∥Y_n∥_2^2 = o(n) in Theorem 2 could be replaced by ∥Y_n∥_p^p = o(n), with
any p ∈ [1, ∞), which would be very useful when the trace norm is considered, i.e.,
for p = 1.
• Definition 1 has been reported with a matrix size of the symbol equal to r ≥ 1. In our
study of matrices arising from finite differences, the parameter r is always equal to 1.
However, when considering isogeometric analysis approximations with polynomial
degree p and regularity k ≤ p − 1, we have r = (p − k)^d [19,20]. Notice that a particular
case of the previous formula is that of degree-p finite elements in space dimension d,
which leads to r = p^d [20,21], since k = 0. Also, the discontinuous Galerkin techniques
of degree p are covered: we have r = (p + 1)^d [19], because k = −1.
• The above considerations also apply to the case where the fractional
Laplacian is defined on a non-Cartesian d-dimensional domain Ω, or is equipped with
variable coefficients, or is approximated on graded grids. In fact, the related
GLT theory is already available [3,18,19] to encompass such generality, while the
non-Hermitian perturbation tools do not depend on a specific structure of the involved
matrix sequences.
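The block sizes r recalled in the second item all follow one rule, r = (p − k)^d, with k = 0 recovering standard finite elements and k = −1 recovering discontinuous Galerkin. A direct transcription (the helper name is ours):

```python
def symbol_block_size(p: int, k: int, d: int) -> int:
    """Matrix size r of the GLT symbol: r = (p - k)^d."""
    return (p - k) ** d

# finite elements (k = 0): r = p^d; DG (k = -1): r = (p + 1)^d;
# maximally regular IGA (k = p - 1): r = 1
print(symbol_block_size(3, 0, 2),   # degree-3 FEM, d = 2 -> 9
      symbol_block_size(2, -1, 3),  # DG of degree 2, d = 3 -> 27
      symbol_block_size(3, 2, 2))   # IGA, p = 3, k = p - 1 -> 1
```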

Author Contributions: S.S.-C. is responsible for funding acquisition; the contribution of all authors is
equal regarding all the listed items: Conceptualization, methodology, validation, investigation, data
curation, writing—original draft preparation, writing—review and editing, visualization, supervision,
project administration. All authors have read and agreed to the published version of the manuscript.
Funding: The work of Stefano Serra-Capizzano is supported by GNCS-INdAM and is funded by
the European High-Performance Computing Joint Undertaking (JU) under grant agreement No.
955701. The JU receives support from the European Union’s Horizon 2020 research and innovation
programme and Belgium, France, Germany, and Switzerland. Furthermore, Stefano Serra-Capizzano
is grateful for the support of the Laboratory of Theory, Economics and Systems—Department of
Computer Science at Athens University of Economics and Business.
Data Availability Statement: Data are contained within the article.
Algorithms 2024, 17, 100 19 of 19

Acknowledgments: We thank the anonymous Referees for their careful work and for the explicit
appreciation of our results.
Conflicts of Interest: The authors declare no conflicts of interest.

References
1. Adriani, A.; Sormani, R.L.; Tablino-Possio, C.; Krause, R.; Serra-Capizzano, S. Asymptotic spectral properties and preconditioning
of an approximated nonlocal Helmholtz equation with Caputo fractional Laplacian and variable coefficient wave number µ.
arXiv 2024, arXiv:2402.10569.
2. Li, T.-Y.; Chen, F.; Sun, H.W.; Sun, T. Preconditioning technique based on sine transformation for nonlocal Helmholtz equations
with fractional Laplacian. J. Sci. Comput. 2023, 97, 17. [CrossRef]
3. Garoni, C.; Serra-Capizzano, S. Generalized Locally Toeplitz Sequences: Theory and Applications; Springer: Cham, Switzerland, 2018;
Volume II.
4. Barbarino, G.; Serra-Capizzano, S. Non-Hermitian perturbations of Hermitian matrix-sequences and applications to the spectral
analysis of the numerical approximation of partial differential equations. Numer. Linear Algebra Appl. 2020, 27, e2286. [CrossRef]
5. Bhatia, R. Matrix Analysis, Graduate Texts in Mathematics; Springer: New York, NY, USA, 1997; Volume 169.
6. Golinskii, L.; Serra-Capizzano, S. The asymptotic properties of the spectrum of nonsymmetrically perturbed Jacobi matrix
sequences. J. Approx. Theory 2007, 144, 84–102. [CrossRef]
7. Agarwal, R.P. Difference Equations and Inequalities: Second Edition, Revised and Expanded; Marcel Dekker: New York, NY, USA, 2000.
[CrossRef]
8. Guo, S.-L.; Qi, F. Recursion Formulae for ∑_{m=1}^n m^k. Z. Anal. Anwend. 1999, 18, 1123–1130. [CrossRef]
9. Kuang, J.C. Applied Inequalities, 2nd ed.; Hunan Education Press: Changsha, China, 1993. (In Chinese) [CrossRef]
10. Bini, D.; Capovani, M. Spectral and computational properties of band symmetric Toeplitz matrices. Linear Algebra Appl. 1983,
52–53, 99–126.
11. Chan, R.; Ng, M. Conjugate gradient methods for Toeplitz systems. SIAM Rev. 1996, 38, 427–482. [CrossRef]
12. Serra-Capizzano, S. Superlinear PCG methods for symmetric Toeplitz systems. Math. Comp. 1999, 68, 793–803.
13. Di Benedetto, F.; Serra-Capizzano, S. Optimal multilevel matrix algebra operators. Linear Multilinear Algebra 2000, 48, 35–66.
[CrossRef]
14. Kailath, T.; Olshevsky, V. Displacement structure approach to discrete-trigonometric-transform based preconditioners of G. Strang
type and of T. Chan type. SIAM J. Matrix Anal. Appl. 2005, 26, 706–734. [CrossRef]
15. Loan, C.V. Computational Frameworks for the Fast Fourier Transform; Frontiers in Applied Mathematics; Society for Industrial and
Applied Mathematics (SIAM): Philadelphia, PA, USA, 1992. [CrossRef]
16. Fasino, D.; Tilli, P. Spectral clustering properties of block multilevel Hankel matrices. Linear Algebra Appl. 2000, 306, 155–163.
[CrossRef]
17. Serra-Capizzano, S. On the extreme spectral properties of Toeplitz matrices generated by L^1 functions with several minima/maxima. BIT 1996, 36, 135–142. [CrossRef]
18. Barbarino, G. A systematic approach to reduced GLT. BIT 2022, 62, 681–743.
19. Barbarino, G.; Garoni, C.; Serra-Capizzano, S. Block generalized locally Toeplitz sequences: Theory and applications in the
multidimensional case. Electr. Trans. Numer. Anal. 2020, 53, 113–216. [CrossRef]
20. Garoni, C.; Speleers, H.; Ekström, S.-E.; Reali, A.; Serra-Capizzano, S.; Hughes, T.J.R. Symbol-based analysis of finite element
and isogeometric B-spline discretizations of eigenvalue problems: Exposition and review. Arch. Comput. Methods Eng. 2019, 26,
1639–1690. [CrossRef]
21. Garoni, C.; Serra-Capizzano, S.; Sesana, D. Spectral analysis and spectral symbol of d-variate Q_p Lagrangian FEM stiffness matrices. SIAM J. Matrix Anal. Appl. 2015, 36, 1100–1128. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
