Solution Space of a Homogeneous Linear Differential Equation

Robert Joseph

A project submitted in fulfillment of the requirements


for the course MATH 336 (Honors Ordinary Differential Equations)

April 23, 2021


Acknowledgements

I thank God for the blessing of taking this course, Professor Xinwei Yu for teaching this wonderful course, and my parents and friends for supporting me through this pandemic and helping me in every way possible.

Robert Joseph
Edmonton, Canada, 23/04/2021
Abstract

This project proves the following theorem regarding the solution space of a homogeneous linear differential equation in the complex plane. All the required prerequisites, lemmas, and additional theorems are proved along the way.

Theorem 1. For any arbitrary open and connected region $R \subset \mathbb{C}$, the solution space of the homogeneous linear differential equation of order $n$

$$y^{(n)}(z) + a_{n-1}(z)y^{(n-1)}(z) + \cdots + a_0(z)y(z) = 0,$$

where every coefficient $a_j(z)$, $j = 0, 1, 2, \ldots, n-1$, is continuous, is $n$-dimensional ($\dim(V_R^n) = n$) if and only if every coefficient $a_j(z)$, $j = 0, 1, 2, \ldots, n-1$, is analytic.
Contents

Acknowledgements

Abstract

1 Preliminaries
  1.1 Analysis Background
    1.1.1 Formal Power Series
    1.1.2 Power series as functions
    1.1.3 Analytic Function
    1.1.4 Coefficients of a formal power series
    1.1.5 Zeros of an Analytic Function
    1.1.6 Zeros of Order n
    1.1.7 Neighbourhood of a Zero of order m
    1.1.8 Constant function
    1.1.9 Extended Theorem
    1.1.10 Zeros are Isolated
    1.1.11 Open Disk
    1.1.12 Closed Disk
    1.1.13 Open Deleted Disk
    1.1.14 Deleted Neighbourhood
    1.1.15 Isolated
    1.1.16 Pointwise Convergence
    1.1.17 Uniform Convergence
    1.1.18 Uniform Limit of Continuous Functions
    1.1.19 Uniform Cauchy Sequence
    1.1.20 Weierstrass M-test
    1.1.21 Differential Operator
    1.1.22 Homogeneous Linear n'th order ODE
    1.1.23 Gronwall's lemma
    1.1.24 Closed Bounded Set
  1.2 Linear Algebra Background
    1.2.1 Vector Spaces
    1.2.2 Subspaces
    1.2.3 Linear Independence/Dependence
    1.2.4 Span
    1.2.5 Basis
    1.2.6 Examples
    1.2.7 Exchange Lemma
    1.2.8 Theorem for Independence
    1.2.9 Dimension

2 Supplementary Lemmas
  2.1 Lemma 1
    2.1.1 Proof
  2.2 Solution Space
    2.2.1 Definition
    2.2.2 Example
  2.3 Lemma 2
    2.3.1 Proof

3 Proof of the Main Theorem
  3.1 Proof of the Main Theorem
    3.1.1 Statement
    3.1.2 Proof
    3.1.3 Existence and Uniqueness Theorem
    3.1.4 If Part
    3.1.5 Only If Part
Chapter 1

Preliminaries

1.1 Analysis Background


As with polynomials, analytic functions can have repeated roots, and these are detected using derivatives. In this first section we state definitions and prove a few lemmas and theorems that will help us prove the required theorem. Unless stated otherwise, all domains D are in C. Kuttler (2020)

1.1.1 Formal Power Series


A (formal) power series centered at $x_0 \in \mathbb{C}$ is a sequence $(a_n)_{n \in \mathbb{N}_0}$, written as $\sum_{n=0}^{\infty} a_n (x - x_0)^n$. It converges at $x_1 \in \mathbb{C}$ if $\sum_{n=0}^{\infty} a_n (x_1 - x_0)^n$ converges, and diverges otherwise.

1.1.2 Power series as functions


Let $f(x) = \sum_{n=0}^{\infty} a_n x^n$ be a (formal) power series (centred at $0$). If $I \subseteq \mathbb{C}$ is a set such that $f(x_0)$ converges for all $x_0 \in I$, we can define a function $F : I \to \mathbb{C}$ by $F(z) = f(z)$, where the right-hand side means the limit of the series $\sum_{n=0}^{\infty} a_n z^n$.

1.1.3 Analytic Function


Let $D$ be an open set, and $f$ a function defined on $D$. We say that $f$ is analytic at $x_0 \in D$ if there is a formal power series $g$ centred at $x_0$, convergent on an interval $(x_0 - \delta, x_0 + \delta) \subseteq D$ for some $\delta > 0$, such that $f(x) = g(x)$ on $(x_0 - \delta, x_0 + \delta)$. We say $f$ is analytic if this holds for every $x_0 \in D$.

Example

The exponential function is analytic at 0. In fact, every power series with positive radius
of convergence is analytic at its centre.

1.1.4 Coefficients of a formal power series


Let $f = \sum_{n=0}^{\infty} a_n (x - x_0)^n$ be a formal power series centered at $x_0$ with radius of convergence $R > 0$. Then $f$ is smooth on $(x_0 - R, x_0 + R)$, and each derivative $f^{(k)}$ is again a power series. In particular, $a_n = \frac{1}{n!} f^{(n)}(x_0)$.

1.1.5 Zeros of an Analytic Function


If $z_0$ is a regular (not singular) point of an analytic function $f$ and $f(z_0) = 0$, then $z_0$ is called a zero of $f$.
The point $z_0$ is called a zero of $f$ of order $m$ if in some neighbourhood of $z_0$, $f$ can be expanded in a Taylor series of the form
$$f(z) = \sum_{n=m}^{\infty} a_n (z - z_0)^n, \quad \text{where } a_m \neq 0.$$

1.1.6 Zeros of Order n


Let $f : D \to \mathbb{C}$ be analytic at $z_0$ in a domain $D$. We say that $f$ has a zero of order $n \geq 1$ at $z_0$ if
$$0 = f(z_0) = f'(z_0) = \cdots = f^{(n-1)}(z_0)$$
and $f^{(n)}(z_0) \neq 0$. If $f^{(n)}(z_0) = 0$ for all $n \geq 0$, then we call $z_0$ a zero of infinite order.

Example :

Let $f(z) = z^3 - 1$. Then $f(z)$ has a zero of order $1$ at $z_0 = 1$:
$$f(1) = 0, \qquad f'(z) = 3z^2, \qquad f'(1) = 3 \neq 0.$$
Therefore $f(z)$ has a zero of order $1$ at $z_0 = 1$.
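For readers who want a quick computational check, the following small sympy sketch (my own illustration, not part of the report) expands $f(z) = z^3 - 1$ about $z = 1$ and confirms that the lowest-order term is linear, i.e. the zero has order $1$.

```python
# Illustrative sympy check (not from the report): expand z**3 - 1 about z = 1
# by substituting z = w + 1; the lowest power of w that appears is w**1,
# so the zero at z = 1 has order 1.
import sympy as sp

z, w = sp.symbols('z w')
f = z**3 - 1
print(sp.expand(f.subs(z, w + 1)))               # -> w**3 + 3*w**2 + 3*w
print(f.subs(z, 1), sp.diff(f, z).subs(z, 1))    # -> 0 3
```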

1.1.7 Neighbourhood of a Zero of order m


A point $z = z_0$ is a zero of $f$ of order $m$ if and only if in some neighbourhood of $z_0$, $f$ can be expressed in the form $f(z) = (z - z_0)^m g(z)$, where $g(z)$ is analytic at $z_0$, $g(z_0) \neq 0$, and $f, g : D \to \mathbb{C}$.

Proof.
($\Rightarrow$) Assume that $z_0$ is a zero of $f$ of order $m$. Then there exists a neighbourhood of $z_0$ where we can expand $f$ as
$$f(z) = \sum_{n=m}^{\infty} a_n (z - z_0)^n, \quad \text{where } a_m \neq 0.$$
Then
$$f(z) = (z - z_0)^m \sum_{n=m}^{\infty} a_n (z - z_0)^{n-m} = (z - z_0)^m \sum_{p=0}^{\infty} b_p (z - z_0)^p = (z - z_0)^m g(z),$$
where $p = n - m$, $b_p = a_{p+m}$, and $g(z) = \sum_{p=0}^{\infty} b_p (z - z_0)^p$ is analytic at $z_0$ with $g(z_0) = b_0 = a_m \neq 0$.

($\Leftarrow$) Now assume that in some neighbourhood of $z_0$, $f$ can be expressed as
$$f(z) = (z - z_0)^m g(z),$$
where $g(z)$ is analytic at $z_0$ and $g(z_0) \neq 0$. Then we can expand $g(z)$ in a Taylor series about $z_0$ to obtain
$$g(z) = \sum_{n=0}^{\infty} a_n (z - z_0)^n, \quad \text{where } a_0 = g(z_0) \neq 0.$$
Therefore, in some neighbourhood of $z_0$, we have
$$f(z) = (z - z_0)^m \sum_{n=0}^{\infty} a_n (z - z_0)^n = \sum_{n=0}^{\infty} a_n (z - z_0)^{n+m} = \sum_{p=m}^{\infty} b_p (z - z_0)^p, \quad \text{where } p = m + n \text{ and } b_p = a_{p-m}.$$
Since $b_m = a_0 \neq 0$, $z_0$ is a zero of $f$ of order $m$. This proves the theorem.

1.1.8 Constant function


If $f$ has a zero of infinite order at $z_0$, then there is an $r > 0$ such that $f$ is identically zero in $B_r(z_0)$, where $B_r(z_0) := \{ z \in \mathbb{C} : |z - z_0| < r \}$.

Proof

By definition, there exists an $r > 0$ such that
$$f(z) = \sum_{n=0}^{\infty} a_n (z - z_0)^n \quad \text{for all } z \in B_r(z_0).$$
Since $z_0$ is a zero of infinite order (using the earlier results of subsections 1.1.3 and 1.1.4),
$$a_n = \frac{f^{(n)}(z_0)}{n!} = 0 \quad \text{for all } n \geq 0.$$
Hence $f(z) = 0$ for all $z \in B_r(z_0)$.

1.1.9 Extended Theorem


If $f$ is analytic in a domain $D \subset \mathbb{C}$ and $f$ has a zero of infinite order in $D$, then $f$ is identically zero. Earlier we proved that $f$ is identically zero on some ball $B_r(z_0)$ with $r > 0$; this can be extended to the whole domain.

1.1.10 Zeros are Isolated


Let $R \subset \mathbb{C}$ be some open set and let $f$ be an analytic function defined on $R$. Then either $f$ is identically zero, or the set $\{ z \in R : f(z) = 0 \}$ is totally disconnected, i.e. all the zeros are isolated.

Proof

Suppose $f$ has no zeros in $R$. Then the set described in the theorem is the empty set, and we are done. So suppose there exists $z_0 \in R$ such that $f(z_0) = 0$. Since $f$ is analytic, there is a Taylor series for $f$ at $z_0$ which converges for $|z - z_0| < r$, for some $r > 0$. Since $f(z_0) = 0$, we know $a_0 = 0$; other $a_j$ may be $0$ as well. So let $k$ be the least number such that $a_j = 0$ for $0 \leq j < k$ and $a_k \neq 0$ (if all $a_j$ vanish, $f$ is identically zero near $z_0$ and we are in the first case). Then we can write the Taylor series for $f$ about $z_0$ as
$$\sum_{n=k}^{\infty} a_n (z - z_0)^n = (z - z_0)^k \sum_{n=0}^{\infty} a_{n+k} (z - z_0)^n,$$
where $a_k \neq 0$ (otherwise we would just start at $k+1$). Now define a new function $g(z)$ as the sum on the right-hand side, which is clearly analytic in $|z - z_0| < r$. Since it is analytic there, it is also continuous there. Since $g(z_0) = a_k \neq 0$, there exists $\varepsilon > 0$ so that $|g(z) - a_k| < \frac{|a_k|}{2}$ for all $z$ with $|z - z_0| < \varepsilon$. But then $g(z)$ cannot possibly be $0$ in that disk, so $f(z) = (z - z_0)^k g(z) \neq 0$ for $0 < |z - z_0| < \varepsilon$, i.e. $z_0$ is an isolated zero. Hence the result.

1.1.11 Open Disk


The open disk of radius r around z0 is the set of points z with |z − z0 | < r, i.e. all points
within distance r of z0

1.1.12 Closed Disk


The closed disk of radius $r$ around $z_0$ is the set of points $z$ with $|z - z_0| \leq r$, i.e. all points within distance $r$ of $z_0$.

1.1.13 Open Deleted Disk


The open deleted disk of radius r around z0 is the set of points z with 0 < |z − z0 | < r.
That is, we remove the center z0 from the open disk. A deleted disk is also called a
punctured disk.

1.1.14 Deleted Neighbourhood


A deleted neighbourhood of a point p is a neighbourhood of p, without {p}.

Example

The interval (−1, 1) = {y : −1 < y < 1} is a neighbourhood of p = 0 in the real line,


so the set (−1, 0) ∪ (0, 1) = (−1, 1) \ {0} is a deleted neighbourhood of 0.

1.1.15 Isolated
A point $x \in S$ is an isolated point of a subset $S$ of a topological space $X$ if the singleton $\{x\}$ is an open set in the subspace topology on $S$. If the space $X$ is a Euclidean space, then $x$ is an isolated point of $S$ if there exists an open ball around $x$ which contains no other points of $S$.

Example

For the set S = {0} ∪ [1, 2], the point 0 is an isolated point.

1.1.16 Pointwise Convergence


Let $\emptyset \neq D \subseteq \mathbb{C}^N$, and let $f, f_1, f_2, \ldots$ be $\mathbb{C}$-valued functions on $D$. Then the sequence $(f_n)_{n=1}^{\infty}$ is said to converge pointwise to $f$ on $D$ if
$$\lim_{n \to \infty} f_n(x) = f(x)$$
holds for each $x \in D$. Runde (2021) The analogous notion (and the subsequent theorems) can also be defined in $\mathbb{R}^N$.

Examples

For $n \in \mathbb{N}$, let
$$f_n : [0,1] \to \mathbb{R}, \quad x \mapsto x^n,$$
so that
$$\lim_{n\to\infty} f_n(x) = \begin{cases} 0, & x \in [0,1) \\ 1, & x = 1. \end{cases}$$
Let
$$f : [0,1] \to \mathbb{R}, \quad x \mapsto \begin{cases} 0, & x \in [0,1) \\ 1, & x = 1. \end{cases}$$
It follows that $f_n \to f$ pointwise on $[0,1]$.
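A quick numerical look (an illustration of mine, not from the report) at a few fixed points shows the pointwise behaviour: for every fixed $x_0 < 1$ the values $x_0^n$ tend to $0$ (slowly when $x_0$ is close to $1$), while at $x_0 = 1$ they stay equal to $1$.

```python
# Illustrative check of pointwise convergence of f_n(x) = x**n on [0, 1]:
# for each fixed x0 < 1 the sequence x0**n tends to 0, while at x0 = 1 it is 1.
for x0 in (0.3, 0.9, 0.999, 1.0):
    print(x0, [round(x0 ** n, 6) for n in (1, 10, 100, 1000)])
```

The slow decay at $x_0 = 0.999$ hints at why this convergence is not uniform on $[0,1)$.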

1.1.17 Uniform Convergence


Let $\emptyset \neq D \subseteq \mathbb{C}^N$, and let $f, f_1, f_2, \ldots$ be $\mathbb{C}$-valued functions on $D$. Then the sequence $(f_n)_{n=1}^{\infty}$ is said to converge uniformly to $f$ on $D$ if, for each $\varepsilon > 0$, there is $n_\varepsilon \in \mathbb{N}$ such that $|f_n(x) - f(x)| < \varepsilon$ for all $n \geq n_\varepsilon$ and for all $x \in D$.

Remark
Let us introduce the uniform norm
$$\|g\|_D = \sup_{z \in D} |g(z)| \quad \text{for } g : D \to \mathbb{C}.$$
Then $f_n \to f$ uniformly on $D$ if and only if $\|f_n - f\|_D \to 0$ as $n \to \infty$. We will omit the use of this norm and stick to the usual formulation above.

Examples
For $n \in \mathbb{N}$, let
$$f_n : \mathbb{R} \to \mathbb{R}, \quad x \mapsto \frac{\sin(n\pi x)}{n}.$$
Since
$$\left| \frac{\sin(n\pi x)}{n} \right| \leq \frac{1}{n}$$
for all $x \in \mathbb{R}$ and $n \in \mathbb{N}$, it follows that $f_n \to 0$ uniformly on $\mathbb{R}$.

1.1.18 Uniform Limit of Continuous Functions


Let $\emptyset \neq D \subseteq \mathbb{C}^N$, and let $f, f_1, f_2, \ldots$ be functions on $D$ such that $f_n \to f$ uniformly on $D$ and such that $f_1, f_2, \ldots$ are continuous. Then $f$ is continuous.

Proof

Let $\varepsilon > 0$, and let $x_0 \in D$. Choose $n_\varepsilon \in \mathbb{N}$ such that
$$|f_n(x) - f(x)| < \frac{\varepsilon}{3}$$
for all $n \geq n_\varepsilon$ and for all $x \in D$. Since $f_{n_\varepsilon}$ is continuous, there is $\delta > 0$ such that $|f_{n_\varepsilon}(x) - f_{n_\varepsilon}(x_0)| < \frac{\varepsilon}{3}$ for all $x \in D$ with $\|x - x_0\| < \delta$. For any such $x$ we obtain
$$|f(x) - f(x_0)| \leq |f(x) - f_{n_\varepsilon}(x)| + |f_{n_\varepsilon}(x) - f_{n_\varepsilon}(x_0)| + |f_{n_\varepsilon}(x_0) - f(x_0)| < \varepsilon,$$
since each of the three terms is less than $\frac{\varepsilon}{3}$.

Hence $f$ is continuous at $x_0$. Since $x_0 \in D$ was arbitrary, $f$ is continuous on all of $D$.

1.1.19 Uniform Cauchy Sequence


Let $\emptyset \neq D \subseteq \mathbb{C}^N$. A sequence $(f_n)_{n=1}^{\infty}$ of $\mathbb{C}$-valued functions on $D$ is called a uniform Cauchy sequence on $D$ if, for each $\varepsilon > 0$, there is $n_\varepsilon \in \mathbb{N}$ such that $|f_n(x) - f_m(x)| < \varepsilon$ for all $x \in D$ and all $n, m \geq n_\varepsilon$.

1.1.20 Weierstrass M-test


Let $\emptyset \neq D \subseteq \mathbb{C}^N$, let $(f_n)_{n=1}^{\infty}$ be a sequence of $\mathbb{C}$-valued functions on $D$, and suppose that, for each $n \in \mathbb{N}$, there is $M_n \geq 0$ such that $|f_n(x)| \leq M_n$ for $x \in D$ and such that $\sum_{n=1}^{\infty} M_n < \infty$. Then $\sum_{n=1}^{\infty} f_n$ converges uniformly and absolutely on $D$.

Proof of Weierstrass M-test

Let $\varepsilon > 0$ and choose $n_\varepsilon \in \mathbb{N}$ such that
$$\sum_{k=m+1}^{n} M_k < \varepsilon$$
for all $n \geq m \geq n_\varepsilon$. For all such $n$ and $m$ and for all $x \in D$, we obtain
$$\left| \sum_{k=1}^{n} f_k(x) - \sum_{k=1}^{m} f_k(x) \right| \leq \sum_{k=m+1}^{n} |f_k(x)| \leq \sum_{k=m+1}^{n} M_k < \varepsilon.$$
Hence the sequence $\left(\sum_{k=1}^{n} f_k\right)_{n=1}^{\infty}$ is uniformly Cauchy on $D$ and thus uniformly convergent. It is easy to see that the convergence is even absolute.
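As a concrete numerical illustration (my own example, using $f_n(x) = \sin(nx)/n^2$ and $M_n = 1/n^2$, which are not taken from the report), the sup-norm gaps between far-apart partial sums stay below the tail of $\sum M_n$, exactly as the M-test predicts.

```python
# Illustrative check of the Weierstrass M-test with f_n(x) = sin(n x) / n**2
# and M_n = 1/n**2: sup_x |S_M(x) - S_N(x)| is bounded by sum_{n=N+1}^{M} 1/n**2.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)

def partial_sum(N):
    n = np.arange(1, N + 1)[:, None]              # shape (N, 1) for broadcasting over x
    return np.sum(np.sin(n * x) / n**2, axis=0)

for N, M in [(10, 20), (50, 100), (200, 400)]:
    gap = np.max(np.abs(partial_sum(M) - partial_sum(N)))
    bound = np.sum(1.0 / np.arange(N + 1, M + 1) ** 2)
    print(N, M, gap, bound)                       # gap <= bound in each case
```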

1.1.21 Differential Operator


Let $R \subset \mathbb{C}$ be a region (an open and connected set) and let $n, k$ be positive integers.

Consider the map


D : C1 (R) → C(R)
given by D( f ) = f 0 . More generally, for any k ∈ {1, . . . , n}, consider the map

Dk : Ck (R) → C(R)

given by Dk ( f ) = f (k) , where f (k) denotes the k -th derivative of f . Observe that
Dk = D◦D◦· · ·◦D(k times ). By convention, D0 = Id (the identity map). The operators
(or maps) Dk are called differentiation operators.

Definition

A differential operator from Cn (R) to C(R) is a map

L : Cn (R) → C(R)

which can be expressed as a function of the differentiation operator D.

Examples

Let $L = D^n$ or $L = e^D$.

Properties

• $L : C^n(R) \to C(R)$ is said to be linear if for any $y(x), y_1(x), y_2(x) \in C^n(R)$ and $c \in \mathbb{C}$,
$$L(y_1 + y_2) = L(y_1) + L(y_2) \quad \text{and} \quad L(cy) = cL(y).$$

Linear ODE
 
An ODE given by $F\left(x, y, y', \ldots, y^{(n)}\right) = 0$ on a region $R$ is said to be linear if it can be written as $L(y)(x) = g(x)$, where $L : C^n(R) \to C(R)$ is a linear differential operator.

1.1.22 Homogeneous Linear n’th order ODE


Suppose that $a_j(z) \in C(R)$ and $a_n(z) = 1$ for all $z \in R$. Let $z_0 \in R$. Then the initial value problem (IVP)
$$(Ly)(z) = 0, \qquad y^{(j)}(z_0) = y_j, \quad j = 0, \ldots, n-1,$$
where $y_j \in \mathbb{C}$ and $L(y)(z) := y^{(n)}(z) + a_{n-1}(z)y^{(n-1)}(z) + \cdots + a_1(z)y'(z) + a_0(z)y(z)$, has a unique solution $y(z)$ for all $z \in R$.

Superposition Principle
Let $y_i \in C^n(R)$, $i = 1, \ldots, n$, be any solutions of $L(y)(z) = 0$ on $R$. Then $y(z) = c_1 y_1(z) + c_2 y_2(z) + \cdots + c_n y_n(z)$, where $c_i$, $i = 1, \ldots, n$, are arbitrary constants, is also a solution on $R$.

Kernel
Consider the linear differential operator L where

L(y) := an y(n) + an−1 y(n−1) + · · · + a1 y0 + a0 y


where $a_i : R \to \mathbb{C}$ are given functions. Given $g(z) \in C(R)$, we want to find $y \in C^n(R)$ such that $L(y) = g(z)$. Since $L : C^n(R) \to C(R)$ is a linear transformation, the solution set of $L(y) = g(z)$ is
$$y_p + \operatorname{Ker}(L),$$
where $y_p$ is a particular solution (PS) satisfying $L(y_p) = g$ and $\operatorname{Ker}(L) = \{ y \in C^n(R) \mid L(y) = 0 \}$.

1.1.23 Gronwall's lemma


Let $u(z)$ and $h(z) \geq 0$ be continuous on $[a, b] \subset \mathbb{R}$ such that
$$u(z) \leq C + \int_a^z u(s) h(s)\, ds \tag{1}$$
for some constant $C$ and for all $a \leq z \leq b$. Then
$$u(z) \leq C e^{\int_a^z h(s)\, ds}$$
for all $a \leq z \leq b$. To see this, differentiate both sides of (1) and use the second fundamental theorem of calculus to obtain
$$u'(z) - u(z) h(z) \leq 0.$$
Multiplying both sides by the integrating factor $e^{-\int_a^z h(s)\, ds}$ gives
$$\frac{d}{dz}\left[ e^{-\int_a^z h(s)\, ds}\, u(z) \right] \leq 0.$$
Integrating both sides from $a$ to $z$, we find
$$e^{-\int_a^z h(s)\, ds}\, u(z) - u(a) \leq 0,$$
so $u(z) \leq u(a)\, e^{\int_a^z h(s)\, ds} \leq C e^{\int_a^z h(s)\, ds}$, since $u(a) \leq C$ by (1). Hence proved.

1.1.24 Closed Bounded Set


Let $D \subset \mathbb{C}$ be a closed, bounded set and let $f(z)$ be a continuous complex function on $D$. Then $f(z)$ is bounded on $D$.

Assume $f(z)$ is not bounded on $D$. Then for every $n \in \mathbb{N}$ there exists $z_n \in D$ such that $|f(z_n)| > n$. Construct the sequence $(z_n)_{n=1}^{\infty} \subset D$ from these $z_n$. Note that $(z_n)_{n=1}^{\infty}$ is bounded, as $D$ is bounded. Then by Bolzano-Weierstrass, $(z_n)_{n=1}^{\infty}$ has a limit point $L$, and so there exists a subsequence $(z_{n_k})_{k=1}^{\infty}$ which converges to $L$. Moreover, $L \in D$ since $D$ is a closed set. This implies $\lim_{k\to\infty} f(z_{n_k}) = f(L)$, and so $\lim_{k\to\infty} |f(z_{n_k})| = |f(L)|$, because $f$ is continuous on $D$ and $f(z)$ continuous implies $|f(z)|$ is continuous. But $|f(z_{n_k})| > n_k \to \infty$, which contradicts convergence to the finite value $|f(L)|$. Hence $f$ is bounded on $D$.

1.2 Linear Algebra Background


All of the definitions, theorems, and proofs in this section are from the standard 227/127 textbook and follow it closely. Kuttler (2019)

1.2.1 Vector Spaces


Let $F$ be a field. An $F$-vector space (or simply vector space, if $F$ is understood) is a triple $(V, +, \cdot)$, where $V$ is a nonempty set, $+$ is an associative operation on $V$ called the addition of $V$, and $\cdot$ is a map $F \times V \to V$ called the scalar multiplication (which associates to each $c \in F$ and each $v \in V$ an element $cv = c \cdot v \in V$), such that the following properties hold:
• The addition is commutative: v + w = w + v for all v, w ∈ V .
• There is an identity element for the addition: There is an element 0 called the
zero vector or simply zero of V such that 0 + v = v + 0 = v for all v ∈ V .
• Each element of V has an additive inverse: for each v ∈ V there is an element −v
of V such that v + (−v) = 0
• The scalar multiplication is associative: for each a, b ∈ F and each v ∈ V , we have
a(bv) = (ab)v
• 1 ∈ F is an identity element for the scalar multiplication: 1v = v for all v ∈ V .
• The scalar multiplication is distributive in the following two senses: for each
a, b ∈ F and each v ∈ V we have (a + b) · v = av + bv; and for each c ∈ F and
v, w ∈ V also c · (v + w) = cv + cw.

Examples
• A vector space over the field R of real numbers is often called Real, and a vector
space over the field C of complex numbers is often called Complex.

1.2.2 Subspaces
Let V be an F -vector space. A subset W ⊆ V is called a subspace of V if it satisfies the
following three properties:
• W is not empty.
• If v, w ∈ W then also v + w ∈ W .
• If $w \in W$ and $r \in F$ then also $rw \in W$.

Examples
A function f : R → R is called a polynomial function if there exists a (fixed) list of real
numbers a0 , a1 , . . . , an such that for each x ∈ R
f (x) = a0 + a1 x + · · · + an xn
Let P(R) be the set of all polynomial functions on R. Then P(R) ⊆ F(R) is a subspace.

1.2.3 Linear Independence/Dependence


An ordered list (v1 , v2 , . . . , v p ) of vectors v1 , v2 , . . . , v p ∈ V is called linearly dependent
if there are scalars c1 , c2 , . . . , c p ∈ F not all zero such that
c1 v1 + c2 v2 + · · · + c p v p = 0
Such a formula is called a Linear Dependence relation. We also call the vectors
v1 , v2 , . . . , v p linearly dependent if the list (v1 , v2 , . . . , v p ) is.
The list $(v_1, v_2, \ldots, v_p)$ is called linearly independent if it is not linearly dependent. In other words, it is linearly independent if $c_1 v_1 + c_2 v_2 + \cdots + c_p v_p = 0$ implies
$$c_1 = c_2 = \cdots = c_p = 0.$$
Thus, $(v_1, v_2, \ldots, v_p)$ is linearly independent if and only if there is one and only one way to write $0$ as a linear combination of the $v_i$: $0 = 0v_1 + 0v_2 + \cdots + 0v_p$. If this is the case we also say the vectors $v_1, v_2, \ldots, v_p$ are linearly independent.

Examples
• In $F^n$, the vectors $e_1, e_2, \ldots, e_n$ are linearly independent. Indeed, suppose $c_1 e_1 + c_2 e_2 + \cdots + c_n e_n = 0$. Then observe that
$$c_1 e_1 + \cdots + c_n e_n = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = 0$$
if and only if all $c_i = 0$.

• In $\mathbb{R}^3$, the three vectors
$$\begin{pmatrix} 1 \\ 0 \\ 4 \end{pmatrix}, \quad \begin{pmatrix} 1 \\ 1 \\ 5 \end{pmatrix}, \quad \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix}$$
are linearly independent (a quick numerical check is sketched below).
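A minimal numpy sketch (my own check, not part of the report) confirms the claim: the matrix whose columns are these three vectors has nonzero determinant and full rank.

```python
# Illustrative numpy check that (1,0,4), (1,1,5), (2,2,2) are linearly independent:
# the matrix with these vectors as columns has nonzero determinant (rank 3).
import numpy as np

V = np.array([[1.0, 1.0, 2.0],
              [0.0, 1.0, 2.0],
              [4.0, 5.0, 2.0]])
print(np.linalg.det(V))            # -> -8.0 (up to rounding), nonzero
print(np.linalg.matrix_rank(V))    # -> 3
```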

1.2.4 Span
Let $v_1, v_2, \ldots, v_n \in V$ ($n > 0$). Then $\operatorname{Span}(v_1, \ldots, v_n)$ is a subspace of $V$. In fact it is the minimal subspace containing $v_1, v_2, \ldots, v_n$ in the following sense: if $W$ is any subspace of $V$ containing $v_1, v_2, \ldots, v_n$ as elements, then $\operatorname{Span}(v_1, v_2, \ldots, v_n) \subseteq W$. Thus,
$$\operatorname{Span}(v_1, v_2, \ldots, v_n) = \bigcap_{\substack{W \subseteq V \\ v_1, \ldots, v_n \in W}} W,$$
where the intersection ranges over all subspaces of $V$ that contain $v_1, v_2, \ldots, v_n$.

Examples

If A1 , A2 , . . . , An ∈ Fm are the columns of the matrix A ∈ Mm×n (F), then we call

Col(A) = Span (A1 , A2 , . . . , An )

the column space of $A$. It is a subspace of $M_{m\times 1}(F)$, which as usual we identify with $F^m$. It is the set of all $B \in F^m$ for which the matrix equation $AX = B$ has a solution: indeed, $AX = B$ has a solution if and only if $B$ can be expressed as a linear combination of the columns $A_1, A_2, \ldots, A_n$ of $A$.

1.2.5 Basis
Let V be a vector space. A basis is a linearly independent ordered list of generators.
Thus, B ⊆ V is a basis if and only if B is linearly independent and Span(B) = V . We
write
B = (v1 , v2 , . . . , vn )
if v1 , v2 , . . . , vn are the elements of B (in order). By convention, the empty set is a basis
for V = {0}.

1.2.6 Examples
Suppose $V = F^n$. Then $E = (e_1, e_2, \ldots, e_n)$ is a basis ($E$ is linearly independent and $\operatorname{Span}(E) = F^n$). For $v \in F^n$ we have $Ev = v$, so $[v]_E = v$. This makes this particular basis a little special; it is therefore often referred to as the standard basis of $F^n$.

1.2.7 Exchange Lemma


Let $V$ be a vector space spanned by elements $v_1, v_2, \ldots, v_n$, say. Let $v = c_1 v_1 + \cdots + c_n v_n \in V$ be a vector. If $c_i \neq 0$, then
$$V = \operatorname{Span}(v_1, v_2, \ldots, v_{i-1}, v, v_{i+1}, \ldots, v_n).$$

1.2.8 Theorem for Independence


Let V be a vector space generated by finitely many elements (v1 , v2 , . . . , vn ) , say. If
(w1 , w2 , . . . , wk ) is a linearly independent list of elements of V , then k ≤ n.

Proof
Let L = (v1 , v2 , . . . , vn ) and M = (w1 , w2 , . . . , wk ). If n = 0 (that is, if L is empty),
then V = {0}, so any number of elements of V are linearly dependent. Hence k = 0
as well. We may therefore assume that n > 0. Suppose precisely m ≥ 0 of the ele-
ments of M are also elements of L. By reordering if necessary, we may assume that
w1 = v1 , w2 = v2 , . . . , wm = vm . We will now show how to increase m by 1 if k − m > 0.
In this case, wm+1 ∈ / L. We may write wm+1 = c1 v1 + · · · + cm vm for suitable ci ∈ F.

Claim: At least one ci with i > m must be nonzero. Indeed, otherwise cm+1 =
cm+2 = · · · = cn = 0 and
wm+1 = c1 v1 + · · · + cm vm = c1 w1 + · · · + cm wm
contradicting the fact that M is linearly independent. This proves the claim. So pick
one such i (ie. i > m and ci 6= 0 ). By the Exchange Lemma, we can replace vi by wi
in L, obtaining a new list of generators L0 which has m + 1 elements in common with
M and still satisfies V = Span (L0 ) This process can be repeated as long as k − m > 0.
Thus eventually, all elements of M must be elements of the newly created list L0 . In
particular, n ≥ k.

1.2.9 Dimension
Let V be a vector space with basis B = (v1 , v2 , . . . , vn ). The uniquely determined integer
n is called the dimension of V and denoted dimV .
The empty set by convention is a basis for V = {0} (it is after all a linearly indepen-
dent set that spans V ). So dim{0} = 0. If V does not have a (finite) basis, then we say
dimV = ∞.

Example

As expected, $\dim \mathbb{R} = 1$ (the list with one element $(1_{\mathbb{R}})$ is a basis), $\dim \mathbb{R}^2 = 2$ and $\dim \mathbb{R}^3 = 3$. More generally,
$$\dim F^n = n.$$
• The standard basis $E = (e_1, e_2, \ldots, e_n)$ of $F^n$ has exactly $n$ elements.

• dim Mm×n (F) = mn. Here we may choose as a basis a list whose elements are
precisely the mn matrix units ei j (in any ordering).
Chapter 2

Supplementary Lemmas’

2.1 Lemma 1
If $f(z)$ is a solution of
$$y^{(n)}(z) = a_{n-1}(z)y^{(n-1)}(z) + \cdots + a_0(z)y(z)$$
that has a zero of order at least $n$ at some point $z_0 \in R \subset \mathbb{C}$, then $f \equiv 0$ for all $z \in R$.

2.1.1 Proof
Before going over the proof, I would first like to present an example before proving the general case. Suppose we have the following equation when $n = 1$:
$$y' = a_0(z)\, y.$$
This is easily solvable, and the general solution is given by
$$y = C e^{\int a_0(z)\, dz}, \qquad C \in \mathbb{C}.$$
Note that the exponential function has no zeros (equivalently, no poles), so there does not exist $z_0$ such that $e^{z_0} = 0$. Therefore there is no $z_0$ with $e^{\int a_0(z)\, dz}\big|_{z=z_0} = 0$, unless $\int a_0(z)\, dz = \ln g(z)$ for some function $g$, in which case $e^{\ln g(z)} = g(z)$ and there may exist $z_0$ with $g(z_0) = 0$.

Considering the general solution, let us divide the problem into two sub-parts.

• Consider a coefficient of the form $a_0(z) = \frac{g'(z)}{g(z)}$, so that $e^{\int a_0(z)\, dz} = g(z)$. Then the general solution is $y = C g(z)$ with $y' = C g'(z)$, and let us assume that $C \neq 0$. By assumption there exist $z_0, z_1$ such that $g(z_0) = 0 = g'(z_1)$, but clearly in that case the function $a_0(z)$ is not analytic in the region $R$. If we exclude the points at which $g$ is $0$, we obtain a region on which the solution has no zeros at all, and hence no zero of order $2$. Therefore the only way for $y = C g(z)$ to be $0$ is $C = 0$, hence $y = 0$, which is a zero of infinite order.

• Considering any other coefficient function yields the same answer as before, since there is no $z_0$ with $e^{\int a_0(z)\, dz}\big|_{z=z_0} = 0$; hence the only way for $y = C e^{\int a_0(z)\, dz}$ to be $0$ is $y \equiv 0$.

Hence the only solution in this example having a zero of order $2$ (or a zero of infinite order, which by 1.1.8 and 1.1.9 extends to the whole of $R$) is $f \equiv 0$.
Now let us prove the general case. From the hypothesis of our lemma ($f$ has a zero of order at least $n$ at $z_0 \in R$), after substituting the solution $f(z)$ into our original equation, i.e.
$$y^{(n)}(z) = a_{n-1}(z)y^{(n-1)}(z) + \cdots + a_0(z)y(z),$$
and evaluating at $z_0$, we see that $f^{(n)}(z_0) = 0$ as well, so $f$ has a zero of order at least $n+1$ at $z_0$. Now we prove the claim by contradiction. Suppose that $f$ is not identically $0$, i.e. $f \not\equiv 0$. Then there exists $p \geq 1$ such that $f$ has a zero of order exactly $n + p$ at $z_0$, and by 1.1.7 we have
$$f(z) = (z - z_0)^{n+p}\, g(z),$$
where $g(z_0) \neq 0$ and $g(z)$ is analytic in a neighbourhood of $z_0$. To simplify notation, let $k = n + p$, so that
$$f(z) = (z - z_0)^k\, g(z).$$

Now taking derivatives of $f$ we find
$$f' = k (z - z_0)^{k-1} g(z) + (z - z_0)^k g'(z),$$
$$f'' = (z - z_0)^k g''(z) + 2k (z - z_0)^{k-1} g'(z) + k(k-1)(z - z_0)^{k-2} g(z).$$

Similarly we can find all derivatives up to $f^{(n)}$, i.e. the $n$'th derivative, and substitute them all into the equation
$$y^{(n)}(z) = a_{n-1}(z)y^{(n-1)}(z) + \cdots + a_0(z)y(z).$$

Example when $n = 1$: substituting into the original equation $y' = a_0(z)\, y$ gives
$$k (z - z_0)^{k-1} g(z) + (z - z_0)^k g'(z) = a_0(z)\, (z - z_0)^k g(z),$$
so
$$k (z - z_0)^{k-1} g(z) = (z - z_0)^k \big( a_0(z) g(z) - g'(z) \big).$$
Setting $p(z) = a_0(z) g(z) - g'(z)$, this reads
$$k\, g(z)\, (z - z_0)^{k-1} = (z - z_0)^k\, p(z).$$


Similarly, grouping the $g(z)$ terms together and collecting the remaining terms into a function $p(z)$, in the general case we obtain
$$k(k-1)\cdots(k-n+1)\, g(z)\, (z - z_0)^{k-n} = (z - z_0)^{k-n+1}\, p(z),$$
$$k(k-1)\cdots(k-n+1)\, g(z) = (z - z_0)\, p(z).$$

Now $p(z)$ is clearly continuous, as it is just built from the coefficients and the derivatives of $g(z)$, and hence analytic too. The above equation only holds in some deleted neighbourhood of $z_0$, since the division by powers of $(z - z_0)$ is only valid for $z \neq z_0$.

Finally,
$$\lim_{z \to z_0} k(k-1)\cdots(k-n+1)\, g(z) = \lim_{z \to z_0} (z - z_0)\, p(z) = 0,$$
and since
$$k(k-1)\cdots(k-n+1) \neq 0$$
(recall $k = n + p \geq n + 1$), the only possibility is $g(z_0) = 0$, which contradicts our assumption and 1.1.7. Hence
$$f \equiv 0.$$

2.2 Solution Space


Before proving the next lemma, let us first understand what a solution space means. Krom (1979)

2.2.1 Definition
The solution space of a linear homogeneous differential equation is a vector space over the underlying field $F$. It is denoted by $V_F^n$, and its dimension is denoted by $\dim(V_F^n)$. Let $R \subset \mathbb{C}$. Then $V_R^n$ is a linear space of analytic functions over the field of complex numbers $\mathbb{C}$.

2.2.2 Example
Let $F$ be the vector space with basis $\{t, e^t\}$. We expand the determinant
$$\begin{vmatrix} y & t & e^t \\ y' & 1 & e^t \\ y'' & 0 & e^t \end{vmatrix} = 0$$
by the elements of the first column to get $(t-1)y'' - t y' + y = 0$.
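The expansion can be checked mechanically; the following sympy sketch (an illustration of mine, with `yp` and `ypp` standing in for $y'$ and $y''$) divides out the common factor $e^t$ and recovers $(t-1)y'' - t y' + y$.

```python
# Illustrative sympy check of the determinant expansion: treating y, y', y'' as
# independent symbols y, yp, ypp, the determinant equals e**t * ((t-1)*ypp - t*yp + y).
import sympy as sp

t, y, yp, ypp = sp.symbols('t y yp ypp')
M = sp.Matrix([[y,   t, sp.exp(t)],
               [yp,  1, sp.exp(t)],
               [ypp, 0, sp.exp(t)]])
print(sp.expand(sp.simplify(M.det() / sp.exp(t))))   # -> (t-1)*ypp - t*yp + y (term order may vary)
```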


An important example is the constant coefficient differential equation
$$a_n \frac{d^n y}{dz^n} + \cdots + a_1 \frac{dy}{dz} + a_0 y = 0, \quad \text{with } a_n \neq 0.$$
A basis for the solution space is given by
$$\left\{ z^k e^{\lambda_i z} \;:\; k = 0, 1, \ldots, m_i - 1;\ i = 1, \ldots, s \right\},$$
where $\lambda_1, \ldots, \lambda_s$ are the distinct roots of the characteristic equation
$$f(\lambda) = a_n \lambda^n + \cdots + a_1 \lambda + a_0 = 0$$
and $\lambda_i$ has multiplicity $m_i$.
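For a concrete instance (my own example, not from the report), consider $y''' - 3y'' + 3y' - y = 0$: computing the characteristic roots numerically gives $\lambda = 1$ with multiplicity $3$, so the basis above becomes $\{e^z, z e^z, z^2 e^z\}$.

```python
# Illustrative check: the characteristic polynomial of y''' - 3y'' + 3y' - y = 0
# is lambda**3 - 3*lambda**2 + 3*lambda - 1 = (lambda - 1)**3, so lambda = 1 with
# multiplicity 3, and a basis of the solution space is {e^z, z e^z, z^2 e^z}.
import numpy as np

coeffs = [1, -3, 3, -1]        # a_n, ..., a_0 of the characteristic polynomial
print(np.roots(coeffs))        # -> approximately [1. 1. 1.]
```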

2.3 Lemma 2
$\dim(V_R^n) \leq n$.

2.3.1 Proof
Now this should be obvious from 1.2.8 (the Theorem for Independence), given that $(y_1, y_2, \ldots, y_n)$ generate the solution space: any linearly independent list of $k$ such vectors must then satisfy $k \leq n$. The next proof, however, follows the way the paper describes it and goes as follows.

Let us assume that $\dim(V_R^n) > n$ and obtain a contradiction. Let $(y_1, y_2, \ldots, y_{n+1})$ be a linearly independent list in our solution space $V_R^n$.

Fix $z \in R$ and consider the following system of $n$ linear equations in the $n+1$ unknowns $x_1, \ldots, x_{n+1}$:
$$\sum_{k=1}^{n+1} x_k\, y_k^{(i)}(z) = 0, \qquad i \in \{0, 1, 2, \ldots, n-1\}.$$
Written out, this system is
$$x_1 y_1(z) + x_2 y_2(z) + \cdots + x_{n+1} y_{n+1}(z) = 0,$$
$$x_1 y_1'(z) + x_2 y_2'(z) + \cdots + x_{n+1} y_{n+1}'(z) = 0,$$
$$\vdots$$
$$x_1 y_1^{(n-1)}(z) + x_2 y_2^{(n-1)}(z) + \cdots + x_{n+1} y_{n+1}^{(n-1)}(z) = 0.$$
Since there are more unknowns than equations, this system has a non-trivial solution, say $(s_1, s_2, \ldots, s_{n+1})$. The function
$$\sum_{i=1}^{n+1} s_i\, y_i$$
satisfies
$$y^{(n)}(z) = a_{n-1}(z)y^{(n-1)}(z) + \cdots + a_0(z)y(z)$$
by the superposition principle, and by construction it has a zero of order (at least) $n$ at $z$. By Lemma 1 this implies that the solution is identically $0$, i.e.
$$\sum_{i=1}^{n+1} s_i\, y_i \equiv 0 \quad \text{on } R.$$
But clearly this is a contradiction, since we have a non-trivial solution $(s_1, \ldots, s_{n+1})$ and $(y_1, y_2, \ldots, y_{n+1})$ are linearly independent on $R$.

Clearly this argument is directly related to the previous proof of the Theorem for Independence.
20 Supplementary Lemmas’
Chapter 3

Proof of the Main Theorem

3.1 Proof of the Main Theorem


Finally we have reached the gist of the paper and are ready to prove the main theorem, now that all the prerequisites have been met. Any additional lemmas/theorems that are required are proved along the way. Bose (1982)

3.1.1 Statement
For any arbitrary open and connected region $R \subset \mathbb{C}$, the solution space of the homogeneous linear differential equation of order $n$,
$$y^{(n)}(z) + a_{n-1}(z)y^{(n-1)}(z) + \cdots + a_0(z)y(z) = 0,$$
where every coefficient $a_j(z)$, $j = 0, 1, 2, \ldots, n-1$, is continuous, is $n$-dimensional ($\dim(V_R^n) = n$) if and only if every coefficient $a_j(z)$, $j = 0, 1, 2, \ldots, n-1$, is analytic.

3.1.2 Proof
($\Leftarrow$) We first need to prove that if all the coefficients $a_j(z)$, $j = 0, 1, 2, \ldots, n-1$, are analytic in $R \subset \mathbb{C}$ (and in particular continuous), then $\dim(V_R^n) = n$.

This amounts to proving Theorem 1.1.22: suppose that $a_j(z) \in C(R)$ and $a_n(z) = 1$ for all $z \in R$, and let $z_0 \in R$. Then the initial value problem (Eqn 1)
$$(Ly)(z) = 0, \qquad y^{(j)}(z_0) = y_j, \quad j = 0, \ldots, n-1, \qquad z_0 \in R,$$
where $y_j \in \mathbb{C}$ and $L(y)(z) := y^{(n)}(z) + a_{n-1}(z)y^{(n-1)}(z) + \cdots + a_1(z)y'(z) + a_0(z)y(z)$, has a unique solution $y(z)$ in any closed bounded set $E \subset R$ that contains $z_0$.

3.1.3 Existence and Uniqueness Theorem


Existence
The existence of a local solution is obtained here by transforming the problem into a first-order system. This is done by introducing the variables (similar to what we did in the course notes)
$$x_1 = y, \quad x_2 = y', \quad \ldots, \quad x_n = y^{(n-1)}.$$
In this case we have
$$x_1' = x_2, \quad x_2' = x_3, \quad \ldots, \quad x_{n-1}' = x_n,$$
$$x_n' = -a_{n-1}(z) x_n - \cdots - a_1(z) x_2 - a_0(z) x_1.$$
Thus, we can write the initial-value problem as a first-order system
$$\mathbf{x}'(z) = A(z)\mathbf{x}(z) + \mathbf{b}(z), \qquad \mathbf{x}(z_0) = \mathbf{x}_0,$$
where
$$A(z) = \begin{pmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & -a_3 & \cdots & -a_{n-1} \end{pmatrix}, \quad
\mathbf{x}(z) = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}, \quad
\mathbf{b}(z) = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad
\mathbf{x}_0 = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_{n-1} \end{pmatrix}.$$
Since $\mathbf{b}(z)$ is the zero vector we can omit it, and the compact form (Eqn 2) becomes
$$\mathbf{x}'(z) = A(z)\mathbf{x}(z), \qquad \mathbf{x}(z_0) = \mathbf{x}_0.$$
Note that if $y(z)$ is a solution of Eqn 1, then the vector-valued function
$$\mathbf{x}(z) = \begin{pmatrix} y \\ y' \\ \vdots \\ y^{(n-1)} \end{pmatrix}$$
is a solution to Eqn 2. Conversely, if the vector
$$\mathbf{x}(z) = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}$$
is a solution of Eqn 2, then $x_1' = x_2$, $x_1'' = x_3$, $\ldots$, $x_1^{(n-1)} = x_n$. Hence
$$x_1^{(n)} = x_n' = -a_{n-1}(z) x_n - a_{n-2}(z) x_{n-1} - \cdots - a_0(z) x_1$$
and
$$x_1^{(n)} + a_{n-1}(z) x_1^{(n-1)} + a_{n-2}(z) x_1^{(n-2)} + \cdots + a_0(z) x_1 = 0,$$
i.e.
$$y^{(n)} + a_{n-1}(z) y^{(n-1)} + a_{n-2}(z) y^{(n-2)} + \cdots + a_0(z) y = 0,$$
which means that $y = x_1(z)$ is a solution to Eqn 1.

Moreover, $x_1(z_0) = y_0$, $x_1'(z_0) = x_2(z_0) = y_1$, $\ldots$, $x_1^{(n-1)}(z_0) = x_n(z_0) = y_{n-1}$. That is, $x_1(z)$ satisfies the initial conditions of Eqn 1.
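A short numpy sketch (an illustration under sample coefficients of my own choosing, not taken from the report) shows the companion-matrix construction in a constant coefficient case: its characteristic polynomial matches that of the original scalar equation, one way to see that the system encodes the same ODE.

```python
# Illustrative construction of the companion matrix A for the constant coefficient
# equation y''' + 2y'' - y' + 3y = 0 (sample coefficients), i.e. a_0 = 3, a_1 = -1,
# a_2 = 2. Its characteristic polynomial should be lambda**3 + 2*lambda**2 - lambda + 3.
import numpy as np

a = [3.0, -1.0, 2.0]                  # a_0, a_1, a_2
n = len(a)
A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)            # encodes x_1' = x_2, x_2' = x_3, ...
A[-1, :] = [-c for c in a]            # encodes x_n' = -a_0 x_1 - ... - a_{n-1} x_n
print(np.poly(A))                     # -> approximately [1.  2. -1.  3.]
```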

Next, we reformulate Eqn 2 as an equivalent integral equation. Integration of both sides of Eqn 2 yields (Eqn 3)
$$\int_{z_0}^{z} \mathbf{x}'(s)\, ds = \int_{z_0}^{z} A(s)\mathbf{x}(s)\, ds.$$
Applying the Fundamental Theorem of Calculus to the left side of Eqn 3 yields (Eqn 4)
$$\mathbf{x}(z) = \mathbf{x}(z_0) + \int_{z_0}^{z} A(s)\mathbf{x}(s)\, ds, \qquad \mathbf{x}(z_0) = \mathbf{x}_0.$$
Thus, a solution of Eqn 4 is also a solution of Eqn 2 and vice versa. To prove the existence of a solution, we shall use the method of successive approximation. Letting
$$\mathbf{x}_0 = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_{n-1} \end{pmatrix},$$
we introduce Picard's iterations, defined recursively as follows:
$$\mathbf{x}_0(z) = \mathbf{x}_0,$$
$$\mathbf{x}_1(z) = \mathbf{x}_0 + \int_{z_0}^{z} A(s)\mathbf{x}_0(s)\, ds,$$
$$\mathbf{x}_2(z) = \mathbf{x}_0 + \int_{z_0}^{z} A(s)\mathbf{x}_1(s)\, ds,$$
$$\vdots$$
$$\mathbf{x}_N(z) = \mathbf{x}_0 + \int_{z_0}^{z} A(s)\mathbf{x}_{N-1}(s)\, ds.$$
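To make the successive-approximation scheme concrete, here is a small numerical sketch (my own simplification to the scalar constant coefficient case $x' = a x$, $x(z_0) = 1$, which is not from the report): the Picard iterates converge to the exponential solution, as the uniform-convergence argument below predicts.

```python
# Illustrative Picard iteration for the scalar IVP x' = a*x, x(0) = 1 (a sample
# constant coefficient). Each iterate is x_N(z) = 1 + \int_0^z a x_{N-1}(s) ds,
# computed with a cumulative trapezoid rule; the iterates approach exp(a z).
import numpy as np

a = 0.5 + 0.3j                         # sample constant coefficient (assumption)
grid = np.linspace(0.0, 1.0, 2001)     # integrate along the real segment [0, 1]

x = np.ones_like(grid, dtype=complex)  # x_0(z) = 1
for _ in range(15):
    integrand = a * x
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid)
    x = 1.0 + np.concatenate(([0.0], np.cumsum(steps)))

print(x[-1], np.exp(a * grid[-1]))     # the two values should nearly agree
```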

Let
$$\mathbf{x}_N(z) = \begin{pmatrix} x_{1,N} \\ x_{2,N} \\ \vdots \\ x_{n,N} \end{pmatrix}.$$
For $i = 1, 2, \ldots, n$, we are going to show that the sequence $\{x_{i,N}(z)\}_{N=1}^{\infty}$ converges uniformly to a function $x_i(z)$ such that $\mathbf{x}(z)$ (with components $x_1, x_2, \ldots, x_n$) is a solution to Eqn 4 and hence a solution to Eqn 2.

Let $E$ be a closed bounded set containing $z_0$ and contained in $R \subset \mathbb{C}$. For $i = 0, 1, \ldots, n-1$, the function $a_i(z)$ is continuous in $z \in R$, and in particular it is continuous on $E \subseteq R$. We know from analysis that a continuous function on a closed bounded set is bounded (Theorem 1.1.24). Hence there exist positive constants $k_0, k_1, \ldots, k_{n-1}$ such that
$$\max_{z \in E} |a_0(z)| \leq k_0, \quad \max_{z \in E} |a_1(z)| \leq k_1, \quad \ldots, \quad \max_{z \in E} |a_{n-1}(z)| \leq k_{n-1}.$$
This implies that
$$\| A(z)\mathbf{x}(z) \| = |x_2| + |x_3| + \cdots + |x_n| + |a_0 x_1 + a_1 x_2 + \cdots + a_{n-1} x_n|$$
$$\leq |x_2| + |x_3| + \cdots + |x_n| + |a_0||x_1| + |a_1||x_2| + \cdots + |a_{n-1}||x_n|$$
$$\leq k_0 |x_1| + (1 + k_1)|x_2| + \cdots + (1 + k_{n-1})|x_n|$$
$$\leq K \cdot \| \mathbf{x} \|$$
for all $z \in E$, where we define
$$\| \mathbf{x} \| = |x_1| + |x_2| + \cdots + |x_n|$$
and
$$K = k_0 + (1 + k_1) + \cdots + (1 + k_{n-1}).$$
For $i = 1, 2, \ldots, n$ and $N \geq 2$, we have
$$|x_{i,N} - x_{i,N-1}| \leq \| \mathbf{x}_N - \mathbf{x}_{N-1} \| \leq \int_{z_0}^{z} \| A(s)\,(\mathbf{x}_{N-1} - \mathbf{x}_{N-2}) \|\, ds \leq K \int_{z_0}^{z} \| \mathbf{x}_{N-1} - \mathbf{x}_{N-2} \|\, ds.$$
Also
$$\| \mathbf{x}_1 - \mathbf{x}_0 \| \leq \int_{z_0}^{z} \| A(s)\,\mathbf{x}_0 \|\, ds \leq M\, |z - z_0|,$$
where
$$M = K \| \mathbf{x}_0 \|.$$
Induction on $N \geq 1$ yields
$$\| \mathbf{x}_N - \mathbf{x}_{N-1} \| \leq M K^{N-1} \frac{|z - z_0|^N}{N!}.$$

Since $E$ is bounded, there is a constant $L > 0$ such that $|z - z_0| \leq L$ for all $z \in E$; for instance, if $E$ is contained in a rectangle $\{ x + yi \in \mathbb{C} : x \in (e, f),\ y \in (c, d) \}$, which our assumption that $R$ is open and connected allows, we may take $L$ to be its diameter. Since $N! \geq (N-1)!$ and $|z - z_0| \leq L$, we have
$$\| \mathbf{x}_N - \mathbf{x}_{N-1} \| \leq M K^{N-1} \frac{|z - z_0|^N}{(N-1)!} \leq M K^{N-1} \frac{L^N}{(N-1)!}.$$
Since
$$\sum_{N=1}^{\infty} M K^{N-1} \frac{L^N}{(N-1)!} = M L\, e^{KL} < \infty,$$
the Weierstrass M-test (Theorem 1.1.20) shows that the series $\sum_{N=1}^{\infty} \left[ x_{i,N} - x_{i,N-1} \right]$ converges uniformly for all $z \in E$. But
$$x_{i,N}(z) = \sum_{k=1}^{N-1} \left[ x_{i,k+1}(z) - x_{i,k}(z) \right] + x_{i,1}(z).$$

Thus, the sequence $\{x_{i,N}\}_{N=1}^{\infty}$ converges uniformly to a function $x_i(z)$ for all $z \in E$, and hence the function $x_i(z)$ is continuous (Theorem 1.1.18). Also, we can interchange the order of taking limits and integration for such uniformly convergent sequences. Therefore
$$\mathbf{x}(z) = \lim_{N\to\infty} \mathbf{x}_N(z) = \mathbf{x}_0 + \lim_{N\to\infty} \int_{z_0}^{z} A(s)\mathbf{x}_{N-1}(s)\, ds = \mathbf{x}_0 + \int_{z_0}^{z} \lim_{N\to\infty} A(s)\mathbf{x}_{N-1}(s)\, ds = \mathbf{x}_0 + \int_{z_0}^{z} A(s)\mathbf{x}(s)\, ds.$$
This shows that $\mathbf{x}(z)$ is a solution to the integral equation Eqn 4, and therefore a solution to Eqn 2 and hence to Eqn 1.

Uniqueness

Now, the uniqueness of the solution to Eqn 2 follows from Gronwall's inequality (Theorem 1.1.23). Suppose that $\mathbf{y}(z)$ and $\mathbf{r}(z)$ are two solutions of the initial value problem Eqn 2. Let $E = \{ x + yi \in \mathbb{C} : x \in [m, n],\ y \in [l, o] \}$. Then for all $z \in E$ we have
$$\| \mathbf{y}(z) - \mathbf{r}(z) \| \leq \int_{z_0}^{z} K \| \mathbf{y}(s) - \mathbf{r}(s) \|\, ds.$$
Letting $u(z) = \| \mathbf{y}(z) - \mathbf{r}(z) \|$ we have
$$0 \leq \Re\{u(z)\} \leq \Re\left\{ \int_{z_0}^{z} K u(s)\, ds \right\},$$
so by Gronwall's inequality, applied to the real part with $C = 0$ and $h(z) = K$, we find $u(z) \equiv 0$ on $[m, n] = \Re\{E\}$, and therefore $\Re\{\mathbf{y}(z)\} = \Re\{\mathbf{r}(z)\}$ for all $z \in \Re\{E\}$. Similarly,
$$0 \leq \Im\{u(z)\} \leq \Im\left\{ \int_{z_0}^{z} K u(s)\, ds \right\},$$
so by Gronwall's inequality, applied to the imaginary part with $C = 0$ and $h(z) = K$, we find $u(z) \equiv 0$ on $[l, o] = \Im\{E\}$, and therefore $\Im\{\mathbf{y}(z)\} = \Im\{\mathbf{r}(z)\}$ for all $z \in \Im\{E\}$. Combining the two results gives $\mathbf{y}(z) = \mathbf{r}(z)$ for all $z \in E$. This completes the proof of uniqueness for Eqn 1.

3.1.4 If Part
Now, finally, we show that $\dim(\operatorname{Ker}(L)) = n = \dim(V_R^n)$.

Proof

Let $L$ be defined as in 1.1.21, and choose $z_0 \in R$. Define $T : \operatorname{Ker}(L) \to \mathbb{C}^n$ by
$$T y := \left( y(z_0),\, y'(z_0),\, \ldots,\, y^{(n-1)}(z_0) \right).$$
$T$ is linear (cf. 1.1.21), and by the uniqueness theorem $T(y) = 0$ implies $y = 0$; therefore $T$ is one-to-one. The existence of solutions shows that $T$ is onto. Thus $T$ is bijective. Hence $\dim(\operatorname{Ker}(L)) = n$, and $\operatorname{Ker}(L)$ is exactly our solution space, so $\dim(V_R^n) = n$.

3.1.5 Only If Part


($\Rightarrow$) Now we come to the only if part of the proof, i.e. we prove that if $\dim(V_R^n) = n$ and $a_j$, $j \in \{0, 1, 2, \ldots, n-1\}$, are continuous in $R$, then all the coefficient functions $a_j$, $j \in \{0, 1, 2, \ldots, n-1\}$, are analytic in $R$.

Before proving it, let me give an example for the case $n = 1$. Consider the homogeneous equation
$$y' = a(z)\, y,$$
where $a(z)$ is continuous in $R$. Since our solution space $V_R^1$ is one-dimensional, it is spanned by a single solution; let this solution be $f(z)$, which is a non-trivial solution of the above equation (if it were the trivial solution there would be nothing to consider, and constant functions are all analytic anyway).

Now consider a point $z_0 \in R$ where $f(z_0) \neq 0$. Then we have the resulting equation
$$\frac{f'(z)}{f(z)} = a(z),$$
which is analytic in some neighbourhood of $z_0$, due to the fact that $\frac{f'(z)}{f(z)}$ is holomorphic in some neighbourhood of $z_0$; since holomorphic implies analytic, we get that $a(z)$ is analytic in some neighbourhood of $z_0$. Therefore $a(z)$ is analytic at every $z \in R$ where $f(z) \neq 0$. Now, applying Theorem 1.1.10 and using that $a(z)$ is continuous in $R$, we get that $a(z)$ is analytic in $R$. This proves the case $n = 1$.
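A tiny sympy sketch (my own illustration with a hypothetical solution $f(z) = e^{z^2}$, not from the report) shows the mechanism: given a non-vanishing solution, the quotient $f'/f$ recovers the coefficient, and in this case it is the analytic function $2z$.

```python
# Illustrative sympy check of the n = 1 argument: if f is a non-trivial solution
# of y' = a(z) y, then a(z) = f'(z)/f(z). For the hypothetical f = exp(z**2) this
# gives a(z) = 2*z, which is analytic everywhere.
import sympy as sp

z = sp.symbols('z')
f = sp.exp(z**2)                      # hypothetical non-vanishing solution
a = sp.simplify(sp.diff(f, z) / f)
print(a)                              # -> 2*z
```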

Now to prove the general case we proceed by induction.

Base Case
The example above proves the base case $n = 1$.

Inductive Hypothesis
Assume the statement holds for some positive integer $n$, i.e. for equations
$$y^{(n)} = a_{n-1} y^{(n-1)} + a_{n-2} y^{(n-2)} + \cdots + a_0 y.$$

Induction
Now consider a homogeneous linear differential equation of order $n+1$ (Eqn 1),
$$y^{(n+1)} = a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_0 y,$$
where $a_k$, $k \in \{0, 1, \ldots, n\}$, are continuous functions in $R$.

Let $V_R^{n+1}$ be the solution space of the above equation, of dimension $n+1$, and let $y_1, y_2, \ldots, y_{n+1}$ be a basis for this vector space. Choose $z_0 \in R$ such that $y_1(z_0) \neq 0$ (such a point exists since $y_1$ is not identically zero), and let $D$ be a neighbourhood of $z_0$ on which $y_1(z) \neq 0$. Then the $n$ functions
$$\frac{y_2}{y_1},\ \frac{y_3}{y_1},\ \ldots,\ \frac{y_{n+1}}{y_1}$$
are all analytic in $D$, as $y_1(z) \neq 0$ there.
Now define $Y_k = \left( \dfrac{y_k}{y_1} \right)'$, $k = 2, 3, \ldots, n+1$. Then $Y_2, \ldots, Y_{n+1}$ are a set of $n$ functions that are analytic in $D$.

We now show that $Y_2, Y_3, \ldots, Y_{n+1}$ are linearly independent in $D$. Suppose $\sum_{k=2}^{n+1} c_k Y_k = 0$. Then
$$\left( \frac{\sum_{k=2}^{n+1} c_k y_k}{y_1} \right)' = 0.$$
Thus the function inside the derivative must be a constant, say $C$:
$$\frac{\sum_{k=2}^{n+1} c_k y_k}{y_1} = C.$$
This gives a linear relation
$$\sum_{k=2}^{n+1} c_k y_k - C y_1 = 0.$$

Since y1 , . . . , yn+1 are linearly independent, we must have

C = c2 = · · · = cn+1 = 0

This shows that Y2 , . . . ,Yn+1 are linearly independent in D.

This proves our claim. We now reduce the order of Eqn 1 by $1$, since we know the solution $y_1$, and we find that for every $k \in \{2, 3, \ldots, n+1\}$ the function $Y_k$ is a solution of the reduced $n$'th order homogeneous equation (as also done in class), which we call Eqn 2:
$$u^{(n)} = c_{n-1} u^{(n-1)} + c_{n-2} u^{(n-2)} + \cdots + c_0 u$$
in $D$, with coefficients $c_{n-1}, c_{n-2}, \ldots, c_0$ continuous in $D$, where
$$c_{n-1} = y_1^{-1}\left[-\left(\binom{n+1}{1} y_1' + a_n y_1\right)\right],$$
$$c_{n-2} = y_1^{-1}\left[-\left(\binom{n+1}{2} y_1'' + \binom{n}{1} a_n y_1' + a_{n-1} y_1\right)\right],$$
$$c_{n-3} = y_1^{-1}\left[-\left(\binom{n+1}{3} y_1''' + \binom{n}{2} a_n y_1'' + \binom{n-1}{1} a_{n-1} y_1' + a_{n-2} y_1\right)\right],$$
$$\vdots$$
$$c_0 = y_1^{-1}\left[-\left(\binom{n+1}{n} y_1^{(n)} + \binom{n}{n-1} a_n y_1^{(n-1)} + \binom{n-1}{n-2} a_{n-1} y_1^{(n-2)} + \cdots + a_1 y_1\right)\right],$$
or, in general,
$$c_{n-k} = y_1^{-1}\left[-\left(\binom{n+1}{k} y_1^{(k)} + \sum_{i=0}^{k-1} \binom{n-i}{k-1-i}\, a_{n-i}\, y_1^{(k-1-i)}\right)\right].$$

Now let $V_D^n$ be the solution space of Eqn 2 in $D$. Since the set
$$\{ Y_k : k \in \{2, 3, \ldots, n+1\} \}$$
is a linearly independent set of $n$ solutions, the solution space has dimension at least $n$, and by Lemma 2 exactly $n$, i.e. $\dim(V_D^n) = n$.

Applying the inductive hypothesis to Eqn 2, its coefficients $c_{n-1}, \ldots, c_0$ are analytic in $D$; using the formulas above (and $y_1(z) \neq 0$ in $D$), each $a_k$, $k \in \{1, 2, \ldots, n\}$, is then analytic in $D$, and since Eqn 1 holds for all $z \in D$ we get that $a_0$ is analytic in $D$ as well. This implies that every
$$a \in \{ a_k : k \in \{0, 1, 2, \ldots, n\} \}$$
is analytic at each point $z \in R$ such that $y_1(z) \neq 0$.

Then, since the zeros of $y_1$ are isolated (Theorem 1.1.10) and each $a_k$ is continuous in $R$, we get that each $a_k$ is analytic in $R$.

Q.E.D
Bibliography

Bose, A. K. (1982). Linear Differential Equations on the Complex Plane. The American Mathematical Monthly, 89(4), 244–246.
Krom, M. (1979). Solution Spaces of Differential Equations.
Kuttler, J. (2019). Lecture notes - Honors Linear Algebra.
Kuttler, J. (2020). Lecture notes - Honors Calculus.
Runde, V. (2021). Honors Advanced Calculus I/II.
