
Introduction to Design and Analysis of Algorithms

Algorithm Efficiency

Nguyễn Ngọc Thảo


nnthao@fit.hcmus.edu.vn


These slides are adapted from the lecture notes of the course
CSC14111 – Introduction to the Design and Analysis of
Algorithms, taught by Dr. Nguyen Thanh Phuong (2023).

2
The analysis of
Algorithm efficiency
Algorithm efficiencies
• There are two kinds of algorithm efficiency (or complexity):
time efficiency and space efficiency.


• Time efficiency (time complexity) indicates how fast an algorithm runs.
• Space efficiency (space complexity) refers to the amount of memory required
by the algorithm, in addition to the space needed for its input and output.

4
Measure the algorithm efficiencies
• Let 𝒜 be an algorithm and 𝒫 be the program implementing 𝒜.
• The time complexity of algorithm 𝒜 can be measured based on
the running time of program 𝒫.
• We simply use some standard time measurement units (e.g. seconds).
• Similarly, the space complexity of algorithm 𝒜 corresponds to the
amount of memory occupied by that program 𝒫.
• The common space measurement units are from a hierarchy of bytes
(B), e.g., kilobytes (KB), megabytes (MB), or gigabytes (GB).

• The above metrics are device-dependent, which biases comparisons between
algorithms.

5
Measure the time complexity
• The time efficiency of an algorithm is measured by counting
the number of times the basic operation(s) is executed.

Basic operation
The basic operation of an algorithm is the most important
one, contributing the most to the total running time.
• It is usually in the algorithm’s innermost loop.
• It usually relates to the data that needs to be processed.

6
Measure the time complexity
• The efficiency of an algorithm is represented as a function of
some parameter 𝑛 indicating the input size.
• Consider the following quantities.
• 𝒇(𝒏): a function (typically a polynomial) giving the number of times the basic
operation is executed on inputs of size 𝑛.
• 𝑡: the execution time of the basic operation on a particular computer.
• Then, the running time 𝑇(𝑛) of a program implementing the
algorithm in consideration on that computer is
𝑻(𝒏) ≈ 𝒕 × 𝒇(𝒏)
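As a quick sanity check of this estimate, with assumed values 𝑓(𝑛) = (1/2)𝑛(𝑛 − 1) and 𝑡 = 10⁻⁹ seconds:

\[
T(10^5) \approx 10^{-9} \times \tfrac{1}{2}\cdot 10^{5}(10^{5}-1) \approx 5 \text{ seconds.}
\]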

7
The running time 𝑇(𝑛): Comments
• The formula 𝑻(𝒏) can only give a reasonable estimate of the
algorithm’s running time.
• The count 𝑓(𝑛) says nothing about non-basic operations.
• The reliability of the constant 𝑡 is not always easy to assess.

8
The orders of growth
• Consider a polynomial 𝑓(𝑛) that represents the number of
times the basic operation is executed on inputs of size 𝑛.
• For large values of 𝑛, the multiplicative constants and all terms except
the one of the largest degree can be ignored.

9
Example: Why ignore low-order terms?

Assume that two different algorithms are designed to solve the same problem.
Let 𝑓1(𝑛) = 0.1𝑛² + 10𝑛 + 100 and 𝑓2(𝑛) = 0.1𝑛² be the number of times the basic
operation is executed by the first and the second algorithm, respectively.

𝒏        𝒇𝟏(𝒏)             𝒇𝟐(𝒏)             𝒇𝟏(𝒏) / 𝒇𝟐(𝒏)
10¹      210                10                 21
10²      2,100              1,000              2.1
10³      110,100            100,000            1.101
10⁶      100,010,000,100    100,000,000,000    1.000100001

10
Example: What happens when doubling the input size?

Consider the function 𝑓(𝑛) = (1/2)𝑛(𝑛 − 1).
If the value of 𝑛 is large enough, then 𝑓(𝑛) = (1/2)𝑛² − (1/2)𝑛 ≈ (1/2)𝑛².
How much longer will the algorithm run if we double its input size?

𝑇(2𝑛) / 𝑇(𝑛) = 𝑓(2𝑛) / 𝑓(𝑛) ≈ (2𝑛)² / 𝑛² = 4
11
Common growth-rate functions

12
Common growth-rate functions

The exponential function 2𝑛 and the factorial function 𝑛! grow so fast that their
values become astronomically large even for rather small values of 𝑛.

13
Example: Compute the running time for the Fibonacci problem

Consider the following pseudo-code for computing the 𝑛ᵗʰ Fibonacci number
in a recursive manner.
Fibonacci(n) {
if (n ≤ 1)
return n;
return Fibonacci(n - 1) + Fibonacci(n - 2);
}

Assume that this algorithm is programmed on a computer that performs one
billion operations per second.
What is the running time of the above pseudo-code for each input size?
𝑛       40    60    80    100   120   160   200
Time    ?     ?     ?     ?     ?     ?     ?
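One way to fill in the table, as a rough back-of-the-envelope sketch rather than anything from the original slides: the recursive algorithm performs on the order of 𝐹(𝑛) ≈ 𝜙ⁿ/√5 additions (using the closed-form formula derived later in these slides), so at 10⁹ operations per second it needs roughly 𝜙ⁿ/(√5 × 10⁹) seconds. A small C++ sketch that prints order-of-magnitude estimates:

    #include <cmath>
    #include <cstdio>

    // Rough running-time estimate for the naive recursive Fibonacci:
    // it performs on the order of F(n) ~ phi^n / sqrt(5) basic operations.
    int main() {
        const double phi = (1.0 + std::sqrt(5.0)) / 2.0;
        const double ops_per_second = 1e9;   // assumed machine speed
        const int sizes[] = {40, 60, 80, 100, 120, 160, 200};
        for (int n : sizes) {
            // Work with logarithms so that large n does not overflow double.
            double log10_ops = n * std::log10(phi) - 0.5 * std::log10(5.0);
            double log10_seconds = log10_ops - std::log10(ops_per_second);
            std::printf("n = %3d: about 10^%.1f seconds\n", n, log10_seconds);
        }
        return 0;
    }

For example, this gives about 0.1 seconds for 𝑛 = 40 but about 10^11.5 seconds (roughly ten thousand years) for 𝑛 = 100, which is the point of the exercise.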
Worst-case, Best-case,
and Average-case
Algorithm efficiency: Analysis cases
• The running time of an algorithm may depend on the specifics of a
particular input, besides the input size.

SequentialSearch(a[1 .. n], k) {
    for (i = 1; i ≤ n; i++)
        if (a[i] == k)
            return 1;
    return 0;
}

Whether (and when) the search procedure stops depends on the position of the key 𝑘.

16
Algorithm efficiency: Analysis cases
• The best-case efficiency of an algorithm is the algorithm’s
efficiency for the best-case input of size 𝑛.
• The algorithm runs the fastest among all possible inputs of that size.
• The worst-case efficiency of an algorithm is the algorithm’s
efficiency for the worst-case input of size 𝑛.
• The algorithm runs the longest among all possible inputs of that size.
• The average-case efficiency of an algorithm indicates the
algorithm’s behavior on a “typical” or “random” input.
• We must make some assumptions about possible inputs of size 𝑛.

17
Asymptotic notations
• Let 𝑓(𝑛) and 𝑔(𝑛) be any nonnegative functions defined
on the set of natural numbers.
• 𝑓(𝑛) will be an algorithm’s running time, and 𝑔(𝑛) will be
some simple function to compare with.

18
Asymptotic notations: Big-O
• A function 𝑓(𝑛) is said to be in 𝑂(𝑔(𝑛)) if it is bounded above
by some positive constant multiple of 𝑔(𝑛) for all large 𝑛.
𝑂(𝑔(𝑛)) = {𝑓(𝑛): ∃𝑐 ∈ ℝ+ ∧ 𝑛0 ∈ ℕ, 0 ≤ 𝑓(𝑛) ≤ 𝑐𝑔(𝑛), ∀𝑛 ≥ 𝑛0 }

• Example: If 𝑓(𝑛) = 2𝑛 + 1 and 𝑔(𝑛) = 𝑛², then 𝑓(𝑛) ∈ 𝑂(𝑛²); explicit
constants are exhibited below.

• 𝑂(𝑔(𝑛)) is the set of all functions with a lower or the same order
of growth as 𝑔(𝑛).
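For the example above, one explicit choice of constants (among many) is:

\[
2n + 1 \le 2n^2 + n^2 = 3n^2 \quad \text{for all } n \ge 1,
\]

so 𝑐 = 3 and 𝑛0 = 1 witness 2𝑛 + 1 ∈ 𝑂(𝑛²).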

19
Asymptotic notations: Big-Ω
• A function 𝑓(𝑛) is said to be in Ω(𝑔(𝑛)) if it is bounded below
by some positive constant multiple of 𝑔(𝑛) for all large 𝑛.
Ω(𝑔(𝑛)) = {𝑓(𝑛): ∃𝑐 ∈ ℝ+ ∧ 𝑛0 ∈ ℕ, 0 ≤ 𝑐𝑔(𝑛) ≤ 𝑓(𝑛), ∀𝑛 ≥ 𝑛0 }

• Example: If 𝑓(𝑛) = 𝑛³ + 2𝑛² + 3 and 𝑔(𝑛) = 𝑛², then 𝑓(𝑛) ∈ Ω(𝑛²).

• Ω(𝑔(𝑛)) is the set of all functions with a higher or the same order
of growth as 𝑔(𝑛).

20
Asymptotic notations: Big-Θ
• A function 𝑓(𝑛) is said to be in Θ(𝑔(𝑛)) if it is bounded both above and
below by positive constant multiples of 𝑔(𝑛) for all large 𝑛.
Θ(𝑔(𝑛)) = {𝑓(𝑛): ∃𝑐1, 𝑐2 ∈ ℝ⁺ ∧ 𝑛0 ∈ ℕ,
0 ≤ 𝑐1𝑔(𝑛) ≤ 𝑓(𝑛) ≤ 𝑐2𝑔(𝑛), ∀𝑛 ≥ 𝑛0}

• Example: If 𝑓(𝑛) = (1/2)𝑛² − 3𝑛 and 𝑔(𝑛) = 𝑛², then 𝑓(𝑛) ∈ Θ(𝑛²); explicit
constants are exhibited below.

• Θ(𝑔(𝑛)) is the set of all functions with the same order of
growth as 𝑔(𝑛).
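For the example above, explicit constants can be exhibited as follows:

\[
\tfrac{1}{4}n^2 \;\le\; \tfrac{1}{2}n^2 - 3n \;\le\; \tfrac{1}{2}n^2 \quad \text{for all } n \ge 12,
\]

so 𝑐1 = 1/4, 𝑐2 = 1/2, and 𝑛0 = 12 satisfy the definition.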

21
Examples: Big-O, Big-Ω, and Big-Θ notations, with 𝑔(𝑛) = 𝑛²

In 𝑶(𝒏²) but not Θ(𝑛²):   3 log 𝑛 + 8,   5𝑛 + 7,   2 log 𝑛
In 𝚯(𝒏²):                 4𝑛²,   6𝑛² + 9,   5𝑛² + 2𝑛
In 𝛀(𝒏²) but not Θ(𝑛²):   4𝑛³ + 3𝑛²,   6𝑛⁶ + 4𝑛⁴,   2ⁿ + 4𝑛
22
Big-O and related theorems
• Theorem 1: Given 𝑓(𝑛) ∈ ℝ⁺ and 𝑔(𝑛) ∈ ℝ⁺,
𝑓(𝑛) ∈ Θ(𝑔(𝑛)) ⇔ 𝑓(𝑛) ∈ 𝑂(𝑔(𝑛)) ∧ 𝑓(𝑛) ∈ Ω(𝑔(𝑛))

• Theorem 2: Given 𝑓(𝑛) = 𝑎𝑑 𝑛ᵈ + ⋯ + 𝑎1 𝑛 + 𝑎0 with 𝑎𝑑 > 0, we have
𝑓(𝑛) ∈ 𝑂(𝑛ᵈ),
where the constant 𝑐 = |𝑎𝑑| + ⋯ + |𝑎1| + |𝑎0| works for all 𝑛 > 1.

• Theorem 3: If 𝑓1(𝑛) ∈ 𝑂(𝑔1(𝑛)) and 𝑓2(𝑛) ∈ 𝑂(𝑔2(𝑛)), then
𝑓1(𝑛) + 𝑓2(𝑛) ∈ 𝑂(max(𝑔1(𝑛), 𝑔2(𝑛)))

• The analogous assertions are also true for Θ and Ω notations.

23
Mathematical analysis of
Nonrecursive Algorithms
Time efficiency analysis: A general plan
1. Decide on the parameter(s) that indicates the input size.
2. Identify the basic operation of the algorithm.
3. Check whether the number of times the basic operation is
executed depends on some additional property. If it does,
the efficiency cases must be investigated separately.
4. Set up a sum expressing the number of times the basic
operation is executed.
5. Using standard formulas and rules of sum manipulation,
either find a closed-form formula for the count or, at the
very least, establish its order of growth.

25
Example: Analyze the time efficiency for an algorithm

Consider the following pseudo-code for finding the value of the largest
element in a list of 𝑛 numbers.
MaxElement(a[1 .. n]) {
max = a[1];
for (i = 2; i ≤ n; i++)
if (a[i] > max)
max = a[i];
return max;
}

The time efficiency of the above code is
𝑇(𝑛) = 𝑛 − 1
(one comparison per element after the first).

26
Example: Analyze the time efficiency for an algorithm

Consider the following pseudo-code for multiplying two matrices.

MatrixMultiplication(a[1 .. n, 1 .. n], b[1 .. n, 1 .. n]){


for (i = 1; i ≤ n; i++)
for (j = 1; j ≤ n; j++) {
c[i, j] = 0;
for (k = 1; k ≤ n; k++)
c[i, j] = c[i, j] + a[i, k] * b[k, j];
}
}

The time efficiency of the above code is
𝑇(𝑛) = 𝑛 × 𝑛 × 𝑛 = 𝑛³
(one multiplication per iteration of the innermost loop).

27
Example: Bubblesort and its improvement

Consider the following pseudo-code for the original Bubblesort.

BubbleSort(a[1 .. n]) {
    for (i = 2; i ≤ n; i++)
        for (j = n; j ≥ i; j--)
            if (a[j - 1] > a[j])
                a[j - 1] ⇆ a[j];
}

Let 𝐶(𝑛) be the number of comparisons made on a data set of size 𝑛.
Then, 𝐶(𝑛) = 𝑛(𝑛 − 1)/2 ∈ Θ(𝑛²) for all distributions of the input array.

28
Example: Bubblesort and its improvement

Consider an improved version of the previous Bubblesort.

BubbleSortImproved(a[1 .. n]) {
    flag = true;
    i = 1;
    while (flag) {
        flag = false;
        i++;
        for (j = n; j ≥ i; j--)
            if (a[j - 1] > a[j]) {
                a[j - 1] ⇄ a[j];
                flag = true;
            }
    }
}

• Best case: 𝐵(𝑛) = 𝑛 − 1 ∈ Θ(𝑛)
• Worst case: 𝑊(𝑛) = 𝑛(𝑛 − 1)/2 ∈ Θ(𝑛²)
• Average case: 𝐴(𝑛) = (1/(𝑛 − 1)) Σᵢ₌₁ⁿ⁻¹ 𝐶(𝑖) ∈ Θ(𝑛²),
  where 𝐶(𝑖) is the number of comparisons made after the 𝑖ᵗʰ iteration.
29
Example: Insertionsort and its improvement

Consider Insertionsort and its improvement.

Original Insertionsort:

InsertionSort(a[1 .. n]) {
    for (i = 2; i ≤ n; i++) {
        v = a[i];
        j = i - 1;
        while (j ≥ 1 && a[j] > v) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = v;
    }
}

Improved Insertionsort (with a sentinel at a[0]):

InsWithSentinel(a[1 .. n]) {
    for (i = 2; i ≤ n; i++) {
        a[0] = v = a[i];
        j = i - 1;
        while (a[j] > v) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = v;
    }
}
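A minimal runnable C++ sketch of the sentinel variant, assuming a 1-based layout where index 0 is reserved as scratch space (the function name and vector-based interface are choices made here, not part of the slides):

    #include <iostream>
    #include <vector>

    // Sorts a[1 .. a.size()-1]; a[0] is used as the sentinel slot.
    void insertion_sort_sentinel(std::vector<int>& a) {
        for (std::size_t i = 2; i < a.size(); ++i) {
            int v = a[i];
            a[0] = v;              // sentinel: guarantees the while loop terminates
            std::size_t j = i - 1;
            while (a[j] > v) {     // no explicit "j >= 1" test is needed
                a[j + 1] = a[j];
                --j;
            }
            a[j + 1] = v;
        }
    }

    int main() {
        std::vector<int> a = {0, 5, 2, 9, 1, 7};   // a[0] is a placeholder
        insertion_sort_sentinel(a);
        for (std::size_t i = 1; i < a.size(); ++i) std::cout << a[i] << ' ';
        std::cout << '\n';                          // prints: 1 2 5 7 9
    }

The sentinel removes one test (j ≥ 1) from the inner loop, which is exactly the improvement the pseudo-code above illustrates.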

30
Example: Insertionsort and its improvement

The best-case efficiency:
𝐵(𝑛) = 𝑛 − 1 ∈ Θ(𝑛)

The worst-case efficiency:
𝑊(𝑛) = Σᵢ₌₁ⁿ (𝑖 − 1) = 𝑛(𝑛 − 1)/2 ∈ Θ(𝑛²)

The average-case efficiency:
𝐴(𝑛) = Σᵢ₌₂ⁿ 𝐶(𝑖) ≈ (𝑛² − 𝑛)/4 + 𝑛 − ln 𝑛 − 𝛾 ≈ 𝑛²/4 ∈ Θ(𝑛²)
where 𝐶(𝑖) = (1/𝑖)(𝑖 − 1) + Σⱼ₌₁ⁱ⁻¹ (1/𝑖)𝑗 is the average number of times the
comparison is executed when the algorithm inserts the 𝑖ᵗʰ element into the
left sorted subarray.
31
Example: Insertionsort and its improvement

Σᵢ₌₁ⁿ 1/𝑖 = 1 + 1/2 + 1/3 + ⋯ + 1/𝑛 ≈ ln 𝑛 + 𝛾

where 𝛾 ≈ 0.5772 is the Euler–Mascheroni constant.

(Image credit: Math Stack Exchange)


Example: Analyze the time efficiency for an algorithm

Consider the pseudo-code for finding the number of binary digits in the
binary representation of a positive decimal integer.

BitCount(n) {
    count = 1;
    while (n > 1) {
        count++;
        n = n / 2;
    }
    return count;
}

The number of times the comparison will be executed is
𝑇(𝑛) = ⌊log₂ 𝑛⌋ + 1 ∈ Θ(log₂ 𝑛)
It is also the number of bits in the binary representation of 𝑛.
33
Recurrence relations

Rabbits and the Fibonacci Numbers (Fibonacci, 1202)


A young pair of rabbits (one of each sex) is placed on a desert
island. A pair of rabbits does not breed until they are 2 months
old. After they are 2 months old, each pair of rabbits produces
another pair each month.
Find a recurrence relation for the number of pairs of rabbits on
the island after 𝑛 months, assuming that no rabbits ever die.

34
Recurrence relations
• Let 𝐹𝑛 denote the number of pairs of rabbits after 𝑛 months.
• Initially (month 0), there is no pair of rabbits on the island: 𝐹0 = 0.

Month               1    2    3    4    5    6
Reproducing pairs   0    0    1    1    2    3
Young pairs         1    1    1    2    3    5
Total pairs         1    1    2    3    5    8
35
Recurrence relations
• The problem resembles the Fibonacci sequence, in which
the recurrence relation is defined as follows.
𝐹0 = 0
𝐹1 = 1
𝐹𝑛 = 𝐹𝑛−1 + 𝐹𝑛−2 𝑓𝑜𝑟 𝑛 ≥ 2

• To find the number of rabbit pairs after 𝑛 months, add the number of pairs
on the island in the previous month, 𝐹𝑛−1, and the number of newborn pairs, 𝐹𝑛−2.
• Each newborn pair comes from a pair that is at least 2 months old.

36
Solve recurrence relations
• The solution is an explicit formula, called a closed formula,
for the terms of the sequence.
• Example: Solving the recurrence relation
𝑥1 = 1
𝑥𝑛 = 2𝑥𝑛−1 + 1
gives us
𝑥𝑛 = 2ⁿ − 1

• We need to use mathematical induction to prove that our guess is correct,
as sketched below.
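A brief induction argument for this example: the base case is 𝑥1 = 1 = 2¹ − 1; for the inductive step, assume 𝑥𝑛−1 = 2ⁿ⁻¹ − 1, then

\[
x_n = 2x_{n-1} + 1 = 2(2^{n-1} - 1) + 1 = 2^n - 1 .
\]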

37
Approaches: Forward substitution
• We find successive terms beginning with the initial condition
about 𝑥1 and ending with 𝑥𝑛 .
• 𝑥1 = 1 = 2¹ − 1
• 𝑥2 = 2𝑥1 + 1 = 3 = 2² − 1
• 𝑥3 = 2𝑥2 + 1 = 7 = 2³ − 1
• 𝑥4 = 2𝑥3 + 1 = 15 = 2⁴ − 1
• …
• 𝑥𝑛 = 2𝑥𝑛−1 + 1 = 2ⁿ − 1

38
Approaches: Backward substitution
• Starting from 𝑥𝑛, we iteratively express it in terms of earlier terms of the
sequence until we reach the initial term 𝑥1.
• 𝑥𝑛 = 2𝑥𝑛−1 + 1
      = 2(2𝑥𝑛−2 + 1) + 1 = 2²𝑥𝑛−2 + 2¹ + 2⁰
      = 2²(2𝑥𝑛−3 + 1) + 2¹ + 2⁰ = 2³𝑥𝑛−3 + 2² + 2¹ + 2⁰
      = ⋯ = 2ⁱ𝑥𝑛−𝑖 + 2ⁱ⁻¹ + ⋯ + 2² + 2¹ + 2⁰
• Based on the initial condition 𝑥1 = 1, set 𝑛 − 𝑖 = 1 to have 𝑖 = 𝑛 − 1.
• Therefore, 𝑥𝑛 = 2ⁿ⁻¹𝑥1 + 2ⁿ⁻² + ⋯ + 2² + 2¹ + 2⁰
            = 2ⁿ − 1

39
Linear recurrence relations
• A wide variety of recurrence relations occur in models.
• We solve them by using either iteration (forward/backward substitution) or
some other ad hoc technique.

• Linear recurrence relations express the terms of a sequence


as linear combinations of previous terms.
• It is an important class of recurrence relation that can be
solved in a systematic way.

40
Linear homogeneous recurrence relations
• A linear homogeneous recurrence relation of degree 𝑘 with
constant coefficients is a recurrence relation of the form
𝑥𝑛 = 𝑐1𝑥𝑛−1 + 𝑐2𝑥𝑛−2 + ⋯ + 𝑐𝑘𝑥𝑛−𝑘
or, equivalently,
𝑓(𝑛) = 𝑥𝑛 − (𝑐1𝑥𝑛−1 + 𝑐2𝑥𝑛−2 + ⋯ + 𝑐𝑘𝑥𝑛−𝑘) = 0

where 𝑐1, 𝑐2, …, 𝑐𝑘 ∈ ℝ, 𝑐𝑘 ≠ 0, and the 𝑘 initial conditions are
𝑥0 = 𝐶0, 𝑥1 = 𝐶1, …, 𝑥𝑘−1 = 𝐶𝑘−1
• There are many solutions, depending on the values of the initial
conditions.

41
Solving linear recurrence relations
• For example, the recurrence relations shown below are not
linear homogeneous with constant coefficients.
• 𝑥𝑛 = 𝑥𝑛−1 + (𝑥𝑛−1)² is not linear.
• 𝑥𝑛 = 2𝑥𝑛−1 + 3 is not homogeneous.
• 𝑥𝑛 = 𝑛𝑥𝑛−1 does not have constant coefficients.

42
Linear homogeneous recurrence relations
• 𝑥𝑛 = 𝑟ⁿ is a solution of the recurrence relation
𝑥𝑛 = 𝑐1𝑥𝑛−1 + 𝑐2𝑥𝑛−2 + ⋯ + 𝑐𝑘𝑥𝑛−𝑘
if and only if
𝑟ⁿ = 𝑐1𝑟ⁿ⁻¹ + 𝑐2𝑟ⁿ⁻² + ⋯ + 𝑐𝑘𝑟ⁿ⁻ᵏ
• Divide both sides of this equation by 𝑟ⁿ⁻ᵏ (when 𝑟 ≠ 0) and subtract the
terms on the right:
𝑟ᵏ − 𝑐1𝑟ᵏ⁻¹ − 𝑐2𝑟ᵏ⁻² − ⋯ − 𝑐𝑘 = 0
• That is the characteristic equation of the recurrence relation.

43
Linear homogeneous recurrence relations
• The solutions of the characteristic equation of the recurrence
relation are the characteristic roots of the recurrence.
• We use these characteristic roots to give an explicit formula
for all the solutions of the recurrence relation.

• For simplicity, let’s consider linear homogeneous recurrence


relations of degree two.
• The characteristic equation in this case is a quadratic equation:
𝑎𝑟² + 𝑏𝑟 + 𝑐 = 0
• 𝑥𝑛 can be solved in one of the three cases.

44
Linear homogeneous recurrence relations
• Case 1: Suppose that the equation has two distinct real roots 𝑟1 and 𝑟2.
The solution is 𝑥𝑛 = 𝛼𝑟1ⁿ + 𝛽𝑟2ⁿ, for some 𝛼, 𝛽 ∈ ℝ.
• Case 2: Suppose that the equation has only one (double) real root 𝑟.
The solution is 𝑥𝑛 = 𝛼𝑟ⁿ + 𝛽𝑛𝑟ⁿ, for some 𝛼, 𝛽 ∈ ℝ.
• Case 3: If the roots are complex, 𝑟1,2 = 𝑢 ± 𝑖𝑣, then the solution is
𝑥𝑛 = 𝛾ⁿ(𝛼 cos 𝑛𝜃 + 𝛽 sin 𝑛𝜃),
for some 𝛼, 𝛽 ∈ ℝ, with 𝛾 = √(𝑢² + 𝑣²) and 𝜃 = arctan(𝑣/𝑢).

45
Example: Find the solution of the recurrence relation

Find the solution of the following recurrence relation
𝑥𝑛 = 𝑥𝑛−1 + 2𝑥𝑛−2
with initial conditions 𝑥0 = 2 and 𝑥1 = 7.

The solution is 𝑥𝑛 = 3 × 2ⁿ − (−1)ⁿ.
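The computation behind this answer (Case 1, two distinct real roots):

\[
r^2 - r - 2 = 0 \;\Rightarrow\; r_1 = 2,\; r_2 = -1 \;\Rightarrow\; x_n = \alpha \, 2^n + \beta \, (-1)^n ,
\]
\[
x_0 = \alpha + \beta = 2, \qquad x_1 = 2\alpha - \beta = 7 \;\Rightarrow\; \alpha = 3,\; \beta = -1 .
\]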

46
Example: Find the solution of the recurrence relation

Find an explicit formula for the following recurrence relation
𝑥𝑛 = 6𝑥𝑛−1 − 9𝑥𝑛−2
with initial conditions 𝑥0 = 0 and 𝑥1 = 3.

The solution is 𝑥𝑛 = 𝑛 × 3ⁿ.
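The computation behind this answer (Case 2, a double root):

\[
r^2 - 6r + 9 = (r - 3)^2 = 0 \;\Rightarrow\; r = 3 \;\Rightarrow\; x_n = \alpha \, 3^n + \beta n \, 3^n ,
\]
\[
x_0 = \alpha = 0, \qquad x_1 = 3\alpha + 3\beta = 3 \;\Rightarrow\; \alpha = 0,\; \beta = 1 .
\]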

47
Mathematical analysis of
Recursive Algorithms
Time efficiency analysis: A general plan
1. Decide on the parameter(s) that indicates the input size.
2. Identify the basic operation of the algorithm.
3. Check whether the number of times the basic operation is
executed depends on some additional property. If it does,
the efficiency cases must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial
condition, for the number of times the basic operation is
executed.
5. Solve the recurrence or, at least, ascertain the order of
growth of its solution.

49
Example: Find the factorial of 𝑛: 𝑛!

Consider the pseudo-code for finding the factorial of 𝑛: 𝑛!.

Factorial(n) {
if (n == 0)
return 1;
return Factorial(n – 1) * n;
}

Let 𝑀(𝑛) denote the number of times the basic operation is executed.
The recurrence relation is as follows:
𝑀(𝑛) = 𝑀(𝑛 − 1) + 1
𝑀(0) = 0
Thus, 𝑀(𝑛) ∈ Θ(𝑛).
50
Example: Solve the Tower of Hanoi puzzle with 𝑛 disks

Consider the pseudo-code for solving the Tower of Hanoi puzzle of 𝑛 disks.

HNTower(n, left, middle, right) {


if (n) {
HNTower(n – 1, left, right, middle);
Movedisk(1, left, right);
HNTower(n – 1, middle, left, right);
}
}

Let 𝑀(𝑛) denote the number of times the basic operation (moving a disk) is executed.
Thus, 𝑀(𝑛) = 2ⁿ − 1 ∈ Θ(2ⁿ), with the following recurrence relation:
𝑀(𝑛) = [𝑀(𝑛 − 1) + 1] + 𝑀(𝑛 − 1) = 2𝑀(𝑛 − 1) + 1
𝑀(1) = 1
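Solving this recurrence by backward substitution:

\[
M(n) = 2M(n-1) + 1 = 2^2 M(n-2) + 2 + 1 = \cdots = 2^{n-1} M(1) + 2^{n-2} + \cdots + 2 + 1 = 2^n - 1 .
\]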
51
Example: Analyze the time efficiency for an algorithm

Consider the pseudo-code for finding the number of binary digits in the
binary representation of a positive decimal integer.

BitCount(n) {
if (n == 1) return 1;
return BitCount( n / 2) + 1;
}

Let 𝐴(𝑛) denote the number of times the basic operation is executed.
The number of additions made in computing BitCount(⌊𝑛/2⌋) is 𝐴(⌊𝑛/2⌋).
The recurrence relation is as follows:
𝐴(𝑛) = 𝐴(⌊𝑛/2⌋) + 1
𝐴(1) = 0
52
“Smoothness rule” theorem
• Let 𝑔(𝑛) be a nonnegative function defined on the set of natural numbers.
• 𝑔(𝑛) is called smooth if it is eventually nondecreasing and 𝑔(2𝑛) ∈ Θ(𝑔(𝑛)).

“Smoothness rule” theorem

Let 𝑓(𝑛) be an eventually nondecreasing function and 𝑔(𝑛) be
a smooth function.
If 𝑓(𝑛) ∈ Θ(𝑔(𝑛)) for values of 𝑛 that are powers of 𝑏, where
𝑏 ≥ 2, then
𝒇(𝒏) ∈ 𝚯(𝒈(𝒏))

(The analogous results hold for the cases of 𝑂 and Ω as well.)

53
Solve a recurrence relation: Approach
• Solve the given recurrence relation only for 𝑛 = 2ᵏ.
• Apply the “Smoothness rule” theorem to obtain a correct answer about the
order of growth for all values of 𝑛.
• The rule guarantees that the order of growth observed for 𝑛 = 2ᵏ holds
for all (sufficiently large) 𝑛.

54
Example: Analyze the time efficiency for an algorithm

Consider the pseudo-code for finding the number of binary digits in the
binary representation of a positive decimal integer.

BitCount(n) {
if (n == 1) return 1;
return BitCount( n / 2) + 1;
}

Assume that 𝑛 = 2ᵏ.
The recurrence relation 𝐴(𝑛) = 𝐴(⌊𝑛/2⌋) + 1 then takes the form:
𝐴(2ᵏ) = 𝐴(2ᵏ⁻¹) + 1
𝐴(2⁰) = 0
Thus, 𝐴(𝑛) = log₂ 𝑛 ∈ Θ(log₂ 𝑛), first for 𝑛 = 2ᵏ and then, by the smoothness
rule, for all values of 𝑛.
55
Example: Compute the 𝒏𝒕𝒉 Fibonacci number

The recurrence relation is 𝐹𝑛 = 𝐹𝑛−1 + 𝐹𝑛−2 with 𝐹0 = 0 and 𝐹1 = 1.

The characteristic equation of the recurrence relation is 𝑟² − 𝑟 − 1 = 0,
which has two distinct roots:
𝑟1,2 = (1 ± √(1² − 4(1)(−1))) / 2 = (1 ± √5) / 2

Hence, 𝐹𝑛 = 𝛼((1 + √5)/2)ⁿ + 𝛽((1 − √5)/2)ⁿ

Solve the system of equations
𝐹0 = 𝛼((1 + √5)/2)⁰ + 𝛽((1 − √5)/2)⁰ = 𝛼 + 𝛽 = 0
𝐹1 = 𝛼((1 + √5)/2)¹ + 𝛽((1 − √5)/2)¹ = 1
to obtain 𝛼 = 1/√5 and 𝛽 = −1/√5.
56
Example: Compute the 𝒏𝒕𝒉 Fibonacci number

Therefore, the closed-form solution for the 𝑛ᵗʰ Fibonacci number, known
as Binet's formula, is
𝐹𝑛 = (1/√5)((1 + √5)/2)ⁿ − (1/√5)((1 − √5)/2)ⁿ

Let 𝜙 = (1 + √5)/2 ≈ 1.61803 (the golden ratio) and 𝜙̂ = (1 − √5)/2 = 1 − 𝜙 ≈ −0.61803.

Then, 𝐹𝑛 = (𝜙ⁿ − 𝜙̂ⁿ)/√5
57
Some example algorithms
𝑛𝑡ℎ Fibonacci number: Recursive approach
Fibonacci(n) {
if (n ≤ 1)
return n;
return Fibonacci(n - 1) + Fibonacci(n - 2);
}

• Let 𝐴(𝑛) be the number of times the basic operation is executed to compute 𝐹𝑛.
• The recurrence equation for this approach is
𝐴(𝑛) = 𝐴(𝑛 − 1) + 𝐴(𝑛 − 2) + 1 for 𝑛 > 1
𝐴(0) = 0, 𝐴(1) = 0
• Solving this recurrence equation gives us
𝐴(𝑛) = (1/√5)(𝜙ⁿ⁺¹ − 𝜙̂ⁿ⁺¹) − 1 ∈ Θ(𝜙ⁿ)
59
𝑛𝑡ℎ Fibonacci number: Non-recursive
• It is easy to construct a linear-time algorithm using the formula
𝐹𝑛 = (𝜙ⁿ − 𝜙̂ⁿ)/√5

• In practice, since 𝜙̂ⁿ → 0 as 𝑛 → ∞, we may use the formula
𝐹𝑛 = round(𝜙ⁿ/√5)
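A minimal C++ sketch of the rounding approach (the function name is chosen here; double precision limits its accuracy to roughly the first 70 Fibonacci numbers):

    #include <cmath>
    #include <cstdio>

    // Computes F(n) by rounding phi^n / sqrt(5).
    // Exact only while F(n) fits the 53-bit mantissa of a double (n up to ~70).
    unsigned long long fib_rounded(int n) {
        const double sqrt5 = std::sqrt(5.0);
        const double phi = (1.0 + sqrt5) / 2.0;
        return static_cast<unsigned long long>(std::llround(std::pow(phi, n) / sqrt5));
    }

    int main() {
        for (int n = 0; n <= 10; ++n)
            std::printf("F(%d) = %llu\n", n, fib_rounded(n));   // 0 1 1 2 3 5 8 13 21 34 55
    }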

60
𝑛𝑡ℎ Fibonacci number: DP
• Dynamic programming
Version 1 (three variables):

Fibonacci(n) {
    if (n ≤ 1)
        return n;
    f0 = 0;
    f1 = 1;
    for (i = 2; i ≤ n; i++) {
        fn = f0 + f1;
        f0 = f1;
        f1 = fn;
    }
    return fn;
}

Version 2 (two variables):

Fibonacci(n) {
    if (n ≤ 1)
        return n;
    f0 = 0;
    f1 = 1;
    for (i = 2; i ≤ n; i++) {
        f1 = f1 + f0;
        f0 = f1 - f0;
    }
    return f1;
}
61
𝑛𝑡ℎ Fibonacci number: Matrix approach
• Consider the following equation, where 𝑄 denotes the matrix [[1, 1], [1, 0]]:
[[𝐹(𝑛+1), 𝐹(𝑛)], [𝐹(𝑛), 𝐹(𝑛−1)]] = 𝑄ⁿ, for 𝑛 ≥ 1
• It is easy to prove its correctness using induction.

• The following formula computes the right-hand side efficiently:
𝑄ⁿ = (𝑄^(𝑛/2))²                if 𝑛 is even
𝑄ⁿ = 𝑄 × (𝑄^(⌊𝑛/2⌋))²          if 𝑛 is odd

• The running time of this approach is Θ(log₂ 𝑛).

62
𝑛𝑡ℎ Fibonacci number: Matrix approach
int fib(int n) {
    int F[2][2] = {{1, 1}, {1, 0}};
    if (n == 0) return 0;
    power(F, n - 1);
    return F[0][0];
}

void power(int F[2][2], int n) {
    if (n <= 1) return;
    int T[2][2] = {{1, 1}, {1, 0}};
    power(F, n / 2);
    multiply(F, F);
    if (n % 2 != 0)
        multiply(F, T);
}

63
𝑛𝑡ℎ Fibonacci number: Matrix approach
void multiply(int F[2][2], int T[2][2]) {
    int t1 = F[0][0]*T[0][0] + F[0][1]*T[1][0];
    int t2 = F[0][0]*T[0][1] + F[0][1]*T[1][1];
    int t3 = F[1][0]*T[0][0] + F[1][1]*T[1][0];
    int t4 = F[1][0]*T[0][1] + F[1][1]*T[1][1];
    F[0][0] = t1; F[0][1] = t2;
    F[1][0] = t3; F[1][1] = t4;
}

int main() {
    cout << fib(5);     // prints 5
}

• The weakness of the above code is its recursive calls.
• An iterative (loop-based) version, shown next, avoids the recursion overhead.
64
𝑛𝑡ℎ Fibonacci number: Matrix approach

int Fibonacci(int n) {
    // Iterative exponentiation based on the matrix Q = [[1, 1], [1, 0]]:
    // (h, k) encodes the current power of Q, (i, j) accumulates the result,
    // and after the loop j = F(n). Runs in Θ(log n) time.
    int i = 1, j = 0, k = 0, h = 1, t;
    while (n) {
        if (n % 2) {          // odd bit: multiply the accumulator by the current power
            t = j * h;
            j = i * h + j * k + t;
            i = i * k + t;
        }
        t = h * h;            // square the current power of Q
        h = 2 * k * h + t;
        k = k * k + t;
        n = n / 2;
    }
    return j;
}
65