DAA_Tutorial_1__2_TranscriptionWIP
Question 1.
This question is designed to make you think about the growth rates of different functions for increasing input size.
ANSWER:
Fastest → Slowest
Note: To illustrate the impact of exponential growth, consider a chess board of size 8 x 8, i.e., with 64 locations. On day 1,
you put one grain of rice at the first of the 64 locations. Then you keep doubling the number of grains placed
on each of the following days, i.e., on day 2 you put 2 x 1 = 2 grains at location-2, on day 3 you put 2 x 2 = 4 grains
at location-3, etc. Hence you would put 2^(64−1) grains at the last location-64. Thus, in total there will be
2^64 − 1 ≈ 2^64 grains of rice on the board. How many kilograms of rice is that?
If one gram of rice contains 15 to 50 grains, i.e., 1 kg has 15,000 to 50,000 grains, then 2^64 grains of rice ≈ 1.84 × 10^19 grains,
weighing between 1.84 × 10^19 / 50,000 ≈ 3.7 × 10^14 and 1.84 × 10^19 / 15,000 ≈ 1.2 × 10^15 kilograms!
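As a quick check of this arithmetic, a minimal sketch (the class name is illustrative; the 15,000 and 50,000 grains-per-kilogram figures are the ones assumed in the note above):
JAVA
import java.math.BigInteger;

// Sketch: compute 2^64 - 1 grains exactly and convert to kilograms using the
// assumed 15,000-50,000 grains-per-kilogram range from the note.
public class RiceOnChessboard {
    public static void main(String[] args) {
        BigInteger grains = BigInteger.valueOf(2).pow(64).subtract(BigInteger.ONE); // 2^64 - 1
        System.out.println("Total grains: " + grains); // 18446744073709551615
        System.out.println("At 50,000 grains/kg: " + grains.divide(BigInteger.valueOf(50_000)) + " kg");
        System.out.println("At 15,000 grains/kg: " + grains.divide(BigInteger.valueOf(15_000)) + " kg");
    }
}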
b)
Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For input of
size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64n lg(n) steps. For which values of n does
insertion sort beat merge sort?
ANSWER:
8n^2 < 64n lg(n)
n < 8 lg(n)
True for n = 2, 4, 8, 16, 32. Not true for n = 64. So, we can guess that the value of n is between 33 and 64, i.e.,
32 < n < 64
My Methodology:
8n^2 < 64n lg(n)
8n^2 / 8 < 64n lg(n) / 8
n^2 / n < 8n lg(n) / n
n < 8 lg(n)
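To sanity-check the guessed range 32 < n < 64, a minimal sketch (the class name is illustrative) that searches for the first n at which 8n^2 is no longer smaller than 64n lg(n):
JAVA
// Sketch: find the first n where insertion sort's 8n^2 steps stop beating
// merge sort's 64 n lg(n) steps.
public class SortCrossover {
    public static void main(String[] args) {
        for (int n = 2; n <= 64; n++) {
            double insertion = 8.0 * n * n;                        // 8n^2
            double merge = 64.0 * n * (Math.log(n) / Math.log(2)); // 64 n lg(n)
            if (insertion >= merge) {
                System.out.println("Insertion sort stops winning at n = " + n);
                break;
            }
        }
    }
}
This should report n = 44, i.e., insertion sort wins for n up to 43, which is consistent with the estimate 32 < n < 64 above.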
c)
What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster than an algorithm
whose running time is 2^n on the same machine?
Hint: You can solve this by graphing both functions and finding their intersection.
ANSWER:
100n^2 < 2^n
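Rather than graphing, the intersection can also be located with a short search; a minimal sketch (the class name is illustrative):
JAVA
// Sketch: find the smallest n for which 100 n^2 < 2^n.
public class SmallestN {
    public static void main(String[] args) {
        for (int n = 1; n <= 64; n++) {
            if (100.0 * n * n < Math.pow(2, n)) {
                System.out.println("Smallest n: " + n); // should print 15
                break;
            }
        }
    }
}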
Question 2.
Consider the following Java method.
JAVA
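The method itself is not reproduced in this transcription. Based on the answers below (a t != null guard and a visit to every node of a tree), a hypothetical method of the kind being analysed might look like the following; the names TreeNode, left, right, and visit are illustrative, not the original code:
JAVA
// Hypothetical reconstruction (NOT the original method): a recursive traversal
// that visits every node of a binary tree exactly once, guarded by a t != null test.
class TreeNode {
    int value;
    TreeNode left, right;
}

class TreeTraversal {
    static void visit(TreeNode t) {
        if (t != null) {    // the t != null test discussed in answer (a)
            visit(t.left);  // recurse into the left subtree
            visit(t.right); // recurse into the right subtree
        }
    }
}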
a) Describe a scenario for its best-case running time and calculate its best-case asymptotic time complexity.
b) Describe a scenario for its worst-case running time and calculate its worst-case asymptotic time complexity.
ANSWERS:
a) We can argue that there is no special best case for a tree with n nodes. In other words, the case where t is null
(i.e., the test t != null fails immediately) is NOT the best case, because you must not assume that the size of the
problem is a constant such as 1 or 0. You also cannot assume that the input is in error, e.g. that the first node is disconnected.
Thus the best-case time complexity of the method is still O(n), because it must visit each node once.
b)
The worst case is non-termination, i.e., when t represents a graph with a cycle rather than a tree.
OR
Assuming a valid tree input containing n nodes, the worst case is when the method must visit all nodes, and thus it has time
complexity O(n).
The average case is the same as the worst case, also O(n).
a = 2, b = 2: n^(log_2 2) = n ≥ f(n) = 1
Thus, by the definition of Big-Oh, we want to prove that P(n) ≤ cn for a pair of values c and n_0:
cn/2 − 2b + 1 ≤ cn − 2b + 1 = cn − b − (b − 1) ≤ cn − b, for b − 1 ≥ 0, i.e., b ≥ 1
Question 3
Part a)
Express each of the following functions in terms of O, Ω, and Θ. For each, prove that your answer is correct.
i)
n^3/1000 − 100n^2 − 100n + 3
ANSWER:
Big-Oh:
n^3/1000 − 100n^2 − 100n + 3 = O(n^3)
// Notice that the largest exponent is 3. Since you are asked only to express the function in terms of Big-Oh, you don't
need to prove it at this stage.
Proof:
First, state what you want to prove. For this case it is f(n) = O(n^3); state that, by definition, you want to prove that
f(n) ≤ cn^3
for a given pair of values c > 0 and n_0 > 0.
Second, you must follow logical steps to find the values of c > 0 and n_0 > 0. In other words, you are not allowed to
just guess the values of c and n_0.
Logical Steps:
n^3/1000 − 100n^2 − 100n + 3 ≤ n^3/1000 + 3, for n ≥ 1 // drop the negative terms −100n^2 and −100n
≤ n^3/1000 + 3n^3, for n ≥ 1 // since 3 ≤ 3n^3 for n ≥ 1
≤ 4n^3, for n ≥ 1
Thus, n^3/1000 − 100n^2 − 100n + 3 ≤ 4n^3 for n ≥ 1, i.e., c = 4 and n_0 = 1 work.
Note: You can verify the correctness of your solution by substituting the values of c and n_0 into the inequality!
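As the note suggests, the bound can also be spot-checked numerically; a minimal sketch (the class name and sample values are arbitrary):
JAVA
// Sketch: spot-check f(n) = n^3/1000 - 100n^2 - 100n + 3 <= 4n^3 for a few n >= 1.
public class BigOhCheck {
    public static void main(String[] args) {
        for (double n : new double[]{1, 10, 1000, 100000}) {
            double lhs = n * n * n / 1000 - 100 * n * n - 100 * n + 3; // f(n)
            double rhs = 4 * n * n * n;                                // c * n^3 with c = 4
            System.out.println("n = " + n + ": f(n) = " + lhs + " <= 4n^3 = " + rhs + " ? " + (lhs <= rhs));
        }
    }
}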
Big-Omega:
n^3/1000 − 100n^2 − 100n + 3 = Ω(n^3)
// Since you are asked only to express the function in terms of Big-Omega, you don't need to prove it at this stage.
Proof:
By the definition of Big-Omega, you must prove n^3/1000 − 100n^2 − 100n + 3 ≥ cn^3, for a given pair of values c > 0
and n_0 > 0.
Logical Steps:
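The logical steps are not reproduced in this transcription (note 1 under Big-Theta below quotes c ≤ 1.66 × 10^−4 and n_0 = 120000 as the values obtained). One possible chain of inequalities, offered only as a sketch, arrives at the different but equally valid pair c = 1/2000 and n_0 = 400000:
LATEX
% A possible derivation (not necessarily the tutorial's own steps):
\begin{gather*}
\frac{n^{3}}{1000} - 100n^{2} - 100n + 3 \;\ge\; \frac{n^{3}}{1000} - 200n^{2}
    \quad \text{for } n \ge 1, \text{ since } 100n \le 100n^{2} \text{ and } 3 > 0; \\
\frac{n^{3}}{1000} - 200n^{2} \;\ge\; \frac{n^{3}}{2000}
    \iff \frac{n^{3}}{2000} \ge 200n^{2}
    \iff n \ge 400000.
\end{gather*}
Hence the function is ≥ (1/2000)·n^3 for all n ≥ 400000; any such valid pair of c and n_0 establishes the Ω(n^3) bound.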
Big-Theta:
n^3/1000 − 100n^2 − 100n + 3 = Θ(n^3)
Proof:
By the definition of Big-Theta, you must show that c_2·n^3 ≤ n^3/1000 − 100n^2 − 100n + 3 ≤ c_1·n^3, for given values
of c_1 > 0, c_2 > 0, and n_0 > 0.
Logical steps:
1. You can prove the problem separately for Big-Oh, to get c_1 and n_0, and for Big-Omega, to compute c_2 and n_0. For
this problem, you have obtained c_1 ≥ 4 and n_0 = 1 for Big-Oh, and c_2 ≤ 1.66 × 10^−4 and n_0 = 120000 for Big-Omega.
2. Select the larger of the two n_0 values calculated for Big-Oh and Big-Omega for use in your Big-Theta. Then you claim
that c_2·n^3 ≤ n^3/1000 − 100n^2 − 100n + 3 ≤ c_1·n^3 for all n ≥ n_0, and hence the function is Θ(n^3).
ii)
∑_{i=1}^{n} i = n(n + 1)/2
ANSWER:
i)
Big Oh:
n(n + 1)/2 = O(n^2); notice that the largest exponent is 2, after simplifying the function to (n^2 + n)/2.
Logical steps:
(n^2 + n)/2 ≤ n^2 + n // this step is OK because the RHS is exactly twice the LHS
≤ n^2 + n^2, for n ≥ 1 // notice that n^2 ≥ n for n ≥ 1, and thus this step is OK
= 2n^2
≤ cn^2, for n ≥ 1 and c ≥ 2
ii)
Big Omega:
n(n + 1)/2 = Ω(n^2)
Logical steps:
n(n + 1)/2 = (n^2 + n)/2 ≥ cn^2 // divide both sides by n^2
1/2 + 1/(2n) ≥ c
The smallest possible value of the LHS is 1/2, i.e., as n goes to infinity. Thus, we have
1/2 + 1/(2n) > c for n ≥ 1 and c < 1/2
iii)
Big Theta:
n(n + 1)/2 = Θ(n^2)
iii)
∑_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6
ANSWERS:
Big Oh:
n(n + 1)(2n + 1)/6 = O(n^3)
Notice that the largest exponent is 3 (after simplifying the function to (2n^3 + 3n^2 + n)/6).
Proof: Follow the steps as in part (i).
By the definition of Big-Oh, we must prove:
n(n + 1)(2n + 1)/6 ≤ cn^3
Logical Steps:
n(n + 1)(2n + 1)/6 ≤ n(n + 1)(2n + 1) // notice that this inequality is true for n ≥ 1
≤ n(2n)(3n), for n ≥ 1 // since n + 1 ≤ 2n and 2n + 1 ≤ 3n for n ≥ 1
= 6n^3
≤ cn^3, for c ≥ 6 and n ≥ 1
Big Omega:
n(n + 1)(2n + 1)/6 = Ω(n^3)
Logical Steps:
n(n + 1)(2n + 1)/6 ≥ n(n)(2n)/6 // remove the "+ 1"s; the inequality is correct for any value of n ≥ 1
= 2n^3/6 = (1/3)n^3
≥ cn^3, for c ≤ 1/3 and n ≥ 1
An alternative proof:
(2n^3 + 3n^2 + n)/6 = (1/3)n^3 + (1/2)n^2 + (1/6)n ≥ (1/3)n^3 ≥ cn^3, for c ≤ 1/3 and n ≥ 1
Big Theta:
n(n + 1)(2n + 1)/6 = Θ(n^3)
part b)
ANSWER:
By definition of Big-Oh, we must prove:
10 + 100·log_2(x − 3) ≤ c·log_2(x) // we must find the values of c and x_0 that meet the inequality
Note: In general, O(log_a(x)) = O(log_b(x)) for any constant values a and b, because, for any constant c,
log_b(a) = log_c(a)/log_c(b); hence logarithms to different constant bases differ only by a constant factor.
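The transcription stops before specific values are found. One possible completion, offered only as a sketch (assuming the target bound is c·log_2(x)):
LATEX
% For x >= 4 we have x - 3 >= 1 and log_2(x) >= 2, so:
\begin{align*}
10 + 100\log_{2}(x-3) &\le 10 + 100\log_{2}(x)           && \text{since } x - 3 \le x \\
                      &\le 10\log_{2}(x) + 100\log_{2}(x) && \text{since } \log_{2}(x) \ge 1 \text{ for } x \ge 2 \\
                      &= 110\log_{2}(x)
\end{align*}
so c = 110 and x_0 = 4 satisfy the inequality.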
Question 4
This question is somewhat more challenging. Feel free to work with friends or to look for guidance elsewhere, but
make sure you understand any answers obtained.
By Stirling's approximation, n! ≈ √(2πn)·(n/e)^n
Note:
π = 3.14159. . .
e = 2.71828183. . .
part a)
ANSWER
log_2(n!) = log_2(√(2πn)·(n/e)^n) // by Stirling's approximation
log_2(n!) = n·log_2(n) − n·log_2(e) + (log_2(2πn))/2 // simplification
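As a sanity check, the simplified formula can be compared against the exact value of log_2(n!) for a sample n; a minimal sketch (the class name and the sample n = 20 are arbitrary):
JAVA
// Sketch: compare log2(n!) computed as a sum of logs with Stirling's simplification
// n*log2(n) - n*log2(e) + log2(2*pi*n)/2.
public class StirlingCheck {
    public static void main(String[] args) {
        int n = 20; // arbitrary sample value
        double exact = 0;
        for (int i = 2; i <= n; i++) {
            exact += Math.log(i) / Math.log(2); // accumulate log2(i), giving log2(n!)
        }
        double approx = n * (Math.log(n) / Math.log(2))
                      - n * (Math.log(Math.E) / Math.log(2))
                      + (Math.log(2 * Math.PI * n) / Math.log(2)) / 2;
        System.out.println("log2(20!) exact = " + exact + ", Stirling = " + approx);
    }
}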
part b)
∑_{i=1}^{n} log_2(i) = O(n·log(n))
ANSWER:
// notice that log(a) + log(b) = log(ab)
∑_{i=1}^{n} log_2(i) = log_2(∏_{i=1}^{n} i) = log_2(n!), and from part (a):
Logical Steps:
n·log_2(n) − n·log_2(e) + (log_2(2πn))/2 ≤ n·log_2(n) + (log_2(2πn))/2 // dropping −n·log_2(e), which is ≤ 0
Hence ∑_{i=1}^{n} log_2(i) = O(n·log(n)) for c ≥ 4 and n ≥ π
Note:
The short way is to note that each of the n terms in the sum is O(log(n)). Thus, you can expect the complexity
to be O(n·log(n)). However, this is an informal argument, not a proof.
An alternative proof:
∑_{i=1}^{n} log_2(i) = log_2(∏_{i=1}^{n} i)
= log_2(n!)
≈ log_2(√(2πn)·(n/e)^n) // by Stirling's approximation
= 0.5·log_2(2πn) + n·log_2(n) − n·log_2(e)
≤ log_2(2π) + log_2(n) + n·log_2(n) // the largest-order term in n is n·log_2(n)
Hence ∑_{i=1}^{n} log_2(i) = O(n·log(n))
// you need to give the values of c and n_0 to prove it formally following the definition of Big-Oh
Question 5
CBF, insert the question here later
ANSWERS:
part a)
JAVA
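The original method is not reproduced in this transcription. The 2n + 1 step count analysed in (iv) below suggests a single loop over n elements; a hypothetical example of that shape (all names are illustrative, not the original code):
JAVA
// Hypothetical reconstruction (NOT the original method): one pass over n elements,
// whose loop test runs n + 1 times and whose body runs n times -- 2n + 1 steps,
// ignoring the constant-cost setup and return.
class ArraySum {
    static int sum(int[] a) {
        int s = 0;
        for (int i = 0; i < a.length; i++) { // test evaluated n + 1 times
            s += a[i];                       // body executed n times
        }
        return s;
    }
}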
(iv)
We must show 2n + 1 ≤ cn for some c > 0 and n_0 > 0:
2n + 1 ≤ 2n + n, for n ≥ 1
= 3n
≤ cn, for c ≥ 3 and n ≥ 1
part b)
for i ← n downto 0 do             // n + 2 times
    for j ← 0 to i do             // ∑_{i=0}^{n} (i + 2) times
        A[i][j] ← A[i][j + 1] + 2 // ∑_{i=0}^{n} (i + 1) times
(ii)
∑_{i=0}^{n} (i + 2) = 2 + 3 + ... + n + (n + 1) + (n + 2) = n(n + 1)/2 − 1 + (n + 1) + (n + 2)
∑_{i=0}^{n} (i + 1) = 1 + 2 + ... + n + (n + 1) = n(n + 1)/2 + (n + 1)
Total: (n + 2) + [n(n + 1)/2 − 1 + (n + 1) + (n + 2)] + [n(n + 1)/2 + (n + 1)]
= n^2 + n + 4n + 5
= n^2 + 5n + 5
(iii)
Assume each line requires O(1) time. The running time is O(n^2).
(iv)
n^2 + 5n + 5 ≤ cn^2
≤ n^2 + 5n^2 + 5n^2, for n ≥ 1
= 11n^2
≤ cn^2, for c ≥ 11 and n ≥ 1
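To make the counts in (i)–(ii) concrete, here is a direct Java transliteration of the part (b) pseudocode, offered as a sketch; the array dimensions and the sample n are assumptions, since the question does not state them:
JAVA
// Sketch: run the nested loops from part (b) and count body executions.
// The array is sized (n+1) x (n+2) so that A[i][j+1] is always in bounds (an assumption).
public class StepCount {
    public static void main(String[] args) {
        int n = 10; // arbitrary sample size
        int[][] A = new int[n + 1][n + 2];
        long bodySteps = 0;
        for (int i = n; i >= 0; i--) {     // outer test runs n + 2 times
            for (int j = 0; j <= i; j++) { // inner test runs i + 2 times per i
                A[i][j] = A[i][j + 1] + 2; // body runs i + 1 times per i
                bodySteps++;
            }
        }
        System.out.println("body executions = " + bodySteps
                + ", formula (n+1)(n+2)/2 = " + ((n + 1) * (n + 2) / 2));
    }
}
For n = 10 this reports 66 body executions, matching ∑_{i=0}^{n} (i + 1) = (n + 1)(n + 2)/2.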
part c)
i ← n          // 1 step
while i > 0 do // lg(n) + 2 times
    i ← i/2    // lg(n) + 1 times; remember that this is an integer division
(i)
See above. Notice that if n = 8, the while-loop test is executed lg(8) + 2 = 5 times,
i.e., i takes the values 8, 4, 2, 1 (four successful tests) and then 0 (the exit condition) → 5 tests.
In general, let 2^h = n, i.e., h = lg(n).
(iii) O(lg(n))
(iv)
2lg(n) + 4 ≤ c·lg(n)
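To sanity-check the lg(n) + 2 count of loop-test evaluations, a minimal sketch (the class name is illustrative) that counts them for powers of two:
JAVA
// Sketch: count evaluations of the test "i > 0" in the halving loop of part (c)
// and compare with lg(n) + 2.
public class HalvingLoop {
    public static void main(String[] args) {
        for (int n = 1; n <= 1024; n *= 2) {
            int i = n, tests = 0;
            while (true) {
                tests++;           // one evaluation of the loop test
                if (i <= 0) break;
                i = i / 2;         // integer division, as in the pseudocode
            }
            int lg = 31 - Integer.numberOfLeadingZeros(n); // floor(lg n)
            System.out.println("n = " + n + ": tests = " + tests + ", lg(n) + 2 = " + (lg + 2));
        }
    }
}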
part d)
for i ← 1 to n do  // n + 1 times
    j ← i          // n times
    while j > 0 do // ∑_{i=1}^{n} (lg(i) + 2) times
        j ← j/2    // ∑_{i=1}^{n} (lg(i) + 1) times
(ii)
(n + 1) + n + (lg(n!) + 2n) + (lg(n!) + n)
= 2lg(n!) + 5n + 1
Note:
∑_{i=1}^{n} (lg(i) + 2) = lg(1) (which = 0) + lg(2) + lg(3) + ... + lg(n) + 2n
= lg(n!) + 2n
∑_{i=1}^{n} (lg(i) + 1) = lg(1) (which = 0) + lg(2) + lg(3) + ... + lg(n) + n
= lg(n!) + n
(iv)
2lg(n!) + 5n + 1 ≤ c·n·lg(n)
Question 6
While induction proofs are important in themselves, these proofs are very useful in complexity analysis, where
summation is often used.
Prove by induction that:
a) ∑_{i=1}^{n} i = n(n + 1)/2
Note: The proof for this equation has been discussed in the lecture.
b) ∑_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6
c) ∑_{i=1}^{n} 2^(i−1) = 2^n − 1
Notes:
In general, a proof by induction for a statement, e.g., S(n): ∑_{i=1}^{n} i = n(n + 1)/2, requires proving two facts:
1. The base case
2. The inductive step
ANSWERS:
part a)
∑_{i=1}^{n} i = n(n + 1)/2
The solution has been discussed in the lecture, and thus can be skipped in the tutorial!
Base Step:
Show that ∑_{i=1}^{n} i = n(n + 1)/2 for n = 1.
LHS: ∑_{i=1}^{1} i = 1
RHS: 1(1 + 1)/2 = 1
Since LHS = RHS = 1, the equation is true for the base case n = 1.
Inductive Step:
Induction hypothesis: ∑_{i=1}^{k} i = k(k + 1)/2
Note: It is also OK if the hypothesis is stated for n and the inductive step is for n + 1.
To show: ∑_{i=1}^{k+1} i = (k + 1)((k + 1) + 1)/2
LHS = ∑_{i=1}^{k+1} i = (k + 1) + ∑_{i=1}^{k} i
By the induction hypothesis, ∑_{i=1}^{k} i = k(k + 1)/2, so
= (k + 1) + k(k + 1)/2
= 2(k + 1)/2 + k(k + 1)/2
= (2k + 2 + k^2 + k)/2
= (k^2 + 3k + 2)/2
= (k + 1)(k + 2)/2
= (k + 1)((k + 1) + 1)/2
= RHS
part b)
∑_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6
Base Step:
LHS: ∑_{i=1}^{1} i^2 = 1^2 = 1
RHS: 1(1 + 1)(2·1 + 1)/6 = 6/6 = 1
Since we have LHS = RHS = 1, the equation is true for the base case n = 1.
Inductive Step:
Induction hypothesis: ∑_{i=1}^{k} i^2 = k(k + 1)(2k + 1)/6
To show: ∑_{i=1}^{k+1} i^2 = (k + 1)((k + 1) + 1)(2(k + 1) + 1)/6
LHS: ∑_{i=1}^{k+1} i^2 = (k + 1)^2 + ∑_{i=1}^{k} i^2
Since ∑_{i=1}^{k} i^2 = k(k + 1)(2k + 1)/6 by the induction hypothesis, we can compute the LHS:
= (k + 1)^2 + k(k + 1)(2k + 1)/6
The final step is to simplify the LHS and RHS to show that LHS = RHS.
(Note: below is a section from my FCS Assignment. I had to prove this same exact thing, so I will just copy and
paste.)
(k(k + 1)(2k + 1) + 6(k + 1)^2)/6 = (k + 1)(k + 2)(2k + 2 + 1)/6
(k(2k^2 + 3k + 1) + 6(k + 1)^2)/6 = (k^2 + 3k + 2)(2k + 3)/6
((2k^3 + 3k^2 + k) + 6(k + 1)^2)/6 = (2k^3 + 9k^2 + 13k + 6)/6
((2k^3 + 3k^2 + k) + 6(k^2 + 2k + 1))/6 = (2k^3 + 9k^2 + 13k + 6)/6
((2k^3 + 3k^2 + k) + (6k^2 + 12k + 6))/6 = (2k^3 + 9k^2 + 13k + 6)/6
(2k^3 + 9k^2 + 13k + 6)/6 = (2k^3 + 9k^2 + 13k + 6)/6
part c)
∑_{i=1}^{n} 2^(i−1) = 2^n − 1
Base Step:
Show that ∑_{i=1}^{n} 2^(i−1) = 2^n − 1 for n = 1.
LHS: ∑_{i=1}^{1} 2^(i−1) = 2^(1−1) = 2^0 = 1
RHS: 2^1 − 1 = 2 − 1 = 1
Since we have LHS = RHS = 1, the equation is true for base case n = 1
Inductive Step:
Induction hypothesis: ∑_{i=1}^{k} 2^(i−1) = 2^k − 1
To show: ∑_{i=1}^{k+1} 2^(i−1) = 2^(k+1) − 1
LHS:
∑_{i=1}^{k+1} 2^(i−1) = 2^((k+1)−1) + ∑_{i=1}^{k} 2^(i−1)
= 2^k + 2^k − 1 // by the induction hypothesis
= 2·2^k − 1
= 2^(k+1) − 1
RHS:
2^(k+1) − 1
So LHS = RHS. ∴ QED
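Induction provides the proofs; a quick numerical check of all three identities for small n (a minimal sketch, class name illustrative) can also catch algebra slips:
JAVA
// Sketch: numerically verify the three closed forms from Question 6 for n = 1..10.
public class SummationCheck {
    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++) {
            long s1 = 0, s2 = 0, s3 = 0;
            for (int i = 1; i <= n; i++) {
                s1 += i;             // sum of i
                s2 += (long) i * i;  // sum of i^2
                s3 += 1L << (i - 1); // sum of 2^(i-1)
            }
            boolean ok = s1 == (long) n * (n + 1) / 2
                      && s2 == (long) n * (n + 1) * (2L * n + 1) / 6
                      && s3 == (1L << n) - 1;
            System.out.println("n = " + n + ": all three identities hold? " + ok);
        }
    }
}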
Question 7
Use the master method to give tight asymptotic bounds for the following recurrences.
a) T(n) = 4T(n/2) + n
b) T(n) = 4T(n/2) + n^2
c) T(n) = 4T(n/2) + n^3
Hint: The function is not in a form suitable for the master method. However, it can be converted to the required
form for the master method by using another variable k = log_2(n). For details, read page 86 of the textbook (third
edition).
ANSWERS:
part a)
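The worked answers are not captured in this transcription. For reference, a sketch of the standard master-method case analysis for parts a)–c) as stated above (all three have a = 4, b = 2, so n^(log_b a) = n^2):
LATEX
% Master method with a = 4, b = 2, so n^{\log_b a} = n^{\log_2 4} = n^2.
\begin{align*}
\text{a)}\quad f(n) &= n = O(n^{2-\epsilon}),\ \epsilon = 1
    &&\Rightarrow\ T(n) = \Theta(n^{2}) && \text{(case 1)} \\
\text{b)}\quad f(n) &= n^{2} = \Theta(n^{2})
    &&\Rightarrow\ T(n) = \Theta(n^{2}\lg n) && \text{(case 2)} \\
\text{c)}\quad f(n) &= n^{3} = \Omega(n^{2+\epsilon}),\ \epsilon = 1,\ \text{and } 4f(n/2) = \tfrac{1}{2}n^{3} \le \tfrac{1}{2}f(n)
    &&\Rightarrow\ T(n) = \Theta(n^{3}) && \text{(case 3)}
\end{align*}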
Question 8
...
Question 9
...
Question 10
...
Question 11
...