
DAA Tutorial 1 + 2 Transcription

Question 1.
This question is designed to make you think about the growth rates of different functions for increasing input size.

a) Rank the following functions by order of growth

$n^2, \lg n, n!, \sqrt{n}, (\lg n)^2, \log(n^2), n \lg n, 2^n, n$

ANSWER:

Ranked from slowest-growing (fastest to compute) to fastest-growing (slowest):

log(n), log(n^2), (log(n))^2, sqrt(n), n, n*log(n), n^2, 2^n, n!

Note that log(n^2) = 2log(n), so it has the same order of growth as log(n).

Note: To illustrate the impact of exponential growth, consider a chess board of size 8 x 8, i.e., 64 locations. On day 1, you put one grain of rice at location 1. Then you keep doubling the number of grains on each following day: on day 2 you put 2 x 1 = 2 grains at location 2, on day 3 you put 2 x 2 = 4 grains at location 3, and so on. Hence you would put 2^63 grains at the last location, location 64. Thus, in total there will be 2^64 − 1 grains of rice on the board. How many kilograms of rice is that?

If one gram of rice contains 15 to 50 grains, i.e., 1 kg has 15,000 to 50,000 grains, then 2^64 grains ≈ 1.84 * 10^19 grains, which weigh between 1.84 * 10^19 / 50,000 and 1.84 * 10^19 / 15,000 kilograms, i.e., roughly 3.7 * 10^14 to 1.2 * 10^15 kg!

b)
Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For input of size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64n*lg(n) steps.

For which values of n does insertion sort beat merge sort?

Note: lg(n) denotes log_2(n)

ANSWER:

8n^2 < 64n*lg(n)

n < 8lg(n)

True for n = 2, 4, 8, 16, 32. Not true for n = 64. So the crossover lies between 32 and 64, i.e., 32 < n < 64. (Checking intermediate values shows that n < 8lg(n) holds for 2 ≤ n ≤ 43.)

My Methodology:

Let I(n) be the number of steps of insertion sort

Let M(n) be the number of steps of merge sort

Note: lg(n) = log_2(n)

Assuming "beat" means "takes fewer steps"

For insertion sort to beat merge sort:

I(n) < M(n)

8n^2 < 64n*lg(n)

Divide both sides by 8:

n^2 < 8n*lg(n)

Divide both sides by n (valid since n > 0):

n < 8lg(n)

True for n = 2, 4, 8, 16, 32. Not true for n = 64.

So the crossover lies between 32 and 64, i.e., 32 < n < 64
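The crossover can also be located by a small numerical scan. A minimal sketch (the class and method names here are my own, not from the tutorial):

```java
public class SortCrossover {
    // Largest n (searching 2..1000) for which 8n^2 < 64 n lg n,
    // i.e., for which this insertion sort still beats this merge sort.
    static int largestWinningN() {
        int best = -1;
        for (int n = 2; n <= 1000; n++) {
            double insertion = 8.0 * n * n;
            double merge = 64.0 * n * (Math.log(n) / Math.log(2)); // lg n = ln n / ln 2
            if (insertion < merge) best = n;
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(largestWinningN()); // prints 43, inside the 32 < n < 64 range
    }
}
```

This confirms the hand calculation: insertion sort wins up to n = 43, and merge sort wins from n = 44 onwards.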

c)

What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster than an algorithm whose running time is 2^n on the same machine?

Hint: You can solve this by graphing both functions and finding the intersection

ANSWER:

100n^2 < 2^n

For n = 13 → 100*169 = 16900 > 2^13 = 8192

For n = 14 → 100*196 = 19600 > 2^14 = 16384

For n = 15 → 100*225 = 22500 < 2^15 = 32768

Solving for n, we obtain n ≥ 15, i.e., the smallest such value is n = 15
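The same search can be automated. A quick sketch (the helper name `smallestN` is my own):

```java
public class SmallestN {
    // Smallest n for which 100 n^2 takes fewer steps than 2^n.
    static int smallestN() {
        for (int n = 1; n <= 62; n++) {          // n <= 62 keeps 1L << n within long range
            if (100L * n * n < (1L << n)) return n;
        }
        return -1;                                // not found in range
    }

    public static void main(String[] args) {
        System.out.println(smallestN()); // prints 15 (100*225 = 22500 < 2^15 = 32768)
    }
}
```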

Question 2.
Consider the following Java method.

JAVA

public void traverse(Node t)
{
    if (t != null)
    {
        traverse(t.leftChild);
        System.out.println(t.getData());
        traverse(t.rightChild);
    }
}

a) Describe a scenario for its best-case running time and calculate its best-case asymptotic time complexity.

b) Describe a scenario for its worst-case running time and calculate its worst-case asymptotic time complexity.

ANSWERS:

a) We can argue that there is no special best case for a tree with n nodes. In other words, the case t == null (an empty tree) is NOT the best case, because you must not assume that the size of the problem is a constant such as 0 or 1. You also cannot assume that the input is in error, e.g., that the first node is disconnected.

Thus the time complexity of the method is still O(n), because it must visit each node once
b)
The worst case is non-termination, i.e., when t represents a graph with a cycle, rather than a tree.

OR

Assuming a valid tree input containing n nodes the worst case is when it must visit all nodes, and thus has time
complexity of O(n).
Average case is the worst case, also O(n)
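To see the O(n) behaviour concretely, here is a runnable sketch of the method with a visit counter standing in for the println (the minimal `Node` class below is my own stand-in for whatever the course provides):

```java
class Node {
    Node leftChild, rightChild;
    int data;
    Node(int data) { this.data = data; }
    int getData() { return data; }
}

public class TraverseDemo {
    static int visits = 0;

    // In-order traversal: each node is reached exactly once, so O(n) overall.
    static void traverse(Node t) {
        if (t != null) {
            traverse(t.leftChild);
            visits++;                 // stands in for System.out.println(t.getData())
            traverse(t.rightChild);
        }
    }

    public static void main(String[] args) {
        Node root = new Node(2);
        root.leftChild = new Node(1);
        root.rightChild = new Node(3);
        traverse(root);
        System.out.println(visits);   // prints 3: one visit per node
    }
}
```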

ALTERNATIVE COMPLEXITY ANALYSIS

Using Master Theorem:

For the recurrence P(n) = 2P(n/2) + O(1): a = 2, b = 2, so n^(log_2(2)) = n ≥ f(n) = 1

→ Case 1 → O(n). Notice that f(n) = 1 = O(n^(1−ε)) satisfies the polynomial condition for 0 < ε ≤ 1

Using the Substitution Method, i.e., induction:

Guess: P(n) = O(n)

Thus, by definition of Big-Oh, we want to prove that P(n) ≤ cn for some pair of values c and n_0

Assume that it is true for smaller inputs, i.e., P(n/2) ≤ cn/2 // Hypothesis 1

From P(n) = 2P(n/2) + O(1), we have

P(n) ≤ 2 * (cn/2) + 1 = cn + 1

However, cn + 1 ≰ cn for any value of n

Thus, we cannot use Hypothesis 1 to prove the induction! Notice that we are only off by a constant!

What if we use a different hypothesis?

We want to prove that P(n) ≤ cn − b for some b ≥ 0

Assume that it is true for P(n/2) ≤ c(n/2) − b // Hypothesis 2

Thus, from P(n) = 2P(n/2) + 1, we have

P(n) ≤ 2(c(n/2) − b) + 1 = cn − 2b + 1 = (cn − b) − (b − 1) ≤ cn − b, for b − 1 ≥ 0, i.e., b ≥ 1

Base case: use n = 1

From P(n) = 2P(n/2) + O(1), setting n = 1 and O(1) = 1, we have P(1) = 2P(1/2) + 1. You can assume that P(1/2) = 0, so P(1) = 1

Thus, you have P(1) = 1 ≤ c * 1 − b for c > 1 and b = 1

In conclusion, P(n) = 2P(n/2) + O(1) = O(n) for c > 1, b = 1, and n_0 ≥ 1
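The substitution result can be sanity-checked numerically: for n a power of two, P(n) = 2P(n/2) + 1 with P(1) = 1 works out to exactly 2n − 1, which matches the hypothesis P(n) ≤ cn − b with c = 2 and b = 1. A sketch (the helper `p` is my own):

```java
public class RecurrenceCheck {
    // P(n) = 2 P(n/2) + 1 with P(1) = 1, evaluated for n a power of two.
    static long p(long n) {
        if (n == 1) return 1;
        return 2 * p(n / 2) + 1;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 1024; n *= 2) {
            // Closed form 2n - 1 fits P(n) <= cn - b with c = 2, b = 1.
            System.out.println(n + " -> " + p(n) + " (2n - 1 = " + (2 * n - 1) + ")");
        }
    }
}
```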

Question 3

Part a)
Express each of the following functions in terms of O, Ω, and Θ. For each, prove that your answer is correct.

i)

f(n) = n^3/1000 − 100n^2 − 100n + 3
ANSWER:
Big-Oh:

n^3/1000 − 100n^2 − 100n + 3 = O(n^3)

// Notice that the largest exponent is 3. Since you are only asked to express the function in terms of Big-Oh, you don't need to prove it at this stage

Proof:
First, state what you want to prove. For this case it's f(n) = O(n^3); state that, by definition, you want to prove that

f(n) ≤ cn^3

for some pair of values c > 0 and n_0 > 0

Second, you must follow logical steps to find the values of c > 0 and n_0 > 0. In other words, you are not allowed to just guess the values of c and n_0

By definition of Big-Oh, we must prove:

(n^3/1000 − 100n^2 − 100n + 3) ≤ cn^3, for some pair of values c > 0, n_0 > 0

Logical Steps:

(n^3/1000 − 100n^2 − 100n + 3) ≤ n^3/1000 + 3, for n ≥ 1 // drop the negative terms

≤ n^3/1000 + 3n^3, for n ≥ 1 // since 3 ≤ 3n^3

≤ 4n^3, for n ≥ 1

Thus, (n^3/1000 − 100n^2 − 100n + 3) ≤ 4n^3 for c = 4 and n_0 = 1

Note: You can verify the correctness of your solution by substituting the values of c and n_0 into the inequality!

Big-Omega:

(n^3/1000 − 100n^2 − 100n + 3) = Ω(n^3)

// Since you are only asked to express the function in terms of Big-Omega, you don't need to prove it at this stage

Proof:
By definition of Big-Omega, you must prove (n^3/1000 − 100n^2 − 100n + 3) ≥ cn^3, for some pair of values c > 0 and n_0 > 0

Logical Steps:

(Tutor scrolled past too quickly)

Big-Theta:

(n^3/1000 − 100n^2 − 100n + 3) = Θ(n^3)

Proof:
By definition of Big-Theta, you must show that c_2*n^3 ≤ n^3/1000 − 100n^2 − 100n + 3 ≤ c_1*n^3, for some values of c_1 > 0, c_2 > 0, and n_0 > 0

Logical steps:

1. You can prove the problem separately for Big-Oh, to get c_1 and n_0, and for Big-Omega, to get c_2 and n_0. For this problem, you have obtained c_1 ≥ 4 and n_0 = 1 for Big-Oh, and c_2 ≤ 1.66 * 10^−4 and n_0 = 120000 for Big-Omega.

2. Select the larger of the two n_0 values calculated for Big-Oh and Big-Omega for use in your Big-Theta. Then, you claim that since (n^3/1000 − 100n^2 − 100n + 3) = O(n^3)...

(Tutor moved too quickly)
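The quoted constants can be checked by plugging them back into the inequalities; here is a small sketch of that check (my own code, using the c and n_0 values from the transcript):

```java
public class BoundCheck {
    static double f(double n) {
        return n * n * n / 1000 - 100 * n * n - 100 * n + 3;
    }

    public static void main(String[] args) {
        double n0 = 120000;
        // Big-Oh bound: f(n) <= 4 n^3 (here checked at n_0)
        System.out.println(f(n0) <= 4 * n0 * n0 * n0);        // true
        // Big-Omega bound: f(n) >= 1.66e-4 * n^3 at n_0 = 120000
        System.out.println(f(n0) >= 1.66e-4 * n0 * n0 * n0);  // true
    }
}
```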

ii)

∑_{i=1}^{n} i = n(n + 1)/2

ANSWER:
Big-Oh:
n(n + 1)/2 = O(n^2); notice that the largest exponent is 2, after simplifying the function to (n^2 + n)/2

Proof: Follow the steps as in part (i)

By definition of Big-Oh, we must prove:

(n^2 + n)/2 ≤ cn^2

Logical steps:
(n^2 + n)/2 ≤ n^2 + n // removing the "/2" is OK because it only makes the RHS larger

≤ n^2 + n^2, for n ≥ 1 // notice that n ≤ n^2 for n ≥ 1, and thus this step is OK

= 2n^2 ≤ cn^2, for n ≥ 1 and c ≥ 2

Big-Omega:
n(n + 1)/2 = Ω(n^2)

Proof: Follow the steps as in part (i)

By definition of Big-Omega, we must prove:

(n^2 + n)/2 ≥ cn^2

Logical steps:
// divide both sides by n^2
1/2 + 1/(2n) ≥ c

The smallest possible value of the LHS is 1/2, i.e., as n goes to infinity. Thus, we have
1/2 + 1/(2n) ≥ c for n ≥ 1 and c ≤ 1/2

Big-Theta:
n(n + 1)/2 = Θ(n^2)

Proof: Follow the steps as in part (i)

By definition of Big-Theta, we must prove:

c_1*n^2 ≤ (n^2 + n)/2 ≤ c_2*n^2

iii)

∑_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6

ANSWERS
Big-Oh:
n(n + 1)(2n + 1)/6 = O(n^3)

Notice that the largest exponent is 3 (after simplifying the function to (2n^3 + 3n^2 + n)/6)

Proof: Follow the steps as in part (i)

By definition of Big-Oh, we must prove:

n(n + 1)(2n + 1)/6 ≤ cn^3

Logical Steps:
n(n + 1)(2n + 1)/6 ≤ n(n + 1)(2n + 1) // removing the "/6"; this inequality is true for n ≥ 1

≤ n(n + n)(2n + n), for n ≥ 1 // since 1 ≤ n

= 6n^3 ≤ cn^3, for c ≥ 6 and n ≥ 1

Big-Omega:
n(n + 1)(2n + 1)/6 = Ω(n^3)

Proof: Follow the steps as in part (i)

By definition of Big-Omega, we must prove:

n(n + 1)(2n + 1)/6 ≥ cn^3

Logical Steps:
n(n + 1)(2n + 1)/6 ≥ n(n)(2n)/6 // remove the "+ 1"s; the inequality is correct for any n ≥ 1

= 2n^3/6 = (1/3)n^3

≥ cn^3, for c ≤ 1/3 and n ≥ 1

An alternative proof:
(2n^3 + 3n^2 + n)/6 = (1/3)n^3 + (1/2)n^2 + (1/6)n ≥ cn^3

Dividing both sides by n^3:

1/3 + 1/(2n) + 1/(6n^2) ≥ c, for n ≥ 1 and c ≤ 1/3

Big-Theta:
n(n + 1)(2n + 1)/6 = Θ(n^3)

Proof: Follow the steps as in part (i)

By definition of Big-Theta, we must prove: ... etc.

Logical steps: ... etc.

(This was literally the answer Soh gave)

part b)

Prove that f(x) = 10 + 100log_2(x − 3) is O(log(x))

Hint: log_2(x − 3) ≤ log_2(x), for x > 3

ANSWER:
By definition of Big-Oh, we must prove:
10 + 100log_2(x − 3) ≤ c * log_2(x) // we must find values of c and x_0 that satisfy the inequality

Note: In general, O(log_a(x)) = O(log_b(x)) for any constant bases a and b, because log_b(x) = log_c(x)/log_c(b) for any constant base c, i.e., changing the base only changes the constant factor.

(I didn't capture this part, sorry!)
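One possible way to finish, using the hint together with 10 ≤ 10*log_2(x) for x ≥ 2 (my own sketch, not captured from the recording):

```latex
10 + 100\log_2(x-3) \le 10\log_2(x) + 100\log_2(x) = 110\log_2(x),
\quad \text{for } x \ge 4
```

so c = 110 and x_0 = 4 satisfy the definition; x ≥ 4 ensures both that x − 3 ≥ 1 (the log is non-negative, so the hint applies) and that log_2(x) ≥ 2 ≥ 1.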

Question 4
This question is somewhat more challenging. Feel free to work with friends or to look for guidance elsewhere, but make sure you understand any answers obtained.

By Stirling's approximation, n! ≈ sqrt(2πn) * (n/e)^n

Note:
π = 3.14159...

e = 2.71828183...

part a)

Use Stirling's approximation to derive an expression for log_2(n!)

ANSWER
log_2(n!) = log_2(sqrt(2πn) * (n/e)^n) // by Stirling's approximation

log_2(n!) = n*log_2(n) − n*log_2(e) + (log_2(2πn))/2 // simplification

part b)

Use the result in a) to prove that ∑_{i=1}^{n} log_2(i) = O(n*log(n))

ANSWER
// notice that log(a) + log(b) = log(ab)

∑_{i=1}^{n} log_2(i) = log_2(∏_{i=1}^{n} i)

= log_2(1 * 2 * 3 * ... * n) = log_2(n!)

≈ n*log_2(n) − n*log_2(e) + (log_2(2πn))/2 // by Stirling's approximation

By definition of Big-Oh, we must prove:

n*log_2(n) − n*log_2(e) + (log_2(2πn))/2 ≤ c*n*log(n) // must find c and n_0

Logical Steps:

n*log_2(n) − n*log_2(e) + (log_2(2πn))/2 ≤ n*log_2(n) + (log_2(2πn))/2 // dropping the negative term only increases the LHS

= n*log_2(n) + (log_2(2) + log_2(π) + log_2(n))/2 // simplification

≤ n*log_2(n) + (log_2(2) + log_2(π) + log_2(n)) // remove the "/2"; this is true

≤ n*log_2(n) + (n*log_2(n) + n*log_2(n) + n*log_2(n)) // each term in the bracket is at most n*log_2(n); the inequality is true for n ≥ π

= 4n*log(n) ≤ c*n*log(n), for c ≥ 4 and n ≥ π

Hence ∑_{i=1}^{n} log_2(i) = O(n*log(n)) for c ≥ 4 and n_0 ≥ π

Note:
The short way is to note that each term in the sum is O(log(n)) and that there are n terms. Thus, you can expect the complexity to be O(n*log(n)). However, this is an informal proof.

An alternative proof:

∑_{i=1}^{n} log_2(i) = log_2(∏_{i=1}^{n} i)

= log_2(n!)

≈ log_2(sqrt(2πn) * (n/e)^n) // by Stirling's approximation

= 0.5*log_2(2πn) + n*log_2(n) − n*log_2(e)

≤ log_2(2π) + log_2(n) + n*log_2(n) // the largest-order term in n is n*log_2(n)

Hence ∑_{i=1}^{n} log_2(i) = O(n*log(n))

// you need to give the values of c and n_0 to prove it formally following the definition of Big-Oh
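Both the approximation and the c = 4 bound can be checked numerically by computing lg(n!) as a direct sum. A sketch (the helpers `lg` and `lgFactorial` are my own):

```java
public class StirlingCheck {
    static double lg(double x) { return Math.log(x) / Math.log(2); }

    // lg(n!) computed exactly (up to rounding) as lg(1) + lg(2) + ... + lg(n).
    static double lgFactorial(int n) {
        double s = 0;
        for (int i = 1; i <= n; i++) s += lg(i);
        return s;
    }

    public static void main(String[] args) {
        for (int n = 4; n <= 1024; n *= 2) {
            double sum = lgFactorial(n);
            double stirling = n * lg(n) - n * lg(Math.E) + lg(2 * Math.PI * n) / 2;
            boolean bound = sum <= 4 * n * lg(n);   // the O(n lg n) bound with c = 4
            System.out.printf("n=%4d  lg(n!)=%9.2f  stirling=%9.2f  bound holds: %b%n",
                    n, sum, stirling, bound);
        }
    }
}
```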

Question 5
CBF, insert the question here later
ANSWERS:

part a)
JAVA

for (i = 0; i < n; i++) // n + 1 repetitions
{
    System.out.println("Hello world"); // n times
}

(i) See above. Note: It is OK if there is a difference of one or two steps

(ii) Total steps: n + 1 + n = 2n + 1 // assume each line takes 1 step

(iii) O(n); assuming println is O(1)

(iv)
2n + 1 ≤ cn

2n + 1 ≤ 2n + n, for n ≥ 1
= 3n ≤ cn, for c ≥ 3 and n ≥ 1

part b)
for i ← n downto 0 do // n + 2 times
    for j ← 0 to i do // ∑_{i=0}^{n} (i + 2) times
        A[i][j] ← A[i][j + 1] + 2 // ∑_{i=0}^{n} (i + 1) times

(i) See above. Note: It is OK if there is a difference of one or two steps

(ii)
∑_{i=0}^{n} (i + 2) = 2 + 3 + ... + n + (n + 1) + (n + 2)
= n(n + 1)/2 − 1 + (n + 1) + (n + 2)

∑_{i=0}^{n} (i + 1) = 1 + 2 + ... + n + (n + 1)
= n(n + 1)/2 + (n + 1)

Thus, the total number of steps is:

(n + 2) + n(n + 1)/2 − 1 + (n + 1) + (n + 2) + n(n + 1)/2 + (n + 1)

= (n + 2) + (n^2 + n) − 1 + (3n + 4)

= n^2 + 5n + 5

(iii)
Assume each line requires O(1). The running time is O(n^2)

(iv)
n^2 + 5n + 5 ≤ cn^2

≤ n^2 + 5n^2 + 5n^2, for n ≥ 1

= 11n^2 ≤ cn^2, for c ≥ 11 and n ≥ 1
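The inner-loop count can be confirmed empirically. A quick sketch (my own code) that tallies only the assignment line, i.e., ∑_{i=0}^{n}(i + 1) = n(n + 1)/2 + (n + 1):

```java
public class NestedLoopCount {
    // Count executions of the innermost assignment in part b)'s loop.
    static long bodySteps(int n) {
        long count = 0;
        for (int i = n; i >= 0; i--)
            for (int j = 0; j <= i; j++)
                count++;              // stands in for A[i][j] <- A[i][j+1] + 2
        return count;
    }

    public static void main(String[] args) {
        int n = 10;
        long expected = (long) n * (n + 1) / 2 + (n + 1);   // 55 + 11 = 66
        System.out.println(bodySteps(n) + " == " + expected);
    }
}
```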

part c)
i ← n // 1 step
while i > 0 do // lg(n) + 2 times
    set i ← i/2 // lg(n) + 1 times; remember that this is an integer division

(i)
See above. Notice that if n = 8, the while-loop condition is checked lg(8) + 2 = 5 times,
i.e., i = 8, 4, 2, 1, plus 1 exit check at i = 0 → 5

In general, for i = n, we have the following values of i after each repetition:

n, n/2, n/4, n/8, ..., n/n
= n/2^0, n/2^1, n/2^2, ..., n/2^h

Let 2^h = n, i.e., h = lg(n)

The condition of the while loop is true lg(n) + 1 times

Thus, in total, the condition is checked lg(n) + 1 times plus 1 exit check
Note: It is OK if there is a difference of one or two times.

(ii) 1 + lg(n) + 2 + lg(n) + 1 = 2lg(n) + 4

(iii) O(lg(n))

(iv)
2lg(n) + 4 ≤ c*lg(n)

2lg(n) + 4 ≤ 2lg(n) + 4lg(n) // for n ≥ 2, since lg(n) ≥ 1

= 6lg(n) ≤ c*lg(n) // for c ≥ 6 and n ≥ 2
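The lg(n) + 1 body count for the halving loop can be verified directly. A sketch (the helper `halvings` is my own):

```java
public class HalvingLoop {
    // Number of times the body of "while i > 0 do i <- i/2" executes.
    static int halvings(int n) {
        int i = n, count = 0;
        while (i > 0) {
            i = i / 2;   // integer division, as in the pseudocode
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(halvings(8));    // prints 4 = lg(8) + 1
        System.out.println(halvings(1000)); // prints 10 = floor(lg(1000)) + 1
    }
}
```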

part d)
for i ← 1 to n do // n + 1 steps
    j ← i // n steps
    while j > 0 do // ∑_{i=1}^{n} (lg(i) + 2) steps
        set j ← j/2 // ∑_{i=1}^{n} (lg(i) + 1) steps

(i) See above. Note: It is OK if there is a difference of one or two times

(ii)
n + 1 + n + (lg(n!) + 2n) + (lg(n!) + n)

= 2lg(n!) + 5n + 1

Note:
∑_{i=1}^{n} (lg(i) + 2) = lg(1) (which = 0) + lg(2) + lg(3) + ... + lg(n) + 2n

= lg(n!) + 2n

∑_{i=1}^{n} (lg(i) + 1) = lg(1) (which = 0) + lg(2) + lg(3) + ... + lg(n) + n

= lg(n!) + n

(iii) O(n*lg(n)) // Recall that lg(n!) = O(n*lg(n)) by Stirling's approximation

(iv)
2lg(n!) + 5n + 1 ≤ c*n*lg(n)

Then use Stirling's approximation to find the Big-Oh

(I didn't capture this part, sorry!)

Question 6
While induction proofs are important in themselves, they are also very useful in complexity analysis, where summations appear often.

Use induction to prove the following

a) ∑_{i=1}^{n} i = n(n + 1)/2

Note: The proof for this equation has been discussed in the lecture

b) ∑_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6

c) ∑_{i=1}^{n} 2^(i−1) = 2^n − 1

Notes:
In general, proof by induction for a statement, e.g., S(n): ∑_{i=1}^{n} i = n(n + 1)/2, requires proving two facts:
1. The basis case
2. The inductive step

The basis case:

proves statement S(b) for some small value b, e.g., 0, 1, 2.

The inductive step:

aims to prove that statement S(n + 1) is true if we assume statement S(n) is true (called the inductive hypothesis), for all n ≥ b, where S(b) is the basis

ANSWERS:

part a)

∑_{i=1}^{n} i = n(n + 1)/2

The solution has been discussed in the lecture, and thus can be skipped in the tutorial!

Base Step:

Show that ∑_{i=1}^{1} i = 1(1 + 1)/2, i.e., the case n = 1

LHS: ∑_{i=1}^{1} i = 1

RHS: 1(1 + 1)/2 = 1

Since we have LHS = RHS = 1, the equation is true for the base case n = 1

Inductive Step:

Assume it is true for some arbitrary integer k, i.e.,

Induction hypothesis: ∑_{i=1}^{k} i = k(k + 1)/2

Note: It is also OK if the hypothesis is for n and the inductive step is for n + 1

We must prove that it is also true for the next value, k + 1:

i.e., prove that ∑_{i=1}^{k+1} i = (k + 1)((k + 1) + 1)/2

LHS = ∑_{i=1}^{k+1} i = (k + 1) + ∑_{i=1}^{k} i

Using the inductive hypothesis, i.e., ∑_{i=1}^{k} i = k(k + 1)/2, we can compute the LHS:

= (k + 1) + k(k + 1)/2

= 2(k + 1)/2 + k(k + 1)/2

= (2k + 2 + k^2 + k)/2 = (k^2 + 3k + 2)/2

= (k + 1)(k + 2)/2

= (k + 1)((k + 1) + 1)/2

... which equals the RHS


part b)

∑_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6

Base Step:

Show that ∑_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6 for n = 1

LHS: ∑_{i=1}^{1} i^2 = 1^2 = 1

RHS: 1(1 + 1)(2*1 + 1)/6 = 6/6 = 1

Since we have LHS = RHS = 1, the equation is true for the base case n = 1

Inductive Step:

Assume it is true for some arbitrary integer k, i.e.,

Induction hypothesis: ∑_{i=1}^{k} i^2 = k(k + 1)(2k + 1)/6

We must prove that it is also true for k + 1

Thus, we must prove that ∑_{i=1}^{k+1} i^2 = (k + 1)((k + 1) + 1)(2(k + 1) + 1)/6

LHS: ∑_{i=1}^{k+1} i^2 = (k + 1)^2 + ∑_{i=1}^{k} i^2

Using the induction hypothesis ∑_{i=1}^{k} i^2 = k(k + 1)(2k + 1)/6, we can compute the LHS

= (k + 1)^2 + k(k + 1)(2k + 1)/6

The final step is to simplify the LHS and RHS to show that LHS = RHS.

(Note: below is a section from my FCS Assignment. I had to prove this same exact thing, so I will just copy and paste.)

k(k + 1)(2k + 1)/6 + (k + 1)^2 = (k + 1)((k + 1) + 1)(2(k + 1) + 1)/6

[k(k + 1)(2k + 1) + 6(k + 1)^2]/6 = (k + 1)(k + 2)(2k + 2 + 1)/6

[k(2k^2 + 3k + 1) + 6(k + 1)^2]/6 = (k^2 + 3k + 2)(2k + 3)/6

[(2k^3 + 3k^2 + k) + 6(k + 1)^2]/6 = (2k^3 + 9k^2 + 13k + 6)/6

[(2k^3 + 3k^2 + k) + 6(k^2 + 2k + 1)]/6 = (2k^3 + 9k^2 + 13k + 6)/6

[(2k^3 + 3k^2 + k) + (6k^2 + 12k + 6)]/6 = (2k^3 + 9k^2 + 13k + 6)/6

(2k^3 + 9k^2 + 13k + 6)/6 = (2k^3 + 9k^2 + 13k + 6)/6

...Thus, LHS = RHS.

Therefore, the equation holds for k + 1, and by induction it is true for all n ≥ 1

part c)

∑_{i=1}^{n} 2^(i−1) = 2^n − 1

Base Step:

Show that ∑_{i=1}^{n} 2^(i−1) = 2^n − 1 for n = 1

LHS: ∑_{i=1}^{1} 2^(i−1) = 2^(1−1) = 2^0 = 1

RHS: 2^1 − 1 = 2 − 1 = 1

Since we have LHS = RHS = 1, the equation is true for the base case n = 1

Inductive Step:

Assume it is true for some arbitrary integer k, i.e.,

Induction hypothesis: ∑_{i=1}^{k} 2^(i−1) = 2^k − 1

We must prove that the equation is also true for k + 1, i.e.,

∑_{i=1}^{k+1} 2^(i−1) = 2^(k+1) − 1

LHS:

∑_{i=1}^{k+1} 2^(i−1) = 2^((k+1)−1) + ∑_{i=1}^{k} 2^(i−1)

= 2^k + 2^k − 1

= 2^(k+1) − 1

RHS:

2^(k+1) − 1

So LHS = RHS

∴ QED
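All three identities can also be spot-checked in code before attempting the induction. A sketch (the helper names are my own):

```java
public class SumIdentities {
    static long sumI(int n)   { long s = 0; for (int i = 1; i <= n; i++) s += i; return s; }
    static long sumSq(int n)  { long s = 0; for (int i = 1; i <= n; i++) s += (long) i * i; return s; }
    static long sumPow(int n) { long s = 0; for (int i = 1; i <= n; i++) s += 1L << (i - 1); return s; }

    public static void main(String[] args) {
        for (int n = 1; n <= 30; n++) {
            if (sumI(n) != (long) n * (n + 1) / 2)
                throw new AssertionError("part a) fails at n = " + n);
            if (sumSq(n) != (long) n * (n + 1) * (2 * n + 1) / 6)
                throw new AssertionError("part b) fails at n = " + n);
            if (sumPow(n) != (1L << n) - 1)
                throw new AssertionError("part c) fails at n = " + n);
        }
        System.out.println("all three identities hold for n = 1..30");
    }
}
```

Of course, this only checks finitely many values; the induction proofs above are what establish the identities for all n.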

Question 7
Use the master method to give tight asymptotic bounds for the following recurrences.

a) T(n) = 4T(n/2) + n

b) T(n) = 4T(n/2) + n^2

c) T(n) = 4T(n/2) + n^3

d) T(n) = 3T(√n) + log_2(n)

Hint: The recurrence in d) is not in a form suitable for the master method. However, it can be converted to the required form by substituting another variable, k = log_2(n). For details, read page 86 of the textbook (third edition).

ANSWERS:

part a)
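The recording jumps to a timestamp at this point; below is a possible worked application of the master method for a)–c) (my own working, not captured from the tutor):

```latex
\text{a) } a = 4,\ b = 2,\ n^{\log_2 4} = n^2;\quad f(n) = n = O(n^{2-\epsilon})
  \;\Rightarrow\; \text{Case 1} \;\Rightarrow\; T(n) = \Theta(n^2)

\text{b) } f(n) = n^2 = \Theta(n^{\log_2 4})
  \;\Rightarrow\; \text{Case 2} \;\Rightarrow\; T(n) = \Theta(n^2 \lg n)

\text{c) } f(n) = n^3 = \Omega(n^{2+\epsilon}),\ \text{and } 4(n/2)^3 = \tfrac{1}{2}n^3 \le c\,n^3
  \;\Rightarrow\; \text{Case 3} \;\Rightarrow\; T(n) = \Theta(n^3)
```

For d), the hint's substitution n = 2^k gives T(2^k) = 3T(2^{k/2}) + k; writing S(k) = T(2^k) yields S(k) = 3S(k/2) + k, which the master method handles (Case 1: S(k) = Θ(k^{log_2 3})), so T(n) = Θ((log_2 n)^{log_2 3}).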

20240315_095956 TIMESTAMP = 0:39:30

Question 8
...

Question 9
...

Question 10
...

Question 11
...

Question 13 (Should be 12)


...

Question 14 (Should be 13)


...
