
Data Structure and Algorithms

(CoSc2083 – 5 ECTS)

Sem. I - 2021
Department of Computer Science
Institute of Technology
Ambo University
Chapter 2: Algorithm and Algorithm Analysis

This Chapter Covers:
 Properties of Algorithm
 Analysis of Algorithm
Algorithm
 An algorithm is a concise specification of an operation for solving a problem.
 An algorithm is a well-defined computational procedure that takes some value, or a set of values, as input and produces some value, or a set of values, as output:
Inputs ==> Algorithm ==> Outputs
 An algorithm is a step-by-step procedure to solve a problem.
 E.g. baking a cake, industrial activities, student registration, etc. all need an algorithm to follow.
 An algorithm transforms data structures from one state to another.
 More than one algorithm is possible for the same task.

What is the purpose of algorithms in programs?
 The purpose of an algorithm is to accept input values, to change a value held by a data structure, to re-organize the data structure itself (e.g. sorting), to display the content of the data structure, and so on.
Algorithm
 Take values as input. Example: cin>>age;
 Change the values held by data structures. Example: age=age+1;
 Change the organization of the data structure. Example: sort students by name.
 Produce outputs. Example: display a student's information.

The quality of a data structure is related to its ability to:
1. successfully model the characteristics of the world (problem).
2. successfully simulate the changes in the world.
Generally speaking,
 correct data structures lead to simple and efficient algorithms, and
 correct algorithms lead to accurate and efficient data structures.
Properties of algorithms
 Finiteness: any algorithm should have a finite number of steps to be followed.

Finite:
int i=0;
while(i<10)
{
cout << i;
i++;
}

Infinite:
int i=0;
while(true)
{
cout << "Hello World";
}
 Absence of ambiguity (Definiteness): the algorithm should have one and only one interpretation during execution. Each step must be clearly defined; at each point in the computation, one should be able to tell exactly what happens next.
 Sequential: each step must have a uniquely defined preceding and succeeding step. The first step (start step) and last step (halt step) must be clearly noted.
Properties of algorithms
 Feasibility: it must be possible to perform each instruction; each instruction should have the possibility to be executed.

Example 1:
for(int i=0; i<0; i++)
{
cout << i; // never executed: the condition is false from the beginning
}

Example 2:
if(5>7)
{
cout << "hello"; // never executed: the condition is always false
}
Properties of algorithms
 Correctness: it must compute the correct answer for all possible legal inputs. The output should be as expected, required, and correct.
 Language independence: it must not depend on any one programming language.
 Completeness: it must solve the problem completely.
 Effectiveness: doing the right thing. It should yield the correct result all the time, for all of the possible cases.
Properties of algorithms
 Efficiency: it must solve the problem with the least amount of computational resources such as time and space, producing the output as per the requirement within the given resources (constraints).

Example: write a program that takes a number and displays the square of the number.
1)
int x;
cin>>x;
cout<<x*x;

2)
int x,y;
cin>>x;
y=x*x;
cout<<y;

Example: write a program that takes two numbers and displays the sum of the two.

Program a:
cin>>a;
cin>>b;
sum = a + b;
cout << sum;

Program b:
cin>>a;
cin>>b;
a = a + b;
cout << a;

Program c (the most efficient):
cin>>a;
cin>>b;
cout << a + b;

 All are effective, but with different efficiencies.

Properties of algorithms
 Input/output: there must be a specified number of input values and one or more result values: zero or more inputs and one or more outputs.
 Precision: the result should always be the same if the algorithm is given identical input.
 Simplicity: a good general rule is that each step should carry out one logical step. What is simple to one processor may not be simple to another.
 Levels of abstraction: used to organize the ideas expressed in algorithms, and to hide the details of a given activity and refer to just a name for those details.
 The simple (detailed) instructions are hidden inside modules.
 Well-designed algorithms are organized in terms of levels of abstraction.
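The idea of hiding detailed instructions inside modules can be sketched in C++. This is an illustrative sketch only: `Student`, `byName`, and `sortStudentsByName` are hypothetical names, not part of the course material. Callers use only the high-level name; the comparison detail stays inside the module.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical record type used only for this illustration.
struct Student { std::string name; };

// Low-level detail, hidden inside the module: how two records compare.
static bool byName(const Student& a, const Student& b) {
    return a.name < b.name;
}

// High-level step: "sort students by name" as one named operation.
void sortStudentsByName(std::vector<Student>& s) {
    std::sort(s.begin(), s.end(), byName);
}
```

A caller at the higher level of abstraction never sees `byName`; it only says `sortStudentsByName(students);`.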
Analysis of Algorithm
Algorithm analysis refers to the process of determining how much
computing time and storage an algorithm will require. In other words, it is a
process of predicting the resource requirements of algorithms in a given
environment.
In order to solve a problem, there are many possible algorithms. One has to be
able to choose the best algorithm for the problem at hand using some scientific
method. To classify some data structures and algorithms as good, we need
precise ways of analyzing them in terms of resource requirement.
The main resources are:
 Running Time
 Memory Usage
 Communication Bandwidth

Note: Running time is the most important since computational time is the most
precious resource in most problem domains.
There are two approaches to measure the efficiency of algorithms:
1. Empirical
2. Theoretical
1. Empirical (Computational) Analysis
 Here the total running time of the program is considered. It uses actual system clock time.
 Because it uses the system time to calculate the running time, it cannot reliably be used for measuring the efficiency of algorithms.
 This is because the total running time of the program varies with the:
 Processor speed
 Current processor load
 Input size of the given algorithm, and
 Software environment (multitasking, single tasking, …)
1. Empirical (Computational) Analysis
Example:
t1 (initial time, before the program starts)
for(int i=0; i<=10; i++)
cout<<i;
t2 (final time, after the execution of the program is finished)

Running time taken by the above algorithm (TotalTime) = t2 - t1;

It is difficult to determine the efficiency of algorithms using this approach, because clock time can vary based on many factors. For example:
a) Processor speed of the computer
 1.78GHz ==> 10s
 2.12GHz ==> 15s
b) Current processor load
 Only the work: 10s
 With printing: 15s
 With printing & browsing the internet: >15s
c) Specific data for a particular run of the program (input size, input properties)
t1
for(int i=0; i<=n; i++)
cout<<i;
t2
T = t2 - t1;
 For n=100, T>=0.5s
 For n=1000, T>0.5s
d) Operating system
 Multitasking vs single tasking
 Internal structure
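The t1/t2 scheme above can be sketched with std::chrono. This is a minimal illustration (`work` is a stand-in loop, not from the slides), and the measured time will vary from run to run for exactly the reasons listed.

```cpp
#include <chrono>
#include <cstdint>

// Stand-in for the measured loop (assumption: any O(n) body would do).
std::uint64_t work(std::uint64_t n) {
    std::uint64_t s = 0;
    for (std::uint64_t i = 0; i <= n; ++i) s += i;
    return s;
}

// TotalTime = t2 - t1, in milliseconds.
double runningTimeMs(std::uint64_t n) {
    auto t1 = std::chrono::steady_clock::now();  // t1: before execution
    volatile std::uint64_t sink = work(n);       // keep the loop from being optimized away
    (void)sink;
    auto t2 = std::chrono::steady_clock::now();  // t2: after execution
    return std::chrono::duration<double, std::milli>(t2 - t1).count();
}
```

Running this twice on the same machine, or once on two machines, generally gives different numbers, which is the weakness of empirical analysis.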
Assignment One
Write a program that displays the total running time of a
given algorithm based on different situations such as
processor speed, input size, processor load, and software
environment (DOS and Windows).
Theoretical Algorithm Analysis
 Determining the quantity of resources required using mathematical concepts.
 Analyze an algorithm according to the number of basic operations (time units) required, rather than according to an absolute amount of time involved.
 We use the theoretical approach to determine the efficiency of an algorithm because:
 The number of operations will not vary under different conditions.
 It gives a meaningful measure that permits comparison of algorithms independent of the operating platform.
 It helps to determine the complexity of the algorithm.
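Counting operations instead of clock time can itself be sketched in code. The hypothetical helper below tallies the time units that the analysis rules later assign to the fragment `int k=0; cout<<...; cin>>n; for(int i=1; i<=n; i++) k++;`, giving T(n) = 3n + 5 on every machine, every time.

```cpp
// Assumption: one time unit per assignment, I/O statement, comparison,
// and increment, matching the Analysis Rule used in this chapter.
long countUnits(long n) {
    long units = 0;
    units += 3;      // int k=0; cout<<...; cin>>n;
    units += 1;      // loop setup: i=1
    units += n + 1;  // n+1 condition checks (i<=n)
    units += n;      // n updates (i++)
    units += n;      // n executions of the body (k++)
    return units;    // = 3n + 5
}
```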
Complexity Analysis
 Complexity analysis is the systematic study of the cost of computation, measured either in:
 Time units
 Operations performed, or
 The amount of storage space required.
 Two important ways to characterize the effectiveness of an algorithm are its Space Complexity and Time Complexity.
 Time Complexity: determine the approximate amount of time (number of operations) required to solve a problem of size n.
 The limiting behavior of time complexity as size increases is called the Asymptotic Time Complexity.
 Space Complexity: determine the approximate memory required to solve a problem of size n.
 The limiting behavior of space complexity as size increases is called the Asymptotic Space Complexity.
How do we estimate the complexity of algorithms?
1. Algorithm Analysis: analysis of the algorithm or data structure to produce a function T(n) that describes the algorithm in terms of the operations performed, in order to measure the complexity of the algorithm.
2. Order of Magnitude Analysis: analysis of the function T(n) to determine the general complexity category to which it belongs.
 There is no generally accepted set of rules for algorithm analysis. However, an exact count of operations is commonly used.
 To count the number of operations we can use the Analysis Rule.
Analysis Rule:
1. Assume an arbitrary time unit.
2. Execution of one of the following operations takes time unit 1:
 Assignment statement, e.g. sum=0;
 Single I/O statement, e.g. cin>>sum; cout<<sum;
 Single Boolean expression, e.g. !done, i>=10
 Single arithmetic operation, e.g. a+b
 Function return, e.g. return sum;
3. Selection statement: time for condition evaluation + the maximum time of its clauses.

Example:
int x;
int sum=0;
if(a>b)
{
sum = a + b;
cout << sum;
}
else
{
cout << b;
}
T(n) = 1 + 1 + max(3,1) = 5
Analysis Rule:
4. Loop statement:
 The running time of the statements inside the loop * number of iterations + time for setup (1) + time for checking (number of iterations + 1) + time for update (number of iterations):
 ∑(time of body per iteration) + 1 + (n+1) + n   (initialization + checking + update)
 The total running time of statements inside a group of nested loops is the running time of the statements * the product of the sizes of all the loops.
 For nested loops, analyze inside out. Always assume that the loop executes the maximum number of iterations possible. (Why? Because we are interested in the worst-case complexity.)
5. Function call:
 1 for setup + the time for any parameter calculations + the time required for the execution of the function body:
 1 + time(parameters) + time(body)
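The loop rule can be checked mechanically. This hypothetical instrumentation (not part of the slides) counts one unit per assignment, comparison, and increment in the nested-loop pattern used in this chapter, and reproduces the closed form 3n^2 + 4n + 3.

```cpp
// `ops` accumulates one unit per basic operation, per the Analysis Rule.
long nestedLoopOps(long n) {
    long ops = 0;
    ops += 1;                          // int k=0
    ops += 1;                          // outer loop setup i=0
    for (long i = 0; i < n; ++i) {
        ops += 1;                      // successful i<n check
        ops += 1;                      // inner loop setup j=1
        for (long j = 1; j <= n; ++j) {
            ops += 1;                  // successful j<=n check
            ops += 1;                  // body k++
            ops += 1;                  // update j++
        }
        ops += 1;                      // final failing j<=n check
        ops += 1;                      // update i++
    }
    ops += 1;                          // final failing i<n check
    return ops;                        // = 3n^2 + 4n + 3
}
```

The counted total agrees with analyzing inside out: each outer iteration costs 3n + 2 units, so the total is 2 + n(3n + 4) + 1.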
Example
1)
int k=0,n;
cout<<"Enter an integer";
cin>>n;
for(int i=1; i<=n; i++)
k++;
T(n) = 3 + 1 + (n+1) + n + n = 3n + 5

2)
int k=0;
for(int i=0; i<n; i++)
for(int j=1; j<=n; j++)
k++;
T(n) = 1 + 1 + (n+1) + n + n(1 + (n+1) + n + n) = 2n + 3 + n(3n+2) = 3n^2 + 4n + 3

3)
int i=0;
while(i<n){
cout<<i;
i++;
}
int j=1;
while(j<=10){
cout<<j;
j++;
}
T(n) = [1 + (n+1) + n + n] + 1 + 11 + 2(10) = 3n + 34

4)
int sum=0;
for(i=1; i<=n; i++)
sum = sum + i;
T(n) = 1 + 1 + (n+1) + n + (1+1)n = 4n + 3 = O(n)
Example…
5)
int counter()
{
int a=0;
cout<<"Enter a number";
cin>>n;
for(i=0; i<n; i++)
a=a+1;
return 0;
}
T(n) = 1 + 1 + 1 + (1 + (n+1) + n) + 2n + 1 = 4n + 6 = O(n)

6)
void func()
{
int x=0;
int i=0;
int j=1;
cout<<"Enter a number";
cin>>n;
while(i<n){
i=i+1;
}
while(j<n){
j=j+1;
}
}
T(n) = 1 + 1 + 1 + 1 + 1 + (n+1) + 2n + n + 2(n-1) = 6 + 4n + 2n - 2 = 6n + 4 = O(n)
Exercise
7)
int sum(int n)
{
int s=0;
for(int i=1; i<=n; i++)
s = s + (i*i*i*i);
return s;
}
T(n) = 1 + (1 + (n+1) + n + 5n) + 1 = 7n + 4 = O(n)

8)
int sum=0;
for(i=0; i<n; i++)
for(j=0; j<n; j++)
sum++;
T(n) = 1 + 1 + (n+1) + n + n(1 + (n+1) + n + n) = 2n + 3 + n(3n+2) = 3n^2 + 4n + 3 = O(n^2)
Formal Approach to Analysis
 Simple loops: there is 1 addition per iteration of the loop, hence n additions in total.
 Nested loops: formally, nested for loops translate into multiple summations, one for each for loop.
 Consecutive statements: formally, add the running times of the separate blocks of your code.
 Conditionals: formally, take the maximum.
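As a small check of the nested-loop rule, the double summation ∑(i=1..n) ∑(j=1..n) 1 can be evaluated directly and compared with its closed form n*n. This is a minimal sketch, not from the slides.

```cpp
// Each for loop becomes a summation; the body contributes one unit
// of work per iteration, so the total is n * n.
long doubleSummation(long n) {
    long total = 0;
    for (long i = 1; i <= n; ++i)      // outer summation over i
        for (long j = 1; j <= n; ++j)  // inner summation over j
            total += 1;                // one unit of work
    return total;                      // = n * n
}
```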
Categories of Algorithm Analysis
 Algorithms may be examined under different situations to correctly determine their efficiency for accurate comparison.
Best Case Analysis
 Best case analysis assumes the input data are arranged in the most advantageous order for the algorithm.
 It takes the smallest possible set of inputs and causes execution of the fewest number of statements.
 It computes the lower bound of T(n), where T(n) is the complexity function.
 Examples:
 For a sorting algorithm: the list is already sorted (data are arranged in the required order).
 For a searching algorithm: the desired item is located at the first accessed position.
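The searching example can be made concrete with a comparison counter on linear search (a hypothetical sketch, assuming the simple scan-until-found algorithm): when the key sits in the first accessed position, exactly one comparison is made, whatever the size of the list.

```cpp
#include <vector>

// Counts how many element comparisons a linear search performs.
long linearSearchComparisons(const std::vector<int>& a, int key) {
    long comparisons = 0;
    for (int x : a) {
        ++comparisons;
        if (x == key) break;  // found: stop scanning
    }
    return comparisons;       // best case 1, worst case a.size()
}
```

The same counter also shows the worst case discussed next: a key at the last position, or a missing key, costs a.size() comparisons.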
Categories of Algorithm Analysis
Worst Case Analysis
 Worst case analysis assumes the input data are arranged in the most disadvantageous order for the algorithm.
 It takes the worst possible set of inputs and causes execution of the largest number of statements.
 It computes the upper bound of T(n), where T(n) is the complexity function.
 Examples:
 While sorting: the list is in the opposite order.
 While searching: the desired item is located at the last position or is missing.
 Worst case analysis is the most common analysis because it provides the upper bound for all inputs (even bad ones).
Categories of Algorithm Analysis
Average Case Analysis
 Determines the average of the running time over all permutations of the input data.
 Takes an average set of inputs; assumes randomly arranged input.
 Causes an average number of executions.
 Computes the optimal bound of T(n), where T(n) is the complexity function.
 Sometimes average cases are as bad as worst cases and as good as best cases.
 Examples:
 For sorting algorithms: any arrangement (order) of the input data is considered.
 For searching algorithms: the desired item may be located at any position, or be missing.
Order of Magnitude
 Order of magnitude refers to the rate at which the storage or time grows as a function of problem size.
 It is expressed in terms of its relationship to some known functions. This type of analysis is called asymptotic analysis.

Asymptotic Notations
 Asymptotic analysis is concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.
 Asymptotic analysis makes use of the O (Big-Oh), Ω (Big-Omega), θ (Theta), o (little-o), and ω (little-omega) notations in performance analysis and in characterizing the complexity of an algorithm.
 Note: the complexity of an algorithm is a numerical function of the size of the problem (instance or input size).
Types of Asymptotic Notations
Big-Oh Notation
 Definition: we say f(n) = O(g(n)) if there are positive constants n0 and c such that, to the right of n0, the value of f(n) always lies on or below c·g(n).
 As n increases, f(n) grows no faster than g(n).
 It is only concerned with what happens for very large values of n.
 It describes the worst case analysis and gives an upper bound for a function to within a constant factor.
 O-notation is used to represent the amount of time an algorithm takes on the worst possible set of inputs: the "worst case".
Example
 Question 1: f(n) = 10n + 5 and g(n) = n. Show that f(n) is O(g(n)).
 To show that f(n) is O(g(n)), we must show that there exist constants c and k such that f(n) <= c·g(n) for all n >= k:
 10n + 5 <= c·n for all n >= k.
 Let c = 15; then 10n + 5 <= 15n reduces to 5 <= 5n, i.e. 1 <= n.
 So f(n) = 10n + 5 <= 15·g(n) for all n >= 1 (c = 15, k = 1): there exist two constants that satisfy the constraints.

 Question 2: f(n) = 3n^2 + 4n + 1. Show that f(n) = O(n^2).
 4n <= 4n^2 for all n >= 1, and 1 <= n^2 for all n >= 1.
 So 3n^2 + 4n + 1 <= 3n^2 + 4n^2 + n^2 = 8n^2 for all n >= 1.
 Therefore f(n) is O(n^2) (c = 8, k = 1): there exist two constants that satisfy
the constraints.
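The two constant pairs found above can be sanity-checked numerically. This is a throwaway sketch: checking finitely many values of n illustrates the bound but does not prove it.

```cpp
// Checks 10n+5 <= 15n (c=15, k=1) and 3n^2+4n+1 <= 8n^2 (c=8, k=1)
// for every n in 1..upTo.
bool boundsHold(long upTo) {
    for (long n = 1; n <= upTo; ++n) {
        if (10*n + 5 > 15*n) return false;
        if (3*n*n + 4*n + 1 > 8*n*n) return false;
    }
    return true;
}
```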

Big-Omega (Ω) Notation (lower bound)
 Definition: we write f(n) = Ω(g(n)) if there are positive constants n0 and c such that, to the right of n0, the value of f(n) always lies on or above c·g(n).
 As n increases, f(n) grows no slower than g(n). It describes the best case analysis, and is used to represent the amount of time the algorithm takes on the smallest possible set of inputs: the "best case".
 Example: find g(n) such that f(n) = Ω(g(n)) for f(n) = 3n + 5.
 g(n) = √n, c = 1, k = 1: f(n) = 3n + 5 = Ω(√n).
Theta Notation (θ-Notation) (optimal bound)
 Definition: we say f(n) = θ(g(n)) if there exist positive constants n0, c1, and c2 such that, to the right of n0, the value of f(n) always lies between c1·g(n) and c2·g(n) inclusive, i.e. c1·g(n) <= f(n) <= c2·g(n) for all n >= n0.
 As n increases, f(n) grows as fast as g(n). It describes the average case analysis, and represents the amount of time the algorithm takes on an average set of inputs: the "average case".
 Example: find g(n) such that f(n) = θ(g(n)) for f(n) = 2n^2 + 3.
 n^2 <= 2n^2 + 3 <= 3n^2 for all n >= 2 ==> c1 = 1, c2 = 3, n0 = 2 ==> f(n) = θ(n^2).

Little-oh (small-oh) Notation
 Definition: we say f(n) = o(g(n)) if, for every positive constant c, there is a positive constant n0 such that, to the right of n0, the value of f(n) lies strictly below c·g(n).
 As n increases, g(n) grows strictly faster than f(n). It describes the worst case analysis and denotes an upper bound that is not asymptotically tight (Big-O notation denotes an upper bound that may or may not be asymptotically tight).
 Example: find g(n) such that f(n) = o(g(n)) for f(n) = n^2.
 n^2 < n^3 for all n > 1 ==> g(n) = n^3, f(n) = o(n^3)
 n^2 < n^4 for all n > 1 ==> g(n) = n^4, f(n) = o(n^4)

Little-Omega (ω) Notation
 Definition: we write f(n) = ω(g(n)) if, for every positive constant c, there is a positive constant n0 such that, to the right of n0, the value of f(n) always lies strictly above c·g(n).
 As n increases, f(n) grows strictly faster than g(n). It describes the best case analysis and denotes a lower bound that is not asymptotically tight (Big-Ω notation denotes a lower bound that may or may not be asymptotically tight).
 Example: find g(n) such that f(n) = ω(g(n)) for f(n) = n^2 + 3.
 g(n) = n: since n^2 + 3 grows strictly faster than n, f(n) = ω(n).
 g(n) = √n can also be a solution, since n^2 + 3 grows strictly faster than √n.
Rules to estimate Big-Oh of a given function
 Pick the highest-order term and ignore its coefficient.
 Example: T(n) = 3n + 5 ==> O(n)
 T(n) = 3n^2 + 4n + 2 ==> O(n^2)
 Some rules for the known functions encountered when analyzing algorithms (complexity categories for Big-Oh):
 Rule 1: if T1(n) = O(f(n)) and T2(n) = O(g(n)), then T1(n) + T2(n) = max(O(f(n)), O(g(n))) and T1(n) * T2(n) = O(f(n) * g(n)).
 Rule 2: if T(n) is a polynomial of degree k, then T(n) = θ(n^k).
 Rule 3: log^k(n) = O(n) for any constant k. This tells us that logarithms grow very slowly.
 We can always determine the relative growth rates of two functions f(n) and g(n) by computing lim n->∞ f(n)/g(n). The limit can have four possible outcomes:
 The limit is 0: f(n) = o(g(n)).
 The limit is c ≠ 0: f(n) = θ(g(n)).
 The limit is infinity: g(n) = o(f(n)).
 The limit oscillates: there is no relation between f(n) and g(n).
 Examples:
 n^3 grows faster than n^2, so we can say that n^2 = O(n^3) or n^3 = Ω(n^2).
 f(n) = n^2 and g(n) = 2n^2 grow at the same rate, so both f(n) = O(g(n)) and f(n) = Ω(g(n)) are true.
 If f(n) = 2n^2, then f(n) = O(n^4), f(n) = O(n^3), and f(n) = O(n^2) are all correct, but the last is the best
answer.
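The limit rule can be explored numerically: evaluating f(n)/g(n) at a large n hints at the relationship. This is a sketch only; a finite sample is suggestive, not a proof.

```cpp
// f(n)=n^2 vs g(n)=n^3: the ratio tends to 0, so n^2 = o(n^3).
// f(n)=n^2 vs g(n)=2n^2: the ratio settles at 1/2 (a nonzero
// constant), so n^2 = theta(2n^2).
double sq(double n)    { return n * n; }
double cube(double n)  { return n * n * n; }
double twoSq(double n) { return 2.0 * n * n; }

double ratio(double (*f)(double), double (*g)(double), double n) {
    return f(n) / g(n);
}
```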

Complexity Category
T(n)                      F(n)      Big-O
c, c is constant          1         T(n)=O(1)
10logn + 5                logn      T(n)=O(logn)
√n + 2                    √n        T(n)=O(√n)
5n + 3                    n         T(n)=O(n)
3nlogn + 5n + 2           nlogn     T(n)=O(nlogn)
10n^2 + nlogn + 1         n^2       T(n)=O(n^2)
5n^3 + 2n^2 + 5           n^3       T(n)=O(n^3)
2^n + n^5 + n + 1         2^n       T(n)=O(2^n)
7n! + 2^n + n^2 + 1       n!        T(n)=O(n!)
8n^n + 2^n + n^2 + 3      n^n       T(n)=O(n^n)


Arrangement of common functions by growth rate (typical growth rates):
Function    Name
c           Constant
log N       Logarithmic
log^2 N     Log-squared
N           Linear
N log N     Log-linear
N^2         Quadratic
N^3         Cubic
2^N         Exponential

The order of the body statements of a given algorithm is very important in determining the Big-Oh of the algorithm.
Example: find the Big-Oh of the following algorithms.
1)
for(int i=1; i<=n; i++)
sum = sum + i;
T(n) = 2*n = 2n = O(n)
2)
for(int i=1; i<=n; i++)
for(int j=1; j<=n; j++)
k++;
T(n) = 1*n*n = n^2 = O(n^2)
Chapter 3: Simple Sorting and Searching Algorithms

This Chapter Covers:
 Sorting (Selection Sort, Bubble Sort, Insertion Sort)
 Searching (Linear/Sequential Searching, Binary Searching)