
Analysis of Algorithm

SPACE COMPLEXITY
Definition

Space complexity of an algorithm can be defined as follows:

The total amount of computer memory required by an algorithm to complete its execution is called the space complexity of that algorithm.

 This is essentially the number of memory cells which an algorithm needs. A good algorithm keeps this number as small as possible.
What is Space complexity?

When we design an algorithm to solve a problem, it needs some computer memory to complete its execution. For any algorithm, memory is required for the following purposes:

 Memory required to store program instructions
 Memory required to store constant values
 Memory required to store variable values
When a program is under execution it uses the computer memory for THREE reasons:
 Instruction Space: the amount of memory used to store the compiled version of the instructions.
 Environmental Stack: the amount of memory used to store information about partially executed functions at the time of a function call.
 Data Space: the amount of memory used to store all the variables and constants.
When calculating the space complexity of an algorithm, we consider only Data Space and ignore Instruction Space as well as the Environmental Stack.
That means we calculate only the memory required to store Variables, Constants, Structures, etc.
To calculate the space complexity, we must know the memory required to store values of the different datatypes (this depends on the compiler). For example, an older 16-bit C compiler requires the following:
 2 bytes to store an integer value,
 4 bytes to store a floating point value,
 1 byte to store a character value,
 8 bytes to store a double value
Constant Space Complexity

If an algorithm requires a fixed amount of space for all input values, then its space complexity is said to be Constant Space Complexity.
Constant Space Complexity

Consider the following piece of code:

int square(int a)
{
    return a * a;
}

In the above piece of code, 2 bytes of memory are required to store the variable 'a' and another 2 bytes of memory are used for the return value.

That means it requires a total of 4 bytes of memory to complete its execution, and these 4 bytes are fixed for any input value of 'a'. This space complexity is said to be Constant Space Complexity.
Linear Space Complexity

If the amount of space required by an algorithm increases with the increase of the input value, then its space complexity is said to be Linear Space Complexity.
Linear Space Complexity

Consider the following piece of code:

int sum(int A[], int n)
{
    int sum = 0, i;
    for (i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}
Linear Space Complexity

The above piece of code requires:

 'n*2' bytes of memory to store the array parameter 'A[]'
 2 bytes of memory for the integer parameter 'n'
 4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each)
 2 bytes of memory for the return value

That means it requires a total of '2n+8' bytes of memory to complete its execution. Here, the amount of memory depends on the input value of 'n'. This space complexity is said to be Linear Space Complexity.
Time complexity
Reg. No. 2143533
What is the complexity of an algorithm?

 The complexity of an algorithm is a measure of the amount of time and/or space required by an algorithm for an input of a given size (n).
 The time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
Asymptotic notations

• The limiting behavior of the execution time of an algorithm as the size of the problem goes to infinity. This is usually denoted in big-O notation.
Types of asymptotic notations:
 Big-O
 Big-Omega
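
For reference, the standard definitions of these two notations are:

f(n) = O(g(n)) if there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0 (an asymptotic upper bound).

f(n) = Ω(g(n)) if there exist constants c > 0 and n0 such that f(n) ≥ c·g(n) for all n ≥ n0 (an asymptotic lower bound).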
Analysis of Algorithm
Tower Of Hanoi

 Legend has it that there were three diamond needles set into the floor of
the temple of Brahma in Hanoi.

 Stacked upon the leftmost needle were 64 golden disks, each a different
size, stacked in concentric order:
A Legend (Cont'd)

The priests were to transfer the disks from the first needle to the second
needle, using the third as necessary.

 But they could only move one disk at a time, and could never put a larger
disk on top of a smaller one.
 When they completed this task, the world would end!
To Illustrate

 For simplicity, suppose there were just 3 disks, and we'll refer to the three needles as A, B, and C...

 Since we can only move one disk at a time, the moves proceed as follows:

1. Move the top disk from A to B.
2. Move the top disk from A to C.
3. Move the top disk from B to C.
4. Move the top disk from A to B.
5. Move the top disk from C to A.
6. Move the top disk from C to B.
7. Move the top disk from A to B.

 ...and we're done!

 The problem gets more difficult as the number of disks increases. A recursive sketch of the general solution is given below.
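
The general solution is naturally recursive: to move n disks, first move the top n-1 disks to the spare needle, then move the largest disk, then move the n-1 disks back on top of it. A minimal C sketch (the function and needle names are just for illustration):

#include <stdio.h>

/* Move n disks from 'from' to 'to', using 'via' as the spare needle. */
void hanoi(int n, char from, char to, char via)
{
    if (n == 0)
        return;
    hanoi(n - 1, from, via, to);             /* clear the top n-1 disks */
    printf("Move disk %d from %c to %c\n", n, from, to);
    hanoi(n - 1, via, to, from);             /* put them back on top */
}

int main(void)
{
    hanoi(3, 'A', 'B', 'C');   /* prints the 7 moves illustrated above */
    return 0;
}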
Travelling Salesman Problem
Table of Contents:

 Introduction.
 Problem.
 Finding Solution.
 Diagram.
 Restrictions.
Introduction:-

 The Travelling Salesman Problem (often called TSP) is a classic algorithmic problem in the field of computer science.
 In this context a better solution often means a solution that is cheaper.
 TSP is a mathematical problem.
Problem:-

You have to travel from one location to another along the shortest path in the least time.
Finding solution:

 Select a current city/location.
 Select the desired city/location.
 Measure the distance between each pair of cities.
 Add up the distance of each route.
 Select the route whose total distance is the shortest and least time consuming.
DIAGRAM:
Restriction & Advantages:

 You only visit each city once, and you can't pass through any already-traversed path.

 The approach is easy to state and implement, but since it checks every route, it is only practical for a small number of cities.
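
As a concrete illustration of the route-enumeration idea above, here is a minimal brute-force sketch in C; the 4-city symmetric distance matrix is a made-up example, and trying every route this way is only feasible for small N:

#include <stdio.h>
#include <limits.h>

#define N 4  /* number of cities (assumption for this sketch) */

/* Hypothetical symmetric distance matrix. */
int dist[N][N] = {
    { 0, 10, 15, 20},
    {10,  0, 35, 25},
    {15, 35,  0, 30},
    {20, 25, 30,  0}
};

int best = INT_MAX;

/* Try every order of the remaining cities, starting and ending at city 0. */
void tsp(int city, int visited, int cost, int count)
{
    if (count == N) {                         /* all cities visited: close the tour */
        if (cost + dist[city][0] < best)
            best = cost + dist[city][0];
        return;
    }
    for (int next = 1; next < N; next++)
        if (!(visited & (1 << next)))
            tsp(next, visited | (1 << next), cost + dist[city][next], count + 1);
}

int main(void)
{
    tsp(0, 1, 0, 1);
    printf("Shortest tour length: %d\n", best);  /* prints 80 for this matrix */
    return 0;
}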


Radix Sort
SORTING ALGORITHM
Radix Sort

• Two classifications of radix sorts are least significant digit (LSD) radix sorts and most significant digit (MSD) radix sorts.
• Radix sort is stable and fast.
• Elements are distributed into containers, or buckets, one per digit value.
Simulation

Input: 173 256 548 326 753 478 222 144 721 875

Pass 1: distribute by the ones digit into buckets 0-9, then collect in bucket order.
Bucket 1: 721 | Bucket 2: 222 | Bucket 3: 173, 753 | Bucket 4: 144 | Bucket 5: 875 | Bucket 6: 256, 326 | Bucket 8: 548, 478
Collected: 721 222 173 753 144 875 256 326 548 478

Pass 2: distribute by the tens digit.
Bucket 2: 721, 222, 326 | Bucket 4: 144, 548 | Bucket 5: 753, 256 | Bucket 7: 173, 875, 478
Collected: 721 222 326 144 548 753 256 173 875 478

Pass 3: distribute by the hundreds digit.
Bucket 1: 144, 173 | Bucket 2: 222, 256 | Bucket 3: 326 | Bucket 4: 478 | Bucket 5: 548 | Bucket 7: 721, 753 | Bucket 8: 875
Collected (sorted): 144 173 222 256 326 478 548 721 753 875
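
The passes above can be implemented with one stable counting pass per decimal digit. A minimal LSD radix sort sketch in C, using the same input as the simulation (the array-size bound in the helper is an assumption for this sketch):

#include <stdio.h>

/* One stable counting-sort pass on the digit selected by exp (1, 10, 100, ...). */
void counting_pass(int a[], int n, int exp)
{
    int out[32], count[10] = {0};        /* assumes n <= 32 for this sketch */
    for (int i = 0; i < n; i++)
        count[(a[i] / exp) % 10]++;      /* bucket sizes for this digit */
    for (int d = 1; d < 10; d++)
        count[d] += count[d - 1];        /* prefix sums = end positions */
    for (int i = n - 1; i >= 0; i--)     /* walking backwards keeps it stable */
        out[--count[(a[i] / exp) % 10]] = a[i];
    for (int i = 0; i < n; i++)
        a[i] = out[i];
}

void radix_sort(int a[], int n, int max)
{
    for (int exp = 1; max / exp > 0; exp *= 10)
        counting_pass(a, n, exp);
}

int main(void)
{
    int a[] = {173, 256, 548, 326, 753, 478, 222, 144, 721, 875};
    radix_sort(a, 10, 875);
    for (int i = 0; i < 10; i++)
        printf("%d ", a[i]);             /* 144 173 222 256 ... 875 */
    printf("\n");
    return 0;
}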
Bubble Sort Definition

 Bubble sort is a simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items, and swaps them if they are in the wrong order.

 The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.
Example: 5, 12, 3, 9, 16

● 5, 12, 3, 9, 16
○ The list stays the same because 5 is less than 12.
● 5, 3, 12, 9, 16
○ 12 and 3 are switched because 12 is greater than 3.
● 5, 3, 9, 12, 16
○ 12 and 9 are switched because 12 is greater than 9.
● 5, 3, 9, 12, 16
○ 12 and 16 do not switch because 12 is less than 16.
Example:

 3, 5, 9, 12, 16
 3 is less than 5, so they do not switch.
 3, 5, 9, 12, 16
 5 is less than 9, so they remain in the same places.
 3, 5, 9, 12, 16
 9 is less than 12, so they do not switch places.
 3, 5, 9, 12, 16
 12 and 16 are in numerical order, so they don't switch.
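
Putting the passes together, a minimal C sketch of bubble sort, using the list from the example above; it repeats passes until no swaps are needed:

#include <stdio.h>

void bubble_sort(int a[], int n)
{
    int swapped = 1;
    while (swapped) {
        swapped = 0;
        for (int i = 0; i < n - 1; i++) {
            if (a[i] > a[i + 1]) {            /* adjacent pair out of order */
                int t = a[i];
                a[i] = a[i + 1];
                a[i + 1] = t;
                swapped = 1;
            }
        }
        n--;   /* the largest element has bubbled to the end */
    }
}

int main(void)
{
    int a[] = {5, 12, 3, 9, 16};
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);                  /* 3 5 9 12 16 */
    printf("\n");
    return 0;
}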
Running Time:

 Best-Case: O(n). This is the case of the already-sorted sequence (3): a single pass of (n)(1) = n comparisons.

 Worst-Case: O(n^2). At maximum, there will be n passes through the data, and each pass will test n-1 pairs (3, 4): (n)(n-1) ≈ n^2.

 Average-Case: O(n^2) (3, 4).

Memory Efficiency and Data Structures:

 Bubble sort is very memory-efficient because all of the ordering occurs within the array or list itself. No new memory is allocated.

 No new data structures are necessary, for the same reason.
Advantages:

 The bubble sort requires very little memory other than that which the array or list itself occupies.

 With a best-case running time of O(n), bubble sort is good for testing whether or not a list is already sorted. Other sorting methods often cycle through their whole sorting sequence, which often takes O(n^2) or O(n log n) time, for this task.
Disadvantages:

 The main disadvantage of the bubble sort method is the time it requires. With a running time of O(n^2), it is highly inefficient for large data sets.

 Additionally, the presence of turtles (small values near the end of the list, which move toward the front only one position per pass) can severely slow the sort.

Dynamic Programming
In computer science, mathematics, management science, economics and bioinformatics, dynamic programming (also known as dynamic optimization) is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution.
 The technique of storing solutions to subproblems instead of recomputing them is called "memoization".
 Dynamic programming algorithms are often used for optimization. A dynamic programming algorithm will examine the previously solved subproblems and will combine their solutions to give the best solution for the given problem.
Optimization

 In mathematics, computer science and operations research, mathematical optimization or mathematical programming (alternatively spelled optimisation) is the selection of a best element from some set of available alternatives.
 In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics.
Memoization

 A memoized function "remembers" the results corresponding to some set of specific inputs. Subsequent calls with remembered inputs return the remembered result rather than recalculating it.
 Memoization is a way to lower a function's time cost in exchange for space cost; that is, memoized functions become optimized for speed in exchange for a higher use of computer memory space.
Non-memoized Pseudo code

function factorial (n is a non-negative integer)
    if n is 0 then
        return 1 [by the convention that 0! = 1]
    else
        return factorial(n - 1) times n
    end if
end function

Memoized Pseudo code

function factorial (n is a non-negative integer)
    if n is 0 then
        return 1 [by the convention that 0! = 1]
    else if n is in lookup-table then
        return lookup-table-value-for-n
    else
        let x = factorial(n - 1) times n
        store x in lookup-table in the nth slot [remember the result of n! for later]
        return x
    end if
end function
Greedy Algorithm

A greedy algorithm generally consists of five components (a concrete sketch follows the list):

 A candidate set, from which a solution is created
 A selection function, which chooses the best candidate to be added to the solution
 A feasibility function, which is used to determine if a candidate can be used to contribute to a solution
 An objective function, which assigns a value to a solution, or a partial solution
 A solution function, which will indicate when we have discovered a complete solution
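
As an illustration of these components, here is a small hypothetical sketch in C of the classic greedy making-change strategy (the coin values and amount are assumptions for this example): the coin values form the candidate set, "largest coin that still fits" is the selection plus feasibility check, and the coin count is the objective.

#include <stdio.h>

int main(void)
{
    int coins[] = {25, 10, 5, 1};   /* candidate set, largest first */
    int amount = 67, used = 0;

    for (int i = 0; i < 4; i++) {
        while (amount >= coins[i]) {    /* feasibility check */
            amount -= coins[i];         /* selection: best remaining candidate */
            used++;
        }
    }
    printf("Coins used: %d\n", used);   /* 25+25+10+5+1+1 -> 6 coins */
    return 0;
}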
Difference between Recursion and Dynamic programming

 Recursion uses the top-down approach to solve the problem, i.e. it begins with the core (main) problem, then breaks it into subproblems and solves these subproblems similarly. In this approach the same subproblem can occur multiple times and consume more CPU cycles, hence increasing the time complexity. In dynamic programming, the same subproblem will not be solved multiple times; instead, the prior result is used to optimize the solution.
Method 1 (Use recursion)

/* Fibonacci series using recursion */
#include <stdio.h>

int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void)
{
    int n = 9;
    printf("%d", fib(n));
    getchar();
    return 0;
}
We can observe that this implementation does a lot of repeated work (see the following recursion tree). So this is a bad implementation for the nth Fibonacci number.

                          fib(5)
                        /        \
                  fib(4)          fib(3)
                 /      \        /      \
            fib(3)    fib(2)  fib(2)  fib(1)
            /    \    /    \  /    \
       fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)
       /    \
   fib(1)  fib(0)
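
A memoized version in C, as a sketch (the table size and the use of 0 as the "not computed" marker are assumptions for this example); each fib(i) is now computed only once:

#include <stdio.h>

int memo[100];   /* 0 means "not computed yet"; fib(0) and fib(1) bypass it */

int fib(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] != 0)              /* already computed: reuse stored answer */
        return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    printf("%d\n", fib(9));        /* prints 34, same as the recursive version */
    return 0;
}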
How to solve Dynamic Programming

 Dynamic Programming (DP) is a technique that solves particular types of problems in polynomial time. Dynamic programming solutions are faster than the exponential brute-force method, and their correctness can be easily proved.
Steps to solve a DP

 1) Identify if it is a DP problem
 2) Decide a state expression with least parameters
 3) Formulate state relationship
 4) Do tabulation (or add memoization)
Identify if it is a DP problem

 All dynamic programming problems satisfy the overlapping subproblems property, and most of the classic dynamic problems also satisfy the optimal substructure property. Once we observe these properties in a given problem, we can be sure that it can be solved using DP.
Decide a state expression with the least parameters

 DP problems are all about states and their transitions. This is the most basic step, which must be done very carefully because the state transition depends on the choice of state definition you make. So, let's see what we mean by the term "state".
State:
A state can be defined as the set of parameters that can uniquely identify a certain position or standing in the given problem. This set of parameters should be as small as possible to reduce the state space.
Formulate state relationship

 This part is the hardest part of solving a DP problem and requires a lot of intuition, observation and practice.
 Let's understand it by considering a sample problem.
Example:

Given 3 numbers {1, 3, 5}, we need to tell the total number of ways we can form a number 'N' using the sum of the given three numbers (allowing repetitions and different arrangements).

Total number of ways to form 6 is: 8

1+1+1+1+1+1
1+1+1+3
1+1+3+1
1+3+1+1
3+1+1+1
3+3
1+5
5+1
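
The state relationship here is state(n) = state(n-1) + state(n-3) + state(n-5), since the last number added to reach n must be 1, 3 or 5. A minimal tabulation sketch in C (the table size is an assumption for this example):

#include <stdio.h>

int main(void)
{
    int n = 6;
    long ways[64] = {0};   /* assumes n < 64 for this sketch */
    ways[0] = 1;           /* one way to form 0: the empty sum */

    for (int i = 1; i <= n; i++) {
        if (i >= 1) ways[i] += ways[i - 1];   /* last number was 1 */
        if (i >= 3) ways[i] += ways[i - 3];   /* last number was 3 */
        if (i >= 5) ways[i] += ways[i - 5];   /* last number was 5 */
    }
    printf("Ways to form %d: %ld\n", n, ways[n]);  /* prints 8 */
    return 0;
}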
Do tabulation (or add memoization)

 This is the easiest part of a dynamic programming solution. We just need to store the state answer so that the next time that state is required, we can use it directly from memory.
Heap Sort
Heap sort is performed using the heap data structure. A heap is a complete binary tree. Heap sort is very fast, and the heap data structure is well known for arranging elements in a particular ascending or descending order.
Heap sort is implemented using one of two mechanisms:

1. Max Heap
2. Min Heap
Max Heap

In a max heap the parent node is always greater than its children nodes. The nodes or elements on the left- and right-hand sides of a parent node must be smaller than the element value in the parent node.

Min Heap

In a min heap the parent node is always smaller than its children nodes. The nodes or elements on the left- and right-hand sides of a parent node must be greater than the element value in the parent node.
Heap Sort is performed in 2 Steps

1. First, the given elements are represented in the form of a heap.
2. Then these elements are sorted by repeatedly extracting the root of the heap, as sketched below.
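
A minimal C sketch of both steps, using a textbook array-based max-heap formulation (a sketch, not necessarily how any particular library implements it):

#include <stdio.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Sift the element at index i down until the subtree rooted at i is a max heap. */
void heapify(int arr[], int n, int i)
{
    int largest = i;
    int left = 2 * i + 1, right = 2 * i + 2;
    if (left < n && arr[left] > arr[largest]) largest = left;
    if (right < n && arr[right] > arr[largest]) largest = right;
    if (largest != i) {
        swap(&arr[i], &arr[largest]);
        heapify(arr, n, largest);
    }
}

void heap_sort(int arr[], int n)
{
    /* Step 1: represent the given elements as a max heap. */
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    /* Step 2: repeatedly move the largest element (the root) to the end. */
    for (int i = n - 1; i > 0; i--) {
        swap(&arr[0], &arr[i]);
        heapify(arr, i, 0);
    }
}

int main(void)
{
    int a[] = {12, 11, 13, 5, 6, 7};
    int n = sizeof a / sizeof a[0];
    heap_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);   /* 5 6 7 11 12 13 */
    printf("\n");
    return 0;
}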
Topic: Breadth First Search

Sheraz Sher
2143174
What is a graph?

A set of vertices and edges.

Types of graph
 Directed / undirected
 Weighted / unweighted
 Cyclic / acyclic
Breadth-first search (BFS)
It is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a 'search key') and explores the neighbor nodes first, before moving to the next-level neighbors.
BFS uses a FIFO queue.
Example
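
A minimal BFS sketch in C, assuming an adjacency-matrix graph and a simple array-based FIFO queue (the graph itself is a made-up example):

#include <stdio.h>

#define V 6  /* number of vertices (assumption for this sketch) */

/* Hypothetical adjacency matrix: adj[u][v] = 1 if there is an edge u -> v. */
int adj[V][V] = {
    {0, 1, 1, 0, 0, 0},
    {0, 0, 0, 1, 0, 0},
    {0, 0, 0, 1, 1, 0},
    {0, 0, 0, 0, 0, 1},
    {0, 0, 0, 0, 0, 1},
    {0, 0, 0, 0, 0, 0}
};

void bfs(int start)
{
    int queue[V], head = 0, tail = 0;
    int visited[V] = {0};

    queue[tail++] = start;          /* enqueue the root */
    visited[start] = 1;

    while (head < tail) {           /* FIFO order: explore level by level */
        int u = queue[head++];      /* dequeue */
        printf("%d ", u);
        for (int v = 0; v < V; v++) {
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[tail++] = v;  /* enqueue unvisited neighbors */
            }
        }
    }
    printf("\n");
}

int main(void)
{
    bfs(0);   /* visits 0 1 2 3 4 5 for this matrix */
    return 0;
}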
Complexity

The time complexity of BFS is O(V + E), where V is the number of nodes and E is the number of edges.
ADVANTAGES OF BREADTH-FIRST SEARCH

 Breadth-first search will never get trapped exploring a useless path forever.

 If there is a solution, BFS will definitely find it.

 If there is more than one solution, then BFS can find the minimal one, i.e. the one that requires the fewest steps.
DISADVANTAGES OF BREADTH-FIRST SEARCH

 The main drawback of breadth-first search is its memory requirement.

 If the solution is far away from the root, breadth-first search will consume a lot of time.
DEPTH FIRST SEARCH

Name: Faisal Ghafoor
2143093

1. Depth first search was first investigated by the French mathematician Charles Pierre Trémaux.

2. It is an algorithm for traversing tree or graph data structures.

3. One starts at the root and explores as deep as possible along each branch before backtracking.

4. It can be implemented using a stack.
1- A depth-first search (DFS) explores a path all the way to a leaf before backtracking and exploring another path.
2- For example, after searching A, then B, then D, the search backtracks and tries another path from B.
3- Nodes are explored in the order A B D E H L M N I O P C F G J K Q; N will be found before J.
Idea of The Depth First Search Algorithm
1. In depth first search, edges are explored out of the most recently discovered vertex. Only edges to unexplored vertices are explored.

2. When all of a vertex's edges have been explored, the search "backtracks" to explore edges leaving the vertex from which that vertex was discovered.

3. The process continues until we have discovered all the vertices that are reachable from the original source vertex.

4. If any undiscovered vertices remain, then one of them is selected as a new source vertex.

5. This process is repeated until all the vertices are discovered. A recursive sketch follows.
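
A minimal recursive DFS sketch in C (the adjacency matrix is a made-up example; the implicit call stack plays the role of the explicit stack mentioned above):

#include <stdio.h>

#define V 6  /* number of vertices (assumption for this sketch) */

/* Hypothetical adjacency matrix: adj[u][v] = 1 if there is an edge u -> v. */
int adj[V][V] = {
    {0, 1, 1, 0, 0, 0},
    {0, 0, 0, 1, 0, 0},
    {0, 0, 0, 1, 1, 0},
    {0, 0, 0, 0, 0, 1},
    {0, 0, 0, 0, 0, 1},
    {0, 0, 0, 0, 0, 0}
};
int visited[V];

/* Explore as deep as possible along each branch before backtracking. */
void dfs(int u)
{
    visited[u] = 1;
    printf("%d ", u);
    for (int v = 0; v < V; v++)
        if (adj[u][v] && !visited[v])
            dfs(v);
}

int main(void)
{
    dfs(0);   /* visits 0 1 3 5 2 4 for this matrix */
    printf("\n");
    return 0;
}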


Binary Search

 Binary search: locates a target value in a sorted array / list by successively eliminating half of the array from consideration.
 How many elements will it need to examine?
 The binarySearch method in the Arrays class searches an array very efficiently if the array is sorted.
 You can search the entire array, or just a range of indexes (useful for "unfilled" arrays such as the one in ArrayIntList).
 How much better is binary search than sequential search?

 efficiency: A measure of the use of computing resources by code.


 can be relative to speed (time), memory (space), etc.
 most commonly refers to run time
 binarySearch returns the index where the value is found.

 If the value is not found, binarySearch returns:

-(insertionPoint + 1)

• where insertionPoint is the index where the element would have been, if it had been in the array in sorted order.
• To insert the value into the array, negate (insertionPoint + 1):

int indexToInsert2 = -(index2 + 1);
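
The slides above refer to Java's Arrays.binarySearch; a minimal hand-written equivalent in C, following the same return convention, might look like this:

#include <stdio.h>

/* Returns the index of target, or -(insertionPoint + 1) if not found. */
int binary_search(const int a[], int n, int target)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow for large lo+hi */
        if (a[mid] == target)
            return mid;
        else if (a[mid] < target)
            lo = mid + 1;               /* eliminate the left half */
        else
            hi = mid - 1;               /* eliminate the right half */
    }
    return -(lo + 1);                   /* lo is the insertion point */
}

int main(void)
{
    int a[] = {3, 5, 9, 12, 16, 21, 30};
    printf("%d\n", binary_search(a, 7, 12));  /* prints 3 */
    printf("%d\n", binary_search(a, 7, 13));  /* prints -5: insert at index 4 */
    return 0;
}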


Efficiency examples

statement1;
statement2;                          // 3
statement3;

for (int i = 1; i <= N; i++) {
    statement4;                      // N
}

for (int i = 1; i <= N; i++) {
    statement5;
    statement6;                      // 3N
    statement7;
}

// total: 3 + N + 3N = 4N + 3
