Algorithm
What is an Algorithm?
An algorithm is a process or a set of rules required to perform
calculations or some other problem-solving operations, especially by a
computer. It is not the complete program or code; it is just the solution
(logic) of a problem, which can be represented either as an informal
description, as a flowchart, or as pseudocode.
Characteristics of an Algorithm
The following are the characteristics of an algorithm:
o Input: An algorithm has zero or more well-defined inputs.
o Output: It produces at least one well-defined output.
o Definiteness: Each step is precisely and unambiguously specified.
o Finiteness: It terminates after a finite number of steps.
o Effectiveness: Every step is basic enough to be carried out exactly.
Factors of an Algorithm
The following are the factors that we need to consider for
designing an algorithm:
o Modularity: the problem can be broken down into smaller modules.
o Correctness: the algorithm produces the right output for every valid input.
o Maintainability: the algorithm is structured so that it is easy to refine later.
o Functionality and robustness: it handles the intended task, including edge cases.
o User-friendliness and simplicity: it is easy to understand and explain.
o Extensibility: other designers can extend or reuse it.
Types of Algorithms:
o Brute Force
o Divide and Conquer
o Dynamic Programming
o Greedy Algorithm
o Backtracking
Brute Force Algorithm
A brute force algorithm solves a problem by simply trying every possible
candidate solution until a satisfactory one is found.
Example:
For example, imagine you have a small padlock with 4 digits, each
from 0-9. You forgot your combination, but you don't want to buy
another padlock. Since you can't remember any of the digits, you have
to use a brute force method to open the lock.
So you set all the numbers back to 0 and try them one by one: 0001,
0002, 0003, and so on until it opens. In the worst-case scenario, it
would take 10^4, or 10,000, tries to find your combination.
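A minimal C sketch of this brute-force search; the value of secret is hypothetical and stands in for the forgotten combination:

#include <stdio.h>

int main() {
    int secret = 7294;                  /* hypothetical forgotten combination */
    for (int guess = 0; guess <= 9999; guess++) {
        if (guess == secret) {          /* try every 4-digit code in order */
            printf("Lock opens at %04d after %d tries\n", guess, guess + 1);
            break;
        }
    }
    return 0;
}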
Divide and Conquer
Let us understand this concept with the help of an example.
Here, we will sort an array using the divide and conquer approach
(i.e., merge sort).
1. Divide the unsorted array into two halves.
2. Divide each half into two halves again.
3. Again, divide each subpart recursively into two halves until you
get individual elements.
4. Now, combine the individual elements in a sorted manner until the
whole array is merged back together.
Applications of Divide and Conquer Approach:
1. Quick Sort
2. Merge Sort
3. Median Finding
4. Matrix Multiplication
5. Min and Max Finding
6. Binary Tree Traversals.
7. Tower of Hanoi
Dynamic Programming
Dynamic programming is a technique for solving a complex problem by
first breaking it into a collection of simpler subproblems, solving
each subproblem just once, and then storing their solutions to avoid
repetitive computations.
There are two key attributes that a problem must have in order for
dynamic programming to be applicable: 'optimal substructure' and
'overlapping subproblems'. However, when the overlapping
subproblems are much smaller than the original problem, the strategy is
called 'divide and conquer' rather than 'dynamic programming'.
This is why merge sort and quick sort are not classified as dynamic
programming problems.
Example:
1. Multistage graph
2. All pairs shortest path
3. Fibonacci sequence
4. 0-1 knapsack problem
5. Longest common subsequence
How does the dynamic programming approach work?
The following are the steps that dynamic programming follows:
o It breaks the complex problem down into simpler subproblems.
o It finds the optimal solution to these subproblems.
o It stores the results of the subproblems (memoization). The process
of storing the results of subproblems is known as memoization.
o It reuses those results so that the same subproblem is not calculated
more than once.
o Finally, it calculates the result of the complex problem.
Top-Down Approach:
It is also known as memoization. We start with the original problem
and recursively break it down into smaller subproblems, storing the
result of each subproblem the first time it is solved. This helps
avoid redundant calculations and makes the algorithm more
efficient.
The top-down approach is well-suited for problems where you
can easily identify the overlapping subproblems and can
represent the recursive relationship effectively.
Example:
#include <stdio.h>

#define MAX 100                /* assumed table size; n must stay below MAX */

int memo[MAX];                 /* memo[n] caches fibonacci(n); -1 means not computed yet */

int fibonacci_top_down(int n) {
    if (n <= 1)
        return n;
    if (memo[n] != -1)         /* reuse a previously stored result */
        return memo[n];
    memo[n] = fibonacci_top_down(n - 1) + fibonacci_top_down(n - 2);
    return memo[n];
}

int main() {
    int n = 5;
    for (int i = 0; i < MAX; i++)
        memo[i] = -1;          /* mark every subproblem as unsolved */
    printf("fibonacci(%d) = %d\n", n, fibonacci_top_down(n));
    return 0;
}
Bottom-Up Approach:
It is also known as tabulation. We solve the subproblems starting from
the smallest ones and build up the solutions to larger
subproblems iteratively. Instead of using recursion, we use an
iterative loop to solve the problems in a systematic order.
We usually create an array or a table to store the solutions to
subproblems. The table is filled in a way that the solution to a
larger subproblem is built using the solutions to its smaller
subproblems.
Unlike the top-down approach, where we solve only the
necessary subproblems, the bottom-up approach solves all
subproblems, starting from the smallest, and progresses towards
the larger ones. There is no recursion or backtracking involved.
The bottom-up approach is well-suited for problems where the
order of solving subproblems is important, and the
dependencies between subproblems are straightforward and
easy to represent.
Example:
#include <stdio.h>

int fibonacci_bottom_up(int n) {
    if (n <= 1)
        return n;
    int fib[n + 1];                        /* table of subproblem results */
    fib[0] = 0;
    fib[1] = 1;
    for (int i = 2; i <= n; i++)
        fib[i] = fib[i - 1] + fib[i - 2];  /* build up from smaller subproblems */
    return fib[n];
}

int main() { printf("fibonacci(5) = %d\n", fibonacci_bottom_up(5)); return 0; }
Difference between dynamic programming
and divide and conquer:
o Divide and conquer is recursive; dynamic programming is non-recursive.
o In divide and conquer, the sub-problems are non-overlapping; in dynamic
programming, the sub-problems are overlapping.
o Divide and conquer works on sub-problems but does not store the results;
dynamic programming works on sub-problems and stores the results.
o In divide and conquer, the sub-problems are independent of each other;
in dynamic programming, the sub-problems are interdependent.
o Examples of divide and conquer: Quick Sort, Merge Sort. Example of
dynamic programming: Fibonacci series.
Greedy Algorithm
A greedy algorithm is an approach for solving a problem by
selecting the best option available at the moment. It doesn't
worry whether the current best result will bring the overall
optimal result.
This algorithm may not produce the best result for all problems,
because it always goes for the locally best choice in the hope of
producing the globally best result.
A problem can be solved with the greedy approach if it has the
following properties:
1. Greedy Choice Property
If an optimal solution to the problem can be built by choosing the
best option at each step without reconsidering earlier choices, the
problem has the greedy choice property.
2. Optimal Substructure
If the optimal overall solution to the problem corresponds to the
optimal solutions to its subproblems, then the problem can be
solved using a greedy approach. This property is called optimal
substructure.
3. Unalterable:
Once a decision is made, that option is not altered at any
subsequent step.
Drawback of Greedy Approach
As mentioned earlier, the greedy algorithm doesn't always produce the
optimal solution. This is the major disadvantage of the algorithm.
For example, suppose we want to find the longest path from root to leaf
in a weighted tree. Let's use the greedy algorithm here.
Greedy Approach
1. Let's start with the root node 20. The weight of the right child is 3 and the
weight of the left child is 2.
2. Our problem is to find the largest path. And, the optimal solution at the
moment is 3. So, the greedy algorithm will choose 3.
3. Finally, the only child of node 3 has weight 1. This gives us our final
result: 20 + 3 + 1 = 24.
However, this is not the optimal solution: there is another path that
carries more weight (20 + 2 + 10 = 32).
Therefore, greedy algorithms do not always give an optimal/feasible solution.
Backtracking
Backtracking is one of the techniques that can be used to solve a
problem, and we can write an algorithm using this strategy. It relies on
brute force search: for the given problem, we try to build all the
possible solutions and pick out the best solution from all the desired
solutions.
The name backtracking itself suggests that we go back and come
forward: if a partial solution satisfies the condition, we continue and
eventually return success; otherwise, we go back and try another option.
It is used to solve problems in which a sequence of objects is chosen
from a specified set so that the sequence satisfies some criteria.
Suppose another path exists from node A to node C. So, we move
from node A to node C. It is also a dead-end, so again backtrack from
node C to node A. We move from node A to the starting node.
Now we check whether any other path exists from the starting node.
We move from the start node to node D. Since it is not a feasible
solution, we move from node D to node E. Node E is also not a feasible
solution; it is a dead end, so we backtrack from node E to node D.
Suppose another path exists from node D to node F. So, we move
from node D to node F. Since it is not a feasible solution and it's a
dead-end, we check for another path from node F.
Suppose another path exists from node F to node G, so we move from
node F to node G. Node G is a success node.
o Live node: The nodes that can be further generated are known
as live nodes.
o E-node: A live node whose children are currently being generated
is called an E-node (expansion node).
o Success node: The node is said to be a success node if it
provides a feasible solution.
o Dead node: The node which cannot be further generated and
also does not provide a feasible solution is known as a dead
node.
Many problems can be solved by the backtracking strategy. Such
problems must satisfy a complex set of constraints, and these constraints
are of two types:
o Explicit constraints: rules that restrict each element to be chosen
only from a given set.
o Implicit constraints: rules that determine which of the candidate
sequences in the solution space actually satisfy the problem's criteria.
Applications of Backtracking
o N-queen problem
o Sum of subset problem
o Graph coloring
o Hamiltonian cycle
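As an illustration of backtracking, here is a minimal C sketch of the N-queen problem on a 4x4 board; the board size and the helper names is_safe and solve are chosen here only for illustration:

#include <stdio.h>
#include <stdlib.h>

#define N 4

int board[N];                      /* board[r] = column of the queen placed in row r */

/* check that a queen at (row, col) does not clash with queens in earlier rows */
int is_safe(int row, int col) {
    for (int r = 0; r < row; r++)
        if (board[r] == col || abs(board[r] - col) == row - r)
            return 0;              /* same column or same diagonal */
    return 1;
}

int solve(int row) {
    if (row == N)
        return 1;                  /* all queens placed: success node */
    for (int col = 0; col < N; col++) {
        if (is_safe(row, col)) {
            board[row] = col;      /* go forward with this choice */
            if (solve(row + 1))
                return 1;
            /* dead end below: backtrack and try the next column */
        }
    }
    return 0;
}

int main() {
    if (solve(0))
        for (int r = 0; r < N; r++)
            printf("Row %d -> column %d\n", r, board[r]);
    return 0;
}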
Below are some major differences between the Greedy
method and Dynamic programming:
o The greedy method computes its solution by making its choices in a
serial forward fashion, never looking back or revising previous choices.
o Dynamic programming computes its solution bottom-up or top-down by
synthesizing it from smaller optimal sub-solutions.
Tower of Hanoi
1. It is a classic problem where you try to move all the disks from one
peg to another peg using only three pegs.
2. Initially, all of the disks are stacked on top of each other with larger
disks under the smaller disks.
3. You may move the disks to any of the three pegs as you attempt to
relocate all of the disks, but you cannot place a larger disk on top of a
smaller disk, and only one disk can be transferred at a time.
Under these conditions, all the disks can be moved from peg A to peg C
in 2^n - 1 moves, which for three disks is 7 steps.
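A short recursive C sketch of this procedure; the peg labels and the function name hanoi are illustrative:

#include <stdio.h>

/* move n disks from peg 'from' to peg 'to', using peg 'aux' as the spare */
void hanoi(int n, char from, char to, char aux) {
    if (n == 0)
        return;
    hanoi(n - 1, from, aux, to);                     /* move n-1 disks out of the way */
    printf("Move disk %d from %c to %c\n", n, from, to);
    hanoi(n - 1, aux, to, from);                     /* bring them onto the target peg */
}

int main() {
    hanoi(3, 'A', 'C', 'B');                         /* prints the 7 moves for three disks */
    return 0;
}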
0/1 Knapsack problem
Here the knapsack is like a container or a bag. Suppose we are given
some items, each with a weight and a profit. We have to put items into
the knapsack in such a way that the total value produces the maximum
profit; in the 0/1 case, each item is either taken whole (1) or left out (0).
We will discuss both the problems one by one. First, we will learn
about the 0/1 knapsack problem.
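A minimal C sketch of the 0/1 knapsack solved with dynamic programming; the weights, values, and capacity below are chosen only for illustration:

#include <stdio.h>

int max(int a, int b) { return a > b ? a : b; }

/* dp[i][w] = best profit using the first i items with remaining capacity w */
int knapsack(int W, int wt[], int val[], int n) {
    int dp[n + 1][W + 1];
    for (int i = 0; i <= n; i++) {
        for (int w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                dp[i][w] = 0;
            else if (wt[i - 1] <= w)      /* item i fits: take it or skip it */
                dp[i][w] = max(val[i - 1] + dp[i - 1][w - wt[i - 1]],
                               dp[i - 1][w]);
            else                          /* item i is too heavy: skip it */
                dp[i][w] = dp[i - 1][w];
        }
    }
    return dp[n][W];
}

int main() {
    int val[] = {60, 100, 120}, wt[] = {10, 20, 30};
    printf("Maximum profit = %d\n", knapsack(50, wt, val, 3));   /* prints 220 */
    return 0;
}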
What is the fractional knapsack problem?
The fractional knapsack problem means that we can divide an item.
For example, if we have an item of 3 kg, we can pick 2 kg of it and
leave 1 kg behind. The fractional knapsack problem is solved by the
greedy approach.
By contrast, in the 0/1 case an item is either taken whole or not at all.
For example, with weights {3, 4, 6, 5} and profits {2, 3, 1, 4}, some
possible 0/1 solution vectors (where xi = 1 means item i is taken) are:
xi = {1, 0, 0, 1}
xi = {0, 0, 0, 1}
xi = {0, 1, 0, 1}
Steps to solve the Fractional problem:
1. Compute the value-per-weight ratio Pi = Vi/Wi for each item.
2. Arrange the items in decreasing order of Pi.
3. Take items in that order, whole while they fit; take a fraction of
the first item that does not fit, until the knapsack is full.
Example:
Consider 5 items with their respective weights and values, and a
knapsack of capacity W = 60:
Solution:
Take the value-per-weight ratio Pi = Vi/Wi for each item:

Item   Wi    Vi    Pi = Vi/Wi
I1      5    30    6.0
I2     10    20    2.0
I3     20   100    5.0
I4     30    90    3.0
I5     40   160    4.0

Arranging the items in decreasing order of Pi:

Item   Wi    Vi    Pi = Vi/Wi
I1      5    30    6.0
I3     20   100    5.0
I5     40   160    4.0
I4     30    90    3.0
I2     10    20    2.0
Now, first we choose item I1, whose weight is 5. Then we choose item I3,
whose weight is 20, so the total weight in the knapsack is 5 + 20 = 25.
The next item is I5; its weight is 40, but only 60 - 25 = 35 units of
capacity remain. So we take the fractional part 35/40 of it:
Weight = 5 + 20 + 35 = 60
Maximum value = 30 + 100 + (35/40) x 160 = 30 + 100 + 140 = 270
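A minimal C sketch of this greedy procedure using the example's numbers, with the items already arranged in decreasing order of Pi:

#include <stdio.h>

int main() {
    /* items I1, I3, I5, I4, I2, already sorted by value/weight ratio */
    int wt[]  = {5, 20, 40, 30, 10};
    int val[] = {30, 100, 160, 90, 20};
    int n = 5;
    double capacity = 60.0, total = 0.0;

    for (int i = 0; i < n && capacity > 0; i++) {
        if (wt[i] <= capacity) {                     /* take the whole item */
            total += val[i];
            capacity -= wt[i];
        } else {                                     /* take only the fitting fraction */
            total += val[i] * (capacity / wt[i]);
            capacity = 0;
        }
    }
    printf("Maximum value = %.0f\n", total);         /* prints 270 */
    return 0;
}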
Travelling Sales Person Problem
The travelling salesman problem involves a salesman and a set of
cities. The salesman has to visit each of the cities, starting from a
certain one (e.g., the hometown) and returning to the same city. The
challenge of the problem is that the travelling salesman needs to
minimize the total cost of the trip.
Suppose the cities are x1, x2, ..., xn, where the cost cij denotes the
cost of travelling from city xi to city xj. The travelling salesperson
problem is to find a route starting and ending at x1 that takes in all
cities with the minimum cost.
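A minimal brute-force C sketch of this search; the 4-city cost matrix below is hypothetical and stands in for cij:

#include <stdio.h>

#define N 4
#define INF 1000000

int c[N][N] = {                       /* hypothetical costs c[i][j] between cities */
    { 0, 10, 15, 20},
    {10,  0, 35, 25},
    {15, 35,  0, 30},
    {20, 25, 30,  0}
};

int visited[N];

/* try every order of the remaining cities and return the cheapest tour cost */
int best_tour(int city, int count, int cost) {
    if (count == N)
        return cost + c[city][0];     /* close the tour by returning to x1 */
    int best = INF;
    for (int next = 0; next < N; next++) {
        if (!visited[next]) {
            visited[next] = 1;
            int t = best_tour(next, count + 1, cost + c[city][next]);
            if (t < best)
                best = t;
            visited[next] = 0;        /* undo the choice and try another order */
        }
    }
    return best;
}

int main() {
    visited[0] = 1;                   /* start (and end) at city 0, i.e. x1 */
    printf("Minimum tour cost = %d\n", best_tour(0, 1, 0));   /* prints 80 */
    return 0;
}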
Sorting Algorithms
Asymptotic Notations
Asymptotic notations are the mathematical notations used to
describe the running time of an algorithm when the input tends
towards a particular value or a limiting value.
For example: in bubble sort, when the input array is already
sorted, the time taken by the algorithm is linear, i.e., the best
case.
When the input array is neither sorted nor in reverse order, then
it takes average time. These durations are denoted using
asymptotic notations.
Big-O notation
Omega notation
Theta notation
Omega Notation (Ω-notation)
Omega notation represents the lower bound of the running time
of an algorithm. Thus, it provides the best case complexity of an
algorithm.
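Big-O notation, by contrast, bounds the running time from above (the worst case), while Theta bounds it from both sides. For reference, the standard formal definitions of the three notations can be written as follows, where c, c1, c2 and n0 denote positive constants:

O(g(n)) = \{ f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\, g(n) \text{ for all } n \ge n_0 \}

\Omega(g(n)) = \{ f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le c\, g(n) \le f(n) \text{ for all } n \ge n_0 \}

\Theta(g(n)) = \{ f(n) : \exists\, c_1, c_2 > 0,\ n_0 > 0 \text{ such that } 0 \le c_1\, g(n) \le f(n) \le c_2\, g(n) \text{ for all } n \ge n_0 \}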
Bubble Sort
Bubble sort is a sorting algorithm that compares two adjacent
elements and swaps them until they are in the intended order.
Just like air bubbles in water rising to the surface, each element
of the array moves toward the end in each iteration. Therefore, it
is called bubble sort.
2. Remaining Iteration
The same process goes on for the remaining iterations.
In each iteration, the comparison takes place up to the last
unsorted element.
The array is sorted when all the unsorted elements are placed
at their correct positions.
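A minimal C sketch of bubble sort as described above; the sample array is illustrative:

#include <stdio.h>

void bubble_sort(int arr[], int n) {
    /* after each pass, the largest unsorted element settles at the end */
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - 1 - i; j++) {   /* compare up to the last unsorted element */
            if (arr[j] > arr[j + 1]) {          /* swap adjacent elements in the wrong order */
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}

int main() {
    int data[] = {-2, 45, 0, 11, -9};
    int n = sizeof(data) / sizeof(data[0]);
    bubble_sort(data, n);
    for (int i = 0; i < n; i++)
        printf("%d ", data[i]);                 /* prints -9 -2 0 11 45 */
    printf("\n");
    return 0;
}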
Optimized Bubble Sort
In the optimized version, a flag keeps track of whether any swap
happened during a pass; if a complete pass makes no swaps, the array
is already sorted and the remaining passes can be skipped.
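A sketch of that optimization in C, reusing the structure above; only the swapped flag is new:

#include <stdio.h>

void optimized_bubble_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;                       /* no swaps seen in this pass yet */
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped)                          /* a full pass with no swaps: already sorted */
            break;
    }
}

int main() {
    int data[] = {1, 2, 4, 3, 5};
    optimized_bubble_sort(data, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", data[i]);                /* prints 1 2 3 4 5 */
    printf("\n");
    return 0;
}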
Selection Sort Algorithm
Selection sort is a sorting algorithm that selects the smallest
element from an unsorted list in each iteration and places that
element at the beginning of the unsorted list.
3. After each iteration, the minimum element is placed at the front of
the unsorted list.
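A minimal C sketch of selection sort following these steps; the sample data is illustrative:

#include <stdio.h>

void selection_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;                  /* assume the first unsorted element is the minimum */
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;              /* remember the smaller element */
        int tmp = arr[i];                 /* place the minimum at the front of the unsorted part */
        arr[i] = arr[min_idx];
        arr[min_idx] = tmp;
    }
}

int main() {
    int data[] = {20, 12, 10, 15, 2};
    int n = sizeof(data) / sizeof(data[0]);
    selection_sort(data, n);
    for (int i = 0; i < n; i++)
        printf("%d ", data[i]);           /* prints 2 10 12 15 20 */
    printf("\n");
    return 0;
}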
Insertion Sort Algorithm
Insertion sort is a sorting algorithm that places an unsorted
element at its suitable place in each iteration.
1. The first element in the array is assumed to be sorted. Take the
second element and store it separately as the key. Compare the key with
the first element. If the first element is greater than the key, then
the key is placed in front of the first element.
2. Now, the first two elements are sorted.
Take the third element and compare it with the elements on its
left. Place it just behind the element smaller than it. If
there is no element smaller than it, place it at the beginning
of the array.
3. Similarly, place every unsorted element at its correct position.
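A minimal C sketch of insertion sort following these steps; the sample data is illustrative:

#include <stdio.h>

void insertion_sort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];                /* element to place into the sorted part */
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];         /* shift larger elements one step to the right */
            j--;
        }
        arr[j + 1] = key;                /* insert the key just behind the smaller element */
    }
}

int main() {
    int data[] = {9, 5, 1, 4, 3};
    int n = sizeof(data) / sizeof(data[0]);
    insertion_sort(data, n);
    for (int i = 0; i < n; i++)
        printf("%d ", data[i]);          /* prints 1 3 4 5 9 */
    printf("\n");
    return 0;
}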
Merge Sort Algorithm
Merge Sort is one of the most popular sorting algorithms that is
based on the principle of Divide and Conquer Algorithm.
Here, a problem is divided into multiple sub-problems. Each
sub-problem is solved individually. Finally, sub-problems are
combined to form the final solution.
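A minimal C sketch of merge sort; the helper names merge and merge_sort and the sample data are illustrative:

#include <stdio.h>

/* merge the sorted halves arr[l..m] and arr[m+1..r] */
void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1, n2 = r - m;
    int L[n1], R[n2];
    for (int i = 0; i < n1; i++) L[i] = arr[l + i];
    for (int j = 0; j < n2; j++) R[j] = arr[m + 1 + j];

    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2)             /* pick the smaller head element */
        arr[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) arr[k++] = L[i++];    /* copy any leftovers */
    while (j < n2) arr[k++] = R[j++];
}

void merge_sort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        merge_sort(arr, l, m);           /* divide: sort the left half */
        merge_sort(arr, m + 1, r);       /* divide: sort the right half */
        merge(arr, l, m, r);             /* combine the two sorted halves */
    }
}

int main() {
    int data[] = {6, 5, 12, 10, 9, 1};
    int n = sizeof(data) / sizeof(data[0]);
    merge_sort(data, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", data[i]);          /* prints 1 5 6 9 10 12 */
    printf("\n");
    return 0;
}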
Quicksort Algorithm
Quicksort is a sorting algorithm based on the divide and
conquer approach, where:
1. The array is divided into subarrays by selecting a pivot
element (an element selected from the array).
2. Rearrange the Array
Now the elements of the array are rearranged so that elements
that are smaller than the pivot are put on the left and the
elements greater than the pivot are put on the right.
3. Now, the pivot is compared with the other elements. If an element
smaller than the pivot is reached, the smaller element is swapped with
the greater element found earlier.
5. The process goes on until the second last element is
reached.
3. Divide Subarrays
Pivot elements are again chosen for the left and the right sub-
parts separately. And, step 2 is repeated.
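A minimal C sketch of quicksort using the last element as the pivot (a Lomuto-style partition, one common choice); the sample data is illustrative:

#include <stdio.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* partition around the last element; smaller elements end up on the left */
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;                       /* boundary of the "smaller" region */
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot)
            swap(&arr[++i], &arr[j]);      /* move the smaller element left of the boundary */
    }
    swap(&arr[i + 1], &arr[high]);         /* put the pivot in its final place */
    return i + 1;
}

void quick_sort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quick_sort(arr, low, p - 1);       /* sort the left subarray */
        quick_sort(arr, p + 1, high);      /* sort the right subarray */
    }
}

int main() {
    int data[] = {8, 7, 2, 1, 0, 9, 6};
    int n = sizeof(data) / sizeof(data[0]);
    quick_sort(data, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", data[i]);            /* prints 0 1 2 6 7 8 9 */
    printf("\n");
    return 0;
}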
Linear Search:
https://www.programiz.com/dsa/linear-search
Binary Search:
https://www.programiz.com/dsa/binary-search
1. Dijkstra algorithm
2. Huffman coding