Algorithm

An algorithm is a set of rules or processes for problem-solving, particularly in computing, characterized by input, output, unambiguity, finiteness, and language independence. Various types of algorithms include brute force, divide and conquer, dynamic programming, greedy algorithms, and backtracking, each with distinct advantages and disadvantages. The document outlines the principles, characteristics, and applications of these algorithms, emphasizing their design factors and the differences between dynamic programming and divide and conquer approaches.

Algorithm

What is an Algorithm?
An algorithm is a process or a set of rules required to perform
calculations or other problem-solving operations, especially by a
computer. It is not the complete program or code; it is just the solution
(logic) of a problem, which can be represented informally using a
flowchart or pseudocode.

Characteristics of an Algorithm
The following are the characteristics of an algorithm:

o Input: An algorithm takes zero or more input values.
o Output: An algorithm produces one or more outputs at the end.
o Unambiguity: An algorithm should be unambiguous which
means that the instructions in an algorithm should be clear and
simple.
o Finiteness: An algorithm should have finiteness. Here, finiteness
means that the algorithm should contain a limited number of
instructions, i.e., the instructions should be countable.
o Language independent: An algorithm must be language-
independent so that the instructions in an algorithm can be
implemented in any of the languages with the same output.

Factors of an Algorithm
The following are the factors that we need to consider for
designing an algorithm:

o Modularity: If a given problem can be broken down into small
modules or steps, which is the basic idea behind an algorithm,
then the algorithm is said to be designed with modularity.
o Correctness: An algorithm is correct when the given inputs
produce the desired output; this is established by analyzing
the algorithm correctly.
o Maintainability: Here, maintainability means that the algorithm
should be designed in a very simple structured way so that when
we redefine the algorithm, no major change will be done in the
algorithm.
o User-friendly: If the algorithm is not user-friendly, then the
designer will not be able to explain it to the programmer.
o Simplicity: If the algorithm is simple then it is easy to
understand.
o Extensibility: If any other algorithm designer or programmer
wants to use your algorithm then it should be extensible.

Types of Algorithms:
o Brute Force
o Divide and Conquer
o Dynamic Programming
o Greedy Algorithm
o Backtracking

Brute force approach


A brute force approach is an approach that tries out all the
possibilities until a satisfactory solution is found.

Advantages of a brute-force algorithm


o This algorithm finds all the possible solutions, and it also
guarantees that it finds the correct solution to a problem.
o It is mainly used for solving simpler and small problems.

Disadvantages of a brute-force algorithm


The following are the disadvantages of the brute-force algorithm:

o It is an inefficient algorithm, as it requires examining each and
every possible state.
o It is very slow to find the correct solution, as it tries each state
without considering whether the solution is feasible or not.

Example:
For example, imagine you have a small padlock with 4 digits, each
from 0-9. You forgot your combination, but you don't want to buy
another padlock. Since you can't remember any of the digits, you have
to use a brute force method to open the lock.

So you set all the numbers back to 0 and try them one by one: 0001,
0002, 0003, and so on until it opens. In the worst-case scenario, it
would take 10^4, or 10,000, tries to find your combination.

Divide and Conquer


This algorithm breaks a problem into sub-problems, solves each
sub-problem individually, and merges the solutions together to get
the final solution. It consists of the following three steps:

1. Divide the original problem into a set of subproblems.


2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get
the solution to the whole problem.

Let us understand this concept with the help of an example.

Here, we will sort an array using the divide and conquer approach
(ie. merge sort).

1. Start with the given unsorted array.

2. Divide the array into two halves.

3. Again, divide each subpart recursively into two halves until you
get individual elements.

4. Now, combine the individual elements in a sorted manner.


Here, conquer and combine steps go side by side.

Applications of Divide and Conquer
Approach:
1. Quick Sort
2. Merge Sort
3. Median Finding
4. Matrix Multiplication
5. Min and Max Finding
6. Binary Tree Traversals.
7. Tower of Hanoi

Why Binary Search is not considered Divide and Conquer:
Binary Search is a searching algorithm. In each step, the algorithm
compares the input element x with the value of the middle element in
the array. If the values match, return the index of the middle.
Otherwise, if x is less than the middle element, then the algorithm
recurs for the left side of the middle element, else recurs for the right
side of the middle element. Contrary to popular belief, this is not an
example of Divide and Conquer because there is only one sub-
problem in each step (Divide and conquer requires that there must be
two or more sub-problems) and hence this is a case of Decrease and
Conquer.

Advantages of Divide and Conquer Algorithm:


 A difficult problem can be solved easily.
 It divides the entire problem into subproblems, so the subproblems
can be solved in parallel on multiprocessor systems.
 It uses cache memory efficiently without occupying much space.
 It reduces the time complexity of the problem.

Disadvantages of Divide and Conquer Algorithm:


 It involves recursion, which is sometimes slow.
 Its efficiency depends on the implementation of the logic.
 It may crash the system if the recursion is performed rigorously
(very deep recursion can overflow the stack).
 It can be difficult to determine the base case or stopping
condition for the recursive calls.
 It may not be the most efficient algorithm for all problems.

Dynamic Programming
The definition of dynamic programming says that it is a technique for
solving a complex problem by first breaking it into a collection of
simpler subproblems, solving each subproblem just once, and then
storing their solutions to avoid repetitive computations.

The main use of dynamic programming is to solve optimization
problems, i.e., problems in which we try to find the minimum or the
maximum solution. Dynamic programming guarantees finding the
optimal solution of a problem if such a solution exists.

There are two key attributes that a problem must have in order for
dynamic programming to be applicable: 'optimal substructure' and
'overlapping subproblems'. However, when the overlapping
subproblems are much smaller than the original problem, the strategy
is called 'divide and conquer' rather than 'dynamic programming'.
This is why merge sort and quick sort are not classified as dynamic
programming problems.

Example:

1. Multistage graph
2. All pairs shortest path
3. Fibonacci sequence
4. 0-1 knapsack problem
5. Longest common subsequence.

How does the dynamic programming approach work?
The following are the steps that dynamic programming follows:

o It breaks down the complex problem into simpler subproblems.
o It finds the optimal solution to these subproblems.
o It stores the results of the subproblems (memoization). The process
of storing the results of subproblems is known as memoization.
o It reuses them so that the same subproblem is not calculated more
than once.
o Finally, it calculates the result of the complex problem.

Approaches of dynamic programming


There are two approaches to dynamic programming:

o Top-down approach (uses the memoization technique)


o Bottom-up approach (uses tabulation method)

Top Down Approach:


 It is known as memoization. We start by breaking down the
original problem into smaller subproblems. Instead of solving
these subproblems immediately, we solve them on demand
whenever needed and store their solutions in a cache (usually an
array or a hash table).
 Before solving a subproblem, we first check if its solution already
exists in the cache. If it does, we can directly return the cached
result instead of recomputing it.
 This approach is often implemented using recursion, where the
function calls itself to solve the subproblems. Memoization helps
avoid redundant calculations and makes the algorithm more
efficient.
 The top-down approach is well-suited for problems where you
can easily identify the overlapping subproblems and can
represent the recursive relationship effectively.

Example:
#include <stdio.h>

#define MAX 100

int memo[MAX];

int fibonacci_top_down(int n) {
    if (n <= 1)
        return n;
    if (memo[n] != -1)
        return memo[n];
    memo[n] = fibonacci_top_down(n-1) + fibonacci_top_down(n-2);
    return memo[n];
}

int main() {
    int n = 5;
    for (int i = 0; i < MAX; i++)
        memo[i] = -1;
    printf("%d\n", fibonacci_top_down(n)); // Output: 5
    return 0;
}

Bottom Up Approach:
 It is known as tabulation. We solve the subproblems starting from
the smallest ones and build up the solutions to larger
subproblems iteratively. Instead of using recursion, we use an
iterative loop to solve the problems in a systematic order.
 We usually create an array or a table to store the solutions to
subproblems. The table is filled in a way that the solution to a
larger subproblem is built using the solutions to its smaller
subproblems.
 Unlike the top-down approach, where we solve only the
necessary subproblems, the bottom-up approach solves all
subproblems, starting from the smallest, and progresses towards
the larger ones. There is no recursion or backtracking involved.
 The bottom-up approach is well-suited for problems where the
order of solving subproblems is important, and the
dependencies between subproblems are straightforward and
easy to represent.

Example:
#include <stdio.h>
int fibonacci_bottom_up(int n) {
if (n <= 1)
return n;

int fib[n+1];
fib[0] = 0;
fib[1] = 1;

for (int i = 2; i <= n; i++) {
    fib[i] = fib[i-1] + fib[i-2];
}
return fib[n];
}
int main() {
int n = 5;
printf("%d\n", fibonacci_bottom_up(n)); // Output: 5
return 0;
}

Difference between dynamic programming
and divide and conquer:

Divide and Conquer | Dynamic Programming
It is recursive. | It is non-recursive.
Sub-problems are non-overlapping. | Sub-problems are overlapping.
It works on sub-problems but does not store the result. | It works on sub-problems and stores the result.
In this technique, the sub-problems are independent of each other. | In this technique, the sub-problems are interdependent.
Example: Quick Sort, Merge Sort | Example: Fibonacci Series

Greedy Algorithm
A greedy algorithm is an approach for solving a problem by
selecting the best option available at the moment. It doesn't
worry whether the current best result will bring the overall
optimal result.

The algorithm never reverses the earlier decision even if the


choice is wrong. It works in a top-down approach.

This algorithm may not produce the best result for all the
problems. It's because it always goes for the local best choice
to produce the global best result.

However, we can determine if the algorithm can be used with


any problem if the problem has the following properties:

1. Greedy Choice Property


If an optimal solution to the problem can be found by choosing
the best choice at each step without reconsidering the previous
steps once chosen, the problem can be solved using a greedy
approach. This property is called greedy choice property.

2. Optimal Substructure
If the optimal overall solution to the problem corresponds to the
optimal solution to its subproblems, then the problem can be

solved using a greedy approach. This property is called optimal
substructure.

Steps for achieving a greedy algorithm are:


1. Feasible:
Here we check whether the choice satisfies all the constraints, so
that we can obtain at least one solution to our problem.

2. Local Optimal Choice:
In this case, the choice made should be the optimum among those
currently available.

3. Unalterable:
Once a decision is made, that option is not altered at any
subsequent step.

Advantages of Greedy Approach


 The algorithm is easier to describe.
 This algorithm can perform better than other algorithms (but,
not in all cases).

Drawback of Greedy Approach
As mentioned earlier, the greedy algorithm doesn't always produce the
optimal solution. This is the major disadvantage of the algorithm.

For example, suppose we want to find the longest path in the graph below
from root to leaf. Let's use the greedy algorithm here.

Greedy Approach
1. Let's start with the root node 20. The weight of the right child is 3 and the
weight of the left child is 2.
2. Our problem is to find the largest path. And, the optimal solution at the
moment is 3. So, the greedy algorithm will choose 3.
3. Finally, the weight of the only child of 3 is 1. This gives us our final result:
20 + 3 + 1 = 24.

However, it is not the optimal solution. There is another path that carries more
weight (20 + 2 + 10 = 32), as shown in the image below.

Therefore, greedy algorithms do not always give an optimal/feasible solution.

Applications of Greedy Algorithm


o It is used in finding the shortest path.
o It is used to find the minimum spanning tree using Prim's
algorithm or Kruskal's algorithm.
o It is used in job sequencing with deadlines.
o This algorithm is also used to solve the fractional knapsack
problem.
o It is used to build the Huffman tree (Huffman coding).

Backtracking
Backtracking is one of the techniques that can be used to solve a
problem. We can write an algorithm using this strategy. It uses brute
force search: for the given problem, we try to generate all the
possible solutions and pick out the best solution from all the
candidate solutions.

The name backtracking itself suggests that we go back and come
forward: if a partial solution satisfies the condition, we continue;
otherwise we go back and try another option. It is used to solve
problems in which a sequence of objects is chosen from a specified
set so that the sequence satisfies some criteria.

How does Backtracking work?


Backtracking is a systematic method of trying out various sequences of
decisions until you find one that works. Let's understand it through an
example.

We start with a start node. First, we move to node A. Since it is not a
feasible solution, we move to the next node, i.e., B. B is also not a
feasible solution, and it is a dead end, so we backtrack from node B to
node A.

Suppose another path exists from node A to node C. So, we move
from node A to node C. It is also a dead end, so we again backtrack
from node C to node A, and then from node A to the starting node.

Now we will check whether any other path exists from the starting
node. So, we move from the start node to node D. Since it is not a
feasible solution, we move from node D to node E. Node E is also not
a feasible solution. It is a dead end, so we backtrack from node E to
node D.

Suppose another path exists from node D to node F. So, we move
from node D to node F. Since it is not a feasible solution and it's a
dead-end, we check for another path from node F.

Suppose another path exists from node F to node G, so we move
from node F to node G. Node G is a success node.

The terms related to the backtracking are:

o Live node: The nodes that can be further generated are known
as live nodes.
o E-node: The node whose children are currently being generated
(the node being expanded).
o Success node: The node is said to be a success node if it
provides a feasible solution.
o Dead node: The node which cannot be further generated and
also does not provide a feasible solution is known as a dead
node.
Many problems can be solved by the backtracking strategy. Such
problems must satisfy a complex set of constraints, which are of
two types:

o Implicit constraint: A rule describing how the elements of a
tuple are related to each other.
o Explicit constraint: A rule that restricts each element to be
chosen from a given set.

Applications of Backtracking
o N-queen problem
o Sum of subset problem
o Graph coloring
o Hamiltonian cycle

Difference between the Backtracking and Recursion


Recursion is a technique that calls the same function again and again
until you reach the base case. Backtracking is an algorithm that finds
all the possible solutions and selects the desired solution from the
given set of solutions.

Below are some major differences between the Greedy
method and Dynamic programming:

Greedy method | Dynamic programming
In the Greedy method, sometimes there is no guarantee of getting an optimal solution. | It is guaranteed that Dynamic Programming will generate an optimal solution.
A greedy method follows the problem-solving heuristic of making the locally optimal choice at each stage. | Dynamic programming is an algorithmic technique which is usually based on a recurrent formula that uses some previously calculated states.
It is more efficient in terms of memory, as it never looks back or revises previous choices. | It requires a Dynamic Programming table for memoization, which increases its memory complexity.
Greedy methods are generally faster. For example, Dijkstra's shortest path algorithm takes O(E log V + V log V) time. | Dynamic Programming is generally slower. For example, the Bellman-Ford algorithm takes O(VE) time.
The greedy method computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices. | Dynamic programming computes its solution bottom-up or top-down by synthesizing it from smaller optimal sub-solutions.
Example: Fractional knapsack | Example: 0/1 knapsack problem

Tower of Hanoi
1. It is a classic problem where you try to move all the disks from one
peg to another peg using only three pegs.

2. Initially, all of the disks are stacked on top of each other with larger
disks under the smaller disks.

3. You may move the disks to any of three pegs as you attempt to
relocate all of the disks, but you cannot place the larger disks over
smaller disks and only one disk can be transferred at a time.

This problem can be easily solved by a Divide & Conquer algorithm.

Moving n disks takes 2^n - 1 moves, so with 3 disks all the disks are
transferred from peg A to peg C in 7 steps, given the conditions above.

0/1 Knapsack problem
Here, a knapsack is like a container or a bag. Suppose we are given
some items, each of which has a weight and a profit. We have to put
items in the knapsack in such a way that the total value produces the
maximum profit.

For example, suppose the capacity of the container is 20 kg. We have
to select the items in such a way that the sum of the weights of the
items is smaller than or equal to the capacity of the container, and
the profit is maximum.

There are two types of knapsack problems:

o 0/1 knapsack problem


o Fractional knapsack problem

We will discuss both the problems one by one. First, we will learn
about the 0/1 knapsack problem.

What is the 0/1 knapsack problem?


The 0/1 knapsack problem means that an item is either put in the
knapsack completely or not at all. For example, we have two items
having weights 2 kg and 3 kg, respectively. If we pick the 2 kg item,
we cannot pick only 1 kg of it (the item is not divisible); we have to
take the 2 kg item completely. This is the 0/1 knapsack problem, in
which we either pick an item completely or leave it. The 0/1 knapsack
problem is solved by dynamic programming.

What is the fractional knapsack problem?
The fractional knapsack problem means that we can divide the item.
For example, we have an item of 3 kg then we can pick the item of 2
kg and leave the item of 1 kg. The fractional knapsack problem is
solved by the Greedy approach.

Example of 0/1 knapsack problem.


Consider the problem having weights and profits are:

Weights: {3, 4, 6, 5}

Profits: {2, 3, 1, 4}

The weight of the knapsack is 8 kg

The number of items is 4

The above problem can be solved by examining candidate solution
vectors such as:

xi = {1, 0, 0, 1}
xi = {0, 0, 0, 1}
xi = {0, 1, 0, 1}

Steps to solve the Fractional problem:

1. Compute the value per pound (Vi/Wi) for each item.

2. Sort the items by value per pound.

3. Obeying a greedy strategy, we take as much as possible of the item
with the highest value per pound.

4. If the supply of that item is exhausted and we can still carry more,
we take as much as possible of the item with the next highest value
per pound.

Example:
Consider 5 items along with their respective weights and values:

I = (I1, I2, I3, I4, I5)

W = (5, 10, 20, 30, 40)

V = (30, 20, 100, 90, 160)

The capacity of the knapsack is W = 60.

Now fill the knapsack according to the decreasing value of Pi.

Solution:
Taking the value-per-weight ratio Pi = Vi/Wi:

Item Wi Vi Pi = Vi/Wi
I1 5 30 6.0
I2 10 20 2.0
I3 20 100 5.0
I4 30 90 3.0
I5 40 160 4.0

Now sort the value of Pi in decreasing order:

Item Wi Vi Pi = Vi/Wi
I1 5 30 6.0
I3 20 100 5.0
I5 40 160 4.0
I4 30 90 3.0
I2 10 20 2.0

Now, first we choose item I1, whose weight is 5. Then choose item I3,
whose weight is 20. Now the total weight in the knapsack is 5 + 20 = 25.
The next item is I5, and its weight is 40, but only 35 kg of capacity
remains. So we choose a fractional part of it. That is:

5 * (5/5) + 20 * (20/20) + 40 * (35/40)

Weight = 5 + 20 + 35 = 60

Maximum Value:

30 * (5/5) + 100 * (20/20) + 160 * (35/40)

= 30+100+140 = 270

Note: The math is from Cloud IT solution (page 512)

Travelling Sales Person Problem
The traveling salesman problem involves a salesman and a set of
cities. The salesman has to visit every one of the cities, starting from
a certain one (e.g., the hometown), and return to the same city. The
challenge of the problem is that the traveling salesman needs to
minimize the total cost of the trip.

Suppose the cities are x1, x2, ..., xn, where the cost cij denotes the
cost of travelling from city xi to city xj. The travelling salesperson
problem is to find a route starting and ending at x1 that visits all
cities with the minimum cost.

Sorting Algorithms

Complexity of Sorting Algorithms


The efficiency of any sorting algorithm is determined by the time
complexity and space complexity of the algorithm.

1. Time Complexity: Time complexity refers to the time taken


by an algorithm to complete its execution with respect to the
size of the input. It can be represented in different forms:
 Big-O notation (O)
 Omega notation (Ω)
 Theta notation (Θ)
2. Space Complexity: Space complexity refers to the total
amount of memory used by the algorithm for a complete
execution. It includes both the auxiliary memory and the input.
The auxiliary memory is the additional space occupied by the
algorithm apart from the input data.

Asymptotic Notations
Asymptotic notations are the mathematical notations used to
describe the running time of an algorithm when the input tends
towards a particular value or a limiting value.

For example: In bubble sort, when the input array is already
sorted, the time taken by the algorithm is linear, i.e., the best
case.

But when the input array is in reverse order, the algorithm
takes the maximum (quadratic) time to sort the elements, i.e.,
the worst case.

When the input array is neither sorted nor in reverse order, then
it takes average time. These durations are denoted using
asymptotic notations.

There are mainly three asymptotic notations:

 Big-O notation
 Omega notation
 Theta notation

Big-O Notation (O-notation)


Big-O notation represents the upper bound of the running time
of an algorithm. Thus, it gives the worst-case complexity of an
algorithm.

Omega Notation (Ω-notation)
Omega notation represents the lower bound of the running time
of an algorithm. Thus, it provides the best case complexity of an
algorithm.

Theta Notation (Θ-notation)


It represents both the upper and the lower bound of the running
time of an algorithm, so it is used for analyzing the average-case
complexity of an algorithm.

Complexity Analysis of different algorithms:

Algorithm      | Best       | Average    | Worst      | Space
Bubble Sort    | O(n)       | O(n^2)     | O(n^2)     | O(1)
Selection Sort | O(n^2)     | O(n^2)     | O(n^2)     | O(1)
Insertion Sort | O(n)       | O(n^2)     | O(n^2)     | O(1)
Merge Sort     | O(n log n) | O(n log n) | O(n log n) | O(n)
Quick Sort     | O(n log n) | O(n log n) | O(n^2)     | O(log n)
Bubble Sort
Bubble sort is a sorting algorithm that compares two adjacent
elements and swaps them until they are in the intended order.
Just like air bubbles in water rising to the surface, each element
of the array moves toward the end in each iteration. Therefore,
it is called bubble sort.

Working of Bubble Sort


Suppose we are trying to sort the elements in ascending
order.
1. First Iteration (Compare and Swap)
 Starting from the first index, compare the first and the second
elements.
 If the first element is greater than the second element, they
are swapped.
 Now, compare the second and the third elements. Swap
them if they are not in order.
 The above process goes on until the last element

2. Remaining Iteration
The same process goes on for the remaining iterations.

After each iteration, the largest element among the unsorted


elements is placed at the end.

In each iteration, the comparison takes place up to the last
unsorted element.

The array is sorted when all the unsorted elements are placed
at their correct positions.

Optimized Bubble Sort

Selection Sort Algorithm
Selection sort is a sorting algorithm that selects the smallest
element from an unsorted list in each iteration and places that
element at the beginning of the unsorted list.

1. Set the first element as minimum.

2. Compare minimum with the second element. If the second
element is smaller than minimum, assign the second element
as minimum.

Compare minimum with the third element. Again, if the third
element is smaller, then assign it as minimum; otherwise do
nothing. The process goes on until the last element.

3. After each iteration, minimum is placed at the front of the
unsorted list.

4. For each iteration, indexing starts from the first unsorted
element. Steps 1 to 3 are repeated until all the elements are
placed at their correct positions.

Insertion Sort Algorithm
Insertion sort is a sorting algorithm that places an unsorted
element at its suitable place in each iteration.

Working of Insertion Sort


Suppose we need to sort the following array.

1. The first element in the array is assumed to be sorted. Take
the second element and store it separately in key.

Compare key with the first element. If the first element is greater
than key, then key is placed in front of the first element.

2. Now, the first two elements are sorted.
Take the third element and compare it with the elements to its
left. Place it just behind the element smaller than it. If there is
no element smaller than it, then place it at the beginning of
the array.

3. Similarly, place every unsorted element at its correct position.

Merge Sort Algorithm
Merge Sort is one of the most popular sorting algorithms that is
based on the principle of Divide and Conquer Algorithm.
Here, a problem is divided into multiple sub-problems. Each
sub-problem is solved individually. Finally, sub-problems are
combined to form the final solution.

Quicksort Algorithm
Quicksort is a sorting algorithm based on the divide and
conquer approach where
1. An array is divided into subarrays by selecting a pivot
element (element selected from the array).

While dividing the array, the pivot element should be positioned


in such a way that elements less than pivot are kept on the left
side and elements greater than pivot are on the right side of the
pivot.
2. The left and right subarrays are also divided using the same
approach. This process continues until each subarray contains
a single element.

3. At this point, elements are already sorted. Finally, elements


are combined to form a sorted array.

Working of Quicksort Algorithm


1. Select the Pivot Element
There are different variations of quicksort where the pivot element is selected
from different positions. Here, we will be selecting the rightmost element of the
array as the pivot element.

2. Rearrange the Array
Now the elements of the array are rearranged so that elements
that are smaller than the pivot are put on the left and the
elements greater than the pivot are put on the right.

Here's how we rearrange the array:

1. A pointer is fixed at the pivot element. The pivot element is


compared with the elements beginning from the first index.

2. If the element is greater than the pivot element, a second


pointer is set for that element.

3. Now, pivot is compared with other elements. If an element
smaller than the pivot element is reached, the smaller element
is swapped with the greater element found earlier.

4. Again, the process is repeated to set the next greater


element as the second pointer. And, swap it with another
smaller element.

5. The process goes on until the second last element is
reached.

6. Finally, the pivot element is swapped with the second pointer.

3. Divide Subarrays

Pivot elements are again chosen for the left and the right subparts
separately. And, step 2 is repeated.

The subarrays are divided until each subarray is formed of a


single element. At this point, the array is already sorted.

Visual Illustration of Quicksort Algorithm

(Step-by-step illustrations omitted.)

Linear Search:

https://www.programiz.com/dsa/linear-search

Binary Search:

https://www.programiz.com/dsa/binary-search

Note: More things to study:

1. Dijkstra algorithm
2. Huffman coding

After reading this note, please have a look into the Cloud IT
book. It won't take much time. Some topics are in the book.

