Approximation algorithm –
An approximation algorithm is a way of dealing with NP-completeness for an optimization
problem. This technique does not guarantee the best solution.
The goal of an approximation algorithm is to find near-optimal solutions to optimization
problems, particularly those that are NP-Hard, for which finding an exact optimal
solution efficiently (in polynomial time) is believed to be infeasible.
Such algorithms are called approximation algorithms or heuristic algorithms.
Performance ratio –
The main idea behind the performance ratio of an approximation algorithm, which is also
called its approximation ratio, is to measure how close the approximate solution is to the optimal solution.
The approximation ratio is denoted ρ(n), where n is the input size of the algorithm, C is the cost of the near-
optimal solution obtained by the algorithm, and C* is the cost of an optimal solution for the problem. The algorithm
has an approximation ratio of ρ(n) if and only if −
max(C/C*, C*/C) ≤ ρ(n)
Intuitively, the approximation ratio measures how much worse the approximate solution is compared with
the optimal solution. A large approximation ratio means the solution is much worse than an optimal
solution; a ratio close to 1 means it is more or less the same as an optimal solution.
Approximation Algorithms can be applied on two types of optimization problems: minimization
problems and maximization problems.
For maximization problems, the approximation ratio is calculated by C*/C since 0 ≤ C ≤ C*. For
minimization problems, the approximation ratio is calculated by C/C* since 0 ≤ C* ≤ C.
Assuming that the costs involved are all positive, the performance ratio is well
defined and is never less than 1. If the value is 1, the approximation algorithm produces
an exactly optimal solution.
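As a small illustrative sketch (the function name is our own, not from the notes), the two ratio formulas above can be written directly in Python:

```python
def approximation_ratio(c, c_star, minimization=True):
    """Return rho = C/C* for a minimization problem, C*/C for a
    maximization problem. Assumes all costs are positive, so rho >= 1."""
    return c / c_star if minimization else c_star / c

# A minimization algorithm returning cost 8 when the optimum is 4
# has ratio 2.0; an exact algorithm (C == C*) has ratio 1.0.
rho_min = approximation_ratio(8, 4)                       # 2.0
rho_max = approximation_ratio(6, 9, minimization=False)   # 1.5
```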
Example of Approximation Algorithms
Vertex Cover Problem: A vertex cover of an undirected graph is a subset of its vertices such that for
every edge (u, v) of the graph, either ‘u’ or ‘v’ is in the vertex cover. Although the name is Vertex Cover,
the set covers all the edges of the given graph. Given an undirected graph, the vertex cover problem is to
find a vertex cover of minimum size.
It is a minimization problem: the size of a vertex cover is the number of vertices in it, and we
seek a cover of minimum size. The decision version of this problem is NP-Complete, so no
polynomial-time exact algorithm is known; what can be found in polynomial time is a near-optimal solution.
Vertex Cover Algorithm
The vertex cover approximation algorithm takes an undirected graph as input and returns a
set of vertices that is guaranteed to be at most twice the size of an optimal vertex cover.
Approx-Vertex-Cover(G = (V, E))
    C = empty-set
    E' = E
    while E' is not empty do
        let (u, v) be any edge in E'
        add u and v to C
        remove from E' all edges incident to u or v
    return C
The idea is to take an edge (u, v), put both of its vertices into C, and remove all the edges incident to
u or v. We carry on until all edges have been removed.
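The pseudocode above can be turned into a short runnable sketch (the function name and the edge-list representation of the graph are our own; the sample edge set below is assumed for illustration):

```python
def approx_vertex_cover(edges):
    """Greedy 2-approximation: pick any remaining edge (u, v), add both
    endpoints to the cover, then discard every edge now covered."""
    cover = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]            # any edge of E'
        cover.update((u, v))           # add u and v to C
        # remove from E' all edges incident to u or v
        remaining = [e for e in remaining if u not in e and v not in e]
    return cover

# Assumed sample graph on vertices 1..8
edges = [(1, 6), (1, 2), (2, 3), (3, 4), (4, 7), (4, 5), (5, 8), (6, 7)]
cover = approx_vertex_cover(edges)
# every edge has at least one endpoint in the cover
assert all(u in cover or v in cover for u, v in edges)
```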
Example –
Consider a graph on vertices 1 through 8, whose edge set is shown in the accompanying figure.
Now, we start by selecting an arbitrary edge (1, 6). We eliminate all the edges that are incident
to vertex 1 or vertex 6, and we add vertices 1 and 6 to the cover.
In the next step, we have chosen another edge (2,3) at random.
Now we select another edge (4,7).
We select another edge (8,5).
Hence, the vertex cover of this graph is {1, 6, 2, 3, 4, 7, 5, 8}, which is the approximate output.
Analysis: It is easy to see that the running time of this algorithm is O(V + E), using an adjacency list to
represent E'.
Characteristics:
Efficiency: Approximation algorithms run in polynomial time, making them practical for large
instances of difficult problems.
Guarantee: They provide a solution that is within a certain factor of the optimal solution. This
factor is called the approximation ratio.
Bounded Performance: The quality of the approximation is usually measured by how close the
solution is to the optimal one.
Significance of Approximation Algorithms
Handling Intractable Problems: For NP-Hard problems, exact solutions are computationally expensive or
impossible to find within a reasonable time frame. Approximation algorithms offer practical solutions
that are good enough for many applications. These algorithms strike a balance between computational
efficiency and solution accuracy, which is critical in real-world scenarios where resources are limited.
Applications in Various Fields: Problems like the Travelling Salesman Problem (TSP), vehicle routing, and
scheduling benefit from approximation algorithms to optimize routes and schedules. Such algorithms help in
designing efficient networks, for example minimizing the cost of connecting nodes (Minimum Spanning Tree,
Steiner Tree problems). They are also used in machine learning for grouping similar data points efficiently
(e.g., k-means clustering).
Theoretical Insights: Studying approximation algorithms contributes to our understanding of problem
complexity and the limits of algorithmic performance. Insights gained from approximation techniques
can lead to improved heuristics and new algorithmic strategies for tackling complex problems.
Randomized algorithm –
Randomized algorithms are algorithms that make use of random numbers at least once during their
execution to make decisions.
A randomized algorithm is a different design approach, in which a few random bits are added to
part of a standard algorithm's logic. They differ from deterministic algorithms: a deterministic
algorithm follows a definite procedure and produces the same output every time the same input is
passed, whereas a randomized algorithm may behave differently on each execution. It is
important to note that it is not the input that is randomized, but the logic of the algorithm.
(Figure: deterministic algorithm vs. randomized algorithm.)
These algorithms introduce randomness to improve efficiency or simplify the algorithm design. By
incorporating random choices into their processes, randomized algorithms can often provide faster
solutions or better approximations compared to deterministic algorithms.
Classification of Randomized Algorithms
Randomized algorithms are classified based on whether it is the running time or the output that is the
random quantity. They come in two common forms − Las Vegas and Monte Carlo.
Las Vegas − A Las Vegas algorithm never gives an incorrect output; its running time is the random
variable. For example, a Las Vegas string-matching algorithm restarts from the beginning whenever its
random choice fails, so the output, when produced, is always correct.
Example – Given an array, we have to find whether any element of the array is repeated or not.
Normal (deterministic) algorithm –
algoSearch_repeat(A) {
    for (i = 0; i <= n-1; i++) {
        for (j = i+1; j <= n-1; j++) {
            if (A[i] == A[j])
                return true;
        }
    }
    return false;
}
Las Vegas algorithm –
algoLVsearch_repeat(A) {
    while (true) {               // loops until a repeated pair is found
        i = random() mod n;
        j = random() mod n;
        if (i != j and A[i] == A[j])
            return true;
    }
}
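A runnable sketch of the Las Vegas repeat-search above (like the pseudocode, it assumes the array really does contain a duplicate; otherwise the loop would never terminate):

```python
import random

def lv_search_repeat(a):
    """Keep sampling two random indices until they hold equal values at
    different positions. The answer is always correct; only the running
    time is random (Las Vegas)."""
    n = len(a)
    while True:
        i = random.randrange(n)
        j = random.randrange(n)
        if i != j and a[i] == a[j]:
            return True

# 1 appears twice, so the call eventually succeeds
assert lv_search_repeat([3, 1, 4, 1, 5])
```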
Monte Carlo − A Monte Carlo algorithm finishes its execution within a given time constraint;
its running time is deterministic, but its output may be incorrect with some small probability. For
example, in string matching, a Monte Carlo algorithm that encounters an error continues from the
same point rather than restarting, thus saving time at the risk of a wrong answer.
Example – for the related problem of searching for a given element a in the array:
Deterministic algorithm –
algoSearch(A, a) {
    for (i = 0; i <= n-1; i++) {
        if (A[i] == a)
            return true;
    }
    return false;
}
Monte Carlo algorithm –
algoMC_search(A, a, x) {
    for (k = 0; k < x; k++) {    // at most x random probes
        i = random() mod n;
        if (A[i] == a)
            return true;
    }
    return false;                // may be a false negative
}
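A runnable sketch of the Monte Carlo search above: at most x random probes, so the running time is fixed, but the algorithm may miss the element (a false negative) with some probability:

```python
import random

def mc_search(a, target, x):
    """Probe x random positions; bounded running time (Monte Carlo),
    but may wrongly report False even when target is present."""
    n = len(a)
    for _ in range(x):
        i = random.randrange(n)
        if a[i] == target:
            return True
    return False   # possibly wrong: target may still be in the array

# If target fills the array, one probe always finds it;
# if target is absent, the answer False is always correct.
assert mc_search([7] * 10, 7, 1)
assert not mc_search([1, 2, 3], 9, 50)
```

With x probes on an array of size n containing the target once, the error probability is (1 − 1/n)^x, which shrinks as x grows.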
Characteristics:
1. Use of Randomness: They incorporate random choices within their logic.
2. Probabilistic Behaviour: Their performance and output can vary across different runs, even on
the same input.
Significance of Randomized Algorithms
Simplicity and Efficiency: Randomized algorithms are often simpler to design and implement compared
to their deterministic counterparts. They can offer improved performance for certain problems, both in
terms of time complexity and ease of implementation.
Handling Uncertainty: Randomization can help in dealing with uncertain or adversarial input by avoiding
worst-case scenarios that could be exploited in deterministic algorithms. For some problems,
randomized algorithms provide strong probabilistic guarantees on performance and correctness.
Practical Applications: Randomized algorithms are fundamental in cryptographic protocols, where
unpredictability and probabilistic guarantees are essential. They help in distributing tasks or resources
efficiently across multiple servers or nodes. Randomized algorithms are used in various algorithmic
paradigms, including random sampling, randomized rounding, and probabilistic data structures (e.g.,
Bloom filters, skip lists).
Examples of Randomized Algorithms
QuickSort (Randomized Version):
Problem: Sorting an array of elements.
Algorithm: QuickSort selects a pivot element and partitions the array around it. In the randomized
version, the pivot is chosen at random. This avoids the worst-case O(n^2) behaviour that a fixed pivot
rule exhibits on certain inputs, making the expected running time O(n log n) on every input.
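A minimal randomized QuickSort sketch; the only change from the deterministic version is that the pivot index is drawn at random:

```python
import random

def quicksort(a):
    """Randomized QuickSort: random pivot makes the O(n^2) worst case
    unlikely on any fixed input; expected time is O(n log n)."""
    if len(a) <= 1:
        return a
    pivot = a[random.randrange(len(a))]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

assert quicksort([5, 3, 8, 1, 9, 2, 7]) == [1, 2, 3, 5, 7, 8, 9]
```

This sketch builds new lists at each level for clarity; an in-place partition (as in the usual QuickSort) has the same expected complexity with less memory.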
Randomized Min-Cut Algorithm:
Problem: Finding the minimum cut in a graph, which separates the graph into two disjoint subsets while
minimizing the number of crossing edges.
Algorithm: Karger's algorithm repeatedly contracts randomly chosen edges until only two vertices remain.
The set of edges between these two vertices represents a cut. By running the algorithm multiple times
and keeping the smallest cut found, the probability of finding the minimum cut increases.
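A compact sketch of Karger's contraction step (edge-list representation and helper names are our own). A single run succeeds with probability at least 2/(n(n−1)), which is why the outer loop repeats it:

```python
import random

def karger_cut_size(edges, n):
    """One contraction run: merge random edges (tracked with union-find)
    until 2 super-vertices remain; return the size of the resulting cut."""
    parent = list(range(n))

    def find(v):                      # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    vertices = n
    while vertices > 2:
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:                  # contract edge (u, v)
            parent[ru] = rv
            vertices -= 1
    # edges whose endpoints ended up in different groups cross the cut
    return sum(1 for u, v in edges if find(u) != find(v))

def min_cut(edges, n, runs=100):
    """Repeat the contraction run and keep the smallest cut found."""
    return min(karger_cut_size(edges, n) for _ in range(runs))

# Two triangles joined by a single bridge edge: the minimum cut is 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
assert min_cut(edges, 6) == 1
```

Any single run returns the size of some cut, so the minimum over runs can never be smaller than the true minimum cut; repetition only drives the failure probability down.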