Notes On Algorithm
Dynamic Programming vs Greedy Method:
1. Dynamic Programming is used to obtain the optimal solution.
1. The Greedy Method is also used to get the optimal solution.
o Candidate set: The set from which a solution is created is known as the candidate set.
o Selection function: This function is used to choose the candidate or subset that can be added to the solution.
o Feasibility function: A function used to determine whether the candidate or subset can contribute to the solution or not.
o Objective function: A function used to assign a value to the solution or the partial solution.
o Solution function: This function is used to indicate whether a complete solution has been reached or not.
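The five components above can be made concrete with a greedy coin-change sketch (a hypothetical illustration, not from the notes, assuming standard coin denominations):

```python
def greedy_coin_change(amount, denominations):
    """Greedy coin change, annotated with the five greedy components."""
    candidates = sorted(denominations, reverse=True)  # candidate set
    solution = []
    while amount > 0:  # solution function: stop once the amount is covered
        # selection function: always try the largest coin first
        for coin in candidates:
            if coin <= amount:  # feasibility function: coin must not overshoot
                solution.append(coin)
                amount -= coin  # objective function: fewer coins is better
                break
        else:
            return None  # no feasible candidate remains

    return solution

print(greedy_coin_change(63, [1, 5, 10, 25]))  # → [25, 25, 10, 1, 1, 1]
```

Note that this greedy choice happens to be optimal for these denominations, but greedy coin change is not optimal for every coin system.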
Applications of Greedy Algorithm
o It is used in finding the shortest path (e.g., Dijkstra's algorithm).
o It is used to find the minimum spanning tree using Prim's algorithm or Kruskal's algorithm.
o It is used in job sequencing with deadlines.
o This algorithm is also used to solve the fractional knapsack problem.
Q. Divide and Conquer Method vs Dynamic Programming
Divide and Conquer:
1. It involves three steps at each level of recursion:
o Divide the problem into a number of subproblems.
o Conquer the subproblems by solving them recursively.
o Combine the solutions to the subproblems into the solution for the original problem.
6. For example: Merge Sort, Binary Search, etc.
Dynamic Programming:
1. It involves a sequence of four steps:
o Characterize the structure of an optimal solution.
o Recursively define the values of optimal solutions.
o Compute the values of optimal solutions in a bottom-up manner.
o Construct an optimal solution from the computed information.
6. For example: Matrix Chain Multiplication.
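The dynamic-programming steps can be sketched for the matrix chain multiplication example (a minimal bottom-up sketch; the dimension lists used below are illustrative, not from the notes):

```python
def matrix_chain_order(dims):
    """Minimum scalar multiplications to multiply a chain of matrices.

    Matrix i has dimensions dims[i-1] x dims[i]; there are n = len(dims) - 1
    matrices.  The optimal cost m[i][j] is characterized by a split point k,
    defined recursively, and computed bottom-up over increasing chain lengths.
    """
    n = len(dims) - 1
    # m[i][j]: minimum cost to compute the product of matrices i..j (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):           # chain length, bottom-up
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)         # try every split point k
            )
    return m[1][n]

# Two matrices, 10x20 and 20x30, need 10*20*30 = 6000 multiplications
print(matrix_chain_order([10, 20, 30]))      # → 6000
print(matrix_chain_order([10, 20, 30, 40]))  # → 18000
```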
Consider T(n) = 2T(n/2) + n².
We have to obtain the asymptotic bound using the recursion tree method.
Then draw the recurrence tree. The root costs n², the next level costs 2·(n/2)² = n²/2, the next n²/4, and so on; the total is the geometric series n²(1 + 1/2 + 1/4 + …) ≤ 2n², so T(n) = Θ(n²).
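The Θ(n²) bound can be checked numerically by evaluating the recurrence directly (a sketch assuming the intended recurrence is T(n) = 2T(n/2) + n² with base case T(1) = 1 and n a power of two):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Evaluate T(n) = 2T(n/2) + n^2 with T(1) = 1, for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n * n

# The recursion tree sums n^2 * (1 + 1/2 + 1/4 + ...) <= 2n^2; in fact
# T(n) = 2n^2 - n exactly for this base case, so T(n)/n^2 approaches 2.
for n in (4, 64, 1024):
    print(n, T(n), T(n) / n**2)
```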
The sequential search (sometimes called a linear search) is the simplest type of search. It is used when a list of integers is not in any order. It examines the first element in the list and then examines each "sequential" element in the list until a match is found. Other examples of simple algorithms are: Bubble Sort, factorial calculation, and the Fibonacci sequence.
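A minimal sketch of the sequential search just described (the sample list is illustrative):

```python
def sequential_search(values, target):
    """Return the index of the first match, or -1 if the target is absent."""
    for index, value in enumerate(values):
        if value == target:   # examine each element in order
            return index
    return -1                 # reached the end without finding a match

print(sequential_search([7, 3, 9, 1, 4], 9))  # → 2
print(sequential_search([7, 3, 9, 1, 4], 8))  # → -1
```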
Transitive Closure is the reachability matrix to reach from vertex u to vertex v of a graph.
One graph is given, we have to find a vertex v which is reachable from another vertex u,
for all vertex pairs (u, v).
The final matrix is the Boolean type. When there is a value 1 for vertex u to vertex v, it
means that there is at least one path from u to v.
Input:
1101
0110
0011
0001
Output:
The matrix of transitive closure
1111
0111
0011
0001
Algorithm
transClosure(graph)
Begin
   copy the adjacency matrix into another matrix named transMat
   for each vertex k in the graph, do
      for each vertex i in the graph, do
         for each vertex j in the graph, do
            transMat[i, j] := transMat[i, j] OR (transMat[i, k] AND transMat[k, j])
         done
      done
   done
   Display the transMat
End
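The pseudocode above (Warshall's algorithm) can be sketched in Python and checked against the example matrices:

```python
def transitive_closure(graph):
    """Warshall's algorithm: Boolean reachability matrix of a digraph.

    trans_mat[i][j] becomes 1 if vertex j is reachable from vertex i,
    either directly or through an intermediate vertex k.
    """
    n = len(graph)
    trans_mat = [row[:] for row in graph]   # copy the adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                trans_mat[i][j] = trans_mat[i][j] or (trans_mat[i][k] and trans_mat[k][j])
    return trans_mat

adj = [[1, 1, 0, 1],
       [0, 1, 1, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 1]]
for row in transitive_closure(adj):
    print(*row)   # reproduces the closure matrix from the example
```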
Define feasible Solution:
A feasible solution is a concept commonly used in the context of
problem-solving and decision-making, especially in fields like
mathematics, engineering, operations research, and business. It refers
to a solution or course of action that is both practical and achievable
within the given constraints and limitations.
Key characteristics of a feasible solution include:
o It satisfies all the given constraints and limitations of the problem.
o It is practical and achievable with the available resources.
Intractable Problems:
1. Difficult to Solve: Intractable problems are those for which there is no known
algorithm that can solve them efficiently in all cases. The best-known algorithms for
these problems have exponential or superpolynomial time complexity.
2. NP Class: Many intractable problems are part of the "NP" class in computational
complexity, which includes problems for which a proposed solution can be verified
efficiently but not necessarily found efficiently.
3. Complex Examples: Examples of intractable problems include the traveling
salesman problem, the knapsack problem, and the Boolean satisfiability problem.
4. Unpredictable Performance: Algorithms for intractable problems can perform well
for some instances but poorly for others. They are often used with heuristics or
approximations to find reasonably good solutions.
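The exponential cost of brute force can be seen in a subset-sum sketch (a hypothetical illustration: it examines all 2^n subsets, which is exactly why the problem is considered intractable at scale):

```python
from itertools import combinations

def subset_sum_bruteforce(values, target):
    """Check every subset (2^n of them) for one that sums to target."""
    n = len(values)
    for r in range(n + 1):
        for subset in combinations(values, r):   # all subsets of size r
            if sum(subset) == target:
                return list(subset)
    return None

# Verifying a candidate subset is fast; finding one may require
# examining exponentially many subsets in the worst case.
print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))  # → [4, 5]
```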
Ans.
In tree data structures, particularly in binary trees, internal path
length and external path length are metrics used to measure the
efficiency of tree operations like searching, inserting, or deleting
nodes.
1. Internal Path Length (IPL):
The internal path length of a tree is the sum of the depths (or levels) of
all internal nodes in the tree. An internal node is any node that has at
least one child (i.e., it is not a leaf node).
Depth of a node: The number of edges on the path from the root
to that node.
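Both path lengths can be computed with one traversal (a minimal sketch in which leaf nodes are counted as external nodes at their depths; the sample tree is hypothetical):

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def path_lengths(root, depth=0):
    """Return (internal path length, external path length).

    Internal nodes (at least one child) contribute their depth to IPL;
    leaf nodes are treated as external and contribute their depth to EPL.
    """
    if root is None:
        return 0, 0
    if root.left is None and root.right is None:
        return 0, depth                  # leaf: external node
    ipl, epl = depth, 0                  # internal node adds its own depth
    for child in (root.left, root.right):
        child_ipl, child_epl = path_lengths(child, depth + 1)
        ipl += child_ipl
        epl += child_epl
    return ipl, epl

# Root with two children; the left child itself has two leaves:
tree = Node(Node(Node(), Node()), Node())
print(path_lengths(tree))  # → (1, 5)
```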
2. External Path Length (EPL):
The external path length of a tree is the sum of the depths of all external (leaf) nodes in the tree.
Naive String Matching Algorithm:
Input:
Text: A string of length n.
Pattern: A string of length m.
Output:
Starting indices of all occurrences of the Pattern in the Text.
Algorithm NAIVE(Text, Pattern)
   n ← length of Text
   m ← length of Pattern
   for i ← 0 to n − m do
      if Text[i .. i + m − 1] = Pattern then
         report an occurrence at shift i
Example:
Text: "AABAACAADAABAABA"
Pattern: "AABA"
1.At i = 0, the window in text is "AABA", which matches the pattern.
2.At i = 1, the window in text is "ABAA", which does not match.
3.At i = 2, the window in text is "BAAC", which does not match.
4.Continue sliding and comparing until all positions are checked.
This naive algorithm works well for small inputs, but it can be
inefficient for large texts and patterns with many repeated characters.
More efficient algorithms like the Knuth-Morris-Pratt (KMP) or
Rabin-Karp are used for faster string matching in such cases.
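The sliding-window steps above can be sketched directly, using the example text and pattern from the notes:

```python
def naive_match(text, pattern):
    """Slide the pattern over the text, comparing at every possible shift."""
    n, m = len(text), len(pattern)
    matches = []
    for i in range(n - m + 1):         # every possible starting index
        if text[i:i + m] == pattern:   # compare the current window
            matches.append(i)
    return matches

print(naive_match("AABAACAADAABAABA", "AABA"))  # → [0, 9, 12]
```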
What is the convex hull problem? (out of syllabus)
Ans.
A convex hull of a set of points is the smallest convex shape (or
polygon) that can contain all the given points. In simpler terms, it’s like
stretching a rubber band around the outermost points, and when the
band tightens, it forms the boundary of the convex hull.
A convex set means that for any two points inside the shape, the
line segment joining them also lies completely inside the shape.
The convex hull is the boundary of the minimal convex set that can
contain all the points.
Convex Hull Problem:
Input: A set of n points in 2D (or higher dimensions).
Output: The vertices of the convex hull, ordered such that they
form a convex polygon.
Consider a set of points like:
Points = {(1, 1), (2, 2), (2, 4), (3, 3), (5, 1), (5, 5)}
The convex hull will be the polygon that includes only the points
forming the boundary:
Convex Hull = {(1, 1), (5, 1), (5, 5), (2, 4)}
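One standard way to compute the hull is Andrew's monotone chain (the notes do not name an algorithm, so this is just one possible sketch); it reproduces the hull from the example above:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, list(reversed(pts)))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()            # drop points that make a clockwise turn
            chain.append(p)
    return lower[:-1] + upper[:-1]     # endpoints of each chain are shared

pts = [(1, 1), (2, 2), (2, 4), (3, 3), (5, 1), (5, 5)]
print(convex_hull(pts))  # → [(1, 1), (5, 1), (5, 5), (2, 4)]
```

Collinear interior points such as (2, 2) and (3, 3) are discarded, which matches the hull listed in the example.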
What is order of growth?
Ans.
Order of growth refers to how the runtime (or space) complexity of
an algorithm changes as the size of the input increases. It helps
measure the efficiency of an algorithm in terms of its scalability and
performance. The concept of order of growth is typically expressed
using Big-O notation, which provides an upper bound on the time or
space complexity in relation to the input size.
Purpose:
Order of growth allows us to understand how an algorithm
behaves as the size of the input increases, especially for large
inputs.
It focuses on the most significant factors that contribute to the
complexity, ignoring constants and lower-order terms.
Common Orders of Growth:
These are the most common types of growth rates (or complexities)
that describe how the execution time or memory usage scales with the
input size n.
Constant Time – O(1), Logarithmic Time – O(log n), Linear Time – O(n), Linearithmic Time – O(n log n), Quadratic Time – O(n²), Cubic Time – O(n³), Exponential Time – O(2^n)
The order of growth describes how the computational cost (time or
space) of an algorithm increases as the size of its input grows. It gives
a high-level understanding of algorithm efficiency, focusing on the
most dominant factors through Big-O notation.
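The dominance of the leading term can be seen by tabulating operation counts for a few growth rates (a simple illustration, not from the notes):

```python
import math

# Approximate operation counts for common growth rates at increasing n
for n in (10, 100, 1000):
    counts = {
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
    }
    print(n, {name: round(c) for name, c in counts.items()})
# As n grows, the gap between successive rows widens rapidly:
# the highest-order term dominates, so constants and lower-order
# terms can safely be ignored.
```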
What are the basic asymptotic efficiency classes?
Ans.
Asymptotic efficiency classes describe the behavior of an algorithm's
running time or space requirements as the input size grows to infinity.
These classes are used to categorize algorithms based on their order of
growth, which allows for comparing algorithm efficiency. The most
commonly used asymptotic notations to classify these efficiency
classes are Big-O, Omega (Ω), and Theta (Θ).
Here are the basic asymptotic efficiency classes, typically
expressed in terms of Big-O notation (which represents the
worst-case upper bound).
Summary of Asymptotic Efficiency Classes:
Class        | Big-O Notation | Example Algorithms                         | Efficiency
Constant     | O(1)           | Accessing array elements                   | Best possible
Logarithmic  | O(log n)       | Binary Search                              | Very efficient
Linear       | O(n)           | Linear search, traversing a list           | Reasonable for most tasks
Linearithmic | O(n log n)     | Merge Sort, Quick Sort (best/average case) | Good for large inputs
Quadratic    | O(n²)          | Bubble Sort, Insertion Sort                | Poor for large inputs
Cubic        | O(n³)          | Naive matrix multiplication                | Very inefficient
Exponential  | O(2^n)         | Solving TSP (brute-force), subset-sum      | Highly inefficient
Factorial    | O(n!)          | Brute-force permutations, solving TSP      | Impractical
List the factors which affect the running time of an algorithm:
1.Efficient Compression:
o Huffman coding can achieve significant compression ratios by