Notes On Algorithm

Dynamic Programming vs. Greedy Method

1. Dynamic Programming is used to obtain the optimal solution. The Greedy Method is also used to obtain an optimal solution.
2. In Dynamic Programming, we make a choice at each step, but the choice may depend on the solutions to subproblems. In a Greedy Algorithm, we make whatever choice seems best at the moment and then solve the subproblems that arise after the choice is made.
3. Dynamic Programming is comparatively less efficient; the Greedy Method is comparatively more efficient.
4. Example: 0/1 Knapsack (Dynamic Programming); Fractional Knapsack (Greedy Method).
5. Dynamic Programming is guaranteed to generate an optimal solution using the Principle of Optimality; in the Greedy Method, there is no such guarantee of getting an optimal solution.
Divide and Conquer Introduction
Divide and Conquer is an algorithmic pattern. In this design method, we take a problem on a large input, break the input into smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise solutions into a global solution. This mechanism of solving the problem is called the Divide & Conquer Strategy.
A Divide and Conquer algorithm solves a problem using the following three steps:

1. Divide: Break the original problem into a set of subproblems.
2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole problem.
Examples: The following computer algorithms are based on the Divide & Conquer
approach:

1. Maximum and Minimum Problem


2. Binary Search
3. Sorting (merge sort, quick sort)
4. Tower of Hanoi.
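As a quick illustration of the three steps, here is a minimal merge sort sketch in Python (the Divide, Conquer, and Combine phases are marked in the comments):

# A minimal merge sort sketch showing the three divide-and-conquer steps.
def merge_sort(a):
    if len(a) <= 1:                       # base case: already sorted
        return a
    mid = len(a) // 2                     # Divide: split the input in half
    left = merge_sort(a[:mid])            # Conquer: sort each half recursively
    right = merge_sort(a[mid:])
    merged = []                           # Combine: merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3]))     # [1, 2, 3, 4, 5, 7]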
Greedy Algorithm
The greedy method is one of the strategies, like Divide and Conquer, used to solve problems. This method is used for solving optimization problems. An optimization problem is a problem that demands either a maximum or a minimum result. Let's understand it through some terms.
The Greedy method is the simplest and most straightforward approach. It is not an algorithm but a technique. The main idea of this approach is that the decision is taken on the basis of the currently available information: whatever information is present, the decision is made without worrying about the effect of the current decision on the future.
This technique is basically used to determine a feasible solution that may or may not be optimal. A feasible solution is a subset that satisfies the given criteria.
Characteristics of Greedy method
The following are the characteristics of a greedy method:
o To construct the solution in an optimal way, the algorithm maintains two sets: one contains the chosen items, and the other contains the rejected items.
o A greedy algorithm makes good local choices in the hope that the resulting solution will be feasible and optimal.
Components of Greedy Algorithm
The components that can be used in the greedy algorithm are:

o Candidate set: the set of elements from which a solution is created.
o Selection function: chooses the candidate or subset that can be added to the solution.
o Feasibility function: determines whether a candidate or subset can be used to contribute to the solution.
o Objective function: assigns a value to the solution or to a partial solution.
o Solution function: indicates whether a complete solution has been reached.
Applications of Greedy Algorithm
o It is used in finding the shortest path.
o It is used to find the minimum spanning tree using Prim's algorithm or Kruskal's algorithm.
o It is used in job sequencing with deadlines.
o It is also used to solve the fractional knapsack problem, as sketched below.
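A minimal greedy sketch of the fractional knapsack problem in Python (the item values, weights, and capacity here are made-up example data):

# Greedy choice: always take the item with the highest value/weight ratio.
def fractional_knapsack(items, capacity):
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # take the whole item, or a fraction of it
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0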
Q. Divide and Conquer Method vs Dynamic Programming

Divide and Conquer Method:
1. It involves three steps at each level of recursion: divide the problem into a number of subproblems; conquer the subproblems by solving them recursively; combine the solutions to the subproblems into the solution for the original problem.
2. It is recursive.
3. It does more work on subproblems and hence has more time consumption.
4. It is a top-down approach.
5. The subproblems are independent of each other.
6. Examples: Merge Sort, Binary Search, etc.

Dynamic Programming:
1. It involves a sequence of four steps: characterize the structure of optimal solutions; recursively define the values of optimal solutions; compute the values of optimal solutions in a bottom-up fashion; construct an optimal solution from the computed information.
2. It is non-recursive.
3. It solves each subproblem only once and then stores the result in a table.
4. It is a bottom-up approach.
5. The subproblems are interdependent.
6. Example: Matrix Chain Multiplication.
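The difference in points 3 and 5 is easy to see with Fibonacci numbers, a convenient (if informal) example of overlapping subproblems; a small sketch:

# Plain recursion recomputes overlapping subproblems; bottom-up dynamic
# programming solves each subproblem once and stores it in a table.
def fib_recursive(n):                  # divide-and-conquer style: O(2^n)
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_dp(n):                         # bottom-up DP with a table: O(n)
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_recursive(10), fib_dp(10))   # 55 55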

Q. Recursion Tree Method

1. The Recursion Tree Method is a pictorial representation of an iteration method in the form of a tree, where the nodes are expanded at each level.
2. In general, we consider the second term in the recurrence as the root.
3. It is useful when a divide & conquer algorithm is used.
4. It is sometimes difficult to come up with a good guess. In a recursion tree, each node represents the cost of a single subproblem.
5. We sum the costs within each level of the tree to obtain a set of per-level costs, and then sum all the per-level costs to determine the total cost of all levels of the recursion.
6. A recursion tree is best used to generate a good guess, which can then be verified by the Substitution Method.
Example 1

Consider T(n) = 2T(n/2) + n².
We have to obtain the asymptotic bound using the recursion tree method, so we draw the recurrence tree and sum its costs level by level.
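A sketch of the per-level sums for this recurrence: level i of the tree contains 2^i subproblems, each of size n/2^i, so the cost of level i is 2^i · (n/2^i)² = n²/2^i. Summing over all levels gives a decreasing geometric series:

T(n) = n² + n²/2 + n²/4 + ... ≤ n² · (1 + 1/2 + 1/4 + ...) = 2n²

so the recursion tree suggests the guess T(n) = Θ(n²), which can then be verified by the Substitution Method.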

Q. What are sequential algorithms?

A sequential algorithm is a type of algorithm that processes data in a linear or sequential manner. This means that the algorithm operates on elements one after another, following a predetermined order, without any parallel processing. Here are some key characteristics and details about sequential algorithms:
Characteristics of Sequential Algorithms:
1. Linear Execution: Sequential algorithms typically execute instructions in a straight line from start to finish. Each step must be completed before moving on to the next.
2. Single-threaded: These algorithms usually run on a single thread or processor, making them straightforward and easy to implement. They do not take advantage of multi-core processors for parallel execution.
3. Deterministic Behavior: Sequential algorithms produce the same output for a given input every time they are executed, following a fixed sequence of operations.
4. Ease of Understanding: Their step-by-step nature makes them easy to read, trace, and debug.
5. Memory Access Patterns: They tend to access data in a predictable, often contiguous, order.
The sequential search (sometimes called a linear search) is the simplest type of search; it is used when a list of integers is not in any order. It examines the first element in the list and then examines each "sequential" element in the list until a match is found, as sketched below. Other examples are Bubble Sort, factorial calculation, and the Fibonacci sequence.
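A minimal Python sketch of the sequential (linear) search just described:

# Examine elements one after another until the target is found.
def sequential_search(items, target):
    for index, value in enumerate(items):
        if value == target:
            return index          # match found: return its position
    return -1                     # the target is not in the list

print(sequential_search([7, 3, 9, 1], 9))   # 2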

Q. Transitive Closure of a graph:

The transitive closure is the reachability matrix of a graph: for every pair of vertices (u, v), it records whether vertex v is reachable from vertex u. Given a graph, we have to determine, for all vertex pairs (u, v), whether there is a path from u to v.
The final matrix is Boolean: a value of 1 in position (u, v) means that there is at least one path from u to v.

Input and Output

Input:
1101
0110
0011
0001

Output:
The matrix of transitive closure
1111
0111
0011
0001

Algorithm

transClosure(graph)

Input: The given graph.
Output: The transitive closure matrix.

Begin
   copy the adjacency matrix into another matrix named transMat
   for each vertex k in the graph, do
      for each vertex i in the graph, do
         for each vertex j in the graph, do
            transMat[i, j] := transMat[i, j] OR (transMat[i, k] AND transMat[k, j])
         done
      done
   done
   display transMat
End
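A runnable Python sketch of the pseudocode above (this is Warshall's algorithm; the input matrix is the one from the example):

def trans_closure(graph):
    n = len(graph)
    trans_mat = [row[:] for row in graph]    # copy the adjacency matrix
    for k in range(n):                       # k: intermediate vertex
        for i in range(n):
            for j in range(n):
                # i reaches j directly, or i reaches k and k reaches j
                trans_mat[i][j] = trans_mat[i][j] or (trans_mat[i][k] and trans_mat[k][j])
    return trans_mat

graph = [[1, 1, 0, 1],
         [0, 1, 1, 0],
         [0, 0, 1, 1],
         [0, 0, 0, 1]]
for row in trans_closure(graph):
    print(*row)       # prints the output matrix shown above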
Q. Define feasible solution:
A feasible solution is a concept commonly used in the context of
problem-solving and decision-making, especially in fields like
mathematics, engineering, operations research, and business. It refers
to a solution or course of action that is both practical and achievable
within the given constraints and limitations.
Key characteristics of a feasible solution include:

1. Practicality: A feasible solution is one that can be implemented in real-world conditions. It is a realistic and viable option given the available resources, time, and expertise.
2. Compatibility with Constraints: Feasible solutions adhere to the
constraints and restrictions of the problem. These constraints may
include budget limitations, time restrictions, physical limitations, or
other specified conditions.
3. Achievability: A feasible solution is one that can be realized or
accomplished using available means and without exceeding the
boundaries defined by the problem.
4. Effectiveness: The solution must effectively address the problem or
achieve the desired goals. It should provide a meaningful and
satisfactory outcome.

In summary, a feasible solution is not just a theoretical or ideal answer to a problem but one that can practically and realistically be implemented given the constraints and resources at hand. It is a solution that can be executed successfully in a specific context.

Q. Define optimal solution:

An optimal solution is the best possible outcome or result among all the available alternatives in a given problem or decision-making context. It represents the most favorable and efficient solution, typically in terms of maximizing benefits, minimizing costs, or achieving a specific objective. Optimal solutions are often the primary focus in optimization problems across various fields, including mathematics, engineering, economics, and operations research.
Key characteristics of an optimal solution include:

1. Maximization or Minimization: An optimal solution may either maximize desired outcomes (e.g., profit, utility, performance) or minimize undesirable factors (e.g., cost, risk, errors).
2. Efficiency: It is the most efficient and effective solution, providing
the best balance of benefits and costs or the highest level of
performance within the given constraints.
3. Satisfying Objectives: It meets or exceeds the goals and objectives
of the problem or decision at hand. The objectives may include
achieving the highest revenue, the lowest cost, the shortest time, or
the greatest utility, depending on the context.
4. Existence: In some cases, an optimal solution might not exist due to infeasible constraints or an unattainable set of objectives. In such instances, the goal is to find the best possible solution within the given limitations.

Optimal solutions are essential in optimization problems, where the aim is to find the most efficient or effective course of action, and they serve as benchmarks for evaluating the performance of other potential solutions. The process of finding an optimal solution often involves mathematical modeling, algorithms, and decision analysis to identify the best approach or combination of factors.

Q. Compare and contrast tractable and intractable problems.

Ans:
Tractable and intractable problems are terms commonly used in computer science and
computational complexity theory to describe the ease or difficulty of solving problems.
These terms help categorize problems based on their computational complexity and the
resources (time and/or space) required for their solution. Here's a comparison and
contrast between tractable and intractable problems:
Tractable Problems:
1. Efficiently Solvable: Tractable problems are those that can be solved efficiently,
often with algorithms that run in polynomial time, which means the time required to
solve the problem grows at most as a polynomial function of the problem size.
2. Class P: In computational complexity theory, problems that can be solved in polynomial time are said to be in "P", indicating that they belong to a class of problems that are easy to solve.
3. Practical Examples: Many everyday problems fall into this category, such as
sorting lists, finding the shortest path in a graph, and basic arithmetic operations.
4. Predictable Performance: Algorithms for tractable problems tend to have
predictable and consistent performance, making them suitable for real-time applications
and large-scale data processing.

Intractable Problems:

1. Difficult to Solve: Intractable problems are those for which no known algorithm can solve all cases efficiently. The best-known algorithms for these problems have exponential or superpolynomial time complexity.
2. Class NP: Many intractable problems belong to the class "NP" in computational complexity, which includes problems for which a proposed solution can be verified efficiently but not necessarily found efficiently.
3. Complex Examples: Examples of intractable problems include the traveling
salesman problem, the knapsack problem, and the Boolean satisfiability problem.
4. Unpredictable Performance: Algorithms for intractable problems can perform well
for some instances but poorly for others. They are often used with heuristics or
approximations to find reasonably good solutions.

Comparison:

1. Efficiency: The key difference is the efficiency of the solution methods. Tractable problems have efficient algorithms that can find solutions quickly, while intractable problems lack efficient algorithms for the general case.
2. Polynomial Time: Tractable problems have algorithms that run in polynomial time, whereas intractable problems have algorithms with exponential or superpolynomial time complexity.
3. Practicality: Tractable problems are practical and suitable for real-world
applications, while intractable problems are often challenging to solve in practice.
4. Examples: Tractable problems include common computational tasks, while
intractable problems are typically complex optimization or decision problems.
5. Solvability: Tractable problems have known efficient solutions, while intractable
problems may remain unsolved in polynomial time.
In summary, the distinction between tractable and intractable problems is based on the
efficiency of algorithms to solve them, with tractable problems having efficient solutions
and intractable problems lacking such solutions, at least in the general case.
Q. Define internal path length and external path length, with examples.

Ans.
In tree data structures, particularly in binary trees, internal path
length and external path length are metrics used to measure the
efficiency of tree operations like searching, inserting, or deleting
nodes.
1. Internal Path Length (IPL):

The internal path length of a tree is the sum of the depths (or levels) of
all internal nodes in the tree. An internal node is any node that has at
least one child (i.e., it is not a leaf node).
 Depth of a node: The number of edges on the path from the root
to that node.
Example:

Consider this binary tree (depths in parentheses):

            A (0)
           /     \
       B (1)     C (1)
       /   \         \
   D (2)   E (2)    F (2)

 The depth of node A is 0; B and C are at depth 1; D, E, and F are at depth 2.
 The internal nodes are A, B, and C.
Internal Path Length (IPL) = depth of A + depth of B + depth of C = 0 + 1 + 1 = 2
2. External Path Length (EPL):
The external path length is the sum of the depths of all external nodes (leaf nodes). A leaf node is a node that has no children.
Example:

Using the same tree:

            A (0)
           /     \
       B (1)     C (1)
       /   \         \
   D (2)   E (2)    F (2)

 The external (leaf) nodes are D, E, and F.
 The depths of D, E, and F are all 2.
External Path Length (EPL) = depth of D + depth of E + depth of F = 2 + 2 + 2 = 6
Summary:
 Internal Path Length measures the path lengths of internal
nodes.
 External Path Length measures the path lengths of leaf
(external) nodes.
These path lengths are often used to calculate the average path length
in a tree, useful for understanding tree operation efficiency.
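Both quantities can be computed with one traversal; a minimal Python sketch for the example tree above (the (left, right) child representation is just for illustration):

# Each node maps to its (left, right) children; None means no child.
tree = {'A': ('B', 'C'), 'B': ('D', 'E'), 'C': (None, 'F'),
        'D': (None, None), 'E': (None, None), 'F': (None, None)}

def path_lengths(node, depth=0):
    left, right = tree[node]
    if left is None and right is None:    # leaf: its depth counts toward EPL
        return 0, depth
    ipl, epl = depth, 0                   # internal node: its depth counts toward IPL
    for child in (left, right):
        if child is not None:
            child_ipl, child_epl = path_lengths(child, depth + 1)
            ipl += child_ipl
            epl += child_epl
    return ipl, epl

print(path_lengths('A'))   # (2, 6): IPL = 2, EPL = 6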
Q. Write the brute force algorithm for string matching: Algorithm NAIVE(Text, Pattern).
Ans. The Naive (Brute Force) String Matching Algorithm is one of
the simplest algorithms to find all occurrences of a pattern in a given
text. The idea is to slide the pattern over the text one by one and
check for a match at each position.
Here’s the pseudocode for the NAIVE string matching algorithm:
Algorithm: NAIVE (Text, Pattern)

Input:
 Text: A string of length n.
 Pattern: A string of length m.
Output:
 Starting indices of all occurrences of the Pattern in the Text.
Algorithm NAIVE(Text, Pattern)
   n ← length of Text
   m ← length of Pattern
   for i ← 0 to n - m do                               // slide Pattern over Text
      j ← 0
      while j < m and Text[i + j] = Pattern[j] do      // compare Pattern with the current window
         j ← j + 1
      if j = m then                                    // Pattern[0...m-1] matched Text[i...i+m-1]
         print "Pattern found at index", i
Explanation:

1. Initialize Variables:
   o n: the length of the text.
   o m: the length of the pattern.
2. Sliding Window:
   o The outer loop runs from i = 0 to n - m. This slides the pattern over the text.
   o For each position i, it checks whether the pattern matches the substring of the text starting at position i.
3. Matching:
   o A while loop compares the pattern with the current window of text. The pattern's characters are compared one by one with the corresponding characters in the text.
   o If all characters of the pattern match, we print the starting index i where the pattern is found.
4. Check for Complete Match:
   o If the inner loop completes (i.e., j = m), the pattern is found starting at index i.
Time Complexity:
 Best Case: O(n) when the first character of the pattern does not
match with most of the text characters.
 Worst Case: O((n - m + 1) * m), when almost every window requires comparing all m characters (e.g., searching for a pattern like "AAA" in a text of "AAAAAAAA").
Example:

Text: "AABAACAADAABAABA"
Pattern: "AABA"
1. At i = 0, the window in the text is "AABA", which matches the pattern.
2. At i = 1, the window is "ABAA", which does not match.
3. At i = 2, the window is "BAAC", which does not match.
4. Continue sliding and comparing until all positions are checked; further matches occur at i = 9 and i = 12.
This naive algorithm works well for small inputs, but it can be
inefficient for large texts and patterns with many repeated characters.
More efficient algorithms like the Knuth-Morris-Pratt (KMP) or
Rabin-Karp are used for faster string matching in such cases.
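A runnable Python version of the NAIVE pseudocode above, on the same example:

def naive_match(text, pattern):
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):            # slide the pattern over the text
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                        # the full pattern matched at position i
            print("Pattern found at index", i)

naive_match("AABAACAADAABAABA", "AABA")   # indices 0, 9, and 12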
Q. What is the convex hull problem? (out of syllabus)
Ans.
A convex hull of a set of points is the smallest convex shape (or
polygon) that can contain all the given points. In simpler terms, it’s like
stretching a rubber band around the outermost points, and when the
band tightens, it forms the boundary of the convex hull.
 A convex set means that for any two points inside the shape, the
line segment joining them also lies completely inside the shape.
 The convex hull is the boundary of the minimal convex set that can
contain all the points.
Convex Hull Problem:
 Input: A set of n points in 2D (or higher dimensions).
 Output: The vertices of the convex hull, ordered such that they
form a convex polygon.
Consider a set of points like:
Points = {(1, 1), (2, 2), (2, 4), (3, 3), (5, 1), (5, 5)}
The convex hull will be the polygon that includes only the points
forming the boundary:
Convex Hull = {(1, 1), (5, 1), (5, 5), (2, 4)}
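Although out of syllabus, a compact sketch of one standard O(n log n) approach, Andrew's monotone chain algorithm, reproduces the hull above:

def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o);
    # a positive value means the turn o -> a -> b is counter-clockwise
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    points = sorted(set(points))
    if len(points) <= 2:
        return points
    lower, upper = [], []
    for p in points:                      # build the lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()                   # drop points that are not hull vertices
        lower.append(p)
    for p in reversed(points):            # build the upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]        # concatenate, dropping duplicate endpoints

pts = [(1, 1), (2, 2), (2, 4), (3, 3), (5, 1), (5, 5)]
print(convex_hull(pts))   # [(1, 1), (5, 1), (5, 5), (2, 4)]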
Q. What is order of growth?
Ans.
Order of growth refers to how the runtime (or space) complexity of
an algorithm changes as the size of the input increases. It helps
measure the efficiency of an algorithm in terms of its scalability and
performance. The concept of order of growth is typically expressed
using Big-O notation, which provides an upper bound on the time or
space complexity in relation to the input size.
Purpose:
 Order of growth allows us to understand how an algorithm
behaves as the size of the input increases, especially for large
inputs.
 It focuses on the most significant factors that contribute to the
complexity, ignoring constants and lower-order terms.
Common Orders of Growth:

These are the most common types of growth rates (or complexities)
that describe how the execution time or memory usage scales with the
input size n.
o Constant Time: O(1)
o Logarithmic Time: O(log n)
o Linear Time: O(n)
o Linearithmic Time: O(n log n)
o Quadratic Time: O(n²)
o Cubic Time: O(n³)
o Exponential Time: O(2^n)
The order of growth describes how the computational cost (time or
space) of an algorithm increases as the size of its input grows. It gives
a high-level understanding of algorithm efficiency, focusing on the
most dominant factors through Big-O notation.
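A quick way to feel these growth rates is to tabulate a few of them as n doubles; a small Python sketch:

import math

# As n doubles: log n barely moves, n log n slightly more than doubles,
# and n^2 quadruples.
for n in (1000, 2000, 4000):
    print(n, round(math.log2(n), 1), round(n * math.log2(n)), n ** 2)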
Q. What are the basic asymptotic efficiency classes?
Ans.
Asymptotic efficiency classes describe the behavior of an algorithm's
running time or space requirements as the input size grows to infinity.
These classes are used to categorize algorithms based on their order of
growth, which allows for comparing algorithm efficiency. The most
commonly used asymptotic notations to classify these efficiency
classes are Big-O, Omega (Ω), and Theta (Θ).
Here are the basic asymptotic efficiency classes, typically
expressed in terms of Big-O notation (which represents the
worst-case upper bound).
Summary of Asymptotic Efficiency Classes:

Class        | Big-O Notation | Example Algorithms                          | Efficiency
Constant     | O(1)           | Accessing array elements                    | Best possible
Logarithmic  | O(log n)       | Binary Search                               | Very efficient
Linear       | O(n)           | Linear search, traversing a list            | Reasonable for most tasks
Linearithmic | O(n log n)     | Merge Sort, Quick Sort (best/average case)  | Good for large inputs
Quadratic    | O(n²)          | Bubble Sort, Insertion Sort                 | Poor for large inputs
Cubic        | O(n³)          | Naive matrix multiplication                 | Very inefficient
Exponential  | O(2^n)         | Solving TSP (brute force), subset-sum       | Highly inefficient
Factorial    | O(n!)          | Brute-force permutations, solving TSP       | Impractical
Q. List the factors which affect the running time of an algorithm.

Several factors affect the running time of an algorithm, impacting how efficiently it performs on a given input. Here is a consolidated list of the key factors:
Factors Affecting Algorithm Running Time

o Input Size (n): Larger input sizes generally lead to longer running times.
o Input Data Distribution: Specific patterns in the input data (sorted, unsorted, repeated values) can influence performance.
o Algorithm's Time Complexity: The theoretical complexity (e.g., O(n), O(log n)) dictates how running time scales with input size.
o Hardware Specifications: CPU speed, number of cores, RAM, and cache can impact execution speed.
o Implementation Details: Coding practices and the choice of algorithms and data structures affect performance.
o Constant Factors: Hidden constants in the Big-O notation can impact practical running time.
o Memory Access Patterns: Algorithms that optimize cache usage run faster due to better cache locality.
o Parallelism and Concurrency: Algorithms that leverage multi-threading or parallel processing can execute faster.
o Programming Language: Low-level languages typically offer better performance than higher-level languages.
o Compiler Optimization: Compiler settings and optimization flags can significantly enhance performance.
o Recursion Overhead: Algorithms with deep recursion may incur stack-management overhead.
o System Load: Multitasking environments can limit the resources available to an algorithm.
Q. List the advantages of Huffman Coding.

Huffman coding is a widely used algorithm for lossless data compression. Here are some of its key advantages:
Advantages of Huffman Coding

1. Efficient Compression:
o Huffman coding can achieve significant compression ratios by assigning shorter codes to more frequently occurring symbols and longer codes to less frequent symbols. This leads to reduced file sizes.
2. Lossless Compression:
o Unlike lossy compression techniques, Huffman coding
preserves all original data. This means that the original
information can be perfectly reconstructed from the
compressed data.
3. Adaptive Coding:
o Huffman coding can be adapted to changing data distributions.
Adaptive Huffman coding can update codes dynamically as
more symbols are processed, which can be beneficial for
streaming data.
4. Simple Implementation:
o The algorithm is relatively straightforward to implement.
Building the Huffman tree and generating codes can be done
with basic data structures (like heaps) and does not require
complex algorithms.
5. Optimality:
o For a given set of symbols and their frequencies, Huffman
coding produces an optimal prefix code (i.e., no code is a prefix
of another) that minimizes the average code length. This
makes it more efficient than fixed-length coding schemes.
6. Reduced Redundancy:
o By using variable-length codes, Huffman coding reduces
redundancy in data representation, which can lead to more
efficient storage and transmission.
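A minimal sketch of how such codes can be built with a heap (the symbol frequencies are illustrative values; in general, the exact bits assigned depend on tie-breaking, but more frequent symbols always receive shorter codes):

import heapq

def huffman_codes(frequencies):
    # Each heap entry: (subtree frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # the two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}           # left branch: bit 0
        merged.update({s: "1" + c for s, c in right.items()})    # right branch: bit 1
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
# a -> '0', b -> '101', c -> '100', d -> '111', e -> '1101', f -> '1100'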
