6th Sem Algorithm (c-14) Answer

The document discusses various computer science concepts including recursion trees, time complexity of insertion sort, hash functions, double hashing, quadratic probing, greedy algorithms, matrix chain multiplication, single source shortest paths, open addressing, and recurrence relations. It also discusses algorithm specifications, pseudocode, the master method, linear search, time complexity of merge sort, linear probing, Huffman codes, dynamic programming, minimum spanning trees, and ways to represent graphs.

Uploaded by rituprajna2004

YEAR-2022, 5 MARKS

1. Write down about the Recursion Tree method.

The Recursion Tree Method is a way of solving recurrence relations. In this method, a recurrence relation is converted into a recursive tree. Each node represents the cost incurred at that level of recursion. To find the total cost, the costs of all levels are summed up.
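As a worked example (using merge sort's recurrence, a standard illustration rather than one taken from this document), the tree for T(n) = 2T(n/2) + cn has total cost cn at every level and about log₂ n levels:

```latex
% Level 0: one subproblem of size n,     cost cn
% Level 1: two subproblems of size n/2,  total cost 2 \cdot cn/2 = cn
% Level i: 2^i subproblems of size n/2^i, total cost cn
% Depth: about \log_2 n levels, each contributing cn
T(n) \;=\; \sum_{i=0}^{\log_2 n} cn \;=\; cn(\log_2 n + 1) \;=\; \Theta(n \log n)
```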

2. Write down the time complexity of the insertion sort algorithm.

The worst-case (and average-case) time complexity of the insertion sort algorithm is O(n²), meaning that, in the worst case, the time taken to sort a list is proportional to the square of the number of elements in the list. The best-case time complexity of insertion sort is O(n), which occurs when the list is already sorted.

3. What is a Hash function?

A Hash Function is a function that converts a given numeric or alphanumeric key to a small practical integer value. The mapped integer value is used as an index in the hash table. In simple terms, a hash function maps a large number or string to a small integer that can be used as the index in the hash table.
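As a minimal sketch (the polynomial base-31 scheme and the table size are illustrative assumptions, not taken from this document), a string key can be mapped to a table index like this:

```python
def hash_string(key: str, table_size: int) -> int:
    """Map a string key to a slot index in [0, table_size) using
    a common polynomial rolling scheme (base 31)."""
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) % table_size
    return h

print(hash_string("apple", 11))  # some fixed index in 0..10
```

The same key always maps to the same slot, which is what lets the hash table find the record again.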

4. Write down the use of double hashing.

The advantage of double hashing is that it is one of the best forms of probing, producing a uniform distribution of records throughout a hash table. This technique does not tend to produce clusters. It is one of the most effective methods for resolving collisions.

5. What is quadratic probing?

Quadratic probing is an open addressing scheme in computer programming for resolving hash collisions in hash tables. Quadratic probing operates by taking the original hash index and adding successive values of an arbitrary quadratic polynomial until an open slot is found.
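A minimal sketch of the idea in Python (the division-method hash and the table size 7 are illustrative assumptions):

```python
def quadratic_probe_insert(table, key):
    """Insert key into an open-addressed table using quadratic probing:
    try slot (h + i^2) mod m for i = 0, 1, 2, ... until a free slot is found."""
    m = len(table)
    h = key % m                      # simple division-method hash (illustrative)
    for i in range(m):
        idx = (h + i * i) % m
        if table[idx] is None:
            table[idx] = key
            return idx
    raise RuntimeError("table full: no free slot found")

table = [None] * 7
for k in (10, 17, 24):               # all three keys hash to slot 3 mod 7
    quadratic_probe_insert(table, k)
```

Here the keys 10, 17, and 24 all hash to slot 3, so the probe sequence 3, 3+1², 3+2², ... (mod 7) places them in slots 3, 4, and 0.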

6. What is the greedy technique?

A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It doesn't worry whether the current best result will bring the overall optimal result. The algorithm never reverses an earlier decision, even if the choice turns out to be wrong.

7. What is matrix chain multiplication?

It is a method under dynamic programming in which the output of one step is taken as input for the next. Here, "chain" means that one matrix's column count is always equal to the next matrix's row count. In general:
If A = [aij] is a p x q matrix
and B = [bij] is a q x r matrix,
then C = AB = [cij] is a p x r matrix.
8. What is the single-source shortest paths problem?

The single-source shortest path (SSSP) problem requires finding the shortest path from a source node to all other nodes in a weighted graph, i.e. a path for which the sum of the weights of its edges is minimized.

9. What is open addressing?

Open addressing is an alternative method to resolve hash collisions. Unlike separate chaining, there are no linked lists. Each item is placed in the hash table by searching (or probing, as we'll call it) for an open bucket to place it in.

10. What is a recurrence relation?

A recurrence relation is an equation which defines a sequence based on some rule. It gives the subsequent term (next term) as a function of the preceding terms (previous terms).

2 MARKS

1. What is algorithm specification?

An algorithm specification describes the characteristics an algorithm must satisfy:

Input: An algorithm has zero or more input values. Output: An algorithm produces one or more outputs at the end. Unambiguity: An algorithm should be unambiguous, which means that the instructions in the algorithm should be clear and simple.

2. Write about pseudocode.

Pseudocode is a detailed yet readable description of what a computer program or algorithm should do. It is written in a formal yet readable style that uses natural syntax and formatting so it can be easily understood by programmers and others involved in the development process.

3. What is the master method?

The Master Theorem is a tool used to solve recurrence relations that arise in the analysis of divide-and-conquer algorithms. It provides a systematic way of solving recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants, f(n) is an asymptotically positive function, and n is the size of the problem.
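Applied to merge sort's recurrence as a worked example (a standard illustration, not one taken from this document):

```latex
T(n) = 2T(n/2) + n:\qquad a = 2,\; b = 2,\; f(n) = n
% Compare f(n) against n^{\log_b a}:
n^{\log_b a} = n^{\log_2 2} = n = \Theta(f(n))
% f(n) matches n^{\log_b a} exactly, so the "equal" case applies:
\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\log n\right) = \Theta(n \log n)
```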

4. What is linear search?

In computer science, linear search or sequential search is a method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched.

5. Write down the time complexity of merge sort.

The time complexity of merge sort is O(n log n) in all three cases (worst, average, and best), as merge sort always divides the array into two halves and takes linear time to merge the two halves.

6. What is linear probing?

Linear probing is a scheme in computer programming for resolving collisions in hash tables, which are data structures for maintaining a collection of key–value pairs and looking up the value associated with a given key.

7. What are Huffman codes?

Huffman coding is a method of data compression that is independent of the data type; that is, the data could represent an image, audio, or a spreadsheet. This compression scheme is used in JPEG and MPEG-2. Huffman coding works by looking at the data stream that makes up the file to be compressed.

8. What is dynamic programming?

Dynamic programming is a computer programming technique in which an algorithmic problem is first broken down into sub-problems, the results of those sub-problems are saved, and the saved results are then combined to find the overall solution, which is typically an optimal (maximum or minimum) value for the query.

9. What is a minimum spanning tree?

A minimum spanning tree (MST) is a subset of the edges of a connected, undirected graph that connects all the vertices with the minimum possible total edge weight. A minimum spanning tree has precisely n-1 edges, where n is the number of vertices in the graph.

10. How can a graph be represented?

A graph can be represented using 3 data structures: adjacency matrix, adjacency list, and adjacency set. An adjacency matrix can be thought of as a table with rows and columns. The row labels and column labels represent the nodes of the graph.

7 MARKS

1. What is the divide and conquer paradigm?

Divide and Conquer is an algorithm design paradigm that involves breaking a larger problem into
non-overlapping sub-problems, solving each of these sub-problems, and combining the results to
solve the original problem. A problem has non-overlapping sub-problems if you can find its solution
by solving each sub-problem once.
The three main steps in the divide and conquer paradigm are:

 Divide: break the problem into smaller, non-overlapping chunks.
 Conquer: solve the sub-problems recursively.
 Combine: combine the solutions of the smaller sub-problems to solve the original problem.

[Figure: The divide and conquer paradigm]

Merge sort, binary search, and quick sort are some of the most famous divide and conquer
algorithms.

Let’s go over some pros and cons of the divide and conquer paradigm.

Pros

 Solves difficult problems with less time complexity than its brute-force counterpart.
 Since the sub-problems are independent, they can be computed in parallel.

Cons

 Includes recursion, which consumes more space.
 Recursion down to small base cases may lead to huge recursive stacks.

2. What are space complexity and time complexity? Write down examples to justify each complexity.

Time Complexity: The time complexity of an algorithm quantifies the amount of time taken by
an algorithm to run as a function of the length of the input. Note that the time to run is a function
of the length of the input and not the actual execution time on the machine on which the
algorithm is running.

A valid algorithm takes a finite amount of time for execution. The time required by the
algorithm to solve a given problem is called the time complexity of the algorithm. Time complexity is
a very useful measure in algorithm analysis.

It is the time needed for the completion of an algorithm. To estimate the time complexity, we
need to consider the cost of each fundamental instruction and the number of times the instruction
is executed.
For example, the addition of two scalar numbers requires one addition operation, so the time
complexity of that algorithm is constant: T(n) = O(1).

Order of growth describes how the time of execution depends on the length of the input. For
instance, an algorithm with two nested loops over an array of length n executes its inner
instruction about n² times, so its execution time depends quadratically on the length of the
array. Order of growth helps to compare running times with ease.

Space Complexity:

Definition –

Problem-solving using a computer requires memory to hold temporary data or the final result while
the program is in execution. The amount of memory required by the algorithm to solve a given
problem is called the space complexity of the algorithm.

The space complexity of an algorithm quantifies the amount of space taken by an algorithm to
run as a function of the length of the input. Consider, for example, the problem of finding the
frequency of array elements: the extra count storage it needs grows with the input size.

It is the amount of memory needed for the completion of an algorithm.

To estimate the memory requirement we need to focus on two parts:

(1) A fixed part: It is independent of the input size. It includes memory for instructions (code),
constants, variables, etc.

(2) A variable part: It is dependent on the input size. It includes memory for recursion stack,
referenced variables, etc.

The addition of two scalar numbers requires one extra memory location to hold the result. Thus the space
complexity of this algorithm is constant, hence S(n) = O(1).

3. Write down the algorithm for binary search.

Binary search is a fast search algorithm with run-time complexity of O(log n). This search
algorithm works on the principle of divide and conquer, since it divides the array in half before
searching. For this algorithm to work properly, the data collection must be sorted.

Binary search looks for a particular key value by comparing the middlemost item of the
collection. If a match occurs, then the index of the item is returned. If the middle item has a
value greater than the key value, the left sub-array of the middle item is searched; otherwise,
the right sub-array is searched. This process continues recursively until the size of a subarray
reduces to zero.

BINARY SEARCH ALGORITHM

BinarySearch algorithm is an interval searching method that performs the searching in intervals
only. The input taken by the binary search algorithm must always be in a sorted array since it
divides the array into subarrays based on the greater or lower values. The algorithm follows the
procedure below −
Step 1 − Select the middle item of the array and compare it with the key value to be searched. If
they match, return the position of the middle item.

Step 2 − If it does not match the key value, check whether the key value is greater than or less
than the middle value.

Step 3 − If the key is greater, search the right sub-array; if the key is lower than the middle
value, search the left sub-array.

Step 4 − Repeat Steps 1, 2 and 3 iteratively, until the sub-array becomes empty.

Step 5 − If the key value does not exist in the array, the algorithm reports an unsuccessful
search.

Pseudocode

The pseudocode of binary search algorithms should look like this −

Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched

   Set lowerBound = 1
   Set upperBound = n

   while lowerBound <= upperBound
      set midPoint = lowerBound + ( upperBound - lowerBound ) / 2

      if A[midPoint] = x
         EXIT: x found at location midPoint
      else if A[midPoint] < x
         set lowerBound = midPoint + 1
      else
         set upperBound = midPoint - 1
      end if
   end while

   EXIT: x does not exist
end procedure
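The pseudocode above can be written as runnable Python (0-indexed, unlike the 1-indexed pseudocode; a sketch, not taken verbatim from this document):

```python
def binary_search(A, x):
    """Return the index of x in sorted list A, or -1 if absent."""
    lower, upper = 0, len(A) - 1
    while lower <= upper:
        # written this way to avoid overflow in fixed-width languages
        mid = lower + (upper - lower) // 2
        if A[mid] == x:
            return mid
        elif A[mid] < x:
            lower = mid + 1      # discard the left half
        else:
            upper = mid - 1      # discard the right half
    return -1

print(binary_search([10, 20, 30, 40, 50], 30))  # → 2
```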

4. Discuss quick sort with its algorithm.

Quick sort is a highly efficient sorting algorithm based on partitioning an array of data into
smaller arrays. A large array is partitioned into two arrays, one of which holds values smaller
than a specified value, called the pivot, based on which the partition is made, and another array
which holds values greater than the pivot value.

Quicksort partitions an array and then calls itself recursively twice to sort the two resulting
subarrays. This algorithm is quite efficient for large-sized data sets: its average-case
complexity is O(n log n), although its worst-case complexity is O(n²).

Quick Sort Pivot Pseudocode

The pseudocode for the above algorithm can be derived as −

function partitionFunc(left, right, pivot)
   leftPointer = left
   rightPointer = right - 1

   while True do
      while A[++leftPointer] < pivot do
         //do-nothing
      end while

      while rightPointer > 0 && A[--rightPointer] > pivot do
         //do-nothing
      end while

      if leftPointer >= rightPointer
         break
      else
         swap A[leftPointer], A[rightPointer]
      end if
   end while

   swap A[leftPointer], A[right]
   return leftPointer
end function
Quick Sort Pseudocode

To get more into it, let see the pseudocode for quick sort algorithm −

procedure quickSort(left, right)
   if right - left <= 0
      return
   else
      pivot = A[right]
      partition = partitionFunc(left, right, pivot)
      quickSort(left, partition - 1)
      quickSort(partition + 1, right)
   end if
end procedure
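For comparison, a runnable Python sketch; it uses the simpler Lomuto partition (last element as pivot) rather than the two-pointer scheme in the pseudocode above:

```python
def partition(A, left, right):
    """Lomuto partition: place A[right] (the pivot) in its final position."""
    pivot = A[right]
    i = left - 1                             # boundary of the "< pivot" region
    for j in range(left, right):
        if A[j] < pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[right] = A[right], A[i + 1]  # move pivot into place
    return i + 1

def quick_sort(A, left=0, right=None):
    """Sort A in place and return it."""
    if right is None:
        right = len(A) - 1
    if left < right:
        p = partition(A, left, right)
        quick_sort(A, left, p - 1)           # sort values below the pivot
        quick_sort(A, p + 1, right)          # sort values above the pivot
    return A

print(quick_sort([9, 4, 7, 1, 8, 3]))  # → [1, 3, 4, 7, 8, 9]
```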
5. Discuss the longest common subsequence with an example.

The longest common subsequence (LCS) is defined as the longest subsequence that is common
to all the given sequences, provided that the elements of the subsequence are not required to
occupy consecutive positions within the original sequences.

If S1 and S2 are the two given sequences then, Z is the common subsequence of S1 and S2 if Z is
a subsequence of both S1 and S2. Furthermore, Z must be a strictly increasing sequence of the
indices of both S1 and S2.

In a strictly increasing sequence, the indices of the elements chosen from the original sequences
must be in ascending order in Z.

If

S1 = {B, C, D, A, A, C, D}

Then, {A, D, B} cannot be a subsequence of S1, as the order of the elements is not preserved (i.e. it is
not a strictly increasing sequence of indices).

Let us understand LCS with an example.

If

S1 = {B, C, D, A, A, C, D}
S2 = {A, C, D, B, A, C}

Then, common subsequences are {B, C}, {C, D, A, C}, {D, A, C}, {A, A, C}, {A, C},
{C, D}, ...
Among these subsequences, {C, D, A, C} is the longest common subsequence. We are going to
find this longest common subsequence using dynamic programming.
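The dynamic-programming solution just mentioned can be sketched in Python (the table-plus-backtrack structure is the standard textbook one, not code taken from this document):

```python
def lcs(s1, s2):
    """L[i][j] = length of the LCS of s1[:i] and s2[:j]; then backtrack."""
    m, n = len(s1), len(s2)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1      # extend a common subsequence
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # walk back through the table to recover one LCS
    out, i, j = [], m, n
    while i and j:
        if s1[i - 1] == s2[j - 1]:
            out.append(s1[i - 1])
            i -= 1
            j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("BCDAACD", "ACDBAC"))  # → "CDAC"
```

Run on the S1 and S2 above (written as strings), it recovers the longest common subsequence {C, D, A, C}.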

6. Discuss the knapsack problem with an example.

There are two types of knapsack problems:

 0/1 knapsack problem
 Fractional knapsack problem

We will discuss both problems one by one. First, we will learn about the 0/1 knapsack
problem.

What is the 0/1 knapsack problem?

The 0/1 knapsack problem means that each item is either placed in the knapsack completely or
not at all. For example, suppose we have two items weighing 2 kg and 3 kg, respectively. If we pick
the 2 kg item, we cannot take only 1 kg of it (the item is not divisible); we have to
pick the 2 kg item completely. This is the 0/1 knapsack problem, in which we either pick an item
completely or leave it. The 0/1 knapsack problem is solved using dynamic
programming.

What is the fractional knapsack problem?

The fractional knapsack problem means that we can divide an item. For example, if we have an
item of 3 kg, we can take 2 kg of it and leave 1 kg. The fractional
knapsack problem is solved using the greedy approach.

Example of the 0/1 knapsack problem.

Consider a problem with the following weights and profits:

Weights: {3, 4, 6, 5}

Profits: {2, 3, 1, 4}

The capacity of the knapsack is 8 kg

The number of items is 4

The above problem can be solved by using the following method:

xi = {1, 0, 0, 1}
xi = {0, 0, 0, 1}
xi = {0, 1, 0, 1}

These are some of the possible combinations, where 1 denotes that the item is completely picked and 0
means that the item is not picked. Since there are 4 items, the number of possible combinations is
2⁴ = 16. Once all the combinations are made, we have to select the combination that provides the
maximum profit.
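Rather than enumerating all 16 combinations, dynamic programming finds the best one directly. A sketch in Python for the same instance (the one-dimensional dp table is a standard space-saving variant, not something specified in this document):

```python
def knapsack_01(weights, profits, capacity):
    """dp[w] = best profit achievable with total weight at most w."""
    dp = [0] * (capacity + 1)
    for wt, p in zip(weights, profits):
        # iterate capacities in reverse so each item is used at most once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + p)
    return dp[capacity]

print(knapsack_01([3, 4, 6, 5], [2, 3, 1, 4], 8))  # → 6
```

The best combination is xi = {1, 0, 0, 1} (the 3 kg and 5 kg items), with total weight 8 and profit 2 + 4 = 6.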

7. Write down Dijkstra's algorithm.

The algorithm works by maintaining a set of vertices that have already been visited and a set of vertices
that have not yet been visited. The algorithm starts at the source vertex and adds it to the set of visited
vertices. Then, it repeatedly finds the vertex in the set of unvisited vertices that has the shortest known
path to the source vertex, and adds that vertex to the set of visited vertices. This process continues until all
vertices have been visited.

The following is pseudocode for Dijkstra's algorithm:

procedure Dijkstra(G, s)
   for each vertex v in G
      dist[v] := infinity
      prev[v] := nil
   dist[s] := 0
   Q := set of all vertices in G
   while Q is not empty
      u := vertex in Q with minimum dist[u]
      remove u from Q
      for each vertex v adjacent to u
         if dist[v] > dist[u] + weight(u, v)
            dist[v] := dist[u] + weight(u, v)
            prev[v] := u
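A runnable Python sketch using a binary heap (a common refinement: the pseudocode above selects the minimum-distance vertex by scanning; the example graph below is an illustrative assumption):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]}. Returns shortest distances from source."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]                  # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                 # stale queue entry, skip it
            continue
        for v, w in graph[u]:
            if dist[v] > d + w:         # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {"a": [("b", 4), ("c", 1)], "b": [("d", 1)],
     "c": [("b", 2), ("d", 5)], "d": []}
print(dijkstra(g, "a"))  # → {'a': 0, 'b': 3, 'c': 1, 'd': 4}
```

Note how the path a → c → b (cost 3) beats the direct edge a → b (cost 4): the relaxation step keeps improving dist[b] until no shorter route exists.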

8. Differentiate between breadth-first search and depth-first search.

The process of searching for elements in a graph is significantly more complex than in an array due to the
arbitrary nature of graph construction. Two famous algorithms for traversing all elements in a graph are
Breadth-First Search (BFS) and Depth-First Search (DFS). Although both are capable of examining all
elements of the graph, their applications don't overlap much. Both work for
directed and undirected graphs, although the example provided below pertains to an undirected graph.

Breadth-First Search (BFS)

The Breadth-First Search algorithm traverses the graph level by level. Let's express it
straightforwardly. As active users of LinkedIn, you may have noticed the presence of 1st, 2nd,
and 3rd connections. The definitions are presented below.

 1st connection: people who are directly connected with you.
 2nd connection: people who are directly connected to your 1st connections but are not directly
connected to you.
 3rd connection: people who are directly linked to your 2nd connections but do not have a direct
connection with you.

As you might expect, when employing BFS to expand a network, the process involves traversing
all 1st connections before moving on to the 2nd connections and subsequent levels. This is an
accurate representation of how Breadth-First Search operates within a graph. The path will
traverse the graph in a layer-by-layer manner, rather than delving deeply into specific branches.

Depth-First Search (DFS)

In contrast to Breadth-First Search, which traverses nodes level by level, Depth-First Search
follows a single path as far as possible before backtracking and seeking an alternative path,
despite minor differences in their code structures. Let us continue to use the LinkedIn network as
an example. Conducting a Depth-First Search (DFS) means that you will:

 Contact one of your 1st connections. We will call them "Tom".
 Contact one of Tom's connections with whom you do not have a connection. Let's designate them
"Heather", your 2nd connection.
 Reach out to one of Heather's connections with whom you do not have a connection. We will call
this 3rd connection "Taylor".
 And so on, going deeper before backtracking.
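Both traversals can be sketched in Python on a small illustrative graph (the adjacency-list graph below is an assumption for demonstration, not the LinkedIn example):

```python
from collections import deque

def bfs(graph, start):
    """Level-by-level traversal; returns vertices in visit order."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()              # FIFO queue → nearest vertices first
        order.append(u)
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

def dfs(graph, start, visited=None):
    """Follows one path as deep as possible before backtracking."""
    if visited is None:
        visited = []
    visited.append(start)
    for v in graph[start]:
        if v not in visited:
            dfs(graph, v, visited)       # recurse deeper before trying siblings
    return visited

g = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(bfs(g, 1))  # → [1, 2, 3, 4]  (both neighbours of 1 before their neighbours)
print(dfs(g, 1))  # → [1, 2, 4, 3]  (goes 1→2→4 before backtracking to 3)
```

The differing visit orders on the same graph show the contrast: BFS finishes a whole level before descending, while DFS exhausts one branch first.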
YEAR-2023, 1.5 MARKS

1. List out different types of hashing functions.

Division Method.

Mid Square Method.

Folding Method.

Multiplication Method.

2. Define quick sort.

Quicksort is a divide-and-conquer algorithm. It works by selecting a 'pivot' element from the array and
partitioning the other elements into two sub-arrays, according to whether they are less than or greater than
the pivot. For this reason, it is sometimes called partition-exchange sort.
3. Characteristics of dynamic programming

 Optimality
 Efficiency
 Reusability
 States and state variables
 Stages
 Transitional states
 Optimal choice (illustrated, for example, by the longest common subsequence problem)

4. What are the pros and cons of memoization or the top-down approach?

The top-down (memoization) approach is intuitive because it follows the natural recursive structure of
the problem, and it computes only the sub-problems that are actually needed. On the other hand, it
relies on recursion, which adds function-call overhead and can exhaust the stack when the recursion is
deep, and the memo table can consume a large amount of memory.

5. What are greedy algorithms used for?

A greedy algorithm is used to construct a Huffman tree during Huffman coding, where it finds an
optimal solution. In decision tree learning, greedy algorithms are commonly used; however, they are
not guaranteed to find the optimal solution.

6. Is Dijkstra’s algorithm a greedy or dynamic programming algorithm?

In fact, Dijkstra's algorithm is a greedy algorithm, and the Floyd-Warshall algorithm, which finds
shortest paths between all pairs of vertices, is a dynamic programming algorithm.
Although Dijkstra's algorithm is popular in the OR/MS literature, it is generally regarded as a "computer
science method".
7. Difference between connected and disconnected graphs

An undirected graph that is not connected is called disconnected. An undirected graph G is therefore
disconnected if there exist two vertices in G such that no path in G has these vertices as endpoints. A graph
with just one vertex is connected. An edgeless graph with two or more vertices is disconnected.
8. What is topological sort?

A topological sort is a linear ordering of vertices in a directed acyclic graph (DAG). Given a DAG G = (V, E), a
topological sort algorithm returns a sequence of vertices in which the vertices never come before their
predecessors on any path. In other words, if (u, v) ∈ E, v never appears before u in the sequence.
9. What is the longest common subsequence?

The longest common subsequence (LCS) is the longest subsequence common to all sequences in a set of
sequences (often just two sequences). It differs from the longest common substring: unlike substrings,
subsequences are not required to occupy consecutive positions within the original sequences.
10. What is double hashing?

Double hashing is a computer programming technique used in conjunction with open addressing in hash
tables to resolve hash collisions, by using a secondary hash of the key as an offset when a collision occurs.
2 MARKS

1. Define hashing.

Hashing is a technique in which a given key field value is converted into the address of the storage
location of the record by applying the same operation on it. The advantage of hashing is that it
allows the execution time of basic operations to remain constant (on average) even for large data sets.

2. Mention the various types of searching techniques.

 Linear Search.
 Sentinel Linear Search.
 Binary Search.
 Meta Binary Search | One-Sided Binary Search.
 Ternary Search.
 Jump Search.
 Interpolation Search.
 Exponential Search.

3. Applications of dynamic programming

Dynamic programming is applicable in graph theory; game theory; AI and machine learning;
economics and finance problems; bioinformatics; as well as calculating the shortest path, which
is used in GPS.

4. Difference between the top-down approach and the bottom-up approach

The main difference between the top-down and bottom-up approaches is the process's starting point
and focus. The top-down approach prioritizes high-level planning and decision-making, while the
bottom-up approach prioritizes the execution of individual tasks and the development of detailed
knowledge.

5. What are weighted graphs?

A weighted graph is a graph where each edge has a numerical value called a weight. As an example,
consider a weighted graph consisting of five nodes and seven edges (each with a weight), where
the edge {0, 3} has weight 7 and the edge {1, 2} has weight 1.

6. List some ways of representing graphs.

Adjacency matrix, adjacency list, adjacency set, and edge list.

7. Applications of graphs

 It helps to define the flow of computation of software programs.
 Used in Google Maps for building transportation systems.
 Used in social networks such as Facebook and LinkedIn.
 Operating systems use a resource allocation graph where every process and resource
acts as a node.

8. What is pseudocode?

Pseudocode is a detailed yet readable description of what a computer program or algorithm
should do. It is written in a formal yet readable style that uses natural syntax and formatting so it
can be easily understood by programmers and others involved in the development process.

9. What is the divide and conquer paradigm?

Divide and Conquer is an algorithm design paradigm that involves breaking a larger problem into
non-overlapping sub-problems, solving each of these sub-problems, and combining the results to
solve the original problem.

10. What are Huffman codes?

Huffman coding is an algorithm for compressing data with the aim of reducing its size without
losing any of the details. This algorithm was developed by David Huffman. Huffman coding is
typically useful for the case where data that we want to compress has frequently occurring characters
in it.

7 MARKS

1. Write down the algorithm for insertion sort.

Here's the algorithm for insertion sort:

1. *Insertion Sort Algorithm*:

InsertionSort(array A)
   for i from 1 to length(A) - 1 do
      key = A[i]
      j = i - 1
      while j >= 0 and A[j] > key do
         A[j + 1] = A[j]
         j = j - 1
      end while
      A[j + 1] = key
   end for

2. *Explanation*:

- The outer loop iterates through each element of the array from the second element (index 1) to the last
element.

- Within the loop, the current element (key) is compared with the elements before it, starting from the
element just before it (index j).

- If the element at index j is greater than the key, it is shifted one position to the right.

- This process continues until either an element smaller than the key is found or j becomes less than 0.

- Once the correct position for the key is found, it is inserted into that position in the array.

- This process repeats for each element in the array, resulting in a sorted array.

3. *Time Complexity*:
- The time complexity of the insertion sort algorithm is O(n²) in the worst-case scenario, where n is the
number of elements in the array.

- The average-case time complexity is also O(n²), but the best case (an already-sorted array) is O(n).

2. Algorithm for linear search.

Here's the algorithm for Linear Search:

1. *Linear Search Algorithm*:

LinearSearch(array A, value x)
   for i from 0 to length(A) - 1 do
      if A[i] equals x then
         return i (index of x)
      end if
   end for
   return -1 (indicating x not found in A)

2. *Explanation*:

- The algorithm iterates through each element of the array sequentially, starting from the first element
(index 0) to the last element.

- At each iteration, it compares the current element with the target value (x).

- If the current element matches the target value, the algorithm returns the index of that element.

- If the target value is not found after iterating through the entire array, the algorithm returns -1 to indicate
that the value is not present in the array.

3. *Time Complexity*:

- In the worst-case scenario, the time complexity of linear search is O(n), where n is the number of
elements in the array. This is because the algorithm may need to iterate through all elements of the
array to find the target value, or the value may not be present in the array at all.

- The average-case time complexity is also O(n), but the best case is O(1), when the target happens
to be the first element examined.

3. Discuss the merge sort algorithm.

Merge sort is a popular sorting algorithm known for its efficiency and stability. Here's a discussion of how the
merge sort algorithm works:

1. *Divide*: The first step of merge sort is to divide the unsorted list into smaller sublists. This is done
recursively until each sublist contains only one element. This process of dividing the list continues until it is
not possible to divide further.
2. *Conquer*: Once the list is divided into individual elements (each considered as a sorted list of one
element), the merging process begins. Pairs of sublists are merged together to produce new sorted sublists.
This merging process is repeated until there is only one sorted list remaining, which is the sorted version of
the original list.

3. *Merge*: The merging process involves comparing elements from the two sublists and combining them
into a single sorted sublist. This is done by repeatedly comparing the smallest (or largest) unmerged element
from each sublist and adding it to the new sorted list. The process continues until all elements from both
sublists have been merged into the new sorted list.

4. *Combine*: Once all sublists are merged, the final sorted list is obtained.

Key points about merge sort:

- Merge sort is a divide-and-conquer algorithm, meaning it breaks the problem into smaller subproblems until
they become simple enough to solve directly.
- It has a time complexity of O(n log n) in all cases, where n is the number of elements in the list. This makes it
one of the most efficient sorting algorithms, especially for large datasets.
- Merge sort is stable, meaning it preserves the relative order of equal elements in the sorted output.
- It requires additional space proportional to the size of the input list for storing temporary sublists during the
merge phase. However, it can be implemented efficiently to minimize this overhead.

Overall, merge sort is a reliable and efficient sorting algorithm suitable for various applications, especially
when stability and performance are essential considerations.
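A minimal runnable sketch of the divide/merge steps described above (the `<=` comparison in `merge` is what keeps the sort stable):

```python
def merge_sort(A):
    """Divide the list, sort each half recursively, then merge."""
    if len(A) <= 1:                    # a list of 0 or 1 elements is sorted
        return A
    mid = len(A) // 2
    return merge(merge_sort(A[:mid]), merge_sort(A[mid:]))

def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # <= preserves the order of equal elements
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]  # append whichever half has leftovers

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```

This version allocates new lists at each merge, illustrating the extra space proportional to the input that the text mentions.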

4. what is greedy technique? Discuss with suitable example.

The greedy technique is a problem-solving strategy used in algorithm design where the optimal solution is built step by
step by making locally optimal choices at each stage with the hope that these choices will lead to a globally optimal
solution. In other words, at each step, the algorithm selects the best possible choice without considering the future
consequences, with the belief that this approach will lead to the best overall solution.

Here's an example to illustrate the greedy technique:

*Example: The Coin Change Problem*

Suppose you have a set of coins with different denominations (e.g., 1 cent, 5 cents, 10 cents, etc.), and you want to make
a certain amount of change using the minimum number of coins. This is known as the coin change problem.

Let's say you need to make 36 cents in change, and you have coins with denominations of 1 cent, 5 cents, 10 cents, and
25 cents.

Using the greedy approach:

1. Start with the largest denomination coin that does not exceed the amount needed. In this case, it's 25 cents. Subtract
25 cents from 36, and you have 11 cents left to make.
2. Now, repeat the process with the largest denomination coin that does not exceed the remaining amount. The largest
coin that does not exceed 11 cents is 10 cents. Subtract 10 cents from 11, and you have 1 cent left to make.
3. Finally, use the smallest denomination coin to make the remaining amount. In this case, it's a 1 cent coin.

So, the greedy approach in this example would use one 25-cent coin, one 10-cent coin, and one 1-cent coin, totaling
three coins.

However, the greedy approach might not always yield the optimal solution. For instance, if the coin denominations were
1 cent, 3 cents, and 4 cents, and you needed to make 6 cents in change, the greedy approach would use one 4-cent coin
and two 1-cent coins, totaling three coins. But the optimal solution is to use two 3-cent coins, totaling only two coins.

Despite its limitations, the greedy technique is often used because it is simple, easy to implement, and efficient for many
optimization problems. However, it's essential to verify that the greedy approach indeed produces the optimal solution
for a particular problem.
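A minimal Python sketch of the greedy coin-change strategy, showing both the case where it is optimal and the counterexample discussed above:

```python
def greedy_change(amount, denominations):
    """Greedy: repeatedly take the largest coin that fits.
    Optimal for canonical coin systems like {1, 5, 10, 25},
    but not guaranteed optimal in general."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

print(greedy_change(36, [1, 5, 10, 25]))  # [25, 10, 1] - optimal: 3 coins
print(greedy_change(6, [1, 3, 4]))        # [4, 1, 1] - but [3, 3] is better
```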

5. Discuss Matrix Chain Multiplication with an example.


Matrix chain multiplication is a classic problem in dynamic programming that involves finding the most efficient way to
multiply a series of matrices together. The goal is to minimize the number of scalar multiplications needed to compute
the product.

Let's consider an example to illustrate matrix chain multiplication:

Suppose we have four matrices: A, B, C, and D, with dimensions as follows:

- Matrix A: 10x20
- Matrix B: 20x30
- Matrix C: 30x40
- Matrix D: 40x30

We want to find the most efficient way to multiply these matrices together.

To solve this problem using dynamic programming, we create a matrix M[][] to store the minimum number of scalar
multiplications needed to compute the product of matrices from i to j.

1. Initialization: For a single matrix, M[i][i] = 0 since no multiplication is needed.

2. Recurrence relation: We iterate over the length of the chain (l) from 2 to n, where n is the number of matrices. For
each length, we consider all possible ways to split the chain and choose the one with the minimum number of scalar
multiplications.

3. The formula to compute M[i][j] for a chain length l is:

M[i][j] = min { M[i][k] + M[k+1][j] + d[i-1] * d[k] * d[j] } for i <= k < j

Here, d[] is the dimensions array, defined so that matrix A[i] has dimensions d[i-1] × d[i]. The term d[i-1] * d[k] * d[j] is the cost of multiplying the two submatrix products resulting from splitting the chain at position k.

Using this formula, we fill the matrix M[][] until we reach M[1][n], which represents the minimum number of scalar
multiplications needed to compute the product of all matrices from A[1] to A[n].

Let's apply this to our example:

- For l=2 (chain length), we compute M[1][2], M[2][3], and M[3][4].


- For l=3, we compute M[1][3] and M[2][4].
- For l=4, we compute M[1][4].

Finally, M[1][4] will contain the minimum number of scalar multiplications needed to multiply matrices A, B, C, and D
together efficiently.

By applying dynamic programming, we can find the optimal solution efficiently, avoiding redundant computations. This
approach allows us to solve larger instances of the matrix chain multiplication problem in polynomial time.
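The recurrence above can be implemented directly in Python. Here d = [10, 20, 30, 40, 30] encodes the four example matrices (A[i] is d[i-1] × d[i]):

```python
def matrix_chain_order(d):
    """d: dimensions list; matrix A[i] is d[i-1] x d[i] for 1 <= i <= n.
    Returns the minimum number of scalar multiplications for A[1]..A[n]."""
    n = len(d) - 1
    INF = float('inf')
    # M[i][j] = min cost to multiply A[i]..A[j]; 1-indexed to match the text.
    M = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            M[i][j] = INF
            for k in range(i, j):          # split point
                cost = M[i][k] + M[k + 1][j] + d[i - 1] * d[k] * d[j]
                if cost < M[i][j]:
                    M[i][j] = cost
    return M[1][n]

# Example matrices: A 10x20, B 20x30, C 30x40, D 40x30
print(matrix_chain_order([10, 20, 30, 40, 30]))  # 30000
```

For this example the optimal parenthesization is (A(BC))D... actually ((A(BC)) is not needed here; the DP finds that splitting after C, i.e. (ABC)D with ABC computed as A(BC) split at k=2, yields the minimum cost of 30000 scalar multiplications.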

6. Write down Kruskal's algorithm.

Kruskal's algorithm is a greedy algorithm used to find the minimum spanning tree (MST) of a connected, undirected
graph. The MST is a subset of the edges of the graph that forms a tree connecting all the vertices together while
minimizing the total edge weight. Here's the pseudocode for Kruskal's algorithm:

Kruskal(G):
1. Initialize an empty priority queue (min-heap) Q to store edges sorted by weight.
2. Create a forest F (a collection of trees), where each vertex in the graph is a separate tree.
3. For each edge (u, v) in the graph G:
- Add edge (u, v) to the priority queue Q.
4. Initialize an empty set S to store the edges of the MST.
5. While the priority queue Q is not empty and the MST has fewer than n-1 edges (where n is the number of vertices in
the graph):
- Remove the edge with the smallest weight from Q.
- If adding this edge to the MST does not create a cycle (i.e., the endpoints of the edge belong to different trees in the
forest F):
* Add the edge to the set S (MST).
* Merge the trees containing the endpoints of the edge into a single tree.
6. Return the set S containing the edges of the MST.
In Kruskal's algorithm, the priority queue is used to select edges in increasing order of weight. The algorithm then checks
if adding the selected edge to the MST creates a cycle. If not, the edge is added to the MST, and the trees containing its
endpoints are merged. This process continues until the MST is formed or until all edges have been considered.

Kruskal's algorithm guarantees to find the minimum spanning tree for a connected, undirected graph with non-negative
edge weights. It has a time complexity of O(E log E), where E is the number of edges in the graph, primarily due to the
sorting of edges by weight.
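The pseudocode above can be sketched in Python. This version replaces the explicit forest with a union-find (disjoint-set) structure, which is the usual way to implement the "merge trees / detect cycle" steps; the edge list and vertex count here are illustrative:

```python
def kruskal(n, edges):
    """n vertices labeled 0..n-1; edges given as (weight, u, v) tuples.
    Returns (mst_edges, total_weight)."""
    parent = list(range(n))

    def find(x):                       # root of the tree containing x
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # endpoints in different trees: no cycle
            parent[ru] = rv            # merge the two trees
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:      # MST complete
                break
    return mst, total

example_edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, example_edges)
print(total)  # 6
```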

7. Write down the Bellman-Ford algorithm.

The Bellman-Ford algorithm is a dynamic programming algorithm used to find the shortest paths from a single source
vertex to all other vertices in a weighted graph, even when some edge weights are negative (provided no negative cycle
is reachable from the source). Here's the pseudocode for the Bellman-Ford algorithm:

BellmanFord(Graph G, Vertex source):


1. Initialize an array distance[] and predecessor[] to store the shortest distances and predecessor vertices respectively.
2. Set distance[source] = 0 and distance[v] = ∞ for all other vertices v.
3. Repeat the following steps for |V| - 1 iterations, where |V| is the number of vertices in the graph:
a. For each edge (u, v) in the graph:
- If distance[u] + weight(u, v) < distance[v], update distance[v] = distance[u] + weight(u, v) and predecessor[v] = u.
4. Check for negative cycles:
- For each edge (u, v) in the graph:
- If distance[u] + weight(u, v) < distance[v], report the presence of a negative cycle.
5. Return the array distance[] containing the shortest distances from the source vertex to all other vertices.

In the Bellman-Ford algorithm:

- We initialize the distance array with infinity for all vertices except the source vertex, which is set to 0.
- We iterate through all the edges for |V| - 1 times, where |V| is the number of vertices in the graph. In each iteration,
we relax each edge, updating the distance to the destination vertex if a shorter path is found through the current edge.
- After |V| - 1 iterations, the shortest paths from the source vertex to all other vertices will have been found.
- We then perform an additional iteration to check for negative cycles. If any vertex's distance can still be improved in
this iteration, it means there is a negative cycle reachable from the source vertex.
- If no negative cycles are found, we return the distance array containing the shortest distances from the source vertex to
all other vertices.

The Bellman-Ford algorithm is less efficient than Dijkstra's algorithm for graphs with non-negative edge weights but has
the advantage of being able to handle graphs with negative edge weights and detect negative cycles. It has a time
complexity of O(|V| * |E|), where |V| is the number of vertices and |E| is the number of edges in the graph.
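The pseudocode above maps directly to Python. The graph in the usage example is illustrative, containing one negative edge but no negative cycle:

```python
def bellman_ford(n, edges, source):
    """n vertices labeled 0..n-1; edges given as (u, v, weight) tuples.
    Returns the distance list, or None if a negative cycle is reachable."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):             # |V| - 1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:  # relax edge (u, v)
                dist[v] = dist[u] + w
    for u, v, w in edges:              # extra pass: negative-cycle check
        if dist[u] + w < dist[v]:
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 5]
```

Note how the negative edge (1, 2, -3) shortens the path to vertex 2 from 5 down to 1, something Dijkstra's algorithm would miss.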
