DESIGN AND ANALYSIS OF ALGORITHMS
SHORT ANSWER QUESTIONS
MODULE 1
1. Define an algorithm.
An algorithm is a step-by-step procedure to solve a problem or perform a task in a finite
number of steps.
Example: Algorithm to find the sum of two numbers:
1. Start
2. Read numbers A and B
3. Sum = A + B
4. Print Sum
5. Stop
2. Mention any two characteristics of a good algorithm.
Correctness: Produces the correct output for all inputs.
Efficiency: Consumes minimal time and memory.
3. Difference between time complexity and space complexity.
Aspect | Time Complexity | Space Complexity
Definition | Time taken to execute an algorithm | Memory required by an algorithm
Example | O(n) for linear search | O(n) for storing an array
4. Define best-case time complexity.
Best-case time complexity is the minimum time an algorithm takes to complete with the most
favourable input.
Example: Linear search best case: first element matches → O(1)
5. What is Big-O notation?
Big-O notation expresses the upper bound of an algorithm’s time/space complexity for large
inputs.
Notation: O(f(n))
Example: Binary search has O(log n) time complexity.
6. Base case in recursion.
The base case is a condition that stops further recursion.
Example: Factorial function:
factorial(n):
    if n == 0: return 1   # base case
    else: return n * factorial(n-1)
7. Advantage and disadvantage of recursion.
Advantage: Simplifies complex problems (e.g., factorial, Fibonacci).
Disadvantage: Uses more memory due to function call stack.
8. Significance of asymptotic notations.
Asymptotic notations (O, Ω, Θ) describe algorithm efficiency for large input sizes:
O(n): Upper bound (worst-case)
Ω(n): Lower bound (best-case)
Θ(n): Tight bound (growth rate bounded both above and below)
Example: Merge sort → O(n log n)
9. Two problems solved using divide and conquer.
Merge Sort → Divides array, sorts, merges.
Binary Search → Divides search space by half.
10. Tail recursion.
Tail recursion occurs when the recursive call is the last operation in a function, allowing
optimization.
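For illustration, a tail-recursive factorial sketch in Python (not from the notes; note that CPython does not actually perform tail-call optimization, so this shows the form rather than the benefit):

```python
def factorial_tail(n, acc=1):
    # The recursive call is the last operation: nothing remains to do
    # after it returns, so a compiler supporting tail-call optimization
    # could reuse the current stack frame.
    if n == 0:
        return acc
    return factorial_tail(n - 1, acc * n)
```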
11. Explain the characteristics of a good algorithm with examples.
A good algorithm should have the following characteristics:
Correctness: Produces the correct output for all valid inputs.
Example: Linear search correctly finds an element if it exists.
Finiteness: Must terminate after a finite number of steps.
Example: Factorial calculation stops at n=0.
Efficiency: Uses minimal resources like time and memory.
Example: Merge sort has time complexity O(n log n).
Input and Output: Should have well-defined inputs and outputs.
Example: Sorting algorithm takes an array as input and outputs a sorted array.
Definiteness: Each step should be clear and unambiguous.
Example: In Binary Search, the mid-point calculation is clearly defined.
12. Compare and contrast iteration and recursion with examples.
Feature | Iteration | Recursion
Definition | Repeats steps using loops | Function calls itself to solve the problem
Memory usage | Less, no call stack | More, due to the function call stack
Termination | Controlled by loop condition | Controlled by base case
Example | Factorial using a for loop | Factorial using n * factorial(n-1)
13. Explain Big-O, Omega, and Theta notations with examples.
Big-O (O(f(n))): Upper bound of algorithm; worst-case scenario.
Example: Linear search → O(n)
Omega (Ω(f(n))): Lower bound; best-case scenario.
Example: Linear search → Ω(1)
Theta (Θ(f(n))): Tight bound; exact growth rate.
Example: Merge sort → Θ(n log n)
14. Distinguish between exact and approximate algorithms.
Feature | Exact Algorithm | Approximate Algorithm
Result | Produces a precise solution | Produces a near-optimal solution
Example | Dijkstra's algorithm (shortest path) | Travelling Salesman Problem using heuristics
Usage | When correctness is critical | When the problem is NP-hard or complex
15. Describe the Divide and Conquer method with an example.
Method: Divide a problem into smaller subproblems, solve them recursively, and
combine results.
Example (Binary Search):
1. Divide array into two halves
2. Compare key with middle element
3. If equal → return index, else search in left or right half
Time complexity: O(log n)
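The steps above can be sketched as an iterative Python function (an illustrative sketch, not part of the original notes):

```python
def binary_search(arr, key):
    # arr must be sorted; each step halves the search space -> O(log n)
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == key:
            return mid          # found: return index
        elif arr[mid] < key:
            low = mid + 1       # search right half
        else:
            high = mid - 1      # search left half
    return -1                   # not found
```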
16. Explain the steps involved in designing an algorithm.
1. Problem analysis: Understand inputs, outputs, constraints.
2. Algorithm design: Decide logic and steps to solve the problem.
3. Correctness check: Ensure it produces correct outputs.
4. Efficiency analysis: Evaluate time and space complexity.
5. Implementation: Convert algorithm into code or pseudocode.
17. Write a recursive algorithm for computing factorial and analyze its time complexity.
Algorithm (Factorial):
factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)
Time Complexity: Each call multiplies once → O(n)
Space Complexity: Stack stores n calls → O(n)
18. Define performance analysis. Explain a priori and a posteriori analysis.
Performance analysis: Study of algorithm efficiency in terms of time and space.
A priori analysis: Analytical evaluation using formulas before implementation.
Example: Time complexity of merge sort → O(n log n)
A posteriori analysis: Empirical evaluation after implementation using actual input.
Example: Measuring execution time of sorting program on a dataset.
19. Explain different classifications of algorithms based on design method.
Divide and Conquer: Break problem → solve → combine.
Dynamic Programming: Solve subproblems and store results.
Greedy Method: Choose the best local solution at each step.
Backtracking: Explore all possibilities, backtrack on failure.
Brute Force: Try all possible solutions.
20. Discuss advantages and disadvantages of recursion.
Advantages:
o Simplifies complex problems like tree traversal, factorial.
o Makes code more readable and elegant.
Disadvantages:
o Uses more memory due to call stack.
o May cause stack overflow for large inputs.
MODULE 2
1. What is Divide and Conquer?
Divide and Conquer is a design technique where a problem is divided into smaller
subproblems, solved recursively, and combined to get the final solution.
Example: Merge Sort, Binary Search.
2. Mention any two algorithms that use Divide and Conquer.
Merge Sort
Quick Sort
3. Define Binary Search.
Binary Search is an efficient searching algorithm that finds the position of a target element in
a sorted array by repeatedly dividing the search space in half.
4. What are the prerequisites for applying Binary Search?
Array must be sorted.
Random access to elements should be possible.
5. Write the best and worst-case time complexities of Binary Search.
Best-case: O(1) (element found at middle)
Worst-case: O(log n) (after repeated halving)
6. What is the time complexity of Merge Sort?
Merge Sort: O(n log n) (all cases)
7. Define the term "Pivot" in Quick Sort.
Pivot is an element chosen to partition the array into two parts: elements less than the
pivot go to the left, and elements greater go to the right.
8. What is the space complexity of Merge Sort?
Merge Sort uses O(n) extra space for temporary arrays during merging.
9. Define the Greedy Choice Property.
A problem has the Greedy Choice Property if a global optimal solution can be arrived at by
making a locally optimal choice at each step.
10. What is Optimal Substructure?
A problem has Optimal Substructure if the optimal solution of the problem contains optimal
solutions to its subproblems.
11. What is a Spanning Tree?
A spanning tree of a graph is a subgraph that connects all vertices with no cycles and n-1
edges (for n vertices).
12. How many edges are there in a spanning tree of a graph with n vertices?
n - 1 edges
13. Mention two applications of Greedy Algorithms.
Minimum Spanning Tree (Prim's/Kruskal's)
Fractional Knapsack Problem
14. State the recurrence relation used in Strassen's Matrix Multiplication.
T(n) = 7T(n/2) + O(n²)
15. What is the goal of Dijkstra's algorithm?
To find the shortest path from a source vertex to all other vertices in a weighted graph.
16. Discuss Divide and Conquer design technique. Explain Merge Sort and Binary Search
using it (with example).
Divide and Conquer: Break a problem into smaller subproblems, solve recursively, combine
results.
Merge Sort Example:
1. Divide array [38, 27, 43, 3] → [38, 27] & [43, 3]
2. Divide further → [38], [27], [43], [3]
3. Merge sorted halves → [27, 38], [3, 43]
4. Final merge → [3, 27, 38, 43]
Time Complexity: O(n log n)
Binary Search Example:
Search 27 in [3, 27, 38, 43]
o Compare mid → 27 = mid → found
o Time Complexity: O(log n)
17. Discuss Quick Sort and the role of pivot selection.
Quick Sort: Partition array around a pivot, recursively sort left and right halves.
Pivot Selection:
o First element, last element, random element, or median
o Efficient pivot reduces recursion depth → improves average time O(n log n).
Worst case: O(n²) if pivot is poorly chosen.
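A minimal Python sketch of Quick Sort using the last element as pivot (illustrative; in practice an in-place partition is more common):

```python
def quick_sort(arr):
    # Base case: zero or one element is already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]                                 # pivot choice: last element
    left = [x for x in arr[:-1] if x <= pivot]      # elements <= pivot
    right = [x for x in arr[:-1] if x > pivot]      # elements > pivot
    return quick_sort(left) + [pivot] + quick_sort(right)
```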
18. Compare Merge Sort and Quick Sort in terms of time and space complexities.
Algorithm | Time Complexity | Space Complexity | Stability
Merge Sort | O(n log n) (all cases) | O(n) | Stable
Quick Sort | O(n log n) average, O(n²) worst | O(log n) | Not stable
19. Strassen's Matrix Multiplication technique (short note)
Multiplies two matrices faster than classical O(n³)
Divides each matrix into 4 submatrices
Recursively computes 7 multiplications instead of 8
Recurrence: T(n) = 7T(n/2) + O(n²)
Time complexity: O(n^2.81)
20. Greedy method and control abstraction
Greedy method: Chooses locally optimal solution at each step aiming for global
optimum.
Control abstraction: Algorithm chooses next step based on greedy criterion; no
backtracking is needed.
Example: Fractional Knapsack
21. Ordering Paradigm vs Subset Paradigm
Feature | Ordering Paradigm | Subset Paradigm
Concept | Arrange elements optimally | Select a subset of elements
Example | Huffman coding | Knapsack problem
22. Knapsack Problem and greedy approach
Problem: Maximize value in knapsack of capacity W with n items (weight w[i], value
v[i])
Greedy approach: Take items with maximum value/weight ratio until full.
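The greedy approach above can be sketched in Python for the fractional variant (illustrative sketch; `fractional_knapsack` is a name chosen here, not from the notes):

```python
def fractional_knapsack(weights, values, capacity):
    # Sort items by value/weight ratio, highest first (greedy criterion).
    items = sorted(zip(weights, values),
                   key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if capacity >= w:
            total += v              # take the whole item
            capacity -= w
        else:
            total += v * (capacity / w)  # take the fraction that fits
            break
    return total
```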
23. Properties of Spanning Trees
Connects all vertices
No cycles
Exactly n-1 edges for n vertices
Subgraph of original graph
24. Steps involved in Prim's Algorithm
1. Initialize MST with a vertex
2. Add the minimum weight edge connecting MST to new vertex
3. Repeat until all vertices are included
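The steps above can be sketched in Python using a heap to pick the minimum-weight edge (illustrative; the adjacency-list format is an assumption):

```python
import heapq

def prim_mst(graph, start):
    # graph: {vertex: [(weight, neighbor), ...]} for an undirected graph.
    visited = {start}                 # step 1: start MST from one vertex
    edges = list(graph[start])
    heapq.heapify(edges)
    total = 0
    while edges and len(visited) < len(graph):
        w, v = heapq.heappop(edges)   # step 2: minimum edge leaving the MST
        if v not in visited:
            visited.add(v)
            total += w
            for edge in graph[v]:     # step 3: repeat with new frontier edges
                if edge[1] not in visited:
                    heapq.heappush(edges, edge)
    return total
```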
25. Steps in Kruskal's Algorithm
1. Sort all edges by weight
2. Pick smallest edge, add to MST if it doesn't form a cycle
3. Repeat until MST has n-1 edges
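The steps above can be sketched in Python with a simple union-find to detect cycles (illustrative sketch, not from the notes):

```python
def kruskal_mst(n, edges):
    # edges: list of (weight, u, v); vertices numbered 0..n-1.
    parent = list(range(n))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, count = 0, 0
    for w, u, v in sorted(edges):      # step 1: sort edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # step 2: skip edges that form a cycle
            parent[ru] = rv
            total += w
            count += 1
            if count == n - 1:         # step 3: stop at n-1 edges
                break
    return total
```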
26. Dijkstra's Algorithm
Finds shortest path from a source to all vertices
Initialize distances: 0 for source, ∞ for others
Pick vertex with min distance, update neighbors
Repeat until all vertices visited
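A minimal Python sketch of the steps above using a priority queue (illustrative; the adjacency-list format is an assumption):

```python
import heapq

def dijkstra(graph, source):
    # graph: {u: [(weight, v), ...]}; returns shortest distance to each vertex.
    dist = {v: float('inf') for v in graph}   # step 1: inf for all but source
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)              # step 2: vertex with min distance
        if d > dist[u]:
            continue                          # stale queue entry, skip
        for w, v in graph[u]:                 # step 3: relax neighbors
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```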
MODULE 3
1. What do we mean by dynamic programming in algorithm design?
Dynamic Programming (DP) is a method of solving problems by breaking them into
overlapping subproblems, solving each once, and storing results for reuse.
2. Can you give an example of overlapping subproblems?
Fibonacci sequence: calculating fib(n) requires fib(n-1) and fib(n-2), which are reused
multiple times.
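The reuse of overlapping subproblems can be seen in a memoized Fibonacci sketch (illustrative, not part of the original notes):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each subproblem is solved once and cached, so the overlapping
    # calls fib(n-1) and fib(n-2) are reused instead of recomputed.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```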
3. What is meant by optimal substructure in a problem?
A problem has optimal substructure if an optimal solution can be constructed from
optimal solutions of its subproblems.
4. Name two real-world problems where Floyd-Warshall algorithm is useful.
o Finding shortest routes between all cities in a network.
o Network routing in telecommunications.
5. How is bottom-up dynamic programming different from top-down?
o Bottom-up: solves smaller subproblems first, stores results in a table.
o Top-down: solves recursively with memoization.
6. What is the time complexity of the Floyd-Warshall algorithm?
O(V³), where V is the number of vertices.
7. Which data structure does BFS use to keep track of the next nodes to visit?
Queue.
8. In graph theory, what is a connected component?
A subgraph where every pair of vertices is connected by a path, and no vertex is
connected to a vertex outside the component.
9. What is the recurrence relation used in Floyd-Warshall?
dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
where k is an intermediate vertex.
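The recurrence can be sketched in Python (illustrative; the matrix input format is an assumption):

```python
def floyd_warshall(dist):
    # dist: n x n matrix; dist[i][j] is the edge weight,
    # float('inf') if no edge, and 0 on the diagonal.
    n = len(dist)
    for k in range(n):              # k: intermediate vertex
        for i in range(n):
            for j in range(n):
                # Either keep the current path, or route through k.
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist
```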
10. What do we mean by the 'shortest path' between two nodes in a graph?
The path with the minimum sum of edge weights between the two nodes.
11. Define backtracking in one or two lines.
Backtracking is a method of solving problems by exploring all possible options and
abandoning paths that violate constraints.
12. Name a situation where backtracking is more effective than brute force.
N-Queens problem: avoids exploring invalid placements unlike brute force.
13. What is the goal in the Hamiltonian Circuit problem?
Visit every vertex exactly once and return to the starting vertex.
14. What's the key rule for placing a queen safely in the N-Queens puzzle?
No two queens can be in the same row, column, or diagonal.
15. What does space complexity tell us?
The amount of memory required by an algorithm as a function of input size.
16. Name two problems commonly solved using dynamic programming.
o Knapsack problem
o Matrix Chain Multiplication
17. What does DFS stand for? What's its main idea?
DFS – Depth-First Search; explores as deep as possible along each branch before
backtracking.
18. Mention one major difference between DFS and BFS.
DFS uses depth-wise exploration (stack/recursion), BFS uses level-wise exploration
(queue).
19. In DFS on directed graphs, what does it mean if one node is "reachable" from
another?
There exists a path from the first node to the second node.
20. What's the role of the visited[] array in DFS?
Keeps track of visited nodes to avoid revisiting and infinite loops.
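A minimal DFS sketch in Python showing the role of the visited set (illustrative, not from the notes):

```python
def dfs(graph, start, visited=None):
    # The visited set prevents revisiting nodes, which avoids
    # infinite loops when the graph contains cycles.
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for neighbor in graph[start]:
        if neighbor not in visited:
            order.extend(dfs(graph, neighbor, visited))
    return order
```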
ESSAY QUESTIONS MODULE 1
1. Algorithm Design Techniques
An algorithm is a finite sequence of well-defined instructions to solve a specific problem.
The process of creating efficient algorithms for solving computational problems is known as
Algorithm Design. The goal of algorithm design is to develop methods that are correct,
efficient (in time and space), and easy to understand and implement.
Major Algorithm Design Techniques
1. Divide and Conquer:
The problem is divided into smaller subproblems of the same type, solved recursively, and
the results are combined.
Examples: Merge Sort, Quick Sort, Binary Search.
2. Greedy Method:
Builds the solution step by step by choosing the best possible option at each stage.
Examples: Dijkstra’s Algorithm, Kruskal’s MST.
3. Dynamic Programming:
Stores results of subproblems to avoid recomputation when subproblems overlap.
Examples: Fibonacci, Floyd-Warshall.
4. Backtracking:
Builds solution incrementally and abandons paths that fail to satisfy constraints.
Examples: N-Queens Problem, Sudoku Solver.
5. Branch and Bound:
Used for optimization problems, pruning branches that cannot yield better solutions.
Examples: Travelling Salesman Problem.
Conclusion: Algorithm design techniques help in developing efficient algorithms by
providing frameworks to approach problems logically and optimally.
2. Performance Analysis
Performance analysis of algorithms determines how efficiently an algorithm uses
computational resources such as time and space. It helps compare multiple algorithms and
choose the most suitable one for a particular task.
Types of Analysis
1. A Priori Analysis – Theoretical analysis before implementation.
2. A Posteriori Analysis – Experimental analysis after implementation.
Performance Measures
Time Complexity: Measures the time taken by an algorithm as a function of input size.
Notations: O (upper bound, worst case), Ω (lower bound, best case), Θ (tight bound).
Space Complexity: Measures the memory required by an algorithm during execution. Total
space = fixed part + variable part.
Asymptotic Notations
1. Big O (O): Upper bound – worst case.
2. Omega (Ω): Lower bound – best case.
3. Theta (Θ): Tight bound – growth rate bounded both above and below.
Example: Linear Search
Best Case: O(1)
Worst Case: O(n)
Average Case: Θ(n)
Conclusion: Performance analysis ensures algorithms are optimized for real-world efficiency
and resource usage.
3. Recursive Algorithms
Recursion is a method where a function calls itself to solve smaller instances of the same
problem. A recursive algorithm solves a problem by reducing it to subproblems of the same
type.
Structure of Recursive Algorithm
Every recursive algorithm has two main parts:
1. Base Case – stops recursion.
2. Recursive Case – calls itself with smaller input.
Examples
Factorial Example:
int factorial(int n) {
    if (n == 0) return 1;
    else return n * factorial(n - 1);
}
Fibonacci Example:
int fib(int n) {
    if (n <= 1) return n;
    else return fib(n - 1) + fib(n - 2);
}
Binary Search Example:
int binarySearch(int arr[], int low, int high, int key) { /* ... recursive call on half the range ... */ }
Advantages and Disadvantages
Advantages: Simple logic, less code, suitable for tree-based problems.
Disadvantages: High memory usage due to stack, slower execution, possible stack overflow.
Conclusion: Recursive algorithms simplify complex problems but must be used carefully to
avoid inefficiency.
MODULE 2 ESSAY QUESTIONS
1. Explain Prim's Algorithm with an example
Prim's algorithm is a greedy algorithm used to find a Minimum Spanning Tree (MST) for a
weighted, undirected graph. The goal of an MST is to connect all vertices in a graph with the
minimum possible total edge weight, without forming any cycles.
2. Explain Kruskal's Algorithm with an example
Kruskal's algorithm is a greedy algorithm used to find a Minimum Spanning Tree (MST) for
a connected, weighted, undirected graph. A Minimum Spanning Tree is a subgraph that
connects all the vertices of the original graph with the minimum possible total edge weight
and contains no cycles.
3. Explain Merge Sort with an example
Merge sort is a popular sorting algorithm known for its efficiency and stability. It follows
the Divide and Conquer approach. It works by recursively dividing the input array into two
halves, recursively sorting the two halves and finally merging them back together to obtain
the sorted array.
Here's a step-by-step explanation of how merge sort works:
1. Divide: Divide the list or array recursively into two halves until it can no more be
divided.
2. Conquer: Each subarray is sorted individually using the merge sort algorithm.
3. Merge: The sorted subarrays are merged back together in sorted order. The process
continues until all elements from both subarrays have been merged.
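The three steps can be sketched in Python (an illustrative sketch, not part of the original notes):

```python
def merge_sort(arr):
    if len(arr) <= 1:                  # base case: already sorted
        return arr
    mid = len(arr) // 2                # 1. Divide
    left = merge_sort(arr[:mid])       # 2. Conquer each half
    right = merge_sort(arr[mid:])
    merged = []                        # 3. Merge in sorted order
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```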
4. Explain Quick Sort with an example
Quicksort is an efficient, comparison-based sorting algorithm that employs a divide-and-conquer
strategy. It was developed by British computer scientist Tony Hoare in 1959.
Algorithm Steps:
Choose a Pivot:
An element from the array is selected as the "pivot." Common choices include the first, last,
middle, or a random element. The choice of pivot can significantly impact performance.
Partitioning:
The array is rearranged such that all elements smaller than the pivot are placed before it, and
all elements greater than the pivot are placed after it. After this step, the pivot element is in its
correct sorted position.
Recursive Calls:
Quicksort is then recursively applied to the two subarrays: the one containing elements
smaller than the pivot and the one containing elements greater than the pivot.
Base Case:
The recursion terminates when a subarray has zero or one element, as such a subarray is
inherently sorted.
5. Explain Binary Search Algorithm with an example
Binary Search is a searching algorithm that operates on a sorted or monotonic search
space, repeatedly dividing it into halves to find a target value or optimal answer in
logarithmic time, O(log N).
ESSAY QUESTIONS MODULE 3
1. Knapsack Problem
Refer notes
2. N-Queens Problem
Given an integer n, place n queens on an n × n chessboard such that no two queens attack
each other. A queen can attack another queen if they are placed in the same row, the
same column, or on the same diagonal.
Find all possible distinct arrangements of the queens on the board that satisfy these
conditions.
The output should be an array of solutions, where each solution is represented as an array of
integers of size n, and the i-th integer denotes the column position of the queen in the i-th
row. If no solution exists, return an empty array.
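A backtracking sketch in Python matching this output format (illustrative; `solve_n_queens` is a name chosen here, not from the notes):

```python
def solve_n_queens(n):
    # Each solution is a list where the i-th entry is the column
    # of the queen placed in row i, as described above.
    solutions = []

    def safe(cols, col):
        row = len(cols)  # row where the new queen would go
        # Unsafe if same column or same diagonal as any placed queen.
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:
            solutions.append(cols[:])   # all n queens placed
            return
        for col in range(n):
            if safe(cols, col):
                cols.append(col)        # try this column
                place(cols)
                cols.pop()              # backtrack on failure

    place([])
    return solutions
```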
3. Explain Hamiltonian Circuit
A Hamiltonian circuit is a cycle in a graph that visits every vertex exactly once and
returns to the starting vertex. The Hamiltonian Circuit problem asks whether such a cycle
exists in a given graph; it is commonly solved using backtracking, abandoning partial
paths that revisit a vertex.
4. Difference between BFS and DFS
BFS explores the graph level by level using a queue, while DFS explores as deep as
possible along each branch using a stack or recursion before backtracking. BFS finds
shortest paths in unweighted graphs; DFS is suited to tasks such as cycle detection and
topological ordering.