Data Structures & Algorithms – Short Answer Questions (Semester III)

1. Define an algorithm.

• An algorithm is a step-by-step procedure for solving a problem.


• It consists of a finite sequence of instructions.
• Each step must be clear and unambiguous.
• It must terminate after a finite number of steps.

2. List any two characteristics of a good algorithm.


• Finiteness: Must terminate after a finite number of steps.
• Unambiguity: Each step should be precisely defined.
• Correctness: Must produce the correct output for valid input.
• Efficiency: Should use minimum time and memory.

3. Differentiate between best case and worst case analysis.


• Best Case: Minimum number of steps an algorithm takes.
• Worst Case: Maximum number of steps an algorithm takes.
• Example: Linear Search → Best case O(1), Worst case O(n).
• Best case shows optimistic performance, worst case shows guaranteed bound.

4. What is time complexity?


• Time complexity measures the running time of an algorithm.
• It is expressed as a function of input size n.
• It helps compare efficiency of algorithms.
• Common notations: O(n), O(log n), O(n²), etc.
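
A minimal sketch (Python, with hypothetical function names) contrasting two common growth rates: the first loop does work proportional to n, the second to n².

def linear_work(items):           # O(n): touches each element once
    total = 0
    for x in items:
        total += x
    return total

def quadratic_work(items):        # O(n^2): visits every pair of positions
    pairs = 0
    for i in range(len(items)):
        for j in range(len(items)):
            pairs += 1
    return pairs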

5. Give an example of an amortized analysis scenario.


• Dynamic array resizing in vectors or ArrayList.
• Most insertions take O(1) time.
• Occasionally resizing takes O(n) time.
• Amortized cost per insertion remains O(1).

6. What is space complexity?


• The total memory required by an algorithm to execute.
• Includes memory for input, variables, constants, and recursion stack.
• Helps evaluate how efficiently memory is used.
• Example: Merge Sort needs O(n) auxiliary space; Quick Sort needs O(log n) recursion-stack space on average.

7. State the Master Method in algorithm analysis.


For recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1:
• Case 1: If f(n) = O(n^(log_b a - ε)), T(n) = Θ(n^(log_b a)).
• Case 2: If f(n) = Θ(n^(log_b a)), T(n) = Θ(n^(log_b a) log n).
• Case 3: If f(n) = Ω(n^(log_b a + ε)) and regularity condition holds, T(n) = Θ(f(n)).
• Useful for analyzing divide-and-conquer algorithms.

8. Give an example of an algorithm with O(1) complexity.


• Accessing an array element by index.
• Insertion at the end of a linked list (if tail pointer is maintained).
• Checking if a number is even or odd.
• Swapping two numbers.
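
A minimal Python sketch (hypothetical helper names) of such constant-time operations: each runs in the same time regardless of input size.

def get_item(arr, i):        # array indexing: O(1)
    return arr[i]

def is_even(n):              # parity check: O(1)
    return n % 2 == 0

def swap(a, b):              # swapping two values: O(1)
    return b, a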

9. Define average case analysis.


• Average case analysis gives expected running time over all inputs.
• It assumes a probability distribution of inputs.
• More realistic than best case, less pessimistic than worst case.
• Example: Linear search examines about n/2 elements on average, which is O(n).

10. Write the best case complexity of Bubble Sort.


• Bubble Sort best case occurs when the array is already sorted.
• With an early-exit (swap flag) check, only one pass is required and no swaps occur.
• Complexity = O(n).
• Uses fewer comparisons than average/worst case.
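
A minimal Python sketch of Bubble Sort with the early-exit flag that gives the O(n) best case: if a full pass makes no swaps, the array is already sorted and the loop stops.

def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:           # no swaps: already sorted, O(n) best case
            break
    return a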

11. What is the time complexity of Selection Sort?


• Selection Sort always makes n(n-1)/2 comparisons.
• Complexity is O(n²) in best, average, and worst case.
• Swaps are fewer compared to Bubble Sort.
• Hence, time complexity is O(n²) for all cases.

12. State one limitation of Insertion Sort.


• Inefficient for large input sizes (O(n²) in average and worst case).
• Requires many shifting operations.
• Not suitable for huge datasets.
• Only efficient for small or nearly sorted arrays.

13. Write the heap property used in Heap Sort.


• A heap is a complete binary tree.
• Max-Heap property: Each parent ≥ its children.
• Min-Heap property: Each parent ≤ its children.
• Heap Sort uses max-heap to repeatedly extract maximum element.
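
A minimal Python sketch (hypothetical helper) that checks the max-heap property in the standard array layout, where the children of index i sit at 2i+1 and 2i+2.

def is_max_heap(a):
    n = len(a)
    for i in range(n):
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and a[i] < a[left]:      # parent must be >= left child
            return False
        if right < n and a[i] < a[right]:    # parent must be >= right child
            return False
    return True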

14. Give the average case complexity of Bubble Sort.


• In average case, elements are in random order.
• On average, about n(n−1)/4 swaps are needed in total.
• Total complexity = O(n²).
• Thus, average case = O(n²).

15. Which sorting algorithm works in O(n) time in the best case?
• Insertion Sort works in O(n) if input is already sorted.
• It only checks each element once, with no shifting.
• Best case occurs in ascending order input.
• Complexity = O(n).

16. State one difference between Radix Sort and Counting Sort.
• Radix Sort: Works digit by digit, using a stable sub-sorting algorithm (typically Counting Sort).
• Counting Sort: Works directly by counting occurrences of each element.
• Radix Sort can handle larger ranges by breaking into digits.
• Counting Sort requires range (k) not too large compared to n.
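
A minimal Python sketch of LSD Radix Sort for non-negative integers, using one stable counting pass per decimal digit (base 10 is an assumption for illustration):

def radix_sort(a):
    if not a:
        return a
    exp = 1
    while max(a) // exp > 0:          # one stable counting pass per digit
        count = [0] * 10
        for x in a:
            count[(x // exp) % 10] += 1
        for d in range(1, 10):        # cumulative counts -> end positions
            count[d] += count[d - 1]
        out = [0] * len(a)
        for x in reversed(a):         # reversed traversal keeps it stable
            count[(x // exp) % 10] -= 1
            out[count[(x // exp) % 10]] = x
        a = out
        exp *= 10
    return a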


Data Structures & Algorithms – 8 Marks Questions (Semester III)

1. Explain the importance of analyzing algorithms.


• Analysis helps us compare multiple algorithms solving the same problem.
• It predicts the efficiency of an algorithm before implementation.
• Saves resources (time & memory) by choosing the most suitable approach.
• Helps determine scalability of the algorithm for large inputs.
• Important in real-world applications like search engines, banking systems, etc.

2. Differentiate between time complexity and space complexity.

Aspect      | Time Complexity                                  | Space Complexity
Definition  | Time taken by the algorithm as input size grows  | Memory required during execution
Measurement | Count of basic operations                        | Memory for input, variables, recursion stack
Goal        | Minimize execution time                          | Minimize memory usage
Example     | Quick Sort → O(n log n)                          | Merge Sort → O(n) extra space

3. What are the steps involved in designing a good algorithm?


1. Problem Definition – Understand the problem clearly.
2. Input/Output Specification – Define valid inputs and expected outputs.
3. Algorithm Design – Use design techniques (divide-and-conquer, greedy, dynamic programming,
etc.).
4. Correctness Proof – Verify algorithm produces correct output.
5. Complexity Analysis – Analyze time and space requirements.
6. Optimization – Refine to improve efficiency.
7. Implementation – Convert into program code.

4. Explain amortized analysis with an example.


• Amortized analysis studies the average running time per operation over a sequence of operations,
even if one operation is costly.
• Example: Dynamic Array (Vector/ArrayList) resizing, sketched below.
• Normal insertion = O(1).
• Occasionally, when the array is full, resizing takes O(n).
• Over many insertions, the amortized cost per operation = O(1).
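
A minimal Python sketch (hypothetical class) of a doubling dynamic array: most appends fill an existing slot in O(1), and the occasional O(n) copy is paid for by the cheap appends before it, so the amortized cost per append stays O(1).

class DynamicArray:
    def __init__(self):
        self.data = [None]        # backing storage
        self.size = 0             # elements actually stored

    def append(self, x):
        if self.size == len(self.data):      # full: double capacity, O(n)
            bigger = [None] * (2 * len(self.data))
            for i in range(self.size):
                bigger[i] = self.data[i]
            self.data = bigger
        self.data[self.size] = x             # usual case: O(1)
        self.size += 1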

5. Write down the general recurrence relation solved using the Master Method.
• General recurrence:
T(n) = aT(n/b) + f(n)
• a = number of subproblems
• n/b = size of each subproblem
• f(n) = work done outside recursive calls

6. Give an example of best, worst, and average case for Linear Search.
• Best Case: Element found at first position → O(1).
• Worst Case: Element found at last position or not present → O(n).
• Average Case: Element found somewhere in the middle → about n/2 comparisons, which is O(n).
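
A minimal Python Linear Search; where the target sits determines the case: first position (best), absent (worst), roughly the middle on average.

def linear_search(a, target):
    for i, x in enumerate(a):
        if x == target:
            return i          # best case: i == 0, one comparison
    return -1                 # worst case: n comparisons, not found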

7. Explain the significance of Big-O, Big-Ω, and Big-Θ notations.


• Big-O: Upper bound → Worst case performance. Example: Quick Sort worst case O(n²).
• Big-Ω: Lower bound → Best case performance. Example: Linear Search best case Ω(1).
• Big-Θ: Tight bound → Both upper & lower bound. Example: Merge Sort Θ(n log n).

8. Write the loop invariant for Insertion Sort.


• Loop Invariant: At the start of each iteration of the outer loop, the subarray A[0..i-1] is sorted.
• Maintains correctness because new elements are inserted in correct position.
• Initialization → First element is sorted.
• Maintenance → Next element inserted correctly.
• Termination → Full array is sorted.
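
A minimal Python Insertion Sort with the invariant noted in comments: before each outer iteration, A[0..i-1] is sorted.

def insertion_sort(a):
    for i in range(1, len(a)):        # invariant: a[0..i-1] is sorted
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # insert key at its correct position
    return a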

9. Compare Bubble Sort and Selection Sort in terms of efficiency.

Aspect     | Bubble Sort                        | Selection Sort
Best Case  | O(n) (if already sorted)           | O(n²) (always the same)
Worst Case | O(n²)                              | O(n²)
Swaps      | Many swaps                         | Very few swaps
Efficiency | Better when array is nearly sorted | Better when swaps are costly

10. Write the steps of the Heap Sort algorithm.


1. Build a Max Heap from the input array.
2. Swap root (maximum) with last element.
3. Reduce heap size by 1.
4. Heapify the root to maintain heap property.
5. Repeat until only one element remains.
6. Result → Sorted array in ascending order.
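
A minimal Python Heap Sort following exactly these steps: build a max heap, then repeatedly swap the root to the end and re-heapify the shrunken prefix.

def heapify(a, n, i):
    # sift a[i] down within a[0..n-1] to restore the max-heap property
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # step 1: build max heap
        heapify(a, n, i)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]       # step 2: move max to the end
        heapify(a, end, 0)                # steps 3-4: shrink and re-heapify
    return a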

11. Differences between comparison-based and non-comparison-based sorting.


Aspect      | Comparison-Based                    | Non-Comparison-Based
Basis       | Compares elements directly          | Uses digit/position/counting methods
Examples    | Bubble Sort, Quick Sort, Merge Sort | Counting Sort, Radix Sort, Bucket Sort
Lower Bound | Ω(n log n) comparisons              | Can achieve O(n)
Stability   | Some are stable (Merge, Insertion)  | Usually stable

12. Write pseudo-code for Selection Sort.

Python-style pseudo-code (directly runnable):

def selection_sort(a):
    n = len(a)
    for i in range(n - 1):                # passes 0 .. n-2
        min_index = i
        for j in range(i + 1, n):         # find smallest in a[i+1..n-1]
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]   # one swap per pass
    return a
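
For example (assumed values), selection_sort([29, 10, 14, 37, 13]) returns [10, 13, 14, 29, 37] after exactly n−1 = 4 passes.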

13. Compare time complexities of Bubble, Selection, and Insertion Sort.

Algorithm      | Best Case | Average Case | Worst Case
Bubble Sort    | O(n)      | O(n²)        | O(n²)
Selection Sort | O(n²)     | O(n²)        | O(n²)
Insertion Sort | O(n)      | O(n²)        | O(n²)

14. Explain with an example how Counting Sort works.


• Counting Sort counts occurrences of each element, then places them in output array.
• Example: Input = [4, 2, 2, 8, 3, 3, 1]
1. Count frequencies → [1, 2, 2, 1, 0, 0, 0, 1]
2. Cumulative sum → [1, 3, 5, 6, 6, 6, 6, 7]
3. Place elements → [1, 2, 2, 3, 3, 4, 8]
• Time Complexity = O(n + k), where k is the range of input values.
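
A minimal Python Counting Sort matching the steps above (assumes a non-empty list of non-negative integers):

def counting_sort(a):
    k = max(a)                        # largest value sets the range
    count = [0] * (k + 1)
    for x in a:                       # 1. count frequencies
        count[x] += 1
    for v in range(1, k + 1):         # 2. cumulative sums -> end positions
        count[v] += count[v - 1]
    out = [0] * len(a)
    for x in reversed(a):             # 3. place elements (stable order)
        count[x] -= 1
        out[count[x]] = x
    return out

For the example input, counting_sort([4, 2, 2, 8, 3, 3, 1]) returns [1, 2, 2, 3, 3, 4, 8].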

15. Solve using Master Method.


1. T(n) = 2T(n/2) + n
• a=2, b=2 → n^(log₂2) = n.
• f(n)=n → Case 2 → T(n)=Θ(n log n).
2. T(n) = 4T(n/2) + n²logn
• a=4, b=2 → n^(log₂4)=n².
• f(n) = n² log n = Θ(n² log n), i.e., n^(log₂4) times an extra log factor.
• Extended Case 2 (f(n) = Θ(n^(log_b a) · logᵏ n) with k = 1) → T(n) = Θ(n² log² n).
3. T(n) = 8T(n/2) + n log n
• a=8, b=2 → n^(log₂8)=n³.
• f(n)=n log n = O(n^(3-ε)).
• Case 1 → T(n)=Θ(n³).
4. T(n) = 9T(n/3) + 1
• a=9, b=3 → n^(log₃9)=n².
• f(n)=1 = O(n^(2-ε)).
• Case 1 → T(n)=Θ(n²).
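
As a sanity check (an assumed illustrative script, not part of the method), the first recurrence can be evaluated numerically and compared with the predicted Θ(n log n) bound; the ratio settles near a constant.

import math

def T(n):
    # T(n) = 2*T(n//2) + n, with T(1) = 1
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in [2**k for k in (4, 8, 12, 16)]:
    print(n, T(n) / (n * math.log2(n)))   # ratio approaches 1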
