
1. Bubble Sort

Description:
Bubble Sort is a simple sorting algorithm that repeatedly steps through the list, compares
adjacent items, and swaps them if they are in the wrong order. The process is repeated until the
list is sorted.

Algorithm:

1. Start with the first element.
2. Compare the current element with the next one.
3. If the current element is greater than the next, swap them.
4. Continue this process for the entire list.
5. After each pass, the largest element "bubbles up" to the end of the list.
6. Repeat the process for all elements, reducing the number of comparisons each time.

Pseudocode:

for i = 0 to n-1
    for j = 0 to n-i-2
        if arr[j] > arr[j+1]
            swap(arr[j], arr[j+1])

Time Complexity:

• Best case: O(n) (if the array is already sorted and the implementation stops early when a pass makes no swaps)
• Worst case: O(n²)
• Space Complexity: O(1) (in-place sorting)
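
As a runnable illustration, here is a Python version of the pseudocode above with the early-exit check that produces the O(n) best case (a minimal sketch; the name bubble_sort is ours):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        # After pass i, the last i elements are already in place,
        # so the inner loop shrinks each time.
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap adjacent pair
                swapped = True
        if not swapped:  # no swaps means the list is sorted: O(n) best case
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]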

2. Insertion Sort

Description:
Insertion Sort builds the sorted array one element at a time. It takes each element from the
unsorted portion and places it at the correct position in the sorted portion of the array.

Algorithm:

1. Start from the second element.
2. Compare it with the previous elements.
3. Shift the larger elements to the right to make space for the current element.
4. Insert the current element into its correct position.
5. Repeat this process for all the elements.

Pseudocode:

for i = 1 to n-1
    key = arr[i]
    j = i - 1
    while j >= 0 and arr[j] > key
        arr[j+1] = arr[j]
        j = j - 1
    arr[j+1] = key

Time Complexity:

• Best case: O(n) (when the array is already sorted)
• Worst case: O(n²)
• Space Complexity: O(1) (in-place sorting)
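
The pseudocode maps directly onto Python; a minimal sketch (the name insertion_sort is ours):

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements of the sorted portion that are larger than key
        # one slot to the right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # insert key into the gap
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]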

3. Heap Sort

Description:
Heap Sort is a comparison-based sorting algorithm that uses a binary heap data structure. The
algorithm builds a max-heap (for ascending order) and repeatedly extracts the largest element to
place it at the end of the array.

Algorithm:

1. Build a max-heap from the input array.
2. Swap the root element with the last element of the heap.
3. Reduce the heap size by 1 and heapify the root.
4. Repeat the process until the heap is empty.

Pseudocode:

heapify(arr, n, i):
    largest = i
    left = 2*i + 1
    right = 2*i + 2
    if left < n and arr[left] > arr[largest]
        largest = left
    if right < n and arr[right] > arr[largest]
        largest = right
    if largest != i:
        swap(arr[i], arr[largest])
        heapify(arr, n, largest)

heapSort(arr):
    n = length of arr
    for i = n//2 - 1 down to 0
        heapify(arr, n, i)
    for i = n-1 down to 1
        swap(arr[0], arr[i])
        heapify(arr, i, 0)

Time Complexity:

• Best case: O(n log n)
• Worst case: O(n log n)
• Space Complexity: O(1) (in-place sorting)

4. Divide and Conquer Algorithms

Description:
Divide and Conquer is a strategy for solving problems by recursively breaking them into smaller
subproblems, solving the subproblems, and combining their solutions.

Examples:

1. Merge Sort:
o Divide the array into two halves.
o Recursively sort each half.
o Merge the sorted halves.
2. Quick Sort:
o Choose a pivot element.
o Partition the array into two subarrays: one with elements smaller than the pivot,
and one with elements larger.
o Recursively sort each subarray.

General Algorithm:

1. Divide the problem into smaller subproblems.
2. Solve the subproblems.
3. Combine the solutions of the subproblems.

Time Complexity:

• Merge Sort & Quick Sort: O(n log n) on average
• Merge Sort Worst Case: O(n log n)
• Quick Sort Worst Case: O(n²) (when the pivot is poorly chosen)
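
As a concrete illustration of the strategy, here is a minimal Merge Sort sketch in Python (the names merge_sort and merge are ours):

def merge_sort(arr):
    if len(arr) <= 1:               # base case: nothing left to divide
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # divide and solve the left half
    right = merge_sort(arr[mid:])   # divide and solve the right half
    return merge(left, right)       # combine the sorted halves

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])            # append whichever half has leftovers
    out.extend(right[j:])
    return out

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
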
5. Binary Search

Description:
Binary Search is an efficient algorithm to search for a target element in a sorted array. It works
by repeatedly dividing the search range in half.

Algorithm:

1. Start with the entire array (low = 0, high = n-1).
2. Find the middle element: mid = (low + high) // 2 (integer division).
3. If the target is equal to the middle element, return its index.
4. If the target is smaller, search the left half.
5. If the target is larger, search the right half.
6. Repeat until the target is found or the search range is empty.

Pseudocode:

binarySearch(arr, low, high, target):
    if low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            return binarySearch(arr, mid + 1, high, target)
        else:
            return binarySearch(arr, low, mid - 1, target)
    return -1  # Target not found

Time Complexity:

• Best case: O(1) (when the element is at the middle)
• Worst case: O(log n)
• Space Complexity: O(1) (iterative) or O(log n) (recursive, due to the call stack)
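
Since the last bullet contrasts the two variants, here is the iterative form in Python, which uses only O(1) extra space (a sketch; the name binary_search is ours):

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1    # target is in the right half
        else:
            high = mid - 1   # target is in the left half
    return -1                # target not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3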

6. Finding Maximum and Minimum

Description:
To find the maximum and minimum elements in an array, iterate through the array once, keeping
track of the largest and smallest elements found so far.

Algorithm:

1. Initialize two variables, max and min, to the first element of the array.
2. Traverse the rest of the array.
3. For each element, compare it with the current max and min.
4. Update max or min accordingly.

Pseudocode:

findMaxMin(arr):
    max = arr[0]
    min = arr[0]
    for i = 1 to n-1:
        if arr[i] > max:
            max = arr[i]
        if arr[i] < min:
            min = arr[i]
    return max, min

Time Complexity:

• Best and Worst case: O(n) (single traversal)
• Space Complexity: O(1) (in-place)
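
A direct Python translation (find_max_min is our name; largest/smallest are used to avoid shadowing the built-ins max and min):

def find_max_min(arr):
    largest = smallest = arr[0]   # assumes arr is non-empty
    for x in arr[1:]:
        if x > largest:
            largest = x
        if x < smallest:
            smallest = x
    return largest, smallest

print(find_max_min([3, 7, 1, 9, 4]))  # (9, 1)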

7. Matrix Multiplication

Description:
Matrix multiplication involves multiplying two matrices to obtain a resulting matrix. The
number of columns of the first matrix must match the number of rows of the second matrix.

For two matrices A (m × n) and B (n × p), the resulting matrix C will have dimensions m × p.

Algorithm:

1. For each row of A and each column of B, compute the dot product.
2. The element in row i and column j of the resulting matrix C is computed as the sum of the products of the corresponding elements of row i from A and column j from B.

Pseudocode:

matrixMultiplication(A, B):
    C = matrix of size m × p
    for i = 0 to m-1:
        for j = 0 to p-1:
            C[i][j] = 0
            for k = 0 to n-1:
                C[i][j] += A[i][k] * B[k][j]
    return C

Time Complexity:

• O(m * n * p) (naive method); O(n³) when both matrices are n × n
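
A runnable Python version of the naive triple loop (matrix_multiply is our name; it assumes the inner dimensions of A and B already agree):

def matrix_multiply(A, B):
    m, n, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(m)]   # m × p result, initialized to zero
    for i in range(m):
        for j in range(p):
            for k in range(n):        # dot product of row i of A and column j of B
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]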

8. Quick Sort

Description:
Quick Sort is a divide-and-conquer sorting algorithm. It selects a pivot element, partitions the
array into two subarrays (one with elements smaller and one with larger elements), and
recursively sorts the subarrays.

Algorithm:

1. Choose a pivot element from the array.
2. Partition the array into two subarrays: elements less than the pivot, and elements greater than the pivot.
3. Recursively apply Quick Sort to both subarrays.
4. Combine the subarrays to produce the sorted array.

Pseudocode:

quickSort(arr, low, high):
    if low < high:
        pivotIndex = partition(arr, low, high)
        quickSort(arr, low, pivotIndex - 1)
        quickSort(arr, pivotIndex + 1, high)

partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j = low to high-1:
        if arr[j] <= pivot:
            i = i + 1
            swap arr[i] and arr[j]
    swap arr[i + 1] and arr[high]
    return i + 1

Time Complexity:

• Best and Average case: O(n log n)
• Worst case: O(n²) (if the pivot is poorly chosen)
• Space Complexity: O(log n) (due to recursion)
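
The same Lomuto partition scheme as the pseudocode, in runnable Python (the default-argument handling for high is our convenience):

def partition(arr, low, high):
    pivot = arr[high]                 # last element as pivot (Lomuto scheme)
    i = low - 1                       # boundary of the "<= pivot" region
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # place pivot in its final slot
    return i + 1

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)   # sort elements left of the pivot
        quick_sort(arr, p + 1, high)  # sort elements right of the pivot
    return arr

print(quick_sort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]
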
Key Takeaways:

• Bubble Sort: Inefficient but simple (O(n²)).
• Insertion Sort: Efficient for small or nearly sorted data (O(n²) worst case, O(n) best case).
• Heap Sort: Efficient, in-place, no recursion required (O(n log n)).
• Divide and Conquer: Fundamental strategy behind efficient algorithms like Merge Sort and Quick Sort (O(n log n) on average).
• Binary Search: Fast search in sorted arrays (O(log n)).
• Finding Maximum and Minimum: Simple linear scan (O(n)).
• Matrix Multiplication: Naive method is O(n³) for square matrices.
• Quick Sort: Fast on average, but the worst case can degrade to O(n²) (pivot choice matters).

Asymptotic Notation

Asymptotic notation is used to describe the behavior of algorithms as the input size becomes
large. It gives us a way to express the efficiency or complexity of an algorithm in terms of time
or space. The most common asymptotic notations are:

1. Big O Notation (O):
o Purpose: Describes the upper bound of the algorithm's running time.
o Interpretation: It provides the worst-case scenario, i.e., the maximum time an algorithm could take for an input of size n.
o Example: O(n²) means that the algorithm's running time grows at most quadratically as the input size increases.
o Example: A sorting algorithm with O(n²) complexity will take at most on the order of n² operations in the worst case.

Formal Definition:
An algorithm is O(f(n)) if there exist constants c > 0 and n₀ such that for all n ≥ n₀, the time complexity is less than or equal to c * f(n).
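
Worked example (our own numbers): T(n) = 3n² + 5n + 2 is O(n²). Take c = 4 and n₀ = 6; for all n ≥ 6 we have 5n + 2 ≤ n² (at n = 6: 32 ≤ 36), so 3n² + 5n + 2 ≤ 4n².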

2. Big Omega Notation (Ω):
o Purpose: Describes the lower bound of the algorithm's running time.
o Interpretation: It gives the best-case scenario, i.e., the minimum time an algorithm could take.
o Example: Ω(n) means that the algorithm will take at least on the order of n operations for an input of size n.

Formal Definition:
An algorithm is Ω(f(n)) if there exist constants c > 0 and n₀ such that for all n ≥ n₀, the time complexity is greater than or equal to c * f(n).

3. Big Theta Notation (Θ):
o Purpose: Describes both the upper and lower bounds of an algorithm's running time.
o Interpretation: If an algorithm has a time complexity of Θ(f(n)), its running time is bounded both above and below by constant multiples of f(n), i.e., the algorithm performs in a predictable range.
o Example: Θ(n log n) means that the algorithm takes between c₁ * n log n and c₂ * n log n operations in all cases, for some constants c₁ and c₂.

Formal Definition:
An algorithm is Θ(f(n)) if there exist constants c₁ > 0, c₂ > 0, and n₀ such that for all n ≥ n₀, the time complexity is bounded below by c₁ * f(n) and above by c₂ * f(n).
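
Worked example (our own numbers): T(n) = 2n log n + 3n is Θ(n log n). For n ≥ 2, log n ≥ 1, so 3n ≤ 3n log n, giving 2n log n ≤ T(n) ≤ 5n log n with c₁ = 2, c₂ = 5, and n₀ = 2.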

4. Little o Notation (o):
o Purpose: Describes a strict upper bound that is not tight.
o Interpretation: The time complexity of the algorithm grows strictly slower than f(n) for all sufficiently large n.
o Example: o(n²) means that the algorithm grows strictly slower than n², i.e., its running time eventually falls below any constant multiple of n².

5. Little omega Notation (ω):
o Purpose: Describes a strict lower bound that is not tight.
o Interpretation: The time complexity of the algorithm grows strictly faster than f(n) for all sufficiently large n.
o Example: ω(n) means that the algorithm grows strictly faster than linear time for large n.

Heap Sort: Complete Information

Heap Sort is a comparison-based sorting algorithm that utilizes a binary heap data structure to
sort elements. The binary heap is a complete binary tree that satisfies the heap property. Heap
Sort works by building a max heap (for sorting in ascending order) and then repeatedly
extracting the maximum element to place it in the correct position in the array.

Key Concepts
1. Binary Heap:
o Max Heap: A complete binary tree where each parent node is greater than or
equal to its child nodes.
o Min Heap: A complete binary tree where each parent node is less than or equal to
its child nodes.
2. Heap Property:
o In a Max Heap, the key at each node is greater than or equal to the keys of its
children, and the maximum key is at the root.
o In a Min Heap, the key at each node is less than or equal to the keys of its
children, and the minimum key is at the root.
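
For example (our own illustration), stored as an array, [9, 5, 7, 1, 3] is a max heap: the children of the node at index i sit at indices 2i + 1 and 2i + 2, so the root 9 is ≥ its children 5 and 7, and 5 is ≥ its children 1 and 3.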

Steps of Heap Sort

1. Build a Max Heap:
o Convert the array into a max heap. This ensures that the largest element is at the root (index 0).
2. Extract the Root (Max) Element:
o The root (maximum) element is swapped with the last element of the heap. This moves the largest element to its correct position in the sorted array.
3. Heapify the Remaining Heap:
o After the swap, the heap property might be violated, so we heapify the root to restore the heap structure.
4. Repeat the Process:
o Reduce the heap size by one and repeat the process (extracting the max and heapifying) until all elements are sorted.

Detailed Algorithm

1. Build Max Heap:
o For an array with n elements, start from the last non-leaf node (at index n/2 - 1, using integer division) and move upwards to the root, heapifying each node.
2. Heapify:
o A function that ensures the tree maintains the max-heap property. If a node is smaller than either of its children, it swaps with the larger child and then recursively heapifies the affected subtree.
3. Sort the Array:
o Swap the root (max element) with the last element of the heap, reduce the heap size by 1, and heapify the root. Repeat until the heap size is reduced to 1.

Pseudocode for Heap Sort


def heapify(arr, n, i):
    largest = i          # Initialize largest as root
    left = 2 * i + 1     # left child
    right = 2 * i + 2    # right child

    # If left child is larger than root
    if left < n and arr[left] > arr[largest]:
        largest = left

    # If right child is larger than the largest so far
    if right < n and arr[right] > arr[largest]:
        largest = right

    # If largest is not root, swap and continue heapifying
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]  # Swap
        heapify(arr, n, largest)

def heapSort(arr):
    n = len(arr)

    # Build a max heap
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)

    # One by one extract elements from the heap
    for i in range(n - 1, 0, -1):
        arr[0], arr[i] = arr[i], arr[0]  # Swap the root (max) with the last element
        heapify(arr, i, 0)               # Heapify the reduced heap
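
A quick usage check of the code above:

data = [12, 11, 13, 5, 6, 7]
heapSort(data)        # sorts in place
print(data)           # [5, 6, 7, 11, 12, 13]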

Time Complexity Analysis

1. Building the Max Heap:
o Heapify takes O(log n) time for each node, and there are O(n) nodes. However, the total work done during heap construction is O(n), because most of the heapify operations are on smaller subtrees.
2. Extracting the Max (root) and Heapifying:
o For each of the n elements, we perform a swap and a heapify operation.
o Each heapify operation takes O(log n) time, so extracting and heapifying the root n times takes O(n log n).

Overall Time Complexity:

• Best Case: O(n log n)
• Worst Case: O(n log n)
• Average Case: O(n log n)

Heap Sort's performance is consistent, as it always requires O(n log n) time, regardless of the input data's initial order.

Space Complexity

Heap Sort is an in-place sorting algorithm, meaning it does not require extra space proportional to the input size. The only space used is for recursive calls during the heapify process, which is O(log n) in the worst case.

Space Complexity:

• O(1) for iterative implementations.
• O(log n) for recursive implementations (due to the stack space used in the recursion).

Advantages of Heap Sort

1. Efficient Sorting: Heap Sort has a guaranteed worst-case time complexity of O(n log n), which is better than algorithms like Bubble Sort or Insertion Sort.
2. In-Place Sorting: No extra memory is required apart from the input array.
3. Non-Recursive Option: Heap Sort can be implemented without recursion, making it suitable for situations with limited stack space.

Disadvantages of Heap Sort

1. Not Stable: Heap Sort is not a stable sort, meaning the relative order of equal elements may not be preserved.
2. Slow in Practice: Although the time complexity is O(n log n), Heap Sort is generally slower than algorithms like Merge Sort and Quick Sort in practice, largely due to the constant factors involved in the heap operations.
3. Cache Inefficiency: Heap operations jump around the array, so they may not take advantage of modern memory hierarchies as efficiently as algorithms like Quick Sort.

Conclusion

Heap Sort is an efficient, comparison-based, in-place sorting algorithm with a worst-case time complexity of O(n log n). While it is guaranteed to perform well and does not require additional space, it is not stable and may be slower in practice than other sorting algorithms such as Quick Sort. However, its consistent O(n log n) performance makes it a reliable choice in cases where stability or cache efficiency is not a concern.
