Data Structure Operations
Data structure operations are the methods used to manipulate the data in a data structure. The most
common data structure operations are:
• Traversal
Traversal operations are used to visit each node in a data structure in a specific order. This
technique is typically employed for printing, searching, displaying, and reading the data stored in
a data structure.
• Insertion
Insertion operations add new data elements to a data structure. You can do this at the data
structure's beginning, middle, or end.
• Deletion
Deletion operations remove data elements from a data structure. These operations are typically
performed on nodes that are no longer needed.
• Search
Search operations are used to find a specific data element in a data structure. These operations
typically employ a compare function to determine if two data elements are equal.
• Sort
Sort operations are used to arrange the data elements in a data structure in a specific order. This
can be done using various sorting algorithms, such as insertion sort, bubble sort, merge sort, and
quick sort.
• Merge
Merge operations combine two data structures into a single structure, for example merging two
sorted lists into one sorted list.
• Copy
Copy operations are used to create a duplicate of a data structure. This can be done by copying
each element in the original data structure to the new one.
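The operations above can be sketched on a singly linked list. This is a minimal illustration (the Node and LinkedList class names are hypothetical, not from the notes), showing traversal, insertion at the beginning, search, and deletion:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, data):      # Insertion at the beginning
        node = Node(data)
        node.next = self.head
        self.head = node

    def traverse(self):                # Traversal: visit every node in order
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

    def search(self, key):             # Search: compare each element with the key
        cur = self.head
        while cur:
            if cur.data == key:
                return True
            cur = cur.next
        return False

    def delete(self, key):             # Deletion: unlink the first matching node
        cur, prev = self.head, None
        while cur and cur.data != key:
            prev, cur = cur, cur.next
        if cur:
            if prev:
                prev.next = cur.next
            else:
                self.head = cur.next
```

Copying and merging can be built on top of these primitives by traversing one structure and inserting its elements into another.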
Sorting Algorithms
A Sorting Algorithm is used to rearrange a given array or list of elements in a particular order. Most
programming languages provide sorting in their standard libraries.
Comparison Based : Selection Sort, Bubble Sort, Insertion Sort, Merge Sort, Quick Sort, Heap Sort, Cycle
Sort, Comb Sort, 3-way Merge Sort
Non Comparison Based : Counting Sort, Radix Sort, Bucket Sort, Pigeonhole Sort
Hybrid Sorting Algorithms : IntroSort, TimSort
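To illustrate the non-comparison-based category, here is a minimal sketch of counting sort for small non-negative integers. It never compares two elements against each other; it only counts how often each value occurs:

```python
def counting_sort(arr):
    # Non-comparison sort: assumes small non-negative integer keys.
    if not arr:
        return []
    counts = [0] * (max(arr) + 1)      # one counter per possible value
    for x in arr:
        counts[x] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count) # emit each value as often as it occurred
    return result
```

Because it avoids comparisons, counting sort can beat the O(n log n) lower bound of comparison sorts, at the cost of extra space proportional to the value range.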
Insertion Sort Algorithm
Insertion sort builds the sorted array one element at a time:
• We start with the second element of the array, as the first element is assumed to be sorted.
• Compare the second element with the first; if the second element is smaller, swap them.
• Move to the third element, compare it with the first two elements, and place it at its correct
position.
• Repeat this for each remaining element until the whole array is sorted.
Example (step-by-step pass figures omitted): starting with 23 as the current element, after four passes
the sorted part up to the 4th index is [1, 2, 5, 10, 23], which is also the final sorted array.
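The steps above can be sketched in Python as an in-place insertion sort:

```python
def insertion_sort(arr):
    # Grow a sorted prefix; insert each new element into its correct
    # position by shifting larger elements one slot to the right.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:   # shift larger elements right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                 # insert into the gap
    return arr
```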
• Time Complexity: Best case O(n) if the list is already sorted; worst and average case O(n²),
where n is the number of elements in the list.
• Auxiliary Space: O(1), Insertion sort requires constant additional space, making it a space-efficient
sorting algorithm.
• Adaptive: the number of swaps is directly proportional to the number of inversions. For example,
no swapping happens for an already sorted array, and it takes only O(n) time.
• Not as efficient as other sorting algorithms (e.g., merge sort, quick sort) for most cases.
Bubble Sort Algorithm
Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the adjacent elements
if they are in the wrong order. This algorithm is not suitable for large data sets as its average and worst-
case time complexity are quite high.
• We sort the array using multiple passes. After the first pass, the maximum element goes to end
(its correct position). Same way, after second pass, the second largest element goes to second
last position and so on.
• In every pass, we process only those elements that have not yet moved to their correct position.
After k passes, the largest k elements must have moved to the last k positions.
• In a pass, we consider the remaining elements, compare each pair of adjacent elements, and
swap them if the larger element comes before the smaller one. This moves the largest of the
remaining elements to its correct position.
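A minimal Python sketch of these passes, including the common early-exit optimization when a pass performs no swaps:

```python
def bubble_sort(arr):
    n = len(arr)
    for k in range(n - 1):               # after pass k, the last k+1 elements are in place
        swapped = False
        for i in range(n - 1 - k):       # only process elements not yet in position
            if arr[i] > arr[i + 1]:      # swap adjacent out-of-order pair
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:                  # no swaps: array is already sorted
            break
    return arr
```

The early exit is what gives bubble sort its O(n) best case on an already sorted array.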
• It is a stable sorting algorithm, meaning that elements with the same key value maintain their
relative order in the sorted output.
• Bubble sort has a time complexity of O(n²), which makes it very slow for large data sets.
• Bubble sort is a comparison-based sorting algorithm, which means that it requires a comparison
operator to determine the relative order of elements in the input data set. This can limit the
efficiency of the algorithm in certain cases.
Merge Sort
Merge sort is a sorting algorithm that follows the divide-and-conquer approach. It works by recursively
dividing the input array into smaller subarrays and sorting those subarrays then merging them back
together to obtain the sorted array.
In simple terms, we can say that the process of merge sort is to divide the array into two halves, sort
each half, and then merge the sorted halves back together. This process is repeated until the entire array
is sorted.
How does Merge Sort work?
Merge sort is a popular sorting algorithm known for its efficiency and stability. It follows the divide-and-
conquer approach to sort a given array of elements.
1. Divide: Divide the list or array recursively into two halves until it can no longer be divided.
2. Conquer: Each subarray is sorted individually using the merge sort algorithm.
3. Merge: The sorted subarrays are merged back together in sorted order. The process continues
until all elements from both subarrays have been merged.
Divide:
• [38, 27, 43, 10] is divided into [38, 27] and [43, 10], which are further divided into the single-
element subarrays [38], [27], [43], and [10].
Conquer:
• Each single-element subarray is already sorted; merging [38] and [27] gives [27, 38], and
merging [43] and [10] gives [10, 43].
Merge:
• Merge [27, 38] and [10, 43] to get the final sorted list [10, 27, 38, 43].
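The divide, conquer, and merge steps can be sketched recursively in Python:

```python
def merge_sort(arr):
    if len(arr) <= 1:                    # base case: 0 or 1 elements are sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])         # divide: sort each half recursively
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0              # merge: combine two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal keys in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])              # append whichever half has leftovers
    merged.extend(right[j:])
    return merged
```

Note that the `<=` in the merge step is what makes this implementation stable.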
o Best Case: O(n log n), When the array is already sorted or nearly sorted.
o Average Case: O(n log n), When the array is randomly ordered.
o Worst Case: O(n log n), When the array is sorted in reverse order.
• Auxiliary Space: O(n), Additional space is required for the temporary array used during merging.
• Guaranteed worst-case performance: Merge sort has a worst-case time complexity of O(n log n),
which means it performs well even on large datasets.
• Naturally Parallel: Subarrays are merged independently, which makes merge sort suitable for
parallel processing.
• Space complexity: Merge sort requires additional memory to store the merged sub-arrays
during the sorting process.
• Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires additional
memory to store the sorted data. This can be a disadvantage in applications where memory
usage is a concern.
• Slower than QuickSort in general. QuickSort is more cache friendly because it works in-place.
Quick Sort
QuickSort is a sorting algorithm based on the Divide and Conquer that picks an element as a pivot and
partitions the given array around the picked pivot by placing the pivot in its correct position in the sorted
array.
QuickSort works on the principle of divide and conquer, breaking down the problem into smaller sub-
problems.
1. Choose a Pivot: Select an element of the array as the pivot (common choices are the first, last,
random, or middle element).
2. Partition the Array: Rearrange the array around the pivot. After partitioning, all elements
smaller than the pivot will be on its left, and all elements greater than the pivot will be on its
right. The pivot is then in its correct position, and we obtain the index of the pivot.
3. Recursively Call: Recursively apply the same process to the two partitioned sub-arrays (left and
right of the pivot).
4. Base Case: The recursion stops when there is only one element left in the sub-array, as a single
element is already sorted.
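The steps above can be sketched in Python using the Lomuto partition scheme; the choice of the last element as pivot here is one common option, not the only one:

```python
def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        # Partition: last element as pivot (Lomuto scheme)
        pivot = arr[high]
        i = low - 1                      # end of the "smaller than pivot" region
        for j in range(low, high):
            if arr[j] < pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]  # place pivot correctly
        p = i + 1                        # pivot is now at its final index
        quick_sort(arr, low, p - 1)      # recurse on left partition
        quick_sort(arr, p + 1, high)     # recurse on right partition
    return arr
```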
• Best Case: Ω(n log n), occurs when the pivot element divides the array into two equal halves.
• Average Case: Θ(n log n), on average the pivot divides the array into two parts, but not
necessarily equal.
• Worst Case: O(n²), occurs when the smallest or largest element is always chosen as the pivot
(e.g., for already sorted arrays).
• One of the fastest general-purpose sorting algorithms for large data when stability is not required.
• It is tail recursive, and hence tail call optimization can be applied.
• It is not a stable sort: if two elements have the same key, their relative order is not necessarily
preserved in the sorted output, because quick sort swaps elements according to the pivot's
position without considering their original positions.