Before you get practical and start working with Abstract Data Types, here's another piece of theoretical computer science jargon to understand.
Big-O notation is a way of describing the complexity of an algorithm. Algorithms? Wait, wait, wait... There's too much theory dropping at once here!
What is an Algorithm?
Your primary focus in this section will be building data structures, but there's also the question of what you'll do with them. And that's where algorithms come into play. Just having a "list" doesn't mean much if you don't have a way to add items to it, remove items from it, or access them!
Now, the solutions you use to do something with your data structures are what we'll call algorithms, for example:
- Iterating over a collection
- Accessing the first element in a collection
- Deleting the last element of a collection
Does that sound familiar? You've already used plenty of algorithms in your programming journey. For example, when you wrote a for loop, when you used a method on an object, or when you wrote your own functions.
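To make that concrete, here's a minimal sketch of what the three example algorithms above might look like on a built-in Python list (the variable name `numbers` is just for illustration):

```python
numbers = [3, 1, 4, 1, 5]

# Iterating over a collection
for number in numbers:
    print(number)

# Accessing the first element in a collection
first_element = numbers[0]

# Deleting the last element of a collection
numbers.pop()
```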
Okay, with that term out of the way, it's time to look at Big-O notation.
Big-O Notation in Cooking Terms
Big-O notation is a way of measuring how long an algorithm (a set of steps for solving a problem) takes to run. It tells us how the running time of the algorithm changes as the input size grows.
Imagine you have a recipe for making soup. If you've followed along with the course, you probably have a few! The recipe might have you cut, dice, and mix some ingredients, then cook the soup for a certain amount of time. The time it takes to make the soup is like the running time of an algorithm.
Now, imagine you want to make a bigger batch of soup by using the same recipe. You'll need more time to prepare the ingredients, and you'll need to cook the soup for longer. The time it takes to make the bigger batch of soup is longer than the time it takes to make the smaller batch.
Big-O notation lets you compare how the running time of an algorithm changes as the input size grows. For example, if an algorithm takes the same amount of time to run no matter how big the problem is, we say that the algorithm is $`O(1)`$ (pronounced "order 1"). This means that the running time is not affected by the size of the input. If an algorithm instead takes twice as long to run on a problem that is twice as big, it's $`O(n)`$ (pronounced "order n"), meaning the running time grows in direct proportion to the input size.
On the other hand, if an algorithm takes four times as long to run on a problem that is twice as big, we say that the algorithm is $`O(n^2)`$ (pronounced "order n squared"). This means that the running time is proportional to the size of the input squared.
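To see where that factor of four comes from, assume the running time of an $`O(n^2)`$ algorithm is roughly $`T(n) = c \cdot n^2`$ for some constant $`c`$. Doubling the input size then gives $`T(2n) = c \cdot (2n)^2 = 4 \cdot c \cdot n^2 = 4 \cdot T(n)`$, so the algorithm takes four times as long. For a linear algorithm with $`T(n) = c \cdot n`$, the same doubling only doubles the running time.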
Big-O notation is a way of comparing algorithms and understanding how their running time changes as the input size grows. It helps us choose the best algorithm for a given problem based on how fast it runs and how many resources, such as memory and computing power, it uses.
Big-O Notation in Coding Terms
Here are some code examples to help clarify the concept of Big-O notation:
$`O(1)`$ - Constant Time
An algorithm has a running time of $`O(1)`$ if the running time doesn't depend on the size of the input. For example, the following function has a running time of $`O(1)`$ because it always takes the same amount of time to run, regardless of the size of the input:
def constant_time(x):
    return x * 2
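Another common constant-time operation is accessing a list element by its index. The sketch below uses a hypothetical `get_first()` helper to illustrate it:

```python
def get_first(lst):
    # Indexing into a Python list takes the same amount of time
    # no matter how many elements the list contains, so this is O(1).
    # Assumes the list is not empty.
    return lst[0]
```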
$`O(n)`$ - Linear Time
An algorithm has a running time of $`O(n)`$ if the running time is directly proportional to the size of the input. For example, the following function has a running time of $`O(n)`$ because it takes longer to run on a list with more elements:
def linear_time(lst):
    for i in lst:
        print(i)
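As another linear-time sketch, consider a hypothetical `find_largest()` function. It inspects every element exactly once, so doubling the length of the list roughly doubles the running time:

```python
def find_largest(lst):
    # Every element is inspected exactly once, so the running
    # time grows in direct proportion to the list's length: O(n).
    # Assumes the list is not empty.
    largest = lst[0]
    for item in lst:
        if item > largest:
            largest = item
    return largest
```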
$`O(n^2)`$ - Quadratic Time
An algorithm has a running time of $`O(n^2)`$ if the running time is proportional to the size of the input squared. For example, the following function has a running time of $`O(n^2)`$ because for every element in the list, the inner loop runs over the entire list again, so the running time grows much faster than the list itself:
def quadratic_time(lst):
    for i in lst:
        for j in lst:
            print(i, j)
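To get a feel for that growth: with 10 elements in the list, the inner `print()` call runs 100 times; with 1,000 elements, it runs 1,000,000 times.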
Post on Discord if you have any questions or want to discuss the concept with other learners.
Implications and Considerations
For small data sets, a complexity of $`O(n^2)`$ is not really a problem. But when that data set grows to tens of thousands, hundreds of thousands, or millions of elements, the rapid growth in the number of operations, and with it the computational resources and time required, suddenly becomes a problem:
Graph by Cmglee on Wikipedia (CC4.0)
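To put rough numbers on that growth, here's approximately how many steps each complexity class needs as the input grows:
- 10 elements: about 10 steps for $`O(n)`$, about 100 steps for $`O(n^2)`$
- 1,000 elements: about 1,000 steps for $`O(n)`$, about 1,000,000 steps for $`O(n^2)`$
- 1,000,000 elements: about 1,000,000 steps for $`O(n)`$, about 1,000,000,000,000 steps for $`O(n^2)`$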
As a programmer, you should try to design algorithms with the lowest complexity you can. You want your programs and your algorithms to grow gracefully: whether you're processing ten elements or a hundred million, you want straightforward algorithms that scale to data sets of all sizes without becoming so slow that they require vast amounts of resources and time to complete their task.
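As a sketch of what designing for lower complexity can look like in practice, here are two hypothetical ways to check a list for duplicate values: a nested-loop version that is $`O(n^2)`$ and a set-based version that is $`O(n)`$ on average:

```python
def has_duplicates_quadratic(lst):
    # Compares every element with every element after it: O(n^2).
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] == lst[j]:
                return True
    return False

def has_duplicates_linear(lst):
    # Remembers elements it has already seen in a set, so each
    # element is processed only once: O(n) on average.
    seen = set()
    for item in lst:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both functions return the same answer, but on large lists the set-based version finishes far sooner because it avoids comparing every element with every other element.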
Additional Resources
- Explore CodingNomads Data Structures and Algorithms course
- Learn more about Big O Notation
Additionally, you can learn more about Big-O notation in the following resources:
- Big-O Cheat Sheet by Eric Rowell
- HackerRank - Big O Notation on YouTube
- Estefannie - The Big Oh Algorithm Analysis
In the next lesson, you'll learn about a fundamental Abstract Data Type that you'll later implement yourself in Python.
Summary: What is Big-O Notation?
- Big-O notation is a way of measuring the complexity of an algorithm
- An algorithm is a solution that you apply to your data structure (e.g., inserting, finding, sorting, or deleting data)
- The problems caused by an algorithm's complexity increase with the size of your data set
Examples of Big-O Notation
- $`O(1)`$: Constant Time
- $`O(n)`$: Linear Time
- $`O(n^2)`$: Quadratic Time