In this short article, I will try to explain Big O Notation and walk through a few examples. Whether you are a computer engineer or a developer (software or web), I bet you have heard the phrase “Big O Notation“.
If you have not, this is a perfect time to learn about it. This article is an overview of Big O Notation, and understanding it is a must before studying any algorithm.
Definition of Big O Notation
Big O Notation is a mathematical notation. It describes how the running time (or memory use) of an algorithm grows as its input size grows, which makes it one of the most essential tools for evaluating the cost of an algorithm. Big O belongs to a family of notations invented by Paul Bachmann, Edmund Landau, and others.
O(N)
Here, O = Order of Complexity and N = Number of Inputs.
To determine algorithmic complexity, we can use Big O Notation. Let us assume a problem of size N:
- Order 1: O(1) = constant-time function/method.
- Order N: O(N) = linear-time function/method.
- Order N squared: O(N²) = quadratic-time function/method.
Examples of Big O Notation
1. Order 1: O(1) = constant-time function/method
Suppose you have an array of 50 elements,
const array = [1, 2, 3, 4, 5, …, 49, 50]
Accessing an element by its index number takes the same amount of time, whether you want the first element or the last. No matter how big the array is, it takes the same amount of time.
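As a minimal JavaScript sketch (the small array below is just a stand-in for the 50-element array above):
const array = [1, 2, 3, 4, 5]; // stand-in for the larger array
// Reading by index is a single operation, no matter how large the array is.
const first = array[0];               // O(1)
const last = array[array.length - 1]; // O(1)
console.log(first, last);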
2. Order N: O(N) = linear-time function/method
Suppose you have an array,
const array = [1,2,3,4,5]
Looping over each element takes a certain amount of time per element, so as the input grows, the running time grows linearly: O(N).
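As a sketch, looping over the same array in JavaScript:
const array = [1, 2, 3, 4, 5];
// The loop body runs once per element, so the running time
// grows in direct proportion to the number of elements: O(N).
for (let i = 0; i < array.length; i++) {
  console.log(array[i]);
}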
3. Order N squared: O(N²) = quadratic-time function/method
Suppose you have a loop within a loop,
const arr = [1, 2, 3, 4, 5];
// For each element in arr, the inner loop visits every element again,
// so the body runs N × N times.
for (let i = 0; i < arr.length; i++) {
  for (let j = 0; j < arr.length; j++) {
    console.log(arr[i], arr[j]);
  }
}
This is a quadratic-time function: for N elements, the inner body runs N × N times, so the work grows with the square of the input size.
Time Complexities of 8 Sorting Algorithms:
Algorithm | Best | Average | Worst |
Bubble Sort | Ω(n) | Θ(n^2) | O(n^2) |
Quick Sort | Ω(n log(n)) | Θ(n log(n)) | O(n^2) |
Merge Sort | Ω(n log(n)) | Θ(n log(n)) | O(n log(n)) |
Insertion Sort | Ω(n) | Θ(n^2) | O(n^2) |
Selection Sort | Ω(n^2) | Θ(n^2) | O(n^2) |
Heap Sort | Ω(n log(n)) | Θ(n log(n)) | O(n log(n)) |
Radix Sort | Ω(nk) | Θ(nk) | O(nk) |
Bucket Sort | Ω(n+k) | Θ(n+k) | O(n^2) |
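To illustrate how the best and worst cases in the table come about, here is a minimal bubble sort sketch in JavaScript (the function name and the early-exit flag are my own illustration, not a reference implementation): the nested loops give the O(n²) worst case, while the early exit on an already-sorted input gives the Ω(n) best case.
function bubbleSort(arr) {
  const a = [...arr]; // copy so the input is not mutated
  for (let i = 0; i < a.length - 1; i++) {
    let swapped = false;
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap adjacent elements
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps means the array is already sorted
  }
  return a;
}
console.log(bubbleSort([5, 1, 4, 2, 3])); // [1, 2, 3, 4, 5]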
Conclusion:
An algorithm can be efficient for a small problem but become very slow or memory-hungry as the problem grows, and this is where Big O Notation comes in. Big O Notation helps us choose the best possible algorithm for a specific problem so that we save both time and space. That is Big O Notation explained in simple language.