Understanding Big O Complexity
Big O notation is used in computer science to describe the performance or complexity of an algorithm. It expresses an upper bound on how the running time or space used (e.g., in memory or on disk) grows as the input size grows, and it is most often quoted for the worst-case scenario. It characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
O(1) - Constant
The algorithm takes the same amount of time regardless of the input size. Ex: Accessing an array element by its index.
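A minimal Python sketch of constant-time access (the function name is illustrative):

```python
def get_element(items, index):
    # Indexing into a Python list is O(1): the lookup takes the
    # same time whether the list has 10 elements or 10 million.
    return items[index]
```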
O(log n) - Logarithmic
Time increases logarithmically with the input size. Very efficient. Ex: Binary search.
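Binary search can be sketched in Python like this (a hypothetical helper, assuming the input list is already sorted):

```python
def binary_search(sorted_items, target):
    # Each iteration halves the remaining search interval, so at
    # most about log2(n) iterations are needed.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid          # found: return its index
        elif sorted_items[mid] < target:
            lo = mid + 1        # discard the lower half
        else:
            hi = mid - 1        # discard the upper half
    return -1                   # not present
```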
O(n) - Linear
Time is directly proportional to the input size. Ex: Iterating through an array.
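A linear scan in Python (an illustrative example):

```python
def contains(items, target):
    # Visits each element once, so the work grows in direct
    # proportion to the number of elements n.
    for item in items:
        if item == target:
            return True
    return False
```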
O(n log n) - Log-Linear
A very common complexity for efficient sorting algorithms. Ex: Mergesort, Heapsort.
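Mergesort as a Python sketch: the input is halved about log n times, and each level of halving does O(n) merging work, giving O(n log n) overall.

```python
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # sort each half recursively
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```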
O(n²) - Quadratic
Time is proportional to the square of the input size. Becomes slow quickly. Ex: Nested loops over the same collection.
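A typical nested-loop example in Python (the duplicate check here is illustrative):

```python
def has_duplicate(items):
    # The inner loop runs for every iteration of the outer loop,
    # giving roughly n * n / 2 comparisons: O(n^2).
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False
```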
O(2ⁿ) - Exponential
Time roughly doubles with each additional element in the input. Very slow. Ex: Recursive calculation of Fibonacci numbers (naive approach).
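The naive recursive Fibonacci in Python:

```python
def fib(n):
    # Each call spawns two more calls, so the call tree has on the
    # order of 2^n nodes: adding one to n roughly doubles the work.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```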
O(n!) - Factorial
Grows extremely fast. Often seen in permutation problems. Ex: Traveling Salesman Problem (brute-force).
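A brute-force Traveling Salesman sketch in Python (the function name and the distance-matrix input format are illustrative):

```python
from itertools import permutations

def shortest_tour_length(dist):
    # dist[i][j] is the distance from city i to city j. Fixing
    # city 0 as the start, we try every ordering of the remaining
    # n - 1 cities: (n - 1)! candidate tours, i.e. factorial growth.
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, length)
    return best
```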
Growth Rate Visualization
Order the Complexities
Drag and drop the complexities into the correct order, from fastest (best) to slowest (worst).