Big-O notation is used to represent asymptotic upper bounds. It allows one to estimate whether a problem will take years or seconds to compute on a modern computer.

In computer science, it is most commonly used when talking about the time complexity of algorithms, but it can also refer to the storage required.

For example, a linear search on an unsorted array of size N is O(N). If we first put the elements into a hash table, the space used is O(N) (Theta(N), to be more precise), but the search time is O(1) in the average case.
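A minimal sketch of the trade-off (the sample data here is made up for illustration):

```python
# Linear search: scans up to N elements -> O(N) time, no extra space.
def linear_search(arr, target):
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

values = [42, 7, 19, 3, 88]       # example data (assumed)
print(linear_search(values, 19))  # prints 2

# Hash-table lookup: O(N) extra space, but O(1) average search time.
lookup = set(values)
print(19 in lookup)               # prints True
```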

It should be noted that Big-O only represents an *upper* bound for a function. Therefore an O(N) function is also O(N log N), O(N²), O(N!), etc. In many cases Big-O is used imprecisely where Big-Theta would be more appropriate.

If the complexity is given by a recurrence relation, the analysis can often be carried out via the Master Theorem.
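For instance, merge sort satisfies the recurrence T(n) = 2T(n/2) + O(n), which the Master Theorem resolves to O(n log n). A sketch of the algorithm, with the recurrence marked in comments:

```python
# Merge sort: two recursive halves plus a linear merge,
# i.e. T(n) = 2T(n/2) + O(n)  ->  O(n log n) by the Master Theorem.
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # T(n/2)
    right = merge_sort(arr[mid:])   # T(n/2)
    # O(n) merge of the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]
```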

## Properties

*Summation*

O(f(n)) + O(g(n)) -> O(max(f(n), g(n)))

For example: O(n^2) + O(n) = O(n^2)

*Multiplication by a positive constant*
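The summation rule corresponds to running two phases one after the other; the larger term dominates. A small illustrative sketch:

```python
def pairs_then_scan(items):
    # Phase 1: all ordered pairs -> O(n^2) work.
    pair_count = 0
    for a in items:
        for b in items:
            pair_count += 1
    # Phase 2: a single pass -> O(n) work.
    total = sum(items)
    # Overall cost: O(n^2) + O(n) = O(n^2).
    return pair_count, total

print(pairs_then_scan([1, 2, 3]))  # prints (9, 6)
```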

O(c * f(n)) -> O(f(n))

For example: O(1000 * n^2) = O(n^2)

*Multiplication*

O(f(n)) * O(g(n)) -> O(f(n) * g(n))

For example: O(n^2) * O(n) = O(n^2 * n) = O(n^3)

*Transitivity*
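The multiplication rule corresponds to nesting loops: an O(n) body executed inside an O(n^2) outer structure gives O(n^3) total work, as this sketch shows:

```python
def triple_loop(n):
    # An O(n) loop nested inside an O(n^2) double loop:
    # O(n^2) * O(n) = O(n^3) iterations in total.
    steps = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                steps += 1
    return steps

print(triple_loop(4))  # prints 64, i.e. 4^3
```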

If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n))

## Groups of Big-O

| Complexity | Sample algorithms |
|------------|-------------------|
| O(N!) | Getting all permutations of N items |
| O(2^N) | Iterating over all subsets of N items |
| O(N^3) | Calculating all triplets from N items |
| O(N^2) | Enumerating all pairs from N items, insertion sort |
| O(N log N) | Quick sort, merge sort |
| O(N) | Getting min, max, average; iterating over N items |
| O(log N) | Binary search |
| O(1) | Getting an item by its index in an array |
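As a concrete example from the O(log N) row, binary search halves the remaining interval on each step, so it needs at most about log2(N) comparisons on a sorted array:

```python
def binary_search(arr, target):
    # Each iteration halves the search interval -> O(log N) comparisons.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3
```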