Notes on Time Complexity

Subhankar Halder
2 min read · Mar 30, 2020

Time complexity is a fundamental concept in computer science that allows us to analyze algorithms. I have been reading about this concept in the book “The Algorithm Design Manual” by Prof. Steven Skiena, and I decided to jot down some notes on the topic.

Time Complexity

Time complexity is an estimate of an algorithm’s performance. It is expressed as a function of the input size “n” and counts the number of steps the algorithm takes.

In his book, Prof. Skiena writes about the RAM Model of Computation, wherein:

  1. Every simple operation takes exactly one time step
  2. Loops and subroutines are not considered simple operations; they are made up of many single-step operations
  3. Each memory access takes exactly one time step
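
Under this model, we can estimate running time simply by counting operations. Here is a minimal sketch of that counting for a simple loop (the exact step counts are my own illustrative bookkeeping, not something prescribed by the book):

```python
def sum_array(items):
    """Sum an array while counting RAM-model steps (illustrative only)."""
    steps = 0
    total = 0            # one assignment
    steps += 1
    for x in items:      # the loop body runs len(items) times
        total += x       # one addition + one assignment per iteration
        steps += 2
    return total, steps

total, steps = sum_array([4, 8, 15, 16, 23, 42])
print(total, steps)      # prints 108 13; the step count grows linearly with n
```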

Big Oh

To express the time complexity of an algorithm or to differentiate algorithms on the basis of time complexity, we use the Big Oh notation.

Mathematically, we can express the Big Oh notation as an upper bound on a function. Thus,

f(n) = O(g(n)) means that c·g(n) is an upper bound on f(n), where “c” is a constant.

In other words, there exists a constant “c” such that f(n) is always less than or equal to c·g(n) for sufficiently large n.

Note that, in the analysis of algorithms, Big Oh notation ignores the difference between multiplicative constants.

Thus, functions f(n) = 3n and g(n) = n are identical in Big Oh analysis.
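
To make the definition concrete, here is a minimal sketch (my own illustrative example, not from the book) that numerically checks the bound for f(n) = 3n + 5 against g(n) = n, using c = 4:

```python
def f(n):
    return 3 * n + 5   # f(n) = 3n + 5

def g(n):
    return n           # g(n) = n

c = 4                  # chosen constant
n0 = 5                 # the bound holds for all n >= n0

# Verify f(n) <= c * g(n) for a range of large-enough inputs,
# which is exactly what f(n) = O(g(n)) requires.
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) = 3n + 5 is O(n) with c = 4, n0 = 5")
```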

Dominance Relations

The Big Oh notation classifies functions into groups of time complexities. Further, a faster-growing group dominates a slower-growing one. The dominance relation can be seen below:

n! >> 2^n >> n³ >> n² >> n log n >> n >> log n >> 1

Thus, in the above dominance relation, we can say that n! is a faster growing function than n.
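
A quick way to get a feel for this dominance relation is to evaluate each function at a couple of input sizes. The following sketch (with input sizes of my own choosing) prints how fast each class grows:

```python
import math

functions = [
    ("n!",      lambda n: math.factorial(n)),
    ("2^n",     lambda n: 2 ** n),
    ("n^3",     lambda n: n ** 3),
    ("n^2",     lambda n: n ** 2),
    ("n log n", lambda n: n * math.log2(n)),
    ("n",       lambda n: n),
    ("log n",   lambda n: math.log2(n)),
    ("1",       lambda n: 1),
]

# Each row shows how quickly the faster-growing classes explode.
for name, fn in functions:
    print(f"{name:>8}: n=10 -> {fn(10):>12,.0f}   n=20 -> {fn(20):>22,.0f}")
```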

We now discuss each of the above-mentioned classes of time complexity:

1. Factorial: These functions arise when generating all permutations of n items.

2. Exponential: Functions like 2^n arise when we enumerate all subsets of n items.

3. Cubic: These functions arise when we enumerate all triples of items in an n-element universe. They come up in some dynamic programming algorithms.

4. Quadratic: These functions arise when we enumerate all pairs of items in an n-element universe (sketched in the example after this list). They come up in insertion and selection sorts.

5. Superlinear/Loglinear: These n log n functions come up in quicksort and mergesort.

6. Linear: These functions arise from the cost of looking at each item once in an n-element array, for example to find the biggest/smallest item or to compute the average value.

7. Logarithmic: These functions arise in binary search.

8. Constant: These functions arise when adding two numbers or printing a statement. In general, there is no dependence on n.
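
To ground a few of these classes, here is a minimal sketch (my own examples, not from the book) of a linear scan, a quadratic pair enumeration, and a logarithmic binary search:

```python
def find_max(items):
    """Linear, O(n): look at each item exactly once."""
    best = items[0]
    for x in items[1:]:
        if x > best:
            best = x
    return best

def all_pairs(items):
    """Quadratic, O(n^2): enumerate every pair of items."""
    pairs = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            pairs.append((items[i], items[j]))
    return pairs

def binary_search(sorted_items, target):
    """Logarithmic, O(log n): halve the search range each step."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [3, 7, 11, 19, 23]
print(find_max(data))           # 23
print(len(all_pairs(data)))     # 10 pairs = n(n-1)/2
print(binary_search(data, 19))  # index 3
```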

This should be enough theory on time complexity for now. If I get time, I’ll put up some applied examples in my next blog posts!
