Analysis of Algorithms | Big-O Analysis

Analysis is figuring out what not to do.

Table of Contents

  • Introduction
  • Fundamentals – What is Time and Space Complexity
  • Time Complexity analysis of a recursive function
  • Aggregate Analysis
  • Complexity – Better to Worse
  • Complexity Analysis is not everything!
  • Summary

Introduction

You ask your friend what car you should buy and your friend says, “Mercedes model XYZ” or “Toyota model ABC”. How good of an answer is that? Not very. Maybe it is a good suggestion and maybe you will buy that car. But I am sure you would first look at the specifications of multiple cars, check their power, capacity, style, features, and of course the price. You would compare these attributes and then choose one as per your desires and budget. You would follow a similar approach when buying a phone or a house. The commonality in all of these scenarios is that you pay some price and you get what you want in return. The exact same approach applies to solving computer science problems as well. You write the code and the computer does what you want it to do. But what price do you pay? You pay in terms of CPU usage, memory usage, etc. And the estimation of this price is what complexity analysis is for.

For any problem, there are usually multiple ways to tackle it. Solving a coding problem in an interview is no different. There are almost always multiple solutions to the given problem, and each solution has some cost, i.e. the time and space complexity associated with it. Your job is to identify the various solutions, analyze their time & space complexities, discuss them with the interviewer, and implement the most suitable solution depending on the inputs and constraints. At the same time, there is generally no single best solution to a problem. The best solution depends on what you are trying to optimize for. Some solutions have better time complexity but use additional space, while other solutions may not use additional space but perform poorly in terms of time. This analytical thinking, the trade-off between time & space complexity, is what interviewers look for in strong candidates. How often have you heard of someone who wrote the most optimal solution in the interview but still didn’t move forward? This is what might have been lacking. In this blog post, we are going to look into time & space complexity and see how to find them. Many people struggle when finding them is not trivial, as in recursive solutions; we will look into those too. And lastly, sometimes you will have to employ an even more sophisticated technique, and we will look into that as well. Note that the way I explain this will be informal, which will help you understand it well. If you want the formal and absolutely accurate language & terms, reading a textbook would be the right approach.

Fundamentals – What is Time and Space Complexity

The time and space complexity are measured in terms of the size of the input, generally denoted as ‘n’. It’s important to differentiate between the input ‘n’ and the size of the input ‘n’. If you are given an array of size n, then the size of the input is ‘n’. If you are asked to find the nth Fibonacci number, then also the size of the input is ‘n’. But let’s say you are asked to print the input parameter ‘n’. Here, the size of the input is just 1. Or if you are asked to return the square of the input n, then also the size of the input is just 1, not ‘n’. So remember that complexity analysis is done based on the size of the input.

The time complexity is nothing but a rate at which the amount of time required to compute a result for the input of size ‘n’ grows with the increase in ‘n’. In simpler terms – with the increase in the input size, how much more time is required for the computation. Similarly, the space complexity is the rate at which the memory required to compute the result grows. We will see some examples to make it clearer, but before that, I would like to talk about constant-time operations & constant-space.

As we discussed, the time and space complexities are measured in proportion to the size of the input. Anything that does not depend on the size of the input is considered a constant time operation. Common constant time operations are simple mathematical operations like adding two numbers, assigning values to variables, comparison/boolean operations, or allocating memory for a simple object, etc. It does not matter which two numbers are getting added, so we can say it is independent of the input and hence a constant time operation. On the other hand, let’s say you want to find the sum of all elements inside an array of size n. Now there will be a total of ‘n’ additions. Although one addition is a constant time operation, we will perform it ‘n’ times, i.e. the number of operations is proportional to the input size, and hence it won’t be constant time. It goes hand in hand with space as well. If we use additional space in proportion to the input size, then it is not constant space. But adding two numbers is a constant space operation.

Hence, in very simple words, I like to think of time complexity as “how many constant time operations will be performed in proportion to the input size”. Keeping this in mind will make it a lot easier to find the time complexity of any solution. Let’s see some examples and determine the number of constant-time operations that will be performed.

Example 1
Print a number.
Single operation, so 1.

Example 2
Print an array of numbers of the size n.
Printing a number is 1 operation. For the array, that operation will be performed `n` times, so the answer is n.
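In code, Example 2 could look something like this (a sketch; the array name nums is my own):

for (int i = 0; i < nums.size(); i++) {   // n times
    cout << nums[i] << endl;              // 1 print operation each time
}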

Example 3
Find a square of a number.
One multiplication operation, so 1.

Example 4
Find the smallest number in the array of size n.
Finding the smaller of two numbers is a single operation, and it will be performed ‘n’ times, so ‘n’.
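For instance, Example 4 could be implemented like this (a minimal sketch; the function name findSmallest is my own, and it assumes the array is non-empty):

int findSmallest(vector<int> &nums) {
    int smallest = nums[0];                  // 1
    for (int i = 1; i < nums.size(); i++) {  // n times
        smallest = min(smallest, nums[i]);   // 1 comparison per element
    }
    return smallest;                         // 1
}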

Example 5

int sum = 0;                  // Single operation
for (int i=0; i<n; i++) {     // n times
    sum = sum + i;            // single operation
}
Number of operations (in proportion to n) = n

Example 6

int temp = a;      // 1
a = b;             // 1
b = temp;          // 1

Constant time

Example 7

int sum = 0;                      // 1
for (int i=0; i<n; i++) {         // n times
    for (int j=0; j<n; j++) {     // n times
        sum += 1;                 // 1
    }
}

n times n operations, so n^2.

Example 8

while (n != 1) {     // ?? times
    n = n / 2;       // 1
}

This one is tricky. In every iteration, n gets halved. So how many times can n be halved before it reaches 1? The answer, of course, is log2(n).
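If you want to convince yourself empirically, a small sketch like the following (my own illustration, not from the original examples) counts the halving steps and compares them with log2(n):

#include <cmath>
#include <iostream>
using namespace std;

int main() {
    for (int n : {8, 16, 100, 1000}) {
        int steps = 0;
        int m = n;
        while (m != 1) {   // the same halving loop as above
            m = m / 2;
            steps++;
        }
        // steps always comes out to floor(log2(n))
        cout << "n=" << n << "  steps=" << steps
             << "  floor(log2(n))=" << (int)floor(log2(n)) << endl;
    }
    return 0;
}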

Formally, the time and space complexities are denoted using big-O, Omega, and Theta notations. In interviews, Omega and Theta notations are rarely used because, as with real-world situations, we always try to be prepared for the worst case, and the worst-case time complexity is expressed with big-O notation. Roughly speaking, Omega notation denotes a lower bound (loosely, the best-case performance), while Theta notation denotes a tight bound, where the upper and lower bounds match.

As I said, we care most about the worst-case performance, and hence the big-O notation is the one most important to us. If some algorithm takes constant time to run, it is denoted as O(1). So the addition of two numbers is O(1). Similarly, if some algorithm performs the number of operations proportional to input size ‘n’, it is denoted as O(n), so adding all numbers in the array is O(n). The 7th example above is O(n^2).

As you might have observed in the above examples, we ignored the constant time operations from the calculations. If we perform, say, n^2 + 1 operations, we will still call it O(n^2). The general rule is that we only consider the most significant power, because time and space complexities are a way to compare two solutions by estimating how many resources (CPU ~ Time, Memory ~ Space) each of the solutions will need. For the same reason, we also don’t consider the constants associated with powers of `n`; e.g. if there are 2*(n^2) operations, we will still say O(n^2).

A couple more examples to summarize it all –

Example 1

int foo(vector<int> &nums) {
    int maximum = 0;                                      // 1
    for (int i=0; i<nums.size(); i++) {                   // n times
        maximum = max(maximum, nums[i]);                  // 1
    }
    int minimum = 0;                                      // 1
    for (int i=0; i<nums.size(); i++) {                   // n times
        minimum = min(minimum, nums[i]);                  // 1
    }
    int sum = 0;                                          // 1
    for (int i=0; i<nums.size(); i++) {                   // n times
        for (int j=0; j<nums.size(); j++) {               // n times
            sum += maximum + minimum + nums[i] + nums[j]; // 4
        }
    }
    return sum;                                           // 1
}
  • Number of operations in foo() method
    = 1 + n*1 + 1 + n*1 + 1 + n*n*4 + 1
    = 4n^2 + 2n + 4

    We will call it –
    O(n^2) – Ignore constants and ignore smaller powers.

Example 2

int sum(vector<int> &nums) {
    int ans = 0;                          // 1
    for (int i=0; i<nums.size(); i++) {   // n times
        ans += nums[i];                   // 1
    }
    return ans;                           // 1
}

void efficient(vector<int> &nums) {
    int addition = sum(nums);             // O(n). Note: a single line does not
                                          // mean constant time.
    for (int i=0; i<nums.size(); i++) {   // n times
        nums[i] = nums[i] + addition;     // 1
    }
}

void inefficient(vector<int> &nums) {
    for (int i=0; i<nums.size(); i++) {   // n times
        nums[i] = nums[i] + sum(nums);    // O(n)
    }
}

  1. Number of operations in sum() method
    = 1 + n*1 + 1
    = O(n)
  2. Number of operations in efficient() method
    = O(n) + n*1
    = O(n)
  3. Number of operations in inefficient() method
    = n*O(n)
    = O(n^2)

So far, we have seen that finding the time complexity is not difficult. Generally, we only have to look at loops and how many times they run, and that gives us an idea of the time complexity. We also saw that calling a separate method needs to be accounted for, and accounting for it is straightforward too. However, in the case of recursion, it is a bit tricky. Let’s take a look.

Time Complexity analysis of a recursive function

Here is what makes finding the time complexity of a recursive method tricky. In the last example, we saw that both the efficient() and inefficient() methods were calling the sum() method. We already knew the time complexity of the sum() method, so it wasn’t hard to find the time complexity of both methods.

However, what would you do in case of –

void fun(int x) {
    // ....
    fun(x-1);
    // ....
}

You find the time complexity of fun() by finding the time complexity of each of its operations. But one of those operations is a call to fun() itself, so to find the time complexity of fun(), you need the time complexity of fun(). That’s a bad position to be in, and this is when the recurrence relation comes into the picture.

Recurrence relations are easy to write. You denote the time complexity of the method for an input of size n as T(n), and hence the time complexity when the input size is n-1 is T(n-1). So the problem of not knowing the time complexity of the method is solved, and you already know the time complexity of the other statements. From this information, you can write the recurrence relation. Then we will see how to solve recurrences to find the time complexity in big-O terms. Let’s see some examples –

Example 1

int factorial(int n) {          // T(n)
    if (n <= 1) {               // 1
        return 1;               // 1
    }
    int temp = factorial(n-1);  // T(n-1)
    return n*temp;              // 1
}
T(n) = T(n-1) + 3
i.e.
T(n) = T(n-1) + O(1)

Example 2

int Fibonacci(int n) {              // T(n)
    if (n <= 1) {                   // 1
        return n;                   // 1
    }
    int prev = Fibonacci(n-1);      // T(n-1)
    int prevprev = Fibonacci(n-2);  // T(n-2)
    return prev + prevprev;         // 1
}
T(n) = T(n-1) + T(n-2) + O(1)
T(n) = 2*T(n-1) + O(1) ... because T(n-1) >= T(n-2), replacing T(n-2) with T(n-1) can only overestimate


(Note: Don’t ignore the constants associated with T(). This is not a big-O notation. This is a recurrence relation. We only ignore constants in big-O notations.)

Example 3

int binary_search(vector<int> &a, int low, int high, int key) { // T(n)
    if (low > high) {                               // 1
        return -1;                                  // 1
    }
    int mid = (low+high)/2;                         // 1
    if (a[mid] == key) {                            // 1
        return mid;                                 // 1
    }
    if (a[mid] < key) {                             // 1
        return binary_search(a, mid+1, high, key);  // T(n/2)
    } else {
        return binary_search(a, low, mid-1, key);   // T(n/2)
    }
}

Here, n is the size of the portion of the array being searched; for the initial call, that is the whole array.
(Note: Only one of the two recursive calls actually happens, because they are in an if … else … statement.)

So,

T(n) = T(n/2) + O(1)


Example 4

... merge( .... ) {                       // O(n)
    .....
}

vector<int> mergeSort(vector<int> nums) { // T(n)
    ....                                  // 1
    mergeSort(leftHalf);                  // T(n/2)
    mergeSort(rightHalf);                 // T(n/2)
    sorted = merge(leftHalf, rightHalf);  // O(n)
    return sorted;                        // O(n)
}


Without going into the detailed implementation of merge sort, I want you to see the variety of recurrence relations.

Here,

T(n) = 2*T(n/2) + O(n)

Now you understand how to write recurrence relations, so let’s move on to solving them.

One thing we assume here is that, for an input of size 1, the answer can be found in constant time. Hence, T(1) = O(1). Again, remember – it is the input of size ‘n’ and not the input ‘n’. This makes sense because there is little you need to do when the input is as small as 1 unit. (Sorry, this is the best I can explain this without going into formal terms.)

Let’s solve each of the above recurrence relations one by one. Just to make sure that the recurrence relations give the right answers, let’s recall the known time complexities: factorial is an O(n) operation, the naive recursive Fibonacci is exponential, bounded by O(2^n), binary search is O(log(n)), and merge sort is O(n*log(n)). Let’s see if the recurrence relations give the same answers or not.

A. T(n) = T(n-1) + O(1)

T(n)
= T(n-1) + O(1)
= T(n-2) + O(1) + O(1)
= T(n-3) + 3*O(1)
= ..
= ..
= T(n-k) + k*O(1)

We know that T(1) = O(1). 
So substitute, k = n-1

so,
= T(n-k) + k*O(1)
= T(n-(n-1)) + (n-1)*O(1)
= T(1) + n*O(1) - O(1)
= O(1) + n*O(1) - O(1)
= n * O(1)
= O(n)

B. T(n) = 2*T(n-1) + O(1)

T(n)
= 2*T(n-1) + O(1)
= 2*(2*T(n-2) + O(1)) + O(1)
= 2^2 * T(n-2) + (2+1) * O(1)
= 2^2 * (2*T(n-3) + O(1)) + (2+1) * O(1)
= 2^3 * T(n-3) + (4+2+1) * O(1)
= ..
= 2^k * T(n-k) + (2^(k-1) + ... + 2 + 1) * O(1)
= 2^k * T(n-k) + (2^k - 1) * O(1)
     .... (1 + 2 + 4 + ... + 2^(k-1) = 2^k - 1)

We know T(1) = O(1), so put k = n-1

= 2^(n-1) * T(1) + (2^(n-1) - 1) * O(1)
= 2^(n-1) * O(1) + (2^(n-1) - 1) * O(1)
= (2^(n-1) + 2^(n-1) - 1) * O(1)
= (2*2^(n-1) - 1) * O(1)
= 2^n * O(1)   -  1 * O(1)
= O(2^n) - O(1) 
= O(2^n)    .... (Just keep the most significant term).

C. T(n) = T(n/2) + O(1)

T(n)
= T(n/2) + O(1)
= T(n/4) + O(1) + O(1)
= T(n/8) + 3*O(1)
= ..
= T(n/ 2^k)  +  k*O(1)

We know T(1) = O(1), so put 2^k = n i.e. k = log2(n)

= T(n/(2^(log2(n)))) + log2(n) * O(1)
= T(n / n) + log2(n) * O(1)
= T(1) + log2(n) * O(1)
= O(1) + log2(n) * O(1)
= O(1) + O(log2(n))
= O(log2(n))  ... Keep only the most significant term.
= O(log(n))

In this way, we also saw how to solve recurrence relations. I wouldn’t be a good teacher unless I gave some exercise :p So I will leave the 4th one as an exercise. I will look forward to your derivation in the comments.

So for recursive solutions, it’s a two-step process to find the time complexity.

  1. Find the recurrence relation.
  2. Solve the recurrence relation.

Moving forward.

Aggregate Analysis

Sometimes it’s not enough to solve the recurrence relation even if the code is recursive, and sometimes it’s not enough to look at the loops and their terminating conditions. This situation arises when loops are terminated early based on some condition, or when recursive calls are limited by some condition. This is when aggregate analysis comes into the picture.

As the name suggests, we need to look at the holistic picture, the overall code, to understand how much the total execution time would be. In all the previous examples, we found the time complexity of each statement and added them together to find the time complexity of the overall function. But let’s take the example of depth-first traversal.

Here is the pseudo-code –
DFS:

  1. Input : graph, start_node, visited_nodes
  2. START
  3. If start_node is already visited, then STOP
  4. Add start_node into visited_nodes
  5. for each neighbor of the start_node
    1. DFS(graph, neighbor_node, visited_nodes)
  6. STOP

It is a recursive function, but you can’t simply tell how many times DFS will get called, because we check visited_nodes every time: even if a recursive call is made, it might return immediately if the node is already visited. Secondly, the number of neighbors of each node is unknown, so we don’t know how many times the loop will run. Let’s look at it in a different way.

If there are V nodes and E edges in the graph, DFS is going to get called at least once with each unique node as input. It will then perform some constant time work, like checking whether the node is already visited. Secondly, we don’t know how many neighbors each node has, but because there are E edges, the sum of the number of neighbors over all nodes is proportional to E. Hence the loop is going to get executed on the order of E times in total. So if you think carefully, the total number of operations is only going to be in proportion to (V+E), and this makes the time complexity of DFS O(V+E), where V is the number of vertices in the graph and E is the number of edges. I know it’s a bit tricky and may not be immediately intuitive, but give it some time and you will realize it. Also, if you remember my earlier way of thinking about time complexity as a number of constant-time operations, you will have noticed by now that it really helps everywhere, including in aggregate analysis.

Generally, aggregate analysis is necessary for graph algorithms because the size of the input is not captured by a single number ‘n’ there.
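To make the counting concrete, here is a minimal C++ sketch of the above pseudo-code (the adjacency-list representation and the names graph and visited are my own choices for illustration):

#include <vector>
using namespace std;

// graph[u] holds the neighbors of node u (adjacency list).
void dfs(const vector<vector<int>>& graph, int node, vector<bool>& visited) {
    if (visited[node]) return;         // constant work per call
    visited[node] = true;              // executed at most V times in total
    for (int neighbor : graph[node]) { // summed over all nodes, runs about E times
        dfs(graph, neighbor, visited);
    }
}

To run it, you only need vector<bool> visited(graph.size(), false) and a starting node, e.g. dfs(graph, 0, visited). The visited check guarantees the constant-time body executes at most once per node, and the neighbor loop, summed over all calls, touches each edge a constant number of times, which is exactly the O(V+E) bound from the aggregate argument.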

Let’s see one more example. This is not from real code, but this kind of pattern is seen multiple times.

int start = 0;                                            // 1
int end = 0;                                              // 1
while (end <= n-1) {                                      // n times
    if ( ... ) {                                          // 1
        while (start <= end && nums[start] < nums[end]) { // ??
            ...
            start++;                                      // 1
        }
    }
    end++;                                                // 1
}

We can see that the ‘end’ is initialized to 0 and the outer while-loop will run n times because the ‘end’ is incremented at the end of each iteration.

Also, all other operations are O(1).

However, what we don’t know is how many times the inner while-loop runs.

If we simply looked at the two nested loops and said it is O(n^2), which is generally how such loops behave, it wouldn’t be accurate here.

Let’s look a bit deeper. The ‘start’ is only initialized once and it always stays at or behind the ‘end’. Over the course of the entire function, the ‘end’ travels from 0 to n-1, and the ‘start’ also travels from 0 to n-1 at most once overall. So if you think about it, the inner while loop is only going to run ‘n’ times in total, i.e. the inner while loop contributes just `n` operations over the lifetime of the function.

We already know the outer while loop is O(n); on top of that, the inner loop runs only n times overall, i.e. another O(n).

Hence the overall time complexity is O(n).
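As a concrete (hypothetical) instance of this pattern, here is a sliding-window function that finds the length of the longest subarray whose sum is at most k, assuming all elements are non-negative; the function name and the exact condition are my own, but the start/end movement is the same as above:

#include <algorithm>
#include <vector>
using namespace std;

int longestWindow(const vector<int>& nums, int k) {
    int start = 0, best = 0;
    long long windowSum = 0;
    for (int end = 0; end < (int)nums.size(); end++) {  // 'end' moves n times
        windowSum += nums[end];
        while (start <= end && windowSum > k) {         // 'start' moves at most n times in total
            windowSum -= nums[start];
            start++;
        }
        best = max(best, end - start + 1);              // window [start, end] now satisfies the constraint (or is empty)
    }
    return best;
}

Even though the loops are nested, 'start' never moves backwards, so the inner loop’s total work over the whole run is bounded by n, giving O(n) overall, exactly the aggregate argument made above.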

Complexity – Better to Worse

By now, you must have realized, or you might have already known: the better the time complexity, the fewer the instructions the computer needs to run (in general) in proportion to the input size. The fewer the instructions, the quicker the result and the less power the CPU needs to use. Hence, in general, it is better to try to reduce the complexity. Here is a list of common complexities of algorithms, in order from better to worse (a small numeric sketch after the list shows how quickly they grow apart) –

  1. O(1) – Constant time is best. Irrespective of the size of the input, your computer takes the same amount of time to produce the result.
  2. O(logn)
  3. O(sqrt(n))
  4. O(n)
  5. O(nlogn) – Till this point, algorithms are generally considered good. Beyond this point, it gets worse and worse.
  6. O(n^2)
  7. O(n^3)
  8. O(n^k) – Polynomial-time. (k is a constant)
  9. O(2^n) – Exponential time.
  10. O(n!) – Really bad.
  11. O(n^n).
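To get a feel for how quickly these growth rates pull apart, here is a tiny sketch (my own illustration, not from the original post) that prints a few of them side by side:

#include <cmath>
#include <cstdio>

int main() {
    for (double n : {10.0, 100.0, 1000.0}) {
        // Columns: n, log2(n), n*log2(n), n^2, 2^n (doubles are used because 2^n overflows integers quickly)
        std::printf("n=%6.0f  logn=%5.1f  nlogn=%9.0f  n^2=%10.0f  2^n=%.3g\n",
                    n, std::log2(n), n * std::log2(n), n * n, std::pow(2.0, n));
    }
    return 0;
}

For n = 1000, n^2 is already a million, while 2^n is astronomically large, which is why algorithms beyond O(nlogn) quickly become impractical as the input grows.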

Complexity Analysis is not everything!

Although complexity analysis is a very important part of considering the feasibility of a solution in the real world, it has its own limitations, which you might have thought of already. The most important one is that it ignores constants and lower-order terms, i.e. O(n^3 + 5*n^2 + 6*n + 20) is simply written as O(n^3). Sometimes the constants that we ignore along the way are very significant for the performance of the solution, and that information is lost. Two different algorithms with the same complexity can vary a lot in performance because of this. Similarly, an O(nlogn) algorithm, which is generally worse than an O(n) one, can prove to be better in practice because of hidden constants associated with the O(n) solution. I have personally observed this when researching a Maximum Bandwidth Path problem along with Prof. Jianer Chen. Hence, complexity analysis helps a lot in general, but it can be misleading sometimes.

Secondly, for any software, its maintenance cost is a major part of its price, and naturally we want to minimize that cost. Hence it makes sense to keep the code clean and readable, which can sometimes come at the cost of efficiency. Especially in the early stages, it can turn out better to keep the code simple instead of using a complex, hard-to-understand algorithm that reduces the complexity. So code maintainability and readability are another thing to consider when evaluating two algorithms.

Complexity analysis might also not be very helpful when working on low-latency systems where an improvement of some nanoseconds or microseconds is key. The time taken by two O(1) instructions or algorithms can differ quite a lot when the resolution of time we deal with is micro- and nanoseconds.

Summary

Time and space complexity analysis is critical to success in a coding interview. You must understand it well and know its importance. In real-world software development, there is more to it than just complexity analysis. I hope you learned something new in this post, or that it was a good refresher for you. Although I tried to add details wherever necessary, let me know what is still unclear. Comment below and share your thoughts and what you learned from this post. Have you ever come across an instance where a theoretically better time complexity is practically worse than a theoretically worse solution? I would be keen to know.

Don’t forget to like, share the post, and subscribe to the blog.

Happy interviewing!

