Big O Notation Explained in 5 Minutes

By 10xdev team, August 03, 2025

In this article, we're going to explore Big O notation. What is it? Big O notation provides a simplified analysis of an algorithm's efficiency. This publication covers numerous algorithms, and we need a consistent method to compare them and estimate their execution time.

Big O describes an algorithm's complexity relative to its input size, denoted as 'n'. It allows us to abstract the performance of our code from the specific hardware it runs on. Instead of focusing on machine specifications, we analyze the fundamental computational steps within the code. Big O can be used to analyze both time and space complexity.
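
As a quick sketch of the space side (the function name here is hypothetical): a function that builds a result array with one entry per input element allocates memory proportional to 'n', so it uses O(n) space in addition to its O(n) running time.

function doubleAll(arr) {
  const result = [];        // the output array grows with the input
  for (const value of arr) {
    result.push(value * 2); // one new entry per input element
  }
  return result;            // O(n) extra space, O(n) time
}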

Worst, Best, and Average Cases

There are several ways to look at an algorithm's efficiency. We can examine its performance in worst-case, best-case, and average-case scenarios. When we talk about Big O notation, we typically focus on the worst case. This isn't to say the other perspectives are unimportant; however, the worst case provides a guaranteed upper bound on performance.
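
To make these cases concrete, here is a simple linear search sketch (the function name is illustrative). If the target happens to be the first element, the search finishes in one step (best case); if the target is missing, all 'n' elements are checked (worst case); and on average, roughly half the elements are examined, which is still on the order of 'n'.

function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // best case: found at index 0 -> O(1)
  }
  return -1; // worst case: not found -> n comparisons -> O(n)
}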

Key Rules of Big O

Let's talk about a few foundational rules.

1. Ignore Constants: Big O notation ignores constants. For example, if you have a function with a running time of 5n, we say that it runs on the order of O(n). This is because as 'n' gets sufficiently large, the constant factor of 5 no longer has a significant impact on the growth rate (see the first sketch after these rules).

2. Drop Lower-Order Terms: As 'n' grows, certain terms in a complexity function will dominate others. We ignore or drop lower-order terms when they are overshadowed by higher-order ones. For instance, in a function with complexity O(n² + n), the n² term will eventually dominate the n term, so we simplify it to O(n²) (see the second sketch after these rules).
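
A minimal sketch of the first rule (the function name is just for illustration): the loop below performs two constant-time operations per iteration, so it takes roughly 2n steps, but we still call it O(n) because the constant factor is dropped.

function sumOfDoubles(arr) {
  let sum = 0;
  for (const value of arr) {
    const doubled = value * 2; // O(1)
    sum += doubled;            // O(1)
  }
  return sum; // roughly 2n constant-time steps -> O(n)
}

And a sketch of the second rule: the first loop contributes n steps and the nested loops contribute n * n steps, for a total of n² + n. As 'n' grows, the n² term dominates, so the function is O(n²).

function printSinglesAndPairs(arr) {
  const n = arr.length;
  for (let i = 0; i < n; i++) {
    console.log(arr[i]);           // runs n times -> O(n)
  }
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(arr[i], arr[j]); // runs n * n times -> O(n²)
    }
  }
}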

Complexity Hierarchy

It's useful to understand the hierarchy of common complexities. The following list is ordered from the slowest-growing (most efficient) to the fastest-growing (least efficient):

  • O(1) - Constant
  • O(log n) - Logarithmic (e.g., binary search; see the example below)
  • O(n) - Linear
  • O(n log n) - Log-Linear
  • O(n²) - Quadratic
  • O(2ⁿ) - Exponential
  • O(n!) - Factorial

A handy visual guide and a more detailed chart can be found at BigOCheatSheet.com, which is an excellent resource for studying the Big O of various important algorithms.
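
To make the logarithmic entry concrete, here is a minimal binary search sketch, assuming the input array is sorted. Each comparison halves the remaining search range, so at most about log₂(n) iterations run, which gives O(log n).

function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // discard the left half
    else hi = mid - 1;                      // discard the right half
  }
  return -1; // not found after ~log2(n) halvings
}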

Practical Examples

Let's run through several examples so you can see what we mean by "basic computer steps."

Constant Time: O(1)

Imagine we had the following line of code:

let x = 15 + 25;

This basic computer statement computes a value for x. Notice that its execution time does not depend on the input size 'n' in any way. We say this is O(1), or constant time.

What happens when we have a sequence of constant-time statements?

let x = 15 + 25; // O(1)
let y = 30;      // O(1)
let z = x + y;   // O(1)

Notice that all of these are constant time. To compute the Big O for this block of code, we simply add their times, which gives us 3 * O(1). But remember, we drop constants, so the overall complexity is still O(1).

Linear Time: O(n)

Suppose we have the following for loop that prints the numbers from 0 to n-1:

for (let i = 0; i < n; i++) {
  console.log(i);
}

We know the console.log statement inside the loop is O(1). Since this statement is executed 'n' times, the block of code is n * O(1), which is, in other words, O(n).

Here's another sequence:

let y = 30; // O(1)

for (let i = 0; i < n; i++) { // O(n)
  console.log(i);
}

The first line is O(1) and the for loop is O(n). The total time is the summation of these two, O(1) + O(n). But remember, we drop lower-order terms. When 'n' gets large, the time it takes to compute y is meaningless, as the for loop dominates the runtime. The final complexity is O(n).

Quadratic Time: O(n²)

I think you can see that with a nested loop, the print statement will be executed n * n times:

for (let i = 0; i < n; i++) {
  for (let j = 0; j < n; j++) {
    console.log(i, j);
  }
}

This gives us a Big O of O(n²).

Combining Complexities

Let's do two more examples covering everything we've talked about. Say we have the following block of code. What is its total runtime?

// Block 1: O(1)
let x = 10;

// Block 2: O(n)
for (let i = 0; i < n; i++) {
  // some constant time operation
}

// Block 3: O(n²)
for (let i = 0; i < n; i++) {
  for (let j = 0; j < n; j++) {
    // some constant time operation
  }
}

Well, we know the runtime of each block, so the total runtime is their sum: O(1) + O(n) + O(n²). After dropping lower-order terms, only the dominant term survives, which is effectively taking the max of the three. The nested for loop dominates here, so we get O(n²).

How about this if-else statement? Assume the sequence of statements in each branch has already been analyzed and reduced to the Big O shown.

if (condition) {
  // Sequence of statements with O(n) complexity
} else {
  // Sequence of statements with O(n²) complexity
}

As discussed earlier, when we analyze Big O we usually look at the worst-case scenario. So for this situation, we choose the larger of the two runtimes, which happens to be O(n²).

Real-World Considerations

I hope this gives you a solid understanding of Big O notation. Let's wrap up by talking about the real world.

When you're coding your algorithm, realize that constants absolutely do matter in practice. Many situations involve small input sizes, where a constant factor of two or three can have a large impact on performance.
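
As an illustration of this point, insertion sort is O(n²) in the worst case, but its per-step overhead is so low that it often beats asymptotically faster algorithms on small arrays; this is why many real-world sorting implementations fall back to it below a small size cutoff. A minimal sketch:

function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    const key = arr[i];
    let j = i - 1;
    while (j >= 0 && arr[j] > key) {
      arr[j + 1] = arr[j]; // shift larger elements one slot to the right
      j--;
    }
    arr[j + 1] = key;
  }
  return arr; // O(n²) worst case, but very fast for small n
}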

Lastly, for the same reason, be cognizant of best-case and average-case scenarios; depending on your application, these may be more relevant to your algorithm's real-world performance.

