
What Does Big O(N^2) Complexity Mean

When analyzing algorithms, it is critical to consider how they behave as the size of the input increases. Big O notation is the standard tool computer scientists use to categorize algorithms by the order of growth of their execution time. O(N^2) algorithms form a significant and common Big O class: their execution time climbs quadratically as the input size increases. Algorithms with this time complexity are deemed inefficient for large inputs because doubling the input size results in roughly a four-fold increase in runtime.

This article will explore what Big O(N^2) means, analyze some examples of quadratic algorithms, and discuss why this complexity can be problematic for large data sets. Understanding algorithmic complexity classes like O(N^2) allows us to characterize the scalability and efficiency of different algorithms for various use cases.

Different Big O Notations

O(1) - Constant Time:

An O(1) algorithm takes the same time to complete regardless of the input size. An excellent example is retrieving an array element by its index. Looking up a key in a hash table or dictionary is also typically O(1). These operations are very fast, even for large inputs.

O(log N) - Logarithmic Time:

Algorithms with log time complexity are very efficient. For a sorted array, binary search is a classic example of O(log N) because the search space is halved each iteration. Finding an item in a balanced search tree also takes O(log N) time. Logarithmic runtime grows slowly with N.

O(N) - Linear Time:

Linear complexity algorithms iterate through the input at least once. Searching unsorted data or visiting each element of an array takes O(N) time. As data sets get larger, linear runtimes may become too slow, but linear is still much better than quadratic or exponential runtimes.

O(N log N) - Log-Linear Time:

This complexity describes efficient comparison-based sorting algorithms like merge sort and heap sort. Merge sort, for instance, repeatedly splits the data in half, producing O(log N) levels of division, and does O(N) merging work at each level. Well-designed algorithms aimed at efficiency often have log-linear runtime.

O(N^2) - Quadratic Time:

Quadratic algorithms involve nested iterations over data. Simple sorting methods like bubble sort and insertion sort are O(N^2). Matrix operations that touch every entry of an N x N matrix, such as addition or transposition, are also O(N^2). Quadratic growth becomes infeasible for large inputs, so more efficient algorithms are needed for big data.

O(2^N) - Exponential Time:

Exponential runtimes are the worst of the classes listed here. Adding just one element to the input roughly doubles the processing time. The naive recursive calculation of Fibonacci numbers is a classic exponential-time example. Exponential growth makes these algorithms impractical even for modestly large inputs.

What is Big O(N^2)?

  • An O(N^2) algorithm's runtime grows proportionally to the square of the input size N.
  • Doubling the input size quadruples the runtime. If it takes 1 second to run on 10 elements, it will take about 4 seconds on 20 elements, 16 seconds on 40 elements, etc.
  • O(N^2) algorithms involve nested iterations through data. For example, checking every possible pair of elements or operating on a 2D matrix.
  • Simple sorting algorithms like bubble sort, insertion sort, and selection sort are typically O(N^2). Comparing and swapping adjacent elements leads to nested loops.
  • Brute force search algorithms are often O(N^2). Checking every subarray or substring for a condition requires nested loops.
  • Matrix operations that visit every entry of an N x N matrix, such as addition or transposition, are O(N^2). (Multiplying two N x N matrices naively is O(N^3), since each of the N^2 output entries requires N multiplications.)
  • Graph algorithms that examine every pair of vertices, such as scanning a full adjacency matrix, are O(N^2). (Floyd-Warshall, which finds shortest paths between all vertex pairs, additionally considers every intermediate vertex and is O(N^3).)
  • O(N^2) is fine for small inputs but becomes very slow for large data sets. Algorithms with quadratic complexity cannot scale well.
  • For large inputs, more efficient algorithms should be preferred: merge sort at O(N log N) instead of quadratic sorts, or Strassen's O(N^2.807) matrix multiplication instead of the naive O(N^3) method.
  • However, O(N^2) may be reasonable for small local data sets where inputs don't grow indefinitely.

Different Properties of Quadratic Time Complexity

  • Runtime grows proportional to the square of the input size N. Doubling N quadruples the runtime.
  • Involves nested iterations or operations over the data set. For example, nested for loops processing a 2D array.
  • Algorithms with nested loops iterating over the data will likely be O(N^2).
  • Sorting algorithms like bubble sort, insertion sort, and selection sort are common examples with a runtime of O(N^2).
  • The brute force comparison of each element with every other element is O(N^2). For example, checking all possible pairs in an array.
  • Basic operations on N x N matrices that touch each entry once, like addition, transposition, or scaling, are typically O(N^2).
  • Graph algorithms that explore the relationship between each vertex pair, such as iterating over a full adjacency matrix, are O(N^2).
  • Runtime can be represented more formally as cN^2 + bN + a, where c, b and a are constants. But we drop the constants for simplicity.
  • Quadratic algorithms become extremely slow for large inputs. Doubling N makes the algorithm 4 times slower.
  • O(N^2) is often too inefficient for real-world big data applications. Therefore, faster algorithms are preferred.
  • However, quick O(N^2) implementations may be useful for small data sets where N is limited.
  • Big O analysis describes the overall growth of an algorithm's runtime, but constants and real-world factors also matter when choosing algorithms.

Popular Algorithms with Quadratic Time Complexity

Bubble sort: When bubble sorting a list, adjacent elements are compared and swapped if they are out of order. It requires nested loops resulting in O(N^2).

Insertion sort: It works similarly to bubble sort but inserts elements in a sorted position. The inner loop shifts elements over, also O(N^2).

Selection sort: It repeatedly finds the minimum element of the unsorted portion and swaps it to the front. The inner loop searches the remaining unsorted area, so it is O(N^2).

Matrix addition/transpose: Every entry of an N x N matrix is visited once with nested loops over rows and columns, giving O(N^2). (Naive multiplication of two N x N matrices is O(N^3), because each of the N^2 output entries requires N multiplications.)

All-pairs vertex checks: Examining every pair of vertices in a graph, for example by scanning an adjacency matrix, is O(N^2). (The Floyd-Warshall algorithm, which finds shortest paths between all vertex pairs, is O(N^3).)

Brute force string search: Sliding a pattern of length M across a text of length N and comparing character by character at each position is O(N*M), which is quadratic when M grows in proportion to N.

Checking all pairs: Finding pairs in an array that meet some condition requires nested loops checking all combinations - O(N^2).

Image processing filters: Operations like blurring process a neighbourhood around every pixel, with nested loops over the rows and columns of the image, giving O(N^2) in the image dimension N.


Let us take a C program with quadratic time complexity.


Enter a positive integer: 5
Sum of first 5 natural numbers is 15


  • This program computes the sum of the first N natural numbers using nested for loops.
  • The outer loop iterates N times to handle each number from 1 to N.
  • To add up each number, the inner loop iterates from 1 to the current number, i.
  • So, for input 5, the inner loop executes 1 + 2 + 3 + 4 + 5 = 15 times.
  • In general, the inner loop executes 1 + 2 + 3 + ... + N = N(N+1)/2 times, which is O(N^2) work.
  • The runtime grows as the square of N. So, this demonstrates an algorithm with quadratic time complexity.


In conclusion, Big O(N^2) is an important algorithmic time complexity class to understand. It refers to algorithms whose running time grows quadratically as the input size increases. The nested iterations or operations over data that produce the O(N^2) growth rate can be seen in many common algorithms, including basic sorting methods like bubble, insertion, and selection sort, as well as brute force pair-checking routines. While simple O(N^2) algorithms may be useful for small problem sizes, their performance degrades rapidly for larger inputs, quadrupling in running time for each doubling of input size. For large data sets and real-time applications, algorithms with better asymptotic complexity, like O(N log N), should be utilized whenever possible. Knowing the common algorithmic complexities and selecting an appropriate algorithm for the problem size and performance needs is a key skill in designing efficient programs and applications.
