Big O Notation in Data Structures

Asymptotic analysis is the study of how an algorithm's performance changes as the size of its input changes. We use big-O notation to asymptotically bound the growth of a running time to within constant factors. The amount of time, storage, and other resources an algorithm requires determines its efficiency, and asymptotic notations are used to express that efficiency.

An algorithm's performance may vary for different types of inputs, and it changes as the input size grows. Asymptotic notations describe how long an algorithm takes to execute as the input tends towards a particular or limiting value. For example, when the input array is already sorted, the time taken by a sorting method may be linear, which is the best case. When the input array is in reverse order, the same method takes the longest (quadratic) time to sort the items, which is the worst case. When the input array is neither sorted nor in reverse order, it takes average time. Asymptotic notations are used to represent these durations.

Big O notation classifies functions by their growth rates: several functions with the same growth rate can be written using the same O notation. The letter O is used because a function's growth rate is also known as the order of the function. A big-O description of a function usually provides only an upper bound on its growth rate. It is convenient to have a notation that means "the running time grows at most this fast, but it could grow more slowly," and big-O notation serves exactly this purpose.
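The best-case/worst-case behaviour described above can be illustrated with insertion sort (an assumption on our part, since the text does not name a specific sorting method). The sketch below counts comparisons: a sorted input needs roughly n - 1 of them (linear), while a reversed input needs roughly n(n - 1)/2 (quadratic).

```python
def insertion_sort(items):
    """Sort items in place and return the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0:
            comparisons += 1              # one comparison per inner step
            if items[j] > key:
                items[j + 1] = items[j]   # shift the larger element right
                j -= 1
            else:
                break
        items[j + 1] = key
    return comparisons

# Already sorted input: about n - 1 comparisons (best case, linear).
# Reverse-ordered input: about n(n - 1) / 2 comparisons (worst case, quadratic).
```

For n = 5, a sorted input costs 4 comparisons while a reversed one costs 10, matching the linear-versus-quadratic gap the text describes.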
Examples

Now let us have a deeper look at the Big O notation of various examples:

O(1): A function runs in O(1) time (or "constant time") when its running time does not depend on its input. The input array could have 1 item or 1,000 items, but the function still requires just one step.

O(n): A function runs in O(n) time (or "linear time") when its running time grows in proportion to n, the number of items in the array. If the array has 10 items, we have to print 10 times; if it has 1,000 items, we have to print 1,000 times.

O(n^2): Here we nest two loops. If the array has n items, the outer loop runs n times, and the inner loop runs n times for each iteration of the outer loop, giving n^2 total prints. If the array has 10 items, we print 100 times; if it has 1,000 items, we print 1,000,000 times. Such a function runs in O(n^2) time (or "quadratic time").

O(2^n): An example of an O(2^n) function is the naive recursive calculation of Fibonacci numbers. O(2^n) denotes an algorithm whose work doubles with each addition to the input. The growth curve of an O(2^n) function is exponential: it starts off very shallow, then rises steeply.

In this article, we saw what Big O notation in data structures is and how to use it to reason about the time complexity of the code we write every day.
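The four growth rates above can be sketched as small Python functions; the function names are our own illustrative choices, not part of the original text.

```python
def get_first(items):
    # O(1): one step, no matter how many items the array holds.
    return items[0]

def print_all(items):
    # O(n): one print per item, so work grows linearly with len(items).
    for item in items:
        print(item)

def print_pairs(items):
    # O(n^2): the inner loop runs n times for each of the n outer iterations.
    for a in items:
        for b in items:
            print(a, b)

def fib(n):
    # O(2^n): each call branches into two more, so the call tree
    # roughly doubles with every increase in n.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

For instance, `print_pairs` on a 10-item array prints 100 lines, while `fib(30)` already makes over a million recursive calls, showing how quickly exponential growth overtakes the others.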
