There was a time when developers needed to know exactly how many operations their code executed. Computers were much slower back then, and it was important to optimize software so that no precious memory or processing power was wasted. Nowadays we possess extremely powerful computers capable of processing billions of operations per second. As a result, many developers don’t care too much about the number of operations – up to a certain point. When dealing with truly massive amounts of data, however, optimization becomes critical, especially if you are trying to squeeze out a couple more milliseconds of performance.

Thus, the Big O notation was born! This mathematical notation, O( function ), was created to help computer scientists describe the number of operations executed when a particular function runs on an input of variable size. Simply put: how does the work a function performs change as it is fed more data? What is important to understand here is that Big O does not focus on the actual amount of data being operated upon. That’s not the important part. When analyzing time complexity, it is vital to understand how the number of operations grows as the amount of data grows. Big O does not give us the exact number of operations, but it provides developers with a general idea of how performance scales.
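To make the idea concrete, here is a minimal sketch of three toy functions – the names are illustrative, not from the original text – each falling into a different complexity class:

```python
def get_first(items):
    # O(1): one operation, no matter how many items there are
    return items[0]

def contains(items, target):
    # O(n): in the worst case, every item is checked once
    for item in items:
        if item == target:
            return True
    return False

def count_pairs(items):
    # O(n^2): the nested loops touch every (a, b) combination of items
    pairs = 0
    for a in items:
        for b in items:
            pairs += 1
    return pairs
```

Note that `get_first` costs the same whether the list has ten elements or ten million, while `count_pairs` does a million iterations for just a thousand elements.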

It may not sound so important for smaller amounts of data – and that’s completely true! If you only had a couple thousand pieces of data lying around, optimizations would bring very little difference to the table. Our modern processors are more than capable of handling such tasks, even when badly optimized. However, what if you were to run a badly optimized function on a dataset in the tens of millions, or hundreds of millions? At a certain point, the different growth rates will begin to show their true colors. A linear time operation, O(n), will grow steadily with the input, while a time complexity of O(n^2) will quickly explode. Conversely, a function with a complexity of O(log(n)) will scale better and better as the amount of data grows, since its cost increases only logarithmically.
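A classic illustration of this gap is searching a sorted list: a linear scan is O(n), while binary search is O(log(n)). The sketch below (step counters added for illustration) shows how differently the two grow on the same 2000-element input:

```python
def linear_search(sorted_items, target):
    # O(n): checks items one by one until the target is found
    steps = 0
    for item in sorted_items:
        steps += 1
        if item == target:
            break
    return steps

def binary_search(sorted_items, target):
    # O(log n): halves the remaining search range on every step
    steps = 0
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            break
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(2000))
print(linear_search(data, 1999))  # 2000 steps
print(binary_search(data, 1999))  # 11 steps
```

Doubling the dataset doubles the linear search’s worst case, but adds only a single step to the binary search.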

Let’s imagine that we had a dataset containing 2000 elements.

An O(1) time complexity will cost a total of 1 operation.

An O(n) time complexity will cost a total of 2000 operations.

An O(log(n)) time complexity will cost roughly 11 operations.

An O(n * log(n)) time complexity will cost roughly 22,000 operations.

An O(n^2) time complexity will cost a total of 4,000,000 operations.

See the differences in cost? 2000 elements is a negligible amount of data by modern standards, but the gap between the time complexities is already apparent. What if we were to use a significantly larger dataset – say, 100 million elements? Now we’re starting to get somewhere. Now optimization becomes extremely important.
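The figures above can be reproduced with a short helper (a hypothetical function written for this illustration) that estimates the operation count for each complexity class at a given input size:

```python
import math

def operation_counts(n):
    # Approximate worst-case operation counts for common complexity classes
    return {
        "O(1)": 1,
        "O(log n)": round(math.log2(n)),
        "O(n)": n,
        "O(n log n)": round(n * math.log2(n)),
        "O(n^2)": n * n,
    }

print(operation_counts(2_000))
print(operation_counts(100_000_000))
```

At 100 million elements, O(log n) still needs only about 27 operations, while O(n^2) balloons to 10^16 – the gap that was merely visible at 2000 elements becomes decisive.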

So what does this mean when we deal with hundreds of millions of pieces of data? In certain industries, such as finance, where high-frequency trades are executed every millisecond, optimizing time complexity can mean the difference between winning and losing millions of dollars. In other industries, such as e-commerce, providing a good user experience means keeping load times low between pages as enormous amounts of data are processed. Long load times result in a dramatic decrease in users.

This is why it’s still important to understand this concept. When we analyze the Big O of a given function, we are looking for its worst-case scenario. There are important questions to ask: where are operations being wasted by the millions and billions, and what can we do to reduce that number and make our code faster and more competitive in certain ecosystems?