What is time complexity?

Time complexity describes how the number of operations an algorithm performs grows with the size of its input. A common misconception is that time complexity measures the actual time required to run an algorithm; it doesn't, because programming languages, operating systems, and processing power differ from machine to machine. There are several common classes of time complexity.

Linear Time Complexity: O(n)

Linear time complexity behaves like a linear graph: as the input grows, the number of operations grows proportionally, and vice versa. It occurs when an algorithm has to look at every item in its data to do its job. An example is searching for a certain value in a list: every extra item is one more step the algorithm may have to take before it gets the desired result.

Visualization:

`Desired = 10, Input = [1, 2, 3, 4, 10]. The algorithm checks each element in turn: False, False, False, False, True.`

This is the worst case: every element is checked. Compare it with the case below.

`Desired = 1, Input = [1, 2, 3, 4, 10]. The algorithm evaluates True on the first element and stops, resulting in the fewest number of operations.`
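The search described above can be sketched in Python as a simple linear search (the function name here is illustrative):

```python
def linear_search(items, desired):
    """Check each item in turn until we find `desired`: O(n)."""
    for index, value in enumerate(items):
        if value == desired:
            return index  # stop as soon as we find it (best case: 1 check)
    return -1  # checked all n items without finding it (worst case)

print(linear_search([1, 2, 3, 4, 10], 10))  # worst case: 5 checks
print(linear_search([1, 2, 3, 4, 10], 1))   # best case: 1 check
```

In the worst case the loop runs once per element, which is why the work grows linearly with the input size.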

Constant Time Complexity: O(1)

When we have constant time complexity, the size of the input doesn't matter: an input of any size n takes the same fixed amount of time to process. Due to the nature of constant time complexity, this is the fastest class of algorithm. An O(1) operation runs just as fast whether our input has 1 element or 1 million or even 1 billion.
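A quick Python sketch of constant-time operations (the names are illustrative):

```python
def get_first(items):
    """Indexing into a list is one operation, regardless of len(items): O(1)."""
    return items[0]

def lookup_age(ages, name):
    """Dictionary lookups are O(1) on average, no matter how big the dict is."""
    return ages[name]

print(get_first([7, 8, 9]))                      # one step for 3 items
print(get_first(list(range(1_000_000))))          # still one step for a million
print(lookup_age({"Ada": 36, "Alan": 41}, "Ada"))
```

Whether the list has three elements or a million, `get_first` does the same single step.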

Logarithmic Time Complexity: O(log n)

Algorithms that manage to run in this time frame are great for computation time. An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size. In simple terms, that means the running time grows very slowly as the input grows: doubling the input adds only one more step, unlike linear time complexity where the work doubles too. These programs work by discarding a portion of the data at each step instead of going through all of it, a.k.a. divide and conquer.
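The classic example of divide and conquer is binary search on a sorted list; each comparison discards half of the remaining elements, so it takes roughly log2(n) steps. A sketch in Python:

```python
def binary_search(sorted_items, desired):
    """Find `desired` in a sorted list by halving the search range: O(log n)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == desired:
            return mid                 # found it
        elif sorted_items[mid] < desired:
            low = mid + 1              # discard the lower half
        else:
            high = mid - 1             # discard the upper half
    return -1                          # not present

print(binary_search([1, 2, 3, 4, 10], 10))  # found after a few halvings
print(binary_search(list(range(1000)), 777))
```

A sorted list of 1,000 items needs at most about 10 comparisons, because 2^10 = 1024.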

Quadratic Time Complexity: O(n^2)

This type of algorithm's running time grows directly proportional to the square of the input size (that's why it's n^2, where n is the input size). In most cases, algorithms in this complexity class should be avoided for large quantities of data, as they take a long time to run. Nested for loops run in quadratic time: for each iteration of the outer loop, the inner loop runs over the whole input again.
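A minimal sketch of quadratic growth in Python, using a nested loop that builds every pair of elements (the function name is illustrative):

```python
def all_pairs(items):
    """Nested loops: the inner loop runs n times per outer iteration -> O(n^2)."""
    pairs = []
    for a in items:          # runs n times
        for b in items:      # runs n times for EACH outer iteration
            pairs.append((a, b))
    return pairs             # n * n pairs in total

print(len(all_pairs([1, 2, 3])))      # 3 items  -> 9 operations
print(len(all_pairs(list(range(10)))))  # 10 items -> 100 operations
```

Notice that going from 3 items to 10 items (about 3x the input) increases the work from 9 to 100 operations (about 11x), which is the quadratic blow-up.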

Exponential Time Complexity: O(2^n)

In exponential time complexity, the number of operations doubles with each additional input element (that's why it's 2^n, where n is the input size). This means that when the input increases by 1, the number of runs performed doubles. That is usually a bad thing, but it can be unavoidable: in some situations we don't have enough information to find the best solution directly, so the algorithm has to try all the possible permutations and combinations.
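A well-known sketch of exponential growth is the naive recursive Fibonacci function, where each call spawns two more calls:

```python
def fib(n):
    """Naive recursion: each call branches into two -> roughly O(2^n) calls."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)  # two recursive calls per step

print(fib(10))  # fine for small n
# fib(50) would take a very long time: the call tree roughly doubles
# in size every time n grows by 1.
```

This is why exponential algorithms become impractical very quickly: adding just a few elements to the input can turn a fast program into one that never finishes.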

The best programs are the ones that take the fewest iterations before producing an output. When programming, try to keep your algorithms running at or below linear time complexity, though sometimes that isn't possible. Determining time complexity takes practice and a good grasp of the formulas and of what your code is actually doing.

More information here

**Copyright © 2021 Code 4 Tomorrow. All rights reserved.**

The code in this course is licensed under the MIT License.

*If you would like to use content from any of our courses, you must obtain our explicit written permission and provide credit. Please contact classes@code4tomorrow.org for inquiries.*