When we write an algorithm to solve a problem, our first goal is, and should be, to solve the problem at hand correctly. In other words, we want to describe a set of instructions such that, for any given input to the problem, the instructions always calculate the correct answer. Once we have correctly solved the problem, our second concern is whether the algorithm we have written is efficient enough. For problems where the input is usually small, the speed of an algorithm doesn’t really matter, especially with the computational power that computers have nowadays: for a small input, even an inefficient algorithm appears instant to the user. However, as big software companies deal with more and more data and increasingly complex systems, the efficiency of the algorithms used becomes important.

An obvious approach to measuring the efficiency of an algorithm would be to write the program, run it on some inputs, and measure the time the computer took to calculate the solution. This approach has two main drawbacks. Firstly, we need to write the code before we can find out anything about the performance. Secondly, it is not easy to compare two algorithms that solve the same problem if they do not run on the same system: the resulting time for each case depends not only on the algorithms but also on the system that performs the calculations. What we do instead, to better measure the efficiency of an algorithm, is to count the number of operations that a given algorithm would require to calculate an answer for a given input. For example, let us consider the problem of finding the telephone number of a given name in a phone book.
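As a minimal sketch of the timing approach (the sum_of_squares function here is just a hypothetical stand-in for any algorithm we might want to measure), Python's timeit module can be used like this:

```python
import timeit

def sum_of_squares(n):
    # A hypothetical stand-in for any algorithm we want to measure.
    return sum(i * i for i in range(n))

# Run the function many times and measure the total wall-clock time.
# The result depends on the machine and its current load, which is
# exactly the drawback described above.
elapsed = timeit.timeit(lambda: sum_of_squares(10_000), number=100)
print(f"100 runs took {elapsed:.4f} seconds")
```

Running this on two different machines will generally give two different times for the same algorithm, which is why counting operations is a more robust measure.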

In a computer, a phone book can be represented as a list of name–phone tuples, sorted by name. Let us have the following phone book represented as a sorted list:

phone_bk = [('Alex', '514-327-6237'), ('Anne', '514-348-4839'), ('Beth', '514-549-2340'),
('Catrina', '514-483-3479'), ('Eduardo', '514-357-9738'), ('Eva', '514-257-7391'),
('Hamilton', '514-462-2651'), ('James', '514-375-8916'), ('Lina', '514-376-2898'),
('Maria', '514-672-3217'), ('Zoe', '514-236-7264')]


Now, given the phone book phone_bk, we want to find the telephone number for a given name, say ‘James’. One approach would be to check every name one by one until we find ‘James’. Another approach would be to look at the name in the middle of the phone book and compare it with ‘James’. The middle element of phone_bk is (‘Eva’, ‘514-257-7391’); since we expect to find ‘James’ after ‘Eva’, we focus only on the names after ‘Eva’ in the phone book. The remaining names to search are ‘Hamilton’, ‘James’, ‘Lina’, ‘Maria’, and ‘Zoe’. We split this remaining list into two parts again and compare its middle name, which is ‘Lina’ in this case, with ‘James’. We continue in the same way, splitting the remaining list into two parts and searching only one of them.
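The two strategies can be sketched as follows. This is a minimal sketch, assuming the phone book is a list of (name, number) tuples sorted by name; note that the exact comparison counts depend on how the middle of an even-length sublist is chosen, so they may differ by one from the walkthrough above.

```python
# The phone book from above, as a list of (name, number) tuples sorted by name.
phone_bk = [('Alex', '514-327-6237'), ('Anne', '514-348-4839'),
            ('Beth', '514-549-2340'), ('Catrina', '514-483-3479'),
            ('Eduardo', '514-357-9738'), ('Eva', '514-257-7391'),
            ('Hamilton', '514-462-2651'), ('James', '514-375-8916'),
            ('Lina', '514-376-2898'), ('Maria', '514-672-3217'),
            ('Zoe', '514-236-7264')]

def linear_search(book, name):
    """First approach: check every entry one by one."""
    comparisons = 0
    for entry_name, number in book:
        comparisons += 1
        if entry_name == name:
            return number, comparisons
    return None, comparisons

def binary_search(book, name):
    """Second approach: repeatedly compare with the middle entry."""
    comparisons = 0
    lo, hi = 0, len(book) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # here we pick the lower middle
        comparisons += 1
        if book[mid][0] == name:
            return book[mid][1], comparisons
        elif book[mid][0] < name:
            lo = mid + 1       # the name comes after the middle entry
        else:
            hi = mid - 1       # the name comes before the middle entry
    return None, comparisons
```

For ‘James’, linear_search checks 8 names, while binary_search with this lower-middle convention uses 4 comparisons (one more than the walkthrough, whose splitting happened to land on ‘James’ directly).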

Both of the above algorithms return the telephone number of ‘James’, as we would like. However, the first algorithm has to check every name in the phone book up to ‘James’ before it returns the correct number; in our example it checks 8 names before returning anything. The second algorithm, on the other hand, needs only 3 comparisons before it finds ‘James’.

In the above example, the number of comparisons we had to make (even for the same algorithm) depends on the size of the phone book and on the position of the name we are searching for. The phone book above has 11 elements, so the first algorithm can make up to 11 comparisons while the second can make up to 4, which suggests that the second algorithm should in general be faster. For specific inputs, however, the number of comparisons changes: in our example, if we were searching for the phone number of ‘Alex’, the first algorithm would make only one comparison but the second one would make more. To avoid the problem that different inputs require different numbers of steps, we generally assume that the input given to the algorithm is the one that forces it to take the most steps. Note that this is not the same input for all algorithms that solve the same problem: for each algorithm we choose its own ‘worst’ possible input. Then the only thing that determines how many steps an algorithm takes is the size of the input.
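That each algorithm has its own worst input can be checked directly by trying every name in the phone book. This is a small sketch, counting one three-way comparison per entry inspected; the exact binary-search counts depend on the middle-element convention.

```python
names = ['Alex', 'Anne', 'Beth', 'Catrina', 'Eduardo', 'Eva',
         'Hamilton', 'James', 'Lina', 'Maria', 'Zoe']

def linear_comparisons(target):
    # Number of names checked until `target` is found, scanning left to right.
    count = 0
    for name in names:
        count += 1
        if name == target:
            return count

def binary_comparisons(target):
    # Number of middle-element comparisons until `target` is found.
    count, lo, hi = 0, 0, len(names) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1
        if names[mid] == target:
            return count
        elif names[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1

# The worst input differs per algorithm: for linear search it is the last
# name ('Zoe', 11 comparisons); for binary search it is any name at the
# deepest level of the halving (4 comparisons), not 'Zoe' in particular.
worst_linear = max(linear_comparisons(n) for n in names)
worst_binary = max(binary_comparisons(n) for n in names)
print(worst_linear, worst_binary)
```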

To measure the complexity of an algorithm we most often use big-O notation. This notation makes it easy to compare algorithms without focusing on unimportant constants: it measures the asymptotic growth of an algorithm's running time as the size of the input grows. For example, suppose the input size is $$n$$ and we want to compare algorithms that require $$n^2$$, $$100 \cdot n^2$$, and $$50 \cdot n \cdot \log(n)$$ steps as a function of $$n$$. For small inputs all three functions give different results, but as $$n$$ becomes large enough the first two behave similarly and give much larger values than the third one. If we take the limit of the ratio of two such functions and it is a constant greater than zero, then the two algorithms have the same complexity. If the same ratio tends to zero or infinity, then the two algorithms have different complexities. The first two algorithms, requiring $$n^2$$ and $$100 \cdot n^2$$ steps, have the same complexity, which we denote by $$O(n^2)$$, while the complexity of the third algorithm is $$O(n \cdot \log(n))$$.
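The limit comparison described above can be checked numerically. Here is a minimal sketch using the three hypothetical step-count functions from the example:

```python
import math

def steps_a(n): return n ** 2              # n^2 steps
def steps_b(n): return 100 * n ** 2        # 100 * n^2 steps
def steps_c(n): return 50 * n * math.log(n)  # 50 * n * log(n) steps

# The ratio of the first two is the nonzero constant 1/100 for every n,
# so they share the same complexity, O(n^2).  The ratio of the third to
# the first tends to 0 as n grows, so O(n log n) is a strictly smaller
# complexity than O(n^2).
for n in (10, 1_000, 1_000_000):
    print(n, steps_a(n) / steps_b(n), steps_c(n) / steps_a(n))
```

Note that for small $$n$$ the third function can even be the largest of the three; the constants only stop mattering once $$n$$ is large enough.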

Perhaps big-O notation as described above seems more difficult than it really is, but by using it often to measure the complexity of the algorithms seen in this series, we can familiarize ourselves with it (if we are not already). Soon it will be very easy to classify the complexity of most algorithms immediately, just by reading the code we have written. However, for some recursive algorithms the complexity is more complicated to find. For these algorithms, the only thing we have to remember is the following result, called the Master Theorem:

Suppose the running time of an algorithm is given by the recurrence $$T(n) = a \cdot T\left( \frac{n}{b} \right) + O(n^d)$$. Then the complexity of the algorithm is:

• $$O(n^d)$$, if $$d > \log_b a$$,
• $$O(n^d \log(n))$$, if $$d = \log_b a$$,
• $$O(n^{\log_b a})$$, if $$d < \log_b a$$.
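The three cases can be wrapped in a small helper (a hypothetical illustration, not a standard library function). For example, merge sort satisfies $$T(n) = 2 \cdot T(n/2) + O(n)$$, so $$a=2$$, $$b=2$$, $$d=1$$, and since $$d = \log_2 2$$ the second case gives $$O(n \log n)$$:

```python
import math

def master_theorem(a, b, d):
    """Return the big-O class of T(n) = a*T(n/b) + O(n^d) as a string.

    A hypothetical helper illustrating the three cases of the theorem.
    """
    critical = math.log(a, b)  # log_b a, the 'critical exponent'
    if math.isclose(d, critical):
        return f"O(n^{d} log n)"
    elif d > critical:
        return f"O(n^{d})"
    else:
        return f"O(n^{critical:.3f})"

print(master_theorem(2, 2, 1))  # merge sort
print(master_theorem(1, 2, 0))  # binary search: O(n^0 log n), i.e. O(log n)
print(master_theorem(3, 2, 1))  # Karatsuba multiplication: O(n^1.585)
```

Note that the binary search from our phone-book example fits the second case with $$a=1$$, $$b=2$$, $$d=0$$, recovering the $$O(\log n)$$ behaviour we observed by counting comparisons.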