Algorithmic Analysis: An Introduction to Best, Worst and Average Case Complexities

NirmalGaud
5 min read · Feb 8, 2021

Analysis of algorithms means an investigation of an algorithm's efficiency with respect to two resources: running time and memory space.

Time efficiency, also called time complexity, indicates how fast the algorithm in question runs.

Space efficiency, also called space complexity, refers to the amount of memory units the algorithm requires in addition to the space needed for its input and output.

It is logical to investigate an algorithm's efficiency as a function of some parameter n indicating the algorithm's input size. The choice of an appropriate size metric can be influenced by the operations of the algorithm in question.

One approach is to count the number of times each of the algorithm's operations is executed. In practice, it is better to identify the most important operation of the algorithm, called the basic operation: the operation contributing the most to the total running time.

The established way to measure time efficiency is therefore to count the number of times the algorithm's basic operation is executed on inputs of size n. For example, let Cop be the execution time of the algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula T(n) ≈ Cop * C(n).
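As a rough illustration of the formula, suppose (purely as an assumption for this sketch) that the basic operation takes Cop = 10^-9 seconds on some machine and is executed C(n) = n^2/2 times. A few lines of Python then turn the formula into concrete estimates:

```python
# Sketch of the estimate T(n) ~ Cop * C(n), under two assumptions:
# the basic operation takes 1 ns, and it is executed n^2 / 2 times.
C_OP = 1e-9           # seconds per basic operation (assumed)

def C(n):
    return n * n / 2  # assumed basic-operation count, for illustration

for n in (1_000, 10_000, 100_000):
    print(f"n = {n:>7}: T(n) ~ {C_OP * C(n):.4f} s")
```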

Here, the analysis framework concentrates on the count's order of growth, to within a constant multiple, for large input sizes.

For large values of n, it is the function's order of growth that counts. Comparing the values of several functions important for the analysis of algorithms (such as log2 n, n, n log2 n, n^2, 2^n, and n!) makes this clear.
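A quick sketch (my own illustration, standing in for the table of function values usually shown at this point) prints a few of these functions side by side and shows how dramatically they diverge:

```python
import math

# Values of several functions important for the analysis of algorithms.
print(f"{'n':>4} {'log2 n':>8} {'n log2 n':>10} {'n^2':>6} {'2^n':>12} {'n!':>12}")
for n in (10, 20, 30):
    print(f"{n:>4} {math.log2(n):>8.2f} {n * math.log2(n):>10.1f} "
          f"{n**2:>6} {2**n:>12} {math.factorial(n):>12.3e}")
```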

Algorithms that require an exponential number of operations are practical only for solving problems of very small sizes.

Worst-Case, Best-Case, and Average-Case Efficiencies

Let's consider the example of sequential search. This algorithm searches for a given item (some search key K) in a list of n elements by checking successive elements of the list until either a match with the search key is found or the list is exhausted. Below is the algorithm's pseudocode, in which, for simplicity, the list is implemented as an array. It also assumes that the second condition, A[i] ≠ K, will not be checked if the first one, which checks that the array's index does not exceed its upper bound, fails.
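A direct Python rendering of that pseudocode might look like this (returning −1 on an unsuccessful search is my implementation choice):

```python
def sequential_search(A, K):
    """Return the index of the first element of A equal to K,
    or -1 if the search is unsuccessful."""
    i = 0
    # Python's 'and' short-circuits, so A[i] != K is never checked
    # once i < len(A) fails -- matching the assumption stated above.
    while i < len(A) and A[i] != K:
        i += 1
    return i if i < len(A) else -1

print(sequential_search([3, 1, 4, 1, 5], 4))  # 2
print(sequential_search([3, 1, 4, 1, 5], 9))  # -1
```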

In the worst case, when there are no matching elements or the first matching element happens to be the last one on the list, the algorithm makes the largest number of key comparisons among all possible inputs of size n; therefore, Cworst(n) = n.

The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.

The best-case inputs for sequential search are lists of size n whose first element equals the search key; accordingly, Cbest(n) = 1 for this algorithm.

The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size.

The analysis of the best-case efficiency is not nearly as important as that of the worst-case efficiency, but it is not completely useless, either. For the insertion sort algorithm, the best-case inputs are already-sorted arrays, on which the algorithm works very fast. Moreover, this best-case efficiency deteriorates only slightly for almost-sorted arrays; such an algorithm might therefore well be the method of choice for applications dealing with almost-sorted arrays. And, of course, if the best-case efficiency of an algorithm is unsatisfactory, we can immediately discard it without further analysis.
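To make the insertion sort remark concrete, here is a standard sketch of the algorithm (my own, not the article's original): on an already-sorted array the while condition fails immediately at every step, so the basic operation (the key comparison A[j] > v) executes only n − 1 times.

```python
def insertion_sort(A):
    """Sort list A in place using insertion sort."""
    for i in range(1, len(A)):
        v, j = A[i], i - 1
        while j >= 0 and A[j] > v:  # key comparison: the basic operation
            A[j + 1] = A[j]         # shift larger elements to the right
            j -= 1
        A[j + 1] = v                # insert the saved element

# Best case: a sorted input triggers exactly n - 1 key comparisons.
data = [1, 2, 3, 4, 5]
insertion_sort(data)
```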

Note, however, that neither the worst-case analysis nor its best-case counterpart yields the necessary information about an algorithm's behavior on a "typical" or "random" input. This is the information that the average-case efficiency seeks to provide. To analyze an algorithm's average-case efficiency, we must make some assumptions about possible inputs of size n.

Let's consider sequential search again. The standard assumptions are that:

  1. the probability of a successful search is equal to p (0 ≤ p ≤ 1), and
  2. in the case of a successful search, the probability of the first match occurring in the ith position of the list is the same for every i.

Under these assumptions (the validity of which is usually difficult to verify, their reasonableness notwithstanding), we can find the average number of key comparisons Cavg(n) as follows. In the case of a successful search, the probability of the first match occurring in the ith position of the list is p/n for every i, and the number of comparisons made by the algorithm in such a situation is obviously i. In the case of an unsuccessful search, the number of comparisons will be n, with the probability of such a search being (1 − p). Therefore,

Cavg(n) = [1 · p/n + 2 · p/n + ... + i · p/n + ... + n · p/n] + n · (1 − p)
        = (p/n) · (1 + 2 + ... + i + ... + n) + n(1 − p)
        = (p/n) · n(n + 1)/2 + n(1 − p)
        = p(n + 1)/2 + n(1 − p).

This general formula yields some quite reasonable answers.

  1. If p = 1 (the search must be successful), the average number of key comparisons made by sequential search is (n + 1)/2; that is, the algorithm will inspect, on average, about half of the list's elements.
  2. If p = 0 (the search must be unsuccessful), the average number of key comparisons will be n because the algorithm will inspect all n elements on all such inputs.
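The closed form and its special cases are easy to sanity-check by simulation. The sketch below (my own, with an arbitrary choice of n and p) draws the match position uniformly on a successful search and compares the simulated average with p(n + 1)/2 + n(1 − p):

```python
from random import random, randrange

def simulated_cavg(n, p, trials=200_000):
    """Estimate the average number of key comparisons of sequential
    search under the two standard assumptions above."""
    total = 0
    for _ in range(trials):
        if random() < p:
            total += randrange(1, n + 1)  # match at position i costs i comparisons
        else:
            total += n                    # unsuccessful search inspects all n elements
    return total / trials

n, p = 100, 0.5
print(simulated_cavg(n, p))           # close to the value below
print(p * (n + 1) / 2 + n * (1 - p))  # Cavg(n) = 75.25 here
```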

Investigation of the average-case efficiency is considerably more difficult than investigation of the worst-case and best-case efficiencies. Also, average-case efficiency cannot be obtained by taking the average of the worst-case and the best-case efficiencies. Even though this average does occasionally coincide with the average-case cost, it is not a legitimate way of performing the average-case analysis.

Why do we need average-case efficiency information?

There are many important algorithms for which the average-case efficiency is much better than the overly pessimistic worst-case efficiency would lead us to believe. Without average-case analysis, computer scientists could have missed many important algorithms.
