The empirical approach is to program the competing algorithms and try them on different instances with the help of a computer. The theoretical approach is to determine mathematically the quantity of resources needed by each algorithm as a function of the size of the instances considered. The resources are computing time and storage space, computing time being the more critical of the two.
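A minimal sketch of the empirical approach in Python might look like the following. The names time_algorithm, sort_a, and sort_b are hypothetical, and Python's built-in sorted stands in for the competing algorithms; in practice each would be replaced by an actual implementation under test.

    import random
    import time

    def time_algorithm(algorithm, instance):
        # Copy the instance so each algorithm sees the same input.
        data = list(instance)
        start = time.perf_counter()
        algorithm(data)
        return time.perf_counter() - start

    # Placeholder competing algorithms; sorted stands in for both here.
    sort_a = sorted
    sort_b = sorted

    instance = [random.randint(0, 10**6) for _ in range(10000)]
    print("sort_a:", time_algorithm(sort_a, instance))
    print("sort_b:", time_algorithm(sort_b, instance))

Timings obtained this way depend on the particular machine, compiler or interpreter, and input, which is why the theoretical approach is needed as well.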
The size of an instance is any integer that in some way measures the number of components in that instance; for an array, the natural measure is its length.
We usually consider the worst case, i.e. for a given instance size we consider those instances that require the most time. The average behavior of an algorithm is much harder to analyze.
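As an illustrative sketch (not from the original text), a linear search makes the distinction concrete: for arrays of the same size, the cost depends on where, or whether, the target occurs, and the worst case is the instance that forces the most comparisons.

    def linear_search(a, target):
        # Return (index of target or -1, number of comparisons made).
        comparisons = 0
        for i, item in enumerate(a):
            comparisons += 1
            if item == target:
                return i, comparisons
        return -1, comparisons

    a = list(range(1000))
    print(linear_search(a, 0))    # best case: 1 comparison
    print(linear_search(a, -1))   # worst case: 1000 comparisons (target absent)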
We analyze an algorithm in terms of the number of elementary operations that are involved. An elementary operation is one whose execution time is bounded by a constant for a particular machine and programming language. Thus, within a multiplicative constant, it is the number of elementary operations executed that matters in the analysis, not the exact time.
Since the exact time for an elementary operation is unimportant, we say that an elementary operation can be executed at unit cost. We use the "Big O" notation to describe the execution time of algorithms: it gives the asymptotic execution time of an algorithm as a function of the instance size.
Algorithms can be classified using the "Big O" notation.
For example, summing the elements of an array a takes one addition per element:

    total = 0
    for item in a:
        total = total + item

The number of additions depends on the length of the array. Hence the run time is O(N).
Selection sort, by contrast, uses two nested loops over the array:

    def selectionSort(a):
        for i in range(len(a) - 1):
            # Find the minimum element in a[i:]
            minVal = a[i]
            minIdx = i
            for j in range(i + 1, len(a)):
                if a[j] < minVal:
                    minVal = a[j]
                    minIdx = j
            # Swap the minimum element with the element at the ith place
            a[minIdx] = a[i]
            a[i] = minVal

The inner loop runs on the order of N times for each of the N - 1 iterations of the outer loop, so the number of comparisons grows as the square of the array length. Hence the run time is O(N^2).
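As a quick check (not part of the original text), selectionSort can be exercised on a small list; under the implementation above it sorts the list in place:

    data = [5, 2, 9, 1, 7]
    selectionSort(data)
    print(data)   # [1, 2, 5, 7, 9]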