Sunday, March 12, 2017

Big-O Notation

In algorithm analysis it’s important to quantify the number of operations or steps an algorithm will require in order to find or compute a solution.

If each of these steps is considered to be a basic unit of computation, then the execution time of an algorithm can be expressed as the number of steps required to solve the problem. A good basic unit of computation for a summation algorithm is the number of assignment statements performed to compute the sum. In this function (sketched below) the number of assignment statements is 1, for total = 0, plus n, the number of times we perform total = total + i.
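A minimal sketch of the kind of summation function being described (the name sum_of_n is an assumption, since the post does not show the code) might look like this in Python:

    def sum_of_n(n):
        total = 0                  # 1 assignment before the loop
        for i in range(1, n + 1):
            total = total + i      # n assignments, one per loop iteration
        return total

Calling sum_of_n(10) performs 1 + 10 = 11 assignment statements and returns 55.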
T(n) = 1 + n

Here n is the size of the problem and T(n) is the time it takes to solve a problem of size n, namely 1 + n steps. The dominant term is what is ultimately used for comparison. The order of magnitude function describes the part of T(n) that increases fastest as the value of n increases.
Order of magnitude is often called Big-O notation and is written as O(f(n)). In the case of T(n) = 1 + n, when n is small the constant 1 plays a significant role in the final result. As n gets larger, the 1 becomes insignificant to the final result. In that case we can drop the 1 and say that the running time is O(n).
For example, if T(n) = 10n^3 + 2105, then when n is small the constant 2105 plays a significant role, but when n is large 2105 becomes negligible and the running time is O(n^3).
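As a quick sketch of why the constant can be dropped (the numbers below follow directly from the formula above; nothing else is assumed), the share of T(n) = 10n^3 + 2105 contributed by the constant shrinks rapidly as n grows:

    def T(n):
        return 10 * n**3 + 2105

    for n in [1, 10, 100, 1000]:
        # fraction of the total contributed by the constant term 2105
        share = 2105 / T(n)
        print(n, T(n), round(share, 6))

Already at n = 100 the constant accounts for roughly 0.02% of the total, which is why only the dominant term 10n^3 matters.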
Sometimes the performance of an algorithm depends on the exact values of the data rather than simply on the size of the problem. For these kinds of algorithms we need to characterize their performance in terms of best-case, worst-case, and average-case performance. The best case is an input on which the algorithm performs exceptionally well, while the worst case is an input on which it performs poorly. In most cases the algorithm performs somewhere in between these two extremes, which is the average case.
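Sequential search is a standard illustration of this idea (the post does not name a specific algorithm, so this example is an assumption): searching a list for a value can finish after a single comparison, or only after examining every element.

    def sequential_search(items, target):
        comparisons = 0
        for value in items:
            comparisons += 1
            if value == target:
                return True, comparisons   # best case: target is the first element (1 comparison)
        return False, comparisons          # worst case: target is absent (n comparisons)

Searching [3, 1, 4, 1, 5] for 3 takes one comparison (best case), searching for 9 takes five (worst case), and a successful search inspects about half of the list on average.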
