Tuesday, January 6, 2015

Complexity of an algorithm



The efficiency of an algorithm can be defined as how long it takes to run with respect to the size of its input. So the less time an algorithm takes to run, the better its efficiency.

Usually complexity is described using Big O notation. Big O notation typically excludes constant factors, and if the complexity is a sum of several terms, we express it with only the highest-order term.

For example: if the complexity is n^2 + n, then we usually refer to it as O(n^2).


O(1) - If the running time of a piece of logic does not depend on the number of inputs, we describe it with this complexity.
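
As a minimal sketch (the method name getFirst is just for illustration): fetching an element by its index takes the same time whether the array holds ten elements or ten million.

int getFirst(int[] array)
{
    return array[0]; // one step, independent of array.length
}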

O(logn) - A logn complexity usually arises in binary-style operations where we eliminate half of the remaining set on each iteration, for example searching a binary search tree.
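
For instance, here is a sketch of binary search over an array that is assumed to be already sorted; every comparison throws away half of the remaining range, so at most about logn comparisons are needed.

int binarySearch(int[] sorted, int key)
{
    int low = 0, high = sorted.length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;
        if (sorted[mid] == key)
            return mid;        // found the key
        else if (sorted[mid] < key)
            low = mid + 1;     // discard the left half
        else
            high = mid - 1;    // discard the right half
    }
    return -1;                 // key is not present
}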

O(n) - For this sort of complexity, the algorithm requires one pass over each of the elements present.
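
For example, a sketch of a single pass that sums an array; each element is visited exactly once, so the work grows linearly with the input size.

int sum(int[] array)
{
    int total = 0;
    for (int i = 0; i < array.length; i++)
        total += array[i];     // one visit per element: n steps in all
    return total;
}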

O(nlogn) - If we have two nested loops where the outer loop's complexity is logn and the inner loop's complexity is n, then we refer to the total complexity as O(nlogn). For example merge sort, where we divide the list into two halves, sort each half, and finally merge them: there are about logn levels of division, and the merging does n work at each level.
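
As a sketch of that pattern, here is a minimal merge sort (the helper method merge is just one common way to write the merging step):

void mergeSort(int[] a, int left, int right)
{
    if (left >= right)
        return;                       // 0 or 1 elements: already sorted
    int mid = (left + right) / 2;
    mergeSort(a, left, mid);          // sort the left half
    mergeSort(a, mid + 1, right);     // sort the right half
    merge(a, left, mid, right);       // merge the halves: n work per level
}

void merge(int[] a, int left, int mid, int right)
{
    int[] tmp = new int[right - left + 1];
    int i = left, j = mid + 1, k = 0;
    while (i <= mid && j <= right)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++]; // take the smaller head
    while (i <= mid)
        tmp[k++] = a[i++];            // copy leftovers from the left half
    while (j <= right)
        tmp[k++] = a[j++];            // copy leftovers from the right half
    for (k = 0; k < tmp.length; k++)
        a[left + k] = tmp[k];         // write the merged run back
}

Calling mergeSort(a, 0, a.length - 1) sorts the whole array.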

O(n^2) - If we have two nested loops, each of complexity n, then the total complexity would be O(n^2), as in the example code below.


Example code:


for(int x = 0; x < array.length; x++)
{
    int min = x;                      // index of the smallest element seen so far
    for(int y = x; y < array.length; y++)
    {
        if(array[y] < array[min])
            min = y;                  // found a new minimum in the unsorted part
    }
    int temp = array[x];              // swap the minimum into position x
    array[x] = array[min];
    array[min] = temp;
}
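
To try it end to end, here is a minimal runnable sketch with the loop above wrapped in a class (the class name SelectionSortDemo and the sample values are just for illustration):

public class SelectionSortDemo
{
    public static void main(String[] args)
    {
        int[] array = {5, 2, 9, 1, 7};
        for (int x = 0; x < array.length; x++)
        {
            int min = x;
            for (int y = x; y < array.length; y++)
            {
                if (array[y] < array[min])
                    min = y;
            }
            int temp = array[x];
            array[x] = array[min];
            array[min] = temp;
        }
        System.out.println(java.util.Arrays.toString(array)); // prints [1, 2, 5, 7, 9]
    }
}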


This is the basic logic of selection sort. We have two loops here: the outer loop's complexity is n, and the inner loop's complexity is n in the worst case (the first pass) and 1 in the best case (the last pass), so the average complexity of the inner loop is about n/2 (the total number of comparisons is n + (n-1) + ... + 1 = n(n+1)/2).

But as I have mentioned already, we don't consider constant factors when calculating complexity, so the inner loop's complexity is also counted as n.

So the total complexity would be O(n^2).

