Wednesday, February 4, 2015

Merge Sort!



Hello all,

In this section I am going to discuss one of the most famous sorting techniques available to us.

Merge sort:
The main idea behind this sort is that we first divide the set of elements in half, sort both halves, and then merge them. This way we avoid comparing every element against every other element.

This technique is a stable sort.

What is a stable sort? In a stable sort, the algorithm preserves the order of equal elements, meaning that two objects with equal keys appear in the output in the same order as in the input. I have found a very good example on the Wikipedia page which might make things clear for us.


In this beautiful example, we have two 5s of different suits. A stable sort preserves their original order in the output.
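To see stability in code, here is a minimal sketch of my own (the struct card type and the stable_sort name are only for illustration, not from the original example). It uses insertion sort, which is naturally stable because it only shifts strictly greater elements:

#include <stdio.h>

/* Hypothetical record type: two cards can share the same rank (the sort key)
 * while differing in suit */
struct card {
    int  rank;
    char suit;
};

/* Insertion sort is stable: equal ranks keep their input order because
 * the comparison below uses '>' rather than '>=' */
void stable_sort(struct card a[], int n)
{
    for (int i = 1; i < n; i++) {
        struct card key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j].rank > key.rank) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}

int main(void)
{
    struct card hand[] = { {5, 'S'}, {3, 'H'}, {5, 'H'}, {2, 'D'} };

    stable_sort(hand, 4);
    for (int i = 0; i < 4; i++)
        printf("%d%c ", hand[i].rank, hand[i].suit);
    printf("\n");   /* prints: 2D 3H 5S 5H - the 5 of spades stays first */
    return 0;
}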

Now let us come back to merge sort. As mentioned above, it divides the elements, sorts the halves, and merges them, as explained in the figure below.



My advice to all readers: try to come up with your own logic before looking at the code below, then come back and check my implementation.

Coming to the complexity of this algorithm: as we have seen in the article "Complexity of an algorithm", we can use that knowledge to work it out here.

As seen in the approach above, we divide the set of elements in half at every step, which gives us logn levels of division.

Now, at each level we sort and merge the elements, so in total there are n elements that need to be examined and moved (combining all the sets).

So the final complexity of this sorting method is O(nlogn).
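For those who like to see a derivation, the same answer falls out of the standard merge sort recurrence, where cn stands for the linear cost of one merge pass (this is a textbook derivation, not anything specific to the code below):

$$
\begin{aligned}
T(n) &= 2\,T(n/2) + cn \\
     &= 4\,T(n/4) + 2cn \\
     &= 2^k\,T(n/2^k) + kcn \\
     &= n\,T(1) + cn\log_2 n \qquad \text{(at } k = \log_2 n\text{)} \\
     &= O(n \log n)
\end{aligned}
$$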

#include <stdio.h>

int temp[10] = {4, 2, 10, 6, 7, 9, 22, 1, 8};

/* Merge the two sorted halves temp[low..mid] and temp[mid+1..high] */
void merge_sort(int low, int mid, int high)
{
  int local[10];
  int i, k;
  int a, b;

  a = low;
  b = mid + 1;
  i = low;

  /* Pick the smaller element from either half until one half runs out */
  while ((a <= mid) && (b <= high))
  {
    if (temp[a] <= temp[b])
    {
      local[i] = temp[a];
      a++;
    }
    else
    {
      local[i] = temp[b];
      b++;
    }
    i++;
  }

/* For the leftover elements after the merge loop:
 * which half still has elements is determined from the positions of the
 * left & right partition indices */
  if (a > mid)
  {
    for (k = b; k <= high; k++)
    {
      local[i] = temp[k];
      i++;
    }
  }
  else
  {
    for (k = a; k <= mid; k++)
    {
      local[i] = temp[k];
      i++;
    }
  }

  /* Copy the merged result back into the original array */
  for (k = low; k <= high; k++)
    temp[k] = local[k];
}

/* Recursively split the range in half, then merge the sorted halves */
void partition(int low, int high)
{
  int mid;

  if (low < high)
  {
    mid = (low + high) / 2;
    partition(low, mid);
    partition(mid + 1, high);
    merge_sort(low, mid, high);
  }
}

int main(void)
{
  int i;

  partition(0, 8);
  printf("After sorting the array\n");
  for (i = 0; i < 9; i++)
    printf("%d ", temp[i]);
  printf("\n");

  return 0;
}


Hope I was able to come up with an efficient solution!






Reversing a singly linked list!



Hello everybody!

This question is one of the most commonly asked interview questions if you appear for a system programming job. Let me try to write the program for it in the simplest way.

Let us consider we have 4 nodes A->B->C->D. To arrange these nodes in reverse order, let us first:

  • Take the first node 'A' and detach it from the list
  • Make the next of A NULL
  • Make B the root (head) of the main list


Now, let us go through the main list, detach each node from it, and connect it to the second (reversed) list.


We need to repeat these steps until we reach NULL in the main list. Once we are done with the traversal, it is time to declare the new head as our head :-)

You can get the C code for the same below.


/* Assumes a global 'struct list' with a 'next' pointer and a global 'head' */
void reverse_list()
{
  struct list *temp;
  struct list *new;

  if (head == NULL)   /* empty list: nothing to reverse */
    return;

  new = head;

/* Detaching the first node from the main list; it becomes
 * the tail of the reversed list */
  temp = head->next;
  head = head->next;
  new->next = NULL;

/* Traverse the main list and attach each node to the front of the
 * new list that we are trying to create! */
  while (temp != NULL)
  {
    temp = head->next;
    head->next = new;
    new = head;
    head = temp;
  }

  /* The reversed list is complete; its first node is the new head */
  head = new;
}

Adding to and displaying a singly linked list you can take up as an exercise!!
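If you would like to test reverse_list() before attempting that exercise, here is one minimal sketch of the assumed node type and helpers (the struct fields and the add/display names are my own guesses, since the post leaves them to the reader):

#include <stdio.h>
#include <stdlib.h>

struct list {
    int data;
    struct list *next;
};

struct list *head = NULL;   /* global head, as reverse_list() expects */

/* Insert a new node at the front of the list */
void add(int data)
{
    struct list *node = malloc(sizeof(*node));
    node->data = data;
    node->next = head;
    head = node;
}

/* Walk the list from head to NULL and print each node */
void display(void)
{
    for (struct list *p = head; p != NULL; p = p->next)
        printf("%d -> ", p->data);
    printf("NULL\n");
}

For example, add(4); add(3); add(2); add(1); builds the list 1 -> 2 -> 3 -> 4; after calling reverse_list(), display() prints 4 -> 3 -> 2 -> 1 -> NULL.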


Enjoy reversing!!





Tuesday, January 6, 2015

Complexity of an algorithm



Algorithm efficiency can be defined as how long a particular algorithm takes to run with respect to its input size. So the less time an algorithm takes, the better its efficiency.

Usually complexity is described using Big O notation. Big O notation typically excludes constants. If the complexity is a sum of several terms, we typically express it with just the highest-order term.

For example: if the complexity is n^2 + n, then we usually refer to it as O(n^2).


O(1) - If the running time of a piece of logic does not depend on the number of inputs, we describe it with this complexity.

O(logn) - logn complexity usually arises for binary-style operations where we eliminate half of the remaining set on each pass, for example searching a binary search tree.

O(n) - For this sort of complexity, the logic requires one pass over each of the elements present.

O(nlogn) - If we have two nested loops where the outer loop's complexity is logn and the inner loop's complexity is n, then we refer to the total complexity as O(nlogn). For example merge sort, where we divide the list into two halves, sort them, and finally merge them.

O(n^2) - If we have two nested loops, each of complexity n, then the total complexity is O(n^2). A few of these classes are illustrated with small code sketches right after this list.
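To make a few of these concrete, here is a minimal sketch in C (the function names are mine, purely for illustration). O(nlogn) is covered by the merge sort post above, and O(n^2) by the example code below.

/* O(1): a single array access; the time taken does not depend on n */
int first_element(const int a[])
{
    return a[0];
}

/* O(logn): binary search over a sorted array; every iteration
 * discards half of the remaining range */
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;

    while (low <= high)
    {
        int mid = low + (high - low) / 2;   /* overflow-safe midpoint */
        if (a[mid] == key)
            return mid;                     /* found: return the index */
        else if (a[mid] < key)
            low = mid + 1;                  /* key lies in the upper half */
        else
            high = mid - 1;                 /* key lies in the lower half */
    }
    return -1;                              /* key is not present */
}

/* O(n): one pass over all n elements */
int sum(const int a[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}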


Example code:

/* Selection-style sort; n is the number of elements in array */
for (int x = 0; x < n; x++)
{
  int min = x;                      /* index of the smallest element so far */
  for (int y = x; y < n; y++)
  {
    if (array[y] < array[min])
      min = y;
  }
  /* Swap the smallest remaining element into position x */
  int temp = array[x];
  array[x] = array[min];
  array[min] = temp;
}


This is just the basic logic of a sorting operation (selection sort). We have two loops here: the outer loop's complexity is n; the inner loop runs n times in the worst case and once in the best case, so its average complexity is n/2.

But as I have mentioned already, we don't consider constants when calculating complexity, so the inner loop's complexity is also taken as n.

So the total complexity is O(n^2).
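As a quick sanity check on that n/2 argument, we can add up the inner loop's iterations exactly: it runs n, n-1, ..., 1 times over the successive outer iterations, so

$$n + (n-1) + \cdots + 1 = \frac{n(n+1)}{2} = \frac{n^2}{2} + \frac{n}{2} = O(n^2)$$

Dropping the constant 1/2 and the lower-order n/2 term leaves exactly the O(n^2) we arrived at above.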