A set of instructions in a loop. Iterative instructions can have different complexities.
\\
+ If the *inner loop iterator doesn't depend on the outer loop*, the complexity of the inner loop is multiplied by the number of times the outer loop runs to get the time complexity. For example, suppose we have a loop such as
#+BEGIN_SRC C
for(int i = 0; i < n; i++){
...
for(int j = 1; j < n; j *= 2){
}
}
#+END_SRC
Thus the time complexity is *O(n.log(n))*.
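As a quick numeric check (the counting harness below is my own addition), the inner doubling loop runs about $log_2(n)$ times for each of the *n* outer iterations:

#+BEGIN_SRC C
#include <stdio.h>

int main(void){
    int n = 8;
    long count = 0;
    for(int i = 0; i < n; i++){
        /* the inner iterator does not depend on i */
        for(int j = 1; j < n; j *= 2){
            count++;
        }
    }
    /* for n = 8 the inner loop runs 3 times (j = 1, 2, 4) per outer iteration */
    printf("%ld\n", count);    /* 8 * 3 = 24 */
    return 0;
}
#+END_SRC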
+ If the *inner loop and outer loop are related*, the complexity has to be computed using sums. For example, consider the loop
#+BEGIN_SRC C
for(int i = 0; i < n; i++){
...
for(int j = 0; j <= i; j++){
}
}
#+END_SRC
The total number of times the inner loop runs is $\frac{n^2}{2} + \frac{n}{2}$, so the time complexity is $O(n^2)$.
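The sum can be verified numerically (the counting harness is my own addition; it assumes the bounds ~i < n~ and ~j <= i~):

#+BEGIN_SRC C
#include <stdio.h>

int main(void){
    int n = 10;
    long count = 0;
    /* dependent loops: the inner loop runs i + 1 times on iteration i */
    for(int i = 0; i < n; i++){
        for(int j = 0; j <= i; j++){
            count++;
        }
    }
    printf("%ld\n", count);    /* n^2/2 + n/2 = 55 for n = 10 */
    return 0;
}
#+END_SRC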
*Another example,*
\\
Suppose we have the loop
#+BEGIN_SRC C
for(int i = 1; i <= n; i++){
...
for(int j = 1; j <= i; j *= 2){
}
}
#+END_SRC
Time complexity = $O(n.log(n))$
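Here the inner loop runs roughly $log_2(i) + 1$ times for each *i*, which sums to $O(n.log(n))$. A small check (the counting harness is my own addition):

#+BEGIN_SRC C
#include <stdio.h>

int main(void){
    int n = 8;
    long count = 0;
    /* the inner loop runs floor(log2(i)) + 1 times for each i */
    for(int i = 1; i <= n; i++){
        for(int j = 1; j <= i; j *= 2){
            count++;
        }
    }
    printf("%ld\n", count);    /* 1+2+2+3+3+3+3+4 = 21 for n = 8 */
    return 0;
}
#+END_SRC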
** An example for time complexities of nested loops
* Selection sort
It is an inplace sorting technique. In this algorithm, we will get the minimum element of the unsorted part of the array and place it at the beginning of the unsorted part.
** Time complexity
The total number of comparisons is,
\[ \text{Total number of comparisons} = (n-1) + (n-2) + (n-3) + ... + (1) \]
\[ \text{Total number of comparisons} = \frac{n(n-1)}{2} \]
Thus, the time complexity of selection sort is $\theta (n^2)$ in all cases.
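A minimal sketch of selection sort (the function name, comparison counter, and demo array are my own additions) that confirms the $\frac{n(n-1)}{2}$ count:

#+BEGIN_SRC C
#include <stdio.h>

/* sorts in place and returns the number of comparisons performed */
long selection_sort(int array[], int n){
    long comparisons = 0;
    for(int i = 0; i < n - 1; i++){
        int min = i;
        /* find the minimum element of the unsorted part */
        for(int j = i + 1; j < n; j++){
            comparisons++;
            if(array[j] < array[min])
                min = j;
        }
        /* place it at the beginning of the unsorted part */
        int t = array[i]; array[i] = array[min]; array[min] = t;
    }
    return comparisons;
}

int main(void){
    int a[] = {5, 2, 8, 1, 9, 3};
    long c = selection_sort(a, 6);
    printf("%ld\n", c);    /* n(n-1)/2 = 15 comparisons for n = 6 */
    return 0;
}
#+END_SRC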
* Insertion sort
It is an inplace sorting algorithm.
** Time complexity
*Best Case* : The best case is when the input array is already sorted. In this case, we do *(n-1)* comparisons and no swaps. The time complexity will be $\theta (n)$
\\
Total number of inversions = 1 + 2 = 3
If the number of inversions of an array is f(n), then the time complexity of the insertion sort will be $\theta (n + f(n))$.
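To illustrate the $\theta (n + f(n))$ claim, here is a sketch of insertion sort (the function name, shift counter, and demo array are my own additions). Each shift of an element removes exactly one inversion, so the total number of shifts equals the inversion count:

#+BEGIN_SRC C
#include <stdio.h>

/* sorts in place and returns the number of element shifts performed */
long insertion_sort(int array[], int n){
    long shifts = 0;
    for(int i = 1; i < n; i++){
        int key = array[i];
        int j = i - 1;
        /* each shift removes exactly one inversion involving key */
        while(j >= 0 && array[j] > key){
            array[j + 1] = array[j];
            shifts++;
            j--;
        }
        array[j + 1] = key;
    }
    return shifts;
}

int main(void){
    int a[] = {3, 2, 1};    /* this array has 3 inversions */
    long s = insertion_sort(a, 3);
    printf("%ld\n", s);     /* shifts equal the inversion count */
    return 0;
}
#+END_SRC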
* Quick sort
It is a divide and conquer technique. It uses a partition algorithm which chooses an element from the array, then places all smaller elements to its left and all larger elements to its right. We can then take these two parts of the array and recursively place all their elements in the correct position. For ease, the element chosen by the partition algorithm is either the leftmost or the rightmost element.
#+BEGIN_SRC C
void quick_sort(int array[], int low, int high){
if(low < high){
int x = partition(array, low, high);
quick_sort(array, low, x-1);
quick_sort(array, x+1, high);
}
}
#+END_SRC
As we can see, the main component of this algorithm is the partition algorithm.
** Lomuto partition
The partition algorithm will work as follows:
#+BEGIN_SRC C
/* Will return the index where the array is partitioned */
int partition(int array[], int low, int high){
int pivot = array[high];
    /* i + 1 will point to the first element greater than the pivot */
    int i = low - 1;
    for(int j = low; j < high; j++){
        if(array[j] < pivot){
            i++;
            int t = array[i]; array[i] = array[j]; array[j] = t;
        }
    }
    /* move the pivot to its final position */
    int t = array[i + 1]; array[i + 1] = array[high]; array[high] = t;
    return i + 1;
}
#+END_SRC
For an array of size *n*, the number of comparisons done by this algorithm is always *n - 1*. Therefore, the time complexity of this partition algorithm is,
\[ T(n) = \theta (n) \]
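Putting the pieces together, a self-contained sketch using the Lomuto partition (the demo array in ~main~ is my own addition):

#+BEGIN_SRC C
#include <stdio.h>

/* returns the final index of the pivot (rightmost element) */
int partition(int array[], int low, int high){
    int pivot = array[high];
    int i = low - 1;    /* end of the region smaller than the pivot */
    for(int j = low; j < high; j++){
        if(array[j] < pivot){
            i++;
            int t = array[i]; array[i] = array[j]; array[j] = t;
        }
    }
    int t = array[i + 1]; array[i + 1] = array[high]; array[high] = t;
    return i + 1;
}

void quick_sort(int array[], int low, int high){
    if(low < high){
        int x = partition(array, low, high);
        quick_sort(array, low, x - 1);
        quick_sort(array, x + 1, high);
    }
}

int main(void){
    int a[] = {5, 2, 8, 1, 9, 3};
    quick_sort(a, 0, 5);
    for(int i = 0; i < 6; i++)
        printf("%d ", a[i]);    /* prints the array in ascending order */
    printf("\n");
    return 0;
}
#+END_SRC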
** Time complexity of quicksort
+ *Best Case* : The partition algorithm always divides the array into two equal parts. In this case, the recursive relation becomes
\[ T(n) = 2T(n/2) + \theta (n) \]
Where, $\theta (n)$ is the time complexity for creating partition.
\\
Using the Master theorem,
\[ T(n) = \theta( n.log(n) ) \]
+ *Worst Case* : The partition algorithm always creates the partition at one of the extreme positions of the array. This creates a single partition with *n-1* elements. Therefore, the quicksort algorithm has to be called on the remaining *n-1* elements of the array.
\[ T(n) = T(n-1) + \theta (n) \]
Again, $\theta (n)$ is the time complexity for creating the partition. Expanding the recurrence gives $n + (n-1) + ... + 1$, therefore
\[ T(n) = \theta (n^2) \]