* Square matrix multiplication
Matrix multiplication algorithms taken from here:
[[https://www.cs.mcgill.ca/~pnguyen/251F09/matrix-mult.pdf]]
** Straightforward method
#+BEGIN_SRC C
/* This will calculate A x B and store it in C. */
#define N 3

int main(){
    int A[N][N] = { {1, 2, 3},
                    {4, 5, 6},
                    {7, 8, 9} };
    int B[N][N] = { {10, 20, 30},
                    {40, 50, 60},
                    {70, 80, 90} };
    int C[N][N];
    for(int i = 0; i < N; i++){
        for(int j = 0; j < N; j++){
            C[i][j] = 0;
            for(int k = 0; k < N; k++){
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
    return 0;
}
#+END_SRC
Time complexity is $O(n^3)$.
** Divide and conquer approach
The divide and conquer algorithm only works for square matrices of size n X n, where n is a power of 2. The algorithm works as follows.
#+BEGIN_SRC
MatrixMul(A, B, n):
    If n == 2 {
        return A X B
    } else {
        Break A into four parts A_11, A_12, A_21, A_22, where A = [[ A_11, A_12],
                                                                   [ A_21, A_22]]
        Break B into four parts B_11, B_12, B_21, B_22, where B = [[ B_11, B_12],
                                                                   [ B_21, B_22]]
        C_11 = MatrixMul(A_11, B_11, n/2) + MatrixMul(A_12, B_21, n/2)
        C_12 = MatrixMul(A_11, B_12, n/2) + MatrixMul(A_12, B_22, n/2)
        C_21 = MatrixMul(A_21, B_11, n/2) + MatrixMul(A_22, B_21, n/2)
        C_22 = MatrixMul(A_21, B_12, n/2) + MatrixMul(A_22, B_22, n/2)
        C = [[ C_11, C_12],
             [ C_21, C_22]]
        return C
    }
#+END_SRC
The addition of matrices of size (n X n) takes time $\theta (n^2)$. The additions needed for C_11 involve matrices of size (n/2 X n/2), so they take time $\theta \left( \left( \frac{n}{2} \right)^2 \right)$, which is equal to $\theta \left( \frac{n^2}{4} \right)$. Therefore, the additions for C_11, C_12, C_21 and C_22 combined take time $\theta \left( 4 \cdot \frac{n^2}{4} \right)$, which is equal to $\theta (n^2)$.
\\
There are 8 recursive calls to MatrixMul(n/2) in this function, therefore the time complexity is
\[ T(n) = 8T(n/2) + \theta (n^2) \]
Using the *master theorem*,
\[ T(n) = \theta (n^{\log_2 8}) \]
\[ T(n) = \theta (n^3) \]
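A minimal C sketch of this recursion, as an illustration only: instead of physically splitting the matrices, it passes row/column offsets into the full arrays, and it recurses down to n == 1 rather than n == 2. The function name ~matrix_mul~ and the offset parameters are assumptions made for this sketch.
#+BEGIN_SRC C
#include <stdio.h>
#define N 4

/* Multiply the n x n blocks of A and B starting at (ar, ac) and
   (br, bc), accumulating the result into the block of C at (cr, cc). */
void matrix_mul(int A[N][N], int B[N][N], int C[N][N],
                int ar, int ac, int br, int bc, int cr, int cc, int n){
    if(n == 1){
        C[cr][cc] += A[ar][ac] * B[br][bc];
        return;
    }
    int h = n / 2;
    /* C_11 = A_11 B_11 + A_12 B_21 */
    matrix_mul(A, B, C, ar,   ac,   br,   bc,   cr,   cc,   h);
    matrix_mul(A, B, C, ar,   ac+h, br+h, bc,   cr,   cc,   h);
    /* C_12 = A_11 B_12 + A_12 B_22 */
    matrix_mul(A, B, C, ar,   ac,   br,   bc+h, cr,   cc+h, h);
    matrix_mul(A, B, C, ar,   ac+h, br+h, bc+h, cr,   cc+h, h);
    /* C_21 = A_21 B_11 + A_22 B_21 */
    matrix_mul(A, B, C, ar+h, ac,   br,   bc,   cr+h, cc,   h);
    matrix_mul(A, B, C, ar+h, ac+h, br+h, bc,   cr+h, cc,   h);
    /* C_22 = A_21 B_12 + A_22 B_22 */
    matrix_mul(A, B, C, ar+h, ac,   br,   bc+h, cr+h, cc+h, h);
    matrix_mul(A, B, C, ar+h, ac+h, br+h, bc+h, cr+h, cc+h, h);
}

int main(void){
    int A[N][N] = { { 1,  2,  3,  4},
                    { 5,  6,  7,  8},
                    { 9, 10, 11, 12},
                    {13, 14, 15, 16} };
    int B[N][N] = { {1, 0, 0, 0},   /* identity matrix, so C = A */
                    {0, 1, 0, 0},
                    {0, 0, 1, 0},
                    {0, 0, 0, 1} };
    int C[N][N] = {0};
    matrix_mul(A, B, C, 0, 0, 0, 0, 0, 0, N);
    printf("%d %d\n", C[0][0], C[3][3]);
    return 0;
}
#+END_SRC
The eight recursive calls are visible directly in the body, matching the $T(n) = 8T(n/2) + \theta (n^2)$ recurrence.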
** Strassen's algorithm
Strassen's algorithm is another, more efficient divide and conquer algorithm for matrix multiplication. It also only works on square matrices with n a power of 2. It is based on the observation that, for A X B = C, we can calculate C_11, C_12, C_21 and C_22 as,
\[ C_{11} = P_5 + P_4 - P_2 + P_6 \]
\[ C_{12} = P_1 + P_2 \]
\[ C_{21} = P_3 + P_4 \]
\[ C_{22} = P_1 + P_5 - P_3 - P_7 \]
Where,
\[ P_1 = A_{11} \times (B_{12} - B_{22}) \]
\[ P_2 = (A_{11} + A_{12}) \times B_{22} \]
\[ P_3 = (A_{21} + A_{22}) \times B_{11} \]
\[ P_4 = A_{22} \times (B_{21} - B_{11}) \]
\[ P_5 = (A_{11} + A_{22}) \times (B_{11} + B_{22}) \]
\[ P_6 = (A_{12} - A_{22}) \times (B_{21} + B_{22}) \]
\[ P_7 = (A_{11} - A_{21}) \times (B_{11} + B_{12}) \]
This reduces the number of recursive calls from 8 to 7.
#+BEGIN_SRC
Strassen(A, B, n):
    If n == 2 {
        return A X B
    } else {
        Break A into four parts A_11, A_12, A_21, A_22, where A = [[ A_11, A_12],
                                                                   [ A_21, A_22]]
        Break B into four parts B_11, B_12, B_21, B_22, where B = [[ B_11, B_12],
                                                                   [ B_21, B_22]]
        P_1 = Strassen(A_11, B_12 - B_22, n/2)
        P_2 = Strassen(A_11 + A_12, B_22, n/2)
        P_3 = Strassen(A_21 + A_22, B_11, n/2)
        P_4 = Strassen(A_22, B_21 - B_11, n/2)
        P_5 = Strassen(A_11 + A_22, B_11 + B_22, n/2)
        P_6 = Strassen(A_12 - A_22, B_21 + B_22, n/2)
        P_7 = Strassen(A_11 - A_21, B_11 + B_12, n/2)
        C_11 = P_5 + P_4 - P_2 + P_6
        C_12 = P_1 + P_2
        C_21 = P_3 + P_4
        C_22 = P_1 + P_5 - P_3 - P_7
        C = [[ C_11, C_12],
             [ C_21, C_22]]
        return C
    }
#+END_SRC
This algorithm uses 18 matrix addition/subtraction operations on matrices of size (n/2 X n/2), so the computation time for these is $\theta \left(18\left( \frac{n}{2} \right)^2 \right)$, which is equal to $\theta (4.5\, n^2)$, which is equal to $\theta (n^2)$.
\\
There are 7 recursive calls to Strassen(n/2) in this function, therefore the time complexity is
\[ T(n) = 7T(n/2) + \theta (n^2) \]
Using the master theorem,
\[ T(n) = \theta (n^{\log_2 7}) \]
\[ T(n) \approx \theta (n^{2.807}) \]
+ /*NOTE* : The divide and conquer approach and Strassen's algorithm typically use n == 1 as their terminating condition, since multiplying 1 X 1 matrices only requires the product of the single elements they contain; that product is the single element of the resultant 1 X 1 matrix./
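As an illustration, the seven products can be written out directly for the 2 X 2 case, where each quadrant A_11, ..., B_22 is a single number. The function name ~strassen_2x2~ is a hypothetical choice for this sketch.
#+BEGIN_SRC C
#include <stdio.h>

/* Strassen's seven products for a 2 x 2 matrix, where each
   quadrant of A and B is a single scalar. */
void strassen_2x2(const int A[2][2], const int B[2][2], int C[2][2]){
    int p1 = A[0][0] * (B[0][1] - B[1][1]);
    int p2 = (A[0][0] + A[0][1]) * B[1][1];
    int p3 = (A[1][0] + A[1][1]) * B[0][0];
    int p4 = A[1][1] * (B[1][0] - B[0][0]);
    int p5 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    int p6 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);
    int p7 = (A[0][0] - A[1][0]) * (B[0][0] + B[0][1]);
    C[0][0] = p5 + p4 - p2 + p6;
    C[0][1] = p1 + p2;
    C[1][0] = p3 + p4;
    C[1][1] = p1 + p5 - p3 - p7;
}

int main(void){
    int A[2][2] = { {1, 2}, {3, 4} };
    int B[2][2] = { {5, 6}, {7, 8} };
    int C[2][2];
    strassen_2x2(A, B, C);
    /* Direct multiplication gives {{19, 22}, {43, 50}} */
    printf("%d %d %d %d\n", C[0][0], C[0][1], C[1][0], C[1][1]);
    return 0;
}
#+END_SRC
Seven multiplications replace the eight of plain divide and conquer, at the cost of extra additions and subtractions.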
* Sorting algorithms
** In-place vs out-of-place sorting algorithms
If the space complexity of a sorting algorithm is $\theta (1)$, the algorithm is called an in-place sorting algorithm; otherwise it is called an out-of-place sorting algorithm.
** Bubble sort
The simplest sorting algorithm; it is easy to implement, so it is useful when the number of elements to sort is small. It is an in-place sorting algorithm. We compare pairs of adjacent elements from the array and swap them into the correct order. Suppose the input has n elements.
+ For the first pass of the array, we do *n-1* comparisons between adjacent pairs: the 1st and 2nd element, then the 2nd and 3rd, then the 3rd and 4th, up to the comparison between the (n-1)th and nth element, swapping positions according to size. /A single pass puts a single element at its correct position at the end of the list./
+ For the second pass of the array, we do *n-2* comparisons because the last element is already in its place after the first pass.
+ Similarly, we continue until we only do a single comparison.
+ The total number of comparisons is
\[ \text{Total comparisons} = (n - 1) + (n - 2) + (n - 3) + \cdots + 2 + 1 \]
\[ \text{Total comparisons} = \frac{n(n-1)}{2} \]
Therefore, *time complexity is $\theta (n^2)$*
#+BEGIN_SRC C
void bubble_sort(int array[], int len){
    /* i is the number of comparisons in the pass */
    for(int i = len - 1; i >= 1; i--){
        /* j is used to traverse the list */
        for(int j = 0; j < i; j++){
            if(array[j] > array[j+1]){
                int temp = array[j];
                array[j] = array[j+1];
                array[j+1] = temp;
            }
        }
    }
}
#+END_SRC
*/The minimum number of swaps can be calculated by checking how many swap operations are needed to get each element to its correct position./* This can be done by counting the number of smaller elements towards the right of the given element (for a descending sort, count the larger elements towards the right instead). Example for an ascending sort,
| Array                                              | 21 | 16 | 17 |  8 | 31 |
| Minimum number of swaps to get in correct position |  3 |  1 |  1 |  0 |  0 |
Therefore, the minimum number of swaps is (3 + 1 + 1 + 0 + 0), which is equal to 5 swaps.
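This counting can be sketched in C; the total over all elements is the number of inversions in the array, which is exactly the number of swaps bubble sort performs (~count_min_swaps~ is a hypothetical name for this sketch).
#+BEGIN_SRC C
#include <stdio.h>

/* For each element, count the smaller elements to its right;
   the total is the array's inversion count, i.e. the number of
   swaps bubble sort makes while sorting in ascending order. */
int count_min_swaps(const int array[], int len){
    int total = 0;
    for(int i = 0; i < len; i++){
        for(int j = i + 1; j < len; j++){
            if(array[j] < array[i])
                total++;   /* array[i] must bubble past array[j] */
        }
    }
    return total;
}

int main(void){
    int array[] = {21, 16, 17, 8, 31};
    printf("%d\n", count_min_swaps(array, 5));
    return 0;
}
#+END_SRC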
+ */Reducing the number of comparisons in the implementation/* : at the end of every pass, check the number of swaps. *If the number of swaps in a pass is zero, then the array is sorted.* This implementation does not give the minimum number of comparisons, but it reduces the number of comparisons compared to the default implementation. It reduces the time complexity to $\theta (n)$ for the best case, since we then only need a single pass through the array.
Recursive time complexity : $T(n) = T(n-1) + n - 1$
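The swap-count check described above can be sketched as follows (~bubble_sort_early_exit~ is a hypothetical name; on an already sorted input the outer loop stops after a single pass).
#+BEGIN_SRC C
#include <stdio.h>

/* Bubble sort with early exit: stop as soon as a pass performs
   no swaps, since the array is then already sorted. */
void bubble_sort_early_exit(int array[], int len){
    for(int i = len - 1; i >= 1; i--){
        int swaps = 0;
        for(int j = 0; j < i; j++){
            if(array[j] > array[j+1]){
                int temp = array[j];
                array[j] = array[j+1];
                array[j+1] = temp;
                swaps++;
            }
        }
        if(swaps == 0)   /* no swaps in this pass: done */
            break;
    }
}

int main(void){
    int array[] = {21, 16, 17, 8, 31};
    bubble_sort_early_exit(array, 5);
    for(int i = 0; i < 5; i++)
        printf("%d ", array[i]);
    printf("\n");
    return 0;
}
#+END_SRC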