<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<!-- 2023-07-24 Mon 21:32 -->
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Algorithms</title>
<meta name="author" content="Anmol Nawani" />
<meta name="generator" content="Org Mode" />
<style>
#content { max-width: 60em; margin: auto; }
.title { text-align: center;
margin-bottom: .2em; }
.subtitle { text-align: center;
font-size: medium;
font-weight: bold;
margin-top:0; }
.todo { font-family: monospace; color: red; }
.done { font-family: monospace; color: green; }
.priority { font-family: monospace; color: orange; }
.tag { background-color: #eee; font-family: monospace;
padding: 2px; font-size: 80%; font-weight: normal; }
.timestamp { color: #bebebe; }
.timestamp-kwd { color: #5f9ea0; }
.org-right { margin-left: auto; margin-right: 0px; text-align: right; }
.org-left { margin-left: 0px; margin-right: auto; text-align: left; }
.org-center { margin-left: auto; margin-right: auto; text-align: center; }
.underline { text-decoration: underline; }
#postamble p, #preamble p { font-size: 90%; margin: .2em; }
p.verse { margin-left: 3%; }
pre {
border: 1px solid #e6e6e6;
border-radius: 3px;
background-color: #f2f2f2;
padding: 8pt;
font-family: monospace;
overflow: auto;
margin: 1.2em;
}
pre.src {
position: relative;
overflow: auto;
}
pre.src:before {
display: none;
position: absolute;
top: -8px;
right: 12px;
padding: 3px;
color: #555;
background-color: #f2f2f299;
}
pre.src:hover:before { display: inline; margin-top: 14px;}
/* Languages per Org manual */
pre.src-asymptote:before { content: 'Asymptote'; }
pre.src-awk:before { content: 'Awk'; }
pre.src-authinfo::before { content: 'Authinfo'; }
pre.src-C:before { content: 'C'; }
/* pre.src-C++ doesn't work in CSS */
pre.src-clojure:before { content: 'Clojure'; }
pre.src-css:before { content: 'CSS'; }
pre.src-D:before { content: 'D'; }
pre.src-ditaa:before { content: 'ditaa'; }
pre.src-dot:before { content: 'Graphviz'; }
pre.src-calc:before { content: 'Emacs Calc'; }
pre.src-emacs-lisp:before { content: 'Emacs Lisp'; }
pre.src-fortran:before { content: 'Fortran'; }
pre.src-gnuplot:before { content: 'gnuplot'; }
pre.src-haskell:before { content: 'Haskell'; }
pre.src-hledger:before { content: 'hledger'; }
pre.src-java:before { content: 'Java'; }
pre.src-js:before { content: 'Javascript'; }
pre.src-latex:before { content: 'LaTeX'; }
pre.src-ledger:before { content: 'Ledger'; }
pre.src-lisp:before { content: 'Lisp'; }
pre.src-lilypond:before { content: 'Lilypond'; }
pre.src-lua:before { content: 'Lua'; }
pre.src-matlab:before { content: 'MATLAB'; }
pre.src-mscgen:before { content: 'Mscgen'; }
pre.src-ocaml:before { content: 'Objective Caml'; }
pre.src-octave:before { content: 'Octave'; }
pre.src-org:before { content: 'Org mode'; }
pre.src-oz:before { content: 'OZ'; }
pre.src-plantuml:before { content: 'Plantuml'; }
pre.src-processing:before { content: 'Processing.js'; }
pre.src-python:before { content: 'Python'; }
pre.src-R:before { content: 'R'; }
pre.src-ruby:before { content: 'Ruby'; }
pre.src-sass:before { content: 'Sass'; }
pre.src-scheme:before { content: 'Scheme'; }
pre.src-screen:before { content: 'Gnu Screen'; }
pre.src-sed:before { content: 'Sed'; }
pre.src-sh:before { content: 'shell'; }
pre.src-sql:before { content: 'SQL'; }
pre.src-sqlite:before { content: 'SQLite'; }
/* additional languages in org.el's org-babel-load-languages alist */
pre.src-forth:before { content: 'Forth'; }
pre.src-io:before { content: 'IO'; }
pre.src-J:before { content: 'J'; }
pre.src-makefile:before { content: 'Makefile'; }
pre.src-maxima:before { content: 'Maxima'; }
pre.src-perl:before { content: 'Perl'; }
pre.src-picolisp:before { content: 'Pico Lisp'; }
pre.src-scala:before { content: 'Scala'; }
pre.src-shell:before { content: 'Shell Script'; }
pre.src-ebnf2ps:before { content: 'ebfn2ps'; }
/* additional language identifiers per "defun org-babel-execute"
in ob-*.el */
pre.src-cpp:before { content: 'C++'; }
pre.src-abc:before { content: 'ABC'; }
pre.src-coq:before { content: 'Coq'; }
pre.src-groovy:before { content: 'Groovy'; }
/* additional language identifiers from org-babel-shell-names in
ob-shell.el: ob-shell is the only babel language using a lambda to put
the execution function name together. */
pre.src-bash:before { content: 'bash'; }
pre.src-csh:before { content: 'csh'; }
pre.src-ash:before { content: 'ash'; }
pre.src-dash:before { content: 'dash'; }
pre.src-ksh:before { content: 'ksh'; }
pre.src-mksh:before { content: 'mksh'; }
pre.src-posh:before { content: 'posh'; }
/* Additional Emacs modes also supported by the LaTeX listings package */
pre.src-ada:before { content: 'Ada'; }
pre.src-asm:before { content: 'Assembler'; }
pre.src-caml:before { content: 'Caml'; }
pre.src-delphi:before { content: 'Delphi'; }
pre.src-html:before { content: 'HTML'; }
pre.src-idl:before { content: 'IDL'; }
pre.src-mercury:before { content: 'Mercury'; }
pre.src-metapost:before { content: 'MetaPost'; }
pre.src-modula-2:before { content: 'Modula-2'; }
pre.src-pascal:before { content: 'Pascal'; }
pre.src-ps:before { content: 'PostScript'; }
pre.src-prolog:before { content: 'Prolog'; }
pre.src-simula:before { content: 'Simula'; }
pre.src-tcl:before { content: 'tcl'; }
pre.src-tex:before { content: 'TeX'; }
pre.src-plain-tex:before { content: 'Plain TeX'; }
pre.src-verilog:before { content: 'Verilog'; }
pre.src-vhdl:before { content: 'VHDL'; }
pre.src-xml:before { content: 'XML'; }
pre.src-nxml:before { content: 'XML'; }
/* add a generic configuration mode; LaTeX export needs an additional
(add-to-list 'org-latex-listings-langs '(conf " ")) in .emacs */
pre.src-conf:before { content: 'Configuration File'; }
table { border-collapse:collapse; }
caption.t-above { caption-side: top; }
caption.t-bottom { caption-side: bottom; }
td, th { vertical-align:top; }
th.org-right { text-align: center; }
th.org-left { text-align: center; }
th.org-center { text-align: center; }
td.org-right { text-align: right; }
td.org-left { text-align: left; }
td.org-center { text-align: center; }
dt { font-weight: bold; }
.footpara { display: inline; }
.footdef { margin-bottom: 1em; }
.figure { padding: 1em; }
.figure p { text-align: center; }
.equation-container {
display: table;
text-align: center;
width: 100%;
}
.equation {
vertical-align: middle;
}
.equation-label {
display: table-cell;
text-align: right;
vertical-align: middle;
}
.inlinetask {
padding: 10px;
border: 2px solid gray;
margin: 10px;
background: #ffffcc;
}
#org-div-home-and-up
{ text-align: right; font-size: 70%; white-space: nowrap; }
textarea { overflow-x: auto; }
.linenr { font-size: smaller }
.code-highlighted { background-color: #ffff00; }
.org-info-js_info-navigation { border-style: none; }
#org-info-js_console-label
{ font-size: 10px; font-weight: bold; white-space: nowrap; }
.org-info-js_search-highlight
{ background-color: #ffff00; color: #000000; font-weight: bold; }
.org-svg { }
</style>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
displayAlign: "center",
displayIndent: "0em",
"HTML-CSS": { scale: 100,
linebreaks: { automatic: "false" },
webFont: "TeX"
},
SVG: {scale: 100,
linebreaks: { automatic: "false" },
font: "TeX"},
NativeMML: {scale: 100},
TeX: { equationNumbers: {autoNumber: "AMS"},
MultLineWidth: "85%",
TagSide: "right",
TagIndent: ".8em"
}
});
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS_HTML"></script>
</head>
<body>
<div id="content" class="content">
<h1 class="title">Algorithms</h1>
<div id="table-of-contents" role="doc-toc">
<h2>Table of Contents</h2>
<div id="text-table-of-contents" role="doc-toc">
<ul>
<li><a href="#org6f0ef56">1. Data structure and Algorithm</a></li>
<li><a href="#org3b1ddf6">2. Characteristics of Algorithms</a></li>
<li><a href="#orgf179581">3. Behaviour of algorithm</a>
<ul>
<li><a href="#org39e6ae4">3.1. Best, Worst and Average Cases</a></li>
<li><a href="#org9fefef0">3.2. Bounds of algorithm</a></li>
</ul>
</li>
<li><a href="#orgf1696fc">4. Asymptotic Notations</a>
<ul>
<li><a href="#org6a6acb8">4.1. Big-Oh Notation [O]</a></li>
<li><a href="#org9f980c7">4.2. Omega Notation [ \(\Omega\) ]</a></li>
<li><a href="#org32c4a9a">4.3. Theta Notation [ \(\theta\) ]</a></li>
<li><a href="#org3230053">4.4. Little-Oh Notation [o]</a></li>
<li><a href="#org46e6cac">4.5. Little-Omega Notation [ \(\omega\) ]</a></li>
</ul>
</li>
<li><a href="#orgfa1a112">5. Comparing Growth rate of funtions</a>
<ul>
<li><a href="#orgb56535e">5.1. Applying limit</a></li>
<li><a href="#org30701e9">5.2. Using logarithm</a></li>
<li><a href="#org3c18fe1">5.3. Common funtions</a></li>
</ul>
</li>
<li><a href="#org5ff6c34">6. Properties of Asymptotic Notations</a>
<ul>
<li><a href="#orgd51bb66">6.1. Big-Oh</a></li>
<li><a href="#org1646053">6.2. Properties</a></li>
</ul>
</li>
<li><a href="#org10296f6">7. Calculating time complexity of algorithm</a>
<ul>
<li><a href="#orga1ac140">7.1. Sequential instructions</a></li>
<li><a href="#orgac46ed6">7.2. Iterative instructions</a></li>
<li><a href="#orgf41dd21">7.3. An example for time complexities of nested loops</a></li>
</ul>
</li>
<li><a href="#org8324828">8. Time complexity of recursive instructions</a>
<ul>
<li><a href="#org8fa29ba">8.1. Time complexity in recursive form</a></li>
</ul>
</li>
<li><a href="#org3703328">9. Solving Recursive time complexities</a>
<ul>
<li><a href="#org7893443">9.1. Iterative method</a></li>
<li><a href="#org7f3013f">9.2. Master Theorem for Subtract recurrences</a></li>
<li><a href="#orgc508d87">9.3. Master Theorem for divide and conquer recurrences</a></li>
</ul>
</li>
<li><a href="#orgce2f29e">10. Square root recurrence relations</a>
<ul>
<li><a href="#orgd6a4ff2">10.1. Iterative method</a></li>
<li><a href="#orga309b9e">10.2. Master Theorem for square root recurrence relations</a></li>
</ul>
</li>
<li><a href="#org7d79637">11. Extended Master's theorem for time complexity of recursive algorithms</a>
<ul>
<li><a href="#org7c8438d">11.1. For (k = -1)</a></li>
<li><a href="#org91a21bf">11.2. For (k &lt; -1)</a></li>
</ul>
</li>
<li><a href="#org4cd2e2d">12. Tree method for time complexity of recursive algorithms</a>
<ul>
<li><a href="#org1d1c9a3">12.1. Avoiding tree method</a></li>
</ul>
</li>
<li><a href="#org4fdc5de">13. Space complexity</a>
<ul>
<li><a href="#org1bdab7b">13.1. Auxiliary space complexity</a></li>
</ul>
</li>
<li><a href="#org26eb543">14. Calculating auxiliary space complexity</a>
<ul>
<li><a href="#org2d4751e">14.1. Data Space used</a></li>
<li><a href="#org32d747b">14.2. Code Execution space in recursive algorithm</a></li>
</ul>
</li>
<li><a href="#org423e1e2">15. Divide and Conquer algorithms</a></li>
<li><a href="#orgeec7ed3">16. Searching for element in array</a>
<ul>
<li><a href="#orgf5c47f0">16.1. Straight forward approach for searching (<b>Linear Search</b>)</a></li>
<li><a href="#org53e9b50">16.2. Divide and conquer approach (<b>Binary search</b>)</a></li>
</ul>
</li>
<li><a href="#org3b4deed">17. Max and Min element from array</a>
<ul>
<li><a href="#orged1501e">17.1. Straightforward approach</a></li>
<li><a href="#orgd668fa2">17.2. Divide and conquer approach</a></li>
<li><a href="#orgb190e11">17.3. Efficient single loop approach (Increment by 2)</a></li>
</ul>
</li>
<li><a href="#org0d2bf32">18. Square matrix multiplication</a>
<ul>
<li><a href="#orge92fb56">18.1. Straight forward method</a></li>
<li><a href="#orgd75a384">18.2. Divide and conquer approach</a></li>
<li><a href="#orgbf700f5">18.3. Strassen's algorithm</a></li>
</ul>
</li>
<li><a href="#orgf02b34d">19. Sorting algorithms</a>
<ul>
<li><a href="#orgc2f900d">19.1. In place vs out place sorting algorithm</a></li>
</ul>
</li>
<li><a href="#orgceeb3f1">20. Bubble sort</a></li>
<li><a href="#org6e1f335">21. Selection sort</a>
<ul>
<li><a href="#org78f1644">21.1. Time complexity</a></li>
</ul>
</li>
<li><a href="#orged540c9">22. Insertion sort</a>
<ul>
<li><a href="#org7cfbdd7">22.1. Time complexity</a></li>
</ul>
</li>
<li><a href="#org937bc7e">23. Inversion in array</a>
<ul>
<li><a href="#orgca4bf29">23.1. Relation between time complexity of insertion sort and inversion</a></li>
</ul>
</li>
<li><a href="#org8edc47c">24. Quick sort</a>
<ul>
<li><a href="#org8e462f5">24.1. Lomuto partition</a></li>
<li><a href="#orgad5fe08">24.2. Time complexity of quicksort</a></li>
<li><a href="#org029ad1b">24.3. Number of comparisions</a></li>
</ul>
</li>
<li><a href="#org9d7e721">25. Merging two sorted arrays (2-Way Merge)</a></li>
<li><a href="#org7a957cb">26. Merging k sorted arrays (k-way merge)</a></li>
<li><a href="#orgb932252">27. Merge sort</a>
<ul>
<li><a href="#org49114b6">27.1. Time complexity</a></li>
<li><a href="#org87aa6fd">27.2. Space complexity</a></li>
</ul>
</li>
<li><a href="#org3aabfd6">28. Stable and unstable sorting algorithms</a></li>
<li><a href="#org58c0022">29. Non-comparitive sorting algorithms</a>
<ul>
<li><a href="#org7078369">29.1. Counting sort</a></li>
<li><a href="#org2198ab4">29.2. Radix sort</a></li>
<li><a href="#orge88fb20">29.3. Bucket sort</a>
<ul>
<li><a href="#org8b4d13b">29.3.1. Time complexity</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</div>
<div id="outline-container-org6f0ef56" class="outline-2">
<h2 id="org6f0ef56"><span class="section-number-2">1.</span> Data structure and Algorithm</h2>
<div class="outline-text-2" id="text-1">
<ul class="org-ul">
<li>A <b>data structure</b> is a particular way of storing and organizing data, so that the data can be accessed and modified effectively.</li>
<li>A procedure to solve a specific problem is called an <b>algorithm</b>.</li>
</ul>
<p>
During programming we use data structures and algorithms that work on that data.
</p>
</div>
</div>
<div id="outline-container-org3b1ddf6" class="outline-2">
<h2 id="org3b1ddf6"><span class="section-number-2">2.</span> Characteristics of Algorithms</h2>
<div class="outline-text-2" id="text-2">
<p>
An algorithm has the following characteristics.
</p>
<ul class="org-ul">
<li><b>Input</b> : Zero or more quantities are externally supplied to the algorithm.</li>
<li><b>Output</b> : An algorithm should produce at least one output.</li>
<li><b>Finiteness</b> : The algorithm should terminate after a finite number of steps. It should not run infinitely.</li>
<li><b>Definiteness</b> : The algorithm should be clear and unambiguous. Every instruction of the algorithm must have a single meaning.</li>
<li><b>Effectiveness</b> : The algorithm must be made of very basic and simple operations that a computer can perform.</li>
<li><b>Language Independence</b> : An algorithm is language independent and can be implemented in any programming language.</li>
</ul>
</div>
</div>
<div id="outline-container-orgf179581" class="outline-2">
<h2 id="orgf179581"><span class="section-number-2">3.</span> Behaviour of algorithm</h2>
<div class="outline-text-2" id="text-3">
<p>
The behaviour of an algorithm is the analysis of the algorithm on the basis of <b>Time</b> and <b>Space</b>.
</p>
<ul class="org-ul">
<li><b>Time complexity</b> : Amount of time required to run the algorithm.</li>
<li><b>Space complexity</b> : Amount of space (memory) required to execute the algorithm.</li>
</ul>
<p>
The behaviour of an algorithm can be used to compare two algorithms which solve the same problem.
<br />
Preference is traditionally given to the better time complexity, but the better space complexity may be preferred depending on requirements.
</p>
</div>
<div id="outline-container-org39e6ae4" class="outline-3">
<h3 id="org39e6ae4"><span class="section-number-3">3.1.</span> Best, Worst and Average Cases</h3>
<div class="outline-text-3" id="text-3-1">
<p>
The input size tells us the size of the input given to the algorithm. Based on the size of the input, the time/storage usage of the algorithm changes. For <b>example</b>, an array with a larger input size (more elements) will take more time to sort.
</p>
<ul class="org-ul">
<li>Best Case : The lowest time/storage usage for the given input size.</li>
<li>Worst Case : The highest time/storage usage for the given input size.</li>
<li>Average Case : The average time/storage usage for the given input size.</li>
</ul>
</div>
</div>
<div id="outline-container-org9fefef0" class="outline-3">
<h3 id="org9fefef0"><span class="section-number-3">3.2.</span> Bounds of algorithm</h3>
<div class="outline-text-3" id="text-3-2">
<p>
Since algorithms are finite, they take <b>bounded time</b> and <b>bounded space</b>, i.e., there is a minimum and a maximum amount of time/space they can take. These bounds are the upper bound and the lower bound.
</p>
<ul class="org-ul">
<li>Upper Bound : The maximum amount of space/time taken by the algorithm is the upper bound. It is shown as a function of worst cases of time/storage usage over all the possible input sizes.</li>
<li>Lower Bound : The minimum amount of space/time taken by the algorithm is the lower bound. It is shown as a function of best cases of time/storage usage over all the possible input sizes.</li>
</ul>
</div>
</div>
</div>
<div id="outline-container-orgf1696fc" class="outline-2">
<h2 id="orgf1696fc"><span class="section-number-2">4.</span> Asymptotic Notations</h2>
<div class="outline-text-2" id="text-4">
</div>
<div id="outline-container-org6a6acb8" class="outline-3">
<h3 id="org6a6acb8"><span class="section-number-3">4.1.</span> Big-Oh Notation [O]</h3>
<div class="outline-text-3" id="text-4-1">
<ul class="org-ul">
<li>The Big Oh notation is used to define the upper bound of an algorithm.</li>
<li>Given a non-negative function f(n) and another non-negative function g(n), we say that \(f(n) = O(g(n))\) if there exists a positive number \(n_0\) and a positive constant \(c\), such that \[ f(n) \le c.g(n) \ \ \forall n \ge n_0 \]</li>
<li>So if growth rate of g(n) is greater than or equal to growth rate of f(n), then \(f(n) = O(g(n))\).</li>
</ul>
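<p>
For example (an illustrative worked example), take \(f(n) = 3n + 2\) and \(g(n) = n\). Choosing \(c = 4\) and \(n_0 = 2\),
\[ 3n + 2 \le 4n \ \ \forall n \ge 2 \]
so \(f(n) = O(n)\).
</p>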
</div>
</div>
<div id="outline-container-org9f980c7" class="outline-3">
<h3 id="org9f980c7"><span class="section-number-3">4.2.</span> Omega Notation [ \(\Omega\) ]</h3>
<div class="outline-text-3" id="text-4-2">
<ul class="org-ul">
<li>It is used to show the lower bound of the algorithm.</li>
<li>For any positive integer \(n_0\) and a positive constant \(c\), we say that, \(f(n) = \Omega (g(n))\) if \[ f(n) \ge c.g(n) \ \ \forall n \ge n_0 \]</li>
<li>So growth rate of \(g(n)\) should be less than or equal to growth rate of \(f(n)\)</li>
</ul>
<p>
<b>Note</b> : If \(f(n) = O(g(n))\) then \(g(n) = \Omega (f(n))\)
</p>
</div>
</div>
<div id="outline-container-org32c4a9a" class="outline-3">
<h3 id="org32c4a9a"><span class="section-number-3">4.3.</span> Theta Notation [ \(\theta\) ]</h3>
<div class="outline-text-3" id="text-4-3">
<ul class="org-ul">
<li>It is used to provide the asymptotic <b>equal bound</b>.</li>
<li>\(f(n) = \theta (g(n))\) if there exists a positive integer \(n_0\) and a positive constants \(c_1\) and \(c_2\) such that \[ c_1 . g(n) \le f(n) \le c_2 . g(n) \ \ \forall n \ge n_0 \]</li>
<li>So the growth rate of \(f(n)\) and \(g(n)\) should be equal.</li>
</ul>
<p>
<b>Note</b> : So if \(f(n) = O(g(n))\) and \(f(n) = \Omega (g(n))\), then \(f(n) = \theta (g(n))\)
</p>
</div>
</div>
<div id="outline-container-org3230053" class="outline-3">
<h3 id="org3230053"><span class="section-number-3">4.4.</span> Little-Oh Notation [o]</h3>
<div class="outline-text-3" id="text-4-4">
<ul class="org-ul">
<li>The little o notation defines the strict upper bound of an algorithm.</li>
<li>We say that \(f(n) = o(g(n))\) if there exists positive integer \(n_0\) and positive constant \(c\) such that, \[ f(n) < c.g(n) \ \ \forall n \ge n_0 \]</li>
<li>Notice how the condition is &lt;, rather than the \(\le\) used in Big-Oh. So the growth rate of \(g(n)\) is strictly greater than that of \(f(n)\).</li>
</ul>
</div>
</div>
<div id="outline-container-org46e6cac" class="outline-3">
<h3 id="org46e6cac"><span class="section-number-3">4.5.</span> Little-Omega Notation [ \(\omega\) ]</h3>
<div class="outline-text-3" id="text-4-5">
<ul class="org-ul">
<li>The little omega notation defines the strict lower bound of an algorithm.</li>
<li>We say that \(f(n) = \omega (g(n))\) if there exists positive integer \(n_0\) and positive constant \(c\) such that, \[ f(n) > c.g(n) \ \ \forall n \ge n_0 \]</li>
<li>Notice how the condition is &gt;, rather than the \(\ge\) used in Big-Omega. So the growth rate of \(g(n)\) is strictly less than that of \(f(n)\).</li>
</ul>
</div>
</div>
</div>
<div id="outline-container-orgfa1a112" class="outline-2">
<h2 id="orgfa1a112"><span class="section-number-2">5.</span> Comparing Growth rate of funtions</h2>
<div class="outline-text-2" id="text-5">
</div>
<div id="outline-container-orgb56535e" class="outline-3">
<h3 id="orgb56535e"><span class="section-number-3">5.1.</span> Applying limit</h3>
<div class="outline-text-3" id="text-5-1">
<p>
To compare two functions \(f(n)\) and \(g(n)\), we can use the limit
\[ \lim_{n\to\infty} \frac{f(n)}{g(n)} \]
</p>
<ul class="org-ul">
<li>If result is 0 then growth of \(g(n)\) &gt; growth of \(f(n)\)</li>
<li>If result is \(\infty\) then growth of \(g(n)\) &lt; growth of \(f(n)\)</li>
<li>If result is any finite number (constant), then growth of \(g(n)\) = growth of \(f(n)\)</li>
</ul>
<p>
<b>Note</b> : L'Hôpital's rule can be used in this limit.
</p>
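<p>
For example, comparing \(f(n) = n.log(n)\) and \(g(n) = n^2\),
\[ \lim_{n\to\infty} \frac{n.log(n)}{n^2} = \lim_{n\to\infty} \frac{log(n)}{n} = \lim_{n\to\infty} \frac{1/n}{1} = 0 \]
where L'Hôpital's rule gives the last step. Since the result is 0, growth of \(n^2\) &gt; growth of \(n.log(n)\).
</p>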
</div>
</div>
<div id="outline-container-org30701e9" class="outline-3">
<h3 id="org30701e9"><span class="section-number-3">5.2.</span> Using logarithm</h3>
<div class="outline-text-3" id="text-5-2">
<p>
Using logarithms can be useful for comparing exponential functions. When comparing functions \(f(n)\) and \(g(n)\),
</p>
<ul class="org-ul">
<li>If growth of \(\log(f(n))\) is greater than growth of \(\log(g(n))\), then growth of \(f(n)\) is greater than growth of \(g(n)\)</li>
<li>If growth of \(\log(f(n))\) is less than growth of \(\log(g(n))\), then growth of \(f(n)\) is less than growth of \(g(n)\)</li>
<li>When using log for comparing growth, comparing the constants after applying log is also required. For example, if the functions are \(2^n\) and \(3^n\), then their logs are \(n.log(2)\) and \(n.log(3)\). Since \(log(2) < log(3)\), the growth rate of \(3^n\) is higher.</li>
<li>If the growth rates are equal after applying log, we cannot decide which function grows faster.</li>
</ul>
</div>
</div>
<div id="outline-container-org3c18fe1" class="outline-3">
<h3 id="org3c18fe1"><span class="section-number-3">5.3.</span> Common funtions</h3>
<div class="outline-text-3" id="text-5-3">
<p>
Commonly, growth rates in increasing order are
\[ c < c.log(log(n)) < c.log(n) < c.n < n.log(n) < c.n^2 < c.n^3 < c.n^4 ... \]
\[ n^c < c^n < n! < n^n \]
Where \(c\) is any constant.
</p>
</div>
</div>
</div>
<div id="outline-container-org5ff6c34" class="outline-2">
<h2 id="org5ff6c34"><span class="section-number-2">6.</span> Properties of Asymptotic Notations</h2>
<div class="outline-text-2" id="text-6">
</div>
<div id="outline-container-orgd51bb66" class="outline-3">
<h3 id="orgd51bb66"><span class="section-number-3">6.1.</span> Big-Oh</h3>
<div class="outline-text-3" id="text-6-1">
<ul class="org-ul">
<li><b>Product</b> : \[ Given\ f_1 = O(g_1)\ \ and\ f_2 = O(g_2) \implies f_1 f_2 = O(g_1 g_2) \] \[ Also\ f.O(g) = O(f g) \]</li>
<li><b>Sum</b> : For a sum of two functions, the big-oh can be represented by only the function having the highest growth rate. \[ O(f_1 + f_2 + ... + f_i) = O(max\ growth\ rate(f_1, f_2, .... , f_i )) \]</li>
<li><b>Constants</b> : For a constant \(c\), \[ O(c.g(n)) = O(g(n)) \] This is because constants don't affect the growth rate.</li>
</ul>
</div>
</div>
<div id="outline-container-org1646053" class="outline-3">
<h3 id="org1646053"><span class="section-number-3">6.2.</span> Properties</h3>
<div class="outline-text-3" id="text-6-2">
<div id="orgbbd0668" class="figure">
<p><img src="lectures/imgs/asymptotic-notations-properties.png" alt="asymptotic-notations-properties.png" />
</p>
</div>
<ul class="org-ul">
<li><b>Reflexive</b> : \(f(n) = O(f(n))\) and \(f(n) = \Omega (f(n))\) and \(f(n) = \theta (f(n))\)</li>
<li><b>Symmetric</b> : If \(f(n) = \theta (g(n))\) then \(g(n) = \theta (f(n))\)</li>
<li><b>Transitive</b> : If \(f(n) = O(g(n))\) and \(g(n) = O(h(n))\) then \(f(n) = O(h(n))\)</li>
<li><b>Transpose</b> : If \(f(n) = O(g(n))\) then we can also conclude that \(g(n) = \Omega (f(n))\) so we say Big-Oh is transpose of Big-Omega and vice-versa.</li>
<li><b>Antisymmetric</b> : If \(f(n) = O(g(n))\) and \(g(n) = O(f(n))\) then we conclude that \(f(n) = g(n)\)</li>
<li><b>Asymmetric</b> : If \(f(n) = \omega (g(n))\) then we can conclude that \(g(n) \ne \omega (f(n))\)</li>
</ul>
</div>
</div>
</div>
<div id="outline-container-org10296f6" class="outline-2">
<h2 id="org10296f6"><span class="section-number-2">7.</span> Calculating time complexity of algorithm</h2>
<div class="outline-text-2" id="text-7">
<p>
We will look at three types of situations
</p>
<ul class="org-ul">
<li>Sequential instructions</li>
<li>Iterative instructions</li>
<li>Recursive instructions</li>
</ul>
</div>
<div id="outline-container-orga1ac140" class="outline-3">
<h3 id="orga1ac140"><span class="section-number-3">7.1.</span> Sequential instructions</h3>
<div class="outline-text-3" id="text-7-1">
<p>
A sequential set of instructions are instructions in a sequence without iterations and recursions. It is a simple block of instructions with no branches. A sequential set of instructions has <b>time complexity of O(1)</b>, i.e., it has <b>constant time complexity</b>.
</p>
</div>
</div>
<div id="outline-container-orgac46ed6" class="outline-3">
<h3 id="orgac46ed6"><span class="section-number-3">7.2.</span> Iterative instructions</h3>
<div class="outline-text-3" id="text-7-2">
<p>
A set of instructions in a loop. Iterative instructions can have different complexities based on how many iterations occur, which depends on the input size.
</p>
<ul class="org-ul">
<li>For a fixed number of iterations (number of iterations known at compile time, i.e. independent of the input size), the time complexity is constant, O(1). For example, for(int i = 0; i &lt; 100; i++) { &#x2026; } will always have 100 iterations, so constant time complexity.</li>
<li>For n iterations ( n is the input size ), the time complexity is O(n). For example, the loop for(int i = 0; i &lt; n; i++){ &#x2026; } will have n iterations where n is the input size, so the complexity is O(n). The loop for(int i = 0; i &lt; n/2; i++){&#x2026;} also has time complexity O(n) because the loop does n/2 iterations and the constant 1/2 is dropped in big-oh notation.</li>
<li>For a loop like for(int i = 1; i &lt;= n; i = i*2){&#x2026;} the value of i is updated as *=2, so the number of iterations will be \(log_2 (n)\). Therefore, the time complexity is \(O(log_2 (n))\).</li>
<li>For a loop like for(int i = n; i &gt; 1; i = i/2){&#x2026;} the value of i is updated as /=2, so the number of iterations will be \(log_2 (n)\). Therefore, the time complexity is \(O(log_2 (n))\).</li>
</ul>
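<p>
The logarithmic iteration count above can be checked empirically. The sketch below (added for illustration, not part of the original notes) counts the iterations of the doubling loop; for \(n \ge 1\) the count equals \(\lfloor log_2 (n) \rfloor + 1\), which is \(O(log_2 (n))\).
</p>
<div class="org-src-container">
<pre class="src src-C">/* Counts how many times the body of
   for(int i = 1; i &lt;= n; i *= 2) { ... } executes. */
int doubling_iterations(int n)
{
    int count = 0;
    for (int i = 1; i &lt;= n; i *= 2)
        count++;
    return count; /* floor(log2(n)) + 1 for n &gt;= 1 */
}
</pre>
</div>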
<p>
<b><span class="underline">Nested Loops</span></b>
<br />
</p>
<ul class="org-ul">
<li>If the <b>inner loop iterator doesn't depend on the outer loop</b>, the complexity of the inner loop is multiplied by the number of times the outer loop runs to get the time complexity. For example, suppose we have the loop</li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++){
...
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt; n; j *= 2){
...
}
...
}
</pre>
</div>
<p>
Here, the outer loop will run <b>n</b> times and the inner loop will run <b>log(n)</b> times. Therefore, the statements in the inner loop run a total of n.log(n) times.
Thus the time complexity is <b>O(n.log(n))</b>.
</p>
<ul class="org-ul">
<li>If the <b>inner loop and outer loop are related</b>, then the complexity has to be computed using sums. For example, suppose we have the loop</li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt;= n; i++){
...
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt;= i; j++){
...
}
...
}
</pre>
</div>
<p>
Here the outer loop will run with i going from <b>0 to n</b>. The number of times the inner loop runs depends on <b>i</b>.
</p>
<table border="2" cellspacing="0" cellpadding="6" rules="all" frame="border">
<colgroup>
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Value of i</th>
<th scope="col" class="org-left">Number of times inner loop runs</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">0</td>
<td class="org-left">0</td>
</tr>
<tr>
<td class="org-left">1</td>
<td class="org-left">1</td>
</tr>
<tr>
<td class="org-left">2</td>
<td class="org-left">2</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">n</td>
<td class="org-left">n</td>
</tr>
</tbody>
</table>
<p>
So the total number of times inner loop runs = \(1+2+3+....+n\)
<br />
total number of times inner loop runs = \(\frac{n.(n+1)}{2}\)
<br />
total number of times inner loop runs = \(\frac{n^2}{2} + \frac{n}{2}\)
<br />
<b><i>Therefore, time complexity is</i></b> \(O(\frac{n^2}{2} + \frac{n}{2}) = O(n^2)\)
<br />
<b>Another example,</b>
<br />
Suppose we have loop
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 1; i &lt;= n; i++){
...
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 1; j &lt;= i; j *= 2){
...
}
...
}
</pre>
</div>
<p>
The outer loop will run n times with i from <b>1 to n</b>, and the inner loop will run log(i) times.
</p>
<table border="2" cellspacing="0" cellpadding="6" rules="all" frame="border">
<colgroup>
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Value of i</th>
<th scope="col" class="org-left">Number of times inner loop runs</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">1</td>
<td class="org-left">log(1)</td>
</tr>
<tr>
<td class="org-left">2</td>
<td class="org-left">log(2)</td>
</tr>
<tr>
<td class="org-left">3</td>
<td class="org-left">log(3)</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">n</td>
<td class="org-left">log(n)</td>
</tr>
</tbody>
</table>
<p>
Thus, total number of times the inner loop runs is \(log(1) + log(2) + log(3) + ... + log(n)\).
<br />
total number of times inner loop runs = \(log(1.2.3...n)\)
<br />
total number of times inner loop runs = \(log(n!)\)
<br />
Using <b><i>Stirling's approximation</i></b>, we know that \(log(n!) \approx n.log(n) - n + 1\)
<br />
total number of times inner loop runs \(\approx n.log(n) - n + 1\)
<br />
Time complexity = \(O(n.log(n))\)
</p>
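<p>
The \(O(n.log(n))\) result above can be checked by direct counting. A minimal sketch (the function name is ours), which counts exactly how many times the inner loop body runs:
</p>

```c
/* Count the inner-loop runs of the nested loop discussed above:
   outer i = 1..n, inner j doubling from 1 up to i.
   The inner loop runs floor(log2(i)) + 1 times for each i,
   so the total grows like n*log(n). */
long count_inner_runs(int n){
    long count = 0;
    for(int i = 1; i <= n; i++)
        for(int j = 1; j <= i; j *= 2)
            count++;
    return count;
}
```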
</div>
</div>
<div id="outline-container-orgf41dd21" class="outline-3">
<h3 id="orgf41dd21"><span class="section-number-3">7.3.</span> An example for time complexities of nested loops</h3>
<div class="outline-text-3" id="text-7-3">
<p>
Suppose a loop,
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 1; i &lt;= n; <span style="color: #c18401;">i</span> *= 2){
...
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 1; j &lt;= i; j *= 2){
...
}
...
}
</pre>
</div>
<p>
Here, the outer loop will run <b>log(n)</b> times. Let's say that for some given n it runs <b>k</b> times, i.e., let
\[ k = log(n) \]
</p>
<p>
The inner loop will run <b>log(i)</b> times, so number of loops with changing values of i is
</p>
<table border="2" cellspacing="0" cellpadding="6" rules="all" frame="border">
<colgroup>
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Value of i</th>
<th scope="col" class="org-left">Number of times inner loop runs</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">1</td>
<td class="org-left">log(1)</td>
</tr>
<tr>
<td class="org-left">2<sup>1</sup></td>
<td class="org-left">log(2)</td>
</tr>
<tr>
<td class="org-left">2<sup>2</sup></td>
<td class="org-left">2.log(2)</td>
</tr>
<tr>
<td class="org-left">2<sup>3</sup></td>
<td class="org-left">3.log(2)</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">2<sup>k-1</sup></td>
<td class="org-left">(k-1).log(2)</td>
</tr>
</tbody>
</table>
<p>
So the total number of times inner loop runs is \(log(1) + log(2) + 2.log(2) + 3.log(2) + ... + (k-1).log(2)\)
\[ \text{number of times inner loop runs} = log(1) + log(2).[1+2+3+...+(k-1)] \]
\[ \text{number of times inner loop runs} = log(1) + log(2). \frac{(k-1).k}{2} \]
\[ \text{number of times inner loop runs} = log(1) + log(2). \left[ \frac{k^2}{2} - \frac{k}{2} \right] \]
Putting value \(k = log(n)\)
\[ \text{number of times inner loop runs} = log(1) + log(2). \left[ \frac{log^2(n)}{2} - \frac{log(n)}{2} \right] \]
\[ \text{Time complexity} = O(log^2(n)) \]
</p>
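<p>
As before, the result can be verified by direct counting. A small sketch (the function name is ours) for the loop above, where both i and j double:
</p>

```c
/* Count the inner-loop runs for the loop above: outer i doubling
   from 1 to n, inner j doubling from 1 to i. For n = 2^k the count
   is (k+1)(k+2)/2, which grows like log^2(n)/2, i.e. O(log^2(n)). */
long count_inner_runs(int n){
    long count = 0;
    for(int i = 1; i <= n; i *= 2)
        for(int j = 1; j <= i; j *= 2)
            count++;
    return count;
}
```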
</div>
</div>
</div>
<div id="outline-container-org8324828" class="outline-2">
<h2 id="org8324828"><span class="section-number-2">8.</span> Time complexity of recursive instructions</h2>
<div class="outline-text-2" id="text-8">
<p>
To get the time complexity of recursive functions, we first express the time complexity itself in a recursive form.
</p>
</div>
<div id="outline-container-org8fa29ba" class="outline-3">
<h3 id="org8fa29ba"><span class="section-number-3">8.1.</span> Time complexity in recursive form</h3>
<div class="outline-text-3" id="text-8-1">
<p>
We first have to create a way to describe the time complexity of recursive functions in the form of an equation,
\[ T(n) = ( \text{time taken by the recursive calls} ) + ( \text{time taken per call, i.e., the time taken excluding the recursive calls} ) \]
</p>
<ul class="org-ul">
<li>Example, suppose we have a recursive function</li>
</ul>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">fact</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a626a4;">if</span>(n == 0 || n == 1)
<span style="color: #a626a4;">return</span> 1;
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> n * fact(n-1);
}
</pre>
</div>
<p>
In this example, the recursive call is fact(n-1), therefore the time complexity of the recursive call is T(n-1), and the time complexity of the function except for the recursive call is constant (let's assume <b>c</b>). So the time complexity is
\[ T(n) = T(n-1) + c \]
\[ T(1) = T(0) = C\ \text{where C is constant time} \]
</p>
<ul class="org-ul">
<li>Another example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">func</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a626a4;">if</span>(n == 0 || n == 1)
<span style="color: #a626a4;">return</span> 1;
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> func(n - 1) * func(n - 2);
}
</pre>
</div>
<p>
Here, the recursive calls are func(n-1) and func(n-2), therefore the time complexities of the recursive calls are T(n-1) and T(n-2). The time complexity of the function except for the recursive calls is constant (let's assume <b>c</b>), so the time complexity is
\[ T(n) = T(n-1) + T(n-2) + c \]
\[ T(1) = T(0) = C\ \text{where C is constant time} \]
</p>
<ul class="org-ul">
<li>Another example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">func</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">r</span> = 0;
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++)
r += i;
<span style="color: #a626a4;">if</span>(n == 0 || n == 1)
<span style="color: #a626a4;">return</span> r;
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> r * func(n - 1) * func(n - 2);
}
</pre>
</div>
<p>
Here, the recursive calls are func(n-1) and func(n-2), therefore the time complexities of the recursive calls are T(n-1) and T(n-2). The time complexity of the function except for the recursive calls is <b>&theta; (n)</b> because of the for loop, so the time complexity is
</p>
<p>
\[ T(n) = T(n-1) + T(n-2) + n \]
\[ T(1) = T(0) = C\ \text{where C is constant time} \]
</p>
</div>
</div>
</div>
<div id="outline-container-org3703328" class="outline-2">
<h2 id="org3703328"><span class="section-number-2">9.</span> Solving Recursive time complexities</h2>
<div class="outline-text-2" id="text-9">
</div>
<div id="outline-container-org7893443" class="outline-3">
<h3 id="org7893443"><span class="section-number-3">9.1.</span> Iterative method</h3>
<div class="outline-text-3" id="text-9-1">
<ul class="org-ul">
<li>Take for example,</li>
</ul>
<p>
\[ T(1) = T(0) = C\ \text{where C is constant time} \]
\[ T(n) = T(n-1) + c \]
</p>
<p>
We can expand T(n-1).
\[ T(n) = [ T(n - 2) + c ] + c \]
\[ T(n) = T(n-2) + 2.c \]
Then we can expand T(n-2)
\[ T(n) = [ T(n - 3) + c ] + 2.c \]
\[ T(n) = T(n - 3) + 3.c \]
</p>
<p>
So, if we expand it k times, we will get
</p>
<p>
\[ T(n) = T(n - k) + k.c \]
Since we know this recursion <b>ends at T(1)</b>, let's put \(n-k=1\).
Therefore, \(k = n-1\).
\[ T(n) = T(1) + (n-1).c \]
</p>
<p>
Since T(1) = C
\[ T(n) = C + (n-1).c \]
So time complexity is,
\[ T(n) = O(n) \]
</p>
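<p>
This recurrence is exactly the one produced by the factorial function from the earlier section, so we can confirm the \(O(n)\) result by counting calls. A sketch (the counter and wrapper function are ours):
</p>

```c
/* fact(n) makes exactly n calls in total
   (fact(n), fact(n-1), ..., fact(1)),
   matching T(n) = T(n-1) + c => O(n). */
static long calls = 0;

long fact(int n){
    calls++;
    if(n == 0 || n == 1)
        return 1;
    return n * fact(n - 1);
}

long count_fact_calls(int n){
    calls = 0;
    fact(n);
    return calls;
}
```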
<ul class="org-ul">
<li>Another example,</li>
</ul>
<p>
\[ T(1) = C\ \text{where C is constant time} \]
\[ T(n) = T(n-1) + n \]
</p>
<p>
Expanding T(n-1),
\[ T(n) = [ T(n-2) + n - 1 ] + n \]
\[ T(n) = T(n-2) + 2.n - 1 \]
</p>
<p>
Expanding T(n-2),
\[ T(n) = [ T(n-3) + n - 2 ] + 2.n - 1 \]
\[ T(n) = T(n-3) + 3.n - 1 - 2 \]
</p>
<p>
Expanding T(n-3),
\[ T(n) = [ T(n-4) + n - 3 ] + 3.n - 1 - 2 \]
\[ T(n) = T(n-4) + 4.n - 1 - 2 - 3 \]
</p>
<p>
So expanding till T(n-k)
\[ T(n) = T(n-k) + k.n - [ 1 + 2 + 3 + .... + k ] \]
\[ T(n) = T(n-k) + k.n - \frac{k.(k+1)}{2} \]
</p>
<p>
Putting \(n-k=1\). Therefore, \(k=n-1\).
\[ T(n) = T(1) + (n-1).n - \frac{(n-1).(n)}{2} \]
\[ T(n) = C + n^2 - n - \frac{n^2}{2} + \frac{n}{2} \]
</p>
<p>
Time complexity is
\[ T(n) = O(n^2) \]
</p>
</div>
</div>
<div id="outline-container-org7f3013f" class="outline-3">
<h3 id="org7f3013f"><span class="section-number-3">9.2.</span> Master Theorem for Subtract recurrences</h3>
<div class="outline-text-3" id="text-9-2">
<p>
For recurrence relation of type
</p>
<p>
\[ T(n) = c\ for\ n \le 1 \]
\[ T(n) = a.T(n-b) + f(n)\ for\ n > 1 \]
\[ \text{where for f(n) we can say, } f(n) = O(n^k) \]
\[ \text{where, a > 0, b > 0 and k} \ge 0 \]
</p>
<ul class="org-ul">
<li>If a &lt; 1, then T(n) = O(n<sup>k</sup>)</li>
<li>If a = 1, then T(n) = O(n<sup>k+1</sup>)</li>
<li>If a &gt; 1, then T(n) = O(n<sup>k</sup> . a<sup>n/b</sup>)</li>
</ul>
<p>
Example, \[ T(n) = 3T(n-1) + n^2 \]
Here, f(n) = O(n<sup>2</sup>), therefore k = 2,
<br />
Also, a = 3 and b = 1
<br />
Since a &gt; 1, \(T(n) = O(n^2 . 3^n)\)
</p>
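<p>
As a numeric sanity check (a sketch; the function name is ours), iterating \(T(n) = 3T(n-1) + n^2\) shows the ratio \(T(n)/T(n-1)\) approaching a = 3, consistent with the \(3^n\) factor dominating the bound:
</p>

```c
/* Iterate T(n) = 3*T(n-1) + n^2 with T(1) = 1 in floating point.
   Since a = 3 > 1, the 3^n factor dominates, so the ratio of
   successive values tends to 3. */
double subtract_recurrence(int n){
    double t = 1.0;                      /* T(1) */
    for(int i = 2; i <= n; i++)
        t = 3.0 * t + (double)i * (double)i;
    return t;
}
```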
</div>
</div>
<div id="outline-container-orgc508d87" class="outline-3">
<h3 id="orgc508d87"><span class="section-number-3">9.3.</span> Master Theorem for divide and conquer recurrences</h3>
<div class="outline-text-3" id="text-9-3">
<p>
\[ T(n) = aT(n/b) + f(n).(log(n))^k \]
\[ \text{here, f(n) is a polynomial function} \]
\[ \text{and, a > 0, b > 0 and k } \ge 0 \]
We calculate a value \(n^{log_b a}\)
</p>
<ul class="org-ul">
<li>If \(\theta (f(n)) < \theta ( n^{log_b a} )\) then \(T(n) = \theta (n^{log_b a})\)</li>
<li>If \(\theta (f(n)) > \theta ( n^{log_b a} )\) then \(T(n) = \theta (f(n).(log(n))^k )\)</li>
<li>If \(\theta (f(n)) = \theta ( n^{log_b a} )\) then \(T(n) = \theta (f(n) . (log(n))^{k+1})\)</li>
</ul>
<p>
For the above comparison, a higher growth rate is considered greater than a slower growth rate, e.g., &theta; (n<sup>2</sup>) &gt; &theta; (n).
</p>
<p>
Example, calculating complexity for
\[ T(n) = T(n/2) + 1 \]
Here, f(n) = 1
<br />
Also, a = 1, b = 2 and k = 0.
<br />
Calculating n<sup>log<sub>b</sub>a</sup> = n<sup>log<sub>2</sub>1</sup> = n<sup>0</sup> = 1
<br />
Therefore, &theta; (f(n)) = &theta; (n<sup>log<sub>b</sub>a</sup>)
<br />
So time complexity is
\[ T(n) = \theta ( 1 . (log(n))^{0 + 1} ) \]
\[ T(n) = \theta (log(n)) \]
</p>
<p>
Another example, calculate complexity for
\[ T(n) = 2T(n/2) + nlog(n) \]
</p>
<p>
Here, f(n) = n
<br />
Also, a = 2, b = 2 and k = 1
<br />
Calculating n<sup>log<sub>b</sub>a</sup> = n<sup>log<sub>2</sub>2</sup> = n
<br />
Therefore, &theta; (f(n)) = &theta; (n<sup>log<sub>b</sub>a</sup>)
<br />
So time complexity is,
\[ T(n) = \theta ( n . (log(n))^{2}) \]
</p>
</div>
</div>
</div>
<div id="outline-container-orgce2f29e" class="outline-2">
<h2 id="orgce2f29e"><span class="section-number-2">10.</span> Square root recurrence relations</h2>
<div class="outline-text-2" id="text-10">
</div>
<div id="outline-container-orgd6a4ff2" class="outline-3">
<h3 id="orgd6a4ff2"><span class="section-number-3">10.1.</span> Iterative method</h3>
<div class="outline-text-3" id="text-10-1">
<p>
Example,
\[ T(n) = T( \sqrt{n} ) + 1 \]
we can write this as,
\[ T(n) = T( n^{1/2}) + 1 \]
Now, we expand \(T( n^{1/2})\)
\[ T(n) = [ T(n^{1/4}) + 1 ] + 1 \]
\[ T(n) = T(n^{1/(2^2)}) + 1 + 1 \]
Expand, \(T(n^{1/4})\)
\[ T(n) = [ T(n^{1/8}) + 1 ] + 1 + 1 \]
\[ T(n) = T(n^{1/(2^3)}) + 1 + 1 + 1 \]
</p>
<p>
Expanding <b>k</b> times,
\[ T(n) = T(n^{1/(2^k)}) + 1 + 1 ... \text{k times } + 1 \]
\[ T(n) = T(n^{1/(2^k)}) + k \]
</p>
<p>
Let's consider \(T(2)=C\) where C is constant.
<br />
Putting \(n^{1/(2^k)} = 2\)
\[ \frac{1}{2^k} log(n) = log(2) \]
\[ \frac{1}{2^k} = \frac{log(2)}{log(n)} \]
\[ 2^k = \frac{log(n)}{log(2)} \]
\[ 2^k = log_2n \]
\[ k = log(log(n)) \]
</p>
<p>
So putting <b>k</b> in time complexity equation,
\[ T(n) = T(2) + log(log(n)) \]
\[ T(n) = C + log(log(n)) \]
Time complexity is,
\[ T(n) = \theta (log(log(n))) \]
</p>
</div>
</div>
<div id="outline-container-orga309b9e" class="outline-3">
<h3 id="orga309b9e"><span class="section-number-3">10.2.</span> Master Theorem for square root recurrence relations</h3>
<div class="outline-text-3" id="text-10-2">
<p>
For recurrence relations with a square root, we first need to convert the recurrence relation into a form on which we can use the master theorem. Example,
\[ T(n) = T( \sqrt{n} ) + 1 \]
Here, we need to convert \(T( \sqrt{n} )\), which we can do by <b>substituting</b>
\[ \text{Substitute } n = 2^m \]
\[ T(2^m) = T ( \sqrt{2^m} ) + 1 \]
\[ T(2^m) = T ( 2^{m/2} ) + 1 \]
</p>
<p>
Now, we need to consider a new function such that,
\[ \text{Let, } S(m) = T(2^m) \]
Thus our recurrence relation will become,
\[ S(m) = S(m/2) + 1 \]
Now, we can apply the master's theorem.
<br />
Here, f(m) = 1
<br />
Also, a = 1, b = 2 and k = 0
<br />
Calculating m<sup>log<sub>b</sub>a</sup> = m<sup>log<sub>2</sub>1</sup> = m<sup>0</sup> = 1
<br />
Therefore, &theta; (f(m)) = &theta; ( m<sup>log<sub>b</sub>a</sup> )
<br />
So by master's theorem,
\[ S(m) = \theta (1. (log(m))^{0+1} ) \]
\[ S(m) = \theta (log(m) ) \]
Now, putting back \(m = log(n)\)
\[ T(n) = \theta (log(log(n))) \]
Another example,
\[ T(n) = 2.T(\sqrt{n})+log(n) \]
Substituting \(n = 2^m\)
\[ T(2^m) = 2.T(\sqrt{2^m}) + log(2^m) \]
\[ T(2^m) = 2.T(2^{m/2}) + m \]
Consider a function \(S(m) = T(2^m)\)
\[ S(m) = 2.S(m/2) + m \]
Here, f(m) = m
<br />
Also, a = 2, b = 2 and k = 0
<br />
Calculating m<sup>log<sub>b</sub>a</sup> = m<sup>log<sub>2</sub>2</sup> = m
<br />
Therefore, &theta; (f(m)) = &theta; (m<sup>log<sub>b</sub>a</sup>)
<br />
Using master's theorem,
\[ S(m) = \theta (m.(log(m))^{0+1} ) \]
\[ S(m) = \theta (m.log(m)) \]
Putting value of m,
\[ T(n) = \theta (log(n).log(log(n))) \]
</p>
</div>
</div>
</div>
<div id="outline-container-org7d79637" class="outline-2">
<h2 id="org7d79637"><span class="section-number-2">11.</span> Extended Master's theorem for time complexity of recursive algorithms</h2>
<div class="outline-text-2" id="text-11">
</div>
<div id="outline-container-org7c8438d" class="outline-3">
<h3 id="org7c8438d"><span class="section-number-3">11.1.</span> For (k = -1)</h3>
<div class="outline-text-3" id="text-11-1">
<p>
\[ T(n) = aT(n/b) + f(n).(log(n))^{-1} \]
\[ \text{Here, } f(n) \text{ is a polynomial function} \]
\[ a > 0\ and\ b > 1 \]
</p>
<ul class="org-ul">
<li>If &theta; (f(n)) &lt; &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (n<sup>log<sub>b</sub>a</sup>)</li>
<li>If &theta; (f(n)) &gt; &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (f(n))</li>
<li>If &theta; (f(n)) = &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (f(n).log(log(n)))</li>
</ul>
</div>
</div>
<div id="outline-container-org91a21bf" class="outline-3">
<h3 id="org91a21bf"><span class="section-number-3">11.2.</span> For (k &lt; -1)</h3>
<div class="outline-text-3" id="text-11-2">
<p>
\[ T(n) = aT(n/b) + f(n).(log(n))^{k} \]
\[ \text{Here, } f(n) \text{ is a polynomial function} \]
\[ a > 0\ and\ b > 1\ and\ k < -1 \]
</p>
<ul class="org-ul">
<li>If &theta; (f(n)) &lt; &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (n<sup>log<sub>b</sub>a</sup>)</li>
<li>If &theta; (f(n)) &gt; &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (f(n))</li>
<li>If &theta; (f(n)) = &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (n<sup>log<sub>b</sub>a</sup>)</li>
</ul>
</div>
</div>
</div>
<div id="outline-container-org4cd2e2d" class="outline-2">
<h2 id="org4cd2e2d"><span class="section-number-2">12.</span> Tree method for time complexity of recursive algorithms</h2>
<div class="outline-text-2" id="text-12">
<p>
Tree method is used when there are multiple recursive calls in our recurrence relation. Example,
\[ T(n) = T(n/5) + T(4n/5) + f(n) \]
Here, one call is T(n/5) and another is T(4n/5), so we can't apply the master theorem. Instead, we create a tree of recursive calls which is used to calculate the time complexity.
The first node, i.e., the root node, is T(n), and the tree is formed by the child nodes being the calls made by the parent nodes. Example, let's consider the recurrence relation
\[ T(n) = T(n/5) + T(4n/5) + f(n) \]
</p>
<pre class="example">
+-----T(n/5)
T(n)--+
+-----T(4n/5)
</pre>
<p>
Since T(n) calls T(n/5) and T(4n/5), the graph for that is shown as drawn above. Now using the recurrence relation, we can say that T(n/5) will call T(n/5<sup>2</sup>) and T(4n/5<sup>2</sup>). Also, T(4n/5) will call T(4n/5<sup>2</sup>) and T(4<sup>2</sup> n/ 5<sup>2</sup>).
</p>
<pre class="example">
+--T(n/5^2)
+-----T(n/5)--+
+ +--T(4n/5^2)
T(n)--+
+ +--T(4n/5^2)
+-----T(4n/5)-+
+--T(4^2 n/5^2)
</pre>
<p>
Suppose we draw this graph for an unknown number of levels.
</p>
<pre class="example">
+--T(n/5^2)- - - - - - - etc.
+-----T(n/5)--+
+ +--T(4n/5^2) - - - - - - - - - etc.
T(n)--+
+ +--T(4n/5^2) - - - - - - - - - etc.
+-----T(4n/5)-+
+--T(4^2 n/5^2)- - - - - - etc.
</pre>
<p>
We will now replace T()'s with the <b>cost of the call</b>. The cost of the call is <b>f(n)</b>, i.e, the time taken other than that caused by the recursive calls.
</p>
<pre class="example">
+--f(n/5^2)- - - - - - - etc.
+-----f(n/5)--+
+ +--f(4n/5^2) - - - - - - - - - etc.
f(n)--+
+ +--f(4n/5^2) - - - - - - - - - etc.
+-----f(4n/5)-+
+--f(4^2 n/5^2)- - - - - - etc.
</pre>
<p>
In our example, <b>let's assume f(n) = n</b>, therefore,
</p>
<pre class="example">
+-- n/5^2 - - - - - - - etc.
+----- n/5 --+
+ +-- 4n/5^2 - - - - - - - - - etc.
n --+
+ +-- 4n/5^2 - - - - - - - - -etc.
+----- 4n/5 -+
+-- 4^2 n/5^2 - - - - - - etc.
</pre>
<p>
Now we can get cost of each level.
</p>
<pre class="example">
+-- n/5^2 - - - - - - - etc.
+----- n/5 --+
+ +-- 4n/5^2 - - - - - - - - - etc.
n --+
+ +-- 4n/5^2 - - - - - - - - -etc.
+----- 4n/5 --+
+-- 4^2 n/5^2 - - - - - - etc.
Sum : n n/5 n/25
+4n/5 +4n/25
+4n/25
+16n/25
..... ..... ......
n n n
</pre>
<p>
Since the sum on each level is n, we can say that the total time taken is
\[ T(n) = \Sigma \ (cost\ of\ level_i) \]
</p>
<p>
Now we need to find the longest branch in the tree. If we follow the pattern of expanding the tree in a sequence as shown, then the longest branch is <b>always on one of the extreme ends of the tree</b>. So for our example, if the tree has <b>(k+1)</b> levels, then our branch is either (n/5<sup>k</sup>) or (4<sup>k</sup> n/5<sup>k</sup>). Suppose the terminating condition is \(T(a) = C\). Then we calculate the value of k by equating the longest branch as,
\[ \frac{n}{5^k} = a \]
\[ k = log_5 (n/a) \]
Also,
\[ \frac{4^k n}{5^k} = a \]
\[ k = log_{5/4} (n/a) \]
</p>
<p>
So, we have two possible values of k,
\[ k = log_{5/4}(n/a),\ log_5 (n/a) \]
</p>
<p>
Now, we can say that,
\[ T(n) = \sum_{i=1}^{k+1} \ (cost\ of\ level_i) \]
Since in our example, cost of every level is <b>n</b>.
\[ T(n) = n.(k+1) \]
Putting values of k,
\[ T(n) = n.(log_{5/4}(n/a) + 1) \]
or
\[ T(n) = n.(log_{5}(n/a) + 1) \]
</p>
<p>
Of the two possible time complexities, we consider the one with higher growth rate in the big-oh notation.
</p>
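<p>
The tree-method conclusion can be checked by evaluating the recurrence directly (a sketch; the function name and the base case \(T(n) = 1\) for \(n \le 1\) are our assumptions, with f(n) = n as above):
</p>

```c
/* Directly evaluate T(n) = T(n/5) + T(4n/5) + n with T(n) = 1 for n <= 1.
   Doubling n roughly doubles T(n) plus a small logarithmic correction,
   which is what Theta(n log n) growth predicts. */
double tree_rec(double n){
    if(n <= 1.0)
        return 1.0;
    return tree_rec(n / 5.0) + tree_rec(4.0 * n / 5.0) + n;
}
```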
</div>
<div id="outline-container-org1d1c9a3" class="outline-3">
<h3 id="org1d1c9a3"><span class="section-number-3">12.1.</span> Avoiding tree method</h3>
<div class="outline-text-3" id="text-12-1">
<p>
The tree method, as mentioned, is mainly used when we have multiple recursive calls with different factors. But when using the big-oh notation (O), we can avoid the tree method in favour of the master's theorem by converting the recursive call with the smaller factor into the one with the larger factor. This works because big-oh describes the worst case, so the result is still a valid upper bound. Let's take our previous example
\[ T(n) = T(n/5) + T(4n/5) + f(n) \]
Since T(n) is an increasing function, we can say that
\[ T(n/5) < T(4n/5) \]
So we can replace smaller one and approximate our equation to,
\[ T(n) = T(4n/5) + T(4n/5) + f(n) \]
\[ T(n) = 2.T(4n/5) + f(n) \]
</p>
<p>
Now, our recurrence relation is in a form where we can apply the master's theorem.
</p>
</div>
</div>
</div>
<div id="outline-container-org4fdc5de" class="outline-2">
<h2 id="org4fdc5de"><span class="section-number-2">13.</span> Space complexity</h2>
<div class="outline-text-2" id="text-13">
<p>
The amount of memory used by the algorithm to execute and produce the result for a given input size is space complexity. Similar to time complexity, when comparing two algorithms space complexity is usually represented as the growth rate of memory used with respect to input size. The space complexity includes
</p>
<ul class="org-ul">
<li><b>Input space</b> : The amount of memory used by the inputs to the algorithm.</li>
<li><b>Auxiliary space</b> : The amount of memory used during the execution of the algorithm, excluding the input space.</li>
</ul>
<p>
<b>NOTE</b> : <i>Space complexity by definition includes both input space and auxiliary space, but when comparing algorithms the input space is often ignored. This is because two algorithms that solve the same problem will have the same input space for a given input size (Example, when comparing two sorting algorithms, the input space will be the same because both get a list as an input). So from this point on, when referring to space complexity, we are actually talking about <b>Auxiliary Space Complexity</b>, which is space complexity but only considering the auxiliary space</i>.
</p>
</div>
<div id="outline-container-org1bdab7b" class="outline-3">
<h3 id="org1bdab7b"><span class="section-number-3">13.1.</span> Auxiliary space complexity</h3>
<div class="outline-text-3" id="text-13-1">
<p>
The space complexity when we disregard the input space is the auxiliary space complexity; we basically treat the algorithm as if its input space were zero. Auxiliary space complexity is more useful when comparing algorithms because algorithms working towards the same result have the same input space. For example, sorting algorithms all take a list as input, so the input space is not a metric we can use to compare them. So from here on, when we calculate space complexity, we are actually calculating auxiliary space complexity, and sometimes just refer to it as space complexity.
</p>
</div>
</div>
</div>
<div id="outline-container-org26eb543" class="outline-2">
<h2 id="org26eb543"><span class="section-number-2">14.</span> Calculating auxiliary space complexity</h2>
<div class="outline-text-2" id="text-14">
<p>
There are two parameters that affect space complexity,
</p>
<ul class="org-ul">
<li><b>Data space</b> : The memory taken by the variables in the algorithm. So allocating new memory during runtime of the algorithm is what forms the data space. The space which was allocated for the input space is not considered a part of the data space.</li>
<li><b>Code Execution Space</b> : The memory taken by the instructions themselves is called code execution space. Unless we have recursion, the code execution space remains constant since the instructions don't change during runtime of the algorithm. When using recursion, the instructions are loaded again and again in memory, thus increasing code execution space.</li>
</ul>
</div>
<div id="outline-container-org2d4751e" class="outline-3">
<h3 id="org2d4751e"><span class="section-number-3">14.1.</span> Data Space used</h3>
<div class="outline-text-3" id="text-14-1">
<p>
The data space used by the algorithm depends on what data structures it uses to solve the problem. Example,
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Input size of n</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">void</span> <span style="color: #0184bc;">algorithms</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Creating an array of whose size depends on input size</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">data</span>[n];
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span> = data[i];
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">Work on data</span>
}
}
</pre>
</div>
<p>
Here, we create an array of size <b>n</b>, so the allocated space grows linearly with the input size. So the space complexity is <b>\(\theta (n)\)</b>.
<br />
</p>
<ul class="org-ul">
<li>Another example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Input size of n</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">void</span> <span style="color: #0184bc;">algorithms</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Creating a matrix sized n*n of whose size depends on input size</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">data</span>[n][n];
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++){
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt; n; j++){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span> = data[i][j];
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">Work on data</span>
}
}
}
</pre>
</div>
<p>
Here, we create a matrix of size <b>n*n</b>, so the allocated space grows as the square of the input size. So the space complexity is <b>\(\theta (n^2)\)</b>.
</p>
<ul class="org-ul">
<li>If we use a node based data structure like linked list or trees, then we can show space complexity as the number of nodes used by algorithm based on input size, (if all nodes are of equal size).</li>
<li>Space complexity of the hash map is considered <b>O(n)</b> where <b>n</b> is the number of entries in the hash map.</li>
</ul>
</div>
</div>
<div id="outline-container-org32d747b" class="outline-3">
<h3 id="org32d747b"><span class="section-number-3">14.2.</span> Code Execution space in recursive algorithm</h3>
<div class="outline-text-3" id="text-14-2">
<p>
When we use recursion, the function calls are stored in the stack. This means that code execution space will increase. A single function call has fixed (constant) space it takes in the memory. So to get space complexity, <b>we need to know how many function calls occur in the longest branch of the function call tree</b>.
</p>
<ul class="org-ul">
<li><b>NOTE</b> : Space complexity <b>only depends on the longest branch</b> of the function calls tree.</li>
<li><i><b>The tree is made the same way we make it in the tree method for calculating time complexity of recursive algorithms</b></i></li>
</ul>
<p>
This is because at any given time, the stack will store only a single branch.
</p>
<ul class="org-ul">
<li>Example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">func</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a626a4;">if</span>(n == 1 || n == 0)
<span style="color: #a626a4;">return</span> 1;
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> n * func(n - 1);
}
</pre>
</div>
<p>
To calculate space complexity we can use the tree method, but unlike when calculating time complexity, we will use the tree to count the number of function calls along its longest branch.
We will do this by drawing the tree of what the function calls will look like for a given input size <b>n</b>.
<br />
The tree for <b>k+1</b> levels is,
</p>
<pre class="example">
func(n)--func(n-1)--func(n-2)--.....--func(n-k)
</pre>
<p>
This tree only has a single branch. To get the number of levels for a branch, we put the terminating condition at the extreme branches of the tree. Here, the terminating condition is func(1), therefore, we will put \(func(1) = func(n-k)\), i.e,
\[ 1 = n - k \]
\[ k + 1 = n \]
</p>
<p>
So the number of levels is \(n\). Therefore, space complexity is <b>\(\theta (n)\)</b>
</p>
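<p>
This can be observed by instrumenting the function with a depth counter (the counter and wrapper function are ours):
</p>

```c
/* Track the maximum recursion depth of the factorial-style function
   above. The chain func(n) -> func(n-1) -> ... -> func(1) is the only
   branch, so the maximum depth is n: Theta(n) stack space. */
static int depth = 0, max_depth = 0;

long func(int n){
    depth++;
    if(depth > max_depth)
        max_depth = depth;
    long r = (n == 0 || n == 1) ? 1 : n * func(n - 1);
    depth--;
    return r;
}

int max_call_depth(int n){
    depth = 0;
    max_depth = 0;
    func(n);
    return max_depth;
}
```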
<ul class="org-ul">
<li>Another example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">func</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a626a4;">if</span>(n/2 &lt;= 1)
    <span style="color: #a626a4;">return</span>;
func(n/2);
func(n/2);
}
</pre>
</div>
<p>
Drawing the tree for <b>k+1</b> levels.
</p>
<pre class="example">
+--func(n/2^2)- - - - - - - func(n/2^k)
+-----func(n/2)--+
+ +--func(n/2^2) - - - - - - - - - func(n/2^k)
func(n)--+
+ +--func(n/2^2) - - - - - - - - - func(n/2^k)
+-----func(n/2)-+
+--func(n/2^2)- - - - - - func(n/2^k)
</pre>
<ul class="org-ul">
<li><i><b>As we know from the tree method, the two extreme branches of the tree will always be the longest ones.</b></i></li>
</ul>
<p>
Both the extreme branches have the same call which here is func(n/2<sup>k</sup>). To get the number of levels for a branch, we put the terminating condition at the extreme branches of the tree. Here, the terminating condition is func(2), therefore, we will put \(func(2) = func(n/2^k)\), i.e,
\[ 2 = \frac{n}{2^k} \]
\[ k + 1 = log_2n \]
Number of levels is \(log_2n\). Therefore, space complexity is <b>\(\theta (log_2n)\).</b>
</p>
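<p>
The same instrumentation idea (counter and wrapper ours) shows why only the longest branch matters here: the two func(n/2) calls run one after the other, so the stack never holds more than a single branch of about \(log_2n\) frames.
</p>

```c
/* Track the maximum recursion depth of the two-branch function above.
   The two func(n/2) calls run sequentially, so the stack holds only
   one branch at a time; the deepest chain has about log2(n) frames. */
static int depth = 0, max_depth = 0;

void func(int n){
    depth++;
    if(depth > max_depth)
        max_depth = depth;
    if(n / 2 > 1){      /* recurse until the terminating condition */
        func(n / 2);
        func(n / 2);
    }
    depth--;
}

int max_call_depth(int n){
    depth = 0;
    max_depth = 0;
    func(n);
    return max_depth;
}
```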
</div>
</div>
</div>
<div id="outline-container-org423e1e2" class="outline-2">
<h2 id="org423e1e2"><span class="section-number-2">15.</span> Divide and Conquer algorithms</h2>
<div class="outline-text-2" id="text-15">
<p>
Divide and conquer is a problem solving strategy. In divide and conquer algorithms, we solve a problem recursively by applying three steps :
</p>
<ul class="org-ul">
<li><b>Divide</b> : The problem is divided into smaller problems that are instances of the same problem.</li>
<li><b>Conquer</b> : If the subproblems are large, divide and solve them recursively. If a subproblem is small enough, solve it in a straightforward manner.</li>
<li><b>Combine</b> : Combine the solutions of the subproblems into the solution for the original problem.</li>
</ul>
<p>
<b>Example</b>,
</p>
<ol class="org-ol">
<li>Binary search</li>
<li>Quick sort</li>
<li>Merge sort</li>
<li>Strassen's matrix multiplication</li>
</ol>
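<p>
As an illustration of the three steps, here is binary search written in the divide and conquer style (a sketch; it assumes the array is already sorted):
</p>

```c
/* Binary search on a sorted array:
   Divide  - pick the middle index,
   Conquer - recurse into the half that can contain x,
   Combine - trivial, the found index is simply passed back up. */
int binary_search(const int *a, int lo, int hi, int x){
    if(lo > hi)
        return -1;                    /* not found */
    int mid = lo + (hi - lo) / 2;     /* divide */
    if(a[mid] == x)
        return mid;
    else if(a[mid] < x)
        return binary_search(a, mid + 1, hi, x);  /* conquer right half */
    else
        return binary_search(a, lo, mid - 1, x);  /* conquer left half */
}
```
<p>
Each call halves the search range, giving the recurrence \(T(n) = T(n/2) + c\), i.e. O(log(n)).
</p>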
</div>
</div>
<div id="outline-container-orgeec7ed3" class="outline-2">
<h2 id="orgeec7ed3"><span class="section-number-2">16.</span> Searching for element in array</h2>
<div class="outline-text-2" id="text-16">
</div>
<div id="outline-container-orgf5c47f0" class="outline-3">
<h3 id="orgf5c47f0"><span class="section-number-3">16.1.</span> Straight forward approach for searching (<b>Linear Search</b>)</h3>
<div class="outline-text-3" id="text-16-1">
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">linear_search</span>(<span style="color: #c18401;">int</span> *<span style="color: #8b4513;">array</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span>){
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++){
<span style="color: #a626a4;">if</span>(array[i] == x){
printf(<span style="color: #50a14f;">"Found at index : %d"</span>, i);
<span style="color: #a626a4;">return</span> i;
}
}
<span style="color: #a626a4;">return</span> -1;
}
</pre>
</div>
<p>
Recursive approach
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">call this function with index = 0; n is the number of elements in the array</span>
<span style="color: #c18401;">int</span> <span style="color: #0184bc;">linear_search</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">item</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">index</span>){
 <span style="color: #a626a4;">if</span>( index &gt;= n )
 <span style="color: #a626a4;">return</span> -1;
 <span style="color: #a626a4;">else</span> <span style="color: #a626a4;">if</span> (array[index] == item)
 <span style="color: #a626a4;">return</span> index;
 <span style="color: #a626a4;">else</span>
 <span style="color: #a626a4;">return</span> linear_search(array, n, item, index + 1);
}
</pre>
</div>
<p>
<b>Recursive time complexity</b> : \(T(n) = T(n-1) + 1\)
</p>
<ul class="org-ul">
<li><b>Best Case</b> : The element to search for is the first element of the array, so we need to do a single comparison. Therefore, the time complexity is constant, <b>O(1)</b>.</li>
</ul>
<p>
<br />
</p>
<ul class="org-ul">
<li><b>Worst Case</b> : The element to search for is the last element of the array, so we need to do <b>n</b> comparisons for an array of size n. Therefore, the time complexity is <b>O(n)</b>.</li>
</ul>
<p>
<br />
</p>
<ul class="org-ul">
<li><b>Average Case</b> : For calculating the average case, we need to consider the average number of comparisons done over all possible cases.</li>
</ul>
<table border="2" cellspacing="0" cellpadding="6" rules="all" frame="border">
<colgroup>
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Position of element to search (x)</th>
<th scope="col" class="org-left">Number of comparisons done</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">0</td>
<td class="org-left">1</td>
</tr>
<tr>
<td class="org-left">1</td>
<td class="org-left">2</td>
</tr>
<tr>
<td class="org-left">2</td>
<td class="org-left">3</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">n-1</td>
<td class="org-left">n</td>
</tr>
<tr>
<td class="org-left">&#x2026;&#x2026;&#x2026;&#x2026;&#x2026;&#x2026;..</td>
<td class="org-left">&#x2026;&#x2026;&#x2026;&#x2026;&#x2026;&#x2026;..</td>
</tr>
<tr>
<td class="org-left">Sum</td>
<td class="org-left">\(\frac{n(n+1)}{2}\)</td>
</tr>
</tbody>
</table>
<p>
\[ \text{Average number of comparisons} = \frac{ \text{Sum of number of comparisons of all cases} }{ \text{Total number of cases} } \]
\[ \text{Average number of comparisons} = \frac{n(n+1)}{2} \div n \]
\[ \text{Average number of comparisons} = \frac{n+1}{2} \]
\[ \text{Time complexity in average case} = O(n) \]
</p>
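<p>
As a quick empirical check of this derivation (an illustrative sketch, not part of the original notes), we can average the comparison counts of linear search over all n possible positions of the element:
</p>

```python
n = 100
# searching for the element at index p takes p + 1 comparisons
total = sum(p + 1 for p in range(n))   # equals n(n+1)/2
average = total / n
print(average)        # 50.5
print((n + 1) / 2)    # 50.5, matches (n+1)/2
```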
</div>
</div>
<div id="outline-container-org53e9b50" class="outline-3">
<h3 id="org53e9b50"><span class="section-number-3">16.2.</span> Divide and conquer approach (<b>Binary search</b>)</h3>
<div class="outline-text-3" id="text-16-2">
<p>
The binary search algorithm works on an array which is sorted. In this algorithm we:
</p>
<ol class="org-ol">
<li>Check the middle element of the array, return the index if element found.</li>
<li>If element &gt; array[mid], then our element is in the right part of the array, else it is in the left part of the array.</li>
<li>Get the mid element of the left/right sub-array</li>
<li>Repeat this process of dividing into subarrays and comparing the middle element till the required element is found.</li>
</ol>
<p>
The divide and conquer algorithm works as,
<br />
Suppose the function is binarySearch(array, left, right, key), where left and right are the indices of the left and right ends of the subarray, and key is the element we have to search for.
</p>
<ul class="org-ul">
<li><b>Divide part</b> : calculate mid index as mid = left + (right - left) /2 or (left + right) / 2. If array[mid] == key, return the value of mid.</li>
<li><b>Conquer part</b> : if array[mid] &gt; key, then key must not be in right half. So we search for key in left half, so we will recursively call binarySearch(array, left, mid - 1, key). Similarly, if array[mid] &lt; key, then key must not be in left half. So we search for key in right half, so recursively call binarySearch(array, mid + 1, right, key).</li>
<li><b>Combine part</b> : Since the binarySearch function will either return -1 or the index of the key, there is no need to combine the solutions of the subproblems.</li>
</ul>
<div id="orgfc79624" class="figure">
<p><img src="lectures/imgs/binary-search.jpg" alt="binary-search.jpg" />
</p>
</div>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">binary_search</span>(<span style="color: #c18401;">int</span> *<span style="color: #8b4513;">array</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span>){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">low</span> = 0;
 <span style="color: #c18401;">int</span> <span style="color: #8b4513;">high</span> = n - 1;
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">mid</span> = (low + high) / 2;
<span style="color: #a626a4;">while</span>(low &lt;= high){
mid = (low + high) / 2;
<span style="color: #a626a4;">if</span> (x == array[mid]){
<span style="color: #a626a4;">return</span> mid;
}<span style="color: #a626a4;">else</span> <span style="color: #a626a4;">if</span> (x &lt; array[mid]){
low = low;
high = mid - 1;
}<span style="color: #a626a4;">else</span>{
low = mid + 1;
high = high;
}
}
<span style="color: #a626a4;">return</span> -1;
}
</pre>
</div>
<p>
Recursive approach:
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">binary_search</span>(<span style="color: #c18401;">int</span> *<span style="color: #8b4513;">array</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">left</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">right</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span>){
<span style="color: #a626a4;">if</span>(left &gt; right)
<span style="color: #a626a4;">return</span> -1;
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">mid</span> = (left + right) / 2;
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">or we can use mid = left + (right - left) / 2, this will avoid int overflow when array has more elements.</span>
<span style="color: #a626a4;">if</span> (x == array[mid])
<span style="color: #a626a4;">return</span> mid;
<span style="color: #a626a4;">else</span> <span style="color: #a626a4;">if</span> (x &lt; array[mid])
<span style="color: #a626a4;">return</span> binary_search(array, left, mid - 1, x);
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> binary_search(array, mid + 1, right, x);
}
</pre>
</div>
<p>
<b>Recursive time complexity</b> : \(T(n) = T(n/2) + 1\)
</p>
<ul class="org-ul">
<li><b>Best Case</b> : Time complexity = O(1)</li>
<li><b>Average Case</b> : Time complexity = O(log n)</li>
<li><b>Worst Case</b> : Time complexity = O(log n)</li>
</ul>
<p>
<i>Binary search is better for sorted arrays and linear search is better for unsorted arrays.</i>
<br />
<i>Another way to visualize binary search is using the binary tree.</i>
</p>
</div>
</div>
</div>
<div id="outline-container-org3b4deed" class="outline-2">
<h2 id="org3b4deed"><span class="section-number-2">17.</span> Max and Min element from array</h2>
<div class="outline-text-2" id="text-17">
</div>
<div id="outline-container-orged1501e" class="outline-3">
<h3 id="orged1501e"><span class="section-number-3">17.1.</span> Straightforward approach</h3>
<div class="outline-text-3" id="text-17-1">
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a626a4;">struct</span> <span style="color: #c18401;">min_max</span> {<span style="color: #c18401;">int</span> <span style="color: #8b4513;">min</span>; <span style="color: #c18401;">int</span> <span style="color: #8b4513;">max</span>;};
<span style="color: #a626a4;">struct</span> <span style="color: #c18401;">min_max</span> <span style="color: #0184bc;">min_max</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
 <span style="color: #c18401;">int</span> <span style="color: #8b4513;">max</span> = array[0];
 <span style="color: #c18401;">int</span> <span style="color: #8b4513;">min</span> = array[0];
 <span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 1; i &lt; n; i++){
 <span style="color: #a626a4;">if</span>(array[i] &gt; max)
 max = array[i];
 <span style="color: #a626a4;">else</span> <span style="color: #a626a4;">if</span>(array[i] &lt; min)
 min = array[i];
 }
 <span style="color: #a626a4;">return</span> (<span style="color: #a626a4;">struct</span> <span style="color: #c18401;">min_max</span>) {min, max};
}
</pre>
</div>
<ul class="org-ul">
<li><b>Best case</b> : the array is sorted in ascending order. The number of comparisons is \(n-1\). Time complexity is \(O(n)\).</li>
<li><b>Worst case</b> : the array is sorted in descending order. The number of comparisons is \(2(n-1)\). Time complexity is \(O(n)\).</li>
<li><b>Average case</b> : the array can be arranged in n! ways, which makes counting comparisons in the average case hard and somewhat unnecessary, so it is skipped. Time complexity is \(O(n)\)</li>
</ul>
</div>
</div>
<div id="outline-container-orgd668fa2" class="outline-3">
<h3 id="orgd668fa2"><span class="section-number-3">17.2.</span> Divide and conquer approach</h3>
<div class="outline-text-3" id="text-17-2">
<p>
Suppose the function is MinMax(array, left, right) which will return a tuple (min, max). We will divide the array in the middle, mid = (left + right) / 2. The left array will be array[left:mid] and the right array will be array[mid+1:right].
</p>
<ul class="org-ul">
<li><b>Divide part</b> : Divide the array into left array and right array. If array has only single element then both min and max are that single element, if array has two elements then compare the two and the bigger element is max and other is min.</li>
<li><b>Conquer part</b> : Recursively get the min and max of left and right array, leftMinMax = MinMax(array, left, mid) and rightMinMax = MinMax(array, mid + 1, right).</li>
<li><b>Combine part</b> : If leftMinMax[0] &gt; rightMinmax[0], then min = righMinMax[0], else min = leftMinMax[0]. Similarly, if leftMinMax[1] &gt; rightMinMax[1], then max = leftMinMax[1], else max = rightMinMax[1].</li>
</ul>
<div class="org-src-container">
<pre class="src src-python"><span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Will return (min, max)</span>
<span style="color: #a626a4;">def</span> <span style="color: #0184bc;">minmax</span>(array, left, right):
<span style="color: #a626a4;">if</span> left == right: <span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Single element in array</span>
<span style="color: #a626a4;">return</span> (array[left], array[left])
<span style="color: #a626a4;">elif</span> left + 1 == right: <span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Two elements in array</span>
<span style="color: #a626a4;">if</span> array[left] &gt; array[right]:
<span style="color: #a626a4;">return</span> (array[right], array[left])
<span style="color: #a626a4;">else</span>:
<span style="color: #a626a4;">return</span> (array[left], array[right])
<span style="color: #a626a4;">else</span>: <span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">More than two elements</span>
 mid = (left + right) // 2
<span style="color: #8b4513;">minimum</span>, <span style="color: #8b4513;">maximum</span> = 0, 0
leftMinMax = minmax(array, left, mid)
rightMinMax = minmax(array, mid + 1, right)
<span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Combining result of the minimum from left and right subarray's</span>
<span style="color: #a626a4;">if</span> leftMinMax[0] &gt; rightMinMax[0]:
minimum = rightMinMax[0]
<span style="color: #a626a4;">else</span>:
minimum = leftMinMax[0]
<span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Combining result of the maximum from left and right subarray's</span>
<span style="color: #a626a4;">if</span> leftMinMax[1] &gt; rightMinMax[1]:
maximum = leftMinMax[1]
<span style="color: #a626a4;">else</span>:
maximum = rightMinMax[1]
<span style="color: #a626a4;">return</span> (minimum, maximum)
</pre>
</div>
<ul class="org-ul">
<li>Time complexity</li>
</ul>
<p>
We are dividing the problem into two parts of approximately equal size, and it takes two comparisons to combine the results of the two parts. Let's assume a comparison takes unit time. Then the time complexity is
\[ T(n) = T(n/2) + T(n/2) + 2 \]
\[ T(n) = 2.T(n/2) + 2 \]
The recurrence terminates with zero comparisons when the array has a single element, i.e., \(T(1) = 0\), or with a single comparison when it has two elements, \(T(2) = 1\).
<br />
<i>Now we can use the <b>master's theorem</b> or <b>tree method</b> to solve for the time complexity.</i>
\[ T(n) = \theta (n) \]
</p>
<ul class="org-ul">
<li>Space complexity</li>
</ul>
<p>
For space complexity, we need to find the longest branch of the recursion tree. Since both recursive calls are of the same size and the division factor is (1/2), after <b>k+1</b> levels the function call will be func(n/2<sup>k</sup>), and the terminating condition is func(2)
\[ func(2) = func(n/2^k) \]
\[ 2 = \frac{n}{2^k} \]
\[ 2^{k+1} = n \]
\[ k + 1 = log_2n \]
Since longest branch has \(log_2n\) nodes, the space complexity is \(O(log_2n)\).
</p>
<ul class="org-ul">
<li>Number of comparisons</li>
</ul>
<p>
In every case, i.e, the average, best and worst cases, <b>the number of comparisons in this algorithm is the same</b>.
\[ \text{Total number of comparisons} = \frac{3n}{2} - 2 \]
If n is not a power of 2, we will round the number of comparisons up.
</p>
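<p>
To verify this count, here is an instrumented version of the divide and conquer procedure above (an illustrative sketch; it charges exactly two comparisons per combine step, matching the model in the recurrence):
</p>

```python
def minmax_count(array, left, right):
    """Return ((min, max), comparisons) for array[left..right]."""
    if left == right:                     # single element: no comparisons
        return (array[left], array[left]), 0
    if left + 1 == right:                 # two elements: one comparison
        if array[left] > array[right]:
            return (array[right], array[left]), 1
        return (array[left], array[right]), 1
    mid = (left + right) // 2
    (lmin, lmax), lc = minmax_count(array, left, mid)
    (rmin, rmax), rc = minmax_count(array, mid + 1, right)
    # the combine step costs one comparison for min and one for max
    return (min(lmin, rmin), max(lmax, rmax)), lc + rc + 2

data = list(range(16))                    # n = 16, a power of 2
result, comparisons = minmax_count(data, 0, len(data) - 1)
print(result, comparisons)                # (0, 15) 22, and 3*16/2 - 2 = 22
```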
</div>
</div>
<div id="outline-container-orgb190e11" class="outline-3">
<h3 id="orgb190e11"><span class="section-number-3">17.3.</span> Efficient single loop approach (Increment by 2)</h3>
<div class="outline-text-3" id="text-17-3">
<p>
In this algorithm we compare pairs of numbers from the array. It works on the idea that, of the two numbers in a pair, only the larger can be the new maximum and only the smaller can be the new minimum. So after comparing the pair, we simply test the bigger of the two against the maximum and the smaller of the two against the minimum. This brings the number of comparisons needed to check two numbers of the array from 4 (when we increment by 1) down to 3 (when we increment by 2).
</p>
<div class="org-src-container">
<pre class="src src-python"><span style="color: #a626a4;">def</span> <span style="color: #0184bc;">min_max</span>(array):
(<span style="color: #8b4513;">minimum</span>, <span style="color: #8b4513;">maximum</span>) = (array[0], array[0])
<span style="color: #8b4513;">i</span> = 1
<span style="color: #a626a4;">while</span> i &lt; <span style="color: #e44649;">len</span>(array):
<span style="color: #a626a4;">if</span> i + 1 == <span style="color: #e44649;">len</span>(array): <span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">don't check i+1, it's out of bounds, break the loop after checking a[i]</span>
<span style="color: #a626a4;">if</span> array[i] &gt; <span style="color: #8b4513;">maximum</span>:
maximum = array[i]
<span style="color: #a626a4;">elif</span> array[i] &lt; <span style="color: #8b4513;">minimum</span>:
minimum = array[i]
<span style="color: #a626a4;">break</span>
<span style="color: #a626a4;">if</span> array[i] &gt; array[i + 1]:
<span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">check possibility that array[i] is maximum and array[i+1] is minimum</span>
<span style="color: #a626a4;">if</span> array[i] &gt; <span style="color: #8b4513;">maximum</span>:
maximum = array[i]
<span style="color: #a626a4;">if</span> array[i + 1] &lt; <span style="color: #8b4513;">minimum</span>:
minimum = array[i + 1]
<span style="color: #a626a4;">else</span>:
<span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">check possibility that array[i+1] is maximum and array[i] is minimum</span>
<span style="color: #a626a4;">if</span> array[i + 1] &gt; <span style="color: #8b4513;">maximum</span>:
maximum = array[i + 1]
<span style="color: #a626a4;">if</span> array[i] &lt; <span style="color: #8b4513;">minimum</span>:
minimum = array[i]
<span style="color: #8b4513;">i</span> += 2
<span style="color: #a626a4;">return</span> (minimum, maximum)
</pre>
</div>
<ul class="org-ul">
<li>Time complexity = O(n)</li>
<li>Space complexity = O(1)</li>
<li>Total number of comparisons =
\[ \text{If n is odd}, \frac{3(n-1)}{2} \]
\[ \text{If n is even}, \frac{3n}{2} - 2 \]</li>
</ul>
</div>
</div>
</div>
<div id="outline-container-org0d2bf32" class="outline-2">
<h2 id="org0d2bf32"><span class="section-number-2">18.</span> Square matrix multiplication</h2>
<div class="outline-text-2" id="text-18">
<p>
Matrix multiplication algorithms taken from here:
<a href="https://www.cs.mcgill.ca/~pnguyen/251F09/matrix-mult.pdf">https://www.cs.mcgill.ca/~pnguyen/251F09/matrix-mult.pdf</a>
</p>
</div>
<div id="outline-container-orge92fb56" class="outline-3">
<h3 id="orge92fb56"><span class="section-number-3">18.1.</span> Straight forward method</h3>
<div class="outline-text-3" id="text-18-1">
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">This will calculate A X B and store it in C.</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #e44649;">#define</span> <span style="color: #8b4513;">N</span> 3
<span style="color: #c18401;">int</span> <span style="color: #0184bc;">main</span>(){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">A</span>[N][N] = {
{1,2,3},
{4,5,6},
{7,8,9} };
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">B</span>[N][N] = {
{10,20,30},
{40,50,60},
{70,80,90} };
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">C</span>[N][N];
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; N; i++){
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt; N; j++){
C[i][j] = 0;
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">k</span> = 0; k &lt; N; k++){
C[i][j] += A[i][k] * B[k][j];
}
}
}
<span style="color: #a626a4;">return</span> 0;
}
</pre>
</div>
<p>
Time complexity is \(O(n^3)\)
</p>
</div>
</div>
<div id="outline-container-orgd75a384" class="outline-3">
<h3 id="orgd75a384"><span class="section-number-3">18.2.</span> Divide and conquer approach</h3>
<div class="outline-text-3" id="text-18-2">
<p>
The divide and conquer algorithm only works for a square matrix whose size is n X n, where n is a power of 2. The algorithm works as follows.
</p>
<pre class="example">
MatrixMul(A, B, n):
If n == 2 {
return A X B
}else{
Break A into four parts A_11, A_12, A_21, A_22, where A = [[ A_11, A_12],
[ A_21, A_22]]
Break B into four parts B_11, B_12, B_21, B_22, where B = [[ B_11, B_12],
[ B_21, B_22]]
C_11 = MatrixMul(A_11, B_11, n/2) + MatrixMul(A_12, B_21, n/2)
C_12 = MatrixMul(A_11, B_12, n/2) + MatrixMul(A_12, B_22, n/2)
C_21 = MatrixMul(A_21, B_11, n/2) + MatrixMul(A_22, B_21, n/2)
C_22 = MatrixMul(A_21, B_12, n/2) + MatrixMul(A_22, B_22, n/2)
C = [[ C_11, C_12],
[ C_21, C_22]]
return C
}
</pre>
<p>
The addition of matricies of size (n X n) takes time \(\theta (n^2)\), therefore, for computation of C<sub>11</sub> will take time of \(\theta \left( \left( \frac{n}{2} \right)^2 \right)\), which is equals to \(\theta \left( \frac{n^2}{4} \right)\). Therefore, computation time of C<sub>11</sub>, C<sub>12</sub>, C<sub>21</sub> and C<sub>22</sub> combined will be \(\theta \left( 4 \frac{n^2}{4} \right)\), which is equals to \(\theta (n^2)\).
<br />
There are 8 recursive calls in this function with MatrixMul(n/2), therefore, time complexity will be
\[ T(n) = 8T(n/2) + \theta (n^2) \]
Using the <b>master's theorem</b>
\[ T(n) = \theta (n^{log_28}) \]
\[ T(n) = \theta (n^3) \]
</p>
</div>
</div>
<div id="outline-container-orgbf700f5" class="outline-3">
<h3 id="orgbf700f5"><span class="section-number-3">18.3.</span> Strassen's algorithm</h3>
<div class="outline-text-3" id="text-18-3">
<p>
Another, more efficient divide and conquer algorithm for matrix multiplication. This algorithm also only works on square matrices with n being a power of 2. This algorithm is based on the observation that, for A X B = C. We can calculate C<sub>11</sub>, C<sub>12</sub>, C<sub>21</sub> and C<sub>22</sub> as,
</p>
<p>
\[ \text{C_11 = P_5 + P_4 - P_2 + P_6} \]
\[ \text{C_12 = P_1 + P_2} \]
\[ \text{C_21 = P_3 + P_4} \]
\[ \text{C_22 = P_1 + P _5 - P_3 - P_7} \]
Where,
\[ \text{P_1 = A_11 X (B_12 - B_22)} \]
\[ \text{P_2 = (A_11 + A_12) X B_22} \]
\[ \text{P_3 = (A_21 + A_22) X B_11} \]
\[ \text{P_4 = A_22 X (B_21 - B_11)} \]
\[ \text{P_5 = (A_11 + A_22) X (B_11 + B_22)} \]
\[ \text{P_6 = (A_12 - A_22) X (B_21 + B_22)} \]
\[ \text{P_7 = (A_11 - A_21) X (B_11 + B_12)} \]
This reduces number of recursion calls from 8 to 7.
</p>
<pre class="example">
Strassen(A, B, n):
If n == 2 {
return A X B
}
Else{
Break A into four parts A_11, A_12, A_21, A_22, where A = [[ A_11, A_12],
[ A_21, A_22]]
Break B into four parts B_11, B_12, B_21, B_22, where B = [[ B_11, B_12],
[ B_21, B_22]]
P_1 = Strassen(A_11, B_12 - B_22, n/2)
P_2 = Strassen(A_11 + A_12, B_22, n/2)
P_3 = Strassen(A_21 + A_22, B_11, n/2)
P_4 = Strassen(A_22, B_21 - B_11, n/2)
P_5 = Strassen(A_11 + A_22, B_11 + B_22, n/2)
P_6 = Strassen(A_12 - A_22, B_21 + B_22, n/2)
P_7 = Strassen(A_11 - A_21, B_11 + B_12, n/2)
C_11 = P_5 + P_4 - P_2 + P_6
C_12 = P_1 + P_2
C_21 = P_3 + P_4
C_22 = P_1 + P_5 - P_3 - P_7
C = [[ C_11, C_12],
[ C_21, C_22]]
return C
}
</pre>
<p>
This algorithm uses 18 matrix addition operations. So our computation time for that is \(\theta \left(18\left( \frac{n}{2} \right)^2 \right)\) which is equal to \(\theta (4.5 n^2)\) which is equal to \(\theta (n^2)\).
<br />
There are 7 recursive calls in this function which are Strassen(n/2), therefore, time complexity is
\[ T(n) = 7T(n/2) + \theta (n^2) \]
Using the master's theorem
\[ T(n) = \theta (n^{log_27}) \]
\[ T(n) = \theta (n^{2.807}) \]
</p>
<ul class="org-ul">
<li><i><b>NOTE</b> : The divide and conquer approach and Strassen's algorithm typically use n == 1 as their terminating condition, since for multiplying 1 X 1 matrices we only need to calculate the product of the single elements they contain; that product is then the single element of our resultant 1 X 1 matrix.</i></li>
</ul>
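<p>
The pseudocode above can be turned into a runnable sketch. The following illustrative Python version uses plain nested lists, recurses down to 1 X 1 matrices as the note suggests, and assumes n is a power of 2; the helper names (mat_add, mat_sub, split) are made up for this example:
</p>

```python
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def split(M):
    """Split an n x n matrix (n even) into four n/2 x n/2 quadrants."""
    h = len(M) // 2
    return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
            [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def strassen(A, B):
    n = len(A)
    if n == 1:                       # 1 x 1 matrix: a single product
        return [[A[0][0] * B[0][0]]]
    a11, a12, a21, a22 = split(A)
    b11, b12, b21, b22 = split(B)
    p1 = strassen(a11, mat_sub(b12, b22))
    p2 = strassen(mat_add(a11, a12), b22)
    p3 = strassen(mat_add(a21, a22), b11)
    p4 = strassen(a22, mat_sub(b21, b11))
    p5 = strassen(mat_add(a11, a22), mat_add(b11, b22))
    p6 = strassen(mat_sub(a12, a22), mat_add(b21, b22))
    p7 = strassen(mat_sub(a11, a21), mat_add(b11, b12))
    c11 = mat_add(mat_sub(mat_add(p5, p4), p2), p6)
    c12 = mat_add(p1, p2)
    c21 = mat_add(p3, p4)
    c22 = mat_sub(mat_sub(mat_add(p1, p5), p3), p7)
    # stitch the four quadrants back into one matrix
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bottom = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bottom

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```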
</div>
</div>
</div>
<div id="outline-container-orgf02b34d" class="outline-2">
<h2 id="orgf02b34d"><span class="section-number-2">19.</span> Sorting algorithms</h2>
<div class="outline-text-2" id="text-19">
</div>
<div id="outline-container-orgc2f900d" class="outline-3">
<h3 id="orgc2f900d"><span class="section-number-3">19.1.</span> In place vs out place sorting algorithm</h3>
<div class="outline-text-3" id="text-19-1">
<p>
If the space complexity of a sorting algorithm is \(\theta (1)\), then the algorithm is called in place sorting, else the algorithm is called out place sorting.
</p>
</div>
</div>
</div>
<div id="outline-container-orgceeb3f1" class="outline-2">
<h2 id="orgceeb3f1"><span class="section-number-2">20.</span> Bubble sort</h2>
<div class="outline-text-2" id="text-20">
<p>
The simplest sorting algorithm, easy to implement, so it is useful when the number of elements to sort is small. It is an in place sorting algorithm. We compare pairs of adjacent elements of the array and swap them into the correct order. Suppose the input has n elements.
</p>
<ul class="org-ul">
<li>For the first pass of the array, we will do <b>n-1</b> comparisons between pairs: 1st and 2nd element, then 2nd and 3rd element, then 3rd and 4th element, till the comparison between the (n-1)th and nth element, swapping positions according to size. <i>A single pass will put a single element at the end of the list in its correct position.</i></li>
<li>For the second pass of the array, we will do <b>n-2</b> comparisons because the last element is already in its place after the first pass.</li>
<li>Similarly, we will continue till we only do a single comparison.</li>
<li>The total number of comparisons will be
\[ \text{Total comparisons} = (n - 1) + (n - 2) + (n - 3) + ..... + 2 + 1 \]
\[ \text{Total comparisons} = \frac{n(n-1)}{2} \]
Therefore, <b>time complexity is \(\theta (n^2)\)</b></li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">bubble_sort</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
 <span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">i is the number of comparisons in the pass</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
 <span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = n - 1; i &gt;= 1; i--){
 <span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">j is used to traverse the list</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
 <span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt; i; j++){
 <span style="color: #a626a4;">if</span>(array[j] &gt; array[j+1]){
 <span style="color: #c18401;">int</span> <span style="color: #8b4513;">temp</span> = array[j];
 array[j] = array[j+1];
 array[j+1] = temp;
 }
 }
 }
}
</pre>
</div>
<p>
<b><i>The minimum number of adjacent swaps can be calculated by checking how many swap operations are needed to get each element into its correct position.</i></b> For an ascending sort, this can be done by counting, for each element, the number of smaller elements towards its right. For a descending sort, count the number of larger elements towards the right of the given element. Example for ascending sort,
</p>
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
</colgroup>
<tbody>
<tr>
<td class="org-left">Array</td>
<td class="org-right">21</td>
<td class="org-right">16</td>
<td class="org-right">17</td>
<td class="org-right">8</td>
<td class="org-right">31</td>
</tr>
<tr>
<td class="org-left">Minimum number of swaps to get in correct position</td>
<td class="org-right">3</td>
<td class="org-right">1</td>
<td class="org-right">1</td>
<td class="org-right">0</td>
<td class="org-right">0</td>
</tr>
</tbody>
</table>
<p>
Therefore, the minimum number of swaps is (3 + 1 + 1 + 0 + 0), which is equal to 5 swaps.
</p>
<ul class="org-ul">
<li><b><i>Reducing the number of comparisons in implementation</i></b> : at the end of every pass, check the number of swaps. <b>If the number of swaps in a pass is zero, then the array is sorted.</b> This implementation does not give the minimum number of comparisons, but it reduces the number of comparisons from the default implementation. It reduces the time complexity to \(\theta (n)\) for the best case scenario, since we only need to pass through the array once.</li>
</ul>
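<p>
The early-exit optimization described above can be sketched in Python (an illustrative version; the swapped flag name is made up):
</p>

```python
def bubble_sort_early_exit(array):
    n = len(array)
    for i in range(n - 1, 0, -1):      # i comparisons in this pass
        swapped = False
        for j in range(i):
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]
                swapped = True
        if not swapped:                # no swaps in this pass: already sorted
            break
    return array

print(bubble_sort_early_exit([21, 16, 17, 8, 31]))  # [8, 16, 17, 21, 31]
```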
<p>
Recursive time complexity : \(T(n) = T(n-1) + n - 1\)
</p>
</div>
</div>
<div id="outline-container-org6e1f335" class="outline-2">
<h2 id="org6e1f335"><span class="section-number-2">21.</span> Selection sort</h2>
<div class="outline-text-2" id="text-21">
<p>
It is an in place sorting technique. In this algorithm, we find the minimum element of the array and swap it into the first position. Then we get the minimum of array[1:] and place it at index 1. Similarly, we get the minimum of array[2:] and place it at index 2. We continue till we get the minimum of array[len(array) - 2:] and place it at index len(array) - 2.
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">selection_sort</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
 <span style="color: #a626a4;">for</span>( <span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt;= n - 2; i++ ) {
 <span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Get the index of the minimum element of the sub-array [i:]</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
 <span style="color: #c18401;">int</span> <span style="color: #8b4513;">min_index</span> = i;
 <span style="color: #a626a4;">for</span>( <span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = i+1; j &lt; n; j++ )
 <span style="color: #a626a4;">if</span> (array[j] &lt; array[min_index]) { min_index = j; }
 <span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Swap the minimum element into place at the start of the sub-array</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
 <span style="color: #c18401;">int</span> <span style="color: #8b4513;">temp</span> = array[i];
 array[i] = array[min_index];
 array[min_index] = temp;
 }
}
</pre>
</div>
</div>
<div id="outline-container-org78f1644" class="outline-3">
<h3 id="org78f1644"><span class="section-number-3">21.1.</span> Time complexity</h3>
<div class="outline-text-3" id="text-21-1">
<p>
The total number of comparisons is,
\[ \text{Total number of comparisons} = (n -1) + (n-2) + (n-3) + ... + (1) \]
\[ \text{Total number of comparisons} = \frac{n(n-1)}{2} \]
For this algorithm, the number of comparisons is the same in the best, average and worst cases.
Therefore the time complexity in all cases is, \[ \text{Time complexity} = \theta (n^2) \]
</p>
<ul class="org-ul">
<li>Recurrence time complexity : \(T(n) = T(n-1) + n - 1\)</li>
</ul>
</div>
</div>
</div>
<div id="outline-container-orged540c9" class="outline-2">
<h2 id="orged540c9"><span class="section-number-2">22.</span> Insertion sort</h2>
<div class="outline-text-2" id="text-22">
<p>
It is an in place sorting algorithm.
</p>
<ul class="org-ul">
<li>In this algorithm, we first divide array into two sections. Initially, the left section has a single element and right section has all the other elements. Therefore, the left part is sorted and right part is unsorted.</li>
<li>We call the leftmost element of the right section the key.</li>
<li>Now, we insert the key into its correct position in the left section.</li>
<li>As is commonly known, an insertion operation requires shifting elements. So we shift elements in the left section.</li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">insertion_sort</span> ( <span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[] ) {
<span style="color: #a626a4;">for</span>( <span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 1; i &lt; len(array); i++ ) {
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Key is the first element of the right section of array</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">key</span> = array[j];
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = i - 1;
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Shift till we find the correct position of the key in the left section</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #a626a4;">while</span> ( j &gt; 0 &amp;&amp; array[j] &gt; key ) {
array[j + 1] = array[j];
j -= 1;
}
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Insert key at its correct position</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
array[j+1] = key;
}
}
</pre>
</div>
</div>
<div id="outline-container-org7cfbdd7" class="outline-3">
<h3 id="org7cfbdd7"><span class="section-number-3">22.1.</span> Time complexity</h3>
<div class="outline-text-3" id="text-22-1">
<p>
<b>Best Case</b> : The best case is when the input array is already sorted. In this case, we do <b>(n-1)</b> comparisons and no shifts. The time complexity will be \(\theta (n)\)
<br />
<b>Worst Case</b> : The worst case is when the input array is in descending order when we need to sort in ascending order, and vice versa (basically the reverse of sorted). The number of comparisons is
<br />
\[ [1 + 2 + 3 + .. + (n-1)] = \frac{n(n-1)}{2} \]
<br />
The number of element shift operations is
<br />
\[ [1 + 2 + 3 + .. + (n-1)] = \frac{n(n-1)}{2} \]
<br />
The total time complexity becomes \(\theta \left( 2 \frac{n(n-1)}{2} \right)\), which simplifies to \(\theta (n^2)\).
</p>
<ul class="org-ul">
<li><b>NOTE</b> : Rather than using <b>linear search</b> to find the position of the key in the left (sorted) section, we can use <b>binary search</b> to reduce the number of comparisons. The number of shifts stays the same, however, so the worst case remains \(\theta (n^2)\).</li>
</ul>
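<p>
The note above can be sketched as a small helper that finds the insertion position with binary search. This is only a sketch: <code>binary_search_position</code> is an illustrative name, and it shows the position search alone, since the shifts still cost linear time per insertion.
</p>

```c
#include <assert.h>

/* Return the index where `key` should be inserted so that
   array[0..n-1] (already sorted ascending) stays sorted.
   Searches the half-open range [low, high). */
int binary_search_position(const int array[], int n, int key) {
    int low = 0, high = n;
    while (low < high) {
        int mid = (low + high) / 2;
        if (array[mid] <= key)
            low = mid + 1;   /* key belongs to the right of mid */
        else
            high = mid;      /* key belongs at mid or to its left */
    }
    return low;
}
```

<p>
Using <code>&lt;=</code> in the comparison places equal keys after their duplicates, which preserves the stability of insertion sort.
</p>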
</div>
</div>
</div>
<div id="outline-container-org937bc7e" class="outline-2">
<h2 id="org937bc7e"><span class="section-number-2">23.</span> Inversion in array</h2>
<div class="outline-text-2" id="text-23">
<p>
The inversion count of an array is a measure of how far the array is from being sorted.
<br />
For an ascending sort, it is the number of element pairs such that array[i] &gt; array[j] and i &lt; j, or in other words, array[i] &lt; array[j] and i &gt; j.
</p>
<ul class="org-ul">
<li>For <b>ascending sort</b>, we can simply count, for each element, the number of elements to its right that are smaller.</li>
</ul>
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
</colgroup>
<tbody>
<tr>
<td class="org-left">Array</td>
<td class="org-right">10</td>
<td class="org-right">6</td>
<td class="org-right">12</td>
<td class="org-right">8</td>
<td class="org-right">3</td>
<td class="org-right">1</td>
</tr>
<tr>
<td class="org-left">Inversions</td>
<td class="org-right">4</td>
<td class="org-right">2</td>
<td class="org-right">3</td>
<td class="org-right">2</td>
<td class="org-right">1</td>
<td class="org-right">0</td>
</tr>
</tbody>
</table>
<p>
Total number of inversions = (4+2+3+2+1+0) = 12
</p>
<ul class="org-ul">
<li>For <b>descending sort</b>, we can simply count, for each element, the number of elements to its right that are larger.</li>
</ul>
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
</colgroup>
<tbody>
<tr>
<td class="org-left">Array</td>
<td class="org-right">10</td>
<td class="org-right">6</td>
<td class="org-right">12</td>
<td class="org-right">8</td>
<td class="org-right">3</td>
<td class="org-right">1</td>
</tr>
<tr>
<td class="org-left">Inversions</td>
<td class="org-right">1</td>
<td class="org-right">2</td>
<td class="org-right">0</td>
<td class="org-right">0</td>
<td class="org-right">0</td>
<td class="org-right">0</td>
</tr>
</tbody>
</table>
<p>
Total number of inversions = 1 + 2 = 3
</p>
<ul class="org-ul">
<li>For an array of size <b>n</b></li>
</ul>
<p>
\[ \text{Maximum possible number of inversions} = \frac{n(n-1)}{2} \]
\[ \text{Minimum possible number of inversions} = 0 \]
</p>
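<p>
The inversion count for an ascending sort can be computed directly from the definition. Below is a minimal \(O(n^2)\) sketch; <code>count_inversions</code> is an illustrative name (a modified merge sort can do the same count in \(O(n\ log_2(n))\)).
</p>

```c
#include <assert.h>

/* Count pairs (i, j) with i < j and array[i] > array[j],
   i.e. inversions with respect to an ascending sort. */
int count_inversions(const int array[], int n) {
    int inversions = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (array[i] > array[j])
                inversions++;
    return inversions;
}
```

<p>
On the example array above, this returns 12; a sorted array returns 0 and a reverse-sorted array returns \(\frac{n(n-1)}{2}\).
</p>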
</div>
<div id="outline-container-orgca4bf29" class="outline-3">
<h3 id="orgca4bf29"><span class="section-number-3">23.1.</span> Relation between time complexity of insertion sort and inversion</h3>
<div class="outline-text-3" id="text-23-1">
<p>
If the inversion of an array is f(n), then the time complexity of the insertion sort will be \(\theta (n + f(n))\).
</p>
</div>
</div>
</div>
<div id="outline-container-org8edc47c" class="outline-2">
<h2 id="org8edc47c"><span class="section-number-2">24.</span> Quick sort</h2>
<div class="outline-text-2" id="text-24">
<p>
It is a divide and conquer technique. It uses a partition algorithm which chooses an element from the array, then places all smaller elements to its left and all larger elements to its right. Then we can take these two parts of the array and recursively place all elements in their correct positions. For ease, the element chosen by the partition algorithm is either the leftmost or the rightmost element.
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">quick_sort</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">low</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">high</span>){
<span style="color: #a626a4;">if</span>(low &lt; high){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span> = partition(array, low, high);
quick_sort(array, low, x-1);
quick_sort(array, x+1, high);
}
}
</pre>
</div>
<p>
As we can see, the main component of this algorithm is the partition algorithm.
</p>
</div>
<div id="outline-container-org8e462f5" class="outline-3">
<h3 id="org8e462f5"><span class="section-number-3">24.1.</span> Lomuto partition</h3>
<div class="outline-text-3" id="text-24-1">
<p>
The partition algorithm will work as follows:
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Will return the index where the array is partitioned</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">int</span> <span style="color: #0184bc;">partition</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">low</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">high</span>){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">pivot</span> = array[high];
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">This will point to the element greater than pivot</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = low - 1;
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = low; j &lt; high; j++){
<span style="color: #a626a4;">if</span>(array[j] &lt;= pivot){
i += 1;
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">tmp</span> = array[i]; array[i] = array[j]; array[j] = tmp;
}
}
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">tmp</span> = array[i+1]; array[i+1] = array[high]; array[high] = tmp;
<span style="color: #a626a4;">return</span> (i + 1);
}
</pre>
</div>
<ul class="org-ul">
<li>Time complexity</li>
</ul>
<p>
For an array of size <b>n</b>, the number of comparisons done by this algorithm is always <b>n - 1</b>. Therefore, the time complexity of this partition algorithm is,
\[ T(n) = \theta (n) \]
</p>
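<p>
Putting the partition and quick_sort functions together gives a self-contained sketch; the only change from the listings above is that the swaps are written out with a temporary variable, as C requires.
</p>

```c
#include <assert.h>

/* Lomuto partition: uses array[high] as the pivot and
   returns the pivot's final index. */
static int partition(int array[], int low, int high) {
    int pivot = array[high];
    int i = low - 1;                  /* boundary of the "<= pivot" region */
    for (int j = low; j < high; j++) {
        if (array[j] <= pivot) {
            i += 1;
            int tmp = array[i]; array[i] = array[j]; array[j] = tmp;
        }
    }
    int tmp = array[i + 1]; array[i + 1] = array[high]; array[high] = tmp;
    return i + 1;
}

void quick_sort(int array[], int low, int high) {
    if (low < high) {
        int x = partition(array, low, high);
        quick_sort(array, low, x - 1);
        quick_sort(array, x + 1, high);
    }
}
```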
</div>
</div>
<div id="outline-container-orgad5fe08" class="outline-3">
<h3 id="orgad5fe08"><span class="section-number-3">24.2.</span> Time complexity of quicksort</h3>
<div class="outline-text-3" id="text-24-2">
<p>
In quick sort, we don't have a fixed recurrence relation. The recurrence relations differ for different cases.
</p>
<ul class="org-ul">
<li><b>Best Case</b> : The partition algorithm always divides the array into two equal parts. In this case, the recurrence relation becomes
\[ T(n) = 2T(n/2) + \theta (n) \]
Where, \(\theta (n)\) is the time complexity for creating partition.
<br />
Using the master's theorem.
\[ T(n) = \theta( n.log(n) ) \]</li>
<li><b>Worst Case</b> : The partition algorithm always creates the partition at one of the extreme positions of the array. This leaves a single part with <b>n-1</b> elements. Therefore, the quicksort algorithm has to be called on the remaining <b>n-1</b> elements of the array.
\[ T(n) = T(n-1) + \theta (n) \]
Again, \(\theta (n)\) is the time complexity for creating partition.
<br />
Solving this recurrence (it telescopes to \(n + (n-1) + \dots + 1\)),
\[ T(n) = \theta (n^2) \]</li>
<li><b>Average Case</b> : In quick sort, the average case is closer to the best case than to the worst case.</li>
</ul>
<p>
<br />
To get the average case, we will <b>consider a recursive function for the number of comparisons</b> \(C(n)\).
<br />
For the function \(C(n)\), there are \(n-1\) comparisons for the partition algorithm.
<br />
Now, suppose that the index of partition is <b>i</b>.
<br />
This will create two recursive comparison terms \(C(i)\) and \(C(n-i-1)\).
<br />
<b>i</b> can be any number between <b>0</b> and <b>n-1</b>, with each case being equally probable. So the average number of comparisons from the recursive calls will be
\[ \frac{1}{n} \sum_{i=0}^{n-1} \left( C(i) + C(n-i-1) \right) \]
Therefore, the total number of comparisons for input size <b>n</b> will be,
\[ C(n) = \left( n-1 \right) + \frac{1}{n} \sum_{i=0}^{n-1} \left( C(i) + C(n-i-1) \right) \]
Solving the above recurrence relation will give us,
\[ C(n) \approx 2\ n\ ln(n) \]
\[ C(n) \approx 1.39\ n\ log_2(n) \]
Therefore, the time complexity in average case becomes,
\[ T(n) = \theta (n\ log_2(n)) \]
</p>
</div>
</div>
<div id="outline-container-org029ad1b" class="outline-3">
<h3 id="org029ad1b"><span class="section-number-3">24.3.</span> Number of comparisons</h3>
<div class="outline-text-3" id="text-24-3">
<p>
The number of comparisons in quick sort for,
</p>
<ul class="org-ul">
<li>Worst Case : \[ \text{Number of comparisons} = \frac{n(n-1)}{2} \]</li>
</ul>
</div>
</div>
</div>
<div id="outline-container-org9d7e721" class="outline-2">
<h2 id="org9d7e721"><span class="section-number-2">25.</span> Merging two sorted arrays (2-Way Merge)</h2>
<div class="outline-text-2" id="text-25">
<p>
Suppose we have two arrays that are already sorted. The first array has <b>n</b> elements and the second array has <b>m</b> elements.
<br />
The way to merge them is to compare elements between the two arrays in sequence. We first place a pointer at the start of both arrays. The elements pointed to are compared and the smaller one is added to our new array. Then we move the pointer on that array forward. These comparisons are repeated until we reach the end of one of the arrays. At this point, we can simply append all the elements of the remaining array.
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">int</span> *<span style="color: #0184bc;">merge</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">a</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">b</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">m</span>){
<span style="color: #c18401;">int</span> *<span style="color: #8b4513;">c</span> = malloc((m+n) * <span style="color: #a626a4;">sizeof</span>(<span style="color: #c18401;">int</span>));
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; <span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0;
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">k</span> = 0;
<span style="color: #a626a4;">while</span> (i != n &amp;&amp; j != m) {
<span style="color: #a626a4;">if</span> ( a[i] &gt; b[j] ) { c[k++] = b[j++]; } <span style="color: #a626a4;">else</span> { c[k++] = a[i++]; };
}
<span style="color: #a626a4;">while</span> (i != n) {
c[k++] = a[i++];
}
<span style="color: #a626a4;">while</span> (j != m) {
c[k++] = b[j++];
}
<span style="color: #a626a4;">return</span> c;
}
</pre>
</div>
<ul class="org-ul">
<li>The maximum number of comparisons to merge the arrays is (m + n - 1).</li>
<li>The minimum number of comparisons to merge the arrays is either <b>m</b> or <b>n</b>, depending on which one is smaller.</li>
</ul>
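<p>
To check the comparison bounds above, here is a sketch of the same merge with a comparison counter added; <code>merge_counting</code> and its out-parameter are illustrative additions, not part of the original routine.
</p>

```c
#include <assert.h>
#include <stdlib.h>

/* Two-way merge that also counts comparisons, to check the
   (m + n - 1) upper bound and min(m, n) lower bound. */
int *merge_counting(const int a[], int n, const int b[], int m, int *comparisons) {
    int *c = malloc((size_t)(n + m) * sizeof(int));
    int i = 0, j = 0, k = 0;
    *comparisons = 0;
    while (i != n && j != m) {
        (*comparisons)++;
        if (a[i] > b[j]) c[k++] = b[j++];
        else             c[k++] = a[i++];
    }
    while (i != n) c[k++] = a[i++];  /* append leftovers of a */
    while (j != m) c[k++] = b[j++];  /* append leftovers of b */
    return c;
}
```

<p>
Interleaved inputs of equal size hit the maximum of (m + n - 1); when one array is entirely smaller than the other, only min(m, n) comparisons are made.
</p>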
</div>
</div>
<div id="outline-container-org7a957cb" class="outline-2">
<h2 id="org7a957cb"><span class="section-number-2">26.</span> Merging k sorted arrays (k-way merge)</h2>
<div class="outline-text-2" id="text-26">
<p>
k-way merge algorithms take k different sorted arrays and merge them into a single array. The algorithm is the same as in two-way merge, except we need to get the smallest element among the pointers on the k arrays and then move the corresponding pointer.
</p>
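<p>
A naive sketch of this idea, scanning the k front elements for the minimum at each step; with a min-heap over the fronts the per-step cost drops from k to about \(log_2(k)\). Function and parameter names here are illustrative.
</p>

```c
#include <assert.h>

/* Merge k sorted arrays into `out`.
   arrays[i] has sizes[i] elements; out must hold the total.
   Naive version: linear scan over the k fronts at each step. */
void k_way_merge(const int *arrays[], const int sizes[], int k, int out[]) {
    int pos[k];                         /* per-array read pointer (C99 VLA) */
    int total = 0;
    for (int i = 0; i < k; i++) { pos[i] = 0; total += sizes[i]; }
    for (int done = 0; done < total; done++) {
        int best = -1;
        for (int i = 0; i < k; i++) {   /* find array with the smallest front */
            if (pos[i] < sizes[i] &&
                (best == -1 || arrays[i][pos[i]] < arrays[best][pos[best]]))
                best = i;
        }
        out[done] = arrays[best][pos[best]++];
    }
}
```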
</div>
</div>
<div id="outline-container-orgb932252" class="outline-2">
<h2 id="orgb932252"><span class="section-number-2">27.</span> Merge sort</h2>
<div class="outline-text-2" id="text-27">
<p>
Merge sort is a pure divide and conquer algorithm. In this sorting algorithm, we merge the sorted sub-arrays till we get a final sorted array.<br />
The algorithm will work as follows :
</p>
<ol class="org-ol">
<li>Divide the array of n elements into <b>n</b> subarrays, each having one element.</li>
<li>Repeatedly merge the subarrays to form merged subarrays of larger sizes until there is one list remaining.</li>
</ol>
<p>
For divide and conquer steps:
</p>
<ul class="org-ul">
<li><b>Divide</b> : Divide the array from the middle into two equal sizes.</li>
<li><b>Conquer</b> : Call merge sort recursively on the two subarrays</li>
<li><b>Combine</b> : Merge the sorted array</li>
</ul>
<p>
The algorithm works as follows (this isn't real C code)
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">A function that will merge two sorted arrays</span>
<span style="color: #c18401;">int</span>[] merge(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">first</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">second</span>[]);
<span style="color: #c18401;">int</span>[] merge_sort(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">left</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">right</span>){
<span style="color: #a626a4;">if</span>(left &lt; right){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">mid</span> = (left + right) / 2;
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">sorted_first</span>[] = merge_sort(array[], left, mid);
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">sorted_second</span>[] = merge_sort(array[], mid + 1, right);
<span style="color: #a626a4;">return</span> merge(sorted_first, sorted_second);
}
}
</pre>
</div>
<p>
This algorithm is often used in languages which have great support for linked lists, for example Lisp and Haskell. In more traditional C-like languages, quicksort is often easier to implement.
<br />
An implementation in C language is as follows.
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">buffer is memory of size equal to or bigger than size of array</span>
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">buffer is used when merging the arrays</span>
<span style="color: #c18401;">void</span> <span style="color: #0184bc;">merge_sort</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">left</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">right</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">buffer</span>[]){
<span style="color: #a626a4;">if</span>(left &lt; right){
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">Divide part</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">mid</span> = ( left + right ) / 2;
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">Conquer part</span>
merge_sort(array,left, mid, buffer);
merge_sort(array, mid + 1, right, buffer);
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">Combine part : Merges the two sorted parts</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = left; <span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = mid + 1; <span style="color: #c18401;">int</span> <span style="color: #8b4513;">k</span> = 0;
<span style="color: #a626a4;">while</span>( i != (mid+1) &amp;&amp; j != (right+1) ){
<span style="color: #a626a4;">if</span>(array[i] &lt; array[j]) { buffer[k++] = array[i++]; } <span style="color: #a626a4;">else</span> { buffer[k++] = array[j++]; }
}
<span style="color: #a626a4;">while</span>(i != (mid+1))
buffer[k++] = array[i++];
<span style="color: #a626a4;">while</span>(j != (right+1))
buffer[k++] = array[j++];
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span> = left; x &lt;= right; x++)
array[x] = buffer[x - left];
}
}
</pre>
</div>
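<p>
A quick usage check of the buffer-based version, repeated here so the example compiles on its own. The one deliberate change is that the merge comparison is made non-strict, which also keeps equal elements in their original order (stability).
</p>

```c
#include <assert.h>

/* Same structure as the version above; taking from the left part
   on ties keeps the sort stable. */
void merge_sort(int array[], int left, int right, int buffer[]) {
    if (left < right) {
        int mid = (left + right) / 2;                 /* divide */
        merge_sort(array, left, mid, buffer);         /* conquer */
        merge_sort(array, mid + 1, right, buffer);
        int i = left, j = mid + 1, k = 0;             /* combine */
        while (i != mid + 1 && j != right + 1) {
            if (array[i] <= array[j]) buffer[k++] = array[i++];
            else                      buffer[k++] = array[j++];
        }
        while (i != mid + 1)  buffer[k++] = array[i++];
        while (j != right + 1) buffer[k++] = array[j++];
        for (int x = left; x <= right; x++)
            array[x] = buffer[x - left];
    }
}
```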
</div>
<div id="outline-container-org49114b6" class="outline-3">
<h3 id="org49114b6"><span class="section-number-3">27.1.</span> Time complexity</h3>
<div class="outline-text-3" id="text-27-1">
<p>
Unlike quick sort, <b>the recurrence relation is the same for merge sort in all cases.</b>
<br />
Since the divide part divides the array into two equal sizes, the input size is halved (i.e, <b>T(n/2)</b> ).
<br />
In conquer part, there are two calls so <b>2.T(n/2)</b> is added to time complexity.
<br />
The cost for merging two arrays of size n/2 each is between <b>n/2</b> and <b>n-1</b> comparisons. That is to say, the time complexity to merge two arrays of size n/2 each is always \(\theta (n)\). Thus, the final recurrence relation is
\[ T(n) = 2.T(n/2) + \theta (n) \]
Using the master's theorem.
\[ T(n) = \theta (n.log_2n) \]
</p>
</div>
</div>
<div id="outline-container-org87aa6fd" class="outline-3">
<h3 id="org87aa6fd"><span class="section-number-3">27.2.</span> Space complexity</h3>
<div class="outline-text-3" id="text-27-2">
<p>
As we can see in the C code, the space complexity is \(\theta (n)\)
</p>
</div>
</div>
</div>
<div id="outline-container-org3aabfd6" class="outline-2">
<h2 id="org3aabfd6"><span class="section-number-2">28.</span> Stable and unstable sorting algorithms</h2>
<div class="outline-text-2" id="text-28">
<p>
We call sorting algorithms unstable or stable on the basis of whether they change order of equal values.
</p>
<ul class="org-ul">
<li><b>Stable sorting algorithm</b> : a sorting algorithm that preserves the order of the elements with equal values.</li>
<li><b>Unstable sorting algorithm</b> : a sorting algorithm that does not preserve the order of the elements with equal values.
<br /></li>
</ul>
<p>
This is of importance when we store data as pairs of keys and values and then sort the data using the keys, since we may want to preserve the order in which the entries were added.
<br />
Example, suppose we add (key, value) pairs as:
</p>
<pre class="example">
(2, v1), (1, v2), (3, v3), (1, v1), (2, v4), (3, v2)
</pre>
<p>
Now, if we sort using the keys, a stable sorting algorithm will preserve the order of elements with equal keys. So the output is always
</p>
<pre class="example">
(1, v2), (1, v1), (2,v1), (2, v4), (3, v3), (3, v2)
</pre>
<p>
i.e, the <b>order of entries with equal keys is preserved</b>.
<br />
Whereas an unstable sorting algorithm will sort without preserving the order of entries with equal keys.
</p>
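<p>
The example above can be reproduced with a small sketch: insertion sort on (key, value) structs, comparing keys only. <code>Pair</code> and <code>insertion_sort_pairs</code> are illustrative names; the strictly-greater shift condition is what makes the sort stable, since equal keys are never moved past each other.
</p>

```c
#include <assert.h>
#include <string.h>

typedef struct { int key; const char *value; } Pair;

/* Stable insertion sort on pairs, ordering by key only. */
void insertion_sort_pairs(Pair array[], int n) {
    for (int i = 1; i < n; i++) {
        Pair key = array[i];
        int j = i - 1;
        /* `>` (not `>=`) leaves equal keys in their original order */
        while (j >= 0 && array[j].key > key.key) {
            array[j + 1] = array[j];
            j -= 1;
        }
        array[j + 1] = key;
    }
}
```

<p>
Sorting the pairs (2, v1), (1, v2), (3, v3), (1, v1), (2, v4), (3, v2) by key yields exactly the stable output listed above.
</p>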
</div>
</div>
<div id="outline-container-org58c0022" class="outline-2">
<h2 id="org58c0022"><span class="section-number-2">29.</span> Non-comparative sorting algorithms</h2>
<div class="outline-text-2" id="text-29">
<p>
Sorting algorithms which do not use comparisons to sort elements are called non-comparative sorting algorithms. These tend to be faster than comparison-based sorting algorithms.
</p>
</div>
<div id="outline-container-org7078369" class="outline-3">
<h3 id="org7078369"><span class="section-number-3">29.1.</span> Counting sort</h3>
<div class="outline-text-3" id="text-29-1">
<ul class="org-ul">
<li>Counting sort <b>only works on integer arrays</b>.</li>
<li>Counting sort only works if <b>all elements of the array are non-negative</b>, i.e, elements are only allowed to be in the range [0, max].</li>
</ul>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">The input array is sorted and the result is stored in the output array</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">max is the largest element of the array</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">void</span> <span style="color: #0184bc;">counting_sort</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">input</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">max</span> ,<span style="color: #c18401;">int</span> <span style="color: #8b4513;">output</span>[]){
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">count array should have a size greater than or equal to (max + 1)</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">count</span>[max + 1];
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">initialize count array to zero, can also use memset</span>
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; max+1; i++) count[i] = 0;
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">i from 0 to len(array) - 1</span>
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">this loop stores number of elements equal to i in count array</span>
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; len(array); i++)
count[input[i]] = count[input[i]] + 1;
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">i from 1 to max</span>
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">this loop stores number of elements less that or equal to i in count array</span>
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 1; i &lt;= max; i++)
count[i] = count[i] + count[i - 1];
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">i from len(array) - 1 to 0</span>
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = len(input) - 1; i &gt;= 0; i--){
count[input[i]] = count[input[i]] - 1;
output[count[input[i]]] = input[i];
}
}
</pre>
</div>
<ul class="org-ul">
<li><p>
<b>Time complexity</b> : Since there are only simple loops and arithmetic operations, we can get the time complexity by counting the number of times the loop bodies execute.
</p>
<p>
\[ \text{Number of times loop bodies execute} = (max + 1) + n + max + n \]
\[ \text{Where, } n = len(input) \text{ i.e, the input size} \]
</p>
<p>
Therefore,
\[ \text{Number of times loop bodies execute} = 2n + 2 \cdot max + 1 \]
\[ \text{Time complexity} = \theta (n + max) \]
</p></li>
</ul>
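<p>
A self-contained variant of the routine above, with the array length passed explicitly in place of the pseudocode <code>len(...)</code>; otherwise the same three phases: count, prefix-sum, place.
</p>

```c
#include <assert.h>

/* Counting sort for values in [0, max]; result goes to output.
   `n` is the number of elements, passed explicitly here. */
void counting_sort(const int input[], int n, int max, int output[]) {
    int count[max + 1];                    /* C99 VLA */
    for (int i = 0; i <= max; i++) count[i] = 0;
    for (int i = 0; i < n; i++)            /* count[v] = occurrences of v */
        count[input[i]]++;
    for (int i = 1; i <= max; i++)         /* count[v] = #elements <= v */
        count[i] += count[i - 1];
    for (int i = n - 1; i >= 0; i--) {     /* backwards scan keeps it stable */
        count[input[i]]--;
        output[count[input[i]]] = input[i];
    }
}
```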
</div>
</div>
<div id="outline-container-org2198ab4" class="outline-3">
<h3 id="org2198ab4"><span class="section-number-3">29.2.</span> Radix sort</h3>
<div class="outline-text-3" id="text-29-2">
<p>
In radix sort, we sort using the digits, from least significant digit (lsd) to most significant digit (msd). In other words, we sort digits from right to left. The algorithm used to sort digits <b>should be a stable sorting algorithm</b>.
</p>
<div id="org22aeb43" class="figure">
<p><img src="lectures/imgs/radix-sort.png" alt="radix-sort.png" />
</p>
</div>
<p>
For the following example, we will use bubble sort since it is the easiest to implement (and it is stable). But, for best performance, <b>radix sort is paired with counting sort</b>.
</p>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">d = 0, will return digit at unit's place</span>
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">d = 1, will return digit at ten's place</span>
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">and so on.</span>
<span style="color: #c18401;">int</span> <span style="color: #0184bc;">get_digit</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">d</span>){
assert(d &gt;= 0);
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">place</span> = (<span style="color: #c18401;">int</span>) pow(10, d);
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">digit</span> = (n / place) % 10;
<span style="color: #a626a4;">return</span> digit;
}
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">bubble sort the array for only digits of the given place</span>
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">d = 0, unit's place</span>
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">d = 1, ten's place</span>
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">and so on.</span>
<span style="color: #c18401;">void</span> <span style="color: #0184bc;">bubble_sort_digit</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">d</span>){
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = len(array); i &gt;= 1; i--){
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt; i; j++){
<span style="color: #a626a4;">if</span>(get_digit(array[j], d) &gt; get_digit(array[j + 1], d)){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">tmp</span> = array[j]; array[j] = array[j + 1]; array[j + 1] = tmp;
}
}
}
}
<span style="color: #c18401;">void</span> <span style="color: #0184bc;">radix_sort</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">int</span> <span style="color: #8b4513;">no_of_digits</span>){
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; no_of_digits ; i++){
bubble_sort_digit(array, i );
}
}
</pre>
</div>
<ul class="org-ul">
<li><b>Time complexity</b> : \[ \text{Time Complexity} = \theta (d.(n + max)) \]
Where, <b>d = number of digits in the max element</b>, and
<br />
radix sort is paired with counting sort.</li>
</ul>
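<p>
The counting-sort pairing mentioned above can be sketched as an LSD radix sort in which every digit pass is a stable counting sort over the ten digit values; <code>counting_pass</code> is an illustrative name.
</p>

```c
#include <assert.h>

/* One stable counting-sort pass over digit `d` (0 = unit's place). */
static void counting_pass(int array[], int n, int d) {
    int place = 1;
    for (int i = 0; i < d; i++) place *= 10;
    int output[n];                         /* C99 VLA */
    int count[10] = {0};
    for (int i = 0; i < n; i++)            /* count occurrences of each digit */
        count[(array[i] / place) % 10]++;
    for (int i = 1; i < 10; i++)           /* prefix sums */
        count[i] += count[i - 1];
    for (int i = n - 1; i >= 0; i--) {     /* backwards scan => stable */
        int digit = (array[i] / place) % 10;
        output[--count[digit]] = array[i];
    }
    for (int i = 0; i < n; i++) array[i] = output[i];
}

/* Sort non-negative integers with up to `no_of_digits` digits. */
void radix_sort(int array[], int n, int no_of_digits) {
    for (int d = 0; d < no_of_digits; d++)
        counting_pass(array, n, d);
}
```

<p>
Each pass costs \(\theta (n + 10)\), and there are d passes, matching the \(\theta (d \cdot (n + max))\) bound above with max = 10 digit values per pass.
</p>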
</div>
</div>
<div id="outline-container-orge88fb20" class="outline-3">
<h3 id="orge88fb20"><span class="section-number-3">29.3.</span> Bucket sort</h3>
<div class="outline-text-3" id="text-29-3">
<p>
Counting sort only works for non-negative integers. Bucket sort is a generalization of counting sort. If we know the range of the elements in the array, we can sort them using bucket sort. In bucket sort, we distribute the elements into buckets (collections of elements), where each bucket holds elements of a different range. Then, we can either sort the elements in the buckets using some other sorting algorithm or apply bucket sort recursively.
<br />
Bucket sort works as follows:
</p>
<ol class="org-ol">
<li>Set up empty buckets</li>
<li><b>Scatter</b> the elements into buckets based on different ranges.</li>
<li><b>Sort</b> elements in non-empty buckets.</li>
<li><b>Gather</b> the elements from buckets and place them in the original array.</li>
</ol>
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left" />
<col class="org-left" />
</colgroup>
<tbody>
<tr>
<td class="org-left"><img src="lectures/imgs/Bucket_sort_1.svg " alt="Bucket_sort_1.svg " /></td>
<td class="org-left"><b>Elements are distributed among bins</b></td>
</tr>
<tr>
<td class="org-left"><img src="lectures/imgs/Bucket_sort_2.svg" alt="Bucket_sort_2.svg" class="org-svg" /></td>
<td class="org-left"><b>Then, elements are sorted within each bin and then result is concatenated</b></td>
</tr>
</tbody>
</table>
<p>
To get the ranges of the buckets, we can use the smallest (min) and biggest (max) element of the array.
<br />
The range of values covered by each bucket will be,
</p>
<p>
\[ \text{Range of each bucket} (r) = \frac{(\text{max} - \text{min} + 1)}{ \text{number of buckets}} \]
</p>
<p>
Then, the ranges of buckets will be,
</p>
<ul class="org-ul">
<li>(min + 0.r) &lt;==&gt; (min + 1.r - 1)</li>
<li>(min + 1.r) &lt;==&gt; (min + 2.r - 1)</li>
<li>(min + 2.r) &lt;==&gt; (min + 3.r - 1)</li>
<li>(min + 3.r) &lt;==&gt; (min + 4.r - 1)</li>
<li><b>etc.</b></li>
</ul>
<p>
Then, we can get the bucket number to which we add any array[i] as,
\[ \text{bucket index} = \frac{ \text{array[i]} - \text{min} }{ r } \]
Where,
\[ r = \frac{(\text{max} - \text{min} + 1)}{ \text{number of buckets}} \]
</p>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">bucket_sort</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[], <span style="color: #c18401;">size_t</span> <span style="color: #8b4513;">n</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">min</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">max</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">number_of_buckets</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">output</span>[]){
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">a bucket will have capacity of [ (max - min + 1) / number_of_buckets ] elements</span>
Vector&lt;<span style="color: #c18401;">int</span>&gt; buckets[number_of_buckets];
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">r</span> = (max - min + 1) / number_of_buckets;
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">if (max - min + 1) &lt; number_of_buckets, then r could be 0.</span>
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">in this case, just set r to 1</span>
<span style="color: #a626a4;">if</span>(r &lt;= 0) r = 1;
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++){
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">put array[i] in bucket number (array[i] - min) / r</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">b</span> = (array[i] - min) / r;
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">integer division can push the largest values one past the last bucket, so clamp the index</span>
<span style="color: #a626a4;">if</span>(b &gt;= number_of_buckets) b = number_of_buckets - 1;
buckets[b].put(array[i]);
}
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">sort the elements of each bucket and append them to the output array</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">idx</span> = 0;
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; number_of_buckets; i++){
buckets[i].sort();
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt; buckets[i].size(); j++)
output[idx++] = buckets[i][j];
}
}
</pre>
</div>
</div>
<div id="outline-container-org8b4d13b" class="outline-4">
<h4 id="org8b4d13b"><span class="section-number-4">29.3.1.</span> Time complexity</h4>
<div class="outline-text-4" id="text-29-3-1">
<p>
The time complexity of bucket sort depends on which sorting algorithm is used to sort the elements within a bucket.
<br />
We also have to add the time taken to initialize the buckets. Suppose there are k buckets; then the time to initialize them is \(\theta (k)\).
<br />
Scattering the elements into the buckets takes \(\theta (n)\) time.
</p>
<ul class="org-ul">
<li><b>Worst Case</b> : The worst case for bucket sort occurs when all the <b>elements land in the same bucket</b>. In this case, the <b>time complexity is the same as that of the sorting algorithm used</b>, plus the time to initialize the buckets and scatter the elements. Therefore,
\[ \text{Time complexity} = \theta (n + k + f(n) ) \]
Where, \(f(n)\) is the time complexity of the sorting algorithm and <b>k</b> is the number of buckets.
<br />
<br />
<br /></li>
<li><p>
<b>Best Case &amp; Average Case</b> : The best case for bucket sort occurs when the elements are distributed evenly, so every bucket holds \(n/k\) elements. The time taken to sort a single bucket is then \(f(n/k)\), and the time taken to sort all k buckets is,
\[ \text{time to sort all buckets} = k \times f \left( \frac{n}{k} \right) \]
Suppose we were using insertion sort, then
\[ \text{for insertion sort} : f(n) = n^2 \]
\[ f \left( \frac{n}{k} \right) = \frac{n^2}{k^2} \]
Therefore,
\[ \text{time to sort all buckets} = \frac{n^2}{k} \]
</p>
<p>
So, the total time also includes the time to initialize the buckets and to scatter the elements.
</p>
<p>
\[ \text{Time complexity} = \theta ( n + k + \frac{n^2}{k} ) \]
</p>
<p>
This is the average-case time complexity.
For the best case, we take the number of buckets to be approximately equal to the number of elements,
\[ k \approx n \]
</p>
<p>
Therefore, in the best case, substituting \(k \approx n\),
\[ \text{Time complexity} = \theta \left( n + n + \frac{n^2}{n} \right) = \theta (3n) = \theta (n) \]
</p></li>
</ul>
</div>
</div>
</div>
</div>
</div>
</body>
</html>