<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<!-- 2023-06-23 Fri 16:30 -->
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Algorithms</title>
<meta name="author" content="Anmol Nawani" />
<meta name="generator" content="Org Mode" />
<link rel="stylesheet" type="text/css" href="src/readtheorg_theme/css/htmlize.css"/>
<link rel="stylesheet" type="text/css" href="src/readtheorg_theme/css/readtheorg.css"/>
<script type="text/javascript" src="src/lib/js/jquery.min.js"></script>
<script type="text/javascript" src="src/lib/js/bootstrap.min.js"></script>
<script type="text/javascript" src="src/lib/js/jquery.stickytableheaders.min.js"></script>
<script type="text/javascript" src="src/readtheorg_theme/js/readtheorg.js"></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
displayAlign: "center",
displayIndent: "0em",
"HTML-CSS": { scale: 100,
linebreaks: { automatic: "false" },
webFont: "TeX"
},
SVG: {scale: 100,
linebreaks: { automatic: "false" },
font: "TeX"},
NativeMML: {scale: 100},
TeX: { equationNumbers: {autoNumber: "AMS"},
MultLineWidth: "85%",
TagSide: "right",
TagIndent: ".8em"
}
});
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS_HTML"></script>
</head>
<body>
<div id="content" class="content">
<h1 class="title">Algorithms</h1>
<div id="table-of-contents" role="doc-toc">
<h2>Table of Contents</h2>
<div id="text-table-of-contents" role="doc-toc">
<ul>
<li><a href="#org4514307">1. Lecture 1</a>
<ul>
<li><a href="#orgb881061">1.1. Data structure and Algorithm</a></li>
<li><a href="#org7860130">1.2. Characteristics of Algorithms</a></li>
<li><a href="#orgad5cd54">1.3. Behaviour of algorithm</a>
<ul>
<li><a href="#org6e21d5a">1.3.1. Best, Worst and Average Cases</a></li>
<li><a href="#org654d982">1.3.2. Bounds of algorithm</a></li>
</ul>
</li>
<li><a href="#org8c53531">1.4. Asymptotic Notations</a>
<ul>
<li><a href="#org8ec8ad1">1.4.1. Big-Oh Notation [O]</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#org42af6e0">2. Lecture 2</a>
<ul>
<li><a href="#org716d410">2.1. Asymptotic Notations</a>
<ul>
<li><a href="#orge74888c">2.1.1. Omega Notation [ \(\Omega\) ]</a></li>
<li><a href="#org0949287">2.1.2. Theta Notation [ \(\theta\) ]</a></li>
<li><a href="#orge63f8b1">2.1.3. Little-Oh Notation [o]</a></li>
<li><a href="#org0b0f72f">2.1.4. Little-Omega Notation [ \(\omega\) ]</a></li>
</ul>
</li>
<li><a href="#org5a61c2c">2.2. Comparing Growth rate of funtions</a>
<ul>
<li><a href="#org3484bdd">2.2.1. Applying limit</a></li>
<li><a href="#org25e014e">2.2.2. Using logarithm</a></li>
<li><a href="#org0790a44">2.2.3. Common funtions</a></li>
</ul>
</li>
<li><a href="#org8762226">2.3. Properties of Asymptotic Notations</a>
<ul>
<li><a href="#org726a6e4">2.3.1. Big-Oh</a></li>
<li><a href="#org83b5d5a">2.3.2. Properties</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#org0d285a1">3. Lecture 3</a>
<ul>
<li><a href="#org4801110">3.1. Calculating time complexity of algorithm</a>
<ul>
<li><a href="#org9a24492">3.1.1. Sequential instructions</a></li>
<li><a href="#orga72d036">3.1.2. Iterative instructions</a></li>
<li><a href="#org796d28b">3.1.3. An example for time complexities of nested loops</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#org84b365f">4. Lecture 4</a>
<ul>
<li><a href="#org1ac7bb0">4.1. Time complexity of recursive instructions</a>
<ul>
<li><a href="#org1e82264">4.1.1. Time complexity in recursive form</a></li>
</ul>
</li>
<li><a href="#org34532f5">4.2. Solving Recursive time complexities</a>
<ul>
<li><a href="#org7953024">4.2.1. Iterative method</a></li>
<li><a href="#org840cd55">4.2.2. Master Theorem for Subtract recurrences</a></li>
<li><a href="#orgba1fe15">4.2.3. Master Theorem for divide and conquer recurrences</a></li>
</ul>
</li>
<li><a href="#orgcffc2b7">4.3. Square root recurrence relations</a>
<ul>
<li><a href="#orgdb02f9d">4.3.1. Iterative method</a></li>
<li><a href="#orga185bd1">4.3.2. Master Theorem for square root recurrence relations</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#org4ee9130">5. Lecture 5</a>
<ul>
<li><a href="#org0646087">5.1. Extended Master's theorem for time complexity of recursive algorithms</a>
<ul>
<li><a href="#orgf381287">5.1.1. For (k = -1)</a></li>
<li><a href="#org95d965b">5.1.2. For (k &lt; -1)</a></li>
</ul>
</li>
<li><a href="#org62ea5af">5.2. Tree method for time complexity of recursive algorithms</a>
<ul>
<li><a href="#org426f45a">5.2.1. Avoiding tree method</a></li>
</ul>
</li>
<li><a href="#org33c011b">5.3. Space complexity</a>
<ul>
<li><a href="#org24de75a">5.3.1. Auxiliary space complexity</a></li>
</ul>
</li>
<li><a href="#org3e6fc48">5.4. Calculating auxiliary space complexity</a>
<ul>
<li><a href="#org328fc47">5.4.1. Data Space used</a></li>
<li><a href="#orga6b6723">5.4.2. Code Execution space in recursive algorithm</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#orgecf5585">6. Lecture 6</a>
<ul>
<li><a href="#org4ea791c">6.1. Divide and Conquer algorithms</a></li>
<li><a href="#org7d2edaf">6.2. Searching for element in array</a>
<ul>
<li><a href="#orgb0f0eb9">6.2.1. Straight forward approach for searching (<b>Linear Search</b>)</a></li>
<li><a href="#org810960f">6.2.2. Divide and conquer approach (<b>Binary search</b>)</a></li>
</ul>
</li>
<li><a href="#org6977da8">6.3. Max and Min element from array</a>
<ul>
<li><a href="#org451edcf">6.3.1. Straightforward approach</a></li>
<li><a href="#org90353a2">6.3.2. Divide and conquer approach</a></li>
<li><a href="#orgbd13f37">6.3.3. Efficient single loop approach (Increment by 2)</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#org6f4e2ff">7. Lecture 7</a>
<ul>
<li><a href="#org5400a63">7.1. Square matrix multiplication</a>
<ul>
<li><a href="#orgb44a809">7.1.1. Straight forward method</a></li>
<li><a href="#orgff68f07">7.1.2. Divide and conquer approach</a></li>
<li><a href="#org25ef748">7.1.3. Strassen's algorithm</a></li>
</ul>
</li>
<li><a href="#org72167b4">7.2. Sorting algorithms</a>
<ul>
<li><a href="#orgc7ba8f0">7.2.1. In place vs out place sorting algorithm</a></li>
<li><a href="#org49041fd">7.2.2. Bubble sort</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#org04d4663">8. Lecture 8</a>
<ul>
<li><a href="#org000935c">8.1. Selection sort</a></li>
<li><a href="#orgcf47e26">8.2. Insertion sort</a></li>
<li><a href="#orgdaadb6b">8.3. Inversion in array</a>
<ul>
<li><a href="#org90463d0">8.3.1. Relation between time complexity of insertion sort and inversion</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</div>
<div id="outline-container-org4514307" class="outline-2">
<h2 id="org4514307"><span class="section-number-2">1.</span> Lecture 1</h2>
<div class="outline-text-2" id="text-1">
</div>
<div id="outline-container-orgb881061" class="outline-3">
<h3 id="orgb881061"><span class="section-number-3">1.1.</span> Data structure and Algorithm</h3>
<div class="outline-text-3" id="text-1-1">
<ul class="org-ul">
<li>A <b>data structure</b> is a particular way of storing and organizing data. The purpose is to be able to access and modify data effectively.</li>
<li>A procedure to solve a specific problem is called an <b>Algorithm</b>.</li>
</ul>
<p>
During programming we use data structures and algorithms that work on that data.
</p>
</div>
</div>
<div id="outline-container-org7860130" class="outline-3">
<h3 id="org7860130"><span class="section-number-3">1.2.</span> Characteristics of Algorithms</h3>
<div class="outline-text-3" id="text-1-2">
<p>
An algorithm has the following characteristics.
</p>
<ul class="org-ul">
<li><b>Input</b> : Zero or more quantities are externally supplied to the algorithm.</li>
<li><b>Output</b> : An algorithm should produce at least one output.</li>
<li><b>Finiteness</b> : The algorithm should terminate after a finite number of steps. It should not run infinitely.</li>
<li><b>Definiteness</b> : The algorithm should be clear and unambiguous. All instructions of an algorithm must have a single meaning.</li>
<li><b>Effectiveness</b> : The algorithm must be made using very basic and simple operations that a computer can do.</li>
<li><b>Language Independence</b> : An algorithm is language independent and can be implemented in any programming language.</li>
</ul>
</div>
</div>
<div id="outline-container-orgad5cd54" class="outline-3">
<h3 id="orgad5cd54"><span class="section-number-3">1.3.</span> Behaviour of algorithm</h3>
<div class="outline-text-3" id="text-1-3">
<p>
The behaviour of an algorithm is the analysis of the algorithm on the basis of <b>Time</b> and <b>Space</b>.
</p>
<ul class="org-ul">
<li><b>Time complexity</b> : Amount of time required to run the algorithm.</li>
<li><b>Space complexity</b> : Amount of space (memory) required to execute the algorithm.</li>
</ul>
<p>
The behaviour of an algorithm can be used to compare two algorithms which solve the same problem.
<br />
Preference is traditionally given to the better time complexity, but we may need to prefer better space complexity depending on our needs.
</p>
</div>
<div id="outline-container-org6e21d5a" class="outline-4">
<h4 id="org6e21d5a"><span class="section-number-4">1.3.1.</span> Best, Worst and Average Cases</h4>
<div class="outline-text-4" id="text-1-3-1">
<p>
The input size tells us the size of the input given to the algorithm. Based on the size of the input, the time/storage usage of the algorithm changes. <b>Example</b>, an array with a larger input size (more elements) will take more time to sort.
</p>
<ul class="org-ul">
<li>Best Case : The lowest time/storage usage for the given input size.</li>
<li>Worst Case : The highest time/storage usage for the given input size.</li>
<li>Average Case : The average time/storage usage for the given input size.</li>
</ul>
</div>
</div>
<div id="outline-container-org654d982" class="outline-4">
<h4 id="org654d982"><span class="section-number-4">1.3.2.</span> Bounds of algorithm</h4>
<div class="outline-text-4" id="text-1-3-2">
<p>
Since algorithms are finite, the time and space they take are <b>bounded</b>, i.e., they have boundaries: a minimum and a maximum amount of time/space taken. These bounds are the upper bound and the lower bound.
</p>
<ul class="org-ul">
<li>Upper Bound : The maximum amount of space/time taken by the algorithm is the upper bound. It is shown as a function of worst cases of time/storage usage over all the possible input sizes.</li>
<li>Lower Bound : The minimum amount of space/time taken by the algorithm is the lower bound. It is shown as a function of best cases of time/storage usage over all the possible input sizes.</li>
</ul>
</div>
</div>
</div>
<div id="outline-container-org8c53531" class="outline-3">
<h3 id="org8c53531"><span class="section-number-3">1.4.</span> Asymptotic Notations</h3>
<div class="outline-text-3" id="text-1-4">
</div>
<div id="outline-container-org8ec8ad1" class="outline-4">
<h4 id="org8ec8ad1"><span class="section-number-4">1.4.1.</span> Big-Oh Notation [O]</h4>
<div class="outline-text-4" id="text-1-4-1">
<ul class="org-ul">
<li>The Big Oh notation is used to define the upper bound of an algorithm.</li>
<li>Given a non-negative function f(n) and another non-negative function g(n), we say that \(f(n) = O(g(n))\) if there exists a positive number \(n_0\) and a positive constant \(c\), such that \[ f(n) \le c.g(n) \ \ \forall n \ge n_0 \]</li>
<li>So if the growth rate of g(n) is greater than or equal to the growth rate of f(n), then \(f(n) = O(g(n))\).</li>
</ul>
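<p>
For example, take \(f(n) = 3n + 2\) and \(g(n) = n\). Since \(3n + 2 \le 4n\) for all \(n \ge 2\), the definition holds with \(c = 4\) and \(n_0 = 2\), so \(3n + 2 = O(n)\).
</p>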
</div>
</div>
</div>
</div>
<div id="outline-container-org42af6e0" class="outline-2">
<h2 id="org42af6e0"><span class="section-number-2">2.</span> Lecture 2</h2>
<div class="outline-text-2" id="text-2">
</div>
<div id="outline-container-org716d410" class="outline-3">
<h3 id="org716d410"><span class="section-number-3">2.1.</span> Asymptotic Notations</h3>
<div class="outline-text-3" id="text-2-1">
</div>
<div id="outline-container-orge74888c" class="outline-4">
<h4 id="orge74888c"><span class="section-number-4">2.1.1.</span> Omega Notation [ \(\Omega\) ]</h4>
<div class="outline-text-4" id="text-2-1-1">
<ul class="org-ul">
<li>It is used to show the lower bound of the algorithm.</li>
<li>For any positive integer \(n_0\) and a positive constant \(c\), we say that, \(f(n) = \Omega (g(n))\) if \[ f(n) \ge c.g(n) \ \ \forall n \ge n_0 \]</li>
<li>So growth rate of \(g(n)\) should be less than or equal to growth rate of \(f(n)\)</li>
</ul>
<p>
<b>Note</b> : If \(f(n) = O(g(n))\) then \(g(n) = \Omega (f(n))\)
</p>
</div>
</div>
<div id="outline-container-org0949287" class="outline-4">
<h4 id="org0949287"><span class="section-number-4">2.1.2.</span> Theta Notation [ \(\theta\) ]</h4>
<div class="outline-text-4" id="text-2-1-2">
<ul class="org-ul">
<li>It is used to provide the asymptotic <b>equal bound</b>.</li>
<li>\(f(n) = \theta (g(n))\) if there exists a positive integer \(n_0\) and positive constants \(c_1\) and \(c_2\) such that \[ c_1 . g(n) \le f(n) \le c_2 . g(n) \ \ \forall n \ge n_0 \]</li>
<li>So the growth rate of \(f(n)\) and \(g(n)\) should be equal.</li>
</ul>
<p>
<b>Note</b> : So if \(f(n) = O(g(n))\) and \(f(n) = \Omega (g(n))\), then \(f(n) = \theta (g(n))\)
</p>
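<p>
For example, \(3n^2 + 2n = \theta (n^2)\), since \(3n^2 \le 3n^2 + 2n \le 5n^2\) for all \(n \ge 1\), i.e., the definition holds with \(c_1 = 3\), \(c_2 = 5\) and \(n_0 = 1\).
</p>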
</div>
</div>
<div id="outline-container-orge63f8b1" class="outline-4">
<h4 id="orge63f8b1"><span class="section-number-4">2.1.3.</span> Little-Oh Notation [o]</h4>
<div class="outline-text-4" id="text-2-1-3">
<ul class="org-ul">
<li>The little o notation defines the strict upper bound of an algorithm.</li>
<li>We say that \(f(n) = o(g(n))\) if, for every positive constant \(c\), there exists a positive integer \(n_0\) such that, \[ f(n) < c.g(n) \ \ \forall n \ge n_0 \]</li>
<li>Notice how condition is &lt;, rather than \(\le\) which is used in Big-Oh. So growth rate of \(g(n)\) is strictly greater than that of \(f(n)\).</li>
</ul>
</div>
</div>
<div id="outline-container-org0b0f72f" class="outline-4">
<h4 id="org0b0f72f"><span class="section-number-4">2.1.4.</span> Little-Omega Notation [ \(\omega\) ]</h4>
<div class="outline-text-4" id="text-2-1-4">
<ul class="org-ul">
<li>The little omega notation defines the strict lower bound of an algorithm.</li>
<li>We say that \(f(n) = \omega (g(n))\) if, for every positive constant \(c\), there exists a positive integer \(n_0\) such that, \[ f(n) > c.g(n) \ \ \forall n \ge n_0 \]</li>
<li>Notice how condition is &gt;, rather than \(\ge\) which is used in Big-Omega. So growth rate of \(g(n)\) is strictly less than that of \(f(n)\).</li>
</ul>
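<p>
For example, \(n^2 = \omega (n)\), because for every positive constant \(c\) we have \(n^2 > c.n\) for all \(n > c\). Equivalently, \(n = o(n^2)\).
</p>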
</div>
</div>
</div>
<div id="outline-container-org5a61c2c" class="outline-3">
<h3 id="org5a61c2c"><span class="section-number-3">2.2.</span> Comparing Growth rate of funtions</h3>
<div class="outline-text-3" id="text-2-2">
</div>
<div id="outline-container-org3484bdd" class="outline-4">
<h4 id="org3484bdd"><span class="section-number-4">2.2.1.</span> Applying limit</h4>
<div class="outline-text-4" id="text-2-2-1">
<p>
To compare two functions \(f(n)\) and \(g(n)\), we can use the limit
\[ \lim_{n\to\infty} \frac{f(n)}{g(n)} \]
</p>
<ul class="org-ul">
<li>If result is 0 then growth of \(g(n)\) &gt; growth of \(f(n)\)</li>
<li>If result is \(\infty\) then growth of \(g(n)\) &lt; growth of \(f(n)\)</li>
<li>If result is any finite nonzero constant, then growth of \(g(n)\) = growth of \(f(n)\)</li>
</ul>
<p>
<b>Note</b> : L'Hôpital's rule can be used in this limit.
</p>
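<p>
For example, comparing \(f(n) = log(n)\) and \(g(n) = n\),
\[ \lim_{n\to\infty} \frac{log(n)}{n} = \lim_{n\to\infty} \frac{1/n}{1} = 0 \]
(using L'Hôpital's rule), so the growth of \(n\) is greater than the growth of \(log(n)\).
</p>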
</div>
</div>
<div id="outline-container-org25e014e" class="outline-4">
<h4 id="org25e014e"><span class="section-number-4">2.2.2.</span> Using logarithm</h4>
<div class="outline-text-4" id="text-2-2-2">
<p>
Logarithms can be useful for comparing exponential functions. When comparing functions \(f(n)\) and \(g(n)\),
</p>
<ul class="org-ul">
<li>If growth of \(\log(f(n))\) is greater than growth of \(\log(g(n))\), then growth of \(f(n)\) is greater than growth of \(g(n)\)</li>
<li>If growth of \(\log(f(n))\) is less than growth of \(\log(g(n))\), then growth of \(f(n)\) is less than growth of \(g(n)\)</li>
<li>When using log for comparing growth, comparing the constants after applying log is also required. For example, if the functions are \(2^n\) and \(3^n\), then their logs are \(n.log(2)\) and \(n.log(3)\). Since \(log(2) < log(3)\), the growth rate of \(3^n\) will be higher.</li>
<li>If the growths are equal after applying log, we can't decide which function grows faster.</li>
</ul>
</div>
</div>
<div id="outline-container-org0790a44" class="outline-4">
<h4 id="org0790a44"><span class="section-number-4">2.2.3.</span> Common funtions</h4>
<div class="outline-text-4" id="text-2-2-3">
<p>
Common growth rates in increasing order are
\[ c < c.log(log(n)) < c.log(n) < c.n < n.log(n) < c.n^2 < c.n^3 < c.n^4 ... \]
\[ n^c < c^n < n! < n^n \]
Where \(c\) is any constant.
</p>
</div>
</div>
</div>
<div id="outline-container-org8762226" class="outline-3">
<h3 id="org8762226"><span class="section-number-3">2.3.</span> Properties of Asymptotic Notations</h3>
<div class="outline-text-3" id="text-2-3">
</div>
<div id="outline-container-org726a6e4" class="outline-4">
<h4 id="org726a6e4"><span class="section-number-4">2.3.1.</span> Big-Oh</h4>
<div class="outline-text-4" id="text-2-3-1">
<ul class="org-ul">
<li><b>Product</b> : \[ Given\ f_1 = O(g_1)\ \ and\ f_2 = O(g_2) \implies f_1 f_2 = O(g_1 g_2) \] \[ Also\ f.O(g) = O(f g) \]</li>
<li><b>Sum</b> : For a sum of two or more functions, the big-oh can be represented with only the function having the highest growth rate. \[ O(f_1 + f_2 + ... + f_i) = O(max\ growth\ rate(f_1, f_2, .... , f_i )) \]</li>
<li><b>Constants</b> : For a constant \(c\) \[ O(c.g(n)) = O(g(n)) \] this is because constants don't affect the growth rate.</li>
</ul>
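<p>
For example, combining the sum and constant properties, \(O(5n^3 + n.log(n) + 42) = O(n^3)\) : only the fastest growing term remains, and its constant is dropped.
</p>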
</div>
</div>
<div id="outline-container-org83b5d5a" class="outline-4">
<h4 id="org83b5d5a"><span class="section-number-4">2.3.2.</span> Properties</h4>
<div class="outline-text-4" id="text-2-3-2">
<div id="orgc9a9c89" class="figure">
<p><img src="lectures/imgs/asymptotic-notations-properties.png" alt="asymptotic-notations-properties.png" />
</p>
</div>
<ul class="org-ul">
<li><b>Reflexive</b> : \(f(n) = O(f(n))\) and \(f(n) = \Omega (f(n))\) and \(f(n) = \theta (f(n))\)</li>
<li><b>Symmetric</b> : If \(f(n) = \theta (g(n))\) then \(g(n) = \theta (f(n))\)</li>
<li><b>Transitive</b> : If \(f(n) = O(g(n))\) and \(g(n) = O(h(n))\) then \(f(n) = O(h(n))\)</li>
<li><b>Transpose</b> : If \(f(n) = O(g(n))\) then we can also conclude that \(g(n) = \Omega (f(n))\) so we say Big-Oh is transpose of Big-Omega and vice-versa.</li>
<li><b>Antisymmetric</b> : If \(f(n) = O(g(n))\) and \(g(n) = O(f(n))\) then we conclude that \(f(n) = g(n)\)</li>
<li><b>Asymmetric</b> : If \(f(n) = \omega (g(n))\) then we can conclude that \(g(n) \ne \omega (f(n))\)</li>
</ul>
</div>
</div>
</div>
</div>
<div id="outline-container-org0d285a1" class="outline-2">
<h2 id="org0d285a1"><span class="section-number-2">3.</span> Lecture 3</h2>
<div class="outline-text-2" id="text-3">
</div>
<div id="outline-container-org4801110" class="outline-3">
<h3 id="org4801110"><span class="section-number-3">3.1.</span> Calculating time complexity of algorithm</h3>
<div class="outline-text-3" id="text-3-1">
<p>
We will look at three types of situations
</p>
<ul class="org-ul">
<li>Sequential instructions</li>
<li>Iterative instructions</li>
<li>Recursive instructions</li>
</ul>
</div>
<div id="outline-container-org9a24492" class="outline-4">
<h4 id="org9a24492"><span class="section-number-4">3.1.1.</span> Sequential instructions</h4>
<div class="outline-text-4" id="text-3-1-1">
<p>
A sequential set of instructions are instructions in a sequence without iterations and recursions. It is a simple block of instructions with no branches. A sequential set of instructions has <b>time complexity of O(1)</b>, i.e., it has <b>constant time complexity</b>.
</p>
</div>
</div>
<div id="outline-container-orga72d036" class="outline-4">
<h4 id="orga72d036"><span class="section-number-4">3.1.2.</span> Iterative instructions</h4>
<div class="outline-text-4" id="text-3-1-2">
<p>
A set of instructions in a loop. Iterative instructions can have different complexities based on how many iterations occurs depending on input size.
</p>
<ul class="org-ul">
<li>For a fixed number of iterations (number of iterations known at compile time, i.e., independent of the input size), the time complexity is constant, O(1). Example: for(int i = 0; i &lt; 100; i++) { &#x2026; } will always have 100 iterations, so constant time complexity.</li>
<li>For n iterations ( n is the input size ), the time complexity is O(n). Example: a loop for(int i = 0; i &lt; n; i++){ &#x2026; } will have n iterations where n is the input size, so the complexity is O(n). The loop for(int i = 0; i &lt; n/2; i++){&#x2026;} also has time complexity O(n) because the loop does n/2 iterations and the constant 1/2 is dropped in big-oh notation.</li>
<li>For a loop like for(int i = 1; i &lt;= n; i = i*2){&#x2026;} the value of i is updated as i *= 2, so the number of iterations will be \(log_2 (n)\). Therefore, the time complexity is \(O(log_2 (n))\). (See the short counting sketch after this list.)</li>
<li>For a loop like for(int i = n; i &gt; 1; i = i/2){&#x2026;} the value of i is updated as i /= 2 (halved each iteration), so the number of iterations will again be \(log_2 (n)\). Therefore, the time complexity is \(O(log_2 (n))\).</li>
</ul>
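<p>
As a quick sanity check of the logarithmic case, here is a minimal C sketch (not from the lecture) that counts the iterations of a doubling loop:
</p>
<div class="org-src-container">
<pre class="src src-C">#include &lt;stdio.h&gt;

/* Count the iterations of a doubling loop. The loop body runs
   floor(log2(n)) + 1 times, i.e. O(log n) times. */
int main(void){
    int n = 1000;
    int count = 0;
    for(int i = 1; i &lt;= n; i *= 2)
        count++;
    printf("n = %d, iterations = %d\n", n, count); /* prints 10 */
    return 0;
}
</pre>
</div>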
<p>
<b><span class="underline">Nested Loops</span></b>
<br />
</p>
<ul class="org-ul">
<li>If <b>inner loop iterator doesn't depend on outer loop</b>, the complexity of the inner loop is multiplied by the number of times outer loop runs to get the time complexity For example, suppose we have loop as</li>
</ul>
<pre class="example">
for(int i = 0; i &lt; n; i++){
...
for(int j = 0; j &lt; n; j *= 2){
...
}
...
}
</pre>
<p>
Here, the outer loop will run <b>n</b> times and the inner loop will run <b>log(n)</b> times. Therefore, the total number of times the statements in the inner loop run is n.log(n).
Thus the time complexity is <b>O(n.log(n))</b>.
</p>
<ul class="org-ul">
<li>If <b>inner loop and outer loop are related</b>, then complexities have to be computed using sums. Example, we have loop</li>
</ul>
<pre class="example">
for(int i = 0; i &lt;= n; i++){
...
for(int j = 0; j &lt;= i; j++){
...
}
...
}
</pre>
<p>
Here the outer loop runs with i going from <b>0 to n</b>. The number of times the inner loop runs depends on the value of <b>i</b>.
</p>
<table border="2" cellspacing="0" cellpadding="6" rules="all" frame="border">
<colgroup>
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Value of i</th>
<th scope="col" class="org-left">Number of times inner loop runs</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">0</td>
<td class="org-left">0</td>
</tr>
<tr>
<td class="org-left">1</td>
<td class="org-left">1</td>
</tr>
<tr>
<td class="org-left">2</td>
<td class="org-left">2</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">n</td>
<td class="org-left">n</td>
</tr>
</tbody>
</table>
<p>
So the total number of times inner loop runs = \(1+2+3+....+n\)
<br />
total number of times inner loop runs = \(\frac{n.(n+1)}{2}\)
<br />
total number of times inner loop runs = \(\frac{n^2}{2} + \frac{n}{2}\)
<br />
<b><i>Therefore, time complexity is</i></b> \(O(\frac{n^2}{2} + \frac{n}{2}) = O(n^2)\)
<br />
<b>Another example,</b>
<br />
Suppose we have loop
</p>
<pre class="example">
for(int i = 1; i &lt;= n; i++){
...
for(int j = 1; j &lt;= i; j *= 2){
...
}
...
}
</pre>
<p>
The outer loop will run n times with i from <b>1 to n</b>, and the inner loop will run log(i) times.
</p>
<table border="2" cellspacing="0" cellpadding="6" rules="all" frame="border">
<colgroup>
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Value of i</th>
<th scope="col" class="org-left">Number of times inner loop runs</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">1</td>
<td class="org-left">log(1)</td>
</tr>
<tr>
<td class="org-left">2</td>
<td class="org-left">log(2)</td>
</tr>
<tr>
<td class="org-left">3</td>
<td class="org-left">log(3)</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">n</td>
<td class="org-left">log(n)</td>
</tr>
</tbody>
</table>
<p>
Thus, total number of times the inner loop runs is \(log(1) + log(2) + log(3) + ... + log(n)\).
<br />
total number of times inner loop runs = \(log(1.2.3...n)\)
<br />
total number of times inner loop runs = \(log(n!)\)
<br />
Using <b><i>Stirling's approximation</i></b>, we know that \(log(n!) = n.log(n) - n + 1\)
<br />
total number of times inner loop runs = \(n.log(n) - n + 1\)
<br />
Time complexity = \(O(n.log(n))\)
</p>
</div>
</div>
<div id="outline-container-org796d28b" class="outline-4">
<h4 id="org796d28b"><span class="section-number-4">3.1.3.</span> An example for time complexities of nested loops</h4>
<div class="outline-text-4" id="text-3-1-3">
<p>
Suppose a loop,
</p>
<pre class="example">
for(int i = 1; i &lt;= n; i *= 2){
...
for(int j = 1; j &lt;= i; j *= 2){
...
}
...
}
</pre>
<p>
Here, the outer loop will run <b>log(n)</b> times. Let's say that for some given n it runs <b>k</b> times, i.e., let
\[ k = log(n) \]
</p>
<p>
The inner loop will run <b>log(i)</b> times, so number of loops with changing values of i is
</p>
<table border="2" cellspacing="0" cellpadding="6" rules="all" frame="border">
<colgroup>
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Value of i</th>
<th scope="col" class="org-left">Number of times inner loop runs</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">1</td>
<td class="org-left">log(1)</td>
</tr>
<tr>
<td class="org-left">2<sup>1</sup></td>
<td class="org-left">log(2)</td>
</tr>
<tr>
<td class="org-left">2<sup>2</sup></td>
<td class="org-left">2.log(2)</td>
</tr>
<tr>
<td class="org-left">2<sup>3</sup></td>
<td class="org-left">3.log(2)</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">2<sup>k-1</sup></td>
<td class="org-left">(k-1).log(2)</td>
</tr>
</tbody>
</table>
<p>
So the total number of times inner loop runs is \(log(1) + log(2) + 2.log(2) + 3.log(2) + ... + (k-1).log(2)\)
\[ \text{number of times inner loop runs} = log(1) + log(2).[1+2+3+...+(k-1)] \]
\[ \text{number of times inner loop runs} = log(1) + log(2). \frac{(k-1).k}{2} \]
\[ \text{number of times inner loop runs} = log(1) + log(2). \left( \frac{k^2}{2} - \frac{k}{2} \right) \]
Putting value \(k = log(n)\)
\[ \text{number of times inner loop runs} = log(1) + log(2). \left( \frac{log^2(n)}{2} - \frac{log(n)}{2} \right) \]
\[ \text{Time complexity} = O(log^2(n)) \]
</p>
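<p>
To double-check this bound, a small C sketch (not from the lecture) can count how many times the innermost statement runs; the exact count is \(\frac{(k+1)(k+2)}{2}\) where \(k = \lfloor log_2(n) \rfloor\), which is \(\theta (log^2(n))\):
</p>
<div class="org-src-container">
<pre class="src src-C">#include &lt;stdio.h&gt;

/* Count executions of the innermost statement of the two nested
   doubling loops; the count grows as log^2(n). */
int main(void){
    for(int n = 16; n &lt;= 65536; n *= 16){
        long count = 0;
        for(long i = 1; i &lt;= n; i *= 2)
            for(long j = 1; j &lt;= i; j *= 2)
                count++;
        /* n = 16 -&gt; 15, n = 256 -&gt; 45, n = 4096 -&gt; 91, n = 65536 -&gt; 153 */
        printf("n = %6d  count = %ld\n", n, count);
    }
    return 0;
}
</pre>
</div>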
</div>
</div>
</div>
</div>
<div id="outline-container-org84b365f" class="outline-2">
<h2 id="org84b365f"><span class="section-number-2">4.</span> Lecture 4</h2>
<div class="outline-text-2" id="text-4">
</div>
<div id="outline-container-org1ac7bb0" class="outline-3">
<h3 id="org1ac7bb0"><span class="section-number-3">4.1.</span> Time complexity of recursive instructions</h3>
<div class="outline-text-3" id="text-4-1">
<p>
To get the time complexity of recursive functions/calls, we first express the time complexity in a recursive form.
</p>
</div>
<div id="outline-container-org1e82264" class="outline-4">
<h4 id="org1e82264"><span class="section-number-4">4.1.1.</span> Time complexity in recursive form</h4>
<div class="outline-text-4" id="text-4-1-1">
<p>
We first have to create a way to describe the time complexity of recursive functions in the form of an equation,
\[ T(n) = ( \text{Time taken by the recursive calls made by the function} ) + ( \text{Time taken per call, i.e., the time taken except for the recursive calls in the function} ) \]
</p>
<ul class="org-ul">
<li>Example, suppose we have a recursive function</li>
</ul>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">fact</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a626a4;">if</span>(n == 0 || n == 1)
<span style="color: #a626a4;">return</span> 1;
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> n * fact(n-1);
}
</pre>
</div>
<p>
In this example, the recursive call is fact(n-1), therefore the time complexity of the recursive call is T(n-1), and the time complexity of the function except for the recursive call is constant (let's assume <b>c</b>). So the time complexity is
\[ T(n) = T(n-1) + c \]
\[ T(1) = T(0) = C\ \text{where C is constant time} \]
</p>
<ul class="org-ul">
<li>Another example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">func</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a626a4;">if</span>(n == 0 || n == 1)
<span style="color: #a626a4;">return</span> 1;
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> func(n - 1) * func(n - 2);
}
</pre>
</div>
<p>
Here, the recursive calls are func(n-1) and func(n-2), therefore the time complexities of the recursive calls are T(n-1) and T(n-2). The time complexity of the function except for the recursive calls is constant (let's assume <b>c</b>), so the time complexity is
\[ T(n) = T(n-1) + T(n-2) + c \]
\[ T(1) = T(0) = C\ \text{where C is constant time} \]
</p>
<ul class="org-ul">
<li>Another example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">func</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">r</span> = 0;
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++)
r += i;
<span style="color: #a626a4;">if</span>(n == 0 || n == 1)
<span style="color: #a626a4;">return</span> r;
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> r * func(n - 1) * func(n - 2);
}
</pre>
</div>
<p>
Here, the recursive calls are func(n-1) and func(n-2), therefore the time complexities of the recursive calls are T(n-1) and T(n-2). The time complexity of the function except for the recursive calls is <b>&theta; (n)</b> because of the for loop, so the time complexity is
</p>
<p>
\[ T(n) = T(n-1) + T(n-2) + n \]
\[ T(1) = T(0) = C\ \text{where C is constant time} \]
</p>
</div>
</div>
</div>
<div id="outline-container-org34532f5" class="outline-3">
<h3 id="org34532f5"><span class="section-number-3">4.2.</span> Solving Recursive time complexities</h3>
<div class="outline-text-3" id="text-4-2">
</div>
<div id="outline-container-org7953024" class="outline-4">
<h4 id="org7953024"><span class="section-number-4">4.2.1.</span> Iterative method</h4>
<div class="outline-text-4" id="text-4-2-1">
<ul class="org-ul">
<li>Take for example,</li>
</ul>
<p>
\[ T(1) = T(0) = C\ \text{where C is constant time} \]
\[ T(n) = T(n-1) + c \]
</p>
<p>
We can expand T(n-1).
\[ T(n) = [ T(n - 2) + c ] + c \]
\[ T(n) = T(n-2) + 2.c \]
Then we can expand T(n-2)
\[ T(n) = [ T(n - 3) + c ] + 2.c \]
\[ T(n) = T(n - 3) + 3.c \]
</p>
<p>
So, if we expand it k times, we will get
</p>
<p>
\[ T(n) = T(n - k) + k.c \]
Since we know this recursion <b>ends at T(1)</b>, let's put \(n-k=1\).
Therefore, \(k = n-1\).
\[ T(n) = T(1) + (n-1).c \]
</p>
<p>
Since T(1) = C
\[ T(n) = C + (n-1).c \]
So time complexity is,
\[ T(n) = O(n) \]
</p>
<ul class="org-ul">
<li>Another example,</li>
</ul>
<p>
\[ T(1) = C\ \text{where C is constant time} \]
\[ T(n) = T(n-1) + n \]
</p>
<p>
Expanding T(n-1),
\[ T(n) = [ T(n-2) + n - 1 ] + n \]
\[ T(n) = T(n-2) + 2.n - 1 \]
</p>
<p>
Expanding T(n-2),
\[ T(n) = [ T(n-3) + n - 2 ] + 2.n - 1 \]
\[ T(n) = T(n-3) + 3.n - 1 - 2 \]
</p>
<p>
Expanding T(n-3),
\[ T(n) = [ T(n-4) + n - 3 ] + 3.n - 1 - 2 \]
\[ T(n) = T(n-4) + 4.n - 1 - 2 - 3 \]
</p>
<p>
So expanding till T(n-k)
\[ T(n) = T(n-k) + k.n - [ 1 + 2 + 3 + .... + (k-1) ] \]
\[ T(n) = T(n-k) + k.n - \frac{k.(k-1)}{2} \]
</p>
<p>
Putting \(n-k=1\). Therefore, \(k=n-1\).
\[ T(n) = T(1) + (n-1).n - \frac{(n-1).(n-2)}{2} \]
\[ T(n) = C + n^2 - n - \frac{n^2}{2} + \frac{3n}{2} - 1 \]
</p>
<p>
Time complexity is
\[ T(n) = O(n^2) \]
</p>
</div>
</div>
<div id="outline-container-org840cd55" class="outline-4">
<h4 id="org840cd55"><span class="section-number-4">4.2.2.</span> Master Theorem for Subtract recurrences</h4>
<div class="outline-text-4" id="text-4-2-2">
<p>
For recurrence relation of type
</p>
<p>
\[ T(n) = c\ for\ n \le 1 \]
\[ T(n) = a.T(n-b) + f(n)\ for\ n > 1 \]
\[ \text{where for f(n) we can say, } f(n) = O(n^k) \]
\[ \text{where, a > 0, b > 0 and k} \ge 0 \]
</p>
<ul class="org-ul">
<li>If a &lt; 1, then T(n) = O(n<sup>k</sup>)</li>
<li>If a = 1, then T(n) = O(n<sup>k+1</sup>)</li>
<li>If a &gt; 1, then T(n) = O(n<sup>k</sup> . a<sup>n/b</sup>)</li>
</ul>
<p>
Example, \[ T(n) = 3T(n-1) + n^2 \]
Here, f(n) = O(n<sup>2</sup>), therefore k = 2,
<br />
Also, a = 3 and b = 1
<br />
Since a &gt; 1, \(T(n) = O(n^2 . 3^n)\)
</p>
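<p>
As a cross-check with the iterative method above, take \[ T(n) = T(n-1) + n \]
Here, f(n) = O(n), therefore k = 1,
<br />
Also, a = 1 and b = 1
<br />
Since a = 1, \(T(n) = O(n^{k+1}) = O(n^2)\), matching the result derived earlier by expansion.
</p>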
</div>
</div>
<div id="outline-container-orgba1fe15" class="outline-4">
<h4 id="orgba1fe15"><span class="section-number-4">4.2.3.</span> Master Theorem for divide and conquer recurrences</h4>
<div class="outline-text-4" id="text-4-2-3">
<p>
\[ T(n) = aT(n/b) + f(n).(log(n))^k \]
\[ \text{here, f(n) is a polynomial function} \]
\[ \text{and, a > 0, b > 1 and k } \ge 0 \]
We calculate a value \(n^{log_ba}\)
</p>
<ul class="org-ul">
<li>If \(\theta (f(n)) < \theta ( n^{log_ba} )\) then \(T(n) = \theta (n^{log_ba})\)</li>
<li>If \(\theta (f(n)) > \theta ( n^{log_ba} )\) then \(T(n) = \theta (f(n).(log(n))^k )\)</li>
<li>If \(\theta (f(n)) = \theta ( n^{log_ba} )\) then \(T(n) = \theta (f(n) . (log(n))^{k+1})\)</li>
</ul>
<p>
In the above comparison, a higher growth rate is considered greater than a slower growth rate, e.g., &theta; (n<sup>2</sup>) &gt; &theta; (n).
</p>
<p>
Example, calculating complexity for
\[ T(n) = T(n/2) + 1 \]
Here, f(n) = 1
<br />
Also, a = 1, b = 2 and k = 0.
<br />
Calculating n<sup>log<sub>b</sub>a</sup> = n<sup>log<sub>2</sub>1</sup> = n<sup>0</sup> = 1
<br />
Therefore, &theta; (f(n)) = &theta; (n<sup>log<sub>b</sub>a</sup>)
<br />
So time complexity is
\[ T(n) = \theta ( 1 . (log(n))^{0 + 1} ) \]
\[ T(n) = \theta (log(n)) \]
</p>
<p>
Another example, calculate complexity for
\[ T(n) = 2T(n/2) + nlog(n) \]
</p>
<p>
Here, f(n) = n
<br />
Also, a = 2, b = 2 and k = 1
<br />
Calculating n<sup>log<sub>b</sub>a</sup> = n<sup>log<sub>2</sub>2</sup> = n
<br />
Therefore, &theta; (f(n)) = &theta; (n<sup>log<sub>b</sub>a</sup>)
<br />
So time complexity is,
\[ T(n) = \theta ( n . (log(n))^{2}) \]
</p>
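<p>
One more example, the recurrence
\[ T(n) = 2T(n/2) + n \]
Here, f(n) = n and k = 0
<br />
Also, a = 2 and b = 2
<br />
Calculating n<sup>log<sub>b</sub>a</sup> = n<sup>log<sub>2</sub>2</sup> = n
<br />
Therefore, &theta; (f(n)) = &theta; (n<sup>log<sub>b</sub>a</sup>), and the time complexity is
\[ T(n) = \theta (n.(log(n))^{0+1}) = \theta (n.log(n)) \]
This is the recurrence of merge sort.
</p>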
</div>
</div>
</div>
<div id="outline-container-orgcffc2b7" class="outline-3">
<h3 id="orgcffc2b7"><span class="section-number-3">4.3.</span> Square root recurrence relations</h3>
<div class="outline-text-3" id="text-4-3">
</div>
<div id="outline-container-orgdb02f9d" class="outline-4">
<h4 id="orgdb02f9d"><span class="section-number-4">4.3.1.</span> Iterative method</h4>
<div class="outline-text-4" id="text-4-3-1">
<p>
Example,
\[ T(n) = T( \sqrt{n} ) + 1 \]
we can write this as,
\[ T(n) = T( n^{1/2}) + 1 \]
Now, we expand \(T( n^{1/2})\)
\[ T(n) = [ T(n^{1/4}) + 1 ] + 1 \]
\[ T(n) = T(n^{1/(2^2)}) + 1 + 1 \]
Expand, \(T(n^{1/4})\)
\[ T(n) = [ T(n^{1/8}) + 1 ] + 1 + 1 \]
\[ T(n) = T(n^{1/(2^3)}) + 1 + 1 + 1 \]
</p>
<p>
Expanding <b>k</b> times,
\[ T(n) = T(n^{1/(2^k)}) + 1 + 1 ... \text{k times } + 1 \]
\[ T(n) = T(n^{1/(2^k)}) + k \]
</p>
<p>
Let's consider \(T(2)=C\) where C is constant.
<br />
Putting \(n^{1/(2^k)} = 2\)
\[ \frac{1}{2^k} log(n) = log(2) \]
\[ \frac{1}{2^k} = \frac{log(2)}{log(n)} \]
\[ 2^k = \frac{log(n)}{log(2)} \]
\[ 2^k = log_2n \]
\[ k = log(log(n)) \]
</p>
<p>
So putting <b>k</b> in time complexity equation,
\[ T(n) = T(2) + log(log(n)) \]
\[ T(n) = C + log(log(n)) \]
Time complexity is,
\[ T(n) = \theta (log(log(n))) \]
</p>
</div>
</div>
<div id="outline-container-orga185bd1" class="outline-4">
<h4 id="orga185bd1"><span class="section-number-4">4.3.2.</span> Master Theorem for square root recurrence relations</h4>
<div class="outline-text-4" id="text-4-3-2">
<p>
For recurrence relations with square roots, we first need to convert the recurrence relation to a form on which we can use the master theorem. Example,
\[ T(n) = T( \sqrt{n} ) + 1 \]
Here, we need to convert \(T( \sqrt{n} )\); we can do that by <b>substituting</b>
\[ \text{Substitute } n = 2^m \]
\[ T(2^m) = T ( \sqrt{2^m} ) + 1 \]
\[ T(2^m) = T ( 2^{m/2} ) + 1 \]
</p>
<p>
Now, we need to consider a new function such that,
\[ \text{Let, } S(m) = T(2^m) \]
Thus our time recurrence relation will become,
\[ S(m) = S(m/2) + 1 \]
Now, we can apply the master's theorem.
<br />
Here, f(m) = 1
<br />
Also, a = 1, b = 2 and k = 0
<br />
Calculating m<sup>log<sub>b</sub>a</sup> = m<sup>log<sub>2</sub>1</sup> = m<sup>0</sup> = 1
<br />
Therefore, &theta; (f(m)) = &theta; ( m<sup>log<sub>b</sub>a</sup> )
<br />
So by master's theorem,
\[ S(m) = \theta (1. (log(m))^{0+1} ) \]
\[ S(m) = \theta (log(m) ) \]
Now, putting back \(m = log(n)\)
\[ T(n) = \theta (log(log(n))) \]
Another example,
\[ T(n) = 2.T(\sqrt{n})+log(n) \]
Substituting \(n = 2^m\)
\[ T(2^m) = 2.T(\sqrt{2^m}) + log(2^m) \]
\[ T(2^m) = 2.T(2^{m/2}) + m \]
Consider a function \(S(m) = T(2^m)\)
\[ S(m) = 2.S(m/2) + m \]
Here, f(m) = m
<br />
Also, a = 2, b = 2 and k = 0
<br />
Calculating m<sup>log<sub>b</sub>a</sup> = m<sup>log<sub>2</sub>2</sup> = m
<br />
Therefore, &theta; (f(m)) = &theta; (m<sup>log<sub>b</sub>a</sup>)
<br />
Using master's theorem,
\[ S(m) = \theta (m.(log(m))^{0+1} ) \]
\[ S(m) = \theta (m.log(m)) \]
Putting value of m,
\[ T(n) = \theta (log(n).log(log(n))) \]
</p>
</div>
</div>
</div>
</div>
<div id="outline-container-org4ee9130" class="outline-2">
<h2 id="org4ee9130"><span class="section-number-2">5.</span> Lecture 5</h2>
<div class="outline-text-2" id="text-5">
</div>
<div id="outline-container-org0646087" class="outline-3">
<h3 id="org0646087"><span class="section-number-3">5.1.</span> Extended Master's theorem for time complexity of recursive algorithms</h3>
<div class="outline-text-3" id="text-5-1">
</div>
<div id="outline-container-orgf381287" class="outline-4">
<h4 id="orgf381287"><span class="section-number-4">5.1.1.</span> For (k = -1)</h4>
<div class="outline-text-4" id="text-5-1-1">
<p>
\[ T(n) = aT(n/b) + f(n).(log(n))^{-1} \]
\[ \text{Here, } f(n) \text{ is a polynomial function} \]
\[ a > 0\ and\ b > 1 \]
</p>
<ul class="org-ul">
<li>If &theta; (f(n)) &lt; &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (n<sup>log<sub>b</sub>a</sup>)</li>
<li>If &theta; (f(n)) &gt; &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (f(n))</li>
<li>If &theta; (f(n)) = &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (f(n).log(log(n)))</li>
</ul>
</div>
</div>
<div id="outline-container-org95d965b" class="outline-4">
<h4 id="org95d965b"><span class="section-number-4">5.1.2.</span> For (k &lt; -1)</h4>
<div class="outline-text-4" id="text-5-1-2">
<p>
\[ T(n) = aT(n/b) + f(n).(log(n))^{k} \]
\[ \text{Here, } f(n) \text{ is a polynomial function} \]
\[ a > 0\ and\ b > 1\ and\ k < -1 \]
</p>
<ul class="org-ul">
<li>If &theta; (f(n)) &lt; &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (n<sup>log<sub>b</sub>a</sup>)</li>
<li>If &theta; (f(n)) &gt; &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (f(n))</li>
<li>If &theta; (f(n)) = &theta; ( n<sup>log<sub>b</sub>a</sup> ) then, T(n) = &theta; (n<sup>log<sub>b</sub>a</sup>)</li>
</ul>
</div>
</div>
</div>
<div id="outline-container-org62ea5af" class="outline-3">
<h3 id="org62ea5af"><span class="section-number-3">5.2.</span> Tree method for time complexity of recursive algorithms</h3>
<div class="outline-text-3" id="text-5-2">
<p>
The tree method is used when there are multiple recursive calls in our recurrence relation. Example,
\[ T(n) = T(n/5) + T(4n/5) + f(n) \]
Here, one call is T(n/5) and another is T(4n/5), so we can't apply the master's theorem. Instead, we create a tree of recursive calls, which is then used to calculate the time complexity.
The first node, i.e., the root node, is T(n), and the tree is formed by the child nodes being the calls made by the parent nodes. Example, let's consider the recurrence relation
\[ T(n) = T(n/5) + T(4n/5) + f(n) \]
</p>
<pre class="example">
+-----T(n/5)
T(n)--+
+-----T(4n/5)
</pre>
<p>
Since T(n) calls T(n/5) and T(4n/5), the tree for that is shown as drawn above. Now using the recurrence relation, we can say that T(n/5) will call T(n/5<sup>2</sup>) and T(4n/5<sup>2</sup>). Also, T(4n/5) will call T(4n/5<sup>2</sup>) and T(4<sup>2</sup> n/5<sup>2</sup>).
</p>
<pre class="example">
+--T(n/5^2)
+-----T(n/5)--+
+ +--T(4n/5^2)
T(n)--+
+ +--T(4n/5^2)
+-----T(4n/5)-+
+--T(4^2 n/5^2)
</pre>
<p>
Suppose we draw this graph for an unknown number of levels.
</p>
<pre class="example">
+--T(n/5^2)- - - - - - - etc.
+-----T(n/5)--+
+ +--T(4n/5^2) - - - - - - - - - etc.
T(n)--+
+ +--T(4n/5^2) - - - - - - - - - etc.
+-----T(4n/5)-+
+--T(4^2 n/5^2)- - - - - - etc.
</pre>
<p>
We will now replace T()'s with the <b>cost of the call</b>. The cost of the call is <b>f(n)</b>, i.e, the time taken other than that caused by the recursive calls.
</p>
<pre class="example">
+--f(n/5^2)- - - - - - - etc.
+-----f(n/5)--+
+ +--f(4n/5^2) - - - - - - - - - etc.
f(n)--+
+ +--f(4n/5^2) - - - - - - - - - etc.
+-----f(4n/5)-+
+--f(4^2 n/5^2)- - - - - - etc.
</pre>
<p>
In our example, <b>let's assume f(n) = n</b>, therefore,
</p>
<pre class="example">
+-- n/5^2 - - - - - - - etc.
+----- n/5 --+
+ +-- 4n/5^2 - - - - - - - - - etc.
n --+
+ +-- 4n/5^2 - - - - - - - - -etc.
+----- 4n/5 -+
+-- 4^2 n/5^2 - - - - - - etc.
</pre>
<p>
Now we can get the cost of each level.
</p>
<pre class="example">
+-- n/5^2 - - - - - - - etc.
+----- n/5 --+
+ +-- 4n/5^2 - - - - - - - - - etc.
n --+
+ +-- 4n/5^2 - - - - - - - - -etc.
+----- 4n/5 --+
+-- 4^2 n/5^2 - - - - - - etc.
Sum : n n/5 n/25
+4n/5 +4n/25
+4n/25
+16n/25
..... ..... ......
n n n
</pre>
<p>
Since the sum on every level is n, we can say that the total time taken is
\[ T(n) = \Sigma \ (cost\ of\ level_i) \]
</p>
<p>
Now we need to find the longest branch in the tree. If we follow the pattern of expanding the tree in a sequence as shown, then the longest branch is <b>always on one of the extreme ends of the tree</b>. So for our example, if the tree has <b>(k+1)</b> levels, then our branch is either (n/5<sup>k</sup>) or (4<sup>k</sup> n/5<sup>k</sup>). Suppose the terminating condition is \(T(a) = C\). Then we calculate the value of k by equating the longest branch with the terminating condition,
\[ \frac{n}{5^k} = a \]
\[ k = log_5 (n/a) \]
Also,
\[ \frac{4^k n}{5^k} = a \]
\[ k = log_{5/4} (n/a) \]
</p>
<p>
So, we have two possible values of k,
\[ k = log_{5/4}(n/a),\ log_5 (n/a) \]
</p>
<p>
Now, we can say that,
\[ T(n) = \sum_{i=1}^{k+1} \ (cost\ of\ level_i) \]
Since in our example, cost of every level is <b>n</b>.
\[ T(n) = n.(k+1) \]
Putting values of k,
\[ T(n) = n.(log_{5/4}(n/a) + 1) \]
or
\[ T(n) = n.(log_{5}(n/a) + 1) \]
</p>
<p>
Of the two possible time complexities, we consider the one with higher growth rate in the big-oh notation.
</p>
</div>
<div id="outline-container-org426f45a" class="outline-4">
<h4 id="org426f45a"><span class="section-number-4">5.2.1.</span> Avoiding tree method</h4>
<div class="outline-text-4" id="text-5-2-1">
<p>
The tree method as mentioned is mainly used when we have multiple recursive calls with different factors. But when using the big-oh notation (O), we can avoid the tree method in favour of the master's theorem by converting the recursive call with the smaller factor to the one with the larger factor. This works because big-oh calculates the worst case. Let's take our previous example
\[ T(n) = T(n/5) + T(4n/5) + f(n) \]
Since T(n) is an increasing function, we can say that
\[ T(n/5) < T(4n/5) \]
So we can replace the smaller call and approximate our relation as,
\[ T(n) = T(4n/5) + T(4n/5) + f(n) \]
\[ T(n) = 2.T(4n/5) + f(n) \]
</p>
<p>
Now our recurrence relation is in a form where we can apply the master's theorem.
</p>
</div>
</div>
</div>
<div id="outline-container-org33c011b" class="outline-3">
<h3 id="org33c011b"><span class="section-number-3">5.3.</span> Space complexity</h3>
<div class="outline-text-3" id="text-5-3">
<p>
The amount of memory used by the algorithm to execute and produce the result for a given input size is space complexity. Similar to time complexity, when comparing two algorithms space complexity is usually represented as the growth rate of memory used with respect to input size. The space complexity includes
</p>
<ul class="org-ul">
<li><b>Input space</b> : The amount of memory used by the inputs to the algorithm.</li>
<li><b>Auxiliary space</b> : The amount of memory used during the execution of the algorithm, excluding the input space.</li>
</ul>
<p>
<b>NOTE</b> : <i>Space complexity by definition includes both input space and auxiliary space, but when comparing algorithms the input space is often ignored. This is because two algorithms that solve the same problem will have the same input space for a given input size (for example, when comparing two sorting algorithms, the input space will be the same because both get a list as input). So from this point on, when referring to space complexity, we are actually talking about <b>Auxiliary Space Complexity</b>, which is space complexity but only considering the auxiliary space</i>.
</p>
</div>
<div id="outline-container-org24de75a" class="outline-4">
<h4 id="org24de75a"><span class="section-number-4">5.3.1.</span> Auxiliary space complexity</h4>
<div class="outline-text-4" id="text-5-3-1">
<p>
The space complexity when we disregard the input space is the auxiliary space complexity; we basically treat the algorithm as if its input space were zero. Auxiliary space complexity is more useful when comparing algorithms, because algorithms working towards the same result have the same input space (for example, sorting algorithms all take a list as input), so input space is not a metric we can use to compare algorithms. From here on, when we calculate space complexity, we are actually calculating the auxiliary space complexity and sometimes just refer to it as space complexity.
</p>
</div>
</div>
</div>
<div id="outline-container-org3e6fc48" class="outline-3">
<h3 id="org3e6fc48"><span class="section-number-3">5.4.</span> Calculating auxiliary space complexity</h3>
<div class="outline-text-3" id="text-5-4">
<p>
There are two parameters that affect space complexity,
</p>
<ul class="org-ul">
<li><b>Data space</b> : The memory taken by the variables in the algorithm. So allocating new memory during runtime of the algorithm is what forms the data space. The space which was allocated for the input space is not considered a part of the data space.</li>
<li><b>Code Execution Space</b> : The memory taken by the instructions and their function calls is called code execution space. Unless we have recursion, the code execution space remains constant, since the instructions don't change during the runtime of the algorithm. When using recursion, each nested call adds a stack frame to memory, thus increasing the code execution space.</li>
</ul>
</div>
<div id="outline-container-org328fc47" class="outline-4">
<h4 id="org328fc47"><span class="section-number-4">5.4.1.</span> Data Space used</h4>
<div class="outline-text-4" id="text-5-4-1">
<p>
The data space used by the algorithm depends on what data structures it uses to solve the problem. Example,
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Input size of n</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">void</span> <span style="color: #0184bc;">algorithms</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Creating an array of whose size depends on input size</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">data</span>[n];
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span> = data[i];
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">Work on data</span>
}
}
</pre>
</div>
<p>
Here, we create an array of size <b>n</b>, so the allocated space grows with the input size. So the space complexity is <b>\(\theta (n)\)</b>.
<br />
</p>
<ul class="org-ul">
<li>Another example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Input size of n</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">void</span> <span style="color: #0184bc;">algorithms</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Creating a matrix sized n*n of whose size depends on input size</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">data</span>[n][n];
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++){
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt; n; j++){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span> = data[i][j];
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">Work on data</span>
}
}
}
</pre>
</div>
<p>
Here, we create a matrix of size <b>n*n</b>, so the allocated space grows with the input size as \(n^2\). So the space complexity is <b>\(\theta (n^2)\)</b>.
</p>
<ul class="org-ul">
<li>If we use a node based data structure like linked list or trees, then we can show space complexity as the number of nodes used by algorithm based on input size, (if all nodes are of equal size).</li>
<li>Space complexity of the hash map is considered <b>O(n)</b> where <b>n</b> is the number of entries in the hash map.</li>
</ul>
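<p>
In contrast to the examples above, an algorithm that only allocates a fixed number of variables has constant auxiliary space. A minimal sketch (illustrative, not from the lecture):
</p>
<div class="org-src-container">
<pre class="src src-C">/* Input size of n */
int sum(int *data, int n){
    int total = 0;              /* fixed extra memory, independent of n */
    for(int i = 0; i &lt; n; i++)
        total += data[i];
    return total;
}
</pre>
</div>
<p>
Here the only extra memory is the two variables <b>total</b> and <b>i</b>, so the auxiliary space complexity is \(\theta (1)\) regardless of the input size.
</p>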
</div>
</div>
<div id="outline-container-orga6b6723" class="outline-4">
<h4 id="orga6b6723"><span class="section-number-4">5.4.2.</span> Code Execution space in recursive algorithm</h4>
<div class="outline-text-4" id="text-5-4-2">
<p>
When we use recursion, the function calls are stored on the stack. This means that the code execution space will increase. A single function call takes a fixed (constant) amount of space in memory. So to get the space complexity, <b>we need to know how many function calls occur in the longest branch of the function call tree</b>.
</p>
<ul class="org-ul">
<li><b>NOTE</b> : Space complexity <b>only depends on the longest branch</b> of the function calls tree.</li>
<li><i><b>The tree is made the same way we make it in the tree method for calculating time complexity of recursive algorithms</b></i></li>
</ul>
<p>
This is because at any given time, the stack will store only a single branch.
</p>
<ul class="org-ul">
<li>Example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">func</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a626a4;">if</span>(n == 1 || n == 0)
<span style="color: #a626a4;">return</span> 1;
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> n * func(n - 1);
}
</pre>
</div>
<p>
To calculate space complexity we can use the tree method, but unlike when calculating time complexity, we count the number of function calls using the tree.
We do this by drawing the tree of function calls for a given input size <b>n</b>.
<br />
The tree for <b>k+1</b> levels is,
</p>
<pre class="example">
func(n)--func(n-1)--func(n-2)--.....--func(n-k)
</pre>
<p>
This tree only has a single branch. To get the number of levels of a branch, we put the terminating condition at the extreme end of the tree. Here, the terminating condition is func(1), therefore, we will put \(func(1) = func(n-k)\), i.e.,
\[ 1 = n - k \]
\[ k + 1 = n \]
</p>
<p>
So the number of levels is \(n\). Therefore, space complexity is <b>\(\theta (n)\)</b>
</p>
<ul class="org-ul">
<li>Another example,</li>
</ul>
<div class="org-src-container">
<pre class="src src-c"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">func</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>){
<span style="color: #a626a4;">if</span>(n/2 &lt;= 1)
<span style="color: #a626a4;">return</span> n;
func(n/2);
func(n/2);
}
</pre>
</div>
<p>
Drawing the tree for <b>k+1</b> levels.
</p>
<pre class="example">
+--func(n/2^2)- - - - - - - func(n/2^k)
+-----func(n/2)--+
+ +--func(n/2^2) - - - - - - - - - func(n/2^k)
func(n)--+
+ +--func(n/2^2) - - - - - - - - - func(n/2^k)
+-----func(n/2)-+
+--func(n/2^2)- - - - - - func(n/2^k)
</pre>
<ul class="org-ul">
<li><i><b>As we know from the tree method, the two extreme branches of the tree will always be the longest ones.</b></i></li>
</ul>
<p>
Both the extreme branches have the same call, which here is func(n/2<sup>k</sup>). To get the number of levels of a branch, we put the terminating condition at the extreme end of the tree. Here, the terminating condition is func(2), therefore, we will put \(func(2) = func(n/2^k)\), i.e.,
\[ 2 = \frac{n}{2^k} \]
\[ k + 1 = log_2n \]
Number of levels is \(log_2n\). Therefore, space complexity is <b>\(\theta (log_2n)\).</b>
</p>
</div>
</div>
</div>
</div>
<div id="outline-container-orgecf5585" class="outline-2">
<h2 id="orgecf5585"><span class="section-number-2">6.</span> Lecture 6</h2>
<div class="outline-text-2" id="text-6">
</div>
<div id="outline-container-org4ea791c" class="outline-3">
<h3 id="org4ea791c"><span class="section-number-3">6.1.</span> Divide and Conquer algorithms</h3>
<div class="outline-text-3" id="text-6-1">
<p>
Divide and conquer is a problem solving strategy. In divide and conquer algorithms, we solve a problem by recursively applying three steps :
</p>
<ul class="org-ul">
<li><b>Divide</b> : The problem is divided into smaller problems that are instances of the same problem.</li>
<li><b>Conquer</b> : If the subproblems are large, divide and solve them recursively. If a subproblem is small enough, solve it in a straightforward manner.</li>
<li><b>Combine</b> : Combine the solutions of the subproblems into the solution for the original problem.</li>
</ul>
<p>
<b>Examples</b>,
</p>
<ol class="org-ol">
<li>Binary search</li>
<li>Quick sort</li>
<li>Merge sort</li>
<li>Strassen's matrix multiplication</li>
</ol>
</div>
</div>
<div id="outline-container-org7d2edaf" class="outline-3">
<h3 id="org7d2edaf"><span class="section-number-3">6.2.</span> Searching for element in array</h3>
<div class="outline-text-3" id="text-6-2">
</div>
<div id="outline-container-orgb0f0eb9" class="outline-4">
<h4 id="orgb0f0eb9"><span class="section-number-4">6.2.1.</span> Straight forward approach for searching (<b>Linear Search</b>)</h4>
<div class="outline-text-4" id="text-6-2-1">
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">linear_search</span>(<span style="color: #c18401;">int</span> *<span style="color: #8b4513;">array</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span>){
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; n; i++){
<span style="color: #a626a4;">if</span>(array[i] == x){
printf(<span style="color: #50a14f;">"Found at index : %d"</span>, i);
<span style="color: #a626a4;">return</span> i;
}
}
<span style="color: #a626a4;">return</span> -1;
}
</pre>
</div>
<p>
Recursive approach
</p>
<div class="org-src-container">
<pre class="src src-python"><span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">call this function with index = 0</span>
<span style="color: #a626a4;">def</span> <span style="color: #0184bc;">linear_search</span>(array, item, index):
<span style="color: #a626a4;">if</span> <span style="color: #e44649;">len</span>(array) &lt; 1:
<span style="color: #a626a4;">return</span> -1
<span style="color: #a626a4;">elif</span> array[index] == item:
<span style="color: #a626a4;">return</span> index
<span style="color: #a626a4;">else</span>:
<span style="color: #a626a4;">return</span> linear_search(array, item, index + 1)
</pre>
</div>
<p>
<b>Recursive time complexity</b> : \(T(n) = T(n-1) + 1\)
</p>
<ul class="org-ul">
<li><b>Best Case</b> : The element to search is the first element of the array. So we need to do a single comparison. Therefore, the time complexity will be constant, <b>O(1)</b>.</li>
</ul>
<p>
<br />
</p>
<ul class="org-ul">
<li><b>Worst Case</b> : The element to search is the last element of the array. So we need to do <b>n</b> comparisons for an array of size n. Therefore, the time complexity is <b>O(n)</b>.</li>
</ul>
<p>
<br />
</p>
<ul class="org-ul">
<li><b>Average Case</b> : For calculating the average case, we need to consider the average number of comparisons done over all possible cases.</li>
</ul>
<table border="2" cellspacing="0" cellpadding="6" rules="all" frame="border">
<colgroup>
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Position of element to search (x)</th>
<th scope="col" class="org-left">Number of comparisions done</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">0</td>
<td class="org-left">1</td>
</tr>
<tr>
<td class="org-left">1</td>
<td class="org-left">2</td>
</tr>
<tr>
<td class="org-left">2</td>
<td class="org-left">3</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">.</td>
<td class="org-left">.</td>
</tr>
<tr>
<td class="org-left">n-1</td>
<td class="org-left">n</td>
</tr>
<tr>
<td class="org-left">Sum</td>
<td class="org-left">\(\frac{n(n+1)}{2}\)</td>
</tr>
</tbody>
</table>
<p>
\[ \text{Average number of comparisons} = \frac{ \text{Sum of the number of comparisons over all cases} }{ \text{Total number of cases} } \]
\[ \text{Average number of comparisons} = \frac{n(n+1)}{2} \div n \]
\[ \text{Average number of comparisons} = \frac{n+1}{2} \]
\[ \text{Time complexity in average case} = O(n) \]
</p>
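<p>
To make this average concrete, here is a small C sketch (the counting wrapper is an assumption, written just for this check) that searches for every element of an array once and averages the comparisons; for n = 100 it prints 50.50, i.e, \(\frac{n+1}{2}\).
</p>
<div class="org-src-container">
<pre class="src src-C">#include &lt;stdio.h&gt;

int comparisons = 0; /* incremented on every array[i] == x test */

int linear_search_counted(int *array, int n, int x){
    for(int i = 0; i &lt; n; i++){
        comparisons++;
        if(array[i] == x)
            return i;
    }
    return -1;
}

int main(){
    int n = 100;
    int array[100];
    for(int i = 0; i &lt; n; i++) array[i] = i;
    /* search once for every element, then average the comparison count */
    for(int i = 0; i &lt; n; i++)
        linear_search_counted(array, n, array[i]);
    printf("average comparisons = %.2f\n", (double)comparisons / n);
    return 0;
}
</pre>
</div>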
</div>
</div>
<div id="outline-container-org810960f" class="outline-4">
<h4 id="org810960f"><span class="section-number-4">6.2.2.</span> Divide and conquer approach (<b>Binary search</b>)</h4>
<div class="outline-text-4" id="text-6-2-2">
<p>
The binary search algorithm works on an array which is sorted. In this algorithm we:
</p>
<ol class="org-ol">
<li>Check the middle element of the array, return the index if element found.</li>
<li>If element &gt; array[mid], then our element is in the right part of the array, else it is in the left part of the array.</li>
<li>Get the mid element of the left/right sub-array</li>
<li>Repeat this process of dividing into subarrays and comparing the middle element till the required element is found.</li>
</ol>
<p>
The divide and conquer algorithm works as,
<br />
Suppose the function is binarySearch(array, left, right, key), where left and right are the indices of the left and right ends of the subarray, and key is the element we have to search for.
</p>
<ul class="org-ul">
<li><b>Divide part</b> : calculate mid index as mid = left + (right - left) /2 or (left + right) / 2. If array[mid] == key, return the value of mid.</li>
<li><b>Conquer part</b> : if array[mid] &gt; key, then key must not be in right half. So we search for key in left half, so we will recursively call binarySearch(array, left, mid - 1, key). Similarly, if array[mid] &lt; key, then key must not be in left half. So we search for key in right half, so recursively call binarySearch(array, mid + 1, right, key).</li>
<li><b>Combine part</b> : Since the binarySearch function will either return -1 or the index of the key, there is no need to combine the solutions of the subproblems.</li>
</ul>
<div id="orgb28e3dc" class="figure">
<p><img src="lectures/imgs/binary-search.jpg" alt="binary-search.jpg" />
</p>
</div>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">binary_search</span>(<span style="color: #c18401;">int</span> *<span style="color: #8b4513;">array</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">n</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span>){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">low</span> = 0;
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">high</span> = n;
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">mid</span> = (low + high) / 2;
<span style="color: #a626a4;">while</span>(low &lt;= high){
mid = (low + high) / 2;
<span style="color: #a626a4;">if</span> (x == array[mid]){
<span style="color: #a626a4;">return</span> mid;
}<span style="color: #a626a4;">else</span> <span style="color: #a626a4;">if</span> (x &lt; array[mid]){
low = low;
high = mid - 1;
}<span style="color: #a626a4;">else</span>{
low = mid + 1;
high = high;
}
}
<span style="color: #a626a4;">return</span> -1;
}
</pre>
</div>
<p>
Recursive approach:
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">int</span> <span style="color: #0184bc;">binary_search</span>(<span style="color: #c18401;">int</span> *<span style="color: #8b4513;">array</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">left</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">right</span>, <span style="color: #c18401;">int</span> <span style="color: #8b4513;">x</span>){
<span style="color: #a626a4;">if</span>(left &gt; right)
<span style="color: #a626a4;">return</span> -1;
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">mid</span> = (left + right) / 2;
<span style="color: #a0a1a7; font-weight: bold;">// </span><span style="color: #a0a1a7;">or we can use mid = left + (right - left) / 2, this will avoid int overflow when array has more elements.</span>
<span style="color: #a626a4;">if</span> (x == array[mid])
<span style="color: #a626a4;">return</span> mid;
<span style="color: #a626a4;">else</span> <span style="color: #a626a4;">if</span> (x &lt; array[mid])
<span style="color: #a626a4;">return</span> binary_search(array, left, mid - 1, x);
<span style="color: #a626a4;">else</span>
<span style="color: #a626a4;">return</span> binary_search(array, mid + 1, right, x);
}
</pre>
</div>
<p>
<b>Recursive time complexity</b> : \(T(n) = T(n/2) + 1\)
</p>
<ul class="org-ul">
<li><b>Best Case</b> : Time complexity = O(1)</li>
<li><b>Average Case</b> : Time complexity = O(log n)</li>
<li><b>Worst Case</b> : Time complexity = O(log n)</li>
</ul>
<p>
<i>Binary search is better for sorted arrays and linear search is better for unsorted arrays.</i>
<br />
<i>Another way to visualize binary search is using the binary tree.</i>
</p>
</div>
</div>
</div>
<div id="outline-container-org6977da8" class="outline-3">
<h3 id="org6977da8"><span class="section-number-3">6.3.</span> Max and Min element from array</h3>
<div class="outline-text-3" id="text-6-3">
</div>
<div id="outline-container-org451edcf" class="outline-4">
<h4 id="org451edcf"><span class="section-number-4">6.3.1.</span> Straightforward approach</h4>
<div class="outline-text-4" id="text-6-3-1">
<div class="org-src-container">
<pre class="src src-python"><span style="color: #a626a4;">def</span> <span style="color: #0184bc;">min_max</span>(a):
<span style="color: #e44649;">max</span> = <span style="color: #e44649;">min</span> = a[1]
<span style="color: #a626a4;">for</span> i <span style="color: #a626a4;">in</span> <span style="color: #e44649;">range</span>(2, n):
<span style="color: #a626a4;">if</span> a[i] &gt; <span style="color: #e44649;">max</span>:
<span style="color: #e44649;">max</span> = a[i];
<span style="color: #a626a4;">elif</span> a[i] &lt; <span style="color: #e44649;">min</span>:
<span style="color: #e44649;">min</span> = a[i];
<span style="color: #a626a4;">return</span> (<span style="color: #e44649;">min</span>,<span style="color: #e44649;">max</span>)
</pre>
</div>
<ul class="org-ul">
<li><b>Best case</b> : array is sorted in ascending order. The number of comparisons is \(n-1\). Time complexity is \(O(n)\).</li>
<li><b>Worst case</b> : array is sorted in descending order. The number of comparisons is \(2(n-1)\). Time complexity is \(O(n)\).</li>
<li><b>Average case</b> : the array can be arranged in n! ways, which makes counting comparisons in the average case hard and somewhat unnecessary, so it is skipped. Time complexity is \(O(n)\)</li>
</ul>
</div>
</div>
<div id="outline-container-org90353a2" class="outline-4">
<h4 id="org90353a2"><span class="section-number-4">6.3.2.</span> Divide and conquer approach</h4>
<div class="outline-text-4" id="text-6-3-2">
<p>
Suppose the function is MinMax(array, left, right) which will return a tuple (min, max). We will divide the array in the middle, mid = (left + right) / 2. The left array will be array[left:mid] and the right array will be array[mid+1:right]
</p>
<ul class="org-ul">
<li><b>Divide part</b> : Divide the array into left array and right array. If array has only single element then both min and max are that single element, if array has two elements then compare the two and the bigger element is max and other is min.</li>
<li><b>Conquer part</b> : Recursively get the min and max of left and right array, leftMinMax = MinMax(array, left, mid) and rightMinMax = MinMax(array, mid + 1, right).</li>
<li><b>Combine part</b> : If leftMinMax[0] &gt; rightMinMax[0], then min = rightMinMax[0], else min = leftMinMax[0]. Similarly, if leftMinMax[1] &gt; rightMinMax[1], then max = leftMinMax[1], else max = rightMinMax[1].</li>
</ul>
<div class="org-src-container">
<pre class="src src-python"><span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Will return (min, max)</span>
<span style="color: #a626a4;">def</span> <span style="color: #0184bc;">minmax</span>(array, left, right):
<span style="color: #a626a4;">if</span> left == right: <span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Single element in array</span>
<span style="color: #a626a4;">return</span> (array[left], array[left])
<span style="color: #a626a4;">elif</span> left + 1 == right: <span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Two elements in array</span>
<span style="color: #a626a4;">if</span> array[left] &gt; array[right]:
<span style="color: #a626a4;">return</span> (array[right], array[left])
<span style="color: #a626a4;">else</span>:
<span style="color: #a626a4;">return</span> (array[left], array[right])
<span style="color: #a626a4;">else</span>: <span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">More than two elements</span>
        mid = (left + right) // 2 <span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">integer division</span>
<span style="color: #8b4513;">minimum</span>, <span style="color: #8b4513;">maximum</span> = 0, 0
leftMinMax = minmax(array, left, mid)
rightMinMax = minmax(array, mid + 1, right)
<span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Combining result of the minimum from left and right subarray's</span>
<span style="color: #a626a4;">if</span> leftMinMax[0] &gt; rightMinMax[0]:
minimum = rightMinMax[0]
<span style="color: #a626a4;">else</span>:
minimum = leftMinMax[0]
<span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">Combining result of the maximum from left and right subarray's</span>
<span style="color: #a626a4;">if</span> leftMinMax[1] &gt; rightMinMax[1]:
maximum = leftMinMax[1]
<span style="color: #a626a4;">else</span>:
maximum = rightMinMax[1]
<span style="color: #a626a4;">return</span> (minimum, maximum)
</pre>
</div>
<ul class="org-ul">
<li>Time complexity</li>
</ul>
<p>
We are dividing the problem into two parts of approximately equal size, and combining their results takes two comparisons. Let's consider that a comparison takes unit time. Then the time complexity is
\[ T(n) = T(n/2) + T(n/2) + 2 \]
\[ T(n) = 2T(n/2) + 2 \]
The recurrence terminates at a single element in the array with zero comparisons, i.e, \(T(1) = 0\), or at two elements with a single comparison, \(T(2) = 1\).
<br />
<i>Now we can use the <b>master's theorem</b> or <b>tree method</b> to solve for time complexity.</i>
\[ T(n) = \theta (n) \]
</p>
<ul class="org-ul">
<li>Space complexity</li>
</ul>
<p>
For space complexity, we need to find the longest branch of the recursion tree. Since both recursive calls are of the same size and the factor is (1/2), for <b>k+1</b> levels the function call will be func(n/2<sup>k</sup>), and the terminating condition is func(2)
\[ func(2) = func(n/2^k) \]
\[ 2 = \frac{n}{2^k} \]
\[ 2^{k+1} = n \]
\[ k + 1 = log_2n \]
Since the longest branch has \(log_2n\) nodes, the space complexity is \(O(log_2n)\).
</p>
<ul class="org-ul">
<li>Number of comparisons</li>
</ul>
<p>
In every case, i.e, the best, average and worst cases, <b>the number of comparisons in this algorithm is the same</b>.
\[ \text{Total number of comparisons} = \frac{3n}{2} - 2 \]
For example, for n = 4 the recurrence gives \(T(4) = 2T(2) + 2 = 4\), which matches \(\frac{3 \times 4}{2} - 2 = 4\). If n is not a power of 2, we round the number of comparisons up.
</p>
</div>
</div>
<div id="outline-container-orgbd13f37" class="outline-4">
<h4 id="orgbd13f37"><span class="section-number-4">6.3.3.</span> Efficient single loop approach (Increment by 2)</h4>
<div class="outline-text-4" id="text-6-3-3">
<p>
In this algorithm we compare pairs of numbers from the array. It works on the idea that only the larger number of the pair can be the new maximum and only the smaller one can be the new minimum. So after comparing the pair, we test just the bigger of the two against the maximum and the smaller of the two against the minimum. This brings the number of comparisons needed to check two numbers of the array down from 4 (when we increment by 1) to 3 (when we increment by 2).
</p>
<div class="org-src-container">
<pre class="src src-python"><span style="color: #a626a4;">def</span> <span style="color: #0184bc;">min_max</span>(array):
(<span style="color: #8b4513;">minimum</span>, <span style="color: #8b4513;">maximum</span>) = (array[0], array[0])
<span style="color: #8b4513;">i</span> = 1
<span style="color: #a626a4;">while</span> i &lt; <span style="color: #e44649;">len</span>(array):
<span style="color: #a626a4;">if</span> i + 1 == <span style="color: #e44649;">len</span>(array): <span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">don't check i+1, it's out of bounds, break the loop after checking a[i]</span>
<span style="color: #a626a4;">if</span> array[i] &gt; <span style="color: #8b4513;">maximum</span>:
maximum = array[i]
<span style="color: #a626a4;">elif</span> array[i] &lt; <span style="color: #8b4513;">minimum</span>:
minimum = array[i]
<span style="color: #a626a4;">break</span>
<span style="color: #a626a4;">if</span> array[i] &gt; array[i + 1]:
<span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">check possibility that array[i] is maximum and array[i+1] is minimum</span>
<span style="color: #a626a4;">if</span> array[i] &gt; <span style="color: #8b4513;">maximum</span>:
maximum = array[i]
<span style="color: #a626a4;">if</span> array[i + 1] &lt; <span style="color: #8b4513;">minimum</span>:
minimum = array[i + 1]
<span style="color: #a626a4;">else</span>:
<span style="color: #a0a1a7; font-weight: bold;"># </span><span style="color: #a0a1a7;">check possibility that array[i+1] is maximum and array[i] is minimum</span>
<span style="color: #a626a4;">if</span> array[i + 1] &gt; <span style="color: #8b4513;">maximum</span>:
maximum = array[i + 1]
<span style="color: #a626a4;">if</span> array[i] &lt; <span style="color: #8b4513;">minimum</span>:
minimum = array[i]
<span style="color: #8b4513;">i</span> += 2
<span style="color: #a626a4;">return</span> (minimum, maximum)
</pre>
</div>
<ul class="org-ul">
<li>Time complexity = O(n)</li>
<li>Space complexity = O(1)</li>
<li>Total number of comparisons =
\[ \text{If n is odd}, \frac{3(n-1)}{2} \]
\[ \text{If n is even}, \frac{3n}{2} - 2 \]</li>
</ul>
</div>
</div>
</div>
</div>
<div id="outline-container-org6f4e2ff" class="outline-2">
<h2 id="org6f4e2ff"><span class="section-number-2">7.</span> Lecture 7</h2>
<div class="outline-text-2" id="text-7">
</div>
<div id="outline-container-org5400a63" class="outline-3">
<h3 id="org5400a63"><span class="section-number-3">7.1.</span> Square matrix multiplication</h3>
<div class="outline-text-3" id="text-7-1">
<p>
Matrix multiplication algorithms taken from here:
<a href="https://www.cs.mcgill.ca/~pnguyen/251F09/matrix-mult.pdf">https://www.cs.mcgill.ca/~pnguyen/251F09/matrix-mult.pdf</a>
</p>
</div>
<div id="outline-container-orgb44a809" class="outline-4">
<h4 id="orgb44a809"><span class="section-number-4">7.1.1.</span> Straight forward method</h4>
<div class="outline-text-4" id="text-7-1-1">
<div class="org-src-container">
<pre class="src src-C"><span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">This will calculate A X B and store it in C.</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #e44649;">#define</span> <span style="color: #8b4513;">N</span> 3
<span style="color: #c18401;">int</span> <span style="color: #0184bc;">main</span>(){
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">A</span>[N][N] = {
{1,2,3},
{4,5,6},
{7,8,9} };
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">B</span>[N][N] = {
{10,20,30},
{40,50,60},
{70,80,90} };
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">C</span>[N][N];
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; N; i++){
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt; N; j++){
C[i][j] = 0;
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">k</span> = 0; k &lt; N; k++){
C[i][j] += A[i][k] * B[k][j];
}
}
}
<span style="color: #a626a4;">return</span> 0;
}
</pre>
</div>
<p>
Time complexity is \(O(n^3)\)
</p>
</div>
</div>
<div id="outline-container-orgff68f07" class="outline-4">
<h4 id="orgff68f07"><span class="section-number-4">7.1.2.</span> Divide and conquer approach</h4>
<div class="outline-text-4" id="text-7-1-2">
<p>
The divide and conquer algorithm only works for a square matrix whose size is n X n, where n is a power of 2. The algorithm works as follows.
</p>
<pre class="example">
MatrixMul(A, B, n):
If n == 2 {
return A X B
}else{
Break A into four parts A_11, A_12, A_21, A_22, where A = [[ A_11, A_12],
[ A_21, A_22]]
Break B into four parts B_11, B_12, B_21, B_22, where B = [[ B_11, B_12],
[ B_21, B_22]]
C_11 = MatrixMul(A_11, B_11, n/2) + MatrixMul(A_12, B_21, n/2)
C_12 = MatrixMul(A_11, B_12, n/2) + MatrixMul(A_12, B_22, n/2)
C_21 = MatrixMul(A_21, B_11, n/2) + MatrixMul(A_22, B_21, n/2)
C_22 = MatrixMul(A_21, B_12, n/2) + MatrixMul(A_22, B_22, n/2)
C = [[ C_11, C_12],
[ C_21, C_22]]
return C
}
</pre>
<p>
The addition of matrices of size (n X n) takes \(\theta (n^2)\) time; therefore, the computation of C<sub>11</sub> will take \(\theta \left( \left( \frac{n}{2} \right)^2 \right)\) time, which equals \(\theta \left( \frac{n^2}{4} \right)\). Therefore, the computation time of C<sub>11</sub>, C<sub>12</sub>, C<sub>21</sub> and C<sub>22</sub> combined will be \(\theta \left( 4 \frac{n^2}{4} \right)\), which equals \(\theta (n^2)\).
<br />
There are 8 recursive calls in this function, each of the form MatrixMul(n/2); therefore, the time complexity will be
\[ T(n) = 8T(n/2) + \theta (n^2) \]
Using the <b>master's theorem</b>
\[ T(n) = \theta (n^{log_28}) \]
\[ T(n) = \theta (n^3) \]
</p>
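<p>
Below is a minimal C sketch of this scheme, assuming n is a power of 2 and using n == 1 as the terminating condition (see the note under Strassen's algorithm). The quadrants are addressed by offsets into flat row-major arrays, and the function accumulates into C, so the caller must zero C first; the name matrix_mul and this calling convention are assumptions for illustration.
</p>
<div class="org-src-container">
<pre class="src src-C">/* C_ij += A_ik * B_kj over the four (n/2 x n/2) quadrants: 8 recursive calls. */
void matrix_mul(const int *A, const int *B, int *C, int n, int stride){
    if(n == 1){
        C[0] += A[0] * B[0];
        return;
    }
    int h = n / 2;
    /* offsets of the four quadrants inside an n x n block with the given row stride */
    int off[2][2] = { {0, h}, {h * stride, h * stride + h} };
    for(int i = 0; i &lt; 2; i++)
        for(int j = 0; j &lt; 2; j++)
            for(int k = 0; k &lt; 2; k++)
                matrix_mul(A + off[i][k], B + off[k][j], C + off[i][j], h, stride);
}
/* usage: for int A[N][N], B[N][N], C[N][N] with C zeroed out,
   call matrix_mul(&amp;A[0][0], &amp;B[0][0], &amp;C[0][0], N, N). */
</pre>
</div>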
</div>
</div>
<div id="outline-container-org25ef748" class="outline-4">
<h4 id="org25ef748"><span class="section-number-4">7.1.3.</span> Strassen's algorithm</h4>
<div class="outline-text-4" id="text-7-1-3">
<p>
Another, more efficient divide and conquer algorithm for matrix multiplication. This algorithm also only works on square matrices with n being a power of 2. It is based on the observation that, for A X B = C, we can calculate C<sub>11</sub>, C<sub>12</sub>, C<sub>21</sub> and C<sub>22</sub> as,
</p>
<p>
\[ C_{11} = P_5 + P_4 - P_2 + P_6 \]
\[ C_{12} = P_1 + P_2 \]
\[ C_{21} = P_3 + P_4 \]
\[ C_{22} = P_1 + P_5 - P_3 - P_7 \]
Where,
\[ P_1 = A_{11} \times (B_{12} - B_{22}) \]
\[ P_2 = (A_{11} + A_{12}) \times B_{22} \]
\[ P_3 = (A_{21} + A_{22}) \times B_{11} \]
\[ P_4 = A_{22} \times (B_{21} - B_{11}) \]
\[ P_5 = (A_{11} + A_{22}) \times (B_{11} + B_{22}) \]
\[ P_6 = (A_{12} - A_{22}) \times (B_{21} + B_{22}) \]
\[ P_7 = (A_{11} - A_{21}) \times (B_{11} + B_{12}) \]
This reduces the number of recursive calls from 8 to 7.
</p>
<pre class="example">
Strassen(A, B, n):
If n == 2 {
return A X B
}
Else{
Break A into four parts A_11, A_12, A_21, A_22, where A = [[ A_11, A_12],
[ A_21, A_22]]
Break B into four parts B_11, B_12, B_21, B_22, where B = [[ B_11, B_12],
[ B_21, B_22]]
P_1 = Strassen(A_11, B_12 - B_22, n/2)
P_2 = Strassen(A_11 + A_12, B_22, n/2)
P_3 = Strassen(A_21 + A_22, B_11, n/2)
P_4 = Strassen(A_22, B_21 - B_11, n/2)
P_5 = Strassen(A_11 + A_22, B_11 + B_22, n/2)
P_6 = Strassen(A_12 - A_22, B_21 + B_22, n/2)
P_7 = Strassen(A_11 - A_21, B_11 + B_12, n/2)
C_11 = P_5 + P_4 - P_2 + P_6
C_12 = P_1 + P_2
C_21 = P_3 + P_4
C_22 = P_1 + P_5 - P_3 - P_7
C = [[ C_11, C_12],
[ C_21, C_22]]
return C
}
</pre>
<p>
This algorithm uses 18 matrix addition operations, so the computation time for those is \(\theta \left(18\left( \frac{n}{2} \right)^2 \right)\), which is equal to \(\theta (4.5 n^2)\), i.e, \(\theta (n^2)\).
<br />
There are 7 recursive calls in this function, each of which is Strassen(n/2); therefore, the time complexity is
\[ T(n) = 7T(n/2) + \theta (n^2) \]
Using the master's theorem
\[ T(n) = \theta (n^{log_27}) \]
\[ T(n) = \theta (n^{2.807}) \]
</p>
<ul class="org-ul">
<li><i><b>NOTE</b> : The divide and conquer approach and Strassen's algorithm typically use n == 1 as their terminating condition, since for multiplying 1 X 1 matrices we only need to calculate the product of the single elements they contain; that product is then the single element of the resultant 1 X 1 matrix.</i></li>
</ul>
</div>
</div>
</div>
<div id="outline-container-org72167b4" class="outline-3">
<h3 id="org72167b4"><span class="section-number-3">7.2.</span> Sorting algorithms</h3>
<div class="outline-text-3" id="text-7-2">
</div>
<div id="outline-container-orgc7ba8f0" class="outline-4">
<h4 id="orgc7ba8f0"><span class="section-number-4">7.2.1.</span> In place vs out place sorting algorithm</h4>
<div class="outline-text-4" id="text-7-2-1">
<p>
If the space complexity of a sorting algorithm is \(\theta (1)\), then the algorithm is called an in-place sorting algorithm, else it is called an out-of-place sorting algorithm.
</p>
</div>
</div>
<div id="outline-container-org49041fd" class="outline-4">
<h4 id="org49041fd"><span class="section-number-4">7.2.2.</span> Bubble sort</h4>
<div class="outline-text-4" id="text-7-2-2">
<p>
Bubble sort is the simplest sorting algorithm and is easy to implement, so it is useful when the number of elements to sort is small. It is an in-place sorting algorithm. We compare adjacent pairs of elements from the array and swap them into the correct order. Suppose the input has n elements.
</p>
<ul class="org-ul">
<li>For the first pass of the array, we do <b>n-1</b> comparisons between adjacent pairs: the 1st and 2nd element, then the 2nd and 3rd, then the 3rd and 4th, up to the (n-1)th and nth element, swapping whenever a pair is out of order. <i>A single pass puts one element at the end of the list in its correct position.</i></li>
<li>For the second pass of the array, we do <b>n-2</b> comparisons because the last element is already in its place after the first pass.</li>
<li>Similarly, we continue till we only do a single comparison.</li>
<li>The total number of comparisons will be
\[ \text{Total comparisons} = (n - 1) + (n - 2) + (n - 3) + ..... + 2 + 1 \]
\[ \text{Total comparisons} = \frac{n(n-1)}{2} \]
Therefore, <b>time complexity is \(\theta (n^2)\)</b></li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">binary_search</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[]){
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">i is the number of comparisions in the pass</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = len(array) - 1; i &gt;= 1; i--){
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">j is used to traverse the list</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #a626a4;">for</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = 0; j &lt; i; j++){
<span style="color: #a626a4;">if</span>(array[j] &gt; array[j+1])
array[j], array[j+1] = array[j+1], array[j];
}
}
}
</pre>
</div>
<p>
<b><i>The minimum number of swaps can be calculated by checking how many swap operations are needed to get each element to its correct position.</i></b> For an ascending sort, this is the number of smaller elements towards the right of the given element (the table below follows this rule). For a descending sort, check the number of larger elements towards the right of the given element. Example for ascending sort,
</p>
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
</colgroup>
<tbody>
<tr>
<td class="org-left">Array</td>
<td class="org-right">21</td>
<td class="org-right">16</td>
<td class="org-right">17</td>
<td class="org-right">8</td>
<td class="org-right">31</td>
</tr>
<tr>
<td class="org-left">Minimum number of swaps to get in correct position</td>
<td class="org-right">3</td>
<td class="org-right">1</td>
<td class="org-right">0</td>
<td class="org-right">0</td>
<td class="org-right">0</td>
</tr>
</tbody>
</table>
<p>
Therefore, the minimum number of swaps is (3 + 1 + 1 + 0 + 0), which is equal to 5 swaps (one swap per inversion, since bubble sort only swaps adjacent elements).
</p>
<ul class="org-ul">
<li><b><i>Reducing the number of comparisons in the implementation</i></b> : at the end of every pass, check the number of swaps. <b>If the number of swaps in a pass is zero, then the array is sorted.</b> This implementation does not give the minimum number of comparisons, but it reduces the number of comparisons compared to the default implementation. It reduces the time complexity to \(\theta (n)\) in the best case, since we then only need a single pass through the array; a sketch follows this list.</li>
</ul>
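<p>
A minimal sketch of this early-exit optimization, using the same conventions as the C examples above:
</p>
<div class="org-src-container">
<pre class="src src-C">void bubble_sort_optimized(int array[], int n){
    for(int i = n - 1; i &gt;= 1; i--){
        int swaps = 0;
        for(int j = 0; j &lt; i; j++){
            if(array[j] &gt; array[j+1]){
                int temp = array[j];
                array[j] = array[j+1];
                array[j+1] = temp;
                swaps++;
            }
        }
        /* if a full pass did no swaps, the array is already sorted */
        if(swaps == 0)
            break;
    }
}
</pre>
</div>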
<p>
Recursive time complexity : \(T(n) = T(n-1) + n - 1\)
</p>
</div>
</div>
</div>
</div>
<div id="outline-container-org04d4663" class="outline-2">
<h2 id="org04d4663"><span class="section-number-2">8.</span> Lecture 8</h2>
<div class="outline-text-2" id="text-8">
</div>
<div id="outline-container-org000935c" class="outline-3">
<h3 id="org000935c"><span class="section-number-3">8.1.</span> Selection sort</h3>
<div class="outline-text-3" id="text-8-1">
<p>
It is an in-place sorting technique. In this algorithm, we get the minimum element of the array and swap it into the first position. Then we get the minimum of array[1:] and place it at index 1. Similarly, we get the minimum of array[2:] and place it at index 2. We continue till we get the minimum of array[len(array) - 2:] and place it at index [len(array) - 2].
</p>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">selection_sort</span>(<span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[]){
<span style="color: #a626a4;">for</span>( <span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 0; i &lt; len(array)-2 ; i++ ) {
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Get the minimum index from the sub-array [i:]</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">min_index</span> = i;
<span style="color: #a626a4;">for</span>( <span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = i+1; j &lt; len(array) - 1; j++ )
<span style="color: #a626a4;">if</span> (array[j] &lt; array[min_index]) { min_index = j; }
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Swap the min_index with it's position at start of sub-array</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
array[i], array[min_index] = array[min_index], array[i];
}
}
</pre>
</div>
<p>
The total number of comparisons is,
\[ \text{Total number of comparisons} = (n-1) + (n-2) + (n-3) + ... + (1) \]
\[ \text{Total number of comparisons} = \frac{n(n-1)}{2} \]
For this algorithm, the number of comparisons is the same in the best, average and worst cases.
Therefore the time complexity in all cases is, \[ \text{Time complexity} = \theta (n^2) \]
</p>
<ul class="org-ul">
<li>Recurrence time complexity : \(T(n) = T(n-1) + n - 1\)</li>
</ul>
</div>
</div>
<div id="outline-container-orgcf47e26" class="outline-3">
<h3 id="orgcf47e26"><span class="section-number-3">8.2.</span> Insertion sort</h3>
<div class="outline-text-3" id="text-8-2">
<p>
It is an in-place sorting algorithm.
</p>
<ul class="org-ul">
<li>In this algorithm, we first divide array into two sections. Initially, the left section has a single element and right section has all the other elements. Therefore, the left part is sorted and right part is unsorted.</li>
<li>We call the leftmost element of the right section the key.</li>
<li>Now, we insert the key at its correct position in the left section.</li>
<li>As is commonly known, an insertion operation requires shifting elements. So we shift elements in the left section.</li>
</ul>
<div class="org-src-container">
<pre class="src src-C"><span style="color: #c18401;">void</span> <span style="color: #0184bc;">insertion_sort</span> ( <span style="color: #c18401;">int</span> <span style="color: #8b4513;">array</span>[] ) {
<span style="color: #a626a4;">for</span>( <span style="color: #c18401;">int</span> <span style="color: #8b4513;">i</span> = 1; i &lt; len(array); i++ ) {
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Key is the first element of the right section of array</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">key</span> = array[j];
<span style="color: #c18401;">int</span> <span style="color: #8b4513;">j</span> = i - 1;
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Shift till we find the correct position of the key in the left section</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
<span style="color: #a626a4;">while</span> ( j &gt; 0 &amp;&amp; array[j] &gt; key ) {
array[j + 1] = array[j];
j -= 1;
}
<span style="color: #a0a1a7; font-weight: bold;">/* </span><span style="color: #a0a1a7;">Insert key in it's correct position</span><span style="color: #a0a1a7; font-weight: bold;"> */</span>
array[j+1] = key;
}
}
</pre>
</div>
<ul class="org-ul">
<li>Time complexity</li>
</ul>
<p>
<b>Best Case</b> : The best case is when the input array is already sorted. In this case, we do <b>(n-1)</b> comparisons and no shifts. The time complexity will be \(\theta (n)\)
<br />
<b>Worst Case</b> : The worst case is when the input array is in descending order when we need to sort in ascending order, and vice versa (i.e, the reverse of sorted). The number of comparisons is
<br />
\[ [1 + 2 + 3 + .. + (n-1)] = \frac{n(n-1)}{2} \]
<br />
The number of element shift operations is also
<br />
\[ [1 + 2 + 3 + .. + (n-1)] = \frac{n(n-1)}{2} \]
<br />
The total time complexity becomes \(\theta \left( 2 \frac{n(n-1)}{2} \right)\), which simplifies to \(\theta (n^2)\).
</p>
<ul class="org-ul">
<li><b>NOTE</b> : Rather than using <b>linear search</b> to find the position of the key in the left (sorted) section, we can use <b>binary search</b> to reduce the number of comparisons, as sketched below.</li>
</ul>
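<p>
A sketch of this variant (binary insertion sort) is below; find_position is a hypothetical helper that binary-searches the sorted left section for the index where the key belongs. Note that only the comparisons are reduced; the number of shifts stays the same, so the worst-case time complexity remains \(\theta (n^2)\).
</p>
<div class="org-src-container">
<pre class="src src-C">/* returns the index in array[low..high] at which key should be inserted */
int find_position(int array[], int low, int high, int key){
    while(low &lt;= high){
        int mid = low + (high - low) / 2;
        if(array[mid] &gt; key)
            high = mid - 1;
        else
            low = mid + 1;
    }
    return low;
}

void binary_insertion_sort(int array[], int n){
    for(int i = 1; i &lt; n; i++){
        int key = array[i];
        int pos = find_position(array, 0, i - 1, key);
        /* shift elements right to make room for the key */
        for(int j = i - 1; j &gt;= pos; j--)
            array[j + 1] = array[j];
        array[pos] = key;
    }
}
</pre>
</div>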
</div>
</div>
<div id="outline-container-orgdaadb6b" class="outline-3">
<h3 id="orgdaadb6b"><span class="section-number-3">8.3.</span> Inversion in array</h3>
<div class="outline-text-3" id="text-8-3">
<p>
The inversion count of an array is a measure of how far the array is from being sorted.
<br />
For an ascending sort, it is the number of element pairs such that array[i] &gt; array[j] and i &lt; j, or in other words, array[i] &lt; array[j] and i &gt; j.
</p>
<ul class="org-ul">
<li>For <b>ascending sort</b>, we can simply count, for each element, the number of elements to its right that are smaller.</li>
</ul>
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
</colgroup>
<tbody>
<tr>
<td class="org-left">Array</td>
<td class="org-right">10</td>
<td class="org-right">6</td>
<td class="org-right">12</td>
<td class="org-right">8</td>
<td class="org-right">3</td>
<td class="org-right">1</td>
</tr>
<tr>
<td class="org-left">Inversions</td>
<td class="org-right">4</td>
<td class="org-right">2</td>
<td class="org-right">3</td>
<td class="org-right">2</td>
<td class="org-right">1</td>
<td class="org-right">0</td>
</tr>
</tbody>
</table>
<p>
Total number of inversions = (4+2+3+2+1+0) = 12
</p>
<ul class="org-ul">
<li>For <b>descending sort</b>, we can simply count, for each element, the number of elements to its right that are larger.</li>
</ul>
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
<col class="org-right" />
</colgroup>
<tbody>
<tr>
<td class="org-left">Array</td>
<td class="org-right">10</td>
<td class="org-right">6</td>
<td class="org-right">12</td>
<td class="org-right">8</td>
<td class="org-right">3</td>
<td class="org-right">1</td>
</tr>
<tr>
<td class="org-left">Inversions</td>
<td class="org-right">1</td>
<td class="org-right">2</td>
<td class="org-right">0</td>
<td class="org-right">0</td>
<td class="org-right">0</td>
<td class="org-right">0</td>
</tr>
</tbody>
</table>
<p>
Total number of inversions = 1 + 2 = 3
</p>
<ul class="org-ul">
<li>For an array of size <b>n</b></li>
</ul>
<p>
\[ \text{Maximum possible number of inversions} = \frac{n(n-1)}{2} \]
\[ \text{Minimum possible number of inversions} = 0 \]
</p>
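<p>
A direct \(O(n^2)\) C sketch that counts inversions for an ascending sort straight from the definition (the function name count_inversions is an assumption):
</p>
<div class="org-src-container">
<pre class="src src-C">/* counts pairs (i, j) with i &lt; j and array[i] &gt; array[j] */
int count_inversions(int array[], int n){
    int count = 0;
    for(int i = 0; i &lt; n; i++)
        for(int j = i + 1; j &lt; n; j++)
            if(array[i] &gt; array[j])
                count++;
    return count;
}
/* for the first table above, count_inversions((int[]){10, 6, 12, 8, 3, 1}, 6)
   returns 12, matching the total computed there. */
</pre>
</div>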
</div>
<div id="outline-container-org90463d0" class="outline-4">
<h4 id="org90463d0"><span class="section-number-4">8.3.1.</span> Relation between time complexity of insertion sort and inversion</h4>
<div class="outline-text-4" id="text-8-3-1">
<p>
If the inversion count of an array is f(n), then the time complexity of insertion sort will be \(\theta (n + f(n))\). For example, a nearly sorted array with only \(O(n)\) inversions is sorted in \(\theta (n)\) time.
</p>
</div>
</div>
</div>
</div>
</div>
</body>
</html>