Just complete TODO's now

main
lomna 1 year ago
parent 173ccf2254
commit 1ca0d3d62c

@ -0,0 +1,14 @@
<svg width="201mm" height="95mm" version="1.1" viewBox="0 0 201 95" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect width="201" height="95" fill="#fff"/>
<g transform="translate(-3-3)" stroke-width="1.5" stroke="#000">
<path d="m103.3 61c-56.6 .4-56.2 34.2 0 34 56.2 0 56.6-33.7 0-34z" fill="#eef" stroke-dasharray="1.5"/>
<g id="c"><path d="m6 5.7 0 89.3c46-.4 86-55.8 85.8-89.3z" fill="#eef" stroke-dasharray="1.5"/><g fill="#fff"><circle cx="18" cy="21.6" r="9.4"/><circle cx="74.6" cy="78" r="9.4"/><circle cx="74.6" cy="21.6" r="9.4"/><circle cx="18" cy="78.2" r="9.4"/></g>
<use transform="rotate(90 74.6 78.2)" xlink:href="#a"/><path d="m27.4 78.2h29.5"/>
<path id="a" d="m50.3 71.5 3.3 6.7-3.3 6.7 13.4-6.7z" stroke="none" fill="#000"/><path d="m74.6 31 0 29.6"/></g>
<use transform="translate(0-56.6)" xlink:href="#a"/><use transform="rotate(-90 17.5 77.6)" xlink:href="#a"/><use transform="rotate(135 46 66.8)" xlink:href="#a"/><use transform="matrix(-1 0 0 1 207 0)" xlink:href="#c"/><use transform="translate(57.8-56.6)" xlink:href="#a"/>
<g id="b" fill="none"><use transform="rotate(133 74.8 82)" stroke="none" xlink:href="#a"/><path d="m88.7 72.7c8-8.7 19.2-10 34.3 5.5"/>
<g transform="rotate(180 104 77.4)"><use transform="rotate(133.4 74.8 82)" stroke="none" xlink:href="#a"/><path d="m88.6 73c9-10.3 17.7-9.7 34.4 5"/></g></g>
<use transform="translate(56.4-55)" xlink:href="#b"/><use transform="rotate(-90 131.8 22.2)" xlink:href="#b"/>
<g fill="none"><path d="m27.4 21.6 29.5 0"/><path d="m18 69v-30.7"/><path d="m68 28.4-38.5 38"/><path d="m84 21.6h30.8"/></g>
<g style="font-family:'Arial';-inkscape-font-specification:'Arial'" font-weight="bold" text-anchor="middle" font-size="12px" stroke="none"><text x="18" y="24.7">a</text><text x="74.4" y="26">b</text><text x="132" y="24.7">c</text><text x="189" y="24.7">d</text><text x="18" y="81.3">e</text>
<text x="74.3" y="82.6">f</text><text x="132.5" y="80">g</text><text x="188.8" y="82.5">h</text></g></g></svg>


File diff suppressed because it is too large

@ -12,7 +12,8 @@ To create a stack, we will keep track of the index which is the *top* of the arr
** Operation on stack
A stack has two operations
1. Push
2. Pop
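The two operations can be sketched as a small compilable C example on an array-backed stack that tracks the *top* index; the `stack_type`, `STACK_MAX` and function names here are illustrative, not from the original notes.

#+BEGIN_SRC c
#define STACK_MAX 100

typedef struct {
    int data[STACK_MAX];
    int top; /* index of the top element; -1 when the stack is empty */
} stack_type;

void stack_init(stack_type *s) { s->top = -1; }

/* Push: increment top, then store the element there */
int stack_push(stack_type *s, int value)
{
    if (s->top == STACK_MAX - 1)
        return -1; /* overflow */
    s->data[++s->top] = value;
    return 0;
}

/* Pop: return the top element, then decrement top */
int stack_pop(stack_type *s)
{
    return s->data[s->top--]; /* caller must check for underflow first */
}
#+END_SRC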
* Direct Address Table
Direct Address Tables are useful when we know that the keys lie within a small range. Then, we can allocate an array such that each possible key gets an index, and simply store the values according to their keys.
\\
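As an illustration, a direct address table for keys known to lie in the range 0..99 might look like this in C; the type and function names are made up for this sketch.

#+BEGIN_SRC c
#include <stdbool.h>

#define KEY_RANGE 100 /* keys are assumed to lie in 0..99 */

typedef struct {
    int value[KEY_RANGE];
    bool used[KEY_RANGE]; /* marks which slots actually hold a value */
} direct_address_table;

void dat_insert(direct_address_table *t, int key, int val)
{
    t->value[key] = val; /* the key itself is the array index */
    t->used[key] = true;
}

/* Returns true and writes the value to *out if the key is present. */
bool dat_search(direct_address_table *t, int key, int *out)
{
    if (!t->used[key])
        return false;
    *out = t->value[key];
    return true;
}

void dat_delete(direct_address_table *t, int key)
{
    t->used[key] = false;
}
#+END_SRC

Insertion, search and deletion are all a single array access, i.e. $O(1)$.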
@ -306,7 +307,8 @@ h(k) = 532 mod 100
h(k) = 32
** Universal Hashing
TODO : Basics of universal hashing.
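One standard universal family is $h_{a,b}(k) = ((ak + b) \bmod p) \bmod m$, where $p$ is a prime larger than every key and $a, b$ are chosen at random when the table is created. A minimal C sketch follows; the struct, function names, and the prime $p = 101$ are assumptions for illustration only.

#+BEGIN_SRC c
#include <stdlib.h>

/* h(k) = ((a*k + b) mod p) mod m, with p prime and p > every key.
   Here p = 101, so keys are assumed to be below 101. */
#define P 101UL

typedef struct {
    unsigned long a; /* random, 1 <= a <= p-1 */
    unsigned long b; /* random, 0 <= b <= p-1 */
    unsigned long m; /* number of slots in the hash table */
} universal_hash;

/* Pick one hash function at random from the family. */
universal_hash uh_pick(unsigned long m)
{
    universal_hash h;
    h.a = 1 + (unsigned long)rand() % (P - 1);
    h.b = (unsigned long)rand() % P;
    h.m = m;
    return h;
}

unsigned long uh_hash(universal_hash h, unsigned long k)
{
    return ((h.a * k + h.b) % P) % h.m;
}
#+END_SRC

Because the function is chosen at random at run time, no fixed set of keys can force worst-case collisions in advance.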
** Perfect Hashing
*NOTE* : This doesn't seem to be in the B.Tech syllabus, but it seems cool.
\\
@ -852,7 +854,7 @@ This function runs in $\theta (log_2n)$ time. The algorithm for this works as fo
Since we shift the element upwards, this operation is often called the /up-heap/ operation. It is also known as /trickle-up, swim-up, heapify-up, or cascade-up/.
\\
\\
TODO : Maybe the up-heapify function should be made cleaner rather than trying to mirror the down-heapify function.
*** Insertion
Insertion takes $\theta (log_2n)$ time in a binary heap. To insert an element into the heap, we will add it to the end of the heap and then apply the up-heapify operation to the element.
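A compilable sketch of insertion via up-heapify, for a max-heap stored in a 0-indexed array; the function names and the max-heap choice are assumptions here, and the notes may use a different convention.

#+BEGIN_SRC c
/* Max-heap in a 0-indexed array: the parent of index i is (i-1)/2. */
void up_heapify(int heap[], int i)
{
    while (i > 0 && heap[(i - 1) / 2] < heap[i]) {
        int parent = (i - 1) / 2;
        int tmp = heap[i]; /* swap the element with its parent */
        heap[i] = heap[parent];
        heap[parent] = tmp;
        i = parent; /* continue checking from the parent's position */
    }
}

/* Insert: place the element at the end, then up-heapify it. */
void heap_insert(int heap[], int *len, int value)
{
    heap[*len] = value;
    up_heapify(heap, *len);
    (*len)++;
}
#+END_SRC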
@ -953,6 +955,7 @@ We need a way to represent graphs in computers and to search a graph. Searching
\\
The two common ways of representing graphs are adjacency lists and adjacency matrices. Either can represent both directed and undirected graphs.
TODO : add images to show how it is represented
*** Adjacency List
Every node in the graph is represented by a linked list. The list contains the nodes to which that node is connected by an edge.
\\
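A minimal C sketch of an adjacency-list representation; the struct and function names are illustrative, not from the original notes.

#+BEGIN_SRC c
#include <stdlib.h>

typedef struct adj_node {
    int node;             /* index of the neighbouring vertex */
    struct adj_node *next;
} adj_node;

typedef struct {
    int n;                /* number of vertices */
    adj_node **list;      /* list[v] is the head of v's neighbour list */
} graph_type;

graph_type graph_create(int n)
{
    graph_type g;
    g.n = n;
    g.list = calloc((size_t)n, sizeof(adj_node *));
    return g;
}

/* Add a directed edge u -> v by prepending v to u's list. */
void graph_add_edge(graph_type *g, int u, int v)
{
    adj_node *e = malloc(sizeof(adj_node));
    e->node = v;
    e->next = g->list[u];
    g->list[u] = e;
}
#+END_SRC

An undirected edge is simply stored twice, once in each endpoint's list.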
@ -1112,24 +1115,26 @@ Therefore, we can get the shortest path now as follows
This will print the shortest path from the end node to the start node.
*** Depth first search
Unlike BFS, depth first search is more biased towards the farthest nodes of a graph. It follows a single path till it reaches the end of a path. After that, it backtracks to the last open path and follows that one. This process is repeated till all nodes are covered.
\\
\\
Implementation of DFS is very similar to BFS with two differences. Rather than using a queue, we use a *stack*. In BFS, the explored nodes are added to the queue, but in DFS we will add unexplored nodes to the stack.
Also, in DFS, nodes are accessed twice: first when they are discovered, and then when they are backtracked to and are considered finished.
#+BEGIN_SRC c
DFS(graph_type graph, node_type start){
  stack_type stack;
  stack.push(start);
  while(stack.len != 0){
    node_type v = stack.pop();
    if(v.discovered == false){
      v.discovered = true;
      // push the neighbours of v (not of start) onto the stack
      node_list adjacency_list = graph.adj_list(v);
      while(adjacency_list != NULL){
        stack.push(adjacency_list.node);
        adjacency_list = adjacency_list.next;
      }
      v.finished = true;
    }
  }
}
@ -1157,5 +1162,103 @@ For an input graph $G=(V,E)$, the time complexity for Depth first search is $\th
\[ \text{Time complexity of DFS : } \theta (V + E) \]
*** Properties of DFS
DFS is very useful to */understand the structure of a graph/*. To study the structure of a graph using DFS, we will compute two attributes for each node. We suppose that each step in the traversal takes a unit of time.
+ *Discovery time* : The time when we first discovered the node. We will set this when we first visit the node. We will denote it as node.d
+ *Finishing time* : The time when we finished exploring the node. We will set this after all the node's neighbours have been explored. We will denote it as node.f
So our function will become
#+BEGIN_SRC c
// call start node with time = NULL
DFS(graph_type graph, node_type node, size_t *time){
node.discovered = true;
// if time is NULL, initialize it
if(time == NULL){
size_t initial_time = 0;
time = &initial_time;
}
(*time) = (*time) + 1;
node.d = (*time);
node_list adjacency_list = graph.adj_list(node);
while(adjacency_list != NULL){
node_type u = adjacency_list.node;
if(u.discovered == false)
DFS(graph, u, time);
adjacency_list = adjacency_list.next;
}
(*time) = (*time) + 1;
node.f = (*time);
}
#+END_SRC
This algorithm will give all nodes the (node.d) and (node.f) attributes. *Similar to BFS, we can create a tree from DFS.* Knowing these attributes can tell us properties of this DFS tree.
**** *Parenthesis theorem*
The parenthesis theorem is used to find the relationship between two nodes in the *Depth First Search Tree*.
\\
For any two given nodes $x$ and $y$ :
+ If the range $[x.d, x.f]$ is completely within $[y.d, y.f]$, then $x$ is a descendant of $y$.
+ If the ranges $[x.d, x.f]$ and $[y.d, y.f]$ are completely disjoint, then neither is a descendant nor an ancestor of the other.
So if node $y$ is a proper descendant of node $x$ in the depth first tree, then
\[ \text{x is ancestor of y} : x.d < y.d < y.f < x.f \]
**** *White path theorem*
If $y$ is a descendant of $x$ in graph G, then at time $t = x.d$, the path from $x$ to $y$ was undiscovered.
That is, all the nodes in the path from $x$ to $y$ were undiscovered. Undiscovered nodes are shown by white vertices in visual representations of DFS, therefore this theorem was named the white path theorem.
**** *Classification of edges*
We can arrange the connected nodes of a graph into the form of a Depth-first tree. When the graph is arranged in this way, the edges can be classified into four types
1. Tree edge : The edges of the graph which become the edges of the depth-first tree.
2. Back edge : The edges of the graph which point from a descendant node to an ancestor node of the depth-first tree. They are called back edges because they point backwards towards the root of the tree, opposite to all tree edges.
3. Forward edge : The edges of the graph which point from an ancestor node to a descendant node.
4. Cross edge : The edges of the graph which connect two nodes where neither is an ancestor of the other, i.e. they cross between different subtrees of the depth-first tree.
The back edges, forward edges and cross edges are not a part of the depth-first tree but are a part of the original graph.
+ In an *undirected graph* G, every edge is either a *tree edge or a back edge*.
*** Depth-first and Breadth-first Forests
In directed graphs, the depth-first and breadth-first algorithms *can't traverse to nodes which are not reachable along directed edges*. This can leave parts of the graph not mapped by a single tree.
These trees can help us better understand the graph and get properties of nodes, so we can't leave them out when converting a graph to trees.
\\
To solve this, we keep a /*collection of trees for the graph*/. This collection of trees will cover all the nodes of the graph and is called a *forest*. The forest of graph $G$ is represented by $G_{\pi}$.
Thus, when using DFS or BFS on a graph, we store this collection of trees, i.e., the forest, so that we can get the properties of all the nodes.
+ *NOTE* : When making a depth-first forest, we *don't reset the time* when going from one tree to another. So if the finishing time of the root of a tree is $t$, the discovery time of the root node of the next tree will be $(t+1)$.
*** Topological sort using DFS
Topological sorting can only be done on *directed acyclic graphs*. A topological sort is a linear ordering of the nodes of a directed acyclic graph (dag). It orders the nodes such that all the *edges point from left to right*.
Topological sorting is used on *precedence graphs* to tell which nodes have higher precedence.
To topologically sort, we first call DFS to calculate the finishing times of all the nodes in the graph and form a depth-first forest. Then, we can just sort the nodes by finishing time in descending order.
TODO : Add image to show process of topological sorting
+ A directed graph $G$ is *acyclic if and only if* the depth-first forest has *no back edges*.
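The procedure above can be sketched in C. Nodes are appended to an array at the moment they finish, so reversing that array yields the nodes sorted by finishing time in descending order, i.e. a topological order. The adjacency-matrix representation and the fixed size N are simplifying assumptions for this sketch.

#+BEGIN_SRC c
#include <stdbool.h>

#define N 4 /* small fixed-size graph, for illustration only */

/* Append each node to order[] at the moment it finishes. */
void topo_dfs(bool adj[N][N], int v, bool discovered[N], int order[N], int *count)
{
    discovered[v] = true;
    for (int u = 0; u < N; u++)
        if (adj[v][u] && !discovered[u])
            topo_dfs(adj, u, discovered, order, count);
    order[(*count)++] = v; /* v is finished */
}

void topological_sort(bool adj[N][N], int order[N])
{
    bool discovered[N] = { false };
    int count = 0;
    for (int v = 0; v < N; v++) /* cover the whole depth-first forest */
        if (!discovered[v])
            topo_dfs(adj, v, discovered, order, &count);
    /* order[] holds nodes in ascending finishing time; reverse it so
       the latest-finishing node comes first */
    for (int i = 0, j = N - 1; i < j; i++, j--) {
        int tmp = order[i];
        order[i] = order[j];
        order[j] = tmp;
    }
}
#+END_SRC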
** Strongly connected components
If we can traverse from a node $x$ to node $y$ in a directed graph, we show it as $x \rightsquigarrow y$.
+ A pair of nodes $x$ and $y$ is called *strongly connected* if $x \rightsquigarrow y$ and $y \rightsquigarrow x$
+ A graph is said to be strongly connected if all pairs of nodes in the graph are strongly connected.
+ If a graph is not strongly connected, we can divide the graph into subgraphs made from neighbouring nodes which are strongly connected. These subgraphs are called *strongly connected components*.
For example, the dotted regions are the strongly connected components (SCC) of the graph.
[[./imgs/strongly-connected-component.svg]]
*** Finding strongly connected components
We can find the strongly connected components of a graph $G$ using DFS. The algorithm is called Kosaraju's algorithm.
For this algorithm, we also need the transpose of graph $G$. The transpose of graph $G$ is denoted by $G^T$ and is the graph with the direction of all the edges flipped. So all edges from $x$ to $y$ in $G$ will go from $y$ to $x$ in $G^T$.
The algorithm uses the property that the transpose of a graph has the same SCCs as the original graph.
The algorithm works as follows
+ *Step 1* : Perform DFS on the graph to compute the finishing time of all vertices. When a node finishes, push it to a stack.
+ *Step 2* : Find the transpose of the input graph, i.e. the graph with the same vertices but with all the edges flipped.
+ *Step 3* : Pop a node from the stack and apply DFS on it *in the transposed graph* $G^T$. All nodes traversed by this DFS are part of one SCC. After the first SCC is found, keep popping nodes from the stack till we get an undiscovered node. Then apply DFS on the undiscovered node to get the next SCC. Repeat this process till the stack is empty.
For example, consider the graph
+ Step 1 : We start DFS at node $1$ and push nodes to a stack when they are finished
+ Step 2 : Find the transpose of the graph
+ Step 3 : Pop nodes from the stack till we find one which is undiscovered, then apply DFS to it in the transpose. In our example, the first such node is $1$
TODO : Add images for this
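The three steps can be sketched in C as follows, again on a small adjacency-matrix graph for illustration. As a shortcut, no explicit transpose is built: reading adj[u][v] as an edge $v \to u$ in Step 3 walks $G^T$ directly. All names here are illustrative assumptions.

#+BEGIN_SRC c
#include <stdbool.h>

#define N 5 /* small fixed-size graph, for illustration only */

/* Step 1: DFS on G, pushing each node onto stack[] as it finishes. */
void scc_dfs1(bool adj[N][N], int v, bool seen[N], int stack[N], int *top)
{
    seen[v] = true;
    for (int u = 0; u < N; u++)
        if (adj[v][u] && !seen[u])
            scc_dfs1(adj, u, seen, stack, top);
    stack[(*top)++] = v;
}

/* Step 3: DFS on G^T, labelling every node reached with comp.
   An edge u->v in G is an edge v->u in G^T, so we test adj[u][v]. */
void scc_dfs2(bool adj[N][N], int v, int label[N], int comp)
{
    label[v] = comp;
    for (int u = 0; u < N; u++)
        if (adj[u][v] && label[u] == -1)
            scc_dfs2(adj, u, label, comp);
}

/* Returns the number of SCCs; label[v] tells which SCC node v is in. */
int kosaraju(bool adj[N][N], int label[N])
{
    bool seen[N] = { false };
    int stack[N], top = 0, comps = 0;
    for (int v = 0; v < N; v++)
        if (!seen[v])
            scc_dfs1(adj, v, seen, stack, &top);
    for (int v = 0; v < N; v++)
        label[v] = -1;
    while (top > 0) { /* pop in order of decreasing finishing time */
        int v = stack[--top];
        if (label[v] == -1)
            scc_dfs2(adj, v, label, comps++);
    }
    return comps;
}
#+END_SRC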

@ -1,7 +1,7 @@
* Export to HTML
#do
emacs --script src/export.el
* Remove intermediate
#do
rm main.html~
