From f6f51d0ff9349d2374acd9d219c6d525d43835b2 Mon Sep 17 00:00:00 2001 From: lomna Date: Wed, 2 Aug 2023 23:30:16 +0530 Subject: [PATCH] Final commit for now TODO : 1. Add images notes for topics mentioned in main.org 2. Notes on stack, queue and linked list (or maybe these structures are too elementary and don't need notes) --- README.md | 3 - README.org | 4 + main.html | 518 +++++++++++++++++++++++++++++++---------------------- main.org | 58 +++++- 4 files changed, 355 insertions(+), 228 deletions(-) delete mode 100644 README.md create mode 100644 README.org diff --git a/README.md b/README.md deleted file mode 100644 index ab9b28a..0000000 --- a/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# basic_data_structures - -Basic data structures \ No newline at end of file diff --git a/README.org b/README.org new file mode 100644 index 0000000..071fc7e --- /dev/null +++ b/README.org @@ -0,0 +1,4 @@ +* Basic data structures +Notes for basic data structures taught in B.Tech. + +Currently does not include stacks, queues and linked lists. diff --git a/main.html b/main.html index 38fc352..c0b1a45 100644 --- a/main.html +++ b/main.html @@ -3,7 +3,7 @@ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> - + Data Structures @@ -223,114 +223,121 @@

Table of Contents

-
-

1. Stack

+
+

1. Stack

A stack is a data structure which only allows insertion and deletion at one end of the array. Insertion always happens at the extreme (top) end of the array, and deletion can only be done on the element which was most recently added. @@ -353,8 +360,8 @@ To create a stack, we will keep track of the index which is the top of th

-
-

1.1. Operation on stack

+
+

1.1. Operation on stack

A stack has two operations @@ -366,8 +373,8 @@ A stack has two operations
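To make the two operations concrete, here is a minimal array-backed sketch in C. The names stack_push/stack_pop and the fixed capacity are illustrative assumptions, not code from the original notes.

#+BEGIN_SRC c
#define STACK_CAPACITY 100   /* illustrative fixed capacity */

struct stack {
    int items[STACK_CAPACITY];
    int top;   /* index of most recently pushed element, -1 when empty */
};

/* push: insert at the top end of the array */
int stack_push(struct stack *s, int value) {
    if (s->top + 1 == STACK_CAPACITY) return -1;   /* overflow */
    s->items[++s->top] = value;
    return 0;
}

/* pop: remove the element that was most recently added */
int stack_pop(struct stack *s, int *out) {
    if (s->top < 0) return -1;                     /* underflow */
    *out = s->items[s->top--];
    return 0;
}
#+END_SRC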

-
-

2. Direct Address Table

+
+

2. Direct Address Table

Direct Address Tables are useful when we know that the keys lie within a small range. Then, we can allocate an array such that each possible key gets an index, and we simply store the values according to their keys. @@ -444,8 +451,8 @@ This also assumes that keys are integers
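A minimal sketch of a direct address table in C, assuming integer keys in a small fixed range. The struct and function names here are illustrative, not from the notes.

#+BEGIN_SRC c
#define MAX_KEY 100   /* keys assumed to lie in 0 .. MAX_KEY-1 */

struct direct_address_table {
    int value[MAX_KEY];
    int used[MAX_KEY];   /* 1 if a value is stored for this key */
};

void dat_insert(struct direct_address_table *t, int key, int val) {
    t->value[key] = val;   /* the key itself is the array index */
    t->used[key] = 1;
}

void dat_delete(struct direct_address_table *t, int key) {
    t->used[key] = 0;
}

int *dat_search(struct direct_address_table *t, int key) {
    return t->used[key] ? &t->value[key] : 0;   /* NULL if absent */
}
#+END_SRC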

-
-

3. Hash Table

+
+

3. Hash Table

When the set of possible keys is large, it is impractical to allocate a table big enough for all keys. In order to fit all possible keys into a small table, rather than directly using keys as the index for our array, we will first calculate a hash for each key using a hash function. Since we are relying on hashes for this addressing in the table, we call it a hash table. @@ -461,8 +468,8 @@ So the main purpose of the hash function is to reduce the range of array indices

-
-

3.1. Collision

+
+

3.1. Collision

Because we are reducing the range of indices, the hash function may hash two keys to the same slot. This is called a collision. @@ -481,8 +488,8 @@ There are two ways we will look at to resolve collision.

-
-

3.1.1. Chaining

+
+

3.1.1. Chaining

In chaining, rather than storing values directly in the table slots, we keep a linked list at each slot which stores (key, value) pairs. @@ -561,8 +568,8 @@ Insertion can be done in \(\theta (1)\) time if we assume that key being inserte
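A rough sketch of chaining in C, assuming a small fixed table size and head-insertion into each slot's list. All names here are illustrative, not the notes' own code.

#+BEGIN_SRC c
#include <stdlib.h>

#define TABLE_SIZE 11

struct chain_node {
    int key;
    int value;
    struct chain_node *next;
};

/* one list head per slot */
struct chain_node *table[TABLE_SIZE];

unsigned hash(int key) { return (unsigned)key % TABLE_SIZE; }

/* insert at the head of the slot's list: O(1) if the key is not already present */
void chain_insert(int key, int value) {
    struct chain_node *n = malloc(sizeof *n);   /* error check omitted in sketch */
    n->key = key;
    n->value = value;
    n->next = table[hash(key)];
    table[hash(key)] = n;
}

/* search walks the slot's list */
struct chain_node *chain_search(int key) {
    struct chain_node *n = table[hash(key)];
    while (n && n->key != key)
        n = n->next;
    return n;   /* NULL if not found */
}
#+END_SRC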

-
-

3.1.2. Performance of chaining hash table

+
+

3.1.2. Performance of chaining hash table

The load factor is defined as the number of elements per slot and is calculated as @@ -578,8 +585,8 @@ If we also assume that hash funtion takes constant time, then in the average cas

-
-

3.1.3. Open Addressing

+
+

3.1.3. Open Addressing

In open addressing, all the (key, value) entries are stored in the table itself. Because of this, the load factor \(\left( \alpha \right)\) can never exceed 1. @@ -594,7 +601,7 @@ It is necessary to keep probe sequence fixed for any given key, so that we can s

    -
  1. Linear probing
    +
  2. Linear probing

    For a given ordinary hash function \(h(k)\), the linear probing uses the hash function @@ -610,7 +617,7 @@ Linear probing is easy to implement, but it suffers from primary clusterin
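A minimal sketch of linear-probing insertion in C, assuming non-negative integer keys and a sentinel value for empty slots. This is illustrative, not the notes' own code.

#+BEGIN_SRC c
#define M 13          /* number of slots */
#define EMPTY (-1)    /* sentinel: assumes keys are non-negative */

int slots[M];         /* set every slot to EMPTY before first use */

int aux_hash(int k) { return k % M; }

/* probe sequence: h(k, i) = (aux_hash(k) + i) mod M */
int linear_probe_insert(int k) {
    for (int i = 0; i < M; i++) {
        int idx = (aux_hash(k) + i) % M;
        if (slots[idx] == EMPTY) {
            slots[idx] = k;
            return idx;   /* slot where the key landed */
        }
    }
    return -1;            /* table is full */
}
#+END_SRC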

  3. -
  4. Quadratic probing
    +
  5. Quadratic probing

For a given auxiliary hash function \(h(k)\), the quadratic probing uses @@ -631,7 +638,7 @@ If \(quadratic\_h(k_1, 0) = quadratic\_h(k_2,0)\), then that implies that all \(

  6. -
  7. Double Hashing
    +
  8. Double Hashing

Double hashing is one of the best available methods for open addressing.
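The standard textbook form of the double-hashing probe sequence, as a small C sketch. The auxiliary functions h1 and h2 below are illustrative choices, not the notes' own.

#+BEGIN_SRC c
/* a common form of double hashing: h(k, i) = (h1(k) + i * h2(k)) mod m.
   h2 must never evaluate to 0 and should be relatively prime to m
   (e.g. m prime), so that the probe sequence can visit every slot. */
int h1(int k, int m) { return k % m; }
int h2(int k, int m) { return 1 + (k % (m - 1)); }

int double_hash(int k, int i, int m) {
    return (h1(k, m) + i * h2(k, m)) % m;
}
#+END_SRC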
    @@ -666,8 +673,8 @@ The number of probes on averge in a successful search is at most \(\frac{1}{\alp

-
-

3.2. Hash Functions

+
+

3.2. Hash Functions

A good hash function will approximately satisfy simple uniform hashing, which means that any element is equally likely to be hashed to any slot. @@ -689,8 +696,8 @@ We will look at a few ways to make a hash function.

-
-

3.2.1. The division method

+
+

3.2.1. The division method

In the division method, we map a key \(k\) into one of the \(m\) slots by taking the remainder of \(k\) divided by \(m\). @@ -706,8 +713,8 @@ But there are some cases where \(m\) is chosen to be something else.

-
-

3.2.2. The multiplication method

+
+

3.2.2. The multiplication method

In the multiplication method, we first multiply the key \(k\) with a constant \(A\) which is in the range \(0 < A < 1\). We then take the fractional part of \(kA\), multiply it by \(m\), and floor the result to get the hash. @@ -750,8 +757,8 @@ In C language,

-
-

3.2.3. Mid square method

+
+

3.2.3. Mid square method

In this method, we square the keys and then we choose some digits from the middle. @@ -764,8 +771,8 @@ With huge numbers, we need to take care of overflow conditions in this method.
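For instance, assuming we keep the middle three digits (the choice of digits is up to the implementation): for \(k = 1234\), \(k^2 = 1522756\), and the middle three digits give \(h(k) = 227\).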

-
-

3.2.4. Folding method

+
+

3.2.4. Folding method

While this method can be used on integers, it is usually used where the key is segmented, for example in arrays or when the key is a string. @@ -790,28 +797,102 @@ h(k) = 32

-
-

3.3. Universal Hashing

+
+

3.3. Universal Hashing

-TODO : Basics of universal hashing. +Suppose a malicious adversary who knows our hash function chooses the keys that are to be hashed. He can choose keys that all hash to the same slot, degrading the performance of our hash table to \(\theta (n)\). +

+ +

+Fixed hash functions are vulnerable to such attacks. To prevent this from happening, we create a class of functions from which a function will be chosen randomly in a way that is independent of the keys, i.e., any function can be chosen for any key. This is called universal hashing. +

+ +

+The randomization of the chosen hash function will almost guarantee that we won't get the worst-case behaviour. The hash function is not changed every time we do an insert or delete operation. Changing the hash function after each operation would not allow us to look up elements in optimal time. We only change to another hash function when we do rehashing. +

+
+
+

3.3.1. Rehashing

+
+

+When we need to increase the size of the hash table or change the hash function, we have to do rehashing. +

+ +

+Rehashing is the process of taking all the entries in a hash table, reapplying the hash function (possibly changing the hash function), and adding the entries into a new hash table, whose size is usually greater than that of the previous hash table. +

+ +

+Rehashing is usually done when the load factor increases to the point that it affects performance. +
+In universal hashing, we will change the hash function each time we rehash the hash table.
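As a rough illustration for a chaining table (the node struct and the hash function pointer here are assumptions for this sketch, not code from the notes):

#+BEGIN_SRC c
#include <stdlib.h>

struct chain_node { int key; int value; struct chain_node *next; };

/* Move every entry from the old table into a (usually larger) new table,
   re-applying the (possibly new) hash function to each key. */
void rehash(struct chain_node **old_table, int old_size,
            struct chain_node **new_table, int new_size,
            unsigned (*hash)(int key, int size)) {
    for (int i = 0; i < old_size; i++) {
        struct chain_node *n = old_table[i];
        while (n) {
            struct chain_node *next = n->next;
            unsigned idx = hash(n->key, new_size);
            n->next = new_table[idx];   /* relink node into its new slot */
            new_table[idx] = n;
            n = next;
        }
        old_table[i] = NULL;
    }
}
#+END_SRC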

+
+

3.3.2. Universal family

+
+

+For universal hashing, the set of hash functions which is used is called the universal family. +

+ +

+A set of hash functions is called a universal family if, for every distinct pair of keys \((x,y)\), the number of functions in the set for which \(h(x) = h(y)\) is less than or equal to \((|H| \div m)\). +

-
-

3.4. Perfect Hashing

+

+In other words, the probability of collision between any two distinct keys \((x,y)\) is less than or equal to \((1/m)\) if the hash function is chosen at random from the universal family. +

+ +

+Here, \(m\) is the number of slots in the hash table. +
Sometimes, a universal family may be called a universal set of hash functions. +

+
+
+
+

3.3.3. Performance of universal hashing

+
+

+For any hash function \(h\) from the universal family, we know that the probability of collision between two distinct keys is \((1/m)\). +
+Using this, we can show that when using chaining, the expected (or average) length of each list in the hash table will be \((1 + \alpha)\). +
where \(\alpha\) is the load factor of the hash table. +

+
+
+
+

3.3.4. Example for universal set of hash functions

+
+

+Suppose we have the set of keys \(\{ 0,1,2,...,k \}\), we will choose a prime number \(p > k\).
+Then we can define a hash function
+\[ h_{ab}(k) = \left( (ak + b)\ mod\ p \right) \ mod\ m \]
+And, the universal family is
+\[ H = \{ h_{ab} : a \in \{ 1,2,...,(p-1) \} \ and \ b \in \{ 0,1,...,(p-1) \} \} \]
+This class of hash functions will map from the set \(\{ 0,1,2,...,(p-1) \}\) to the set \(\{ 0,1,2,...,(m-1) \}\).
+
+Here, \(m\) is the number of slots in the hash table. +
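A small C sketch of picking and using a random member of this family. The concrete values of p and m are illustrative assumptions; call srand() once before pick_hash().

#+BEGIN_SRC c
#include <stdlib.h>

#define P 101   /* prime, assumed larger than any key */
#define M 10    /* number of slots */

int a, b;       /* picked once, when the hash table is created */

/* pick a random member of the family: a in 1..p-1, b in 0..p-1 */
void pick_hash(void) {
    a = 1 + rand() % (P - 1);
    b = rand() % P;
}

int universal_hash(int k) {
    return ((a * k + b) % P) % M;
}
#+END_SRC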

+
+
+
+
+

3.4. Perfect Hashing

-NOTE: This doesn't seem to be in B.Tech syllabus, but it seems cool. +TODO : Doing this or nah +NOTE : This doesn't seem to be in B.Tech syllabus, but it seems cool.

- -
-

4. Representing rooted trees using nodes

+
+

4. Representing rooted trees using nodes

We can represent trees using nodes. A node only stores a single element of the tree. What exactly a node is will depend on the language being used. @@ -847,8 +928,8 @@ In languages with oop, we create node class which will store refrences to other

-
-

4.1. Fixed number of children

+
+

4.1. Fixed number of children

When we know how many children any given node can have, i.e., the number of children is bounded, we can just use references or pointers to the nodes directly. @@ -867,8 +948,8 @@ For example, if we know we are making a binary tree, then we can just store refr

-
-

4.2. Unbounded number of children

+
+

4.2. Unbounded number of children

When we don't know how many children any given node will have, i.e., any node can have any number of children, we can't just use direct references. We could create an array of references to nodes, but some nodes will only have one or two children and some may have none. This will lead to a lot of wasted memory. @@ -916,8 +997,8 @@ can be represented using refrences and pointers as :
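One common solution is the left-child, right-sibling representation. This sketch is an assumption about how the elided code looks, not a copy of it: every node stores exactly two pointers, no matter how many children it has.

#+BEGIN_SRC c
struct node {
    int value;
    struct node *first_child;    /* leftmost child, NULL if this is a leaf  */
    struct node *next_sibling;   /* next child of this node's parent, NULL if last */
};
#+END_SRC

Walking first_child and then the next_sibling chain visits all children of a node, so an unbounded number of children costs only two pointers per node.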

-
-

5. Binary Search Trees

+
+

5. Binary Search Trees

A tree where any node can have at most two child nodes is called a binary tree. @@ -941,15 +1022,15 @@ In C, we can make a binary tree as

-
-

5.1. Querying a BST

+
+

5.1. Querying a BST

Some common ways in which we usually query a BST are searching for a node, the minimum & maximum nodes, and successor & predecessor nodes. We will also look at how we can get the parent node for a given node; if we already store a parent pointer, that algorithm is unnecessary.

-
-

5.1.1. Searching for node

+
+

5.1.1. Searching for node

We can search for a node very effectively with the help of the binary search tree property. The search will return the node if it is found, else it will return NULL. @@ -991,8 +1072,8 @@ We can also search iteratively rather than recursively.

-
-

5.1.2. Minimum and maximum

+
+

5.1.2. Minimum and maximum

Finding the minimum and maximum is simple in a Binary Search Tree. The minimum element will be the leftmost node and the maximum will be the rightmost node. We can get the minimum and maximum nodes by using these algorithms. @@ -1024,8 +1105,8 @@ Finding the minimum and maximum is simple in a Binary Search Tree. The minimum e

-
-

5.1.3. Find Parent Node

+
+

5.1.3. Find Parent Node

This algorithm will return the parent node. It uses a trailing node to get the parent. If the root node is given, then it will return NULL. This algorithm makes the assumption that the node is in the tree. @@ -1051,8 +1132,8 @@ This algorithm will return the parent node. It uses a trailing node to get the p

-
-

5.1.4. Is ancestor

+
+

5.1.4. Is ancestor

This algorithm will take two nodes, ancestor and descendant. Then it will check if the ancestor node is really an ancestor of the descendant node. @@ -1078,8 +1159,8 @@ This algorithm will take two nodes, ancestor and descendant. Then it will check

-
-

5.1.5. Successor and predecessor

+
+

5.1.5. Successor and predecessor

We often need to find the successor or predecessor of an element in a Binary Search Tree. The search for the predecessor and successor is divided into two cases. @@ -1087,7 +1168,7 @@ We often need to find the successor or predecessor of an element in a Binary Sea

    -
  1. For Successor
    +
  2. For Successor
    // get successor of x
    @@ -1114,7 +1195,7 @@ We often need to find the successor or predecessor of an element in a Binary Sea
     

  3. -
  4. For Predecessor
    +
  5. For Predecessor
    struct binary_tree *
    @@ -1143,15 +1224,15 @@ We often need to find the successor or predecessor of an element in a Binary Sea
     
-
-

5.2. Inserting and Deleting nodes

+
+

5.2. Inserting and Deleting nodes

When inserting and deleting nodes in a BST, we need to make sure that the Binary Search Tree property continues to hold. Inserting a node is easier in a binary search tree than deleting a node.

-
-

5.2.1. Insertion

+
+

5.2.1. Insertion

Insertion is simple in a binary search tree. We search for the node we want to insert in the tree and insert it at the first NULL spot we find. @@ -1211,19 +1292,30 @@ The algorithm for iterative insertion is

-
-

5.2.2. Deletion

+
+

5.2.2. Deletion

Deletion in Binary Search Trees is tricky because we need to delete nodes in a way that the property of the Binary Search Tree holds after the deletion of the node. So we first have to remove the node from the tree before we can free it.

-TODO : Write four cases of node deletion here +There are four different cases which can occur when we try to delete a node. Each case has a different method to handle it. These four cases relate to how many children the node which we want to delete has. +
+Suppose the node is \(X\).

-
    -
  1. Implementation in code
    -
    +
2. Node \(X\) has no children, i.e., it is a leaf node. In this case, we can simply delete the node and replace it with NULL.
  3. +
4. Node \(X\) has one child. In this case, the child of node \(X\) will take its place and we can delete node \(X\).
  5. +
6. Node \(X\) has both left and right children, and the right child of \(X\) is the successor of \(X\). In this case, we set the left child of the successor to the left child of \(X\), then replace \(X\) with its own right child.
  7. +
8. Node \(X\) has both left and right children, and the right child is not the successor of \(X\). In this case, we first replace the successor node with its own right child. Then, we set the left and right children of the successor node to the left and right children of \(X\) respectively. Finally, we replace \(X\) with the successor node.
  9. +
+ +

+TODO : add images here for four cases. +

+
    +
  • Implementation in code
  • +

We also use a helper function called Replace Child for deletion of a node. This function simply takes a parent node, an old child node and a new child node, and replaces the old child with the new child.

@@ -1305,13 +1397,11 @@ Now we can make a delete node function which will remove the node, reattach the
- -
-
-

5.3. Performance of BST

+
+

5.3. Performance of BST

The performance of the search operation depends on the height of the tree. If the tree has \(n\) elements, the height of a binary tree can be anywhere between \(floor\left( 1+ log_2(n) \right)\) (perfectly balanced) and \(n\) (degenerate). @@ -1330,16 +1420,16 @@ A balanced binary search tree in worst case for any operation will take \(\theta

-
-

5.4. Traversing a Binary Tree

+
+

5.4. Traversing a Binary Tree

There are three ways to traverse a binary tree: inorder tree walk, preorder tree walk and postorder tree walk. All three algorithms take \(\theta (n)\) time to traverse the \(n\) nodes.

-
-

5.4.1. Inorder tree walk

+
+

5.4.1. Inorder tree walk

This algorithm is named so because it first traverses the left sub-tree recursively, then visits the node itself, and then traverses the right sub-tree recursively. @@ -1366,8 +1456,8 @@ This algorithm is named so because it first traverses the left sub-tree recursiv

-
-

5.4.2. Preorder tree walk

+
+

5.4.2. Preorder tree walk

This algorithm is called the preorder algorithm because it first visits the current node, then recursively traverses the left sub-tree, and then recursively traverses the right sub-tree. @@ -1392,8 +1482,8 @@ This algorithm is called preorder algorithm because it will first traverse the c

-
-

5.4.3. Postorder tree walk

+
+

5.4.3. Postorder tree walk

In this algorithm, we first traverse the left sub-tree recursively, then the right sub-tree recursively, and finally the node. @@ -1420,8 +1510,8 @@ In this algorithm, we first traverse the left sub-tree recursively, then the rig

-
-

6. Binary Heap

+
+

6. Binary Heap

A heap is a data structure represented as a complete tree which follows the heap property. All levels in a heap tree are completely filled except possibly the last one, which is filled from left to right. @@ -1434,8 +1524,8 @@ The heap data structure is used to implement priority queues. In many cas

-
-

6.1. Heap Property

+
+

6.1. Heap Property

Heaps are of two types @@ -1454,8 +1544,8 @@ The heap property is different for min-heaps and max-heaps.

-
-

6.2. Shape of Heap

+
+

6.2. Shape of Heap

Also referred to as the shape property of a heap. @@ -1464,8 +1554,8 @@ A heap is represented as a complete tree. A complete tree is one where all the l

-
-

6.3. Array implementation

+
+

6.3. Array implementation

We can implement a binary heap using arrays. The root of the tree is the first element of the array. The next two elements are the elements of the second level of the tree and the children of the root node. Similarly, the next four elements are the elements of the third level of the tree, and so on. @@ -1492,15 +1582,15 @@ In C, we can create a heap struct for easier implementation of algorithms

-
-

6.4. Operations on heaps

+
+

6.4. Operations on heaps

Both insertion and deletion in a heap must be done in a way which conforms to the heap property as well as the shape property of the heap. Before we can look at insertion and deletion, we need a way to find the parent and children for a given index. We will also first see the up-heapify and down-heapify functions.

-
-

6.4.1. Parent and child indices

+
+

6.4.1. Parent and child indices

In a binary heap, we can find parent and children for any given index using simple formulas. @@ -1520,8 +1610,8 @@ In a binary heap, we can find parent and children for any given index using simp
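For a zero-indexed array, the formulas are usually implemented as below (a sketch; everything shifts by one in a one-indexed language):

#+BEGIN_SRC c
/* index arithmetic for a binary heap stored in a zero-indexed array */
int parent(int i)      { return (i - 1) / 2; }
int left_child(int i)  { return 2 * i + 1; }
int right_child(int i) { return 2 * i + 2; }
#+END_SRC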

-
-

6.4.2. Down-heapify

+
+

6.4.2. Down-heapify

Down-heapify is a function which can re-heapify an array if no element of the heap violates the heap property other than the given index and its two children. @@ -1559,8 +1649,8 @@ Since we shift element downwards, this operation is often called down-heap
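A minimal iterative sketch of down-heapify for a max-heap, using the zero-indexed formulas above. This is illustrative; the notes' own version may differ in details.

#+BEGIN_SRC c
/* restore the max-heap property at index i, assuming both subtrees
   below i already satisfy it; runs in O(log n) */
void down_heapify(int heap[], int n, int i) {
    for (;;) {
        int largest = i;
        int l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && heap[l] > heap[largest]) largest = l;
        if (r < n && heap[r] > heap[largest]) largest = r;
        if (largest == i) break;        /* heap property holds here */
        int tmp = heap[i];              /* swap with the larger child */
        heap[i] = heap[largest];
        heap[largest] = tmp;
        i = largest;                    /* continue shifting downwards */
    }
}
#+END_SRC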

-
-

6.4.3. Up-heapify

+
+

6.4.3. Up-heapify

Up-heapify is a function which can re-heapify an array if no element of the heap violates the heap property other than the given index and its parent. @@ -1590,15 +1680,11 @@ This function runs in \(\theta (log_2n)\) time. The algorithm for this works as

Since we shift the element upwards, this operation is often called the up-heap operation. It is also known as trickle-up, swim-up, heapify-up, or cascade-up -
-
-TODO : Maybe up-heapfiy funtion should be made cleaner rather than trying to mirror down-heapify funtion.

- -
-

6.4.4. Insertion

+
+

6.4.4. Insertion

Insertion takes \(\theta (log_2n)\) time in a binary heap. To insert an element in the heap, we add it to the end of the heap and then apply the up-heapify operation on the element @@ -1623,8 +1709,8 @@ The code shows example of insertion in a max-heap.

-
-

6.4.5. Deletion or Extraction

+
+

6.4.5. Deletion or Extraction

Like insertion, extraction also takes \(\theta (log_2n)\) time. Extraction from a heap removes the root element of the heap. We can use the down-heapify function in order to re-heapify after extracting the root node. @@ -1653,8 +1739,8 @@ The code shows example of extraction in max-heap.

-
-

6.4.6. Insert then extract

+
+

6.4.6. Insert then extract

Inserting an element and then extracting from the heap can be done more efficiently than simply calling these functions separately as defined previously. If we call both functions we defined above, we have to do an up-heap operation followed by a down-heap. Instead, there is a way to do just a single down-heap, as sketched below. @@ -1691,16 +1777,16 @@ In python, this is implemented by the name of heap replace.
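A sketch of the idea for a max-heap, reusing the down_heapify sketched earlier; the names here are illustrative.

#+BEGIN_SRC c
void down_heapify(int heap[], int n, int i);   /* from the earlier sketch */

/* insert item, then extract the maximum: at most one down-heap operation */
int heap_insert_then_extract(int heap[], int n, int item) {
    if (n == 0 || item >= heap[0])
        return item;          /* the new item would be extracted immediately */
    int root = heap[0];
    heap[0] = item;           /* put the new item at the root ...            */
    down_heapify(heap, n, 0); /* ... and sift it down once                   */
    return root;
}
#+END_SRC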

-
-

6.4.7. Searching

+
+

6.4.7. Searching

Searching for an arbitrary element takes linear time in a heap; we use linear search on the underlying array.

-
-

6.4.8. Deleting an arbitrary element

+
+

6.4.8. Deleting an arbitrary element

For a max-heap, deleting an arbitrary element is done as follows @@ -1712,8 +1798,8 @@ For a max-heap, deleting an arbitrary element is done as follows

-
-

6.4.9. Decrease and increase keys

+
+

6.4.9. Decrease and increase keys

TODO : I don't know if it is necessary to do this operation. It looks simple to implement. @@ -1721,8 +1807,8 @@

-
-

6.5. Building a heap from array

+
+

6.5. Building a heap from array

We can convert a normal array into a heap using the down-heapify operation in linear time \(\left( \theta (n) \right)\) @@ -1746,15 +1832,15 @@ If we are using a one indexed language, then range of for loop is
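A sketch of the build loop for a zero-indexed max-heap, again reusing the down_heapify sketched earlier:

#+BEGIN_SRC c
void down_heapify(int heap[], int n, int i);   /* from the earlier sketch */

/* build a max-heap in place in O(n): heapify every non-leaf index,
   from the last non-leaf up to the root */
void build_heap(int heap[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)
        down_heapify(heap, n, i);
}
#+END_SRC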

-
-

7. Graphs

+
+

7. Graphs

A graph is a data structure which consists of nodes/vertices, and edges. We sometimes write it as \(G=(V,E)\), where \(V\) is the set of vertices and \(E\) is the set of edges. When we are working on runtime of algorithms related to graphs, we represent runtime in two input sizes. \(|V|\) which we simply write as \(V\) is the number of vertices and similarly \(E\) is the number of edges.

-
-

7.1. Representing graphs

+
+

7.1. Representing graphs

We need a way to represent graphs in computers and to search a graph. Searching a graph means to systematically follow edges of graphs in order to reach vertices. @@ -1767,8 +1853,8 @@ The two common ways of representing graphs are either using adjacency lists and TODO : add images to show how it is represented

-
-

7.1.1. Adjacency List

+
+

7.1.1. Adjacency List

Every node in the graph is represented by a linked list. The list contains the nodes to which that node is connected by an edge. @@ -1792,8 +1878,8 @@ The adjacency list representation is very robust and can represent various types
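A minimal C sketch of this representation; the names are illustrative, not the notes' own code.

#+BEGIN_SRC c
#include <stdlib.h>

struct edge_node {
    int vertex;               /* index of the neighbouring vertex */
    struct edge_node *next;
};

struct graph {
    int n;                    /* number of vertices */
    struct edge_node **adj;   /* adj[u] = list of u's neighbours */
};

/* add edge u -> v; call with (u,v) and (v,u) for an undirected graph */
void add_edge(struct graph *g, int u, int v) {
    struct edge_node *e = malloc(sizeof *e);   /* error check omitted */
    e->vertex = v;
    e->next = g->adj[u];
    g->adj[u] = e;
}
#+END_SRC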

-
-

7.1.2. Adjacency Matrix

+
+

7.1.2. Adjacency Matrix

We use a single matrix to represent the graph. The size of the matrix is \(\left( |V| \times |V| \right)\). When we make the matrix, all its elements are zero, i.e., the matrix is zero initialized. @@ -1827,8 +1913,8 @@ We can store weighted graphs in adjacency matrix by storing the weights along wi

-
-

7.2. Vertex and edge attributes

+
+

7.2. Vertex and edge attributes

Many times we have to store attributes with either vertices or edges, or sometimes both. How this is done differs by language. In notation, we will write it using a dot (.) @@ -1840,8 +1926,8 @@ Similarly, the attribute x of edge (u , v) will be denoted as (u , v).x

-
-

7.3. Density of graph

+
+

7.3. Density of graph

Knowing the density of a graph can help us choose the way in which we represent our graph. @@ -1864,8 +1950,8 @@ Therefore, maximum density for a graph is 1. The minimum density for a graph is Knowing this, we can say graph with low density is a sparse graph and graph with high density is a dense graph.

-
-

7.3.1. Which representation to use

+
+

7.3.1. Which representation to use

For a quick approximation: in an undirected graph, when \(2|E|\) is close to \(|V|^2\), we say that the graph is dense; otherwise we say it is sparse. @@ -1881,8 +1967,8 @@ Another criteria is how algorithm will use the graph. If we want to traverse to

-
-

7.4. Searching Graphs

+
+

7.4. Searching Graphs

Graph search (or graph traversal) algorithms are used to explore a graph to find nodes and edges. Vertices not connected by edges are not explored by such algorithms. These algorithms start at a source vertex and traverse as much of the connected graph as possible. @@ -1891,8 +1977,8 @@ Graph search (or graph traversal) algorithms are used to explore a graph to find Searching graphs algorithm can also be used on trees, because trees are also graphs.

-
-

7.4.1. Breadth first search

+
+

7.4.1. Breadth first search

BFS is one of the simplest algorithms for searching a graph and is used as an archetype for many other graph algorithms. This algorithm works well with the adjacency list representation. @@ -1935,8 +2021,8 @@ For an input graph \(G=(V,E)\), every node is enqued only once and hence, dequeu
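A compact sketch of BFS in C, using an adjacency matrix and an array-based queue for brevity. The notes use adjacency lists; the queue-based pattern is the same.

#+BEGIN_SRC c
#include <stdio.h>

#define V 6   /* number of vertices (illustrative) */

void bfs(int adj[V][V], int source) {
    int visited[V] = {0};
    int queue[V], head = 0, tail = 0;

    visited[source] = 1;
    queue[tail++] = source;

    while (head < tail) {
        int u = queue[head++];           /* dequeue the next vertex */
        printf("%d ", u);
        for (int v = 0; v < V; v++) {    /* enqueue unvisited neighbours */
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[tail++] = v;
            }
        }
    }
}
#+END_SRC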

-
-

7.4.2. Breadth-first trees for shortest path

+
+

7.4.2. Breadth-first trees for shortest path

For a simple unweighted graph, we may want to get the shortest path (the path with the fewest edges) between two nodes. This can be done by making a Breadth-first tree. @@ -1993,8 +2079,8 @@ This will print shortest path from end node to start node.

-
-

7.4.3. Depth first search

+
+

7.4.3. Depth first search

Unlike BFS, depth first search is more biased towards the farthest nodes of a graph. It follows a single path until it reaches the end of that path. After that, it backtracks to the last open path and follows that one. This process is repeated till all nodes are covered. @@ -2061,8 +2147,8 @@ For an input graph \(G=(V,E)\), the time complexity for Depth first search is \(

-
-

7.4.4. Properties of DFS

+
+

7.4.4. Properties of DFS

DFS is very useful for understanding the structure of a graph. To study the structure of a graph using DFS, we record two attributes for each node during the traversal. We suppose that each step in the traversal takes a unit of time. @@ -2107,7 +2193,7 @@ This algorithm will give all nodes the (node.d) and (node.f) attribute. Simil

    -
  1. Parenthesis theorem
    +
  2. Parenthesis theorem

The parenthesis theorem is used to find the relationship between two nodes in the Depth First Search Tree. @@ -2124,7 +2210,7 @@ So if node, \(y\) is a proper descendant of node \(x\) in the depth first tree,

  3. -
  4. White path theorem
    +
  5. White path theorem

If \(y\) is a descendant of \(x\) in graph G, then at time \(t = x.d\), the path from \(x\) to \(y\) was undiscovered. @@ -2135,7 +2221,7 @@ That is, all the nodes in path from \(x\) to \(y\) were undiscovered. Undiscover

  6. -
  7. Classification of edges
    +
  8. Classification of edges

    We can arrange the connected nodes of a graph into the form of a Depth-first tree. When the graph is arranged in this way, the edges can be classified into four types @@ -2156,8 +2242,8 @@ The back edge, forward edge and cross edge are not a part of the depth-first tre

-
-

7.4.5. Depth-first and Breadth-first Forests

+
+

7.4.5. Depth-first and Breadth-first Forests

In directed graphs, the depth-first and breadth-first algorithms can't traverse to nodes which are not reachable via directed edges. This can leave parts of the graph not mapped by a single tree. @@ -2178,8 +2264,8 @@ Thus when using DFS or BFS on a graph, we store this collection of trees i.e, fo

-
-

7.4.6. Topological sort using DFS

+
+

7.4.6. Topological sort using DFS

Topological sorting can only be done on directed acyclic graphs. A topological sort is a linear ordering of the nodes of a directed acyclic graph (dag). It orders the nodes such that all the edges point to the right. @@ -2203,8 +2289,8 @@ TODO : Add image to show process of topological sorting

-
-

7.5. Strongly connected components

+
+

7.5. Strongly connected components

If we can traverse from a node \(x\) to node \(y\) in a directed graph, we show it as \(x \rightsquigarrow y\). @@ -2221,14 +2307,14 @@ Example, the dotted regions are the strongly connected components (SCC) of the g

-
+

strongly-connected-component.svg

-
-

7.5.1. Finding strongly connected components

+
+

7.5.1. Finding strongly connected components

We can find the strongly connected components of a graph \(G\) using DFS. The algorithm is called Kosaraju's algorithm. @@ -2269,7 +2355,7 @@ TODO : Add images for this

Author: Anmol Nawani

-

Created: 2023-08-01 Tue 21:22

+

Created: 2023-08-02 Wed 23:26

Validate

diff --git a/main.org b/main.org
index 1c5ec66..95d3dc1 100644
--- a/main.org
+++ b/main.org
@@ -307,12 +307,48 @@ h(k) = 532 mod 100
 h(k) = 32
 ** Universal Hashing
-TODO : Basics of universal hashing.
+Suppose a malicious adversary who knows our hash function chooses the keys that are to be hashed. He can choose keys that all hash to the same slot, degrading the performance of our hash table to $\theta (n)$.
-** Perfect Hashing
-*NOTE*: This doesn't seem to be in B.Tech syllabus, but it seems cool.
+Fixed hash functions are vulnerable to such attacks. To prevent this from happening, we create a class of functions from which a function will be chosen randomly in a way that is independent of the keys, i.e., any function can be chosen for any key. This is called *universal hashing*.
+
+The randomization of the chosen hash function will almost guarantee that we won't get the worst-case behaviour. The hash function is /*not changed every time we do an insert or delete operation.*/ Changing the hash function after each operation would not allow us to look up elements in optimal time. We only change to another hash function when we do rehashing.
+*** Rehashing
+When we need to increase the size of the hash table or change the hash function, we have to do rehashing.
+
+Rehashing is the process of taking all the entries in a hash table, reapplying the hash function (possibly changing the hash function), and adding the entries into a new hash table, whose size is usually greater than that of the previous hash table.
+
+Rehashing is usually done when the load factor increases to the point that it affects performance. \\
+In universal hashing, we will change the hash function each time we rehash the hash table.
+*** Universal family
+For universal hashing, the set of hash functions which is used is called the *universal family*.
+
+A set of hash functions is called a universal family if, for every distinct pair of keys $(x,y)$, *the number of functions in the set for which $h(x) = h(y)$ is less than or equal to $(|H| \div m)$*.
+In other words, *the probability of collision between any two distinct keys $(x,y)$ is less than or equal to $(1/m)$* if the hash function is chosen at random from the universal family.
+
+Here, $m$ is the number of slots in the hash table.
+\\
+Sometimes, a universal family may be called a universal set of hash functions.
+*** Performance of universal hashing
+For any hash function $h$ from the universal family, we know that the probability of collision between two distinct keys is $(1/m)$.
+\\
+Using this, we can show that when using chaining, the expected (or average) length of each list in the hash table will be $(1 + \alpha)$,
+\\
+where $\alpha$ is the load factor of the hash table.
+*** Example for universal set of hash functions
+Suppose we have the set of keys $\{ 0,1,2,...,k \}$, we will choose a prime number $p > k$.
+Then we can define a hash function
+\[ h_{ab}(k) = \left( (ak + b)\ mod\ p \right) \ mod\ m \]
+And, the universal family is
+\[ H = \{ h_{ab} : a \in \{ 1,2,...,(p-1) \} \ and \ b \in \{ 0,1,...,(p-1) \} \} \]
+This class of hash functions will map from the set $\{ 0,1,2,...,(p-1) \}$ to the set $\{ 0,1,2,...,(m-1) \}$.
+\\
+Here, $m$ is the number of slots in the hash table.
+** Perfect Hashing
+TODO : Doing this or nah
+NOTE : This doesn't seem to be in B.Tech syllabus, but it seems cool.
+\\
 * Representing rooted trees using nodes
 We can represent trees using nodes. A node only stores a single element of the tree. What exactly a node is will depend on the language being used. 
 \\
@@ -604,8 +640,16 @@ The algorithm for iterative insertion is
 Deletion in Binary Search Trees is tricky because we need to delete nodes in a way that the property of the Binary Search Tree holds after the deletion of the node. So we first have to remove the node from the tree before we can free it.
 \\
 \\
-TODO : Write four cases of node deletion here
-**** *Implementation in code*
+There are *four different cases* which can occur when we try to delete a node. Each case has a different method to handle it. These four cases relate to how many children the node which we want to delete has.
+\\
+Suppose the node is $X$.
+1. Node $X$ has no children, i.e., it is a leaf node. In this case, we can simply delete the node and replace it with NULL.
+2. Node $X$ has one child. In this case, the child of node $X$ will take its place and we can delete node $X$.
+3. Node $X$ has both left and right children, and the right child of $X$ is the successor of $X$. In this case, we set the left child of the successor to the left child of $X$, then replace $X$ with its own right child.
+4. Node $X$ has both left and right children, and the right child is not the successor of $X$. In this case, we first replace the successor node with its own right child. Then, we set the left and right children of the successor node to the left and right children of $X$ respectively. Finally, we replace $X$ with the successor node.
+
+TODO : add images here for four cases.
++ *Implementation in code*
 We also use a helper function called Replace Child for deletion of a node. This function simply takes a parent node, an old child node and a new child node, and replaces the old child with the new child.
 #+BEGIN_SRC c
@@ -852,10 +896,6 @@ This function runs in $\theta (log_2n)$ time. The algorithm for this works as fo
 #+END_SRC
 Since we shift the element upwards, this operation is often called the /up-heap/ operation. It is also known as /trickle-up, swim-up, heapify-up, or cascade-up/
-\\
-\\
-TODO : Maybe up-heapfiy funtion should be made cleaner rather than trying to mirror down-heapify funtion.
-
 *** Insertion
 Insertion takes $\theta (log_2n)$ time in a binary heap. To insert an element in the heap, we will add it to the end of the heap and then apply the up-heapify operation on the element
 \\