From ae4c14a01ca45ecd90a75151966a8b2ab89faec3 Mon Sep 17 00:00:00 2001 From: "2021.sneha.utekar@ves.ac.in" <2021.sneha.utekar@ves.ac.in> Date: Sun, 6 Oct 2024 15:03:41 +0530 Subject: [PATCH 1/2] Add FAQ tab at the bottom of article template #25 --- public/ArticleList/ArticleDataStyle.css | 80 ++++++++- public/ArticleList/binary-lifting.html | 184 +++++++++++++------- public/ArticleList/dijkstra-algorithm.html | 178 +++++++++++++------ public/ArticleList/huffman-coding.html | 74 +++++++- public/ArticleList/time-complexity-bfs.html | 75 ++++++-- 5 files changed, 453 insertions(+), 138 deletions(-) diff --git a/public/ArticleList/ArticleDataStyle.css b/public/ArticleList/ArticleDataStyle.css index 0ffb0be..f004118 100644 --- a/public/ArticleList/ArticleDataStyle.css +++ b/public/ArticleList/ArticleDataStyle.css @@ -112,4 +112,82 @@ header{ .author{ font-style: italic; color: #afafafe6; -} \ No newline at end of file +} + +/* FAQ Section Styling */ +.faq { + margin-top: 30px; +} + +.faq-title { + font-size: 1.5rem; + margin-bottom: 10px; + text-align: center; + color: #ffffff; +} + +.faq-content { + display: none; /* Hidden by default */ + margin-top: 20px; +} + +.faq-question { + font-weight: bold; + margin: 15px 0 5px 0; + font-size: 18px; + color: #FFFFFF; +} + +.faq-answer { + margin-bottom: 15px; + color: #cfcfcf; + font-size: 16px; +} + +/* Button Styling */ +#faq-toggle { + display: block; + margin: 20px auto; + padding: 10px 20px; + font-size: 16px; + background: linear-gradient(to right, #87CEEB, #00BFFF); /* Sky-blue gradient */ + color: white; + border: none; + cursor: pointer; + border-radius: 5px; + transition: background 0.3s ease; +} + +#faq-toggle:hover { + background: linear-gradient(to right, #00BFFF, #1E90FF); /* Darker sky-blue on hover */ +} + +@media (max-width: 1200px) { + .faq-question { + font-size: 16px; + } + + .faq-answer { + font-size: 14px; + } + + #faq-toggle { + font-size: 14px; + padding: 8px 16px; + } +} + +@media (max-width: 600px) { 
+ .faq-question { + font-size: 14px; + } + + .faq-answer { + font-size: 13px; + } + + #faq-toggle { + font-size: 12px; + padding: 6px 12px; + } +} diff --git a/public/ArticleList/binary-lifting.html b/public/ArticleList/binary-lifting.html index 188422b..03ffc6a 100644 --- a/public/ArticleList/binary-lifting.html +++ b/public/ArticleList/binary-lifting.html @@ -1,65 +1,125 @@ - - - - - CP - Article - - - - - - -
-
- -
- UniAlgo -
- -
-
-
-
-
- How does Binary Lifting utilizes power of 2 jumps? -
-
-
- -
-
- -

- Suppose you are given a tree and then Q queries. In each query we are given two nodes of the tree and are asked to find the lowest common ancestor of both of them. Now finding LCA is easy and can be done in O(N) (which is simple to understand), but for q queries the time complexity become O(Q*N). So we need to preprocess the tree and then calculate the LCA of two nodes in O(log(N)) . Whenever we need to have log(N) complexity which can be achieved if we somehow used powers of 2 (seems obvious). -

-

- Firstly, let us see what the algorithm do. So instead of going over every node for each query , we create a matrix of n * (log2(Depth of tree)) approximately. And for each node's row we store the 2^0 th parent (i.e 1st) , 2^1 st parent(i.e 2nd) , then 2^2 ,.., and more until the root node is crossed. Another thing we need to precompute is the depth of each of the node which can be done in O(N) and need to be done once only. And Creation matrix will take O(N*log(depth)). Generally the log(depth) would be less than 20 even. -

-

- NOTE: Don't worry if you feel like why are we doing this. -

-

- Now we can use this precomputed information for each query as - let us take two node a and b. If a has a depth d1 and b has a depth d2 (assuming d1>d2), then it is intuitive that we must at least cover the difference in depth of a and b, because LCA will anyhow lie above b. So we need the (d1-d2) the parent of a which is very simple to find. If we represent (d1-d2) in binary representation say 0101 then it means the 5th parent we need can be achieved by 1st parent and then its fourth parent. Hence we can see that we are just taking the parent (some 2^j th parent) which we already precomputed. So this way we just took log2(d1-d2) to cover the difference in depth. -

-

- Now there may be case that the d1-d2 th parent of a is b itself, so we may check the case if a==b, else it means that the two nodes are in separate branches currently. -

-

- One feature of LCA we use here in tree is that above the LCA all other nodes that come above it are always their common parents. So again we will use each bit of binary representation of depth of two nodes (which is essentially the same now) and if the jumping by that power of 2 gives us a common parent , then it means that either this is the LCA or it lies below it, so we need to look below, so we reduces the power by 1 and then jump, if the parent are different then definitely the LCA lies above , so we upgrade our parent of a = mat[a][j] (where mat is the matrix we created and mat[a][j] representing the 2^j th parent of a) and similarly b = mat[b][j]. -

-

- In this way we keep coming closer to the lowest common ancestor. The reason we will always reach the ancestor is that imagine like the difference in depth of LCA and the node is d. And we know that any number can be represented in powers of 2, so basically you take the maximum possible jump each time (less than the difference in LCA and current depth of node) and cover up the difference in log time complexity. -

-
-
- Written by: UniAlgo Owner -
-
- + + + + + CP - Article + + + + + + + +
+
+ +
+ UniAlgo +
+ +
+
+
+
+
+ How does Binary Lifting utilize the power of 2 jumps? +
+
+
+ +
+
+ +

+ Suppose you are given a tree and then Q queries. In each query we are given two nodes of the tree and are asked to find their lowest common ancestor (LCA). Finding the LCA naively is easy and can be done in O(N), but for Q queries the total time complexity becomes O(Q*N). So we preprocess the tree and then calculate the LCA of two nodes in O(log(N)) per query. Such log(N) complexity can be achieved if we somehow use powers of 2.

+

+ Firstly, let us see what the algorithm does. Instead of going over every node for each query, we create a matrix of roughly n * log2(depth of tree) entries. In each node's row we store its 2^0 th parent (i.e. 1st), 2^1 st parent (i.e. 2nd), then 2^2 nd, and so on until the root node is crossed. We also need to precompute the depth of each node, which can be done once in O(N). Building the matrix takes O(N*log(depth)); in practice log(depth) is rarely more than 20.

+

+ NOTE: Don't worry if it is not yet clear why we are doing this; the next paragraphs show how this table is used.

+

+ Now we can use this precomputed information for each query. Take two nodes a and b with depths d1 and d2 (assuming d1>d2). It is intuitive that we must at least cover the difference in depth between a and b, because the LCA cannot lie below b. So we need the (d1-d2) th parent of a, which is very simple to find: if we write (d1-d2) in binary, say 0101 (= 5), then the 5th parent is reached by taking the 1st parent and then its 4th parent. Each of these is some 2^j th parent that we already precomputed, so covering the depth difference takes only log2(d1-d2) jumps.

+

+ Now there may be a case that the (d1-d2) th parent of a is b itself, so we check whether a==b; otherwise the two nodes are currently in separate branches.

+

+ One property of the LCA we use here is that every node above the LCA is a common ancestor of both nodes. So again we go over the bits from the highest power of 2 downward (both nodes now sit at the same depth). If jumping by that power of 2 would land both nodes on a common ancestor, that ancestor is the LCA or lies above it, so we skip the jump and try a smaller power. If the two ancestors are different, the LCA definitely lies above them, so we lift both nodes: a = mat[a][j] and similarly b = mat[b][j] (where mat is the matrix we created and mat[a][j] is the 2^j th parent of a).

+

+ In this way, we keep coming closer to the lowest common ancestor. To see why we always converge, suppose the difference in depth between a node and the LCA is d. Any number can be written as a sum of powers of 2, so taking the largest jump that does not overshoot the LCA at each step covers the difference in logarithmically many jumps. At the end both nodes sit just below the LCA, and the answer is their direct (2^0 th) parent.
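The whole procedure can be sketched compactly. The JavaScript below is an illustrative sketch, not code from this repository: the names `up` (the matrix called mat above), `depth`, and a root at node 0 are assumptions.

```javascript
// Binary lifting LCA sketch. `adj` is an adjacency list; node 0 is the root.
// up[v][j] is the 2^j-th ancestor of v (clamped to the root); depth[v] is v's depth.
function buildLifting(adj) {
  const n = adj.length, LOG = Math.max(1, Math.ceil(Math.log2(n)));
  const up = Array.from({ length: n }, () => new Array(LOG).fill(0));
  const depth = new Array(n).fill(0);
  // Iterative DFS from the root fills depth[] and the 2^0 parents in O(N).
  const stack = [0], seen = new Array(n).fill(false);
  seen[0] = true;
  while (stack.length) {
    const v = stack.pop();
    for (const w of adj[v]) {
      if (!seen[w]) {
        seen[w] = true;
        depth[w] = depth[v] + 1;
        up[w][0] = v;
        stack.push(w);
      }
    }
  }
  // 2^j-th ancestor = 2^(j-1)-th ancestor of the 2^(j-1)-th ancestor.
  for (let j = 1; j < LOG; j++)
    for (let v = 0; v < n; v++)
      up[v][j] = up[up[v][j - 1]][j - 1];
  return { up, depth, LOG };
}

function lca(a, b, { up, depth, LOG }) {
  if (depth[a] < depth[b]) [a, b] = [b, a];
  let diff = depth[a] - depth[b];
  // Lift the deeper node by each set bit of the depth difference.
  for (let j = 0; j < LOG; j++) if (diff & (1 << j)) a = up[a][j];
  if (a === b) return a;
  // Jump both nodes up while their 2^j-th ancestors differ.
  for (let j = LOG - 1; j >= 0; j--)
    if (up[a][j] !== up[b][j]) { a = up[a][j]; b = up[b][j]; }
  return up[a][0]; // both now sit just below the LCA
}
```

For the tree 0-1, 0-2, 1-3, 1-4, `lca(3, 4, ...)` returns 1 and `lca(3, 2, ...)` returns 0.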

+
+ + + +
+
+ Frequently Asked Questions (FAQs) +
+
+ + + + +
+
+ Q1: What is Binary Lifting? +
+
+ Binary Lifting is a technique used to find the Lowest Common Ancestor (LCA) of two nodes in a tree in logarithmic time. It preprocesses the tree to create a table of ancestors at different powers of two. +
+
+ Q2: How does Binary Lifting improve query time for LCA? +
+
+ Instead of checking all nodes, Binary Lifting allows for jumping directly to the 2^j th parent of a node, which reduces the query time to O(log N) after an initial O(N log N) preprocessing step. +
+
+ Q3: What is the time complexity for preprocessing the tree? +
+
+ The preprocessing of the tree takes O(N log N) time, where N is the number of nodes in the tree. +
+
+ Q4: What is the main limitation of Binary Lifting? +
+
+ Binary Lifting does not work efficiently for dynamic trees where nodes can be added or removed frequently, as it requires a complete reprocessing of the ancestor table. +
+
+ Q5: Can Binary Lifting be used in graphs other than trees? +
+
+ Binary Lifting is primarily designed for trees. However, it can be adapted for directed acyclic graphs (DAGs) under certain conditions but is not generally applicable to arbitrary graphs. +
+
+
+
+ Written by: UniAlgo Owner +
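The stylesheet earlier in this patch hides `.faq-content` by default and styles a `#faq-toggle` button, which implies a small toggle script. The script actually shipped with the patch is not visible in this hunk, so the handler below is an assumption based only on those CSS selectors.

```javascript
// Sketch of the FAQ toggle implied by the CSS: .faq-content starts hidden
// (display: none) and #faq-toggle reveals it. The show/hide decision is a
// pure helper so it can be exercised outside a browser.
function nextDisplay(current) {
  return current === 'none' ? 'block' : 'none';
}

// Browser wiring (guarded so the file also loads under Node).
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', () => {
    const button = document.getElementById('faq-toggle');
    const content = document.querySelector('.faq-content');
    if (!button || !content) return;
    button.addEventListener('click', () => {
      const next = nextDisplay(content.style.display || 'none');
      content.style.display = next;
      button.textContent = next === 'none' ? 'View FAQs' : 'Hide FAQs';
    });
  });
}
```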
+
+ + + + diff --git a/public/ArticleList/dijkstra-algorithm.html b/public/ArticleList/dijkstra-algorithm.html index 501848c..249a6b4 100644 --- a/public/ArticleList/dijkstra-algorithm.html +++ b/public/ArticleList/dijkstra-algorithm.html @@ -1,59 +1,125 @@ - - - - - CP - Article - - - - - - -
-
- -
- UniAlgo -
- -
-
-
-
-
- How Dijkstra's Algorithm Works -
-
-
- -
-
- -

- The algorithm computes the shortest path from a starting node to all other nodes in the graph. It selects the node with the smallest known distance, updates the distances of its neighbors, and repeats this process until all nodes have been visited. -

-

- Steps of Dijkstra's Algorithm: 1. Set the distance to the source node as 0 and to all other nodes as infinity. 2. Mark the source node as visited. For all its neighbors, calculate the tentative distance using the current node’s distance. If this new distance is smaller than the previously known distance, update it. 3. Move to the unvisited node with the smallest tentative distance and repeat the process of updating distances for its neighbors. 4. Continue this process until all nodes are visited, ensuring that the shortest path to each node is found. -

-

- Example: Consider a graph where nodes represent cities and edges represent the distance between them. Starting from city A, Dijkstra's Algorithm will calculate the shortest distance to all other cities, considering the sum of edge weights in each step. -

-

- Time Complexity: The time complexity of Dijkstra's Algorithm is O((V + E) log V), where V is the number of vertices and E is the number of edges in the graph. -

-

- Limitations: Dijkstra's Algorithm does not work with graphs that have negative edge weights because the greedy approach might not always lead to the optimal solution. -

-
-
- Written by: Yeshwanth DS -
-
- + + + + + CP - Article + + + + + + +
+
+ +
+ UniAlgo +
+ +
+
+
+
+
+ How Dijkstra's Algorithm Works +
+
+
+ +
+
+ +

+ The algorithm computes the shortest path from a starting node to all other nodes in the graph. It selects the node with the smallest known distance, updates the distances of its neighbors, and repeats this process until all nodes have been visited. +

+

+ Steps of Dijkstra's Algorithm: + 1. Set the distance to the source node as 0 and to all other nodes as infinity. + 2. Mark the source node as visited. For all its neighbors, calculate the tentative distance using the current node's distance. If this new distance is smaller than the previously known distance, update it. + 3. Move to the unvisited node with the smallest tentative distance and repeat the process of updating distances for its neighbors. + 4. Continue this process until all nodes are visited, ensuring that the shortest path to each node is found. +
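The four steps map directly onto code. The sketch below uses a plain array scan instead of a priority queue, so it runs in O(V^2) rather than the heap-based bound quoted in this article; the adjacency-list encoding is an assumption for illustration.

```javascript
// Dijkstra's algorithm over an adjacency list: graph[u] = [[v, weight], ...].
// Returns the array of shortest distances from `source`.
function dijkstra(graph, source) {
  const n = graph.length;
  const dist = new Array(n).fill(Infinity); // step 1: all distances infinity...
  const visited = new Array(n).fill(false);
  dist[source] = 0;                         // ...except the source
  for (let i = 0; i < n; i++) {
    // step 3: pick the unvisited node with the smallest tentative distance
    let u = -1;
    for (let v = 0; v < n; v++)
      if (!visited[v] && (u === -1 || dist[v] < dist[u])) u = v;
    if (u === -1 || dist[u] === Infinity) break; // remaining nodes unreachable
    visited[u] = true;                      // step 2: mark it visited
    for (const [v, w] of graph[u])          // relax each neighbor
      if (dist[u] + w < dist[v]) dist[v] = dist[u] + w;
  }
  return dist;                              // step 4: all nodes settled
}
```

With edges 0→1 (4), 0→2 (1), 2→1 (2), 1→3 (1), 2→3 (5), `dijkstra(graph, 0)` yields `[0, 3, 1, 4]`: the path to node 1 goes through node 2.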

+

+ Example: Consider a graph where nodes represent cities and edges represent the distance between them. Starting from city A, Dijkstra's Algorithm will calculate the shortest distance to all other cities, considering the sum of edge weights in each step. +

+

+ Time Complexity: The time complexity of Dijkstra's Algorithm is O((V + E) log V), where V is the number of vertices and E is the number of edges in the graph. +

+

+ Limitations: Dijkstra's Algorithm does not work with graphs that have negative edge weights because the greedy approach might not always lead to the optimal solution. +

+
+ +
+
+ Frequently Asked Questions (FAQs) +
+
+ + + + +
+
+ Q1: What is Dijkstra's Algorithm? +
+
+ Dijkstra's Algorithm is a graph search algorithm that finds the shortest path from a starting node to all other nodes in a weighted graph. +
+
+ Q2: How does Dijkstra's Algorithm work? +
+
+ The algorithm initializes distances to infinity, sets the distance of the starting node to zero, and iteratively selects the node with the smallest known distance to update the distances of its neighbors. +
+
+ Q3: What is the time complexity of Dijkstra's Algorithm? +
+
+ The time complexity is O((V + E) log V), where V is the number of vertices and E is the number of edges in the graph. +
+
+ Q4: Can Dijkstra's Algorithm handle negative edge weights? +
+
+ No, Dijkstra's Algorithm does not work with graphs that have negative edge weights as it assumes that once a node's shortest path is found, it cannot be improved. +
+
+ Q5: What are the practical applications of Dijkstra's Algorithm? +
+
+ Dijkstra's Algorithm is commonly used in GPS systems, network routing protocols, and other applications requiring efficient pathfinding in graphs. +
+
+
+
+ Written by: Yeshwanth DS +
+
+ + + + + + + + + \ No newline at end of file diff --git a/public/ArticleList/huffman-coding.html b/public/ArticleList/huffman-coding.html index 6131fde..62978e0 100644 --- a/public/ArticleList/huffman-coding.html +++ b/public/ArticleList/huffman-coding.html @@ -9,6 +9,7 @@ +
@@ -34,29 +35,88 @@
- +

- Suppose there is a problem in which you have wood pieces of different length (a1,a2,a3,...,an) and you want to merge to make a new stick. But there is a constraint, that is in each operation the cost of merging is the sum of lengths of 2 sticks and you need to minimize to overall cost of merging. + Suppose there is a problem in which you have wood pieces of different lengths (a1, a2, a3,...,an) and you want to merge them to make a new stick. But there is a constraint: in each operation, the cost of merging is the sum of the lengths of 2 sticks, and you need to minimize the overall cost of merging.

- So we need an to optimally take the sticks according to their lengths. Suppose we are given a=1, a2=4, a3=9, a4=11. Now, let's take a2 and a3 and merge them into one so their sum = 4+9=13 , and cost = 13. Then let us take a1 , since previous sticks were merged to form a stick of length 13, let us take the previous stick and a1 = 1, so new sum = 1+13=14, and cost += 14 means cost = 13 + 14 = 27. Then we are left with a4 so new stick length sum = 14+11 = 25. And total cost = becomes 27 + 25 = 52. + So we need to optimally take the sticks according to their lengths. Suppose we are given a1=1, a2=4, a3=9, a4=11. Now, let's take a2 and a3 and merge them into one, so their sum = 4+9=13, and cost = 13. Then let us take a1; since previous sticks were merged to form a stick of length 13, let us take the previous stick and a1 = 1, so new sum = 1+13=14, and cost += 14 means cost = 13 + 14 = 27. Then we are left with a4, so new stick length sum = 14+11 = 25. And the total cost becomes 27 + 25 = 52.

- But instead if we had taken first a1 and a2 , then a1+a2=5, then cost = 5. Then going in sorted order we sum the next two smallest length which are currently 5 and 9, so length = 14, and cost = 5+15 = 19. Now we are left with two sticks of length 14 and 11 , so summing them the length becomes 25 and cost = 25 + 19 = 44. + But instead, if we had taken first a1 and a2, then a1+a2=5, then cost = 5. Then going in sorted order, we sum the next two smallest lengths, which are currently 5 and 9, so length = 14, and cost = 5+14 = 19. Now we are left with two sticks of length 14 and 11, so summing them, the length becomes 25 and cost = 25 + 19 = 44.

- So, if we observe that the greater the number we take the earlier the cost increases. Because every time we add new length to current length, the big element would repeat every time. So this is the intuition behind taking the smallest elements first which ensures that we are minimizing the immediate cost of each merge operation. + So, if we observe that the greater the number we take, the earlier the cost increases. Because every time we add a new length to the current length, the big element would repeat every time. So this is the intuition behind taking the smallest elements first, which ensures that we are minimizing the immediate cost of each merge operation.

- Now this is actually the concept behind actual Huffman Coding which is a way to compress a string into a segment of 0s and 1s and of the smallest length so that there would be less chance of compressed string to exceed the integer or long range. + Now this is actually the concept behind actual Huffman Coding, which is a way to compress a string into a segment of 0s and 1s and of the smallest length so that there would be less chance of the compressed string exceeding the integer or long range.

- Can you find how? A hint is to find the bits representation for each unique character, and note none of the character's bit representation should be a prefix of another ! + Can you find how? A hint is to find the bits representation for each unique character, and note none of the character's bit representation should be a prefix of another!
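The stick-merging greedy above is normally driven by a min-heap. The sketch below (an illustrative helper, not from the article) substitutes a sorted array for a real heap to stay short; the greedy choice itself is identical.

```javascript
// Minimum total cost of merging sticks: repeatedly combine the two smallest.
// A production version would use a binary min-heap; keeping the array sorted
// by insertion stands in for the heap here.
function minMergeCost(lengths) {
  const heap = [...lengths].sort((a, b) => a - b);
  let cost = 0;
  while (heap.length > 1) {
    const merged = heap.shift() + heap.shift(); // two smallest sticks
    cost += merged;                             // pay their combined length
    let i = 0;                                  // re-insert, keeping order
    while (i < heap.length && heap[i] < merged) i++;
    heap.splice(i, 0, merged);
  }
  return cost;
}
```

For the article's example, `minMergeCost([1, 4, 9, 11])` reproduces the optimal cost 44 (merges of 5, 14, then 25), versus 52 for the first ordering tried.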

+ + +
+
+ Frequently Asked Questions (FAQs) +
+
+ + + + +
+
+ Q1: What is Huffman Coding? +
+
+ Huffman Coding is a compression algorithm that assigns variable-length codes to input characters based on their frequencies. Characters that occur more frequently are assigned shorter codes, while less frequent characters receive longer codes. +
+
+ Q2: How does Huffman Coding minimize the cost of merging sticks? +
+
+ Huffman Coding minimizes the cost by always merging the two smallest sticks first. This strategy ensures that the larger lengths, which contribute more to the total cost, are added later in the process, thus reducing the overall merging cost. +
+
+ Q3: What role do binary trees play in Huffman Coding? +
+
+ In Huffman Coding, a binary tree is used to represent the codes. Each leaf node represents a character, and the path from the root to the leaf determines the character's binary representation (0s and 1s). +
+
+ Q4: Can Huffman Coding guarantee the shortest binary representation? +
+
+ Yes, Huffman Coding produces an optimal (minimum expected length) prefix-free code for a given set of characters and their frequencies; by construction, no character's code is a prefix of another's.
+
+ Q5: What is the importance of prefix-free codes? +
+
+ Prefix-free codes ensure that no code is a prefix of another. This property is essential for uniquely decodable encoding; it prevents ambiguity when decoding the compressed data back into its original form. +
+
+
+
Written by: UniAlgo Owner
+ + + diff --git a/public/ArticleList/time-complexity-bfs.html b/public/ArticleList/time-complexity-bfs.html index a18888b..12f1439 100644 --- a/public/ArticleList/time-complexity-bfs.html +++ b/public/ArticleList/time-complexity-bfs.html @@ -9,7 +9,7 @@ - +
@@ -34,34 +34,85 @@
- +

DFS is a graph traversal technique used to explore nodes and edges of a graph. It can be implemented using recursion or an explicit stack. The algorithm works by starting from a source node, marking it as visited, and then recursively visiting all its unvisited neighbors.

Analyzing Time Complexity

-

-

-

-

1. Looping Through Nodes: O(V)

+ • When you start the DFS, you typically loop through all the vertices in the graph. This is necessary to ensure that even disconnected components of the graph are visited.

+ • For each node, you check if it has been visited. This check is O(1) for each node, and since you do this for all V nodes, this part contributes O(V) to the time complexity.
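Both points correspond to concrete lines of code. Here is a sketch of the full traversal (adjacency-list input assumed) whose structure makes the O(V + E) bound visible:

```javascript
// DFS over all components. Every vertex is visited once (O(V)) and every
// adjacency list is scanned once (O(E) total), giving O(V + E) overall.
function dfsAll(adj) {
  const n = adj.length;
  const visited = new Array(n).fill(false);
  const order = [];
  for (let s = 0; s < n; s++) {   // outer loop: O(V), covers disconnected parts
    if (visited[s]) continue;     // O(1) visited check per node
    const stack = [s];            // explicit stack instead of recursion
    visited[s] = true;
    while (stack.length) {
      const v = stack.pop();
      order.push(v);
      for (const w of adj[v])     // each edge examined a constant number of times
        if (!visited[w]) { visited[w] = true; stack.push(w); }
    }
  }
  return order;
}
```

On a graph with two components, e.g. edges 0-1 and 2-3, the outer loop restarts the traversal at node 2 after the first component is exhausted, so `dfsAll([[1],[0],[3],[2]])` visits all four vertices.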

-

- • When you start the DFS, you typically loop through all the vertices in the graph. This is necessary to ensure that even disconnected components of the graph are visited. -

-

- • For each node, you check if it has been visited. This check is O(1) for each node, and since you do this for all V nodes, this part contributes O(V) to the time complexity. -

+ + +
+
+ Frequently Asked Questions (FAQs) +
+
+ + + + +
+
+ Q1: How does DFS differ from BFS? +
+
+ DFS uses a depth-first approach, diving deep into a branch before exploring another, while BFS explores nodes level by level, using a queue structure. +
+
+ Q2: What is the time complexity of DFS? +
+
+ The time complexity of DFS is O(V + E), where V is the number of vertices and E is the number of edges in the graph. +
+
+ Q3: Can DFS be used to find connected components? +
+
+ Yes, DFS can be used to find connected components in a graph by performing DFS on each unvisited node and marking all reachable nodes from that starting node. +
+
+ Q4: What are the practical applications of DFS? +
+
+ DFS is used in topological sorting, finding strongly connected components, and solving maze problems. It’s also helpful in generating paths and in algorithms like finding bridges and articulation points. +
+
+ Q5: Does DFS guarantee the shortest path in an unweighted graph? +
+
+ No, DFS does not guarantee the shortest path in an unweighted graph. BFS is used to find the shortest path in such cases because it explores nodes level by level. +
+
+
+
Written by: UniAlgo Owner
+ + + From f50582014a05fb6b74835775634604274c78d174 Mon Sep 17 00:00:00 2001 From: "2021.sneha.utekar@ves.ac.in" <2021.sneha.utekar@ves.ac.in> Date: Sun, 6 Oct 2024 20:00:02 +0530 Subject: [PATCH 2/2] Add FAQ tab at the bottom of article template #25 --- public/ArticleList/binary-lifting.html | 186 ++++++------------ public/ArticleList/dijkstra-algorithm.html | 180 ++++++----------- public/ArticleList/huffman-coding.html | 76 +------ public/ArticleList/time-complexity-bfs.html | 77 ++------ public/ArticleTemplate.html | 77 +++++--- public/Articles-FAQ/binary-lifting-faq.txt | 14 ++ .../Articles-FAQ/dijkstra-algorithm-faq.txt | 14 ++ public/Articles-FAQ/huffman-coding-faq.txt | 14 ++ .../Articles-FAQ/time-complexity-bfs-faq.txt | 14 ++ 9 files changed, 246 insertions(+), 406 deletions(-) create mode 100644 public/Articles-FAQ/binary-lifting-faq.txt create mode 100644 public/Articles-FAQ/dijkstra-algorithm-faq.txt create mode 100644 public/Articles-FAQ/huffman-coding-faq.txt create mode 100644 public/Articles-FAQ/time-complexity-bfs-faq.txt diff --git a/public/ArticleList/binary-lifting.html b/public/ArticleList/binary-lifting.html index 03ffc6a..e885a16 100644 --- a/public/ArticleList/binary-lifting.html +++ b/public/ArticleList/binary-lifting.html @@ -1,125 +1,65 @@ - - - - - CP - Article - - - - - - - -
- -
-
-
- How does Binary Lifting utilize the power of 2 jumps? -
-
-
- -
-
- -

- Suppose you are given a tree and then Q queries. In each query we are given two nodes of the tree and are asked to find the lowest common ancestor of both of them. Now finding LCA is easy and can be done in O(N) (which is simple to understand), but for q queries the time complexity becomes O(Q*N). So we need to preprocess the tree and then calculate the LCA of two nodes in O(log(N)). Whenever we need to have log(N) complexity which can be achieved if we somehow used powers of 2 (seems obvious). -

-

- Firstly, let us see what the algorithm does. So instead of going over every node for each query, we create a matrix of n * (log2(Depth of tree)) approximately. And for each node's row, we store the 2^0 th parent (i.e. 1st), 2^1 st parent (i.e. 2nd), then 2^2, .., and more until the root node is crossed. Another thing we need to precompute is the depth of each node which can be done in O(N) and needs to be done once only. And creation matrix will take O(N*log(depth)). Generally, the log(depth) would be less than 20 even. -

-

- NOTE: Don't worry if you feel like why are we doing this. -

-

- Now we can use this precomputed information for each query as - let us take two nodes a and b. If a has a depth d1 and b has a depth d2 (assuming d1>d2), then it is intuitive that we must at least cover the difference in depth of a and b, because LCA will anyhow lie above b. So we need the (d1-d2) th parent of a which is very simple to find. If we represent (d1-d2) in binary representation say 0101 then it means the 5th parent we need can be achieved by 1st parent and then its fourth parent. Hence we can see that we are just taking the parent (some 2^j th parent) which we already precomputed. So this way we just took log2(d1-d2) to cover the difference in depth. -

-

- Now there may be a case that the d1-d2 th parent of a is b itself, so we may check the case if a==b, else it means that the two nodes are in separate branches currently. -

-

- One feature of LCA we use here in the tree is that above the LCA all other nodes that come above it are always their common parents. So again we will use each bit of the binary representation of the depth of two nodes (which is essentially the same now) and if the jumping by that power of 2 gives us a common parent, then it means that either this is the LCA or it lies below it, so we need to look below, so we reduce the power by 1 and then jump. If the parents are different then definitely the LCA lies above, so we upgrade our parent of a = mat[a][j] (where mat is the matrix we created and mat[a][j] representing the 2^j th parent of a) and similarly b = mat[b][j]. -

-

- In this way, we keep coming closer to the lowest common ancestor. The reason we will always reach the ancestor is that imagine like the difference in depth of LCA and the node is d. And we know that any number can be represented in powers of 2, so basically you take the maximum possible jump each time (less than the difference in LCA and current depth of the node) and cover up the difference in log time complexity. -

-
- - - -
-
- Frequently Asked Questions (FAQs) -
-
- - - - -
-
- Q1: What is Binary Lifting? -
-
- Binary Lifting is a technique used to find the Lowest Common Ancestor (LCA) of two nodes in a tree in logarithmic time. It preprocesses the tree to create a table of ancestors at different powers of two. -
-
- Q2: How does Binary Lifting improve query time for LCA? -
-
- Instead of checking all nodes, Binary Lifting allows for jumping directly to the 2^j th parent of a node, which reduces the query time to O(log N) after an initial O(N log N) preprocessing step. -
-
- Q3: What is the time complexity for preprocessing the tree? -
-
- The preprocessing of the tree takes O(N log N) time, where N is the number of nodes in the tree. -
-
- Q4: What is the main limitation of Binary Lifting? -
-
- Binary Lifting does not work efficiently for dynamic trees where nodes can be added or removed frequently, as it requires a complete reprocessing of the ancestor table. -
-
- Q5: Can Binary Lifting be used in graphs other than trees? -
-
- Binary Lifting is primarily designed for trees. However, it can be adapted for directed acyclic graphs (DAGs) under certain conditions but is not generally applicable to arbitrary graphs. -
-
-
-
- Written by: UniAlgo Owner -
-
- - - - - + + + + + CP - Article + + + + + + +
+ +
+
+
+ How does Binary Lifting utilize the power of 2 jumps?
+
+
+ +
+
+ +

+ Suppose you are given a tree and then Q queries. In each query we are given two nodes of the tree and are asked to find their lowest common ancestor (LCA). Finding the LCA naively is easy and can be done in O(N), but for Q queries the total time complexity becomes O(Q*N). So we preprocess the tree and then calculate the LCA of two nodes in O(log(N)) per query. Such log(N) complexity can be achieved if we somehow use powers of 2.

+

+ Firstly, let us see what the algorithm does. Instead of going over every node for each query, we create a matrix of roughly n * log2(depth of tree) entries. In each node's row we store its 2^0 th parent (i.e. 1st), 2^1 st parent (i.e. 2nd), then 2^2 nd, and so on until the root node is crossed. We also need to precompute the depth of each node, which can be done once in O(N). Building the matrix takes O(N*log(depth)); in practice log(depth) is rarely more than 20.

+

+ NOTE: Don't worry if it is not yet clear why we are doing this; the next paragraphs show how this table is used.

+

+ Now we can use this precomputed information for each query. Take two nodes a and b with depths d1 and d2 (assuming d1>d2). It is intuitive that we must at least cover the difference in depth between a and b, because the LCA cannot lie below b. So we need the (d1-d2) th parent of a, which is very simple to find: if we write (d1-d2) in binary, say 0101 (= 5), then the 5th parent is reached by taking the 1st parent and then its 4th parent. Each of these is some 2^j th parent that we already precomputed, so covering the depth difference takes only log2(d1-d2) jumps.

+

+ Now there may be a case that the (d1-d2) th parent of a is b itself, so we check whether a==b; otherwise the two nodes are currently in separate branches.

+

+ One property of the LCA we use here is that every node above the LCA is a common ancestor of both nodes. So again we go over the bits from the highest power of 2 downward (both nodes now sit at the same depth). If jumping by that power of 2 would land both nodes on a common ancestor, that ancestor is the LCA or lies above it, so we skip the jump and try a smaller power. If the two ancestors are different, the LCA definitely lies above them, so we lift both nodes: a = mat[a][j] and similarly b = mat[b][j] (where mat is the matrix we created and mat[a][j] is the 2^j th parent of a).

+

+ In this way, we keep coming closer to the lowest common ancestor. To see why we always converge, suppose the difference in depth between a node and the LCA is d. Any number can be written as a sum of powers of 2, so taking the largest jump that does not overshoot the LCA at each step covers the difference in logarithmically many jumps. At the end both nodes sit just below the LCA, and the answer is their direct (2^0 th) parent.

+
+
+ Written by: UniAlgo Owner +
+
+ + \ No newline at end of file diff --git a/public/ArticleList/dijkstra-algorithm.html b/public/ArticleList/dijkstra-algorithm.html index 249a6b4..825e291 100644 --- a/public/ArticleList/dijkstra-algorithm.html +++ b/public/ArticleList/dijkstra-algorithm.html @@ -1,125 +1,59 @@ - - - - - CP - Article - - - - - - -
- -
-
-
- How Dijkstra's Algorithm Works -
-
-
- -
-
- -

- The algorithm computes the shortest path from a starting node to all other nodes in the graph. It selects the node with the smallest known distance, updates the distances of its neighbors, and repeats this process until all nodes have been visited. -

-

- Steps of Dijkstra's Algorithm: - 1. Set the distance to the source node as 0 and to all other nodes as infinity. - 2. Mark the source node as visited. For all its neighbors, calculate the tentative distance using the current node's distance. If this new distance is smaller than the previously known distance, update it. - 3. Move to the unvisited node with the smallest tentative distance and repeat the process of updating distances for its neighbors. - 4. Continue this process until all nodes are visited, ensuring that the shortest path to each node is found. -

-

- Example: Consider a graph where nodes represent cities and edges represent the distance between them. Starting from city A, Dijkstra's Algorithm will calculate the shortest distance to all other cities, considering the sum of edge weights in each step. -

-

- Time Complexity: The time complexity of Dijkstra's Algorithm is O((V + E) log V), where V is the number of vertices and E is the number of edges in the graph. -

-

- Limitations: Dijkstra's Algorithm does not work with graphs that have negative edge weights because the greedy approach might not always lead to the optimal solution. -

-
- -
-
- Frequently Asked Questions (FAQs) -
-
- - - - -
-
- Q1: What is Dijkstra's Algorithm? -
-
- Dijkstra's Algorithm is a graph search algorithm that finds the shortest path from a starting node to all other nodes in a weighted graph. -
-
- Q2: How does Dijkstra's Algorithm work? -
-
- The algorithm initializes distances to infinity, sets the distance of the starting node to zero, and iteratively selects the node with the smallest known distance to update the distances of its neighbors. -
-
- Q3: What is the time complexity of Dijkstra's Algorithm? -
-
- The time complexity is O((V + E) log V), where V is the number of vertices and E is the number of edges in the graph. -
-
- Q4: Can Dijkstra's Algorithm handle negative edge weights? -
-
- No, Dijkstra's Algorithm does not work with graphs that have negative edge weights as it assumes that once a node's shortest path is found, it cannot be improved. -
-
- Q5: What are the practical applications of Dijkstra's Algorithm? -
-
- Dijkstra's Algorithm is commonly used in GPS systems, network routing protocols, and other applications requiring efficient pathfinding in graphs. -
-
-
-
- Written by: Yeshwanth DS -
-
- - - - - - - - - - \ No newline at end of file + + + + + CP - Article + + + + + + +
+ +
+
+
+ How Dijkstra's Algorithm Works +
+
+
+ +
+
+ +

+ The algorithm computes the shortest path from a starting node to all other nodes in the graph. It selects the node with the smallest known distance, updates the distances of its neighbors, and repeats this process until all nodes have been visited. +

+

+ Steps of Dijkstra's Algorithm:
+ 1. Set the distance to the source node as 0 and to all other nodes as infinity.
+ 2. Mark the source node as visited. For all its neighbors, calculate the tentative distance using the current node's distance. If this new distance is smaller than the previously known distance, update it.
+ 3. Move to the unvisited node with the smallest tentative distance and repeat the process of updating distances for its neighbors.
+ 4. Continue this process until all nodes are visited, ensuring that the shortest path to each node is found.
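The steps above can be sketched with a priority queue; this is a generic illustration (adjacency lists of (neighbor, weight) pairs and node names of our own choosing), not code from the article:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph[u] is a list of (v, weight) pairs."""
    dist = {u: float('inf') for u in graph}   # step 1: every distance starts at infinity...
    dist[source] = 0                          # ...except the source, which starts at 0
    pq = [(0, source)]                        # (tentative distance, node)
    while pq:
        d, u = heapq.heappop(pq)              # step 3: unvisited node with smallest distance
        if d > dist[u]:
            continue                          # stale heap entry; u was already settled
        for v, w in graph[u]:                 # step 2: relax each neighbor
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist                               # step 4: all nodes settled

# A small city graph, starting from city A (edge weights are distances):
graph = {'A': [('B', 4), ('C', 1)], 'B': [('D', 1)], 'C': [('B', 2), ('D', 5)], 'D': []}
```

Running dijkstra(graph, 'A') settles C first (distance 1), then improves B to 3 via C, and finally reaches D at distance 4.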

+

+ Example: Consider a graph where nodes represent cities and edges represent the distance between them. Starting from city A, Dijkstra's Algorithm will calculate the shortest distance to all other cities, considering the sum of edge weights in each step. +

+

+ Time Complexity: The time complexity of Dijkstra's Algorithm is O((V + E) log V), where V is the number of vertices and E is the number of edges in the graph. +

+

+ Limitations: Dijkstra's Algorithm does not work with graphs that have negative edge weights because the greedy approach might not always lead to the optimal solution. +

+
+
+ Written by: Yeshwanth DS +
+
+ + \ No newline at end of file diff --git a/public/ArticleList/huffman-coding.html b/public/ArticleList/huffman-coding.html index 62978e0..579dbb5 100644 --- a/public/ArticleList/huffman-coding.html +++ b/public/ArticleList/huffman-coding.html @@ -9,7 +9,6 @@ -
@@ -35,88 +34,29 @@
- +

- Suppose there is a problem in which you have wood pieces of different lengths (a1, a2, a3,...,an) and you want to merge them to make a new stick. But there is a constraint: in each operation, the cost of merging is the sum of the lengths of 2 sticks, and you need to minimize the overall cost of merging. + Suppose there is a problem in which you have wood pieces of different lengths (a1, a2, a3,...,an) and you want to merge them to make a new stick. But there is a constraint: in each operation, the cost of merging is the sum of the lengths of the 2 sticks, and you need to minimize the overall cost of merging.

- So we need to optimally take the sticks according to their lengths. Suppose we are given a1=1, a2=4, a3=9, a4=11. Now, let's take a2 and a3 and merge them into one, so their sum = 4+9=13, and cost = 13. Then let us take a1; since previous sticks were merged to form a stick of length 13, let us take the previous stick and a1 = 1, so new sum = 1+13=14, and cost += 14 means cost = 13 + 14 = 27. Then we are left with a4, so new stick length sum = 14+11 = 25. And the total cost becomes 27 + 25 = 52. + So we need to take the sticks optimally according to their lengths. Suppose we are given a1=1, a2=4, a3=9, a4=11. Let's take a2 and a3 and merge them into one, so their sum = 4+9=13, and cost = 13. Then let us take a1; since the previous sticks were merged to form a stick of length 13, merging it with a1 = 1 gives a new sum = 1+13=14, and cost += 14, so cost = 13 + 14 = 27. Then we are left with a4, so the new stick length = 14+11 = 25, and the total cost becomes 27 + 25 = 52.

- But instead, if we had taken first a1 and a2, then a1+a2=5, then cost = 5. Then going in sorted order, we sum the next two smallest lengths, which are currently 5 and 9, so length = 14, and cost = 5+14 = 19. Now we are left with two sticks of length 14 and 11, so summing them, the length becomes 25 and cost = 25 + 19 = 44. + But instead, if we had first taken a1 and a2, then a1+a2=5 and cost = 5. Then, going in sorted order, we sum the next two smallest lengths, which are currently 5 and 9, so length = 14 and cost = 5+14 = 19. Now we are left with two sticks of lengths 14 and 11; summing them, the length becomes 25 and the cost = 25 + 19 = 44.

- So, if we observe that the greater the number we take, the earlier the cost increases. Because every time we add a new length to the current length, the big element would repeat every time. So this is the intuition behind taking the smallest elements first, which ensures that we are minimizing the immediate cost of each merge operation. + Observe that the earlier we merge a large length, the more the cost increases, because every merged length is added to the cost again in each subsequent merge, so a big element merged early is counted repeatedly. This is the intuition behind taking the smallest elements first: it ensures that we minimize the immediate cost of each merge operation.
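The smallest-first strategy is exactly a min-heap greedy; a short sketch (our own code, not the article's) that reproduces the worked example:

```python
import heapq

def min_merge_cost(lengths):
    """Total cost of merging all sticks, always merging the two smallest first."""
    heap = list(lengths)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)    # the two smallest current sticks
        b = heapq.heappop(heap)
        total += a + b             # cost of this merge
        heapq.heappush(heap, a + b)
    return total
```

For the sticks (1, 4, 9, 11) this merges 1+4=5 (cost 5), then 5+9=14 (cost 19), then 14+11=25 (cost 44), matching the better ordering above.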

- Now this is actually the concept behind actual Huffman Coding, which is a way to compress a string into a segment of 0s and 1s and of the smallest length so that there would be less chance of the compressed string exceeding the integer or long range. + This is actually the concept behind Huffman Coding, which compresses a string into a sequence of 0s and 1s of the smallest possible length, reducing the chance that the compressed string exceeds the integer or long range.

- Can you find how? A hint is to find the bits representation for each unique character, and note none of the character's bit representation should be a prefix of another! + Can you find how? A hint: find a bit representation for each unique character, and note that no character's bit representation should be a prefix of another's!
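As a hedged sketch of one answer: build the code by repeatedly merging the two least frequent symbols with the same heap idea, prepending a 0 to one side's codes and a 1 to the other's. The function name and representation here are our own, not from the article:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a prefix-free {char: bitstring} code; frequent chars get shorter codes."""
    freq = Counter(text)
    if len(freq) == 1:                        # degenerate case: one distinct character
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, unique tie-breaker, subtree as {char: code-so-far})
    heap = [(f, i, {c: ""}) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)       # the two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in t1.items()}
        merged.update({c: "1" + code for c, code in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]
```

Because every character ends at a leaf of the implicit merge tree, no code is a prefix of another, which is exactly the hint's condition.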

- - -
-
- Frequently Asked Questions (FAQs) -
-
- - - - -
-
- Q1: What is Huffman Coding? -
-
- Huffman Coding is a compression algorithm that assigns variable-length codes to input characters based on their frequencies. Characters that occur more frequently are assigned shorter codes, while less frequent characters receive longer codes. -
-
- Q2: How does Huffman Coding minimize the cost of merging sticks? -
-
- Huffman Coding minimizes the cost by always merging the two smallest sticks first. This strategy ensures that the larger lengths, which contribute more to the total cost, are added later in the process, thus reducing the overall merging cost. -
-
- Q3: What role do binary trees play in Huffman Coding? -
-
- In Huffman Coding, a binary tree is used to represent the codes. Each leaf node represents a character, and the path from the root to the leaf determines the character's binary representation (0s and 1s). -
-
- Q4: Can Huffman Coding guarantee the shortest binary representation? -
-
- Yes, Huffman Coding guarantees the shortest possible binary representation for a given set of characters and their frequencies, as long as the characters' bit representations do not prefix each other. -
-
- Q5: What is the importance of prefix-free codes? -
-
- Prefix-free codes ensure that no code is a prefix of another. This property is essential for uniquely decodable encoding; it prevents ambiguity when decoding the compressed data back into its original form. -
-
-
-
Written by: UniAlgo Owner
- - - - + \ No newline at end of file diff --git a/public/ArticleList/time-complexity-bfs.html b/public/ArticleList/time-complexity-bfs.html index 12f1439..cd21df7 100644 --- a/public/ArticleList/time-complexity-bfs.html +++ b/public/ArticleList/time-complexity-bfs.html @@ -9,7 +9,7 @@ - +
@@ -34,85 +34,34 @@
- +

DFS is a graph traversal technique used to explore nodes and edges of a graph. It can be implemented using recursion or an explicit stack. The algorithm works by starting from a source node, marking it as visited, and then recursively visiting all its unvisited neighbors.

Analyzing Time Complexity

+

+

+

+

1. Looping Through Nodes: O(V)

- • When you start the DFS, you typically loop through all the vertices in the graph. This is necessary to ensure that even disconnected components of the graph are visited.

- • For each node, you check if it has been visited. This check is O(1) for each node, and since you do this for all V nodes, this part contributes O(V) to the time complexity. +

+

+ • When you start the DFS, you typically loop through all the vertices in the graph. This is necessary to ensure that even disconnected components of the graph are visited. +

+

+ • For each node, you check if it has been visited. This check is O(1) for each node, and since you do this for all V nodes, this part contributes O(V) to the time complexity.

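The two observations above can be seen directly in a standard iterative DFS; a generic sketch (adjacency-list input, names of our own choosing), where the outer loop contributes O(V) and the neighbor scans contribute O(E) in total:

```python
def dfs_all(graph):
    """Visit every node, including disconnected components; return the visit order."""
    visited = set()
    order = []
    for start in graph:                   # O(V): ensures disconnected components are reached
        if start in visited:              # O(1) visited check per node
            continue
        stack = [start]
        while stack:
            u = stack.pop()
            if u in visited:
                continue
            visited.add(u)
            order.append(u)
            for v in reversed(graph[u]):  # each adjacency list scanned once: O(E) overall
                if v not in visited:
                    stack.append(v)
    return order
```

On a graph with two components, such as {0: [1, 2], 1: [0], 2: [0], 3: [4], 4: [3]}, the outer loop restarts the search at node 3 after the first component is exhausted.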
- - -
-
- Frequently Asked Questions (FAQs) -
-
- - - - -
-
- Q1: How does DFS differ from BFS? -
-
- DFS uses a depth-first approach, diving deep into a branch before exploring another, while BFS explores nodes level by level, using a queue structure. -
-
- Q2: What is the time complexity of DFS? -
-
- The time complexity of DFS is O(V + E), where V is the number of vertices and E is the number of edges in the graph. -
-
- Q3: Can DFS be used to find connected components? -
-
- Yes, DFS can be used to find connected components in a graph by performing DFS on each unvisited node and marking all reachable nodes from that starting node. -
-
- Q4: What are the practical applications of DFS? -
-
- DFS is used in topological sorting, finding strongly connected components, and solving maze problems. It’s also helpful in generating paths and in algorithms like finding bridges and articulation points. -
-
- Q5: Does DFS guarantee the shortest path in an unweighted graph? -
-
- No, DFS does not guarantee the shortest path in an unweighted graph. BFS is used to find the shortest path in such cases because it explores nodes level by level. -
-
-
-
Written by: UniAlgo Owner
- - - - + \ No newline at end of file diff --git a/public/ArticleTemplate.html b/public/ArticleTemplate.html index a96ce5b..ade7463 100644 --- a/public/ArticleTemplate.html +++ b/public/ArticleTemplate.html @@ -1,6 +1,6 @@ - + @@ -9,35 +9,56 @@ <link href="ArticleDataStyle.css" rel="stylesheet" type="text/css"/> <link href="https://fonts.googleapis.com/css?family=Rubik" rel="stylesheet"/> <link href="../asset/logo.png" rel="icon" type="image"/> - </head> - <body> +</head> +<body> <header> - <div class="centered"> - <a href="../index.html"> - <div class="name"> - UniAlgo - </div> - <div class="logo"> - <img src="../asset/logo.png"/> - </div> - </a> - </div> + <div class="centered"> + <a href="../index.html"> + <div class="name"> + UniAlgo + </div> + <div class="logo"> + <img src="../asset/logo.png"/> + </div> + </a> + </div> </header> <div class="article"> - <div class="title"> - <!-- to be filled via generator --> - </div> - <div> - <button id="back-link" onclick="location.href='../article.html'"> - Back to Articles - </button> - </div> - <div class="content"> - <!-- to be filled via generator --> - </div> - <div class="author"> - <!-- to be filled via generator --> - </div> + <div class="title"> + <!-- to be filled via generator --> + </div> + <div> + <button id="back-link" onclick="location.href='../article.html'"> + Back to Articles + </button> + </div> + <div class="content"> + <!-- to be filled via generator --> + </div> + <div class="author"> + <!-- to be filled via generator --> + </div> + <div class="faq-section"> + <div class="faq-title"> + Frequently Asked Questions + </div> + <div class="faq-content"> + <!-- FAQ content will be inserted here via generator --> + </div> + </div> </div> - </body> + <!-- JavaScript for Toggle --> + <script> + document.getElementById('faq-toggle').addEventListener('click', function() { + const faqContent = document.getElementById('faq-content'); + if (faqContent.style.display === "none" || faqContent.style.display === "") { + 
faqContent.style.display = "block"; + this.textContent = "Hide FAQs"; + } else { + faqContent.style.display = "none"; + this.textContent = "Show FAQs"; + } + }); + </script> +</body> </html> diff --git a/public/Articles-FAQ/binary-lifting-faq.txt b/public/Articles-FAQ/binary-lifting-faq.txt new file mode 100644 index 0000000..18f42b7 --- /dev/null +++ b/public/Articles-FAQ/binary-lifting-faq.txt @@ -0,0 +1,14 @@ +Q1: What is Binary Lifting? +Binary Lifting is a technique used to find the Lowest Common Ancestor (LCA) of two nodes in a tree in logarithmic time. It preprocesses the tree to create a table of ancestors at different powers of two. + +Q2: How does Binary Lifting improve query time for LCA? +Instead of checking all nodes, Binary Lifting allows for jumping directly to the 2^j th parent of a node, which reduces the query time to O(log N) after an initial O(N log N) preprocessing step. + +Q3: What is the time complexity for preprocessing the tree? +The preprocessing of the tree takes O(N log N) time, where N is the number of nodes in the tree. + +Q4: What is the main limitation of Binary Lifting? +Binary Lifting does not work efficiently for dynamic trees where nodes can be added or removed frequently, as it requires a complete reprocessing of the ancestor table. + +Q5: Can Binary Lifting be used in graphs other than trees? +Binary Lifting is primarily designed for trees. However, it can be adapted for directed acyclic graphs (DAGs) under certain conditions but is not generally applicable to arbitrary graphs. diff --git a/public/Articles-FAQ/dijkstra-algorithm-faq.txt b/public/Articles-FAQ/dijkstra-algorithm-faq.txt new file mode 100644 index 0000000..e9e55ec --- /dev/null +++ b/public/Articles-FAQ/dijkstra-algorithm-faq.txt @@ -0,0 +1,14 @@ +Q1: What is Dijkstra's Algorithm? +Dijkstra's Algorithm is a graph search algorithm that finds the shortest path from a starting node to all other nodes in a weighted graph. + +Q2: How does Dijkstra's Algorithm work? 
+The algorithm initializes distances to infinity, sets the distance of the starting node to zero, and iteratively selects the node with the smallest known distance to update the distances of its neighbors. + +Q3: What is the time complexity of Dijkstra's Algorithm? +The time complexity is O((V + E) log V), where V is the number of vertices and E is the number of edges in the graph. + +Q4: Can Dijkstra's Algorithm handle negative edge weights? +No, Dijkstra's Algorithm does not work with graphs that have negative edge weights as it assumes that once a node's shortest path is found, it cannot be improved. + +Q5: What are the practical applications of Dijkstra's Algorithm? +Dijkstra's Algorithm is commonly used in GPS systems, network routing protocols, and other applications requiring efficient pathfinding in graphs. diff --git a/public/Articles-FAQ/huffman-coding-faq.txt b/public/Articles-FAQ/huffman-coding-faq.txt new file mode 100644 index 0000000..7abafdc --- /dev/null +++ b/public/Articles-FAQ/huffman-coding-faq.txt @@ -0,0 +1,14 @@ +Q1: What is Huffman Coding? +Huffman Coding is a compression algorithm that assigns variable-length codes to input characters based on their frequencies. Characters that occur more frequently are assigned shorter codes, while less frequent characters receive longer codes. + +Q2: How does Huffman Coding minimize the cost of merging sticks? +Huffman Coding minimizes the cost by always merging the two smallest sticks first. This strategy ensures that the larger lengths, which contribute more to the total cost, are added later in the process, thus reducing the overall merging cost. + +Q3: What role do binary trees play in Huffman Coding? +In Huffman Coding, a binary tree is used to represent the codes. Each leaf node represents a character, and the path from the root to the leaf determines the character's binary representation (0s and 1s). + +Q4: Can Huffman Coding guarantee the shortest binary representation? 
+Yes, Huffman Coding guarantees the shortest possible binary representation for a given set of characters and their frequencies, as long as the characters' bit representations do not prefix each other. + +Q5: What is the importance of prefix-free codes? +Prefix-free codes ensure that no code is a prefix of another. This property is essential for uniquely decodable encoding; it prevents ambiguity when decoding the compressed data back into its original form. diff --git a/public/Articles-FAQ/time-complexity-bfs-faq.txt b/public/Articles-FAQ/time-complexity-bfs-faq.txt new file mode 100644 index 0000000..40d91cf --- /dev/null +++ b/public/Articles-FAQ/time-complexity-bfs-faq.txt @@ -0,0 +1,14 @@ +Q1: How does DFS differ from BFS? +DFS uses a depth-first approach, diving deep into a branch before exploring another, while BFS explores nodes level by level, using a queue structure. + +Q2: What is the time complexity of DFS? +The time complexity of DFS is O(V + E), where V is the number of vertices and E is the number of edges in the graph. + +Q3: Can DFS be used to find connected components? +Yes, DFS can be used to find connected components in a graph by performing DFS on each unvisited node and marking all reachable nodes from that starting node. + +Q4: What are the practical applications of DFS? +DFS is used in topological sorting, finding strongly connected components, and solving maze problems. It’s also helpful in generating paths and in algorithms like finding bridges and articulation points. + +Q5: Does DFS guarantee the shortest path in an unweighted graph? +No, DFS does not guarantee the shortest path in an unweighted graph. BFS is used to find the shortest path in such cases because it explores nodes level by level.