24-24.3-4
24
24.3
24.3-4
docs/Chap24/24.3.md
Professor Gaedel has written a program that he claims implements Dijkstra's algorithm. The program produces $v.d$ and $v.\pi$ for each vertex $v \in V$. Give an $O(V + E)$-time algorithm to check the output of the professor's program. It should determine whether the $d$ and $\pi$ attributes match those of some shortest-paths tree. You may assume that all edge weights are nonnegative.
(Removed)
[]
false
[]
24-24.3-5
24
24.3
24.3-5
docs/Chap24/24.3.md
Professor Newman thinks that he has worked out a simpler proof of correctness for Dijkstra's algorithm. He claims that Dijkstra's algorithm relaxes the edges of every shortest path in the graph in the order in which they appear on the path, and therefore the path-relaxation property applies to every vertex reachable from the source. Show that the professor is mistaken by constructing a directed graph for which Dijkstra's algorithm could relax the edges of a shortest path out of order.
(Removed)
[]
false
[]
24-24.3-6
24
24.3
24.3-6
docs/Chap24/24.3.md
We are given a directed graph $G = (V, E)$ on which each edge $(u, v) \in E$ has an associated value $r(u, v)$, which is a real number in the range $0 \le r(u, v) \le 1$ that represents the reliability of a communication channel from vertex $u$ to vertex $v$. We interpret $r(u, v)$ as the probability that the channel from $u$ to $v$ will not fail, and we assume that these probabilities are independent. Give an efficient algorithm to find the most reliable path between two given vertices.
(Removed)
[]
false
[]
24-24.3-7
24
24.3
24.3-7
docs/Chap24/24.3.md
Let $G = (V, E)$ be a weighted, directed graph with positive weight function $w: E \rightarrow \\{1, 2, \ldots, W\\}$ for some positive integer $W$, and assume that no two vertices have the same shortest-path weights from source vertex $s$. Now suppose that we define an unweighted, directed graph $G' = (V \cup V', E')$ by replacing each edge $(u, v) \in E$ with $w(u, v)$ unit-weight edges in series. How many vertices does $G'$ have? Now suppose that we run a breadth-first search on $G'$. Show that the order in which the breadth-first search of $G'$ colors vertices in $V$ black is the same as the order in which Dijkstra's algorithm extracts the vertices of $V$ from the priority queue when it runs on $G$.
$G'$ has $V + \sum_{(u, v) \in E} w(u, v) - E$ vertices: each edge $(u, v) \in E$ is replaced by a path of $w(u, v)$ unit-weight edges, which introduces $w(u, v) - 1$ new intermediate vertices, so the total is $|V| + \sum_{(u, v) \in E} (w(u, v) - 1)$.
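As a sanity check, the construction and the BFS/Dijkstra ordering claim can be simulated directly. The sketch below is illustrative only: the example graph and the helper names (`expand`, `bfs_black_order`, `dijkstra_extract_order`) are our own, and the weights are chosen so that all shortest-path weights from $s$ are distinct, as the exercise assumes.

```python
import heapq
from collections import deque

def expand(graph):
    """Replace each weighted edge (u, v, w) by w unit edges in series.

    `graph` maps u -> {v: w}. Intermediate vertices are named by the
    edge they subdivide, as tuples (u, v, i), so original vertex names
    survive.
    """
    g2 = {}
    def add(a, b):
        g2.setdefault(a, []).append(b)
        g2.setdefault(b, [])
    for u, nbrs in graph.items():
        g2.setdefault(u, [])
        for v, w in nbrs.items():
            prev = u
            for i in range(1, w):          # w - 1 intermediate vertices
                add(prev, (u, v, i))
                prev = (u, v, i)
            add(prev, v)
    return g2

def bfs_black_order(g2, s, originals):
    """Order in which BFS colors the ORIGINAL vertices black
    (a vertex turns black when it is dequeued)."""
    seen, order, q = {s}, [], deque([s])
    while q:
        u = q.popleft()
        if u in originals:
            order.append(u)
        for v in g2[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return order

def dijkstra_extract_order(graph, s):
    """Order in which Dijkstra extracts vertices from the queue."""
    dist, order, pq = {}, [], [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in dist:
            continue
        dist[u] = d
        order.append(u)
        for v, w in graph.get(u, {}).items():
            if v not in dist:
                heapq.heappush(pq, (d + w, v))
    return order

graph = {'s': {'a': 3, 'b': 1}, 'a': {'c': 2}, 'b': {'a': 1, 'c': 5}, 'c': {}}
g2 = expand(graph)
total_w = sum(w for nbrs in graph.values() for w in nbrs.values())
assert len(g2) == 4 + total_w - 5          # |V| + sum of weights - |E|
assert bfs_black_order(g2, 's', set(graph)) == dijkstra_extract_order(graph, 's')
```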
[]
false
[]
24-24.3-8
24
24.3
24.3-8
docs/Chap24/24.3.md
Let $G = (V, E)$ be a weighted, directed graph with nonnegative weight function $w: E \rightarrow \\{0, 1, \ldots, W\\}$ for some nonnegative integer $W$. Modify Dijkstra's algorithm to compute the shortest paths from a given source vertex $s$ in $O(WV + E)$ time.
(Removed)
[]
false
[]
24-24.3-9
24
24.3
24.3-9
docs/Chap24/24.3.md
Modify your algorithm from Exercise 24.3-8 to run in $O((V + E) \lg W)$ time. ($\textit{Hint:}$ How many distinct shortest-path estimates can there be in $V - S$ at any point in time?)
(Removed)
[]
false
[]
24-24.3-10
24
24.3
24.3-10
docs/Chap24/24.3.md
Suppose that we are given a weighted, directed graph $G = (V, E)$ in which edges that leave the source vertex $s$ may have negative weights, all other edge weights are nonnegative, and there are no negative-weight cycles. Argue that Dijkstra's algorithm correctly finds shortest paths from $s$ in this graph.
The proof of correctness, Theorem 24.6, goes through exactly as stated in the text. The key fact was that $\delta(s, y) \le \delta(s, u)$. The text claims that this holds because there are no negative edge weights, but that assumption is stronger than necessary. The inequality holds whenever $y$ occurs on a shortest path from $s$ to $u$ and $y \ne s$, because all edges on the subpath from $y$ to $u$ have nonnegative weight: the only negative-weight edges leave $s$, so a negative-weight edge on that subpath would mean the path had returned to $s$, creating a cycle, and a cycle would appear on a shortest path only if it were a negative-weight cycle. Those are still forbidden.
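A small experiment illustrating the claim: on a graph whose only negative-weight edges leave $s$, an unmodified Dijkstra agrees with Bellman-Ford. This is a hypothetical sketch (the example graph is made up), not the exercise's proof.

```python
import heapq

def dijkstra(graph, s):
    """Textbook Dijkstra with a lazy-deletion heap; `graph` maps u -> {v: w}."""
    dist = {v: float('inf') for v in graph}
    dist[s] = 0
    pq, done = [(0, s)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def bellman_ford(graph, s):
    """Reference implementation: always correct without negative cycles."""
    dist = {v: float('inf') for v in graph}
    dist[s] = 0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u].items():
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    return dist

# Negative weights only on edges leaving the source s.
graph = {'s': {'a': -5, 'b': 2}, 'a': {'c': 1}, 'b': {'c': 3}, 'c': {}}
assert dijkstra(graph, 's') == bellman_ford(graph, 's')
```

Since $s$ is extracted first with $s.d = 0$, every later extraction sees only nonnegative edges, which is exactly why the standard proof survives.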
[]
false
[]
24-24.4-1
24
24.4
24.4-1
docs/Chap24/24.4.md
Find a feasible solution or determine that no feasible solution exists for the following system of difference constraints: $$ \begin{aligned} x_1 - x_2 & \le & 1, \\\\ x_1 - x_4 & \le & -4, \\\\ x_2 - x_3 & \le & 2, \\\\ x_2 - x_5 & \le & 7, \\\\ x_2 - x_6 & \le & 5, \\\\ x_3 - x_6 & \le & 10, \\\\ x_4 - x_2 & \le & 2, \\\\ x_5 - x_1 & \le & -1, \\\\ x_5 - x_4 & \le & 3, \\\\ x_6 - x_3 & \le & -8 \end{aligned} $$
Our vertices of the constraint graph will be $$\\{v_0, v_1, v_2, v_3, v_4, v_5, v_6\\}.$$ The edges will be $$(v_0, v_1), (v_0, v_2), (v_0, v_3), (v_0, v_4), (v_0, v_5), (v_0, v_6), (v_2, v_1), (v_4, v_1), (v_3, v_2), (v_5, v_2), (v_6, v_2), (v_6, v_3), (v_2, v_4), (v_1, v_5), (v_4, v_5), (v_3, v_6),$$ with edge weights $$0, 0, 0, 0, 0, 0, 1, -4, 2, 7, 5, 10, 2, -1, 3, -8$$ respectively. Then, computing $$(\delta(v_0, v_1), \delta(v_0, v_2), \delta(v_0, v_3), \delta(v_0, v_4), \delta(v_0, v_5), \delta(v_0, v_6)),$$ we get $$(-5, -3, 0, -1, -6, -8),$$ which is a feasible solution by Theorem 24.9.
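The construction can be automated: build the constraint graph, run Bellman-Ford from $v_0$, and read off the $\delta$ values. A minimal sketch (the function name and the encoding of each constraint $x_i - x_j \le b$ as a triple `(i, j, b)` are our own):

```python
def solve_difference_constraints(n, constraints):
    """Bellman-Ford on the constraint graph with an extra source v0.

    `constraints` is a list of (i, j, b) meaning x_i - x_j <= b, with
    1-based variable indices. Returns a feasible assignment
    [x_1, ..., x_n] or None if none exists.
    """
    INF = float('inf')
    # Vertex 0 is v0: an edge (v0, v_i) of weight 0 for every i, plus an
    # edge (v_j, v_i) of weight b for each constraint x_i - x_j <= b.
    edges = [(0, i, 0) for i in range(1, n + 1)]
    edges += [(j, i, b) for (i, j, b) in constraints]
    d = [INF] * (n + 1)
    d[0] = 0
    for _ in range(n):                     # |V| - 1 = n relaxation passes
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:
        if d[u] + w < d[v]:                # negative cycle: infeasible
            return None
    return d[1:]

cons = [(1, 2, 1), (1, 4, -4), (2, 3, 2), (2, 5, 7), (2, 6, 5),
        (3, 6, 10), (4, 2, 2), (5, 1, -1), (5, 4, 3), (6, 3, -8)]
x = solve_difference_constraints(6, cons)
assert x == [-5, -3, 0, -1, -6, -8]
assert all(x[i - 1] - x[j - 1] <= b for (i, j, b) in cons)
```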
[]
false
[]
24-24.4-2
24
24.4
24.4-2
docs/Chap24/24.4.md
Find a feasible solution or determine that no feasible solution exists for the following system of difference constraints: $$ \begin{aligned} x_1 - x_2 & \le &4, \\\\ x_1 - x_5 & \le &5, \\\\ x_2 - x_4 & \le &-6, \\\\ x_3 - x_2 & \le &1, \\\\ x_4 - x_1 & \le &3, \\\\ x_4 - x_3 & \le &5, \\\\ x_4 - x_5 & \le &10, \\\\ x_5 - x_3 & \le &-4, \\\\ x_5 - x_4 & \le &-8. \end{aligned} $$
There is no feasible solution because the constraint graph contains a negative-weight cycle: $(v_1, v_4, v_2, v_3, v_5, v_1)$, whose edges have weights $3, -6, 1, -4, 5$, has total weight $-1$.
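The infeasibility can be confirmed mechanically. The sketch below (function name ours) runs Bellman-Ford on the constraint graph and reports infeasibility when some edge can still be relaxed after $|V| - 1$ passes, which happens exactly when a negative-weight cycle exists.

```python
def feasible(n, constraints):
    """Return True iff the system {x_i - x_j <= b} has a solution, by
    checking the constraint graph for a negative-weight cycle.
    `constraints` is a list of (i, j, b) for x_i - x_j <= b."""
    edges = [(0, i, 0) for i in range(1, n + 1)]
    edges += [(j, i, b) for (i, j, b) in constraints]
    d = [0] * (n + 1)      # all-zero start is enough for cycle detection
    for _ in range(n):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return all(d[u] + w >= d[v] for u, v, w in edges)

cons = [(1, 2, 4), (1, 5, 5), (2, 4, -6), (3, 2, 1), (4, 1, 3),
        (4, 3, 5), (4, 5, 10), (5, 3, -4), (5, 4, -8)]
assert not feasible(5, cons)               # the system of 24.4-2
```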
[]
false
[]
24-24.4-3
24
24.4
24.4-3
docs/Chap24/24.4.md
Can any shortest-path weight from the new vertex $v_0$ in a constraint graph be positive? Explain.
No, it cannot be positive. For every vertex $v \ne v_0$ there is an edge $(v_0, v)$ with weight zero, so there is a path of weight zero from the new vertex to every other vertex. Since $\delta(v_0, v)$ is the minimum weight over all paths, it cannot be greater than the weight of this single-edge, weight-zero path.
[]
false
[]
24-24.4-4
24
24.4
24.4-4
docs/Chap24/24.4.md
Express the single-pair shortest-path problem as a linear program.
(Removed)
[]
false
[]
24-24.4-5
24
24.4
24.4-5
docs/Chap24/24.4.md
Show how to modify the Bellman-Ford algorithm slightly so that when we use it to solve a system of difference constraints with $m$ inequalities on $n$ unknowns, the running time is $O(nm)$.
We can follow the advice of exercise 24.4-7 and solve the system of constraints on a modified constraint graph in which there is no new vertex $v_0$. This is done simply by initializing every vertex to have a $d$ value of $0$ before running the iterated relaxations of Bellman-Ford. Since we do not add a new vertex or the $n$ edges going from it to the vertex corresponding to each variable, we are just running Bellman-Ford on a graph with $n$ vertices and $m$ edges, so the running time is $O(nm)$.
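The modification can be sketched as follows. This is a minimal illustration under our own encoding of constraints as `(i, j, b)` triples for $x_i - x_j \le b$; the all-zero initialization replaces the pass that would relax the $v_0$ edges, so $n - 1$ passes over the $m$ constraint edges suffice.

```python
def solve_constraints_no_v0(n, constraints):
    """Bellman-Ford variant without the auxiliary source v0: start every
    d value at 0 instead. O(n*m) time overall."""
    d = [0] * (n + 1)                       # d[1..n]; index 0 unused
    edges = [(j, i, b) for (i, j, b) in constraints]
    for _ in range(n - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return None                     # negative cycle: infeasible
    return d[1:]

x = solve_constraints_no_v0(3, [(1, 2, 2), (2, 3, -3), (1, 3, 1)])
assert x is not None
assert x[0] - x[1] <= 2 and x[1] - x[2] <= -3 and x[0] - x[2] <= 1
```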
[]
false
[]
24-24.4-6
24
24.4
24.4-6
docs/Chap24/24.4.md
Suppose that in addition to a system of difference constraints, we want to handle **_equality constraints_** of the form $x_i = x_j + b_k$. Show how to adapt the Bellman-Ford algorithm to solve this variety of constraint system.
To obtain the equality constraint $x_i = x_j + b_k$, we simply use the two inequalities $x_i - x_j \le b_k$ and $x_j - x_i \le -b_k$, then solve the problem as usual.
[]
false
[]
24-24.4-7
24
24.4
24.4-7
docs/Chap24/24.4.md
Show how to solve a system of difference constraints by a Bellman-Ford-like algorithm that runs on a constraint graph without the extra vertex $v_0$.
(Removed)
[]
false
[]
24-24.4-8
24
24.4
24.4-8 $\star$
docs/Chap24/24.4.md
Let $Ax \le b$ be a system of $m$ difference constraints in $n$ unknowns. Show that the Bellman-Ford algorithm, when run on the corresponding constraint graph, maximizes $\sum_{i = 1}^n x_i$ subject to $Ax \le b$ and $x_i \le 0$ for all $x_i$.
Bellman-Ford correctly solves the system of difference constraints, so $Ax \le b$ is always satisfied. We also have $x_i = \delta(v_0, v_i) \le w(v_0, v_i) = 0$, so $x_i \le 0$ for all $i$. To show that $\sum x_i$ is maximized, we show that for any feasible solution $(y_1, y_2, \ldots, y_n)$ which satisfies the constraints and has $y_i \le 0$ for all $i$, we have $y_i \le \delta(v_0, v_i) = x_i$. Let $v_0, v_{i_1}, \ldots, v_{i_k}$ be a shortest path from $v_0$ to $v_i$ in the constraint graph, so that $v_{i_k} = v_i$. Then we must have the constraints $y_{i_2} - y_{i_1} \le w(v_{i_1}, v_{i_2}), \ldots, y_{i_k} - y_{i_{k - 1}} \le w(v_{i_{k - 1}},v_{i_k})$. Summing these up, and using $y_{i_1} \le 0 = w(v_0, v_{i_1})$, we have $$y_i \le y_i - y_{i_1} \le \sum_{m = 2}^k w(v_{i_{m - 1}}, v_{i_m}) = \delta(v_0, v_i) = x_i.$$
[]
false
[]
24-24.4-9
24
24.4
24.4-9 $\star$
docs/Chap24/24.4.md
Show that the Bellman-Ford algorithm, when run on the constraint graph for a system $Ax \le b$ of difference constraints, minimizes the quantity $(\max\\{x_i\\} - \min\\{x_i\\})$ subject to $Ax \le b$. Explain how this fact might come in handy if the algorithm is used to schedule construction jobs.
We can see that the Bellman-Ford algorithm, run on the graph whose construction is described in this section, causes the quantity $\max\\{x_i\\} - \min\\{x_i\\}$ to be minimized. First, the largest value assigned to any vertex in the constraint graph is $0$. It is clearly not greater than zero, since the single-edge path to each vertex has cost zero. Nor can every vertex have a shortest path of negative weight: that would mean every vertex's predecessor pointer $\pi$ points to some vertex other than the source, so following the procedure for reconstructing the shortest path of any vertex could never get back to the source, contradicting the fact that it is a shortest path from the source to that vertex. With $\max\\{x_i\\}$ pinned at $0$, minimizing $\max\\{x_i\\} - \min\\{x_i\\}$ amounts to maximizing $\min\\{x_i\\}$, and Bellman-Ford does exactly that: the shortest-path weights are the largest values that satisfy all the constraints (by Exercise 24.4-8, every feasible solution with $x_i \le 0$ is dominated componentwise by the $\delta$ values), so increasing any value further would violate a constraint. This comes in handy when scheduling construction jobs because $\max\\{x_i\\} - \min\\{x_i\\}$ equals the difference in start time between the first task and the last task; minimizing it minimizes the span of the whole schedule, and most people want the entire process of construction to take as short a time as possible.
[]
false
[]
24-24.4-10
24
24.4
24.4-10
docs/Chap24/24.4.md
Suppose that every row in the matrix $A$ of a linear program $Ax \le b$ corresponds to a difference constraint, a single-variable constraint of the form $x_i \le b_k$, or a single-variable constraint of the form $-x_i \le b_k$. Show how to adapt the Bellman-Ford algorithm to solve this variety of constraint system.
(Removed)
[]
false
[]
24-24.4-11
24
24.4
24.4-11
docs/Chap24/24.4.md
Give an efficient algorithm to solve a system $Ax \le b$ of difference constraints when all of the elements of $b$ are real-valued and all of the unknowns $x_i$ must be integers.
To do this, just take the floor (the largest integer that is less than or equal to the value) of each of the $b$ values and solve the resulting integer difference-constraint problem. The modified constraints admit exactly the same set of assignments, since we required that the solution assign integer values to the variables: because the variables are integers, all of their differences are also integers, and an integer is less than or equal to a real number if and only if it is less than or equal to the floor of that real number.
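A minimal sketch of this reduction (function name and constraint encoding ours): floor each bound, then run ordinary Bellman-Ford on the constraint graph, which now has integer weights and therefore produces integer $\delta$ values.

```python
import math

def solve_integer_constraints(n, constraints):
    """x_i - x_j <= b with real b but integer unknowns: floor each b,
    then run ordinary Bellman-Ford on the constraint graph."""
    edges = [(0, i, 0) for i in range(1, n + 1)]
    edges += [(j, i, math.floor(b)) for (i, j, b) in constraints]
    d = [math.inf] * (n + 1)
    d[0] = 0
    for _ in range(n):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:
        if d[u] + w < d[v]:                # negative cycle: infeasible
            return None
    return d[1:]

cons = [(1, 2, 2.5), (2, 1, -0.3)]
x = solve_integer_constraints(2, cons)
assert x is not None
assert all(isinstance(v, int) for v in x)  # integer solution
assert all(x[i - 1] - x[j - 1] <= b for (i, j, b) in cons)
```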
[]
false
[]
24-24.4-12
24
24.4
24.4-12 $\star$
docs/Chap24/24.4.md
Give an efficient algorithm to solve a system $Ax \le b$ of difference constraints when all of the elements of $b$ are real-valued and a specified subset of some, but not necessarily all, of the unknowns $x_i$ must be integers.
To solve the problem $Ax \le b$ where the elements of $b$ are real-valued, we carry out the same procedure as before, running Bellman-Ford but allowing our edge weights to be real-valued. To impose the integer condition on the specified $x_i$'s, we modify the $\text{RELAX}$ procedure. Suppose we call $\text{RELAX}(v_i, v_j, w)$ where $v_j$ is required to be integer-valued. If $v_j.d > \lfloor v_i.d + w(v_i, v_j) \rfloor$, set $v_j.d = \lfloor v_i.d + w(v_i, v_j) \rfloor$. This guarantees that $v_j.d - v_i.d \le w(v_i, v_j)$, as desired, and it also ensures that $v_j.d$ is integer-valued. Since the triangle inequality still holds, $x = (v_1.d, v_2.d, \ldots, v_n.d)$ is a feasible solution for the system, provided that $G$ contains no negative-weight cycles.
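The modified $\text{RELAX}$ can be sketched as below. This is an illustration under our own encoding (constraints as `(i, j, b)` triples, `integer_vars` a set of 1-based indices), assuming the system is feasible; the extra relaxation pass is a conservative allowance for the rounding, not part of the original description.

```python
import math

def solve_mixed_constraints(n, constraints, integer_vars):
    """x_i - x_j <= b with real b; unknowns listed in `integer_vars`
    must take integer values. Bellman-Ford on the constraint graph,
    except that relaxing an edge whose head is an integer variable
    floors the new estimate. Assumes the system is feasible."""
    edges = [(0, i, 0.0) for i in range(1, n + 1)]
    edges += [(j, i, float(b)) for (i, j, b) in constraints]
    d = [math.inf] * (n + 1)
    d[0] = 0.0
    for _ in range(n + 1):                 # one extra pass for the rounding
        for u, v, w in edges:
            t = d[u] + w
            if v in integer_vars and math.isfinite(t):
                t = math.floor(t)          # force integrality of x_v
            if t < d[v]:
                d[v] = t
    return d[1:]

cons = [(1, 2, 1.5), (2, 3, -2.7), (3, 1, 2.0)]
x = solve_mixed_constraints(3, cons, integer_vars={2})
assert x[1] == math.floor(x[1])            # x_2 is integral
assert all(x[i - 1] - x[j - 1] <= b for (i, j, b) in cons)
```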
[]
false
[]
24-24.5-1
24
24.5
24.5-1
docs/Chap24/24.5.md
Give two shortest-paths trees for the directed graph of Figure 24.2 (on page 648) other than the two shown.
Since the induced shortest-paths trees on $\\{s, t, y\\}$ and on $\\{t, x, y, z\\}$ are independent and each has two possible configurations, there are four trees in total. The two not shown in the figure are the one consisting of the edges $\\{(s, t), (s, y), (y, x), (x, z)\\}$ and the one consisting of the edges $\\{(s, t), (t, y), (t, x), (y, z)\\}$.
[]
false
[]
24-24.5-2
24
24.5
24.5-2
docs/Chap24/24.5.md
Give an example of a weighted, directed graph $G = (V, E)$ with weight function $w: E \rightarrow \mathbb R$ and source vertex $s$ such that $G$ satisfies the following property: For every edge $(u, v) \in E$, there is a shortest-paths tree rooted at $s$ that contains $(u, v)$ and another shortest-paths tree rooted at $s$ that does not contain $(u, v)$.
Let $G$ have the three vertices $s$, $x$, and $y$, and the edges $(s, x)$, $(s, y)$, $(x, y)$, and $(y, x)$ with weights $1$, $1$, $0$, and $0$ respectively, so that $\delta(s, x) = \delta(s, y) = 1$. There are three shortest-paths trees rooted at $s$: $\\{(s, x), (s, y)\\}$, $\\{(s, x), (x, y)\\}$, and $\\{(s, y), (y, x)\\}$. Every edge of $G$ appears in at least one of these trees and is absent from at least one, so the required property holds for every edge.
[]
false
[]
24-24.5-3
24
24.5
24.5-3
docs/Chap24/24.5.md
Embellish the proof of Lemma 24.10 to handle cases in which shortest-path weights are $\infty$ or $-\infty$.
To modify Lemma 24.10 to allow for shortest-path weights of $\infty$ and $-\infty$, we need to define our addition so that $\infty + c = \infty$ and $-\infty + c = -\infty$. This makes the statement behave correctly; that is, we can still take the shortest path from $s$ to $u$ and tack the edge $(u, v)$ onto the end. In particular, if there is a negative-weight cycle on the way to $u$ and there is an edge from $u$ to $v$, then there is a negative-weight cycle on the way to $v$. Similarly, if we cannot reach $v$ and there is an edge from $u$ to $v$, then we cannot reach $u$.
[]
false
[]
24-24.5-4
24
24.5
24.5-4
docs/Chap24/24.5.md
Let $G = (V, E)$ be a weighted, directed graph with source vertex $s$, and let $G$ be initialized by $\text{INITIALIZE-SINGLE-SOURCE}(G, s)$. Prove that if a sequence of relaxation steps sets $s.\pi$ to a non-$\text{NIL}$ value, then $G$ contains a negative-weight cycle.
(Removed)
[]
false
[]
24-24.5-5
24
24.5
24.5-5
docs/Chap24/24.5.md
Let $G = (V, E)$ be a weighted, directed graph with no negative-weight edges. Let $s \in V$ be the source vertex, and suppose that we allow $v.\pi$ to be the predecessor of $v$ on _any_ shortest path to $v$ from source $s$ if $v \in V - \\{s\\}$ is reachable from $s$, and $\text{NIL}$ otherwise. Give an example of such a graph $G$ and an assignment of $\pi$ values that produces a cycle in $G_\pi$. (By Lemma 24.16, such an assignment cannot be produced by a sequence of relaxation steps.)
Suppose that we have a graph on three vertices $\\{s, u, v\\}$ containing the edges $(s, u), (s, v), (u, v), (v, u)$, all with weight $0$. Then there is a shortest path from $s$ to $v$, namely $s$, $u$, $v$, and a shortest path from $s$ to $u$, namely $s$, $v$, $u$. Based on these, we could set $v.\pi = u$ and $u.\pi = v$. This means that there is a cycle consisting of $u$ and $v$ in $G_\pi$.
[]
false
[]
24-24.5-6
24
24.5
24.5-6
docs/Chap24/24.5.md
Let $G = (V, E)$ be a weighted, directed graph with weight function $w: E \rightarrow \mathbb R$ and no negative-weight cycles. Let $s \in V$ be the source vertex, and let $G$ be initialized by $\text{INITIALIZE-SINGLE-SOURCE}(G, s)$. Prove that for every vertex $v \in V_\pi$, there exists a path from $s$ to $v$ in $G_\pi$ and that this property is maintained as an invariant over any sequence of relaxations.
We will prove this by induction on the number of relaxations performed. For the base case, we have just called $\text{INITIALIZE-SINGLE-SOURCE}(G, s)$. The only vertex in $V_\pi$ is $s$, and there is trivially a path from $s$ to itself. Now suppose that after any sequence of $n$ relaxations, for every vertex $v \in V_\pi$ there exists a path from $s$ to $v$ in $G_\pi$. Consider the $(n + 1)$st relaxation, and suppose it relaxes an edge $(u, v)$ with $v.d > u.d + w(u, v)$. The relaxation sets $v.\pi = u$. Since the relaxation applied, $u.d < \infty$, so $u$ is either $s$ or already in $V_\pi$; by the induction hypothesis, there is a path from $s$ to $u$ in $G_\pi$. Now $v$ is in $V_\pi$, and the path from $s$ to $u$ followed by the edge $(u, v) = (v.\pi, v)$ is a path from $s$ to $v$ in $G_\pi$, so the claim holds.
[]
false
[]
24-24.5-7
24
24.5
24.5-7
docs/Chap24/24.5.md
Let $G = (V, E)$ be a weighted, directed graph that contains no negative-weight cycles. Let $s \in V$ be the source vertex, and let $G$ be initialized by $\text{INITIALIZE-SINGLE-SOURCE}(G, s)$. Prove that there exists a sequence of $|V| - 1$ relaxation steps that produces $v.d = \delta(s, v)$ for all $v \in V$.
(Removed)
[]
false
[]
24-24.5-8
24
24.5
24.5-8
docs/Chap24/24.5.md
Let $G$ be an arbitrary weighted, directed graph with a negative-weight cycle reachable from the source vertex $s$. Show how to construct an infinite sequence of relaxations of the edges of $G$ such that every relaxation causes a shortest-path estimate to change.
(Removed)
[]
false
[]
24-24-1
24
24-1
24-1
docs/Chap24/Problems/24-1.md
Suppose that we order the edge relaxations in each pass of the Bellman-Ford algorithm as follows. Before the first pass, we assign an arbitrary linear order $v_1, v_2, \ldots, v_{|V|}$ to the vertices of the input graph $G = (V, E)$. Then, we partition the edge set $E$ into $E_f \cup E_b$, where $E_f = \\{(v_i, v_j) \in E: i < j\\}$ and $E_b = \\{(v_i, v_j) \in E: i > j\\}$. (Assume that $G$ contains no self-loops, so that every edge is in either $E_f$ or $E_b$.) Define $G_f = (V, E_f)$ and $G_b = (V, E_b)$. **a.** Prove that $G_f$ is acyclic with topological sort $\langle v_1, v_2, \ldots, v_{|V|} \rangle$ and that $G_b$ is acyclic with topological sort $\langle v_{|V|}, v_{|V| - 1}, \ldots, v_1 \rangle$. Suppose that we implement each pass of the Bellman-Ford algorithm in the following way. We visit each vertex in the order $v_1, v_2, \ldots, v_{|V|}$, relaxing edges of $E_f$ that leave the vertex. We then visit each vertex in the order $v_{|V|}, v_{|V| - 1}, \ldots, v_1$, relaxing edges of $E_b$ that leave the vertex. **b.** Prove that with this scheme, if $G$ contains no negative-weight cycles that are reachable from the source vertex $s$, then after only $\lceil |V| / 2 \rceil$ passes over the edges, $v.d = \delta(s, v)$ for all vertices $v \in V$. **c.** Does this scheme improve the asymptotic running time of the Bellman-Ford algorithm?
**a.** Since in $G_f$ edges only go from vertices with smaller index to vertices with greater index, there is no way to pick a vertex, keep increasing its index, and get back to the index we started with; hence $G_f$ is acyclic. Similarly, in $G_b$ there is no way to pick an index, keep decreasing it, and get back to the same vertex index, so $G_b$ is acyclic. Since $G_f$ only has edges going from lower indices to higher indices, $(v_1, \dots, v_{|V|})$ is a topological ordering of its vertices; similarly, $(v_{|V|}, \dots, v_1)$ is a topological ordering of $G_b$. **b.** Suppose that we are trying to find the shortest path from $s$ to $v$, and list out the vertices of this shortest path as $v_{k_1}, v_{k_2}, \dots, v_{k_m}$. The number of passes over the edges needed to discover this path is the number of times the sequence $\\{k_i\\}\_i$ switches between increasing and decreasing, because any increasing run of indices is captured in a single pass through $E_f$ and any decreasing run in a single pass through $E_b$. A sequence of at most $|V|$ integers can change direction at most $\lfloor |V| / 2 \rfloor$ times. We must allow one more pass for the case that the source appears later in the ordering of the vertices than $v_{k_2}$: each pass runs through $E_f$ before $E_b$, so it initially expects increasing vertex indices. This gives $\lceil |V| / 2 \rceil$ passes in total. **c.** It does not improve the asymptotic running time of Bellman-Ford; it only drops the leading coefficient of the number of passes from $1$ to roughly $\frac{1}{2}$. In both the original and the modified version, the running time is $O(VE)$.
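The forward/backward pass scheme can be sketched as follows. This is an illustrative implementation under our own conventions (vertices numbered $0$ to $n - 1$, the identity ordering, and the helper name `yen_bellman_ford`), assuming no negative-weight cycles reachable from $s$.

```python
def yen_bellman_ford(n, edges, s):
    """Bellman-Ford with the forward/backward pass ordering.

    `edges` is a list of (u, v, w) triples over vertices 0..n-1. Each
    pass relaxes E_f in increasing vertex order, then E_b in decreasing
    order; ceil(n / 2) passes suffice when no negative-weight cycle is
    reachable from s."""
    INF = float('inf')
    ef = sorted((u, v, w) for u, v, w in edges if u < v)           # E_f
    eb = sorted(((u, v, w) for u, v, w in edges if u > v),
                reverse=True)                                      # E_b
    d = [INF] * n
    d[s] = 0
    for _ in range((n + 1) // 2):          # ceil(n / 2) passes
        for u, v, w in ef + eb:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

edges = [(0, 1, 5), (1, 2, -2), (2, 3, 1), (3, 1, 10), (0, 3, 9)]
assert yen_bellman_ford(4, edges, 0) == [0, 5, 3, 4]
```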
[]
false
[]
24-24-2
24
24-2
24-2
docs/Chap24/Problems/24-2.md
A $d$-dimensional box with dimensions $(x_1, x_2, \ldots, x_d)$ **_nests_** within another box with dimensions $(y_1, y_2, \ldots, y_d)$ if there exists a permutation $\pi$ on $\\{1, 2, \ldots, d\\}$ such that $x_{\pi(1)} < y_1$, $x_{\pi(2)} < y_2$, $\ldots$, $x_{\pi(d)} < y_d$. **a.** Argue that the nesting relation is transitive. **b.** Describe an efficient method to determine whether or not one $d$-dimensional box nests inside another. **c.** Suppose that you are given a set of $n$ $d$-dimensional boxes $\\{B_1, B_2, \ldots, B_n\\}$. Give an efficient algorithm to find the longest sequence $\langle B_{i_1}, B_{i_2}, \ldots, B_{i_k} \rangle$ of boxes such that $B_{i_j}$ nests within $B_{i_{j + 1}}$ for $j = 1, 2, \ldots, k - 1$. Express the running time of your algorithm in terms of $n$ and $d$.
**a.** Suppose that box $x = (x_1, \dots, x_d)$ nests within box $y = (y_1, \dots, y_d)$ and box $y$ nests within box $z = (z_1, \dots, z_d)$. Then there exist permutations $\pi$ and $\sigma$ such that $x_{\pi(1)} < y_1, \dots, x_{\pi(d)} < y_d$ and $y_{\sigma(1)} < z_1, \dots, y_{\sigma(d)} < z_d$. This implies $x_{\pi(\sigma(1))} < z_1, \dots, x_{\pi(\sigma(d))} < z_d$, so $x$ nests within $z$ and the nesting relation is transitive. **b.** Box $x$ nests inside box $y$ if and only if the increasing sequence of dimensions of $x$ is component-wise strictly less than the increasing sequence of dimensions of $y$. Thus, it suffices to sort both sequences of dimensions and compare them. Sorting both length-$d$ sequences takes $O(d\lg d)$ time and comparing their elements takes $O(d)$ time, so the total time is $O(d\lg d)$. **c.** We create a nesting graph $G$ with vertices $B_1, \dots, B_n$ as follows. For each pair of boxes $B_i$, $B_j$, we decide whether one nests inside the other. If $B_i$ nests in $B_j$, draw an arrow from $B_i$ to $B_j$; if $B_j$ nests in $B_i$, draw an arrow from $B_j$ to $B_i$; if neither nests, draw no arrow. To determine the arrows efficiently, after sorting each list of dimensions in $O(nd\lg d)$ total time we compare all pairs of boxes using the algorithm from part (b) in $O(n^2 d)$ time. By part (a), the resulting graph is acyclic, which allows us to find the longest chain in it in $O(n^2)$ time in a bottom-up manner. This chain is our answer. Thus, the total time is $O(nd\max(\lg d, n))$.
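Parts (b) and (c) can be sketched together. The function names are ours; note that if $x$ nests in $y$ then the sum of $x$'s dimensions is strictly smaller than $y$'s, so sorting boxes by total size yields a valid processing order for the longest-path dynamic program on the nesting DAG.

```python
def nests(x, y):
    """Does box x nest inside box y? Sort both dimension tuples and
    compare componentwise (part (b)), O(d lg d)."""
    return all(a < b for a, b in zip(sorted(x), sorted(y)))

def longest_nesting_chain(boxes):
    """Length of the longest sequence of boxes, each nesting in the
    next (part (c)): a longest-path DP over the nesting DAG, O(n^2 d)
    after sorting dimensions."""
    order = sorted(range(len(boxes)), key=lambda i: sum(boxes[i]))
    best = {i: 1 for i in order}           # chain length ending at box i
    for idx, i in enumerate(order):
        for j in order[:idx]:
            if nests(boxes[j], boxes[i]):
                best[i] = max(best[i], best[j] + 1)
    return max(best.values(), default=0)

boxes = [(1, 2, 3), (2, 3, 4), (5, 5, 5), (1, 1, 6)]
assert nests((1, 2, 3), (2, 3, 4))
assert longest_nesting_chain(boxes) == 3   # (1,2,3) -> (2,3,4) -> (5,5,5)
```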
[]
false
[]
24-24-3
24
24-3
24-3
docs/Chap24/Problems/24-3.md
**_Arbitrage_** is the use of discrepancies in currency exchange rates to transform one unit of a currency into more than one unit of the same currency. For example, suppose that $1$ U.S. dollar buys $49$ Indian rupees, $1$ Indian rupee buys $2$ Japanese yen, and $1$ Japanese yen buys $0.0107$ U.S. dollars. Then, by converting currencies, a trader can start with $1$ U.S. dollar and buy $49 \times 2 \times 0.0107 = 1.0486$ U.S. dollars, thus turning a profit of $4.86$ percent. Suppose that we are given $n$ currencies $c_1, c_2, \ldots, c_n$ and an $n \times n$ table $R$ of exchange rates, such that one unit of currency $c_i$ buys $R[i, j]$ units of currency $c_j$. **a.** Give an efficient algorithm to determine whether or not there exists a sequence of currencies $\langle c_{i_1}, c_{i_2}, \ldots, c_{i_k} \rangle$ such that $$R[i_1, i_2] \cdot R[i_2, i_3] \cdots R[i_{k - 1}, i_k] \cdot R[i_k, i_1] > 1.$$ Analyze the running time of your algorithm. **b.** Give an efficient algorithm to print out such a sequence if one exists. Analyze the running time of your algorithm.
**a.** Take the negative of the natural logarithm (any other base also works) of each exchange rate $R[i, j]$ and use these values as the edge weights between the currencies. Then, detect the presence or absence of a negative-weight cycle by applying Bellman-Ford. To see that the existence of an arbitrage situation is equivalent to there being a negative-weight cycle in the original graph, consider the following sequence of steps: $$ \begin{aligned} R[i_1, i_2] \cdot R[i_2, i_3] \cdot \cdots \cdot R[i_k, i_1] & > 1 \\\\ \ln(R[i_1, i_2]) + \ln(R[i_2, i_3]) + \cdots + \ln(R[i_k, i_1]) & > 0 \\\\ -\ln(R[i_1, i_2]) - \ln(R[i_2, i_3]) - \cdots - \ln(R[i_k, i_1]) & < 0. \end{aligned} $$ Since the currency graph is complete, $|V| = n$ and $|E| = n^2$, so Bellman-Ford runs in $O(VE) = O(n^3)$ time. **b.** First perform the same modification of all the edge weights as in part (a); we then wish to find a negative-weight cycle. Relax all the edges $|V| - 1$ times, as in the Bellman-Ford algorithm, and record the $d$ values of the vertices. Then relax all the edges $|V|$ more times and check which vertices had their $d$ value decrease since it was recorded. All of these vertices must lie on some (possibly disjoint) set of negative-weight cycles; call this set of vertices $S$. To find one of these cycles in particular, pick any vertex in $S$ and greedily keep moving to any vertex in $S$ that it has an edge to, keeping an eye out for a repeated vertex; the repeat closes the desired cycle. We never reach a dead end in this process because the set $S$ consists of vertices in some union of cycles, so every vertex in $S$ has out-degree at least $1$. The running time is again dominated by the $O(n^3)$ relaxation passes.
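The detection step of part (a) can be sketched as follows. The function name, the matrix encoding, and the small floating-point tolerance (to absorb rounding in the logarithms) are our own; the rate tables below are made-up examples, the first containing the chapter's USD/INR/JPY arbitrage.

```python
import math

def has_arbitrage(rates):
    """Detect an arbitrage cycle: use weights -ln(R[i][j]) and look for
    a negative-weight cycle with Bellman-Ford. `rates` is an n x n
    matrix of positive exchange rates with rates[i][i] == 1."""
    n = len(rates)
    w = [[-math.log(rates[i][j]) for j in range(n)] for i in range(n)]
    d = [0.0] * n                  # all-zero start detects cycles anywhere
    for _ in range(n - 1):
        for i in range(n):
            for j in range(n):
                if d[i] + w[i][j] < d[j]:
                    d[j] = d[i] + w[i][j]
    # An edge that is still relaxable signals a negative-weight cycle.
    return any(d[i] + w[i][j] < d[j] - 1e-12
               for i in range(n) for j in range(n))

# 1 USD -> 49 INR -> 98 JPY -> 1.0486 USD: an arbitrage opportunity.
rates_bad = [[1, 49, 98], [1 / 49, 1, 2], [0.0107, 1 / 98, 1]]
# Perfectly consistent rates: every cycle multiplies out to exactly 1.
rates_ok = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
assert has_arbitrage(rates_bad)
assert not has_arbitrage(rates_ok)
```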
[]
false
[]
24-24-4
24
24-4
24-4
docs/Chap24/Problems/24-4.md
A **_scaling_** algorithm solves a problem by initially considering only the highest-order bit of each relevant input value (such as an edge weight). It then refines the initial solution by looking at the two highest-order bits. It progressively looks at more and more high-order bits, refining the solution each time, until it has examined all bits and computed the correct solution. In this problem, we examine an algorithm for computing the shortest paths from a single source by scaling edge weights. We are given a directed graph $G = (V, E)$ with nonnegative integer edge weights $w$. Let $W = \max_{(u, v) \in E} \\{w(u, v)\\}$. Our goal is to develop an algorithm that runs in $O(E\lg W)$ time. We assume that all vertices are reachable from the source. The algorithm uncovers the bits in the binary representation of the edge weights one at a time, from the most significant bit to the least significant bit. Specifically, let $k = \lceil \lg(W + 1) \rceil$ be the number of bits in the binary representation of $W$, and for $i = 1, 2, \ldots, k$, let $w_i(u, v) = \lfloor w(u, v) / 2^{k - i} \rfloor$. That is, $w_i(u, v)$ is the "scaled-down" version of $w(u, v)$ given by the $i$ most significant bits of $w(u, v)$. (Thus, $w_k(u, v) = w(u, v)$ for all $(u, v) \in E$.) For example, if $k = 5$ and $w(u, v) = 25$, which has the binary representation $\langle 11001 \rangle$, then $w_3(u, v) = \langle 110 \rangle = 6$. As another example with $k = 5$, if $w(u, v) = \langle 00100 \rangle = 4$, then $w_3(u, v) = \langle 001 \rangle = 1$. Let us define $\delta_i(u, v)$ as the shortest-path weight from vertex $u$ to vertex $v$ using weight function $w_i$. Thus, $\delta_k(u, v) = \delta(u, v)$ for all $u, v \in V$. For a given source vertex $s$, the scaling algorithm first computes the shortest-path weights $\delta_1(s, v)$ for all $v \in V$, then computes $\delta_2(s, v)$ for all $v \in V$, and so on, until it computes $\delta_k(s, v)$ for all $v \in V$. 
We assume throughout that $|E| \ge |V| - 1$, and we shall see that computing $\delta_i$ from $\delta_{i - 1}$ takes $O(E)$ time, so that the entire algorithm takes $O(kE) = O(E\lg W)$ time. **a.** Suppose that for all vertices $v \in V$, we have $\delta(s, v) \le |E|$. Show that we can compute $\delta(s, v)$ for all $v \in V$ in $O(E)$ time. **b.** Show that we can compute $\delta_1(s, v)$ for all $v \in V$ in $O(E)$ time. Let us now focus on computing $\delta_i$ from $\delta_{i - 1}$. **c.** Prove that for $i = 2, 3, \ldots, k$, we have either $w_i(u, v) = 2w_{i - 1}(u, v)$ or $w_i(u, v) = 2w_{i - 1}(u, v) + 1$. Then, prove that $$2\delta_{i - 1}(s, v) \le \delta_i(s, v) \le 2\delta_{i - 1}(s, v) + |V| - 1$$ for all $v \in V$. **d.** Define for $i = 2, 3, \ldots, k$ and all $(u, v) \in E$, $$\hat w_i(u, v) = w_i(u, v) + 2\delta_{i - 1}(s, u) - 2\delta_{i - 1}(s, v).$$ Prove that for $i = 2, 3, \ldots, k$ and all $u, v \in V$, the "reweighted" value $\hat w_i(u, v)$ of edge $(u, v)$ is a nonnegative integer. **e.** Now, define $\hat\delta_i(s, v)$ as the shortest-path weight from $s$ to $v$ using the weight function $\hat w_i$. Prove that for $i = 2, 3, \ldots, k$ and all $v \in V$, $$\delta_i(s, v) = \hat\delta_i(s, v) + 2\delta_{i - 1}(s, v)$$ and that $\hat\delta_i(s, v) \le |E|$. **f.** Show how to compute $\delta_i(s, v)$ from $\delta_{i - 1}(s, v)$ for all $v \in V$ in $O(E)$ time, and conclude that we can compute $\delta(s, v)$ for all $v \in V$ in $O(E\lg W)$ time.
**a.** We can do this in $O(E)$ time by the algorithm described in exercise 24.3-8, since our "priority queue" takes on only integer values and is bounded in size by $E$. **b.** We can do this in $O(E)$ time by the algorithm described in exercise 24.3-8, since $w_1$ takes values in $\\{0, 1\\}$ and therefore $\delta_1(s, v) \le |V| - 1 \le |E|$ for every $v$. **c.** If the $i$th digit, read from left to right, of $w(u, v)$ is $0$, then $w_i(u, v) = 2w_{i - 1}(u, v)$. If it is a $1$, then $w_i(u, v) = 2w_{i - 1}(u, v) + 1$. For the upper bound, let $s = v_0, v_1, \dots, v_n = v$ be a shortest path from $s$ to $v$ under $w_{i - 1}$, so that $n \le |V| - 1$. Then $$ \begin{aligned} \delta_i(s, v) & \le \sum_{m = 1}^n w_i(v_{m - 1}, v_m) \\\\ & \le \sum_{m = 1}^n [2w_{i - 1}(v_{m - 1}, v_m) + 1] \\\\ & \le 2\delta_{i - 1}(s, v) + |V| - 1. \end{aligned} $$ For the lower bound, let $s = v_0, v_1, \dots, v_n = v$ instead be a shortest path under $w_i$. Then $$ \begin{aligned} \delta_i(s, v) & = \sum_{m = 1}^n w_i(v_{m - 1}, v_m) \\\\ & \ge \sum_{m = 1}^n 2w_{i - 1}(v_{m - 1}, v_m) \\\\ & \ge 2\delta_{i - 1}(s, v). \end{aligned} $$ **d.** Every quantity in the definition of $\hat w_i$ is an integer, so $\hat w_i$ is clearly an integer. Since $w_i(u, v) \ge 2w_{i - 1}(u, v)$, it suffices to show that $w_{i - 1}(u, v) + \delta_{i - 1}(s, u) \ge \delta_{i - 1}(s, v)$ to prove nonnegativity. This follows immediately from the triangle inequality. **e.** First note that $s = v_0, v_1, \dots, v_n = v$ is a shortest path from $s$ to $v$ with respect to $\hat w_i$ if and only if it is a shortest path with respect to $w_i$, since reweighting changes the weight of every path from $s$ to $v$ by the same amount, $-2\delta_{i - 1}(s, v)$. Then we have $$ \begin{aligned} \hat\delta_i(s, v) & = \sum_{m = 1}^n [w_i(v_{m - 1}, v_m) + 2\delta_{i - 1}(s, v_{m - 1}) - 2\delta_{i - 1}(s, v_m)] \\\\ & = \sum_{m = 1}^n w_i(v_{m - 1}, v_m) - 2\delta_{i - 1}(s, v) \\\\ & = \delta_i(s, v) - 2\delta_{i - 1}(s, v), \end{aligned} $$ where the second line follows because the $\delta_{i - 1}$ terms telescope and $\delta_{i - 1}(s, v_0) = \delta_{i - 1}(s, s) = 0$. Combining this with the bound $\delta_i(s, v) \le 2\delta_{i - 1}(s, v) + |V| - 1$ from part (c) gives $\hat\delta_i(s, v) \le |V| - 1 \le |E|$. **f.** By parts (d) and (e), $\hat w_i$ is a nonnegative integer weight function with $\hat\delta_i(s, v) \le |E|$, so by part (a) we can compute $\hat\delta_i(s, v)$ for all $v \in V$ in $O(E)$ time and then recover $\delta_i(s, v) = \hat\delta_i(s, v) + 2\delta_{i - 1}(s, v)$. 
If we have already computed $\delta_{i - 1}$, then we can compute $\delta_i$ in $O(E)$ time. Since we can compute $\delta_1$ in $O(E)$ time by part (b), we can compute $\delta_i$ from scratch in $O(iE)$ time. Thus, we can compute $\delta = \delta_k$ in $O(kE) = O(E\lg W)$ time.
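The bucket-based Dijkstra of Exercise 24.3-8 (often called Dial's algorithm), which parts (a) and (f) rely on, can be sketched as follows in Python. The adjacency-list encoding and the name `bounded_dijkstra` are our own; we assume nonnegative integer weights and that every finite shortest-path weight is at most `bound`:

```python
def bounded_dijkstra(n, adj, s, bound):
    """Dijkstra with an array of buckets as the priority queue.

    adj[u] is a list of (v, w) pairs with integer w >= 0.  Under the
    assumption that every finite shortest-path weight is <= bound,
    scanning the bound + 1 buckets left to right costs O(E + bound).
    """
    INF = float('inf')
    d = [INF] * n
    d[s] = 0
    buckets = [[] for _ in range(bound + 1)]
    buckets[0].append(s)
    for k in range(bound + 1):
        for u in buckets[k]:
            if d[u] != k:      # stale entry: u was settled at a smaller key
                continue
            for v, w in adj[u]:
                if k + w <= bound and k + w < d[v]:
                    d[v] = k + w
                    buckets[k + w].append(v)
    return d
```

Taking `bound = len(edges)` gives the $O(E)$ running time claimed in part (a).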
[]
false
[]
24-24-5
24
24-5
24-5
docs/Chap24/Problems/24-5.md
Let $G = (V, E)$ be a directed graph with weight function $w: E \to \mathbb R$, and let $n = |V|$. We define the **_mean weight_** of a cycle $c = \langle e_1, e_2, \ldots, e_k \rangle$ of edges in $E$ to be $$\mu(c) = \frac{1}{k} \sum_{i = 1}^k w(e_i).$$ Let $\mu^\* = \min_c \mu\(c\)$, where $c$ ranges over all directed cycles in $G$. We call a cycle $c$ for which $\mu\(c\) = \mu^\*$ a **_minimum mean-weight cycle_**. This problem investigates an efficient algorithm for computing $\mu^\*$. Assume without loss of generality that every vertex $v \in V$ is reachable from a source vertex $s \in V$. Let $\delta(s, v)$ be the weight of a shortest path from $s$ to $v$, and let $\delta_k(s, v)$ be the weight of a shortest path from $s$ to $v$ consisting of _exactly_ $k$ edges. If there is no path from $s$ to $v$ with exactly $k$ edges, then $\delta_k(s, v) = \infty$. **a.** Show that if $\mu^\* = 0$, then $G$ contains no negative-weight cycles and $\delta(s, v) = \min_{0 \le k \le n - 1} \delta_k(s, v)$ for all vertices $v \in V$. **b.** Show that if $\mu^\* = 0$, then $$\max_{0 \le k \le n - 1} \frac{\delta_n(s, v) - \delta_k(s, v)}{n - k} \ge 0$$ for all vertices $v \in V$. ($\textit{Hint:}$ Use both properties from part (a).) **c.** Let $c$ be a $0$-weight cycle, and let $u$ and $v$ be any two vertices on $c$. Suppose that $\mu^\* = 0$ and that the weight of the simple path from $u$ to $v$ along the cycle is $x$. Prove that $\delta(s, v) = \delta(s, u) + x$. ($\textit{Hint:}$ The weight of the simple path from $v$ to $u$ along the cycle is $-x$.) **d.** Show that if $\mu^\* = 0$, then on each minimum mean-weight cycle there exists a vertex $v$ such that $$\max_{0 \le k \le n - 1} \frac{\delta_n(s, v) - \delta_k(s, v)}{n - k} = 0.$$ ($\textit{Hint:}$ Show how to extend a shortest path to any vertex on a minimum meanweight cycle along the cycle to make a shortest path to the next vertex on the cycle.) 
**e.** Show that if $\mu^\* = 0$, then $$\min_{v \in V} \max_{0 \le k \le n - 1} \frac{\delta_n(s, v) - \delta_k(s, v)}{n - k} = 0.$$ **f.** Show that if we add a constant $t$ to the weight of each edge of $G$, then $\mu^\*$ increases by $t$. Use this fact to show that $$\mu^* = \min_{v \in V} \max_{0 \le k \le n - 1} \frac{\delta_n(s, v) - \delta_k(s, v)}{n - k}.$$ **g.** Give an $O(VE)$-time algorithm to compute $\mu^\*$.
**a.** If $\mu^\* = 0$, then the smallest value of $\frac{1}{k} \sum_{i = 1}^k w(e_i)$ over all cycles is zero, so the smallest total weight $\sum_{i = 1}^k w(e_i)$ of any cycle is zero. This means that no cycle can have negative weight. Also, for any path from $s$ to $v$, we can make it simple by removing any cycles that occur; since no cycle has negative weight, this cannot increase the path's weight, so the path weighs at least as much as some path with at most $n - 1$ edges. Since we take the minimum over all numbers of edges from $0$ to $n - 1$, we obtain the minimum over all paths. **b.** Since every denominator $n - k$ is positive, to show that $$\max_{0 \le k \le n - 1} \frac{\delta_n(s, v) - \delta_k(s, v)}{n - k} \ge 0,$$ it suffices to show that $$\max_{0 \le k \le n - 1} [\delta_n(s, v) - \delta_k(s, v)] \ge 0.$$ Since we have that $\mu^\* = 0$, there aren't any negative-weight cycles. This means that the minimum cost of a path cannot decrease when we allow the path length to grow past $n - 1$. So, by part (a), there is a path with some number of edges $k \le n - 1$ that at least ties the cheapest path with exactly $n$ edges, and for that $k$ the numerator $\delta_n(s, v) - \delta_k(s, v)$ is nonnegative. Note that there may also be a cheapest path of longer length, since $\mu^\* = 0$ guarantees that zero-weight cycles exist; however, this isn't guaranteed for every $v$, since a zero-weight cycle may not lie along a cheapest path from $s$ to $v$. **c.** Since the total weight of the cycle is $0$ and the part from $u$ to $v$ has weight $x$, the rest of the cycle, from $v$ back to $u$, must have weight $-x$. So, given a shortest path from $s$ to $u$, we can traverse the cycle from $u$ to $v$ to get a path from $s$ to $v$ of weight $\delta(s, u) + x$; this gets us $\delta(s, v) \le \delta(s, u) + x$. To see the converse inequality, take a shortest path from $s$ to $v$ and traverse the cycle from $v$ to $u$; we already said that this part of the cycle has total weight $-x$. This gets us $\delta(s, u) \le \delta(s, v) - x$, or, rearranging, $\delta(s, u) + x \le \delta(s, v)$. 
Since we have inequalities both ways, we must have equality. **d.** We find a vertex $v$ such that $\delta_n(s, v) - \delta_k(s, v) = 0$ for some $k \le n - 1$, while $\delta_n(s, v) - \delta_k(s, v) \le 0$ for every $k$. Take any shortest path from $s$ to any vertex on the cycle, and then keep walking around the cycle until exactly $n$ edges have been traversed in total; whatever vertex we end up on at that point is our $v$. By part (c), extending a shortest path to a vertex on the cycle by one cycle edge yields a shortest path to the next vertex, so the $n$-edge walk to $v$ is itself a shortest path, i.e., $\delta_n(s, v) = \delta(s, v)$. By part (a) there is some $k \le n - 1$ with $\delta_k(s, v) = \delta(s, v)$, so $\delta_n(s, v) = \delta_k(s, v)$ for that $k$, and $\delta_n(s, v) \le \delta_k(s, v)$ for every other $k$; hence the maximum of the fractions is $0$. **e.** This is an immediate result of parts (b) and (d). Part (b) says that the inequality holds for all $v$, so we have $$\min_{v \in V} \max_{0 \le k \le n - 1} \frac{\delta_n(s, v) - \delta_k(s, v)}{n - k} \ge 0.$$ Part (d) says that there is some $v$ on each minimum mean-weight cycle so that $$\max_{0 \le k \le n - 1} \frac{\delta_n(s, v) - \delta_k(s, v)}{n - k} = 0,$$ which means that $$\min_{v \in V} \max_{0 \le k \le n - 1} \frac{\delta_n(s, v) - \delta_k(s, v)}{n - k} \le 0.$$ Putting the two inequalities together, we have the desired equality. **f.** If we add $t$ to the weight of each edge, the mean weight of any cycle becomes $$\mu(c) = \frac{1}{k} \sum_{i = 1}^k (w(e_i) + t) = \frac{1}{k} \Big(\sum_{i = 1}^k w(e_i) \Big) + \frac{kt}{k} = \frac{1}{k} \Big(\sum_{i = 1}^k w(e_i) \Big) + t.$$ That is, the mean weight of every cycle increases by exactly $t$, so the minimum mean-weight cycle stays the minimum mean-weight cycle, and $\mu^\*$ increases by $t$. Suppose that we first compute $\mu^\*$. Then, we subtract from every edge weight the value $\mu^\*$. 
This will make the new $\mu^\*$ equal zero, which by part (e) means that $$\min_{v \in V} \max_{0 \le k \le n - 1} \frac{\delta_n(s, v) - \delta_k(s, v)}{n - k} = 0$$ for the modified weights. Subtracting $\mu^\*$ from every edge weight decreases each $\delta_k(s, v)$ by exactly $k\mu^\*$, so each term $\frac{\delta_n(s, v) - \delta_k(s, v)}{n - k}$ decreases by $\frac{(n - k)\mu^\*}{n - k} = \mu^\*$; hence the original expression equals $\mu^\*$, as desired. **g.** By part (f), it suffices to compute the expression on the previous line. We start by building a table that lists $\delta_k(s, v)$ for every $k \in \\{0, 1, \ldots, n\\}$ and $v \in V$: an $(n + 1) \times |V|$ table whose entry in row $k$ and column $v$ is $\delta_k(s, v)$. To compute a particular entry, we look at one entry in the previous row for each incoming edge of $v$, so filling in one row takes $O(E + V)$ time and the whole table takes $O(V(E + V))$ time. This total runtime can be reduced to $O(VE)$ by excluding isolated vertices from the table, which ensures that $E \in \Omega(V)$, so $O(V(E + V))$ becomes $O(VE)$. Once the table is computed, we replace each entry $\delta_k(s, v)$ by $\delta_n(s, v) - \delta_k(s, v)$, divide each entry by $n - k$, take the maximum over $k$ within each column, and finally take the minimum of those maxima over the columns.
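The procedure of part (g) — this is Karp's minimum mean-weight cycle algorithm — can be sketched in Python as follows. The edge-list encoding and the name are our own, and for clarity we keep the whole table of $\delta_k(s, v)$ values rather than computing it row by row:

```python
def min_mean_cycle(n, edges, s=0):
    """Return mu* = min over directed cycles of the mean edge weight,
    via mu* = min_v max_k (delta_n(v) - delta_k(v)) / (n - k).

    edges is a list of (u, v, w) triples; every vertex is assumed
    reachable from s.  Returns None if the graph is acyclic.
    """
    INF = float('inf')
    # d[k][v] = weight of a shortest s-to-v path with exactly k edges
    d = [[INF] * n for _ in range(n + 1)]
    d[0][s] = 0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] == INF:
            continue   # no n-edge path ends here
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = min(best, worst)
    return None if best == INF else best
```

Filling the table dominates the running time: $n$ rows at $O(E)$ each gives the $O(VE)$ bound.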
[]
false
[]
24-24-6
24
24-6
24-6
docs/Chap24/Problems/24-6.md
A sequence is **_bitonic_** if it monotonically increases and then monotonically decreases, or if by a circular shift it monotonically increases and then monotonically decreases. For example the sequences $\langle 1, 4, 6, 8, 3, -2 \rangle$, $\langle 9, 2, -4, -10, -5 \rangle$, and $\langle 1, 2, 3, 4 \rangle$ are bitonic, but $\langle 1, 3, 12, 4, 2, 10 \rangle$ is not bitonic. (See Problem 15-3 for the bitonic euclidean traveling-salesman problem.) Suppose that we are given a directed graph $G = (V, E)$ with weight function $w: E \to \mathbb R$, where all edge weights are unique, and we wish to find single-source shortest paths from a source vertex $s$. We are given one additional piece of information: for each vertex $v \in V$, the weights of the edges along any shortest path from $s$ to $v$ form a bitonic sequence. Give the most efficient algorithm you can to solve this problem, and analyze its running time.
We'll use the Bellman-Ford algorithm, but with a careful choice of the order in which we relax the edges, in order to perform a smaller number of $\text{RELAX}$ operations. In any bitonic path there can be at most two distinct maximal increasing runs of edge weights, and similarly at most two distinct maximal decreasing runs. Thus, by the path-relaxation property, if we relax the edges in order of increasing weight and then of decreasing weight, twice over (relaxing every edge four times in total), then we are guaranteed that $v.d$ will equal $\delta(s, v)$ for all $v \in V$. Sorting the edges takes $O(E\lg E)$ time. We relax every edge four times, taking $O(E)$ time. Thus the total runtime is $O(E\lg E) + O(E) = O(E\lg E)$, which is asymptotically faster than the usual $O(VE)$ runtime of Bellman-Ford.
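This relaxation schedule can be sketched in Python (the encoding and name are ours; `edges` is a list of $(u, v, w)$ triples with distinct weights, and shortest paths from `s` are assumed bitonic):

```python
def bitonic_sssp(n, edges, s):
    """Bellman-Ford with a fixed relaxation order: all edges by
    increasing weight, then decreasing, then increasing, then
    decreasing.  Four ordered passes cover the monotone runs of any
    bitonic shortest path (at most two increasing, two decreasing)."""
    INF = float('inf')
    d = [INF] * n
    d[s] = 0
    inc = sorted(edges, key=lambda e: e[2])
    dec = inc[::-1]
    for batch in (inc, dec, inc, dec):
        for u, v, w in batch:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d
```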
[]
false
[]
25-25.1-1
25
25.1
25.1-1
docs/Chap25/25.1.md
Run $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS}$ on the weighted, directed graph of Figure 25.2, showing the matrices that result for each iteration of the loop. Then do the same for $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}$.
- Initial: $$ \begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\\\ 1 & 0 & \infty & 2 & \infty & \infty \\\\ \infty & 2 & 0 & \infty & \infty & -8 \\\\ -4 & \infty & \infty & 0 & 3 & \infty \\\\ \infty & 7 & \infty & \infty & 0 & \infty \\\\ \infty & 5 & 10 & \infty & \infty & 0 \end{pmatrix} $$ - Slow: $m = 2$: $$ \begin{pmatrix} 0 & 6 & \infty & \infty & -1 & \infty \\\\ -2 & 0 & \infty & 2 & 0 & \infty \\\\ 3 & -3 & 0 & 4 & \infty & -8 \\\\ -4 & 10 & \infty & 0 & -5 & \infty \\\\ 8 & 7 & \infty & 9 & 0 & \infty \\\\ 6 & 5 & 10 & 7 & \infty & 0 \end{pmatrix} $$ $m = 3$: $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\\\ -2 & 0 & \infty & 2 & -3 & \infty \\\\ -2 & -3 & 0 & -1 & 2 & -8 \\\\ -4 & 2 & \infty & 0 & -5 & \infty \\\\ 5 & 7 & \infty & 9 & 0 & \infty \\\\ 3 & 5 & 10 & 7 & 5 & 0 \end{pmatrix} $$ $m = 4$: $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\\\ -2 & 0 & \infty & 2 & -3 & \infty \\\\ -5 & -3 & 0 & -1 & -3 & -8 \\\\ -4 & 2 & \infty & 0 & -5 & \infty \\\\ 5 & 7 & \infty & 9 & 0 & \infty \\\\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$ $m = 5$: $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\\\ -2 & 0 & \infty & 2 & -3 & \infty \\\\ -5 & -3 & 0 & -1 & -6 & -8 \\\\ -4 & 2 & \infty & 0 & -5 & \infty \\\\ 5 & 7 & \infty & 9 & 0 & \infty \\\\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$ - Fast: $m = 2$: $$ \begin{pmatrix} 0 & 6 & \infty & \infty & -1 & \infty \\\\ -2 & 0 & \infty & 2 & 0 & \infty \\\\ 3 & -3 & 0 & 4 & \infty & -8 \\\\ -4 & 10 & \infty & 0 & -5 & \infty \\\\ 8 & 7 & \infty & 9 & 0 & \infty \\\\ 6 & 5 & 10 & 7 & \infty & 0 \end{pmatrix} $$ $m = 4$: $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\\\ -2 & 0 & \infty & 2 & -3 & \infty \\\\ -5 & -3 & 0 & -1 & -3 & -8 \\\\ -4 & 2 & \infty & 0 & -5 & \infty \\\\ 5 & 7 & \infty & 9 & 0 & \infty \\\\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$ $m = 8$: $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\\\ -2 & 0 & \infty & 2 & -3 & \infty \\\\ -5 & -3 & 0 & -1 & -6 & -8 
\\\\ -4 & 2 & \infty & 0 & -5 & \infty \\\\ 5 & 7 & \infty & 9 & 0 & \infty \\\\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$
[]
false
[]
25-25.1-2
25
25.1
25.1-2
docs/Chap25/25.1.md
Why do we require that $w_{ii} = 0$ for all $1 \le i \le n$?
This is consistent with the fact that the shortest path from a vertex to itself is the empty path of weight $0$. If there were another path of weight less than $0$ then it must be a negative-weight cycle, since it starts and ends at $v_i$. Moreover, if $w_{ii} \ne 0$, then in line 7 of $\text{EXTEND-SHORTEST-PATHS}$ the candidate $l_{ii} + w_{ij}$ obtained with $k = i$ would not equal the weight $w_{ij}$ of the edge from $i$ to its neighbour $j$, so the matrix $L^{(1)}$ produced after the first extension would not contain the minimum weight of the single-edge paths from $i$ to its neighbours.
[]
false
[]
25-25.1-3
25
25.1
25.1-3
docs/Chap25/25.1.md
What does the matrix $$ L^{(0)} = \begin{pmatrix} 0 & \infty & \infty & \cdots & \infty \\\\ \infty & 0 & \infty & \cdots & \infty \\\\ \infty & \infty & 0 & \cdots & \infty \\\\ \vdots & \vdots & \vdots & \ddots & \vdots \\\\ \infty & \infty & \infty & \cdots & 0 \end{pmatrix} $$ used in the shortest-paths algorithms correspond to in regular matrix multiplication?
The identity matrix.
[]
false
[]
25-25.1-4
25
25.1
25.1-4
docs/Chap25/25.1.md
Show that matrix multiplication defined by $\text{EXTEND-SHORTEST-PATHS}$ is associative.
To verify associativity, we need to check that $(W^iW^j)W^p = W^i(W^jW^p)$ for all $i$, $j$ and $p$, where we use the matrix multiplication defined by the $\text{EXTEND-SHORTEST-PATHS}$ procedure. Consider entry $(a, b)$ of the left hand side. This is: $$ \begin{aligned} \min_{1 \le k \le n} [W^iW^j]\_{a, k} + W_{k, b}^p & = \min_{1 \le k \le n} \min_{1 \le q \le n} W_{a, q}^i + W_{q, k}^j + W_{k, b}^p \\\\ & = \min_{1 \le q \le n} W_{a, q}^i + \min_{1 \le k \le n} W_{q, k}^j + W_{k, b}^p \\\\ & = \min_{1 \le q \le n} W_{a, q}^i + [W^jW^p]\_{q, b}, \end{aligned} $$ which is precisely entry $(a, b)$ of the right hand side.
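The argument can also be sanity-checked numerically; a quick sketch, where the helper name `min_plus` is our own label for the product computed by $\text{EXTEND-SHORTEST-PATHS}$:

```python
import random

def min_plus(A, B):
    """C[a][b] = min over k of A[a][k] + B[k][b]."""
    n = len(A)
    return [[min(A[a][k] + B[k][b] for k in range(n)) for b in range(n)]
            for a in range(n)]

# Random weight matrices: associativity should hold entry by entry.
random.seed(1)
n = 5
A, B, C = ([[random.randint(0, 9) for _ in range(n)] for _ in range(n)]
           for _ in range(3))
assert min_plus(min_plus(A, B), C) == min_plus(A, min_plus(B, C))
```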
[]
false
[]
25-25.1-5
25
25.1
25.1-5
docs/Chap25/25.1.md
Show how to express the single-source shortest-paths problem as a product of matrices and a vector. Describe how evaluating this product corresponds to a Bellman-Ford-like algorithm (see Section 24.1).
(Removed)
[]
false
[]
25-25.1-6
25
25.1
25.1-6
docs/Chap25/25.1.md
Suppose we also wish to compute the vertices on shortest paths in the algorithms of this section. Show how to compute the predecessor matrix $\prod$ from the completed matrix $L$ of shortest-path weights in $O(n^3)$ time.
For each source vertex $v_i$ we need to compute the shortest-paths tree rooted at $v_i$. To do this, we need to compute the predecessor for each $j \ne i$. For fixed $i$ and $j$, this is the value of $k \ne j$ such that $L[i, k] + w(k, j) = L[i, j]$. Since there are $n$ vertices whose trees need computing, $n$ vertices for each such tree whose predecessors need computing, and it takes $O(n)$ time to compute each one (checking each possible $k$), the total time is $O(n^3)$.
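This $O(n^3)$ reconstruction can be sketched in Python (0-indexed; `L` holds the completed shortest-path weights and `W` the edge weights, with `inf` for absent edges — the encoding is ours):

```python
def predecessor_matrix(L, W):
    """Pi[i][j] is a vertex k != j with L[i][k] + W[k][j] == L[i][j],
    i.e. the predecessor of j on some shortest i-to-j path; None on
    the diagonal and for unreachable pairs."""
    n = len(L)
    INF = float('inf')
    Pi = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j or L[i][j] == INF:
                continue
            for k in range(n):
                # k == j would satisfy the equation trivially via W[j][j] = 0
                if k != j and L[i][k] + W[k][j] == L[i][j]:
                    Pi[i][j] = k
                    break
    return Pi
```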
[]
false
[]
25-25.1-7
25
25.1
25.1-7
docs/Chap25/25.1.md
We can also compute the vertices on shortest paths as we compute the shortestpath weights. Define $\pi_{ij}^{(m)}$ as the predecessor of vertex $j$ on any minimum-weight path from $i$ to $j$ that contains at most $m$ edges. Modify the $\text{EXTEND-SHORTESTPATHS}$ and $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS}$ procedures to compute the matrices$\prod^{(1)}, \prod^{(2)}, \ldots, \prod^{(n - 1)}$ as the matrices $L^{(1)}, L^{(2)}, \ldots, L^{(n - 1)}$ are computed.
To have the procedure compute the predecessor along the shortest path, see the modified procedures, $\text{EXTEND-SHORTEST-PATH-MOD}$ and $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS-MOD}$ ```cpp EXTEND-SHORTEST-PATH-MOD(∏, L, W) n = L.rows let L' = l'[i, j] be a new n × n matrix ∏' = π'[i, j] is a new n × n matrix for i = 1 to n for j = 1 to n l'[i, j] = ∞ π'[i, j] = NIL for k = 1 to n if l[i, k] + w[k, j] < l'[i, j] l'[i, j] = l[i, k] + w[k, j] if k != j π'[i, j] = k else π'[i, j] = π[i, j] return (∏', L') ``` ```cpp SLOW-ALL-PAIRS-SHORTEST-PATHS-MOD(W) n = W.rows L(1) = W ∏(1) = π[i, j](1) where π[i, j](1) = i if there is an edge from i to j, and NIL otherwise for m = 2 to n - 1 ∏(m), L(m) = EXTEND-SHORTEST-PATH-MOD(∏(m - 1), L(m - 1), W) return (∏(n - 1), L(n - 1)) ```
[ { "lang": "cpp", "code": "EXTEND-SHORTEST-PATH-MOD(∏, L, W)\n n = L.row\n let L' = l'[i, j] be a new n × n matirx\n ∏' = π'[i, j] is a new n × n matrix\n for i = 1 to n\n for j = 1 to n\n l'[i, j] = ∞\n π'[i, j] = NIL\n for k = 1 to n\n if l[i, k] + l[k, j] < l[i, j]\n l[i, j] = l[i, k] + l[k, j]\n if k != j\n π'[i, j] = k\n else\n π'[i, j] = π[i, j]\n return (∏', L')" }, { "lang": "cpp", "code": "SLOW-ALL-PAIRS-SHORTEST-PATHS-MOD(W)\n n = W.rows\n L(1) = W\n ∏(1) = π[i, j](1) where π[i, j](1) = i if there is an edge from i to j, and NIL otherwise\n for m = 2 to n - 1\n ∏(m), L(m) = EXTEND-SHORTEST-PATH-MOD(∏(m - 1), L(m - 1), W)\n return (∏(n - 1), L(n - 1))" } ]
false
[]
25-25.1-8
25
25.1
25.1-8
docs/Chap25/25.1.md
The $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}$ procedure, as written, requires us to store $\lceil \lg(n - 1) \rceil$ matrices, each with $n^2$ elements, for a total space requirement of $\Theta(n^2\lg n)$. Modify the procedure to require only $\Theta(n^2)$ space by using only two $n \times n$ matrices.
We can overwrite matrices as we go. Let $A \star B$ denote the matrix "multiplication" defined by the $\text{EXTEND-SHORTEST-PATHS}$ procedure. Then we modify $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}(W)$: we initially create an $n \times n$ matrix $L$, delete line 5 of the algorithm, and change line 6 to $L = W \star W$, followed by $W = L$. At every point in time, only the two matrices $W$ and $L$ are stored.
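A sketch of the two-matrix version in Python (our own encoding; `min_plus` computes the $\text{EXTEND-SHORTEST-PATHS}$ product):

```python
def faster_apsp(W):
    """Repeated min-plus squaring: W is replaced by W^(2), W^(4), ...
    until W^(m) with m >= n - 1.  Only two n-by-n matrices are live
    at any time: the current W and the product L."""
    n = len(W)

    def min_plus(A, B):
        return [[min(A[i][k] + B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    m = 1
    while m < n - 1:
        L = min_plus(W, W)
        W = L          # the old W is discarded
        m *= 2
    return W
```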
[]
false
[]
25-25.1-9
25
25.1
25.1-9
docs/Chap25/25.1.md
Modify $\text{FASTER-ALL-PAIRS-SHORTEST-PATHS}$ so that it can determine whether the graph contains a negative-weight cycle.
For the modification, keep computing for one more squaring than the original: compute all the way up to $L^{(2^{k + 1})}$, where $2^k \ge n - 1$. If there are no negative-weight cycles, then between any two vertices there is a path that is tied for shortest and contains at most $n - 1$ edges, so allowing longer paths changes nothing, and the matrices $L^{(2^k)}$ and $L^{(2^{k + 1})}$ are equal. However, if there is a cycle of negative total weight, its length is at most $n$; since the allowed path length doubles from at least $n - 1$ to at least $2(n - 1) \ge n$, every vertex on the cycle has its distance estimate decrease by at least the (negative) weight of the cycle. So the graph contains a negative-weight cycle exactly when the two matrices differ, and we can detect this by comparing them. The check costs only one extra matrix "multiplication", which is little-$o$ of the running time of the unmodified algorithm.
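The check can be sketched as follows (our own Python, reusing the same min-plus product): after reaching $L^{(m)}$ with $m \ge n - 1$, square once more and compare.

```python
def has_negative_cycle(W):
    """Compute L^(m) with m >= n - 1 by repeated min-plus squaring,
    then square once more: the matrix changes iff the graph contains
    a negative-weight cycle."""
    n = len(W)

    def min_plus(A, B):
        return [[min(A[i][k] + B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    L = W
    m = 1
    while m < n - 1:
        L = min_plus(L, L)
        m *= 2
    return min_plus(L, L) != L
```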
[]
false
[]
25-25.1-10
25
25.1
25.1-10
docs/Chap25/25.1.md
Give an efficient algorithm to find the length (number of edges) of a minimum-length negative-weight cycle in a graph.
(Removed)
[]
false
[]
25-25.2-1
25
25.2
25.2-1
docs/Chap25/25.2.md
Run the Floyd-Warshall algorithm on the weighted, directed graph of Figure 25.2. Show the matrix $D^{(k)}$ that results for each iteration of the outer loop.
$k = 1$: $$ \begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\\\ 1 & 0 & \infty & 2 & 0 & \infty \\\\ \infty & 2 & 0 & \infty & \infty & -8 \\\\ -4 & \infty & \infty & 0 & -5 & \infty \\\\ \infty & 7 & \infty & \infty & 0 & \infty \\\\ \infty & 5 & 10 & \infty & \infty & 0 \end{pmatrix} $$ $k = 2$: $$ \begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\\\ 1 & 0 & \infty & 2 & 0 & \infty \\\\ 3 & 2 & 0 & 4 & 2 & - 8 \\\\ -4 & \infty & \infty & 0 & -5 & \infty \\\\ 8 & 7 & \infty & 9 & 0 & \infty \\\\ 6 & 5 & 10 & 7 & 5 & 0 \end{pmatrix} $$ $k = 3$: $$ \begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\\\ 1 & 0 & \infty & 2 & 0 & \infty \\\\ 3 & 2 & 0 & 4 & 2 & -8 \\\\ -4 & \infty & \infty & 0 & -5 & \infty \\\\ 8 & 7 & \infty & 9 & 0 & \infty \\\\ 6 & 5 & 10 & 7 & 5 & 0 \end{pmatrix} $$ $k = 4$: $$ \begin{pmatrix} 0 & \infty & \infty & \infty & -1 & \infty \\\\ -2 & 0 & \infty & 2 & -3 & \infty \\\\ 0 & 2 & 0 & 4 & -1 & -8 \\\\ -4 & \infty & \infty & 0 & -5 & \infty \\\\ 5 & 7 & \infty & 9 & 0 & \infty \\\\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$ $k = 5$: $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\\\ -2 & 0 & \infty & 2 & -3 & \infty \\\\ 0 & 2 & 0 & 4 & -1 & -8 \\\\ -4 & 2 & \infty & 0 & -5 & \infty \\\\ 5 & 7 & \infty & 9 & 0 & \infty \\\\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$ $k = 6$: $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\\\ -2 & 0 & \infty & 2 & -3 & \infty \\\\ -5 & -3 & 0 & -1 & -6 & -8 \\\\ -4 & 2 & \infty & 0 & -5 & \infty \\\\ 5 & 7 & \infty & 9 & 0 & \infty \\\\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} $$
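The $D^{(k)}$ sequence above is produced mechanically by the recurrence $d_{ij}^{(k)} = \min\big(d_{ij}^{(k - 1)}, d_{ik}^{(k - 1)} + d_{kj}^{(k - 1)}\big)$; a minimal 0-indexed Python sketch (the encoding is ours):

```python
def floyd_warshall_steps(W):
    """Return the list [D^(1), ..., D^(n)], one matrix per iteration
    of the outer loop of FLOYD-WARSHALL."""
    n = len(W)
    D = [row[:] for row in W]
    steps = []
    for k in range(n):
        D = [[min(D[i][j], D[i][k] + D[k][j]) for j in range(n)]
             for i in range(n)]
        steps.append(D)
    return steps
```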
[]
false
[]
25-25.2-2
25
25.2
25.2-2
docs/Chap25/25.2.md
Show how to compute the transitive closure using the technique of Section 25.1.
We set $w_{ij} = 1$ if $i = j$ or $(i, j)$ is an edge, and $w_{ij} = 0$ otherwise. Then we replace line 7 of $\text{EXTEND-SHORTEST-PATHS}(L, W)$ by $l'\_{ij} = l'\_{ij} \lor (l_{ik} \land w_{kj})$, with the initialization $l'\_{ij} = 0$ in place of $l'\_{ij} = \infty$, so that $\min$ becomes OR and addition becomes AND. Then run the $\text{SLOW-ALL-PAIRS-SHORTEST-PATHS}$ algorithm as before.
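In Python this substitution looks as follows (a sketch under our own encoding, where `W[i][j]` is `True` iff $i = j$ or $(i, j) \in E$):

```python
def transitive_closure_slow(W):
    """SLOW-ALL-PAIRS-SHORTEST-PATHS over the boolean semiring:
    min becomes OR, addition becomes AND."""
    n = len(W)

    def extend(L):
        return [[any(L[i][k] and W[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    L = [row[:] for row in W]
    for _ in range(max(0, n - 2)):   # L^(1) = W, then L^(2), ..., L^(n - 1)
        L = extend(L)
    return L
```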
[]
false
[]
25-25.2-3
25
25.2
25.2-3
docs/Chap25/25.2.md
Modify the $\text{FLOYD-WARSHALL}$ procedure to compute the $\prod^{(k)}$ matrices according to equations $\text{(25.6)}$ and $\text{(25.7)}$. Prove rigorously that for all $i \in V$, the predecessor subgraph $G_{\pi, i}$ is a shortest-paths tree with root $i$. ($\textit{Hint:}$ To show that $G_{\pi, i}$ is acyclic, first show that $\pi_{ij}^{(k)} = l$ implies $d_{ij}^{(k)} \ge d_{il}^{(k)} + w_{lj}$, according to the definition of $\pi_{ij}^{(k)}$. Then, adapt the proof of Lemma 23.16.)
```cpp MOD-FLOYD-WARSHALL(W) n = W.rows D(0) = W let π(0) be a new n × n matrix for i = 1 to n for j = 1 to n if i != j and D[i, j](0) < ∞ π[i, j](0) = i for k = 1 to n let D(k) be a new n × n matrix let π(k) be a new n × n matrix for i = 1 to n for j = 1 to n if d[i, j](k - 1) ≤ d[i, k](k - 1) + d[k, j](k - 1) d[i, j](k) = d[i, j](k - 1) π[i, j](k) = π[i, j](k - 1) else d[i, j](k) = d[i, k](k - 1) + d[k, j](k - 1) π[i, j](k) = π[k, j](k - 1) ``` In order to have $\pi^{(k)}\_{ij} = l$, we need that $d^{(k)}\_{ij} \ge d^{(k)}\_{il} + w_{lj}$. To see this fact, note that having $\pi^{(k)}\_{ij} = l$ means that a shortest path from $i$ to $j$ last goes through $l$. A path that last goes through $l$ corresponds to taking a cheapest path from $i$ to $l$ and then following the single edge from $l$ to $j$. This means that $d\_{il} \le d\_{ij} - w_{lj}$, which we can rearrange to get the desired inequality. Now suppose the predecessor graph contained a cycle $i_1, i_2, \ldots, i_c$. Summing this inequality around the cycle, the $d$ terms cancel, and we get that the total weight $w_{i_1i_2} + w_{i_2i_3} + \cdots + w_{i_ci_1}$ of the cycle is at most $0$; since the graph has no negative-weight cycles, the cycle would have to have weight exactly zero. However, we never route a shortest path around a zero-weight cycle, because the way that we handle the equality case in equation $\text{(25.7)}$ favors shortest paths with fewer edges. Hence each predecessor subgraph $G_{\pi, i}$ is acyclic and forms a shortest-paths tree rooted at $i$.
[ { "lang": "cpp", "code": "MOD-FLOYD-WARSHALL(W)\n n = W.rows\n D(0) = W\n let π(0) be a new n × n matrix\n for i = 1 to n\n for j = 1 to n\n if i != j and D[i, j](0) < ∞\n π[i, j](0) = i\n for k = 1 to n\n let D(k) be a new n × n matrix\n let π(k) be a new n × n matrix\n for i = 1 to n\n for j = 1 to n\n if d[i, j](k - 1) ≤ d[i, k](k - 1) + d[k, j](k - 1)\n d[i, j](k) = d[i, j](k - 1)\n π[i, j](k) = π[i, j](k - 1)\n else\n d[i, j](k) = d[i, k](k - 1) + d[k, j](k - 1)\n π[i, j](k) = π[k, j](k - 1)" } ]
false
[]
25-25.2-4
25
25.2
25.2-4
docs/Chap25/25.2.md
As it appears above, the Floyd-Warshall algorithm requires $\Theta(n^3)$ space, since we compute $d_{ij}^{(k)}$ for $i, j, k = 1, 2, \ldots, n$. Show that the following procedure, which simply drops all the superscripts, is correct, and thus only $\Theta(n^2)$ space is required. ```cpp FLOYD-WARSHALL'(W) n = W.rows D = W for k = 1 to n for i = 1 to n for j = 1 to n d[i, j] = min(d[i, j], d[i, k] + d[k, j]) return D ```
(Removed)
[ { "lang": "cpp", "code": "> FLOYD-WARSHALL'(W)\n> n = W.rows\n> D = W\n> for k = 1 to n\n> for i = 1 to n\n> for j = 1 to n\n> d[i, j] = min(d[i, j], d[i, k] + d[k, j])\n> return D\n>" } ]
false
[]
25-25.2-5
25
25.2
25.2-5
docs/Chap25/25.2.md
Suppose that we modify the way in which equation $\text{(25.7)}$ handles equality: $$ \pi_{ij}^{(k)} = \begin{cases} \pi_{ij}^{(k - 1)} & \text{ if } d_{ij}^{(k - 1)} < d_{ik}^{(k - 1)} + d_{kj}^{(k - 1)}, \\\\ \pi_{kj}^{(k - 1)} & \text{ if } d_{ij}^{(k - 1)} \ge d_{ik}^{(k - 1)} + d_{kj}^{(k - 1)}. \end{cases} $$ Is this alternative definition of the predecessor matrix $\prod$ correct?
If we change the way that we handle the equality case, we will still generate correct values for the $\pi$ matrix. The alternative rule only reroutes a shortest path from $i$ to $j$ through $k$ when doing so ties the current weight: setting $\pi_{ij} = \pi_{kj}$ makes the chosen shortest path from $i$ to $j$ pass through $k$ at some point, which costs the same because $d_{ij} = d_{ik} + d_{kj}$. The resulting paths may use more edges, but they are still shortest paths, so the alternative definition of $\prod$ is correct.
[]
false
[]
25-25.2-6
25
25.2
25.2-6
docs/Chap25/25.2.md
How can we use the output of the Floyd-Warshall algorithm to detect the presence of a negative-weight cycle?
(Removed)
[]
false
[]
25-25.2-7
25
25.2
25.2-7
docs/Chap25/25.2.md
Another way to reconstruct shortest paths in the Floyd-Warshall algorithm uses values $\phi_{ij}^{(k)}$ for $i, j, k = 1, 2, \ldots, n$, where $\phi_{ij}^{(k)}$ is the highest-numbered intermediate vertex of a shortest path from $i$ to $j$ in which all intermediate vertices are in the set $\\{1, 2, \ldots, k \\}$. Give a recursive formulation for $\phi_{ij}^{(k)}$, modify the $\text{FLOYD-WARSHALL}$ procedure to compute the $\phi_{ij}^{(k)}$ values, and rewrite the $\text{PRINT-ALLPAIRS-SHORTEST-PATH}$ procedure to take the matrix $\Phi = \big(\phi_{ij}^{(n)}\big)$ as an input. How is the matrix $\Phi$ like the $s$ table in the matrix-chain multiplication problem of Section 15.2?
We can compute the values $\phi_{ij}^{(k)}$ recursively: let $\phi_{ij}^{(k)} = \phi_{ij}^{(k - 1)}$ if $d_{ik}^{(k - 1)} + d_{kj}^{(k - 1)} \ge d_{ij}^{(k - 1)}$, and $\phi_{ij}^{(k)} = k$ otherwise. This works correctly because it records exactly whether we decided to use vertex $k$ when we were repeatedly allowing ourselves use of each vertex one at a time. To modify Floyd-Warshall to compute this, we just add to the innermost for loop an update of $\phi_{ij}^{(k)}$ by this recursive rule; this is a constant amount of extra work per iteration, and so the asymptotic runtime does not increase. The matrix $\Phi$ is like the $s$ table in matrix-chain multiplication because both record where to split the problem optimally and both are computed by a similar recurrence. Given the $n^3$ values $\phi_{ij}^{(k)}$, we can reconstruct the shortest path from $i$ to $j$: the highest-numbered vertex on the path is $a_1 = \phi_{ij}^{(n)}$; then the highest-numbered vertex on the path before $a_1$ is $\phi_{ia_1}^{(a_1 - 1)}$ and the highest-numbered one after $a_1$ is $\phi_{a_1j}^{(a_1 - 1)}$. By continuing to recurse until the value encountered is $\text{NIL}$, we subdivide the path until it is entirely constructed.
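A sketch of the modified procedure and the recursive reconstruction in Python (0-indexed and in place, with `None` playing the role of NIL — the encoding is ours):

```python
def floyd_warshall_phi(W):
    """phi[i][j] is the highest-numbered intermediate vertex on a
    shortest i-to-j path (None if the path has no intermediates)."""
    n = len(W)
    d = [row[:] for row in W]
    phi = [[None] * n for _ in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    phi[i][j] = k
    return d, phi

def intermediates(phi, i, j):
    """Vertices strictly between i and j on the reconstructed path."""
    k = phi[i][j]
    if k is None:
        return []
    return intermediates(phi, i, k) + [k] + intermediates(phi, k, j)
```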
[]
false
[]
25-25.2-8
25
25.2
25.2-8
docs/Chap25/25.2.md
Give an $O(VE)$-time algorithm for computing the transitive closure of a directed graph $G = (V, E)$.
We can determine the vertices reachable from a particular vertex in $O(V + E)$ time using any basic graph searching algorithm. Thus we can compute the transitive closure in $O(VE + V^2)$ time by searching the graph with each vertex as the source. If $|V| = O(E)$, we're done as $VE$ is now the dominating term in the running time bound. If not, we preprocess the graph and mark all degree-$0$ vertices in $O(V + E)$ time. The rows representing these vertices in the transitive closure are all $0$s, which means that the algorithm remains correct if we ignore these vertices when searching. After preprocessing, $|V| = O(E)$ as $|E| \geq |V|/2$. Therefore searching can be done in $O(VE)$ time.
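The search-from-each-vertex approach can be sketched as follows (Python, encoding ours; here a BFS is run from every vertex with outgoing edges):

```python
from collections import deque

def transitive_closure(n, adj):
    """closure[u][v] is True iff v is reachable from u (u itself counts).
    One O(V + E) BFS per vertex; vertices with no outgoing edges are
    not searched, since their rows are already correct."""
    closure = [[False] * n for _ in range(n)]
    for s in range(n):
        closure[s][s] = True
        if not adj[s]:
            continue
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if not closure[s][v]:
                    closure[s][v] = True
                    q.append(v)
    return closure
```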
[]
false
[]
25-25.2-9
25
25.2
25.2-9
docs/Chap25/25.2.md
Suppose that we can compute the transitive closure of a directed acyclic graph in $f(|V|, |E|)$ time, where $f$ is a monotonically increasing function of $|V|$ and $|E|$. Show that the time to compute the transitive closure $G^\* = (V, E^\*)$ of a general directed graph $G = (V, E)$ is then $f(|V|, |E|) + O(V + E^\*)$.
First, compute the strongly connected components of the directed graph, and look at its component graph. This component graph is acyclic and has at most as many vertices and at most as many edges as the original graph. Since it is acyclic, we can run our transitive-closure algorithm on it in at most $f(|V|, |E|)$ time. Then, for every edge $(S_1, S_2)$ that shows up in the transitive closure of the component graph, we add an edge from each vertex in $S_1$ to each vertex in $S_2$; together with computing the components, this expansion takes $O(V + E^\*)$ time. So, the total time required is $f(|V|, |E|) + O(V + E^\*)$.
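A sketch of this reduction in Python (our own encoding; Kosaraju's two-pass SCC computation stands in for any SCC algorithm, and a reverse-topological sweep over the acyclic component graph plays the role of the assumed $f(|V|, |E|)$-time closure):

```python
def closure_via_components(n, adj):
    """Transitive closure via the component graph: find SCCs, close
    the acyclic condensation, then expand back to vertices."""
    # Kosaraju pass 1: list vertices by increasing finish time.
    order, seen = [], [False] * n
    def dfs(u):
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs(v)
        order.append(u)
    for u in range(n):
        if not seen[u]:
            dfs(u)
    # Kosaraju pass 2: label components on the reversed graph.
    radj = [[] for _ in range(n)]
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)
    comp, c = [-1] * n, 0
    for r in reversed(order):
        if comp[r] == -1:
            comp[r], stack = c, [r]
            while stack:
                x = stack.pop()
                for y in radj[x]:
                    if comp[y] == -1:
                        comp[y] = c
                        stack.append(y)
            c += 1
    # Components come out in topological order, so one reverse sweep
    # closes the acyclic component graph.
    reach = [{a} for a in range(c)]
    for u in range(n):
        for v in adj[u]:
            reach[comp[u]].add(comp[v])
    for a in range(c - 1, -1, -1):
        for b in list(reach[a]):
            reach[a] |= reach[b]
    return [[comp[v] in reach[comp[u]] for v in range(n)] for u in range(n)]
```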
[]
false
[]
25-25.3-1
25
25.3
25.3-1
docs/Chap25/25.3.md
Use Johnson's algorithm to find the shortest paths between all pairs of vertices in the graph of Figure 25.2. Show the values of $h$ and $\hat w$ computed by the algorithm.
$$ \begin{array}{c|c} v & h(v) \\\\ \hline 1 & -5 \\\\ 2 & -3 \\\\ 3 & 0 \\\\ 4 & -1 \\\\ 5 & -6 \\\\ 6 & -8 \end{array} $$ $$ \begin{array}{ccc|ccc} u & v & \hat w(u, v) & u & v & \hat w(u, v) \\\\ \hline 1 & 2 & \text{NIL} & 4 & 1 & 0 \\\\ 1 & 3 & \text{NIL} & 4 & 2 & \text{NIL} \\\\ 1 & 4 & \text{NIL} & 4 & 3 & \text{NIL} \\\\ 1 & 5 & 0 & 4 & 5 & 8 \\\\ 1 & 6 & \text{NIL} & 4 & 6 & \text{NIL} \\\\ 2 & 1 & 3 & 5 & 1 & \text{NIL} \\\\ 2 & 3 & \text{NIL} & 5 & 2 & 4 \\\\ 2 & 4 & 0 & 5 & 3 & \text{NIL} \\\\ 2 & 5 & \text{NIL} & 5 & 4 & \text{NIL} \\\\ 2 & 6 & \text{NIL} & 5 & 6 & \text{NIL} \\\\ 3 & 1 & \text{NIL} & 6 & 1 & \text{NIL} \\\\ 3 & 2 & 5 & 6 & 2 & 0 \\\\ 3 & 4 & \text{NIL} & 6 & 3 & 2 \\\\ 3 & 5 & \text{NIL} & 6 & 4 & \text{NIL} \\\\ 3 & 6 & 0 & 6 & 5 & \text{NIL} \\\\ \end{array} $$ So, the $d_{ij}$ values that we get are $$ \begin{pmatrix} 0 & 6 & \infty & 8 & -1 & \infty \\\\ -2 & 0 & \infty & 2 & -3 & \infty \\\\ -5 & -3 & 0 & -1 & -6 & -8 \\\\ -4 & 2 & \infty & 0 & -5 & \infty \\\\ 5 & 7 & \infty & 9 & 0 & \infty \\\\ 3 & 5 & 10 & 7 & 2 & 0 \end{pmatrix} . $$
[]
false
[]
25-25.3-2
25
25.3
25.3-2
docs/Chap25/25.3.md
What is the purpose of adding the new vertex $s$ to $V'$, yielding $V'$?
This is only important when there are negative-weight cycles in the graph. Using a dummy vertex gets us around the problem of trying to compute $-\infty + \infty$ to find $\hat w$. Moreover, if we had instead used a vertex $v$ in the graph instead of the new vertex $s$, then we run into trouble if a vertex fails to be reachable from $v$.
[]
false
[]
25-25.3-3
25
25.3
25.3-3
docs/Chap25/25.3.md
Suppose that $w(u, v) \ge 0$ for all edges $(u, v) \in E$. What is the relationship between the weight functions $w$ and $\hat w$?
If all the edge weights are nonnegative, then the shortest-path values computed by Bellman-Ford in Johnson's algorithm will all be zero. This is because, when constructing $G'$ in the first line of Johnson's algorithm, we place an edge of weight zero from $s$ to every other vertex. Since the graph has no negative edges, no path can have negative cost, and so no path can beat the trivial one that goes straight from $s$ to any given vertex. Since we have $h(u) = h(v) = 0$ for every $u$ and $v$, the reweighting only adds and subtracts $0$, and so we have $w(u, v) = \hat w(u, v)$.
[]
false
[]
25-25.3-4
25
25.3
25.3-4
docs/Chap25/25.3.md
Professor Greenstreet claims that there is a simpler way to reweight edges than the method used in Johnson's algorithm. Letting $w^\* = \min_{(u, v) \in E} \\{w(u, v)\\}$, just define $\hat w(u, v) = w(u, v) - w^\*$ for all edges $(u, v) \in E$. What is wrong with the professor's method of reweighting?
The professor's reweighting penalizes every edge by the same amount, so a path's weight changes by an amount proportional to the number of edges it contains: a path with $k$ edges has its weight changed by $-kw^\*$. Shortest paths are therefore not preserved. Consider four vertices $A$, $B$, $C$, and $D$, and suppose we want the shortest path from $A$ to $C$, with two candidates: $A \to B \to C$ (two edges) and the direct edge $A \to C$. If the graph contains negative weights then $w^\* < 0$, so the two-edge path is increased by $2|w^\*|$ while the direct edge is increased by only $|w^\*|$. Even if $A \to B \to C$ is the shorter path under $w$, the reweighting can make the direct edge $A \to C$ appear shorter, so an algorithm run on $\hat w$ can return a path that is not shortest under $w$. ![](../img/25.3-4.png)
[]
true
[ "../img/25.3-4.png" ]
25-25.3-5
25
25.3
25.3-5
docs/Chap25/25.3.md
Suppose that we run Johnson's algorithm on a directed graph $G$ with weight function $w$. Show that if $G$ contains a $0$-weight cycle $c$, then $\hat w(u, v) = 0$ for every edge $(u, v)$ in $c$.
Let $(u, v)$ be an edge of the $0$-weight cycle $c$. The rest of $c$ forms a path from $v$ back to $u$ of weight $0 - w(u, v)$, so $\delta(s, u) \le \delta(s, v) - w(u, v)$. Now suppose, toward a contradiction, that $\delta(s, v) - \delta(s, u) < w(u, v)$. Then $$\delta(s, u) \le \delta(s, v) - w(u, v) < \delta(s, u) + w(u, v) - w(u, v) = \delta(s, u),$$ which is impossible. Since the triangle inequality gives $\delta(s, v) - \delta(s, u) \le w(u, v)$ in any case, we conclude $\delta(s, v) - \delta(s, u) = w(u, v)$, and hence $\hat w(u, v) = w(u, v) + \delta(s, u) - \delta(s, v) = 0$.
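As a sanity check, the claim can be verified numerically. The sketch below (an illustrative helper, not the textbook's pseudocode) computes $h(v) = \delta(s, v)$ from a dummy source with Bellman-Ford on a small graph whose one cycle has weight $0$, then evaluates $\hat w$ on the cycle edges.

```python
def bellman_ford(n, edges, s):
    # Single-source shortest paths on n vertices; fine here because the
    # only cycle has weight 0, so no negative-weight cycle is reachable.
    d = [float('inf')] * n
    d[s] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

cycle = [(0, 1, 2), (1, 2, -3), (2, 0, 1)]       # edge weights sum to 0
s = 3                                            # dummy source, Johnson-style
edges = cycle + [(s, v, 0) for v in range(3)]    # 0-weight edges from s
h = bellman_ford(4, edges, s)                    # h[v] = delta(s, v)
w_hat = {(u, v): w + h[u] - h[v] for u, v, w in cycle}
```

Every value in `w_hat` comes out $0$, matching the exercise's claim.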
[]
false
[]
25-25.3-6
25
25.3
25.3-6
docs/Chap25/25.3.md
Professor Michener claims that there is no need to create a new source vertex in line 1 of $\text{JOHNSON}$. He claims that instead we can just use $G' = G$ and let $s$ be any vertex. Give an example of a weighted, directed graph $G$ for which incorporating the professor's idea into $\text{JOHNSON}$ causes incorrect answers. Then show that if $G$ is strongly connected (every vertex is reachable from every other vertex), the results returned by $\text{JOHNSON}$ with the professor's modification are correct.
(Removed)
[]
false
[]
25-25-1
25
25-1
25-1
docs/Chap25/Problems/25-1.md
Suppose that we wish to maintain the transitive closure of a directed graph $G = (V, E)$ as we insert edges into $E$. That is, after each edge has been inserted, we want to update the transitive closure of the edges inserted so far. Assume that the graph $G$ has no edges initially and that we represent the transitive closure as a boolean matrix. **a.** Show how to update the transitive closure $G^\* = (V, E^\*)$ of a graph $G = (V, E)$ in $O(V^2)$ time when a new edge is added to $G$. **b.** Give an example of a graph $G$ and an edge $e$ such that $\Omega(V^2)$ time is required to update the transitive closure after the insertion of $e$ into $G$, no matter what algorithm is used. **c.** Describe an efficient algorithm for updating the transitive closure as edges are inserted into the graph. For any sequence of $n$ insertions, your algorithm should run in total time $\sum_{i = 1}^n t_i = O(V^3)$, where $t_i$ is the time to update the transitive closure upon inserting the $i$th edge. Prove that your algorithm attains this time bound.
**a.** We can update the transitive closure in $O(V^2)$ time as follows. Suppose that we add the edge $(x_1, x_2)$. Then we consider every pair of vertices $(u, v)$. The new edge creates a path from $u$ to $v$ exactly when some part of that path goes from $u$ to $x_1$ and the rest of it goes from $x_2$ to $v$. This means that we add the edge $(u, v)$ to the transitive closure if and only if the transitive closure already contains the edges $(u, x_1)$ and $(x_2, v)$. Since we only consider every pair of vertices once, the runtime of this update is only $O(V^2)$. **b.** Suppose that we currently have two strongly connected components, each of size $|V| / 2$, with no edges between them. Their transitive closures computed so far consist of two complete directed graphs on $|V| / 2$ vertices each, for a total of $2(|V| / 2)^2 = |V|^2 / 2$ edges. Now add a single edge from one component to the other. Every vertex in the component the edge leaves then has an edge to every vertex in the component the edge enters, so the closure gains $(|V| / 2)^2 = |V|^2 / 4$ new edges. Since recording each new edge of the closure requires at least constant time, and there is no cheap way to add many edges at once, the total amount of time needed is $\Omega(V^2)$ no matter what algorithm is used. **c.** We will have each vertex maintain a tree of vertices that have a path to it and a tree of vertices that it has a path to; the second of these is its row of the transitive closure at each step. Then, upon inserting an edge $(u, v)$, we look at successive ancestors of $u$ and add $v$ to their successor trees, just past $u$. Whenever we notice that an edge we are about to insert is already present, we stop exploring that branch of the ancestor tree. Similarly, we keep doing this for all of the ancestors of $v$.
Since we are able to short-circuit whenever we notice that an edge has already been added, each edge is reconsidered at most $n$ times, and since the number of edges is $O(n^2)$, the total runtime is $O(n^3)$, where $n = |V|$.
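The $O(V^2)$ update of part (a) is short enough to sketch directly. This is an illustrative implementation over a boolean-matrix representation (with $T[i][i]$ initialized to true), not code from the text.

```python
def transitive_closure_insert(T, x1, x2):
    # T is an n x n boolean matrix with T[i][i] = True; T[u][v] means
    # v is reachable from u. After inserting edge (x1, x2), the pair
    # (u, v) becomes reachable iff u already reached x1 and x2 reached v.
    n = len(T)
    for u in range(n):
        if not T[u][x1]:
            continue
        for v in range(n):
            if T[x2][v]:
                T[u][v] = True
```

The two nested loops over vertex pairs give the $O(V^2)$ bound per insertion.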
[]
false
[]
25-25-2
25
25-2
25-2
docs/Chap25/Problems/25-2.md
A graph $G = (V, E)$ is **_$\epsilon$-dense_** if $|E| = \Theta(V^{1 + \epsilon})$ for some constant $\epsilon$ in the range $0 < \epsilon \le 1$. By using $d$-ary min-heaps (see Problem 6-2) in shortest-paths algorithms on $\epsilon$-dense graphs, we can match the running times of Fibonacci-heap-based algorithms without using as complicated a data structure. **a.** What are the asymptotic running times for $\text{INSERT}$, $\text{EXTRACT-MIN}$, and $\text{DECREASE-KEY}$, as a function of $d$ and the number $n$ of elements in a $d$-ary min-heap? What are these running times if we choose $d = \Theta(n^\alpha)$ for some constant $0 < \alpha \le 1$? Compare these running times to the amortized costs of these operations for a Fibonacci heap. **b.** Show how to compute shortest paths from a single source on an $\epsilon$-dense directed graph $G = (V, E)$ with no negative-weight edges in $O(E)$ time. ($\textit{Hint:}$ Pick $d$ as a function of $\epsilon$.) **c.** Show how to solve the all-pairs shortest-paths problem on an $\epsilon$-dense directed graph $G = (V, E)$ with no negative-weight edges in $O(VE)$ time. **d.** Show how to solve the all-pairs shortest-paths problem in $O(VE)$ time on an $\epsilon$-dense directed graph $G = (V, E)$ that may have negative-weight edges but has no negative-weight cycles.
**a.** - $\text{INSERT}$: $\Theta(\log_d n) = \Theta(1 / \alpha)$. - $\text{EXTRACT-MIN}$: $\Theta(d\log_d n) = \Theta(n^\alpha / \alpha)$. - $\text{DECREASE-KEY}$: $\Theta(\log_d n) = \Theta(1 / \alpha)$. For constant $\alpha$, $\text{INSERT}$ and $\text{DECREASE-KEY}$ run in $O(1)$ time, matching the amortized costs of a Fibonacci heap, while $\text{EXTRACT-MIN}$ runs in $\Theta(n^\alpha)$ time versus the Fibonacci heap's $O(\lg n)$ amortized. **b.** Run Dijkstra with a $d$-ary heap, which takes $O(d\log_d V \cdot V + \log_d V \cdot E)$ time. Choosing $d = V^\epsilon$, $$ \begin{aligned} O(d \log_d V \cdot V + \log_d V \cdot E) & = O(V^\epsilon \cdot V / \epsilon + E / \epsilon) \\\\ & = O((V^{1+\epsilon} + E) / \epsilon) \\\\ & = O((E + E) / \epsilon) \\\\ & = O(E). \end{aligned} $$ **c.** Run Dijkstra from each of the $|V|$ vertices; since each run is $O(E)$ by part (b), the total time is $O(VE)$. **d.** Use Johnson's reweighting: one Bellman-Ford pass from a new source computes the function $h$ in $O(VE)$ time, the reweighted graph has no negative-weight edges, and part (c) then solves the all-pairs problem in $O(VE)$ time, for $O(VE)$ total.
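For part (a), a minimal $d$-ary min-heap sketch (array layout as in Problem 6-2; the class and method names are my own) makes the $\Theta(\log_d n)$ sift-up and $\Theta(d \log_d n)$ sift-down costs concrete:

```python
class DaryMinHeap:
    """d-ary min-heap in an array: children of node i sit at d*i+1 .. d*i+d."""

    def __init__(self, d):
        self.d = d
        self.a = []

    def insert(self, key):                 # O(log_d n): one swap per level
        a, d = self.a, self.d
        a.append(key)
        i = len(a) - 1
        while i > 0 and a[(i - 1) // d] > a[i]:
            p = (i - 1) // d
            a[i], a[p] = a[p], a[i]
            i = p

    def extract_min(self):                 # O(d log_d n): d comparisons per level
        a, d = self.a, self.d
        m = a[0]
        a[0] = a[-1]
        a.pop()
        i = 0
        while True:
            first = d * i + 1
            if first >= len(a):
                break
            # index of the smallest of the up-to-d children of i
            c = min(range(first, min(first + d, len(a))), key=a.__getitem__)
            if a[c] < a[i]:
                a[i], a[c] = a[c], a[i]
                i = c
            else:
                break
        return m
```

A $\text{DECREASE-KEY}$ would reuse the same sift-up loop as `insert`, hence the matching $\Theta(\log_d n)$ bound.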
[]
false
[]
26-26.1-1
26
26.1
26.1-1
docs/Chap26/26.1.md
Show that splitting an edge in a flow network yields an equivalent network. More formally, suppose that flow network $G$ contains edge $(u, v)$, and we create a new flow network $G'$ by creating a new vertex $x$ and replacing $(u, v)$ by new edges $(u, x)$ and $(x, v)$ with $c(u, x) = c(x, v) = c(u, v)$. Show that a maximum flow in $G'$ has the same value as a maximum flow in $G$.
Suppose the maximum flow of a graph $G = (V, E)$ with source $s$ and sink $t$ has value $|f|$. Every vertex $v \in V - \\{s, t\\}$ must obey flow conservation. Therefore, if we can add or delete vertices between $s$ and $t$ without changing $|f|$ or violating flow conservation, then the new network $G' = (V', E')$ has the same maximum-flow value as the original graph $G$. That is exactly what happens when we replace edge $(u, v)$ by the new edges $(u, x)$ and $(x, v)$ with $c(u, x) = c(x, v) = c(u, v)$: any flow that sends $f(u, v)$ units along $(u, v)$ in $G$ corresponds to a flow in $G'$ that sends the same $f(u, v)$ units along $(u, x)$ and along $(x, v)$. The new vertex $x$ obeys flow conservation, since whatever flows into it on $(u, x)$ flows out on $(x, v)$; the capacity constraints are respected because $c(u, x) = c(x, v) = c(u, v)$; and the value $|f|$ is unchanged. The correspondence also works in reverse, so maximum flows in $G$ and $G'$ have equal value. In fact, we can split any edge in this way, and even when two vertices $u$ and $v$ have no edge between them we could add a vertex $y$ with $c(u, y) = c(y, v) = 0$ without changing the maximum flow. To conclude, we can transform any graph with antiparallel edges into an equivalent graph without antiparallel edges that has the same maximum-flow value.
[]
false
[]
26-26.1-2
26
26.1
26.1-2
docs/Chap26/26.1.md
Extend the flow properties and definitions to the multiple-source, multiple-sink problem. Show that any flow in a multiple-source, multiple-sink flow network corresponds to a flow of identical value in the single-source, single-sink network obtained by adding a supersource and a supersink, and vice versa.
Capacity constraint: for all $u, v \in V$, we require $0 \le f(u, v) \le c(u, v)$. Flow conservation: for all $u \in V - S - T$, we require $\sum_{v \in V} f(v, u) = \sum_{v \in V} f(u, v)$. Given a flow $f$ in the multiple-source, multiple-sink network, extend it to the single-source, single-sink network by setting $f(s, s_i) = \sum_{v \in V} f(s_i, v) - \sum_{v \in V} f(v, s_i)$ for each source $s_i$ and $f(t_j, t) = \sum_{v \in V} f(v, t_j) - \sum_{v \in V} f(t_j, v)$ for each sink $t_j$. Flow conservation then holds at every $s_i$ and $t_j$, the infinite-capacity edges are never overfull, and the value of the flow is preserved. Conversely, restricting a flow in the single-source, single-sink network to the original edges yields a multiple-source, multiple-sink flow of identical value.
[]
false
[]
26-26.1-3
26
26.1
26.1-3
docs/Chap26/26.1.md
Suppose that a flow network $G = (V, E)$ violates the assumption that the network contains a path $s \leadsto v \leadsto t$ for all vertices $v \in V$. Let $u$ be a vertex for which there is no path $s \leadsto u \leadsto t$. Show that there must exist a maximum flow $f$ in $G$ such that $f(u, v) = f(v, u) = 0$ for all vertices $v \in V$.
(Removed)
[]
false
[]
26-26.1-4
26
26.1
26.1-4
docs/Chap26/26.1.md
Let $f$ be a flow in a network, and let $\alpha$ be a real number. The **_scalar flow product_**, denoted $\alpha f$, is a function from $V \times V$ to $\mathbb{R}$ defined by $$(\alpha f)(u, v) = \alpha \cdot f(u, v).$$ Prove that the flows in a network form a **_convex set_**. That is, show that if $f_1$ and $f_2$ are flows, then so is $\alpha f_1 + (1 - \alpha) f_2$ for all $\alpha$ in the range $0 \le \alpha \le 1$.
(Removed)
[]
false
[]
26-26.1-5
26
26.1
26.1-5
docs/Chap26/26.1.md
State the maximum-flow problem as a linear-programming problem.
$$ \begin{array}{lll} \max & \sum\limits_{v \in V} f(s, v) - \sum\limits_{v \in V} f(v, s) & \\\\ s.t. & 0 \le f(u, v) \le c(u, v) & \text{for each } u, v \in V \\\\ & \sum\limits_{v \in V} f(v, u) - \sum\limits_{v \in V} f(u, v) = 0 & \text{for each } u \in V - \\{s, t\\} \end{array} $$
[]
false
[]
26-26.1-6
26
26.1
26.1-6
docs/Chap26/26.1.md
Professor Adam has two children who, unfortunately, dislike each other. The problem is so severe that not only do they refuse to walk to school together, but in fact each one refuses to walk on any block that the other child has stepped on that day. The children have no problem with their paths crossing at a corner. Fortunately both the professor's house and the school are on corners, but beyond that he is not sure if it is going to be possible to send both of his children to the same school. The professor has a map of his town. Show how to formulate the problem of determining whether both his children can go to the same school as a maximum-flow problem.
(Removed)
[]
false
[]
26-26.1-7
26
26.1
26.1-7
docs/Chap26/26.1.md
Suppose that, in addition to edge capacities, a flow network has **_vertex capacities_**. That is each vertex $v$ has a limit $l(v)$ on how much flow can pass though $v$. Show how to transform a flow network $G = (V, E)$ with vertex capacities into an equivalent flow network $G' = (V', E')$ without vertex capacities, such that a maximum flow in $G'$ has the same value as a maximum flow in $G$. How many vertices and edges does $G'$ have?
(Removed)
[]
false
[]
26-26.2-1
26
26.2
26.2-1
docs/Chap26/26.2.md
Prove that the summations in equation $\text{(26.6)}$ equal the summations in equation $\text{(26.7)}$.
(Removed)
[]
false
[]
26-26.2-2
26
26.2
26.2-2
docs/Chap26/26.2.md
In Figure $\text{26.1}$(b), what is the flow across the cut $(\\{s, v_2, v_4\\}, \\{v_1, v_3, t\\})$? What is the capacity of this cut?
$$ \begin{aligned} f(S, T) & = f(s, v_1) + f(v_2, v_1) + f(v_4, v_3) + f(v_4, t) - f(v_3, v_2) = 11 + 1 + 7 + 4 - 4 = 19, \\\\ c(S, T) & = c(s, v_1) + c(v_2, v_1) + c(v_4, v_3) + c(v_4, t) = 16 + 4 + 7 + 4 = 31. \end{aligned} $$
[]
false
[]
26-26.2-3
26
26.2
26.2-3
docs/Chap26/26.2.md
Show the execution of the Edmonds-Karp algorithm on the flow network of Figure 26.1(a).
If we perform a breadth-first search where we consider the neighbors of a vertex as they appear in the ordering $s, v_1, v_2, v_3, v_4, t$, the first path that we find is $s, v_1, v_3, t$. The minimum capacity on this augmenting path is $12$, so we send $12$ units along it. We perform a $\text{BFS}$ on the resulting residual network, which finds the path $s, v_2, v_4, t$. The minimum capacity along this path is $4$, so we send $4$ units along it. Then the only augmenting path remaining in the residual network is $s, v_2, v_4, v_3, t$, which has a minimum residual capacity of $7$; since that is all that is left, the $\text{BFS}$ finds it. Putting it all together, the total flow that we have found has a value of $12 + 4 + 7 = 23$.
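The trace can be double-checked mechanically. Below is a generic Edmonds-Karp sketch (BFS augmenting paths; an illustrative implementation, not the textbook's pseudocode) run on the capacities of Figure 26.1(a) as used in the trace above.

```python
from collections import defaultdict, deque

def edmonds_karp(edge_list, s, t):
    """Maximum flow via shortest augmenting paths, as in Edmonds-Karp."""
    cap = defaultdict(lambda: defaultdict(int))   # residual capacities
    for u, v, c in edge_list:
        cap[u][v] += c
    flow = 0
    while True:
        parent = {s: None}                        # BFS in the residual network
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                       # no augmenting path left
            return flow
        path, v = [], t                           # recover the path edges
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)       # bottleneck residual capacity
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
        flow += b

# Capacities of Figure 26.1(a), consistent with the trace above.
fig_26_1 = [('s', 'v1', 16), ('s', 'v2', 13), ('v1', 'v3', 12),
            ('v2', 'v1', 4), ('v2', 'v4', 14), ('v3', 'v2', 9),
            ('v3', 't', 20), ('v4', 'v3', 7), ('v4', 't', 4)]
```

Running `edmonds_karp(fig_26_1, 's', 't')` returns $23$, agreeing with the trace.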
[]
false
[]
26-26.2-4
26
26.2
26.2-4
docs/Chap26/26.2.md
In the example of Figure 26.6, what is the minimum cut corresponding to the maximum flow shown? Of the augmenting paths appearing in the example, which one cancels flow?
A minimum cut corresponding to the maximum flow is $S = \\{s, v_1, v_2, v_4\\}$ and $T = \\{v_3, t\\}$. The augmenting path in part \(c\) cancels flow on edge $(v_3, v_2)$.
[]
false
[]
26-26.2-5
26
26.2
26.2-5
docs/Chap26/26.2.md
Recall that the construction in Section 26.1 that converts a flow network with multiple sources and sinks into a single-source, single-sink network adds edges with infinite capacity. Prove that any flow in the resulting network has a finite value if the edges of the original network with multiple sources and sinks have finite capacity.
Since the only edges that have infinite value are those going from the supersource or to the supersink, as long as we pick a cut that has the supersource and all the original sources on one side, and the other side has the supersink as well as all the original sinks, then it will only cut through edges of finite capacity. Then, by Corollary 26.5, we have that the value of the flow is bounded above by the value of any of these types of cuts, which is finite.
[]
false
[]
26-26.2-6
26
26.2
26.2-6
docs/Chap26/26.2.md
Suppose that each source $s_i$ in a flow network with multiple sources and sinks produces exactly $p_i$ units of flow, so that $\sum_{v \in V} f(s_i, v) = p_i$. Suppose also that each sink $t_j$ consumes exactly $q_j$ units, so that $\sum_{v \in V} f(v, t_j) = q_j$, where $\sum_i p_i = \sum_j q_j$. Show how to convert the problem of finding a flow $f$ that obeys these additional constraints into the problem of finding a maximum flow in a single-source, single-sink flow network.
Add a supersource $s$ and a supersink $t$, with edge capacities $c(s, s_i) = p_i$ for each source $s_i$ and $c(t_j, t) = q_j$ for each sink $t_j$. A flow obeying the additional constraints exists if and only if a maximum flow in this single-source, single-sink network saturates every edge leaving $s$ (equivalently, every edge entering $t$), i.e., has value $\sum_i p_i = \sum_j q_j$; restricting such a flow to the original edges gives the desired $f$.
[]
false
[]
26-26.2-7
26
26.2
26.2-7
docs/Chap26/26.2.md
Prove Lemma 26.2.
To check that $f_p$ is a flow, we verify that it satisfies both the capacity constraint and flow conservation. First, the capacity constraint: recall that $c_f(p)$ is defined as the smallest residual capacity of any edge along the path $p$. Since every residual capacity is at most the corresponding capacity, each edge of $p$ is assigned a flow value $c_f(p) \le c_f(u, v) \le c(u, v)$, and every other edge is assigned $0$. Second, flow conservation: since the only edges given nonzero flow lie along a single path, at each vertex interior to the path the $c_f(p)$ units flowing in along one edge are exactly matched by the $c_f(p)$ units flowing out to the next vertex of the path. Lastly, the value of $f_p$ equals $c_f(p)$: an augmenting path is simple, so it leaves $s$ exactly once, and the net flow out of $s$ is the flow $c_f(p)$ placed on the first edge of the path.
[]
false
[]
26-26.2-8
26
26.2
26.2-8
docs/Chap26/26.2.md
Suppose that we redefine the residual network to disallow edges into $s$. Argue that the procedure $\text{FORD-FULKERSON}$ still correctly computes a maximum flow.
(Removed)
[]
false
[]
26-26.2-9
26
26.2
26.2-9
docs/Chap26/26.2.md
Suppose that both $f$ and $f'$ are flows in a network $G$ and we compute flow $f \uparrow f'$. Does the augmented flow satisfy the flow conservation property? Does it satisfy the capacity constraint?
(Removed)
[]
false
[]
26-26.2-10
26
26.2
26.2-10
docs/Chap26/26.2.md
Show how to find a maximum flow in a network $G = (V, E)$ by a sequence of at most $|E|$ augmenting paths. ($\textit{Hint:}$ Determine the paths after finding the maximum flow.)
Suppose we already have a maximum flow $f$. Consider a new network with the same vertices and edges in which we set the capacity of edge $(u, v)$ to $f(u, v)$. Run Ford-Fulkerson, with the modification that we remove an edge once its flow reaches its capacity; in other words, if $f(u, v) = c(u, v)$, then no reverse edge appears in the residual network. This will still produce correct output in our case because we never exceed the actual maximum flow through an edge, so it is never advantageous to cancel flow. The augmenting paths chosen in this modified version of Ford-Fulkerson are precisely the ones we want. There are at most $|E|$ of them, because every augmenting path saturates at least one edge, bringing its flow up to the capacity we assigned (its flow in the maximum flow), and our modification prevents us from ever destroying this progress.
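The resulting decomposition can be illustrated with a sketch that splits a given acyclic flow into path flows; each round deletes at least one saturated edge, which is exactly the $\le |E|$ bound. The helper and the sample flow below are my own (the flow shown is one valid maximum flow of Figure 26.1(a)).

```python
def decompose_into_paths(flow, s, t):
    """Split an acyclic flow {(u, v): amount > 0} into at most |E| path flows.
    Assumes flow conservation at every vertex other than s and t."""
    f = {e: a for e, a in flow.items() if a > 0}
    paths = []
    while any(u == s for (u, _) in f):
        path = [s]
        while path[-1] != t:
            u = path[-1]
            # conservation guarantees an outgoing positive-flow edge
            path.append(next(v for (x, v) in f if x == u))
        b = min(f[(path[i], path[i + 1])] for i in range(len(path) - 1))
        for i in range(len(path) - 1):          # subtract the bottleneck;
            e = (path[i], path[i + 1])          # at least one edge hits 0
            f[e] -= b
            if f[e] == 0:
                del f[e]
        paths.append((path, b))
    return paths

max_flow = {('s', 'v1'): 12, ('s', 'v2'): 11, ('v1', 'v3'): 12,
            ('v2', 'v4'): 11, ('v4', 'v3'): 7, ('v4', 't'): 4,
            ('v3', 't'): 19}
paths = decompose_into_paths(max_flow, 's', 't')
```

Here `paths` contains three path flows whose amounts sum to the flow value $23$, well under the $|E| = 7$ bound.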
[]
false
[]
26-26.2-11
26
26.2
26.2-11
docs/Chap26/26.2.md
The **_edge connectivity_** of an undirected graph is the minimum number $k$ of edges that must be removed to disconnect the graph. For example, the edge connectivity of a tree is $1$, and the edge connectivity of a cyclic chain of vertices is $2$. Show how to determine the edge connectivity of an undirected graph $G = (V, E)$ by running a maximum-flow algorithm on at most $|V|$ flow networks, each having $O(V)$ vertices and $O(E)$ edges.
Create a directed version of the graph: replace each undirected edge $\\{u, v\\}$ with the two directed edges $(u, v)$ and $(v, u)$, resolve the resulting antiparallel pairs by splitting edges as in Section 26.1, and set every capacity to $1$. Fix any vertex $u$ that was not created by the antiparallel workaround as the source. For each of the remaining original vertices $v$, run a maximum-flow algorithm with sink $v$; by max-flow min-cut, that value is the minimum number of edges whose removal disconnects $v$ from $u$. Since any set of edges disconnecting the graph separates $u$ from some vertex, the edge connectivity is the minimum over all $|V| - 1$ of these maximum-flow values, and each network has $O(V)$ vertices and $O(E)$ edges.
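A sketch of this procedure (illustrative only, with a plain adjacency-matrix Edmonds-Karp; for simplicity it keeps antiparallel unit edges rather than splitting them, which does not change the maximum-flow values):

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp on an n x n capacity matrix (copied, so cap is unchanged)."""
    c = [row[:] for row in cap]
    flow = 0
    while True:
        par = [-1] * n                  # BFS for an augmenting path
        par[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if c[u][v] > 0 and par[v] == -1:
                    par[v] = u
                    q.append(v)
        if par[t] == -1:
            return flow
        b, v = float('inf'), t          # bottleneck along the path
        while v != s:
            b = min(b, c[par[v]][v])
            v = par[v]
        v = t                           # update residual capacities
        while v != s:
            c[par[v]][v] -= b
            c[v][par[v]] += b
            v = par[v]
        flow += b

def edge_connectivity(n, edges):
    """Undirected edges -> unit-capacity digraph; fix source 0, min over sinks."""
    cap = [[0] * n for _ in range(n)]
    for u, v in edges:
        cap[u][v] = cap[v][u] = 1
    return min(max_flow(n, cap, 0, t) for t in range(1, n))
```

For example, a path on three vertices has edge connectivity $1$, and a $4$-cycle has edge connectivity $2$.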
[]
false
[]
26-26.2-12
26
26.2
26.2-12
docs/Chap26/26.2.md
Suppose that you are given a flow network $G$, and $G$ has edges entering the source $s$. Let $f$ be a flow in $G$ in which one of the edges $(v, s)$ entering the source has $f(v, s) = 1$. Prove that there must exist another flow $f'$ with $f'(v, s) = 0$ such that $|f| = |f'|$. Give an $O(E)$-time algorithm to compute $f'$, given $f$, and assuming that all edge capacities are integers.
(Removed)
[]
false
[]
26-26.2-13
26
26.2
26.2-13
docs/Chap26/26.2.md
Suppose that you wish to find, among all minimum cuts in a flow network $G$ with integral capacities, one that contains the smallest number of edges. Show how to modify the capacities of $G$ to create a new flow network $G'$ in which any minimum cut in $G'$ is a minimum cut with the smallest number of edges in $G$.
(Removed)
[]
false
[]
26-26.3-1
26
26.3
26.3-1
docs/Chap26/26.3.md
Run the Ford-Fulkerson algorithm on the flow network in Figure 26.8 \(c\) and show the residual network after each flow augmentation. Number the vertices in $L$ top to bottom from $1$ to $5$ and in $R$ top to bottom from $6$ to $9$. For each iteration, pick the augmenting path that is lexicographically smallest.
First, we pick an augmenting path that passes through vertices 1 and 6. Then, we pick the path going through 2 and 8. Then, we pick the path going through 3 and 7. Then, the resulting residual graph has no path from $s$ to $t$. So, we know that we are done, and that we are pairing up vertices $(1, 6)$, $(2, 8)$, and $(3, 7)$. This number of unit augmenting paths agrees with the value of the cut where you cut the edges $(s, 3)$, $(6, t)$, and $(7, t)$.
[]
false
[]
26-26.3-2
26
26.3
26.3-2
docs/Chap26/26.3.md
Prove Theorem 26.10.
We proceed by induction on the number of iterations of the while loop of Ford-Fulkerson. After the first iteration, since $c$ only takes on integer values and $(u, v).f$ is set to $0$, $c_f$ only takes on integer values. Thus, lines 7 and 8 of Ford-Fulkerson only assign integer values to $(u, v).f$. Assume that $(u, v).f \in \mathbb Z$ for all $(u, v)$ after the $n$th iteration. On the $(n + 1)$th iteration, $c_f(p)$ is set to the minimum of $c_f(u, v)$, which is an integer by the induction hypothesis. Lines 7 and 8 compute $(u, v).f$ or $(v, u).f$; either way, these are the sum or difference of integers by assumption, so after the $(n + 1)$th iteration we have that $(u, v).f$ is an integer for all $(u, v) \in E$. Since the value of the flow is a sum of flows of edges, we must have $|f| \in \mathbb Z$ as well.
[]
false
[]
26-26.3-3
26
26.3
26.3-3
docs/Chap26/26.3.md
Let $G = (V, E)$ be a bipartite graph with vertex partition $V = L \cup R$, and let $G'$ be its corresponding flow network. Give a good upper bound on the length of any augmenting path found in $G'$ during the execution of $\text{FORD-FULKERSON}$.
(Removed)
[]
false
[]
26-26.3-4
26
26.3
26.3-4 $\star$
docs/Chap26/26.3.md
A **_perfect matching_** is a matching in which every vertex is matched. Let $G = (V, E)$ be an undirected bipartite graph with vertex partition $V = L \cup R$, where $|L| = |R|$. For any $X \subseteq V$, define the **_neighborhood_** of $X$ as $$N(X) = \\{y \in V: (x, y) \in E \text{ for some } x \in X\\},$$ that is, the set of vertices adjacent to some member of $X$. Prove **_Hall's theorem_**: there exists a perfect matching in $G$ if and only if $|A| \le |N(A)|$ for every subset $A \subseteq L$.
First suppose there exists a perfect matching in $G$. Then for any subset $A \subseteq L$, each vertex of $A$ is matched with a neighbor in $R$, and since it is a matching, no two such vertices are matched with the same vertex in $R$. Thus, there are at least $|A|$ vertices in the neighborhood of $A$. Now suppose that $|A| \le |N(A)|$ for all $A \subseteq L$. Run Ford-Fulkerson on the corresponding flow network. The flow is increased by $1$ each time an augmenting path is found, so it will suffice to show that this happens $|L|$ times. Suppose the while loop has run fewer than $|L|$ times, but there is no augmenting path. Then fewer than $|L|$ edges from $L$ to $R$ have flow $1$. Let $v_1 \in L$ be such that no edge from $v_1$ to a vertex in $R$ has nonzero flow. By assumption, $v_1$ has at least one neighbor $v_1' \in R$. If any of $v_1$'s neighbors are connected to $t$ in $G_f$ then there is an augmenting path, so assume this is not the case. Thus, there must be some edge $(v_2, v_1')$ with flow $1$. By assumption, $|N(\\{v_1, v_2\\})| \ge 2$, so there must exist $v_2' \ne v_1'$ such that $v_2' \in N(\\{v_1, v_2\\})$. If $(v_2', t)$ is an edge in the residual network we're done, since $v_2'$ is a neighbor of $v_1$ or of $v_2$; in either case $s$, $v_1$, $v_1'$, $v_2$, $v_2'$, $t$ contains an augmenting path in $G_f$. Otherwise $v_2'$ must have a neighbor $v_3 \in L$ such that $(v_3, v_2')$ is in $G_f$. Specifically, $v_3 \ne v_1$ since $(v_3, v_2')$ has flow $1$, and $v_3 \ne v_2$ since $(v_2, v_1')$ has flow $1$, so no more flow can leave $v_2$ without violating conservation of flow. Again by our hypothesis, $|N(\\{v_1, v_2, v_3\\})| \ge 3$, so there is another neighbor $v_3' \in R$. Continuing in this fashion, we keep building up the neighborhood $v_i'$, expanding each time we find that $(v_i', t)$ is not an edge in $G_f$. This cannot happen $|L|$ times, since we have run the Ford-Fulkerson while loop fewer than $|L|$ times, so there still exist edges into $t$ in $G_f$.
Thus, the process must stop at some vertex $v_k'$, and we obtain an augmenting path $$s, v_1, v_1', v_2, v_2', v_3, \ldots, v_k, v_k', t,$$ contradicting our assumption that there was no such path. Therefore the while loop runs at least $|L|$ times. By Corollary 26.3 the flow strictly increases each time, by $|f_p|$. By Theorem 26.10, $|f_p|$ is an integer; in particular, it is equal to $1$. This implies that $|f| \ge |L|$. It is clear that $|f| \le |L|$, so we must have $|f| = |L|$. By Corollary 26.11 this is the cardinality of a maximum matching. Since $|L| = |R|$, any maximum matching must be a perfect matching.
[]
false
[]
26-26.3-5
26
26.3
26.3-5 $\star$
docs/Chap26/26.3.md
We say that a bipartite graph $G = (V, E)$, where $V = L \cup R$, is **_$d$-regular_** if every vertex $v \in V$ has degree exactly $d$. Every $d$-regular bipartite graph has $|L| = |R|$. Prove that every $d$-regular bipartite graph has a matching of cardinality $|L|$ by arguing that a minimum cut of the corresponding flow network has capacity $|L|$.
We convert the bipartite graph into a flow network by adding a new source vertex with a unit-capacity edge to each vertex in $L$, and a new sink vertex with a unit-capacity edge from each vertex in $R$. We want to show that every cut has capacity at least $|L|$: by the max-flow min-cut theorem this gives a flow of value at least $|L|$, and by the integrality theorem all the flow values are integers, so the flow selects $|L|$ disjoint edges between $L$ and $R$. To see that every cut has capacity at least $|L|$, let $S_1$ be the side of the cut containing the source and $S_2$ the side containing the sink. The cut breaks the unit edge from the source to each vertex of $L \cap S_2$, of which there are $|L| - |L \cap S_1|$, and the unit edge to the sink from each vertex of $R \cap S_1$. It remains to show that at least $|L \cap S_1| - |R \cap S_1|$ edges cross from $L \cap S_1$ to $R \cap S_2$. Consider the set of neighbors of $L \cap S_1$: the $d|L \cap S_1|$ edges leaving $L \cap S_1$ land on vertices of $R$ of degree at most $d$ each, so $L \cap S_1$ has at least $|L \cap S_1|$ neighbors, for otherwise some vertex of $R$ would need degree higher than $d$. At most $|R \cap S_1|$ of these neighbors lie in $S_1$, so at least $|L \cap S_1| - |R \cap S_1|$ of them lie in $R \cap S_2$, and each contributes at least one edge crossing the cut. Hence the total number of edges that the cut breaks is at least $$(|L| - |L \cap S_1|) + (|L \cap S_1| - |R \cap S_1|) + |R \cap S_1| = |L|.$$ Since each of these edges has unit capacity, the capacity of the cut is at least $|L|$.
[]
false
[]
26-26.4-1
26
26.4
26.4-1
docs/Chap26/26.4.md
Prove that, after the procedure $\text{INITIALIZE-PREFLOW}(G, S)$ terminates, we have $s.e \le -|f^\*|$, where $f^\*$ is a maximum flow for $G$.
(Removed)
[]
false
[]
26-26.4-2
26
26.4
26.4-2
docs/Chap26/26.4.md
Show how to implement the generic push-relabel algorithm using $O(V)$ time per relabel operation, $O(1)$ time per push, and $O(1)$ time to select an applicable operation, for a total time of $O(V^2E)$.
We must select an appropriate data structure to store all the information which will allow us to select a valid operation in constant time. To do this, we will need to maintain a list of overflowing vertices. By Lemma 26.14, a push or a relabel operation always applies to an overflowing vertex. To determine which operation to perform, we need to determine whether $u.h = v.h + 1$ for some $v \in N(u)$. We'll do this by maintaining a list $u.high$ of all neighbors of $u$ in $G_f$ which have height greater than or equal to $u.h$. We'll update these attributes in the $\text{PUSH}$ and $\text{RELABEL}$ functions. It is clear from the pseudocode given for $\text{PUSH}$ that we can execute it in constant time, provided we maintain the attributes $u.e$, $c_f(u, v)$, $(u, v).f$, and $u.h$. Each time we call $\text{PUSH}(u, v)$ the result is that $u$ is no longer overflowing, so we must remove it from the list. Maintain a pointer $u.overflow$ to $u$'s position in the overflow list. If a vertex $u$ is not overflowing, set $u.overflow = \text{NIL}$. Next, check whether $v$ became overflowing; if so, insert $v$ at the head of the overflow list and set $v.overflow$ accordingly. Since we can update the pointer in constant time, and can delete from a linked list in constant time given a pointer to the element to be deleted, we can maintain the list in $O(1)$. The $\text{RELABEL}$ operation takes $O(V)$ time because we need to compute the minimum $v.h$ from among all $(u, v) \in E_f$, and there could be $|V| - 1$ many such $v$. We will also need to update $u.high$ during $\text{RELABEL}$: when $\text{RELABEL}(u)$ is called, set $u.high$ equal to the empty list, and for each vertex $v$ adjacent to $u$, if $v.h = u.h + 1$, add $u$ to the list $v.high$. Since this takes constant time per adjacent vertex, we can maintain the attributes in $O(V)$ per call to $\text{RELABEL}$.
[]
false
[]
26-26.4-3
26
26.4
26.4-3
docs/Chap26/26.4.md
Prove that the generic push-relabel algorithm spends a total of only $O(VE)$ time in performing all the $O(V^2)$ relabel operations.
(Removed)
[]
false
[]
26-26.4-4
26
26.4
26.4-4
docs/Chap26/26.4.md
Suppose that we have found a maximum flow in a flow network $G = (V, E)$ using a push-relabel algorithm. Give a fast algorithm to find a minimum cut in $G$.
(Removed)
[]
false
[]
26-26.4-5
26
26.4
26.4-5
docs/Chap26/26.4.md
Give an efficient push-relabel algorithm to find a maximum matching in a bipartite graph. Analyze your algorithm.
First, construct the flow network for the bipartite graph as in the previous section. Then relabel everything in $L$, and push from every vertex in $L$ to a vertex in $R$, so long as it is possible. We can keep track of the vertices of $L$ that are still overflowing with a simple bit vector. Then relabel everything in $R$ and push to the sink. Once these operations have been done, the only remaining valid operations are to relabel the vertices of $L$ that were unable to find an edge to push their flow along, which may then receive pushes back from $R$ to $L$. This continues until there are no more applicable operations. The total running time is $O(V(E + V))$.
[]
false
[]
26-26.4-6
26
26.4
26.4-6
docs/Chap26/26.4.md
Suppose that all edge capacities in a flow network $G = (V, E)$ are in the set $\\{1, 2, \ldots, k\\}$. Analyze the running time of the generic push-relabel algorithm in terms of $|V|$, $|E|$, and $k$. ($\textit{Hint:}$ How many times can each edge support a nonsaturating push before it becomes saturated?)
The number of relabel operations and saturating pushes is the same as before. An edge can handle at most $k$ nonsaturating pushes before it becomes saturated, so the number of nonsaturating pushes is at most $2k|V||E|$. Thus, the total number of basic operations is at most $2|V|^2 + 2|V||E| + 2k|V||E| = O(kVE)$.
[]
false
[]
26-26.4-7
26
26.4
26.4-7
docs/Chap26/26.4.md
Show that we could change line 6 of $\text{INITIALIZE-PREFLOW}$ to ```cpp 6 s.h = |G.V| - 2 ``` without affecting the correctness or asymptotic performance of the generic pushrelabel algorithm.
(Removed)
[ { "lang": "cpp", "code": "> 6 s.h = |G.V| - 2\n>" } ]
false
[]
26-26.4-8
26
26.4
26.4-8
docs/Chap26/26.4.md
Let $\delta_f(u, v)$ be the distance (number of edges) from $u$ to $v$ in the residual network $G_f$. Show that the $\text{GENERIC-PUSH-RELABEL}$ procedure maintains the properties that $u.h < |V|$ implies $u.h \le \delta_f(u, t)$ and that $u.h \ge |V|$ implies $u.h - |V| \le \delta_f(u, s)$.
We'll prove the claim by induction on the number of push and relabel operations. Initially, we have $u.h = |V|$ if $u = s$ and $0$ otherwise. We have $s.h - |V| = 0 \le \delta_f(s, s) = 0$ and $u.h = 0 \le \delta_f(u, t)$ for all $u \ne s$, so the claim holds prior to the first iteration of the while loop on line 2 of the $\text{GENERIC-PUSH-RELABEL}$ algorithm. Suppose that the properties have been maintained thus far. If the next iteration is a nonsaturating push then the properties are maintained because the heights and the existence of edges in the residual network are preserved. If it is a saturating push then edge $(u, v)$ is removed from the residual network, which can only increase $\delta_f(u, t)$ and $\delta_f(u, s)$, so the properties are maintained regardless of the height of $u$. Now suppose that the next iteration causes a relabel of vertex $u$. For all $v$ such that $(u, v) \in E_f$ we must have $u.h \le v.h$. Let $v'$ be a vertex with $(u, v') \in E_f$ such that $v'.h = \min\\{v.h \mid (u, v) \in E_f\\}$. There are two cases to consider. - First, suppose that $v'.h < |V|$. Then after relabeling we have $$u.h = 1 + v'.h \le 1 + \min_{(u, v) \in E_f} \delta_f(v, t) = \delta_f(u, t).$$ - Second, suppose that $v'.h \ge |V|$. Then after relabeling we have $$u.h = 1 + v'.h \le 1 + |V| + \min_{(u, v) \in E_f} \delta_f(v, s) = \delta_f(u, s) + |V|,$$ which implies that $u.h - |V| \le \delta_f(u, s)$. Therefore, the $\text{GENERIC-PUSH-RELABEL}$ procedure maintains the desired properties.
[]
false
[]
26-26.4-9
26
26.4
26.4-9 $\star$
docs/Chap26/26.4.md
As in the previous exercise, let $\delta_f(u, v)$ be the distance from $u$ to $v$ in the residual network $G_f$. Show how to modify the generic push-relabel algorithm to maintain the property that $u.h < |V|$ implies $u.h = \delta_f(u, t)$ and that $u.h \ge |V|$ implies $u.h - |V| = \delta_f(u, s)$. The total time that your implementation dedicates to maintaining this property should be $O(VE)$.
What we should do is, for successive backward neighborhoods of $t$ (that is, level by level of a breadth-first search from $t$ in the reverse residual network), relabel everything in that neighborhood. This takes at most $O(VE)$ time (see Exercise 26.4-3). It also has the upshot that, once we are done, every vertex's height is equal to the quantity $\delta_f(u, t)$. Then, since we begin with equality, the inductive step from the solution to the previous exercise shows that this equality is preserved.
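The level-by-level relabeling amounts to a backward breadth-first search from $t$ in the residual network. A minimal Python sketch (the function name, adjacency-matrix representation, and example graph are assumptions):

```python
from collections import deque

def exact_height_labels(n, res, t):
    """Set each vertex's height to its BFS distance to t in the residual
    graph, as the global relabeling described above would.  res[u][v] > 0
    means (u, v) is a residual edge.  Vertices that cannot reach t are left
    at height n (they would instead be labeled by distance from s)."""
    h = [n] * n
    h[t] = 0
    q = deque([t])
    while q:
        v = q.popleft()
        for u in range(n):            # scan residual edges (u, v) entering v
            if res[u][v] > 0 and h[u] == n:
                h[u] = h[v] + 1       # u is one level farther from t
                q.append(u)
    return h
```

Each residual edge is examined a constant number of times, so one global relabeling costs $O(V + E)$ (here $O(V^2)$ because of the matrix representation).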
[]
false
[]
26-26.4-10
26
26.4
26.4-10
docs/Chap26/26.4.md
Show that the number of nonsaturating pushes executed by the $\text{GENERIC-PUSH-RELABEL}$ procedure on a flow network $G = (V, E)$ is at most $4|V|^2|E|$ for $|V| \ge 4$.
Each vertex has maximum height $2|V| - 1$. Since heights don't decrease, and there are $|V| - 2$ vertices which can be overflowing, the maximum contribution of relabels to $\Phi$ over all vertices is $(2|V| - 1)(|V| - 2)$. A saturating push from $u$ to $v$ increases $\Phi$ by at most $v.h \le 2|V| - 1$, and there are at most $2|V||E|$ saturating pushes, so the total contribution of all saturating pushes to $\Phi$ is at most $(2|V| - 1)(2|V||E|)$. Since each nonsaturating push decrements $\Phi$ by at least one and $\Phi$ must equal zero upon termination, the number of nonsaturating pushes is at most $$(2|V| - 1)(|V| - 2) + (2|V| - 1)(2|V||E|) = 4|V|^2|E| + 2|V|^2 - 5|V| + 2 - 2|V||E|.$$ Using the facts that $|E| \ge |V| - 1$ and $|V| \ge 4$, we can bound the number of nonsaturating pushes by $4|V|^2|E|$.
[]
false
[]
26-26.5-1
26
26.5
26.5-1
docs/Chap26/26.5.md
Illustrate the execution of $\text{RELABEL-TO-FRONT}$ in the manner of Figure 26.10 for the flow network in Figure 26.1(a). Assume that the initial ordering of vertices in $L$ is $\langle v_1, v_2, v_3, v_4 \rangle$ and that the neighbor lists are $$ \begin{aligned} v_1.N & = \langle s, v_2, v_3 \rangle, \\\\ v_2.N & = \langle s, v_1, v_3, v_4 \rangle, \\\\ v_3.N & = \langle v_1, v_2, v_4, t \rangle, \\\\ v_4.N & = \langle v_2, v_3, t \rangle. \end{aligned} $$
When we initialize the preflow, we have $29$ units of flow leaving $s$. Then, we consider $v_1$ since it is the first element in the list $L$. When we discharge it, we increase its height to $1$ so that it can send $12$ units of its excess along its edge to vertex $v_3$; to discharge the rest, it has to increase its height to $|V| + 1$ to send the excess back to $s$. It was already at the front, so we consider $v_2$ next. We increase its height to $1$ and send all of its excess along its edge to $v_4$. We move it to the front, which means we next consider $v_1$, and do nothing because it is not overflowing. Up next is vertex $v_3$. After increasing its height to $1$, it can send all of its excess to $t$. This puts $v_3$ at the front, and we consider the non-overflowing vertices $v_2$ and $v_1$. Then, we consider $v_4$: it increases its height to $1$ and sends $4$ units to $t$. Since it still has an excess of $9$ units, it increases its height once again, which makes it valid to send flow back to $v_2$ or to $v_3$. It considers $v_2$ first because of the ordering of its neighbor list, so $9$ units of flow are pushed back to $v_2$. Since $v_4.h$ increased, $v_4$ moves to the front of the list. Then, we consider $v_2$ since it is the only vertex still overflowing. It increases its height to $3$ so that it can send its $9$ units of excess to $v_4$. Its height increased, so it goes to the front of the list. Then, we consider $v_4$, which is overflowing: it pushes $7$ units to $v_3$. Since it is still overflowing by $2$, it increases its height to $4$, pushes the rest back to $v_2$, and goes to the front of the list. Up next is $v_2$, which increases its height by $2$ to send its overflow to $v_4$.
The excess flow keeps bobbing around the four vertices, each time requiring them to increase their height a bit to discharge to a neighbor, only to have that neighbor increase its height to discharge it back, until $v_2$ has increased in height enough to send all of its excess back to $s$. Last but not least, $v_3$ pushes its overflow of $7$ units to $t$, giving a maximum flow of $23$.
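The final flow value in this trace can be checked mechanically. The following is a hedged Python sketch of $\text{RELABEL-TO-FRONT}$; it scans neighbors in index order rather than using the neighbor lists from the exercise, which can change the intermediate steps but not the final flow value. The capacities of the Figure 26.1 network used in the check are recalled from the text as an assumption.

```python
def relabel_to_front(n, cap, s, t):
    """RELABEL-TO-FRONT sketch: keep a list L of non-source, non-sink
    vertices; discharge each in turn, moving a vertex to the front of L
    (and resuming the scan just after it) whenever a discharge relabels it."""
    h = [0] * n
    e = [0] * n
    f = [[0] * n for _ in range(n)]
    h[s] = n                              # INITIALIZE-PREFLOW
    for v in range(n):
        if cap[s][v] > 0:
            f[s][v] = cap[s][v]
            f[v][s] = -cap[s][v]
            e[v] = cap[s][v]
            e[s] -= cap[s][v]
    L = [v for v in range(n) if v not in (s, t)]
    i = 0
    while i < len(L):
        u = L[i]
        old_h = h[u]
        while e[u] > 0:                   # DISCHARGE(u)
            for v in range(n):
                if cap[u][v] - f[u][v] > 0 and h[u] == h[v] + 1:
                    d = min(e[u], cap[u][v] - f[u][v])   # PUSH(u, v)
                    f[u][v] += d
                    f[v][u] -= d
                    e[u] -= d
                    e[v] += d
                    break
            else:                         # no admissible edge: RELABEL(u)
                h[u] = 1 + min(h[v] for v in range(n)
                               if cap[u][v] - f[u][v] > 0)
        if h[u] > old_h:                  # u was relabeled:
            L.insert(0, L.pop(i))         # move it to the front of L
            i = 0                         # and resume the scan just after it
        i += 1
    return sum(f[s][v] for v in range(n))
```

With $s, v_1, v_2, v_3, v_4, t$ numbered $0$ through $5$ and the assumed Figure 26.1 capacities, this returns $23$, agreeing with the trace.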
[]
false
[]
26-26.5-2
26
26.5
26.5-2 $\star$
docs/Chap26/26.5.md
We would like to implement a push-relabel algorithm in which we maintain a first-in, first-out queue of overflowing vertices. The algorithm repeatedly discharges the vertex at the head of the queue, and any vertices that were not overflowing before the discharge but are overflowing afterward are placed at the end of the queue. After the vertex at the head of the queue is discharged, it is removed. When the queue is empty, the algorithm terminates. Show how to implement this algorithm to compute a maximum flow in $O(V^3)$ time.
Initially, the vertices adjacent to $s$ are the only ones which are overflowing. The implementation is as follows: ```cpp PUSH-RELABEL-QUEUE(G, s) INITIALIZE-PREFLOW(G, s) let q be a new empty queue for v ∈ G.Adj[s] PUSH(q, v) while q.head != NIL DISCHARGE(q.head) POP(q) ``` Note that we need to modify the $\text{DISCHARGE}$ algorithm to push vertices $v$ onto the queue if $v$ was not overflowing before a discharge but is overflowing after one. Between lines 7 and 8 of $\text{DISCHARGE}(u)$, add the line "if $v.e > 0$, $\text{PUSH}(q, v)$." This is an implementation of the generic push-relabel algorithm, so we know it is correct. The analysis of runtime is almost identical to that of Theorem 26.30. We just need to verify that there are at most $|V|$ calls to $\text{DISCHARGE}$ between two consecutive relabel operations. Observe that after calling $\text{PUSH}(u, v)$, Corollary 26.28 tells us that no admissible edges are entering $v$. Thus, once $v$ is put into the queue because of the push, it can't be added again until it has been relabeled. Thus, at most $|V|$ vertices are added to the queue between relabel operations.
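A runnable Python rendering of this queue-based variant may help; it is a sketch, enqueueing a vertex at the moment a push makes it overflow (equivalent to the end-of-discharge bookkeeping above), and the Figure 26.1 capacities used in the check are an assumption.

```python
from collections import deque

def fifo_push_relabel(n, cap, s, t):
    """Push-relabel with a FIFO queue of overflowing vertices.
    cap is an n x n capacity matrix; returns the maximum flow value."""
    h = [0] * n                           # heights
    e = [0] * n                           # excess flow
    f = [[0] * n for _ in range(n)]       # skew-symmetric flow
    h[s] = n                              # INITIALIZE-PREFLOW
    q = deque()
    for v in range(n):
        if cap[s][v] > 0:
            f[s][v] = cap[s][v]
            f[v][s] = -cap[s][v]
            e[v] = cap[s][v]
            e[s] -= cap[s][v]
            if v != t:
                q.append(v)               # neighbors of s start out overflowing
    while q:
        u = q.popleft()
        while e[u] > 0:                   # DISCHARGE(u)
            pushed = False
            for v in range(n):
                if cap[u][v] - f[u][v] > 0 and h[u] == h[v] + 1:
                    if v not in (s, t) and e[v] == 0:
                        q.append(v)       # v becomes overflowing: enqueue it
                    d = min(e[u], cap[u][v] - f[u][v])   # PUSH(u, v)
                    f[u][v] += d
                    f[v][u] -= d
                    e[u] -= d
                    e[v] += d
                    pushed = True
                    if e[u] == 0:
                        break
            if not pushed:                # no admissible edge: RELABEL(u)
                h[u] = 1 + min(h[v] for v in range(n)
                               if cap[u][v] - f[u][v] > 0)
    return sum(f[s][v] for v in range(n))
```

On the assumed Figure 26.1 network (vertices $s, v_1, v_2, v_3, v_4, t$ numbered $0$ through $5$), this returns the familiar maximum flow of $23$.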
[ { "lang": "cpp", "code": "PUSH-RELABEL-QUEUE(G, s)\n INITIALIZE-PREFLOW(G, s)\n let q be a new empty queue\n for v ∈ G.Adj[s]\n PUSH(q, v)\n while q.head != NIL\n DISCHARGE(q.head)\n POP(q)" } ]
false
[]
26-26.5-3
26
26.5
26.5-3
docs/Chap26/26.5.md
Show that the generic algorithm still works if $\text{RELABEL}$ updates $u.h$ by simply computing $u.h = u.h + 1$. How would this change affect the analysis of $\text{RELABEL-TO-FRONT}$?
If we change relabel to just increment the value of $u.h$, we do not ruin the correctness of the algorithm. Since relabeling applies only when $u.h \le v.h$ for every $v$ with $(u, v) \in E_f$, we never create a graph where $h$ ceases to be a height function: $u.h$ only ever increases by exactly $1$ per relabel, ensuring that $u.h \le v.h + 1$ still holds afterward. This means that Lemmas 26.15 and 26.16 still hold. Even Corollary 26.21 holds, since all it relies on is that a relabel causes some vertex's $h$ value to increase by at least $1$; it still works when every relabel increases it by exactly $1$. However, Lemma 26.28 no longer holds. That is, it may require more than a single relabel operation to cause an admissible edge to appear if, for example, $u.h$ was strictly less than the $h$ values of all its neighbors. However, this lemma is not used in the proof of Exercise 26.4-3, which bounds the number of relabel operations. Since the number of relabel operations has the same bound, and we can simulate the old relabel operation by doing (possibly many) of these new relabel operations, we have the same bound as for the original algorithm with this different relabel operation.
[]
false
[]
26-26.5-4
26
26.5
26.5-4 $\star$
docs/Chap26/26.5.md
Show that if we always discharge a highest overflowing vertex, we can make the push-relabel method run in $O(V^3)$ time.
We'll keep track of the heights of the overflowing vertices using an array and a series of doubly linked lists. In particular, let $A$ be an array of size $|V|$, and let $A[i]$ store a list of the elements of height $i$. Now we create another list $L$, which is a list of lists. The head points to the list containing the vertices of highest height. The next pointer of this list points to the next nonempty list stored in $A$, and so on. This allows for constant time insertion of a vertex into $A$, and also constant time access to an element of largest height, and because all lists are doubly linked, we can add and delete elements in constant time. Essentially, we are implementing the algorithm of Exercise 26.5-2, but with the queue replaced by a priority queue with constant time operations. As before, it will suffice to show that there are at most $|V|$ calls to $\text{DISCHARGE}$ between consecutive relabel operations. Consider what happens when a vertex $v$ is put into the priority queue. There must exist a vertex $u$ for which we have called $\text{PUSH}(u, v)$. After this, no admissible edge is entering $v$, so it can't be added to the priority queue again until after a relabel operation has occurred on $v$. Moreover, every call to $\text{DISCHARGE}$ terminates with a $\text{PUSH}$, so for every call to $\text{DISCHARGE}$ there is another vertex which can't be added until a relabel operation occurs. After $|V|$ $\text{DISCHARGE}$ operations and no relabel operations, there are no remaining valid $\text{PUSH}$ operations, so either the algorithm terminates, or there is a valid relabel operation which is performed. Thus, there are $O(V^3)$ calls to $\text{DISCHARGE}$. By carrying out the rest of the analysis of Theorem 26.30, we conclude that the runtime is $O(V^3)$.
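A short Python sketch of the highest-label rule follows; to keep it small it selects the highest overflowing vertex by a linear scan instead of the bucket structure described above (so it demonstrates the selection rule but not the $O(V^3)$ bookkeeping), and the Figure 26.1 capacities in the check are an assumption.

```python
def highest_label_push_relabel(n, cap, s, t):
    """Push-relabel that always discharges a highest overflowing vertex.
    cap is an n x n capacity matrix; returns the maximum flow value."""
    h = [0] * n
    e = [0] * n
    f = [[0] * n for _ in range(n)]
    h[s] = n                              # INITIALIZE-PREFLOW
    for v in range(n):
        if cap[s][v] > 0:
            f[s][v] = cap[s][v]
            f[v][s] = -cap[s][v]
            e[v] = cap[s][v]
            e[s] -= cap[s][v]

    def highest_overflowing():
        # Linear-scan stand-in for the doubly-linked bucket lists above.
        best = None
        for v in range(n):
            if v not in (s, t) and e[v] > 0 and (best is None or h[v] > h[best]):
                best = v
        return best

    u = highest_overflowing()
    while u is not None:
        pushed = False
        for v in range(n):
            if cap[u][v] - f[u][v] > 0 and h[u] == h[v] + 1:
                d = min(e[u], cap[u][v] - f[u][v])   # PUSH(u, v)
                f[u][v] += d
                f[v][u] -= d
                e[u] -= d
                e[v] += d
                pushed = True
                break
        if not pushed:                    # no admissible edge: RELABEL(u)
            h[u] = 1 + min(h[v] for v in range(n) if cap[u][v] - f[u][v] > 0)
        u = highest_overflowing()
    return sum(f[s][v] for v in range(n))
```

The selection rule is the only difference from the generic algorithm, so correctness is unchanged; only the operation count improves.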
[]
false
[]
26-26.5-5
26
26.5
26.5-5
docs/Chap26/26.5.md
Suppose that at some point in the execution of a push-relabel algorithm, there exists an integer $0 < k \le |V| - 1$ for which no vertex has $v.h = k$. Show that all vertices with $v.h > k$ are on the source side of a minimum cut. If such a $k$ exists, the **_gap heuristic_** updates every vertex $v \in V - \\{s\\}$ for which $v.h > k$, to set $v.h = \max(v.h, |V| + 1)$. Show that the resulting attribute $h$ is a height function. (The gap heuristic is crucial in making implementations of the push-relabel method perform well in practice.)
Suppose, in order to obtain a contradiction, that there were some minimum cut for which a vertex with $v.h > k$ were on the sink side of that cut. For that minimum cut, there is a residual flow network in which the cut is saturated. If any vertex on the sink side of the cut has an edge going to $v$ in this residual network, then since its $h$ value cannot equal $k$, it must be greater than $k$, because it can be at most one less than $v.h$. Continuing in this way, let $S$ be the set of vertices on the sink side of the cut whose $h$ value is greater than $k$. Suppose there were some simple path in the residual network from a vertex in $S$ to $t$. At each step along such a path the height can decrease by at most $1$, and it cannot get from above $k$ down to $t.h = 0$ without some vertex having height exactly $k$; hence there is no path in the residual flow network from a vertex in $S$ to $t$. Since for a maximum flow the vertices that cannot reach $t$ in the residual network lie on the source side of a minimum cut, there is a minimum cut for which $S$ lies entirely on the source side. This contradicts how we selected $v$, and so proves the first claim. Now we show that after updating the $h$ values as suggested, we are still left with a height function. Suppose we had an edge $(u, v)$ in the residual graph; we knew from before that $u.h \le v.h + 1$. If $u.h > k$, then $v.h \ge u.h - 1 \ge k$, and since no vertex has height exactly $k$, we must have $v.h > k$ as well. Both heights then become $\max(\cdot, |V| + 1)$: if the new $u.h$ equals $|V| + 1$ it is at most the new $v.h + 1$, and otherwise $u.h$ is unchanged while $v.h$ has not decreased, so the inequality still holds. If only $v.h$ is above $k$, then $v.h$ has not decreased, so the inequality also still holds. Since we have not changed the values of $s.h$ and $t.h$, we have all the required properties of a height function after modifying the $h$ values as described.
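The gap-heuristic update itself is mechanical. A minimal Python sketch (the function name and the example height arrays are hypothetical):

```python
def apply_gap_heuristic(h, s):
    """Given heights h (with h[s] = |V|), look for a gap: a level k with
    0 < k <= |V| - 1 that no vertex other than s occupies.  If one exists,
    lift every vertex other than s whose height exceeds k to
    max(h[v], |V| + 1); otherwise return the heights unchanged."""
    n = len(h)
    occupied = {h[v] for v in range(n) if v != s}
    for k in range(1, n):                 # candidate gap levels 1 .. |V| - 1
        if k not in occupied:
            return [h[v] if v == s or h[v] <= k else max(h[v], n + 1)
                    for v in range(n)]
    return h[:]                           # no gap: heights are unchanged
```

For instance, with $|V| = 6$, $s$ at height $6$, and remaining heights $4, 4, 1, 0, 0$, the level $k = 2$ is empty, so the two vertices at height $4$ are lifted to $|V| + 1 = 7$.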
[]
false
[]