The textbooks used in this course are Cornell's Algorithm Design and MIT's Introduction to Algorithms. These notes follow the books' definitions of variables and terms directly and do not restate them here.
Lecture 5
- Steps for solving DP problems:
- Describe subproblem;
- Describe base case;
- Describe recurrence;
- Analyze complexity;
- Write pseudo code.
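As a concrete illustration of the steps above, here is a minimal Python sketch for the 0-1 knapsack problem (the example problem and all names are my own, not from the lecture): the subproblem dp[i][w] is the best value achievable with the first i items and capacity w, the base case is dp[0][w] = 0, the recurrence either skips or takes item i, and the complexity is O(nW).

    def knapsack(values, weights, W):
        # Subproblem: dp[i][w] = best value using the first i items with capacity w.
        n = len(values)
        # Base case: dp[0][w] = 0 (no items chosen yet).
        dp = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for w in range(W + 1):
                # Recurrence: skip item i-1, or take it if it fits.
                dp[i][w] = dp[i - 1][w]
                if weights[i - 1] <= w:
                    dp[i][w] = max(dp[i][w], dp[i - 1][w - weights[i - 1]] + values[i - 1])
        # Complexity: O(nW) time and space.
        return dp[n][W]

    print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
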
Lecture 6
- Shortest path:
- Single source & positive-weighted: Dijkstra’s Algorithm;
- Single source & negative-weighted: Bellman-Ford Algorithm;
- Subproblem: D[v, k] = length of the shortest path from the source s to v using at most k edges;
- Case 1: D[v, k] = D[v, k - 1], the path uses at most k - 1 edges;
- Case 2: D[v, k] = min_{(w, v) ∈ E} (D[w, k - 1] + c_wv), the path uses exactly k edges;
- Goal: compute D[t, V - 1];
- Complexity: O(EV);
- Negative cycle? Run one more round with k = V; if D decreases for some vertex, then yes.
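A minimal Python sketch of Bellman-Ford following the recurrence above, with the extra round for negative-cycle detection (the graph representation and names are my own; the k dimension is rolled into a single array):

    def bellman_ford(n, edges, s):
        # n vertices 0..n-1, edges = [(u, v, w)], source s.
        INF = float("inf")
        D = [INF] * n          # D[v] ~ shortest distance to v found so far
        D[s] = 0
        for _ in range(n - 1):                      # rounds k = 1 .. V-1
            for u, v, w in edges:
                if D[u] + w < D[v]:                 # Case 2: relax edge (u, v)
                    D[v] = D[u] + w
        # One more round (k = V): any further decrease means a negative cycle.
        has_negative_cycle = any(D[u] + w < D[v] for u, v, w in edges if D[u] < INF)
        return D, has_negative_cycle

    dist, neg = bellman_ford(4, [(0, 1, 1), (1, 2, -2), (2, 3, 3)], 0)
    print(dist, neg)  # [0, 1, -1, 2] False
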
- All-pairs: 1. Run Bellman-Ford V times: O(EV²); 2. Floyd-Warshall Algorithm;
- D[i, j, k] = min(D[i, k, k - 1] + D[k, j, k - 1], D[i, j, k - 1]), where D[i, j, k] is the shortest i-j distance using only intermediate vertices 1..k;
- Pseudo code:
D[i,j,0] = c[i,j] for all i and j
for k in V:
    for i in V:
        for j in V:
            D[i,j,k] = min(D[i,k,k-1] + D[k,j,k-1], D[i,j,k-1])
- Complexity: O(V³)
- Negative cycle? If the diagonal has negative elements, then yes;
- Extract the shortest path? Every time we update D[i, j], we set P[i, j] to k. Then we recursively compute the shortest path from i to k = P[i, j] and the path from k = P[i, j] to j.
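A minimal Python sketch of Floyd-Warshall with the predecessor matrix P described above (variable names are my own; the k dimension is rolled into a single 2-D table):

    def floyd_warshall(n, c):
        # c[i][j] = edge cost (float('inf') if no edge), c[i][i] = 0.
        D = [row[:] for row in c]
        P = [[None] * n for _ in range(n)]   # P[i][j] = intermediate vertex k, or None
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if D[i][k] + D[k][j] < D[i][j]:
                        D[i][j] = D[i][k] + D[k][j]
                        P[i][j] = k
        # Negative cycle iff some diagonal entry became negative.
        return D, P, any(D[i][i] < 0 for i in range(n))

    def path(i, j, P):
        # Intermediate vertices on the shortest i -> j path, in order.
        k = P[i][j]
        return [] if k is None else path(i, k, P) + [k] + path(k, j, P)

    INF = float("inf")
    c = [[0, 3, INF, 7], [8, 0, 2, INF], [5, INF, 0, 1], [2, INF, INF, 0]]
    D, P, neg = floyd_warshall(4, c)
    print(D[1][3], path(1, 3, P))  # 3 [2]
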
Lecture 7
- Max-flow problem: Ford-Fulkerson Algorithm:
- Complexity: O(|f|(E + V));
- Edmonds-Karp Algorithm: if we replace DFS with BFS (always augment along a shortest path), the complexity becomes O(E²V);
- Capacity-Scaling Algorithm: if we choose the augmenting path with the highest bottleneck capacity, the complexity becomes O(E² log|f|).
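A minimal Python sketch of the BFS-based (Edmonds-Karp) variant above, using an adjacency-matrix residual graph (the representation is my own choice; cap is modified in place):

    from collections import deque

    def edmonds_karp(cap, s, t):
        # cap: n x n capacity matrix; returns the value of the max s-t flow.
        n = len(cap)
        flow = 0
        while True:
            # BFS for a shortest augmenting path in the residual graph.
            parent = [-1] * n
            parent[s] = s
            q = deque([s])
            while q and parent[t] == -1:
                u = q.popleft()
                for v in range(n):
                    if parent[v] == -1 and cap[u][v] > 0:
                        parent[v] = u
                        q.append(v)
            if parent[t] == -1:        # no augmenting path left: flow is maximum
                return flow
            # Bottleneck capacity along the path found.
            bottleneck, v = float("inf"), t
            while v != s:
                bottleneck = min(bottleneck, cap[parent[v]][v])
                v = parent[v]
            # Augment: update residual capacities.
            v = t
            while v != s:
                cap[parent[v]][v] -= bottleneck
                cap[v][parent[v]] += bottleneck
                v = parent[v]
            flow += bottleneck

    cap = [[0, 3, 2, 0], [0, 0, 1, 2], [0, 0, 0, 3], [0, 0, 0, 0]]
    print(edmonds_karp(cap, 0, 3))  # 5
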
- Max-flow theorem: the following conditions are equivalent for any f:
- ∃ cut(A, B) s.t. cap(A, B) = |f|;
- f is a max-flow;
- There is no augmenting path with respect to f.
Value of the max-flow = capacity of the min-cut.
- Applications of max-flow:
- Bipartite max matching: |f| ≤ V, V’ = 2V, so complexity: O(V(E + 2V)) = O(EV).
- Maximum number of edge-disjoint paths. Complexity: O(EV).
Lecture 8
- Image segmentation problem: solved by min-cut problem.
- Circulation problem can be reduced to max-flow problem: there is a feasible circulation in G if and only if the max-flow can achieve Σ_{d(v)>0} d(v).
- Circulation problem with lower bounds can be reduced to the circulation problem without lower bounds: push the lower bound of flow through each edge, then adjust the capacity of each edge and the demand of each vertex accordingly.
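A minimal sketch of this lower-bound reduction (my own encoding: edges is a dict {(u, v): (lower, cap)}, demand maps each vertex to d(v), with d(v) > 0 meaning required net inflow):

    def remove_lower_bounds(edges, demand):
        # Push the lower bound through every edge, then return the adjusted
        # capacities and demands of the resulting circulation instance.
        new_cap, new_demand = {}, dict(demand)
        for (u, v), (lower, cap) in edges.items():
            new_cap[(u, v)] = cap - lower                      # remaining capacity
            new_demand[v] = new_demand.get(v, 0) - lower       # v already receives `lower`
            new_demand[u] = new_demand.get(u, 0) + lower       # u already sends `lower`
        return new_cap, new_demand

    def to_max_flow(new_cap, new_demand):
        # As in the previous item: add a super source s and sink t; a feasible
        # circulation exists iff the max s-t flow saturates all edges out of s.
        cap = dict(new_cap)
        for v, d in new_demand.items():
            if d < 0:
                cap[("s", v)] = -d        # supply vertices are fed from s
            elif d > 0:
                cap[(v, "t")] = d         # demand vertices drain into t
        return cap
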
- Survey Design Problem.
Lecture 9
- Undecidable problems: no computer program can always give the right answer; on some inputs any program either gives a wrong answer or runs forever. NP-Hard contains undecidable problems.
- Example: The halting problem.
- To prove X is NP-Complete, we first prove X is in NP (a claimed answer can be verified in polynomial time), then we prove X is NP-Hard (pick a known NP-Complete problem Y and show Y ≤p X).
- NP-Complete problems are the most difficult NP problems.
- Reductions of NP-Complete problems:
- SAT is NP-Complete (taken without proof);
- 3-SAT ≤p Independent set (construction sketched after this list);
- Independent set ≤p Vertex cover;
- Vertex cover ≤p Vertex cover-even;
- 3-SAT ≤p 3-colorable;
- 3-SAT ≤p Hamiltonian Cycle (without proof).
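A minimal sketch of the 3-SAT ≤p Independent set construction referenced above (the encoding is the standard one; all names are my own): one vertex per literal occurrence, a triangle per clause, and an edge between every pair of complementary literals; the formula is satisfiable iff the graph has an independent set of size equal to the number of clauses.

    def sat_to_independent_set(clauses):
        # clauses: list of 3-tuples of literals, e.g. (1, -2, 3) means (x1 or not x2 or x3).
        # Returns (vertices, edges, k) of the Independent Set instance.
        vertices, edges = [], set()
        for i in range(len(clauses)):
            vertices += [(i, 0), (i, 1), (i, 2)]
            # Triangle inside each clause: at most one literal per clause can be picked.
            edges |= {((i, 0), (i, 1)), ((i, 1), (i, 2)), ((i, 0), (i, 2))}
        # Conflict edges between complementary literals in different clauses.
        for i, ci in enumerate(clauses):
            for j, cj in enumerate(clauses):
                if i < j:
                    for p, li in enumerate(ci):
                        for q, lj in enumerate(cj):
                            if li == -lj:
                                edges.add(((i, p), (j, q)))
        return vertices, edges, len(clauses)   # target size k = number of clauses
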
Lecture 10
- Standard linear programming problem in matrix form:
max(cᵀx),
subject to: Ax ≤ b, x ≥ 0.
- Linear programming can be used to solve the max-flow and shortest-path problems.
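As an illustration of the shortest-path LP (not from the lecture; assumes SciPy is installed, and the graph is a made-up example): maximize d_t subject to d_v - d_u ≤ c_uv for every edge (u, v) and d_s = 0; with non-negative weights the optimum equals the shortest s-t distance.

    import numpy as np
    from scipy.optimize import linprog

    edges = [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 2.0), (1, 3, 6.0), (2, 3, 3.0)]
    n, s, t = 4, 0, 3
    # Variables d_0 .. d_3; linprog minimizes, so minimize -d_t to maximize d_t.
    c = np.zeros(n)
    c[t] = -1.0
    # One constraint per edge: d_v - d_u <= w.
    A_ub = np.zeros((len(edges), n))
    b_ub = np.zeros(len(edges))
    for row, (u, v, w) in enumerate(edges):
        A_ub[row, u], A_ub[row, v], b_ub[row] = -1.0, 1.0, w
    # Pin d_s = 0; the other distances are non-negative and unbounded above.
    bounds = [(0, 0) if v == s else (0, None) for v in range(n)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(-res.fun)  # 6.0, the shortest 0 -> 3 distance (via 1 and 2)
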
- Integer linear programming (ILP) is in NP-Hard.
- Independent set ≤p ILP;
- ILP can be used to solve 0-1 Knapsack.
- The dual of standard linear programming problem:
min(bᵀy),
subject to: Aᵀy ≥ c, y ≥ 0.
- Weak duality: if x is a feasible solution of the primal and y is a feasible solution of the dual, then cᵀx ≤ bᵀy;
- Strong duality: if the primal has a finite optimum, then so does the dual and opt(primal) = opt(dual);
- Which combinations of primal (P, rows) and dual (D, columns) status are possible; F.B. = feasible & bounded, F.U. = feasible & unbounded, I. = infeasible:

  P \ D | F.B. | F.U. | I.
  F.B.  | Yes  | No   | No
  F.U.  | No   | No   | Yes
  I.    | No   | Yes  | Yes
- General form of primal and dual:

  Primal     | Dual
  max(cᵀx)   | min(b₁ᵀy₁ + b₂ᵀy₂ + b₃ᵀy₃)
  A₁x ≤ b₁   | A₁ᵀy₁ + A₂ᵀy₂ + A₃ᵀy₃ ≥ c
  A₂x = b₂   | y₁ ≥ 0
  A₃x ≥ b₃   | y₂ unrestricted
  x ≥ 0      | y₃ ≤ 0
Lecture 11
- Classic approximation algorithms:
- 2-approximation vertex cover: repeatedly pick an uncovered edge and take both endpoints (the picked edges form a maximal matching);
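A minimal Python sketch of this matching-based 2-approximation (names are my own):

    def vertex_cover_2approx(edges):
        # edges: iterable of (u, v); returns a cover at most twice the optimum size.
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                # (u, v) joins the maximal matching; take both of its endpoints.
                cover.add(u)
                cover.add(v)
        return cover

    print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))  # {1, 2, 3, 4}
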
- 2-approximation TSP on complete graph with triangle inequality: use MST;
- 1.5-approximation TSP on complete graph with triangle inequality:
- MST of G → T;
- Vertices of odd degree in T → S;
- Min-cost perfect matching on S → M;
- Return an Euler tour of T ∪ M, shortcutting repeated vertices.
- There is no polynomial α-approximation algorithm for general TSP unless P = NP: if such an algorithm existed, we could use it to solve Hamiltonian Cycle in polynomial time.
- ln n-approximation set cover: greedy; each step covers at least a 1/k fraction of the still-uncovered elements, so k · ln n steps cover everything (k is the size of the optimal cover).
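A minimal Python sketch of this greedy rule (my own code; assumes the sets do cover the universe):

    def greedy_set_cover(universe, sets):
        # universe: set of elements; sets: list of sets. Returns indices of the chosen sets.
        uncovered, chosen = set(universe), []
        while uncovered:
            # Pick the set covering the most still-uncovered elements.
            best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
            chosen.append(best)
            uncovered -= sets[best]
        return chosen

    print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))  # [0, 3]
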
- 2-approximation load balancing: greedy; assign the next job to the machine with the least current load.
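A minimal Python sketch of this greedy rule, using a heap to track machine loads (names and the job list are my own):

    import heapq

    def greedy_load_balance(jobs, m):
        # Assign each job (given by its processing time) to the least-loaded of m machines.
        loads = [(0, i) for i in range(m)]          # (load, machine) min-heap
        heapq.heapify(loads)
        assignment = [[] for _ in range(m)]
        for job in jobs:
            load, i = heapq.heappop(loads)          # machine with the least load
            assignment[i].append(job)
            heapq.heappush(loads, (load + job, i))
        makespan = max(load for load, _ in loads)
        return assignment, makespan

    print(greedy_load_balance([2, 3, 4, 6, 2, 2], 3))  # ([[2, 6], [3, 2], [4, 2]], 8)
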
Lecture 12
- Classification of randomized algorithms:
- Las Vegas algorithms: always return the correct answer, but may run longer than you expect.
- Example: quick sort.
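A minimal Python sketch of the Las Vegas behaviour with randomized quicksort: the output is always correctly sorted; only the running time depends on the random pivot choices.

    import random

    def quicksort(a):
        # Always returns a sorted copy; expected running time O(n log n).
        if len(a) <= 1:
            return list(a)
        pivot = random.choice(a)                    # the only randomness
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
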
- Monte Carlo algorithms: may fail or return an incorrect answer, but the running time is bounded and does not depend on the random choices.
- Example: random global min-cut.
- Randomized data structures:
- Skip list: O(log n) search time;
- Treap: a binary search tree with the heap ordering property on random priorities (rotations maintain the heap ordering property without changing the BST ordering property).
- Complexity: O(log n) expected search time & O(log n) expected insert time.