Wednesday, October 28, 2009

Lecture 15 notes

Some notes about today's lecture:
  1. I mentioned that there are integrality results for some natural LPs for MST, branchings, shortest paths, and matchings. Basically, this means that one can write LPs for these problems such that the basic solutions (i.e., the vertices of the resulting polytope) are all 0-1 vectors; one standard example is sketched below. It was all a bit too fast towards the end, and so we will try to cover integrality for LPs in another lecture later.
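
    For instance (a standard example, not one we went through in class): for perfect matchings in a bipartite graph, the LP with one variable x_e per edge,

        minimize    sum_e c_e x_e
        subject to  sum of x_e over edges e incident to v  = 1   for every vertex v,
                    x_e >= 0,

    has only 0-1 vertices (this is essentially the Birkhoff-von Neumann theorem). Bipartiteness matters: on a triangle, setting x_e = 1/2 on all three edges gives a fractional vertex of the same polytope.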

  2. For the branchings LP to make sense for negative edge weights, we should throw in a constraint that the total number of arcs chosen is at most n-1 (or that at most one arc should be chosen from the out-arcs of each non-root vertex). Otherwise negative edge weights might cause the LP to choose too many arcs, or even to choose arcs to an extent greater than 1. A sketch of the resulting LP is below.
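
    Concretely, here is one way to write such an LP (a sketch: arcs are oriented so that each non-root vertex picks an out-arc heading toward the root r, matching the out-arc convention above):

        minimize    sum_e c_e x_e
        subject to  sum of x_e over arcs e leaving S   >= 1   for every nonempty vertex set S with r not in S,
                    sum of x_e over out-arcs e of v    <= 1   for every vertex v != r,
                    x_e >= 0.

    The second family of constraints is the fix: without it, any negative-cost arc would be chosen to an unboundedly large extent.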

  3. Srivatsan wondered about the fact that the LP we wrote for branchings has exponentially many constraints: what's up with that? Two answers to this.

    One: we can use the LP purely for analytical purposes. If we use Edmonds' cycle-shrinking algorithm from Lecture 5, we end up exhibiting an integer primal solution of some value, and a feasible dual solution of the same value. And since weak duality says every feasible dual value is a lower bound on every feasible primal value, two feasible solutions of equal value must both be optimal!

    Second: wait until next lecture, where we will show how to solve such exponentially sized LPs under certain conditions. For now, just observe that if you are given a (fractional) solution x that does not satisfy all the constraints, you can find some violated constraint in polynomial time. (If there is a set S such that strictly less than 1 unit of x_e crosses it, then there is a min-cut separating some vertex from the root of value strictly less than 1, and min-cuts can be computed in polynomial time.) Note that if x is not in the branching polytope, this efficient procedure gives us a hyperplane that separates x from the polytope, and hence is called a "separation oracle" for this polytope; a sketch follows.
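
    Here is a minimal sketch of that oracle in Python, using networkx for the min-cut computations. The representation is illustrative (not from lecture): x is a dict mapping arcs (u, v) to their fractional values, arcs point toward the root as in note 2, and eps is a tolerance for floating-point noise.

        import networkx as nx

        def find_violated_cut(vertices, x, root, eps=1e-9):
            """Return a vertex set S not containing root whose out-arcs carry < 1 unit of x, or None."""
            G = nx.DiGraph()
            G.add_nodes_from(vertices)
            for (u, v), val in x.items():
                G.add_edge(u, v, capacity=val)
            for v in vertices:
                if v == root:
                    continue
                # The min v-root cut equals the max v-root flow; if it is below 1,
                # the source side of the cut is a violated set S.
                cut_value, (v_side, root_side) = nx.minimum_cut(G, v, root)
                if cut_value < 1 - eps:
                    return v_side
            return None

    Each test is one max-flow computation, so the oracle runs n-1 max-flows in the worst case.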

  4. Finally, I mentioned that Edmonds' result shows that every vertex of the polytope defined by the branching LP was integral. This is a slightly subtle point, so I'd like to elaborate.

    Edmonds shows that given edge-costs, the branching found by his algorithm wrt these edge costs is also an optimal LP solution for those edge-costs (since he gives a branching that is feasible for the primal, and a matching set of weights that are feasible for the dual).

    Now suppose the branching polytope has a fractional vertex (i.e., an extreme point z of this polytope such that not every coordinate of z is 0 or 1). One can show (using convexity, etc.) that there exists some cost vector c for which this extreme point z is the unique optimal solution to the branching LP; roughly, take c to be the normal of a hyperplane that supports the polytope only at z. But that would contradict the fact that for those edge costs, Edmonds' algorithm would find an integral optimal solution.

Next Monday, we'll give a quick overview of simplex, and then some details about the ellipsoid algorithm.

Notes on Lecture 14

A proof of the inequality we used (courtesy Danny):

To prove: Log[1 + b] >= 2b/(2 + b) for b >= 0.
Note that the two sides are equal when b = 0.
Take the derivative of Log[1 + b] - 2b/(2 + b):
you get b^2/((1 + b)(2 + b)^2), which is non-negative
for b >= 0. QED.
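
Spelled out, in case the derivative looks mysterious:

    d/db [ Log[1 + b] - 2b/(2 + b) ]
      = 1/(1 + b) - 4/(2 + b)^2
      = ((2 + b)^2 - 4(1 + b)) / ((1 + b)(2 + b)^2)
      = b^2 / ((1 + b)(2 + b)^2).

So the difference is 0 at b = 0 and non-decreasing for b >= 0, which gives the inequality.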

Monday, October 26, 2009

Wednesday's Lecture

For Wednesday's lecture (it will be on basics of LPs and duality), have a quick look over the proof of optimality of branchings we did way back in Lecture 5. We'll then give some algorithms to solve LPs in the next couple of lectures.

Saturday, October 24, 2009

Monday's Lecture

Sorry to change things around, but I'll cover some probabilistic inequalities (a.k.a. large-deviation bounds, concentration bounds, Chernoff-type bounds, whatever you might want to call them) in Monday's lecture. Wednesday we'll finally get started on LPs...

Wednesday, October 21, 2009

Clarification of Today's Lecture

I've posted the complete and correct argument that I should have used in today's lecture. It's here. I suggest you read it right now while the lecture is still fresh in your mind.

Monday, October 19, 2009

Notes on Matchings

I've posted a whole bunch of notes on matchings on the course webpage.

Also, if you're interested in blocking flows and would like some more details (or if you just got lost in the discussion today (sorry!)), have a look at Chapter 18 of Dexter Kozen's book (Google book version here). It describes Dinic's algorithm that uses blocking flows, and also gives a faster O(n^3) time version. Using the dynamic trees that Danny will talk about on Wednesday, one can greatly speed up blocking-flow computations, as well as implementations of push-relabel.
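
If a concrete reference implementation helps, here is a minimal sketch of the basic blocking-flow (Dinic) algorithm in Python. It is the plain O(V^2 E) version, not the O(n^3) variant or the dynamic-tree speedups; the names and graph representation are just illustrative.

    from collections import deque

    def dinic_max_flow(n, arcs, s, t):
        """Vertices are 0..n-1; arcs is a list of (u, v, capacity) triples."""
        # Adjacency lists; each entry is [head, residual capacity, index of reverse arc].
        graph = [[] for _ in range(n)]
        for u, v, cap in arcs:
            graph[u].append([v, cap, len(graph[v])])
            graph[v].append([u, 0, len(graph[u]) - 1])  # residual (reverse) arc

        def bfs_levels():
            # Level graph: BFS distances from s in the residual graph.
            level = [-1] * n
            level[s] = 0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v, cap, _ in graph[u]:
                    if cap > 0 and level[v] == -1:
                        level[v] = level[u] + 1
                        queue.append(v)
            return level if level[t] != -1 else None

        def push(u, limit, level, it):
            # Send up to `limit` units from u to t along level-increasing arcs.
            if u == t:
                return limit
            while it[u] < len(graph[u]):
                v, cap, rev = graph[u][it[u]]
                if cap > 0 and level[v] == level[u] + 1:
                    sent = push(v, min(limit, cap), level, it)
                    if sent > 0:
                        graph[u][it[u]][1] -= sent
                        graph[v][rev][1] += sent
                        return sent
                it[u] += 1  # current-arc heuristic: skip arcs that can no longer help
            return 0

        flow = 0
        while True:
            level = bfs_levels()
            if level is None:
                return flow  # no augmenting path remains, so the flow is maximum
            it = [0] * n
            sent = push(s, float('inf'), level, it)
            while sent > 0:  # saturate the level graph: this computes a blocking flow
                flow += sent
                sent = push(s, float('inf'), level, it)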

Saturday, October 17, 2009

Monday's lecture

Since Danny has to miss Monday's lecture, we'll cover some stuff about matchings in graphs instead, and come back to dynamic trees and speeding up maxflows on Wednesday.

Tuesday, October 13, 2009

Lecture 10 Notes

I've added some notes on push-relabel on the webpage. Also, a slight change in schedule: Danny will be lecturing tomorrow on dynamic (and splay) trees and faster implementations of push-relabel.

Friday, October 9, 2009

Max-flow notes

I assume that most of you are familiar with the notation used for flows on graphs, as well as some of the basic results. However, in case you'd like to brush up on some details, we've just posted some notes off the webpage. (CMU/Pitt only, though parts of the book are available through Google books.)

In particular, please have a quick read over the first two chapters for Monday.
  • Lecture 16: gives the notation (flows, residual graphs, augmenting paths), as well as Ford and Fulkerson's proof that max-flow = min-cut using augmenting paths.

  • Lecture 17: gives examples where augmenting paths might take a long time (or converge to a non-optimal flow in graphs with non-integer capacities). It then gives two heuristics of Edmonds and Karp to ensure fast convergence; a sketch of the first appears after this list.
If you feel like reading the third chapter too (which gives a max-flow algorithm using blocking flows), go ahead --- but I won't rely on that for Monday.
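
In that spirit, here is a sketch of the first of those heuristics (always augment along a shortest augmenting path, found by BFS). The representation and names are illustrative, not from the notes: capacity is a dict of dicts giving residual capacities, mutated in place.

    from collections import deque

    def edmonds_karp(capacity, s, t):
        """capacity[u][v] is the residual capacity of arc (u, v)."""
        flow = 0
        while True:
            # BFS from s; the first time t is reached, the path found is shortest.
            parent = {s: None}
            queue = deque([s])
            while queue and t not in parent:
                u = queue.popleft()
                for v, cap in capacity.get(u, {}).items():
                    if cap > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if t not in parent:
                return flow  # no augmenting path left: the flow is maximum
            # Recover the path and its bottleneck capacity.
            path = []
            v = t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(capacity[u][v] for u, v in path)
            # Augment: decrease forward residual capacities, increase reverse ones.
            for u, v in path:
                capacity[u][v] -= bottleneck
                back = capacity.setdefault(v, {})
                back[u] = back.get(u, 0) + bottleneck
            flow += bottleneck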

Monday, October 5, 2009

Lecture 8 notes

T(n) = sqrt(n) T(sqrt(n)) + sqrt(n) and T(4) = 1 does solve to T(n) = O(n).
Just guess T(n) = n - sqrt(n).

Note that I chose the base case to make life easy for myself; choosing the more natural base case T(1) = 1 will not really change anything but the constants. OTOH, if you chose T(n) = sqrt(n) T(sqrt(n)) + n, it would give you T(n) = O(n log log n). But that's a different story; it's not what we needed today.
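
To check the guess, here is the induction step (ignoring the rounding of sqrt(n) to keep things clean):

    T(n) = sqrt(n) T(sqrt(n)) + sqrt(n)
        <= sqrt(n) (sqrt(n) - n^(1/4)) + sqrt(n)
         = n - n^(3/4) + n^(1/2)
        <= n - n^(1/2),

where the last step holds whenever 2 n^(1/2) <= n^(3/4), i.e., n >= 16; the base case T(4) = 1 <= 4 - sqrt(4) covers the bottom of the recursion.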

Sunday, October 4, 2009

HW2 clarifications/corrections.

For problem 1(c), we'd be happy with an implicit representation of the solution. (Thanks, Yuan).

For problem 2, we're looking for a di-diameter of O(alpha(n)), not just alpha(n). (Thanks, Jeremiah.)