
Series: ACO Seminar

A subset of $\mathbb{F}_3^n$ is called a \emph{cap set} if it does not contain three distinct vectors that sum to zero. In dimension four, this relates to the famous card game SET: a cap set corresponds to a collection of cards without a SET. The cap set problem is concerned with upper bounds on the size of cap sets. The central question raised by Frankl, Graham and R\"odl is: do cap sets have exponentially small density? In May last year, this question was (unexpectedly) resolved in a pair of papers by Croot, Lev, and Pach and by Ellenberg and myself in a new application of the polynomial method. The proof is surprisingly short and simple.
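As a toy illustration of the definition (not part of the talk), the cap set property can be checked by brute force in very small dimensions; the helper names `is_cap` and `max_cap_size` are mine:

```python
from itertools import combinations, product

def is_cap(subset, n):
    """True if no three distinct vectors in subset sum to zero mod 3."""
    for a, b, c in combinations(subset, 3):
        if all((a[i] + b[i] + c[i]) % 3 == 0 for i in range(n)):
            return False
    return True

def max_cap_size(n):
    """Largest cap set in F_3^n by exhaustive search (tiny n only)."""
    points = list(product(range(3), repeat=n))
    best = 0
    # enumerate all subsets via bitmasks -- exponential, illustration only
    for mask in range(1 << len(points)):
        subset = [p for i, p in enumerate(points) if mask >> i & 1]
        if len(subset) > best and is_cap(subset, n):
            best = len(subset)
    return best

assert max_cap_size(1) == 2
assert max_cap_size(2) == 4   # the known maximum in dimension 2
```

The question of the talk is how `max_cap_size(n)` compares to $3^n$ as $n$ grows; the resolved conjecture says the ratio decays exponentially.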

Series: ACO Seminar

Concentration inequalities are fundamental tools in probabilistic
combinatorics and theoretical computer science for proving that
functions of random variables are typically near their means. Of
particular importance is the case where f(X) is a function of
independent random variables X=(X_1,...,X_n). Here the well-known
bounded differences inequality (also called McDiarmid's or
Hoeffding--Azuma inequality) establishes sharp concentration if the
function f does not depend too much on any of the variables.
One attractive feature is that it relies on a very simple Lipschitz
condition (L): it suffices to show that |f(X)-f(X')| \leq c_k
whenever X,X' differ only in X_k. While this is easy to check,
the main disadvantage is that it considers worst-case changes
c_k, which often makes the resulting bounds too weak to be useful.
In this talk we discuss a variant of the bounded differences inequality
which can be used to establish concentration of functions f(X) where
(i) the typical changes are small although (ii) the worst case
changes might be very large.
One key aspect of this inequality is that it relies on a simple
condition that (a) is easy to check and (b) coincides with heuristic
considerations as to why concentration should hold. Indeed, given a
`good' event G that holds with very high probability, we essentially
relax the Lipschitz condition (L) to situations where G occurs. The
point is that the resulting typical changes c_k are often
much smaller than the worst case ones.
If time permits, we shall illustrate its application by considering the
reverse H-free process, where H is 2-balanced. We prove that the
final number of edges in this process is concentrated, and also determine
its likely value up to constant factors.
This answers a question of Bollobás and Erdős.

Series: ACO Seminar

Recent work of the speaker with Dan Spielman and Nikhil
Srivastava introduced the ``method of interlacing polynomials''
(MOIP) for solving problems in combinatorial linear algebra.
The goal of this talk is to provide insight into the inner workings of
the MOIP by introducing a new theory that reveals an intimate
connection between the use of polynomials in the manner of the
MOIP and free probability, a theory developed by Dan Voiculescu
as a tool in the study of von Neumann algebras.
I will start with a brief introduction to free probability (for those, like
me, who are not operator theorists).
In particular, I will discuss the two basic operations in free
probability theory (the free additive and free multiplicative
convolutions), and how they relate to the asymptotic eigenvalue
distributions of random matrices.
I will then show how certain binary operations on polynomials act
as finite analogues of the free convolutions and how the MOIP is
effectively transferring the asymptotic bounds obtained in free
probability to bounds in the new theory (which can then be applied
to finite scenarios).
If time permits, I will show how such a theory gives far better
intuition as to how one might apply the MOIP in the future, using
recent results on restricted invertibility and the existence of
Ramanujan graphs as examples.
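As a small editor-added sketch of the "finite analogue" operation mentioned above: the finite free additive convolution of two monic degree-n polynomials has an explicit coefficient formula (in the Marcus–Spielman–Srivastava sense), and for n = 2 it already shows the free-probability-style spreading of roots. The helper name is mine.

```python
from math import factorial

def finite_free_additive(p, q):
    """Finite free additive convolution of two monic polynomials,
    given as coefficient lists [a_0=1, a_1, ..., a_n] with
    p(x) = sum_i a_i x^(n-i)."""
    n = len(p) - 1
    assert len(q) - 1 == n, "polynomials must have the same degree"
    f = factorial
    out = []
    for k in range(n + 1):
        c = 0.0
        for i in range(k + 1):
            j = k - i
            c += f(n - i) * f(n - j) / (f(n) * f(n - k)) * p[i] * q[j]
        out.append(c)
    return out

# degree 1: roots simply add, as expected of a free additive convolution
assert finite_free_additive([1, 2], [1, 3]) == [1.0, 5.0]
# degree 2: x^2 - 1 convolved with itself gives x^2 - 2, roots +/- sqrt(2)
assert finite_free_additive([1, 0, -1], [1, 0, -1]) == [1.0, 0.0, -2.0]
```

The second example is the finite-n shadow of convolving two symmetric Bernoulli laws in free probability, whose limit is supported on [-2, 2].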

Series: ACO Seminar

The Max-Cut problem seeks to determine the maximal cut size in a given graph. With no polynomial-time approximation scheme for Max-Cut (unless P=NP), its asymptotic behavior for a typical large sparse graph is of considerable interest. We prove that for a uniformly random d-regular graph on N vertices, and for a uniformly chosen Erdős–Rényi graph with M = Nd/2 edges, the leading correction to M/2 (the typical cut size) is P_* sqrt(N M/2). Here P_* is the ground state energy of the Sherrington-Kirkpatrick model, expressed analytically via Parisi's formula.
This talk is based on a joint work with Subhabrata Sen and Andrea Montanari.
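To fix ideas (an editor's illustration, not part of the abstract), the quantity in question is computable by brute force only for tiny graphs, which is one reason the asymptotics for random graphs are valuable:

```python
from itertools import product

def max_cut(n, edges):
    """Brute-force maximum cut of a graph on vertices 0..n-1.
    Exponential in n -- viable only for tiny instances."""
    best = 0
    for assign in product((0, 1), repeat=n):
        cut = sum(1 for u, v in edges if assign[u] != assign[v])
        best = max(best, cut)
    return best

# 4-cycle: bipartite, so every edge can be cut
assert max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 4
# K4: best balanced split cuts 4 of the 6 edges, exceeding M/2 = 3
assert max_cut(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]) == 4
```

The theorem above quantifies exactly how far the maximum cut of a sparse random graph typically exceeds the trivial M/2 baseline.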

Series: ACO Seminar

We are generally interested in the following ill-defined problem: what is
a conceptually simple algorithm, and what are the power and limitations
of such algorithms?
In particular, what is a greedy algorithm, or more generally a myopic
algorithm, for a combinatorial optimization problem? And to be even more
specific, motivated by the Buchbinder et al
``online double sided greedy algorithm'' for the unconstrained non-monotone
submodular maximization problem, what are (if any) the limitations of
algorithms "of this genre" for the general unconstrained problem and for
specific instances of the problem, such as Max-Di-Cut?
Joint work with Norman Huang.
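For readers unfamiliar with the algorithm motivating the talk, here is an editor's sketch of the deterministic double-sided greedy of Buchbinder et al. (the 1/3-approximation variant for unconstrained non-monotone submodular maximization), run on a small directed-cut objective; all names are mine:

```python
def double_greedy(ground, f):
    """Deterministic double-sided greedy: scan the elements once, growing a
    set X from the empty set and shrinking a set Y from the ground set."""
    X, Y = set(), set(ground)
    for i in ground:
        a = f(X | {i}) - f(X)    # marginal gain of adding i to X
        b = f(Y - {i}) - f(Y)    # marginal gain of removing i from Y
        if a >= b:
            X.add(i)
        else:
            Y.discard(i)
    return X                      # X == Y at termination

# directed cut function f(S) = #edges leaving S: submodular, non-monotone
edges = [(0, 1), (0, 2), (3, 1), (3, 2)]
dicut = lambda S: sum(1 for u, v in edges if u in S and v not in S)

assert double_greedy(range(4), dicut) == {0, 3}   # cuts all 4 edges here
```

On this instance the single myopic pass finds the optimum; the talk asks what algorithms "of this genre" can and cannot achieve in general.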

Series: ACO Seminar

The closest vector problem (CVP) on lattices (i.e. given an n-dimensional
lattice L with basis B and a target point t, find a closest
lattice point in L to t) is a fundamental NP-hard problem with applications
in coding, cryptography & optimization. In this talk, we will consider the
preprocessing version of CVP (CVPP), an important variant of CVP, where we
allow arbitrary time to preprocess the lattice before answering CVP queries.
In breakthrough work, Micciancio & Voulgaris (STOC 2010) gave the first
single exponential time algorithm for CVP under the l_2 norm based on
Voronoi cell computations. More precisely, after a preprocessing step on L
requiring tilde{O}(2^{2n}) time, during which they compute the Voronoi cell
of L (the set of points closer to the origin than to any other point in L),
they show that additional CVP queries on L (i.e. CVPP) can be solved in
tilde{O}(2^{2n}) time.
For our main result, we show that given the Voronoi cell V of L as
preprocessing, CVP on any target t can be solved in expected tilde{O}(2^n)
time. As our main technical contribution, we give a new randomized
procedure that, starting from any close enough lattice point to the target
t, follows a path of expected polynomial length in the Voronoi graph of L
(i.e. x,y in L are adjacent if x+V and y+V share a facet) to the closest
lattice vector to t. In contrast, the path used by the MV algorithm is only
known to have length bounded by tilde{O}(2^n). Furthermore, for points x,y in L,
we show that the distance between x and y in the Voronoi graph is within a
factor n of ||x-y||_V (norm induced by the Voronoi cell), which is best
possible. For our analysis, we rely on tools from high dimensional convex
geometry. No background in convex geometry or lattices will be assumed.
Time permitting, I will describe related results & open questions about
paths on more general lattice Cayley graphs.
Joint work with Nicolas Bonifas (Ecole Polytechnique).
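For orientation (an editor's illustration, unrelated to the Voronoi-graph techniques above), CVP itself is easy to state and brute-force in dimension two; the function name is mine:

```python
from itertools import product

def cvp_bruteforce(B, t, R=5):
    """Closest vector to t in the lattice with basis matrix B (columns are
    basis vectors), searching integer coefficients in [-R, R]^n.
    Exponential in n -- real CVP algorithms avoid this naive search."""
    n = len(t)
    best, best_d = None, float("inf")
    for coeffs in product(range(-R, R + 1), repeat=n):
        v = [sum(B[i][j] * coeffs[j] for j in range(n)) for i in range(n)]
        d = sum((v[i] - t[i]) ** 2 for i in range(n))
        if d < best_d:
            best, best_d = v, d
    return best

# in Z^2, the closest lattice point to (2.3, -0.6) is (2, -1)
assert cvp_bruteforce([[1, 0], [0, 1]], [2.3, -0.6]) == [2, -1]
```

The algorithms in the talk replace this exponential sweep with structured use of the Voronoi cell computed in preprocessing.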

Series: ACO Seminar

Joint DOS-ACO Seminar. Food and refreshments will be provided.

Sparse cutting-planes are often the ones used in mixed-integer programming
(MIP) solvers, since they help in solving the linear programs encountered
during branch-&-bound more efficiently. However, how well can we
approximate the integer hull by just using sparse cutting-planes? In order
to understand this question better, given a polytope P (e.g. the integer
hull of a MIP), let P^k be its best approximation using cuts with at most k
non-zero coefficients. We consider d(P, P^k) = max_{x in P^k} (min_{y in
P} |x - y|) as a measure of the quality of sparse cuts.
In our first result, we present general upper bounds on d(P, P^k) which
depend on the number of vertices in the polytope and exhibit three phases
as k increases. Our bounds imply that if P has polynomially many vertices,
using half sparsity already approximates it very well. Second, we present a
lower bound on d(P, P^k) for random polytopes that shows that the upper
bounds are quite tight. Third, we show that for a class of hard packing
IPs, sparse cutting-planes do not approximate the integer hull well.
Finally, we show that using sparse cutting-planes in extended formulations
is at least as good as using them in the original polyhedron, and give an
example where the former is actually much better.
Joint work with Santanu Dey and Qianyi Wang.
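A minimal worked instance of the distance measure (an editor's example, not from the paper): take P to be the 2-dimensional simplex {x >= 0, x1 + x2 <= 1}. With only 1-sparse cuts, the best outer approximation is the unit square, and d(P, P^1) is attained at its corner (1, 1):

```python
from math import sqrt

def dist_point_to_simplex(x):
    """Euclidean distance from a point in the positive quadrant to the
    simplex {x >= 0, x1 + x2 <= 1}; for points like (1, 1) the nearest
    simplex point lies on the facet x1 + x2 = 1."""
    s = x[0] + x[1]
    if s <= 1:
        return 0.0
    return (s - 1) / sqrt(2)   # distance to the hyperplane x1 + x2 = 1

# P^1 = [0,1]^2 (cuts x1 <= 1, x2 <= 1, x >= 0); worst point is (1, 1)
d = dist_point_to_simplex((1.0, 1.0))
assert abs(d - 1 / sqrt(2)) < 1e-12
```

So even in two dimensions, restricting to 1-sparse cuts leaves a gap of 1/sqrt(2); the results above quantify how such gaps scale with dimension and sparsity.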

Series: ACO Seminar

Many real-world graphs are large and growing. This naturally raises the question of a suitable concept of graph convergence. For graphs with average degree proportional to the number of vertices (dense graphs), this question is by now quite well-studied. But real world graphs tend to be sparse, in the sense that the average or even the maximal degree is bounded by some reasonably small constant. In this talk, I study several notions of convergence for graphs of bounded degree and show that, in contrast to dense graphs, where various a priori different notions of convergence are equivalent, the corresponding notions are not equivalent for sparse graphs. I then describe a notion of convergence formulated in terms of a large deviation principle which implies all previous notions of convergence.

Series: ACO Seminar

For all practical purposes, the Micali-Vazirani algorithm, discovered in
1980, is still the most efficient known maximum matching algorithm (for
very dense graphs, a slight asymptotic improvement can be obtained using
fast matrix multiplication). However, this has remained a "black box"
result for the last 32 years. We hope to change this with the help of a
recent paper giving a simpler proof and exposition of the algorithm:
http://arxiv.org/abs/1210.4594
In the interest of covering all the ideas, we will assume that the
audience is familiar with basic notions such as augmenting paths and
the bipartite matching algorithm.
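For reference, the assumed prerequisite (the textbook augmenting-path algorithm for bipartite matching, not Micali-Vazirani itself) can be sketched as follows; this is an editor's addition:

```python
def bipartite_matching(adj, n_right):
    """Maximum bipartite matching via repeated augmenting-path search.
    adj[u] lists the right-vertices adjacent to left-vertex u."""
    match = [-1] * n_right            # match[v] = left partner of v, or -1

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # use v if it is free, or if its partner can be re-matched
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(len(adj)))

# a perfect matching exists and is found after one augmentation
assert bipartite_matching([[0, 1], [0], [1, 2]], 3) == 3
```

The difficulty the Micali-Vazirani algorithm addresses is doing this efficiently in general (non-bipartite) graphs, where odd cycles ("blossoms") complicate the augmenting-path search.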

Series: ACO Seminar

We derive optimal strategies for a pursuit-and-evasion game and show that when pitted against each other, the two strategies construct a small set containing unit-length line segments at all angles. Joint work with Y. Babichenko, Y. Peres, R. Peretz, and P. Sousi.