Seminars and Colloquia by Series

Wednesday, January 23, 2013 - 12:00 , Location: ISyE Executive classroom , Gustavo Angulo , Georgia Tech ISyE , Organizer:
In this talk we consider the problem of finding basic solutions to linear programs where some vertices are excluded. We study the complexity of this and related problems, most of which turn out to be hard. On the other hand, we show that forbidding vertices from 0-1 polytopes can be carried out with a compact extended formulation. A similar result holds for integer programs having a box-integrality property. We discuss some applications of our results.
Friday, December 7, 2012 - 13:10 , Location: Skiles 005 , Dana Randall , College of Computing, Georgia Tech , Organizer:
The hard-core model has attracted much attention across several disciplines, representing lattice gases in statistical physics and independent sets in discrete mathematics and computer science. On finite graphs, we are given a parameter \lambda, and an independent set I arises with probability proportional to \lambda^{|I|}. We are interested in determining the mixing time of local Markov chains that add or remove a small number of vertices in each step. On finite regions of Z^2 it is conjectured that there is a phase transition at some critical point \lambda_c that is approximately 3.79. It is known that local chains are rapidly mixing when \lambda < 2.3882. We give complementary results showing that local chains will mix slowly when \lambda > 5.3646 on regions with periodic (toroidal) boundary conditions and when \lambda > 7.1031 with non-periodic (free) boundary conditions. The proofs use a combinatorial characterization of configurations based on the presence or absence of fault lines and an enumeration of a new class of self-avoiding walks called taxi walks. (Joint work with Antonio Blanca, David Galvin and Prasad Tetali)
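The local dynamics described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact chain: it runs heat-bath (Glauber) updates for the hard-core model on an n x n grid region of Z^2 with free boundary, proposing to occupy a uniformly random site with probability \lambda/(1+\lambda) and to vacate it otherwise.

```python
import random

def hardcore_glauber(n, lam, steps, rng=random.Random(0)):
    """Local Markov chain for the hard-core model on an n x n grid.

    State: a set of occupied sites forming an independent set.  Each step
    picks a uniform site v; with probability lam/(1+lam) it tries to occupy
    v (accepted only if no neighbor is occupied), else it vacates v.
    Stationary distribution: pi(I) proportional to lam^|I|.
    """
    occupied = set()
    sites = [(i, j) for i in range(n) for j in range(n)]

    def neighbors(v):
        i, j = v
        return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]

    for _ in range(steps):
        v = rng.choice(sites)
        if rng.random() < lam / (1 + lam):
            if all(u not in occupied for u in neighbors(v)):
                occupied.add(v)      # occupy v only if the move keeps I independent
        else:
            occupied.discard(v)      # vacate v
    return occupied

I = hardcore_glauber(10, 1.0, 20000)
# I is an independent set of the 10 x 10 grid
```

The conjectured phase transition shows up in such simulations as a shift from disordered configurations at small \lambda to configurations dominated by one sublattice at large \lambda.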
Friday, November 30, 2012 - 13:00 , Location: Skiles 005 , Sara Krehbiel , College of Computing, Georgia Tech , Organizer:
Mechanism design for distributed systems is fundamentally concerned with aligning individual incentives with social welfare to avoid the socially inefficient outcomes that can arise from agents acting autonomously. One simple and natural approach is to centrally broadcast non-binding advice intended to guide the system to a socially near-optimal state while still harnessing the incentives of individual agents. The analytical challenge is proving fast convergence to near-optimal states, and we present the first results showing that carefully constructed advice vectors yield stronger guarantees.

We apply this approach to a broad family of potential games modeling vertex cover and set cover optimization problems in a distributed setting. This class of problems is interesting because finding exact solutions to the underlying optimization problems is NP-hard, yet highly inefficient equilibria exist, so a solution in which agents simply optimize locally is not satisfactory. We show that with an arbitrary advice vector, a set cover game quickly converges to an equilibrium with cost of the same order as the square of the social cost of the advice vector. More interestingly, we show how to efficiently construct an advice vector with a particular structure whose cost is $O(\log n)$ times the optimal social cost, and we prove that the system quickly converges to an equilibrium with social cost of this same order.
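As a concrete (and hypothetical) instance of the kind of dynamics analyzed above, the sketch below runs best-response dynamics in a fair cost-sharing set cover game, a standard potential game in which each element pays an equal share of the cost of the set covering it. The game, instance, and parameters are illustrative assumptions, not the talk's exact model, and no advice vector is used.

```python
def best_response_dynamics(sets, costs, n_elements, max_rounds=100):
    """Fair cost-sharing set cover game: each element e picks one set that
    contains it and pays costs[i] / (number of elements using set i).
    Elements repeatedly switch to whichever set offers the cheapest share,
    stopping at a (pure Nash) equilibrium."""
    # choice[e] = index of the set currently covering element e (initially
    # the first set containing e)
    choice = [next(i for i, S in enumerate(sets) if e in S)
              for e in range(n_elements)]
    for _ in range(max_rounds):
        moved = False
        users = [sum(1 for c in choice if c == i) for i in range(len(sets))]
        for e in range(n_elements):
            cur = choice[e]
            best, best_share = cur, costs[cur] / users[cur]
            for i, S in enumerate(sets):
                if e in S and i != cur:
                    share = costs[i] / (users[i] + 1)  # share if e joins set i
                    if share < best_share:
                        best, best_share = i, share
            if best != cur:
                users[cur] -= 1
                users[best] += 1
                choice[e] = best
                moved = True
        if not moved:
            break  # no element can improve: equilibrium reached
    return choice

# Toy instance: three elements, a cheap big set and an expensive small one.
sets = [frozenset({0, 1, 2}), frozenset({2})]
costs = [3.0, 2.0]
choice = best_response_dynamics(sets, costs, 3)
# all three elements share the cheap set, each paying 1.0
```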
Wednesday, November 21, 2012 - 12:00 , Location: ISyE Executive classroom , Cristóbal Guzmán , ISyE, Georgia Tech , Organizer:
Inpainting, deblurring and denoising images are common tasks required for a number of applications in science and engineering. Since the seminal work of Rudin, Osher and Fatemi, image regularization by total variation (TV) has become a standard heuristic for these tasks.

In this talk, I will introduce the TV regularization model and some connections with sparse optimization and compressed sensing. I will then summarize some of the fastest existing methods for solving TV regularization.

Motivated by improving the super-linear (in the dimension) running time of these algorithms, we propose two heuristics for image regularization models: the first is to replace the TV by the \ell^1 norm of the Laplacian, and the second is a new (to the best of our knowledge) approximation of the TV seminorm, based on a redundant parameterization of the gradient field.

We prove that the latter regularizer is an O(log n) approximation of the TV seminorm. The proof is based on basic techniques from discrete Fourier analysis and an estimate, due to Mangad, of the fundamental solutions of the Laplace equation on a grid.

Finally, we present preliminary computational results for the three models on mid-scale images.

This talk will be self-contained. Joint work with Arkadi Nemirovski.
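For concreteness, the two quantities compared by the first heuristic can be computed directly. The sketch below evaluates the anisotropic discrete TV seminorm and the \ell^1 norm of the 5-point discrete Laplacian on a small grayscale image given as a 2-D list; this discretization is an illustrative assumption, not necessarily the exact one used by the authors.

```python
def tv_seminorm(u):
    """Anisotropic discrete total variation of a 2-D array u:
    the sum of absolute horizontal and vertical differences."""
    m, n = len(u), len(u[0])
    tv = 0.0
    for i in range(m):
        for j in range(n):
            if i + 1 < m:
                tv += abs(u[i + 1][j] - u[i][j])  # vertical difference
            if j + 1 < n:
                tv += abs(u[i][j + 1] - u[i][j])  # horizontal difference
    return tv

def l1_laplacian(u):
    """l^1 norm of the 5-point discrete Laplacian over interior pixels,
    the substitute regularizer proposed by the first heuristic."""
    m, n = len(u), len(u[0])
    total = 0.0
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            total += abs(u[i + 1][j] + u[i - 1][j]
                         + u[i][j + 1] + u[i][j - 1] - 4 * u[i][j])
    return total

step = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]  # a vertical edge
# tv_seminorm(step) = 3.0, l1_laplacian(step) = 1.0
```

Both functionals vanish on constant images and penalize oscillation, but they weight edges differently, which is exactly where the O(log n) approximation question becomes interesting.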
Friday, November 16, 2012 - 13:00 , Location: Skiles 005 , Sebastian Pokutta , Georgia Tech, ISyE , Organizer:
We solve a 20-year old problem posed by M. Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the maximum cut problem and the stable set problem. These results follow from a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs. (joint work with Samuel Fiorini, Serge Massar, Hans Raj Tiwary, and Ronald de Wolf)
Friday, November 9, 2012 - 13:00 , Location: Skiles 005 , Arindam Khan , College of Computing, Georgia Tech , Organizer:

In this talk I will briefly survey results on vertex sparsification and some of our results on mimicking networks (or exact cut sparsifiers). Ankur Moitra introduced the notion of vertex sparsification to construct a smaller graph which preserves the properties of a huge network that are relevant to the terminals. Given a capacitated undirected graph $G=(V,E)$ with a set of terminals $K \subset V$, a vertex cut sparsifier is a smaller graph $H=(V_H,E_H)$ that approximately (with quality $f \geq 1$) preserves all the minimum cuts between the terminals. Mimicking networks are the best-quality vertex cut sparsifiers, i.e., those with quality 1. We improve both the previous upper bound ($2^{2^{k}}$) and lower bound ($k+1$) for mimicking networks, reducing the doubly-exponential gap between them to a single-exponential gap.

1. Given a graph $G$, we exhibit a construction of a mimicking network with at most the $k$'th Hosten-Morris number ($\approx 2^{\binom{k-1}{\lfloor (k-1)/2 \rfloor}}$) of vertices (independent of the size of $V$). Furthermore, we show that the construction is optimal among all {\it restricted mimicking networks} -- a natural class of mimicking networks that are obtained by clustering vertices together.

2. There exist graphs with $k$ terminals that have no mimicking network of size smaller than $2^{\frac{k-1}{2}}$.

3. We also exhibit constructions of better mimicking networks for trees ($\lfloor \frac{3k}{2} - 1 \rfloor$), outerplanar graphs ($5k-9$), and graphs of tree-width bounded by $t$ ($k \cdot 2^{\binom{2t+1}{(2t+1)/2}}$).

The talk will be self-contained, with no prerequisites.

Friday, November 2, 2012 - 13:00 , Location: Skiles 005 , Steven Ehrlich , College of Computing, Georgia Tech , Organizer:
We present new algorithms learning the class of two-sided disjunctions in the semi-supervised PAC setting and in the active learning model. These algorithms are efficient and have good sample complexity. By exploiting the power of active learning, we are able to find consistent, compatible hypotheses -- a task which is computationally intractable in the semi-supervised setting.
Friday, October 26, 2012 - 13:00 , Location: Skiles 005 , Will Perkins , School of Math., Georgia Tech , Organizer:
A branching random walk consists of a population of individuals, each of whom performs a random walk step before giving birth to a random number of offspring and dying.  The offspring then perform their own independent random steps and branching.  I will present classic results on the convergence of the empirical particle measure to the Gaussian distribution, then present new results on large deviations of this empirical measure.  The talk will be self-contained and can serve as an introduction to both the branching random walk and large deviation theory.  The format will be 40 minutes of introduction and presentation, followed by a short break and then 20 minutes of discussion of open problems for those interested.
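A minimal simulation of the process just described, with an assumed offspring law of 1 or 2 children with equal probability (chosen only to keep the population supercritical; the talk's results hold for general offspring distributions):

```python
import random

def brw_positions(generations, rng=random.Random(1)):
    """Simulate a branching random walk: each particle takes a +/-1 step
    and then branches into 1 or 2 offspring with equal probability,
    placing the offspring at its post-step position.  Returns all particle
    positions after the given number of generations."""
    positions = [0]
    for _ in range(generations):
        next_gen = []
        for x in positions:
            y = x + rng.choice((-1, 1))                 # random walk step
            next_gen.extend([y] * rng.choice((1, 2)))   # branch, then die
        positions = next_gen
    return positions

ps = brw_positions(10)
# the empirical measure of ps, suitably recentered and rescaled,
# approaches a Gaussian as the number of generations grows
```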
Friday, October 19, 2012 - 13:00 , Location: Skiles 005 , Prateek Bhakta , College of Computing, Georgia Tech , Organizer:
Sampling permutations from S_n is a fundamental problem from probability theory.  The nearest neighbor transposition chain M_n is known to converge in time \Theta(n^3 \log n) in the uniform case and time \Theta(n^2) in the constant bias case, in which we put adjacent elements in order with probability p \neq 1/2 and out of order with probability 1-p.  In joint work with Prateek Bhakta, Dana Randall and Amanda Streib, we consider the variable bias case where the probability of putting an adjacent pair of elements in order depends on the two elements, and we put adjacent elements x < y in order with probability p_{x,y} and out of order with probability 1-p_{x,y}.  The problem of bounding the mixing rate of M_n was posed by Fill and was motivated by the Move-Ahead-One self-organizing list update algorithm.  It was conjectured that the chain would always be rapidly mixing if 1/2 \leq p_{x,y} \leq 1 for all x < y, but this was only known in the case of constant bias or when p_{x,y} is equal to 1/2 or 1, a case that corresponds to sampling linear extensions of a partial order.  We prove the chain is rapidly mixing for two classes: ``Choose Your Weapon,'' where we are given r_1,..., r_{n-1} with r_i \geq 1/2 and p_{x,y}=r_x for all x < y (so the dominant player chooses the game, thus fixing his or her probability of winning), and ``League Hierarchies,'' where there are two leagues and players from the A-league have a fixed probability of beating players from the B-league, players within each league are similarly divided into sub-leagues with a possibly different fixed probability, and so forth recursively.  Both of these classes include permutations with constant bias as a special case.  Moreover, we also prove that the most general conjecture is false.  We do so by constructing a counterexample where 1/2 \leq p_{x,y} \leq 1 for all x < y, but for which the nearest neighbor transposition chain requires exponential time to converge.
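The chain M_n with variable bias can be sketched as a heat-bath update: pick a uniformly random adjacent position and re-sort that pair according to p_{x,y}. This is an illustrative implementation under that assumption, shown with a "Choose Your Weapon" bias function; the parameters are arbitrary.

```python
import random

def biased_transposition_chain(n, p, steps, rng=random.Random(0)):
    """Nearest-neighbor transposition chain on permutations of 1..n with
    variable bias.  p(x, y) for x < y is the probability of placing x
    before y when the adjacent pair {x, y} is selected.  Each step picks a
    uniform adjacent position i and re-sorts sigma[i], sigma[i+1]."""
    sigma = list(range(1, n + 1))
    for _ in range(steps):
        i = rng.randrange(n - 1)
        x, y = sorted((sigma[i], sigma[i + 1]))
        if rng.random() < p(x, y):
            sigma[i], sigma[i + 1] = x, y   # put the pair in order
        else:
            sigma[i], sigma[i + 1] = y, x   # put the pair out of order
    return sigma

# "Choose Your Weapon": p(x, y) = r_x depends only on the smaller element.
r = {x: 0.75 for x in range(1, 6)}
perm = biased_transposition_chain(5, lambda x, y: r[x], 10000)
# perm is a random permutation of 1..5 biased toward sorted order
```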
Friday, October 5, 2012 - 13:00 , Location: Skiles 005 , Ying Xiao , College of Computing, Georgia Tech , Organizer:
In the last 10 years, compressed sensing has arisen as an entirely new area of mathematics, combining ideas from convex programming, random matrices, theoretical computer science and many other fields. Candes (one of the originators of the area) recently spoke about two quite recent and exciting developments, but it might be interesting to revisit the fundamentals and see where many of the ideas in the more recent works originated.

In this talk, I will discuss some of the earlier papers (Candes-Romberg-Tao), define the compressed sensing problem, the key restricted isometry property, and how it relates to the Johnson-Lindenstrauss lemma for random projections. I'll also discuss some of the more TCS ideas such as compressed sensing through group testing, and hopefully some of the greedy algorithm ideas as well. Finally, if time allows, I'll draw parallels with other problems, such as matrix completion and phase retrieval.

The talk will be quite elementary, requiring only a knowledge of linear algebra and some probability.
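The connection between random projections and the restricted isometry property can be illustrated numerically: a Gaussian random projection approximately preserves the norm of any fixed (sparse) vector, which is the mechanism behind both Johnson-Lindenstrauss embeddings and RIP constructions. The dimensions and seed below are arbitrary choices for this sketch.

```python
import math
import random

def random_projection(dim_in, dim_out, rng=random.Random(0)):
    """Gaussian random projection A with i.i.d. N(0, 1/dim_out) entries,
    so that E||Ax||^2 = ||x||^2 for any fixed vector x."""
    s = 1.0 / math.sqrt(dim_out)
    return [[rng.gauss(0.0, s) for _ in range(dim_in)]
            for _ in range(dim_out)]

def apply_matrix(A, x):
    return [sum(a * t for a, t in zip(row, x)) for row in A]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

A = random_projection(1000, 200)
x = [1.0 if i < 10 else 0.0 for i in range(1000)]  # a 10-sparse vector
ratio = norm(apply_matrix(A, x)) / norm(x)
# ratio concentrates near 1: the projection nearly preserves the norm
```

Concentration of measure makes this ratio close to 1 with high probability; RIP asks for the same guarantee simultaneously over all sparse vectors, which Gaussian matrices achieve with dim_out on the order of the sparsity times a logarithmic factor.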