Seminars and Colloquia by Series

Friday, April 20, 2018 - 13:05 , Location: Skiles 005 , Kira Goldner , CSE, University of Washington , kgoldner@cs.washington.edu , Organizer: He Guo
Consider the problem of selling items to a unit-demand buyer. Most work on maximizing seller revenue considers either a setting that is single dimensional, such as where the items are identical, or multi-dimensional, where the items are heterogeneous. With respect to revenue-optimal mechanisms, these settings sit at extreme ends of a spectrum: from simple and fully characterized (single-dimensional) to complex and nebulous (multi-dimensional). In this paper, we identify a setting that sits in between these extremes. We consider a seller who has three services {A,B,C} for sale to a single buyer with a value v and an interest G from {A,B,C}, and there is a known partial ordering over the services. For example, suppose the seller is selling {internet}, {internet, phone}, and {internet, cable tv}. A buyer with interest {internet} would be satisfied by receiving phone or cable tv in addition, but a customer whose interest is {internet, phone} cannot be satisfied by any other option. Thus this corresponds to a partial ordering where {internet} > {internet, phone} and {internet} > {internet, cable tv}, but {internet, phone} and {internet, cable tv} are not comparable. We show formally that partially-ordered items lie in a space of their own, in between identical and heterogeneous items: there exist distributions over (value, interest) pairs for three partially-ordered items such that the menu complexity of the optimal mechanism is unbounded, yet for all distributions there exists an optimal mechanism of finite menu complexity. That is, no finite bound on menu complexity holds uniformly over all distributions, even though every individual distribution admits an optimal mechanism with finitely many menu entries. So this setting is vastly more complex than identical items (where the menu complexity is one), or even “totally-ordered” items as in the FedEx Problem [FGKK16] (where the menu complexity is at most seven, for three items), yet drastically more structured than heterogeneous items (where the menu complexity can be uncountable [DDT15]). We achieve this result by proving a characterization of the class of best duals and by giving a primal recovery algorithm which obtains the optimal mechanism. In addition, we (1) extend our lower bound to the Multi-Unit Pricing setting, (2) give a tighter and deterministic characterization of the optimal mechanism when the buyer’s distribution satisfies the declining marginal revenue condition, and (3) prove a master theorem that allows us to reason about duals instead of distributions. Joint work with Nikhil Devanur, Raghuvansh Saxena, Ariel Schvartzman, and Matt Weinberg.
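To make the satisfaction structure concrete, here is a minimal sketch of the partial order in the internet/phone/cable example and the induced notion of which allocations satisfy which interests (the encoding and names are illustrative, not from the paper):

```python
# Hypothetical encoding of the talk's example; not from the paper.
# SATISFIES[s] lists the interests (other than s itself) that an allocation
# of service s would satisfy, per the partial ordering described above.
INTERNET = frozenset({"internet"})
PHONE = frozenset({"internet", "phone"})
CABLE = frozenset({"internet", "cable tv"})

SATISFIES = {INTERNET: set(), PHONE: {INTERNET}, CABLE: {INTERNET}}

def satisfied(allocated, interest):
    """A buyer with `interest` accepts `allocated` iff it equals or upgrades it."""
    return allocated == interest or interest in SATISFIES[allocated]

assert satisfied(PHONE, INTERNET)       # an {internet}-buyer accepts the upgrade
assert not satisfied(CABLE, PHONE)      # an {internet, phone}-buyer does not
```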
Friday, April 6, 2018 - 13:05 , Location: Skiles 005 , Uthaipon (Tao) Tantipongpipat , Georgia Tech , tao@gatech.edu , Organizer: He Guo
We study the $A$-optimal design problem where we are given vectors $v_1,\ldots, v_n\in \mathbb{R}^d$, an integer $k\geq d$, and the goal is to select a set $S$ of $k$ vectors that minimizes the trace of $\left(\sum_{i\in S} v_i v_i^{\top}\right)^{-1}$. Traditionally, the problem is an instance of optimal design of experiments in statistics (\cite{pukelsheim2006optimal}), where each vector corresponds to a linear measurement of an unknown vector and the goal is to pick $k$ of them that minimize the average variance of the error in the maximum likelihood estimate of the vector being measured. The problem also finds applications in sensor placement in wireless networks (\cite{joshi2009sensor}), sparse least squares regression (\cite{BoutsidisDM11}), feature selection for $k$-means clustering (\cite{boutsidis2013deterministic}), and matrix approximation (\cite{de2007subset,de2011note,avron2013faster}). In this paper, we introduce \emph{proportional volume sampling} to obtain improved approximation algorithms for $A$-optimal design. Given a matrix, proportional volume sampling picks a set of columns $S$ of size $k$ with probability proportional to $\mu(S)$ times $\det(\sum_{i \in S}v_i v_i^\top)$ for some measure $\mu$. Our main result shows that the approximability of the $A$-optimal design problem can be reduced to \emph{approximate} independence properties of the measure $\mu$. We appeal to hard-core distributions as candidate distributions $\mu$ that allow us to obtain improved approximation algorithms for $A$-optimal design. Our results include a $d$-approximation when $k=d$, a $(1+\epsilon)$-approximation when $k=\Omega\left(\frac{d}{\epsilon}+\frac{1}{\epsilon^2}\log\frac{1}{\epsilon}\right)$, and a $\frac{k}{k-d+1}$-approximation when repetitions of vectors are allowed in the solution. We also consider a generalization of the problem for $k\leq d$ and obtain a $k$-approximation. The last result also implies a restricted invertibility principle for the harmonic mean of singular values. We also show that the $A$-optimal design problem is NP-hard to approximate within a fixed constant when $k=d$.
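For intuition about the objects in play, here is a brute-force sketch of the $A$-optimality objective and of proportional volume sampling with a given measure $\mu$; it enumerates all size-$k$ subsets, so it is exponential in $n$ and for tiny instances only (numpy assumed, names illustrative; this is not the paper's polynomial-time algorithm):

```python
import itertools
import numpy as np

def a_objective(V, S):
    """A-optimal design objective: trace((sum_{i in S} v_i v_i^T)^{-1})."""
    M = sum(np.outer(V[i], V[i]) for i in S)
    return np.trace(np.linalg.inv(M))

def proportional_volume_sample(V, k, mu, rng):
    """Draw S of size k with Pr[S] proportional to mu(S) * det(sum_{i in S} v_i v_i^T)."""
    subsets = list(itertools.combinations(range(len(V)), k))
    w = np.array([mu(S) * np.linalg.det(sum(np.outer(V[i], V[i]) for i in S))
                  for S in subsets])
    return subsets[rng.choice(len(subsets), p=w / w.sum())]

rng = np.random.default_rng(0)
V = rng.standard_normal((6, 3))                           # n = 6 vectors in R^3
S = proportional_volume_sample(V, 4, lambda S: 1.0, rng)  # mu = uniform measure
print(S, a_objective(V, S))
```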
Friday, March 16, 2018 - 13:05 , Location: Skiles 005 , Gramoz Goranci , CS, University of Vienna , gramoz.goranci@gmail.com , Organizer: He Guo
A low-diameter decomposition (LDD) of an undirected graph G is a partitioning of G into components of bounded diameter, such that only a small fraction of the original edges run between the components. This decomposition has played an instrumental role in the design of low-stretch spanning trees, spanners, distributed algorithms, etc. A natural question is whether such a decomposition can be efficiently maintained/updated as G undergoes insertions/deletions of edges. We make the first step towards answering this question by designing a fully-dynamic graph algorithm that maintains an LDD in sub-linear update time. It is known that any undirected graph G admits a spanning tree T with nearly logarithmic average stretch, which can be computed in nearly linear time. This tree decomposition underlies much of the recent progress in static algorithms for combinatorial and scientific flows. Using our dynamic LDD algorithm, we present the first non-trivial algorithm that dynamically maintains a low-stretch spanning tree in $\tilde{O}(t^2)$ amortized update time, while achieving stretch $t + \sqrt{n^{1+o(1)}/t}$, for every $1 \leq t \leq n$. Joint work with Sebastian Krinninger.
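For background, the classic static construction is randomized ball growing: each cluster is a BFS ball whose radius is geometric with parameter $\beta$, so clusters have diameter $O(\log n/\beta)$ with high probability while each edge is cut with probability $O(\beta)$. A minimal sketch of that static scheme follows (stdlib only, unweighted graphs; it conveys the object being maintained, not the talk's dynamic algorithm):

```python
import random
from collections import deque

def low_diameter_decomposition(adj, beta, rng=random):
    """Assign each vertex a cluster center; adj maps vertex -> iterable of neighbors."""
    cluster = {}
    for src in adj:
        if src in cluster:
            continue
        radius = 0
        while rng.random() > beta:      # radius ~ Geometric(beta)
            radius += 1
        cluster[src] = src
        dist = {src: 0}
        queue = deque([src])
        while queue:                    # grow a BFS ball, claiming unclustered vertices
            u = queue.popleft()
            if dist[u] == radius:
                continue
            for v in adj[u]:
                if v not in cluster:
                    cluster[v] = src
                    dist[v] = dist[u] + 1
                    queue.append(v)
    return cluster

path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
print(low_diameter_decomposition(path, beta=0.3))
```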
Friday, February 9, 2018 - 13:05 , Location: Skiles 005 , Greg Bodwin , CS, MIT , gbodwin@mit.edu , Organizer: He Guo
A lot of well-studied problems in CS Theory are about making “sketches” of graphs that occupy much less space than the graph itself, but where the shortest path distances of the graph can still be approximately recovered from the sketch. For example, in the literature on Spanners, we seek a sparse subgraph whose distance metric approximates that of the original graph. In the Emulator literature, we relax the requirement that the approximating graph is a subgraph. Most generally, in Distance Oracles, the sketch can be an arbitrary data structure, so long as it can approximately answer queries about the pairwise distance between nodes in the original graph. Research on these objects typically focuses on optimizing the worst-case tradeoff between the quality of the approximation and the amount of space that the sketch occupies. In this talk, we will survey a recent leap in understanding about this tradeoff, overturning the conventional wisdom on the problem. Specifically, the tradeoff is not smooth, but rather it follows a new discrete hierarchy in which the quality of the approximation that can be obtained jumps considerably at certain predictable size thresholds. The proof is graph-theoretic and relies on building large families of graphs with large discrepancies in their metrics.
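As one concrete member of this family, the classic greedy spanner of Althöfer et al. keeps an edge only when the sketch built so far has no sufficiently short path between its endpoints. A minimal sketch for unweighted graphs (stdlib only, illustrative):

```python
from collections import deque

def bounded_bfs_dist(adj, s, t, cap):
    """Length of a shortest s-t path of length <= cap in adj, else None."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        if dist[u] == cap:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None

def greedy_spanner(n, edges, k):
    """(2k-1)-spanner: keep (u,v) only if the current spanner has d(u,v) > 2k-1.
    Distances stretch by a factor of at most 2k-1; size is O(n^{1+1/k})."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        if bounded_bfs_dist(adj, u, v, 2 * k - 1) is None:
            adj[u].add(v)
            adj[v].add(u)
    return adj
```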
Friday, February 2, 2018 - 13:05 , Location: Skiles 006 , Audra McMillan , Math, University of Michigan , amcm@umich.edu , Organizer: He Guo
Physical sensors (thermal, light, motion, etc.) are becoming ubiquitous and offer important benefits to society. However, allowing sensors into our private spaces has resulted in considerable privacy concerns. Differential privacy has been developed to help alleviate these privacy concerns. In this talk, we’ll develop and define a framework for releasing physical data that both preserves utility and provides privacy. Our notion of closeness of physical data will be defined via the Earth Mover Distance, and we’ll discuss the implications of this choice. Physical data, such as temperature distributions, are often only accessible to us via a linear transformation of the data. We’ll analyse the implications of our privacy definition for linear inverse problems, focusing on those that are traditionally considered to be “ill-conditioned”. We’ll then instantiate our framework with the heat kernel on graphs and discuss how the privacy parameter relates to the connectivity of the graph. Our work indicates that it is possible to produce locally private sensor measurements that both keep the exact locations of the heat sources private and permit recovery of the “general geographic vicinity” of the sources. Joint work with Anna C. Gilbert.
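To fix ideas about the linear transformation involved, here is a small sketch of the heat kernel $e^{-tL}$ on a graph, the map through which a heat source is observed, with naive additive Laplace noise standing in for the talk's more subtle mechanism (numpy assumed; illustrative only, not the authors' scheme):

```python
import numpy as np

def heat_kernel(A, t):
    """exp(-t L) for the combinatorial Laplacian L = D - A of a symmetric adjacency A."""
    L = np.diag(A.sum(axis=1)) - A
    w, U = np.linalg.eigh(L)                 # L = U diag(w) U^T
    return U @ np.diag(np.exp(-t * w)) @ U.T

# Path graph on 5 vertices with a unit heat source at vertex 0.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
x = np.eye(5)[0]                             # source indicator vector
y = heat_kernel(A, t=1.0) @ x                # what sensors observe: smoothed heat
rng = np.random.default_rng(0)
y_noisy = y + rng.laplace(scale=0.1, size=y.shape)  # naive noisy release (illustration)
```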
Friday, January 26, 2018 - 13:05 , Location: Skiles 005 , David Hemmi , CS, Monash University , david.hemmi@monash.edu , Organizer: He Guo
Stochastic programming is concerned with decision making under uncertainty, seeking an optimal policy with respect to a set of possible future scenarios. While the value of Stochastic Programming is obvious to many practitioners, in reality uncertainty in decision making is oftentimes neglected. For deterministic optimisation problems, a coherent chain of modelling and solving exists. Employing standard modelling languages and solvers for stochastic programs is, however, difficult. First, they have (with exceptions) no native support to formulate Stochastic Programs. Secondly, solving stochastic programs with standard solvers (e.g. MIP solvers) is often computationally intractable. David will be talking about his research that aims to make Stochastic Programming more accessible. First, he will be talking about modelling deterministic and stochastic programs in the Constraint Programming language MiniZinc - a modelling paradigm that retains the structure of a problem much more strongly than MIP formulations. Secondly, he will be talking about decomposition algorithms he has been working on to solve combinatorial Stochastic Programs.
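For readers unfamiliar with the setup, a toy two-stage stochastic program can be solved by taking expectations over an explicit scenario set. The sketch below, an illustrative newsvendor example solved by brute force, shows the basic shape; realistic models need the modelling languages and decomposition methods discussed in the talk:

```python
# Two-stage newsvendor: commit to an order quantity q now (first stage);
# demand d is revealed later (second stage). Maximize expected profit.
scenarios = [(0.3, 5), (0.5, 10), (0.2, 15)]   # (probability, demand)
unit_cost, unit_price = 4.0, 7.0

def expected_profit(q):
    return sum(p * (unit_price * min(q, d) - unit_cost * q) for p, d in scenarios)

best_q = max(range(16), key=expected_profit)
print(best_q, expected_profit(best_q))         # optimal order and its expected profit
```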
Friday, January 19, 2018 - 13:05 , Location: Skiles 005 , Sarah Cannon , CS, Georgia Tech , sarah.cannon@gatech.edu , Organizer: He Guo
Studying random samples drawn from large, complex sets is one way to begin to learn about typical properties and behaviors. However, it is important that the samples examined are random enough: studying samples that are unexpectedly correlated or drawn from the wrong distribution can produce misleading conclusions. Sampling processes using Markov chains have been utilized in physics, chemistry, and computer science, among other fields, but they are often applied without careful analysis of their reliability. Making sure widely-used sampling processes produce reliably representative samples is a main focus of my research, and in this talk I'll touch on two specific applications from statistical physics and combinatorics. I'll also discuss work applying these same Markov chain processes used for sampling in a novel way to address research questions in programmable matter and swarm robotics, where a main goal is to understand how simple computational elements can accomplish complicated system-level goals. In a constrained setting, we've answered this question by showing that groups of abstract particles executing our simple processes (which are derived from Markov chains) can provably accomplish remarkable global objectives. In the long run, one goal is to understand the minimum computational abilities elements need in order to exhibit complex global behavior, with an eye towards developing systems where individual components are as simple as possible. This talk includes joint work with Marta Andrés Arroyo, Joshua J. Daymude, Daniel I. Goldman, David A. Levin, Shengkai Li, Dana Randall, Andréa Richa, William Savoie, Alexandre Stauffer, and Ross Warkentin.
Friday, October 13, 2017 - 13:05 , Location: Skiles 005 , David Durfee , CS, Georgia Tech , ddurfee@gatech.edu , Organizer: He Guo
We show that variants of spectral sparsification routines can preserve the total spanning tree counts of graphs, which, by Kirchhoff's matrix-tree theorem, is equivalent to the determinant of a graph Laplacian minor, or equivalently, of any SDDM matrix. Our analysis utilizes this combinatorial connection to bridge between statistical leverage scores / effective resistances and the analysis of random graphs by [Janson, Combinatorics, Probability and Computing `94]. This leads to a routine that, in quadratic time, sparsifies a graph down to about $n^{1.5}$ edges in ways that preserve both the determinant and the distribution of spanning trees (provided the sparsified graph is viewed as a random object). Extending this algorithm to work with Schur complements and approximate Cholesky factorizations leads to algorithms for counting and sampling spanning trees which are nearly optimal for dense graphs. We give an algorithm that computes a $(1\pm \delta)$ approximation to the determinant of any SDDM matrix with constant probability in about $n^2\delta^{-2}$ time. This is the first routine for graphs that outperforms general-purpose routines for computing determinants of arbitrary matrices. We also give an algorithm that generates, in about $n^2\delta^{-2}$ time, a spanning tree of a weighted undirected graph from a distribution with total variation distance $\delta$ from the $w$-uniform distribution. This is joint work with John Peebles, Richard Peng and Anup B. Rao.
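The combinatorial bridge here is Kirchhoff's matrix-tree theorem itself. As a quick illustration (numpy assumed; this shows the identity the routines preserve, not the sparsification algorithm):

```python
import numpy as np

def spanning_tree_count(A):
    """Kirchhoff: #spanning trees = det of the Laplacian with row/col 0 deleted."""
    L = np.diag(A.sum(axis=1)) - A
    return round(np.linalg.det(L[1:, 1:]))

A = np.ones((4, 4)) - np.eye(4)      # the complete graph K_4
print(spanning_tree_count(A))        # 16, matching Cayley's formula 4^{4-2}
```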
Friday, October 6, 2017 - 13:05 , Location: Skiles 005 , Josh Daymude , Arizona State University/GaTech theory lab , joshdaymude@gmail.com , Organizer: He Guo
In a self-organizing particle system, an abstraction of programmable matter, simple computational elements called particles with limited memory and communication self-organize to solve system-wide problems of movement, coordination, and configuration. In this paper, we consider stochastic, distributed, local, asynchronous algorithms for 'shortcut bridging', in which particles self-assemble bridges over gaps while simultaneously balancing the length and the cost of the bridge. Army ants of the genus Eciton have been observed exhibiting a similar behavior in their foraging trails, dynamically adjusting their bridges to satisfy an efficiency tradeoff using local interactions. Using techniques from Markov chain analysis, we rigorously analyze our algorithm, show it achieves a near-optimal balance between the competing factors of path length and bridge cost, and prove that it exhibits a dependence on the angle of the gap being 'shortcut' similar to that of the ant bridges. We also present simulation results that qualitatively compare our algorithm with the army ant bridging behavior. Our work presents a plausible explanation of how convergence to globally optimal configurations can be achieved via local interactions by simple organisms (e.g., ants) with some limited computational power and access to random bits. The proposed algorithm demonstrates the robustness of the stochastic approach to algorithms for programmable matter, as it is a surprisingly simple extension of a stochastic algorithm for compression. This is joint work with my advisor Andrea Richa at ASU, and with Sarah Cannon and Prof. Dana Randall here at GaTech.
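The flavor of the stochastic approach can be glimpsed in a Metropolis-style acceptance rule whose stationary distribution rewards short, cheap bridges. The snippet below is a hypothetical sketch of that idea (the parameters lam and gamma are illustrative placeholders; the actual algorithm is local, distributed, and asynchronous, which this does not capture):

```python
import random

def accept_move(delta_length, delta_cost, lam=4.0, gamma=2.0, rng=random):
    """Accept a proposed local move with prob. min(1, lam^(-dLen) * gamma^(-dCost));
    for lam, gamma > 1 the chain favors configurations with short, cheap bridges."""
    return rng.random() < min(1.0, lam ** (-delta_length) * gamma ** (-delta_cost))
```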
Friday, September 29, 2017 - 13:05 , Location: Skiles 005 , He Guo , School of Mathematics, Georgia Tech , he.guo@gatech.edu , Organizer: He Guo
In 1995 Kim famously proved the Ramsey bound $R(3,t) \ge c t^2/\log t$ by constructing an $n$-vertex graph that is triangle-free and has independence number at most $C \sqrt{n \log n}$. We extend this celebrated result, which is best possible up to the value of the constants, by approximately decomposing the complete graph $K_n$ into a packing of such nearly optimal Ramsey $R(3,t)$ graphs. More precisely, for any $\epsilon>0$ we find an edge-disjoint collection $(G_i)_i$ of $n$-vertex graphs $G_i \subseteq K_n$ such that (a) each $G_i$ is triangle-free and has independence number at most $C_\epsilon \sqrt{n \log n}$, and (b) the union of all the $G_i$ contains at least $(1-\epsilon)\binom{n}{2}$ edges. Our algorithmic proof proceeds by sequentially choosing the graphs $G_i$ via a semi-random (i.e., Rödl nibble type) variation of the triangle-free process. As an application, we prove a conjecture in Ramsey theory by Fox, Grinshpun, Liebenau, Person, and Szabó (concerning a Ramsey-type parameter introduced by Burr, Erdős, and Lovász in 1976). Namely, denoting by $s_r(H)$ the smallest minimum degree of $r$-Ramsey minimal graphs for $H$, we close the existing logarithmic gap for $H=K_3$ and establish that $s_r(K_3) = \Theta(r^2 \log r)$. Based on joint work with Lutz Warnke.
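The plain (non-semi-random) triangle-free process underlying the construction is easy to state and simulate. A minimal sketch (stdlib only; the proof uses a nibble-type variation, which this does not capture):

```python
import itertools
import random

def triangle_free_process(n, rng=random):
    """Insert the edges of K_n in random order, skipping any edge that closes a triangle."""
    edges = list(itertools.combinations(range(n), 2))
    rng.shuffle(edges)
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        if not (adj[u] & adj[v]):     # a common neighbor would create a triangle
            adj[u].add(v)
            adj[v].add(u)
    return adj

G = triangle_free_process(50)
print(sum(len(nbrs) for nbrs in G.values()) // 2, "edges")
```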
