
Series: ACO Student Seminar

We investigate whether standard dimensionality reduction techniques
inadvertently produce data representations with different fidelity for
two different populations. We show that on several real-world datasets,
PCA has higher reconstruction error on population A than on population B
(for example, women versus men, or lower- versus higher-educated
individuals). This can happen even when the dataset has a similar number
of samples from A and B. This motivates our study of dimensionality
reduction techniques that maintain similar fidelity for A and B. We give
an efficient algorithm for finding a projection which is nearly optimal
with respect to this measure, and evaluate it on several datasets. This
is joint work with Uthaipon Tantipongpipat, Jamie Morgenstern, Mohit
Singh, and Santosh Vempala.
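As a minimal illustration of the phenomenon (with synthetic data and group labels of my own invention, not the paper's experiments or algorithm), one can fit PCA on pooled data and measure reconstruction error separately per group:

```python
# Synthetic sketch: rank-1 PCA fitted on pooled data can reconstruct one
# population much better than the other. Data and groups are invented here.
import numpy as np

rng = np.random.default_rng(0)

# Group A varies mostly along the x-axis, group B along the y-axis,
# so a single principal direction must favor one of them.
A = rng.normal(size=(200, 2)) * np.array([3.0, 0.3])
B = rng.normal(size=(200, 2)) * np.array([0.3, 1.0])

mu = np.vstack([A, B]).mean(axis=0)
_, _, Vt = np.linalg.svd(np.vstack([A, B]) - mu, full_matrices=False)
P = Vt[:1].T @ Vt[:1]          # projector onto the top principal direction

def avg_recon_error(G):
    """Mean squared reconstruction error of group G under the rank-1 PCA."""
    R = (G - mu) - (G - mu) @ P
    return float((R ** 2).sum(axis=1).mean())

err_A, err_B = avg_recon_error(A), avg_recon_error(B)
# PCA picks A's dominant direction, so B suffers a larger error.
```

Both groups contribute the same number of samples, yet the projection chosen by pooled PCA serves them very differently, which is exactly the disparity the talk addresses.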

Series: ACO Student Seminar

At the heart of most algorithms today is an optimization engine that tries
to learn online and provide the best decision (e.g., a ranking of objects)
at any time, using the partial information observed so far. It is often
difficult to find near-optimal solutions to such problems because their
inherent combinatorial structure leads to computational bottlenecks.
Submodularity is a discrete analogue of convexity and is a key property
often exploited in tackling combinatorial optimization problems.
In the first part of the talk, we will focus on two computational
bottlenecks that involve submodular functions: (a) convex function
minimization over submodular base polytopes (e.g., the permutahedron) and
(b) movement along a line inside submodular base polytopes.
We give a conceptually simple and strongly polynomial algorithm, Inc-Fix,
for the former, which is useful for computing Bregman projections in
first-order projection-based methods such as online mirror descent. For
the latter, we bound the number of iterations of the discrete Newton
method, which gives a running-time improvement of at least n^6 over the
state of the art. This is joint work with Michel Goemans and Patrick
Jaillet. In the second part of the talk, we will consider the dual problem
of (a), i.e., the minimization of composite convex and submodular
objectives. We will resolve Bach's conjecture from 2015 about the running
time of a popular variant of Kelley's cutting-plane method for minimizing
these composite objectives. This is joint work with Madeleine Udell and
Song Zhou.
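The base-polytope objects in (a) and (b) can be made concrete via Edmonds' greedy algorithm, the standard subroutine for maximizing a linear function over a submodular base polytope. The sketch below is my own illustration of that classical rule (it is not the Inc-Fix algorithm), with the permutahedron as the example polytope:

```python
# Edmonds' greedy algorithm: a vertex of the base polytope B(f) maximizing
# <w, x> is obtained by sorting elements in decreasing weight order and
# assigning each its marginal gain. f must be submodular with f({}) = 0.

def greedy_vertex(f, ground, w):
    order = sorted(ground, key=lambda e: -w[e])   # decreasing weight
    x, prefix = {}, frozenset()
    for e in order:
        x[e] = f(prefix | {e}) - f(prefix)        # marginal gain of e
        prefix |= {e}
    return x

# The permutahedron (mentioned in the talk) is B(f) for a cardinality-based
# f: here, with n = 3, f(S) sums the |S| largest of the values 3, 2, 1.
f = lambda S: [0, 3, 5, 6][len(S)]
x = greedy_vertex(f, {0, 1, 2}, {0: 5.0, 1: 1.0, 2: 3.0})
# x assigns 3 to the heaviest element, then 2, then 1 — a permutation vertex.
```

First-order methods over base polytopes repeatedly call exactly this kind of linear oracle, which is why the structure of B(f) matters for the projections discussed in the talk.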

Series: ACO Student Seminar

Abstract:
Queueing systems are studied in various asymptotic regimes because they
are hard to analyze in general. One popular regime is the heavy-traffic
regime, in which the system is loaded very close to its capacity.
Heavy-traffic behavior of queueing systems is traditionally studied using
fluid and diffusion limits. In this talk, I will present a recently
developed method called the 'drift method', which is much simpler and is
based on studying the drift of certain test functions. In addition to
exactly characterizing the heavy-traffic behavior, the drift method can
be used to obtain lower and upper bounds for all loads. I will present
the drift method and its successful application in the context of
data-center networks to resolve a decade-old conjecture. I will also talk
about ongoing work and some open problems.
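To give a flavor of the method, here is a toy example of my own (not from the talk): for a discrete-time single-server queue, setting the steady-state drift of the test function V(q) = q^2 to zero and dropping the unused-service term yields an upper bound on E[q] that holds at every load, which we can check against simulation.

```python
# Discrete-time single-server queue: Bernoulli arrivals (prob lam) and
# service opportunities (prob mu) per slot, q_{t+1} = max(q_t + a - s, 0).
# In steady state, 0 = 2 E[q](lam - mu) + E[(a-s)^2] - E[u^2], where u is
# the unused service; dropping the (nonnegative) E[u^2] term gives
#   E[q] <= E[(a - s)^2] / (2 (mu - lam)),  valid at every load.
import random

lam, mu = 0.8, 0.9
random.seed(1)

second_moment = lam + mu - 2 * lam * mu       # E[(a - s)^2], a,s Bernoulli
upper_bound = second_moment / (2 * (mu - lam))

q, total, T = 0, 0, 200_000
for _ in range(T):
    a = random.random() < lam                 # arrival this slot
    s = random.random() < mu                  # service opportunity this slot
    q = max(q + a - s, 0)
    total += q
mean_q = total / T                            # empirical E[q]
```

The simulated mean stays below the drift bound; as lam approaches mu the bound becomes tight, which is the exact heavy-traffic characterization the talk describes in far greater generality.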
Bio:
Siva Theja Maguluri is an Assistant Professor in the School of Industrial
and Systems Engineering at Georgia Tech. Before that, he was a Research
Staff Member in the Mathematical Sciences Department at the IBM T. J.
Watson Research Center. He obtained his Ph.D. in Electrical and Computer
Engineering from the University of Illinois at Urbana-Champaign, where he
worked on resource allocation algorithms for cloud computing and wireless
networks. Earlier, he received an MS in ECE and an MS in Applied Math
from UIUC and a B.Tech in Electrical Engineering from IIT Madras. His
research interests include stochastic processes, optimization, cloud
computing, data centers, resource allocation and scheduling algorithms,
networks, and game theory. The current talk is based on a paper that
received the Best Publication in Applied Probability Award from the
INFORMS Applied Probability Society.

Series: ACO Student Seminar

Given a finite partially ordered set (poset) of width $w$, Dilworth's theorem guarantees the existence of a partition into $w$ chains, and that no partition into fewer chains exists. First-Fit is an online algorithm for creating a chain partition of a poset. Given a linear ordering of the points of the poset, $v_1, \cdots, v_n$, First-Fit assigns the point $v_i$ to the first available chain in the chain partition of the points $v_1, \cdots, v_{i-1}$. It is known that First-Fit has unbounded performance over the family of finite posets of width 2. We give a complete characterization of the family of finite posets on which First-Fit performs with subexponential efficiency in terms of width. We will also review what is currently known about the family of posets on which First-Fit performs with polynomial efficiency in terms of width. Joint work with Kevin Milans.
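First-Fit itself is only a few lines of code. The sketch below is my own implementation of the standard rule, run on the divisibility order (where two numbers are comparable exactly when one divides the other):

```python
# First-Fit chain partitioning: each arriving point is placed on the first
# existing chain with which it is comparable to every element; otherwise a
# new chain is opened.

def first_fit_chains(points, leq):
    """leq(x, y): True iff x <= y in the poset; points arrive in order."""
    comparable = lambda x, y: leq(x, y) or leq(y, x)
    chains = []
    for v in points:
        for c in chains:
            if all(comparable(v, u) for u in c):
                c.append(v)
                break
        else:
            chains.append([v])       # no existing chain accepts v
    return chains

# Divisibility poset on {1, ..., 10}: x <= y iff x divides y.
chains = first_fit_chains(range(1, 11), lambda x, y: y % x == 0)
```

On this instance First-Fit happens to use exactly width-many chains, but as the abstract notes, there are width-2 posets and arrival orders that force it to open arbitrarily many.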

Series: ACO Student Seminar

As a generalization of many classic problems in combinatorial optimization, submodular optimization has found a wide range of applications in machine learning (e.g., in feature engineering and active learning). For many large-scale optimization problems,
we are often concerned with the adaptivity complexity of an algorithm, which quantifies the number of sequential rounds in which polynomially many independent function evaluations can be executed in parallel. While low adaptivity is ideal, it is not sufficient
for a (distributed) algorithm to be efficient, since in many practical applications of submodular optimization the number of function evaluations becomes prohibitively expensive. Motivated by such applications, we study the adaptivity and query complexity
of submodular optimization.
Our main result is a distributed algorithm for maximizing a monotone submodular function with cardinality constraint $k$ that achieves a $(1-1/e-\varepsilon)$-approximation in expectation. Furthermore, this algorithm runs in $O(\log(n))$ adaptive rounds
and makes $O(n)$ calls to the function evaluation oracle in expectation. All three of these guarantees are optimal, and the query complexity is substantially less than in previous works. Finally, to show the generality of our simple algorithm and techniques,
we extend our results to the submodular cover problem.
Joint work with Vahab Mirrokni and Morteza Zadimoghaddam (<a href="https://arxiv.org/abs/1807.07889">arXiv:1807.07889</a>).
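For contrast with the low-adaptivity algorithm of the talk, here is the classical fully sequential greedy, which also attains a (1 - 1/e)-approximation for monotone submodular maximization under a cardinality constraint but needs k adaptive rounds, one per chosen element. The coverage instance below is my own example:

```python
# Sequential greedy for monotone submodular maximization subject to |S| <= k:
# repeatedly add the element with the largest marginal gain. Each iteration
# is one adaptive round, so the total adaptivity is k.

def greedy(f, ground, k):
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S))   # marginal gain
        S.add(best)
    return S

# Coverage function (a canonical monotone submodular function):
# f(S) = size of the union of the chosen sets.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
S = greedy(f, sets.keys(), 2)
```

The distributed algorithm in the talk replaces these k sequential rounds with O(log n) adaptive rounds while keeping O(n) expected oracle calls.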

Series: ACO Student Seminar

We study the dynamic graph connectivity problem in the massively
parallel computation model. We give a data structure for maintaining a
dynamic undirected graph that handles batches of updates and
connectivity queries in constant rounds, as long as the queries fit on a
single machine. This assumption corresponds to the gradual buildup of
databases over time from sources such as log files and user
interactions. Our techniques combine a distributed data structure for
Euler tour (ET) trees, a structural theorem about rapidly contracting
graphs by sampling n^{\epsilon} random neighbors, and modifications to
sketching-based dynamic connectivity data structures.
Joint work with David Durfee, Janardhan Kulkarni, Richard Peng and Xiaorui Sun.
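The actual data structure is distributed and supports deletions; as a vastly simplified single-machine analogue (my own sketch, insertions only), batched incremental connectivity can be handled with union-find, processing each round's edge insertions before answering that round's queries:

```python
# Batched incremental connectivity: each batch is (edges_to_insert, queries),
# mirroring the "batches of updates and connectivity queries" interface.

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def process(n, batches):
    dsu, answers = DSU(n), []
    for edges, queries in batches:
        for u, v in edges:
            dsu.union(u, v)
        answers.append([dsu.find(u) == dsu.find(v) for u, v in queries])
    return answers

ans = process(5, [([(0, 1), (2, 3)], [(0, 1), (1, 2)]),
                  ([(1, 2)], [(0, 3), (0, 4)])])
```

Supporting edge deletions in this batched setting, in constant rounds of a massively parallel computation, is precisely where the ET-tree and sketching machinery of the talk comes in.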

Series: ACO Student Seminar

Consider the problem of selling items to a unit-demand buyer. Most work
on maximizing seller revenue considers either a setting that is single
dimensional, such as where the items are identical, or
multi-dimensional, where the items are heterogeneous. With respect to
revenue-optimal mechanisms, these settings sit at extreme ends of a
spectrum: from simple and fully characterized (single-dimensional) to
complex and nebulous (multi-dimensional).
In this paper, we identify a setting that sits in between these
extremes. We consider a seller who has three services {A,B,C} for sale
to a single buyer with a value v and an interest G from {A,B,C}, and
there is a known partial ordering over the services. For example,
suppose the seller is selling {internet}, {internet, phone}, and
{internet, cable tv}. A buyer with interest {internet} would be
satisfied by receiving phone or cable tv in addition, but a buyer
whose interest is {internet, phone} cannot be satisfied by any other
option. Thus this corresponds to a partial ordering where {internet}
> {internet, phone} and {internet} > {internet, cable tv}, but
{internet, phone} and {internet, cable tv} are incomparable.
We show formally that partially-ordered items lie in a space of their
own, in between identical and heterogeneous items: there exist
distributions over (value, interest) pairs for three partially-ordered
items such that the menu complexity of the optimal mechanism is
unbounded, yet for all distributions there exists an optimal mechanism
of finite menu complexity. So this setting is vastly more complex than
identical items (where the menu complexity is one), or even
“totally-ordered” items as in the FedEx Problem [FGKK16] (where the menu
complexity is at most seven, for three items), yet drastically more
structured than heterogeneous items (where the menu complexity can be
uncountable [DDT15]). We achieve this result by proving a
characterization of the class of best duals and by giving a primal
recovery algorithm which obtains the optimal mechanism. In addition, we
(1) extend our lower-bound to the Multi-Unit Pricing setting, (2) give a
tighter and deterministic characterization of the optimal mechanism
when the buyer’s distribution satisfies the declining marginal revenue
condition, and (3) prove a master theorem that allows us to reason about
duals instead of distributions.
Joint work with Nikhil Devanur, Raghuvansh Saxena, Ariel Schvartzman, and Matt Weinberg.

Series: ACO Student Seminar

We study the $A$-optimal design problem where we are given vectors $v_1,\ldots, v_n\in \R^d$, an integer $k\geq d$, and the goal is to select a set $S$ of $k$ vectors that minimizes the trace of $\left(\sum_{i\in S} v_i v_i^{\top}\right)^{-1}$. Traditionally, the problem is an instance of optimal design of experiments in statistics (\cite{pukelsheim2006optimal}), where each vector corresponds to a linear measurement of an unknown vector and the goal is to pick $k$ of them that minimize the average variance of the error in the maximum likelihood estimate of the vector being measured. The problem also finds applications in sensor placement in wireless networks~(\cite{joshi2009sensor}), sparse least-squares regression~(\cite{BoutsidisDM11}), feature selection for $k$-means clustering~(\cite{boutsidis2013deterministic}), and matrix approximation~(\cite{de2007subset,de2011note,avron2013faster}). In this paper, we introduce \emph{proportional volume sampling} to obtain improved approximation algorithms for $A$-optimal design. Given a matrix, proportional volume sampling picks a set of columns $S$ of size $k$ with probability proportional to $\mu(S)$ times $\det(\sum_{i \in S}v_i v_i^\top)$ for some measure $\mu$. Our main result shows that the approximability of the $A$-optimal design problem can be reduced to \emph{approximate} independence properties of the measure $\mu$. We appeal to hard-core distributions as candidate distributions $\mu$ that allow us to obtain improved approximation algorithms for $A$-optimal design. Our results include a $d$-approximation when $k=d$, a $(1+\epsilon)$-approximation when $k=\Omega\left(\frac{d}{\epsilon}+\frac{1}{\epsilon^2}\log\frac{1}{\epsilon}\right)$, and a $\frac{k}{k-d+1}$-approximation when repetitions of vectors are allowed in the solution. We also consider a generalization of the problem for $k\leq d$ and obtain a $k$-approximation.
The last result also implies a restricted invertibility principle for the harmonic mean of singular values. We also show that the $A$-optimal design problem is $\NP$-hard to approximate within a fixed constant when $k=d$.
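On a tiny instance, both the $A$-optimal objective and proportional volume sampling (with the uniform measure $\mu$) can be written out by brute force. The sketch below is my own and only illustrates the definitions; it is not the paper's algorithm, which avoids enumeration:

```python
# Brute force on a tiny instance: enumerate all size-k subsets, evaluate the
# A-optimal objective tr((sum_{i in S} v_i v_i^T)^{-1}), and sample S with
# probability proportional to det(sum_{i in S} v_i v_i^T) (uniform mu).
import itertools
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(6, 2))                  # n = 6 random vectors in R^2
k = 3

def M(S):
    return sum(np.outer(V[i], V[i]) for i in S)

def a_objective(S):
    return float(np.trace(np.linalg.inv(M(S))))

subsets = list(itertools.combinations(range(6), k))
weights = np.array([np.linalg.det(M(S)) for S in subsets])
probs = weights / weights.sum()              # proportional volume sampling

best = min(subsets, key=a_objective)         # optimum by enumeration
sampled = subsets[rng.choice(len(subsets), p=probs)]
```

Volume-weighted sampling biases toward well-spread subsets, whose Gram matrices are well-conditioned; the paper's contribution is showing how approximate independence properties of $\mu$ translate into approximation guarantees for this objective.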

Series: ACO Student Seminar

A low-diameter decomposition (LDD) of an undirected graph G is a partitioning of G into components of bounded diameter, such that only a small fraction of the original edges go between components. This decomposition has played an instrumental role
in the design of low-stretch spanning trees, spanners, distributed algorithms, etc.
A natural question is whether such a decomposition can be efficiently maintained/updated as G undergoes insertions/deletions of edges. We make a first step towards answering this question by designing a fully dynamic graph algorithm that maintains an
LDD in sub-linear update time.
It is known that any undirected graph G admits a spanning tree T with nearly logarithmic average stretch, computable in nearly linear time. This tree decomposition underlies much recent progress in static algorithms for combinatorial and scientific
flows. Using our dynamic LDD algorithm, we present the first non-trivial algorithm that dynamically maintains a low-stretch spanning tree in \tilde{O}(t^2) amortized update time, while achieving (t + \sqrt{n^{1+o(1)}/t}) stretch, for every 1 \leq t \leq n.
Joint work with Sebastian Krinninger.
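For intuition about what an LDD is, here is a static sketch of the classical ball-growing construction (my own code; real constructions randomize the radii, e.g. with exponentially distributed cutoffs, to bound the number of cut edges, and the talk's contribution is making such decompositions dynamic):

```python
# Static LDD sketch: repeatedly carve off a BFS ball of bounded radius from
# an arbitrary remaining vertex. Each component then has diameter at most
# 2 * radius; edges between components are the "cut" edges.
from collections import deque

def ldd(adj, radius):
    """Partition vertices into BFS balls of radius <= radius.
    adj: dict vertex -> set of neighbours (undirected graph)."""
    unassigned = set(adj)
    component, cid = {}, 0
    while unassigned:
        src = next(iter(unassigned))
        frontier, dist = deque([(src, 0)]), {src: 0}
        while frontier:
            v, d = frontier.popleft()
            component[v] = cid
            unassigned.discard(v)
            if d < radius:                    # ball may still grow
                for u in adj[v]:
                    if u in unassigned and u not in dist:
                        dist[u] = d + 1
                        frontier.append((u, d + 1))
        cid += 1
    return component

# Example: 6-cycle with radius 1 — each ball is a vertex plus its neighbours.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
comp = ldd(adj, 1)
```

The dynamic question in the talk is how to repair such a partition under edge insertions and deletions without recomputing it from scratch.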

Series: ACO Student Seminar

Many well-studied problems in CS theory are about making
“sketches” of graphs that occupy much less space than the graph itself,
but from which the shortest-path distances of the graph can still be
approximately recovered. For example, in the literature on spanners, we
seek a sparse subgraph whose distance metric approximates that of the
original graph. In the emulator literature, we relax the requirement
that the approximating graph be a subgraph. Most generally, in distance
oracles, the sketch can be an arbitrary data structure, so long as it
can approximately answer queries about the pairwise distance between
nodes in the original graph.
Research on these objects typically focuses on optimizing the
worst-case tradeoff between the quality of the approximation and the
amount of space the sketch occupies. In this talk, we will survey a
recent leap in understanding of this tradeoff, overturning the
conventional wisdom on the problem. Specifically, the tradeoff is not
smooth; rather, it follows a new discrete hierarchy in which the
quality of approximation that can be obtained jumps considerably at
certain predictable size thresholds. The proof is graph-theoretic and
relies on building large families of graphs with large discrepancies in
their metrics.
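To make one of these objects concrete, here is the classical greedy construction of an unweighted spanner (a standard textbook algorithm, my own code, not a result from the talk): scan the edges and keep one only when the sketch built so far does not already approximate the distance between its endpoints.

```python
# Greedy spanner with multiplicative stretch t: keep edge (u, v) only if the
# current spanner has no u-v path of length <= t. Every dropped edge is then
# approximated within factor t, so the kept subgraph is a t-spanner.
from collections import deque

def bfs_dist(adj, s, t, cap):
    """Distance from s to t, or None if it exceeds cap."""
    dist, q = {s: 0}, deque([s])
    while q:
        v = q.popleft()
        if v == t:
            return dist[v]
        if dist[v] < cap:                     # do not expand past the cap
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
    return None

def greedy_spanner(n, edges, stretch):
    adj = {v: set() for v in range(n)}
    kept = []
    for u, v in edges:
        if bfs_dist(adj, u, v, stretch) is None:   # not yet approximated
            adj[u].add(v); adj[v].add(u)
            kept.append((u, v))
    return kept

# Example: a 4-cycle with a chord; stretch 3 lets the spanner drop two edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
kept = greedy_spanner(4, edges, 3)
```

The size/stretch tradeoff of exactly such objects is what the surveyed results show to be non-smooth, jumping at predictable thresholds rather than degrading gradually.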