Seminars and Colloquia by Series

CANCELLED: Online Selection with Cardinality Constraints under Bias

Series
ACO Student Seminar
Time
Friday, March 13, 2020 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Jad Salem, Math, Georgia Tech
Optimization and machine learning algorithms often use real-world data that has been generated through complex socio-economic and behavioral processes. This data, however, is noisy, and naturally encodes difficult-to-quantify systemic biases. In this work, we model and address bias in the secretary problem, which has applications in hiring. We assume that utilities of candidates are scaled by unknown bias factors, perhaps depending on demographic information, and show that bias-agnostic algorithms are suboptimal in terms of utility and fairness. We propose bias-aware algorithms that achieve certain notions of fairness, while achieving order-optimal competitive ratios in several settings.
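As a rough illustration of the bias model described above (not the speaker's algorithm), the following Python simulation runs the classical bias-agnostic 1/e-rule on candidates whose observed utilities in one group are inflated by a hypothetical multiplicative factor `beta`:

```python
import math
import random

def classical_secretary(values):
    """Classical 1/e-rule: observe the first n/e candidates,
    then accept the first one better than all observed so far."""
    n = len(values)
    cutoff = max(1, int(n / math.e))
    best_seen = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]  # forced to take the last candidate

# Two groups with identically distributed true utilities, but group 1's
# observed utilities are scaled by an unknown bias factor beta > 1.
random.seed(0)
beta = 1.5
trials = 2000
picked_from_biased_group = 0
for _ in range(trials):
    true_utils = [(random.random(), g) for g in [0, 1] for _ in range(25)]
    random.shuffle(true_utils)
    observed = [(u * (beta if g == 1 else 1.0), g) for u, g in true_utils]
    pick = classical_secretary([o for o, _ in observed])
    group = next(g for o, g in observed if o == pick)
    picked_from_biased_group += group
print(picked_from_biased_group / trials)
```

Under this toy model the bias-agnostic rule selects the inflated group in the vast majority of trials, which is the suboptimality-and-unfairness phenomenon the abstract refers to.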

Strong self concordance and sampling

Series
ACO Student Seminar
Time
Friday, March 6, 2020 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Aditi Laddha, CS, Georgia Tech

Motivated by the Dikin walk, we develop aspects of an interior-point theory for sampling in high dimensions. Specifically, we introduce symmetric and strong self-concordance. These properties imply that the corresponding Dikin walk mixes in $\tilde{O}(n\bar{\nu})$ steps from a warm start in a convex body in $\mathbb{R}^{n}$ using a strongly self-concordant barrier with symmetric self-concordance parameter $\bar{\nu}$. For many natural barriers, $\bar{\nu}$ is roughly bounded by $\nu$, the standard self-concordance parameter. We show that this property and strong self-concordance hold for the Lee-Sidford barrier. As a consequence, we obtain the first walk to mix in $\tilde{O}(n^{2})$ steps for an arbitrary polytope in $\mathbb{R}^{n}$. Strong self-concordance for other barriers leads to an interesting (and unexpected) connection: for the universal and entropic barriers, it is implied by the KLS conjecture.
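For concreteness, here is a minimal Python sketch of the basic Dikin walk the abstract builds on, targeting the uniform distribution over a polytope $\{x : Ax \le b\}$ with the standard log-barrier; the step radius and the Metropolis correction are the usual textbook choices, not the specific parameters from the talk.

```python
import numpy as np

def dikin_walk(A, b, x0, steps=1000, radius=0.5, rng=None):
    """Sketch of the Dikin walk in the polytope {x : A x <= b} using the
    log-barrier Hessian H(x) = sum_i a_i a_i^T / s_i(x)^2, where the
    s_i(x) = b_i - a_i . x are the slacks. Proposals are Gaussian steps
    in the Dikin ellipsoid at x, corrected by a Metropolis filter so the
    walk targets the uniform distribution over the polytope."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[1]

    def hessian(x):
        s = b - A @ x                      # slacks, must stay positive
        return (A / s[:, None]).T @ (A / s[:, None])

    def logdet(H):
        return np.linalg.slogdet(H)[1]

    x = x0.copy()
    scale = radius / np.sqrt(n)
    for _ in range(steps):
        Hx = hessian(x)
        # propose z = x + (r / sqrt(n)) * H(x)^{-1/2} g,  g ~ N(0, I)
        L = np.linalg.cholesky(np.linalg.inv(Hx))
        z = x + scale * L @ rng.standard_normal(n)
        if np.any(A @ z >= b):             # proposal left the polytope
            continue
        Hz = hessian(z)
        dx = z - x
        # Metropolis ratio p_z(x) / p_x(z) for the Gaussian proposals
        log_ratio = 0.5 * (logdet(Hz) - logdet(Hx)) \
            - 0.5 / scale**2 * (dx @ Hz @ dx - dx @ Hx @ dx)
        if np.log(rng.random()) < log_ratio:
            x = z
    return x
```

The mixing-time results in the talk replace the log-barrier here with barriers (e.g. Lee-Sidford) whose symmetric self-concordance parameter is small.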

The Karger-Stein Algorithm is Optimal for k-cut

Series
ACO Student Seminar
Time
Friday, February 21, 2020 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Jason Li, CS, Carnegie Mellon University

In the $k$-cut problem, we are given an edge-weighted graph and want to find the least-weight set of edges whose deletion breaks the graph into $k$ connected components. Algorithms due to Karger-Stein and Thorup showed how to find such a minimum $k$-cut in time approximately $O(n^{2k-2})$. The best lower bounds come from conjectures about the solvability of the $k$-clique problem and a reduction from $k$-clique to $k$-cut, and show that solving $k$-cut is likely to require time $\Omega(n^k)$. Our recent results have given special-purpose algorithms that solve the problem in time $n^{1.98k + O(1)}$, and ones that have better performance for special classes of graphs (e.g., for small integer weights).

In this work, we resolve the problem for general graphs, by showing that for any fixed $k \geq 2$, the Karger-Stein algorithm outputs any fixed minimum $k$-cut with probability at least $\widehat{O}(n^{-k})$, where $\widehat{O}(\cdot)$ hides a $2^{O(\ln \ln n)^2}$ factor. This also gives an extremal bound of $\widehat{O}(n^k)$ on the number of minimum $k$-cuts in an $n$-vertex graph and an algorithm to compute a minimum $k$-cut in similar runtime. Both are tight up to $\widehat{O}(1)$ factors.

The first main ingredient in our result is a fine-grained analysis of how the graph shrinks (and how the average degree evolves) under the Karger-Stein process. The second ingredient is an extremal result bounding the number of cuts of size at most $(2-\delta) OPT/k$, using the Sunflower lemma.
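The Karger-Stein process analyzed above is driven by random edge contraction. Here is a minimal Python sketch of the contraction step for unweighted graphs, without the recursive Karger-Stein speedup; names and parameters are illustrative.

```python
import random

def karger_contract(edges, n, k=2, rng=None):
    """One run of random edge contraction: repeatedly contract a
    uniformly random remaining edge until only k super-vertices are
    left; the original edges crossing between them form a candidate
    k-cut. `edges` is a list of (u, v) pairs on vertices 0..n-1."""
    rng = rng or random.Random(0)
    parent = list(range(n))

    def find(u):                      # union-find with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    components = n
    pool = list(edges)                # edges not yet turned into self-loops
    while components > k:
        u, v = rng.choice(pool)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
        pool = [e for e in pool if find(e[0]) != find(e[1])]
    return [e for e in edges if find(e[0]) != find(e[1])]
```

Repeating the run with fresh randomness and keeping the smallest cut found is the basic algorithm; the talk's result pins down the success probability of each run at $\widehat{O}(n^{-k})$.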

Clustering a Mixture of Gaussians

Series
ACO Student Seminar
Time
Friday, February 14, 2020 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
He Jia, CS, Georgia Tech

We give an efficient algorithm for robustly clustering a mixture of two arbitrary Gaussians, a central open problem in the theory of computationally efficient robust estimation, assuming only that the means of the component Gaussians are well-separated or their covariances are well-separated. Our algorithm and analysis extend naturally to robustly clustering mixtures of well-separated logconcave distributions. The mean separation required is close to the smallest possible to guarantee that most of the measure of the component Gaussians can be separated by some hyperplane (for covariances, it is the same condition in the degree-2 polynomial kernel). Our main tools are a new identifiability criterion based on isotropic position and a corresponding Sum-of-Squares convex programming relaxation.
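The hyperplane-separation condition can be illustrated with a toy experiment (this is not the Sum-of-Squares algorithm from the talk): when two spherical Gaussians have well-separated means, projecting onto the top principal direction and thresholding already recovers the clusters. All parameters below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sep = 500, 5, 8.0        # points per component, dimension, mean gap
mu = np.zeros(d)
mu[0] = sep / 2
X = np.vstack([rng.standard_normal((n, d)) - mu,   # component 0
               rng.standard_normal((n, d)) + mu])  # component 1
labels = np.array([0] * n + [1] * n)

# The top principal direction of the centered data picks out the
# direction of mean separation once the means are far apart.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[0]
pred = (proj > 0).astype(int)
# account for the sign ambiguity of the principal direction
acc = max((pred == labels).mean(), (pred != labels).mean())
print(acc)
```

With adversarial corruptions or separation only in the covariances, this naive projection fails, which is where the talk's identifiability criterion and SoS relaxation come in.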

Learning Optimal Reserve Price against Non-myopic Bidders

Series
ACO Student Seminar
Time
Friday, January 10, 2020 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Jinyan Liu, University of Hong Kong

We consider the problem of learning the optimal reserve price in repeated auctions against non-myopic bidders, who may bid strategically in order to gain in future rounds even if the single-round auctions are truthful. Previous algorithms, e.g., empirical pricing, do not in general provide non-trivial regret bounds in this setting. We introduce algorithms that obtain small regret against non-myopic bidders either when the market is large, i.e., no single bidder appears in more than a small constant fraction of the rounds, or when the bidders are impatient, i.e., they discount future utility by some factor mildly bounded away from one. Our approach carefully controls what information is revealed to each bidder, and builds on techniques from differentially private online learning as well as the recent line of work on jointly differentially private algorithms.

Thresholds versus fractional expectation-thresholds

Series
ACO Student Seminar
Time
Friday, December 6, 2019 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Jinyoung Park, Rutgers University

Please Note: (This is a joint event of ACO Student Seminar and the Combinatorics Seminar Series)

In this talk we will prove a conjecture of Talagrand, which is a fractional version of the “expectation-threshold” conjecture of Kalai and Kahn. This easily implies various difficult results in probabilistic combinatorics, e.g. thresholds for perfect hypergraph matchings (Johansson-Kahn-Vu) and bounded-degree spanning trees (Montgomery). Our approach builds on recent breakthrough work of Alweiss, Lovett, Wu, and Zhang on the Erdős-Rado “Sunflower Conjecture.” 

This is joint work with Keith Frankston, Jeff Kahn, and Bhargav Narayanan.

Fast convergence of fictitious play

Series
ACO Student Seminar
Time
Friday, November 22, 2019 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Kevin A. Lai, CS, Georgia Tech

Fictitious play is one of the simplest and most natural dynamics for two-player zero-sum games. Originally proposed by Brown in 1949, the fictitious play dynamic has each player simultaneously best-respond to the distribution of historical plays of their opponent. In 1951, Robinson showed that fictitious play converges to the Nash Equilibrium, albeit at an exponentially-slow rate, and in 1959, Karlin conjectured that the true convergence rate of fictitious play after k iterations is O(k^{-1/2}), a rate which is achieved by similar algorithms and is consistent with empirical observations. Somewhat surprisingly, Daskalakis and Pan disproved a version of this conjecture in 2014, showing that an exponentially-slow rate can occur, although their result relied on adversarial tie-breaking. In this talk, we show that Karlin’s conjecture holds if ties are broken lexicographically and the game matrix is diagonal. We also show a matching lower bound under this tie-breaking assumption. This is joint work with Jake Abernethy and Andre Wibisono.
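A minimal Python implementation of the fictitious play dynamic, with lexicographic tie-breaking as in the talk (`np.argmax`/`np.argmin` return the first maximizer/minimizer); the iteration count and initial plays are illustrative choices.

```python
import numpy as np

def fictitious_play(A, iters=2000):
    """Fictitious play for the zero-sum game with payoff matrix A
    (row player maximizes x^T A y, column player minimizes). In each
    round both players simultaneously best-respond to the empirical
    distribution of the opponent's past plays; ties are broken
    lexicographically, since argmax/argmin take the first optimizer."""
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] += 1          # arbitrary initial plays
    col_counts[0] += 1
    for _ in range(iters - 1):
        i = int(np.argmax(A @ (col_counts / col_counts.sum())))
        j = int(np.argmin((row_counts / row_counts.sum()) @ A))
        row_counts[i] += 1
        col_counts[j] += 1
    x = row_counts / row_counts.sum()
    y = col_counts / col_counts.sum()
    return x, y
```

On a game with a dominant strategy the empirical mixtures lock onto the pure equilibrium quickly; the diagonal games covered by the talk's result instead exhibit the conjectured $O(k^{-1/2})$ convergence to a mixed equilibrium.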

Faster Width-dependent Algorithm for Mixed Packing and Covering LPs

Series
ACO Student Seminar
Time
Friday, November 15, 2019 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Digvijay Boob, ISyE, Georgia Tech

In this talk, we provide the details of our faster width-dependent algorithm for mixed packing-covering LPs. Mixed packing-covering LPs are fundamental to combinatorial optimization in computer science and operations research. Our algorithm finds a $1+\varepsilon$ approximate solution in time $O(Nw/\varepsilon)$, where $N$ is the number of nonzero entries in the constraint matrix and $w$ is the maximum number of nonzeros in any constraint. This algorithm is faster than Nesterov's smoothing algorithm, which requires $O(N\sqrt{n}w/\varepsilon)$ time, where $n$ is the dimension of the problem. Our work utilizes the framework of area convexity introduced in [Sherman-FOCS’17] to obtain the best dependence on $\varepsilon$ while breaking the infamous $\ell_{\infty}$ barrier to eliminate the factor of $\sqrt{n}$. The current best width-independent algorithm for this problem runs in time $O(N/\varepsilon^2)$ [Young-arXiv-14] and hence has worse running time dependence on $\varepsilon$. Many real-life instances of mixed packing-covering problems exhibit small width, and for such cases our algorithm can report higher precision results when compared to width-independent algorithms. As a special case of our result, we report a $1+\varepsilon$ approximation algorithm for the densest subgraph problem which runs in time $O(md/\varepsilon)$, where $m$ is the number of edges in the graph and $d$ is the maximum graph degree.
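To fix notation, a mixed packing-covering instance asks for a nonnegative $x$ satisfying packing constraints $Px \le p$ and covering constraints $Cx \ge c$. The toy feasibility check below uses an off-the-shelf LP solver rather than the width-dependent first-order method from the talk; the instance data are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Packing constraints P x <= p and covering constraints C x >= cov,
# over x >= 0.
P = np.array([[1.0, 2.0],
              [3.0, 1.0]])
p = np.array([4.0, 6.0])
C = np.array([[1.0, 1.0]])
cov = np.array([2.0])

# linprog only takes A_ub x <= b_ub, so flip the covering constraints.
res = linprog(c=np.zeros(2),
              A_ub=np.vstack([P, -C]),
              b_ub=np.concatenate([p, -cov]),
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.status, res.x)  # status 0 means a feasible point was found
```

The width $w$ of such an instance is the maximum number of nonzeros in a single row of $P$ or $C$ (here $w = 2$), which is the quantity the talk's $O(Nw/\varepsilon)$ bound depends on.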

Asymptotic normality of the $r\to p$ norm for random matrices with non-negative entries

Series
ACO Student Seminar
Time
Friday, November 1, 2019 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Debankur Mukherjee, ISyE, Georgia Tech

For an $n\times n$ matrix $A_n$, the $r\to p$ operator norm is defined as $\|A_n\|_{r \to p}= \sup_{\|x\|_r\leq 1 } \|A_n x\|_p$ for $r,p\geq 1$. The $r\to p$ operator norm puts a huge number of important quantities of interest in diverse disciplines under a single unified framework. The application of this norm spans a broad spectrum of areas including data-dimensionality reduction in machine learning, finding oblivious routing schemes in transportation network, and matrix condition number estimation.


In this talk, we will consider the $r\to p$ norm of a class of symmetric random matrices with nonnegative entries, which includes the adjacency matrices of Erdős–Rényi random graphs and matrices with sub-Gaussian entries. For $1< p\leq r< \infty$, we establish the asymptotic normality of the appropriately centered and scaled $\|A_n\|_{r \to p}$ as $n\to\infty$. The special case $r=p=2$, which corresponds to the largest singular value, was proved in a seminal paper by Füredi and Komlós (1981). Of independent interest, we further obtain a sharp $\ell_\infty$-approximation for the maximizer vector. The results also hold for sparse matrices, and the $\ell_\infty$-approximation for the maximizer vector extends to a broad class of deterministic matrix sequences with certain asymptotic 'expansion' properties.


This is based on a joint work with Souvik Dhara (MIT) and Kavita Ramanan (Brown U.).
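For nonnegative matrices in the regime $1 < p \le r$ considered here, the $r\to p$ norm can be computed by a classical nonlinear power method: a fixed-point iteration on the rearranged KKT condition of the maximization. The sketch below is illustrative (iteration count and starting vector are arbitrary choices) and assumes the iteration converges for the given input.

```python
import numpy as np

def rp_norm(A, r, p, iters=500, tol=1e-12):
    """Nonlinear power method for ||A||_{r->p} = sup_{||x||_r <= 1}
    ||Ax||_p on a matrix with nonnegative entries, for 1 < p <= r.
    The KKT condition of the maximization rearranges into the update
        x  <-  normalize_r( (A^T (Ax)^{p-1})^{1/(r-1)} ),
    which for r = p = 2 is ordinary power iteration on A^T A."""
    n = A.shape[1]
    x = np.full(n, n ** (-1.0 / r))     # positive starting vector
    for _ in range(iters):
        y = A @ x
        w = A.T @ y ** (p - 1)
        x_new = w ** (1.0 / (r - 1))
        x_new /= np.linalg.norm(x_new, r)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return np.linalg.norm(A @ x, p), x
```

At $r=p=2$ the returned value is the largest singular value, the Füredi-Komlós case mentioned above; the returned vector plays the role of the maximizer vector whose $\ell_\infty$-approximation the talk establishes.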

High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm

Series
ACO Student Seminar
Time
Friday, October 25, 2019 - 13:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Wenlong Mou, EECS, UC Berkeley

We propose a Markov chain Monte Carlo (MCMC) algorithm based on third-order Langevin dynamics for sampling from distributions with log-concave and smooth densities. The higher-order dynamics allow for more flexible discretization schemes, and we develop a specific method that combines splitting with more accurate integration. For a broad class of $d$-dimensional distributions arising from generalized linear models, we prove that the resulting third-order algorithm produces samples from a distribution that is at most $\varepsilon$ in Wasserstein distance from the target distribution in $O(d^{1/3}/\varepsilon^{2/3})$ steps. This result requires only Lipschitz conditions on the gradient. For general strongly convex potentials with $\alpha$-th order smoothness, we prove that the mixing time scales as $O(d^{1/3}/\varepsilon^{2/3} + d^{1/2}/\varepsilon^{1/(\alpha-1)})$. Our high-order Langevin diffusion reduces the problem of log-concave sampling to numerical integration along a fixed deterministic path, which opens the door to further improvements in high-dimensional MCMC problems. Joint work with Yi-An Ma, Martin J. Wainwright, Peter L. Bartlett, and Michael I. Jordan.
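For contrast with the third-order scheme in the talk, here is the first-order baseline, the unadjusted Langevin algorithm (ULA); the third-order dynamics add momentum-like auxiliary variables and a splitting integrator that this sketch omits, and the step size below is an arbitrary illustrative choice.

```python
import numpy as np

def ula_sample(grad_f, x0, step, n_steps, rng=None):
    """Unadjusted Langevin algorithm: the Euler discretization of the
    (first-order) Langevin diffusion dX = -grad f(X) dt + sqrt(2) dB,
        x_{t+1} = x_t - step * grad_f(x_t) + sqrt(2 * step) * N(0, I),
    which approximately samples from the density proportional to
    exp(-f(x)) when f is smooth and (strongly) convex."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Target: standard Gaussian, f(x) = ||x||^2 / 2, so grad_f(x) = x.
samples = np.array([ula_sample(lambda x: x, np.zeros(2), 0.1, 200,
                               rng=np.random.default_rng(i))
                    for i in range(500)])
print(samples.mean(axis=0), samples.var(axis=0))
```

ULA has a step-size-dependent bias at stationarity; the point of the talk is that higher-order dynamics admit discretizations whose dimension dependence improves from the $O(d/\varepsilon^2)$-type rates of first-order schemes to $O(d^{1/3}/\varepsilon^{2/3})$.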
