Seminars and Colloquia by Series

Hardness and Approximations of Submodular Minimum Linear Ordering Problems

Series
ACO Student Seminar
Time
Friday, November 5, 2021 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 314
Speaker
Michael Wigal, Georgia Tech Math

The minimum linear ordering problem (MLOP) asks to minimize the aggregated cost of a set function $f$ with respect to some ordering $\sigma$ of the base set. That is, MLOP asks to find a permutation $\sigma$ that minimizes the sum $\sum_{i = 1}^{|E|} f(\{e \in E : \sigma(e) \le i\})$. Many instances of MLOP have been studied in the literature, for example, minimum linear arrangement (MLA) and minimum sum vertex cover (MSVC). We will cover how graphic matroid MLOP, i.e., where $f$ is taken to be the rank function of a graphic matroid, is NP-hard. This is achieved through a series of reductions beginning with MSVC. During these reductions, we will introduce a new problem, minimum latency vertex cover (MLVC), which we will also show has a $4/3$ approximation. Finally, using the theory of principal partitions, we will show that MLOP with a monotone submodular function $f : E \to \mathbb{R}^+$ has a $2 - (1 + \ell_f)/(1 + |E|)$ approximation, where $\ell_f = f(E)/\max_{x \in E} f(\{x\})$. As a corollary, we obtain a $2 - (1 + r(E))/(1 + |E|)$ approximation for matroid MLOP, where $r$ is the rank function of the matroid. We will end with some interesting open questions.
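To make the objective concrete, here is a minimal Python sketch (illustrative only, not from the talk; all names are mine) that evaluates the MLOP cost when $f$ is the graphic matroid rank function, and brute-forces the optimal ordering of a toy graph.

```python
import itertools

def graphic_matroid_rank(edges, n_vertices):
    """Rank of an edge set in the graphic matroid: the number of edges
    in a spanning forest, computed with a simple union-find."""
    parent = list(range(n_vertices))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    rank = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            rank += 1
    return rank

def mlop_cost(ordering, n_vertices):
    """Aggregated cost: sum over i of f({first i edges in the ordering})."""
    return sum(graphic_matroid_rank(ordering[:i], n_vertices)
               for i in range(1, len(ordering) + 1))

# Brute-force the optimum on a tiny graph (a triangle plus a pendant edge).
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
best = min(mlop_cost(list(p), 4) for p in itertools.permutations(edges))
print("optimal graphic matroid MLOP value:", best)
```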

Joint work with Majid Farhadi, Swati Gupta, Shengding Sun, and Prasad Tetali.

Learning traffic correlations in multi-class queueing systems by sampling workloads

Series
ACO Student Seminar
Time
Friday, October 22, 2021 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 314
Speaker
Martin Zubeldia, Georgia Tech ISyE

We consider a service system consisting of parallel single server queues of infinite capacity. Work of different classes arrives as correlated Gaussian processes with known drifts but unknown covariances, and it is deterministically routed to the different queues according to some routing matrix. In this setting we show that, under some conditions, the covariance matrix of the arrival processes can be directly recovered from the large deviations behavior of the queue lengths. Also, we show that in some cases this covariance matrix cannot be directly recovered this way, as there is an inherent loss of information produced by the dynamics of the queues. Finally, we show how this can be used to quickly learn an optimal routing matrix with respect to some utility function.
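For intuition about the model (a hedged sketch of mine, not the speaker's code), the following simulates the workloads of two parallel queues fed by two correlated Gaussian work classes through a fixed routing matrix, via a discrete-time Lindley recursion; the drifts, covariance, and routing matrix are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two work classes arriving as correlated Gaussians (drifts known to the
# observer, covariance unknown), deterministically routed to two queues.
drift = np.array([0.4, 0.4])
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])   # ground truth, to be learned from workloads
R = np.array([[1.0, 0.0],      # routing matrix: fraction of each class's
              [0.5, 0.5]])     # work sent to each queue

T = 100_000
arrivals = rng.multivariate_normal(drift, cov, size=T)  # per-slot class work
service = 1.0                                           # capacity per queue/slot

# Discrete-time Lindley recursion for the workload of each queue.
W = np.zeros((T, 2))
for t in range(1, T):
    W[t] = np.maximum(W[t - 1] + arrivals[t] @ R - service, 0.0)

# An observer sampling only W must infer `cov`; the talk characterizes when
# the large-deviations behavior of the workloads makes this possible.
```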

Approximating Sparse Semidefinite Programs

Series
ACO Student Seminar
Time
Friday, October 1, 2021 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 314
Speaker
Kevin Shu, Georgia Tech Math

Please Note: Stream online at https://bluejeans.com/520769740/

Semidefinite programming is a useful type of convex optimization, which has applications in both graph theory and industrial engineering. Many semidefinite programs exhibit a kind of structured sparsity, which we can hope to exploit to reduce the memory requirements of solving such semidefinite programs. We will discuss an interesting relaxation of such sparse semidefinite programs, and a measurement of how well this relaxation approximates a true semidefinite program. We'll also discuss how these approximations relate to graph theory and the theory of sum-of-squares and nonnegative polynomials. This talk will not assume any background on semidefinite programming.

An atomic matrix norm regularizer for sparse phase retrieval and PCA

Series
ACO Student Seminar
Time
Friday, September 24, 2021 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 314
Speaker
Andrew McRae, Georgia Tech ECE

Please Note: Stream online at https://bluejeans.com/520769740/

We present a mixed atomic matrix norm that, when used as regularization in optimization problems, promotes low-rank matrices with sparse factors. We show that in convex lifted formulations of sparse phase retrieval and sparse principal component analysis (PCA), this norm provides near-optimal sample complexity and error rate guarantees. Since statistically optimal sparse PCA is widely believed to be NP-hard, this leaves open questions about how practical it is to compute and optimize this atomic norm. Motivated by convex duality analysis, we present a heuristic algorithm in the case of sparse phase retrieval and show that it empirically matches existing state-of-the-art algorithms.
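The mixed atomic norm here is the authors' construction; as a rough illustration of the lifted-formulation idea it refines, the sketch below solves the classic $\ell_1$-penalized semidefinite relaxation of sparse PCA (a different, standard regularizer) on a planted instance. The penalty weight and planted model are arbitrary choices of mine.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

# Planted sparse spike: Sigma = v v^T + noise, with v supported on 3 coords.
n, support = 10, [0, 1, 2]
v = np.zeros(n); v[support] = 1 / np.sqrt(3)
Sigma = np.outer(v, v) + 0.1 * np.eye(n)

# Lifted convex relaxation: maximize <Sigma, X> with an entrywise l1 penalty
# over the spectraplex {X PSD, trace(X) = 1}.
lam = 0.05
X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(
    cp.Maximize(cp.trace(Sigma @ X) - lam * cp.sum(cp.abs(X))),
    [cp.trace(X) == 1],
)
prob.solve()

# The leading eigenvector of X should concentrate on the planted support.
w, V = np.linalg.eigh(X.value)
print("recovered leading direction:", np.round(V[:, -1], 2))
```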

Stochastic Methods for Matrix Games and Their Applications

Series
ACO Student Seminar
Time
Friday, September 17, 2021 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 314
Speaker
Yujia Jin, Stanford University

Please Note: Stream online at https://bluejeans.com/520769740/

In this talk, I will introduce some recent advances in designing stochastic primal-dual methods for bilinear saddle point problems of the form $\min_x \max_y y^\top A x$ under different geometries for $x$ and $y$. These problems are prominent in economics, linear programming, machine learning, and reinforcement learning. Specifically, our methods apply to Markov decision processes (MDPs), linear regression, and computational geometry tasks.


In our work, we propose a variance-reduced framework for solving convex-concave saddle-point problems, given a gradient estimator satisfying certain local properties. Further, we show how to design such gradient estimators for bilinear objectives under different geometries, including simplex ($\ell_1$), Euclidean ball ($\ell_2$), and box ($\ell_\infty$) domains. For a matrix $A$ of dimension $n$ with $\mathrm{nnz}$ nonzero entries and target accuracy $\epsilon$, our proposed variance-reduced primal-dual methods obtain a runtime complexity of $\mathrm{nnz} + \sqrt{\mathrm{nnz} \cdot n}/\epsilon$, improving over exact gradient methods and fully stochastic methods in the high-accuracy and/or sparse regime (when $\epsilon < n/\mathrm{nnz}$). For finite-sum saddle-point problems $\sum_{k=1}^K f_k(x,y)$ where each $f_k$ is $1$-smooth, we show how to obtain an $\epsilon$-optimal saddle point within a gradient query complexity of $K + \sqrt{K}/\epsilon$.
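As a point of reference for these geometries (a sketch of the fully stochastic baseline that the variance-reduced methods improve upon, not the paper's algorithm), the following runs stochastic mirror descent on a simplex-simplex ($\ell_1$-$\ell_1$) matrix game using the standard one-sample row/column gradient estimators; the step size and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def sampled_gradients(A, x, y):
    """Unbiased one-sample estimates of the gradients (A^T y, A x) of
    min_x max_y y^T A x: sample a row index from y and a column from x."""
    i = rng.choice(len(y), p=y)   # E[A[i, :]] = A^T y
    j = rng.choice(len(x), p=x)   # E[A[:, j]] = A x
    return A[i, :], A[:, j]

def mirror_descent_matrix_game(A, steps=5000, eta=0.05):
    m, n = A.shape
    x, y = np.full(n, 1 / n), np.full(m, 1 / m)
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(steps):
        gx, gy = sampled_gradients(A, x, y)
        x = x * np.exp(-eta * gx)   # entropic (multiplicative-weights) step
        x /= x.sum()
        y = y * np.exp(eta * gy)    # ascent step for the max player
        y /= y.sum()
        x_avg += x; y_avg += y
    return x_avg / steps, y_avg / steps

A = rng.standard_normal((50, 40))
x_bar, y_bar = mirror_descent_matrix_game(A)
print("duality gap estimate:", (A @ x_bar).max() - (y_bar @ A).min())
```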


Moreover, we provide a class of coordinate methods for solving bilinear saddle-point problems. These algorithms use either $O(1)$-sparse gradient estimators, to obtain improved sublinear complexity over fully stochastic methods, or their variance-reduced counterparts, for improved nearly-linear complexity, on sparse and numerically sparse instances $A$.


This talk is based on several joint works with Yair Carmon, Aaron Sidford and Kevin Tian, with links to the papers below:

Variance Reduction for Matrix Games

Coordinate Methods for Matrix Games

Efficiently Solving MDPs using Stochastic Mirror Descent


Bio of the speaker: Yujia Jin is a fourth-year Ph.D. student in the Department of Management Science and Engineering at Stanford University, working with Aaron Sidford. She is interested in designing efficient continuous optimization methods, which often run in nearly linear or sublinear time and have broad applications in machine learning, data analysis, reinforcement learning, and graph problems.

Prague dimension of random graphs

Series
ACO Student Seminar
Time
Friday, November 20, 2020 - 13:00 for 1 hour (actually 50 minutes)
Location
ONLINE
Speaker
Kalen Patton, Math, Georgia Tech

Various notions of dimension are important throughout mathematics, and for graphs the so-called Prague dimension was introduced by Nešetřil, Pultr and Rödl in the 1970s. Proving a conjecture of Füredi and Kantor, we show that the Prague dimension of the binomial random graph is typically of order $n/\log n$ for constant edge-probabilities. The main new proof ingredient is a Pippenger-Spencer type edge-coloring result for random hypergraphs with large uniformities, i.e., edges of size $O(\log n)$.

Based on joint work with He Guo and Lutz Warnke.

Approximation Algorithms for Mixed Integer Non-Linear Optimization Problems

Series
ACO Student Seminar
Time
Friday, November 13, 2020 - 13:00 for 1 hour (actually 50 minutes)
Location
ONLINE
Speaker
Guanyi Wang, ISyE, Georgia Tech

For computationally intensive mixed integer non-linear optimization problems, a major challenge is to verify/guarantee the quality of any feasible solution, under mild assumptions, in a tractable fashion. In this talk, we focus on tackling this challenge by constructing tight relaxations and designing approximation algorithms for two different mixed integer non-linear optimization problems.

In the first part, we focus on the (row) sparse principal component analysis (rsPCA) problem. rsPCA is the problem of finding the top $r$ leading principal components of a covariance matrix such that all of these principal components are linear combinations of a subset of $k$ variables. The rsPCA problem is a widely used dimensionality reduction tool with an additional sparsity constraint to enhance interpretability. We propose (a) a convex integer programming relaxation of rsPCA that gives upper (dual) bounds for rsPCA; and (b) a new local search algorithm for finding primal feasible solutions for rsPCA. We also show that, in the worst case, the dual bounds provided by the convex IP are within an affine function of the global optimal value. We demonstrate our techniques applied to large-scale covariance matrices.
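As a hedged illustration of what such a primal local search can look like (a simplified $r = 1$ stand-in of mine, not the paper's algorithm), the sketch below greedily swaps support elements to maximize the top eigenvalue of a $k \times k$ principal submatrix.

```python
import numpy as np

def local_search_spca(Sigma, k, max_sweeps=100):
    """Greedy swap local search for a 1-sparse-PC heuristic: find a support
    S with |S| = k maximizing the top eigenvalue of Sigma[S, S]."""
    n = Sigma.shape[0]
    S = list(np.argsort(-np.diag(Sigma))[:k])  # start from largest variances
    def val(T):
        return np.linalg.eigvalsh(Sigma[np.ix_(T, T)])[-1]
    best, improved = val(S), True
    while improved and max_sweeps > 0:
        max_sweeps -= 1
        improved = False
        for out in list(S):
            for into in range(n):
                if into in S:
                    continue
                cand = [v for v in S if v != out] + [into]
                if val(cand) > best + 1e-12:
                    S, best, improved = cand, val(cand), True
                    break
            if improved:
                break  # restart the sweep after each improving swap
    return sorted(S), best

Sigma = np.cov(np.random.default_rng(3).standard_normal((200, 8)),
               rowvar=False)
print(local_search_spca(Sigma, k=3))
```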

In the second part, we focus on improving the execution speed of compute-intensive numerical code. Such code, especially of the variety encountered in deep neural network inference and training, is often written as nested for-loops. One of the main bottlenecks that significantly influences the execution speed of nested for-loops is memory latency. Iteration space tiling is a common memory management technique used to deal with memory latency. We study the problem of automatically optimizing the implementation of these nested loops by formulating the iteration space tiling problem as an integer geometric programming (IGP) problem. We show how to design an efficient approximation algorithm for this problem and how to use the so-called "non-uniform tiling" technique to improve the execution speed.
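As a minimal illustration of iteration space tiling (my example; the talk's contribution is choosing the tile sizes via the IGP formulation, not this loop itself), the following tiles the classic matrix-multiply triple loop. The uniform tile size T stands in for the parameters their formulation would optimize, possibly non-uniformly per loop level.

```python
def matmul_tiled(A, B, n, T=32):
    """Multiply two n x n lists-of-lists using T x T x T iteration-space
    tiles. Tiling keeps each block's working set small enough to stay in
    cache, which mitigates the memory-latency bottleneck described above."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, T):
        for kk in range(0, n, T):
            for jj in range(0, n, T):
                for i in range(ii, min(ii + T, n)):
                    for k in range(kk, min(kk + T, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + T, n)):
                            C[i][j] += a * B[k][j]
    return C
```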

The first part of the talk is joint work with Santanu S. Dey, Rahul Mazumder, and Marco Molinaro; the second part is joint work with Ofer Dekel.

$k$-planar crossing numbers and the midrange crossing constant

Series
ACO Student Seminar
Time
Friday, October 30, 2020 - 13:00 for 1 hour (actually 50 minutes)
Location
Online
Speaker
Dr. Zhiyu Wang, Math, Georgia Tech

The crossing number of a graph is the minimum number of crossings over all drawings of the graph in the plane. Let $\kappa(n, m)$ be the minimum crossing number of graphs with $n$ vertices and (at least) $m$ edges. Erdős and Guy conjectured, and Pach, Spencer and Tóth proved, that for any $m = m(n)$ satisfying $n \ll m \ll n^2$, the quantity $\lim_{n \to \infty} \frac{\kappa(n,m) n^2}{m^3}$ exists and is positive. The $k$-planar crossing number of a graph is the minimum crossing number obtained when we partition the edges of the graph into $k$ subgraphs and draw them in $k$ planes. Using designs and a probabilistic algorithm, we show that the guaranteed factor of improvement $\alpha_k$ between the $k$-planar and regular crossing number is $\frac{1}{k^2}(1 + o(1))$, while if we restrict our attention to biplanar graphs, this constant is exactly $\beta_k = \frac{1}{k^2}$. The lower bound proofs require the existence of a midrange crossing constant. Motivated by this, we show that the midrange crossing constant exists for all graph classes (including bipartite graphs) that satisfy certain mild conditions. The regular midrange crossing constant was shown to be at most $\frac{8}{9\pi^2}$; we present a probabilistic construction that also shows this bound.

The Sunflower Problem

Series
ACO Student Seminar
Time
Friday, October 23, 2020 - 13:00 for 1 hour (actually 50 minutes)
Location
Online
Speaker
Tolson BellMath, Georgia Tech

A sunflower with $p$ petals consists of $p$ sets whose pairwise intersections are all the same set. The goal of the sunflower problem is to find the smallest $r = r(p,k)$ such that every family of at least $r^k$ $k$-element sets must contain a sunflower with $p$ petals. Major breakthroughs within the last year by Alweiss-Lovett-Wu-Zhang and others show that $r = O(p \log(pk))$ suffices. In this talk, after reviewing the history and significance of the Sunflower Problem, I will present our improvement to $r = O(p \log k)$, which we obtained during the 2020 REU at Georgia Tech. As time permits, I will elaborate on key lemmas and techniques used in recent improvements.
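To make the definition concrete (an illustrative brute force, exponential in general and unrelated to the proof techniques), the following searches a small family for a sunflower with $p$ petals.

```python
from itertools import combinations

def find_sunflower(family, p):
    """Brute force: return p sets whose pairwise intersections all equal
    a common core, or None. Only sensible for small families."""
    for petals in combinations(family, p):
        core = set.intersection(*petals)
        if all(a & b == core for a, b in combinations(petals, 2)):
            return petals, core
    return None

family = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}]
print(find_sunflower(family, 3))  # three petals with common core {1}
```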

Based on joint work with Suchakree Chueluecha (Lehigh University) and Lutz Warnke (Georgia Tech), see https://arxiv.org/abs/2009.09327

Hyperbolic Relaxations of Locally Positive Semidefinite Matrices

Series
ACO Student Seminar
Time
Friday, October 9, 2020 - 13:00 for 1 hour (actually 50 minutes)
Location
https://bluejeans.com/264244877/0166
Speaker
Kevin Shu, Math, Georgia Tech

Semidefinite programming is a powerful optimization tool, which involves optimizing linear functions on a slice of the positive semidefinite matrices. Locally PSD matrices are a natural relaxation of the PSD matrices which can be useful in reducing the space required for semidefinite optimization. We use the theory of hyperbolic polynomials to give precise quantitative bounds on the quality of the approximation resulting from optimizing over the locally PSD cone instead of the PSD cone.
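For a concrete feel of the relaxation being quantified (my toy example, not from the talk), the following verifies that a matrix can have every $2 \times 2$ principal submatrix PSD while failing to be PSD.

```python
import numpy as np
from itertools import combinations

def locally_psd(M, k):
    """Check that every k x k principal submatrix of M is PSD."""
    n = M.shape[0]
    return all(
        np.linalg.eigvalsh(M[np.ix_(S, S)])[0] >= -1e-9
        for S in map(list, combinations(range(n), k))
    )

# All 2x2 principal submatrices are PSD, but the full matrix is not.
M = np.array([[ 1.0, 1.0, -1.0],
              [ 1.0, 1.0,  1.0],
              [-1.0, 1.0,  1.0]])
print(locally_psd(M, 2))              # True
print(np.linalg.eigvalsh(M)[0] >= 0)  # False: M has a negative eigenvalue
```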
