Seminars and Colloquia by Series

Beyond Moments: Robustly Learning Affine Transformations with Asymptotically Optimal Error

Series
ACO Student Seminar
Time
Friday, April 21, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
He Jia, Georgia Tech CS

We present a polynomial-time algorithm for robustly learning an unknown affine transformation of the standard hypercube from samples, an important and well-studied setting for independent component analysis (ICA). Total variation distance is the information-theoretically strongest possible notion of distance in our setting and our recovery guarantees in this distance are optimal up to the absolute constant factor multiplying the fraction of corruption. Our key innovation is a new approach to ICA (even to outlier-free ICA) that circumvents the difficulties in the classical method of moments and instead relies on a new geometric certificate of correctness of an affine transformation. Our algorithm is based on a new method that iteratively improves an estimate of the unknown affine transformation whenever the requirements of the certificate are not met.
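
For orientation (a hedged toy of ours, not the speaker's algorithm): samples uniform on the hypercube [-1, 1]^d have mean 0 and covariance I/3, so second moments recover the affine map only up to an orthogonal factor. In Python with numpy:

import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200_000

# Ground truth: x = A s + b with s uniform on the hypercube [-1, 1]^d.
A = rng.normal(size=(d, d))
b = rng.normal(size=d)
S = rng.uniform(-1.0, 1.0, size=(n, d))
X = S @ A.T + b

# Second moments identify A only up to rotation: Cov(x) = A A^T / 3.
b_hat = X.mean(axis=0)
L = np.linalg.cholesky(3.0 * np.cov(X, rowvar=False))  # L L^T = A A^T, so L = A Q with Q orthogonal

# Check: the re-whitened samples again have covariance close to I/3.
S_hat = (X - b_hat) @ np.linalg.inv(L).T
print(np.round(3.0 * np.cov(S_hat, rowvar=False), 2))

Pinning down the remaining orthogonal factor is where higher moments, and their fragility to corruption, classically enter; the talk's geometric certificate is designed to sidestep that step.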

Alternating minimization for generalized rank one matrix sensing: Sharp predictions from a random initialization

Series
ACO Student Seminar
Time
Friday, April 7, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Mengqi Lou, Georgia Tech ISyE

We consider the problem of estimating the factors of a rank-1 matrix from i.i.d. Gaussian rank-1 measurements that are nonlinearly transformed and corrupted by noise. Considering two prototypical choices for the nonlinearity, we study the convergence properties of a natural alternating update rule for this nonconvex optimization problem starting from a random initialization. We show sharp convergence guarantees for a sample-split version of the algorithm by deriving a deterministic recursion that is accurate even in high-dimensional problems. Our sharp, non-asymptotic analysis also exposes several other fine-grained properties of this problem, including how the nonlinearity and noise level affect convergence behavior.


On a technical level, our results are enabled by showing that the empirical error recursion can be predicted by our deterministic sequence within fluctuations of order n^{-1/2} when each iteration is run with n observations. Our technique leverages leave-one-out tools originating in the literature on high-dimensional M-estimation and provides an avenue for sharply analyzing higher-order iterative algorithms from a random initialization in other high-dimensional optimization problems with random data.
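
As a hedged illustration of the alternating update rule with sample splitting, here is a minimal numpy sketch. We take the identity nonlinearity (plain bilinear rank-one sensing, y_i = (a_i^T u*)(b_i^T v*) + noise) purely so that each half-step is a least-squares solve; the talk's prototypical nonlinearities and its deterministic recursion are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
d, n_per_iter, iters, sigma = 20, 2000, 8, 0.1

u_star = rng.normal(size=d); u_star /= np.linalg.norm(u_star)
v_star = rng.normal(size=d); v_star /= np.linalg.norm(v_star)

u = rng.normal(size=d)  # random initialization
for t in range(iters):
    # Sample splitting: fresh Gaussian rank-one measurements every iteration.
    Xa = rng.normal(size=(n_per_iter, d))
    Xb = rng.normal(size=(n_per_iter, d))
    y = (Xa @ u_star) * (Xb @ v_star) + sigma * rng.normal(size=n_per_iter)

    # With u fixed, y is linear in v: least squares on features (a_i^T u) * b_i.
    v, *_ = np.linalg.lstsq((Xa @ u)[:, None] * Xb, y, rcond=None)
    # Symmetric update for u with v fixed.
    u, *_ = np.linalg.lstsq((Xb @ v)[:, None] * Xa, y, rcond=None)

    print(f"iter {t}: alignment error {1 - abs(u @ u_star) / np.linalg.norm(u):.4f}")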

Which L_p norm is the fairest? Approximations for fair facility location across all "p"

Series
ACO Student Seminar
Time
Friday, March 31, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Jai Moondra, Georgia Tech CS

The classic facility location problem seeks to open a set of facilities to minimize the cost of opening the chosen facilities and the total cost of connecting all the clients to their nearby open facilities. Such an objective may induce an unequal cost over certain socioeconomic groups of clients (i.e., the total distance traveled by clients in such a group). This is important when planning the location of socially relevant facilities such as emergency rooms and grocery stores. In this work, we consider a fair version of the problem by minimizing the L_p-norm of the total distances traveled by clients across r different socioeconomic groups, together with the cost of opening facilities, to penalize high access costs to open facilities across the groups. This generalizes classic facility location (p = 1) and the minimization of the maximum total distance traveled by clients in any group (p = infinity). However, it is often unclear how to select a specific "p" to model the cost of unfairness. To get around this, we show the existence of a small portfolio of at most (log_2 r + 1) solutions for r (disjoint) client groups, such that for any L_p-norm, at least one solution in the portfolio is a constant-factor approximation. We also show that such a dependence on r is necessary, by exhibiting instances where any such portfolio requires at least on the order of sqrt(log_2 r) solutions. Moreover, we give efficient algorithms to find such a portfolio of solutions.

Additionally, we introduce the notion of refinement across the solutions in the portfolio. This property ensures that once a facility is closed in one of the solutions, all clients assigned to it are reassigned to a single facility rather than split across open facilities. We give a poly(exp(sqrt(r)))-approximation for refinement in general metrics and an O(log r)-approximation for line and tree metrics. This is joint work with Swati Gupta and Mohit Singh.
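
As a toy illustration of the portfolio guarantee (instance, distances, and candidate solutions all hypothetical), the following numpy sketch evaluates each candidate under several L_p objectives and reads off the portfolio's best value for each p.

import numpy as np

# Toy data: for each candidate solution, the total travel distance of each
# of the r client groups, plus that solution's facility-opening cost.
rng = np.random.default_rng(2)
r, num_solutions = 8, 4
group_dist = rng.uniform(1, 10, size=(num_solutions, r))
open_cost = rng.uniform(5, 15, size=num_solutions)

def objective(sol, p):
    """Facility-opening cost plus the L_p-norm of per-group travel distances."""
    if np.isinf(p):
        return open_cost[sol] + group_dist[sol].max()
    return open_cost[sol] + (group_dist[sol] ** p).sum() ** (1 / p)

# For each p, at least one solution in a valid portfolio should be within
# a constant factor of the best for that p (the talk's guarantee).
for p in [1, 2, 4, np.inf]:
    print(f"p = {p}: portfolio best = {min(objective(s, p) for s in range(num_solutions)):.2f}")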

Online Covering: Prophets, Secretaries, and Samples

Series
ACO Student Seminar
Time
Friday, February 24, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Gregory Kehne, Harvard Computer Science

We give a polynomial-time algorithm for online covering IPs with a competitive ratio of O(\log mn) when the constraints are revealed in random order, essentially matching the best possible offline bound of \Omega(\log n) and circumventing the \Omega(\log m \log n) lower bound known in adversarial order. We then leverage this O(\log mn)-competitive algorithm to solve this problem in the prophet setting, where constraints are sampled from a sequence of known distributions. Our reduction in fact relies only on samples from these distributions, in a manner evocative of prior work on single-sample prophet inequalities for online packing problems. We present sample guarantees in the prophet setting, as well as in the setting where random samples from an adversarial instance are revealed at the outset.

This talk is based on joint work with Anupam Gupta and Roie Levin, part of which appeared at FOCS 2021. 
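
For background on the adversarial-order baseline that the random-order result circumvents, here is a minimal sketch of a Buchbinder-Naor-style multiplicative update for fractional online covering; the simplifications (unweighted constraints, no rounding) are ours.

import numpy as np

def online_cover(costs, constraints):
    """Fractional online covering, min c^T x s.t. sum_{j in S} x_j >= 1 for each
    arriving constraint S: while the new constraint is unsatisfied, raise its
    variables multiplicatively, plus a small additive boost shared across S."""
    x = np.zeros(len(costs))
    for S in constraints:
        while x[S].sum() < 1.0:
            x[S] = x[S] * (1.0 + 1.0 / costs[S]) + 1.0 / (len(S) * costs[S])
    return x

costs = np.array([1.0, 2.0, 1.0, 3.0])
constraints = [np.array([0, 1]), np.array([1, 2, 3]), np.array([0, 3])]
x = online_cover(costs, constraints)
print(np.round(x, 3), "cost:", round(costs @ x, 3))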

Maximizing minimum eigenvalue in constant dimension

Series
ACO Student Seminar
Time
Friday, February 17, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Adam Brown, Georgia Tech Math

In the minimum eigenvalue problem, we are given a collection of rank-1 symmetric matrices, and the goal is to find a subset whose sum has large minimum eigenvalue, subject to some combinatorial constraints. The constraints on which subsets we can select could be cardinality, partition, or more general matroid base constraints. Using pipage rounding and a matrix concentration inequality, we will show a randomized algorithm which achieves a (1 - epsilon)-approximation for the minimum eigenvalue problem when the matrices have constant size, subject to any matroid constraint.

The bulk of the talk will be background on “pipage rounding, pessimistic estimators and matrix concentration” adapted from the paper with that title by Nicholas J. A. Harvey and Neil Olver. The application to the minimum eigenvalue problem is joint work with Aditi Laddha and Mohit Singh.
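
The talk's algorithm rounds a convex relaxation via pipage rounding; as a much simpler baseline that at least makes the objective concrete, here is a greedy heuristic for the cardinality-constrained case on a random toy instance (no approximation guarantee claimed).

import numpy as np

rng = np.random.default_rng(3)
m, d, k = 30, 4, 10  # m rank-1 matrices v v^T in dimension d, select k of them

V = rng.normal(size=(m, d))

def min_eig(idx):
    """Minimum eigenvalue of the sum of the selected rank-1 matrices."""
    M = sum(np.outer(V[i], V[i]) for i in idx) if idx else np.zeros((d, d))
    return np.linalg.eigvalsh(M)[0]

chosen = []
for _ in range(k):
    # Greedily add the matrix that most increases the minimum eigenvalue.
    best = max(set(range(m)) - set(chosen), key=lambda i: min_eig(chosen + [i]))
    chosen.append(best)

print("chosen:", chosen, "lambda_min:", round(min_eig(chosen), 4))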

Computation with sequences of neural assemblies

Series
ACO Student Seminar
Time
Friday, February 10, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Max Dabagia, Georgia Tech CS

Assemblies are subsets of neurons whose coordinated excitation is hypothesized to represent the subject's thinking of an object, idea, episode, or word. Consequently, they provide a promising basis for a theory of how neurons and synapses give rise to higher-level cognitive phenomena. The existence (and pivotal role) of assemblies was first proposed by Hebb, and has since been experimentally confirmed, as well as rigorously proven to emerge in the model of computation in the brain recently developed by Papadimitriou & Vempala. In light of contemporary studies which have documented the creation and activation of sequences of assemblies of neurons following training on tasks with sequential decisions, we study here the brain's mechanisms for working with sequences in the assemblies model of Papadimitriou & Vempala.  We show that (1) repeated presentation of a sequence of stimuli leads to the creation of a sequence of corresponding assemblies -- upon future presentation of any contiguous sub-sequence of stimuli, the corresponding assemblies are activated and continue until the end of the sequence; (2) when the stimulus sequence is projected to two brain areas in a "scaffold", both memorization and recall are more efficient, giving rigorous backing to the cognitive phenomenon that memorization and recall are easier with scaffolded memories; and (3) existing assemblies can be quite easily linked to simulate an arbitrary finite state machine (FSM), thereby capturing the brain's ability to memorize algorithms. This also makes the assemblies model capable of arbitrary computation simply in response to presentation of a suitable stimulus sequence, without explicit control commands. These findings provide a rigorous, theoretical explanation at the neuronal level of complex phenomena such as sequence memorization in rats and algorithm learning in humans, as well as a concrete hypothesis as to how the brain's remarkable computing and learning abilities could be realized.


Joint work with Christos Papadimitriou and Santosh Vempala.
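
A minimal sketch of the model's project-and-cap primitive (random connectivity, a winner-take-all cap of size k, Hebbian weight updates), with the stimulus modeled as external input to a fixed set of neurons; all parameters are toy choices of ours, and the multi-area "scaffold" construction is not reproduced.

import numpy as np

rng = np.random.default_rng(4)
n, k, p, beta, rounds = 1000, 50, 0.05, 0.1, 10

W = (rng.random((n, n)) < p).astype(float)    # random recurrent synapses, W[pre, post]
stim = rng.choice(n, size=k, replace=False)   # neurons driven by the stimulus
inp = np.zeros(n); inp[stim] = 1.0

active = np.zeros(n, dtype=bool)
for t in range(rounds):
    drive = inp + W.T @ active                # external plus recurrent input
    winners = np.argsort(drive)[-k:]          # cap: only the top-k neurons fire
    new_active = np.zeros(n, dtype=bool); new_active[winners] = True
    # Hebbian plasticity: strengthen synapses from active to newly active neurons.
    W[np.ix_(active.nonzero()[0], winners)] *= 1.0 + beta
    print(f"round {t}: overlap with previous winners = {(active & new_active).sum()}")
    active = new_active

The overlap climbing toward k is an assembly stabilizing; repeating this for a sequence of stimuli, with projections between areas, is roughly the setting the results above formalize.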

Sampling with Riemannian Hamiltonian Monte Carlo in a Constrained Space

Series
ACO Student Seminar
Time
Friday, February 3, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Yunbum Kook, Georgia Tech CS

We demonstrate for the first time that ill-conditioned, non-smooth, constrained distributions in very high dimensions, upwards of 100,000, can be sampled efficiently in practice. Our algorithm incorporates constraints into the Riemannian version of Hamiltonian Monte Carlo and maintains sparsity. This allows us to achieve a mixing rate independent of condition numbers. On benchmark data sets from systems biology and linear programming, our algorithm outperforms existing packages by orders of magnitude. In particular, we achieve a 1,000-fold speed-up for sampling from the largest published human metabolic network (RECON3D). Our package has been incorporated into the COBRA toolbox. This is joint work with Yin Tat Lee, Ruoqi Shen, and Santosh Vempala.
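
For intuition about the "Riemannian version" on a polytope {x : Ax <= b}, a standard choice of metric is the Hessian of the log-barrier, which blows up near the boundary so that Hamiltonian trajectories stay feasible. The sketch below (our illustration, not the package's code) evaluates that metric and the usual Riemannian Hamiltonian; the talk's algorithm additionally maintains sparsity and handles non-smooth targets, none of which is captured here.

import numpy as np

def barrier_metric(A, b, x):
    """Hessian of the log-barrier of {x : A x <= b}: sum_i a_i a_i^T / s_i(x)^2.
    The metric diverges at the boundary, keeping trajectories in the interior."""
    s = b - A @ x                      # slacks, positive in the interior
    return (A / s[:, None] ** 2).T @ A

def hamiltonian(A, b, x, v, neg_log_density):
    """Riemannian Hamiltonian H(x, v) = U(x) + 0.5 log det g(x) + 0.5 v^T g(x)^{-1} v."""
    g = barrier_metric(A, b, x)
    _, logdet = np.linalg.slogdet(g)
    return neg_log_density(x) + 0.5 * logdet + 0.5 * v @ np.linalg.solve(g, v)

# Toy example: standard Gaussian restricted to the unit box in 2D.
A = np.vstack([np.eye(2), -np.eye(2)]); b = np.ones(4)
x, v = np.zeros(2), np.array([0.3, -0.2])
print(hamiltonian(A, b, x, v, lambda x: 0.5 * x @ x))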

Utility maximizing load balancing policies

Series
ACO Student Seminar
Time
Friday, January 27, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Diego Goldsztajn, Eindhoven University of Technology

We consider a service system where incoming tasks are instantaneously assigned to one out of many heterogeneous server pools. All the tasks sharing a server pool are executed in parallel and the execution times do not depend on the class of the server pool or the number of tasks currently contending for service. However, associated with each server pool is a utility function that does depend on the class of the server pool and the number of tasks currently sharing it. These features are characteristic of streaming and online gaming services, where the duration of tasks is mainly determined by the application but congestion can have a strong impact on the quality-of-service (e.g., video resolution and smoothness). We derive an upper bound for the mean overall utility in steady-state and introduce two load balancing policies that achieve this upper bound in a large-scale regime. Also, the transient and stationary behavior of these asymptotically optimal load balancing policies is characterized in the same large-scale regime.
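
To make the setup concrete, here is a minimal dispatcher that assigns each arriving task to the server pool with the largest marginal utility; the utility function and the instance are hypothetical, and the asymptotically optimal policies in the talk are more refined than this greedy rule.

def utility(c, n):
    """Utility of a class-c pool hosting n tasks: concave in n, so the marginal
    benefit of one more task shrinks as the pool gets more congested. Toy choice."""
    return [1.0, 1.5, 2.0][c] * n ** 0.5

pools = [0, 0, 1, 2]   # classes of four server pools
load = [0, 0, 0, 0]    # tasks currently sharing each pool

for task in range(12):
    # Instantaneous assignment: pick the pool whose utility increases the most.
    gains = [utility(pools[i], load[i] + 1) - utility(pools[i], load[i])
             for i in range(len(pools))]
    load[max(range(len(pools)), key=gains.__getitem__)] += 1

print("loads:", load)
print("total utility:", round(sum(utility(pools[i], load[i]) for i in range(len(pools))), 3))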

Multiserver Stochastic Scheduling: Analysis and Optimization

Series
ACO Student Seminar
Time
Friday, January 20, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
ONLINE
Speaker
Isaac Grosof, CMU

Please Note: Link: https://gatech.zoom.us/j/91232113113?pwd=MDhteEdtcENuME9kdXJmcUY0eWlSUT09

Large-scale computing systems are massively important, using over 1% of the world's electricity. It is vital that these systems be both fast and resource-efficient, and scheduling theory is a key tool towards that goal. However, prior scheduling theory is not equipped to handle large multiserver systems, which have seen little theoretical analysis and no optimality results.


I focus on two important multiserver scheduling systems: the one-server-per-job (M/G/k) model, and the multiple-servers-per-job (MSJ) model. In the M/G/k, we prove the first optimality result, demonstrating that the Shortest Remaining Processing Time (SRPT) policy yields asymptotically optimal mean response time in the heavy-traffic limit. In the MSJ model, we prove the first mean response time analysis for any scheduling policy, via a novel policy called ServerFilling. Moreover, we introduce the ServerFilling-SRPT policy, for which we present the first asymptotic optimality result for the MSJ model. Each result progresses by proving novel bounds on relevant work, and by using novel methods to convert those bounds to bounds on mean response time. These results push the state of the art of scheduling theory ever closer to applicability to today's large-scale computing systems.
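
As a toy rendering of SRPT in the one-server-per-job model, the following discrete-time simulation serves, at every time step, the (up to) k jobs with the smallest remaining processing times; the instance is made up, and nothing here reflects the heavy-traffic analysis itself.

k = 2
jobs = [(0, 5), (0, 3), (1, 1), (2, 4), (3, 2)]  # (arrival time, size)

remaining, completion = {}, {}
t, next_job = 0, 0
while next_job < len(jobs) or remaining:
    while next_job < len(jobs) and jobs[next_job][0] <= t:
        remaining[next_job] = jobs[next_job][1]   # admit arrivals
        next_job += 1
    # SRPT: this tick, serve the (up to) k jobs with the least remaining work.
    for j in sorted(remaining, key=remaining.get)[:k]:
        remaining[j] -= 1
        if remaining[j] == 0:
            completion[j] = t + 1
            del remaining[j]
    t += 1

resp = [completion[j] - jobs[j][0] for j in completion]
print("mean response time:", sum(resp) / len(resp))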

Memory bounds for continual learning

Series
ACO Student Seminar
Time
Friday, January 13, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Binghui Peng, Columbia University


Continual learning, or lifelong learning, is a formidable current challenge to machine learning. It requires the learner to solve a sequence of k different learning tasks, one after the other, while retaining its aptitude for each task already learned; the continual learner should scale better than the obvious solution of developing and maintaining a separate learner for each of the k tasks. We embark on a complexity-theoretic study of continual learning in the PAC framework. We make novel uses of communication complexity to establish that any continual learner, even an improper one, needs memory that grows linearly with k, strongly suggesting that the problem is intractable. When logarithmically many passes over the learning tasks are allowed, we provide an algorithm based on multiplicative weights update whose memory requirement scales well; we also establish that improper learning is necessary for such performance. We conjecture that these results may lead to new promising approaches to continual learning.


Based on the joint work with Xi Chen and Christos Papadimitriou.
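
For reference, here is the generic multiplicative weights update that the logarithmic-pass algorithm builds on, in its textbook experts form; this is the primitive, not the continual-learning algorithm itself.

import numpy as np

def mwu(loss_matrix, eta=0.5):
    """Multiplicative weights over experts: maintain a weight per expert and
    downweight each one exponentially in its observed loss."""
    n_rounds, n_experts = loss_matrix.shape
    w = np.ones(n_experts)
    total_loss = 0.0
    for t in range(n_rounds):
        p = w / w.sum()                       # play the normalized weights
        total_loss += p @ loss_matrix[t]
        w *= np.exp(-eta * loss_matrix[t])    # penalize lossy experts
    return total_loss, loss_matrix.sum(axis=0).min()

rng = np.random.default_rng(5)
alg, best = mwu(rng.random((200, 10)))
print(f"algorithm loss {alg:.1f} vs best single expert {best:.1f}")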
