Seminars and Colloquia by Series

Online Covering: Prophets, Secretaries, and Samples

Series
ACO Student Seminar
Time
Friday, February 24, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Gregory Kehne, Harvard Computer Science

We give a polynomial-time algorithm for online covering IPs with a competitive ratio of O(\log mn) when the constraints are revealed in random order, essentially matching the best possible offline bound of \Omega(\log n) and circumventing the \Omega(\log m \log n) lower bound known in adversarial order. We then leverage this O(\log mn)-competitive algorithm to solve this problem in the prophet setting, where constraints are sampled from a sequence of known distributions. Our reduction in fact relies only on samples from these distributions, in a manner evocative of prior work on single-sample prophet inequalities for online packing problems. We present sample guarantees in the prophet setting, as well as in the setting where random samples from an adversarial instance are revealed at the outset.

This talk is based on joint work with Anupam Gupta and Roie Levin, part of which appeared at FOCS 2021. 
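To make the online covering setting concrete, here is a naive greedy baseline: variables may only increase, and when an unsatisfied constraint arrives we raise the variable with the best coverage-per-cost ratio just enough to satisfy it. This is emphatically not the O(log mn)-competitive algorithm of the talk, only a minimal sketch of the problem format.

```python
# Toy online covering LP: minimize c.x subject to covering constraints
# a.x >= 1 arriving one at a time, where x may only increase over time.
# This naive greedy rule (raise the cheapest-per-unit-coverage variable
# until the new constraint is satisfied) is NOT the algorithm from the
# talk -- just a baseline for intuition about the model.

def online_cover_greedy(costs, constraints):
    x = [0.0] * len(costs)
    for a in constraints:              # constraint: sum_j a[j] * x[j] >= 1
        slack = 1.0 - sum(aj * xj for aj, xj in zip(a, x))
        if slack <= 0:
            continue                   # already covered by earlier increases
        # variable with the cheapest cost per unit of coverage
        j = min((j for j in range(len(costs)) if a[j] > 0),
                key=lambda j: costs[j] / a[j])
        x[j] += slack / a[j]           # raise x[j] just enough
    return x
```

On adversarial constraint orders this greedy can be badly non-competitive, which is exactly the gap that random-order arrival lets the talk's algorithm close.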

Maximizing minimum eigenvalue in constant dimension

Series
ACO Student Seminar
Time
Friday, February 17, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Adam Brown, Georgia Tech Math

In the minimum eigenvalue problem we are given a collection of rank-1 symmetric matrices, and the goal is to find a subset whose sum has large minimum eigenvalue, subject to some combinatorial constraints. The constraints on which subsets we can select could be cardinality, partition, or more general matroid base constraints. Using pipage rounding and a matrix concentration inequality, we will show a randomized algorithm which achieves a (1 - \epsilon)-approximation for the minimum eigenvalue problem when the matrices have constant size, subject to any matroid constraint.

The bulk of the talk will be background on “pipage rounding, pessimistic estimators and matrix concentration” adapted from the paper with that title by Nicholas J. A. Harvey and Neil Olver. The application to the minimum eigenvalue problem is joint work with Aditi Laddha and Mohit Singh.
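As a point of comparison for the pipage-rounding approach above, the problem itself can be illustrated with a simple greedy heuristic under a cardinality constraint: repeatedly add the rank-1 matrix that most increases the minimum eigenvalue of the running sum. This heuristic is an assumption of this sketch, not the algorithm from the talk.

```python
import numpy as np

# Greedy baseline for the minimum eigenvalue problem under a cardinality
# constraint: repeatedly add the rank-1 matrix v v^T that most increases
# lambda_min of the running sum. Illustrates the problem only; the talk's
# (1 - epsilon)-approximation uses pipage rounding and matrix concentration.

def greedy_min_eig(vectors, k):
    d = len(vectors[0])
    total = np.zeros((d, d))
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(vectors)) if i not in chosen),
                   key=lambda i: np.linalg.eigvalsh(
                       total + np.outer(vectors[i], vectors[i]))[0])
        chosen.append(best)
        total += np.outer(vectors[best], vectors[best])
    # eigvalsh returns eigenvalues in ascending order; [0] is the minimum
    return chosen, np.linalg.eigvalsh(total)[0]
```

For instance, with two copies of e1 and one copy of e2 in the plane and k = 2, greedy must take one of each to get a nonzero minimum eigenvalue.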

Computation with sequences of neural assemblies

Series
ACO Student Seminar
Time
Friday, February 10, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Max Dabagia, Georgia Tech CS

Assemblies are subsets of neurons whose coordinated excitation is hypothesized to represent the subject's thinking of an object, idea, episode, or word. Consequently, they provide a promising basis for a theory of how neurons and synapses give rise to higher-level cognitive phenomena. The existence (and pivotal role) of assemblies was first proposed by Hebb, and has since been experimentally confirmed, as well as rigorously proven to emerge in the model of computation in the brain recently developed by Papadimitriou & Vempala. In light of contemporary studies which have documented the creation and activation of sequences of assemblies of neurons following training on tasks with sequential decisions, we study here the brain's mechanisms for working with sequences in the assemblies model of Papadimitriou & Vempala.  We show that (1) repeated presentation of a sequence of stimuli leads to the creation of a sequence of corresponding assemblies -- upon future presentation of any contiguous sub-sequence of stimuli, the corresponding assemblies are activated and continue until the end of the sequence; (2) when the stimulus sequence is projected to two brain areas in a "scaffold", both memorization and recall are more efficient, giving rigorous backing to the cognitive phenomenon that memorization and recall are easier with scaffolded memories; and (3) existing assemblies can be quite easily linked to simulate an arbitrary finite state machine (FSM), thereby capturing the brain's ability to memorize algorithms. This also makes the assemblies model capable of arbitrary computation simply in response to presentation of a suitable stimulus sequence, without explicit control commands. These findings provide a rigorous, theoretical explanation at the neuronal level of complex phenomena such as sequence memorization in rats and algorithm learning in humans, as well as a concrete hypothesis as to how the brain's remarkable computing and learning abilities could be realized.

 

Joint work with Christos Papadimitriou and Santosh Vempala.
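Result (3) above says linked assemblies can simulate an arbitrary finite state machine. The sketch below is the abstract object being simulated, a plain table-driven FSM, not the neuronal construction from the talk; the transition table shown (a parity automaton) is a hypothetical example.

```python
# A finite state machine as a transition table -- the abstract object that
# the talk shows linked assemblies can simulate in response to a stimulus
# sequence. The parity automaton below is an illustrative example, not
# anything from the paper.

def run_fsm(transitions, start, inputs):
    """transitions: (state, symbol) -> state. Returns the visited state sequence."""
    state, trace = start, [start]
    for sym in inputs:
        state = transitions[(state, sym)]
        trace.append(state)
    return trace
```

The assemblies result says the brain model can carry out exactly this kind of state-to-state computation purely by presenting the input symbols as stimuli, with no explicit control commands.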

Sampling with Riemannian Hamiltonian Monte Carlo in a Constrained Space

Series
ACO Student Seminar
Time
Friday, February 3, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Yunbum Kook, Georgia Tech CS

We demonstrate for the first time that ill-conditioned, non-smooth, constrained distributions in very high dimensions, upwards of 100,000, can be sampled efficiently in practice. Our algorithm incorporates constraints into the Riemannian version of Hamiltonian Monte Carlo and maintains sparsity. This allows us to achieve a mixing rate independent of condition numbers. On benchmark data sets from systems biology and linear programming, our algorithm outperforms existing packages by orders of magnitude. In particular, we achieve a 1,000-fold speed-up for sampling from the largest published human metabolic network (RECON3D). Our package has been incorporated into the COBRA toolbox. This is joint work with Yin Tat Lee, Ruoqi Shen, and Santosh Vempala.
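The talk's sampler is Riemannian, constrained, and sparsity-preserving; as background, the sketch below shows only the basic ingredient it builds on: a standard (non-Riemannian, unconstrained) leapfrog integration of Hamiltonian dynamics for a Gaussian target. The target and parameters are illustrative assumptions.

```python
import numpy as np

# Minimal *standard* HMC leapfrog integrator for the Gaussian target
# exp(-||x||^2 / 2). The talk's method replaces the flat metric with a
# position-dependent Riemannian one and incorporates constraints; this
# sketch only shows the underlying Hamiltonian dynamics.

def leapfrog(x, p, grad_U, step, n_steps):
    x, p = x.copy(), p.copy()
    p -= 0.5 * step * grad_U(x)          # half step for momentum
    for _ in range(n_steps - 1):
        x += step * p                    # full step for position
        p -= step * grad_U(x)            # full step for momentum
    x += step * p
    p -= 0.5 * step * grad_U(x)          # final half step for momentum
    return x, p

grad_U = lambda x: x                     # U(x) = ||x||^2 / 2
H = lambda x, p: 0.5 * (x @ x) + 0.5 * (p @ p)   # Hamiltonian (energy)
```

A quick sanity check on leapfrog is that it nearly conserves the Hamiltonian over many small steps, which is what keeps HMC acceptance rates high.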

Utility maximizing load balancing policies

Series
ACO Student Seminar
Time
Friday, January 27, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Diego Goldsztajn, Eindhoven University of Technology

We consider a service system where incoming tasks are instantaneously assigned to one out of many heterogeneous server pools. All the tasks sharing a server pool are executed in parallel and the execution times do not depend on the class of the server pool or the number of tasks currently contending for service. However, associated with each server pool is a utility function that does depend on the class of the server pool and the number of tasks currently sharing it. These features are characteristic of streaming and online gaming services, where the duration of tasks is mainly determined by the application but congestion can have a strong impact on the quality-of-service (e.g., video resolution and smoothness). We derive an upper bound for the mean overall utility in steady-state and introduce two load balancing policies that achieve this upper bound in a large-scale regime. Also, the transient and stationary behavior of these asymptotically optimal load balancing policies is characterized in the same large-scale regime.
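A natural (greedy) dispatch rule for the model above assigns each arriving task to the pool with the largest marginal utility gain. The rule and the utility functions in this sketch are illustrative assumptions, not the asymptotically optimal policies analyzed in the talk.

```python
# Toy dispatcher for the model in the abstract: each arriving task goes to
# the server pool with the largest marginal utility gain u_c(n+1) - u_c(n),
# where n is the pool's current occupancy and c its class. The greedy rule
# and the utility functions used below are illustrative stand-ins, not the
# policies analyzed in the talk.

def assign_tasks(pools, utilities, n_tasks):
    """pools: list of class labels; utilities: class -> (occupancy -> utility)."""
    occupancy = [0] * len(pools)
    for _ in range(n_tasks):
        i = max(range(len(pools)),
                key=lambda i: utilities[pools[i]](occupancy[i] + 1)
                              - utilities[pools[i]](occupancy[i]))
        occupancy[i] += 1
    return occupancy
```

With a concave utility for one class, the marginal gain eventually drops below that of the other class and tasks spill over, mirroring how congestion degrades per-task quality-of-service.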

Multiserver Stochastic Scheduling: Analysis and Optimization

Series
ACO Student Seminar
Time
Friday, January 20, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
ONLINE
Speaker
Isaac Grosof, CMU

Please Note: Link: https://gatech.zoom.us/j/91232113113?pwd=MDhteEdtcENuME9kdXJmcUY0eWlSUT09

Large-scale computing systems are massively important, using over 1% of the world's electricity. It is vital that these systems be both fast and resource-efficient, and scheduling theory is a key tool towards that goal. However, prior scheduling theory is not equipped to handle large multiserver systems, with little extant theoretical analysis, and no optimality results.

 

I focus on two important multiserver scheduling systems: The one-server-per-job (M/G/k) model, and the multiple-servers-per-job (MSJ) model. In the M/G/k, we prove the first optimality result, demonstrating that the Shortest Remaining Processing Time (SRPT) policy yields asymptotically optimal mean response time in the heavy traffic limit. In the MSJ model, we prove the first mean response time analysis for any scheduling policy, for a novel policy called ServerFilling. Moreover, we introduce the ServerFilling-SRPT policy, for which we present the first asymptotic optimality result for the MSJ model. Each result progresses by proving novel bounds on relevant work, and using novel methods to convert those bounds to bounds on mean response time. These results push the state of the art of scheduling theory ever closer to applicability to today's large-scale computing systems.
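The SRPT policy itself is easy to state and simulate: at every moment, serve the job with the least remaining processing time, preempting if necessary. The single-server event simulation below is a minimal sketch of the policy whose multiserver optimality the talk establishes, not the analysis machinery.

```python
import heapq

# Single-server preemptive SRPT simulation: at every moment the server
# works on the job with the least remaining processing time. Response time
# is completion time minus arrival time. A toy illustration of the SRPT
# policy; the talk's contribution is its multiserver (M/G/k, MSJ) analysis.

def srpt_response_times(jobs):
    """jobs: list of (arrival_time, size). Returns [(arrival, response), ...]."""
    jobs = sorted(jobs)
    t, i = 0.0, 0                     # current time, next arrival index
    heap = []                         # (remaining_size, arrival_time)
    resp = []
    while i < len(jobs) or heap:
        if not heap:                  # server idle: jump to next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (float(jobs[i][1]), jobs[i][0]))
            i += 1
        rem, arr = heapq.heappop(heap)
        next_arr = jobs[i][0] if i < len(jobs) else float('inf')
        run = min(rem, next_arr - t)  # run until completion or next arrival
        t += run
        rem -= run
        if rem <= 1e-12:
            resp.append((arr, t - arr))
        else:
            heapq.heappush(heap, (rem, arr))  # preempted; requeue remainder
    return resp
```

For example, a size-1 job arriving at time 1 preempts a size-3 job started at time 0, finishing at time 2, while the long job finishes at time 4; SRPT's mean response time of 2.5 beats FCFS's 3 on this instance.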

Memory bounds for continual learning

Series
ACO Student Seminar
Time
Friday, January 13, 2023 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Binghui Peng, Columbia University

Continual learning, or lifelong learning, is a formidable current challenge to machine learning. It requires the learner to solve a sequence of k different learning tasks, one after the other, while with each new task learned it retains its aptitude for earlier tasks; the continual learner should scale better than the obvious solution of developing and maintaining a separate learner for each of the k tasks.  We embark on a complexity-theoretic study of continual learning in the PAC framework. We make novel uses of communication complexity to establish that any continual learner, even an improper one, needs memory that grows linearly with k, strongly suggesting that the problem is intractable.  When logarithmically many passes over the learning tasks are allowed, we provide an algorithm based on multiplicative weights update whose memory requirement scales well; we also establish that improper learning is necessary for such performance. We conjecture that these results may lead to new promising approaches to continual learning.

 

Based on joint work with Xi Chen and Christos Papadimitriou.
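The upper bound above rests on multiplicative weights update. The sketch below is the generic MWU primitive for prediction with expert advice, to show the update rule in isolation; the paper's actual memory-efficient continual learner is more involved and is not reproduced here.

```python
import math

# Generic multiplicative weights update for prediction with expert advice:
# each expert's weight is multiplied by exp(-eta * loss) every round, so
# mass concentrates on low-loss experts. This is the primitive behind the
# talk's upper bound, not the paper's continual learning algorithm itself.

def mwu(losses, eta=0.5):
    """losses: rounds x experts matrix with entries in [0, 1].
    Returns (normalized final weights, cumulative loss per expert)."""
    n = len(losses[0])
    w = [1.0] * n
    total = [0.0] * n
    for round_losses in losses:
        for j in range(n):
            total[j] += round_losses[j]
            w[j] *= math.exp(-eta * round_losses[j])
    s = sum(w)
    return [wj / s for wj in w], total
```

After a few rounds in which expert 0 incurs no loss and expert 1 always loses, almost all weight sits on expert 0.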

Sparse Quadratic Optimization via Polynomial Roots

Series
ACO Student Seminar
Time
Friday, December 2, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Kevin Shu, Georgia Tech Math

We'll talk about problems of optimizing a quadratic function subject to quadratic constraints, in addition to a sparsity constraint that requires that solutions have only a few nonzero entries. Such problems include sparse versions of linear regression and principal components analysis. We'll see that this problem can be formulated as a convex conical optimization problem over a sparse version of the positive semidefinite cone, and then see how we can approximate such problems using ideas arising from the study of hyperbolic polynomials. We'll also describe a fast algorithm for such problems, which performs well in practical situations.
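To make the sparsity constraint concrete, sparse PCA can be solved exactly by brute force in small dimensions: maximizing x^T A x over unit vectors with at most k nonzeros reduces to taking the top eigenvalue of each k-by-k principal submatrix. This enumeration is exponential in general, which is precisely what the talk's convex-conic relaxation is designed to avoid; the sketch is an illustration of the problem, not the talk's method.

```python
import numpy as np
from itertools import combinations

# Brute-force sparse PCA: maximize x^T A x over unit vectors x with at
# most k nonzero entries, by scanning every k x k principal submatrix and
# taking its largest eigenvalue. Exponential in n -- shown only to make
# the sparsity constraint concrete, not the talk's approach.

def sparse_pca_bruteforce(A, k):
    n = A.shape[0]
    best_val, best_support = -np.inf, None
    for S in combinations(range(n), k):
        sub = A[np.ix_(S, S)]
        val = np.linalg.eigvalsh(sub)[-1]   # largest eigenvalue of submatrix
        if val > best_val:
            best_val, best_support = val, S
    return best_val, best_support
```

On a diagonal matrix the optimal support simply contains the largest diagonal entries, which gives an easy correctness check.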

Breaking the quadratic gap for strongly polynomial solvers to combinatorial linear programs

Series
ACO Student Seminar
Time
Friday, November 18, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Bento Natura, Georgia Tech ISyE

Recent years have seen tremendous progress in high-accuracy solvers for Maximum Flow, Minimum-Cost Flow and general Linear Programs (LP). Progress on strongly polynomial solvers for combinatorial LP on the other hand has stalled. The computational gap between high-accuracy solvers and strongly polynomial solvers is linear in the number of variables only for combinatorial LP on directed graphs. For combinatorial LP beyond directed graphs this gap is currently quadratic and has stood since the 1980s, owing to a seminal result of Tardos.

We finally break the quadratic gap and design a strongly polynomial interior-point-method for combinatorial LP, which reduces the gap to only a linear factor. Thus, we match the known linear gap for LP on directed graphs. Furthermore, we close the linear gap between feasibility and optimization of combinatorial LP.

Self-Adjusting Data Structures

Series
ACO Student Seminar
Time
Friday, November 11, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Robert Tarjan, Princeton University

Please Note: Robert Tarjan is the James S. McDonnell Distinguished University Professor of Computer Science at Princeton University. He has held academic positions at Cornell, Berkeley, Stanford, and NYU, and industrial research positions at Bell Labs, NEC, HP, Microsoft, and Intertrust Technologies. He has invented or co-invented many of the most efficient known data structures and graph algorithms. He was awarded the first Nevanlinna Prize from the International Mathematical Union in 1982 for “outstanding contributions to mathematical aspects of information science,” the Turing Award in 1986 with John Hopcroft for “fundamental achievements in the design and analysis of algorithms and data structures,” and the Paris Kanellakis Award in Theory and Practice in 1999 with Daniel Sleator for the invention of splay trees. He is a member of the U.S. National Academy of Sciences, the U.S. National Academy of Engineering, the American Academy of Arts and Sciences, and the American Philosophical Society.

Data structures are everywhere in computer software.  Classical data structures are specially designed to make each individual operation fast.  A more flexible approach is to design the structure so that it adapts to its use.  This idea has produced data structures that perform well in practice and have surprisingly good performance guarantees.  In this talk I’ll review some recent work on such data structures, specifically on self-adjusting search trees and self-adjusting heaps.
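The self-adjusting search trees mentioned above are splay trees: every access rotates ("splays") the accessed key to the root, so frequently used keys stay near the top with no stored balance information. The recursive sketch below implements the standard zig-zig/zig-zag splay and a naive insert-then-splay; deletion and the amortized analysis are omitted.

```python
# Minimal splay tree: each access splays the accessed key to the root via
# zig-zig / zig-zag rotations, so the tree adapts to its access pattern.
# Insert is a plain BST insert followed by a splay; delete is omitted.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def _rot_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def _rot_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig
            root.left.left = splay(root.left.left, key)
            root = _rot_right(root)
        elif key > root.left.key:                    # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = _rot_left(root.left)
        return _rot_right(root) if root.left is not None else root
    else:
        if root.right is None:
            return root
        if key > root.right.key:                     # zig-zig
            root.right.right = splay(root.right.right, key)
            root = _rot_left(root)
        elif key < root.right.key:                   # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = _rot_right(root.right)
        return _rot_left(root) if root.right is not None else root

def insert(root, key):
    if root is None:
        return Node(key)
    cur = root
    while True:                                      # plain BST insert
        if key < cur.key:
            if cur.left is None:
                cur.left = Node(key); break
            cur = cur.left
        else:
            if cur.right is None:
                cur.right = Node(key); break
            cur = cur.right
    return splay(root, key)                          # then splay new key up
```

Although a single splay can take linear time, the amortized cost per operation is O(log n), and splay trees enjoy stronger adaptive guarantees (static optimality, working-set bounds) of the kind discussed in the talk.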
