Seminars and Colloquia by Series

Friday, January 25, 2019 - 13:05 , Location: Skiles 005 , Mohit Singh , ISyE, Georgia Tech , mohit.singh@isye.gatech.edu , Organizer: He Guo
We present a new, general, and simple method for rounding semi-definite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. We apply our method to several classical problems, including Max-Cut, Max-DiCut, and Max-2-SAT, and derive new algorithms that are competitive with the best known results. In particular, we show that the basic algorithm achieves a 0.861-approximation for Max-Cut, and a natural variant of the algorithm achieves a 0.878-approximation, matching the famous Goemans-Williamson algorithm up to the first three decimal digits. This is joint work with Abbas-Zadeh, Nikhil Bansal, Guru Guruganesh, Sasho Nikolov and Roy Schwartz.
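As a rough illustration of the flavor of this rounding scheme, here is a minimal discretized sketch of a "sticky" Brownian walk that starts at the origin and freezes each coordinate once it reaches ±1, with increments correlated according to the Gram matrix of the SDP solution. The step size, jitter, and demo vectors are illustrative placeholders, not the authors' exact algorithm:

```python
import numpy as np

def sticky_brownian_round(Sigma, dt=1e-3, seed=None):
    """Round an SDP solution (Gram matrix Sigma with unit diagonal) to a
    +/-1 vector via a discretized Brownian walk in [-1,1]^n that freezes
    coordinates when they hit the boundary (a sketch, not the exact scheme)."""
    rng = np.random.default_rng(seed)
    n = Sigma.shape[0]
    x = np.zeros(n)
    active = np.ones(n, dtype=bool)
    # Small jitter makes the (possibly low-rank) Gram matrix factorizable.
    L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(n))
    while active.any():
        g = L @ rng.standard_normal(n)     # increment with covariance ~ Sigma
        x[active] += np.sqrt(dt) * g[active]
        hit = np.abs(x) >= 1.0
        x[hit] = np.sign(x[hit])           # freeze coordinates at +/-1
        active &= ~hit
    return x

rng = np.random.default_rng(1)
V = rng.standard_normal((5, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # stand-in SDP solution vectors
print("rounded cut vector:", sticky_brownian_round(V @ V.T))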
Friday, January 11, 2019 - 13:05 , Location: Skiles 005 , Richard Peng , CS, Georgia Tech , rpeng@cc.gatech.edu , Organizer: He Guo
In the current convex optimization literature, there are significant gaps between algorithms that produce high-accuracy (1+1/poly(n))-approximate solutions and algorithms that produce O(1)-approximate solutions for symmetrized special cases. This gap is reflected in the differences between interior point methods and (accelerated) gradient descent for regression problems, and between exact and approximate undirected max-flow. In this talk, I will discuss generalizations of a fundamental building block in numerical analysis, preconditioned iterative methods, to convex functions that include p-norms. This leads to algorithms that converge to high-accuracy solutions by crudely solving a sequence of symmetric residual problems. I will then briefly describe several recent and ongoing projects, including p-norm regression using m^{1/3} linear system solves, p-norm flow in undirected unweighted graphs in almost-linear time, and further improvements to the dependence on p in the runtime.
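The recurring recipe here, reaching a high-accuracy solution by repeatedly solving crude symmetric subproblems, is easiest to see in a classical relative: iteratively reweighted least squares (IRLS) for p-norm regression, where each iteration is one weighted linear-system solve. The sketch below is that textbook scheme, not the speaker's algorithm; the damping factor and iteration count are illustrative:

```python
import numpy as np

def irls_pnorm(A, b, p=4, iters=100, eps=1e-8):
    """Approximate min_x ||Ax - b||_p by solving a sequence of weighted
    least-squares problems, each a single symmetric linear-system solve."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # p = 2 warm start
    for _ in range(iters):
        r = A @ x - b
        w = np.sqrt(np.abs(r) ** (p - 2) + eps)     # residual-driven weights
        x_new = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
        x = 0.5 * x + 0.5 * x_new                   # damping stabilizes p > 2
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 5)), rng.standard_normal(200)
x = irls_pnorm(A, b, p=4)
print("residual 4-norm:", np.linalg.norm(A @ x - b, 4))
```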
Friday, November 30, 2018 - 13:05 , Location: Skiles 005 , Samantha Petti , Math, Georgia Tech , spetti@gatech.edu , Organizer: He Guo
In this talk we introduce two different random graph models that produce sparse graphs with overlapping community structure, and discuss community detection in each context. The Random Overlapping Community (ROC) model produces a sparse graph by constructing many Erdős–Rényi random graphs (communities) on small randomly selected subsets of vertices. By varying the size and density of these communities, ROC graphs can be tuned to exhibit a wide range of normalized closed-walk count vectors, including those of hypercubes. This is joint work with Santosh Vempala. In the second half of the talk, we introduce the Community Configuration Model (CCM), a variant of the configuration model in which half-edges are assigned colors and paired according to a matching rule on the colors. The model is a generalization of models in the statistical physics literature and is a natural finite analog for classes of graphexes. We describe a hypothesis testing algorithm that determines whether a graph came from a community configuration model or a traditional configuration model. This is joint work with Christian Borgs, Jennifer Chayes, Souvik Dhara, and Subhabrata Sen.
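A minimal generator for the ROC construction as described, overlaying Erdős–Rényi graphs on random vertex subsets; the parameter names are mine:

```python
import random
from itertools import combinations

def roc_graph(n, num_communities, community_size, q, seed=None):
    """Random Overlapping Community (ROC) graph: the union of many small
    Erdos-Renyi graphs G(community_size, q), each placed on a uniformly
    random subset of the n vertices. Returns the edge set."""
    rng = random.Random(seed)
    edges = set()
    for _ in range(num_communities):
        community = rng.sample(range(n), community_size)
        for u, v in combinations(community, 2):
            if rng.random() < q:
                edges.add((min(u, v), max(u, v)))
    return edges

E = roc_graph(n=1000, num_communities=300, community_size=12, q=0.5, seed=1)
print(len(E), "edges; average degree ~", 2 * len(E) / 1000)
```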
Friday, November 16, 2018 - 13:05 , Location: Skiles 005 , Mark Rudelson , Math, University of Michigan , rudelson@umich.edu , Organizer: He Guo
Consider a linear combination of independent identically distributed random variables $X_1, \dots, X_n$ with fixed weights $a_1, \dots, a_n$. If the random variables are continuous, the sum is almost surely non-zero. However, for discrete random variables an exact cancellation may occur with positive probability. This probability depends on the arithmetic nature of the sequence $a_1, \dots, a_n$. We will discuss how to measure the relevant arithmetic properties and how to evaluate the probability of exact and approximate cancellation.
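A quick Monte Carlo experiment makes the arithmetic dependence concrete for Rademacher (±1) random variables; the weight sequences below are illustrative:

```python
import numpy as np

def atom_probability(a, trials=200_000, seed=0):
    """Monte Carlo estimate of P(a_1 X_1 + ... + a_n X_n = 0) for i.i.d.
    Rademacher (uniform +/-1) random variables X_i."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1, 1], size=(trials, len(a)))
    return (signs @ np.asarray(a) == 0).mean()

n = 20
print("equal weights (1,...,1):", atom_probability([1] * n))              # ~ c / sqrt(n)
print("progression (1,...,n):  ", atom_probability(list(range(1, n + 1))))  # ~ c / n^(3/2)
# Generic real weights mimic the continuous case: exact cancellation
# essentially never happens.
print("generic real weights:   ", atom_probability(np.sqrt(np.arange(1, n + 1))))
```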
Friday, November 9, 2018 - 13:05 , Location: Skiles 005 , Lance Fortnow , School of Computer Science, Georgia Tech , fortnow@cc.gatech.edu , Organizer: He Guo
 Often the popular press talks about the power of quantum computing coming from its ability to perform several computations simultaneously. We’ve already had a similar capability from probabilistic machines. This talk will explore the relationship between quantum and randomized computation, how they are similar and how they differ, and why quantum can work exponentially faster on some but far from all computational problems. We’ll talk about some open problems in quantum complexity that can help shed light on the similarities and differences between randomness and “quantumness”.  This talk will not assume any previous knowledge of quantum information or quantum computing.
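One standard toy computation behind this "similar but different" story: quantum amplitudes can cancel, probabilities cannot. Applying a Hadamard transform twice returns the starting state with certainty, while two fair coin flips stay uniform. A small numpy illustration (not material from the talk itself):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: unitary, acts on amplitudes
C = np.array([[0.5, 0.5], [0.5, 0.5]])         # fair coin flip: stochastic matrix

psi = np.array([1.0, 0.0])                     # start in state |0>
print("quantum, H twice:  ", np.abs(H @ (H @ psi)) ** 2)  # [1, 0]: amplitudes interfere
p = np.array([1.0, 0.0])
print("random, coin twice:", C @ (C @ p))                 # [0.5, 0.5]: no cancellation
```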
Friday, November 2, 2018 - 12:20 , Location: Skiles 005 , Thinh Doan , ISyE/ECE, Georgia Tech , thinh.doan@isye.gatech.edu , Organizer: He Guo
Abstract: In this talk, I will present a popular distributed method, the distributed consensus-based gradient (DCG) method, for solving optimal learning problems over a network of agents. Such problems arise in many applications, such as finding optimal parameters over a large dataset distributed among a network of processors, or seeking an optimal policy for coverage control problems in robotic networks. The focus is on our recent results, where we study the performance of DCG when the agents are only allowed to exchange quantized values due to their finite communication bandwidth. In particular, we develop a novel quantization method, which we refer to as adaptive quantization. The main idea of our approach is to quantize the nodes' estimates based on the progress of the algorithm, which helps to eliminate the quantization errors. Under adaptive quantization, we then derive bounds on the convergence rates of the proposed method as a function of the bandwidths and the underlying network topology, for both convex and strongly convex objective functions. Our results suggest that under adaptive quantization, the rates of convergence of DCG with and without quantization are the same, except for a factor that captures the number of quantization bits. To the best of our knowledge, these results improve on all existing results for DCG under quantization. This is based on joint work with Siva Theja Maguluri and Justin Romberg.

Bio: Thinh T. Doan is a TRIAD postdoctoral fellow at the Georgia Institute of Technology, joint between the School of Industrial and Systems Engineering and the School of Electrical and Computer Engineering (ECE). He was born in Vietnam, where he received his Bachelor's degree in Automatic Control from Hanoi University of Science and Technology in 2008. He obtained his Master's and Ph.D. degrees, both in ECE, from the University of Oklahoma in 2013 and the University of Illinois at Urbana-Champaign in 2018, respectively. His research interests lie at the intersection of control theory, optimization, distributed algorithms, and applied probability, with main applications in machine learning, reinforcement learning, power networks, and multi-agent systems.
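A toy sketch of the DCG iteration described in the abstract above, where agents broadcast quantized innovations over a shrinking range. The uniform quantizer, constant step size, and geometric decay schedule are my simplifications, not the paper's adaptive scheme; a constant step also leaves a small O(alpha) bias:

```python
import numpy as np

def quantize(v, radius, bits=4):
    """Uniform quantizer: each entry of v mapped to one of 2**bits levels
    in [-radius, radius]."""
    levels = 2 ** bits - 1
    t = (np.clip(v, -radius, radius) + radius) / (2 * radius)  # map to [0, 1]
    return np.round(t * levels) / levels * (2 * radius) - radius

# Four agents on a ring; agent i holds f_i(x) = 0.5 * (x - c[i])**2, so the
# global optimum of sum_i f_i is mean(c). W is doubly stochastic.
c = np.array([1.0, 3.0, -2.0, 4.0])
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.zeros(4)       # local estimates
q = np.zeros(4)       # last value each agent broadcast
alpha, radius = 0.02, 2.0
for t in range(10_000):
    radius *= 0.999                    # shrink the range as the run progresses
    q = q + quantize(x - q, radius)    # broadcast only a quantized *innovation*
    x = W @ q - alpha * (x - c)        # mix neighbors' broadcasts + local gradient
print("estimates:", np.round(x, 2), " optimum:", c.mean())
```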
Friday, October 26, 2018 - 13:05 , Location: Skiles 005 , Yu Cheng , CS, Duke University , chycharlie@gmail.com , Organizer: He Guo
We study the fundamental problem of high-dimensional mean estimation in a robust model where a constant fraction of the samples are adversarially corrupted. Recent work gave the first polynomial time algorithms for this problem with dimension-independent error guarantees for several families of structured distributions.  In this work, we give the first nearly-linear time algorithms for high-dimensional robust mean estimation. Specifically, we focus on distributions with (i) known covariance and sub-gaussian tails, and (ii) unknown bounded covariance. Given $N$ samples in $\mathbb{R}^d$, an $\epsilon$-fraction of which may be arbitrarily corrupted, our algorithms run in time $\tilde{O}(Nd)/\mathrm{poly}(\epsilon)$ and approximate the true mean within the information-theoretically optimal error, up to constant factors. Previous robust algorithms with comparable error guarantees have running times $\tilde{\Omega}(N d^2)$. Our algorithms rely on a natural family of SDPs parameterized by our current guess $\nu$ for the unknown mean $\mu^\star$. We give a win-win analysis establishing the following: either a near-optimal solution to the primal SDP yields a good candidate for $\mu^\star$ -- independent of our current guess $\nu$ -- or the dual SDP yields a new guess $\nu'$ whose distance from $\mu^\star$ is smaller by a constant factor. We exploit the special structure of the corresponding SDPs to show that they are approximately solvable in nearly-linear time. Our approach is quite general, and we believe it can also be applied to obtain nearly-linear time algorithms for other high-dimensional robust learning problems. This is a joint work with Ilias Diakonikolas and Rong Ge.
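For intuition about what such algorithms must do, here is a much simpler spectral "filter" heuristic in the same spirit: while the empirical covariance has an abnormally large top eigenvalue, discard the points with the largest projections onto the top eigenvector. This is a known simpler relative of the approach, not the SDP-based algorithm from the talk; the eigenvalue threshold and the synthetic corruption are illustrative:

```python
import numpy as np

def filter_mean(X, eps, eig_bound=1.5):
    """Crude robust mean sketch: repeatedly drop the eps-fraction of points
    scoring highest along the top eigenvector of the empirical covariance,
    until no direction has abnormally large variance."""
    X = X.copy()
    for _ in range(20):
        mu = X.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(X.T))
        if vals[-1] <= eig_bound:                      # covariance looks clean
            break
        scores = np.abs((X - mu) @ vecs[:, -1])        # outlier scores
        X = X[scores <= np.quantile(scores, 1 - eps)]  # drop the top eps fraction
    return X.mean(axis=0)

rng = np.random.default_rng(0)
d, N, eps = 20, 5000, 0.1
inliers = rng.standard_normal((int(N * (1 - eps)), d))   # true mean = 0
outliers = rng.standard_normal((int(N * eps), d)) + 5.0  # shifted corruption
X = np.vstack([inliers, outliers])
print("naive error:   ", np.linalg.norm(X.mean(axis=0)))
print("filtered error:", np.linalg.norm(filter_mean(X, eps)))
```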
Friday, October 19, 2018 - 13:05 , Location: Skiles 005 , Samira Samadi , CS, Georgia Tech , ssamadi6@gatech.edu , Organizer: He Guo
We investigate whether standard dimensionality reduction techniques inadvertently produce data representations with different fidelity for two different populations. We show that on several real-world datasets, PCA has higher reconstruction error for population A than for population B (for example, women versus men, or lower- versus higher-educated individuals). This can happen even when the dataset contains a similar number of samples from A and B. This motivates our study of dimensionality reduction techniques that maintain similar fidelity for A and B. We give an efficient algorithm for finding a projection which is nearly-optimal with respect to this measure, and evaluate it on several datasets. This is joint work with Uthaipon Tantipongpipat, Jamie Morgenstern, Mohit Singh, and Santosh Vempala.
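The disparity itself is straightforward to measure. A short sketch on synthetic data, where group A's variance lies in a low-dimensional subspace so rank-k PCA reconstructs A almost perfectly while B bears most of the error; the data and group labels are synthetic placeholders:

```python
import numpy as np

def group_recon_errors(X, groups, k):
    """Fit rank-k PCA on the pooled data, then report the mean squared
    reconstruction error separately for each group."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T @ Vt[:k]                    # projector onto the top-k subspace
    err = ((Xc - Xc @ P) ** 2).sum(axis=1)   # per-sample residual
    return {g: err[groups == g].mean() for g in np.unique(groups)}

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 10))  # near a 3-dim subspace
B = rng.standard_normal((500, 10))                                # isotropic, full-rank
X = np.vstack([A, B])
groups = np.array(["A"] * 500 + ["B"] * 500)
print(group_recon_errors(X, groups, k=3))   # A reconstructs well; B bears most error
```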
Friday, October 12, 2018 - 13:05 , Location: Skiles 005 , Swati Gupta , ISyE, Georgia Tech , swatig@alum.mit.edu , Organizer: He Guo
At the heart of most algorithms today there is an optimization engine trying to learn online and provide the best decision, e.g., rankings of objects, at any point in time using the partial information observed thus far. It is often difficult to find near-optimal solutions to many such problems because of their inherent combinatorial structure, which leads to certain computational bottlenecks. Submodularity is a discrete analogue of convexity and is a key property often exploited in tackling combinatorial optimization problems. In the first part of the talk, we will focus on computational bottlenecks that involve submodular functions: (a) convex function minimization over submodular base polytopes (e.g., the permutahedron) and (b) movement along a line inside submodular base polytopes. We give a conceptually simple and strongly polynomial algorithm, Inc-Fix, for the former, which is useful for computing Bregman projections in first-order projection-based methods like online mirror descent. For the latter, we bound the number of iterations of the discrete Newton method, which gives a running-time improvement of at least a factor of n^6 over the state of the art. This is joint work with Michel Goemans and Patrick Jaillet. In the second part of the talk, we will consider the dual problem of (a), i.e., minimization of composite convex and submodular objectives. We will resolve Bach's conjecture from 2015 about the running time of a popular variant of Kelley's cutting-plane method for minimizing these composite objectives. This is joint work with Madeleine Udell and Song Zhou.
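For reference, the basic subroutine underlying optimization over submodular base polytopes is Edmonds' greedy algorithm for linear objectives, a building block that projection methods such as Inc-Fix work on top of; the concrete submodular function below is an illustrative cardinality-based example, not one from the talk:

```python
import numpy as np

def greedy_base_vertex(f, w):
    """Edmonds' greedy algorithm: maximize <w, x> over the base polytope
    B(f) of a submodular function f (f maps a set to a real, f(empty) = 0).
    Visit elements in decreasing weight order and take marginal gains."""
    order = np.argsort(-np.asarray(w, dtype=float))
    x = np.zeros(len(w))
    S, fS = set(), 0.0
    for i in order:
        fT = f(S | {i})
        x[i] = fT - fS          # marginal gain f(S + i) - f(S)
        S.add(i)
        fS = fT
    return x

# Illustrative submodular function: f(S) = sqrt(|S|), concave in cardinality;
# its base polytope is a permutahedron-like object.
f = lambda S: np.sqrt(len(S))
w = np.array([0.5, 2.0, 1.0, 0.1])
print(greedy_base_vertex(f, w))   # a vertex of B(f) maximizing <w, x>
```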
