## Seminars and Colloquia by Series

Friday, March 1, 2019 - 13:05, Location: Skiles 005, CS, Technion, Organizer: He Guo
<p>Correlation Clustering is an elegant model that captures fundamental graph cut problems such as Minimum s-t Cut, Multiway Cut, and Multicut, extensively studied in combinatorial optimization.</p><p>Here, we are given a graph with edges labeled + or - and the goal is to produce a clustering that agrees with the labels as much as possible: + edges within clusters and - edges across clusters.</p><p>The classical approach towards Correlation Clustering (and other graph cut problems) is to optimize a global objective, e.g., minimizing the total number of disagreements or maximizing the total number of agreements.</p><p>We depart from this and study local objectives: minimizing the maximum number of disagreements for edges incident on a single node, and the analogous max-min agreements objective.</p><p>This naturally gives rise to a family of basic min-max graph cut problems.</p><p>A prototypical representative is Min-Max s-t Cut: find an s-t cut minimizing the largest number of cut edges incident on any node.</p><p>In this talk we will give a short introduction to Correlation Clustering and discuss the following results:</p><ol> <li>an O(\sqrt{n})-approximation for the problem of minimizing the maximum total weight of disagreement edges incident on any node (thus providing the first known approximation for the above family of min-max graph cut problems)</li> <li>a remarkably simple 7-approximation for minimizing local disagreements in complete graphs (improving upon the previous best known approximation of 48)</li> <li>a (1/(2+epsilon))-approximation for maximizing the minimum total weight of agreement edges incident on any node, hence improving upon the (1/(4+epsilon))-approximation that follows from the study of approximate pure Nash equilibria in cut and party affiliation games.</li> </ol> <p>Joint work with Moses Charikar and Neha Gupta.</p>
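The local min-max objective described above can be made concrete with a small sketch. The following is illustrative only (it evaluates the objective for a fixed clustering; it is not the approximation algorithm from the talk), with hypothetical edge data.

```python
# Evaluate the local min-max disagreement objective on a signed graph.
# Edge labels: +1 (endpoints should share a cluster), -1 (should be separated).

from collections import defaultdict

def max_local_disagreements(edges, clustering):
    """Return the largest number of disagreement edges incident on any node."""
    disagreements = defaultdict(int)
    for u, v, sign in edges:
        same = clustering[u] == clustering[v]
        # A + edge across clusters, or a - edge within one, is a disagreement.
        if (sign == +1 and not same) or (sign == -1 and same):
            disagreements[u] += 1
            disagreements[v] += 1
    return max(disagreements.values(), default=0)

# Hypothetical example: one + edge (0,2) is cut, so nodes 0 and 2 each
# carry one disagreement; the objective value is 1.
edges = [(0, 1, +1), (1, 2, -1), (0, 2, +1), (2, 3, +1)]
clustering = {0: "A", 1: "A", 2: "B", 3: "B"}
print(max_local_disagreements(edges, clustering))
```

A global objective would sum disagreements over all edges; the local objective instead penalizes the single worst-off node.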
Friday, February 8, 2019 - 13:05, Location: Skiles 005, ISyE, Georgia Tech, Organizer: He Guo
<p>The popularity of machine learning is growing rapidly in transportation, with applications ranging from traffic engineering to travel demand forecasting and pavement material modeling, to name just a few. Researchers often find that machine learning achieves higher predictive accuracy than traditional methods. However, many machine-learning methods are often viewed as “black-box” models, lacking interpretability for decision making. As a result, increased attention is being devoted to the interpretability of machine-learning results.</p> <p>In this talk, I introduce the application of machine learning to study travel behavior, covering both mode prediction and behavioral interpretation. I first discuss the key differences between machine learning and logit models in modeling travel mode choice, focusing on model development, evaluation, and interpretation. Next, I apply existing machine-learning interpretation tools and also propose two new model-agnostic interpretation tools to examine behavioral heterogeneity. Lastly, I show the potential of using machine learning as an exploratory tool to tune the utility functions of logit models.</p> <p>I illustrate these ideas by examining stated-preference travel survey data for a new mobility-on-demand transit system that integrates fixed-route buses and on-demand shuttles. The results show that the best-performing machine-learning classifier achieves higher predictive accuracy than logit models as well as comparable behavioral outputs. In addition, results obtained from model-agnostic interpretation tools show that certain machine-learning models (e.g., boosting trees) can readily account for individual heterogeneity and generate valuable behavioral insights on different population segments. Moreover, I show that interpretable machine learning can be applied to tune the utility functions of logit models (e.g., specifying nonlinearities) and to enhance their model performance. In turn, these findings can be used to inform the design of new mobility services and transportation policies.</p>
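One standard model-agnostic interpretation technique the talk's theme touches on is permutation importance. The sketch below is illustrative only (the speaker's two proposed tools are not specified here), and the "mode-choice" rule and feature names are hypothetical: a feature matters if shuffling its column degrades predictive accuracy.

```python
# Permutation importance: measure the accuracy drop when a feature's
# column is shuffled, breaking its link to the labels.

import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = np.mean(model(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature-label link
            drops.append(baseline - np.mean(model(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

# Hypothetical mode-choice rule: choose transit (1) iff travel time < 30 min.
rng = np.random.default_rng(1)
model = lambda X: (X[:, 0] < 30).astype(int)
X = np.column_stack([rng.uniform(5, 60, 200),   # travel time (used by model)
                     rng.uniform(0, 1, 200)])   # irrelevant feature
y = model(X)
imp = permutation_importance(model, X, y)
print(imp)   # the travel-time feature should dominate
```

Because the toy model ignores the second feature entirely, its importance comes out exactly zero, while shuffling the travel-time column costs roughly half the accuracy.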
Friday, February 1, 2019 - 13:05, Location: Groseclose 402, CS, Yale University, Organizer: He Guo
<p>(The talk will be at 1-2pm, followed by a discussion session from 2 pm to 2:45 pm.)</p><p>Powerful AI systems, which are driven by machine learning, are increasingly controlling various aspects of modern society: from social interactions (e.g., Facebook, Twitter, Google, YouTube), economics (e.g., Uber, Airbnb, Banking), learning (e.g., Wikipedia, MOOCs), governance (Judgements, Policing, Voting), to autonomous vehicles and weapons. These systems have a tremendous potential to change our lives for the better, but, via the ability to mimic and nudge human behavior, they also have the potential to be discriminatory, reinforce societal prejudices, and polarize opinions. Moreover, recent studies have demonstrated that these systems can be quite brittle and generally lack the required robustness to be deployed in various civil/military situations. This is because considerations such as fairness, robustness, stability, explainability, and accountability have largely been an afterthought in the development of AI systems. In this talk, I will discuss the opportunities that lie ahead in a principled and thoughtful development of AI systems.</p><h4>Bio</h4> <p>Nisheeth Vishnoi is a Professor of Computer Science at Yale University. He received a B.Tech in Computer Science and Engineering from IIT Bombay in 1999 and a Ph.D. in Algorithms, Combinatorics and Optimization from Georgia Tech in 2004. His research spans several areas of theoretical computer science: from approximability of NP-hard problems, to combinatorial, convex and non-convex optimization, to tackling algorithmic questions involving dynamical systems, stochastic processes and polynomials. He is also broadly interested in understanding and addressing some of the key questions that arise in nature and society from the viewpoint of theoretical computer science. Here, his current focus is on natural algorithms, emergence of intelligence, and questions at the interface of AI, ethics, and society.
He was the recipient of the Best Paper Award at FOCS in 2005, the IBM Research Pat Goldberg Memorial Award in 2006, the Indian National Science Academy Young Scientist Award in 2011, and the IIT Bombay Young Alumni Achievers Award in 2016.</p>
Friday, January 25, 2019 - 13:05, Location: Skiles 005, ISyE, Georgia Tech, Organizer: He Guo
We present a new general and simple method for rounding semi-definite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. We will apply our method to several classical problems, including Max-Cut, Max-DiCut, and Max-2-SAT, and derive new algorithms that are competitive with the best known results. In particular, we show that the basic algorithm achieves a 0.861-approximation for Max-Cut and a natural variant of the algorithm achieves a 0.878-approximation, matching the famous Goemans-Williamson algorithm up to the first three decimal digits. This is joint work with Abbas-Zadeh, Nikhil Bansal, Guru Guruganesh, Sasho Nikolov, and Roy Schwartz.
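For context, here is a minimal sketch of the classical hyperplane rounding that the Brownian approach is measured against (Goemans-Williamson); the talk's method replaces the single random hyperplane with a Brownian walk. This assumes the SDP has already been solved, giving one unit vector per vertex; the 4-cycle instance is illustrative.

```python
# Goemans-Williamson-style rounding: cut vertices by the side of a
# uniformly random hyperplane through the origin.

import numpy as np

def hyperplane_round(vectors, rng):
    """Assign each vertex the side of a random hyperplane its vector falls on."""
    g = rng.standard_normal(vectors.shape[1])
    return (vectors @ g >= 0).astype(int)

def cut_value(edges, sides):
    return sum(1 for u, v in edges if sides[u] != sides[v])

# 4-cycle: bipartite, so an antipodal embedding is SDP-optimal and the
# rounded cut attains the optimum (all 4 edges cut) almost surely.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
theta = np.pi * np.array([0.0, 1.0, 0.0, 1.0])
vectors = np.column_stack([np.cos(theta), np.sin(theta)])
rng = np.random.default_rng(0)
print(cut_value(edges, hyperplane_round(vectors, rng)))
```

The classical analysis shows each edge is cut with probability (angle between its endpoints' vectors)/pi, which yields the 0.878 guarantee the abstract refers to.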
Friday, January 11, 2019 - 13:05, Location: Skiles 005, CS, Georgia Tech, Organizer: He Guo
In current convex optimization literature, there are significant gaps between algorithms that produce high accuracy (1+1/poly(n))-approximate solutions vs. algorithms that produce O(1)-approximate solutions for symmetrized special cases. This gap is reflected in the differences between interior point methods vs. (accelerated) gradient descent for regression problems, and between exact vs. approximate undirected max-flow. In this talk, I will discuss generalizations of a fundamental building block in numerical analysis, preconditioned iterative methods, to convex functions that include p-norms. This leads to algorithms that converge to high accuracy solutions by crudely solving a sequence of symmetric residual problems. I will then briefly describe several recent and ongoing projects, including p-norm regression using m^{1/3} linear system solves, p-norm flow in undirected unweighted graphs in almost-linear time, and further improvements to the dependence on p in the runtime.
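The iterative-refinement pattern the talk generalizes can be sketched under simplifying assumptions (symmetric positive definite A, plain gradient descent standing in for the crude residual solver): reach a high-accuracy solution of Ax = b even though each residual problem is solved only crudely.

```python
# Iterative refinement: alternate between computing the exact residual
# and fixing it up with a deliberately low-accuracy inner solver.

import numpy as np

def crude_solve(A, r, steps=5):
    """Low-accuracy solver: a few gradient steps on ||Ay - r||_2^2."""
    y = np.zeros_like(r)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(steps):
        y -= step * A.T @ (A @ y - r)
    return y

def iterative_refinement(A, b, outer_iters=60):
    x = np.zeros(A.shape[1])
    for _ in range(outer_iters):
        x += crude_solve(A, b - A @ x)   # correct using the current residual
    return x

rng = np.random.default_rng(0)
C = rng.standard_normal((30, 30))
A = C @ C.T / 30 + np.eye(30)            # symmetric, well-conditioned
b = rng.standard_normal(30)
x = iterative_refinement(A, b)
print(np.linalg.norm(A @ x - b))         # residual shrinks geometrically
```

Each outer iteration multiplies every residual component by a fixed factor below 1, so accuracy improves geometrically even though the inner solver alone is far from exact; the talk's contribution is extending this template beyond the symmetric quadratic case to p-norm objectives.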
Friday, November 30, 2018 - 13:05, Location: Skiles 005, Math, Georgia Tech, Organizer: He Guo
In this talk we introduce two different random graph models that produce sparse graphs with overlapping community structure and discuss community detection in each context. The Random Overlapping Community (ROC) model produces a sparse graph by constructing many Erdos-Renyi random graphs (communities) on small randomly selected subsets of vertices. By varying the size and density of these communities, ROC graphs can be tuned to exhibit a wide range of normalized closed-walk count vectors, including those of hypercubes. This is joint work with Santosh Vempala. In the second half of the talk, we introduce the Community Configuration Model (CCM), a variant of the configuration model in which half-edges are assigned colors and paired according to a matching rule on the colors. The model is a generalization of models in the statistical physics literature and is a natural finite analog for classes of graphexes. We describe a hypothesis testing algorithm that determines whether a graph came from a community configuration model or a traditional configuration model. This is joint work with Christian Borgs, Jennifer Chayes, Souvik Dhara, and Subhabrata Sen.
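Sampling from the ROC model as described above is straightforward to sketch: the graph is a union of Erdos-Renyi graphs placed on small random vertex subsets. The parameter values below are illustrative, not from the talk.

```python
# Sample a Random Overlapping Community (ROC) graph: union of
# Erdos-Renyi graphs on small random vertex subsets.

import itertools
import random

def sample_roc(n, num_communities, community_size, p, seed=0):
    rng = random.Random(seed)
    edges = set()
    for _ in range(num_communities):
        members = rng.sample(range(n), community_size)
        for u, v in itertools.combinations(members, 2):
            if rng.random() < p:          # Erdos-Renyi inside the community
                edges.add((min(u, v), max(u, v)))
    return edges

G = sample_roc(n=100, num_communities=30, community_size=8, p=0.5)
print(len(G))
```

Because communities are chosen independently, they overlap; tuning `community_size` and `p` controls the density and the closed-walk statistics the abstract mentions.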
Friday, November 23, 2018 - 13:05, Location: None, Organizer: He Guo
Friday, November 16, 2018 - 13:05, Location: Skiles 005, Math, University of Michigan, Organizer: He Guo
Consider a linear combination of independent identically distributed random variables $X_1, \dots, X_n$ with fixed weights $a_1, \dots, a_n$. If the random variables are continuous, the sum is almost surely non-zero. However, for discrete random variables an exact cancellation may occur with positive probability. This probability depends on the arithmetic nature of the sequence $a_1, \dots, a_n$. We will discuss how to measure the relevant arithmetic properties and how to evaluate the probability of exact and approximate cancellation.
Friday, November 9, 2018 - 13:05, Location: Skiles 005, School of Computer Science, Georgia Tech, Organizer: He Guo
Often the popular press talks about the power of quantum computing coming from its ability to perform several computations simultaneously. We’ve already had a similar capability from probabilistic machines. This talk will explore the relationship between quantum and randomized computation, how they are similar and how they differ, and why quantum can work exponentially faster on some, but far from all, computational problems. We’ll talk about some open problems in quantum complexity that can help shed light on the similarities and differences between randomness and “quantumness”. This talk will not assume any previous knowledge of quantum information or quantum computing.
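A tiny numerical example captures the difference the talk highlights: a fair coin flipped twice is still a fair coin, but a quantum "coin" (the Hadamard gate) applied twice returns to the starting state deterministically, because amplitudes can cancel while probabilities cannot.

```python
# Quantum vs. random: one qubit under two Hadamards vs. two fair coin flips.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1.0, 0.0])                    # start in |0>

after_one = H @ ket0          # amplitudes (1/sqrt2, 1/sqrt2): looks random
after_two = H @ (H @ ket0)    # interference: back to |0> with certainty

print(np.abs(after_one) ** 2)  # measurement probabilities [0.5, 0.5]
print(np.abs(after_two) ** 2)  # measurement probabilities [1.0, 0.0]

# Classical analogue: applying a fair-coin stochastic matrix twice
# leaves the distribution uniform -- no way to "un-randomize".
M = np.array([[0.5, 0.5], [0.5, 0.5]])
print(M @ (M @ np.array([1.0, 0.0])))          # stays [0.5, 0.5]
```

The minus sign in the Hadamard matrix is what probabilities lack: the two paths to the |1> outcome cancel exactly, which is the interference effect underlying quantum speedups.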
Friday, November 2, 2018 - 12:20, Location: Skiles 005, ISyE/ECE, Georgia Tech, Organizer: He Guo
Abstract: In this talk, I will present a popular distributed method, namely, the distributed consensus-based gradient (DCG) method, for solving optimal learning problems over a network of agents. Such problems arise in many applications, such as finding optimal parameters over a large dataset distributed among a network of processors or seeking an optimal policy for coverage control problems in robotic networks. The focus is to present our recent results, where we study the performance of DCG when the agents are only allowed to exchange their quantized values due to their finite communication bandwidth. In particular, we develop a novel quantization method, which we refer to as adaptive quantization. The main idea of our approach is to quantize the nodes' estimates based on the progress of the algorithm, which helps to eliminate the quantization errors. Under adaptive quantization, we then derive bounds on the convergence rates of the proposed method as a function of the bandwidths and the underlying network topology, for both convex and strongly convex objective functions. Our results suggest that under adaptive quantization, the rates of convergence of DCG with and without quantization are the same, except for a factor that captures the number of quantization bits. To the best of the authors' knowledge, these results improve upon any existing results for DCG under quantization. This is based on joint work with Siva Theja Maguluri and Justin Romberg.
Bio: Thinh T. Doan is a TRIAD postdoctoral fellow at the Georgia Institute of Technology, joint between the School of Industrial and Systems Engineering and the School of Electrical and Computer Engineering (ECE). He was born in Vietnam, where he received his Bachelor's degree in Automatic Control from Hanoi University of Science and Technology in 2008. He obtained his Master's and Ph.D. degrees, both in ECE, from the University of Oklahoma in 2013 and the University of Illinois at Urbana-Champaign in 2018, respectively. His research interests lie at the intersection of control theory, optimization, distributed algorithms, and applied probability, with the main applications in machine learning, reinforcement learning, power networks, and multi-agent systems.
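The adaptive-quantization idea can be sketched generically (this is not the speaker's exact scheme; the quantizer, shrink factor, and trajectory below are illustrative): quantize each exchanged value with a fixed number of bits inside a range that shrinks as the iterates converge, so the quantization error vanishes over time.

```python
# Adaptive quantization sketch: a fixed-bit uniform quantizer on a range
# that is re-centered and shrunk as the algorithm makes progress.

def quantize(x, lo, hi, bits):
    """Uniform quantizer with 2^bits - 1 steps on [lo, hi]; clamps outside."""
    levels = 2 ** bits - 1
    x = min(max(x, lo), hi)
    step = (hi - lo) / levels
    return lo + round((x - lo) / step) * step

# Hypothetical converging iterates exchanged between two agents.
trajectory = [3.0, 1.5, 0.9, 0.95, 1.0, 1.0]
center, width, bits = 0.0, 8.0, 4
for x in trajectory:
    q = quantize(x, center - width / 2, center + width / 2, bits)
    print(f"{x} -> {q}  (error {abs(x - q):.4f})")
    center, width = q, width * 0.7   # adapt: follow the estimate, shrink range
```

With the number of bits fixed, the worst-case error is half a quantization step, and the step width shrinks geometrically with the range; this is the mechanism by which quantized and unquantized rates can match up to a bit-dependent factor.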