## Seminars and Colloquia by Series

Friday, November 11, 2016 - 15:00 , Location: Skiles 005 , Giorgis Petridis , University of Georgia , Organizer: Ernie Croot
An expander polynomial in F_p, the finite field with p elements, is a polynomial f(x_1,...,x_n) such that there exists an absolute c>0 with the property that for every set A in F_p (of cardinality not particularly close to p) the cardinality of f(A,...,A) = {f(a_1,...,a_n) : a in A} is at least |A|^{1+c}. Given an expander polynomial, a very interesting question is to determine a threshold T so that |A|> T implies that |f(A,...,A)| contains, say, half the elements of F_p and so is about as large as it can be. For a large number of "natural appearing" expander polynomials like f(x,y,z) = xy+z and f(x,y,z) = x(y+z), the best known threshold is T= p^{2/3}. What is interesting is that there are several proofs of this threshold of very different “depth” and complexity. We will discuss why for the expander polynomial f(x,y,z,w) = (x-y)(z-w), where f(A,A,A,A) consists of the product of differences of elements of A, one may take T = p^{5/8}. We will also discuss the more complicated setting where A is a subset of a not necessarily prime order finite field.
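A small numerical illustration of the expansion phenomenon for f(x, y, z) = xy + z (the prime and the set A below are chosen purely for illustration, not taken from the talk):

```python
# For f(x, y, z) = x*y + z over F_p, the image f(A, A, A) is typically
# much larger than A itself, which is what "expander polynomial" captures.
p = 101  # a small prime, illustrative only

A = {1, 2, 4, 8, 16, 32}  # a geometric progression mod p

image = {(x * y + z) % p for x in A for y in A for z in A}

print(len(A), len(image))  # the image is far larger than A
```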
Friday, November 4, 2016 - 15:05 , Location: Skiles 005 , Kent State University , Organizer: Galyna Livshyts
In this talk we will discuss an answer to a question of Alexander Koldobsky and present a discrete version of his slicing inequality. We let $\# K$ be the number of integer lattice points contained in a set $K$. We show that for each $d\in \mathbb{N}$ there exists a constant $C(d)$ depending on $d$ only, such that for any origin-symmetric convex body $K \subset \mathbb{R}^d$ containing $d$ linearly independent lattice points $$\# K \leq C(d)\max_{\xi \in S^{d-1}}(\# (K\cap \xi^\perp))\, \text{vol}_d(K)^{\frac{1}{d}},$$ where $\xi^\perp$ is the hyperplane orthogonal to a unit vector $\xi$. We show that $C(d)$ can be chosen asymptotically of order $O(1)^d$ for hyperplane slices. Additionally, we will discuss some special cases and generalizations for this inequality. This is joint work with Martin Henk and Artem Zvavitch.
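A toy sanity check of the inequality's shape, for an axis-aligned box in $\mathbb{Z}^2$ and coordinate directions only (illustrative values, not from the paper):

```python
# Compare #K with (max coordinate-hyperplane slice) * vol(K)^{1/d} for d = 2.
a, b = 5, 3
points = [(x, y) for x in range(-a, a + 1) for y in range(-b, b + 1)]
num_K = len(points)                              # #K = (2a+1)(2b+1) = 77
slice_x = sum(1 for (x, y) in points if x == 0)  # slice orthogonal to e_1
slice_y = sum(1 for (x, y) in points if y == 0)  # slice orthogonal to e_2
vol = (2 * a) * (2 * b)                          # volume of [-a,a] x [-b,b]
ratio = num_K / (max(slice_x, slice_y) * vol ** 0.5)
print(num_K, max(slice_x, slice_y), round(ratio, 3))  # ratio stays bounded
```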
Friday, October 28, 2016 - 15:05 , Location: Skiles 005 , Sharath Raghvendra , Virginia Tech , Organizer: Esther Ezra
Motivated by real-time logistics, I will present a deterministic online algorithm for the Online Minimum Metric Bipartite Matching Problem. In this problem, we are given a set S of server locations and a set R of request locations. The requests arrive one at a time, and when a request arrives, we must immediately and irrevocably match it to a "free" server. The cost of matching a server to a request is given by the distance between the two locations (which we assume satisfies the triangle inequality). The objective of this problem is to come up with a matching of servers to requests which is competitive with respect to the minimum-cost matching of S and R. In this talk, I will show that this new deterministic algorithm performs optimally across different adversaries and metric spaces. In particular, I will show that this algorithm simultaneously achieves optimal performance in two well-known online models -- the adversarial and the random arrival models. Furthermore, the same algorithm also has an exponentially improved performance for the line metric, resolving a long-standing open question.
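To make the online model concrete, here is a naive greedy strategy on the line metric, matching each request to the nearest currently free server (ties broken by index). This is NOT the algorithm from the talk -- greedy is known to perform poorly against adversarial inputs -- and the locations below are hypothetical:

```python
servers = [0.0, 2.0, 5.0, 9.0]   # hypothetical server locations on a line
requests = [4.0, 1.0, 8.0, 0.5]  # requests arrive one at a time

free = set(range(len(servers)))
cost = 0.0
matching = []
for r in requests:
    # irrevocably match r to the nearest free server (ties -> lowest index)
    s = min(free, key=lambda i: (abs(servers[i] - r), i))
    free.remove(s)
    cost += abs(servers[s] - r)
    matching.append((r, servers[s]))

print(matching, cost)
```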
Friday, October 21, 2016 - 15:05 , Location: Skiles 005 , Esther Ezra , Georgia Tech , Organizer: Esther Ezra

Joint work with Micha Sharir (Tel-Aviv University).

Following a recent improvement of Cardinal et al. on the complexity of a linear decision tree for k-SUM, resulting in O(n^3 \log^3{n}) linear queries, we present a further improvement to O(n^2 \log^2{n}) such queries. Our approach exploits a point-location mechanism in arrangements of hyperplanes in high dimensions, and, in fact, brings a new view to such mechanisms. In this talk I will first present a background on the k-SUM problem, and then discuss bottom-vertex triangulation and vertical decomposition of arrangements of hyperplanes and how they serve our analysis.
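For concreteness, k-SUM asks whether some k input numbers sum to zero. A brute-force check (which of course says nothing about the linear-query complexity that is the subject of the talk) looks like:

```python
from itertools import combinations

def k_sum_exists(nums, k):
    # exhaustively try all k-subsets; exponential, purely illustrative
    return any(sum(c) == 0 for c in combinations(nums, k))

print(k_sum_exists([-7, 1, 2, 5, 9], 3))  # True: 2 + 5 + (-7) = 0
print(k_sum_exists([1, 2, 3], 3))         # False
```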
Friday, October 7, 2016 - 15:05 , Location: Skiles 005 , John Wilmes , Georgia Tech , Organizer: Esther Ezra
A graph is "strongly regular" (SRG) if it is $k$-regular, and every pair of adjacent (resp. nonadjacent) vertices has exactly $\lambda$ (resp. $\mu$) common neighbors. Paradoxically, the high degree of regularity in SRGs inhibits their symmetry. Although the line-graphs of the complete graph and complete bipartite graph give examples of SRGs with $\exp(\Omega(\sqrt{n}))$ automorphisms, where $n$ is the number of vertices, all other SRGs have far fewer---the best bound is currently $\exp(\tilde{O}(n^{9/37}))$ (Chen--Sun--Teng, 2013), and Babai conjectures that in fact all primitive SRGs besides the two exceptional line-graph families have only quasipolynomially-many automorphisms. In joint work with Babai, Chen, Sun, and Teng, we make progress toward this conjecture by giving a quasipolynomial bound on the number of automorphisms for valencies $k > n^{5/6}$. Our proof relies on bounds on the vertex expansion of SRGs to show that a polylogarithmic number of randomly chosen vertices form a base for the automorphism group with high probability.
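The defining conditions can be verified directly on a small example. The sketch below (illustrative, not from the talk) checks that the Petersen graph, modeled as the Kneser graph K(5,2), is strongly regular with parameters $(n, k, \lambda, \mu) = (10, 3, 0, 1)$:

```python
from itertools import combinations

# Petersen graph: vertices are 2-subsets of {0..4}, adjacent iff disjoint.
V = list(combinations(range(5), 2))
adj = {v: {w for w in V if not set(v) & set(w)} for v in V}

degrees = {len(adj[v]) for v in V}
common = lambda u, v: len(adj[u] & adj[v])
lam = {common(u, v) for u in V for v in adj[u]}                  # adjacent pairs
mu = {common(u, v) for u in V for v in V if v != u and v not in adj[u]}

print(degrees, lam, mu)  # {3} {0} {1}
```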
Monday, September 12, 2016 - 16:05 , Location: Skiles 169 , , The Ohio State University , Organizer: Esther Ezra
The computational complexity of many geometric problems depends on the dimension of the input space. We study algorithmic problems on spaces of low fractal dimension. There are several well-studied notions of fractal dimension for sets and measures in Euclidean space. We consider a definition of fractal dimension for finite metric spaces, which agrees with standard notions used to empirically estimate the fractal dimension of various sets. When the fractal dimension of the input is lower than the ambient dimension, we obtain faster algorithms for a plethora of classical problems, including TSP, Independent Set, R-Cover, and R-Packing. Interestingly, the dependence of the performance of these algorithms on the fractal dimension closely resembles the currently best-known dependence on the standard Euclidean dimension. For example, our algorithm for TSP has running time 2^O(n^(1-1/delta) * log(n)) on sets of fractal dimension delta; in comparison, the best-known algorithm for sets in d-dimensional Euclidean space has running time 2^O(n^(1-1/d)).
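As a concrete illustration of the kind of empirical box-counting estimate that the talk's definition is said to agree with (this is an illustration, not the paper's definition), one can estimate the dimension of a finite point set from occupied grid cells at two scales:

```python
import math

def box_count(points, eps):
    # number of eps-grid cells occupied by the point set
    return len({tuple(math.floor(c / eps) for c in p) for p in points})

# points on a line segment embedded in the plane: fractal dimension ~ 1
line = [(i / 1000.0, 0.0) for i in range(1000)]
n1, n2 = box_count(line, 0.1), box_count(line, 0.01)
dim_est = math.log(n2 / n1) / math.log(10)  # slope of log N(eps) vs log(1/eps)
print(round(dim_est, 2))  # 1.0
```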
Friday, September 9, 2016 - 15:05 , Location: Skiles 005 , Emma Cohen , Georgia Tech , Organizer: Esther Ezra

Joint work with Will Perkins and Prasad Tetali.

We consider the extremal counting problem which asks what d-regular, r-uniform hypergraph on n vertices has the largest number of (strong) independent sets. Our goal is to generalize known results for number of matchings and independent sets in regular graphs to give a general bound in the hypergraph case. In particular, we propose an adaptation to the hypergraph setting of the occupancy fraction method pioneered by Davies et al. (2016) for use in the case of graph matchings. Analysis of the resulting LP leads to a new bound for the case r=3 and suggests a method for tackling the general case.
Friday, August 26, 2016 - 15:05 , Location: Skiles 005 , Lutz Warnke , Georgia Tech , Organizer: Esther Ezra
One of the most interesting features of Erdős–Rényi random graphs is the 'percolation phase transition', where the global structure intuitively changes from only small components to a single giant component plus small ones. In this talk we discuss the percolation phase transition in the random d-process, which corresponds to a natural algorithmic model for generating random regular graphs (starting with an empty graph on n vertices, it evolves by sequentially adding new random edges so that the maximum degree remains at most d). Our results on the phase transition solve a problem of Wormald from 1997, and verify a conjecture of Balinska and Quintas from 1990. Based on joint work with Nick Wormald (Monash University).
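The d-process described above is straightforward to simulate directly (a minimal sketch with illustrative parameters):

```python
import random

def d_process(n, d, rng):
    # repeatedly add a uniformly random edge between two non-adjacent
    # vertices that both still have degree < d, until none remains
    deg = [0] * n
    edges = set()
    while True:
        avail = [v for v in range(n) if deg[v] < d]
        candidates = [(u, v) for i, u in enumerate(avail) for v in avail[i + 1:]
                      if (u, v) not in edges]
        if not candidates:
            return edges
        u, v = rng.choice(candidates)
        edges.add((u, v))
        deg[u] += 1
        deg[v] += 1

n, d = 20, 2
E = d_process(n, d, random.Random(0))
max_deg = max(sum(1 for (u, v) in E if w in (u, v)) for w in range(n))
print(len(E), max_deg)  # for d = 2 the process ends with close to n edges
```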
Friday, April 22, 2016 - 15:00 , Location: Skiles 005 , University of Washington , Organizer: Esther Ezra
A classical theorem of Spencer shows that any set system with n sets and n elements admits a coloring of discrepancy O(n^{1/2}). Recent exciting work of Bansal, Lovett and Meka shows that such colorings can be found in polynomial time. In fact, the Lovett-Meka algorithm finds a half-integral point in any "large enough" polytope. However, their algorithm crucially relies on the facet structure and does not apply to general convex sets. We show that for any symmetric convex set K with measure at least exp(-n/500), the following algorithm finds a point y in K \cap [-1,1]^n with Omega(n) coordinates in {-1,+1}: (1) take a random Gaussian vector x; (2) compute the point y in K \cap [-1,1]^n that is closest to x; (3) return y. This provides another truly constructive proof of Spencer's theorem and the first constructive proof of a theorem of Gluskin and Giannopoulos.
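The three-step procedure above can be sketched in the simplest possible case K = R^n, where K \cap [-1,1]^n is just the cube and the closest-point step reduces to coordinatewise clipping (a toy sketch only; the actual result concerns projection onto a general symmetric convex set):

```python
import random

rng = random.Random(42)
n = 1000
x = [rng.gauss(0.0, 1.0) for _ in range(n)]    # (1) random Gaussian vector
y = [max(-1.0, min(1.0, xi)) for xi in x]      # (2) closest point in the cube
frozen = sum(1 for yi in y if abs(yi) == 1.0)  # coordinates landing in {-1,+1}
frac = frozen / n
print(frac)  # roughly P(|N(0,1)| >= 1), i.e. about 0.32 of the coordinates
```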
Tuesday, April 19, 2016 - 14:05 , Location: Skiles 006 , Annie Raymond , University of Washington, Seattle, WA , Organizer: Prasad Tetali
The Frankl union-closed sets conjecture states that there exists an element present in at least half of the sets forming a union-closed family. We reformulate the conjecture as an optimization problem and present an integer program to model it. The computations done with this program lead to a new conjecture: we claim that the maximum number of sets in a non-empty union-closed family in which each element is present at most a times is independent of the number n of elements spanned by the sets if n is greater than or equal to log_2(a)+1. We prove that this is true when n is greater than or equal to a. We also discuss the impact that this new conjecture would have on the Frankl conjecture if it turns out to be true. This is joint work with Jonad Pulaj and Dirk Theis.
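The Frankl property is easy to check by brute force on a small family (the family below is a hypothetical example, not one from the paper):

```python
from itertools import combinations

family = [frozenset(s) for s in [(), (1,), (1, 2), (1, 3), (1, 2, 3)]]

# verify the family is union-closed: every pairwise union is in the family
union_closed = all(a | b in family for a, b in combinations(family, 2))

elements = set().union(*family)
best = max(elements, key=lambda e: sum(e in s for s in family))
count = sum(best in s for s in family)
print(union_closed, best, count, len(family))  # element 1 is in 4 of 5 sets
```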