Seminars and Colloquia by Series

Transversal $C_k$-factors in subgraphs of the balanced blowup of $C_k$

Series
Graph Theory Seminar
Time
Tuesday, November 17, 2020 - 15:45 for 1 hour (actually 50 minutes)
Location
Speaker
Theo Molla, University of South Florida

Call a blowup of a graph $F$ an $n$-blowup if each part has size $n$. For a subgraph $G$ of a blowup of $F$, we define the minimum partial degree of $G$ to be the smallest minimum degree over the bipartite subgraphs of $G$ that correspond to edges of $F$. Johansson proved that if the minimum partial degree of a spanning subgraph of the $n$-blowup of a triangle is at least $2n/3 + n^{1/2}$, then it contains a collection of $n$ vertex-disjoint triangles. Fischer's Conjecture, which was proved by Keevash and Mycroft in 2015, generalizes this result to complete graphs larger than the triangle. Another generalization, conjectured independently by Fischer and Häggkvist, is the following: if $G$ is a spanning subgraph of the $n$-blowup of $C_k$ with minimum partial degree at least $(1 + 1/k)n/2 + 1$, then $G$ contains $n$ vertex-disjoint copies of $C_k$ that each intersect each of the $k$ parts. In this talk, we will show that this conjecture holds asymptotically. We will also discuss related conjectures and results.

This is joint work with Beka Ergemlidze.

Pointwise ergodic theorems for bilinear polynomial averages

Series
Analysis Seminar
Time
Tuesday, November 17, 2020 - 14:00 for 1 hour (actually 50 minutes)
Location
https://us02web.zoom.us/j/71579248210?pwd=d2VPck1CbjltZStURWRWUUgwTFVLZz09
Speaker
Mariusz Mirek, Rutgers University

We shall discuss the proof of pointwise almost everywhere convergence for the non-conventional (in the sense of Furstenberg) bilinear polynomial ergodic averages. This is my recent work with Ben Krause and Terry Tao.

PDE Models for Collective Behavior

Series
Time
Monday, November 16, 2020 - 15:30 for 1 hour (actually 50 minutes)
Location
Bluejeans meeting: https://gatech.bluejeans.com/759112674
Speaker
Dr. Yao Yao, Georgia Institute of Technology

Self-organization is a common feature in the collective behavior of many animal species, such as flocking birds, herding mammals, and swarming bacteria. As the number of individuals gets large, instead of tracking the movement of each individual, it is more efficient to model the evolution of the whole population density using partial differential equations (PDEs). In this talk, I will introduce some PDE models for collective dynamics and discuss the challenges in both the modeling and the mathematical analysis.

A Self-Limiting Hawkes Process

Series
SIAM Student Seminar
Time
Monday, November 16, 2020 - 12:30 for 1 hour (actually 50 minutes)
Location
ONLINE at https://bluejeans.com/703668715
Speaker
John Olinde, GT Math

Many real-life processes that we would like to model have a self-exciting property: the occurrence of one event causes a temporary spike in the probability of other events occurring nearby in space and time. Examples of processes with this property are earthquakes, crime in a neighborhood, and emails within a company. In 1971, Alan Hawkes first used what is now known as the Hawkes process to model such processes. Since then, much work has been done on estimating the parameters of a Hawkes process from a data set and on creating variants of the process for different applications.
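As a generic illustration (not drawn from the talk), the self-exciting intensity described above can be simulated with Ogata's thinning algorithm. The exponentially decaying kernel and all parameter values below are assumptions chosen for the sketch, not the speaker's model.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Simulate a Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    via Ogata's thinning algorithm."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < t_max:
        # With an exponential kernel the intensity only decays between
        # events, so its value at the current time bounds all later values.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, t_max=100.0)
print(len(events), "events simulated")
```

The process is subcritical (each event spawns on average alpha/beta < 1 offspring), so the event count stays finite; fitting mu, alpha, beta to data is the estimation problem the abstract mentions.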

In this talk, I will propose a new variant of the Hawkes process that takes into account the effect of police activity on the underlying crime rate, along with an algorithm for estimating its parameters from a crime data set.

Approximation Algorithms for Mixed Integer Non-Linear Optimization Problems

Series
ACO Student Seminar
Time
Friday, November 13, 2020 - 13:00 for 1 hour (actually 50 minutes)
Location
ONLINE
Speaker
Guanyi Wang, ISyE, Georgia Tech

For computationally intensive mixed integer non-linear optimization problems, a major challenge is to verify or guarantee the quality of a feasible solution, under mild assumptions, in a tractable fashion. In this talk, we focus on tackling this challenge by constructing tight relaxations and designing approximation algorithms for two different mixed integer non-linear optimization problems.

In the first part, we focus on the (row) sparse principal component analysis (rsPCA) problem: finding the top-r principal components of a covariance matrix such that all of these principal components are linear combinations of a subset of k variables. rsPCA is a widely used dimensionality-reduction tool with an additional sparsity constraint that enhances interpretability. We propose (a) a convex integer programming relaxation of rsPCA that gives upper (dual) bounds for rsPCA, and (b) a new local search algorithm for finding primal feasible solutions. We also show that, in the worst case, the dual bounds provided by the convex IP are within an affine function of the global optimal value. We demonstrate our techniques on large-scale covariance matrices.
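For intuition only (this is not the speakers' relaxation or local search), the r = 1 case of the sparsity constraint can be solved exactly on tiny instances by enumerating all k-subsets of variables and taking the top eigenvalue of each principal submatrix. The toy covariance matrix and function names below are hypothetical.

```python
import itertools
import math

def top_eigenvalue(mat, iters=200):
    """Largest eigenvalue of a small symmetric PSD matrix via power iteration."""
    n = len(mat)
    v = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in w]
    return lam

def sparse_pc_brute_force(sigma, k):
    """Exact r = 1 sparse PCA: maximize the top eigenvalue of sigma[S, S]
    over all index subsets S of size k (exponential in k, for intuition only)."""
    n = len(sigma)
    best_val, best_set = -1.0, None
    for S in itertools.combinations(range(n), k):
        sub = [[sigma[i][j] for j in S] for i in S]
        val = top_eigenvalue(sub)
        if val > best_val:
            best_val, best_set = val, S
    return best_val, best_set

sigma = [[4.0, 1.0, 0.0],
         [1.0, 3.0, 0.0],
         [0.0, 0.0, 2.0]]
val, S = sparse_pc_brute_force(sigma, k=2)
print("support", S, "value", round(val, 3))
```

The combinatorial blow-up of this enumeration is exactly why tractable convex relaxations (for dual bounds) and local search (for primal solutions) are needed at scale.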

In the second part, we focus on improving the execution speed of compute-intensive numerical code. Such code, especially of the variety encountered in deep neural network inference and training, is often written as nested for-loops, and one of the main bottlenecks for their execution speed is memory latency. Iteration space tiling is a common memory-management technique used to hide this latency. We study the problem of automatically optimizing the implementation of these nested loops by formulating the iteration space tiling problem as an integer geometric programming (IGP) problem. We show how to design an efficient approximation algorithm for this problem and how to use the so-called "non-uniform tiling" technique to improve the execution speed.
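As a generic illustration of what iteration-space tiling does to nested loops (a textbook blocked matrix multiply, not the speaker's IGP formulation), compare the plain triple loop with its tiled version; the tile size below is an arbitrary choice.

```python
def matmul_naive(A, B):
    """Plain triple-loop matrix multiply: streams whole rows/columns,
    so large matrices thrash the cache."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matmul_tiled(A, B, tile=32):
    """Tiled (blocked) matrix multiply: the i/k/j iteration space is cut into
    tile-sized blocks so each block of A, B, and C is reused while it is
    still cache-resident, hiding memory latency behind reuse."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C
```

Choosing the tile sizes (possibly different per loop level, i.e. non-uniform tiling) to best fit the memory hierarchy is the optimization problem the talk casts as an IGP.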

The first part of the talk is joint work with Santanu S. Dey, Rahul Mazumder, and Marco Molinaro; the second part is joint work with Ofer Dekel.

The lattice metric space and its applications

Series
Research Horizons Seminar
Time
Friday, November 13, 2020 - 11:30 for 1 hour (actually 50 minutes)
Location
ONLINE
Speaker
Yuchen He, Georgia Tech

Lattice patterns are commonly observed in materials science, where microscopic structural nuances induce distinct macroscopic physical or chemical properties. Given two lattices of the same dimension, how do we measure their differences in a visually consistent way? Mathematically, any n-D lattice is determined by a set of n independent vectors. Since such a basis representation is non-unique, a direct comparison of basis representations in Euclidean space is highly ambiguous. In this talk, I will focus on 2-D lattices and introduce the lattice metric space proposed in my earlier work. This geometric space is constructed mainly by integrating the theory of the modular group with the Poincaré metric. In the lattice metric space, each point represents a unique lattice pattern, and the visual difference between two patterns is measured by the shortest path connecting them. Some applications of the lattice metric space will be presented. If time allows, I will briefly discuss potential extensions to 3-D lattices.

Chain conditions in power series and polynomial rings

Series
Student Algebraic Geometry Seminar
Time
Friday, November 13, 2020 - 09:00 for 1 hour (actually 50 minutes)
Location
Microsoft Teams Meeting
Speaker
Hamed Mousavi, Georgia Tech

Following the Hilbert Basis Theorem and its applications, there has been a wide variety of studies of chain conditions on polynomial and power series rings. One such condition is the Archimedean condition, which requires $\bigcap_n Rt^n = 0$ for every nonunit element $t$ of the ring $R$. In this talk, we start with the ascending chain condition on principal ideals (ACCP) over the larger class of skew generalized power series rings. We then explain the relation between ACCP rings and Archimedean rings and give a partial answer to the question of when these properties can be lifted from the ring $R$ to the ring $R[[x; \alpha]]$. In particular, we show that if $R$ is an Archimedean reduced ring satisfying the ACC on annihilators, then $R[[x]]$ is also an Archimedean reduced ring.

When Do Neural Networks Outperform Kernel Methods?

Series
Stochastics Seminar
Time
Thursday, November 12, 2020 - 15:30 for 1 hour (actually 50 minutes)
Location
https://bluejeans.com/445382510
Speaker
Song Mei, UC Berkeley

For a certain scaling of the initialization of stochastic gradient descent (SGD), wide neural networks (NN) have been shown to be well approximated by reproducing kernel Hilbert space (RKHS) methods. Recent empirical work showed that, for some classification tasks, RKHS methods can replace NNs without a large loss in performance. On the other hand, two-layer NNs are known to encode richer smoothness classes than RKHS methods, and we know of special examples for which SGD-trained NNs provably outperform RKHS methods. This is true also in the wide-network limit, for a different scaling of the initialization.

How can we reconcile the above claims? For which tasks do NNs outperform RKHS methods? If feature vectors are nearly isotropic, RKHS methods suffer from the curse of dimensionality, while NNs can overcome it by learning the best low-dimensional representation. Here we show that this curse of dimensionality becomes milder if the feature vectors display the same low-dimensional structure as the target function, and we precisely characterize this tradeoff. Building on these results, we present a model that captures in a unified framework both behaviors observed in earlier work. We hypothesize that such a latent low-dimensional structure is present in image classification. We test this hypothesis numerically by showing that specific perturbations of the training distribution degrade the performance of RKHS methods much more significantly than that of NNs.

Insights on gradient-based algorithms in high-dimensional non-convex learning

Series
School of Mathematics Colloquium
Time
Thursday, November 12, 2020 - 11:00 for 1 hour (actually 50 minutes)
Location
https://us02web.zoom.us/j/89107379948
Speaker
Lenka Zdeborová, EPFL

Gradient descent algorithms and their noisy variants, such as the Langevin dynamics or multi-pass SGD, are at the center of attention in machine learning. Yet their behaviour remains perplexing, in particular in the high-dimensional non-convex setting. In this talk, I will present several high-dimensional and non-convex statistical learning problems in which the performance of gradient-based algorithms can be analysed down to a constant. The common point of these settings is that the data come from a probabilistic generative model leading to problems for which, in the high-dimensional limit, statistical physics provides exact closed solutions for the performance of the gradient-based algorithms. The covered settings include the spiked mixed matrix-tensor model and the phase retrieval.