Seminars and Colloquia by Series

Learning to Solve Hard Minimal Problems

Series
Colloquia
Time
Thursday, October 13, 2022 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Anton Leykin, Georgia Tech

The main result in this talk concerns a new fast algorithm to solve a minimal problem with many spurious solutions that arises as a relaxation of a geometric optimization problem. The algorithm recovers relative camera pose from points and lines in multiple views. Solvers like this are the backbone of structure-from-motion techniques that estimate 3D structures from 2D image sequences.   

Our methodology is general and applicable in areas other than computer vision. The ingredients come from algebra, geometry, numerical methods, and applied statistics. Our fast implementation relies on a homotopy continuation optimized for our setting and a machine-learned neural network.
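The solvers described in the talk track solution paths of large structured polynomial systems; as a hedged, minimal illustration of the homotopy continuation idea only (this toy univariate tracker is invented for this sketch and is not the authors' solver):

```python
import numpy as np

def track_roots(f, steps=100, newton_iters=5):
    """Toy homotopy continuation for one univariate polynomial f (highest
    degree first): deform the start system g(x) = x^n - 1, whose roots are
    known, into f along H(x,t) = (1-t) g(x) + t f(x), correcting the
    tracked roots with Newton steps at each value of t."""
    f = np.asarray(f, dtype=complex)
    n = len(f) - 1                          # degree of the target
    g = np.zeros(n + 1, dtype=complex)
    g[0], g[-1] = 1.0, -1.0                 # g(x) = x^n - 1
    roots = np.exp(2j * np.pi * np.arange(n) / n)   # n-th roots of unity
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        h = (1 - t) * g + t * f
        dh = np.polyder(h)
        for _ in range(newton_iters):
            roots = roots - np.polyval(h, roots) / np.polyval(dh, roots)
    return roots

# Example: track the roots of x^2 - 1 to the roots of x^2 - 2.
roots = track_roots([1, 0, -2])             # ≈ ±sqrt(2)
```

Real solvers for minimal problems track many paths of multivariate systems and use adaptive step control; the machine-learned component mentioned in the abstract picks which start solution to track.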

(This covers joint work with Tim Duff, Ricardo Fabbri, Petr Hruby, Kathlén Kohn, Tomas Pajdla, and others. The talk is suitable for both professors and students.)

Bounds on some classical exponential Riesz basis

Series
Analysis Seminar
Time
Wednesday, October 12, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Thibaud Alemany, Georgia Tech

We estimate the Riesz basis (RB) bounds obtained in Hruschev, Nikolskii and Pavlov's classical characterization of exponential RB. As an application, we improve previously known estimates of the RB bounds in some classical cases, such as RB obtained by an Avdonin-type perturbation, or RB which are the zero set of sine-type functions. This talk is based on joint work with S. Nitzan.

Random growth models

Series
Research Horizons Seminar
Time
Wednesday, October 12, 2022 - 12:30 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Michael Damron, Georgia Tech

Random and irregular growth is all around us. We see it in the form of cancer growth, bacterial infection, fluid flow through porous rock, and propagating flame fronts. Simple models for these processes originated in the '50s with percolation theory and have since given rise to many new models and interesting mathematics. I will introduce a few models (percolation, invasion percolation, first-passage percolation, diffusion-limited aggregation, ...), along with some of their basic properties.
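As a small illustration of one of the models named above (not taken from the talk), here is a minimal first-passage percolation simulation: iid Exp(1) edge weights on a grid, with passage times computed by Dijkstra's algorithm.

```python
import heapq
import random

def fpp_passage_times(n, seed=0):
    """First-passage percolation on the n x n grid: attach iid Exp(1)
    weights to nearest-neighbor edges; the passage time T(0, v) is the
    cheapest total weight of a path from the origin to v (Dijkstra)."""
    rng = random.Random(seed)
    weights = {}
    def w(u, v):
        key = (min(u, v), max(u, v))    # one fixed weight per undirected edge
        if key not in weights:
            weights[key] = rng.expovariate(1.0)
        return weights[key]
    final = {}
    heap = [(0.0, (0, 0))]
    while heap:
        d, v = heapq.heappop(heap)
        if v in final:
            continue
        final[v] = d
        x, y = v
        for u in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= u[0] < n and 0 <= u[1] < n and u not in final:
                heapq.heappush(heap, (d + w(v, u), u))
    return final

times = fpp_passage_times(8)
```

The set of vertices reached by time $t$ is the random growing shape studied in the talk's first-passage model.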

Progress towards the Burning Number Conjecture

Series
Graph Theory Seminar
Time
Tuesday, October 11, 2022 - 15:45 for 1 hour (actually 50 minutes)
Location
Speaker
Jérémie Turcotte, McGill University

The burning number $b(G)$ of a graph $G$ is the smallest integer $k$ such that $G$ can be covered by $k$ balls of radii $0,\dots,k-1$ respectively. It was introduced independently by Brandenburg and Scott at Intel, as a transmission problem on processors, and by Bonato, Janssen and Roshanbin, as a model for the spread of information in social networks.

The Burning Number Conjecture claims that $b(G)\leq \left\lceil\sqrt{n}\right\rceil$, where $n$ is the number of vertices of $G$. This bound is tight for paths. The previous best bound for this problem, by Bastide et al., was $b(G)\leq \sqrt{\frac{4n}{3}}+1$.

We prove that the Burning Number Conjecture holds asymptotically, that is $b(G)\leq (1+o(1))\sqrt{n}$.

Following a brief introduction to graph burning, this talk will focus on the general ideas behind the proof.
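The tightness on paths can be checked directly by brute force for small cases; a minimal sketch (the function name is invented for illustration):

```python
from itertools import permutations
from math import ceil, sqrt

def burning_number_path(n):
    """Brute-force b(P_n): the smallest k for which some k distinct
    vertices, assigned radii 0, ..., k-1, cover the path on n vertices."""
    for k in range(1, n + 1):
        for centers in permutations(range(n), k):
            # centers[r] is the center of the ball of radius r; trying all
            # permutations covers every assignment of the distinct radii.
            if all(any(abs(v - centers[r]) <= r for r in range(k))
                   for v in range(n)):
                return k
    return n

# The conjectured bound is exact for paths: b(P_n) = ceil(sqrt(n)).
for n in range(1, 17):
    assert burning_number_path(n) == ceil(sqrt(n))
```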

Asymptotics of surface group representations along rays

Series
Geometry Topology Seminar
Time
Monday, October 10, 2022 - 14:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Mike Wolf, Georgia Tech

We study a particular distinguished component (the 'Hitchin component') of the space of surface group representations to $SL(3,\mathbb{R})$. In this setting, both Hitchin (via Higgs bundles) and the more ancient subject of affine spheres associate a bundle of holomorphic differentials over Teichmüller space to this component of the character variety. We focus on a ray of holomorphic differentials and provide a formula, tropical in appearance, for the asymptotic holonomy of the representations in terms of the local geometry of the differential. Alternatively, we show how the associated equivariant harmonic maps to a symmetric space converge to a harmonic map to a building, with geometry determined by the differential. All of this is joint work with John Loftin and Andrea Tamburelli, and all the constructions and definitions will be (likely briskly) explained.

Multi-scale modeling for complex flows at extreme computational scales

Series
Applied and Computational Mathematics Seminar
Time
Monday, October 10, 2022 - 14:00 for
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Spencer Bryngelson, Georgia Tech CSE

Many fluid flows exhibit behavior across a wide range of space and time scales. Turbulent and multiphase flows can include small eddies or particles as well as large advected features. This challenge makes some degree of multi-scale modeling or homogenization necessary. Such models are restricted, though: they should be numerically accurate, physically consistent, computationally expedient, and more. I present two tools crafted for this purpose. First, the fast macroscopic forcing method (Fast MFM), which is based on an elliptic pruning procedure that localizes solution operators and sparse matrix-vector sampling. We recover eddy-diffusivity operators with a convergence that beats the best spectral approximation (from the SVD), attenuating the cost of, for example, targeted RANS closures. I also present a moment-based method for closing multiphase flow equations. Buttressed by a recurrent neural network, it is numerically stable and achieves state-of-the-art accuracy. I close with a discussion of conducting these simulations near exascale. Our simulations scale ideally on the entirety of ORNL Summit's GPUs, though the HPC landscape continues to shift.

The Entropy Compression Method

Series
Graduate Student Colloquium
Time
Friday, October 7, 2022 - 15:30 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Abhishek Dhawan, Georgia Tech Math

The Lovász Local Lemma (LLL) is a powerful tool to prove the existence of combinatorial structures satisfying certain properties. In a constructive proof of the LLL, Moser and Tardos introduced a proof technique that is now referred to as the entropy compression method. In this talk I will describe the main idea of the method and apply it to a problem easily solved using the LLL. I will also describe recent applications of the idea to various graph coloring problems.
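The algorithm whose analysis gave rise to entropy compression is easy to state; a minimal sketch of Moser–Tardos resampling for SAT (the function name, clause encoding, and resampling budget are choices made for this illustration):

```python
import random

def moser_tardos(n_vars, clauses, seed=0, max_rounds=10000):
    """Moser-Tardos resampling for SAT: start from a uniformly random
    assignment; while some clause is violated, resample the variables of
    one violated clause. Entropy compression / the constructive LLL shows
    this terminates quickly when each clause overlaps few others."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]
    def violated(clause):
        # A clause is a list of literals (variable index, desired value);
        # it is violated when every literal disagrees with the assignment.
        return all(assign[v] != want for v, want in clause)
    for _ in range(max_rounds):
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign
        for v, _ in bad[0]:
            assign[v] = rng.random() < 0.5
    raise RuntimeError("resampling budget exceeded")

# Tiny example: (x0 or x1) and (not x1 or x2) and (not x0 or not x2).
clauses = [[(0, True), (1, True)], [(1, False), (2, True)],
           [(0, False), (2, False)]]
sol = moser_tardos(3, clauses)
```

The entropy compression argument shows the random bits consumed by the resampling loop can be recovered from its execution log, so a long run would compress random bits, which is impossible on average.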

Smooth structures on open 4-manifolds III

Series
Geometry Topology Working Seminar
Time
Friday, October 7, 2022 - 14:00 for 1.5 hours (actually 80 minutes)
Location
Skiles 006
Speaker
John Etnyre, Georgia Tech

One of the most interesting and surprising features of manifold topology is the existence of topological 4-manifolds that admit infinitely many smooth structures. In these talks I will discuss what is known about these “exotic” smooth structures on open manifolds, starting with R^4 and then moving on to other open 4-manifolds. We will also go over various constructions and open questions about these manifolds.

Sparse Cholesky factorization by greedy conditional selection

Series
ACO Student Seminar
Time
Friday, October 7, 2022 - 13:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Stephen Huan, Georgia Tech CS

Dense kernel matrices resulting from pairwise evaluations of a kernel function arise naturally in machine learning and statistics. Previous work in constructing sparse transport maps or sparse approximate inverse Cholesky factors of such matrices by minimizing Kullback-Leibler divergence recovers the Vecchia approximation for Gaussian processes. However, these methods often rely only on geometry to construct the sparsity pattern, ignoring the conditional effect of adding an entry. In this work, we construct the sparsity pattern by leveraging a greedy selection algorithm that maximizes mutual information with target points, conditional on all points previously selected. For selecting k points out of N, the naive time complexity is O(N k^4), but by maintaining a partial Cholesky factor we reduce this to O(N k^2). Furthermore, for multiple (m) targets we achieve a time complexity of O(N k^2 + N m^2 + m^3), which is maintained in the setting of aggregated Cholesky factorization, where a selected point need not condition every target. We directly apply the selection algorithm to image classification and recovery of sparse Cholesky factors. By minimizing Kullback-Leibler divergence, we apply the algorithm to Cholesky factorization, Gaussian process regression, and preconditioning with the conjugate gradient method, improving over k-nearest neighbors particularly in high-dimensional, unusual, or otherwise messy geometries with non-isotropic kernel functions.
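A hedged sketch of the greedy conditional selection idea for a single target, not the authors' implementation: each step picks the point with the largest squared conditional correlation with the target and extends a partial Cholesky factor, so step m costs O(N m) and k steps cost O(N k^2), matching the complexity quoted above.

```python
import numpy as np

def greedy_select(K, target, k):
    """Illustrative greedy conditional selection: pick k indices
    maximizing squared conditional correlation with `target`, updating
    a partial Cholesky factor instead of refactorizing each step."""
    N = K.shape[0]
    L = np.zeros((N, k))                      # partial Cholesky columns
    var = K.diagonal().astype(float).copy()   # Var(x_i | picks so far)
    cov = K[:, target].astype(float).copy()   # Cov(x_i, y | picks so far)
    picked = []
    for m in range(k):
        score = np.full(N, -np.inf)
        ok = var > 1e-12                      # guard near-zero variances
        score[ok] = cov[ok] ** 2 / var[ok]
        score[picked] = -np.inf
        score[target] = -np.inf
        i = int(np.argmax(score))
        picked.append(i)
        # New Cholesky column conditions every point on pick i in O(N m).
        col = (K[:, i] - L[:, :m] @ L[i, :m]) / np.sqrt(var[i])
        L[:, m] = col
        var = var - col ** 2
        cov = cov - col * col[target]
    return picked

# Toy example: 1D points under a squared-exponential kernel.
x = np.linspace(0, 1, 50)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02)
picks = greedy_select(K, target=25, k=3)
```

For a Gaussian kernel the first pick is a nearest neighbor of the target, but later picks diverge from plain k-nearest neighbors because the score conditions on everything already selected.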
