Finite Dimensional Balian-Low Theorems
- Series
- Applied and Computational Mathematics Seminar
- Time
- Monday, January 7, 2019 - 13:55 for 1 hour (actually 50 minutes)
- Location
- Skiles 154
- Speaker
- Dr. Michael Northington – GT Math
For each n, let M be an n by n random matrix with independent ±1 entries. We show that the probability that M is not invertible equals (1/2 + o(1))^n, which settles an old problem. Some generalizations are considered.
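For small n the singularity probability can be checked empirically. A minimal Monte Carlo sketch (numpy assumed; the function name and parameters are illustrative, not from the talk):

```python
import numpy as np

def singular_fraction(n, trials=20000, seed=0):
    """Monte Carlo estimate of P(an n x n random +/-1 matrix is singular)."""
    rng = np.random.default_rng(seed)
    singular = 0
    for _ in range(trials):
        m = rng.choice([-1, 1], size=(n, n))
        if np.linalg.matrix_rank(m) < n:
            singular += 1
    return singular / trials
```

For n = 2 the exact value is 1/2 (det = ad - bc vanishes exactly when ad = bc), so the estimate should hover near 0.5; the theorem describes the exponential rate as n grows.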
It is anticipated that chaotic regimes (characterized by, e.g., sensitivity with respect to initial conditions and loss of memory) arise in a wide variety of dynamical systems, including those arising from the study of ensembles of gas particles and fluid mechanics. However, in most cases the problem of rigorously verifying asymptotic chaotic regimes is notoriously difficult. For volume-preserving systems (e.g., incompressible fluid flow or Hamiltonian systems), these issues are exemplified by coexistence phenomena: even in quite simple models which should be chaotic, e.g. the Chirikov standard map, completely opposite dynamical regimes (elliptic islands vs. hyperbolic sets) can be tangled together in phase space in a convoluted way.
Recent developments have indicated, however, that verifying chaos is tractable for systems subjected to a small amount of noise— from the perspective of modeling, this is not so unnatural, as the real world is inherently noisy. In this talk, I will discuss two recent results: (1) a large positive Lyapunov exponent for (extremely small) random perturbations of the Chirikov standard map, and (2) a positive Lyapunov exponent for the Lagrangian flow corresponding to various incompressible stochastic fluids models, including stochastic 2D Navier-Stokes and 3D hyperviscous Navier-Stokes on the periodic box. The work in this talk is joint with Jacob Bedrossian, Samuel Punshon-Smith, Jinxin Xue and Lai-Sang Young.
The celebrated Hadwiger's theorem says that linear combinations of intrinsic volumes on convex sets are the only isometry invariant continuous valuations (i.e. finitely additive measures). On the other hand, H. Weyl has extended intrinsic volumes beyond convexity, to Riemannian manifolds. We try to understand the continuity properties of this extension under the Gromov-Hausdorff convergence (literally, there is no such continuity in general). First, we describe a new conjectural compactification of the set of all closed Riemannian manifolds with given upper bounds on dimension and diameter and lower bound on sectional curvature. Points of this compactification are pairs: an Alexandrov space and a constructible (in the Perelman-Petrunin sense) function on it. Second, conjecturally all intrinsic volumes extend by continuity to this compactification. No preliminary knowledge of Alexandrov spaces will be assumed, though it will be useful.
We shall survey a variety of results, some recent, some going back a long time, where combinatorial methods are used to prove or disprove the existence of orthogonal exponential bases and Gabor bases. The classical Erdos distance problem and the Erdos Integer Distance Principle play a key role in our discussion.
This is a SCMB MathBioSys Seminar posted on behalf of Melissa Kemp (GT BME)
Constriction of blood vessels in the extremities due to traumatic injury to halt excessive blood loss or resulting from pathologic occlusion can cause considerable damage to the surrounding tissues with significant morbidity and mortality. Optimal healing of damaged tissue relies on the precise balance of pro-inflammatory and pro-healing processes of innate inflammation. In this talk, we will present a discrete multiscale mathematical model that spans the tissue and intracellular scales, and captures the consequences of targeting various regulatory components. We take advantage of the canalization properties of some of the functions, which is a type of hierarchical clustering of the inputs, and use it as control to steer the system away from a faulty attractor and understand better the regulatory relations that govern the system dynamics.

EDIT: CANCELLED
We discuss a problem of asymptotically efficient (that is, asymptotically normal with minimax optimal limit variance) estimation of functionals of the form $\langle f(\Sigma), B\rangle$ of unknown covariance $\Sigma$ based on i.i.d. mean zero Gaussian observations $X_1,\dots, X_n\in {\mathbb R}^d$ with covariance $\Sigma$. Under the assumptions that the dimension $d\leq n^{\alpha}$ for some $\alpha\in (0,1)$ and $f:{\mathbb R}\mapsto {\mathbb R}$ is of smoothness $s>\frac{1}{1-\alpha},$ we show how to construct an asymptotically efficient estimator of such functionals (the smoothness threshold $\frac{1}{1-\alpha}$ is known to be optimal for a simpler problem of estimation of smooth functionals of unknown mean of normal distribution).
The proof of this result relies on a variety of probabilistic and analytic tools including Gaussian concentration, bounds on the remainders of Taylor expansions of operator functions and bounds on finite differences of smooth functions along certain Markov chains in the spaces of positively semi-definite matrices.
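For concreteness, here is the naive plug-in estimate $\langle f(\hat\Sigma), B\rangle$, obtained by applying $f$ spectrally to the sample covariance. This is only a sketch of the objects involved; it is consistent but it is not the bias-corrected, asymptotically efficient estimator constructed in the talk:

```python
import numpy as np

def plugin_functional(X, f, B):
    """Naive plug-in estimate of <f(Sigma), B> from rows X[i] ~ N(0, Sigma).

    f is applied spectrally to the mean-zero sample covariance; this is
    NOT the efficient estimator from the talk, just an illustration.
    """
    n = X.shape[0]
    sigma_hat = X.T @ X / n                # mean-zero sample covariance
    w, v = np.linalg.eigh(sigma_hat)       # spectral decomposition
    f_sigma = (v * f(w)) @ v.T             # f applied to the eigenvalues
    return np.trace(f_sigma @ B.T)         # <A, B> = tr(A B^T)
```

With $f$ the identity and $B = I_d$ this returns the trace of the sample covariance, which for standard Gaussian data concentrates around $d$.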
(The talk will be from 1 to 2 pm, followed by a discussion session from 2 pm to 2:45 pm.)
Powerful AI systems, which are driven by machine learning, are increasingly controlling various aspects of modern society: from social interactions (e.g., Facebook, Twitter, Google, YouTube), economics (e.g., Uber, Airbnb, Banking), learning (e.g., Wikipedia, MOOCs), governance (Judgements, Policing, Voting), to autonomous vehicles and weapons. These systems have a tremendous potential to change our lives for the better, but, via the ability to mimic and nudge human behavior, they also have the potential to be discriminatory, reinforce societal prejudices, and polarize opinions. Moreover, recent studies have demonstrated that these systems can be quite brittle and generally lack the required robustness to be deployed in various civil/military situations. The reason is that considerations such as fairness, robustness, stability, explainability, and accountability have largely been an afterthought in the development of AI systems. In this talk, I will discuss the opportunities that lie ahead in a principled and thoughtful development of AI systems.
Bio: Nisheeth Vishnoi is a Professor of Computer Science at Yale University. He received a B.Tech in Computer Science and Engineering from IIT Bombay in 1999 and a Ph.D. in Algorithms, Combinatorics and Optimization from Georgia Tech in 2004. His research spans several areas of theoretical computer science: from approximability of NP-hard problems, to combinatorial, convex and non-convex optimization, to tackling algorithmic questions involving dynamical systems, stochastic processes and polynomials. He is also broadly interested in understanding and addressing some of the key questions that arise in nature and society from the viewpoint of theoretical computer science. Here, his current focus is on natural algorithms, emergence of intelligence, and questions at the interface of AI, ethics, and society. He was the recipient of the Best Paper Award at FOCS in 2005, the IBM Research Pat Goldberg Memorial Award in 2006, the Indian National Science Academy Young Scientist Award in 2011, and the IIT Bombay Young Alumni Achievers Award in 2016.
In the design of complex engineering systems like aircraft/rotorcraft/spacecraft, computer experiments based on high-fidelity (HF) models offer a cheaper alternative to physical experiments. However, such models are still not cheap enough for application to Global Optimization (GO) and Uncertainty Quantification (UQ) to find the best possible design alternative. In such cases, surrogate models of HF models become necessary. The construction of surrogate models requires an offline database of the system response generated by running the expensive model several times. In general, the training sample size and distribution for a given problem are unknown a priori and can be over- or under-predicted, which leads to wasted resources and poor decision-making. An adaptive model building approach eliminates this problem by sequentially sampling points based on information gained in the previous step. However, an approach that works for highly non-stationary responses is still lacking in the literature. Here, we use Gaussian Process (GP) models as surrogate models. We employ a novel process-convolution approach to generate parameterized non-stationary GPs that offer control over the process smoothness. We show that our approach outperforms existing methods, particularly for responses that have localized non-smoothness. This leads to better performance in terms of GO, UQ and mean-squared prediction error for a given budget of HF function calls.
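For context, a minimal stationary GP surrogate (posterior mean under an RBF kernel) can be sketched as follows; the talk's process-convolution construction would instead let the kernel's smoothness vary across the input space. All names and parameter values here are illustrative:

```python
import numpy as np

def gp_posterior_mean(X_train, y_train, X_test, length_scale=0.5, noise=1e-8):
    """Posterior mean of a zero-mean GP surrogate with an RBF kernel (1D inputs).

    A minimal stationary sketch, not the non-stationary construction
    described in the abstract.
    """
    def rbf(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length_scale**2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)
```

Trained on a few samples of a smooth response (e.g. sin x on [0, 3]), the posterior mean interpolates the expensive model cheaply, which is the role a surrogate plays in GO/UQ loops.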
We study delocalization properties of null vectors and eigenvectors of matrices with i.i.d. subgaussian entries. Such properties describe quantitatively how "flat" is a vector and confirm one of the universality conjectures stating that distributions of eigenvectors of many classes of random matrices are close to the uniform distribution on the unit sphere. In particular, we get lower bounds on the smallest coordinates of eigenvectors, which are optimal as the case of Gaussian matrices shows. The talk is based on the joint work with Konstantin Tikhomirov.
A strong edge coloring of a graph $G$ is a coloring of the edges of the graph such that each color class is an induced matching. The strong chromatic index of $G$ is the smallest number $k$ such that $G$ has a strong edge coloring with $k$ colors. Erdős and Nešetřil conjectured that the strong chromatic index of a graph of maximum degree $\Delta$ is at most $5\Delta^2/4$ if $\Delta$ is even and $(5\Delta^2-2\Delta + 1)/4$ if $\Delta$ is odd. The conjecture is known to hold for $\Delta=3$, and in this talk I will present part of Andersen's proof that the strong chromatic index of a subcubic planar graph is at most $10$.
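The defining condition is easy to verify mechanically: two edges of the same color may neither share an endpoint nor be joined by an edge of the graph. A small checker (illustrative, not from the talk):

```python
from itertools import combinations

def is_strong_edge_coloring(edges, color):
    """True iff every color class is an induced matching: two edges of
    the same color neither share an endpoint nor are joined by an edge."""
    edge_set = {frozenset(e) for e in edges}
    for e, f in combinations(edges, 2):
        if color[e] != color[f]:
            continue
        if set(e) & set(f):            # same color, shared endpoint
            return False
        if any(frozenset((u, v)) in edge_set for u in e for v in f):
            return False               # same color, joined by an edge
    return True
```

On the path a-b-c-d, coloring the two end edges alike fails (they are joined by the middle edge), while three distinct colors succeed.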
A major challenge in clinical and biomedical research is translating in-vitro and in-vivo model findings to humans. The translation success rate of all new compounds going through different clinical trial phases is generally about 10%. (i) This field is challenged by a lack of robust methods that can be used to translate model findings to humans (or interpret preclinical findings to accurately design successful patient regimens), hence providing a platform to evaluate a plethora of agents before they are channeled into clinical trials. Using set theory principles of mapping morphisms, we recently developed a novel translational framework that can faithfully map experimental results to clinical patient results. This talk will demonstrate how this method was used to predict outcomes of anti-TB drug clinical trials. (ii) Translation failure is deeply rooted in the dissimilarities between humans and the experimental models used: wide variation in pathogen isolates, patient population genetic diversity and geographic heterogeneity. In TB, bacterial phenotypic heterogeneity shapes differential antibiotic susceptibility patterns in patients. This talk will also demonstrate the application of dynamical systems in Systems Biology to model (a) gene regulatory networks and how gene programs influence Mycobacterium tuberculosis bacteria metabolic/phenotypic plasticity, and (b) illustrate how different bacterial phenotypic subpopulations influence treatment outcomes and the translation of preclinical TB therapeutic regimens. In general, this talk will strongly showcase how mathematical modeling can be used to critically analyze experimental and patient data.
We will go to lunch together after the talk with the graduate students.
The popularity of machine learning is growing rapidly in transportation, with applications ranging from traffic engineering to travel demand forecasting and pavement material modeling, to name just a few. Researchers often find that machine learning achieves higher predictive accuracy compared to traditional methods. However, many machine-learning methods are often viewed as “black-box” models, lacking interpretability for decision making. As a result, increased attention is being devoted to the interpretability of machine-learning results.
In this talk, I introduce the application of machine learning to study travel behavior, covering both mode prediction and behavioral interpretation. I first discuss the key differences between machine learning and logit models in modeling travel mode choice, focusing on model development, evaluation, and interpretation. Next, I apply the existing machine-learning interpretation tools and also propose two new model-agnostic interpretation tools to examine behavioral heterogeneity. Lastly, I show the potential of using machine learning as an exploratory tool to tune the utility functions of logit models.
I illustrate these ideas by examining stated-preference travel survey data for a new mobility-on-demand transit system that integrates fixed-route buses and on-demand shuttles. The results show that the best-performing machine-learning classifier results in higher predictive accuracy than logit models as well as comparable behavioral outputs. In addition, results obtained from model-agnostic interpretation tools show that certain machine-learning models (e.g. boosting trees) can readily account for individual heterogeneity and generate valuable behavioral insights on different population segments. Moreover, I show that interpretable machine learning can be applied to tune the utility functions of logit models (e.g. specifying nonlinearities) and to enhance their model performance. In turn, these findings can be used to inform the design of new mobility services and transportation policies.
Reiher, Rödl, Ruciński, Schacht, and Szemerédi proved, via a modification of the absorbing method, that every 3-uniform $n$-vertex hypergraph, $n$ large, with minimum vertex degree at least $(5/9+\alpha)n^2/2$ contains a tight Hamiltonian cycle. Recently, owing to a further modification of the method, the same group of authors joined by Bjarne Schuelke extended this result to 4-uniform hypergraphs with minimum pair degree at least, again, $(5/9+\alpha)n^2/2$. In my talk I will outline these proofs and point to the crucial ideas behind both modifications of the absorbing method.
In this talk, we consider the Cauchy problem for the N-dimensional incompressible viscoelastic fluids with N ≥ 2. It is shown that, in the low frequency part, this system possesses some dispersive properties derived from the one-parameter group $e^{\mp it\Lambda}$. Based on this dispersive effect, we construct global solutions with large initial velocity concentrating on the low frequency part. Such kind of solution has never been seen before in the literature, even for the classical incompressible Navier-Stokes equations. The proof relies heavily on the dispersive estimates for the system of acoustics and a careful study of the nonlinear terms. We also obtain a similar result for the isentropic compressible Navier-Stokes equations. Here, initial velocities whose potential part $\mathcal{P}^{\perp}u_0$ has arbitrarily large $\dot{B}^{\frac{N}{2}-1}_{2,1}$ norm and is highly oscillating are allowed in our results. (Joint work with Daoyuan Fang and Ruizhao Zi)
Moment problem is a classical question in real analysis, which asks whether a set of moments can be realized as integration of corresponding monomials with respect to a Borel measure. Truncated moment problem asks the same question given a finite set of moments. I will explain how some of the fundamental results in the truncated moment problem can be proved (in a very general setting) using elementary convex geometry. No familiarity with moment problems will be assumed. This is joint work with Larry Fialkow.
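As a small illustration in the one-dimensional (Hamburger) setting: a necessary condition for a truncated moment sequence $m_0,\dots,m_{2k}$ to come from a Borel measure is that its Hankel matrix $H_{ij}=m_{i+j}$ be positive semi-definite. A sketch (this is standard background, not the convex-geometric argument of the talk):

```python
import numpy as np

def hankel_psd(moments):
    """Necessary condition for a truncated Hamburger moment sequence
    m_0, ..., m_{2k}: the Hankel matrix H[i, j] = m_{i+j} must be PSD."""
    m = list(moments)
    k = (len(m) - 1) // 2
    H = np.array([[m[i + j] for j in range(k + 1)] for i in range(k + 1)], float)
    return bool(np.all(np.linalg.eigvalsh(H) >= -1e-12))
```

For example, the moments 1, 0, 1, 0, 3 of the standard Gaussian pass, while 1, 0, -1 fails (no measure has negative variance).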
The Georgia Scientific Computing Symposium is a forum for professors, postdocs, graduate students and other researchers in Georgia to meet in an informal setting, to exchange ideas, and to highlight local scientific computing research. The symposium has been held every year since 2009 and is open to the entire research community.
This year, the symposium will be held on Saturday, February 16, 2019, at Georgia Institute of Technology. Please see
http://gtmap.gatech.edu/events/2019-georgia-scientific-computing-symposium
for more information.
We will discuss the regularity of the conjugacy between an Anosov automorphism L of a torus and its small perturbation. We assume that L has no more than two eigenvalues of the same modulus and that L^4 is irreducible over the rationals. We consider a volume-preserving C^1-small perturbation f of L. We show that if the Lyapunov exponents of f with respect to the volume are the same as the Lyapunov exponents of L, then f is C^1+ conjugate to L. Further, we establish a similar result for irreducible partially hyperbolic automorphisms with two-dimensional center bundle. This is joint work with Andrey Gogolev and Victoria Sadovskaya.
A single soap bubble has a spherical shape since it minimizes its surface area subject to a fixed enclosed volume of air. When two soap bubbles collide, they form a “double-bubble” composed of three spherical caps. The double-bubble minimizes total surface area among all sets enclosing two fixed volumes. This was proven mathematically in a landmark result by Hutchings-Morgan-Ritore-Ros and Reichardt using the calculus of variations in the early 2000s. The analogous case of three or more Euclidean sets is considered difficult if not impossible. However, if we replace Lebesgue measure in these problems with the Gaussian measure, then recent work of myself (for 3 sets) and of Milman-Neeman (for any number of sets) can actually solve these problems. We also use the calculus of variations. Time permitting, we will discuss an improvement to the Milman-Neeman result and applications to optimal clustering of data and to designing elections that are resilient to hacking. http://arxiv.org/abs/1901.03934
Wiener-Hopf factorization (WHf) encompasses several important results in probability and stochastic processes, as well as in operator theory. The importance of the WHf stems not only from its theoretical appeal, manifested, in part, through probabilistic interpretation of analytical results, but also from its practical applications in a wide range of fields, such as fluctuation theory, insurance and finance. The various existing forms of the WHf for Markov chains, strong Markov processes, Levy processes, and Markov additive processes have been obtained only in the time-homogeneous case. However, there are abundant real-life dynamical systems that are modeled in terms of time-inhomogeneous processes, and yet the corresponding Wiener-Hopf factorization theory is not available for this important class of models. In this talk, I will first provide a survey on the development of Wiener-Hopf factorization for time-homogeneous Markov chains, Levy processes, and Markov additive processes. Then, I will discuss our recent work on WHf for time-inhomogeneous Markov chains. To the best of our knowledge, this study is the first attempt to investigate the WHf for time-inhomogeneous Markov processes.
Inference of evolutionary dynamics of heterogeneous cancer and viral populations

Genetic diversity of cancer cell populations and intra-host viral populations is one of the major factors influencing disease progression and treatment outcome. However, the evolutionary dynamics of such populations remain poorly understood. Quantification of selection is a key step toward understanding the evolutionary mechanisms driving cancer and viral diseases. We will introduce a mathematical model and an algorithmic framework for inference of fitness landscapes of heterogeneous populations from genomic data. It is based on a maximum likelihood approach, whose objective is to estimate a vector of clone/strain fitnesses that best fits the observed tumor phylogeny, the observed population structure, and the dynamical system describing evolution of the population as a branching process. We will discuss our approach to solving the problem by transforming the original continuous maximum likelihood problem into a discrete optimization problem, which can be considered a variant of the scheduling problem with precedence constraints and a non-linear cumulative cost function.
Linear Schur multipliers, which act on matrices by entrywise multiplications, as well as their generalizations, have been studied for over a century and successfully applied in perturbation theory (as demonstrated in the previous talk). In this talk, we will discuss estimates for finite dimensional multilinear Schur multipliers underlying these applications.
Correlation Clustering is an elegant model that captures fundamental graph cut problems such as Minimum s-t Cut, Multiway Cut, and Multicut, extensively studied in combinatorial optimization.
Here, we are given a graph with edges labeled + or - and the goal is to produce a clustering that agrees with the labels as much as possible: + edges within clusters and - edges across clusters.
The classical approach towards Correlation Clustering (and other graph cut problems) is to optimize a global objective, e.g., minimizing the total number of disagreements or maximizing the total number of agreements.
We depart from this and study local objectives: minimizing the maximum number of disagreements for edges incident on a single node, and the analogous max min agreements objective.
This naturally gives rise to a family of basic min-max graph cut problems.
A prototypical representative is Min-Max s-t Cut: find an s-t cut minimizing the largest number of cut edges incident on any node.
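The local objective is easy to state in code: given +/- labeled edges and a candidate clustering, count each node's incident disagreements and take the maximum. A small sketch (names are illustrative):

```python
from collections import defaultdict

def max_local_disagreements(edges, cluster):
    """edges: list of (u, v, sign) with sign '+' or '-';
    cluster: dict mapping node -> cluster id.
    Returns the maximum over nodes of the number of incident disagreeing
    edges (a '+' edge across clusters, or a '-' edge inside a cluster)."""
    dis = defaultdict(int)
    for u, v, sign in edges:
        same = cluster[u] == cluster[v]
        if (sign == '+') != same:   # this edge disagrees with the clustering
            dis[u] += 1
            dis[v] += 1
    return max(dis.values(), default=0)
```

On a triangle with edges ab(+), bc(+), ac(-), every clustering leaves at least one disagreement somewhere, so the min-max objective is 1: the labels are inherently inconsistent.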
In this talk we will give a short introduction to Correlation Clustering and discuss the following results:
Joint work with Moses Charikar and Neha Gupta.
First, we introduce a new field theoretical interpretation of quantum mechanical wave functions, by postulating that the wave function is the common wave function for all particles in the same class determined by the external potential V, that the modulus of the wave function represents the distribution density of the particles, and that the gradient of the phase of the wave function provides the velocity field of the particles. Second, we show that the key for condensation of bosonic particles is that their interaction is sufficiently weak to ensure that a large collection of boson particles are in a state governed by the same condensation wave function field under the same bounding potential V. For superconductivity, the formation of superconductivity comes down to conditions for the formation of electron-pairs, and for the electron-pairs to share a common wave function. Thanks to the recently developed PID interaction potential of electrons and the average-energy level formula of temperature, these conditions for superconductivity are explicitly derived. Furthermore, we obtain both microscopic and macroscopic formulas for the critical temperature. Third, we derive the field and topological phase transition equations for condensates, and make connections to the quantum phase transition, as a topological phase transition. This is joint work with Tian Ma.
We identify principal component analysis (PCA) as an empirical risk minimization problem with respect to the reconstruction error and prove non-asymptotic upper bounds for the corresponding excess risk. These bounds unify and improve existing upper bounds from the literature. In particular, they give oracle inequalities under mild eigenvalue conditions. We also discuss how our results can be transferred to the subspace distance and, for instance, how our approach leads to a sharp $\sin \Theta$ theorem for empirical covariance operators. The proof is based on a novel contraction property, contrasting previous spectral perturbation approaches. This talk is based on joint works with Markus Reiß and Moritz Jirak.
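The ERM view can be illustrated numerically: the projection onto the top eigenvectors of the (uncentered) sample covariance minimizes the empirical reconstruction risk among orthogonal projections of the same rank. A minimal sketch, with illustrative names (and no excess-risk bounds, which are the actual subject of the talk):

```python
import numpy as np

def reconstruction_risk(X, P):
    """Empirical reconstruction risk (1/n) * sum_i ||X_i - P X_i||^2
    for an orthogonal projection matrix P."""
    R = X - X @ P.T
    return np.mean(np.sum(R ** 2, axis=1))

def pca_projection(X, k):
    """Projection onto the top-k eigenvectors of the sample covariance."""
    S = X.T @ X / X.shape[0]
    w, V = np.linalg.eigh(S)                  # ascending eigenvalues
    Vk = V[:, np.argsort(w)[::-1][:k]]        # top-k eigenvectors
    return Vk @ Vk.T
```

Comparing the PCA projection against any other rank-k orthogonal projection (e.g. a random one) shows the PCA risk is never larger, which is exactly the empirical risk minimization property.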
If $f$ is a function supported on a truncated paraboloid, what can we say about $Ef$, the Fourier transform of $f$? Stein conjectured in the 1960s that for any $p>3$, $\|Ef\|_{L^p(R^3)} \lesssim \|f\|_{L^{\infty}}$.
We make some small progress toward this conjecture and show that it holds for $p> 3+3/13\approx 3.23$. In the proof, we combine the polynomial partitioning technique introduced by Guth and the two-ends argument introduced by Wolff and Tao.
Let $\nu$ denote the maximum size of a packing of edge-disjoint triangles in a graph $G$. We can clearly make $G$ triangle-free by deleting $3\nu$ edges. Tuza conjectured in 1981 that $2\nu$ edges suffice, and proved it for planar graphs. The best known general bound is $(3-\frac{3}{23})\nu$ proven by Haxell in 1997. We will discuss this proof and some related results.
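For very small graphs both quantities can be computed by brute force; the following sketch (illustrative, not from the talk) checks Tuza's bound on $K_4$, where $\nu = 1$ (any two triangles share an edge) and the minimum edge cover of all triangles has size $2$:

```python
from itertools import combinations

def triangles(edges):
    es = {frozenset(e) for e in edges}
    verts = sorted({v for e in edges for v in e})
    return [t for t in combinations(verts, 3)
            if all(frozenset(p) in es for p in combinations(t, 2))]

def max_packing(edges):
    """nu(G): maximum number of edge-disjoint triangles (brute force)."""
    ts = triangles(edges)
    for r in range(len(ts), 0, -1):
        for sub in combinations(ts, r):
            used = [frozenset(p) for t in sub for p in combinations(t, 2)]
            if len(used) == len(set(used)):   # all edges distinct
                return r
    return 0

def min_cover(edges):
    """tau(G): minimum number of edges whose deletion kills all triangles."""
    es = [frozenset(e) for e in edges]
    for r in range(len(es) + 1):
        for sub in combinations(es, r):
            if not triangles([tuple(e) for e in set(es) - set(sub)]):
                return r
    return 0
```

On $K_4$ this confirms $\tau = 2 = 2\nu$, so Tuza's conjectured bound $\tau \le 2\nu$ is tight there.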
Two recent extensions of optimal mass transport theory will be covered. In the first part of the talk, we will discuss measure-valued splines, which generalize the notion of cubic splines to the space of distributions. This addresses the problem of smoothly interpolating (empirical) probability measures. Potential applications include time sequence interpolation or regression of images, histograms or aggregated data. In the second part of the talk, we will introduce matrix-valued optimal transport. It extends the optimal transport theory to handle matrix-valued densities. Several instances are quantum states, color images, diffusion tensor images and multi-variate power spectra. The new tool is expected to have applications in these domains. We will focus on the theoretical side of the story in both parts of the talk.
Mathapalooza! is simultaneously a Julia Robinson Mathematics Festival and an event of the Atlanta Science Festival. There will be puzzles and games, a magic show by Matt Baker, mathematically themed courtroom skits by GT Club Math, a presentation about math and dance by Manuela Manetta, a presentation about math and music by David Borthwick, and a gallery of mathematical art curated by Elisabetta Matsumoto. It is free, and we anticipate engaging hundreds of members of the public in the wonders of mathematics. More info at https://mathematics-in-motion.org/about/. Be there or B^2!
Let X be a degree d curve in the projective space P^r.
A general hyperplane H intersects X at d distinct points; varying H defines a monodromy action on X∩H. The resulting permutation group G is the sectional monodromy group of X. When the ground field has characteristic zero the group G is known to be the full symmetric group.
By work of Harris, if G contains the alternating group, then X satisfies a strengthened Castelnuovo's inequality (relating the degree and the genus of X).
The talk is concerned with sectional monodromy groups in positive characteristic. I will describe all non-strange non-degenerate curves in projective spaces of dimension r>2 for which G is not symmetric or alternating. For a particular family of plane curves, I will compute the sectional monodromy groups and thus answer an old question on Galois groups of generic trinomials.
We will try to address a few universality questions for the behavior of large random matrices over finite fields, and then present some modest progress on one of these questions.
Hadwiger (Hajos and Gerards and Seymour, respectively) conjectured that the vertices of every graph with no K_{t+1} minor (topological minor and odd minor, respectively) can be colored with t colors such that any pair of adjacent vertices receive different colors. These conjectures are stronger than the Four Color Theorem and are either wide open or false in general. A weakening of these conjectures is to consider clustered coloring which only requires every monochromatic component to have bounded size instead of size 1. It is known that t colors are still necessary for the clustered coloring version of those three conjectures. Joint with David Wood, we prove a series of tight results about clustered coloring on graphs with no subgraph isomorphic to a fixed complete bipartite graph. These results have a number of applications. In particular, they imply that the clustered coloring version of Hajos' conjecture is true for bounded treewidth graphs in a stronger sense: K_{t+1} topological minor free graphs of bounded treewidth are clustered t-list-colorable. They also lead to the first linear upper bound for the clustered coloring version of Hajos' conjecture and the currently best upper bound for the clustered coloring version of the Gerards-Seymour conjecture.
If a finite group $G$ acts on a Cohen-Macaulay ring $A$, and the order of $G$ is a unit in $A$, then the invariant ring $A^G$ is Cohen-Macaulay as well, by the Hochster-Eagon theorem. On the other hand, if the order of $G$ is not a unit in $A$ then the Cohen-Macaulayness of $A^G$ is a delicate question that has attracted research attention over the last several decades, with answers in several special cases but little general theory. In this talk we show that the statement that $A^G$ is Cohen-Macaulay is equivalent to a statement quantified over the inertia groups for the action of $G$ on $A$ acting on strict henselizations of appropriate localizations of $A$. In a case of long-standing interest—a permutation group acting on a polynomial ring—we show how this can be applied to find an obstruction to Cohen-Macaulayness that allows us to completely characterize the permutation groups whose invariant ring is Cohen-Macaulay regardless of the ground field. This is joint work with Sophie Marques.
Many problems of spherical discrete and metric geometry may be reformulated as energy minimization problems and require techniques that stem from harmonic analysis, potential theory, optimization, etc. We shall discuss several such problems as well as applications of these ideas to combinatorial geometry, discrepancy theory, signal processing, etc.
One of the most famous methods for solving large-scale over-determined linear systems is the Kaczmarz algorithm, which iteratively projects the previous approximation x_k onto the solution space of the next equation in the system. An elegant proof of the exponential convergence of this method using proper randomization of the process is due to Strohmer and Vershynin (2009). Many extensions and generalizations of the method have been proposed since then, including the works of Needell, Tropp, Ward, Srebro, Tan and many others. An interesting unifying view on a number of iterative solvers (including several versions of the Kaczmarz algorithm) was proposed by Gower and Richtarik in 2016. The main idea of their sketch-and-project framework is the following: one can observe that the random selection of a row (or a row block) can be represented as a sketch, that is, left multiplication by a random vector (or matrix), thereby pre-processing every iteration of the method, which is represented by a projection onto the image of the sketch.
I will give an overview of some of these methods, and talk about the role that random matrix theory plays in showing their convergence. I will also discuss our new results with Deanna Needell on the block Gaussian sketch-and-project method.
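As a concrete illustration of the randomized variant analyzed by Strohmer and Vershynin, here is a minimal sketch of the randomized Kaczmarz iteration, with row i sampled with probability proportional to its squared norm. The matrix sizes and iteration count below are illustrative choices, not taken from the talk.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz (Strohmer-Vershynin): project the current
    iterate onto the solution hyperplane of one equation, sampling row i
    with probability proportional to ||a_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# A consistent over-determined system; the iterates converge to the
# true solution at an exponential rate.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x = randomized_kaczmarz(A, b)
print("error:", np.linalg.norm(x - x_true))
```

In the sketch-and-project view, the update above is the special case where the sketch is left multiplication by a randomly chosen coordinate vector.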
I will talk about the structure of large square random matrices with centered i.i.d. heavy-tailed entries (only two finite moments are assumed). In our previous work with R. Vershynin we have shown that the operator norm of such a matrix A can be reduced to the optimal sqrt(n)-order with high probability by zeroing out a small submatrix of A, but we did not describe the structure of this "bad" submatrix, nor provide a constructive way to find it. Now we can give a very simple description of this small "bad" subset: it is enough to zero out a small fraction of the rows and columns of A with largest L2 norms to bring its operator norm to the almost optimal sqrt(loglog(n)*n)-order, under the additional assumption that the entries of A are symmetrically distributed. As a corollary, one also obtains a constructive procedure for finding a small submatrix of A that can be zeroed out to achieve the same regularization.
I am planning to discuss some details of the proof, the main component of which is the development of techniques that extend constructive regularization approaches known for Bernoulli matrices (from the works of Feige and Ofek, and of Le, Levina and Vershynin) to the considerably broader class of heavy-tailed random matrices.
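The regularization recipe described above, zeroing out the rows and columns with largest L2 norms, can be sketched numerically as follows. The tail index, matrix size, and fraction of rows/columns removed are illustrative choices, not the ones from the paper.

```python
import numpy as np

def regularize_heavy_tailed(A, frac=0.01):
    """Zero out the frac-fraction of rows and columns of A with the
    largest L2 norms (frac is an illustrative tuning parameter)."""
    A = A.copy()
    n = A.shape[0]
    k = max(1, int(frac * n))
    heavy_rows = np.argsort(np.linalg.norm(A, axis=1))[-k:]
    heavy_cols = np.argsort(np.linalg.norm(A, axis=0))[-k:]
    A[heavy_rows, :] = 0.0
    A[:, heavy_cols] = 0.0
    return A

# Symmetrically distributed heavy-tailed entries with just over two
# finite moments: a Student-t with df = 2.5 has finite variance.
rng = np.random.default_rng(0)
n = 1000
A = rng.standard_t(df=2.5, size=(n, n))
before = np.linalg.norm(A, 2)   # operator norm before regularization
after = np.linalg.norm(regularize_heavy_tailed(A), 2)
print("operator norm before:", before, "after:", after)
```

Zeroing rows and columns amounts to multiplying by coordinate projections on both sides, so the operator norm can only decrease; the point of the theorem is that for heavy-tailed A it drops all the way to near-optimal order.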
Consider a uniformly chosen proper coloring with q colors of a domain in Z^d (a graph homomorphism to a clique). We show that when the dimension is much higher than the number of colors, the model admits a staggered long-range order, in which one bipartite class of the domain is predominantly colored by half of the q colors and the other bipartite class by the other half. In the q=3 case, this was previously shown by Galvin-Kahn-Randall-Sorkin and independently by Peled. The result further extends to homomorphisms to other graphs (covering for instance the cases of the hard-core model and the Widom-Rowlinson model), allowing also vertex and edge weights (positive temperature models). Joint work with Ron Peled.
This is a two day conference (March 30-31) to be held at Georgia Tech on algebraic geometry and related areas. We will have talks by Sam Payne, Eric Larson, Angelica Cueto, Rohini Ramadas, and Jennifer Balakrishnan. See https://sites.google.com/view/gattaca/home for more information.
One of the characteristics observed in real networks is that, as a network's topology evolves, so does the network's ability to perform various complex tasks. To explain this, it has also been observed that as a network grows certain subnetworks begin to specialize the function(s) they perform. We introduce a model of network growth based on this notion of specialization and show that as a network is specialized its topology becomes increasingly modular, hierarchical, and sparser, all of which are properties observed in real networks. This model is also highly flexible in that a network can be specialized over any subset of its components. By selecting these components in various ways we find that a network's topology acquires some of the most well-known properties of real networks including the small-world property, disassortativity, power-law like degree distributions and clustering coefficients. This growth model also maintains the basic spectral properties of a network, i.e. the eigenvalues and eigenvectors associated with the network's adjacency matrix. This allows us in turn to show that a network maintains certain dynamic properties as the network's topology becomes increasingly complex due to specialization.
A link in the 3-sphere is doubly slice if it is the cross-section of an unknotted 2-sphere in the 4-sphere. The double branched cover of a doubly slice link is a 3-manifold which embeds in the 4-sphere. For doubly slice Montesinos links, this produces embeddings of Seifert fibered spaces in S^4. In this pre-talk, I'll discuss a construction and an obstruction to being doubly slice.
The classical statement that there are 27 lines on every smooth cubic surface in $\mathbb{P}^3$ fails to hold under tropicalization: a tropical cubic surface in $\mathbb{TP}^3$ often contains infinitely many tropical lines. This pathology can be corrected by reembedding the cubic surface in $\mathbb{P}^{44}$ via the anticanonical bundle.
Under this tropicalization, the 27 classical lines become an arrangement of metric trees in the boundary of the tropical cubic surface in $\mathbb{TP}^{44}$. A remarkable fact is that this arrangement completely determines the combinatorial structure of the corresponding tropical cubic surface. In this talk, we will describe their metric and topological type as we move along the moduli space of tropical cubic surfaces. Time permitting, we will discuss the matroid that emerges from their tropical convex hull.
This is joint work with Anand Deopurkar.
Which 3-manifolds smoothly embed in the 4-sphere? This seemingly simple question turns out to be rather subtle. Using Donaldson's theorem, we derive strong restrictions to embedding a Seifert fibered space over an orientable base surface, which in particular gives a complete classification when e > k/2, where k is the number of exceptional fibers and e is the normalized central weight. Our results point towards a couple of interesting conjectures which I'll discuss. This is joint work with Duncan McCoy.
In joint work with Sameer Iyer, the validity of the steady Prandtl layer expansion is established in a channel. Our result covers the celebrated Blasius boundary layer profile. The proof is based on uniform quotient estimates for the derivative Navier-Stokes equations, as well as a positivity estimate at the flow entrance.
It is anticipated that the invariant statistics of many smooth dynamical systems with a `chaotic' asymptotic character are given by invariant measures with the SRB property, a geometric property of invariant measures which, roughly, means that the invariant measure is smooth along unstable directions. However, actually verifying the existence of SRB measures for concrete systems is extremely challenging: indeed, SRB measures need not exist, even for systems exhibiting asymptotic hyperbolicity (e.g., the figure eight attractor).
The study of asymptotic properties for dynamical systems in the presence of noise is considerably simpler. One manifestation of this principle is the theorem of Ledrappier and Young '89, where it was proved that under very mild conditions, stationary measures for a random dynamical system with a positive Lyapunov exponent are automatically random SRB measures (that is, satisfy the random analogue of the SRB property). I will talk today about a new proof of this result in joint work with Lai-Sang Young. This new proof has the benefit of being (1) conceptually lucid and to-the-point (the original proof is somewhat indirect) and (2) potentially easily adapted to more general settings, e.g., to appropriate infinite-dimensional random dynamics, such as time-t solutions to certain classes of SPDEs (this generalization is ongoing work, joint with LSY).
Mathematical billiards naturally arise in mechanics, optics, acoustics, etc. They also form the most visual class of dynamical systems, with evolution covering the whole spectrum of behaviours from integrable (extremely regular) to strongly chaotic. A billiard is a (deterministic) dynamical system generated by the uniform (by inertia) motion of a point particle within a domain with piecewise smooth walls ("a billiard table"). I will introduce all needed notions on simple examples and outline some open problems. This talk is also a preparatory talk for a Mathematical Physics seminar (on Monday April 8) where a new direction of research will be discussed, which considers physical billiards, where a real physical hard sphere moves instead of a point (mathematical) particle. To the complete surprise of mathematicians and physicists, the evolution of a billiard may completely change (and in different ways) in the transition from mathematical to physical billiards. It is a rare example of mathematicians surprising physicists. Some striking results obtained with physicists are also already available. I will (again visually) explain at the end why it is surprising that there could be a difference between mathematical and physical billiards.
In the setup of classical knot theory---the study of embeddings of the circle into S^3---we recall two examples of classical knot invariants: the Alexander polynomial and the Seifert form.
We then introduce notions from knot-concordance theory, which is concerned with the study of slice surfaces of a knot K---surfaces embedded in the 4-ball B^4 with boundary the knot K. We will comment on the difference between the smooth and topological theory with a focus on a surprising feature of the topological theory: classical invariants govern the existence of slice surfaces of low genus in a way that is not the case in the smooth theory. This can be understood as an analogue of a dichotomy in the study of smooth and topological 4-manifolds.
In this talk we will discuss some extremal problems for polynomials. Applications to problems in discrete dynamical systems as well as in geometric complex analysis will be suggested.
Following an idea of Hugelmeyer, we give a knot-theoretic reproof of a theorem of Schnirelman: every smooth Jordan curve in the Euclidean plane has an inscribed square. We will comment on possible generalizations to more general Jordan curves.
Our main knot theory result is that the torus knot T(2n,1) in S^1xS^2 does not arise as the boundary of a locally-flat Moebius band in S^1xB^3 for square-free integers n>1. For context, we note that for n>2 and the smooth setting, this result follows from a result of Batson about the non-orientable 4-genus of certain torus knots. However, we show that Batson's result does not hold in the locally flat category: the smooth and topological non-orientable 4-genus differ for the T(9,10) torus knot in S^3.
Based on joint work with Marco Golla.
Computing the eigenvalues and eigenvectors of a large matrix is a basic task in high dimensional data analysis with many applications in computer science and statistics. In practice, however, data is often perturbed by noise. A natural question is the following: How much does a small perturbation to the matrix change the eigenvalues and eigenvectors? In this talk, I will consider the case where the perturbation is random. I will discuss perturbation results for the eigenvalues and eigenvectors as well as for the singular values and singular vectors. This talk is based on joint work with Van Vu, Ke Wang, and Philip Matchett Wood.
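A deterministic baseline for such questions is Weyl's inequality, which bounds every eigenvalue shift by the operator norm of the perturbation; the random-perturbation results in the talk improve on this worst-case picture. A small numerical check of Weyl's bound (the noise model here is a generic symmetric Gaussian perturbation, not the specific model from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# A deterministic symmetric "data" matrix and a small random symmetric
# perturbation (an illustrative noise model).
A = np.diag(np.linspace(0.0, 1.0, n))
G = rng.standard_normal((n, n))
E = 1e-3 * (G + G.T) / np.sqrt(2 * n)

eig_A = np.linalg.eigvalsh(A)      # sorted ascending
eig_AE = np.linalg.eigvalsh(A + E)

# Weyl's inequality: |lambda_i(A+E) - lambda_i(A)| <= ||E||_op for all i.
shift = np.max(np.abs(eig_AE - eig_A))
bound = np.linalg.norm(E, 2)
print("max eigenvalue shift:", shift, "  operator-norm bound:", bound)
```

For random E the typical shift of a bulk eigenvalue is usually much smaller than the worst-case operator-norm bound, which is the phenomenon the perturbation results in the talk quantify.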
In an optimal design problem, we are given a set of linear experiments v1,...,vn \in R^d and k >= d, and our goal is to select a set or a multiset S subseteq [n] of size k such that Phi((\sum_{i \in S} v_i v_i^T )^{-1}) is minimized. When Phi(M) = det(M)^{1/d}, the problem is known as the D-optimal design problem, and when Phi(M) = tr(M), it is known as the A-optimal design problem. One of the most common heuristics used in practice to solve these problems is the local search heuristic, also known as the Fedorov exchange method, owing to its simplicity and its empirical performance. However, despite its wide usage no theoretical bound had been proven for this algorithm. In this work, we bridge this gap and prove approximation guarantees for the local search algorithms for the D-optimal and A-optimal design problems. We show that the local search algorithms are asymptotically optimal when $\frac{k}{d}$ is large. In addition, we prove similar approximation guarantees for the greedy algorithms for the D-optimal and A-optimal design problems when k/d is large.
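The local search (Fedorov-exchange-style) heuristic for the D-optimal objective can be sketched as follows. This is an illustrative set (rather than multiset) version with first-improvement swaps; the problem sizes are arbitrary.

```python
import numpy as np
from itertools import product

def d_optimal_local_search(V, k, seed=0):
    """Local search for D-optimal design: starting from a random size-k
    subset S, swap a chosen experiment for an unchosen one whenever the
    swap increases log det(sum_{i in S} v_i v_i^T)."""
    rng = np.random.default_rng(seed)
    n, d = V.shape

    def logdet(idx):
        sign, ld = np.linalg.slogdet(V[idx].T @ V[idx])
        return ld if sign > 0 else -np.inf

    S = list(rng.choice(n, size=k, replace=False))
    improved = True
    while improved:
        improved = False
        best = logdet(S)
        for pos, j in product(range(k), range(n)):
            if j in S:
                continue
            cand = S.copy()
            cand[pos] = j          # exchange experiment at position pos for j
            if logdet(cand) > best + 1e-12:
                S, best, improved = cand, logdet(cand), True
    return sorted(S)

rng = np.random.default_rng(1)
V = rng.standard_normal((30, 4))   # n = 30 experiments in R^4
S = d_optimal_local_search(V, k=8)
print("selected design:", S)
```

Each pass costs O(kn) determinant evaluations; in practice rank-one update formulas make each exchange much cheaper, which is part of why the heuristic is so popular.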
In this talk I will discuss a particular fast-slow system and describe an averaging theorem for it. I will also explain how this fast-slow system arises in a certain problem of energy transport in an open system of interacting hard spheres. The technical difficulty lies in dealing with the singularities present and with the fact that the dynamics is fully coupled.
Unusual time.
In standard (mathematical) billiards a point particle moves uniformly in a billiard table with elastic reflections off the boundary. We show that in the transition from mathematical billiards to physical billiards, where a finite size hard sphere moves in the same billiard table, virtually anything may happen. Namely, a non-chaotic billiard may become chaotic and vice versa. Moreover, both these transitions may occur softly, i.e. for any (arbitrarily small) positive value of the radius of a physical particle, as well as by a "hard" transition, when the radius of the physical particle must exceed some critical strictly positive value. Such transitions may change a phase portrait of a mathematical billiard locally as well as completely (globally). These results are somewhat unexpected because for all standard examples of billiards their dynamics remains absolutely the same after the transition from a point particle to a finite size ("physical") particle. Moreover, we show that the character of the dynamics may change several times as the size of the particle increases. This approach has already produced the sensational result that a quantum system could be more chaotic than its classical counterpart.
We are going to talk about three topics. First, Principal Component Analysis (PCA) as a dimension reduction technique: we investigate how useful it is for real-life problems. The problem is that the spectrum of the covariance matrix is often wrongly estimated when the ratio of the sample size to the feature space dimension is not large enough. We show how to reconstruct the spectrum of the ground truth covariance matrix, given the spectrum of the estimated covariance, for multivariate normal vectors. We then present an algorithm for reconstructing the spectrum in the case of sparse matrices related to text classification.
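The spectrum distortion referred to above is easy to see numerically: when the dimension-to-sample-size ratio d/n is not small, the sample covariance eigenvalues spread out according to the Marchenko-Pastur law even when every true eigenvalue equals 1. This sketch only illustrates the phenomenon, not the reconstruction algorithm from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 500, 1000   # feature dimension and sample size; d/n = 0.5 is not small

# Ground truth: identity covariance, so every true eigenvalue equals 1.
X = rng.standard_normal((n, d))
sample_cov = X.T @ X / n
eigs = np.linalg.eigvalsh(sample_cov)

# Marchenko-Pastur predicts the sample eigenvalues fill the interval
# [(1 - sqrt(d/n))^2, (1 + sqrt(d/n))^2] even though the truth is all 1s.
gamma = d / n
print("sample spectrum range:", eigs.min(), eigs.max())
print("MP edges:", (1 - gamma**0.5) ** 2, (1 + gamma**0.5) ** 2)
```

Recovering the flat true spectrum from this spread-out sample spectrum is exactly the kind of inverse problem the reconstruction result addresses.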
In the second part, we concentrate on schemes of PCA estimators. Consider the problem of finding the least eigenvalue and the corresponding eigenvector of the ground truth covariance matrix; a famous classical estimator is due to Krasulina. We state the convergence proof of Krasulina's estimator for the least eigenvalue and corresponding eigenvector, and then establish its convergence rate.
In the last part, we consider an application, text classification, in the supervised setting with the traditional Naive Bayes method. We derive an updated Naive Bayes method with a new loss function, which loses the unbiasedness of the traditional Naive Bayes method but obtains a smaller variance of the estimator.
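For reference, here is a minimal sketch of the traditional multinomial Naive Bayes baseline that the modified method starts from; the new loss function itself is not shown, and the toy corpus and smoothing parameter are illustrative.

```python
import numpy as np

def train_multinomial_nb(X, y, alpha=1.0):
    """Traditional multinomial Naive Bayes with Laplace smoothing.
    X: (n_docs, n_words) word-count matrix; y: class labels."""
    classes = np.unique(y)
    log_prior = np.log(np.array([(y == c).mean() for c in classes]))
    counts = np.array([X[y == c].sum(axis=0) + alpha for c in classes])
    log_lik = np.log(counts / counts.sum(axis=1, keepdims=True))
    return classes, log_prior, log_lik

def predict_nb(model, X):
    classes, log_prior, log_lik = model
    scores = X @ log_lik.T + log_prior   # log posterior up to a constant
    return classes[np.argmax(scores, axis=1)]

# Tiny toy corpus: class 0 favors word 0, class 1 favors word 2.
X = np.array([[5, 1, 0], [4, 2, 1], [0, 1, 5], [1, 0, 4]])
y = np.array([0, 0, 1, 1])
model = train_multinomial_nb(X, y)
print(predict_nb(model, np.array([[6, 1, 0], [0, 2, 6]])))  # -> [0 1]
```

The per-class word probabilities estimated here are the quantities whose bias-variance trade-off the modified loss function in the thesis adjusts.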
Committee: Heinrich Matzinger (Advisor); Karim Lounici (Advisor); Ionel Popescu (School of Math); Federico Bonetto (School of Math); Xiaoming Huo (School of ISyE).
In this talk, we will study Seifert fibered three-manifolds. While simple to define, they comprise 6 of the 8 Thurston geometries, and are an important testing ground for many questions and invariants. We will present several constructions/definitions of these manifolds and learn how to work with them explicitly.
In this talk we discuss the following problem, due to Peskine and Kollar: let E be a flat family of rank-two bundles on P^n parametrized by a smooth variety T. If E_t is isomorphic to O(a)+O(b) for general t in T, does it follow that E_0 is isomorphic to O(a)+O(b) for a special point 0 in T? We construct counter-examples over P^1 and P^2, and discuss the problem over P^3 and higher P^n.
Inference (aka predictive modeling) is at the core of many data science problems. Traditional approaches can be either statistically or computationally efficient, but not necessarily both. The existing principles behind these models, such as the maximum likelihood estimation principle, were developed decades ago and do not take into account new aspects of the data, such as their large volume, variety, velocity and veracity. On the other hand, many existing empirical algorithms perform extremely well in a wide spectrum of applications, such as the deep learning framework; however, they lack the theoretical guarantees of the classical methods. We aim to develop new algorithms that are both computationally efficient and statistically optimal. Such work is fundamental in nature, yet will have significant impact on the data science problems one encounters in practice. Following this spirit, I will describe a set of my past and current projects, including L1-based relaxation, fast nonlinear correlation, optimality of detectability, and nonconvex regularization. All of them integrate statistical and computational considerations to develop data analysis tools.
We will use Heegaard Floer homology to analyze maps between a certain family of three-manifolds akin to the Gromov norm/hyperbolic volume. Along the way, we will study the Heegaard Floer homology of splices. This is joint work with Cagri Karakurt and Eamonn Tweedy.
Unusual time.
Mercury is entrapped in a 3:2 resonance: it rotates on its axis three times for every two revolutions it makes around the Sun. It is generally accepted that this is due to the large value of Mercury's eccentricity. However, the mathematical model commonly used to study the problem -- sometimes called the spin-orbit model -- proved not to be entirely convincing, because of the expression used for the tidal torque. Only recently, a different model for the tidal torque has been proposed, with the advantage of both being more realistic and providing a higher probability of capture into the 3:2 resonance with respect to the previous models. On the other hand, a drawback of the model is that the function describing the tidal torque is not smooth and appears as a superposition of peaks, so that both analytical and numerical computations turn out to be rather delicate. We shall present numerical and analytical results about the nature of the librations of Mercury's spin in the 3:2 resonance, as predicted by the realistic model. In particular we shall provide evidence that the librations are quasi-periodic in time, so that the very concept of resonance should be revisited. The analytical results are mainly based on perturbation theory and leave several open problems, that we shall discuss.
We define the notion of a knot type having Legendrian large cables and show that having this property implies that the knot type is not uniformly thick. Moreover, there are solid tori in this knot type that do not thicken to a solid torus with integer sloped boundary torus, and that exhibit new phenomena; specifically, they have virtually overtwisted contact structures. We then show that there exists an infinite family of ribbon knots that have Legendrian large cables. These knots fail to be uniformly thick in several ways not previously seen. We also give a general construction of ribbon knots, and show when they give similar such examples.
When equiangular tight frames (ETFs), a type of structured optimal packing of lines, exist and are of size $|\Phi|=N$, $\Phi\subset\mathbb{F}^d$ (where $\mathbb{F}=\mathbb{R}$, $\mathbb{C}$, or $\mathbb{H}$), then for $p > 2$ the so-called $p$-frame energy $E_p(\Phi)=\sum\limits_{i\neq j} |\langle \varphi_{i}, \varphi_{j} \rangle|^p$ achieves its minimum value on an ETF over all collections of $N$ unit vectors. These energies have potential functions which are not positive definite when $p$ is not even, and in these cases describing the minimizers of these energies appears to be a complex problem. While there are several open questions about the structure of these minimizing sets for fixed $N$ and fixed $p$, we focus on another question:
What structural properties are expressed by minimizing probability measures for the quantity $I_{p}(\mu)=\int\limits_{\mathbb{S}_{\mathbb{F}}^{d-1}}\int\limits_{\mathbb{S}_{\mathbb{F}}^{d-1}} |\langle x, y \rangle|^p d\mu(x) d\mu(y)$?
We collect a number of surprising observations. Whenever a tight spherical or projective $t$-design exists for the sphere $\mathbb{S}_{\mathbb{F}}^d$, equally distributing mass over it gives a minimizer of the quantity $I_{p}$ for a range of $p$ between consecutive even integers associated with the strength $t$. We show existence of discrete minimizers for several related potential functions, along with conditions which guarantee emptiness of the interior of the support of minimizers for these energies.
This talk is based on joint work with D. Bilyk, A. Glazyrin, R. Matzke, and O. Vlasiuk.
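The $p$-frame energy defined above is straightforward to evaluate numerically. As a sanity check, for the simplex ETF of three unit vectors in R^2 (mutual angle 120 degrees) every off-diagonal inner product has absolute value 1/2:

```python
import numpy as np

def p_frame_energy(Phi, p):
    """E_p(Phi) = sum over i != j of |<phi_i, phi_j>|^p for unit
    vectors given as the rows of Phi."""
    G = np.abs(Phi @ Phi.conj().T)   # absolute Gram matrix
    np.fill_diagonal(G, 0.0)         # drop the i = j terms
    return (G ** p).sum()

# Simplex ETF in R^2: three unit vectors at mutual angle 120 degrees.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
Phi = np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(p_frame_energy(Phi, p=3))   # 6 ordered pairs * (1/2)^3 = 0.75
```

Replacing the double sum by a double integral against a probability measure $\mu$ gives exactly the quantity $I_p(\mu)$ whose minimizers the talk studies.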
The Casson invariant is defined for the class of oriented integral homology 3-spheres. It satisfies certain properties, and reduces to the Rohlin invariant mod 2. We will define the Casson invariant as half of the algebraic intersection number of irreducible representation spaces (spaces consisting of representations of the fundamental group into SU(2)), and then prove that this definition satisfies the expected properties.
We discuss a general approach to the problem of estimating a smooth function $f(\theta)$ of a high-dimensional parameter $\theta$ of statistical models. In particular, in the case of $n$ i.i.d. Gaussian observations $X_1,\dots, X_n$ with mean $\mu$ and covariance matrix $\Sigma$, the unknown parameter is $\theta = (\mu, \Sigma)$ and our approach yields an estimator of $f(\theta)$ for a function $f$ of smoothness $s>0$ with mean squared error of the order $(\frac{1}{n} \vee (\frac{d}{n})^s) \wedge 1$ (provided that the Euclidean norm of $\mu$ and the operator norms of $\Sigma,\Sigma^{-1}$ are uniformly bounded), with the error rate being minimax optimal up to a log factor (joint result with Mayya Zhilova). The construction of optimal estimators crucially relies on a new bias reduction method in high-dimensional problems, and the bounds on the mean squared error are based on controlling finite differences of smooth functions along certain Markov chains in high-dimensional parameter spaces as well as on concentration inequalities.
In a fractional coloring, vertices of a graph are assigned subsets of the [0, 1]-interval such that adjacent vertices receive disjoint subsets. The fractional chromatic number of a graph is at most k if it admits a fractional coloring in which the amount of "color" assigned to each vertex is at least 1/k. We investigate fractional colorings where vertices "demand" different amounts of color, determined by local parameters such as the degree of a vertex. Many well-known results concerning the fractional chromatic number and independence number have natural generalizations in this new paradigm. We discuss several such results as well as open problems. In particular, we will sketch a proof of a "local demands" version of Brooks' Theorem that considerably generalizes the Caro-Wei Theorem and implies new bounds on the independence number. Joint work with Luke Postle.
Milnor K-theory is a field invariant that originated as an attempt to study algebraic K-theory. Instead, Milnor K-theory has proved to have many other applications, including Galois cohomology computations, Voevodsky's proof of the Bloch-Kato conjecture, and Kato's higher class field theory. In this talk, we will go over the basic definitions and theorems of Milnor K-theory. We will also discuss some of these applications.
Stability is a multivariate generalization for real-rootedness in univariate polynomials. Within the past ten years, the theory of stable polynomials has contributed to breakthroughs in combinatorics, convex optimization, and operator theory. I will introduce a generalization of stability, called complete log-concavity, that satisfies many of the same desirable properties. These polynomials were inspired by work of Adiprasito, Huh, and Katz on combinatorial Hodge theory, but can be defined and understood in elementary terms. The structure of these polynomials is closely tied with notions of discrete convexity, including matroids, submodular functions, and generalized permutohedra. I will discuss the beautiful real and combinatorial geometry underlying these polynomials and applications to matroid theory, including a proof of Mason’s conjecture on numbers of independent sets. This is based on joint work with Nima Anari, Kuikui Liu, and Shayan Oveis Gharan.
(*Refreshments available at 2:30pm before the colloquium.*)
In this talk we will follow the paper titled "Aubry-Mather theory for homeomorphisms", in which a variational approach is developed to study the dynamics of a homeomorphism on a compact metric space. In particular, orbits along which any Lipschitz Lyapunov function has to be constant are described via a non-negative Lipschitz semidistance. This is work of Albert Fathi and Pierre Pageault.
Stability and bifurcation conditions for a vibroimpact motion in an inclined energy harvester with T-periodic forcing are determined analytically and numerically. This investigation provides a better understanding of impact velocity and its influence on energy harvesting efficiency and can be used to optimally design the device. The numerical and analytical results of periodic motions are in excellent agreement. The stability conditions are developed in non-dimensional parameter space through two basic nonlinear maps based on switching manifolds that correspond to impacts with the top and bottom membranes of the energy harvesting device. The range for stable simple T-periodic behavior is reduced with increasing angle of incline β, since the influence of gravity increases the asymmetry of dynamics following impacts at the bottom and top. These asymmetric T-periodic solutions lose stability to period doubling solutions for β ≥ 0, which appear through increased asymmetry. The period doubling, symmetric and asymmetric periodic motion are illustrated by bifurcation diagrams, phase portraits and velocity time series.
I will give a brief survey of concordance in high-dimensional knot theory and how slice results have classically been obtained in this setting with the aid of surgery theory. Time permitting, I will then discuss an example of how some non-abelian slice obstructions come into the picture for 1-knots, as intuition for the seminar talk about L^2 invariants.
Tropical geometry provides a new set of purely combinatorial tools, which has been used to approach classical problems. In tropical geometry most algebraic computations are done on the classical side, using the algebra of the original variety. The theory developed so far has explored the geometric aspect of tropical varieties as opposed to the underlying (semiring) algebra, and there are still many commutative algebra tools and notions without a tropical analogue. In recent years, a lot of effort has been dedicated to developing the necessary tools for commutative algebra in different frameworks, among them prime congruences, tropical ideals, and tropical schemes. These approaches allow for the exploration of the properties of tropicalized spaces without tying them to the original varieties, working instead with geometric structures inherently defined over characteristic-one (that is, additively idempotent) semifields. In this talk we explore the relationship between tropical ideals and congruences to conclude that the variety of a prime (tropical) ideal is either empty or consists of a single point. This is joint work with D. Joó.
The question of which high-dimensional knots are slice was entirely solved by Kervaire and Levine. Compared to this, the question of which knots are doubly slice in high-dimensions is still a largely open problem. Ruberman proved that in every dimension, some version of the Casson-Gordon invariants can be applied to obtain algebraically doubly slice knots that are not doubly slice. I will show how to use L^2 signatures to recover the result of Ruberman for (4k-3)-dimensional knots, and discuss how the derived series of the knot group might be used to organise the high-dimensional doubly slice problem.
In this talk I will summarize some of our contributions to the analysis of the parabolic-elliptic Keller-Segel system, a typical model in chemotaxis. For the case of linear diffusion, after introducing the critical mass in two dimensions, I will show our result on blow-up conditions in higher dimensions. The second part of the talk concentrates on the critical exponent for the Keller-Segel system with porous-medium-type diffusion. In the end, motivated by results on the nonlocal Fisher-KPP equation, we show that the nonlocal reaction also helps prevent blow-up of the solutions.
Optimal transport is a thoroughly studied field in mathematics which introduces the concept of the Wasserstein distance, widely used in various applications in computational mathematics and machine learning as well as many areas of engineering. Meanwhile, control theory and path planning form an active branch of mathematics and robotics, focusing on algorithms that calculate feasible or optimal paths for robotic systems. In this defense, we use the properties of gradient flows in the Wasserstein metric to design algorithms to handle different types of path planning and control problems, as well as K-means problems defined on graphs.
We will see some instances of swindles in mathematics, primarily focusing on some in geometric topology due to Barry Mazur.
We discuss the asymptotic value of the maximal perimeter of a convex set in an n-dimensional space with respect to certain classes of measures. Firstly, we derive a lower bound for this quantity for a large class of probability distributions; the lower bound depends on the moments only. This lower bound is sharp in the case of the Gaussian measure (as was shown by Nazarov in 2001), and, more generally, in the case of rotation invariant log-concave measures (as I showed in 2014). We discuss another class of measures for which this bound is sharp. For isotropic log-concave measures, the value of the lower bound is at least n^{1/8}.
In addition, we show a uniform upper bound of Cn||f||^{1/n}_{\infty} for all log-concave measures in a special position, which is attained for the uniform distribution on the cube. We further estimate the maximal perimeter of isotropic log-concave measures by n^2.
The well known Erdos-Hajnal Conjecture states that every graph has the Erdos-Hajnal (EH) property. That is, for every $H$, there exists a $c=c(H)>0$ such that every graph $G$ with no induced copy of $H$ has the property $hom(G):=max\{\alpha(G),\omega(G)\}\geq |V(G)|^{c}$. Let $H,J$ be subdivisions of caterpillar graphs. Liebenau, Pilipczuk, Seymour and Spirkl proved that the EH property holds if we forbid both $H$ and $\overline{J}.$ We will discuss the proof of this result.
I will talk about a conjecture that in Gibbs states of one-dimensional spin chains with short-ranged gapped Hamiltonians, the quantum conditional mutual information (QCMI) between the parts of the chain decays exponentially with the length of separation between said parts. The smallness of QCMI enables efficient representation of these states as tensor networks, which allows their efficient construction and fast computation of global quantities, such as entropy. I will present the known partial results toward a proof of the conjecture, and discuss promising approaches to the proof and the obstacles encountered.
This talk will be about polynomial decompositions that are relevant in machine learning. I will start with the well-known low-rank symmetric tensor decomposition, and present a simple new algorithm with local convergence guarantees, which seems to handily outperform the state-of-the-art in experiments. Next I will consider a particular generalization of symmetric tensor decomposition, and apply this to estimate subspace arrangements from very many, very noisy samples (a regime in which current subspace clustering algorithms break down). Finally I will switch gears and discuss representability of polynomials by deep neural networks with polynomial activations. The various polynomial decompositions in this talk motivate questions in commutative algebra, computational algebraic geometry and optimization. The first part of this talk is joint with Emmanuel Abbe, Tamir Bendory, Joao Pereira and Amit Singer, while the latter part is joint with Matthew Trager.
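To fix ideas about the first topic (this is the classical order-2 analogue, not the speaker's new algorithm): the best rank-1 approximation of a symmetric matrix is given by its top eigenpair, which plain power iteration recovers.

```python
# Order-2 analogue of rank-1 symmetric tensor approximation: the top
# eigenpair of a symmetric matrix, found by power iteration.
import math

def power_iteration(A, iters=200):
    n = len(A)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue estimate.
    lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric; eigenvalues are 3 and 1
lam, v = power_iteration(A)
print(round(lam, 6))           # → 3.0
```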
The talk presents an extension to high dimensions of an idea from a recent result concerning near-optimal adaptive finite element methods (AFEM). The usual adaptive strategy for finding conforming partitions in AFEM is "mark → subdivide → complete". In this strategy any element can be marked for subdivision, but since the resulting partition often contains hanging nodes, additional elements have to be subdivided in the completion step to obtain a conforming partition. This process is very well understood for triangulations obtained via the newest-vertex-bisection procedure. In particular, it is proven that the number of elements in the final partition is bounded by a constant times the number of marked cells. This motivated us [B., Fierro, Veeser, in preparation] to design a marking procedure restricted to cells of the partition whose subdivision results in a conforming partition, so that no completion step is necessary. We also proved that this procedure is near-best in terms of both error of approximation and complexity. This result is formulated in terms of tree approximations and opens the possibility of designing similar algorithms in high dimensions using the sparse occupancy trees introduced in [B., Dahmen, Lamby, 2011]. The talk describes the framework of approximating high-dimensional data using conforming sparse occupancy trees.
One can regard a (trained) feedforward neural network as a particular type of function $f: \mathbb{R}^n \to [0,1]$, where $\mathbb{R}^n$ is a (typically high-dimensional) Euclidean space parameterizing some data set, and the value of $f$ on a data point is the probability that the answer to a particular yes/no question is "yes." It is a classical result in the subject that a sufficiently complex neural network can approximate any continuous function on a bounded set. Last year, J. Johnson proved that universality results of this kind depend on the architecture of the neural network (the number and dimensions of its hidden layers). His argument was novel in that it provided an explicit topological obstruction to representability of a function by a neural network, subject to certain simple constraints on its architecture. I will tell you just enough about neural networks to understand how Johnson's result follows from some very simple ideas in piecewise linear geometry. Time permitting, I will also describe some joint work in progress with K. Lindsey aimed at developing a general theory of how the architecture of a neural network constrains its topological expressiveness.
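The piecewise-linear viewpoint can be made concrete with a toy one-hidden-layer ReLU network (all weights below are arbitrary illustrative choices): away from the hidden-unit breakpoints, second differences of the output vanish, confirming local linearity.

```python
# Sketch: a one-hidden-layer ReLU network R -> R is piecewise linear.
# Weights are arbitrary illustrative choices, not from any trained model.

def relu(t):
    return max(t, 0.0)

def net(x):
    # hidden layer: 3 ReLU units, then a linear output layer
    h = [relu(1.0 * x - 1.0), relu(-2.0 * x + 1.0), relu(0.5 * x)]
    return 2.0 * h[0] - 1.0 * h[1] + 3.0 * h[2] + 0.25

# The breakpoints are x = 1, 1/2, 0; away from them the second
# difference vanishes, i.e. the network is locally affine.
second_diff = net(1.1) - 2 * net(1.2) + net(1.3)
print(abs(second_diff) < 1e-9)  # → True
```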
We study whether all stationary solutions of the 2D Euler equation must be radially symmetric, if the vorticity is compactly supported or has some decay at infinity. Our main results are the following:
(1) On the one hand, we are able to show that for any non-negative smooth stationary vorticity that is compactly supported (or has certain decay as |x|->infty), it must be radially symmetric up to a translation.
(2) On the other hand, if we allow vorticity to change sign, then by applying bifurcation arguments to sign-changing radial patches, we are able to show that there exists a compactly-supported, sign-changing smooth stationary vorticity that is non-radial.
We have also obtained some symmetry results for uniformly rotating solutions of the 2D Euler equation, as well as stationary/rotating solutions of the SQG equation. The symmetry results are mainly obtained by calculus of variations and elliptic equation techniques. This is joint work with Javier Gomez-Serrano, Jia Shi and Yao Yao.
First talk at 4:00 by Ananth Shankar (MIT http://math.mit.edu/~ananths/)
Exceptional splitting of abelian surfaces over global function fields.
Let A denote a non-constant ordinary abelian surface over a global function field (of characteristic p > 2) with good reduction everywhere. Suppose that $A$ does not have real multiplication by any real quadratic field with discriminant a multiple of $p$. Then we prove that there are infinitely many places modulo which $A$ is isogenous to the product of two elliptic curves. If time permits, I will also talk about applications of our results to the p-adic monodromy of such abelian surfaces. This is joint work with Davesh Maulik and Yunqing Tang.
Second talk at 5:15 by Jordan Ellenberg (University of Wisconsin http://www.math.wisc.edu/~ellenber/)
What is the tropical Ceresa class and what should it be?
This is a highly preliminary talk about joint work with Daniel Corey and Wanlin Li. The Ceresa cycle is an algebraic cycle canonically attached to a curve C, which appears in an intriguing variety of contexts: its height can sometimes be interpreted as a special value; the vanishing of its cycle class is related to the Galois action on the nilpotent fundamental group; it vanishes on hyperelliptic curves; etc. In practice it is not easy to compute, and we do not in fact know an explicit non-hyperelliptic curve whose Ceresa class vanishes. We will discuss a definition of the Ceresa class for a tropical curve, explain how to compute it in certain simple cases, and describe progress towards understanding whether it is possible for the Ceresa class of a non-hyperelliptic tropical curve to vanish. (The answer is: "sort of".) The tropical Ceresa class sits at the interface of algebraic geometry, graph theory (because a tropical curve is more or less a metric graph), and topology: we can also frame the tropical Ceresa class as an entity governed by the mapping class group, and in particular by the question of when a product of commuting Dehn twists can commute with a hyperelliptic involution in the quotient of the mapping class group by the Johnson kernel.
During the last 30 years there has been much interest in random graph processes, i.e., random graphs which grow by adding edges (or vertices) step-by-step in some random way. Part of the motivation stems from more realistic modeling, since many real world networks such as Facebook evolve over time. Further motivation stems from extremal combinatorics, where these processes lead to some of the best known bounds in Ramsey and Turan Theory (that go beyond textbook applications of the probabilistic method). I will review several random graph processes of interest, and (if time permits) illustrate one of the main proof techniques using a simple toy example.
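A minimal toy process of the kind described above (my own illustration, not necessarily the speaker's example): add uniformly random edges one at a time and record the step at which the graph becomes connected, tracked with union-find.

```python
# Toy random graph process: add uniformly random edges one at a time to
# n vertices and record the step at which the graph becomes connected.
import random

def connectivity_time(n, seed=0):
    rng = random.Random(seed)
    parent = list(range(n))

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    components, steps = n, 0
    while components > 1:
        u, v = rng.sample(range(n), 2)
        steps += 1
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return steps

# Classical heuristic: connectivity appears around (n/2) * log(n) edges.
print(connectivity_time(100))
```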
By a theorem of Hilbert, a non-negative homogeneous polynomial (form) over $\mathbb{R}$ is a sum of squares exactly when it is bivariate, a quadratic form, or a ternary quartic. Once we know a form is a sum of squares, a natural next question is how many squares are needed to represent it. We call the minimal number of summands in such a representation the rank (of the sum of squares). Ranks of some classes of forms are known. For example, any bivariate form (allowing all monomials) can be written as a sum of $2$ squares (i.e., its rank is $2$), and every nonnegative ternary quartic can be written as a sum of $3$ squares (i.e., its rank is $3$). Our question is: if we do not allow some monomials in a bivariate form, what is its rank? In the talk, we will introduce this problem with an algebraic-geometry flavor and provide some notions and tools to deal with it.
I will talk about the structure of large square random matrices with centered i.i.d. heavy-tailed entries (only two finite moments are assumed). In our previous work with R. Vershynin we showed that the operator norm of such a matrix A can be reduced to the optimal sqrt(n)-order with high probability by zeroing out a small submatrix of A, but we did not describe the structure of this "bad" submatrix, nor provide a constructive way to find it. Now we can give a very simple description of this small "bad" subset: it is enough to zero out a small fraction of the rows and columns of A with the largest L2 norms to bring its operator norm to the almost optimal sqrt(loglog(n)*n)-order, under the additional assumption that the entries of A are symmetrically distributed. As a corollary, one can also obtain a constructive procedure to find a small submatrix of A that one can zero out to achieve the same regularization.
I am planning to discuss some details of the proof, the main component of which is the development of techniques that extend constructive regularization approaches known for the Bernoulli matrices (from the works of Feige and Ofek, and Le, Levina and Vershynin) to the considerably broader class of heavy-tailed random matrices.
Please note the unusual day.
New and proposed missions for approaching moons, and particularly icy moons, increasingly require the design of trajectories within challenging multi-body environments that stress or exceed the capabilities of the two-body design methodologies typically used over the last several decades. These current methods encounter difficulties because they often require appreciable user interaction, result in trajectories that require significant amounts of propellant, or miss potential mission-enabling options. The use of dynamical systems methods applied to three-body and multi-body models provides a pathway to obtain a fuller theoretical understanding of the problem that can then result in significant improvements to trajectory design in each of these areas. The search for approach trajectories within highly nonlinear, chaotic regimes where multi-body effects dominate becomes increasingly complex, especially when landing, orbiting, or flyby scenarios must be considered in the analysis. In the case of icy moons, approach trajectories must also be tied into the broader tour which includes flybys of other moons. The tour endgame typically includes the last several flybys, or resonances, before the final approach to the moon, and these resonances further constrain the type of approach that may be used.
In this seminar, new methods for approaching moons by traversing the chaotic regions near the Lagrange point gateways will be discussed for several examples. The emphasis will be on landing trajectories approaching Europa including a global analysis of trajectories approaching any point on the surface and analyses for specific landing scenarios across a range of different energies. The constraints on the approach from the tour within the context of the endgame strategy will be given for a variety of different moons and scenarios. Specific approaches using quasiperiodic or Lissajous orbits will be shown, and general landing and orbit insertion trajectories will be placed into context relative to the invariant manifolds of unstable periodic and quasiperiodic orbits. These methods will be discussed and applied for the specific example of the Europa Lander mission concept. The Europa Lander mission concept is particularly challenging in that it requires the redesign of the approach scenario after the spacecraft has launched to accommodate landing at a wide range of potential locations on the surface. The final location would be selected based on reconnaissance from the Europa Clipper data once Europa Lander is en route. Taken as a whole, these methods will provide avenues to find both fundamentally new approach pathways and reduce cost to enable new missions.
Multidimensional data is ubiquitous in applications, e.g., images and videos. I will introduce some of my previous and current work related to this topic.
1) Lattice metric space and its applications. Lattice and superlattice patterns are found in materials science, nonlinear optics, and sampling design. We propose a lattice metric space based on modular group theory and metric geometry, which provides a visually consistent measure of dissimilarity among lattice patterns. We apply this framework to superlattice separation and grain defect detection.
2) We briefly introduce two current projects. First, we propose new algorithms for automatic PDE modeling, which drastically improve efficiency and robustness against additive noise. Second, we introduce a new model for surface reconstruction from point cloud data (PCD) and provide a fast ADMM-type algorithm.
In this talk we study master equations arising from mean field game
problems, under the crucial monotonicity conditions.
Classical solutions of such equations require very strong technical
conditions. Moreover, unlike the master equations arising from mean
field control problems, the mean field game master equations are
non-local and even classical solutions typically do not satisfy the
comparison principle, so the standard viscosity solution approach seems
infeasible. We shall propose a notion of weak solution for such
equations and establish its well-posedness. Our approach relies on a new
smooth mollifier for functions of measures, which unfortunately does not
keep the monotonicity property, and the stability result of master
equations. The talk is based on a joint work with Jianfeng Zhang.
An electron interacting with the vibrational modes of a polar crystal is called a polaron. Polarons are the simplest Quantum Field Theory models, yet their most basic features such as the effective mass, ground-state energy and wave function cannot be evaluated explicitly. And while several successful theories have been proposed over the years to approximate the energy and effective mass of various polarons, they are built entirely on unjustified, even questionable, Ansätze for the wave function.
In this talk I shall provide the first explicit description of the ground-state wave function of a polaron in an asymptotic regime: For the Fröhlich polaron localized in a Coulomb potential and exposed to a homogeneous magnetic field of strength $B$ it will be shown that the ground-state electron density in the direction of the magnetic field converges pointwise and in a weak sense as $B\rightarrow\infty$ to the square of a hyperbolic secant function--a sharp contrast to the Gaussian wave functions suggested in the physics literature.
In independent bond percolation with parameter p, if one removes the vertices of the infinite cluster (and incident edges), for which values of p does the remaining graph contain an infinite cluster? Grimmett-Holroyd-Kozma used the triangle condition to show that for d > 18, the set of such p contains values strictly larger than the percolation threshold pc. With the work of Fitzner-van der Hofstad, this has been reduced to d > 10. We reprove this result by showing that for d > 10 and some p > pc, there are infinite paths consisting of "shielded" vertices --- vertices all of whose adjacent edges are closed --- which must be in the complement of the infinite cluster. Using numerical values of pc, this bound can be reduced to d > 7. Our methods are elementary and do not require the triangle condition.
Invasion percolation is a stochastic growth model that follows a greedy algorithm. After assigning i.i.d. uniform random variables (weights) to all edges of d-dimensional space, the growth starts at the origin. At each step, we adjoin to the current cluster the edge of minimal weight from its boundary. In '85, Chayes-Chayes-Newman studied the "acceptance profile" of the invasion: for a given p in [0,1], it is the ratio of the expected number of invaded edges until time n with weight in [p,p+dp] to the expected number of observed edges (those in the cluster or its boundary) with weight in the same interval. They showed that in all dimensions, the acceptance profile an(p) converges to one for p < pc and to zero for p > pc. In this work, we consider an(p) at the critical point p=pc in two dimensions and show that it is bounded away from zero and one as n goes to infinity.
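The greedy rule described above can be sketched directly (a simplified illustration on the 2-D lattice with lazily assigned weights, not the authors' code):

```python
# Sketch of invasion percolation on the 2-D integer lattice: i.i.d.
# uniform edge weights, greedily invade the minimum-weight boundary edge.
import heapq, random

def invade(n_steps, seed=0):
    rng = random.Random(seed)
    weights = {}                        # lazily assigned i.i.d. edge weights

    def weight(edge):
        return weights.setdefault(edge, rng.random())

    def edges(v):
        x, y = v
        for u in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            yield (min(v, u), max(v, u)), u   # canonical edge key, neighbor

    cluster = {(0, 0)}
    boundary = [(weight(e), u) for e, u in edges((0, 0))]
    heapq.heapify(boundary)
    invaded = []
    while len(invaded) < n_steps:
        w, u = heapq.heappop(boundary)
        if u in cluster:                # stale heap entry, skip
            continue
        cluster.add(u)
        invaded.append(w)
        for e, t in edges(u):
            if t not in cluster:
                heapq.heappush(boundary, (weight(e), t))
    return invaded

ws = invade(500)
# Invaded weights concentrate below the critical value pc = 1/2 for Z^2.
print(sum(1 for w in ws if w < 0.5) / len(ws))
```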
For a first-order (deterministic) mean-field game with non-local running and initial couplings, we construct a classical solution of the associated so-called master equation, a partial differential equation in an infinite-dimensional space with a non-local term, assuming the time horizon is sufficiently small and the coefficients are smooth enough, without convexity conditions on the Hamiltonian.
In network routing, users often trade off different objectives when selecting their best route. An example is transportation networks, where, due to uncertainty in travel times, drivers may trade off the average travel time of a route against its variance. Or they might trade off time and cost, such as the cost paid in tolls.
We wish to understand the effect of two conflicting criteria in route selection, by studying the resulting traffic assignment (equilibrium) in the network. We investigate two perspectives of this topic: (1) How does the equilibrium cost of a risk-averse population compare to that of a risk-neutral population? (i.e., how much longer do we spend in traffic due to being risk-averse) (2) How does the equilibrium cost of a heterogeneous population compare to that of a comparable homogeneous user population?
We provide characterizations to both questions above.
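A toy two-route instance of this tradeoff (all parameters hypothetical, not from the speaker's work) already shows how risk aversion shifts the equilibrium split: users perceive cost as mean plus gamma times standard deviation, and flows equalize perceived costs.

```python
# Toy mean-variance routing: demand 1 split between a congestible but
# noisy route (mean cost x, std sigma) and a safe constant route (cost 1).
# Risk-averse users minimize mean + gamma * std; parameters illustrative.

def equilibrium_flow(gamma, sigma):
    """Flow x on the noisy route where perceived costs equalize:
    x + gamma * sigma = 1, clipped to [0, 1]."""
    return min(1.0, max(0.0, 1.0 - gamma * sigma))

def expected_total_cost(x):
    # flow-weighted mean travel cost across both routes
    return x * x + (1.0 - x) * 1.0

x_neutral = equilibrium_flow(gamma=0.0, sigma=0.5)   # risk-neutral users
x_averse = equilibrium_flow(gamma=1.0, sigma=0.5)    # risk-averse users
print(x_neutral, x_averse)               # → 1.0 0.5
print(expected_total_cost(x_neutral))    # → 1.0
```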
Based on joint work with Richard Cole, Thanasis Lianeas and Nicolas Stier-Moses.
At the end I will mention current work of my research group on algorithms and mechanism design for power systems.
Biography: Evdokia Nikolova is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin, where she is a member of the Wireless Networking & Communications Group. Previously she was an Assistant Professor in Computer Science and Engineering at Texas A&M University. She graduated with a BA in Applied Mathematics with Economics from Harvard University, MS in Mathematics from Cambridge University, U.K. and Ph.D. in Computer Science from MIT.
Nikolova's research aims to improve the design and efficiency of complex systems (such as networks and electronic markets), by integrating stochastic, dynamic and economic analysis. Her recent work examines how human risk aversion transforms traditional computational models and solutions. One of her algorithms has been adapted in the MIT CarTel project for traffic-aware routing. She currently focuses on developing algorithms for risk mitigation in networks, with applications to transportation and energy. She is a recipient of an NSF CAREER award and a Google Faculty Research Award. Her research group has been recognized with a best student paper award and a best paper award runner-up. She currently serves on the editorial board of the journal Mathematics of Operations Research.
One of the classical problems in scissors congruence is
this: given two polytopes in $n$-dimensional Euclidean space, when is
it possible to decompose them into finitely many pieces which are
pairwise congruent via translations? A complete set of invariants is
provided by the Hadwiger invariants, which measure "how much area is
pointing in each direction." Proving that these give a complete set
of invariants is relatively straightforward, but determining the
relations between them is much more difficult. This was done by
Dupont, in a 1982 paper. Unfortunately, this result is difficult to
describe and work with: it uses group homological techniques which
produce a highly opaque formula involving twisted coefficients and
relations in terms of uncountable sums. In this talk we will discuss
a new perspective on Dupont's proof which, together with more
topological simplicial techniques, simplifies and clarifies the
classical results. This talk is partially intended to be an
advertisement for simplicial techniques, and will be suitable for
graduate students and others unfamiliar with the approach.
In this talk, I will discuss progress in our understanding of Legendrian surfaces. First, I will present a new construction of Legendrian surfaces and a direct description for their moduli space of microlocal sheaves. This Legendrian invariant will connect to classical incidence problems in algebraic geometry and the study of flag varieties, which we will study in detail. There will be several examples during the talk and, in the end, I will indicate the relation of this theory to the study of framed local systems on a surface. This talk is based on my work with E. Zaslow.
We prove that every rational homology cobordism class in the subgroup generated by lens spaces contains a unique connected sum of lens spaces whose first homology embeds in any other element in the same class. As a consequence we show that several natural maps to the rational homology cobordism group have infinite rank cokernels, and obtain a divisibility condition between the determinants of certain 2-bridge knots and other knots in the same concordance class. This is joint work with Daniele Celoria and JungHwan Park.
The KAM (Kolmogorov Arnold and Moser) theory studies
the persistence of quasi-periodic solutions under perturbations.
It started from a basic set of theorems and it has grown
into a systematic theory that settles many questions.
The basic theorem is rather surprising since it involves delicate
regularity properties of the functions considered, rather
subtle number theoretic properties of the frequency as well
as geometric properties of the dynamical systems considered.
In these lectures, we plan to cover a complete proof of
a particularly representative theorem in KAM theory.
In the first lecture we will cover all the prerequisites
(analysis, number theory and geometry). In the second lecture
we will present a complete proof of Moser's twist map theorem
(indeed a generalization to more dimensions).
The proof also lends itself to very efficient numerical algorithms.
If there is interest and energy, we will devote a third lecture
to numerical implementations.
Isospectral reduction is a network/graph reduction that preserves the
eigenvalues and eigenvectors of the adjacency matrix. We analyze the
conditions under which generalized eigenvectors are preserved, and
simplify the proof of the preservation of eigenvectors. Isospectral reductions
are associative and form a dynamical system on the set of all matrices/graphs.
We study the spectral equivalence relation defined by specific characteristics
of nodes under isospectral reductions and show some examples of the attractors.
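A quick numerical sanity check of eigenvalue preservation, using the standard reduction formula R_S(A; l) = A_SS + A_S,Sc (l*I - A_Sc,Sc)^{-1} A_Sc,S on the path graph P3 (an illustration of the definition, not the talk's new results):

```python
# Numerical check of an isospectral reduction: eigenvalues of A survive
# as roots of det(R_S(A; l) - l*I), away from the spectrum of A_Sc,Sc.
import math

# Adjacency matrix of the path graph P3 has eigenvalues 0 and +/- sqrt(2).
# Reduce onto S = {0, 2}; the complement is the single middle vertex,
# with A_Sc,Sc = [0], so R(l) = (1/l) * [[1, 1], [1, 1]].
lam = math.sqrt(2.0)
r = 1.0 / lam
R = [[r, r], [r, r]]

# lam is an eigenvalue of A iff det(R(lam) - lam*I) = 0.
det = (R[0][0] - lam) * (R[1][1] - lam) - R[0][1] * R[1][0]
print(abs(det) < 1e-12)  # → True
```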
Cross-immunoreactivity (CR), a form of cooperation among antigens, has been
observed in various diseases. The resulting complex viral population dynamics
could not be explained by traditional mathematical models. A new mathematical
model was constructed recently, with
promising numerical simulations. In particular, the numerical results recreated
local immunodeficiency (LI), the phenomenon where some viruses sacrifice
themselves while others are not attacked by the immune system. Here we analyze
small CR networks to find the minimal network with a stable LI. We also
demonstrate that one can build larger CR networks with stable LI using this
minimal network as a building block.
The pedestrian-induced lateral oscillation of London's Millennium Bridge on the day it opened in 2000 has become a much-cited paradigm of an instability caused by phase synchronization of coupled oscillators. However, a closer examination of subsequent theoretical studies and experimental observations has brought this interpretation into question.
To elucidate the true cause of instability, we study a model in which each pedestrian is represented by a simplified biomechanically-inspired two-legged inverted pendulum. The key finding is that synchronization between individual pedestrians is not a necessary ingredient of instability onset. Instead, the side-to-side pedestrian motion should on average lag that of the bridge oscillation by a fraction of a cycle. Using a multi-scale asymptotic analysis, we derive a mathematically rigorous general criterion for bridge instability based on the notion of effective negative damping. This criterion suggests that the initiation of wobbling is not accompanied by crowd synchrony and crowd synchrony is a consequence but not the cause of bridge instability.
The first half of this dissertation concerns the following problem: Given a lattice in $\mathbf{R}^d$ which refines the integer lattice $\mathbf{Z}^d$, what can be said about the distribution of the lattice points inside of the half-open unit cube $[0,1)^d$? This question is of interest in discrete geometry, especially integral polytopes and Ehrhart theory. We observe a combinatorial description of the linear span of these points, and give a formula for the dimension of this span. The proofs of these results use methods from classical multiplicative number theory.
In the second half of the dissertation, we investigate oriented matroids from the point of view of tropical geometry. Given an oriented matroid, we describe, in detail, a polyhedral complex which plays the role of the Bergman complex for ordinary matroids. We show how this complex can be used to give a new proof of the celebrated Bohne-Dress theorem on tilings of zonotopes by zonotopes with an approach which relies on a novel interpretation of the chirotope of an oriented matroid.
We say that trees with a common root are (edge-)independent if, for any vertex in their intersection, the paths to the root induced by each tree are internally (edge-)disjoint. The relationship between graph (edge-)connectivity and the existence of (edge-)independent spanning trees is explored. The (Edge-)Independent Spanning Tree Conjecture states that every k-(edge-)connected graph has k (edge-)independent spanning trees with arbitrary root.
We prove the case k=4 of the Edge-Independent Spanning Tree Conjecture using a graph decomposition similar to an ear decomposition, and give polynomial-time algorithms to construct the decomposition and the trees. We provide alternate geometric proofs for the cases k=3 of both the Independent Spanning Tree Conjecture and Edge-Independent Spanning Tree Conjecture by embedding the vertices or edges in a 2-simplex, and conjecture higher-dimension generalizations. We provide a partial result towards a generalization of the Independent Spanning Tree Conjecture, in which local connectivity between the root and a vertex set S implies the existence of trees whose independence properties hold only in S. Finally, we prove and generalize a theorem of Györi and Lovász on partitioning a k-connected graph, and give polynomial-time algorithms for the cases k=2,3,4 using the graph decompositions used to prove the corresponding cases of the Independent Spanning Tree Conjecture.
We investigate aspects of Kauffman bracket skein algebras of surfaces and modules of 3-manifolds using quantum torus methods. These methods come in two flavors: embedding the skein algebra into a quantum torus related to quantum Teichmuller space, or filtering the algebra and obtaining an associated graded algebra that is a monomial subalgebra of a quantum torus. We utilize the former method to generalize the Chebyshev homomorphism of Bonahon and Wong between skein algebras of surfaces to a Chebyshev-Frobenius homomorphism between skein modules of marked 3-manifolds, in the course of which we define a surgery theory, and whose image we show is either transparent or (skew)-transparent. The latter method is used to show that skein algebras of surfaces are maximal orders, which implies a refined unicity theorem, shows that SL_2C-character varieties are normal, and suggests a conjecture on how this result may be utilized for topological quantum compiling.
In this talk we will review compactness results and singularity theorems related to harmonic maps. We first discuss maps from Riemann surfaces with tension fields bounded in a local Hardy space, then stationary harmonic maps from higher-dimensional manifolds, and finally the heat flow of harmonic maps.
The study of LIn, the length of the longest increasing subsequences, and of LCIn, the length of the longest common and increasing subsequences in random words is classical in computer science and bioinformatics, and has been well explored over the last few decades. This dissertation studies a generalization of LCIn for two binary random words, namely, it analyzes the asymptotic behavior of LCbBn, the length of the longest common subsequences containing a fixed number, b, of blocks. We first prove that after proper centerings and scalings, LCbBn, for two sequences of i.i.d. Bernoulli random variables with possibly two different parameters, converges in law towards limits we identify. This dissertation also includes an alternative approach to the one-sequence LbBn problem, and Monte-Carlo simulations on the asymptotics of LCbBn and on the growth order of the limiting functional, as well as several extensions of the LCbBn problem to the Markov context and some connection with percolation theory.
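For reference, the classical quantity LIn can be computed by patience sorting in O(n log n); a short sketch (the dissertation's block-constrained variants are more involved):

```python
# Sketch: length of the longest (strictly) increasing subsequence,
# computed by patience sorting with the bisect module.
import bisect

def lis_length(seq):
    tails = []                      # tails[k] = smallest possible tail of
    for x in seq:                   # an increasing subsequence of length k+1
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # → 4
```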
Please note the different day and room.
In this talk, I will describe joint work with Maximilien Péroux on understanding Koszul duality in ∞-topoi. An ∞-topos is a particularly well behaved higher category that behaves like the category of compactly generated spaces. Particularly interesting examples of ∞-topoi are categories of simplicial sheaves on Grothendieck topologies. The main theorem of this work is that given a group object G of an ∞-topos, there is an equivalence of categories between the category of G-modules in that topos and the category of BG-comodules, where BG is the classifying object for G-torsors. In particular, given any pointed space X, the space of loops on X, denoted ΩX, can be lifted to a group object of any ∞-topos, so if X is in addition a connected space, there is an equivalence between objects of any ∞-topos with an ΩX-action, and objects with an X-coaction (where X is a coalgebra via the usual diagonal map). This is a generalization of the classical equivalence between G-spaces and spaces over BG for G a topological group.
Residential crime is one of the toughest issues in modern society. A quantitative, informative, and applicable model of criminal behavior is needed to assist law enforcement. We have made progress on the pioneering statistical agent-based model of residential burglary (Short et al., Math. Models Methods Appl., 2008) in two ways. (1) In one space dimension, we assume that the movement patterns of the criminals follow truncated Lévy distributions for the jump length, rather than classical random walks (Short et al., Math. Models Methods Appl., 2008) or Lévy flights without truncation (Chaturapruek et al., SIAM J. Appl. Math, 2013). This is the first time that truncated Lévy flights have been applied in crime modeling. (2) In two space dimensions, we use Poisson clocks to govern the time steps of the evolution of the model, rather than the discrete time Markov chain with deterministic time increments used in previous works. Poisson clocks are particularly suitable to model the times at which arrivals enter a system. Introducing the Poisson clock not only produces similar simulation output, but also brings in the mathematical framework of Markov pure jump processes, e.g., a martingale approach. The martingale formula leads to a continuum equation that coincides with a well-known mean-field continuum limit. Moreover, the martingale formulation, together with statistics quantifying the relevant pattern formation, leads to a theoretical explanation of the finite size effects. Our conjecture is supported by numerical simulations.
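As a rough illustration of the truncated jump-length distribution described above, here is one common way to sample a power law truncated at a cutoff by inverse-transform sampling. The exponent, the lower cutoff at 1, and the hard truncation scheme are illustrative assumptions on my part, not necessarily the exact model used in the work.

```python
import random

def truncated_levy_jump(mu, L):
    """Sample a jump length from the density proportional to x^(-1-mu)
    on [1, L], via inverse-transform sampling. Cutoffs and exponent are
    illustrative; the model in the talk may normalize differently."""
    u = random.random()
    return (1 - u * (1 - L ** (-mu))) ** (-1.0 / mu)

# A short 1D walk with truncated Levy jump lengths and random signs.
pos = 0.0
for _ in range(10):
    pos += random.choice((-1, 1)) * truncated_levy_jump(1.5, 100.0)
```

By construction every sampled length lies in [1, L]; without the truncation the raw power law would occasionally produce arbitrarily long jumps.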
Great news, everyone! The cartoon series Futurama is packed with science jokes. Adopting my Professor Farnsworth alter ego, I will explain some of these mathematical jokes with stills and clips from the series.
A brief meeting to discuss the plan for the semester, followed by an informal discussion over lunch (most likely at Ferst Place).
A random array indexed by the paths of an infinitely-branching rooted tree of finite depth is hierarchically exchangeable if its joint distribution is invariant under rearrangements that preserve the tree structure underlying the index set. Austin and Panchenko (2014) prove that such arrays have de Finetti-type representations, and moreover, that an array indexed by a finite collection of such trees has an Aldous-Hoover-type representation.
Motivated by problems in Bayesian nonparametrics and probabilistic programming discussed in Staton et al. (2018), we generalize hierarchical exchangeability to a new kind of partial exchangeability for random arrays which we call DAG-exchangeability. In our setting a random array is indexed by N^{|V|} for some DAG G=(V,E), and its exchangeability structure is governed by the edge set E. We prove a representation theorem for such arrays which generalizes the Aldous-Hoover representation theorem, and for which the Austin-Panchenko representation is a special case.
In this talk, we consider a quasi-periodically forced system arising from a problem in chemical reactions. We demonstrate efficient algorithms to calculate the normally hyperbolic invariant manifolds and their stable/unstable manifolds using the parameterization method. When random noise is added, we use similar ideas to give a streamlined proof of the existence of stochastic invariant manifolds.
The field of complex dynamics melds a number of disciplines, including complex analysis, geometry and topology. I will focus on the influences from the latter, introducing some important concepts and questions in complex dynamics, with an emphasis on how the concepts tie into and can be enhanced by a topological viewpoint.
In complex dynamics, the main objects of study are rational maps on the Riemann sphere. For some large subset of such maps, there is a way to associate to each map a marked torus. Moving around in the space of these maps, we can then track the associated tori and get induced mapping classes. In this talk, we will explore what sorts of mapping classes arise in this way and use this to say something about the topology of the original space of maps.
Soot particles are major pollutants emitted from propulsion and power generation systems. In turbulent combustion, soot evolution is heavily influenced by soot-turbulence-chemistry interaction. Specifically, soot is formed during combustion of fuel-rich mixtures and is rapidly oxidized before being transported by turbulence into fuel-lean mixtures. Furthermore, different soot evolution mechanisms are dominant over distinct regions of mixture fraction. For these reasons, a new subfilter Probability Density Function (PDF) model is proposed to account for this distribution of soot in mixture fraction space. At the same time, Direct Numerical Simulation (DNS) studies of turbulent nonpremixed jet flames have revealed that Polycyclic Aromatic Hydrocarbons (PAH), the gas-phase soot precursors, are confined to spatially intermittent regions of low scalar dissipation rates due to their slow formation chemistry. The length scales of these regions are on the order of the Kolmogorov scale (i.e., the smallest turbulence scale) or smaller, where molecular diffusion dominates over turbulent mixing irrespective of the large-scale turbulent Reynolds number. A strain-sensitivity parameter is developed to identify such species. A Strain-Sensitive Transport Approach (SSTA) is then developed to model the differential molecular transport in the nonpremixed “flamelet” equations. These two models are first validated a priori against a DNS database, and then implemented within a Large Eddy Simulation (LES) framework, applied to a series of turbulent nonpremixed sooting jet flames, and validated via comparisons with experimental measurements of soot volume fraction.
We will discuss how to solve algebraic equations using symbolic, numerical, and combinatorial methods.
Prym varieties are a class of abelian varieties that arise from double covers of tropical or algebraic curves. The talk will revolve around the Prym--Brill--Noether locus, a subvariety determined by divisors of a given rank. Using a connection to Young tableaux, we determine the dimensions of these loci for certain tropical curves, with applications to algebraic geometry. Furthermore, these loci are always pure dimensional and path connected. Finally, we compute the first homologies of the Prym--Brill--Noether loci under certain conditions.
special time
In asymptotic analysis and numerical approximation of highly-oscillatory evolution problems, it is commonly supposed that the oscillation frequency is either constant or, at least, bounded from below by a strictly positive constant uniformly in time. Allowing for the possibility that the frequency actually depends on time and vanishes at some instants introduces additional difficulties from both the asymptotic analysis and numerical simulation points of view. I will present a first step towards the resolution of these difficulties. In particular, we show that it is still possible in this situation to infer the asymptotic behavior of the solution at the price of more intricate computations and we derive a second order uniformly accurate numerical method.
Fine properties of spherical averages in the continuous setting include $L^p$-improving estimates and sparse bounds, which are interesting in the settings of a fixed radius, lacunary sets of radii, and the full set of radii. There is a parallel theory in the setting of discrete spherical averages, as studied by Elias Stein, Akos Magyar, and Stephen Wainger. We recall the continuous case, outline the discrete case, and illustrate a unifying proof technique. Joint work with Robert Kesler and Dario Mena Arias.
We will consider the problem of estimating the singularity probability of sparse Bernoulli matrices, and a related question of anti-concentration of weighted sums of dependent Bernoulli(p) variables.
Based on joint work with Alexander Litvak.
special time
Our ambition is to derive asymptotic equations for the Vlasov-Poisson system in the strong magnetic field regime. This work is thus an attempt to (re-)derive rigorously gyrokinetic equations and to design uniformly accurate methods for solving fast-oscillating kinetic equations, i.e. methods whose cost and accuracy do not depend on the stiffness parameter. The main tools used to reach this objective are averaging and PDE techniques. In this talk, I will focus primarily on the former.
In first-passage percolation, we place i.i.d. nonnegative weights (t_e) on the edges of a graph and consider the induced weighted graph metric T(x,y). When the underlying graph is the two-dimensional square lattice, there is a phase transition in the model depending on the probability p that an edge weight equals zero: for p<1/2, the metric T(0,x) grows linearly in x, whereas for p>1/2, it remains stochastically bounded. The critical case occurs for p=1/2, where there are large but finite clusters of zero-weight edges. In this talk, I will review work with Wai-Kit Lam and Xuan Wang in which we determine the rate of growth for T(0,x) up to a constant factor for all critical distributions. Then I will explain recent work with Jack Hanson and Wai-Kit Lam in which we determine the "time constant" (leading order constant in the rate of growth) in the special case where the graph is the triangular lattice, and the weights are placed on the vertices. This is the only class of distributions known where this time constant is computable: we find that it is an explicit function of the infimum of the support of t_e intersected with (0,\infty).
Special time
I will start by giving a short overview of the history of stability and instability issues in gravitational systems driven by kinetic equations. Conservation properties and families of non-homogeneous steady states will first be presented. A well-known conjecture in both the astrophysics and mathematics communities was that "all steady states of the gravitational Vlasov-Poisson system which are decreasing functions of the energy are nonlinearly stable up to space translations". We explain why traditional variational approaches are not sufficient to answer this conjecture. An alternative approach, inspired by the astrophysics literature, will then be presented, and quantitative stability inequalities will be shown, thereby solving the above conjecture for Vlasov-Poisson systems. This has been achieved by using a refined notion of the rearrangement of functions and Poincaré-like functional inequalities. For other systems, like the so-called Hamiltonian Mean Field (HMF) model, the decreasing property of the steady states is no longer sufficient to guarantee their stability. An additional explicit criterion is needed, under which their nonlinear stability is proved. This criterion is sharp, as nonlinear instabilities can be constructed when it is not satisfied.
In this talk we introduce some machine learning problems in the setting of undirected graphical models, also known as spin systems. We take proper colorings as a representative example of a hard-constraint graphical model. The classic problem of sampling a proper coloring uniformly at random of a given graph has been well-studied. Here we consider two inverse problems: Given random colorings of an unknown graph G, can we recover the underlying graph G exactly? If we are also given a candidate graph H, can we tell if G=H? The former problem is known as structure learning in the machine learning field and the latter is called identity testing. We show the complexity of these problems in different range of parameters and compare these results with the corresponding decision and sampling problems. Finally, we give some results of the analogous problems for the Ising model, a typical soft-constraint model. Based on joint work with Ivona Bezakova, Antonio Blanca, Daniel Stefankovic and Eric Vigoda.
A multivariate complex polynomial is called stable if any line in any positive direction meets its hypersurface only at real points. Stable polynomials have close relations to matroids and hyperbolic programming. We will discuss a generalization of stability to algebraic varieties of codimension larger than one. They are varieties which are hyperbolic with respect to the nonnegative Grassmannian, following the notion of hyperbolicity studied by Shamovich, Vinnikov, Kummer, and Vinzant. We show that their tropicalization and Chow polytopes have nice combinatorial structures related to braid arrangements and positroids, generalizing some results of Choe, Oxley, Sokal, Wagner, and Brändén on Newton polytopes and tropicalizations of stable polynomials. This is based on joint work with Felipe Rincón and Cynthia Vinzant.
I will provide an introduction to Lorentzian polynomials in the sense of https://arxiv.org/abs/1902.03719.
Understanding the folding of RNA sequences into three-dimensional structures is one of the fundamental challenges in molecular biology. For example, the branching of an RNA secondary structure is an important molecular characteristic yet difficult to predict correctly. However, recent results in geometric combinatorics (both theoretical and computational) yield new insights into the distribution of optimal branching configurations, and suggest new directions for improving prediction accuracy.
The systems of coupled NLS equations occur in some physical problems, in particular in nonlinear optics (coupling between two optical waveguides, pulses or polarized components...).
From the mathematical point of view, the coupling effects can lead to truly nonlinear behaviors, such as the beating effect (solutions with Fourier modes exchanging energy) of Grébert, Paturel and Thomann (2013). In this talk, I will use the coupling between two NLS equations on the 1D torus to construct a family of linearly unstable tori, and therefore unstable quasi-periodic solutions.
The idea is to take advantage of the Hamiltonian structure of the system via the construction of a Birkhoff normal form and the application of a KAM theorem. In particular, we will see that this surprising behavior (this is the first example of unstable tori for a 1D PDE) is strongly related to the existence of beating solutions.
This is a work in collaboration with Benoît Grébert (Université de Nantes).
We will explore some of the basic notions in quantum topology. Our focus will be on introducing some of the foundations of diagrammatic algebra through the lens of the Temperley-Lieb algebra. We will attempt to show how these diagrammatic techniques can be applied to low dimensional topology. Every effort will be made to make this as self-contained as possible. If time permits we will also discuss some applications to topological quantum computing.
Mathematicians have long been trying to understand which domains admit an orthogonal (or, sometimes, non-orthogonal) basis of exponentials of the form $e^{2\pi i \lambda \cdot x}$, $\lambda \in \Lambda$, for some set of frequencies $\Lambda$ (this is the spectrum of the domain). It is well known that we can do so for the cube, for instance (just take $\Lambda = \mathbb{Z}^d$), but can we find such a basis for the ball? The answer is no, if we demand orthogonality, but this problem is still open when, instead of orthogonality, we demand just a Riesz basis of exponentials.
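For the cube case, a minimal numerical sanity check that integer-frequency exponentials $e^{2\pi i n x}$ are orthonormal on $[0,1]$ (the helper `inner` is hypothetical, and the Riemann sum happens to be exact here for integer frequencies):

```python
import cmath

def inner(m, n, N=1000):
    """Riemann-sum approximation of the L^2 inner product of
    e^{2 pi i m x} and e^{2 pi i n x} on [0, 1]; for integer m, n with
    |m - n| < N this sum of roots of unity is exact."""
    return sum(cmath.exp(2j * cmath.pi * (m - n) * k / N) for k in range(N)) / N

print(abs(inner(3, 3)))  # 1.0 (same frequency: unit norm)
print(abs(inner(3, 5)))  # ~0  (distinct frequencies: orthogonal)
```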
A 2-knot is a smooth embedding of S^2 in S^4, and a 0-concordance of 2-knots is a concordance with the property that every regular level set of the concordance is just a collection of S^2's. In his thesis, Paul Melvin proved that if two 2-knots are 0-concordant, then a Gluck twist along one will result in the same smooth 4-manifold as a Gluck twist on the other. He asked the following question: Are all 2-knots 0-slice (i.e. 0-concordant to the unknot)? I will explain all relevant definitions, and mostly follow the paper by Nathan Sunukjian on this topic.
In this talk I will briefly talk about coding theory and introduce a specific family of codes called Quasi-quadratic residue (QQR) codes. These codes have large minimum distances, which means they have good error-correcting capabilities. The weights of their codewords are directly related to the number of points on corresponding hyperelliptic curves. I will show a heuristic model to count the number of points on hyperelliptic curves using a coin-toss model, which in turn casts light on the relation between efficiency and the error-correcting capabilities of QQR codes. I will also show an interesting phenomenon we found about the weight enumerator of QQR codes. Lastly, using the bridge between QQR codes and hyperelliptic curves again, we derive the asymptotic behavior of point distribution of a family of hyperelliptic curves using results from coding theory.
We study the effect of sparsity on the appearance of outliers in the semi-circular law. As a corollary of our main results, we show that, for the Erdos-Renyi random graph with parameter p, the second largest eigenvalue is (asymptotically almost surely) detached from the bulk of the spectrum if and only if pn
In 1999, Alon proved the “Combinatorial Nullstellensatz” which resembles Hilbert’s Nullstellensatz and gives combinatorial structure on the roots of a multivariate polynomial. This method has numerous applications, most notably in additive number theory, but also in many other areas of combinatorics. We will prove the Combinatorial Nullstellensatz and give some of its applications in graph theory.
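For reference, the statement in question, as it is commonly stated (check Alon's paper for the precise hypotheses):

```latex
% Combinatorial Nullstellensatz (Alon, 1999), common formulation:
\textbf{Theorem.} Let $F$ be a field and $f \in F[x_1,\dots,x_n]$ with
$\deg f = \sum_{i=1}^n t_i$. If the coefficient of $\prod_{i=1}^n x_i^{t_i}$
in $f$ is nonzero, then for any sets $S_1,\dots,S_n \subseteq F$ with
$|S_i| > t_i$ there exists $(s_1,\dots,s_n) \in S_1 \times \cdots \times S_n$
such that $f(s_1,\dots,s_n) \neq 0$.
```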
For the first time in 2020, the US Census Bureau will apply a differentially private algorithm before publicly releasing decennial census data. Recently, the Bureau publicly released their code and end-to-end tests on the 1940 census data at various privacy levels. We will outline the DP algorithm (which is still being developed) and discuss the accuracy of these end-to-end tests. In particular, we focus on the bias and variance of the reported population counts. Finally, we discuss the choices the Bureau has yet to make that will affect the balance between privacy and accuracy. This talk is based on joint work with Abraham Flaxman.
We will define Newton polygons for polynomials over a valued field and prove a couple theorems using them. For example, relating the valuations of the roots of the polynomial to the slopes of the Newton polygon and proving the algebraic closure of the Puiseux series in characteristic 0.
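A small illustration of the slope statement, under the usual convention that the Newton polygon of $f=\sum a_i x^i$ is the lower convex hull of the points $(i, v(a_i))$ and the valuations of the roots are the negatives of the slopes. This is a sketch over the integers with the $p$-adic valuation; the helper names are mine.

```python
from fractions import Fraction

def p_val(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def newton_polygon(coeffs, p):
    """Lower convex hull of {(i, v_p(a_i)) : a_i != 0} for f = sum a_i x^i.
    Returns (slope, horizontal length) pairs; each segment of slope -m and
    length l corresponds to l roots of valuation m."""
    pts = [(i, p_val(a, p)) for i, a in enumerate(coeffs) if a != 0]
    hull = [pts[0]]
    for x3, y3 in pts[1:]:
        # pop hull points lying on or above the chord to the new point
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (y3 - y1) <= (x3 - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append((x3, y3))
    return [(Fraction(y2 - y1, x2 - x1), x2 - x1)
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

# f(x) = x^2 - 5 over Q_5: one segment of slope -1/2 and length 2,
# so both roots have 5-adic valuation 1/2.
print(newton_polygon([-5, 0, 1], 5))  # [(Fraction(-1, 2), 2)]
```

The Eisenstein polynomial $x^2+5x+5$ gives the same polygon, consistent with both of its roots having valuation 1/2.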
This is a general audience Geometry-Topology talk where I will give a broad overview of my research interests and techniques I use in my work. My research concerns the study of link concordance using groups, both extracting concordance data from group theoretic invariants and determining the properties of group structures on links modulo concordance. Milnor's invariants are one of the more fundamental link concordance invariants; they are thought of as higher order linking numbers and can be computed using both Massey products (due to Turaev and Porter) and higher order intersections (due to Cochran). In my work, I have generalized Milnor's invariants to knots inside a closed, oriented 3-manifold M. I call this the Dwyer number of a knot and show methods to compute it for null-homologous knots inside a family of 3-manifolds with free fundamental group. I further show Dwyer number provides the weight of the first non-vanishing Massey product in the knot complement in the ambient manifold. Additionally, I proved the Dwyer number detects knots K in M bounding smoothly embedded disks in specific 4-manifolds with boundary M which are not concordant to the unknot in M x I. This result further motivates my definition of a new link concordance group in joint work with Matthew Hedden using the knotification construction of Ozsváth and Szabó. Finally, I will briefly discuss my recent result that the string link concordance group modulo its pure braid subgroup is non-abelian.
Starting from Total Variation, this talk will give an overview of some mathematical approaches to image processing, such as noise removal. We will also consider numerical applications to data understanding. A few more applications may be presented.
I will show some operations that preserve the Lorentzian property, following Section 6 of https://arxiv.org/pdf/1902.03719.pdf.
Phylogenetic trees are the fundamental mathematical representation of evolutionary processes in biology. As data objects, they are characterized by the challenges associated with "big data," as well as the complication that their discrete geometric structure results in a non-Euclidean phylogenetic tree space, which poses computational and statistical limitations.
In this talk, I will compare the geometric and statistical properties between a well-studied framework - the BHV space, and an alternative framework that we propose, which is based on tropical geometry. Our framework exhibits analytic, geometric, and topological properties that are desirable for theoretical studies in probability and statistics, as well as increased computational efficiency. I also demonstrate our approach on an example of seasonal influenza data.
Let $f$ be defined on $\mathbb{Z}$. Let $A_N f$ be the average of $f$ along the square integers.
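Concretely, with one standard normalization (the talk's may differ), $A_N f(x) = \frac{1}{N}\sum_{n=1}^{N} f(x+n^2)$. A direct implementation, with illustrative names:

```python
def square_average(f, x, N):
    """A_N f(x): average of f over shifts of x by the first N squares
    (one standard normalization; other conventions rescale differently)."""
    return sum(f(x + n * n) for n in range(1, N + 1)) / N

# For f = indicator of 0, A_N f(-k^2) picks up the single square n = k.
f = lambda m: 1 if m == 0 else 0
print(square_average(f, -9, 5))  # 0.2 (only n = 3 contributes)
```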
Cutting a polyhedron along some spanning tree of its edges will yield an isometric immersion of the polyhedron into the plane. If this immersion is also injective, we call it an unfolding. In this talk I will give some general results about unfoldings of polyhedra. There is also a notion of pseudo-edge unfolding, which involves cutting on a pseudo edge graph, as opposed to an edge graph. A pseudo edge graph is a 3-connected graph on the surface of the polyhedron, whose vertices coincide with the vertices of the polyhedron, and whose edges are geodesics. I will explain part of the paper "Pseudo-Edge Unfoldings of Convex Polyhedra," a joint work of mine with Professor Ghomi, which proves the existence of a convex polyhedron with a pseudo edge graph along which it is not unfoldable. Finally, I will discuss some connections between pseudo edge graphs and edge graphs.
Using ideas from information theory, we establish lower bounds on the volume and the surface area of a geometric body using the size of its slices along different directions. In the first part of the talk, we derive volume bounds for convex bodies using generalized subadditivity properties of entropy combined with entropy bounds for log-concave random variables. In the second part, we investigate a new notion of Fisher information, which we call the L1-Fisher information, and show that certain superadditivity properties of the L1-Fisher information lead to lower bounds for the surface areas of polyconvex sets in terms of their slices.
A combinatorial structure is said to be quasirandom if it resembles a random structure in a certain robust sense. For example, it is well-known that a graph G with edge-density p is quasirandom if and only if the density of C_4 in G is p^4+o(p^4); this property is known to be equivalent to several other properties that hold for truly random graphs. A similar phenomenon was established for permutations: a permutation is quasirandom if and only if the density of every 4-point pattern (subpermutation) is 1/4!+o(1). We strengthen this result by showing that a permutation is quasirandom if and only if the sum of the densities of eight specific 4-point patterns is 1/3+o(1). More generally, we classify all sets of 4-point patterns having such property.
The talk is based on joint work with Timothy F. N. Chan, Jonathan A. Noel, Yanitsa Pehova, Maryam Sharifzadeh and Jan Volec.
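To make "pattern density" concrete, here is a brute-force density computation for small permutations (helper names are mine; the density is over all position subsets of the pattern's size, so for 4-point patterns it is over all $\binom{n}{4}$ subsets):

```python
from itertools import combinations

def pattern(vals):
    """Relative order (0-indexed ranks) of a sequence of distinct values."""
    order = sorted(vals)
    return tuple(order.index(v) for v in vals)

def density(perm, pat):
    """Fraction of len(pat)-element subsequences of perm inducing pat."""
    subs = list(combinations(perm, len(pat)))
    return sum(pattern(s) == pat for s in subs) / len(subs)

# In the identity permutation every pair is increasing:
print(density((0, 1, 2, 3), (0, 1)))  # 1.0
```

For a truly random permutation of length n, each fixed 4-point pattern has density close to 1/4! = 1/24, which is exactly the benchmark in the quasirandomness characterization above.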
Graphs, which in their simplest form are vertices connected by edges, are widely used in high performance computing, machine learning, and network science. This talk will use recent progress on two well-studied algorithmic problems in static and dynamic graphs, max-flows and dynamic matchings, to discuss a methodology for designing faster algorithms for large graphs. This approach is motivated by a fundamental phenomenon in data structures: the advantages of offline data structures over online ones.
I will start by describing how work on max-flows led to a focus on finding short paths in residual graphs, and how investigating more global notions of progress in residual graphs led to a more sophisticated and general understanding of iterative methods and preconditioning. I will then discuss a similar phenomenon in dynamic graphs, where maintaining a large matching seems to require the online detection of short augmenting paths, but can once again be circumvented through the offline construction of smaller equivalent graphs.
The structure of sums-of-squares representations of (nonnegative homogeneous) polynomials is an interesting subject in real algebraic geometry. The sums-of-squares representations of a given polynomial are parametrized by a convex body of positive semidefinite Gram matrices, called the Gram spectrahedron. In this talk, I will introduce the Gram spectrahedron, its connection to toric varieties, and a new result: if a variety $X$ is arithmetically Cohen-Macaulay and a linearly normal variety of almost minimal degree (i.e. $\deg(X)=\text{codim}(X)+2$), then every sum of squares on $X$ is a sum of $\dim(X)+2$ squares.
As countless examples show, sequences of complicated objects should be studied all at once via the formalism of generating functions. We apply this point of view to the homology and combinatorics of (orbit-)configuration spaces, using the notion of twisted commutative algebras, which categorify exponential generating functions. With this idea the configuration space "generating function" factors into an infinite product, whose terms are surprisingly easy to understand. Beyond the intrinsic aesthetic of this decomposition and its quantitative consequences, it also gives rise to representation stability - a notion of homological stability for sequences of representations of differing groups.
Continued fractions play a key role in number theory, especially in understanding how well we can approximate irrational numbers by rational numbers. They also play an important role in function theory, in understanding how well we can approximate analytic functions by rational functions. We discuss a few of the main achievements of the theory.
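The rational approximations in question are the continued-fraction convergents; a minimal floating-point computation (adequate for a few terms, with illustrative function names):

```python
from math import pi, floor

def convergents(x, n):
    """First n continued-fraction convergents (p, q) of a real x > 0,
    built from the standard recurrences p_k = a_k p_{k-1} + p_{k-2},
    q_k = a_k q_{k-1} + q_{k-2}."""
    h0, h1 = 1, floor(x)   # numerators p_{-1}, p_0
    k0, k1 = 0, 1          # denominators q_{-1}, q_0
    out = [(h1, k1)]
    for _ in range(n - 1):
        frac = x - floor(x)
        if frac == 0:
            break
        x = 1 / frac
        a = floor(x)
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append((h1, k1))
    return out

print(convergents(pi, 4))  # [(3, 1), (22, 7), (333, 106), (355, 113)]
```

The familiar approximations 22/7 and 355/113 of pi appear immediately; each convergent p/q satisfies |x - p/q| < 1/q^2, the quality of approximation the theory quantifies.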
I will consider the isotropic XY chain with a transverse magnetic field acting on a single site, and analyze the long time behaviour of the time-dependent state of the system when a periodic perturbation drives the impurity. I will show that, under some conditions, the state approaches a periodic orbit synchronized with the forcing. Moreover I will provide the explicit rate of convergence to the asymptotics. This is a joint work with G. Genovese.
I will discuss a proof of the statement that the support of a Lorentzian polynomial is M-convex, based on Sections 3-5 of the Brändén–Huh paper.
In this talk, I will discuss from a mathematical viewpoint some sufficient conditions that guarantee the energy equality for weak solutions. I will mainly focus on a fluid equation example, namely the inhomogeneous Euler equations. The main tools are commutator lemmas. This is a joint work with Ming Chen.
When hybridization plays a role in evolution, networks are necessary to describe species-level relationships. In this talk, we show that most topological features of a level-1 species network (networks with no interlocking cycles) are identifiable from gene tree topologies under the network multispecies coalescent model (NMSC). We also present the theory behind NANUQ, a new practical method for the inference of level-1 networks under the NMSC.
This talk is about an application of complex function theory to inverse spectral problems for differential operators. We consider the Schroedinger operator on a finite interval with an L^1-potential. Borg's two spectra theorem says that the potential can be uniquely recovered from two spectra. By another classical result of Marchenko, the potential can be uniquely recovered from the spectral measure or Weyl m-function. After a brief review of inverse spectral theory of one dimensional regular Schroedinger operators, we will discuss complex analytic methods for the following problem: Can one spectrum together with subsets of another spectrum and norming constants recover the potential?
I will give an introduction to surface bundles and will discuss several places where they arise naturally. A surface bundle is a fiber bundle where the fiber is a surface. A first example is the mapping torus construction for 3-manifolds, which is a surface bundle over the circle. Topics will include a construction of 4-manifolds as well as section problems related to surface bundles. The talk will be based on a forthcoming Notices survey article by Salter and Tshishiku.
We study the subject of approximation of convex bodies by polytopes in high dimension.
For a convex set K in R^n, we say that K can be approximated by a polytope with m facets to within a distance R>1 if there exists a polytope P with m facets such that K contains P and RP contains K.
When K is symmetric, the maximal volume ellipsoid of K is used heavily in constructing such a polytope with poly(n) facets approximating K. In this talk, we will discuss why the situation is entirely different for non-symmetric convex bodies.
In deep generative models, the latent variable is generated by a time-inhomogeneous Markov chain, where at each time step we pass the current state through a parametric nonlinear map, such as a feedforward neural net, and add a small independent Gaussian perturbation. In this talk, based on joint work with Belinda Tzen, I will discuss the diffusion limit of such models, where we increase the number of layers while sending the step size and the noise variance to zero. The resulting object is described by a stochastic differential equation in the sense of Ito. I will first show that sampling in such generative models can be phrased as a stochastic control problem (revisiting the classic results of Föllmer and Dai Pra) and then build on this formulation to quantify the expressive power of these models. Specifically, I will prove that one can efficiently sample from a wide class of terminal target distributions by choosing the drift of the latent diffusion from the class of multilayer feedforward neural nets, with the accuracy of sampling measured by the Kullback-Leibler divergence to the target distribution.
In this talk we introduce a refined alteration approach for constructing $H$-free graphs: we show that removing all edges in $H$-copies of the binomial random graph does not significantly change the independence number (for suitable edge-probabilities); previous alteration approaches of Erdős and Krivelevich remove only a subset of these edges. We present two applications to online graph Ramsey games of recent interest, deriving new bounds for Ramsey, Paper, Scissors games and online Ramsey numbers (each time extending recent results of Fox–He–Wigderson and Conlon–Fox–Grinshpun–He).
Based on joint work with Lutz Warnke.
In this informal chat, I will introduce the braid group and several equivalent topological perspectives from which to view it. In particular, we will discuss the role that complex polynomials play in this setting, along with a few classical results.
The Jacobian Conjecture is a famous open problem in commutative algebra and algebraic geometry. Suppose you have a polynomial function $f:\mathbb{C}^n\to\mathbb{C}^n$. The Jacobian Conjecture asserts that if the Jacobian of $f$ is a non-zero constant, then $f$ has a polynomial inverse. Because the conjecture is so easy to state, there have been many claimed proofs that turned out to be false. We will discuss some of these incorrect proofs, as well as several correct theorems relating to the Jacobian Conjecture.
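A worked instance of the (known) triangular case, just to fix ideas: $f(x,y)=(x,\,y+x^2)$ has Jacobian matrix $\begin{pmatrix}1 & 0\\ 2x & 1\end{pmatrix}$, hence constant Jacobian determinant 1, and the polynomial inverse $g(x,y)=(x,\,y-x^2)$. A quick numerical confirmation:

```python
# f(x, y) = (x, y + x^2) has Jacobian determinant 1 everywhere;
# its polynomial inverse is g(x, y) = (x, y - x^2).
f = lambda x, y: (x, y + x ** 2)
g = lambda x, y: (x, y - x ** 2)

for x, y in [(0, 0), (2, -3), (5, 7)]:
    assert g(*f(x, y)) == (x, y)  # g after f is the identity
    assert f(*g(x, y)) == (x, y)  # f after g is the identity
print("g is a two-sided polynomial inverse of f")
```

The conjecture asserts that the constant-Jacobian hypothesis forces such a polynomial inverse to exist for every polynomial map, not just triangular ones like this example.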
Deep neural networks (DNNs) have revolutionized machine learning by gradually replacing the traditional model-based algorithms with data-driven methods. While DNNs have proved very successful when large training sets are available, they typically have two shortcomings: First, when the training data are scarce, DNNs tend to suffer from overfitting. Second, the generalization ability of overparameterized DNNs still remains a mystery. In this talk, I will discuss two recent works to “inject” the “modeling” flavor back into deep learning to improve the generalization performance and interpretability of the DNN model. This is accomplished by DNN regularization through applied differential geometry and harmonic analysis. In the first part of the talk, I will explain how to improve the regularity of the DNN representation by enforcing a low-dimensionality constraint on the data-feature concatenation manifold. In the second part, I will discuss how to impose scale-equivariance in network representation by conducting joint convolutions across the space and the scaling group. The stability of the equivariant representation to nuisance input deformation is also proved under mild assumptions on the Fourier-Bessel norm of filter expansion coefficients.
The space of degree-n complex polynomials with distinct roots appears frequently and naturally throughout mathematics, but its shape and structure could be better understood. In recent and ongoing joint work with Jon McCammond, we present a deformation retraction of this space onto a simplicial complex with rich structure given by the combinatorics of noncrossing partitions. In this talk, I will describe the deformation retraction and the resulting combinatorial data associated to each generic complex polynomial. We will also discuss a helpful comment from Daan Krammer which connects our work with the ideas of Bill Thurston and his collaborators.
A knot is a smooth embedding of a circle into R^3. Closely related are tangles, which are properly embedded arcs in a 3-ball. We will model certain tangles using jump ropes and relate this to Conway's classification of rational tangles.
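Conway's classification assigns each rational tangle a fraction via continued fractions, and two rational tangles are equivalent exactly when their fractions agree. A minimal sketch of this correspondence (the twist-sequence convention below is one standard choice, not taken from the talk):

```python
from fractions import Fraction

def tangle_fraction(twists):
    """Continued-fraction value a_n + 1/(a_{n-1} + ... + 1/a_1) of a twist
    sequence [a_1, ..., a_n].  Zero intermediate values would require the
    infinity convention and are not handled in this sketch."""
    val = Fraction(twists[0])
    for a in twists[1:]:
        val = a + 1 / val
    return val

# The tangle built from twists [2, 3, 2] has fraction 2 + 1/(3 + 1/2) = 16/7.
assert tangle_fraction([2, 3, 2]) == Fraction(16, 7)
```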
For online optimization, the input instance is revealed in a sequence of steps and, after each step, the algorithm has to take an immediate and irrevocable decision based on the previous inputs. Online algorithms produce a sequence of decisions for such problems without complete information about the future. In the worst-case analysis of online optimization problems it is sometimes impossible to achieve any bounded competitive ratio. An interesting way to bypass these impossibility results is to incorporate a stochastic component into the input model. In the random-order arrival model, the adversary specifies an input instance in advance, but the input appears according to a random permutation. The knapsack problem is one of the classical problems in combinatorial optimization: given a set of items, each specified by its size and profit, the goal is to find a maximum-profit packing into a knapsack of bounded capacity. The generalized assignment problem (GAP) includes, besides the knapsack problem, several important problems related to scheduling and matching. In this talk, we will present improved competitive algorithms under random-order arrival for these two problems. This is joint work with Susanne Albers and Leon Ladewig.
A double occurrence word (DOW) is a word in which every symbol appears exactly twice; two DOWs are equivalent if one is a symbol-to-symbol image of the other. In the context of genomics, DOWs and operations on DOWs have been used in studies of DNA rearrangement. By modeling the DNA rearrangement process using DOWs, it was observed that over 95% of the scrambled genome of the ciliate Oxytricha trifallax could be described by iterative insertions of the ``repeat pattern'' and the ``return pattern''. These patterns generalize square and palindromic factors of DOWs, respectively. We introduce a notion of inserting repeat/return words into DOWs and study how two distinct insertions into the same word can produce equivalent DOWs. Given a DOW w, we characterize the structure of w which allows two distinct insertions to yield equivalent DOWs. This characterization depends on the locations of the insertions and on the length of the inserted repeat/return words and implies that when one inserted word is a repeat word and the other is a return word, then both words must be trivial (i.e., have only one symbol). The characterization also introduces a method to generate families of words recursively.
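The two basic notions above — being a DOW and equivalence under symbol-to-symbol renaming — are easy to check by canonical relabeling; a sketch with hypothetical helper names:

```python
from collections import Counter

def is_dow(word):
    """A double occurrence word: every symbol appears exactly twice."""
    return all(c == 2 for c in Counter(word).values())

def canonical(word):
    """Relabel symbols by order of first appearance; two DOWs are
    equivalent (symbol-to-symbol images of each other) iff their
    canonical forms agree."""
    order = {}
    for s in word:
        order.setdefault(s, len(order))
    return tuple(order[s] for s in word)

# 'abab' is a repeat (square) pattern, 'abba' a return (palindromic) pattern.
assert is_dow("abab") and is_dow("abba") and not is_dow("aba")
assert canonical("abab") == canonical("xyxy")     # equivalent DOWs
assert canonical("abab") != canonical("abba")     # inequivalent
```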
Back in the year 2000, Christ and Kiselev introduced a useful "maximal trick" in their study of spectral properties of Schrödinger operators.
The trick was completely abstract and only at the level of basic functional analysis and measure theory. Over the years it was reproven,
generalized, and reused by many authors. We will present its recent application in the theory of restriction of the Fourier transform to
surfaces in the Euclidean space.
I will talk about a connection between graph theory and sutured Floer homology. In fact, there is a correspondence between hypergraphs of a planar bipartite graph and the dimension of the sutured Floer homology of the complement of a neighborhood of a special alternating link in the three-sphere. This is based on work of Juhász, Kálmán, and Rasmussen.
In the realm of Laplacians of Riemannian manifolds, nodal domains have been the subject of intensive research for well over a hundred years.
Given a Riemannian manifold M, let f be an eigenfunction of the Laplacian with respect to some boundary conditions. A nodal domain associated with f is a maximal connected subset of the domain M on which f does not change sign.
Here we examine the discrete case, namely nodal domains for graphs. Dekel, Lee, and Linial showed that for an Erdős–Rényi graph G(n, p), with high probability there are exactly two nodal domains for each eigenvector corresponding to a non-leading eigenvalue. We prove that, with high probability, the sizes of these nodal domains are approximately equal to each other.
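The Dekel–Lee–Linial picture is easy to observe numerically. The sketch below (assumptions: adjacency-matrix eigenvectors and strong nodal domains, i.e. maximal connected same-sign vertex sets) counts the nodal domains of a non-leading eigenvector of a dense G(n, p):

```python
import numpy as np

def nodal_domains(A, v):
    """Count strong nodal domains: maximal connected subsets on which
    the eigenvector v has constant sign."""
    n = len(v)
    seen = [False] * n
    count = 0
    for s in range(n):
        if seen[s]:
            continue
        count += 1
        seen[s] = True
        stack = [s]
        while stack:
            u = stack.pop()
            for w in range(n):
                if A[u, w] and not seen[w] and np.sign(v[w]) == np.sign(v[u]):
                    seen[w] = True
                    stack.append(w)
    return count

rng = np.random.default_rng(0)
n, p = 200, 0.5
U = np.triu(rng.random((n, n)) < p, 1).astype(int)
A = U + U.T                              # adjacency matrix of G(n, p)
vals, vecs = np.linalg.eigh(A)
v = vecs[:, -2]                          # eigenvector of a non-leading eigenvalue
domains = nodal_domains(A, v)            # exactly 2, with high probability
balance = np.mean(v > 0)                 # the two domains have comparable size
```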
As a direct corollary of this purely combinatorial result, the sensitivity and degree of every boolean function are polynomially related. This solves an outstanding foundational problem in theoretical computer science, the Sensitivity Conjecture of Nisan and Szegedy.
In the past decade, the catalog of algorithms available to combinatorial optimizers has been substantially extended to settings which allow submodular objective functions. One significant recent result was a tight (1-1/e)-approximation for maximizing a non-negative monotone submodular function subject to a matroid constraint. These algorithmic developments were happening concurrently with research that found a wealth of new applications for submodular optimization in machine learning, recommender systems, and algorithmic game theory.
The related supermodular maximization models also offer an abundance of applications, but they appeared to be highly intractable even under simple cardinality constraints and even when the function has a nice structure. For example, the densest subgraph problem - suspected to be highly intractable - can be expressed as a monotonic supermodular function which has a particularly nice form. Namely, the objective can be expressed by a quadratic form $x^T A x$ where $A$ is a non-negative, symmetric, 0-diagonal matrix. On the other hand, when the entries $A(u,v)$ form a metric, it has been shown that the associated maximization problems have constant factor approximations. Inspired by this, we introduce a parameterized class of non-negative functions called meta-submodular functions that can be approximately maximized within a constant factor. This class includes metric diversity, monotone submodular and other objectives appearing in the machine learning and optimization literature. A general meta-submodular function is neither submodular nor supermodular and so its multi-linear extension does not have the nice convexity/concavity properties which hold for submodular functions. They do, however, have an intrinsic one-sided smoothness property which is essential for our algorithms. This smoothness property might be of independent interest.
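The quadratic-form view of densest subgraph mentioned above can be checked by brute force on a tiny graph (a sketch; the graph is a made-up example):

```python
import numpy as np
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]     # a triangle plus a pendant edge
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1                    # non-negative, symmetric, 0-diagonal

# Brute-force densest subgraph: maximize edges(S) / |S| over all subsets S.
best_S, best_density = None, -1.0
for k in range(1, n + 1):
    for S in combinations(range(n), k):
        d = sum(u in S and v in S for u, v in edges) / k
        if d > best_density:
            best_density, best_S = d, set(S)

x = np.array([i in best_S for i in range(n)], dtype=float)
quad = x @ A @ x                             # x^T A x = 2 * edges(S)
```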
We consider the problem of computing unstable manifolds for equilibrium solutions of parabolic PDEs posed on irregular spatial domains. This new approach is based on the parameterization method, a general functional analytic framework for studying invariant manifolds of dynamical systems. The method leads to an infinitesimal invariance equation describing the unstable manifold. A recursive scheme leads to linear homological equations for the jets of the manifold which are solved using the finite element method. One feature of the method is that we recover the dynamics on the manifold in addition to its embedding. We implement the method for some example problems with polynomial and non-polynomial nonlinearities posed on various non-convex two dimensional domains. We provide numerical support for the accuracy of the computed manifolds using the natural notion of a-posteriori error admitted by the parameterization method. This is joint work with J.D. Mireles-James and Necibe Tuncer.
A question going back to Serre asks which groups arise as fundamental groups of smooth, complex projective varieties, or more generally, compact Kaehler manifolds. The most basic examples of these are surface groups, arising as fundamental groups of 1-dimensional projective varieties. We will survey known examples and restrictions on such groups and explain the special role surface groups play in their classification. Finally, we connect this circle of ideas to more general questions about surface bundles and mapping class groups.
It is a fundamental problem in computer vision to describe the geometric relations between two or more cameras that view the same scene -- state-of-the-art methods for 3D reconstruction incorporate these geometric relations in a nontrivial way. At the center of the action is the essential variety: an irreducible subvariety of P^8 of dimension 5 and degree 10 whose homogeneous ideal is minimally generated by 10 cubic equations. Taking a linear slice of complementary dimension corresponds to solving the "minimal problem" of 5-point relative pose estimation. Viewed algebraically, this problem has 20 solutions for generic data: these solutions are elements of the special Euclidean group SE(3) which double cover a generic slice of the essential variety. The structure of these 20 solutions is governed by a somewhat mysterious Galois group (ongoing work with Regan et al.)
We may ask: what other minimal problems are out there? I'll give an overview of work with Kohn, Pajdla, and Leykin on this question. We have computed the degrees of many minimal problems via computer algebra and numerical methods. I am inviting algebraic geometers at large to attack these problems with "pen and paper" methods: there is still a wide class of problems to be considered, and the more tools we have, the better.
Kodaira, and independently Atiyah, gave the first examples of surface bundles over surfaces whose signature does not vanish, demonstrating that signature need not be multiplicative. These examples, called Kodaira fibrations, are in fact complex projective surfaces admitting a holomorphic submersion onto a complex curve, whose fibers have nonconstant moduli. After reviewing the Atiyah-Kodaira construction, we consider Kodaira fibrations with nontrivial holomorphic invariants in degree one. When the dimension of the invariants is at most two, we show that the total space admits a branched covering over a product of curves.
I will describe a few classical problems in capillarity and the associated classical variational framework. These problems include the well-known shape and rise height problems for the meniscus in a tube as well as the problems associated with sessile and pendent drops. I will briefly discuss elements of recent modifications of the variational theory allowing floating objects. Finally, I will describe a few open problems.
We establish an upper bound on the spectral gap for compact quantum graphs which depends only on the diameter and total number of vertices. This bound is asymptotically sharp for pumpkin chains with number of edges tending to infinity. This is a joint work with D. Borthwick and L. Corsi.
In this talk, we discuss methods for proving existence and uniqueness of a root of a square analytic system in a given region. For a regular root, the Krawczyk method and Smale's $\alpha$-theory are used. On the other hand, when a system has a multiple root, there is a separation bound isolating the multiple root from other roots. We define a simple multiple root, a multiple root whose deflation process terminates after one iteration, and establish its separation bound. We give a general framework to certify a root of a system using these concepts.
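For a univariate polynomial, Smale's $\alpha$-test can be sketched in a few lines (a simplified illustration of the regular-root certificate with the standard threshold $(13-3\sqrt{17})/4$, not the talk's multivariate framework):

```python
import math
import numpy as np

ALPHA0 = (13 - 3 * math.sqrt(17)) / 4        # Smale's certified threshold

def alpha_test(coeffs, x):
    """Smale's alpha-test for a univariate polynomial at an approximate
    root x.  Returns (alpha, certified); coeffs are highest-degree first.
    alpha = beta * gamma with beta the Newton step length and
    gamma = max_k |p^(k)(x) / (k! p'(x))|^(1/(k-1))."""
    p = np.poly1d(coeffs)
    d1 = p.deriv(1)(x)
    beta = abs(p(x) / d1)
    gamma, fact = 0.0, 1.0
    for k in range(2, p.order + 1):
        fact *= k
        gamma = max(gamma, abs(p.deriv(k)(x) / (fact * d1)) ** (1.0 / (k - 1)))
    alpha = beta * gamma
    return alpha, alpha < ALPHA0

# x^2 - 2 near sqrt(2): certified; far from any root: not certified.
_, ok = alpha_test([1, 0, -2], 1.5)
_, bad = alpha_test([1, 0, -2], 0.1)
```

When `certified` is true, Newton's method starting from `x` converges quadratically to a genuine root near `x`.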
Using what we have studied in the Brändén-Huh paper, we will go over the proof of the ultra-log-concavity version of Mason's conjecture.
We study the geometry of minimizers of the interaction energy functional. When the interaction potential is mildly repulsive, it is known to be hard to characterize those minimizers due to the fact that they break the rotational symmetry, suggesting that the problem is unlikely to be resolved by the usual convexity or symmetrization techniques from the calculus of variations. We prove that, if the repulsion is mild and the attraction is sufficiently strong, the minimizer is unique up to rotation and exhibits a remarkable simplex-shape rigid structure. As the first crucial step we consider the maximum variance problem of probability measures under the constraint of bounded diameter, whose answer in one dimension was given by Popoviciu in 1935.
We give a formula relating various notions of heights of abelian varieties. Our formula completes earlier results due to Bost, Hindry, Autissier and Wagener, and it extends the Faltings-Silverman formula for elliptic curves. We also discuss the case of Jacobians in some detail, where graphs and electrical networks will play a key role. Based on joint works with Robin de Jong (Leiden).
An expectation-maximization (EM) algorithm is a powerful clustering method that was initially developed to fit Gaussian mixture distributions. In the absence of a particular probability density function, an EM algorithm aims to estimate the "best" function that maximizes the likelihood of data being generated by the model. We present an EM algorithm which addresses the problem of clustering "mutated" substrings of similar parent strings such that each substring is correctly assigned to its parent string. This problem is motivated by the process of simultaneously reading similar RNA sequences during which various substrings of the sequence are produced and could be mutated; that is, a substring may have some letters changed during the reading process. Because the original RNA sequences are similar, a substring is likely to be assigned to the wrong original sequence. We describe our EM algorithm and present a test on a simulated benchmark which shows that our method yields a better assignment of the substrings than what has been achieved by previous methods. We conclude by discussing how this assignment problem applies to RNA structure prediction.
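For context, the classical EM iteration for a two-component 1-D Gaussian mixture — the setting EM was originally developed for — can be sketched as follows (a toy illustration, not the string-assignment algorithm of the talk):

```python
import numpy as np

def em_gmm_1d(x, iters=60):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])        # deterministic spread-out init
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])                 # mixture weights
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) \
                 / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibility-weighted data
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return mu, sigma, w

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
mu, sigma, w = em_gmm_1d(x)                  # mu should approach (-3, 3)
```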
The classical isoperimetric inequality states that in Euclidean space spheres form the least-perimeter enclosures for any given volume. We will review the historical development of this result in mathematics, and various approaches to proving it. Then we will discuss how one of these approaches, which is a variational argument, may be extended to spaces of nonpositive curvature, known as Cartan-Hadamard manifolds, in order to generalize the isoperimetric inequality.
Stephen Smale's h-cobordism theorem was a landmark result in the classification of smooth manifolds. It paved the way towards solutions for the topological Poincaré and Schoenflies conjectures in dimensions five and higher. Later, building on this, Freedman applied these techniques to 4-manifolds. I shall discuss the ideas relating to h-cobordisms and the proof, which is a wonderful application of handlebody theory and the Whitney trick. Time permitting, we shall explore the world of smooth 4-manifolds further, and talk about cork twists.
We will show the sharp estimate on the behavior of the smallest singular value of random matrices under very general assumptions. One of the steps in the proof is a result about the efficient discretization of the unit sphere in n-dimensional Euclidean space. Another step involves the study of the regularity of the behavior of lattice sets. Some elements of the proof will be discussed. Based on joint work with Tikhomirov and Vershynin.
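The predicted $s_{\min} \asymp n^{-1/2}$ behavior for square matrices with i.i.d. $\pm 1$ entries is easy to observe empirically (a numerical illustration only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_smin(n, trials=30):
    """Empirical mean of the smallest singular value of an n x n ±1 matrix."""
    vals = [np.linalg.svd(rng.choice([-1.0, 1.0], size=(n, n)),
                          compute_uv=False)[-1] for _ in range(trials)]
    return float(np.mean(vals))

# s_min ~ c / sqrt(n) predicts that sqrt(n) * E[s_min] stays bounded in n.
ratios = [np.sqrt(n) * mean_smin(n) for n in (40, 80, 160)]
```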
The classical isoperimetric inequality states that in Euclidean space spheres provide unique enclosures of least perimeter for any given volume. In this talk we discuss how this inequality may be extended to spaces of nonpositive curvature, known as Cartan-Hadamard manifolds, as conjectured by Aubin, Gromov, Burago, and Zalgaller in 1970s and 80s. The proposed proof is based on a comparison formula for total curvature of level sets in Riemannian manifolds, and estimates for the smooth approximation of the signed distance function, via inf-convolution and Reilly type formulas among other techniques. Immediate applications include sharp extensions of Sobolev and Faber-Krahn inequalities to spaces of nonpositive curvature. This is joint work with Joel Spruck.
A graph is $k$-critical if its chromatic number is $k$ but every proper subgraph has chromatic number less than $k$. Let $k\geq 4$. Gallai asked in 1984 whether every $k$-critical graph on $n$ vertices contains at least $n$ distinct $(k-1)$-critical subgraphs. Improving a result of Stiebitz, Abbott and Zhou proved in 1995 that every such graph contains $\Omega(n^{1/(k-1)})$ distinct $(k-1)$-critical subgraphs. Since then no progress had been made until very recently, when Hare resolved the case $k=4$ by showing that every $4$-critical graph on $n$ vertices contains at least $(8n-29)/3$ odd cycles. We mainly focus on 4-critical graphs and develop novel tools for counting cycles of specified parity. Our main result shows that every $4$-critical graph on $n$ vertices contains $\Omega(n^2)$ odd cycles, which is tight up to a constant factor by infinitely many graphs. As a crucial step, we prove the same bound for 3-connected non-bipartite graphs, which may be of independent interest. Using these tools, we also give a very short proof of Gallai's problem for the case $k=4$. Moreover, we improve the longstanding lower bound of Abbott and Zhou to $\Omega(n^{1/(k-2)})$ for the general case $k\geq 5$. Joint work with Tianchi Yang.
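As a sanity check on the $\Omega(n^2)$ odd-cycle count, a brute-force counter (a sketch usable only on tiny graphs) confirms, for instance, that the 4-critical graph $K_4$ contains exactly its 4 triangles as odd cycles:

```python
from itertools import combinations, permutations

def odd_cycles(n, edges):
    """Count simple cycles of odd length by brute force (tiny graphs only)."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    count = 0
    for k in range(3, n + 1, 2):                      # odd cycle lengths
        for verts in combinations(range(n), k):
            first, rest = verts[0], verts[1:]
            for perm in permutations(rest):           # fix the smallest vertex
                cyc = (first,) + perm
                if all((cyc[i], cyc[(i + 1) % k]) in adj for i in range(k)):
                    count += 1
    return count // 2                                 # each cycle has 2 orientations

K4 = list(combinations(range(4), 2))
assert odd_cycles(4, K4) == 4                         # the 4 triangles of K_4
```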
Expander decomposition has been a central tool in designing graph algorithms in many fields (including fast centralized algorithms, approximation algorithms and property testing) for decades. Recently, we found that it also gives many impressive applications in dynamic graph algorithms and distributed graph algorithms. In this talk, I will survey these recent results based on expander decomposition, explain the key components for using this technique, and give some toy examples on how to apply these components.
In 1959 Mark Kac introduced a simple model for the evolution of a gas of hard spheres undergoing elastic collisions. The main simplification consisted in replacing deterministic collisions with random, Poisson-distributed collisions.

It is possible to obtain many interesting results for this simplified dynamics, like estimates on the rate of convergence to equilibrium and validity of the Boltzmann equation. The price paid is that this system has no spatial structure.

I will review some classical results on the Kac model and report on an attempt to reintroduce some form of spatial structure and non-equilibrium evolution in a way that preserves the mathematical tractability of the system.
The foundation is a powerful tool for understanding the representability of matroids. The foundation of a matroid is a pasture, an algebraic structure that generalizes fields. I will briefly introduce matroids, algebraic structures (especially pastures), and matroid representability. I will also give some examples of how the foundation is used in the representation of matroids.
We present a multiscale modeling and computational scheme for optical-mechanical responses of nanostructures. The multi-physical nature of the problem is a result of the interaction between the electromagnetic (EM) field, the molecular motion, and the electronic excitation. To balance accuracy and complexity, we adopt the semi-classical approach in which the EM field is described classically by the Maxwell equations, while the charged particles follow the Schrödinger equations quantum mechanically. To overcome the numerical challenge of solving the high-dimensional multi-component many-body Schrödinger equations, we further simplify the model with Ehrenfest molecular dynamics to determine the motion of the nuclei, and use Time-Dependent Current Density Functional Theory (TD-CDFT) to calculate the excitation of the electrons. This leads to a system of coupled equations that computes the electromagnetic field, the nuclear positions, and the electronic current and charge densities simultaneously. In the regime of linear responses, the resonant frequencies initiating the out-of-equilibrium optical-mechanical responses can be formulated as an eigenvalue problem. A self-consistent multiscale method is designed to deal with the well-separated spatial scales. The isomerization of azobenzene is presented as a numerical example.
It is a remarkable fact that some compact topological 4-manifolds X admit infinitely many exotic smooth structures, a phenomenon unique to dimension four. Indeed a fundamental open problem in the subject is to give a meaningful description of the set of all such structures on any given X. This talk will describe one approach to this problem when X is simply-connected, via cork twisting. First we'll sketch an argument to show that any finite list of smooth manifolds homeomorphic to X can be obtained by removing a single compact contractible submanifold (or cork) from X, and then regluing it by powers of a boundary diffeomorphism. In fact, allowing the cork to be noncompact, the collection of all smooth manifolds homeomorphic to X can be obtained in this way. If time permits, we will also indicate how to construct a single universal noncompact cork whose twists yield all smooth closed simply-connected 4-manifolds. This is joint work with Hannah Schwartz.
This presentation reviews different concepts of solution of a differential equation, in particular stressing the need to modify the classical theory when we want to deal with discontinuous systems. We will review the concept of classical solution, and then of Carathéodory solution and Filippov solution, motivating with simple examples the need for these extensions.
We will discuss a deterministic, polynomial (in the rank) time approximation algorithm for counting the bases of a given matroid and for counting common bases between two matroids of the same rank. This talk follows the paper (https://arxiv.org/abs/1807.00929) of Nima Anari, Shayan Oveis Gharan, and Cynthia Vinzant.
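For the special case of graphic matroids, exact base counting is classical: the bases are spanning trees, counted by Kirchhoff's matrix-tree theorem (a sketch of that special case; the general matroid setting of the talk requires the approximation machinery above):

```python
import numpy as np
from itertools import combinations

def count_bases_graphic(n, edges):
    """Number of bases of the graphic matroid of a connected graph on
    vertices 0..n-1, i.e. its spanning trees, via the matrix-tree theorem."""
    L = np.zeros((n, n))                     # graph Laplacian
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))  # any principal cofactor works

# Cayley's formula: K_n has n^(n-2) spanning trees, so K_4 has 16.
trees_k4 = count_bases_graphic(4, list(combinations(range(4), 2)))
```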
Inspired by the interval decomposition of persistence modules and the extended Newick format of phylogenetic networks, we show that, inside the larger category of partially ordered Reeb graphs, every Reeb graph with n leaves and first Betti number s is equal to a coproduct of at most 2s trees with (n + s) leaves. An implication of this result is that Reeb graphs are fixed-parameter tractable when the parameter is the first Betti number. We propose partially ordered Reeb graphs as a natural framework for modeling time-consistent phylogenetic networks. We define a notion of interleaving distance on partially ordered Reeb graphs which is analogous to the notion of interleaving distance for ordinary Reeb graphs. This suggests using the interleaving distance as a novel metric for time-consistent phylogenetic networks.
NOTE THE UNUSUAL TIME: This seminar takes place from 1:10-1:50 for THIS WEEK ONLY.
The basin of attraction of a stable equilibrium point is an effective concept for stability in deterministic systems. However, it does not contain information on the external perturbations that may affect it. The concept of a stochastic basin of attraction (SBA) is introduced by incorporating a suitable probabilistic notion of basin. The criterion for the size of the SBA is based on the escape probability, which is one of the deterministic quantities that carry dynamical information and can be used to quantify the dynamical behavior of the corresponding stochastic basin of attraction. The SBA is an efficient tool to describe metastable phenomena, complementing the known exit time, escape probability, or relaxation time. Moreover, the geometric structure of the SBA gives additional insight into the system's dynamical behavior, which is important for theoretical and practical reasons. The concept can be used not only in models with small noise intensity but also in models whose noise amplitude is proportional to, or more generally a function of, an order parameter. The efficiency of the concept is illustrated through two applications.
A sub-Riemannian manifold M is a connected smooth manifold such that the only smooth curves in M which are admissible are those whose tangent vectors at any point are restricted to a particular subset of all possible tangent vectors. Such spaces have several applications in physics and engineering, as well as in the study of hypo-elliptic operators. We will construct a random walk on M which converges to a process whose infinitesimal generator is one of the natural sub-elliptic Laplacian operators. We will also describe these Laplacians geometrically and discuss the difficulty of defining one which is canonical. Examples will be provided. This is a joint work with Tom Laetsch.
It is a classical theorem of Alexander that every closed oriented manifold is a piecewise linear branched covering of the sphere. In this talk, we will discuss some obstructions to realizing a manifold as a branched covering of the sphere if we require additional properties (like being a submanifold) on the branch set.
We will survey different methods of proving functional inequalities for hypoelliptic diffusions and the corresponding heat kernels. Some of these methods rely on geometric methods such as curvature-dimension inequalities (due to Baudoin-Garofalo), and some are probabilistic such as coupling, and finally some use structure theory and a Fourier transform on Lie groups. This is based on joint work with M. Asaad, F. Baudoin, B. Driver, T. Melcher, Ph. Mariano et al.
Consider the random surface given by the interface separating the plus and minus phases in a low-temperature Ising model in dimensions $d\geq 3$. Dobrushin (1972) famously showed that in cubes of side-length $n$ the horizontal interface is rigid, exhibiting order one height fluctuations above a fixed point.
We study the large deviations of this interface and obtain a shape theorem for its pillar, conditionally on it reaching an atypically large height. We use this to analyze the law of the maximum height $M_n$ of the interface: we prove that for every $\beta$ large, $M_n/\log n \to c_\beta$, and $(M_n - \mathbb E[M_n])_n$ forms a tight sequence. Moreover, even though this centered sequence does not converge, all its sub-sequential limits satisfy uniform Gumbel tail bounds. Based on joint work with Eyal Lubetzky.
Tangles capture a notion of high-connectivity in graphs which differs from $k$-connectivity. Instead of requiring that a small vertex set $X$ does not disconnect the graph $G$, a tangle “points” to the connected component of $G-X$ that contains most of the “highly connected part”. Developed initially by Robertson and Seymour in the graph minors project, tangles have proven to be a fundamental tool in studying the general structure of graphs and matroids. Tangles are also useful in proving that certain families of graphs satisfy an approximate packing-covering duality, also known as the Erd\H{o}s-P\'osa property. In this talk I will give a gentle introduction to tangles and describe some basic applications related to the Erd\H{o}s-P\'osa property.
Let X be an elliptic surface with a section defined over a number field. Specialization theorems by Néron and Silverman imply that the rank of the Mordell-Weil group of special fibers is at least equal to the MW rank of the generic fiber. We say that the rank jumps when the former is strictly larger than the latter. In this talk, I will discuss rank jumps for elliptic surfaces fibred over the projective line. If the surface admits a conic bundle, we show that the subset of the line for which the rank jumps is not thin in the sense of Serre. This is joint work with Dan Loughran.
While most evolutionary studies of host-pathogen dynamics consider pathogen evolution alone or host-pathogen coevolution, for some diseases (e.g., white-nose syndrome in bats) there is evidence that hosts can sometimes evolve more rapidly than their pathogen. In this talk, we will discuss the spatial, temporal, and epidemiological factors that may drive the evolutionary dynamics of the host population. We consider a simplified system of two host genotypes that trade off factors of disease robustness and spatial mobility or growth. For diseases that infect hosts for life, we find that migration and disease-driven mortality can have antagonistic effects on host densities when disease selection on hosts is low, but show synergy when selection is high. For diseases that allow hosts to recover with immunity, we explore the conditions under which the disease dies out, becomes endemic, or has periodic outbreaks, and show how these dynamics relate to the relative success of the robust and wild-type hosts in the population over time. Overall, we will discuss how combinations of host spatial structure, demography, and the epidemiology of infectious disease can significantly influence host evolution and disease prevalence. We will conclude with some implications for wildlife conservation and zoonotic disease control.
The Torelli group is the subgroup of the mapping class group acting trivially on homology. We will discuss some basic properties of the Torelli group and explain how to define it for surfaces with boundary. We will also give some Torelli analogues of the Birman exact sequence.
Matroids are combinatorial gadgets that reflect properties of linear algebra in situations where the latter theory is not available. This analogy prescribes that the moduli space of matroids should be a Grassmannian over a suitable base object, which cannot be a field or a ring; in consequence, usual algebraic geometry does not provide a suitable framework. In joint work with Matt Baker, we use algebraic geometry over F1, the so-called field with one element, to construct such moduli spaces. As an application, we streamline various results of matroid theory and find simplified proofs of classical theorems, such as the fact that a matroid is regular if and only if it is binary and orientable.
We will dedicate the first half of this talk to an introduction of matroids and their generalizations. Then we will outline how to use F1-geometry to construct the moduli space of matroids. In a last part, we will explain why this theory is so useful to simplify classical results in matroid theory.
A powerful method for analyzing graphs is to first apply regularity lemmas, which roughly state that one can partition the graph into a few parts so that it looks mostly random between the parts, and then apply probabilistic tools from there. The drawback of this approach is that it only works in general when the input graph is very dense: standard regularity lemmas are trivial already for n-node graphs on "only" <= n^{1.99} edges.
In this work we prove extensions of several standard regularity lemmas to sparse graphs, which are nontrivial so long as the graph spectrum is not too far from that of a random graph. We then apply our notion of "spectral pseudorandomness" to port several notable regularity-based results in combinatorics and theoretical computer science down to sparser graphs.
Joint work with Santosh Vempala.
High-dimensional inference problems such as sparse PCA and planted clique often exhibit statistical-vs-computational tradeoffs whereby there is no known polynomial-time algorithm matching the performance of the optimal estimator. I will discuss an emerging framework -- based on the so-called low-degree likelihood ratio -- for precisely predicting these tradeoffs and giving rigorous evidence for computational hardness in the conjectured hard regime. This method was originally proposed in a sequence of works on the sum-of-squares hierarchy, and the key idea is to study whether or not there exists a low-degree polynomial that succeeds at a given statistical task.
In the second part of the talk, I will give an application to the algorithmic problem of finding an approximate ground state of the SK (Sherrington-Kirkpatrick) spin glass model. I will explain two variants of this problem: "optimization" and "certification." While optimization can be solved in polynomial time [Montanari'18], we give rigorous evidence (in the low-degree framework) that certification cannot be. This result reveals a fundamental discrepancy between two classes of algorithms: local search succeeds while convex relaxations fail.
Based on joint work with Afonso Bandeira and Tim Kunisky (https://arxiv.org/abs/1902.07324 and https://arxiv.org/abs/1907.11636).
Breathers are time-periodic, spatially localized solutions of evolutionary PDEs. They are known to exist for the sine-Gordon equation but are believed to be rare in other Klein-Gordon equations. Exchanging the roles of time and position, breathers can be interpreted as homoclinic solutions to a steady solution. In this talk, I will explain how to obtain an asymptotic formula for the distance between the stable and unstable manifolds of the steady solution when that solution has weakly hyperbolic one-dimensional stable and unstable manifolds. Their distance is exponentially small with respect to the amplitude of the breather, and therefore classical perturbative techniques cannot be applied. This is joint work with O. Gomide, T. Seara and C. Zeng.
In this talk, I will introduce my older work (1) and my current work (2).
1. Bounds on regularity of quadratic monomial ideals
We can understand invariants of monomial ideals via invariants of the clique (or flag) complexes of the corresponding graphs. In particular, we can bound the Castelnuovo-Mumford regularity (a measure of algebraic complexity) of such ideals by bounding the homology of the corresponding simplicial complex. The construction and proof of our main theorem are simple, yet the theorem provides (and improves) many bounds on the regularity of quadratic monomial ideals.
2. Pythagoras numbers on projections of Rational Normal Curves
Observe that forms of degree $2d$ can be viewed as quadratic forms in the forms of degree $d$. Therefore, to study the cone of sums of squares of degree $2d$, we may study quadratic forms on the Veronese embedding of degree $d$. In particular, the rank of a sum of squares (of degree $2d$) can be studied via the Pythagoras number (a classical notion) on the Veronese embedding of degree $d$. In this part, I will compute the Pythagoras number of the rational normal curve (the Veronese embedding of $\mathbb{P}^1$) and discuss how Pythagoras numbers change when we project away from points.
We will describe a twisted action of the symmetric group on the polynomial ring in n variables and use it to define a twisted version of Schubert polynomials. These twisted Schubert polynomials are known to be related to the Chern-Schwartz-MacPherson classes of Schubert cells in the flag variety. Using properties of skew divided difference operators, we will show that these twisted Schubert polynomials are monomial positive and give a combinatorial formula for their coefficients.
In this talk we present some recent results which allow one to prove instability in nearly integrable Hamiltonian systems. We will show how these mechanisms can be applied to concrete systems, and also how they are useful for obtaining Arnold diffusion in a large set of Hamiltonian systems.
Gromov revolutionized the study of finitely generated groups by showing that an intrinsic metric on a group is intimately connected with the algebra of the group. This point of view has produced deep applications not only in group theory, but also in topology, geometry, logic, and dynamical systems. We will start at the beginning of this story with the definitions of these metrics on groups and how notions from classical geometry can be generalized to this context. The focus will be on how "hyperbolic groups" exhibit geometric and dynamical features reminiscent of the hyperbolic plane and its isometries.
This talk is based on work in progress with Sara Lamboglia and Faye Simon. We study the tropical convex hull of convex sets and of tropical curves. Basic definitions of tropical convexity and tropical curves will be presented, followed by an overview of our results on the interaction between tropical and classical convexity. Lastly, we study a tropical analogue of an inequality bounding the degree of a projective variety in classical algebraic geometry; we show a tropical proof of this result for a special class of tropical curves.
While producing subgroups of a group by specifying generators is easy, understanding the structure of such a subgroup is a notoriously difficult problem. In the case of hyperbolic groups, Gitik utilized a local-to-global property for geodesics to produce an elegant condition that ensures a subgroup generated by two elements (or, more generally, by two subgroups) will split as an amalgamated free product over the intersection of the generators. We show that the mapping class group of a surface and many other important groups have a similar local-to-global property, from which an analogue of Gitik's result can be obtained. In the case of the mapping class group, this produces a combination theorem for the dynamically and topologically important convex cocompact subgroups. Joint work with Davide Spriano and Hung C. Tran.
We consider the strong chain recurrent set and the generalized recurrent set for continuous maps of compact metric spaces. Recent work by Fathi and Pageault has shown a connection between the two sets, and has led to new results on them. We discuss a structure theorem for transitive/mixing maps, and classify maps that permit explosions in the size of the recurrent sets.
As a geometric group theorist, my favorite type of manifold is a surface and my favorite way to study surfaces is by considering the mapping class group, which is the collection of symmetries of a surface. In this talk, we will think bigger than your average low-dimensional topologist and consider surfaces of infinite type and their associated “big” mapping class groups.
Mori Dream Spaces are generalizations of toric varieties and, as the name suggests, Mori's minimal model program can be run for every divisor. It is known that for n≥5, the blow-up of P^n at r very general points is a Mori Dream Space iff r≤n+3. In this talk we proceed to blow up points as well as lines, by considering the blow-up X of P^3 at 6 points in very general position and all the 15 lines through the 6 points. We find that the unique anticanonical section of X is a Jacobian K3 Kummer surface S of Picard number 17. We prove that there exists an infinite-order pseudo-automorphism of X, whose restriction to S is one of the 192 infinite-order automorphisms constructed by Keum. A consequence is that there are infinitely many extremal effective divisors on X; in particular, X is not a Mori Dream Space. We show an application to the blow-up of P^n (n≥3) at (n+3) points and certain lines. We relate this pseudo-automorphism to the structure of the birational automorphism group of P^3. This is a joint work with Lei Yang.
This is a quick tutorial on bounding the mixing time of a finite Markov chain in terms of functional inequalities defining the spectral gap and the entropy constant of the chain. The lecture will include some examples, including bounding the mixing time of the random transposition shuffle and (time permitting) that of the basis-exchange walk on a balanced matroid.
This is intended as a review lecture before Nima Anari’s lectures (during Nov. 4-6) on applications of Lorentzian polynomials, including recent breakthrough analyses of the basis-exchange walk on an arbitrary matroid.
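To make the first example concrete, the following sketch (my own illustration, not the lecture's notation) builds the exact transition matrix of the random transposition shuffle on S_3 and tracks the total variation distance to the uniform distribution:

```python
import itertools
import numpy as np

perms = list(itertools.permutations(range(3)))
index = {p: k for k, p in enumerate(perms)}

def swap(p, i, j):
    q = list(p); q[i], q[j] = q[j], q[i]
    return tuple(q)

# Transition matrix of the random transposition shuffle on S_3:
# pick positions i and j uniformly at random (possibly equal) and swap them.
P = np.zeros((6, 6))
for p in perms:
    for i in range(3):
        for j in range(3):
            P[index[p], index[swap(p, i, j)]] += 1 / 9

dist = np.zeros(6)
dist[index[(0, 1, 2)]] = 1.0  # start at the identity permutation
uniform = np.full(6, 1 / 6)
for t in range(1, 11):
    dist = dist @ P
    tv = 0.5 * np.abs(dist - uniform).sum()
    print(t, round(tv, 6))
# The total variation distance to uniform decays geometrically,
# at a rate governed by the spectral gap of P.
```

For this tiny chain the distance is computable exactly; the tutorial's functional inequalities are what let one bound the same decay without enumerating the state space.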
I will present a new method of analysis for Einstein’s constraint equations, referred to as the Seed-to-Solution Method, which leads to the existence of asymptotically Euclidean manifolds with prescribed asymptotic behavior. This method generates a (Riemannian) Einstein manifold from any seed data set consisting of (1) a Riemannian metric and a symmetric two-tensor prescribed on a topological manifold with finitely many asymptotically Euclidean ends, and (2) a density field and a momentum vector field representing the matter content. By distinguishing between several classes of seed data, referred to as tame or strongly tame, the method encompasses metrics with the weakest possible decay (infinite ADM mass) or the strongest possible decay (Schwarzschild behavior). The analysis is based on a linearization of the Einstein equations (involving several curvature operators from Riemannian geometry) around a tame seed data set, and is motivated by Carlotto and Schoen’s pioneering work on the so-called localization problem for the Einstein equations. Dealing with manifolds of possibly very low decay, and establishing estimates beyond the critical level of decay, requires significant new ideas, to be presented in this talk. As an application of our method, we introduce and solve a new problem, referred to as the asymptotic localization problem, at the critical level of decay. Collaboration with T. Nguyen. Blog: philippelefloch.org
A central, pervasive challenge in genomics is that RNA/DNA must be reconstructed from short, often noisy subsequences. In this talk, we describe a new digraph algorithm which enables this "assembly" when analyzing high-throughput transcriptomic sequencing data. Specifically, the Flow Decomposition problem on a directed acyclic graph asks for the smallest set of weighted paths that “cover” a flow (a weight function on the edges where the amount coming into any vertex equals the amount leaving). We describe a new linear-time algorithm solving *k*-Flow Decomposition, the variant where exactly *k* paths are used. Further, we discuss how we implemented and engineered a general Flow Decomposition solver based on this algorithm, and describe its performance on RNA-sequencing data. Crucially, our solver finds exact solutions while achieving runtimes competitive with a state-of-the-art heuristic, and we discuss the implications of our results for the original model selection for transcript assembly in this setting.
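To fix ideas about what a flow decomposition is, here is a toy greedy decomposition of a flow on a DAG. This is only an illustrative heuristic, not the linear-time *k*-Flow Decomposition algorithm of the talk, and it need not return the minimum number of paths:

```python
def greedy_flow_decomposition(edges, source, sink):
    """Greedily decompose a flow on a DAG into weighted source-to-sink paths.

    `edges` maps (u, v) -> positive flow value, with flow conservation at
    every vertex other than `source` and `sink`.  Each round follows
    positive-flow edges from source to sink and subtracts the path's
    bottleneck weight; at least one edge is zeroed per round, so the
    loop terminates.
    """
    flow = dict(edges)
    paths = []
    while True:
        path, u = [source], source
        while u != sink:
            nxt = next((v for (a, v) in flow if a == u and flow[(a, v)] > 0), None)
            if nxt is None:
                break
            path.append(nxt)
            u = nxt
        if u != sink:
            break  # no remaining positive flow out of the source
        legs = list(zip(path, path[1:]))
        bottleneck = min(flow[leg] for leg in legs)
        for leg in legs:
            flow[leg] -= bottleneck
        paths.append((bottleneck, path))
    return paths

# A flow of value 5: s->a->t carries 3 units and s->b->t carries 2.
edges = {("s", "a"): 3, ("a", "t"): 3, ("s", "b"): 2, ("b", "t"): 2}
print(greedy_flow_decomposition(edges, "s", "t"))
```

In transcript assembly, each recovered weighted path plays the role of a candidate transcript with an expression level.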
In the first part of this talk, I will give an overview of a theory of harmonic analysis on a class of fractals that includes the Sierpinski gasket. The starting point of the theory is the introduction by J. Kigami of a Laplacian operator on these fractals. After reviewing the construction of this fractal Laplacian, I will survey some of the properties of its spectrum. In the second part of the talk, I will discuss fractal analogs of the Heisenberg uncertainty principle, and the spectral properties of a class of Schrödinger operators.
Which manifolds can be obtained by surgery on a knot? Many obstructions to this have been studied. We will discuss some of them, and use Heegaard Floer homology to give an infinite family of Seifert fibered integer homology spheres that cannot be obtained by surgery on a knot in S^3. We will also discuss a recipe to compute HF+ of surgery on a knot (a short review of Heegaard Floer homology included).
Sampling is a fundamental algorithmic task. Many modern applications require sampling from complicated probability distributions in high-dimensional spaces. While the setting of logconcave target distribution is well-studied, it is important to understand sampling beyond the logconcavity assumption. We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution on R^n under isoperimetry conditions. We show a convergence guarantee in Kullback-Leibler (KL) divergence assuming the target distribution satisfies log-Sobolev inequality and the log density has bounded Hessian. Notably, we do not assume convexity or bounds on higher derivatives. We also show convergence guarantees in Rényi divergence assuming the limit of ULA satisfies either log-Sobolev or Poincaré inequality. Joint work with Santosh Vempala (arXiv:1903.08568).
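The ULA iteration itself is simple; the following sketch implements the standard update (the Gaussian target, step size, and iteration counts are illustrative choices of mine, not from the talk):

```python
import numpy as np

def ula_sample(grad_U, x0, step, n_iters, rng=None):
    """Unadjusted Langevin Algorithm for target density proportional to exp(-U):
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,  xi_k ~ N(0, I)."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Target: standard Gaussian in 2D, so U(x) = |x|^2 / 2 and grad U(x) = x.
samples = ula_sample(lambda x: x, x0=np.zeros(2), step=0.1, n_iters=5000)
burned = samples[1000:]  # discard burn-in
# After burn-in, the empirical mean should be near 0 and the variance near 1
# (ULA has a small discretization bias controlled by the step size).
print(burned.mean(axis=0), burned.var(axis=0))
```

The convergence guarantees in the talk quantify exactly how the step size and the isoperimetric constants of the target control the bias and mixing of this recursion.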
Let $G$ be a graph and $a_0, a_1, a_2, b_1,$ and $b_2$ be distinct vertices of $G$. Motivated by their work on the Four Color Theorem, Hadwiger's conjecture for $K_6$, and Jorgensen's conjecture, Robertson and Seymour asked when $G$ contains disjoint connected subgraphs $G_1, G_2$ such that $\{a_0, a_1, a_2\}\subseteq V(G_1)$ and $\{b_1, b_2\}\subseteq V(G_2)$. We prove that if $G$ is 6-connected then such $G_1, G_2$ exist. Joint work with Robin Thomas and Xingxing Yu.
Advisor: Dr. Xingxing Yu (School of Mathematics, Georgia Institute of Technology)
Committee: Dr. Robin Thomas (School of Mathematics, Georgia Institute of Technology), Dr. Prasad Tetali (School of Mathematics, Georgia Institute of Technology), Dr. Lutz Warnke (School of Mathematics, Georgia Institute of Technology), Dr. Richard Peng (School of Computer Science, Georgia Institute of Technology)
Reader: Dr. Gexin Yu (Department of Mathematics, College of William and Mary)
Everybody is convinced that everything is known about the simplest random process: coin tossing. I will show that this is not the case. In particular, not everything was known for the simplest chaotic dynamical systems, like the tent map (which is equivalent to coin tossing). This new area of finite-time predictions emerged from a natural new question in the theory of open dynamical systems.
We propose a Markov chain Monte Carlo (MCMC) algorithm based on third-order Langevin dynamics for sampling from distributions with log-concave and smooth densities. The higher-order dynamics allow for more flexible discretization schemes, and we develop a specific method that combines splitting with more accurate integration. For a broad class of d-dimensional distributions arising from generalized linear models, we prove that the resulting third-order algorithm produces samples from a distribution that is at most \varepsilon in Wasserstein distance from the target distribution in O(d^{1/3}/ \varepsilon^{2/3}) steps. This result requires only Lipschitz conditions on the gradient. For general strongly convex potentials with α-th order smoothness, we prove that the mixing time scales as O(d^{1/3} / \varepsilon^{2/3} + d^{1/2} / \varepsilon^{1 / (\alpha - 1)}). Our high-order Langevin diffusion reduces the problem of log-concave sampling to numerical integration along a fixed deterministic path, which opens the door to further improvements in high-dimensional MCMC problems. Joint work with Yi-An Ma, Martin J. Wainwright, Peter L. Bartlett and Michael I. Jordan.
Rank-structured matrix representations, e.g., H2 and HSS, are commonly used to reduce computation and storage cost for dense matrices defined by interactions between many bodies. The main bottleneck for their applications is the expensive computation required to represent a matrix in a rank-structured matrix format which involves compressing specific matrix blocks into low-rank form.
We focus on the study and application of a class of hybrid analytic-algebraic compression methods, called the proxy point method. We address several critical problems concerning this underutilized method which limit its applicability. A general form of the method is proposed, paving the way for its wider application in the construction of different rank-structured matrices with kernel functions that are more general than those usually used. Further, we extend the applicability of the proxy point method to compress matrices defined by electron repulsion integrals, which accelerates one of the main computational steps in quantum chemistry.
Committee members: Prof. Edmond Chow (Advisor, School of CSE, Georgia Tech), Prof. David Sherrill (School of Chemistry and Biochemistry, Georgia Tech), Prof. Jianlin Xia (Department of Mathematics, Purdue University), Prof. Yuanzhe Xi (Department of Mathematics, Emory University), and Prof. Haomin Zhou (School of Mathematics, Georgia Tech).
In this talk, we will focus on the spin dynamics of rigid bodies.
Algorithm part: There are many algorithms designed for N-body simulations, but to study the climate of a planet we need to extend the simulation from point-mass bodies to rigid bodies. In N-rigid-body simulations, we track the orientation and angular momentum of each rigid body to understand its spin. In terms of the algorithm, symplectic integrators are designed via splitting methods.
Physical part: We studied the spin dynamics of an Earth-like planet in circumbinary systems. Canonical Delaunay variables and Andoyer variables are applied to split the variables into slow and fast parts. Applying the averaging method, we approximate the spin dynamics, and from the approximated dynamics we may draw some interesting physical conclusions.
A fundamental question in Dynamical Systems is to identify regions of phase/parameter space satisfying a given property (stability, linearization, etc.). In this talk, given a family of analytic circle diffeomorphisms depending on a parameter, we obtain effective (almost optimal) lower bounds on the Lebesgue measure of the set of parameters for which the diffeomorphism is conjugate to a rigid rotation. We estimate this measure using an a-posteriori KAM scheme that relies on quantitative conditions checkable using computer assistance. We carefully describe how the hypotheses in our theorems are reduced to a finite number of computations, and apply our methodology to the Arnold family in the far-from-integrable regime. This is joint work with Jordi Lluis Figueras and Alejandro Luque.
Heegaard Floer homology gives a powerful invariant of closed 3-manifolds. It is always computable in the purely combinatorial sense, but computing it is usually very hard. However, for certain graph 3-manifolds, the minus-flavored Heegaard Floer homology can be easily computed in terms of lattice homology, due to Némethi. I plan to give the basic definitions and constructions of Heegaard Floer theory and lattice homology, as well as the isomorphism between the two objects.
By using the representation theory of the symmetric group we try to compare, with respect to two different bases of the vector space of symmetric forms, the cones of symmetric nonnegative forms and symmetric sums of squares of a fixed even degree when the number of variables goes to infinity.
Using the covering involution on the double branched cover of S3 branched along a knot, and adapting ideas of Hendricks-Manolescu and Hendricks-Hom-Lidman, we define new knot (concordance) invariants and apply them to deduce novel linear independence results in the smooth concordance group of knots. This is a joint work with A. Alfieri and A. Stipsicz.
Given a graph X and a group G, a G-cover of X is a morphism of graphs X’ --> X together with an invariant G-action on X’ that acts freely and transitively on the fibers. G-covers are classified by their monodromy representations, and if G is a finite abelian group, then the set of G-covers of X is in natural bijection with the first simplicial cohomology group H1(X,G).
In tropical geometry, we are naturally led to consider more general objects: morphisms of graphs X’ --> X admitting an invariant G-action on X’, such that the induced action on the fibers is transitive, but not necessarily free. A natural question is to classify all such covers of a given graph X. I will show that when G is a finite abelian group, a G-cover of a graph X is naturally determined by two data: a stratification S of X by subgroups of G, and an element of a cohomology group H1(X,S) generalizing the simplicial cohomology group H1(X,G). This classification can be viewed as a tropical version of geometric class field theory, and as an abelianization of Bass--Serre theory.
I will discuss the realizability problem for tropical abelian covers, and the relationship between cyclic covers of a tropical curve C and the corresponding torsion subgroup of Jac(C). The realizability problem for cyclic covers of prime degree turns out to be related to the classical nowhere-zero flow problem in graph theory.
Joint work with Yoav Len and Martin Ulirsch.
Standard statistical inference techniques for the tree of life, like maximum likelihood and Bayesian inference through Markov chain Monte Carlo (MCMC), have been widely used, but their performance declines as datasets grow (in number of genes or number of species).
I will present two new approaches suitable for big data: first, an importance sampling technique for Bayesian inference of phylogenetic trees, and second, a pseudolikelihood method for inference of phylogenetic networks.
The proposed methods will allow scientists to include more species into the tree of life, and thus complete a broader picture of evolution.
One of the simplest and, at the same time, most prominent models for the discrete quasi-periodic Schrödinger operator is the almost Mathieu operator (also called the Harper model). This simple-looking operator is known to exhibit exotic spectral properties. Three (out of fifteen) of Barry Simon's problems on Schrödinger operators in the twenty-first century concern the almost Mathieu operator. In 2014, Artur Avila won a Fields Medal for work including the solutions to these three problems. In this talk, I will concentrate on the one concerning the Lebesgue measure of the spectrum. I will also talk about the difficulties in generalizing this result to the extended Harper's model. Students with background in numerics are especially welcome to attend!
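For the numerically inclined, here is a hedged sketch (truncation size, coupling, and phase are illustrative choices of mine) that diagonalizes a finite truncation of the almost Mathieu operator at the golden-mean frequency; the finite matrix only approximates the true spectrum:

```python
import numpy as np

def almost_mathieu(N, lam, alpha, theta=0.0):
    """Finite N x N truncation of the almost Mathieu operator
    (H psi)_n = psi_{n+1} + psi_{n-1} + 2*lam*cos(2*pi*(theta + n*alpha)) * psi_n."""
    n = np.arange(N)
    H = np.diag(2 * lam * np.cos(2 * np.pi * (theta + n * alpha)))
    H += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    return H

alpha = (np.sqrt(5) - 1) / 2  # golden-mean frequency
eigs = np.linalg.eigvalsh(almost_mathieu(500, 1.0, alpha))
# All eigenvalues of the truncation lie strictly inside [-4, 4]; the Lebesgue
# measure of the true spectrum at coupling lam is known to be |4 - 4*lam|,
# which vanishes at the critical coupling lam = 1 used here.
print(eigs.min(), eigs.max())
```

Plotting such eigenvalues against the frequency alpha is the standard way to render the Hofstadter butterfly.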
That the ball minimizes surface area among all sets of fixed volume has been known since antiquity; this is equivalent to the fact that the ball is the unique set which yields equality in the isoperimetric inequality. But the isoperimetric inequality is only a very special case of the quadratic inequalities between mixed volumes of convex bodies, whose equality cases had been unknown since the time of Minkowski. This talk is about these quadratic inequalities and their unusual equality cases, which we resolved using degenerate diffusions on the sphere. No background in geometry will be assumed. Joint work with Ramon van Handel.
The talk will discuss the relationship between the topology and geometry of Einstein 4-manifolds such as K3 surfaces.
The Ehrhard-Borell inequality stands at the top of the pyramid of Gaussian inequalities. It is a powerful and delicate statement about the convexity of the Gaussian measure. In this talk I will discuss the inequality and its beautiful proof by Borell. The delicate nature of the inequality however makes the characterization of the equality cases difficult and they were left unknown. I will explain how we solved this problem. Joint work with Ramon van Handel.
This is a talk about 3-manifolds and knots. We will begin by reviewing some basic constructions and motivations in low-dimensional topology, and will then introduce the homology cobordism group, the group of 3-manifolds with the same homology as the 3-dimensional sphere up to a reasonable notion of equivalence. We will discuss what is known about the structure of this group and its connection to higher dimensional topology. We will then discuss some existing invariants of the homology cobordism group coming from gauge theory and symplectic geometry, particularly Floer theory. Finally, we will introduce a new invariant of homology cobordism coming from an equivariant version of the computationally-friendly Floer-theoretic 3-manifold invariant Heegaard Floer homology, and use it to construct a new filtration on the homology cobordism group and derive some structural applications. Parts of this talk are joint work with C. Manolescu and I. Zemke; more recent parts of this talk are joint work with J. Hom and T. Lidman.
I will introduce a minimum l-degree threshold for the existence of a nearly perfect (i.e., covering all but a constant number of vertices) matching in a k-graph, where k ≥ 3 and k/2 < l ≤ k − 1. This improves upon an earlier result of Hàn, Person, and Schacht for the same range of l. In some cases, such a matching can in fact be near-perfect (i.e., covering all but at most k vertices), and our bound on the minimum l-degree is best possible. This is joint work with Hongliang Lu and Xingxing Yu.
Let X be the number of length 3 arithmetic progressions in a random subset of Z/101Z. Does X take the values 630 and 640 with roughly the same probability?
Let Y denote the number of triangles in a random graph on n vertices. Despite looking similar to X, the local distribution of Y is quite different, as Y obeys a local limit theorem.
We will talk about a method for distinguishing when combinatorial random variables obey local limit theorems and when they do not.
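The random variable X above is easy to simulate. The following sketch (my own illustration, not from the talk) counts three-term arithmetic progressions in random subsets of Z/101Z; its sample mean lands near the values 630 and 640 quoted in the question:

```python
import numpy as np

def count_3aps(S, p=101):
    """Count three-term APs (a, a+d, a+2d) with d != 0 inside S, a subset of Z/p,
    identifying each progression with its reversal (a+2d, a+d, a)."""
    mask = np.zeros(p, dtype=bool)
    mask[list(S)] = True
    a = np.arange(p)
    total = 0
    for d in range(1, p):
        total += int(np.sum(mask[a] & mask[(a + d) % p] & mask[(a + 2 * d) % p]))
    return total // 2  # (a, d) and (a + 2d, -d) name the same progression

rng = np.random.default_rng(1)
# Each element of Z/101Z is included independently with probability 1/2,
# so E[X] = (101 * 100 / 2) / 8 = 631.25.
counts = [count_3aps(np.flatnonzero(rng.random(101) < 0.5)) for _ in range(200)]
print(min(counts), max(counts), np.mean(counts))
```

Tabulating the empirical frequencies of individual values of X (rather than just its range) is what reveals the local irregularity that distinguishes X from the triangle count Y.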
Considering SL(2,R) skew-product maps over circle rotations, we prove that a renormalization transformation associated with the golden mean alpha has a nontrivial periodic orbit of length 3. We also present some numerical results, including evidence that this period 3 describes scaling properties of the Hofstadter butterfly near the top of the spectrum at alpha, and scaling properties of the generalized eigenfunction for this energy.
For an $n\times n$ matrix $A_n$, the $r\to p$ operator norm is defined as $\|A_n\|_{r \to p}= \sup_{\|x\|_r\leq 1 } \|A_n x\|_p$ for $r,p\geq 1$. The $r\to p$ operator norm puts a huge number of important quantities of interest in diverse disciplines under a single unified framework. Applications of this norm span a broad spectrum of areas, including data-dimensionality reduction in machine learning, finding oblivious routing schemes in transportation networks, and matrix condition number estimation.
In this talk, we will consider the $r\to p$ norm of a class of symmetric random matrices with nonnegative entries, which includes the adjacency matrices of the Erd\H{o}s-R\'enyi random graphs and matrices with sub-Gaussian entries. For $1< p\leq r< \infty$, we establish the asymptotic normality of the appropriately centered and scaled $\|A_n\|_{r \to p}$, as $n\to\infty$. The special case $r=p=2$, which corresponds to the largest singular value of matrices, was proved in a seminal paper by F\"uredi and Koml\'os (1981). Of independent interest, we further obtain a sharp $\ell_\infty$-approximation for the maximizer vector. The results also hold for sparse matrices and further the $\ell_\infty$-approximation for the maximizer vector also holds for a broad class of deterministic sequence of matrices with certain asymptotic `expansion' properties.
This is based on a joint work with Souvik Dhara (MIT) and Kavita Ramanan (Brown U.).
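For intuition about the definition, here is a crude Monte Carlo lower bound on $\|A\|_{r\to p}$ (an illustration only; computing the norm exactly is hard for general $r, p$), checked against the case $r = p = 2$, where the norm equals the largest singular value:

```python
import numpy as np

def rp_norm_lower_bound(A, r, p, trials=2000, rng=None):
    """Monte Carlo lower bound on ||A||_{r->p} = sup_{||x||_r <= 1} ||A x||_p,
    obtained by maximizing over random directions scaled onto the unit l_r sphere."""
    rng = rng or np.random.default_rng(0)
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(A.shape[1])
        x /= np.linalg.norm(x, ord=r)
        best = max(best, np.linalg.norm(A @ x, ord=p))
    return best

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
# For r = p = 2 the true value is the largest singular value of A, so the
# random-search bound should sit just below it.
print(rp_norm_lower_bound(A, 2, 2), np.linalg.svd(A, compute_uv=False).max())
```

The abstract's central limit theorem concerns exactly the fluctuations of this supremum (and its maximizer) when the entries of $A$ are random.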
I’ll try and give some background on the definition of knot Floer homology, and perhaps also bordered Heegaard Floer homology if time permits.
The analysis and decomposition of nonstationary and nonlinear signals, in the quest for the identification of hidden quasiperiodicities and trends, is nowadays of high theoretical and applied interest. Linear techniques like the Fourier and wavelet transforms, historically used in signal processing, cannot fully capture nonlinear and nonstationary phenomena. For this reason, in the last few years new nonlinear methods have been developed, like the groundbreaking Empirical Mode Decomposition algorithm (aka the Hilbert--Huang Transform) and the Iterative Filtering technique. In this seminar I will give an overview of these methods and introduce two new algorithms, Fast Iterative Filtering and Adaptive Local Iterative Filtering. I will review the main theoretical results and outline the most intriguing open problems that still need to be tackled in the field. Some examples of applications of these techniques to both artificial and real-life signals will be shown to give a foretaste of their potential and robustness.
‘Koszul duality’ is a phenomenon which algebraists are fond of, and has previously been studied in the context of '(bordered) Heegaard Floer homology' by Lipshitz, Ozsváth and Thurston. In this talk, I shall discuss an occurrence of Koszul duality which links older constructions in Heegaard Floer homology with the bordered Heegaard Floer homology of three-manifolds with torus boundary. I shan’t assume any existing knowledge of Koszul duality or any form of Heegaard Floer homology.
The Probabilistic Method is a powerful tool for tackling many problems in discrete mathematics and related areas.
Roughly speaking, its basic idea can be described as follows: in order to prove the existence of a combinatorial structure with certain properties, we construct an appropriate probability space and show that a randomly chosen element of this space has the desired property with positive probability.
In this talk we shall give a gentle introduction to the Probabilistic Method, with an emphasis on examples.
We introduce the notion of tropical curves of hyperelliptic type. These are tropical curves whose Jacobian is isomorphic, as a polarized tropical abelian variety, to that of a hyperelliptic tropical curve. Using the tropical Torelli theorem (due to Caporaso and Viviani), this characterization may be phrased in terms of 3-edge-connectivizations. We show that being of hyperelliptic type is independent of the edge lengths and is preserved when passing to genus ≥ 2 connected minors. The main result is an excluded-minors characterization of tropical curves of hyperelliptic type.
We study the mean field limit of large stochastic systems of interacting particles. To treat more general and singular kernels, we propose a modulated free energy, a combination of the method that we had previously developed and of the modulated energy introduced by S. Serfaty. This modulated free energy may be understood as introducing appropriate weights in the relative entropy to cancel the most singular terms involving the divergence of the flow. Our modulated free energy allows us to treat singular potentials which combine a large smooth part, a small attractive singular part, and a large repulsive singular part. As an example, a full rigorous derivation (with quantitative estimates) of some chemotaxis models, such as the Patlak-Keller-Segel system in the subcritical regimes, is obtained. This is joint work with D. Bresch and Z. Wang.
A fundamental tool used in sampling, counting, and inference problems is the Markov Chain Monte Carlo method, which uses random walks to solve computational problems. The main parameter defining the efficiency of this method is how quickly the random walk mixes (converges to the stationary distribution). The goal of these talks is to introduce a new approach for analyzing the mixing time of random walks on high-dimensional discrete objects. This approach works by directly relating the mixing time to analytic properties of a certain multivariate generating polynomial. As our main application we will analyze basis-exchange random walks on the set of bases of a matroid. We will show that the corresponding multivariate polynomial is log-concave over the positive orthant, and use this property to show three progressively improving mixing time bounds: For a matroid of rank r on a ground set of n elements:
- We will first show a mixing time of O(r^2 log n) by analyzing the spectral gap of the random walk (based on related works on high-dimensional expanders).
- Then we will show a mixing time of O(r log r + r log log n) based on the modified log-Sobolev inequality (MLSI), due to Cryan, Guo, and Mousa.
- We will then completely remove the dependence on n, and show the tight mixing time of O(r log r), by appealing to variants of well-studied notions in discrete convexity.
Time-permitting, I will discuss further recent developments, including relaxed notions of log-concavity of a polynomial, and applications to further sampling/counting problems.
Based on joint works with Kuikui Liu, Shayan Oveis Gharan, and Cynthia Vinzant.
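To make the basis-exchange walk concrete, here is a small sketch (my own toy illustration, not the talks' machinery) that runs the walk on the bases of a graphic matroid, the spanning trees of the complete graph K4, and checks empirically that it equilibrates to the uniform distribution:

```python
import itertools
import random

# Bases of the graphic matroid of K4, i.e. spanning trees of the complete
# graph on 4 vertices (Cayley's formula gives 4^{4-2} = 16 of them).
vertices = range(4)
edges = list(itertools.combinations(vertices, 2))

def is_basis(edge_set):
    """An edge set is a basis iff it has |V| - 1 edges and is acyclic (union-find)."""
    tree = list(edge_set)
    if len(tree) != len(vertices) - 1:
        return False
    parent = list(vertices)
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding (u, v) would close a cycle
        parent[ru] = rv
    return True

bases = [frozenset(t) for t in itertools.combinations(edges, 3) if is_basis(t)]

def basis_exchange_step(B, rng):
    """Drop a uniform e in B, then add a uniform f such that B - e + f is a basis
    (f = e is always allowed, which makes the chain lazy and reversible)."""
    e = rng.choice(sorted(B))
    options = [f for f in edges if is_basis((B - {e}) | {f})]
    return (B - {e}) | {rng.choice(options)}

rng = random.Random(0)
counts = {B: 0 for B in bases}
B = bases[0]
for _ in range(20000):
    B = basis_exchange_step(B, rng)
    counts[B] += 1
# The stationary distribution is uniform on bases, so every empirical
# frequency should be close to 1/16 = 0.0625.
print(len(bases), min(counts.values()) / 20000, max(counts.values()) / 20000)
```

The lectures' point is quantitative: log-concavity of the basis generating polynomial is what yields the O(r log r) bound on how long such a walk needs to reach near-uniformity on an arbitrary matroid.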
A graphical model encodes conditional independence relations among random variables. For an undirected graph these conditional independence relations are represented by a simple polytope known as the graph associahedron, which is a Minkowski sum of standard simplices. We prove that there are analogous polytopes for a much larger class of graphical models. We construct this polytope as a Minkowski sum of matroid polytopes. The motivation came from the problem of learning Bayesian networks from observational data. No background on graphical models will be assumed for the talk. This is a joint work with Fatemeh Mohammadi, Caroline Uhler, and Charles Wang.
(This is Part 2, continuation of Tuesday's lecture.)
A real matrix is called orthostochastic if it is the entrywise square of an orthogonal matrix. These matrices have been shown to be deeply connected to determinantal representations of polynomials, and also arise naturally in physics. However, the equations defining the real variety are known only up to the 3x3 case. I will show how various techniques of numerical algebraic geometry give a way of finding (set-theoretic) defining equations for the 4x4 orthostochastic variety, which are smaller (both in number and degree) than the naive equations one might initially guess. Based on joint work with Papri Dey.
Brascamp-Lieb inequalities are estimates for certain multilinear forms on functions on Euclidean spaces. They generalize several classical inequalities, such as Hölder's inequality or Young's convolution inequality. In this talk we consider singular Brascamp-Lieb inequalities, which arise when one of the functions in the Brascamp-Lieb inequality is replaced by a singular integral kernel. Examples include multilinear singular integral forms such as paraproducts or the multilinear Hilbert transform. We survey some results in the area.
The emergent shape of a knitted fabric is highly sensitive to the underlying stitch pattern. Here, by a stitch pattern we mean a periodic array of symbols encoding a set of rules or instructions performed to produce a swatch or a piece of fabric. It is therefore crucial to understand what exactly these instructions mean in terms of mechanical moves performed using a yarn (a smooth piece of string) and a set of knitting needles (oriented sticks). Motivated by the fact that locally every knitting move results in a slip knot, we use tools from topology to model the set of all doubly periodic stitch patterns, knittable and non-knittable, as knots and links in a three-manifold. Specifically, we define a map from the set of doubly periodic stitch patterns to the set of links in S^3 and use link invariants such as the linking number and the multivariable Alexander polynomial to characterize them. We focus on such links derived from knitted stitch patterns in an attempt to tackle the question of whether or not a given stitch pattern can be realized through knitting.
We will discuss a novel approach to obtaining non-asymptotic estimates on the lower tail of the least singular value of an $n \times n$ random matrix $M_{n} := M + N_{n}$, where $M$ is a fixed matrix with operator norm at most $O(\exp(n^{c}))$ and $N_n$ is a random matrix, each of whose entries is an independent copy of a random variable with mean 0 and variance 1. This has been previously considered in a series of works by Tao and Vu, and our results improve upon theirs in two ways:
(i) We are able to deal with $\|M\| = O(\exp(n^{c}))$, whereas previous work was applicable for $\|M\| = O(\mathrm{poly}(n))$.
(ii) Even for $\|M\| = O(\mathrm{poly}(n))$, we are able to extract more refined information – for instance, our results show that for such $M$, the probability that $M_n$ is singular is $O(\exp(-n^{c}))$, whereas even in the case when $N_n$ is an i.i.d. Bernoulli matrix, the results of Tao and Vu only give an inverse polynomial singularity probability.
When Alice wants to send a k-bit message v to Bob over a noisy channel, she encodes it as a longer n-bit message Mv, where M is an n × k matrix over F_2. The minimal distance d_M of the linear code M is defined as the minimum Hamming distance between Mw and Mu over all distinct points w, u in F_2^k. In this way, if fewer than d_M/2 bits of the message are corrupted, Bob can recover the original message via a nearest-neighbor search algorithm.
The classical Gilbert-Varshamov bound provides a lower bound for d_M when the columns of M are independent copies of X, where X is the random vector uniformly distributed on F_2^n. Under the same assumption on M, we show that the distribution of d_M is essentially the same as that of the minimum Hamming weight (Hamming distance to the origin) of 2^k - 1 i.i.d. copies of X.
The result is surprising since M is only generated by k independent copies of X. Furthermore, our results also work for arbitrary finite fields.
This is joint work with Jing Hao, Galyna Livshyts, Konstantin Tikhomirov.
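For small parameters, the quantities above can be checked by brute force; here is a hedged stdlib sketch (names ours), using the linearity of the code to compute d_M as a minimum weight over nonzero codewords:

```python
import random
from itertools import product

def min_distance(M, k):
    """Minimum distance of the binary linear code generated by the
    n x k matrix M over F_2: by linearity, d_M equals the minimum
    Hamming weight of Mw over nonzero w in F_2^k (brute force,
    feasible only for small k)."""
    n = len(M)
    best = n
    for w in product([0, 1], repeat=k):
        if any(w):
            weight = sum(sum(M[i][j] * w[j] for j in range(k)) % 2
                         for i in range(n))
            best = min(best, weight)
    return best

rng = random.Random(1)
n, k = 12, 4
# columns drawn uniformly at random, as in the Gilbert-Varshamov setting
M = [[rng.randint(0, 1) for _ in range(k)] for _ in range(n)]
d = min_distance(M, k)
# Bob can correct up to floor((d - 1) / 2) corrupted bits
assert 0 <= d <= n
```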
I will talk about algorithms (with unlimited computational power) which adaptively probe pairs of vertices of a graph to learn the presence or absence of edges and whose goal is to output a large clique. I will focus on the case of the random graph G(n,1/2), in which case the size of the largest clique is roughly 2\log(n). Our main result shows that if the number of pairs queried is linear in n and adaptivity is restricted to finitely many rounds, then the largest clique cannot be found; more precisely, no algorithm can find a clique larger than c\log(n) where c < 2 is an explicit constant. I will also discuss this question in the planted clique model. This is based on joint works with Uriel Feige, David Gamarnik, Joe Neeman, Benjamin Schiffer, and Prasad Tetali.
This talk is based on a paper by Grigoriy Blekherman. In most cases, nonnegative polynomials differ from positive polynomials. We will discuss precisely what equations cause these differences, and relate them to the well known Cayley-Bacharach theorem for low degree polynomials.
We discuss the problem of optimal mixing of an inhomogeneous distribution of a scalar field via an active control of the flow velocity, governed by the Stokes or the Navier-Stokes equations, in a two dimensional open bounded and connected domain. We consider the velocity field steered by a control input that acts tangentially on the boundary of the domain through the Navier slip boundary conditions. This is motivated by mixing within a cavity or vessel by moving the walls or stirring at the boundaries. Our main objective is to design an optimal Navier slip boundary control that optimizes mixing at a given final time. Non-dissipative scalars, both passive and active, governed by the transport equation will be discussed. In the absence of diffusion, transport and mixing occur due to pure advection. This essentially leads to a nonlinear control problem of a semi-dissipative system. We shall provide a rigorous proof of the existence of an optimal controller, derive the first-order necessary conditions for optimality, and present some preliminary results on the numerical implementation.
One strategy for developing a proof of a claimed theorem is to start by understanding what a counter-example should look like. In this talk, we will discuss a few recent results in harmonic analysis that utilize a quantitative version of this approach. A key step is the solution of an inverse problem with the following flavor. Let $T:X \to Y$ be a bounded linear operator and let $0 < a \leq \|T\|$. What can we say about those functions $f \in X$ obeying the reverse inequality $\|Tf\|_Y \geq a\|f\|_X$?
A multivariate complex polynomial is called stable if any line in any positive direction meets its hypersurface only at real points. Stable polynomials have close relations to matroids and hyperbolic programming. We will discuss a generalization of stability to algebraic varieties of codimension larger than one. They are varieties which are hyperbolic with respect to the nonnegative Grassmannian, following the notion of hyperbolicity studied by Shamovich, Vinnikov, Kummer, and Vinzant. We show that their tropicalization and Chow polytopes have nice combinatorial structures related to braid arrangements and positroids, generalizing some results of Choe, Oxley, Sokal, Wagner, and Brändén on Newton polytopes and tropicalizations of stable polynomials. This is based on joint work with Felipe Rincón and Cynthia Vinzant.
I will continue to describe deterministic algorithms for approximately counting common bases of matroids within an exponential factor. This is based on AOVI and previous works of AO.
I will discuss an ongoing project to reconstruct a gene network from time-series data from a mammalian signaling pathway. The data is generated from gene knockouts and the techniques involve computational algebra. Specifically, one creates a pseudomonomial "ideal of non-disposable sets" and applies an analogue of Stanley-Reisner theory and Alexander duality to it. Of course, things never work as well in practice, due to issues such as noise, discretization, and scalability, and so I will discuss some of these challenges and current progress.
Starting from mathematical approaches for image processing, we will discuss different models, analytic aspects of them, and numerical challenges. If time permits we will consider numerical applications to data understanding. A few other applications may be presented.
One of the outstanding open problems of statistical mechanics concerns the hard-core model, a popular topic in mathematical physics with applications in a number of other disciplines. Namely, do non-overlapping hard disks of the same diameter in the plane admit a unique Gibbs measure at high density? It seems natural to approach this question by requiring the centers to lie in a fine lattice; equivalently, we may fix the lattice but let the Euclidean diameter D of the hard disks tend to infinity. In two dimensions, the lattice can be a unit triangular lattice A_2 or a unit square lattice Z^2. The randomness is generated by Gibbs/DLR measures with a large value of fugacity, which corresponds to a high density. We analyze the structure of high-density hard-core Gibbs measures via Pirogov-Sinai theory. The first step is to identify the periodic ground states, i.e., maximal-density disk configurations which cannot be locally 'improved'. A key finding is that only certain 'dominant' ground states, which we determine, generate nearby Gibbs measures. Another important ingredient is the Peierls bound separating ground states from other admissible configurations. In particular, number-theoretic properties of the exclusion diameter D turn out to be important. Answers are provided in terms of Eisenstein primes for A_2 and norm equations in the cyclotomic ring Z[ζ] for Z^2, where ζ is the primitive 12th root of unity. Unlike most models in statistical physics, we find non-universality: the number of high-density hard-core Gibbs measures grows indefinitely with D, but non-monotonically. In Z^2 we also analyze the phenomenon of 'sliding' and show that it is rare.
This is a joint work with A. Mazel and Y. Suhov.
Let X be a random variable taking values in {0,...,n} and f(z) be its probability generating function. Pemantle conjectured that if the variance of X is large and f has no roots close to 1 in the complex plane, then X must be approximately normal. We will discuss a complete resolution of this conjecture in a strong quantitative form, thereby giving the best possible version of a result of Lebowitz, Pittel, Ruelle and Speer. Additionally, if f has no roots with small argument, then X must be approximately normal, again in a sharp quantitative form. These results also imply a multivariate central limit theorem that answers a conjecture and completes a program of Ghosh, Liggett and Pemantle. This talk is based on joint work with Julian Sahasrabudhe.
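As a toy sanity check (ours, not from the paper): for X ~ Binomial(n, 1/2), the generating function f(z) = ((1+z)/2)^n has its only root at z = -1, far from 1, the variance n/4 is large, and X is indeed approximately normal:

```python
import math

n = 400
# X ~ Binomial(n, 1/2) has pgf f(z) = ((1 + z) / 2)**n: its only
# root, z = -1, is far from 1, and Var(X) = n/4 is large, so the
# conjecture predicts approximate normality.
pmf = [math.comb(n, j) / 2**n for j in range(n + 1)]
assert abs(sum(pmf) - 1) < 1e-9                      # f(1) = 1
alt = sum((-1)**j * p for j, p in enumerate(pmf))
assert abs(alt) < 1e-9                               # f(-1) = 0

mean, sd = n / 2, math.sqrt(n) / 2
within_one_sd = sum(p for j, p in enumerate(pmf) if abs(j - mean) <= sd)
# CLT: the mass within one standard deviation is close to 0.6827
assert abs(within_one_sd - 0.6827) < 0.05
```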
In this talk, we provide the details of our faster width-dependent algorithm for mixed packing-covering LPs. Mixed packing-covering LPs are fundamental to combinatorial optimization in computer science and operations research. Our algorithm finds a $1+\varepsilon$ approximate solution in time $O(Nw/\varepsilon)$, where $N$ is the number of nonzero entries in the constraint matrix and $w$ is the maximum number of nonzeros in any constraint. This algorithm is faster than Nesterov's smoothing algorithm, which requires $O(N\sqrt{n}w/\varepsilon)$ time, where $n$ is the dimension of the problem. Our work utilizes the framework of area convexity introduced in [Sherman-FOCS'17] to obtain the best dependence on $\varepsilon$ while breaking the infamous $\ell_{\infty}$ barrier to eliminate the factor of $\sqrt{n}$. The current best width-independent algorithm for this problem runs in time $O(N/\varepsilon^2)$ [Young-arXiv-14] and hence has worse running time dependence on $\varepsilon$. Many real-life instances of mixed packing-covering problems exhibit small width, and for such cases our algorithm can report higher precision results when compared to width-independent algorithms. As a special case of our result, we report a $1+\varepsilon$ approximation algorithm for the densest subgraph problem which runs in time $O(md/\varepsilon)$, where $m$ is the number of edges in the graph and $d$ is the maximum graph degree.
We report on the discovery of a general principle leading to the unexpected cancellation of oscillating sums. It turns out that sums in the class we consider are much smaller than would be predicted by certain probabilistic heuristics. After stating the motivation and our theorem, we apply it to prove a number of results on integer partitions, the distribution of prime numbers, and the Prouhet-Tarry-Escott Problem. For example, we prove a "Pentagonal Number Theorem for the Primes", which counts the number of primes (with von Mangoldt weight) in a set of intervals very precisely. In fact, the result is stronger than what one would get using a strong form of the Prime Number Theorem together with the Riemann Hypothesis (where one naively estimates the \Psi function on each of the intervals; a less naive argument can give an improvement), since the widths of the intervals are smaller than \sqrt{x}, making the Riemann Hypothesis estimate "trivial".
Based on joint work with Ernie Croot.
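Euler's pentagonal number theorem is the classical prototype of such cancellation; as a standard illustration (not the new results of the talk), it yields a sparse recurrence for the partition function:

```python
def partitions(n):
    """p(n) via Euler's pentagonal number recurrence:
    p(n) = sum_{k>=1} (-1)^(k+1) [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)].
    Only O(sqrt(n)) earlier values enter each step, thanks to the
    massive cancellation encoded in the pentagonal number theorem."""
    p = [1] + [0] * n
    for m in range(1, n + 1):
        k, total = 1, 0
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > m:
                break
            sign = 1 if k % 2 else -1
            total += sign * p[m - g1]
            if g2 <= m:
                total += sign * p[m - g2]
            k += 1
        p[m] = total
    return p[n]

assert [partitions(n) for n in range(10)] == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
assert partitions(100) == 190569292
```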
The topological entropy of a subshift is the exponential growth rate of the number of words of different lengths in its language. For subshifts of entropy zero, finer growth invariants constrain their dynamical properties. In this talk we will survey how the complexity of a subshift affects properties of the ergodic measures it carries. In particular, we will see some recent results (joint with B. Kra) relating the word complexity of a subshift to its set of ergodic measures as well as some applications.
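As a concrete zero-entropy example (our illustration, not from the talk), the Fibonacci substitution word is Sturmian, with the minimal nontrivial word complexity p(n) = n + 1, so the exponential growth rate of its language is zero:

```python
def fibonacci_word(length):
    """Prefix of the Fibonacci substitution word (0 -> 01, 1 -> 0),
    a classic example of a zero-entropy, uniquely ergodic subshift."""
    w = "0"
    while len(w) < length:
        w = "".join("01" if c == "0" else "0" for c in w)
    return w[:length]

def word_complexity(w, n):
    """Number of distinct length-n factors appearing in the word w."""
    return len({w[i:i + n] for i in range(len(w) - n + 1)})

w = fibonacci_word(5000)
# Sturmian words have complexity p(n) = n + 1, the smallest possible
# for an aperiodic word, so the topological entropy is zero
assert all(word_complexity(w, n) == n + 1 for n in range(1, 20))
```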
The Bergman fan of a matroid is the tropical linear space (with trivial valuation) that encodes the matroid combinatorially. In this talk, guided by plenty of examples, we study the definition of the Bergman fan and its subdivisions. The talk will close with a recent result from the paper of Maclagan and Yu (https://arxiv.org/abs/1908.05988): the fine subdivision of the Bergman fan of any matroid is (r-1)-connected, where r is the rank of the matroid.
The talk is concerned with low multilinear rank approximations to antisymmetric tensors, that is, multivariate arrays for which the entries change sign when permuting pairs of indices. Such tensors play a major role in quantum chemistry. We show which ranks can be attained by an antisymmetric tensor and discuss the adaptation of existing approximation algorithms to preserve antisymmetry, most notably a Jacobi-type algorithm. Particular attention is paid to the special case when the rank is chosen equal to the order of the tensor. It is shown that this case can be addressed with an unstructured rank-1 approximation. This allows for the straightforward application of the higher-order power method, for which we discuss effective initialization strategies. This is joint work with Daniel Kressner (EPFL).
I will outline the construction of some knot concordance invariants based on the Heegaard Floer homology of double branched coverings. The construction builds on some ideas developed by Hendricks, Manolescu, Hom and Lidman. This is joint work with Andras Stipsicz, and Sungkyung Kang.
Surfaces are some of the most basic examples of spaces. Although topologists have studied surfaces for a long time, they continue to fascinate. I'll give an overview of the study of surfaces over the past 150 years by highlighting work of seven mathematicians. We'll discuss the classification of surfaces, and we'll also discuss mapping class groups, which are collections of symmetries of surfaces. I'll also give the flavor of four of my own research projects about surfaces, one for each of four broad mathematical areas: group theory, geometry, topology, and dynamics.
Building on previous work of Rozansky and Willis, we generalise Rasmussen’s s-invariant to connected sums of $S^1 \times S^2$. Such an invariant can be computed by approximating the Khovanov-Lee complex of a link in $\#^r S^1 \times S^2$ with that of appropriate links in $S^3$. We use the approximation result to compute the s-invariant of a family of links in $S^3$ which seems otherwise inaccessible, and use this computation to deduce an adjunction inequality for null-homologous surfaces in a (punctured) connected sum of $\bar{CP^2}$. This inequality has several consequences: first, the s-invariant of a knot in the three-sphere does not increase under the operation of adding a null-homologous full twist. Second, the s-invariant cannot be used to distinguish $S^4$ from homotopy 4-spheres obtained by Gluck twist on $S^4$. We also prove a connected sum formula for the s-invariant, improving a previous result of Beliakova and Wehrli. We define two s-invariants for links in $\#^r S^1 \times S^2$. One of them gives a lower bound to the slice genus in $\natural^r S^1 \times B^3$ and the other one to the slice genus in $\natural^r D^2 \times S^2$ . Lastly, we give a combinatorial proof of the slice Bennequin inequality in $\#^r S^1 \times S^2$.
We show that the dynamics of nonlinear dynamical systems with many degrees of freedom (possibly infinitely many) can be similar to that of ordered systems in a surprising fashion. To this aim, the literature typically uses techniques from perturbation theory, such as the KAM theorem or the Nekhoroshev theorem. Unfortunately, these are known to be ill-suited for obtaining results in the case of many degrees of freedom. We present here a probabilistic approach, in which we focus on some observables of physical interest (obtained by averaging over the probability distribution on initial data), and for several models we obtain stability results over long times similar to Nekhoroshev estimates. We present the example of a nonlinear chain of particles with alternating masses, a hyper-simplified model of a diatomic solid. In this case, which is similar to the celebrated Fermi-Pasta-Ulam model and is widely studied in the literature, we show the progress with respect to previous results, and in particular how the present approach permits obtaining theorems valid in the thermodynamic limit, which is of great relevance for physical implications.
In this talk I will give an introduction to certain aspects of geometric Littlewood-Paley theory, which is an area of harmonic analysis concerned with deriving regularity properties of sets and measures from the analytic behavior of associated operators. The work we shall describe has been carried out in collaboration with Fedor Nazarov, Maria Carmen Reguera, Xavier Tolsa, and Michele Villa.
Function classes are collections of Boolean functions on a finite set. Recently, a method of studying function classes via commutative algebra, by associating a squarefree monomial ideal to a function class, was introduced by Yang. I will describe this connection, as well as some free resolutions and Betti numbers for these ideals for an interesting collection of function classes, corresponding to intersection-closed posets. This is joint work with Chris Eur, Greg Yang, and Mengyuan Zhang.
We consider a physical periodic Ehrenfests' Wind-Tree model, where the moving particle is a hard ball rather than a (mathematical) point particle. Some dynamical and statistical properties of this model are studied. Moreover, it is shown to have a new superdiffusive regime, with diffusion coefficient $D(t)\sim(\ln t)^2$, which seems never to have been observed before in any model.
In this talk I'll first give a background overview of Bourgain's approach to proving the invariance of the Gibbs measure for the periodic cubic nonlinear Schrödinger equation in 2D, and of the para-controlled calculus of Gubinelli, Imkeller, and Perkowski in the context of parabolic stochastic equations. I will then present our resolution of the long-standing problem of proving almost sure global well-posedness (i.e., existence with uniqueness) for the periodic nonlinear Schrödinger equation (NLS) in 2D on the support of the Gibbs measure, for any (defocusing and renormalized) odd power nonlinearity. Consequently we get the invariance of the Gibbs measure. This is achieved by a new method we call random averaging operators, which precisely captures the intrinsic randomness structure of the problematic high-low frequency interactions at the heart of this problem. This is joint work with Yu Deng (USC) and Haitian Yue (USC).
In both biological brains and artificial neural networks, the representational geometry - the shape and distribution of activity - at different layers in an artificial network or across different populations of neurons in the brain, can reveal important signatures of the underlying computations taking place. In this talk, I will describe how we are developing strategies for comparing and aligning neural representations, using a combination of tools from computational geometry and optimal transport.
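As a minimal illustration of the optimal-transport ingredient (a toy sketch, not the methods of the talk): in one dimension the optimal transport plan between equal-size point clouds simply pairs sorted points, which already gives a crude way to compare two "representations":

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two equal-size 1D point clouds:
    with the Euclidean cost on the line, the optimal transport plan
    matches the i-th smallest point of xs to the i-th smallest of ys."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# two toy 'representations' differing by a shift: the distance is the shift
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x + 0.5 for x in xs]
assert abs(wasserstein_1d(xs, ys) - 0.5) < 1e-12
```

Comparing high-dimensional neural activity requires far more machinery (alignment over symmetry groups, entropic regularization, and so on), but the sorted-matching fact above is the one-dimensional seed of those methods.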
Determining when two objects have “the same shape” is difficult; this difficulty depends on the dimension we are working in. While many of the same techniques work to study things in dimensions 5 and higher, we can better understand dimensions 1, 2, and 3 using other methods. We can think of 4-dimensional space as the “bridge” between low-dimensional behavior and high-dimensional behavior.
One way to understand the possibilities in each dimension is to examine objects called cobordisms: if an (n+1)-dimensional space has an "edge," which is called a boundary, then that boundary is itself an n-dimensional space. We say that two n-dimensional spaces are cobordant if together they form the boundary of an (n+1)-dimensional space. Using the idea of spaces related by cobordism, we can form an algebraic structure called a group. In this way, we can attempt to understand higher dimensions using clues from lower dimensions.
In this talk, I will discuss different types of cobordism groups and how to study them using tools from a broad range of mathematical areas.
I will discuss the prime decomposition of three-manifolds. First I will define the connect sum operation, irreducible and prime 3-manifolds. Then using the connect sum operation as "multiplication," I will show any closed oriented three-manifold decomposes uniquely into prime factors using spheres. If time permits, I will show another way of decomposing using discs.
The condition number of a matrix A is the quantity κ(A) = smax(A)/smin(A), where smax(A) and smin(A) are the largest and smallest singular values of A, respectively. Let A be a random n × n matrix with i.i.d., mean zero, unit variance, subgaussian entries. We will discuss a result by Litvak, Tikhomirov and Tomczak-Jaegermann which states that, in this setting, the condition number satisfies the small ball probability estimate
P{κ(A) ≤ n/t} ≤ 2 exp(−ct^2), t ≥ 1, where c > 0 is a constant depending only on the subgaussian moment.
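A quick Monte Carlo sanity check in the tiny case n = 2 with Gaussian entries (a toy illustration of ours; the theorem concerns large n and general subgaussian entries):

```python
import math
import random

def cond_2x2(A):
    """kappa(A) = smax/smin for a 2x2 matrix, via the closed-form
    eigenvalues of A^T A (whose square roots are the singular values)."""
    (a, b), (c, d) = A
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d  # entries of A^T A
    disc = math.sqrt((p - r) ** 2 + 4 * q * q)
    lam_max, lam_min = (p + r + disc) / 2, (p + r - disc) / 2
    if lam_min <= 0:
        return math.inf
    return math.sqrt(lam_max / lam_min)

rng = random.Random(0)
trials = 20000
hits = sum(cond_2x2([[rng.gauss(0, 1), rng.gauss(0, 1)],
                     [rng.gauss(0, 1), rng.gauss(0, 1)]]) <= 1.2
           for _ in range(trials))
# very well-conditioned matrices are rare (empirically on the order of
# a couple percent here), consistent with a small-ball bound of the
# shape P(kappa <= n/t) <= 2 exp(-c t^2)
assert hits / trials < 0.05
```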
A sequence A of positive integers is r-Ramsey complete if for every r-coloring of A, every sufficiently large integer can be written as a sum of the elements of a monochromatic subsequence. Burr and Erdos proposed several open problems in 1985 on how sparse an r-Ramsey complete sequence can be and on which polynomial sequences are r-Ramsey complete. Erdos later offered cash prizes for two of these problems. We prove a result which solves the problems of Burr and Erdos on Ramsey complete sequences. The proof uses tools from probability, combinatorics, and number theory.
Joint work with David Conlon.
Fictitious play is one of the simplest and most natural dynamics for two-player zero-sum games. Originally proposed by Brown in 1949, the fictitious play dynamic has each player simultaneously best-respond to the distribution of historical plays of their opponent. In 1951, Robinson showed that fictitious play converges to the Nash Equilibrium, albeit at an exponentially-slow rate, and in 1959, Karlin conjectured that the true convergence rate of fictitious play after k iterations is O(k^{-1/2}), a rate which is achieved by similar algorithms and is consistent with empirical observations. Somewhat surprisingly, Daskalakis and Pan disproved a version of this conjecture in 2014, showing that an exponentially-slow rate can occur, although their result relied on adversarial tie-breaking. In this talk, we show that Karlin’s conjecture holds if ties are broken lexicographically and the game matrix is diagonal. We also show a matching lower bound under this tie-breaking assumption. This is joint work with Jake Abernethy and Andre Wibisono.
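Here is a minimal sketch of the dynamic on matching pennies (a diagonal game), with lexicographic tie-breaking as in the result above; all names are ours, and this is an illustration rather than the paper's analysis:

```python
def fictitious_play(A, steps):
    """Fictitious play for a zero-sum game with payoff matrix A (row
    player maximizes, column player minimizes).  Each round, both
    players simultaneously best-respond to the opponent's empirical
    distribution of past plays; ties are broken lexicographically
    (lowest index)."""
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] += 1  # both players open with their first action
    col_counts[0] += 1
    for _ in range(steps):
        r = max(range(m),
                key=lambda i: (sum(A[i][j] * col_counts[j] for j in range(n)), -i))
        c = max(range(n),
                key=lambda j: (-sum(A[i][j] * row_counts[i] for i in range(m)), -j))
        row_counts[r] += 1
        col_counts[c] += 1
    rt, ct = sum(row_counts), sum(col_counts)
    return [x / rt for x in row_counts], [x / ct for x in col_counts]

# matching pennies: the unique equilibrium is (1/2, 1/2) for both players
A = [[1, -1], [-1, 1]]
rows, cols = fictitious_play(A, 20000)
assert abs(rows[0] - 0.5) < 0.05 and abs(cols[0] - 0.5) < 0.05
```

By Robinson's theorem the empirical frequencies converge to equilibrium; the rate of that convergence is exactly what Karlin's conjecture is about.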
A sunflower with $r$ petals is a collection of $r$ sets so that the intersection of each pair is equal to the intersection of all. Erdos and Rado in 1960 proved the sunflower lemma: for any fixed $r$, any family of sets of size $w$, with at least about $w^w$ sets, must contain a sunflower. The famous sunflower conjecture is that the bound on the number of sets can be improved to $c^w$ for some constant $c$. Despite much research, the best bounds until recently were all of the order of $w^{cw}$ for some constant c. In this work, we improve the bounds to about $(\log w)^{w}$.
There are two main ideas that underlie our result. The first is a structure vs pseudo-randomness paradigm, a commonly used paradigm in combinatorics. This allows us to either exploit structure in the given family of sets, or otherwise to assume that it is pseudo-random in a certain way. The second is a duality between families of sets and DNFs (Disjunctive Normal Forms). DNFs are widely studied in theoretical computer science. One of the central results about them is the switching lemma, which shows that DNFs simplify under random restriction. We show that when restricted to pseudo-random DNFs, much milder random restrictions are sufficient to simplify their structure.
Joint work with Ryan Alweiss, Kewen Wu and Jiapeng Zhang.
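The sunflower definition above is easy to test by brute force on small families; a stdlib sketch (names ours):

```python
from itertools import combinations

def has_sunflower(family, r):
    """Brute-force check: does this family of sets contain a sunflower
    with r petals, i.e. r sets whose pairwise intersections all equal
    the common intersection (the core)?"""
    for sets in combinations(family, r):
        core = set.intersection(*map(set, sets))
        if all(set(a) & set(b) == core for a, b in combinations(sets, 2)):
            return True
    return False

# three sets with common core {1}: a sunflower with 3 petals
assert has_sunflower([{1, 2}, {1, 3}, {1, 4}], 3)
# pairwise intersections all differ, so no 3-petal sunflower here
assert not has_sunflower([{1, 2}, {2, 3}, {1, 3}], 3)
```

The theorems above say that any sufficiently large family of w-element sets must make the first assertion true; the recent improvement lowers "sufficiently large" from roughly w^w to roughly (log w)^w.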
We study the asymptotic homotopical complexity of a sequence of billiard flows on the 2D unit torus T^2 with n circular obstacles. We obtain asymptotic lower and upper bounds for the radial sizes of the homotopical rotation sets and, accordingly, asymptotic lower and upper bounds for the sequence of topological entropies. The obtained bounds are rather close to each other, so in this way we capture the asymptotic homotopical complexity of such systems quite well.
Note that the sequence of topological entropies grows at the order of log(n), whereas, in sharp contrast, the order of magnitude of the sequence of metric entropies is only log(n)/n.
Also, we prove the convexity of the admissible rotation set AR, and the fact that the rotation vectors obtained from periodic admissible trajectories form a dense subset of AR.
One often gains insight into the topology of a manifold by studying its sub-manifolds. Some of the most interesting sub-manifolds of a 3-manifold are the "incompressible surfaces", which, intuitively, are the properly embedded surfaces that cannot be further simplified while remaining non-trivial. In this talk, I will present some results on classifying orientable incompressible surfaces in a hyperbolic mapping torus whose fibers are 4-punctured spheres. I will explain how such a surface gives rise to a path satisfying certain combinatorial properties in the arc complex of the 4-punctured sphere, and how we can reconstruct such surfaces from these paths. This extends and generalizes results of Floyd, Hatcher, and Thurston.
Did you know that a wheel or a ball bearing does not need to be round? Convex regions that can roll smoothly come in many remarkable shapes and have practical applications in engineering and science. Moreover, the mathematics used to describe them, known as convex geometry, is a subject that beautifully ties together analysis and geometry. I'll bring some of these objects along and tell the class how to describe them effectively and recount their interesting history.
We will give a brief introduction to Schur polynomials and intersection theory and show how to use Section 10 of the Brändén-Huh paper to obtain log-concavity results for Schur polynomials.
https://arxiv.org/abs/1902.03719
https://arxiv.org/abs/1906.09633
Anosov flows provide beautiful examples of interactions between dynamics, geometry and analysis. In dimension 3 in particular, they are known to have a subtle relation to topology as well. Motivated by a result of Mitsumatsu from 1995, I will discuss their relation to contact and symplectic structures and argue why contact topological methods are natural tools to study the related global phenomena.
I will discuss the orderability of the fundamental groups of knot complements including known results, a useful technique using some ideas of Baumslag, and some interesting questions that have recently arisen from this study.
I will discuss how a graph theoretic construction used by Hirasawa and Murasugi can be used to show that the commutator subgroup of the knot group of a two-bridge knot is a union of an ascending chain of parafree groups. Using a theorem of Baumslag, this implies that the commutator subgroup of a two-bridge knot group is residually torsion-free nilpotent which has applications to the anti-symmetry of ribbon concordance and the bi-orderability of two-bridge knots. In 1973, E. J. Mayland gave a conference talk in which he announced this result. Notes on this talk can be found online. However, this result has never been published, and there is evidence, in later papers, that a proper proof might have eluded Mayland.
Domino tilings of finite grid regions have been studied in many contexts, revealing rich combinatorial structure. They arise in applications spanning physics, computer science, probability theory, and recreational mathematics. We will look at questions such as counting and sampling from large combinatorial sets, with the set of domino tilings as a running example, providing a small sample of the techniques that are used.
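As a minimal illustration of the counting and sampling questions mentioned above (a sketch only; the function names are my own, and the talk's techniques go well beyond this), the domino tilings of a 2 × n strip satisfy a Fibonacci-type recurrence, which also yields an exact uniform sampler:

```python
import random

def count_tilings(n: int) -> int:
    """Count domino tilings of a 2 x n strip.

    A tiling either ends with one vertical domino (leaving a 2 x (n-1) strip)
    or with two stacked horizontal dominoes (leaving a 2 x (n-2) strip),
    giving the recurrence T(n) = T(n-1) + T(n-2), with T(0) = T(1) = 1.
    """
    a, b = 1, 1  # T(0), T(1)
    for _ in range(n - 1):
        a, b = b, a + b
    return b if n >= 1 else 1

def sample_tiling(n: int) -> list:
    """Sample a uniformly random tiling of the 2 x n strip.

    Choose the rightmost piece with probability proportional to the
    number of ways to complete the rest of the strip.
    """
    pieces = []
    while n > 0:
        # P(rightmost piece is a vertical domino) = T(n-1) / T(n)
        if n == 1 or random.random() < count_tilings(n - 1) / count_tilings(n):
            pieces.append("V")   # one vertical domino
            n -= 1
        else:
            pieces.append("HH")  # two stacked horizontal dominoes
            n -= 2
    return pieces
```

For example, `count_tilings(10)` returns 89, and each call to `sample_tiling(10)` draws one of those 89 tilings uniformly at random. For general m × n regions the same idea scales up via transfer-matrix (broken-profile) dynamic programming.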
Branched covers are a generalization of covering spaces, and they give rise to interesting questions in smooth as well as contact topology. All 3-manifolds arise as branched coverings of the 3-sphere. The talk will discuss the proof of this fact due to Montesinos, and will explore the work done towards understanding which contact 3-manifolds arise as branched covers of the standard tight 3-sphere, and how the branch set can be regulated.
The Caffarelli contraction theorem states that the Brenier map sending the Gaussian measure onto a uniformly log-concave probability measure is Lipschitz. In this talk, I will present a new proof, using entropic regularization and a variational characterization of Lipschitz transport maps. Based on joint work with Nathael Gozlan and Maxime Prod'homme.
The complex connectivity structure unique to the brain network is believed to underlie its robust and efficient coding capability. Specifically, neuronal networks at multiple scales utilize their structural complexities to achieve different computational goals. In this talk, I will discuss functional implications that can be inferred from the architecture of brain networks.
The first part of the talk will focus on a generalized problem of linking the structure and dynamics of the whole-brain network. By simulating large-scale brain dynamics using a data-driven network of phase oscillators, we show that the complexities added to the spatially embedded brain connectome by idiosyncratic long-range connections enable rapid transitions between local and global synchronization. In addition to this spatial dependence, I will also discuss the hierarchical structure of the brain network. Based on data-driven, layer-specific connectivity patterns, we developed an unsupervised method to find the hierarchical organization of the mouse cortical and thalamic network. The uncovered hierarchy provides insights into the direction of information flow in the mouse brain, which has been less well defined than in the primate brain.
Finally, I will discuss computational implications of the hierarchical organization of the brain network. I will focus on a specific type of computation, the discrimination of partially occluded objects, carried out by a small cortical circuit composed of an intermediate visual cortical area, V4, and its efferent prefrontal cortex. I will explore how distinct feedforward and feedback signals promote robust encoding of visual stimuli by leveraging predictive coding, a Bayesian inference theory of cortical computation that has been proposed as a way to create efficient neural codes. We implement a predictive coding model of V4 and prefrontal cortex to investigate possible computational roles of feedback signals in the visual system and their potential significance for the robust encoding of noisy visual stimuli.
In sum, our results reveal the close link between structural complexity and computational versatility found in brain networks, which may be useful for developing more efficient artificial neural networks and neuromorphic devices.
This talk concerns a naturally occurring family of Calabi-Yau manifolds that degenerates in the sense of metric geometry, algebraic geometry, and nonlinear PDE. A primary tool in analyzing their behavior is recently developed regularity theory. We will give a precise description of the singularities that arise and explain possible generalizations.
I will introduce an isoperimetric inequality for the Hamming cube and some of its applications. The applications include a “stability” version of Harper’s edge-isoperimetric inequality, which was first proved by Friedgut, Kalai and Naor for half cubes, and later by Ellis for subsets of any size. Our inequality also plays a key role in a recent result on the asymptotic number of maximal independent sets in the cube.
This is joint work with Jeff Kahn.
(This is a joint event of ACO Student Seminar and the Combinatorics Seminar Series)
In this talk we will prove a conjecture of Talagrand, which is a fractional version of the “expectation-threshold” conjecture of Kalai and Kahn. This easily implies various difficult results in probabilistic combinatorics, e.g. thresholds for perfect hypergraph matchings (Johansson-Kahn-Vu) and bounded-degree spanning trees (Montgomery). Our approach builds on recent breakthrough work of Alweiss, Lovett, Wu, and Zhang on the Erdős-Rado “Sunflower Conjecture.”
This is joint work with Keith Frankston, Jeff Kahn, and Bhargav Narayanan.
A group is said to be torsion-free if it has no elements of finite order. An example is the group, under composition, of self-homeomorphisms (continuous maps with continuous inverses) of the interval I = [0, 1] that are fixed on the boundary {0, 1}. In fact, this group has the stronger property of being left-orderable, meaning that its elements can be ordered in a way that is invariant under left-multiplication. If one restricts to piecewise-linear (PL) homeomorphisms, there exists a two-sided (bi-)ordering, an even stronger property of groups.
I will discuss joint work with Danny Calegari concerning groups of homeomorphisms of the cube [0, 1]^n fixed on the boundary. In the PL category, this group is left-orderable, but not bi-orderable, for all n > 1. I will also report on recent work of James Hyde showing that left-orderability fails for n > 1 in the topological category.
Note time and place of seminar
Two of the most basic questions in contact topology are which manifolds admit tight contact structures and, on those that do, whether we can classify such structures. In dimension 3, these questions have been answered for large classes of manifolds, with the notable absence of hyperbolic manifolds. In this talk, we will see a new classification of contact structures on a family of hyperbolic 3-manifolds arising from Dehn surgery on the figure-eight knot, and see how it suggests some structural results about tight contact structures. This is joint work with Hyunki Min.
Introduced by Hendricks and Manolescu in 2015, Involutive Heegaard Floer homology is a variation of the 3-manifold invariant Heegaard Floer homology which makes use of the conjugation symmetry of the Heegaard Floer complexes. This theory can be used to obtain two new invariants of homology cobordism. This talk will involve a brief overview of general Heegaard Floer homology, followed by a discussion of the involutive theory and some computations of the homology cobordism invariants.