On the relativistic Landau equation
- Series: PDE Seminar
- Time: Tuesday, September 24, 2019 - 15:00 for 1 hour (actually 50 minutes)
- Location: Skiles 006
- Speaker: Maja Taskovic – Emory University – maja.taskovic@emory.edu
In online optimization, the input instance is revealed in a sequence of steps, and after each step the algorithm must make an immediate and irrevocable decision based only on the inputs seen so far. Online algorithms thus produce a sequence of decisions without complete knowledge of the future. In the worst-case analysis of online optimization problems, it is sometimes impossible to achieve any bounded competitive ratio. An interesting way to bypass these impossibility results is to incorporate a stochastic component into the input model. In the random-order arrival model, the adversary specifies an input instance in advance, but the input is revealed according to a random permutation. The knapsack problem is one of the classical problems in combinatorial optimization: given a set of items, each specified by its size and profit, the goal is to find a maximum-profit packing into a knapsack of bounded capacity. The generalized assignment problem (GAP) includes, besides the knapsack problem, several important problems related to scheduling and matching. In this talk, we will present improved competitive algorithms under random-order arrival for both of these problems. This is joint work with Susanne Albers and Leon Ladewig.
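As a minimal illustration of the random-order model (not the algorithm from the talk; the function name and sampling fraction are invented for this sketch), here is the standard sample-then-threshold idea for online knapsack: the first arrivals are only observed, and later items are accepted greedily against the observed profit density.

```python
import random

def online_knapsack_random_order(items, capacity, sample_frac=0.5):
    """Toy sample-then-threshold heuristic for online knapsack under
    random-order arrival (illustrative only, not the talk's algorithm).

    items: list of (size, profit) pairs fixed by the adversary.
    """
    arrival = items[:]
    random.shuffle(arrival)          # the instance arrives in random order

    n = len(arrival)
    k = max(1, int(sample_frac * n))

    # Observation phase: reject the first k items, but record the best
    # profit density seen so far to use as an acceptance threshold.
    threshold = 0.0
    for size, profit in arrival[:k]:
        if size > 0:
            threshold = max(threshold, profit / size)

    # Selection phase: immediately and irrevocably accept any item that
    # beats the observed density and still fits.
    remaining, total, packed = capacity, 0.0, []
    for size, profit in arrival[k:]:
        if 0 < size <= remaining and profit / size >= threshold:
            packed.append((size, profit))
            remaining -= size
            total += profit
    return packed, total

# Example: 20 random items, capacity-10 knapsack.
random.seed(1)
items = [(random.uniform(0.5, 3.0), random.uniform(1.0, 10.0)) for _ in range(20)]
print(online_knapsack_random_order(items, capacity=10.0))
```

The random permutation is what makes the observation phase informative; against a worst-case arrival order, the same heuristic achieves no bounded competitive ratio.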
A knot is a smooth embedding of a circle into $\mathbb{R}^3$. Closely related are tangles, which are properly embedded arcs in a 3-ball. We will model certain tangles using jump ropes and relate this to Conway's classification of rational tangles.
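As a concrete companion to Conway's classification: every rational tangle is encoded by a sequence of twist counts, its fraction is the associated continued fraction, and by Conway's theorem two rational tangles are isotopic exactly when their fractions agree. A short sketch of the fraction computation (the function name and example sequences are my own):

```python
from fractions import Fraction

def tangle_fraction(twists):
    """Conway fraction of a rational tangle from its twist sequence
    [a1, a2, ..., an]: F = an + 1/(a_{n-1} + 1/(... + 1/a1)).
    Conway's theorem: two rational tangles are isotopic iff their
    fractions are equal."""
    f = Fraction(twists[0])
    for a in twists[1:]:
        f = a + 1 / f
    return f

# Two different-looking twist sequences with the same fraction (16/7),
# hence isotopic tangles:
print(tangle_fraction([2, 3, 2]))        # 16/7
print(tangle_fraction([-3, 2, -2, 3]))   # 16/7
```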
The space of degree-n complex polynomials with distinct roots appears frequently and naturally throughout mathematics, but its shape and structure could be better understood. In recent and ongoing joint work with Jon McCammond, we present a deformation retraction of this space onto a simplicial complex with rich structure given by the combinatorics of noncrossing partitions. In this talk, I will describe the deformation retraction and the resulting combinatorial data associated to each generic complex polynomial. We will also discuss a helpful comment from Daan Krammer which connects our work with the ideas of Bill Thurston and his collaborators.
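For a concrete handle on this space: a degree-$n$ polynomial has distinct roots exactly when its discriminant is nonzero, so the space in question is the complement of the discriminant hypersurface. A quick check of this criterion, assuming sympy (the examples are mine, not from the talk):

```python
import sympy as sp

z = sp.symbols('z')

# Distinct roots (0, 1, -1): the discriminant is nonzero.
p = z**3 - z
print(sp.discriminant(p, z))   # 4

# A repeated root at z = 1: the discriminant vanishes.
q = (z - 1)**2 * (z + 2)
print(sp.discriminant(q, z))   # 0
```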
Deep neural networks (DNNs) have revolutionized machine learning by gradually replacing traditional model-based algorithms with data-driven methods. While DNNs have proved very successful when large training sets are available, they typically have two shortcomings: first, when the training data are scarce, DNNs tend to suffer from overfitting; second, the generalization ability of overparameterized DNNs remains a mystery. In this talk, I will discuss two recent works that “inject” the “modeling” flavor back into deep learning to improve the generalization performance and interpretability of DNN models. This is accomplished by DNN regularization through applied differential geometry and harmonic analysis. In the first part of the talk, I will explain how to improve the regularity of the DNN representation by enforcing a low-dimensionality constraint on the data-feature concatenation manifold. In the second part, I will discuss how to impose scale-equivariance in the network representation by conducting joint convolutions across space and the scaling group. The stability of the equivariant representation under nuisance input deformations is also proved, under mild assumptions on the Fourier-Bessel norm of the filter expansion coefficients.
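As a rough sketch of the joint space-scale convolution mentioned in the second part (assuming PyTorch; random placeholder weights, and not the construction whose equivariance and stability are proved in the talk): the input is first "lifted" to a scale axis by applying one filter at several dilations, and a second convolution then runs along that axis.

```python
import torch
import torch.nn.functional as F

def scale_lifting_conv(x, weight, dilations=(1, 2, 4)):
    """Lift the input to a scale axis: convolve with the SAME spatial
    filter at several dilations (discrete scales) and stack the results.
    x: (batch, in_ch, H, W); weight: (out_ch, in_ch, k, k);
    returns (batch, out_ch, n_scales, H, W)."""
    outs = []
    for d in dilations:
        pad = d * (weight.shape[-1] // 2)        # keep H, W fixed
        outs.append(F.conv2d(x, weight, padding=pad, dilation=d))
    return torch.stack(outs, dim=2)

def conv_over_scale(y, scale_kernel):
    """Second half of the joint convolution: a 1-D convolution along the
    scale axis, shared across space and channels."""
    b, c, s, h, w = y.shape
    y = y.permute(0, 1, 3, 4, 2).reshape(b * c * h * w, 1, s)
    y = F.conv1d(y, scale_kernel, padding=scale_kernel.shape[-1] // 2)
    return y.reshape(b, c, h, w, s).permute(0, 1, 4, 2, 3)

x = torch.randn(1, 1, 32, 32)
w = torch.randn(8, 1, 3, 3)                      # random placeholder filter
lifted = scale_lifting_conv(x, w)                # (1, 8, 3, 32, 32)
k = torch.randn(1, 1, 3)                         # kernel along the scale axis
print(conv_over_scale(lifted, k).shape)          # torch.Size([1, 8, 3, 32, 32])
```

Rescaling the input by one of the chosen dilation factors approximately shifts the response along the scale axis; this is the equivariance property that the talk makes precise.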
The Jacobian Conjecture is a famous open problem in commutative algebra and algebraic geometry. Suppose you have a polynomial map $f:\mathbb{C}^n\to\mathbb{C}^n$. The Jacobian Conjecture asserts that if the Jacobian determinant of $f$ is a non-zero constant, then $f$ has a polynomial inverse. Because the conjecture is so easy to state, there have been many claimed proofs that turned out to be false. We will discuss some of these incorrect proofs, as well as several correct theorems relating to the Jacobian Conjecture.
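A quick sanity check of the statement on a classical triangular example (using sympy; this example is standard and not from the talk), where the polynomial inverse can be written down directly:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A polynomial map with constant Jacobian determinant ...
f = sp.Matrix([x + y**2, y])
print(f.jacobian(sp.Matrix([x, y])).det())            # 1, a non-zero constant

# ... which indeed has a polynomial inverse:
g = sp.Matrix([x - y**2, y])
print(g.subs({x: f[0], y: f[1]}, simultaneous=True))  # Matrix([[x], [y]])
```

This example is trivially invertible; the conjecture is that a non-zero constant Jacobian determinant forces such a polynomial inverse to exist in general.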
In this informal chat, I will introduce the braid group and several equivalent topological perspectives from which to view it. In particular, we will discuss the role that complex polynomials play in this setting, along with a few classical results.
In this talk we introduce a refined alteration approach for constructing $H$-free graphs: we show that removing all edges in $H$-copies of the binomial random graph does not significantly change the independence number (for suitable edge probabilities); previous alteration approaches of Erdős and Krivelevich remove only a subset of these edges. We present two applications to online graph Ramsey games of recent interest, deriving new bounds for Ramsey, Paper, Scissors games and online Ramsey numbers (in each case extending recent results of Fox–He–Wigderson and Conlon–Fox–Grinshpun–He). A toy version of the alteration appears after the attribution below.
Based on joint work with Lutz Warnke.
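To make the refined alteration concrete, here is a toy sketch for $H = K_3$ (assuming networkx; the parameters are illustrative, and the independence-number claim from the talk is not verified here): sample the binomial random graph and delete every edge lying in a triangle, which by construction leaves a triangle-free graph.

```python
import networkx as nx

def triangle_free_alteration(n, p, seed=0):
    """Toy alteration for H = K3: sample G(n,p) and delete EVERY edge
    lying in a triangle (the refined approach removes all such edges;
    the classical Erdős-style alteration removes only one edge per copy).
    Illustrative sketch; the talk's independence-number claim is not
    checked here (computing it exactly is NP-hard)."""
    G = nx.gnp_random_graph(n, p, seed=seed)
    bad = [(u, v) for u, v in G.edges()
           if any(w in G[v] for w in G[u])]    # u, v share a neighbor
    H = G.copy()
    H.remove_edges_from(bad)
    assert sum(nx.triangles(H).values()) == 0  # H is triangle-free
    return G, H

G, H = triangle_free_alteration(n=200, p=0.05)
print(G.number_of_edges(), "edges before,", H.number_of_edges(), "after")
```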
In deep generative models, the latent variable is generated by a time-inhomogeneous Markov chain: at each time step we pass the current state through a parametric nonlinear map, such as a feedforward neural net, and add a small independent Gaussian perturbation. In this talk, based on joint work with Belinda Tzen, I will discuss the diffusion limit of such models, where we increase the number of layers while sending the step size and the noise variance to zero. The resulting object is described by a stochastic differential equation in the sense of Itô. I will first show that sampling in such generative models can be phrased as a stochastic control problem (revisiting classic results of Föllmer and Dai Pra) and then build on this formulation to quantify the expressive power of these models. Specifically, I will prove that one can efficiently sample from a wide class of terminal target distributions by choosing the drift of the latent diffusion from the class of multilayer feedforward neural nets, with the accuracy of sampling measured by the Kullback-Leibler divergence to the target distribution.
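A minimal numpy sketch of the layered latent chain described above, read as an Euler-Maruyama discretization of an Itô SDE (random placeholder weights and illustrative dimensions; not the authors' construction):

```python
import numpy as np

def mlp_drift(x, params):
    """Tiny feedforward net used as the per-layer nonlinear map
    (weights are random placeholders, not trained)."""
    W1, b1, W2, b2 = params
    return W2 @ np.tanh(W1 @ x + b1) + b2

def sample_latent_chain(x0, layer_params, step, noise_std, rng):
    """Time-inhomogeneous Markov chain
        x_{k+1} = x_k + step * f_k(x_k) + noise_std * N(0, I),
    i.e. the Euler-Maruyama scheme for dX_t = f(X_t, t) dt + sigma dW_t
    when noise_std = sigma * sqrt(step)."""
    x = x0
    for params in layer_params:      # a different map f_k at each layer
        x = x + step * mlp_drift(x, params) + noise_std * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
dim, hidden, layers = 2, 16, 100
step = 1.0 / layers                  # many layers <-> small time steps
sigma = 1.0
layer_params = [
    (rng.standard_normal((hidden, dim)) / np.sqrt(dim), np.zeros(hidden),
     rng.standard_normal((dim, hidden)) / np.sqrt(hidden), np.zeros(dim))
    for _ in range(layers)
]
print(sample_latent_chain(np.zeros(dim), layer_params, step,
                          sigma * np.sqrt(step), rng))
```

Increasing the number of layers while scaling the noise variance with the step size, as in the sketch, is exactly the limiting regime in which the chain converges to the latent diffusion discussed in the talk.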