Minimum Norm Interpolation Meets The Local Theory of Banach Spaces
- Series
- Stochastics Seminar
- Time
- Thursday, October 3, 2024 - 15:30 for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Gil Kur – ETH – gilkur1990@gmail.com
A harmonic function of two variables is the real or imaginary part of an analytic function. A harmonic function of $n$ variables is a function $u$ satisfying
$$
\frac{\partial^2 u}{\partial x_1^2}+\ldots+\frac{\partial^2 u}{\partial x_n^2}=0.
$$
We will first recall some basic results on harmonic functions: the mean value property, the maximum principle, the Liouville theorem, the Harnack inequality, the Bôcher theorem, capacity, and removable singularities. We will then present a number of more recent results on some conformally invariant elliptic and degenerate elliptic equations arising from conformal geometry. These include results on Liouville theorems, Harnack inequalities, and Bôcher theorems.
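For concreteness, the first property on this list can be stated as follows: if $u$ is harmonic on a domain containing the closed ball $\overline{B}(x,r)$, then
$$
u(x)=\frac{1}{|\partial B(x,r)|}\int_{\partial B(x,r)} u\, dS,
$$
that is, $u(x)$ equals the average of $u$ over every sphere centered at $x$ and contained in the domain.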
The Poincaré metric on the unit disc $\mathbb{D} \subset \mathbb{C}$, known for its invariance under all biholomorphisms (bijective holomorphic maps) of $\mathbb{D}$, is one of the most fundamental Riemannian metrics in differential geometry.
In this presentation, we will first introduce the Bergman metric on a bounded domain in $\mathbb{C}^n$, which can be viewed as a generalization of the Poincaré metric. We will then explore some key theorems that illustrate how the curvature of the Bergman metric characterizes bounded domains in $\mathbb{C}^n$ and, more generally, complex manifolds. Finally, I will discuss my recent work related to these concepts.
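As background (the notation here is ours, not the abstract's): for a bounded domain $\Omega \subset \mathbb{C}^n$, let $K_\Omega(z,w)$ denote the Bergman kernel, the reproducing kernel of the space of square-integrable holomorphic functions on $\Omega$. The Bergman metric is then
$$
g_{i\bar{j}}(z) = \frac{\partial^2}{\partial z_i \partial \bar{z}_j} \log K_\Omega(z,z),
$$
and on the unit disc it recovers the Poincaré metric up to a constant factor.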
Given $x \in [0,1]^d$, this talk is about the fine-scale distribution of the Kronecker sequence $(nx \bmod 1)_{n \geq 1}$. After a general introduction, I will report on forthcoming work with Sam Chow. Using Fourier analysis, we establish a novel deterministic analogue of Beck's local-to-global principle (Ann. of Math. 1994), which relates the discrepancy of a Kronecker sequence to multiplicative diophantine approximation. This opens up a new avenue of attack for Littlewood's conjecture.
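For reference (notation added here, not in the abstract): writing $\|t\|$ for the distance from $t$ to the nearest integer, Littlewood's conjecture asserts that for all $\alpha, \beta \in \mathbb{R}$,
$$
\liminf_{n \to \infty}\, n\,\|n\alpha\|\,\|n\beta\| = 0,
$$
while the discrepancy mentioned above measures, uniformly over boxes $B \subseteq [0,1]^d$, the gap between the proportion of the points $nx \bmod 1$, $n \leq N$, landing in $B$ and the volume of $B$.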
A hollow vortex is a region of constant pressure bounded by a vortex sheet and suspended inside a perfect fluid; we can think of it as a spinning bubble of air in water. In this talk, we present a general method for desingularizing non-degenerate steady point vortex configurations into collections of steady hollow vortices. The machinery simultaneously treats the translating, rotating, and stationary regimes. Through global bifurcation theory, we further obtain maximal curves of solutions that continue until the onset of a singularity. As specific examples, we obtain the first existence theory for co-rotating hollow vortex pairs and stationary hollow vortex tripoles, as well as a new construction of Pocklington’s classical co-translating hollow vortex pairs. All of these families extend into the non-perturbative regime, and we obtain a rather complete characterization of the limiting behavior along the global bifurcation curve. This is a joint work with Samuel Walsh (Missouri) and Miles Wheeler (Bath).
The Goldberg-Seymour Conjecture asserts that if the chromatic index $\chi'(G)$ of a loopless multigraph $G$ exceeds $\Delta(G)+1$, where $\Delta(G)$ is its maximum degree, then it must be equal to another well-known lower bound $\Gamma(G)$, defined as
$$
\Gamma(G) = \max\left\{\left\lceil \frac{2|E(H)|}{|V(H)|-1}\right\rceil \ : \ H \subseteq G \text{ and } |V(H)| \text{ odd}\right\}.
$$
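To illustrate the definition (an example added here for concreteness, not taken from the abstract): let $G$ be the triangle in which every edge has multiplicity $2$, so $|V(G)|=3$, $|E(G)|=6$, and $\Delta(G)=4$. Every matching of $G$ uses at most one edge, hence $\chi'(G)=6>\Delta(G)+1=5$, while taking $H=G$ gives $\Gamma(G)=\lceil 12/2\rceil=6$, so indeed $\chi'(G)=\Gamma(G)$.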
In this talk, we will outline a short proof, obtained recently with Hao, Yu, and Zang.
The Transformer architecture (Vaswani et al. 2017) is a popular deep learning architecture that today forms the foundation for most tasks in natural language processing and the backbone of current state-of-the-art language models. Central to its success is the attention mechanism, which allows the model to weigh the importance of different input tokens. However, Transformers can become computationally expensive, especially for large-scale tasks. To address this, researchers have explored techniques for conditional computation, which selectively activate parts of the model based on the input. In this talk, we present two case studies of conditional computation in Transformer models. In the first case, we examine the routing mechanism in Mixture-of-Experts (MoE) Transformer models, and show theoretical and empirical evidence that the router's ability to route intelligently confers a significant advantage to MoE models. In the second case, we introduce Alternating Updates (AltUp), a method to take advantage of increased residual stream width in Transformer models without increasing the computation cost.
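To make the routing idea concrete, here is a minimal numpy sketch of top-1 token-to-expert routing in an MoE layer; the shapes, names, and toy experts below are our own illustrative assumptions, not the models studied in the talk.

```python
# Minimal sketch of top-1 routing in a toy Mixture-of-Experts layer.
# Shapes and "experts" are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 16, 4, 8

tokens = rng.normal(size=(n_tokens, d_model))      # token representations
router_w = rng.normal(size=(d_model, n_experts))   # learned routing matrix (toy)
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]  # toy experts

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# The router scores each token against each expert; only the top-1 expert is
# activated per token, so compute scales with the chosen experts, not all of them.
gates = softmax(tokens @ router_w)                 # (n_tokens, n_experts)
choice = gates.argmax(axis=1)                      # index of the selected expert per token

output = np.zeros_like(tokens)
for e in range(n_experts):
    idx = np.where(choice == e)[0]
    if idx.size:                                   # dispatch only the routed tokens
        output[idx] = (tokens[idx] @ experts[e]) * gates[idx, e:e + 1]

print(choice, output.shape)
```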
About the speaker: Xin Wang is a research engineer on the Algorithms team at Google Research. He completed his PhD in Mathematics at the Georgia Institute of Technology before joining Google. His research interests include efficient computing, memory mechanisms for machine learning, and optimization.
The talk will be presented online at
We revisit the classical problem of constructing a developable surface along a given Frenet curve $\gamma$ in space. First, we generalize a well-known formula, introduced in the literature by Sadowsky in 1930, for the Willmore energy of the rectifying developable of $\gamma$ to any (infinitely narrow) flat ribbon along the same curve. Then we apply the direct method of the calculus of variations to show the existence of a flat ribbon along $\gamma$ having minimal bending energy. Joint work with Simon Blatt.
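For context (a formula recalled here, not stated in the abstract): if $\gamma$ has curvature $\kappa$ and torsion $\tau$, Sadowsky's 1930 functional expresses the limiting bending energy of the infinitely narrow rectifying developable along $\gamma$ as, up to a constant factor,
$$
\int_\gamma \frac{\left(\kappa^2+\tau^2\right)^2}{\kappa^2}\, ds .
$$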
Please Note: This talk starts at 1pm rather than the usual time.
The late Goro Shimura proposed a question regarding certain invariant differential operators on a Hermitian symmetric space. This was answered by Sahi and Zhang by showing that the Harish-Chandra images of these namesake operators are specializations of Okounkov's BC-symmetric interpolation polynomials. We prove, in the super setting, that the Harish-Chandra images of super Shimura operators are specializations of certain BC-supersymmetric interpolation polynomials due to Sergeev and Veselov. Similar questions include the Capelli eigenvalue problems, which have been generalized to the quantum and/or super settings. This talk is based on joint work with Siddhartha Sahi.
The ($2$-dimensional) assignment problem is to find, in an edge weighted bipartite graph, an assignment (i.e., a perfect matching) of minimum total weight. Efficient algorithms for this problem have been known since the advent of modern algorithmic analysis. Moreover, if the edge weights are i.i.d. Exp(1) random variables and the host graph is complete bipartite, seminal results of Aldous state that the expected weight of the optimal assignment tends to $\zeta(2)$.
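As a quick illustration of the $\zeta(2)$ limit (a toy Monte Carlo check of our own, not part of the talk), one can solve random instances with a standard assignment solver and compare the average optimal cost to $\zeta(2)=\pi^2/6 \approx 1.645$:

```python
# Toy Monte Carlo check: the optimal cost of the 2-dimensional random assignment
# problem with i.i.d. Exp(1) weights concentrates near zeta(2) = pi^2/6 for large n.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, trials = 200, 20
costs = []
for _ in range(trials):
    M = rng.exponential(scale=1.0, size=(n, n))    # i.i.d. Exp(1) edge weights
    rows, cols = linear_sum_assignment(M)          # minimum-weight perfect matching
    costs.append(M[rows, cols].sum())

print(f"average optimal cost: {np.mean(costs):.3f}  (zeta(2) = {np.pi**2 / 6:.3f})")
```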
We consider high-dimensional versions of the random assignment problem. Here, we are given a cost array $M$, indexed by $[n]^k$, with i.i.d. Exp(1) entries. The objective is to find a $\{0,1\}$-matrix $A$ that minimizes $\sum_{x \in [n]^k} A_x M_x$, subject to the constraint that every axis-parallel line in $A$ contains exactly one $1$. This is the planar assignment problem, and when $k=2$ it is equivalent to the usual random assignment problem. We prove that the expected cost of an optimal assignment is $\Theta(n^{k-2})$. Moreover, we describe a randomized algorithm that finds such an assignment with high probability. The main tool is iterative absorption, as developed by Glock, Kühn, Lo, and Osthus. The results answer questions of Frieze and Sorkin. The algorithmic result is in contrast to the axial assignment problem (in which $A$ contains exactly one $1$ in each axis-parallel co-dimension $1$ hyperplane). For the latter, the best known bounds (which are due to Frankston, Kahn, Narayanan, and Park) exploit the connection between ``spread'' distributions and optimal assignments. Due to this reliance, no efficient algorithm is known.
Joint work with Ashwin Sah and Mehtaab Sawhney.