Seminars and Colloquia by Series

Balian-Low theorems for subspaces

Series
Analysis Seminar
Time
Tuesday, September 29, 2020 - 14:00 for 1 hour (actually 50 minutes)
Location
online seminar
Speaker
Andrei Caragea, Katholische Universität Eichstätt-Ingolstadt
The Balian-Low theorem is a classical result in time-frequency analysis that describes a trade-off between the basis properties of a Gabor system and the smoothness and decay of the Gabor window.
In particular, a Gabor system with a well-localized window cannot be a Riesz basis for the space of finite-energy signals.
We explore several generalizations of this fact in the setting of Riesz bases for subspaces of L^2, and we show that invariance of the Gabor space under additional time-frequency shifts is incompatible with two different notions of smoothness and decay for the Gabor window.
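For context, the classical Balian-Low theorem mentioned above can be stated as follows (a standard textbook formulation, not taken verbatim from the talk):

```latex
% Classical Balian-Low theorem: if the Gabor system at critical density
%   G(g) = \{ e^{2\pi i m x} g(x - n) : m, n \in \mathbb{Z} \}
% is a Riesz basis for L^2(\mathbb{R}), then
\left( \int_{\mathbb{R}} x^2 \, |g(x)|^2 \, dx \right)
\left( \int_{\mathbb{R}} \xi^2 \, |\hat{g}(\xi)|^2 \, d\xi \right) = \infty,
% i.e. the window g cannot be simultaneously well localized
% in both time and frequency.
```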

A different approach to endpoint weak-type estimates for Calderón-Zygmund operators

Series
Analysis Seminar
Time
Tuesday, September 15, 2020 - 14:00 for 1 hour (actually 50 minutes)
Location
https://us02web.zoom.us/j/87104893132
Speaker
Cody Stockdale, Clemson

The weak-type (1,1) estimate for Calderón-Zygmund operators is fundamental in harmonic analysis. We investigate weak-type inequalities for Calderón-Zygmund singular integral operators using the Calderón-Zygmund decomposition and ideas inspired by Nazarov, Treil, and Volberg. We discuss applications of these techniques in the Euclidean setting, in weighted settings, for multilinear operators, for operators with weakened smoothness assumptions, and in studying the dimensional dependence of the Riesz transforms.
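For reference, the weak-type (1,1) estimate discussed in the abstract is the following standard bound:

```latex
% Weak-type (1,1) estimate for an operator T: for all \lambda > 0,
\left| \{ x \in \mathbb{R}^n : |Tf(x)| > \lambda \} \right|
\le \frac{C}{\lambda} \, \| f \|_{L^1(\mathbb{R}^n)},
% a substitute for L^1 boundedness, which generally fails
% for Calderon-Zygmund singular integral operators.
```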

Integral neural networks with weight penalization

Series
Analysis Seminar
Time
Tuesday, September 1, 2020 - 14:00 for 1 hour (actually 50 minutes)
Location
https://us02web.zoom.us/j/87104893132
Speaker
Armenak Petrosyan, Georgia Tech

Artificial neural networks have gained widespread adoption as a powerful tool for various machine learning tasks in recent years. Training a neural network to approximate a target function involves solving an inherently non-convex problem; in practice, this is done using stochastic gradient descent with random initialization. For the approximation problem with neural networks, error-rate guarantees have been established for different classes of functions; however, these rates are not always achieved in practice due to the many local minima of the resulting optimization problem.

The challenge we address in this work is the following: we want to find small shallow neural networks that can be trained algorithmically and that achieve guaranteed approximation speed and precision. To keep the networks small, we apply penalties on the weights of the network. We show that, under minimal requirements, all local minima of the resulting problem are well behaved and possess a desirably small size without sacrificing precision. We adopt the integral neural network framework and use techniques from optimization theory and harmonic analysis to prove our results. In this talk, we will discuss our existing work and promising areas where this approach can potentially be adopted in the future.
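To make the weight-penalization idea concrete, here is a minimal numerical sketch (not the speaker's construction; all names and hyperparameters are illustrative assumptions): a shallow ReLU network is fit to a target function by plain gradient descent, with an l1 penalty on the outer weights encouraging a small effective network size.

```python
# Minimal sketch: shallow ReLU network with an l1 weight penalty,
# trained by gradient descent. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Target function on [0, 1] and a training grid.
x = np.linspace(0.0, 1.0, 64)
y = np.sin(2 * np.pi * x)

# Shallow network: f(x) = sum_k c_k * relu(w_k * x + b_k).
width = 20
w = rng.normal(size=width)
b = rng.normal(size=width)
c = rng.normal(size=width) * 0.1
lam = 1e-3   # strength of the l1 penalty on the outer weights c
lr = 1e-2    # gradient-descent step size

def forward(x, w, b, c):
    h = np.maximum(0.0, np.outer(x, w) + b)   # (n, width) hidden layer
    return h @ c, h

def penalized_loss(x, y, w, b, c, lam):
    pred, _ = forward(x, w, b, c)
    return np.mean((pred - y) ** 2) + lam * np.sum(np.abs(c))

loss0 = penalized_loss(x, y, w, b, c, lam)
for _ in range(500):
    pred, h = forward(x, w, b, c)
    r = 2 * (pred - y) / len(x)               # d(mse)/d(pred)
    grad_c = h.T @ r + lam * np.sign(c)       # includes penalty subgradient
    mask = (h > 0).astype(float)              # relu derivative
    grad_w = ((r[:, None] * mask * c) * x[:, None]).sum(axis=0)
    grad_b = (r[:, None] * mask * c).sum(axis=0)
    c -= lr * grad_c
    w -= lr * grad_w
    b -= lr * grad_b

loss1 = penalized_loss(x, y, w, b, c, lam)
```

The penalty term lam * sum(|c_k|) drives unneeded outer weights toward zero, which is one simple mechanism for keeping the trained network small.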

Cancelled

Series
Analysis Seminar
Time
Wednesday, April 22, 2020 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker

TBA by Vlad Yaskin

Series
Analysis Seminar
Time
Wednesday, April 8, 2020 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Vlad Yaskin, University of Alberta

TBA

Essentially coercive forms and asymptotically compact semigroups

Series
Analysis Seminar
Time
Wednesday, March 11, 2020 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Isabelle Chalendar, Université Paris-Est - Marne-la-Vallée

Form methods are among the most efficient tools for proving generation theorems for semigroups, and also for proving self-adjointness. So far these theorems are based on a coercivity notion that allows the use of the Lax-Milgram lemma. Here we consider weaker "essential" versions of coercivity which already suffice to obtain the generator of a semigroup S or a self-adjoint operator. We also show that one of these properties, namely essential positive coercivity, implies a very special asymptotic behaviour of S, namely asymptotic compactness, i.e., that dist(S(t), K(H)) → 0 as t → ∞, where K(H) denotes the space of all compact operators on the underlying Hilbert space.
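For comparison, the classical coercivity hypothesis underlying the Lax-Milgram lemma reads as follows (standard formulation; the "essential" versions in the talk weaken this condition):

```latex
% A sesquilinear form a : V \times V \to \mathbb{C} on a Hilbert
% space V is coercive if there exists \alpha > 0 such that
\operatorname{Re} a(u, u) \ \ge\ \alpha \, \| u \|_V^2
\qquad \text{for all } u \in V.
% Coercivity yields, via Lax-Milgram, an associated operator that
% generates a semigroup (or is self-adjoint, in the symmetric case).
```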

Stable phase retrieval for infinite dimensional subspaces of L_2(R)

Series
Analysis Seminar
Time
Wednesday, March 4, 2020 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Daniel Freeman, St. Louis University

The problem of phase retrieval for a set of functions H can be thought of as being able to identify a function f ∈ H or −f ∈ H from the absolute value |f|. Phase retrieval for a set of functions is called stable if whenever |f| and |g| are close, f is proportionally close to g or −g. That is, we say that a set H ⊆ L^2(R) does stable phase retrieval if there exists a constant C > 0 so that
min(||f − g||_{L^2(R)}, ||f + g||_{L^2(R)}) ≤ C || |f| − |g| ||_{L^2(R)} for all f, g ∈ H.
It is known that phase retrieval for finite-dimensional spaces is always stable. On the other hand, phase retrieval for infinite-dimensional spaces using a frame or a continuous frame is always unstable. We prove that there exist infinite-dimensional subspaces of L^2(R) which do stable phase retrieval. This is joint work with Robert Calderbank, Ingrid Daubechies, and Nikki Freeman.
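The stability inequality above can be illustrated numerically in a simple discretized setting (this sketch is illustrative only, not from the talk): on a one-dimensional subspace H = { t·v : t ∈ R } of R^n, every f ∈ H is determined up to sign by |f|, and the inequality holds with constant C = 1, which we check for random pairs.

```python
# Sketch: check min(||f - g||, ||f + g||) <= C * || |f| - |g| ||
# with C = 1 on a one-dimensional subspace H = span{v} of R^16.
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=16)          # fixed direction spanning H

def sides(f, g):
    lhs = min(np.linalg.norm(f - g), np.linalg.norm(f + g))
    rhs = np.linalg.norm(np.abs(f) - np.abs(g))
    return lhs, rhs

ok = True
for _ in range(1000):
    t, s = rng.normal(size=2)    # f = t*v and g = s*v lie in H
    lhs, rhs = sides(t * v, s * v)
    ok = ok and (lhs <= rhs + 1e-9)   # tolerance for float rounding
```

For f = t·v and g = s·v one has min(|t − s|, |t + s|) = ||t| − |s||, so both sides agree exactly on this subspace; the interesting question in the talk is which infinite-dimensional subspaces admit such a uniform constant.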

Geometric averaging operators and point configurations

Series
Analysis Seminar
Time
Wednesday, February 26, 2020 - 13:55 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Eyvindur Ari PalssonVirginia Tech

Two classic questions -- the Erdős distinct distance problem, which asks for the least number of distinct distances determined by N points in the plane, and its continuous analog, the Falconer distance problem -- both focus on the distance, which is a simple two-point configuration. When studying the Falconer distance problem, a geometric averaging operator, namely the spherical averaging operator, arises naturally. Questions similar to the Erdős distinct distance problem and the Falconer distance problem can also be posed for more complicated patterns such as triangles, which can be viewed as 3-point configurations. In this talk I will give a brief introduction to the motivating point configuration questions and then report on some novel geometric averaging operators and their mapping properties.
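The spherical averaging operator mentioned above has the following standard form:

```latex
% Spherical average of f over the unit sphere S^{d-1}:
A f(x) = \int_{S^{d-1}} f(x - \omega) \, d\sigma(\omega),
% where \sigma is the normalized surface measure on S^{d-1}.
% Its L^p mapping properties control Falconer-type distance problems;
% averages over other point configurations generalize this operator.
```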
