We consider stationary monotone mean-field games (MFGs) and study the existence of weak solutions. We introduce a regularized problem that preserves the monotonicity and prove the existence of solutions to the regularized problem. Next, using Minty's method, we establish the existence of solutions for the original MFGs. Finally, we examine the properties of these weak solutions in several examples.
Wednesday, January 18, 2017 - 12:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Michael Damron – Georgia Institute of Technology
On the two-dimensional square grid, remove each
nearest-neighbor edge independently with probability 1/2 and consider
the graph induced by the remaining edges. What is the structure of its
connected components? It is a famous theorem of Kesten that 1/2 is the
"critical value." In other words, if we remove edges with probability
p \in [0,1], then for p < 1/2 there is an infinite component remaining,
and for p > 1/2 there is no infinite component. We will
describe some of the differences between these phases in terms of crossings
of large boxes: for p < 1/2 there are relatively straight crossings of
large boxes; for p = 1/2 there are crossings, but they are very
circuitous; and for p > 1/2 there are no crossings.
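The model above is concrete enough to simulate. The following sketch estimates, by Monte Carlo with a union-find structure, the probability of a left-right crossing of an n x n box when each edge is removed independently with probability p; the box size, trial counts, and p-values are our own illustrative choices, not from the talk.

```python
# Monte Carlo sketch: probability of a left-right crossing of an n x n box
# when each grid edge is removed independently with probability p (kept with
# probability 1 - p). Parameters are illustrative choices, not the talk's.
import random

def find(parent, x):
    """Find the root of x with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    """Merge the components containing a and b."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[ra] = rb

def has_crossing(n, p, rng):
    """True if the kept edges connect the left side of the box to the right."""
    parent = list(range(n * n + 2))       # n*n sites plus two virtual nodes
    LEFT, RIGHT = n * n, n * n + 1
    idx = lambda r, c: r * n + c
    for r in range(n):
        union(parent, LEFT, idx(r, 0))            # left column touches LEFT
        union(parent, idx(r, n - 1), RIGHT)       # right column touches RIGHT
        for c in range(n):
            if c + 1 < n and rng.random() >= p:   # keep horizontal edge
                union(parent, idx(r, c), idx(r, c + 1))
            if r + 1 < n and rng.random() >= p:   # keep vertical edge
                union(parent, idx(r, c), idx(r + 1, c))
    return find(parent, LEFT) == find(parent, RIGHT)

rng = random.Random(0)
for p in (0.3, 0.5, 0.7):
    hits = sum(has_crossing(30, p, rng) for _ in range(200))
    print(p, hits / 200)
```

Running this shows the phase transition described above: crossings are nearly certain for p well below 1/2 and nearly absent for p well above it.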
Wednesday, January 18, 2017 - 14:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Chris Heil – Georgia Tech
The Linear Independence of Time-Frequency Translates Conjecture,
also known as the HRT conjecture, states that any finite set of
time-frequency translates of a given $L^2$ function must be linearly
independent. This conjecture, which was first stated in print in 1996,
remains open today. We will discuss this conjecture, its relation to
the Zero Divisor Conjecture in abstract algebra, and the (frustratingly
few) partial results that are currently available.
The long-term behavior of dynamical systems can be understood by studying invariant manifolds that act as landmarks anchoring the orbits. It is important to understand which invariant manifolds persist under modifications of the system. A deep mathematical theory, developed in the 1970s, shows that the invariant manifolds that persist under changes are those with sharp expansion (in the future or in the past) in the normal directions. A deep question is what happens at the boundary of these persistence theorems. Answering it requires understanding the interplay between the geometric properties and the functional analysis of the functional equations involved.

In this talk we present several mechanisms by which the properties of normal hyperbolicity degenerate, leading to the breakdown of the invariant manifold. Numerical studies lead to surprising conjectures relating the breakdown to phenomena in phase transitions. The results have been obtained by combining numerical exploration and rigorous reasoning.
High-dimensional data arise in many fields of contemporary science and introduce new challenges in statistical learning due to the well-known curse of dimensionality. Many data sets in image analysis and signal processing are in a high-dimensional space but exhibit a low-dimensional structure. We are interested in building efficient representations of these data for the purpose of compression and inference, and in giving performance guarantees that are only cursed by the intrinsic dimension of the data.

Specifically, in the setting where a data set in $R^D$ consists of samples from a probability measure concentrated on or near an unknown $d$-dimensional manifold with $d$ much smaller than $D$, we consider two sets of problems: low-dimensional geometric approximation to the manifold and regression of a function on the manifold. In the first case we construct multiscale low-dimensional empirical approximations to the manifold and give finite-sample performance guarantees. In the second case we exploit these empirical geometric approximations of the manifold to construct multiscale approximations to the function. We prove finite-sample guarantees showing that we attain the same learning rates as if the function were defined on a Euclidean domain of dimension $d$. In both cases our approximations can adapt to the regularity of the manifold or the function even when this varies at different scales or locations. All algorithms have complexity $C n\log (n)$, where $n$ is the number of samples and the constant $C$ is linear in $D$ and exponential in $d$.
Consider the following solitaire game on a graph. Given a chip configuration on the node set V, a move consists of taking a subset U of nodes and sending one chip from U to V\U along each edge of the cut determined by U. A starting configuration is winning if for every node there exists a sequence of moves that allows us to place at least one chip on that node. The (divisorial) gonality of a graph is defined as the minimum number of chips in a winning configuration. This notion belongs to the Baker-Norine divisor theory on graphs and can be seen as a combinatorial analog of gonality for algebraic curves. In this talk we will show that the gonality is lower bounded by the tree-width and, if time permits, that the parameter is NP-hard to compute. We will conclude with some open problems.
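The move of the solitaire game above is easy to make concrete. The sketch below implements a single firing of a subset U (one chip crosses each cut edge from U to V\U); the example graph (a 4-cycle) and starting configuration are our own illustration, not from the talk.

```python
# Sketch of one move in the chip-firing game: firing a subset U sends one
# chip from U to V \ U along every edge of the cut determined by U.
# The example graph and configuration below are illustrative choices.

def fire_subset(chips, edges, U):
    """Return the chip configuration after the subset U fires once."""
    new = dict(chips)
    for a, b in edges:
        if a in U and b not in U:     # edge crosses the cut, a on the U side
            new[a] -= 1
            new[b] += 1
        elif b in U and a not in U:   # edge crosses the cut, b on the U side
            new[b] -= 1
            new[a] += 1
    return new

# A 4-cycle 0-1-2-3-0 with two chips on node 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
chips = {0: 2, 1: 0, 2: 0, 3: 0}
after = fire_subset(chips, edges, {0})
print(after)   # {0: 0, 1: 1, 2: 0, 3: 1}
```

Note that a move conserves the total number of chips; only their placement changes, which is why the gonality is well-defined as a minimum chip count.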
Thursday, January 19, 2017 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Dave Goldberg – ISyE, GaTech
Demand forecasting plays an important role in many inventory control
problems. To mitigate the potential harms of model misspecification, various
forms of distributionally robust optimization have been applied. Although
many of these methodologies suffer from the problem of time-inconsistency,
the work of Klabjan et al. established a general time-consistent framework
for such problems by connecting to the literature on robust Markov decision
processes.
Motivated by the fact that many forecasting models exhibit very special
structure, as well as a desire to understand the impact of positing
different dependency structures, in this talk we formulate and solve a
time-consistent distributionally robust multi-stage newsvendor model which
naturally unifies and robustifies several inventory models with demand
forecasting. In particular, many simple models of demand forecasting have
the feature that demand evolves as a martingale (i.e. expected demand
tomorrow equals realized demand today). We consider a robust variant of such
models, in which the sequence of future demands may be any martingale with
given mean and support. Under such a model, past realizations of demand are
naturally incorporated into the structure of the uncertainty set going
forwards.
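The martingale property of demand mentioned above (expected demand tomorrow equals realized demand today) can be illustrated with a minimal simulation. The two-point dynamics and all parameters below are our own toy example, not the paper's construction: at each step demand either jumps to the top of its support or drops to zero, with probabilities chosen so the conditional mean is preserved.

```python
# Toy illustration of a demand martingale on the support {0, M}: from level d,
# demand jumps to M with probability d/M and to 0 otherwise, so that
# E[D_{t+1} | D_t = d] = M * (d/M) = d. This is an illustrative example only,
# not the model or worst-case distribution from the talk.
import random

def simulate(d0, M, steps, rng):
    """Run one demand path for the given number of steps."""
    d = d0
    for _ in range(steps):
        d = M if rng.random() < d / M else 0.0   # conditional mean stays d
    return d

rng = random.Random(0)
paths = [simulate(40.0, 100.0, 5, rng) for _ in range(20000)]
print(sum(paths) / len(paths))   # close to 40.0, the initial demand
```

The empirical mean of the terminal demand stays near the initial level, as the martingale property requires, even though every individual path ends at an extreme of the support.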
We explicitly compute the minimax optimal policy (and worst-case
distribution) in closed form, by combining ideas from convex analysis,
probability, and dynamic programming. We prove that at optimality the
worst-case demand distribution corresponds to the setting in which inventory
may become obsolete at a random time, a scenario of practical interest. To
gain further insight, we prove weak convergence (as the time horizon grows
large) to a simple and intuitive process. We also compare to the analogous
setting in which demand is independent across periods (analyzed previously
by Shapiro), and identify interesting differences between these models, in
the spirit of the price of correlations studied by Agrawal et al.
This is joint work with Linwei Xin, and the paper is available on arXiv at
https://arxiv.org/abs/1511.09437v1
Friday, January 20, 2017 - 03:05 for 1 hour (actually 50 minutes)
Location
Skiles 254
Speaker
Álex Haro – Univ. of Barcelona
We will design a method to compute invariant tori in Hamiltonian systems
through the computation of invariant tori for time-$T$ maps. We will also
consider isoenergetic cases (i.e., fixing the energy).
Friday, January 20, 2017 - 15:05 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Dion Gijswijt – TU Delft
A subset of $\mathbb{F}_3^n$ is called a \emph{cap set} if it does not contain three vectors that sum to zero. In dimension four, this relates to the famous card game SET: a cap set corresponds to a collection of cards without a SET. The cap set problem is concerned with upper bounds on the size of cap sets. The central question, raised by Frankl, Graham and R\"odl, is: do cap sets have exponentially small density? In May of last year, this question was (unexpectedly) resolved in a pair of papers, by Croot, Lev, and Pach and by Ellenberg and myself, via a new application of the polynomial method. The proof is surprisingly short and simple.
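The cap set condition above can be checked directly by brute force for small examples. The sketch below tests whether a set of vectors in $\mathbb{F}_3^n$ contains three distinct vectors summing to zero coordinate-wise mod 3 (in the SET-card picture with n = 4, such a triple is exactly a SET); the example vectors are our own illustration.

```python
# Brute-force check of the cap set condition: no three distinct vectors in
# the set may sum to the zero vector coordinate-wise mod 3. Example vectors
# are illustrative choices.
from itertools import combinations

def is_cap_set(vectors):
    """True if no three distinct vectors sum to zero mod 3 in each coordinate."""
    for u, v, w in combinations(vectors, 3):
        if all((a + b + c) % 3 == 0 for a, b, c in zip(u, v, w)):
            return False
    return True

print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)]))   # True: a cap in F_3^2
print(is_cap_set([(0, 0), (1, 1), (2, 2)]))           # False: the three sum to zero
```

This exhaustive check is only feasible for tiny sets; the point of the talk is the very different, global question of how large a cap set can be as the dimension grows.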