
Series: Mathematical Biology Seminar

Individual chemical reactions between molecules are inherently stochastic, although for a large collection of molecules, the overall system behavior may appear to be deterministic. When deterministic chemical reaction models are sufficient to describe the behavior of interest, they provide a compact description of the chemistry. However, in other cases, these mass-action kinetics models are not applicable, such as when the number of molecules of a particular type is small, or when no closed-form expressions exist to describe the dynamic evolution of overall system properties. The former case is common in biological systems, such as intracellular reactions. The latter case may occur in either small or large systems, due to a lack of smoothness in the reaction rates. In both cases, kinetic Monte Carlo simulations are a useful tool to predict the evolution of overall system properties of interest. In this talk, an approach will be presented for generating approximate low-order dynamic models from kinetic Monte Carlo simulations. The low-order model describes the dynamic evolution of several expected properties of the system, and thus is not a stochastic model. The method is demonstrated using a kinetic Monte Carlo simulation of atomic cluster formation on a crystalline surface. The extremely high dimension of the molecular state is reduced using linear and nonlinear principal component analysis, and the state space is discretized using clustering, via a self-organizing map. The transitions between the discrete states are then computed using short runs of the kinetic Monte Carlo simulation. These transitions may depend on external control inputs; in this application, we use dynamic programming to compute the optimal trajectory of gallium flux to achieve a desired surface structure.
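The stochastic chemistry described in the abstract is what kinetic Monte Carlo methods simulate. As a minimal sketch (my own toy example, not the speaker's surface-growth model), here is the Gillespie stochastic simulation algorithm applied to a single dimerization reaction A + A -> A2, with an arbitrary rate constant and copy number:

```python
import random

def gillespie_dimerization(a0, k, t_end, seed=0):
    """Gillespie stochastic simulation of the dimerization A + A -> A2.

    a0: initial copy number of A; k: stochastic rate constant.
    Returns the trajectory as a list of (time, copy number) pairs.
    """
    rng = random.Random(seed)
    t, a = 0.0, a0
    traj = [(t, a)]
    while a >= 2:
        propensity = k * a * (a - 1) / 2   # k times the number of distinct A pairs
        t += rng.expovariate(propensity)   # exponentially distributed waiting time
        if t > t_end:
            break
        a -= 2                             # one reaction consumes two A molecules
        traj.append((t, a))
    return traj

traj = gillespie_dimerization(a0=100, k=0.01, t_end=1000.0)
```

Averaging many such trajectories recovers the deterministic mass-action behavior when copy numbers are large; single trajectories show the fluctuations that matter at small copy numbers.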

Series: PDE Seminar

Landau damping is a collisionless stability result of considerable importance in plasma physics, as well as in galactic dynamics. Roughly speaking, it says that spatial waves are damped in time (very rapidly) by purely conservative mechanisms, on a time scale much shorter than that of collisional effects. We shall present in this talk a recent work (joint with C. Villani) which provides the first positive mathematical result for this effect in the nonlinear regime, and qualitatively explains its robustness over extremely long time scales. A physical introduction and implications will also be discussed.

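A minimal numerical illustration of the damping mechanism (linear phase mixing for free transport, my own toy computation, not the nonlinear result of the talk): for f(t, x, v) = f0(x - v*t, v) with initial data f0(x, v) = (1 + eps*cos(k*x)) times a standard Gaussian in v, the k-th spatial Fourier mode of the density decays like exp(-(k*t)**2/2), even though nothing is dissipated:

```python
import numpy as np

def density_mode(t, k=1, eps=0.1, nv=4001, vmax=10.0):
    """|k-th spatial Fourier mode| of the density for free transport.

    With f0(x, v) = (1 + eps*cos(k*x)) * g(v), g a standard Gaussian, the
    solution f(t, x, v) = f0(x - v*t, v) has density mode
    (eps/2) * integral g(v) * exp(-i*k*v*t) dv = (eps/2) * exp(-(k*t)**2 / 2).
    """
    v = np.linspace(-vmax, vmax, nv)
    g = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)
    mode = 0.5 * eps * np.sum(g * np.exp(-1j * k * v * t)) * (v[1] - v[0])
    return abs(mode)

# the wave amplitude shrinks rapidly under purely conservative dynamics
amps = [density_mode(t) for t in (0.0, 1.0, 2.0, 3.0)]
```

Oscillations in v are averaged out over time (phase mixing); the hard part of the theorem is showing the mechanism survives the nonlinear coupling.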

Series: Research Horizons Seminar

Hosted by: Huy Huynh and Yao Li

One of the basic problems arising in many pure and applied areas of mathematics is to solve a system of polynomial equations. Numerical Algebraic Geometry starts with addressing this fundamental problem and develops machinery to describe higher-dimensional solution sets (varieties) with approximate data. I will introduce numerical polynomial homotopy continuation, a technique that is radically different from the classical symbolic approaches, as it is powered by (inexact) numerical methods.

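To give a flavor of the technique (a toy univariate sketch with a hypothetical `track_root` helper, not the speaker's software): homotopy continuation deforms a start system with known roots into the target system, tracking each root numerically with a predictor-corrector scheme:

```python
def track_root(p, dp, d, z0, steps=200):
    """Track a root of H(z, t) = (1-t)*(z**d - 1) + t*p(z) from t=0 to t=1.

    p, dp: target polynomial and its derivative; d: its degree;
    z0: a d-th root of unity, i.e. a root of the start system z**d - 1 = 0.
    Euler predictor followed by a few Newton corrector steps per increment.
    """
    z, dt = complex(z0), 1.0 / steps
    for i in range(steps):
        t = i * dt
        dH_dt = p(z) - (z**d - 1)                     # partial of H in t
        dH_dz = (1 - t) * d * z**(d - 1) + t * dp(z)  # partial of H in z
        z = z - dt * dH_dt / dH_dz                    # Euler predictor
        t += dt
        for _ in range(3):                            # Newton corrector
            H = (1 - t) * (z**d - 1) + t * p(z)
            dH_dz = (1 - t) * d * z**(d - 1) + t * dp(z)
            z = z - H / dH_dz
    return z

# target system: p(z) = z**2 - 2, tracked from the roots of z**2 - 1
p, dp = (lambda z: z**2 - 2), (lambda z: 2 * z)
roots = [track_root(p, dp, 2, z0) for z0 in (1.0, -1.0)]
```

The arithmetic is inexact floating point throughout, which is exactly the point of contrast with symbolic elimination methods.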

Series: Other Talks

Concrete optimization problems, while often nonsmooth, are not pathologically so. The class of "semi-algebraic" sets and functions - those arising from polynomial inequalities - nicely exemplifies nonsmoothness in practice. Semi-algebraic sets (and their generalizations) are common, easy to recognize, and richly structured, supporting powerful variational properties. In particular, I will discuss a generic property of such sets - partial smoothness - and its relationship with a proximal algorithm for nonsmooth composite minimization, a versatile model for practical optimization.

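As a concrete instance of nonsmooth composite minimization (a standard textbook example, not taken from the talk): minimizing a smooth least-squares term plus an l1 penalty with the proximal gradient method, where the proximal map of the l1 norm is soft-thresholding:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal map of lam*||x||_1, i.e. soft-thresholding (closed form)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_gradient(A, b, lam, iters=500):
    """Minimize (1/2)*||A@x - b||**2 + lam*||x||_1 by proximal gradient steps."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth part
        x = prox_l1(x - step * grad, step * lam)
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([3.0, 0.1, 3.1])
x = prox_gradient(A, b, lam=0.5)  # the l1 term zeroes out the small coefficient
```

The l1 norm is semi-algebraic and partly smooth relative to the set of vectors with a fixed support, which is why the iterates identify the final sparsity pattern in finitely many steps.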

Series: Other Talks

The purpose of this talk is to highlight some versions of the Krein-Rutman theorem which have been widely and deeply applied in many fields (e.g., mathematical analysis, geometric analysis, physical sciences, transport theory, and information sciences). These versions are motivated by optimization theory, perturbation theory, bifurcation theory, etc., and give rise to some simple but useful comparison methods in ordered Banach spaces, such as the Dodds-Fremlin theorem and the De Pagter theorem.

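In finite dimensions, with the cone of nonnegative vectors, the Krein-Rutman theorem reduces to the Perron-Frobenius theorem. A minimal numerical sketch of that special case (my own illustration, not from the talk):

```python
import numpy as np

def leading_positive_eigenpair(A, iters=200):
    """Power iteration for a matrix with strictly positive entries.

    Perron-Frobenius (Krein-Rutman with the cone of nonnegative vectors)
    guarantees a simple leading eigenvalue with a positive eigenvector.
    """
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x = x / np.linalg.norm(x)  # renormalize to avoid overflow
    lam = x @ A @ x                # Rayleigh quotient (x is a unit vector)
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = leading_positive_eigenpair(A)  # lam = (5 + sqrt(5))/2, v > 0
```

The infinite-dimensional versions discussed in the talk replace the positive matrix with a compact positive operator on an ordered Banach space.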

Series: Algebra Seminar

An orbitope is the convex hull of an orbit of a compact group acting linearly on a vector space. Instances of these highly symmetric convex bodies have appeared in many areas of mathematics and its applications, including protein reconstruction, symplectic geometry, and calibrations in differential geometry. In this talk, I will discuss orbitopes from the perspectives of classical convexity, algebraic geometry, and optimization, with an emphasis on motivating questions and concrete examples. This is joint work with Raman Sanyal and Bernd Sturmfels.

Series: Analysis Seminar

We discuss joint work with J.-M. Martell, in which we revisit the "extrapolation method" for Carleson measures, originally introduced by John Lewis to prove A_\infty estimates for certain caloric measures, and we present a purely real-variable version of the method. Our main result is a general criterion for deducing that a weight satisfies a Reverse Hölder estimate, given appropriate control by a Carleson measure. To illustrate the use of this technique, we reprove a well-known theorem of R. Fefferman, Kenig and Pipher concerning the solvability of the Dirichlet problem with data in some L^p space.

Monday, April 5, 2010 - 13:00,
Location: Skiles 255,
Jianfeng Cai,
Dept. of Math., UCLA,
Organizer: Haomin Zhou

A tight frame is a generalization of an orthonormal basis. It inherits most of the good properties of an orthonormal basis but, due to its redundancy, gains robustness in representing signals of interest. One can construct tight frame systems under which signals of interest have sparse representations. Such tight frames include translation-invariant wavelets, framelets, curvelets, etc. The sparsity of a signal under tight frame systems has three different formulations, namely, the analysis-based sparsity, the synthesis-based one, and the balanced one between them. In this talk, we discuss Bregman algorithms for finding signals that are sparse under tight frame systems with the above three formulations. Applications of our algorithms include image inpainting, deblurring, blind deconvolution, and cartoon-texture decomposition. Finally, we apply the linearized Bregman method, one of the Bregman algorithms, to solve the problem of matrix completion, where we want to find a low-rank matrix from its incomplete entries. We view the low-rank matrix as a sparse vector under an adaptive linear transformation which depends on its singular vectors. This leads to a singular value thresholding (SVT) algorithm.
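A simplified sketch of singular value thresholding for matrix completion (my own illustration with arbitrary parameter choices; it omits the linearized Bregman refinements discussed in the talk):

```python
import numpy as np

def svt_complete(M, mask, tau=2.0, step=1.2, iters=500):
    """Matrix completion by singular value thresholding (simplified sketch).

    M: matrix whose entries are known only where mask is True.
    Each iteration soft-thresholds the singular values of the running
    iterate Y, then pushes the result back toward the observed entries.
    """
    Y = np.zeros_like(M)
    X = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y = Y + step * mask * (M - X)            # correct on observed entries
    return X

rng = np.random.default_rng(0)
u, v = rng.standard_normal(8), rng.standard_normal(8)
M = np.outer(u, v)                # a rank-one matrix to recover
mask = rng.random(M.shape) < 0.7  # about 70% of entries observed
X = svt_complete(M, mask)
```

The singular value shrinkage step is the matrix analogue of soft-thresholding a sparse vector, reflecting the view of a low-rank matrix as sparse under a transformation adapted to its singular vectors.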

Series: Combinatorics Seminar

I will divide the talk between two topics. The first is Stirling numbers of the second kind, $S(n,k)$. For each $n$ the maximum $S(n,k)$ is achieved either at a unique $k=K_n$, or is achieved twice consecutively at $k=K_n,K_n+1$. Call those $n$ of the second kind {\it exceptional}. Is $n=2$ the only exceptional integer? The second topic is $m\times n$ nonnegative integer matrices all of whose rows sum to $s$ and all of whose columns sum to $t$, $ms=nt$. We have an asymptotic formula for the number of these matrices, valid for various ranges of $(m,s;n,t)$. Although obtained by a lengthy calculation, the final formula is succinct and has an interesting probabilistic interpretation. The work presented here is collaborative with Carl Pomerance and Brendan McKay, respectively.
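The first topic is easy to explore computationally. A short sketch (my own, with hypothetical helper names) that builds rows of $S(n,k)$ from the standard recurrence and locates the maximizing $k$:

```python
def stirling2_row(n):
    """Row [S(n,0), ..., S(n,n)] of Stirling numbers of the second kind,
    via the recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    row = [1]  # S(0,0) = 1
    for _ in range(n):
        prev = row + [0]
        row = [0] + [k * prev[k] + prev[k - 1] for k in range(1, len(prev))]
    return row

def K(n):
    """Smallest k at which S(n,k) is maximal."""
    row = stirling2_row(n)
    return row.index(max(row))

# n = 2 is exceptional: S(2,1) = S(2,2) = 1, so the maximum is attained twice
row2 = stirling2_row(2)
maxima = [k for k, s in enumerate(row2) if s == max(row2)]
```

Checking small ranges of $n$ this way finds no exceptional integer other than $n=2$, which is what makes the question in the abstract natural.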

Series: Probability Working Seminar

It is well known that isoperimetric-type inequalities can imply concentration inequalities, but the reverse is generally not true. However, E. Milman and M. Ledoux recently proved that, under some convexity assumption on the Ricci curvature, the reverse is true in the Riemannian manifold setting. In this talk, we will focus on the semigroup tools in their papers. First, we introduce some classical methods for obtaining concentration inequalities, i.e., from isoperimetric inequalities, Poincaré inequalities, log-Sobolev inequalities, and transportation inequalities. Second, using semigroup tools, we will prove certain concentration inequalities, which then imply linear isoperimetry and super-isoperimetry.