
Series: PDE Seminar

In this talk we will discuss some recent existence and regularity results for equilibrium configurations of epitaxially strained crystalline films.

Series: Analysis Seminar

In any standard course of Analytical Mechanics students are indoctrinated that a Lagrangian has a profound physical meaning (kinetic energy minus potential energy) and that Lagrangians do not exist in the case of nonconservative systems. We present an old and regretfully forgotten method of Jacobi which allows one to find many nonphysical Lagrangians of simple physical models (e.g., the harmonic oscillator) and also of nonconservative systems (e.g., the damped oscillator). The same method can be applied to any second-order equation, and extended to fourth-order equations as well as systems of second and first order. Examples from Physics, Number Theory and Biology will be provided.
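The damped-oscillator case can be made concrete. The exponentially weighted Lagrangian below is a standard example of such a nonphysical Lagrangian (the Jacobi method of the talk is more general, and the parameter values here are purely illustrative); this sketch checks numerically that its Euler-Lagrange equation reproduces the damped oscillator.

```python
import math

# Nonphysical Lagrangian for the damped oscillator  x'' + g x' + w^2 x = 0:
#   L(x, v, t) = exp(g t) * (v^2 - w^2 x^2) / 2
# (not "kinetic minus potential" energy, yet Euler-Lagrange gives the equation).
g, w = 0.3, 2.0
wd = math.sqrt(w**2 - g**2 / 4.0)     # damped frequency (underdamped case)

def x(t):
    # exact underdamped solution
    return math.exp(-g * t / 2.0) * math.cos(wd * t)

def dLdv(t, h=1e-5):
    # conjugate momentum dL/dv = exp(g t) * x'(t), with x' by central difference
    v = (x(t + h) - x(t - h)) / (2.0 * h)
    return math.exp(g * t) * v

def el_residual(t, h=1e-4):
    # Euler-Lagrange residual  d/dt(dL/dv) - dL/dx, which should vanish on x(t)
    dpdt = (dLdv(t + h) - dLdv(t - h)) / (2.0 * h)
    dLdx = -math.exp(g * t) * w**2 * x(t)
    return dpdt - dLdx

for t in (0.5, 1.0, 2.0):
    print(t, el_residual(t))   # near zero, up to finite-difference error
```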

Series: CDSNS Colloquium

The connection between transport barriers and potential vorticity (PV) barriers in PV-conserving flows is investigated with a focus on zonal jets in planetary atmospheres. A perturbed PV-staircase model is used to illustrate important concepts. This flow consists of a sequence of narrow eastward and broad westward zonal jets with a staircase PV structure; the PV-steps are at the latitudes of the cores of the eastward jets. Numerically simulated solutions to the quasigeostrophic PV conservation equation in a perturbed PV-staircase flow are presented. These simulations reveal that both eastward and westward zonal jets serve as robust meridional transport barriers. The surprise is that westward jets, across which the background PV gradient vanishes, serve as robust transport barriers. A theoretical explanation of the underlying barrier mechanism is provided, which relies on recent results relating to the stability of degenerate Hamiltonians under perturbation. It is argued that transport barriers near the cores of westward zonal jets, across which the background PV gradient is small, are found in Jupiter's midlatitude weather layer and in the Earth's summer hemisphere subtropical stratosphere.

Series: PDE Seminar

Optimization problems with PDE constraints are commonly solved in different areas of science and engineering. In this talk we give an introduction to this field. In particular we discuss discretization techniques and effective linear and nonlinear solvers. Examples are given from inverse problems in electromagnetics.

Series: Research Horizons Seminar

* Dr. Trotter: perspective of the hiring committee with an emphasis on research universities.

* Dr. Carroll: perspective of the applicant with an emphasis on liberal arts universities.

* Dr. Dieci: other advice, including non-academic routes.

Series: Graph Theory Seminar

The problem of generating random integral tables from the set of all nonnegative integral tables with fixed marginals is of importance in statistics. The Diaconis-Sturmfels algorithm for this problem performs a random walk on the set of such tables. The moves in the walk are referred to as Markov bases and correspond to generators of a certain toric ideal. When only one- and two-way marginals are considered, one can naturally associate a graph to the model. In this talk, I will present a characterization of all graphs for which the corresponding toric ideal can be generated in degree four, answering a question of Develin and Sullivant. I will also discuss some related open questions and demonstrate applications of the Four Color theorem and results on clean triangulations of surfaces, providing partial answers to these questions. Based on joint work with Daniel Kral and Ondrej Pangrac.
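For the simplest case of a two-way table with fixed row and column sums, the Markov basis consists of the 2x2 "swap" moves, and the Diaconis-Sturmfels walk can be sketched as follows (the table entries and number of steps are illustrative, not from the talk):

```python
import random

def markov_basis_step(table, rng):
    # One Diaconis-Sturmfels move on a two-way table: pick two rows and two
    # columns and add the pattern [[+1, -1], [-1, +1]]; reject the move if any
    # entry would go negative.  Row and column sums are preserved by design.
    r, c = len(table), len(table[0])
    i1, i2 = rng.sample(range(r), 2)
    j1, j2 = rng.sample(range(c), 2)
    if table[i1][j2] > 0 and table[i2][j1] > 0:
        table[i1][j1] += 1; table[i2][j2] += 1
        table[i1][j2] -= 1; table[i2][j1] -= 1

rng = random.Random(0)
table = [[3, 1, 4], [2, 2, 0]]        # marginals: rows (8, 4), columns (5, 3, 4)
for _ in range(1000):
    markov_basis_step(table, rng)
print(table)                          # a random table with the same marginals
```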

Series: Stochastics Seminar

We consider a random field of tensor product type X and investigate the quality of approximation (both in the average and in the probabilistic sense) to X by processes of rank n minimizing the quadratic approximation error. The most interesting results are obtained for the case when the dimension of the parameter set tends to infinity. Call "cardinality" the minimal n providing a given level of approximation accuracy. By applying the Central Limit Theorem to the (deterministic) array of covariance eigenvalues, we show that, for any fixed level of relative error, this cardinality increases exponentially (a phenomenon often called "intractability" or "dimension curse") and find the explosion coefficient. We also show that the behavior of the probabilistic and average cardinalities is essentially the same in a large domain of parameters.

Series: Combinatorics Seminar

Let f be a polynomial or multilinear form in a large number of variables. A basic question we can ask about f is how dispersed it becomes as the number of variables increases. To be more concrete: If we randomly (and independently) set each variable to be either 1 or -1, what is the largest concentration of the output of f on any single value, assuming all (or most) of the coefficients of f are nonzero? Can we somehow describe the structure of those forms which are close to having maximal concentration? If f is a linear polynomial, this is a question originally examined by Littlewood and Offord and answered by Erdos: The maximal concentration occurs when all the nonzero coefficients of f are equal. Here we will consider the case where f is a bilinear or quadratic form.
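The linear (Littlewood-Offord) case is easy to explore by exhaustive enumeration; a small sketch with illustrative coefficient vectors:

```python
from itertools import product
from collections import Counter

def max_concentration(coeffs):
    # Largest probability that the +-1 linear form  sum e_i a_i  takes any
    # single value, computed by enumerating all 2^n sign patterns.
    counts = Counter(sum(e * a for e, a in zip(signs, coeffs))
                     for signs in product((-1, 1), repeat=len(coeffs)))
    return max(counts.values()) / 2 ** len(coeffs)

# Erdos: equal nonzero coefficients maximize the concentration, C(n, n/2)/2^n.
print(max_concentration([1] * 10))                   # = C(10,5)/2^10
print(max_concentration([2**i for i in range(10)]))  # all sums distinct: 1/2^10
```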

Series: PDE Seminar

Shear flow instability is a classical problem in hydrodynamics. In particular, it is important for understanding the transition from laminar to turbulent flow. First, I will describe some results on shear flow instability in the setting of inviscid flows bounded by a rigid wall. Then the effects of a free surface (or water waves) and viscosity will be discussed.

Series: Mathematical Biology Seminar

In the ocean, coherent vortices account for a large portion of the ocean turbulent kinetic energy and their presence significantly affects the dynamics and the statistical properties of ocean flows, with important consequences on transport processes. Mesoscale vortices also affect the population dynamics of phyto- and zooplankton, and are associated with secondary currents responsible for localized vertical fluxes of nutrients. The fact that the nutrient fluxes have a fine spatial and temporal detail, generated by the eddy field, has important consequences on primary productivity, and the horizontal velocity field induced by the eddies has been suggested to play an important role in determining plankton patchiness. Owing to their trapping properties, vortices can also act as shelters for temporarily less-favoured planktonic species. In this contribution, I will review some of the transport properties associated with coherent vortices and their impact on the dynamics of planktonic ecosystems, focusing on the simplified conceptual model provided by two-dimensional turbulence.

Series: Research Horizons Seminar

I will explain and prove a beautiful and useful theorem of Alon and Tarsi that uses multivariate polynomials to guarantee, under suitable hypotheses, the existence of a coloring of a graph. The proof method, sometimes called a Combinatorial Nullstellensatz, has other applications in graph theory, combinatorics and number theory.
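A small computational sketch of the mechanism (my own example, not necessarily the one from the talk): expanding the graph polynomial and checking that the coefficient of x_0 x_1 x_2 x_3 for the 4-cycle is nonzero, which by the Combinatorial Nullstellensatz certifies colorability from any lists of size two, while for the triangle the corresponding coefficient vanishes.

```python
from collections import defaultdict

def graph_poly_coeffs(n, edges):
    # Expand the graph polynomial  prod_{(i,j) in E} (x_i - x_j)  factor by
    # factor; keys are exponent tuples (e_0, ..., e_{n-1}).
    coeffs = defaultdict(int)
    coeffs[(0,) * n] = 1
    for (i, j) in edges:
        new = defaultdict(int)
        for expo, c in coeffs.items():
            e1 = list(expo); e1[i] += 1
            new[tuple(e1)] += c      # choose  x_i  from this factor
            e2 = list(expo); e2[j] += 1
            new[tuple(e2)] -= c      # choose -x_j  from this factor
        coeffs = new
    return coeffs

# 4-cycle: nonzero coefficient on x_0 x_1 x_2 x_3  =>  2-list-colorable.
# Triangle: the corresponding coefficient is 0 (it needs lists of size 3).
c4 = graph_poly_coeffs(4, [(0, 1), (1, 2), (2, 3), (0, 3)])
k3 = graph_poly_coeffs(3, [(0, 1), (1, 2), (0, 2)])
print(c4[(1, 1, 1, 1)], k3[(1, 1, 1)])
```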

Series: Stochastics Seminar

A common subsequence of two sequences X and Y is a sequence which is a subsequence of X as well as a subsequence of Y. A Longest Common Subsequence (LCS) of X and Y is a common subsequence with maximal length. Longest common subsequences can be represented as alignments with gaps, where the aligned letter pairs correspond to the letters in the LCS. We consider two independent i.i.d. random texts X and Y of length n. We show that the behavior of the alignment corresponding to the LCS is very different depending on the number of colors: with two colors, long blocks tend to be aligned with no gaps, whilst for four or more colors the opposite is true. Let Ln denote the length of the LCS of X and Y. In general the order of the variance of Ln is not known. We explain how a biased effect of a finite pattern can influence the order of the fluctuation of Ln.
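The LCS itself is computed by a classical dynamic program; the sketch below (sample sizes and alphabet sizes are illustrative) also shows the empirical effect that a 2-letter alphabet yields a relatively longer LCS than a 4-letter one:

```python
import random

def lcs_length(x, y):
    # Classical O(len(x)*len(y)) dynamic program:
    # L[i][j] = length of a longest common subsequence of x[:i] and y[:j].
    L = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, a in enumerate(x, 1):
        for j, b in enumerate(y, 1):
            L[i][j] = L[i-1][j-1] + 1 if a == b else max(L[i-1][j], L[i][j-1])
    return L[-1][-1]

rng = random.Random(0)
n = 400
ratio = {}
for k in (2, 4):   # alphabet size ("number of colors")
    x = [rng.randrange(k) for _ in range(n)]
    y = [rng.randrange(k) for _ in range(n)]
    ratio[k] = lcs_length(x, y) / n
print(ratio)       # the 2-color LCS is relatively longer than the 4-color one
```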

Series: Combinatorics Seminar

Let A be a set of n real numbers. A central problem in additive combinatorics, due to Erdos and Szemeredi, is that of showing that either the sumset A+A or the product set A.A must have close to n^2 elements. G. Elekes, in a short and brilliant paper, showed that one can give quite good bounds for this problem by invoking the Szemeredi-Trotter incidence theorem (applied to the grid (A+A) x (A.A)). Perhaps motivated by this result, J. Solymosi posed the following problem (actually, Solymosi's original problem is slightly different from the formulation I am about to give). Show that for every real c > 0, there exists 0 < d < 1, such that the following holds for all grids A x B with |A| = |B| = n sufficiently large: If one has a family of n^c lines in general position (no three meet at a point, no two parallel), at least one of them must fail to be n^(1-d)-rich -- i.e. at least one of them meets the grid in fewer than n^(1-d) points. In this talk I will discuss a closely related result that Evan Borenstein and I have proved, and will perhaps discuss how we think we can use it to polish off this conjecture of Solymosi.
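The tension behind the Erdos-Szemeredi problem is easy to see computationally: a progression that compresses one of the two sets blows the other up. A brute-force sketch (set sizes are illustrative):

```python
def sumset_productset_sizes(A):
    # |A+A| and |A.A| by brute force.
    return (len({a + b for a in A for b in A}),
            len({a * b for a in A for b in A}))

n = 50
ap = list(range(1, n + 1))       # arithmetic progression: sumset is tiny
gp = [2**k for k in range(n)]    # geometric progression: product set is tiny
print(sumset_productset_sizes(ap))   # (2n-1, many)
print(sumset_productset_sizes(gp))   # (many, 2n-1)
```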

Series: Geometry Topology Seminar

The hyperbolic volume and the colored Jones polynomial are two of the most powerful invariants in knot theory. In this talk we aim to extend these invariants to arbitrary graphs embedded in 3-space. This provides new tools for studying questions about graph embedding and it also sheds some new light on the volume conjecture. According to this conjecture, the Jones polynomial and the volume of a knot are intimately related. In some special cases we will prove that this still holds true in the case of graphs.

Series: CDSNS Colloquium

Consider the classical Newtonian three-body problem. Call motions oscillatory if, as time tends to infinity, the limsup of the maximal distance among the bodies is infinite, while the liminf is finite. In the '50s Sitnikov gave the first rigorous example of oscillatory motions for the so-called restricted three-body problem. Later in the '60s Alexeev extended this example to the three-body problem. A long-standing conjecture, probably going back to Kolmogorov, is that oscillatory motions have measure zero. We show that for the Sitnikov example and for the so-called restricted planar circular three-body problem these motions have maximal Hausdorff dimension. This is a joint work with Anton Gorodetski.

Series: PDE Seminar

A longstanding problem in the mathematical theory of elasticity is to predict theories of lower-dimensional objects (such as rods, plates or shells), subject to mechanical deformations, starting from the 3d nonlinear theory. For plates, a recent effort (in particular work by Friesecke, James and Muller) has led to rigorous justification of a hierarchy of such theories (membrane, Kirchhoff, von Karman). For shells, despite extensive use of their ad-hoc generalizations present in the engineering applications, much less is known from the mathematical point of view. In this talk, I will discuss the limiting behaviour (using the notion of Gamma-limit) of the 3d nonlinear elasticity for thin shells around an arbitrary smooth 2d mid-surface S. We prove that the minimizers of the 3d elastic energy converge, after suitable rescaling, to minimizers of a hierarchy of shell models. The limiting functionals (which for plates yield respectively the von Karman, linear, or linearized Kirchhoff theories) are intrinsically linked with the geometry of S. They are defined on the space of infinitesimal isometries of S (which replaces the 'out-of-plane-displacements' of plates), and the space of finite strains (which replaces strains of the `in-plane-displacements'), thus clarifying the effects of rigidity of S on the derived theories. The different limiting theories correspond to different magnitudes of the applied forces, in terms of the shell thickness. This is joint work with M. G. Mora and R. Pakzad.

Series: Mathematical Biology Seminar

The evolution of sociality represented one of the major transition points in biological history. Highly social animals such as social insects dominate ecological communities because of their complex cooperative and helping behaviors. We are interested in understanding how evolutionary processes affect social systems and how sociality, in turn, affects the course of evolution. Our research focuses on understanding the social structure and mating biology of social insects. In addition, we are interested in the process of development in the context of sociality. We have found that some social insect females mate with multiple males, and that this behavior affects the structure of colonies. We have also found that colonies adjust their reproductive output in a coordinated and adaptive manner. Finally, we are investigating the molecular basis underlying the striking differences between queens and workers in highly social insects. Overall, our research provides insight into the function and evolutionary success of highly social organisms.

Series: Research Horizons Seminar

A plasma is a gas of ionized particles. For a dilute plasma of very high temperature, the collisions can be ignored. Such situations occur, for example, in nuclear fusion devices and space plasmas. The Vlasov-Poisson and Vlasov-Maxwell equations are kinetic models for such collisionless plasmas. The Vlasov-Poisson equation is also used for galaxy evolution. I will describe some mathematical results on these models, including well-posedness and stability issues.

Series: ACO Student Seminar

In order to estimate the spread of potential pandemic diseases and the efficiency of various containment policies, it is helpful to have an accurate model of the structure of human contact networks. The literature contains several explicit and implicit models, but none behave like actual network data with respect to the spread of disease. We discuss the difficulty of modeling real human networks, motivate the study of some open practical questions about network structure, and suggest some possible avenues of attack based on some related research.

Series: Stochastics Seminar

Under certain conditions, we obtain exact asymptotic expressions for the stationary distribution \pi of a Markov chain. In this talk, we will consider Markov chains on {0,1,...}^2. We are particularly interested in deriving asymptotic expressions when the fluid limit of the most probable paths from the origin to the rare event are nonlinear. For example, we will derive asymptotic expressions for a large deviation along the x-axis (e.g., \pi(\ell, y) for fixed y) when the most probable paths to (\ell,y) initially climb the y-axis before turning southwest and drifting towards (\ell,y).
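In one dimension the simplest instance of such exact asymptotics is a birth-death chain, whose stationary distribution is exactly geometric, so the large-deviation decay rate is log(p/q); the 2-d setting of the talk, with nonlinear optimal paths, is much harder. A minimal numerical sketch (rates are illustrative):

```python
p, q = 0.3, 0.5   # up / down probabilities (illustrative), p < q
rho = p / q
N = 60            # truncate the state space to {0, ..., N}

def step(pi):
    # One step of the distribution under the birth-death transition kernel.
    new = [0.0] * (N + 1)
    for l, m in enumerate(pi):
        up = p if l < N else 0.0
        down = q if l > 0 else 0.0
        new[l] += m * (1.0 - up - down)
        if l < N:
            new[l + 1] += m * up
        if l > 0:
            new[l - 1] += m * down
    return new

pi = [1.0] + [0.0] * N    # start at the origin
for _ in range(5000):
    pi = step(pi)
print([round(pi[l] / pi[0], 6) for l in range(5)])   # approaches rho**l
```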

Friday, September 12, 2008 - 14:00 ,
Location: Skiles 269 ,
Stavros Garoufalidis ,
School of Mathematics, Georgia Tech ,
Organizer: Stavros Garoufalidis

We will discuss, with examples, the Jones polynomial of the two simplest knots (the trefoil and the figure eight) and its loop expansion.

Series: Combinatorics Seminar

Let K^r_{r+1} denote the complete r-graph on r+1 vertices. The Turan density of K^r_{r+1} is the largest number t such that there are infinitely many K^r_{r+1}-free r-graphs with edge density t-o(1). Determining t(K^r_{r+1}) for r > 2 is a famous open problem of Turan. The best upper bound for even r, t(K^r_{r+1})\leq 1-1/r, was given by de Caen and Sidorenko. In a joint work with Linyuan Lu, we slightly improve it. For example, we show that t(K^r_{r+1})\leq 1 - 1/r - 1/(2r^3) for r=4 mod 6. One of our lemmas also leads to an exact result for hypergraphs. Given r > 2, let p be the smallest prime factor of r-1. Every r-graph on n > r(p-1) vertices such that every r+1 vertices contain 0 or r edges must be empty or a complete star.

Monday, September 15, 2008 - 13:00 ,
Location: Skiles 255 ,
Peijun Li ,
Department of Mathematics, Purdue University ,
Organizer: Haomin Zhou

Near-field optics has developed dramatically in recent years due to the possibility of breaking the diffraction limit and obtaining subwavelength resolution. Broadly speaking, near-field optics concerns phenomena involving evanescent electromagnetic waves, to which the super-resolving capability of near-field optics may be attributed. In order to theoretically understand the physical mechanism of this capability, it is desirable to accurately solve the underlying scattering problem in near-field optics. We propose an accurate global model for one of the important experimental modes of near-field optics, photon scanning tunneling microscopy, and develop a coupling of finite element and boundary integral methods for its numerical solution. Numerical experiments will be presented to illustrate the effectiveness of the proposed method and to show the features of wave propagation in the photon scanning tunneling microscope. The proposed model and developed method have no limitations on the optical or geometrical parameters of probe and sample; they can be used for realistic simulations of various near-field microscope configurations.

Series: Geometry Topology Seminar

The Dehn function of a finitely presented group measures the difficulty in filling loops in the presentation complex of the group. Higher dimensional Dehn functions are a natural generalization: the n-dimensional Dehn function of a group captures the difficulty of filling n-spheres with (n+1)-balls in suitably defined complexes associated with the group. A fundamental question in the area is that of determining which functions arise as Dehn functions. I will give an overview of known results and describe recent progress in the 2-dimensional case. This is joint work with Josh Barnard and Noel Brady.

Series: Analysis Seminar

Variable transformations are used to enhance the normally poor performance of trapezoidal rule approximations of finite-range integrals I[f]=\int^1_0f(x)dx. Letting x=\psi(t), where \psi(t) is an increasing function for 0 < t < 1 and \psi(0)=0 and \psi(1)=1, the trapezoidal rule is applied to the transformed integral I[f]=\int^1_0f(\psi(t))\psi'(t)dt. By choosing \psi(t) appropriately, approximations of very high accuracy can be obtained for I[f] via this approach. In this talk, we survey the various transformations that exist in the literature. In view of recent generalizations of the classical Euler-Maclaurin expansion, we show how some of these transformations can be tuned to optimize the numerical results. If time permits, we will also discuss some recent asymptotic expansions for Gauss-Legendre integration rules in the presence of endpoint singularities and show how their performance can be optimized by tuning variable transformations. The variable transformation approach presents a very flexible device that enables one to write his/her own high-accuracy numerical integration code in a simple way without the need to look up tables of abscissas and weights for special Gaussian integration formulas.
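As a small illustration of the idea (this particular polynomial map is only an example, not one of the tuned transformations surveyed in the talk), compare the plain and transformed trapezoidal rules on an integrand whose derivative is singular at an endpoint:

```python
import math

def trap(g, n):
    # n-panel trapezoidal rule on [0, 1]
    h = 1.0 / n
    return h * (0.5 * (g(0.0) + g(1.0)) + sum(g(k * h) for k in range(1, n)))

def psi(t):
    # Smoothstep-type map with psi(0)=0, psi(1)=1 and psi', psi'' vanishing at
    # both endpoints, which kills the leading Euler-Maclaurin error terms.
    return t**3 * (10.0 - 15.0 * t + 6.0 * t**2)

def dpsi(t):
    return 30.0 * t**2 * (1.0 - t)**2

f = math.sqrt             # derivative of the integrand is singular at x = 0
exact = 2.0 / 3.0         # int_0^1 sqrt(x) dx
plain = trap(f, 128)
transformed = trap(lambda t: f(psi(t)) * dpsi(t), 128)
print(abs(plain - exact), abs(transformed - exact))
```

With the same number of panels, the transformed rule is several orders of magnitude more accurate here.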

Series: Research Horizons Seminar

Dynamics of spatially extended systems is often described by Lattice Dynamical Systems (LDS). LDS were introduced 25 years ago independently by four physicists from four countries. Sometimes LDS themselves are quite relevant models of real phenomena. Besides, very often discretizations of partial differential equations lead to LDS. LDS consist of local dynamical systems sitting at the nodes of a lattice which interact with each other. Mathematical studies of LDS started in 1988 and introduced a thermodynamic formalism for these spatially extended dynamical systems. They made it possible to give exact definitions of such previously vague phenomena as space-time chaos and coherent structures and to prove their existence in LDS. The basic notions and results in this area will be discussed. It is a preparatory talk for the next day's colloquium, where Dynamical Networks, i.e. systems with arbitrary graphs of interactions, will be discussed.
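A minimal sketch of an LDS: a ring of logistic maps with diffusive nearest-neighbour coupling (the local map, coupling strength and lattice size are illustrative choices, not taken from the talk):

```python
def cml_step(u, eps, a=4.0):
    # One step of a coupled map lattice on a ring: local dynamics
    # f(x) = a*x*(1-x) at each node, diffusive coupling of strength eps.
    f = [a * x * (1.0 - x) for x in u]
    n = len(u)
    return [(1.0 - eps) * f[i] + 0.5 * eps * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]

u = [0.1 + 0.8 * i / 63 for i in range(64)]   # smooth initial profile
for _ in range(100):
    u = cml_step(u, eps=0.1)
print(min(u), max(u))   # orbits stay in [0, 1]
```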

Series: ACO Student Seminar

A successful approach to solving linear programming problems exactly has been to solve the problems with increasing levels of fixed precision, checking the final basis in exact arithmetic and then doing additional simplex pivots if necessary. This work is a computational study comparing different techniques for the core element of our exact computation: solving sparse rational systems of linear equations exactly.

Series: School of Mathematics Colloquium

It was found about ten years ago that most real networks are not random in the Erdos-Renyi sense but have a different topology (structure of the graph of interactions between the elements of a network). This finding generated a steady flux of papers analyzing structural aspects of networks. However, real networks are rather dynamical ones, where the elements (cells, genes, agents, etc.) are interacting dynamical systems. Recently a general approach to the study of dynamical networks with arbitrary topology was developed. This approach is based on symbolic dynamics and is in a sense similar to the one introduced by Sinai and the speaker for Lattice Dynamical Systems, where the graph of interactions is a lattice. The new approach allows one to analyse the combined effect of all three features which characterize a dynamical network (topology, dynamics of the elements of the network, and interactions between these elements) on its evolution. The networks are of the most general type: e.g., the local systems and interactions need not be homogeneous, nor are restrictions imposed on the structure of the graph of interactions. Sufficient conditions for the stability of dynamical networks are obtained. It is demonstrated that some subnetworks can evolve regularly while others evolve chaotically. This approach is a very natural one and thus gives hope that in many other problems (some will be discussed) on dynamical networks progress can be expected.

Series: Graph Theory Seminar

Given a configuration of pebbles on the vertices of a connected graph G, a pebbling move is defined as the removal of two pebbles from some vertex, and the placement of one of these on an adjacent vertex. A graph is called pebbleable if for each vertex v there is a sequence of pebbling moves so that at least one pebble can be placed on vertex v. The pebbling number of a graph G is the smallest integer k such that G is pebbleable given any configuration of k pebbles on G. We improve on the bound of Bukh by showing that the pebbling number of a graph of diameter 3 on n vertices is at most the floor of 3n/2 + 2, and this bound is best possible. We give an alternative proof that the pebbling number of a graph of diameter 2 on n vertices is at most n + 1. This is joint work with Noah Streib and Carl Yerger.
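The definitions above can be checked by brute force on small graphs; a sketch (the search is exponential, so it is only feasible for tiny examples):

```python
from itertools import product

def reachable(conf, target, edges):
    # Can a pebble be placed on `target` from configuration `conf` using
    # pebbling moves (remove two pebbles from a vertex, place one on a
    # neighbour)?  Depth-first search over configurations.
    seen, stack = {conf}, [conf]
    while stack:
        c = stack.pop()
        if c[target] >= 1:
            return True
        for (u, v) in edges:
            for a, b in ((u, v), (v, u)):
                if c[a] >= 2:
                    nc = list(c)
                    nc[a] -= 2
                    nc[b] += 1
                    nc = tuple(nc)
                    if nc not in seen:
                        seen.add(nc)
                        stack.append(nc)
    return False

def pebbling_number(n, edges):
    # Smallest k so that EVERY distribution of k pebbles reaches every vertex.
    k = 1
    while True:
        confs = [c for c in product(range(k + 1), repeat=n) if sum(c) == k]
        if all(reachable(c, t, edges) for c in confs for t in range(n)):
            return k
        k += 1

# Path on three vertices has diameter 2 and pebbling number 2^2 = 4.
print(pebbling_number(3, [(0, 1), (1, 2)]))
```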

Series: Stochastics Seminar

I will discuss some recent (but modest) results showing the existence and slow mixing of a stationary chain of Hamiltonian oscillators subject to a heat bath. Surprisingly, even these simple results require some delicate stochastic averaging. This is joint work with Martin Hairer.

Friday, September 19, 2008 - 14:00 ,
Location: Skiles 269 ,
John Etnyre ,
School of Mathematics, Georgia Tech ,
Organizer: John Etnyre

This will be an introduction to Legendrian knots (these are interesting knots that blend topological and geometric concepts) and a powerful invariant of Legendrian knots in R^3 called contact homology. On the first pass this invariant is combinatorial and has a lot of interesting algebraic structure. In a future talk (probably a few weeks from now), I will explain more about the analytic side of the theory as well as deeper algebraic aspects. This talk should be accessible to anyone interested in topology and geometry.

Series: Combinatorics Seminar

The Balog-Szemeredi-Gowers theorem is a widely used tool in additive combinatorics, and it says, roughly, that if one has a set A such that the sumset A+A is "concentrated on few values," in the sense that these values v each get close to n representations as v = a+b, with a,b in A, then there is a large subset A' of A such that the sumset A'+A' is "small" -- i.e. it has size a small multiple of n. Later, Sudakov, Szemeredi and Vu generalized this result to handle multiple sums A_1 + ... + A_k. In the present talk we will present a refinement of this result of Sudakov, Szemeredi and Vu, where we get better control on the growth of sums A'+...+A'. This is joint work with Ernie Croot.

Series: Probability Working Seminar

Monday, September 22, 2008 - 13:00 ,
Location: Skiles 255 ,
Dongbin Xiu ,
Division of Applied Math, Purdue University ,
Organizer: Haomin Zhou

There has been growing interest in developing numerical methods for stochastic computations. This is motivated by the need to conduct uncertainty quantification in simulations, where uncertainty is ubiquitous and exists in parameter values, initial and boundary conditions, geometry, etc. In order to obtain simulation results with high fidelity, it is imperative to conduct stochastic computations to incorporate uncertainty from the beginning of the simulations. In this talk we review and discuss a class of fast numerical algorithms based on generalized polynomial chaos (gPC) expansion. The methods are highly efficient, compared to other traditional ones, and suitable for stochastic simulations of complex systems. In addition to the forward stochastic problem solvers, we also discuss gPC-based methods for addressing "modeling uncertainty", i.e., deficiency in mathematical models, and solving inverse problems such as parameter estimation.
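A minimal sketch of the idea behind a gPC (Wiener-Hermite) expansion, using a classical closed-form example rather than anything specific to the talk: for f(xi) = exp(xi) with xi standard Gaussian, the Hermite coefficients are c_k = exp(1/2)/k!, and the mean of f is simply the 0th coefficient, exp(1/2).

```python
import math

def hermite_He(k, x):
    # Probabilists' Hermite polynomials: He_{k+1}(x) = x He_k(x) - k He_{k-1}(x)
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def gpc_eval(x, N=20):
    # Truncated gPC expansion of f(xi) = exp(xi):  sum_k c_k He_k(x),
    # with exact coefficients c_k = exp(1/2) / k!.
    return sum(math.exp(0.5) / math.factorial(k) * hermite_He(k, x)
               for k in range(N + 1))

for x in (-1.0, 0.0, 0.7):
    print(x, gpc_eval(x), math.exp(x))   # the truncation reproduces exp(x)
```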

Series: Analysis Seminar

The Horn inequalities give a characterization of the eigenvalues of self-adjoint n by n matrices A, B, C with A+B+C=0. The proof requires powerful tools from algebraic geometry. In this talk I will discuss our recent result that these inequalities are indeed valid for self-adjoint operators in an arbitrary finite factor. Since in this setting there is no readily available machinery from algebraic geometry, we are forced to look for an analyst-friendly proof. A (complete) matricial form of our result is known to imply an affirmative answer to Connes' embedding problem. Geometers in town especially welcome!

Series: Geometry Topology Seminar

I will discuss a relation between the HOMFLY polynomial of a knot, its extension for a closed 3-manifold, a special function, the trilogarithm, and zeta(3). Technically, this means that we consider perturbative U(N) Chern-Simons theory around the trivial flat connection, for all N, in an ambient 3-manifold. This is rigorous, and joint with Marcos Marino and Thang Le.

Series: Geometry Topology Seminar

Based on work of Schwarz and Oh, information coming from a filtration in Hamiltonian Floer homology can be used to construct "spectral invariants" for paths of Hamiltonian diffeomorphisms of symplectic manifolds. I will show how these invariants can be used to provide a unified approach to proving various old and new results in symplectic topology, including the non-degeneracy of the Hofer metric and some of its variants; a sharp version of an inequality between the Hofer-Zehnder capacity and the displacement energy; and a generalization of Gromov's non-squeezing theorem.

Series: ACO Student Seminar

The notion of a correlation decay, originating in statistical physics, has recently played an important role in yielding deterministic approximation algorithms for various counting problems. I will try to illustrate this technique with two examples: counting matchings in bounded degree graphs, and counting independent sets in certain subclasses of claw-free graphs.

Series: PDE Seminar

Granular materials are important in a wide variety of contexts, such as avalanches and industrial processing of powders and grains. In this talk, I discuss some of the issues in understanding how granular materials flow, and especially how they tend to segregate by size. The segregation process, known scientifically as kinetic sieving, and more colorfully as The Brazil Nut Effect, involves the tendency of small particles to fall into spaces created by large particles. The small particles then force the large particles upwards, as in a shaken can of mixed nuts, in which the large Brazil nuts tend to end up near the lid. I'll describe ongoing physics experiments, mathematical modeling of kinetic sieving, and the results of analysis of the models (which are nonlinear partial differential equations). Movies of simulations and exact solutions illustrate the role of shock waves after layers of small and large particles have formed.

Series: Mathematical Biology Seminar

Since John von Neumann introduced cellular automata in the 1950s to study self-replicating systems, algebraic models of different kinds have increased in popularity in network modeling in systems biology. Their common features are that the interactions between network nodes are described by "rules" and that the nodes themselves typically take on only finitely many states, resulting in a time-discrete dynamical system with a finite state space. Some advantages of such qualitative models are that they are typically intuitive, can accommodate noisy data, and require less information about a variety of kinetic and other parameters than differential equations models. Yet they can capture essential network features in many cases. This talk will discuss examples of different types of algebraic models of molecular networks and a common conceptual framework for their analysis.

Series: Research Horizons Seminar

A turning point is where solutions to differential equations change behavior from exponential to oscillatory. In this region, approximate solutions given by the powerful WKB method break down. In a series of papers in the 1930s and '40s, Langer developed a transformation (the Langer transformation) that allows the development of good approximate solutions (in terms of Airy functions) in the region of the turning point. I will discuss a discrete analog of this transformation and show how it leads to nice asymptotic formulas for various orthogonal polynomials.

Series: Combinatorics Seminar

Motivated by a question raised by P\'or and Wood in connection with compact embeddings of graphs into the grid {\mathbb Z}^d, we consider generalizations of the no-three-in-line problem. For several pairs (d,k,l) we give algorithmic or probabilistic, combinatorial lower and upper bounds on the largest sizes of subsets S of grid-points in the d-dimensional T \times ... \times T grid, where T is large and no l distinct grid-points of S are contained in a k-dimensional affine or linear subspace.
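In the classical planar case (d=2, collinear triples forbidden), the problem can be verified exhaustively on tiny grids; a sketch, using the pigeonhole bound of 2T points (at most two per row):

```python
from itertools import combinations

def no_three_collinear(pts):
    # Cross-product test: p, q, r are collinear iff (q-p) x (r-p) == 0.
    return all(
        (q[0] - p[0]) * (r[1] - p[1]) != (q[1] - p[1]) * (r[0] - p[0])
        for p, q, r in combinations(pts, 3))

def max_no_three_in_line(T):
    # Largest subset of the T x T grid with no three collinear points; it has
    # at most 2T points, since three points in one row are collinear.
    grid = [(x, y) for x in range(T) for y in range(T)]
    for size in range(2 * T, 0, -1):
        for pts in combinations(grid, size):
            if no_three_collinear(pts):
                return size
    return 0

print(max_no_three_in_line(3))   # 2T points are achievable for small T
```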

Series: Stochastics Seminar

Robustness of several nonparametric multivariate "threshold type" outlier identification procedures is studied, employing a masking breakdown point criterion subject to a fixed false positive rate. The procedures are based on four different outlyingness functions: the widely-used "Mahalanobis distance" version, a new one based on a "Mahalanobis quantile" function that we introduce, one based on the well-known "halfspace" depth, and one based on the well-known "projection" depth. In this treatment, multivariate location outlyingness functions are formulated as extensions of univariate versions using either "substitution" or "projection pursuit," and an equivalence paradigm relating multivariate depth, outlyingness, quantile, and centered rank functions is applied. Of independent interest, the new "Mahalanobis quantile" outlyingness function is not restricted to have elliptical contours, has a transformation-retransformation representation in terms of the well-known spatial outlyingness function, and corrects to full affine invariance the orthogonal invariance of that function. Here two special tools, also of independent interest, are introduced and applied: a notion of weak covariance functional, and a very general and flexible formulation of affine equivariance for multivariate quantile functions. The new Mahalanobis quantile function inherits attractive features of the spatial version, such as computational ease and a Bahadur-Kiefer representation. For the particular outlyingness functions under consideration, masking breakdown points are evaluated and compared within a contamination model. It is seen that for threshold type outlier identification the Mahalanobis distance and projection procedures are superior to the others, although all four procedures are quite suitable for robust ranking of points with respect to outlyingness. Reasons behind these differences are discussed, and directions for further study are indicated.
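Of the four outlyingness functions, the Mahalanobis-distance version is the simplest to write down. A bare-bones 2-d sketch (the sample data are illustrative, and this is the classical non-robust version whose masking behavior motivates the breakdown analysis, not the new Mahalanobis quantile function):

```python
def mahalanobis_outlyingness(data):
    # Classical Mahalanobis-distance outlyingness for 2-d data:
    #   O(x) = sqrt( (x - mean)^T Cov^{-1} (x - mean) )
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    sxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    det = sxx * syy - sxy * sxy          # determinant of the 2x2 covariance
    def dist(p):
        dx, dy = p[0] - mx, p[1] - my
        return ((syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det) ** 0.5
    return [dist(p) for p in data]

# Six tightly clustered points plus one far point: the far point gets the
# largest outlyingness, so a threshold rule flags it.
pts = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1),
       (0.1, 0.1), (-0.1, -0.1), (9.0, 9.0)]
out = mahalanobis_outlyingness(pts)
print(max(range(len(pts)), key=lambda i: out[i]))   # index of the outlier
```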

Friday, September 26, 2008 - 14:00 ,
Location: Skiles 269 ,
Jim Krysiak ,
School of Mathematics, Georgia Tech ,
Organizer: John Etnyre

This will be a presentation of the classical result on the existence of three closed nonselfintersecting geodesics on surfaces diffeomorphic to the sphere. It will be accessible to anyone interested in topology and geometry.

Series: Probability Working Seminar

Monday, September 29, 2008 - 13:00 ,
Location: Skiles 255 ,
Silas Alben ,
School of Mathematics, Georgia Tech ,
Organizer: Haomin Zhou

We discuss two problems. First: When a piece of paper is crumpled, sharp folds and creases form. These are distributed over the sheet in a complex yet fascinating pattern. We study experimentally a two-dimensional version of this problem using thin strips of paper confined within rings of shrinking radius. We find a distribution of curvatures which can be fit by a power law. We provide a physical argument for the power law using simple elasticity and geometry. The second problem considers confinement of charged polymers to the surface of a sphere. This is a generalization of the classical Thomson model of the atom and has applications in the confinement of RNA and DNA in viral shells. Using computational results and asymptotics, we describe the sequence of configurations of a simple class of charged polymers.

Series: Analysis Seminar

The Horn inequalities give a characterization of eigenvalues of self-adjoint n by n matrices A, B, C with A+B+C=0. The proof requires powerful tools from algebraic geometry. In this talk I will discuss our recent result that these inequalities are indeed valid for self-adjoint operators in an arbitrary finite factor. Since in this setting there is no readily available machinery from algebraic geometry, we are forced to look for an analyst-friendly proof. A (complete) matricial form of our result is known to imply an affirmative answer to Connes' embedding problem. Geometers especially welcome!

Series: Geometry Topology Seminar

This is an expository talk. A classical theorem of Mazur gives a simple criterion for two closed manifolds M, M' to become diffeomorphic after multiplying by Euclidean n-space for large n. In the talk I shall prove Mazur's theorem, and then discuss what happens when n is small and M, M' are 3-dimensional lens spaces. The talk shall be accessible to anybody with an interest in geometry and topology.

Series: PDE Seminar

The yield set of a polycrystal may be characterized using variational principles associated to suitable supremal functionals. I will describe some model problems for which these can be obtained via Gamma-convergence of a class of "power-law" functionals acting on fields satisfying appropriate differential constraints, and I will indicate some PDEs which play a role in the analysis of these problems.

Series: Mathematical Biology Seminar

Series: Research Horizons Seminar

In this introduction to knot theory we will focus on a class of knots called rational knots. Here the word rational refers to a beautiful theorem by J. Conway that sets up a one-to-one correspondence between these knots and the rational numbers using continued fractions. We aim to give an elementary proof of Conway's theorem and discuss its application to the study of DNA recombination. No knowledge of topology is assumed.

Series: ACO Student Seminar

Constraint Programming is a powerful technique developed by the Computer Science community to solve combinatorial problems. I will present the model, explain constraint propagation and arc consistency, and give some basic search heuristics. I will also go through some illustrative examples to show how the solution process works.
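As a sketch of the propagation idea (ours, not from the talk): the classic AC-3 algorithm repeatedly deletes values that have no support under a binary constraint until every arc is consistent. The `ac3` helper and the toy x < y < z problem below are illustrative assumptions, not a full constraint solver:

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 arc consistency: prune values with no support in a neighboring domain.

    domains: dict var -> set of values.
    constraints: dict (x, y) -> predicate allowed(vx, vy); both orientations
                 of every constrained pair must be present.
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # Remove values of x that have no supporting value in y's domain.
        pruned = {vx for vx in domains[x]
                  if not any(allowed(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            # Re-examine arcs pointing at x, since their supports may have vanished.
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return domains

# Toy CSP: x < y < z, each variable over {1, 2, 3}.
doms = {v: {1, 2, 3} for v in "xyz"}
cons = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: b < a,
        ("y", "z"): lambda a, b: a < b, ("z", "y"): lambda a, b: b < a}
print(ac3(doms, cons))  # {'x': {1}, 'y': {2}, 'z': {3}}
```

Here propagation alone solves the problem; in general, AC-3 only shrinks the domains and a backtracking search with heuristics finishes the job.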

Series: School of Mathematics Colloquium

How does one describe the trajectories of particles floating in a liquid? This is a surprisingly difficult problem, and attempts to understand it have involved many diverse techniques. In the 1960's Arnold, Marsden, Ebin and others began to introduce topological techniques into the study of fluid flows. In this talk we will discuss some of these ideas and see how they naturally lead to the introduction of contact geometry into the study of fluid flows. We then consider some of the results one can obtain from this contact geometry perspective. For example we will show that for a sufficiently smooth steady ideal fluid flowing in the three-sphere there is always some particle whose trajectory is a closed loop that bounds an embedded disk, and that (generically) certain steady Euler flows are (linearly) unstable.

Series: School of Mathematics Colloquium

Series: Stochastics Seminar

Spatial data are often more dispersed than would be expected if the points were independently placed. Such data can be modeled with repulsive point processes, where the points appear as if they are repelling one another. Various models have been created to deal with this phenomenon. Matérn introduced three algorithms that generate repulsive processes. Here, Matérn Type III processes are used to approximate the likelihood and posterior values for data. Perfect simulation methods are used to draw auxiliary variables for each spatial point that are part of the Type III process.
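For intuition about repulsion, here is a minimal sketch of the simpler Matérn Type II hard-core thinning (the talk's Type III construction is sequential and more involved); `matern_ii` and its parameters are our own illustrative choices:

```python
import random
from math import dist

def matern_ii(n, radius, seed=0):
    """Matérn Type II thinning: scatter n uniform points in the unit square,
    give each an independent random mark, and keep a point only if no other
    point within `radius` carries a smaller mark. Survivors form a hard-core
    (repulsive) pattern: no two kept points are within `radius` of each other."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    marks = [rng.random() for _ in range(n)]
    return [p for i, p in enumerate(pts)
            if all(dist(p, q) > radius or marks[i] < marks[j]
                   for j, q in enumerate(pts) if j != i)]

kept = matern_ii(200, 0.08)
# No two survivors are within the hard-core radius of each other.
assert all(dist(p, q) > 0.08 for i, p in enumerate(kept) for q in kept[i + 1:])
print(len(kept), "of 200 points survive the thinning")
```

The hard-core property follows directly from the rule: if two survivors were within the radius, the one with the larger mark would have been deleted.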

Series: Geometry Topology Seminar

I will describe a framework which relates large N duality to the geometry of degenerating Calabi-Yau spaces and the Hitchin integrable system. I will give a geometric interpretation of the Dijkgraaf-Vafa large N quantization procedure in this context.

Series: Combinatorics Seminar

This is an expository account of recent work on the enumeration of maps (graphs embedded on a surface of arbitrary genus) and branched covers of the sphere. These combinatorial and geometric objects can both be represented by permutation factorizations, in which the subgroup generated by the factors acts transitively on the underlying symbols (these are called "transitive factorizations"). Various results and methods are discussed, including a number of methods from mathematical physics, such as matrix integrals and the KP hierarchy of integrable systems. A notable example of the results is a recent recurrence for triangulations of a surface of arbitrary genus obtained from the simplest partial differential equation in the KP hierarchy. The recurrence is very simple, but we do not know a combinatorial interpretation of it, yet it leads to precise asymptotics for the number of triangulations with n edges, of a surface of genus g.

Series: Probability Working Seminar

Series: Graph Theory Seminar

For an undirected graph G=(V,E) with V={1,...,n} let S(G) be the set of all symmetric n x n matrices A=[a_i,j] with a_i,j non-zero for distinct i,j if and only if ij is an edge. The inertia of a symmetric matrix is the triple (p_+,p_-,p_0), where p_+, p_-,p_0 are the number of positive, negative, and null eigenvalues respectively. The inverse inertia problem asks which inertias can be obtained by matrices in S(G). This problem has been studied intensively by Barrett, Hall, and Loewy. In this talk I will present new results on the inverse inertia problem, among them a Colin de Verdiere type invariant for the inertia set (this is the set of all possible inertias) of a graph, a formula for the inertia set of graphs with a 2-separation, and a formula for the inertia set of the join of a collection of graphs.

The Colin de Verdiere type invariant for the inertia set is joint work with F. Barioli, S.M. Fallat, H.T. Hall, D. Hershkowitz, L. Hogben, and B. Shader, and the formula for the inertia set of the join of a collection of graphs is joint work with W. Barrett and H.T. Hall.

Monday, October 6, 2008 - 13:00 ,
Location: Skiles 255 ,
Shengfu Deng ,
School of Mathematics, Georgia Tech ,
Organizer: Haomin Zhou

We consider the three-dimensional gravity-capillary waves on water of finite depth which are uniformly translating in a horizontal propagating direction and periodic in a transverse direction. The exact Euler equations are formulated as a spatial dynamical system instead of using the Hamiltonian formulation. A center-manifold reduction technique and a normal form analysis are applied to show that the dynamical system can be reduced to a system of ordinary differential equations. Using the existence of a homoclinic orbit connecting to a two-dimensional periodic solution for the reduced system, it is shown that such a generalized solitary-wave solution persists for the original system by applying a perturbation method and adjusting some appropriate constants.

Series: Geometry Topology Seminar

W. Goldman proved that the SL(2)-character variety X(F) of a closed surface F is a holomorphic symplectic manifold. He also showed that the SL(2)-characters of every 3-manifold with boundary F form an isotropic subspace of X(F). In fact, for all 3-manifolds whose SL(2)-representations are easy to analyze, these representations form a Lagrangian subspace. In this talk, we are going to construct explicit examples of 3-manifolds M bounding surfaces of arbitrary genus, whose spaces of SL(2)-characters have dimension as small as possible. We will discuss the relevance of this problem to quantum and classical low-dimensional topology.

Series: Analysis Seminar

A mapping F between metric spaces is called quasisymmetric (QS) if for every triple of points it distorts their relative distances in a controlled fashion. This is a natural generalization of conformality from the plane to metric spaces. In recent times much work has been devoted to the classification of metric spaces up to quasisymmetries. One of the main QS invariants of a space X is the conformal dimension, i.e. the infimum of the Hausdorff dimensions of all spaces QS isomorphic to X. This invariant is hard to find, and there are many classical fractals, such as the standard Sierpinski carpet, for which the conformal dimension is not known. Tyson proved that if a metric space has sufficiently many curves then there is a lower bound for the conformal dimension. We will show that if there are sufficiently many thick Cantor sets in the space then there is a lower bound as well. "Sufficiently many" here is in terms of a modulus of a system of measures due to Fuglede, which is a generalization of the classical conformal modulus of Ahlfors and Beurling. As an application we obtain a new lower bound for the conformal dimension of self-affine McMullen carpets.

Series: PDE Seminar

We describe how several nonlinear PDEs and evolutions including stationary and dynamic Navier-Stokes equations can be formulated and resolved variationally by minimizing energy functionals of the form

I(u) = L(u, -\Lambda u) + \langle \Lambda u, u \rangle

and

I(u) = \int^T_0 [L(t, u(t), -\dot u(t) - \Lambda u(t)) + \langle \Lambda u(t), u(t) \rangle] dt + \ell\left(u(0) - u(T), \frac{u(T) + u(0)}{2}\right)

where L is a time-dependent "selfdual Lagrangian" on state space, \ell is another selfdual "boundary Lagrangian", and \Lambda is a nonlinear operator (such as \Lambda u = div(u \otimes u) in the Navier-Stokes case). However, just like the selfdual Yang-Mills equations, the equations are not obtained via Euler-Lagrange theory, but from the fact that a natural infimum is attained. In dimension 2, we recover the well-known solutions for the corresponding initial-value problem as well as periodic and anti-periodic ones, while in dimension 3 we get Leray solutions for the initial-value problems, but also solutions satisfying u(0) = \alpha u(T) for any given \alpha in (-1, 1). It is worth noting that our variational principles translate into Leray's energy identity in dimension 2 (resp., inequality in dimension 3). Our approach is quite general and applies to many other situations.

A population model of influenza designed to evaluate projected pandemic vaccine production in Taiwan

Series: Mathematical Biology Seminar

**Background:** We endeavor to reproduce historical observations and to identify and remedy the cause of any disparate predictions before using models to inform public policy-making. We have no finely age- and time-stratified observations from historical pandemics, but prior exposure of older adults to a related strain is among the more compelling hypotheses for the W-shaped age-specific mortality characterizing the 1918 pandemic, blurring the distinction between annual and pandemic influenza.

**Methods:** We are attempting to reproduce patterns in annual influenza morbidity and mortality via a cross-classified compartmental model whose age class sojourns approximate the longevity of clusters of closely-related strains. In this population model, we represent effective inter-personal contacts via a generalization of Hethcote's formulation of mixing as a convex combination of contacts within and between age groups. Information about mixing has been sought in face-to-face conversations, a surrogate for contacts by which respiratory diseases might be transmitted, but could also be obtained from household and community transmission studies. We reanalyzed observations from several such studies to learn about age-specific preferences, proportions of contacts with others of the same age. And we obtained age-specific forces of infection from proportions reporting illness in a prospective study of household transmission during the 1957 influenza pandemic, which we gamma distributed to correct for misclassification. Then we fit our model to weekly age-specific hospitalizations from Taiwan's National Health Insurance Program, 2000-07, by adjusting a) age-specific coefficients of harmonic functions by which we model seasonality and b) probabilities of hospitalization given influenza.

**Results:** While our model accounts for only 30% of the temporal variation in hospitalizations, estimated conditional probabilities resemble official health resource utilization statistics. Moreover, younger and older people are most likely to be hospitalized and elderly ones to die of influenza, with modeled deaths 10.6% of encoded influenza or pneumonia mortality.

**Conclusions:** Having satisfactorily reproduced recent patterns in influenza morbidity and mortality in Taiwan via a deterministic model, we will switch to a discrete event-time simulator and - possibly with different initial conditions and selected parameters - evaluate the sufficiency of projected pandemic vaccine production.

Joint work with Denis Taneri and Jen-Hsiang Chuang.

Series: ACO Student Seminar

This study focuses on computations on large graphs (e.g., the web-graph) where the edges of the graph are presented as a stream. The objective in the streaming model is to maintain a small amount of memory and perform a few passes over the data.

In the streaming model, we show how to perform several graph computations, including estimating the probability distribution after a random walk of a given length l, estimating the mixing time, and estimating the conductance. We can compute the approximate PageRank values in O(nM^{-1/4}) space and O(M^{3/4}) passes (where n is the number of nodes and M is the mixing time of the graph). In comparison, a standard (matrix-vector multiplication) implementation of the PageRank algorithm will take O(n) space and O(M) passes. The main ingredient in all our algorithms is to explicitly perform several random walks of a given length efficiently in the streaming model. I shall define and motivate the streaming model and the notion of PageRank, and describe our results and techniques.

Joint work with Sreenivas Gollapudi and Rina Panigrahy from Microsoft Research.

Series: Graph Theory Seminar

The aim of this talk is to introduce techniques from knot theory into the study of graphs embedded in 3-space. The main characters are hyperbolic geometry and the Jones polynomial. Both have proven to be very successful in studying knots and conjecturally they are intimately related. We show how to extend these techniques to graphs and discuss possible applications. No prior knowledge of knot theory or geometry will be assumed.

Series: Combinatorics Seminar

We consider a random subgraph G_p of a host graph G formed by retaining each edge of G with probability p. We address the question of determining the critical value p (as a function of G) for which a giant component emerges. Suppose G satisfies some (mild) conditions depending on its spectral gap and higher moments of its degree sequence. We define the second order average degree \tilde{d} to be \tilde{d}=\sum_v d_v^2/(\sum_v d_v) where d_v denotes the degree of v. We prove that for any \epsilon > 0, if p > (1+ \epsilon)/\tilde{d} then almost surely the percolated subgraph G_p has a giant component. In the other direction, if p < (1-\epsilon)/\tilde{d} then almost surely the percolated subgraph G_p contains no giant component. (Joint work with Fan Chung Graham and Paul Horn)
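The second order average degree is straightforward to compute from the degree sequence; a small sketch (the helper name and example graph are our own):

```python
def second_order_average_degree(adj):
    """\tilde{d} = (sum over v of d_v^2) / (sum over v of d_v)."""
    degrees = [len(nbrs) for nbrs in adj.values()]
    return sum(d * d for d in degrees) / sum(degrees)

# Star K_{1,4}: degrees 4, 1, 1, 1, 1, so \tilde{d} = (16 + 4) / 8 = 2.5,
# noticeably larger than the average degree 8/5 = 1.6. The critical edge
# probability of the theorem is then roughly 1 / \tilde{d} = 0.4.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(second_order_average_degree(star))  # 2.5
```

The example illustrates why \tilde{d}, not the plain average degree, governs the threshold: high-degree vertices contribute quadratically.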

Series: Geometry Topology Seminar

In this talk I will give a purely combinatorial description of Knot Floer Homology for knots in the three-sphere (Manolescu-Ozsvath-Szabo-Thurston). In this homology there is a naturally associated invariant for transverse knots. This invariant gives a combinatorial but still effective way to distinguish transverse knots (Ng-Ozsvath-Thurston). Moreover it leads to the construction of an infinite family of non-transversely-simple knot types (Vertesi).

Series: Probability Working Seminar

Based on a paper by E. Candes and Y. Plan.

Series: Mathematical Biology Seminar

Chronic HBV infection affects 350 million people and can lead to death through cirrhosis-induced liver failure or hepatocellular carcinoma. We present the rich dynamics of two recent models of HBV infection with logistic hepatocyte growth and a standard incidence function governing viral infection. One of these models also incorporates an explicit time delay in virus production. All model parameters can be estimated from biological data. We simulate a course of lamivudine therapy and find that the models give good agreement with clinical data. Previous models considering constant hepatocyte growth have permitted only two dynamical possibilities: convergence to a virus free or an endemic steady state. Our models admit periodic solutions. Minimum hepatocyte populations are very small in the periodic orbit, and such a state likely represents acute liver failure. Therefore, the often sudden onset of liver failure in chronic HBV patients can be explained as a switch in stability caused by the gradual evolution of parameters representing the disease state.

Series: Research Horizons Seminar

In the study of one dimensional dynamical systems it is often assumed that the functions involved have a negative Schwarzian derivative. However, as not all one dimensional systems of interest have this property, it is natural to consider a generalization of this condition. Specifically, we consider the interval functions of a real variable having some iterate with a negative Schwarzian derivative and show that many known results generalize to this larger class, that is, to functions with an eventual negative Schwarzian derivative. The property of having an eventual negative Schwarzian derivative is nonasymptotic; therefore, verification of whether a function has such an iterate can often be done by direct computation. The introduction of this class was motivated by some maps arising in neuroscience.

Series: School of Mathematics Colloquium

We prove that a smooth compact submanifold of codimension $2$ immersed in $R^n$, $n>2$, bounds at most finitely many topologically distinct compact nonnegatively curved hypersurfaces. This settles a question of Guan and Spruck related to a problem of Yau. Analogous results for complete fillings of arbitrary Riemannian submanifolds are obtained as well. On the other hand, we show that these finiteness theorems may not hold if the codimension is too high, or the prescribed boundary is not sufficiently regular. Our proofs employ, among other methods, a relative version of Nash's isometric embedding theorem, and the theory of Alexandrov spaces with curvature bounded below, including the compactness and stability theorems of Gromov and Perelman. These results consist of joint works with Stephanie Alexander and Jeremy Wong, and Robert Greene.

Series: Stochastics Seminar

Adaptive estimation of linear functionals occupies an important position in the theory of nonparametric function estimation. In this talk I will discuss an adaptation theory for estimation as well as for the construction of confidence intervals for linear functionals. A between class modulus of continuity, a geometric quantity, is shown to be instrumental in characterizing the degree of adaptability and in the construction of adaptive procedures in the same way that the usual modulus of continuity captures the minimax difficulty of estimation over a single parameter space. Our results thus "geometrize" the degree of adaptability.

Friday, October 17, 2008 - 14:00 ,
Location: Skiles 269 ,
Jim Krysiak ,
School of Mathematics, Georgia Tech ,
Organizer: John Etnyre

This will be a continuation of the previous talk by this title. Specifically, this will be a presentation of the classical result on the existence of three closed nonselfintersecting geodesics on surfaces diffeomorphic to the sphere. It will be accessible to anyone interested in topology and geometry.

Series: Combinatorics Seminar

The Birthday Paradox says that if there are N days in a year, and 1.2*sqrt(N) days are chosen uniformly at random with replacement, then there is a 50% probability that some day was chosen twice. This can be interpreted as a statement about self-intersection of random paths of length 1.2*sqrt(N) on the complete graph K_N with loops. We prove an extension which shows that for many graphs random paths with length of order sqrt(N) will have the same self-intersection property. We finish by discussing an application to the Pollard Rho Algorithm for Discrete Logarithm. (Joint work with Jeong-Han Kim, Yuval Peres and Prasad Tetali.)
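The quoted 50% figure is easy to check by simulation; the sketch below (with our own helper name) estimates the collision probability by Monte Carlo:

```python
import random

def collision_probability(N, draws, trials=20_000, seed=1):
    """Monte Carlo estimate of P(some day is drawn at least twice)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seen = set()
        for _ in range(draws):
            day = rng.randrange(N)
            if day in seen:
                hits += 1
                break
            seen.add(day)
    return hits / trials

N = 365
draws = round(1.2 * N ** 0.5)   # 1.2 * sqrt(365) rounds to 23 draws
print(collision_probability(N, draws))  # close to 0.5 (exact value is about 0.507)
```

In graph language: a uniform random walk of length about 1.2*sqrt(N) on K_N with loops revisits a vertex with probability about one half.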

Series: Combinatorics Seminar

In its simplest form, the Erdos-Ko-Rado theorem tells us that if we have a family F of subsets of size k from a set of size v such that any two sets in the family have at least one point in common, then |F| <= {v-1 \choose k-1} and, if equality holds, then F consists of all k-subsets that contain a given element of the underlying set.

This theorem can also be viewed as a result in graph theory, and from this viewpoint it has many generalizations. I will outline how it can be proved using linear algebra, and then discuss how this approach can be applied in other cases.
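For tiny parameters the bound can be verified by exhaustive search; this sketch (the helper name is our own, and the search is exponential, so only toy cases are feasible) confirms the theorem for v = 5, k = 2:

```python
from itertools import combinations
from math import comb

def max_intersecting_family(v, k):
    """Size of the largest pairwise-intersecting family of k-subsets of a
    v-set, by brute force over all candidate families."""
    subsets = [frozenset(c) for c in combinations(range(v), k)]
    for size in range(len(subsets), 0, -1):
        for fam in combinations(subsets, size):
            if all(a & b for a, b in combinations(fam, 2)):
                return size
    return 0

v, k = 5, 2
print(max_intersecting_family(v, k), comb(v - 1, k - 1))  # both equal 4
```

The maximum is attained by a "star": all 2-subsets containing a fixed element, exactly as the equality case of the theorem predicts.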

Series: Geometry Topology Seminar

In this talk I will describe some relations between embedded graphs, their polynomials and the Jones polynomial of an associated link. I will explain how relations between graphs, links and their polynomials lead to the definition of the partial dual of a ribbon graph. I will then go on to show that the realizations of the Jones polynomial as the Tutte polynomial of a graph, and as the topological Tutte polynomial of a ribbon graph, are related, surprisingly, by the HOMFLY polynomial.

Series: Other Talks

We argue that computational models have an essential role in uncovering the principles behind a variety of biological phenomena that cannot be approached by other means. In this talk we shall focus on evolution. Living organisms function according to complex mechanisms that operate in different ways depending on conditions. Darwin's theory of evolution suggests that such mechanisms evolved through random variation guided by natural selection. However, there has existed no theory that would explain quantitatively which mechanisms can so evolve in realistic population sizes within realistic time periods, and which are too complex. Here we suggest such a theory. Evolution is treated as a form of computational learning from examples in which the course of learning depends only on the aggregate fitness of the current hypothesis on the examples, and not otherwise on individual examples. We formulate a notion of evolvability that distinguishes function classes that are evolvable with polynomially bounded resources from those that are not. For example, we can show that monotone Boolean conjunctions and disjunctions are demonstrably evolvable over the uniform distribution, while Boolean parity functions are demonstrably not. We shall discuss some broader issues in evolution and intelligence that can be addressed via such an approach.

Series: Research Horizons Seminar

We consider the pseudodifferential operators H_{m,\Omega} associated by the prescriptions of quantum mechanics to the Klein-Gordon Hamiltonian when restricted to a compact domain \Omega in {\mathbb R}^d. When the mass m is 0 the operator H_{0,\Omega} coincides with the generator of the Cauchy stochastic process with a killing condition on \partial \Omega. (The operator H_{0,\Omega} is sometimes called the fractional Laplacian with power 1/2.) We prove several universal inequalities for the eigenvalues (joint work with Evans Harrell).

Series: ACO Seminar

A central characteristic of a computer chip is the speed at which it processes data, determined by the time it takes electrical signals to travel through the chip. A major challenge in the design of a chip is to achieve timing closure, that is, to find a physical realization fulfilling the speed specifications. We give an overview of the major tasks for optimizing the performance of computer chips and present several new algorithms. For the topology generation of repeater trees, we introduce a variant of the Steiner tree problem and present a fast algorithm that balances efficiently between resource consumption and performance. Another indispensable task is gate sizing, a discrete optimization problem with nonlinear or PDE constraints, for which a fast heuristic is introduced. The effectiveness in practice is demonstrated by comparing with newly developed lower bounds for the achievable delay. We conclude with a variant of the time-cost tradeoff problem from project management. In contrast to the usual formulation, cycles are allowed. We present a new method to compute the time-cost tradeoff curve in such instances using combinatorial algorithms. Several problems in chip design can be modeled as time-cost tradeoff problems, e.g., threshold voltage optimization or plane assignment.

Series: PDE Seminar

I will discuss a few ways in which reaction diffusion models have been used to model pattern formation. In particular, in the setting of Cdc42 transport to and from the membrane in a yeast cell, I will show a simple model which achieves polarization. The model and its analysis exhibit some striking differences between deterministic and probabilistic versions of the model.

Tuesday, October 21, 2008 - 16:30 ,
Location: Skiles 255 ,
Chris Godsil ,
University of Waterloo ,
Organizer: Robin Thomas

Refreshments will be served at 4PM in Skiles 236.

The possibility of a quantum computer has led to much new work in theoretical physics and, naturally enough, this work has raised many new mathematical problems. What is perhaps surprising is that it has led to interesting problems in algebraic graph theory. For example, questions about the relative power of quantum and classical computers lead to questions about the chromatic number of certain graphs. In my talk I will discuss some of these problems, and the progress that has been made.

Wednesday, October 22, 2008 - 10:00 ,
Location: Skiles 269 ,
Arthur Szlam ,
UCLA ,
Organizer: Haomin Zhou

SPECIAL TIME AND LOCATION FOR THIS WEEK ONLY

The k-planes method is the generalization of k-means where the representatives of each cluster are affine linear sets. In this talk I will describe some possible modifications of this method for discriminative learning problems.

Series: Mathematical Biology Seminar

After rational protein design and combinatorial protein engineering (directed evolution), data-driven protein engineering emerges as a third generation of techniques for improving protein properties. Data-driven protein engineering relies heavily on the use of mathematical algorithms. In the first example, we developed a method for predicting the positions in the amino acid sequence that are critical for the catalytic activity of a protein. With nucleotide sequences of both functional and non-functional variants and a Support Vector Machine (SVM) learning algorithm, we set out to narrow the interesting sequence space of proteins, i.e. find the truly relevant positions. Variants of TEM-1 β-lactamase were created in silico using simulations of both mutagenesis and recombination protocols. The algorithm was shown to be able to predict critical positions that can tolerate up to two amino acids. Pairs of amino acid residues are known to lead to inactive sequences, unless mutated jointly. In the second example, we combine SVM, Boolean learning (BL), and the combination of the two, BLSVM, to find such interactive residues. Results on interactive residues in two fluorescent proteins, Discosoma Red Fluorescent Protein (Ds-Red) and monomeric Red Fluorescent Protein (mRFP), will be presented.

Wednesday, October 22, 2008 - 15:00 ,
Location: Skiles 269 ,
ZhengJun Zhang ,
University of Wisconsin ,
Organizer: Liang Peng

Various correlation measures have been introduced in statistical inferences and applications. Each of them may be used in measuring association strength of the relationship, or testing independence, between two random variables. The quotient correlation is defined here as an alternative to Pearson's correlation that is more intuitive and flexible in cases where the tail behavior of data is important. It measures nonlinear dependence where the regular correlation coefficient is generally not applicable. One of its most useful features is a test statistic that has high power when testing nonlinear dependence in cases where the Fisher's Z-transformation test may fail to reach the right conclusion. Unlike most asymptotic test statistics, which are either normal or \chi^2, this test statistic has a limiting gamma distribution (henceforth the gamma test statistic). Beyond the common usages of correlation, the quotient correlation can easily and intuitively be adjusted to values at tails. This adjustment generates two new concepts -- the tail quotient correlation and the tail independence test statistics, which are also gamma statistics. Due to the fact that there is no analogue of the correlation coefficient in extreme value theory, and there does not exist an efficient tail independence test statistic, these two new concepts may open up a new field of study. In addition, an alternative to Spearman's rank correlation, a rank based quotient correlation, is also defined. The advantages of using these new concepts are illustrated with simulated data, and real data analysis of internet traffic, tobacco markets, financial markets...

Series: Graph Theory Seminar

In this lecture I will introduce the method and sketch some recent applications. The main idea is to exploit a natural connection between the evolution of discrete random processes and continuous functions on the real numbers. Roughly speaking, the method is as follows: Given a discrete random process, we calculate the expected change in the random variable (or variables) of interest in one step of the process, write a differential equation suggested by the expected change, and show that the evolution of the random variable over the course of the process is sharply concentrated around the trajectory given by the solution of the differential equation. This allows us to translate simple facts (often combinatorial in nature) about expected changes in one step of the process into strong statements about sharp concentration of the random variable over the entire course of the process.
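
A minimal instance of the method (my own toy example, not one from the lecture): throw t balls one at a time into n bins and track X(t), the fraction of empty bins. The expected one-step change is E[ΔX] = -X/n, suggesting the ODE x'(s) = -x in rescaled time s = t/n, so x(s) = e^{-s}; sharp concentration says the simulated fraction should hug this trajectory.

```python
import math
import random

random.seed(1)
n = 10_000                    # number of bins = number of balls thrown
empty = set(range(n))

# One step of the process: a uniformly random bin receives a ball; the
# fraction of empty bins X drops by 1/n exactly when that bin was empty,
# so the expected one-step change is E[dX] = -X/n.
for _ in range(n):
    empty.discard(random.randrange(n))

simulated = len(empty) / n
predicted = math.exp(-1.0)    # ODE solution x(s) = e^{-s} evaluated at s = 1
print(f"simulated {simulated:.4f} vs ODE prediction {predicted:.4f}")
```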

Series: Geometry Topology Seminar

In many physical situations we are interested in topological lower bounds for the L^2-energy of volume preserving vector fields. Such situations include for instance the evolution of a magnetic field in ideal magnetohydrodynamics. Classical energy bounds involve topological invariants like helicity which measure linkage of orbits in the flow. In this talk I will present a new lower bound in terms of the third order helicity, which is an invariant measuring a third order linkage of orbits. I will also discuss how the third order helicity can be derived from Milnor's \mu-bar invariant for 3-component links.

Series: Combinatorics Seminar

Consider the following random graph process. We begin with the empty graph on n vertices and add edges chosen at random one at a time. Each edge is chosen uniformly at random from the collection of pairs of vertices that do not form triangles when added as edges to the existing graph. In this talk I discuss an analysis of the triangle-free process using the so-called differential equations method for random graph processes. It turns out that with high probability the triangle-free process produces a Ramsey R(3,t) graph, a triangle-free graph whose independence number is within a multiplicative constant factor of the smallest possible.
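
The process itself is easy to run on a small vertex set (the asymptotic analysis via the differential equations method is the hard part); a small-n run just shows the mechanics, not the Ramsey-type conclusion:

```python
import random
from itertools import combinations

random.seed(2)
n = 12
adj = {v: set() for v in range(n)}

def open_pairs():
    # Non-edges whose addition creates no triangle, i.e. pairs whose
    # endpoints currently share no common neighbor.
    return [(u, v) for u, v in combinations(range(n), 2)
            if v not in adj[u] and not (adj[u] & adj[v])]

# Triangle-free process: add a uniformly random open pair until none remain.
while True:
    cand = open_pairs()
    if not cand:
        break
    u, v = random.choice(cand)
    adj[u].add(v)
    adj[v].add(u)

edges = sum(len(s) for s in adj.values()) // 2
print(f"final graph: {edges} edges, triangle-free and maximal")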

Monday, October 27, 2008 - 13:00 ,
Location: Skiles 255 ,
George Biros ,
CSE, Georgia Tech ,
Organizer: Haomin Zhou

Fluid membranes are area-preserving interfaces that resist bending. They are models of cell membranes, intracellular organelles, and viral particles. We are interested in developing simulation tools for dilute suspensions of deformable vesicles. These tools should be computationally efficient, that is, they should scale well as the number of vesicles increases. For very low Reynolds numbers, as is often the case at mesoscopic length scales, the Stokes approximation can be used for the background fluid. We use a boundary integral formulation for the fluid that results in a set of nonlinear integro-differential equations for the vesicle dynamics. The motion of the vesicles is determined by balancing the nonlocal hydrodynamic forces with the elastic forces due to bending and tension. Numerical simulations of such vesicle motions are quite challenging. On one hand, explicit time-stepping schemes suffer from a severe stability constraint due to the stiffness related to high-order spatial derivatives and a milder constraint due to a transport-like stability condition. On the other hand, an implicit scheme can be expensive because it requires the solution of a set of nonlinear equations at each time step. We present two semi-implicit schemes that circumvent the severe stability constraints on the time step and whose computational cost per time step is comparable to that of an explicit scheme. We discretize the equations by using a spectral method in space, and a multistep third-order accurate scheme in time. We use the fast multipole method to efficiently compute vesicle-vesicle interaction forces in a suspension with a large number of vesicles. We report results from numerical experiments that demonstrate the convergence and algorithmic complexity properties of our scheme. Joint work with: Shravan K. Veerapaneni, Denis Gueyffier, and Denis Zorin.
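
The stiffness issue described above can already be seen in the scalar model problem u' = -λu with large λ (a standalone illustration, not the vesicle solver): forward Euler is stable only when λ·dt < 2, while the backward (implicit) step is unconditionally stable.

```python
lam, dt, steps = 1000.0, 0.01, 100   # lam*dt = 10: far beyond explicit stability
u_exp = u_imp = 1.0

for _ in range(steps):
    u_exp = u_exp + dt * (-lam * u_exp)   # forward Euler: u *= (1 - lam*dt) = -9
    u_imp = u_imp / (1.0 + lam * dt)      # backward Euler: u /= (1 + lam*dt) = 11

print(f"explicit: {u_exp:.3e}  (blows up)")
print(f"implicit: {u_imp:.3e}  (decays, like the true solution e^(-lam*t))")
```

Semi-implicit schemes of the kind described in the talk treat only the stiff (high-order derivative) terms implicitly, keeping the per-step cost close to an explicit scheme while inheriting the implicit step's stability.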

Series: Analysis Seminar

The Dirichlet space is the set of analytic functions on the disc that have a square integrable derivative. In this talk we will discuss necessary and sufficient conditions in order to have a bilinear form on the Dirichlet space be bounded. This condition will be expressed in terms of a Carleson measure condition for the Dirichlet space. One can view this result as the Dirichlet space analogue of Nehari's Theorem for the classical Hardy space on the disc. This talk is based on joint work with N. Arcozzi, R. Rochberg, and E. Sawyer.

Series: Geometry Topology Seminar

We prove that every metric of constant curvature on a compact 2-manifold M with boundary bdM induces (at least) four vertices, i.e., local extrema of geodesic curvature, on bdM, if, and only if, M is simply connected. Indeed, when M is not simply connected, we construct hyperbolic, parabolic, and elliptic metrics of constant curvature on M which induce only two vertices on bdM. Furthermore, we characterize the sphere as the only closed orientable Riemannian 2-manifold M which has the four-vertex-property, i.e., the boundary of every compact surface immersed in M has 4 vertices.

Series: Research Horizons Seminar

This talk is not an appetizer to pizza, but rather an appetizer to the main course: Hua Xu's and Trevis Litherland's thesis defenses, which will respectively take place on Thursday the 30th of October and November the 6th, in Skiles 269, at 3pm. I will present the history and origins of the problems they have been tackling ("Ulam's problems"). Various interactions with other fields such as Analysis, Algebra (Young Tableaux) or Bioinformatics (Sequence Comparison) will be touched upon. Then, some elementary but rather useful probabilistic techniques will also be introduced, together with illustrations of how they are applied.

Wednesday, October 29, 2008 - 15:00 ,
Location: Skiles 269 ,
Lily Wang ,
Department of Statistics, University of Georgia ,
Organizer: Liang Peng

We analyze a class of semiparametric ARCH models that nests the simple GARCH(1,1) model but has a flexible news impact function. A simple estimation method is proposed based on profiled polynomial spline smoothing. Under regularity conditions, the proposed estimator of the dynamic coefficient is shown to be root-n consistent and asymptotically normal. A fast and efficient algorithm based on the fast Fourier transform (FFT) has been developed to analyze volatility functions with infinitely many lagged variables within seconds. We compare the performance of our method with the commonly used GARCH(1,1) model, the GJR model and the method in Linton and Mammen (2005) through simulated data and various interesting time series. For the S&P 500 index returns, we find further statistical evidence of nonlinear and asymmetric news impact functions.
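
For reference, the benchmark GARCH(1,1) recursion that the semiparametric model nests is sigma_t^2 = omega + alpha r_{t-1}^2 + beta sigma_{t-1}^2 with r_t = sigma_t z_t. A quick simulation (the parameter values are my own illustrative choices) confirms the stationary variance omega/(1 - alpha - beta):

```python
import random

random.seed(3)
omega, alpha, beta = 0.1, 0.1, 0.85      # alpha + beta < 1: covariance-stationary
T = 50_000
sigma2 = omega / (1.0 - alpha - beta)    # start at the stationary variance (= 2.0)
returns = []

for _ in range(T):
    r = (sigma2 ** 0.5) * random.gauss(0.0, 1.0)    # r_t = sigma_t * z_t
    returns.append(r)
    sigma2 = omega + alpha * r * r + beta * sigma2  # GARCH(1,1) variance recursion

sample_var = sum(r * r for r in returns) / T
print(f"sample variance {sample_var:.3f} vs theory "
      f"{omega / (1 - alpha - beta):.3f}")
```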

Series: Graph Theory Seminar

PLEASE NOTE UNUSUAL TIME

We will consider the problem of counting the number T(n,g) of cubic graphs with n edges on a surface of genus g, review what has been known in the combinatorial community for the past 30 years and what was conjectured in physics 20 years ago, and present what was proven last month in joint work with Thang Le and Marcos Marino, using the Riemann-Hilbert analysis of the Painleve equation. No knowledge of physics or analysis is required.

Series: Stochastics Seminar

In this presentation, interactions between spectra of classical Gaussian ensembles and subsequence problems are studied with the help of the powerful machinery of Young tableaux. For the random word problem, from an ordered finite alphabet, the shape of the associated Young tableaux is shown to converge to the spectrum of the (generalized) traceless GUE. Various properties of the (generalized) traceless GUE are established, such as a law of large numbers for the extreme eigenvalues and the convergence of the spectral measure towards the semicircle law. The limiting shape of the whole tableau is also obtained as a Brownian functional. The Poissonized word problem is finally addressed, and, with it, the convergence of the whole Poissonized tableaux is derived.

Friday, October 31, 2008 - 14:00 ,
Location: Skiles 269 ,
Sinem Celik Onaran ,
School of Mathematics, Georgia Tech ,
Organizer: John Etnyre

It is still not known whether every genus g Lefschetz fibration over the 2-sphere admits a section. In this talk, we will give brief background information on Lefschetz fibrations and talk about sections of genus two Lefschetz fibrations. We will observe that any holomorphic genus two Lefschetz fibration without separating singular fibers admits a section. This talk is accessible to anyone interested in topology and geometry.

Series: Stochastics Seminar

Consider a class of multidimensional degenerate diffusion processes of the following form

X_t = x+\int_0^t b(X_s) ds+\int_0^t \sigma(X_s) dW_s,

Y_t = y+\int_0^t F(X_s)ds,

where b, \sigma, F are assumed to be smooth and b, \sigma bounded. Suppose now that \sigma\sigma^* is uniformly elliptic and that \nabla F does not degenerate. These assumptions guarantee that only one Poisson bracket is needed to span the whole space. We obtain a parametrix representation of McKean-Singer type for the density of (X_t,Y_t), from which we derive some explicit Gaussian controls that characterize the additional singularity induced by the degeneracy. This particular representation then allows us to give a local limit theorem with the usual convergence rate for an associated Markov chain approximation. The "weak" degeneracy allows us to use the local limit theorem in the Gaussian regime but also induces some difficulty in defining the suitable approximating process. In particular, two time scales appear. Another difficulty, compared with the standard literature on the topic, see e.g. Konakov and Mammen (2000), is the unboundedness of F.
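
The two time scales mentioned above can be seen in the simplest degenerate instance (a toy case of my own choosing, with b = 0, sigma = 1, F(x) = x, so the unboundedness issues of the talk do not arise): X is a Brownian motion with Var(X_t) = t, while the degenerate component Y_t = \int_0^t X_s ds fluctuates on the scale t^{3/2}, with Var(Y_t) = t^3/3. A Monte Carlo check with the Euler scheme:

```python
import random

random.seed(4)
M, steps, dt = 4000, 100, 0.01            # M Euler paths up to time t = 1

y_samples = []
for _ in range(M):
    x, y = 0.0, 0.0
    for _ in range(steps):
        y += x * dt                              # dY = F(X) dt with F(x) = x
        x += dt ** 0.5 * random.gauss(0.0, 1.0)  # dX = sigma dW with sigma = 1
    y_samples.append(y)

var_y = sum(v * v for v in y_samples) / M
print(f"Var(Y_1) ~ {var_y:.3f}  (theory: 1/3 ~ 0.333)")
```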

Series: Combinatorics Seminar

We will discuss some extensions/generalizations of the striking and elegant fact (proved independently by Furstenberg and Sarkozy) that any subset of the integers of positive upper density necessarily contains two distinct elements whose difference is a perfect square. This is joint work with Akos Magyar.
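
The Furstenberg-Sarkozy statement is easy to experiment with: below is a checker for square differences, plus a greedy attempt at a square-difference-free set, whose density visibly thins out (consistent with the theorem, which forces density zero). The construction is my own toy experiment, not part of the talk.

```python
import math

def has_square_difference(s):
    """True if some pair of distinct elements differs by a perfect square."""
    elems = sorted(s)
    for i, a in enumerate(elems):
        for b in elems[i + 1:]:
            r = math.isqrt(b - a)
            if r * r == b - a:
                return True
    return False

# Greedy square-difference-free set in [1, n]: keep k unless it clashes
# with an earlier chosen element.
n = 2000
greedy = []
for k in range(1, n + 1):
    if all(math.isqrt(k - s) ** 2 != k - s for s in greedy):
        greedy.append(k)

print(f"greedy set keeps {len(greedy)} of {n} elements "
      f"(density {len(greedy) / n:.3f})")
print("square-difference-free:", not has_square_difference(greedy))
```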

Series: Dissertation Defense

The first part of this work deals with open dynamical systems. A natural question, how the survival probability depends upon the position of a hole, was seemingly never addressed in the theory of open dynamical systems. We found that this dependence can be essential. The main results concern holes of equal size (measure) in the phase space of strongly chaotic maps. Take in each hole a periodic point of minimal period. Then the fastest escape occurs through the hole where this minimal period assumes its maximal value. The results are valid for all finite times (starting with the minimal period), which is unusual in dynamical systems theory, where statements are typically asymptotic as time tends to infinity. It seems obvious that the bigger the hole, the greater the escape through that hole. Our results demonstrate that in general this is not true, and that specific features of the dynamics may play a role comparable to the size of the hole.
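
The phenomenon is easy to observe for the doubling map T(x) = 2x mod 1 with two holes of equal measure 1/8 (my own choice of holes, in the spirit of the result): hole A = [0, 1/8) contains the fixed point 0 (minimal period 1), while hole B = [1/4, 3/8) contains the period-2 point 1/3. The result above then predicts faster escape, hence lower survival, through B.

```python
import random

random.seed(5)

def survival(hole_lo, hole_hi, n_points=100_000, n_steps=20):
    """Fraction of uniform initial points whose orbit avoids the hole."""
    alive = 0
    for _ in range(n_points):
        x = random.random()
        for _ in range(n_steps):
            if hole_lo <= x < hole_hi:
                break                    # point escaped through the hole
            x = (2.0 * x) % 1.0          # doubling map
        else:
            alive += 1
    return alive / n_points

surv_A = survival(0.0, 0.125)            # hole contains fixed point 0 (period 1)
surv_B = survival(0.25, 0.375)           # hole contains 1/3 <-> 2/3 (period 2)
print(f"survival, hole A (period-1 point): {surv_A:.4f}")
print(f"survival, hole B (period-2 point): {surv_B:.4f}")
```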

In the second part we consider some classes of cellular automata called Deterministic Walks in Random Environments on \mathbb Z^1. At first we deal with the system with constant rigidity and Markovian distribution of scatterers on \mathbb Z^1. It is shown that these systems have essentially the same properties as DWRE on \mathbb Z^1 with constant rigidity and independently distributed scatterers. Lastly, we consider a system with non-constant rigidity (so called process of aging) and independent distribution of scatterers. Asymptotic laws for the dynamics of perturbations propagating in such environments with aging are obtained.

Series: Analysis Seminar

Pseudodifferential operators and affine pseudodifferential operators arise naturally in the study of wireless communications. We discuss the origins of these operators and give new conditions on the kernels and symbols of pseudodifferential and affine pseudodifferential operators which ensure the operators are trace class (and more generally, Schatten p-class).

Series: Research Horizons Seminar

Orthogonal polynomials play a role in myriads of problems ranging from approximation theory to random matrices and signal processing. Generalizations of orthogonal polynomials, such as biorthogonal polynomials, cardinal series, and Muntz polynomials, are used, for example, in number theory and numerical analysis. We discuss some of these, and some potential research projects involving them.

Series: Mathematical Biology Seminar

Hydrogen peroxide has long been considered a harmful reactive oxygen species, but is increasingly appreciated as a cellular signaling molecule. The mechanism by which the cell buffers against intracellular H2O2 accumulation during periods of oxidative stress is not fully understood. I will introduce a detailed network model of the known redox reactions and cellular thiol modifications involved in H2O2 buffering. The model includes anti-oxidative contributions of catalase, glutathione peroxidase, peroxiredoxin, and glutaredoxin, in addition to the cytoplasmic redox buffers, thioredoxin and glutathione. Based on ordinary differential equations, the model utilizes mass action kinetics to describe changes in concentration and redox state of cytoplasmic proteins upon exposure to physiologically relevant concentrations of extracellular H2O2. Simulations match experimental observations of a rapid and transient oxidation of thioredoxin upon exposure to extracellular peroxide. The increase in the concentration of oxidized proteins predicted by the model is simultaneously accompanied by an increase in protein S-glutathionylation, possibly regulating signal transduction in cells undergoing oxidative stress. Ultimately, this network analysis will provide insight into how to target antioxidant therapies for enhanced buffering without impacting the necessary protein oxidation used by cells for signaling purposes.

Series: ACO Student Seminar

It was found about ten years ago that most real networks are not random in the Erdos-Renyi sense but have a different topology (structure of the graph of interactions between the elements of a network). This finding generated a steady flux of papers analyzing structural aspects of networks. However, real networks are rather dynamical ones, where the elements (cells, genes, agents, etc.) are interacting dynamical systems. Recently a general approach to the study of dynamical networks with arbitrary topology was developed. This approach is based on symbolic dynamics and is in a sense similar to the one introduced by Sinai and the speaker for Lattice Dynamical Systems, where the graph of interactions is a lattice. The new approach allows one to analyze the combined effect of all three features which characterize a dynamical network (topology, dynamics of the elements of the network, and interactions between these elements) on its evolution. The networks are of the most general type; e.g., the local systems and interactions need not be homogeneous, and no restrictions are imposed on the structure of the graph of interactions. Sufficient conditions for the stability of dynamical networks are obtained. It is demonstrated that some subnetworks can evolve regularly while others evolve chaotically. Some natural graph-theoretical and dynamical questions appear in further developments of this approach. No preliminary knowledge of anything besides basic calculus and linear algebra is required to understand what is going on.
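
A toy version of a stability result of this flavor (my own minimal example, not the construction from the talk): couple contracting local maps f(x) = x/2 through averaging over the neighbors of an arbitrary graph. Whatever the topology, the network state contracts to the homogeneous fixed point, illustrating how local dynamics, topology, and coupling combine.

```python
import random

random.seed(6)
n, eps = 20, 0.3                     # 20 nodes, coupling strength 0.3

# Arbitrary topology: a random interaction graph (Erdos-Renyi style).
nbrs = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < 0.2:
            nbrs[i].add(j)
            nbrs[j].add(i)

f = lambda v: 0.5 * v                # contracting local dynamics
x = [random.uniform(-1, 1) for _ in range(n)]

for _ in range(60):                  # coupled-map update on the network
    fx = [f(v) for v in x]
    new = []
    for i in range(n):
        if nbrs[i]:
            coupling = sum(fx[j] for j in nbrs[i]) / len(nbrs[i])
        else:
            coupling = fx[i]         # isolated node: no interaction term
        new.append((1 - eps) * fx[i] + eps * coupling)
    x = new

print("max |x_i| after 60 steps:", max(abs(v) for v in x))
```

Since each update is an average of values already shrunk by f, the maximum norm contracts by at least 1/2 per step, independently of the graph — a caricature of a topology-independent sufficient condition for stability.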

Series: School of Mathematics Colloquium

In this talk, we discuss 1.) the nonlinear instability and unstable manifolds of steady solutions of the Euler equation with fixed domains and 2.) the evolution of free (inviscid) fluid surfaces, which may involve vorticity, gravity, surface tension, or magnetic fields. These problems can be formulated in a Lagrangian formulation on infinite dimensional manifolds of volume preserving diffeomorphisms with an invariant Lie group action. In this setting, the physical pressure turns out to come from the combination of the gravity, surface tension, and the Lagrangian multiplier. The vorticity is naturally related to an invariant group action. In the absence of surface tension, the well-known Rayleigh-Taylor and Kelvin-Helmholtz instabilities appear naturally related to the signs of the curvatures of those infinite dimensional manifolds. Based on these considerations, we obtain 1.) the existence of unstable manifolds and L^2 nonlinear instability in the cases of the fixed domains and 2.) in the free boundary cases, the local well-posedness with surface tension in a rather uniform energy method. In particular, for the cases without surface tension which do not involve hydrodynamical instabilities, we obtain the local existence of solutions by taking the vanishing surface tension limit.

Series: Stochastics Seminar

The limiting law of the length of the longest increasing subsequence, LI_n, for sequences (words) of length n arising from iid letters drawn from finite, ordered alphabets is studied using a straightforward Brownian functional approach. Building on the insights gained in both the uniform and non-uniform iid cases, this approach is then applied to iid countable alphabets. Some partial results associated with the extension to independent, growing alphabets are also given. Returning again to the finite setting, and keeping with the same Brownian formalism, a generalization is then made to words arising from irreducible, aperiodic, time-homogeneous Markov chains on a finite, ordered alphabet. At the same time, the probabilistic object, LI_n, is simultaneously generalized to the shape of the associated Young tableau given by the well-known RSK-correspondence. Our results on this limiting shape describe, in detail, precisely when the limiting shape of the Young tableau is (up to scaling) that of the iid case, thereby answering a conjecture of Kuperberg. These results are based heavily on an analysis of the covariance structure of an m-dimensional Brownian motion and the precise form of the Brownian functionals. Finally, in both the iid and more general Markovian cases, connections to the limiting laws of the spectrum of certain random matrices associated with the Gaussian Unitary Ensemble (GUE) are explored.
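
The underlying statistic LI_n is computable in O(n log n) by patience sorting, whose working row is exactly the first row of the RSK insertion tableau mentioned above; for words (repeated letters allowed) the relevant notion is the longest weakly increasing subsequence, so the binary search uses bisect_right. A quick sketch:

```python
import bisect
import random

def li(word):
    """Length of the longest weakly increasing subsequence (patience sorting).

    The list `tails` coincides with the first row of the RSK insertion tableau.
    """
    tails = []
    for letter in word:
        pos = bisect.bisect_right(tails, letter)
        if pos == len(tails):
            tails.append(letter)
        else:
            tails[pos] = letter
    return len(tails)

random.seed(7)
word = [random.choice("abc") for _ in range(9000)]   # iid uniform 3-letter word
print("LI_n =", li(word), "for n =", len(word), "; n/3 =", len(word) // 3)
```

For an n-letter word over a k-letter alphabet, LI_n is always at least n/k (a constant run of the most frequent letter is weakly increasing), and for iid uniform letters the fluctuations around n/k on the scale sqrt(n) are what the limiting laws in the talk describe.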

Series: Geometry Topology Seminar

In the 1980s Gromov showed that curvature (in the triangle comparison sense) decreases under branched covers. In this expository talk I shall prove Gromov's result, and then discuss its generalization (due to Allcock) that helps show that some moduli spaces arising in algebraic geometry have contractible universal covers. The talk should be accessible to those interested in geometry/topology.

Series: Combinatorics Seminar

Given a set of linear equations Mx=b, we say that a set of integers S is (M,b)-free if it contains no solution to this system of equations. Motivated by questions related to testing linear-invariant Boolean functions, as well as recent investigations in additive number theory, the following conjecture was raised (implicitly) by Green and by Bhattacharyya, Chen, Sudan and Xie: we say that a set of integers S \subseteq [n] is \epsilon-far from being (M,b)-free if one needs to remove at least \epsilon n elements from S in order to make it (M,b)-free. The conjecture was that for any system of homogeneous linear equations Mx=0 and for any \epsilon > 0 there is a *constant* time algorithm that can distinguish with high probability between sets of integers that are (M,0)-free and sets that are \epsilon-far from being (M,0)-free. Or in other words, that for any M there is an efficient testing algorithm for the property of being (M,0)-free. In this paper we confirm the above conjecture by showing that such a testing algorithm exists even for non-homogeneous linear equations. As opposed to most results on testing Boolean functions, which rely on algebraic and analytic arguments, our proof relies on results from extremal hypergraph theory, such as the recent removal lemmas of Gowers, R\"odl et al. and Austin and Tao.

Series: Probability Working Seminar

A discrete infinite volume limit for random trees will be constructed and studied.

Monday, November 10, 2008 - 13:00 ,
Location: Skiles 255 ,
Guowei Wei ,
Michigan State University ,
Organizer: Haomin Zhou

The solvation process is of fundamental importance to many complex biological processes, such as signal transduction and gene regulation. Solvation models can be roughly divided into two classes: explicit solvent models, which treat the solvent in molecular or atomic detail, and implicit solvent models, which take a multiscale approach that generally replaces the explicit solvent with a dielectric continuum. Because of their fewer degrees of freedom, implicit solvent methods have become popular for many applications in molecular simulation, with applications in the calculation of biomolecular titration states, folding energies, binding affinities, mutational effects, surface properties, and many other problems in chemical and biomedical research. In this talk, we introduce a geometric flow based multiscale solvation model that marries a microscopic discrete description of biomolecules with a macroscopic continuum treatment of the solvent. The free energy functional is minimized by coupled geometric and potential flows. The geometric flow is driven not only by intrinsic forces, such as mean curvatures, but also by extrinsic potential forces, such as those from electrostatic potentials. The potential flow is driven mainly by a Poisson-Boltzmann-like operator. An efficient computational method, namely the matched interface and boundary (MIB) method, is developed to solve the Poisson-Boltzmann equation with a discontinuous interface. A Dirichlet-to-Neumann mapping (DTN) approach is developed to regularize singular charges from biomolecules.

Series: Analysis Seminar

It is easy to ask for the number T(g,n) of (rooted) graphs with n edges on a surface of genus g. Bender et al. gave an asymptotic expansion for fixed g and large n. The constant t_g remained missing for over 20 years, although it satisfied a complicated nonlinear recursion relation. The relation was vastly simplified last year. But a further simplification was made possible last week, thus arriving at Painleve I. I will review many trivialities and lies about this famous non-linear differential equation, from a postmodern point of view.

Series: Dissertation Defense

Series: PDE Seminar

We discuss the inverse problem of determining elastic parameters in the interior of an anisotropic elastic media from dynamic measurements made at the surface. This problem has applications in medical imaging and seismology. The boundary data is modeled by the Dirichlet-to-Neumann map, which gives the correspondence between surface displacements and surface tractions. We first show that, without a priori information on the anisotropy type, uniqueness can hold only up to change of coordinates fixing the boundary. In particular, we study orbits of elasticity tensors under diffeomorphisms. Then, we obtain partial uniqueness for special classes of transversely isotropic media. This is joint work with L. Rachele (RPI).

Series: Mathematical Biology Seminar

Parent-offspring interactions lead to natural conflicts. Offspring want as many resources as possible from parents in order to gain maximal fitness levels. On the other hand, parents desire to invest only enough to guarantee survival to reproduction. The resolution of the parent-offspring conflict has been a topic of much debate in evolutionary biology, and proposed resolutions typically invoke the concept of 'costs' of begging by offspring. Here I present the analysis of a simple quantitative genetic model of parent-offspring interactions that does not require costs to resolve parent-offspring conflicts.

Series: ACO Student Seminar

We consider the following Maximum Budgeted Allocation (MBA) problem: Given a set of m indivisible items and n agents, each agent i is willing to pay b_ij amount of money for item j, and in addition he specifies the maximum amount (a budget B_i) he is willing to pay in total over all the items he receives. The goal is to allocate items to agents so as to maximize the total payment received from all the agents. The problem naturally arises as auctioneer revenue maximization in first-price budget-constrained auctions (e.g., the auctioning of TV/radio ads by Google). Our main results are: 1) We give a 3/4-approximation algorithm for MBA, improving upon the previous best of 0.632 [Andelman-Mansour, 04]. Our factor matches the integrality gap of the LP used by the previous results. 2) We prove it is NP-hard to approximate MBA to any factor better than 15/16; previously only NP-hardness was known. Our result also implies NP-hardness of approximating maximum submodular welfare with a demand oracle to a factor better than 15/16, improving upon the best known hardness of 275/276 [Feige-Vondrak, 07]. Our hardness techniques can be modified to prove that it is NP-hard to approximate the Generalized Assignment Problem (GAP) to any factor better than 10/11. This improves upon the 422/423 hardness of [Chekuri-Kumar, 04]. We use iterative rounding on a natural LP relaxation of MBA to obtain the 3/4-approximation. Recently iterative rounding has achieved considerable success in designing approximation algorithms. However, these successes have been limited to minimization problems, and to our knowledge, this work is the first iterative-rounding-based approximation algorithm for a natural maximization problem. We also give a (3/4 - \epsilon)-factor algorithm based on the primal-dual schema which runs in O(nm) time, for any constant \epsilon > 0.
In this talk, I will present the iterative rounding based algorithm, show the hardness reductions, and put forward some directions which can help in solving the natural open question of closing the approximation gap. Joint work with Deeparnab Chakrabarty.
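
The objective is easy to state in code; a brute-force solver over all n^m allocations on a tiny instance (the numbers are my own illustrative choices) makes the role of the budget caps concrete, and serves as a correctness oracle when testing an approximation algorithm:

```python
from itertools import product

# Toy MBA instance (illustrative): 2 agents, 3 items.
bids = [[3, 2, 2],        # agent 0's bids on items 0, 1, 2
        [2, 3, 2]]        # agent 1's bids
budgets = [4, 4]
n_agents, n_items = len(bids), len(bids[0])

def revenue(assignment):
    """Total payment: each agent pays min(budget, sum of bids received)."""
    paid = [0] * n_agents
    for item, agent in enumerate(assignment):
        paid[agent] += bids[agent][item]
    return sum(min(budgets[a], paid[a]) for a in range(n_agents))

# Exhaustive search: assignment[i] is the agent receiving item i.
best = max(product(range(n_agents), repeat=n_items), key=revenue)
print("optimal revenue:", revenue(best), "via allocation", best)
```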

Wednesday, November 12, 2008 - 14:00 ,
Location: Skiles 255 ,
Christian Houdré ,
School of Mathematics, Georgia Tech ,
Organizer: Christian Houdre

In connection with the class Stochastic Processes in Finance II, we will have a supplementary lecture where a first, 50-minute-long movie on Doeblin's life will be shown. This will be followed by a second movie, 30 minutes long, where Yor explains on the blackboard Doeblin's contribution to what Shreve calls the Ito-Doeblin lemma.

Series: Stochastics Seminar

Many context-free formalisms based on transitive properties of trees and strings have been converted to probabilistic models. We have Probabilistic Finite Automata, Probabilistic Context-Free Grammars, Probabilistic Tree Adjoining Grammars, and many other probabilistic models of grammars. Typically such formalisms employ context-free productions that are transitively closed. Context-free grammars can be represented declaratively through context-sensitive grammars that analyse or check well-formedness of trees. When this direction is elaborated further, we obtain constraint-based representations for regular, context-free and mildly context-sensitive languages and their associated structures. Such representations can also be probabilistic, and this could be achieved by combining weighted rational operations and Dyck languages. More intuitively, the rational operations are packed into a new form of conditional rule: Generalized Restriction, or GR in short (Yli-Jyrä and Koskenniemi 2004), or a predicate logic over strings. The conditional rule, GR, is flexible and provides total contexts, which is very useful, e.g., when compiling rewriting rules for phonological alternations or for speech or text normalization. However, the total contexts of different conditional rewriting rules can overlap. This implies that the conditions of different rules are not independent and the probabilities do not combine as in the case of context-free derivations. The non-transitivity causes problems for the general use of probabilistic Generalized Restriction, e.g., when adding probabilities to phonological rewriting grammars that define regular relations.

Series: ACO Distinguished Lecture

RECEPTION TO FOLLOW

Fifty-five years ago, two results of the Hungarian mathematicians Koenig and Egervary were combined using the duality theory of linear programming to construct the Hungarian Method for the Assignment Problem. In a recent reexamination of the geometric interpretation of the algorithm (proposed by Schmid in 1978) as a steepest descent method, several variations on the algorithm have been uncovered, which seem to deserve further study.

The lecture will be self-contained, assuming little beyond the duality theory of linear programming. The Hungarian Method will be explained at an elementary level and will be illustrated by several examples.

We shall conclude with an account of a posthumous paper of Jacobi containing an algorithm developed by him prior to 1851 that is essentially identical to the Hungarian Method, thus anticipating the results of Koenig (1931), Egervary (1931), and Kuhn (1955) by many decades.
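
For a concrete sense of the optimization problem the Hungarian Method solves: given an n x n cost matrix, find the permutation assigning workers to jobs at minimum total cost. A brute-force check over all permutations (fine for tiny n, and a useful correctness oracle; the Hungarian Method reaches the same optimum in polynomial time). The matrix below is an illustrative example of my own.

```python
from itertools import permutations

# Illustrative 3x3 instance: cost[w][j] = cost of assigning worker w to job j.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
n = len(cost)

# Exhaustive search over all n! assignments (feasible only for small n).
best_perm = min(permutations(range(n)),
                key=lambda p: sum(cost[w][p[w]] for w in range(n)))
best_cost = sum(cost[w][best_perm[w]] for w in range(n))
print(f"optimal assignment {best_perm} with total cost {best_cost}")
```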

Series: Stochastics Seminar

Let X=(X_1,\ldots,X_n) be an n-dimensional random vector for which the distribution has Markov structure corresponding to a junction forest, assuming functional forms for the marginal distributions associated with the cliques of the underlying graph. We propose a latent variable approach based on computing junction forests from filtrations. This methodology establishes connections between efficient algorithms from Computational Topology and Graphical Models, which lead to parametrizations for the space of decomposable graphs so that: i) the dimension grows linearly with respect to n, and ii) they are convenient for MCMC sampling.

Friday, November 14, 2008 - 14:00 ,
Location: Skiles 269 ,
Thang Le ,
School of Mathematics, Georgia Tech ,
Organizer: Thang Le

We will explain the famous result of Luck and Schick which says that for a large class of 3-manifolds, including all knot complements, the hyperbolic volume is equal to the l^2-torsion. Then we speculate about the growth of homology torsions of finite covers of knot complements. The talk will be elementary and should be accessible to those interested in geometry/topology.

Monday, November 17, 2008 - 13:00 ,
Location: Skiles 255 ,
Maoan Han ,
Shanghai Normal University ,
Organizer: Haomin Zhou

Let H(m) denote the maximal number of limit cycles of polynomial systems of degree m. It is called the Hilbert number. The main part of Hilbert's 16th problem posed in 1902 is to find its value. The problem is still open even for m=2. However, there have been many interesting results on the lower bound of it for m\geq 2. In this paper, we give some new lower bounds of this number. The results obtained in this paper improve all existing results for all m\geq 7 based on some known results for m=3,4,5,6. In particular, we confirm the conjecture H(2k+1) \geq (2k+1)^2-1 and obtain that H(m) grows at least as rapidly as \frac{1}{2\ln2}(m+2)^2\ln(m+2) for all large m.

Series: CDSNS Colloquium

Nonlinear wave phenomena are of great importance in the physical world and have long been a challenging topic of research for both pure and applied mathematicians. There are numerous nonlinear evolution equations for which we need to analyze the properties of solutions as the system evolves in time. As a first step, we should understand the dynamics of their traveling wave solutions. There is an enormous literature on the study of nonlinear wave equations, in which exact explicit solitary wave, kink wave, and periodic wave solutions and their dynamical stability are discussed. Many methods have been developed to find exact traveling wave solutions for a given nonlinear wave system. What is the dynamical behavior of these exact traveling wave solutions? How do they depend on the parameters of the system? What causes the change in their smoothness? How should one understand the dynamics of the so-called compacton and peakon solutions? These are very interesting and important problems. The aim of this talk is to give a more systematic account of the bifurcation theory method of dynamical systems for finding traveling wave solutions and understanding their dynamics, for two classes of singular nonlinear traveling systems.

Series: PDE Seminar

We consider the existence of periodic solutions to the Euler equations of gas dynamics. Such solutions have long been thought not to exist due to shock formation, and this is confirmed by the celebrated Glimm-Lax decay theory for 2x2 systems. However, in the full 3x3 system, multiple interaction effects can combine to slow down and prevent shock formation. In this talk I shall describe the physical mechanism supporting periodicity, describe combinatorics of simple wave interactions, and develop periodic solutions to a "linearized" problem. These linearized solutions have a beautiful structure and exhibit several surprising and fascinating phenomena. I shall also discuss partial progress on the perturbation problem: this leads us to problems of small divisors and KAM theory. This is joint work with Blake Temple.

Series: Research Horizons Seminar

We examine some problems in the coupled motions of fluids and flexible solid bodies. We first present some basic equations in fluid dynamics and solid mechanics, and then show some recent asymptotic results and numerical simulations. No prior experience with fluid dynamics is necessary.

Series: ACO Student Seminar

Nash bargaining was first modeled in John Nash's seminal 1950 paper. In that paper, he used a convex program to give the Nash bargaining solution, which satisfies many natural properties. Recently, V. Vazirani defined a class of Nash bargaining problems called Uniform Nash Bargaining (UNB) and also defined a subclass called Submodular Nash Bargaining (SNB). In this talk, we will consider some game-theoretic issues of UNB: (1) price of bargaining; (2) full competitiveness; (3) min-max and max-min fairness; and we show that each of these properties characterizes the subclass SNB.

Series: Analysis Seminar

Note change in time.

The theory of geometric discrepancy studies different variations of the following question: how well can one approximate a uniform distribution by a discrete one, and what limitations necessarily arise in such approximations? Historically, the methods of harmonic analysis (Fourier transform, Fourier series, wavelets, Riesz products, etc.) have played a pivotal role in the subject. I will give an overview of the problems, methods, and results in the field and discuss some of the latest developments.

Series: Graph Theory Seminar

Given a configuration of pebbles on the vertices of a connected graph G, a pebbling move is defined as the removal of two pebbles from some vertex, and the placement of one of these on an adjacent vertex. The pebbling number of a graph G is the smallest integer k such that given any configuration of k pebbles on G and any specified vertex v in V(G), there is a sequence of pebbling moves that sends a pebble to v. We will show that the pebbling number of a graph of diameter four on n vertices is at most 3n/2 + O(1), and this bound is best possible up to an additive constant. This proof, based on a discharging argument and a decomposition of the graph into ''irreducible branches'', generalizes work of Bukh on graphs of diameter three. Further, we prove that the pebbling number of a graph on n vertices with diameter d is at most (2^{d/2} - 1)n + O(1). This also improves a bound of Bukh.
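The pebbling-move definition above lends itself to a direct brute-force check on tiny graphs. The sketch below is my own illustration (not part of the talk): it enumerates all configurations of k pebbles and searches over pebbling moves; for the path on three vertices (diameter 2) it recovers the known pebbling number 2^2 = 4.

```python
from itertools import product

def solvable(graph, config, target):
    """Can some sequence of pebbling moves place a pebble on target?
    A move removes two pebbles from a vertex and puts one on a neighbor."""
    seen = set()
    stack = [config]
    while stack:
        c = stack.pop()
        if c[target] >= 1:
            return True
        if c in seen:
            continue
        seen.add(c)
        for v, nbrs in graph.items():
            if c[v] >= 2:
                for u in nbrs:
                    nxt = list(c)
                    nxt[v] -= 2
                    nxt[u] += 1
                    stack.append(tuple(nxt))
    return False

def pebbling_number(graph):
    """Smallest k such that every configuration of k pebbles can reach
    every target vertex (brute force; only sensible for tiny graphs)."""
    n = len(graph)
    k = 1
    while True:
        ok = all(
            solvable(graph, c, t)
            for c in product(range(k + 1), repeat=n) if sum(c) == k
            for t in graph
        )
        if ok:
            return k
        k += 1

path3 = {0: [1], 1: [0, 2], 2: [1]}  # path on 3 vertices, diameter 2
print(pebbling_number(path3))  # 4: e.g. 3 pebbles stacked on one end cannot reach the other
```

The search terminates because each move strictly decreases the total number of pebbles, so the configuration graph is acyclic.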

Series: Stochastics Seminar

So far, likelihood-based interval estimates for quantiles have not been studied in the literature for interval censored Case 2 data and partly interval-censored data, and in this context the use of smoothing has not been considered for any type of censored data. This article constructs smoothed weighted empirical likelihood ratio confidence intervals (WELRCI) for quantiles in a unified framework for various types of censored data, including right censored data, doubly censored data, interval censored data and partly interval-censored data. The 4th-order expansion of the weighted empirical log-likelihood ratio is derived, and the 'theoretical' coverage accuracy equation for the proposed WELRCI is established, which generally guarantees at least 'first-order' accuracy. In particular, for right censored data we show that the coverage accuracy is at least O(n^{-1/2}), and our simulation studies show that, in comparison with empirical likelihood-based methods, the smoothing used in WELRCI generally gives a shorter confidence interval with comparable coverage accuracy. For interval censored data, it is interesting to find that with an adjusted rate n^{-1/3}, the weighted empirical log-likelihood ratio has an asymptotic distribution completely different from that obtained by the empirical likelihood approach, and the resulting WELRCI perform favorably in the comparison simulation studies available.

Series: Geometry Topology Seminar

Lickorish observed a simple way to make two knots in S^3 that produced the same manifold by the same surgery. Many have extended this result with the most dramatic being Osoinach's method (and Teragaito's adaptation) of creating infinitely many distinct knots in S^3 with the same surgery yielding the same manifold. We will turn this line of inquiry around and examine relationships within such families of corresponding knots in the resulting surgered manifold.

Series: Combinatorics Seminar

In 1968, Vizing conjectured that if G is an edge-chromatic-critical graph with n vertices, then the independence number of G is at most n/2. In this talk, we will discuss this conjecture and the progress made toward it.

Series: PDE Seminar

We consider a system of hyperbolic-parabolic equations describing the material instability mechanism associated with the formation of shear bands in high strain-rate plastic deformations of metals. Systematic numerical runs are performed that shed light on the behavior of this system in various parameter regimes. We then consider the case of adiabatic shearing and derive a quantitative criterion for the onset of instability: using ideas from the theory of relaxation systems, we derive equations that describe the effective behavior of the system. The effective equation turns out to be a forward-backward parabolic equation regularized by a fourth order term (joint work with Th. Katsaounis and Th. Baxevanis, Univ. of Crete).

Series: Analysis Seminar

In his celebrated paper on area distortion under planar quasiconformal mappings (Acta 1994), K. Astala proved that a compact set E of Hausdorff dimension d is mapped under a K-quasiconformal map f to a set fE of Hausdorff dimension at most d' = \frac{2Kd}{2+(K-1)d}, and he proved that this result is sharp. He conjectured (Question 4.4) that if the Hausdorff measure \mathcal{H}^d (E)=0, then \mathcal{H}^{d'} (fE)=0. This conjecture was known to be true for d'=0 (obvious), d'=2 (Ahlfors), and more recently d'=1 (Astala, Clop, Mateu, Orobitg and UT, Duke 2008). The approach in the last mentioned paper does not generalize to other dimensions. Astala's conjecture was shown to be sharp (if true) in the class of all Hausdorff gauge functions in work of UT (IMRN, 2008). Finally, we (Lacey, Sawyer and UT) jointly gave a complete proof of Astala's conjecture in all dimensions. The ingredients of the proof come from Astala's original approach, geometric measure theory, and some new weighted norm inequalities for Calderon-Zygmund singular integral operators which cannot be deduced from the classical Muckenhoupt A_p theory. These results are intimately related to (not yet fully understood) removability problems for various classes of quasiregular maps. The talk will be self-contained.

Series: Geometry Topology Seminar

Cannon's conjecture: "A f.g. negatively curved group with boundary homeomorphic to the round two sphere is Kleinian." We shall outline a combinatorial (complex analysis motivated) approach to this interesting conjecture (following Cannon, Cannon-Floyd-Parry). If time allows we will hint at another approach (Bonk-Kleiner), as well as ours. The talk should be accessible to graduate students with a solid background in complex analysis, group theory and basic topology.

Series: Stochastics Seminar

We will introduce the Dunkl derivative as well as the Dunkl process and some of its properties. We will then treat its radial part, called the radial Dunkl process, and highlight the connection to the eigenvalues of some matrix-valued processes and to the so-called Brownian motions in Weyl chambers. Some open problems will be discussed at the end.

Series: PDE Seminar

In this talk we will consider three different numerical methods for solving nonlinear PDEs:

- A class of Godunov-type second order schemes for nonlinear conservation laws, starting from the Nessyahu-Tadmor scheme;
- A class of L1-based minimization methods for solving linear transport equations and stationary Hamilton-Jacobi equations;
- Entropy-viscosity methods for nonlinear conservation laws.

All of the above methods are based on high-order approximations of the corresponding nonlinear PDE and respect a weak form of an entropy condition. Theoretical results and numerical examples for the performance of each of the three methods will be presented.

Series: ACO Student Seminar

Two independent proofs of the polyhedrality of the split closure of a Mixed Integer Linear Program have been previously presented. Unfortunately, neither of these proofs is constructive. In this paper, we present a constructive version of the proof. We also show that split cuts dominate a family of inequalities introduced by Koppe and Weismantel.

Series: Analysis Seminar

Note time change.

Let I_\alpha be the fractional integral operator. The Olsen inequality, useful in certain PDEs, concerns multiplication operators and fractional integrals in the L^p-norm, or more generally, the Morrey norm. We strengthen the original inequality given by Olsen.

Series: Analysis Seminar

In this talk we will discuss a generalization of monotone sequences/functions as well as of those of bounded variation. Some applications to various problems of analysis (the Lp-convergence of trigonometric series, the Boas-type problem for the Fourier transforms, the Jackson and Bernstein inequalities in approximation, etc.) will be considered.

Series: Geometry Topology Seminar

We will begin with an overview of the Burau representation of the braid group. This will be followed by an introduction to a contact category on 3-manifolds, with a brief discussion of its relation to the braid group.

Series: Geometry Topology Seminar

A broken fibration is a map from a smooth 4-manifold to S^2 with isolated Lefschetz singularities and isolated fold singularities along circles. These structures provide a new framework for studying the topology of 4-manifolds and a new way of studying Floer theoretical invariants of low dimensional manifolds. In this talk, we will first talk about topological constructions of broken Lefschetz fibrations. Then, we will describe Perutz's 4-manifold invariants associated with broken fibrations and a TQFT-like structure corresponding to these invariants. The main goal of this talk is to sketch a program for relating these invariants to Ozsváth-Szabó invariants.

Series: PDE Seminar

The usual boundary condition adjoined to a second order elliptic equation is the Dirichlet problem, which prescribes the values of the solution on the boundary. In many applications, this is not the natural boundary condition. Instead, the value of some directional derivative is given at each point of the boundary. Such problems are usually considered a minor variation of the Dirichlet condition, but this talk will show that this problem has a life of its own. For example, if the direction changes continuously, then it is possible for the solution to be continuously differentiable up to a merely Lipschitz boundary. In addition, it's possible to get smooth solutions when the direction changes discontinuously as well.

Series: Mathematical Biology Seminar

In this presentation I will outline physical principles of two analytical techniques, the Scanning ElectroChemical Microscopy (SECM) and Scanning Mass Spectrometry (SMS), which can be used to obtain the spatially resolved images of (bio/electro)chemically active interfaces. The mathematical models need to be employed for image interpretation and mapping measured quantities (e.g., an electrode current in SECM) to biochemically relevant quantities (e.g., kinetics of exocytotic signaling events in cellular communications), and I will review the key ideas/assumptions used for the model formulation and the main results of analysis and simulations. In conclusion, an alternative approach to spatially-resolved imaging based on the multi-probe array will be introduced along with intriguing opportunities and challenges for mathematical interpretation of such images.

Series: Combinatorics Seminar

Expanders via Random Spanning Trees

Motivated by the problem of routing reliably and scalably in a graph, we introduce the notion of a splicer, the union of spanning trees of a graph. We prove that for any bounded-degree n-vertex graph, the union of two random spanning trees approximates the expansion of every cut of the graph to within a factor of O(log n). For the random graph G_{n,p}, for p > c (log n)/n, two spanning trees give an expander. This is suggested by the case of the complete graph, where we prove that two random spanning trees give an expander. The construction of the splicer is elementary: each spanning tree can be produced independently using an algorithm by Aldous and Broder, namely a random walk in the graph with edges leading to previously unvisited vertices included in the tree.

A second important application of splicers is to graph sparsification, where the goal is to approximate every cut (and more generally the quadratic form of the Laplacian) using only a small subgraph of the original graph. Benczur-Karger as well as Spielman-Srivastava have shown sparsifiers with O(n log n/eps^2) edges that achieve approximation within factors 1+eps and 1-eps. Their methods, based on independent sampling of edges, need Omega(n log n) edges to get any approximation (else the subgraph could be disconnected) and leave open the question of linear-size sparsifiers. Splicers address this question for random graphs by providing sparsifiers of size O(n) that approximate every cut to within a factor of O(log n).

This is joint work with Navin Goyal and Santosh Vempala.
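The Aldous-Broder walk described above is short enough to sketch directly. The following is a minimal illustrative implementation (mine, not the authors' code): walk until all vertices are visited, and each time the walk first enters a vertex, keep the edge it arrived along. The union of two such trees forms a splicer.

```python
import random

def aldous_broder(graph, start=None):
    """Return the edge set of a uniformly random spanning tree of a
    connected undirected graph (adjacency-dict form), via the
    Aldous-Broder random walk."""
    vertices = list(graph)
    current = start if start is not None else random.choice(vertices)
    visited = {current}
    tree = set()
    while len(visited) < len(vertices):
        nxt = random.choice(graph[current])
        if nxt not in visited:
            visited.add(nxt)
            tree.add(frozenset((current, nxt)))  # first-entry edge joins the tree
        current = nxt
    return tree

# A splicer: the union of two independent random spanning trees.
complete4 = {v: [u for u in range(4) if u != v] for v in range(4)}
splicer = aldous_broder(complete4) | aldous_broder(complete4)
print(len(aldous_broder(complete4)))  # a spanning tree of 4 vertices has 3 edges
```

Each tree is produced independently, which is exactly what makes the splicer construction elementary and parallelizable.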

Series: PDE Seminar

We calculate numerically the solutions of the stationary Navier-Stokes equations in two dimensions, for a square domain with particular choices of boundary data. The data are chosen to test whether bounded disturbances on the boundary can be expected to spread into the interior of the domain. The results indicate that such behavior indeed can occur, but suggest an estimate of general form for the magnitudes of the solution and of its derivatives, analogous to classical bounds for harmonic functions. The qualitative behavior of the solutions we found displayed some striking and unexpected features. As a corollary of the study, we obtain two new examples of non-uniqueness for stationary solutions at large Reynolds numbers.

Series: Other Talks

The Southeast Geometry Seminar (SGS) is a semiannual series of one day events organized by Vladimir Oliker (Emory), Mohammad Ghomi and John McCuan (Georgia Tech) and Gilbert Weinstein (UAB). See http://www.math.uab.edu/sgs for details

Series: Stochastics Seminar

In this approach to the Gaussian Correlation Conjecture we must check the log-concavity of the moment generating function of certain measures pulled down by a particular Gaussian density.

Series: Job Candidate Talk

I will present properties of polynomial mappings and their generalizations. I will first describe all polynomials f and g for which there is a complex number c such that the orbits {c, f(c), f(f(c)), ...} and {c, g(c), g(g(c)), ...} have infinite intersection. I will also discuss a common generalization of this result and Mordell's conjecture (Faltings' theorem). After this I will move to polynomial mappings over finite fields, with connections to curves having large automorphism groups and instances of a positive characteristic analogue of Riemann's existence theorem.

Series: Job Candidate Talk

It is now increasingly common in statistical practice to encounter datasets in which the number of observations, n, is of the same order of magnitude as the number of measurements, p, we have per observation. This simple remark has important consequences for theoretical (and applied) statistics. Namely, it suggests on the theoretical front that we should study the properties of statistical procedures in an asymptotic framework where p and n both go to infinity (and p/n has for instance a finite non-zero limit). This is drastically different from the classical theory where p is held fixed when n goes to infinity. Since a number of techniques in multivariate statistics rely fundamentally on sample covariance matrices and their eigenvalues and eigenvectors, the spectral properties of large dimensional covariance matrices play a key role in such "large n, large p" analyses. In this talk, I will present a few problems I have worked on, concerning different aspects of the interaction between random matrix theory and multivariate statistics. I will discuss some fluctuation properties of the largest eigenvalue of sample covariance matrices when the population covariance is (fairly) general, talk about estimation problems for large dimensional covariance matrices and, time permitting, address some applications in a classic problem of mathematical finance. The talk will be self-contained and no prior knowledge of statistics or random matrix theory will be assumed.

Monday, January 12, 2009 - 13:00 ,
Location: Skiles 255 ,
Frank Crosby ,
Naval Surface Warfare Center, Panama City ,
Organizer: Haomin Zhou

Several imaging innovations have been designed to find hidden objects in coastal areas of entry, such as beaches and ports. Each imaging device is designed to exploit particular distinguishing characteristics. This talk will cover using a tunable multi-spectral camera for polarization-based detection, and object identification with a flash LIDAR camera that produces three-dimensional imagery.

Series: Math Physics Seminar

We present an overview of mathematical results on the low temperature properties of dilute quantum gases obtained in the past few years. The discussion includes, for instance, results on the free energy in the thermodynamic limit, and on Bose-Einstein condensation, superfluidity and quantized vortices in trapped gases. All these properties are being intensely studied in current experiments on cold atomic gases. We will give a brief description of the mathematics involved in understanding these phenomena, starting from the underlying many-body Schroedinger equation.

Series: Research Horizons Seminar

The Apéry sequence is the sequence of natural numbers 1, 5, 73, 1445, ..., which is used to prove the irrationality of zeta(3). Can you compute its asymptotic expansion to all orders of 1/n? The talk will not assume a lot, but promises to compute, and also justify.

Series: School of Mathematics Colloquium

Issai Schur (1918) considered a class of polynomials with integer coefficients and simple zeros in the closed unit disk. He studied the limit behavior of the arithmetic means s_n for zeros of such polynomials as the degree n tends to infinity. Under the assumption that the leading coefficients are bounded, Schur proved that \limsup_{n\to\infty} |s_n| \le 1-\sqrt{e}/2. We show that \lim_{n\to\infty} s_n = 0 as a consequence of the asymptotic equidistribution of zeros near the unit circle. Furthermore, we estimate the rate of convergence of s_n to 0. These results follow from our generalization of the Erdos-Turan theorem on discrepancy in angular equidistribution of zeros. We give a range of applications to polynomials with integer coefficients. In particular, we show that integer polynomials have some unexpected restrictions of growth on the unit disk. Schur also studied problems on means of algebraic numbers on the real line. When all conjugate algebraic numbers are positive, the problem of finding \liminf_{n\to\infty} s_n was developed further by Siegel and many others. We provide a solution of this problem for algebraic numbers equidistributed in subsets of the real line.

Series: Graph Theory Seminar

Each graph can be embedded in 3-space. The problem becomes more interesting if we put restrictions on the type of embedding. For example, a linkless embedding of a graph is one where each pair of vertex-disjoint circuits has linking number equal to zero. The class of all graphs that have a linkless embedding is closed under taking minors. Robertson, Seymour, and Thomas gave the forbidden minors for this class of graphs. It remained open how to find a linkless embedding in polynomial time. In the talk we start by discussing an algorithm to find a linkless embedding.

Instead of embedding the graph in 3-space, we could also consider mapping properties of certain superstructures of the graph in 3-space, and, indeed, if this superstructure does not have the right mapping properties in 3-space, see whether it has the right ones in 4-space, etc. Recently, we introduced for a graph G a new graph parameter \sigma(G), defined as the smallest d such that superstructures of G have a zero intersection mapping in d-space. The nicest property of this graph parameter is its independence of the superstructure; it thus depends on the graph only. For d=2 and d=3, \sigma(G) \leq d if and only if G is outerplanar or planar, respectively. The graphs G with \sigma(G) \leq 4 are exactly those that have a linkless embedding. In the second part of the talk we will discuss this new graph parameter. (This part is joint work with R. Pendavingh.)

Series: Job Candidate Talk

We construct our understanding of the world solely from neuronal activity generated in our brains. How do we do this? Many studies have investigated how the electrical activity of neurons (action potentials) is related to outside stimuli, and maps of these relationships -- often called receptive fields -- are routinely computed from data collected in neuroscience experiments. Yet how the brain can understand the meaning of this activity, without the dictionary provided by these maps, remains a mystery. I will present some recent results on this question in the context of hippocampal place cells -- i.e., neurons in rodent hippocampus whose activity is strongly correlated to the animal's position in space. In particular, we find that topological and geometric features of the animal's physical environment can be derived purely from the activity of hippocampal place cells. Relating stimulus space topology and geometry to neural activity opens up new opportunities for investigating the connectivity of recurrent networks in the brain. I will conclude by discussing some current projects along these lines.

Series: Stochastics Seminar

The uniform convergence of empirical averages to their expectations for a set of bounded test functions will be discussed. In our previous work, we proved a necessary and sufficient condition for the uniform convergence that can be formulated in terms of the epsilon-entropy of certain sets associated to the sample. In this talk, I will consider the case where that condition is violated. The main result is that in this situation strong almost sure oscillations take place. In fact, with probability one, for a given oscillation pattern, one can find an admissible test function that realizes this pattern for any positive prescribed precision level.

Series: Job Candidate Talk

After introducing and reviewing the situation for rational and integral points on curves, I will discuss various aspects of integral points on higher-dimensional varieties. In addition to discussing recent higher-dimensional results, I will also touch on connections with the value distribution theory of holomorphic functions and give some concrete open problems.

Series: Other Talks

It is shown (theoretically and empirically) that a reliable result can be obtained only when there is a certain relation between the capacity of the class of models from which we choose and the size of the training set. There are different ways to measure the capacity of a class of models. In practice the size of a training set is always finite and limited. This leads to the idea of choosing a model from the narrowest possible class, or in other words of using the simplest model (Occam's razor). But if our class is narrow, it is possible that there is no true model within the class, nor a model close to the true one. This means that there will be a greater residual error, or a larger number of errors, even on the training set. So the problem of model complexity choice arises: to find a balance between errors due to the limited number of training data and errors due to excessive model simplicity. I shall review different approaches to the problem.

Series: Job Candidate Talk

Series: Other Talks

You are cordially invited to attend a reception that will follow the seminar to chat informally with faculty and students. Refreshments will be provided.

The existing machine learning paradigm considers a simple scheme: given a set of training examples, find, in a given collection of functions, the one that approximates the unknown decision rule in the best possible way. In such a paradigm a teacher does not play an important role. In human learning, however, the role of a teacher is very important: along with examples, a teacher provides students with explanations, comments, comparisons, and so on. In this talk I will introduce elements of human teaching into machine learning. I will consider an advanced learning paradigm called learning using hidden information (LUHI), where at the training stage a teacher gives some additional information x^* about training example x. This information will not be available at the test stage. I will consider the LUHI paradigm for support vector machine type algorithms, demonstrate its superiority over the classical one, and discuss general questions related to this paradigm. For details see FODAVA, Foundations of Data Analysis and Visual Analytics.

Series: PDE Seminar

Living systems are subject to constant evolution through the two processes of mutation and selection, a principle discovered by Darwin. In a very simple, general, and idealized description, their environment can be considered as a nutrient shared by all the population. This allows certain individuals, characterized by a 'phenotypical trait', to expand faster because they are better adapted to the environment. This leads to selection of the 'best fitted trait' in the population (a singular point of the system). On the other hand, the new-born population undergoes small variations of the trait under the effect of genetic mutations. In these circumstances, is it possible to describe the dynamical evolution of the current trait?

We will give a mathematical model of such dynamics, based on parabolic equations, and show that an asymptotic method allows us to formalize precisely the concepts of monomorphic or polymorphic population. Then, we can describe the evolution of the 'best fitted trait' and eventually compute various forms of branching points, which represent the cohabitation of two different populations.

The concepts are based on the asymptotic analysis of the above mentioned parabolic equations, appropriately rescaled. This leads to concentration of the solutions, and the difficulty is to evaluate the weights and positions of the moving Dirac masses that describe the population. We will show that a new type of Hamilton-Jacobi equation, with constraints, naturally describes this asymptotic limit. Some additional theoretical questions, such as uniqueness for the limiting Hamilton-Jacobi equation, will also be addressed.

This work is based on collaborations with O. Diekmann, P.-E. Jabin, S. Mischler, S. Cuadrado, J. Carrillo, S. Genieys, M. Gauduchon and G. Barles.

Series: Job Candidate Talk

Numerical algebraic geometry provides a collection of novel methods to treat the solutions of systems of polynomial equations. These hybrid symbolic-numerical methods based on homotopy continuation technique have found a wide range of applications in both pure and applied areas of mathematics. This talk gives an introduction to numerical algebraic geometry and outlines directions in which the area has been developing. Two topics are highlighted: (1) computation of Galois groups of Schubert problems, a recent application of numerical polynomial homotopy continuation algorithms to enumerative algebraic geometry; (2) numerical primary decomposition, the first numerical method that discovers embedded solution components.

Series: ACO Colloquium

We describe recent progress in the study of the binary deletion channel and related channels with synchronization errors, including a clear description of many open problems in this area. As an example, while the capacities of the binary symmetric channel and the binary erasure channel have been known since Shannon, we still do not have a closed-form description of the capacity of the binary deletion channel. We highlight a recent result that shows that the capacity is at least (1-p)/9 when each bit is deleted independently with fixed probability p.
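The channel itself is trivial to simulate, which makes the lack of a closed-form capacity all the more striking: unlike an erasure, a deletion leaves no placeholder, so the receiver loses synchronization. A toy sketch (illustrative only; `capacity_lower_bound` simply evaluates the (1-p)/9 bound quoted above):

```python
import random

def deletion_channel(bits, p, rng=random):
    """Delete each bit independently with probability p. The receiver
    sees the surviving bits in order, with no markers of where
    deletions occurred (contrast with an erasure channel)."""
    return [b for b in bits if rng.random() >= p]

def capacity_lower_bound(p):
    """The lower bound quoted in the abstract: capacity >= (1 - p)/9."""
    return (1 - p) / 9

random.seed(0)
received = deletion_channel([1, 0, 1, 1, 0, 0, 1, 0], p=0.25)
print(received, capacity_lower_bound(0.25))
```

Note that `received` is typically shorter than the input, and its positions carry no information about which input bits survived.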

Series: School of Mathematics Colloquium

In this talk we will review some of the global asymptotic results obtained during the last two decades in the theory of the classical Painleve equations with the help of the Isomonodromy - Riemann-Hilbert method. The results include the explicit derivation of the asymptotic connection formulae, the explicit description of linear and nonlinear Stokes phenomenon and the explicit evaluation of the distribution of poles. We will also discuss some of the most recent results emerging due to the appearance of Painleve equations in random matrix theory. The Riemann-Hilbert method will be outlined as well.

Series: SIAM Student Seminar

In this talk, I will focus on some interesting examples of conditional expectation and martingales: for example, the gambling system "Martingale", Polya's urn scheme, the Galton-Watson process, and the Wright-Fisher model of population genetics. I will skip the theorems and properties; definitions to support the examples will be introduced. The talk will not assume a lot of probability, just some basic measure theory.

Series: Other Talks

h-Principle consists of a powerful collection of tools developed by Gromov and others to solve underdetermined partial differential equations or relations which arise in differential geometry and topology. In these talks I will describe the Holonomic approximation theorem of Eliashberg-Mishachev, and discuss some of its applications including the sphere eversion theorem of Smale. Further I will discuss the method of convex integration and its application to proving the C^1 isometric embedding theorem of Nash.

Series: Combinatorics Seminar

In this talk, I will discuss chip-firing games on graphs, and the related Jacobian groups. Additionally, I will describe elliptic curves over finite fields, and how such objects also have group structures. For a family of graphs obtained by deforming the sequence of wheel graphs, the cardinalities of the Jacobian groups satisfy a nice reciprocal relationship with the orders of elliptic curves as we consider field extensions. I will finish by discussing other surprising ways that these group structures are analogous. Some of this research was completed as part of my dissertation work at the University of California, San Diego under Adriano Garsia's guidance.

Friday, January 23, 2009 - 15:00 ,
Location: Skiles 269 ,
Mohammad Ghomi ,
Ga Tech ,
Organizer: John Etnyre

$h$-Principle consists of a powerful collection of tools developed by Gromov and others to solve underdetermined partial differential equations or relations which arise in differential geometry and topology. In these talks I will describe the Holonomic approximation theorem of Eliashberg-Mishachev, and discuss some of its applications including the sphere eversion theorem of Smale. Further I will discuss the method of convex integration and its application to proving the $C^1$ isometric embedding theorem of Nash.

Monday, January 26, 2009 - 13:00 ,
Location: Skiles 255 ,
Ming-Jun Lai ,
University of Georgia ,
Organizer: Haomin Zhou

I will first explain why we want to find sparse solutions of underdetermined linear systems. Then I will explain how to solve such systems using \ell_1, OGA, and \ell_q approaches. There are sufficient conditions ensuring that these solutions are the sparse ones, e.g., conditions based on the restricted isometry property (RIP) by Candes, Romberg, and Tao '06 and Candes '08. These conditions were improved recently by Foucart and Lai '08. Furthermore, Gaussian random matrices usually satisfy the RIP. I shall explain that random matrices with strictly sub-Gaussian entries also satisfy the RIP.
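To make the greedy approach concrete, here is a hedged pure-Python sketch of the orthogonal greedy algorithm (often called orthogonal matching pursuit): pick the column most correlated with the residual, re-fit by least squares on the chosen support, and repeat. The 3 x 5 dictionary and the 2-sparse signal below are made-up toy data of mine, not an example from the talk.

```python
import math

def transpose(A):
    return [list(col) for col in zip(*A)]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def solve_normal_equations(A, b):
    """Least squares via the normal equations A^T A c = A^T b,
    solved by Gaussian elimination (fine for tiny demos)."""
    At = transpose(A)
    n = len(At)
    M = [[sum(At[i][k] * At[j][k] for k in range(len(b))) for j in range(n)]
         for i in range(n)]
    v = [sum(At[i][k] * b[k] for k in range(len(b))) for i in range(n)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        v[i], v[piv] = v[piv], v[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (v[i] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def omp(A, b, sparsity):
    """Orthogonal greedy algorithm: repeatedly pick the column most
    correlated with the residual, then re-fit by least squares on the
    support chosen so far."""
    cols = transpose(A)
    support, residual = [], b[:]
    for _ in range(sparsity):
        j = max((j for j in range(len(cols)) if j not in support),
                key=lambda j: abs(sum(c * r for c, r in zip(cols[j], residual))))
        support.append(j)
        As = transpose([cols[j] for j in support])
        coef = solve_normal_equations(As, b)
        fit = matvec(As, coef)
        residual = [bi - fi for bi, fi in zip(b, fit)]
    x = [0.0] * len(cols)
    for j, c in zip(support, coef):
        x[j] = c
    return x

# A low-coherence toy dictionary and a 2-sparse right-hand side
s = 1 / math.sqrt(3)
A = [[1, 0, 0, s, s],
     [0, 1, 0, s, -s],
     [0, 0, 1, s, s]]
b = [2.0, 0.0, 3.0]       # b = 2 * col0 + 3 * col2
print(omp(A, b, 2))        # recovers [2.0, 0.0, 3.0, 0.0, 0.0]
```

The RIP-style conditions mentioned in the abstract are precisely what guarantee that greedy (and \ell_1) methods recover the true sparse solution rather than some other exact representation.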

Series: CDSNS Colloquium

I will present a generalization of a classical within-host model of a viral infection that includes multiple strains of the virus. The strains are allowed to mutate into each other. In the absence of mutations, the fittest strain drives all other strains to extinction. Treating mutations as a small perturbation, I will present a global stability result of the perturbed equilibrium. Whether a particular strain survives is determined by the connectivity of the graph describing all possible mutations.

Tuesday, January 27, 2009 - 11:05 ,
Location: Skiles 269 ,
Philip Protter ,
Cornell University ,
Organizer: Christian Houdre

Series: PDE Seminar

Image segmentation has been widely studied, especially since the Mumford-Shah functional was proposed. Many theoretical works as well as numerous extensions have been studied throughout the years. In this talk, I will focus on a couple of variational models for multi-phase segmentation. For the first model, we propose a model built upon the phase transition model of Modica and Mortola in material science and a properly synchronized fitting term that complements it. For the second, we propose a variational functional for unsupervised multiphase segmentation, obtained by adding scale information of each phase. This model is able to deal with the instability issue associated with choosing the number of phases for multiphase segmentation.

Series: Mathematical Biology Seminar

Series: Research Horizons Seminar

In this talk, we give an overview of the mathematical topic of shape optimization. First, we give several examples of problems, some purely academic and some of industrial origin. Then, we look at the different mathematical questions arising in shape optimization. To prove the existence of a solution, we need some topology on the set of domains, together with good compactness and continuity properties. Studying the regularity and the geometric properties of a minimizer requires tools from classical analysis, like symmetrization. To be able to define optimality conditions, we introduce the notion of the derivative with respect to the domain. Finally, we give some ideas of the different numerical methods used to compute a possible solution.

Series: ACO Student Seminar

In this article, we disprove the uniform shortest path routing conjecture for vertex-transitive graphs by constructing an infinite family of counterexamples.

Series: Job Candidate Talk

Carleson's Corona Theorem from the 1960's has served as a major motivation for many results in complex function theory, operator theory and harmonic analysis. In its simplest form, the result states that for two bounded analytic functions, g_1 and g_2, on the unit disc with no common zeros, it is possible to find two other bounded analytic functions, f_1 and f_2, such that f_1g_1+f_2g_2=1. Moreover, the functions f_1 and f_2 can be chosen with some norm control. In this talk we will discuss an exciting new generalization of this result to certain function spaces on the unit ball in several complex variables. In particular, we will highlight the Corona Theorem for the Drury-Arveson space and its applications in multi-variable operator theory.

Series: Stochastics Seminar

This work began in collaboration with C. Heitsch. I will briefly discuss the biological motivation. Then I will introduce Gibbs random trees and study their asymptotics as the tree size grows to infinity. One of the results is a "thermodynamic limit" allowing us to introduce a limiting infinite random tree which exhibits a few curious properties. Under appropriate scaling one can obtain a diffusion limit for the process of generation sizes of the infinite tree. It also turns out that one can approach the study of the details of the geometry of the tree by tracing progenies of subpopulations. Under the same scaling, the limiting continuum random tree can be described as a solution of an SPDE with respect to a Brownian sheet.

Series: SIAM Student Seminar

I plan to give a simple proof of the law of the iterated logarithm in probability, a famous result related to the strong law of large numbers. In the proof I will cover the definitions of some important notions in probability, such as the moment generating function and large deviations. The proof is basically from Billingsley's book, with some modifications of my own.
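For intuition only (this is a simulation, not the proof discussed in the talk), one can watch the LIL normalization S_n / sqrt(2 n log log n) for a simple coin-flip walk; the names below are my own. The law of the iterated logarithm says the limsup of this quantity over n is exactly 1 almost surely, so for a fixed large n it is typically well inside (-1, 1).

```python
import math
import random

def lil_statistic(n, rng):
    """S_n / sqrt(2 n log log n) for a simple +/-1 random walk; the LIL
    says the limsup over n of this quantity equals 1 almost surely."""
    s = sum(1 if rng.random() < 0.5 else -1 for _ in range(n))
    return s / math.sqrt(2 * n * math.log(math.log(n)))

rng = random.Random(42)
vals = [lil_statistic(10_000, rng) for _ in range(200)]
# For fixed large n the statistic is roughly normal with standard
# deviation well below 1; values near 1 occur only along rare times n.
print(max(abs(v) for v in vals))
```

The subtlety the simulation hides is that the limsup is taken along a single path as n grows, not across independent samples, which is where the large-deviation estimates in the proof come in.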

Series: Combinatorics Seminar

Part of Spielman and Teng's smoothed analysis of the simplex algorithm relied on showing that most minors of a typical random rectangular matrix are well conditioned (do not have any singular values too close to zero). Motivated by this, Vershynin asked whether it is typically true that ALL minors of a random rectangular matrix are well conditioned. Here I will explain why the answer to this question is in fact no: even an n by 2n matrix will typically have n by n minors whose singular values are exponentially close to zero.

Friday, January 30, 2009 - 15:00 ,
Location: Skiles 269 ,
Mohammad Ghomi ,
Ga Tech ,
Organizer: John Etnyre

$h$-Principle consists of a powerful collection of tools developed by Gromov and others to solve underdetermined partial differential equations or relations which arise in differential geometry and topology. In these talks I will describe the Holonomic approximation theorem of Eliashberg-Mishachev, and discuss some of its applications including the sphere eversion theorem of Smale. Further I will discuss the method of convex integration and its application to proving the $C^1$ isometric embedding theorem of Nash. (Please note this course runs from 3-5.)

Series: Geometry Topology Seminar

I will discuss a "duality" among the linearized contact homology groups of a Legendrian submanifold in certain contact manifolds (in particular in Euclidean (2n+1)-space). This duality is expressed in a long exact sequence relating the linearized contact homology, linearized contact cohomology and the ordinary homology of the Legendrian submanifold. One can use this structure to ease difficult computations of linearized contact homology in high dimensions and further illuminate the proof of cases of the Arnold Conjecture for the double points of an exact Lagrangian in complex n-space.

Series: Analysis Seminar

The Balian-Low Theorem is a strong form of the uncertainty principle for Gabor systems that form orthonormal or Riesz bases for L^2(R). In this talk we will discuss the Balian-Low Theorem in the setting of Schauder bases. We prove that new weak versions of the Balian-Low Theorem hold for Gabor Schauder bases, but we constructively demonstrate that several variants of the BLT can fail for Gabor Schauder bases that are not Riesz bases. We characterize a class of Gabor Schauder bases in terms of the Zak transform and product A_2 weights; the Riesz bases correspond to the special case of weights that are bounded away from zero and infinity. This is joint work with Alex Powell (Vanderbilt University).

Series: Job Candidate Talk

Stochastic Loewner evolution (SLE), introduced by Oded Schramm, is a breakthrough in studying the scaling limits of many two-dimensional lattice models from statistical physics. In this talk, I will discuss the proofs of the reversibility conjecture and the duality conjecture about SLE. The proofs of these two conjectures use the same idea, which is to use a coupling technique to lift local couplings of two SLE processes that locally commute with each other to a global coupling. From the global coupling, we can clearly see that the two conjectures hold.

Series: CDSNS Colloquium

I will review results from binary black hole simulations and the role that these simulations have in astrophysics and gravitational wave observations. I will then focus on the mathematical and computational aspects of the recent breakthroughs in numerical relativity that have made finding binary black hole solutions to the Einstein field equations an almost routine exercise.

Tuesday, February 3, 2009 - 15:00 ,
Location: Skiles 269 ,
Dmitry Kreslavskiy ,
Bloomberg ,
Organizer: Christian Houdre

We will give an overview of the company as it relates to the work of a quant. We will discuss projects of interest, the typical lifecycle of a project, and the areas involved.

Series: PDE Seminar

In this talk I will present Hamiltonian identities for elliptic PDEs and systems of PDEs. I will also show some interesting applications of these identities to problems related to solutions of some nonlinear elliptic equations in the entire space or plane. In particular, I will give a rigorous proof of Young's law for triple junction configurations in a vector-valued Allen-Cahn model arising in phase transitions; a necessary condition for the existence of certain saddle solutions of the Allen-Cahn equation with an asymmetric double well potential will be derived, and the structure of level sets of general saddle solutions will also be discussed.

Series: Research Horizons Seminar

It is well known, by a theorem of Alexander, that every closed oriented 3-manifold has an open book decomposition. In this talk, we will define open book decompositions of 3-manifolds. We will discuss various examples and sketch the proof of Alexander's theorem. Further, we will discuss the importance of open books in manifold theory, in particular in contact geometry.

Series: ACO Student Seminar

Shrouded in mystery and kept hidden for decades, Richard Lipton's vault of open problems will be revealed...

Series: School of Mathematics Colloquium

Consider a 2-d ideal incompressible fluid moving inside a bounded domain (say the 2-d torus). It is described by the 2-d Euler equations, which have a unique global solution; thus, we have a dynamical system in the space of sufficiently regular incompressible vector fields. The global properties of this system are poorly studied, and, as far as we know, paradoxical. It turns out that there exists a global attractor (in the energy norm), i.e. a set in the phase space attracting all trajectories (despite the fact that the system is conservative). This apparent contradiction leads to some deep questions of non-equilibrium statistical mechanics.

Series: Combinatorics Seminar

The study of random tilings of planar lattice regions goes back to the solution of the dimer model in the 1960's by Kasteleyn, Temperley and Fisher, but received new impetus in the early 1990's, and has since branched out in several directions in the work of Cohn, Kenyon, Okounkov, Sheffield, and others. In this talk, we focus on the interaction of holes in random tilings, a subject inspired by Fisher and Stephenson's 1963 conjecture on the rotational invariance of the monomer-monomer correlation on the square lattice. In earlier work, we showed that the correlation of a finite number of holes on the triangular lattice is given asymptotically by a superposition principle closely paralleling the superposition principle for electrostatic energy. We now take this analogy one step further, by showing that the discrete field determined by considering at each unit triangle the average orientation of the lozenge covering it converges, in the scaling limit, to the electrostatic field. Our proof involves a variety of ingredients, including Laplace's method for the asymptotics of integrals, Newton's divided difference operator, and hypergeometric function identities.

Friday, February 6, 2009 - 15:00 ,
Location: Skiles 269 ,
Mohammad Ghomi ,
School of Mathematics, Georgia Tech ,
Organizer: John Etnyre
h-Principle consists of a powerful collection of tools developed by Gromov and others to solve underdetermined partial differential equations or relations which arise in differential geometry and topology. In these talks I will describe the Holonomic approximation theorem of Eliashberg-Mishachev, and discuss some of its applications including the sphere eversion theorem of Smale. Further I will discuss the method of convex integration and its application to proving the C^1 isometric embedding theorem of Nash.

(Please note this course runs from 3-5 pm.)

Monday, February 9, 2009 - 13:00 ,
Location: Skiles 255 ,
Giuseppe Mastroianni ,
Dept. of Mathematics and Informatics, Univ. of Basilicata, Italy ,
Organizer: Haomin Zhou

In this talk I will show a simple projection method for Fredholm integral equations (FIEs) defined on finite intervals and a Nyström method for FIEs defined on the real semiaxis. The first method is based on the polynomial interpolation of functions in a weighted uniform norm. The second one is based on a truncated Gauss quadrature rule. The stability and the convergence of the methods are proved and error estimates are given.

Series: CDSNS Colloquium

Mathematical models are used to study the possible impact of drug treatment of infections with the human immunodeficiency virus type 1 (HIV-1) on the evolution of the pathogen. Treating HIV-infected patients with a combination of several antiretroviral drugs usually contributes to a substantial decline in viral load and an increase in CD4+ T cells. However, continuing viral replication in the presence of drug therapy can lead to the emergence of drug-resistant virus variants, which subsequently results in incomplete viral suppression and a greater risk of disease progression. As different types of drugs (e.g., reverse transcriptase inhibitors, protease inhibitors and entry inhibitors) help to reduce the HIV replication at different stages of the cell infection, infection-age-structured models are useful to more realistically model the effect of these drugs. The model analysis will be presented and the results are linked to the biological questions under investigation. By demonstrating how drug therapy may influence the within host viral fitness we show that while a higher treatment efficacy reduces the fitness of the drug-sensitive virus, it may provide a stronger selection force for drug-resistant viruses which allows for a wider range of resistant strains to invade.

Tuesday, February 10, 2009 - 15:00 ,
Location: Skiles 269 ,
Rehim Kilic ,
School of Economics, Georgia Tech ,
Organizer: Christian Houdre

This paper introduces a new nonlinear long memory volatility process, denoted Smooth Transition FIGARCH, or ST-FIGARCH, which is designed to account for both long memory and nonlinear dynamics in the conditional variance process. The nonlinearity is introduced via a logistic transition function which is characterized by a transition parameter and a variable. The model can capture smooth jumps in the altitude of the volatility clusters as well as asymmetric responses to negative and positive shocks. A Monte Carlo study finds that the ST-FIGARCH model outperforms the standard FIGARCH model when nonlinearity is present, and performs at least as well without nonlinearity. Applications reported in the paper show that both nonlinearity and long memory characterize the conditional volatility in exchange rate and stock returns, and therefore the presence of nonlinearity may not be the source of the long memory found in the data.

Series: PDE Seminar

The transportation problem can be formulated as the problem of finding the optimal way to transport a given measure into another with the same mass. In mathematics, there are at least two different but very important types of optimal transportation: the Monge-Kantorovich problem and ramified transportation. In this talk, I will give a brief introduction to the theory of ramified optimal transportation. In terms of applied mathematics, optimal transport paths are used to model many "tree-shaped" branching structures, which are commonly found in many living and nonliving systems. Trees, river channel networks, blood vessels, lungs, electrical power supply systems, draining and irrigation systems are just some examples. After briefly describing some basic properties (e.g. existence, regularity) as well as numerical simulation of optimal transport paths, I will use this theory to explain the dynamic formation of tree leaves. On the other hand, optimal transport paths provide excellent examples for studying geodesic problems in quasi-metric spaces, where the distance functions satisfy a relaxed triangle inequality: d(x,y) <= K(d(x,z)+d(z,y)). Then, I will introduce a new concept, the "dimensional distance," on the space of probability measures. With respect to this new metric, the dimension of a probability measure is just the distance of the measure to any atomic measure. In particular, measures concentrated on self-similar fractals (e.g. the Cantor set, fat Cantor sets) will be of great interest to us.

Series: Research Horizons Seminar

The Horn inequalities give a characterization of the eigenvalues of self-adjoint n by n matrices A, B, C with A+B+C=0. The original proof, by Klyachko and Knutson-Tao, requires tools from algebraic geometry, among other things. Our recent work provides a proof using only elementary tools that made it possible to generalize the Horn inequalities to finite von Neumann factors. No knowledge of von Neumann algebras is required.

Series: Analysis Seminar

Series: Job Candidate Talk

In the late 1980's Manin et al put forward a precise conjecture about the density of rational points on Fano varieties. Over the last two decades some progress has been made towards proving this conjecture. But the conjecture is far from being proved even for the case of two-dimensional Fano varieties, or del Pezzo surfaces. These surfaces are geometrically classified according to 'degree', and the geometric as well as arithmetic complexity increases as the degree drops. The most interesting cases of Manin's conjecture for surfaces are degrees four and lower. In this talk I will mainly focus on the arithmetic of these del Pezzo surfaces, and report some of my own results (partly joint with Henryk Iwaniec). I will also talk about some other problems which apparently have a different flavor but, nonetheless, are directly related with the problem of rational points on surfaces.

On creating a model assessment tool independent of data size and estimating the U statistic variance

Series: Stochastics Seminar

If viewed realistically, models under consideration are always false. A consequence of model falseness is that for every data generating mechanism, there exists a sample size at which the model failure will become obvious. There are occasions when one will still want to use a false model, provided that it gives a parsimonious and powerful description of the generating mechanism. We introduce a model credibility index, from the point of view that the model is false. The model credibility index is defined as the maximum sample size at which samples from the model and those from the true data generating mechanism are nearly indistinguishable. Estimating the model credibility index is done under the framework of subsampling, where a large data set is treated as our population, and subsamples generated from the population are compared with the model using various sample sizes. Exploring the asymptotic properties of the model credibility index is associated with the problem of estimating the variance of U statistics. An unbiased estimator and a simple fix-up are proposed to estimate the U statistic variance.

Series: SIAM Student Seminar

Let V be a vector space over the field C of complex numbers and let GL(V) be the group of isomorphisms of V onto itself. Suppose G is a finite group. A linear representation of G in V is a homomorphism from the group G into the group GL(V). In this talk, I will give a brief introduction to some basic theorems about linear representations of finite groups, concentrating on the decomposition of a representation into irreducible sub-representations, and the definition and some nice properties of the character. At the end of the talk, I will re-prove Burnside's lemma from group theory using the representation theory approach.

Since I began learning this topic only very recently, and am thus an absolute novice myself, I invite all of you to the talk to help me learn the material through presenting it to others. If you are familiar with the topic and want to learn something new, my talk can easily be a disappointment.

Friday, February 13, 2009 - 15:00 ,
Location: Skiles 269 ,
Igor Belegradek ,
School of Mathematics, Georgia Tech ,
Organizer: John Etnyre

Comparison geometry studies Riemannian manifolds with a given curvature bound. This minicourse is an introduction to volume comparison (as developed by Bishop and Gromov), which is fundamental in understanding manifolds with a lower bound on Ricci curvature. Prerequisites are very modest: we only need basics of Riemannian geometry, and fluency with fundamental groups and metric spaces. In the first (2 hour) lecture I shall explain what volume comparison is and derive several applications.
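For reference, the volume comparison statement at the heart of the minicourse can be written as follows (this is the standard statement in my own notation, not a quote from the lectures):

```latex
% Bishop-Gromov volume comparison: if (M^n, g) is a complete Riemannian
% manifold with Ric >= (n-1) k g, and V_k(r) denotes the volume of an
% r-ball in the simply connected model space of constant curvature k,
% then for every point p in M:
r \longmapsto \frac{\operatorname{vol} B(p, r)}{V_k(r)}
\quad \text{is non-increasing; in particular } \operatorname{vol} B(p, r) \le V_k(r).
```

Letting r tend to 0 shows the ratio starts at 1, which is how the absolute upper bound in the second statement follows from the monotonicity in the first.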

Series: Probability Working Seminar

This term, the main topic for the Probability Working Seminar will be the coupling method, broadly understood. In the first talk, some basics on coupling will be discussed along with classical examples such as the ergodic theorem for Markov chains.
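A minimal sketch of the classical coupling argument for a finite Markov chain, with a chain and names invented purely for illustration: run two copies from different starting states until they meet, after which they move together; the tail of the coupling time then bounds the total-variation distance to stationarity.

```python
import random

def sample(row, u):
    """Sample a state from a probability row using the uniform u."""
    acc = 0.0
    for state, prob in enumerate(row):
        acc += prob
        if u < acc:
            return state
    return len(row) - 1

def coupled_step(x, y, P, rng):
    """One step of an independent coupling of two copies of the chain:
    each copy moves with its own randomness until they meet, after
    which they move together (and so agree forever)."""
    x2 = sample(P[x], rng.random())
    y2 = x2 if x == y else sample(P[y], rng.random())
    return x2, y2

# A toy 3-state chain (my own example, not from the seminar)
P = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
rng = random.Random(0)
x, y, t = 0, 2, 0
while x != y:
    x, y = coupled_step(x, y, P, rng)
    t += 1
# P(X_t != Y_t) bounds the total-variation distance to stationarity
print("coupled after", t, "steps")
```

The coupling inequality — that the distance to stationarity at time t is at most the probability the copies have not yet met — is exactly the mechanism behind the ergodic theorem proof mentioned above.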

Series: Geometry Topology Seminar

I will discuss a couple of applications of transverse knot theory to the classification of contact structures and braid theory. In particular I will make the statement "transverse knots classify contact structures" precise and then prove it (if we have time). I will also discuss progress on two of Orevkov's questions concerning quasi-positive knots that have implications for Hilbert's 16th problem.

Series: CDSNS Colloquium

Permutation entropy was introduced as a complexity measure of time series. Formally, it replaces the symbol blocks in the definition of Shannon entropy by so-called ordinal patterns, a digest of the ups and downs along a finite orbit in a totally ordered state space. Later, this concept was extended to self-maps of n-dimensional intervals, in metric and topological versions. It can be proven that, under some assumptions, the metric and topological permutation entropy coincide with their conventional counterparts. Besides its use as an entropy estimator, permutation entropy has found some interesting applications. We will talk about the detection of determinism in noisy time series, and the recovery of the control parameter from the symbolic sequences of a unimodal map (which allows one to cryptanalyze some chaotic ciphers).
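The definition is easy to make concrete. Below is a minimal Python sketch of my own (using the usual normalization by the log of the number of possible patterns): each length-`order` window is encoded by the permutation that sorts it, and the entropy of the resulting pattern distribution is computed.

```python
import math
from collections import Counter

def permutation_entropy(series, order):
    """Normalized Shannon entropy of the ordinal patterns of length
    `order`: each window is encoded by the permutation that sorts it."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: series[i + k]))
        for i in range(len(series) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

# A monotone series realizes a single pattern, so its entropy is 0.
# A chaotic logistic-map orbit uses many patterns, but some ordinal
# patterns are forbidden (strictly decreasing triples cannot occur at
# r = 4), so the normalized entropy stays strictly below 1.
x, orbit = 0.1, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)
print(permutation_entropy(list(range(100)), 3))
print(permutation_entropy(orbit, 3))
```

The forbidden patterns are precisely what makes permutation entropy useful for detecting determinism: white noise realizes all patterns with equal frequency, while a deterministic map cannot.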

Series: PDE Seminar

We study generalized traveling front solutions of reaction-diffusion equations modeling flame propagation in combustible media. Although the case of periodic media has been studied extensively, until very recently little has been known for general disordered media. In this talk we will address questions of existence, uniqueness, and stability of traveling fronts in this framework.

Series: Research Horizons Seminar

I will give a modern bijective proof of Kirchhoff's classical theorem relating the number of spanning trees in a graph to the Laplacian matrix of the graph. The proof will highlight some analogies between graph theory and algebraic geometry.
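Kirchhoff's theorem is short enough to compute with directly. The following pure-Python sketch (my own illustration, using exact rational arithmetic; it is not the bijective proof from the talk) counts spanning trees as a cofactor of the graph Laplacian:

```python
from fractions import Fraction

def spanning_tree_count(edges, n):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees of
    a graph on vertices 0..n-1 equals any cofactor of its Laplacian."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    # delete the last row and column, then take the determinant exactly
    m = n - 1
    M = [row[:m] for row in L[:m]]
    det = Fraction(1)
    for i in range(m):
        piv = next((r for r in range(i, m) if M[r][i] != 0), None)
        if piv is None:
            return 0  # singular minor: the graph is disconnected
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, m):
            f = M[r][i] / M[i][i]
            for c in range(i, m):
                M[r][c] -= f * M[i][c]
    return int(det)

# The complete graph K4 has 4^(4-2) = 16 spanning trees (Cayley's formula)
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(spanning_tree_count(k4, 4))  # 16
```

Using `Fraction` keeps the elimination exact, so the final determinant is the true integer count rather than a floating-point approximation.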

Series: ACO Student Seminar

In this talk I will give an introduction to the Markov chain Monte Carlo method, which uses Markov chains to sample interesting combinatorial objects such as proper colorings, independent sets and perfect matchings of a graph. I will introduce methods such as coupling and canonical paths, which have been widely used to analyze how many steps a Markov chain needs to run (the mixing time) in order to produce a sufficiently random combinatorial object. I will also give a brief survey of some recent results on the sampling of colorings.
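A minimal sketch of the kind of chain such methods analyze, for proper colorings (Glauber dynamics); the graph and parameters below are toy choices of mine, not examples from the talk. Each step re-randomizes one vertex among the colors its neighbors do not use, so the chain never leaves the set of proper colorings.

```python
import random

def glauber_coloring_step(coloring, adj, q, rng):
    """One step of Glauber dynamics on proper q-colorings: pick a vertex
    uniformly at random, then give it a uniformly random color that no
    neighbor currently uses (so properness is preserved)."""
    v = rng.randrange(len(coloring))
    allowed = [c for c in range(q)
               if all(coloring[u] != c for u in adj[v])]
    coloring[v] = rng.choice(allowed)

# Toy instance: the 5-cycle with q = 4 colors; q >= max degree + 2
# guarantees every vertex always has an allowed color, making the
# chain well defined and irreducible.
adj = {0: [4, 1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = [0, 1, 0, 1, 2]  # any proper coloring works as a start
rng = random.Random(7)
for _ in range(1000):
    glauber_coloring_step(coloring, adj, 4, rng)
print(coloring)  # a random proper 4-coloring of the 5-cycle
```

The hard question — how many such steps are needed before the coloring is close to uniform — is exactly what coupling and canonical-path arguments answer.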

Series: School of Mathematics Colloquium

Molecular topology is an application of graph theory to fields like chemistry, biology and pharmacology, in which the molecular structure matters. Its scope is the topological characterization of molecules by means of numerical invariants, called topological indices, which are the main ingredient of the molecular topological models. These models have been instrumental in the discovery of new applications of naturally occurring molecules, as well as in the design of synthetic molecules with specific chemical, biological or pharmacological properties. The talk will focus on pharmacological applications.

Series: Graph Theory Seminar

Tiling problems are among the oldest problems in all of mathematics, and they have attracted the attention of many famous mathematicians; even one of Hilbert's problems is devoted to the topic. The interest in tilings by unit cubes originated with a conjecture raised by Minkowski in 1908. In this lecture we will discuss the conjecture, and other closely related problems.

Series: Stochastics Seminar

We explore the connection between scenery reconstruction and optimal alignments. We present some new algorithms which work in practice, and not just in theory, to solve the scenery reconstruction problem.

Series: SIAM Student Seminar

In this introductory talk, I am going to derive the basic governing equations of fluid dynamics. Our assumptions are the three physical principles: conservation of mass, Newton's second law, and conservation of energy. The main objective is to present the Euler equations (which characterize inviscid flow) and the Navier-Stokes equations (which characterize viscous flow).
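For reference, the incompressible forms of the two systems named above can be written compactly as follows (u velocity, p pressure, \rho constant density, \nu kinematic viscosity; my notation, while the talk derives the general compressible equations from the three conservation principles):

```latex
% Incompressible Euler (inviscid) and Navier-Stokes (viscous) equations:
\begin{aligned}
\text{Euler:}\qquad
  & \partial_t u + (u \cdot \nabla) u = -\tfrac{1}{\rho} \nabla p,
  && \nabla \cdot u = 0, \\
\text{Navier-Stokes:}\qquad
  & \partial_t u + (u \cdot \nabla) u = -\tfrac{1}{\rho} \nabla p + \nu \Delta u,
  && \nabla \cdot u = 0.
\end{aligned}
```

The only difference is the viscous term \nu \Delta u, which is exactly what drops out when the fluid is assumed inviscid.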

Series: Other Talks

Comparison geometry studies Riemannian manifolds with a given curvature bound. This minicourse is an introduction to volume comparison (as developed by Bishop and Gromov), which is fundamental in understanding manifolds with a lower bound on Ricci curvature. Prerequisites are very modest: we only need basics of Riemannian geometry, and fluency with fundamental groups and metric spaces. The second (2 hour) lecture is about Gromov-Hausdorff convergence, which provides a natural framework to studying degenerations of Riemannian metrics.

Friday, February 20, 2009 - 15:00 ,
Location: Skiles 269 ,
Igor Belegradek ,
Ga Tech ,
Organizer: John Etnyre
Comparison geometry studies Riemannian manifolds with a given curvature bound. This minicourse is an introduction to volume comparison (as developed by Bishop and Gromov), which is fundamental in understanding manifolds with a lower bound on Ricci curvature. Prerequisites are very modest: we only need basics of Riemannian geometry, and fluency with fundamental groups and metric spaces. The second (2 hour) lecture is about Gromov-Hausdorff convergence, which provides a natural framework to studying degenerations of Riemannian metrics.

Series: Combinatorics Seminar

In this work (joint with Derrick Hart), we show that there exists a constant c > 0 such that the following holds for all n sufficiently large: if S is a set of n monic polynomials over C[x], and the product set S.S = {fg : f,g in S} has size at most n^(1+c), then the sumset S+S = {f+g : f,g in S} has size \Omega(n^2). There is a related result due to Mei-Chu Chang, which says that if S is a set of n complex numbers, and |S.S| < n^(1+c), then |S+S| > n^(2-f(c)), where f(c) -> 0 as c -> 0; but there currently is no result (other than the one due to myself and Hart) giving a lower bound of the quality >> n^2 for |S+S| for a fixed value of c. Our proof combines combinatorial and algebraic methods.

Series: Probability Working Seminar

The talk is based on a paper by Kuksin, Pyatnickiy, and Shirikyan. In this paper, the convergence to a stationary distribution is established by partial coupling. Here, only finitely many coordinates in the (infinite-dimensional) phase space participate in the coupling while the dynamics takes care of the other coordinates.

Monday, February 23, 2009 - 13:00 ,
Location: Skiles 255 ,
Tiejun Li ,
Peking University ,
Organizer: Haomin Zhou

The tau-leaping algorithm was proposed by D.T. Gillespie in 2001 to accelerate the simulation of chemical reaction systems. It is faster than the traditional stochastic simulation algorithm (SSA), which is an exact simulation algorithm. In this lecture, I will give an overview of some recent mathematical results on tau-leaping obtained by our group, including rigorous analysis, construction of a new algorithm, and a systematic analysis of the error.
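To fix ideas, here is a hedged sketch of the basic explicit tau-leaping step on a made-up birth-death system; this is my own toy illustration, not the group's algorithms. Instead of simulating every individual reaction event as SSA does, each leap fires a Poisson number of each reaction over a window of length tau.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplication method; fine for the small means here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def tau_leap(x, rates, stoich, tau, steps, rng):
    """Explicit tau-leaping: per window of length tau, fire a Poisson
    number of each reaction (mean = propensity * tau) all at once."""
    for _ in range(steps):
        fires = [poisson(r(x) * tau, rng) for r in rates]
        for k, count in enumerate(fires):
            for species, change in stoich[k]:
                x[species] += change * count
        x = [max(v, 0) for v in x]  # crude guard against negative counts
    return x

# Hypothetical birth-death system: 0 -> S at rate 5, S -> 0 at rate 0.1*S
rates = [lambda x: 5.0, lambda x: 0.1 * x[0]]
stoich = [[(0, +1)], [(0, -1)]]
rng = random.Random(3)
x = tau_leap([0], rates, stoich, tau=0.1, steps=2000, rng=rng)
print(x[0])  # fluctuates around the steady-state mean 5 / 0.1 = 50
```

The crude non-negativity guard hints at one of the real mathematical issues: naive tau-leaping can drive species counts negative, and handling this correctly is part of what rigorous analyses and modified algorithms address.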

Series: Geometry Topology Seminar

A cubic graph is a graph with all vertices of valency 3. We will show how to assign two numerical invariants to a cubic graph: its spectral radius, and a number field. These invariants appear in asymptotics of classical spin networks, and are notoriously hard to compute. They are known for the Theta graph and the Tetrahedron, but are already unknown for the Cube and the K_{3,3} graph. This is joint work with Roland van der Veen: arXiv:0902.3113.

Series: Analysis Seminar

Euler's beta (and gamma) integral and the associated orthogonal polynomials lie at the core of much of the theory of special functions, and many generalizations have been studied, including multivariate analogues (the Selberg integral; also work of Dixon and Varchenko), q-analogues (Askey-Wilson, Nasrallah-Rahman), and both (work of Milne-Lilly and Gustafson; Macdonald and Koornwinder for orthogonal polynomials). (Among these are the more tractable sums arising in random matrices/tilings/etc.) In 2000, van Diejen and Spiridonov conjectured a further generalization of the Selberg integral, going beyond q to the elliptic level (replacing q by a point on an elliptic curve). I'll discuss two proofs of their conjecture, and the corresponding elliptic analogue of the Macdonald and Koornwinder orthogonal polynomials. In addition, I'll discuss a further generalization of the elliptic Selberg integral with a (partial) symmetry under the exceptional Weyl group E_8, and its relation to Sakai's elliptic Painlevé equation.

Series: CDSNS Colloquium

A plasma is a completely ionized gas. In many applications, such as nuclear fusion or astrophysical phenomena, the plasma has very high temperature and low density, so collisions can be ignored. The standard kinetic models for a collisionless plasma are the Vlasov-Maxwell and Vlasov-Poisson systems. The Vlasov-Poisson system is also used to model galaxy dynamics, where a star plays the role of a particle. There exist infinitely many equilibria for Vlasov models, and their stability is a very important issue in physics. I will describe some of my work on the stability and instability of various Vlasov equilibria.

Series: PDE Seminar

The problem of understanding the parabolic hull of Brownian motion arises in two different fields. In mathematical physics this is the Burgers-Hopf caricature of turbulence (very interesting, even if not entirely turbulent). In statistics, the limit distribution we study was first considered by Chernoff, and forms the cornerstone of a large class of limit theorems that have now come to be called 'cube-root-asymptotics'. It was in the statistical context that the problem was first solved completely in the mid-80s by Groeneboom in a tour de force of hard analysis. We consider another approach to his solution motivated by recent work on stochastic coalescence (especially work of Duchon, Bertoin, and my joint work with Bob Pego). The virtues of this approach are simplicity, generality, and the appearance of a completely unexpected Lax pair. If time permits, I will also indicate some tantalizing links of this approach with random matrices. This work forms part of my student Ravi Srinivasan's dissertation.

Series: Mathematical Biology Seminar

The expression dynamics of interacting genes depends on the topology of the regulatory network, the quantitative nature of feedbacks and interactions between DNA, RNA and proteins, and the biochemical state of the intracellular and surrounding environment. In this talk we show that the dynamics of a gene regulatory network can also depend sensitively on the copy number of genes and promoters. Genetic regulatory networks include an overrepresentation of subgraphs commonly known as network motifs. We consider positive feedback, bistable feedback, and toggle switch motifs and show that variation in gene copy number can cause a sequence of saddle-node bifurcations in the corresponding differential equations models, which leads to multiple orders of magnitude change in gene expression. A similar analysis of a 3-gene motif with successive inhibition (the ``repressilator'') reveals that changes in gene copy number can also cause a Hopf bifurcation, thus leading to a qualitative switch in system behavior between oscillatory and equilibrium dynamics. Importantly, we show that these bifurcations exist over a wide range of parameter values, thus reinforcing our claim that copy number is a key control parameter in the expression dynamics of regulatory networks.
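A hypothetical illustration of the saddle-node mechanism (a textbook Hill-type positive feedback motif, not the speaker's model): with copy number N in dp/dt = N*beta*p^2/(K^2 + p^2) - gamma*p, a pair of nonzero equilibria appears once N crosses 2*gamma*K/beta, so a small copy-number change can create a high-expression state.

```python
def positive_equilibria(N, beta=1.0, K=1.0, gamma=0.4, pmax=20.0, steps=40000):
    """Count the positive equilibria of dp/dt = N*beta*p^2/(K^2+p^2) - gamma*p
    by sign changes of the right-hand side on a fine grid (p = 0 is always
    an equilibrium and is excluded here)."""
    def f(p):
        return N * beta * p * p / (K * K + p * p) - gamma * p
    roots, prev = 0, f(pmax / steps)
    for i in range(2, steps + 1):
        cur = f(i * pmax / steps)
        if prev * cur < 0:
            roots += 1
        prev = cur
    return roots

# Below N = 2*gamma*K/beta = 0.8 only p = 0 survives; above it, a stable
# high-expression state and an unstable threshold state appear together.
```

The jump from zero to two positive equilibria as N increases is the saddle-node bifurcation driving the orders-of-magnitude expression change described above, here in a deliberately simplified one-gene caricature.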

Series: Research Horizons Seminar

I'll give a brief introduction to Quantum Statistical Mechanics in the case of systems of Fermions (e.g. electrons) and try to show that a lot of the mathematical problems can be framed in terms of counting (Feynman) graphs or estimating large determinants.

Series: ACO Student Seminar

In this talk, I will introduce the class of logconcave and s-concave functions, illustrate their properties, and explain their connections to convex geometry. I will present a simple and unified approach for proving probabilistic inequalities for logconcave and s-concave densities on the real line. Lastly, I will use these techniques to prove two important theorems in convex geometry: Grunbaum's theorem, that every halfspace cut through the centroid of a convex body contains at least a 1/e fraction of its volume, and the Milman-Pajor inequality, that a convex body in R^n is sandwiched between its inertial ellipsoid and a factor-n scaling of it. Joint work with Santosh Vempala.

Series: School of Mathematics Colloquium

The study of partition identities has a long history going back to Euler, with applications ranging from Analysis to Number Theory, from Enumerative Combinatorics to Probability. Partition bijections form a combinatorial approach which often gives the shortest and most elegant proofs of these identities. These bijections are then often used to generalize the identities, find "hidden symmetries", etc. In the talk I will present a modern approach to partition bijections based on the geometry of random partitions and complexity ideas.

Series: Stochastics Seminar

Last week we saw combinatorial reconstruction. This time we are going to explain a new approach to Scenery Reconstruction. This new approach could allow us to prove that being able to distinguish sceneries implies reconstructability.

Series: SIAM Student Seminar

This talk will follow Peter Lax's treatment of the linear-algebraic facts about the index of Fredholm operators, such as the product formula and stability, all of which are totally elementary.

Series: Combinatorics Seminar

We show that for all \ell and \epsilon>0 there is a constant c=c(\ell,\epsilon)>0 such that every \ell-coloring of the triples of an N-element set contains a subset S of size c\sqrt{\log N} such that at least a 1-\epsilon fraction of the triples of S have the same color. This result is tight up to the constant c and answers an open question of Erd\H{o}s and Hajnal from 1989 on discrepancy in hypergraphs. For \ell \geq 4 colors, it is known that there is an \ell-coloring of the triples of an N-element set whose largest monochromatic subset has cardinality only \Theta(\log \log N). Thus, our result demonstrates that the maximum almost monochromatic subset that an \ell-coloring of the triples must contain is much larger than the corresponding monochromatic subset. This is in striking contrast with graphs, where these two quantities have the same order of magnitude. To prove our result, we obtain a new upper bound on the \ell-color Ramsey numbers of complete multipartite 3-uniform hypergraphs, which answers another open question of Erd\H{o}s and Hajnal. (This is joint work with D. Conlon and J. Fox.)

Series: Other Talks

Comparison geometry studies Riemannian manifolds with a given curvature bound. This minicourse is an introduction to volume comparison (as developed by Bishop and Gromov), which is fundamental in understanding manifolds with a lower bound on Ricci curvature. Prerequisites are very modest: we only need basics of Riemannian geometry, and fluency with fundamental groups and metric spaces. In the third (2 hour) lecture I shall prove volume and Laplacian comparison theorems.

Series: Probability Working Seminar

This is a continuation of last week's seminar. The talk is based on a paper by Kuksin, Pyatnickiy, and Shirikyan. In this paper, the convergence to a stationary distribution is established by partial coupling. Here, only finitely many coordinates in the (infinite-dimensional) phase space participate in the coupling while the dynamics takes care of the other coordinates.

Friday, February 27, 2009 - 15:05 ,
Location: Skiles 269 ,
Igor Belegradek ,
Ga Tech ,
Organizer: John Etnyre
Comparison geometry studies Riemannian manifolds with a given curvature bound. This minicourse is an introduction to volume comparison (as developed by Bishop and Gromov), which is fundamental in understanding manifolds with a lower bound on Ricci curvature. Prerequisites are very modest: we only need basics of Riemannian geometry, and fluency with fundamental groups and metric spaces. In the third (2 hour) lecture I shall prove volume and Laplacian comparison theorems.

Series: Geometry Topology Seminar

We introduce a construction of an immersed surface for a null-homologous braid in an annulus open book decomposition. This construction is inspired by the so-called Bennequin surface for a braid in R^3. By resolving the singularities of the immersed surface, we obtain an embedded Seifert surface for the braid. We then compute a self-linking number formula using this embedded surface and observe that the Bennequin inequality is satisfied if and only if the contact structure is tight. We also prove that our self-linking formula is invariant (changes by 2) under a positive (negative) braid stabilization which preserves (changes) the transverse knot class.

Series: Algebra Seminar

Starting with some classical hypergeometric functions, we explain how to derive their classical univariate differential equations. A severe change of coordinates transforms this ODE into a system of PDE's that has nice geometric aspects. This type of system, called A-hypergeometric, was introduced by Gelfand, Graev, Kapranov and Zelevinsky in about 1985. We explain some basic questions regarding these systems. These are addressed through homology, combinatorics, and toric geometry.

Series: PDE Seminar

In this talk I will describe recent work with C. N. Moore on the two-phase Stefan problem with a degenerate zone. We start with local solutions (no reference to initial or boundary data) and then obtain intrinsic energy estimates, which will in turn lead to the continuity of the temperature. We then show existence and uniqueness of solutions with signed measures as data. The uniqueness problem with signed measure data has been open for some 30 years for any degenerate parabolic equation.

Series: Mathematical Biology Seminar

We consider a class of age-structured population models in which the central modeling assumption is simply that the birth rate depends on the size of the adult population. For the most part, we in fact assume that the birth rate is a monotone non-decreasing function of the adult population size. Despite the simplicity of our modeling assumptions (or perhaps because of it), we will see that this class of models admits a wide variety of solutions (exponentially growing, linearly growing, periodic, etc.). Much of the analysis of these models can be carried out using elementary techniques, and we present some specific examples in which explicit solutions (which are elementary functions) can be generated. We also consider some questions related to the inverse problem for these models.

Series: ACO Student Seminar

Scarf's lemma is one of the fundamental results in combinatorics, originally introduced to study the core of an N-person game. Over the last four decades, the usefulness of Scarf's lemma has been demonstrated in several important combinatorial problems seeking stable solutions. However, the complexity of the computational version of Scarf's lemma (Scarf) remained open. In this talk, I will prove that Scarf is complete for the complexity class PPAD. This shows that Scarf is as hard as the computational versions of Brouwer's fixed point theorem and Sperner's lemma. Hence, there is no polynomial-time algorithm for Scarf unless PPAD \subseteq P. I will also talk about the fractional stable paths problem, finding fractional kernels in digraphs, finding fractional stable matchings in hypergraphic preference systems, and finding the core of an N-person balanced game with non-transferable utilities. I will show the connection between these problems through Scarf's lemma and address their complexity.

Series: School of Mathematics Colloquium

This is joint work with Andrei Okounkov. The ``honeycomb dimer model'' is a natural model of discrete random surfaces in R^3. It is possible to write down a ``Law of Large Numbers" for such surfaces which describes the typical shape of a random surface when the mesh size tends to zero. Surprisingly, one can parameterize these limit shapes in a very simple way using analytic functions, somewhat reminiscent of the Weierstrass parameterization of minimal surfaces. This is even more surprising since the limit shapes tend to be facetted, that is, only piecewise analytic. There is a large family of boundary conditions for which we can obtain exact solutions to the limit shape problem using algebraic geometry techniques. This family includes the (well-known) solution to the limit shape of a ``boxed plane partition'' and has many generalizations.

Series: Stochastics Seminar

A shot noise process is essentially a compound Poisson process whereby the arriving shots are allowed to accumulate or decay after their arrival via some preset shot (impulse response) function. Shot noise models see applications in diverse areas such as insurance, finance, hydrology, textile engineering, and electronics. This talk studies several statistical inference issues for shot noise processes. Under mild conditions, ergodicity is proven in that process sample paths satisfy a strong law of large numbers and central limit theorem. These results have application in storage modeling. Shot function parameter estimation from a data history observed on a discrete-time lattice is then explored. Optimal estimating functions are tractable when the shot function satisfies a so-called interval similar condition. Moment methods of estimation are easily applicable if the shot function is compactly supported and show good performance. In all cases, asymptotic normality of the proposed estimators is established.
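A minimal simulation sketch, assuming an exponentially decaying shot function h(u) = e^{-u} with unit shot amplitudes (the abstract does not fix a particular shot function): Campbell's formula gives E[X(t)] = rate * integral of h, so the empirical average of sampled values should settle near the arrival rate here, in line with the strong law of large numbers quoted above.

```python
import math
import random

def poisson_arrivals(rate, t_end, rng):
    """Arrival times of a Poisson process on [0, t_end],
    built from exponential inter-arrival gaps."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            return times
        times.append(t)

def shot_noise(t, arrivals, shot=lambda u: math.exp(-u)):
    """X(t) = sum of shot(t - t_i) over arrival times t_i <= t."""
    return sum(shot(t - ti) for ti in arrivals if ti <= t)

rng = random.Random(1)
rate, t_end = 2.0, 2000.0
arrivals = poisson_arrivals(rate, t_end, rng)
samples = [shot_noise(t, arrivals) for t in range(50, 2000, 50)]
# Campbell: E[X] = rate * integral_0^inf e^{-u} du = rate = 2.0
```

Sampling on the lattice t = 50, 100, ... mirrors the discrete-time data histories used for the parameter estimation discussed in the talk.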

Series: SIAM Student Seminar

In this talk, I will briefly introduce some basics of mathematical learning theory. Two basic methods, the perceptron algorithm and the support vector machine, will be explained for the separable classification case. Also, subgaussian random variables and the Hoeffding inequality will be mentioned in order to provide an upper bound for the deviation of the empirical risk. If time permits, Vapnik's combinatorics will be involved for sharper bounds on this deviation.
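For the perceptron half of the story, a minimal self-contained sketch of the classical mistake-driven update on separable data (the data points here are made up; the bias is folded into the weight vector as a constant feature):

```python
def perceptron(points, labels, epochs=1000):
    """Train weights w on separable data: on each mistake, set w <- w + y*x.
    Each point is augmented with a constant 1 so the bias is part of w."""
    dim = len(points[0]) + 1
    w = [0.0] * dim
    for _ in range(epochs):
        mistakes = 0
        for pt, y in zip(points, labels):
            x = list(pt) + [1.0]
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:
            break          # converged: every point is correctly classified
    return w

pts = [(2.0, 1.0), (3.0, 2.0), (-1.0, -2.0), (-2.0, -1.0)]
ys = [1, 1, -1, -1]
w = perceptron(pts, ys)
```

On separable data the loop terminates after finitely many mistakes (the perceptron convergence theorem), with a margin-dependent bound of the kind the talk's deviation inequalities refine.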

Series: Probability Working Seminar

The talk is based on the paper titled "Anosov diffeomorphisms and coupling" by Bressaud and Liverani. Existence and uniqueness of SRB invariant measure for the dynamics is established via a coupling of initial conditions introduced to dynamics by L.-S. Young.

Series: Combinatorics Seminar

This is joint work with Alex Postnikov. Imagine that you are to build a system of space stations (graph vertices), which communicate via laser beams (edges). The edge directions were already chosen, but you must place the stations so that none of the beams miss their targets. In this talk, we let the edge directions be generic and independent, a choice that constrains vertex placement the most. For K_{3} in \mathbb{R}^{2}, the edges specify a unique triangle, but its size is arbitrary --- D_{2}(K_{3})=1 degree of freedom; we say that K_{3} is rigid in \mathbb{R}^{2}. We call D_{n}(G) the degree of parallel rigidity of the graph for generic edge directions. We found an elegant combinatorial characterization of D_{n}(G) --- it is equal to the minimal number of edges in the intersection of n spanning trees of G. In this talk, I will give a linear-algebraic proof of this result, and of some other properties of D_{n}(G). The notion of parallel graph rigidity was previously studied by Whiteley and Develin-Martin-Reiner. The papers worked with the generic parallel rigidity matroid; I will briefly compare our results in terms of D_{n}(G) with the previous work.

Series: Geometry Topology Seminar

Given any contact 3-manifold, Etnyre and Ozbagci defined new invariants of the contact structures in terms of open book decompositions supporting the contact structure. One of the invariants is the support genus of the contact structure which is defined as the minimal genus of a page of an open book that supports the contact structure. In a similar fashion, we define the support genus sg(L) of a Legendrian knot L in a contact manifold M as the minimal genus of a page of an open book of M supporting the contact structure such that L sits on a page and the framing given by the contact structure and by the page agree. In this talk, we will discuss the support genus of Legendrian knots in contact 3-manifolds. We will show any null-homologous loose knot has support genus zero. To prove this, we observe an interesting topological property of knots and links on the way. We observe any topological knot or link in a 3-manifold sits on a planar page (genus zero page) of an open book decomposition.

Monday, March 9, 2009 - 13:05 ,
Location: Skiles 255 ,
Zhi J. Wang ,
Aerospace Engineering, Iowa State University ,
Organizer: Yingjie Liu

The current breakthrough in computational fluid dynamics (CFD) is the emergence of unstructured-grid-based high-order (order > 2) methods. The leader is arguably the discontinuous Galerkin (DG) method, among several other methods including the multi-domain spectral, spectral volume (SV), and spectral difference (SD) methods. All these methods possess the following properties: k-exactness on arbitrary grids, and compactness, which is especially important for parallel computing. In this talk, recent progress in the DG, SV, SD and a unified formulation called lifting collocation penalty will be presented. Numerical simulations with the SV and SD methods will be presented. The talk will conclude with several remaining challenges in the research on high-order methods.

Series: CDSNS Colloquium

The Cohen-Gallavotti Fluctuation theorem is a result describing the behaviour of simple hyperbolic dynamical systems. It was introduced to illustrate, in a somewhat simpler context, anomalies in the second law of thermodynamics. I will describe the mathematical formulation of this Fluctuation Theorem, and some variations on it.

Series: PDE Seminar

We consider the following fourth-order degenerate parabolic equation: h_t + (h h_xxx)_x = 0. The equation arises in the lubrication approximation regime, describing the spreading of a thin liquid film with height profile h >= 0 on a plate. We consider the equation as a free boundary problem, defined on its positivity set. We address existence and regularity of classical solutions in weighted Hölder and Sobolev spaces.

Series: Mathematical Biology Seminar

Human Mobility in our globalised world has reached a complexity and volume of unprecedented degree. More than 60 million people travel billions of kilometres on more than 2 million international flights each week. Hundreds of millions of people commute on a complex web of highways and railroads most of which operate at their maximum capacity. Human mobility is responsible for the geographical spread of emergent human infectious diseases and plays a key role in human mediated bioinvasion, the dominant factor in the global biodiversity crisis. I will report on the recent discovery of scaling laws in global human traffic (obtained from online bill-tracking at www.wheresgeorge.com) and mathematical models that can account for it. I will present a complex network perspective on multi-scale human traffic networks, report on their statistical properties and show that they can be used to identify geographically coherent communities that only vaguely resemble administrative ones. The approach provides an operational segmentation of maps into a hierarchical set of effective regions and boundaries based on human behavior. I will briefly talk about European transportation networks, geocaching and trackable items.

Series: Combinatorics Seminar

Tolerance graphs were introduced in 1982 by Golumbic and Monma as a generalization of the class of interval graphs. A graph G= (V, E) is an interval graph if each vertex v \in V can be assigned a real interval I_v so that xy \in E(G) iff I_x \cap I_y \neq \emptyset. Interval graphs are important both because they arise in a variety of applications and also because some well-known recognition problems that are NP-complete for general graphs can be solved in polynomial time when restricted to the class of interval graphs. In certain applications it can be useful to allow a representation of a graph using real intervals in which there can be some overlap between the intervals assigned to non-adjacent vertices. This motivates the following definition: a graph G= (V, E) is a tolerance graph if each vertex v \in V can be assigned a real interval I_v and a positive tolerance t_v \in {\bf R} so that xy \in E(G) iff |I_x \cap I_y| \ge \min\{t_x,t_y\}. These topics can also be studied from the perspective of ordered sets, giving rise to the classes of Interval Orders and Tolerance Orders. In this talk we give an overview of work done in tolerance graphs and orders. We use hierarchy diagrams and geometric arguments as unifying themes.
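The tolerance adjacency rule in the definition is easy to make concrete. A small sketch with illustrative intervals (chosen here only for the example):

```python
def tolerance_adjacent(I_x, I_y, t_x, t_y):
    """Edge test for a tolerance representation: xy is an edge iff
    |I_x intersect I_y| >= min(t_x, t_y). Intervals are (lo, hi) pairs."""
    overlap = min(I_x[1], I_y[1]) - max(I_x[0], I_y[0])
    return max(overlap, 0.0) >= min(t_x, t_y)

# The same overlap of length 1 is an edge or a non-edge depending on tolerances:
small_tol = tolerance_adjacent((0, 3), (2, 5), 0.5, 2.0)   # edge
large_tol = tolerance_adjacent((0, 3), (2, 5), 2.0, 3.0)   # non-edge
```

Taking every tolerance smaller than any positive overlap recovers the interval-graph rule, which is why tolerance graphs generalize interval graphs.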

Series: Research Horizons Seminar

In this talk, I will present a brief introduction to the use of partial differential equation (PDE) and variational techniques (including techniques developed in computational fluid dynamics (CFD)) in wavelet transforms and applications in image processing. Two different approaches are used as examples. One is a PDE and variational framework for image reconstruction. The other is an adaptive ENO wavelet transform designed using ideas from Essentially Non-Oscillatory (ENO) schemes for numerical shock capturing.

Series: ACO Seminar

We consider the problem of a search engine trying to assign a sequence of search keywords to a set of competing bidders, each with a daily spending limit. The goal is to maximize the revenue generated by these keyword sales, bearing in mind that, as some bidders may eventually exceed their budget, not all keywords should be sold to the highest bidder. We assume that the sequence of keywords (or equivalently, of bids) is revealed on-line. Our concern will be the competitive ratio for this problem versus the off-line optimum.

We extend the current literature on this problem by considering the setting where the keywords arrive in a random order. In this setting we are able to achieve a competitive ratio of 1-\epsilon under some mild, but necessary, assumptions. In contrast, it is already known that when the keywords arrive in an adversarial order, the best competitive ratio is bounded away from 1. Our algorithm is motivated by PAC learning, and proceeds in two parts: a training phase, and an exploitation phase.

Joint work with Tom Hayes, UNM.
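For contrast with the two-phase algorithm described above, the naive baseline is the greedy rule that sells each arriving keyword to the highest bidder who can still afford it. A toy sketch with a made-up bid table (this is not the authors' algorithm):

```python
def greedy_revenue(keyword_order, bids, budgets):
    """Sell each arriving keyword to the highest bidder whose remaining
    budget covers the bid; return the total revenue collected."""
    remaining = dict(budgets)
    revenue = 0.0
    for kw in keyword_order:
        # bidders who can still pay their bid for this keyword
        afford = [(bids[b][kw], b) for b in remaining
                  if bids[b][kw] <= remaining[b]]
        if afford:
            bid, winner = max(afford)
            remaining[winner] -= bid
            revenue += bid
    return revenue

bids = {'A': {'k1': 2.0, 'k2': 2.0}, 'B': {'k1': 1.0, 'k2': 0.0}}
budgets = {'A': 2.0, 'B': 1.0}
```

On this instance greedy earns 2.0 if k1 arrives first but 3.0 (the offline optimum) if k2 arrives first, so its revenue depends on the arrival order; this sensitivity is exactly what separates the adversarial-order and random-order settings in the talk.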


Series: CDSNS Colloquium

Motivated by Smale's work on smooth dynamical systems, David Ruelle introduced the notion of Smale spaces. These are topological dynamical systems which are hyperbolic in the sense of having local coordinates of contracting and expanding directions. These include hyperbolic automorphisms of tori, but typically, the spaces involved have a fractal nature. An important subclass are the shifts of finite type, which are symbolic systems described by combinatorial data. These are also precisely the Smale spaces which are totally disconnected. Rufus Bowen showed that every Smale space is the image of a shift of finite type under a finite-to-one factor map. In the 1970's, Wolfgang Krieger introduced a beautiful invariant for shifts of finite type. The aim of this talk is to show how a refined version of Bowen's theorem may be used to extend Krieger's invariant to all (irreducible) Smale spaces. The talk will assume no prior knowledge of these topics - we begin with a discussion of Smale spaces and shifts of finite type and then move on to Krieger's invariant and its extension.

Series: Stochastics Seminar

We consider Markovian tandem queues with finite intermediate buffers and flexible servers and study how the servers should be assigned dynamically to stations in order to obtain optimal long-run average throughput. We assume that each server can work on only one job at a time, that several servers can work together on a single job, and that the travel times between stations are negligible. Under various server collaboration schemes, we characterize the optimal server assignment policy for these systems.

On the theory and applications of the longtime dynamics of 3-dimensional fluid flows on thin domains

Series: CDSNS Colloquium

The current theory of global attractors for the Navier-Stokes equations on thin 3D domains is motivated by the desire to better understand the theory of heat transfer in the oceans of the Earth. (In this context, the thinness refers to the aspect ratio - depth divided by expanse - of the oceans.) The issue of heat transfer is, of course, closely connected with many of the major questions concerning the climate. In order to exploit the tools of modern dynamical systems in this study, one needs to know that the global attractors are "good" in the sense that the nonlinearities are Frechet differentiable on these attractors. About 20 years ago, it was discovered that on certain thin 3D domains, the Navier-Stokes equations did possess good global attractors. This discovery, which was itself a major milestone in the study of the 3D Navier-Stokes equations, left open the matter of extending the theory to cover oceanic-like regions with the appropriate physical boundary behavior. In this lecture, we will review this theory, and the connections with climate modeling, while placing special emphasis on the recent developments for fluid flows with the Navier (or slip) boundary conditions.

Series: PDE Seminar

We discuss the global regularity vs. finite time breakdown in Eulerian dynamics, driven by different models of nonlinear forcing. Finite time breakdown depends on whether the initial configuration crosses intrinsic, O(1) critical thresholds (CT). Our approach is based on spectral dynamics, tracing the eigenvalues of the velocity gradient which determine the boundaries of CT surfaces in configuration space. We demonstrate this critical threshold phenomenon with several n-dimensional prototype models. For n=2 we show that when rotational forcing dominates the pressure, it prolongs the life-span of sub-critical 2D shallow-water solutions. We present a stability theory for vanishing viscosity solutions of the 2D nonlinear "pressureless" convection. We revisit the 3D restricted Euler and Euler-Poisson equations, and obtain a surprising global existence result for a large set of sub-critical initial data in the 4D case.

Series: Graph Theory Seminar

Many results in asymptotic extremal combinatorics are obtained using just a handful of instruments, such as induction and the Cauchy-Schwarz inequality. The ingenuity lies in combining these tools in just the right way. Recently, Razborov developed a flag calculus which captures many of the available techniques in pure form, and allows one, in particular, to computerize the search for the right combination. In this talk we outline the general approach and describe its application to the conjecture that a digraph with minimum outdegree n/3 contains a directed triangle. This special case of the Caccetta-Haggkvist conjecture has been extensively investigated in the past. We show that a digraph with minimum outdegree a*n contains a directed triangle for a = 0.3465. The proof builds on arguments used to establish previously known bounds, due to Shen from 1998 (a = 0.3542) and Hamburger, Haxell and Kostochka from 2008 (a = 0.3531). It consists of a combination of ~80 computer generated inequalities. Based on joint work with Jan Hladky and Daniel Kral.

Monday, March 23, 2009 - 13:00 ,
Location: Skiles 255 ,
Shigui Ruan ,
University of Miami ,
Organizer: Yingfei Yi

Understanding the seasonal/periodic recurrence of influenza will be very helpful in designing successful vaccine programs and introducing public health interventions. However, the reasons for seasonal/periodic influenza epidemics are still not clear, even though various explanations have been proposed. In this talk, we present an age-structured evolutionary epidemiological model of influenza A drift, in which the susceptible class is continually replenished because the pathogen changes genetically and immunologically from one epidemic to the next, causing previously immune hosts to become susceptible. Applying our recently established center manifold theory for semilinear equations with non-dense domain, we show that Hopf bifurcation occurs in the model. This demonstrates that the model has an intrinsic tendency to oscillate due to the evolutionary and/or immunological changes of the influenza viruses. (Based on joint work with Pierre Magal.)

Series: ACO Seminar

Low-rank models are frequently used in machine learning and statistics. An important domain of application is provided by collaborative filtering, whereby a low-rank matrix describes the ratings that a large set of users attribute to a large set of products. The problem is in this case to predict future ratings from a sparse subset currently available. The dataset released for the Netflix challenge provides an ideal testbed for theory and algorithms for learning low-rank matrices. Given M, a random n x n matrix of rank r, we assume that a uniformly random subset E of its entries is observed. We describe an efficient procedure that reconstructs M from |E| = O(rn) observed entries with arbitrarily small root mean square error, whenever M satisfies an appropriate incoherence condition. If r = O(1), the algorithm reconstructs M exactly from O(n log n) entries. This settles a recent open problem by Candes and Recht. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices. [Based on joint work with R. H. Keshavan and S. Oh]
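A hedged rank-one sketch of the spectral step behind such procedures (zero-fill the unobserved entries, rescale by 1/p, and take the top singular pair): pure-Python power iteration stands in for a proper SVD, and the trimming step of the full algorithm is omitted, so this is only a cartoon of the method.

```python
import random

def complete_rank1(n, observed, p, iters=300, rng=None):
    """Estimate a rank-1 matrix M from observed entries {(i, j): M_ij}.
    Zero-filling and dividing by the sampling rate p gives an unbiased
    estimate of M; its top singular pair then approximates M."""
    A = [[observed.get((i, j), 0.0) / p for j in range(n)] for i in range(n)]
    rng = rng or random.Random(0)
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):                      # power iteration on A^T A
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        w = [sum(A[i][j] * u[i] for i in range(n)) for j in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return [[u[i] * v[j] for j in range(n)] for i in range(n)]

n, p = 20, 0.9
rng = random.Random(1)
M = [[(i + 1) * (j + 1) for j in range(n)] for i in range(n)]   # rank-1 truth
observed = {(i, j): M[i][j] for i in range(n) for j in range(n)
            if rng.random() < p}
M_hat = complete_rank1(n, observed, p)
```

With full observation (p = 1) the reconstruction is exact up to floating point; with entries dropped at random it degrades gracefully, which is the behavior the spectral-norm bounds mentioned in the abstract quantify.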

Series: CDSNS Colloquium

In this talk we will review results in local entropy theory from the past 15 years, introduce current developments, and pose some open questions for further study.

Series: PDE Seminar

We will give an overview of results on the global existence of solutions to the initial value problem for nonlinear elastic and viscoelastic materials in 3d without boundary. Materials will be assumed to be isotropic, but both compressible and incompressible cases will be discussed. In the compressible case, a key null condition must be imposed to control nonlinear interactions of pressure waves. This necessary assumption is consistent with the physical model. Initial conditions are small perturbations of a stress-free reference state. Existence is proven using a fixed point argument which combines energy estimates with some new dispersive estimates.

Series: Other Talks

There is much impressive observational evidence, mainly from the cosmic microwave background (CMB), for an enormously hot and dense early stage of the universe --- referred to as the Big Bang. Observations of the CMB are now very detailed, but this very detail presents new puzzles of various kinds, one of the most blatant being an apparent paradox in relation to the second law of thermodynamics. The hypothesis of inflationary cosmology has long been argued to explain away some of these puzzles, but it does not resolve some key issues, including that raised by the second law. In this talk, I describe a quite different proposal, which posits a succession of universe aeons prior to our own. The expansion of the universe never reverses in this scheme, but the space-time geometry is nevertheless made consistent through a novel geometrical conception. Some very recent analysis of the CMB data, obtained with the WMAP satellite, will be described, this having a profound but tantalizing bearing on these issues.

Series: Mathematical Biology Seminar

The stress condition calls for adequate activity of key enzymatic systems of cellular defense. Massive protein destabilization and degradation occur in stressed cells. The rate of protein re-synthesis (DNA->RNA->protein) is inadequate to adapt to a rapidly changing environment; therefore, an alternative mechanism must exist to maintain sufficient activity of defense enzymes. Interestingly, more than 50% of enzymes consist of identical subunits which form a multimeric interface. Stabilization of multimers is important for enzymatic activity. We found that it can be achieved by the formation of inter-subunit covalent bridges in response to stress conditions. This shows an elegance of structural design that directs selective subunit linkage and increases an enzyme's robustness and a cell's chances of survival during stress. In contrast, modification of the amino acids involved in linkage leads to protein destabilization, unfolding and degradation. These results describe a new, instantaneous mechanism of structural adaptation that controls enzymatic systems under stress conditions.

Wednesday, March 25, 2009 - 13:00 ,
Location: Skiles 255 ,
Junping Wang ,
NSF ,
Organizer: Haomin Zhou

This talk will first review domain decomposition methods for second order elliptic equations, which should be accessible to graduate students. The second part of the talk will deal with possible extensions to the Stokes equation when discretized by finite element methods. In particular, we shall point out the difficulties in such a generalization, and then discuss ways to overcome the difficulties.
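
A one-dimensional sketch of the classical alternating Schwarz method conveys the flavor of domain decomposition for elliptic problems. The model problem, grid size and subdomains below are our own choices for illustration, not taken from the talk:

```python
import numpy as np

# Model problem: -u'' = 1 on (0,1), u(0) = u(1) = 0, exact solution u = x(1-x)/2,
# solved by alternating Schwarz iteration on two overlapping subdomains.
n = 101
x = np.linspace(0, 1, n)
h = x[1] - x[0]
f = np.ones(n)
u = np.zeros(n)

def solve_dirichlet(u, lo, hi):
    """Solve -u'' = f on grid points lo+1..hi-1, using u[lo], u[hi] as boundary data."""
    m = hi - lo - 1
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f[lo + 1:hi].copy()
    rhs[0] += u[lo] / h**2
    rhs[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, rhs)

for _ in range(30):              # alternate between the two overlapping halves
    solve_dirichlet(u, 0, 60)    # subdomain [0, 0.6]
    solve_dirichlet(u, 40, 100)  # subdomain [0.4, 1]

exact = x * (1 - x) / 2
print(np.max(np.abs(u - exact)) < 1e-4)  # True: geometric convergence with overlap
```

Since the exact solution is quadratic, the finite difference discretization is exact here, and the remaining error is purely the Schwarz iteration error, which contracts geometrically thanks to the overlap.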

Series: ACO Student Seminar

We will survey some old, some new, and some open problems in the area of efficient sampling. We will focus on sampling combinatorial structures (such as perfect matchings and independent sets) on regular lattices. These problems arise in statistical physics, where sampling objects on lattices can be used to determine many thermodynamic properties of simple physical systems. For example, perfect matchings on the Cartesian lattice, more commonly referred to as domino tilings of the chessboard, correspond to systems of diatomic molecules. But most importantly they are just cool problems with some beautiful solutions and a surprising number of unsolved challenges!
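
As a tiny taste of the counting behind these sampling problems: domino tilings of a 2 x n board satisfy the Fibonacci recurrence, since the leftmost column is covered either by one vertical domino or by two horizontal ones. This small check is our own illustration, not from the talk:

```python
def domino_tilings_2xn(n):
    """Number of domino tilings of a 2 x n board (Fibonacci recurrence)."""
    if n < 1:
        return 1  # the empty board has one (empty) tiling
    a, b = 1, 1   # tilings of the 2x0 and 2x1 boards
    for _ in range(n - 1):
        a, b = b, a + b
    return b

print([domino_tilings_2xn(n) for n in range(1, 9)])  # [1, 2, 3, 5, 8, 13, 21, 34]
```

Counting tilings of general regions (such as the full chessboard) requires heavier machinery, and uniform sampling is harder still, which is the subject of the talk.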

Series: Other Talks

Twistor theory is now over 45 years old. In December 1963, I proposed the initial ideas of this scheme, based on complex-number geometry, which presents an alternative perspective to that of standard 4-dimensional space-time, for the basic arena in which (quantum) physics takes place. Over the succeeding years, there were numerous intriguing developments. But many of these were primarily mathematical, and there was little interest expressed by the physics community. Things changed rather dramatically in December 2003, when E. Witten produced a 99-page article initiating the subject of “twistor-string theory”, thus providing a novel approach to high-energy scattering processes. In this talk, I shall provide an account of the original geometrical and physical ideas, and also outline various recent developments, some of which may help our understanding of the seeming paradoxes of quantum mechanics.

Series: School of Mathematics Colloquium

A new estimate on weak solutions of the Navier-Stokes equations in three dimensions gives some information about the partial regularity of solutions. In particular, if energy concentration takes place, the dimension of the microlocal singular set cannot be too small. This estimate has a dynamical systems proof. These results are joint work with M. Arnold and A. Biryuk.

Series: SIAM Student Seminar

This talk is based on the paper of Dr. Christian Houdre and Trevis Litherland. Let X_1, X_2, ..., X_n be a sequence of iid random variables drawn uniformly from a finite ordered alphabet (a_1, ..., a_m), where a_1 < a_2 < ... < a_m. Let LI_n be the length of the longest increasing subsequence of X_1, X_2, ..., X_n. We'll express the limiting distribution of LI_n as a functional of (m-1)-dimensional Brownian motion. This is an elementary case taken from the paper.
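
For readers who want to simulate LI_n, it is computable in O(n log n) time by patience sorting; for a finite alphabet "increasing" is usually taken in the weak (nondecreasing) sense, which the bisect_right call below implements. A sketch under those assumptions (alphabet and sample size are ours):

```python
import bisect
import random

def li_n(xs):
    """Length of the longest nondecreasing subsequence, via patience sorting."""
    tails = []  # tails[k] = smallest possible last element of a length-(k+1) subsequence
    for x in xs:
        i = bisect.bisect_right(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

random.seed(0)
sample = [random.choice("abc") for _ in range(1000)]  # iid uniform, ordered 3-letter alphabet
print(li_n(sample))  # fluctuates around n/m = 1000/3 on the sqrt(n) scale
```

The limit theorem in the talk describes exactly these fluctuations of li_n around n/m.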

Series: Combinatorics Seminar

Choose a graph uniformly at random from all d-regular graphs on n vertices. We determine the chromatic number of the graph for about half of all values of d, asymptotically almost surely (a.a.s.) as n grows large. Letting k be the smallest integer satisfying d < 2(k-1)\log(k-1), we show that a random d-regular graph is k-colorable a.a.s. Together with previous results of Molloy and Reed, this establishes the chromatic number as a.a.s. k-1 or k. If furthermore d > (2k-3)\log(k-1), then the chromatic number is a.a.s. k. This is an improvement upon results recently announced by Achlioptas and Moore. The method used is "small subgraph conditioning" of Robinson and Wormald, applied to count colorings where all color classes have the same size. It is the first rigorously proved result about the chromatic number of random graphs obtained by small subgraph conditioning. This is joint work with Xavier Perez-Gimenez and Nick Wormald.
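
The thresholds in the statement are easy to evaluate numerically. The helper below is our own (natural logarithm assumed, as is standard in this literature): it returns the k of the statement, together with a flag for whether the second condition pins the chromatic number to exactly k.

```python
import math

def chromatic_window(d):
    """Smallest k with d < 2(k-1)ln(k-1); a random d-regular graph is then
    a.a.s. (k-1)- or k-chromatic, and a.a.s. exactly k-chromatic when
    additionally d > (2k-3)ln(k-1)."""
    k = 3
    while d >= 2 * (k - 1) * math.log(k - 1):
        k += 1
    pinned = d > (2 * k - 3) * math.log(k - 1)
    return k, pinned

print(chromatic_window(3))    # (4, False): a random 3-regular graph is a.a.s. 3- or 4-chromatic
print(chromatic_window(100))
```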

Series: Probability Working Seminar

This talk is based on the article titled "On the convergence to equilibrium of Kac’s random walk on matrices" by Roberto Oliveira (IMPA, Brazil). We show how a strategy related to the path coupling method allows us to establish tight bounds for the L-2 transportation-cost mixing time of Kac's random walk on SO(n).

Series: Geometry Topology Seminar

Let M be a hyperbolic 3-manifold, that is, a 3-manifold admitting a complete, finite volume Riemannian metric of constant sectional curvature -1. Let S be a Heegaard surface in M, that is, M cut open along S consists of two handlebodies. Our goal is to prove that if the volume of M (denoted Vol(M)) is small, then S is simple. To that end we define two complexities for Heegaard surfaces. The first is the genus of the surface (denoted g(S)) and the second is the distance of the surface, as defined by Hempel (denoted d(S)). We prove that there exists a constant K>0 so that for a generic manifold M, if g(S) \geq 76KVol(M) + 26, then d(S) \leq 2. Thus we see that for a generic manifold of small volume, either the genus of S is small or its distance is at most two. The term generic will be explained in the talk.

Monday, March 30, 2009 - 13:00 ,
Location: Skiles 255 ,
Richardo March ,
Istituto per le Applicazioni del Calcolo "Mauro Picone" of C.N.R. ,
Organizer: Haomin Zhou

We consider ordered sequences of digital images. At a given pixel a time course is observed which is related to the time courses at neighbouring pixels. Useful information can be extracted from a set of such observations by classifying pixels in groups, according to some features of interest. We assume that we observe a noisy version of a positive function depending on space and time, which is parameterized by a vector of unknown functions (depending on space) with discontinuities which separate regions with different features in the image domain. We propose a variational method which allows us to estimate the parameter functions, to segment the image domain into regions, and to assign to each region a label according to the values that the parameters assume on the region. Approximation by \Gamma-convergence is used to design a numerical scheme. Numerical results are reported for a dynamic Magnetic Resonance imaging problem.

Series: Analysis Seminar

The contracted asymptotics for orthogonal polynomials whose recurrence coefficients tend to infinity will be discussed. The connection with the equilibrium measure for potential problems with external fields will be exhibited. Applications will be presented, which include the Wilson polynomials.

Series: CDSNS Colloquium

Allocation of service capacity ('staffing') at stations in queueing networks is both of fundamental and practical interest. Unfortunately, the problem is mathematically intractable in general and one therefore typically resorts to approximations or computer simulation. This talk describes work in progress with M. Squillante and S. Ghosh (IBM Research) on an algorithm that serves as an approximation for the 'best' capacity allocation rule. The algorithm can be interpreted as a discrete-time dynamical system, and we are interested in uniqueness of a fixed point and in convergence properties. No prior knowledge on queueing networks will be assumed.

Series: PDE Seminar

We study the problem of constructing systems of hyperbolic conservation laws with prescribed eigencurves, i.e. the eigenvector fields of the Jacobian of the flux are given. We formulate this as a (typically overdetermined) system of equations for the eigenvalues-to-be. Equivalent formulations in terms of differential and algebraic-differential equations are considered. The resulting equations are then analyzed with techniques from exterior differential systems (Cartan-Kahler theory). The cases of 2x2- and 3x3-systems can be treated in detail, and explicit examples show that already the 3x3-case is fairly complex. We also analyze general rich systems. We also characterize conservative systems with the same eigencurves as compressible gas dynamics. This is joint work with Irina Kogan (North Carolina State University).

Series: Mathematical Biology Seminar

The treatment of bacterial infections with antibiotics is universally accepted as one of (if not THE) most significant contributions of medical intervention to reducing mortality and morbidity during the last century. Despite their widespread use over this extended period, however, basic knowledge about how antibiotics kill or prevent the growth of bacteria is only just beginning to emerge. The dose and term of antibiotic treatment have long been determined empirically and intuitively by clinicians. Only recently have antibiotic treatment protocols come under scrutiny with the aim of theoretically and experimentally rationalizing treatment protocols. The aim of such scrutiny is to design protocols which maximize antibiotics’ efficacy in clearing bacterial infections and simultaneously prevent the emergence of resistance in treated patients. Central to these endeavors are the pharmacodynamics, PD (the relationship between bug and drug), and the pharmacokinetics, PK (the change in antibiotic concentration with time), of each bacteria : drug : host combination. The estimation of PD and PK parameters is well established and standardized worldwide, and although different PK parameters are commonly employed for most of these considerations, a single PD parameter is usually used: the minimum inhibitory concentration (MIC). MICs, also utilized as the criteria for resistance, are determined under conditions that are optimal for the action of the antibiotic: low densities of bacteria growing exponentially. The method for estimating MICs, which is the only one officially sanctioned by the regulatory authority (Clinical and Laboratory Standards Institute), defines conditions that rarely obtain outside of the laboratory and virtually never in the bacteria infecting mammalian hosts. Real infections with clinical symptoms commonly involve very high densities of bacteria, most of which are not replicating.
These populations are rarely planktonic but rather reside as colonies or within matrices called biofilms which sometimes include other species of bacteria.

In the first part of my talk, I will present newly published data that describes the pharmacodynamic relationship between the sometimes pathogenic bacterium Staphylococcus aureus and antibiotics of six classes, and the effects of cell density on MICs. By including density-dependent MICs in a standard mathematical model of antibiotic treatment (from our lab), I show that this density-dependence may explain why antibiotic treatment fails in the absence of inherited resistance. In the second part of my talk I will consider the effects of the physiological state of clinical isolates of S. aureus on their susceptibility to different antibiotics. I present preliminary data which suggest that the duration of an infection may contribute adversely to an antibiotic's chance of clearing the infection. I conclude with a brief discussion of the implications of the theoretical and experimental results for the development of antibiotic treatment protocols. As a special treat, I will outline problems of antibiotic treatment that could well be addressed with some classy mathematics.

Series: ACO Colloquium

One of the most interesting aspects of the Linear Complementarity Problem (LCP) is its range from relatively easy problems such as linear and convex quadratic programming problems to NP-hard problems. A major effort in LCP theory had been the study of the Lemke algorithm, a simplex-like algorithm which is guaranteed to terminate in finite number of iterations but not necessarily with a solution (or a certificate that no solution exists). Over the years, many subclasses of LCP were proven to be workable by the Lemke algorithm. Those subclasses were often characterized as ‘nice’ even when no polynomial upper bound for the algorithm was known to exist. In fact, for most of these classes, instances with exponential number of steps had been discovered. In this talk, I’ll discuss the close connection between these classes and the PPAD (Polynomial-time Parity Argument Directed) complexity class. The discovery that computing Nash equilibrium (which is an LCP) is PPAD complete is particularly significant in analyzing the complexity of LCP. I’ll also discuss the LCP reduction-via-perturbation technique and its relation to the PPAD class and the Lemke Algorithm.

This talk is based on a joint work with Sushil Verma.


Series: School of Mathematics Colloquium

In this talk I will outline recent results of G.-Q. Chen, Dehua Wang, and me on the problem of isometrically embedding a two-dimensional Riemannian manifold with negative Gauss curvature into three-dimensional Euclidean space. Remarkably, there is a very pretty duality between this problem and the equations of steady 2-D gas dynamics. Compensated compactness (L. Tartar and F. Murat) yields a proof of existence of solutions to an initial value problem when the prescribed metric is the one associated with the catenoid.

Series: ACO Student Seminar

In the first part of the talk, I am going to give an introduction and overview of linear and semidefinite programming hierarchies. I will mostly review known integrality gaps for such programs and try to give an intuition of why we currently lack strong techniques for designing rounding algorithms. In the second part of the talk I will focus on the stronger SDP Lasserre hierarchy. In contrast with the previous LP and SDP hierarchies, very few examples of integrality gap instances are known to date. I will present a recent technique for designing such instances and discuss open problems in the area.

Series: SIAM Student Seminar

Suppose b is a vector field in R^n such that b(0) = 0. Let A = Jb(0) be the Jacobian matrix of b at 0. Suppose that A has no zero eigenvalues, and has at least one positive and at least one negative eigenvalue. I will study the behavior of the stochastic differential equation dX_\epsilon = b(X_\epsilon)dt + \epsilon dW as \epsilon goes to 0. I will illustrate the techniques used to deal with this kind of equation and make remarks on how the solution behaves as compared to the deterministic case.
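
This setup is easy to experiment with via an Euler–Maruyama scheme. The drift below is a toy linear saddle chosen by us for illustration (one positive and one negative eigenvalue at 0, as in the hypotheses):

```python
import numpy as np

def euler_maruyama(b, x0, eps, T, dt, rng):
    """Simulate dX = b(X) dt + eps dW with the Euler-Maruyama scheme."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + b(x) * dt + eps * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# Toy saddle: b(x, y) = (x, -y); eigenvalues of the Jacobian at 0 are +1 and -1.
b = lambda x: np.array([x[0], -x[1]])
rng = np.random.default_rng(1)

# With eps = 0 this is the deterministic flow: pushed out along the unstable axis.
x_det = euler_maruyama(b, [0.01, 1.0], eps=0.0, T=5.0, dt=1e-3, rng=rng)
print(x_det)  # close to (0.01 * e^5, e^-5)

# Small noise: even starting exactly on the stable axis, the trajectory escapes.
x_noisy = euler_maruyama(b, [0.0, 1.0], eps=0.05, T=5.0, dt=1e-3, rng=rng)
print(x_noisy)
```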

Series: Combinatorics Seminar

I will present an approximation algorithm for the following problem: Given a graph G and a parameter k, find k edges to add to G so as to maximize its algebraic connectivity. This problem is known to be NP-hard, and prior to this work no algorithm was known with a provable approximation guarantee. The algorithm uses a novel way of sparsifying (patching) part of a graph using few edges.
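
The objective itself is just the second-smallest eigenvalue of the graph Laplacian, so it is easy to evaluate. Below is a naive exhaustive illustration on a toy path graph, our own sketch of the objective rather than the approximation algorithm of the talk:

```python
import numpy as np
from itertools import combinations

def algebraic_connectivity(n, edges):
    """Second-smallest eigenvalue of the Laplacian of a graph on n vertices."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return np.sort(np.linalg.eigvalsh(L))[1]

# Path 0-1-2-3: try each single non-edge and keep the best addition (k = 1)
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
non_edges = [e for e in combinations(range(n), 2) if e not in edges]
best = max(non_edges, key=lambda e: algebraic_connectivity(n, edges + [e]))
print(best)  # (0, 3): closing the path into a 4-cycle maximizes connectivity
```

Exhaustive search is exponential in k, which is why a provable approximation algorithm is of interest.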

Series: Geometry Topology Seminar

Joint meeting at Emory

Recall that an open book decomposition of a 3-manifold M is a link L in M whose complement fibers over the circle with fiber a Seifert surface for L. Giroux's correspondence relates open book decompositions of a manifold M to contact structures on M. This correspondence has been fundamental to our understanding of contact geometry. An intriguing question raised by this correspondence is how geometric properties of a contact structure are reflected in the monodromy map describing the open book decomposition. In this talk I will show that there are several interesting monoids in the mapping class group that are related to various properties of a contact structure (like being Stein fillable, weakly fillable, ...). I will also show that there are open book decompositions of Stein fillable contact structures whose monodromy cannot be factored as a product of positive Dehn twists. This is joint work with Jeremy Van Horn-Morris and Ken Baker.

Series: Geometry Topology Seminar

Joint meeting at Emory

A k-dimensional Dehn function of a group gives bounds on the volumes of (k+1)-balls which fill k-spheres in a geometric model for the group. For example, the 1-dimensional Dehn function of the group Z^2 is quadratic. This corresponds to the fact that loops in the euclidean plane R^2 (which is a geometric model for Z^2) have quadratic-area disk fillings. In this talk we will consider the countable sets IP^{(k)} of numbers a for which x^a is a k-dimensional Dehn function of some group. The situation for k \geq 2 is very different from the case k = 1.

Series: CDSNS Colloquium

I will speak on the dispersive character of waves on the interface between vacuum and water under the influence of gravity and surface tension. I will begin by giving a precise account of the formulation of the surface water-wave problem and discussion of its distinct features. They include the dispersion relation, its severe nonlinearity, traveling waves and the Hamiltonian structure. I will describe the recent work of Hans Christianson, Gigliola Staffilani and myself on the local smoothing effect of 1/4 derivative for the fully nonlinear problem under surface tension with some detail of the proof. If time permits, I will explore some open questions regarding long-time behavior and stability.

Series: Combinatorics Seminar

The entropy function has a number of nice properties that make it a useful counting tool, especially when one wants to bound a set with respect to the set's projections. In this talk, I will show a method developed by Mokshay Madiman, Prasad Tetali, and myself that builds on the work of Gyarmati, Matolcsi and Ruzsa as well as the work of Ballister and Bollobas. The goal will be to give a black-box method for generating projection bounds and to show some applications by giving new bounds on the sizes of Abelian and non-Abelian sumsets.


Series: PDE Seminar

The Cauchy problem for the Poisson-Nernst-Planck/Navier-Stokes model was investigated by the speaker in [Transport Theory Statist. Phys. 31 (2002), 333-366], where a local existence-uniqueness theory was demonstrated, based upon Kato's framework for examining evolution equations. In this talk, the existence of a global distribution solution is proved to hold for the model, in the case of the initial-boundary value problem. Connection of the above analysis to significant applications is discussed. The solution obtained is quite rudimentary, and further progress would be expected in resolving issues of regularity.

Series: Analysis Seminar

Solutions of the simplest of the Painleve equations, PI, y'' = 6y^2 + x, exhibit surprisingly rich asymptotic properties as x becomes large. Using the Riemann-Hilbert problem approach, we find an exponentially small addition to an algebraically large background admitting a power series asymptotic expansion, and explain how this "beyond all orders" term helps us to compute the coefficient asymptotics in the preceding series.

Series: Mathematical Biology Seminar

Oscillator synchrony can occur through environmental forcing or as a phenomenon of spontaneous self-organization in which interacting oscillators adjust phase or period and begin to cycle together. Examples of spontaneous synchrony have been documented in a wide variety of electrical, mechanical, chemical, and biological systems, including the menstrual cycles of women. Many colonial birds breed approximately synchronously within a time window set by photoperiod. Some studies have suggested that heightened social stimulation in denser colonies can lead to a tightened annual reproductive pulse (the “Fraser Darling effect”). It has been unknown, however, whether avian ovulation cycles can synchronize on a daily timescale within the annual breeding pulse. We will discuss socially-stimulated egg-laying synchrony in a breeding colony of glaucous-winged gulls using Monte Carlo analysis and a discrete-time dynamical system model.

Series: Research Horizons Seminar

This talk will be a continuation of the one I gave in this seminar on March 11. I will present a brief introduction to the use of partial differential equations (PDE) and variational techniques (including techniques developed in computational fluid dynamics (CFD)) in wavelet transforms and applications in image processing. Two different approaches are used as examples. One is PDE and variational frameworks for image reconstruction. The other is an adaptive ENO wavelet transform designed using ideas from Essentially Non-Oscillatory (ENO) schemes for numerical shock capturing.

Series: ACO Student Seminar

This short introduction to the principles of quantum computation will give hints as to why quantum computers, if they are built, will revolutionize the realm of information technology. If physicists and engineers can produce such machines, all the security protocols used today will become obsolete, and certain complex computations (in the class NP) will become easy. Using the example of trapped-ion computation, the talk will explain how quantum mechanics helps encode information. The notion of a quantum gate, the elementary brick of computation, will be introduced, and some examples of elementary programs will be described. Comments about the Fourier transform algorithm, its potential speed, and its application to code breaking will end this talk.

Series: Dissertation Defense

Series: Stochastics Seminar

The Cameron-Martin theorem is one of the cornerstones of stochastic analysis. It asserts that the shifts of the Wiener measure along certain flows are equivalent. Driver and others have shown that this theorem, after an appropriate reformulation, can be extended to the Wiener measure on the path space over a compact Riemannian manifold. In this talk we will discuss this and other extensions of the Cameron-Martin theorem and show that it in fact holds for an arbitrary complete Riemannian manifold.

Series: SIAM Student Seminar

The linear algebra method is very useful in combinatorics; Lovász's theorem (a very deep theorem about perfect graphs) is proved this way. The idea is: if we want to come up with an upper bound on the size of a set of objects, associate them with elements in a vector space V of relatively low dimension, and show that these elements are linearly independent. Then we cannot have more objects in our set than the dimension of V. We will show how to use this method to solve some combinatorics problems, such as the oddtown problem and the two-distance set problem.
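
The oddtown argument (clubs of odd size with pairwise even intersections have linearly independent characteristic vectors over GF(2), hence at most n of them) can be checked by machine. The bitmask encoding and XOR-elimination below are our own illustration:

```python
def gf2_rank(rows):
    """Rank over GF(2) of a list of bitmask-encoded 0/1 vectors,
    via Gaussian elimination with XOR."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot  # lowest set bit of the pivot row
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

# n singleton clubs in an n-person town: odd sizes, pairwise (empty) even
# intersections, and full rank n -- so the oddtown bound of n clubs is attained.
n = 8
singletons = [1 << i for i in range(n)]
print(gf2_rank(singletons))  # 8
```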

Series: ACO Seminar

Toda proved in 1989 that the (discrete) polynomial time hierarchy, {\bf PH}, is contained in the class {\bf P}^{#\bf P}, namely the class of languages that can be decided by a Turing machine in polynomial time given access to an oracle with the power to compute a function in the counting complexity class #{\bf P}. This result, which illustrates the power of counting, is considered to be a seminal result in computational complexity theory. An analogous result in the complexity theory over the reals (in the sense of Blum-Shub-Smale real Turing machines) has been missing so far. We formulate and prove a real analogue of Toda's theorem. Unlike Toda's proof in the discrete case, which relied on sophisticated combinatorial arguments, our proof is topological in nature. (Joint work with Thierry Zell.)

Friday, April 10, 2009 - 15:00 ,
Location: Skiles 269 ,
Thang Le ,
School of Mathematics, Georgia Tech ,
Organizer: John Etnyre

These are two-hour talks.

We will develop general theory of quantum invariants based on sl_2 (the simplest Lie algebra): The Jones polynomials, the colored Jones polynomials, quantum sl_2 groups, operator invariants of tangles, and relations with the Alexander polynomial and the A-polynomials. Optional: Finite type invariants and the Kontsevich integral.

Series: Geometry Topology Seminar

Monday, April 13, 2009 - 13:00 ,
Location: Skiles 255 ,
Stacey Levine ,
Duquesne University ,
Organizer: Sung Ha Kang

We present new finite difference approximations for solving variational problems using the TV and Besov smoothness penalty functionals. The first approach reduces oversmoothing and anisotropy found in common discrete approximations of the TV functional. The second approach reduces the staircasing effect that arises from TV type smoothing. The algorithms converge and can be sped up using a multiscale algorithm. Numerical examples demonstrate both the qualitative and quantitative behavior of the solutions.


Series: Geometry Topology Seminar

We will define the sutured version of embedded contact homology for sutured contact 3-manifolds. After that, we will show that the sutured version of embedded contact homology of S^1\times D^2, equipped with 2n sutures of integral or infinite slope on the boundary, coincides with the sutured Floer homology.

Series: Analysis Seminar

It turns out that the sinc kernel is not the only kernel that arises as a universality limit coming from random matrices associated with measures with compact support. Any reproducing kernel for a de Branges space that is equivalent to a Paley-Wiener space may arise. We discuss this and some other results involving de Branges spaces, universality, and orthogonal polynomials.

Series: CDSNS Colloquium

I will discuss new computational tools, based on topological methods, that extract coarse but rigorous combinatorial descriptions of the global dynamics of multiparameter nonlinear systems. These techniques are motivated by the fact that such systems can produce a wide variety of complicated dynamics that vary dramatically as a function of changes in the nonlinearities, and by the following associated challenges, which we claim can, at least in part, be addressed. 1. In many applications there are models for the dynamics, but specific parameters are unknown or not directly computable. To identify the parameters one needs to be able to match dynamics produced by the model against those observed experimentally. 2. Experimental measurements are often too crude to identify classical dynamical structures such as fixed points or periodic orbits, let alone the more complicated structures associated with chaotic dynamics. 3. Often the models themselves are based on nonlinearities that are chosen because of heuristic arguments or because they are easy to fit to data, as opposed to being derived from first principles. Thus, one wants to be able to separate the scientific conclusions from the particular nonlinearities of the equations. To make the above comments concrete, I will describe the techniques in the context of a simple model arising in population biology.

Series: Other Talks

An old conjecture of Erdos and Szemeredi states that if A is a finite set of integers then the sum-set or the product-set should be large. The sum-set of A is A + A = {a+b | a,b \in A}, and the product set is defined in a similar way, A*A = {ab | a,b \in A}. Erdos and Szemeredi conjectured that the sum-set or the product set is almost quadratic in |A|, i.e. max(|A+A|,|A*A|) > c|A|^{2-\epsilon}. In this talk we review some recent developments and problems related to the conjecture.
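
The two extreme examples behind the conjecture are easy to compute: an arithmetic progression has the smallest possible sumset, a geometric progression the smallest possible product set, and in each case the other set is large. A quick sketch (the particular sets are our own choices):

```python
def sumset(A):
    return {a + b for a in A for b in A}

def productset(A):
    return {a * b for a in A for b in A}

A = set(range(1, 21))            # arithmetic progression
G = {2 ** i for i in range(20)}  # geometric progression

print(len(sumset(A)))      # 39 = 2*20 - 1, as small as a sumset can be
print(len(productset(G)))  # 39 as well, by the same count applied to exponents
print(len(productset(A)) > len(sumset(A)))  # True: the AP's product set is large
print(len(sumset(G)) > len(productset(G)))  # True: the GP's sumset is large
```

The conjecture asserts that no single set can make both quantities small simultaneously.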

Series: PDE Seminar

I will talk about the highlights of a collaborative and multidisciplinary program investigating qualitative features of steady water waves with vorticity in two dimensions. Computational and analytical results together with data from the oceanographic community have resulted in strong evidence that key qualitative features such as amplitude, depth, streamline shape and pressure profile can be fundamentally affected by the presence of vorticity. Systematic studies of constant vorticity and shear vorticity functions will be presented.

Series: Mathematical Biology Seminar

Series: Research Horizons Seminar

This talk will focus on mathematical approaches using PDE and variational models for image processing. I will discuss general problems arising from image reconstructions and segmentation, starting from Total Variation minimization (TV) model and Mumford-Shah segmentation model, and present new models from various developments. Two main topics will be on variational approaches to image reconstruction and multi-phase segmentation. Many challenges and various problems will be presented with some numerical results.

Series: ACO Student Seminar

Grotzsch's Theorem states that every triangle-free planar graph is 3-colorable. Thomassen conjectured that every triangle-free planar graph has exponentially many distinct 3-colorings. He proved that it has at least 2^{n^{1/12}/20000} distinct 3-colorings, where n is the number of vertices. We show that it has at least 2^{\sqrt{n/600}} distinct 3-colorings.

Joint work with Arash Asadi and Robin Thomas.
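For small graphs, counts of this kind can be checked directly. The brute-force sketch below (an illustration only, not the method of the paper) counts proper 3-colorings of the 5-cycle, a triangle-free planar graph:

```python
from itertools import product

def count_3_colorings(n, edges):
    """Count proper 3-colorings of a graph on vertices 0..n-1 by brute force."""
    return sum(
        all(c[u] != c[v] for u, v in edges)  # every edge must be bichromatic
        for c in product(range(3), repeat=n)
    )

# C_5, the 5-cycle: triangle-free and planar.
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(count_3_colorings(5, c5_edges))  # 30
```

The answer 30 agrees with the chromatic polynomial of a cycle, (k-1)^n + (-1)^n (k-1), evaluated at k = 3, n = 5.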

Series: Stochastics Seminar

In binary classification problems, the goal is to estimate a function g*:S -> {-1,1} minimizing the generalization error (or the risk)

L(g):=P{(x,y):y \neq g(x)},

where P is a probability distribution in S x {-1,1}. The distribution P is unknown and estimators \hat g of g* are based on a finite number of independent random couples (X_j,Y_j) sampled from P. It is of interest to have upper bounds on the excess risk

{\cal E}(\hat g):=L(\hat g) - L(g_{\ast})

of such estimators that hold with high probability and that take into account reasonable measures of complexity of classification problems (such as, for instance, the VC-dimension). We will discuss several approaches (both old and new) to excess risk bounds in classification, including some recent results on excess risk in so-called active learning.
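These quantities are easy to approximate by simulation in a toy setting (my illustration, not from the talk): take X uniform on [0,1] with P(Y = +1 | X = x) = x, so the Bayes classifier g* thresholds at 1/2 with risk L(g*) = 1/4, and a classifier thresholding at 0.3 has excess risk exactly 0.04:

```python
import random

def risk(classifier, n_samples=200_000, seed=0):
    """Monte Carlo estimate of L(g) = P{(x, y) : y != g(x)} for the toy
    distribution X ~ Uniform[0, 1], P(Y = +1 | X = x) = x."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_samples):
        x = rng.random()
        y = 1 if rng.random() < x else -1
        if classifier(x) != y:
            errors += 1
    return errors / n_samples

bayes = lambda x: 1 if x > 0.5 else -1   # g*, the Bayes classifier here
g_hat = lambda x: 1 if x > 0.3 else -1   # a suboptimal estimator \hat g
excess = risk(g_hat) - risk(bayes)       # estimates the excess risk, 0.04
```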

Series: School of Mathematics Colloquium

Archimedes' principle may be used to predict whether and how certain solid objects float in a liquid bath. The principle, however, neglects capillary forces, which can sometimes play an important role. We describe a recent generalization of the principle and how the standard textbook presentation of Archimedes' work may have played a role in delaying the discovery of such generalizations to this late date.

Friday, April 17, 2009 - 13:00 ,
Location: Skiles 255 ,
Gilad Lerman ,
University of Minnesota ,
Organizer: Sung Ha Kang

Note special day.

We propose a fast multi-way spectral clustering algorithm for multi-manifold data modeling, i.e., modeling data by mixtures of manifolds (possibly intersecting). We describe the supporting theory as well as the practical choices guided by it. We first develop the case of hybrid linear modeling, i.e., when the underlying manifolds are affine subspaces in a Euclidean space, and then we extend this setting to more general manifolds. We exemplify the practical use of the algorithm by demonstrating its successful application to problems of motion segmentation.

Series: Combinatorics Seminar

Let G be a graph and K be a field. We associate to G a projective toric variety X_G over K, the cut variety of the graph G. The cut ideal I_G of the graph G is the ideal defining the cut variety. In this talk, we show that, if G is a subgraph of a subdivision of a book or an outerplanar graph, then the minimal generators are quadrics. Furthermore we describe the generators of the cut ideal of a subdivision of a book.

Friday, April 17, 2009 - 15:00 ,
Location: Skiles 269 ,
Thang Le ,
School of Mathematics, Georgia Tech ,
Organizer: John Etnyre
We will develop the general theory of quantum invariants based on sl_2 (the simplest Lie algebra): the Jones polynomial, the colored Jones polynomials, quantum sl_2 groups, operator invariants of tangles, and relations with the Alexander polynomial and the A-polynomial. Optional: finite type invariants and the Kontsevich integral.

These are two hour lectures.

Monday, April 20, 2009 - 13:00 ,
Location: Skiles 255 ,
Tiancheng Ouyang ,
Brigham Young ,
Organizer: Chongchun Zeng

In this talk, I will show many interesting orbits of the N-body problem in 2D and 3D. Some of them have neither symmetry nor equal masses; some involve collision singularities. Our numerical optimization methods are used to search for the initial conditions and properties of preassigned orbits, and variational methods are used to prove their existence.

Series: Geometry Topology Seminar

In this talk we will introduce the notion of a cube diagram---a surprisingly subtle, extremely powerful new way to represent a knot or link. One of the motivations for creating cube diagrams was to develop a 3-dimensional "Reidemeister's theorem". Recall that many knot invariants can easily be proven by showing that they are invariant under the three Reidemeister moves. On the other hand, simple, easy-to-check 3-dimensional moves (like triangle moves) are generally ineffective for defining and proving knot invariants: such moves are simply too flexible and/or uncontrollable to check whether a quantity is a knot invariant or not. Cube diagrams are our attempt to "split the difference" between the flexibility of ambient isotopy of R^3 and specific, controllable moves in a knot projection. The main goal in defining cube diagrams was to develop a data structure that describes an embedding of a knot in R^3 such that (1) every link is represented by a cube diagram, (2) the data structure is rigid enough to easily define invariants, yet (3) only five moves are necessary to transform one cube diagram of a link into any other cube diagram of the same link. As an example of the usefulness of cube diagrams we present a homology theory constructed from cube diagrams and show that it is equivalent to knot Floer homology, one of the most powerful known knot invariants.

Series: Analysis Seminar

Note special time

In 1908 Hadamard conjectured that the biharmonic Green function must be positive. Later on, several counterexamples to Hadamard's conjecture have been found and a variety of upper estimates were obtained in sufficiently smooth domains. However, the behavior of the Green function in general domains was not well-understood until recently. In a joint work with V. Maz'ya we derive sharp pointwise estimates for the biharmonic and, more generally, polyharmonic Green function in arbitrary domains. Furthermore, we introduce the higher order capacity and establish an analogue of the Wiener criterion describing the precise correlation between the geometry of the domain and the regularity of the solutions to the polyharmonic equation.

Series: CDSNS Colloquium

In the talk I will discuss the periodicity of solutions to the classical forced pendulum equation y'' + A sin y = f(t), where A = g/l is the ratio of the gravity constant to the pendulum length, and f(t) is an external periodic force with minimal period T. The major concern is to characterize conditions on A and f under which the equation admits periodic solutions with a prescribed minimal period pT, where p > 1 is an integer. I will show how a new approach, based on critical point theory and an original decomposition technique, leads to the existence of such solutions without requiring p to be prime, as imposed in most previous approaches. In addition, I will present the first non-existence result for such solutions, which indicates that a long pendulum has a natural resistance to oscillating periodically.

Series: ACO Colloquium

The past 10 years have seen a confluence of research in sparse approximation amongst computer science, mathematics, and electrical engineering. Sparse approximation encompasses a large number of mathematical, algorithmic, and signal processing problems which all attempt to balance the size of a (linear) representation of data and the fidelity of that representation. I will discuss several of the basic algorithmic problems and their solutions, focusing on special classes of matrices. I will conclude with an application in biological testing.

Series: Graph Theory Seminar

I will discuss some new results, as well as new interpretations of some old results, concerning reduced divisors (a.k.a. G-parking functions) on graphs, metric graphs, and tropical curves.

Series: Research Horizons Seminar

The eigenvalues of the Laplacian are the squares of the frequencies of the normal modes of vibration, according to the wave equation. For this reason, Bers and Kac referred to the problem of determining the shape of a domain from the eigenvalue spectrum of the Laplacian as the question of whether one can "hear" the shape. It turns out that in general the answer is "no." Sometimes, however, one can, for instance in extremal cases where a domain, or a manifold, is round. There are many "isoperimetric" theorems that allow us to conclude that a domain, curve, or manifold is round when enough information about the spectrum of the Laplacian or a similar operator is known. I'll describe a few of these theorems and show how to prove them by linking geometry with functional analysis.

Series: ACO Student Seminar

We construct efficient and natural encryption schemes that remain secure (in the standard model) even when used to encrypt messages that may depend upon their secret keys. Our schemes are based on well-studied "noisy learning" problems. In particular, we design

1) A symmetric-key cryptosystem based on the "learning parity with noise" (LPN) problem, and

2) A public-key cryptosystem based on the "learning with errors" (LWE) problem, a generalization of LPN that is at least as hard as certain worst-case lattice problems (Regev, STOC 2005; Peikert, STOC 2009).

Remarkably, our constructions are close (but non-trivial) relatives of prior schemes based on the same assumptions --- which were proved secure only in the usual key-independent sense --- and are nearly as efficient. For example, our most efficient public-key scheme encrypts and decrypts in amortized O-tilde(n) time per message bit, and has only a constant ciphertext expansion factor. This stands in contrast to the only other known standard-model schemes with provable security for key-dependent messages (Boneh et al., CRYPTO 2008), which incur a significant extra cost over other semantically secure schemes based on the same assumption. Our constructions and security proofs are simple and quite natural, and use new techniques that may be of independent interest.

This is joint work with Chris Peikert and Amit Sahai.

Series: Analysis Seminar

We will discuss a new method of asymptotic analysis of matrix-valued Riemann-Hilbert problems that involves dispensing with analyticity in favor of measured deviation therefrom. This method allows the large-degree analysis of orthogonal polynomials on the real line with respect to varying nonanalytic weights with external fields having two Lipschitz-continuous derivatives, as long as the corresponding equilibrium measure has typical support properties. Universality of local eigenvalue statistics of unitary-invariant ensembles in random matrix theory follows under the same conditions. This is joint work with Ken McLaughlin.

Series: Graph Theory Seminar

A well-known theorem of Kuratowski states that a graph is planar if and only if it contains no TK_5 or TK_{3,3}. In the 1970s Seymour conjectured that every 5-connected nonplanar graph contains a TK_5. In the talk we will discuss several special cases of the conjecture, for example graphs containing K_4^- (K_4 without an edge). A related theorem on independent paths will also be covered.

Thursday, April 23, 2009 - 13:00 ,
Location: Skiles 255 ,
Per-Gunnar Martinsson ,
Dept of Applied Mathematics, University of Colorado ,
Organizer: Haomin Zhou

Note special day

Linear boundary value problems occur ubiquitously in many areas of science and engineering, and the cost of computing approximate solutions to such equations is often what determines which problems can, and which cannot, be modelled computationally. Due to advances in the last few decades (multigrid, FFT, fast multipole methods, etc), we today have at our disposal numerical methods for most linear boundary value problems that are "fast" in the sense that their computational cost grows almost linearly with problem size. Most existing "fast" schemes are based on iterative techniques in which a sequence of incrementally more accurate solutions is constructed. In contrast, we propose the use of recently developed methods that are capable of directly inverting large systems of linear equations in almost linear time. Such "fast direct methods" have several advantages over existing iterative methods:

(1) Dramatic speed-ups in applications involving the repeated solution of similar problems (e.g. optimal design, molecular dynamics).

(2) The ability to solve inherently ill-conditioned problems (such as scattering problems) without the use of custom designed preconditioners.

(3) The ability to construct spectral decompositions of differential and integral operators.

(4) Improved robustness and stability.

In the talk, we will also describe how randomized sampling can be used to rapidly and accurately construct low rank approximations to matrices. The cost of constructing a rank k approximation to an m x n matrix A for which an O(m+n) matrix-vector multiplication scheme is available is O((m+n)*k). This cost is the same as that of the well-established Lanczos scheme, but the randomized scheme is significantly more robust. For a general matrix A, the cost of the randomized scheme is O(m*n*log(k)), which should be compared to the O(m*n*k) cost of existing deterministic methods.
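The randomized sampling idea can be sketched in a few lines of NumPy (a generic randomized range-finder in the spirit described above; the function name and oversampling parameter are my own choices, not the speaker's):

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=None):
    """Rank-k approximation of A built from a small number of random
    matrix-vector products: sketch the range, orthonormalize, then
    solve a small dense problem."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Y = A @ Omega                          # sample the range of A
    Q, _ = np.linalg.qr(Y)                 # orthonormal basis for the sample
    B = Q.T @ A                            # small (k + oversample) x n matrix
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_hat[:, :k], s[:k], Vt[:k]
```

For a matrix of exact rank k, the reconstruction U diag(s) V^T recovers A to machine precision with overwhelming probability; for general matrices the error is close to that of the best rank-k approximation.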

Series: Stochastics Seminar

In competing risks studies, subjects may fail from any one of k causes, and comparing any two competing risks in the presence of covariate effects is very important in medical studies. In this talk, we develop omnibus tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels. The omnibus tests are derived under the additive risk model from a weighted difference of estimates of cumulative cause-specific hazard rates. Simultaneous confidence bands for the difference of two conditional cumulative incidence functions are also constructed. A simulation procedure is used to sample from the null distribution of the test process, in which graphical and numerical techniques are used to detect significant differences in the risks. In addition, we conduct a simulation study, which shows that the proposed procedure has good finite sample performance. A melanoma data set from a clinical trial is used for illustration.

Series: Algebra Seminar

Let S be a group or semigroup acting on a variety V, let x be a point on V, and let W be a subvariety of V. What can be said about the structure of the intersection of the S-orbit of x with W? Does it have the structure of a union of cosets of subgroups of S? The Mordell-Lang theorem of Laurent, Faltings, and Vojta shows that this is the case for certain groups of translations (the Mordell conjecture is a consequence of this). On the other hand, Pell's equation shows that it is not true for additive translations of the Cartesian plane. We will see that this question relates to issues in complex dynamics, simple questions from linear algebra, and techniques from the study of linear recurrence sequences.

Series: Combinatorics Seminar

We develop an information-theoretic foundation for compound Poisson approximation and limit theorems (analogous to the corresponding developments for the central limit theorem and for simple Poisson approximation). First, sufficient conditions are given under which the compound Poisson distribution has maximal entropy within a natural class of probability measures on the nonnegative integers. In particular, it is shown that a maximum entropy property is valid if the measures under consideration are log-concave, but that it fails in general. Second, approximation bounds in the (strong) relative entropy sense are given for distributional approximation of sums of independent nonnegative integer valued random variables by compound Poisson distributions. The proof techniques involve the use of a notion of local information quantities that generalize the classical Fisher information used for normal approximation, as well as the use of ingredients from Stein's method for compound Poisson approximation. This work is joint with Andrew Barbour (Zurich), Oliver Johnson (Bristol) and Ioannis Kontoyiannis (AUEB).

Friday, April 24, 2009 - 15:00 ,
Location: Skiles 269 ,
Thang Le ,
School of Mathematics, Georgia Tech ,
Organizer: John Etnyre
We will develop the general theory of quantum invariants based on sl_2 (the simplest Lie algebra): the Jones polynomial, the colored Jones polynomials, quantum sl_2 groups, operator invariants of tangles, and relations with the Alexander polynomial and the A-polynomial. Optional: finite type invariants and the Kontsevich integral.

These are two hour lectures.

Series: Analysis Seminar

Let A be a Hilbert space operator. If A = UP is the polar decomposition of A, and 0 < \lambda < 1, the \lambda-Aluthge transform of A is defined to be the operator \Delta_\lambda(A) = P^\lambda U P^{1-\lambda}. We will discuss recent progress on the convergence of the iteration of this transform; both the infinite and finite dimensional cases will be covered.

Series: Dissertation Defense

Series: ACO Seminar

We consider the problem of finding an unknown graph by using two types of queries with an additive property. Given a graph, an additive query asks the number of edges in a set of vertices, while a cross-additive query asks the number of edges crossing between two disjoint sets of vertices. For weighted graphs, the queries return sums of weights. These types of queries were partially motivated by DNA shotgun sequencing and the linkage discovery problem of artificial intelligence. For a given unknown weighted graph G with n vertices, m edges, and a certain mild condition on weights, we prove that there exists a non-adaptive algorithm to find the edges of G using O\left(\frac{m\log n}{\log m}\right) queries of both types, provided that m \geq n^{\epsilon} for any constant \epsilon > 0. For an unweighted graph, it is shown that the same bound holds for the full range of m. This settles a conjecture of Grebinski for finding an unweighted graph using additive queries. We also consider the problem of finding the Fourier coefficients of a certain class of pseudo-Boolean functions. A similar coin weighing problem is also considered. (This is joint work with S. Choi)
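To fix ideas, the two query types can be mocked up as oracles over a hidden (unweighted) graph; the helper names below are my own, for illustration only:

```python
from itertools import combinations

def make_oracles(edges):
    """Build additive and cross-additive query oracles for a hidden graph,
    given its edge list."""
    edge_set = {frozenset(e) for e in edges}

    def additive(S):
        # number of hidden edges with both endpoints inside S
        return sum(1 for u, v in combinations(sorted(S), 2)
                   if frozenset((u, v)) in edge_set)

    def cross_additive(S, T):
        # number of hidden edges with one endpoint in S and one in T
        # (S and T are assumed disjoint)
        return sum(1 for u in S for v in T
                   if frozenset((u, v)) in edge_set)

    return additive, cross_additive

# Hidden path 0 - 1 - 2 - 3:
additive, cross_additive = make_oracles([(0, 1), (1, 2), (2, 3)])
print(additive({0, 1, 2}))             # 2 (edges 0-1 and 1-2)
print(cross_additive({0, 1}, {2, 3}))  # 1 (edge 1-2)
```

The reconstruction problem is then to recover `edge_set` from as few such oracle calls as possible.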

Series: Analysis Seminar

In this contribution we study the asymptotic behaviour of polynomials orthogonal with respect to a Sobolev-type inner product

\langle p, q\rangle_S = \int^\infty_0 p(x)q(x)x^\alpha e^{-x} dx + IP(0)^t AQ(0), \alpha > -1,

where p and q are polynomials with real coefficients,

A = \pmatrix{M_0 & \lambda\\ \lambda & M_1},

IP(0) = \pmatrix{p(0)\\ p'(0)}, Q(0) = \pmatrix{q(0)\\ q'(0)},

and A is a positive semidefinite matrix.

First, we analyze some algebraic properties of these polynomials. More precisely, the connection relations between the polynomials orthogonal with respect to the above inner product and the standard Laguerre polynomials are deduced. On the other hand, the symmetry of the multiplication operator by x^2 yields a five-term recurrence relation that such polynomials satisfy. Second, we focus attention on their outer relative asymptotics with respect to the standard Laguerre polynomials, as well as on an analog of the Mehler-Heine formula for the rescaled polynomials. Third, we find the raising and lowering operators associated with these orthogonal polynomials. As a consequence, we deduce the holonomic equation that they satisfy. Finally, some open problems will be considered.

Series: PDE Seminar

Some interesting nonlinear fourth-order parabolic equations, including the "thin-film" equation with linear mobility and the quantum drift-diffusion equation, can be seen as gradient flows of first-order integral functionals in the Wasserstein space of probability measures. We will present some general tools of the metric-variational approach to gradient flows which are useful to study this kind of equations and their asymptotic behavior. (Joint works in collaboration with U.Gianazza, R.J. McCann, D. Matthes, G. Toscani)

Series: Dissertation Defense

Series: Combinatorics Seminar

We consider the Ulam "liar" and "pathological liar" games, natural and well-studied variants of "20 questions" in which the adversarial respondent is permitted to lie some fraction of the time. We give an improved upper bound for the optimal strategy (aka minimum-size covering code), coming within a triply iterated log factor of the so-called "sphere covering" lower bound. The approach is twofold: (1) use a greedy-type strategy until the game is nearly over, then (2) switch to applying the "liar machine" to the remaining Berlekamp position vector. The liar machine is a deterministic (countable) automaton which we show to be very close in behavior to a simple random walk, and this resemblance translates into a nearly optimal strategy for the pathological liar game.

Series: Graph Theory Seminar

Richter and Salazar conjectured that graphs that are critical for a fixed crossing number k have bounded bandwidth. A weaker well-known conjecture of Richter is that their maximum degree is bounded in terms of k. We disprove these conjectures for every k > 170 by providing examples of k-crossing-critical graphs with arbitrarily large maximum degree, and explore the structure of such graphs.

Series: Graph Theory Seminar

We study several parameters of cubic graphs with large girth. In particular, we prove that every n-vertex cubic graph with sufficiently large girth satisfies the following:

- it has a dominating set of size at most 0.29987n (which improves the previous bound of 0.32122n of Rautenbach and Reed),
- it has fractional chromatic number at most 2.37547 (which improves the previous bound of 2.66881 of Hatami and Zhu),
- it has an independent set of size at least 0.42097n (which improves the previous bound of 0.41391n of Shearer), and
- it has fractional total chromatic number arbitrarily close to 4 (which answers a conjecture of Reed in the affirmative). More strongly, there exists g such that the fractional total chromatic number of every bridgeless graph with girth at least g is equal to 4.

The presented bounds are based on a simple probabilistic argument.

The presentation is based on results obtained jointly with Tomas Kaiser, Andrew King, Petr Skoda and Jan Volec.

Series: Other Talks

This will be an informal seminar with a discussion of some mathematical problems in relativistic astrophysics and of plans for future joint seminars between the Schools of Mathematics and Physics.

Series: Dissertation Defense

This dissertation provides the conceptual development, modeling and simulation, physical implementation, and measured hardware results for a practicable digital coherent chaotic communication system.

Series: Dissertation Defense

In this thesis, we extend De Giorgi's interpolation method to a class of parabolic equations which are not gradient flows but possess an entropy functional and an underlying Lagrangian. The new fact in the study is that not only may the Lagrangian depend on spatial variables, but also it does not induce a metric. Assuming the initial condition is a density function, not necessarily smooth, but solely of bounded first moments and finite entropy, we use a variational scheme to discretize the equation in time and construct approximate solutions. Moreover, De Giorgi's interpolation method proves to be a powerful tool for establishing convergence of our algorithm. Finally, we analyze uniqueness and stability of our solution in L^1.

Series: Dissertation Defense

Series: Dissertation Defense

Series: Combinatorics Seminar

In this lecture, I will explain connections between graph theory and submodular optimization. The topics include theorems of Nash-Williams on orientation and detachment of graphs.

Tuesday, August 18, 2009 - 14:00 ,
Location: Skiles 255 ,
Justin W. L. Wan ,
Computer Science, University of Waterloo ,
Organizer: Sung Ha Kang

In image guided procedures such as radiation therapies and computer-assisted surgeries, physicians often need to align images that are taken at different times and by different modalities. Typically, a rigid registration is performed first, followed by a nonrigid registration. We are interested in efficient registration methods which are robust (the numerical solution procedure will not get stuck at local minima) and fast (ideally real time). We will present a robust continuous mutual information model for multimodality registration and explore the new emerging parallel hardware for fast computation. Nonrigid registration is then applied afterwards to further enhance the results. Elastic and fluid models have usually been used, but edges and small details often appear smeared in the transformed templates. We will propose a new inviscid model formulated in a particle framework, and derive the corresponding nonlinear partial differential equations for computing the spatial transformation. The idea is to simulate the template image as a set of free particles moving toward the target positions under applied forces. Our model can accommodate both small and large deformations, with sharper edges and clear texture achieved at less computational cost. We demonstrate the performance of our model on a variety of images including 2D and 3D, mono-modal and multi-modal, synthetic and clinical data.

Series: Combinatorics Seminar

In this lecture, I will review combinatorial algorithms for minimizing submodular functions. In particular, I will present a new combinatorial algorithm obtained in my recent joint work with Jim Orlin.

Series: Combinatorics Seminar

In this lecture, I will explain the greedy approximation algorithm for submodular function maximization due to Nemhauser, Wolsey, and Fisher. Then I will apply this algorithm to the problem of approximating a monotone submodular function by another submodular function with a succinct representation. This approximation method is based on the maximum volume ellipsoid inscribed in a centrally symmetric convex body. This is joint work with Michel Goemans, Nick Harvey, and Vahab Mirrokni.
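The greedy rule itself is simple to state in code. Below is a minimal sketch on maximum coverage, a standard monotone submodular objective (the instance is invented for illustration; this is the textbook greedy, not the ellipsoid-based approximation from the talk):

```python
def greedy_submodular(sets, k):
    """Nemhauser-Wolsey-Fisher greedy for max coverage: repeatedly pick the
    set with the largest marginal gain. Achieves a (1 - 1/e) approximation
    for monotone submodular maximization under a cardinality constraint."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda name: len(sets[name] - covered))
        if not sets[best] - covered:   # no remaining marginal gain
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6, 7}}
chosen, covered = greedy_submodular(sets, k=2)
print(chosen, covered)  # ['c', 'a'] covers {1, 2, 3, 4, 5, 6, 7}
```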

Series: CDSNS Colloquium

The Bendixson conditions for general nonlinear differential equations in Banach spaces are developed in terms of stability of associated compound differential equations. The generalized Bendixson criterion states that, if some measure of 2-dimensional surface area tends to zero with time, then there are no closed curves that are left invariant by the dynamics. In particular, there are no nontrivial periodic orbits, homoclinic loops or heteroclinic loops. Concrete conditions that preclude the existence of periodic solutions for a parabolic PDE will be given. This is joint work with Professor James S. Muldowney at University of Alberta.

Series: PDE Seminar

We prove that solutions of the Navier-Stokes equations of

three-dimensional, compressible flow, restricted to fluid-particle

trajectories, can be extended as analytic functions of complex time. One

important corollary is backwards uniqueness: if two such solutions agree

at a given time, then they must agree at all previous times.

Additionally, analyticity yields sharp estimates for the time

derivatives of arbitrary order of solutions along particle trajectories.

I'm going to integrate into the talk something like a "pretalk" in an

attempt to motivate the more technical material and to make things

accessible to a general analysis audience.

Series: ACO Student Seminar

A central question in the theory of card shuffling is how quickly a deck of

cards becomes 'well-shuffled' given a shuffling rule. In this talk, I will

discuss a probabilistic card shuffling model known as the 'interchange

process'. A

conjecture from 1992 about this model has recently been resolved

and I will address how my work has been involved with this conjecture. I

will also discuss other card shuffling models.
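
For concreteness, here is a minimal simulation sketch of the interchange process (my own illustration, not code from the talk): at each step a uniformly random edge of a graph is chosen and the two cards at its endpoints are swapped; on a path graph this is the random adjacent-transposition shuffle.

```python
import random

def interchange(perm, edges, steps, rng):
    """Run the interchange process: repeatedly swap cards across a random edge."""
    for _ in range(steps):
        i, j = rng.choice(edges)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

n = 6
path_edges = [(i, i + 1) for i in range(n - 1)]  # path graph: adjacent swaps
deck = interchange(list(range(n)), path_edges, steps=100, rng=random.Random(0))
assert sorted(deck) == list(range(n))  # still a permutation of the deck
```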

Series: Analysis Seminar

We will survey recent developments in the area of two weight inequalities, especially those relevant for singular integrals. In the second lecture, we will go into some details of recent characterizations of maximal singular integrals of the speaker, Eric Sawyer, and Ignacio Uriarte-Tuero.

Series: School of Mathematics Colloquium

Pre-reception at 2:30 in Room N201. If you would like to meet with Prof. Ashtekar while he is on campus (at the Center for Relativistic Astrophysics - Boggs building), please contact lori.federico@physics.gatech.edu.

General relativity is based on a deep interplay between physics and mathematics: Gravity is encoded in geometry. It has had spectacular observational success and has also pushed forward the frontier of geometric analysis. But the theory is incomplete because it ignores quantum physics. It predicts that the space-time ends at singularities such as the big-bang. Physics then comes to a halt. Recent developments in loop quantum gravity show that these predictions arise because the theory has been pushed beyond the domain of its validity. With new inputs from mathematics, one can extend cosmology beyond the big-bang. The talk will provide an overview of this new and rich interplay between physics and mathematics.

Series: Graph Theory Seminar

Slightly modifying a quote of

Paul Erdos: The problem for graphs we

solve this week. The corresponding problem

for posets will take longer.

As one example, testing a graph to determine

if it is planar is linear in the number of

edges. Testing an order (Hasse) diagram to

determine if it is planar is NP-complete.

As a second example, it is NP-complete

to determine whether a graph is a cover

graph.

With these remarks in mind, we present

some results, mostly new but some classic,

regarding posets with planar cover graphs

and planar diagrams. The most recent

result is that for every h, there is a constant

c_h so that if P is a poset of height h and

the cover graph of P is planar, then

the dimension of P is at most c_h.

Series: Stochastics Seminar

We propose a penalized orthogonal-components regression

(POCRE) for large p small n data. Orthogonal components are sequentially

constructed to maximize, upon standardization, their correlation to the

response residuals. A new penalization framework, implemented via

empirical Bayes thresholding, is presented to effectively identify

sparse predictors of each component. POCRE is computationally efficient

owing to its sequential construction of leading sparse principal

components. In addition, such construction offers other properties such

as grouping highly correlated predictors and allowing for collinear or

nearly collinear predictors. With multivariate responses, POCRE can

construct common components and thus build up latent-variable models for

large p small n data. This is joint work with Yanzhu Lin and Min Zhang.

Monday, August 31, 2009 - 13:00 ,
Location: Skiles 255 ,
Nicola Guglielmi ,
Università di L'Aquila ,
guglielm@univaq.it ,
Organizer: Sung Ha Kang

In this talk I will address the problem of the computation of the joint spectral radius (j.s.r.) of a set of matrices. This tool is useful to determine uniform stability properties of non-autonomous discrete linear systems. After explaining how to extend the spectral radius from a single matrix to a set of matrices and illustrating some applications where this concept plays an important role, I will consider the problem of the computation of the j.s.r. and illustrate some possible strategies. A basic tool I will use for this purpose consists of polytope norms, both real and complex. I will illustrate a possible algorithm for the computation of the j.s.r. of a family of matrices which is based on the use of these classes of norms. Some examples will be shown to illustrate the behaviour of the algorithm. Finally I will address the problem of the finite computability of the j.s.r. and state some recent results, open problems and conjectures connected with this issue.
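
To make the object concrete, the following is a hedged sketch (my own illustration, not the polytope-norm algorithm of the talk) of the standard lower bound on the j.s.r.: for any product P of k matrices from the family, rho(P)^(1/k) is a lower bound, and the j.s.r. is the limit of the best such bound as k grows.

```python
import itertools, math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectral_radius_2x2(M):
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:  # real eigenvalues (tr +/- sqrt(disc)) / 2
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)  # complex conjugate pair: |lambda|^2 = det

def jsr_lower_bound(mats, k):
    """Max over all length-k products P of rho(P)^(1/k): a j.s.r. lower bound."""
    best = 0.0
    for word in itertools.product(mats, repeat=k):
        P = word[0]
        for M in word[1:]:
            P = matmul(P, M)
        best = max(best, spectral_radius_2x2(P) ** (1.0 / k))
    return best

A = [[2, 0], [0, 1]]
B = [[1, 0], [0, 3]]
# For this commuting pair the j.s.r. is 3 (repeat B forever); length-3
# products already find it.
assert abs(jsr_lower_bound([A, B], 3) - 3.0) < 1e-9
```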

Series: Geometry Topology Seminar

Not yet!

Series: PDE Seminar

This seminar concerns the analysis of a PDE, invented by J.M. Lasry

and P.L. Lions

whose applications need not concern us.

Notwithstanding, the focus of the application is the behavior of a

free boundary in a diffusion equation which has dynamically evolving,

non--standard sources. Global existence and uniqueness are

established for this system. The work to be discussed represents a

collaborative effort with

Maria del Mar Gonzalez, Maria Pia Gualdani and Inwon Kim.

Series: Other Talks

In these talks we will introduce the basic definitions and examples of presheaves, sheaves and sheaf spaces. We will also explore various constructions and properties of these objects.

Series: ACO Student Seminar

Sum-Product inequalities originated in the early 80's

with the work of Erdos and Szemeredi, who showed that there exists

a constant c such that if A is a set of n integers, n sufficiently

large, then either the sumset A+A = {a+b : a,b in A} or the product

set A.A = {ab : a,b in A}, must exceed n^(1+c) in size. Since that

time the subject has exploded with a vast number of generalizations

and extensions of the basic result, which has led to many

very interesting unsolved problems (that would make great thesis

topics). In this talk I will survey some of the developments in this

fast-growing area.
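
A tiny numerical illustration (mine, not from the talk) of the tension behind the Erdos-Szemeredi result: an arithmetic progression has the smallest possible sumset but a large product set, while a geometric progression behaves the other way around, so no set can make both small at once.

```python
def sum_set(A):
    return {a + b for a in A for b in A}

def prod_set(A):
    return {a * b for a in A for b in A}

n = 20
AP = set(range(1, n + 1))        # arithmetic progression
GP = {2 ** i for i in range(n)}  # geometric progression

assert len(sum_set(AP)) == 2 * n - 1         # sumset as small as possible
assert len(prod_set(GP)) == 2 * n - 1        # product set as small as possible
assert len(prod_set(AP)) > len(sum_set(AP))  # but the other set is large
assert len(sum_set(GP)) > len(prod_set(GP))
```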

Series: Analysis Seminar

Series: Graph Theory Seminar

We will discuss the classic theorem of Walter Schnyder: a graph G is planar if and only if the dimension of its incidence poset is at most 3. This result has been extended by Brightwell and Trotter to show that the dimension of the vertex-edge-face poset of a planar 3-connected graph is 4 and the removal of any vertex (or by duality, any face) reduces the dimension to 3. Recently, this result and its extension to planar multigraphs was key to resolving the question of the dimension of the adjacency poset of a planar bipartite graph. It also serves to motivate questions about the dimension of posets with planar cover graphs.

Series: Stochastics Seminar

The goal of this talk is to present a new method for sparse estimation

which does not use standard techniques such as $\ell_1$ penalization.

First, we introduce a new setup for aggregation which bears strong links

with generalized linear models and thus encompasses various response

models such as Gaussian regression and binary classification. Second, by

combining maximum likelihood estimators using exponential weights we

derive a new procedure for sparse estimations which satisfies exact

oracle inequalities with the desired remainder term. Even though the

procedure is simple, its implementation is not straightforward but it

can be approximated using the Metropolis algorithm which results in a

stochastic greedy algorithm and performs surprisingly well in a

simulated problem of sparse recovery.

Series: Math Physics Seminar

The McKean--Vlasov (McK--V) system is a non--linear diffusion equation with a non--local

non--linearity provided by convolution. Recently popular in a variety

of applications, it enjoys an ancient heritage as a basis for

understanding equilibrium and near equilibrium fluids. The model is

discussed in finite volume where, on the basis of the physical

considerations, the correct scaling (for the model itself) is

identified. For dimension two and above and in large volume, the phase

structure of the model is completely elucidated in (somewhat

disturbing) contrast to dynamical results. This seminar represents

joint work with V. Panferov.

Series: SIAM Student Seminar

In this talk we will review some of the classical weighted theory for singular integral operators, and discuss some recent progress on finding sharp bounds in terms of the A_p constant associated with the weight.

Series: Combinatorics Seminar

Lovasz Local Lemma (LLL) is a powerful result in probability theory that states that the probability that none of a set of bad events happens is nonzero if the probability of each event is small compared to the number of events that depend on it. It is often used in combination with the probabilistic method for non-constructive existence proofs. A prominent application of LLL is to k-CNF formulas, where LLL implies that, if every clause in the formula shares variables with at most d \le 2^k/e other clauses then such a formula has a satisfying assignment. Recently, a randomized algorithm to efficiently construct a satisfying assignment was given by Moser. Subsequently Moser and Tardos gave a randomized algorithm to construct the structures guaranteed by the LLL in a very general algorithmic framework. We will address the main problem left open by Moser and Tardos of derandomizing their algorithm efficiently when the number of other events that any bad event depends on is possibly unbounded. An interesting special case of the open problem is the k-CNF problem when k = \omega(1), that is, when k is more than a constant.
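
To fix ideas, here is a minimal sketch of Moser's resampling idea for k-CNF (my own illustration, not the general Moser-Tardos framework): while some clause is violated, resample all of its variables uniformly at random; under the LLL-type degree condition this terminates quickly in expectation.

```python
import random

def moser_resample(clauses, n_vars, rng, max_rounds=100000):
    """Literals are +v / -v for 1-indexed variable v; returns a satisfying
    assignment as a list of booleans, or None if the round budget runs out."""
    assign = [None] + [rng.random() < 0.5 for _ in range(n_vars)]
    satisfied = lambda c: any(assign[abs(l)] == (l > 0) for l in c)
    for _ in range(max_rounds):
        bad = next((c for c in clauses if not satisfied(c)), None)
        if bad is None:
            return assign[1:]
        for l in bad:  # resample every variable of the violated clause
            assign[abs(l)] = rng.random() < 0.5
    return None

# A small satisfiable 3-CNF instance (made up for illustration).
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
result = moser_resample(clauses, n_vars=3, rng=random.Random(42))
assert result is not None
```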

Series: Geometry Topology Seminar

Series: PDE Seminar

Multicomponent reactive flows arise in many practical applications such as combustion, atmospheric modelling, astrophysics, chemical reactions, mathematical biology, etc. The objective of this work is to develop a rigorous mathematical theory based on the principles of continuum mechanics. Results on existence, stability, asymptotics as well as singular limits will be discussed.

Series: Research Horizons Seminar

Additive combinatorics is a relatively new field, with

many diverse and exciting research programmes. In this talk I will discuss

two of these programmes -- the continuing development of

sum-product inequalities, and the unfolding progress on

arithmetic progressions -- along with some new results proved by me and my

collaborators. Hopefully I will have time to mention some nice research

problems as well.

Series: ACO Student Seminar

In 1969, Gomory introduced the master group polyhedron for pure integer programs and derived the mixed integer cut (MIC) as a facet of a special family of these polyhedra. We study the MIC in this framework, characterizing both its facets and extreme points; next, we extend our results under mappings between group polyhedra; and finally, we conclude with related open problems. No prior knowledge of algebra or the group relaxation is assumed. Terminology will be introduced as needed. Joint work with Ellis Johnson.

Series: Other Talks

Wednesday, September 9, 2009 - 13:00 ,
Location: Skiles 114 ,
Amy Novick-Cohen ,
Technion ,
Organizer: John McCuan

Grain boundaries within polycrystalline materials are known to be governed by motion by mean curvature. However, when the polycrystalline specimen is thin, such as in thin films, then the effects of the exterior surfaces start to play an important role. We consider two particularly simple geometries, an axi-symmetric geometry, and a half loop geometry which is often employed in making measurements of the kinetic coefficient in the motion by mean curvature equation, obtaining corrective terms which arise due to the coupling of grain boundaries to the exterior surface. Joint work with Anna Rotman, Arkady Vilenkin & Olga Zelekman-Smirin.

Series: Analysis Seminar

We describe how time-frequency analysis is used to analyze boundedness

and Schatten class properties of pseudodifferential operators and

Fourier integral operators.

Series: Stochastics Seminar

Given a random word of size n whose letters are drawn independently

from an ordered alphabet of size m, the fluctuations of the shape of

the corresponding random RSK Young tableaux are investigated, when both

n and m converge together to infinity. If m does not grow too fast and

if the draws are uniform, the limiting shape is the same as the

limiting spectrum of the GUE. In the non-uniform case, a control of

both highest probabilities will ensure the convergence of the first row

of the tableau, i.e., of the length of the longest increasing

subsequence of the random word, towards the Tracy-Widom distribution.
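
The first row of the RSK tableau has the same length as the longest weakly increasing subsequence of the word. As a concrete aside (my own illustration, not code from the talk), that length can be computed by patience sorting:

```python
import bisect

def longest_weakly_increasing(word):
    """Patience sorting: piles[i] holds the smallest possible last letter of a
    weakly increasing subsequence of length i + 1."""
    piles = []
    for x in word:
        i = bisect.bisect_right(piles, x)  # bisect_right allows repeated letters
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

assert longest_weakly_increasing([1, 3, 2, 3, 1, 2, 3]) == 4  # e.g. 1, 2, 3, 3
assert longest_weakly_increasing([2, 2, 2]) == 3  # repeats count as increasing
assert longest_weakly_increasing([3, 2, 1]) == 1
```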

Series: SIAM Student Seminar

We develop a stochastic control system from a continuous-time

Principal-Agent model in which both the principal and the agent have

imperfect information and different beliefs about the project. We

attempt to optimize the agent’s utility function under the agent’s

belief. Via the corresponding Hamilton-Jacobi-Bellman equation we

prove that the value function is jointly continuous and satisfies the

Dynamic Programming Principle. These properties directly lead to the

conclusion that the value function is a viscosity solution of the HJB

equation. Uniqueness is then also established.

Series: Combinatorics Seminar

We consider the #P complete problem of counting the number of independent

sets in a given graph. Our interest is in understanding the effectiveness of

the popular Belief Propagation (BP) heuristic. BP is a simple and iterative

algorithm that is known to have at least one fixed point. Each fixed point

corresponds to a stationary point of the Bethe free energy (introduced by

Yedidia, Freeman and Weiss (2004) in recognition of Hans Bethe's earlier

work (1935)). The evaluation of the Bethe Free Energy at such a stationary

point (or BP fixed point) leads to the Bethe approximation to the number of

independent sets of the given graph. In general BP is not known to converge

nor is an efficient, convergent procedure for finding stationary points of

the Bethe free energy known. Further, effectiveness of Bethe approximation

is not well understood.

As the first result of this paper, we propose a BP-like algorithm that

always converges to a BP fixed point for any graph. Further, it finds an \epsilon

approximate fixed point in poly(n, 2^d, 1/\epsilon) iterations for a graph of n

nodes with max-degree d. As the next step, we study the quality of this

approximation. Using the recently developed 'loop series' approach by

Chertkov and Chernyak, we establish that for any graph of n nodes with

max-degree d and girth larger than 8d log n, the multiplicative error decays

as 1 + O(n^-\gamma) for some \gamma > 0. This provides a deterministic counting

algorithm that leads to strictly different results compared to a recent

result of Weitz (2006). Finally as a consequence of our results, we prove

that the Bethe approximation is exceedingly good for a random 3-regular

graph conditioned on the Shortest Cycle Cover Conjecture of Alon and Tarsi

(1985) being true.

(Joint work with Venkat Chandrasekaran, Michael Chertkov, David Gamarnik and

Devavrat Shah)
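
For intuition on why the Bethe approximation can be exact at large girth: on a tree, BP converges and recovers the true count. On a path, for instance, the leaf-to-root message recursion reduces to the following two-number sketch (my own illustration, not the paper's algorithm), and the number of independent sets is a Fibonacci number.

```python
def count_independent_sets_path(n):
    """Message recursion along an n-vertex path: track (# of independent sets
    of the processed prefix with its last vertex unoccupied, # with it occupied)."""
    free, occ = 1, 1  # a single vertex: either unoccupied or occupied
    for _ in range(n - 1):
        # occupying the next vertex forces its predecessor to be unoccupied
        free, occ = free + occ, free
    return free + occ

# 2, 3, 5, 8, 13, ...: the Fibonacci pattern
assert [count_independent_sets_path(n) for n in range(1, 6)] == [2, 3, 5, 8, 13]
```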

Friday, September 11, 2009 - 15:00 ,
Location: Skiles 269 ,
John Etnyre ,
Georgia Tech

We will discuss how to put a hyperbolic structure on various

surfaces and 3-manifolds. We will begin by discussing isometries of hyperbolic space in

dimension 2 and 3. Using our understanding of these isometries we will explicitly

construct hyperbolic structures on all closed surfaces of genus greater than one and a

complete finite volume hyperbolic structure on the punctured torus. We will then consider

the three dimensional case where we will concentrate on putting hyperbolic structures on

knot complements. (Note: this is a 2 hr seminar)

Series: Probability Working Seminar

The talk is based on the recent paper by M. Hairer, J. Mattingly, and M. Scheutzow with the same title. There are many Markov chains on infinite dimensional spaces whose one-step

transition kernels are mutually singular when starting from different initial

conditions. We give results which prove unique ergodicity under minimal

assumptions on one hand and the existence of a spectral gap under conditions

reminiscent of Harris' theorem. The first uses the existence of couplings which

draw the solutions together as time goes to infinity. Such "asymptotic

couplings" were central to recent work on SPDEs on which this work builds. The

emphasis here is on stochastic differential delay equations. Harris' celebrated

theorem states that if a Markov chain admits a Lyapunov function whose level

sets are "small" (in the sense that transition probabilities are uniformly

bounded from below), then it admits a unique invariant measure and transition

probabilities converge towards it at exponential speed. This convergence takes

place in a total variation norm, weighted by the Lyapunov function. A second

aim of this article is to replace the notion of a "small set" by the much

weaker notion of a "d-small set," which takes the topology of the underlying

space into account via a distance-like function d. With this notion at hand, we

prove an analogue to Harris' theorem, where the convergence takes place in a

Wasserstein-like distance weighted again by the Lyapunov function. This

abstract result is then applied to the framework of stochastic delay equations.

Series: CDSNS Colloquium

We study the behavior of the asymptotic dynamics of a dissipative reaction-diffusion equation in a dumbbell domain, which, roughly speaking, consists of two fixed domains joined by a thin channel. We analyze the behavior of the stationary solutions (solutions of the elliptic problem), their local unstable manifold and the attractor of the equation as the width of the connecting channel goes to zero.

Series: Geometry Topology Seminar

After reviewing a few techniques from the theory of confoliation in dimension three we will discuss some generalizations and certain obstructions in extending these techniques to higher dimensions. We also will try to discuss a few questions regarding higher dimensional confoliations.

Series: Other Talks

In this talk Professor Tapia identifies elementary mathematical frameworks for the study of popular drag racing beliefs. In this manner some myths are validated while others are destroyed. Tapia will explain why dragster acceleration is greater than the acceleration due to gravity, an age old inconsistency. His "Fundamental Theorem of Drag Racing" will be presented. The first part of the talk will be a historical account of the development of drag racing and will include several lively videos.

Series: Geometry Topology Seminar

A closed hyperbolic 3-manifold $M$ determines a fundamental class in the algebraic K-group $K_3^{ind}(C)$. There is a regulator map $K_3^{ind}(C)\to C/4\pi^2 Z$, which evaluated on the fundamental class recovers the volume and Chern-Simons invariant of $M$. The definitions of the K-groups are very abstract, and one is interested in more concrete models. The extended Bloch group is such a model. It is isomorphic to $K_3^{ind}(C)$ and has several interesting properties: elements are easy to produce; the fundamental class of a hyperbolic manifold can be constructed explicitly; the regulator is given explicitly in terms of a polylogarithm.

Series: PDE Seminar

Many problems in Geometry, Physics and Biology are described by nonlinear partial differential equations of second order or fourth order. In this talk I shall mainly address the blow-up phenomenon in a class of fourth order equations from conformal geometry and some Liouville systems from Physics and Ecology. There are some challenging open problems related to these equations and I will report the recent progress on these problems in my joint works with Gilbert Weinstein and Chang-shou Lin.

Series: ACO Student Seminar

I will describe a simple scheme for generating a valid inequality for a

stochastic integer programs from a given valid inequality for its

deterministic counterpart. Applications to stochastic lot-sizing problems

will be discussed. This is joint work with Yongpei Guan and George Nemhauser

and is based on the following two papers

(1) Y. Guan, S. Ahmed and G.L. Nemhauser. "Cutting planes for multi-stage

stochastic integer programs," Operations Research, vol.57, pp.287-298, 2009

(2)

Y. Guan, S. Ahmed and G. L. Nemhauser. "Sequential pairing of mixed integer

inequalities," Discrete Optimization, vol.4, pp.21-39, 2007

This is a joint DOS/ACO seminar.

Series: Research Horizons Seminar

(joint work with Csaba Biro, Dave Howard, Mitch Keller and Stephen Young. Biro and Young finished their Ph.D.'s at Georgia Tech in 2008. Howard and Keller will graduate in spring 2010)

Motivated by questions in algebra involving what is called "Stanley" depth, the following combinatorial question was posed to us by Herzog: Given a positive integer n, can you partition the family of all non-empty subsets of {1, 2, ..., n} into intervals, all of the form [A, B] where |B| is at least n/2? We answered this question in the affirmative by first embedding it in a stronger result and then finding two elegant proofs. In this talk, which will be entirely self-contained, I will give both proofs. The paper resulting from this research will appear in the Journal of Combinatorial Theory, Series A.

Series: Other Talks

Series: Graph Theory Seminar

This is the third session in this series and a special effort will be made to make it self-contained ... to the fullest extent possible. With Felsner and Li, we proved that the dimension of the adjacency poset of a graph is bounded as a function of the genus. For planar graphs, we have an upper bound of 8 and for outerplanar graphs, an upper bound of 5. For lower bounds, we have respectively 5 and 4. However, for bipartite planar graphs, we have an upper bound of 4, which is best possible. The proof of this last result uses the Brightwell/Trotter work on the dimension of the vertex/edge/face poset of a planar graph, and led to the following conjecture: For each h, there exists a constant c_h so that if P is a poset of height h and the cover graph of P is planar, then the dimension of P is at most c_h. With Stefan Felsner, we have recently resolved this conjecture in the affirmative. From below, we know from a construction of Kelly that c_h must grow linearly with h.

Friday, September 18, 2009 - 14:00 ,
Location: Skiles 269 ,
John Etnyre ,
Georgia Tech

We will discuss how to put a hyperbolic structure on various surfaces and 3-manifolds. We will begin by discussing isometries of hyperbolic space in dimensions 2 and 3. Using our understanding of these isometries, we will explicitly construct hyperbolic structures on all closed surfaces of genus greater than one and a complete finite volume hyperbolic structure on the punctured torus. We will then consider the three dimensional case, where we will concentrate on putting hyperbolic structures on knot complements. (Note: this is a 1.5 hr lecture)

Series: CDSNS Colloquium

Fourier's Law asserts that the heat flow through a point in a solid is proportional to the temperature gradient at that point. Fourier himself thought that this law could not be derived from the mechanical properties of the elementary constituents (atoms and electrons, in modern language) of the solid. On the contrary, we now believe that such a derivation is possible and necessary. At the core of this change of opinion is the introduction of probability in the description. We now see the microscopic state of a system as a probability measure on phase space so that evolution becomes a stochastic process. Macroscopic properties are then obtained through averages. I will introduce some of the models used in this research and discuss their relevance for the physical problem and the mathematical results one can obtain.

Monday, September 21, 2009 - 13:00 ,
Location: Skiles 255 ,
Yuliya Babenko ,
Department of Mathematics and Statistics, Sam Houston State University ,
Organizer: Doron Lubinsky

In this talk we first present the exact asymptotics of the optimal

error in the weighted L_p-norm, 1\leq p \leq \infty, of linear spline

interpolation of an arbitrary bivariate function f \in C^2([0,1]^2). We

further discuss the applications to numerical integration and adaptive

mesh generation for finite element methods, and explore connections

with the problem of approximating convex bodies by polytopes. In

addition, we provide the generalization to asymmetric norms.

We give a brief review of known results and introduce a series of new

ones. The proofs of these results lead to algorithms for the

construction of asymptotically optimal sequences of triangulations for

linear interpolation.

Moreover, we derive similar results for other classes of splines and

interpolation schemes, in particular for splines over rectangular

partitions.

Last but not least, we also discuss several multivariate

generalizations.

Series: Geometry Topology Seminar

The uniform thickness property (UTP) is a property of knots embedded in the 3-sphere with the standard contact structure. The UTP was introduced by Etnyre and Honda, and has been useful in studying the Legendrian and transversal classification of cabled knot types. We show that every iterated torus knot which contains at least one negative iteration in its cabling sequence satisfies the UTP. We also conjecture a complete UTP classification for iterated torus knots, and fibered knots in general.

Series: Other Talks

We discuss the convergence properties of first-order methods for two problems that

arise in computational geometry and statistics: the minimum-volume enclosing ellipsoid problem

and the minimum-area enclosing ellipsoidal cylinder problem for a set of m points in R^n.

The algorithms are old but the analysis is new, and the methods are remarkably effective

at solving large-scale problems to high accuracy.

Tuesday, September 22, 2009 - 15:00 ,
Location: Skiles 269 ,
Gunter Meyer ,
School of Mathematics, Georgia Tech ,
Organizer: Liang Peng

When the asset price follows geometric Brownian motion but allows random Poisson jumps (called jump diffusion), then the standard Black Scholes partial differential equation for the option price becomes a partial-integro differential equation (PIDE). If, in addition, the volatility of the diffusion is assumed to lie between given upper and lower bounds but is otherwise not known, then sharp upper and lower bounds on the option price can be found from the Black Scholes Barenblatt equation associated with the jump diffusion PIDE. In this talk I will introduce the model equations and then discuss the computational issues which arise when the Black Scholes Barenblatt PIDE for jump diffusion is to be solved numerically.
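As a rough companion to the model (not the Black Scholes Barenblatt PIDE solver the talk discusses), here is a Monte Carlo sketch of European call pricing under a jump diffusion with lognormal jumps; the function name and all parameter values are illustrative assumptions:

```python
import math
import numpy as np

def jump_diffusion_call(S0, K, T, r, sigma, lam, mu_j, sig_j,
                        n_paths=200_000, seed=0):
    """Monte Carlo price of a European call when the asset follows
    geometric Brownian motion with compound-Poisson lognormal jumps."""
    rng = np.random.default_rng(seed)
    # risk-neutral drift compensation: E[e^J - 1] = exp(mu_j + sig_j^2/2) - 1
    kappa = math.exp(mu_j + 0.5 * sig_j**2) - 1.0
    drift = (r - 0.5 * sigma**2 - lam * kappa) * T
    Z = rng.standard_normal(n_paths)
    N = rng.poisson(lam * T, n_paths)  # number of jumps on each path
    # sum of N iid Normal(mu_j, sig_j^2) jump sizes in the log price
    J = mu_j * N + sig_j * np.sqrt(N) * rng.standard_normal(n_paths)
    ST = S0 * np.exp(drift + sigma * math.sqrt(T) * Z + J)
    payoff = np.maximum(ST - K, 0.0)
    return math.exp(-r * T) * payoff.mean()
```

With lam = 0 the jumps vanish and the estimate should agree with the plain Black-Scholes price; turning the jumps on (with the compensated drift) raises the call value, since the terminal-price distribution becomes a mean-preserving spread.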

Series: PDE Seminar

We discuss the comparison principle for viscosity solutions of fully nonlinear elliptic PDEs in $\mathbb{R}^n$ which may have superlinear growth in $Du$ with variable coefficients. As an example, we keep the following PDE in mind:$$-\mathrm{tr}(A(x)D^2u)+\langle B(x)Du,Du\rangle +\lambda u=f(x)\quad \mbox{in }\mathbb{R}^n,$$where $A:\mathbb{R}^n\to S^n$ is nonnegative, $B:\mathbb{R}^n\to S^n$ is positive, and $\lambda >0$. Here $S^n$ is the set of $n\times n$ symmetric matrices. The comparison principle for viscosity solutions has been one of the main issues in viscosity solution theory. However, we do not know whether the comparison principle holds unless $B$ is a constant matrix. Moreover, it is not clear which kind of assumption on viscosity solutions at $\infty$ is suitable. There seem to be two choices: (1) one-sided boundedness ($i.e.$ bounded from below), (2) a growth condition. In this talk, assuming (2), we obtain the comparison principle for viscosity solutions. This is work in progress jointly with O. Ley.

Series: Research Horizons Seminar

Dodgson (the author of Alice in Wonderland) was an amateur

mathematician who wrote a book about determinants in 1866 and gave a copy

to the queen. The queen dismissed the book and so did the math community

for over a century. The Dodgson Condensation method resurfaced in the 80's

as the fastest method to compute determinants (almost always, and almost

surely). Interested in Lie groups and their representations? In

crystal bases? In cluster algebras? In alternating sign matrices?

OK, how about square ice? Are you nuts? If so, come and listen.
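The condensation recurrence itself is short. Below is a sketch in exact rational arithmetic (my own illustration, not from the talk); it fails when a zero interior entry arises, which is exactly why the "almost always" caveat matters:

```python
from fractions import Fraction

def dodgson_det(matrix):
    """Determinant by Dodgson condensation: repeatedly replace the matrix
    by its grid of connected 2x2 minors, dividing entrywise by the interior
    of the matrix from two steps earlier. Raises ZeroDivisionError if a
    zero interior entry appears (real implementations then permute rows
    or columns, or fall back to another method)."""
    A = [[Fraction(x) for x in row] for row in matrix]
    n = len(A)
    prev = [[Fraction(1)] * n for _ in range(n)]  # divisors for the first step
    while len(A) > 1:
        m = len(A) - 1
        B = [[(A[i][j] * A[i + 1][j + 1] - A[i][j + 1] * A[i + 1][j])
              / prev[i + 1][j + 1]
              for j in range(m)] for i in range(m)]
        prev, A = A, B
    return A[0][0]
```

For [[2,1,3],[4,5,6],[7,8,10]] the single division is by the interior entry 5, and the result is -3.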

Series: Other Talks

I will discuss how various geometric categories (e.g. smooth manifolds, complex manifolds) can be described in terms of locally ringed spaces. (A locally ringed space is a topological space endowed with a sheaf of rings whose stalks are local rings.) As an application of the notion of locally ringed space, I'll define what a scheme is.

Series: Analysis Seminar

We consider multipoint Padé approximation to Cauchy transforms of

complex measures. First, we recap that if the support of a measure is

an analytic Jordan arc and if the measure itself is absolutely

continuous with respect to the equilibrium distribution of that arc

with Dini-continuous non-vanishing density, then the diagonal

multipoint Padé approximants associated with appropriate interpolation

schemes converge locally uniformly to the approximated Cauchy

transform in the complement of the arc. Second, we show that this

convergence holds also for measures whose Radon–Nikodym derivative is

a Jacobi weight modified by a Hölder continuous function. The

asymptotic behavior of Padé approximants is deduced from the analysis

of underlying non–Hermitian orthogonal polynomials, for which the

Riemann–Hilbert–∂ method is used.

Series: School of Mathematics Colloquium

The asymmetric simple exclusion process (ASEP) is a continuous time Markov process of interacting particles on a lattice \Gamma. ASEP is defined by two rules: (1) A particle at x \in \Gamma waits an exponential time with parameter one, and then chooses y \in \Gamma with probability p(x, y); (2) If y is vacant at that time it moves to y, while if y is occupied it remains at x. The main interest lies in infinite particle systems. In this lecture we consider the ASEP on the integer lattice {\mathbb Z} with nearest neighbor jump rule: p(x, x+1) = p, p(x, x-1) = 1-p and p \ne 1/2. The integrable structure is that of Bethe Ansatz. We discuss various limit theorems which in certain cases establish KPZ universality.
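Rules (1) and (2) translate directly into a toy simulation. The sketch below (my own illustration) runs ASEP on a finite ring rather than all of Z, purely so that the code terminates; the function name and parameters are assumptions:

```python
import random

def simulate_asep(L=50, n_particles=20, p=0.7, t_max=100.0, seed=1):
    """ASEP on a ring of L sites: each particle carries a rate-one
    exponential clock; when its clock rings, it tries to hop right with
    probability p, left with probability 1-p, and moves only if the
    target site is vacant (the exclusion rule)."""
    rng = random.Random(seed)
    occupied = [False] * L
    positions = rng.sample(range(L), n_particles)
    for x in positions:
        occupied[x] = True
    t = 0.0
    while True:
        # the superposition of n rate-one clocks rings at total rate n
        t += rng.expovariate(n_particles)
        if t > t_max:
            break
        i = rng.randrange(n_particles)  # uniformly, the clock that rang
        x = positions[i]
        step = 1 if rng.random() < p else -1
        y = (x + step) % L
        if not occupied[y]:
            occupied[x], occupied[y] = False, True
            positions[i] = y
    return sorted(positions)
```

The exclusion rule guarantees the particle number is conserved and no site is ever doubly occupied.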

Series: Stochastics Seminar

I will describe recent work on the behavior of solutions to

reaction diffusion equations (PDEs) when the coefficients in the

equation are random. The solutions behave like traveling waves moving

in a randomly varying environment. I will explain how one can obtain

limit theorems (Law of Large Numbers and CLT) for the motion of the

interface. The talk will be accessible to people without much knowledge

of PDE.

Series: SIAM Student Seminar

In the study of one dimensional dynamical systems one often assumes that the functions involved have a negative Schwarzian derivative. In this talk we consider a generalization of this condition. Specifically, we consider the interval functions of a real variable having some iterate with a negative Schwarzian derivative and show that many known results generalize to this larger class of functions. The introduction of this class was motivated by some maps arising in neuroscience.
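For concreteness, the Schwarzian derivative is S f = f'''/f' - (3/2)(f''/f')^2, and negativity can be checked numerically for a specific map. The snippet below is my own sanity-check illustration (using the logistic map with closed-form derivatives), not part of the talk:

```python
def schwarzian(f1, f2, f3, x):
    """S f(x) = f'''(x)/f'(x) - (3/2) * (f''(x)/f'(x))**2, computed from
    callables for the first three derivatives of f."""
    return f3(x) / f1(x) - 1.5 * (f2(x) / f1(x)) ** 2

# Closed-form derivatives of the logistic map f(x) = 4x(1-x):
def d1(x): return 4.0 - 8.0 * x   # f'
def d2(x): return -8.0            # f''
def d3(x): return 0.0             # f'''
```

On (0,1) away from the critical point x = 1/2 (where f' vanishes), S f(x) = -(3/2)(8/(4-8x))^2 is strictly negative; for instance S f(1/4) = -24.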

Friday, September 25, 2009 - 15:00 ,
Location: Skiles 269 ,
Anh Tran ,
Georgia Tech

(This is a 2 hour lecture.)

In this talk I will give a quick review of classical invariants of

Legendrian knots in a 3-dimensional contact manifold (the topological knot type, the

Thurston-Bennequin invariant and the rotation number). These classical invariants do not

completely determine the Legendrian isotopy type of Legendrian knots, therefore we will

consider Contact homology (aka Chekanov-Eliashberg DGA), a new invariant that has been

defined in recent years. We also discuss the linearization of Contact homology, a method

to extract a more computable invariant out of the DGA associated to a Legendrian knot.

Series: Combinatorics Seminar

Since the seminal work of Erdos and Renyi, the phase transition of the largest components in random graphs has become one of the central topics in random graph theory and discrete probability theory. Of particular interest in recent years are random graphs with constraints (e.g. degree distribution, forbidden substructures), including random planar graphs. Let G(n,M) be a uniform random graph, a graph picked uniformly at random among all graphs on vertex set [n]={1,...,n} with M edges. Let P(n,M) be a uniform random planar graph, a graph picked uniformly at random among all graphs on vertex set [n] with M edges that are embeddable in the plane. Erdos-Renyi, Bollobas, and Janson-Knuth-Luczak-Pittel amongst others studied the critical behaviour of the largest components in G(n,M) when M=n/2+o(n), with scaling window of size n^{2/3}. For example, when M=n/2+s with s=o(n) and s \gg n^{2/3}, a.a.s. (i.e. with probability tending to 1 as n approaches \infty) G(n,M) contains a unique largest component (the giant component) of size (4+o(1))s. In contrast to G(n,M), one can observe two critical behaviours in P(n,M): when M=n/2+o(n), with scaling window of size n^{2/3}, and when M=n+o(n), with scaling window of size n^{3/5}. For example, when M=n/2+s with s = o(n) and s \gg n^{2/3}, a.a.s. the largest component in P(n,M) is of size (2+o(1))s, roughly half the size of the largest component in G(n,M), whereas when M=n+t with t = o(n) and t \gg n^{3/5}, a.a.s. the number of vertices outside the giant component is \Theta(n^{3/2}t^{-3/2}). (Joint work with Tomasz Luczak)
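The finite-n behaviour of G(n,M) is easy to probe by simulation. Below is an illustrative union-find sketch (not from the talk) for the size of the largest component; the sanity checks use crude supercritical and subcritical regimes rather than the precise scaling window:

```python
import random
from collections import Counter

def largest_component(n, M, seed=0):
    """Size of the largest component of a uniform random graph G(n, M),
    computed with union-find (path compression)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    edges = set()
    while len(edges) < M:  # sample M distinct edges uniformly
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    for u, v in edges:
        parent[find(u)] = find(v)
    return max(Counter(find(x) for x in range(n)).values())
```

With M well above n/2 (mean degree 4, say) a single giant component containing most vertices emerges, while for M well below n/2 all components stay small.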

Series: Probability Working Seminar

In this talk, we will introduce the classical Cramer's Theorem. The

pattern of proof is one of the two most powerful tools in the theory

of large deviations. Namely, the upper bound comes from optimizing

over a family of Chebyshev inequalities; while the lower bound comes

from introducing a Radon-Nikodym factor in order to make what was

originally "deviant" behavior look like typical behavior.

If time permits, we will extend Cramer's Theorem to a more general

setting and discuss the Sanov Theorem.

This talk is based on Deuschel and Stroock's "Large Deviations".

Series: CDSNS Colloquium

This talk continues from last week's colloquium.

Monday, September 28, 2009 - 13:00 ,
Location: Skiles 255 ,
Chad Topaz ,
Macalester College

Biological aggregations such as insect swarms, bird flocks, and fish schools are arguably some of the most common and least understood patterns in nature. In this talk, I will discuss recent work on swarming models, focusing on the connection between inter-organism social interactions and properties of macroscopic swarm patterns. The first model is a conservation-type partial integrodifferential equation (PIDE). Social interactions of incompressible form lead to vortex-like swarms. The second model is a high-dimensional ODE description of locust groups. The statistical-mechanical properties of the attractive-repulsive social interaction potential control whether or not individuals form a rolling migratory swarm pattern similar to those observed in nature. For the third model, we again return to a conservation-type PIDE and, via long- and short-wave analysis, determine general conditions that social interactions must satisfy for the population to asymptotically spread, contract, or reach steady state.

Series: Geometry Topology Seminar

Legendrian knots are knots that can be described only by their projections (without having to separately keep track of the over-under crossing information): the third coordinate is given as the slope of the projections. Every knot can be put in Legendrian position in many ways. In this talk we present an ongoing project (with Etnyre and Ng) of the complete classification of Legendrian representations of twist knots.

Series: PDE Seminar

We formulate a plasma model in which negative ions tend to a fixed, spatially-homogeneous background of positive charge. Instead of solutions with compact spatial support, we must consider those that tend to the background as x tends to infinity. As opposed to the traditional Vlasov-Poisson system, the total charge and energy are thus infinite, and energy conservation (which is an essential component of global existence for the traditional problem) cannot provide bounds for a priori estimates. Instead, a conserved quantity related to the energy is used to bound particle velocities and prove the existence of a unique, global-in-time, classical solution. The proof combines these energy estimates with a crucial argument which establishes spatial decay of the charge density and electric field.

Series: Mathematical Biology Seminar

The recent emergence of the new influenza strain (the "swine flu") and delays in production of vaccine against it illustrate the importance of optimizing vaccine allocation. Using an age-dependent model parametrized with data from the 1957 and 1918 influenza pandemics, which had dramatically different mortality patterns, we determined optimal vaccination strategies with regard to five outcome measures: deaths, infections, years of life lost, contingent valuation and economic costs. In general, there is a balance between vaccinating children, who transmit most, and older individuals, who are at greatest risk of mortality. However, we found that when at least a moderate supply of an effective vaccine is available, all outcome measures prioritized vaccinating schoolchildren: vaccinating those most responsible for transmission indirectly protects those most at risk of mortality and other disease complications. When vaccine availability or effectiveness is reduced, the balance shifts toward prioritizing those at greatest risk for some outcome measures. The amount of vaccine needed for vaccinating schoolchildren to be optimal depends on the general transmissibility of the influenza strain (R_0). We also found that the previous and new recommendations of the CDC and its Advisory Committee on Immunization Practices are below optimum for all outcome measures. In addition, I will discuss some recent results using mortality and hospitalization data from the novel H1N1 "swine flu" and implications of the delay in vaccine availability.

Series: Research Horizons Seminar

In the last 10 years there has been a resurgence of interest in questions about certain spaces of analytic functions. In this talk we will discuss various advances in the study of these spaces of functions and highlight questions of current interest in analytic function theory. We will give an overview of recent advances in the Corona Problem, bilinear forms on spaces of analytic functions, and highlight some methods to studying these questions that use more discrete techniques.

Series: Other Talks

After a few remarks to tie up some loose ends from last week's talk on locally

ringed spaces, I will discuss exact sequences of sheaves and give some natural

examples coming from real, complex, and algebraic geometry. In the context of these

examples, we'll see that a surjective map of sheaves (meaning a morphism of sheaves

which is surjective on the level of stalks) need not be surjective on global

sections. This observation will be used to motivate the need for "sheaf cohomology"

(which will be discussed in detail in subsequent talks).

Series: Analysis Seminar

The trigonometric Grassmannian parametrizes specific solutions of the KP hierarchy which correspond to rank one solutions of a differential-difference bispectral problem. It can be considered as a completion of the phase spaces of the trigonometric Calogero-Moser particle system or the rational Ruijsenaars-Schneider system.

I will describe the characterization of this Grassmannian in terms of representation theory of a suitable difference W-algebra. Based on joint work with L. Haine and E. Horozov.

Series: Dissertation Defense

Tanenbaum, Trenk, and Fishburn introduced the concept of linear discrepancy in 2001, proposing it as a way to measure a partially ordered set's distance from being a linear order. In addition to proving a number of results about linear discrepancy, they posed eight challenges and questions for future work. This dissertation completely resolves one of those challenges and makes contributions on two others. This dissertation has three principal components: 3-discrepancy irreducible posets of width 3, degree bounds, and online algorithms for linear discrepancy. The first principal component of this dissertation provides a forbidden subposet characterization of the posets with linear discrepancy equal to 2 by completing the determination of the posets that are 3-irreducible with respect to linear discrepancy. The second principal component concerns degree bounds for linear discrepancy and weak discrepancy, a parameter similar to linear discrepancy. Specifically, if every point of a poset is incomparable to at most \Delta other points of the poset, we prove three bounds: the linear discrepancy of an interval order is at most \Delta, with equality if and only if it contains an antichain of size \Delta+1; the linear discrepancy of a disconnected poset is at most \lfloor(3\Delta-1)/2\rfloor; and the weak discrepancy of a poset is at most \Delta-1. The third principal component of this dissertation incorporates another large area of research, that of online algorithms. We show that no online algorithm for linear discrepancy can be better than 3-competitive, even for the class of interval orders. We also give a 2-competitive online algorithm for linear discrepancy on semiorders and show that this algorithm is optimal.
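For very small posets, linear discrepancy can be computed directly from its definition by enumerating linear extensions. The brute-force sketch below is my own illustration (the encoding of the poset as a set of `(a, b)` relations is an assumption), not an algorithm from the dissertation:

```python
from itertools import permutations

def linear_discrepancy(elements, less_than):
    """Brute-force linear discrepancy: the minimum, over all linear
    extensions, of the maximum position gap between incomparable pairs.
    `less_than` is a set of pairs (a, b) meaning a < b in the poset
    (assumed transitively closed)."""
    incomparable = [(a, b) for i, a in enumerate(elements)
                    for b in elements[i + 1:]
                    if (a, b) not in less_than and (b, a) not in less_than]
    best = None
    for perm in permutations(elements):
        pos = {x: i for i, x in enumerate(perm)}
        if any(pos[a] > pos[b] for a, b in less_than):
            continue  # not a linear extension
        gap = max((abs(pos[a] - pos[b]) for a, b in incomparable), default=0)
        best = gap if best is None else min(best, gap)
    return best
```

For example, an antichain on k points has linear discrepancy k-1 (some incomparable pair must sit at the two ends of any linear order), while a chain has linear discrepancy 0.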

Series: Stochastics Seminar

The Black‐Scholes model for stock price as geometric Brownian motion, and the

corresponding European option pricing formula, are standard tools in mathematical

finance. In the late seventies, Cox and Ross developed a model for stock price based

on a stochastic differential equation with fractional diffusion coefficient. Unlike the

Black‐Scholes model, the model of Cox and Ross is not solvable in closed form, hence

there is no analogue of the Black‐Scholes formula in this context. In this talk, we

discuss a new method, based on Stratonovich integration, which yields explicitly

solvable arbitrage‐free models analogous to that of Cox and Ross. This method gives

rise to a generalized version of the Black‐Scholes partial differential equation. We

study solutions of this equation and a related ordinary differential equation.
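For contrast with the Cox-Ross model, the Black-Scholes closed form is easy to state in code. Here is a minimal sketch using the error function for the normal CDF, with put-call parity as a quick correctness check (the function names are my own):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, K, T, r, sigma, call=True):
    """Black-Scholes price of a European option on a non-dividend-paying
    stock following geometric Brownian motion."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if call:
        return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
```

Put-call parity, C - P = S - K e^{-rT}, holds exactly for these formulas, which makes it a convenient unit test.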

Series: SIAM Student Seminar

I will describe some interesting properties of frames and Gabor frames in particular. Then we will examine how frames may lead to interesting decompositions of integral operators. In particular, I will share some theorems for pseudodifferential operators and Fourier integral operators arising from Gabor frames.

Friday, October 2, 2009 - 15:00 ,
Location: Skiles 269 ,
Igor Belegradek ,
Georgia Tech

This 2 hour talk is a gentle introduction to simply-connected surgery theory (following classical work by Browder, Novikov, and Wall). The emphasis will be on classification of high-dimensional manifolds and understanding concrete examples.

Series: Probability Working Seminar

The talk is based on the paper by B. Klartag. It will be shown that there exists a sequence \eps_n\to 0 for which the

following holds: let K be a compact convex subset in R^n with nonempty

interior and X a random vector uniformly distributed in K. Then there

exists a unit vector v, a real number \theta and \sigma^2>0 such that

d_TV(<X,v>, Z)\leq \eps_n

where Z has Normal(\theta,\sigma^2) distribution and d_TV - the total

variation distance. Under some additional assumptions on X, the

statement is true for most vectors v \in R^n.
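The special case of the cube already illustrates the phenomenon: the marginal of the uniform distribution along the main diagonal is a normalized sum of i.i.d. uniforms, hence approximately Normal(0, 1/12) by the classical CLT. The sketch below (my own illustration, far weaker than Klartag's theorem for general convex bodies) samples that marginal:

```python
import math
import random

def diagonal_marginal(n, n_samples=20000, seed=0):
    """Sample <X, v> for X uniform in the cube [-1/2, 1/2]^n and v the
    unit diagonal (1,...,1)/sqrt(n); this is a rescaled sum of n i.i.d.
    uniforms, so by the CLT it is nearly Normal(0, 1/12)."""
    rng = random.Random(seed)
    scale = 1.0 / math.sqrt(n)
    return [scale * sum(rng.uniform(-0.5, 0.5) for _ in range(n))
            for _ in range(n_samples)]
```

Already for moderate n the sample mean is near 0 and the sample variance near 1/12, matching the Gaussian limit.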

Series: Combinatorics Seminar

It is known that, relative to any fixed vertex q of a finite graph, there

exists a unique q-reduced divisor (G-Parking function based at q) in

each linear equivalence class of divisors.

In this talk, I will give an efficient algorithm for finding such reduced

divisors. Using this, I will give an explicit and efficient bijection

between the Jacobian group and the set of spanning trees of the graph. Then

I will outline some applications of the main results, including a new

approach to the Random Spanning Tree problem, efficient computation of the

group law in the critical and sandpile group, efficient algorithm for the

chip-firing game of Baker and Norine, and the relation to the Riemann-Roch

theory on finite graphs.
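One standard route to q-reduced divisors is Dhar's burning algorithm. The sketch below is an illustrative implementation for divisors that are already effective away from q (an assumption made for simplicity; the talk's algorithm handles the general case efficiently):

```python
def q_reduce(adj, D, q=0):
    """Compute the q-reduced divisor linearly equivalent to D on the graph
    with edge-multiplicity matrix `adj`, assuming D(v) >= 0 for all v != q.
    Dhar's burning algorithm: start a fire at q; a vertex v burns once the
    number of edges from burnt vertices exceeds D(v); if some set stays
    unburnt, fire it (as a set) and repeat until everything burns."""
    n = len(adj)
    D = list(D)
    while True:
        burnt = {q}
        changed = True
        while changed:
            changed = False
            for v in range(n):
                if v not in burnt and sum(adj[v][u] for u in burnt) > D[v]:
                    burnt.add(v)
                    changed = True
        unburnt = [v for v in range(n) if v not in burnt]
        if not unburnt:
            return D  # every vertex burns: D is q-reduced
        for v in unburnt:  # fire the unburnt set; internal edges cancel
            D[v] -= sum(adj[v][u] for u in burnt)
            for u in burnt:
                D[u] += adj[v][u]
```

On the triangle with q = 0, the divisor (0, 2, 0) reduces to (1, 0, 1): vertex 1 fails to burn, fires once, and afterwards the whole graph burns.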

Series: Geometry Topology Seminar

Series: Research Horizons Seminar

In linear algebra classes we learn that a symmetric matrix with

real entries has real eigenvalues. But many times we deal with nonsymmetric

matrices that we want to have real eigenvalues and be stable under a

small perturbation. In the 1930's totally positive matrices were discovered

in mechanical problems of vibrations, then lost for over 50 years. They

were rediscovered in the 1990's as esoteric objects in quantum groups and

crystal bases. In the 2000's these matrices appeared in relation to

Teichmuller space and its quantization. I plan to give a high school

introduction to totally positive matrices.

Series: Other Talks

We will define the Cech cohomology groups of a sheaf and discuss some basic properties of the Cech construction.

Series: Analysis Seminar

Modulation spaces are a class of Banach spaces which provide a quantitative time-frequency analysis of functions via the Short-Time Fourier Transform. The modulation spaces are the "right" spaces for time-frequency analysis, and they occur in many problems in the same way that Besov spaces are attached to wavelet theory and issues of smoothness. In this seminar, I will talk about embeddings of modulation spaces into BMO or VMO (the spaces of functions of bounded or vanishing mean oscillation, respectively). Membership in VMO is central to the Balian-Low Theorem, which is a cornerstone of time-frequency analysis.

Series: School of Mathematics Colloquium

A classification of the dynamics of polynomials in one complex variable has remained elusive, even when considering only the simpler "structurally stable" polynomials. In this talk, I will describe the basics of polynomial iteration, leading up to recent results in the direction of a complete classification. In particular, I will describe a (singular) metric on the complex plane induced by the iteration of a polynomial. I will explain how this geometric structure relates to topological conjugacy classes within the moduli space of polynomials.

Series: Graph Theory Seminar

A metric graph is a geometric realization of a finite graph by identifying each edge with a real interval. A divisor on a metric graph Gamma is an element of the free abelian group on Gamma. The rank of a divisor on a metric graph is a concept appearing in the Riemann-Roch theorem for metric graphs (or tropical curves) due to Gathmann and Kerber, and Mikhalkin and Zharkov. A rank-determining set of a metric graph Gamma is defined to be a subset A of Gamma such that the rank of a divisor D on Gamma is always equal to the rank of D restricted on A. I will present an algorithm to derive the reduced divisor from any effective divisor in the same linear system, and show constructively that there exist finite rank-determining sets. Based on this discovery, we can compute the rank of an arbitrary divisor on any metric graph. In addition, I will discuss the properties of rank-determining sets in general and formulate a criterion for rank-determining sets.

Series: SIAM Student Seminar

This talk is based on a paper by Medvedev and Scaillet which derives closed form

asymptotic expansions for option implied volatilities (and option prices).

The model is a two-factor jump-diffusion stochastic volatility model with short time to

maturity. The authors derive a power series expansion (in log-moneyness and time

to maturity) for the implied volatility of near-the-money options with small time to

maturity. In this talk, I will discuss their techniques and results.
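A common baseline against such closed-form expansions is to recover implied volatility by inverting the Black-Scholes formula numerically. Here is a bisection sketch (my own illustration, not the paper's method; function names are assumptions):

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Implied volatility by bisection: bs_call is strictly increasing in
    sigma, so shrink the bracket [lo, hi] until it is smaller than tol."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Monotonicity in sigma guarantees bisection converges; feeding back a Black-Scholes price should recover the volatility that produced it.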

Friday, October 9, 2009 - 15:00 ,
Location: Skiles 269 ,
Igor Belegradek ,
Georgia Tech
This 2-hour talk is a gentle introduction to simply-connected surgery theory (following classical work by Browder, Novikov, and Wall). The emphasis will be on the classification of high-dimensional manifolds and understanding concrete examples.


Series: Stochastics Seminar

One of the most important stochastic partial differential equations, known as the superprocess, arises as a limit in population dynamics. There are several notions of uniqueness, but for many years only weak uniqueness was known. For a certain range of parameters, Mytnik and Perkins recently proved strong uniqueness. I will describe joint work with Barlow, Mytnik and Perkins which proves nonuniqueness for the parameters not included in Mytnik and Perkins' result. This completely settles the question for strong uniqueness, but I will end by giving some problems which are still open.


Series: Combinatorics Seminar

In this talk I will discuss a new technique discovered by myself and Olof Sisask which produces many new insights in additive combinatorics, not to mention new proofs of classical theorems previously proved only using harmonic analysis. Among these new proofs is one for Roth's theorem on three-term arithmetic progressions, which gives the best bounds so far achieved by any combinatorial method. Another is a new proof that positive density subsets of the integers mod p contain very long arithmetic progressions, first proved by Bourgain, and improved upon by Ben Green and Tom Sanders. If time permits, I will discuss how the method can be applied to the 2D corners problem.


Monday, October 12, 2009 - 13:00 ,
Location: Skiles 255 ,
Wei Zhu ,
University of Alabama (Department of Mathematics) ,
wzhu7@bama.ua.edu ,
Organizer: Sung Ha Kang

The Rudin-Osher-Fatemi (ROF) model is one of the most powerful and popular models in image denoising. Despite its simple form, the ROF functional has proved to be nontrivial to minimize by conventional methods. The difficulty is mainly due to the nonlinearity and poor conditioning of the related problem. In this talk, I will focus on the minimization of the ROF functional in the one-dimensional case. I will present a new algorithm that arrives at the minimizer of the ROF functional fast and exactly. The proposed algorithm will be compared with the standard and popular gradient projection method in accuracy, efficiency and other aspects.
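To make the objects concrete: in the discrete 1D setting the ROF functional is E(u) = (1/2) sum_i (u_i - f_i)^2 + lambda sum_i |u_{i+1} - u_i|. The sketch below minimizes it by projected gradient on the dual variable, one standard generic approach; it is not the exact algorithm of the talk, and the function names and parameters are illustrative.

```python
def rof_denoise_1d(f, lam, n_iter=2000, tau=0.25):
    """Approximately minimize the discrete 1D ROF functional
        E(u) = 0.5 * sum((u_i - f_i)^2) + lam * sum(|u_{i+1} - u_i|)
    by projected gradient on the dual variable p (with |p_i| <= lam),
    recovering u = f - D^T p, where (D u)_i = u_{i+1} - u_i.
    An illustrative sketch, not the exact algorithm of the talk;
    tau <= 1/4 suffices for convergence since ||D D^T|| <= 4."""
    n = len(f)
    p = [0.0] * (n - 1)
    u = list(f)
    for _ in range(n_iter):
        # primal iterate: u = f - D^T p, with (D^T p)_i = p_{i-1} - p_i
        u = [f[i] - ((p[i - 1] if i > 0 else 0.0)
                     - (p[i] if i < n - 1 else 0.0))
             for i in range(n)]
        # dual gradient ascent step, then projection onto [-lam, lam]
        for i in range(n - 1):
            q = p[i] + tau * (u[i + 1] - u[i])
            p[i] = max(-lam, min(lam, q))
    return u
```

On a step signal the minimizer keeps the two plateaus flat and shrinks each side of the jump by lam divided by the plateau length, which gives a simple sanity check.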

Series: Geometry Topology Seminar

The deformation variety is similar to the representation variety in that it describes (generally incomplete) hyperbolic structures on 3-manifolds with torus boundary components. However, the deformation variety depends crucially on a triangulation of the manifold: there may be entire components of the representation variety which can be obtained from the deformation variety with one triangulation but not another, and it is unclear how to choose a "good" triangulation that avoids these problems. I will describe the "extended deformation variety", which deals with many situations that the deformation variety cannot. In particular, given a manifold which admits some ideal triangulation we can construct a triangulation such that we can recover any irreducible representation (with some trivial exceptions) from the associated extended deformation variety.

Tuesday, October 13, 2009 - 15:00 ,
Location: Skiles 269 ,
Suzanne Lee ,
College of Management, Georgia Tech ,
Organizer: Christian Houdre

We propose a new two-stage semi-parametric test and estimation procedure to investigate predictability of stochastic jump arrivals in asset prices. It allows us to search for conditional information that affects the likelihood of jump occurrences up to the intra-day level, so that the usual factor analysis for jump dynamics can be achieved. Based on the new theory of inference, we find empirical evidence of jump clustering in U.S. individual equity markets during normal trading hours. We also present other intra-day jump predictors such as analysts' recommendation updates and stock news releases.


Series: PDE Seminar

We study the asymptotic behavior of the infinite Darcy-Prandtl number Darcy-Brinkman-Boussinesq model for convection in porous media at small Brinkman-Darcy number. This is a singular limit involving a boundary layer with thickness proportional to the square root of the Brinkman-Darcy number. This is joint work with Jim Kelliher and Roger Temam.

Series: Mathematical Biology Seminar

Hubbell's neutral model provides a rich theoretical framework to study ecological communities. By coupling ecological and evolutionary time scales, it allows investigating how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point mutation event in the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical data of distributions of species abundances surprisingly well. More realistic speciation models have been proposed, such as the random fission model in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here we present a self-consistent approximation method for the neutral community model with random fission speciation. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. However, fitting the model to tropical tree data sets, we find that it performs worse than the original neutral model with point mutation speciation.
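The point-mutation version of the neutral model is easy to simulate directly. The sketch below is a minimal Moran-type implementation with illustrative parameter values (and the simplification that the replacement parent is drawn from the whole community rather than the survivors); it returns a species abundance distribution of the kind fitted to data.

```python
import random
from collections import Counter

def neutral_community(J=200, nu=0.05, steps=20000, seed=1):
    """Simulate Hubbell's neutral model with point-mutation speciation.

    At each step a uniformly random individual dies and is replaced
    either by a brand-new species (with probability nu, the speciation
    rate) or by the offspring of a randomly chosen individual. J is the
    fixed community size. Returns the species abundance distribution
    as a Counter mapping species label -> abundance."""
    rng = random.Random(seed)
    community = [0] * J          # species labels; start monodominant
    next_label = 1
    for _ in range(steps):
        i = rng.randrange(J)     # death of a random individual
        if rng.random() < nu:    # speciation as a point mutation event
            community[i] = next_label
            next_label += 1
        else:                    # birth from a random individual
            community[i] = community[rng.randrange(J)]
    return Counter(community)
```

Histogramming many such runs approximates the stationary species abundance distribution that the exact solution of the basic model describes.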


Series: Research Horizons Seminar

Image segmentation has been widely studied, especially since the Mumford-Shah functional was proposed. Many theoretical works as well as numerous extensions have been studied throughout the years. This talk will give an introduction to these image segmentation functionals. I will start with a review of the Mumford-Shah functional and discuss the Chan-Vese model. Some new extensions will be presented at the end.


Series: Other Talks

We will briefly review the definition of the Cech cohomology groups of a sheaf (so if you missed last week's talk, you should still be able to follow this week's), discuss some basic properties of the Cech construction, and give some computations that show how the theory connects to other things (like ordinary cohomology and line bundles).

Wednesday, October 14, 2009 - 13:00 ,
Location: Skiles 269 ,
Edson Denis Leonel ,
Universidade Estadual Paulista, Rio Claro, Brazil

Fermi acceleration is a phenomenon where a classical particle can acquire unlimited energy upon collisions with a heavy moving wall. In this talk, I will give a short review of one-dimensional Fermi accelerator models and discuss some scaling properties for them. In particular, when inelastic collisions of the particle with the boundary are taken into account, suppression of Fermi acceleration is observed. I will give an example of a two-dimensional time-dependent billiard where such a suppression also happens.
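A widely used caricature of such models is the simplified ("static-wall") Fermi-Ulam map, in which the wall's oscillation enters only through the momentum transfer at each collision. Conventions for the map vary in the literature; the version and parameter values below are an illustrative sketch, not the specific models of the talk.

```python
import math

def fermi_ulam(v0, phi0, eps, n_steps):
    """Iterate a simplified (static-wall) Fermi-Ulam map.

    v: particle speed, phi: oscillation phase at collision, eps: wall
    oscillation amplitude. The wall is treated as fixed in space but
    transfers momentum as if it were moving (the standard
    simplification). Returns the list of (v, phi) at each collision.
    An inelastic variant would damp v by a factor at each kick,
    which is the mechanism suppressing Fermi acceleration."""
    v, phi = v0, phi0
    traj = [(v, phi)]
    for _ in range(n_steps):
        v = abs(v + 2.0 * eps * math.sin(phi))    # momentum kick at collision
        phi = (phi + 2.0 / v) % (2.0 * math.pi)   # free flight to next collision
        traj.append((v, phi))
    return traj
```

Plotting the speed along long orbits of such maps is how the scaling properties mentioned above are usually exhibited.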

Series: Analysis Seminar

Given an "infinite symmetric matrix" W we give a simple condition, related to the shift operator being expansive on a certain sequence space, under which W is positive. We apply this result to AAK-type theorems for generalized Hankel operators, providing new insights related to previous work by S. Treil and A. Volberg. We also discuss applications and open problems.


Series: Analysis Seminar

In this talk, I will discuss some results obtained in my Ph.D. thesis. First, the point mass formula will be introduced. Using the formula, we shall see how the asymptotics of orthogonal polynomials relate to the perturbed Verblunsky coefficients. Then I will discuss two classes of measures on the unit circle -- one with Verblunsky coefficients \alpha_n --> 0 and the other one with \alpha_n --> L (non-zero) -- and explain the methods I used to tackle the point mass problem involving these measures. Finally, I will discuss the point mass problem on the real line. For a long time it was believed that a point mass perturbation will generate an exponentially small perturbation on the recursion coefficients. I will demonstrate that indeed there is a large class of measures for which that proposition is false.

Series: SIAM Student Seminar

This talk considers the following sequence shuffling problem: Given a biological sequence (either DNA or protein) s, generate a random instance among all the permutations of s that exhibit the same frequencies of k-lets (e.g. dinucleotides, doublets of amino acids, triplets, etc.). Since certain biases in the usage of k-lets are fundamental to biological sequences, effective generation of such sequences is essential for the evaluation of the results of many sequence analysis tools. This talk introduces two sequence shuffling algorithms: a simple swapping-based algorithm is shown to generate a near-random instance and appears to work well, although its efficiency is unproven; a generation algorithm based on Euler tours is proven to produce a precisely uniform instance, and hence solve the sequence shuffling problem, in time not much more than linear in the sequence length.
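The Euler-tour idea can be sketched for the dinucleotide (k = 2) case: each adjacent pair of s becomes a directed edge, and any Eulerian path from s[0] to s[-1] spells out a sequence with exactly the same doublet counts. The sketch below uses Hierholzer's algorithm with randomly shuffled edge lists, so it produces a random, but not provably uniform, instance; uniform algorithms of the kind in the talk additionally sample a random arborescence first.

```python
import random
from collections import Counter, defaultdict

def doublet_shuffle(seq, rng=random):
    """Return a permutation of seq with identical dinucleotide counts.

    Builds the multigraph whose edges are the adjacent character pairs
    of seq and reads off a random Eulerian path from seq[0] to seq[-1]
    (iterative Hierholzer with shuffled edge lists). Random but not
    provably uniform among all valid shuffles."""
    adj = defaultdict(list)
    for a, b in zip(seq, seq[1:]):
        adj[a].append(b)
    for nbrs in adj.values():
        rng.shuffle(nbrs)
    stack, path = [seq[0]], []
    while stack:
        v = stack[-1]
        if adj[v]:
            stack.append(adj[v].pop())   # follow an unused edge
        else:
            path.append(stack.pop())     # dead end: record and backtrack
    return "".join(reversed(path))

def doublet_counts(seq):
    """Multiset of adjacent character pairs."""
    return Counter(zip(seq, seq[1:]))
```

Since an Eulerian path uses every edge exactly once, the output necessarily preserves the doublet counts, the first and last characters, and the overall letter composition.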

Friday, October 16, 2009 - 15:00 ,
Location: Skiles 169 ,
Amey Kaloti ,
Georgia Tech

This is a 2-hour talk.

Heegaard Floer homology is an invariant of closed 3-manifolds defined by Peter Ozsvath and Zoltan Szabo. It has proven to be a very strong invariant of 3-manifolds with connections to contact topology. In these talks we will try to define Heegaard Floer homology without assuming much background in low-dimensional topology. One more goal is to present the combinatorial description of this theory.


Series: Combinatorics Seminar

There has been substantial work on approximation algorithms for clustering data under distance-based objective functions such as k-median, k-means, and min-sum objectives. This work is fueled in part by the hope that approximating these objectives well will indeed yield more accurate solutions. That is, for problems such as clustering proteins by function, or clustering images by subject, there is some unknown correct "target" clustering, and the implicit assumption is that clusterings that are approximately optimal in terms of these distance-based measures are also approximately correct in terms of error with respect to the target. In this work we show that if we make this implicit assumption explicit -- that is, if we assume that any c-approximation to the given clustering objective Phi is epsilon-close to the target -- then we can produce clusterings that are O(epsilon)-close to the target, even for values c for which obtaining a c-approximation is NP-hard. In particular, for the k-median, k-means, and min-sum objectives, we show that we can achieve this guarantee for any constant c > 1.

Our results show how, by explicitly considering the alignment between the objective function used and the true underlying clustering goals, one can bypass computational barriers and perform as if these objectives were computationally substantially easier.

This talk is based on joint work with Avrim Blum and Anupam Gupta (SODA 2009), Mark Braverman (COLT 2009), and Heiko Roeglin and Shang-Hua Teng (ALT 2009).


Series: CDSNS Colloquium

Despite advances in treatment of chronic hepatitis B virus (HBV) infection, liver transplantation remains the only hope for many patients with end-stage liver disease due to HBV. A complication with liver transplantation, however, is that the new liver is eventually reinfected in chronic HBV patients by infection in other compartments of the body. We have formulated a model to describe the dynamics of HBV after liver transplant, considering the liver and the blood as areas of infection. Analyzing the model, we observe that the system shows either a transcritical or a backward bifurcation. Explicit conditions on the model parameters are given for the backward bifurcation to be present, to be reduced, or to disappear. Consequently, we investigate possible factors that are responsible for HBV/HCV infection and assess control strategies to reduce HBV/HCV reinfection and improve graft survival after liver transplantation.


Monday, October 19, 2009 - 13:00 ,
Location: Skiles 255 ,
Helga S. Huntley ,
University of Delaware

Biologists tracking crab larvae, engineers designing pollution mitigation strategies, and climate scientists studying tracer transport in the oceans are among many who rely on trajectory predictions from ocean models. State-of-the-art models have been shown to produce reliable velocity forecasts for 48-72 hours, yet the predictability horizon for trajectories and related Lagrangian quantities remains significantly shorter. We investigate the potential for decreasing Lagrangian prediction errors by applying a constrained normal mode analysis (NMA) to blend drifter observations with model velocity fields. The properties of an unconstrained NMA and the effects of parameter choices are discussed. The constrained NMA technique is initially presented in a perfect model simulation, where the "true" velocity field is known and the resulting error can be directly assessed. Finally, we will show results from a recent experiment in the East Asia Sea, where real observations were assimilated into operational ocean model hindcasts.


Series: Geometry Topology Seminar

We will introduce new constructions of infinite families of smooth structures on small 4-manifolds and infinite families of smooth knottings of surfaces.

Series: Analysis Working Seminar

In this working seminar we will give a proof of Seip's characterization of interpolating sequences in the Bergman space of analytic functions. This topic has connections with complex analysis, harmonic analysis, and time-frequency analysis, so hopefully many of the participants will be able to get something out of the seminar. The initial plan will be to work through his 1993 Inventiones paper and supplement this with material from his book "Interpolation and Sampling in Spaces of Analytic Functions". Notes will be generated as the seminar progresses.


Series: Graph Theory Seminar

In 1865, Sylvester posed the following problem: For a region R in the plane, let q(R) denote the probability that four points chosen at random from R form a convex quadrilateral. What is the infimum q* of q(R) taken over all regions R? The number q* is known as Sylvester's Four Point Problem Constant (Sylvester's Constant for short). At first sight, it is hard to imagine how to find reasonable estimates for q*. Fortunately, Scheinerman and Wilf found that Sylvester's Constant is intimately related to another fundamental constant in discrete geometry. The rectilinear crossing number rcr(K_n) of the complete graph K_n is the minimum number of crossings of edges in a drawing of K_n in the plane in which every edge is a straight segment. It is not difficult to show that the limit as n goes to infinity of rcr(K_n)/{n\choose 4} exists; this is the rectilinear crossing number constant RCR. Scheinerman and Wilf proved a surprising connection between these constants: q* = RCR. Finding estimates of rcr(K_n) seems like a more approachable task. A major breakthrough was achieved in 2003 by Lovasz, Vesztergombi, Wagner, and Welzl, and simultaneously by Abrego and Fernandez-Merchant, who unveiled an intimate connection of rcr(K_n) with another classical problem of discrete geometry, namely the number of <k-sets in a set of points in the plane. With techniques developed within the last few years, we have been able to determine the exact value of rcr(K_n) for n < 28. Moreover, we have proved that 0.37992 < RCR < 0.38045 (the quotient of the lower and upper bounds is roughly 0.998). The natural feeling is that the steady progress from the last few years on both fronts (upper and lower bounds) should be a reason for optimism about the chances to finally find the exact value of RCR. In this talk we will survey the techniques we have developed to reach these bounds, as well as our recent theoretical results that hint that the current techniques will not take us much further in the quest to find q* = RCR.
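The quantity q(R) is easy to probe by Monte Carlo. The sketch below estimates q for the unit disk (not the extremal region, just an illustration of the definition), where the exact value 1 - 35/(12*pi^2), approximately 0.7045, is classical.

```python
import math
import random

def convex_position(pts):
    """True iff the four points are in convex position, i.e. none of
    them lies in the triangle spanned by the other three."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def in_triangle(p, a, b, c):
        d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
        neg = d1 < 0 or d2 < 0 or d3 < 0
        pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (neg and pos)   # same sign (or zero) on all sides
    for i in range(4):
        others = [pts[j] for j in range(4) if j != i]
        if in_triangle(pts[i], *others):
            return False
    return True

def estimate_q_disk(n_trials, rng):
    """Monte Carlo estimate of q(disk): the probability that four
    uniform points in the unit disk form a convex quadrilateral."""
    def rand_pt():
        while True:   # rejection sampling from the bounding square
            x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
            if x * x + y * y <= 1:
                return (x, y)
    hits = sum(convex_position([rand_pt() for _ in range(4)])
               for _ in range(n_trials))
    return hits / n_trials
```

This kind of sampling illustrates why direct estimation of q* itself is hard: one would have to optimize over all regions R, which is exactly where the rcr(K_n) connection becomes valuable.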

Tuesday, October 20, 2009 - 15:00 ,
Location: Skiles 269 ,
Daniel Bauer ,
Georgia State University ,
Organizer: Liang Peng

In recent literature, different methods have been proposed on how to define and model stochastic mortality. In most of these approaches, the so-called spot force of mortality is modeled as a stochastic process. In contrast to such spot force models, forward force mortality models infer dynamics on the entire age/term-structure of mortality.


This paper considers forward models defined based on best-estimate forecasts of survival probabilities as can be found in so-called best-estimate generation life tables. We provide a detailed analysis of forward mortality models driven by finite-dimensional Brownian motion. In particular, we address the relationship to other modeling approaches, the consistency problem of parametric forward models, and the existence of finite-dimensional realizations for Gaussian forward models. All results are illustrated based on a simple example with an affine specification.

Series: PDE Seminar

Under the classical small-amplitude, long wave-length assumptions in which the Stokes number is of order one, so featuring a balance between nonlinear and dispersive effects, the KdV equation

u_t + u_x + u u_x + u_xxx = 0     (1)

and the regularized long wave equation, or BBM equation,

u_t + u_x + u u_x - u_xxt = 0     (2)

are formal reductions of the full, two-dimensional Euler equations for free surface flow. This talk is concerned with the two-point boundary value problem for (1) and (2) wherein the wave motion is specified at both ends of a finite stretch of length L of the medium of propagation. After ascertaining natural boundary specifications that constitute well-posed problems, it is shown that the solution of the two-point boundary value problem, posed on the interval [0, L], say, converges as L tends to infinity to the solution of the quarter-plane boundary value problem in which a semi-infinite stretch [0, ∞) of the medium is disturbed at its finite end (the so-called wavemaker problem). In addition to its intrinsic interest, our results provide justification for the use of the two-point boundary-value problem in numerical studies of the quarter-plane problem for both the KdV equation and the BBM equation.
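Equation (1) admits the classical solitary-wave solutions u = a sech^2(sqrt(a/12)(x - ct)) with speed c = 1 + a/3, and this is easy to verify numerically. The sketch below (with illustrative grid parameters) checks that the residual of (1), computed with central finite differences and the travelling-wave identity u_t = -c u_x, is small.

```python
import math

def soliton(x, a=1.0):
    """Solitary-wave profile u = a sech^2(b x) with b = sqrt(a/12),
    which travels at speed c = 1 + a/3 under equation (1)."""
    b = math.sqrt(a / 12.0)
    return a / math.cosh(b * x) ** 2

def kdv_residual(a=1.0, dx=0.01, half_width=15.0):
    """Max of |u_t + u_x + u u_x + u_xxx| over the grid for the
    travelling wave, using u_t = -c u_x and second-order central
    finite differences in x; should be O(dx^2)."""
    c = 1.0 + a / 3.0
    n = int(2 * half_width / dx) + 1
    xs = [-half_width + i * dx for i in range(n)]
    u = [soliton(x, a) for x in xs]
    worst = 0.0
    for i in range(2, n - 2):
        ux = (u[i + 1] - u[i - 1]) / (2 * dx)
        uxxx = (u[i + 2] - 2 * u[i + 1] + 2 * u[i - 1] - u[i - 2]) / (2 * dx ** 3)
        r = (1.0 - c) * ux + u[i] * ux + uxxx   # u_t + u_x + u u_x + u_xxx
        worst = max(worst, abs(r))
    return worst
```

Such exact solitary waves are a standard test case for the numerical two-point boundary value solvers that the convergence result above justifies.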


Series: Mathematical Biology Seminar

Treatment of bacterial infections with antibiotics is universally accepted as one of (if not THE) most significant contributions of medical intervention to reducing mortality and morbidity during the last century. Surprisingly, basic knowledge about how antibiotics kill or prevent the growth of bacteria is only just beginning to emerge, and the dose and term of antibiotic treatment has long been determined by clinicians empirically and intuitively.


There is a recent drive to theoretically and experimentally rationalize antibiotic treatment protocols, with the aim to design protocols which maximize antibiotics' efficacy while preventing resistance emergence. Central to these endeavors are the pharmacodynamics of the antibiotic(s) and bacteria, PD (the relationship between the concentration of the antibiotic and the rate of growth/death of bacteria), and the pharmacokinetics of the antibiotic, PK (the distribution and change in concentration of the antibiotics in a treated host). The procedures for estimating PD and PK parameters are well established and standardized worldwide. Although different PK parameters are commonly employed in the design of antibiotic treatment protocols, in most of these considerations a single PD parameter is usually used: the minimum inhibitory concentration (MIC). The Clinical and Laboratory Standards Institute (CLSI) approved method for estimating MICs defines testing conditions that are optimal for the antibiotic, like low densities and exponential growth, which rarely obtain outside of the laboratory and virtually never in the bacteria infecting mammalian hosts. Real infections with clinical symptoms commonly involve very high densities of bacteria, most of which are not replicating, and these bacteria are rarely planktonic, rather residing as colonies or within matrices called biofilms which sometimes include other species of bacteria. Refractoriness (non-inherited resistance) is the term used to describe an observed inefficacy of antibiotics on otherwise antibiotic-susceptible bacterial populations. This talk will focus on our efforts to describe the pharmacodynamic relationship between Staphylococcus aureus and antibiotics of six classes in the light of antibiotic refractoriness.

I will begin by addressing the effects of cell density on the MIC index, after which I intend to present unpublished data descriptive of physiology-related effects on antibiotic efficacy. Additionally, we will explore the potential contribution of such in vitro results to observed/predicted clinical outcomes, using standard mathematical models of antibiotic treatment which also serve to generate testable hypotheses.

Series: Research Horizons Seminar

Orthogonal polynomials are an important tool in many areas of pure and applied mathematics. We outline one application in random matrix theory. We discuss generalizations of orthogonal polynomials such as the Muntz orthogonal polynomials investigated by Ulfar Stefansson. Finally, we present some conjectures about biorthogonal polynomials, which would be a great Ph.D. project for any interested student.


Series: Other Talks

As we have seen already, the global section functor is left exact. To get a long exact sequence, I will first give the construction of derived functors in the more general setting of abelian categories with enough injectives. If time permits, I will then show that for any ringed space the category of all sheaves of Modules is an abelian category with enough injectives, and so we can construct sheaf cohomology as the right derived functors of the global section functor. The relation with Cech cohomology will be studied in a subsequent talk.

Series: Analysis Seminar

In this talk we will discuss Kolmogorov- and Landau-type inequalities for derivatives. These are inequalities which estimate the norm of the intermediate

derivative of a function (defined on an interval, R_+, R, or

their multivariate analogs) from some class in terms of the norm of the

function itself and norm of its highest derivative.

We shall present several new results on sharp inequalities of this type

for special classes of functions (multiply monotone and absolutely

monotone) and sequences. We will also highlight some of the techniques

involved in the proofs (comparison theorems) and discuss several

applications.
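As a concrete instance of the inequalities described above (stated here as classical background, not as a result of the talk), Landau's inequality on the half-line and Kolmogorov's general form read:

```latex
% Landau (half-line R_+, sharp constant 2):
\|f'\|_{\infty} \;\le\; 2\,\|f\|_{\infty}^{1/2}\,\|f''\|_{\infty}^{1/2},
\qquad
% Kolmogorov-type form, 0 < k < n:
\|f^{(k)}\|_{\infty} \;\le\; C(n,k)\,\|f\|_{\infty}^{1-k/n}\,\|f^{(n)}\|_{\infty}^{k/n}.
```

The talk's results concern sharp constants C(n,k) for restricted classes such as multiply monotone functions.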

Series: Other Talks

Tea and light refreshments at 1:30 in Room 2222. Organizer: Santosh Vempala

Concentration results for the TSP, MWST and many other problems with random inputs show the answer is concentrated tightly around the mean. But most results assume uniform density of the input. We will generalize these to heavy-tailed inputs, which seem to be ubiquitous in modern applications. To accomplish this, we prove two new general probability inequalities. The simpler first inequality weakens both hypotheses of the Hoeffding-Azuma inequality and is enough to tackle the TSP, MWST and random projections. The second inequality further weakens the moment requirements and, using it, we prove the best possible concentration for the long-studied bin packing problem as well as some others. Many other applications seem possible.
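For reference, the classical Hoeffding-Azuma inequality whose hypotheses the first result weakens: if (X_k) is a martingale with bounded increments |X_k - X_{k-1}| ≤ c_k, then

```latex
\Pr\bigl(|X_n - X_0| \ge t\bigr)
\;\le\; 2\exp\!\left(-\frac{t^{2}}{2\sum_{k=1}^{n} c_k^{2}}\right).
```

The heavy-tailed setting of the talk is precisely one where such uniform increment bounds are unavailable.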

Series: School of Mathematics Colloquium

After a brief account of some of

the history of this classical subject,

we indicate how such models are derived.

Rigorous theory justifying the models

will be discussed and the conversation

will then turn to some applications.

These will be drawn from questions

arising in geophysics and coastal

engineering, as time permits.

Series: Graph Theory Seminar

The Jacobian of a graph, also known as the Picard Group, Sandpile Group, or Critical Group, is a discrete analogue of the Jacobian of an algebraic curve. It is known that the order of the Jacobian of a graph is equal to its number of spanning trees, but the exact structure is known for only a few classes of graphs. In this talk I will present a combinatorial method of approaching the Jacobian of graphs by way of a chip-firing game played on its vertices. We then apply this chip-firing game to explicitly characterize the Jacobian of nearly complete graphs, those obtained from the complete graph by deleting either a cycle or two vertex-disjoint paths incident with all but one vertex. This is joint work with Sergey Norin.
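The order statement mentioned above is easy to check numerically via Kirchhoff's matrix-tree theorem (a sketch of the counting fact only, not the chip-firing method of the talk): the number of spanning trees, and hence the order of the Jacobian, is any cofactor of the graph Laplacian.

```python
import numpy as np

# Kirchhoff's matrix-tree theorem: the order of the Jacobian (sandpile group)
# of a graph equals its number of spanning trees, which is any cofactor of
# the graph Laplacian L = D - A. For K_5 this is 5^(5-2) = 125 (Cayley).
n = 5
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of K_5
Lap = np.diag(A.sum(axis=1)) - A         # graph Laplacian
cofactor = np.linalg.det(Lap[1:, 1:])    # delete one row and one column
print(round(cofactor))                   # 125
```

Determining the group structure, rather than just its order, is the harder problem that the chip-firing game addresses.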

Series: Stochastics Seminar

In this talk, we study an interacting particle system arising in the

context of series Jackson queueing networks. Using effectively nothing

more than the Cauchy-Binet identity, which is a standard tool in

random-matrix theory, we show that its transition probabilities can be

written as a signed sum of non-crossing probabilities. Thus, questions

on time-dependent queueing behavior are translated to questions on

non-crossing probabilities. To illustrate the use of this connection,

we prove that the relaxation time (i.e., the reciprocal of the

'spectral gap') of a positive recurrent system equals the relaxation

time of a single M/M/1 queue with the same arrival and service rates as

the network’s bottleneck station. This resolves a 1985 conjecture from

Blanc on series Jackson networks.

Joint work with Jon Warren, University of Warwick.
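As background for the statement of the result (a classical fact, assuming the standard formula for the M/M/1 spectral gap): for a single M/M/1 queue with arrival rate λ and service rate μ > λ, the spectral gap is (√μ − √λ)², so the relaxation time referred to above is

```latex
T_{\mathrm{rel}} \;=\; \frac{1}{\bigl(\sqrt{\mu}-\sqrt{\lambda}\bigr)^{2}}.
```

The theorem asserts that the whole network relaxes no slower than this single-queue bound evaluated at the bottleneck station.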

Series: Analysis Seminar

I will review recent and classical results concerning the

asymptotic properties (as N --> \infty) of 'ground state' configurations

of N particles restricted to a d-dimensional compact set A\subset {\bf R}^p

that minimize the Riesz s-energy functional

\sum_{i\neq j}\frac{1}{|x_{i}-x_{j}|^{s}}

for s>0.

Specifically, we will discuss the following:

(1) For s < d, the ground state configurations have limit distribution as

N --> \infty given by the equilibrium measure \mu_s, while the first

order asymptotic growth of the energy of these configurations is given by

the 'transfinite diameter' of A.

(2) We study the behavior of \mu_s as s approaches the critical

value d (for s\ge d, there is no equilibrium measure). In the case that

A is a fractal, the notion of 'order two density' introduced by Bedford

and Fisher naturally arises. This is joint work with M. Calef.

(3) As s --> \infty, ground state configurations approach best-packing

configurations on A. In work with S. Borodachov and E. Saff we show that

such configurations are asymptotically uniformly distributed on A.
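A tiny numerical illustration of the energy functional above (a sketch on the unit circle, where equally spaced points are known to minimize the Riesz energy; parameter values are arbitrary):

```python
import numpy as np

# Riesz s-energy E_s(X) = sum over i != j of 1/|x_i - x_j|^s for points on
# the unit circle (d = 1, embedded in R^2). We compare equally spaced points,
# which minimize the energy on the circle, with a random configuration.
def riesz_energy(theta, s=1.0):
    pts = np.column_stack([np.cos(theta), np.sin(theta)])
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(len(theta), k=1)        # each unordered pair once
    return 2.0 * np.sum(1.0 / D[iu] ** s)        # factor 2: ordered pairs

N = 40
rng = np.random.default_rng(0)
E_uniform = riesz_energy(2 * np.pi * np.arange(N) / N)
E_random = riesz_energy(rng.uniform(0, 2 * np.pi, N))
print(E_uniform < E_random)   # True: equally spaced points have lower energy
```

For s = 1 < d the limiting empirical distribution of minimizers is the equilibrium measure, which on the circle is the uniform measure, consistent with the comparison above.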

Friday, October 23, 2009 - 15:00 ,
Location: Skiles 269 ,
Amey Kaloti ,
Georgia Tech

This is a 2-hour talk.

Heegaard Floer homology is an invariant of closed 3-manifolds defined by Peter

Ozsvath and Zoltan Szabo. It has proven to be a very strong invariant of 3-manifolds with

connections to contact topology. In these talks we will try to define Heegaard Floer

homology without assuming much background in low-dimensional topology. One more goal is

to present the combinatorial description of this theory.

Series: ACO Seminar

The class of random regular graphs has been the focus of extensive study highlighting

the excellent expansion properties of its typical instance. For instance, it is well

known that almost every regular graph of fixed degree is essentially Ramanujan, and

understanding this class of graphs sheds light on the general behavior of expanders.

In this talk we will present several recent results on random regular graphs,

focusing on the interplay between their spectrum and geometry.

We will first discuss the relation between spectral properties and the abrupt

convergence of the simple random walk to equilibrium, derived from precise

asymptotics of the number of paths between vertices. Following the study of the graph

geometry we proceed to its random perturbation via exponential weights on the edges

(first-passage-percolation). We then show how this allows the derivation of various

properties of the classical Erdős-Rényi random graph near criticality. Finally,

returning to the spectrum of random regular graph, we discuss the question of how

close they really are to being Ramanujan and conclude with related problems involving

random matrices.

Based on joint works with Jian Ding, Jeong Han Kim and Yuval Peres, with Allan Sly

and with Benny Sudakov and Van Vu.
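The Ramanujan property mentioned above is concrete enough to check by hand on a small example (a sketch using the Petersen graph, which is not random but is a convenient 3-regular test case): a d-regular graph is Ramanujan if every nontrivial adjacency eigenvalue λ satisfies |λ| ≤ 2√(d−1).

```python
import numpy as np

# The 3-regular Petersen graph: outer 5-cycle, inner pentagram, five spokes.
# Its adjacency spectrum is {3, 1 (x5), -2 (x4)}, within 2*sqrt(2) ~ 2.83,
# so it satisfies the Ramanujan bound |lambda| <= 2*sqrt(d - 1).
edges = [(i, (i + 1) % 5) for i in range(5)]            # outer 5-cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]   # inner pentagram
edges += [(i, i + 5) for i in range(5)]                 # spokes
A = np.zeros((10, 10))
for u, v in edges:
    A[u, v] = A[v, u] = 1
lams = np.sort(np.linalg.eigvalsh(A))
nontrivial = max(abs(lams[0]), abs(lams[-2]))   # exclude the top eigenvalue d = 3
print(nontrivial <= 2 * np.sqrt(3 - 1))         # True: Petersen is Ramanujan
```

The talk's question is how close the nontrivial eigenvalues of a typical random d-regular graph come to this 2√(d−1) threshold.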

Series: Probability Working Seminar

The talk is based on a 1992 paper by Yakov Sinai. He proves a localization property for random walks in a random potential known as Nechaev's model.

Series: Other Talks

From time to time a new algorithm comes along that causes a sensation in theoretical computer science or in an area of application because of its resolution of a long-standing open question, its surprising efficiency, its practical usefulness, the novelty of its setting or approach, the elegance of its structure, the subtlety of its analysis or its range of applications. We will give examples of algorithms that qualify for greatness for one or more of these reasons, and discuss how to equip students to appreciate them and understand their strengths and weaknesses.

Series: Other Talks

Many models from a variety of areas involve the computation of an equilibrium or fixed point of some kind. Examples include Nash equilibria in games; price equilibria in markets; optimal strategies and the values of competitive games (stochastic and other games); stable configurations of neural networks; analysis of the evolution of various types of dynamic stochastic models. It is not known whether these problems can be solved in polynomial time. Despite their broad diversity, there are certain common computational principles that underlie different types of equilibria and connect many of these problems to each other. In this talk we will discuss some of these common principles and the corresponding complexity classes that capture them; the effect on the complexity of the underlying computational framework; and the relationship with other open questions in computation.

Series: Other Talks

The spectral properties of a graph are intimately related to its structure. This can be applied in the study of discrete isoperimetric problems and in the investigation of extremal and algorithmic questions for graphs. I will discuss several recent examples illustrating this theme.

Series: ACO Distinguished Lecture

Preceded by a reception at 4:10pm.

To come to grips with consciousness, I postulate that living entities in

general, and human beings in particular, are mechanisms... marvelous

mechanisms to be sure but not magical ones... just mechanisms. On this

basis, I look to explain some of the paradoxes of consciousness such as

Samuel Johnson's "All theory is against the freedom of the will; all

experience is for it."

I will explain concepts of self-awareness and free will from a mechanistic

view. My explanations make use of computer science and suggest why these

phenomena would exist even in a completely deterministic world. This is

particularly striking for free will.

The impressions of our senses, like the sense of the color blue, the sound

of a tone, etc. are to be expected of a mechanism with enormously many

inputs categorized into similarity classes of sight, sound, et
