Seminars and Colloquia by Series

Friday, January 23, 2009 - 15:00 , Location: Skiles 269 , Mohammad Ghomi , Ga Tech , Organizer: John Etnyre
The $h$-principle consists of a powerful collection of tools developed by Gromov and others to solve underdetermined partial differential equations and relations arising in differential geometry and topology. In these talks I will describe the holonomic approximation theorem of Eliashberg and Mishachev, and discuss some of its applications, including the sphere eversion theorem of Smale. Further, I will discuss the method of convex integration and its application to proving the $C^1$ isometric embedding theorem of Nash.
Friday, January 23, 2009 - 12:30 , Location: Skiles 269 , Linwei Xin , School of Mathematics, Georgia Tech , Organizer:
In this talk, I will focus on some interesting examples involving conditional expectation and martingales: the "martingale" gambling system, Polya's urn scheme, the Galton-Watson process, and the Wright-Fisher model of population genetics. I will skip the theorems and properties; the definitions needed to support the examples will be introduced. The talk will not assume much probability, just some basic measure theory.
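Polya's urn is a standard first example of a martingale: the fraction of white balls after each draw has constant expectation. The following short simulation (illustrative only, not part of the abstract) checks this numerically for an urn starting with one white and one black ball.

```python
import random

def polya_urn(steps, rng):
    """Simulate Polya's urn: draw a ball uniformly at random, return it
    together with one extra ball of the same color. Start with one white
    and one black ball; return the final fraction of white balls."""
    white, total = 1, 2
    for _ in range(steps):
        if rng.random() < white / total:  # drew a white ball
            white += 1
        total += 1
    return white / total

# The fraction of white balls is a martingale, so its expectation stays
# at the initial value 1/2 however many draws are made.
rng = random.Random(0)
fractions = [polya_urn(200, rng) for _ in range(5000)]
print(sum(fractions) / len(fractions))  # ≈ 0.5
```

Individual runs can end anywhere in (0, 1) (the limiting fraction is in fact uniformly distributed for this starting urn), but the average over many runs stays near 1/2.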
Thursday, January 22, 2009 - 11:00 , Location: Skiles 269 , Alexander Its , Indiana University-Purdue University Indianapolis , Organizer: Guillermo Goldsztein
In this talk we will review some of the global asymptotic results obtained during the last two decades in the theory of the classical Painlevé equations with the help of the isomonodromy Riemann-Hilbert method. The results include the explicit derivation of asymptotic connection formulae, the explicit description of linear and nonlinear Stokes phenomena, and the explicit evaluation of the distribution of poles. We will also discuss some of the most recent results emerging from the appearance of Painlevé equations in random matrix theory. The Riemann-Hilbert method itself will be outlined as well.
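For orientation, the first two of the six classical Painlevé equations, in their standard textbook forms (given here for context; they are not specific results of the talk), are
$$P_{\mathrm{I}}:\quad u'' = 6u^2 + t, \qquad P_{\mathrm{II}}:\quad u'' = 2u^3 + tu + \alpha,$$
where $\alpha$ is a constant parameter. Their solutions, the Painlevé transcendents, generically cannot be expressed through classical special functions, which is why the asymptotic connection problem requires the Riemann-Hilbert machinery.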
Wednesday, January 21, 2009 - 16:30 , Location: Klaus , Michael Mitzenmacher , Harvard University , Organizer: Robin Thomas
We describe recent progress in the study of the binary deletion channel and related channels with synchronization errors, including a clear description of many open problems in this area. As an example, while the capacities of the binary symmetric channel and the binary erasure channel have been known since Shannon, we still do not have a closed-form expression for the capacity of the binary deletion channel. We highlight a recent result showing that the capacity is at least (1-p)/9 when each bit is deleted independently with fixed probability p.
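The channel model itself is easy to state in code. The sketch below (illustrative, not from the talk) passes a random bit string through the deletion channel: each bit is dropped independently with probability p, and crucially the receiver sees only the concatenated survivors, with no markers indicating where deletions occurred — this loss of synchronization is what makes the capacity hard to pin down.

```python
import random

def deletion_channel(bits, p, rng):
    """Binary deletion channel: each bit is deleted independently with
    probability p; surviving bits are concatenated with no gap markers."""
    return [b for b in bits if rng.random() >= p]

rng = random.Random(1)
sent = [rng.randint(0, 1) for _ in range(100_000)]
received = deletion_channel(sent, 0.3, rng)

# On average a fraction (1 - p) of the input survives.
print(len(received) / len(sent))  # ≈ 0.7
# The lower bound cited in the abstract: capacity >= (1 - p)/9 bits per input bit.
print((1 - 0.3) / 9)  # ≈ 0.0778
```

Contrast this with the erasure channel, where the receiver would instead see a placeholder at every deleted position and the capacity is simply 1 - p.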
Tuesday, January 20, 2009 - 15:00 , Location: Skiles 269 , Anton Leykin , University of Illinois at Chicago , Organizer: Stavros Garoufalidis
Numerical algebraic geometry provides a collection of novel methods to treat the solutions of systems of polynomial equations. These hybrid symbolic-numerical methods, based on homotopy continuation techniques, have found a wide range of applications in both pure and applied areas of mathematics. This talk gives an introduction to numerical algebraic geometry and outlines directions in which the area has been developing. Two topics are highlighted: (1) computation of Galois groups of Schubert problems, a recent application of numerical polynomial homotopy continuation algorithms to enumerative algebraic geometry; (2) numerical primary decomposition, the first numerical method that discovers embedded solution components.
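The core idea of homotopy continuation can be sketched in a few lines: deform an easy start system $g$ with known roots into the target system $f$ along $H(x,t) = (1-t)\,\gamma\, g(x) + t\, f(x)$, and follow each root numerically from $t=0$ to $t=1$ with Newton corrections. The toy univariate tracker below is a hypothetical textbook-style sketch, nothing like production path tracking (no adaptive steps, no endgames); the random complex constant $\gamma$ is the usual "gamma trick" that generically keeps the paths from crossing.

```python
def track(f, df, g, dg, x, gamma, steps=200, newton_iters=5):
    """Follow one root of g(x) = 0 to a root of f(x) = 0 along the
    homotopy H(x, t) = (1 - t) * gamma * g(x) + t * f(x), using small
    steps in t with a few Newton corrections at each step."""
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            h = (1 - t) * gamma * g(x) + t * f(x)
            dh = (1 - t) * gamma * dg(x) + t * df(x)
            x = x - h / dh  # Newton correction back onto the path
    return x

# Target system f(x) = x^2 - 3x + 2 (roots 1 and 2);
# start system g(x) = x^2 - 1 with known roots +1 and -1.
f = lambda x: x * x - 3 * x + 2
df = lambda x: 2 * x - 3
g = lambda x: x * x - 1
dg = lambda x: 2 * x

gamma = 0.6 + 0.8j  # "gamma trick": a random complex constant
roots = sorted(track(f, df, g, dg, x0, gamma).real
               for x0 in (1.0 + 0j, -1.0 + 0j))
print(roots)  # close to [1.0, 2.0]
```

Real software such as Bertini or PHCpack follows the same principle for large multivariate systems, with certified step control in place of the fixed schedule used here.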
Series: PDE Seminar
Friday, January 16, 2009 - 16:05 , Location: Skiles 255 , Benoit Perthame , Université Pierre et Marie Curie, Paris , Organizer:
Living systems are subject to constant evolution through the two processes of mutation and selection, a principle discovered by Darwin. In a very simple, general, and idealized description, their environment can be considered as a nutrient shared by all the population. This allows certain individuals, characterized by a 'phenotypical trait', to expand faster because they are better adapted to the environment, and leads to selection of the 'best fitted trait' in the population (a singular point of the system). On the other hand, the new-born population undergoes small variations of the trait under the effect of genetic mutations. In these circumstances, is it possible to describe the dynamical evolution of the current trait? We will give a mathematical model of such dynamics, based on parabolic equations, and show that an asymptotic method allows us to formalize precisely the concepts of monomorphic or polymorphic population. Then, we can describe the evolution of the 'best fitted trait' and eventually compute various forms of branching points, which represent the cohabitation of two different populations. The concepts are based on the asymptotic analysis of the above-mentioned parabolic equations, appropriately rescaled. This leads to concentration of the solutions, and the difficulty is to evaluate the weight and position of the moving Dirac masses that describe the population. We will show that a new type of Hamilton-Jacobi equation, with constraints, naturally describes this asymptotic. Some additional theoretical questions, such as uniqueness for the limiting H.-J. equation, will also be addressed. This work is based on collaborations with O. Diekmann, P.-E. Jabin, S. Mischler, S. Cuadrado, J. Carrillo, S. Genieys, M. Gauduchon and G. Barles.
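As a rough mathematical sketch of this limit (in the standard rescaling used in this literature; the precise model of the talk may differ): writing $n_\varepsilon(x,t)$ for the density of the population with trait $x$ and setting $u_\varepsilon = \varepsilon \ln n_\varepsilon$, the rescaled parabolic model
$$\varepsilon\, \partial_t n_\varepsilon = \varepsilon^2 \Delta n_\varepsilon + n_\varepsilon\, R(x, I_\varepsilon(t))$$
leads formally, as $\varepsilon \to 0$, to a Hamilton-Jacobi equation with constraint,
$$\partial_t u = |\nabla_x u|^2 + R\big(x, I(t)\big), \qquad \max_x u(x,t) = 0,$$
whose maximum points locate the moving Dirac masses, i.e. the 'best fitted traits'.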
Series: Other Talks
Friday, January 16, 2009 - 14:00 , Location: Klaus 2447 , Vladimir Vapnik , NEC Laboratories, Columbia University and Royal Holloway University of London , Organizer:

You are cordially invited to attend a reception that will follow the seminar to chat informally with faculty and students. Refreshments will be provided.

The existing machine learning paradigm considers a simple scheme: given a set of training examples, find, in a given collection of functions, the one that best approximates the unknown decision rule. In such a paradigm the teacher does not play an important role. In human learning, however, the role of the teacher is very important: along with examples, a teacher provides students with explanations, comments, comparisons, and so on. In this talk I will introduce elements of human teaching into machine learning. I will consider an advanced learning paradigm called learning using hidden information (LUHI), where at the training stage a teacher supplies some additional information x^* about each training example x; this information is not available at the test stage. I will consider the LUHI paradigm for support vector machine-type algorithms, demonstrate its superiority over the classical paradigm, and discuss general questions related to it. For details see FODAVA, Foundations of Data Analysis and Visual Analytics.
Friday, January 16, 2009 - 13:00 , Location: Skiles 255 , Aaron Levin , Scuola Normale Superiore Pisa , Organizer: Matt Baker
After introducing and reviewing the situation for rational and integral points on curves, I will discuss various aspects of integral points on higher-dimensional varieties. In addition to discussing recent higher-dimensional results, I will also touch on connections with the value distribution theory of holomorphic functions and give some concrete open problems.
Series: Other Talks
Friday, January 16, 2009 - 13:00 , Location: Klaus 2447 , Alexey Chervonenkis , Russian Academy of Science and Royal Holloway University of London , Organizer:
It is shown (theoretically and empirically) that a reliable result can be gained only when there is a certain relation between the capacity of the class of models from which we choose and the size of the training set. There are different ways to measure the capacity of a class of models. In practice the size of a training set is always finite and limited, which leads to the idea of choosing a model from the narrowest possible class, in other words of using the simplest model (Occam's razor). But if our class is narrow, it is possible that it contains no true model, nor any model close to the true one. This means a greater residual error, or a larger number of errors, even on the training set. So the problem of model complexity choice arises: to find a balance between errors due to the limited amount of training data and errors due to excessive model simplicity. I shall review different approaches to this problem.
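The trade-off described above can be seen in a tiny experiment (an illustrative sketch, not from the talk): on noisy data from a linear law, a model class that is rich enough to fit the training set exactly, here Lagrange interpolation through all ten points, achieves zero training error but large test error, while the far narrower class of straight lines generalizes well.

```python
import random

def fit_line(xs, ys):
    """Least-squares straight-line fit (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    return lambda x: a + b * x

def interpolate(xs, ys):
    """Lagrange interpolation: fits every training point exactly."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Data from the linear law y = x plus Gaussian noise.
rng = random.Random(42)
train_x = [i / 9 for i in range(10)]
train_y = [x + rng.gauss(0, 0.1) for x in train_x]
test_x = [(i + 0.5) / 9 for i in range(9)]   # held-out points between the nodes
test_y = [x + rng.gauss(0, 0.1) for x in test_x]

line = fit_line(train_x, train_y)
interp = interpolate(train_x, train_y)

print("line   train/test mse:", mse(line, train_x, train_y), mse(line, test_x, test_y))
print("interp train/test mse:", mse(interp, train_x, train_y), mse(interp, test_x, test_y))
```

The interpolant's training error is exactly zero, yet between the nodes it amplifies the noise (a Runge-type effect), so its test error exceeds the line's: small training error alone says nothing without controlling the capacity of the model class.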