Seminars and Colloquia by Series

Thursday, October 1, 2009 - 15:00 , Location: Skiles 269 , Denis Bell , University of North Florida , Organizer:
The Black-Scholes model for stock price as geometric Brownian motion, and the corresponding European option pricing formula, are standard tools in mathematical finance. In the late seventies, Cox and Ross developed a model for stock price based on a stochastic differential equation with a fractional diffusion coefficient. Unlike the Black-Scholes model, the model of Cox and Ross is not solvable in closed form, hence there is no analogue of the Black-Scholes formula in this context. In this talk, we discuss a new method, based on Stratonovich integration, which yields explicitly solvable arbitrage-free models analogous to that of Cox and Ross. This method gives rise to a generalized version of the Black-Scholes partial differential equation. We study solutions of this equation and a related ordinary differential equation.
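For intuition, here is a minimal Euler-Maruyama sketch contrasting geometric Brownian motion with a Cox-Ross-type diffusion dS = mu S dt + sigma S^alpha dW. The parameter values, including the choice alpha = 0.7, are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def simulate_paths(s0=100.0, mu=0.05, sigma=0.2, alpha=0.7,
                   T=1.0, n_steps=252, seed=0):
    """Euler-Maruyama paths for geometric Brownian motion (Black-Scholes)
    and a Cox-Ross-type SDE with fractional diffusion coefficient,
    dS = mu*S dt + sigma*S**alpha dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    gbm = np.empty(n_steps + 1)
    cev = np.empty(n_steps + 1)
    gbm[0] = cev[0] = s0
    for k in range(n_steps):
        gbm[k + 1] = gbm[k] + mu * gbm[k] * dt + sigma * gbm[k] * dW[k]
        # clip at zero: S**alpha is not defined for negative S
        cev[k + 1] = max(cev[k] + mu * cev[k] * dt
                         + sigma * cev[k] ** alpha * dW[k], 0.0)
    return gbm, cev
```

The GBM path can be checked against its exact lognormal law; the fractional-diffusion path has no comparable closed form, which is the gap the talk's Stratonovich construction addresses.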
Thursday, September 24, 2009 - 15:00 , Location: Skiles 269 , Jim Nolen , Duke University , Organizer:
I will describe recent work on the behavior of solutions to reaction-diffusion equations (PDEs) when the coefficients in the equation are random.  The solutions behave like traveling waves moving in a randomly varying environment.  I will explain how one can obtain limit theorems (a Law of Large Numbers and a CLT) for the motion of the interface.  The talk will be accessible to people without much knowledge of PDE.
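As a toy illustration of a front in a random medium (not the speaker's precise setting), one can discretize an FKPP-type equation u_t = u_xx + a(x) u(1 - u) with a random reaction coefficient a(x) and record the interface position over time; all grid sizes and coefficient choices below are assumptions made for the sketch:

```python
import numpy as np

def fkpp_interface(L=200.0, nx=1000, T=40.0, dt=0.01, seed=1):
    """Explicit finite differences for u_t = u_xx + a(x) u(1 - u) with a
    random reaction coefficient a(x); records the front position
    (first x where u drops below 1/2) over time."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]                       # need dt < dx**2 / 2 for stability
    a = 1.0 + 0.5 * rng.random(nx)         # i.i.d. random environment
    u = (x < 10.0).astype(float)           # front initially near the left end
    front = []
    for n in range(int(T / dt)):
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        lap[0] = lap[-1] = 0.0             # crude no-flux boundaries
        u = np.clip(u + dt * (lap + a * u * (1.0 - u)), 0.0, 1.0)
        if n % 100 == 0:
            front.append(x[np.argmax(u < 0.5)])
    return np.array(front)
```

Plotting the returned positions against time shows a roughly linear drift (the Law of Large Numbers scale) with fluctuations around it (the CLT scale).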
Thursday, September 10, 2009 - 15:00 , Location: Skiles 269 , Christian Houdré , Georgia Tech , Organizer:

Given a random word of size n whose letters are drawn independently from an ordered alphabet of size m, the fluctuations of the shape of the corresponding random RSK Young tableaux are investigated when both n and m converge together to infinity. If m does not grow too fast and the draws are uniform, the limiting shape is the same as the limiting spectrum of the GUE. In the non-uniform case, a control on the two largest letter probabilities ensures the convergence of the first row of the tableau, i.e., of the length of the longest increasing subsequence of the random word, towards the Tracy-Widom distribution.
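The first row of the RSK tableau of a word has the length of its longest weakly increasing subsequence, computable by the standard row-insertion (patience-sorting) rule. A minimal sketch, where the word length n, alphabet size m, and uniform draws are chosen purely for illustration:

```python
import bisect
import random

def rsk_first_row_length(word):
    """Length of the first row of the RSK tableau of a word, i.e. the
    length of its longest weakly increasing subsequence, via the
    standard row-insertion / patience-sorting rule."""
    row = []
    for letter in word:
        i = bisect.bisect_right(row, letter)   # bump leftmost entry > letter
        if i == len(row):
            row.append(letter)
        else:
            row[i] = letter
    return len(row)

# A uniform random word: length n over an ordered alphabet of size m.
n, m = 10_000, 10
word = [random.randrange(m) for _ in range(n)]
print(rsk_first_row_length(word))   # about n/m plus fluctuations of order sqrt(n)
```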

Thursday, September 3, 2009 - 15:00 , Location: Skiles 269 , Philippe Rigollet , Princeton University , Organizer:
The goal of this talk is to present a new method for sparse estimation which does not use standard techniques such as $\ell_1$ penalization. First, we introduce a new setup for aggregation which bears strong links with generalized linear models and thus encompasses various response models such as Gaussian regression and binary classification. Second, by combining maximum likelihood estimators using exponential weights, we derive a new procedure for sparse estimation which satisfies exact oracle inequalities with the desired remainder term. Even though the procedure is simple, its implementation is not straightforward; it can, however, be approximated using the Metropolis algorithm, which results in a stochastic greedy algorithm that performs surprisingly well in a simulated problem of sparse recovery.
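A generic exponential-weights aggregation step, with per-estimator empirical losses standing in for the (penalized) likelihoods of the talk, might look as follows; the temperature parameter and the toy loss values are illustrative assumptions:

```python
import numpy as np

def exponential_weights(losses, temperature=1.0):
    """Weights proportional to exp(-loss / temperature); the aggregate
    estimator is then the weighted average of the candidate estimators."""
    z = -np.asarray(losses, dtype=float) / temperature
    z -= z.max()                    # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Toy use: empirical losses (e.g. negative log-likelihoods) of three candidates.
print(exponential_weights([1.2, 0.9, 3.5], temperature=0.5))
```

With many candidate models (one per sparsity pattern), the normalizing sum is intractable, which is where a Metropolis-type sampler comes in.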
Thursday, August 27, 2009 - 15:00 , Location: Skiles 269 , Dabao Zhang , Purdue University , Organizer:
We propose a penalized orthogonal-components regression (POCRE) for large p small n data. Orthogonal components are sequentially constructed to maximize, upon standardization, their correlation to the response residuals. A new penalization framework, implemented via empirical Bayes thresholding, is presented to effectively identify sparse predictors of each component. POCRE is computationally efficient owing to its sequential construction of leading sparse principal components. In addition, such construction offers other properties such as grouping highly correlated predictors and allowing for collinear or nearly collinear predictors. With multivariate responses, POCRE can construct common components and thus build up latent-variable models for large p small n data. This is joint work with Yanzhu Lin and Min Zhang.
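A toy sequential scheme in the spirit of POCRE, not the authors' algorithm: at each step, take loadings proportional to the covariance of the standardized predictors with the current residuals, soft-threshold them for sparsity (standing in for the paper's empirical Bayes thresholding), orthogonalize the resulting component against earlier ones, and regress it out. All names and parameter values are hypothetical:

```python
import numpy as np

def pocre_sketch(X, y, n_comp=3, thresh=0.6):
    """Toy sequential construction: loadings ~ X'r (covariance with the
    current residuals), soft-thresholded for sparsity; each component is
    orthogonalized against earlier ones and regressed out."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize predictors
    r = y - y.mean()
    comps, loadings = [], []
    for _ in range(n_comp):
        w = X.T @ r / len(r)                          # covariance direction
        w = np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)  # soft threshold
        if not w.any():
            break
        t = X @ w
        for s in comps:                               # keep components orthogonal
            t = t - s * (s @ t) / (s @ s)
        if t @ t < 1e-12:
            break
        r = r - t * (t @ r) / (t @ t)                 # regress component out
        comps.append(t)
        loadings.append(w)
    return loadings, comps

# Toy large-p-small-n data: n = 100 observations, p = 200 predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.normal(size=100)
loadings, comps = pocre_sketch(X, y)
print([int((w != 0).sum()) for w in loadings])   # nonzero loadings per component
```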
Thursday, April 23, 2009 - 15:00 , Location: Skiles 269 , Yichuan Zhao , Department of Mathematics, Georgia State University , Organizer: Heinrich Matzinger
Competing risks, in which subjects may fail from any one of k causes, are of wide interest to researchers, and comparing any two competing risks in the presence of covariate effects is very important in medical studies. In this talk, we develop omnibus tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels. The omnibus tests are derived under the additive risk model by a weighted difference of estimates of cumulative cause-specific hazard rates. Simultaneous confidence bands for the difference of two conditional cumulative incidence functions are also constructed. A simulation procedure is used to sample from the null distribution of the test process, in which graphical and numerical techniques are used to detect significant differences in the risks. In addition, we conduct a simulation study, which shows that the proposed procedure has good finite-sample performance. A melanoma data set from a clinical trial is used for the purpose of illustration.
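For fully observed (uncensored) competing-risks data, the cumulative incidence function F_k(t) = P(T <= t, cause = k) reduces to an empirical proportion; a minimal sketch under that simplifying assumption (with censoring one would use the Aalen-Johansen estimator instead):

```python
import numpy as np

def empirical_cif(times, causes, cause, grid):
    """Empirical cumulative incidence F_k(t) = P(T <= t, cause = k)
    for fully observed competing-risks data."""
    times = np.asarray(times)
    causes = np.asarray(causes)
    return np.array([np.mean((times <= t) & (causes == cause)) for t in grid])

# Toy data: two competing causes with exponential latent failure times.
rng = np.random.default_rng(0)
t1, t2 = rng.exponential(1.0, 500), rng.exponential(2.0, 500)
times, causes = np.minimum(t1, t2), np.where(t1 <= t2, 1, 2)
grid = np.linspace(0.0, 5.0, 50)
cif1 = empirical_cif(times, causes, 1, grid)   # rises toward P(cause 1) = 2/3
```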
Thursday, April 16, 2009 - 15:00 , Location: Skiles 269 , Vladimir I. Koltchinskii , School of Mathematics, Georgia Tech , Organizer: Heinrich Matzinger
In binary classification problems, the goal is to estimate a function $g^*: S \to \{-1,1\}$ minimizing the generalization error (or the risk) $L(g) := P\{(x,y): y \neq g(x)\}$, where $P$ is a probability distribution on $S \times \{-1,1\}$. The distribution $P$ is unknown, and estimators $\hat g$ of $g^*$ are based on a finite number of independent random couples $(X_j, Y_j)$ sampled from $P$. It is of interest to have upper bounds on the excess risk $\mathcal{E}(\hat g) := L(\hat g) - L(g^*)$ of such estimators that hold with high probability and that take into account reasonable measures of complexity of classification problems (such as, for instance, VC-dimension). We will discuss several approaches (both old and new) to excess risk bounds in classification, including some recent results on excess risk in so-called active learning.
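A toy Monte Carlo illustration of these definitions, under an assumed distribution $P$ for which the Bayes rule $g^*$ is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """A toy P on R x {-1,1}: Y uniform on {-1,1}, X | Y ~ N(Y, 1);
    here the Bayes rule is g*(x) = sign(x)."""
    y = rng.choice([-1, 1], size=n)
    x = y + rng.normal(size=n)
    return x, y

def risk(g, n=200_000):
    """Monte Carlo estimate of L(g) = P{(x, y) : y != g(x)}."""
    x, y = sample(n)
    return float(np.mean(g(x) != y))

excess = risk(lambda x: np.sign(x - 0.3)) - risk(lambda x: np.sign(x))
print(excess)   # the excess risk E(g_hat); nonnegative up to Monte Carlo error
```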
Thursday, April 9, 2009 - 15:00 , Location: Skiles 269 , Elton Hsu , Department of Mathematics, Northwestern University , Organizer: Heinrich Matzinger
The Cameron-Martin theorem is one of the cornerstones of stochastic analysis. It asserts that the shifts of the Wiener measure along certain flows are equivalent. Driver and others have shown that this theorem, after an appropriate reformulation, can be extended to the Wiener measure on the path space over a compact Riemannian manifold. In this talk we will discuss this and other extensions of the Cameron-Martin theorem and show that it in fact holds for an arbitrary complete Riemannian manifold.
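As a reference point, the classical statement on flat Wiener space, the case the talk generalizes to path spaces over Riemannian manifolds:

```latex
% Classical Cameron-Martin theorem: for a shift h with h(0) = 0 and
% \dot{h} \in L^2[0,1], the Wiener measure \mu and its shift
% \mu_h(\cdot) = \mu(\cdot - h) are equivalent, with density
\[
  \frac{d\mu_h}{d\mu}(\omega)
    = \exp\!\left( \int_0^1 \dot{h}(t)\, d\omega(t)
      - \frac{1}{2} \int_0^1 \dot{h}(t)^2\, dt \right).
\]
```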
Thursday, March 12, 2009 - 15:00 , Location: Skiles 269 , Hayriye Ayhan , ISyE, Georgia Tech , Organizer: Heinrich Matzinger
We consider Markovian tandem queues with finite intermediate buffers and flexible servers and study how the servers should be assigned dynamically to stations in order to obtain optimal long-run average throughput. We assume that each server can work on only one job at a time, that several servers can work together on a single job, and that the travel times between stations are negligible. Under various server collaboration schemes, we characterize the optimal server assignment policy for these systems.
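A minimal simulation sketch of long-run throughput in such a line, using a single flexible server and an ad hoc "pull" policy; the rates, buffer size, and policy below are illustrative assumptions, not the optimal policy characterized in the talk:

```python
import random

def tandem_throughput(mu1=1.0, mu2=1.5, buffer_size=3,
                      horizon=100_000.0, seed=0):
    """Two Markovian stations in tandem with one flexible server, an
    unlimited supply of raw jobs at station 1, and a finite intermediate
    buffer. Illustrative pull policy: work at station 2 whenever the
    buffer is nonempty. Returns departures per unit time."""
    rng = random.Random(seed)
    buf, t, departures = 0, 0.0, 0
    while t < horizon:
        if buf > 0:                          # serve station 2
            t += rng.expovariate(mu2)
            buf -= 1
            departures += 1
        else:                                # serve station 1
            t += rng.expovariate(mu1)
            buf = min(buf + 1, buffer_size)
    return departures / t

print(tandem_throughput())   # close to 1 / (1/mu1 + 1/mu2) = 0.6
```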
Thursday, March 5, 2009 - 15:00 , Location: Skiles 269 , Yuanhui Xiao , Department of Mathematics and Statistics, Georgia State University , Organizer: Heinrich Matzinger
A shot noise process is essentially a compound Poisson process whereby the arriving shots are allowed to accumulate or decay after their arrival via some preset shot (impulse response) function. Shot noise models see applications in diverse areas such as insurance, finance, hydrology, textile engineering, and electronics. This talk studies several statistical inference issues for shot noise processes. Under mild conditions, ergodicity is proven in that process sample paths satisfy a strong law of large numbers and central limit theorem. These results have application in storage modeling. Shot function parameter estimation from a data history observed on a discrete-time lattice is then explored. Optimal estimating functions are tractable when the shot function satisfies a so-called interval similar condition. Moment methods of estimation are easily applicable if the shot function is compactly supported and show good performance. In all cases, asymptotic normality of the proposed estimators is established.
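A minimal sketch of a shot noise path with Poisson arrivals and an exponentially decaying shot function; the arrival rate, amplitude distribution, and shot function are illustrative assumptions:

```python
import numpy as np

def shot_noise(rate=2.0, decay=1.0, T=100.0, n_grid=1000, seed=0):
    """Shot noise X(t) = sum_{tau_i <= t} A_i h(t - tau_i) with
    Poisson(rate) arrival times tau_i, i.i.d. exponential amplitudes
    A_i, and the decaying shot function h(s) = exp(-decay * s)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(rate * T)
    tau = np.sort(rng.uniform(0.0, T, size=n))   # arrival times
    amp = rng.exponential(1.0, size=n)           # shot amplitudes
    t = np.linspace(0.0, T, n_grid)
    x = np.array([np.sum(amp[tau <= s] * np.exp(-decay * (s - tau[tau <= s])))
                  for s in t])
    return t, x

t, x = shot_noise()
print(x.mean())   # ergodic average, near rate * E[A] / decay = 2.0
```

The time average converging to the stationary mean is exactly the strong-law behavior the abstract refers to.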