
Series: Stochastics Seminar

The Black‐Scholes model for stock price as geometric Brownian motion, and the
corresponding European option pricing formula, are standard tools in mathematical
finance. In the late seventies, Cox and Ross developed a model for stock price based
on a stochastic differential equation with fractional diffusion coefficient. Unlike the
Black‐Scholes model, the model of Cox and Ross is not solvable in closed form, hence
there is no analogue of the Black‐Scholes formula in this context. In this talk, we
discuss a new method, based on Stratonovich integration, which yields explicitly
solvable arbitrage‐free models analogous to that of Cox and Ross. This method gives
rise to a generalized version of the Black‐Scholes partial differential equation. We
study solutions of this equation and a related ordinary differential equation.
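As a reference point for the generalized equation discussed in the talk, the classical Black-Scholes price of a European call can be computed in closed form; the sketch below uses illustrative parameter values only.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """European call price under geometric Brownian motion."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative parameters: at-the-money call, 5% rate, 20% volatility, 1 year.
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
```

No such closed form exists for the Cox-Ross model, which is the gap the talk's Stratonovich-based method addresses.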

Series: Stochastics Seminar

I will describe recent work on the behavior of solutions to reaction-diffusion equations (PDEs) whose coefficients are random. The solutions behave like traveling waves moving in a randomly varying environment. I will explain how one can obtain limit theorems (Law of Large Numbers and CLT) for the motion of the interface. The talk will be accessible to people without much knowledge of PDEs.

Series: Stochastics Seminar

Given a random word of size n whose letters are drawn independently from an ordered alphabet of size m, the fluctuations of the shape of the corresponding random RSK Young tableaux are investigated, when both n and m converge together to infinity. If m does not grow too fast and if the draws are uniform, the limiting shape is the same as the limiting spectrum of the GUE. In the non-uniform case, a control of both highest probabilities will ensure the convergence of the first row of the tableau, i.e., of the length of the longest increasing subsequence of the random word, towards the Tracy-Widom distribution.
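The quantity in the last sentence is concrete to compute: for a word (where letters may repeat), the first row of its RSK tableau has the length of the longest weakly increasing subsequence, which patience sorting finds in O(n log m) time. The alphabet size and word below are illustrative choices, not parameters from the talk.

```python
import random
from bisect import bisect_right

def longest_increasing_length(word):
    """Length of the longest weakly increasing subsequence of the word
    (= length of the first row of its RSK tableau)."""
    piles = []  # piles[i] = smallest possible last letter of a subsequence of length i+1
    for letter in word:
        j = bisect_right(piles, letter)  # bisect_right permits repeated letters
        if j == len(piles):
            piles.append(letter)
        else:
            piles[j] = letter
    return len(piles)

# A uniform random word of size n = 1000 over an ordered alphabet of size m = 5.
rng = random.Random(0)
word = [rng.randrange(5) for _ in range(1000)]
row1 = longest_increasing_length(word)
```

Note that `row1` is always at least n/m, since the most frequent letter alone gives a weakly increasing subsequence; the talk concerns the fluctuations around such first-order behavior.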

Series: Stochastics Seminar

The goal of this talk is to present a new method for sparse estimation which does not use standard techniques such as $\ell_1$ penalization. First, we introduce a new setup for aggregation which bears strong links with generalized linear models and thus encompasses various response models such as Gaussian regression and binary classification. Second, by combining maximum likelihood estimators using exponential weights, we derive a new procedure for sparse estimation which satisfies exact oracle inequalities with the desired remainder term. Even though the procedure is simple, its implementation is not straightforward, but it can be approximated using the Metropolis algorithm, resulting in a stochastic greedy algorithm that performs surprisingly well in a simulated problem of sparse recovery.
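The Metropolis step mentioned above can be sketched generically. The random-walk proposal and Gaussian target below are hypothetical stand-ins chosen for simplicity; they are not the exponential-weights posterior or the authors' actual procedure.

```python
import math
import random

def metropolis(log_density, x0, step, n_samples, rng):
    """Generic random-walk Metropolis sampler: propose x' = x + step * N(0,1),
    accept with probability min(1, density(x') / density(x))."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + step * rng.gauss(0.0, 1.0)
        log_accept = log_density(proposal) - log_density(x)
        if math.log(rng.random()) < log_accept:
            x = proposal
        samples.append(x)
    return samples

# Hypothetical target: a standard normal log-density (up to a constant).
rng = random.Random(0)
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=1.0, n_samples=20000, rng=rng)
mean = sum(draws) / len(draws)
```

The appeal of the Metropolis scheme in this context is that it only needs density ratios, so the normalizing constant of the exponential-weights distribution never has to be computed.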

Series: Stochastics Seminar

We propose a penalized orthogonal-components regression
(POCRE) for large p small n data. Orthogonal components are sequentially
constructed to maximize, upon standardization, their correlation to the
response residuals. A new penalization framework, implemented via
empirical Bayes thresholding, is presented to effectively identify
sparse predictors of each component. POCRE is computationally efficient
owing to its sequential construction of leading sparse principal
components. In addition, such construction offers other properties such
as grouping highly correlated predictors and allowing for collinear or
nearly collinear predictors. With multivariate responses, POCRE can
construct common components and thus build up latent-variable models for
large p small n data. This is joint work with Yanzhu Lin and Min Zhang.

Series: Stochastics Seminar

Researchers often study competing risks, in which subjects may fail from any one of k causes. Comparing any two competing risks in the presence of covariate effects is very important in medical studies. In this talk, we develop omnibus tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels. The omnibus tests are derived under the additive risk model from a weighted difference of estimates of cumulative cause-specific hazard rates. Simultaneous confidence bands for the difference of two conditional cumulative incidence functions are also constructed. A simulation procedure is used to sample from the null distribution of the test process, in which graphical and numerical techniques are used to detect significant differences in the risks. In addition, we conduct a simulation study, which shows that the proposed procedure has good finite-sample performance. A melanoma data set from a clinical trial is used for the purpose of illustration.

Series: Stochastics Seminar

In binary classification problems, the goal is to estimate a function $g^*: S \to \{-1,1\}$ minimizing the generalization error (or the risk)
$L(g) := P\{(x,y) : y \neq g(x)\},$
where $P$ is a probability distribution on $S \times \{-1,1\}$. The distribution $P$ is unknown, and estimators $\hat g$ of $g^*$ are based on a finite number of independent random couples $(X_j, Y_j)$ sampled from $P$. It is of interest to have upper bounds on the excess risk
${\cal E}(\hat g) := L(\hat g) - L(g^*)$
of such estimators that hold with high probability and that take into account reasonable measures of the complexity of classification problems (such as, for instance, VC-dimension). We will discuss several approaches (both old and new) to excess risk bounds in classification, including some recent results on excess risk in so-called active learning.
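As a toy illustration of these definitions (on a hypothetical distribution of my choosing, not one from the talk): take $S = [0,1]$, $X$ uniform, and $P(Y=1 \mid X=x) = x$. Then the Bayes classifier is $g^*(x) = \mathrm{sign}(x - 1/2)$, its risk is $E[\min(X, 1-X)] = 1/4$, and any other threshold rule has nonnegative excess risk, which Monte Carlo confirms.

```python
import random

def risk(classifier, n, rng):
    """Monte Carlo estimate of L(g) = P{(x,y) : y != g(x)} under the toy model
    X ~ Uniform(0,1), P(Y = 1 | X = x) = x."""
    errors = 0
    for _ in range(n):
        x = rng.random()
        y = 1 if rng.random() < x else -1
        if classifier(x) != y:
            errors += 1
    return errors / n

rng = random.Random(0)
bayes = lambda x: 1 if x >= 0.5 else -1    # Bayes classifier g*
biased = lambda x: 1 if x >= 0.8 else -1   # a suboptimal threshold rule
L_star = risk(bayes, 200_000, rng)         # should be close to 1/4
excess = risk(biased, 200_000, rng) - L_star
```

For the biased rule the exact excess risk works out to 0.09, obtained by integrating the pointwise error probabilities over the two threshold regions.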

Series: Stochastics Seminar

The Cameron-Martin theorem is one of the cornerstones of stochastic analysis. It asserts that the shifts of the Wiener measure along certain flows are equivalent. Driver and others have shown that this theorem, after an appropriate reformulation, can be extended to the Wiener measure on the path space over a compact Riemannian manifold. In this talk we will discuss this and other extensions of the Cameron-Martin theorem and show that it in fact holds for an arbitrary complete Riemannian manifold.

Series: Stochastics Seminar

We consider Markovian tandem queues with finite intermediate buffers and flexible servers and study how the servers should be assigned dynamically to stations in order to obtain optimal long-run average throughput. We assume that each server can work on only one job at a time, that several servers can work together on a single job, and that the travel times between stations are negligible. Under various server collaboration schemes, we characterize the optimal server assignment policy for these systems.

Series: Stochastics Seminar

A shot noise process is essentially a compound Poisson process whereby the arriving shots are allowed to accumulate or decay after their arrival via some preset shot (impulse response) function. Shot noise models see applications in diverse areas such as insurance, finance, hydrology, textile engineering, and electronics. This talk studies several statistical inference issues for shot noise processes. Under mild conditions, ergodicity is proven in that process sample paths satisfy a strong law of large numbers and central limit theorem. These results have application in storage modeling. Shot function parameter estimation from a data history observed on a discrete-time lattice is then explored. Optimal estimating functions are tractable when the shot function satisfies a so-called interval similar condition. Moment methods of estimation are easily applicable if the shot function is compactly supported and show good performance. In all cases, asymptotic normality of the proposed estimators is established.
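A minimal simulation makes the setup concrete. The sketch below assumes Poisson arrivals of rate lam and an exponentially decaying shot function h(u) = e^{-u} (an illustrative choice, not one from the talk); the stationary mean is then lam * ∫h = lam, so by the law of large numbers mentioned above the time average over a lattice should settle near lam.

```python
import math
import random

def shot_noise_on_lattice(lam, horizon, dt, rng):
    """Sample X(t) = sum_{t_i <= t} exp(-(t - t_i)) on the lattice {0, dt, 2dt, ...},
    where the shot arrival times t_i form a Poisson process of rate lam."""
    # Generate the Poisson arrival times via i.i.d. exponential inter-arrival gaps.
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > horizon:
            break
        arrivals.append(t)
    lattice = [k * dt for k in range(int(horizon / dt) + 1)]
    return [sum(math.exp(-(s - ti)) for ti in arrivals if ti <= s) for s in lattice]

rng = random.Random(0)
path = shot_noise_on_lattice(lam=2.0, horizon=500.0, dt=0.5, rng=rng)
burn_in = 20  # discard early lattice points, before the process is near stationarity
time_avg = sum(path[burn_in:]) / len(path[burn_in:])
```

A lattice observation scheme like this one is exactly the data format the talk's estimation methods assume.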