Tuesday, January 16, 2018 - 15:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Ying Zhu – Michigan State University
Semiparametric regressions enjoy the flexibility of nonparametric models as well as the interpretability of linear models. These advantages can be further leveraged with recent advances in high dimensional statistics. This talk begins with a simple partially linear model, $Y_i = X_i \beta^* + g^*(Z_i) + \varepsilon_i$, where the parameter vector of interest, $\beta^*$, is high dimensional but sufficiently sparse, and $g^*$ is an unknown nuisance function. In spite of its simple form, this high dimensional partially linear model plays a crucial role in counterfactual studies of heterogeneous treatment effects. In the first half of this talk, I present an inference procedure for any subvector (regardless of its dimension) of the high dimensional $\beta^*$. This method does not require the "beta-min" condition and also works when the vector of covariates, $Z_i$, is high dimensional, provided that the function classes that the $E(X_{ij}|Z_i)$s and $E(Y_i|Z_i)$ belong to exhibit certain sparsity features, e.g., a sparse additive decomposition structure. In the second half of this talk, I discuss the connections between semiparametric modeling and Rubin's Causal Framework, as well as the applications of various methods (including the one from the first half of this talk and those from my other papers) in counterfactual studies that are enriched by "big data".
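For readers who want a concrete picture of the partially linear model, here is a minimal sketch of the classical Robinson-style partialling-out estimator in the simplest case of a scalar $X_i$ (an illustration only, not the high dimensional inference procedure presented in the talk; the random-forest nuisance fits and the data-generating choices are assumptions made for the example):

```python
# A minimal sketch (not the speaker's procedure): Robinson-style partialling out
# for a partially linear model Y = X*beta + g(Z) + eps with a *scalar* X.
# The talk's method targets the much harder case of a high-dimensional sparse beta.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
Z = rng.uniform(-2, 2, size=(n, 1))
X = np.sin(Z[:, 0]) + rng.normal(size=n)           # X depends on Z
g = np.cos(2 * Z[:, 0])                            # unknown nuisance g*(Z)
beta_true = 1.5
Y = beta_true * X + g + rng.normal(size=n)

# Estimate the nuisance regressions E[Y|Z] and E[X|Z] nonparametrically
# (in-sample fits for brevity; sample splitting is usually added in practice).
m_Y = RandomForestRegressor(n_estimators=200, random_state=0).fit(Z, Y).predict(Z)
m_X = RandomForestRegressor(n_estimators=200, random_state=0).fit(Z, X).predict(Z)

# Residual-on-residual least squares recovers beta.
rY, rX = Y - m_Y, X - m_X
beta_hat = (rX @ rY) / (rX @ rX)
print(f"beta_hat = {beta_hat:.3f} (true value {beta_true})")
```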
Tuesday, December 5, 2017 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 006
Speaker
Qiyang Han – University of Washington
We study the convergence rate of the least squares estimator (LSE) in a regression model with possibly heavy-tailed errors. Despite its importance in practical applications, theoretical understanding of this problem has been limited. We first show that from a worst-case perspective, the convergence rate of the LSE in a general non-parametric regression model is given by the maximum of the Gaussian regression rate and the noise rate induced by the errors. In the more difficult statistical model where the errors only have a second moment, we further show that the sizes of the 'localized envelopes' of the model give a sharp interpolation for the convergence rate of the LSE between the worst-case rate and the (optimal) parametric rate. These results indicate both certain positive and negative aspects of the LSE as an estimation procedure in a heavy-tailed regression setting. The key technical innovation is a new multiplier inequality that sharply controls the size of the multiplier empirical process associated with the LSE, which also finds applications in shape-restricted and sparse linear regression problems.
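In symbols, the worst-case result can be paraphrased roughly as follows (a schematic restatement, not the paper's exact theorem; the rate symbols are placeholders):

$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \sum_{i=1}^n (Y_i - f(X_i))^2, \qquad \|\hat{f}_n - f_0\|_{L_2} = O_P\big(\max\{r_{\mathrm{Gauss}}(n),\ r_{\mathrm{noise}}(n)\}\big),$

where $r_{\mathrm{Gauss}}(n)$ is the rate the LSE would attain under Gaussian errors and $r_{\mathrm{noise}}(n)$ is the rate induced by the heaviness of the error tails.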
Random effects models are commonly used to measure genetic
variance-covariance matrices of quantitative phenotypic traits. The
population eigenvalues of these matrices describe the evolutionary
response to selection. However, they may be difficult to estimate from
limited samples when the number of traits is large. In this talk, I will
present several results describing the eigenvalues of classical MANOVA
estimators of these matrices, including dispersion of the bulk
eigenvalue distribution, bias and aliasing of large "spike" eigenvalues,
and distributional limits of eigenvalues at the spectral edges. I will
then discuss a new procedure that uses these results to obtain better
estimates of the large population eigenvalues when there are many
traits, and a Tracy-Widom test for detecting true principal components
in these models. The theoretical results extend proof techniques in
random matrix theory and free probability, which I will also briefly
describe. This is joint work with Iain Johnstone, Yi Sun, Mark Blows, and Emma Hine.
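As a toy illustration of the eigenvalue distortion discussed above (a simulation sketch only, not the speaker's estimators or corrections; the dimensions, covariances, and the single "spike" are assumptions made for the example), the classical MANOVA estimator in a one-way random-effects model can be simulated as follows:

```python
# Toy simulation (not the talk's method): eigenvalue dispersion and spike bias
# of the classical MANOVA estimator in a one-way random-effects model
#   y_ij = a_i + e_ij,  a_i ~ N(0, Sigma_A),  e_ij ~ N(0, I_p),  i=1..I, j=1..J.
import numpy as np

rng = np.random.default_rng(0)
p, I, J = 100, 300, 5                      # many traits, moderate sample
Sigma_A = np.eye(p) * 0.1
Sigma_A[0, 0] = 3.0                        # one true "spike" eigenvalue

a = rng.multivariate_normal(np.zeros(p), Sigma_A, size=I)      # group effects
e = rng.normal(size=(I, J, p))                                  # within-group noise
y = a[:, None, :] + e

ybar_i = y.mean(axis=1)                    # group means, shape (I, p)
ybar = ybar_i.mean(axis=0)

MS_B = J * (ybar_i - ybar).T @ (ybar_i - ybar) / (I - 1)        # E[MS_B] = I_p + J*Sigma_A
resid = y - ybar_i[:, None, :]
MS_W = np.einsum('ijk,ijl->kl', resid, resid) / (I * (J - 1))   # E[MS_W] = I_p

Sigma_A_hat = (MS_B - MS_W) / J            # MANOVA estimator of Sigma_A
eig = np.sort(np.linalg.eigvalsh(Sigma_A_hat))[::-1]
print("top estimated eigenvalue:", eig[0], " (true spike: 3.0)")
print("bulk eigenvalue spread:", eig[1], "...", eig[-1], " (true bulk: 0.1)")
```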
We consider the problem of
estimating pairwise comparison probabilities in a tournament setting
after observing every pair of teams play each other once. We assume
the true pairwise probability matrix satisfies a stochastic
transitivity condition which
is popular in the social sciences. This stochastic transitivity
condition generalizes the ubiquitous Bradley-Terry model used in the
ranking literature. We propose a computationally efficient estimator for
this problem, borrowing ideas from recent work on
Shape Constrained Regression. We show that the worst case rate of our
estimator matches the best known rate for computationally tractable
estimators. Additionally we show our estimator enjoys faster rates of
convergence for several sub-parameter spaces of
interest, thereby showing automatic adaptivity. We also study the
missing data setting where only a fraction of all possible games are
observed at random.
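As a toy illustration of the setup (not the estimator proposed in the talk), the following sketch generates a Bradley-Terry probability matrix, observes each pair once, and forms a naive estimate; the latent `skill` vector, the ordering by row sums, and the moving-average smoothing are crude illustrative stand-ins for the shape-constrained regression step:

```python
# Toy illustration (not the talk's estimator): Bradley-Terry pairwise
# probabilities, one observed game per pair, and a naive smoothed estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 50
skill = rng.normal(size=n)                         # latent team strengths

# Bradley-Terry: P[i beats j] is a logistic function of the skill difference.
P = 1.0 / (1.0 + np.exp(-(skill[:, None] - skill[None, :])))
np.fill_diagonal(P, 0.5)

# Observe each pair exactly once, with consistent outcomes obs[j,i] = 1 - obs[i,j].
wins = rng.binomial(1, P)
obs = np.triu(wins, 1) + np.tril(1 - wins.T, -1)

# Naive estimate: order teams by total wins, then smooth each row with a
# running mean -- a crude stand-in for shape-constrained regression.
order = np.argsort(-obs.sum(axis=1))
sorted_obs = obs[np.ix_(order, order)].astype(float)
np.fill_diagonal(sorted_obs, 0.5)
kernel = np.ones(7) / 7.0
P_hat = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, sorted_obs)
print("RMSE of naive estimate:", np.sqrt(np.mean((P_hat - P[np.ix_(order, order)]) ** 2)))
```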
Thursday, February 23, 2017 - 11:00 for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Rajarshi Mukherjee – Department of Statistics, Stanford University
In this talk, I will discuss some
examples of sparse signal detection problems in the context of binary outcomes.
These will be motivated by examples from next generation sequencing association
studies, understanding heterogeneities in large scale networks, and exploring
opinion distributions over networks. Moreover, these examples will serve as
templates to explore interesting phase transitions present in such studies. In
particular, these phase transitions will be aimed at revealing a difference
between studies with possibly dependent binary outcomes and Gaussian outcomes.
The theoretical developments will be further complemented with numerical
results.
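As a toy sketch of one classical detection statistic applied to independent binary outcomes (an illustration of the general problem setup, not the tests or dependence structures from the talk; the group structure, signal strength, and sparsity level are assumptions made for the example):

```python
# Toy sketch (not the talk's procedures): higher criticism for detecting a
# sparse set of elevated success probabilities among independent binary outcomes.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, m = 10_000, 50                  # n groups, m binary observations per group
p0 = 0.5
signal_idx = rng.choice(n, size=20, replace=False)   # sparse set of signals
p = np.full(n, p0)
p[signal_idx] = 0.75

counts = rng.binomial(m, p)
z = (counts / m - p0) / np.sqrt(p0 * (1 - p0) / m)   # per-group z-scores
pvals = np.sort(norm.sf(z))                          # one-sided p-values, ascending

# Higher criticism statistic over the smallest half of the p-values.
k = np.arange(1, n + 1)
hc = np.sqrt(n) * (k / n - pvals) / np.sqrt(pvals * (1 - pvals))
print("HC statistic:", hc[: n // 2].max())           # large values suggest a sparse signal
```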
Biology is becoming increasingly quantitative, with large genomic datasets being curated at a rapid rate. Sound mathematical modeling as well as data science approaches are both needed to take advantage of these newly available datasets. I will describe two projects that span these approaches. The first is a Markov chain model of natural selection acting at two scales, motivated by the virulence-transmission tradeoff from pathogen evolution. This stochastic model, under a natural scaling, converges to a nonlinear deterministic system for which we can analytically derive steady-state behavior. This analysis, along with simulations, leads to general properties of selection at two scales. The second project is a bioinformatics pipeline that identifies gene copy number variants, currently a difficult problem in modern genomics. This quantification of copy number variation in turn generates new mathematical questions that require the type of probabilistic modelling used in the first project.
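As a purely hypothetical illustration of selection acting at two scales (a toy simulation sketched here, not the Markov chain model from the talk; the replication advantage, transmission cost, and bottleneck size are invented parameters), consider a strain that wins within hosts but loses between hosts:

```python
# Hypothetical toy model (not the talk's model): a virulent strain with a
# within-host replication advantage s and a between-host transmission cost c,
# tracked over discrete epidemic generations with a transmission bottleneck.
import numpy as np

rng = np.random.default_rng(3)
H, s, c, bottleneck = 1000, 0.5, 0.3, 5
freq = rng.uniform(0, 1, size=H)          # within-host frequency of the virulent strain

for generation in range(200):
    # Within-host selection: virulent strain grows by factor (1 + s).
    freq = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
    # Between-host selection: hosts carrying more of the virulent strain transmit less.
    weights = 1 - c * freq
    donors = rng.choice(H, size=H, p=weights / weights.sum())
    # Transmission bottleneck: each new infection is founded by a small sample.
    freq = rng.binomial(bottleneck, freq[donors]) / bottleneck

print("mean virulent-strain frequency after 200 generations:", freq.mean())
```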
High-dimensional data arise in many fields of contemporary science and introduce new challenges in statistical learning due to the well-known curse of dimensionality. Many data sets in image analysis and signal processing are in a high-dimensional space but exhibit a low-dimensional structure. We are interested in building efficient representations of these data for the purpose of compression and inference, and giving performance guarantees that are only cursed by the intrinsic dimension of data. Specifically, in the setting where a data set in $R^D$ consists of samples from a probability measure concentrated on or near an unknown $d$-dimensional manifold with $d$ much smaller than $D$, we consider two sets of problems: low-dimensional geometric approximation to the manifold and regression of a function on the manifold. In the first case we construct multiscale low-dimensional empirical approximations to the manifold and give finite-sample performance guarantees. In the second case we exploit these empirical geometric approximations of the manifold to construct multiscale approximations to the function. We prove finite-sample guarantees showing that we attain the same learning rates as if the function was defined on a Euclidean domain of dimension $d$. In both cases our approximations can adapt to the regularity of the manifold or the function even when this varies at different scales or locations. All algorithms have complexity $C n\log (n)$ where $n$ is the number of samples, and the constant $C$ is linear in $D$ and exponential in $d$.
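A crude single-scale analogue of the geometric approximation described above is local PCA on a partition of the data (a sketch only; the talk's construction is multiscale and comes with finite-sample guarantees, and the k-means partition, cluster count, and synthetic curve below are assumptions made for the example):

```python
# Crude single-scale sketch (not the talk's multiscale construction):
# partition the data, fit a d-dimensional PCA plane in each cell, and
# approximate every point by its projection onto the local plane.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n, D, d = 5000, 20, 1
t = rng.uniform(0, 2 * np.pi, n)
curve = np.c_[np.cos(t), np.sin(t), np.zeros((n, D - 2))]        # a 1-d curve in R^D
X = curve + 0.01 * rng.normal(size=(n, D))

labels = KMeans(n_clusters=30, n_init=10, random_state=0).fit_predict(X)
X_hat = np.empty_like(X)
for k in np.unique(labels):
    idx = labels == k
    pca = PCA(n_components=d).fit(X[idx])                        # local tangent-plane estimate
    X_hat[idx] = pca.inverse_transform(pca.transform(X[idx]))    # project onto the local plane

print("mean squared approximation error:", np.mean((X - X_hat) ** 2))
```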
Network data analysis has wide applications in computational social
science, computational biology, online social media, and data
visualization. For many of these network inference questions, the
brute-force (yet statistically optimal) methods involve combinatorial
optimization, which is computationally prohibitive when faced with large
scale networks. Therefore, it is important to understand the effect on
statistical inference when focusing on computationally tractable methods.
In this talk, we will discuss three closely related statistical models for
different network inference problems. These models answer inference
questions on cliques, communities, and ties, respectively. For each
particular model, we will describe the statistical model, propose new
computationally efficient algorithms, and study the theoretical properties
and numerical performance of the algorithms. Further, we will quantify the
computational optimality by describing the intrinsic barrier for
certain efficient algorithm classes, and investigate the
computational-to-statistical gap theoretically. A key feature shared by our
studies is that, as the parameters of the model change, the problems
exhibit different phases of computational difficulty.
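As one concrete example of a computationally tractable alternative to combinatorial search (an illustration of the general theme, not the algorithms proposed in the talk; the two-block model and its parameters are assumptions made for the example), spectral clustering recovers communities in a stochastic block model:

```python
# Illustration of the general theme (not the talk's algorithms): spectral
# clustering of a two-community stochastic block model, a tractable
# alternative to searching combinatorially over all partitions.
import numpy as np

rng = np.random.default_rng(5)
n, p_in, p_out = 600, 0.10, 0.02
z = np.repeat([0, 1], n // 2)                         # true community labels
P = np.where(z[:, None] == z[None, :], p_in, p_out)
A = rng.binomial(1, P)
A = np.triu(A, 1)
A = A + A.T                                           # symmetric adjacency matrix

# The sign pattern of the eigenvector for the second-largest eigenvalue
# estimates the partition.
vals, vecs = np.linalg.eigh(A)
guess = (vecs[:, -2] > 0).astype(int)
accuracy = max(np.mean(guess == z), np.mean(guess != z))
print("fraction of correctly labeled nodes:", accuracy)
```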
Asymptotic equivalence between two statistical models means that they have the same asymptotic (large sample) properties with respect to all decision problems with bounded loss. In nonparametric (infinite-dimensional) statistical models, asymptotic equivalence has been found to be useful since it can allow one to derive certain results by studying simpler models. One of the key results in this area is Nussbaum's theorem, which states that nonparametric density estimation is asymptotically equivalent to a Gaussian shift model, provided that the densities are smooth enough and uniformly bounded away from zero. We will review the notion of asymptotic equivalence and existing results, before presenting recent work on the extent to which one can relax the assumption of being bounded away from zero. We further derive the optimal (Le Cam) distance between these models, which quantifies how close they are in finite samples. As an application, we also consider Poisson intensity estimation with low count data. This is joint work with Johannes Schmidt-Hieber.
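For concreteness, the Gaussian shift model in Nussbaum's theorem is usually written as the white noise model (stated here schematically, with the smoothness and boundedness conditions suppressed):

$dY_t = \sqrt{f(t)}\, dt + \frac{1}{2\sqrt{n}}\, dW_t, \quad t \in [0,1],$

which is asymptotically equivalent to observing $X_1, \dots, X_n$ drawn i.i.d. from the density $f$ on $[0,1]$, provided $f$ is smooth enough and uniformly bounded away from zero.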