### TBA by Cheng Mao

- Series
- Job Candidate Talk
- Time
- Thursday, January 18, 2018 - 11:00 for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Cheng Mao – Yale University



- Series
- Job Candidate Talk
- Time
- Tuesday, January 16, 2018 - 15:00 for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Ying Zhu – Michigan State University

Semiparametric regressions enjoy the flexibility of nonparametric models as well as the interpretability of linear models. These advantages can be further leveraged with recent advances in high dimensional statistics. This talk begins with a simple partially linear model, $Y_i = X_i^\top \beta^* + g^*(Z_i) + \varepsilon_i$, where the parameter vector of interest, $\beta^*$, is high dimensional but sufficiently sparse, and $g^*$ is an unknown nuisance function. In spite of its simple form, this high dimensional partially linear model plays a crucial role in counterfactual studies of heterogeneous treatment effects. In the first half of this talk, I present an inference procedure for any sub-vector (regardless of its dimension) of the high dimensional $\beta^*$. This method does not require the "beta-min" condition and also works when the vector of covariates, $Z_i$, is high dimensional, provided that $E(X_{ij}|Z_i)$'s and $E(Y_i|Z_i)$ belong to function classes that exhibit certain sparsity features, e.g., a sparse additive decomposition structure. In the second half of this talk, I discuss the connections between semiparametric modeling and Rubin's Causal Framework, as well as the applications of various methods (including the one from the first half of this talk and those from my other papers) in counterfactual studies that are enriched by "big data".
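The partially linear model above lends itself to a quick simulation. The sketch below is my own illustration, not the inference procedure of the talk: it estimates $\beta^*$ in a one-dimensional toy instance via Robinson-style partialling out, with a crude nearest-neighbor smoother standing in for the nuisance estimates. Every name and tuning choice here is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
beta_star = 1.5

# Simulate the partially linear model Y = X*beta + g(Z) + eps (1-D toy case)
Z = rng.uniform(-1, 1, n)
X = np.sin(2 * Z) + rng.normal(0, 1, n)   # X correlated with Z
g = np.cos(3 * Z)                         # unknown nuisance function g*
Y = X * beta_star + g + rng.normal(0, 0.5, n)

def knn_smooth(z, v, k=50):
    """Crude k-nearest-neighbor estimate of E[v | Z = z_i] for each i."""
    order = np.argsort(z)
    v_sorted = v[order]
    # moving average over k neighbors in sorted-Z order
    sm = np.convolve(v_sorted, np.ones(k) / k, mode="same")
    out = np.empty_like(v)
    out[order] = sm
    return out

# Robinson-style partialling out: regress residuals on residuals
X_res = X - knn_smooth(Z, X)
Y_res = Y - knn_smooth(Z, Y)
beta_hat = (X_res @ Y_res) / (X_res @ X_res)
print(beta_hat)  # close to beta_star = 1.5
```

The point of the partialling-out step is that the nuisance $g^*$ drops out of the residual-on-residual regression, so only the sparse linear component needs to be estimated parametrically.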

- Series
- Job Candidate Talk
- Time
- Tuesday, December 5, 2017 - 11:00 for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Qiyang Han – University of Washington

We study the convergence rate of the least squares estimator (LSE) in a regression model with possibly heavy-tailed errors. Despite its importance in practical applications, theoretical understanding of this problem has been limited. We first show that from a worst-case perspective, the convergence rate of the LSE in a general non-parametric regression model is given by the maximum of the Gaussian regression rate and the noise rate induced by the errors. In the more difficult statistical model where the errors only have a second moment, we further show that the sizes of the 'localized envelopes' of the model give a sharp interpolation for the convergence rate of the LSE between the worst-case rate and the (optimal) parametric rate. These results indicate both certain positive and negative aspects of the LSE as an estimation procedure in a heavy-tailed regression setting. The key technical innovation is a new multiplier inequality that sharply controls the size of the multiplier empirical process associated with the LSE, which also finds applications in shape-restricted and sparse linear regression problems.
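To make the heavy-tailed-versus-Gaussian contrast concrete, here is a toy Monte Carlo in a one-dimensional parametric model, far simpler than the nonparametric setting of the talk; the noise laws, sample sizes, and repetition count are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

def lse_error(n, noise, reps=200):
    """Average |beta_hat - beta| for 1-D least squares under a given noise law."""
    errs = []
    for _ in range(reps):
        x = rng.normal(0, 1, n)
        y = 2.0 * x + noise(n)
        beta_hat = (x @ y) / (x @ x)       # least squares estimate
        errs.append(abs(beta_hat - 2.0))
    return float(np.mean(errs))

gauss = lambda n: rng.normal(0, 1, n)
heavy = lambda n: rng.standard_t(2.1, n)   # variance finite, but barely

for n in (100, 400, 1600):
    print(n, lse_error(n, gauss), lse_error(n, heavy))
```

Both error curves shrink with $n$, but the heavy-tailed errors inflate the constant substantially; the talk's results characterize the analogous rate inflation in the much harder nonparametric setting.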

- Series
- Job Candidate Talk
- Time
- Thursday, November 30, 2017 - 11:05 for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Zhou Fan – Stanford University – zhoufan@stanford.edu

Random effects models are commonly used to measure genetic
variance-covariance matrices of quantitative phenotypic traits. The
population eigenvalues of these matrices describe the evolutionary
response to selection. However, they may be difficult to estimate from
limited samples when the number of traits is large. In this talk, I will
present several results describing the eigenvalues of classical MANOVA
estimators of these matrices, including dispersion of the bulk
eigenvalue distribution, bias and aliasing of large "spike" eigenvalues,
and distributional limits of eigenvalues at the spectral edges. I will
then discuss a new procedure that uses these results to obtain better
estimates of the large population eigenvalues when there are many
traits, and a Tracy-Widom test for detecting true principal components
in these models. The theoretical results extend proof techniques in
random matrix theory and free probability, which I will also briefly
describe.This is joint work with Iain Johnstone, Yi Sun, Mark Blows, and Emma Hine.
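A minimal illustration of the bulk-dispersion phenomenon, using a plain sample covariance rather than a MANOVA estimator (so only a rough analogue of the talk's setting): even when every population eigenvalue equals 1, the sample eigenvalues spread over a wide bulk once the number of traits is comparable to the sample size.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 100          # n samples, p traits; aspect ratio p/n = 0.5

# True covariance is the identity: every population eigenvalue is 1.
X = rng.normal(0, 1, (n, p))
S = X.T @ X / n          # sample covariance matrix
eigs = np.linalg.eigvalsh(S)

# The sample eigenvalues disperse around 1 (the Marchenko-Pastur bulk),
# even though every population eigenvalue is exactly 1.
print(eigs.min(), eigs.max(), eigs.mean())
```

This dispersion is exactly why naive plug-in eigenvalue estimates mislead when the number of traits is large, motivating the corrected procedures described in the talk.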

- Series
- Job Candidate Talk
- Time
- Monday, February 27, 2017 - 15:05 for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Sabyasachi Chatterjee – University of Chicago – sabyasachi@uchicago.edu

We consider the problem of estimating pairwise comparison probabilities in a tournament setting after observing every pair of teams play each other once. We assume the true pairwise probability matrix satisfies a stochastic transitivity condition which is popular in the social sciences. This stochastic transitivity condition generalizes the ubiquitous Bradley-Terry model used in the ranking literature. We propose a computationally efficient estimator for this problem, borrowing ideas from recent work on shape constrained regression. We show that the worst case rate of our estimator matches the best known rate for computationally tractable estimators. Additionally, we show our estimator enjoys faster rates of convergence for several sub-parameter spaces of interest, thereby showing automatic adaptivity. We also study the missing data setting where only a fraction of all possible games are observed at random.
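The stochastic transitivity condition can be sketched concretely. The snippet below builds a Bradley-Terry probability matrix from hypothetical team skills (the skill values are made up for illustration) and verifies strong stochastic transitivity (SST) by brute force over all triples.

```python
import numpy as np

# Bradley-Terry: P(i beats j) = w_i / (w_i + w_j) for team "skills" w.
w = np.array([3.0, 2.0, 1.0, 0.5])
P = w[:, None] / (w[:, None] + w[None, :])

def is_sst(P, tol=1e-12):
    """Strong stochastic transitivity: if P[i,k] >= 1/2 and P[k,j] >= 1/2,
    then P[i,j] >= max(P[i,k], P[k,j]). Checked over all triples."""
    m = P.shape[0]
    for i in range(m):
        for k in range(m):
            for j in range(m):
                if P[i, k] >= 0.5 and P[k, j] >= 0.5:
                    if P[i, j] < max(P[i, k], P[k, j]) - tol:
                        return False
    return True

print(is_sst(P))  # True: Bradley-Terry matrices satisfy SST
```

The SST class is strictly larger than the Bradley-Terry class, which is the sense in which the condition "generalizes" the model: the estimator in the talk only assumes SST, not the parametric form.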

- Series
- Job Candidate Talk
- Time
- Thursday, February 23, 2017 - 11:00 for 1 hour (actually 50 minutes)
- Location
- Skiles 005
- Speaker
- Rajarshi Mukherjee – Department of Statistics, Stanford University

In this talk, I will discuss some
examples of sparse signal detection problems in the context of binary outcomes.
These will be motivated by examples from next generation sequencing association
studies, understanding heterogeneities in large scale networks, and exploring
opinion distributions over networks. Moreover, these examples will serve as
templates to explore interesting phase transitions present in such studies. In
particular, these phase transitions will be aimed at revealing a difference
between studies with possibly dependent binary outcomes and Gaussian outcomes.
The theoretical developments will be further complemented with numerical
results.

- Series
- Job Candidate Talk
- Time
- Tuesday, January 24, 2017 - 10:00 for 1 hour (actually 50 minutes)
- Location
- Skiles 005
- Speaker
- Shishi Luo – UC Berkeley – shishi.luo@berkeley.edu

Biology is becoming increasingly quantitative, with large genomic datasets being curated at a rapid rate. Sound mathematical modeling as well as data science approaches are both needed to take advantage of these newly available datasets. I will describe two projects that span these approaches. The first is a Markov chain model of natural selection acting at two scales, motivated by the virulence-transmission tradeoff from pathogen evolution. This stochastic model, under a natural scaling, converges to a nonlinear deterministic system for which we can analytically derive steady-state behavior. This analysis, along with simulations, leads to general properties of selection at two scales. The second project is a bioinformatics pipeline that identifies gene copy number variants, currently a difficult problem in modern genomics. This quantification of copy number variation in turn generates new mathematical questions that require the type of probabilistic modelling used in the first project.

- Series
- Job Candidate Talk
- Time
- Thursday, January 19, 2017 - 14:00 for 1 hour (actually 50 minutes)
- Location
- Skiles 005
- Speaker
- Wenjing Liao – Johns Hopkins University – liao@math.jhu.edu

High-dimensional data arise in many fields of contemporary science and introduce new challenges in statistical learning due to the well-known curse of dimensionality. Many data sets in image analysis and signal processing are in a high-dimensional space but exhibit a low-dimensional structure. We are interested in building efficient representations of these data for the purpose of compression and inference, and giving performance guarantees that are only cursed by the intrinsic dimension of data. Specifically, in the setting where a data set in $R^D$ consists of samples from a probability measure concentrated on or near an unknown $d$-dimensional manifold with $d$ much smaller than $D$, we consider two sets of problems: low-dimensional geometric approximation to the manifold and regression of a function on the manifold. In the first case we construct multiscale low-dimensional empirical approximations to the manifold and give finite-sample performance guarantees. In the second case we exploit these empirical geometric approximations of the manifold to construct multiscale approximations to the function. We prove finite-sample guarantees showing that we attain the same learning rates as if the function was defined on a Euclidean domain of dimension $d$. In both cases our approximations can adapt to the regularity of the manifold or the function even when this varies at different scales or locations. All algorithms have complexity $C n\log (n)$ where $n$ is the number of samples, and the constant $C$ is linear in $D$ and exponential in $d$.
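A toy version of the low-intrinsic-dimension setting (my sketch, not the multiscale construction of the talk): sample from a circle, a $d = 1$ manifold embedded in $R^D$, and read off the intrinsic dimension from the singular values of a local neighborhood. The neighborhood size, noise level, and threshold are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
D, n = 20, 2000

# Sample from a 1-dimensional manifold (a circle) embedded in R^D.
t = rng.uniform(0, 2 * np.pi, n)
X = np.zeros((n, D))
X[:, 0] = np.cos(t)
X[:, 1] = np.sin(t)
X += 0.005 * rng.normal(size=(n, D))         # small ambient noise

# Local PCA around one point: singular values reveal intrinsic dimension.
center = X[0]
d2 = np.sum((X - center) ** 2, axis=1)
nbrs = X[np.argsort(d2)[:100]]               # 100 nearest neighbors
nbrs = nbrs - nbrs.mean(axis=0)
s = np.linalg.svd(nbrs, compute_uv=False)

# One dominant singular value -> estimated intrinsic dimension d = 1,
# even though the ambient dimension is D = 20.
d_hat = int(np.sum(s > 0.2 * s[0]))
print(d_hat)  # 1
```

This is the basic geometric fact the multiscale constructions exploit: locally the data are nearly $d$-dimensional, so approximation and regression rates can depend on $d$ rather than $D$.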

- Series
- Job Candidate Talk
- Time
- Thursday, January 12, 2017 - 11:05 for 1 hour (actually 50 minutes)
- Location
- Skiles 006
- Speaker
- Tengyuan Liang – University of Pennsylvania – tengyuan@wharton.upenn.edu

Network data analysis has wide applications in computational social
science, computational biology, online social media, and data
visualization. For many of these network inference questions, the
brute-force (yet statistically optimal) methods involve combinatorial
optimization, which is computationally prohibitive when faced with large
scale networks. Therefore, it is important to understand the effect on
statistical inference when focusing on computationally tractable methods.
In this talk, we will discuss three closely related statistical models for
different network inference problems. These models answer inference
questions on cliques, communities, and ties, respectively. For each
particular model, we will describe the statistical model, propose new
computationally efficient algorithms, and study the theoretical properties
and numerical performance of the algorithms. Further, we will quantify the
computational optimality through describing the intrinsic barrier for
certain efficient algorithm classes, and investigate the
computational-to-statistical gap theoretically. A key feature shared by our
studies is that, as the parameters of the model change, the problems
exhibit different phases of computational difficulty.
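As a small, hedged illustration of a computationally tractable network-inference method, here is spectral clustering on a two-community stochastic block model; the edge probabilities and sizes are arbitrary choices, and this is not one of the talk's specific models.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200                                  # nodes, two communities of 100
labels = np.array([0] * 100 + [1] * 100)

# Stochastic block model: edge prob 0.5 within, 0.1 across communities.
p_in, p_out = 0.5, 0.1
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                              # symmetric adjacency, no self-loops

# Spectral clustering: the sign pattern of the eigenvector attached to the
# second-largest eigenvalue of A recovers the two communities.
vals, vecs = np.linalg.eigh(A)
v2 = vecs[:, -2]
guess = (v2 > 0).astype(int)

# Eigenvector sign is arbitrary, so score against both labelings.
accuracy = max(np.mean(guess == labels), np.mean(guess != labels))
print(accuracy)
```

In this well-separated regime the spectral method succeeds easily; the interesting phase transitions arise as $p_{in} - p_{out}$ shrinks, which is where computational-to-statistical gaps of the kind discussed in the talk appear.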
