Compute Faster and Learn Better: Machine Learning via Nonconvex Optimization
- Series: Applied and Computational Mathematics Seminar
- Time: Monday, April 2, 2018, 13:55 (50 minutes)
- Location: Skiles 005
- Speaker: Tuo Zhao – Georgia Institute of Technology
Nonconvex
optimization naturally arises in many machine learning problems.
Machine learning researchers exploit various nonconvex formulations to
gain modeling flexibility, estimation robustness, adaptivity, and
computational scalability. Although classical computational complexity
theory has shown that solving nonconvex optimization is generally
NP-hard in the worst case, practitioners have proposed numerous
heuristic optimization algorithms, which achieve outstanding empirical
performance in real-world applications. To
bridge this gap between practice and theory, we propose a new
generation of model-based optimization algorithms and theory, which
incorporate statistical thinking into modern optimization.
Specifically, when designing practical computational algorithms, we take
the underlying statistical models into consideration. Our novel
algorithms exploit hidden geometric structures behind many nonconvex
optimization problems, and can obtain global optima with the desired
statistical properties in polynomial time with high probability.
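The claim above can be illustrated with a classical example of such hidden geometric structure (a minimal sketch, not the speaker's algorithm): for symmetric low-rank matrix factorization, the objective f(U) = ||UU^T − M||_F^2 is nonconvex in U, yet it is known that every local minimum is global, so plain gradient descent from a random initialization succeeds with high probability.

```python
import numpy as np

# Illustrative sketch (assumed example, not the speaker's method):
# gradient descent on the nonconvex objective f(U) = ||U U^T - M||_F^2.
# All local minima of this problem are global, so a random start suffices.

rng = np.random.default_rng(0)
n, r = 20, 3
U_star = rng.standard_normal((n, r))
M = U_star @ U_star.T                  # ground-truth rank-r PSD matrix

U = 0.1 * rng.standard_normal((n, r))  # random initialization
step = 0.001
for _ in range(10000):
    grad = 4 * (U @ U.T - M) @ U       # gradient of ||U U^T - M||_F^2
    U -= step * grad

err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
print(f"relative error: {err:.2e}")
```

Despite nonconvexity, the iterate recovers M up to numerical precision; the step size and iteration count here are ad hoc choices for this small instance.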