Cancelled
- Series: Analysis Seminar
- Time: Wednesday, April 22, 2020, 13:55 for 1 hour (actually 50 minutes)
- Location: Skiles 005
- Speaker: TBA
Numerical algebraic geometry studies numerical methods for problems in algebraic geometry. In particular, finding roots of systems of polynomial equations with symbolic algorithms requires expensive computations, so numerical techniques often provide faster ways to tackle these problems. We establish numerical techniques for approximating roots of systems of equations and ways to certify their correctness.
As a technique for approximating roots of systems of equations, the homotopy continuation method will be introduced. Combining homotopy continuation with the monodromy group action, we introduce techniques for solving parametrized polynomial systems. Since numerical approaches rely on heuristics, we also study how to certify numerical roots of systems of equations. Building on Newton's method, we study the Krawczyk method and Smale's alpha theory; these two methods are mainly used to certify regular roots of systems. Furthermore, as an approach to multiple roots, we establish a local separation bound for a multiple root. For multiple roots whose deflation process terminates after only one iteration, we give their local separation bound and study how to certify approximations of such multiple roots.
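The tracking idea behind homotopy continuation can be sketched for a single univariate polynomial (a toy illustration, not the certified machinery the talk describes): start from a system g with known roots and follow each root along the straight-line homotopy H(x, t) = (1 - t)g(x) + t f(x) as t moves from 0 to 1, correcting with Newton steps at each value of t. The function names, step counts, and example polynomials below are all illustrative.

```python
def homotopy_solve(f, df, g, dg, start_roots, steps=100):
    """Track the known roots of g to roots of f along the straight-line
    homotopy H(x, t) = (1 - t) * g(x) + t * f(x), using a few Newton
    corrections of H(., t) at each discretized value of t."""
    roots = []
    for x in start_roots:
        for k in range(1, steps + 1):
            t = k / steps
            # Newton-correct x so that H(x, t) is (approximately) zero.
            for _ in range(5):
                h = (1 - t) * g(x) + t * f(x)
                dh = (1 - t) * dg(x) + t * df(x)
                x = x - h / dh
        roots.append(x)
    return roots

# Target system f(x) = x^2 - 2 (roots +/- sqrt(2));
# start system g(x) = x^2 - 1 with the known roots +/- 1.
f  = lambda x: x * x - 2
df = lambda x: 2 * x
g  = lambda x: x * x - 1
dg = lambda x: 2 * x
start = [1.0, -1.0]
print(homotopy_solve(f, df, g, dg, start))  # approximately [1.41421..., -1.41421...]
```

In practice the path is tracked over the complex numbers with adaptive step sizes and a predictor step before the Newton correction; the fixed step count here keeps the sketch short.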
The talk will be held online via Bluejeans; use the following link to join the meeting.
TBA (joint with Stochastics Seminar)
My thesis studies two topics. In the first part, we study a spectrum reconstruction technique. Eigenvalues play an important role in many research fields and are the foundation of many practical techniques such as PCA (Principal Component Analysis). We believe that related algorithms should perform better with more accurate spectrum estimation. An approximation formula was proposed by Prof. Matzinger, but without proof. In our research, we show why the formula works, and, when both the number of features and the dimension of the space go to infinity, we find the order of the error of the approximation formula, which is governed by a constant C, the ratio of the dimension of the space to the number of features.
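As a rough numerical illustration of why spectrum reconstruction is needed (this is not the approximation formula from the thesis): when the dimension p and the sample size n are comparable, with the ratio C = p/n held fixed, the eigenvalues of the sample covariance spread out around the true spectrum instead of concentrating on it. The sketch below uses the identity as the true covariance, so every population eigenvalue equals 1, yet the sample eigenvalues range well away from 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_eigenvalues(p, n):
    """Eigenvalues of the sample covariance of n standard-normal
    p-dimensional vectors; the true covariance is the identity,
    so every population eigenvalue equals 1."""
    X = rng.standard_normal((n, p))
    S = X.T @ X / n
    return np.sort(np.linalg.eigvalsh(S))

eigs = sample_eigenvalues(p=200, n=800)  # C = p / n = 0.25
# The spread is roughly the interval [(1 - sqrt(C))^2, (1 + sqrt(C))^2],
# i.e. about [0.25, 2.25] here, even though every true eigenvalue is 1.
print(eigs.min(), eigs.max())
```

Correcting the sample eigenvalues back toward the population spectrum, with an error that depends on C, is the kind of task the reconstruction formula addresses.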
In the second part, we focus on applications of Naive Bayes models to text classification problems. In particular, we focus on two situations: 1) there is insufficient data for model training; 2) the partial labeling problem. We choose Naive Bayes as our base model and improve it to achieve better performance in these two situations. To improve model performance and to use as much information as possible, we introduce a correlation factor, which relaxes the conditional independence assumption of Naive Bayes. The new estimates are biased compared to the traditional Naive Bayes estimates, but they have much smaller variance, which gives better prediction results.
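For reference, here is a minimal multinomial Naive Bayes text classifier with Laplace smoothing, under the standard conditional-independence assumption that the thesis's correlation factor relaxes (the factor itself is not reproduced here; all names and the tiny corpus are illustrative).

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (list_of_words, label) pairs. Returns log-priors and
    Laplace-smoothed log-likelihoods for a multinomial Naive Bayes model."""
    class_docs = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        class_docs[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    n_docs = sum(class_docs.values())
    log_prior = {c: math.log(class_docs[c] / n_docs) for c in class_docs}
    log_like = {}
    for c in class_docs:
        total = sum(word_counts[c].values())
        # Laplace smoothing: add 1 to every count so unseen words
        # do not zero out the class probability.
        log_like[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                       for w in vocab}
    return log_prior, log_like

def predict(words, log_prior, log_like):
    """Pick the class maximizing log P(c) + sum of log P(w | c)."""
    scores = {c: log_prior[c] + sum(log_like[c][w] for w in words if w in log_like[c])
              for c in log_prior}
    return max(scores, key=scores.get)

docs = [("free prize win".split(), "spam"),
        ("win money now".split(), "spam"),
        ("meeting schedule today".split(), "ham"),
        ("project meeting notes".split(), "ham")]
model = train_nb(docs)
print(predict("win free money".split(), *model))  # → spam
```

The per-word factorization in `predict` is exactly the conditional independence assumption; a correlation factor of the kind the abstract describes would adjust these per-word terms rather than treat them as independent.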