In this talk, we will discuss some advantages of using non-convex penalty functions in variational regularization problems and how to handle them via the so-called Convex-Nonconvex approach. In particular, TV-like non-convex penalty terms will be presented for problems of segmentation and additive decomposition of scalar functions defined over a 2-manifold embedded in \R^3. The parametrized regularization terms are equipped with a free scalar parameter that tunes their degree of non-convexity. Appropriate numerical schemes based on the Alternating Direction Method of Multipliers (ADMM) are proposed to solve the optimization problems.
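To fix ideas on the ADMM splitting mentioned above, here is a minimal sketch for the much simpler convex 1D total-variation denoising problem (not the talk's non-convex, manifold-valued setting); the function name and all parameters are illustrative, and the talk's parametrized non-convex penalty would replace the soft-thresholding step:

```python
import numpy as np

def tv_denoise_admm(y, lam=1.0, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||x - y||^2 + lam*||D x||_1 with the splitting z = D x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator, shape (n-1, n)
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                     # scaled dual variable
    M = np.eye(n) + rho * D.T @ D           # precomputed x-update system matrix
    for _ in range(iters):
        # x-update: solve (I + rho*D^T D) x = y + rho*D^T (z - u)
        x = np.linalg.solve(M, y + rho * D.T @ (z - u))
        Dx = D @ x
        # z-update: soft-thresholding, the proximal map of (lam/rho)*|.|;
        # a non-convex penalty would change only this proximal step
        v = Dx + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + Dx - z                      # dual ascent
    return x
```

The splitting isolates the (possibly non-convex) penalty in a separable proximal step, which is what makes the CNC construction tractable.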
Mac Lane's technique of "inductive valuations" is over 80 years old, but has only recently been used to attack problems about arithmetic surfaces. We will give an explicit, hands-on introduction to the theory, requiring little background beyond the definition of a non-archimedean valuation. We will then outline how this theory is helpful for resolving "weak wild" quotient singularities of arithmetic surfaces, as well as for proving conductor-discriminant inequalities for higher genus curves. The first project is joint work with Stefan Wewers, and the second is joint work with Padmavathi Srinivasan.
The semi-cubical cusp which is formed in the bottom of a mug when you shine a light on it is an everyday example of a caustic. In this talk we will become familiar with the singularities of Lagrangian and Legendrian fronts, also known as caustics in the mathematics literature, which have played an important role in symplectic and contact topology since the work of Arnold and his collaborators. For this purpose we will discuss some basic singularity theory, the method of generating families in cotangent bundles, the geometry of the front projection, the Legendrian Reidemeister theorem, and draw many pictures of the simplest examples.
Reiher, Rödl, Ruciński, Schacht, and Szemerédi proved, via a modification of the absorbing method, that every 3-uniform $n$-vertex hypergraph, $n$ large, with minimum vertex degree at least $(5/9+\alpha)n^2/2$ contains a tight Hamiltonian cycle. Recently, owing to a further modification of the method, the same group of authors, joined by Bjarne Schuelke, extended this result to 4-uniform hypergraphs with minimum pair degree at least, again, $(5/9+\alpha)n^2/2$. In my talk I will outline these proofs and point to the crucial ideas behind both modifications of the absorbing method.
The popularity of machine learning is growing rapidly in transportation, with applications ranging from traffic engineering to travel demand forecasting and pavement material modeling, to name just a few. Researchers often find that machine learning achieves higher predictive accuracy than traditional methods. However, many machine-learning methods are viewed as “black-box” models that lack interpretability for decision making. As a result, increasing attention is being devoted to the interpretability of machine-learning results.
In this talk, I introduce the application of machine learning to the study of travel behavior, covering both mode prediction and behavioral interpretation. I first discuss the key differences between machine learning and logit models in modeling travel mode choice, focusing on model development, evaluation, and interpretation. Next, I apply existing machine-learning interpretation tools and propose two new model-agnostic interpretation tools to examine behavioral heterogeneity. Lastly, I show the potential of using machine learning as an exploratory tool to tune the utility functions of logit models.
I illustrate these ideas by examining stated-preference travel survey data for a new mobility-on-demand transit system that integrates fixed-route buses and on-demand shuttles. The results show that the best-performing machine-learning classifier achieves higher predictive accuracy than logit models while producing comparable behavioral outputs. In addition, results from model-agnostic interpretation tools show that certain machine-learning models (e.g. boosting trees) can readily account for individual heterogeneity and generate valuable behavioral insights on different population segments. Moreover, I show that interpretable machine learning can be applied to tune the utility functions of logit models (e.g. by specifying nonlinearities) and thereby enhance their performance. In turn, these findings can inform the design of new mobility services and transportation policies.
After the talk, we will go to lunch with the graduate students.
We introduce methods from convex optimization to solve the multi-marginal transport-type problems that arise in the context of density functional theory. Convex relaxations are used to provide outer approximations to the set of N-representable 2-marginals and 3-marginals, which in turn yield lower bounds on the energy. We further propose rounding schemes to obtain upper bounds on the energy.
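As background for the transport problems above, a discrete two-marginal transport problem is itself a linear program; the toy example below (not the talk's N-marginal DFT relaxation, and with hypothetical data) shows the basic convex formulation that the relaxations build on:

```python
import numpy as np
from scipy.optimize import linprog

# Toy discrete two-marginal transport problem with uniform marginals
# and quadratic cost c(x, y) = |x - y|^2 (illustration only).
n = 4
x = np.linspace(0.0, 1.0, n)
mu = np.full(n, 1.0 / n)                 # first marginal
nu = np.full(n, 1.0 / n)                 # second marginal
C = (x[:, None] - x[None, :]) ** 2       # cost matrix

# Linear marginal constraints on the coupling gamma (flattened row-major):
# row sums equal mu, column sums equal nu.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0     # row i sums to mu[i]
    A_eq[n + i, i::n] = 1.0              # column i sums to nu[i]
b_eq = np.concatenate([mu, nu])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
gamma = res.x.reshape(n, n)              # optimal coupling
```

With identical marginals the optimal cost is zero (mass stays in place); the DFT setting replaces this exact marginal polytope with tractable outer approximations.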
We continue the discussion of Chapter 8 in 3264 and All That. We will discuss complete quadrics, Hilbert schemes and Kontsevich spaces.
I will present joint work with Elena Kosygina and Ofer Zeitouni in which we prove the homogenization of a class of one-dimensional viscous Hamilton-Jacobi equations with random Hamiltonians that are nonconvex in the gradient variable. Due to the special form of the Hamiltonians, the solutions of these PDEs with linear initial conditions have representations involving exponential expectations of controlled Brownian motion in a random potential. The effective Hamiltonian is the asymptotic rate of growth of these exponential expectations as time goes to infinity and is explicit in terms of the tilted free energy of (uncontrolled) Brownian motion in a random potential. The proof involves large deviations, construction of correctors which lead to exponential martingales, and identification of asymptotically optimal policies.
Interpolative decomposition is a simple yet powerful tool for approximating low-rank matrices. After discussing the theory and algorithms, I will present a few new applications of interpolative decomposition in numerical partial differential equations, quantum chemistry, and machine learning.
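As a concrete illustration (not from the talk), SciPy ships an interpolative decomposition routine: a rank-k ID expresses a matrix as A ≈ B P, where B consists of k selected columns of A itself and P is an interpolation matrix. A minimal sketch on a synthetic low-rank matrix:

```python
import numpy as np
from scipy.linalg import interpolative as sli

# Synthetic matrix of exact rank 10 (hypothetical example data)
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 150))

k = 10
idx, proj = sli.interp_decomp(A, k)            # column indices + projection coefficients
B = sli.reconstruct_skel_matrix(A, k, idx)     # skeleton: k actual columns of A
P = sli.reconstruct_interp_matrix(idx, proj)   # interpolation matrix, shape (k, 150)
err = np.linalg.norm(B @ P - A) / np.linalg.norm(A)
```

Because the skeleton B reuses columns of A, the factorization inherits structure (sparsity, nonnegativity, physical meaning) that an SVD basis would destroy, which is what makes the ID attractive in the applications mentioned above.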
A major challenge in clinical and biomedical research is translating in-vitro and in-vivo model findings to humans. The translation success rate of new compounds going through the different clinical trial phases is generally about 10%. (i) The field is challenged by a lack of robust methods for translating model findings to humans (or for interpreting preclinical findings to accurately design successful patient regimens), which would provide a platform to evaluate a plethora of agents before they are channeled into clinical trials. Using set-theoretic principles of mapping morphisms, we recently developed a novel translational framework that can faithfully map experimental results to clinical patient results. This talk will demonstrate how this method was used to predict the outcomes of anti-TB drug clinical trials. (ii) Translation failure is deeply rooted in the dissimilarities between humans and the experimental models used: wide variation among pathogen isolates, genetic diversity within patient populations, and geographic heterogeneity. In TB, bacterial phenotypic heterogeneity shapes differential antibiotic susceptibility patterns in patients. This talk will also demonstrate the application of dynamical systems in systems biology to model (a) gene regulatory networks and how gene programs influence the metabolic/phenotypic plasticity of Mycobacterium tuberculosis, and (b) how different bacterial phenotypic subpopulations influence treatment outcomes and the translation of preclinical TB therapeutic regimens. In general, this talk will showcase how mathematical modeling can be used to critically analyze experimental and patient data.