
Series: ACO Student Seminar

In network routing, users often trade off different objectives when selecting their best route. An example is transportation networks, where, due to uncertainty in travel times, drivers may trade off the average travel time of a route against its variance, or trade off time against cost, such as the cost paid in tolls.

We wish to understand the effect of two conflicting criteria on route selection by studying the resulting traffic assignment (equilibrium) in the network. We investigate two perspectives on this topic: (1) How does the equilibrium cost of a risk-averse population compare to that of a risk-neutral population (i.e., how much longer do we spend in traffic because we are risk-averse)? (2) How does the equilibrium cost of a heterogeneous population compare to that of a comparable homogeneous user population?

We provide characterizations for both questions above.
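To make question (1) concrete, here is a minimal toy example, not taken from the talk: a Pigou-style network with one unit of nonatomic traffic over two parallel links, where risk-averse users perceive a route's cost as mean plus a multiple of its standard deviation. The latency functions, the risk-aversion parameter `lam`, and the mean-standard-deviation cost model are all illustrative assumptions.

```python
# Hypothetical two-link example (illustrative only):
#   Link 1: mean latency x (x = flow on link 1), standard deviation sigma (risky).
#   Link 2: constant latency 1, no variance (safe).
# A risk-averse user perceives cost = mean + lam * std.

def equilibrium_flow(lam, sigma):
    """Wardrop equilibrium flow on link 1: users split so perceived costs equalize.
    Perceived cost of link 1 at flow x is x + lam*sigma; link 2 costs 1."""
    x = 1.0 - lam * sigma          # solve x + lam*sigma = 1
    return min(max(x, 0.0), 1.0)   # clamp: all-or-nothing if costs never equalize

def mean_travel_time(x):
    """Population-average *actual* (mean) travel time at split x."""
    return x * x + (1.0 - x) * 1.0

x_neutral = equilibrium_flow(lam=0.0, sigma=1.0)   # risk-neutral: lam = 0
x_averse  = equilibrium_flow(lam=0.5, sigma=1.0)   # risk-averse population

print(x_neutral, mean_travel_time(x_neutral))  # 1.0 1.0
print(x_averse,  mean_travel_time(x_averse))   # 0.5 0.75
```

In this particular toy instance risk aversion happens to *lower* the mean travel time (it acts like a congestion toll on the risky link); the comparison can go either way in general, which is what the characterizations in the talk address.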

Based on joint work with Richard Cole, Thanasis Lianeas and Nicolas Stier-Moses.

At the end I will mention current work of my research group on algorithms and mechanism design for power systems.

**Biography:** Evdokia Nikolova is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin, where she is a member of the Wireless Networking & Communications Group. Previously she was an Assistant Professor in Computer Science and Engineering at Texas A&M University. She graduated with a BA in Applied Mathematics with Economics from Harvard University, an MS in Mathematics from the University of Cambridge, U.K., and a Ph.D. in Computer Science from MIT.

Nikolova's research aims to improve the design and efficiency of complex systems (such as networks and electronic markets), by integrating stochastic, dynamic and economic analysis. Her recent work examines how human risk aversion transforms traditional computational models and solutions. One of her algorithms has been adapted in the MIT CarTel project for traffic-aware routing. She currently focuses on developing algorithms for risk mitigation in networks, with applications to transportation and energy. She is a recipient of an NSF CAREER award and a Google Faculty Research Award. Her research group has been recognized with a best student paper award and a best paper award runner-up. She currently serves on the editorial board of the journal Mathematics of Operations Research.

Series: Dissertation Defense

For a first-order (deterministic) mean-field game with non-local running and initial couplings, a classical solution is constructed for the associated, so-called master equation, a partial differential equation in infinite-dimensional space with a non-local term, assuming the time horizon is sufficiently small and the coefficients are smooth enough, without convexity conditions on the Hamiltonian.

Series: Dissertation Defense

In independent bond percolation with parameter p, if one removes the vertices of the infinite cluster (and their incident edges), for which values of p does the remaining graph contain an infinite cluster? Grimmett-Holroyd-Kozma used the triangle condition to show that for d > 18, the set of such p contains values strictly larger than the percolation threshold pc. With the work of Fitzner-van der Hofstad, this has been reduced to d > 10. We reprove this result by showing that for d > 10 and some p > pc, there are infinite paths consisting of "shielded" vertices (vertices all of whose incident edges are closed), which must lie in the complement of the infinite cluster. Using numerical values of pc, this bound can be reduced to d > 7. Our methods are elementary and do not require the triangle condition.
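The notion of a shielded vertex is easy to experiment with. The sketch below, a low-dimensional illustration and not the d > 10 argument of the talk, estimates the density of shielded vertices in bond percolation on an n x n torus (standing in for Z^2); a given vertex is shielded exactly when its 2d incident edges are all closed, which happens with probability (1-p)^(2d).

```python
import random

def shielded_fraction(n, p, seed=0):
    """Fraction of shielded vertices (all incident edges closed) in an
    n x n torus under i.i.d. bond percolation with parameter p.
    In Z^d a given vertex is shielded with probability (1-p)^(2d)."""
    rng = random.Random(seed)
    # h[i][j]: edge from (i,j) to (i+1,j) is open; v[i][j]: edge to (i,j+1) (torus)
    h = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    v = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    count = 0
    for i in range(n):
        for j in range(n):
            incident = (h[i][j], h[(i - 1) % n][j], v[i][j], v[i][(j - 1) % n])
            if not any(incident):       # all four incident edges closed
                count += 1
    return count / (n * n)

# Monte Carlo estimate vs. the closed form (1 - p)^4 for d = 2:
print(shielded_fraction(200, 0.2), (1 - 0.2) ** 4)
```

Note that at d = 2 shielded vertices are too sparse to form infinite paths for p near pc; the point of the talk is that this does happen in high dimensions.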

Invasion percolation is a stochastic growth model that follows a greedy algorithm. After assigning i.i.d. uniform random variables (weights) to all edges of the d-dimensional cubic lattice, the growth starts at the origin. At each step, we adjoin to the current cluster the edge of minimal weight from its boundary. In 1985, Chayes-Chayes-Newman studied the "acceptance profile" of the invasion: for a given p in [0,1], it is the ratio of the expected number of invaded edges until time n with weight in [p,p+dp] to the expected number of observed edges (those in the cluster or its boundary) with weight in the same interval. They showed that in all dimensions, the acceptance profile an(p) converges to one for p&lt;pc and to zero for p&gt;pc. In this work, we consider an(p) at the critical point p=pc in two dimensions and show that it is bounded away from zero and one as n goes to infinity.
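The greedy growth rule described above translates directly into a priority-queue simulation. The following sketch runs invasion percolation on the Z^2 bond lattice with lazily assigned uniform weights; it is an illustration of the model, not of the acceptance-profile analysis in the talk. A well-known feature visible here is that, after an initial transient, the invaded weights concentrate below the critical value pc = 1/2 in two dimensions.

```python
import heapq
import random

def invasion_percolation(steps, seed=0):
    """Greedy invasion on the Z^2 bond lattice: repeatedly adjoin the
    boundary edge of minimal i.i.d. Uniform(0,1) weight.
    Returns the list of invaded edge weights in invasion order."""
    rng = random.Random(seed)
    weight = {}                      # lazily assigned edge weights

    def w(e):
        if e not in weight:
            weight[e] = rng.random()
        return weight[e]

    def edges(v):
        x, y = v
        for u in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            yield (min(v, u), max(v, u)), u   # canonical edge key, neighbor

    cluster = {(0, 0)}
    boundary = [(w(e), e, u) for e, u in edges((0, 0))]
    heapq.heapify(boundary)
    invaded = []
    while len(invaded) < steps:
        wt, e, u = heapq.heappop(boundary)
        if u in cluster:
            continue                 # edge became internal; skip it
        cluster.add(u)
        invaded.append(wt)
        for e2, u2 in edges(u):
            if u2 not in cluster:
                heapq.heappush(boundary, (w(e2), e2, u2))
    return invaded

weights = invasion_percolation(2000)
# After a burn-in, invaded weights rarely exceed p_c = 1/2 by much in d = 2:
print(max(weights[1000:]))
```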

Series: Dissertation Defense

An electron interacting with the vibrational modes of a polar crystal is called a polaron. Polarons are the simplest Quantum Field Theory models, yet their most basic features such as the effective mass, ground-state energy and wave function cannot be evaluated explicitly. And while several successful theories have been proposed over the years to approximate the energy and effective mass of various polarons, they are built entirely on unjustified, even questionable, Ansätze for the wave function.

In this talk I shall provide the first explicit description of the ground-state wave function of a polaron in an asymptotic regime: For the Fröhlich polaron localized in a Coulomb potential and exposed to a homogeneous magnetic field of strength $B$, it will be shown that the ground-state electron density in the direction of the magnetic field converges pointwise and in a weak sense as $B\rightarrow\infty$ to the square of a hyperbolic secant function, in sharp contrast to the Gaussian wave functions suggested in the physics literature.

Series: PDE Seminar

In this talk we study master equations arising from mean field game problems, under the crucial monotonicity conditions. Classical solutions of such equations require very strong technical conditions. Moreover, unlike the master equations arising from mean field control problems, the mean field game master equations are non-local and even classical solutions typically do not satisfy the comparison principle, so the standard viscosity solution approach seems infeasible. We shall propose a notion of weak solution for such equations and establish its wellposedness. Our approach relies on a new smooth mollifier for functions of measures, which unfortunately does not keep the monotonicity property, and the stability result of master equations. The talk is based on joint work with Jianfeng Zhang.

Series: Other Talks

Multidimensional data is ubiquitous in applications, e.g., images and videos. I will introduce some of my previous and current work related to this topic.

1) Lattice metric space and its applications. Lattice and superlattice patterns are found in material sciences, nonlinear optics and sampling designs. We propose a lattice metric space based on modular group theory and metric geometry, which provides a visually consistent measure of dissimilarity among lattice patterns. We apply this framework to superlattice separation and grain defect detection.

2) We briefly introduce two current projects. First, we propose new algorithms for automatic PDE modeling, which drastically improve efficiency and robustness against additive noise. Second, we introduce a new model for surface reconstruction from point cloud data (PCD) and provide a fast ADMM-type algorithm.

Series: CDSNS Colloquium

(Please note the unusual day)

New and proposed missions for approaching moons, and particularly icy moons, increasingly require the design of trajectories within challenging multi-body environments that stress or exceed the capabilities of the two-body design methodologies typically used over the last several decades. These current methods encounter difficulties because they often require appreciable user interaction, result in trajectories that require significant amounts of propellant, or miss potential mission-enabling options. The use of dynamical systems methods applied to three-body and multi-body models provides a pathway to obtain a fuller theoretical understanding of the problem that can then result in significant improvements to trajectory design in each of these areas. The search for approach trajectories within highly nonlinear, chaotic regimes where multi-body effects dominate becomes increasingly complex, especially when landing, orbiting, or flyby scenarios must be considered in the analysis. In the case of icy moons, approach trajectories must also be tied into the broader tour which includes flybys of other moons. The tour endgame typically includes the last several flybys, or resonances, before the final approach to the moon, and these resonances further constrain the type of approach that may be used.

In this seminar, new methods for approaching moons by traversing the chaotic regions near the Lagrange point gateways will be discussed for several examples. The emphasis will be on landing trajectories approaching Europa, including a global analysis of trajectories approaching any point on the surface and analyses for specific landing scenarios across a range of different energies. The constraints on the approach from the tour within the context of the endgame strategy will be given for a variety of different moons and scenarios. Specific approaches using quasiperiodic or Lissajous orbits will be shown, and general landing and orbit insertion trajectories will be placed into context relative to the invariant manifolds of unstable periodic and quasiperiodic orbits. These methods will be discussed and applied for the specific example of the Europa Lander mission concept. The Europa Lander mission concept is particularly challenging in that it requires the redesign of the approach scenario after the spacecraft has launched to accommodate landing at a wide range of potential locations on the surface. The final location would be selected based on reconnaissance from the Europa Clipper data once Europa Lander is en route. Taken as a whole, these methods will provide avenues to find both fundamentally new approach pathways and reduce cost to enable new missions.

Series: Stochastics Seminar

I will talk about the structure of large square random matrices with centered i.i.d. heavy-tailed entries (only two finite moments are assumed). In our previous work with R. Vershynin we have shown that the operator norm of such a matrix A can be reduced to the optimal sqrt(n) order with high probability by zeroing out a small submatrix of A, but we did not describe the structure of this "bad" submatrix, nor provide a constructive way to find it. Now we can give a very simple description of this small "bad" subset: it is enough to zero out a small fraction of the rows and columns of A with largest L2 norms to bring its operator norm to the almost optimal sqrt(loglog(n)*n) order, under the additional assumption that the entries of A are symmetrically distributed. As a corollary, one can also obtain a constructive procedure to find a small submatrix of A that one can zero out to achieve the same regularization.
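The regularization step described above is simple enough to try numerically. The sketch below is an illustration under assumed parameters (matrix size, zeroed fraction, and a Student-t entry distribution with three degrees of freedom, which is symmetric and has exactly two finite moments in the spirit of the talk); it is not the construction from the paper. Zeroing rows and columns amounts to multiplying by coordinate projections, so the operator norm can only decrease.

```python
import numpy as np

def regularize(A, eps):
    """Zero out the eps-fraction of rows and columns of A with largest L2 norms."""
    n = A.shape[0]
    k = max(1, int(eps * n))
    B = A.copy()
    rows = np.argsort(np.linalg.norm(B, axis=1))[-k:]   # k heaviest rows
    cols = np.argsort(np.linalg.norm(B, axis=0))[-k:]   # k heaviest columns
    B[rows, :] = 0.0
    B[:, cols] = 0.0
    return B

rng = np.random.default_rng(0)
n, eps = 500, 0.01
# Centered, symmetric, heavy-tailed i.i.d. entries: Student-t, 3 degrees of freedom
A = rng.standard_t(df=3, size=(n, n))

norm_before = np.linalg.norm(A, 2)            # spectral (operator) norm
norm_after = np.linalg.norm(regularize(A, eps), 2)
print(norm_before, norm_after, np.sqrt(n))    # compare to the sqrt(n) scale
```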

I am planning to discuss some details of the proof, the main component of which is the development of techniques that extend constructive regularization approaches known for the Bernoulli matrices (from the works of Feige and Ofek, and Le, Levina and Vershynin) to the considerably broader class of heavy-tailed random matrices.

Friday, April 26, 2019 - 12:00
Location: Skiles 006
Jaewoo Jung, Georgia Institute of Technology, jjung325@gatech.edu
Organizer: Trevor Gunn

By a classical result of Hilbert, a non-negative homogeneous polynomial (form) over $\mathbb{R}$ is a sum of squares exactly when it is bivariate, a quadratic form, or a ternary quartic. Once we know a form is a sum of squares, the next natural question is how many squares are needed to represent it. We call the minimal number of summands in such a representation the rank (of the sum of squares). The ranks of some classes of forms are known. For example, any non-negative bivariate form (allowing all monomials) can be written as a sum of $2$ squares (i.e., its rank is $2$), and every non-negative ternary quartic can be written as a sum of $3$ squares (i.e., its rank is $3$). Our question is: if we do not allow some monomials in a bivariate form, what is its rank? In the talk, we will introduce this problem with an algebraic-geometry flavor and provide some notions and tools to deal with it.
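For orientation, the rank-$2$ statement for bivariate forms has a short classical proof (standard background, not a result of the talk):

```latex
% A nonnegative bivariate form p factors over C into linear forms; real
% roots occur with even multiplicity, so the factors pair into conjugates:
%   p(x,y) = g(x,y)\,\overline{g(x,y)}, \qquad g = u + iv,\ u, v \text{ real forms},
% and hence p = u^2 + v^2, a sum of two squares. A worked example:
\[
x^4 + y^4 \;=\; \bigl(x^2 - y^2\bigr)^2 + \bigl(\sqrt{2}\,xy\bigr)^2 .
\]
```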

Series: Research Horizons Seminar

During the last 30 years there has been much interest in random graph processes, i.e., random graphs which grow by adding edges (or vertices) step by step in some random way. Part of the motivation stems from more realistic modeling, since many real-world networks such as Facebook evolve over time. Further motivation stems from extremal combinatorics, where these processes lead to some of the best known bounds in Ramsey and Turán theory (that go beyond textbook applications of the probabilistic method). I will review several random graph processes of interest, and (if time permits) illustrate one of the main proof techniques using a simple toy example.
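The most basic such process is the Erdős–Rényi graph process, where uniformly random edges are added one at a time. A standard way to watch it evolve is to track component sizes with a union-find structure; the sketch below (an illustration of the general setup, not of any specific process from the talk) shows the emergence of the giant component once the number of edges passes roughly n/2.

```python
import random

class DSU:
    """Union-find with size tracking (largest component maintained)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.max_size = 1

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.max_size = max(self.max_size, self.size[ra])

def er_process(n, steps, seed=0):
    """Erdos-Renyi process: add `steps` uniformly random edges on n vertices;
    return the size of the largest component at the end."""
    rng = random.Random(seed)
    dsu = DSU(n)
    for _ in range(steps):
        a, b = rng.randrange(n), rng.randrange(n)
        dsu.union(a, b)        # self-loops / repeated edges are harmless no-ops
    return dsu.max_size

n = 100_000
print(er_process(n, n // 4))   # subcritical: largest component stays tiny
print(er_process(n, n))        # supercritical: giant component, roughly 0.8 n
```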