Seminars and Colloquia by Series

Wednesday, March 27, 2019 - 10:00 , Location: Skiles 005 , Dan Coombs , UBC (visiting Emory) , coombs@math.ubc.ca , Organizer: Howie Weiss

The likelihood of HIV infection following risky contact is believed to be low. This suggests that the infection process is stochastic and governed by rare events. I will present mathematical branching process models of early infection and show how we have used them to gain insights into the duration of the undetectable phase of HIV infection, the likelihood of success of pre- and post-exposure prophylaxis, and the effects of prior infection with HSV-2. Although I will describe quite a bit of theory, I will try to keep giant and incomprehensible formulae to a minimum.
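
The link between rare establishment events and branching processes can be made concrete with a small computation. As a hedged illustration (a textbook single-type Galton-Watson process with Poisson offspring, not necessarily the models in the talk), the sketch below computes the probability that an exposure dies out rather than seeding a full infection:

    # A minimal sketch, not the speaker's model: extinction probability of a
    # Galton-Watson branching process with Poisson(R) offspring, found as the
    # smallest fixed point q = G(q) of the offspring generating function.
    import math

    def extinction_probability(R, tol=1e-12, max_iter=10_000):
        """Iterate q <- G(q) = exp(R*(q-1)); converges to the smallest root."""
        q = 0.0
        for _ in range(max_iter):
            q_next = math.exp(R * (q - 1.0))
            if abs(q_next - q) < tol:
                break
            q = q_next
        return q

    # Even for a supercritical process (R slightly above 1), most exposures
    # still go extinct, so infection establishment is governed by rare events.
    for R in (0.8, 1.1, 2.0, 5.0):
        print(f"R = {R}: P(extinction) = {extinction_probability(R):.4f}")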

Wednesday, February 27, 2019 - 11:01 , Location: Skiles 005 , Pavel Skums , GSU/CDC , skums@gsu.edu , Organizer: Leonid A. Bunimovich

Inference of evolutionary dynamics of heterogeneous cancer and viral populations

Genetic diversity of cancer cell populations and intra-host viral populations is one of the major factors influencing disease progression and treatment outcome. However, the evolutionary dynamics of such populations remain poorly understood. Quantification of selection is a key step toward understanding the evolutionary mechanisms driving cancer and viral diseases. We will introduce a mathematical model and an algorithmic framework for inferring the fitness landscapes of heterogeneous populations from genomic data. The framework is based on a maximum likelihood approach whose objective is to estimate a vector of clone/strain fitnesses that best fits the observed tumor phylogeny, the observed population structure, and the dynamical system describing the evolution of the population as a branching process. We will discuss our approach to solving the problem by transforming the original continuous maximum likelihood problem into a discrete optimization problem, which can be viewed as a variant of a scheduling problem with precedence constraints and a non-linear cumulative cost function.
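
As a toy illustration of the fitness-estimation idea (the framework in the talk is far richer, incorporating the phylogeny and a scheduling-style discrete reformulation; all names and numbers below are hypothetical), one can fit clone fitnesses by maximum likelihood under simple exponential growth:

    # A toy sketch of fitness inference (assumed setup, not the talk's method):
    # clones grow exponentially at rates s_i, so expected frequencies at time t
    # are softmax(s * t); fit s by maximum likelihood to observed clone counts.
    import numpy as np
    from scipy.optimize import minimize

    times = np.array([1.0, 2.0, 3.0])                 # hypothetical sampling times
    counts = np.array([[60, 30, 10],                  # hypothetical clone counts
                       [45, 35, 20],
                       [25, 35, 40]])

    def neg_log_likelihood(s_free):
        s = np.concatenate(([0.0], s_free))           # pin clone 0's fitness at 0
        nll = 0.0
        for t, row in zip(times, counts):
            logits = s * t
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            nll -= np.sum(row * np.log(probs))        # multinomial log-likelihood
        return nll

    fit = minimize(neg_log_likelihood, x0=np.zeros(counts.shape[1] - 1))
    print("estimated relative fitnesses:", np.concatenate(([0.0], fit.x)))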

Wednesday, January 30, 2019 - 11:00 , Location: Skiles 005 , Andreas Handel , UGA , ahandel@uga.edu , Organizer: Howie Weiss

Vaccination is an effective method to protect against infectious diseases. An important consideration in any vaccine formulation is the inoculum dose, i.e., the amount of antigen or live attenuated pathogen that is used. Higher levels generally lead to better stimulation of the immune response but might cause more severe side effects and allow for less population coverage in the presence of vaccine shortages. Determining the optimal inoculum dose is thus an important component of rational vaccine design, and a combination of mathematical models with experimental data can help determine its impact. We designed mathematical models and fit them to data from influenza A virus (IAV) infection of mice and human parainfluenza virus (HPIV) infection of cotton rats at different inoculum doses. We used the models to predict the level of immune protection and morbidity for different inoculum doses and to explore what an optimal inoculum dose might be. We show how a framework that combines mathematical models with experimental data can be used to study the impact of inoculum dose on important outcomes such as immune protection and morbidity. We find that the impact of inoculum dose on immune protection and morbidity depends on the pathogen, and that protection and morbidity do not always increase with increasing inoculum dose. An intermediate inoculum dose can provide the best balance between immune protection and morbidity, though this depends on the specific weighting of protection and morbidity. Once vaccine design goals are specified, with required levels of protection and acceptable levels of morbidity, our proposed framework combining data and models can help in the rational design of vaccines and the determination of the optimal inoculum amount.
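
A minimal sketch of the kind of model-data pipeline described here, with assumed parameter values and without the talk's actual datasets: a target-cell-limited within-host infection model, the standard starting point for fitting viral titer data at different inoculum doses.

    # A minimal sketch (illustrative parameters, not fitted values) of a
    # target-cell-limited within-host model; V0 varies the inoculum dose.
    import numpy as np
    from scipy.integrate import solve_ivp

    def within_host(t, y, beta, delta, p, c):
        T, I, V = y                      # target cells, infected cells, virus
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        return [dT, dI, dV]

    params = (2.7e-5, 4.0, 1.2e-2, 3.0)  # beta, delta, p, c (illustrative only)
    for V0 in (1e0, 1e2, 1e4):           # inoculum doses
        sol = solve_ivp(within_host, (0, 10), [4e8, 0.0, V0], args=params,
                        dense_output=True)
        print(f"V0 = {V0:g}: peak viral load ~ {sol.y[2].max():.3g}")

Fitting these parameters to titer measurements at each dose, and mapping the resulting trajectories to protection and morbidity measures, is the step the talk's framework formalizes.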

Wednesday, January 31, 2018 - 11:00 , Location: Skiles 006 , Prof. Mansoor Haider , North Carolina State University, Department of Mathematics & Biomathematics , Organizer: Sung Ha Kang

Many biological soft tissues exhibit complex interactions between passive biophysical or biomechanical mechanisms, and active physiological responses. These interactions affect the ability of the tissue to remodel in order to maintain homeostasis, or govern alterations in tissue properties with aging or disease. In tissue engineering applications, such interactions also influence the relationship between design parameters and functional outcomes. In this talk, I will discuss two mathematical modeling problems in this general area. The first problem addresses biosynthesis and linking of articular cartilage extracellular matrix in cell-seeded scaffolds. A mixture approach is employed to, inherently, capture effects of evolving porosity in the tissue-engineered construct. We develop a hybrid model in which cells are represented, individually, as inclusions within a continuum reaction-diffusion model formulated on a representative domain. The second problem addresses structural remodeling of cardiovascular vessel walls in the presence of pulmonary hypertension (PH). As PH advances, the relative composition of collagen, elastin and smooth muscle cells in the cardiovascular network becomes altered. The ensuing wall stiffening increases blood pressure which, in turn, can induce further vessel wall remodeling. Yet, the manner in which these alterations occur is not well understood. I will discuss structural continuum mechanics models that incorporate PH-induced remodeling of the vessel wall into 1D fluid-structure models of pulmonary cardiovascular networks. A Holzapfel-Gasser-Ogden (HGO)-type hyperelastic constitutive law for combined bending, inflation, extension and torsion of a nonlinear elastic tube is employed. Specifically, we are interested in formulating new, nonlinear relations between blood pressure and vessel wall cross-sectional area that reflect structural alterations with advancing PH.
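
One concrete ingredient of such 1D fluid-structure network models is the "tube law" coupling transmural pressure to vessel cross-sectional area. The sketch below implements a generic nonlinear pressure-area relation whose stiffness parameter can be raised to mimic PH-induced wall stiffening; the functional form and constants are illustrative assumptions, not the speaker's HGO-based constitutive law.

    # A sketch of a classic thin-wall 1D haemodynamics tube law (illustrative
    # constants, not the HGO-based relation from the talk): the predicted
    # pressure p(A) stiffens as the elastic modulus E grows with remodeling.
    import numpy as np

    def tube_law_pressure(A, A0, E, h, nu=0.5, p_ext=0.0):
        """Transmural pressure at cross-sectional area A (reference area A0)."""
        beta = np.sqrt(np.pi) * h * E / (1.0 - nu**2)
        return p_ext + (beta / A0) * (np.sqrt(A) - np.sqrt(A0))

    A0 = np.pi * (0.5e-2) ** 2            # reference area, 1 cm diameter vessel
    A = 1.1 * A0                          # 10% distension
    for E in (1.0e5, 5.0e5):              # healthy vs. stiffened wall (Pa, assumed)
        print(f"E = {E:.1e} Pa -> p = {tube_law_pressure(A, A0, E, 1e-3):.1f} Pa")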

Wednesday, March 15, 2017 - 11:05 , Location: Skiles 006 , Max Alekseyev , George Washington University , maxal@gwu.edu , Organizer: Torin Greenwood

Genome median and genome halving are combinatorial optimization problems that aim to reconstruct ancestral genomes by minimizing the number of possible evolutionary events between the reconstructed genomes and the genomes of extant species. While these problems have been widely studied over the past decades, their known algorithmic solutions are either inefficient or produce biologically inadequate results. These shortcomings have recently been addressed by restricting the problems' solution space. We show that the restricted variants of the genome median and halving problems are, in fact, closely related and have a neat topological interpretation in terms of embedded graphs and polygon gluings. Hence we establish a somewhat unexpected link between comparative genomics and topology, and further demonstrate its advantages for solving genome median and halving problems in some particular cases. As a by-product, we also determine the cardinality of the genome halving solution space.
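
To fix ideas, a brute-force toy version of the median problem (my illustration, using unsigned permutations under breakpoint distance, a much simpler model than the rearrangement distances in the talk) shows why naive exact search does not scale and why restricted, structured solution spaces matter:

    # A brute-force toy of the genome median problem (unsigned permutations,
    # breakpoint distance; illustration only): find a genome minimizing total
    # distance to three given genomes by exhaustive search over permutations.
    from itertools import permutations

    def breakpoint_distance(g1, g2):
        adj2 = {frozenset(p) for p in zip(g2, g2[1:])}
        return sum(frozenset(p) not in adj2 for p in zip(g1, g1[1:]))

    genomes = [(0, 1, 2, 3, 4), (0, 2, 1, 3, 4), (0, 1, 3, 2, 4)]
    median = min(permutations(range(5)),
                 key=lambda g: sum(breakpoint_distance(g, x) for x in genomes))
    print(median, sum(breakpoint_distance(median, x) for x in genomes))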

Tuesday, October 18, 2016 - 11:05 , Location: Skiles 005 , Tandy Warnow , The University of Illinois at Urbana-Champaign , Organizer: Heather Smith

The estimation of phylogenetic trees from molecular sequences (e.g., DNA, RNA, or amino acid sequences) is a major step in many biological research studies, and it is typically approached using heuristics for NP-hard optimization problems. In this talk, I will describe a new approach for computing large trees: constrained exact optimization. In constrained exact optimization, we implicitly constrain the search space by providing a set X of allowed bipartitions on the species set, and then use dynamic programming to find a globally optimal solution within that constrained space. For many optimization problems, these dynamic programming algorithms run in time polynomial in the input size. Simulation studies show that constrained exact optimization provides highly accurate estimates of the true species tree, and analyses of both biological and simulated datasets show that it efficiently provides improved solutions to the optimization criteria. We end with some discussion of future research on this topic. (Refreshments will be served before the talk at 10:30.)
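
The shape of the constrained dynamic program can be sketched compactly. The toy below (stated in terms of rooted clades for simplicity, with a placeholder cost; an assumed interface, not the speaker's algorithm or criteria) scores each allowed clade by its best split into two smaller allowed clades, so the optimum over the constrained space falls out of a memoized recursion rather than a heuristic search.

    # A toy sketch of constrained exact optimization (assumed cost model, not
    # the speaker's implementation): each allowed clade is scored by its best
    # split into two allowed child clades, computed by memoized recursion.
    from functools import lru_cache

    taxa = frozenset("ABCD")
    allowed = {frozenset(s) for s in
               ["A", "B", "C", "D", "AB", "CD", "ABC", "ABCD"]}

    def split_cost(left, right):
        return 1.0                        # placeholder: plug in the real criterion

    @lru_cache(maxsize=None)
    def best_score(clade):
        if len(clade) == 1:
            return 0.0
        best = float("inf")
        for left in allowed:
            right = clade - left
            if left < clade and right in allowed:
                best = min(best, best_score(left) + best_score(right)
                           + split_cost(left, right))
        return best

    print("optimal constrained score:", best_score(taxa))
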
Wednesday, July 6, 2016 - 11:00 , Location: Skiles 005 , Bradford Taylor , School of Biology, Georgia Tech , Organizer: Christine Heitsch

When a disease outbreak occurs, mathematical models are used to estimate the potential severity of the epidemic. The average number of secondary infections resulting from the initial infection, known as the reproduction number R_0, quantifies this severity. R_0 is estimated from the models by leveraging observed case data and an understanding of the disease's epidemiology. However, the leveraged data are not perfect. How confident should we be about measurements of R_0 given noisy data? I begin my talk by introducing techniques used to model epidemics, and I show how to adapt standard models to specific diseases, using the 2014-2015 Ebola outbreak in West Africa as an example throughout the talk. Next, I introduce the inverse problem: given real data tracking the infected population, how does one estimate the severity of the outbreak? Through a novel method, I show how to account both for the inherent noise arising from discrete interactions between individuals (demographic stochasticity) and for uncertainty in epidemiological parameters. By applying this method, I argue that the first estimates of R_0 during the Ebola outbreak were overconfident because demographic stochasticity was ignored. This talk will be accessible to undergraduates.
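
Demographic stochasticity is easy to see in simulation. A minimal sketch (the textbook stochastic SIR model via Gillespie's algorithm, not the speaker's Ebola model) shows how identical parameters, hence the same underlying R_0, produce very different epidemics when events occur one individual at a time:

    # A minimal Gillespie-style simulation of a stochastic SIR epidemic
    # (textbook model, not the speaker's): with identical parameters, the
    # individual-level randomness yields a wide spread of final epidemic sizes.
    import random

    def gillespie_sir(N=1000, I0=5, R0=1.5, gamma=0.1, rng=None):
        rng = rng or random.Random()
        beta = R0 * gamma / N            # per-pair transmission rate
        S, I = N - I0, I0
        while I > 0:
            infect, recover = beta * S * I, gamma * I
            if rng.random() < infect / (infect + recover):
                S, I = S - 1, I + 1      # transmission event
            else:
                I -= 1                   # recovery event
        return N - S                     # final epidemic size

    sizes = [gillespie_sir(rng=random.Random(seed)) for seed in range(20)]
    print(sorted(sizes))   # ranges from quick die-out to a major outbreak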

Wednesday, June 29, 2016 - 11:00 , Location: Skiles 005 , Elena Dimitrova , Clemson University , Organizer: Christine Heitsch

Progress in systems biology relies on the use of mathematical and statistical models for system-level studies of biological processes. This talk will focus on discrete models of gene regulatory networks and the challenges they present, in particular data selection and model stability. Careful data selection is important for model identification since the process is sensitive to the amount and type of data used as input. We will discuss a criterion for deciding when a set of data points identifies an algebraic model with special minimality properties. Stability is another important requirement for models of gene regulatory networks. Canalizing functions, a particular class of Boolean functions, show stable dynamic behavior and are thus suitable for expressing gene regulatory relationships. However, in practice, relaxing the canalizing requirement on some variables is appropriate. We will present the class of partially nested canalizing functions, together with some of their properties and applications.
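
To make canalization concrete: a Boolean function is canalizing in a variable if some value of that variable alone forces the output. A small check of this property (my example, not one from the talk):

    # A small illustration of canalization (example mine, not from the talk):
    # f(x1, x2, x3) = x1 and (x2 or not x3) is canalizing in x1, since x1 = 0
    # forces f = 0; fixing x2 or x3 alone never forces the output.
    from itertools import product

    def f(x1, x2, x3):
        return x1 and (x2 or (not x3))

    def is_canalizing_in(func, var_index, n=3):
        """Check whether some value of one input forces the output of func."""
        for canal_value in (0, 1):
            outputs = {func(*inputs) for inputs in product((0, 1), repeat=n)
                       if inputs[var_index] == canal_value}
            if len(outputs) == 1:
                return True
        return False

    for i in range(3):
        print(f"canalizing in x{i + 1}: {is_canalizing_in(f, i)}")
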
Wednesday, June 22, 2016 - 11:00 , Location: Skiles 005 , Emily Rogers , Georgia Tech , Organizer: Christine Heitsch

Although DNA forensic evidence is widely considered objective and infallible, a great deal of subjectivity and bias can still exist in its interpretation, especially concerning mixtures of DNA. The exact degree of variability across labs, however, is unknown, as DNA forensic examiners are primarily trained in-house, with protocols and quality control up to the discretion of each forensic laboratory. This talk uncovers the current state of forensic DNA mixture interpretation by analyzing the results of a groundbreaking DNA mixture interpretation study initiated by the Department of Defense's Defense Forensic Science Center (DFSC) in the summer of 2014. This talk will be accessible to undergraduates.
Thursday, June 16, 2016 - 11:00 , Location: Skiles 005 , Lenore Cowen , Tufts University , Organizer: Christine Heitsch

In protein-protein interaction (PPI) networks, or more general protein-protein association networks, functional similarity is often inferred based on some notion of proximity among proteins in a local neighborhood. In prior work, we introduced diffusion state distance (DSD), a new metric based on a graph diffusion property, designed to capture more fine-grained notions of similarity from the neighborhood structure; we showed that DSD can improve the accuracy of network-based function-prediction algorithms. Boehnlein, Chin, Sinha and Liu have recently shown that a variant of the DSD metric has deep connections to Green's function, the normalized Laplacian, and the heat kernel of the graph.
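
As a rough sketch of the definition (a finite-step approximation on a toy graph rather than a real PPI network): let He_k(u) be the vector of expected visit counts to each node by a k-step random walk started at u; then DSD(u, v) is the L1 distance between He_k(u) and He_k(v).

    # A rough sketch of diffusion state distance (finite-k approximation, toy
    # graph): He_k(u) counts expected visits to each node by a k-step random
    # walk from u, and DSD(u, v) = || He_k(u) - He_k(v) ||_1.
    import numpy as np

    A = np.array([[0, 1, 1, 0],           # toy 4-node adjacency matrix; edge
                  [1, 0, 1, 1],           # weights here could instead encode
                  [1, 1, 0, 1],           # confidence in each interaction
                  [0, 1, 1, 0]], dtype=float)

    P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

    def dsd_matrix(P, k=50):
        n = len(P)
        visits = np.eye(n)                # He_0: the walk starts at its source
        step = np.eye(n)
        for _ in range(k):
            step = step @ P
            visits += step                # accumulate expected visit counts
        # pairwise L1 distances between rows of the visit-count matrix
        return np.abs(visits[:, None, :] - visits[None, :, :]).sum(axis=-1)

    print(np.round(dsd_matrix(P), 3))

Reweighting the rows of A (and hence the walk probabilities) is what allows the confidence weights, pathway knowledge, and expression-based weights described below.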

Because DSD is based on random walks, changing the probabilities of the underlying random walk gives a natural way to incorporate experimental error and noise (allowing us to place confidence weights on edges), incorporate biological knowledge in terms of known biological pathways, or weight subnetwork importance based on tissue-specific expression levels, or known disease processes. Our framework provides a mathematically natural way to integrate heterogeneous network data sources for classical function prediction and disease gene prioritization problems.

This is joint work with Mengfei Cao, Hao Zhang, Jisoo Park, Noah Daniels, Mark Crovella and Ben Hescott.
