
Monday, April 12, 2010 - 13:00 ,
Location: Skiles 255 ,
Samuel Isaacson ,
Boston University Mathematics Dept. ,
Organizer:

We will give an overview of our recent work investigating the influence of incorporating cellular substructure into stochastic reaction-diffusion models of gene regulation and expression. Extensions to the reaction-diffusion master equation that incorporate effects due to the chromatin fiber matrix are introduced. These new mathematical models are then used to study the role of nuclear substructure in the motion of individual proteins and mRNAs within nuclei. We show for certain distributions of binding sites that volume exclusion due to chromatin may reduce the time needed for a regulatory protein to locate a binding site.
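For readers unfamiliar with the framework: the reaction-diffusion master equation treats space as a lattice of compartments, with diffusion modeled as molecules hopping between neighboring compartments at exponentially distributed times. A minimal Gillespie-type simulation of pure diffusion on a one-dimensional chain (an illustrative sketch with arbitrary rates, not the speaker's model) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_comp, d_rate = 10, 1.0          # number of compartments; hop rate per molecule
counts = np.zeros(n_comp, dtype=int)
counts[0] = 50                    # all molecules start in compartment 0

t, t_end = 0.0, 50.0
while t < t_end:
    # propensities: each molecule hops to an existing left/right neighbor
    left = d_rate * counts * (np.arange(n_comp) > 0)
    right = d_rate * counts * (np.arange(n_comp) < n_comp - 1)
    total = left.sum() + right.sum()
    t += rng.exponential(1.0 / total)        # time to the next hop event
    props = np.concatenate([left, right])
    i = rng.choice(2 * n_comp, p=props / total)  # pick one event by propensity
    src = i % n_comp
    dst = src - 1 if i < n_comp else src + 1
    counts[src] -= 1
    counts[dst] += 1
```

The speaker's models add reaction channels and chromatin-induced volume exclusion on top of this hopping skeleton.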


Monday, April 5, 2010 - 13:00 ,
Location: Skiles 255 ,
Jianfeng Cai ,
Dep. of Math. UCLA ,
Organizer: Haomin Zhou

A tight frame is a generalization of an orthonormal basis. It inherits most of the good properties of an orthonormal basis but, owing to its redundancy, is more robust for representing signals of interest. One can construct tight frame systems under which signals of interest have sparse representations; such tight frames include translation-invariant wavelets, framelets, and curvelets. The sparsity of a signal under a tight frame system has three different formulations: the analysis-based sparsity, the synthesis-based one, and the balanced one between them. In this talk, we discuss Bregman algorithms for finding signals that are sparse under tight frame systems in the above three formulations. Applications of our algorithms include image inpainting, deblurring, blind deconvolution, and cartoon-texture decomposition. Finally, we apply the linearized Bregman algorithm, one of the Bregman algorithms, to the problem of matrix completion, where we want to recover a low-rank matrix from an incomplete set of its entries. We view the low-rank matrix as a sparse vector under an adaptive linear transformation that depends on its singular vectors, which leads to a singular value thresholding (SVT) algorithm.
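To make the last step concrete, here is a minimal sketch of the SVT iteration for matrix completion (a standard formulation of the algorithm; the step size, threshold, and test matrix below are arbitrary illustrative choices, not the speaker's settings):

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, delta=1.0, n_iter=300):
    """Singular value thresholding: alternate a soft-threshold of the
    singular values with a gradient step on the observed entries."""
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y += delta * mask * (M - X)               # push toward observed data
    return X

# rank-2 test matrix with roughly 60% of the entries observed
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(A.shape) < 0.6
A_hat = svt_complete(A * mask, mask)
```

The soft-thresholding step is exactly the proximal map of the nuclear norm, which is where the "sparse vector of singular values" viewpoint in the abstract enters.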

Friday, April 2, 2010 - 13:00 ,
Location: Skiles 269 ,
Sookkyung Lim ,
Department of Mathematical Sciences, University of Cincinnati ,
Organizer: Sung Ha Kang

We investigate the effects of electrostatic and steric repulsion on the dynamics of a pre-twisted charged elastic rod, representing a DNA molecule, immersed in a viscous incompressible fluid. Equations of motion of the rod, which include the fluid-structure interaction, rod elasticity, and electrostatic interaction, are solved by the generalized immersed boundary method. Electrostatic interaction is treated using a modified Debye-Hückel repulsive force, in which the electrostatic force depends on the salt concentration and the distance between base pairs, together with a close-range steric repulsion force to prevent self-penetration. After perturbation, a pre-twisted DNA circle collapses into a compact supercoiled configuration. The collapse proceeds along a complex trajectory that may pass near several equilibrium configurations of saddle type before it settles in a locally stable equilibrium. We find that both the final configuration and the transition path are sensitive to the initial excess link, the ionic strength of the solvent, and the initial perturbation.
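For intuition about the screening mentioned above: a Debye-Hückel interaction replaces the bare Coulomb potential with a screened one, exp(-kappa*r)/r, where the inverse screening length kappa grows with salt concentration. A small sketch of the resulting repulsive force magnitude (illustrative units and parameters, not the modified force used in the talk):

```python
import numpy as np

def debye_huckel_force(r, q2_eps=1.0, kappa=2.0):
    """Magnitude of the screened-Coulomb repulsion
    -d/dr [q2_eps * exp(-kappa*r) / r]; larger kappa (more salt)
    means stronger screening and faster decay with distance."""
    return q2_eps * np.exp(-kappa * r) * (1.0 / r**2 + kappa / r)

r = np.linspace(0.1, 5.0, 100)
f_low_salt = debye_huckel_force(r, kappa=0.5)
f_high_salt = debye_huckel_force(r, kappa=5.0)
```

This decay with salt concentration is why the final supercoiled shapes in the talk depend on the ionic strength of the solvent.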

Monday, March 29, 2010 - 13:00 ,
Location: Skiles 255 ,
Luca Gerardo Giorda ,
Dep. of Mathematics and Computer Science, Emory University ,
Organizer: Sung Ha Kang

Schwarz algorithms have experienced a second youth over the last decades, as distributed computers became more and more powerful and available. In the classical Schwarz algorithm the computational domain is divided into subdomains, and Dirichlet continuity is enforced on the interfaces between subdomains. Fundamental convergence results for the classical Schwarz methods have been derived for many partial differential equations; within this framework the overlap between subdomains is essential for convergence. More recently, Optimized Schwarz Methods have been developed: based on more effective transmission conditions than the classical Dirichlet conditions at the interfaces between subdomains, such algorithms can be used both with and without overlap, and they show greatly enhanced performance compared to the classical Schwarz method. I will present a survey of Optimized Schwarz Methods for the numerical approximation of partial differential equations, focusing mainly on heterogeneous convection-diffusion and electromagnetic problems.
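As a reference point for the classical method, here is a minimal alternating Schwarz iteration for -u'' = 1 on (0, 1) with two overlapping subdomains (a textbook sketch with an arbitrary grid and overlap, not taken from the talk):

```python
import numpy as np

def dirichlet_solve(n, h, ua, ub, f=1.0):
    """Solve -u'' = f on n interior points with boundary values ua, ub."""
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = np.full(n, f)
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    return np.linalg.solve(A, rhs)

N, h = 101, 1.0 / 100          # grid on [0, 1] with u(0) = u(1) = 0
x = np.linspace(0.0, 1.0, N)
u = np.zeros(N)
il, ir = 60, 40                # subdomains [0, 0.6] and [0.4, 1] overlap
for _ in range(20):            # alternating Schwarz sweeps
    u[1:il] = dirichlet_solve(il - 1, h, 0.0, u[il])              # left solve
    u[ir + 1:N - 1] = dirichlet_solve(N - ir - 2, h, u[ir], 0.0)  # right solve
exact = x * (1.0 - x) / 2.0    # exact solution of -u'' = 1, u(0) = u(1) = 0
```

Shrinking the overlap (il approaching ir) slows this Dirichlet exchange down, which is precisely what motivates the optimized transmission conditions surveyed in the talk.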

Monday, March 15, 2010 - 13:00 ,
Location: Skiles 255 ,
Maria Cameron ,
Courant Institute, NYU ,
Organizer:

The overdamped Langevin equation is often used as a model in molecular dynamics. At low temperatures, a system evolving according to such an SDE spends most of the time near the potential minima and performs rare transitions between them. A number of methods have been developed to study the most likely transition paths. I will focus on one of them: the MaxFlux functional. The MaxFlux functional has been around for almost thirty years but has not been widely used because it is challenging to minimize. Its minimizer provides a path along which the reactive flux is maximal at a given finite temperature. I will show two ways to derive it in the framework of transition path theory: the lower bound approach and the geometrical approach. I will present an efficient way to minimize the MaxFlux functional numerically. I will demonstrate its application to the problem of finding the most likely transition paths in the Lennard-Jones-38 cluster between the face-centered-cubic and icosahedral structures.
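For readers new to the setting, the overdamped Langevin equation is dX = -grad V(X) dt + sqrt(2/beta) dW, and at low temperature the path lingers near minima of V. A minimal Euler-Maruyama simulation in a one-dimensional double well (illustrative parameters, unrelated to the LJ-38 example in the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
beta, dt, n_steps = 5.0, 1e-3, 200_000     # inverse temperature, time step
grad_V = lambda x: 4.0 * x * (x**2 - 1.0)  # V(x) = (x^2 - 1)^2, minima at +/-1

x = -1.0
traj = np.empty(n_steps)
for k in range(n_steps):
    # Euler-Maruyama step for dX = -V'(X) dt + sqrt(2/beta) dW
    x += -grad_V(x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
    traj[k] = x
```

At beta = 5 the trajectory hovers near one well and only rarely crosses the barrier at x = 0; transition-path methods such as MaxFlux characterize those rare crossings without relying on brute-force simulation.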

Monday, March 8, 2010 - 13:00 ,
Location: Skiles 255 ,
Chun Liu ,
Penn State/IMA ,
Organizer:

Almost all models for complex fluids can be fitted into the energetic variational framework. The advantage of the approach is that it reveals and focuses on the competition between the kinetic energy and the internal "elastic" energies. In this talk, I will discuss two very different engineering problems: free interface motion in Newtonian fluids and viscoelastic materials. We will illustrate the underlying connections between the problems and their distinct properties. Moreover, I will present analytical results concerning the existence of near-equilibrium solutions of these problems.

Monday, March 1, 2010 - 13:00 ,
Location: Skiles 255 ,
James G. Nagy ,
Mathematics and Computer Science, Emory University ,
Organizer: Sung Ha Kang

Large-scale inverse problems arise in a variety of important applications in image processing, and efficient regularization methods are needed to compute meaningful solutions. Much progress has been made in the field of large-scale inverse problems, but many challenges still remain for future research. In this talk we describe three common mathematical models: a linear model, a separable nonlinear model, and a general nonlinear model. Techniques for regularization and large-scale implementations are considered, with particular focus given to algorithms and computations that can exploit structure in the problem. Examples will illustrate the properties of these algorithms.
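As a concrete instance of the linear model, classical Tikhonov regularization replaces min ||Ax - b||^2 with min ||Ax - b||^2 + lam^2 ||x||^2; a small sketch on a made-up Gaussian blur matrix (illustrative only, not an example from the talk):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# ill-conditioned blur-like system with noisy data
rng = np.random.default_rng(0)
n = 50
t = np.arange(n)
A = np.exp(-0.1 * (t[:, None] - t[None, :])**2)   # Gaussian blur matrix
x_true = np.sin(2 * np.pi * t / n)                # smooth "image" to recover
b = A @ x_true + 1e-3 * rng.standard_normal(n)    # blurred, noisy data
x_naive = np.linalg.solve(A, b)       # unregularized: noise blows up
x_reg = tikhonov(A, b, lam=1e-2)      # regularized: stable reconstruction
```

The unregularized solve amplifies the noise through the small singular values of A; the lam^2 I term caps that amplification, which is the role regularization plays in all three models of the talk.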

Monday, February 22, 2010 - 13:00 ,
Location: Skiles 255 ,
Haesun Park ,
CSE, Georgia Institute of Technology ,
Organizer: Sung Ha Kang

Nonnegative Matrix Factorization (NMF) has attracted much attention during the past decade as a dimension reduction method in machine learning and data analysis. NMF provides a lower-rank approximation of a nonnegative high-dimensional matrix by factors whose elements are also nonnegative. Numerous success stories have been reported in application areas including text clustering, computer vision, and cancer class discovery.


In this talk, we present novel algorithms for NMF and NTF (nonnegative tensor factorization) based on the alternating nonnegativity-constrained least squares (ANLS) framework. Our new algorithm for NMF is built upon the block principal pivoting method for the nonnegativity-constrained least squares problem, which overcomes some limitations of the classical active set method. The proposed NMF algorithm can naturally be extended to obtain a highly efficient NTF algorithm for the PARAFAC (PARAllel FACtor) model. Our algorithms inherit the convergence theory of the ANLS framework and can easily be extended to other NMF formulations such as sparse NMF and NTF with L1-norm constraints. Comparisons of algorithms using various data sets show that the proposed new algorithms outperform existing ones in computational speed as well as solution quality.

This is joint work with Jingu Kim and Krishnakumar Balabusramanian.
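A minimal version of the ANLS framework can be sketched with SciPy's classical active-set NNLS solver standing in for the block principal pivoting solver of the talk (illustrative sizes and data; solving column by column like this is simple but far slower than the speaker's method):

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(A, k, n_iter=50, seed=0):
    """Alternating nonnegativity-constrained least squares for A ~ W H.
    Each subproblem is solved column by column with an NNLS solver."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # fix W, solve min ||W h_j - a_j|| with h_j >= 0 for each column
        H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
        # fix H, solve the symmetric problem for the rows of W
        W = np.column_stack([nnls(H.T, A[i, :])[0] for i in range(m)]).T
    return W, H

# rank-3 nonnegative test matrix
rng = np.random.default_rng(1)
A = rng.random((30, 3)) @ rng.random((3, 20))
W, H = anls_nmf(A, k=3)
```

Each subproblem is a convex NNLS problem, which is what gives the ANLS framework its convergence theory; the block principal pivoting method accelerates exactly this inner solve.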

Monday, February 15, 2010 - 13:00 ,
Location: Skiles 255 ,
Lek-Heng Lim ,
UC Berkeley ,
Organizer: Haomin Zhou

Numerical linear algebra is often regarded as a workhorse of scientific and engineering computing. Computational problems arising from optimization, partial differential equations, statistical estimation, etc., are usually reduced to one or more standard problems involving matrices: linear systems, least squares, eigenvectors/singular vectors, low-rank approximation, matrix nearness, etc. The idea of developing numerical algorithms for multilinear algebra is naturally appealing -- if similar problems for tensors of higher order (represented as hypermatrices) may be solved effectively, then one would have substantially enlarged the arsenal of fundamental tools in numerical computations.

We will see that higher order tensors are indeed ubiquitous in applications; for multivariate or non-Gaussian phenomena, they are usually inevitable. However, the path from linear to multilinear is not straightforward. We will discuss the theoretical and computational difficulties as well as ways to avoid these, drawing insights from a variety of subjects ranging from algebraic geometry to compressed sensing. We will illustrate the utility of such techniques with our work in cancer metabolomics, EEG and fMRI neuroimaging, financial modeling, and multiarray signal processing.
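One concrete multilinear analogue of matrix rank is the multilinear rank: the tuple of ranks of the mode unfoldings of a hypermatrix. A small sketch of the construction (a standard definition, not an example from the talk):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the mode-`mode` fibers become matrix columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multilinear_rank(T):
    """Tuple of the ranks of all mode unfoldings."""
    return tuple(np.linalg.matrix_rank(unfold(T, m)) for m in range(T.ndim))

# a 5 x 6 x 7 tensor built from a 2 x 2 x 2 core and three factor matrices
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 2, 2))
U, V, W = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
T = np.einsum('abc,ia,jb,kc->ijk', core, U, V, W)
```

Unlike the matrix case, the unfolding ranks of a tensor need not agree, and the best low-rank tensor approximation problem can be ill-posed, one of the difficulties on the path from linear to multilinear alluded to above.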


Monday, February 1, 2010 - 13:00 ,
Location: Skiles 255 ,
Manu O. Platt ,
Biomedical Engineering (BME), Georgia Tech ,
Organizer:

Tissue remodeling involves the activation of proteases, enzymes capable of degrading the structural proteins of tissues and organs. The implications of the activation of these enzymes span all organ systems and, therefore, many different disease pathologies, including cancer metastasis. This occurs when local proteolysis of the structural extracellular matrix allows malignant cells to break free from the primary tumor and spread to other tissues. Mathematical models add value to this experimental system by explaining phenomena difficult to test at the wet lab bench and by making sense of complex interactions among the proteases or the intracellular signaling changes leading to their expression. The papain family of cysteine proteases, the cathepsins, is an understudied class of powerful collagenases and elastases implicated in extracellular matrix degradation; they are secreted by macrophages and cancer cells and have been shown to be active in the slightly acidic tumor microenvironment. Due to the tight regulatory mechanisms of cathepsin activity and their instability outside of those defined spaces, the active enzyme is difficult to quantify precisely, and therefore challenging to target therapeutically.

Using valid assumptions that consider these complex interactions, we are developing and validating a system of ordinary differential equations to calculate the concentrations of mature, active cathepsins in biological spaces. The system of reactions considers four enzymes (cathepsins B, K, L, and S, the most studied cathepsins with reaction rates available), three substrates (collagen IV, collagen I, and elastin), and one inhibitor (cystatin C), and comprises more than 30 differential equations with over 50 specified rate constants. Along with the mathematical model development, we have been developing new ways to quantify proteolytic activity to provide further inputs. This predictive model will be a useful tool in identifying the time scale and culprits of proteolytic breakdown leading to cancer metastasis and angiogenesis in malignant tumors.
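The building block of such a system is mass-action kinetics for a single enzyme-substrate pair, E + S <-> C -> E + P; a toy forward-Euler sketch (made-up rate constants and concentrations; the actual model couples four cathepsins, three substrates, and cystatin C):

```python
import numpy as np

def rhs(y, k_on=1e-2, k_off=1e-3, k_cat=5e-3):
    """Mass-action rates for E + S <-> C -> E + P (binding and catalysis)."""
    E, S, C, P = y
    bind = k_on * E * S - k_off * C   # net formation of the complex C
    cat = k_cat * C                   # catalytic turnover of C into E + P
    return np.array([-bind + cat, -bind, bind - cat, cat])

y = np.array([1.0, 10.0, 0.0, 0.0])  # initial [E], [S], [C], [P]
dt = 0.5
for _ in range(40_000):              # crude forward-Euler run to t = 20000
    y = y + dt * rhs(y)
```

Total enzyme (E + C) and total substrate (S + C + P) are conserved by construction; replicating this pattern across four enzymes, three substrates, and an inhibitor yields the 30-plus coupled equations described above.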