- Series: Other Talks
- Time: Monday, April 29, 2013 - 3:05pm for 1 hour (actually 50 minutes)
- Location: Klaus 1116W
- Speaker: Maxim Raginsky – University of Illinois, Urbana-Champaign
- Organizer: Prasad Tetali
The problem of quantifying the amount of information loss due to a
random transformation (or a noisy channel) arises in a variety of
contexts, such as machine learning, stochastic simulation,
error-correcting codes, and computation in circuits with noisy gates, to
name just a few. This talk will focus on discrete channels, where both
the input and output sets are finite. The noisiness of a discrete
channel can be measured by comparing suitable functionals of the input
and output distributions. For instance, if we fix a reference input
distribution, then the worst-case ratio of the output relative entropy
(Kullback-Leibler divergence) to the input relative entropy, taken over
all other input distributions, is at most one by the data processing theorem.
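Written out in one standard notation (with W denoting the channel, Q the reference input distribution, and P an arbitrary other input; these symbols are a common convention rather than notation taken from the talk itself), the inequality and the associated contraction coefficient read

$$
D(PW \,\|\, QW) \;\le\; \eta_{\mathrm{KL}}(Q, W)\, D(P \,\|\, Q),
\qquad
\eta_{\mathrm{KL}}(Q, W) \;:=\; \sup_{P \neq Q} \frac{D(PW \,\|\, QW)}{D(P \,\|\, Q)} \;\le\; 1,
$$

where PW and QW are the output distributions obtained by passing P and Q through the channel W.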
However, for a fixed reference input distribution, this quantity may be
strictly smaller than one, giving so-called strong data processing
inequalities (SDPIs). I will show that the problem of determining both
the best constant in an SDPI and any input distributions that achieve it
can be addressed using logarithmic Sobolev inequalities, which relate
input relative entropy to certain measures of input-output correlation. I
will also show that the SDPI for Kullback-Leibler divergence arises as a
limiting case of a family of SDPIs for Rényi divergence, and discuss the
relationship to hypercontractivity of Markov operators.
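As a small numerical illustration of the contraction coefficient above, the sketch below estimates it for a binary symmetric channel with a uniform reference input by random search over input distributions. The channel, the crossover probability, and the function names are illustrative assumptions rather than objects from the talk; for this particular channel the estimate should come out close to (1 - 2δ)², the value known for the binary symmetric channel.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def sdpi_constant(W, Q, n_samples=50000, seed=0):
    """Estimate eta_KL(Q, W) = sup_{P != Q} D(PW || QW) / D(P || Q) by random
    search over input distributions P.  Rows of W are the channel's
    conditional output distributions."""
    rng = np.random.default_rng(seed)
    QW = Q @ W
    best = 0.0
    for _ in range(n_samples):
        P = rng.dirichlet(np.ones(len(Q)))   # random point on the simplex
        d_in = kl(P, Q)
        if d_in < 1e-12:                     # skip P essentially equal to Q
            continue
        best = max(best, kl(P @ W, QW) / d_in)
    return best

# Binary symmetric channel with crossover probability delta (illustrative values).
delta = 0.1
W = np.array([[1.0 - delta, delta],
              [delta, 1.0 - delta]])
Q = np.array([0.5, 0.5])                     # uniform reference input

print(sdpi_constant(W, Q))  # expected to be close to (1 - 2*delta)**2 = 0.64
```

A finer search or an optimization routine would sharpen the estimate; the point of the sketch is only to make the ratio in the SDPI definition concrete.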