The limits of control: An information-theoretic viewpoint

Max Raginsky

Maxim Raginsky received the B.S. and M.S. degrees in 2000 and the Ph.D. degree in 2002 from Northwestern University, Evanston, IL, all in electrical engineering. He has held research positions with Northwestern, the University of Illinois at Urbana-Champaign (where he was a Beckman Foundation Fellow from 2004 to 2007), and Duke University. In 2012, he returned to UIUC, where he is currently an Assistant Professor with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory. In 2013, Prof. Raginsky received a Faculty Early Career Development (CAREER) Award from the National Science Foundation. His research interests lie at the intersection of information theory, machine learning, and control.


Abstract:

Adaptive dynamical systems arise in a multitude of contexts, e.g., optimization, control, communications, signal processing, and machine learning. A precise characterization of their fundamental limitations is therefore of paramount importance. In this talk, I consider the general problem of adaptively controlling and/or identifying a stochastic dynamical system, where a priori knowledge allows us to place the system in a subset of a metric space (the uncertainty set). I will present an information-theoretic meta-theorem that captures the trade-off between the metric complexity (or richness) of the uncertainty set, the amount of information acquired online in the process of controlling and observing the system, and the residual uncertainty remaining after the observations have been collected. Following the approach of G. Zames, I quantify a priori information by the Kolmogorov (metric) entropy of the uncertainty set, while the information acquired online is expressed as a sum of information divergences. I will then use the meta-theorem to derive new minimax lower bounds on the metric identification error, as well as to give a simple derivation of the minimum time needed to stabilize an uncertain stochastic linear system.
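As a concrete illustration of the Kolmogorov (metric) entropy that the abstract uses to quantify a priori information, here is a minimal sketch (not part of the talk itself) for the simplest possible uncertainty set, an interval of the real line. The entropy is the logarithm of the minimum number of ε-balls needed to cover the set; for a one-dimensional interval this covering number is known in closed form, and the entropy grows like log(1/ε) as the resolution ε shrinks.

```python
import math

def covering_number(length: float, eps: float) -> int:
    """Minimum number of eps-balls (intervals of radius eps) needed to
    cover an interval of the given length: N(eps) = ceil(length / (2*eps))."""
    return math.ceil(length / (2 * eps))

def metric_entropy(length: float, eps: float) -> float:
    """Kolmogorov (metric) entropy: the log of the covering number."""
    return math.log(covering_number(length, eps))

# For the unit interval, halving eps adds roughly log(2) to the entropy,
# reflecting the log(1/eps) growth typical of a one-dimensional set.
for eps in (0.1, 0.01, 0.001):
    print(eps, covering_number(1.0, eps), round(metric_entropy(1.0, eps), 3))
```

For richer uncertainty sets (e.g., balls of dynamical systems in a metric on input-output behavior, as in Zames's work), the covering number must be bounded rather than computed exactly, but the role of the entropy in the trade-off described above is the same.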
