Mutual Information-based Gradient-Ascent Control for Distributed Robotics
Brian J. Julian
Computer Science and Artificial Intelligence Laboratory
Tuesday, April 30th, 2013
Abstract: In this talk I discuss the relevance, generality, validity, and scalability of a novel class of decentralized, gradient-based controllers. More specifically, these controllers use the analytical gradient of mutual information to distributively control multiple robots as they infer the state of an environment. Given simple robot dynamics and a few probabilistic assumptions, none of which require Gaussian noise, I prove that these controllers are convergent between sensor observations and, in their most general form, locally optimal. Additionally, by employing distributed approximation algorithms and non-parametric methods, I show that computational tractability is achieved even for large systems and environments. Throughout the talk I support this work with numerous simulations and hardware experiments concerning traditional robot applications such as mapping, exploration, and surveillance.
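The core idea behind such controllers can be sketched in a few lines: move a robot along the gradient of the mutual information between the environment state and its next observation. The toy below is purely illustrative and not the talk's method: it assumes a single robot on a 1-D workspace, a hypothetical range-dependent binary detector, and a finite-difference gradient in place of the analytical one discussed in the talk.

```python
import math

def mutual_information(prior, likelihood):
    """I(X; Z) in nats for a discrete state X with prior p(x) and
    observation model likelihood[x][z] = p(z | x)."""
    n_x, n_z = len(prior), len(likelihood[0])
    mi = 0.0
    for z in range(n_z):
        p_z = sum(prior[x] * likelihood[x][z] for x in range(n_x))
        for x in range(n_x):
            p_xz = prior[x] * likelihood[x][z]
            if p_xz > 0.0:
                mi += p_xz * math.log(p_xz / (prior[x] * p_z))
    return mi

def detection_model(robot_pos, cell_pos):
    """Hypothetical binary detector whose hit rate decays with range.
    Note: no Gaussian-noise assumption is needed anywhere."""
    d = robot_pos - cell_pos
    p_hit = math.exp(-0.5 * d * d)
    return [p_hit, 1.0 - p_hit]  # [p(z=hit | x), p(z=miss | x)]

def mi_at(robot_pos, prior, cells):
    """MI between the target's cell and one observation from robot_pos."""
    likelihood = [detection_model(robot_pos, c) for c in cells]
    return mutual_information(prior, likelihood)

def gradient_ascent(pos, prior, cells, step=0.1, eps=1e-4, iters=300):
    """Climb the (numerically estimated) MI gradient between observations;
    the iterate settles at a locally optimal sensing position."""
    for _ in range(iters):
        grad = (mi_at(pos + eps, prior, cells)
                - mi_at(pos - eps, prior, cells)) / (2.0 * eps)
        pos += step * grad
    return pos
```

For example, with prior belief concentrated on a distant cell, the ascent drives the robot toward that cell, where its observation is most informative; the talk's controllers generalize this pattern to many robots with decentralized, provably convergent updates.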
Brian J. Julian received the B.S. degree in mechanical and aerospace engineering from Cornell University, and the S.M. degree in electrical engineering and computer science from the Massachusetts Institute of Technology. He is currently a Ph.D. candidate in the Computer Science and Artificial Intelligence Laboratory at MIT. Since 2005, he has been a staff member in the Engineering Division at MIT Lincoln Laboratory, where he is currently Associate Staff in the Rapid Prototyping Group and a Lincoln Doctoral Scholar. Brian’s research in distributed multi-robot coordination applies tools from control, network, and optimization theory to develop scalable information-theoretic algorithms with provable performance characteristics.