Variational Inference and Learning for Autonomous Robots

Monday, April 24, 2017 - 18:30
London Machine Learning


- 18:30:  doors open, pizza, beer, networking

- 19:00: First talk

- 20:00: Break & networking

- 20:15: Second talk

- 21:30: Close

Data-Efficient Learning for Autonomous Robots - Marc Deisenroth

In this talk I will focus on machine learning methods for controlling autonomous robots, which pose an additional practical challenge: data efficiency. We need to be able to learn controllers from just a few experiments, since performing millions of experiments with robots is time-consuming and wears out the hardware. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, pre-shaped policies, or the underlying dynamics. In the first part of the talk, I follow a different approach and speed up learning by efficiently extracting information from sparse data. In particular, I propose to learn a probabilistic, non-parametric Gaussian process dynamics model. By explicitly incorporating model uncertainty into long-term planning and controller learning, my approach reduces the effects of model errors, a key problem in model-based learning.
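The core ingredient here, a Gaussian process model, can be illustrated with a minimal numpy sketch of GP regression (a toy one-step model, not the implementation from the talk; the kernel, hyperparameters and the sine "dynamics" are assumptions for illustration):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """GP posterior mean and variance at X_test, given noisy observations."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train
    cov = K_ss - K_s.T @ K_inv @ K_s
    return mean, np.diag(cov)

# Toy "dynamics": predict the next state as a function of the current state.
X = np.linspace(-3, 3, 20)
y = np.sin(X)
mu, var = gp_predict(X, y, np.array([0.5]))
```

The key point for data-efficient learning is that the model returns a predictive variance alongside the mean, so a planner can take model uncertainty into account instead of trusting point predictions.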

Compared to state-of-the-art reinforcement learning, our model-based policy search method achieves an unprecedented speed of learning, which makes it most promising for application to real systems. I demonstrate its applicability to autonomous learning from scratch on real robot and control tasks. In the second part of my talk, I will discuss an alternative method for learning controllers for bipedal locomotion based on Bayesian optimization, where it is hard to learn models of the underlying dynamics due to ground contacts. Using Bayesian optimization, we sidestep this modeling issue and directly optimize the controller parameters without the need to model the robot's dynamics.
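The second approach can be sketched as a simple Bayesian optimization loop: fit a GP surrogate to the controller parameters evaluated so far, then pick the next one by maximizing an acquisition function. This minimal numpy sketch uses an upper-confidence-bound acquisition on a hypothetical one-dimensional objective (all choices here are illustrative, not those from the talk):

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    # Squared-exponential kernel with unit variance.
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell**2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # GP posterior mean and variance at query points Xq.
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Ks = rbf(X, Xq)
    mu = Ks.T @ K_inv @ y
    var = np.clip(1.0 - np.sum(Ks * (K_inv @ Ks), axis=0), 1e-12, None)
    return mu, var

def objective(x):
    # Hypothetical black-box "controller return"; the optimum is at x = 2.
    return -(x - 2.0) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, 3)                # a few initial random evaluations
y = objective(X)
grid = np.linspace(0, 5, 200)           # candidate controller parameters
for _ in range(20):
    mu, var = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * np.sqrt(var)       # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]
    X, y = np.append(X, x_next), np.append(y, objective(x_next))

best = X[np.argmax(y)]                  # best parameter found so far
```

Because only the objective value (e.g. measured walking performance) is needed, no dynamics model of the robot is ever built, which is exactly the appeal for contact-rich tasks like bipedal locomotion.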

Bio: Marc Deisenroth is a Lecturer in Statistical Machine Learning in the Department of Computing at Imperial College London and with a Cambridge-based start-up. He was awarded an Imperial College Research Fellowship in 2014 and received Best Paper Awards at ICRA 2014 and ICCAS 2016. He is a recipient of a Google Faculty Research Award and a Microsoft Ph.D. Scholarship. Marc's research interests center around data-efficient machine learning methods with applications to autonomous decision making, personalized healthcare and autonomous robots.

Don't sample, Optimize: Why Variational inference? - Peadar Coyle

Probabilistic programming allows very flexible creation of custom probabilistic models and is mainly concerned with gaining insight and learning from your data. The approach is inherently Bayesian, so we can specify priors to inform and constrain our models and get uncertainty estimates in the form of a posterior distribution. Using MCMC sampling algorithms we can draw samples from this posterior to estimate these models very flexibly. PyMC3 and Stan are the current state-of-the-art tools for constructing and estimating these models.
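To make the sampling idea concrete, here is a hand-rolled Metropolis random-walk sampler for a toy conjugate model, written in plain numpy (a sketch of what such samplers do internally, not PyMC3's or Stan's actual implementation; the model and tuning choices are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(3.0, 1.0, size=50)     # toy data with unknown mean near 3

def log_post(theta):
    # Log posterior up to a constant: N(0, 10^2) prior, N(theta, 1) likelihood.
    return -0.5 * theta**2 / 100.0 - 0.5 * np.sum((data - theta) ** 2)

theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.5)              # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                 # accept; else keep current
    samples.append(theta)

post_mean = np.mean(samples[1000:])                  # discard burn-in
```

The samples approximate the posterior, so any posterior quantity (mean, intervals, predictive draws) falls out of simple summaries of them; libraries like PyMC3 and Stan use far more efficient gradient-based samplers such as NUTS.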

One major drawback of sampling, however, is that it is often very slow, especially for high-dimensional models. That is why, more recently, variational inference algorithms have been developed that are almost as flexible as MCMC but much faster. Instead of drawing samples from the posterior, these algorithms fit a distribution (e.g. a normal) to the posterior, turning a sampling problem into an optimization problem. ADVI -- Automatic Differentiation Variational Inference -- is implemented in PyMC3 and Stan, as well as in a new package called Edward, which is mainly concerned with variational inference.
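The core idea behind ADVI can be sketched by hand for a toy model: posit a Gaussian approximation q(theta) = N(m, s^2) and maximize the ELBO by stochastic gradient ascent with the reparameterization trick. This numpy sketch is illustrative only (ADVI itself automates the gradients and variable transforms; the model, learning rate and gradient clipping here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.0, size=50)     # toy data with unknown mean near 3
n, ybar = len(data), data.mean()

def dlogp(theta):
    # Gradient of the log joint: N(0, 10^2) prior + N(theta, 1) likelihood.
    return -theta / 100.0 + n * (ybar - theta)

# Variational family q(theta) = N(m, exp(log_s)^2). Maximize the ELBO by
# stochastic gradient ascent, reparameterizing theta = m + exp(log_s) * eps.
m, log_s, lr = 0.0, -1.0, 0.005
trace_m, trace_s = [], []
for _ in range(4000):
    eps = rng.normal()
    s = np.exp(log_s)
    g = dlogp(m + s * eps)
    g_m = np.clip(g, -50, 50)                    # clip gradients for stability
    g_ls = np.clip(g * s * eps + 1.0, -50, 50)   # +1 from q's entropy term
    m += lr * g_m
    log_s += lr * g_ls
    trace_m.append(m)
    trace_s.append(np.exp(log_s))

m_hat = np.mean(trace_m[-1500:])   # fitted posterior mean
s_hat = np.mean(trace_s[-1500:])   # fitted posterior std deviation
```

For this conjugate model the exact posterior is Gaussian with mean near the data mean and standard deviation 1/sqrt(n + 0.01), so the optimization recovers the true answer; the same recipe scales to models where no closed form exists, which is the point of ADVI.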

In this talk we'll apply these variational inference methods to regression and neural network problems, and explain their advantages for solving big-data problems in probabilistic programming. You'll leave with methods you can apply in your own work, and we'll showcase some of the new features in PyMC3 and Edward.

Bio: I'm a PyMC3 contributor, author and data scientist based in London. I currently work on applying machine learning techniques to recruitment problems for a recruitment platform. I'm interested in NLP, Deep Learning and Probabilistic Programming. My academic background includes a Masters in Mathematics and a Bachelors in Physics/Philosophy, and I've worked in Media/Telco and E-commerce.


AHL Riverbank House, 2 Swan Lane, EC4R 3AD