Understanding & Generalising the Convolution and Scalable Bayesian Inference

Date: Monday, June 18, 2018 - 18:30
Source: London Machine Learning
Attendees: 210
City: London

VERY IMPORTANT INFORMATION: The venue has changed to 1 Angel Ln, London EC4R 3AB.

We have a great new venue (very close to the old one) - please note that Photo ID will be required.

Agenda:
- 18:30: Doors open, pizza, beer, networking
- 19:00: First talk
- 20:00: Break & networking
- 20:15: Second talk
- 21:30: Close

*Sponsors*
Man AHL: At Man AHL, we mix machine learning, computer science and engineering with terabytes of data to invest billions of dollars every day.
Evolution AI: Build a state-of-the-art NLP pipeline in seconds.

*Understanding and Generalising the Convolution (Daniel Worrall)*
Abstract: Classifying cat vs. dog images should not be affected by translation, rotation, or scaling of the animal. But off-the-shelf convolutional neural networks (CNNs) can only deal with translation. To solve the problem of rotation and scaling invariance, we first need to ask: “Why can CNNs cope with translation in the first place?” The answer is that they use convolutions. Just what is it about convolution that makes CNNs so useful for dealing with translation? And how can we leverage these insights to build tomorrow’s state-of-the-art models for other transformations? It turns out that convolutions and their generalisations are the unique operations that must appear in any model whose task is invariant or symmetric under data transformations. In this talk, I will introduce some light theory and insight into these questions, present some of the models my lab and I have developed, and outline potential avenues of future research.
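
As a rough illustration of the translation property the abstract alludes to (this is not material from the talk itself), the following NumPy sketch checks that a toy 1-D circular convolution commutes with a circular shift: convolving a shifted signal gives the same result as shifting the convolved signal. The signal, filter, and shift amount are arbitrary stand-ins.

    import numpy as np

    def circular_conv(x, k):
        # Circular 1-D convolution: (x * k)[i] = sum_m x[m] * k[(i - m) mod N]
        n = len(x)
        return np.array([sum(x[m] * k[(i - m) % n] for m in range(n))
                         for i in range(n)])

    rng = np.random.default_rng(0)
    x = rng.normal(size=8)   # toy 1-D "image" (arbitrary values)
    k = rng.normal(size=8)   # toy filter (arbitrary values)
    shift = 3

    # Translation equivariance: convolve-then-shift equals shift-then-convolve.
    lhs = circular_conv(np.roll(x, shift), k)
    rhs = np.roll(circular_conv(x, k), shift)
    assert np.allclose(lhs, rhs)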

Bio: Daniel Worrall is a postdoctoral researcher working with Prof. Dr. Max Welling at the University of Amsterdam, in the Philips Laboratory. He is interested in equivariant neural networks, approximate Bayesian inference, uncertainty quantification, and medical imaging. He read Information Engineering at the University of Cambridge (BA, MEng) and Computer Vision at University College London (PhD), where he was briefly involved with Amnesty International’s Decoders Unit working on AI for Good.

*Scalable Bayesian Inference with Hamiltonian Monte Carlo (Michael Betancourt)*
Abstract: Despite the promise of big data, inferences are often limited not by sample size but rather by systematic effects. Only by carefully modeling these effects can we take full advantage of the data -- big data must be complemented with big models and the algorithms that can fit them. One such algorithm is Hamiltonian Monte Carlo, which exploits the inherent geometry of the posterior distribution to admit full Bayesian inference that scales to the complex models of practical interest. In this talk, I will discuss the conceptual foundations of Hamiltonian Monte Carlo, elucidating the geometric nature of its scalable performance and stressing the properties critical to a robust implementation.
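
For readers unfamiliar with the algorithm, here is a minimal, self-contained sketch of a Hamiltonian Monte Carlo transition with a leapfrog integrator and an identity mass matrix, applied to a toy 2-D Gaussian target. The step size, trajectory length, and target are illustrative choices only, not settings from the talk; robust implementations such as Stan adapt these quantities automatically.

    import numpy as np

    def hmc_sample(log_prob, log_prob_grad, q0,
                   n_samples=1000, step_size=0.1, n_leapfrog=20, seed=0):
        # Minimal HMC: identity mass matrix, fixed step size and trajectory length.
        rng = np.random.default_rng(seed)
        q = np.array(q0, dtype=float)
        samples = []
        for _ in range(n_samples):
            p = rng.normal(size=q.shape)              # resample momentum
            q_new, p_new = q.copy(), p.copy()
            # Leapfrog integration of the Hamiltonian dynamics
            p_new += 0.5 * step_size * log_prob_grad(q_new)
            for _ in range(n_leapfrog - 1):
                q_new += step_size * p_new
                p_new += step_size * log_prob_grad(q_new)
            q_new += step_size * p_new
            p_new += 0.5 * step_size * log_prob_grad(q_new)
            # Metropolis correction based on the change in total energy
            h_old = -log_prob(q) + 0.5 * p @ p
            h_new = -log_prob(q_new) + 0.5 * p_new @ p_new
            if rng.random() < np.exp(h_old - h_new):
                q = q_new
            samples.append(q.copy())
        return np.array(samples)

    # Toy target: standard 2-D Gaussian (arbitrary example).
    log_prob = lambda q: -0.5 * q @ q
    log_prob_grad = lambda q: -q
    draws = hmc_sample(log_prob, log_prob_grad, q0=[2.0, -2.0])
    print(draws.mean(axis=0), draws.std(axis=0))   # roughly [0, 0] and [1, 1]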

Bio: Michael Betancourt is the principal research scientist with Symplectomorphic, LLC, where he develops theoretical and methodological tools to support practical Bayesian inference. He is also a core developer of Stan, where he implements and tests these tools. In addition to hosting tutorials and workshops on Bayesian inference with Stan, he also collaborates on analyses in epidemiology, pharmacology, and physics, amongst others. Before moving into statistics, Michael earned a B.S. from the California Institute of Technology and a Ph.D. from the Massachusetts Institute of Technology, both in physics.
