- 18:30: doors open, pizza, beer, networking
- 19:00: First talk
- 20:00: Break & networking
- 20:15: Second talk
- 21:30: Close
• Homomorphically Encrypted A.I. on the Blockchain - Andrew Trask
In the first half of this talk, I'll introduce and describe Homomorphically Encrypted Deep Learning, an approach to training neural networks in an encrypted state (on unencrypted data) such that its growing intelligence is protected from theft. This description will include a breakdown of several major Homomorphic Encryption techniques as well as a "from scratch" demo in Python.
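The talk's own demo isn't reproduced here, but the core idea — arithmetic on encrypted values — can be sketched with a toy additively homomorphic scheme (Paillier, with demonstration-sized primes far too small for real security):

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic, meaning
# Enc(a) * Enc(b) mod n^2 decrypts to a + b. The tiny primes here
# only keep the arithmetic readable; real keys are >= 2048 bits.

def keygen(p=293, q=433):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1
    mu = pow(lam, -1, n)  # modular inverse (Python 3.8+)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
# Multiplying ciphertexts adds the plaintexts:
assert decrypt(priv, (c1 * c2) % (pub[0] ** 2)) == 100
```

A scheme like this lets weight updates be accumulated on encrypted model parameters without ever decrypting them, which is the property the training approach relies on.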
In the second half of this talk, I'll be discussing the significant impacts this technology has when combined with the recent advancements in Blockchain and Federated Learning technologies. To demonstrate, we'll train an Encrypted AI on a private Ethereum blockchain, offering a cryptobounty to all those willing to contribute valuable training data. We'll demonstrate how tying the bounty to the loss function of our deep learning engine allows reward to be distributed based on how relevant each contributor's dataset is to minimizing a target loss.
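The smart-contract mechanics are the subject of the talk itself, but the reward rule described above — payout proportional to how much each contributor's data reduces the target loss — can be illustrated with a hypothetical Python sketch (all names and figures invented):

```python
def distribute_bounty(bounty, baseline_loss, contributor_losses):
    """Split a bounty among contributors in proportion to how much
    each contributor's dataset reduced the model's loss below the
    pre-contribution baseline."""
    improvements = {who: max(baseline_loss - loss, 0.0)
                    for who, loss in contributor_losses.items()}
    total = sum(improvements.values())
    if total == 0:
        # No one improved the model: the bounty stays unclaimed.
        return {who: 0.0 for who in contributor_losses}
    return {who: bounty * imp / total for who, imp in improvements.items()}

shares = distribute_bounty(
    bounty=10.0,           # e.g. ether held in escrow by the contract
    baseline_loss=0.80,    # loss before any contributed data
    contributor_losses={"alice": 0.60, "bob": 0.70, "carol": 0.85},
)
# alice lowered the loss by 0.20, bob by 0.10, carol by 0.00,
# so alice receives twice bob's share and carol receives nothing.
```

In the on-chain version the same rule would be evaluated by the contract against the encrypted model's reported loss, rather than off-chain as here.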
Bio: Andrew Trask is a PhD student at Oxford University, funded by the Oxford-DeepMind Graduate Scholarship, where he studies Deep Learning and Natural Language Processing with special emphasis on Long-Term Memory and Homomorphically Encrypted Deep Learning. He is also the author of the book Grokking Deep Learning, an instructor in Udacity's Deep Learning Nanodegree, and the author of a popular deep learning blog (iamtrask.github.io). Prior to Oxford, Andrew led product analytics at Digital Reasoning, which delivers enterprise A.I. solutions to Hedge Funds, Investment Banks, Healthcare Networks, and Government Intelligence clients. While at Digital Reasoning, he also trained the world's largest neural network, a record published as part of his work on neural word embeddings at the International Conference on Machine Learning in 2015.
• Reconciling Neural Networks with Symbolic Artificial Intelligence - Murray Shanahan
Despite the success of deep reinforcement learning, exemplified by DeepMind’s DQN and AlphaGo, there is arguably something missing. These systems are brittle and data hungry, and they over-fit to the specialised tasks they are trained on. In this talk I will present ongoing work to incorporate elements of symbolic AI into a deep reinforcement learning framework in an attempt to alleviate some of these shortcomings.
Bio: Murray Shanahan is Professor of Cognitive Robotics in the Dept. of Computing at Imperial College London, and a senior research scientist at DeepMind. Educated at Imperial College and Cambridge University (King’s College), he became a full professor at Imperial in 2006, and joined DeepMind in 2017. His publications span artificial intelligence, robotics, machine learning, logic, dynamical systems, computational neuroscience, and philosophy of mind. He has written several books, including “Embodiment and the Inner Life” (2010) and “The Technological Singularity” (2015). His main current research interests are neurodynamics, deep reinforcement learning, and the future of AI.
AHL Riverbank House, 2 Swan Lane, EC4R 3AD