Towards Interpretable & Responsible AI & Searching for the Principles of Reasoning

Date: Wednesday, October 24, 2018 - 18:30
Source: London Machine Learning
Attendees: 260
City: London

Please note that photo ID will be required. Attendees should ensure their Meetup profile name includes their full name to guarantee entry.

Agenda:
- 18:30: Doors open, pizza, beer, networking
- 19:00: First talk
- 19:45: Break & networking
- 20:00: Second talk
- 20:45: Close

* Towards Interpretable and Responsible AI in Structured Worlds (Vaishak Belle)

Abstract: The field of statistical relational learning aims at unifying logic and probability to reason and learn from relational data. Logic provides a means to codify high-level dependencies between individuals, enabling descriptive clarity in the knowledge representation system, and probability theory provides the means to quantify our uncertainty about this knowledge. In this talk, we report on some recent progress in the field while touching on the themes of interpretability and responsibility in AI. If time permits, we will also discuss very recent work on automating responsible decision making, by explicitly capturing the blame that should be accorded to a system with regard to a decision it has taken.
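
To make the "logic plus probability" idea concrete, here is a minimal Markov-logic-style sketch: a weighted first-order rule is grounded over a tiny domain, each possible world is scored by the rule groundings it satisfies, and a conditional query is answered by brute-force enumeration. This is an illustration of the general framework only, not Vaishak Belle's own system; the domain, rule, and weight are assumptions chosen for the example.

```python
# Minimal sketch of statistical relational learning (Markov logic style):
# a weighted rule makes rule-violating worlds less probable, not impossible.
import itertools
import math

people = ["Anna", "Bob"]

# Ground atoms: Smokes(x) and Cancer(x) for each person in the domain.
atoms = [("Smokes", p) for p in people] + [("Cancer", p) for p in people]

# Weighted first-order rule (illustrative): Smokes(x) => Cancer(x), weight 1.5.
RULE_WEIGHT = 1.5

def world_weight(world):
    """world: dict mapping each ground atom to True/False."""
    score = 0.0
    for p in people:
        # The ground implication Smokes(p) => Cancer(p) is satisfied unless
        # Smokes(p) is True and Cancer(p) is False.
        if not (world[("Smokes", p)] and not world[("Cancer", p)]):
            score += RULE_WEIGHT
    return math.exp(score)

# Enumerate every possible world (truth assignment over the ground atoms).
worlds = [dict(zip(atoms, values))
          for values in itertools.product([False, True], repeat=len(atoms))]

# Query: P(Cancer(Anna) | Smokes(Anna)) by summing weights of matching worlds.
num = sum(world_weight(w) for w in worlds
          if w[("Cancer", "Anna")] and w[("Smokes", "Anna")])
den = sum(world_weight(w) for w in worlds if w[("Smokes", "Anna")])
print("P(Cancer(Anna) | Smokes(Anna)) =", num / den)
```

In a full system the enumeration would of course be replaced by approximate or lifted inference, and the weights would be learned from relational data rather than fixed by hand.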

Bio: Vaishak Belle is a Chancellor’s Fellow at the School of Informatics, University of Edinburgh, an Alan Turing Institute Faculty Fellow, and a member of the RSE (Royal Society of Edinburgh) Young Academy of Scotland. Vaishak’s research is in artificial intelligence, and is motivated by the need to augment learning and perception with high-level, structured, commonsensical knowledge, enabling AI systems to learn faster and build more accurate models of the world. He is interested in computational frameworks that can explain their decisions and that are modular, reusable, and robust to variations in problem description. He has co-authored over 40 scientific articles on AI, and along with his co-authors he has won the Microsoft best paper award at UAI and the Machine Learning journal award at ECML-PKDD. In 2014, he received a silver medal from the Kurt Gödel Society.

* Searching for the Principles of Reasoning and Intelligence (Shakir Mohamed)

Abstract: We are collectively committed to a common task: a search for the general principles that make machines-that-learn possible. This leads to the question: What are the universal principles, if there are any, of reasoning and intelligence in machines? For me, these are the principles of probability, and of probabilistic inference. My search begins with four statistical operations that expose the dual tasks of learning, and of testing. We can instantiate many different types of inferential questions, and I share some of the pathways I've followed in attempting to find general-purpose approaches to them. One such area is variational inference, and I'll briefly discuss the roles of amortised inference, stochastic optimisation, and universal density estimation. For the most part, I'll explore recent work in testing as an inferential principle for implicit probabilistic models, and discuss work in estimation-by-comparison, density ratio estimation, and the method-of-moments. Different types of models require different types of inference, making any single universal inference scheme elusive. But these are ongoing efforts, and as usual, there remain many questions and much more to do. My search for the principles of reasoning and intelligence continues.
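
As a small taste of the "estimation-by-comparison" theme, the sketch below approximates a density ratio p(x)/q(x) by training a probabilistic classifier to distinguish samples from p and q, then converting its output probabilities into a ratio estimate. This illustrates the general idea referenced in the abstract rather than the specific estimators from the talk; the Gaussian distributions and logistic-regression choice are assumptions made for the example.

```python
# Density ratio estimation by classification (estimation by comparison):
# label p-samples 1 and q-samples 0, fit a classifier, and use
# p(x)/q(x) ≈ d(x) / (1 - d(x)) when both classes have equal sample sizes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# p = N(1, 1) and q = N(0, 1), so the true log-ratio is x - 0.5 (linear in x).
x_p = rng.normal(loc=1.0, scale=1.0, size=(2000, 1))
x_q = rng.normal(loc=0.0, scale=1.0, size=(2000, 1))

X = np.vstack([x_p, x_q])
y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
clf = LogisticRegression().fit(X, y)

# Compare the classifier-based ratio estimate with the analytic ratio.
x_test = np.array([[0.0], [1.0], [2.0]])
d = clf.predict_proba(x_test)[:, 1]          # probability that x came from p
est_ratio = d / (1.0 - d)
true_ratio = np.exp(x_test[:, 0] - 0.5)
print("estimated:", np.round(est_ratio, 2), "true:", np.round(true_ratio, 2))
```

The same trick underlies implicit-model testing and GAN-style objectives: if you can tell two sets of samples apart, you have implicitly estimated something about the ratio of their densities.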

Bio: Shakir Mohamed is a staff research scientist at DeepMind. Shakir's research is in statistical machine learning and artificial intelligence. His work focusses on the interface between probabilistic reasoning, deep learning and reinforcement learning, and on how the computational solutions that emerge at that intersection can be used to develop general-purpose learning systems. Shakir focusses his efforts around three pillars: Searching for the Principles of Reasoning and Intelligence, Global Challenges, and Transformation and Diversity.

1 Angel Ln