Computer Science

Faculty of Engineering, LTH


After the Christmas break: Jog your mind with ML!


From: 2020-01-07 13:00 to: 15:00
Place: E:A, E-building, Ole Römers väg 3, LTH, Lund University
Contact: elin_anna [dot] topp [at] cs [dot] lth [dot] se

Marc Deisenroth (University College London) and Shakir Mohamed (DeepMind) will each give a guest lecture on their work in the area of Machine Learning

Welcome to attend this research seminar of the Robotics and Semantic Systems group at the Department of Computer Science. Our guests, Marc Deisenroth (University College London) and Shakir Mohamed (DeepMind), will share some insights into their work in Machine Learning - and hopefully wake you up after the holidays!

When: 7 January 2020, 13:00-15:00

Where: E:A, E-building, Ole Römers väg 3, LTH, Lund University

Presentation 1:
Data-Efficient Reinforcement Learning with Probabilistic Models
Marc Deisenroth, University College London

Abstract: On our path toward fully autonomous systems, i.e., systems that operate in the real world without significant human intervention, reinforcement learning (RL) is a promising framework for learning to solve problems by trial and error. While RL has had many successes recently, a practical challenge we face is its data inefficiency: In real-world problems (e.g., robotics) it is not always possible to conduct millions of experiments, e.g., due to time or hardware constraints. In this talk, I will outline three approaches that explicitly address the data-efficiency challenge in reinforcement learning using probabilistic models. First, I will give a brief overview of a model-based RL algorithm that can learn from small datasets. Second, I will describe an idea based on model predictive control that allows us to learn even faster while taking care of state or control constraints, which is important for safe exploration. Finally, I will introduce an idea for meta learning (in the context of model-based RL), which is based on latent variables within a hierarchical Bayesian model.

Key references:

  • Marc P. Deisenroth, Dieter Fox, Carl E. Rasmussen, Gaussian Processes for Data-Efficient Learning in Robotics and Control, IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 37, pp. 408–423, 2015
  • Sanket Kamthe, Marc P. Deisenroth, Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control, Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2018
  • Steindór Sæmundsson, Katja Hofmann, Marc P. Deisenroth, Meta Reinforcement Learning with Latent Variable Gaussian Processes, Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2018

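The references above build on Gaussian-process models of system dynamics, which can be learned from very few transitions and then rolled out as a simulator. As a minimal, hedged sketch of that underlying idea (a toy from-scratch GP regression, not the actual algorithm from the papers), consider a hypothetical 1-D system whose state change per step is 0.1·sin(x):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP regression posterior (mean and marginal variance) at Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Hypothetical 1-D dynamics: x_{t+1} = x_t + 0.1*sin(x_t).
# Learn the state *difference* 0.1*sin(x) from only 20 noisy transitions.
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 20)[:, None]
y = 0.1 * np.sin(X[:, 0]) + 0.01 * rng.standard_normal(20)

# Evaluate the learned model on a test grid.
Xs = np.linspace(-3, 3, 50)[:, None]
mean, var = gp_posterior(X, y, Xs)

# Roll the learned model forward from x = 1.0 for three steps.
x = np.array([[1.0]])
for _ in range(3):
    delta, _ = gp_posterior(X, y, x)
    x = x + delta
print(x[0, 0])  # close to the true three-step state, about 1.2647
```

The point of the sketch is data efficiency: 20 observed transitions suffice for a usable dynamics model, and the posterior variance (unused here) is what a probabilistic RL method would exploit for planning under model uncertainty.
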
Presentation 2:
Machine Learning from Principles to Products
Shakir Mohamed, Research Scientist, DeepMind

Abstract: Machine learning covers a range of research from the theoretical to the practical, across ever-wider areas of human activity. In this talk I'd like to explore these two ends of our field, using a definition of machine learning as a pathway from principles to products, and structuring the talk in two parts. Part one will look at the basic question of how we compute gradients of stochastic objective functions. This is the important problem of sensitivity analysis, for which we will derive three estimators of these gradients. The second part will jump from the theoretical to the applied and look at the problem of machine learning in healthcare. In particular, I'll explore the prediction of organ injury in hospitals, focusing on a specific condition known as acute kidney injury (AKI), and look at one approach for making clinically applicable predictions of kidney injury. As computational researchers, we have the privilege to work in many different areas of research, and I'll conclude by raising some questions about what ethical research in pathways from principles to products looks like, and from where we derive the motivation for our research.
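To give a flavour of the sensitivity-analysis problem in part one, here is a minimal illustration of two classical Monte Carlo gradient estimators, score-function and pathwise (these are common examples, not necessarily the three estimators derived in the talk), checked against a closed-form gradient:

```python
import numpy as np

# Objective: E_{z ~ N(theta, sigma^2)}[z^2], whose gradient w.r.t. theta
# is exactly 2*theta (here 3.0), so both estimators can be verified.
rng = np.random.default_rng(1)
theta, sigma, n = 1.5, 1.0, 200_000

# 1) Score-function (REINFORCE) estimator:
#    grad = E[ f(z) * d/dtheta log N(z; theta, sigma^2) ]
z = rng.normal(theta, sigma, n)
score = (z - theta) / sigma**2
g_score = np.mean(z**2 * score)

# 2) Pathwise (reparameterisation) estimator: write z = theta + sigma*eps,
#    so grad = E[ f'(theta + sigma*eps) ] with f'(z) = 2z.
eps = rng.standard_normal(n)
g_path = np.mean(2 * (theta + sigma * eps))

print(g_score, g_path)  # both close to 2*theta = 3.0
```

Both estimators are unbiased, but for this objective the pathwise samples have much lower variance than the score-function samples, which is one reason the choice of estimator matters in practice.
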