MC1 From Deep Learning to Deep Reinforcement Learning: A quick introduction

Description

The course will introduce some core concepts of deep learning. We will discuss the difference between feedforward models and recurrent ones, optimization issues surrounding these non-convex functions, and related topics. Everything will be described in the framework of supervised learning, though unsupervised learning will also be briefly introduced. Afterwards we will move on to reinforcement learning and, in particular, discuss the intersection between deep learning and reinforcement learning, which poses some unique problems. The lecture will end with an outlook on some of the big open questions in the field.
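The distinction between feedforward and recurrent models mentioned above can be illustrated in a few lines. The sketch below is not course material, just a minimal NumPy toy example with arbitrary random weights: a feedforward layer maps each input independently, while a recurrent layer carries a hidden state so its output depends on the whole input history.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy sequence of 5 input vectors, each of dimension 3.
xs = rng.standard_normal((5, 3))

# Feedforward layer: each input is processed independently of the others.
W = rng.standard_normal((3, 4))

def feedforward(x):
    return np.tanh(x @ W)

# Recurrent layer: a hidden state h accumulates information across time steps.
W_in = rng.standard_normal((3, 4))
W_rec = rng.standard_normal((4, 4))

def recurrent(sequence):
    h = np.zeros(4)
    for x in sequence:
        h = np.tanh(x @ W_in + h @ W_rec)  # new state depends on all past inputs
    return h

ff_out = np.array([feedforward(x) for x in xs])  # shape (5, 4), order-independent
rnn_out = recurrent(xs)                          # shape (4,), order-dependent
print(ff_out.shape, rnn_out.shape)
```

Reversing the sequence leaves every feedforward output unchanged but generally changes the recurrent layer's final state, which is exactly why recurrent models suit sequential data.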

Objectives

Conceptual: understand the status of a very fast-moving subfield of machine learning that has recently received considerable media attention.
Methodological: acquire some rudimentary knowledge of how to set up and attack a problem using deep networks and/or RL.

Literature

LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521 (2015): 436–444. https://www.cs.toronto.edu/~hinton/absps/NatureDeepReview.pdf
Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. "Playing Atari with deep reinforcement learning." arXiv preprint arXiv:1312.5602 (2013).

Course location

Günne

Course requirements

none

Instructor information

Instructor's name

Razvan Pascanu

Email

cf. website

Vita

Razvan Pascanu is a senior research scientist at Google DeepMind. He did his master's with Prof. Herbert Jaeger, followed by a PhD at the University of Montreal with Prof. Yoshua Bengio. His interests include neural networks, optimization, reinforcement learning, the theory of neural networks, and continual learning; in short, anything to do with neural networks. They are a very powerful yet not well understood computational model: easy to work with, hard to understand.

Website

https://uk.linkedin.com/in/razvan-pascanu-67abb215/de