6 EC
Semester 1 & 2, periods 1 and 4
5204RELE6Y
Owner | Master Artificial Intelligence |
Coordinator | dr. H.C. van Hoof |
Part of | Master Artificial Intelligence |
Reinforcement learning is a general framework for studying sequential decision-making problems. In such problems, at every time step an action must be chosen to optimize long-term performance. This is a very wide class of problems that includes robotic control and game playing, but also human and animal behavior. Reinforcement learning methods can be applied when no training labels for the optimal action are available and good actions have to be discovered through trial and error.
In this course, we will discuss properties of reinforcement learning problems and algorithms to solve them. In particular, we will look at Markov decision processes and bandits, dynamic programming, Monte Carlo and temporal-difference methods, learning with function approximation and deep RL, policy gradient methods, planning and learning, and partial observability.
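To give a flavour of trial-and-error learning, the short sketch below (not part of the course materials; the arm means, step count, and epsilon value are made up for illustration) runs an epsilon-greedy agent on a toy multi-armed bandit: actions are tried, noisy rewards are observed, and action-value estimates are updated incrementally, in the spirit of the bandit chapter of RL:AI.

```python
# Minimal illustrative sketch of trial-and-error learning on a toy bandit.
# All settings here are hypothetical; this is not course code.
import random

def run_bandit(true_means=(0.2, 0.5, 0.8), steps=1000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_means)
    estimates = [0.0] * n_arms   # incremental estimate of each arm's mean reward
    counts = [0] * n_arms        # how often each arm has been pulled
    total_reward = 0.0
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit the current best estimate, sometimes explore.
        if rng.random() < epsilon:
            action = rng.randrange(n_arms)
        else:
            action = max(range(n_arms), key=lambda a: estimates[a])
        # The environment returns only a noisy reward; no label says which arm is optimal.
        reward = true_means[action] + rng.gauss(0.0, 1.0)
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
        total_reward += reward
    return estimates, total_reward

if __name__ == "__main__":
    estimates, total = run_bandit()
    print("estimated arm values:", [round(e, 2) for e in estimates])
```

With these (made-up) settings the value estimates move towards the true arm means, illustrating how good actions can be discovered from reward feedback alone.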
Reinforcement Learning: An Introduction. R. S. Sutton & A. G. Barto
Second Edition. Available: http://incompleteideas.net/book/RLbook2020.pdf. We will cover or partially cover chapters 1-6, 8-11, 13, 16, and 17.
A Survey on Policy Search for Robotics. M. P. Deisenroth, G. Neumann, J. Peters. We will cover this survey partially. Available:
We will study new developments in the field of RL through recent publicly available papers. Links will be distributed through the class website.
In the lectures, the theoretical background will be covered, coupled with building up an intuitive understanding of how the methods relate to each other through examples and explanation.
The practical sessions focus on applying and practicing the techniques from the lecture.
Activity | Hours | |
Lecture (hoorcollege) | 28 | |
Laptop session (laptopcollege) | 14 | |
Exam (tentamen) | 3 | |
Tutorial (werkcollege) | 14 | |
Self study | 109 | |
Total | 168 | (6 EC x 28 hours) |
This programme does not have requirements concerning attendance (OER part B).
Additional requirements for this course:
Attendance at lectures and tutorial sessions is strongly encouraged but not required.
Item and weight | Details |
Final grade | |
0.65 (65%) Exam (tentamen) | |
0.35 (35%) Partial grade for practical assignments (deelcijfer praktische opdrachten) | |
A resit is possible for the exam only.
A result of at least 5.0 on the exam is required to pass the course. In addition, the weighted average of the assignments and the exam must be at least 5.5.
The final grade is the weighted average of the assignments and the exam, using the weights listed in the table above.
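For illustration (the grades here are made up; the weights are those from the table above): an exam grade of 6.0 and an assignment grade of 7.0 would give a final grade of 0.65 x 6.0 + 0.35 x 7.0 = 6.35, which passes; the same weighted average with an exam grade below 5.0 would not pass.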
Assignments handed in after the deadline without permission may not be graded or may not be awarded full points. Please see the course policy in the syllabus on Canvas.
An announcement will be made on Canvas about inspecting exam grades. To inspect assignment grades, ask your TA after the grade is announced.
Both graded and ungraded assignments are provided. Only the graded assignments should be handed in. They are clearly marked as 'homework'.
The answers to ungraded assignments will be provided one week after the assignment was scheduled so that students can check their own work.
Homework assignments, including coding assignments and the empirical RL report, can be done in groups of 2 or individually. Feedback will be given in Canvas and additional feedback can be given on request by the TA during tutorial sessions.
The 'Regulations governing fraud and plagiarism for UvA students' applies to this course. This will be monitored carefully. Upon suspicion of fraud or plagiarism the Examinations Board of the programme will be informed. For the 'Regulations governing fraud and plagiarism for UvA students' see: www.student.uva.nl
T = tutorial (werkcollege), L = lecture
RL:AI = Reinforcement Learning: An Introduction, Ex = ungraded exercise, HW = homework
Lecture | Topic | Literature
T 1 | Set-up programming environment, prior knowledge self-test | Ex. 0.1 & 0.2
L 1 | Introduction. MDP & Bandit | RL:AI 1.1-1.4, 1.6, 2.1-2.4, 2.6, 2.7, 3.1-3.3
T 2 | | Ex. 1.1-1.3
L 2 | Dynamic programming | RL:AI 2.5, 3.4-3.8, 4
T 3 | | Ex. 2.1-2.2, HW 2.3 & 2.4
L 3 | Monte-Carlo methods | RL:AI 5.1-5.7
T 4 | | Ex. 3.1-3.3, HW 3.4
12/2 | Hand in HW 1! (HW always due on Wednesday at 17:00) |
L 4 | Temporal difference methods | RL:AI 6.1-6.5
T 5 | | Ex. 4.1-4.2, HW 4.3
L 5 | From tabular learning to approximation | RL:AI 9.1-9.3
T 6 | | Ex. 5.1, 5.2, HW 5.3 & 5.4
19/2 | Hand in HW 2! |
L 6 | On-policy temporal difference learning with approximation | RL:AI 9.3-9.8
T 7 | | Ex. 6.1-6.4, HW 6.5
L 7 | Off-policy RL with approximation | RL:AI 10.1, 11.1-11.7
T 8 | | Ex. 7.1-7.3, HW 7.4
26/2 | Hand in HW 3! |
L 8 | Deep RL (value-based methods) | RL:AI 16.5; papers (TBD)
T 9 | | Ex. 8.1, HW 8.2 & 8.3
L 9 | Policy gradient methods: REINFORCE | RL:AI 13.1-13.4, 13.7; Survey 1 - 2.4.1.2
T 10 | | Ex. 9.1-9.3, HW 9.4
5/3 | Hand in HW 4! |
L 10 | Policy gradient methods: PGT, DPG & evaluation | RL:AI 13.5; RL that matters paper, Empirical design paper, DPG paper
T 11 | | Ex. 10.1-10.2, HW 10.3 & 10.4
L 11 | Advanced PS methods: NPG & TRPO | Survey 2.4.1.3, TRPO paper
T 12 | | Ex. 11.1-11.3
12/3 | Hand in HW 5! |
L 12 | Planning and learning | RL:AI 8.1, 8.2, 8.8, 8.10, 8.11, 8.13, 16.6; AlphaGo paper
T 13 | | Ex. 12.1 & 12.2
L 13 | Partial observability | RL:AI 17.3
T 14 | FAQ session (exam and reproducible research assignment) | Ex. 13.1, ERL assignment
19/3 | Hand in HW 6 (ERL assignment)! |
L 14 | Recap & Exam FAQ |
24/3 | Exam! |
For questions regarding assignment to tutorial groups please see the Canvas announcement and contact Pieter Pierrot.
Questions about the content of lectures or exercises can be asked on the course Ed Discussion page (see link on Canvas).
For sensitive or private questions please contact the course coordinator using e-mail or a message on Canvas.