Title: Probabilistic Reasoning and Reinforcement Learning
Info: ECE 457C - Reinforcement Learning
Instructor: Prof. Mark Crowley, ECE Department, UWaterloo
NOTE: Ignore the weekly dates, they are from a previous year
Website: markcrowley.ca/rlcourse
Link to this Gingko Tree: RL Course Links and Notes
Primary Textbook : Reinforcement Learning: An Introduction
Richard S. Sutton and Andrew G. Barto, 2018 [SB]
Some topics are not covered in the SB textbook, or are covered there in much more detail than in the lectures. We will continue to update this list with references as the term progresses.
Skipped Topics:
Other resources connected with previous versions of the course; I’m happy to talk about any of these if people are interested.
Introduction to Reinforcement Learning (RL) theory and algorithms for learning decision-making policies in situations with uncertainty and limited information. Topics include Markov decision processes, classic exact/approximate RL algorithms such as value/policy iteration, Q-learning, State-action-reward-state-action (SARSA), Temporal Difference (TD) methods, policy gradients, actor-critic, and Deep RL such as Deep Q-Learning (DQN), Asynchronous Advantage Actor Critic (A3C), and Deep Deterministic Policy Gradient (DDPG). [Offered: S, first offered Spring 2019]
Textbook Sections: [SB 1.1, 1.2, 17.6]
Former title: The Reinforcement Learning Problem
Textbook Sections: [SB 4.1-4.4]
Textbook Sections: Selections from [SB chap 5], [SB 6.0 - 6.5]
Textbook Sections: [SB 12.1, 12.2]
Go over any questions or open topics from first 6 weeks.
No midterm in Spring 2023 course.
[SB 13.1, 13.2, 13.5]
Note: the content listed in LEARN for the S22 offering is being updated more frequently and consistently than this list.
Week 13
See LEARN for more information.
In general: move the elevators, open/close the doors in order to maximize your objective function
At every moment the system can take any of the following actions; we can assume they happen only one at a time (a minimal sketch follows below).
Do nothing
Open a door/Close a door : set
Move an elevator up/down from current floor : set
Stop an elevator at the current floor it is moving towards using
(huh? no it’s not short…it’s about elevators)
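To make the action list above concrete, here is a minimal sketch of an elevator environment in Python. The class names, floor count, call-arrival probability, and reward of -1 per unserved call are illustrative assumptions, not part of any assignment specification.

```python
from enum import Enum
import random

class ElevatorAction(Enum):
    DO_NOTHING = 0
    OPEN_DOOR = 1
    CLOSE_DOOR = 2
    MOVE_UP = 3
    MOVE_DOWN = 4
    STOP = 5

class ElevatorState:
    """Illustrative state: one elevator, a door flag, and pending hall calls."""
    def __init__(self, num_floors=5):
        self.num_floors = num_floors
        self.floor = 0
        self.door_open = False
        self.calls = set()                    # floors with a waiting passenger

    def step(self, action):
        """Apply one action at a time; reward is -1 per unserved call."""
        if action == ElevatorAction.OPEN_DOOR:
            self.door_open = True
            self.calls.discard(self.floor)    # serve anyone waiting here
        elif action == ElevatorAction.CLOSE_DOOR:
            self.door_open = False
        elif action == ElevatorAction.MOVE_UP and not self.door_open:
            self.floor = min(self.floor + 1, self.num_floors - 1)
        elif action == ElevatorAction.MOVE_DOWN and not self.door_open:
            self.floor = max(self.floor - 1, 0)
        # DO_NOTHING and STOP leave the position unchanged in this sketch
        if random.random() < 0.1:             # assumed arrival rate of new calls
            self.calls.add(random.randrange(self.num_floors))
        return -len(self.calls)               # objective: minimize waiting
```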
[SuttonBarto2018] - Reinforcement Learning: An Introduction. Book, free pdf of draft available.
http://incompleteideas.net/book/the-book-2nd.html
[Dimitrakakis2019] - Decision Making Under Uncertainty and Reinforcement Learning
http://www.cse.chalmers.se/~chrdimi/downloads/book.pdf
[Ghavamzadeh2016] - Bayesian Reinforcement Learning: A Survey. Ghavamzadeh et al. 2016.
https://arxiv.org/abs/1609.04436
This website is a great resource. It lays out concepts from start to finish. Once you get through the first half of our course, many of the concepts on this site will be familiar to you.
https://spinningup.openai.com/en/latest/spinningup/keypapers.html
The fundamentals of RL are briefly covered here. We will go into all this and more in detail in our course.
https://spinningup.openai.com/en/latest/spinningup/rl_intro.html
(as of 2022)
Here is a list of algorithms that were at the cutting edge of RL as of a year or so ago, so it’s a good place to find out more. But in a fast-growing field it may be a bit out of date by now.
https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html
This is a thorough collection of slides from a few different texts and courses, laid out with the essentials from basic decision making to Deep RL. There are also code examples for some of their own simple domains.
https://github.com/omerbsezer/Reinforcement_learning_tutorial_with_demo#ExperienceReplay
The AAMAS 2021 conference finished recently; it is focused on decision making and planning, with lots of RL papers.
ICLR 2020 conference (https://iclr.cc/virtual_2020/index.html)
Introductory topics on this from my graduate course ECE 657A - Data and Knowledge Modeling and Analysis are available on YouTube and are mostly applicable to this course as well.
Probability and Statistics Review (youtube playlist)
Containing Videos on:
For a very fundamental view of probability from another course of Prof. Crowley you can view the lectures and tutorials for ECE 108
ECE 108 Youtube (look at “future lectures” and “future tutorials” for S20): https://www.youtube.com/channel/UCHqrRl12d0WtIyS-sECwkRQ/playlists
The last few lectures and tutorials are on probability definitions as seen from the perspective of discrete math and set theory.
A good article summarizing how likelihood, loss functions, risk, KL divergence, MLE, and MAP are all connected.
https://quantivity.wordpress.com/2011/05/23/why-minimize-negative-log-likelihood/
From the course website for a previous year. We won’t need all of this, but it is all useful to know for Machine Learning methods in general.
https://compthinking.github.io/RLCourseNotes/
- Part 1 - Live Lecture May 17, 2021 on Virtual Classroom - View Live Here
Parts:
Eligibility traces, in a tabular setting, provide a significant benefit in training time on top of the basic Temporal Difference method.
In Deep RL it is very common to use experience replay to reduce overfitting and bias toward recent experiences. However, experience replay makes it very hard to leverage eligibility traces, which require a sequence of actions in order to distribute reward backwards.
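As a concrete illustration, here is a minimal sketch of tabular TD(λ) prediction with accumulating eligibility traces. The environment interface (`env.reset()`, `env.step(action)`, `env.sample_action()`) is an assumed placeholder, not a specific library API.

```python
import numpy as np

def td_lambda(env, num_states, episodes=100, alpha=0.1, gamma=0.99, lam=0.9):
    """Tabular TD(lambda) value prediction with accumulating eligibility traces.

    Assumes env.reset() returns an integer state and env.step(action)
    returns (next_state, reward, done) -- an illustrative interface.
    """
    V = np.zeros(num_states)
    for _ in range(episodes):
        e = np.zeros(num_states)          # one eligibility trace per state
        s = env.reset()
        done = False
        while not done:
            a = env.sample_action()       # assumed random-policy helper
            s_next, r, done = env.step(a)
            delta = r + gamma * V[s_next] * (not done) - V[s]
            e[s] += 1.0                   # accumulating trace for visited state
            V += alpha * delta * e        # credit flows back along the trace
            e *= gamma * lam              # all traces decay every step
            s = s_next
    return V
```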
A Value Function Approximation (VFA)
is a necessary technique whenever the size of the state or action spaces becomes too large to represent the value function explicitly as a table. In practice, any realistic problem needs a VFA.
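For instance, a linear VFA replaces the table with a weight vector over state features. A minimal sketch, where the feature vector is an assumed hand-crafted input:

```python
import numpy as np

def linear_value(weights, features):
    """V_hat(s) = w . x(s): a dot product instead of a table lookup."""
    return np.dot(weights, features)

# Example: 4-dimensional assumed feature vector x(s) for some state s
w = np.zeros(4)
x_s = np.array([1.0, 0.5, -0.2, 0.0])
print(linear_value(w, x_s))
```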
In this video we go over some of the fundamental concepts that led to neural networks (such as linear regression and logistic regression models), the basic structure and formulation of classic neural networks, and the history of their development.
This video goes through a ground level description of logistic neural units, classic neural networks, modern activation functions and the idea of a Neural Network as a Universal Approximator.
In this video we discuss the nuts and bolts of how training in Neural Networks (Deep or Shallow) works as a process of incremental optimization of weights via gradient descent. Topics discussed: Backpropagation algorithm, gradient descent, modern optimizer methods.
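For reference, a single plain gradient-descent step on a squared-error loss for a linear model looks like the following minimal sketch (the learning rate and loss are illustrative assumptions):

```python
import numpy as np

def sgd_step(w, x, y, lr=0.01):
    """One stochastic gradient descent step for a linear model.

    Loss L(w) = 0.5 * (w.x - y)^2, so dL/dw = (w.x - y) * x,
    and we move the weights a small step against that gradient.
    """
    error = np.dot(w, x) - y
    return w - lr * error * x
```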
In this video we go over the fundamentals of Deep Learning from a different angle, using the approach from Goodfellow et al.’s Deep Learning Textbook and their network graph notation for neural networks.
We describe the network diagram notation, and how to view neural networks in this way, focussing on the relationship between sets of weights and layers.
Other topics include: gradient descent, loss functions, cross-entropy, network output distribution types, softmax output for classification.
This video continues with the approach from Goodfellow et al.’s Deep Learning Textbook and goes into detail about computational methods, efficiency, and defining the measure being used for optimization.
Topics covered include: relationship of network depth to generalization power, computation benefits of convolutional network structures, revisiting the meaning of backpropagation, methods for defining loss functions
In this lecture I talk about some of the problems that can arise when training neural networks and how they can be mitigated. Topics include : overfitting, model complexity, vanishing gradients, catastrophic forgetting and interpretability.
In this video we give an overview of several approaches for making DNNs more usable when data is limited with respect to the size of the network. Topics include data augmentation, residual network links, vanishing gradients.
Some of the posts used for lecture on July 26.
A very clear blog post describing Actor-Critic algorithms as an improvement over plain Policy Gradients.
https://www.freecodecamp.org/news/an-intro-to-advantage-actor-critic-methods-lets-play-sonic-the-hedgehog-86d6240171d/
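The core idea can be summarized in a minimal sketch of one online advantage actor-critic update using linear approximators. The feature vectors, learning rates, and the use of the TD error as the advantage estimate are illustrative assumptions, not the blog post’s exact implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def actor_critic_step(theta, w, x_s, a, r, x_next, done,
                      gamma=0.99, alpha_actor=0.01, alpha_critic=0.1):
    """One online advantage actor-critic update with linear approximators.

    theta: (num_actions, num_features) policy preferences (actor)
    w:     (num_features,) value weights (critic)
    x_s, x_next: assumed feature vectors of the current / next state
    """
    v_s = np.dot(w, x_s)
    v_next = 0.0 if done else np.dot(w, x_next)
    advantage = r + gamma * v_next - v_s        # TD error as advantage estimate

    # Critic: move V(s) toward the bootstrapped TD target
    w = w + alpha_critic * advantage * x_s

    # Actor: increase log-probability of the taken action, scaled by advantage
    probs = softmax(theta @ x_s)
    grad_log_pi = -np.outer(probs, x_s)         # gradient of log softmax policy
    grad_log_pi[a] += x_s
    theta = theta + alpha_actor * advantage * grad_log_pi
    return theta, w
```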
Going beyond what we covered in class, here are some exciting trends and new advances in RL research in the past few years to find out more about.
PG methods are a fast-changing area of RL research. This post covers a number of the successful algorithms in this area from a few years ago:
https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#actor-critic
https://hyp.is/go?url=https%3A%2F%2Farxiv.org%2Fpdf%2F1509.02971.pdf&group=DM67BYBG
See the RL Next Steps tree for what was discussed in class July 22, 2022.
A nice blog post comparing DQN and Policy Gradient algorithms such as A2C.
https://flyyufelix.github.io/2017/10/12/dqn-vs-pg.html
[Ermon2019] - The first half of the notes is based on Stanford CS 228 (https://ermongroup.github.io/cs228-notes/), which goes into even more detail on PGMs than we will.
[Cam Davidson 2018] - Bayesian Methods for Hackers - Probabilistic Programming textbook as set of python notebooks.
https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/#contents
[Koller, Friedman, 2009] Probabilistic Graphical Models : Principles and Techniques
The extensive theoretical book on PGMs.
https://mitpress.mit.edu/books/probabilistic-graphical-models
When using a VFA, you can use a Stochastic Gradient Descent (SGD) method to search for the best weights for your value function according to experience.
This parametric form of the value function will then be used to obtain a greedy or epsilon-greedy policy at run-time.
This is why using a VFA + SGD is still different from a Direct Policy Search approach where you optimize the parameters of the policy directly.
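A minimal sketch of this pipeline with a linear Q approximator follows; the feature function x(s, a) and the per-action feature list are assumed inputs, not a specific library API.

```python
import numpy as np

def q_value(w, x_sa):
    """Linear action-value estimate: Q_hat(s, a) = w . x(s, a)."""
    return np.dot(w, x_sa)

def sgd_update(w, x_sa, target, alpha=0.1):
    """Semi-gradient SGD step: move Q_hat(s, a) toward the bootstrapped target."""
    return w + alpha * (target - q_value(w, x_sa)) * x_sa

def epsilon_greedy(w, features_per_action, epsilon=0.1):
    """Derive the behaviour policy from the learned weights at run-time.

    features_per_action: assumed list of x(s, a) vectors, one per action.
    """
    if np.random.rand() < epsilon:
        return np.random.randint(len(features_per_action))
    q_values = [q_value(w, x) for x in features_per_action]
    return int(np.argmax(q_values))
```

Note that the weights parameterize the value function, and the policy is only derived from it greedily; this is the distinction from Direct Policy Search, where the policy itself is the parameterized object.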
SamIam Bayesian Network GUI Tool
Other Tools
Some videos and resources on Bayes Nets, d-separation, the Bayes Ball algorithm, and more:
https://metacademy.org/graphs/concepts/bayes_ball