Course References, Links and Random Notes
Title: Probabilistic Reasoning and Reinforcement Learning
Info: ECE 457C - Reinforcement Learning
Instructor: Prof. Mark Crowley, ECE Department, UWaterloo
NOTE: Ignore the weekly dates; they are from a previous year.
Website: markcrowley.ca/rlcourse
Link to this Gingko Tree: RL Course Links and Notes
Course Description :
Introduction to Reinforcement Learning (RL) theory and algorithms for learning decision-making policies in situations with uncertainty and limited information. Topics include Markov decision processes, classic exact/approximate RL algorithms such as value/policy iteration, Q-learning, State-action-reward-state-action (SARSA), Temporal Difference (TD) methods, policy gradients, actor-critic, and Deep RL such as Deep Q-Learning (DQN), Asynchronous Advantage Actor Critic (A3C), and Deep Deterministic Policy Gradient (DDPG). [Offered: S, first offered Spring 2019]
Style Sheet
- don’t mess with it
- Style Sheet - see saved main version : https://gingkoapp.com/app#3bf9513db6a011c9e8000239
Course Resources
- Course Website : contains course outline, grade breakdown, weekly schedule information
- Notes and slides via the Textbook (available free online):
- Reinforcement Learning: An Introduction. Richard S. Sutton and Andrew G. Barto, 2018.
- Course Youtube Channel : Reinforcement Learning
- See Additional Resources for more online notes and reading.
Topics
Primary Textbook : Reinforcement Learning: An Introduction
Richard S. Sutton and Andrew G. Barto, 2018 [SB]
Some topics are not covered in the SB textbook, or are covered there in much more detail than in the lectures. We will continue to update this list with references as the term progresses.
- Motivation & Context [SB 1.1, 1.2, 17.6]
- Decision Making Under Uncertainty [SB 2.1-2.3, 2.7, 3.1-3.3]
- Solving MDPs [SB 3.5, 3.6, 4.1-4.4]
- The RL Problem [SB 3.7, 6.4, 6.5]
- TD Learning [SB 12.1, 12.2]
- Policy Search [SB 13.1, 13.2, 13.5]
- State Representation & Value Function Approximation
- Basics of Neural Networks
- Deep RL
- AlphaGo and MCTS
- Quick Overview of Other Topics:
- MARL
- Free Energy
- Hierarchical RL
- Supervised Learning for RL and Curriculum Learning
Skipped Topics:
- POMDPs (skipped in S22)
Course Introduction
Basics of Probability
ECE 657A Youtube Videos
Introductory topics on this from my graduate course ECE 657A - Data and Knowledge Modeling and Analysis are available on YouTube and are mostly applicable to this course as well.
Probability and Statistics Review (youtube playlist)
Containing Videos on:
- Conditional Prob and Bayes Theorem
- Comparing Distributions and Random Variables
- Hypothesis Testing
ECE 108 YouTube Videos
For a very fundamental view of probability, from another of Prof. Crowley's courses, you can view the lectures and tutorials for ECE 108.
ECE 108 Youtube (look at “future lectures” and “future tutorials” for S20): https://www.youtube.com/channel/UCHqrRl12d0WtIyS-sECwkRQ/playlists
The last few lectures and tutorials are on probability definitions as seen from the perspective of discrete math and set theory.
Likelihood, Loss and Risk
A good article summarizing how likelihood, loss functions, risk, KL divergence, MLE and MAP are all connected.
https://quantivity.wordpress.com/2011/05/23/why-minimize-negative-log-likelihood/
Probability Intro Markdown Notes
From the course website for a previous year. Not all of this is needed for this course, but it is all useful background for Machine Learning methods in general.
https://compthinking.github.io/RLCourseNotes/
- Basic probability definitions
- conditional probability
- Expectation
- Inference in Graphical Models
- Variational Inference
Basic Decision Making Models - Multiarmed Bandits
Textbook Sections: [SB 1.1, 1.2, 17.6]
Videos
- Part 1 - Live Lecture May 17, 2021 on Virtual Classroom - View Live Here
- Part 2 - Bandits and Values (the sound is horrible! we’ll record a new one) - https://youtu.be/zVIv1ipnubA
- Part 3 - Regret Minimization, UCB and Thompson Sampling - https://youtu.be/a0OcuuglkHQ
Multiarmed Bandit : Solving it via Reinforcement Learning in Python
- Quite a good blog post with all the concepts laid out in simple terms, in order: https://www.analyticsvidhya.com/blog/2018/09/reinforcement-multi-armed-bandit-scratch-python/
Thompson Sampling
- Long tutorial on Thompson Sampling with more background and theory. Nice charts as well: https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf
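To make the Thompson Sampling idea concrete, here is a minimal sketch (my own illustration, not from the course materials) for a Bernoulli bandit: each arm's unknown success probability gets a Beta posterior, and the arm whose posterior sample is largest is played. The `true_probs` values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
true_probs = [0.3, 0.5, 0.7]        # hypothetical unknown arm success rates
alpha = np.ones(len(true_probs))    # Beta posterior: successes + 1
beta = np.ones(len(true_probs))     # Beta posterior: failures + 1

for t in range(1000):
    # Sample one plausible success rate per arm from its Beta posterior
    samples = rng.beta(alpha, beta)
    arm = int(np.argmax(samples))              # play the arm that looks best this round
    reward = rng.random() < true_probs[arm]    # Bernoulli reward from the true arm
    # Update the posterior of the played arm
    alpha[arm] += reward
    beta[arm] += 1 - reward

print("Posterior means:", alpha / (alpha + beta))
```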
Markov Decision Processes
Textbook Sections
- Markov Decision Processes [SB 3.0-3.4]
- Solving MDPs Exactly [SB 3.5, 3.6, 3.7]
Playlist:
Individual Videos:
- Markov Decision Processes 3.0-3.1: https://youtu.be/pGW1wP4jJas
- Rewards and Returns 3.3-3.4: https://youtu.be/K7ymZkEd0ZA
- Value Functions 3.5 - 3.6 : https://youtu.be/lNBXDgAthmQ
Dynamic Programming
Former title: The Reinforcement Learning Problem
Textbook Sections:[SB 4.1-4.4]
Videos:
- Dynamic Programming 1: https://youtu.be/nhyCQK4v4Cw
- Dynamic Programming 2 : Policy and Value Iteration: https://youtu.be/NHN02JnGmdQ
- Dynamic Programming 3 : Generalized Policy Iteration and Asynchronous Value Iteration https://youtu.be/7gfRBYpzhxU
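As a rough sketch of what Value Iteration looks like in code (my own minimal example, not course code), assume the MDP is given as arrays `P[s, a, s']` of transition probabilities and `R[s, a]` of expected rewards:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """P: (S, A, S) transition probabilities, R: (S, A) expected rewards."""
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        # Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = R + gamma * P @ V          # shape (S, A)
        V_new = Q.max(axis=1)          # Bellman optimality backup
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    policy = Q.argmax(axis=1)          # greedy policy w.r.t. the converged values
    return V, policy
```

Policy Iteration alternates full policy evaluation and greedy improvement instead of taking the max inside every backup.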
Temporal Difference Learning
Textbook Sections: Selections from [SB chap 5], [SB 6.0 - 6.5]
- Quick intro to Monte-Carlo methods
- Temporal Difference Updating
- SARSA
- Q-Learning
- Expected SARSA
- Double Q-Learning
Videos
Parts:
- Just the MC Lecture part - https://youtu.be/b1C_2x6IUUw
- Temporal Difference Learning 1 - Introduction https://youtu.be/pJyz6OZiIBo
- Temporal Difference Learning 2 - Comparison to Monte-Carlo Method on Random Walk: https://youtu.be/NVtoj4XRRZw
Videos
- Week 5 Youtube Playlist
- Temporal Difference Learning 3 - Sarsa and Q-Learning Algorithms: https://youtu.be/nEDblNhoL2E
- Temporal Difference Learning 4 - Expected Sarsa and Double Q-Learning: https://youtu.be/uGFb0mtJW00
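To make the update rules concrete, here is a minimal tabular sketch (my own illustration) of Q-Learning, assuming a hypothetical `env` whose `reset()` returns a state index and whose `step(a)` returns `(next_state, reward, done)`. The SARSA update would bootstrap from the next action actually taken instead of the greedy max, as noted in the comment.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1, rng=np.random.default_rng(0)):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy behaviour policy
            a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
            s2, r, done = env.step(a)
            # Q-Learning (off-policy): bootstrap from the greedy next action
            target = r + gamma * (0.0 if done else Q[s2].max())
            # SARSA (on-policy) would instead use Q[s2, a2] for the next action it actually takes
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```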
N-Step TD and Eligibility Traces
Textbook Sections: [SB 12.1, 12.2]
Eligibility traces, in a tabular setting, provide a significant benefit in training time when added on top of the basic Temporal Difference method.
In Deep RL it is very common to use experience replay to reduce overfitting and bias towards recent experiences. However, experience replay makes it very hard to leverage eligibility traces, which require a contiguous sequence of actions in order to distribute reward backwards.
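For intuition, here is a minimal sketch (my own, not from the text) of how an accumulating eligibility trace spreads each one-step TD error backwards over recently visited states in tabular TD(λ) prediction; `V` is assumed to be a NumPy array of state values.

```python
import numpy as np

def td_lambda_episode(V, episode, alpha=0.1, gamma=0.99, lam=0.9):
    """One episode of tabular TD(lambda) prediction with accumulating traces.
    `episode` is a list of (state, reward, next_state, done) transitions."""
    e = np.zeros_like(V)                      # eligibility trace per state
    for s, r, s2, done in episode:
        delta = r + gamma * (0.0 if done else V[s2]) - V[s]   # one-step TD error
        e[s] += 1.0                           # bump the trace of the visited state
        V += alpha * delta * e                # all recently visited states share the error
        e *= gamma * lam                      # traces decay every step
    return V
```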
Videos:
- ET1 - One Step vs Direct Value Updates
- ET2 - N Step TD Forward View
- ET3 - N Step TD Backward View
- ET4 - Eligibility Traces On Policy
- ET5 - Eligibility Traces Off Policy
- youtube playlist of entire topic ET1-5: https://youtube.com/playlist?list=PLrV5TcaW6bIVtMNt_dZMdMQ9JdtzV5VWS
Other Resources:
Part 1 Review
Go over any questions or open topics from first 6 weeks.
State Representation & Value Function Approximation
VFA Concept
A Value Function Approximation (VFA) is a necessary technique whenever the state or action spaces become too large to represent the value function explicitly as a table. In practice, almost any real problem requires a VFA.
Benefits of VFA
- Reduce memory need to store the functions (transition, reward, value etc)
- Reduce computation to look up values
- Reduce experience needed to find the optimal value or policy (sample efficiency)
- For continuous state spaces, a coarse coding or tile coding can be effective
Types of Function Approximators
- Linear function approximations (linear combination of features)
- Neural Networks
- Decision Trees
- Nearest Neighbors
- Fourier/ wavelet bases
Finding an Optimal Value Function
When using a VFA, you can use a Stochastic Gradient Descent (SGD) method to search for the best weights for your value function according to experience.
This parametric form of the value function will then be used to obtain a greedy or epsilon-greedy policy at run-time.
This is why using a VFA + SGD is still different from a Direct Policy Search approach, where you optimize the parameters of the policy directly.
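As a minimal sketch of this idea (my own example, not course code): with a linear VFA $\hat{v}(s,w) = w^\top x(s)$, the semi-gradient TD(0) update adjusts the weights by SGD on each observed transition. Here `features(s)` is a hypothetical function returning the feature vector for state s.

```python
import numpy as np

def semi_gradient_td0(transitions, features, n_features, alpha=0.01, gamma=0.99):
    """Linear value function approximation trained by semi-gradient TD(0).
    `transitions` is an iterable of (s, r, s2, done); `features(s)` -> np.ndarray."""
    w = np.zeros(n_features)
    for s, r, s2, done in transitions:
        x = features(s)
        v_next = 0.0 if done else w @ features(s2)
        delta = r + gamma * v_next - w @ x     # TD error under the current weights
        w += alpha * delta * x                 # semi-gradient step: d v_hat / d w = x(s)
    return w
```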
Video:
- Lecture on Value Function Approximation approaches - https://youtu.be/7Dg6KiI_0eM
Other Resources:
- How to use a shallow, linear approximation for Atari - This post explains a paper showing how to achieve the same performance as the Deep RL DQN method for Atari using carefully constructed linear value function approximation.
MIDTERM Exam
Deep Reinforcement Learning
- Deep RL playlist (https://youtube.com/playlist?list=PLrV5TcaW6bIXkjBAExaFcv8NnnNU-qtzt)
- DQN - new youtube lecture on this topic posted July 26, 2021
- revised look at Value Function Approximations in light of DQN and Atari games
- Agent57 - 2020 update by DeepMind that learns to play all 57 games in the Atari benchmark (huge data usage) - https://www.deepmind.com/blog/agent57-outperforming-the-human-atari-benchmark
- “Human-level Atari 200x Faster” - Deepmind 2022 - https://arxiv.org/abs/2209.07550
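To ground the DQN discussion, here is a rough sketch (my own, assuming PyTorch, and assuming `q_net` and `target_net` are the online and frozen target Q-networks) of the one-step DQN loss computed on a replay-buffer batch:

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """batch: tensors (states, actions, rewards, next_states, dones) sampled from replay."""
    s, a, r, s2, done = batch
    # Q(s, a) for the actions actually taken
    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target uses the frozen target network, which stabilizes training
        target = r + gamma * (1.0 - done) * target_net(s2).max(dim=1).values
    return F.smooth_l1_loss(q_sa, target)
```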
Deep Learning Fundamentals
Deep Learning
- Review, or learn, a bit about Deep Learning
- See videos and content from DKMA Course (ECE 657A)
- This youtube playlist is a targeted “Deep Learning Crash Course” ( #dnn-crashcourse-for-rl ) with just the essentials you’ll need for Deep RL.
- That course also has more detailed videos on Deep Learning which won't be specifically needed for this course, but which you can refer to if interested.
Lect 9B - Deep Learning Introduction
- link - https://youtu.be/eopsPef7rLc
In this video we go over some of the fundamental concepts that led to neural networks (such as linear regression and logistic regression models), the basic structure and formulation of classic neural networks, and the history of their development.
Tags
#deeplearning #introduction #overview
Lect 11A - 1 - Deep Learning Fundamentals
- link - https://youtu.be/_Pe7eyLN6VY
This video goes through a ground level description of logistic neural units, classic neural networks, modern activation functions and the idea of a Neural Network as a Universal Approximator.
Tags
#deeplearning #introduction #dnn-crashcourse-for-rl
Lect 11A - 1.2 - Deep Learning - Gradient Descent
- link - https://youtu.be/eWzbLXWEJJ4
In this video we discuss the nuts and bolts of how training in Neural Networks (Deep or Shallow) works as a process of incremental optimization of weights via gradient descent. Topics discussed: Backpropagation algorithm, gradient descent, modern optimizer methods.
Tags
Lect 11B - 1 - Deep Learning - Fundamentals II
- link - https://youtu.be/R8PZ7UPKQNM
In this video we go over the fundamentals of Deep Learning from a different angle, using the approach from Goodfellow et al.'s Deep Learning textbook and their network graph notation for neural networks.
We describe the network diagram notation, and how to view neural networks in this way, focussing on the relationship between sets of weights and layers.
Other topics include: gradient descent, loss functions, cross-entropy, network output distribution types, softmax output for classification.
Tags
#deeplearning #introduction #dnn-crashcourse-for-rl
Lect 11B - 2 - Deep Learning - Fundamentals III
- link - https://youtu.be/c6g0dfMWQ6k
This video continues with the approach from Goodfellow et al.'s Deep Learning textbook and goes into detail about computational methods, efficiency, and defining the measure being used for optimization.
Topics covered include: the relationship of network depth to generalization power, computational benefits of convolutional network structures, revisiting the meaning of backpropagation, and methods for defining loss functions.
Tags
Lect 11B - 3 - Deep Learning - Regularization
- link - https://youtu.be/qkqkY09splc
In this lecture I talk about some of the problems that can arise when training neural networks and how they can be mitigated. Topics include : overfitting, model complexity, vanishing gradients, catastrophic forgetting and interpretability.
Tags
#deeplearning #detail #dnn-crashcourse-for-rl
Lect 11B - 4 - Deep Learning - Data Augmentation and Vanishing Gradients
- link - https://youtu.be/k4DdJ590teM
In this video we give an overview of several approaches for making DNNs more usable when data is limited with respect to the size of the network. Topics include data augmentation, residual network links, vanishing gradients.
Tags
#deeplearning #detail #overview
Direct Policy Search
[SB 13.1, 13.2, 13.5]
- Policy Gradients
- Actor-Critic
Video:
- Lecture on Policy Gradient methods - https://youtu.be/SqulTcLHRnY
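As a concrete sketch of the basic policy gradient idea (my own minimal example, assuming PyTorch and a `policy` network that outputs action logits), REINFORCE maximizes $E[G_t \log \pi(a_t|s_t)]$ by minimizing the negative of that quantity:

```python
import torch

def reinforce_loss(policy, states, actions, returns):
    """states: (T, obs_dim), actions: (T,), returns: (T,) discounted returns G_t."""
    logits = policy(states)                                   # (T, n_actions)
    log_probs = torch.log_softmax(logits, dim=1)
    logp_taken = log_probs.gather(1, actions.long().unsqueeze(1)).squeeze(1)
    # Policy gradient: ascend E[G_t * log pi(a_t|s_t)], so minimize its negative
    return -(returns * logp_taken).mean()
```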
Policy Gradient Algorithms
Some of the posts used for the lecture on July 26.
- A good post with all the fundamental math for policy gradients: https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#a3c
- Also a good intro post about Policy Gradients vs DQN by great ML blogger Andrej Karpathy (this is the one I showed in class with the Pong example): http://karpathy.github.io/2016/05/31/rl/
- The OpenAI page on the PPO algorithm used on their simulator domains of humanoid robots: https://openai.com/blog/openai-baselines-ppo/
- Good description of the Actor-Critic approach using the Sonic the Hedgehog game as an example: https://www.freecodecamp.org/news/an-intro-to-advantage-actor-critic-methods-lets-play-sonic-the-hedgehog-86d6240171d/
- Blog post about how the original AlphaGo solution worked using Policy Gradient RL and Monte-Carlo Tree Search: https://medium.com/@jonathan_hui/alphago-how-it-works-technically-26ddcc085319
Actor-Critic Algorithm
A very clear blog post describing Actor-Critic algorithms as a way to improve Policy Gradients:
https://www.freecodecamp.org/news/an-intro-to-advantage-actor-critic-methods-lets-play-sonic-the-hedgehog-86d6240171d/
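For a concrete picture of the actor-critic idea described in that post, here is a minimal sketch (my own, assuming PyTorch, a `policy` network producing action logits, and a `value_net` critic) of one-step advantage actor-critic losses:

```python
import torch

def actor_critic_losses(policy, value_net, s, a, r, s2, done, gamma=0.99):
    """One-step advantage actor-critic losses for a batch of transitions."""
    v_s = value_net(s).squeeze(-1)
    with torch.no_grad():
        td_target = r + gamma * (1.0 - done) * value_net(s2).squeeze(-1)
    advantage = td_target - v_s                           # how much better than expected?
    logp = torch.log_softmax(policy(s), dim=1)
    logp_a = logp.gather(1, a.long().unsqueeze(1)).squeeze(1)
    actor_loss = -(advantage.detach() * logp_a).mean()    # push up better-than-expected actions
    critic_loss = advantage.pow(2).mean()                 # regress V(s) toward the TD target
    return actor_loss, critic_loss
```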
Cutting Edge Algorithms
Going beyond what we covered in class, here are some exciting trends and recent advances in RL research from the past few years to find out more about.
PG methods are a fast-changing area of RL research. This post covers a number of the successful algorithms in this area from a few years ago:
https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#actor-critic
A3C/A2C Resources
- Blog from OpenAI introducing their implementation of A3C and an analysis of how a simpler, non-parallelized version they call A2C is just as good:
- The original A3C paper from DeepMind:
- Mnih, 2016 : https://arxiv.org/pdf/1602.01783.pdf
- Good summary of these algorithms with cleaned up pseudocode and links:
- A2C - Review of policy gradients and how A2C implements them using Deep Learning - (https://youtu.be/WPs8KsWM8sg)
Evaluating RL Algorithms and Double DQN
- discussion of evaluation metrics for RL algorithms
- training hyper-parameters vs. algorithm parameters
- Double DQN : bringing back the Double Q-Learning idea and giving it new life to address the optimism (overestimation) bias in Q-Learning
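The key change in Double DQN is only in how the bootstrap target is built: the online network selects the next action, while the target network evaluates it. A hedged sketch (my own, assuming PyTorch and the same `q_net`/`target_net` setup as in the DQN sketch above):

```python
import torch

def double_dqn_target(q_net, target_net, r, s2, done, gamma=0.99):
    with torch.no_grad():
        # Online network picks the greedy next action...
        next_actions = q_net(s2).argmax(dim=1, keepdim=True)
        # ...but the target network evaluates it, reducing the max-operator optimism bias
        next_q = target_net(s2).gather(1, next_actions).squeeze(1)
        return r + gamma * (1.0 - done) * next_q
```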
Note: the content listed in LEARN for the S22 offering is being updated more frequently and consistently than this list.
Advanced Policy Gradient Methods using Trust Regions
- Trust Region Methods
- TRPO
- PPO
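For reference, the heart of PPO is the clipped surrogate objective. Here is a minimal sketch (my own, assuming PyTorch), where `logp_new`/`logp_old` are log-probabilities of the taken actions under the current and behaviour policies and `adv` are advantage estimates:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, adv, clip_eps=0.2):
    ratio = torch.exp(logp_new - logp_old)                       # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    # Taking the pessimistic (min) of the two keeps the update inside the trust region
    return -torch.min(unclipped, clipped).mean()
```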
DPG, DDPG and SAC
Hypothes.is annotation link for the original DDPG paper - Lillicrap et al., ICLR, 2016:
https://hyp.is/go?url=https%3A%2F%2Farxiv.org%2Fpdf%2F1509.02971.pdf&group=DM67BYBG
Looking Ahead with Tree Search - MCTS and AlphaGo
- Monte-Carlo Tree Search (MCTS)
- How AlphaGo works (combining A2C and MCTS)
RL Next Steps
- An overview of next steps in learning more about RL research and applications
- Keep Reading: Conferences
- Going Beyond: MARL, Hierarchical RL, Supervised and Curriculum Learning
- Find out about Big New Ideas: LeCun, DeepMind, OpenAI, Friston
- Get Involved: Competitions and Open Source
- You can find the slides here: RL Next Steps
Review and End of Classes
Week 13
See the RL Next Steps tree for what was discussed in class July 22, 2022.
Final Exam
See LEARN for more information.
Prof. Crowley’s E7 “Elevator Pitch”
E7 Elevator Pitch
Defining the MDP
States
- Elevators : $e_i \in E$, $i \in \{1,\dots,7\}$
- Floors : $f \in \{1,\dots,8\}$
- Location : $L(e_i) : E \rightarrow f$ - which floor is the elevator on?
- Outside Buttons : $b^f_{dir} \in \{0,1\}$, $dir \in \{up, down\}$ - the call button on floor $f$ for direction $dir$
- Movement : $M(e_i) : E \rightarrow \{up, stopped, down\}$
- Doors : $G(e_i,f) : E \times f \rightarrow \{closed, closing, opening, open\}$
- Next Floor : $NL(e_i) : E \rightarrow f \cup \{stopped\}$ - the next floor the elevator will arrive at; if the elevator is not currently moving, this returns "stopped".
Actions
In general: move the elevators, open/close the doors in order to maximize your objective function
At every moment the system can take any of the following actions; we can assume they only happen one at a time:
- Do nothing
- Open a door / Close a door : set $G(e_i,f)$
- Move an elevator up/down from its current floor : set $M(e_i)$
- Stop an elevator at the floor it is currently moving towards, using $NL(e_i)$
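One hypothetical way to encode this state and action space in code (my own sketch, just to make the definitions above concrete; names such as `ElevatorState` are made up):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, Optional, Tuple

N_ELEVATORS, N_FLOORS = 7, 8

class Movement(Enum):
    UP = "up"
    STOPPED = "stopped"
    DOWN = "down"

class Door(Enum):
    CLOSED = "closed"
    CLOSING = "closing"
    OPENING = "opening"
    OPEN = "open"

@dataclass
class ElevatorState:
    location: Tuple[int, ...]                   # L(e_i): floor of each elevator
    movement: Tuple[Movement, ...]              # M(e_i)
    doors: Tuple[Door, ...]                     # G(e_i, f) at each elevator's current floor
    next_floor: Tuple[Optional[int], ...]       # NL(e_i); None when the elevator is stopped
    call_buttons: Dict[Tuple[int, str], bool]   # b^f_dir: (floor, "up"/"down") -> pressed?

# Actions could then be encoded as one of:
#   "noop", ("door", i, "open"/"close"), ("move", i, "up"/"down"), ("stop", i)
```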
Dynamics
- Define dynamics
(huh? no it’s not short…it’s about elevators)
Questions
Does the system need to remember that it just closed a door?
- Should we define actions to be “close door and move to floor f”?
How would Exploration/Exploitation Work in This Domain?
- how long are you willing to annoy users to get the information you need?
- can we build a simulator for this system?
Primary References for Course
[SuttonBarto2018] - Reinforcement Learning: An Introduction. Book, free pdf of draft available.
http://incompleteideas.net/book/the-book-2nd.html
Practical Resources
- The OpenAI page for their standard set of baseline implementations of the major Deep RL algorithms: https://github.com/openai/baselines/tree/master/baselines
- This is a very good page with all the fundamental math for many policy-gradient-based Deep RL algorithms. References to the original papers, mathematical explanations and pseudocode included: https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#a3c
Deep Q Network vs Policy Gradients - An Experiment on VizDoom with Keras
A nice blog post comparing DQN and Policy Gradient algorithms such as A2C.
https://flyyufelix.github.io/2017/10/12/dqn-vs-pg.html
Additional Resources
Other Useful Textbooks
[Dimitrakakis2019] - Decision Making Under Uncertainty and Reinforcement Learning
http://www.cse.chalmers.se/~chrdimi/downloads/book.pdf
[Ghavamzadeh2016] - Bayesian Reinforcement Learning: A Survey. Ghavamzadeh et al. 2016.
https://arxiv.org/abs/1609.04436
- More probability notes online: https://compthinking.github.io/RLCourseNotes/
Open AI Reference Website
This website is a great resource. It lays out concepts from start to finish. Once you get through the first half of our course, many of the concepts on this site will be familiar to you.
Key Papers in Deep RL List
https://spinningup.openai.com/en/latest/spinningup/keypapers.html
Fundamental RL Concepts Overview
The fundamentals of RL are briefly covered here. We will go into all this and more in detail in our course.
https://spinningup.openai.com/en/latest/spinningup/rl_intro.html
Family Tree of Algorithms
(as of 2022)
Here is a list of algorithms that were at the cutting edge of RL as of a year or so ago, so it's a good place to find out more. But in a fast-growing field, it may be a bit out of date on the very latest work.
https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html
Reinforcement Learning Tutorial with Demo on GitHub
This is a thorough collection of slides from a few different texts and courses, laid out with the essentials from basic decision making to Deep RL. There are also code examples for some of their own simple domains.
https://github.com/omerbsezer/Reinforcement_learning_tutorial_with_demo#ExperienceReplay
Online/Other Courses
- Coursera/University of Alberta (Martha White): https://www.coursera.org/specializations/reinforcement-learning#courses
- A great course with notes online that uses Minecraft for assignments and projects to teach RL: https://canvas.eee.uci.edu/courses/34142
- DeepMind RL Fundamentals Lecture Series 2021 - a good resource from lecturers at UCL in collaboration with Google's DeepMind
Videos to Watch on RL (Current Research)
Conferences 2020
- Multiple talks at Canadian AI 2020 conference.
- Csaba Szepesvari (U. Alberta)
The AAMAS 2021 conference finished recently and is focused on decision making and planning, with lots of RL papers.
- See their Twitter Feed for links to talks
ICLR 2020 conference (https://iclr.cc/virtual_2020/index.html)
Old Topics Archive
Other resources connected with previous versions of the course. I'm happy to talk about any of these if people are interested.
Bayes Nets (dropped)
Tools
SamIam Bayesian Network GUI Tool
- Java GUI tool for playing with BNs (it's old but it's good)
http://reasoning.cs.ucla.edu/samiam/index.php?h=emodels
Other Tools
- Bayesian Belief Networks Python Package : allows creation of Bayesian Belief Networks and other Graphical Models with pure Python functions, using exact inference where it is tractable. https://github.com/eBay/bayesian-belief-networks
- BayesPy - Python library for conjugate exponential family BNs and variational inference only: http://www.bayespy.org/intro.html
- OpenMarkov: http://www.openmarkov.org/
- OpenGM (C++ library): http://hciweb2.iwr.uni-heidelberg.de/opengm/
References
Some videos and resources on Bayes Nets, d-separation, the Bayes Ball Algorithm and more:
https://metacademy.org/graphs/concepts/bayes_ball
Conjugate Priors (dropped)
https://en.wikipedia.org/wiki/Conjugate_prior#Table_of_conjugate_distributions
Primary References for Probabilistic Reasoning (mostly dropped)
[Ermon2019] - The first half of the notes are based on Stanford CS 228 (https://ermongroup.github.io/cs228-notes/), which goes into even more detail on PGMs than we will.
[Cam Davidson 2018] - Bayesian Methods for Hackers - Probabilistic Programming textbook as set of python notebooks.
https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/#contents
[Koller, Friedman, 2009] Probabilistic Graphical Models : Principles and Techniques
The extensive theoretical book on PGMs.
https://mitpress.mit.edu/books/probabilistic-graphical-models