
Course References, Links and Random Notes

Title: Probabilistic Reasoning and Reinforcement Learning
Info: ECE 457C - Reinforcement Learning
Instructor: Prof. Mark Crowley, ECE Department, UWaterloo

NOTE: Ignore the weekly dates, they are from a previous year

Website: markcrowley.ca/rlcourse

Links to this Gingko Tree:

Course Description :

Introduction to Reinforcement Learning (RL) theory and algorithms for learning decision-making policies in situations with uncertainty and limited information. Topics include Markov decision processes, classic exact/approximate RL algorithms such as value/policy iteration, Q-learning, State-action-reward-state-action (SARSA), Temporal Difference (TD) methods, policy gradients, actor-critic, and Deep RL such as Deep Q-Learning (DQN), Asynchronous Advantage Actor Critic (A3C), and Deep Deterministic Policy Gradient (DDPG). [Offered: S, first offered Spring 2019]

Style Sheet

Course Resources

Topics

Primary Textbook: Reinforcement Learning: An Introduction, Richard S. Sutton and Andrew G. Barto, 2018 [SB]

Some topics are not covered in the SB textbook, or are covered there in much more detail than in the lectures. We will continue to update this list with references as the term progresses.

  1. Motivation & Context [SB 1.1, 1.2, 17.6]
  2. Decision Making Under Uncertainty [SB 2.1-2.3, 2.7, 3.1-3.3]
  3. Solving MDPs [SB 3.5, 3.6, 4.1-4.4]
  4. The RL Problem [SB 3.7, 6.4, 6.5]
  5. TD Learning [SB 12.1, 12.2]
  6. State Representation & Value Function Approximation
  7. Basics of Neural Networks
  8. Deep RL
  9. Policy Search [SB 13.1, 13.2, 13.5]
  10. AlphaGo and MCTS
  11. Multi-Agent RL (MARL)
  12. Hierarchical RL
  13. Reinforcement Learning with Human Feedback
  14. Decision Transformers
  15. Other Possible Topics:
    1. Free Energy
    2. Distributional RL
    3. Supervised Learning for RL and Curriculum Learning
    4. POMDPs (skipped in S22)

Begin Part 1


Course Introduction

Basics of Probability

ECE 657A Youtube Videos

Introductory topics on this from my graduate course ECE 657A - Data and Knowledge Modeling and Analysis are available on youtube and mostly applicable to this course as well.

Probability and Statistics Review (youtube playlist)

Containing Videos on:

  • Conditional Prob and Bayes Theorem
  • Comparing Distributions and Random Variables
  • Hypothesis Testing

ECE 108 YouTube Videos

For a very fundamental view of probability from another of Prof. Crowley's courses, you can view the lectures and tutorials for ECE 108.

ECE 108 Youtube (look at “future lectures” and “future tutorials” for S20): https://www.youtube.com/channel/UCHqrRl12d0WtIyS-sECwkRQ/playlists

The last few lectures and tutorials are on probability definitions as seen from the perspective of discrete math and set theory.

Likelihood, Loss and Risk

A good article summarizing how likelihood, loss functions, risk, KL divergence, MLE, and MAP are all connected.
https://quantivity.wordpress.com/2011/05/23/why-minimize-negative-log-likelihood/

Probability Intro Markdown Notes

From the course website for a previous year. Some of this we won't need as much, but it is all useful to know for Machine Learning methods in general.

https://compthinking.github.io/RLCourseNotes/

  • Basic probability definitions
  • conditional probability
  • Expectation
  • Inference in Graphical Models
  • Variational Inference

Basic Decision Making Models - Multiarmed Bandits

Textbook Sections: [SB 1.1, 1.2, 17.6]

Videos

- Part 1 - Live Lecture May 17, 2021 on Virtual Classroom - View Live Here

Multiarmed Bandit : Solving it via Reinforcement Learning in Python

Thompson Sampling
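
As a concrete illustration of the idea (a minimal sketch, not code from the linked tutorial), here is Thompson sampling for a Bernoulli multi-armed bandit; the arm payoff probabilities are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
true_probs = [0.2, 0.5, 0.7]        # hypothetical arm payoffs, unknown to the agent
alpha = np.ones(len(true_probs))    # Beta posterior parameters (successes + 1)
beta = np.ones(len(true_probs))     # Beta posterior parameters (failures + 1)

for t in range(1000):
    # Sample a payoff estimate for each arm from its Beta posterior, pull the best one
    samples = rng.beta(alpha, beta)
    arm = int(np.argmax(samples))
    reward = rng.random() < true_probs[arm]   # Bernoulli reward
    # Conjugate update of the pulled arm's posterior
    alpha[arm] += reward
    beta[arm] += 1 - reward

print("Posterior mean payoff per arm:", alpha / (alpha + beta))
```

Because actions are sampled from the posterior, exploration and exploitation are balanced automatically, with no explicit epsilon schedule.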

Markov Decision Processes

Textbook Sections

  • Markov Decision Processes
    [SB 3.0-3.4]
  • Solving MDPs Exactly
    [SB 3.5, 3.6, 3.7]
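
For reference, the Bellman optimality equations at the heart of [SB 3.5, 3.6], in the textbook's $p(s', r \mid s, a)$ notation, are:

$$
v_*(s) = \max_a \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma v_*(s')\right]
$$

$$
q_*(s, a) = \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma \max_{a'} q_*(s', a')\right]
$$

Solving an MDP exactly means finding a fixed point of these equations; the dynamic programming methods in the next section do this iteratively.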

Playlist:

Individual Videos:

Dynamic Programming

Former title: The Reinforcement Learning Problem
Textbook Sections: [SB 4.1-4.4]
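
To make the exact DP methods concrete, here is a minimal value iteration sketch; the transition representation `P[s][a]` as a list of `(prob, next_state, reward)` tuples is an assumption for illustration, not any particular library's API.

```python
import numpy as np

def value_iteration(P, n_states, n_actions, gamma=0.99, tol=1e-8):
    """Minimal value iteration for a finite MDP.

    P[s][a] is assumed to be a list of (prob, next_state, reward) tuples.
    Returns the optimal state values and a greedy policy.
    """
    V = np.zeros(n_states)
    while True:
        Q = np.zeros((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                # One-step lookahead using the Bellman optimality backup
                Q[s, a] = sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```

Policy iteration follows the same pattern but alternates a full policy evaluation sweep with a greedy policy improvement step.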

Videos:

Temporal Difference Learning

Textbook Sections: Selections from [SB chap 5], [SB 6.0 - 6.5]

  • Quick intro to Monte-Carlo methods
  • Temporal Difference Updating
  • SARSA
  • Q-Learning
  • Expected SARSA
  • Double Q-Learning
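
As a concrete reference for the tabular methods listed above, here is a minimal Q-learning loop; SARSA differs only in using the action actually taken at the next state in the target. The environment is assumed to follow the Gymnasium `reset()`/`step()` API with discrete states and actions, purely for illustration.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, _ = env.reset()
        done = False
        while not done:
            # epsilon-greedy behaviour policy
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s2, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            # Off-policy TD target: bootstrap from the best next-state action
            target = r + gamma * (0.0 if terminated else Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```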

Videos

Parts:

Videos

N-Step TD and Eligibility Traces

Textbook Sections: [SB 12.1, 12.2]

Eligibility traces in a tabular setting can lead to a significant improvement in training time when added on top of the basic Temporal Difference method.
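
A minimal sketch of tabular TD($\lambda$) prediction with accumulating traces shows where the benefit comes from: each TD error updates every recently visited state at once, with credit decayed by $\gamma\lambda$. The episode/transition format used here is an illustrative assumption.

```python
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=0.99, lam=0.9):
    """Tabular TD(lambda) prediction with accumulating eligibility traces.

    `episodes` is assumed to be a list of episodes, each a list of
    (state, reward, next_state, done) transitions generated by the policy
    being evaluated.
    """
    V = np.zeros(n_states)
    for episode in episodes:
        z = np.zeros(n_states)              # eligibility trace per state
        for s, r, s2, done in episode:
            delta = r + (0.0 if done else gamma * V[s2]) - V[s]   # TD error
            z[s] += 1.0                     # accumulating trace for the visited state
            V += alpha * delta * z          # credit all recently visited states at once
            z *= gamma * lam                # decay the traces
    return V
```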

Videos (from previous years):

Eligibility Traces in Deep RL
In Deep RL it is very common to use experience replay to reduce overfitting and bias towards recent experiences. However, experience replay makes it very hard to leverage eligibility traces, which require a sequence of actions to distribute reward backwards.

There is a fair bit of discussion about Eligibility Traces and Deep RL. See some of the following papers and notes.

Expected Eligibility Traces - 2021

I put some notes up on Hypothesis about this one; it seems quite interesting. It's more recent (2021), coming after many advances on the initial Deep RL algorithms (unlike the “Investigating Recurrence…” paper), and it makes a fairly straightforward argument about Eligibility Traces that is similar to Expected SARSA in its implementation.

This could be a good algorithm to consider implementing for #asg4.

Discussion about Incompatibility of Eligibility Traces with Experience Replay

(https://stats.stackexchange.com/questions/341027/eligibility-traces-vs-experience-replay/341038)

Efficient Eligibility Traces for Deep Reinforcement Learning

Brett Daley, Christopher Amato

(https://arxiv.org/abs/1810.09967)

[Investigating Recurrence and Eligibility Traces in Deep Q-Networks -Jean Harb, Doina Precup] - 2016

(https://arxiv.org/abs/1704.05495)

MIDTERM Exam

No midterm in Spring 2023 course.


Begin Part 2


State Representation & Value Function Approximation

VFA Concept

A Value Function Approximation (VFA) is a necessary technique whenever the state or action spaces become too large to represent the value function explicitly as a table. In practice, any realistic problem needs a VFA.

Benefits of VFA

  • Reduce memory need to store the functions (transition, reward, value etc)
  • Reduce computation to look up values
  • Reduce experience needed to find the optimal value or policy (sample efficiency)
  • For continuous state spaces, a coarse coding or tile coding can be effective

Types of Function Approximators

  • Linear function approximations (linear combination of features)
  • Neural Networks
  • Decision Trees
  • Nearest Neighbors
  • Fourier/ wavelet bases

Finding an Optimal Value Function

When using a VFA, you can use a Stochastic Gradient Descent (SGD) method to search for the best weights for your value function according to experience.
This parametric form of the value function is then used to obtain a greedy or epsilon-greedy policy at run-time.

This is why using a VFA + SGD is still different from a Direct Policy Search approach where you optimize the parameters of the policy directly.
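
A minimal sketch of the linear case, using semi-gradient TD(0) for prediction; the `features(s)` mapping (e.g. a tile coding) and the transition stream are illustrative assumptions.

```python
import numpy as np

def semi_gradient_td0(transitions, features, n_features, alpha=0.01, gamma=0.99):
    """Linear VFA trained by semi-gradient TD(0).

    `transitions` is assumed to be an iterable of (state, reward, next_state, done)
    tuples and `features(s)` to return a length-n_features numpy vector.
    """
    w = np.zeros(n_features)
    for s, r, s2, done in transitions:
        x, x2 = features(s), features(s2)
        v, v2 = w @ x, (0.0 if done else w @ x2)
        delta = r + gamma * v2 - v      # TD error under the current weights
        w += alpha * delta * x          # semi-gradient step: only grad of v(s; w) is used
    return w
```

It is called a semi-gradient method because the TD target $r + \gamma \hat{v}(s'; w)$ is treated as a constant; only the gradient of $\hat{v}(s; w)$ with respect to $w$ is used.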

Video:

  • Lecture from 2020 by Sriram Ganapathi Subramanian, a former TA for the course, on classic Value Function Approximation approaches - https://youtu.be/7Dg6KiI_0eM

Other Resources:

Deep Learning Fundamentals

Deep Learning

  • Review, or learn, a bit about Deep Learning
    • See videos and content from DKMA Course (ECE 657A)
    • This youtube playlist is a targeted “Deep Learning Crash Course” ( #dnn-crashcourse-for-rl ) with just the essentials you’ll need for Deep RL.
    • That course also has more detailed videos on Deep Learning which won’t be specifically useful for ECE 493, but which you can refer to if interested.

Lect 9B - Deep Learning Introduction

In this video we go over some of the fundamental concepts that led to neural networks (such as linear regression and logistic regression models), the basic structure and formulation of classic neural networks, and the history of their development.

Tags

#deeplearning #introduction #overview

Lect 11A - 1 - Deep Learning Fundamentals

This video goes through a ground level description of logistic neural units, classic neural networks, modern activation functions and the idea of a Neural Network as a Universal Approximator.

Tags

#deeplearning #introduction #dnn-crashcourse-for-rl

Lect 11A - 1.2 - Deep Learning - Gradient Descent

In this video we discuss the nuts and bolts of how training in Neural Networks (Deep or Shallow) works as a process of incremental optimization of weights via gradient descent. Topics discussed: Backpropagation algorithm, gradient descent, modern optimizer methods.

Tags

#deeplearning #detail

Lect11B - 1 - DeepLearning - Fundamentals II

In this video we go over the fundamentals of Deep Learning from a different angle, using the approach from Goodfellow et al.'s Deep Learning textbook and their network graph notation for neural networks.

We describe the network diagram notation, and how to view neural networks in this way, focussing on the relationship between sets of weights and layers.

Other topics include: gradient descent, loss functions, cross-entropy, network output distribution types, softmax output for classification.

Tags

#deeplearning #introduction #dnn-crashcourse-for-rl

Lect 11B - 2 - Deep Learning - Fundamentals III

This video continues with the approach from Goodfellow et al.'s Deep Learning textbook and goes into detail about computational methods, efficiency, and defining the measure being used for optimization.

Topics covered include: relationship of network depth to generalization power, computation benefits of convolutional network structures, revisiting the meaning of backpropagation, methods for defining loss functions

Tags

#deeplearning #detail

Lect 11B - 3 - Deep Learning - Regularization

In this lecture I talk about some of the problems that can arise when training neural networks and how they can be mitigated. Topics include : overfitting, model complexity, vanishing gradients, catastrophic forgetting and interpretability.

Tags

#deeplearning #detail #dnn-crashcourse-for-rl

Lect 11B - 4 - Deep Learning - Data Augmentation and Vanishing Gradients

In this video we give an overview of several approaches for making DNNs more usable when data is limited with respect to the size of the network. Topics include data augmentation, residual network links, vanishing gradients.

Tags

#deeplearning #detail #overview

Deep Reinforcement Learning

Resources

Code and Reading Resources

These resources will be useful for the course in general but especially for assignments 3 and 4.

CODE : Stable Baselines and Gymnasium

StableBaselines3 is a project to maintain a standard repository of core RL algorithms, and even trained models/policies. It uses the API defined in Gymnasium for interacting with RL environments, but SB3 is about policies, value functions, optimization, neural networks, gradients, etc.; it isn't about the environments themselves.

Gymnasium is the successor of Open AI’s Gym project, which defines a standard set of environments for RL including an API for interacting with them.
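
A minimal sketch of how the two fit together (assuming `stable-baselines3` and `gymnasium` are installed; `CartPole-v1` is just an example environment):

```python
import gymnasium as gym
from stable_baselines3 import PPO

# SB3 supplies the agent: policy network, value function, optimizer, update rule...
env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

# ...while Gymnasium supplies the environment API the agent interacts with.
obs, info = env.reset()
for _ in range(200):
    action, _state = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```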

Reading Academic Papers

The only way to keep up with the changes in a fast-paced field like RL (or any area of Machine Learning these days) is to read the latest papers from relevant conferences or pre-prints (unpublished paper drafts) on Arxiv.

The OpenAI Spinning Up site has a list of Key Papers in Deep RL to get started, especially the first three sections 1.1, 1.2, and 1.3.

Policy Search

Textbook Sections: [SB 13.1, 13.2, 13.5]

  • Policy Gradients
  • REINFORCE
  • Actor-Critic

The basic idea of policy gradients is often explained with a simple algorithm that predates Deep RL. There seem to be two main versions of this story, although the result is the same.

#REINFORCE

“Vanilla” Policy Gradients

The OpenAI Spinning Up documentation has a description of Vanilla Policy Gradients (#VPG). This is almost the same as the #REINFORCE algorithm.

The Difference Between REINFORCE and VPG:
The difference is subtle, but is explained well in this stackexchange response

They do look very similar in their objective functions, but they are different. The way the gradient ascent is performed differs strongly, since in the REINFORCE method the gradient ascent is performed once for each action taken in each episode, and the direction of ascent is taken as

$$
G_t\frac{\nabla \pi (A_t |S_t, \theta)}{\pi (A_t |S_t, \theta)}
$$

so the update becomes

$$
\theta_{t+1} = \theta_t + \alpha G_t\frac{\nabla \pi (A_t |S_t, \theta)}{\pi (A_t |S_t, \theta)}
$$

but in the VPG algorithm the gradient ascent is performed once over multiple episodes, and the direction of ascent is taken as the average

$$
\frac{1}{|\mathcal{T}|}\sum_{\tau\in\mathcal{T}} \sum_{t=0}^T R(\tau) \frac{\nabla \pi (A_t |S_t, \theta)}{\pi (A_t |S_t, \theta)}
$$

and gradient ascent step is

$$
\theta_{t+1} = \theta_t + \alpha \frac{1}{|\mathcal{T}|}\sum_{\tau\in\mathcal{T}} \sum_{t=0}^T R(\tau) \frac{\nabla \pi (A_t |S_t, \theta)}{\pi (A_t |S_t, \theta)}
$$

which looks a lot like what you have stated as the REINFORCE algorithm.

I admit that some form of mathematical equivalence can be derived between them, since the expectation over the policy and the expectation over trajectories sampled from the policy look practically the same. But the approaches differ at least in the way the ascent is computed.

Save of StackExchange Post

link to post: https://ai.stackexchange.com/a/34344/73583
author: https://ai.stackexchange.com/users/52494/vl-knd


You can check the Open AI [Introduction to RL series][1], where they explain pretty neatly what Policy Optimization is and how to derive it. I think that usually when we talk about the REINFORCE algorithm, we are talking about the one described in [Sutton's book on Reinforcement Learning][2]. It is described as the policy optimization algorithm maximizing the value function $v_{\pi(\theta)}(s) = E[G_t|S_t = s]$ of the initial state of the agent. Here $G_t = \sum_{k=0}^\infty \gamma^k R_{t+k+1}$ is the $\gamma$-discounted return from a given state $s$ at time $t$. Or, shortly put:

$$
J(\theta) = v_{\pi(\theta)}(s_0) = E[G_t|S_t = s_0] \\
\nabla J(\theta) = E_\pi\left[G_t\frac{\nabla \pi (A_t |S_t, \theta)}{\pi (A_t |S_t, \theta)}\right]
$$

But in the Open AI RL series, the algorithm described as Vanilla Policy Gradient (if it is the one you are talking about) optimizes the finite-horizon undiscounted return $E_{\tau \sim \pi} [R(\tau)]$, where $\tau$ ranges over possible trajectories, i.e.

$$
J(\theta) = E_{\tau \sim \pi} [R(\tau)] \\
\nabla J(\theta) = E_{\tau \sim\pi}\left[\sum_{t=0}^T R(\tau) \frac{\nabla \pi (A_t |S_t, \theta)}{\pi (A_t |S_t, \theta)}\right]
$$
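
To make the episode-by-episode REINFORCE update above concrete, here is a minimal sketch with a tabular softmax policy (linear preferences over one-hot state features); the Gymnasium-style `env` and the discrete state/action assumption are for illustration only.

```python
import numpy as np

def reinforce(env, n_states, n_actions, episodes=1000, alpha=0.01, gamma=0.99):
    rng = np.random.default_rng(0)
    theta = np.zeros((n_states, n_actions))   # softmax preferences per state

    def policy(s):
        prefs = theta[s] - theta[s].max()
        probs = np.exp(prefs)
        return probs / probs.sum()

    for _ in range(episodes):
        # Generate one episode under the current policy
        states, actions, rewards = [], [], []
        s, _ = env.reset()
        done = False
        while not done:
            a = rng.choice(n_actions, p=policy(s))
            s2, r, terminated, truncated, _ = env.step(a)
            states.append(s); actions.append(a); rewards.append(r)
            s, done = s2, terminated or truncated

        # One gradient ascent step per time step of the episode, as in the quote above
        G = 0.0
        for t in reversed(range(len(states))):
            G = rewards[t] + gamma * G
            s_t, a_t = states[t], actions[t]
            grad_log = -policy(s_t)
            grad_log[a_t] += 1.0              # grad of log softmax w.r.t. theta[s_t]
            theta[s_t] += alpha * (gamma ** t) * G * grad_log
    return theta
```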

Resources

Lecture Video (from 2022):

  • Lecture on Policy Gradient methods -
    https://youtu.be/SqulTcLHRnY
  • new lecture also available on the Teams Stream playlist available in LEARN

Actor-Critic Algorithm

Very clear blog post on describing Actor-Critic Algorithms to improve Policy Gradients
https://www.freecodecamp.org/news/an-intro-to-advantage-actor-critic-methods-lets-play-sonic-the-hedgehog-86d6240171d/
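
The core one-step actor-critic update the post builds up to can be summarized as follows, where the TD error $\delta_t$ plays the role of the advantage estimate:

$$
\delta_t = R_{t+1} + \gamma \hat{v}(S_{t+1}; w) - \hat{v}(S_t; w)
$$

$$
w \leftarrow w + \alpha_w \, \delta_t \, \nabla_w \hat{v}(S_t; w), \qquad
\theta \leftarrow \theta + \alpha_\theta \, \delta_t \, \nabla_\theta \log \pi(A_t \mid S_t, \theta)
$$

The critic (weights $w$) learns the value function, and the actor (weights $\theta$) follows the policy gradient scaled by the critic's TD error.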

Tags:

#reinforcement-learning #policy-gradients #a2c #a3c
#457C

A3C/A2C Resources

Resources

Cutting Edge Algorithms

Here are some exciting trends and new advances in RL research from the past few years to find out more about.

PG methods are a fast changing area of RL research. This post has a number of the successful algorithms in this area from a few years ago:
https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#actor-critic

Which notes to study

NOTE: I know the topics and lectures from this point onward have become a bit scattered. There are many resources to share and it's not always clear which parts of them are essential. Also, the RL textbook has less up-to-date information on the latest algorithms beyond REINFORCE/Actor-Critic.

So, when in doubt about slides or websites to trust, stick to the high-level understanding available on the Spinning Up Documentation : https://spinningup.openai.com/en/latest/user/algorithms.html

Proximal Policy Optimization (#PPO)

  • The #PPO algorithm does better in most cases; it's a good default algorithm to start with
  • But even so, it's not that well understood why it works so well

Discussion

  • #PPO is based on #TRPO, which is hard to implement
    • #TRPO is often impractical, which is why PPO does it more efficiently with lots of approximations
    • PPO introduces a parameter, $\beta$, in equation (5) of the original paper that isn't that well understood (the relevant objective is sketched after this list)
      • Open AI has their own setting for it, but it’s not well understood
      • if you fix $\beta$, then you can’t change anything else and it’s very tricky
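
For reference, the two surrogate objectives from the PPO paper are roughly as follows, with probability ratio $r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}$: the adaptive KL-penalized form, where the $\beta$ mentioned above appears, and the clipped form that most implementations actually use.

$$
L^{KLPEN}(\theta) = \hat{E}_t\left[ r_t(\theta)\hat{A}_t - \beta \, \mathrm{KL}\big[\pi_{\theta_{old}}(\cdot \mid s_t) \,\|\, \pi_\theta(\cdot \mid s_t)\big] \right]
$$

$$
L^{CLIP}(\theta) = \hat{E}_t\left[ \min\big( r_t(\theta)\hat{A}_t, \ \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\,\hat{A}_t \big) \right]
$$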

Difficulties with #PPO

Evaluating RL Algorithms and Double DQN

  • discussion of evaluation metrics for RL algorithms
  • training hyper-parameters vs. algorithm parameters
  • Double DQN bringing back the Double-Q-Learning idea and giving it new life to solve optimism bias
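
For reference, the change Double DQN makes to the DQN target (to reduce the optimism bias mentioned above) is to let the online network $\theta$ select the action while the target network $\theta^-$ evaluates it:

$$
y_t^{DQN} = R_{t+1} + \gamma \max_a Q(S_{t+1}, a; \theta^-)
$$

$$
y_t^{DoubleDQN} = R_{t+1} + \gamma \, Q\big(S_{t+1}, \arg\max_a Q(S_{t+1}, a; \theta); \ \theta^-\big)
$$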

Papers discussed in Class

[updated July 14, 2023]

  • Deep Double Q-Learning
  • Deep Reinforcement Learning that Matters
  • Rainbow Paper
    • This famous paper gives a great review of the DQN algorithm a couple years after it changed everything in Deep RL. It compares six different extensions to DQN for Deep Reinforcement Learning, many of which have now become standard additions to DQN and other Deep RL algorithms. It also combines all of them together to produce the “rainbow” algorithm, which outperformed many other models for a while.

#The-Deadly-Triad

  • see section 11.3 of the textbook

DPG, DDGP and SAC and TD3

Hypothesis: Original DDPG Paper - Lillicrap, ICLR, 2016

This paper introduces the DDPG algorithm, which builds on the existing DPG algorithm from classic RL theory. The main idea is to define a deterministic (or nearly deterministic) policy for situations where the environment is very sensitive to suboptimal actions and one action setting usually dominates in each state. This showed good performance, but could not beat algorithms such as PPO until the ideas behind SAC were added. SAC adds an entropy bonus to the objective, which rewards the policy for retaining some stochasticity in each state and thereby encourages exploration. With this addition, the deterministic policy gradient approach performs well.

Public Link: https://hyp.is/go?url=https%3A%2F%2Farxiv.org%2Fpdf%2F1509.02971.pdf&group=__world__
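
For reference, the deterministic policy gradient that DDPG builds on is roughly

$$
\nabla_\theta J(\theta) \approx \mathbb{E}_{s \sim \rho^\mu}\left[ \nabla_a Q(s, a; w)\big|_{a=\mu_\theta(s)} \, \nabla_\theta \mu_\theta(s) \right]
$$

i.e. the critic's gradient with respect to the action is chained through the deterministic actor $\mu_\theta$, rather than averaging over a stochastic policy.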

Looking Ahead with Tree Search - MCTS and AlphaGo

  • Monte-Carlo Tree Search (MCTS)
  • How AlphaGo works (combining A2C and MCTS)
  • AlphaZero
  • Alpha(Everything?)

Resources

RL Next Steps

  • An overview of next steps in learning more about RL research and applications
  • You can find the Spring 2022 slides on this topic here: RL Next Steps or below this card in the tree

Epic Reinforcement Learning Fails

Reinforcement Learning is a great framework for training systems to perform actions in a way that makes fewer assumptions than most other optimization and planning methods. But it's not perfect, and it's not always the best solution to a particular problem. Here we'll include descriptions and examples of times when RL fails in a major way.

The Spinning Boat

  • A famous example of what can happen if you don't create an appropriate reward function. This relates to the current hot topic of AI #Alignment too: how do you get an AI to do what you want, or in our case, how do you specify rewards in such a way that when the agent converges to a policy, the policy will satisfy what you wanted?

Link: https://openai.com/research/faulty-reward-functions

Find out about Big New Ideas:

LeCun, DeepMind, OpenAI, Friston

Get Involved: Competitions and OpenSource

RL Next Steps

(ECE 457C - Mark Crowley - UWaterloo)

So…What Next?

We’ve covered the basics of classic (Tabular) and modern (Deep) Reinforcement Learning.

But it’s a fast changing field, where do you go next with RL?

  • Keep Reading: Conferences
  • Going Beyond: MARL, Hierarchical RL, Learning Process
  • Big New Ideas: LeCun, DeepMind, OpenAI, Friston
  • Get Involved: Competitions and OpenSource

General AI Conferences with RL

AI is more general than ML (Prof. Crowley’s opinion) and RL is a more AI-like pursuit than ML itself. So these conferences often have a broader set of tasks and results.

  • AAAI - largest, general Artificial Intelligence conference in North America, annual
  • IJCAI - largest, general Artificial Intelligence conference internationally, annual

Conference - RL

  • RLDM - Reinforcement Learning and Decision Making
    • This is a great, small conference held only once every two years. Lots of big ideas. Half the papers are from Neuroscience/Psychology and half are from Engineering/Computer Science.
    • So the focus is on understanding learning how to act in the world in general!
  • AAMAS - Autonomous Agents and Multiagent Systems (https://www.ifaamas.org/)

General ML Conference with RL

  • NeurIPS

Conference - ICML

Going Beyond

Curriculum Learning

MARL

Curiosity driven RL and Intrinsic motivation

Curiosity alone can often lead to good policies, but only when the reward and the curiosity signal from the learned dynamics are correlated.

Big New Ideas

Free Energy Principle

https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/?utm_source=pocket_mylist

living systems fight entropy by minimizing free energy, or surprise


Review and End of Classes

Week 13

See the RL Next Steps tree for what was discussed in class July 22, 2022.

Final Exam

See LEARN for more information.

Prof. Crowley’s E7 “Elevator Pitch”

E7 Elevator Pitch

Defining the MDP

States

  • Elevators : $e_i \in E$, $i \in \{1,\dots,7\}$
  • Floors : $f \in \{1,\dots,8\}$
  • Location : $L(e_i) : E \rightarrow f$ - which floor is the elevator on?
  • Outside Button: $b \in B^f_{i,dir} \in \{0,1\}$; $dir \in \{up, down\}$
  • Movement: $M(e_i): E \rightarrow \{up, stopped, down\}$
  • Doors: $G(e_i,f): E \times f \rightarrow \{closed, closing, opening, open\}$
  • Next Floor: $NL(e_i) : E \rightarrow f \cup \{stopped\}$ - the next floor the elevator will arrive at; if the elevator is not currently moving, this returns "stopped".
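
A minimal sketch of how this state definition could be written in code; the class and field names are made up for illustration, and the outside buttons are simplified to one up/down pair per floor.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

N_ELEVATORS, N_FLOORS = 7, 8

class Movement(Enum):
    UP = auto()
    STOPPED = auto()
    DOWN = auto()

class DoorState(Enum):
    CLOSED = auto()
    CLOSING = auto()
    OPENING = auto()
    OPEN = auto()

@dataclass
class ElevatorState:
    # L(e_i): which floor each elevator is on
    location: list[int] = field(default_factory=lambda: [1] * N_ELEVATORS)
    # M(e_i): movement of each elevator
    movement: list[Movement] = field(default_factory=lambda: [Movement.STOPPED] * N_ELEVATORS)
    # G(e_i, f): door state of each elevator at its current floor
    doors: list[DoorState] = field(default_factory=lambda: [DoorState.CLOSED] * N_ELEVATORS)
    # Outside call buttons per floor and direction
    call_up: list[bool] = field(default_factory=lambda: [False] * N_FLOORS)
    call_down: list[bool] = field(default_factory=lambda: [False] * N_FLOORS)
```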

Actions

In general: move the elevators, open/close the doors in order to maximize your objective function

At every moment the system can take any of the following actions; we can assume they only happen one at a time.

  • Do nothing

  • Open a door/Close a door : set $G(e_i,f)$

  • Move an elevator up/down from current floor : set $M(e_i)$

  • Stop an elevator at the current floor it is moving towards using $NL(e_i)$

Dynamics

  • Define dynamics

(huh? no it’s not short…it’s about elevators)

Questions

Does the system need to remember that it just closed a door?

  • Should we define actions to be “close door and move to floor f”?

How would Exploration/Exploitation Work in This Domain?

  • how long are you willing to annoy users to get the information you need?
  • can we build a simulator for this system?

Primary References for Course

[SuttonBarto2018] - Reinforcement Learning: An Introduction. Book, free pdf of draft available.
http://incompleteideas.net/book/the-book-2nd.html

Practical Resources

Deep Q Network vs Policy Gradients - An Experiment on VizDoom with Keras

A nice blog post comparing DQN and Policy Gradient algorithms such as A2C.
https://flyyufelix.github.io/2017/10/12/dqn-vs-pg.html

Additional Resources

Other Useful Textbooks

[Dimitrakakis2019] - Decision Making Under Uncertainty and Reinforcement Learning

http://www.cse.chalmers.se/~chrdimi/downloads/book.pdf
[Ghavamzadeh2016] - Bayesian Reinforcement Learning: A Survey. Ghavamzadeh et al. 2016.
https://arxiv.org/abs/1609.04436

Open AI Reference Website

This website is a great resource. It lays out concepts from start to finish. Once you get through the first half of our course, many of the concepts on this site will be familiar to you.

Key Papers in Deep RL List

https://spinningup.openai.com/en/latest/spinningup/keypapers.html

Fundamental RL Concepts Overview

The fundamentals of RL are briefly covered here. We will go into all this and more in detail in our course.
https://spinningup.openai.com/en/latest/spinningup/rl_intro.html

Family Tree of Algorithms

(as of 2022)
Here is a list of algorithms that were at the cutting edge of RL as of a year or so ago, so it's a good place to find out more. But in a fast-growing field, it may be a bit out of date by now.
https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html

Reinforcement Learning Tutorial with Demo on GitHub

This is a thorough collection of slides from a few different texts and courses, laid out with the essentials from basic decision making to Deep RL. There are also code examples for some of their own simple domains.
https://github.com/omerbsezer/Reinforcement_learning_tutorial_with_demo#ExperienceReplay

Online/Other Courses

Tags:

#minecraft #minerl
#teaching

Videos to Watch on RL (Current Research)

Conferences 2020

Old Topics Archive

Other resources connected with previous versions of the course; I'm happy to talk about any of these if people are interested.

Bayes Nets (dropped)

Tools

SamIam Bayesian Network GUI Tool

Other Tools

References

Some videos and resources on Bayes Nets, d-separation, the Bayes Ball Algorithm and more:
https://metacademy.org/graphs/concepts/bayes_ball

Conjugate Priors (dropped)

Primary References for Probabilistic Reasoning (mostly dropped)

[Ermon2019] - The first half of the notes are based on Stanford CS 228 (https://ermongroup.github.io/cs228-notes/), which goes into even more detail on PGMs than we will.

[Cam Davidson 2018] - Bayesian Methods for Hackers - Probabilistic Programming textbook as set of python notebooks.
https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/#contents

[Koller, Friedman, 2009] Probabilistic Graphical Models : Principles and Techniques
The extensive theoretical book on PGMs.
https://mitpress.mit.edu/books/probabilistic-graphical-models