CS188 Notes 3 - Markov Decision Processes (MDPs) II
Notes from UC Berkeley's CS188 course on Artificial Intelligence.
Note:#
You can view my previous notes here: CS188: Lecture 8 - Markov Decision Processes (MDPs).
Also note that my notes are based on the Spring 2025 version of the course and on my own understanding of the material, so they MAY NOT be 100% accurate or complete. THIS IS NOT A SUBSTITUTE FOR THE COURSE MATERIAL: I only take notes on the parts of each lecture that I find interesting or confusing, NOT on every single detail.
Markov Decision Processes (MDPs)#
After the previous lecture, I realized I had some misunderstandings about the Policy Iteration algorithm, especially when compared to Value Iteration. So here, I’ll clarify my understanding of these two core approaches for solving MDPs.
Why use a “fixed policy” in Policy Iteration?#
It can be confusing at first that Policy Iteration evaluates a fixed policy. You might ask: does using a fixed, possibly non-optimal policy ever lead to the optimal one?
The answer is that evaluating a fixed policy is an essential intermediate step towards finding the optimal policy. We might "evaluate" a policy that is not optimal, but the evaluation yields valuable information about the expected future rewards under that policy, and that information is exactly what we use to improve it. What we finally act on is the optimal policy.
In Policy Iteration, we loop between two key phases:
Step 1: Policy Evaluation#
We begin with an initial policy $\pi$ (random, greedy, whatever). For this $\pi$, we compute the exact utility $V^{\pi}(s)$ of each state under the assumption that we always follow $\pi$. The Bellman equation for this is:

$$V^{\pi}(s) = \sum_{s'} T(s, \pi(s), s') \left[ R(s, \pi(s), s') + \gamma V^{\pi}(s') \right]$$
This evaluates the long-term value of every state, assuming we follow that particular policy forever.
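To make this concrete, here is a minimal Python sketch of iterative policy evaluation. The `mdp` object, its `states()` and `transitions(s, a)` methods (returning `(next_state, probability, reward)` triples), and the `policy` dictionary are assumptions for illustration, not the course's actual code:

```python
def policy_evaluation(mdp, policy, gamma, tol=1e-6):
    """Approximate V^pi for a fixed policy by repeated Bellman backups."""
    V = {s: 0.0 for s in mdp.states()}
    while True:
        delta = 0.0
        new_V = {}
        for s in mdp.states():
            a = policy[s]  # the action is fixed by the policy, so there is no max over actions
            new_V[s] = sum(p * (r + gamma * V[s_next])
                           for s_next, p, r in mdp.transitions(s, a))
            delta = max(delta, abs(new_V[s] - V[s]))
        V = new_V
        if delta < tol:  # values have (approximately) stopped changing
            return V
```

Because there is no $\max$, these Bellman equations are linear in the values, so they can alternatively be solved exactly as a linear system instead of by iteration.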
Step 2: Policy Improvement#
Now that we have $V^{\pi}$, we look at each state and ask: "Is there an action that would improve my expected future rewards if I took it immediately, then continued with $\pi$?"
For each state $s$ and each action $a$, we consider the one-step lookahead value:

$$Q^{\pi}(s, a) = \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V^{\pi}(s') \right]$$
We then build a new policy $\pi_{\text{new}}$ by setting:

$$\pi_{\text{new}}(s) = \arg\max_{a} Q^{\pi}(s, a) = \arg\max_{a} \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V^{\pi}(s') \right]$$
That is, for each state, choose the action that looks best based on the values under the old policy. This is the policy improvement step.
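Sketching the improvement step with the same hypothetical `mdp` interface (plus an assumed `actions(s)` method), it is a single one-step lookahead per state using the $V^{\pi}$ values computed above:

```python
def policy_improvement(mdp, V_pi, gamma):
    """Build a new policy: in each state, pick the action that looks best under V^pi."""
    new_policy = {}
    for s in mdp.states():
        new_policy[s] = max(
            mdp.actions(s),
            key=lambda a: sum(p * (r + gamma * V_pi[s_next])
                              for s_next, p, r in mdp.transitions(s, a)))
    return new_policy
```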
Repeat: We now re-evaluate the new policy $\pi_{\text{new}}$, and the process continues until the policy stops changing. This guarantees convergence to the optimal policy $\pi^*$ and the optimal value function $V^*$. Evaluating a fixed policy at each stage is essential for knowing both how good our current strategy is and how to improve it.
What is the difference between Policy Iteration and Value Iteration?#
In short:
- Value Iteration is always searching for the best action at each step, directly refining the estimate of the optimal value function.
- Policy Evaluation (as used in Policy Iteration) simply calculates the consequences of following a predefined plan $\pi$, without any improvement during the evaluation itself. Policy improvement occurs as a separate step.
Let’s break down the differences in detail.
Value Iteration Equation:#

$$V_{k+1}(s) = \max_{a} \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V_k(s') \right]$$
- Goal: Directly compute the optimal value function $V^*$.
- How: Each iteration, for each state $s$, considers all possible actions $a$. For each action, it calculates the expected value (reward + discounted future value), then takes the maximum over all actions (see the sketch after this list).
- Policy: Implicit. The $\max$ operation finds the best action, and the final optimal policy $\pi^*$ is extracted after $V_k$ converges to $V^*$.
- What it computes: Iteratively refines the best possible long-term value from each state.
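For comparison, here is a Value Iteration sketch over the same assumed `mdp` interface. The difference from the policy evaluation sketch above is the `max` over all actions inside every backup:

```python
def value_iteration(mdp, gamma, tol=1e-6):
    """Directly refine an estimate of V*: every backup maximizes over all actions."""
    V = {s: 0.0 for s in mdp.states()}
    while True:
        delta = 0.0
        new_V = {}
        for s in mdp.states():
            new_V[s] = max(
                (sum(p * (r + gamma * V[s_next])
                     for s_next, p, r in mdp.transitions(s, a))
                 for a in mdp.actions(s)),
                default=0.0)  # states with no actions (terminals) keep value 0
            delta = max(delta, abs(new_V[s] - V[s]))
        V = new_V
        if delta < tol:
            return V
```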
Policy Evaluation Equation (for a fixed policy $\pi$):#

$$V^{\pi}_{k+1}(s) = \sum_{s'} T(s, \pi(s), s') \left[ R(s, \pi(s), s') + \gamma V^{\pi}_k(s') \right]$$
- Goal: Compute the value function $V^{\pi}$ for the given, fixed policy $\pi$ (which may not be optimal).
- How: Each iteration, for each state $s$, uses only the action prescribed by the policy: $a = \pi(s)$. It calculates the expected value (reward + discounted future value) of following this fixed action. There is no $\max$ because the action is predetermined by $\pi$ (see the sketch after this list).
- Policy: Explicit and fixed throughout evaluation.
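The entire difference between the two updates fits in a couple of lines. This sketch again assumes the hypothetical `mdp` interface used above; only where the action comes from changes:

```python
def q_value(mdp, s, a, V, gamma):
    """Expected value of taking action a in state s and then following the values V."""
    return sum(p * (r + gamma * V[s_next]) for s_next, p, r in mdp.transitions(s, a))

# Value Iteration backup: maximize over all available actions.
def vi_backup(mdp, s, V, gamma):
    return max(q_value(mdp, s, a, V, gamma) for a in mdp.actions(s))

# Policy Evaluation backup: the action is dictated by the policy, so there is no max.
def pe_backup(mdp, s, policy, V, gamma):
    return q_value(mdp, s, policy[s], V, gamma)
```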
Comparison Table#
Feature | Value Iteration (VI) | Policy Evaluation (PE for fixed $\pi$) |
---|---|---|
Equation Core | $V_{k+1}(s) = \max_{a} \sum_{s'} T(s, a, s') [ R(s, a, s') + \gamma V_k(s') ]$ | $V^{\pi}_{k+1}(s) = \sum_{s'} T(s, \pi(s), s') [ R(s, \pi(s), s') + \gamma V^{\pi}_k(s') ]$ |
$\max$ Present? | Yes | No |
Action Choice | Considers all $a$, picks the best | Only the action $\pi(s)$ given by the policy |
Policy Role | Policy is implicit (via $\max$) | Policy is explicit and fixed |
Goal | Compute the optimal value function $V^*$ | Compute the value function $V^{\pi}$ for the given policy |
Used Where? | Standalone algorithm to find $V^*$ | Subroutine within Policy Iteration |
Convergence | $V_k$ converges to $V^*$ | $V^{\pi}_k$ converges to $V^{\pi}$ |
Does Policy Evaluation take more iterations to converge than Value Iteration?#
It’s tempting to think that Policy Evaluation takes more iterations to converge, since it does not optimize at every step, but in practice, Policy Iteration often converges in fewer outer iterations (policy updates) than Value Iteration, though the work per iteration can differ.
The real power of Policy Iteration comes after Policy Evaluation. Once we have $V^{\pi}$ for our current policy, we can often make a large jump to a better policy by improving all states at once:

$$\pi_{\text{new}}(s) = \arg\max_{a} \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V^{\pi}(s') \right]$$
We only repeat this process until the policy stops changing, which often happens quickly and usually takes fewer outer iterations than Value Iteration (even though each of those iterations includes a full policy evaluation).
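Putting the two phases together (reusing the `policy_evaluation` and `policy_improvement` sketches from earlier, with the same assumed `mdp` interface), the outer loop of Policy Iteration looks roughly like this:

```python
def policy_iteration(mdp, gamma):
    """Alternate evaluation and improvement until the policy stops changing."""
    # Start from an arbitrary policy, e.g. the first available action in each state.
    policy = {s: next(iter(mdp.actions(s))) for s in mdp.states()}
    while True:
        V_pi = policy_evaluation(mdp, policy, gamma)       # Step 1: evaluate the fixed policy
        new_policy = policy_improvement(mdp, V_pi, gamma)  # Step 2: one-step greedy improvement
        if new_policy == policy:  # no state changed its action, so the policy has converged
            return policy, V_pi
        policy = new_policy
```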