---
layout: page
title: Lecture 20 Recap - Markov Decision Processes
mathjax: true
weight: 0
---
<section class="main-container text">
<div class="main">
<h4>Date: April 15, 2020 (<a href="https://canvas.harvard.edu/courses/65269/external_tools/62565" target="_blank">Lecture Video</a>, <a href="https://piazza.com/class_profile/get_resource/k5fnxwvfh7p7mi/k927rxxtp0j5o5" target="_blank">iPad Notes</a>, <a href="https://docs.google.com/forms/d/e/1FAIpQLSem2VpRYM3VeUjHziFOBD5EqpAvhXu32fgzYN1sQLB9Ja54nQ/viewform" target="_blank">Concept Check</a>, <a href="https://docs.google.com/forms/d/e/1FAIpQLSem2VpRYM3VeUjHziFOBD5EqpAvhXu32fgzYN1sQLB9Ja54nQ/viewanalytics" target="_blank">Class Responses</a>, <a href="{{ site.baseurl }}/ccsolutions" target="_blank">Solutions</a>)</h4>
<br>
<h3>Lecture 20 Summary</h3>
<ul>
<li><a href="#recap20_1">Intro: Moving from Predictions to Decisions</a></li>
<li><a href="#recap20_2">Intro: Markov Decision Processes</a></li>
<li><a href="#recap20_3">How to Solve using Policy Iteration (Method 1)</a></li>
<li><a href="#recap20_4">Recap and Looking Forward</a></li>
</ul>
<h2 id="recap20_1">Intro: Moving from Predictions to Decisions</h2>
So far in CS181, we've been looking at how to make predictions. Today, we will transition from predictions into the realm of decision making.<br><br>
To best illustrate this, let's cover a few real world examples. Imagine we're using a map system to go from a starting point to a destination. The map system might have predictive tools that tell us where the traffic is (using realtime data of where cars are right now), and how long different routes take from start to finish. Modeling was used here to figure out all this traffic information and which routes are slower and faster. But now, we need to actually make a decision. Which route do we take? We might have different objectives. Maybe we want the route that is the shortest in expectation. Maybe we're okay with being a little slower on average, if there is a guarantee that we'll be there by some time (say we're trying to make an important interview). These are decisions that don't take place at the prediction time. In order to make these decisions, we'll need to formalize a reward function.<br><br>
Another real-world example is autonomous vehicles. There are tons of predictions that come into play for autonomous vehicles, including where the road is, where the lines on the road are, whether there are signs ahead, whether there are pedestrians crossing, etc. These are all predictive tasks, which is what we've been doing so far in CS181. Once we have this data, how do we decide what to do next? How should the car actually drive to be safe? Those are the types of questions we will start looking at across the next three lectures.
<h2 id="recap20_2">Intro: Markov Decision Processes</h2>
First: what is a Markov Decision Process? At the most basic level, it is a framework for modeling decision making (again, remember that we've moved from the world of prediction to the world of decision making). In our model, there is only one agent, and this is the agent that needs to make decisions while interacting with the world (other models incorporate multiple agents, but in CS181 we are only covering situations that have one agent).
<div class="text-center">
<img src="{{ site.baseurl }}/images/recap20_1.png" style="width:50%" alt="MDP Setup"></img>
</div>
<h4>Some Notation, Definitions, and the Goal</h4>
To explain the setup diagram above, the agent can send actions to the world, and the world will return observations that we call $o$ and rewards that we call $r$.<br><br>
$o$ represents observations<br><br>
$r$ represents rewards<br><br>
$t$ represents the time stamp<br><br>
$\gamma$ is our discount factor which is between [0, 1) (in the future, rewards are worth less by a factor of $\gamma$ per round of time)<br><br>
With the above information, we have:<br><br>
$E [\sum_t \gamma^t r_t]$ represents expected sum of discounted rewards<br><br>
Goal: agent should choose the actions to maximize the expected sum of discounted rewards.<br><br>
$h_t = \{a_0, o_0, r_0, a_1, o_1, r_1, ..., a_t, o_t, r_t\}$ represents the history of actions, observations, and rewards from time 0 to our current time $t$<br><br>
State: summarizes a history such that making predictions about the future given the state is equivalent to making those same predictions given the entire history.<br><br>
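As a quick worked example (with made-up numbers, not from lecture), the discounted sum of rewards for one trajectory could be computed like this:
<pre><code>gamma = 0.9
rewards = [1.0, 0.0, 2.0, 1.0]   # hypothetical r_0, r_1, r_2, r_3

# Discounted return: sum over t of gamma^t * r_t
discounted_return = sum((gamma ** t) * r for t, r in enumerate(rewards))
print(discounted_return)         # 1.0 + 0.9*0.0 + 0.81*2.0 + 0.729*1.0, about 3.349
</code></pre>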
<h4>Breaking Down Definition of State</h4>
<div class="text-center">
<img src="{{ site.baseurl }}/images/recap20_2.png" style="width:50%" alt="Explaining State"></img>
</div>
Imagine we had a situation where we wanted to get a person from Start (S) to Goal (G). Assume we have total control of where our person walks. The solid wiggly line to the middle point is the person's walking history. If the person takes the action, "Go Down", and we try to guess what happens next, we don't actually need to know anything about the history of where the person walked to get to their current location (the middle dot). The person could have taken the solid wiggly line or the dotted wiggly line to get to the middle dot, yet all we really needed to know was their current location. Similarly, if someone stopped you on the street to ask how to get to Maxwell Dworkin, you wouldn't ask them "where did you come from?". You already have enough info to direct them to MD, since you already know their current location. In this example, the person's location is their state, as it is a sufficient statistic.<br><br>
Let's say this was a car instead of a person, however, and that the car's velocity matters. If the car was coming in at 100 miles per hour from the left, moving "Down" would require a huge turn, whereas if the car was driving in slowly from above at 10 miles per hour, the same action "Down" would cause the car to move quite differently. This means that knowing only the car's location is not enough to define the car's state. We would want to incorporate the velocity, the acceleration, and essentially everything we need to be able to predict what happens next. State is all the information needed to be able to ignore the past. If all of this information is available to us, then we say the system is Markov in state.<br><br>
Note: this is the official definition of state. Some papers in the literature use approximations of states.
Say, for example, you might choose to use a single frame (a still snapshot) of a video game (such as Super Mario Bros) as a state. From the still frame, you can see where each object (such as Mario, bricks, and evil mushrooms) is located on the screen. But to really predict how the objects move, you actually need the velocity of the objects (is Mario running? in what direction are the mushrooms walking?) to predict what happens next and to be able to call the system Markov. It's possible that the paper will just say, "We will choose to use just the frame as our approximation of state". It's just important to note that that is exactly what it is: an approximation of a state. However, the formal definition of state is still everything you need to know to be able to predict what's next. We will use this definition of state for the next three lectures.<br><br>
<h4>Tying Back and Fully Defining our Markov Decision Process</h4>
<div class="text-center">
<img src="{{ site.baseurl }}/images/recap20_3.png" style="width:50%" alt="MDP Setup"></img>
</div>
Let's come back to the diagram of an agent in the world. Now, we're going to assume the world is nice, and that when our agent does some action $a$ from starting state $s$, the world will send back the resultant state $s'$. You can imagine that the observations we had in the earlier setup get converted via some method into the state (in the real world, this is what would have to happen). Because the world gives us a resultant state $s'$, which gives us all the information we need to predict what will happen to the agent next, this system is a Markov Decision Process.<br><br>
Now let's fully define everything:<br><br>
This system can be described using this tuple of information:
$$\{S, A, R(s, a, s'), T(s' | s, a), \gamma, p_0\}$$
$S$ are all possible states<br>
$A$ are all possible actions<br>
$R(s, a, s')$ is the reward for doing action $a$ in starting state $s$ and landing in state $s'$<br>
$T(s' | s, a)$ is a matrix of transition probabilities, which describe $p(s' | s, a)$, the probability of landing in state $s'$ from taking action $a$ in starting state $s$. Notice this is a similar idea to when we defined $T(s'|s)$ for Hidden Markov Models; the only difference is that here, the transition probabilities depend on the action.<br>
$\gamma$ is the discount factor.<br>
$p_0$ is some starting state.<br><br>
Goal: Find the best policy. We use $\pi$ to denote a policy. Policies can be stochastic, describing a distribution over actions ($\pi(s, a) = P(a|s)$), or deterministic, mapping each state to a single action ($\pi(s)=a$). Stochastic policies place a probability distribution across the actions for a given state (what probability to play each action with); deterministic policies simply have one predetermined action to play for each given state. Both notations are commonly used. <br><br>
The best policy is as follows:
$$\max_\pi E_{T, p_0} \left[\sum_t \gamma^t r_t\right]$$
Here, $T$ and $p_0$ represent the transition probabilities and the initial starting state of the world. The agent does not have control over the world, but it does control its own actions. We are trying to determine the best actions for the agent to take: the actions that maximize the return the agent receives from the world.
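To make this concrete, here is one possible (hypothetical) way to store a small finite MDP with NumPy arrays; the specific numbers and the array layout are illustrative choices, not anything prescribed by the lecture:
<pre><code>import numpy as np

# A small, made-up finite MDP (array layout is our own choice):
#   S states, A actions
#   T[s, a, s'] = p(s' | s, a)       transition probabilities
#   R[s, a]     = reward for (s, a)  (deterministic rewards, as assumed later)
S, A = 3, 2
T = np.zeros((S, A, S))
T[0, 0] = [0.8, 0.2, 0.0]
T[0, 1] = [0.0, 0.9, 0.1]
T[1, 0] = [0.1, 0.8, 0.1]
T[1, 1] = [0.0, 0.2, 0.8]
T[2, 0] = [0.0, 0.0, 1.0]
T[2, 1] = [0.0, 0.0, 1.0]
R = np.array([[0.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
gamma = 0.9

# A deterministic policy is just a choice of action for each state:
pi = np.array([1, 1, 0])
</code></pre>
The code sketches later in this recap assume this array layout; they take <code>T</code>, <code>R</code>, <code>gamma</code>, and <code>pi</code> as arguments.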
<h2 id="recap20_3">How to Solve using Policy Iteration (Method 1)</h2>
Over this lecture and the next, we will cover three methods of solving an MDP, each with its own pros and cons. There's a lot of literature on this topic that includes more complex methods, but overall the three we cover are intuitive and scale relatively well. Additionally, the more complex methods build off these simpler methods, so this is laying down important groundwork.<br><br>
First, let's explore policy iteration. At a high level, policy iteration iteratively evaluates a proposed policy, and then performs policy improvement.
<h4>Policy Evaluation</h4>
To understand policy iteration, we first must understand policy evaluation. Policy evaluation is used to estimate the expected sum of discounted rewards from each state, given a particular policy $\pi$ to follow.
More formally, given a $\pi$, policy evaluation calculates $E[\sum \gamma^t r_t]$. Let's start with some definitions...<br><br>
Define $V^\pi(s) = E_{\pi, T}[\sum \gamma^t r_t | s_0 = s]$<br><br>
Here, the notation $V^\pi(s)$ represents the "value" of state $s$ under policy $\pi$. We define the value as the expected sum of discounted rewards the agent will receive. Note: the $\pi$ is in the expectation just in case it's a stochastic policy; if $\pi$ were deterministic, we wouldn't need it there.<br><br>
Define $Q^\pi(s, a) = E_{\pi, T} [\sum \gamma^t r_t | s_0=s, a_0=a]$<br><br>
Here, we define $Q$ as a slight variation of the above $V$. You can think of $Q^\pi(s, a)$ as experimenting with a policy. We're essentially saying: for one step, take action $a$, which lands us in some $s'$, instead of following policy $\pi$. Then for each following time step, the agent will follow the policy $\pi$ the rest of the way. This is the definition of $Q^\pi(s, a)$, which will be used later.<br><br>
From this point onwards, let's have our rewards $R(s, a)$ be deterministic. Remember that the randomness in $R$ comes from the transition probabilities, which can make the rewards different. Note: this is a very reasonable restriction, because we can always redefine $R(s, a) = E_R[R(s, a)]$. Since this whole $R$ term will be taken in expectation anyway, if we just redefine $R$ to be deterministic, we will be fine.<br><br>
From this point onwards, let's also have our $\pi$ be a deterministic $\pi(s)$. This is a reasonable restriction because, for MDPs, it turns out there will always be a deterministic optimal policy. The proof of this claim is outside the scope of this class, but the general idea is that there is always a deterministic strategy that achieves the highest possible expected discounted reward.<br>
Note: this assumption does not hold in a multiplayer game. Imagine rock paper scissors: with two players, the optimal $\pi$ has to have some randomness, because if you always played scissors, the opponent could always play rock and beat you consistently.<br><br>
<u>Bellman Equation:</u>
$$V^\pi(s) = R(s, \pi(s)) + \gamma \sum_{s'} T(s'|s, \pi(s)) V^\pi(s')$$
Here we introduce the Bellman Equation. In our below derivation, we will show that the Bellman Equation is recursive with respect to each time step $t$. Notice that because we assumed policy $\pi$ is deterministic, $\pi$ is a function of only $s$ instead of both $s$ and $a$. <br><br>
How did we derive this Bellman equation? Let's start from our definition of $V^\pi(s)$:
$$V^\pi(s) = E\left[\sum_{t=0} \gamma^t r_t \,\Big|\, s_0 = s\right]$$
We can extract out the first time step:
$$V^\pi(s) = E\left[R(s_0, \pi(s_0)) + \sum_{t=1} \gamma^t r_t \,\Big|\, s_0 = s\right]$$
Since the first reward is deterministic given $s_0 = s$, we can pull it out of the expectation:
$$V^\pi(s) = R(s_0, \pi(s_0)) + E\left[\sum_{t=1} \gamma^t r_t \,\Big|\, s_0 = s\right]$$
We can pull out a factor of $\gamma$ to help us start to recreate $V$, but for $s'$:
$$V^\pi(s) = R(s_0, \pi(s_0)) + \gamma E\left[\sum_{t=1} \gamma^{t-1} r_t \,\Big|\, s_0 = s\right]$$
Note that $s_1$ is unknown: it depends on where taking $\pi(s_0)$ from $s_0$ lands us. So we take an outer expectation over $s_1$:
$$V^\pi(s) = R(s_0, \pi(s_0)) + \gamma E_{s_1 | s_0} \left[ E \left[ \sum_{t=1} \gamma^{t-1} r_t \,\Big|\, s_1\right]\right]$$
Since the state space is discrete, we can use a summation to rewrite the outer expectation:
$$V^\pi(s) = R(s_0, \pi(s_0)) + \gamma \sum_{s_1} T(s_1 | s_0, \pi(s_0)) E_{T, \pi}\left[\sum_{t=1} \gamma^{t-1} r_t \,\Big|\, s_1\right]$$
The rightmost expectation is exactly the definition of $V^\pi(s_1)$, so relabeling $s_1$ as $s'$ completes our recursion for one time step:
$$V^\pi(s) = R(s, \pi(s)) + \gamma \sum_{s'} T(s' | s, \pi(s)) V^\pi(s')$$
The idea here is that the value of being in a particular state is equal to the immediate reward that the agent gets from following policy, plus a discounted expectation of what the future value is at the next time step.<br><br>
<u>So how do we actually solve this to complete Policy Evaluation?</u><br><br>
So far we've just written out a Bellman equation, and justified why it's correct mathematically and conceptually. But we still haven't solved for $V^\pi(s)$ yet.<br><br>
The good news is, if the state space and action space are discrete, it is easy to solve for $V^\pi$! First, we note that the Bellman equation is linear. Across a set of $|S|$ states, we have $|S|$ unknowns and $|S|$ equations, with one Bellman equation for each state. This means we can simply solve the system of equations.
<h5>Method 1: Solving for V</h5>
Write down our system of equations in vector form:
$$V^\pi = R^\pi + \gamma T^\pi V^\pi$$
$V^\pi$ is a vector with one entry per state. $R^\pi$ is a vector, where each element is $E_R[R(s, \pi(s))]$ if $R$ is random, or $R(s, \pi(s))$ if $R$ is deterministic. $T^\pi$ is the matrix with entries $T^\pi(s'|s) = T(s'|s, \pi(s))$.<br><br>
When we solve the system of equations, we get the closed-form solution:
$$V^\pi = (I - \gamma T^\pi)^{-1} R^\pi$$
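As a minimal sketch of Method 1 (assuming the hypothetical array layout from the MDP example above, where <code>T</code> has shape <code>(S, A, S)</code> and <code>R</code> has shape <code>(S, A)</code>), the closed-form solve is just one call to a linear solver:
<pre><code>import numpy as np

def policy_evaluation_exact(T, R, gamma, pi):
    """Solve V^pi = (I - gamma T^pi)^{-1} R^pi for a deterministic policy pi."""
    S = T.shape[0]
    # Select the action pi(s) in each state to build T^pi (S x S) and R^pi (length S).
    T_pi = T[np.arange(S), pi]   # T_pi[s, s'] = T(s' | s, pi(s))
    R_pi = R[np.arange(S), pi]   # R_pi[s]     = R(s, pi(s))
    # Solve the linear system (I - gamma T^pi) V = R^pi rather than forming the inverse.
    return np.linalg.solve(np.eye(S) - gamma * T_pi, R_pi)
</code></pre>
Solving the linear system directly (instead of explicitly inverting the matrix) is the standard, numerically stabler choice.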
<h5>Method 2: Solving for V</h5>
We can also solve for $V^\pi$ iteratively, updating our estimates of $V^\pi$ until they converge. This method is used more in practice, as it tends to be more stable. First, initialize $\hat{V}^\pi_{k=0}(s)$ to be whatever constant you'd like.<br>
Then define $\hat{V}^\pi_k(s)$ as:<br>
$$\hat{V}^\pi_k(s) = R(s, \pi(s)) + \gamma \sum_{s'} T(s' | s, \pi(s)) \hat{V}^\pi_{k-1}(s')$$
Here, $\hat{V}^\pi_{k-1}(s')$ is our old estimate of $V^\pi(s')$ at iteration $(k - 1)$ of the algorithm. We use this old estimate at iteration $(k - 1)$ to compute a new estimate at iteration $k$. The intuition is that with every iteration, we push our old estimate further into the discounted future.
$$\hat{V}^\pi_k(s) = R(s, \pi(s)) + \gamma \sum_{s'} T(s' | s, \pi(s)) \left[R(s', \pi(s')) + \gamma \sum_{s''} T(s'' | s', \pi(s')) \hat{V}^\pi_{k-2}(s'')\right]$$
Therefore the originally initialized $\hat{V}^\pi_{k=0}$ gets "pushed out" into the future. If we expand the math out, we end up with the term $\gamma^2 \hat{V}^\pi_{k-2}(s'')$, and we notice that each additional iteration multiplies the initial guess by another $\gamma$. Essentially, we end up with a $\gamma^\textrm{iters} \hat{V}^\pi_0$ term.<br><br>
This is a contraction because $\gamma$ is less than 1 (by definition for us), and eventually this pushes our arbitrarily initialized value estimates $\hat{V}^\pi_{k=0}$ so far into the future that they don't matter anymore.
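Here is a minimal sketch of Method 2 under the same assumptions about the array layout; the zero initialization, tolerance, and iteration cap are arbitrary choices:
<pre><code>import numpy as np

def policy_evaluation_iterative(T, R, gamma, pi, tol=1e-8, max_iters=10000):
    """Repeatedly apply the Bellman backup for a fixed deterministic policy pi."""
    S = T.shape[0]
    T_pi = T[np.arange(S), pi]           # T_pi[s, s'] = T(s' | s, pi(s))
    R_pi = R[np.arange(S), pi]           # R_pi[s]     = R(s, pi(s))
    V = np.zeros(S)                      # arbitrary initialization V_0
    for _ in range(max_iters):
        V_new = R_pi + gamma * T_pi @ V  # Bellman update using the old estimate
        if np.allclose(V_new, V, atol=tol, rtol=0.0):
            return V_new                 # estimates have converged
        V = V_new
    return V
</code></pre>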
<h4>Tying Back to Policy Iteration</h4>
We now have two ways to perform policy evaluation, i.e., to solve for $V^\pi(s)$. Let's return to our goal of policy iteration, which uses policy evaluation as a subroutine. The algorithm is given below:
<div class="text-center">
<img src="{{ site.baseurl }}/images/recap20_4.png" style="width:80%" alt="Policy Iteration"></img>
</div>
The idea is that we start with some policy $\pi$, and then use our $Q$ to try excursions. If an action (excursion) $a$ is better, we'll actually replace the learned policy $\pi$ with the better move (the policy update). While the proof is outside the scope of the course, the policy improvement theorem guarantees that this loop will converge to $\pi^*$, the optimal policy. <br>
Overall, at a high level, you can think of it like this: we started with a policy, tried some excursions, evaluated them, and then improved our policy. We repeat this, iterating until our policy is optimal.<br><br>
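Putting the pieces together, a minimal policy iteration sketch might look like the following (it reuses <code>policy_evaluation_iterative</code> from above and the same hypothetical array layout):
<pre><code>import numpy as np

def policy_iteration(T, R, gamma, tol=1e-8, max_iters=1000):
    """Alternate policy evaluation and greedy policy improvement until the policy is stable."""
    S, A = R.shape
    pi = np.zeros(S, dtype=int)              # start from an arbitrary deterministic policy
    for _ in range(max_iters):
        V = policy_evaluation_iterative(T, R, gamma, pi, tol)   # policy evaluation
        # "Excursions": Q(s, a) = R(s, a) + gamma * sum over s' of T(s'|s, a) V(s')
        Q = R + gamma * np.einsum("sap,p->sa", T, V)
        pi_new = np.argmax(Q, axis=1)        # greedy policy improvement
        if np.array_equal(pi_new, pi):       # no state changes its action: we are done
            break
        pi = pi_new
    return pi, V
</code></pre>
On the toy MDP sketched earlier, <code>policy_iteration(T, R, gamma)</code> returns one action index per state along with that policy's value estimates.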
<h2 id="recap20_4">Recap and Looking Forward</h2>
Where are we so far? We've found one way of getting the optimal policy, and we will cover two more methods in the next lecture. In addition, soon we will be covering reinforcement learning (RL). RL is learning when we don't have access to $T$ and $R$: what if we had to just interact with the world directly? Much more to come!
</div>
</section>