<!doctype html>
<html class="no-js" lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<title>Project 2: Continuous Control</title>
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
</head>
<body>
<!--[if lte IE 9]>
<p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="https://browsehappy.com/">upgrade your browser</a> to improve your experience and security.</p>
<![endif]-->
<!-- Add your site or application content here -->
<style>
body {
background-size: 40px 40px;
background-image: radial-gradient(circle, #000000 1px, rgba(0, 0, 0, 0) 1px);
margin: 0;
padding: 0;
font-family: 'Courier New', Courier, monospace;
}
aside {
position: fixed;
height: 100%;
background-color: #ffffff;
width: 96px;
box-shadow: 0 2px 2px 0 rgba(0,0,0,0.16), 0 0 0 1px rgba(0,0,0,0.08);
}
aside ul {
padding: 0;
margin: 0;
}
aside ul li {
display: inline-block;
width: 100%;
text-align: center;
padding: 24px 0;
border-bottom: 1px solid #eeeeee;
}
.active {
font-weight: bold;
}
table {
width: 100%;
border-collapse: collapse;
margin: 24px 0;
}
table, th, td {
border: 1px solid black;
}
h2 {
text-decoration: underline;
}
th, td {
padding: 6px;
}
.row {
display: flex;
align-items: center;
justify-content: center;
}
.col {
width: 50%;
padding: 24px;
}
#banner {
text-align: left;
}
section {
background-color: #ffffff;
width: 80%;
max-width: 792px;
margin: 48px auto;
padding: 36px 24px;
box-shadow: 0 2px 2px 0 rgba(0,0,0,0.16), 0 0 0 1px rgba(0,0,0,0.08);
border-radius: 7px;
}
section h2 {
text-align: left;
}
.gradient-backdrop {
height: 400px;
background: #E55D87; /* fallback for old browsers */
background: -webkit-linear-gradient(left, #5FC3E4, #E55D87); /* Chrome 10-25, Safari 5.1-6 */
background: linear-gradient(to right, #5FC3E4, #E55D87); /* W3C, IE 10+/ Edge, Firefox 16+, Chrome 26+, Opera 12+, Safari 7+ */
display: flex;
justify-content: center;
align-items: center;
}
.gradient-backdrop h1 {
color: #ffffff;
}
.img-container {
width: 100%;
height: auto;
text-align: left;
}
.stretch img {
width: 100%;
height: auto;
max-width: 240px;
}
</style>
<aside>
<ul>
<li>P1</li>
<li class="active">
P2
</li>
<li>P3</li>
</ul>
</aside>
<div class="gradient-backdrop">
<h1>Project 2: Continuous Control</h1>
</div>
<div id="content">
<section>
<h2>Approach</h2>
<p>
Our initial approach to solving the "Reacher" environment involved using multiple agents and a custom implementation of the D4PG algorithm.
However, this proved challenging, and we were never able to achieve a score higher than 1.5. Because of this, as well as time constraints, we instead opted to
base our implementation on Udacity's DDPG implementation, which was used to solve the "Pendulum" environment, and we limited our scope to a single agent.
With this setup, the agent was able to complete the task, reaching an average score greater than 30 after 267 episodes with a final average score of 31.02.
Please see the plot and table below for more information.
</p>
</section>
<section>
<h2>
Algorithm and Architecture
</h2>
The DDPG algorithm uses two neural networks. The first network, called the Actor, maps states to actions.
The second network, referred to as the Critic, maps state-action pairs to Q-values.
The Actor produces an action given the current state of the environment; the Critic then produces a TD error signal, which
drives learning in both the Actor and the Critic.
This approach allows us to optimize a policy with a continuous action space in a deterministic fashion.
Target networks for both the Actor and the Critic are used in order to avoid excessive correlation
when calculating the loss.
<br>
<br>
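<p>
To make the interplay between the Actor, the Critic, the TD error, and the target networks concrete, the sketch below outlines
one DDPG learning step in PyTorch. It is a minimal illustration of the standard DDPG update rather than a verbatim copy of our code;
the function and variable names are our own.
</p>
<pre><code>
import torch
import torch.nn.functional as F

# One DDPG learning step, assuming actor/critic networks, their target copies,
# and one optimizer per network (all names here are illustrative).
# "dones" is assumed to be a 0/1 float tensor.
def ddpg_learn_step(actor, actor_target, critic, critic_target,
                    actor_opt, critic_opt, batch, gamma, tau):
    states, actions, rewards, next_states, dones = batch

    # Critic update: build the TD target from the target networks,
    # then minimize the TD error (mean squared error).
    with torch.no_grad():
        next_actions = actor_target(next_states)
        q_targets = rewards + gamma * critic_target(next_states, next_actions) * (1 - dones)
    critic_loss = F.mse_loss(critic(states, actions), q_targets)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor update: adjust the policy so the Critic rates its actions higher.
    actor_loss = -critic(states, actor(states)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft-update the target networks (controlled by TAU) so they trail the
    # learned networks slowly, which keeps the TD targets stable.
    for target, local in ((actor_target, actor), (critic_target, critic)):
        for t_param, l_param in zip(target.parameters(), local.parameters()):
            t_param.data.copy_(tau * l_param.data + (1.0 - tau) * t_param.data)
</code></pre>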
Similarly to the DQN algorithm, DDPG also utilizes a technique known as <strong>Replay Memory</strong>: experiences obtained
from interacting with the environment are first stored in a buffer and later sampled at random for learning.
This further minimizes correlation between consecutive samples and stabilizes the performance of the model. A minimal sketch of such a buffer follows below.
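<p>
The sketch below illustrates the idea with a small buffer built on a deque; the class and method names are illustrative,
not a verbatim copy of our code.
</p>
<pre><code>
import random
from collections import deque, namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class ReplayBuffer:
    """Fixed-size buffer that stores experience tuples and samples them uniformly."""

    def __init__(self, buffer_size, batch_size):
        self.memory = deque(maxlen=buffer_size)  # oldest experiences are dropped first
        self.batch_size = batch_size

    def add(self, state, action, reward, next_state, done):
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self):
        # Uniform random sampling breaks the correlation between consecutive steps.
        return random.sample(self.memory, k=self.batch_size)

    def __len__(self):
        return len(self.memory)
</code></pre>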
<br>
To achieve our results, we used the following hyperparameters:
<ul>
<li>BUFFER_SIZE = int(1e6)</li>
<li>BATCH_SIZE = 128</li>
<li>GAMMA = 0.99</li>
<li>TAU = 1e-2</li>
<li>LR_ACTOR = 1e-4</li>
<li>LR_CRITIC = 1e-4</li>
<li>WEIGHT_DECAY = 0</li>
</ul>
In our case, the neural network consists of three linear layers, with an input size equal to the state space (37)
and a final output corresponding to the number of available actions (4). In between, the hidden layers use 64 neurons.
Finally, we use the ReLU activation function on the hidden layers.
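<p>
A minimal PyTorch sketch of a network with this shape is shown below. It assumes two hidden layers of 64 units each (so that
the network has three linear layers) and a tanh on the output to keep each action component in range; these details, as well as
the class name, are illustrative assumptions rather than a description of our exact code.
</p>
<pre><code>
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative network matching the description above:
# state (37) -> 64 -> 64 -> actions (4), with ReLU on the hidden layers.
class ActorNetwork(nn.Module):
    def __init__(self, state_size=37, action_size=4, hidden_units=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_units)
        self.fc2 = nn.Linear(hidden_units, hidden_units)
        self.fc3 = nn.Linear(hidden_units, action_size)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        # tanh bounds each action component to [-1, 1] (an assumption; common for DDPG actors)
        return torch.tanh(self.fc3(x))
</code></pre>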
</section>
<section class="container">
<h2>DDPG</h2>
Environment solved in 267 episodes! Average Score: 31.02
<div class="row">
<div class="col">
<table>
<tr>
<th>Episode #</th><th>Average Score</th>
</tr>
<tr><td>100</td><td>3.26</td></tr>
<tr><td>200</td><td>16.60</td></tr>
<tr><td>267</td><td>31.02</td></tr>
</table>
</div>
<div class="col">
<div><img src="./img/graph.png" style="width: 100%; height: auto; display: block;" alt="DDPG graph"></div>
</div>
</div>
</section>
<section>
<h2>Obstacles &amp; Future Improvements</h2>
<p>
The next step would involve extending our implementation with the techniques used in the D4PG algorithm, as well as further experimentation with the model hyperparameters.
</p>
</section>
<section>
<h2>References</h2>
<ul>
<li>
A link to the original paper on DDPG can be found <a href="https://arxiv.org/pdf/1509.02971.pdf">here</a>.
</li>
</ul>
</section>
</div>
</body>
</html>