[RL-baseline] Model v4, experiment #2 #40

Open · wants to merge 2 commits into base: RL-baseline-v4

Conversation

ziritrion
Collaborator

The v4 policy network for REINFORCE with Baseline is essentially the same network as in v2, but the actor and critic heads each have an additional fully connected layer, similar to v3. This tweak was added in the hope of reproducing the initial reward gains we observed with model v3 while keeping the higher sustained reward we observed with v2.
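
For reference, here is a minimal sketch of the head layout described above. The convolutional backbone and all layer sizes are illustrative assumptions, not the actual v2/v4 architecture; only the "extra fully connected layer per head" structure is taken from the PR.

```python
import torch
import torch.nn as nn

class PolicyV4Sketch(nn.Module):
    """Shared backbone with actor and critic heads, each with one extra FC layer."""

    def __init__(self, feature_dim=256, hidden_dim=128, n_actions=9):
        super().__init__()
        # Placeholder backbone (assumed 4 stacked frames); the real model reuses the v2 trunk.
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(feature_dim), nn.ReLU(),
        )
        # Actor head: the extra FC layer before the action logits (the v4 tweak).
        self.actor = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_actions),
        )
        # Critic head: the extra FC layer before the scalar state-value output.
        self.critic = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, obs):
        features = self.backbone(obs)
        return self.actor(features), self.critic(features)
```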

The action sets are the same as in model v3. For this experiment, action set #1 was chosen (a mapping sketch follows the list):
[0.0, 0.0, 0.0], # no action
[0.0, 0.8, 0.0], # throttle
[0.0, 0.0, 0.6], # brake
[-0.9, 0.0, 0.0], # hard left
[-0.5, 0.0, 0.0], # medium left
[-0.2, 0.0, 0.0], # soft left
[0.9, 0.0, 0.0], # hard right
[0.5, 0.0, 0.0], # medium right
[0.2, 0.0, 0.0], # soft right
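
As mentioned above, here is a sketch of how a sampled discrete action index could be translated into the continuous [steering, gas, brake] control that the CarRacing environment expects. The assumption that the actor outputs logits over the nine actions, and the helper names, are mine and are not stated in this PR.

```python
import numpy as np
import torch
from torch.distributions import Categorical

# Action set #1 from above, as a lookup table (rows: [steering, gas, brake]).
ACTION_SET = np.array([
    [ 0.0, 0.0, 0.0],   # no action
    [ 0.0, 0.8, 0.0],   # throttle
    [ 0.0, 0.0, 0.6],   # brake
    [-0.9, 0.0, 0.0],   # hard left
    [-0.5, 0.0, 0.0],   # medium left
    [-0.2, 0.0, 0.0],   # soft left
    [ 0.9, 0.0, 0.0],   # hard right
    [ 0.5, 0.0, 0.0],   # medium right
    [ 0.2, 0.0, 0.0],   # soft right
], dtype=np.float32)

def select_action(logits: torch.Tensor):
    """Sample a discrete index from the actor logits and return the matching
    continuous control plus the log-probability needed for REINFORCE."""
    dist = Categorical(logits=logits)
    index = dist.sample()
    return ACTION_SET[index.item()], dist.log_prob(index)
```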

The Running Reward reached its maximum value of 413 around the 4k-episode mark but collapsed quickly afterwards. Entropy stayed consistently near zero from around the 5k-episode mark, although the Running Reward turned positive again around the 7k-episode mark and fluctuated between 150 and 250 until the 17k-episode mark, at which point it collapsed and never recovered.
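
For context, the two reported curves are typically computed along these lines. The exponential moving average and its smoothing factor are assumptions, since the exact definition of Running Reward is not given in this PR.

```python
import torch
from torch.distributions import Categorical

def update_running_reward(running_reward: float, episode_reward: float,
                          alpha: float = 0.05) -> float:
    # Exponential moving average of episode returns (smoothing factor assumed).
    return (1.0 - alpha) * running_reward + alpha * episode_reward

def mean_policy_entropy(logits: torch.Tensor) -> torch.Tensor:
    # Near-zero entropy means the actor has become almost deterministic,
    # consistent with the collapse described above.
    return Categorical(logits=logits).entropy().mean()
```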

Results are below:
[Training plots: Running Reward and Entropy over episodes]

Sample video below:
https://user-images.githubusercontent.com/1465235/113120748-328b7900-9212-11eb-8cac-2e79f1d6b90e.mp4
