
reward function design #4

Open
Joll123 opened this issue Jul 19, 2019 · 2 comments

Comments

@Joll123

Joll123 commented Jul 19, 2019

Does your reward function contain obstacle distance information?

@ElhadjHoussem
Contributor

Hi,
The distance to obstacles is what the neural network actually learns from as input: the laser signals (normalized to [0, 1]) alongside a one-hot orientation vector. It could be added to the reward, but I think it would dilute the signal that leads to the goal and make the robot spin in place more, since that could earn it a higher reward.
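The input described above can be sketched as follows; function and parameter names are illustrative, not taken from the repository's code:

```python
def build_observation(laser_ranges, max_range, heading_bin, n_bins=8):
    """Build the network input described above: laser readings
    normalized to [0, 1] plus a one-hot orientation vector.
    (All names here are hypothetical, not from this repo.)"""
    # Normalize raw laser distances into [0, 1], clipping at max range.
    laser = [min(max(r / max_range, 0.0), 1.0) for r in laser_ranges]
    # One-hot encoding of the discretized heading toward the goal.
    one_hot = [1.0 if i == heading_bin else 0.0 for i in range(n_bins)]
    return laser + one_hot

obs = build_observation([1.5, 4.0, 9.9, 12.0], max_range=10.0, heading_bin=2)
# First four entries are the normalized ranges (readings beyond
# max_range clip to 1.0); the remaining eight are the one-hot heading.
```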

@Joll123
Author

Joll123 commented Jul 19, 2019

Thank you for your reply. I am working on obstacle-avoiding mobile robot navigation. I added obstacle distance information to the reward, which made it difficult for the neural network to find the target.
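The failure mode discussed in this thread can be illustrated with a toy step reward; the coefficients and the clearance term are made up for the example, not taken from this project:

```python
def step_reward(prev_goal_dist, goal_dist, min_obstacle_dist,
                clearance_coef=0.0):
    """Illustrative step reward: progress toward the goal plus an
    optional obstacle-clearance bonus (coefficients are invented)."""
    progress = prev_goal_dist - goal_dist  # positive when approaching goal
    return progress + clearance_coef * min_obstacle_dist

# Moving 0.1 m toward the goal inside a narrow corridor (0.3 m clearance):
advance = step_reward(5.0, 4.9, 0.3, clearance_coef=1.0)
# Spinning in place in open space (2.0 m clearance), zero progress:
spin = step_reward(5.0, 5.0, 2.0, clearance_coef=1.0)
# With the clearance term, idling in the open out-earns real progress,
# which matches the behavior both commenters describe.
```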
