Implementing FrozenLake-v1 by following examples #618

Closed Answered by Hananel-Hazan
gthampi asked this question in Q&A
At first glance, the code appears to be fine.

To clarify, the DQN example in BindsNET has two main steps. First, we train an artificial neural network (ANN) using DQN. Next, we copy the weights from the ANN (with slight modifications) to a spiking neural network (SNN) that has the same topology. Did you also train an ANN and then copy the weights in your code?
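The weight-transfer step can be sketched roughly as follows. This is an illustrative sketch only, not BindsNET's actual API: the layer names, the nested-list weight format, and the uniform `scale` factor (standing in for the "slight modifications") are all assumptions made for the example.

```python
# Hypothetical sketch of the ANN -> SNN weight-transfer step.
# The layer names and the uniform scaling are illustrative assumptions,
# not BindsNET's conversion API.

ann_weights = {
    "fc1": [[0.4, -0.2], [0.1, 0.8]],
    "fc2": [[0.5, 0.3]],
}

def transfer_weights(ann_weights, scale=1.0):
    """Copy ANN weights into a same-topology SNN, applying a uniform scale
    (the 'slight modification') so spike rates can approximate ANN activations."""
    snn_weights = {}
    for layer, rows in ann_weights.items():
        snn_weights[layer] = [[w * scale for w in row] for row in rows]
    return snn_weights

snn_weights = transfer_weights(ann_weights, scale=0.9)
```

The key point is that the SNN shares the ANN's topology, so the copy is layer-for-layer; only the magnitudes are adjusted.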

The other reinforcement learning (RL) code examples in BindsNET serve as proofs of concept, demonstrating that the framework can use reward-modulated signals to change the weights, as in an RL framework. However, this approach does not perform well.
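The idea of a reward-modulated weight change can be sketched as below. This is a minimal, generic illustration (an MSTDP-style update), not BindsNET's actual learning rule; the function name, the scalar eligibility trace, and the learning rate are assumptions for the example.

```python
# Minimal sketch of a reward-modulated weight update (MSTDP-style).
# Illustrative only; BindsNET's actual reward-modulated rules differ in detail.

def reward_modulated_update(weight, eligibility, reward, lr=0.01):
    """Scale an eligibility trace (pre/post spike coincidence)
    by the reward signal to get the weight change."""
    return weight + lr * reward * eligibility

w = 0.5
w = reward_modulated_update(w, eligibility=0.2, reward=1.0)   # reward strengthens
w = reward_modulated_update(w, eligibility=0.2, reward=-1.0)  # punishment reverses it
```

A positive reward pushes the weight in the direction of the recent spike correlation, while a negative reward pushes it the other way; with no reward the weight is unchanged.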

Regarding your other question, if the weights are not changing, I sugges…

Replies: 2 comments 1 reply

Answer selected by gthampi