Hello. As someone interested in both RL and KAN, it's great to see that someone is already working on combining the two.
It's really exciting to find that KAQN improves the performance and efficiency of RL, and I'm now planning to extend it to other RL algorithms (e.g. TD3, SAC). Before starting, though, I'd like to ask a few questions about your implementation.
offline training
KAQN seems to train only after an episode ends, which differs from a typical DQN implementation (which trains every step), and config.train_steps is set to 5, which seems very small to me. Does training KAQN every step lead to training failure?
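For reference, here is a minimal sketch of the two update schedules I'm comparing. Everything except config.train_steps (which the repo sets to 5) is a hypothetical stand-in: the random policy, the plain-list buffer, and the train_step placeholder are not your code, just enough scaffolding to show where the updates happen.

```python
import random
import gymnasium as gym

env = gym.make("CartPole-v1")
replay_buffer = []
train_steps = 5  # mirrors config.train_steps in the repo

def train_step(buffer, batch_size=32):
    """Placeholder for one gradient update on a sampled minibatch."""
    if len(buffer) >= batch_size:
        batch = random.sample(buffer, batch_size)
        _ = batch  # real code would compute the TD loss and step the optimizer

# Schedule A (what KAQN appears to do): update only after the episode ends.
for episode in range(3):
    state, _ = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # placeholder policy
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        replay_buffer.append((state, action, reward, next_state, done))
        state = next_state
    for _ in range(train_steps):  # only 5 updates per episode
        train_step(replay_buffer)

# Schedule B (standard DQN): one gradient update per environment step.
for episode in range(3):
    state, _ = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        replay_buffer.append((state, action, reward, next_state, done))
        state = next_state
        train_step(replay_buffer)  # update every step
```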
episode-count-based training schedule
KAQN seems to schedule the warm-up (random actions), copying the Q-network to the target Q-network, and update_grid_from_samples by episode count. However, since episode lengths vary, scheduling by episode count will not guarantee an identical training setup across runs. Is this intentional?
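To make the concern concrete, here is a hedged sketch of episode-count vs. step-count triggers (shown for the target-network copy; the same applies to the warm-up and update_grid_from_samples). All names and thresholds are hypothetical, with nn.Linear standing in for the KAN:

```python
import torch
import torch.nn as nn

q_net = nn.Linear(4, 2)        # stand-in for the KAN Q-network
target_net = nn.Linear(4, 2)   # stand-in for the target network

copy_every_episodes = 5        # episode-count trigger, as in the repo
copy_every_steps = 500         # step-count trigger, the usual alternative

total_steps = 0
for episode in range(100):
    # Episode lengths vary with the seed, so two runs reach episode 5,
    # 10, ... having collected different amounts of experience.
    episode_length = int(torch.randint(10, 200, (1,)).item())
    for _ in range(episode_length):
        total_steps += 1
        # Step-based trigger: fires after the same number of
        # environment steps in every run.
        if total_steps % copy_every_steps == 0:
            target_net.load_state_dict(q_net.state_dict())

    # Episode-based trigger: fires after a run-dependent number of
    # steps, so the training setup is not identical across seeds.
    if (episode + 1) % copy_every_episodes == 0:
        target_net.load_state_dict(q_net.state_dict())
```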