In the paper and in the OpenSpiel implementation, the NeuRD clip value is set to 10k.
open_spiel/open_spiel/python/algorithms/rnad/rnad.py, line 604 (commit 931e39a)
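For context, the clip is one of the NeRD hyperparameters in that file. Roughly paraphrased from memory (the exact dataclass and defaults may differ across commits):

```python
import dataclasses

# Rough paraphrase of the NeRD hyperparameters in rnad.py, from memory;
# treat the exact fields and defaults as approximate.
@dataclasses.dataclass(frozen=True)
class NerdConfig:
  beta: float = 2.0     # logit threshold used by the NeuRD update
  clip: float = 10_000  # advantage clip bound discussed in this issue
```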
From my testing, this is a major source of instability: the V-trace operator frequently outputs very large Q-value targets, and these flow directly into the advantage estimates. The large clip bound does little to prevent extremely large policy-loss values.

Setting this value to something lower, e.g. 10, makes for more consistent training (see the sketch below).
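To make the point concrete, here is a minimal sketch of where the clip sits in the loss pipeline. The function and argument names are mine for illustration, not the ones used in rnad.py:

```python
import jax.numpy as jnp

# Hypothetical sketch, not the exact rnad.py code: `clipped_advantage`
# and `clip_value` are illustrative names.
def clipped_advantage(advantages, clip_value=10_000.0):
    # The advantage estimates are derived from V-trace Q-value targets,
    # which can be very large; this clip is the only bound applied
    # before they enter the NeuRD policy loss. With clip_value=10_000
    # the bound almost never binds, so huge advantages pass through
    # unchanged; with clip_value=10.0 it binds often and keeps the
    # policy loss bounded.
    return jnp.clip(advantages, -clip_value, clip_value)
```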
Could the authors comment on why such a large clip value was used and why it didn't introduce this instability?
Thank you.
P.S. The README incorrectly reports a clip value of 100k instead of the 10k actually used.