Why are the rewards truncated in the `GPTRewardModel` class? What is the reason for this, and where can I find more information about it?
```python
# Retrieve the first index where the two trajectories diverge
# (tokens before this point are the shared prompt prefix)
divergence_ind = (chosen[i] != rejected[i]).nonzero()[0]
assert divergence_ind > 0  # the pair must share at least one prompt token
# Index into the correct rewards; end_ind is the end of the longer
# non-padded sequence, computed earlier in the loop
c_truncated_reward = chosen_rewards[i][divergence_ind:end_ind]
r_truncated_reward = rejected_rewards[i][divergence_ind:end_ind]
```
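For context, here is a minimal, self-contained sketch of how this truncation typically fits into a pairwise reward-model loss. The variable names follow the snippet above; `pad_id` and the log-sigmoid loss formulation are assumptions for illustration and may differ from the actual `GPTRewardModel` implementation:

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(chosen, rejected, chosen_rewards, rejected_rewards, pad_id):
    """Sketch of a pairwise reward-model loss over the divergent token span.

    chosen, rejected: LongTensor [batch, seq_len] token ids (shared prompt prefix)
    chosen_rewards, rejected_rewards: FloatTensor [batch, seq_len] per-token rewards
    pad_id: assumed padding token id
    """
    batch_size = chosen.shape[0]
    loss = 0.0
    for i in range(batch_size):
        # First index where the sequences diverge; rewards over the shared
        # prompt prefix are identical and carry no preference signal.
        divergence_ind = (chosen[i] != rejected[i]).nonzero()[0].item()

        # End of the longer non-padded sequence (this is the end_ind above).
        c_inds = (chosen[i] == pad_id).nonzero()
        c_ind = c_inds[0].item() if len(c_inds) > 0 else chosen.shape[1]
        r_inds = (rejected[i] == pad_id).nonzero()
        r_ind = r_inds[0].item() if len(r_inds) > 0 else rejected.shape[1]
        end_ind = max(c_ind, r_ind)

        # Compare rewards only on the span where the trajectories differ.
        c_truncated = chosen_rewards[i][divergence_ind:end_ind]
        r_truncated = rejected_rewards[i][divergence_ind:end_ind]

        # Bradley-Terry style pairwise loss: push the chosen rewards above
        # the rejected ones at each position in the divergent span.
        loss += -F.logsigmoid(c_truncated - r_truncated).mean()
    return loss / batch_size
```

The point of the truncation, then, would be that tokens shared by both sequences (the prompt) and padding contribute nothing useful to the comparison, so the loss is restricted to the span where the two completions actually differ.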
Thanks in advance