Tensor representation of chess-board for AlphaZero #987
-
Hello! First of all, I wanted to thank you for the incredible work done so far. For a school project, I want to implement my own AlphaZero algorithm for chess. To do this, I need the tensor representation of the chessboard and of the actions. Is there a way to get these representations using the OpenSpiel AlphaZero module? Thank you.
Replies: 1 comment
-
Hi, all the tensor representations of state are obtained via `State::ObservationTensor` or `State::InformationStateTensor`. In the case of chess, the code is here: https://github.com/deepmind/open_spiel/blob/e75bdf114de32c2211edf36443703a6e8846a3cb/open_spiel/games/chess.cc#L315. In Python, you would get this via `state.observation_tensor()`.

Roughly, it's a representation that is friendly to convolutional neural nets: planar inputs where each plane represents a piece type and side, plus some binary planes for castling rights. I recommend you start with this. To get a stacked version of it as used in the original AlphaZero (i.e. one that takes into account the past 4-8 moves of the history), you'd have to modify this function or make a game wrapper that overrides the `ObservationTensor` function. This would be a welcome general wrapper to have, though, and I'd encourage you to contribute it if you write it (there is an "easy" way of doing this in `game_transforms`).

The actions are always integers in the range {0, 1, ..., Game::NumDistinctActions() - 1}.