In principle almost all variants supported by Fairy-Stockfish can also be trained using this repository, including user-defined variants. However, currently there are a few additional limitations:
- At most 15 piece types per variant
- At most 40-60 pieces on the board (depending on the board size and the number of pockets, if any). For more pieces, use the `largedata=yes` flag when compiling the data generator.
- At most 2 times the number of files of pieces in hand per piece type (e.g., 2x8 = 16 in crazyhouse, or 2x9 = 18 in shogi). The absolute limit is currently 31, which is in any case larger than 2*12 for the largest supported board size.
- Some variants with peculiarities are supported in principle but require special handling to work well:
  - Variants where the king piece (if any) only has a limited area available to move to (such as the palace in minixiangqi) require manual code changes, with the exception of Xiangqi/Janggi, for which this is already handled.
  - For variants like racing kings, whose goal does not follow the usual symmetry, the training code should be adjusted to reflect this different symmetry in order to get a strong NNUE evaluation.
  - For check counting variants like 3check, an extra evaluation term is added on top of the NNUE evaluation, since the check counts are not considered as a feature in NNUE. Pure NNUE evaluation is therefore inaccurate and should not be used in training data generation (i.e., set `Use NNUE` to `true` or `false`, but not `pure`).
These limitations are mainly for pragmatic reasons in order to avoid an unnecessarily big training data format and/or NNUE file size.
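The piece-in-hand limit above can be sketched as a small calculation (the function name is hypothetical, not part of the repository):

```python
# Illustrative sketch of the limit on pieces in hand per piece type:
# twice the number of files, capped at the absolute limit of 31.
def max_pieces_in_hand(files: int) -> int:
    """Return the per-piece-type in-hand limit for a board with `files` files."""
    return min(2 * files, 31)

# Examples from the text:
print(max_pieces_in_hand(8))   # crazyhouse: 16
print(max_pieces_in_hand(9))   # shogi: 18
```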
Make sure that the installed `pytorch-lightning` version is <1.5.0.
Please check whether you applied the code changes described in NNUE-training#code-changes correctly, especially that `DATA_SIZE` is consistent with what the data generator prints when setting the variant. If you compiled with `largedata=yes`, it should be 1024, else 512.
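As a minimal sketch of this consistency check (the helper name is hypothetical, the values are from the text above):

```python
# Hypothetical helper to sanity-check the DATA_SIZE setting against the
# build configuration of the data generator.
def expected_data_size(largedata: bool) -> int:
    """Return 1024 if the generator was compiled with largedata=yes, else 512."""
    return 1024 if largedata else 512

# DATA_SIZE in variant.py must match what the data generator prints:
print(expected_data_size(largedata=False))  # 512
print(expected_data_size(largedata=True))   # 1024
```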
This is a known performance issue and likely due to the slow move generation and large branching factor in drop variants.
During training data generation, the number of FENs per second suddenly drops or generation gets stuck entirely.
Unless the drop in speed is only temporary due to other concurrent processes, the problem might be that the generator is running out of new positions, since it filters previously encountered positions using the transposition table. This can especially happen when a variant is very small (e.g., losalamos) or forced (e.g., antichess) and/or `random_multi_pv_diff` is very low. If this happens, try restarting generation with a higher `random_multi_pv_diff` and check whether this solves the problem.
This can have multiple potential reasons that should be checked:
- Is the Fairy-Stockfish version you are using compatible with the current network? Variants where `KING_SQUARES` does not equal the number of `SQUARES` are only supported starting from version 14.0.1. Other variants should be supported from version 14.
- Were the settings in variant.py correctly defined? You can check the values against what the training data generator prints when setting a variant, and you can also check the file size for plausibility using the approximate formula `FILE_SIZE_IN_BYTE >= SQUARES * KING_SQUARES * PIECE_TYPES * 2080` (for variants with drops it is slightly bigger). E.g., for Xiangqi: 90 * 9 * 7 * 2080 B ≈ 11 MB.
- If a variant has a very large NNUE network (>80MB), e.g., due to having many piece types, it can only be loaded with the large-board version, even if the variant itself might also work with the normal (8x8) version.
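The plausibility check above can be written out as a short calculation (names are illustrative, not from the repository):

```python
def min_nnue_file_size(squares: int, king_squares: int, piece_types: int) -> int:
    """Approximate lower bound on the NNUE file size in bytes.

    Variants with drops are slightly bigger than this estimate.
    """
    return squares * king_squares * piece_types * 2080

# Example from the text: Xiangqi has 90 squares, 9 king squares (the palace)
# and 7 piece types.
size = min_nnue_file_size(90, 9, 7)
print(size)  # 11793600 bytes, i.e. roughly 11 MB
```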
You can also try to load the NNUE network at https://fairy-stockfish-nnue-wasm.vercel.app/ in order to check if it is expected to work with a recent Fairy-Stockfish development version.
On Windows, install Visual Studio in order to make `nmake` available.