Introduction
If you are new to NNUE, you might want to start reading about NNUE itself in the Fairy-Stockfish wiki. To train NNUE networks, you will need:
- A CUDA capable GPU: https://developer.nvidia.com/cuda-gpus
- CUDA: https://developer.nvidia.com/cuda-downloads
- Training data generator: https://github.com/fairy-stockfish/variant-nnue-tools
- Training code: https://github.com/fairy-stockfish/variant-nnue-pytorch
Note: If you are already familiar with official Stockfish training, be aware that Fairy-Stockfish uses a HalfKAv2-based generalized NNUE architecture and a generalized `bin` training data format (with 512 instead of 256 bits) that is incompatible with the official Stockfish trainer. For deeper technical insight into the differences between the training data and NNUE networks in Fairy-Stockfish compared to official Stockfish, see the technical details.
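The generalized `bin` format is a sequence of fixed-size records. The sketch below shows what iterating over such a file can look like; the 72-byte record size (a 512-bit packed position followed by score/move/ply/result metadata) and the field layout are assumptions made for illustration, so verify them against the generator sources before relying on this:

```python
# Sketch: iterating over fixed-size records in a generated .bin file.
# The 72-byte record size and field layout are assumptions (64-byte packed
# position plus 8 bytes of metadata); check the generator sources.
import struct

RECORD_SIZE = 72  # assumed: 64-byte packed position + 8 bytes of metadata


def read_records(path):
    """Yield (packed_position, score, move, ply, result) tuples."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(RECORD_SIZE)
            if len(chunk) < RECORD_SIZE:
                break
            packed = chunk[:64]
            # '<hHHbx': little-endian int16 score, uint16 move, uint16 ply,
            # int8 game result, one padding byte (layout assumed).
            score, move, ply, result = struct.unpack("<hHHbx", chunk[64:])
            yield packed, score, move, ply, result
```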
Before starting the training, please check if any of the limitations mentioned in the FAQ apply to the variant you want to train.
The first step in the training of NNUE networks is the generation of training data. The training data generator code is based on Fairy-Stockfish and is available at https://github.com/fairy-stockfish/variant-nnue-tools. You can download it from the releases. Alternatively, building the training data generator code yourself from source works the same way as compiling Fairy-Stockfish.
See the page on training data generation in this wiki for more details.
In order to run NNUE training for a specific variant, the code requires minor adjustments to specify board size, piece types, etc. This is because the training code is variant-agnostic and, apart from those minor adaptations, has no knowledge of the rules of the variants. The training data generator prints the recommended training code changes when a variant is set via the `UCI_Variant` option; you just need to apply these changes to the `variant.h` and `variant.py` files in the training code.
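To see why these adjustments matter, consider how the size of the network's input layer depends on the variant. The constant names and values below are purely hypothetical (an 8x8 board with 6 piece types), and the real HalfKAv2 feature set merges some planes, so always use the exact lines printed by the generator instead:

```python
# Hypothetical variant parameters of the kind the generator prints for
# variant.py; names and values here are illustrative, not the real ones.
RANKS = 8
FILES = 8
PIECE_TYPES = 6  # including kings
SQUARES = RANKS * FILES  # 64

# A HalfKA-style input layer grows with king squares x pieces x squares,
# which is why the training code must know board size and piece counts.
# (Simplified: the actual HalfKAv2 feature set merges some planes.)
NUM_INPUTS = SQUARES * (SQUARES * PIECE_TYPES * 2)
print(NUM_INPUTS)  # 49152 for these illustrative values
```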
After that you can run the main `train.py` script to train an NNUE network with the generated training data. As a final step of the training, the generated network needs to be converted to a format compatible with Fairy-Stockfish using the `serialize.py` script.
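For orientation, the two invocations typically look something like the sketch below. The option names are assumed from the upstream nnue-pytorch trainer and all file names are hypothetical, so check `python train.py --help` in your checkout for the actual interface:

```python
# Sketch only: assumed option names and hypothetical file names.
# train.bin/val.bin stand for generated training/validation data;
# last.ckpt stands for the checkpoint written during training.
train_cmd = ["python", "train.py", "train.bin", "val.bin", "--gpus", "1"]

# serialize.py converts a checkpoint into a Fairy-Stockfish .nnue network.
serialize_cmd = ["python", "serialize.py", "last.ckpt", "nn.nnue"]

for cmd in (train_cmd, serialize_cmd):
    print(" ".join(cmd))
```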
See the page on NNUE training in this wiki for more details.
There are demos for running the training online using Kaggle or Colab, see
- https://www.kaggle.com/fabianfichter/variant-nnue-demo
- https://colab.research.google.com/drive/1ve1Q807qAki7m7k4ZG17oP48wORFhbQZ?usp=sharing
For detailed usage of the Kaggle notebook, see the Training through Kaggle GPU section.
In order to test a trained NNUE network, there are several options depending on the variant.
- cutechess is both available as a GUI and a command line client and can run engine matches for a large number of variants.
- variantfishtest is a simple variant-agnostic testing script that can be used to test arbitrary variants in Fairy-Stockfish.
- fairyfishtest is another Python script similar to variantfishtest and can also be used for all variants supported by Fairy-Stockfish. It is mainly used for testing four-player variants like bughouse.
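Whichever tool you use, the raw outcome of a match is a win/draw/loss count. The standard logistic Elo formula below converts such a result into an estimated strength difference; the testing scripts report a figure of this kind, though their exact error-bar computation may differ:

```python
# Standard logistic Elo estimate from a win/draw/loss match result.
import math


def elo_diff(wins, draws, losses):
    """Estimated Elo advantage of engine A over engine B."""
    games = wins + draws + losses
    score = (wins + 0.5 * draws) / games
    if score <= 0 or score >= 1:
        raise ValueError("need a mixed result to estimate Elo")
    return -400 * math.log10(1 / score - 1)


# Example: 120 wins, 60 draws, 100 losses out of 280 games.
print(round(elo_diff(120, 60, 100), 1))  # prints 24.9
```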
When a trained NNUE network performs well in testing, you can upload it via this form to make it available here. After that, please also update the list of current strongest NNUE networks by raising a pull request against the NNUE page in the website repository, so that others can easily find the network you trained.
- Both the training data generation tool as well as this repository only support the `bin` format and no `binpack` or `plain` formats.
- Many of the helper scripts/functionality in the training data generation and this repository are not maintained and likely broken, since they rely on chess-specific code. The essential functionality of generating `bin` training data and using it for NNUE training of HalfKAv2 networks with the `train.py` and `serialize.py` scripts is working though.
See the FAQ for more info on supported variants and common pitfalls.