Neural Net Training
The self-play games your client creates are used by the central server to improve the neural net. This process is called training. (Many people call running the client to produce self-play games "training", but in machine learning those games are only the input data for the actual training process.)
Some machine learning terms:
- Batch Size: How many positions the GPU can train on simultaneously. Set as large as your GPU can handle.
- Learning Rate: How fast the neural net weights are adjusted. Too high and the net learns nothing, or training diverges entirely. Too low and your progress is too slow.
- Sampling Ratio: How many times, on average, each position from the self-play games is used for training (see the worked example after this list). Too high and your net may overfit. Too low and your progress is too slow.
- Train/Test Sets: Best practice is to split your data into two sets: a train set and a test set. You use the train set to adjust the NN weights, and the test set to check that the NN generalizes what it learned to positions it has never trained on. This way you can detect whether your NN is overfitting (see the sketch after this list).
- Overfitting: If the network trains on the same positions too much or with too low a learning rate, it may memorize those positions and not generalize well to other similar positions. Larger learning rates, or other regularization such as L2 or dropout, can reduce this when more data is not available. See https://arxiv.org/pdf/1803.09820.pdf for discussion of how L2 and the learning rate affect this.
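
As a rough illustration of how the sampling ratio translates into a number of training steps, here is a small back-of-the-envelope calculation. All numbers are made up for the example and are not lczero's actual settings:

```python
# Hypothetical numbers for illustration only.
positions = 10_000_000   # positions in the self-play window
batch_size = 1024        # positions consumed per training step
sampling_ratio = 0.5     # each position used 0.5 times on average

# Each step consumes batch_size positions, so to use each position
# sampling_ratio times on average we need:
steps = sampling_ratio * positions / batch_size
print(int(steps))  # -> 4882
```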
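
And here is a minimal sketch of how these knobs fit together in a generic training loop. It uses PyTorch with a toy model and random data as stand-ins; it is not the lczero training pipeline, just an illustration of batch size, learning rate, L2 regularization, and train/test monitoring:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

# Toy stand-ins: a tiny net and random "positions" so the sketch runs as-is.
# The real lczero network and data pipeline are far larger and different.
torch.manual_seed(0)
inputs = torch.randn(2048, 64)   # fake position encodings
targets = torch.randn(2048, 1)   # fake value targets
dataset = TensorDataset(inputs, targets)

# Train/test split so overfitting shows up as diverging losses.
train_len = int(0.9 * len(dataset))
train_set, test_set = random_split(dataset, [train_len, len(dataset) - train_len])

batch_size = 256        # set as large as GPU memory allows
learning_rate = 0.1     # too high diverges, too low crawls
l2_weight = 1e-4        # L2 regularization via weight decay

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate,
                            momentum=0.9, weight_decay=l2_weight)
loss_fn = nn.MSELoss()

train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_set, batch_size=batch_size)

for epoch in range(5):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    # Evaluate on held-out positions: if train loss keeps falling while
    # test loss rises, the net is starting to overfit.
    model.eval()
    with torch.no_grad():
        test_loss = sum(loss_fn(model(x), y).item() for x, y in test_loader)
    print(f"epoch {epoch}: test loss {test_loss / len(test_loader):.4f}")
```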
This sheet shows how some of the hyperparameters are picked.