
Releases: LeelaChessZero/lc0

v0.23.1

03 Dec 19:06
  • Fixed a bug where Lc0 sometimes crashed during the match phase of training game generation.
  • Release packages now include a cuDNN version without the DLLs bundled.

v0.23.0

01 Dec 16:04
  • Fixed the order of BLAS options so that Eigen has lower priority, matching the assumption made by the check_opencl patch introduced in v0.23.0-rc2.

(no other changes since rc2)

v0.23.0-rc2

27 Nov 20:26
Pre-release
  • Fixes to nps and time reporting during search.
  • Introduced a DNNL BLAS build for modern CPUs, in addition to OpenBLAS.
  • Build fixes on macOS without OpenCL.
  • Fixed smart pruning and KLDGain trying to stop the search in go infinite mode.
  • The OpenCL package now includes a check_opencl tool to verify that the computation behaves sanely.
  • Fixed a bug in the interaction between short-sightedness and certainty propagation.

v0.23.0-rc1

21 Nov 20:36
Pre-release
  • Support for Fischer Random Chess (the UCI_Chess960 option enables FRC-style castling). Also added support for FRC-compatible weight files, but there is no training code yet.
  • New option --logit-q (UCI: LogitQ). Changes the subtree selection algorithm slightly, possibly making it stronger (experimental, off by default).
  • Lc0 now reports a WDL score. To enable it, use the --wdl-info command-line argument or the UCI_WdlInfo UCI option.
  • Added a "badgame split" mode during training: after the engine makes an inferior move due to temperature, the game is branched and later replayed from the branch position.
  • Added an experimental --short-sightedness (UCI: ShortSightedness) parameter, which treats longer variations as more "drawish".
  • Lc0 can now open Fat Fritz weight files.
  • Time management code refactoring. No functional changes, but it will make future time management changes easier.
  • The Lc0 logo is now printed in red! \o/
  • The command-line argument -v is now short for --verbose-move-stats.
  • Errors in --backend-opts parameter syntax are now reported.
  • The most basic version of the "certainty propagation" feature (actually without the "propagation"): if the engine sees a checkmate, it plays it (previously it could play another good move instead).
  • Various small changes: hidden options to control Dirichlet noise, floating point optimizations, better error reporting when an exception occurs in a worker thread, and better error messages in the CUDA backend.
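Several of the items above are toggled through ordinary UCI options. A hypothetical session enabling them might look like this (the option names come from the notes above; the engine's replies are omitted, and this is a sketch rather than a captured transcript):

```
uci
setoption name UCI_Chess960 value true    # FRC-style castling
setoption name UCI_WdlInfo value true     # report win/draw/loss score
setoption name LogitQ value true          # experimental subtree selection
isready
go infinite
```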

v0.22.0

05 Aug 10:35

No change from v0.22.0-rc1 other than version number.

v0.22.0-rc1

03 Aug 19:17
Pre-release

A bunch of small changes that piled up since the last major release.

(Do Not Use - incorrectly tagged) v0.21.5-rc1

03 Aug 11:35

Removed the softmax calculation from the backends; it is now applied after filtering out illegal moves, so spurious outputs on illegal moves can't reduce (or entirely remove) the quality of the policy values on the legal moves. This was especially noticeable on fp16 backends with nets trained using legal-move masking, but could theoretically be an improvement for any net.
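The idea can be sketched as follows. This is an illustrative reimplementation, not lc0's actual code; the function and variable names are invented:

```python
import math

def masked_policy(raw_logits, legal_moves):
    """Softmax over legal moves only: illegal-move logits are discarded
    before normalization, so garbage values there cannot distort the
    probabilities assigned to legal moves."""
    legal_logits = {m: raw_logits[m] for m in legal_moves}
    # Subtract the max for numerical stability before exponentiating.
    mx = max(legal_logits.values())
    exps = {m: math.exp(v - mx) for m, v in legal_logits.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}
```

Computing the softmax over all outputs first would let a huge spurious logit on an illegal move dominate the normalization and push legal-move probabilities toward zero, losing precision (especially in fp16) even if the illegal entries are filtered afterwards.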

v0.21.4

28 Jul 11:21

Two small changes in this release.

  • A fix for crashes that can occur when using --sticky-endgames.
  • Changed the false-positive value reported for WDL-style resign, and added average nodes per move to the tournament stats in selfplay mode.

v0.21.3

21 Jul 12:41

Bug fix release:

  • Fix for potential memory corruption/crash when using small networks or the WDL head with CUDA backends. #892
  • Fix for building with newer versions of meson. #904

v0.21.2

09 Jun 15:15

Changes since v0.21.2-rc3

  • Centipawn formula re-tweaked to show 128.00 pawns for checkmate instead of 127.99.
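To illustrate what such a mapping does, here is a sketch of converting the engine's Q value in [-1, 1] to centipawns, with the endpoints pinned to exactly ±128.00 pawns. The tan-based shape and the constants are assumptions for illustration, not lc0's actual formula:

```python
import math

def q_to_cp(q: float) -> float:
    """Map a value-head output q in [-1, 1] to centipawns.
    Illustrative only: the curve shape and constants below are invented,
    not the exact lc0 formula."""
    q = max(-1.0, min(1.0, q))
    if abs(q) == 1.0:
        # A proven win/loss reports exactly 128.00 pawns (12800 cp),
        # matching the behaviour described in the note above.
        return math.copysign(12800.0, q)
    return 90.0 * math.tan(1.56 * q)
```

The point of the tweak was the endpoint behaviour: without special-casing |q| = 1, rounding in the curve left checkmate displayed as 127.99 pawns.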

Since v0.21.1

Highlights:

  • --sticky-endgames (a minimal version of certainty propagation)
  • New centipawn formula
  • A way to exit training gracefully
  • Optimizations for GTX 16xx video cards (cudnn-fp16 now works)
  • Optimizations for larger filter sizes