Releases: explosion/thinc
v8.3.2: Fix regression in torch training, update ARM dependency
- Fix a regression in torch training introduced in v8.3.1
- Restore MacOS ARM wheels, which were missing from previous builds
- Fix compatibility with thinc-apple-ops
v8.3.1: Fix torch deprecation warning
`torch.cuda.amp` is deprecated as of PyTorch 2.4. This release updates the PyTorch shim (`shims/pytorch.py`) to use `torch.amp.autocast` instead of `torch.cuda.amp.autocast`.
Thanks to @Atlogit for the patch.
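A minimal before/after sketch of the change (not copied from the Thinc shim; it assumes a CUDA device is available):

```python
import torch

# Small module and input moved to the GPU for the example.
linear = torch.nn.Linear(16, 4).cuda()
x = torch.randn(8, 16).cuda()

# Deprecated since PyTorch 2.4:
# with torch.cuda.amp.autocast():
#     y = linear(x)

# Replacement: torch.amp.autocast takes the device type explicitly.
with torch.amp.autocast("cuda"):
    y = linear(x)
```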
v9.1.1: Restore wheels for MacOS ARM64
Previously we used a complicated build process with self-hosted runners to build wheels for platforms that GitHub Actions did not support. GitHub Actions has recently been adding ARM support, so we've simplified the CI process to rely on it exclusively.
This release restores the MacOS ARM64 wheels that were missing from the previous release. Linux ARM wheels are still pending, as GitHub Actions currently only supports Linux ARM runners for private repos. Cross-compilation with QEMU is possible in theory, but in practice the build timed out after several hours.
v9.1.0: Depend on numpy 2.0.0
Numpy is a build dependency of Thinc, and numpy 2.0 is not binary compatible with numpy 1.x (fair enough). This means a single build can't be compatible with both numpy v1 and numpy v2.
This release updates the v9 line to pin numpy 2.0 and builds against it. No other changes are made, so we have paired versions that differ only in their dependencies.
v8.3.0: Depend on numpy 2.0
Numpy is a build dependency of Thinc, and numpy 2.0 is not binary compatible with numpy 1.x (fair enough). This means a single build can't be compatible with both numpy v1 and numpy v2.
This release updates the pins to numpy 2.0 and builds against it. No other changes are made, so we have paired versions that differ only in their dependencies.
v8.2.5: Restrict numpy pin to <2.0.0
Numpy v2.0 isn't binary compatible with v1 (understandably). We build against numpy so we need to restrict the pin.
v8.2.4: Relax `nbconvert` and `typing_extensions` upper pins
v9.0.0: better learning rate schedules, integration of thinc-apple-ops
The main new feature of Thinc v9 is support for learning rate schedules that can take the training dynamics into account. For example, the new `plateau.v1` schedule scales the learning rate when no progress has been found after a given number of evaluation steps. Another visible change is that `AppleOps` is now part of Thinc, so it is no longer necessary to install `thinc-apple-ops` to use the AMX units on Apple Silicon.
✨ New features and improvements
- Learning rate schedules can now take the training step as well as an arbitrary set of keyword arguments. This makes it possible to pass information such as the parameter name and last evaluation score to determine the learning rate (#804).
- Added the `plateau.v1` schedule (#842). This schedule scales the learning rate if training is found to be stagnant for a given period (see the config sketch after this list).
- The functionality of `thinc-apple-ops` is integrated into Thinc (#927). Starting with this version of Thinc, it is no longer necessary to install `thinc-apple-ops`.
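As a rough illustration of the `plateau.v1` schedule, here is a hedged config sketch; the argument names (`max_patience`, `scale`, and the wrapped `schedule` block) are assumptions from memory, so check the schedules documentation for the exact signature:

```python
from thinc.api import Config, registry

CONFIG = """
[learn_rate]
@schedules = "plateau.v1"
max_patience = 2
scale = 0.5

[learn_rate.schedule]
@schedules = "constant.v1"
rate = 0.001
"""

# Resolving the config yields a Schedule object that the optimizer can
# query with the training step and extra information such as the last
# evaluation score.
learn_rate = registry.resolve(Config().from_str(CONFIG))["learn_rate"]
```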
🔴 Bug fixes
- Fix the use of thread-local storage (#917).
⚠️ Backwards incompatibilities
- Thinc v9.0.0 only supports Python 3.9 and later.
- Schedules are no longer generators, but implementations of the `Schedule` class (#804); see the sketch after this list.
- `thinc.backends.linalg` has been removed (#742). The same functionality is provided by BLAS implementations that are better tested and more performant.
- `thinc.extra.search` has been removed (#743). The beam search functionality in this module was strongly coupled to the spaCy transition parser and has therefore moved to spaCy in v4.
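For code that consumed schedules as generators, the calling convention changes roughly as sketched below (assuming the v9 `Schedule` object is queried with the step plus optional keyword arguments):

```python
from thinc.api import decaying

schedule = decaying(0.001, 1e-4)

# v8: schedules were generators and were advanced with next().
# learn_rate = next(schedule)

# v9: schedules are Schedule objects queried by step; extra keyword
# arguments (e.g. the last evaluation score) may also be passed.
learn_rate = schedule(step=0)
```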
👥 Contributors
@adrianeboyd, @danieldk, @honnibal, @ines, @kadarakos, @shadeMe, @svlandeg