Releases: explosion/thinc
v8.2.1: Support Python 3.12
✨ New features and improvements
Updates and binary wheels for Python 3.12.
👥 Contributors
v8.2.0: Disable automatic MXNet and TensorFlow imports
✨ New features and improvements
To improve loading times and reduce conflicts, MXNet and TensorFlow are no longer imported automatically (#890).
⚠️ Backwards incompatibilities
MXNet and TensorFlow support needs to be enabled explicitly. Previously, MXNet and TensorFlow were imported automatically if they were available in the current environment.
To enable MXNet:
from thinc.api import enable_mxnet
enable_mxnet()
To enable TensorFlow:
from thinc.api import enable_tensorflow
enable_tensorflow()
With spaCy CLI commands, you can provide this custom code using -c code.py. For training, use spacy train -c code.py, and to package your code with your pipeline, use spacy package -c code.py. A minimal code.py is sketched below.
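For example, a minimal sketch of such a code.py (assuming it only needs to run the enable call when spaCy imports it):

from thinc.api import enable_tensorflow

# Runs when spaCy imports this module via -c code.py, so TensorFlow
# support is registered before the pipeline is loaded.
enable_tensorflow()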
Future deprecation warning: built-in MXNet and TensorFlow support will be removed in Thinc v9. If you need MXNet or TensorFlow support in the future, you can transition to using a custom copy of the current MXNetWrapper or TensorFlowWrapper in your package or project.
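For reference, the current wrapper is used along these lines (a sketch assuming a small Keras model; keyword arguments may differ across versions):

import tensorflow as tf
from thinc.api import TensorFlowWrapper, enable_tensorflow

enable_tensorflow()  # required as of v8.2.0
# Wrap a Keras model so it can be composed with other Thinc layers.
tf_model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,))])
model = TensorFlowWrapper(tf_model)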
👥 Contributors
v8.1.12: Support zero-length batches and hidden sizes in reductions
v8.1.11: Support Pydantic v2, update package setup
✨ New features and improvements
- Update NumPy build constraints for NumPy v1.25 (#885).
- Switch from distutils to setuptools/sysconfig (#888).
- Allow Pydantic v2 using transitional v1 support (#891); see the sketch after this list.
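The transitional approach follows roughly this pattern (a sketch, not Thinc's exact code):

# Prefer Pydantic v2's bundled v1 compatibility namespace; fall back to
# the plain import when Pydantic v1 is installed.
try:
    from pydantic.v1 import BaseModel, ValidationError
except ImportError:
    from pydantic import BaseModel, ValidationError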
📖 Documentation and examples
- Fix typo in example code (#879).
👥 Contributors
@adrianeboyd, @Ankush-Chander, @danieldk, @honnibal, @ines, @svlandeg
v8.1.10: Lazy loading for CuPy kernels and additional CuPy and MPS improvements
✨ New features and improvements
- Implement pad as a CUDA kernel (#860).
- Avoid a host-to-device/device-to-host roundtrip when using unflatten (#861).
- Improve the exception when CuPy/PyTorch MPS is not installed (#863).
- Lazily load custom cupy kernels (#870); see the sketch after this list.
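The lazy-loading pattern looks roughly like this (a hypothetical sketch, not Thinc's actual internals):

import cupy

_KERNEL_SRC = r"""
extern "C" __global__ void scale(const float* x, float* y, float a, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) y[i] = a * x[i];
}
"""
_scale_kernel = None

def get_scale_kernel():
    # Compile on first use, so importing the module stays cheap and does
    # not require a CUDA context.
    global _scale_kernel
    if _scale_kernel is None:
        _scale_kernel = cupy.RawKernel(_KERNEL_SRC, "scale")
    return _scale_kernel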
🔴 Bug fixes
- Initially load TorchScript models on CPU for MPS devices (#864).
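The underlying pattern (a sketch using plain PyTorch APIs; Thinc applies this inside its shims) is:

import torch

# Load the TorchScript model onto CPU first, then move it to the MPS
# device, which is the workaround applied for MPS in #864.
model = torch.jit.load("model.pt", map_location="cpu")
model = model.to("mps")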
👥 Contributors
@adrianeboyd, @danieldk, @honnibal, @ines, @shadeMe, @svlandeg
v8.1.9: Type fixes
v8.1.8: New faster mapping layer and bug fixes for resizeable layer
✨ New features and improvements
🔴 Bug fixes
- Make resizable layer work with textcat and transformers (#820).
📖 Documentation
👥 Contributors
@adrianeboyd, @danieldk, @essenmitsosse, @honnibal, @ines, @kadarakos, @patjouk, @polm, @svlandeg
v8.1.7: Updated layers and extended requirements
✨ New features and improvements
- Add with_flatten.v2 layer with symmetric input/output types (#821); see the sketch after this list.
- Extend to typing_extensions v4.4.x for Python 3.6 and 3.7 (#833).
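A usage sketch (assuming the layer is resolved through Thinc's registry; the inner Relu is just an example):

from thinc.api import Relu, registry

# Resolve the layer by its registered name and wrap an inner layer so it
# runs over the flattened concatenation of a list of arrays.
with_flatten_v2 = registry.layers.get("with_flatten.v2")
model = with_flatten_v2(Relu(nO=8, nI=8))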
📖 Documentation
👥 Contributors
@adrianeboyd, @albertvillanova, @danieldk, @essenmitsosse, @honnibal, @ines, @shadchin, @shadeMe, @svlandeg
v8.1.6: New and updated layers, bug fixes and more
✨ New features and improvements
- Update to mypy 0.990 (#801).
- Extend to wasabi v1.1 (#813).
- Add SparseLinear.v2 to fix indexing issues (#754).
- Add TorchScriptWrapper_v1 (#802); see the sketch after this list.
- Add callbacks to facilitate lazy-loading models in PyTorchShim (#796).
- Make all layer defaults serializable (#808).
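A usage sketch for the new wrapper (assuming the scripted module is accepted as the first argument):

import torch
from thinc.api import TorchScriptWrapper_v1

# Script a PyTorch module and wrap it as a Thinc Model.
scripted = torch.jit.script(torch.nn.Linear(4, 2))
model = TorchScriptWrapper_v1(scripted)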
🔴 Bug fixes
- Add missing packaging requirement (#799).
- Correct sequence length error messages for reduce_first/last (#807).
- Update CupyOps.asarray to always copy cupy arrays to the current device (#812).
- Fix types for sequences passed to Ops.asarray* (#819); see the sketch after this list.
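For context, the asarray family converts plain sequences to arrays on the current backend (a minimal sketch):

from thinc.api import get_current_ops

ops = get_current_ops()  # NumpyOps on CPU, CupyOps on GPU
# asarray2f and friends accept nested Python sequences and return a
# backend array of the indicated dtype and rank (here: 2d float32).
arr = ops.asarray2f([[1.0, 2.0], [3.0, 4.0]])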
👥 Contributors
@adrianeboyd, @danieldk, @frobnitzem, @honnibal, @ines, @richardpaulhudson, @ryndaniels, @shadeMe, @svlandeg