Releases: ml-explore/mlx
v0.6.0
Highlights:
- Faster quantized matrix-vector multiplies
- `mx.fast.scaled_dot_product_attention` fused op
Core
- Memory allocation API improvements
- Faster GPU reductions for smaller sizes (between 2 and 7x)
- `mx.fast.scaled_dot_product_attention` fused op (usage sketch below)
- Faster quantized matrix-vector multiplications
- Pickle support for `mx.array`
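
A minimal sketch of the fused attention op together with array pickling. The `(batch, heads, sequence, head_dim)` layout and the keyword-only `scale` argument reflect my reading of the API and should be treated as assumptions:

```python
import math
import pickle
import mlx.core as mx

B, H, L, D = 1, 8, 128, 64  # assumed (batch, heads, sequence, head_dim) layout
q = mx.random.normal((B, H, L, D))
k = mx.random.normal((B, H, L, D))
v = mx.random.normal((B, H, L, D))

# Fused scaled dot-product attention in a single op
out = mx.fast.scaled_dot_product_attention(q, k, v, scale=1.0 / math.sqrt(D))

# mx.array now round-trips through pickle
restored = pickle.loads(pickle.dumps(out))
```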
NN
- Dilation on convolution layers
Bugfixes
- Fix `mx.topk`
- Fix reshape for zero sizes
v0.5.0
Highlights:
- Faster convolutions.
  - Up to 14x faster for some common sizes.
  - See benchmarks
Core
- `mx.where` properly handles `inf`
- Faster and more general convolutions
  - Input and kernel dilation
  - Asymmetric padding
  - Support for cross-correlation and convolution
- `atleast_{1,2,3}d` accept any number of arrays
NN
- `nn.Upsample` layer (usage sketch below)
  - Supports nearest neighbor and linear interpolation
  - Any number of dimensions
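
A short sketch of the new `nn.Upsample` layer on a channels-last input; the `scale_factor`/`mode` constructor arguments are my reading of the API:

```python
import mlx.core as mx
import mlx.nn as nn

# Channels-last input: (batch, height, width, channels)
x = mx.arange(4).astype(mx.float32).reshape((1, 2, 2, 1))

up = nn.Upsample(scale_factor=2, mode="nearest")
y = up(x)
print(y.shape)  # (1, 4, 4, 1)
```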
Optimizers
- Linear schedule and schedule joiner:
  - Use for e.g. linear warmup + cosine decay (see the sketch below)
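
A sketch of joining a linear warmup with cosine decay; the `linear_schedule`, `cosine_decay`, and `join_schedules` names and argument orders are my best reading of this release:

```python
import mlx.optimizers as optim

warmup_steps = 100
warmup = optim.linear_schedule(0.0, 1e-3, warmup_steps)  # 0 -> 1e-3 over 100 steps
cosine = optim.cosine_decay(1e-3, 10_000)                # then decay toward 0
lr_schedule = optim.join_schedules([warmup, cosine], [warmup_steps])

# Schedules are step -> value callables and can be passed as the learning rate
optimizer = optim.Adam(learning_rate=lr_schedule)
```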
Bugfixes
- `arange` throws on `inf` inputs
- Fix CMake build with MLX
- Fix `logsumexp` `inf` edge case
- Fix grad of power w.r.t. exponent edge case
- Fix compile with `inf` constants
- Fix temporary bug in convolution
v0.4.0
Highlights:
- Partial shapeless compilation
  - Default shapeless compilation for all activations
  - Can be more than 5x faster than uncompiled versions
- CPU kernel fusion
  - Some functions can be up to 10x faster
Core
- CPU compilation
- Shapeless compilation for some cases (see the sketch after this list)
  - `mx.compile(function, shapeless=True)`
- Up to 10x faster scatter: benchmarks
- `mx.atleast_1d`, `mx.atleast_2d`, `mx.atleast_3d`
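
A quick sketch of shapeless compilation: the compiled program is reused across input shapes instead of being retraced per shape. The example function is arbitrary:

```python
import math
import mlx.core as mx

def gelu(x):
    return x * (1 + mx.erf(x / math.sqrt(2))) / 2

compiled_gelu = mx.compile(gelu, shapeless=True)

# Different input shapes reuse the same compiled program
print(compiled_gelu(mx.random.normal((16, 32))).shape)
print(compiled_gelu(mx.random.normal((128, 256))).shape)
```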
Bugfixes
- Bug with `tolist` with `bfloat16` and `float16`
- Bug with `argmax` on M3
v0.3.0
Highlights:
- `mx.fast` subpackage
- Custom `mx.fast.rope` up to 20x faster
Core
- Support metadata with `safetensors`
- Up to 5x faster scatter and 30% faster gather
- 40% faster `bfloat16` quantized matrix-vector multiplies
- `mx.fast` subpackage with a fast RoPE
- Context manager `mx.stream` to set the default device (see the sketch below)
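
A brief sketch of the `mx.stream` context manager, which scopes the default device/stream for the ops inside the block:

```python
import mlx.core as mx

a = mx.random.uniform(shape=(512, 512))

# Run this matmul on the CPU without changing the global default
with mx.stream(mx.cpu):
    b = a @ a.T

c = a @ a.T  # back on the default device (the GPU on Apple silicon)
mx.eval(b, c)
```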
NN
- Average and Max pooling layers for 1D and 2D inputs
Optimizers
- Support schedulers for e.g. learning rates
- A few basic schedulers (see the sketch after this list):
  - `optimizers.step_decay`
  - `optimizers.cosine_decay`
  - `optimizers.exponential_decay`
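
A minimal sketch of attaching a scheduler to an optimizer; schedulers are callables from step to value, and the `cosine_decay(init, decay_steps)` argument order is assumed from its name:

```python
import mlx.optimizers as optim

lr = optim.cosine_decay(1e-2, 1000)      # decays from 1e-2 toward 0 over 1000 steps
optimizer = optim.SGD(learning_rate=lr)  # the optimizer evaluates the schedule each step

print(lr(0), lr(500), lr(1000))
```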
Bugfixes
- Fix bug in remainder with negative numerators and integers
- Fix bug with slicing into softmax
- Fix quantized matmuls with sizes that are not multiples of 32
v0.2.0
Highlights:
- `mx.compile` makes stuff go fast
  - Some functions are up to 10x faster (benchmarks)
  - Training models anywhere from 10% to twice as fast (benchmarks)
  - Simple syntax for compiling full training steps
Core
- `mx.compile` function transformation (see the sketch after this list)
- Find devices properly for iOS
- Up to 10x faster GPU gather
- `__abs__` overload for `abs` on arrays
- `loc` and `scale` parameters for `mx.random.normal`
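
A small sketch of `mx.compile` on a plain element-wise function, also touching the new `abs` overload and the `loc`/`scale` arguments to `mx.random.normal`; the exact fusion behavior is a simplification:

```python
import mlx.core as mx

def fn(x, y):
    # abs() goes through the new __abs__ overload on arrays
    return mx.mean(mx.square(x - y)) + mx.sum(abs(x))

fast_fn = mx.compile(fn)  # fuses the element-wise graph into fewer kernels

x = mx.random.normal((1024, 1024))
y = mx.random.normal((1024, 1024), loc=1.0, scale=0.5)
print(fast_fn(x, y))
```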
NN
- Margin ranking loss
- BCE loss with weights
Bugfixes
- Fix for broken eval during function transformations
- Fix `mx.var` to give `inf` with `ddof >= nelem`
- Fix loading empty modules in `nn.Sequential`
v0.1.0
Highlights:
- Memory use improvements:
  - Gradient checkpointing for training with `mx.checkpoint`
  - Better graph execution order
  - Buffer donation
Core
- Gradient checkpointing with `mx.checkpoint` (see the sketch after this list)
- CPU-only QR factorization `mx.linalg.qr`
- Release Python GIL during `mx.eval`
- Depth-based graph execution order
- Lazy loading arrays from files
- Buffer donation for reduced memory use
- `mx.diag`, `mx.diagonal`
- Breaking: `array.shape` is a Python tuple
- GPU support for `int64` and `uint64` reductions
- vmap over reductions and arg reductions: `sum`, `prod`, `max`, `min`, `all`, `any`, `argmax`, `argmin`
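
A short sketch of gradient checkpointing with `mx.checkpoint` (recompute intermediates during the backward pass instead of storing them), plus the CPU-only QR factorization; the explicit `stream=mx.cpu` argument for QR is my assumption:

```python
import mlx.core as mx

def block(x):
    # Intermediates of this function are recomputed in the backward pass
    return mx.tanh(x @ x.T).sum()

grad_fn = mx.grad(mx.checkpoint(block))

x = mx.random.normal((256, 256))
print(grad_fn(x).shape)

# QR factorization currently runs on the CPU only
q, r = mx.linalg.qr(x, stream=mx.cpu)
```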
NN
- Softshrink activation
Bugfixes
- Comparisons with `inf` work, and fix `mx.isinf`
- Bug fix with RoPE cache
- Handle empty Matmul on the CPU
- Negative shape checking for `mx.full`
- Correctly propagate `NaN` in some binary ops: `mx.logaddexp`, `mx.maximum`, `mx.minimum`
- Fix > 4D non-contiguous binary ops
- Fix `mx.log1p` with `inf` input
- Fix SGD to apply weight decay even with 0 momentum
v0.0.11
Highlights:
- GGUF improvements:
  - Native quantizations `Q4_0`, `Q4_1`, and `Q8_0`
  - Metadata
Core
- Support for reading and writing GGUF metadata (see the sketch after this list)
- Native GGUF quantization (`Q4_0`, `Q4_1`, and `Q8_0`)
- Quantize with group size of 32 (2x32, 4x32, and 8x32)
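
A hedged sketch of GGUF metadata round-tripping. The `mx.save_gguf(file, arrays, metadata)` signature and the `return_metadata` flag on `mx.load` are assumptions about how this is exposed:

```python
import mlx.core as mx

weights = {"w": mx.random.normal((16, 16))}
metadata = {"description": "tiny demo checkpoint"}

mx.save_gguf("demo.gguf", weights, metadata)               # write arrays + metadata
loaded, meta = mx.load("demo.gguf", return_metadata=True)  # read both back
```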
NN
- `Module.save_weights` supports safetensors
- `nn.init` package with several commonly used neural network initializers (sketch below)
- Binary cross entropy and cross entropy losses can take probabilities as targets
- `Adafactor` in `nn.optimizers`
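
A brief sketch of saving weights as safetensors and applying an `nn.init` initializer; `glorot_uniform` is one plausible initializer name from the package and is an assumption here:

```python
import mlx.core as mx
import mlx.nn as nn

model = nn.Linear(8, 4)

# The .safetensors extension selects the safetensors format
model.save_weights("linear.safetensors")

# nn.init functions return initializers that map an array to freshly initialized values
init_fn = nn.init.glorot_uniform()
model.update({"weight": init_fn(model.weight)})
```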
Bugfixes
- Fix `isinf` and friends for integer types
- Fix array creation from list of Python ints to `int64`, `uint`, and `float32`
- Fix power VJP for `0` inputs
- Fix out-of-bounds `inf` reads in `gemv`
- `mx.arange` crashes on `NaN` inputs
v0.0.10
Highlights:
- Faster matmul: up to 2.5x faster for certain sizes, benchmarks
- Fused matmul + addition (for faster linear layers)
Core
- Quantization supports sizes other than multiples of 32
- Faster GEMM (matmul)
- AddMM primitive (fused addition and matmul); see the sketch after this list
- `mx.isnan`, `mx.isinf`, `isposinf`, `isneginf`
- `mx.tile`
- VJPs for `scatter_min` and `scatter_max`
- Multi-output split primitive
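
A quick sketch of the fused addition + matmul primitive behind faster linear layers, assuming an `mx.addmm(c, a, b)` calling convention:

```python
import mlx.core as mx

x = mx.random.normal((32, 64))
w = mx.random.normal((64, 16))
b = mx.zeros((16,))

y = mx.addmm(b, x, w)  # fused: b + x @ w in one primitive
y_ref = b + x @ w      # unfused reference computation
```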
NN
- Losses: Gaussian negative log-likelihood
Misc
- Performance enhancements for graph evaluation with lots of outputs
- Default PRNG seed is based on current time instead of 0
- Primitive VJPs take the output as input, reducing redundant work without the need for simplification
- Format boolean printing in Python style when used from Python
Bugfixes
- Scatter < 32 bit precision and integer overflow fix
- Overflow with `mx.eye`
- Report Metal out of memory issues instead of silent failure
- Change `mx.round` to follow NumPy, which rounds to even
v0.0.9
Highlights:
- Initial (and experimental) GGUF support
- Support Python buffer protocol (easy interoperability with NumPy, JAX, TensorFlow, PyTorch, etc.)
- `at[]` syntax for scatter-style operations: `x.at[idx].add(y)` (also `min`, `max`, `prod`, etc.)
Core
- Array creation from other `mx.array`s (`mx.array([x, y])`)
- Complete support for Python buffer protocol
- `mx.inner`, `mx.outer`
- `mx.logical_and`, `mx.logical_or`, and operator overloads
- Array `at` syntax for scatter ops (see the sketch after this list)
- Better support for in-place operations (`+=`, `*=`, `-=`, ...)
- VJP for scatter and scatter add
- Constants (`mx.pi`, `mx.inf`, `mx.newaxis`, …)
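
A small sketch of the `at[]` syntax: unlike a plain fancy-indexed update, it accumulates repeated indices with a true scatter:

```python
import mlx.core as mx

x = mx.zeros((5,))
idx = mx.array([0, 1, 1, 3])
updates = mx.ones((4,))

y = x.at[idx].add(updates)  # scatter-add; index 1 receives both updates
print(y)                    # [1, 2, 0, 1, 0]
```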
NN
- GLU activation
- `cosine_similarity` loss
- Cache for `RoPE` and `ALiBi`
Bugfixes / Misc
- Fix data type with `tri`
- Fix saving non-contiguous arrays
- Fix graph retention for in-place state, and remove `retain_graph`
- Multi-output primitives
- Better support for loading devices
v0.0.7
Core
- Support for loading and saving HuggingFace's safetensors format (see the sketch after this list)
- Transposed quantization matmul kernels
- `mlx.core.linalg` sub-package with `mx.linalg.norm` (Frobenius, infinity, p-norms)
- `tensordot` and `repeat`
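
A short sketch of the safetensors round-trip and the new norm function; `mx.save_safetensors` is my assumption for the writer's name:

```python
import mlx.core as mx

weights = {"w1": mx.random.normal((8, 8)), "b1": mx.zeros((8,))}

mx.save_safetensors("weights.safetensors", weights)
loaded = mx.load("weights.safetensors")

# The default matrix norm is the Frobenius norm
print(mx.linalg.norm(loaded["w1"]))
```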
NN
- Layers: `Bilinear`, `Identity`, `InstanceNorm`, `Dropout2D`, `Dropout3D`
- More customizable `Transformer` (pre/post norm, dropout)
- More activations: `SoftSign`, `Softmax`, `HardSwish`, `LogSoftmax`
- Configurable scale in `RoPE` positional encodings
- Losses: `hinge`, `huber`, `log_cosh`
Misc
- Faster GPU reductions for certain cases
- Change to memory allocation to allow swapping