Update NEWS #2247

Merged: 1 commit, Apr 29, 2023
NEWS.md: 38 changes (29 additions, 9 deletions)
@@ -1,5 +1,11 @@
# Flux Release Notes

See also [GitHub's releases page](https://github.com/FluxML/Flux.jl/releases) for a complete list of PRs merged before each release.

## v0.13.16
* Most Greek-letter keyword arguments are deprecated in favour of ASCII names.
Thus `LayerNorm(3; ϵ=1e-4)` (not `ε`!) should become `LayerNorm(3; eps=1e-4)`.
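
A minimal sketch of the rename, using the `LayerNorm` call from the note above:

```julia
using Flux

# deprecated Greek spelling, warns on v0.13.16 and later:
# ln = LayerNorm(3; ϵ = 1e-4)

# ASCII replacement:
ln = LayerNorm(3; eps = 1e-4)
```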

## v0.13.15
* Added [MultiHeadAttention](https://github.com/FluxML/Flux.jl/pull/2146) layer.
* `f16, f32, f64` now specifically target floating point arrays (i.e. integer arrays and other types are preserved).
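
A small sketch of the distinction (the named tuple is illustrative):

```julia
using Flux

nt = (w = rand(Float64, 2, 2), idx = [1, 2, 3])
nt32 = f32(nt)

eltype(nt32.w)    # Float32: floating-point arrays are converted
eltype(nt32.idx)  # Int64: integer arrays pass through unchanged
```
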
@@ -14,22 +20,36 @@

## v0.13.13
* Added `f16` which changes precision to `Float16`, recursively; see the sketch after this list.
* Most layers standardise their input to `eltype(layer.weight)`, [#2156](https://github.com/FluxML/Flux.jl/pull/2156),
to limit the cost of accidental Float64 promotion.
* Friendlier errors from size mismatches [#2176](https://github.com/FluxML/Flux.jl/pull/2176).
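
A sketch of `f16` and the eltype standardisation from #2156 (model sizes are arbitrary):

```julia
using Flux

model = Chain(Dense(10 => 5, relu), Dense(5 => 2))

model16 = f16(model)       # converts all weights recursively
eltype(model16[1].weight)  # Float16

# layers standardise input eltype to match their weights, so stray
# Float64 input no longer promotes the whole forward pass:
y = model(rand(Float64, 10))  # converted to Float32 (with a one-time warning)
eltype(y)                     # Float32
```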

## v0.13.12
* CUDA.jl 4.0 compatibility.
* Use `dropout` from NNlib as back-end for `Dropout` layer.

## v0.13.9
* New method of `train!` using Zygote's "explicit" mode. Part of a move away from "implicit" `Params`.
* Added [Flux.setup](https://github.com/FluxML/Flux.jl/pull/2082), which is `Optimisers.setup` with extra checks,
and translation from deprecated "implicit" optimisers like `Flux.Optimise.Adam` to new ones from Optimisers.jl.
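
A minimal sketch of the new explicit style (model, data, and learning rate are made up):

```julia
using Flux

model = Dense(2 => 1)
data = [(randn(Float32, 2, 8), randn(Float32, 1, 8))]

opt_state = Flux.setup(Adam(0.01), model)  # checks the model, translates the optimiser

Flux.train!(model, data, opt_state) do m, x, y
    Flux.Losses.mse(m(x), y)
end
```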

## v0.13.7
* Added [`@autosize` macro](https://github.com/FluxML/Flux.jl/pull/2078), as another way to use `outputsize`.
* Export `Embedding`.
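
For instance, `_` below is filled in from the size given up front, via `outputsize` (sizes are arbitrary):

```julia
using Flux

model = @autosize (28, 28, 1, 32) Chain(
    Conv((3, 3), _ => 5, relu),
    Flux.flatten,
    Dense(_ => 10),
)
```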

## v0.13.6
* Use the package [OneHotArrays.jl](https://github.com/FluxML/OneHotArrays.jl) instead of having the same code here.

## v0.13.4
* Added [`PairwiseFusion` layer](https://github.com/FluxML/Flux.jl/pull/1983).
* Re-name `ADAM` to `Adam`, etc (with deprecations).

## v0.13 (April 2022)

* After a deprecations cycle, the datasets in `Flux.Data` have
been removed in favour of [MLDatasets.jl](https://github.com/JuliaML/MLDatasets.jl).
* `params` is not exported anymore since it is a common name and is also exported by Distributions.jl.
* `flatten` is not exported anymore due to clash with `Iterators.flatten`.
* Remove Juno.jl progress bar support as it is now obsolete.
* `Dropout` gained improved compatibility with Int and Complex arrays and is now twice-differentiable.
* Notation `Dense(2 => 3, σ)` for channels matches `Conv`; the equivalent `Dense(2, 3, σ)` still works.
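
The two notations from the last bullet, side by side:

```julia
using Flux

d_new = Dense(2 => 3, σ)  # pair notation, matching Conv((3, 3), 2 => 3, σ)
d_old = Dense(2, 3, σ)    # older equivalent, still accepted
```
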
@@ -70,7 +90,7 @@
* CUDA.jl 3.0 support
* Bug fixes and optimizations.

## v0.12 (March 2021)

* Add [identity_init](https://github.com/FluxML/Flux.jl/pull/1524).
* Add [Orthogonal Matrix initialization](https://github.com/FluxML/Flux.jl/pull/1496) as described in [Exact solutions to the nonlinear dynamics of learning in deep linear neural networks](https://arxiv.org/abs/1312.6120).
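
A sketch of both initialisers via the `init` keyword (layer sizes are arbitrary, written in current syntax):

```julia
using Flux

c = Conv((3, 3), 4 => 4; init = Flux.identity_init)  # starts as an identity mapping
d = Dense(5 => 5; init = Flux.orthogonal)            # orthogonal weight matrix
```
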
@@ -95,7 +115,7 @@
* Adds the [AdaBelief](https://arxiv.org/abs/2010.07468) optimiser.
* Other new features and bug fixes (see GitHub releases page)
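
A sketch in the implicit `Params` style of this era (since superseded; see v0.13.9 above):

```julia
using Flux

model = Dense(2, 1)
loss(x, y) = Flux.Losses.mse(model(x), y)
data = [(randn(Float32, 2, 8), randn(Float32, 1, 8))]

Flux.train!(loss, Flux.params(model), data, AdaBelief())
```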

## v0.11 (July 2020)

* Moved CUDA compatibility to use [CUDA.jl instead of CuArrays.jl](https://github.com/FluxML/Flux.jl/pull/1204)
* Add [kaiming initialization](https://arxiv.org/abs/1502.01852) methods: [kaiming_uniform and kaiming_normal](https://github.com/FluxML/Flux.jl/pull/1243)
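
For example (sizes are arbitrary):

```julia
using Flux

w = Flux.kaiming_uniform(3, 4)  # 3×4 Float32 matrix, Kaiming/He uniform
layer = Dense(4, 3; init = Flux.kaiming_normal)
```
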
@@ -116,14 +136,14 @@
* Functors have now moved to [Functors.jl](https://github.com/FluxML/Flux.jl/pull/1174) to allow for their use outside of Flux.
* Added [helper functions](https://github.com/FluxML/Flux.jl/pull/873) `Flux.convfilter` and `Flux.depthwiseconvfilter` to construct weight arrays for convolutions outside of layer constructors, so that custom implementations need not depend on the default layers (see the sketch after this list).
* `dropout` function now has a mandatory [active](https://github.com/FluxML/Flux.jl/pull/1263)
keyword argument. The `Dropout` struct (whose behavior is left unchanged) is the recommended choice for common usage.
* and many more fixes and additions...
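
A sketch of the `Flux.convfilter` helper mentioned above (channel sizes are arbitrary):

```julia
using Flux

# weight array for a 3×3 convolution, 3 input => 16 output channels,
# built without constructing a Conv layer:
w = Flux.convfilter((3, 3), 3 => 16)
size(w)  # (3, 3, 3, 16)
```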

## v0.10.1 - v0.10.4

See GitHub's releases.

## v0.10.0 (November 2019)

* The default AD engine has switched from [Tracker to Zygote.jl](https://github.com/FluxML/Flux.jl/pull/669)
- The dependency on Tracker.jl has been removed.
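
A minimal sketch of Zygote's implicit-mode gradient as used from v0.10 onwards:

```julia
using Flux

W = rand(Float32, 2, 3)
x = rand(Float32, 3)

gs = gradient(() -> sum(W * x), Flux.params(W))
gs[W]  # gradient of sum(W * x) with respect to W, a 2×3 array
```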