Releases · FluxML/Flux.jl
Flux v0.14.23
Merged pull requests:
- Support for lecun normal weight initialization (#2311) (@RohitRathore1)
- Some small printing upgrades (#2344) (@mcabbott)
- simplify test machinery (#2498) (@CarloLucibello)
- Correct dead link for "quickstart page" in README.md (#2499) (@zengmao)
- make `gpu(x) = gpu_device()(x)` (#2502) (@CarloLucibello) (see the sketch after this list)
- some cleanup (#2503) (@CarloLucibello)
- unbreak some data movement cuda tests (#2504) (@CarloLucibello)
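For #2502 above, `gpu` is now just a thin wrapper over the device objects provided through MLDataDevices.jl. A minimal sketch of the new equivalence, assuming Flux re-exports `gpu_device`/`cpu_device` (otherwise `using MLDataDevices` provides them) and that a backend package such as CUDA.jl is loaded when a real GPU is wanted:

```julia
using Flux   # plus e.g. CUDA, cuDNN to make an NVIDIA GPU functional

# gpu_device / cpu_device come from MLDataDevices.jl; assumed re-exported by Flux here.
x = rand(Float32, 3, 4)

device = gpu_device()          # picks a functional GPU backend, or falls back to the CPU
x_dev  = device(x)             # after #2502, gpu(x) is defined as exactly this call
x_back = cpu_device()(x_dev)   # move back, equivalent to cpu(x_dev)
```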
Closed issues:
- Add support for lecun normal weight initialization (#2290)
- `using Flux, cuDNN` freezes, but `using Flux, CUDA, cuDNN` works (#2346)
- Problem with RNN and CUDA. (#2352)
- since new version: Flux throws error for train! / update! even on quick start problem (#2358)
- Cannot take `gradient` of L2 regularization loss (#2441)
- Potential bug of RNN training flow (#2455)
- Problem with documentation (#2485)
- Flux has no Lecun Normalization weight init function? (#2491) (see the sketch after this list)
- Zygote fails to differentiate through Flux.params on Julia v1.11 (#2497)
- ERROR: UndefVarError: `ADAM` not defined in `Main` in flux (#2507)
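Issues #2290 and #2491 above were closed by the initializer added in #2311. A minimal sketch, assuming it is exposed as `Flux.lecun_normal` (check the current docs for the exact name):

```julia
using Flux

# LeCun-normal initialization: weights drawn from a normal distribution with
# variance 1/fan_in, the scheme usually paired with selu activations.
layer = Dense(128 => 64, selu; init = Flux.lecun_normal)
W = Flux.lecun_normal(64, 128)   # standalone use: a 64×128 Float32 matrix
```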
Flux v0.14.22
Merged pull requests:
- Bump actions/checkout from 4.2.0 to 4.2.1 (#2489) (@dependabot[bot])
- handle data movement with MLDataDevices.jl (#2492) (@CarloLucibello) (see the sketch after this list)
- remove some v0.13 deprecations (#2493) (@CarloLucibello)
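Following #2492 above, `gpu`/`cpu` movement is delegated to MLDataDevices.jl, so whole models move through the same device objects as plain arrays. A rough sketch (no particular backend assumed; the CPU device is the fallback):

```julia
using Flux

model = Chain(Dense(2 => 16, relu), Dense(16 => 1))
dev   = gpu_device()                 # e.g. CUDADevice, AMDGPUDevice, MetalDevice or CPUDevice
model = model |> dev                 # recursively moves all parameters, like model |> gpu
x     = rand(Float32, 2, 32) |> dev
y     = model(x)                     # forward pass on whichever device was selected
```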
Flux v0.14.21
Merged pull requests:
- Update ci.yml for macos-latest to use aarch64 (#2481) (@ViralBShah)
- Remove leading empty line in example (#2486) (@blegat)
- Bump actions/checkout from 4.1.7 to 4.2.0 (#2487) (@dependabot[bot])
- fix: CUDA package optional for FluxMPIExt (#2488) (@askorupka)
Flux v0.14.20
Merged pull requests:
- feat: Distributed data parallel training support (#2464) (@askorupka)
- Run Enzyme tests only on CUDA CI machine (#2478) (@pxl-th)
- Adapt to pending Enzyme breaking change (#2479) (@wsmoses)
- Update TagBot.yml (#2480) (@ViralBShah)
- Bump patch version (#2483) (@wsmoses)
Flux v0.14.19
Closed issues:
- Model saved under Flux v0.14.16 does not load on v0.14.17 (#2476)
Flux v0.14.18
Flux v0.14.17
Merged pull requests:
- Add Enzyme train function (#2446) (@wsmoses) (see the sketch after this list)
- Bump actions/checkout from 4.1.5 to 4.1.7 (#2460) (@dependabot[bot])
- Add output padding for ConvTranspose (#2462) (@guiyrt)
- Fix ConvTranspose symmetric non-constant padding (#2463) (@paulnovo)
- CompatHelper: add new compat entry for Enzyme at version 0.12, (keep existing compat) (#2466) (@github-actions[bot])
- move enzyme to extension (#2467) (@CarloLucibello)
- Fix function `_size_check()` (#2472) (@gruberchr)
- Fix ConvTranspose output padding on AMDGPU (#2473) (@paulnovo)
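#2446 above adds a `train!` path that uses Enzyme instead of Zygote. A sketch under the assumption that the entry point is `Flux.train!` dispatched on an `Enzyme.Duplicated`-wrapped model, whose shadow copy receives the gradients; consult the Flux docs for the exact supported form:

```julia
using Flux, Enzyme

model     = Chain(Dense(2 => 8, relu), Dense(8 => 1))
opt_state = Flux.setup(Adam(1f-3), model)
data      = [(rand(Float32, 2, 32), rand(Float32, 1, 32)) for _ in 1:10]

# Pair the model with a zeroed shadow copy; Enzyme accumulates gradients into the shadow.
dup = Enzyme.Duplicated(model, Enzyme.make_zero(model))

# Assumed to mirror the Zygote-based signature, with the Duplicated model in place of the model.
Flux.train!((m, x, y) -> Flux.mse(m(x), y), dup, data, opt_state)
```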
Closed issues:
- Hoping to offer a version without cuda (#2155)
- ConvTranspose errors with symmetric non-constant pad (#2424) (see the sketch after this list)
- Create a flag to use Enzyme as the AD in training/etc. (#2443)
- Can't load a Fluxml trained & saved model. Getting ERROR: CUDA error: invalid device context (code 201, ERROR_INVALID_CONTEXT) (#2461)
- Requires deprecated cuNN.jl package (#2470)
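Two of the entries above concern `ConvTranspose` padding: #2462 adds output padding and #2463 (closing #2424) fixes symmetric non-constant `pad` tuples. A small sketch, assuming the new keyword is spelled `outpad`:

```julia
using Flux

x = rand(Float32, 10, 10, 3, 1)

# Per-dimension (symmetric but non-constant) padding used to error for ConvTranspose (#2424).
ct1 = ConvTranspose((3, 3), 3 => 4; pad = (2, 1), stride = 2)

# Output padding (#2462; keyword name assumed to be `outpad`) grows the output by a fixed
# amount, which helps to exactly invert the shape of a strided Conv.
ct2 = ConvTranspose((3, 3), 3 => 4; stride = 2, outpad = 1)

size(ct1(x)), size(ct2(x))   # both layers now build and run
```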
Flux v0.14.16
Merged pull requests:
- Make sure first example in Custom Layers docs uses type parameter (#2415) (@BioTurboNick)
- Add GPU GC comment to Performance Tips (#2416) (@BioTurboNick)
- Fix some typos in docs (#2418) (@JoshuaLampert)
- fix component arrays test (#2419) (@CarloLucibello)
- Bump julia-actions/setup-julia from 1 to 2 (#2420) (@dependabot[bot])
- documentation update (#2422) (@CarloLucibello)
- remove `public dropout` (#2423) (@mcabbott)
- Allow BatchNorm on CUDA with track_stats=false (#2427) (@paulnovo) (see the sketch after this list)
- Bump actions/checkout from 4.1.2 to 4.1.3 (#2428) (@dependabot[bot])
- Add working downloads badge (#2429) (@pricklypointer)
- Bump actions/checkout from 4.1.3 to 4.1.4 (#2430) (@dependabot[bot])
- Add tip for non-CUDA users (#2434) (@micahscopes)
- Add hint for choosing a different GPU backend (#2435) (@micahscopes)
- Patch `Flux._isleaf` for abstract arrays with bitstype elements (#2436) (@jondeuce)
- Bump julia-actions/cache from 1 to 2 (#2437) (@dependabot[bot])
- Bump actions/checkout from 4.1.4 to 4.1.5 (#2438) (@dependabot[bot])
- Enzyme: bump version and mark models as working [test] (#2439) (@wsmoses)
- Enable remaining enzyme test (#2442) (@wsmoses)
- Bump AMDGPU to 0.9 (#2449) (@pxl-th)
- Do not install all GPU backends at once (#2453) (@pxl-th)
- CompatHelper: add new compat entry for BSON at version 0.3, (keep existing compat) (#2457) (@github-actions[bot])
- remove BSON dependence (#2458) (@CarloLucibello)
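Referring to #2427 above: a `BatchNorm` built with `track_stats=false` previously errored on the CUDA path. A minimal sketch of the now-working pattern (any functional GPU backend works; without one, `gpu` is a no-op):

```julia
using Flux   # plus CUDA, cuDNN for the case #2427 actually fixes

bn = BatchNorm(8; track_stats = false)              # keeps no running mean/variance
m  = Chain(Dense(4 => 8), bn, Dense(8 => 1)) |> gpu

x = rand(Float32, 4, 16) |> gpu
y = m(x)   # with track_stats=false the batch statistics are computed on the fly
```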
Closed issues:
- How to have a stable GPU memory while being performant? (#780)
- Why is Flux.destructure type unstable? (#2405)
- tests are failing due to ComponentArrays (#2411)
- Significant time spent moving medium-size arrays to GPU, type instability (#2414)
- Dense layers with shared parameters (#2432)
- why is my `withgradient` type unstable? (#2456)
Flux v0.14.15
Merged pull requests:
- Restore some support for Tracker.jl (#2387) (@mcabbott)
- start testing Enzyme (#2392) (@CarloLucibello)
- Add Ignite.jl to ecosystem.md (#2395) (@mcabbott)
- Bump actions/checkout from 4.1.1 to 4.1.2 (#2401) (@dependabot[bot])
- More lazy strings (#2402) (@lassepe)
- Fix dead link in docs (#2403) (@BioTurboNick)
- Improve errors for conv layers (#2404) (@mcabbott)
Flux v0.14.14
Merged pull requests:
- Bump actions/cache from 3 to 4 (#2371) (@dependabot[bot])
- Use LazyString in depwarn (#2400) (@mcabbott)