Some fast paths + type fixes #2137
````diff
@@ -210,7 +210,7 @@ true
 ```
 """
 struct LayerNorm{F,D,T,N}
-  λ::F
+  λ::F  # this field is not used
   diag::D
   ϵ::T
   size::NTuple{N,Int}
````
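For context on the `# this field is not used` note, here is a minimal sketch (not Flux's actual implementation; all `Toy*` names are made up) of the pattern where the activation travels inside the `diag` callable, so a separately stored `λ` is never applied in the forward pass:

```julia
# Toy illustration only: the affine callable closes over the activation,
# so the outer layer never touches its own λ field.
struct ToyScale{A,B,F}
    γ::A
    β::B
    λ::F
end
(s::ToyScale)(x) = s.λ.(s.γ .* x .+ s.β)

struct ToyLayerNorm{F,D,T}
    λ::F     # stored, but never called below
    diag::D  # e.g. a ToyScale built with the same λ
    ϵ::T
end

function (l::ToyLayerNorm)(x)
    μ  = sum(x) / length(x)
    σ² = sum(abs2, x .- μ) / length(x)
    return l.diag((x .- μ) ./ sqrt(σ² + l.ϵ))  # note: l.λ is not referenced
end

layer = ToyLayerNorm(tanh, ToyScale(ones(4), zeros(4), tanh), 1f-5)
layer(randn(4))  # activation applied via diag, not via layer.λ
```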
```diff
@@ -254,16 +254,16 @@ function _norm_layer_forward(
     end
   end
 
-  o = _norm_layer_forward(x, μ, σ², l.ϵ)
-  hasaffine(l) || return l.λ.(o)
-
-  γ = reshape(l.γ, affine_shape)
-  β = reshape(l.β, affine_shape)
-  return l.λ.(γ .* o .+ β)
+  s = (inv∘sqrt).(σ² .+ l.ϵ)  # faster to un-fuse this, smaller... ideally mean_var(x, ε)?
+  if hasaffine(l)
+    γ = reshape(l.γ, affine_shape)  # ideally reshape on construction, store Scale?
```
Review thread on this hunk:

- The issue with packing the affine params/activation in a …
- I see. We should probably still make these arrays the size required on construction, and make them even if they won't be used, instead of this: https://github.com/FluxML/NNlibCUDA.jl/blob/master/src/cudnn/batchnorm.jl#L21
- Does Flux ever call that NNlib code?
- It's the only remaining CUDA.jl-reliant functionality left in this repo aside from the Functors stuff: https://github.com/FluxML/Flux.jl/blob/master/src/cuda/cudnn.jl. An absolute kludge, as you can see, which is why these routines should be moved to NNlib sooner rather than later.
- Oh right, I forgot about that file. But I remember seeing it when trying to remove CUDA... agree that NNlib is the right place.
- Did a quick blame of the NNlibCUDA line above and came up with FluxML/NNlibCUDA.jl#36. I don't recall why the arrays are allocated instead of just set as …
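As a rough sketch of the construction-time alternative floated above (purely hypothetical names, not the actual Flux or NNlibCUDA API): allocate channel-sized γ/β up front even when they will not be trained, so the forward pass never has to build dummy arrays the way the linked batchnorm.jl line does. The hunk then continues below.

```julia
# Hypothetical sketch only: always store concrete, channel-sized affine arrays.
struct ToyBatchNorm{V<:AbstractVector}
    γ::V
    β::V
    affine::Bool   # only trainability would differ; the arrays always exist
end

function ToyBatchNorm(channels::Integer; affine::Bool = true)
    γ = ones(Float32, channels)    # identity scale even when affine == false
    β = zeros(Float32, channels)   # zero shift even when affine == false
    return ToyBatchNorm(γ, β, affine)
end
```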
```diff
+    β = reshape(l.β, affine_shape)
+    return l.λ.(γ .* s .* (x .- μ) .+ β)
+  else
+    return l.λ.(s .* (x .- μ))
+  end
 end
 
-@inline _norm_layer_forward(x, μ, σ², ϵ) = (x .- μ) ./ sqrt.(σ² .+ ϵ)
-
 function _track_stats!(
   bn, x::AbstractArray{T, N}, μ, σ², reduce_dims,
 ) where {T, N}
```
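A quick standalone check (a sketch, not the PR's tests; the shapes mimic a BatchNorm-style reduction) that the rewritten affine path agrees with the old formulation:

```julia
using Statistics

x  = randn(Float32, 28, 28, 8, 4)              # W × H × C × N
μ  = mean(x; dims=(1, 2, 4))                   # statistics are 1 × 1 × C × 1
σ² = var(x; dims=(1, 2, 4), corrected=false)
ϵ  = 1f-5
γ  = reshape(randn(Float32, 8), 1, 1, 8, 1)    # affine params broadcast over channels
β  = reshape(randn(Float32, 8), 1, 1, 8, 1)

y_old = γ .* ((x .- μ) ./ sqrt.(σ² .+ ϵ)) .+ β   # previous fused formulation
s     = (inv∘sqrt).(σ² .+ ϵ)                     # new: one inv/sqrt per statistic
y_new = γ .* s .* (x .- μ) .+ β

@assert isapprox(y_old, y_new; rtol = 1f-4)
```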
Review thread on the `s = (inv∘sqrt).(σ² .+ l.ϵ)` line:

- Since it's unfused by Zygote anyhow, might as well do that here.
- For just the forward pass, it was still faster to un-fuse this, to do inv & sqrt N times, not N^3 times.
- Isn't that what your comment is saying? I might be misunderstanding: does "un-fuse" here refer to extracting `s` as its own variable, or to writing `s = inv.(sqrt.(σ² .+ l.ϵ))` instead of `s = (inv∘sqrt).(σ² .+ l.ϵ)`?
- Yes, maybe we are agreeing. The comment was meant to answer "why make `s` at all", since without it things got slower. `(inv∘sqrt)` is probably premature optimisation.
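To make the "inv & sqrt N times, not N^3" point concrete, here is a small counting sketch (not from the PR; `counted_sqrt` and the shapes are made up). With the fully fused broadcast, the sqrt is re-evaluated for every element of the output; pre-computing `s` evaluates it once per statistic:

```julia
const SQRT_CALLS = Ref(0)
counted_sqrt(x) = (SQRT_CALLS[] += 1; sqrt(x))

x  = randn(Float32, 32, 32, 16, 8)                    # W × H × C × N
μ  = sum(x; dims=(1, 2, 4)) ./ (32 * 32 * 8)
σ² = sum(abs2, x .- μ; dims=(1, 2, 4)) ./ (32 * 32 * 8)
ϵ  = 1f-5

SQRT_CALLS[] = 0
fused = (x .- μ) ./ counted_sqrt.(σ² .+ ϵ)            # dot fusion: one sqrt per element of x
@show SQRT_CALLS[]                                    # 32*32*16*8 = 131_072 calls

SQRT_CALLS[] = 0
s = inv.(counted_sqrt.(σ² .+ ϵ))                      # un-fused: one sqrt per channel
unfused = s .* (x .- μ)
@show SQRT_CALLS[]                                    # 16 calls
```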