
Got an error, while trying to implement softplus with beta #1216

Closed
jakeoung opened this issue Jun 9, 2020 · 1 comment

Comments


jakeoung commented Jun 9, 2020

I want to define a new softplus with a parameter beta (similar to PyTorch's).

using Flux
using CUDA

mysoftplus(x, β=100f0) = 1f0 / β * log(1f0 + exp(β*x))
CUDA.@cufunc mysoftplus(x, β=100f0) = 1f0 / β * CUDA.log(1f0 + CUDA.exp(β*x))

# g(x) = CUDA.logsoftplus.(x)
# CUDA.@cufunc g(x) = CUDA.log.(x)

function Net(ndim)
    net = Chain(Dense(ndim, 2, mysoftplus))
    return net
end

net = Net(3)
net = Flux.fmap(cu, net)
x = cu(zeros(3,4))
net(x)
gradient( x -> sum(net(x)), x )

I don't know what happened, but the following is the error message. (I tried to search for a similar error, but it didn't help.)

┌ Warning: calls to Base intrinsics might be GPU incompatible
│ exception =
│ You called log(x::Float32) in Base.Math at special/log.jl:290, maybe you intended to call log(x::Float32) in CUDA at /home/jakoo/.julia/packages/CUDA/5t6R9/src/device/cuda/math.jl:73 instead?
│ Stacktrace:
│ [1] log at special/log.jl:290
│ [2] #20 at /home/jakoo/.julia/packages/GPUArrays/JqOUg/src/host/broadcast.jl:57
└ @ GPUCompiler ~/.julia/packages/GPUCompiler/lqbF2/src/irgen.jl:68
ERROR: LoadError: GPU compilation of kernel broadcast(CUDA.CuKernelContext, CuDeviceArray{Tuple{Float32,typeof(∂(mysoftplus))},2,CUDA.AS.Global}, Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},Zygote.var"#1750#1757"{Zygote.Context,typeof(mysoftplus)},Tuple{Base.Broadcast.Extruded{CuDeviceArray{Float32,2,CUDA.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}}}}) failed
KernelError: passing and using non-bitstype argument

Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},Zygote.var"#1750#1757"{Zygote.Context,typeof(mysoftplus)},Tuple{Base.Broadcast.Extruded{CuDeviceArray{Float32,2,CUDA.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}}}}, which is not isbits:
.f is of type Zygote.var"#1750#1757"{Zygote.Context,typeof(mysoftplus)} which is not isbits.
.context is of type Zygote.Context which is not isbits.
.cache is of type Union{Nothing, IdDict{Any,Any}} which is not isbits.

Passing non-isbits types is only allowed if they are unused by the kernel.

ToucheSir (Member) commented:

This now works (and probably has for a while), and as a bonus it no longer needs the @cufunc line.
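A minimal sketch of what the updated reproduction might look like on recent Flux/CUDA versions, per the comment above: the plain scalar definition is used directly, with no CUDA.@cufunc line. This is an untested assumption of the working setup (it requires a CUDA-capable GPU), using Flux's gpu helper rather than Flux.fmap(cu, net):

```julia
using Flux, CUDA

# Plain scalar definition; on recent CUDA.jl the GPU compiler handles
# ordinary log/exp inside broadcasts, so no @cufunc variant is needed.
mysoftplus(x, β=100f0) = 1f0 / β * log(1f0 + exp(β * x))

net = Chain(Dense(3, 2, mysoftplus)) |> gpu   # move parameters to the GPU
x = CUDA.zeros(Float32, 3, 4)                 # Float32 input on the device

net(x)                                        # forward pass
gradient(x -> sum(net(x)), x)                 # backward pass through the custom activation
```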
