using OneHotArrays #2025
Conversation
GPU test failure seems to be this:

julia> using CUDA, OneHotArrays, NNlibCUDA
julia> CUDA.allowscalar(false)
julia> x = [1, 3, 2];
julia> y = onehotbatch(x, 0:3)
4×3 OneHotMatrix(::Vector{UInt32}) with eltype Bool:
⋅ ⋅ ⋅
1 ⋅ ⋅
⋅ ⋅ 1
⋅ 1 ⋅
julia> y2 = onehotbatch(x |> cu, 0:3)
ERROR: Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore are only permitted from the REPL for prototyping purposes.
If you did intend to index this array, annotate the caller with @allowscalar.
Stacktrace:
[1] error(s::String)
@ Base ./error.jl:33
[2] assertscalar(op::String)
@ GPUArraysCore ~/.julia/packages/GPUArraysCore/rSIl2/src/GPUArraysCore.jl:78
[3] getindex
@ ~/.julia/packages/GPUArrays/gok9K/src/host/indexing.jl:9 [inlined]
[4] iterate
@ ./abstractarray.jl:1144 [inlined]
[5] iterate
@ ./abstractarray.jl:1142 [inlined]
[6] _onehotbatch(data::CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, labels::NTuple{4, Int64})
@ OneHotArrays ~/.julia/packages/OneHotArrays/Moo4n/src/onehot.jl:87
[7] onehotbatch(::CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, ::UnitRange{Int64})
@ OneHotArrays ~/.julia/packages/OneHotArrays/Moo4n/src/onehot.jl:84
[8] top-level scope

Because #1959 doesn't exist in OneHotArrays.
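Until the equivalent of #1959 lands in OneHotArrays, a minimal workaround sketch (assuming `cu` can adapt the resulting `OneHotMatrix`, which OneHotArrays supports via Adapt) is to encode on the CPU and move the result over afterwards:

```julia
# Workaround sketch, not the eventual OneHotArrays fix: avoid scalar iteration
# over the CuArray by encoding on the CPU and transferring the result.
using CUDA, OneHotArrays

x = cu([1, 3, 2])                       # labels living on the GPU
y = onehotbatch(Array(x), 0:3) |> cu    # encode on the CPU, then move to the GPU
```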
Downstream failure for Transformers seems to be related:
Maybe it shouldn't export OneHotArray? Cc @chengchingwen
Personally, I would prefer not having it exported, btw.
Just out of curiosity, do we really need a dependency on OneHotArrays.jl directly? It seems that none of the code/functions in Flux explicitly needs OneHotArray. It could be a completely separate package, and people who need it could just load it themselves.
Ok. I think nothing was exported before, so for now this PR shouldn't export anything either. And longer term, indeed, there's no strong reason for Flux to depend on this. Maybe Flux@0.14 can simply drop it?
Then do we need a deprecation warning for accessing those functions from Flux in the next patch release?
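A rough sketch of what such a shim could look like, purely illustrative (the name and message below are assumptions, not the actual Flux code):

```julia
# Hypothetical deprecation shim: forward the old Flux name to OneHotArrays
# and emit a depwarn so users know where the function now lives.
import OneHotArrays

function onehot(args...; kwargs...)
    Base.depwarn("`Flux.onehot` has moved to OneHotArrays.jl; use `OneHotArrays.onehot` instead.", :onehot)
    return OneHotArrays.onehot(args...; kwargs...)
end
```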
CI tells me that the … Possibly those methods could be changed to dispatch on …
Also these: Flux.jl/src/layers/recurrent.jl, line 203 (at b8bdc2d)
I think …
(I can take this up in another PR, or the changes can be made to this PR itself!)
Sorting out docs in another PR sounds great. Not so clear whether it wants to be included like NNlib / MLUtils or pushed out to ecosystem.md; maybe that depends on whether Flux@0.14 is going to load it at all. Then the goal of this PR is only to remove the code, so that we don't have two current versions -- e.g. #1959 happened after the package was created, which is confusing.
I think with a rebase this should be good to go. The longer we wait, the more things will depend on the Flux-internal version (e.g. #2031).
The downstream errors appear unrelated (FastAI and Metalhead for sure). AtomicGraphNets.jl has not run CI for 2 months, but the error appears to be related to the SciML stack.
I just ran the AtomicGraphNets.jl tests locally against the current release. They throw the same errors, so we can safely ignore those. @rkurchin, you may want to look into them.
* using OneHotArrays
* rm tests
* skip a test
* don't export, add depwarns
* back to using
This removes onehot.jl since the package is now registered, JuliaRegistries/General#64647. Tests could be removed too. Maybe docs need updating, though?
Closes #1544
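For context, a small usage sketch of how the same functionality is reached once it comes from the registered package (behaviour unchanged from the user's side):

```julia
# After this PR, the one-hot encoding functions come from OneHotArrays.jl.
using OneHotArrays

y = onehotbatch([1, 3, 2], 0:3)   # 4×3 OneHotMatrix, same as Flux's old copy
onecold(y, 0:3)                   # recovers [1, 3, 2]
```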