
cudnnFindConvolutionAlgorithmWorkspaceSize uses removed function cached_memory #1101

Closed
DrChainsaw opened this issue Aug 13, 2021 · 3 comments · Fixed by #1103
Labels
bug Something isn't working

Comments

@DrChainsaw

Sanity checks (read this first, then remove this section)

  • [x] Make sure you're reporting a bug; for general questions, please use Discourse or
    Slack.

  • [x] If you're dealing with a performance issue, make sure you disable scalar iteration
    (CUDA.allowscalar(false)). Only file an issue if that shows scalar iteration happening
    in CUDA.jl or Base Julia, as opposed to your own code.

  • [x] If you're seeing an error message, follow the error message instructions, if any
    (e.g. inspect code with @device_code_warntype). If you can't solve the problem using
    that information, make sure to post it as part of the issue.

  • [x] Always ensure you're using the latest version of CUDA.jl, and if possible, please
    check the master branch to see if your issue hasn't been resolved yet.

If your bug is still valid, please go ahead and fill out the template below.

Describe the bug

The function cudnnFindConvolutionAlgorithmWorkspaceSize uses CUDA.cached_memory, which was removed in 6ab0d42. This causes convolution ops using CUDA to fail.

To reproduce

julia> using CUDA

julia> CUDA.CUDNN.cudnnFindConvolutionAlgorithmWorkspaceSize([1])
ERROR: UndefVarError: cached_memory not defined
Stacktrace:
 [1] cudnnFindConvolutionAlgorithmWorkspaceSize(x::Vector{Int64})
   @ CUDA.CUDNN E:\Programs\julia\.julia\packages\CUDA\zx5iI\lib\cudnn\convolution.jl:235
 [2] top-level scope
   @ REPL[4]:1
Manifest.toml

Tested in a project with many other dependencies, but the bug should be obvious.

Expected behavior

cudnnFindConvolutionAlgorithmWorkspaceSize should not crash so convolution ops work :)

Version info

Details on Julia:

# please post the output of:
julia> versioninfo()
Julia Version 1.7.0-beta3.0
Commit e76c9dad42 (2021-07-07 08:12 UTC)       
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz
  WORD_SIZE: 64    
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.0 (ORCJIT, haswell)
Environment:
  JULIA_DEPOT_PATH = E:/Programs/julia/.julia
  JULIA_EDITOR = code
  JULIA_NUM_THREADS = 6

Details on CUDA:

# please post the output of:
julia> CUDA.versioninfo()
CUDA toolkit 11.4.1, artifact installation
CUDA driver 11.4.0

Libraries:
- CUBLAS: 11.5.4
- CURAND: 10.2.5
- CUFFT: 10.5.1
- CUSOLVER: 11.2.0
- CUSPARSE: 11.6.0
- CUPTI: 14.0.0
- NVML: missing
- CUDNN: 8.20.2 (for CUDA 11.4.0)
  Downloaded artifact: CUTENSOR
- CUTENSOR: 1.3.0 (for CUDA 11.2.0)

Toolchain:
- Julia: 1.7.0-beta3.0
- LLVM: 12.0.0
- PTX ISA support: 3.2, 4.0, 4.1, 4.2, 4.3, 5.0, 6.0, 6.1, 6.3, 6.4, 6.5, 7.0
- Device capability support: sm_35, sm_37, sm_50, sm_52, sm_53, sm_60, sm_61, sm_62, sm_70, sm_72, sm_75, sm_80

1 device:
  0: NVIDIA GeForce RTX 2080 Ti (sm_75, 9.900 GiB / 11.000 GiB available)

Additional context

N/A

@DrChainsaw DrChainsaw added the bug Something isn't working label Aug 13, 2021
@DhairyaLGandhi
Member

ref FluxML/model-zoo#313

@DhairyaLGandhi
Member

I guess the issue is that NNlib code was removed, which also took the tests with it. @maleadt could we bring back CI on NNlib/Flux-specific cases? This would currently break most of Flux.

@mkschleg

In case anyone gets here and needs a fix in the meantime. Reverting to CUDA v3.3.5 fixes this.
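For reference, a minimal way to apply that workaround in a Pkg-managed environment (a sketch based on the version named above; the pin step is my own addition and is optional):

```julia
using Pkg

# Revert to the release from before the regression mentioned in this issue.
Pkg.add(name = "CUDA", version = "3.3.5")

# Optional: pin the package so a later `Pkg.update()` doesn't pull in
# the broken release again before the fix lands.
Pkg.pin("CUDA")
```

Once a fixed release is out, `Pkg.free("CUDA")` followed by `Pkg.update("CUDA")` undoes the pin.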
