
Fix CI #82

Merged: maleadt merged 7 commits into master from tb/ci on Nov 9, 2021
Conversation

@maleadt (Member) commented Nov 8, 2021

Update CUDA.jl compat bounds and adapt to some changes.
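For context, a compat bump like this is just an edit to the package's Project.toml; the exact bound is in the diff, but (with an illustrative version number) the entry takes this shape:

[compat]
CUDA = "3.5"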

@thomasfaingnaert Kernels don't compile, but you were already looking into a regression so I take it you have a fix already?

@thomasfaingnaert (Member) commented:

There was only a performance regression, IIRC everything still compiled correctly for 1.6.

@maleadt (Member, Author) commented Nov 8, 2021

Yeah, it's the CUDA.jl update that breaks things; I figured you had updated that dependency too.

@thomasfaingnaert (Member) commented:

Pushed a fix for diagonal matrices on 1.6. I'm not that experienced with UnionAlls, but isn't the way you were doing it supposed to work? If so, I guess that's a bug in the Julia 1.6 compiler?

I'll have a closer look at the dynamic function invocation issues that occur on 1.7 later.

@maleadt (Member, Author) commented Nov 8, 2021

> Yeah, it's the CUDA.jl update that breaks things; I figured you had updated that dependency too.

Actually it's just the upgrade to Julia 1.7, as CI reveals here.

> I'm not that experienced with UnionAlls, but isn't the way you were doing it supposed to work? If so, I guess that's a bug in the Julia 1.6 compiler?

It should, yes.

> I'll have a closer look at the dynamic function invocation issues that occur on 1.7 later.

It's something to do with indexing into the result of `iterate`, which may be `nothing`. But changing `subdivide` to what's below triggers a cascade of other failures, which I'll have a look at tomorrow:

@inline function subdivide(tile::Tile{size, names, T}, tiling_size::Tile{tile_sz, names, T}, idx, count) where {names, T, size, tile_sz}
    # Take the first element of the parallellised iterator. Checking for
    # `nothing` before indexing lets inference narrow the Union returned
    # by `iterate`, instead of emitting a dynamic call in the kernel.
    iter = iterate(parallellise(tile, tiling_size, idx, count))
    iter === nothing && throw(BoundsError())
    iter[1]
end
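For reference, a minimal plain-Julia sketch of the pattern being worked around here (assuming the problem is the `Union{Nothing, Tuple}` that `iterate` returns):

# Indexing the result of `iterate` directly, as in `iterate(itr)[1]`,
# forces a dynamic call whenever inference cannot rule out `nothing`.
# Checking for `nothing` first narrows the Union:
itr = 1:10
iter = iterate(itr)
iter === nothing && throw(BoundsError(itr))
first_elem = iter[1]   # statically known to be a Tuple here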

@maleadt (Member, Author) commented Nov 9, 2021

Fixed the dynamic invocation, but now the code allocates on 1.7 (dynamic memory, which is only meant to be used in error paths). I fear this may be an instance of JuliaLang/julia#41800.
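One way to spot such allocations (a sketch; `dummy_kernel` is hypothetical, not code from this PR) is to dump the device-side LLVM IR and look for GC allocation calls:

using CUDA

# Hypothetical trivial kernel, for illustration only.
function dummy_kernel(out)
    out[1] = 42f0
    return
end

# Any call to `gpu_gc_pool_alloc` in the printed IR indicates dynamic
# (GC-managed) allocation inside the kernel.
CUDA.@device_code_llvm @cuda dummy_kernel(CUDA.zeros(Float32, 1))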

@codecov (bot) commented Nov 9, 2021

Codecov Report

Merging #82 (84b53ff) into master (322d4ca) will increase coverage by 1.25%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master      #82      +/-   ##
==========================================
+ Coverage   41.06%   42.31%   +1.25%     
==========================================
  Files           9        9              
  Lines         414      423       +9     
==========================================
+ Hits          170      179       +9     
  Misses        244      244              
Impacted Files   Coverage           Δ
src/kernel.jl    100.00% <ø>        (ø)
src/blas.jl       96.00% <100.00%>  (ø)
src/tiling.jl     87.50% <100.00%>  (+2.08%) ⬆️
src/layout.jl     16.21% <0.00%>    (+0.76%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 322d4ca...84b53ff.

@maleadt maleadt merged commit 7595233 into master Nov 9, 2021
@maleadt maleadt deleted the tb/ci branch November 9, 2021 07:47
smnbl added a commit to smnbl/GemmKernels.jl that referenced this pull request Nov 15, 2021
In a previous commit (merged in JuliaGPU#82), dynamic allocation of shared memory
was changed from the deprecated `@cuDynamicSharedMem` macro to its replacement, the `CuDynamicSharedArray` function.
This breaks compatibility with CUDA.jl versions before v3.5.0, as `CuDynamicSharedArray` was only added in CUDA.jl v3.5.
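Inside a kernel, the migration in question looks roughly like this (dimensions illustrative):

# Before (deprecated macro, CUDA.jl < v3.5):
#   shmem = @cuDynamicSharedMem(Float32, (16, 16))

# After (CUDA.jl >= v3.5):
shmem = CuDynamicSharedArray(Float32, (16, 16))

Either form requires the launch to reserve the memory, e.g. @cuda shmem=16*16*sizeof(Float32) kernel(args...).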
@smnbl smnbl mentioned this pull request Nov 15, 2021