
Don't touch gvars when compiling for an external back-end. #39387

Merged: 2 commits into master from tb/extern_codegen_gvars on Jan 26, 2021

Conversation

maleadt (Member) commented on Jan 25, 2021

I noticed in JuliaGPU/CUDA.jl#552 that exported global variables in an llvmcall got internalized along the way:

function main()
    Base.llvmcall(
        ("""@constant_memory = addrspace(4) externally_initialized global [1 x i32] [i32 42]
            define void @entry() {
                ret void
            }
         """, "entry"), Nothing, Tuple{})
end

main()  # make sure it works


## compile the code using the aot compiler interface

using LLVM

# get the method instance
world = Base.get_world_counter()
meth = which(main, Tuple{})
sig = Base.signature_type(main, Tuple{})::Type
(ti, env) = ccall(:jl_type_intersection_with_env, Any,
                    (Any, Any), sig, meth.sig)::Core.SimpleVector
meth = Base.func_for_method_checked(meth, ti, env)
method_instance = ccall(:jl_specializations_get_linfo, Ref{Core.MethodInstance},
                (Any, Any, Any, UInt), meth, ti, env, world)

# set up the compiler interface
params = Base.CodegenParams()

# generate IR
native_code = ccall(:jl_create_native, Ptr{Cvoid},
                    (Vector{Core.MethodInstance}, Base.CodegenParams, Cint),
                    [method_instance], params, #=extern policy=# 1)
@assert native_code != C_NULL
llvm_mod_ref = ccall(:jl_get_llvm_module, LLVM.API.LLVMModuleRef,
                        (Ptr{Cvoid},), native_code)
@assert llvm_mod_ref != C_NULL
llvm_mod = LLVM.Module(llvm_mod_ref)
Inspecting llvm_mod then shows that the global has been internalized:

@constant_memory = internal addrspace(4) externally_initialized global [1 x i32] [i32 42]

It seems like this internalization should not happen when compiling for an external compiler back-end.
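
As a side note (not part of the PR), a minimal way to spot the problem programmatically is to walk the globals of the module produced by the script above using LLVM.jl's accessors; the loop below is a sketch under that assumption:

# Sketch: list every global in llvm_mod together with its linkage.
# LLVMInternalLinkage here means the extern codegen path internalized it.
for gv in globals(llvm_mod)
    println(name(gv), " => ", linkage(gv))
end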


FWIW, regular @code_llvm does not trigger this conversion, so just doing @code_llvm dump_module=true main() still results in:

@constant_memory = addrspace(4) externally_initialized global [1 x i32] [i32 42]

maleadt added the compiler:codegen (Generation of LLVM IR and native code) and gpu (Affects running Julia on a GPU) labels on Jan 25, 2021
// Add unwind exception personalities to functions to handle async exceptions
if (Function *F = dyn_cast<Function>(&G))
    F->setPersonalityFn(juliapersonality_func);
Review comment (Member):
Means backends will have to initialize the personality functions themselves, but I suspect that's okay.
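
For illustration only, a back-end could set the personality itself along these lines, using LLVM.jl and the LLVM C API; the personality function name "__julia_personality" and the helper below are assumptions, not something this PR defines:

# Hypothetical helper: point every function definition in `mod` at a
# personality function that is assumed to already exist in the module.
using LLVM
function set_personalities!(mod::LLVM.Module, pers_name::String="__julia_personality")
    haskey(functions(mod), pers_name) || return mod
    pers = functions(mod)[pers_name]
    for f in functions(mod)
        # skip declarations and the personality function itself
        isdeclaration(f) && continue
        name(f) == pers_name && continue
        LLVM.API.LLVMSetPersonalityFn(f, pers)
    end
    return mod
end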

vchuravy added the backport 1.6 (Change should be backported to release-1.6) label on Jan 25, 2021
src/aotcompile.cpp: review suggestion (outdated, resolved)
Co-authored-by: Julian Samaroo <jpsamaroo@jpsamaroo.me>
maleadt (Member, Author) commented on Jan 26, 2021

CI failures unrelated (Profile, timeout).

maleadt merged commit a7848a2 into master on Jan 26, 2021
maleadt deleted the tb/extern_codegen_gvars branch on Jan 26, 2021
maleadt mentioned this pull request on Jan 26, 2021
KristofferC (Member) commented:

Might have been good to squash.

maleadt (Member, Author) commented on Jan 26, 2021

Ah, of course... 🤦‍♂️

KristofferC removed the backport 1.6 (Change should be backported to release-1.6) label on Feb 1, 2021