Rollup merge of rust-lang#38559 - japaric:ptx2, r=alexcrichton
PTX support, take 2

- You can generate PTX using `--emit=asm` and the right (custom) target, which
  you can then run on an NVIDIA GPU.

- You can compile `core` to PTX. [Xargo] also works, and it can compile some
  other crates like `collections` (though I doubt all of them make sense on a GPU).

[Xargo]: https://github.com/japaric/xargo

- You can create "global" functions, which can be "called" by the host, using
  the `"ptx-kernel"` ABI, e.g. `extern "ptx-kernel" fn kernel() { .. }`. Every
  other function is a "device" function and can only be called by the GPU.

- Intrinsics like `__syncthreads()` and `blockIdx.x` are available as
  `"platform-intrinsics"`. These intrinsics are *not* in the `core` crate, but
  any Rust user can create "bindings" to them using an `extern
  "platform-intrinsic"` block. See the example at the end.

- Trying to emit PTX with `-g` (debuginfo) currently produces an LLVM error. I
  don't think PTX can contain debuginfo anyway, so `-g` should probably be
  ignored and a warning printed ("`-g` doesn't work with this target" or
  something).

- "Single source" support. You *can't* write a single source file that contains
  both host and device code. I think that should be possible to implement that
  outside the compiler using compiler plugins / build scripts.

- The equivalent of CUDA's `__shared__`, which is used to declare memory shared
  between the threads of the same block, hasn't been implemented yet. It could
  be implemented using an attribute, e.g. `#[shared] static mut SCRATCH_MEMORY:
  [f32; 64]`.

- Built-in targets. This PR doesn't add targets to the compiler just yet, but
  one can create custom targets to be able to emit PTX code (see the example at
  the end). The idea is to have people experiment with this feature before
  committing to it (built-in targets are "insta-stable").

- All functions must be "inlined". IOW, the `.rlib` must always contain the LLVM
  bitcode of all the functions of the crate it was produced from. Otherwise, you
  end up with "undefined references" in the final PTX code, but you won't get
  *any* linker error because no linker is involved; IOW, you'll hit a runtime
  error when loading the PTX into the GPU. The workaround is to use `#[inline]`
  on non-generic functions and to never use `#[inline(never)]`, but this may not
  always be possible because e.g. you could be relying on third-party code (a
  concrete sketch follows this list).

- Should `--emit=asm` generate a `.ptx` file instead of a `.s` file?
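
To make the `#[inline]` workaround concrete, here's a minimal sketch of a
device crate. This is not from the PR; the function names are illustrative and
the `abi_ptx` feature-gate name is an assumption.

``` rust
// Minimal sketch: a `no_std` crate where the non-generic device helper is
// `#[inline]`, so its LLVM bitcode stays in the .rlib and gets inlined into
// the final PTX module instead of becoming an undefined reference.
#![feature(abi_ptx)] // assumed feature-gate name for the `"ptx-kernel"` ABI
#![no_std]

// Non-generic "device" helper.
#[inline]
fn deg_to_rad(deg: f32) -> f32 {
    deg * (3.14159265 / 180.0)
}

// "Global" function, callable by the host through the new ABI.
#[no_mangle]
pub unsafe extern "ptx-kernel" fn convert_first(data: *mut f32) {
    *data = deg_to_rad(*data);
}
```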

TL;DR Use Xargo to turn a crate into a PTX module (a `.s` file). Then pass that
PTX module, as a string, to the GPU and run it.

The full code is in [this repository]. This section gives an overview of how to
run Rust code on an NVIDIA GPU.

[this repository]: https://github.com/japaric/cuda

- Create a custom target. Here's the 64-bit NVPTX target (NOTE: JSON doesn't
  allow comments, so remove them before you use this file):

``` js
// nvptx64-nvidia-cuda.json
{
  "arch": "nvptx64",  // matches LLVM
  "cpu": "sm_20",  // "oldest" compute capability supported by LLVM
  "data-layout": "e-i64:64-v16:16-v32:32-n16:32:64",
  "llvm-target": "nvptx64-nvidia-cuda",
  "max-atomic-width": 0,  // LLVM errors with any other value :-(
  "os": "cuda",  // matches LLVM
  "panic-strategy": "abort",
  "target-endian": "little",
  "target-pointer-width": "64",
  "target-vendor": "nvidia",  // matches LLVM -- not required
}
```

(There's a 32-bit target specification in the linked repository)

- Write a kernel

``` rust

extern "platform-intrinsic" {
    fn nvptx_block_dim_x() -> i32;
    fn nvptx_block_idx_x() -> i32;
    fn nvptx_thread_idx_x() -> i32;
}

/// Copies an array of `n` floating point numbers from `src` to `dst`
pub unsafe extern "ptx-kernel" fn memcpy(dst: *mut f32,
                                         src: *const f32,
                                         n: usize) {
    let i = (nvptx_block_dim_x() as isize)
        .wrapping_mul(nvptx_block_idx_x() as isize)
        .wrapping_add(nvptx_thread_idx_x() as isize);

    if (i as usize) < n {
        *dst.offset(i) = *src.offset(i);
    }
}
```
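
The intrinsics used above are only three of the ones added by this PR (see
`sreg.json` further down). As a hedged extension of the example — the
`memcpy_strided` kernel is my addition, not part of the PR — the grid-dimension
intrinsic enables a grid-stride loop that handles any `n` regardless of the
launch configuration:

``` rust
// One more binding, assumed to live in the same file as the kernel above.
// `nvptx_grid_dim_x` maps to llvm.nvvm.read.ptx.sreg.nctaid.x (see sreg.json).
extern "platform-intrinsic" {
    fn nvptx_grid_dim_x() -> i32;
}

/// Grid-stride variant of `memcpy`: correct for any `n`, no matter how many
/// blocks/threads the host launches.
#[no_mangle]
pub unsafe extern "ptx-kernel" fn memcpy_strided(dst: *mut f32,
                                                 src: *const f32,
                                                 n: usize) {
    // Total number of threads in the grid.
    let stride = (nvptx_block_dim_x() as usize)
        .wrapping_mul(nvptx_grid_dim_x() as usize);
    // This thread's global index.
    let mut i = (nvptx_block_dim_x() as usize)
        .wrapping_mul(nvptx_block_idx_x() as usize)
        .wrapping_add(nvptx_thread_idx_x() as usize);

    while i < n {
        *dst.offset(i as isize) = *src.offset(i as isize);
        i = i.wrapping_add(stride);
    }
}
```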

- Emit PTX code

```
$ xargo rustc --target nvptx64-nvidia-cuda --release -- --emit=asm
   Compiling core v0.0.0 (file://..)
   (..)
   Compiling nvptx-builtins v0.1.0 (https://github.com/japaric/nvptx-builtins)
   Compiling kernel v0.1.0

$ cat target/nvptx64-nvidia-cuda/release/deps/kernel-*.s
//
// Generated by LLVM NVPTX Back-End
//

.version 3.2
.target sm_20
.address_size 64

        // .globl       memcpy

.visible .entry memcpy(
        .param .u64 memcpy_param_0,
        .param .u64 memcpy_param_1,
        .param .u64 memcpy_param_2
)
{
        .reg .pred      %p<2>;
        .reg .s32       %r<5>;
        .reg .s64       %rd<12>;

        ld.param.u64    %rd7, [memcpy_param_2];
        mov.u32 %r1, %ntid.x;
        mov.u32 %r2, %ctaid.x;
        mul.wide.s32    %rd8, %r2, %r1;
        mov.u32 %r3, %tid.x;
        cvt.s64.s32     %rd9, %r3;
        add.s64         %rd10, %rd9, %rd8;
        setp.ge.u64     %p1, %rd10, %rd7;
        @%p1 bra        LBB0_2;
        ld.param.u64    %rd3, [memcpy_param_0];
        ld.param.u64    %rd4, [memcpy_param_1];
        cvta.to.global.u64      %rd5, %rd4;
        cvta.to.global.u64      %rd6, %rd3;
        shl.b64         %rd11, %rd10, 2;
        add.s64         %rd1, %rd6, %rd11;
        add.s64         %rd2, %rd5, %rd11;
        ld.global.u32   %r4, [%rd2];
        st.global.u32   [%rd1], %r4;
LBB0_2:
        ret;
}
```

- Run it on the GPU

``` rust
// `kernel.ptx` is the `*.s` file we got in the previous step; `driver`,
// `Device`, etc. come from the wrapper library in the linked repository
const KERNEL: &'static str = include_str!("kernel.ptx");

driver::initialize()?;

let device = Device(0)?;
let ctx = device.create_context()?;
let module = ctx.load_module(KERNEL)?;
let kernel = module.function("memcpy")?;

// `N` is the number of elements; `bytes` would be `N * size_of::<f32>()`
let h_a: Vec<f32> = /* create some random data */;
let h_b = vec![0.; N];

let d_a = driver::allocate(bytes)?;
let d_b = driver::allocate(bytes)?;

// Copy from host to GPU
driver::copy(h_a, d_a)?;

// Run `memcpy` on the GPU
kernel.launch(d_b, d_a, N)?;

// Copy from GPU to host
driver::copy(d_b, h_b)?;

// Verify
assert_eq!(h_a, h_b);

// `d_a`, `d_b`, `h_a`, `h_b` are dropped/freed here
```

---

cc @alexcrichton @brson @rkruppe

> What has changed since rust-lang#34195?

- `core` can now be compiled into PTX, which makes it very easy to turn
  `no_std` crates into "kernels" with the help of Xargo.

- There's now a way, the `"ptx-kernel"` ABI, to generate "global" functions. The
  old PR required a manual step (it was a hack) to "convert" "device" functions
  into "global" functions. (Only "global" functions can be launched by the host.)

- Everything is unstable. There are no "insta-stable" built-in targets this
  time (\*); users have to use a custom target to experiment with this feature.
  Also, PTX intrinsics, like `__syncthreads` and `blockIdx.x`, are now
  implemented as `"platform-intrinsics"`, so they no longer live in the `core`
  crate.

(\*) I'd actually like to have in-tree targets because that makes this target
more discoverable, removes the need to lug around .json files, etc.

However, bundling a target with the compiler immediately puts it on the path
towards stabilization, which gives us just two cycles to find and fix any
problems with the target specification. After that, it becomes hard to tweak
the specification because doing so could be a breaking change.

A possible solution could be "unstable built-in targets". Basically, to use an
unstable target, you'll have to also pass `-Z unstable-options` to the compiler.
And unstable targets, being unstable, wouldn't be available on stable.

> Why should this be merged?

- To let people experiment with the feature out of tree. Having easy access to
  the feature (in every nightly) allows this. I also think that, as it is, it
  should be possible to start prototyping type-safe single-source support using
  build scripts, macros and/or plugins (a rough build-script sketch follows
  this list).

- It's a straightforward implementation, no different from adding support for
  any other architecture.
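
As a rough illustration of that build-script route (everything below is an
assumption layered on top of this PR: the `kernel` sub-crate layout, the Xargo
invocation from the walkthrough above, and embedding the result via
`include_str!`), a host crate's `build.rs` might look like this:

``` rust
// build.rs -- hedged sketch of "single source via build scripts".
// Assumes the device code lives in a `kernel` sub-crate and that Xargo plus
// the nvptx64-nvidia-cuda.json target are available.
use std::env;
use std::fs;
use std::path::PathBuf;
use std::process::Command;

fn main() {
    let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());

    // Build the kernel crate to PTX, exactly as in the walkthrough above.
    let status = Command::new("xargo")
        .args(&["rustc", "--target", "nvptx64-nvidia-cuda", "--release",
                "--", "--emit=asm"])
        .current_dir("kernel")
        .status()
        .expect("failed to run xargo");
    assert!(status.success());

    // Copy the generated `.s` file (the PTX module) into OUT_DIR.
    let deps = PathBuf::from("kernel/target/nvptx64-nvidia-cuda/release/deps");
    for entry in fs::read_dir(&deps).expect("no deps dir") {
        let path = entry.unwrap().path();
        if path.extension().map_or(false, |e| e == "s") {
            fs::copy(&path, out_dir.join("kernel.ptx")).unwrap();
            return;
        }
    }
    panic!("no PTX module found in {:?}", deps);
}
```

The host code would then load the module with
`include_str!(concat!(env!("OUT_DIR"), "/kernel.ptx"))` instead of a
hard-coded path.
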
alexcrichton authored Dec 30, 2016
2 parents 4e2e01e + aac5ff7 commit bcfd504
Showing 16 changed files with 345 additions and 4 deletions.
2 changes: 1 addition & 1 deletion src/bootstrap/native.rs
@@ -81,7 +81,7 @@ pub fn llvm(build: &Build, target: &str) {
.profile(profile)
.define("LLVM_ENABLE_ASSERTIONS", assertions)
.define("LLVM_TARGETS_TO_BUILD",
"X86;ARM;AArch64;Mips;PowerPC;SystemZ;JSBackend;MSP430;Sparc")
"X86;ARM;AArch64;Mips;PowerPC;SystemZ;JSBackend;MSP430;Sparc;NVPTX")
.define("LLVM_INCLUDE_EXAMPLES", "OFF")
.define("LLVM_INCLUDE_TESTS", "OFF")
.define("LLVM_INCLUDE_DOCS", "OFF")
13 changes: 13 additions & 0 deletions src/etc/platform-intrinsics/nvptx/cuda.json
@@ -0,0 +1,13 @@
{
"intrinsic_prefix": "_",
"llvm_prefix": "llvm.cuda.",
"intrinsics": [
{
"intrinsic": "syncthreads",
"width": ["0"],
"llvm": "syncthreads",
"ret": "V",
"args": []
}
]
}
7 changes: 7 additions & 0 deletions src/etc/platform-intrinsics/nvptx/info.json
@@ -0,0 +1,7 @@
{
"platform": "nvptx",
"number_info": {
"signed": {}
},
"width_info": {}
}
90 changes: 90 additions & 0 deletions src/etc/platform-intrinsics/nvptx/sreg.json
@@ -0,0 +1,90 @@
{
"intrinsic_prefix": "_",
"llvm_prefix": "llvm.nvvm.read.ptx.sreg.",
"intrinsics": [
{
"intrinsic": "block_dim_x",
"width": ["0"],
"llvm": "ntid.x",
"ret": "S32",
"args": []
},
{
"intrinsic": "block_dim_y",
"width": ["0"],
"llvm": "ntid.y",
"ret": "S32",
"args": []
},
{
"intrinsic": "block_dim_z",
"width": ["0"],
"llvm": "ntid.z",
"ret": "S32",
"args": []
},
{
"intrinsic": "block_idx_x",
"width": ["0"],
"llvm": "ctaid.x",
"ret": "S32",
"args": []
},
{
"intrinsic": "block_idx_y",
"width": ["0"],
"llvm": "ctaid.y",
"ret": "S32",
"args": []
},
{
"intrinsic": "block_idx_z",
"width": ["0"],
"llvm": "ctaid.z",
"ret": "S32",
"args": []
},
{
"intrinsic": "grid_dim_x",
"width": ["0"],
"llvm": "nctaid.x",
"ret": "S32",
"args": []
},
{
"intrinsic": "grid_dim_y",
"width": ["0"],
"llvm": "nctaid.y",
"ret": "S32",
"args": []
},
{
"intrinsic": "grid_dim_z",
"width": ["0"],
"llvm": "nctaid.z",
"ret": "S32",
"args": []
},
{
"intrinsic": "thread_idx_x",
"width": ["0"],
"llvm": "tid.x",
"ret": "S32",
"args": []
},
{
"intrinsic": "thread_idx_y",
"width": ["0"],
"llvm": "tid.y",
"ret": "S32",
"args": []
},
{
"intrinsic": "thread_idx_z",
"width": ["0"],
"llvm": "tid.z",
"ret": "S32",
"args": []
}
]
}
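
For reference — this block is a sketch of mine, not part of the diff — each
`"intrinsic"` entry above (plus the one in `cuda.json`) surfaces to user code
as `nvptx_` + its name, and a call to it lowers to the LLVM intrinsic named in
its `"llvm"` field:

``` rust
// User-side declarations matching a few of the JSON entries.
extern "platform-intrinsic" {
    fn nvptx_syncthreads();          // llvm.cuda.syncthreads (from cuda.json)
    fn nvptx_grid_dim_x() -> i32;    // llvm.nvvm.read.ptx.sreg.nctaid.x
    fn nvptx_block_idx_y() -> i32;   // llvm.nvvm.read.ptx.sreg.ctaid.y
    fn nvptx_thread_idx_z() -> i32;  // llvm.nvvm.read.ptx.sreg.tid.z
}
```
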
2 changes: 1 addition & 1 deletion src/librustc_llvm/build.rs
@@ -96,7 +96,7 @@ fn main() {

let optional_components =
["x86", "arm", "aarch64", "mips", "powerpc", "pnacl", "systemz", "jsbackend", "msp430",
"sparc"];
"sparc", "nvptx"];

// FIXME: surely we don't need all these components, right? Stuff like mcjit
// or interpreter the compiler itself never uses.
1 change: 1 addition & 0 deletions src/librustc_llvm/ffi.rs
@@ -42,6 +42,7 @@ pub enum CallConv {
X86StdcallCallConv = 64,
X86FastcallCallConv = 65,
ArmAapcsCallConv = 67,
PtxKernel = 71,
X86_64_SysV = 78,
X86_64_Win64 = 79,
X86_VectorCall = 80,
5 changes: 5 additions & 0 deletions src/librustc_llvm/lib.rs
@@ -376,6 +376,11 @@ pub fn initialize_available_targets() {
LLVMInitializeSparcTargetMC,
LLVMInitializeSparcAsmPrinter,
LLVMInitializeSparcAsmParser);
init_target!(llvm_component = "nvptx",
LLVMInitializeNVPTXTargetInfo,
LLVMInitializeNVPTXTarget,
LLVMInitializeNVPTXTargetMC,
LLVMInitializeNVPTXAsmPrinter);
}

pub fn last_error() -> Option<String> {
3 changes: 3 additions & 0 deletions src/librustc_platform_intrinsics/lib.rs
@@ -95,6 +95,7 @@ static VOID: Type = Type::Void;
mod x86;
mod arm;
mod aarch64;
mod nvptx;

impl Intrinsic {
pub fn find(name: &str) -> Option<Intrinsic> {
@@ -104,6 +105,8 @@ impl Intrinsic {
arm::find(name)
} else if name.starts_with("aarch64_") {
aarch64::find(name)
} else if name.starts_with("nvptx_") {
nvptx::find(name)
} else {
None
}
92 changes: 92 additions & 0 deletions src/librustc_platform_intrinsics/nvptx.rs
@@ -0,0 +1,92 @@
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

// DO NOT EDIT: autogenerated by etc/platform-intrinsics/generator.py
// ignore-tidy-linelength

#![allow(unused_imports)]

use {Intrinsic, Type};
use IntrinsicDef::Named;

// The default inlining settings trigger a pathological behaviour in
// LLVM, which makes compilation very slow. See #28273.
#[inline(never)]
pub fn find(name: &str) -> Option<Intrinsic> {
if !name.starts_with("nvptx") { return None }
Some(match &name["nvptx".len()..] {
"_syncthreads" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::VOID,
definition: Named("llvm.cuda.syncthreads")
},
"_block_dim_x" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.ntid.x")
},
"_block_dim_y" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.ntid.y")
},
"_block_dim_z" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.ntid.z")
},
"_block_idx_x" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.ctaid.x")
},
"_block_idx_y" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.ctaid.y")
},
"_block_idx_z" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.ctaid.z")
},
"_grid_dim_x" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.nctaid.x")
},
"_grid_dim_y" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.nctaid.y")
},
"_grid_dim_z" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.nctaid.z")
},
"_thread_idx_x" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.tid.x")
},
"_thread_idx_y" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.tid.y")
},
"_thread_idx_z" => Intrinsic {
inputs: { static INPUTS: [&'static Type; 0] = []; &INPUTS },
output: &::I32,
definition: Named("llvm.nvvm.read.ptx.sreg.tid.z")
},
_ => return None,
})
}
5 changes: 5 additions & 0 deletions src/librustc_trans/abi.rs
@@ -25,6 +25,8 @@ use cabi_mips64;
use cabi_asmjs;
use cabi_msp430;
use cabi_sparc;
use cabi_nvptx;
use cabi_nvptx64;
use machine::{llalign_of_min, llsize_of, llsize_of_alloc};
use type_::Type;
use type_of;
@@ -353,6 +355,7 @@ impl FnType {
Win64 => llvm::X86_64_Win64,
SysV64 => llvm::X86_64_SysV,
Aapcs => llvm::ArmAapcsCallConv,
PtxKernel => llvm::PtxKernel,

// These API constants ought to be more specific...
Cdecl => llvm::CCallConv,
@@ -608,6 +611,8 @@ impl FnType {
"wasm32" => cabi_asmjs::compute_abi_info(ccx, self),
"msp430" => cabi_msp430::compute_abi_info(ccx, self),
"sparc" => cabi_sparc::compute_abi_info(ccx, self),
"nvptx" => cabi_nvptx::compute_abi_info(ccx, self),
"nvptx64" => cabi_nvptx64::compute_abi_info(ccx, self),
a => ccx.sess().fatal(&format!("unrecognized arch \"{}\" in target specification", a))
}

53 changes: 53 additions & 0 deletions src/librustc_trans/cabi_nvptx.rs
@@ -0,0 +1,53 @@
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

// Reference: PTX Writer's Guide to Interoperability
// http://docs.nvidia.com/cuda/ptx-writers-guide-to-interoperability

#![allow(non_upper_case_globals)]

use llvm::Struct;

use abi::{self, ArgType, FnType};
use context::CrateContext;
use type_::Type;

fn ty_size(ty: Type) -> usize {
abi::ty_size(ty, 4)
}

fn classify_ret_ty(ccx: &CrateContext, ret: &mut ArgType) {
if ret.ty.kind() == Struct && ty_size(ret.ty) > 32 {
ret.make_indirect(ccx);
} else {
ret.extend_integer_width_to(32);
}
}

fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType) {
if arg.ty.kind() == Struct && ty_size(arg.ty) > 32 {
arg.make_indirect(ccx);
} else {
arg.extend_integer_width_to(32);
}
}

pub fn compute_abi_info(ccx: &CrateContext, fty: &mut FnType) {
if !fty.ret.is_ignore() {
classify_ret_ty(ccx, &mut fty.ret);
}

for arg in &mut fty.args {
if arg.is_ignore() {
continue;
}
classify_arg_ty(ccx, arg);
}
}
53 changes: 53 additions & 0 deletions src/librustc_trans/cabi_nvptx64.rs
@@ -0,0 +1,53 @@
// Copyright 2016 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

// Reference: PTX Writer's Guide to Interoperability
// http://docs.nvidia.com/cuda/ptx-writers-guide-to-interoperability

#![allow(non_upper_case_globals)]

use llvm::Struct;

use abi::{self, ArgType, FnType};
use context::CrateContext;
use type_::Type;

fn ty_size(ty: Type) -> usize {
abi::ty_size(ty, 8)
}

fn classify_ret_ty(ccx: &CrateContext, ret: &mut ArgType) {
if ret.ty.kind() == Struct && ty_size(ret.ty) > 64 {
ret.make_indirect(ccx);
} else {
ret.extend_integer_width_to(64);
}
}

fn classify_arg_ty(ccx: &CrateContext, arg: &mut ArgType) {
if arg.ty.kind() == Struct && ty_size(arg.ty) > 64 {
arg.make_indirect(ccx);
} else {
arg.extend_integer_width_to(64);
}
}

pub fn compute_abi_info(ccx: &CrateContext, fty: &mut FnType) {
if !fty.ret.is_ignore() {
classify_ret_ty(ccx, &mut fty.ret);
}

for arg in &mut fty.args {
if arg.is_ignore() {
continue;
}
classify_arg_ty(ccx, arg);
}
}