
AMDGPU/GlobalISel: Handle atomic sextload and zextload #111721

Merged
merged 4 commits into main from users/arsenm/amdgpu-global-isel-fix-atomic-extloads on Oct 31, 2024

Conversation

@arsenm arsenm (Contributor) commented Oct 9, 2024

Atomic loads are handled differently from the DAG, and have separate opcodes
and explicit control over the extensions, like ordinary loads. Add
new patterns for these.

There's room for cleanup and improvement. d16 cases aren't handled.

Fixes #111645
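
For reference, the cases in question pair an atomic load with an explicit extension of its result; in GlobalISel these are represented as G_ZEXTLOAD/G_SEXTLOAD rather than a plain G_LOAD, which is why they need their own patterns. A minimal IR sketch (function names are illustrative; the added test files contain equivalent functions):

; Zero-extending atomic load; per the added flat tests this selects to flat_load_ubyte.
define i32 @atomic_zextload_i8_to_i32(ptr %ptr) {
  %load = load atomic i8, ptr %ptr monotonic, align 1
  %ext = zext i8 %load to i32
  ret i32 %ext
}

; Sign-extending atomic load; per the added flat tests this selects to flat_load_sbyte.
define i32 @atomic_sextload_i8_to_i32(ptr %ptr) {
  %load = load atomic i8, ptr %ptr monotonic, align 1
  %ext = sext i8 %load to i32
  ret i32 %ext
}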


@llvmbot llvmbot (Member) commented Oct 9, 2024

@llvm/pr-subscribers-backend-amdgpu

@llvm/pr-subscribers-llvm-globalisel

Author: Matt Arsenault (arsenm)

Changes

Atomic loads are handled differently from the DAG, and have separate opcodes
and explicit control over the extensions, like ordinary loads. Add
new patterns for these.

There's room for cleanup and improvement. d16 cases aren't handled.

Fixes #111645


Patch is 62.48 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/111721.diff

9 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/AMDGPUGISel.td (+2)
  • (modified) llvm/lib/Target/AMDGPU/AMDGPUInstructions.td (+21)
  • (modified) llvm/lib/Target/AMDGPU/BUFInstructions.td (+7)
  • (modified) llvm/lib/Target/AMDGPU/DSInstructions.td (+7)
  • (modified) llvm/lib/Target/AMDGPU/FLATInstructions.td (+16)
  • (modified) llvm/lib/Target/AMDGPU/SIInstrInfo.td (+45)
  • (added) llvm/test/CodeGen/AMDGPU/GlobalISel/atomic_load_flat.ll (+331)
  • (added) llvm/test/CodeGen/AMDGPU/GlobalISel/atomic_load_global.ll (+478)
  • (added) llvm/test/CodeGen/AMDGPU/GlobalISel/atomic_load_local_2.ll (+509)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUGISel.td b/llvm/lib/Target/AMDGPU/AMDGPUGISel.td
index 278d3536add916..d348f489d95dd3 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUGISel.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPUGISel.td
@@ -207,6 +207,8 @@ def : GINodeEquiv<G_STORE, AMDGPUst_glue> {
 
 def : GINodeEquiv<G_LOAD, AMDGPUatomic_ld_glue> {
   bit CheckMMOIsAtomic = 1;
+  let IfSignExtend = G_SEXTLOAD;
+  let IfZeroExtend = G_ZEXTLOAD;
 }
 
 def : GINodeEquiv<G_STORE, AMDGPUatomic_st_glue> {
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUInstructions.td b/llvm/lib/Target/AMDGPU/AMDGPUInstructions.td
index 09987a6504b9d0..7816ae911312c2 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUInstructions.td
+++ b/llvm/lib/Target/AMDGPU/AMDGPUInstructions.td
@@ -521,6 +521,27 @@ def atomic_load_64_#as : PatFrag<(ops node:$ptr), (atomic_load_64 node:$ptr)> {
   let IsAtomic = 1;
   let MemoryVT = i64;
 }
+
+def atomic_load_zext_8_#as : PatFrag<(ops node:$ptr), (atomic_load_zext node:$ptr)> {
+  let IsAtomic = 1;
+  let MemoryVT = i8;
+}
+
+def atomic_load_sext_8_#as : PatFrag<(ops node:$ptr), (atomic_load_sext node:$ptr)> {
+  let IsAtomic = 1;
+  let MemoryVT = i8;
+}
+
+def atomic_load_zext_16_#as : PatFrag<(ops node:$ptr), (atomic_load_zext node:$ptr)> {
+  let IsAtomic = 1;
+  let MemoryVT = i16;
+}
+
+def atomic_load_sext_16_#as : PatFrag<(ops node:$ptr), (atomic_load_sext node:$ptr)> {
+  let IsAtomic = 1;
+  let MemoryVT = i16;
+}
+
 } // End let AddressSpaces
 } // End foreach as
 
diff --git a/llvm/lib/Target/AMDGPU/BUFInstructions.td b/llvm/lib/Target/AMDGPU/BUFInstructions.td
index 6bdff9862e55ac..4ed0f9ade871f1 100644
--- a/llvm/lib/Target/AMDGPU/BUFInstructions.td
+++ b/llvm/lib/Target/AMDGPU/BUFInstructions.td
@@ -983,15 +983,22 @@ defm BUFFER_LOAD_LDS_U16 : MUBUF_Pseudo_Loads_LDSOpc <
 >;
 
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_UBYTE", i32, atomic_load_8_global>;
+defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_UBYTE", i32, atomic_load_zext_8_global>;
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_USHORT", i32, atomic_load_16_global>;
+defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_USHORT", i32, atomic_load_zext_16_global>;
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_UBYTE", i16, atomic_load_8_global>;
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_USHORT", i16, atomic_load_16_global>;
+defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_UBYTE", i16, atomic_load_zext_8_global>;
+defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_USHORT", i16, atomic_load_zext_16_global>;
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_UBYTE", i32, extloadi8_global>;
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_UBYTE", i32, zextloadi8_global>;
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_SBYTE", i32, sextloadi8_global>;
+defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_SBYTE", i32, atomic_load_sext_8_global>;
+defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_SBYTE", i32, atomic_load_sext_16_global>;
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_USHORT", i32, extloadi16_global>;
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_USHORT", i32, zextloadi16_global>;
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_SSHORT", i32, sextloadi16_global>;
+defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_SSHORT", i32, atomic_load_sext_16_global>;
 
 foreach vt = Reg32Types.types in {
 defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_DWORD", vt, load_global>;
diff --git a/llvm/lib/Target/AMDGPU/DSInstructions.td b/llvm/lib/Target/AMDGPU/DSInstructions.td
index e9283fde85a48d..7724821bbd7c36 100644
--- a/llvm/lib/Target/AMDGPU/DSInstructions.td
+++ b/llvm/lib/Target/AMDGPU/DSInstructions.td
@@ -795,12 +795,19 @@ defm : DSReadPat_mc <DS_READ_B32, vt, "load_local">;
 
 defm : DSReadPat_mc <DS_READ_U8, i16, "atomic_load_8_local">;
 defm : DSReadPat_mc <DS_READ_U8, i32, "atomic_load_8_local">;
+defm : DSReadPat_mc <DS_READ_U8, i16, "atomic_load_zext_8_local">;
+defm : DSReadPat_mc <DS_READ_U8, i32, "atomic_load_zext_8_local">;
+defm : DSReadPat_mc <DS_READ_I8, i16, "atomic_load_sext_8_local">;
+defm : DSReadPat_mc <DS_READ_I8, i32, "atomic_load_sext_8_local">;
 defm : DSReadPat_mc <DS_READ_U16, i16, "atomic_load_16_local">;
 defm : DSReadPat_mc <DS_READ_U16, i32, "atomic_load_16_local">;
+defm : DSReadPat_mc <DS_READ_U16, i32, "atomic_load_zext_16_local">;
+defm : DSReadPat_mc <DS_READ_I16, i32, "atomic_load_sext_16_local">;
 defm : DSReadPat_mc <DS_READ_B32, i32, "atomic_load_32_local">;
 defm : DSReadPat_mc <DS_READ_B64, i64, "atomic_load_64_local">;
 
 let OtherPredicates = [D16PreservesUnusedBits] in {
+// TODO: Atomic loads
 def : DSReadPat_D16<DS_READ_U16_D16_HI, load_d16_hi_local, v2i16>;
 def : DSReadPat_D16<DS_READ_U16_D16_HI, load_d16_hi_local, v2f16>;
 def : DSReadPat_D16<DS_READ_U8_D16_HI, az_extloadi8_d16_hi_local, v2i16>;
diff --git a/llvm/lib/Target/AMDGPU/FLATInstructions.td b/llvm/lib/Target/AMDGPU/FLATInstructions.td
index a9ab0c5a453e8e..0f2566937022e8 100644
--- a/llvm/lib/Target/AMDGPU/FLATInstructions.td
+++ b/llvm/lib/Target/AMDGPU/FLATInstructions.td
@@ -1355,11 +1355,17 @@ let OtherPredicates = [HasFlatAddressSpace] in {
 
 def : FlatLoadPat <FLAT_LOAD_UBYTE, atomic_load_8_flat, i32>;
 def : FlatLoadPat <FLAT_LOAD_UBYTE, atomic_load_8_flat, i16>;
+def : FlatLoadPat <FLAT_LOAD_UBYTE, atomic_load_zext_8_flat, i32>;
+def : FlatLoadPat <FLAT_LOAD_UBYTE, atomic_load_zext_8_flat, i16>;
 def : FlatLoadPat <FLAT_LOAD_USHORT, atomic_load_16_flat, i32>;
 def : FlatLoadPat <FLAT_LOAD_USHORT, atomic_load_16_flat, i16>;
+def : FlatLoadPat <FLAT_LOAD_USHORT, atomic_load_zext_16_flat, i32>;
+def : FlatLoadPat <FLAT_LOAD_USHORT, atomic_load_zext_16_flat, i16>;
 def : FlatLoadPat <FLAT_LOAD_UBYTE, extloadi8_flat, i32>;
 def : FlatLoadPat <FLAT_LOAD_UBYTE, zextloadi8_flat, i32>;
 def : FlatLoadPat <FLAT_LOAD_SBYTE, sextloadi8_flat, i32>;
+def : FlatLoadPat <FLAT_LOAD_SBYTE, atomic_load_sext_8_flat, i32>;
+def : FlatLoadPat <FLAT_LOAD_SBYTE, atomic_load_sext_8_flat, i16>;
 def : FlatLoadPat <FLAT_LOAD_UBYTE, extloadi8_flat, i16>;
 def : FlatLoadPat <FLAT_LOAD_UBYTE, zextloadi8_flat, i16>;
 def : FlatLoadPat <FLAT_LOAD_SBYTE, sextloadi8_flat, i16>;
@@ -1456,6 +1462,7 @@ def : FlatStorePat <FLAT_STORE_BYTE_D16_HI, truncstorei8_hi16_flat, i32>;
 }
 
 let OtherPredicates = [D16PreservesUnusedBits] in {
+// TODO: Handle atomic loads
 def : FlatLoadPat_D16 <FLAT_LOAD_UBYTE_D16_HI, az_extloadi8_d16_hi_flat, v2i16>;
 def : FlatLoadPat_D16 <FLAT_LOAD_UBYTE_D16_HI, az_extloadi8_d16_hi_flat, v2f16>;
 def : FlatLoadPat_D16 <FLAT_LOAD_SBYTE_D16_HI, sextloadi8_d16_hi_flat, v2i16>;
@@ -1477,8 +1484,14 @@ let OtherPredicates = [HasFlatGlobalInsts] in {
 
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_UBYTE, atomic_load_8_global, i32>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_UBYTE, atomic_load_8_global, i16>;
+defm : GlobalFLATLoadPats <GLOBAL_LOAD_UBYTE, atomic_load_zext_8_global, i32>;
+defm : GlobalFLATLoadPats <GLOBAL_LOAD_UBYTE, atomic_load_zext_8_global, i16>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_USHORT, atomic_load_16_global, i32>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_USHORT, atomic_load_16_global, i16>;
+defm : GlobalFLATLoadPats <GLOBAL_LOAD_USHORT, atomic_load_zext_16_global, i32>;
+defm : GlobalFLATLoadPats <GLOBAL_LOAD_USHORT, atomic_load_zext_16_global, i16>;
+defm : GlobalFLATLoadPats <GLOBAL_LOAD_SBYTE, atomic_load_sext_8_global, i32>;
+defm : GlobalFLATLoadPats <GLOBAL_LOAD_SBYTE, atomic_load_sext_8_global, i16>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_UBYTE, extloadi8_global, i32>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_UBYTE, zextloadi8_global, i32>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_SBYTE, sextloadi8_global, i32>;
@@ -1488,6 +1501,8 @@ defm : GlobalFLATLoadPats <GLOBAL_LOAD_SBYTE, sextloadi8_global, i16>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_USHORT, extloadi16_global, i32>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_USHORT, zextloadi16_global, i32>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_SSHORT, sextloadi16_global, i32>;
+defm : GlobalFLATLoadPats <GLOBAL_LOAD_SSHORT, atomic_load_sext_16_global, i32>;
+defm : GlobalFLATLoadPats <GLOBAL_LOAD_USHORT, atomic_load_zext_16_global, i32>;
 defm : GlobalFLATLoadPats <GLOBAL_LOAD_USHORT, load_global, i16>;
 
 foreach vt = Reg32Types.types in {
@@ -1525,6 +1540,7 @@ defm : GlobalFLATStorePats <GLOBAL_STORE_BYTE_D16_HI, truncstorei8_hi16_global,
 }
 
 let OtherPredicates = [D16PreservesUnusedBits] in {
+// TODO: Handle atomic loads
 defm : GlobalFLATLoadPats_D16 <GLOBAL_LOAD_UBYTE_D16_HI, az_extloadi8_d16_hi_global, v2i16>;
 defm : GlobalFLATLoadPats_D16 <GLOBAL_LOAD_UBYTE_D16_HI, az_extloadi8_d16_hi_global, v2f16>;
 defm : GlobalFLATLoadPats_D16 <GLOBAL_LOAD_SBYTE_D16_HI, sextloadi8_d16_hi_global, v2i16>;
diff --git a/llvm/lib/Target/AMDGPU/SIInstrInfo.td b/llvm/lib/Target/AMDGPU/SIInstrInfo.td
index 087ca1f954464d..7095563de4a1b1 100644
--- a/llvm/lib/Target/AMDGPU/SIInstrInfo.td
+++ b/llvm/lib/Target/AMDGPU/SIInstrInfo.td
@@ -348,6 +348,18 @@ def load_glue : PatFrag <(ops node:$ptr), (unindexedload_glue node:$ptr)> {
   let IsNonExtLoad = 1;
 }
 
+def atomic_load_zext_glue :
+  PatFrag<(ops node:$ptr), (AMDGPUatomic_ld_glue node:$ptr)> {
+  let IsAtomic = true; // FIXME: Should be IsLoad and/or IsAtomic?
+  let IsZeroExtLoad = true;
+}
+
+def atomic_load_sext_glue :
+  PatFrag<(ops node:$ptr), (AMDGPUatomic_ld_glue node:$ptr)> {
+  let IsAtomic = true; // FIXME: Should be IsLoad and/or IsAtomic?
+  let IsSignExtLoad = true;
+}
+
 def atomic_load_8_glue : PatFrag<(ops node:$ptr),
   (AMDGPUatomic_ld_glue node:$ptr)> {
   let IsAtomic = 1;
@@ -372,6 +384,30 @@ def atomic_load_64_glue : PatFrag<(ops node:$ptr),
   let MemoryVT = i64;
 }
 
+def atomic_load_zext_8_glue : PatFrag<(ops node:$ptr),
+  (atomic_load_zext_glue node:$ptr)> {
+  let IsAtomic = 1;
+  let MemoryVT = i8;
+}
+
+def atomic_load_sext_8_glue : PatFrag<(ops node:$ptr),
+  (atomic_load_sext_glue node:$ptr)> {
+  let IsAtomic = 1;
+  let MemoryVT = i8;
+}
+
+def atomic_load_zext_16_glue : PatFrag<(ops node:$ptr),
+  (atomic_load_zext_glue node:$ptr)> {
+  let IsAtomic = 1;
+  let MemoryVT = i16;
+}
+
+def atomic_load_sext_16_glue : PatFrag<(ops node:$ptr),
+  (atomic_load_sext_glue node:$ptr)> {
+  let IsAtomic = 1;
+  let MemoryVT = i16;
+}
+
 def extload_glue : PatFrag<(ops node:$ptr), (unindexedload_glue node:$ptr)> {
   let IsLoad = 1;
   let IsAnyExtLoad = 1;
@@ -453,6 +489,15 @@ def atomic_load_32_local_m0 : PatFrag<(ops node:$ptr),
                                       (atomic_load_32_glue node:$ptr)>;
 def atomic_load_64_local_m0 : PatFrag<(ops node:$ptr),
                                        (atomic_load_64_glue node:$ptr)>;
+
+def atomic_load_zext_8_local_m0 : PatFrag<(ops node:$ptr),
+                                      (atomic_load_zext_8_glue node:$ptr)>;
+def atomic_load_sext_8_local_m0 : PatFrag<(ops node:$ptr),
+                                      (atomic_load_sext_8_glue node:$ptr)>;
+def atomic_load_zext_16_local_m0 : PatFrag<(ops node:$ptr),
+                                      (atomic_load_zext_16_glue node:$ptr)>;
+def atomic_load_sext_16_local_m0 : PatFrag<(ops node:$ptr),
+                                      (atomic_load_sext_16_glue node:$ptr)>;
 } // End let AddressSpaces = LoadAddress_local.AddrSpaces
 
 
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/atomic_load_flat.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/atomic_load_flat.ll
new file mode 100644
index 00000000000000..788fb04e842b4e
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/atomic_load_flat.ll
@@ -0,0 +1,331 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -global-isel -mtriple=amdgcn-amd-amdhsa -mcpu=kaveri < %s | FileCheck -check-prefixes=GCN,GFX7 %s
+; RUN: llc -global-isel -mtriple=amdgcn-amd-amdhsa -mcpu=fiji < %s | FileCheck -check-prefixes=GCN,GFX8 %s
+; RUN: llc -global-isel -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 < %s | FileCheck -check-prefixes=GCN,GFX9 %s
+
+define i8 @atomic_load_flat_monotonic_i8(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_i8:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic i8, ptr %ptr monotonic, align 1
+  ret i8 %load
+}
+
+define i32 @atomic_load_flat_monotonic_i8_zext_to_i32(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_i8_zext_to_i32:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic i8, ptr %ptr monotonic, align 1
+  %ext = zext i8 %load to i32
+  ret i32 %ext
+}
+
+define i32 @atomic_load_flat_monotonic_i8_sext_to_i32(ptr %ptr) {
+; GFX7-LABEL: atomic_load_flat_monotonic_i8_sext_to_i32:
+; GFX7:       ; %bb.0:
+; GFX7-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX7-NEXT:    flat_load_sbyte v2, v[0:1] glc
+; GFX7-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GFX7-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX7-NEXT:    v_mov_b32_e32 v0, v2
+; GFX7-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX8-LABEL: atomic_load_flat_monotonic_i8_sext_to_i32:
+; GFX8:       ; %bb.0:
+; GFX8-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX8-NEXT:    flat_load_sbyte v2, v[0:1] glc
+; GFX8-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GFX8-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX8-NEXT:    v_mov_b32_e32 v0, v2
+; GFX8-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX9-LABEL: atomic_load_flat_monotonic_i8_sext_to_i32:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    flat_load_sbyte v2, v[0:1] glc
+; GFX9-NEXT:    flat_load_ubyte v3, v[0:1] glc
+; GFX9-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    v_mov_b32_e32 v0, v2
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic i8, ptr %ptr monotonic, align 1
+  %ext = sext i8 %load to i32
+  ret i32 %ext
+}
+
+define i16 @atomic_load_flat_monotonic_i8_zext_to_i16(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_i8_zext_to_i16:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic i8, ptr %ptr monotonic, align 1
+  %ext = zext i8 %load to i16
+  ret i16 %ext
+}
+
+define i16 @atomic_load_flat_monotonic_i8_sext_to_i16(ptr %ptr) {
+; GFX7-LABEL: atomic_load_flat_monotonic_i8_sext_to_i16:
+; GFX7:       ; %bb.0:
+; GFX7-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX7-NEXT:    flat_load_sbyte v2, v[0:1] glc
+; GFX7-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GFX7-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX7-NEXT:    v_mov_b32_e32 v0, v2
+; GFX7-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX8-LABEL: atomic_load_flat_monotonic_i8_sext_to_i16:
+; GFX8:       ; %bb.0:
+; GFX8-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX8-NEXT:    flat_load_sbyte v2, v[0:1] glc
+; GFX8-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GFX8-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX8-NEXT:    v_mov_b32_e32 v0, v2
+; GFX8-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX9-LABEL: atomic_load_flat_monotonic_i8_sext_to_i16:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    flat_load_sbyte v2, v[0:1] glc
+; GFX9-NEXT:    flat_load_ubyte v3, v[0:1] glc
+; GFX9-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    v_mov_b32_e32 v0, v2
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic i8, ptr %ptr monotonic, align 1
+  %ext = sext i8 %load to i16
+  ret i16 %ext
+}
+
+define i16 @atomic_load_flat_monotonic_i16(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_i16:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ushort v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic i16, ptr %ptr monotonic, align 2
+  ret i16 %load
+}
+
+define i32 @atomic_load_flat_monotonic_i16_zext_to_i32(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_i16_zext_to_i32:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic i16, ptr %ptr monotonic, align 2
+  %ext = zext i16 %load to i32
+  ret i32 %ext
+}
+
+define i32 @atomic_load_flat_monotonic_i16_sext_to_i32(ptr %ptr) {
+; GFX7-LABEL: atomic_load_flat_monotonic_i16_sext_to_i32:
+; GFX7:       ; %bb.0:
+; GFX7-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX7-NEXT:    flat_load_sbyte v2, v[0:1] glc
+; GFX7-NEXT:    flat_load_ushort v0, v[0:1] glc
+; GFX7-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX7-NEXT:    v_mov_b32_e32 v0, v2
+; GFX7-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX8-LABEL: atomic_load_flat_monotonic_i16_sext_to_i32:
+; GFX8:       ; %bb.0:
+; GFX8-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX8-NEXT:    flat_load_sbyte v2, v[0:1] glc
+; GFX8-NEXT:    flat_load_ushort v0, v[0:1] glc
+; GFX8-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX8-NEXT:    v_mov_b32_e32 v0, v2
+; GFX8-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX9-LABEL: atomic_load_flat_monotonic_i16_sext_to_i32:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    flat_load_sbyte v2, v[0:1] glc
+; GFX9-NEXT:    flat_load_ushort v3, v[0:1] glc
+; GFX9-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    v_mov_b32_e32 v0, v2
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic i16, ptr %ptr monotonic, align 2
+  %ext = sext i16 %load to i32
+  ret i32 %ext
+}
+
+define half @atomic_load_flat_monotonic_f16(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_f16:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ushort v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic half, ptr %ptr monotonic, align 2
+  ret half %load
+}
+
+define bfloat @atomic_load_flat_monotonic_bf16(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_bf16:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ushort v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic bfloat, ptr %ptr monotonic, align 2
+  ret bfloat %load
+}
+
+define i32 @atomic_load_flat_monotonic_f16_zext_to_i32(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_f16_zext_to_i32:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic half, ptr %ptr monotonic, align 2
+  %cast = bitcast half %load to i16
+  %ext = zext i16 %cast to i32
+  ret i32 %ext
+}
+
+define i32 @atomic_load_flat_monotonic_bf16_zext_to_i32(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_bf16_zext_to_i32:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic bfloat, ptr %ptr monotonic, align 2
+  %cast = bitcast bfloat %load to i16
+  %ext = zext i16 %cast to i32
+  ret i32 %ext
+}
+
+define i32 @atomic_load_flat_monotonic_i16_d16_hi_shift(ptr %ptr) {
+; GCN-LABEL: atomic_load_flat_monotonic_i16_d16_hi_shift:
+; GCN:       ; %bb.0:
+; GCN-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GCN-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
+; GCN-NEXT:    v_lshlrev_b32_e32 v0, 16, v0
+; GCN-NEXT:    s_setpc_b64 s[30:31]
+  %load = load atomic i16, ptr %ptr monotonic, align 2
+  %ext = zext i16 %load to i32
+  %shl = shl i32 %ext, 16
+  ret i32 %shl
+}
+
+define <2 x i16> @atomic_load_flat_monotonic_i16_d16_hi_vector_insert(ptr %ptr, <2 x i16> %vec) {
+; GFX7-LABEL: atomic_load_flat_monotonic_i16_d16_hi_vector_insert:
+; GFX7:       ; %bb.0:
+; GFX7-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX7-NEXT:    flat_load_ubyte v0, v[0:1] glc
+; GFX7-NEXT:    v_lshlrev_b32_e32 v1, 16, v3
+; GFX7-NEXT:    v_and_b32_e32 v2, 0xffff, v2
+; GFX7-NEXT:    v_or_b32_e32 v1, v1, v2
+; GFX7-NEXT:    v_and_b32_e...
[truncated]

@arsenm arsenm changed the title from "AMDGPU: Handle atomic sextload and zextload" to "AMDGPU/GlobalISel: Handle atomic sextload and zextload" on Oct 9, 2024
@rampitec rampitec (Collaborator) left a comment

Missing test for buffer loads?

@arsenm arsenm (Contributor, Author) commented Oct 9, 2024

Missing test for buffer loads?

Those are the gfx7 global cases. There aren't any atomic buffer load intrinsics

@rampitec rampitec (Collaborator) commented Oct 9, 2024

Missing test for buffer loads?

Those are the gfx7 global cases. There aren't any atomic buffer load intrinsics

But the patch adds several MUBUF_Pseudo_Load_Pats which are not covered by tests?

@arsenm arsenm force-pushed the users/arsenm/amdgpu-globalisel-missing-m0-zextload-sextload branch from 8a4ebaa to 7ddd036 on October 9, 2024 18:07
@arsenm arsenm force-pushed the users/arsenm/amdgpu-global-isel-fix-atomic-extloads branch from dd5d25a to 81dad07 on October 9, 2024 18:07
@arsenm arsenm (Contributor, Author) commented Oct 9, 2024

But the patch adds several MUBUF_Pseudo_Load_Pats which are not covered by tests?

The only cases that might be missing coverage are the extend-to-16-bit-register cases. In the DAG we didn't have legal 16-bit types on gfx6/7, but we could handle those loads here.
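
For illustration, an extend-to-16-bit-register case would look like this in IR (a sketch; the added flat test file includes an equivalent function):

define i16 @atomic_zextload_i8_to_i16(ptr %ptr) {
  ; i8 atomic load whose result is zero-extended to an i16 register
  %load = load atomic i8, ptr %ptr monotonic, align 1
  %ext = zext i8 %load to i16
  ret i16 %ext
}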

Base automatically changed from users/arsenm/amdgpu-globalisel-missing-m0-zextload-sextload to main October 10, 2024 10:01
@arsenm arsenm force-pushed the users/arsenm/amdgpu-global-isel-fix-atomic-extloads branch 2 times, most recently from f61f760 to b2937eb on October 10, 2024 10:44
@arsenm arsenm (Contributor, Author) commented Oct 15, 2024

ping

@rampitec rampitec (Collaborator) commented

But the patch adds several MUBUF_Pseudo_Load_Pats which are not covered by tests?

The only cases that might be missing coverage are the extend-to-16-bit-register cases. In the DAG we didn't have legal 16-bit types on gfx6/7, but we could handle those loads here.

I still think these added MUBUF patterns need at least some test coverage.

@arsenm arsenm force-pushed the users/arsenm/amdgpu-global-isel-fix-atomic-extloads branch from b2937eb to bdd2a6b on October 16, 2024 04:56
@arsenm arsenm (Contributor, Author) commented Oct 16, 2024

I still think these added MUBUF patterns need at least some test coverage.

Just removed the 16-bit result ones

@arsenm arsenm force-pushed the users/arsenm/amdgpu-global-isel-fix-atomic-extloads branch from bdd2a6b to 7d5e8ec on October 17, 2024 13:05
@arsenm arsenm force-pushed the users/arsenm/amdgpu-global-isel-fix-atomic-extloads branch from 7d5e8ec to 5c0c290 on October 30, 2024 22:04
@arsenm arsenm (Contributor, Author) left a comment

ping

@@ -983,15 +983,20 @@ defm BUFFER_LOAD_LDS_U16 : MUBUF_Pseudo_Loads_LDSOpc <
>;

defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_UBYTE", i32, atomic_load_8_global>;
defm : MUBUF_Pseudo_Load_Pats<"BUFFER_LOAD_UBYTE", i32, atomic_load_zext_8_global>;
Collaborator

I still think if you are adding buffer patterns you need to have tests for them.

Contributor Author

They are there

Collaborator

I see modified tests for flat, global and local.

Contributor Author

This is not for buffer intrinsics or fat pointers. This is for global pointers implemented with mubuf instructions. This is covered by the global run lines for gfx6. The -flat-for-global case for gfx7 does not correctly use the mubuf results, but that's a separate issue.

Collaborator

Ok
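
For context, a sketch of the kind of function that exercises the MUBUF patterns discussed here: an ordinary global-pointer atomic load, which on a gfx6-style run line is implemented with buffer instructions (per the comment above; the function name is illustrative):

define i32 @atomic_zextload_i8_global_to_i32(ptr addrspace(1) %ptr) {
  ; atomic load through a global (addrspace 1) pointer, zero-extended to i32
  %load = load atomic i8, ptr addrspace(1) %ptr monotonic, align 1
  %ext = zext i8 %load to i32
  ret i32 %ext
}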

@arsenm arsenm (Contributor, Author) commented Oct 31, 2024

Merge activity

  • Oct 31, 10:38 AM EDT: A user started a stack merge that includes this pull request via Graphite.
  • Oct 31, 10:39 AM EDT: Graphite rebased this pull request as part of a merge.
  • Oct 31, 10:42 AM EDT: Graphite rebased this pull request as part of a merge.
  • Oct 31, 10:44 AM EDT: A user merged this pull request with Graphite.

@arsenm arsenm force-pushed the users/arsenm/amdgpu-global-isel-fix-atomic-extloads branch from 5c0c290 to 2548945 on October 31, 2024 14:39

Atomic loads are handled differently from the DAG, and have separate opcodes
and explicit control over the extensions, like ordinary loads. Add
new patterns for these.

There's room for cleanup and improvement. d16 cases aren't handled.

Fixes #111645

This would only be used if we started using direct i16 values on gfx6/gfx7, but they should legalize all to i32
@arsenm arsenm force-pushed the users/arsenm/amdgpu-global-isel-fix-atomic-extloads branch from 2548945 to d8e49cc on October 31, 2024 14:42
@arsenm arsenm merged commit 1240902 into main Oct 31, 2024
5 of 6 checks passed
@arsenm arsenm deleted the users/arsenm/amdgpu-global-isel-fix-atomic-extloads branch October 31, 2024 14:44
smallp-o-p pushed a commit to smallp-o-p/llvm-project that referenced this pull request Nov 3, 2024
smallp-o-p pushed a commit to smallp-o-p/llvm-project that referenced this pull request Nov 3, 2024
NoumanAmir657 pushed a commit to NoumanAmir657/llvm-project that referenced this pull request Nov 4, 2024
Successfully merging this pull request may close these issues:

AMDGPU: sextload/zextload atomics do not select in globalisel (#111645)