Merged master:6d374cf78c8a into amd-gfx:75c3a90abed8
Local branch amd-gfx 75c3a90 Merged master:c2921d148e7b into amd-gfx:84afc21c89eb
Remote branch master 6d374cf [X86] Add 64-bit target tests
Sw authored and Sw committed Sep 26, 2020
2 parents 75c3a90 + 6d374cf commit 7db6484
Showing 2 changed files with 45 additions and 23 deletions.
12 changes: 8 additions & 4 deletions clang/docs/ThreadSafetyAnalysis.rst
@@ -144,6 +144,9 @@ and data members. Users are *strongly advised* to define macros for the various
attributes; example definitions can be found in :ref:`mutexheader`, below.
The following documentation assumes the use of macros.

The attributes only control assumptions made by thread safety analysis and the
warnings it issues. They don't affect generated code or behavior at run-time.

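For example (a minimal sketch, assuming the macro definitions from
:ref:`mutexheader`, below), the ``GUARDED_BY`` annotation here only causes the
analysis to warn about the unguarded write; the generated code is the same
with or without the annotation:

.. code-block:: c++

  Mutex mu;
  int balance GUARDED_BY(mu);   // analysis-only annotation; no run-time effect

  void deposit(int amount) {
    balance += amount;          // analysis warning: requires holding mutex 'mu'
  }
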
For historical reasons, prior versions of thread safety used macro names that
were very lock-centric. These macros have since been renamed to fit a more
general capability model. The prior names are still in use, and will be
@@ -447,10 +450,11 @@ ASSERT_CAPABILITY(...) and ASSERT_SHARED_CAPABILITY(...)

*Previously:* ``ASSERT_EXCLUSIVE_LOCK``, ``ASSERT_SHARED_LOCK``

These are attributes on a function or method that does a run-time test to see
whether the calling thread holds the given capability. The function is assumed
to fail (no return) if the capability is not held. See :ref:`mutexheader`,
below, for example uses.
These are attributes on a function or method that asserts that the calling
thread already holds the given capability, for example by performing a
run-time test and terminating if the capability is not held. The presence of
this annotation causes the analysis to assume the capability is held after
calls to the annotated function. See :ref:`mutexheader`, below, for example uses.
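
As a sketch (reusing the annotated ``Mutex`` class from :ref:`mutexheader`,
below, where ``AssertHeld()`` is declared with ``ASSERT_CAPABILITY(this)``),
a call to the asserting method lets later guarded accesses pass the analysis
without a matching acquire/release in the same scope:

.. code-block:: c++

  Mutex mu;
  int data GUARDED_BY(mu);

  void worker() {
    mu.AssertHeld();   // analysis assumes 'mu' is held from here on
    data = 42;         // no warning: the capability is assumed to be held
  }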


GUARDED_VAR and PT_GUARDED_VAR
56 changes: 37 additions & 19 deletions llvm/test/CodeGen/X86/2012-07-10-extload64.ll
@@ -1,14 +1,22 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc < %s -mtriple=i686-pc-win32 -mcpu=corei7 | FileCheck %s
; RUN: llc < %s -mtriple=i686-pc-win32 -mcpu=corei7 | FileCheck %s --check-prefixes=CHECK,X86
; RUN: llc < %s -mtriple=x86_64-pc-win32 -mcpu=corei7 | FileCheck %s --check-prefixes=CHECK,X64

define void @load_store(<4 x i16>* %in) {
; CHECK-LABEL: load_store:
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
; CHECK-NEXT: movq {{.*#+}} xmm0 = mem[0],zero
; CHECK-NEXT: paddw %xmm0, %xmm0
; CHECK-NEXT: movq %xmm0, (%eax)
; CHECK-NEXT: retl
; X86-LABEL: load_store:
; X86: # %bb.0: # %entry
; X86-NEXT: movl {{[0-9]+}}(%esp), %eax
; X86-NEXT: movq {{.*#+}} xmm0 = mem[0],zero
; X86-NEXT: paddw %xmm0, %xmm0
; X86-NEXT: movq %xmm0, (%eax)
; X86-NEXT: retl
;
; X64-LABEL: load_store:
; X64: # %bb.0: # %entry
; X64-NEXT: movq {{.*#+}} xmm0 = mem[0],zero
; X64-NEXT: paddw %xmm0, %xmm0
; X64-NEXT: movq %xmm0, (%rcx)
; X64-NEXT: retq
entry:
%A27 = load <4 x i16>, <4 x i16>* %in, align 4
%A28 = add <4 x i16> %A27, %A27
@@ -18,23 +26,33 @@ entry:

; Make sure that we store a 64bit value, even on 32bit systems.
define void @store_64(<2 x i32>* %ptr) {
; CHECK-LABEL: store_64:
; CHECK: # %bb.0: # %BB
; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
; CHECK-NEXT: xorps %xmm0, %xmm0
; CHECK-NEXT: movlps %xmm0, (%eax)
; CHECK-NEXT: retl
; X86-LABEL: store_64:
; X86: # %bb.0: # %BB
; X86-NEXT: movl {{[0-9]+}}(%esp), %eax
; X86-NEXT: xorps %xmm0, %xmm0
; X86-NEXT: movlps %xmm0, (%eax)
; X86-NEXT: retl
;
; X64-LABEL: store_64:
; X64: # %bb.0: # %BB
; X64-NEXT: movq $0, (%rcx)
; X64-NEXT: retq
BB:
store <2 x i32> zeroinitializer, <2 x i32>* %ptr
ret void
}

define <2 x i32> @load_64(<2 x i32>* %ptr) {
; CHECK-LABEL: load_64:
; CHECK: # %bb.0: # %BB
; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
; CHECK-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
; CHECK-NEXT: retl
; X86-LABEL: load_64:
; X86: # %bb.0: # %BB
; X86-NEXT: movl {{[0-9]+}}(%esp), %eax
; X86-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
; X86-NEXT: retl
;
; X64-LABEL: load_64:
; X64: # %bb.0: # %BB
; X64-NEXT: movsd {{.*#+}} xmm0 = mem[0],zero
; X64-NEXT: retq
BB:
%t = load <2 x i32>, <2 x i32>* %ptr
ret <2 x i32> %t
