[memref] Handle edge case in subview of full static size fold #105635

Merged: 1 commit into llvm:main on Aug 23, 2024

Conversation

@MacDue MacDue commented Aug 22, 2024

It is possible to have a subview with a fully static size and a type that matches the source type, but a dynamic offset that may be different. However, currently the memref dialect folds:

```mlir
func.func @subview_of_static_full_size(
  %arg0: memref<16x4xf32, strided<[4, 1], offset: ?>>, %idx: index)
  -> memref<16x4xf32, strided<[4, 1], offset: ?>>
{
  %0 = memref.subview %arg0[%idx, 0][16, 4][1, 1]
   : memref<16x4xf32, strided<[4, 1], offset: ?>>
     to memref<16x4xf32, strided<[4, 1], offset: ?>>
  return %0 : memref<16x4xf32, strided<[4, 1], offset: ?>>
}
```

To:

```mlir
func.func @subview_of_static_full_size(
  %arg0: memref<16x4xf32, strided<[4, 1], offset: ?>>, %arg1: index)
  -> memref<16x4xf32, strided<[4, 1], offset: ?>>
{
  return %arg0 : memref<16x4xf32, strided<[4, 1], offset: ?>>
}
```

This drops the dynamic offset from the `subview` op.
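
By contrast, when the strided layout itself is fully static, the whole-view fold stays sound: there is no dynamic offset to lose. The example below is illustrative only (the function name and layout values are made up; it is not one of this patch's tests):

```mlir
// The source and result types match, the shape is fully static, and the
// layout strided<[4, 1], offset: 0> has no dynamic strides or offset, so
// replacing the subview with its source loses no information.
func.func @subview_of_static_full_size_static_layout(
  %arg0: memref<16x4xf32, strided<[4, 1], offset: 0>>)
  -> memref<16x4xf32, strided<[4, 1], offset: 0>>
{
  %0 = memref.subview %arg0[0, 0][16, 4][1, 1]
   : memref<16x4xf32, strided<[4, 1], offset: 0>>
     to memref<16x4xf32, strided<[4, 1], offset: 0>>
  return %0 : memref<16x4xf32, strided<[4, 1], offset: 0>>
}
```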

@MacDue MacDue requested a review from c-rhodes August 22, 2024 10:08

llvmbot commented Aug 22, 2024

@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-memref
@llvm/pr-subscribers-mlir-core
@llvm/pr-subscribers-mlir-ods

Author: Benjamin Maxwell (MacDue)

Full diff: https://github.com/llvm/llvm-project/pull/105635.diff

4 Files Affected:

  • (modified) mlir/include/mlir/IR/BuiltinAttributes.td (+4)
  • (modified) mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp (+9-6)
  • (modified) mlir/lib/IR/BuiltinAttributes.cpp (+7)
  • (modified) mlir/test/Dialect/MemRef/canonicalize.mlir (+13)
```diff
diff --git a/mlir/include/mlir/IR/BuiltinAttributes.td b/mlir/include/mlir/IR/BuiltinAttributes.td
index d9295936ee97bd..f0d41754001400 100644
--- a/mlir/include/mlir/IR/BuiltinAttributes.td
+++ b/mlir/include/mlir/IR/BuiltinAttributes.td
@@ -1012,6 +1012,10 @@ def StridedLayoutAttr : Builtin_Attr<"StridedLayout", "strided_layout",
   let extraClassDeclaration = [{
     /// Print the attribute to the given output stream.
     void print(raw_ostream &os) const;
+
+    /// Returns true if this layout is static, i.e. the strides and offset all
+    /// have a known value > 0.
+    bool hasStaticLayout() const;
   }];
 }
 
diff --git a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
index 150049e5c5effe..9c021d3613f1c8 100644
--- a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
+++ b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
@@ -3279,11 +3279,14 @@ void SubViewOp::getCanonicalizationPatterns(RewritePatternSet &results,
 }
 
 OpFoldResult SubViewOp::fold(FoldAdaptor adaptor) {
-  auto resultShapedType = llvm::cast<ShapedType>(getResult().getType());
-  auto sourceShapedType = llvm::cast<ShapedType>(getSource().getType());
-
-  if (resultShapedType.hasStaticShape() &&
-      resultShapedType == sourceShapedType) {
+  MemRefType sourceMemrefType = getSource().getType();
+  MemRefType resultMemrefType = getResult().getType();
+  auto resultLayout =
+      dyn_cast_if_present<StridedLayoutAttr>(resultMemrefType.getLayout());
+
+  if (resultMemrefType == sourceMemrefType &&
+      resultMemrefType.hasStaticShape() &&
+      (!resultLayout || resultLayout.hasStaticLayout())) {
     return getViewSource();
   }
 
@@ -3301,7 +3304,7 @@ OpFoldResult SubViewOp::fold(FoldAdaptor adaptor) {
         strides, [](OpFoldResult ofr) { return isConstantIntValue(ofr, 1); });
     bool allSizesSame = llvm::equal(sizes, srcSizes);
     if (allOffsetsZero && allStridesOne && allSizesSame &&
-        resultShapedType == sourceShapedType)
+        resultMemrefType == sourceMemrefType)
       return getViewSource();
   }
 
diff --git a/mlir/lib/IR/BuiltinAttributes.cpp b/mlir/lib/IR/BuiltinAttributes.cpp
index 89b1ed67f5d067..8861a940336133 100644
--- a/mlir/lib/IR/BuiltinAttributes.cpp
+++ b/mlir/lib/IR/BuiltinAttributes.cpp
@@ -229,6 +229,13 @@ void StridedLayoutAttr::print(llvm::raw_ostream &os) const {
   os << ">";
 }
 
+/// Returns true if this layout is static, i.e. the strides and offset all have
+/// a known value > 0.
+bool StridedLayoutAttr::hasStaticLayout() const {
+  return !ShapedType::isDynamic(getOffset()) &&
+         !ShapedType::isDynamicShape(getStrides());
+}
+
 /// Returns the strided layout as an affine map.
 AffineMap StridedLayoutAttr::getAffineMap() const {
   return makeStridedLinearLayoutMap(getStrides(), getOffset(), getContext());
diff --git a/mlir/test/Dialect/MemRef/canonicalize.mlir b/mlir/test/Dialect/MemRef/canonicalize.mlir
index b15af9baca7dc7..02110bc2892d05 100644
--- a/mlir/test/Dialect/MemRef/canonicalize.mlir
+++ b/mlir/test/Dialect/MemRef/canonicalize.mlir
@@ -70,6 +70,19 @@ func.func @subview_of_static_full_size(%arg0 : memref<4x6x16x32xi8>) -> memref<4
 
 // -----
 
+// CHECK-LABEL: func @negative_subview_of_static_full_size
+//  CHECK-SAME:   %[[ARG0:.+]]: memref<16x4xf32,  strided<[4, 1], offset: ?>>
+//  CHECK-SAME:   %[[IDX:.+]]: index
+//       CHECK:   %[[S:.+]] = memref.subview %[[ARG0]][%[[IDX]], 0] [16, 4] [1, 1]
+//  CHECK-SAME:                    to memref<16x4xf32,  strided<[4, 1], offset: ?>>
+//       CHECK:    return %[[S]] : memref<16x4xf32,  strided<[4, 1], offset: ?>>
+func.func @negative_subview_of_static_full_size(%arg0:  memref<16x4xf32,  strided<[4, 1], offset: ?>>, %idx: index) -> memref<16x4xf32,  strided<[4, 1], offset: ?>> {
+  %0 = memref.subview %arg0[%idx, 0][16, 4][1, 1] : memref<16x4xf32,  strided<[4, 1], offset: ?>> to memref<16x4xf32,  strided<[4, 1], offset: ?>>
+  return %0 : memref<16x4xf32,  strided<[4, 1], offset: ?>>
+}
+
+// -----
+
 func.func @subview_canonicalize(%arg0 : memref<?x?x?xf32>, %arg1 : index,
     %arg2 : index) -> memref<?x?x?xf32, strided<[?, ?, ?], offset: ?>>
 {
```
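
As a quick illustration of what the new `StridedLayoutAttr::hasStaticLayout()` helper distinguishes, the declarations below (illustrative only, not part of the patch) show layouts it reports as static versus dynamic; after this change the whole-view fold can only apply to the first kind:

```mlir
// Static layout: the offset (8) and both strides (4, 1) are known constants,
// so hasStaticLayout() is true and the fold may still fire when the source
// and result types match.
func.func private @static_layout(memref<16x4xf32, strided<[4, 1], offset: 8>>)

// Dynamic offset: hasStaticLayout() is false, so the fold is blocked.
func.func private @dynamic_offset(memref<16x4xf32, strided<[4, 1], offset: ?>>)

// Dynamic stride: also reported as non-static, so the fold is blocked.
func.func private @dynamic_stride(memref<16x4xf32, strided<[?, 1], offset: 0>>)
```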

@c-rhodes c-rhodes left a comment


Nice catch, LGTM cheers

@MacDue MacDue merged commit 84aa02d into llvm:main Aug 23, 2024
13 checks passed
@MacDue MacDue deleted the memref branch August 23, 2024 05:52
cjdb pushed a commit to cjdb/llvm-project that referenced this pull request Aug 23, 2024
dmpolukhin pushed a commit to dmpolukhin/llvm-project that referenced this pull request Sep 2, 2024