[VPlan] Delay adding canonical IV increment. #82270
Conversation
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-backend-risc-v

Author: Florian Hahn (fhahn)

Changes

This patch delays adding canonical IV increments, computing the next active lane mask, and the branch recipes to exit the vector loop.

During initial construction of a VPlan, only add the canonical IV phi and active-lane-mask phis if needed. Similarly, do not add the branches to exit the loop initially. Computing the next IV value, the next active-lane-mask or the exit branches are details that are only needed to simplify codegen (execute of the individual recipes).

This makes the initial VPlans more abstract (and simpler), in that we initially leave out some details. Initial VPlans still have the canonical induction recipe, which provides the value of the canonical induction at every iteration, but we leave out the detail of how it is computed, which is not needed until code generation. Similar reasoning applies to active lane mask increments and branch recipes. The vector loop region initially abstractly models the loop processing all vector iterations, without specifying how exactly exiting the region is handled. Again these details are only needed for code generation.

To introduce the missing pieces and complete the abstract VPlan, a new prepareToExecute transform is added and run just before executing the VPlan.

This is an attempt to further employ gradual lowering, as outlined in https://llvm.org/devmtg/2023-10/slides/techtalks/Hahn-VPlan-StatusUpdateAndRoadmap.pdf and already applied for replicate region handling.

Patch is 88.10 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/82270.diff

27 Files Affected:
diff --git a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
index eca901fcdae4ce..8b5590c3400f48 100644
--- a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
+++ b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp
@@ -7452,6 +7452,14 @@ LoopVectorizationPlanner::executePlan(
LLVM_DEBUG(dbgs() << "Executing best plan with VF=" << BestVF << ", UF=" << BestUF
<< '\n');
+ TailFoldingStyle Style =
+ CM.getTailFoldingStyle(!isIndvarOverflowCheckKnownFalse(&CM, BestVF));
+ // When not folding the tail, we know that the induction increment will not
+ // overflow.
+ bool HasNUW = Style == TailFoldingStyle::None;
+ bool WithoutRuntimeCheck =
+ Style == TailFoldingStyle::DataAndControlFlowWithoutRuntimeCheck;
+ VPlanTransforms::prepareToExecute(BestVPlan, HasNUW, WithoutRuntimeCheck);
if (!IsEpilogueVectorization)
VPlanTransforms::optimizeForVFAndUF(BestVPlan, BestVF, BestUF, PSE);
@@ -8509,17 +8517,6 @@ static void addCanonicalIVRecipes(VPlan &Plan, Type *IdxTy, bool HasNUW,
VPRegionBlock *TopRegion = Plan.getVectorLoopRegion();
VPBasicBlock *Header = TopRegion->getEntryBasicBlock();
Header->insert(CanonicalIVPHI, Header->begin());
-
- VPBuilder Builder(TopRegion->getExitingBasicBlock());
- // Add a VPInstruction to increment the scalar canonical IV by VF * UF.
- auto *CanonicalIVIncrement = Builder.createOverflowingOp(
- Instruction::Add, {CanonicalIVPHI, &Plan.getVFxUF()}, {HasNUW, false}, DL,
- "index.next");
- CanonicalIVPHI->addOperand(CanonicalIVIncrement);
-
- // Add the BranchOnCount VPInstruction to the latch.
- Builder.createNaryOp(VPInstruction::BranchOnCount,
- {CanonicalIVIncrement, &Plan.getVectorTripCount()}, DL);
}
// Add exit values to \p Plan. VPLiveOuts are added for each LCSSA phi in the
@@ -9005,7 +9002,7 @@ void LoopVectorizationPlanner::adjustRecipesForReductions(
PreviousLink = RedRecipe;
}
}
- Builder.setInsertPoint(&*LatchVPBB->begin());
+ Builder.setInsertPoint(LatchVPBB, LatchVPBB->begin());
for (VPRecipeBase &R :
Plan->getVectorLoopRegion()->getEntryBasicBlock()->phis()) {
VPReductionPHIRecipe *PhiR = dyn_cast<VPReductionPHIRecipe>(&R);
diff --git a/llvm/lib/Transforms/Vectorize/VPlan.h b/llvm/lib/Transforms/Vectorize/VPlan.h
index 42e3b4a003511b..cfc4161adc0c82 100644
--- a/llvm/lib/Transforms/Vectorize/VPlan.h
+++ b/llvm/lib/Transforms/Vectorize/VPlan.h
@@ -2363,7 +2363,8 @@ class VPCanonicalIVPHIRecipe : public VPHeaderPHIRecipe {
VPRecipeBase *clone() override {
auto *R = new VPCanonicalIVPHIRecipe(getOperand(0), getDebugLoc());
- R->addOperand(getBackedgeValue());
+ if (getNumOperands() == 2)
+ R->addOperand(getBackedgeValue());
return R;
}
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
index 3d443421024202..965ea0386b9475 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp
@@ -1109,16 +1109,10 @@ void VPlanTransforms::optimize(VPlan &Plan, ScalarEvolution &SE) {
static VPActiveLaneMaskPHIRecipe *addVPLaneMaskPhiAndUpdateExitBranch(
VPlan &Plan, bool DataAndControlFlowWithoutRuntimeCheck) {
VPRegionBlock *TopRegion = Plan.getVectorLoopRegion();
- VPBasicBlock *EB = TopRegion->getExitingBasicBlock();
auto *CanonicalIVPHI = Plan.getCanonicalIV();
VPValue *StartV = CanonicalIVPHI->getStartValue();
- auto *CanonicalIVIncrement =
- cast<VPInstruction>(CanonicalIVPHI->getBackedgeValue());
- // TODO: Check if dropping the flags is needed if
- // !DataAndControlFlowWithoutRuntimeCheck.
- CanonicalIVIncrement->dropPoisonGeneratingFlags();
- DebugLoc DL = CanonicalIVIncrement->getDebugLoc();
+ DebugLoc DL = CanonicalIVPHI->getDebugLoc();
// We can't use StartV directly in the ActiveLaneMask VPInstruction, since
// we have to take unrolling into account. Each part needs to start at
// Part * VF
@@ -1128,21 +1122,6 @@ static VPActiveLaneMaskPHIRecipe *addVPLaneMaskPhiAndUpdateExitBranch(
// Create the ActiveLaneMask instruction using the correct start values.
VPValue *TC = Plan.getTripCount();
- VPValue *TripCount, *IncrementValue;
- if (!DataAndControlFlowWithoutRuntimeCheck) {
- // When the loop is guarded by a runtime overflow check for the loop
- // induction variable increment by VF, we can increment the value before
- // the get.active.lane mask and use the unmodified tripcount.
- IncrementValue = CanonicalIVIncrement;
- TripCount = TC;
- } else {
- // When avoiding a runtime check, the active.lane.mask inside the loop
- // uses a modified trip count and the induction variable increment is
- // done after the active.lane.mask intrinsic is called.
- IncrementValue = CanonicalIVPHI;
- TripCount = Builder.createNaryOp(VPInstruction::CalculateTripCountMinusVF,
- {TC}, DL);
- }
auto *EntryIncrement = Builder.createOverflowingOp(
VPInstruction::CanonicalIVIncrementForPart, {StartV}, {false, false}, DL,
"index.part.next");
@@ -1156,24 +1135,6 @@ static VPActiveLaneMaskPHIRecipe *addVPLaneMaskPhiAndUpdateExitBranch(
// preheader ActiveLaneMask instruction.
auto LaneMaskPhi = new VPActiveLaneMaskPHIRecipe(EntryALM, DebugLoc());
LaneMaskPhi->insertAfter(CanonicalIVPHI);
-
- // Create the active lane mask for the next iteration of the loop before the
- // original terminator.
- VPRecipeBase *OriginalTerminator = EB->getTerminator();
- Builder.setInsertPoint(OriginalTerminator);
- auto *InLoopIncrement =
- Builder.createOverflowingOp(VPInstruction::CanonicalIVIncrementForPart,
- {IncrementValue}, {false, false}, DL);
- auto *ALM = Builder.createNaryOp(VPInstruction::ActiveLaneMask,
- {InLoopIncrement, TripCount}, DL,
- "active.lane.mask.next");
- LaneMaskPhi->addOperand(ALM);
-
- // Replace the original terminator with BranchOnCond. We have to invert the
- // mask here because a true condition means jumping to the exit block.
- auto *NotMask = Builder.createNot(ALM, DL);
- Builder.createNaryOp(VPInstruction::BranchOnCond, {NotMask}, DL);
- OriginalTerminator->eraseFromParent();
return LaneMaskPhi;
}
@@ -1304,3 +1265,71 @@ void VPlanTransforms::dropPoisonGeneratingRecipes(
}
}
}
+
+void VPlanTransforms::prepareToExecute(
+ VPlan &Plan, bool HasNUW, bool DataAndControlFlowWithoutRuntimeCheck) {
+ auto *CanIV = Plan.getCanonicalIV();
+
+ VPBasicBlock *EB = Plan.getVectorLoopRegion()->getExitingBasicBlock();
+ VPBuilder Builder(EB);
+ DebugLoc DL = CanIV->getDebugLoc();
+ // Add a VPInstruction to increment the scalar canonical IV by VF * UF.
+ auto *CanonicalIVIncrement =
+ Builder.createOverflowingOp(Instruction::Add, {CanIV, &Plan.getVFxUF()},
+ {HasNUW, false}, DL, "index.next");
+
+ CanIV->addOperand(CanonicalIVIncrement);
+
+ auto FoundLaneMaskPhi = find_if(
+ Plan.getVectorLoopRegion()->getEntryBasicBlock()->phis(),
+ [](VPRecipeBase &P) { return isa<VPActiveLaneMaskPHIRecipe>(P); });
+
+ if (FoundLaneMaskPhi ==
+ Plan.getVectorLoopRegion()->getEntryBasicBlock()->phis().end()) {
+ // Add the BranchOnCount VPInstruction to the latch.
+ Builder.createNaryOp(VPInstruction::BranchOnCount,
+ {CanonicalIVIncrement, &Plan.getVectorTripCount()},
+ DL);
+ return;
+ }
+ auto *LaneMaskPhi = cast<VPActiveLaneMaskPHIRecipe>(&*FoundLaneMaskPhi);
+ auto *VecPreheader =
+ cast<VPBasicBlock>(Plan.getVectorLoopRegion()->getSinglePredecessor());
+ Builder.setInsertPoint(VecPreheader);
+
+ VPValue *TC = Plan.getTripCount();
+
+ // TODO: Check if dropping the flags is needed if
+ // !DataAndControlFlowWithoutRuntimeCheck.
+ CanonicalIVIncrement->dropPoisonGeneratingFlags();
+ VPValue *TripCount, *IncrementValue;
+ if (!DataAndControlFlowWithoutRuntimeCheck) {
+ // When the loop is guarded by a runtime overflow check for the loop
+ // induction variable increment by VF, we can increment the value before
+ // the get.active.lane mask and use the unmodified tripcount.
+ IncrementValue = CanonicalIVIncrement;
+ TripCount = TC;
+ } else {
+ // When avoiding a runtime check, the active.lane.mask inside the loop
+ // uses a modified trip count and the induction variable increment is
+ // done after the active.lane.mask intrinsic is called.
+ IncrementValue = CanIV;
+ TripCount = Builder.createNaryOp(VPInstruction::CalculateTripCountMinusVF,
+ {TC}, DL);
+ }
+ // Create the active lane mask for the next iteration of the loop before the
+ // original terminator.
+ Builder.setInsertPoint(EB);
+ auto *InLoopIncrement =
+ Builder.createOverflowingOp(VPInstruction::CanonicalIVIncrementForPart,
+ {IncrementValue}, {false, false}, DL);
+ auto *ALM = Builder.createNaryOp(VPInstruction::ActiveLaneMask,
+ {InLoopIncrement, TripCount}, DL,
+ "active.lane.mask.next");
+ LaneMaskPhi->addOperand(ALM);
+
+ // Replace the original terminator with BranchOnCond. We have to invert the
+ // mask here because a true condition means jumping to the exit block.
+ auto *NotMask = Builder.createNot(ALM, DL);
+ Builder.createNaryOp(VPInstruction::BranchOnCond, {NotMask}, DL);
+}
diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
index ff83c3f083b093..1ddbe769d4a132 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
+++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h
@@ -98,6 +98,9 @@ struct VPlanTransforms {
/// VPlan directly.
static void dropPoisonGeneratingRecipes(
VPlan &Plan, function_ref<bool(BasicBlock *)> BlockNeedsPredication);
+
+ static void prepareToExecute(VPlan &Plan, bool HasNUW,
+ bool DataAndControlFlowWithoutRuntimeCheck);
};
} // namespace llvm
diff --git a/llvm/lib/Transforms/Vectorize/VPlanVerifier.cpp b/llvm/lib/Transforms/Vectorize/VPlanVerifier.cpp
index d6b81543dbc9cc..22128e9dbb35a3 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanVerifier.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanVerifier.cpp
@@ -51,12 +51,13 @@ static void verifyBlocksInRegion(const VPRegionBlock *Region) {
auto *VPBB = dyn_cast<VPBasicBlock>(VPB);
// Check block's condition bit.
- if (VPB->getNumSuccessors() > 1 || (VPBB && VPBB->isExiting()))
- assert(VPBB && VPBB->getTerminator() &&
- "Block has multiple successors but doesn't "
- "have a proper branch recipe!");
- else
- assert((!VPBB || !VPBB->getTerminator()) && "Unexpected branch recipe!");
+ /* if (VPB->getNumSuccessors() > 1 || (VPBB && VPBB->isExiting()))*/
+ /*assert(VPBB && VPBB->getTerminator() &&*/
+ /*"Block has multiple successors but doesn't "*/
+ /*"have a proper branch recipe!");*/
+ /*else*/
+ /*assert((!VPBB || !VPBB->getTerminator()) && "Unexpected branch
+ * recipe!");*/
// Check block's successors.
const auto &Successors = VPB->getSuccessors();
@@ -260,19 +261,20 @@ bool VPlanVerifier::verifyPlanIsValid(const VPlan &Plan) {
return false;
}
- if (Exiting->empty()) {
- errs() << "VPlan vector loop exiting block must end with BranchOnCount or "
- "BranchOnCond VPInstruction but is empty\n";
- return false;
- }
-
- auto *LastInst = dyn_cast<VPInstruction>(std::prev(Exiting->end()));
- if (!LastInst || (LastInst->getOpcode() != VPInstruction::BranchOnCount &&
- LastInst->getOpcode() != VPInstruction::BranchOnCond)) {
- errs() << "VPlan vector loop exit must end with BranchOnCount or "
- "BranchOnCond VPInstruction\n";
- return false;
- }
+ /* if (Exiting->empty()) {*/
+ /*errs() << "VPlan vector loop exiting block must end with BranchOnCount or
+ * "*/
+ /*"BranchOnCond VPInstruction but is empty\n";*/
+ /*return false;*/
+ /*}*/
+
+ /*auto *LastInst = dyn_cast<VPInstruction>(std::prev(Exiting->end()));*/
+ /*if (!LastInst || (LastInst->getOpcode() != VPInstruction::BranchOnCount &&*/
+ /*LastInst->getOpcode() != VPInstruction::BranchOnCond)) {*/
+ /*errs() << "VPlan vector loop exit must end with BranchOnCount or "*/
+ /*"BranchOnCond VPInstruction\n";*/
+ /*return false;*/
+ /*}*/
for (const VPRegionBlock *Region :
VPBlockUtils::blocksOnly<const VPRegionBlock>(
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/clamped-trip-count.ll b/llvm/test/Transforms/LoopVectorize/AArch64/clamped-trip-count.ll
index 44ace377ac7927..11ddb096e357e0 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/clamped-trip-count.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/clamped-trip-count.ll
@@ -18,12 +18,12 @@ define void @clamped_tc_8(ptr nocapture %dst, i32 %n, i64 %val){
; CHECK-NEXT: [[IND_END:%.*]] = getelementptr i8, ptr [[DST]], i64 [[N_VEC]]
; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[TMP6:%.*]] = mul i64 [[TMP5]], 8
+; CHECK-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 8)
; CHECK-NEXT: [[TMP7:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[TMP8:%.*]] = mul i64 [[TMP7]], 8
; CHECK-NEXT: [[TMP9:%.*]] = sub i64 8, [[TMP8]]
; CHECK-NEXT: [[TMP10:%.*]] = icmp ugt i64 8, [[TMP8]]
; CHECK-NEXT: [[TMP11:%.*]] = select i1 [[TMP10]], i64 [[TMP9]], i64 0
-; CHECK-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 8)
; CHECK-NEXT: [[TMP12:%.*]] = call <vscale x 8 x i64> @llvm.experimental.stepvector.nxv8i64()
; CHECK-NEXT: [[TMP13:%.*]] = add <vscale x 8 x i64> [[TMP12]], zeroinitializer
; CHECK-NEXT: [[TMP14:%.*]] = mul <vscale x 8 x i64> [[TMP13]], shufflevector (<vscale x 8 x i64> insertelement (<vscale x 8 x i64> poison, i64 1, i64 0), <vscale x 8 x i64> poison, <vscale x 8 x i32> zeroinitializer)
@@ -115,12 +115,12 @@ define void @clamped_tc_max_8(ptr nocapture %dst, i32 %n, i64 %val){
; CHECK-NEXT: [[IND_END:%.*]] = getelementptr i8, ptr [[DST]], i64 [[N_VEC]]
; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[TMP6:%.*]] = mul i64 [[TMP5]], 8
+; CHECK-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 [[WIDE_TRIP_COUNT]])
; CHECK-NEXT: [[TMP7:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-NEXT: [[TMP8:%.*]] = mul i64 [[TMP7]], 8
; CHECK-NEXT: [[TMP9:%.*]] = sub i64 [[WIDE_TRIP_COUNT]], [[TMP8]]
; CHECK-NEXT: [[TMP10:%.*]] = icmp ugt i64 [[WIDE_TRIP_COUNT]], [[TMP8]]
; CHECK-NEXT: [[TMP11:%.*]] = select i1 [[TMP10]], i64 [[TMP9]], i64 0
-; CHECK-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 [[WIDE_TRIP_COUNT]])
; CHECK-NEXT: [[TMP12:%.*]] = call <vscale x 8 x i64> @llvm.experimental.stepvector.nxv8i64()
; CHECK-NEXT: [[TMP13:%.*]] = add <vscale x 8 x i64> [[TMP12]], zeroinitializer
; CHECK-NEXT: [[TMP14:%.*]] = mul <vscale x 8 x i64> [[TMP13]], shufflevector (<vscale x 8 x i64> insertelement (<vscale x 8 x i64> poison, i64 1, i64 0), <vscale x 8 x i64> poison, <vscale x 8 x i32> zeroinitializer)
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll b/llvm/test/Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll
index fc67fb5aded6a8..11bae5068a180f 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/scalable-strict-fadd.ll
@@ -138,12 +138,12 @@ define float @fadd_strict(ptr noalias nocapture readonly %a, i64 %n) #0 {
; CHECK-ORDERED-TF-NEXT: [[N_VEC:%.*]] = sub i64 [[N_RND_UP]], [[N_MOD_VF]]
; CHECK-ORDERED-TF-NEXT: [[TMP15:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-ORDERED-TF-NEXT: [[TMP16:%.*]] = mul i64 [[TMP15]], 8
+; CHECK-ORDERED-TF-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 [[N]])
; CHECK-ORDERED-TF-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-ORDERED-TF-NEXT: [[TMP6:%.*]] = mul i64 [[TMP5]], 8
; CHECK-ORDERED-TF-NEXT: [[TMP7:%.*]] = sub i64 [[N]], [[TMP6]]
; CHECK-ORDERED-TF-NEXT: [[TMP8:%.*]] = icmp ugt i64 [[N]], [[TMP6]]
; CHECK-ORDERED-TF-NEXT: [[TMP9:%.*]] = select i1 [[TMP8]], i64 [[TMP7]], i64 0
-; CHECK-ORDERED-TF-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 [[N]])
; CHECK-ORDERED-TF-NEXT: br label [[VECTOR_BODY:%.*]]
; CHECK-ORDERED-TF: vector.body:
; CHECK-ORDERED-TF-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
@@ -733,11 +733,11 @@ define void @fadd_strict_interleave(ptr noalias nocapture readonly %a, ptr noali
; CHECK-ORDERED-TF-NEXT: [[TMP22:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-ORDERED-TF-NEXT: [[TMP23:%.*]] = mul i64 [[TMP22]], 4
; CHECK-ORDERED-TF-NEXT: [[TMP8:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-ORDERED-TF-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 [[TMP2]])
; CHECK-ORDERED-TF-NEXT: [[TMP9:%.*]] = mul i64 [[TMP8]], 4
; CHECK-ORDERED-TF-NEXT: [[TMP10:%.*]] = sub i64 [[TMP2]], [[TMP9]]
; CHECK-ORDERED-TF-NEXT: [[TMP11:%.*]] = icmp ugt i64 [[TMP2]], [[TMP9]]
; CHECK-ORDERED-TF-NEXT: [[TMP12:%.*]] = select i1 [[TMP11]], i64 [[TMP10]], i64 0
-; CHECK-ORDERED-TF-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 [[TMP2]])
; CHECK-ORDERED-TF-NEXT: br label [[VECTOR_BODY:%.*]]
; CHECK-ORDERED-TF: vector.body:
; CHECK-ORDERED-TF-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
@@ -993,12 +993,12 @@ define float @fadd_of_sum(ptr noalias nocapture readonly %a, ptr noalias nocaptu
; CHECK-ORDERED-TF-NEXT: [[N_VEC:%.*]] = sub i64 [[N_RND_UP]], [[N_MOD_VF]]
; CHECK-ORDERED-TF-NEXT: [[TMP19:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-ORDERED-TF-NEXT: [[TMP20:%.*]] = mul i64 [[TMP19]], 4
+; CHECK-ORDERED-TF-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 [[N]])
; CHECK-ORDERED-TF-NEXT: [[TMP6:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-ORDERED-TF-NEXT: [[TMP7:%.*]] = mul i64 [[TMP6]], 4
; CHECK-ORDERED-TF-NEXT: [[TMP8:%.*]] = sub i64 [[N]], [[TMP7]]
; CHECK-ORDERED-TF-NEXT: [[TMP9:%.*]] = icmp ugt i64 [[N]], [[TMP7]]
; CHECK-ORDERED-TF-NEXT: [[TMP10:%.*]] = select i1 [[TMP9]], i64 [[TMP8]], i64 0
-; CHECK-ORDERED-TF-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 [[N]])
; CHECK-ORDERED-TF-NEXT: br label [[VECTOR_BODY:%.*]]
; CHECK-ORDERED-TF: vector.body:
; CHECK-ORDERED-TF-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
@@ -1234,12 +1234,12 @@ define float @fadd_conditional(ptr noalias nocapture readonly %a, ptr noalias no
; CHECK-ORDERED-TF-NEXT: [[N_VEC:%.*]] = sub i64 [[N_RND_UP]], [[N_MOD_VF]]
; CHECK-ORDERED-TF-NEXT: [[TMP22:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-ORDERED-TF-NEXT: [[TMP23:%.*]] = mul i64 [[TMP22]], 4
+; CHECK-ORDERED-TF-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 [[N]])
; CHECK-ORDERED-TF-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
; CHECK-ORDERED-TF-NEXT: [[TMP6:%.*]] = mul i64 [[TMP5]], 4
; CHECK-ORDERED-TF-NEXT: [[TMP7:%.*]] = sub i64 [[N]], [[TMP6]]
; CHECK-ORDERED-TF-NEXT: [[TMP8:%.*]] = icmp ugt i64 [[N]], [[TMP6]]
; CHECK-ORDERED-TF-NEXT: [[TMP9:%.*]] = select i1 [[TMP8]], i64 [[TMP7]], i64 0
-; CHECK-ORDERED-TF-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 [[N]])
; CHECK-ORDERED-TF-NEXT: br label [[VECTOR_BODY:%.*]]
...
[truncated]
I'm concerned that deferring this update until a later phase makes the VPlan ill-formed for transforms and the VPlan-based cost model. That is, the canonical IV can be seen as a dead recipe by transforms, i.e. it requires special handling everywhere and could break vendors' downstream code.
For the VPlan-based cost model, such an implicit representation will require extra code to properly model the cost (even though it's a simple 1 reg + 1 add).
What is the plan to handle these areas later?
Thank you very much for taking a look!
I don't think the form without the increment or branch-on-cond should be seen as ill-formed, just more abstract. The main idea is to gradually lower from a more abstract VPlan representation to a more concrete one that can be code-gen'd easily (as also done for replicate regions, which only get introduced later). One advantage of VPlan should be that it allows us to model more complex constructs directly, especially early on. At the early stages, various IV increments and how exactly the loop is exited do not really add any additional info (e.g. the canonical IV may or may not control exiting the loop).

IIRC when adding recipes to model the canonical IV and its increment, the reason the increment was modeled was to simplify code-gen, similar to how #82021 will simplify code-gen. At that time, VPlan was only used for code-gen, so it made sense then. Since then the infrastructure has evolved quite a bit, and relatively recently we started to use gradual lowering to introduce complexity as needed at later stages. IIUC the main benefit from #82021 is that it simplifies codegen, so I think it would make sense to consistently introduce those constructs in preparation for codegen; modeling the increments for all inductions early on would unnecessarily make the plans more verbose, while only being needed for codegen.

Just before codegen, if the increments are modeled explicitly, we could also lower to generic scalar or vector phi nodes, instead of having multiple recipes to generate phis.
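To make the gradual lowering concrete, here is a condensed sketch of the code the new transform adds for the plain (non-lane-mask) case, condensed from the prepareToExecute body in the diff above:

// Complete the abstract canonical IV: materialize its increment by VF * UF
// and wire the result up as the phi's backedge value.
auto *CanIV = Plan.getCanonicalIV();
VPBuilder Builder(Plan.getVectorLoopRegion()->getExitingBasicBlock());
DebugLoc DL = CanIV->getDebugLoc();
auto *CanonicalIVIncrement =
    Builder.createOverflowingOp(Instruction::Add, {CanIV, &Plan.getVFxUF()},
                                {HasNUW, false}, DL, "index.next");
CanIV->addOperand(CanonicalIVIncrement);
// Terminate the region: exit once the increment reaches the vector trip count.
Builder.createNaryOp(VPInstruction::BranchOnCount,
                     {CanonicalIVIncrement, &Plan.getVectorTripCount()}, DL);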
It depends: if we see CanonicalIVPHIRecipe and VPWidenIntOrFpInductionRecipe as complex recipes that take care of their increment as well, then computing the cost for both is natural. If the more verbose form is only introduced just before executing, there should be no need to account for the cost of the increment separately.
Interesting addition, indeed following the outlined roadmap. Adding a couple of thoughts.
!VPBB->getParent()->isReplicator())) {
if (!VPBB || !VPBB->getTerminator()) {
if (!IsAbstract && (!VPBB || !VPBB->getTerminator())) {
Perhaps rather than a global parameter, Is[RegionBackBranch]Abstract is better treated as a property of the parent region - whether it ends with an explicit terminating branch or not - the latter applying both to replicating regions and to non-replicating regions in an "abstract loop-control" state.
It applies per-region, but the global flag indicates whether it is required at this verification stage. Not sure if it would be worth tracking separately somewhere?
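For illustration, such a per-region property might look like the following hypothetical sketch (the name hasAbstractTerminator and its placement are assumptions, not part of this patch):

// Hypothetical query on VPRegionBlock: the region's loop control is
// "abstract" while its exiting block has no explicit terminating branch yet.
bool VPRegionBlock::hasAbstractTerminator() const {
  return !getExitingBasicBlock()->getTerminator();
}

The verifier could then relax its terminator checks for regions in this state instead of consulting a global flag.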
@@ -1307,3 +1268,71 @@ void VPlanTransforms::dropPoisonGeneratingRecipes(
}
}
}

void VPlanTransforms::prepareToExecute( |
Would be good to be more precise than "prepareToExecute()".
Changed to finalizePlan (with a comment), but there still might be a better name
// loop.
static void addCanonicalIVRecipes(VPlan &Plan, Type *IdxTy, bool HasNUW,
                                  DebugLoc DL) {
// Add the required canonical IV
Suggested change:
-// Add the required canonical IV
+// Add the required canonical IV.
Fixed, thanks!
static void addCanonicalIVRecipes(VPlan &Plan, Type *IdxTy, bool HasNUW,
                                  DebugLoc DL) {
// Add the required canonical IV
static void addCanonicalIV(VPlan &Plan, Type *IdxTy, bool HasNUW, DebugLoc DL) { |
Perhaps instead of creating the final original canonical-IV recipe, leaving it temporarily with only a single operand, it would be better to employ another (sub)class, having a single operand, which is later replaced by the more explicit CanonicalIVPHI?
I think eventually CanonicalIVPHI could itself become an 'abstract' recipe (terminology to be determined) that doesn't codegen itself. Instead of adding the increment as an operand in prepareToExecute, we could replace it with a generic scalar PHI recipe, which could take care of codegen for any scalar phis (including the EVL phi, among others).
Could introduce a temporary recipe, but once we are converging on the overall approach, I can also put up a follow-up PR to use the generic scalar PHI recipe.
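As a rough sketch of that direction (VPScalarPHIRecipe and its constructor are assumptions here, not part of this patch; the follow-up commits referenced at the bottom of this page introduce a similar VPScalarIVPHIRecipe):

// Hypothetical lowering: once the increment is materialized, replace the
// abstract canonical IV phi with a generic scalar phi recipe.
auto *ScalarPhi = new VPScalarPHIRecipe(CanIV->getStartValue(),
                                        CanonicalIVIncrement,
                                        CanIV->getDebugLoc(), "index");
ScalarPhi->insertBefore(CanIV);
CanIV->replaceAllUsesWith(ScalarPhi);
CanIV->eraseFromParent();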
ok, worth leaving behind a TODO?
Done, thanks!
Can you please share a VPlan design doc that has been discussed with the community? Perhaps I'm missing something; right now my view is that a VPlan is self-contained, i.e. it should be possible to get all information required for a transform without accessing the underlying IR (except for live-ins and live-outs), and at any point to generate code out of it without any extra manipulations of the VPlan.
Ignoring the cost-model part, for a VPlan that represents just one loop this certainly simplifies transforms. But if we consider future work where peel/remainder (prolog/epilog/tail/cleanup) loops are represented as well, isn't that going to complicate transforms (unrolling, interleaving), codegen, and the cost model? Technically that can be mitigated by https://reviews.llvm.org/D148581, but if we consider SEME loops, from my perspective, that becomes complicated too.
Eventually yes, to simplify codegen. But most importantly, for EVL-based vectorization that change helps to replace …
Right, and that's my point: now all places that need to be aware of the update will have extra logic to deal with it. Also, for the …
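As background, a minimal scalar model of the EVL-based stepping under discussion (plain C++; assumes llvm.experimental.get.vector.length returns min(VF, remaining), which targets may refine):

#include <algorithm>
#include <cstdint>

// Each vector iteration processes EVL = min(VF, TC - IV) elements, so the
// EVL-based IV ("index.evl.next" in the quoted diff further down) advances by
// EVL rather than by a fixed VF * UF.
uint64_t nextEVLIndex(uint64_t IV, uint64_t TC, uint64_t VF) {
  uint64_t EVL = std::min(VF, TC - IV);
  return IV + EVL;
}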
This patch adds a new VPInstruction::HeaderMask opcode to model the abstract header mask used for tail folding. It will be lowered depending on target preference, either using active-lane-mask, explicit-vector-length, or a wide compare of the canonical IV and the backedge-taken count.

Similarly to llvm#82270, it would be good to clarify/agree on the terminology w.r.t. recipes/opcodes that cannot be code-gen'd directly (i.e. require further gradual lowering).

NOTE: some tests are failing or need updating, due to widened IVs being replaced by scalar steps when their only use was the earlier wide compare. This could be fixed by adding a suitable wide canonical IV as operand to the header-mask recipe to exactly preserve the original behavior. Alternatively we could keep the current behavior of the patch and update the tests. Or introduce a wide induction PHI instead of VPWidenCanonicalIVReicpe; currently we *only* use a wide IV for VPWidenCanonicalIVRecipe if there was a suitable IV in the original loop, *even* if the mask compare is the *only* wide use. Either never or always using a wide PHI would be more consistent (or eventually make a more informed cost-based decision).
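A rough sketch of the wide-compare lowering named in that message (hypothetical: the builder calls and the HeaderMask/HeaderVPBB variables are assumptions based on the recipes the message mentions):

// Hypothetical lowering of the abstract header mask when neither
// active-lane-mask nor EVL is preferred: compare a widened canonical IV
// against the backedge-taken count.
auto *WideCanIV = new VPWidenCanonicalIVRecipe(Plan.getCanonicalIV());
HeaderVPBB->insert(WideCanIV, HeaderVPBB->getFirstNonPhi());
VPValue *Mask = Builder.createICmp(CmpInst::ICMP_ULE, WideCanIV,
                                   Plan.getOrCreateBackedgeTakenCount(), DL);
HeaderMask->replaceAllUsesWith(Mask);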
b2f8789 to 025037e
You can test this locally with the following command:

git-clang-format --diff a5a1612deb7af713835b5c8cf22105c5699bc62d 53b812d4f97265e8ce042c7abaa1e660ba5169f7 --extensions cpp,h -- llvm/lib/Transforms/Vectorize/LoopVectorize.cpp llvm/lib/Transforms/Vectorize/VPlan.cpp llvm/lib/Transforms/Vectorize/VPlan.h llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp llvm/lib/Transforms/Vectorize/VPlanTransforms.h llvm/lib/Transforms/Vectorize/VPlanUtils.cpp

View the diff from clang-format here:

diff --git a/llvm/lib/Transforms/Vectorize/VPlan.cpp b/llvm/lib/Transforms/Vectorize/VPlan.cpp
index 00ab2c1207..345781d2c9 100644
--- a/llvm/lib/Transforms/Vectorize/VPlan.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlan.cpp
@@ -20,9 +20,9 @@
#include "LoopVectorizationPlanner.h"
#include "VPlanCFG.h"
#include "VPlanPatternMatch.h"
-#include "VPlanVerifier.h"
#include "VPlanTransforms.h"
#include "VPlanUtils.h"
+#include "VPlanVerifier.h"
#include "llvm/ADT/PostOrderIterator.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
This patch delays adding canonical IV increments, computing the next active lane mask and the branch recipes to exit the vector loop. During initial construction of a VPlan, only add the canonical IV phi and active-lane-mask phis if needed. Similarly, do not add the branches to exit the loop initially. Computing the next IV value, the next active-lane-mask or the exit branches are details that are only needed to simplify codegen (execute of the individual recipes). This makes the initial VPlans more abstract (and simpler), in that we initially leave out some details. Initial VPlans still have the canonical induction recipe, which provides the value of the canonical induction at every iteration, but we leave out the detail of how it is computed, which is not needed until code generation. Similar reasoning applies to active lane mask increments and branch recipes. The vector loop region initially abstractly models the loop processing all vector iterations, without specifying how exactly exiting the region is handled. Again these details are only needed for code generation. To introduce the missing pieces and complete the abstract VPlan, a new prepareToExecute transform is added and run just before executing the VPlan. This is an attempt to further employ gradual lowering, as outlined in https://llvm.org/devmtg/2023-10/slides/techtalks/Hahn-VPlan-StatusUpdateAndRoadmap.pdf and already applied for replicate region handling.
025037e to 2ae1d69
Put up PRs to update some parts of the documentation (which started out as the original design docs from when VPlan was originally proposed) based on the recent Developers' Meeting talk: #85688, #85689. W.r.t. VPWidenIntOrFpInductionRecipe, the increment was intentionally not modeled explicitly, IIRC, to make use of VPlan's capabilities to model higher-level concepts directly, thus having simpler plans with more expressive recipes. The part about generating code at any point would get less true with that patch, but we already employ gradual lowering, so initial VPlans cannot be code-gen'd directly (e.g. predicated VPReplicateRecipes only get expanded to replicate regions later, which helps to simplify earlier transforms).
For some transformations it may be helpful to have the fully expanded form, but in that case it could be introduced when needed (e.g. as we already do with expanding replicate regions before merging them). I finally managed to wrap up the implementation of an explicit interleave transform on the current state: #94339. The current version adds a set of VPInstructions to compute the vector parts of inductions, which requires quite a bit of work to handle all cases. The expanded form is quite a bit more verbose, which I think is why it is desirable to keep things more abstract until needed. I think the expanded form early on would also force us to detect the induction increments (chained together across parts) and handle them explicitly. It might simplify things to further delay full expansion until after interleaving by introducing an intermediate opcode to compute the step vector for a given part.
VPWidenIntOrFpInductionRecipes are by definition tied to being an induction, which as a consequence also means their step is loop-invariant. Not modeling the increment explicitly has the benefit that no transform can change their increment to something that is not loop-invariant. To support wide inductions with EVL, I think it would be better to introduce a generic wide phi recipe (which doesn't have to be an induction) and lower to that plus an explicit increment. This can also be used to lower wide induction recipes for simpler code-gen late in the pipeline. Another thing to note is that at the moment VPWidenIntOrFpInductionRecipe will create a step vector per part, using the step vector of the last part as incoming value. There's a small benefit from explicitly adding the VF as operand to VPWidenIntOrFpInductionRecipe (avoiding some vscale * VF.getKnownMinValue() computations) as in #95305. But that wouldn't help with EVL, as EVL computations in the vector loop region won't dominate the wide IV recipes.
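In scalar terms, the per-part step vectors described above compute the following lane values (plain C++ model, assuming an integer induction with start S and loop-invariant step D):

#include <cstdint>
#include <vector>

// Lane L of part P of a widened induction holds S + (P * VF + L) * D; creating
// a step vector per part computes exactly these values, with part P's vector
// derived from part P-1's.
std::vector<int64_t> stepVectorForPart(int64_t S, int64_t D, uint64_t VF,
                                       uint64_t Part) {
  std::vector<int64_t> Lanes(VF);
  for (uint64_t L = 0; L < VF; ++L)
    Lanes[L] = S + int64_t(Part * VF + L) * D;
  return Lanes;
}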
Finally, another concrete example where abstract recipes can be used to simplify transformations is modeling the vector loop's header mask as a general recipe initially, to be specialized later once we decide how to model it concretely (e.g. EVL, active.lane.mask, wide IV compare): #89603
Worth starting with delaying the introduction of canonicalIVIncrement alone, while keeping a terminating branch (existing branch-on-count or introducing some other branch, temporarily)? This seems to provide substantial incremental improvement while incurring smaller changes, i.e., w/o affecting VPlanVerifier or introducing IsAbstract.
Delaying the introduction of the loop-region-closing branch may deserve separate attention.
static void addCanonicalIVRecipes(VPlan &Plan, Type *IdxTy, bool HasNUW,
                                  DebugLoc DL) {
// Add the required canonical IV.
static void addCanonicalIV(VPlan &Plan, Type *IdxTy, bool HasNUW, DebugLoc DL) { |
Suggested change:
-static void addCanonicalIV(VPlan &Plan, Type *IdxTy, bool HasNUW, DebugLoc DL) {
+static void addCanonicalIV(VPlan &Plan, Type *IdxTy, DebugLoc DL) {
Done thanks!
@@ -8555,7 +8551,7 @@ LoopVectorizationPlanner::tryToBuildVPlanWithVPRecipes(VFRange &Range) {
// When not folding the tail, we know that the induction increment will not
// overflow.
bool HasNUW = Style == TailFoldingStyle::None;
addCanonicalIVRecipes(*Plan, Legal->getWidestInductionType(), HasNUW, DL);
addCanonicalIV(*Plan, Legal->getWidestInductionType(), HasNUW, DL);
Suggested change:
-addCanonicalIV(*Plan, Legal->getWidestInductionType(), HasNUW, DL);
+addCanonicalIV(*Plan, Legal->getWidestInductionType(), DL);
along with the cleanup of the code above that sets HasNUW, although Style is still needed by addActiveLaneMask() below? Note that IVUpdateMayOverflow is now computed for BestVF rather than across all VFs in range when setting the canonical IV, but not for the active lane mask below? I.e., the patch is not NFC.
Cleaned up, thanks!
addCanonicalIVRecipes(*Plan, Legal->getWidestInductionType(), HasNUW,
                      DebugLoc());
assert(verifyVPlanIsValid(*Plan) && "VPlan is invalid");
addCanonicalIV(*Plan, Legal->getWidestInductionType(), HasNUW, DebugLoc());
Suggested change:
-addCanonicalIV(*Plan, Legal->getWidestInductionType(), HasNUW, DebugLoc());
+addCanonicalIV(*Plan, Legal->getWidestInductionType(), DebugLoc());
Setting HasNUW for native, along with its comment, should move to finalizeVPlan()?
Cleaned up as well, thanks!
@@ -8991,7 +8986,7 @@ void LoopVectorizationPlanner::adjustRecipesForReductions(
PreviousLink = RedRecipe;
}
}
Builder.setInsertPoint(&*LatchVPBB->begin());
Builder.setInsertPoint(LatchVPBB, LatchVPBB->begin());
Independent of this patch?
Undone in the latest version as it now still introduces the branch early.
; CHECK-NEXT: [[ACTIVE_LANE_MASK_ENTRY:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 [[UMAX]])
; CHECK-NEXT: [[ACTIVE_LANE_MASK_ENTRY3:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 [[INDEX_PART_NEXT]], i64 [[UMAX]])
; CHECK-NEXT: [[ACTIVE_LANE_MASK_ENTRY4:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 [[INDEX_PART_NEXT1]], i64 [[UMAX]])
; CHECK-NEXT: [[ACTIVE_LANE_MASK_ENTRY5:%.*]] = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 [[INDEX_PART_NEXT2]], i64 [[UMAX]])
Is this addition intentional?
CanonicalIVIncrement->hasNoSignedWrap()},
CanonicalIVIncrement->getDebugLoc(), "index.evl.next");
NextEVLIV->insertBefore(CanonicalIVIncrement);
new VPInstruction(Instruction::Add, {OpVPEVL, EVLPhi}, {false, false},
Wrap flags initialized to false may later be turned on?
Yes, at this point we don't know whether it wraps or not. Without #111758, the flags for the EVL increment would always be false I think, as it always folds the tail.
With it, we could update them when introducing the increment (currently not done).
@@ -1572,3 +1529,71 @@ void VPlanTransforms::dropPoisonGeneratingRecipes(
}
}
}

void VPlanTransforms::finalizePlan(VPlan &Plan, bool HasNUW, |
How about something like lowerCanonicalIV(), or materializeCanonicalIVIncrement(), rather than finalizePlan()?
Updated to lowerCanonicalIV for the initial version.
"active.lane.mask.next"); | ||
LaneMaskPhi->addOperand(ALM); | ||
|
||
// Replace the original terminator with BranchOnCond. We have to invert the |
This now introduces the terminator rather than replacing it.
Perhaps a terminal branch-on-count recipe should be introduced along with the abstract canonical IV (it could conceptually check the pre-bumped IV against TC - step), delaying only the introduction of the canonical IV's increment between them for later? The canonical IV then still remains abstract until this increment is added, but the VPlan continues to be "valid" w/o updating verify().
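In scalar terms, the suggested pre-bump check would be (a model of the concept only, not code from the patch):

#include <cstdint>

// Exit when the un-incremented IV has reached the last vector iteration,
// i.e. IV >= TC - Step, written as IV + Step >= TC to avoid underflow when
// TC < Step.
bool exitAfterThisIteration(uint64_t IV, uint64_t TC, uint64_t Step) {
  return IV + Step >= TC;
}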
Updated to initially still introduce the branch early. At the moment it still uses the same opcode, but would probably need to introduce a separate one for the different semantics (or define them conditionally on whether lowering has been finalized)
}
auto *LaneMaskPhi = cast<VPActiveLaneMaskPHIRecipe>(&*FoundLaneMaskPhi);
Suggested change:
 }
+// Now introduce a conditional branch to control the loop until the lane mask is exhausted.
 auto *LaneMaskPhi = cast<VPActiveLaneMaskPHIRecipe>(&*FoundLaneMaskPhi);
Done thanks!
; CHECK: VPlan 'Initial VPlan for VF={vscale x 2},UF>=1' {
; CHECK: VPlan 'Final VPlan for VF={vscale x 2},UF>=1' {
Is this change relevant to this patch? Better to separate it, to isolate test changes.
Worth starting with delaying the introduction of canonicalIVIncrement alone, while keeping a terminating branch (existing branch-on-count or introducing some other branch, temporarily)? This seems to provide substantial incremental improvement while incurring smaller changes, i.e., w/o affecting VPlanVerifier or introducing IsAbstract.
Delaying the introduction of the loop-region-closing branch may deserve separate attention.
Done in the updated version, thanks!
@@ -7428,6 +7428,14 @@ LoopVectorizationPlanner::executePlan(
"expanded SCEVs to reuse can only be used during epilogue vectorization");
(void)IsEpilogueVectorization;

TailFoldingStyle Style =
    CM.getTailFoldingStyle(!isIndvarOverflowCheckKnownFalse(&CM, BestVF));
Updated thanks!
TailFoldingStyle Style =
    CM.getTailFoldingStyle(!isIndvarOverflowCheckKnownFalse(&CM, BestVF));
// When not folding the tail, we know that the induction increment will not
// overflow.
The current code just moves the existing logic. Put up #111758 to improve this separately.
static void addCanonicalIVRecipes(VPlan &Plan, Type *IdxTy, bool HasNUW,
                                  DebugLoc DL) {
// Add the required canonical IV
static void addCanonicalIV(VPlan &Plan, Type *IdxTy, bool HasNUW, DebugLoc DL) {
Done, thanks!
static void addCanonicalIVRecipes(VPlan &Plan, Type *IdxTy, bool HasNUW,
                                  DebugLoc DL) {
// Add the required canonical IV.
static void addCanonicalIV(VPlan &Plan, Type *IdxTy, bool HasNUW, DebugLoc DL) {
Done thanks!
@@ -8555,7 +8551,7 @@ LoopVectorizationPlanner::tryToBuildVPlanWithVPRecipes(VFRange &Range) {
// When not folding the tail, we know that the induction increment will not
// overflow.
bool HasNUW = Style == TailFoldingStyle::None;
addCanonicalIVRecipes(*Plan, Legal->getWidestInductionType(), HasNUW, DL);
addCanonicalIV(*Plan, Legal->getWidestInductionType(), HasNUW, DL);
Cleaned up, thanks!
@@ -1235,16 +1235,10 @@ void VPlanTransforms::optimize(VPlan &Plan, ScalarEvolution &SE) {
static VPActiveLaneMaskPHIRecipe *addVPLaneMaskPhiAndUpdateExitBranch(
VPlan &Plan, bool DataAndControlFlowWithoutRuntimeCheck) {
Done, thanks!
addCanonicalIVRecipes(*Plan, Legal->getWidestInductionType(), HasNUW,
                      DebugLoc());
assert(verifyVPlanIsValid(*Plan) && "VPlan is invalid");
addCanonicalIV(*Plan, Legal->getWidestInductionType(), HasNUW, DebugLoc());
Cleaned up as well, thanks!
@@ -1572,3 +1529,71 @@ void VPlanTransforms::dropPoisonGeneratingRecipes(
}
}
}

void VPlanTransforms::finalizePlan(VPlan &Plan, bool HasNUW, |
Updated to lowerCanonicalIV for the initial version.
}
auto *LaneMaskPhi = cast<VPActiveLaneMaskPHIRecipe>(&*FoundLaneMaskPhi);
Done thanks!
"active.lane.mask.next"); | ||
LaneMaskPhi->addOperand(ALM); | ||
|
||
// Replace the original terminator with BranchOnCond. We have to invert the |
Updated to initially still introduce the branch early. At the moment it still uses the same opcode, but would probably need to introduce a separate one for the different semantics (or define them conditionally on whether lowering has been finalized)
Introduce a general recipe to generate a scalar phi. Lower VPCanonicalIVPHIRecipe and VPEVLBasedIVRecipe to VPScalarIVPHIrecipe before plan execution, avoiding the need for duplicated ::execute implementations. There are other cases that could benefit, including in-loop reduction phis. Builds on a similar idea as llvm#82270.