JIT: Mark swift error as busy before call definition RefPosition (#101792)

The RefPosition we were inserting here came too late to actually
protect the call definition from being allocated into the error
register.

Instead, we can simply mark the existing `RefTypeFixedReg` RefPosition created
for the argument use as delay-freed, which has the intended effect of
keeping the error register busy until after the call definition.
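
As a rough before/after sketch of the idea (editorial illustration only, not part of the commit), using only the identifiers that appear in the diffs below and assuming the surrounding BuildCall context where REG_SWIFT_ERROR, RBM_SWIFT_ERROR, and currentLoc are in scope:

// Old approach (removed below): build a brand-new RefTypeFixedReg RefPosition at
// currentLoc + 1. That location is after the call's definition RefPosition, so it
// cannot stop LSRA from allocating the call's result into REG_SWIFT_ERROR.
RefPosition* pos = newRefPosition(REG_SWIFT_ERROR, currentLoc + 1, RefTypeFixedReg, call, RBM_SWIFT_ERROR);
setDelayFree(pos);

// New approach (added below as MarkSwiftErrorBusyForCall): reuse the RefTypeFixedReg
// RefPosition already created for the "zero the error register" argument use, which
// sits at the call's own location, and mark it delay-freed so REG_SWIFT_ERROR stays
// busy across the call definition.
RegRecord* swiftErrorRegRecord = getRegisterRecord(REG_SWIFT_ERROR);
setDelayFree(swiftErrorRegRecord->lastRefPosition);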
jakobbotsch authored May 3, 2024
1 parent e965312 commit 15e98e5
Showing 4 changed files with 36 additions and 34 deletions.
1 change: 1 addition & 0 deletions src/coreclr/jit/lsra.h
@@ -2062,6 +2062,7 @@ class LinearScan : public LinearScanInterface
 #endif
     int BuildPutArgReg(GenTreeUnOp* node);
     int BuildCall(GenTreeCall* call);
+    void MarkSwiftErrorBusyForCall(GenTreeCall* call);
     int BuildCmp(GenTree* tree);
     int BuildCmpOperands(GenTree* tree);
     int BuildBlockStore(GenTreeBlk* blkNode);
18 changes: 1 addition & 17 deletions src/coreclr/jit/lsraarmarch.cpp
@@ -404,23 +404,7 @@ int LinearScan::BuildCall(GenTreeCall* call)
 #ifdef SWIFT_SUPPORT
     if (call->HasSwiftErrorHandling())
     {
-        // Tree is a Swift call with error handling; error register should have been killed
-        assert((killMask & RBM_SWIFT_ERROR) != 0);
-
-        // After a Swift call that might throw returns, we expect the error register to be consumed
-        // by a GT_SWIFT_ERROR node. However, we want to ensure the error register won't be trashed
-        // before GT_SWIFT_ERROR can consume it.
-        // (For example, the PInvoke epilog comes before the error register store.)
-        // To do so, delay the freeing of the error register until the next node.
-        // This only works if the next node after the call is the GT_SWIFT_ERROR node.
-        // (InsertPInvokeCallEpilog should have moved the GT_SWIFT_ERROR node during lowering.)
-        assert(call->gtNext != nullptr);
-        assert(call->gtNext->OperIs(GT_SWIFT_ERROR));
-
-        // We could use RefTypeKill, but RefTypeFixedReg is used less commonly, so the check for delayRegFree
-        // during register allocation should be cheaper in terms of TP.
-        RefPosition* pos = newRefPosition(REG_SWIFT_ERROR, currentLoc + 1, RefTypeFixedReg, call, RBM_SWIFT_ERROR);
-        setDelayFree(pos);
+        MarkSwiftErrorBusyForCall(call);
     }
 #endif // SWIFT_SUPPORT

33 changes: 33 additions & 0 deletions src/coreclr/jit/lsrabuild.cpp
@@ -4479,3 +4479,36 @@ int LinearScan::BuildCmpOperands(GenTree* tree)
     srcCount += BuildOperandUses(op2, op2Candidates);
     return srcCount;
 }
+
+#ifdef SWIFT_SUPPORT
+//------------------------------------------------------------------------
+// MarkSwiftErrorBusyForCall: Given a call set the appropriate RefTypeFixedReg
+// RefPosition for the Swift error register as delay free to ensure the error
+// register does not get allocated by LSRA before it has been consumed.
+//
+// Arguments:
+//    call - The call node
+//
+void LinearScan::MarkSwiftErrorBusyForCall(GenTreeCall* call)
+{
+    assert(call->HasSwiftErrorHandling());
+    // After a Swift call that might throw returns, we expect the error register to be consumed
+    // by a GT_SWIFT_ERROR node. However, we want to ensure the error register won't be trashed
+    // before GT_SWIFT_ERROR can consume it.
+    // (For example, by LSRA allocating the call's result to the same register.)
+    // To do so, delay the freeing of the error register until the next node.
+    // This only works if the next node after the call is the GT_SWIFT_ERROR node.
+    // (LowerNonvirtPinvokeCall should have moved the GT_SWIFT_ERROR node.)
+    assert(call->gtNext != nullptr);
+    assert(call->gtNext->OperIs(GT_SWIFT_ERROR));
+
+    // Conveniently we model the zeroing of the register as a non-standard constant zero argument,
+    // which will have created a RefPosition corresponding to the use of the error at the location
+    // of the uses. Marking this RefPosition as delay freed has the effect of keeping the register
+    // busy at the location of the definition of the call.
+    RegRecord* swiftErrorRegRecord = getRegisterRecord(REG_SWIFT_ERROR);
+    assert((swiftErrorRegRecord != nullptr) && (swiftErrorRegRecord->lastRefPosition != nullptr) &&
+           (swiftErrorRegRecord->lastRefPosition->nodeLocation == currentLoc));
+    setDelayFree(swiftErrorRegRecord->lastRefPosition);
+}
+#endif
18 changes: 1 addition & 17 deletions src/coreclr/jit/lsraxarch.cpp
@@ -1380,23 +1380,7 @@ int LinearScan::BuildCall(GenTreeCall* call)
 #ifdef SWIFT_SUPPORT
     if (call->HasSwiftErrorHandling())
     {
-        // Tree is a Swift call with error handling; error register should have been killed
-        assert((killMask & RBM_SWIFT_ERROR) != 0);
-
-        // After a Swift call that might throw returns, we expect the error register to be consumed
-        // by a GT_SWIFT_ERROR node. However, we want to ensure the error register won't be trashed
-        // before GT_SWIFT_ERROR can consume it.
-        // (For example, the PInvoke epilog comes before the error register store.)
-        // To do so, delay the freeing of the error register until the next node.
-        // This only works if the next node after the call is the GT_SWIFT_ERROR node.
-        // (InsertPInvokeCallEpilog should have moved the GT_SWIFT_ERROR node during lowering.)
-        assert(call->gtNext != nullptr);
-        assert(call->gtNext->OperIs(GT_SWIFT_ERROR));
-
-        // We could use RefTypeKill, but RefTypeFixedReg is used less commonly, so the check for delayRegFree
-        // during register allocation should be cheaper in terms of TP.
-        RefPosition* pos = newRefPosition(REG_SWIFT_ERROR, currentLoc + 1, RefTypeFixedReg, call, RBM_SWIFT_ERROR);
-        setDelayFree(pos);
+        MarkSwiftErrorBusyForCall(call);
     }
 #endif // SWIFT_SUPPORT

