[FIRRTL] collection of changes to InferWidths #7686
Conversation
This helper was only used to add a specialization of DenseMapInfo, but we can pass a dedicated DenseMapInfo type to the map directly, without worrying about specialization.
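A minimal sketch of the pattern, not the actual InferWidths code: instead of opening namespace `llvm` to specialize `DenseMapInfo` for the key type, a standalone info struct is supplied as the map's third template parameter. `ExprKey`, `ExprKeyInfo`, and `ExprWidthMap` are hypothetical names used only for illustration.

```cpp
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/DenseMapInfo.h"
#include "llvm/ADT/Hashing.h"

struct ExprKey {
  unsigned kind;
  void *operand;
};

// Standalone key info: no specialization of llvm::DenseMapInfo required.
struct ExprKeyInfo {
  static ExprKey getEmptyKey() {
    return {~0U, llvm::DenseMapInfo<void *>::getEmptyKey()};
  }
  static ExprKey getTombstoneKey() {
    return {~0U - 1, llvm::DenseMapInfo<void *>::getTombstoneKey()};
  }
  static unsigned getHashValue(const ExprKey &key) {
    return static_cast<unsigned>(llvm::hash_combine(key.kind, key.operand));
  }
  static bool isEqual(const ExprKey &lhs, const ExprKey &rhs) {
    return lhs.kind == rhs.kind && lhs.operand == rhs.operand;
  }
};

// The custom info is named explicitly at the use site.
using ExprWidthMap = llvm::DenseMap<ExprKey, unsigned, ExprKeyInfo>;
```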
When all result widths of an operation are known, we do not have to create a constraint expression based on the operands; we can directly create a known-width expression. The check for known widths was a bit convoluted, performing checks inside a loop that did not need to be there. This change moves the functionality into a helper method for easier-to-understand control flow, and moves the mux operand check outside of the loop.
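A minimal sketch of the idea under hypothetical names (`ResultInfo`, `allResultWidthsKnown`): the "are all result widths known?" check is pulled into a helper so the caller can create a known-width expression up front instead of constraining the operands.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

struct ResultInfo {
  std::optional<int32_t> width; // unset while the width is still uninferred
};

// Hypothetical helper: keeps the caller's control flow flat by answering the
// "all results known?" question in one place.
static bool allResultWidthsKnown(const std::vector<ResultInfo> &results) {
  for (const ResultInfo &result : results)
    if (!result.width)
      return false;
  return true;
}
```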
We can use structured bindings with tuples to make the code more readable.
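A generic illustration of the readability win rather than the exact code in the pass: structured bindings replace `std::get<0>(...)` / `std::get<1>(...)` at each use.

```cpp
#include <string>
#include <tuple>
#include <vector>

// Sums port widths; `name` and `width` are bound directly from each tuple.
unsigned totalWidth(const std::vector<std::tuple<std::string, unsigned>> &ports) {
  unsigned total = 0;
  for (const auto &[name, width] : ports) {
    (void)name; // bound for clarity, unused in this toy example
    total += width;
  }
  return total;
}
```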
This expression holds a list of all other expressions, but it is actually entirely unused.
We would allocate a new work stack each time we solved a variable, and reserve a capacity of half the total number of expressions. The number of expressions in a circuit is massive, so this is not a good default. Instead, we allocate the work stack outside of the method and reuse the memory allocation for each solve. As an aside, this fixes the indentation for debug printing, which was broken in llvm#5305.
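A sketch of the allocation pattern described above, with illustrative names: the scratch work stack lives outside the solve routine, is cleared for each variable, and keeps its capacity between calls instead of being re-allocated every time.

```cpp
#include <vector>

struct Expr; // expressions are only handled through pointers here

struct Solver {
  // Lives for the whole pass; its capacity grows once and is then reused.
  std::vector<Expr *> workStack;

  void solve(Expr *var) {
    workStack.clear(); // keeps the old capacity, drops old contents
    workStack.push_back(var);
    while (!workStack.empty()) {
      Expr *expr = workStack.back();
      workStack.pop_back();
      // ... evaluate `expr`, pushing unsolved operands onto `workStack` ...
      (void)expr;
    }
  }
};
```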
This shrinks the size of the base Expr from 12 bytes to 8 bytes.
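Purely illustrative, not the actual layout change in the pass: one way a 12-byte node shrinks to 8 bytes is by packing a flag into spare bits of an existing word instead of paying for a separate padded field.

```cpp
#include <cstdint>

// Roughly 12 bytes: 4 (kind) + 4 (solution) + 1 (flag) + padding.
struct UnpackedExpr {
  uint32_t kind;
  int32_t solution;
  bool hasSolution;
};

// 8 bytes on common ABIs: the flag shares a word with the kind.
struct PackedExpr {
  uint32_t kind : 31;
  uint32_t hasSolution : 1;
  int32_t solution;
};

static_assert(sizeof(PackedExpr) == 8, "expected packed layout on common ABIs");
```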
The KnownExpr always has a known width, but it stores this width using the Expr base class' memoized solution field. To make things a bit clearer, add an accessor for this value that doesn't return an optional width.
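A sketch with hypothetical signatures of the accessor described above: the base class keeps a memoized optional solution, while KnownExpr exposes the value directly since it is guaranteed to be present.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

struct Expr {
  std::optional<int32_t> solution; // memoized solution; may be unset
};

struct KnownExpr : Expr {
  explicit KnownExpr(int32_t width) { solution = width; }

  // Non-optional accessor: a KnownExpr always carries its width.
  int32_t getValue() const {
    assert(solution && "KnownExpr must always have a solution");
    return *solution;
  }
};
```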
We are currently storing all allocated expressions in a single giant vector, which we later scan to find VarExprs and DerivedExprs. This changes it to keep two separate dedicated vectors that only store the kinds of expressions we are interested in. It would be nice if we could iterate over all objects in a SpecificBumpPtrAllocator, but it does not expose such an API.
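A rough sketch of the storage arrangement, with illustrative types: each kind of expression that needs to be revisited gets its own side vector, since the bump allocators themselves cannot be iterated.

```cpp
#include "llvm/Support/Allocator.h"
#include <new>
#include <vector>

struct VarExpr {};
struct DerivedExpr {};

struct ConstraintSolver {
  llvm::SpecificBumpPtrAllocator<VarExpr> varAllocator;
  llvm::SpecificBumpPtrAllocator<DerivedExpr> derivedAllocator;

  // Dedicated side tables for the kinds of expressions we later look up.
  std::vector<VarExpr *> vars;
  std::vector<DerivedExpr *> derivedExprs;

  VarExpr *createVar() {
    VarExpr *var = new (varAllocator.Allocate()) VarExpr();
    vars.push_back(var);
    return var;
  }

  DerivedExpr *createDerived() {
    DerivedExpr *expr = new (derivedAllocator.Allocate()) DerivedExpr();
    derivedExprs.push_back(expr);
    return expr;
  }
};
```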
This changes InferWidths to stop eagerly creating KnownExprs for every operation with a known-width result, and to instead create them only when they are needed for an actual expression.
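A hedged sketch of the lazy-creation idea, using hypothetical types: a KnownExpr for a given width is only materialized (and uniqued) the first time a constraint needs it, rather than once per known-width operation up front.

```cpp
#include "llvm/ADT/DenseMap.h"
#include "llvm/Support/Allocator.h"
#include <cstdint>
#include <new>

struct KnownExpr {
  int32_t width;
};

struct Solver {
  llvm::SpecificBumpPtrAllocator<KnownExpr> allocator;
  llvm::DenseMap<int32_t, KnownExpr *> knownExprs; // uniqued by width

  KnownExpr *getKnown(int32_t width) {
    KnownExpr *&slot = knownExprs[width];
    if (!slot)
      slot = new (allocator.Allocate()) KnownExpr{width}; // created on first use
    return slot;
  }
};
```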
The width has just been checked to be uninferred, so there is no point in checking it again.
UnifyTypes works by setting the LHS's expression to point at the RHS's. Running declareVars on the LHS before unifying the expressions is pointless, as these VarExprs will just end up dangling. This change stops creating the useless VarExprs.
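An illustrative sketch of the unification direction described above, with hypothetical names: unifying simply points the LHS at the RHS expression, so any VarExpr freshly declared for the LHS beforehand would never be constrained or solved.

```cpp
struct Expr {};

struct VarExpr : Expr {
  Expr *constraint = nullptr; // what this variable resolves to, once unified
};

// unifyTypes-style step: after this, the LHS resolves to whatever the RHS
// resolves to, so earlier declareVars output for the LHS would just dangle.
static void unify(VarExpr *lhs, Expr *rhs) { lhs->constraint = rhs; }
```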
We used to duplicate InvalidValues to their uses so that each use could be inferred separately. This was because InvalidValues were considered constants, and things like CSE might combine many unknown-width invalid values, so we wanted to counteract that effect. InvalidValues are no longer pure and will not be CSE'd away this way anymore, so this code is no longer needed.
auto width = cast<FIRRTLBaseType>(type).getBitWidthOrSentinel();
if (width < 0)
  return nullptr;
This is really smart!
LGTM, thank you for cleaning up a lot of stuff. I think the lazy creation of KnownExprOp is definitely safe and a great improvement. Removing the duplication of InvalidValueOp would be a breaking change but seems more reasonable.
I checked the diff on one of the largest cores and there was no change.
Awesome work, thanks!!!
This PR has a collection of mostly non-functional changes to InferWidths, with the aim of cleaning up some of the code and improving performance.