Fixing reduction ops #1673
Conversation
Just curious about the reasoning: why did you decide to make this change in TTIR instead of Forge? You had to change a bunch of code and a bunch of tests because of this attribute renaming (and missed some). If we are already renaming, I would choose
@mrakitaTT When I was talking with someone from Forge a couple of days ago, they told me they are trying to mirror torch as closely as possible with ops, so I decided not to do it in Forge. If we were to change it in Forge then we would have to either:
But regardless of the naming, we still have an issue with the attribute type, which is not 1:1 with what Forge can lower.
Personally, I thought the change was small and that it simplified things, like getting reduce dims. Unfortunately, with these kinds of changes, test changes are inevitable. Regarding the tests, I fixed that; I didn't have the StableHLO flag on...
Yeah, I thought this as well; not sure why the torch developers decided on that name.
Yeah, but this is not part of the Forge external interface (i.e. changing the Forge ops); this is part of the serialization from the Forge graph to the TTIR graph.
I am not too familiar with the Forge codebase, so please pardon my ignorance, but shouldn't this just be a simple serialization string change?

Edit: I've just checked the Forge repo and I see what you mean; Forge serialization is just matching attribute names from Forge.

I see this pattern often where we are focusing too much on Forge, but I would implore everyone to always keep in mind that there are also other frontends and dialects, and to check their definitions too when deciding on something like this. For example, we've named some op
I am not sure about this change either. Imagine if some other dialect used

For example, StableHLO uses the i64 type for dimensions, so we do this during the StableHLO->TTIR conversion:
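The conversion snippet itself was not captured in this thread; a minimal sketch of the narrowing being described, using illustrative names (srcOp, rewriter) rather than the exact tt-mlir code, and assuming srcOp.getDimensions() yields the i64 reduce dimensions:

// StableHLO reduce ops carry i64 dimensions, while the TTIR attribute is
// built from i32 values, so the conversion narrows each dimension.
llvm::SmallVector<int32_t> dims;
for (int64_t dim : srcOp.getDimensions()) {
  dims.push_back(static_cast<int32_t>(dim));
}
mlir::ArrayAttr dimArg = rewriter.getI32ArrayAttr(dims);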
Arguably we could've also changed TTIR to use i64 instead of i32, though I can't imagine a tensor with such a large rank ever existing :)
For this change I'm comfortable with changing the attribute name and a couple of tests. This is blocking
Well, this is a perfect example. If it has the same semantics as

If we want to standardize how we are adding an op in TTIR, I'm all for it (defining types/common names for attributes; I like this). But this would require some thought, which I don't see as P1 tbh. If you don't agree with the above, we have the Dialect team meeting on Tuesday and we can discuss this :)
SmallVector<int64_t> dims = op.getReduceDims();
SmallVector<int32_t> dims32(dims.begin(), dims.end());
auto dimArg =
    op.getReduceDims().empty() ? 0 : toFlatbuffer<int32_t>(cache, dims32);
Setting to 0 here seems like good bug potential; can we leave it unset in the flatbuffer? Looking at the method signature, it's optional, so I think we shouldn't assume a default value:
static Tensor invoke(
const Tensor& input_tensor_arg,
const std::optional<std::variant<int, ttnn::SmallVector<int>>>& dim_arg = std::nullopt,
const bool keepdim = true,
const std::optional<MemoryConfig>& memory_config_arg = std::nullopt,
const std::optional<DeviceComputeKernelConfig>& compute_kernel_config = std::nullopt,
float scalar = 1.0f);
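To spell out the bug potential, a hypothetical usage sketch, assuming this invoke is exposed as ttnn::sum and follows the usual reduction semantics where an unset dim reduces over all dimensions:

// These two calls are not equivalent, which is why defaulting to 0 is risky:
auto reduce_all  = ttnn::sum(input);                 // dim_arg = std::nullopt
auto reduce_dim0 = ttnn::sum(input, /*dim_arg=*/0);  // reduce along dim 0 only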
@@ -615,7 +615,7 @@ class TTIR_ReductionOp<string mnemonic, list<Trait> traits = []> :
   let arguments = (ins AnyRankedTensor:$input,
                        AnyRankedTensor:$output,
                        BoolAttr:$keep_dim,
-                       OptionalAttr<I32ArrayAttr>:$dim_arg);
+                       OptionalAttr<AnyAttrOf<[SI32Attr, I32ArrayAttr]>>:$dim);
Consider changing I32ArrayAttr to DenseI32ArrayAttr (the getter will give you llvm::ArrayRef, which is much nicer to use).
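For comparison, a hypothetical sketch of what unpacking each form looks like (op and getter names are illustrative):

// With OptionalAttr<I32ArrayAttr>, the ArrayAttr getter forces element-wise unpacking:
llvm::SmallVector<int32_t> dims;
for (mlir::Attribute attr : op.getDimAttr()) {
  dims.push_back(static_cast<int32_t>(mlir::cast<mlir::IntegerAttr>(attr).getInt()));
}
// A DenseI32ArrayAttr getter would instead hand back llvm::ArrayRef<int32_t> directly.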
Yeah, as mentioned offline. We cannot do it because Forge creates an ArrayAttr, which has a different serializer/deserializer than DenseArray.
@@ -624,6 +624,26 @@ class TTIR_ReductionOp<string mnemonic, list<Trait> traits = []> :

   void buildGenericRegion(::mlir::OpBuilder &opBuilder, ::mlir::Block* block);

+  SmallVector<int64_t> getReduceDims() {
+    mlir::Attribute reduceDimsAttr = getDim().value_or(mlir::Attribute{});
I get what you wanted to achieve here, but using std::optional<mlir::Attribute> reduceDimsAttr and '*' for getting the value when it exists seems like a more straightforward way of dealing with this.
if (!getDim()) {
  return {};
}
mlir::Attribute reduceDimsAttr = getDim().value();
SmallVector<int64_t> reduceDimsVec;
Something like this?
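A minimal sketch of how the full getter might look, combining that suggestion with the AnyAttrOf<[SI32Attr, I32ArrayAttr]> definition above (illustrative, not necessarily the exact PR code):

SmallVector<int64_t> getReduceDims() {
  if (!getDim()) {
    return {};
  }
  mlir::Attribute reduceDimsAttr = getDim().value();
  SmallVector<int64_t> reduceDims;
  // Single-dim form: $dim holds an SI32Attr.
  if (auto intAttr = mlir::dyn_cast<mlir::IntegerAttr>(reduceDimsAttr)) {
    reduceDims.push_back(intAttr.getInt());
    return reduceDims;
  }
  // Multi-dim form: $dim holds an I32ArrayAttr.
  for (mlir::Attribute attr : mlir::cast<mlir::ArrayAttr>(reduceDimsAttr)) {
    reduceDims.push_back(mlir::cast<mlir::IntegerAttr>(attr).getInt());
  }
  return reduceDims;
}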
I’m adding @nvukobratTT to this conversation.

@mtopalovicTT and @mrakitaTT, I've reviewed the changes once more, and I honestly don’t see any drawbacks now.

First, let's set aside all the different frontends and third-party users of the MLIR compiler. I think we can all agree that "dim_arg" is not an appropriate name for this parameter. With that in mind, we should find a more suitable name and agree on it, as it is commonly used in a lot of ML operations. Are we all in agreement on this?

If we all agree with the above, let's choose and standardize the naming convention for dimensions in the compiler. To be honest, whether we use "dim," "dims," or "dimensions" is largely a matter of personal preference. It can be challenging to select one term when different frontends define this argument in various ways, and there will always be some inherent biases. Therefore, while we haven't settled on a single term to use consistently, I don't think this should block this PR for now.
The name of the attribute itself is the least concerning to me. More concerning is changing the types, and in general the fact that we are making these changes in the compiler in order to accommodate one FE, while the changes themselves seem much more natural to make on the FE side.
Currently, all reduction ops lowered from Forge are failing. Reduction ops in TTIR/TTNN have an optional dim_arg attribute which can be used to specify the dimensions along which the reduction should be applied. Forge uses a dim attribute to specify reduction dims, so when lowered to TTIR it is completely ignored by our compiler.

This PR fixes the naming of this attribute and also extends the possible values for this attribute, from:

OptionalAttr<I32ArrayAttr> -> OptionalAttr<AnyAttrOf<[SI32Attr, I32ArrayAttr]>>
IR before change:
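The IR dump was not captured here; a hypothetical sketch of the old form, showing the array-typed dim_arg attribute (op name, shapes, and values are illustrative):

%1 = "ttir.sum"(%arg0, %0) <{dim_arg = [-1 : i32], keep_dim = true}> : (tensor<1x32x512xbf16>, tensor<1x32x1xbf16>) -> tensor<1x32x1xbf16>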
IR after change:
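Likewise hypothetical, showing the renamed dim attribute and the new single-dim SI32Attr form:

%1 = "ttir.sum"(%arg0, %0) <{dim = -1 : si32, keep_dim = true}> : (tensor<1x32x512xbf16>, tensor<1x32x1xbf16>) -> tensor<1x32x1xbf16>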
fixes #1574