[webnn] Add float32 tests for WebNN API l2Pool2d op #44314
Conversation
LGTM, thank you! |
{
  "tests": [
    {
      "name": "l2Pool2d float32 4D constant tensor all positive default options",
all positive
Since each element is squared anyway, it doesn't really matter for this case whether there's a mix of positive and negative values. I'm not suggesting you change anything, just noting.
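To illustrate the point: L2 pooling over a window reduces to the Euclidean norm of the window's elements, so flipping signs in the input cannot change the result. A minimal sketch (the helper `l2PoolWindow` is hypothetical, not part of the WebNN API):

```javascript
// Hypothetical helper: L2 pooling of a single window is
// sqrt(sum(x_i^2)) — squaring discards the sign of each element.
function l2PoolWindow(window) {
  return Math.sqrt(window.reduce((acc, x) => acc + x * x, 0));
}

// Sign-flipped inputs produce the identical result:
console.log(l2PoolWindow([1, -2, 3]) === l2PoolWindow([-1, 2, -3])); // true
```

So an "all positive" test case exercises the same arithmetic as a mixed-sign one for this op.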
it doesn't really matter for this case whether there's a mix of positive and negative values
I remember you once shared with me the list of operations that should avoid catastrophic cancellation, which included this l2Pool2d.
      }
    },
    {
      "name": "l2Pool2d float32 4D tensor options.autoPad=same-lower ignores options.padding",
There was talk of removing automatic padding consistently for both convolution and pooling: webmachinelearning/webnn-baseline#68 (review)
I suppose we can keep these cases until a spec CR is proposed.
LGTM, thanks!
@fdwr, @patrto, and @Honry PTAL, thanks.