Use PyTorch's dynamo benchmark skip-list. #6416
Conversation
if name in self.skip["multiprocess"]:
  # No support for multiprocess, yet. So, skip all benchmarks that
  # only work with it.
  return False
I wonder if we should include an "INFO" message when skipping models, so it's clear in the logs - unless it gets reported elsewhere. wdyt @ysiraichi @zpcore?
The scripts already report when a model is skipped. However, the reason is not logged. Do you think we should log that, too?
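To illustrate the suggestion, here is a minimal sketch of what logging the skip reason could look like. The function name, the `skip` dictionary layout, and the log message are assumptions based on the diff fragment above, not the PR's actual implementation:

```python
import logging

logger = logging.getLogger(__name__)


# Hypothetical sketch: report the reason at INFO level whenever a
# benchmark is skipped, so the reason shows up in the run logs.
def is_compatible(name: str, skip: dict) -> bool:
    if name in skip.get("multiprocess", set()):
        logger.info("Skipping %s: no support for multiprocess, yet.", name)
        return False
    return True
```

With `logging.basicConfig(level=logging.INFO)` configured at startup, each skipped model would then appear in the logs alongside its reason.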
    its lists of models into sets of models.
    """

benchmarks_dir = self._find_near_file(("benchmarks",))
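The docstring fragment above mentions converting the skip-list's lists of models into sets. A minimal sketch of such a conversion (the function name and nesting layout are assumptions; the actual YAML schema is not shown in this excerpt):

```python
# Hypothetical sketch: convert every list of model names in the parsed
# skip-list into a set, for O(1) membership checks when skipping models.
def lists_to_sets(skip: dict) -> dict:
    converted = {}
    for key, value in skip.items():
        if isinstance(value, list):
            converted[key] = set(value)
        elif isinstance(value, dict):
            # Recurse into nested groups of the skip-list.
            converted[key] = lists_to_sets(value)
        else:
            converted[key] = value
    return converted
```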
It turns out pytorch/benchmarks and xla/benchmarks share the same directory name, benchmarks. So, when we search for the yaml file, the nearest match will be xla/benchmarks.
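To make the collision concrete, here is a sketch of a nearest-file search of the kind `_find_near_file` suggests: walk up from a starting directory and return the first existing path whose basename matches. The function name and signature are assumptions; the point is that a search started inside xla/ finds xla/benchmarks before pytorch/benchmarks:

```python
import os


# Hypothetical sketch of a nearest-file search: walk upward from
# start_dir and return the first existing path named in `names`.
def find_near_file(start_dir, names):
    current = os.path.abspath(start_dir)
    while True:
        for name in names:
            candidate = os.path.join(current, name)
            if os.path.exists(candidate):
                return candidate
        parent = os.path.dirname(current)
        if parent == current:
            # Reached the filesystem root without a match.
            return None
        current = parent
```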
Ah, I see.
This PR starts using PyTorch's dynamo benchmark skip-list for skipping specific models, based on the experiments' configuration. Besides that, it also touches:
- `is_compatible` function
- `add_torchbench_dir` function

cc @miladm