
Allow benchmark runner config: openxla for inference runs. #5939

Merged (3 commits, Dec 2, 2023)

Conversation

ysiraichi (Collaborator):

This PR modifies the benchmarking script, enabling openxla dynamo backend for inference runs.
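For context, the runner takes the options given for each flag and expands them into a cross product of candidate configurations, which are then filtered down to the supported ones. A minimal sketch of the expansion step (the option lists and function name here are illustrative, not the actual `experiment_runner.py` internals):

```python
import itertools

# Illustrative option lists; the real flags are --dynamo, --xla, and --test.
dynamo_options = [None, "openxla", "openxla_eval"]
xla_options = [None, "PJRT"]
test_options = ["train", "eval"]

def expand_configs(dynamo_opts, xla_opts, test_opts):
    """Cross product of every flag value, as the runner does before filtering."""
    return [
        {"dynamo": d, "xla": x, "test": t}
        for d, x, t in itertools.product(dynamo_opts, xla_opts, test_opts)
    ]

configs = expand_configs(dynamo_options, xla_options, test_options)
print(len(configs))  # 3 * 2 * 2 = 12 candidates before filtering
```

The new combination this PR enables, `dynamo=openxla` with `test=eval`, is one of these candidates; the filtering step decides which candidates actually run.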

ysiraichi (Collaborator, Author):

@JackCaoG @miladm

frgossen (Collaborator) left a comment:

Nice. Can you update the README accordingly?

ysiraichi (Collaborator, Author):

@frgossen Let me know if that is good enough.

frgossen (Collaborator) left a comment:

Thank you! Needs one fix in the README.

```diff
@@ -32,9 +32,10 @@ python xla/benchmarks/experiment_runner.py \
 You can change the flags to add the configurations you are interested in. The
 `experiment_runner.py` will expand the options to all supported configurations.
 For example, in the case above, it will consider all the possible combinations
-among the flags `--dynamo`, `--xla`, and `--test`, 4 of which are supported:
+among the flags `--dynamo`, `--xla`, and `--test`, 5 of which are supported:
 
 - `dynamo=openxla_eval`, `xla=PJRT`, `test=eval`
```
frgossen (Collaborator):

Think you have to remove this line. It will still be 4 configs overall.

ysiraichi (Collaborator, Author):

Not sure I get why that is the case. Isn't the following combination supported in addition to the ones already listed in the README?

- `dynamo=openxla`, `xla=PJRT`, `test=eval`

frgossen (Collaborator):

Maybe I misunderstand this change then. I thought you were getting rid of openxla_eval in favor of openxla. My thought was we'd have to remove dynamo=openxla_eval here.

Comment on lines 88 to 90:

```python
if experiment_config["dynamo"] == "openxla_eval" and not (
    experiment_config["xla"] and experiment_config["test"] == "eval"):
  return False
```
ysiraichi (Collaborator, Author):

I think I need to revert this deletion. openxla_eval is only supposed to work with the eval test.
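The rule being restored can be sketched as a standalone predicate (the function name here is hypothetical; the real check lives inside the runner's config-filtering code). With the deletion reverted, `openxla_eval` is rejected unless paired with XLA and the eval test, while plain `openxla` passes through for inference runs:

```python
def is_config_supported(experiment_config):
    # openxla_eval requires both XLA and the eval test; reject any other pairing.
    if experiment_config["dynamo"] == "openxla_eval" and not (
            experiment_config["xla"] and experiment_config["test"] == "eval"):
        return False
    return True

print(is_config_supported({"dynamo": "openxla_eval", "xla": "PJRT", "test": "eval"}))  # True
print(is_config_supported({"dynamo": "openxla_eval", "xla": None, "test": "train"}))   # False
print(is_config_supported({"dynamo": "openxla", "xla": "PJRT", "test": "eval"}))       # True
```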

vanbasten23 (Collaborator) left a comment:

LGTM

@ysiraichi ysiraichi merged commit dcdd66e into pytorch:master Dec 2, 2023
18 checks passed
frgossen added a commit to frgossen/pytorch-xla that referenced this pull request Dec 11, 2023
chunnienc pushed a commit to chunnienc/xla that referenced this pull request Dec 14, 2023
…#5939)

* Allow openxla for eval.

* Update readme.

* Revert `openxla_eval` rule.
golechwierowicz pushed a commit that referenced this pull request Jan 12, 2024
bhavya01 pushed a commit that referenced this pull request Apr 22, 2024