[benchmarks] Add no-skip argument. #6484

Merged (1 commit) on Feb 7, 2024
benchmarks/experiment_runner.py (20 changes: 16 additions & 4 deletions)

@@ -111,10 +111,17 @@ def generate_and_run_all_configs(self):
         logger.info(f"SKIP already completed benchmark")
         continue

-      # Skip unsupported config.
-      if not self.model_loader.is_compatible(benchmark_model,
-                                             benchmark_experiment,
-                                             self._args.strict_compatible):
+      # Check if we should execute or skip the current configuration.
+      # A configuration SHOULD be skipped if and only if:
+      #
+      #   1. --no-skip was not specified; AND
+      #
+      #   2. the model is not compatible with the experiment configuration.
+      #
+      # Otherwise, we should go ahead and execute it.
+      if (not self._args.no_skip and not self.model_loader.is_compatible(
+          benchmark_model, benchmark_experiment,
+          self._args.strict_compatible)):
         logger.warning("SKIP incompatible model and experiment configs.")
         self._save_results(benchmark_experiment.to_dict(),
                            benchmark_model.to_dict(), {"error": "SKIP"})
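Restated as a standalone predicate (a minimal sketch for clarity, not code from the PR), the new condition covers four cases:

def should_skip(no_skip: bool, compatible: bool) -> bool:
  # Skip iff --no-skip was NOT given AND the config is incompatible.
  return not no_skip and not compatible


assert should_skip(no_skip=False, compatible=False)      # incompatible -> skip
assert not should_skip(no_skip=False, compatible=True)   # compatible -> execute
assert not should_skip(no_skip=True, compatible=False)   # --no-skip forces execution
assert not should_skip(no_skip=True, compatible=True)    # --no-skip, compatible -> execute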
Review thread on the new condition:

Collaborator: Don't we need to check for no_skip in line 99? Hoping to save some time by not loading a model we don't intend to run.

Author (Collaborator): I don't think we can do that. We still have to "load the model" in order to update the environment variables of the child process (which will actually execute the benchmark). That said, this loading won't take many cycles, since we call it with dummy=True: "instantiate the wrapper BenchmarkModel, but don't instantiate the actual torchbench model."
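To make the author's point concrete, here is a rough sketch of the pattern being described. Everything below (the simplified BenchmarkModel, extra_env_vars, run_single_config, and the child command) is hypothetical; it only illustrates why the wrapper must still be loaded even for skipped configs:

import os
import subprocess


class BenchmarkModel:
  """Hypothetical stand-in for the runner's model wrapper."""

  def __init__(self, model_name, dummy=False):
    self.model_name = model_name
    if not dummy:
      # Expensive path: instantiate the actual torchbench model.
      # With dummy=True this is skipped entirely.
      self.module = self._load_torchbench_model()

  def _load_torchbench_model(self):
    raise NotImplementedError("not needed for this sketch")

  def extra_env_vars(self):
    # Cheap metadata that the child process needs before it can run.
    return {"TORCHBENCH_MODEL": self.model_name}


def run_single_config(model_name):
  # dummy=True: build the wrapper (cheap) without the torchbench model.
  model = BenchmarkModel(model_name, dummy=True)
  env = {**os.environ, **model.extra_env_vars()}
  # The child process actually executes the benchmark.
  # Hypothetical child invocation; the runner's real flags differ.
  subprocess.run(["python", "child_runner.py", model_name], env=env)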

@@ -881,6 +888,11 @@ def __str__(self):
       action="store_true",
       help="Strictly skips some models, including models without an installation file or models that cause a stack dump.",
   )
+  parser.add_argument(
+      "--no-skip",
+      action="store_true",
+      help="Do not skip any model.",
+  )
   return parser.parse_args(args)
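As a quick check of the flag's behavior (a standalone snippet, not the runner's full parser): argparse converts the dash in "--no-skip" to an underscore, which is why the runner reads the flag as self._args.no_skip.

import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--no-skip",
    action="store_true",
    help="Do not skip any model.",
)

# argparse maps "--no-skip" to the attribute "no_skip".
assert parser.parse_args(["--no-skip"]).no_skip is True
assert parser.parse_args([]).no_skip is False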

