#15979: Switch to google benchmark for pgm dispatch tests #16547
Conversation
Clang-Tidy found issue(s) with the introduced code (1/1)
tests/tt_metal/tt_metal/perf_microbenchmark/dispatch/test_pgm_dispatch.cpp
I use filt_pgm_dispatch for the bw_and_latency test as well, so don't delete that.
I still want to dump these results to a spreadsheet; what does that flow look like?
Otherwise looks good.
Ok, added back filt_pgm_dispatch.pl. I've added a new json_to_csv.py that can be used to dump to a CSV (one line per test). The workflow is to run the benchmark with JSON output and then convert that file with json_to_csv.py.
The google benchmark framework also has a native way to output CSVs, but it's deprecated and requires some extra work. This patch also uploads the json file from the CI bots, so it's pretty easy to download that and convert it.
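For reference, a minimal sketch of that flow; `--benchmark_out` and `--benchmark_out_format` are standard Google Benchmark flags, while the binary name and json_to_csv.py's exact command line are assumptions here:

```sh
# Run the sweep and emit results as Google Benchmark JSON
# (binary name assumed for illustration).
./test_pgm_dispatch --benchmark_out=pgm_dispatch.json --benchmark_out_format=json

# Convert the JSON to a CSV with one line per test
# (json_to_csv.py's exact CLI is assumed).
python3 json_to_csv.py pgm_dispatch.json > pgm_dispatch.csv
```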
@TT-billteng LOOK SEE LOOK SEE
Ticket
#15979
Problem description
The current output format works well with a smaller number of tests, but with a large number of tests it's hard to connect the resulting number to the test that created it. This particularly hinders storing test results in a database and comparing them across runs.
What's changed
Switch to using the google benchmark framework, which outputs a JSON file containing the results of all tests. By default, running the binary runs all the benchmarks from sweep_pgm_dispatch.sh. The set of tests to be run can be filtered using the `--benchmark_filter=<regex>` command-line argument. One-off test cases can be run by passing `--custom` and the command-line arguments as before.
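For illustration, a sketch of these invocations, assuming the benchmark binary is named `test_pgm_dispatch` (`--benchmark_filter` is a standard Google Benchmark flag; the regex and legacy arguments are placeholders):

```sh
# Run the full sweep from sweep_pgm_dispatch.sh (the default behavior).
./test_pgm_dispatch

# Run only the benchmarks whose generated names match a regex.
./test_pgm_dispatch --benchmark_filter='<regex>'

# Run a one-off case, forwarding the pre-existing command-line arguments.
./test_pgm_dispatch --custom <arguments as before>
```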
Checklist