
[hexagon][testing] refactor benchmark-table code #11400

Merged May 26, 2022 (1 commit)

Conversation

@cconvey (Contributor) commented May 20, 2022

Generalize the benchmark-table code to support arbitrary
independent values. This supports future changes to the benchmark
code.

cc @mehrdadh
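
For illustration, a call site under the generalized design might look like the following sketch; record_success and the parameter names here are hypothetical, loosely based on the table columns shown in the comments below, not the actual API added by this PR:

table.record_success(
    dtype="int8",                  # independent values are chosen by the
    sched_type=1,                  # benchmark itself, not hard-coded into
    mem_scope="global.vtcm",       # the table implementation
    num_2kb_vectors_per_tensor=64,
    median_usec=1.2,
)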

@cconvey (Contributor, Author) commented May 20, 2022

CC: @Lunderberg

@cconvey (Contributor, Author) commented May 20, 2022

Output is basically unchanged from the previous table implementation:

dtype   sched_type      mem_scope       # 2KB vectors per tensor        status  median(µsec)    min(µsec)       max(µsec)       comments
int8    1       global  1       SUCCESS 0.600   0.600   0.600
int8    1       global  16      SUCCESS 0.700   0.700   0.700
int8    1       global  64      SUCCESS 0.900   0.900   0.900
int8    1       global  512     SUCCESS 2.900   2.900   2.900
int8    1       global  2048    SUCCESS 14.400  14.400  14.400
int8    1       global.vtcm     1       SUCCESS 0.600   0.600   0.600
int8    1       global.vtcm     16      SUCCESS 0.600   0.600   0.600
int8    1       global.vtcm     64      SUCCESS 1.200   1.200   1.200
int8    1       global.vtcm     512     SUCCESS 6.200   6.200   6.200
int8    1       global.vtcm     2048    SKIP                            Expect to exceed VTCM budget.
int8    2       global  1       SUCCESS 0.600   0.600   0.600
int8    2       global  16      SUCCESS 0.500   0.500   0.500
int8    2       global  64      SUCCESS 0.900   0.900   0.900
int8    2       global  512     SUCCESS 2.300   2.300   2.300
int8    2       global  2048    SUCCESS 14.400  14.400  14.400
int8    2       global.vtcm     1       SUCCESS 0.600   0.600   0.600
int8    2       global.vtcm     16      SUCCESS 1.000   1.000   1.000
int8    2       global.vtcm     64      SUCCESS 1.400   1.400   1.400
int8    2       global.vtcm     512     SUCCESS 4.800   4.800   4.800
int8    2       global.vtcm     2048    SKIP                            Expect to exceed VTCM budget.

@cconvey (Contributor, Author) commented May 20, 2022

Here's a cleaned-up version of that output:

$ cat out.txt | column -s $'\t' -t -n
dtype  sched_type  mem_scope    # 2KB vectors per tensor  status   median(µsec)  min(µsec)  max(µsec)  comments
int8   1           global       1                         SUCCESS  0.400         0.400      0.400      
int8   1           global       16                        SUCCESS  0.600         0.600      0.600      
int8   1           global       64                        SUCCESS  0.700         0.700      0.700      
int8   1           global       512                       SUCCESS  2.400         2.400      2.400      
int8   1           global       2048                      SUCCESS  18.300        18.300     18.300     
int8   1           global.vtcm  1                         SUCCESS  0.800         0.800      0.800      
int8   1           global.vtcm  16                        SUCCESS  0.900         0.900      0.900      
int8   1           global.vtcm  64                        SUCCESS  1.500         1.500      1.500      
int8   1           global.vtcm  512                       SUCCESS  6.300         6.300      6.300      
int8   1           global.vtcm  2048                      SKIP                                         Expect to exceed VTCM budget.
int8   2           global       1                         SUCCESS  0.700         0.700      0.700      
int8   2           global       16                        SUCCESS  0.700         0.700      0.700      
int8   2           global       64                        SUCCESS  0.700         0.700      0.700      
int8   2           global       512                       SUCCESS  2.700         2.700      2.700      
int8   2           global       2048                      SUCCESS  21.300        21.300     21.300     
int8   2           global.vtcm  1                         SUCCESS  0.600         0.600      0.600      
int8   2           global.vtcm  16                        SUCCESS  0.600         0.600      0.600      
int8   2           global.vtcm  64                        SUCCESS  1.200         1.200      1.200      
int8   2           global.vtcm  512                       SUCCESS  6.300         6.300      6.300      
int8   2           global.vtcm  2048                      SKIP                                         Expect to exceed VTCM budget.

@cconvey (Contributor, Author) commented May 23, 2022

Note: this PR also lays some groundwork for providing this as a test fixture, if we eventually want to do that.

@Lunderberg (Contributor) left a comment

I like the extraction of the benchmark generation into a separate utility. I have a couple of questions on the design:

  • Do we want the CSV names to be different from the names used in the code?
  • Should the user be required to provide values for all columns, regardless of success/skip/failure?

If we can answer no to both of those questions, we have the potential for a cleaner interface.

  • New columns are implicitly defined by kwargs passed to the record_* methods. If a column value isn't passed, the CSV has an empty cell in that column.
  • Column names are optional in __init__, but can still be passed. (e.g. To ensure that an "Error" column is always present, even if all benchmarks pass. Or to define the order of columns)

The downside of this approach would be decreased protection against typos in user-defined columns. Given that this is intended to create CSVs as part of ongoing optimization, and would quickly be seen by human eyes, I think that is worth the increased flexibility.
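
For concreteness, here is a minimal sketch of the kwargs-based interface described above; every name in it (BenchmarksTable, record_success, record_skip, write_csv) is a hypothetical illustration, not the API actually added by this PR:

import csv


class BenchmarksTable:
    def __init__(self, column_names=None):
        # Optionally pre-declare columns, e.g. to guarantee an "Error" column
        # is always present, or to fix the column order.
        self._column_names = list(column_names) if column_names else []
        self._rows = []

    def _record(self, status, **kwargs):
        # New columns are implicitly defined by whatever kwargs were passed.
        row = dict(kwargs, status=status)
        for name in row:
            if name not in self._column_names:
                self._column_names.append(name)
        self._rows.append(row)

    def record_success(self, **kwargs):
        self._record("SUCCESS", **kwargs)

    def record_skip(self, **kwargs):
        self._record("SKIP", **kwargs)

    def write_csv(self, f):
        # restval="" leaves an empty cell for any column a row didn't set.
        writer = csv.DictWriter(f, fieldnames=self._column_names, restval="")
        writer.writeheader()
        writer.writerows(self._rows)

With this shape, a skipped benchmark can omit the timing columns entirely, e.g. table.record_skip(dtype="int8", mem_scope="global.vtcm", comments="Expect to exceed VTCM budget.").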

9 review comments on tests/python/contrib/test_hexagon/benchmarks_table.py (all outdated and resolved)

@Lunderberg (Contributor) left a comment

LGTM! There are some assorted nitpicks, but no blockers.

5 review comments on tests/python/contrib/test_hexagon/benchmark_util.py (all outdated and resolved)
@cconvey force-pushed the bench-table branch 2 times, most recently from f523fac to 8c7f5b2 on May 25, 2022 21:01, with the commit message:

Generalize the benchmark-table code to support arbitrary
independent values. This supports future changes to the benchmark
code.
@mehrdadh (Member) commented May 25, 2022

@cconvey do we want tests/python/contrib/test_hexagon/benchmark_hexagon.py to be tested in the CI? If that's the case, you need to rename the file to test_benchmark.py or something starting with test_

@cconvey (Contributor, Author) commented May 26, 2022

> @cconvey do we want tests/python/contrib/test_hexagon/benchmark_hexagon.py to be tested in the CI? If that's the case, you need to rename the file to test_benchmark.py or something starting with test_

Thanks for the info! I was wondering what mechanism decided which scripts got run.

IMHO I don't think it makes sense (yet) for CI to include benchmarking runs, so I suggest we leave this as-is.

@mehrdadh (Member) commented:

@cconvey sounds good! Since this file is under a test directory, can you rename the file and add pytest.skip to the test with some message that shows why we are not running this right now?
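
For reference, a minimal sketch of that suggestion, assuming the renamed file exposes a test entry point named test_benchmark (hypothetical); by default pytest only collects files named test_*.py, which is why the rename matters:

import pytest


def test_benchmark():
    # Skip with an explanatory message rather than running benchmarks in CI.
    pytest.skip("Benchmark runs are not yet enabled in CI.")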

@mehrdadh (Member) commented:

@cconvey we can also address that in a separate PR.

@mehrdadh merged commit a9ece3d into apache:main May 26, 2022
@cconvey deleted the bench-table branch May 26, 2022 16:36